Search Results

Search found 8042 results on 322 pages for 'days'.


  • How to add/remove rows using SlickGrid

    - by lkahtz
    How do I write such functions and bind them to two buttons, like "add row" and "remove row"? The working example code below currently only supports adding a new row by editing the blank bottom line.

    <!DOCTYPE HTML>
    <html>
    <head>
      <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
      <title>SlickGrid example 3: Editing</title>
      <link rel="stylesheet" href="../slick.grid.css" type="text/css"/>
      <link rel="stylesheet" href="../css/smoothness/jquery-ui-1.8.16.custom.css" type="text/css"/>
      <link rel="stylesheet" href="examples.css" type="text/css"/>
      <style>
        .cell-title { font-weight: bold; }
        .cell-effort-driven { text-align: center; }
      </style>
    </head>
    <body>
    <div style="position:relative">
      <div style="width:600px;">
        <div id="myGrid" style="width:100%;height:500px;"></div>
      </div>
      <div class="options-panel">
        <h2>Demonstrates:</h2>
        <ul>
          <li>adding basic keyboard navigation and editing</li>
          <li>custom editors and validators</li>
          <li>auto-edit settings</li>
        </ul>
        <h2>Options:</h2>
        <button onclick="grid.setOptions({autoEdit:true})">Auto-edit ON</button>
        &nbsp;
        <button onclick="grid.setOptions({autoEdit:false})">Auto-edit OFF</button>
      </div>
    </div>

    <script src="../lib/firebugx.js"></script>
    <script src="../lib/jquery-1.7.min.js"></script>
    <script src="../lib/jquery-ui-1.8.16.custom.min.js"></script>
    <script src="../lib/jquery.event.drag-2.0.min.js"></script>
    <script src="../slick.core.js"></script>
    <script src="../plugins/slick.cellrangedecorator.js"></script>
    <script src="../plugins/slick.cellrangeselector.js"></script>
    <script src="../plugins/slick.cellselectionmodel.js"></script>
    <script src="../slick.formatters.js"></script>
    <script src="../slick.editors.js"></script>
    <script src="../slick.grid.js"></script>

    <script>
    function requiredFieldValidator(value) {
      if (value == null || value == undefined || !value.length) {
        return {valid: false, msg: "This is a required field"};
      } else {
        return {valid: true, msg: null};
      }
    }

    var grid;
    var data = [];
    var columns = [
      {id: "title", name: "Title", field: "title", width: 120, cssClass: "cell-title", editor: Slick.Editors.Text, validator: requiredFieldValidator},
      {id: "desc", name: "Description", field: "description", width: 100, editor: Slick.Editors.LongText},
      {id: "duration", name: "Duration", field: "duration", editor: Slick.Editors.Text},
      {id: "%", name: "% Complete", field: "percentComplete", width: 80, resizable: false, formatter: Slick.Formatters.PercentCompleteBar, editor: Slick.Editors.PercentComplete},
      {id: "start", name: "Start", field: "start", minWidth: 60, editor: Slick.Editors.Date},
      {id: "finish", name: "Finish", field: "finish", minWidth: 60, editor: Slick.Editors.Date},
      {id: "effort-driven", name: "Effort Driven", width: 80, minWidth: 20, maxWidth: 80, cssClass: "cell-effort-driven", field: "effortDriven", formatter: Slick.Formatters.Checkmark, editor: Slick.Editors.Checkbox}
    ];
    var options = {
      editable: true,
      enableAddRow: true,
      enableCellNavigation: true,
      asyncEditorLoading: false,
      autoEdit: false
    };

    $(function () {
      for (var i = 0; i < 500; i++) {
        var d = (data[i] = {});
        d["title"] = "Task " + i;
        d["description"] = "This is a sample task description.\n It can be multiline";
        d["duration"] = "5 days";
        d["percentComplete"] = Math.round(Math.random() * 100);
        d["start"] = "01/01/2009";
        d["finish"] = "01/05/2009";
        d["effortDriven"] = (i % 5 == 0);
      }

      grid = new Slick.Grid("#myGrid", data, columns, options);
      grid.setSelectionModel(new Slick.CellSelectionModel());

      grid.onAddNewRow.subscribe(function (e, args) {
        var item = args.item;
        grid.invalidateRow(data.length);
        data.push(item);
        grid.updateRowCount();
        grid.render();
      });
    })
    </script>
    </body>
    </html>
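    A minimal sketch of the two requested handlers, reusing the grid and data variables from the listing above. The button wiring (e.g. <button onclick="addRow()">add row</button>) and the placeholder item fields are assumptions, not part of the original example:

    // Sketch only: "add row" / "remove row" handlers for the example grid above.
    function addRow() {
      data.push({ title: "New task", description: "", duration: "1 day",
                  percentComplete: 0, start: "01/01/2009", finish: "01/05/2009",
                  effortDriven: false });
      grid.invalidateRow(data.length - 1); // repaint the slot the new item occupies
      grid.updateRowCount();
      grid.render();
    }

    function removeRow() {
      var active = grid.getActiveCell();               // remove the focused row,
      var idx = active ? active.row : data.length - 1; // else fall back to the last row
      if (idx < 0 || idx >= data.length) { return; }
      data.splice(idx, 1);
      grid.invalidateAllRows(); // row indexes after idx have shifted
      grid.updateRowCount();
      grid.render();
    }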


  • Returning Json object from controller action to jQuery

    - by PsychoCoder
    I'm attempting to get this working properly (2 days now). I'm working on a log in where I'm calling the controller action from jQuery, passing it a JSON object (utilizing json2.js) and returning a Json object from the controller. I'm able to call the action fine, but instead of being able to put the response where I want it it just opens a new window with this printed on the screen: {"Message":"Invalid username/password combination"} And the URL looks like http://localhost:13719/Account/LogOn so instead of calling the action and not reloading the page it's taking the user to the controller, which isn't good. So now for some code, first the controller code [HttpPost] public ActionResult LogOn(LogOnModel model, string returnUrl = "") { if (ModelState.IsValid) { var login = ObjectFactory.GetInstance<IRepository<PhotographerLogin>>(); var user = login.FindOne(x => x.Login == model.Username && x.Pwd == model.Password); if (user == null) return Json(new FailedLoginViewModel { Message = "Invalid username/password combination" }); else { if (!string.IsNullOrEmpty(returnUrl)) return Redirect(returnUrl); else return RedirectToAction("Index", "Home"); } } return RedirectToAction("Index", "Home"); } And the jQuery code $("#signin_submit").click(function () { var login = getLogin(); $.ajax({ type: "POST", url: "../Account/LogOn", data: JSON.stringify(login), dataType: 'json', contentType: 'application/json; charset=utf-8', error: function (xhr) { $("#message").text(xhr.statusText); }, success: function (result) { } }); }); function getLogin() { var un = $("#username").val(); var pwd = $("#password").val(); var rememberMe = $("#rememberme").val(); return (un == "") ? null : { Username: un, Password: pwd, RememberMe: rememberMe }; } In case you need to see the actual login form here that is as well <fieldset id="signin_menu"> <div> <span id="message"></span> </div> <% Html.EnableClientValidation(); %> <% using (Html.BeginForm("LogOn", "Account", FormMethod.Post, new { @id = "signin" })) {%> <% ViewContext.FormContext.ValidationSummaryId = "valLogOnContainer"; %> <%= Html.LabelFor(m => m.Username) %> <%= Html.TextBoxFor(m => m.Username, new { @class = "inputbox", @tabindex = "4", @id = "username" })%><%= Html.ValidationMessageFor(m => m.Username, "*")%> <p> <%= Html.LabelFor(m=>m.Password) %> <%= Html.PasswordFor(m => m.Password, new { @class = "inputbox", @tabindex = "5", @id = "password" })%><%= Html.ValidationMessageFor(m => m.Password, "*")%> </p> <p class="remember"> <input id="signin_submit" value="Sign in" tabindex="6" type="submit"/> <%= Html.CheckBoxFor(m => m.RememberMe, new { @class = "inputbox", @tabindex = "7", @id = "rememberme" })%> <%= Html.LabelFor(m => m.RememberMe) %> <p class="forgot"> <a href="#" id="forgot_password_link" title="Click here to reset your password.">Forgot your password?</a> </p> <p class="forgot-username"> <a href="#" id="forgot_username_link" title="Fogot your login name? We can help with that">Forgot your username?</a> </p> </p> <%= Html.ValidationSummaryJQuery("Please fix the following errors.", new Dictionary<string, object> { { "id", "valLogOnContainer" } })%> <% } %> </fieldset> The login form is loaded on the main page with <% Html.RenderPartial("LogonControl");%> Not sure if that has any bearing on this or not but thought I'd mention it. EDIT: The login form is loaded similar to the Twitter login, click a link and the form loads with the help of jQuery & CSS
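    One likely culprit, offered as an assumption rather than a confirmed diagnosis: #signin_submit is a submit button inside the form, so after the click handler fires the browser still performs the normal form post, which is exactly what navigating to /Account/LogOn and printing the raw JSON looks like. A minimal sketch that cancels the default submit and displays the returned message:

    $("#signin_submit").click(function (e) {
        e.preventDefault(); // stop the native form post; only the $.ajax call
                            // should reach the controller
        var login = getLogin();
        if (login === null) { return; }
        $.ajax({
            type: "POST",
            url: "/Account/LogOn",
            data: JSON.stringify(login),
            dataType: "json",
            contentType: "application/json; charset=utf-8",
            success: function (result) {
                // the controller returns { Message: "..." } on failure; a success
                // case would also need a JSON payload, since a RedirectToAction
                // result is of no use to an AJAX caller
                if (result && result.Message) {
                    $("#message").text(result.Message);
                }
            },
            error: function (xhr) {
                $("#message").text(xhr.statusText);
            }
        });
    });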


  • JTabbedPane: only first tab is drawn, the second is always empty (newbie Q)

    - by paul
    I created a very simple JTabbedPane by first creating an empty JTabbedPane object, then 2 JPanels that I later add. Each JPanel is holding a object that extends JButton and implements MouseListener. Each of these holds a different image loaded from a file; the image is held locally as a buffered image and as an image icon, etc., all of which works great. The point of all that is to allow resizing of the image when the button is resized (using getscaledinstance()), because the panel is resized, because the JTabbedPane is resized, etc., within the JFrame that holds everything. I override paintComponent() to accomplish this in the class that extends JButton. I am using MigLayout Manager, and all is well on that front controlling layout constraints, growing, filling, initial sizes, preferred sizes, etc. The images the buttons hold are of different sizes and proportions, but this caused no trouble before. Up until 2 days ago everything worked fairly well. I made some changes trying to tweak some resizing issues as I was picking up MigLayout manager. At the time I was playing around with setting various min, max, and preferred sizes using the methods provided for by the components, not the layout manager. I also fooled a bit with pack(), validate(), visible(), opaque() etc., and yes I read the article about Swing and AWT painting here: http://java.sun.com/products/jfc/tsc/articles/painting/ , and I switched to relying more and more on MigLayout. On an unrelated note, it appears JFrame's do not honor maxsize? Somehow, today, with and without using any of these methods provided by swing, with or without using MigLayout manager to handle some of these matters instead, I now have a JTabbedPane that correctly displays the FIRST JPanel I add, but NOT THE SECOND JPanel--which, while present as a tab--does not show when selected. I have switched the order of which panel is added first, and this still holds true regardless of which JPanel I add first, telling me the JPanels are ok, and the problem is most likely in the JTabbedPane. I click on the second tab, the JTabbedPane switches, but I have what appears to be a blank button in the JPanel. A few console system-out statements reveal the following: a) that the second panel and its button are constructed b) no mouse events are being captured when I click on where the second panel and button should reside, as if it didn't exist at that point; c) when I switch to the second tab, the overrided paintComponent() method of the button within that second JPanel is never called, so it is in fact never being painted despite the tab in which it resides becoming visible; d) the JTabbpedPane getComponentCount() returns a correct value of 2 after adding the 2nd panel; e) MigLayout manager actually rocks, but I digress... I cannot now revert to my older code, and despite my best efforts to undo whatever changes caused this, I cannot fix my new problem. I've commented out everything but the most essential calls: constructors for each object--with MigLayout; add() for placing the buttons on the panels using string-arguments appropriate for MigLayout; add() for placing the panels on the JTabbedPane, also with MigLayout string arguments; setting the default op on close for the JFrame; and setting the JFrame visible. This means I do not fiddle with optimization settings, double buffering settings, opaque settings, but leave them as default, and still, no fix; the second panel will not show itself. 
Each panel, I should add, when it is the first to be loaded, works fine, again re-affirming that the panels and buttons are themselves ok. Here is part of what I am doing: //Note: BuildaButton is a class that merely constructs my instances File f = new File("/foo.jpg"); button1 = new BuildaButton().BuildaButton(f).buildfoo1Button(); f = new File("/foo2.jpg"); button2 = new BuildaButton().BuildaButton(f).buildfoo2Button(); MigLayout ml = new MigLayout("wrap 1", "[fill, grow]0[fill, grow]", "[fill, grow]0[fill, grow]"); MigLayout ml2 = new MigLayout("wrap 2", "[fill, grow]5[fill, grow]", "[fill, grow]0[fill, grow]"); foo1panel = new JPanel(ml); foo1panel.add(button1, "w 234:945:, h 200:807:"); foo2panel = new JPanel(ml); foo2panel.add(button2, "w 186:752:, h 200:807:"); tabs.add("foo1", foo1panel); tabs.add("foo2", foo2panel); System.out.println("contents of tabs: " + tabs.getComponentCount() + " elements"); mainframe.setLayout(ml2); mainframe.setMinimumSize(new Dimension(850,800)); mainframe.add(tabs, "w 600:800:, h 780:780:"); //controlpanel is a still blank jpanel that holds nothing--it is a space holder for now & will be utilized mainframe.add(controlpanel, "w 200:200:200, h 780:780:"); mainframe.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); mainframe.setVisible(true); Thank you in advance for any help you can give.


  • (C++) Linking with namespaces causes duplicate symbol error

    - by user577072
    Hello. For the past few days, I have been trying to figure out how to link the files for a CLI gaming project I have been working on. There are two halves of the project, the Client and the Server code. The client needs two libraries I've made. The first is a general purpose game board. This is split between GameEngine.h and GameEngine.cpp. The header file looks something like this namespace gfdGaming { // struct sqr_size { // Index x; // Index y; // }; typedef struct { Index x, y; } sqr_size; const sqr_size sPos = {1, 1}; sqr_size sqr(Index x, Index y); sqr_size ePos; class board { // Prototypes / declarations for the class } } And the CPP file is just giving everything content #include "GameEngine.h" type gfdGaming::board::functions The client also has game-specific code (in this case, TicTacToe) split into declarations and definitions (TTT.h, Client.cpp). TTT.h is basically #include "GameEngine.h" #define TTTtar "localhost" #define TTTport 2886 using namespace gfdGaming; void* turnHandler(void*); namespace nsTicTacToe { GFDCON gfd; const char X = 'X'; const char O = 'O'; string MPhostname, mySID; board TTTboard; bool PlayerIsX = true, isMyTurn; char Player = X, Player2 = O; int recon(string* datHolder = NULL, bool force = false); void initMP(bool create = false, string hn = TTTtar); void init(); bool isTie(); int turnPlayer(Index loc, char lSym = Player); bool checkWin(char sym = Player); int mainloop(); int mainloopMP(); }; // NS I made the decision to put this in a namespace to group it instead of a class because there are some parts that would not work well in OOP, and it's much easier to implement later on. I have had trouble linking the client in the past, but this setup seems to work. My server is also split into two files, Server.h and Server.cpp. 
Server.h contains exactly: #include "../TicTacToe/TTT.h" // Server needs a full copy of TicTacToe code class TTTserv; struct TTTachievement_requirement { Index id; Index loc; bool inUse; }; struct TTTachievement_t { Index id; bool achieved; bool AND, inSameGame; bool inUse; bool (*lHandler)(TTTserv*); char mustBeSym; int mustBePlayer; string name, description; TTTachievement_requirement steps[safearray(8*8)]; }; class achievement_core_t : public GfdOogleTech { public: // May be shifted to private TTTachievement_t list[safearray(8*8)]; public: achievement_core_t(); int insert(string name, string d, bool samegame, bool lAnd, int lSteps[8*8], int mbP=0, char mbS=0); }; struct TTTplayer_t { Index id; bool inUse; string ip, sessionID; char sym; int desc; TTTachievement_t Ding[8*8]; }; struct TTTgame_t { TTTplayer_t Player[safearray(2)]; TTTplayer_t Spectator; achievement_core_t achievement_core; Index cTurn, players; port_t roomLoc; bool inGame, Xused, Oused, newEvent; }; class TTTserv : public gSserver { TTTgame_t Game; TTTplayer_t *cPlayer; port_t conPort; public: achievement_core_t *achiev; thread threads[8]; int parseit(string tDat, string tsIP); Index conCount; int parseit(string tDat, int tlUser, TTTplayer_t** retval); private: int parseProto(string dat, string sIP); int parseProto(string dat, int lUser); int cycleTurn(); void setup(port_t lPort = 0, bool complete = false); public: int newEvent; TTTserv(port_t tlPort = TTTport, bool tcomplete = true); TTTplayer_t* userDC(Index id, Index force = false); int sendToPlayers(string dat, bool asMSG = false); int mainLoop(volatile bool *play); }; // Other void* userHandler(void*); void* handleUser(void*); And in the CPP file I include Server.h and provide main() and the contents of all functions previously declared. Now to the problem at hand I am having issues when linking my server. More specifically, I get a duplicate symbol error for every variable in nsTicTacToe (and possibly in gfdGaming as well). Since I need the TicTacToe functions, I link Client.cpp ( without main() ) when building the server ld: duplicate symbol nsTicTacToe::PlayerIsX in Client.o and Server.o collect2: ld returned 1 exit status Command /Developer/usr/bin/g++-4.2 failed with exit code 1 It stops once a problem is encountered, but if PlayerIsX is removed / changed temporarily than another variable causes an error Essentially, I am looking for any advice on how to better organize my code to hopefully fix these errors. Disclaimers: -I apologize in advance if I provided too much or too little information, as it is my first time posting -I have tried using static and extern to fix these problems, but apparently those are not what I need Thank you to anyone who takes the time to read all of this and respond =)


  • How to cout a vector of structs (that's a class member) using the insertion operator

    - by Julz
    Hi, I'm trying to simply cout the elements of a vector using an overloaded insertion operator. The vector contains Point, which is just a struct holding two doubles. The vector is a private member of a class called Polygon. So here's my Point.h:

    #ifndef POINT_H
    #define POINT_H

    #include <iostream>
    #include <string>
    #include <sstream>

    struct Point
    {
        double x;
        double y;

        // constructor
        Point()
        {
            x = 0.0;
            y = 0.0;
        }

        friend std::istream& operator >> (std::istream& stream, Point &p)
        {
            stream >> std::ws;
            stream >> p.x;
            stream >> p.y;
            return stream;
        }

        friend std::ostream& operator << (std::ostream& stream, Point &p)
        {
            stream << p.x << p.y;
            return stream;
        }
    };

    #endif

    My Polygon.h:

    #ifndef POLYGON_H
    #define POLYGON_H

    #include "Segment.h"
    #include <vector>

    class Polygon
    {
        // extraction operator (needs work)
        friend std::istream & operator >> (std::istream &inStream, Polygon &vertStr);

        // insertion operator
        friend std::ostream & operator << (std::ostream &outStream, const Polygon &vertStr);

    public:
        // Constructor
        Polygon(const std::vector<Point> &theVerts);

        // Default Constructor
        Polygon();

        // Copy Constructor
        Polygon(const Polygon &polyCopy);

        // Accessor/Modifier methods
        inline std::vector<Point> getVector() const { return vertices; }

        // Return number of Vector elements
        inline int sizeOfVect() const { return vertices.size(); }

        // add Point elements to vector
        inline void setVertices(const Point &theVerts) { vertices.push_back(theVerts); }

    private:
        std::vector<Point> vertices;
    };

    #endif

    And Polygon.cc:

    using namespace std;
    #include "Polygon.h"

    // Constructor
    Polygon::Polygon(const vector<Point> &theVerts)
    {
        vertices = theVerts;
    }

    // Default Constructor
    Polygon::Polygon(){}

    istream & operator >> (istream &inStream, Polygon::Polygon &vertStr)
    {
        inStream >> ws;
        inStream >> vertStr;
        return inStream;
    }

    // insertion operator
    ostream & operator << (ostream &outStream, const Polygon::Polygon &vertStr)
    {
        outStream << vertStr.vertices << endl;
        return outStream;
    }

    I figure my Point insertion/extraction is right (I can insert and cout with it), and I figure I should be able to just write

    cout << myPoly[i] << endl;

    in my driver (in a loop), or even

    cout << myPoly[0] << endl;

    without a loop. I've tried all sorts of things: myPoly.at[i], myPoly.vertices[i], etc. I also tried all variations in my insertion function, such as

    outStream << vertStr.vertices[i] << endl;

    within loops, and so on. When I just create a

    vector<Point> myVect;

    in my driver, I can just write

    cout << myVect.at(i) << endl;

    with no problems. I've tried to find an answer for days; I'm really lost, and not through lack of trying! Thanks in advance for any help. Please excuse my lack of comments and formatting; there are also bits and pieces missing, but I really just need an answer to this problem. Thanks again.


  • SDL+OpenGL app: blank screen

    - by Lococo
    I spent the last three days trying to create a small app using SDL + OpenGL. The app itself runs fine -- except it never outputs any graphics; just a black screen. I've condensed it down to a minimal C file, and I'm hoping someone can give me some guidance. I'm running out of ideas. I'm using Windows Vista, MinGW & MSYS. Thanks in advance for any advice! #include <SDL/SDL.h> #include <SDL_opengl.h> size_t sx=600, sy=600, bpp=32; void render(void) { glEnable(GL_DEPTH_TEST); // enable depth testing glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // clear to black glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear color/depth buffer glLoadIdentity(); // reset modelview matrix glColor3b(255, 0, 0); // red glLineWidth(3.0); // line width=3 glRecti(10, 10, sx-10, sy-10); // draw rectangle glFlush(); SDL_GL_SwapBuffers(); } int input(void) { SDL_Event event; while (SDL_PollEvent(&event)) if (event.type == SDL_QUIT || (event.type == SDL_KEYUP && event.key.keysym.sym == SDLK_ESCAPE)) return 0; return 1; } int main(int argc, char *argv[]) { SDL_Surface* surf; if (SDL_Init(SDL_INIT_EVERYTHING) != 0) return 0; if (!(surf = SDL_SetVideoMode(sx, sy, bpp, SDL_HWSURFACE|SDL_DOUBLEBUF))) return 0; glViewport(0, 0, sx, sy); // reset the viewport to new dimensions glMatrixMode(GL_PROJECTION); // set projection matrix to be current glLoadIdentity(); // reset projection matrix glOrtho(0, sx, sy, 0, -1.0, 1.0); // create ortho view glMatrixMode(GL_MODELVIEW); // set modelview matrix glLoadIdentity(); // reset modelview matrix for (;;) { if (!input()) break; render(); SDL_Delay(10); } SDL_FreeSurface(surf); SDL_Quit(); exit(0); } UPDATE: I have a version that works, but it changes orthographic to perspective. I'm not sure why this works and the other doesn't, but for future reference, here's a version that works: #include <SDL/SDL.h> #include <SDL_opengl.h> size_t sx=600, sy=600, bpp=32; void render(void) { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); // set location in front of camera glTranslated(0, 0, -10); glBegin(GL_QUADS); // draw a square glColor3d(1, 0, 0); glVertex3d(-2, 2, 0); glVertex3d( 2, 2, 0); glVertex3d( 2, -2, 0); glVertex3d(-2, -2, 0); glEnd(); glFlush(); SDL_GL_SwapBuffers(); } int input(void) { SDL_Event event; while (SDL_PollEvent(&event)) if (event.type == SDL_QUIT || (event.type == SDL_KEYUP && event.key.keysym.sym == SDLK_ESCAPE)) return 0; return 1; } int main(int argc, char *argv[]) { SDL_Surface *surf; if (SDL_Init(SDL_INIT_EVERYTHING) != 0) return 0; if (!(surf = SDL_SetVideoMode(sx, sy, bpp, SDL_OPENGL))) return 0; glViewport(0, 0, sx, sy); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.0, (float)sx / (float)sy, 1.0, 100.0); glMatrixMode(GL_MODELVIEW); glClearColor(0, 0, 0, 1); glClearDepth(1.0); glEnable(GL_DEPTH_TEST); for (;;) { if (!input()) break; render(); SDL_Delay(10); } SDL_FreeSurface(surf); SDL_Quit(); return 0; }


  • PHP mail pulling form data from previous page

    - by Mark
    So I have a form being filled out on one php like so: <p> <label for="first_name">First Name: </label> <input type="text" size="30" name="first_name" id="first_name"/> </p> <p> <label for="last_name"> Last Name:</label> <input type="text" size="30" name="last_name" id="last_name"/> </p> <p> <label for="address_street">Street:</label> <input type="text" size="30" name="address_street" id="address_street"/> </p> <p> <label for="address_city">City:</label> <input type="text" size="30" name="address_city" id="address_city"/> </p> <p> <label for="address_state">State/Province:</label> <input type="text" size="30" name="address_state" id="address_state"/> </p> <p> <label for="email">Your e-mail: </label> <input type="text" size="30" name="email" id="email"/> </p> <p> <label for="phone">Your phone number: </label> <input type="text" size="30" name="phone" id="phone"/> </p> This is on one php page. From here, it goes to another php which part of it contains script to send a html email to recipient. Problem is, I cannot seem to get it to pull the variables even though I thought I declared them correctly and mixed them into the html correctly. <?php $first_name = $_POST['first_name']; $last_name = $_POST['last_name']; $to = "[email protected], [email protected]"; $subject = "HTML email for ALPS"; $message .= ' <html> <body> <div style="display: inline-block; width: 28%; float: left;"> <img src="http://englishintheusa.com/images/alps-logo.jpg" alt="ALPS Language School" /> </div> <div style="display: inline-block; width: 68%; float: right;"> <p style="color: #4F81BD; font-size: 20px; text-decoration: underline;">Thanks You For Your Inquiry!</p> </div> <div style="padding-left: 20px; color: #666666; font-size: 16.8px; clear: both;"> <p>Dear $first_name $last_name ,</p> </br > <p>Thank you for the following inquiry:</p> </br > </br > </br > </br > <p>****Comment goes here****</p> </br > </br > <p>We will contact you within 2 business days. Our office is open Monday-Friday, 8:30 AM - 5:00 PM Pacific Standard Time.</p> </br > <p>Thank you for your interest!</p> </br > </br > <p>Best Regards,</p> </br > </br > <p>ALPS Language School</p> </br > </br > <p>430 Broadway East</p> <p>Seattle WA 98102</p> <p>Phone: 206.720.6363</p> <p>Fax: 206. 720.1806</p> <p>Email: [email protected]</p> </div> </body> </html>'; // Always set content-type when sending HTML email $headers .= "MIME-Version: 1.0" . "\r\n"; $headers .= "Content-type:text/html;charset=UTF-8" . "\r\n"; // More headers mail($to,$subject,$message,$headers); ?> So you see where I am trying to get first_name and last_name. Well it doesn't come out correctly. Can someone help here?


  • JavaScript: self-made methods not working correctly

    - by hdr
    Hi everyone, I've been trying to figure this out for some days now. I tried to use my own object to more or less replace the global object, to reduce problems with other scripts (userscripts, Chrome extensions, that kind of stuff). However, I can't get things to work for some reason. I tried debugging with JSLint, the developer tools included in Google Chrome, Firebug, and the integrated script debugger in IE8, but there is no error that explains why it doesn't work at all in any browser I tried. I tried IE 8, Google Chrome 10.0.612.3 dev, Firefox 3.6.13, Safari 5.0.3 and Opera 11. So... here is the code.

    HTML:

    <!DOCTYPE HTML>
    <html manifest="c.manifest">
    <head>
      <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
      <meta charset="utf-8">
      <!--[if IE]>
      <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/chrome-frame/1/CFInstall.min.js"></script>
      <script src="https://ie7-js.googlecode.com/svn/version/2.1(beta4)/IE9.js">IE7_PNG_SUFFIX=".png";</script>
      <![endif]-->
      <!--[if lt IE 9]>
      <script src="js/lib/excanvas.js"></script>
      <script src="https://html5shiv.googlecode.com/svn/trunk/html5.js"></script>
      <![endif]-->
      <script src="js/data.js"></script>
    </head>
    <body>
      <div id="controls">
        <button onclick="MYOBJECTis.next()">Next</button>
      </div>
      <div id="textfield"></div>
      <canvas id="game"></canvas>
    </body>
    </html>

    JavaScript:

    var that = window, it = document, k = Math.floor;

    var MYOBJECTis = {
        aresizer: function(){
            // This method looks like it doesn't work.
            // It should automatically resize all the div elements and the body.
            // JSLint: no error (except for "'window' is not defined.", which is normal since
            // JSLint does not recognize references to the global object like "window" or "self",
            // even if you assume a browser)
            // Firebug: no error
            // Chrome dev tools: no error
            // IE8: that.documentElement.clientWidth is null or not an object
            "use strict";
            var a = that.innerWidth || that.documentElement.clientWidth,
                d = that.innerHeight || that.documentElement.clientHeight;
            (function() {
                for(var b = 0, c = it.getElementsByTagName("div"); b < c.length; b++) {
                    c.style.width = k(c.offsetWidth) / 100 * k(a);
                    c.style.height = k(c.offsetHight) / 100 * k(d);
                }
            }());
            (function() {
                var b = it.getElementsByTagName("body");
                b.width = a;
                b.height = d;
            }());
        },
        next: function(){
            // This method looks like it doesn't work.
            // It should change the text inside a div element.
            // JSLint: no error (except for "'window' is not defined.")
            // Firebug: no error
            // Chrome dev tools: no error
            // IE8: no error (except for being painfully slow)
            "use strict";
            var b = it.getElementById("textfield"), a = [], c;
            switch(c !== typeof Number){
                case true:
                    a[1] = ["HI"];
                    c = 0;
                    break;
                case false:
                    return Error;
                default:
                    b.innerHtml = a[c];
                    c += 1;
            }
        }
    };

    // auto events
    (function(){
        "use strict";
        that.onresize = MYOBJECTis.aresizer();
    }());

    If anyone can help me out with this I would very much appreciate it.

    EDIT: To answer the question of what's not working: no method I showed here works at all, and I don't know the cause of the problem. I also tried to clean up some of the code that most likely has nothing to do with it. Additional information is in the comments inside the code.
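    A few concrete bugs stand out in the code above; the sketch below shows minimal fixes rather than a full rewrite. In particular: that.documentElement should be it.documentElement (documentElement lives on document, not window, which matches the IE8 error); getElementsByTagName returns a collection that has to be indexed (the loop reads c.style where c is the whole collection, and the body width is set on a collection rather than on the element); CSS lengths need units; innerHtml should be innerHTML; and the last line assigns the result of calling aresizer() (undefined) instead of the function itself. The simplified next() body is an assumption about the intent (write "HI" into the div), not a faithful port of the original switch logic.

    // Sketch of the likely fixes, keeping the original aliases that/it/k:
    var MYOBJECTis = {
        aresizer: function () {
            "use strict";
            var a = that.innerWidth || it.documentElement.clientWidth,
                d = that.innerHeight || it.documentElement.clientHeight,
                divs = it.getElementsByTagName("div"),
                body = it.getElementsByTagName("body")[0]; // a collection: take [0]
            for (var b = 0; b < divs.length; b++) {        // index each element
                divs[b].style.width  = k(divs[b].offsetWidth)  / 100 * k(a) + "px";
                divs[b].style.height = k(divs[b].offsetHeight) / 100 * k(d) + "px";
            }
            body.style.width  = a + "px";
            body.style.height = d + "px";
        },
        next: function () {
            "use strict";
            it.getElementById("textfield").innerHTML = "HI"; // innerHTML, not innerHtml
        }
    };

    that.onresize = MYOBJECTis.aresizer; // assign the function; calling it here
                                         // stored its return value (undefined)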


  • C programming: hashtable insertion/search

    - by Ricardo Campos
    Hello i have a problem with my hash table its implemented like this: #define HT_SIZE 10 typedef struct _list_t_ { char key[20]; char string[20]; char prevValue[20]; struct _list_t_ *next; } list_t; typedef struct _hash_table_t_ { int size; /* the size of the table */ list_t ***table; /* first */ sem_t lock; } hash_table_t; I have a Linked list with 3 pointers because i want a hash table with several partitions (shards), here is my initialization of my Hash table: hash_table_t *create_hash_table(int NUM_SERVER_THREADS, int num_shards){ hash_table_t *new_table; int j,i; if (HT_SIZE<1) return NULL; /* invalid size for table */ /* Attempt to allocate memory for the hashtable structure */ new_table = (hash_table_t*)malloc(sizeof(hash_table_t)*HT_SIZE); /* Attempt to allocate memory for the table itself */ new_table->table = (list_t ***)calloc(1,sizeof(list_t **)); /* Initialize the elements of the table */ for(j=0; j<num_shards; j++){ new_table->table[j] = (list_t **)calloc(1,sizeof(list_t *)); for(i=0; i<HT_SIZE; i++){ new_table->table[j][i] = (list_t *)calloc(1,sizeof(list_t )); } } /* Set the table's size */ new_table->size = HT_SIZE; sem_init(&new_table->lock, 0, 1); return new_table; } Here is my search function to search in the hash table list_t *lookup_string(hash_table_t *hashtable, char *key, int shardId){ list_t *list ; int hashval = hash(key); /* Go to the correct list based on the hash value and see if key is * in the list. If it is, return return a pointer to the list element. * If it isn't, the item isn't in the table, so return NULL. */ sem_wait(&hashtable->lock); for(list = hashtable->table[shardId][hashval]; list != NULL; list =list->next) { if (strcmp(key, list->key) == 0){ sem_post(&hashtable->lock); return list; } } sem_post(&hashtable->lock); return NULL; } And my insert function: char *add_string(hash_table_t *hashtable, char *str,char *key, int shardId){ list_t *new_list; list_t *current_list; unsigned int hashval = hash(key); /*printf("|%d|%d|%s|\n",hashval,shardId,key);*/ /* Lock for concurrency */ sem_wait(&hashtable->lock); /* Attempt to allocate memory for list */ new_list = (list_t*)malloc(sizeof(list_t)); /* Does item already exist? */ sem_post(&hashtable->lock); current_list = lookup_string(hashtable, key,shardId); sem_wait(&hashtable->lock); /* item already exists, don't insert it again. */ if (current_list != NULL){ strcpy(new_list->prevValue,current_list->string); strcpy(new_list->string,str); strcpy(new_list->key,key); new_list->next = hashtable->table[shardId][hashval]; hashtable->table[shardId][hashval] = new_list; sem_post(&hashtable->lock); return new_list->prevValue; } /* Insert into list */ strcpy(new_list->string,str); strcpy(new_list->key,key); new_list->next = hashtable->table[shardId][hashval]; hashtable->table[shardId][hashval] = new_list; /* Unlock */ sem_post(&hashtable->lock); return new_list->prevValue; } My main class runs some of tests by executing the insertion / reading / delete from the elements of the hash table the problem is when i have more than 4 partitions/shards the tests stop at the first reading element saying it returned the wrong value NULL on the search function, when its less than 4 it runs perfectly well and passes all the tests. 
You can see my main.c in here if you want to give a look: http://hostcode.sourceforge.net/view/1105 My complete Hash table code: http://hostcode.sourceforge.net/view/1103 And other functions where hash table code is executed: .c file http://hostcode.sourceforge.net/view/1104 .h file http://hostcode.sourceforge.net/view/1106 Thank for you time, i appreciate any help you can give to me this is a college important project that I'm trying to solve and I'm stuck here for 2 days.


  • Creating a PHP calendar: problem filling in the dates

    - by udaya
    Hi, I am creating a calendar in which I fill in the year and month like this: <<<<< Year, <<<<< Month. By clicking on the arrow marks, the year and month can be increased or decreased. Now I have to fill in the dates for the selected year and month. I calculated the first day of the month and the last date of the month. The dates must start filling from the first day: say the first day is Thursday, then date 1 must fall on Thursday, and the following days must continue from there until the last date. These are my functions in my controller:

    function phpcal()
    {
        $month = 04;
        $day = 01;
        $year = 2010;
        // here I am calculating the first day of the month
        echo date("D", mktime(0,0,0,$month,$day,$year));
        // here I am calculating the last date of the month
        echo '<br>lastdate'.date("t", strtotime($year . "-" . $month . "-01"));
        //echo '<br>'.$date_end = $this->lastOfMonth();
        $this->load->view('phpcal');
    }

    function firstOfMonth($m1,$y1)
    {
        return date("m/d/Y", strtotime($m1.'/01/'.$y1.' 00:00:00'));
    }

    function lastOfMonth()
    {
        return date("m/d/Y", strtotime('-1 second',strtotime('+1 month',strtotime(date('m').'/01/'.date('Y').' 00:00:00'))));
    }

    function phpcalview()
    {
        $year = $this->input->post('yearvv');
        $data['year'] = $this->adminmodel->selectyear();
        $data['date'] = $this->adminmodel->selectmonth();
        //print_r($data['date']);
        $this->load->view('phpcal',$data);
    }

    This is my view page:

    <table cellpadding="2" cellspacing="0" border="1" bgcolor="#CCFFCC" align="center" class="table_Style_Border">
    <? if(isset($date)) {
         foreach($date as $row) { ?>
    <tr>
      <td><?= $row['dbDate1'];?></td>
      <td><?= $row['dbDate2'];?></td>
      <td><?= $row['dbDate3'];?></td>
      <td><?= $row['dbDate4'];?></td>
      <td><?= $row['dbDate5'];?></td>
      <td><?= $row['dbDate6'];?></td>
      <td><?= $row['dbDate7'];?></td>
    </tr>
    <tr bgcolor="#FFFFFF">
      <td><?= $row['dbDate8'];?></td>
      <td><?= $row['dbDate9'];?></td>
      <td><?= $row['dbDate10'];?></td>
      <td><?= $row['dbDate11'];?></td>
      <td><?= $row['dbDate12'];?></td>
      <td><?= $row['dbDate13'];?></td>
      <td><?= $row['dbDate14'];?></td>
    </tr>
    <tr>
      <td><?= $row['dbDate15'];?></td>
      <td><?= $row['dbDate16'];?></td>
      <td><?= $row['dbDate17'];?></td>
      <td><?= $row['dbDate18'];?></td>
      <td><?= $row['dbDate19'];?></td>
      <td><?= $row['dbDate20'];?></td>
      <td><?= $row['dbDate21'];?></td>
    </tr>
    <tr bgcolor="#FFFFFF">
      <td><?= $row['dbDate22'];?></td>
      <td><?= $row['dbDate23'];?></td>
      <td><?= $row['dbDate24'];?></td>
      <td><?= $row['dbDate25'];?></td>
      <td><?= $row['dbDate26'];?></td>
      <td><?= $row['dbDate27'];?></td>
      <td><?= $row['dbDate28'];?></td>
    </tr>
    <tr>
      <td><?= $row['dbDate29'];?></td>
      <td><?= $row['dbDate30'];?></td>
      <td><?= $row['dbDate31'];?></td>
      <td><?= $row['dbDate1'];?></td>
      <td><?= $row['dbDate1'];?></td>
      <td><?= $row['dbDate1'];?></td>
      <td><?= $row['dbDate1'];?></td>
    </tr>
    <? }} ?>
    </table>

    How can I insert the dates, starting from the day I have calculated in the phpcal function?


  • Re-opening closed dialog puts it in the start position after moving it

    - by semmelbroesel
    I'm using multiple instances of the JQuery-UI dialog along with draggable and resizable. Whenever I drag or resize one of the dialog boxes, the position and dimensions of the current box are saved to a database and then loaded the next time the page opens. This is working well. However, when I close a dialog box and re-open it using a button, jQuery sets the position of the box back to its original location in the center of the screen. Furthermore, I use a show effect to slide the box off to the left on closing and in from the left on re-opening. I found two ways to update its position when the slide in animation is done, however, it still slides into the center of the screen, and I have yet to find a way to get it to slide in towards the location it is supposed to have. Here are the parts of the code that play a part in this: $('.box').dialog({ closeOnEscape: false, hide: { effect: "drop", direction: 'left' }, beforeClose: function (evt, ui){ var $this = $(this); SavePos($this); // saves the dimensions to db }, dragStop: function() { var $this = $(this); SavePos($this); }, resizeStop: function() { var $this = $(this); SavePos($this); }, open: function() { var $this = $(this); $this.dialog('option', { show: { effect: "drop", direction: 'left'} } ); if (init) // meaning only load this code when the page has finished initializing { // tried it both ways - set the position before and after the effect - no success UpdatePos($this.attr('id')); // I tried this section with promise() and effect / complete - I found no difference $this.parent().promise().done(function() { UpdatePos($this.attr('id')); }); } } }); function UpdatePos(key) { // boxpos is an object holding the position for each box by the box's id var $this = $('#' + key); //console.log('updating pos for box ' + boxid); if ($this && $this.hasClass('ui-dialog-content')) { //console.log($this.dialog('widget').css('left')); $this.dialog('option', { width: boxpos[key].width, height: boxpos[key].height }); $this.dialog('widget').css({ left: boxpos[key].left + 'px', top: boxpos[key].top + 'px' }); //console.log('finished updating pos'); //console.log($this.dialog('widget').css('left')); } } The button that re-opens the box has this code on it to make that happen: var $box = $('#boxid'); if ($box) { if ($box.dialog('isOpen')) { $box.dialog('moveToTop'); } else { $box.dialog("open") } } I don't know what jQuery-UI does to the box as it hides it (other than display:none) or to make it slide in, so maybe there's something I'm missing here that might help... Basically, I need JQuery to remember the box' position and put the box back into that location when it is re-opened. It took me days to make it this far, but this is one obstacle I have yet to overcome. Maybe there's a different way I can re-open the box? Thanks! EDIT: Forgot - this issue ONLY happens when I use my UpdatePos function to set the location of a box (i.e. on page load). When I drag a box with my mouse, close it, and re-open it, everything works. So I'm guessing there's one more storage location for the box' position that I'm missing here... 
EDIT2: After more messing with it, my code for debugging now looks like this: open: function() { var $this = $(this); console.log('box open'); console.log($this.dialog('widget').position()); // { top=0, left=-78.5} console.log($this.dialog('widget').css('left')); $this.dialog('option', { show: { effect: "drop", direction: 'left'} } ); if (init) { UpdatePos($this.attr('id')); $this.parent().promise().done(function() { console.log($this.dialog('widget').position()); // { top=313, left=641.5} console.log($this.dialog('widget').css('left')); UpdatePos($this.attr('id')); console.log($this.dialog('widget').position()); // { top=121, left=107} console.log($this.dialog('widget').css('left')); }); } The results I'm getting are: box open Object { top=0, left=-78.5} -78.5px Object { top=313, left=641.5} 641.5px Object { top=121, left=107} 107px So looks to me as if the widget is being moved off screen (left=-78.5) and then moved for the animation, and then my code moves it into the location that it should be in (121/107). The position() results for $box.position() or $box.dialog().position() do not change during this debugging section. Maybe this will help someone here - I'm still out of ideas here... ... and I just discovered that when I drag the item around myself, then close and re-open it, it is very unpredictable. Sometimes, it will end up in the correct location horizontally, but not vertically. Sometimes, it will end up back in the center of the screen...
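    One approach worth trying, sketched under the assumption that boxpos is the same id-keyed store used above: re-apply the saved geometry through the dialog's own position option just before calling open, so that re-opening targets the stored location rather than the centered default (the [left, top] array form takes pixel offsets in this generation of jQuery UI).

    // Sketch only: restore saved geometry, then open.
    function openBox(key) {
        var $box = $("#" + key),
            pos = boxpos[key]; // same id-keyed store as in the question
        if (!$box.length) { return; }
        if ($box.dialog("isOpen")) {
            $box.dialog("moveToTop");
            return;
        }
        if (pos) {
            $box.dialog("option", {
                width: pos.width,
                height: pos.height,
                position: [pos.left, pos.top] // [left, top] in pixels
            });
        }
        $box.dialog("open"); // the drop effect should now land on the saved spot
    }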


  • High tweet status IDs causing "failed to open stream" errors?

    - by escarp
    Erg. Starting in the past few days high tweet IDs (at least, it appears it's ID related, but I suppose it could be some recent change in the api returns) are breaking my code. At first I tried passing the ID as a string instead of an integer to this function, and I thought this worked, but in reality it was just the process of uploading the file from my end. In short, a php script generates these function calls, and when it does so, they fail. If I download the php file the call is generated into, delete the server copy and re-upload the exact same file without changing it, it works fine. Does anyone know what could be causing this behavior? Below is what I suspect to be the most important part of the individual files that are pulling the errors. Each of the files is named for a status ID (e.g. the below file is named 12058543656.php) <?php require "singlePost.php"; SinglePost(12058543656) ?> Here's the code that writes the above files: $postFileName = $single_post_id.".php"; if(!file_exists($postFileName)){ $created_at_full = date("l, F jS, Y", strtotime($postRow[postdate])-(18000)); $postFileHandle = fopen($postFileName, 'w+'); fwrite($postFileHandle, '<html> <head> <title><?php $thisTITLE = "escarp | A brief poem or short story by '.$authorname.' on '.$created_at_full.'"; echo $thisTITLE;?></title><META NAME="Description" CONTENT="This brief poem or short story, by '.$authorname.', was published on '.$created_at_full.'"> <?php include("head.php");?> To receive other poems or short stories like this one from <a href=http://twitter.com/escarp>escarp</a> on your cellphone, <a href=http://twitter.com/signup>create</a> and/or <a href=http://twitter.com/devices>associate</a> a Twitter account with your cellphone</a>, follow <a href=http://twitter.com/escarp>us</a>, and turn device updates on. <pre><?php require "singlePost.php"; SinglePost("'.$single_post_id.'") ?> </div></div></pre><?php include("foot.php");?> </body> </html>'); fclose($postFileHandle);} $postcounter++; } I can post more if you don't see anything here, but there are several files involved and I'm trying to avoid dumping tons of irrelevant code. Error: Warning: include(head.php) [function.include]: failed to open stream: No such file or directory in /f2/escarp/public/12177797583.php on line 4 Warning: include(head.php) [function.include]: failed to open stream: No such file or directory in /f2/escarp/public/12177797583.php on line 4 Warning: include() [function.include]: Failed opening 'head.php' for inclusion (include_path='.:/nfsn/apps/php5/lib/php/:/nfsn/apps/php/lib/php/') in /f2/escarp/public/12177797583.php on line 4 To receive other poems or short stories like this one from escarp on your cellphone, create and/or associate a Twitter account with your cellphone, follow us, and turn device updates on. 
Warning: require(singlePost.php) [function.require]: failed to open stream: No such file or directory in /f2/escarp/public/12177797583.php on line 7 Warning: require(singlePost.php) [function.require]: failed to open stream: No such file or directory in /f2/escarp/public/12177797583.php on line 7 Fatal error: require() [function.require]: Failed opening required 'singlePost.php' (include_path='.:/nfsn/apps/php5/lib/php/:/nfsn/apps/php/lib/php/') in /f2/escarp/public/12177797583.php on line 7 <?php function SinglePost($statusID) { require "nicetime.php"; $db = sqlite_open("db.escarp"); $updates = sqlite_query($db, "SELECT * FROM posts WHERE postID = '$statusID'"); $row = sqlite_fetch_array($updates, SQLITE_ASSOC); $id = $row[authorID]; $result = sqlite_query($db, "SELECT * FROM authors WHERE authorID = '$id'"); $row5 = sqlite_fetch_array($result, SQLITE_ASSOC); $created_at_full = date("l, F jS, Y", strtotime($row[postdate])-(18000)); $created_at = nicetime($row[postdate]); if($row5[url]==""){ $authorurl = ''; } else{ /*I'm omitting a few pages of output code and associated regex*/ return; } ?>


  • Perl - Reading .txt files line-by-line and using compare function (printing non-matches only once)

    - by Kurt W
    I am really struggling and have spent about two full days on this banging my head against receiving the same result every time I run this perl script. I have a Perl script that connects to a vendor tool and stores data for ~26 different elements within @data. There is a foreach loop for @data that breaks the 26 elements into $e-{'element1'), $e-{'element2'), $e-{'element3'), $e-{'element4'), etc. etc. etc. I am also reading from the .txt files within a directory (line-by-line) and comparing the server names that exist within the text files with what exists in $e-{'element4'}. The Problem: Matches are working perfectly and only printing one line for each of the 26 elements when there is a match, however non-matches are producing one line for every entry within the .txt files (37 in all). So if there are 100 entries (each entry having 26 elements) stored within @data, then there are 100 x 37 entries being printed. So for every non-match in the: if ($e-{'element4'} eq '6' && $_ =~ /$e-{element7}/i) statement below, I am receiving a print out saying that there is not a match. 37 entries for the same identical 26 elements (because there are 37 total entries in all of the .txt files). The Goal: I need to print out only 1 line for each unique entry (a unique entry being $e-{element1} thru $e-{element26}). It is already printing one 1 line for matches, but it is printing out 37 entries when there is not a match. I need to treat matches and non-matches differently. Code: foreach my $e (@data) { # Open the .txt files stored within $basePath and use for comparison: opendir(DIRC, $basePath . "/") || die ("cannot open directory"); my @files=(readdir(DIRC)); my @MPG_assets = grep(/(.*?).txt/, @files); # Loop through each system name found and compare it with the data in SC for a match: foreach(@MPG_assets) { $filename = $_; open (MPGFILES, $basePath . "/" . $filename) || die "canot open the file"; while(<MPGFILES>) { if ($e->{'element4'} eq '6' && $_ =~ /$e->{'element7'}/i) { ## THIS SECTION WORKS PERFECTLY AND ONLY PRINTS MATCHES WHERE $_ ## (which contains the servernames (1 per line) in the .txt files) ## EQUALS $e->{'element7'}. print $e->{'element1'} . "\n"; print $e->{'element2'} . "\n"; print $e->{'element3'} . "\n"; print $e->{'element4'} . "\n"; print $e->{'element5'} . "\n"; # ... print $e->{'element26'} . "\n"; } else { ## **THIS SECTION DOES NOT WORK**. FOR EVERY NON-MATCH, THERE IS A ## LINE PRINTED WITH 26 IDENTICAL ELEMENTS BECAUSE ITS LOOPING THRU ## THE 37 LINES IN THE *.TXT FILES. print $e->{'element1'} . "\n"; print $e->{'element2'} . "\n"; print $e->{'element3'} . "\n"; print $e->{'element4'} . "\n"; print $e->{'element5'} . "\n"; # ... print $e->{'element26'} . "\n"; } # End of 'if ($e->{'element4'} eq..' statement } # End of while loop } # End of 'foreach(@MPG_assets)' } # End of 'foreach my $e (@data)' I think I need something to identical unique elements and define what fields make up a unique element but honestly I have tried everything I know. If you would be so kind to provide actual code fixes, that would be wonderful because I am headed to production with this script quite soon. Also. I am looking for code (ideally) that is very human-readable because I will need to document it so others can understand. Please let me know if you need additional information.


  • Min-ordered binomial heap insertion in Java

    - by Charodd Richardson
    Im writing a java code to make a min-ordered Binomial Heap and I have to Insert and Remove-min. I'm having a very big problem inserting into the Heap. I have been stuck on this for a couple of days now and it is due tomorrow. Whenever I go to insert, It only prints out the item I insert instead of the whole tree (which is in preorder). Such as if I insert 1 it prints (1) and then I go to insert 2 it prints out (2) instead of (1(2)) It keeps printing out only the number I insert last instead of the whole preordered tree. I would be very grateful if someone could help me with this problem. Thank you so much in advance, Here is my code. public class BHeap { int key; int degree;//The degree(Number of children) BHeap parent, leftmostChild, rightmostChild, rightSibling,root,previous,next; public BHeap(){ key =0; degree=0; parent =null; leftmostChild=null; rightmostChild=null; rightSibling=null; root=null; previous=null; next=null; } public BHeap merge(BHeap x, BHeap y){ BHeap newHeap = new BHeap(); y.rightSibling=x.root; BHeap currentHeap = y; BHeap nextHeap = y.rightSibling; while(currentHeap.rightSibling !=null){ if(currentHeap.degree==nextHeap.degree){ if(currentHeap.key<nextHeap.key){ if(currentHeap.degree ==0){ currentHeap.leftmostChild=nextHeap; currentHeap.rightmostChild=nextHeap; currentHeap.rightSibling=nextHeap.rightSibling; nextHeap.rightSibling=null; nextHeap.parent=currentHeap; currentHeap.degree++; } else{ newHeap = currentHeap; newHeap.rightmostChild.rightSibling=nextHeap; newHeap.rightmostChild=nextHeap; nextHeap.parent=newHeap; newHeap.degree++; nextHeap.rightSibling=null; nextHeap=newHeap.rightSibling; } } else{ if(currentHeap.degree==0){ nextHeap.rightmostChild=currentHeap; nextHeap.rightmostChild.root = nextHeap.rightmostChild;//add nextHeap.leftmostChild=currentHeap; nextHeap.leftmostChild.root = nextHeap.leftmostChild;//add currentHeap.parent=nextHeap; currentHeap.rightSibling=null; currentHeap.root=currentHeap;//add nextHeap.degree++; } else{ newHeap=nextHeap; newHeap.rightmostChild.rightSibling=currentHeap; newHeap.rightmostChild=currentHeap; currentHeap.parent= newHeap; newHeap.degree++; currentHeap=newHeap.rightSibling; currentHeap.rightSibling=null; } } } else{ currentHeap=currentHeap.rightSibling; nextHeap=nextHeap.rightSibling; } } return y; } public void Insert(int x){ /*BHeap newHeap = new BHeap(); newHeap.key=x; if(this.root==null){ this.root=newHeap; return; } else{ this.root=merge(newHeap,this.root); }*/ BHeap newHeap= new BHeap(); newHeap.key=x; if(this.root==null){ this.root=newHeap; } else{ this.root = merge(this,newHeap); }} public void RemoveMin(){ BHeap newHeap = new BHeap(); BHeap child = new BHeap(); newHeap=this; BHeap pos = newHeap.next; while(pos !=null){ if(pos.key<newHeap.key){ newHeap=pos; } pos=pos.rightSibling; } pos=this; BHeap B1 = new BHeap(); if(newHeap.previous!=null){ newHeap.previous.rightSibling=newHeap.rightSibling; B1 =pos.leftmostChild; B1.rightSibling=pos; pos.leftmostChild=pos.rightmostChild.leftmostChild; } else{ newHeap=newHeap.rightSibling; newHeap.previous.rightSibling=newHeap.rightSibling; B1 =pos.leftmostChild; B1.rightSibling=pos; pos.leftmostChild=pos.rightmostChild.leftmostChild; } merge(newHeap,B1); } public void Display(){ System.out.print("("); System.out.print(this.root.key); if(this.leftmostChild != null){ this.leftmostChild.Display(); } System.out.print(")"); if(this.rightSibling!=null){ this.rightSibling.Display(); } } }


  • Microsoft and jQuery

    - by Rick Strahl
    The jQuery JavaScript library has been steadily getting more popular and with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform including now directly from Microsoft. jQuery is a light weight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery on www.jquery.com. For me jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading to do JavaScript development to actually looking forward to implementing client side JavaScript functionality. It has also had a profound impact on my JavaScript skill level for me by seeing how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness has made it easily the most popular JavaScript library available today. As a long time jQuery user, I’ve been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET’s core features rather than relying on the ASP.NET AJAX library. Microsoft and jQuery – making Friends jQuery is an open source project but in the last couple of years Microsoft has really thrown its weight behind supporting this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS) as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in Beta) also uses jQuery for a variety of client side support features including client side validation and we can look forward toward more integration of client side functionality via jQuery in both MVC and WebForms in the future. In other words jQuery is becoming an optional but included component of the ASP.NET platform. PSS support means that support staff will answer jQuery related support questions as part of any support incidents related to ASP.NET which provides some piece of mind to some corporate development shops that require end to end support from Microsoft. In addition to including jQuery and supporting it, Microsoft has also been getting involved in providing development resources for extending jQuery’s functionality via plug-ins. Microsoft’s last version of the Microsoft Ajax Library – which is the successor to the native ASP.NET AJAX Library – included some really cool functionality for client templates, databinding and localization. As it turns out Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted and have been approved for inclusion in the official jQuery plug-in repository and been taken over by the jQuery team for further improvements and maintenance. 
Even more surprising: The jQuery-templates component has actually been approved for inclusion in the next major update of the jQuery core in jQuery V1.5, which means it will become a native feature that doesn’t require additional script files to be loaded. Imagine this – an open source contribution from Microsoft that has been accepted into a major open source project for a core feature improvement. Microsoft has come a long way indeed! What the Microsoft Involvement with jQuery means to you For Microsoft jQuery support is a strategic decision that affects their direction in client side development, but nothing stopped you from using jQuery in your applications prior to Microsoft’s official backing and in fact a large chunk of developers did so readily prior to Microsoft’s announcement. Official support from Microsoft brings a few benefits to developers however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many projects types and a common base for client side functionality that actually uses what most developers are already using. If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery Microsoft is now following along what most in the ASP.NET community had already been doing by using jQuery, which is likely the reason for Microsoft’s shift in direction in the first place. ASP.NET AJAX and the Microsoft AJAX Library weren’t bad technology – there was tons of useful functionality buried in these libraries. However, these libraries never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality that these controls provided for control developers they lacked in useful and easily usable application developer functionality that was easily accessible in day to day client side development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than for the internal support in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit as well as some third party vendors, the Microsoft client libraries were largely ignored by the developer community opening the door for other client side solutions. Microsoft seems to be acknowledging developer choice in this case: Many more developers were going down the jQuery path rather than using the Microsoft built libraries and there seems to be little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos for Microsoft for recognizing this and gracefully changing directions. Note that even though there will be no further development in the Microsoft client libraries they will continue to be supported so if you’re using them in your applications there’s no reason to start running for the exit in a panic and start re-writing everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side so that you can leave existing solutions untouched even as you enhance them with jQuery. 
The Microsoft jQuery Plug-ins – Solid Core Features

One of the most interesting developments in Microsoft's embracing of jQuery is that Microsoft has started contributing to jQuery via the standard mechanism set up for jQuery developers: by submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library, rewrote these components for jQuery, and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team, and that's exactly what happened with the three plug-ins submitted by Microsoft, with the templating plug-in even getting slated to be published as part of the jQuery core in the next major release (1.5).

The following plug-ins are provided by Microsoft:

jQuery Templates – a client side template rendering engine
jQuery Data Link – a client side databinder that can synchronize changes without code
jQuery Globalization – provides formatting and conversion features for dates and numbers

The first two are ports of functionality that was slated for the Microsoft Ajax Library, while the globalization library provides functionality that was already found in the original ASP.NET AJAX library. To me all three plug-ins address a pressing need in client side applications and provide functionality I've previously used in other incarnations, but with more complete implementations. Let's take a close look at these plug-ins.

jQuery Templates
http://api.jquery.com/category/plugins/templates/

Client side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid manually creating markup by building DOM nodes and injecting them individually into the document via code. Rather, you can create markup templates – similar to the way you create classic ASP server markup – and merge data into these templates to render HTML, which you can then inject into the document or use to replace existing content. Output from templates is rendered as a jQuery matched set and can then be easily inserted into the document as needed.

Templating is key to minimizing client side code and reducing repeated rendering logic: a single template can be used in many places for updating and adding content to existing pages. Further, if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to use a single markup template to handle all rendering of each specific HTML section/element. I've used a number of different client rendering template engines with jQuery in the past, including jTemplates (a PHP style templating engine) and a modified version of John Resig's MicroTemplating engine which I built into my own set of libraries because it's such a commonly used feature in my client side applications.

jQuery Templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig's original Micro Template engine, the core of the templating engine generates JavaScript code, which means that templates can include JavaScript expressions. To give you a basic idea of how templates work, imagine I have an application that downloads a set of stock quotes based on a symbol list and then displays them in the document.
To do this you can create an 'item' template that describes how each of the quotes is rendered, placed as a template inside of the document:

<script id="stockTemplate" type="text/x-jquery-tmpl">
    <div id="divStockQuote" class="errordisplay" style="width: 500px;">
        <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
        <div class="label">Last Price:</div><div>${LastPrice}</div>
        <div class="label">Net Change:</div><div>
            {{if NetChange > 0}}
                <b style="color:green">${NetChange}</b>
            {{else}}
                <b style="color:red">${NetChange}</b>
            {{/if}}
        </div>
        <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div>
    </div>
</script>

The 'template' is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions, which reference data from the quote objects returned from an AJAX call on the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide for rudimentary logic inside of your templates, as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find the full set of markup expressions in the documentation.

To load up this data you can use code like the following:

<script type="text/javascript">
    $(document).ready(function () {
        $("#btnGetQuotes").click(GetQuotes);
    });

    function GetQuotes() {
        var symbols = $("#txtSymbols").val().split(",");
        $.ajax({
            url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes",
            data: JSON.stringify({ symbols: symbols }), // parameter map
            type: "POST", // data has to be POSTed
            contentType: "application/json",
            timeout: 10000,
            dataType: "json",
            success: function (result) {
                var quotes = result.d;
                var jEl = $("#stockTemplate").tmpl(quotes);
                $("#quoteDisplay").empty().append(jEl);
            },
            error: function (xhr, status) {
                alert(status + "\r\n" + xhr.responseText);
            }
        });
    }
</script>

In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects, wrapped in an object whose .d property (in Microsoft service style) holds the actual array of quotes. The template is applied with:

var jEl = $("#stockTemplate").tmpl(quotes);

which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page.

The template is merged against an array in this example. When the data is an array, the template is automatically applied to each array item. If you pass a single data item – like say a single stock quote – the template works exactly the same way but is applied only once. Templates also have access to a $data item, which provides the current data item and information about the template that is currently executing. This makes it possible to keep context within the template itself and also to pass context from a parent template to a child template, which is very powerful.

Templates can be evaluated by using the template selector and calling the .tmpl() function on the jQuery matched set as shown above, or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls.
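Here's a minimal sketch of that server-loaded approach. The templates/stockitem.tmpl URL is hypothetical – it stands in for any endpoint that simply returns raw template markup as text:

function renderQuotesWithRemoteTemplate(quotes) {
    // fetch raw template markup from the server (hypothetical URL)
    $.get("templates/stockitem.tmpl", function (templateText) {
        // $.tmpl() accepts a template string in place of a selector
        var jEl = $.tmpl(templateText, quotes);
        $("#quoteDisplay").empty().append(jEl);
    });
}

In a real application you'd likely cache the retrieved template string so each render doesn't cost a round trip.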
In short, there are options. The above shows off some of the basics, but there's much more functionality available in the template engine. Check the documentation link for more information and links to additional examples; the plug-in download also comes with a number of examples that demonstrate functionality. jQuery Templates will become a native component in jQuery core 1.5, so it's definitely worthwhile checking out the engine today and getting familiar with this interface.

As much as I'm stoked about templating becoming part of the jQuery core – because it's such an integral part of many applications – there are also a couple of shortcomings in the current incarnation:

Lack of Error Handling – Currently if you embed an expression that is invalid, it's simply not rendered. There's no error rendered into the template, nor do the various template functions throw errors, which leaves finding bugs as a runtime exercise. I would like some mechanism – optional if possible – to get error info about what is failing in a template when it's rendered.

No String Output – Templates are always rendered into a jQuery matched set, and there's no way that I can see to render directly to a string. String output can be useful for debugging as well as opening up templating for creating non-HTML string output.

Limited JavaScript Access – Unlike John Resig's original MicroTemplating engine, which was entirely based on JavaScript code generation, these templates are limited to a few structured commands that can 'execute'. There's no arbitrary code execution inside of the template, which means you're limited to calling expressions available in global objects or on the data item passed in. This may or may not be a big deal depending on the complexity of your template logic.

Error handling has been discussed quite a bit, and it's likely there will be some solution to that particular issue by the time jQuery Templates ships. The others are relatively minor issues, but something to think about anyway.

jQuery Data Link
http://api.jquery.com/category/plugins/data-link/

jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object's properties. The typical scenario is linking a textbox to a property of an object: the object is updated when the text in the textbox changes, and the textbox is updated when the value in the object or the entire object changes. The plug-in also supports converter functions that provide the conversion logic from string to some other value – typically necessary for mapping textbox string input to, say, a number property – and potentially apply additional formatting and calculations.

In theory this sounds great; in reality, however, this plug-in has some serious usability issues. Using the plug-in you can do things like the following to bind data:

person = { firstName: "rick", lastName: "strahl" };

$(document).ready(function () {
    // provide for two-way linking of inputs
    $("form").link(person);

    // bind to non-input elements explicitly
    $("#objFirst").link(person, {
        firstName: {
            name: "objFirst",
            convertBack: function (value, source, target) {
                $(target).text(value);
            }
        }
    });
    $("#objLast").link(person, {
        lastName: {
            name: "objLast",
            convertBack: function (value, source, target) {
                $(target).text(value);
            }
        }
    });
});

This code hooks up two-way linking between a couple of textboxes on the page and the person object.
The first line in the .ready() handler provides mapping of the object to form fields with the same names as properties on the object. Note that .link() does NOT bind item values into the textboxes when you call .link() – changes are mapped only when values change and you move out of the field. Strike one. The two following commands allow manual binding of values to specific DOM elements, which is effectively a one-way bind: you specify the object and then an explicit mapping where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two.

You can also detect changes to the underlying object and cause updates to the bound input elements. Unfortunately the syntax to do this is not very natural, as you have to rely on the jQuery data object. Updating an object's property with change notification looks like this:

function updateFirstName() {
    $(person).data("firstName", person.firstName + " (code updated)");
}

This works fine in causing any linked fields to be updated – in the bindings above, both the firstName input field and the objFirst DOM element get updated. But the syntax requires a jQuery .data() call for each property change to ensure the changes are tracked properly. Really? Sure, you're binding through multiple layers of abstraction now, but how is that better than just manually assigning values? The code savings (if any) are going to be minimal.

As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn't help much towards that goal in its current incarnation. While you can bind values, the 'binder' is too limited to be really useful. If initial values can't be assigned from the mappings, you're going to end up duplicating work loading the data using some other mechanism. There's no easy way to re-bind a different object altogether, since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that's fairly verbose and frankly may be more voluminous than what you might write by hand for manual binding and unbinding.

Two-way binding can be very useful, but it has to be easy and, most importantly, natural. If it's more work to hook up a binding than to write a couple of lines that bind and unbind manually, this sort of thing helps very little in most scenarios. In talking to some of the developers, the feature set for Data Link is not complete and they are still soliciting input for features and functionality. If you have ideas on how to make this feature more useful, get involved and post your recommendations.

As it stands, it looks to me like this component needs a lot of love to become useful. For it to really provide value, bindings need to be refreshable easily and work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object, albeit one that relies on the jQuery library, would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft created components this is by far the least useful and least polished implementation at this point.
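One practical way to blunt the missing-initial-bind complaint is to seed the form fields from the object yourself before wiring up the links. This is just a minimal sketch that follows the plug-in's own convention of matching input name attributes to property names – seedForm is a hypothetical helper, not part of the plug-in:

function seedForm(formSelector, obj) {
    // copy each property value into the same-named input field
    for (var prop in obj) {
        if (obj.hasOwnProperty(prop)) {
            $(formSelector).find(":input[name='" + prop + "']").val(obj[prop]);
        }
    }
}

// populate the fields first, then hook up two-way linking
seedForm("form", person);
$("form").link(person);

It works, but having to write this kind of glue yourself underscores the point: the initial bind should really come from the plug-in.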
jQuery Globalization
http://github.com/jquery/jquery-global

Globalization in JavaScript applications often gets short shrift, and part of the reason is that JavaScript natively has little support for formatting and parsing numbers and dates. There are a number of JavaScript libraries out there that provide some support for globalization, but most are limited to a particular portion of it. As .NET developers we're fairly spoiled by the richness of the APIs provided in the framework, and when dealing with client development one really notices the lack of these features.

Even if you don't need to localize your application, the globalization plug-in helps with basic tasks: formatting and parsing dates and time values. Dates in particular are problematic in JavaScript, as there are no formatters whatsoever except the .toString() method, which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. You can write code like the following, for example, to format numbers and dates:

var date = new Date();
var output = $.format(date, "MMM. dd, yy") + "\r\n" +
             $.format(date, "d") + "\r\n" +        // 10/25/2010
             $.format(1222.32213, "N2") + "\r\n" +
             $.format(1222.33, "c") + "\r\n";
alert(output);

This becomes even more useful if you combine it with templates, which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded, you can create template expressions that use the $.format function. Here's the template I used earlier for the stock quote again, with a couple of formats applied:

<script id="stockTemplate" type="text/x-jquery-tmpl">
    <div id="divStockQuote" class="errordisplay" style="width: 500px;">
        <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
        <div class="label">Last Price:</div>
        <div>${$.format(LastPrice, "N2")}</div>
        <div class="label">Net Change:</div><div>
            {{if NetChange > 0}}
                <b style="color:green">${NetChange}</b>
            {{else}}
                <b style="color:red">${NetChange}</b>
            {{/if}}
        </div>
        <div class="label">Last Update:</div>
        <div>${$.format(LastQuoteTime, "MMM dd, yyyy")}</div>
    </div>
</script>

There are also parsing methods that can easily parse dates and numbers from strings:

alert($.parseDate("25.10.2010"));
alert($.parseInt("12.222")); // de-DE uses . for thousands separators

As you can see, culture specific options are taken into account when parsing. The globalization plug-in provides rich support for a variety of locales:

Get a list of all available cultures
Query cultures for culture items (like currency symbol, separators etc.)
Localized string names for all calendar related items (days of week, months)
Generated off of .NET's supported locales

In short, you get much of the same functionality that you might already be using in .NET on the server side. The plug-in includes a huge number of locales: a Globalization.all.min.js file that contains the defaults for every locale, as well as small locale specific script files that define each locale's settings. It's highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references only to those languages you explicitly care about.
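Formatting and parsing follow the currently selected culture. Here's a minimal sketch of switching cultures – I'm assuming the prototype's $.preferCulture() call and that the de-DE culture script has been referenced alongside the core plug-in:

// select the culture used by subsequent $.format()/parse calls
$.preferCulture("de-DE");

// should yield German separators and currency symbol, e.g. "1.222,33 €"
var price = $.format(1222.33, "c");

// long date pattern with German month and day names
var day = $.format(new Date(), "D");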
Overall this plug-in is a welcome helper. Even if you use it with a single locale (like en-US) and do no other localization, you'll gain solid support for number and date formatting, which is a vital feature of many applications.

Changes for Microsoft

It's good to see Microsoft coming out of its shell and away from the 'not-built-here' mentality that has been so pervasive in the past. It's especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft's own internal efforts in terms of design, usage model and… popularity. It's great to see that Microsoft is paying attention to what customers prefer to use and supporting the customer sentiment – even if it meant drastically changing course and moving into a more open and sharing environment in the process. The additional jQuery support introduced in the last two years has certainly made life easier for many developers on the ASP.NET platform.

It's also nice to see Microsoft submitting proposals through the standard jQuery plug-in process and getting accepted for various very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many, especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in jQuery  ASP.NET

    Read the article

  • Why is Java EE 6 better than Spring?

    - by arungupta
Java EE 6 was released over 2 years ago and now there are 14 compliant application servers. In all my talks around the world, a question that is frequently asked is: Why should I use Java EE 6 instead of Spring? There are already several blogs covering that topic:

Java EE wins over Spring by Bill Burke
Why will I use Java EE instead of Spring in new Enterprise Java projects in 2012? by Kai Waehner (more discussion on TSS)
Spring to Java EE migration (Part 1 and 2, 3 and 4 coming as well) by David Heffelfinger
Spring to Java EE - A Migration Experience by Lincoln Baxter
Migrating Spring to Java EE 6 by Bert Ertman and Paul Bakker at NLJUG
Moving from Spring to Java EE 6 - The Age of Frameworks is Over at TSS
Java EE vs Spring Shootout by Rohit Kelapure and Reza Rehman at JavaOne 2011
Java EE 6 and the Ewoks by Murat Yener
Definite excuse to avoid Spring forever - Bert Ertman and Arun Gupta

I will try to share my perspective in this blog. First of all, I'd like to start with a note: Thank you Spring framework for filling the interim gap and providing functionality that is now included in the mainstream Java EE 6 application servers. The Java EE platform has evolved over the years, learning from frameworks like Spring, and provides all the functionality to build an enterprise application. Thank you very much Spring framework!

While Spring was revolutionary in its time and is still very popular and quite mainstream – in the same way Struts was circa 2003 – it really is last generation's framework; some people are even calling it legacy. However my theory is "code is king". So my approach is to build/take a simple Hello World CRUD application in Java EE 6 and Spring and compare the deployable artifacts.

I started looking at the official tutorial Developing a Spring Framework MVC Application Step-by-Step, but it is using the older version 2.5 and I wasn't able to find any updated version for the current 3.1 release. Next, I downloaded Spring Tool Suite and thought that would provide some template samples to get started. At least a quick search did not show any handy tutorials – either video or text-based. So I searched and found a link to their SVN repository at src.springframework.org/svn/spring-samples/.

I tried the "mvc-basic" sample and the generated WAR file was 4.43 MB. While it was named a "basic" sample, it came with 19 different libraries bundled:

./WEB-INF/lib/aopalliance-1.0.jar
./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar
./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar
./WEB-INF/lib/joda-time-1.6.2.jar
./WEB-INF/lib/joda-time-jsptags-1.0.2.jar
./WEB-INF/lib/jstl-1.2.jar
./WEB-INF/lib/log4j-1.2.16.jar
./WEB-INF/lib/slf4j-api-1.6.1.jar
./WEB-INF/lib/slf4j-log4j12-1.6.1.jar
./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar
./WEB-INF/lib/validation-api-1.0.0.GA.jar

And it is not even using any database! The app deployed fine on GlassFish 3.1.2, but the "@Controller Example" link did not work as it was missing the context root. With a bit of tweaking I could deploy the application, and I assume the account got created because no error was displayed in the browser or server log.
Next I generated the WAR for "mvc-ajax" and the 5.1 MB WAR had 20 JARs (1 removed, 2 added):

./WEB-INF/lib/aopalliance-1.0.jar
./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar
./WEB-INF/lib/jackson-core-asl-1.6.4.jar
./WEB-INF/lib/jackson-mapper-asl-1.6.4.jar
./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar
./WEB-INF/lib/joda-time-1.6.2.jar
./WEB-INF/lib/jstl-1.2.jar
./WEB-INF/lib/log4j-1.2.16.jar
./WEB-INF/lib/slf4j-api-1.6.1.jar
./WEB-INF/lib/slf4j-log4j12-1.6.1.jar
./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar
./WEB-INF/lib/validation-api-1.0.0.GA.jar

Two more JARs just for doing Ajax. Anyway, deploying this application gave the following error:

Caused by: java.lang.NoSuchMethodError: org.codehaus.jackson.map.SerializationConfig.<init>(Lorg/codehaus/jackson/map/ClassIntrospector;Lorg/codehaus/jackson/map/AnnotationIntrospector;Lorg/codehaus/jackson/map/introspect/VisibilityChecker;Lorg/codehaus/jackson/map/jsontype/SubtypeResolver;)V
    at org.springframework.samples.mvc.ajax.json.ConversionServiceAwareObjectMapper.<init>(ConversionServiceAwareObjectMapper.java:20)
    at org.springframework.samples.mvc.ajax.json.JacksonConversionServiceConfigurer.postProcessAfterInitialization(JacksonConversionServiceConfigurer.java:40)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:407)

This looks like an incorrect repo in the "pom.xml". Next up is "mvc-showcase"; its 6.49 MB WAR has 28 JARs:

./WEB-INF/lib/aopalliance-1.0.jar
./WEB-INF/lib/aspectjrt-1.6.10.jar
./WEB-INF/lib/commons-fileupload-1.2.2.jar
./WEB-INF/lib/commons-io-2.0.1.jar
./WEB-INF/lib/el-api-2.2.jar
./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar
./WEB-INF/lib/jackson-core-asl-1.8.1.jar
./WEB-INF/lib/jackson-mapper-asl-1.8.1.jar
./WEB-INF/lib/javax.inject-1.jar
./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar
./WEB-INF/lib/jdom-1.0.jar
./WEB-INF/lib/joda-time-1.6.2.jar
./WEB-INF/lib/jstl-api-1.2.jar
./WEB-INF/lib/jstl-impl-1.2.jar
./WEB-INF/lib/log4j-1.2.16.jar
./WEB-INF/lib/rome-1.0.0.jar
./WEB-INF/lib/slf4j-api-1.6.1.jar
./WEB-INF/lib/slf4j-log4j12-1.6.1.jar
./WEB-INF/lib/spring-aop-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-asm-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-beans-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-context-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-context-support-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-core-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-expression-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-web-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-webmvc-3.1.0.RELEASE.jar
./WEB-INF/lib/validation-api-1.0.0.GA.jar

The app at least deployed and showed results this time. But still no database!
Next I tried building "jpetstore" and got the error:

[ERROR] Failed to execute goal on project org.springframework.samples.jpetstore: Could not resolve dependencies for project org.springframework.samples:org.springframework.samples.jpetstore:war:1.0.0-SNAPSHOT: Failed to collect dependencies for [commons-fileupload:commons-fileupload:jar:1.2.1 (compile), org.apache.struts:com.springsource.org.apache.struts:jar:1.2.9 (compile), javax.xml.rpc:com.springsource.javax.xml.rpc:jar:1.1.0 (compile), org.apache.commons:com.springsource.org.apache.commons.dbcp:jar:1.2.2.osgi (compile), commons-io:commons-io:jar:1.3.2 (compile), hsqldb:hsqldb:jar:1.8.0.7 (compile), org.apache.tiles:tiles-core:jar:2.2.0 (compile), org.apache.tiles:tiles-jsp:jar:2.2.0 (compile), org.tuckey:urlrewritefilter:jar:3.1.0 (compile), org.springframework:spring-webmvc:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework:spring-orm:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework:spring-context-support:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework.webflow:spring-js:jar:2.0.7.RELEASE (compile), org.apache.ibatis:com.springsource.com.ibatis:jar:2.3.4.726 (runtime), com.caucho:com.springsource.com.caucho:jar:3.2.1 (compile), org.apache.axis:com.springsource.org.apache.axis:jar:1.4.0 (compile), javax.wsdl:com.springsource.javax.wsdl:jar:1.6.1 (compile), javax.servlet:jstl:jar:1.2 (runtime), org.aspectj:aspectjweaver:jar:1.6.5 (compile), javax.servlet:servlet-api:jar:2.5 (provided), javax.servlet.jsp:jsp-api:jar:2.1 (provided), junit:junit:jar:4.6 (test)]: Failed to read artifact descriptor for org.springframework:spring-webmvc:jar:3.0.0.BUILD-SNAPSHOT: Could not transfer artifact org.springframework:spring-webmvc:pom:3.0.0.BUILD-SNAPSHOT from/to JBoss repository (http://repository.jboss.com/maven2): Access denied to: http://repository.jboss.com/maven2/org/springframework/spring-webmvc/3.0.0.BUILD-SNAPSHOT/spring-webmvc-3.0.0.BUILD-SNAPSHOT.pom

It appears the sample is broken – maybe I was pulling from the wrong repository. It would be great if someone were to point me at a good target to use here. With a 50% hit rate on samples in this repository, I started searching through numerous blogs, most of which have either outdated information (using XML-heavy Spring 2.5), a missing piece of configuration (which is a typical "feature" of Spring), or too much complexity in the sample.

I finally found this blog that worked like a charm. It creates a trivial Spring MVC 3 application using Hibernate and MySQL, performing CRUD operations on a single table in a database using typical Spring technologies. I downloaded the sample code from the blog, deployed it on GlassFish 3.1.2 and could CRUD the "person" entity. The source code for this application can be downloaded here. More details on the application statistics below.

And then I built a similar CRUD application in Java EE 6 using NetBeans wizards in a couple of minutes. The source code for the application can be downloaded here and the WAR here. The Spring Source Tool Suite may also offer similar wizard-driven capabilities, but this blog focuses primarily on comparing the runtimes. The lack of STS tutorials was slightly disappointing as well; NetBeans, however, has tons of text-based and video tutorials and tons of material even by the community. One more bit on the download size of the tools bundle ...
NetBeans 7.1.1 "All" is 211 MB (which includes GlassFish and Tomcat)
Spring Tool Suite 2.9.0 is 347 MB (~65% bigger)

This blog is not about the tooling comparison, so back to the Java EE 6 version of the application ...

In order to run the Java EE version on GlassFish, copy the MySQL Connector/J to the glassfish3/glassfish/domains/domain1/lib/ext directory and create a JDBC connection pool and JDBC resource as:

./bin/asadmin create-jdbc-connection-pool \
    --datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlDataSource \
    --restype javax.sql.DataSource \
    --property portNumber=3306:user=mysql:password=mysql:databaseName=mydatabase \
    myConnectionPool

./bin/asadmin create-jdbc-resource --connectionpoolid myConnectionPool jdbc/myDataSource

I generated WARs for the two projects and the table below highlights some differences between them:

                          Java EE 6               Spring
WAR File Size             0.021030 MB             10.87 MB (~516x)
Number of files           20                      53 (> 2.5x)
Bundled libraries         0                       36
Total size of libraries   0                       12.1 MB
XML files                 3                       5
LoC in XML files          50 (11 + 15 + 24)       129 (27 + 46 + 16 + 11 + 19) (~2.5x)
Total .properties files   1 (Bundle.properties)   2 (spring.properties, log4j.properties)
Cold Deploy               5,339 ms                11,724 ms
Second Deploy             481 ms                  6,261 ms
Third Deploy              528 ms                  5,484 ms
Fourth Deploy             484 ms                  5,576 ms
Runtime memory            ~73 MB                  ~101 MB

Some points worth highlighting from the table ...

516x WAR file, 10x deployment time - With 12.1 MB of libraries (for a very basic application) bundled in your application, the WAR file size and the deployment time will naturally go higher. The WAR file for the Spring-based application is 516x bigger, and the deployment time is double during the first deployment and ~10x during subsequent deployments. The Java EE 6 application is fully portable and will run on any Java EE 6 compliant application server.

36 libraries in the WAR - There are 14 Java EE 6 compliant application servers today. Each of those servers provides all the functionality like transactions, dependency injection, security, persistence, etc. typically required of an enterprise or web application. There is no need to bundle 36 libraries worth 12.1 MB for a trivial CRUD application; the compliant application servers provide all that functionality baked in. Now, you can also deploy these libraries in the container, but then you don't get the "portability" offered by Spring. Does your typical Spring deployment actually do that?

3x LoC in XML - The number of XML files is about 1.6x and the LoC is ~2.5x. So much XML seems circa 2003, when the Java language had no annotations. The XML files could be reduced further, e.g. faces-config.xml can be removed by not providing i18n, but I just want to compare stock applications.

Memory usage - Both applications were deployed on a default GlassFish 3.1.2 installation, and any additional memory consumed as part of deployment/access was attributed to the application. This is by no means scientific, but it at least provides an initial ballpark. This area definitely needs more investigation.

Another table that compares typical Java EE 6 compliant application servers and the custom stack created for a Spring application ...

                       Java EE 6                                     Spring
Web Container          ✓ (built in)                                  53 MB (tcServer 2.6.3 Developer Edition)
Security               ✓ (built in)                                  12 MB (Spring Security 3.1.0)
Persistence            ✓ (built in)                                  6.3 MB (Hibernate 4.1.0, required)
Dependency Injection   ✓ (built in)                                  5.3 MB (Framework)
Web Services           ✓ (built in)                                  796 KB (Spring WS 2.0.4)
Messaging              ✓ (built in)                                  3.4 MB (RabbitMQ Server 2.7.1) + 936 KB (Java client)
OSGi                   ✓ (built in)                                  1.3 MB (Spring OSGi 1.2.1)
Total                  GlassFish and WebLogic (starting at 33 MB)    83.3 MB
There are differentiating factors on both stacks. But most of the functionality like security, persistence, and dependency injection is baked into a Java EE 6 compliant application server, whereas it needs to be individually managed and patched for a Spring application. This very quickly leads to a "stack explosion". The Java EE 6 servers are tested extensively on a variety of platforms in different combinations, whereas a Spring application developer is responsible for testing with different JDKs, operating systems, versions, patches, etc. Oracle has both the leading OSS lightweight server with GlassFish and the leading enterprise Java server with WebLogic Server – both Java EE 6, and both with lightweight deployment options.

The Web Container offered as part of a Java EE 6 application server not only deploys your enterprise Java applications but also provides the operational management, diagnostics, and mission-critical capabilities required by your applications. The Java EE 6 platform also introduced the Web Profile, a subset of the specifications from the entire platform. It is targeted at developers of modern web applications, offering a reasonably complete stack composed of standard APIs, capable out-of-the-box of addressing the needs of a large class of Web applications. As your applications grow, the stack can grow to the full Java EE 6 platform. The GlassFish Server Web Profile, starting at 33 MB (smaller than just the non-standard tcServer), provides most of the functionality typically required by a web application. WebLogic provides battle-tested functionality for high throughput, low latency, enterprise grade web applications. No individual managing or patching – all tested and commercially supported for you!

Note that VMWare does have a server, tcServer, but it is non-standard and not even certified to the level of the standard Web Profile most customers expect these days. Customers who choose it risk proprietary lock-in, since VMWare does not seem to want to formally certify with either the Java EE 6 Enterprise Platform or the Java EE 6 Web Profile – though of course it would be great if they were to join the community and help their customers reduce the risk of deploying on VMWare software.

Some more points to help you choose between Java EE 6 and Spring ...

Freedom to choose container - There are 14 Java EE 6 compliant application servers today, with a variety of open source and commercial offerings. A Java EE 6 application can be deployed on any of those containers. So if you deployed your application on GlassFish today and would like to scale up with your demands, you can deploy the same application to WebLogic. And because of the portability of a Java EE 6 application, you can even take it to a different vendor altogether. Spring requires a runtime, which could be any of these app servers as well. But why use Spring when all the required functionality is already baked into the application server itself? Spring also has a different definition of portability, where they claim to bundle all the libraries in the WAR file and move to any application server – but we saw earlier how bloated that archive can be. The equivalent features in Spring's runtime offerings (mainly tcServer) are not all open source, not as mature, and often require manual assembly.
Vendor choice - The Java EE 6 platform is created using the Java Community Process, where all the big players like Oracle, IBM, RedHat, and Apache are contributing to make the platform successful. Each application server provides the basic Java EE 6 platform compliance and has its own competitive offerings. This allows you to choose an application server for deploying your Java EE 6 applications. If you are not happy with the support or features of one vendor, you can move your application to a different vendor because of the portability promise offered by the platform. Spring is a set of products from a single company: one price book, one support organization, one sustaining organization, one sales organization, etc. If any of those cause a customer headache, where do you go? Java EE, backed by multiple vendors, is a safer bet for those that are risk averse.

Production support - With Spring, you typically need to get support from two vendors: VMWare and the container provider. With Java EE 6, all of this is typically provided by one vendor. For example, Oracle offers commercial support for systems, operating systems, JDK, application server, and the applications on top of them. VMWare certainly offers complete production support, but do you really want to put all your eggs in one basket? Do you really use tcServer? ;-)

Maintainability - With Spring, you are likely building your own distribution with multiple JAR files: integrating, patching, and versioning all those components. Spring's claim is that multiple JAR files allow you to go à la carte and pick the latest versions of different components. But who is responsible for testing whether all these versions work together? Yep, you got it: YOU! If something does not work, who patches and maintains the JARs? Of course, you! Commercial support for such a configuration? You're on your own! The Java EE application servers manage all of this for you and provide a well-tested and commercially supported bundle.

While it is always good to realize that there is something new and improved that updates and replaces older frameworks like Spring, the good news is that not only does a Java EE 6 container offer what is described here, most will also let you deploy and run your Spring applications on them while you go through an upgrade to a more modern architecture. End result: you get the best of both worlds – keeping your legacy investment while moving to the more agile, lightweight world of Java EE 6.

A message to the Spring lovers ... The complexity in J2EE 1.2, 1.3, and 1.4 led to the genesis of Spring, but that was in 2004. This is 2012 and the name has changed to "Java EE 6" :-) There are tons of improvements in the Java EE platform to make it easy to use and powerful. Some examples:

Adding @Stateless on a POJO makes it an EJB
EJBs can be packaged in a WAR with no special packaging or deployment descriptors
"web.xml" and "faces-config.xml" are optional in most of the common cases
Typesafe dependency injection is now part of the Java EE platform
Adding @Path on a POJO allows you to publish it as a RESTful resource
EJBs can be used as backing beans for Facelets-driven JSF pages, providing full MVC
Java EE 6 WARs are known to be kilobytes in size and deployed in milliseconds
Tons of other simplifications in the platform and application servers

So if you moved away from J2EE to Spring many years ago and have not looked at Java EE 6 (which has been out since Dec 2009), then you should definitely try it out.
Just be at least aware of what other alternatives are available instead of restricting yourself to one stack. Here are some workshops and screencasts worth trying:

screencast #37 shows how to build an end-to-end application using NetBeans
screencast #36 builds the same application using Eclipse
javaee-lab-feb2012.pdf is a 3-4 hour self-paced hands-on workshop that guides you to build a comprehensive Java EE 6 application using NetBeans

Each city generally has a "spring cleanup" program every year that allows you to clean up the mess from your house. For your software projects, you don't need to wait for an annual event – just get started and reduce the technical debt now! Move away from your legacy Spring-based applications to a lighter and more modern approach of building enterprise Java applications using Java EE 6. Watch this beautiful presentation that explains how to migrate from Spring to Java EE 6:

List of files in the Java EE 6 project:

./index.xhtml
./META-INF
./person
./person/Create.xhtml
./person/Edit.xhtml
./person/List.xhtml
./person/View.xhtml
./resources
./resources/css
./resources/css/jsfcrud.css
./template.xhtml
./WEB-INF
./WEB-INF/classes
./WEB-INF/classes/Bundle.properties
./WEB-INF/classes/META-INF
./WEB-INF/classes/META-INF/persistence.xml
./WEB-INF/classes/org
./WEB-INF/classes/org/javaee
./WEB-INF/classes/org/javaee/javaeemysql
./WEB-INF/classes/org/javaee/javaeemysql/AbstractFacade.class
./WEB-INF/classes/org/javaee/javaeemysql/Person.class
./WEB-INF/classes/org/javaee/javaeemysql/Person_.class
./WEB-INF/classes/org/javaee/javaeemysql/PersonController$1.class
./WEB-INF/classes/org/javaee/javaeemysql/PersonController$PersonControllerConverter.class
./WEB-INF/classes/org/javaee/javaeemysql/PersonController.class
./WEB-INF/classes/org/javaee/javaeemysql/PersonFacade.class
./WEB-INF/classes/org/javaee/javaeemysql/util
./WEB-INF/classes/org/javaee/javaeemysql/util/JsfUtil.class
./WEB-INF/classes/org/javaee/javaeemysql/util/PaginationHelper.class
./WEB-INF/faces-config.xml
./WEB-INF/web.xml

List of files in the Spring 3.x project:

./META-INF
./META-INF/MANIFEST.MF
./WEB-INF
./WEB-INF/applicationContext.xml
./WEB-INF/classes
./WEB-INF/classes/log4j.properties
./WEB-INF/classes/org
./WEB-INF/classes/org/krams
./WEB-INF/classes/org/krams/tutorial
./WEB-INF/classes/org/krams/tutorial/controller
./WEB-INF/classes/org/krams/tutorial/controller/MainController.class
./WEB-INF/classes/org/krams/tutorial/domain
./WEB-INF/classes/org/krams/tutorial/domain/Person.class
./WEB-INF/classes/org/krams/tutorial/service
./WEB-INF/classes/org/krams/tutorial/service/PersonService.class
./WEB-INF/hibernate-context.xml
./WEB-INF/hibernate.cfg.xml
./WEB-INF/jsp
./WEB-INF/jsp/addedpage.jsp
./WEB-INF/jsp/addpage.jsp
./WEB-INF/jsp/deletedpage.jsp
./WEB-INF/jsp/editedpage.jsp
./WEB-INF/jsp/editpage.jsp
./WEB-INF/jsp/personspage.jsp
./WEB-INF/lib
./WEB-INF/lib/antlr-2.7.6.jar
./WEB-INF/lib/aopalliance-1.0.jar
./WEB-INF/lib/c3p0-0.9.1.2.jar
./WEB-INF/lib/cglib-nodep-2.2.jar
./WEB-INF/lib/commons-beanutils-1.8.3.jar
./WEB-INF/lib/commons-collections-3.2.1.jar
./WEB-INF/lib/commons-digester-2.1.jar
./WEB-INF/lib/commons-logging-1.1.1.jar
./WEB-INF/lib/dom4j-1.6.1.jar
./WEB-INF/lib/ejb3-persistence-1.0.2.GA.jar
./WEB-INF/lib/hibernate-annotations-3.4.0.GA.jar
./WEB-INF/lib/hibernate-commons-annotations-3.1.0.GA.jar
./WEB-INF/lib/hibernate-core-3.3.2.GA.jar
./WEB-INF/lib/javassist-3.7.ga.jar
./WEB-INF/lib/jstl-1.1.2.jar
./WEB-INF/lib/jta-1.1.jar
./WEB-INF/lib/junit-4.8.1.jar
./WEB-INF/lib/log4j-1.2.14.jar
./WEB-INF/lib/mysql-connector-java-5.1.14.jar
./WEB-INF/lib/persistence-api-1.0.jar
./WEB-INF/lib/slf4j-api-1.6.1.jar
./WEB-INF/lib/slf4j-log4j12-1.6.1.jar
./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-jdbc-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-orm-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-tx-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar
./WEB-INF/lib/standard-1.1.2.jar
./WEB-INF/lib/xml-apis-1.0.b2.jar
./WEB-INF/spring-servlet.xml
./WEB-INF/spring.properties
./WEB-INF/web.xml

So, are you excited about Java EE 6? Want to get started now? Here are some resources:

Java EE 6 SDK (including runtime, samples, tutorials etc.)
GlassFish Server Open Source Edition 3.1.2 (Community)
Oracle GlassFish Server 3.1.2 (Commercial)
Java EE 6 using WebLogic 12c and NetBeans (Video)
Java EE 6 with NetBeans and GlassFish (Video)
Java EE with Eclipse and GlassFish (Video)

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built-in DataContractJsonSerializer or the older JavaScriptSerializer. The DataContractJsonSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization – like values of type object, or anonymous types, which are quite common these days. The JavaScriptSerializer that came before it actually does support non-typed objects for serialization, but it can't do anything with untyped data coming in from JavaScript, and its overall extensibility model was pretty limited (JavaScriptSerializer is what MVC uses for JSON responses).

JSON.NET provides a robust JSON serializer that has both high level and low level components, and supports binary JSON, JSON contracts, XML to JSON conversion, LINQ to JSON and many, many more features than either of the built-in serializers. ASP.NET Web API now uses JSON.NET as its default serializer, and it is pulled in as a NuGet dependency into Web API projects, which is great.

Dynamic JSON Parsing

One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically – that is, without mapping the captured JSON directly into a .NET type as DataContractJsonSerializer or the JavaScriptSerializer do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc.), and other times you simply don't have the structures in place – or don't want to create them – to actually import the data.

If this topic sounds familiar – you're right! I wrote about dynamic JSON parsing a few months back, before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discuss JToken, JObject and JArray, which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types.

Why Dynamic JSON?

So, why dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third party services, it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes a lot of sense to pull only a small amount of data out of a large JSON document received from a service, because the third party service isn't directly related to your application's logic most of the time – and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to me (or rather the app's location). The Google API returns a ton of information that my application had no interest in – all I needed was a few values out of the data. Dynamic JSON parsing makes it possible to map this data without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store them on the business entities that needed to receive the data – no need to map the entire Maps API structure.
Getting JSON.NET

The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project from the Package Manager Console with:

PM> Install-Package Newtonsoft.Json

or by using Manage NuGet Packages in your project's References. As mentioned, if you're using ASP.NET Web API or MVC 4, JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/

Creating JSON on the fly with JObject and JArray

Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken derived JSON.NET objects. The most common JToken derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaObjectProvider and so uses the dynamic keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs, using JObject for the base object and songs and JArray for the actual collection of songs:

[TestMethod]
public void JObjectOutputTest()
{
    // strongly typed instance
    var jsonObject = new JObject();

    // you can explicitly add values here using the class interface
    jsonObject.Add("Entered", DateTime.Now);

    // or cast to dynamic to dynamically add/read properties
    dynamic album = jsonObject;

    album.AlbumName = "Dirty Deeds Done Dirt Cheap";
    album.Artist = "AC/DC";
    album.YearReleased = 1976;

    album.Songs = new JArray() as dynamic;

    dynamic song = new JObject();
    song.SongName = "Dirty Deeds Done Dirt Cheap";
    song.SongLength = "4:11";
    album.Songs.Add(song);

    song = new JObject();
    song.SongName = "Love at First Feel";
    song.SongLength = "3:10";
    album.Songs.Add(song);

    Console.WriteLine(album.ToString());
}

This produces a complete JSON structure:

{
  "Entered": "2012-08-18T13:26:37.7137482-10:00",
  "AlbumName": "Dirty Deeds Done Dirt Cheap",
  "Artist": "AC/DC",
  "YearReleased": 1976,
  "Songs": [
    {
      "SongName": "Dirty Deeds Done Dirt Cheap",
      "SongLength": "4:11"
    },
    {
      "SongName": "Love at First Feel",
      "SongLength": "3:10"
    }
  ]
}

Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override them with the JsonSerializerSettings object for most operations.

The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather, the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime without compile time constraints on the structure of the JSON output. Here I use JObject to create an album 'object' and immediately cast it to dynamic. JObject() is similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key/value pairs that are exposed as properties through the IDynamicMetaObjectProvider interface exposed in JSON.NET's JToken base class. For simple typed values the syntax is very clean – you add them directly as properties. For objects and arrays you have to explicitly create a new JObject or JArray, cast it to dynamic, and then add properties and items to it.
Always remember, though, that these values are dynamic – which means no IntelliSense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data driven JSON generation from other structures. Below you can see both styles of access next to each other:

// strongly typed instance
var jsonObject = new JObject();

// you can explicitly add values here
jsonObject.Add("Entered", DateTime.Now);

// expando style instance - you can just 'use' properties
dynamic album = jsonObject;
album.AlbumName = "Dirty Deeds Done Dirt Cheap";

JContainer (the base class for JObject and JArray) is a collection, so you can also iterate over the properties at runtime easily:

foreach (var item in jsonObject)
{
    Console.WriteLine(item.Key + " " + item.Value.ToString());
}

The functionality of the JSON objects is very similar to .NET's ExpandoObject; if you've used it before, you're already familiar with how the dynamic interfaces to the JSON objects work.

Importing JSON with JObject.Parse() and JArray.Parse()

The JValue structure supports importing JSON via the Parse() and Load() methods, which can read JSON data from a string or various streams respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:

public void JValueParsingTest()
{
    var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",
                        ""Entered"":""2012-03-16T00:03:33.245-10:00""}";

    dynamic json = JValue.Parse(jsonString);

    // values require casting
    string name = json.Name;
    string company = json.Company;
    DateTime entered = json.Entered;

    Assert.AreEqual(name, "Rick");
    Assert.AreEqual(company, "West Wind");
}

The JSON string represents an object with three properties, which is parsed into a JObject class and cast to dynamic. Once cast to dynamic, I can go ahead and access the object using familiar object syntax. Note that the actual values – json.Name, json.Company, json.Entered – are actually of type JToken, and I have to cast them to their appropriate types first before I can do type comparisons, as in the Asserts at the end of the test method. This is required because of the way dynamic types work: the runtime can't determine the target type from the method signature of the Assert.AreEqual(object, object) method. I have to either assign the dynamic value to a variable as I did above, or explicitly cast ((string)json.Name) in the actual method call. The JSON structure can be much more complex than this simple example.
Here's another example of an array of albums serialized to JSON and then parsed through with JArray.Parse():

[TestMethod]
public void JsonArrayParsingTest()
{
    var jsonString = @"[
    {
        ""Id"": ""b3ec4e5c"",
        ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"",
        ""Artist"": ""AC/DC"",
        ""YearReleased"": 1976,
        ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
        ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"",
        ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"",
        ""Songs"": [
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" },
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" },
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" }
        ]
    },
    {
        ""Id"": ""7b919432"",
        ""AlbumName"": ""End of the Silence"",
        ""Artist"": ""Henry Rollins Band"",
        ""YearReleased"": 1992,
        ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"",
        ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"",
        ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"",
        ""Songs"": [
            { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" },
            { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" }
        ]
    }
    ]";

    JArray jsonVal = JArray.Parse(jsonString) as JArray;
    dynamic albums = jsonVal;

    foreach (dynamic album in albums)
    {
        Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")");
        foreach (dynamic song in album.Songs)
        {
            Console.WriteLine("\t" + song.SongName);
        }
    }

    Console.WriteLine(albums[0].AlbumName);
    Console.WriteLine(albums[0].Songs[1].SongName);
}

JObject and JArray in ASP.NET Web API

Of course these types also work in ASP.NET Web API controller methods. If you want, you can accept parameters using these objects or return them back to the server. The following contrived example receives dynamic JSON input, then creates a new dynamic JSON object and returns it based on data from the first:

[HttpPost]
public JObject PostAlbumJObject(JObject jAlbum)
{
    // dynamic input from inbound JSON
    dynamic album = jAlbum;

    // create a new JSON object to write out
    dynamic newAlbum = new JObject();

    // Create properties on the new instance
    // with values from the first
    newAlbum.AlbumName = album.AlbumName + " New";
    newAlbum.NewProperty = "something new";
    newAlbum.Songs = new JArray();

    foreach (dynamic song in album.Songs)
    {
        song.SongName = song.SongName + " New";
        newAlbum.Songs.Add(song);
    }

    return newAlbum;
}

The raw POST request to the server looks something like this:

POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1
User-Agent: Fiddler
Content-type: application/json
Host: localhost
Content-Length: 88

{AlbumName: "Dirty Deeds", Songs: [{ SongName: "Problem Child" }, { SongName: "Squealer" }]}

and the output that comes back looks like this:

{
  "AlbumName": "Dirty Deeds New",
  "NewProperty": "something new",
  "Songs": [
    {
      "SongName": "Problem Child New"
    },
    {
      "SongName": "Squealer New"
    }
  ]
}

The original values are echoed back with something extra appended to demonstrate that we're working with a new object.
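For completeness, here's what a jQuery client for this method might look like – a minimal sketch that mirrors the raw request trace above (the relative URL is an assumption and will vary with your route configuration):

$.ajax({
    url: "samples/PostAlbumJObject",
    type: "POST",
    contentType: "application/json",
    // same payload as the Fiddler trace shown above
    data: JSON.stringify({
        AlbumName: "Dirty Deeds",
        Songs: [{ SongName: "Problem Child" }, { SongName: "Squealer" }]
    }),
    success: function (newAlbum) {
        // the method returns the new dynamic JSON object
        alert(newAlbum.AlbumName); // "Dirty Deeds New"
    }
});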
When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON – so effectively the parameter and result type explicitly determine the input and output format, which is nice.

Dynamic to Strong Type Mapping

You can also map JObject and JArray instances to strongly typed objects, so you can mix dynamic and static typing in the same piece of code. Using the two-album jsonString shown earlier, the code below takes the array of albums, picks out a single album and casts that album to a static Album instance:

[TestMethod]
public void JsonParseToStrongTypeTest()
{
    JArray albums = JArray.Parse(jsonString) as JArray;

    // pick out one album
    JObject jalbum = albums[0] as JObject;

    // Copy to a static Album instance
    Album album = jalbum.ToObject<Album>();

    Assert.IsNotNull(album);
    Assert.AreEqual(album.AlbumName, jalbum.Value<string>("AlbumName"));
    Assert.IsTrue(album.Songs.Count > 0);
}

This is pretty damn useful for the scenario I mentioned earlier: you can read a large chunk of JSON, dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it – it leaves you in total control to decide what's dynamic and what's static.

Strongly typed JSON Parsing

With all this talk of dynamic, let's not forget that JSON.NET of course also does strongly typed serialization, which is drop dead easy. Here's a simple example of how to serialize and deserialize an object with JSON.NET:

[TestMethod]
public void StronglyTypedSerializationTest()
{
    // Demonstrate a serialization/deserialization round trip
    var album = new Album()
    {
        AlbumName = "Dirty Deeds Done Dirt Cheap",
        Artist = "AC/DC",
        Entered = DateTime.Now,
        YearReleased = 1976,
        Songs = new List<Song>()
        {
            new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" },
            new Song() { SongName = "Love at First Feel", SongLength = "3:10" }
        }
    };

    // serialize to string
    string json2 = JsonConvert.SerializeObject(album, Formatting.Indented);
    Console.WriteLine(json2);

    // make sure we can deserialize back
    var album2 = JsonConvert.DeserializeObject<Album>(json2);

    Assert.IsNotNull(album2);
    Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap");
    Assert.IsTrue(album2.Songs.Count == 2);
}

JsonConvert is a high level static class that wraps lower level functionality, but you can also use the JsonSerializer class, which allows you to serialize/parse to and from streams. It's a little more work, but gives you a bit more control. The functionality available is easy to discover with IntelliSense, and that's good because there's not a lot in the way of documentation that's actually useful.

Summary

JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing, from dynamic parsing to static serialization to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility – and extensibility – in our JSON parsing. Good to go!
Resources

Sample Test Project
http://json.codeplex.com/

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in .NET  Web Api  AJAX

    Read the article

  • jQuery Globalization Plugin from Microsoft

    - by ScottGu
Last month I blogged about how Microsoft is starting to make code contributions to jQuery, and about some of the first code contributions we were working on: jQuery Templates and Data Linking support. Today, we released a prototype of a new jQuery Globalization Plugin that enables you to add globalization support to your JavaScript applications. This plugin includes globalization information for over 350 cultures, ranging from Scottish Gaelic, Frisian, Hungarian, and Japanese to Canadian English. We will be releasing this plugin to the community as open-source.

You can download our prototype for the jQuery Globalization plugin from our Github repository: http://github.com/nje/jquery-glob You can also download a set of samples that demonstrate some simple use-cases with it here.

Understanding Globalization

The jQuery Globalization plugin enables you to easily parse and format numbers, currencies, and dates for different cultures in JavaScript. For example, you can use the Globalization plugin to display the proper currency symbol for a culture. You also can use the Globalization plugin to format dates so that the day and month appear in the right order and the day and month names are correctly translated. Notice above how the Arabic year is displayed as 1431. This is because the year has been converted to use the Arabic calendar.

Some cultural differences, such as different currency or different month names, are obvious. Other cultural differences are surprising and subtle. For example, in some cultures the grouping of numbers is done unevenly. In the "te-IN" culture (Telugu in India), groups have 3 digits and then 2 digits. The number 1000000 (one million) is written as "10,00,000". Some cultures do not group numbers at all. All of these subtle cultural differences are handled by the jQuery Globalization plugin automatically.

Getting dates right can be especially tricky. Different cultures have different calendars, such as the Gregorian and UmAlQura calendars. A single culture can even have multiple calendars. For example, the Japanese culture uses both the Gregorian calendar and a Japanese calendar that has eras named after Japanese emperors. The Globalization Plugin includes methods for converting dates between all of these different calendars.

Using Language Tags

The jQuery Globalization plugin uses the language tags defined in the RFC 4646 and RFC 5646 standards to identify cultures (see http://tools.ietf.org/html/rfc5646). A language tag is composed of one or more subtags separated by hyphens. For example:

Language Tag - Language Name (in English)
en-AU - English (Australia)
en-BZ - English (Belize)
en-CA - English (Canada)
id - Indonesian
zh-CHS - Chinese (Simplified) Legacy
zu - isiZulu

Notice that a single language, such as English, can have several language tags. Speakers of English in Canada format numbers, currencies, and dates using different conventions than speakers of English in Australia or the United States. You can find the language tag for a particular culture by using the Language Subtag Lookup tool located here: http://rishida.net/utils/subtags/

The jQuery Globalization plugin download includes a folder named globinfo that contains the information for each of the 350 cultures. Actually, this folder contains more than 700 files because the folder includes both minified and un-minified versions of each file.
For example, the globinfo folder includes JavaScript files named jQuery.glob.en-AU.js for English (Australia), jQuery.glob.id.js for Indonesian, and jQuery.glob.zh-CHS.js for Chinese (Simplified) Legacy.

Example: Setting a Particular Culture

Imagine that you have been asked to create a German website and want to format all of the dates, currencies, and numbers using German formatting conventions correctly in JavaScript on the client. The HTML for the page might look like this: Notice the span tags above. They mark the areas of the page that we want to format with the Globalization plugin. We want to format the product price, the date the product is available, and the units of the product in stock.

To use the jQuery Globalization plugin, we'll add three JavaScript files to the page: the jQuery library, the jQuery Globalization plugin, and the culture information for a particular language. In this case, I've statically added the jQuery.glob.de-DE.js JavaScript file that contains the culture information for German. The language tag "de-DE" is used for German as spoken in Germany. Now that I have all of the necessary scripts, I can use the Globalization plugin to format the product price, date available, and units in stock values using client-side JavaScript (see the sketch after these examples).

The jQuery Globalization plugin extends the jQuery library with new methods - including new methods named preferCulture() and format(). The preferCulture() method enables you to set the default culture used by the jQuery Globalization plugin methods. Notice that the preferCulture() method accepts a language tag. The method will find the closest culture that matches the language tag. The $.format() method is used to actually format the currencies, dates, and numbers. The second parameter passed to the $.format() method is a format specifier. For example, passing "c" causes the value to be formatted as a currency. The ReadMe file at github details the meaning of all of the various format specifiers: http://github.com/nje/jquery-glob

When we open the page in a browser, everything is formatted correctly according to German language conventions. A euro symbol is used for the currency symbol. The date is formatted using German day and month names. Finally, a period instead of a comma is used as a number separator. You can see a running example of the above approach with the 3_GermanSite.htm file in this samples download.

Example: Enabling a User to Dynamically Select a Culture

In the previous example we explicitly said that we wanted to globalize in German (by referencing the jQuery.glob.de-DE.js file). Let's now look at the first of a few examples that demonstrate how to dynamically set the globalization culture to use. Imagine that you want to display a dropdown list of all of the 350 cultures in a page. When someone selects a culture from the dropdown list, you want all of the dates in the page to be formatted using the selected culture. Here's the HTML for the page: Notice that all of the dates are contained in a <span> tag with a data-date attribute (data-* attributes are a new feature of HTML 5 that conveniently also still work with older browsers). We'll format the date represented by the data-date attribute when a user selects a culture from the dropdown list. In order to display dates for any possible culture, we'll include the plugin's combined jQuery.glob.all.js file and wire things up along the lines of the sketch below.
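As a rough sketch of the pattern only (assuming a hypothetical #culture-select dropdown and span elements with data-date attributes, and using just the $.preferCulture(), $.format(), $.parseDate() and $.cultures APIs described in this post):

<script src="jquery.js" type="text/javascript"></script>
<script src="jquery.glob.js" type="text/javascript"></script>
<script src="jquery.glob.all.js" type="text/javascript"></script>
<script type="text/javascript">
    $(function () {
        // populate the dropdown from the loaded cultures
        // (assumes $.cultures is a map of culture name to culture info)
        $.each($.cultures, function (name) {
            $("#culture-select").append($("<option/>").val(name).text(name));
        });

        // re-globalize every data-date span whenever a culture is picked
        $("#culture-select").change(function () {
            $.preferCulture($(this).val());   // e.g. "de-DE"
            $("span[data-date]").each(function () {
                var date = $.parseDate($(this).attr("data-date"));
                $(this).text($.format(date, "D"));  // "D" = long date format
            });
        });

        // the same APIs handle the German example shown earlier, e.g.:
        // $.preferCulture("de-DE"); $("#price").text($.format(1320.5, "c"));
    });
</script>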
The jQuery.glob.all.js file contains globalization information for all of the more than 350 cultures supported by the Globalization plugin. At 367KB minified, this file is not small. Because of the size of this file, unless you really need to use all of these cultures at the same time, we recommend that you add the individual JavaScript files for the particular cultures that you intend to support to a page instead of the combined jQuery.glob.all.js. In the next sample I'll show how to dynamically load just the language files you need.

Next, we'll populate the dropdown list with all of the available cultures. We can use the $.cultures property to get all of the loaded cultures. Finally, we'll write jQuery code that grabs every span element with a data-date attribute and formats the date. The jQuery Globalization plugin's parseDate() method is used to convert a string representation of a date into a JavaScript date. The plugin's format() method is used to format the date. The "D" format specifier causes the date to be formatted using the long date format. And now the content will be globalized correctly regardless of which of the 350 languages a user visiting the page selects. You can see a running example of the above approach with the 4_SelectCulture.htm file in this samples download.

Example: Loading Globalization Files Dynamically

As mentioned in the previous section, you should avoid adding the jQuery.glob.all.js file to a page whenever possible because the file is so large. A better alternative is to load the globalization information that you need dynamically. For example, imagine that you have created a dropdown list that displays a list of languages. The following jQuery code executes whenever a user selects a new language from the dropdown list. The code checks whether the globalization file associated with the selected language has already been loaded. If the globalization file has not been loaded, then the globalization file is loaded dynamically by taking advantage of the jQuery $.getScript() method. The globalizePage() method is called after the requested globalization file has been loaded, and contains the client-side code to perform the globalization. The advantage of this approach is that it enables you to avoid loading the entire jQuery.glob.all.js file. Instead you only need to load the files that you need, and you don't need to load the files more than once. The 5_Dynamic.htm file in this samples download demonstrates how to implement this approach.

Example: Setting the User Preferred Language Automatically

Many websites detect a user's preferred language from their browser settings and automatically use it when globalizing content. A user can set a preferred language for their browser. Then, whenever the user requests a page, this language preference is included in the request in the Accept-Language header. When using Microsoft Internet Explorer, you can set your preferred language by following these steps:

Select the menu option Tools, Internet Options.
Select the General tab.
Click the Languages button in the Appearance section.
Click the Add button to add a new language to the list of languages.
Move your preferred language to the top of the list.

Notice that you can list multiple languages in the Language Preference dialog. All of these languages are sent in the order that you listed them in the Accept-Language header:

Accept-Language: fr-FR,id-ID;q=0.7,en-US;q=0.3

Strangely, you cannot retrieve the value of the Accept-Language header from client JavaScript.
Microsoft Internet Explorer and Mozilla Firefox support a bevy of language-related properties exposed by the window.navigator object, such as window.navigator.browserLanguage and window.navigator.language, but these properties represent either the language set for the operating system or the language edition of the browser. These properties don't enable you to retrieve the language that the user set as his or her preferred language. The only reliable way to get a user's preferred language (the value of the Accept-Language header) is to write server code. For example, the following ASP.NET page takes advantage of the server Request.UserLanguages property to assign the user's preferred language to a client JavaScript variable named acceptLanguage (which then allows you to access the value using client-side JavaScript). In order for this code to work, the culture information associated with the value of acceptLanguage must be included in the page. For example, if someone's preferred culture is fr-FR (French in France) then you need to include either the jQuery.glob.fr-FR.js or the jQuery.glob.all.js JavaScript file in the page, or the culture information won't be available. The "6_AcceptLanguages.aspx" sample in this samples download demonstrates how to implement this approach. If the culture information for the user's preferred language is not included in the page, then the $.preferCulture() method will fall back to using the neutral culture (for example, using jQuery.glob.fr.js instead of jQuery.glob.fr-FR.js). If the neutral culture information is not available, then the $.preferCulture() method falls back to the default culture (English).

Example: Using the Globalization Plugin with the jQuery UI DatePicker

One of the goals of the Globalization plugin is to make it easier to build jQuery widgets that can be used with different cultures. We wanted to make sure that the jQuery Globalization plugin could work with existing jQuery UI plugins such as the DatePicker plugin. To that end, we created a patched version of the DatePicker plugin that can take advantage of the Globalization plugin when rendering a calendar. For example, the following figure illustrates what happens when you add the jQuery Globalization and the patched jQuery UI DatePicker plugin to a page and select Indonesian as the preferred culture: Notice that the headers for the days of the week are displayed using Indonesian day name abbreviations. Furthermore, the month names are displayed in Indonesian. You can download the patched version of the jQuery UI DatePicker from our github website. Or you can use the version included in this samples download and used by the 7_DatePicker.htm sample file.

Summary

I'm excited about our continuing participation in the jQuery community. This Globalization plugin is the third jQuery plugin that we've released. We've really appreciated all of the great feedback and design suggestions on the jQuery templating and data-linking prototypes that we released earlier this year. We also want to thank the jQuery and jQuery UI teams for working with us to create these plugins. Hope this helps, Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. You can follow me at: twitter.com/scottgu

    Read the article

  • Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix

    - by ScottGu
I’m excited to announce the release today of several products:

ASP.NET MVC 3
NuGet
IIS Express 7.5
SQL Server Compact Edition 4
Web Deploy and Web Farm Framework 2.0
Orchard 1.0
WebMatrix 1.0

The above products are all free. They build upon the .NET 4 and VS 2010 release, and add a ton of additional value to ASP.NET (both Web Forms and MVC) and the Microsoft Web Server stack.

ASP.NET MVC 3

Today we are shipping the final release of ASP.NET MVC 3.  You can download and install ASP.NET MVC 3 here.  The ASP.NET MVC 3 source code (released under an OSI-compliant open source license) can also optionally be downloaded here. ASP.NET MVC 3 is a significant update that brings with it a bunch of great features.  Some of the improvements include:

Razor

ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to continuing to support/enhance the existing .aspx view engine).  Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, with Razor you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type.  You can learn more about Razor from some of the blog posts I’ve done about it over the last 6 months:

Introducing Razor
New @model keyword in Razor
Layouts with Razor
Server-Side Comments with Razor
Razor’s @: and <text> syntax
Implicit and Explicit code nuggets with Razor
Layouts and Sections with Razor

Today’s release includes full code intellisense support for Razor (both VB and C#) with Visual Studio 2010 and the free Visual Web Developer 2010 Express.

JavaScript Improvements

ASP.NET MVC 3 enables richer JavaScript scenarios and takes advantage of emerging HTML5 capabilities. The AJAX and Validation helpers in ASP.NET MVC 3 now use an Unobtrusive JavaScript based approach.  Unobtrusive JavaScript avoids injecting inline JavaScript into HTML, and enables cleaner separation of behavior using the new HTML 5 “data-“ attribute convention (which conveniently works on older browsers as well – including IE6). This keeps your HTML tight and clean, and makes it easier to optionally swap out or customize JS libraries.  ASP.NET MVC 3 now includes built-in support for posting JSON-based parameters from client-side JavaScript to action methods on the server.  This makes it easier to exchange data across the client and server, and build rich JavaScript front-ends.  We think this capability will be particularly useful going forward with scenarios involving client templates and data binding (including the jQuery plugins the ASP.NET team recently contributed to the jQuery project).  Previous releases of ASP.NET MVC included the core jQuery library.  ASP.NET MVC 3 also now ships the jQuery Validate plugin (which our validation helpers use for client-side validation scenarios).  We are also now shipping and including jQuery UI by default as well (which provides a rich set of client-side JavaScript UI widgets for you to use within projects).

Improved Validation

ASP.NET MVC 3 includes a bunch of validation enhancements that make it even easier to work with data. Client-side validation is now enabled by default with ASP.NET MVC 3 (using an unobtrusive JavaScript implementation).
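As a quick illustration, a hypothetical model class that these validation features light up might look like this ([Required], [StringLength] and [DataType] are standard DataAnnotations attributes, and [Compare] is the new MVC 3 attribute discussed below):

using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;   // CompareAttribute ships in System.Web.Mvc for MVC 3

public class RegisterModel
{
    [Required]
    [StringLength(20, MinimumLength = 3)]
    public string UserName { get; set; }

    [Required]
    [DataType(DataType.Password)]
    public string Password { get; set; }

    // the new MVC 3 [Compare] attribute validates that two properties match
    [Compare("Password", ErrorMessage = "The passwords do not match.")]
    public string ConfirmPassword { get; set; }
}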
Today’s release also includes built-in support for Remote Validation - which enables you to annotate a model class with a validation attribute that causes ASP.NET MVC to perform a remote validation call to a server method when validating input on the client. The validation features introduced within .NET 4’s System.ComponentModel.DataAnnotations namespace are now supported by ASP.NET MVC 3.  This includes support for the new IValidatableObject interface – which enables you to perform model-level validation, and allows you to provide validation error messages specific to the state of the overall model, or between two properties within the model.  ASP.NET MVC 3 also supports the improvements made to the ValidationAttribute class in .NET 4.  ValidationAttribute now supports a new IsValid overload that provides more information about the current validation context, such as what object is being validated.  This enables richer scenarios where you can validate the current value based on another property of the model.  We’ve shipped a built-in [Compare] validation attribute  with ASP.NET MVC 3 that uses this support and makes it easy out of the box to compare and validate two property values. You can use any data access API or technology with ASP.NET MVC.  This past year, though, we’ve worked closely with the .NET data team to ensure that the new EF Code First library works really well for ASP.NET MVC applications.  These two posts of mine cover the latest EF Code First preview and demonstrates how to use it with ASP.NET MVC 3 to enable easy editing of data (with end to end client+server validation support).  The final release of EF Code First will ship in the next few weeks. Today we are also publishing the first preview of a new MvcScaffolding project.  It enables you to easily scaffold ASP.NET MVC 3 Controllers and Views, and works great with EF Code-First (and is pluggable to support other data providers).  You can learn more about it – and install it via NuGet today - from Steve Sanderson’s MvcScaffolding blog post. Output Caching Previous releases of ASP.NET MVC supported output caching content at a URL or action-method level. With ASP.NET MVC V3 we are also enabling support for partial page output caching – which allows you to easily output cache regions or fragments of a response as opposed to the entire thing.  This ends up being super useful in a lot of scenarios, and enables you to dramatically reduce the work your application does on the server.  The new partial page output caching support in ASP.NET MVC 3 enables you to easily re-use cached sub-regions/fragments of a page across multiple URLs on a site.  It supports the ability to cache the content either on the web-server, or optionally cache it within a distributed cache server like Windows Server AppFabric or memcached. I’ll post some tutorials on my blog that show how to take advantage of ASP.NET MVC 3’s new output caching support for partial page scenarios in the future. Better Dependency Injection ASP.NET MVC 3 provides better support for applying Dependency Injection (DI) and integrating with Dependency Injection/IOC containers. With ASP.NET MVC 3 you no longer need to author custom ControllerFactory classes in order to enable DI with Controllers.  
You can instead just register a Dependency Injection framework with ASP.NET MVC 3 and it will resolve dependencies not only for Controllers, but also for Views, Action Filters, Model Binders, Value Providers, Validation Providers, and Model Metadata Providers that you use within your application. This makes it much easier to cleanly integrate dependency injection within your projects. Other Goodies ASP.NET MVC 3 includes dozens of other nice improvements that help to both reduce the amount of code you write, and make the code you do write cleaner.  Here are just a few examples: Improved New Project dialog that makes it easy to start new ASP.NET MVC 3 projects from templates. Improved Add->View Scaffolding support that enables the generation of even cleaner view templates. New ViewBag property that uses .NET 4’s dynamic support to make it easy to pass late-bound data from Controllers to Views. Global Filters support that allows specifying cross-cutting filter attributes (like [HandleError]) across all Controllers within an app. New [AllowHtml] attribute that allows for more granular request validation when binding form posted data to models. Sessionless controller support that allows fine grained control over whether SessionState is enabled on a Controller. New ActionResult types like HttpNotFoundResult and RedirectPermanent for common HTTP scenarios. New Html.Raw() helper to indicate that output should not be HTML encoded. New Crypto helpers for salting and hashing passwords. And much, much more… Learn More about ASP.NET MVC 3 We will be posting lots of tutorials and samples on the http://asp.net/mvc site in the weeks ahead.  Below are two good ASP.NET MVC 3 tutorials available on the site today: Build your First ASP.NET MVC 3 Application: VB and C# Building the ASP.NET MVC 3 Music Store We’ll post additional ASP.NET MVC 3 tutorials and videos on the http://asp.net/mvc site in the future. Visit it regularly to find new tutorials as they are published. How to Upgrade Existing Projects ASP.NET MVC 3 is compatible with ASP.NET MVC 2 – which means it should be easy to update existing MVC projects to ASP.NET MVC 3.  The new features in ASP.NET MVC 3 build on top of the foundational work we’ve already done with the MVC 1 and MVC 2 releases – which means that the skills, knowledge, libraries, and books you’ve acquired are all directly applicable with the MVC 3 release.  MVC 3 adds new features and capabilities – it doesn’t obsolete existing ones. You can upgrade existing ASP.NET MVC 2 projects by following the manual upgrade steps in the release notes.  Alternatively, you can use this automated ASP.NET MVC 3 upgrade tool to easily update your  existing projects. Localized Builds Today’s ASP.NET MVC 3 release is available in English.  We will be releasing localized versions of ASP.NET MVC 3 (in 9 languages) in a few days.  I’ll blog pointers to the localized downloads once they are available. NuGet Today we are also shipping NuGet – a free, open source, package manager that makes it easy for you to find, install, and use open source libraries in your projects. It works with all .NET project types (including ASP.NET Web Forms, ASP.NET MVC, WPF, WinForms, Silverlight, and Class Libraries).  You can download and install it here. NuGet enables developers who maintain open source projects (for example, .NET projects like Moq, NHibernate, Ninject, StructureMap, NUnit, Windsor, Raven, Elmah, etc) to package up their libraries and register them with an online gallery/catalog that is searchable.  
The client-side NuGet tools – which include full Visual Studio integration – make it trivial for any .NET developer who wants to use one of these libraries to easily find and install it within the project they are working on. NuGet handles dependency management between libraries (for example: library1 depends on library2). It also makes it easy to update (and optionally remove) libraries from your projects later. It supports updating web.config files (if a package needs configuration settings). It also allows packages to add PowerShell scripts to a project (for example: scaffold commands). Importantly, NuGet is transparent and clean – and does not install anything at the system level. Instead it is focused on making it easy to manage libraries you use with your projects. Our goal with NuGet is to make it as simple as possible to integrate open source libraries within .NET projects.  NuGet Gallery This week we also launched a beta version of the http://nuget.org web-site – which allows anyone to easily search and browse an online gallery of open source packages available via NuGet.  The site also now allows developers to optionally submit new packages that they wish to share with others.  You can learn more about how to create and share a package here. There are hundreds of open-source .NET projects already within the NuGet Gallery today.  We hope to have thousands there in the future. IIS Express 7.5 Today we are also shipping IIS Express 7.5.  IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios.  It works for both ASP.NET Web Forms and ASP.NET MVC project types. We think IIS Express combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built-into Visual Studio today with the full power of IIS.  Specifically: It’s lightweight and easy to install (less than 5Mb download and a quick install) It does not require an administrator account to run/debug applications from Visual Studio It enables a full web-server feature set – including SSL, URL Rewrite, and other IIS 7.x modules It supports and enables the same extensibility model and web.config file settings that IIS 7.x support It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all) It works on Windows XP and higher operating systems – giving you a full IIS 7.x developer feature-set on all Windows OS platforms IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk.  It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios.  You can also optionally redistribute IIS Express with your own applications if you want a lightweight web-server.  The standard IIS Express EULA now includes redistributable rights. Visual Studio 2010 SP1 adds support for IIS Express.  Read my VS 2010 SP1 and IIS Express blog post to learn more about what it enables.  SQL Server Compact Edition 4 Today we are also shipping SQL Server Compact Edition 4 (aka SQL CE 4).  SQL CE is a free, embedded, database engine that enables easy database storage. No Database Installation Required SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. 
You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications. Works with Existing Data APIs SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today. Supports Development, Testing and Production Scenarios SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free. Tooling Support with VS 2010 SP1 Visual Studio 2010 SP1 adds support for SQL CE 4 and ASP.NET Projects.  Read my VS 2010 SP1 and SQL CE 4 blog post to learn more about what it enables.  Web Deploy and Web Farm Framework 2.0 Today we are also releasing Microsoft Web Deploy V2 and Microsoft Web Farm Framework V2.  These services provide a flexible and powerful way to deploy ASP.NET applications onto either a single server, or across a web farm of machines. You can learn more about these capabilities from my previous blog posts on them: Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Visit the http://iis.net website to learn more and install them. Both are free. Orchard 1.0 Today we are also releasing Orchard v1.0.  Orchard is a free, open source, community based project.  It provides Content Management System (CMS) and Blogging System support out of the box, and makes it possible to easily create and manage web-sites without having to write code (site owners can customize a site through the browser-based editing tools built-into Orchard).  Read these tutorials to learn more about how you can setup and manage your own Orchard site. Orchard itself is built as an ASP.NET MVC 3 application using Razor view templates (and by default uses SQL CE 4 for data storage).  Developers wishing to extend an Orchard site with custom functionality can open and edit it as a Visual Studio project – and add new ASP.NET MVC Controllers/Views to it.  WebMatrix 1.0 WebMatrix is a new, free, web development tool from Microsoft that provides a suite of technologies that make it easier to enable website development.  It enables a developer to start a new site by browsing and downloading an app template from an online gallery of web applications (which includes popular apps like Umbraco, DotNetNuke, Orchard, WordPress, Drupal and Joomla).  Alternatively it also enables developers to create and code web sites from scratch. WebMatrix is task focused and helps guide developers as they work on sites.  
WebMatrix includes IIS Express, SQL CE 4, and ASP.NET - providing an integrated web-server, database and programming framework combination.  It also includes built-in web publishing support which makes it easy to find and deploy sites to web hosting providers. You can learn more about WebMatrix from my Introducing WebMatrix blog post this summer.  Visit http://microsoft.com/web to download and install it today. Summary I’m really excited about today’s releases – they provide a bunch of additional value that makes web development with ASP.NET, Visual Studio and the Microsoft Web Server a lot better.  A lot of folks worked hard to share this with you today. On behalf of my whole team – we hope you enjoy them! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices to build HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules were the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API as well as the core ASP.NET runtime to choose to build HTTP content with. Web API definitely squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care.

What's a Web API?

HTTP 'APIs' (Microsoft's new terminology for a service I guess) are becoming increasingly more important with the rise of the many devices in use today. Most mobile devices like phones and tablets run Apps that are using data retrieved from the Web over HTTP. Desktop applications are also moving in this direction with more and more online content and synching moving into even traditional desktop applications. The pending Windows 8 release promises an app-like platform for both the desktop and other devices, that also emphasizes consuming data from the Cloud. Likewise many Web browser hosted applications these days are relying on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML data to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup, typically in the form of JSON or XML. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server.

But .NET already has various HTTP Platforms

The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up, from low level APIs/custom engines to a high level HTML generation engine. ASP.NET as a core platform has clearly stood the test of time 10+ years later, and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out of box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs.
Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and creating content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible to do with some plumbing code of your own but not built in). Prior to Web API, Microsoft's main push for HTTP services has been WCF REST, which was always an awkward technology that had a severe personality conflict, not being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third party tools that provided much better integration and ease of use. With the release of Web API, for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit, as Web API addresses just about all the features I implemented on my own and much more.

ASP.NET Web API provides a better HTTP Experience

ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it's a brand new platform rather than a bolted-on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it, to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like:

Strong Support for URL Routing to produce clean URLs using familiar MVC style routing semantics
Content Negotiation based on Accept headers for request and response serialization
Support for a host of supported output formats including JSON, XML, ATOM
Strong default support for REST semantics, but they are optional
Easily extensible Formatter support to add new input/output types
Deep support for more advanced HTTP features via HttpResponseMessage and HttpRequestMessage classes and strongly typed Enums to describe many HTTP operations
Convention based design that drives you into doing the right thing for HTTP Services
Very extensible, based on MVC like extensibility model of Formatters and Filters
Self-hostable in non-Web applications
Testable using testing concepts similar to MVC

Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available, in a straightforward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API.
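To make that concrete, here's a minimal, hypothetical Web API controller (the AlbumsController, Album and AlbumData types below are my own illustration of the conventions, not code from this article):

using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class AlbumsController : ApiController
{
    // GET api/albums - the result is serialized as JSON or XML
    // depending on the client's Accept header (content negotiation)
    public IEnumerable<Album> GetAlbums()
    {
        return AlbumData.Current;
    }

    // GET api/albums/5 - 404 when the id doesn't exist
    public Album GetAlbum(int id)
    {
        var album = AlbumData.Current.FirstOrDefault(a => a.Id == id);
        if (album == null)
            throw new HttpResponseException(HttpStatusCode.NotFound);
        return album;
    }

    // POST api/albums - returns 201 Created plus the new entity
    public HttpResponseMessage PostAlbum(Album album)
    {
        AlbumData.Current.Add(album);
        return Request.CreateResponse(HttpStatusCode.Created, album);
    }
}

public class Album
{
    public int Id { get; set; }
    public string AlbumName { get; set; }
}

// stand-in data store for the sketch
public static class AlbumData
{
    public static List<Album> Current = new List<Album>
    {
        new Album { Id = 1, AlbumName = "Dirty Deeds Done Dirt Cheap" }
    };
}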
The Routing and core infrastructure of Web API are very similar to how MVC works, providing many of the benefits of MVC, but with focus on HTTP access and manipulation in Controller methods rather than HTML generation in MVC. There's much improved support for content negotiation based on HTTP Accept headers, with the framework capable of automatically detecting what content the client is sending and requesting, and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service backends often are used by multiple clients/applications, and being able to choose the data format that fits best for the client is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server side HTTP framework that intrinsically understands the HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort.

No Brainers for Web API

There are a few scenarios that are a slam dunk for Web API.

HTTP Services
If you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service, breaking out the logic into controllers as needed. Because the primary interface is the service, there's no confusion of what should go where (MVC or API). Perfect fit.

Primary AJAX Backends
If you're building rich client Web applications that rely heavily on AJAX callbacks to serve their data, Web API is also a slam dunk. Again, because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPA), typically there's very little HTML based logic served other than bringing up a shell UI and then filling the data from the server with AJAX, which means the business logic required for data retrieval and data acceptance and validation also lives in the Web API. Perfect fit.

Generic HTTP Endpoints
Another good fit are generic HTTP endpoints that serve data or handle 'utility' type functionality in typical Web applications. If you need to implement an image server, or an upload handler, in the past I'd implement that as an HTTP handler. With Web API you now have a well defined place where you can implement these types of generic 'services' in a location that can easily add endpoints (via Controller methods) or be separated out as more full featured APIs. Granted, this could be done with MVC as well, but Web API seems a clearer and more well defined place to store generic application services. This is one thing I used to do a lot of in my own libraries, and Web API addresses this nicely. Great fit.

Mixed HTML and AJAX Applications: Not a clear Choice

For all the commonality that Web API and MVC share, they are fundamentally different platforms that are independent of each other. A lot of people have asked when it makes sense to use MVC vs.
Web API when you're dealing with a typical Web application that creates HTML and also uses AJAX for rich functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML related generation into MVC, that can often result in a lot of code duplication. Also, MVC supports JSON and XML result data fairly easily as well, so there's some confusion where that 'trigger point' is of when you should switch to Web API vs. just implementing functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb I think works is that if a large chunk of the application's functionality serves data, Web API is a good choice, but if you have a couple of small AJAX requests to serve data to a grid or autocomplete box, it'd be overkill to separate out that logic into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET), so it should be worth it. Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily, and that functionality is not going away, so just because Web API is there it doesn't mean you have to use it. Web API is not a full replacement for MVC obviously either, since there's not the same level of support to feed HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route), so if your HTML is part of your API or application in general, MVC is still a better choice, either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC, so that you might even be able to mix functionality of both into single Controllers and not have to make any trade offs, but at the moment that's not the case.

Some Issues To Think About

Web API is similar to MVC but not the Same
Although Web API looks a lot like MVC it's not the same, and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different than MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data.

Code Duplication
I already touched on this in the Mixed HTML and Web API section, but if you build an MVC application that also exposes a Web API, it's quite likely that you end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for an HTML application and for the Web API, which might need something different altogether. More often than not though, the same logic is used, and there's no easy way to share. If you implement an MVC ActionFilter and you want that same functionality in your Web API, you'll end up creating the filter twice.

AJAX Data or AJAX HTML
On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and their place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says:

I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-)

You know what - I can totally relate to that.
On the last Web based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire with this, the overhead ended up being actually fairly small if you keep the 'data' requests small and atomic. Performance was often made up by the lack of client side rendering of HTML. Server rendered HTML for AJAX templating gives so much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break out your HTML logic into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this is often frowned upon as too 'heavy', it worked really well in terms of developer effort as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client, but it's more manageable in small doses rather than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about especially for AJAX or SPA style applications.

Summary

Web API is a great new addition to the ASP.NET platform and it addresses a serious need for consolidation of a lot of half-baked HTTP service API technologies that came before it. Web API feels 'right', and hits the right combination of usability and flexibility, at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available it doesn't mean that other tools or tech that came before it should be discarded or even upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward, Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed!

Resources

ASP.NET Web API
AspConf Ask the Experts Session (first 5 minutes)

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api

    Read the article

  • 'sudo apt-get update' error

    - by psilo
    I've been having an issue with 'sudo apt-get update' for several days now. I've tried every proposed solution I could find but to no avail. Here is the output to 'apt-get update'. Ign http://us.archive.ubuntu.com precise InRelease Ign http://us.archive.ubuntu.com precise-updates InRelease Ign http://us.archive.ubuntu.com precise-backports InRelease Ign http://us.archive.ubuntu.com precise-security InRelease Ign http://archive.ubuntu.com precise InRelease Err http://us.archive.ubuntu.com precise Release.gpg Unable to connect to 69.163.233.85:80: Err http://us.archive.ubuntu.com precise-updates Release.gpg Unable to connect to 69.163.233.85:80: Err http://us.archive.ubuntu.com precise-backports Release.gpg Unable to connect to 69.163.233.85:80: Err http://archive.ubuntu.com precise Release.gpg Unable to connect to 69.163.233.85:80: Err http://us.archive.ubuntu.com precise-security Release.gpg Unable to connect to 69.163.233.85:80: Ign http://us.archive.ubuntu.com precise Release Ign http://us.archive.ubuntu.com precise-updates Release Ign http://archive.ubuntu.com precise Release Ign http://us.archive.ubuntu.com precise-backports Release Ign http://us.archive.ubuntu.com precise-security Release Ign http://us.archive.ubuntu.com precise/main TranslationIndex Ign http://us.archive.ubuntu.com precise/multiverse TranslationIndex Ign http://us.archive.ubuntu.com precise/restricted TranslationIndex Ign http://us.archive.ubuntu.com precise/universe TranslationIndex Ign http://archive.ubuntu.com precise/main TranslationIndex Ign http://us.archive.ubuntu.com precise-updates/main TranslationIndex Ign http://us.archive.ubuntu.com precise-updates/multiverse TranslationIndex Ign http://us.archive.ubuntu.com precise-updates/restricted TranslationIndex Ign http://us.archive.ubuntu.com precise-updates/universe TranslationIndex Ign http://us.archive.ubuntu.com precise-backports/main TranslationIndex Ign http://us.archive.ubuntu.com precise-backports/multiverse TranslationIndex Ign http://us.archive.ubuntu.com precise-backports/restricted TranslationIndex Ign http://us.archive.ubuntu.com precise-backports/universe TranslationIndex Ign http://us.archive.ubuntu.com precise-security/main TranslationIndex Ign http://us.archive.ubuntu.com precise-security/multiverse TranslationIndex Ign http://us.archive.ubuntu.com precise-security/restricted TranslationIndex Ign http://us.archive.ubuntu.com precise-security/universe TranslationIndex Err http://archive.ubuntu.com precise/main Sources Unable to connect to 69.163.233.85:80: Err http://archive.ubuntu.com precise/main i386 Packages Unable to connect to 69.163.233.85:80: Err http://archive.ubuntu.com precise/main Translation-en_US Unable to connect to 69.163.233.85:80: Err http://archive.ubuntu.com precise/main Translation-en Unable to connect to 69.163.233.85:80: Err http://us.archive.ubuntu.com precise/main Sources Unable to connect to 69.163.233.85:80: Err http://us.archive.ubuntu.com precise/restricted Sources Unable to connect to 69.163.233.85:80: Err http://us.archive.ubuntu.com precise/universe Sources Unable to connect to 69.163.233.85:80: Err http://us.archive.ubuntu.com precise/multiverse Sources Unable to connect to 69.163.233.85:80: Err http://us.archive.ubuntu.com precise/main i386 Packages Unable to connect to 69.163.233.85:80: Err http://us.archive.ubuntu.com precise/restricted i386 Packages Unable to connect to 69.163.233.85:80: Err http://us.archive.ubuntu.com precise/universe i386 Packages Unable to connect to 69.163.233.85:80: Err 
http://us.archive.ubuntu.com precise/multiverse i386 Packages  Unable to connect to 69.163.233.85:80:
Err http://us.archive.ubuntu.com precise-updates/main Sources  Unable to connect to 69.163.233.85:80:
[… the identical "Unable to connect to 69.163.233.85:80" error repeats for every remaining index: the Sources, i386 Packages, Translation-en and Translation-en_US lists of the precise, precise-updates, precise-backports and precise-security suites, across the main, restricted, universe and multiverse components …]
W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise/Release.gpg  Unable to connect to 69.163.233.85:80:
W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-updates/Release.gpg  Unable to connect to 69.163.233.85:80:
W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-backports/Release.gpg  Unable to connect to 69.163.233.85:80:
W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-security/Release.gpg  Unable to connect to 69.163.233.85:80:
W: Failed to fetch http://archive.ubuntu.com/dists/precise/Release.gpg  Unable to connect to 69.163.233.85:80:
[… matching "W: Failed to fetch …" warnings repeat for each of the Sources, Packages and Translation index files listed above …]
E: Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under fairly heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To reproduce the error reliably we needed to put a load on the application and run it for a while before the SQL errors would flare up.

It had been a while since I'd looked at load testing tools, so I spent a bit of time evaluating different tools and frankly didn't find anything that was a good fit. A lot of tools were either a pain to use, didn't have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that morphed into a more generic library, then gained a console front end, and eventually turned into a full-blown Web load testing tool that is now called West Wind WebSurge.

I had gotten seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is just to put an application under heavy enough load to find a scalability problem in code, or simply to push an application to its limits on the hardware it's running on, I shouldn't have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that testing can be done on a regular basis without a lot of hassle. That was the goal when I started to build out my initial custom load tester into a more widely usable tool.

If you're in a hurry and you want to check it out, you can find more information and download links here:

West Wind WebSurge Product Page
Walk-through Video
Download link (zip)
Install from Chocolatey
Source on GitHub

For a more detailed discussion of the whys and hows, and some background, continue reading.

How did I get here?

When I started out on this path, I wasn't planning on building a tool like this myself – but I got frustrated enough looking at what's out there to think I could do better than what's available for the most common, simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking at Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, all of whom needed to be able to run the tests. It had been a while since I had looked at tools and I figured that by now there would be some good solutions out there, but as it turns out I didn't find anything that fit our relatively simple needs without costing an arm and a leg. I spent the better part of a day installing and trying various load testing tools, and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used terminology I couldn't even parse, or were extremely expensive (and I mean in the 'sell your liver' range of expensive). Pick your poison. There are also a number of online solutions for load testing, and those actually looked more promising, but they wouldn't work well for our scenario because the application runs inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey – presumably because the bandwidth required to test over the open Web can be enormous.
When I asked around on Twitter what people were using, I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses from people asking me to let them know what I found – apparently I'm not alone when it comes to finding load testing tools that are effective and easy to use.

As to Visual Studio: the higher-end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with it. First, it's tied to Visual Studio, so it's not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it's complicated enough that there's even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high-end Visual Studio SKUs, and those are mucho dinero ($$$) – just for load testing that's rarely an option.

Some of the tools are ultra extensive and let you run analysis tools on the target servers, which is useful, but in most cases just plain overkill that distracts from what I tend to be ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and 'what if' scenarios as load is ramped up against a site. Yes, it's useful to have Web app instrumentation, but often that's not what you're interested in.

I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool), which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn't work with SSL), but the idea behind it was excellent: create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, as have so many of Microsoft's tools that were probably built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable, and now it simply doesn't work anymore on higher-end hardware.

West Wind WebSurge – Making Load Testing Quick and Easy

So I ended up creating West Wind WebSurge out of rebellious frustration. The goal of WebSurge is to make it drop-dead simple to create load tests. It's super easy to capture sessions either using the built-in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore, which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers.

I've been using this tool for four months now on a regular basis on various projects as a reality check for performance and scalability, and it has worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester: it allows me to quickly enter a URL plus headers and content, test that URL and its results, and easily save one or more of those URLs. A few weeks back I made a walk-through video that goes over most of the features of WebSurge in some detail.
Note that the UI has changed slightly since the video was recorded – most notably, the test results screen was recently updated to a different layout that provides more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I'll talk about a few deeper aspects that may be of interest, while also giving a glance at how WebSurge works.

Session Capturing

As you would expect, WebSurge works with sessions of URLs that are played back under load. You can create session entries manually, by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web browsers, Windows desktop applications that call services, or your own applications, using the built-in capture tool. With this tool you can capture anything HTTP: SSL requests and content from Web pages, AJAX calls, SOAP or REST services – anything that uses the Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge session capture tool.

Alternately, you can use Fiddler itself and export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to install WebSurge, or for your customers to provide an exact playback scenario for a set of URLs that cause a problem. Note that not all applications work with Fiddler's proxy unless you configure a proxy explicitly. For example, .NET Web applications that make HTTP calls usually don't show up in Fiddler by default; for those applications you can explicitly override the proxy settings to capture the requests to service calls.

The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don't want to include in your requests. For example, if your pages include links to CDNs, Google Analytics or social widgets, you typically don't want those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally, you can provide URL filters in the configuration file – filter strings that, if contained in a URL, cause the request to be ignored. This is useful if you don't filter by domain but still want to filter out things like static image, CSS and script files. Often you're not interested in the load characteristics of these static, usually cached resources, as they just add noise to tests and skew the overall URL performance results. In my testing I tend to care only about my dynamic requests.

SSL Captures require Fiddler

Note that in order to capture SSL requests you'll have to install Fiddler's SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There's a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler, and therefore with WebSurge.

Session Storage

A group of URLs entered or captured makes up a session. Sessions can be saved and restored easily, as they use a very simple text format that is simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line.
The headers are standard HTTP headers, except that the full URL – instead of just the domain-relative path – is stored as part of the first HTTP header line for easier parsing. Because it's just text, and uses the same format that Fiddler uses for exports, it's super easy to create sessions by hand or under program control by writing out a simple text file. The raw captured format shown in the capture window is also what's stored to disk and what WebSurge parses. The only 'custom' part of these headers is that the first line contains the full URL instead of the domain-relative path and Host: header; the rest of each entry is plain standard HTTP headers, with each individual URL isolated by a separator line (a mocked-up example follows at the end of this section). Because the format matches what Fiddler produces for exports, it's easy to exchange or view data in either Fiddler or WebSurge. URLs can also be edited interactively, so you can modify the headers easily as well – it's just plain HTTP headers, so anything you can do with HTTP can be added here.

Use it for single URL Testing

Incidentally, I've also found this form an excellent way to test and replay individual URLs for simple non-load-testing purposes. Because you can capture one or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, fire them one at a time or as a session, and see results immediately. It's a handy tool for REST presentations, and I find the simple UI flow easier than using Fiddler natively. Finally, you can save one or more URLs as a session for later retrieval. I'm using this more and more for simple URL checks.

Overriding Cookies and Domains

Speaking of HTTP headers – you can also overwrite cookies as part of the options. One thing that happens with modern Web applications is that session cookies are used for authorization, and these cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can override the cookie, which replaces the cookie for all requests with the value specified. You can capture a valid cookie from a manual HTTP request in your browser and paste it into the cookie field to replace the existing cookie with one that is now valid. Likewise, you can easily replace the domain, so if you captured URLs on west-wind.com and now want to test on localhost, you can do that easily as well. You could even capture on store.west-wind.com and then test on localhost/store, which would also work.

Running Load Tests

Once you've created a session, you can specify the length of the test in seconds and the number of simultaneous threads to run the session on. Sessions run through each of their URLs sequentially by default. One option in the options list is to randomize the URLs so that each thread runs requests in a different order. This avoids bunching up URLs when a test starts, when all threads would otherwise run the same requests simultaneously, which can skew the results of the first few minutes of a test. While sessions run, some progress information is displayed: by default there's a live view of requests in a console-like window, and at the bottom of the window a running summary shows where you're at in the test, how many requests have been processed, and the current requests-per-second count across all requests.
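Going back to the session file format for a moment: based on the description above, a stored session presumably looks roughly like the following sketch. This is illustrative only – the URLs, headers and the exact separator text are invented for this example; the authoritative format is whatever the WebSurge/Fiddler export actually writes:

GET http://localhost/store/products HTTP/1.1
Accept: text/html
User-Agent: WebSurge

------------------------------------------------------------

POST http://localhost/store/cart HTTP/1.1
Content-Type: application/x-www-form-urlencoded

productId=123&quantity=1

Note the full URL on the first line of each entry (rather than a relative path plus a Host: header), with a separator line isolating the two captured requests.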
Note that for tests that run over a thousand requests a second, it's a good idea to turn off the console display. While the console display is nice for confirming that something is happening, and gives you a slight idea of what's going on with actual requests, once a lot of requests are being processed the UI updating adds significant CPU overhead to the application, which may reduce the actual load generated. If you are running 1,000 requests a second there's not much to see anyway, as requests roll by far too fast to read individual lines. On the options panel there is a NoProgressEvents option that disables the console display; the summary display is still updated approximately once a second, so you can always tell that the test is still running.

Test Results

When the test is done you get a simple results display. On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted, so it's easy to see what's breaking in your load test. The report can be printed, or you can open the HTML document in your default Web browser for printing to PDF or saving to disk. A list shows you a partial set of the URLs that were fired, so you can look in detail at the request and response data; it can be filtered by success and failure. Each list is partial only (at the moment), limited to a maximum of 1,000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data. This is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to jump to the page. For non-GET requests, you can find the URL in the session list and use the context menu to test the URL as configured, including any HTTP content data to send; you get to see the full HTTP request and response, as well as a link in the request header to visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally, you can also get a few charts. The most useful one is probably the Requests per Second chart, which can be accessed from the Charts menu or its shortcut. Results can also be exported to JSON, XML and HTML; keep in mind that these files can get very large rather quickly, so exports can take a while to complete.

Command Line Interface

WebSurge runs with a small core load engine, and this engine is plugged into the front-end application I've shown so far. There's also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to ab.exe, for example) or a full session file. By default, WebSurgeCli shows progress every second, displaying the total request count, failures and the requests per second for the entire test; a silent option turns off the progress display and shows only the results. The command line interface can be useful for build integration, allowing you to check for failures or for hitting a specific requests-per-second count. It's also nice as a quick-and-dirty URL test facility, similar to the way you'd use Apache Bench (ab.exe). Unlike ab.exe, though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests, using either manual editing or the WebSurge UI.
Current Status

Currently West Wind WebSurge is still in beta. I'm still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress.

I plan on open-sourcing this product, but it won't be free. There's a free version available that provides a limited number of threads and request URLs; a relatively low-cost license removes the thread and request limitations. Pricing info can be found on the Web site – there's an introductory price of $99 at the moment, which I think is reasonable compared to most other paid solutions out there that are exorbitant by comparison. The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface that later got a lot more complex – yeah, that never happens, right? Unless there's a lot of interest I don't foresee rewriting the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0.

I'm very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it has already provided a ton of value for me and the work I do, which made the development worthwhile. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section.

Microsoft MVPs and Insiders get a free License

If you're a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I'll send you a not-for-resale license. Send any messages to [email protected].

Resources

For more info on WebSurge and to download it to try it out, use the following links:

West Wind WebSurge Home
Download West Wind WebSurge
Getting Started with West Wind WebSurge Video

© Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • Implementing History Support using jQuery for AJAX websites built on ASP.NET AJAX

    - by anil.kasalanati
Problem Statement: Most modern websites use AJAX for page navigation, and gone are the days of complete HTTP redirection, so it is imperative that we support the back and forward buttons in the browser so that end users' navigation is not broken. In this article we discuss the solutions that are already available and the problems with them.

Microsoft History Support: Since .NET 3.5 SP1, Microsoft's ScriptManager supports history for websites using UpdatePanels. This is achieved by enabling the EnableHistory property on the ScriptManager and then handling the "Page_Browser_Navigate" event. Whenever the browser buttons are clicked, the event fires and the application can run code to perform the navigation. The following articles provide good tutorials on how to do that:

http://www.asp.net/aspnet-in-net-35-sp1/videos/introduction-to-aspnet-ajax-history
http://www.codeproject.com/KB/aspnet/ajaxhistorymanagement.aspx

Internally, the Microsoft API creates an IFrame and changes the bookmark portion of the URL. Unfortunately this has a bug: it does not work in IE6 and IE7, which are still major browsers, although it works in IE8 and Firefox. Microsoft has apparently fixed this bug in .NET 4.0; the following blog covers it:

http://weblogs.asp.net/joshclose/archive/2008/11/11/asp-net-ajax-addhistorypoint-bug.aspx

For solutions still running on .NET 3.5 SP1 there is no fix from Microsoft, so there are two ways to solve this:
- Disable the back button.
- Develop a custom solution.

Disable back button

Even though this might look like a very simple thing to do, there are issues, because the browser does not provide an API or an event that can be manipulated from JavaScript. Most technical solutions on the internet offer workarounds like calling history.forward(1), so that even if the user clicks the back button, the destination page redirects the user back to the original page. This is not a good customer experience, and it does not work for an ASP.NET website where there are different views in the same page. Another approach is to detect the window unload events and write code there. There are two events, onbeforeunload and onunload, in which we can show a confirmation message to the user. If we write code in onunload, we can only show a message – it is too late to stop the navigation. If we write it in onbeforeunload, we can stop the navigation if the user clicks Cancel, but this event is triggered for all AJAX calls and hyperlinks where the href is anything other than #. We can do this, but the website has to be checked thoroughly to ensure there are no links whose href is not #; otherwise the user would see a popup saying "you are leaving the website". Believe me, after doing a lot of research on disabling the back button, I found it easier to support history than to disable the button. So I am going to discuss a solution which works using jQuery, with some tweaking.

Custom Solution

jQuery already provides an API to manage the history of an AJAX website – http://plugins.jquery.com/project/history. We need to integrate this with the Microsoft page request manager so that both of them work in tandem. The page state is maintained in a cookie so that it can be passed to the server; I used the jQuery cookie plugin for that – http://plugins.jquery.com/node/1386/release. First, when the page loads, we need to hook up all the elements on the page that should create browser history entries; the following code does that.
jQuery(document).ready(function() {
    // Initialize the history plugin.
    // The callback is called once with the present location.hash.
    jQuery.history.init(pageload);

    // Set the onclick event for all links marked with rel="history"
    jQuery("a[@rel='history']").click(function() {
        // Read the page id from the custom 'page' attribute
        // (consistent with ReWorkHash below)
        var hash = jQuery(this).attr("page");
        hash = hash.replace(/^.*#/, '');
        isAsyncPostBack = true;
        // Moves to a new page; pageload is called at once.
        jQuery.history.load(hash);
        return true;
    });
});

The above script gets all the DOM elements that have the attribute rel="history" and adds the click handler to them. In our test page, the link buttons have the rel attribute set to history:

<asp:LinkButton ID="Previous" rel="history" runat="server" onclick="PreviousOnClick">Previous</asp:LinkButton>
<asp:LinkButton ID="AsyncPostBack" rel="history" runat="server" onclick="NextOnClick">Next</asp:LinkButton>
<asp:LinkButton ID="HistoryLinkButton" runat="server" style="display:none" onclick="HistoryOnClick"></asp:LinkButton>

Note the hidden HistoryLinkButton, which is used to trigger a server-side postback when the browser back or forward buttons are pressed. We need to use display:none and not Visible="false", because ASP.NET AJAX disallows postbacks from controls with Visible="false". In general, the pageload function executes on the client side whenever back or forward is pressed; it is shown below:

function pageload(hash) {
    if (hash) {
        if (!isAsyncPostBack) {
            jQuery.cookie("page", hash);
            __doPostBack("HistoryLinkButton", "");
        }
        isAsyncPostBack = false;
    } else {
        // start page
        jQuery("#load").empty();
    }
}

As you can see, when there is a hash in the URL we do an ASP.NET AJAX postback using the following statement:

__doPostBack("HistoryLinkButton", "");

So whenever the user clicks back or forward, the postback happens through the event statement we provide, and the corresponding event code is invoked in the code-behind. That code must use the pageId present in the URL to change the page content. There is an important thing to note: because the hash is worked out using the pageIds, the click handlers need to be rebound after every AJAX postback, so the following function is plugged in:

function ReWorkHash() {
    jQuery("a[@rel='history']").unbind("click");
    jQuery("a[@rel='history']").click(function() {
        var hash = jQuery(this).attr("page");
        hash = hash.replace(/^.*#/, '');
        jQuery.cookie("page", hash);
        isAsyncPostBack = true;
        // Moves to a new page; pageload is called at once.
        jQuery.history.load(hash);
        return true;
    });
}

This code is executed from the code-behind using ScriptManager.RegisterClientScriptBlock, as shown below:

ScriptManager.RegisterClientScriptBlock(this, typeof(_Default), "Recalculater", "ReWorkHash();", true);

A sample application is available for download at http://techconsulting.vpscustomer.com/Source/HistoryTest.zip and a working sample is available at http://techconsulting.vpscustomer.com/Samples/Default.aspx

    Read the article

  • SQL SERVER – Securing TRUNCATE Permissions in SQL Server

    - by pinaldave
Download the script of this article from here. On December 11, 2010, Vinod Kumar, a Databases & BI technology evangelist from Microsoft Corporation, graced Ahmedabad by spending some time with the community during the Community Tech Days (CTD) event. As he was running through a few demos, Vinod asked the audience one of the most fundamental and common interview questions – "What is the difference between a DELETE and a TRUNCATE?" Ahmedabad SQL Server User Group expert Nakul Vachhrajani has come up with an excellent solution to the same. I must congratulate Nakul for this excellent solution, and as encouragement to User Group members, I am publishing his article here.

Nakul Vachhrajani is a software specialist and systems development professional with Patni Computer Systems Limited. He has functional experience spanning legacy code deprecation, system design, documentation, development, implementation, testing, maintenance and support of complex systems, providing business intelligence solutions, database administration, performance tuning, optimization, product management, release engineering, and process definition and implementation. He has a comprehensive grasp of database administration, development and implementation with MS SQL Server as well as C, C++ and Visual C++/C#. He has about six years of total experience in information technology. Nakul is a member of the Ahmedabad and Gandhinagar SQL Server User Groups and contributes to the community by actively participating in multiple forums and websites like SQLAuthority.com, BeyondRelational.com, SQLServerCentral.com and many others. Please note: the opinions expressed herein are Nakul's own personal opinions and do not represent his employer's view in any way.

All data goes through a series of four distinct operations, identified by the words CREATE, READ, UPDATE and DELETE – or simply, CRUD. Put in Microsoft SQL Server terms, the process goes like this: INSERT, SELECT, UPDATE and DELETE/TRUNCATE. Quite a few interesting responses were received and evaluated live during the session. To summarize them, the most important similarity that came out was that both DELETE and TRUNCATE participate in transactions. The major differences (not all) that came out of the exercise were:

DELETE:
- DELETE supports a WHERE clause
- DELETE removes rows from a table row-by-row
- Because DELETE works row-by-row, it acquires row-level locks
- Depending upon the recovery model of the database, DELETE is a fully-logged operation
- Because DELETE works row-by-row, it can fire off triggers

TRUNCATE:
- TRUNCATE does not support a WHERE clause
- TRUNCATE works by directly removing the individual data pages of a table
- TRUNCATE directly takes a table-level lock (because a lock is acquired, and because TRUNCATE can also participate in a transaction, it has to be a logged operation)
- TRUNCATE is therefore a minimally-logged operation; again, this depends upon the recovery model of the database
- Triggers are not fired when TRUNCATE is used (because individual row deletions are not logged)

Finally, Vinod popped the big homework question that must be critically analyzed: "We know that we can restrict a DELETE operation to a particular user, but how can we restrict the TRUNCATE operation to a particular user?" After returning home and having a nice cup of coffee, my gray cells immediately started to work. Below is the result of my research. As is always said, the devil is in the details.
Upon looking at the Permissions section for the TRUNCATE statement in Books Online, the following jumps right out: "The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause."

Now, what does this mean? Unlike DELETE, one cannot directly assign permissions to a user or set of users allowing or revoking TRUNCATE rights. However, there is a way to circumvent this. It is important to recall that in Microsoft SQL Server, database engine security is built around the concept of a "securable", which is any object like a table, stored procedure, trigger, etc. Rights are assigned to a principal on a securable. (Refer to the securables diagram in SQL Server Books Online.)

SETTING UP THE ENVIRONMENT – (01A_Truncate Table Permissions.sql; script provided at the end of the article)

By the end of this demo, one user will be able to do all the CRUD operations except TRUNCATE, and the other will only be able to execute the TRUNCATE. All you will need for this test is any edition of SQL Server 2008. (With minor changes, these scripts can be made to work with SQL 2005.) We begin by creating the following:
1. A test database
2. Two database roles, with associated logins and users
3. A test table in the test database, with some data added to it (I am using row constructors, which are new to SQL 2008)

Creating the modules that will be used to enforce permissions:
1. We have already created one of the modules that we will be assigning permissions to – the table TruncatePermissionsTest
2. We will now create two stored procedures, one for the DELETE operation and the other for the TRUNCATE operation. Note that for all practical purposes the end result is the same – all data is removed from the table TruncatePermissionsTest

Assigning the permissions

Now comes the most important part of the demonstration – assigning permissions. A permissions matrix can be worked out as shown in the script below. To apply the security rights, we use the GRANT and DENY clauses. That's it! We are now ready for our big test!

THE TEST (01B_Truncate Table Test Queries.sql; script provided at the end of the article)

I will now need two separate SSMS connections, one with the login AllowedTruncate and the other with the login RestrictedTruncate. Running the test is simple; all that's required is to run through the script 01B_Truncate Table Test Queries.sql. What I demonstrate here is the behavior of SQL Server when logged in as the AllowedTruncate user. There are a few other combinations than those highlighted here; I will leave it to the reader to explore the behavior of the RestrictedTruncate user and these additional scenarios, as a form of self-study.
1. Testing SELECT permissions
2. Testing TRUNCATE permissions (remember, "deny by default"?)
3. Trying to circumvent security by attempting to TRUNCATE the table using the stored procedure

Hence, we have now proved that a user can indeed be granted permissions that specifically allow TRUNCATE. I also hope that the above has sparked curiosity towards putting some security around the potentially "destructive" operations of DELETE and TRUNCATE. I would like to wish each and every one of the readers a very happy and secure time with Microsoft SQL Server.

(Please find below the scripts – 01A_Truncate Table Permissions.sql and 01B_Truncate Table Test Queries.sql – that have been used in this demonstration. Please note that these scripts contain purely test-level code only. They must not, at any cost, be used in the reader's production environments.)

01A_Truncate Table Permissions.sql

/*
Developed By  : Nakul Vachhrajani
Functionality : This demo is focused on how to allow only TRUNCATE permissions to a particular user
How to Use    : 1. Run, step-by-step, through the sequence up to Step 08 to create a test database
                2. Switch over to "01B_Truncate Table Test Queries.sql" and execute it step-by-step in two
                   different SSMS windows, one logged in as 'RestrictedTruncate' and the other as 'AllowedTruncate'
                3. Come back to "01A_Truncate Table Permissions.sql"
                4. Execute Step 10 to clean up!
Modifications : December 13, 2010 - NAV - Updated to add a security matrix and improve code readability when applying security
                December 12, 2010 - NAV - Created
*/

-- Step 01: Create a new test database
CREATE DATABASE TruncateTestDB
GO

USE TruncateTestDB
GO

-- Step 02: Add roles and users to demonstrate the security of the TRUNCATE operation
-- 2a. Create the new roles
CREATE ROLE AllowedTruncateRole;
GO
CREATE ROLE RestrictedTruncateRole;
GO

-- 2b. Create new logins
CREATE LOGIN AllowedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON
GO
CREATE LOGIN RestrictedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON
GO

-- 2c. Create new users using the roles and logins created above
CREATE USER TruncateUser FOR LOGIN AllowedTruncate WITH DEFAULT_SCHEMA = dbo
GO
CREATE USER NoTruncateUser FOR LOGIN RestrictedTruncate WITH DEFAULT_SCHEMA = dbo
GO

-- 2d. Add the newly created users to the newly created roles
sp_addrolemember 'AllowedTruncateRole','TruncateUser'
GO
sp_addrolemember 'RestrictedTruncateRole','NoTruncateUser'
GO

-- Step 03: Change over to the test database
USE TruncateTestDB
GO

-- Step 04: Create a test table within the test database
CREATE TABLE TruncatePermissionsTest (Id INT IDENTITY(1,1), Name NVARCHAR(50))
GO

-- Step 05: Populate the required data
INSERT INTO TruncatePermissionsTest VALUES (N'Delhi'), (N'Mumbai'), (N'Ahmedabad')
GO

-- Step 06: Encapsulate the DELETE within another module
CREATE PROCEDURE proc_DeleteMyTable
WITH EXECUTE AS SELF
AS
DELETE FROM TruncateTestDB..TruncatePermissionsTest
GO

-- Step 07: Encapsulate the TRUNCATE within another module
CREATE PROCEDURE proc_TruncateMyTable
WITH EXECUTE AS SELF
AS
TRUNCATE TABLE TruncateTestDB..TruncatePermissionsTest
GO

-- Step 08: Apply security
/* --------------------------------- SECURITY MATRIX ---------------------------------
   Object                   | Permissions                    | Login: RestrictedTruncate | Login: AllowedTruncate
                            |                                | User:  NoTruncateUser     | User:  TruncateUser
   -------------------------+--------------------------------+---------------------------+-----------------------
   TruncatePermissionsTest  | SELECT, INSERT, UPDATE, DELETE | GRANT                     | (default)
   TruncatePermissionsTest  | ALTER                          | DENY                      | (default)
   proc_DeleteMyTable       | EXECUTE                        | GRANT                     | DENY
   proc_TruncateMyTable     | EXECUTE                        | DENY                      | GRANT
   ------------------------------------------------------------------------------------ */

/* Table: TruncatePermissionsTest */
GRANT SELECT, INSERT, UPDATE, DELETE ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser
GO
DENY ALTER ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser
GO

/* Procedure: proc_DeleteMyTable */
GRANT EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO NoTruncateUser
GO
DENY EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO TruncateUser
GO

/* Procedure: proc_TruncateMyTable */
DENY EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO NoTruncateUser
GO
GRANT EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO TruncateUser
GO

-- Step 09: Test
-- Switch over to "01B_Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows:
--   1. one where you have logged in as 'RestrictedTruncate', and
--   2. the other as 'AllowedTruncate'

-- Step 10: Cleanup
sp_droprolemember 'AllowedTruncateRole','TruncateUser'
GO
sp_droprolemember 'RestrictedTruncateRole','NoTruncateUser'
GO
DROP USER TruncateUser
GO
DROP USER NoTruncateUser
GO
DROP LOGIN AllowedTruncate
GO
DROP LOGIN RestrictedTruncate
GO
DROP ROLE AllowedTruncateRole
GO
DROP ROLE RestrictedTruncateRole
GO
USE MASTER
GO
DROP DATABASE TruncateTestDB
GO

01B_Truncate Table Test Queries.sql

/*
Developed By  : Nakul Vachhrajani
Functionality : This demo is focused on how to allow only TRUNCATE permissions to a particular user
How to Use    : 1. Switch over to this from "01A_Truncate Table Permissions.sql", Step #09
                2. Execute this step-by-step in two different SSMS windows:
                   a. one where you have logged in as 'RestrictedTruncate', and
                   b. the other as 'AllowedTruncate'
                3. Return to "01A_Truncate Table Permissions.sql"
                4. Execute Step 10 to clean up!
Modifications : December 12, 2010 - NAV - Created
*/

-- Step 09A: Switch to the test database
USE TruncateTestDB
GO

-- Step 09B: Ensure that we have valid data
SELECT * FROM TruncatePermissionsTest
GO
-- (Expected: the following error will occur if logged in as "AllowedTruncate")
-- Msg 229, Level 14, State 5, Line 1
-- The SELECT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

-- Step 09C: Attempt to TRUNCATE data from the table without using the stored procedure
TRUNCATE TABLE TruncatePermissionsTest
GO
-- (Expected: the following error will occur)
-- Msg 1088, Level 16, State 7, Line 2
-- Cannot find the object "TruncatePermissionsTest" because it does not exist or you do not have permissions.

-- Step 09D: Regenerate test data
INSERT INTO TruncatePermissionsTest VALUES (N'London'), (N'Paris'), (N'Berlin')
GO
-- (Expected: the following error will occur if logged in as "AllowedTruncate")
-- Msg 229, Level 14, State 5, Line 1
-- The INSERT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

-- Step 09E: Attempt to TRUNCATE data from the table using the stored procedure
EXEC proc_TruncateMyTable
GO
-- (Expected: executes successfully as the 'AllowedTruncate' user; errors out as under with 'RestrictedTruncate')
-- Msg 229, Level 14, State 5, Procedure proc_TruncateMyTable, Line 1
-- The EXECUTE permission was denied on the object 'proc_TruncateMyTable', database 'TruncateTestDB', schema 'dbo'.

-- Step 09F: Regenerate test data
INSERT INTO TruncatePermissionsTest VALUES (N'Madrid'), (N'Rome'), (N'Athens')
GO

-- Step 09G: Attempt to DELETE data from the table without using the stored procedure
DELETE FROM TruncatePermissionsTest
GO
-- (Expected: the following error will occur if logged in as "AllowedTruncate")
-- Msg 229, Level 14, State 5, Line 2
-- The DELETE permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

-- Step 09H: Regenerate test data
INSERT INTO TruncatePermissionsTest VALUES (N'Spain'), (N'Italy'), (N'Greece')
GO

-- Step 09I: Attempt to DELETE data from the table using the stored procedure
EXEC proc_DeleteMyTable
GO
-- (Expected: the following error will occur if logged in as "AllowedTruncate")
-- Msg 229, Level 14, State 5, Procedure proc_DeleteMyTable, Line 1
-- The EXECUTE permission was denied on the object 'proc_DeleteMyTable', database 'TruncateTestDB', schema 'dbo'.

-- Step 09J: Close this SSMS window and return to "01A_Truncate Table Permissions.sql"

Thank you, Nakul, for taking up the challenge and proving that the Ahmedabad and Gandhinagar SQL Server User Group has the talent to solve difficult problems.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio.

Core Engine Features

Most of the high-profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features.

Clean web.config Files Are Back!

If you've been using ASP.NET 3.5, you've probably noticed that the web.config file has turned into quite a mess of configuration settings, between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0, so almost all of the new features of .NET 3.5 were essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this:

<?xml version="1.0"?>
<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
  </system.web>
</configuration>

Need I say more?

Configuration Transformation Files to Manage Configurations and Application Packaging

ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules, using the XML Document Transform (XDT) syntax provided by Visual Studio and WebDeploy. The idea is that you create a 'master' configuration file and then create customized versions of it by applying relatively simple search-and-replace, add or remove logic to specific elements and attributes in the original file.
To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section:

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="MyDB"
         connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
  </connectionStrings>
  <system.web>
    <compilation xdt:Transform="RemoveAttributes(debug)" />
    <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace">
      <error statusCode="500" redirect="InternalError.htm"/>
    </customErrors>
  </system.web>
</configuration>

You can see the xdt: transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system, so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can use the Publish <yourWebApplication> option on the Build menu, which allows publishing to disk, via FTP, or to a Web server using Web Deploy. You can also create a deployment package as a .zip file, which can be used by the WebDeploy tool to configure and install the application. You can run the Web Deploy tool manually or use the IIS Manager to install the package on the server or another machine. You can find out more about WebDeploy and packaging here: http://tinyurl.com/2anxcje.

Improved Routing

Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then integrated it into the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, using the routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route any request to an ASP.NET Page without first having to implement an IRouteHandler. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything but Page-derived handlers you still have to implement a custom IRouteHandler. ASP.NET Pages now include a RouteData collection that contains route information. Retrieving route data is now a lot easier: simply use this.RouteData.Values["routeKey"], where routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]).
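To illustrate, route registration typically goes into Application_Start in Global.asax. This is only a sketch – the route name, URL pattern and target page below are made-up examples, not anything mandated by ASP.NET:

using System;
using System.Web.Routing;

public class Global : System.Web.HttpApplication
{
    void Application_Start(object sender, EventArgs e)
    {
        // Maps URLs like "users/ricks" onto a physical Web Forms page.
        // "users", the pattern and "~/UserPage.aspx" are hypothetical names.
        RouteTable.Routes.MapPageRoute("users", "users/{userId}", "~/UserPage.aspx");
    }
}

Inside UserPage.aspx the route value is then available as (string)RouteData.Values["userId"], per the description above.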
The Page class also has a GetRouteUrl() method that you can use to create URLs from route data values rather than hardcoding the URL:

<%= this.GetRouteUrl("users", new { userId = "ricks" }) %>

You can also use the new <%$RouteUrl %> expression syntax to accomplish something similar, which can be easier to embed into Page or MVC View code:

<a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a>

Finally, the Response object includes a new RedirectToRoute() method to build a route URL for redirection without hardcoding the URL:

Response.RedirectToRoute("users", new { userId = "ricks" });

All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements, check out Dan Maharry's blog, which has a couple of nice entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5.

Session State Improvements

Session state is an often used and abused feature in ASP.NET, and version 4.0 introduces a few enhancements geared towards making session state more efficient and minimizing at least some of the ill effects of overuse. The first improvement affects out-of-process session state, which is typically used in web farm environments or for sites that store application-sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using out-of-process session state, ASP.NET serializes all the data in the session state bag into a blob that gets carried over the network and stored either in the State Server or in SQL Server via the session provider. Version 4.0 improves this serialization by offering a compressionEnabled option on the web.config <sessionState> section, which forces the serialized session state to be compressed. Depending on the type of data being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data.

In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of request processing. In prior versions, the only way to specify whether session state is available was by implementing a marker interface (IRequiresSessionState or IReadOnlySessionState) on the HTTP handler implementation. In ASP.NET 4.0, you can turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing, as long as it occurs before the AcquireRequestState pipeline event.
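As a minimal sketch of that programmatic control (the module name and URL prefix are hypothetical), an HttpModule can switch session state off for requests that don't need it, as long as it hooks an event that fires before AcquireRequestState:

using System;
using System.Web;
using System.Web.SessionState;

// Hypothetical module that disables session state for a subset of requests
public class SessionToggleModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // PostMapRequestHandler fires before AcquireRequestState,
        // so it's still legal to change the session state behavior here
        app.PostMapRequestHandler += (sender, e) =>
        {
            var context = ((HttpApplication)sender).Context;
            if (context.Request.Path.StartsWith("/nosession", StringComparison.OrdinalIgnoreCase))
                context.SetSessionStateBehavior(SessionStateBehavior.Disabled);
        };
    }

    public void Dispose() { }
}

The module is registered in web.config like any other HttpModule. The compression switch, by contrast, needs no code at all – it's just the compressionEnabled attribute on the <sessionState> element for the out-of-process modes.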
Output Cache Provider

Output caching in ASP.NET has been a very useful but potentially memory-intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime-related parameters. While this works well enough for many scenarios, it can also quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module, so it becomes possible to plug in custom storage strategies for cached pages. One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET into a generic Windows AppFabric framework in the future, so that the various mechanisms – OutputCache, the non-Page-specific ASP.NET Cache, and possibly even session state – can eventually use the same caching engine for storage of persisted data, both in memory and in out-of-process scenarios.

For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l.
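A provider boils down to overriding four members of that base class. Here's a minimal sketch – the in-memory dictionary is just a stand-in for a real out-of-process store (a database, files, or a distributed cache), and the class name is mine:

using System;
using System.Collections.Concurrent;
using System.Web.Caching;

// Illustration only: a real provider would persist entries out of process
public class DictionaryOutputCacheProvider : OutputCacheProvider
{
    private class Entry { public object Item; public DateTime UtcExpiry; }
    private readonly ConcurrentDictionary<string, Entry> cache =
        new ConcurrentDictionary<string, Entry>();

    public override object Get(string key)
    {
        Entry entry;
        if (!cache.TryGetValue(key, out entry)) return null;
        if (entry.UtcExpiry < DateTime.UtcNow) { Remove(key); return null; }
        return entry.Item;
    }

    public override object Add(string key, object entry, DateTime utcExpiry)
    {
        // Per the OutputCacheProvider contract, Add returns the existing
        // item if one is already cached under this key
        var existing = Get(key);
        if (existing != null) return existing;
        Set(key, entry, utcExpiry);
        return entry;
    }

    public override void Set(string key, object entry, DateTime utcExpiry)
    {
        cache[key] = new Entry { Item = entry, UtcExpiry = utcExpiry };
    }

    public override void Remove(string key)
    {
        Entry removed;
        cache.TryRemove(key, out removed);
    }
}

The provider is then registered and selected via the <outputCache defaultProvider="..."> element in the <caching> section of web.config.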
Response.RedirectPermanent

ASP.NET 4.0 includes features to issue a permanent redirect as an HTTP 301 Moved Permanently response rather than the standard 302 Found response. In pre-4.0 versions you had to create a permanent redirect manually by setting the Status and StatusCode properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() method, which provides permanent redirection to route URLs.

Preloading of Applications

ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle, it can appear very slow to serve data to clients while it is warming up and loading initial resources. So rather than serving these startup requests slowly, ASP.NET 4.0 lets you force the application to initialize itself before it even accepts requests for processing.

This feature requires IIS 7.5 (Windows 7 and Windows Server 2008 R2). You can set up an Application Pool in IIS 7.5 to always be running, which starts the worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting serviceAutoStartEnabled on a particular application, along with an optional serviceAutoStartProvider class that receives a "startup event" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup. The configuration settings need to be made in applicationHost.config:

<sites>
  <site name="WebApplication2" id="1">
    <application path="/"
                 serviceAutoStartEnabled="true"
                 serviceAutoStartProvider="PreWarmup" />
  </site>
</sites>

<serviceAutoStartProviders>
  <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" />
</serviceAutoStartProviders>

Hooking up a warmup provider is optional, so you can omit the provider definition and reference. If you do define one, here's what it looks like:

public class PreWarmupProvider : System.Web.Hosting.IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // initialization for the app: prime caches, open connections, etc.
    }
}

While this code is running, ASP.NET/IIS holds requests back from the pipeline, so the application doesn't start taking requests until the preload completes. The idea is that you can perform any pre-loading of resources and cache values so that even the very first request performs at an optimal level, without lag.

Runtime Performance Improvements

According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use fewer resources. These improvements require no changes in applications and are virtually transparent – you get the benefits simply by updating to ASP.NET 4.0.

Summary

The core feature set changes are minimal, which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has proven itself a solid platform, and I'm actually rather happy to see that most of the effort in this release went into stability, performance, and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful.

A lot of people are still running pure .NET 2.0 applications these days and have stayed off .NET 3.5 for some time now. I think that version 4.0, with its full .NET runtime rev and its assembly and configuration consolidation, will make an attractive platform for developers to update to. And if you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out.

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET.

