Search Results

Search found 62710 results on 2509 pages for 'non root pathed web appli'.

  • touch detection of non-rectangular sprites (cocos2d)

    - by hogni89
    What is the correct way to implement a non-rectangular sprite in Cocos2d? I am working on a jigsaw puzzle, so our sprites have irregular shapes (jigsaw-puzzle pieces). As of now, we have implemented the hit detection this way:

    - (void)selectSpriteForTouch:(CGPoint)touchLocation {
        CCSprite *newSprite = nil;
        // Loop over the array of movable sprites
        for (CCSprite *sprite in movableSprites) {
            // Check if the sprite is hit.
            // TODO: Swap this test for something better.
            if (CGRectContainsPoint(sprite.boundingBox, touchLocation)) {
                newSprite = sprite;
                break;
            }
        }
        if (newSprite != selSprite) {
            // Move along, nothing to see here. Not the problem.
        }
    }

    - (BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
        CGPoint touchLocation = [self convertTouchToNodeSpace:touch];
        [self selectSpriteForTouch:touchLocation];
        return TRUE;
    }

    I know the problem lies in sprite.boundingBox, which only tests the rectangular bounds. Is there a better way of implementing this, or is it a limitation of sprites based on PNGs? If so, how should I proceed? I'm new to iPhone and game development :D

    Read the article

  • Lost all non-linux hard drives

    - by Rick
    I've somehow lost access to all my non-Linux drives, and I'm not sure how. I have two other hard drives set up (a Windows boot drive and an external USB drive I was using under Windows XP). A few days ago I could access both under Linux with no problem. Now I see them listed under /media, but when I click on them nothing is there (the same thing happens when I go there in a terminal window). So the computer (under Linux) sees them, but I can't see anything in them; in Nautilus they show as "(Empty)". Any ideas what happened or how to get access to them again? My guesses as to what might have caused it:
      - I was trying to set up Adobe AIR (unsuccessfully) and had to make and remove a couple of symbolic links. I never got that to work, but maybe the links I made or removed did something?
      - I was also trying to set up Nautilus so that I could run it with sudo permission. I didn't figure out how to do that successfully either, but maybe something I did while trying messed it up? (I think I installed and uninstalled the nautilus-actions configuration tool.)
    Note: I'm working on a relatively new install of 12.04.

    Read the article

  • Technical/Programming/Non-SEO Pros and Cons of WWW or no-WWW?

    - by Ingenutrix
    What are the technical/programming/non-SEO pros and cons of www versus no-www, for domains as well as subdomains? From Jeff Atwood's Twitter at http://twitter.com/codinghorror/status/1637428313 : "sort of regretting the no-www choice because it causes full cookie submission to ALL subdomains. :(" What does this mean? Is there a blog post or article detailing this? What other specific issues, and the reasons behind them, should be considered for www vs no-www? Update: On searching for more information on this topic, I found the following helpful (in addition to Laurence Gonsalves' answer):
      - Dropping the WWW Prefix
      - Impact on search results: Jivlain's and Isaac Lin's comments
      - Use Cookie-free Domains for Components
      - On StackOverflow: Should I default my website to www.foo or not?
      - On StackOverflow: When should one use a ‘www’ subdomain?
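
    The cookie point in Atwood's tweet is the main technical trade-off: a cookie set for the bare domain is sent by browsers to every subdomain, while a cookie set for the www host stays on that host. Below is a minimal PHP sketch of the difference; example.com, the cookie name and the value are placeholder assumptions, not taken from the question.

    <?php
    // Hypothetical illustration of cookie scope for www vs no-www
    // (in practice you would pick one of the two calls, not both).
    $sessionId = 'abc123'; // placeholder value

    // Host-only cookie: with a "www" canonical host, this is sent only to
    // www.example.com, so static.example.com can stay cookie-free.
    setcookie('session', $sessionId, time() + 3600, '/', 'www.example.com');

    // Domain-wide cookie: the no-www choice usually ends up here, and the
    // cookie rides along on requests to example.com AND every subdomain.
    setcookie('session', $sessionId, time() + 3600, '/', '.example.com');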

    Read the article

  • Can Tornado communicate with Cassandra, in Non-blocking asynchronous style?

    - by takaomag
    I'm working on a web project that has to process a large number of client requests, so I am considering using Cassandra and Tornado. Tornado has a built-in client (tornado.httpclient.AsyncHTTPClient) that can make non-blocking HTTP requests. But Cassandra uses the Thrift protocol, and over Thrift, Tornado appears to block while querying Cassandra. Has anyone got experience with this? Please suggest how I should approach it. Or is there an add-on module for this purpose? Thanks.

    Read the article

  • Why ????? is displayed instead of non-english characters?

    - by smhnaji
    I first created a simple HTML page that uses UTF-8 as its character encoding. Then I moved the HTML content into a CodeIgniter view and it was still OK (the non-English characters I had used displayed correctly, as always). I then added a simple piece of dynamic functionality (a contact-us form that emails user feedback to the site admin). Please note that the characters were fine on localhost (a LAMP server running on Ubuntu 12.04 LTS). The strange thing is that when I uploaded the app to the server, all Persian characters are shown as ???? (for example ??? (which means Name) is shown as ???, and so on). I have not even connected to MySQL or any other DBMS. It's the only page on the website (it's really an under-construction page) and nothing else is used in it. I should perhaps also state that I used the session library to thank the user after his feedback was sent to the admins, nothing else. I really have no idea what the problem is. UPDATE: I can now see that the problem occurs only with cPanel; on DirectAdmin everything is normal. Both Chromium and Firefox DO use UTF-8 as the page's character encoding. The URL is http://WEBSITE.COM/dmf/dynamic/ (dmf is the abbreviation of the project name!), so there is nothing non-English in the URL. The page's code is as follows:

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>??? ???????</title>
        <link rel="stylesheet" type="text/css" href="<?php echo base_url('template/css/style.css'); ?>" />
        <!-- 1. jquery library -->
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>
        <!-- 2. flowplayer -->
        <script src="http://releases.flowplayer.org/5.1.1/flowplayer.min.js"></script>
        <!-- 3. skin -->
        <link rel="stylesheet" type="text/css" href="http://releases.flowplayer.org/5.1.1/skin/minimalist.css" />
    </head>
    <body>
        <div id="wrapper">
            <header>
                <h1>??? ???????</h1>
            </header>
            <section id="box-container">
                <?php
                echo form_open('contact', "id='contact-us'");
                echo form_fieldset('???? ?? ??');
                if ($this->session->userdata('mailsent')) {
                    echo '<div>??????? ???? ??? ????? ??</div>';
                    $this->session->sess_destroy();
                }
                echo '<input tabindex="1" id="name-in" value="???" type="text" name="name"/>
                      <input tabindex="2" id="mail-in" value="?????" type="email" name="email"/>
                      <textarea tabindex="3" id="content-in" name="message">???????</textarea>
                      <input tabindex="4" id="submit" type="submit" value="?????" />';
                echo '<div class="clear"></div>';
                echo form_fieldset_close();
                echo form_close();
                ?>
                <div id="sms-comp">
                    <h2>?????? ??????</h2>
                    <p>
                        <span id="comp-title">?? ??? ????</span>
                        ???? ??????? ???? ???
                    </p>
                </div>
                <div id="last-program">
                    <h2>?????? ????? ??????</h2>
                    <div class="flowplayer">
                        <video id="my_video_1" width="212" height="126" poster="<?php echo base_url('template/images/img.jpg'); ?>" controls="controls" src="http://archive.org/download/Pbtestfilemp4videotestmp4/video_test.ogv" type='video/mp4'>
                        </video>
                    </div>
                </div>
                <div class="clear"></div>
            </section>
        </div>
        <footer>
            ????? ? ????? :
            <a href="http://powered-by.com/" target="_blank">????? ???</a>
        </footer>
    </body>
    </html>
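
    When the same markup renders correctly on one host and as question marks on another, a frequent culprit is the server sending a different default charset in the HTTP Content-Type header, which overrides the meta tag. The following is a hedged sketch of general fixes (declaring UTF-8 explicitly from PHP/CodeIgniter), not a confirmed diagnosis of this particular cPanel server.

    <?php
    // Send the charset in the HTTP header itself, before any output, so a
    // server-level default (often ISO-8859-1 on shared hosts) cannot win.
    header('Content-Type: text/html; charset=utf-8');

    // In CodeIgniter the equivalent global setting lives in
    // application/config/config.php:
    //     $config['charset'] = 'UTF-8';

    // Once a database is involved, the connection charset matters too,
    // e.g. with mysqli: $mysqli->set_charset('utf8');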

    Read the article

  • Postgres : Post statement (or insert) asynchronous, non-blocking processing.

    - by Hassan Syed
    I'm wondering whether it is possible, after a collection of rows has been inserted, to initiate an operation that executes asynchronously, is non-blocking, and does not need to report its result back to the originator of the request. I am working with large volumes of events and I can guarantee that the post-insert logic will not fail. I just want a single insert thread in my event sources, and I want that thread to keep flying without blocking and without being responsible for any post-delivery bookkeeping. I could potentially have 100 of these jobs executing concurrently, and each job might operate on 5 tables with anywhere between 200 and 1000 inserts on each of those tables. A hint in the right direction should be enough.
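
    One Postgres-native hint in that direction is LISTEN/NOTIFY: the insert connection fires a notification after its batch and moves on, while a separate worker connection picks the notification up and does the post-insert work. Below is a minimal, hedged sketch using PHP's pgsql extension; the channel name, database name and the placement of the bookkeeping are all invented for illustration.

    <?php
    // Producer side: runs on the single insert thread of an event source.
    $db = pg_connect('dbname=events');
    pg_query($db, 'BEGIN');
    // ... the 200-1000 inserts per table go here ...
    pg_query($db, 'NOTIFY batch_done');  // delivered to listeners on COMMIT
    pg_query($db, 'COMMIT');             // returns immediately; no waiting on the worker

    // Worker side: a separate long-running process, completely decoupled.
    $worker = pg_connect('dbname=events');
    pg_query($worker, 'LISTEN batch_done');
    while (true) {
        $note = pg_get_notify($worker, PGSQL_ASSOC);
        if ($note === false) {   // nothing queued yet
            sleep(1);
            continue;
        }
        // do the post-insert bookkeeping here, off the insert path
    }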

    Read the article

  • Understanding the passing of data/life of a script in web development/CodeIgniter

    - by Pete Jodo
    I hope I worded the title accurately enough, but I typically use Java and don't have much experience in web development/PHP/CodeIgniter. I have a difficult time understanding the life cycle of a script, as I found out while trying to add a certain feature to a website I am developing (as a means of learning). I'll first describe the feature I tried to implement, and then the problem I ran into that made me question my fundamental understanding of how scripts work, since I'm used to typical OOP. OK, so here goes... I have a web page with two basic tasks a user can do: create and delete an entry. What I attempted to implement was a way to time how long it takes a user to complete a certain task. The way I did this was to have a home page listing the tasks a user can choose from (in this case two, create and delete). A user would click a task, which would link to the 'true' home page where the user is then expected to complete the task. My script looks like this:

    <?php
    class Site extends CI_Controller {

        var $task1;
        var $tasks = array(
            "task1" => NULL,
            "date1" => 0,
            "date2" => 0,
            "diff"  => 0);

        function __construct() {
            parent::__construct();
            include 'timetask.php';
            $this->task1 = new TimeTask("create");
        }

        function index() {
            $this->tasks['task1'] = $this->task1->getTask();
            $this->tasks['diff'] = $this->task1->getTimeDiff();
            if ($this->tasks['diff'] == NULL) {
                $this->tasks['diff'] = 0;
            }
            $this->load->view('usability_test', $this->tasks);
        }

        function origIndex() {
            $this->task1->setDate1(new DateTime());
            $this->tasks['date1'] = $this->task1->getDate1()->getTimestamp();
            $data = array();
            if ($q = $this->site_model->get_records()) {
                $data['records'] = $q;
            }
            $this->load->view('options_view', $data);
        }

        function create() {
            $this->task1->setDate2(new DateTime());
            $this->tasks['date2'] = $this->task1->getDate2()->getTimestamp();
            $data = array(
                'author' => $this->input->post('author'),
                'title' => $this->input->post('title'),
                'contents' => $this->input->post('contents')
            );
            $this->site_model->add_record($data);
            $this->index();
        }

    I have only included create to keep it short. Then I also have the TimeTask class, which another StackOverflow user so kindly helped me with:

    <?php
    class TimeTask {

        private $task;

        /** @var DateTime */
        private $date1, $date2;

        function __construct($currTask) {
            $this->task = $currTask;
        }

        public function getTimeDiff() {
            $hasDiff = $this->date1 && $this->date2;
            if ($hasDiff) {
                return $this->date2->getTimestamp() - $this->date1->getTimestamp();
            } else {
                return NULL;
            }
        }

        public function __toString() {
            return (string) $this->getTimeDiff();
        }

        /** @return \DateTime */
        public function getDate1() {
            return $this->date1;
        }

        /** @param \DateTime $date1 */
        public function setDate1(DateTime $date1) {
            $this->date1 = $date1;
        }

        /** @return \DateTime */
        public function getDate2() {
            return $this->date2;
        }

        /** @param \DateTime $date2 */
        public function setDate2(DateTime $date2) {
            $this->date2 = $date2;
        }

        /** @return get current task */
        public function getTask() {
            return $this->task;
        }
    }
    ?>

    I don't think posting the views is necessary for the question, but here is at least how the links are made: ...and... id", $row-title); ? Now there's no error in the code, but it doesn't do what I expect, and the reason, I assume, is that each time a function of the script is called via a new page it is NOT the same instance of the script as before, so any previously created objects are no longer there.
This confuses me and leaves me quite unsure of how to implement this gracefully. My guesses at how to do it are to pass the necessary data through the URL, or to save the data in a database and retrieve it later to compare the times. What would be a recommended way to handle not just this, but anything that needs previously created data? Also, am I correct in thinking that a script is only 'alive' for one page request at a time? Thanks!
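
    The poster's intuition is right: a PHP controller is re-instantiated on every request, so the TimeTask object built in one page load is gone by the next. The usual CodeIgniter answers are exactly the two guesses above, with session state being the lighter-weight one. Below is a hedged sketch of the session approach inside a controller; it assumes the session library is loaded, and the key name 'task_start' is invented.

    <?php
    // In the action that starts the task (e.g. origIndex()):
    $this->session->set_userdata('task_start', time());

    // In the action that completes the task (e.g. create()):
    $start = $this->session->userdata('task_start');
    if ($start) {
        $elapsed = time() - $start;                    // seconds the user took
        $this->session->unset_userdata('task_start');  // reset for the next run
        // store or display $elapsed as needed
    }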

    Read the article

  • How can I handle validation of non-latin script input in PHP?

    - by Matt
    I am trying to adapt a PHP application to handle non-Latin scripts (specifically Japanese, simplified Chinese and Arabic). The app's data validation routines make frequent use of regular expressions to check input, but I am not sure how to adapt the \w character type to other languages without installing additional locales on the system (which I cannot rely on). Previous developers who worked on the app simply added the needed characters to the regexes as the number of supported languages grew (you frequently see "[\wÀÁÂÃÄÅÆÇÈÉ... etc" in the code), but I can't really do this for all the alphabets I need to support now. Does anybody out there have some advice on how to tackle this?
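
    One commonly suggested approach, offered here as a hedged sketch rather than a drop-in fix, is to let PCRE do the work: with the /u modifier the pattern is evaluated in UTF-8 mode, and Unicode property classes such as \p{L} (any letter) and \p{M} (combining marks) cover every script without system locales. The helper name and the exact set of allowed characters below are illustrative assumptions.

    <?php
    // Accepts letters from any script plus spaces, apostrophes and hyphens.
    // Requires the input to actually be UTF-8 encoded.
    function looks_like_a_name($input)
    {
        return (bool) preg_match("/^[\p{L}\p{M}' -]+$/u", $input);
    }

    var_dump(looks_like_a_name('Åsa'));      // Latin with diacritics -> true
    var_dump(looks_like_a_name("O'Brien"));  // plain Latin           -> true
    var_dump(looks_like_a_name('محمد'));      // Arabic                -> true
    var_dump(looks_like_a_name('12345'));    // digits only           -> false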

    Read the article

  • Using NServiceBus behind a custom web service

    - by Michael Stephenson
    In this post I'd like to talk about an architecture scenario we had recently and how we were able to utilise NServiceBus to help us address the problem.

    Scenario

    Cognos is a reporting system used by one of my clients. A while back we developed a web service façade to allow line-of-business applications to access reports from Cognos to support their various functions. The service was intended to provide access to reports which were quick-running or pre-generated, and which could be accessed in real time on demand. One of the key aims of the web service was to provide a simple, generic interface that let applications get any report without needing to worry about the complex .NET SDK for Cognos. The web service also supported multi-hop Kerberos delegation so that report data could be accessed under the context of the end user. This service worked well for a period of time.

    The Problem

    The problem we encountered was that reports were now also required to be available to batch processes. The original design was optimised for low latency so users would enjoy a positive experience; however, when the batch processes started to request 250+ concurrent reports over an extended period of time, you can begin to imagine the sorts of problems that come into play. The key problems this new scenario caused were:
      - Users may be affected, and the latency of on-demand reports was significantly higher
      - The Cognos infrastructure was not scaled sufficiently to cope with these long peaks of load
      - From a cost perspective it just isn't feasible to scale the Cognos infrastructure to handle the load when it is only needed for a couple-of-hours window each night
    We really needed to introduce a second pattern for accessing this service which would support high-throughput scenarios. We also had little control over the batch process in terms of being able to throttle its load. We could, however, make some changes to the way it accessed the reports.

    The Approach

    My idea was to introduce a throttling mechanism between the web service façade and Cognos. This would allow the batch processes to push report requests hard at the web service, which we were confident the web service could handle. The web service would then queue these requests, process them behind the scenes, and make a call back to the batch application to provide the report once it had been accessed. In terms of technology we had some limitations because we were not able to use WCF or IIS7, where MSMQ-activated WCF services could have helped, but we did have MSMQ as an option and I thought NServiceBus could do just the job to help us here.
The flow of how this would work is as follows:
      1. The batch application sends a request for a report to the web service
      2. The web service uses NServiceBus to send the message to a queue
      3. The NServiceBus generic host runs as a Windows service with a message handler which subscribes to these messages
      4. The message handler gets the message and accesses the report from Cognos
      5. The message handler calls back to the original batch application; this is decoupled because the calling application provides a callback URL
      6. The report gets into the batch application and is processed as normal
    This approach looks something like the diagram included in the original post. The key point is that an application wanting to take advantage of the batch-driven reports needs to do the following:
      - Implement our callback contract
      - Make a call to the service, providing a callback URL
      - Provide a correlation ID so it knows how to tie each response back to its request

    What does NServiceBus offer in this solution?

    This scenario is not the typical messaging/service-bus type of solution people implement with NServiceBus, but it did offer the following:
      - Simplified interaction with MSMQ
      - The ability to configure the number of processes working through the queue, so we could find a balance between load on Cognos and the applications' end-to-end processing time
      - Retries and a way to manage failed messages
      - A high-availability setup
    The simple thing is that NServiceBus gave us the platform to build the solution on. We just implemented a message handler which functionally processed a message, and we could rely on NServiceBus to do all of the hard work around managing the queues and all of the lower-level things that would have taken ages to write to any kind of robust level.

    Conclusion

    With this approach we were able to deal with a fairly significant performance issue without too much rework. Hopefully this write-up gives people some insight into how to leverage the excellent NServiceBus framework to help solve integration and high-throughput scenarios.

    Read the article

  • IE9, HTML5 and truck load of other stuff happening around the web

    - by Harish Ranganathan
    First of all, I haven’t been updating this blog as regularly as it used to be.  Primarily, due to the fact was I was visiting a lot of cities talking about SharePoint, Web Matrix, IE9 and few other stuff.  IE9 is my new found love and I simply think we have done great work in improving the browser and browsing experiences for our users. This post would talk about IE, general things happening around the web and few misconceptions around IE (I had earlier written about IE8 and common myths When you think about the way web has transformed, its truly amazing.  Rewind back to late 90s and early 2000s, web was a luxury.  There were lot of desktop applications running around and web applications was starting to pick up.  Primarily reason was not a lot of folks were into web development and the areas of web were confined to HTML and JavaScript.  CSS was around here and there but no one took it so seriously.  XML, XSLT was fast picking up and contributed to decent web development techniques. So as a web developer all we had to worry about was, building good looking websites which worked well with IE6 and occasionally with Safari.  Firefox was  not even in the picture then and so was Chrome.  But with the various arms of W3C consortium and other bodies working actively on stuff like CSS, SVG and XHTML, few more areas came into picture when it comes to browsers supporting standards.  IE6 for sure wasn’t up to the speed and the main issue we were tackling then was privacy and piracy.  We did invest a lot of our efforts to curb piracy and one of the steps into it was that, IE7 the next version of IE would install only on genuine windows machines.  What this means, is that, people who were running pirated windows xp knowingly/unknowingly could not install IE7 and the limitations of IE6 really hurt them.  One more thing of importance is that, if you were running pirated windows, lots of chances that you didn’t get the security updates and thereby were vulnerable to run viruses/trojans on your system. Many of them actually block using IE in the first place and make it difficult to browse.  SP2 came as a big boon but again was there only for genuine windows machines. With Firefox coming as a free install and also heavily pushed by Google then, it was natural that people would try an alternative.  By then, we had started working on IE8 supporting the best standards (note HTML5, CSS 2.1 and other specs were then work in progress.  they are still) Later, Google in their infinite wisdom realized that with Firefox they were going nowhere and they released Chrome.  Now, they heavily push Chrome even for Firefox users, which is natural since its their browser. In the meanwhile, these browsers push their updates as mandatory and therefore have a very short lifecycle to add enhancements and support for stuff like CSS etc., Meanwhile, when IE8 came out, it really was the best standards supported browser and a lot of people saw our efforts in improving our browser. HTML5 is the buzz word in the industry and there is a lot of noise being made by many browsers claiming their support for it.  IE8 doesn’t have much support for HTML5.  But, with IE9 Beta, we have great support for many of HTML5 specifications.  Note that, HTML5 is still work under progress and one of the board of members working on the spec has mentioned that these specs might change and relying on them heavily is dangerous.  But, some of the advances such as video tag, etc., are indeed supported in IE9 Beta.  
IE9 Beta also has full hardware-acceleration support, which other browsers don't have. IE8 had advanced security features such as the SmartScreen filter, InPrivate browsing, anti-phishing and a lot more. IE9 builds on top of these with strong security standards as well as support for HTML5, CSS3, hardware acceleration, SVG and many other advancements in the browser. Read more at http://www.beautyoftheweb.com/#/highlights/html5  To summarize, IE9 Beta is really innovative, and you should try it to see what it provides. You can visit http://www.beautyoftheweb.com/ to install it as well as read more. Cheers!

    Read the article

  • "ERROR:Could not find java.nio.file.Paths" when using Oracle JDK 1.7

    - by Ankit
    I want to try out some features rolled out in Oracle's new JDK 1.7. I followed the post:- Oracle JDK 1.7 but the post doesn't seem to help. I was trying to fetch out the structure for java.nio.file.Paths class file but got the following error:- buffer@ankit:~$ javap java.nio.file.Paths ERROR:Could not find java.nio.file.Paths However i can easily get the information about class structures till JAVA SE 1.6, here is an example:- buffer@ankit:~$ javap java.lang.Object Compiled from "Object.java" public class java.lang.Object{ public java.lang.Object(); public final native java.lang.Class getClass(); public native int hashCode(); public boolean equals(java.lang.Object); protected native java.lang.Object clone() throws java.lang.CloneNotSupportedException; public java.lang.String toString(); public final native void notify(); public final native void notifyAll(); public final native void wait(long) throws java.lang.InterruptedException; public final void wait(long, int) throws java.lang.InterruptedException; public final void wait() throws java.lang.InterruptedException; protected void finalize() throws java.lang.Throwable; static {}; } Running java -version gives the following result:- buffer@ankit:~$ java -version java version "1.7.0_09" Java(TM) SE Runtime Environment (build 1.7.0_09-b05) Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode) SYSTEM INFORMATION buffer@ankit:~$ sudo update-alternatives --config java [sudo] password for buffer: There are 4 choices for the alternative java (providing /usr/bin/java). Selection Path Priority Status ------------------------------------------------------------ 0 /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java 1061 auto mode 1 /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java 1061 manual mode 2 /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java 1051 manual mode 3 /usr/lib/jvm/jdk1.7.0_09/ 1 manual mode * 4 /usr/lib/jvm/jdk1.7.0_09/bin/java 1 manual mode buffer@ankit:~$ sudo update-alternatives --config javac There are 2 choices for the alternative javac (providing /usr/bin/javac). Selection Path Priority Status ------------------------------------------------------------ 0 /usr/lib/jvm/java-6-openjdk-amd64/bin/javac 1061 auto mode 1 /usr/lib/jvm/java-6-openjdk-amd64/bin/javac 1061 manual mode * 2 /usr/lib/jvm/jdk1.7.0_09/bin/javac 1 manual mode buffer@ankit:~$ sudo update-alternatives --config javaws There are 3 choices for the alternative javaws (providing /usr/bin/javaws). 
Selection Path Priority Status ------------------------------------------------------------ 0 /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/javaws 1061 auto mode 1 /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/javaws 1061 manual mode 2 /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/javaws 1060 manual mode * 3 /usr/lib/jvm/jdk1.7.0_09/bin/javaws 1 manual mode The directory structure of /usr/lib/jvm/ is as follows:- buffer@ankit:~$ ls -l /usr/lib/jvm/ total 24 lrwxrwxrwx 1 root root 24 Dec 2 2011 default-java -> java-1.6.0-openjdk-amd64 drwxr-xr-x 4 root root 4096 Nov 8 16:24 java-1.5.0-gcj-4.6 lrwxrwxrwx 1 root root 24 Dec 2 2011 java-1.6.0-openjdk -> java-1.6.0-openjdk-amd64 lrwxrwxrwx 1 root root 20 Oct 25 00:01 java-1.6.0-openjdk-amd64 -> java-6-openjdk-amd64 lrwxrwxrwx 1 root root 20 Oct 25 06:59 java-1.7.0-openjdk-amd64 -> java-7-openjdk-amd64 lrwxrwxrwx 1 root root 24 Dec 2 2011 java-6-openjdk -> java-1.6.0-openjdk-amd64 drwxr-xr-x 7 root root 4096 Nov 8 16:24 java-6-openjdk-amd64 drwxr-xr-x 3 root root 4096 Nov 8 16:24 java-6-openjdk-common drwxr-xr-x 5 root root 4096 Nov 8 05:48 java-7-openjdk-amd64 drwxr-xr-x 3 root root 4096 Nov 8 05:48 java-7-openjdk-common drwxr-xr-x 8 buffer buffer 4096 Sep 25 09:08 jdk1.7.0_09 Any help would be highly appreciated.

    Read the article

  • Desktop Applications Versus Web Applications

    Up until the advent of the internet, programmers really only developed one type of application used by end users: the desktop application. As the name implies, these applications ran strictly on a desktop computer and were limited by the resources available to that computer. Initially, this type of application did not need resources outside the scope of the computer on which it was installed. The problem with this type of application is that if multiple end users need to access the same desktop application, the application must be installed on each end user's computer. In that age of software development, security was not as big a concern as it is today with other types of applications. This is primarily because an end user must have access to the computer where the software is installed in order to access the application. In addition, developers could password-protect the application in case an unauthorized end user was able to gain access to the computer. With the birth of the internet, a second form of application emerged as developers tried to solve the inherent issues of the preexisting desktop application. One of the solutions to overcome some of the shortcomings of desktop applications is the web application. Web applications are hosted on a centralized server, and clients only need network access and a web browser to use the application. Because a web application is installed on a remote server, it removes the need for individual installations of the same application on each end user's computer. The main benefit of an application being hosted on a server is increased accessibility, because nothing has to be installed on a desktop computer for an end user to reach the application. In addition, web applications are much easier to maintain because any change to the application is applied on the server and is inherently applied to every end user of the application. This removes the time needed to install and maintain individual installations of a desktop application. However, with the increased accessibility come additional costs compared to a desktop application, namely the cost and maintenance of the server hosting the application. Typically, after a desktop application is purchased there are no additional recurring fees associated with it. When developing a web-based application there are additional considerations that must be addressed compared to a desktop application. The added benefit of increased accessibility also adds a new failure point when trying to reach an application: an end user must now have network connectivity in order to access it. This is not a concern for desktop applications because their resources are typically bound to the computer on which they run. Since the availability of an application is increased by the client-server model of a web-based application, additional security concerns come into play. As stated before, a desktop application is bound to whether the end user can physically access the computer on which the application is installed. This is not the case with web-based applications, because they can potentially be accessed from anywhere with the proper internet/network connection. Additional security steps are required to ensure the integrity of the application and its data.
Examples of these steps include, but are not limited to, the following (a short sketch of two of them follows below):
      - Restricted/password areas: used when specific information may only be accessed by end users based on a set of accessibility rules.
      - IP restrictions: used when only specific locations need to access an application; applied from within the web server or a firewall.
      - Network restrictions (firewalls): used to contain access to an application within a specific subset of a network.
      - Data encryption: used to transform personally identifiable information into something unreadable so that it can be stored safely for future use.
      - Encrypted protocols (HTTPS): used to prevent others from reading messages sent between applications over a network.
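
    As a concrete illustration of the first and last items in the list above, here is a hedged PHP sketch of a restricted page that also insists on an encrypted connection. The session flag, file names and redirect targets are invented placeholders, not part of the original article.

    <?php
    // Protected page: enforce HTTPS, then require a logged-in session.
    session_start();

    // Encrypted protocols: bounce plain-HTTP requests to the HTTPS equivalent.
    if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
        header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']);
        exit;
    }

    // Restricted/password area: only end users who authenticated earlier
    // (a hypothetical login.php sets this flag) may continue.
    if (empty($_SESSION['authenticated_user'])) {
        header('Location: /login.php');
        exit;
    }

    // ... protected application content from here on ...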

    Read the article

  • Web Safe Area (optimal resolution) for web app design?

    - by M.A.X
    I'm in the process of designing a new web app and I'm wondering for what 'Web Safe Area' should I optimize the app layout and design. By Web Safe Area I mean the actual area available to display the website in the browser (which is influenced by monitor resolution as well as the space taken up by the browser and OS) I did some investigation and thinking on my own but wanted to share this to see what the general opinion is. Here is what I found: Optimal Display Resolution: w3schools web stats seems to be the most referenced source (however they state that these are results from their site and is biased towards tech savvy users) http://www.w3counter.com/globalstats.php (aggregate data from something like 15,000 different sites that use their tracking services) StatCounter Global Stats Display Resolution (Stats are based on aggregate data collected by StatCounter on a sample exceeding 15 billion pageviews per month collected from across the StatCounter network of more than 3 million websites) NetMarketShare Screen Resolutions (marketshare.hitslink.com) (a web analytics consulting firm, they get data from browsers of site visitors to their on-demand network of live stats customers. The data is compiled from approximately 160 million visitors per month) Display Resolution Summary: There is a bit of variation between the above sources but in general as of Jan 2011 looks like 1024x768 is about 20%, while ~85% have a higher resolution of at least 1280x768 (1280x800 is the most common of these with 15-20% of total web, depending on the source; 1280x1024 and 1366x768 follow behind with 9-14% of the share). My guess would be that the higher resolution values will be even more common if we filter on North America, and even higher if we filter on N.American corporate users (unfortunately I couldn't find any free geographically filtered statistics). Another point to note is that the 1024x768 desktop user population is likely lower than the aforementioned 20%, seeing as the iPad (1024x768 native display) is likely propping up those number (the app I'm designing is flash based, Apple mobile devices don't support flash so iPad support isn't a concern). My recommendation would be to optimize around the 1280x768 constraint (*note: 1280x768 is actually a relatively rare resolution, but I think it's a valid constraint range considering that 1366x768 is relatively common and 1280 is the most common horizontal resolution). Browser + OS Constraints: To further add to the constraints we have to subtract the space taken up by the browser (assuming IE, which is the most space consuming) and the OS (assuming WinXP-Win7): Win7 has the biggest taskbar footprint at a height of 40px (XP's and Vista's is 30px) The default IE8 view uses up 25px at the bottom of the screen with the status bar and a further 120px at the top of the screen with the windows title bar and the browser UI (assuming the default 'favorites' toolbar is present, it would instead be 91px without the favorites toolbar). Assuming no scrollbar, we also loose a total of 4px horizontally for the window outline. This means that we are left with 583px of vertical space and 1276px of horizontal. In other words, a Web Safe Area of 1276 x 583 Is this a correct line of thinking? I'm really surprised that I couldn't find this type of investigation anywhere on the web. Lots of websites talk about designing for 1024x768, but that's only half the equation! There is no mention of browser/OS influences on the actual area you have to display the site/app. 
Any help on this would be greatly appreciated! Thanks. EDIT: Another caveat to my line of thinking above is that different browsers take up different numbers of pixels depending on the OS they're running on. For example, under WinXP, IE8 takes up 142px at the top of the screen (instead of the aforementioned 120px for Win7) because the file menu shows up by default on XP, while in Win7 the file menu is hidden by default. So it looks like on WinXP + IE8 the vertical Web Safe Area would be a mere 572px (768px-142-30-24=572)
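
    Purely to make the arithmetic above easy to re-check, here is a trivial sketch that reproduces the Win7 + IE8 numbers the post uses; the pixel values are the post's own assumptions, not measurements of current browsers.

    <?php
    // Web Safe Area arithmetic from the post (Win7 + IE8 defaults).
    $width  = 1280 - 4;             // 2px window outline on each side
    $height = 768 - 120 - 25 - 40;  // IE chrome + status bar + Win7 taskbar
    echo "Web Safe Area: {$width}x{$height}\n";  // prints 1276x583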

    Read the article

  • Web Service Example - Part 3: Asynchronous

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 3 of our Web Service examples.  In this posting we'll take a look at firing the web service asynchronously and then filling in the UI when it completes.  This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data. Getting the sample code: Just click here to download a zip of the entire project.  You can unzip it and load it into JDeveloper and deploy it either to iOS or Android.  Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed.  Note: This is a different workspace than WS-Part2 What's different? In this example, when you click the Search button on the Forecast By Zip option, now it takes you directly to the results page, which is initially blank.  When the web service returns a second or two later the data pops into the UI.  If you go back to the search page and hit Search it will again clear the results and invoke the web service asynchronously.  This isn't really that useful for this particular example but it shows an important technique that can be used for other use cases. How it was done 1)  First we created a new class, ForecastWorker, that implements the Runnable interface.  This is used as our worker class that we create an instance of and pass to a new thread that we create when the Search button is pressed inside the retrieveForecast actionListener handler.  Once the thread is started, the retrieveForecast returns immediately.  2)  The rest of the code that we had previously in the retrieveForecast method has now been moved to the retrieveForecastAsync.  Note that we've also added synchronized specifiers on both these methods so they are protected from re-entrancy. 3)  The run method of the ForecastWorker class then calls the retrieveForecastAsync method.  This executes the web service code that we had previously, but now on a separate thread so the UI is not locked.  If we had already shown data on the screen it would have appeared before this was invoked.  Note that you do not see a loading indicator either because this is on a separate thread and nothing is blocked. 4)  The last but very important aspect of this method is that once we update data in the collections from the data we retrieve from the web service, we call AdfmfJavaUtilities.flushDataChangeEvents().   We need this because as data is updated in the background thread, those data change events are not propagated to the main thread until you explicitly flush them.  As soon as you do this, the UI will get updated if any changes have been queued. Summary of Fundamental Changes In This Application The most fundamental change is that we are invoking and handling our web services in a background thread and updating the UI when the data returns.  This allows an application to provide a better user experience in many cases because data that is already available locally is displayed while lengthy queries or web service calls can be done in the background and the UI updated when they return.  There are many different use cases for background threads and this is just one example of optimizing the user experience and generating a better mobile application. 

    Read the article

  • how to display user name in login name control

    - by user569285
    I have a master page that holds the loginview content that appears on all subsequent pages based on the master page. i have a username control also nested in the loginview to display the name of the user when they are logged in. the code for the loginview from the master page is displayed as follows: <div class="loginView"> <asp:LoginView ID="MasterLoginView" runat="server"> <LoggedInTemplate> Welcome <span class="bold"><asp:LoginName ID="HeadLoginName" runat="server" /> <asp:Label ID="userNameLabel" runat="server" Text="Label"></asp:Label></span>! [ <asp:LoginStatus ID="HeadLoginStatus" runat="server" LogoutAction="Redirect" LogoutText="Log Out" LogoutPageUrl="~/Logout.aspx"/> ] <%--Welcome: <span class="bold"><asp:LoginName ID="MasterLoginName" runat="server" /> </span>!--%> </LoggedInTemplate> <AnonymousTemplate> Welcome: Guest [ <a href="~/Account/Login.aspx" ID="HeadLoginStatus" runat="server">Log In</a> ] </AnonymousTemplate> </asp:LoginView> <%--&nbsp;&nbsp; [&nbsp;<asp:LoginStatus ID="MasterLoginStatus" runat="server" LogoutAction="Redirect" LogoutPageUrl="~/Logout.aspx" />&nbsp;]&nbsp;&nbsp;--%> </div> Since VS2010 launches with a default login page in the accounts folder, i didnt think it necessary to create a separate log in page, so i just used the same log in page. please find the code for the login control below: <asp:Login ID="LoginUser" runat="server" EnableViewState="false" RenderOuterTable="false"> <LayoutTemplate> <span class="failureNotification"> <asp:Literal ID="FailureText" runat="server"></asp:Literal> </span> <asp:ValidationSummary ID="LoginUserValidationSummary" runat="server" CssClass="failureNotification" ValidationGroup="LoginUserValidationGroup"/> <div class="accountInfo"> <fieldset class="login"> <legend style="text-align:left; font-size:1.2em; color:White;">Account Information</legend> <p style="text-align:left; font-size:1.2em; color:White;"> <asp:Label ID="UserNameLabel" runat="server" AssociatedControlID="UserName">User ID:</asp:Label> <asp:TextBox ID="UserName" runat="server" CssClass="textEntry"></asp:TextBox> <asp:RequiredFieldValidator ID="UserNameRequired" runat="server" ControlToValidate="UserName" CssClass="failureNotification" ErrorMessage="User ID is required." ToolTip="User ID field is required." ValidationGroup="LoginUserValidationGroup">*</asp:RequiredFieldValidator> </p> <p style="text-align:left; font-size:1.2em; color:White;"> <asp:Label ID="PasswordLabel" runat="server" AssociatedControlID="Password">Password:</asp:Label> <asp:TextBox ID="Password" runat="server" CssClass="passwordEntry" TextMode="Password"></asp:TextBox> <asp:RequiredFieldValidator ID="PasswordRequired" runat="server" ControlToValidate="Password" CssClass="failureNotification" ErrorMessage="Password is required." ToolTip="Password is required." ValidationGroup="LoginUserValidationGroup">*</asp:RequiredFieldValidator> </p> <p style="text-align:left; font-size:1.2em; color:White;"> <asp:CheckBox ID="RememberMe" runat="server"/> <asp:Label ID="RememberMeLabel" runat="server" AssociatedControlID="RememberMe" CssClass="inline">Keep me logged in</asp:Label> </p> </fieldset> <p class="submitButton"> <asp:Button ID="LoginButton" runat="server" CommandName="Login" Text="Log In" ValidationGroup="LoginUserValidationGroup" onclick="LoginButton_Click"/> </p> </div> </LayoutTemplate> </asp:Login> I then wrote my own code for authentication since i had my own database. 
the following displays the code in the login buttons click event.: public partial class Login : System.Web.UI.Page { //create string objects string userIDStr, pwrdStr; protected void LoginButton_Click(object sender, EventArgs e) { //assign textbox items to string objects userIDStr = LoginUser.UserName.ToString(); pwrdStr = LoginUser.Password.ToString(); //SQL connection string string strConn; strConn = WebConfigurationManager.ConnectionStrings["CMSSQL3ConnectionString"].ConnectionString; SqlConnection Conn = new SqlConnection(strConn); //SqlDataSource CSMDataSource = new SqlDataSource(); // CSMDataSource.ConnectionString = ConfigurationManager.ConnectionStrings["CMSSQL3ConnectionString"].ToString(); //SQL select statement for comparison string sqlUserData; sqlUserData = "SELECT StaffID, StaffPassword, StaffFName, StaffLName, StaffType FROM Staffs"; sqlUserData += " WHERE (StaffID ='" + userIDStr + "')"; sqlUserData += " AND (StaffPassword ='" + pwrdStr + "')"; SqlCommand com = new SqlCommand(sqlUserData, Conn); SqlDataReader rdr; string usrdesc; string lname; string fname; string staffname; try { //string CurrentData; //CurrentData = (string)com.ExecuteScalar(); Conn.Open(); rdr = com.ExecuteReader(); rdr.Read(); usrdesc = (string)rdr["StaffType"]; fname = (string)rdr["StaffFName"]; lname = (string)rdr["StaffLName"]; staffname = lname.ToString() + " " + fname.ToString(); LoginUser.UserName = staffname.ToString(); rdr.Close(); if (usrdesc.ToLower() == "administrator") { Response.Redirect("~/CaseAdmin.aspx", false); } else if (usrdesc.ToLower() == "manager") { Response.Redirect("~/CaseManager.aspx", false); } else if (usrdesc.ToLower() == "investigator") { Response.Redirect("~/Investigator.aspx", false); } else { Response.Redirect("~/Default.aspx", false); } } catch(Exception ex) { string script = "<script>alert('" + ex.Message + "');</script>"; } finally { Conn.Close(); } } My authentication works perfectly and the page gets redirected to the designated destination. However, the login view does not display the users name. i actually cant figure out how to pass the users name that i had picked from the database to the login name control to be displayed. taking a close look i also noticed the logout text that should be displayed after successful log in does not show. that leaves me wondering if the loggedin template control on the masterpage even fires at all or its still the anonymous template control that keeps displaying.? How do i get this to work as expected? Please help....

    Read the article

  • Nagios: NRPE: Unable to read output, Can't find the reason, can you?

    - by Itai Ganot
    I have a Nagios server and a monitored server. On the monitored server: [root@Monitored ~]# netstat -an |grep :5666 tcp 0 0 0.0.0.0:5666 0.0.0.0:* LISTEN [root@Monitored ~]# locate check_kvm /usr/lib64/nagios/plugins/check_kvm [root@Monitored ~]# /usr/lib64/nagios/plugins/check_kvm -H localhost hosts:3 OK:3 WARN:0 CRIT:0 - ab2c7:running alpweb5:running istaweb5:running [root@Monitored ~]# /usr/lib64/nagios/plugins/check_nrpe -H localhost -c check_kvm NRPE: Unable to read output [root@Monitored ~]# /usr/lib64/nagios/plugins/check_nrpe -H localhost NRPE v2.14 [root@Monitored ~]# ps -ef |grep nrpe nagios 21178 1 0 16:11 ? 00:00:00 /usr/sbin/nrpe -c /etc/nagios/nrpe.cfg -d [root@Monitored ~]# On the Nagios server: [root@Nagios ~]# /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.159 -c check_kvm NRPE: Unable to read output [root@Nagios ~]# /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.159 NRPE v2.14 [root@Nagios ~]# When I check another server in the network using the same command it works: [root@Nagios ~]# /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.80 -c check_kvm hosts:4 OK:4 WARN:0 CRIT:0 - karmisoft:running ab2c4:running kidumim1:running travel2gether1:running [root@Nagios ~]# Running the check locally using Nagios account: [root@Monitored ~]# su - nagios -bash-4.1$ /usr/lib64/nagios/plugins/check_kvm hosts:3 OK:3 WARN:0 CRIT:0 - ab2c7:running alpweb5:running istaweb5:running -bash-4.1$ Running the check remotely from the Nagios server using Nagios account: -bash-4.1$ /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.159 -c check_kvm NRPE: Unable to read output -bash-4.1$ /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.159 NRPE v2.14 -bash-4.1$ Running the same check_kvm against a different server in the network using Nagios account: -bash-4.1$ /usr/lib64/nagios/plugins/check_nrpe -H 1.1.1.80 -c check_kvm hosts:4 OK:4 WARN:0 CRIT:0 - karmisoft:running ab2c4:running kidumim1:running travel2gether1:running -bash-4.1$ Permissions: -rwxr-xr-x. 1 root root 4684 2013-10-14 17:14 nrpe.cfg (aka /etc/nagios/nrpe.cfg) drwxrwxr-x. 3 nagios nagios 4096 2013-10-15 03:38 plugins (aka /usr/lib64/nagios/plugins) /etc/sudoers: [root@Monitored ~]# grep -i requiretty /etc/sudoers #Defaults requiretty iptables/selinux: [root@Monitored xinetd.d]# service iptables status iptables: Firewall is not running. [root@Monitored xinetd.d]# service ip6tables status ip6tables: Firewall is not running. [root@Monitored xinetd.d]# grep disable /etc/selinux/config # disabled - No SELinux policy is loaded. SELINUX=disabled [root@Monitored xinetd.d]# The command in /etc/nagios/nrpe.cfg is: [root@Monitored ~]# grep kvm /etc/nagios/nrpe.cfg command[check_kvm]=sudo /usr/lib64/nagios/plugins/check_kvm and the nagios user is added on /etc/sudoers: nagios ALL=(ALL) NOPASSWD:/usr/lib64/nagios/plugins/check_kvm nagios ALL=(ALL) NOPASSWD:/usr/lib64/nagios/plugins/check_nrpe The check_kvm is a shell script, looks like that: #!/bin/sh LIST=$(virsh list --all | sed '1,2d' | sed '/^$/d'| awk '{print $2":"$3}') if [ ! 
"$LIST" ]; then EXITVAL=3 #Status 3 = UNKNOWN (orange) echo "Unknown guests" exit $EXITVAL fi OK=0 WARN=0 CRIT=0 NUM=0 for host in $(echo $LIST) do name=$(echo $host | awk -F: '{print $1}') state=$(echo $host | awk -F: '{print $2}') NUM=$(expr $NUM + 1) case "$state" in running|blocked) OK=$(expr $OK + 1) ;; paused) WARN=$(expr $WARN + 1) ;; shutdown|shut*|crashed) CRIT=$(expr $CRIT + 1) ;; *) CRIT=$(expr $CRIT + 1) ;; esac done if [ "$NUM" -eq "$OK" ]; then EXITVAL=0 #Status 0 = OK (green) fi if [ "$WARN" -gt 0 ]; then EXITVAL=1 #Status 1 = WARNING (yellow) fi if [ "$CRIT" -gt 0 ]; then EXITVAL=2 #Status 2 = CRITICAL (red) fi echo hosts:$NUM OK:$OK WARN:$WARN CRIT:$CRIT - $LIST exit $EXITVAL Edit (10/22/13): Following all that, I am now able to get some response from the script: [root@Monitored ~]# /usr/lib64/nagios/plugins/check_nrpe -H localhost -c check_kvm Unknown guests [root@Monitored ~]# /usr/lib64/nagios/plugins/check_nrpe -H localhost NRPE v2.14 [root@Monitored ~]# /usr/lib64/nagios/plugins/check_kvm hosts:3 OK:3 WARN:0 CRIT:0 - ab2c7:running alpweb5:running istaweb5:running [root@Monitored ~]# su - nagios -bash-4.1$ /usr/lib64/nagios/plugins/check_kvm hosts:3 OK:3 WARN:0 CRIT:0 - ab2c7:running alpweb5:running istaweb5:running -bash-4.1$ /usr/lib64/nagios/plugins/check_nrpe -H localhost -c check_kvm Unknown guests -bash-4.1$ /usr/lib64/nagios/plugins/check_nrpe -H localhost NRPE v2.14 It seems like the problem is some how related to the check_nrpe command or something which is related to the nrpe installation on the server.

    Read the article

  • RedHat 5.5 server does not show per processor memory utilization

    - by Mike S
    I have been searching all over internet but not finding any leads. I have a system with a memory leak that I am trying to troubleshoot. Unfortunately I am not able to see per processor memory utilization. Here are the outputs of TOP and PS commands. Linux SERVER_NAME 2.6.18-194.8.1.el5 #1 SMP Wed Jun 23 10:52:51 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux top - 09:17:13 up 18:43, 3 users, load average: 0.00, 0.00, 0.00 Tasks: 375 total, 1 running, 373 sleeping, 0 stopped, 1 zombie Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 32922828k total, 32776712k used, 146116k free, 267128k buffers Swap: 5245212k total, 0k used, 5245212k free, 32141044k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1 root 15 0 10348 744 620 S 0.0 0.0 0:05.65 init 2 root RT -5 0 0 0 S 0.0 0.0 0:00.05 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/1 7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 8 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/2 9 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/2 10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2 11 root RT -5 0 0 0 S 0.0 0.0 0:00.01 migration/3 12 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/3 13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3 14 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/4 15 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/4 16 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/4 17 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/5 18 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/5 19 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/5 20 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/6 % ps -auxf | sort -nr -k 4 | head -10 Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.7/FAQ xfs 6205 0.0 0.0 23316 3892 ? Ss Aug19 0:00 xfs -droppriv -daemon uuidd 6101 0.0 0.0 60976 224 ? Ss Aug19 0:00 /usr/sbin/uuidd USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND smmsp 6130 0.0 0.0 57900 1784 ? Ss Aug19 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue rpc 5126 0.0 0.0 8052 632 ? Ss Aug19 0:00 portmap root 99 0.0 0.0 0 0 ? S< Aug19 0:00 [events/1] root 98 0.0 0.0 0 0 ? S< Aug19 0:00 [events/0] root 97 0.0 0.0 0 0 ? S< Aug19 0:00 [watchdog/31] root 96 0.0 0.0 0 0 ? SN Aug19 0:00 [ksoftirqd/31] root 95 0.0 0.0 0 0 ? S< Aug19 0:00 [migration/31] Any help with this is appretiate.

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built in DataContractJsonSerializer or the older JavaScript serializer. The DataContractSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types which are quite common these days. The JavaScript Serializer that came before it actually does support non-typed objects for serialization but it can't do anything with untyped data coming in from JavaScript and it's overall model of extensibility was pretty limited (JavaScript Serializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high level and low level components, supports binary JSON, JSON contracts, Xml to JSON conversion, LINQ to JSON and many, many more features than either of the built in serializers. ASP.NET Web API now uses JSON.NET as its default serializer and is now pulled in as a NuGet dependency into Web API projects, which is great. Dynamic JSON Parsing One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is without mapping the JSON captured directly into a .NET type as DataContractSerializer or the JavaScript Serializers do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc), and other times you simply don't have the structures in place or don't want to create them to actually import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discus JToken, JObject and JArray which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types. Why Dynamic JSON? So, why Dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third party services it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes lot of sense to pull just a small amount of data out of large JSON document received from a service, because the third party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to me (or rather the app's) location. The Google API returns a ton of information that my application had no interest in - all I needed was few values out of the data. Dynamic JSON parsing makes it possible to map this data, without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store it on my business entities that needed to receive the data - no need to map the entire Maps API structure. 
Getting JSON.NET The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project with: PM> Install-Package Newtonsoft.Json From the Package Manager Console or by using Manage NuGet Packages in your project References. As mentioned if you're using ASP.NET Web API or MVC 4 JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/ Creating JSON on the fly with JObject and JArray Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken derived JSON.NET objects. The most common JToken derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaProvider and so uses the dynamic  keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JObject for the base object and songs and JArray for the actual collection of songs:[TestMethod] public void JObjectOutputTest() { // strong typed instance var jsonObject = new JObject(); // you can explicitly add values here using class interface jsonObject.Add("Entered", DateTime.Now); // or cast to dynamic to dynamically add/read properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1976; album.Songs = new JArray() as dynamic; dynamic song = new JObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces a complete JSON structure: { "Entered": "2012-08-18T13:26:37.7137482-10:00", "AlbumName": "Dirty Deeds Done Dirt Cheap", "Artist": "AC/DC", "YearReleased": 1976, "Songs": [ { "SongName": "Dirty Deeds Done Dirt Cheap", "SongLength": "4:11" }, { "SongName": "Love at First Feel", "SongLength": "3:10" } ] } Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override with the JsonSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime without compile time restraints of structure for the JSON output. Here I use JObject to create a album 'object' and immediately cast it to dynamic. JObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key value pairs that are exposed as properties through the IDynamicMetaObject interface exposed in JSON.NET's JToken base class. For objects the syntax is very clean - you add simple typed values as properties. For objects and arrays you have to explicitly create new JObject or JArray, cast them to dynamic and then add properties and items to them. 
    Always remember, though, that these values are dynamic - which means no Intellisense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data driven JSON generation from other structures. Below you can see both styles of access next to each other:// strong type instance var jsonObject = new JObject(); // you can explicitly add values here jsonObject.Add("Entered", DateTime.Now); // expando style instance you can just 'use' properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; JContainer (the base class for JObject and JArray) is a collection so you can also iterate over the properties at runtime easily:foreach (var item in jsonObject) { Console.WriteLine(item.Key + " " + item.Value.ToString()); } The functionality of the JSON objects is very similar to .NET's ExpandoObject, and if you've used it before, you're already familiar with how the dynamic interfaces to the JSON objects work. Importing JSON with JObject.Parse() and JArray.Parse() The JValue class supports importing JSON via the Parse() and Load() methods, which can read JSON data from a string or various streams respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JToken objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:public void JValueParsingTest() { var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"", ""Entered"":""2012-03-16T00:03:33.245-10:00""}"; dynamic json = JValue.Parse(jsonString); // values require casting string name = json.Name; string company = json.Company; DateTime entered = json.Entered; Assert.AreEqual(name, "Rick"); Assert.AreEqual(company, "West Wind"); } The JSON string represents an object with three properties, which is parsed into a JObject instance and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken and I have to cast them to their appropriate types first before I can do type comparisons, as in the Asserts at the end of the test method. This is required because of the way that dynamic types work, which can't determine the type based on the method signature of the Assert.AreEqual(object,object) method. I have to either assign the dynamic value to a variable as I did above, or explicitly cast ( (string) json.Name) in the actual method call. The JSON structure can be much more complex than this simple example. 
Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():[TestMethod] public void JsonArrayParsingTest() { var jsonString = @"[ { ""Id"": ""b3ec4e5c"", ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"", ""Artist"": ""AC/DC"", ""YearReleased"": 1976, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"", ""Songs"": [ { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" } ] }, { ""Id"": ""7b919432"", ""AlbumName"": ""End of the Silence"", ""Artist"": ""Henry Rollins Band"", ""YearReleased"": 1992, ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"", ""Songs"": [ { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" }, { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" } ] } ]"; JArray jsonVal = JArray.Parse(jsonString) as JArray; dynamic albums = jsonVal; foreach (dynamic album in albums) { Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")"); foreach (dynamic song in album.Songs) { Console.WriteLine("\t" + song.SongName); } } Console.WriteLine(albums[0].AlbumName); Console.WriteLine(albums[0].Songs[1].SongName); } JObject and JArray in ASP.NET Web API Of course these types also work in ASP.NET Web API controller methods. If you want you can accept parameters using these object or return them back to the server. The following contrived example receives dynamic JSON input, and then creates a new dynamic JSON object and returns it based on data from the first:[HttpPost] public JObject PostAlbumJObject(JObject jAlbum) { // dynamic input from inbound JSON dynamic album = jAlbum; // create a new JSON object to write out dynamic newAlbum = new JObject(); // Create properties on the new instance // with values from the first newAlbum.AlbumName = album.AlbumName + " New"; newAlbum.NewProperty = "something new"; newAlbum.Songs = new JArray(); foreach (dynamic song in album.Songs) { song.SongName = song.SongName + " New"; newAlbum.Songs.Add(song); } return newAlbum; } The raw POST request to the server looks something like this: POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1User-Agent: FiddlerContent-type: application/jsonHost: localhostContent-Length: 88 {AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]} and the output that comes back looks like this: {  "AlbumName": "Dirty Deeds New",  "NewProperty": "something new",  "Songs": [    {      "SongName": "Problem Child New"    },    {      "SongName": "Squealer New"    }  ]} The original values are echoed back with something extra appended to demonstrate that we're working with a new object. 
When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON, so effectively the parameter and result type explicitly determines the input and output format which is nice. Dynamic to Strong Type Mapping You can also map JObject and JArray instances to a strongly typed object, so you can mix dynamic and static typing in the same piece of code. Using the 2 Album jsonString shown earlier, the code below takes an array of albums and picks out only a single album and casts that album to a static Album instance.[TestMethod] public void JsonParseToStrongTypeTest() { JArray albums = JArray.Parse(jsonString) as JArray; // pick out one album JObject jalbum = albums[0] as JObject; // Copy to a static Album instance Album album = jalbum.ToObject<Album>(); Assert.IsNotNull(album); Assert.AreEqual(album.AlbumName,jalbum.Value<string>("AlbumName")); Assert.IsTrue(album.Songs.Count > 0); } This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON and dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static. Strongly typed JSON Parsing With all this talk of dynamic let's not forget that JSON.NET of course also does strongly typed serialization which is drop dead easy. Here's a simple example on how to serialize and deserialize an object with JSON.NET:[TestMethod] public void StronglyTypedSerializationTest() { // Demonstrate deserialization from a raw string var album = new Album() { AlbumName = "Dirty Deeds Done Dirt Cheap", Artist = "AC/DC", Entered = DateTime.Now, YearReleased = 1976, Songs = new List<Song>() { new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" }, new Song() { SongName = "Love at First Feel", SongLength = "3:10" } } }; // serialize to string string json2 = JsonConvert.SerializeObject(album,Formatting.Indented); Console.WriteLine(json2); // make sure we can serialize back var album2 = JsonConvert.DeserializeObject<Album>(json2); Assert.IsNotNull(album2); Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap"); Assert.IsTrue(album2.Songs.Count == 2); } JsonConvert is a high level static class that wraps lower level functionality, but you can also use the JsonSerializer class, which allows you to serialize/parse to and from streams. It's a little more work, but gives you a bit more control. The functionality available is easy to discover with Intellisense, and that's good because there's not a lot in the way of documentation that's actually useful. Summary JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing from dynamic parsing to static serialization, to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, and pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go! 
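    Rounding out the JsonSerializer mention above, here is a minimal stream-based sketch. The Album type is the same strongly typed class used in the earlier examples, and the file path is made up:

        using System.IO;
        using Newtonsoft.Json;

        public static class StreamSerializationSample
        {
            // Write an album to a file without building up an intermediate string
            public static void WriteAlbum(Album album, string path)
            {
                var serializer = new JsonSerializer { Formatting = Formatting.Indented };

                using (var writer = new StreamWriter(path))
                using (var jsonWriter = new JsonTextWriter(writer))
                {
                    serializer.Serialize(jsonWriter, album);
                }
            }

            // Read it back, again streaming rather than loading a string first
            public static Album ReadAlbum(string path)
            {
                var serializer = new JsonSerializer();

                using (var reader = new StreamReader(path))
                using (var jsonReader = new JsonTextReader(reader))
                {
                    return serializer.Deserialize<Album>(jsonReader);
                }
            }
        }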
    Resources: Sample Test Project; http://json.codeplex.com/. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, Web API, AJAX

    Read the article

  • Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    - by ScottGu
    Today we released VS 2013 and .NET 4.5.1. These releases include a ton of great improvements, including some fantastic enhancements to ASP.NET and the Entity Framework.  You can download and start using them now. Below are details on a few of the great ASP.NET, Web Development, and Entity Framework improvements you can take advantage of with this release.  Please visit http://www.asp.net/vnext for additional release notes, documentation, and tutorials. One ASP.NET With the release of Visual Studio 2013, we have taken a step towards unifying the experience of using the different ASP.NET sub-frameworks (Web Forms, MVC, Web API, SignalR, etc), and you can now easily mix and match the different ASP.NET technologies you want to use within a single application. When you do a File-New Project with VS 2013 you’ll now see a single ASP.NET Project option: Selecting this project will bring up an additional dialog that allows you to start with a base project template, and then optionally add/remove the technologies you want to use in it.  For example, you could start with a Web Forms template and add Web API or Web Forms support to it, or create an MVC project and also enable Web Forms pages within it: This makes it easy for you to use any ASP.NET technology you want within your apps, and take advantage of any feature across the entire ASP.NET technology span. Richer Authentication Support The new “One ASP.NET” project dialog also includes a new Change Authentication button that, when pushed, enables you to easily change the authentication approach used by your applications – and makes it much easier to build secure applications that enable SSO from a variety of identity providers.  For example, when you start with the ASP.NET Web Forms or MVC templates you can easily add any of the following authentication options to the application: No Authentication Individual User Accounts (Single Sign-On support with Facebook, Twitter, Google, and Microsoft ID – or Forms Auth with ASP.NET Membership) Organizational Accounts (Single Sign-On support with Windows Azure Active Directory) Windows Authentication (Active Directory in an intranet application) The Windows Azure Active Directory support is particularly cool.  Last month we updated Windows Azure Active Directory so that developers can now easily create any number of Directories using it (for free and deployed within seconds).  It now takes only a few moments to enable single-sign-on support within your ASP.NET applications against these Windows Azure Active Directories.  Simply choose the “Organizational Accounts” radio button within the Change Authentication dialog and enter the name of your Windows Azure Active Directory to do this: This will automatically configure your ASP.NET application to use Windows Azure Active Directory and register the application with it.  Now when you run the app your users can easily and securely sign in using their Active Directory credentials within it – regardless of where the application is hosted on the Internet. For more information about the new process for creating web projects, see Creating ASP.NET Web Projects in Visual Studio 2013. Responsive Project Templates with Bootstrap The new default project templates for ASP.NET Web Forms, MVC, Web API and SPA are built using Bootstrap. Bootstrap is an open source CSS framework that helps you build responsive websites which look great on different form factors such as mobile phones, tablets and desktops. 
    For example in a browser window the home page created by the MVC template looks like the following: When you resize the browser to a narrow window to see how it would look on a phone, you can see how the contents gracefully wrap around and the horizontal top menu turns into an icon: When you click the menu icon above it expands into a vertical menu – which enables a good navigation experience for small screen real-estate devices: We think Bootstrap will enable developers to build web applications that work even better on phones, tablets and other mobile devices – and enable you to easily build applications that can leverage the rich ecosystem of Bootstrap CSS templates already out there.  You can learn more about Bootstrap here. Visual Studio Web Tooling Improvements Visual Studio 2013 includes a new, much richer, HTML editor for Razor files and HTML files in web applications. The new HTML editor provides a single unified schema based on HTML5. It has automatic brace completion, jQuery UI and AngularJS attribute IntelliSense, attribute IntelliSense Grouping, and other great improvements. For example, typing “ng-“ on an HTML element will show the intellisense for AngularJS: This support for AngularJS, Knockout.js, Handlebars and other SPA technologies in this release of ASP.NET and VS 2013 makes it even easier to build rich client web applications: The screen shot below demonstrates how the HTML editor can also now inspect your page at design-time to determine all of the CSS classes that are available. In this case, the auto-completion list contains classes from Bootstrap’s CSS file. No more guessing at which Bootstrap element names you need to use: Visual Studio 2013 also comes with built-in editing support for both CoffeeScript and LESS. The LESS editor comes with all the cool features from the CSS editor and has specific Intellisense for variables and mixins across all the LESS documents in the @import chain. Browser Link – SignalR channel between browser and Visual Studio The new Browser Link feature in VS 2013 lets you run your app within multiple browsers on your dev machine, connect them to Visual Studio, and simultaneously refresh all of them just by clicking a button in the toolbar. You can connect multiple browsers (including IE, Firefox, Chrome) to your development site, including mobile emulators, and click refresh to refresh all the browsers at the same time.  This makes it much easier to develop/test against multiple browsers in parallel. Browser Link also exposes an API to enable developers to write Browser Link extensions.  By enabling developers to take advantage of the Browser Link API, it becomes possible to create very advanced scenarios that cross boundaries between Visual Studio and any browser that’s connected to it. Web Essentials takes advantage of the API to create an integrated experience between Visual Studio and the browser’s developer tools, remote controlling mobile emulators and a lot more. You will see us take advantage of this support even more to enable really cool scenarios going forward. ASP.NET Scaffolding ASP.NET Scaffolding is a new code generation framework for ASP.NET Web applications. It makes it easy to add boilerplate code to your project that interacts with a data model. In previous versions of Visual Studio, scaffolding was limited to ASP.NET MVC projects. With Visual Studio 2013, you can now use scaffolding for any ASP.NET project, including Web Forms. 
    When using scaffolding, we ensure that all required dependencies are automatically installed for you in the project. For example, if you start with an ASP.NET Web Forms project and then use scaffolding to add a Web API Controller, the required NuGet packages and references to enable Web API are added to your project automatically.  To do this, just choose the Add->New Scaffold Item context menu: Support for scaffolding async controllers uses the new async features from Entity Framework 6. ASP.NET Identity ASP.NET Identity is a new membership system for ASP.NET applications that we are introducing with this release. ASP.NET Identity makes it easy to integrate user-specific profile data with application data. ASP.NET Identity also allows you to choose the persistence model for user profiles in your application. You can store the data in a SQL Server database or another data store, including NoSQL data stores such as Windows Azure Storage Tables. ASP.NET Identity also supports Claims-based authentication, where the user’s identity is represented as a set of claims from a trusted issuer. Users can log in by creating an account on the website using username and password, or they can log in using social identity providers (such as Microsoft Account, Twitter, Facebook, Google) or using organizational accounts through Windows Azure Active Directory or Active Directory Federation Services (ADFS). To learn more about how to use ASP.NET Identity visit http://www.asp.net/identity.  ASP.NET Web API 2 ASP.NET Web API 2 has a bunch of great improvements including: Attribute routing ASP.NET Web API now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your Web API routes by annotating your actions and controllers, as in the sketch shown further below. OAuth 2.0 support The Web API and Single Page Application project templates now support authorization using OAuth 2.0. OAuth 2.0 is a framework for authorizing client access to protected resources. It works for a variety of clients including browsers and mobile devices. OData Improvements ASP.NET Web API also now provides support for OData endpoints and enables support for both ATOM and JSON-light formats. With OData you get support for rich query semantics, paging, $metadata, CRUD operations, and custom actions over any data source. Below are some of the specific enhancements in ASP.NET Web API 2 OData: support for $select, $expand, $batch, and $value; improved extensibility; type-less support; and the ability to reuse an existing model. OWIN Integration ASP.NET Web API now fully supports OWIN and can be run on any OWIN capable host. With OWIN integration, you can self-host Web API in your own process alongside other OWIN middleware, such as SignalR. For more information, see Use OWIN to Self-Host ASP.NET Web API. More Web API Improvements In addition to the features above there have been a host of other improvements in ASP.NET Web API, including CORS support, Authentication Filters, Filter Overrides, Improved Unit Testability, and a Portable ASP.NET Web API Client. To learn more go to http://www.asp.net/web-api/ ASP.NET SignalR 2 ASP.NET SignalR is a library for ASP.NET developers that dramatically simplifies the process of adding real-time web functionality to your applications. Real-time web functionality is the ability to have server-side code push content to connected clients instantly as it becomes available. SignalR 2.0 introduces a ton of great improvements. 
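    To make the attribute routing description above concrete, here is a hedged sketch of annotated Web API 2 controllers - the controller, routes and types are invented purely for illustration:

        using System.Collections.Generic;
        using System.Web.Http;

        [RoutePrefix("api/books")]
        public class BooksController : ApiController
        {
            // GET api/books
            [Route("")]
            public IEnumerable<string> GetAll()
            {
                return new[] { "Book 1", "Book 2" };
            }

            // GET api/books/5/reviews - the route constraint keeps bookId numeric
            [Route("{bookId:int}/reviews")]
            public IEnumerable<string> GetReviews(int bookId)
            {
                return new[] { "Review A for book " + bookId };
            }
        }

        // Attribute routes are switched on once during Web API configuration:
        public static class WebApiConfig
        {
            public static void Register(HttpConfiguration config)
            {
                config.MapHttpAttributeRoutes();
            }
        }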
    We’ve added support for Cross-Origin Resource Sharing (CORS) to SignalR 2.0. iOS and Android support for SignalR has also been added using the MonoTouch and MonoDroid components from the Xamarin library (for more information on how to use these additions, see the article Using Xamarin Components from the SignalR wiki). We’ve also added support for the Portable .NET Client in SignalR 2.0 and created a new self-hosting package. This change makes the setup process for SignalR much more consistent between web-hosted and self-hosted SignalR applications. To learn more go to http://www.asp.net/signalr. ASP.NET MVC 5 The ASP.NET MVC project templates integrate seamlessly with the new One ASP.NET experience and enable you to integrate all of the above ASP.NET Web API, SignalR and Identity improvements. You can also customize your MVC project and configure authentication using the One ASP.NET project creation wizard. The MVC templates have also been updated to use ASP.NET Identity and Bootstrap. An introductory tutorial to ASP.NET MVC 5 can be found at Getting Started with ASP.NET MVC 5. This release of ASP.NET MVC also supports several nice new MVC-specific features including: Authentication filters: These filters allow you to specify authentication logic per-action, per-controller or globally for all controllers. Attribute Routing: Attribute Routing allows you to define your routes on actions or controllers. To learn more go to http://www.asp.net/mvc Entity Framework 6 Improvements Visual Studio 2013 ships with Entity Framework 6, which brings a lot of great new features to the data access space: Async and Task<T> Support EF6’s new Async Query and Save support enables you to perform asynchronous data access and take advantage of the Task<T> support introduced in .NET 4.5 within data access scenarios.  This allows you to free up threads that might otherwise be blocked on data access requests, and enables them to be used to process other requests whilst you wait for the database engine to process operations. When the database server responds, the thread will be re-queued within your ASP.NET application and execution will continue.  This enables you to easily write significantly more scalable server code. Here is an example ASP.NET Web API action that makes use of the new EF6 async query methods - a sketch appears below. Interception and Logging Interception and SQL logging allow you to view – or even change – every command that is sent to the database by Entity Framework. This includes a simple, human readable log – which is great for debugging – as well as some lower level building blocks that give you access to the command and results. Here is an example of wiring up the simple log to Debug in the constructor of an MVC controller - also sketched below. Custom Code-First Conventions The new Custom Code-First Conventions enable bulk configuration of a Code First model – reducing the amount of code you need to write and maintain. Conventions are great when your domain classes don’t match the Code First conventions. For example, a convention can configure all properties that are called ‘Key’ to be the primary key of the entity they belong to. This is different from the default Code First convention that expects Id or <type name>Id. Connection Resiliency The new Connection Resiliency feature in EF6 enables you to register an execution strategy to handle – and potentially retry – failed database operations. 
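    To sketch the async query and SQL logging features just described, here is a hedged, self-contained example - the StoreContext and Product types are invented, and the log is wired up in the context's constructor rather than an MVC controller's, which works the same way:

        using System.Collections.Generic;
        using System.Data.Entity;        // EF6 - also brings in the async query extension methods
        using System.Diagnostics;
        using System.Linq;
        using System.Threading.Tasks;
        using System.Web.Http;

        public class ProductsController : ApiController
        {
            // Async query: the request thread is freed while the database does the work
            public async Task<IList<Product>> GetProducts()
            {
                using (var db = new StoreContext())
                {
                    return await db.Products
                                   .Where(p => p.UnitsInStock > 0)
                                   .ToListAsync();
                }
            }
        }

        public class StoreContext : DbContext
        {
            public StoreContext()
            {
                // Simple SQL logging: every command EF sends is written to the debug output
                Database.Log = s => Debug.WriteLine(s);
            }

            public DbSet<Product> Products { get; set; }
        }

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public int UnitsInStock { get; set; }
        }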
    This is especially useful when deploying to cloud environments where dropped connections become more common as you traverse load balancers and distributed networks. EF6 includes a built-in execution strategy for SQL Azure that knows about retryable exception types and has some sensible – but overridable – defaults for the number of retries and time between retries when errors occur. Registering it is simple using the new Code-Based Configuration support - see the sketch below. These are just some of the new features in EF6. You can visit the release notes section of the Entity Framework site for a complete list of new features. Microsoft OWIN Components Open Web Interface for .NET (OWIN) defines an open abstraction between .NET web servers and web applications, and the ASP.NET “Katana” project brings this abstraction to ASP.NET. OWIN decouples the web application from the server, making web applications host-agnostic. For example, you can host an OWIN-based web application in IIS or self-host it in a custom process. For more information about OWIN and Katana, see What's new in OWIN and Katana. Summary Today’s Visual Studio 2013, ASP.NET and Entity Framework release delivers some fantastic new features that streamline your web development lifecycle. These features span from server framework to data access to tooling to client-side HTML development.  They also integrate some great open-source technology and contributions from our developer community. Download and start using them today! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
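    For reference, here is a hedged sketch of the code-based configuration registration mentioned above. EF6 discovers a DbConfiguration-derived class in the same assembly as the context automatically; the class name used here is arbitrary:

        using System.Data.Entity;
        using System.Data.Entity.SqlServer;

        public class StoreDbConfiguration : DbConfiguration
        {
            public StoreDbConfiguration()
            {
                // Retry failed SQL Azure operations using the built-in strategy and its defaults
                SetExecutionStrategy("System.Data.SqlClient",
                                     () => new SqlAzureExecutionStrategy());
            }
        }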

    Read the article

  • Android Client : Web service - what's the correct SOAP_ACTION, METHOD_NAME, NAMESPACE, URL I should

    - by Hubert
    if I want to use the following Web service (help.be is just an example, let's say it does exist): http://www.help.be/webservice/webservice_help.php (it's written in PHP=client's choice, not .NET) with the following WSDL : <?xml version="1.0" encoding="UTF-8"?> <definitions xmlns="http://schemas.xmlsoap.org/wsdl/" name="webservice_help" targetNamespace="http://www.help.be/webservice/webservice_help.php" xmlns:tns="http://www.help.be/webservice/webservice_help.php" xmlns:impl="http://www.help.be/webservice/webservice_help.php" xmlns:xsd1="http://www.help.be/webservice/webservice_help.php" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"> <portType name="webservice_helpPortType"> <operation name="webservice_help"> <input message="tns:Webservice_helpRequest"/> </operation> <operation name="getLocation" parameterOrder="input"> <input message="tns:GetLocationRequest"/> <output message="tns:GetLocationResponse"/> </operation> <operation name="getStationDetail" parameterOrder="input"> <input message="tns:GetStationDetailRequest"/> <output message="tns:GetStationDetailResponse"/> </operation> <operation name="getStationList" parameterOrder="input"> <input message="tns:GetStationListRequest"/> <output message="tns:GetStationListResponse"/> </operation> </portType> <binding name="webservice_helpBinding" type="tns:webservice_helpPortType"> <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/> <operation name="webservice_help"> <soap:operation soapAction="urn:webservice_help#webservice_helpServer#webservice_help"/> <input> <soap:body use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </input> </operation> <operation name="getLocation"> <soap:operation soapAction="urn:webservice_help#webservice_helpServer#getLocation"/> <input> <soap:body parts="input" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </input> <output> <soap:body parts="return" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </output> </operation> <operation name="getStationDetail"> <soap:operation soapAction="urn:webservice_help#webservice_helpServer#getStationDetail"/> <input> <soap:body parts="input" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </input> <output> <soap:body parts="return" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </output> </operation> <operation name="getStationList"> <soap:operation soapAction="urn:webservice_help#webservice_helpServer#getStationList"/> <input> <soap:body parts="input" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </input> <output> <soap:body parts="return" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </output> </operation> </binding> <message name="Webservice_helpRequest"/> <message name="GetLocationRequest"> <part name="input" type="xsd:array"/> </message> <message name="GetLocationResponse"> <part 
name="return" type="xsd:array"/> </message> <message name="GetStationDetailRequest"> <part name="input" type="xsd:array"/> </message> <message name="GetStationDetailResponse"> <part name="return" type="xsd:string"/> </message> <message name="GetStationListRequest"> <part name="input" type="xsd:array"/> </message> <message name="GetStationListResponse"> <part name="return" type="xsd:string"/> </message> <service name="webservice_helpService"> <port name="webservice_helpPort" binding="tns:webservice_helpBinding"> <soap:address location="http://www.help.be/webservice/webservice_help.php"/> </port> </service> </definitions> What is the correct SOAP_ACTION, METHOD_NAME, NAMESPACE, URL I should use below ? I've tried with this : public class Main extends Activity { /** Called when the activity is first created. */ private static final String SOAP_ACTION_GETLOCATION = "getLocation"; private static final String METHOD_NAME_GETLOCATION = "getLocation"; private static final String NAMESPACE = "http://www.help.be/webservice/"; private static final String URL = "http://www.help.be/webservice/webservice_help.php"; TextView tv; @SuppressWarnings("unchecked") @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); tv = (TextView)findViewById(R.id.TextView01); // -------------------------------------------------------------------------------------- SoapObject request_location = new SoapObject(NAMESPACE, METHOD_NAME_GETLOCATION); request_location.addProperty("login", "login"); // -> string required request_location.addProperty("password", "password"); // -> string required request_location.addProperty("serial", "serial"); // -> string required request_location.addProperty("language", "fr"); // -> string required (available « fr,nl,uk,de ») request_location.addProperty("keyword", "Braine"); // -> string required // -------------------------------------------------------------------------------------- SoapSerializationEnvelope soapEnvelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); //soapEnvelope.dotNet = true; // don't forget it for .NET WebServices ! soapEnvelope.setOutputSoapObject(request_location); AndroidHttpTransport aht = new AndroidHttpTransport(URL); try { aht.call(SOAP_ACTION_GETLOCATION, soapEnvelope); // Get the SAOP Envelope back and then extract the body SoapObject resultsRequestSOAP = (SoapObject) soapEnvelope.bodyIn; Vector XXXX = (Vector) resultsRequestSOAP.getProperty("GetLocationResponse"); int vector_size = XXXX.size(); Log.i("Hub", "testat="+vector_size); tv.setText("OK"); } catch(Exception E) { tv.setText("ERROR:" + E.getClass().getName() + ": " + E.getMessage()); Log.i("Hub", "Exception E"); Log.i("Hub", "E.getClass().getName()="+E.getClass().getName()); Log.i("Hub", "E.getMessage()="+E.getMessage()); } // -------------------------------------------------------------------------------------- } } I'm not sure of the SOAP_ACTION, METHOD_NAME, NAMESPACE, URL I have to use? because soapAction is pointing to a URN instead of a traditional URL and it's PHP and not .NET ... also, I'm not sure if I have to use request_location.addProperty("login", "login"); of request_location.addAttribute("login", "login"); ? = <message name="GetLocationRequest"> <part name="input" type="xsd:array"/> What would you say ? Txs for your help. H. 
EDIT : Here is some code working in PHP - I simply want to have the same but in Android/JAVA : <?php ini_set("soap.wsdl_cache_enabled", "0"); // disabling WSDL cache $request['login'] = 'login'; $request['password'] = 'password'; $request['serial'] = 'serial'; $request['language'] = 'fr'; $client= new SoapClient("http://www.test.be/webservice/webservice_test.wsdl"); print_r( $client->__getFunctions()); ?><hr><h1>getLocation</h1> <h2>Input:</h2> <? $request['keyword'] = 'Bruxelles'; print_r($request); ?><h2>Result</h2><? $result = $client->getLocation($request); print_r($result); ?>

    Read the article

  • Do servlet containers prevent web applications from causing each other interference and how do they do it?

    - by chrisbunney
    I know that a servlet container, such as Apache Tomcat, runs in a single instance of the JVM, which means all of its servlets will run in the same process. I also know that the architecture of the servlet container means each web application exists in its own context, which suggests it is isolated from other web applications. As depicted here: Accepting that each web application is isolated, I would expect that you could create 2 copies of an identical web application, change the names and context paths of each (as well as any other relevant configuration), and run them in parallel without one affecting the other. The answers to this question appear to support this view. However, a colleague disagrees based on their experience of attempting just that. They took a web application and tried to run 2 separate instances (with different names etc) in the same servlet container and experienced issues with the 2 instances conflicting (I'm unable to elaborate more as I wasn't involved in that work). Based on this, they argue that since the web applications run in the same process space, they can't be isolated and things such as class attributes would end up being inadvertently shared. This answer appears to suggest the same thing The two views don't seem to be compatible, so I ask you: Do servlet containers prevent web applications deployed to the same container from conflicting with each other? If yes, How do they do this? If no, Why does interference occur? and finally, Under what circumstances could separate web applications conflict and cause each other interference?, perhaps scenarios involving resources on the file system, native code, or database connections?

    Read the article

  • Which non-clustered index should I use?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key. CREATE TABLE [dbo].[Customers]( [CustomerId] [int] IDENTITY(1,1) NOT NULL, [CustomerName] [varchar](100) NOT NULL, [Deleted] [bit] NOT NULL, [Active] [bit] NOT NULL, CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ( [CustomerId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] This is the query I'll be using to see what the execution plan shows: SELECT CustomerName FROM Customers Well, executing this command with no additional non-clustered index, the execution plan shows me: I/O cost = 3.45646 Operator cost = 4.57715 Now I'm trying to see if it's possible to improve performance, so I've created a non-clustered index for this table: 1) First non-clustered index CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers] ( [CustomerId] ASC, [CustomerName] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO Executing the select against the Customers table again, the execution plan shows me: I/O cost = 2.79942 Operator cost = 3.92001 It seems better. Now I've deleted the index I just created, in order to create a new one: 2) Second non-clustered index CREATE NONCLUSTERED INDEX [IX_CustomerIDIncludeCustomerName] ON [dbo].[Customers] ( [CustomerId] ASC ) INCLUDE ( [CustomerName]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO With this new non-clustered index, I've executed the select statement again and the execution plan shows me the same result: I/O cost = 2.79942 Operator cost = 3.92001 So, which non-clustered index should I use? Why are the costs the same in the execution plan for I/O and Operator? Am I doing something wrong, or is this expected? Thank you

    Read the article

  • Optimistic non-locking copy of InnoDB .frm files

    - by jothir
    MySQL Enterprise Backup(MEB) does hot backup of innodb data and log files. Till MEB 3.6.1, the user backs up the only innodb tables in a 3 step process: STEP 1. Take backup using --only-innodb option STEP 2. Temporarily make the table read only by executing “FLUSH TABLES WITH READ LOCK” MEB 3.7.0 has an enhancement to innodb file copying. The .frm files gets copied along with the hot backup done for innodb files. I would like to make the blog a little interactive by explaining the feature as answers: 1. What are these .frm files? The files containing the metadata, such as the table definition, of a MySQL table. For backups, the full set of .frm files are always required along with the backup data, to be able to restore tables that are altered or dropped after the backup. 2. Can the .frm files not be copied by MEB itself? --only-innodb-with-frm is the new option introduced in MEB 3.7.1 to do a copy of .frm files without locking the tables during backup operation itself. This is to reduce the pain of manually copying the .frm files. The option is intended for backups where you can ensure that no ALTER TABLE, CREATE TABLE, DROP TABLE, or other DDL statements modify the .frm files for InnoDB tables during the backup operation. 3. How is data consistency ensured? MEB does validation of the .frm files after copying by comparing with the server directory to see if the timestamps of any of the .frm files is greater than the saved system time (check .frm time).  This change in timestamp of the .frm files will show if a table is altered during the process of backup. The total number of frm files in the server directory is also verified against the copied contents. If the number of .frm files is less compared to server directory, it shows that table/tables have been dropped during the process of backup. If the number of .frm files is more compared to server directory, it shows that new table/tables have been created during backup operation. 4. How does MEB handle data inconsistency? MEB copies the .frm files through several iterations,  does the validation and throws a WARNING if there is any inconsistency found in .frm files at the end of backup operation. This means the user is warned of some DDL operations that had occurred during backup operation, and has to manually copy the .frm files or do a backup again. 5. What is the option and explain its usage? The option introduced is --only-innodb-with-frm which does optimistic copy of .frm files without locking. This can be used when the user wants to backup only innodb tables along with .frm files. The option can take one of the 2 values: all | related. --only-innodb-with-frm=all does copy of all .frm files of all innodb tables. --only-innodb-with-frm=related works in conjunction with --include option.This is to allow partial backup of .frm files corresponding to the tables specified in --include. Let me show the usage with example output: ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll --only-innodb-with-frm=all backup MySQL Enterprise Backup version 3.7.1 [2012/06/05] Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved. INFO: Starting with following command line ... ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll        --only-innodb-with-frm=all backup INFO: Got some server configuration information from running server. IMPORTANT: Please check that mysqlbackup run completes successfully.            At the end of a successful 'backup' run mysqlbackup            prints "mysqlbackup completed OK!". 
--------------------------------------------------------------------                       Server Repository Options: --------------------------------------------------------------------  datadir                          =  /mysql/trydb/  innodb_data_home_dir             =    innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /mysql/trydb/  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 --------------------------------------------------------------------                       Backup Config Options: --------------------------------------------------------------------  datadir                          =  /logs/backupWithFrmAll/datadir  innodb_data_home_dir             =  /logs/backupWithFrmAll/datadir  innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /logs/backupWithFrmAll/datadir  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 mysqlbackup: INFO: Unique generated backup id for this is 13451979804504860 mysqlbackup: INFO: Uses posix_fadvise() for performance optimization. mysqlbackup: INFO: System tablespace file format is Antelope. mysqlbackup: INFO: Found checkpoint at lsn 1656792. mysqlbackup: INFO: Starting log scan from lsn 1656320. 120817 15:36:22 mysqlbackup: INFO: Copying log... 120817 15:36:22 mysqlbackup: INFO: Log copied, lsn 1656792.          We wait 1 second before starting copying the data files... 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/ibdata1 (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table2.ibd (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table3.ibd (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table1.ibd (Antelope file format). mysqlbackup: INFO: Opening backup source directory '/mysql/trydb/' 120817 15:36:23 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb/ mysqlbackup: INFO: Copying innodb data and logs during final stage ... mysqlbackup: INFO: A copied database page was modified at 1656792.          (This is the highest lsn found on page)          Scanned log up to lsn 1656792.          Was able to parse the log up to lsn 1656792.          Maximum page number for a log record 0 mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds 120817 15:36:25 mysqlbackup: INFO: Full backup completed! mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrmAll' -------------------------------------------------------------   Parameters Summary          -------------------------------------------------------------   Start LSN                  : 1656320   End LSN                    : 1656792 ------------------------------------------------------------- mysqlbackup completed OK! bash$ ls /logs/backupWithFrmAll/datadir/innodb1/ table1.frm  table1.ibd  table2.frm  table2.ibd  table3.frm  table3.ibd Here the backup directory contains all the .frm files of all the innodb tables. ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrm --include="innodb1.table3.*" --only-innodb-with-frm=related backup MySQL Enterprise Backup version 3.7.1 [2012/06/05] Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved. INFO: Starting with following command line ... 
./mysqlbackup -uroot --backup-dir=/logs/backup371frm        --include=innodb1.table3.* --only-innodb-with-frm=related backup INFO: Got some server configuration information from running server. IMPORTANT: Please check that mysqlbackup run completes successfully.            At the end of a successful 'backup' run mysqlbackup            prints "mysqlbackup completed OK!". --------------------------------------------------------------------                       Server Repository Options: --------------------------------------------------------------------  datadir                          = /mysql/trydb/  innodb_data_home_dir             =    innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /mysql/trydb  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 --------------------------------------------------------------------                       Backup Config Options: --------------------------------------------------------------------  datadir                          =  /logs/backupWithFrm/datadir  innodb_data_home_dir             =  /logs/backupWithFrm/datadir  innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /logs/backupWithFrm/datadir  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 mysqlbackup: INFO: Unique generated backup id for this is 13451973458118162 mysqlbackup: INFO: Uses posix_fadvise() for performance optimization. mysqlbackup: INFO: The --include option specified: innodb1.table3.* mysqlbackup: INFO: System tablespace file format is Antelope. mysqlbackup: INFO: Found checkpoint at lsn 1656792. mysqlbackup: INFO: Starting log scan from lsn 1656320. 120817 15:25:47 mysqlbackup: INFO: Copying log... 120817 15:25:47 mysqlbackup: INFO: Log copied, lsn 1656792.          We wait 1 second before starting copying the data files... 120817 15:25:48 mysqlbackup: INFO: Copying /mysql/trydbibdata1 (Antelope file format). 120817 15:25:49 mysqlbackup: INFO: Copying /mysql/trydbinnodb1/table3.ibd (Antelope file format). mysqlbackup: INFO: Opening backup source directory '/mysql/trydb' 120817 15:25:49 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb mysqlbackup: INFO: Copying innodb data and logs during final stage ... mysqlbackup: INFO: A copied database page was modified at 1656792.          (This is the highest lsn found on page)          Scanned log up to lsn 1656792.          Was able to parse the log up to lsn 1656792.          Maximum page number for a log record 0 mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds 120817 15:25:51 mysqlbackup: INFO: Full backup completed! mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrm' -------------------------------------------------------------   Parameters Summary          -------------------------------------------------------------   Start LSN                  : 1656320   End LSN                    : 1656792 ------------------------------------------------------------- mysqlbackup completed OK! bash$ ls /logs/backupWithFrm/datadir/innodb1/ table3.frm table3.ibd Thus the backup directory contains only the .frm file matching the innodb table name specified in --include option. In a nutshell, we present our great new option --only-innodb-with-frm which is a true hot InnoDB-only backup with .frm files, but with an additional check, if any DDL happened during the backup. 
    If a DDL has happened, the DBA can decide whether to repeat the backup, or to live with the potential inconsistency. This is the ideal solution for users that have all their "real" data in InnoDB and seldom change their schemas. You may also like: http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/backup-partial-options.html (For reference, STEP 3 of the old manual process described at the start of this article was to manually copy the .frm files of the InnoDB tables to the destination directory where the backup is stored.)

    Read the article

  • Force an ASP.NET 3.5 WebSite to use version 1.0.61025.0 of System.Web.Extensions

    - by Greg
    I just upgraded my Web Site project from 2.0 to 3.5 to take advantage of the TimeZoneInfo class. When I did this, I started getting an ambiguous assembly error (*see below). The problem is, I'm not using ScriptManager, an old version of SyncFusion is. I can't upgrade SyncFusion right now, so I need to tell ASP.NET to use version 1.0.61025.0 of the assembly. I ripped out all of the 3.5 script stuff from the web.config and added bindingRedirects to it, but it didn't work. <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="3.5.0.0" newVersion="1.0.61025.0" /> </dependentAssembly> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="3.5.0.0" newVersion="1.0.61025.0" /> </dependentAssembly> </assemblyBinding> </runtime> The type 'System.Web.UI.ScriptManager' is ambiguous: it could come from assembly 'C:\inetpub\wwwroot\xxx\bin\System.Web.Extensions.DLL' or from assembly 'C:\WINDOWS\assembly\GAC_MSIL\System.Web.Extensions\3.5.0.0__31bf3856ad364e35\System.Web.Extensions.dll'. Please specify the assembly explicitly in the type name.

    Read the article

  • Is A Web App Feasible For A Heavy Use Data Entry System?

    - by Rob
    Looking for opinions on this, we're working on a project that is essentially a data entry system for a production line. Heavy data input by users who normally work in Excel or other thick client data systems. We've been told (as a consequence) that we have to develop this as a thick client using .NET. Our argument was to develop as a web app, as it resolves a lot of issues and would be easier to write and maintain. Their argument against the web is that (supposedly) the web is not ready yet for a heavy duty data entry system, and that the web in a browser does not offer the speed, responsiveness, and fluid experience for the end-user that a thick client can (citing things such as drag and drop, rapid auto-entry and data navigation, etc.) Personally, I think that with good form design and JQuery/AJAX, a web app could do everything a thick client does just as well, and they just don't know what they're talking about. The irony is that a thick client has to go to a lot more effort to manage the deployment and connectivity back to the central data server than a web app would need to do, so in terms of speed I would expect a web app to be faster. What are the thoughts of those out there? Are there any technologies currently in production use that modern data entry systems are being developed as web apps in? Appreciate any feedback. Regards, Rob.

    Read the article
