Search Results

Search found 55992 results on 2240 pages for 'web service'.


  • What Windows service binds a NIC to the network?

    - by Bigbio2002
    I have a server that takes several minutes for the NIC to bind itself to the network upon startup (it has a statically-configured IP). This causes DNS/WINS/Intersite Messaging to fail to start, since they're dependent on a network connection. I'm still attempting to find a root cause (I've done firmware updates and checked for any odd drivers/services, with no luck so far), but in the meantime I want to adjust the load order of services to ensure that the NIC binds before these services attempt to start. The only question is: which service is it? The server is running Server 2008 R2 and has only one NIC installed. (On a side note, two other small but odd problems are occurring with the server. The server had the issue described in KB2298620, which I've fixed. The other problem occurs in Windows Server Backup: no events appear in the upper portion of the window, despite the fact that backups are running in the background, and whenever I attempt to modify the backup schedule it gives me the error "Not enough storage is available to process this command" and appears to fail when, in fact, it actually succeeds. These may be separate issues, but something tells me some of them might share a common root cause.)
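    If the goal is just to serialize startup, one hedged option is to give the dependent services an explicit dependency via sc.exe rather than hunting for the binding service itself. The service names below are placeholders, and note that "depend=" replaces the entire dependency list, so query it first:

    ```
    rem Show the current dependency list first (the DEPENDENCIES field):
    sc qc DNS

    rem Re-specify the existing entries plus the new one. Tcpip and Afd are
    rem shown only as placeholder examples of low-level network services.
    sc config DNS depend= Tcpip/Afd
    ```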

    Read the article

  • How to configure permissions for win2008 task running as Network Service to stop/start a service on different system?

    - by weiji
    Well... the title says it all, actually. We've got a scheduled task set up on a Windows Server 2003 box running as Network Service, and the batch file it runs invokes "sc" to stop and then start a service on another Windows box. However, sc reports: [SC] OpenService FAILED 5: Access is denied. Running the same batch file via Windows Explorer has no issues, and my user account is part of the Administrators group, which I believe is why it works when I try it manually. Is this a permission I enable for Network Service on the first server? Or do I enable permissions for Network Service somehow on the target server? This question (http://serverfault.com/questions/19382/why-sc-query-fails-from-one-machine-but-works-from-another) touches on something similar, but I'm looking for a way to let Network Service access the service via the scheduled task.
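    One hedged direction for the target server: Network Service authenticates over the network as the source machine's computer account (DOMAIN\MACHINE$), so granting that account start/stop rights on the target service, for example with the SubInACL resource-kit tool, may be all that's needed. The server, service and domain names below are placeholders:

    ```
    rem T = start, O = stop, P = pause/continue in SubInACL's permission letters
    subinacl /service \\TARGETSERVER\TargetService /grant=MYDOMAIN\SOURCEHOST$=TOP
    ```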

    Read the article

  • Is there a way to get an ASMX Web Service created in VS 2005 to receive and return JSON?

    - by Ben McCormack
    I'm using .NET 2.0 and Visual Studio 2005 to try to create a web service that can be consumed both as SOAP/XML and as JSON. I read Dave Ward's answer to the question How to return JSON from a 2.0 asmx web service (in addition to reading other articles at Encosia.com), but I can't figure out how I need to set up the code of my asmx file in order to work with JSON using jQuery. Two questions:

    1. How do I enable JSON in my .NET 2.0 ASMX file?
    2. What's a simple jQuery call that could consume the service using JSON?

    Also, I notice that since I'm using .NET 2.0, I'm not able to implement using System.Web.Script.Services.ScriptService. Here's my C# code for the demo ASMX service:

    ```csharp
    using System;
    using System.Web;
    using System.Collections;
    using System.Web.Services;
    using System.Web.Services.Protocols;

    /// <summary>
    /// Summary description for StockQuote
    /// </summary>
    [WebService(Namespace = "http://tempuri.org/")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    public class StockQuote : System.Web.Services.WebService
    {
        public StockQuote()
        {
            //Uncomment the following line if using designed components
            //InitializeComponent();
        }

        [WebMethod]
        public decimal GetStockQuote(string ticker)
        {
            //perform database lookup here
            return 8;
        }

        [WebMethod]
        public string HelloWorld()
        {
            return "Hello World";
        }
    }
    ```

    Here's a snippet of jQuery I found on the internet and tried to modify:

    ```javascript
    $(document).ready(function () {
        $("#btnSubmit").click(function (event) {
            $.ajax({
                type: "POST",
                contentType: "application/json; charset=utf-8",
                url: "http://bmccorm-xp/WebServices/HelloWorld.asmx",
                data: "",
                dataType: "json"
            });
            event.preventDefault();
        });
    });
    ```
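    For reference, a corrected call in the Encosia style might look like the sketch below. It assumes the ASP.NET AJAX Extensions 1.0 are installed so the service class can be decorated with [ScriptService] (without that attribute an ASMX endpoint only ever returns XML), and it is untested against this exact project; the ".d" unwrapping covers the .NET 3.5 response format:

    ```javascript
    $(document).ready(function () {
        $("#btnSubmit").click(function (event) {
            event.preventDefault();
            $.ajax({
                type: "POST",
                // the method name goes on the end of the .asmx URL
                url: "HelloWorld.asmx/HelloWorld",
                data: "{}", // an empty JSON object, not an empty string
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (msg) {
                    // .NET 3.5 wraps the result in ".d"; AJAX 1.0 does not
                    var result = msg && msg.hasOwnProperty("d") ? msg.d : msg;
                    alert(result);
                }
            });
        });
    });
    ```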

    Read the article

  • Spring MVC working with Web Designers

    - by jboyd
    When working with web designers in a Spring MVC and JSP environment, are there any tools or helpful patterns to reduce the pain of going back and forth from HTML to JSP and back? Project requirements dictate that views will change often, and it can be difficult to make changes efficiently because of the amount of Java code that leaks into the view layer. Ideally I'd like to remove almost all Java code from the view, but that does not seem to work with the Spring/JSP philosophy, where the usual way to remove Java code is to replace it with tag libraries, which still have a similar problem. To provide a little clarity to my question, here is some existing code (inherited by me) that shows the kinds of problems I'm likely to face when changing the look of our views:

    ```jsp
    <%-- Begin looping through results --%>
    <%
    List memberList = memberSearchResults.getResults();
    for (int i = start - 1; i < memberList.size() && i < end; i++) {
        Profile profile = (Profile) memberList.get(i);
        long profileId = profile.getProfileId();
        String nickname = profile.getNickname();
        String description = profile.getDescription();
        Image image = profile.getAvatarImage();
        String avatarImageSrc = null;
        int avatarImageWidthNum = 0;
        int avatarImageHeightNum = 0;
        if (null != image) {
            avatarImageSrc = image.getSrc();
            avatarImageWidthNum = image.getWidth();
            avatarImageHeightNum = image.getHeight();
        }
        String bgColor = i % 2 == 1 ? "background-color:#FFF" : "";
    %>
    <div style="float:left;clear:both;padding:5px 0 5px 5px;cursor:pointer;<%= bgColor %>"
         onclick='window.location="profile.sp?profileId=<%= profileId %>"'>
        <div style="float:right;clear:right;padding-left:10px;width:515px;font-size:10px;color:#7e7e7e">
            <h6><%= nickname %></h6>
            <%= description %>
        </div>
        <img style="float:left;clear:left;" alt="Avatar Image"
             src="<%= null != avatarImageSrc && avatarImageSrc.trim().length() > 0 ? avatarImageSrc : "images/defaultUserImage.png" %>"
             <%= avatarImageWidthNum < avatarImageHeightNum ? "height='59'" : "width='92'" %> />
    </div>
    <% } // End loop %>
    ```

    Now, ignoring some of the code smells there, it's obvious that if someone wants to change the look of that DIV it would be necessary to re-insert all the Java/JSP code into the new HTML (the designers don't work in JSP files; they have their own HTML versions of the website). That is tedious, and prone to errors.
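    One pattern that directly addresses the snippet above is to have the controller put a fully prepared list in the model and let JSTL/EL render it, so designers only ever touch markup. This is a hedged sketch: the model attribute name (profiles), the derived avatarSrc property and the CSS classes are assumptions, and the avatar fallback/sizing logic would move into the controller or a view helper:

    ```jsp
    <%-- Assumed: the controller exposes the paged List<Profile> as "profiles",
         and each item pre-computes avatarSrc; the CSS classes stand in for the
         inline styles in the original. --%>
    <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>

    <c:forEach items="${profiles}" var="profile" varStatus="row">
      <div class="result ${row.index % 2 == 1 ? 'alt' : ''}"
           onclick="window.location='profile.sp?profileId=${profile.profileId}'">
        <div class="description">
          <h6><c:out value="${profile.nickname}"/></h6>
          <c:out value="${profile.description}"/>
        </div>
        <img class="avatar" alt="Avatar Image" src="${profile.avatarSrc}"/>
      </div>
    </c:forEach>
    ```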

    Read the article

  • Web Services c#, Sending notifications Clients

    - by Diode
    I want to send a notification (say, a string) to subscribers (the subscribers' IP addresses are in a database on the server side) by calling another method. Whenever I call that method, the output is an error.

    ```csharp
    [WebMethod]
    public string GetGroupPath(string emailAddress, string password, string ipAddress)
    {
        DataSet returnDS = new DataSet();
        string groupName = null;
        string groupPath = null;
        SqlConnection dbconn = new SqlConnection("Server = localhost;Database=server;User ID = admin;Password = password;Trusted_Connection=false;");
        dbconn.Open();
        SqlCommand cmd = new SqlCommand();
        string getGroupName = "select users.groupname from users where emailaddress = " + "'" + emailAddress + "'" + " and " + "users.password = " + "'" + password + "'";
        cmd.CommandText = getGroupName;
        cmd.Connection = dbconn;
        SqlDataReader reader = null;
        try
        {
            reader = cmd.ExecuteReader();
            while (reader.Read())
            {
                groupName = reader["groupname"].ToString();
            }
        }
        catch (Exception)
        {
            groupPath = "Invalid";
        }
        dbconn.Close();
        dbconn.Open();
        if (groupName != null)
        {
            string getPath = "select groups.grouppath from groups where groupname = " + "'" + groupName + "'";
            cmd.CommandText = getPath;
            cmd.Connection = dbconn;
            try
            {
                reader = cmd.ExecuteReader();
                while (reader.Read())
                {
                    groupPath = reader["grouppath"].ToString();
                }
            }
            catch
            {
                groupPath = "Invalid";
            }
        }
        else
            groupPath = "Invalid";
        dbconn.Close();
        if (groupPath != "Invalid")
        {
            dbconn.Open();
            string getPath = "update users set users.ipaddress = " + "'" + ipAddress + "'" + " where users.emailaddress = " + "'" + emailAddress + "'";
            cmd.CommandText = getPath;
            cmd.Connection = dbconn;
            cmd.ExecuteNonQuery();
            dbconn.Close();
        }
        NotifyUsers();
        return groupPath;
    }

    private void NotifyUsers()
    {
        Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        byte[] ipb = Encoding.ASCII.GetBytes("255.255.255.255");
        IPAddress ipAddress = new IPAddress(ipb);
        IPEndPoint endPoint = new IPEndPoint(ipAddress, 15000);
        string notification = "new_update";
        byte[] sendBuffer = Encoding.ASCII.GetBytes(notification);
        sock.SendTo(sendBuffer, endPoint);
        sock.Close();
    }
    ```

    This is basically what has to happen: on the server side I have a listening thread that gets a notification when the server sends data (assume for now that the database contains the client IP addresses). Whenever I call the web method it gives an "invalid IPAddress" error at the line byte[] ipb = Encoding.ASCII.GetBytes("255.255.255.255"); Thank you :) Since this is my first ever post, please be kind enough to give me better feedback too :) Thanks
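    The error comes from NotifyUsers(): Encoding.ASCII.GetBytes("255.255.255.255") produces the 15 ASCII bytes of the string, while the IPAddress(byte[]) constructor only accepts a 4-byte (IPv4) or 16-byte (IPv6) address, so it throws. A minimal corrected sketch, keeping the broadcast address and port from the original:

    ```csharp
    private void NotifyUsers()
    {
        // Use the predefined broadcast address (or IPAddress.Parse) instead of
        // taking the ASCII bytes of the dotted-quad string.
        using (Socket sock = new Socket(AddressFamily.InterNetwork,
                                        SocketType.Dgram, ProtocolType.Udp))
        {
            sock.EnableBroadcast = true; // required before sending to 255.255.255.255
            IPEndPoint endPoint = new IPEndPoint(IPAddress.Broadcast, 15000);
            byte[] sendBuffer = Encoding.ASCII.GetBytes("new_update");
            sock.SendTo(sendBuffer, endPoint);
        }
    }
    ```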

    Read the article

  • Text misaligns in IE

    - by kingrichard2005
    I have an ASP.NET web page I'm working with (I didn't create it myself) with the following HTML code:

    ```html
    <DIV style="POSITION: absolute; TEXT-ALIGN: center; WIDTH: 1400px; TOP: 60px; LEFT: 125px">
      <SPAN style="TEXT-ALIGN: center; FONT-SIZE: xx-large" id=labelInstructions>Some Text: <BR><BR></SPAN>
      <TABLE style="WIDTH: 1200px" border=1 align=center>
        <TBODY>
          <TR>
            <TD><LABEL style="FONT-SIZE: x-large" for=FileUpload1>ENTER Path: </LABEL><INPUT id=FileUpload1 size=70 type=file name=FileUpload1></TD>
          </TR>
          <TR>
            <TD><SPAN style="COLOR: red; FONT-SIZE: medium" id=fileUploadError><BR><BR></SPAN></TD>
          </TR>
          <TR>
            <TD>
              <TABLE style="WIDTH: 1200px" border=1>
                <TBODY>
                  <TR>
                    <TD style="WIDTH: 400px; FONT-SIZE: x-large" vAlign=top align=right>FILE CONTENT INSTRUCTIONS:</TD>
                    <TD style="WIDTH: 850px; FONT-SIZE: x-large" vAlign=top align=left>INSTRUCTION 1<BR>INSTRUCTION 2<BR></TD>
                  </TR>
                  <TR><TD></TD></TR>
                  <TR>
                    <TD style="WIDTH: 400px; FONT-SIZE: x-large" vAlign=top align=right>FILE CONTENT EXAMPLE:</TD>
                    <TD style="WIDTH: 850px; FONT-SIZE: x-large" vAlign=top align=left>EXAMPLE 1<BR>EXAMPLE 2<BR><BR></TD>
                  </TR>
                </TBODY>
              </TABLE>
            </TD>
          </TR>
        </TBODY>
      </TABLE>
    </DIV>
    ```

    When this HTML is displayed in IE, the alignment of the text in the cells of the inner table (the table in the third row of the outer table) is distorted when zooming in and out. The table widths are fixed in pixels instead of percentages, so I don't understand why this is an issue. I want the text in the cells to stay in the same position when zooming. The markup must be manipulated from the code-behind, so I cannot create a separate CSS file. Any help is appreciated. Two screenshots illustrate what I'm talking about: at normal zoom (100%) the tables line up, while at 75% zoom the two table cells at the bottom are slightly offset to the left. UPDATE: Yes, I understand; we will be implementing a new system in the near future. Obviously this is old and very non-standard. It was dropped in my lap when I started working with it, and while we come up with plans for a replacement, this is what I have to deal with in the meantime.
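    One hedged direction, assuming the offset comes from IE rounding leftover space at fractional zoom levels: the declared cell widths (400px + 850px) already exceed the inner table's 1200px, so IE has to reconcile the difference itself. Forcing fixed table layout and making the columns sum exactly to the table width may stabilize the rendering; this is a sketch to try, not a verified fix:

    ```html
    <!-- table-layout: fixed makes IE size columns from the COL widths alone;
         400 + 800 = 1200 leaves no leftover space to redistribute on zoom. -->
    <TABLE style="WIDTH: 1200px; TABLE-LAYOUT: fixed" border=1>
      <COL style="WIDTH: 400px"><COL style="WIDTH: 800px">
      <!-- ...existing TBODY rows unchanged... -->
    </TABLE>
    ```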

    Read the article

  • How do I lock the workstation from a windows service?

    - by Brad Mathews
    Hello, I need to lock the workstation from a Windows service written in VB.NET. I am writing the app on Windows 7, but it needs to work under Vista and XP as well. The User32 API LockWorkStation does not work, since it requires an interactive desktop; I get a return value of 0. I tried calling %windir%\System32\rundll32.exe user32.dll,LockWorkStation from both a Process and from Shell, but still nothing happens. Setting the service to interact with the desktop is a no-go, because I am running the service under the admin account so it can do some other stuff that requires admin rights (like disabling the network), and you can only select the interact-with-desktop option when running under the Local System account. That leads to a secondary question: how do you run another app with admin rights from a service running under the Local System account without bugging the user? I am writing an app to control my kids' computer/internet access (which I plan to open source when done), so I need everything to happen as stealthily as possible. I have a UI that handles settings and status notifications in the taskbar, but that is easy to kill and thus defeats the locking. I could make another hidden Windows Forms app to handle the locking, but that just seems a rather inelegant solution. Better ideas, anyone? Thanks! Brad
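    One commonly cited direction (a sketch, not verified for this exact setup) is to launch LockWorkStation inside the interactive console session via CreateProcessAsUser. The catch: WTSQueryUserToken requires the TCB privilege, i.e. the calling service (or a small helper service) must run as Local System, which may force a two-service split given the admin-account requirement described above. Shown in C# for brevity; it translates directly to VB.NET:

    ```csharp
    using System;
    using System.ComponentModel;
    using System.Runtime.InteropServices;

    static class SessionLocker
    {
        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        struct STARTUPINFO
        {
            public int cb;
            public string lpReserved, lpDesktop, lpTitle;
            public int dwX, dwY, dwXSize, dwYSize, dwXCountChars, dwYCountChars, dwFillAttribute, dwFlags;
            public short wShowWindow, cbReserved2;
            public IntPtr lpReserved2, hStdInput, hStdOutput, hStdError;
        }

        [StructLayout(LayoutKind.Sequential)]
        struct PROCESS_INFORMATION
        {
            public IntPtr hProcess, hThread;
            public int dwProcessId, dwThreadId;
        }

        [DllImport("kernel32.dll")]
        static extern uint WTSGetActiveConsoleSessionId();

        [DllImport("wtsapi32.dll", SetLastError = true)]
        static extern bool WTSQueryUserToken(uint sessionId, out IntPtr token);

        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern bool CreateProcessAsUser(
            IntPtr hToken, string appName, string cmdLine, IntPtr procAttrs,
            IntPtr threadAttrs, bool inheritHandles, uint creationFlags,
            IntPtr environment, string currentDir,
            ref STARTUPINFO startupInfo, out PROCESS_INFORMATION procInfo);

        // Launches LockWorkStation inside the interactive console session.
        // Requires the service to run as Local System (TCB privilege);
        // token/process handle cleanup is omitted for brevity.
        public static void LockConsoleSession()
        {
            IntPtr userToken;
            if (!WTSQueryUserToken(WTSGetActiveConsoleSessionId(), out userToken))
                throw new Win32Exception();

            var si = new STARTUPINFO();
            si.cb = Marshal.SizeOf(si);
            si.lpDesktop = @"winsta0\default";

            PROCESS_INFORMATION pi;
            string cmd = Environment.SystemDirectory + @"\rundll32.exe user32.dll,LockWorkStation";
            if (!CreateProcessAsUser(userToken, null, cmd, IntPtr.Zero, IntPtr.Zero,
                                     false, 0, IntPtr.Zero, null, ref si, out pi))
                throw new Win32Exception();
        }
    }
    ```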

    Read the article

  • Understanding the passing of data/life of a script in web development/CodeIgniter

    - by Pete Jodo
    I hope I worded the title accurately enough, but I typically use Java and don't have much experience in web development/PHP/CodeIgniter. I had a difficult time understanding the life cycle of a script, as I found out while trying to implement a certain feature on a website I am developing (as a means of learning). I'll first describe the feature I tried implementing, then the problem I ran into that made me question my fundamental understanding of how scripts work, since I'm used to typical OOP. Here goes...

    I have a webpage with two basic tasks a user can do: create and delete an entry. I attempted to implement a way to time how long it takes a user to complete a certain task. My approach was a homepage listing the tasks to choose from (in this case two: create and delete); the user clicks a task, which links to the 'true' homepage, where the user is then expected to complete it. My controller looks like this:

    ```php
    <?php
    class Site extends CI_Controller {

        var $task1;
        var $tasks = array(
            "task1" => NULL,
            "date1" => 0,
            "date2" => 0,
            "diff"  => 0);

        function __construct()
        {
            parent::__construct();
            include 'timetask.php';
            $this->task1 = new TimeTask("create");
        }

        function index()
        {
            $this->tasks['task1'] = $this->task1->getTask();
            $this->tasks['diff'] = $this->task1->getTimeDiff();
            if ($this->tasks['diff'] == NULL) {
                $this->tasks['diff'] = 0;
            }
            $this->load->view('usability_test', $this->tasks);
        }

        function origIndex()
        {
            $this->task1->setDate1(new DateTime());
            $this->tasks['date1'] = $this->task1->getDate1()->getTimestamp();
            $data = array();
            if ($q = $this->site_model->get_records()) {
                $data['records'] = $q;
            }
            $this->load->view('options_view', $data);
        }

        function create()
        {
            $this->task1->setDate2(new DateTime());
            $this->tasks['date2'] = $this->task1->getDate2()->getTimestamp();
            $data = array(
                'author'   => $this->input->post('author'),
                'title'    => $this->input->post('title'),
                'contents' => $this->input->post('contents')
            );
            $this->site_model->add_record($data);
            $this->index();
        }
    }
    ```

    (I only included create to keep it short.) Then I also have the TimeTask class, which another Stack Overflow user kindly helped me with:

    ```php
    <?php
    class TimeTask {

        private $task;

        /** @var DateTime */
        private $date1, $date2;

        function __construct($currTask)
        {
            $this->task = $currTask;
        }

        public function getTimeDiff()
        {
            $hasDiff = $this->date1 && $this->date2;
            if ($hasDiff) {
                return $this->date2->getTimestamp() - $this->date1->getTimestamp();
            } else {
                return NULL;
            }
        }

        public function __toString()
        {
            return (string) $this->getTimeDiff();
        }

        /** @return \DateTime */
        public function getDate1() { return $this->date1; }

        /** @param \DateTime $date1 */
        public function setDate1(DateTime $date1) { $this->date1 = $date1; }

        /** @return \DateTime */
        public function getDate2() { return $this->date2; }

        /** @param \DateTime $date2 */
        public function setDate2(DateTime $date2) { $this->date2 = $date2; }

        /** @return string the current task */
        public function getTask() { return $this->task; }
    }
    ?>
    ```

    I don't think posting the views is necessary for the question. Now, there's no error in the code, but it doesn't do what I expect, and I assume the reason is that each time a function of the script is called via a new page, it is NOT the same instance of the script called previously, so any previously created objects are no longer there. This confuses me and leaves me quite unsure of how to implement this gracefully. Some ways I would guess of doing this are passing the necessary data through the URL, or saving data in a database and retrieving it later to compare the times. What would be the recommended way to do, not just this, but anything that needs previously created data? Also, am I correct in thinking that a script is only 'alive' for one webpage at a time? Thanks!
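    That reading is correct: PHP is shared-nothing, so every request builds a brand-new controller, and the TimeTask instance from the previous request is gone by the time create() runs. The usual fix is exactly what you guessed: persist the start time in the session (or the database) and read it back on the next request. A minimal sketch using CodeIgniter's session library, with names taken from the code above:

    ```php
    <?php
    class Site extends CI_Controller {

        function origIndex()
        {
            // Persist the start time across requests; controller state does not survive.
            $this->load->library('session');
            $this->session->set_userdata('task_start', time());
            // ... load options_view as before ...
        }

        function create()
        {
            $this->load->library('session');
            $start = $this->session->userdata('task_start');
            $diff  = $start ? time() - $start : 0;
            // ... insert the record, then pass $diff to the view ...
        }
    }
    ```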

    Read the article

  • Remote Debug Windows Azure Cloud Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx

    On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but the one that stood out most is remote debug support for Windows Azure Cloud Service (a.k.a. WACS).

    Live Debugging is a Nightmare for Cloud Applications

    When we develop against a public cloud, debugging might be the most difficult task, especially after the application has been deployed. To minimize the debugging effort, Microsoft provided a local emulator for cloud service and storage once the Windows Azure platform was announced. Using the local emulator, developers can run their application on a local machine with almost the same behavior as on Windows Azure, and debug it easily and quickly. But once we deploy our application to Azure, we have to debug through logs and the diagnostic monitor, which is very inefficient. Visual Studio 2012 introduced a feature named "anonymous remote debug", which allows any workstation under any user to attach to a remote process. This is less secure than authenticated remote debugging, but much easier and simpler to use. Now, with Windows Azure SDK 2.2, we can attach to our application on Windows Azure from our local machine, and it's very easy.

    How to Use the Remote Debugger

    First, create a new Windows Azure Cloud project in Visual Studio, select ASP.NET Web Role, and create an ASP.NET WebForm application. Then right-click on the cloud project and select "Publish". In the publish dialog, make sure the application will be built in debug mode, since a .NET assembly cannot be debugged in release mode. I enabled Remote Desktop because I will log into the virtual machine later in this post; it is NOT necessary for remote debugging. Then select the "advanced settings" tab and check "Enable Remote Debugger for all roles". In WACS, a cloud service can have one or more roles, and each role can have one or more instances; if checked, the remote debugger will be enabled for all roles and all instances. Currently there is no way to specify which role(s) and which instance(s) to enable. Finally, click the "Publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger.

    Attaching to the remote process is easy. Open the "Server Explorer" window in Visual Studio, expand the "Cloud Services" node, find the cloud service, role, and instance we just published and want to debug, right-click on the instance and select "Attach Debugger". After a while (depending on how fast our Internet connection to the Windows Azure data center is), Visual Studio switches to debug mode. Let's add a breakpoint in the default web page's form-load function and refresh the page in the browser to see what happens. Our application stops at the breakpoint, and the call stack and watch features are all available to use. Now hit F5 to continue, and back in the browser the page renders successfully.

    What's Under the Hood

    The remote debugger is a WACS plugin. When we check "Enable Remote Debugger" in the publish dialog, Visual Studio adds two cloud configuration settings to the CSCFG file. Since they are appended at deployment time, we cannot find them in our project's CSCFG file, but they are visible in the CSCFG inside the publish package.
    At the same time, Visual Studio generates a certificate and includes it in the package for the remote debugger. In the Azure management portal we can find a certificate under our application that was created and uploaded by the remote debugger plugin. (Since I enabled Remote Desktop, there are two certificates on my application; the other one is for the remote debugger.) When the application is deployed, the Windows Azure system opens the related ports for the remote debugger; on my application there are two new ports opened. Finally, in our WACS virtual machine, the Windows Azure system copies the remote debug component matching the version of Visual Studio we are using and starts it. Our application can then be debugged remotely through the Visual Studio remote debugger, and the task manager on the virtual machine of my WACS application shows the debugger process running.

    Summary

    In this post I demonstrated one of the features introduced in Windows Azure SDK 2.2: the remote debugger. It allows us to attach to our application from a local machine once it has been deployed to a Windows Azure virtual machine. The remote debugger is powerful and easy to use, but it brings more security risk, and since it's only available for debug builds, performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixes (publish the beta version to the Azure staging slot), rather than for production.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • IE9, HTML5 and truck load of other stuff happening around the web

    - by Harish Ranganathan
    First of all, I haven't been updating this blog as regularly as I used to, primarily because I was visiting a lot of cities talking about SharePoint, WebMatrix, IE9 and a few other things. IE9 is my new-found love, and I simply think we have done great work in improving the browser and the browsing experience for our users. This post talks about IE, things happening around the web in general, and a few misconceptions around IE (I had earlier written about IE8 and common myths).

    When you think about the way the web has transformed, it's truly amazing. Rewind back to the late 90s and early 2000s: the web was a luxury. There were a lot of desktop applications around, and web applications were just starting to pick up. The primary reason was that not a lot of folks were into web development, and the web was confined to HTML and JavaScript. CSS was around here and there, but no one took it very seriously. XML and XSLT were picking up fast and contributed to decent web development techniques.

    So as a web developer, all we had to worry about was building good-looking websites that worked well with IE6 and occasionally with Safari. Firefox was not even in the picture then, and neither was Chrome. But with the various arms of the W3C consortium and other bodies working actively on things like CSS, SVG and XHTML, a few more areas came into the picture when it comes to browsers supporting standards. IE6 certainly wasn't up to speed, and the main issues we were tackling then were privacy and piracy. We invested a lot of effort in curbing piracy, and one of the steps was that IE7, the next version of IE, would install only on genuine Windows machines. This meant that people who were knowingly or unknowingly running pirated Windows XP could not install IE7, and the limitations of IE6 really hurt them. Also of importance: if you were running pirated Windows, chances were high that you didn't get the security updates and were therefore vulnerable to viruses/trojans on your system; many of them actually block IE in the first place and make it difficult to browse. SP2 came as a big boon, but again only for genuine Windows machines.

    With Firefox coming as a free install, and heavily pushed by Google at the time, it was natural that people would try an alternative. By then we had started working on IE8 to support the best standards (note: HTML5, CSS 2.1 and other specs were work in progress then; they still are). Later, Google in their infinite wisdom realized that Firefox was taking them nowhere, and they released Chrome; now they heavily push Chrome even to Firefox users, which is natural, since it's their browser. Meanwhile, these browsers push their updates as mandatory and therefore have a very short cycle for adding enhancements and support for things like CSS. When IE8 came out, it really was the browser with the best standards support at the time, and a lot of people saw our efforts in improving our browser.

    HTML5 is the buzzword in the industry, and there is a lot of noise from many browsers claiming support for it. IE8 doesn't have much support for HTML5, but with IE9 Beta we have great support for many of the HTML5 specifications. Note that HTML5 is still a work in progress, and one of the board members working on the spec has mentioned that these specs might change and that relying on them heavily is dangerous. But some of the advances, such as the video tag, are indeed supported in IE9 Beta.
    IE9 Beta also has full hardware-acceleration support, which other browsers don't have. IE8 had advanced security features such as the SmartScreen filter, InPrivate browsing and anti-phishing, among other things. IE9 builds on top of these with best-in-class security as well as support for HTML5, CSS3, hardware acceleration, SVG and many other browser advancements. Read more at http://www.beautyoftheweb.com/#/highlights/html5

    To summarize, IE9 Beta is really innovative, and you should try it to believe what it provides. You can visit http://www.beautyoftheweb.com/ to install it as well as read more. Cheers!!!
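    As a concrete example of what that support means, a page can now embed media natively instead of through a plugin. A minimal sketch; the file name and dimensions are placeholders:

    ```html
    <!-- HTML5 video element as supported in IE9 Beta; browsers without
         support fall back to the inner text. -->
    <video src="demo.mp4" width="640" height="360" controls>
        Your browser does not support the video tag.
    </video>
    ```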

    Read the article

  • Desktop Applications Versus Web Applications

    Up until the advent of the internet, programmers really only developed one type of application used by end-users: the desktop application. As the name implies, these applications ran strictly from a desktop computer and were limited by the resources available to that computer. Initially, this type of application did not need resources outside the scope of the computer on which it was installed. The problem with this type of application is that if multiple end-users need to access the same desktop application, then the application must be installed on each end-user's computer. In this age of software development, security was not as big a concern as it is today with other types of applications, primarily because an end-user had to have access to the computer where the software was installed in order to access the application. In addition, developers could password-protect the application in case an unauthorized end-user gained access to the computer.

    With the birth of the internet, a second form of application emerged as developers tried to solve the inherent issues with the preexisting desktop application. One solution that overcomes some of the shortcomings of desktop applications is the web application. Web applications are hosted on a centralized server, and clients only need network access and a web browser to use the application. Because a web application is installed on a remote server, it removes the need for individual installations of the same application on each end-user's computer. The main benefit of hosting an application on a server is increased accessibility: nothing has to be installed on a desktop computer for an end-user to access the application. In addition, web applications are much easier to maintain, because any change to the application is applied on the server and is inherently applied to every end-user using the application. This removes the time needed to install and maintain individual installations of a desktop application. However, with the increased accessibility come additional costs compared to a desktop application, namely the cost and maintenance of the server hosting it; typically, after a desktop application is purchased there are no additional recurring fees.

    When developing a web-based application, there are additional considerations that must be addressed compared to a desktop application. The added benefit of increased accessibility also adds a new failure point: an end-user must now have network connectivity to access the application. This is not a concern for desktop applications, because their resources are typically bound to the computer on which they run. And since the client-server model of a web-based application increases the application's availability, additional security concerns come into play. As stated before, a desktop application is bound by the end-user's access to the computer on which it is installed. This is not the case with web-based applications, which can potentially be accessed from anywhere with the proper internet/network connection. Additional security steps are required to ensure the integrity of the application and its data.
    Examples of these steps include, but are not limited to, the following:

    - Restricted/Password Areas: used when specific information can only be accessed by end-users based on a set of accessibility rules.
    - IP Restrictions: used when only specific locations need to access an application; applied from within the web server or a firewall.
    - Network Restrictions (Firewalls): used to contain access to an application within a specific subset of a network.
    - Data Encryption: used to transform personally identifiable information into something unreadable so that it can be stored safely for future use.
    - Encrypted Protocols (HTTPS): used to prevent others from reading messages being sent between applications over a network.

    Read the article

  • VirtualBox appliance for the Oracle Communications Service Delivery Platform (SDP) Products

    - by chlander
    It's been quite a while since we last blogged. This blog is written by Leif Lourie, a curriculum developer for the Oracle Communications Service Delivery Platform (SDP) products. For the last 8 years, Leif has worked as a curriculum developer for many of the telecom-oriented products that Oracle offers. He has been working in the telecom industry for about 25 years and has also worked as a software developer, project manager, and solutions architect. He is currently working on courseware for an upcoming release of one of the Service Delivery Platform products. Thanks to Leif not only for this blog, but for making the VM described in it available. - Cheryl Lander, Oracle Communications InfoDev Senior Director

    Being able to download, install and test a product within a day is often very important for the people doing the primary evaluation of a software product. If it takes longer, it will require a bigger effort, like a proof-of-concept project with many people involved. Of course, if the product is chosen for a more thorough test, that will probably happen anyway, but then maybe with a focus on integration instead of product features. We have a long tradition of creating complex software that is easy to install and test, and we have often been praised for how easy it is to get our products up and running. One key to this has been that there has always been an installer for Windows as well as for the production environments, which are usually Unix and Linux; and the Windows installer has, in most cases, been released for development and testing purposes.

    Lately, this has changed. Our products are now very seldom released for the Windows platform at all, and even the Linux versions are almost always released for 64-bit systems. This creates problems for many of the people who want to try out our products, since few have access to a 64-bit Linux system of the right platform. Most of us use a laptop with Windows or Mac OS; some of us use Linux or Solaris, but probably a non-certified distribution for the product you want to test.

    My job, among other things, is to develop hands-on practices for our products. For me, it is crucial to have access to environments for installing and using our products, and for this reason I have been using virtual machines for many years. I have a ready-made base system with the necessary tools installed for all the products I create hands-on practices for. Whenever I start working on practices for a new product or a new version, I just copy the base system and start working with a clean slate. This saves me a lot of time!

    Now, I would like to start saving time for my favorite student: you! If you are using our products and regularly test new versions, you might benefit from the virtual machine that is now available on Oracle Technology Network: the Virtual Machine for the Oracle Communications Service Delivery Platform (SDP) Products. This virtual machine contains an installation of the 64-bit version of Oracle Enterprise Linux, version 6. It also has Oracle Database Express Edition (XE), Oracle Java and Oracle Enterprise Pack for Eclipse installed. By using Oracle VM VirtualBox you may use Windows, OS X, Linux or Solaris on your laptop; VirtualBox can be installed on top of any of these platforms and gives you the ability to run virtual machines on your laptop.
    After downloading and starting the virtual machine you will also need to download the installation files for the product you want to test, for example Oracle Communications Services Gatekeeper or Oracle Communications Online Mediation Controller. In some cases there are lessons and practices available for the products; the freely available courses are listed in the Oracle Learning Library as a Collection of Oracle Communications Service Delivery Platform Courses. As time goes by, we will make this collection bigger. The goal is also to update the virtual machine about once or twice per year, so you will always be able to get a well-maintained virtual machine for the Service Delivery Platform products from us.

    We Value Your Feedback

    If you would like to suggest improvements or report issues on any of the product documentation, curriculum, or training produced by the Oracle Communications Information Development team, you can use these channels:

    - Email [email protected].
    - Post a comment on this blog.

    Thanks for reading!

    Read the article

  • Web Safe Area (optimal resolution) for web app design?

    - by M.A.X
    I'm in the process of designing a new web app, and I'm wondering what 'Web Safe Area' I should optimize the app layout and design for. By Web Safe Area I mean the actual area available to display the website in the browser, which is influenced by monitor resolution as well as the space taken up by the browser and OS. I did some investigation and thinking on my own, but wanted to share this to see what the general opinion is. Here is what I found.

    Optimal Display Resolution:

    - w3schools web stats seems to be the most referenced source (however, they state that these are results from their own site and are biased towards tech-savvy users).
    - http://www.w3counter.com/globalstats.php (aggregate data from something like 15,000 different sites that use their tracking services).
    - StatCounter Global Stats Display Resolution (stats are based on aggregate data collected by StatCounter on a sample exceeding 15 billion pageviews per month, collected across the StatCounter network of more than 3 million websites).
    - NetMarketShare Screen Resolutions (marketshare.hitslink.com) (a web analytics consulting firm; they get data from the browsers of site visitors to their on-demand network of live-stats customers, compiled from approximately 160 million visitors per month).

    Display Resolution Summary: There is a bit of variation between the above sources, but in general, as of Jan 2011, it looks like 1024x768 is about 20%, while ~85% have a higher resolution of at least 1280x768 (1280x800 is the most common of these with 15-20% of the total web, depending on the source; 1280x1024 and 1366x768 follow behind with 9-14% of the share). My guess is that the higher resolutions are even more common if we filter on North America, and higher still if we filter on North American corporate users (unfortunately I couldn't find any free geographically filtered statistics). Another point to note is that the 1024x768 desktop user population is likely lower than the aforementioned 20%, seeing as the iPad (1024x768 native display) is likely propping up those numbers (the app I'm designing is Flash-based, and Apple mobile devices don't support Flash, so iPad support isn't a concern). My recommendation would be to optimize around a 1280x768 constraint (note: 1280x768 is actually a relatively rare resolution, but I think it's a valid constraint range considering that 1366x768 is relatively common and 1280 is the most common horizontal resolution).

    Browser + OS Constraints: To further constrain things, we have to subtract the space taken up by the browser (assuming IE, which consumes the most space) and the OS (assuming WinXP-Win7):

    - Win7 has the biggest taskbar footprint, at a height of 40px (XP's and Vista's is 30px).
    - The default IE8 view uses up 25px at the bottom of the screen for the status bar, and a further 120px at the top of the screen for the window title bar and the browser UI (assuming the default 'favorites' toolbar is present; it would be 91px without it).
    - Assuming no scrollbar, we also lose a total of 4px horizontally for the window outline.

    This means we are left with 583px of vertical space and 1276px of horizontal: in other words, a Web Safe Area of 1276 x 583. Is this a correct line of thinking? I'm really surprised that I couldn't find this type of investigation anywhere on the web. Lots of websites talk about designing for 1024x768, but that's only half the equation! There is no mention of browser/OS influences on the actual area you have to display the site/app.
    Any help on this would be greatly appreciated! Thanks.

    EDIT: Another caveat to the line of thinking above is that different browsers take up different numbers of pixels depending on the OS they're running on. For example, under WinXP, IE8 takes up 142px at the top of the screen (instead of the aforementioned 120px on Win7) because the file menu shows up by default on XP, while on Win7 the file menu is hidden by default. So it looks like on WinXP + IE8 the Web Safe Area would be a mere 572px of vertical space (768 - 142 - 30 - 24 = 572).
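    For what it's worth, the safe area can also be measured at runtime instead of assumed, which sidesteps the per-browser arithmetic entirely. A small sketch; the fallback covers older IE versions that lack window.innerWidth:

    ```javascript
    // Measure the real "web safe area" of the current browser window.
    function getSafeArea() {
        var width  = window.innerWidth  || document.documentElement.clientWidth;
        var height = window.innerHeight || document.documentElement.clientHeight;
        return { width: width, height: height };
    }

    var area = getSafeArea();
    // e.g. "1276 x 583" on Win7 + IE8 with the favorites bar visible
    alert(area.width + " x " + area.height);
    ```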

    Read the article

  • how to display user name in login name control

    - by user569285
    I have a master page that holds the LoginView content that appears on all pages based on the master page. I also have a LoginName control nested in the LoginView to display the name of the user when they are logged in. The LoginView markup from the master page is as follows:

    ```aspx
    <div class="loginView">
        <asp:LoginView ID="MasterLoginView" runat="server">
            <LoggedInTemplate>
                Welcome <span class="bold"><asp:LoginName ID="HeadLoginName" runat="server" />
                <asp:Label ID="userNameLabel" runat="server" Text="Label"></asp:Label></span>!
                [ <asp:LoginStatus ID="HeadLoginStatus" runat="server" LogoutAction="Redirect"
                     LogoutText="Log Out" LogoutPageUrl="~/Logout.aspx"/> ]
            </LoggedInTemplate>
            <AnonymousTemplate>
                Welcome: Guest [ <a href="~/Account/Login.aspx" ID="HeadLoginStatus" runat="server">Log In</a> ]
            </AnonymousTemplate>
        </asp:LoginView>
    </div>
    ```

    Since VS2010 ships with a default login page in the Account folder, I didn't think it necessary to create a separate login page, so I just used that one. The markup for the Login control is below:

    ```aspx
    <asp:Login ID="LoginUser" runat="server" EnableViewState="false" RenderOuterTable="false">
        <LayoutTemplate>
            <span class="failureNotification">
                <asp:Literal ID="FailureText" runat="server"></asp:Literal>
            </span>
            <asp:ValidationSummary ID="LoginUserValidationSummary" runat="server"
                 CssClass="failureNotification" ValidationGroup="LoginUserValidationGroup"/>
            <div class="accountInfo">
                <fieldset class="login">
                    <legend style="text-align:left; font-size:1.2em; color:White;">Account Information</legend>
                    <p style="text-align:left; font-size:1.2em; color:White;">
                        <asp:Label ID="UserNameLabel" runat="server" AssociatedControlID="UserName">User ID:</asp:Label>
                        <asp:TextBox ID="UserName" runat="server" CssClass="textEntry"></asp:TextBox>
                        <asp:RequiredFieldValidator ID="UserNameRequired" runat="server" ControlToValidate="UserName"
                             CssClass="failureNotification" ErrorMessage="User ID is required."
                             ToolTip="User ID field is required."
                             ValidationGroup="LoginUserValidationGroup">*</asp:RequiredFieldValidator>
                    </p>
                    <p style="text-align:left; font-size:1.2em; color:White;">
                        <asp:Label ID="PasswordLabel" runat="server" AssociatedControlID="Password">Password:</asp:Label>
                        <asp:TextBox ID="Password" runat="server" CssClass="passwordEntry" TextMode="Password"></asp:TextBox>
                        <asp:RequiredFieldValidator ID="PasswordRequired" runat="server" ControlToValidate="Password"
                             CssClass="failureNotification" ErrorMessage="Password is required."
                             ToolTip="Password is required."
                             ValidationGroup="LoginUserValidationGroup">*</asp:RequiredFieldValidator>
                    </p>
                    <p style="text-align:left; font-size:1.2em; color:White;">
                        <asp:CheckBox ID="RememberMe" runat="server"/>
                        <asp:Label ID="RememberMeLabel" runat="server" AssociatedControlID="RememberMe"
                             CssClass="inline">Keep me logged in</asp:Label>
                    </p>
                </fieldset>
                <p class="submitButton">
                    <asp:Button ID="LoginButton" runat="server" CommandName="Login" Text="Log In"
                         ValidationGroup="LoginUserValidationGroup" onclick="LoginButton_Click"/>
                </p>
            </div>
        </LayoutTemplate>
    </asp:Login>
    ```

    I then wrote my own authentication code, since I have my own database.
    The following shows the code in the login button's click event:

    ```csharp
    public partial class Login : System.Web.UI.Page
    {
        //create string objects
        string userIDStr, pwrdStr;

        protected void LoginButton_Click(object sender, EventArgs e)
        {
            //assign textbox items to string objects
            userIDStr = LoginUser.UserName.ToString();
            pwrdStr = LoginUser.Password.ToString();

            //SQL connection string
            string strConn = WebConfigurationManager.ConnectionStrings["CMSSQL3ConnectionString"].ConnectionString;
            SqlConnection Conn = new SqlConnection(strConn);

            //SQL select statement for comparison
            string sqlUserData = "SELECT StaffID, StaffPassword, StaffFName, StaffLName, StaffType FROM Staffs";
            sqlUserData += " WHERE (StaffID ='" + userIDStr + "')";
            sqlUserData += " AND (StaffPassword ='" + pwrdStr + "')";

            SqlCommand com = new SqlCommand(sqlUserData, Conn);
            SqlDataReader rdr;
            string usrdesc;
            string lname;
            string fname;
            string staffname;
            try
            {
                Conn.Open();
                rdr = com.ExecuteReader();
                rdr.Read();
                usrdesc = (string)rdr["StaffType"];
                fname = (string)rdr["StaffFName"];
                lname = (string)rdr["StaffLName"];
                staffname = lname.ToString() + " " + fname.ToString();
                LoginUser.UserName = staffname.ToString();
                rdr.Close();

                if (usrdesc.ToLower() == "administrator")
                {
                    Response.Redirect("~/CaseAdmin.aspx", false);
                }
                else if (usrdesc.ToLower() == "manager")
                {
                    Response.Redirect("~/CaseManager.aspx", false);
                }
                else if (usrdesc.ToLower() == "investigator")
                {
                    Response.Redirect("~/Investigator.aspx", false);
                }
                else
                {
                    Response.Redirect("~/Default.aspx", false);
                }
            }
            catch (Exception ex)
            {
                string script = "<script>alert('" + ex.Message + "');</script>";
            }
            finally
            {
                Conn.Close();
            }
        }
    }
    ```

    My authentication works perfectly and the page gets redirected to the designated destination. However, the LoginView does not display the user's name, and I can't figure out how to pass the user's name that I picked from the database to the LoginName control so it gets displayed. Taking a closer look, I also noticed that the logout text that should be displayed after a successful login does not show. That leaves me wondering whether the LoggedInTemplate on the master page ever fires at all, or whether it's still the AnonymousTemplate that keeps displaying. How do I get this to work as expected? Please help....
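    A likely explanation, offered as a hedged sketch rather than a confirmed diagnosis: LoginView switches to its LoggedInTemplate based on whether the request is authenticated, and a hand-rolled credential check never marks the request authenticated, so the AnonymousTemplate keeps rendering and LoginName has no identity to show. Issuing the forms-authentication ticket before redirecting (this assumes forms authentication is enabled in web.config) would address both symptoms; separately, the concatenated SQL is open to injection and would be safer with SqlParameter values:

    ```csharp
    // After a successful credential check, before the role-based redirect:
    FormsAuthentication.SetAuthCookie(staffname, false); // System.Web.Security

    // On the next request, LoginName renders the name carried by the ticket,
    // and LoginStatus shows "Log Out" because Request.IsAuthenticated is true.
    Response.Redirect("~/CaseAdmin.aspx", false);
    ```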

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built-in DataContractJsonSerializer or the older JavaScriptSerializer. The DataContractJsonSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types, which are quite common these days. The JavaScriptSerializer that came before it actually does support non-typed objects for serialization, but it can't do anything with untyped data coming in from JavaScript, and its overall model of extensibility was pretty limited (JavaScriptSerializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high-level and low-level components, and supports binary JSON, JSON contracts, XML-to-JSON conversion, LINQ to JSON and many, many more features than either of the built-in serializers. ASP.NET Web API now uses JSON.NET as its default serializer, and it is pulled in as a NuGet dependency into Web API projects, which is great.

    Dynamic JSON Parsing

    One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is, without mapping the captured JSON directly into a .NET type as DataContractJsonSerializer or the JavaScriptSerializer do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries, etc.), and other times you simply don't have the structures in place, or don't want to create them, just to import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back, before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries; I re-linked my original post to this one. In this post I'll discuss JToken, JObject and JArray, which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types.

    Why Dynamic JSON?

    So, why dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third-party services, it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes a lot of sense to pull only a small amount of data out of a large JSON document received from a service, because the third-party service isn't directly related to your application's logic most of the time, and it makes little sense to map the entire service structure in your application. For example, I recently worked with the Google Maps Places API to return information about businesses close to me (or rather the app's location). The Google API returns a ton of information that my application had no interest in - all I needed was a few values out of the data. Dynamic JSON parsing makes it possible to map this data without having to map the entire API to a C# data structure. Instead, I could pull out the three or four values I needed from the API and store them directly on the business entities that needed to receive the data - no need to map the entire Maps API structure.
    Getting JSON.NET

    The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project from the Package Manager Console with:

    ```
    PM> Install-Package Newtonsoft.Json
    ```

    or by using Manage NuGet Packages in your project References. As mentioned, if you're using ASP.NET Web API or MVC 4, JSON.NET will be automatically added to your project. Alternately, you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/

    Creating JSON on the fly with JObject and JArray

    Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken-derived JSON.NET objects. The most common JToken-derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaObjectProvider and so uses the dynamic keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs, using JObject for the base object and the songs and JArray for the actual collection of songs:

    ```csharp
    [TestMethod]
    public void JObjectOutputTest()
    {
        // strongly typed instance
        var jsonObject = new JObject();

        // you can explicitly add values here using the class interface
        jsonObject.Add("Entered", DateTime.Now);

        // or cast to dynamic to dynamically add/read properties
        dynamic album = jsonObject;
        album.AlbumName = "Dirty Deeds Done Dirt Cheap";
        album.Artist = "AC/DC";
        album.YearReleased = 1976;

        album.Songs = new JArray() as dynamic;

        dynamic song = new JObject();
        song.SongName = "Dirty Deeds Done Dirt Cheap";
        song.SongLength = "4:11";
        album.Songs.Add(song);

        song = new JObject();
        song.SongName = "Love at First Feel";
        song.SongLength = "3:10";
        album.Songs.Add(song);

        Console.WriteLine(album.ToString());
    }
    ```

    This produces a complete JSON structure:

    ```json
    {
      "Entered": "2012-08-18T13:26:37.7137482-10:00",
      "AlbumName": "Dirty Deeds Done Dirt Cheap",
      "Artist": "AC/DC",
      "YearReleased": 1976,
      "Songs": [
        {
          "SongName": "Dirty Deeds Done Dirt Cheap",
          "SongLength": "4:11"
        },
        {
          "SongName": "Love at First Feel",
          "SongLength": "3:10"
        }
      ]
    }
    ```

    Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override them with the JsonSerializerSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather, the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime, without compile-time constraints on the structure of the JSON output. Here I use JObject to create an album 'object' and immediately cast it to dynamic. JObject is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo-collections of key/value pairs that are exposed as properties through the IDynamicMetaObjectProvider interface exposed in JSON.NET's JToken base class. For objects the syntax is very clean - you add simple typed values as properties. For objects and arrays you have to explicitly create a new JObject or JArray, cast them to dynamic, and then add properties and items to them.
    Always remember, though, that these values are dynamic - which means no IntelliSense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data-driven JSON generation from other structures. Below you can see both styles of access next to each other:

    ```csharp
    // strongly typed instance
    var jsonObject = new JObject();

    // you can explicitly add values here
    jsonObject.Add("Entered", DateTime.Now);

    // expando-style instance: you can just 'use' properties
    dynamic album = jsonObject;
    album.AlbumName = "Dirty Deeds Done Dirt Cheap";
    ```

    JContainer (the base class for JObject and JArray) is a collection, so you can also iterate over the properties at runtime easily:

    ```csharp
    foreach (var item in jsonObject)
    {
        Console.WriteLine(item.Key + " " + item.Value.ToString());
    }
    ```

    The functionality of the JSON objects is very similar to .NET's ExpandoObject, and if you have used that before, you're already familiar with how the dynamic interfaces to the JSON objects work.

    Importing JSON with JObject.Parse() and JArray.Parse()

    The JValue structure supports importing JSON via the Parse() and Load() methods, which can read JSON data from a string or various streams, respectively. Essentially, JValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:

    ```csharp
    public void JValueParsingTest()
    {
        var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",
                            ""Entered"":""2012-03-16T00:03:33.245-10:00""}";

        dynamic json = JValue.Parse(jsonString);

        // values require casting
        string name = json.Name;
        string company = json.Company;
        DateTime entered = json.Entered;

        Assert.AreEqual(name, "Rick");
        Assert.AreEqual(company, "West Wind");
    }
    ```

    The JSON string represents an object with three properties, which is parsed into a JObject class and cast to dynamic. Once cast to dynamic, I can go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken, and I have to cast them to their appropriate types before I can do type comparisons, as in the Asserts at the end of the test method. This is required because of the way dynamic types work: the type can't be determined from the method signature of the Assert.AreEqual(object, object) method. I have to either assign the dynamic value to a variable, as I did above, or explicitly cast ((string) json.Name) in the actual method call. The JSON structure can be much more complex than this simple example.
    Here's another example of an array of albums serialized to JSON and then parsed through with JArray.Parse():

    ```csharp
    [TestMethod]
    public void JsonArrayParsingTest()
    {
        var jsonString = @"[
        {
            ""Id"": ""b3ec4e5c"",
            ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"",
            ""Artist"": ""AC/DC"",
            ""YearReleased"": 1976,
            ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
            ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"",
            ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"",
            ""Songs"": [
                { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" },
                { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" },
                { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" }
            ]
        },
        {
            ""Id"": ""7b919432"",
            ""AlbumName"": ""End of the Silence"",
            ""Artist"": ""Henry Rollins Band"",
            ""YearReleased"": 1992,
            ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"",
            ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"",
            ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"",
            ""Songs"": [
                { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" },
                { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" }
            ]
        }
    ]";

        JArray jsonVal = JArray.Parse(jsonString) as JArray;
        dynamic albums = jsonVal;

        foreach (dynamic album in albums)
        {
            Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")");
            foreach (dynamic song in album.Songs)
            {
                Console.WriteLine("\t" + song.SongName);
            }
        }

        Console.WriteLine(albums[0].AlbumName);
        Console.WriteLine(albums[0].Songs[1].SongName);
    }
    ```

    JObject and JArray in ASP.NET Web API

    Of course, these types also work in ASP.NET Web API controller methods. If you want, you can accept parameters using these objects or return them back to the client. The following contrived example receives dynamic JSON input, then creates a new dynamic JSON object and returns it based on data from the first:

    ```csharp
    [HttpPost]
    public JObject PostAlbumJObject(JObject jAlbum)
    {
        // dynamic input from inbound JSON
        dynamic album = jAlbum;

        // create a new JSON object to write out
        dynamic newAlbum = new JObject();

        // create properties on the new instance
        // with values from the first
        newAlbum.AlbumName = album.AlbumName + " New";
        newAlbum.NewProperty = "something new";
        newAlbum.Songs = new JArray();

        foreach (dynamic song in album.Songs)
        {
            song.SongName = song.SongName + " New";
            newAlbum.Songs.Add(song);
        }

        return newAlbum;
    }
    ```

    The raw POST request to the server looks something like this:

    ```
    POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1
    User-Agent: Fiddler
    Content-type: application/json
    Host: localhost
    Content-Length: 88

    {AlbumName: "Dirty Deeds", Songs: [{ SongName: "Problem Child" }, { SongName: "Squealer" }]}
    ```

    and the output that comes back looks like this:

    ```json
    {
      "AlbumName": "Dirty Deeds New",
      "NewProperty": "something new",
      "Songs": [
        { "SongName": "Problem Child New" },
        { "SongName": "Squealer New" }
      ]
    }
    ```

    The original values are echoed back with something extra appended to demonstrate that we're working with a new object.
When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON, so effectively the parameter and result type explicitly determines the input and output format - which is nice.

Dynamic to Strong Type Mapping

You can also map JObject and JArray instances to a strongly typed object, so you can mix dynamic and static typing in the same piece of code. Using the two-album jsonString shown earlier, the code below takes an array of albums, picks out a single album, and casts that album to a static Album instance.

[TestMethod]
public void JsonParseToStrongTypeTest()
{
    JArray albums = JArray.Parse(jsonString) as JArray;

    // pick out one album
    JObject jalbum = albums[0] as JObject;

    // Copy to a static Album instance
    Album album = jalbum.ToObject<Album>();

    Assert.IsNotNull(album);
    Assert.AreEqual(album.AlbumName, jalbum.Value<string>("AlbumName"));
    Assert.IsTrue(album.Songs.Count > 0);
}

This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON and dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static.

Strongly typed JSON Parsing

With all this talk of dynamic, let's not forget that JSON.NET of course also does strongly typed serialization, which is drop dead easy. Here's a simple example of how to serialize and deserialize an object with JSON.NET:

[TestMethod]
public void StronglyTypedSerializationTest()
{
    // Demonstrate deserialization from a raw string
    var album = new Album()
    {
        AlbumName = "Dirty Deeds Done Dirt Cheap",
        Artist = "AC/DC",
        Entered = DateTime.Now,
        YearReleased = 1976,
        Songs = new List<Song>()
        {
            new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" },
            new Song() { SongName = "Love at First Feel", SongLength = "3:10" }
        }
    };

    // serialize to string
    string json2 = JsonConvert.SerializeObject(album, Formatting.Indented);
    Console.WriteLine(json2);

    // make sure we can deserialize back
    var album2 = JsonConvert.DeserializeObject<Album>(json2);

    Assert.IsNotNull(album2);
    Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap");
    Assert.IsTrue(album2.Songs.Count == 2);
}

JsonConvert is a high level static class that wraps lower level functionality, but you can also use the JsonSerializer class, which allows you to serialize and parse to and from streams. It's a little more work, but gives you a bit more control (a hedged sketch follows after the summary). The functionality available is easy to discover with IntelliSense, and that's good because there's not a lot in the way of documentation that's actually useful.

Summary

JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing, from dynamic parsing to static serialization, to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, and pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go!
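As promised above, here is a minimal sketch of stream-based serialization with the lower-level JsonSerializer class. It assumes the Album type used throughout this article and uses a MemoryStream purely for illustration:

using System.IO;
using Newtonsoft.Json;

var serializer = new JsonSerializer { Formatting = Formatting.Indented };

var stream = new MemoryStream();

// serialize an Album instance (as built in the earlier samples) to the stream
var writer = new JsonTextWriter(new StreamWriter(stream)) { CloseOutput = false };
serializer.Serialize(writer, album);
writer.Flush();

// rewind and deserialize it back into a strongly typed instance
stream.Position = 0;
var reader = new JsonTextReader(new StreamReader(stream));
var album2 = serializer.Deserialize<Album>(reader);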
Resources

Sample Test Project
http://json.codeplex.com/

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in .NET, Web Api, AJAX

    Read the article

  • Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    - by ScottGu
Today we released VS 2013 and .NET 4.5.1. These releases include a ton of great improvements, and include some fantastic enhancements to ASP.NET and the Entity Framework. You can download and start using them now. Below are details on a few of the great ASP.NET, Web Development, and Entity Framework improvements you can take advantage of with this release. Please visit http://www.asp.net/vnext for additional release notes, documentation, and tutorials.

One ASP.NET

With the release of Visual Studio 2013, we have taken a step towards unifying the experience of using the different ASP.NET sub-frameworks (Web Forms, MVC, Web API, SignalR, etc.), and you can now easily mix and match the different ASP.NET technologies you want to use within a single application. When you do a File -> New Project with VS 2013 you'll now see a single ASP.NET Project option. Selecting this project will bring up an additional dialog that allows you to start with a base project template, and then optionally add/remove the technologies you want to use in it. For example, you could start with a Web Forms template and add Web API or MVC support to it, or create an MVC project and also enable Web Forms pages within it. This makes it easy for you to use any ASP.NET technology you want within your apps, and take advantage of any feature across the entire ASP.NET technology span.

Richer Authentication Support

The new "One ASP.NET" project dialog also includes a new Change Authentication button that, when pushed, enables you to easily change the authentication approach used by your applications - and makes it much easier to build secure applications that enable SSO from a variety of identity providers. For example, when you start with the ASP.NET Web Forms or MVC templates you can easily add any of the following authentication options to the application:

No Authentication
Individual User Accounts (Single Sign-On support with Facebook, Twitter, Google, and Microsoft ID - or Forms Auth with ASP.NET Membership)
Organizational Accounts (Single Sign-On support with Windows Azure Active Directory)
Windows Authentication (Active Directory in an intranet application)

The Windows Azure Active Directory support is particularly cool. Last month we updated Windows Azure Active Directory so that developers can now easily create any number of Directories using it (for free and deployed within seconds). It now takes only a few moments to enable single-sign-on support within your ASP.NET applications against these Windows Azure Active Directories. Simply choose the "Organizational Accounts" radio button within the Change Authentication dialog and enter the name of your Windows Azure Active Directory to do this. This will automatically configure your ASP.NET application to use Windows Azure Active Directory and register the application with it. Now when you run the app your users can easily and securely sign-in using their Active Directory credentials within it - regardless of where the application is hosted on the Internet. For more information about the new process for creating web projects, see Creating ASP.NET Web Projects in Visual Studio 2013.

Responsive Project Templates with Bootstrap

The new default project templates for ASP.NET Web Forms, MVC, Web API and SPA are built using Bootstrap. Bootstrap is an open source CSS framework that helps you build responsive websites which look great on different form factors such as mobile phones, tablets and desktops.
For example, in a browser window the home page created by the MVC template looks like the following. When you resize the browser to a narrow window to see how it would look on a phone, you can notice how the contents gracefully wrap around and the horizontal top menu turns into an icon. When you click the menu icon it expands into a vertical menu - which enables a good navigation experience for small screen real-estate devices. We think Bootstrap will enable developers to build web applications that work even better on phones, tablets and other mobile devices - and enable you to easily build applications that can leverage the rich ecosystem of Bootstrap CSS templates already out there. You can learn more about Bootstrap here.

Visual Studio Web Tooling Improvements

Visual Studio 2013 includes a new, much richer, HTML editor for Razor files and HTML files in web applications. The new HTML editor provides a single unified schema based on HTML5. It has automatic brace completion, jQuery UI and AngularJS attribute IntelliSense, attribute IntelliSense Grouping, and other great improvements. For example, typing "ng-" on an HTML element will show the IntelliSense for AngularJS. This support for AngularJS, Knockout.js, Handlebars and other SPA technologies in this release of ASP.NET and VS 2013 makes it even easier to build rich client web applications. The HTML editor can also now inspect your page at design-time to determine all of the CSS classes that are available; in the original post's screenshot, the auto-completion list contains classes from Bootstrap's CSS file. No more guessing at which Bootstrap element names you need to use. Visual Studio 2013 also comes with built-in support for both CoffeeScript and LESS editing. The LESS editor comes with all the cool features from the CSS editor and has specific IntelliSense for variables and mixins across all the LESS documents in the @import chain.

Browser Link - SignalR channel between browser and Visual Studio

The new Browser Link feature in VS 2013 lets you run your app within multiple browsers on your dev machine, connect them to Visual Studio, and simultaneously refresh all of them just by clicking a button in the toolbar. You can connect multiple browsers (including IE, Firefox, Chrome) to your development site, including mobile emulators, and click refresh to refresh all the browsers at the same time. This makes it much easier to develop and test against multiple browsers in parallel. Browser Link also exposes an API to enable developers to write Browser Link extensions. By enabling developers to take advantage of the Browser Link API, it becomes possible to create very advanced scenarios that cross boundaries between Visual Studio and any browser that's connected to it. Web Essentials takes advantage of the API to create an integrated experience between Visual Studio and the browser's developer tools, remote controlling mobile emulators and a lot more. You will see us take advantage of this support even more to enable really cool scenarios going forward.

ASP.NET Scaffolding

ASP.NET Scaffolding is a new code generation framework for ASP.NET Web applications. It makes it easy to add boilerplate code to your project that interacts with a data model. In previous versions of Visual Studio, scaffolding was limited to ASP.NET MVC projects. With Visual Studio 2013, you can now use scaffolding for any ASP.NET project, including Web Forms.
When using scaffolding, we ensure that all required dependencies are automatically installed for you in the project. For example, if you start with an ASP.NET Web Forms project and then use scaffolding to add a Web API Controller, the required NuGet packages and references to enable Web API are added to your project automatically. To do this, just choose the Add -> New Scaffold Item context menu. Support for scaffolding async controllers uses the new async features from Entity Framework 6.

ASP.NET Identity

ASP.NET Identity is a new membership system for ASP.NET applications that we are introducing with this release. ASP.NET Identity makes it easy to integrate user-specific profile data with application data. ASP.NET Identity also allows you to choose the persistence model for user profiles in your application. You can store the data in a SQL Server database or another data store, including NoSQL data stores such as Windows Azure Storage Tables. ASP.NET Identity also supports Claims-based authentication, where the user's identity is represented as a set of claims from a trusted issuer. Users can login by creating an account on the website using username and password, or they can login using social identity providers (such as Microsoft Account, Twitter, Facebook, Google) or using organizational accounts through Windows Azure Active Directory or Active Directory Federation Services (ADFS). To learn more about how to use ASP.NET Identity visit http://www.asp.net/identity.

ASP.NET Web API 2

ASP.NET Web API 2 has a bunch of great improvements, including:

Attribute routing: ASP.NET Web API now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your Web API routes by annotating your actions and controllers (a hedged sketch appears at the end of this Web API section).

OAuth 2.0 support: The Web API and Single Page Application project templates now support authorization using OAuth 2.0. OAuth 2.0 is a framework for authorizing client access to protected resources. It works for a variety of clients including browsers and mobile devices.

OData Improvements: ASP.NET Web API also now provides support for OData endpoints and enables support for both ATOM and JSON-light formats. With OData you get support for rich query semantics, paging, $metadata, CRUD operations, and custom actions over any data source. Specific enhancements in ASP.NET Web API 2 OData include support for $select, $expand, $batch, and $value; improved extensibility; type-less support; and the ability to reuse an existing model.

OWIN Integration: ASP.NET Web API now fully supports OWIN and can be run on any OWIN-capable host. With OWIN integration, you can self-host Web API in your own process alongside other OWIN middleware, such as SignalR. For more information, see Use OWIN to Self-Host ASP.NET Web API.

More Web API Improvements: In addition to the features above there have been a host of other improvements in ASP.NET Web API, including CORS support, Authentication Filters, Filter Overrides, Improved Unit Testability, and a Portable ASP.NET Web API Client. To learn more go to http://www.asp.net/web-api/
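As promised above, here is a hedged sketch of Web API 2 attribute routing; the BooksController and its routes are invented for illustration and were not part of the original post:

using System.Collections.Generic;
using System.Web.Http;

public class BooksController : ApiController
{
    // GET api/books/5
    [Route("api/books/{id:int}")]
    public string GetBook(int id)
    {
        return "Book " + id;
    }

    // GET api/books/5/authors - routes can express hierarchy that is
    // awkward with convention-based routing
    [Route("api/books/{bookId:int}/authors")]
    public IEnumerable<string> GetAuthorsByBook(int bookId)
    {
        return new[] { "Author A", "Author B" };
    }
}

// Attribute routes are enabled once at startup:
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();
    }
}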
ASP.NET SignalR 2

ASP.NET SignalR is a library for ASP.NET developers that dramatically simplifies the process of adding real-time web functionality to your applications. Real-time web functionality is the ability to have server-side code push content to connected clients instantly as it becomes available. SignalR 2.0 introduces a ton of great improvements. We've added support for Cross-Origin Resource Sharing (CORS) to SignalR 2.0. iOS and Android support for SignalR have also been added using the MonoTouch and MonoDroid components from the Xamarin library (for more information on how to use these additions, see the article Using Xamarin Components from the SignalR wiki). We've also added support for the Portable .NET Client in SignalR 2.0 and created a new self-hosting package. This change makes the setup process for SignalR much more consistent between web-hosted and self-hosted SignalR applications. To learn more go to http://www.asp.net/signalr.

ASP.NET MVC 5

The ASP.NET MVC project templates integrate seamlessly with the new One ASP.NET experience and enable you to integrate all of the above ASP.NET Web API, SignalR and Identity improvements. You can also customize your MVC project and configure authentication using the One ASP.NET project creation wizard. The MVC templates have also been updated to use ASP.NET Identity and Bootstrap as well. An introductory tutorial to ASP.NET MVC 5 can be found at Getting Started with ASP.NET MVC 5. This release of ASP.NET MVC also supports several nice new MVC-specific features, including:

Authentication filters: These filters allow you to specify authentication logic per-action, per-controller or globally for all controllers.

Attribute Routing: Attribute Routing allows you to define your routes on actions or controllers.

To learn more go to http://www.asp.net/mvc

Entity Framework 6 Improvements

Visual Studio 2013 ships with Entity Framework 6, which brings a lot of great new features to the data access space:

Async and Task<T> Support

EF6's new Async Query and Save support enables you to perform asynchronous data access and take advantage of the Task<T> support introduced in .NET 4.5 within data access scenarios. This allows you to free up threads that might otherwise be blocked on data access requests, and enable them to be used to process other requests whilst you wait for the database engine to process operations. When the database server responds the thread will be re-queued within your ASP.NET application and execution will continue. This enables you to easily write significantly more scalable server code. The original post showed an example ASP.NET Web API action that makes use of the new EF6 async query methods; a hedged reconstruction appears below.

Interception and Logging

Interception and SQL logging allows you to view - or even change - every command that is sent to the database by Entity Framework. This includes a simple, human readable log - which is great for debugging - as well as some lower level building blocks that give you access to the command and results. The original post showed wiring up the simple log to Debug in the constructor of an MVC controller; see the reconstruction below.

Custom Code-First Conventions

The new Custom Code-First Conventions enable bulk configuration of a Code First model - reducing the amount of code you need to write and maintain. Conventions are great when your domain classes don't match the Code First conventions. For example, the following convention (also reconstructed below) configures all properties that are called 'Key' to be the primary key of the entity they belong to. This is different than the default Code First convention that expects Id or <type name>Id.

Connection Resiliency

The new Connection Resiliency feature in EF6 enables you to register an execution strategy to handle - and potentially retry - failed database operations.
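The EF6 examples in the original post were screenshots that don't survive in text form. Here is a hedged reconstruction of all four - the async Web API action, the Database.Log hookup, the 'Key' convention, and the code-based configuration that registers the Connection Resiliency strategy discussed above and below. All type names (MusicContext, Album, AlbumsController) are invented for illustration:

using System.Data.Entity;
using System.Data.Entity.SqlServer;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;
using System.Web.Http;

public class Album
{
    public int Key { get; set; }          // picked up by the convention below
    public string AlbumName { get; set; }
}

public class AlbumsController : ApiController
{
    // Async query support: the request thread is freed while SQL executes
    public async Task<IHttpActionResult> GetAlbums()
    {
        using (var db = new MusicContext())
        {
            var albums = await db.Albums.OrderBy(a => a.AlbumName).ToListAsync();
            return Ok(albums);
        }
    }
}

public class MusicContext : DbContext
{
    public DbSet<Album> Albums { get; set; }

    public MusicContext()
    {
        // Interception/logging: write every SQL command to the debug window
        Database.Log = s => Debug.WriteLine(s);
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Custom Code-First convention: any property named "Key" becomes
        // the primary key of its entity
        modelBuilder.Properties()
                    .Where(p => p.Name == "Key")
                    .Configure(c => c.IsKey());
    }
}

// Connection Resiliency: register a retrying execution strategy for
// SQL Azure via code-based configuration (discussed in the next paragraph)
public class MyDbConfiguration : DbConfiguration
{
    public MyDbConfiguration()
    {
        SetExecutionStrategy("System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy());
    }
}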
This is especially useful when deploying to cloud environments where dropped connections become more common as you traverse load balancers and distributed networks. EF6 includes a built-in execution strategy for SQL Azure that knows about retryable exception types and has some sensible - but overridable - defaults for the number of retries and time between retries when errors occur. Registering it is simple using the new Code-Based Configuration support; a hedged sketch of this registration is included above. These are just some of the new features in EF6. You can visit the release notes section of the Entity Framework site for a complete list of new features.

Microsoft OWIN Components

Open Web Interface for .NET (OWIN) defines an open abstraction between .NET web servers and web applications, and the ASP.NET "Katana" project brings this abstraction to ASP.NET. OWIN decouples the web application from the server, making web applications host-agnostic. For example, you can host an OWIN-based web application in IIS or self-host it in a custom process. For more information about OWIN and Katana, see What's new in OWIN and Katana.

Summary

Today's Visual Studio 2013, ASP.NET and Entity Framework release delivers some fantastic new features that streamline your web development lifecycle. These features span from server framework to data access to tooling to client-side HTML development. They also integrate some great open-source technology and contributions from our developer community. Download and start using them today!

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Do servlet containers prevent web applications from causing each other interference and how do they do it?

    - by chrisbunney
I know that a servlet container, such as Apache Tomcat, runs in a single instance of the JVM, which means all of its servlets will run in the same process. I also know that the architecture of the servlet container means each web application exists in its own context, which suggests it is isolated from other web applications. Accepting that each web application is isolated, I would expect that you could create 2 copies of an identical web application, change the names and context paths of each (as well as any other relevant configuration), and run them in parallel without one affecting the other. The answers to this question appear to support this view. However, a colleague disagrees based on their experience of attempting just that. They took a web application and tried to run 2 separate instances (with different names etc.) in the same servlet container and experienced issues with the 2 instances conflicting (I'm unable to elaborate more as I wasn't involved in that work). Based on this, they argue that since the web applications run in the same process space, they can't be isolated, and things such as class attributes would end up being inadvertently shared. This answer appears to suggest the same thing. The two views don't seem to be compatible, so I ask you: Do servlet containers prevent web applications deployed to the same container from conflicting with each other? If yes, how do they do this? If no, why does interference occur? And finally, under what circumstances could separate web applications conflict and cause each other interference - perhaps scenarios involving resources on the file system, native code, or database connections?

    Read the article

  • Force an ASP.NET 3.5 WebSite to use version 1.0.61025.0 of System.Web.Extensions

    - by Greg
I just upgraded my Web Site project from 2.0 to 3.5 to take advantage of the TimeZoneInfo class. When I did this, I started getting an ambiguous assembly error (see below). The problem is, I'm not using ScriptManager - an old version of SyncFusion is. I can't upgrade SyncFusion right now, so I need to tell ASP.NET to use version 1.0.61025.0 of the assembly. I ripped out all of the 3.5 script stuff from the web.config and added bindingRedirects to it, but it didn't work.

<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35" />
      <bindingRedirect oldVersion="3.5.0.0" newVersion="1.0.61025.0" />
    </dependentAssembly>
    <dependentAssembly>
      <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35" />
      <bindingRedirect oldVersion="3.5.0.0" newVersion="1.0.61025.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>

The error: The type 'System.Web.UI.ScriptManager' is ambiguous: it could come from assembly 'C:\inetpub\wwwroot\xxx\bin\System.Web.Extensions.DLL' or from assembly 'C:\WINDOWS\assembly\GAC_MSIL\System.Web.Extensions\3.5.0.0__31bf3856ad364e35\System.Web.Extensions.dll'. Please specify the assembly explicitly in the type name.

    Read the article

  • Is A Web App Feasible For A Heavy Use Data Entry System?

    - by Rob
Looking for opinions on this. We're working on a project that is essentially a data entry system for a production line: heavy data input by users who normally work in Excel or other thick-client data systems. We've been told (as a consequence) that we have to develop this as a thick client using .NET. Our argument was to develop it as a web app, as that resolves a lot of issues and would be easier to write and maintain. Their argument against the web is that (supposedly) the web is not ready yet for a heavy-duty data entry system, and that the web in a browser does not offer the speed, responsiveness, and fluid experience for the end user that a thick client can (citing things such as drag and drop, rapid auto-entry and data navigation, etc.). Personally, I think that with good form design and jQuery/AJAX, a web app could do everything a thick client does just as well, and they just don't know what they're talking about. The irony is that a thick client has to go to a lot more effort to manage the deployment and connectivity back to the central data server than a web app would need to, so in terms of speed I would expect a web app to be faster. What are the thoughts of those out there? Are there any modern data entry systems in production use that were developed as web apps, and if so, what technologies do they use? Appreciate any feedback. Regards, Rob.

    Read the article

  • Transactional Messaging in the Windows Azure Service Bus

    - by Alan Smith
Introduction

I'm currently working on broadening the content in the Windows Azure Service Bus Developer Guide. One of the features I have been looking at over the past week is the support for transactional messaging. When using the direct programming model and the WCF interface some, but not all, messaging operations can participate in transactions. This allows developers to improve the reliability of messaging systems. There are some limitations in the transactional model: transactions can only include one top-level messaging entity (such as a queue or topic; subscriptions are not top-level entities), and transactions cannot include other systems, such as databases. As the transaction model is currently not well documented I have had to figure out how things work through experimentation, with some help from the development team to confirm any questions I had. Hopefully I've got the content mostly correct; I will update the content in the e-book if I find any errors or improvements that can be made (any feedback would be very welcome). I've not had a chance to look into the code for transactions and asynchronous operations; maybe that would make a nice challenge lab for my Windows Azure Service Bus course.

Transactional Messaging

Messaging entities in the Windows Azure Service Bus provide support for participation in transactions. This allows developers to perform several messaging operations within a transactional scope, and ensure that all the actions are committed or, if there is a failure, none of the actions are committed. There are a number of scenarios where the use of transactions can increase the reliability of messaging systems.

Using TransactionScope

In .NET the TransactionScope class can be used to perform a series of actions in a transaction. The using declaration is typically used to define the scope of the transaction. Any transactional operations that are contained within the scope can be committed by calling the Complete method. If the Complete method is not called, any transactional methods in the scope will not commit.

// Create a transactional scope.
using (TransactionScope scope = new TransactionScope())
{
    // Do something.

    // Do something else.

    // Commit the transaction.
    scope.Complete();
}

In order for methods to participate in the transaction, they must provide support for transactional operations. Database and message queue operations typically provide support for transactions.

Transactions in Brokered Messaging

Transaction support in Service Bus Brokered Messaging allows message operations to be performed within a transactional scope; however, there are some limitations around what operations can be performed within the transaction. In the current release, only one top-level messaging entity, such as a queue or topic, can participate in a transaction, and the transaction cannot include any other transaction resource managers, making transactions spanning a messaging entity and a database not possible. When sending messages, the send operations can participate in a transaction, allowing multiple messages to be sent within a transactional scope. This allows for "all or nothing" delivery of a series of messages to a single queue or topic. When receiving messages, messages that are received in the peek-lock receive mode can be completed, deadlettered or deferred within a transactional scope. In the current release the Abandon method will not participate in a transaction.
The same restriction of only one top-level messaging entity applies here, so the Complete method can be called transactionally on messages received from the same queue, or on messages received from one or more subscriptions in the same topic.

Sending Multiple Messages in a Transaction

A transactional scope can be used to send multiple messages to a queue or topic. This will ensure that all the messages will be enqueued or, if the transaction fails to commit, no messages will be enqueued. An example of the code used to send 10 messages to a queue as a single transaction from a console application is shown below.

QueueClient queueClient = messagingFactory.CreateQueueClient(Queue1);

Console.Write("Sending");

// Create a transaction scope.
using (TransactionScope scope = new TransactionScope())
{
    for (int i = 0; i < 10; i++)
    {
        // Send a message
        BrokeredMessage msg = new BrokeredMessage("Message: " + i);
        queueClient.Send(msg);
        Console.Write(".");
    }
    Console.WriteLine("Done!");
    Console.WriteLine();

    // Should we commit the transaction?
    Console.WriteLine("Commit send 10 messages? (yes or no)");
    string reply = Console.ReadLine();
    if (reply.ToLower().Equals("yes"))
    {
        // Commit the transaction.
        scope.Complete();
    }
}
Console.WriteLine();
messagingFactory.Close();

The transaction scope is used to wrap the sending of 10 messages. Once the messages have been sent the user has the option to either commit the transaction or abandon the transaction. If the user enters "yes", the Complete method is called on the scope, which will commit the transaction and result in the messages being enqueued. If the user enters anything other than "yes", the transaction will not commit, and the messages will not be enqueued.

Receiving Multiple Messages in a Transaction

The receiving of multiple messages is another scenario where the use of transactions can improve reliability. When receiving a group of messages that are related together, maybe in the same message session, it is possible to receive the messages in the peek-lock receive mode, and then complete, defer, or deadletter the messages in one transaction. (In the current version of Service Bus, abandon is not transactional.) The following code shows how this can be achieved.

using (TransactionScope scope = new TransactionScope())
{
    while (true)
    {
        // Receive a message.
        BrokeredMessage msg = q1Client.Receive(TimeSpan.FromSeconds(1));
        if (msg != null)
        {
            // Write the message body and complete the message.
            string text = msg.GetBody<string>();
            Console.WriteLine("Received: " + text);
            msg.Complete();
        }
        else
        {
            break;
        }
    }
    Console.WriteLine();

    // Should we commit?
    Console.WriteLine("Commit receive? (yes or no)");
    string reply = Console.ReadLine();
    if (reply.ToLower().Equals("yes"))
    {
        // Commit the transaction.
        scope.Complete();
    }
    Console.WriteLine();
}

Note that if there are a large number of messages to be received, there will be a chance that the transaction may time out before it can be committed. It is possible to specify a longer timeout when the transaction is created, but it may be better to receive and commit smaller amounts of messages within the transaction.
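For reference, a longer timeout is just a TransactionScope constructor overload away - a minimal sketch (the five-minute value is arbitrary):

using (TransactionScope scope = new TransactionScope(
    TransactionScopeOption.Required, TimeSpan.FromMinutes(5)))
{
    // receive and complete messages here, then:
    scope.Complete();
}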
It is also possible to complete, defer, or deadletter messages received from more than one subscription, as long as all the subscriptions are contained in the same topic. As subscriptions are not top-level messaging entities, this scenario will work. The following code shows how this can be achieved.

try
{
    using (TransactionScope scope = new TransactionScope())
    {
        // Receive one message from each subscription.
        BrokeredMessage msg1 = subscriptionClient1.Receive();
        BrokeredMessage msg2 = subscriptionClient2.Receive();

        // Complete the message receives.
        msg1.Complete();
        msg2.Complete();

        Console.WriteLine("Msg1: " + msg1.GetBody<string>());
        Console.WriteLine("Msg2: " + msg2.GetBody<string>());

        // Commit the transaction.
        scope.Complete();
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
}

Unsupported Scenarios

The restriction of only one top-level messaging entity being able to participate in a transaction makes some useful scenarios unsupported. As the Windows Azure Service Bus is under continuous development and new releases are expected to be frequent, it is possible that this restriction may not be present in future releases. The first is the scenario where messages are to be routed to two different systems. The following code attempts to do this.

try
{
    // Create a transaction scope.
    using (TransactionScope scope = new TransactionScope())
    {
        BrokeredMessage msg1 = new BrokeredMessage("Message1");
        BrokeredMessage msg2 = new BrokeredMessage("Message2");

        // Send a message to Queue1
        Console.WriteLine("Sending Message1");
        queue1Client.Send(msg1);

        // Send a message to Queue2
        Console.WriteLine("Sending Message2");
        queue2Client.Send(msg2);

        // Commit the transaction.
        Console.WriteLine("Committing transaction...");
        scope.Complete();
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
}

When attempting to send a message to the second queue the following exception is thrown:

No active Transaction was found for ID '35ad2495-ee8a-4956-bbad-eb4fedf4a96e:1'. The Transaction may have timed out or attempted to span multiple top-level entities such as Queue or Topic. The server Transaction timeout is: 00:01:00..TrackingId:947b8c4b-7754-4044-b91b-4a959c3f9192_3_3,TimeStamp:3/29/2012 7:47:32 AM.

Another scenario where transactional support could be useful is when forwarding messages from one queue to another queue. This would also involve more than one top-level messaging entity, and is therefore not supported. Another scenario that developers may wish to implement is performing transactions across messaging entities and other transactional systems, such as an on-premise database. In the current release this is not supported.

Workarounds for Unsupported Scenarios

There are some techniques that developers can use to work around the one top-level entity limitation of transactions. When sending two messages to two systems, topics and subscriptions can be used. If the same message is to be sent to two destinations, then the subscriptions would use the default filters, and the client would only send one message. If two different messages are to be sent, then filters on the subscriptions can route the messages to the appropriate destination. The client can then send the two messages to the topic in the same transaction. A hedged sketch of this routing workaround follows.
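Here is a hedged sketch of that routing workaround. The topic name, subscription names, and the Destination property are invented, and namespaceManager and topicClient are assumed to have been created elsewhere:

// One-time setup: two filtered subscriptions stand in for the two queues
namespaceManager.CreateSubscription("RoutingTopic", "System1",
    new SqlFilter("Destination = 'System1'"));
namespaceManager.CreateSubscription("RoutingTopic", "System2",
    new SqlFilter("Destination = 'System2'"));

// Both sends target the same topic (one top-level entity),
// so they can share a transaction
using (TransactionScope scope = new TransactionScope())
{
    var msg1 = new BrokeredMessage("Message1");
    msg1.Properties["Destination"] = "System1";
    topicClient.Send(msg1);

    var msg2 = new BrokeredMessage("Message2");
    msg2.Properties["Destination"] = "System2";
    topicClient.Send(msg2);

    scope.Complete();
}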
In scenarios where a message needs to be received and then forwarded to another system within the same transaction, topics and subscriptions can also be used. A message can be received from a subscription, and then sent to a topic within the same transaction. As a topic is a top-level messaging entity, and a subscription is not, this scenario will work.
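A hedged sketch of that receive-and-forward pattern, assuming a subscriptionClient (peek-lock is the default receive mode) and a topicClient created elsewhere:

using (TransactionScope scope = new TransactionScope())
{
    // Receive from a subscription - not a top-level entity,
    // so it can share the transaction with the topic send
    BrokeredMessage msg = subscriptionClient.Receive();

    // Forward the payload on to a topic
    topicClient.Send(new BrokeredMessage(msg.GetBody<string>()));

    // Complete the receive and commit both operations atomically
    msg.Complete();
    scope.Complete();
}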

    Read the article

  • Webcast with Brian Griffin, Ancestry, 2013 Winner 10 Best Web Support Sites

    - by Tuula Fai
    The web is one of the fastest growing channels for providing service, support and information, as seen in The Service Council's (TSC) latest multi-channel research survey. Join TSC's Chief Customer Officer Sumair Dutta as he shares key findings from his current customer experience research from over 200 organizations. Sumair will be joined by Brian Griffin, Senior Program Manager, Global Support Experience, Ancestry.com who will show how Ancestry is using the web as a powerful tool to enhance self-service opportunities and increase customer engagement. Smarter Web Service Educast Thursday, November 14th 2 pm ET / 11 am PT Register: http://bit.ly/1cwz4Ns  

    Read the article

  • Overwriting TFS Web Services

    - by javarg
In this blog I will share a technique I used to intercept TFS Web Services calls. This technique is a very invasive one and requires you to overwrite default TFS Web Services behavior. I only recommend taking such an approach when other means of TFS extensibility fail to provide the same functionality (this is not a supported TFS extensibility point). For instance, intercepting and aborting a Work Item change operation could be implemented using this approach (consider TFS Subscribers functionality before taking this approach; check Martin's post about subscribers). So let's get started.

The technique consists of versioning the TFS Web Services .asmx service classes. If you look into TFS's ASMX services you will notice that versioning is supported by creating a class hierarchy between different product versions. For instance, let's take the Work Item management service .asmx. Check the following .asmx file located at:

%Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\_tfs_resources\WorkItemTracking\v3.0\ClientService.asmx

The .asmx references the class Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3:

<%-- Copyright (c) Microsoft Corporation. All rights reserved. --%>
<%@ webservice language="C#" Class="Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3" %>

The inheritance hierarchy for this service class (shown as a class diagram in the original post) follows the naming convention used for service versioning (ClientService3, ClientService2, ClientService). We will need to overwrite the latest service version provided by the product (in this case ClientService3 for TFS 2010). The following example intercepts and analyzes WorkItem fields. Suppose we need to validate state changes with more advanced logic than the provided validations/constraints of the process template.

Important: Back up the original .asmx file and create one of your own.
1. Create a Visual Studio Web App Project and include a new ASMX Web Service in the project.

2. Add the following references to the project (check the folder %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\):

Microsoft.TeamFoundation.Framework.Server.dll
Microsoft.TeamFoundation.Server.dll
Microsoft.TeamFoundation.WorkItemTracking.Client.QueryLanguage.dll
Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayer.dll
Microsoft.TeamFoundation.WorkItemTracking.Server.DataServices.dll

3. Replace the default service implementation with something similar to the following code:

/// <summary>
/// Inherit from ClientService3 to overwrite the default implementation
/// </summary>
[WebService(Namespace = "http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/ClientServices/03",
    Description = "Custom Team Foundation WorkItemTracking ClientService Web Service")]
public class CustomTfsClientService : ClientService3
{
    [WebMethod, SoapHeader("requestHeader", Direction = SoapHeaderDirection.In)]
    public override bool BulkUpdate(
        XmlElement package,
        out XmlElement result,
        MetadataTableHaveEntry[] metadataHave,
        out string dbStamp,
        out Payload metadata)
    {
        var xe = XElement.Parse(package.OuterXml);

        // We only intercept WorkItem updates (we can easily extend this sample
        // to capture any operation).
        var wit = xe.Element("UpdateWorkItem");
        if (wit != null)
        {
            if (wit.Attribute("WorkItemID") != null)
            {
                int witId = (int)wit.Attribute("WorkItemID");

                // With this Id I can query TFS for more detailed information,
                // using the TFS Client API (assuming the WIT already exists).
                var stateChanged =
                    wit.Element("Columns").Elements("Column").FirstOrDefault(c => (string)c.Attribute("Column") == "System.State");
                if (stateChanged != null)
                {
                    var newStateName = stateChanged.Element("Value").Value;
                    if (newStateName == "Resolved")
                    {
                        throw new Exception("Cannot change state to Resolved!");
                    }
                }
            }
        }

        // Finally, we call the base method implementation
        return base.BulkUpdate(package, out result, metadataHave, out dbStamp, out metadata);
    }
}

4. Build your solution and overwrite the original .asmx with the new implementation referencing our new service version (don't forget to back it up first).

5. Copy your project's .dll into the following path: %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin

6. Try saving a WorkItem into the Resolved state.

Enjoy!

    Read the article

  • Integrate BING API for Search inside ASP.Net web application

    - by sreejukg
As you might already know, Bing is the Microsoft search engine and is getting popular day by day. Bing offers APIs that can be integrated into your website to increase your website's functionality. At this moment, there are two important APIs available: the Bing Search API and Bing Maps. The Search API enables you to build applications that utilize Bing's technology. The API allows you to search multiple source types such as web, images, video, etc., and supports various output protocols such as JSON, XML, and SOAP. You will also be able to customize the search results as you wish for your public-facing website. The Bing Maps API allows you to build robust applications that use Bing Maps. In this article I am going to describe how you can integrate Bing search into your website.

In order to start using Bing, first you need to sign in to http://www.bing.com/toolbox/bingdeveloper/ using your Windows Live credentials. Click on the Sign in button; you will be asked to enter your Windows Live credentials. Once signed in you will be redirected to the Developer page. Here you can create applications and get an AppID for each application. Since I am a first-time user, I don't have any applications added. Click on the Add button to add a new application. You will be asked to enter certain details about your application. The fields are straightforward; the only thing you need to note is the website field, where you need to enter the website address from where you are going to use this application - and this field is optional too. Of course you need to agree to the terms and conditions and then click Save. Once you click on Save, the application will be created and the application ID will be available for your use.

Now we have the AppID. Basically Bing supports three protocols: JSON, XML and SOAP. JSON is useful if you want to call the search requests directly from the browser and use JavaScript to parse the results, thus JSON is the favorite choice for AJAX applications. XML is the alternative for applications that do not support SOAP, e.g. Flash, Silverlight, etc. SOAP is ideal for strongly typed languages and gives a request/response object model. In this article I am going to demonstrate how to search the Bing API using the SOAP protocol from an ASP.NET application.

For the purpose of this demonstration, I am going to create an ASP.NET project and implement the search functionality in an aspx page. Open Visual Studio, navigate to File -> New Project, and select ASP.NET empty web application; I named the project "BingSearchSample". Add a Search.aspx page to the project; once added, the solution explorer will look similar to the following. Now you need to add a web reference to the SOAP service available from Bing. To do this, from the solution explorer, right-click your project and select Add Service Reference. The new service reference dialog will appear. In the bottom left of the dialog, you can find the Advanced button; click on it. Now the service reference settings dialog will appear. In the bottom left, you can find the Add Web Reference button; click on it. The add web reference dialog will appear now. Enter the URL as http://api.bing.net/search.wsdl?AppID=<YourAppIDHere>&version=2.2 (replace <YourAppIDHere> with the AppID you have generated previously) and click on the button next to it. This will find the web service methods available. You can change the namespace suggested by Bing, but for the purpose of this demonstration I have accepted all the default settings.
Click on the Add reference button once you are done. Now the web reference to the Search service will be added to your project. You can find this under the solution explorer of your project. Now in the Search.aspx that you previously created, place a textbox, a button and a grid view. For the purpose of this demonstration, I have given the identifiers (ID) as txtSearch, btnSearch and gvSearch respectively. The idea is to search the text entered in the textbox using the Bing service and show the results in the grid view. In the design view, the Search.aspx looks as follows.

In the Search.aspx.cs page, add a using statement that points to net.bing.api. I have added code for the button click event handler (shown as a screenshot in the original article; a hedged reconstruction appears at the end of this article). The code is very straightforward - it just calls the service with your AppID, a query to search and a source for searching. Let us run this page and see the output when I enter Microsoft in my textbox.

If you want to search a specific site, you can include the site name in the query parameter. For example, the following query will search for the word Microsoft on the www.microsoft.com website:

searchRequest.Query = "site:www.microsoft.com Microsoft";

The output of this query is as follows. Integrating the Bing search API into your website is easy, and there is no limit on the customization of the interface you can do. There is no Bing branding required, so I believe this is a great option for web developers when they plan for site search.
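As promised above, here is a hedged reconstruction of the button click handler. The proxy type names (BingService, SearchRequest, SourceType) follow the classic web-reference pattern for the Bing SOAP API, but the exact names depend on the namespace you accepted when adding the web reference:

using BingSearchSample.net.bing.api; // generated web reference namespace (assumption)

protected void btnSearch_Click(object sender, EventArgs e)
{
    var service = new BingService();

    // Build the request: AppID, the user's query, and the Web source
    var searchRequest = new SearchRequest();
    searchRequest.AppId = "<YourAppIDHere>";
    searchRequest.Query = txtSearch.Text;
    searchRequest.Sources = new[] { SourceType.Web };

    SearchResponse response = service.Search(searchRequest);

    // Bind the title/description/URL of each hit to the grid
    gvSearch.DataSource = response.Web.Results;
    gvSearch.DataBind();
}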

    Read the article

  • Service Level Loggin/Tracing

    - by Ahsan Alam
We all love to develop services, right? First-timers want to learn technologies like WCF and Web Services. Some simply want to build services; others may find services a natural architectural decision for particular systems. Whatever the reason might be, services are commonly used in building a wide range of systems. Developers often encapsulate various functionality (small or big) within one or more services, and expose them for multiple applications. Sometimes from day one (and definitely over time) these services may evolve into a set of black boxes. Services or not, black boxes or not, issues and exceptions are sometimes hard to avoid, especially in highly evolving and transactional systems. We can try to be methodical with our unit testing, QA and overall process, but we may not be able to avoid some types of system issues. When issues arise from one or more highly transactional services, it becomes necessary to resolve them very quickly. When systems handle thousands of transactions in a matter of hours, some issues may not surface immediately. That is when service-level logging becomes very useful. Technologies such as WCF allow us to enable service-level tracing with minimal effort, but that may not provide us with the complete picture. Developers may need to add tracing within critical areas of the code, with various degrees of verbosity. Programmers can always utilize a logging framework such as the 'Logging Application Block' to get the job done. It may seem overkill sometimes, but I have noticed from my experience that service-level logging helps programmers trace many issues very quickly. A small hedged sketch of what this can look like follows.
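As a hedged illustration of service-level logging (this sketch uses System.Diagnostics.TraceSource rather than the Logging Application Block, and the service, operation and source names are made up for the example), a WCF-style service method might wrap its critical section like this:

using System;
using System.Diagnostics;

public class OrderService // : IOrderService (hypothetical service contract)
{
    private static readonly TraceSource Trace =
        new TraceSource("OrderService", SourceLevels.Information);

    public void SubmitOrder(int orderId)
    {
        // Entry trace helps reconstruct flows in high-volume transactional systems
        Trace.TraceEvent(TraceEventType.Information, 0,
            "SubmitOrder started for order {0}", orderId);
        try
        {
            // ... critical business logic here ...

            Trace.TraceEvent(TraceEventType.Information, 0,
                "SubmitOrder completed for order {0}", orderId);
        }
        catch (Exception ex)
        {
            // Log and rethrow so issues surface quickly in production
            Trace.TraceEvent(TraceEventType.Error, 0,
                "SubmitOrder failed for order {0}: {1}", orderId, ex);
            throw;
        }
    }
}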

    Read the article

  • The configuration section 'system.web.extensions' cannot be read because it is missing a section declaration

    - by Jeff Widmer
After upgrading an ASP.NET application from .NET Framework 3.5 to .NET Framework 4.0, I ran into the following error message dialog when trying to view any of the modules in IIS on the server. What happened is that during the upgrade, the web.config file was automatically converted for .NET Framework 4.0. This left behind an empty system.web.extensions section (a hedged reconstruction appears below the error text). It was an easy fix: just remove the unused system.web.extensions section.

ERROR DIALOG TEXT:
---------------------------
ASP
---------------------------
There was an error while performing this operation.
Details:
Filename: \\?\web.config
Line number: 171
Error: The configuration section 'system.web.extensions' cannot be read because it is missing a section declaration
---------------------------
OK
---------------------------
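For reference, the leftover portion of the web.config looked something like the following (a hedged reconstruction - the original post showed the actual file in a screenshot). Deleting the empty element resolves the error:

<configuration>
  <!-- ... other sections ... -->

  <!-- Leftover from the upgrade: the element is empty and has no matching
       <section> declaration in <configSections>, so IIS cannot read it.
       Delete it. -->
  <system.web.extensions>
  </system.web.extensions>
</configuration>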

    Read the article

< Previous Page | 125 126 127 128 129 130 131 132 133 134 135 136  | Next Page >