Search Results

Search found 50879 results on 2036 pages for 'web services interoperabi'.


  • Problem consuming Exchange Web Service 2010 with jax-ws metro

    - by Johan Karlberg
    I am trying to consume the Exchange 2010 Web Service interface using JAX-WS. I'm using JAX-WS 2.2 RI (Metro 2.0); 2.1 exhibited the same problem. I am running into trouble with Exchange, which returns "HTTP/1.1 415 Cannot process the message because the content type 'text/xml;charset=utf-8' was not the expected type 'text/xml; charset=utf-8'." as a response (2.1 quoted the charset value, otherwise same response). Apparently I need to dictate the exact Content-Type header for Exchange to be happy. Is there a way to do this without having to manually rebuild the dependency? I currently rely on published Maven artifacts, and would like to continue doing this if at all possible. The consuming process is a regular J2SE app, with no containers in sight. I have control of the application and can add pretty much anything required to the application's scope, but cannot add out-of-process items like proxy servers. The client classes were generated from local WSDL, but the charset specification is derived from constants declared in the JAX-WS RI implementation, not the generated code. The resulting HTTP transport is thus handled by the standard http/https client from Sun JRE5 or JRE6.

    Read the article

  • Difference between Cassini [IIS Express] and the VS Development Server or Expression Web

    - by anirudha
    An MVC 3 project can be run from both Expression Web and Visual Studio when it is opened as a website rather than a project, and it behaves the same in both. Even a BlogEngine.NET project opened in Visual Studio takes a while to start when you have many themes, but Expression Web debugs it in a second, because theme design doesn't matter for the code. Expression Web is good because it saves the time spent compiling when the changes are only small design tweaks and nothing in the back-end code changes. I found a little difference between Cassini and the VS development server: if an image path is written the wrong way, like <img src="//img.png"/> instead of <img src="/img.png"/> (the mistake being // instead of /), the page doesn't work when debugging in Expression Web or Visual Studio, but in Cassini it works fine. I also found that debugging BlogEngine.NET in Expression Web is a great thing, because Visual Studio takes something like a minute the first time you try to debug, while Expression Web does it in seconds. Expression Web also saves time when we design themes within it, and that's a very good option because the web is also made for design. So if you want to debug the application faster, use Cassini, but Expression Web debugging is a good option when Visual Studio takes a long time and Expression Web debugs in seconds.

    Read the article

  • Upcoming Directory Services Live Webcast - Improve Time-to-Market and Reduce Cost with Oracle Direct

    - by mark.wilcox
    We're doing another live webcast on May 27 - here are the details: Live Webcast: Improve Time-to-Market and Reduce Cost with Oracle Directory Services Event Date: Thursday, May 27, 2010 Event Time: 10:00 AM Pacific Time / 1:00 PM Eastern Time Organizations can spend up to 60% of their IT budgets on operational activities. • Are you being asked to do more, with fewer resources? • Have you had to lead a cost-cutting exercise in your IT department? • Do you have licenses for software and wonder whether you are getting the most out of those resources? • Do you want to be an Identity Hero inside your organization? Oracle brings leadership in Directory Services to help organizations identify ways to leverage Oracle Virtual Directory to reduce costs in their enterprise. This presentation will explore ways to use Oracle Virtual Directory to federate faster and to create architectures that meet aggressive time constraints for identity projects or mergers and acquisitions in a cost-conscious environment. -- Posted via email from Virtual Identity Dialogue

    Read the article

  • Working with Reporting Services Filters – Part 2: The LIKE Operator

    - by smisner
    In the first post of this series, I introduced the use of filters within the report rather than in the query. I included a list of filter operators, and then focused on the use of the IN operator. As I mentioned in the previous post, the use of some of these operators is not obvious, so I'm going to spend some time explaining them, as well as describing ways that you can use report filters in Reporting Services, in this series of blog posts. Now let's look at the LIKE operator. If you write T-SQL queries, you've undoubtedly used the LIKE operator to produce a query using the % symbol as a wildcard for multiple characters, like this:

        select * from DimProduct where EnglishProductName like '%Silver%'

    And you know that you can use the _ symbol as a wildcard for a single character, like this:

        select * from DimProduct where EnglishProductName like '_L Mountain Frame - Black, 4_'

    So when you encounter the LIKE operator in a Reporting Services filter, you probably expect it to work the same way. But it doesn't. You use the * symbol as a wildcard for multiple characters, as shown here:

        Expression              Data Type    Operator    Value
        [EnglishProductName]    Text         Like        *Silver*

    Note that you don't have to include quotes around the string that you use for comparison. Books Online has an example of using the % symbol as a wildcard for a single character, but I have not been able to successfully use this wildcard. If anyone has a working example, I'd love to see it!

    Read the article

  • Working with Reporting Services Filters–Part 5: OR Logic

    - by smisner
    When you combine multiple filters, Reporting Services uses AND logic. Once upon a time, there was actually a drop-down list for selecting AND or OR between filters, which was very confusing to people because often it was grayed out. Now that selection is gone, but no matter - it wouldn't help us solve the problem that I want to describe today. As with many problems, Reporting Services gives us more than one way to apply OR logic in a filter. If I want a filter to include this value OR that value for the same field, one approach is to set up the filter to use the IN operator, as I explained in Part 1 of this series. But what if I want to base the filter on two different fields? I need a different solution. Using the AdventureWorksDW2008R2 database, I have a report that lists product sales. Let's say that I want to filter this report to show only products that are Bikes (a category) OR products for which sales were greater than $1,000 in a year. If I set up the filter like this:

        Expression       Data Type    Operator    Value
        [Category]       Text         =           Bikes
        [SalesAmount]                 >           1000

    then AND logic is used, which means that both conditions must be true. That's not the result I want. Instead, I need to set up the filter like this:

        Expression:  =Fields!EnglishProductCategoryName.Value = "Bikes" OR Fields!SalesAmount.Value > 1000
        Data Type:   Boolean
        Operator:    =
        Value:       =True

    The OR logic needs to be part of the expression so that it can return a Boolean value that we test against the Value. Notice that I have used =True rather than True for the value. The filtered report appears below. Any non-bike product appears only if the total sales exceed $1,000, whereas Bikes appear regardless of sales. (You can't see it in this screenshot, but Mountain-400-W Silver, 38 has sales of $923 in 2007 but gets included because it is in the Bikes category.)

    Read the article

  • Returning data from SQL Server reporting web service call

    - by user79339
    Hi, I am generating a report that contains the version number. The version number is stored in the DB and retrieved/incremented as part of the report generation. The only problem is, I am calling SSRS via a web service call, which returns the generated report as a byte array. Is there any way to get the version number out of this generated report? For example, to display a dialog that says "You generated Status Report, Version number 3". I tried passing in an output parameter and setting it inside the stored proc. It's modified when I execute it in SQL Management Studio, but not in the reporting studio. Or at least I can't seem to bind to the modified, post-execution value (using the expression "=Parameters!ReportVersion.Value"). Of course, I could get/increment the version number from the database myself before calling the SSRS web service and pass it along as a parameter to the report, but that might cause concurrency problems. On the whole, it just seems neater for the stored proc to access/generate a version number and return it to the reporting engine, which would generate the report with the version number and return the updated version number to the web service client. Any thoughts/ideas?
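    For reference, here is a minimal sketch of the kind of web service call being described, assuming a client proxy generated from the ReportExecution2005.asmx endpoint (the report path and proxy variable names are illustrative, not from the question):

        // Hypothetical proxy generated from ReportExecution2005.asmx; names are illustrative.
        var rs = new ReportExecutionService();
        rs.Credentials = System.Net.CredentialCache.DefaultCredentials;

        rs.LoadReport("/Reports/StatusReport", null);

        string extension, mimeType, encoding;
        Warning[] warnings;
        string[] streamIds;

        // Render hands back only the rendered document as a byte array - nothing else comes
        // back from the report itself, which is why the version number would have to travel
        // through a report parameter or be read from the database separately.
        byte[] report = rs.Render("PDF", null,
            out extension, out mimeType, out encoding, out warnings, out streamIds);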

    Read the article

  • Duplicate C# web service proxy classes generated for Java types

    - by Sergey
    My question is about integration between a Java web service and a C# .NET client. Service: CXF 2.2.3 with Aegis databinding Client: C#, .NET 3.5 SP1 For some reason Visual Studio generates two C# proxy enums for each Java enum. The generated C# classes do not compile. For example, this Java enum: public enum SqlDialect { GENERIC, SYBASE, SQL_SERVER, ORACLE; } Produces this WSDL: <xsd:simpleType name="SqlDialect"> <xsd:restriction base="xsd:string"> <xsd:enumeration value="GENERIC" /> <xsd:enumeration value="SYBASE" /> <xsd:enumeration value="SQL_SERVER" /> <xsd:enumeration value="ORACLE" /> </xsd:restriction> </xsd:simpleType> For this WSDL Visual Studio generates two partial C# classes (generated comments removed): [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Runtime.Serialization", "3.0.0.0")] [System.Runtime.Serialization.DataContractAttribute(Name="SqlDialect", Namespace="http://somenamespace")] public enum SqlDialect : int { [System.Runtime.Serialization.EnumMemberAttribute()] GENERIC = 0, [System.Runtime.Serialization.EnumMemberAttribute()] SYBASE = 1, [System.Runtime.Serialization.EnumMemberAttribute()] SQL_SERVER = 2, [System.Runtime.Serialization.EnumMemberAttribute()] ORACLE = 3, } [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Xml", "2.0.50727.3082")] [System.SerializableAttribute()] [System.Xml.Serialization.XmlTypeAttribute(Namespace="http://somenamespace")] public enum SqlDialect { GENERIC, SYBASE, SQL_SERVER, ORACLE, } The resulting C# code does not compile: The namespace 'somenamespace' already contains a definition for 'SqlDialect' I will appreciate any ideas...

    Read the article

  • Calling web service from seems to hang

    - by anothershrubery
    I am trying to call an asmx web service from an Android app. Just literally started some Android development today. I've been following various solutions I have found on the net and on here and it seems it is more difficult than anticipated. I have tried various solutions and using KSoap2 seems to be the easiest way to implement this, well it would be if I could get it working. I have the following code which works up until a point: private class CallWebService extends AsyncTask<Void, Void, Void> { private static final String SOAP_ACTION = "http://tempuri.org/GetUser"; private static final String METHOD_NAME = "GetUser"; private static final String NAMESPACE = "http://tempuri.org/"; private static final String URL = "http://160.10.1.79:59315/Service1.asmx"; TextView tv; @Override protected Void doInBackground(Void... params) { tv=(TextView)findViewById(R.id.txtMessage); try { SoapObject request = new SoapObject(NAMESPACE, METHOD_NAME); SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); envelope.dotNet=true; envelope.setOutputSoapObject(request); HttpTransportSE androidHttpTransport = new HttpTransportSE(URL); androidHttpTransport.call(SOAP_ACTION, envelope); Object result = (Object)envelope.getResponse(); tv.setText(result.toString()); } catch (Exception e) { tv.setText(e.getMessage()); } return null; } } It seems to hang at the line androidHttpTransport.call(SOAP_ACTION, envelope); Any ideas why? Is this the correct approach? Should I be looking in another direction?

    Read the article

  • How to pass value in url using web request GET method asp.net

    - by Narasi
    I am new to .NET and web services. I would like to pass an id through the URL - how do I do that? Should it be done with the POST or the GET method? Please guide me.

        string url = "http://XXXXX//"+id=22;
        WebRequest request = WebRequest.Create(url);
        request.Proxy.Credentials = new NetworkCredential(xxxxx);
        request.Credentials = CredentialCache.DefaultCredentials;
        //add properties
        request.Method = "GET";
        request.ContentType = "application/json";
        //convert
        byte[] byteArray = Encoding.UTF8.GetBytes(data);
        request.ContentLength = byteArray.Length;
        //post data
        Stream streamdata = request.GetRequestStream();
        streamdata.Write(byteArray, 0, byteArray.Length);
        streamdata.Close();
        //response
        WebResponse response = request.GetResponse();
        // Get the stream containing content returned by the server.
        Stream dataStream = response.GetResponseStream();
        // Open the stream using a StreamReader for easy access.
        StreamReader reader = new StreamReader(dataStream);
        // Read the content.
        string responseFromServer = reader.ReadToEnd();
        // Clean up the streams and the response.
        reader.Close();
        response.Close();

    Thanks in advance
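    For comparison, here is a minimal sketch of a GET request where the id travels in the query string instead of in a request body (the URL and parameter name are placeholders, not from the question); it needs System.Net and System.IO:

        // Hypothetical endpoint; the id is appended to the URL as a query string parameter.
        int id = 22;
        string url = "http://example.com/service?id=" + id;

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET"; // a GET sends no request body, so there is no GetRequestStream call

        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            string responseFromServer = reader.ReadToEnd();
        }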

    Read the article

  • asmx web service returning xml instead of json in .net 4.0

    - by Baldy
    I have just upgraded a test copy of my site to ASP.NET 4.0 and have noticed a strange issue that only arises when I upload the site to my server. The site has an asmx web service that returns JSON, yet when I run the site on my server it returns XML. It has been working fine in ASP.NET 3.5 for over a year. The web method is decorated with the correct attributes...

        [WebMethod][ScriptMethod(ResponseFormat = ResponseFormat.Json)]
        public List<LocationRecentChange> RecentChanges()

    and on my local machine it returns JSON, yet on the server (Windows 2008 64-bit) it returns XML. You can inspect the response on the test site here... my test site. Using the Firebug console you will see a 200 OK response and a bunch of XML, while on my local machine the data returned is the JSON I expect. Here is the JavaScript that calls the service..

        function loadRecentData() {
            $.ajax({
                type: "POST",
                url: "service/spots.asmx/RecentChanges",
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: loadRecentUpdates,
                failure: function(msg) {
                    //alert(msg);
                }
            });
        }

    Any suggestions welcome, this has got me stumped!

    Read the article

  • Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix

    - by ScottGu
    I’m excited to announce the release today of several products: ASP.NET MVC 3 NuGet IIS Express 7.5 SQL Server Compact Edition 4 Web Deploy and Web Farm Framework 2.0 Orchard 1.0 WebMatrix 1.0 The above products are all free. They build upon the .NET 4 and VS 2010 release, and add a ton of additional value to ASP.NET (both Web Forms and MVC) and the Microsoft Web Server stack. ASP.NET MVC 3 Today we are shipping the final release of ASP.NET MVC 3.  You can download and install ASP.NET MVC 3 here.  The ASP.NET MVC 3 source code (released under an OSI-compliant open source license) can also optionally be downloaded here. ASP.NET MVC 3 is a significant update that brings with it a bunch of great features.  Some of the improvements include: Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to continuing to support/enhance the existing .aspx view engine).  Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, with Razor you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type.  You can learn more about Razor from some of the blog posts I’ve done about it over the last 6 months Introducing Razor New @model keyword in Razor Layouts with Razor Server-Side Comments with Razor Razor’s @: and <text> syntax Implicit and Explicit code nuggets with Razor Layouts and Sections with Razor Today’s release supports full code intellisense support for Razor (both VB and C#) with Visual Studio 2010 and the free Visual Web Developer 2010 Express. JavaScript Improvements ASP.NET MVC 3 enables richer JavaScript scenarios and takes advantage of emerging HTML5 capabilities. The AJAX and Validation helpers in ASP.NET MVC 3 now use an Unobtrusive JavaScript based approach.  Unobtrusive JavaScript avoids injecting inline JavaScript into HTML, and enables cleaner separation of behavior using the new HTML 5 “data-“ attribute convention (which conveniently works on older browsers as well – including IE6). This keeps your HTML tight and clean, and makes it easier to optionally swap out or customize JS libraries.  ASP.NET MVC 3 now includes built-in support for posting JSON-based parameters from client-side JavaScript to action methods on the server.  This makes it easier to exchange data across the client and server, and build rich JavaScript front-ends.  We think this capability will be particularly useful going forward with scenarios involving client templates and data binding (including the jQuery plugins the ASP.NET team recently contributed to the jQuery project).  Previous releases of ASP.NET MVC included the core jQuery library.  ASP.NET MVC 3 also now ships the jQuery Validate plugin (which our validation helpers use for client-side validation scenarios).  We are also now shipping and including jQuery UI by default as well (which provides a rich set of client-side JavaScript UI widgets for you to use within projects). Improved Validation ASP.NET MVC 3 includes a bunch of validation enhancements that make it even easier to work with data. Client-side validation is now enabled by default with ASP.NET MVC 3 (using an onbtrusive javascript implementation).  
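    As a small illustration of the validation defaults described above (this model class is made up for the example, not taken from the release itself), the DataAnnotations attributes below drive both the server-side checks and the client-side validation that is now on by default:

        // Hypothetical view model; the validation attributes come from System.ComponentModel.DataAnnotations,
        // and [Compare] is the new attribute that ships with ASP.NET MVC 3.
        public class RegisterModel
        {
            [Required]
            [StringLength(20, MinimumLength = 3)]
            public string UserName { get; set; }

            [Required]
            [DataType(DataType.Password)]
            public string Password { get; set; }

            [Compare("Password")]
            public string ConfirmPassword { get; set; }
        }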
Today’s release also includes built-in support for Remote Validation - which enables you to annotate a model class with a validation attribute that causes ASP.NET MVC to perform a remote validation call to a server method when validating input on the client. The validation features introduced within .NET 4’s System.ComponentModel.DataAnnotations namespace are now supported by ASP.NET MVC 3.  This includes support for the new IValidatableObject interface – which enables you to perform model-level validation, and allows you to provide validation error messages specific to the state of the overall model, or between two properties within the model.  ASP.NET MVC 3 also supports the improvements made to the ValidationAttribute class in .NET 4.  ValidationAttribute now supports a new IsValid overload that provides more information about the current validation context, such as what object is being validated.  This enables richer scenarios where you can validate the current value based on another property of the model.  We’ve shipped a built-in [Compare] validation attribute  with ASP.NET MVC 3 that uses this support and makes it easy out of the box to compare and validate two property values. You can use any data access API or technology with ASP.NET MVC.  This past year, though, we’ve worked closely with the .NET data team to ensure that the new EF Code First library works really well for ASP.NET MVC applications.  These two posts of mine cover the latest EF Code First preview and demonstrates how to use it with ASP.NET MVC 3 to enable easy editing of data (with end to end client+server validation support).  The final release of EF Code First will ship in the next few weeks. Today we are also publishing the first preview of a new MvcScaffolding project.  It enables you to easily scaffold ASP.NET MVC 3 Controllers and Views, and works great with EF Code-First (and is pluggable to support other data providers).  You can learn more about it – and install it via NuGet today - from Steve Sanderson’s MvcScaffolding blog post. Output Caching Previous releases of ASP.NET MVC supported output caching content at a URL or action-method level. With ASP.NET MVC V3 we are also enabling support for partial page output caching – which allows you to easily output cache regions or fragments of a response as opposed to the entire thing.  This ends up being super useful in a lot of scenarios, and enables you to dramatically reduce the work your application does on the server.  The new partial page output caching support in ASP.NET MVC 3 enables you to easily re-use cached sub-regions/fragments of a page across multiple URLs on a site.  It supports the ability to cache the content either on the web-server, or optionally cache it within a distributed cache server like Windows Server AppFabric or memcached. I’ll post some tutorials on my blog that show how to take advantage of ASP.NET MVC 3’s new output caching support for partial page scenarios in the future. Better Dependency Injection ASP.NET MVC 3 provides better support for applying Dependency Injection (DI) and integrating with Dependency Injection/IOC containers. With ASP.NET MVC 3 you no longer need to author custom ControllerFactory classes in order to enable DI with Controllers.  
You can instead just register a Dependency Injection framework with ASP.NET MVC 3 and it will resolve dependencies not only for Controllers, but also for Views, Action Filters, Model Binders, Value Providers, Validation Providers, and Model Metadata Providers that you use within your application. This makes it much easier to cleanly integrate dependency injection within your projects. Other Goodies ASP.NET MVC 3 includes dozens of other nice improvements that help to both reduce the amount of code you write, and make the code you do write cleaner.  Here are just a few examples: Improved New Project dialog that makes it easy to start new ASP.NET MVC 3 projects from templates. Improved Add->View Scaffolding support that enables the generation of even cleaner view templates. New ViewBag property that uses .NET 4’s dynamic support to make it easy to pass late-bound data from Controllers to Views. Global Filters support that allows specifying cross-cutting filter attributes (like [HandleError]) across all Controllers within an app. New [AllowHtml] attribute that allows for more granular request validation when binding form posted data to models. Sessionless controller support that allows fine grained control over whether SessionState is enabled on a Controller. New ActionResult types like HttpNotFoundResult and RedirectPermanent for common HTTP scenarios. New Html.Raw() helper to indicate that output should not be HTML encoded. New Crypto helpers for salting and hashing passwords. And much, much more… Learn More about ASP.NET MVC 3 We will be posting lots of tutorials and samples on the http://asp.net/mvc site in the weeks ahead.  Below are two good ASP.NET MVC 3 tutorials available on the site today: Build your First ASP.NET MVC 3 Application: VB and C# Building the ASP.NET MVC 3 Music Store We’ll post additional ASP.NET MVC 3 tutorials and videos on the http://asp.net/mvc site in the future. Visit it regularly to find new tutorials as they are published. How to Upgrade Existing Projects ASP.NET MVC 3 is compatible with ASP.NET MVC 2 – which means it should be easy to update existing MVC projects to ASP.NET MVC 3.  The new features in ASP.NET MVC 3 build on top of the foundational work we’ve already done with the MVC 1 and MVC 2 releases – which means that the skills, knowledge, libraries, and books you’ve acquired are all directly applicable with the MVC 3 release.  MVC 3 adds new features and capabilities – it doesn’t obsolete existing ones. You can upgrade existing ASP.NET MVC 2 projects by following the manual upgrade steps in the release notes.  Alternatively, you can use this automated ASP.NET MVC 3 upgrade tool to easily update your  existing projects. Localized Builds Today’s ASP.NET MVC 3 release is available in English.  We will be releasing localized versions of ASP.NET MVC 3 (in 9 languages) in a few days.  I’ll blog pointers to the localized downloads once they are available. NuGet Today we are also shipping NuGet – a free, open source, package manager that makes it easy for you to find, install, and use open source libraries in your projects. It works with all .NET project types (including ASP.NET Web Forms, ASP.NET MVC, WPF, WinForms, Silverlight, and Class Libraries).  You can download and install it here. NuGet enables developers who maintain open source projects (for example, .NET projects like Moq, NHibernate, Ninject, StructureMap, NUnit, Windsor, Raven, Elmah, etc) to package up their libraries and register them with an online gallery/catalog that is searchable.  
The client-side NuGet tools – which include full Visual Studio integration – make it trivial for any .NET developer who wants to use one of these libraries to easily find and install it within the project they are working on. NuGet handles dependency management between libraries (for example: library1 depends on library2). It also makes it easy to update (and optionally remove) libraries from your projects later. It supports updating web.config files (if a package needs configuration settings). It also allows packages to add PowerShell scripts to a project (for example: scaffold commands). Importantly, NuGet is transparent and clean – and does not install anything at the system level. Instead it is focused on making it easy to manage libraries you use with your projects. Our goal with NuGet is to make it as simple as possible to integrate open source libraries within .NET projects.  NuGet Gallery This week we also launched a beta version of the http://nuget.org web-site – which allows anyone to easily search and browse an online gallery of open source packages available via NuGet.  The site also now allows developers to optionally submit new packages that they wish to share with others.  You can learn more about how to create and share a package here. There are hundreds of open-source .NET projects already within the NuGet Gallery today.  We hope to have thousands there in the future. IIS Express 7.5 Today we are also shipping IIS Express 7.5.  IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios.  It works for both ASP.NET Web Forms and ASP.NET MVC project types. We think IIS Express combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built-into Visual Studio today with the full power of IIS.  Specifically: It’s lightweight and easy to install (less than 5Mb download and a quick install) It does not require an administrator account to run/debug applications from Visual Studio It enables a full web-server feature set – including SSL, URL Rewrite, and other IIS 7.x modules It supports and enables the same extensibility model and web.config file settings that IIS 7.x support It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all) It works on Windows XP and higher operating systems – giving you a full IIS 7.x developer feature-set on all Windows OS platforms IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk.  It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios.  You can also optionally redistribute IIS Express with your own applications if you want a lightweight web-server.  The standard IIS Express EULA now includes redistributable rights. Visual Studio 2010 SP1 adds support for IIS Express.  Read my VS 2010 SP1 and IIS Express blog post to learn more about what it enables.  SQL Server Compact Edition 4 Today we are also shipping SQL Server Compact Edition 4 (aka SQL CE 4).  SQL CE is a free, embedded, database engine that enables easy database storage. No Database Installation Required SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. 
You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications. Works with Existing Data APIs SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today. Supports Development, Testing and Production Scenarios SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free. Tooling Support with VS 2010 SP1 Visual Studio 2010 SP1 adds support for SQL CE 4 and ASP.NET Projects.  Read my VS 2010 SP1 and SQL CE 4 blog post to learn more about what it enables.  Web Deploy and Web Farm Framework 2.0 Today we are also releasing Microsoft Web Deploy V2 and Microsoft Web Farm Framework V2.  These services provide a flexible and powerful way to deploy ASP.NET applications onto either a single server, or across a web farm of machines. You can learn more about these capabilities from my previous blog posts on them: Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Visit the http://iis.net website to learn more and install them. Both are free. Orchard 1.0 Today we are also releasing Orchard v1.0.  Orchard is a free, open source, community based project.  It provides Content Management System (CMS) and Blogging System support out of the box, and makes it possible to easily create and manage web-sites without having to write code (site owners can customize a site through the browser-based editing tools built-into Orchard).  Read these tutorials to learn more about how you can setup and manage your own Orchard site. Orchard itself is built as an ASP.NET MVC 3 application using Razor view templates (and by default uses SQL CE 4 for data storage).  Developers wishing to extend an Orchard site with custom functionality can open and edit it as a Visual Studio project – and add new ASP.NET MVC Controllers/Views to it.  WebMatrix 1.0 WebMatrix is a new, free, web development tool from Microsoft that provides a suite of technologies that make it easier to enable website development.  It enables a developer to start a new site by browsing and downloading an app template from an online gallery of web applications (which includes popular apps like Umbraco, DotNetNuke, Orchard, WordPress, Drupal and Joomla).  Alternatively it also enables developers to create and code web sites from scratch. WebMatrix is task focused and helps guide developers as they work on sites.  
WebMatrix includes IIS Express, SQL CE 4, and ASP.NET - providing an integrated web-server, database and programming framework combination.  It also includes built-in web publishing support which makes it easy to find and deploy sites to web hosting providers. You can learn more about WebMatrix from my Introducing WebMatrix blog post this summer.  Visit http://microsoft.com/web to download and install it today. Summary I’m really excited about today’s releases – they provide a bunch of additional value that makes web development with ASP.NET, Visual Studio and the Microsoft Web Server a lot better.  A lot of folks worked hard to share this with you today. On behalf of my whole team – we hope you enjoy them! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • JSON/AJAX web service call gets redirected (302 Object moved) and then 500 unknown web service name

    - by user646499
    I have been struggling with this for some time. I searched around but didn't get a solution yet. This is only an issue on the production server; in my development environment everything works just fine. I am using jQuery/Ajax to update product information based on the product's Color attribute. For example, a product has colors A and B, and the price for color A is different from color B. When the user chooses a different color, the price information is updated as well. What I did first is add a JavaScript function:

        function updateProductVariant() {
            var myval = jQuery("#<%=colorDropDownList.ClientID%").val();
            jQuery.ajax({
                type: "POST",
                url: "/Products.aspx/GetVariantInfoById",
                data: "{variantId:'" + myval + "'}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (response) {
                    var obj = jQuery.parseJSON(response.d);
                    jQuery("#<%=lblStockAvailablity.ClientID%>").text(obj.StockMessage);
                    jQuery(".price .productPrice").text(obj.CurrentPrice);
                    jQuery(".price .oldProductPrice").text(obj.OldPrice);
                }
            });
        }

    Then I register the dropdown list's 'onclick' event to point to the 'updateProductVariant' function. GetVariantInfoById is a WebMethod in the code-behind; it's also very straightforward:

        [WebMethod]
        public static string GetVariantInfoById(string variantId)
        {
            int id = Convert.ToInt32(variantId);
            ProductVariant productVariant = IoC.Resolve().GetProductVariantById(id);
            string stockMessage = productVariant.FormatStockMessage();
            StringBuilder oBuilder = new StringBuilder();
            oBuilder.Append("{");
            oBuilder.AppendFormat(@"""{0}"":""{1}"",", "StockMessage", stockMessage);
            oBuilder.AppendFormat(@"""{0}"":""{1}"",", "OldPrice", PriceHelper.FormatPrice(productVariant.OldPrice));
            oBuilder.AppendFormat(@"""{0}"":""{1}""", "CurrentPrice", PriceHelper.FormatPrice(productVariant.Price));
            oBuilder.Append("}");
            return oBuilder.ToString();
        }

    All of this works fine on my local box, but when I upload to the production server and capture the traffic using Fiddler, /Products.aspx/GetVariantInfoById becomes a GET call instead of a POST. On my local box:

        HTTP/1.1 200 OK
        Server: ASP.NET Development Server/10.0.0.0
        Date: Thu, 03 Mar 2011 09:00:08 GMT
        X-AspNet-Version: 4.0.30319
        Cache-Control: private, max-age=0
        Content-Type: application/json; charset=utf-8
        Content-Length: 117
        Connection: Close

    But when deployed on the host, it becomes:

        HTTP/1.1 302 Found
        Proxy-Connection: Keep-Alive
        Connection: Keep-Alive
        Content-Length: 186
        Via: 1.1 ADV-TMG-01
        Date: Thu, 03 Mar 2011 08:59:12 GMT
        Location: http://www.pocomaru.com/Products.aspx/GetVariantInfoById/default.aspx
        Server: Microsoft-IIS/7.0
        X-Powered-By: ASP.NET

    Then, of course, it returns a 500 error: "Unknown web method GetVariantInfoById/default.aspx. Parameter name: methodName". Can someone please take a look? I think it's some configuration on the production server which automatically redirects my web service call to some other URL, but since the production server is out of my control, I am seeking a local fix for this. Thanks for the help!

    Read the article

  • What am I missing about WCF?

    - by Bigtoe
    I've been developing in MS technologies for longer than I care to remember at this stage. When .NET arrived on the scene I thought they hit the nail on the head, and with each iteration and version I thought their technologies were getting stronger and stronger, and I looked forward to each release. However, having had to work with WCF for the last year I must say I found the technology very difficult to work with and understand. Initially it's quite appealing, but when you start getting into the guts of it, configuration is a nightmare: having to override behaviours for message sizes and the number of objects contained in a message, the complexity of the security model, disposing of proxies when faulted, and finally moving back to defining interfaces in code rather than in XML. It just does not work out of the box and I think it should. We found all of the above issues while either testing ourselves or else when our products were out on site. I do understand the rationale behind it all, but surely they could have come up with a simpler implementation mechanism. I suppose what I'm asking is: am I looking at WCF the wrong way? What strengths does it have over the alternatives? Under what circumstances should I choose to use WCF? OK folks, sorry about the delay in responding, work does have a nasty habit of getting in the way sometimes :) Some clarifications. My main pain points with WCF fall into the following areas: While it does work out of the box, you're left with some major surprises under the hood. As pointed out above, basic things are restricted until they are overridden - the size of string that can be passed can't be over 8K, the number of objects that can be passed in a single message is restricted, and proxies do not automatically recover from failures. The amount of configuration, while having it there is a good thing, can be difficult to understand - knowing what to use and under which circumstances, especially when deploying software on site with different security requirements etc. When talking about configuration, we've had to hide lots of ours in a back-end database because security and network people on-site were trying to change things in configuration files without understanding them. Keeping the configuration of the interfaces in code rather than moving to explicitly defined interfaces in XML, which could be published and consumed by almost anything - I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it. I know the world moves on - I've moved on a number of times over the (ahem) 22 years I've been developing, and am actively using WCF - so don't get me wrong, I do understand what it's for and where it's heading. I just think there should be simpler configuration/deployment options available, easier set-up and better management for configuration (a SQL config provider maybe, rather than just the web.config/app.config files). OK, back to the daily grind. Thanks for all your replies so far. Kind regards, Noel

    Read the article

  • NullPointerException when trying to connect to web service using kSoap method Android

    - by benjamin schultz
    My web service should be returning an integer, but every time i run the code i get the NullPointerException error. Any ideas or help would be very appreciated Here's my code: public class CGCountTest extends Activity { TextView testTV; private static final String NAMESPACE = "http://passport-america.com/webservices/"; private static final String URL = "http://localhost:11746/Service1.asmx"; private static final String SOAP_ACTION = "http://www.passport-america.com/webservices/getCGCount"; private static final String METHOD_NAME = "getCGCount"; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.soap_test); SoapObject request = new SoapObject(NAMESPACE, METHOD_NAME); SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); envelope.dotNet = true; envelope.setOutputSoapObject(request); HttpTransportSE androidHttpTransport = new HttpTransportSE(URL); try { androidHttpTransport.call(SOAP_ACTION, envelope); java.lang.Integer result = (Integer)envelope.getResponse(); TextView testTV = (TextView)findViewById(R.id.testTV); result.toString(); testTV.setText(result); } catch(Exception e) { testTV.setText(e.getMessage()); } } here's the logcat 06-02 15:13:36.557: WARN/dalvikvm(326): threadid=3: thread exiting with uncaught exception (group=0x4001aa28) 06-02 15:13:36.557: ERROR/AndroidRuntime(326): Uncaught handler: thread main exiting due to uncaught exception 06-02 15:13:36.876: ERROR/AndroidRuntime(326): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.pa.passammain/com.pa.passammain.CGCountTest}: java.lang.NullPointerException 06-02 15:13:36.876: ERROR/AndroidRuntime(326): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2401) 06-02 15:13:36.876: ERROR/AndroidRuntime(326): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2417) 06-02 15:13:36.876: ERROR/AndroidRuntime(326): at android.app.ActivityThread.access$2100(ActivityThread.java:116) 06-02 15:13:36.876: ERROR/AndroidRuntime(326): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1794) 06-02 15:13:36.876: ERROR/AndroidRuntime(326): at android.os.Handler.dispatchMessage(Handler.java:99) 06-02 15:13:36.876: ERROR/AndroidRuntime(326): at android.os.Looper.loop(Looper.java:123) 06-02 15:13:36.876: ERROR/AndroidRuntime(326): at android.app.ActivityThread.main(ActivityThread.java:4203) 06-02 15:13:36.876: ERROR/AndroidRuntime(326): at java.lang.reflect.Method.invokeNative(Native Method) 06-02 15:13:36.876: ERROR/AndroidRuntime(326): at java.lang.reflect.Method.invoke(Method.java:521) 06-02 15:13:36.876: ERROR/AndroidRuntime(326): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791) 06-02 15:13:36.876: ERROR/AndroidRuntime(326): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549) 06-02 15:13:36.876: ERROR/AndroidRuntime(326): at dalvik.system.NativeStart.main(Native Method) 06-02 15:13:36.876: ERROR/AndroidRuntime(326): Caused by: java.lang.NullPointerException 06-02 15:13:36.876: ERROR/AndroidRuntime(326): at com.pa.passammain.CGCountTest.onCreate(CGCountTest.java:46) 06-02 15:13:36.876: ERROR/AndroidRuntime(326): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1123) 06-02 15:13:36.876: ERROR/AndroidRuntime(326): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2364) 06-02 15:13:36.876: ERROR/AndroidRuntime(326): ... 
    11 more

    I think my URL string may be the problem, but I've tried using my IP with no luck.

    Read the article

  • Loading XML from Web Service

    - by Lukasz
    I am connecting to a web service to get some data back out as xml. The connection works fine and it returns the xml data from the service. var remoteURL = EveApiUrl; var postData = string.Format("userID={0}&apikey={1}&characterID={2}", UserId, ApiKey, CharacterId); var request = (HttpWebRequest)WebRequest.Create(remoteURL); request.Method = "POST"; request.ContentLength = postData.Length; request.ContentType = "application/x-www-form-urlencoded"; // Setup a stream to write the HTTP "POST" data var WebEncoding = new ASCIIEncoding(); var byte1 = WebEncoding.GetBytes(postData); var newStream = request.GetRequestStream(); newStream.Write(byte1, 0, byte1.Length); newStream.Close(); var response = (HttpWebResponse)request.GetResponse(); var receiveStream = response.GetResponseStream(); var readStream = new StreamReader(receiveStream, Encoding.UTF8); var webdata = readStream.ReadToEnd(); Console.WriteLine(webdata); This prints out the xml that comes from the service. I can also save the xml as an xml file like so; TextWriter writer = new StreamWriter(@"C:\Projects\TrainingSkills.xml"); writer.WriteLine(webdata); writer.Close(); Now I can load the file as an XDocument to perform queries on it like this; var data = XDocument.Load(@"C:\Projects\TrainingSkills.xml"); What my problem is that I don't want to save the file and then load it back again. When I try to load directly from the stream I get an exception, Illegal characters in path. I don't know what is going on, if I can load the same xml as a text file why can't I load it as a stream. The xml is like this; <?xml version='1.0' encoding='UTF-8'?> <eveapi version="2"> <currentTime>2010-04-28 17:58:27</currentTime> <result> <currentTQTime offset="1">2010-04-28 17:58:28</currentTQTime> <trainingEndTime>2010-04-29 02:48:59</trainingEndTime> <trainingStartTime>2010-04-28 00:56:42</trainingStartTime> <trainingTypeID>3386</trainingTypeID> <trainingStartSP>8000</trainingStartSP> <trainingDestinationSP>45255</trainingDestinationSP> <trainingToLevel>4</trainingToLevel> <skillInTraining>1</skillInTraining> </result> <cachedUntil>2010-04-28 18:58:27</cachedUntil> </eveapi> Thanks for your help!
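    For what it's worth, a small sketch of the distinction at play here (this is standard LINQ to XML behaviour, not specific to the service above): XDocument.Load(string) treats the string as a file path or URI, which is why passing the XML text itself produces the "Illegal characters in path" error, while XDocument.Parse takes the XML content directly:

        // webdata already holds the XML text read from the response stream.
        var data = XDocument.Parse(webdata);

        // Alternatively, Load can read straight from the TextReader/Stream instead of ReadToEnd:
        // var data = XDocument.Load(readStream);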

    Read the article

  • retrieving information from web service calls

    - by Monte Chan
    Hi all, I am trying to retrieve information from a web service call. The following is what I have so far. In my text view, it is showing Map {item=anyType{key=TestKey; value=2;}; item=anyType{key=TestField; value=adsfasd; };} When I ran that in the debugger, I can see the information above in the variable, tempvar. But the question is, how do I retrieve the information (i.e. the actual values of "key" and "value" in each of the array positions)? Yes, I know there is a lot going on in onCreate and I will fix it later. Thanks in advance, Monte My codes are as follows, import java.util.Vector; import android.app.Activity; import android.os.Bundle; import android.widget.TextView; import org.ksoap2.SoapEnvelope; import org.ksoap2.serialization.SoapObject; import org.ksoap2.serialization.SoapSerializationEnvelope; import org.ksoap2.transport.AndroidHttpTransport; public class ViewHitUpActivity extends Activity { private static final String SOAP_ACTION = "test_function"; private static final String METHOD_NAME = "test_function"; private static final String NAMESPACE = "http://www.monteandjanicechan.com/"; private static final String URL = "http://www.monteandjanicechan.com/ws/test_ws.cfc?wsdl"; // private Object resultRequestSOAP = null; private TextView tv; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); SoapObject request = new SoapObject(NAMESPACE, METHOD_NAME); tv = (TextView)findViewById(R.id.people_view); //SoapObject request.addProperty("test_item", "1"); SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); envelope.setOutputSoapObject(request); AndroidHttpTransport androidHttpTransport = new AndroidHttpTransport(URL); try { androidHttpTransport.call(SOAP_ACTION, envelope); /* resultRequestSOAP = envelope.getResponse(); Vector tempResult = (Vector) resultRequestSOAP("test_functionReturn"); */ SoapObject resultsRequestSOAP = (SoapObject) envelope.bodyIn; Vector tempResult = (Vector) resultsRequestSOAP.getProperty("test_functionReturn"); int testsize = tempResult.size(); // SoapObject test = (SoapObject) tempResult.get(0); //String[] results = (String[]) resultRequestSOAP; Object tempvar = tempResult.elementAt(1); tv.setText(tempvar.toString()); } catch (Exception aE) { aE.printStackTrace (); tv.setText(aE.getClass().getName() + ": " + aE.getMessage()); } } }

    Read the article

  • WP7 Return the last 7 days of data from an xml web service

    - by cvandal
    Hello, I'm trying to return the last 7 days of data from an XML web service, but with no luck. Could someone please explain to me how I would accomplish this? The XML is as follows:

        <node>
          <api>
            <usagelist>
              <usage day="2011-01-01">
                <traffic name="total" unit="bytes">23579797</traffic>
              </usage>
              <usage day="2011-01-02">
                <traffic name="total" unit="bytes">23579797</traffic>
              </usage>
              <usage day="2011-01-03">
                <traffic name="total" unit="bytes">23579797</traffic>
              </usage>
              <usage day="2011-01-04">
                <traffic name="total" unit="bytes">23579797</traffic>
              </usage>
            </usagelist>
          </api>
        </node>

    EDIT: The data I want to retrieve will be used to populate a line graph. Specifically, I require the day attribute value and the traffic element value for the past 7 days. At the moment I have the code below in place, however it's only showing the first day 7 times and the traffic for the first day 7 times.

        XDocument xDocument = XDocument.Parse(e.Result);
        var values = from query in xDocument.Descendants("usagelist")
                     select new History
                     {
                         day = query.Element("usage").Attribute("day").Value,
                         traffic = query.Element("usage").Element("traffic").Value
                     };

        foreach (History history in values)
        {
            ObservableCollection<LineGraphItem> Data = new ObservableCollection<LineGraphItem>()
            {
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
            };
            lineGraph1.DataSource = Data;
        }
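    As a point of comparison, here is a rough sketch that walks every usage element instead of re-reading the first one (this assumes the XML shape shown above; LineGraphItem, lineGraph1 and the property names come from the code in the question, and System.Linq is needed for the ordering):

        XDocument xDocument = XDocument.Parse(e.Result);

        // Take the most recent seven <usage> elements; the ISO date format sorts correctly as text,
        // then Reverse puts them back into chronological order for the graph.
        var data = new ObservableCollection<LineGraphItem>();
        foreach (var usage in xDocument.Descendants("usage")
                                       .OrderByDescending(u => (string)u.Attribute("day"))
                                       .Take(7)
                                       .Reverse())
        {
            data.Add(new LineGraphItem()
            {
                yyyymmdd = (string)usage.Attribute("day"),
                value = (double)usage.Element("traffic")
            });
        }

        lineGraph1.DataSource = data;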

    Read the article

  • Web Services for Info Explorer Zones

    - by Anthony Shorten
    One of the most interesting uses for XAI and Configurable objects is the exposure of a query portal as a Web Service. Let me illustrate this with an example. Say you have an interface that requires a list of data from a number of product tables. In the past you would have to build a java program to do this with SQL then use an application service but it is now possible with just configuration. The first step in the process is to create the SQL you want to use for the interface. It can be any valid static SQL or use host variables for the WHERE clause (we call that filtered). Once you are happy with the SQL (and it performs acceptably) you can incorporate that SQL into a Info-Explorer Zone. You can use any of the explorer zone types but I typically recommend F1-DE-SINGLE as it supports a single SQL statement with multiple filters (up to 15) as well as hidden filters (up to 5). Hidden filters are typically not displayed in the UI for criteria (remember explorer zones can be used on the user Interface as well) but for web services they can be used as normal filters (this means you can use up to 20 filters all up). Once you are happy with the zone, you now need to define it as a Business Service. We have a generic service called FWLZDEXP which allows a explorer zone to be defined as a Business Service. If you open any Business Service based upon FWLZDEXP you will see some examples. The schema is standard and pretty self explanatory in terms of the structure. The schema pattern looks like this: Zone element - maps to the ZONE_CD element and the default value is the zone name you just created. This links the business service to the zone. Filter elements - You name the filters as you like but the mapField is set to Fx_VALUE where x is the filter number corresponding to the filter element in the zone definition. Hidden filter elements - You name the filters as you like but the mapField is set to Hx_VALUE where x is the filter number corresponding to the hidden filter element in the zone definition. results group - this holds the elements of the result set. Each element in your result set has a tagname and is linked to the COL_VALUE mapField and the row element is lists the SEQNO of the column. This corresponds to the column number in the results set in the zone. An example schema is shown below for the F1-USGRACML zone, which returns the access modes for a user group and application service filters. In the example, the userGroup and applicationService elements are the filters and the rows would contain a list of accessModeDescr. This is just a simple example to illustrate the point. There are lots of examples in the product that you can investigate. One recommendation, to save time, is that you copy the schema from one of the examples to save you typing it from scratch. You can simply modify the tags and other elements to suit your needs. Once the Business Service is defined it can simply be defined as a Web Service by registering an XAI Inbound Service using the Business Service definition as a basis. You now have a Web Service based upon a Info Explorer Zone. This is one of my favorite components as it allows interfaces to be simplified. This will be my last blog entry for this year. I hope you all have a great and safe Christmas and an even greater new year. Next year promises to be an exciting year and I look forward to communicating exciting developments we are working on at the moment as they are released.

    Read the article

  • Allowing Access to HttpContext in WCF REST Services

    - by Rick Strahl
    If you’re building WCF REST Services you may find that WCF’s OperationContext, which provides some amount of access to Http headers on inbound and outbound messages, is pretty limited in that it doesn’t provide access to everything and sometimes in a not so convenient manner. For example accessing query string parameters explicitly is pretty painful: [OperationContract] [WebGet] public string HelloWorld() { var properties = OperationContext.Current.IncomingMessageProperties; var property = properties[HttpRequestMessageProperty.Name] as HttpRequestMessageProperty; string queryString = property.QueryString; var name = StringUtils.GetUrlEncodedKey(queryString,"Name"); return "Hello World " + name; } And that doesn’t account for the logic in GetUrlEncodedKey to retrieve the querystring value. It’s a heck of a lot easier to just do this: [OperationContract] [WebGet] public string HelloWorld() { var name = HttpContext.Current.Request.QueryString["Name"] ?? string.Empty; return "Hello World " + name; } Ok, so if you follow the REST guidelines for WCF REST you shouldn’t have to rely on reading query string parameters manually but instead rely on routing logic, but you know what: WCF REST is a PITA anyway and anything to make things a little easier is welcome. To enable the second scenario there are a couple of steps that you have to take on your service implementation and the configuration file. Add aspNetCompatibiltyEnabled in web.config Fist you need to configure the hosting environment to support ASP.NET when running WCF Service requests. This ensures that the ASP.NET pipeline is fired up and configured for every incoming request. <system.serviceModel>     <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" /> </system.serviceModel> Markup your Service Implementation with AspNetCompatibilityRequirements Attribute Next you have to mark up the Service Implementation – not the contract if you’re using a separate interface!!! – with the AspNetCompatibilityRequirements attribute: [ServiceContract(Namespace = "RateTestService")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class RestRateTestProxyService Typically you’ll want to use Allowed as the preferred option. The other options are NotAllowed and Required. Allowed will let the service run if the web.config attribute is not set. Required has to have it set. All these settings determine whether an ASP.NET host AppDomain is used for requests. Once Allowed or Required has been set on the implemented class you can make use of the ASP.NET HttpContext object. When I allow for ASP.NET compatibility in my WCF services I typically add a property that exposes the Context and Request objects a little more conveniently: public HttpContext Context { get { return HttpContext.Current; } } public HttpRequest Request { get { return HttpContext.Current.Request; } } While you can also access the Response object and write raw data to it and manipulate headers THAT is probably not such a good idea as both your code and WCF will end up writing into the output stream. However it might be useful in some situations where you need to take over output generation completely and return something completely custom. Remember though that WCF REST DOES actually support that as well with Stream responses that essentially allow you to return any kind of data to the client so using Response should really never be necessary. Should you or shouldn’t you? 
WCF purists will tell you never to muck with the platform-specific features or the underlying protocol, and if you can avoid it you definitely should. Querystring management in particular can be handled largely with Url Routing, but there are exceptions of course. Try to use what WCF natively provides – if possible – as it makes the code more portable. For example, if you do enable ASP.NET compatibility you won’t be able to self-host a WCF REST service. At the same time, realize that especially in WCF REST there are a number of big holes, and access to some features is a royal pain, so it’s not unreasonable to access the HttpContext directly, especially if it’s only for read-only access. Since everything in REST works off URLs and the HTTP protocol, more control and easier access to HTTP features is a key requirement for building flexible services. It looks like vNext of the WCF REST stuff will feature many improvements along these lines, with much deeper native HTTP support that is often so useful in REST applications, along with much more extensibility that allows for customization of the inputs and outputs as data goes through the request pipeline. I’m looking forward to this stuff, as WCF REST as it exists today is still a royal pain (in fact I’m struggling with a mysterious version conflict/crashing error on my machine that I have not been able to resolve – grrrr…).

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET  AJAX  WCF
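To make the convenience concrete, here is a minimal sketch (not taken from the original post) of a compatibility-enabled WCF REST operation that reads a query string value and a request header through the ASP.NET objects; the service name and the "X-Client-Id" header are invented for illustration.

using System.ServiceModel;
using System.ServiceModel.Activation;
using System.ServiceModel.Web;
using System.Web;

[ServiceContract]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class GreetingService
{
    // Convenience accessors, as described in the post above
    public HttpContext Context { get { return HttpContext.Current; } }
    public HttpRequest Request { get { return HttpContext.Current.Request; } }

    [OperationContract]
    [WebGet]
    public string Greet()
    {
        // Query string access through ASP.NET rather than OperationContext
        var name = Request.QueryString["Name"] ?? "anonymous";

        // Request headers are reachable the same way (hypothetical header name)
        var clientId = Request.Headers["X-Client-Id"] ?? "unknown";

        return string.Format("Hello {0} (client {1})", name, clientId);
    }
}

This only reads from the request; as the post notes, writing to Response from here is best avoided since WCF also writes to the output stream.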

    Read the article

  • The Case of the Extra Page: Rendering Reporting Services as PDF

    - by smisner
I had to troubleshoot a problem with a mysterious extra page appearing in a PDF this week. My first thought was that it was caused by one of the most common problems people encounter when developing reports that eventually get rendered as PDF: blank pages inserted into the PDF document. The cause of the blank pages is usually related to sizing. You can learn more at Understanding Pagination in Reporting Services in Books Online.

When designing a report, you have to be really careful with the layout of items in the body. As you move items around, the body will expand to accommodate the space you're using, and you might eventually tighten everything back up again, but the body doesn't automatically collapse. One of my favorite things to do in Reporting Services 2005 - which I dubbed the "vacu-pack" method - was to just erase the size property of the Body and let it auto-calculate the new size, squeezing out all the extra space. Alas, that method no longer works beginning with Reporting Services 2008. Even when you make sure the body size is as small as possible (with no unnecessary extra space along the top, bottom, left, or right side of the body), it's important to calculate the body size plus header plus footer plus the margins and ensure that the calculated height and width do not exceed the report's height and width (shown as the page in the illustration above). This won't matter if users always render reports online, but they'll get extra pages in a PDF document if the report's height and width are smaller than the calculated space.

Beginning the Investigation
In the situation that I was troubleshooting, I checked the properties:

  Item          Property            Value
  Body          Height              6.25in
                Width               10.5in
  Page Header   Height              1in
  Page Footer   Height              0.25in
  Report        Left Margin         0.1in
                Right Margin        0.1in
                Top Margin          0.05in
                Bottom Margin       0.05in
                Page Size - Height  8.5in
                Page Size - Width   11in

So I calculated the total width using Body Width + Left Margin + Right Margin and came up with a value of 10.7 inches. And then I calculated the total height using Body Height + Page Header Height + Page Footer Height + Top Margin + Bottom Margin and got 7.6 inches. Well, page sizing couldn't be the reason for the extra page in my report, because 10.7 inches is smaller than the report's width of 11 inches and 7.6 inches is smaller than the report's height of 8.5 inches. I had to look elsewhere to find the culprit.

Conducting the Third Degree
My next thought was to focus on the rendering size of the items in the report. I've adapted my problem to use the Adventure Works database. At the top of the report are two charts, and then below each chart is a rectangle that contains a table. In the real-life scenario, there were some graphics present as a background for the tables which fit within the rectangles that were about 3 inches high, so the visual space of the rectangles matched the visual space of the charts - also about 3 inches high. But there was also a huge amount of white space at the bottom of the page and, as I mentioned at the beginning of this post, a second page which was blank except for the footer that appeared at the bottom. Placing a textbox beneath the rectangles to see if it would appear on the first page resulted in the textbox's appearance on the second page. For some reason, the rectangles wanted a buffer zone beneath them. What's going on?

Taking the Suspect into Custody
My next step was to see what was really going on with the rectangle.
The graphic appeared to be correctly sized, but the behavior in the report indicated the rectangle was growing. So I added a border to the rectangle to see what it was doing. When I added borders, I could see that the size of each rectangle was growing to accommodate the table it contains. The rectangle on the right is slightly larger than the one on the left because the table on the right contains an extra row. The rectangle is trying to preserve the whitespace that appears in the layout, as shown below.

Closing the Case
Now that I knew what the problem was, what could I do about it? Because of the graphic in the rectangle (not shown), I couldn't eliminate the use of the rectangles and just show the tables. But fortunately, there is a report property that comes to the rescue: ConsumeContainerWhitespace (accessible only in the Properties window). I set the value of this property to True. Problem solved. Now the rectangles remain fixed at the configured size and don't grow vertically to preserve the whitespace. Case closed.
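For reference, setting the property in the Properties window simply writes it into the report definition; a rough sketch of what the resulting RDL looks like is below (the namespace and exact element placement are assumptions here and may vary slightly by schema version):

<Report xmlns="http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition">
  <!-- ... data sources, datasets, body and page definitions ... -->
  <ConsumeContainerWhitespace>true</ConsumeContainerWhitespace>
</Report>

With this set to true, containers such as rectangles no longer stretch to preserve layout whitespace when they render.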

    Read the article

  • Ajax application: using SOAP vs REST ?

    - by coder
I'm building an ajax-heavy application (client-side strictly html/css/js) which will be getting all the data and using server business logic via webservices. I know REST seems to be the hot topic but I can't find any good arguments. The main argument seems to be that it's "light-weight". My impression so far is that wsdl/soap based services are more expressive and allow for a more complex transfer of data. It appears that soap would be more useful in the application I'm building, where the only code consuming the services will be the js downloaded in the client browser. REST on the other hand seems to have a smaller entry barrier and so can be more useful for services like twitter in allowing other developers to consume these services easily. Also, REST seems to be better suited for simple data transfers. So in summary: SOAP is useful for complex data transfer and REST is useful for simple data transfer. I'm currently under the impression that using SOAP would be best due to the complexity of the messages, but perhaps there are other factors. What are your thoughts on the pros/cons of soap/rest for a heavy ajax web app?

EDIT: While the wsdl is in xml, the data I'm transferring back and forth is actually in JSON. It just appears more natural to use wsdl/soap here due to the nature of the app. The verbs GET and POST may not be enough. I may want to say something like: processQueue, or executeTimer. This is why my conclusion has been that wsdl/soap would be good for bridging a complex layer between two applications (client and server) whereas REST would be better (due to its simplicity) for allowing many developer-users to consume resources programmatically. So you could say the choice falls along two lines (see the sketch after this list):
  1. Will the app be verb-oriented (completing tasks: use SOAP) or noun-oriented (consuming resources: use REST)?
  2. Will the api be consumed by few developers or many developers (REST is strong for many developers)?
Since such an ajax-heavy app would potentially use many verbs and would only be used by the client developer, it appears soap/wsdl would be the best fit.
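To illustrate the verb-oriented versus noun-oriented distinction from the browser's side, here is a small hypothetical sketch (endpoints, payloads, and parameter names are invented; it is not tied to any particular server framework):

// Noun-oriented (REST-ish): the URL names a resource, the HTTP verb is the action
fetch('/api/queues/42')
  .then(function (res) { return res.json(); })
  .then(function (queue) { console.log('queue state:', queue); });

// Verb-oriented (RPC over POST, closer to the SOAP mindset, but with a JSON body
// instead of a SOAP envelope)
fetch('/api/processQueue', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ queueId: 42 })
})
  .then(function (res) { return res.text(); })
  .then(function (status) { console.log('processQueue returned:', status); });

The second style shows that task-like operations such as processQueue or executeTimer can still be exposed over plain HTTP without WSDL, which is one reason the "REST vs SOAP" choice is less clear-cut than it first appears.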

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
When I described how to host a Node.js application on Windows Azure, one of the questions that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc.. Interacting with Windows Azure services from Node.js is available through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

Consume Windows Azure Storage
Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code can also be used to consume WAS when hosted on WAWS. But since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS node application.

We can use the solution that I created in my last post. Alternatively we can create a new windows azure project in Visual Studio with a worker role, add the "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and make all files "Copy always". In order to use windows azure services we need the Windows Azure Node.js SDK, known as a module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and make it "Copy always" as well. You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here.

The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming the various windows azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here.

Back to Visual Studio: open the "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
And if we publish our application to azure, it works with WASD and the storage service through the configurations for the cloud.

Summary
In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime: how to retrieve the configuration settings and the endpoint information. And in order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform. But since it's very simple, easy to learn and deploy, and utilizes a single-threaded, non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it's even more useful on a cloud computing platform.

Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement. "node-sqlserver" doesn't support SQL parameters yet, and "azure" doesn't yet support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.

PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.

Hope this helps, Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
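Putting the pieces above together, here is a condensed sketch (abridged from the steps in this post; the setting and endpoint names mirror the ones used here, and error handling is kept minimal) of a worker-role Node.js app that reads its configuration from the service runtime, builds the table client, and serves the lookup route:

var express = require("express");
var azure = require("azure");

var app = express();
var tableName = "resource";
var client;

// Read role settings (defined in CSCFG) instead of hard-coding account details.
// A real app should delay accepting requests until this callback has completed.
azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
    if (error) {
        console.log("Cannot read role configuration: " + JSON.stringify(error));
        return;
    }
    tableName = settings["TableName"];
    client = azure.createTableService(settings["StorageAccountName"],
                                      settings["StorageAccountKey"]);
});

// Look up a localized value by partition key (culture) and row key (key)
app.get("/was/:key/:culture", function (req, res) {
    client.queryEntity(tableName, req.params.culture, req.params.key,
        function (error, entity) {
            if (error) {
                res.send(500, error);
            }
            else {
                res.json(entity);
            }
        });
});

// Listen on the endpoint declared for the role, falling back to a fixed port
azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
    var port = 8080;
    if (!error && instance["endpoints"] && instance["endpoints"]["nodejs"]) {
        port = instance["endpoints"]["nodejs"]["port"];
    }
    app.listen(port);
});

Because the configuration comes from the role environment, the same code runs against the local emulator and the cloud without edits, provided the CSDEF entry point is set to "node.exe" as described above.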

    Read the article

  • Windows Azure Recipe: Social Web / Big Media

    - by Clint Edmonson
With the rise of social media there’s been an explosion of special interest media web sites on the web. From athletics to board games to funny animal behaviors, you can bet there’s a group of people somewhere on the web talking about it. Social media sites allow us to interact, share experiences, and bond with like-minded enthusiasts around the globe. And through the power of software, we can follow trends in these unique domains in real time.

Drivers
  • Reach
  • Scalability
  • Media hosting
  • Global distribution

Solution
Here’s a sketch of how a social media application might be built out on Windows Azure:

Ingredients
  • Traffic Manager (optional) – can be used to provide hosting and load balancing across different instances and/or data centers. Perfect if the solution needs to be delivered to different cultures or regions around the world.
  • Access Control – this service is essential to managing user identity. It’s backed by a full-blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability, but it’s also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model has extensibility points to hook into other identity providers as well.
  • Web Role – hosts the core of the web application and presents a central social hub to users.
  • Database – used to store core operational, functional, and workflow data for the solution’s web services.
  • Caching (optional) – as a web site's traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token-based security model that works alongside the Access Control service.
  • Tables (optional) – for semi-structured data streams that don’t need relational integrity, such as conversations, comments, or activity streams, tables provide a faster and more flexible way to store this kind of historical data.
  • Blobs (optional) – users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users’ data is delivered securely.
  • Content Delivery Network (CDN) (optional) – for sites that service users around the globe, the CDN is an extension to blob storage that, when enabled, will automatically cache frequently accessed blobs and static site content at edge data centers around the world. The data can be delivered statically or streamed in the case of rich media content.

Training
These links point to online Windows Azure training labs and resources where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)

Windows Azure (16 labs)
Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds.
New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.

SQL Azure (7 labs)
Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.

Windows Azure Services (9 labs)
As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.

See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • Unable to Sign in to the Microsoft Online Services Signin application from Windows 7 client located behind ISA firewall

    - by Ravindra Pamidi
A while ago I helped a customer troubleshoot an authentication problem with the Microsoft Online Services Sign In application. This customer was evaluating Microsoft BPOS (Business Productivity Online Services) and was having trouble using the single sign-on application behind an ISA 2004 firewall. The network structure is fairly simple, with a single Windows 2003 Active Directory domain and Windows 7 clients. On a successful logon to the Microsoft Online Services Sign In application, this application provides single sign-on functionality to all of the Microsoft online services in the BPOS package.

Symptoms:
When trying to sign in, it fails with the error "The service is currently unavailable. Please try again later. If problems continue, contact your service administrator". If the ISA 2004 firewall is removed from the picture, the authentication succeeds.

Troubleshooting:
Enabled ISA Server firewall logging along with the Microsoft Network Monitor tool on the Windows 7 client while reproducing the issue. Analysis of the ISA Server firewall logs and the Network Monitor capture revealed that the Microsoft Online Services Sign In application, when sending its request to the ISA Server, does not send the domain credentials, and as a result ISA Server responds with an HTTP 407 Proxy Authentication Required error listing the supported authentication mechanisms. The application in question is expected to send the credentials of the domain user in response to this request. However, in this case it fails to send the logged-on user's domain credentials. A bit of research on the Internet revealed that the "Microsoft Online Services Sign In" application by default does not support outbound Internet proxy authentication. In order for it to send the logged-on user's domain credentials we had to make changes to its configuration file "SignIn.exe.config", located under the "Program Files\Microsoft Online Services\Sign In" folder. Step-by-step details for editing the configuration file are documented on the Microsoft TechNet website given below.

Configure your outbound authenticating proxy server
http://www.microsoft.com/online/help/en-us/helphowto/cc54100d-d149-45a9-8e96-f248ecb1b596.htm

After the above problem was addressed we were still not able to use the "Microsoft Online Services Sign In" application, and it failed with the same error. Analysis of another network capture revealed that the application was now sending the required credentials but the connection seemed to terminate at a later stage. Enabled verbose logging for the "Microsoft Online Services Sign In" application and then reproduced the problem. Analysis of the logs revealed a time difference between the local client and the Microsoft Online Services server of around seven minutes, which is above the acceptable time skew of five minutes.

Excerpt from the Microsoft Online Services Sign In application verbose log:
1/26/2012 1:57:51 PM Verbose SingleSignOn.GetSSOGenericInterface SSO Interface URL: https://signinservice.apac.microsoftonline.com/ssoservice/UID
1/26/2012 1:57:52 PM Exception SSOSignIn.SignIn The security timestamp is invalid because its creation time ('2012-01-26T08:34:52.767Z') is in the future. Current time is '2012-01-26T08:27:52.987Z' and allowed clock skew is '00:05:00'.
1/26/2012 1:57:52 PM Exception SSOSignIn.SignIn

Although the Windows 7 clients successfully synchronized time to the domain controller for the domain, the domain controller was not configured to synchronize time with external NTP servers.
This caused a gradual drift in time on the network, thus resulting in the above issue. Reconfigured the domain controller holding the PDC FSMO role to synchronize time with an external time source (time.nist.gov) and edited the system policy on the ISA Server firewall to allow NTP traffic to time.nist.gov.

Configure the time source for the forest: Windows Time Service
http://technet.microsoft.com/en-us/library/cc794937(WS.10).aspx

Forced synchronization of Windows time using the command w32tm /resync on the domain controller and later on the clients, which corrected the seven-minute difference. This resolved the problem with logging on to Microsoft Online Services Sign In.
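For reference, the time-source change on the PDC emulator can also be done from an elevated command prompt; this is a rough sketch based on standard w32tm usage rather than the exact commands used in this case:

w32tm /config /manualpeerlist:"time.nist.gov" /syncfromflags:manual /reliable:yes /update
net stop w32time && net start w32time
w32tm /resync

The /resync on the domain controller, and later on the clients, is what pulls the clocks back within the five-minute skew that the sign-in service will accept.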

    Read the article

< Previous Page | 99 100 101 102 103 104 105 106 107 108 109 110  | Next Page >