Search Results

Search found 13586 results on 544 pages for 'gizmo the great'.


  • Passing multiple POST parameters to Web API Controller Methods

    - by Rick Strahl
    ASP.NET Web API introduces a new API for creating REST APIs and making AJAX callbacks to the server. This new API provides a host of great new functionality that unifies the features of the various AJAX/REST APIs Microsoft created before it - ASP.NET AJAX and WCF REST specifically - and combines them into a more consistent whole. Web API addresses many of the concerns that developers had with these older APIs, namely that it was very difficult to build consistent REST style resource APIs easily. While Web API provides many new features and makes many scenarios much easier, a lot of the focus has been on making it easier to build REST compliant APIs that are focused on resource based solutions and HTTP verbs. RPC style calls that are common with AJAX callbacks in Web applications have gotten a lot less focus, and there are a few scenarios that are not that obvious, especially if you're expecting Web API to provide functionality similar to ASP.NET AJAX style AJAX callbacks.

    RPC vs. 'Proper' REST
    RPC style HTTP calls mimic calling a method with parameters and returning a result. Rather than mapping explicit server-side resources or 'nouns', RPC calls tend to simply map a server-side operation, passing in parameters and receiving a typed result, where parameters and result values are marshaled over HTTP. Typically RPC calls - like SOAP calls - tend to always be POST operations rather than following HTTP conventions and using the GET/POST/PUT/DELETE etc. verbs to implicitly determine what operation needs to be fired. RPC might not be considered 'cool' anymore, but for typical private AJAX backend operations of a Web site I'd wager that a large percentage of use cases of Web API will fall towards RPC style calls rather than 'proper' REST style APIs. Web applications that need things like live validation against data, filling data based on user inputs, or handling small UI updates often don't lend themselves very well to limited HTTP verb usage. It might not be what the cool kids do, but I don't see RPC calls getting replaced by proper REST APIs any time soon. Proper REST has its place - for 'real' API scenarios that manage and publish/share resources - but for more transactional operations RPC seems a better choice and much easier to implement than trying to shoehorn a boatload of endpoint methods into a few HTTP verbs. In any case Web API does a good job of providing both the RPC abstraction and the HTTP Verb/REST abstraction. RPC works well out of the box, but there are some differences when it comes to multiple parameters, especially if you're coming from ASP.NET AJAX services or WCF REST.

    Action Routing for RPC Style Calls
    If you've looked at Web API demos you've probably seen a bunch of examples of how to create HTTP Verb based routing endpoints. Verb based routing essentially maps a controller and then uses HTTP verbs to map the methods that are called in response to HTTP requests. This works great for resource APIs but doesn't work so well when you have many operational methods in a single controller. HTTP Verb routing is limited to the few HTTP verbs available (plus separate method signatures) and - worse than that - you can't easily extend the controller with custom routes or action routing beyond that.
    Thankfully Web API also supports action-based routing, which allows you to create RPC style endpoints fairly easily:

        RouteTable.Routes.MapHttpRoute(
            name: "AlbumRpcApiAction",
            routeTemplate: "albums/{action}/{title}",
            defaults: new
            {
                title = RouteParameter.Optional,
                controller = "AlbumApi",
                action = "GetAlbums"
            });

    This uses traditional MVC style {action} method routing, which is different from the HTTP verb based routing you might have read a bunch about in conjunction with Web API. Action based routing like the above lets you specify an endpoint method in a Web API controller either via the {action} parameter in the route string or via a default value for custom routes. Using routing you can pass multiple parameters either on the route itself, or pass parameters on the query string, via ModelBinding or content value binding. For most common scenarios this actually works very well. As long as you are passing either a single complex type via a POST operation, or multiple simple types via query string or POST buffer, there's no issue. But if you need to pass multiple parameters, as was easily done with WCF REST or ASP.NET AJAX, things are not so obvious. Web API has no issue allowing for a single parameter like this:

        [HttpPost]
        public string PostAlbum(Album album)
        {
            return String.Format("{0} {1:d}", album.AlbumName, album.Entered);
        }

    There are actually two ways to call this endpoint (the URL is albums/PostAlbum).

    Using the Model Binder with plain POST values
    In this mechanism you're sending plain urlencoded POST values to the server, which the ModelBinder then maps to the parameter. Each property value is matched to each matching POST value. This works similarly to the way that MVC's ModelBinder works. Here's how you can POST using the ModelBinder and jQuery:

        $.ajax({
            url: "albums/PostAlbum",
            type: "POST",
            data: { AlbumName: "Dirty Deeds", Entered: "5/1/2012" },
            success: function (result) { alert(result); },
            error: function (xhr, status, p3, p4) {
                var err = "Error " + " " + status + " " + p3;
                if (xhr.responseText && xhr.responseText[0] == "{")
                    err = JSON.parse(xhr.responseText).message;
                alert(err);
            }
        });

    The model binder and its straight form-based POST mechanism is great for posting data directly from HTML pages to model objects. It avoids having to do manual conversions for many operations and is a great boon for AJAX callback requests.

    Using the Web API JSON Formatter
    The other option is to post data using a JSON string. The process for this is similar, except that you create a JavaScript object and serialize it to JSON first:

        album = { AlbumName: "PowerAge", Entered: new Date(1977, 0, 1) };
        $.ajax({
            url: "albums/PostAlbum",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify(album),
            success: function (result) { alert(result); }
        });

    Here the data is sent as a JSON object rather than form data, and the data is JSON encoded over the wire, which is a little more efficient since there's no UrlEncoding that occurs. BTW, notice that Web API automatically deals with the date: I provided the date as a plain string rather than a JavaScript date value, and the Formatter and ModelBinder both automatically map the date properly to the Entered DateTime property of the Album object.

    Passing multiple Parameters to a Web API Controller
    Single parameters work fine in either of these RPC scenarios, and that's to be expected. ModelBinding always works against a single object because it maps a model.
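    For reference, the post never shows the model classes these examples bind to. A minimal sketch of what they might look like - the property names are taken from the snippets above, everything else is assumed - is:

        public class Album
        {
            // Only the properties referenced in the examples; a real model would carry more.
            public string AlbumName { get; set; }
            public DateTime Entered { get; set; }
        }

        public class User
        {
            public string Name { get; set; }
        }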
    But what happens when you want to pass multiple parameters? Consider an API Controller method that has a signature like the following:

        [HttpPost]
        public string PostAlbum(Album album, string userToken)

    Here I'm asking to pass two objects to an RPC method. Is that possible? This used to be fairly straightforward with either WCF REST or ASP.NET AJAX ASMX services, but as far as I can tell this is not directly possible using a POST operation with Web API. There are a few workarounds that you can use to make this work.

    Use both POST *and* QueryString Parameters in Conjunction
    If you have both complex and simple parameters, you can pass the simple parameters on the query string. The above would actually work with /album/PostAlbum?userToken=sekkritt, but that's not always possible. In this example it might not be a good idea to pass a user token on the query string. It also won't work if you need to pass multiple complex objects, since query string values do not support complex type mapping; they only work with simple types.

    Use a single Object that wraps the two Parameters
    If you go by service-based architecture guidelines, every service method should always pass and return a single value only. The input should wrap potentially multiple input parameters and the output should convey status as well as provide the result value. You typically have an xxxRequest and an xxxResponse class that wrap the inputs and outputs. Here's what this method might look like:

        public PostAlbumResponse PostAlbum(PostAlbumRequest request)
        {
            var album = request.Album;
            var userToken = request.UserToken;

            return new PostAlbumResponse()
            {
                IsSuccess = true,
                Result = String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered, userToken)
            };
        }

    with these support types:

        public class PostAlbumRequest
        {
            public Album Album { get; set; }
            public User User { get; set; }
            public string UserToken { get; set; }
        }

        public class PostAlbumResponse
        {
            public string Result { get; set; }
            public bool IsSuccess { get; set; }
            public string ErrorMessage { get; set; }
        }

    To call this method you now have to assemble these objects on the client and send them up as JSON:

        var album = { AlbumName: "PowerAge", Entered: "1/1/1977" };
        var user = { Name: "Rick" };
        var userToken = "sekkritt";

        $.ajax({
            url: "samples/PostAlbum",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
            success: function (result) { alert(result.Result); }
        });

    I assemble the individual types first and then combine them in the data: property of the $.ajax() call into the actual object passed to the server, which mimics the structure of the PostAlbumRequest server class with its Album, User and UserToken properties. This works well enough, but it gets tedious if you have to create Request and Response types for each method signature. If you have common parameters that are always passed (say you always pass an album or user token), you might be able to abstract this to use a single object that gets reused for all methods, but this gets confusing too: overload a single 'parameter' too much and it becomes a nightmare to decipher what your method actually can use.

    Use JObject to parse multiple Property Values out of an Object
    If you recall, ASP.NET AJAX and WCF REST used a 'wrapper' object to make default AJAX calls.
    Rather than directly calling a service you always passed an object which contained properties for each parameter:

        { parm1: Value, parm2: Value2 }

    WCF REST/ASP.NET AJAX would then parse the top-level property values and map them to the parameters of the endpoint method. This automatic type wrapping functionality is no longer available directly in Web API, but since Web API now uses JSON.NET for its JSON serializer you can actually simulate that behavior with a little extra code. You can use the JObject class to receive a dynamic JSON result and then use the dynamic cast of JObject to walk through the child objects and even parse them into strongly typed objects. Here's how to do this on the API Controller end:

        [HttpPost]
        public string PostAlbum(JObject jsonData)
        {
            dynamic json = jsonData;
            JObject jalbum = json.Album;
            JObject juser = json.User;
            string token = json.UserToken;

            var album = jalbum.ToObject<Album>();
            var user = juser.ToObject<User>();

            return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
        }

    This is clearly not as nice as having the parameters passed directly, but it works to allow you to pass multiple parameters and access them using Web API. JObject is JSON.NET's generic object container, which sports a nice dynamic interface that allows you to walk through the object's properties using standard 'dot' object syntax. All you have to do is cast the object to dynamic to get access to the property interface of the JSON type. Additionally, JObject also allows you to parse JObject instances into strongly typed objects, which enables us here to retrieve the two objects passed as parameters from this jQuery code:

        var album = { AlbumName: "PowerAge", Entered: "1/1/1977" };
        var user = { Name: "Rick" };
        var userToken = "sekkritt";

        $.ajax({
            url: "samples/PostAlbum",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
            success: function (result) { alert(result); }
        });

    Summary
    ASP.NET Web API brings many new features and many advantages over the older Microsoft AJAX and REST APIs, but realize that some things, like passing multiple strongly typed object parameters, will work a bit differently. It's not insurmountable, but it helps to know what options are available to simulate this behavior. Now let me say here that it's probably not a good practice to pass a bunch of parameters to an API call. Ideally APIs should be closely factored to accept single parameters, or at least a single content parameter along with some identifier parameters that can be passed on the querystring. But that doesn't mean you won't occasionally run into a situation where you need to pass several objects to the server, and all three of the options I mentioned might have merit in different situations. For now, I'm sure the question of how to pass multiple parameters will come up quite a bit from people migrating WCF REST or ASP.NET AJAX code to Web API.
    At least there are options available to make it work. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API.

    Read the article

  • Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN

    - by ScottGu
    This morning we released a major set of updates to Windows Azure. These updates included:

    - Web Sites: General Availability release of Windows Azure Web Sites with SLA
    - Mobile Services: General Availability release of Windows Azure Mobile Services with SLA
    - Auto-Scale: New automatic scaling support for Web Sites, Cloud Services and Virtual Machines
    - Alerts/Notifications: New email alerting support for all Compute Services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines)
    - MSDN: No more credit card requirement for sign-up

    All of these improvements are now available to use immediately (note: some are still in preview). Below are more details about them.

    Web Sites: General Availability Release of Windows Azure Web Sites
    I'm incredibly excited to announce the General Availability release of Windows Azure Web Sites. The Windows Azure Web Sites service is perfect for hosting a web presence, building customer engagement solutions, and delivering business web apps. Today's General Availability release means we are taking off the "preview" tag from the Free and Standard (formerly called Reserved) tiers of Windows Azure Web Sites. This means we are providing:

    - A 99.9% monthly SLA (Service Level Agreement) for the Standard tier
    - Microsoft Support available on a 24x7 basis (with plans that range from developer plans to enterprise Premier support)

    The Free tier runs in a shared compute environment and supports up to 10 web sites. While the Free tier does not come with an SLA, it works great for rapid development and testing and enables you to quickly spike out ideas at no cost.

    The Standard tier, which was called "Reserved" during the preview, runs using dedicated per-customer VM instances for great performance, isolation and scalability, and enables you to host up to 500 different web sites within them. You can easily scale your Standard instances on demand using the Windows Azure Management Portal. You can adjust VM instance sizes from a Small instance size (1 core, 1.75GB of RAM), up to a Medium instance size (2 cores, 3.5GB of RAM), or a Large instance (4 cores and 7GB of RAM). You can choose to run between 1 and 10 Standard instances, enabling you to easily scale up your web backend to 40 cores of CPU and 70GB of RAM.

    Today's release also includes general availability support for custom domain SSL certificate bindings for web sites running on the Standard tier. Customers will be able to utilize certificates they purchase for their custom domains and use either SNI or IP based SSL encryption. SNI encryption is supported by all modern browsers and does not require a dedicated IP address. SSL certificates can be used for individual sites or wildcard-mapped across multiple sites (we charge extra for the use of an SSL cert, but the fee is per-cert and not per site, which means you pay once for it regardless of how many sites you use it with). Today's release also includes the following new features:

    Auto-Scale support
    Today's Windows Azure release adds preview support for auto-scaling web sites. This enables you to set up automatic scale rules based on the activity of your instances - allowing you to automatically scale down (and save money) when they are below a CPU threshold you define, and automatically scale up quickly when traffic increases. See below for more details.

    64-bit and 32-bit mode support
    You can now choose to run your Standard tier instances in either 32-bit or 64-bit mode (previously they only ran in 32-bit mode).
    This enables you to address even more memory within individual web applications.

    Memory dumps
    Memory dumps can be very useful for diagnosing issues and debugging apps. Using a REST API, you can now get a memory dump of your sites, which you can then use for investigating issues in the Visual Studio Debugger, WinDbg, and other tools.

    Scaling Sites Independently
    Prior to today's release, all sites scaled up/down together whenever you scaled any site in a sub-region, so you may have had to keep your proof-of-concept or testing sites in a separate sub-region if you wanted to keep them in the Free tier. This will no longer be necessary. Windows Azure Web Sites can now mix different tier levels in the same geographic sub-region. This allows you, for example, to selectively move some of your sites in the West US sub-region up to the Standard tier when they require the features, scalability, and SLA of the Standard tier.

    Full pricing details on Windows Azure Web Sites can be found here. Note that the "Shared Tier" of Windows Azure Web Sites remains in preview mode (and continues to have discounted preview pricing).

    Mobile Services: General Availability Release of Windows Azure Mobile Services
    I'm incredibly excited to announce the General Availability release of Windows Azure Mobile Services. Mobile Services is perfect for building scalable cloud back-ends for Windows 8.x, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications.

    Customers
    We've seen tremendous adoption of Windows Azure Mobile Services since we first previewed it last September, and more than 20,000 customers are now running mobile back-ends in production using it. These customers range from startups like Yatterbox, to university students using Mobile Services to complete apps like Sly Fox in their spare time, to media giants like Verdens Gang finding new ways to deliver content, and telcos like TalkTalk Business delivering the up-to-the-minute information their customers require. In today's Build keynote, we demonstrated how TalkTalk Business is using Windows Azure Mobile Services to deliver service, outage and billing information to its customers, wherever they might be.

    Partners
    When we unveiled the source control and Custom API features I blogged about two weeks ago, we enabled a range of new scenarios, one of which is a more flexible way to work with third party services. The following blogs, samples and tutorials from our partners cover great ways you can extend Mobile Services to help you build rich modern apps:

    - New Relic allows developers to monitor and manage the end-to-end performance of iOS and Android applications connected to Mobile Services.
    - SendGrid eliminates the complexity of sending email from Mobile Services, saving time and money, while providing reliable delivery to the inbox.
    - Twilio provides a telephony infrastructure web service in the cloud that you can use with Mobile Services to integrate phone calls, text messages and IP voice communications into your mobile apps.
    - Xamarin provides a Mobile Services add-on that makes it easy to build cross-platform connected mobile apps.
    - Pusher allows you to quickly and securely add scalable real-time messaging functionality to Mobile Services-based web and mobile apps.

    Visual Studio 2013 and Windows 8.1
    This week during the //build/ keynote, we demonstrated how Visual Studio 2013, Mobile Services and Windows 8.1 make building connected apps easier than ever.
    Developers building Windows 8 applications in Visual Studio can now connect them to Windows Azure Mobile Services by simply right-clicking and choosing Add Connected Service. You can either create a new Mobile Service or choose an existing Mobile Service in the Add Connected Service dialog. Once completed, Visual Studio adds a reference to the Mobile Services SDK to your project and generates a Mobile Services client initialization snippet automatically.

    Add Push Notifications
    Push Notifications and Live Tiles are key to building engaging experiences. Visual Studio 2013 and Mobile Services make it super easy to add push notifications to your Windows 8.1 app by clicking the Add a Push Notification item. The Add Push Notification wizard will then guide you through the registration with the Windows Store as well as connecting your app to a new or existing mobile service. Upon completion of the wizard, Visual Studio will configure your mobile service with the WNS credentials, as well as add sample logic to your client project and your mobile service that demonstrates how to send push notifications to your app.

    Server Explorer Integration
    In Visual Studio 2013 you can also now view your Mobile Services in the Server Explorer. You can add tables, edit, and save server-side scripts without ever leaving Visual Studio.
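    The generated client initialization code isn't reproduced in the post. As a rough sketch only - the service URL, application key and TodoItem table below are placeholders, and the wizard fills in your real values - client code built against the Mobile Services .NET SDK of that era (the WindowsAzure.MobileServices NuGet package) typically looked along these lines:

        using Microsoft.WindowsAzure.MobileServices;

        public sealed class TodoItem
        {
            public int Id { get; set; }
            public string Text { get; set; }
        }

        public static class App
        {
            // Hypothetical endpoint and key - substitute the values for your own mobile service.
            public static readonly MobileServiceClient MobileService =
                new MobileServiceClient(
                    "https://your-service.azure-mobile.net/",
                    "YOUR-APPLICATION-KEY");
        }

        // Example usage (from an async method): insert a record into the TodoItem table.
        // await App.MobileService.GetTable<TodoItem>().InsertAsync(new TodoItem { Text = "Hello" });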
    Pricing
    With today's general availability release we are announcing that we will be offering Mobile Services in three tiers - Free, Standard, and Premium. Each tier is metered using a simple pricing model based on the number of API calls (bandwidth is included at no extra charge), and the Standard and Premium tiers are backed by 99.9% monthly SLAs. You can elastically scale up or down the number of instances you have of each tier to increase the number of API requests your service can support - allowing you to efficiently scale as your business grows. You can find the full details of the new pricing model here.

    Build Conference Talks
    The //BUILD/ conference will be packed with sessions covering every aspect of developing connected applications with Mobile Services. The best part is that, even if you can't be with us in San Francisco, every session is being streamed live. Be sure not to miss these talks:

    - Mobile Services - Soup to Nuts — Josh Twist
    - Building Cross-Platform Apps with Windows Azure Mobile Services — Chris Risner
    - Connected Windows Phone Apps made Easy with Mobile Services — Yavor Georgiev
    - Build Connected Windows 8.1 Apps with Mobile Services — Nick Harris
    - Who's that user? Identity in Mobile Apps — Dinesh Kulkarni
    - Building REST Services with JavaScript — Nathan Totten
    - Going Live and Beyond with Windows Azure Mobile Services — Kirill Gavrylyuk, Paul Batum
    - Protips for Windows Azure Mobile Services — Chris Risner

    AutoScale: Dynamically scale up/down your app based on real-world usage
    One of the key benefits of Windows Azure is that you can dynamically scale your application in response to changing demand. In the past, though, you have had to either manually change the scale of your application, or use additional tooling (such as WASABi or MetricsHub) to automatically scale your application. Today, we're announcing that AutoScale will be built into Windows Azure directly. With today's release it is now enabled for Cloud Services, Virtual Machines and Web Sites (Mobile Services support will come soon).

    Auto-scale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance. Once configured, it will regularly adjust the number of instances running in response to the load in your application. Currently, we support two different load metrics:

    - CPU percentage
    - Storage queue depth (Cloud Services and Virtual Machines only)

    We'll enable automatic scaling on even more scale metrics in future updates.

    When to use Auto-Scale
    The following are good criteria for services/apps that will benefit from the use of auto-scale:

    - The service/app can scale horizontally (e.g. it can be duplicated to multiple instances)
    - The service/app load changes over time

    If your app meets these criteria, then you should look to leverage auto-scale.

    How to Enable Auto-Scale
    To enable auto-scale, simply navigate to the Scale tab in the Windows Azure Management Portal for the app/service you wish to enable. Within the Scale tab, turn the Auto-Scale setting on to either CPU or Queue (for Cloud Services and VMs) to enable Auto-Scale. Then change the instance count and target CPU settings to configure the Auto-Scale ranges you want to maintain.

    As an example, I've configured a Windows Azure Web Site so that it will run using between 1 and 5 VM instances. The exact number used will depend on the aggregate CPU of the VMs using the 40-70% range I've configured. If the aggregate CPU goes above 70%, then Windows Azure will automatically add new VMs to the pool (up to the maximum of 5 instances I've configured it to use). If the aggregate CPU drops below 40%, then Windows Azure will automatically start shutting down VMs to save me money. Once you've turned auto-scale on, you can return to the Scale tab at any point and select Off to manually set the number of instances.

    Using the Auto-Scale Preview
    With today's update you can now, in just a few minutes, have Windows Azure automatically adjust the number of instances you have running in your apps to keep your service performant at an even better cost. Auto-scale is being released today as a preview feature, and will be free until General Availability. During the preview, each subscription is limited to 10 separate auto-scale rules across all of the resources it has (Web Sites, Cloud Services or Virtual Machines). If you hit the 10-rule limit, you can disable auto-scale for any resource to enable it for another.

    Alerts and Notifications
    Starting today we are now providing the ability to configure threshold-based alerts on monitoring metrics. This feature is available for compute services (Cloud Services, Virtual Machines, Web Sites and Mobile Services). Alerts provide you the ability to get proactively notified of active or impending issues within your application. You can define alert rules for:

    - Virtual machine monitoring metrics that are collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec), and monitoring metrics from monitoring web endpoint URLs (response time and uptime) that you have configured.
    - Cloud service monitoring metrics that are collected from the host operating system (same as VM), monitoring metrics from the guest VM (from performance counters within the VM), and monitoring metrics from monitoring web endpoint URLs (response time and uptime) that you have configured.
    - For Web Sites and Mobile Services, alerting rules can be configured on monitoring metrics from the monitoring endpoint URLs (response time and uptime) that you have configured.

    Creating Alert Rules
    You can add an alert rule for a monitoring metric by navigating to the Settings -> Alerts tab in the Windows Azure Management Portal. Click on the Add Rule button to create an alert rule. Give the alert rule a name and optionally add a description, then pick the service which you want to define the alert rule on. The next step in the alert creation wizard will then filter the monitoring metrics based on the service you selected. Once created, the rule will show up in your alerts list within the Settings tab. A rule is shown as "not activated" as long as it hasn't tripped over the threshold you set. If the CPU on the machine goes over the limit, though, I'll get an email notifying me from a Windows Azure Alerts email address ([email protected]), and when I log into the portal and revisit the Alerts tab I'll see it highlighted in red. Clicking it will then enable me to see what is causing it to fail, as well as view the history of when it has happened in the past.

    Alert Notifications
    With today's initial preview you can now easily create alerting rules based on monitoring metrics and get notified of active or impending issues within your application that require attention. During the preview, each subscription is limited to 10 alert rules across all of the services that support alert rules.

    No More Credit Card Requirement for MSDN Subscribers
    Earlier this month (during TechEd 2013), we announced that MSDN users will get Windows Azure credits every month that they can use for any Windows Azure services they want. You can read the details in my previous Dev/Test blog post. Today we are making further updates to enable an easier Windows Azure sign-up for MSDN users. MSDN users will now not be required to provide payment information (e.g. no credit card) during sign-up, so long as they stay within the monetary credit included for the billing period. For usage beyond the monetary credit, they can enable overages by providing payment information and removing the spending limit. This enables a super easy, one-page sign-up experience for MSDN users. Simply sign up for your Windows Azure trial using the same Microsoft ID that you use to manage your MSDN account, complete the one-page sign-up form, and you will be able to spend your free monthly MSDN credits (up to $150 each month) on any Windows Azure resource for dev/test. This makes it trivially easy for every MSDN customer to start using Windows Azure today. If you haven't signed up yet, I definitely recommend checking it out.

    Summary
    Today's release includes a ton of great features that enable you to build even better cloud solutions. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • ASP Classic Session Variable Not Always Getting Set

    - by Mike at KBS
    I have a classic ASP site I'm maintaining, and recently we wanted to do special rendering if the visitor comes from one specific set of websites. So in the page those visitors have to come through to get to our site, I put a simple line setting a session variable:

        <!-- #include virtual="/clsdbAccess.cs"-->
        <% set db = new dbAccess %>
        <html>
        <head>
        ...
        <script language="javascript" type="text/javascript">
        ...
        </script>
        </head>
        <body>
        <%
        Session("CAME_FROM_NEWSPAPER") = "your local newspaper"
        ...
        %>
        ... html stuff ...
        </body>
        </html>

    Then, in all the following pages, whenever I need to put up nav links, menus, etc. that we don't want to show to those visitors, I test whether that session variable is "" or not, and render accordingly. My problem is that it seems to work great for me in development; then I get it out into Production and it works great sometimes and other times it doesn't work at all. Sometimes the session variable gets set, sometimes it doesn't. Everything else works great, but our client is seeing inconsistent results on pages rendered to their visitors, and this is a problem. I've tried logging on from different PCs and can confirm that I get different, and unpredictable, results. The problem is I can't reproduce it at will, and I can't troubleshoot/trace Production. For now I've solved this by creating duplicate pages with the special rendering and making sure those visitors go there, but this is a hack and will eventually be difficult to maintain. Is there something special or inconsistent about session variables in Classic ASP? Is there a better way to approach the problem? TIA.

    Read the article

  • Getting a .Net remoting service accessible with IP v6 and IP v4

    - by jon.ediger
    My company has an existing .NET Remoting service that listens on a port, fronting interfaces used by external systems. This all works great with IPv4-based communications. However, this service now needs to support both IPv4 and IPv6 communications. I have found info that the system.runtime.remoting section of the app.config should include two channels as follows:

        <channel ref="tcp" name="tcp6" port="9000" bindTo="[::]" />
        <channel ref="tcp" name="tcp4" port="9000" bindTo="0.0.0.0" />

    I've tried this. For communications to this service with a direct response back, this works great. Some of the calls instead return a stream, either for uploading or downloading large files. These calls fail with an ArgumentException:

        IPv4 address 0.0.0.0 and IPv6 address ::0 are unspecified addresses that cannot be used as a target address.
        Parameter name: hostNameOrAddress

    How should these config values be modified so that the client will know how to communicate back to the .NET Remoting service?
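    Not part of the original question, but one direction worth exploring (a sketch, not a verified fix): the exception suggests the channel is advertising its unspecified bind address (0.0.0.0 / ::) in the URLs it hands back to clients. The TCP channel supports a machineName property that controls the host name placed in those URLs independently of bindTo; the same idea can be expressed in app.config by adding machineName="..." to each <channel> element. A programmatic equivalent, where the host name is a placeholder you would replace with a name or address your clients can actually reach:

        using System.Collections;
        using System.Runtime.Remoting.Channels;
        using System.Runtime.Remoting.Channels.Tcp;

        static void RegisterChannels()
        {
            // IPv6 channel: bind to the wildcard address, but advertise a concrete,
            // client-reachable host name instead of the unspecified address.
            IDictionary props6 = new Hashtable();
            props6["name"] = "tcp6";
            props6["port"] = 9000;
            props6["bindTo"] = "[::]";
            props6["machineName"] = "remoting.example.com"; // placeholder - use your real host name
            ChannelServices.RegisterChannel(new TcpChannel(props6, null, null), false);

            // IPv4 channel on the same port, advertising the same host name.
            IDictionary props4 = new Hashtable();
            props4["name"] = "tcp4";
            props4["port"] = 9000;
            props4["bindTo"] = "0.0.0.0";
            props4["machineName"] = "remoting.example.com";
            ChannelServices.RegisterChannel(new TcpChannel(props4, null, null), false);
        }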

    Read the article

  • MVC2 jQuery Validation & Custom Business Objects

    - by durilai
    I have an application that was built with MVC 1 and am in the process of updating it to MVC 2. I have a custom DLL and BLL, and the model objects are custom business objects that reside in a separate class library. I was using this validation library in MVC 1, which worked great, but I want to eliminate the extra plugins and use what is available. Rather than use the Enterprise Library validation attributes, I have converted to using DataAnnotations and want to use jQuery validation as the client-side validation. My questions are:

    1) Is the MicrosoftMvcJQueryValidation JS file still required, and where do I download it?
    2) How do you automate the validation on views that do not have models, e.g. a Membership sign-in page?
    3) How do you add model errors from a custom business layer?

    Thanks for any help or guidance.
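    For the third question, one common pattern (a sketch only - the ValidationIssue type, Product model and IProductService are invented for illustration, and nothing in the question confirms this is how the asker's business layer is shaped) is to have the business layer return its validation failures and copy them into ModelState in the controller, so they surface through the normal validation summary and field messages:

        using System.Web.Mvc;

        // Hypothetical result type returned by the business layer.
        public class ValidationIssue
        {
            public string PropertyName { get; set; }
            public string Message { get; set; }
        }

        public class ProductController : Controller
        {
            private readonly IProductService _productService; // assumed BLL service

            public ProductController(IProductService productService)
            {
                _productService = productService;
            }

            [HttpPost]
            public ActionResult Edit(Product model)
            {
                // Copy business-layer validation failures into ModelState so they
                // render exactly like DataAnnotations errors.
                foreach (ValidationIssue issue in _productService.Validate(model))
                    ModelState.AddModelError(issue.PropertyName ?? string.Empty, issue.Message);

                if (!ModelState.IsValid)
                    return View(model);

                _productService.Save(model);
                return RedirectToAction("Index");
            }
        }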

    Read the article

  • IE8 + jQuery ajax call giving parsererror for JSON data which seems valid in Firefox

    - by PlanetUnknown
    The ajax call works fine in Firefox; the data returned is JSON. Here is an example from the FF Firebug console:

        {"noProfiles": "No profiles have been created, lets start now !"}

    When I try to print the error in IE8 (and in compatibility modes as well), it says "parsererror", but the output seems to be valid JSON. Here is the ajax call I'm making - any pointers would be great!

        $.ajax({
            type: "GET",
            url: "/get_all_profile_details/",
            data: "",
            dataType: "json",
            beforeSend: function() { alert("before send called"); },
            success: function(jsonData) { alert("data received"); },
            error: function(xhr, txt, err) {
                alert("xhr: " + xhr + "\n textStatus: " + txt + "\n errorThrown: " + err);
            }
        });

    The alerts in the error function above give:

        xhr: <blank>
        textStatus: parsererror
        errorThrown: undefined

    Note: jQuery 1.3.2

    Read the article

  • vBulletin Nightmare

    - by Chris
    I am creating a vBulletin forum community called wehypnosis. We purchased the vBulletin 4.0 CMS and the SEO plugin. I'm used to designing and building templates for Drupal, WordPress and Joomla, all of which have great documentation and many tutorials built by the community. But vBulletin, although claimed to be the best forum software on the internet, is missing any form of documentation worth reading. The only thing I can find to help me rebuild this monster is www.vbulletin.com/docs/. Does anyone know of a set of tutorials or any documentation which would help me:

    - configure a good, solid working version?
    - get detailed instructions on how to build themes for vBulletin?

    Drupal has a mountain of great books and resources. Does vBulletin?

    Read the article

  • What are the characteristics of spaghetti code?

    - by justjoe
    Somebody said that when your PHP code and application use global variables, it must be spaghetti code (or so I assume). I use WordPress a lot. As far as I know, it's about the best thing near great PHP software, and it uses many global variables to interact between its components. But forget about that - frankly, that's the only thing I know, so my view is completely biased ;D

    So, just curious: what are the characteristics of spaghetti code?

    PS: the only thing I know is WordPress, so hopefully this will help somebody give a great answer for someone who has a little experience developing a full web application in PHP (for example, the Stack Overflow website).

    Read the article

  • Best non-development book for software developers

    - by Dima Malenko
    What is the best non-software-development-related book that you think every software developer should read? Note, there is a similar, poll-style question here: What non-programming books should programmers read?

    Update: Peopleware is a great book and a must-read, no doubt. But it is about software development, so it does not count.

    Update: We ended up suggesting more than one book, and that's great! Below is a summary (with links to Amazon) of the books you should consider for your reading list:

    - The Design of Everyday Things by Donald Norman
    - Getting Things Done by David Allen
    - Gödel, Escher, Bach by Douglas R. Hofstadter
    - The Goal and It's Not Luck by Eliyahu M. Goldratt
    - Here Comes Everybody by Clay Shirky

    ...to be continued.

    Read the article

  • Is DataGrid a necessity in WPF?

    - by Jobi Joy
    I have seen a lot of discussions going on, with people asking about a DataGrid for WPF and complaining about Microsoft for not having shipped one with the WPF framework to date. We know that WPF is a great UI technology with concepts like ItemsControl, DataTemplate, etc. for building great UX. WPF even has a closely matching control, ListView, which can easily be templated to give a better UX than a traditional DataGrid-like display. And I would say a ready-made DataGrid control will kill or hide a lot of creativity and will surely decrease innovation in the user experience field. So what is your opinion about the need for a DataGrid in WPF as a framework component? If you feel it is necessary, is it just because the world has been so used to the DataGrid way of displaying data for many years? Some other threads discussing DataGrid are here and here. Link to the WPF Toolkit - latest WPF DataGrid.

    Read the article

  • SQL Server: SELECT rows with MAX(Column A), MAX(Column B), DISTINCT by related columns

    - by z531
    Scenario:

        Table A
        MasterID, Added Date, Added By, Updated Date, Updated By
        1, 1/1/2010, 'Fred', null, null
        2, 1/2/2010, 'Barney', 'Mr. Slate', 1/7/2010
        3, 1/3/2010, 'Noname', null, null

        Table B
        MasterID, Added Date, Added By, Updated Date, Updated By
        1, 1/3/2010, 'Wilma', 'The Great Kazoo', 1/5/2010
        2, 1/4/2010, 'Betty', 'Dino', 1/4/2010

        Table C
        MasterID, Added Date, Added By, Updated Date, Updated By
        1, 1/5/2010, 'Pebbles', null, null
        2, 1/6/2010, 'BamBam', null, null

        Table D
        MasterID, Added Date, Added By, Updated Date, Updated By
        1, 1/2/2010, 'Noname', null, null
        3, 1/4/2010, 'Wilma', null, null

    I need to return the max added date and corresponding user, and the max updated date and corresponding user, for each distinct record when tables A, B, C & D are UNION'ed, i.e.:

        1, 1/5/2010, 'Pebbles', 'The Great Kazoo', 1/5/2010
        2, 1/6/2010, 'BamBam', 'Mr. Slate', 1/7/2010
        3, 1/4/2010, 'Wilma', null, null

    I know how to do this with one date/user per row, but with two is beyond me. DBMS is SQL Server 2005. T-SQL solution preferred. Thanks in advance, Dave

    Read the article

  • Library to parse ERB files

    - by Douglas Sellers
    I am attempting to parse, not evaluate, Rails ERB files in a Hpricot/Nokogiri type manner. The files I am attempting to parse contain HTML fragments intermixed with dynamic content generated using ERB (standard Rails view files). I am looking for a library that will not only parse the surrounding content, much the way that Hpricot or Nokogiri will, but will also treat the ERB symbols (<%, <%= etc.) as though they were HTML/XML tags. Ideally I would get back a DOM-like structure where the <%, <%= etc. symbols would be included as their own node types. I know that it is possible to hack something together using regular expressions, but I was looking for something a bit more reliable, as I am developing a tool that I need to run on a very large view code base where both the HTML content and the ERB content are important. For example, content such as:

        blah blah blah <div>My Great Text <%= my_dynamic_expression %></div>

    would return a tree structure like:

        root
        - text_node (blah blah blah)
        - element (div)
          - text_node (My Great Text )
          - erb_node (<%=)

    Read the article

  • What are the best programming and development related Blogs?

    - by Christopher Cashell
    There are lots of great resources available on the Internet for learning more about programming and improving your skills. Blogs are one of the best, IMO. There's a wealth of knowledge and experience, much of it covering topics not often found in traditional books, and the community aspect helps bring in multiple viewpoints and ideas. We're probably all familiar with Coding Horror and Joel on Software (so no need to mention them), but what are the other great ones out there? Which blogs do you find yourself following most closely? Where do you see the best new ideas, the most interesting or informative posts, or just the ones that make you sit back and think? One blog per answer, and then we'll vote up the best so we can all learn from them.

    Read the article

  • Django: Overriding the clean() method in forms - question about raising errors

    - by Monika Sulik
    I've been doing things like this in the clean method:

        if self.cleaned_data['type'].organized_by != self.cleaned_data['organized_by']:
            raise forms.ValidationError('The type and organization do not match.')
        if self.cleaned_data['start'] > self.cleaned_data['end']:
            raise forms.ValidationError('The start date cannot be later than the end date.')

    But then that means that the form can only raise one of these errors at a time. Is there a way for the form to raise both of these errors?

    EDIT #1: Any solutions for the above are great, but would love something that would also work in a scenario like:

        if self.cleaned_data['type'].organized_by != self.cleaned_data['organized_by']:
            raise forms.ValidationError('The type and organization do not match.')
        if self.cleaned_data['start'] > self.cleaned_data['end']:
            raise forms.ValidationError('The start date cannot be later than the end date.')
        super(FooAddForm, self).clean()

    where FooAddForm is a ModelForm and has unique constraints that might also cause errors. If anyone knows of something like that, that would be great...

    Read the article

  • wrap text around image IE

    - by Tillebeck
    Hi, I have done a bit of searching for a solution to wrap text around an image and came across jQSlickWrap: http://stackoverflow.com/questions/2457266/jquery-plugin-to-wrap-text-around-images-support-ie6. But it is not working in IE. Is there another way to wrap text around an image, or is that just not possible for IE yet? Great wrap example in Firefox, but not so great in IE: http://jwf.us/projects/jQSlickWrap/example1.html. There is the manual way of creating divs, but in my case that is a no-go since it is multiple images uploaded by a webmaster. Br. Anders

    Read the article

  • Minimalistic tools for developer documentation

    - by Pekka
    I am currently working on a large PHP CMS / framework and documenting it extensively as I go along. In addition to phpdoc-style inline comments, I need to document XML structures, details on concepts and practices, write HOWTOs and so on. At the moment I am using simple OpenOffice documents for that, but I'm unhappy with it and looking for a "real" documentation system. So, I am looking for recommendations for robust, minimalistic, easy-to-use documentation software. I have tried a number of wikis, most prominently DokuWiki. I like the open-minded approach, the freedom in editing, and the simplicity, but they provide little support for structuring a multi-chapter documentation and make basic reorganisation tasks very difficult (e.g. moving pages to a different namespace). Working with the plugins is cumbersome, and they are not really easy to use. Open source would be a plus but is not a requirement.

    Thanks for all the suggestions. I have not had time to look into each one in detail. I will be trying Sphinx, especially because it provides so much support for a good structure. I may update this post later when I'm done and report how it worked out.

    The suggestions:

    - Trac's built-in wiki, which is great but for my taste provides too little support for keeping a structure - it's perfect, though, for "normal", smaller-size project documentation.
    - Markdown, my current favourite because of its minimalism; however, I'm not sure yet whether maintaining a structure will be easy enough. A Markdown-based system would of course be very easy to extend, e.g. to look up cross references from the project's code base. It would be great to find something that already has that out of the box.
    - The DocBook format and, for editing, the commercial Oxygen XML Editor - a great standard for building documentation, no doubt. Maybe too "technical" for my purposes, as I need something to open quickly, write into and go on coding. Still always worth a mention.
    - Sphinx, an open source, Python-based documentation generator, promising structured documentation and extensive cross-referencing. Interesting; I will take a look.
    - Confluence, a commercial but very affordable wiki.
    - XWiki, an open source wiki playing in Confluence's league, with numerous extensions and connectors to Eclipse and Microsoft Office.
    - TiddlyWiki, an open source wiki.

    Read the article

  • URL Routing with WebForms, what to do with additional querystrings like page number, filter, etc?

    - by Scott
    I am switching from Intelligencia's UrlRewriter to the new Web Forms routing in ASP.NET 4.0. I have it working great for basic pages; however, in my e-commerce site, when browsing category pages, I previously used querystrings that were built into my pager control to control paging. An old URL (with URL rewriting) would be http://www.mysite.com/Category/Arts-and-Crafts_17. I have a MapPageRoute defined in global.asax as:

        routes.MapPageRoute("category-browse", "Category/{name}_{id}", "~/CategoryPage.aspx");

    This works great. Now somebody clicks to go to page 2. Previously I would have just tacked on ?page=2 as the querystring. How do I handle this using Web Forms routing? I know I can do something like http://www.mysite.com/Category/Arts-and-Crafts_17/page/2, but in addition to page I can have filters, age ranges, gender, etc. Should I just keep defining routes that handle these variables, or should I continue using querystrings - and can I define a route that allows me to use my querystrings like before?
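    Not from the question, but as a point of reference: the two approaches can be combined, since routed pages in ASP.NET 4.0 still see the normal querystring. A sketch along these lines (the page name and querystring keys are illustrative) keeps the friendly category route and lets paging and filters ride on ?page=2&filter=... exactly as before:

        // Global.asax - the same route as above, registered once at application start
        // (requires System.Web.Routing).
        void Application_Start(object sender, EventArgs e)
        {
            RouteTable.Routes.MapPageRoute(
                "category-browse",
                "Category/{name}_{id}",
                "~/CategoryPage.aspx");
        }

        // CategoryPage.aspx.cs - route values and querystring values coexist.
        protected void Page_Load(object sender, EventArgs e)
        {
            string id = (string)RouteData.Values["id"];      // from the route: "17"
            string name = (string)RouteData.Values["name"];  // from the route: "Arts-and-Crafts"

            int page;
            if (!int.TryParse(Request.QueryString["page"], out page))
                page = 1;                                    // default when ?page= is absent

            string filter = Request.QueryString["filter"];   // null when not supplied
            // ... bind the category list using id, page and filter
        }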

    Read the article

  • Chrome 5 problem with scroll

    - by Parhs
    Hello, I use this code to select rows of my table. If the selection isn't visible, the page scrolls:

        $(document).keydown(function (event) {
            if (event.keyCode == 38 || event.keyCode == 40) {
                var row;
                if (event.keyCode == 40)
                    row = $(row_selected).next();
                if (event.keyCode == 38)
                    row = $(row_selected).prev();
                if (row.length == 0) {
                    row = $(row_selected);
                }
                row_select(row);

                var row_position_bottom = $(row).height() + $(row).offset().top;
                var doc_position = $(window).height() + $(window).scrollTop();
                if (row_position_bottom > doc_position)
                    $(window).scrollTop(row_position_bottom - $(window).height());
                if ($(row).offset().top < $(window).scrollTop())
                    $(window).scrollTop($(row).offset().top);
                return false;
            }
        });

    It works great in Firefox, Internet Explorer and Safari, but not in Chrome. In Chrome 4 (not the latest version) it worked great! The problem is that return false doesn't prevent the page from scrolling.

    Read the article

  • Collection View Item binding issue

    - by Harry
    I have two entities in a Data Model, ENTITY_A and ENTITY_B, that are related (ENTITY_A has a one-to-many relationship to ENTITY_B named DetailItems). I have set up an NSCollectionView with its appropriate bindings to ENTITY_A, and have placed a label on the Collection View Item. If I bind the label to [Collection View Item] with a Model Key Path of [representedObject.FIELD_NAME], it works great. If I bind it to a Model Key Path of [representedObject.DetailItems.@count], again it works great. If I bind it to a Model Key Path of [representedObject.DetailItems.@sum.FIELD_NAME], I get the following error on the console:

        addObserver:forKeyPath:options:context:] is not supported. Key path: @sum.FIELD_NAME

    Can anyone please help? Thank you, Harry

    Read the article

  • NSZombieEnabled breaking working code?

    - by Gordon Fontenot
    I have the following method in UIImageManipulation.m:

        +(UIImage *)scaleImage:(UIImage *)source toSize:(CGSize)size
        {
            UIImage *scaledImage = nil;
            if (source != nil) {
                UIGraphicsBeginImageContext(size);
                [source drawInRect:CGRectMake(0, 0, size.width, size.height)];
                scaledImage = UIGraphicsGetImageFromCurrentImageContext();
                UIGraphicsEndImageContext();
            }
            return scaledImage;
        }

    I am calling it in a different view with:

        imageFromFile = [UIImageManipulator scaleImage:imageFromFile toSize:imageView.frame.size];

    (imageView is a UIImageView allocated earlier.) This works great in my code. It resizes the image perfectly and throws zero errors. I also don't have anything pop up under Build - Analyze. But the second I turn on NSZombieEnabled to debug a different EXC_BAD_ACCESS issue, the code breaks. Every single time. I can turn NSZombieEnabled off and the code runs great. I turn it on, and boom. Broken. I comment out the call, and it works again. Every single time, it gives me an error in the console: -[UIImage release]: message sent to deallocated instance 0x3b1d600. This error doesn't appear if NSZombieEnabled is turned off. Any ideas?

    EDIT: OK, this is killing me. I have stuck breakpoints everywhere I can, and I still cannot get a hold of this thing. Here is the full code for where I call the scaleImage method:

        -(void)setupImageButton
        {
            UIImage *imageFromFile;
            if (object.imageAttribute == nil) {
                imageFromFile = [UIImage imageNamed:@"no-image.png"];
            } else {
                imageFromFile = object.imageAttribute;
            }

            UIImage *scaledImage = [UIImageManipulator scaleImage:imageFromFile toSize:imageButton.frame.size];
            UIImage *roundedImage = [UIImageManipulator makeRoundCornerImage:scaledImage :10 :10 withBorder:YES];
            [imageButton setBackgroundImage:roundedImage forState:UIControlStateNormal];
        }

    The other UIImageManipulator method (makeRoundCornerImage) shouldn't be causing the error, but just in case I'm overlooking something, I threw the entire file up on GitHub here. It's something about this method, though. Has to be. If I comment it out, it works great. If I leave it in: error. But it doesn't throw errors with NSZombieEnabled turned off, ever.

    Read the article

  • Is it worthwhile learning about Emacs? [closed]

    - by dole doug
    Hi there. Some time ago I heard what a great tool Emacs can be. I've read some papers about it and watched some videos too. I've read that Emacs is great not only for developers but for regular users as well, so I decided to start learning how to use it and work with it. The problem is that I'm an MS Windows user, and in my spare time I'm learning PHP and C (I have also built some products in those languages, but I still consider myself to be in the learning stage). Another problem is that I learn alone (no friends around to ask or learn from for programming-related stuff). Can you give me some tips on how to use these types of tools (especially those written for GNU/Unix) with a "poor" GUI but rich features? Do you recommend using only the Windows-specific applications and forgetting about those that come from GNU?

    Read the article

  • using MEF with NHibernate and windsor

    - by Fran
    I have an ASP.NET MVC application that is using NHibernate under the covers for data access. I'm using the Windsor container to handle injecting ISession references into each controller. This works great, but now I'm looking to expand my application with a pluggable architecture so that I can have a core product and specific add-ons. I found a great article on doing this with MEF. My question is how to make the Windsor container and the MEF container live and work together so that I can achieve this. There was an article (http://codebetter.com/blogs/glenn.block/archive/2009/10/31/should-i-use-mef-with-an-ioc-container.aspx) by Glenn Block that talked about this exact issue. At the end it said that the next article would show you how to do this, but there's no part 2. Has anyone created an application like this using ASP.NET MVC, MEF, NHibernate and Castle Windsor?
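    Not part of the question, but one minimal way to let the two containers coexist - a sketch under stated assumptions: IAddOnModule is an invented plugin contract, Windsor stays the application's primary container (controllers, ISession wiring), and MEF is used only to discover add-on assemblies dropped into a folder:

        using System.ComponentModel.Composition.Hosting;
        using Castle.MicroKernel.Registration;
        using Castle.Windsor;

        public interface IAddOnModule
        {
            // Each add-on registers its own components into the shared Windsor container.
            void Register(IWindsorContainer container);
        }

        public static class AddOnBootstrapper
        {
            // Discover add-ons with MEF, then hand them to Windsor so the rest of the
            // app keeps resolving everything through Windsor as before.
            // (Catalog/container lifetime handling is elided in this sketch.)
            public static void Compose(IWindsorContainer windsor, string addOnPath)
            {
                var catalog = new DirectoryCatalog(addOnPath);
                var mef = new CompositionContainer(catalog);

                foreach (var module in mef.GetExportedValues<IAddOnModule>())
                {
                    module.Register(windsor);
                    windsor.Register(Component.For<IAddOnModule>().Instance(module));
                }
            }
        }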

    Read the article

  • How is Magento when it comes to adding custom functionality?

    - by ohsnapy
    Hi. How is working with Magento when it comes to adding custom functionality? From an outsider's view the framework looks very complicated, and it's hard to gauge what is and is not possible. What are your experiences? Is it a difficult framework to get to do what you want? Are there things that simply cannot be customized in Magento? How difficult is it to manage customizations when upgrading to the latest versions, etc.? Any feedback/personal experiences would be great. Examples of highly customized Magento sites would be great too. Thanks for your help.

    Read the article

  • How to setup a webserver in common lisp?

    - by Serpico
    Several months ago, I was inspired by the magnificent book ANSI Common Lisp written by Paul Graham, and by the statement, published by the same author on his blog, that Lisp could be used as a secret weapon in your web development. Wow, that is amazing - that is something I have been looking for for a long time. The author really did develop a successful web application and sell it to Yahoo. With those encouraging images, I determined to spend some time (one year or two, who knows) learning Common Lisp. Maybe someday I will develop my own web application and turn into a great Lisp expert. In fact, this is the second time I have set out to study Lisp. The first time was a couple of years ago, when I was fascinated by the famous book SICP but later found Scheme to be unbelievably immature for real-life applications. After reading some chapters of ANSI Common Lisp, I was pretty sure it is a great book, full of detailed exploration of Common Lisp. Then I began to set up a web server in Common Lisp - after all, this should be the best way to learn something; demonstrations are always better than definitions. As suggested by the book Practical Common Lisp (by the way, also a great book), I chose to install AllegroServe on a Common Lisp implementation. Then, from somewhere else, I learned that Hunchentoot seems to be better than AllegroServe (I don't remember where or from whom, so don't argue with me). Ironically, I never could get either package installed on any Common Lisp implementation. More annoyingly, I don't even know why: the machine always spits out a lot of jargon and leads me into chaos. I've tried searching the internet and have not found anything. Could anybody who has successfully installed these packages on Linux tell me how you did it? Did you run into any trouble? How did you figure out what was wrong and fix it? The more detailed, the more helpful.

    Read the article

  • Django : json serialize a queryset which uses defer() or only()

    - by PlanetUnknown
    I've been using the json serializer and it works great. I had to modify my queries when I started using the only() & defer() filters, like so:

        retObj = OBJModel.objects.defer("create_dt").filter(loged_in_dt__gte=dtStart)

    After doing the above, suddenly the json serializer is returning empty fields:

        {"pk": 19047, "model": "OBJModel_deferred_create_dt", "fields": {}}

    If I remove the defer(), the serializer gives all the data correctly.

        import json
        from django.utils import simplejson
        from django.core import serializers

        json_serializer = serializers.get_serializer("json")()
        retObj = OBJModel.objects.defer("create_dt").filter(loged_in_dt__gte=dtStart)
        json_serializer.serialize(retObj, ensure_ascii=False)

    I've scratched my head over this for a while now. Any insight would be great. NOTE: I am using Django 1.1

    Read the article
