Search Results

Search found 37415 results on 1497 pages for 'attribute id'.


  • ASP.NET Web API and Simple Value Parameters from POSTed data

    - by Rick Strahl
    In testing out various features of Web API I've found a few oddities in the way that serialization is handled. These are probably not super common, but they may throw you for a loop. Here's what I found.

    Simple Parameters from XML or JSON Content

    Web API makes it very easy to create action methods that accept parameters that are automatically parsed from XML or JSON request bodies. For example, you can send a JavaScript JSON object to the server and Web API happily deserializes it for you. This works just fine:

        public string ReturnAlbumInfo(Album album)
        {
            return album.AlbumName + " (" + album.YearReleased.ToString() + ")";
        }

    However, if you have methods that accept simple parameter types like strings, dates, numbers etc., those methods don't receive their parameters from an XML or JSON body by default, and you may end up with failures. Take the following two very simple methods:

        public string ReturnString(string message)
        {
            return message;
        }

        public HttpResponseMessage ReturnDateTime(DateTime time)
        {
            return Request.CreateResponse<DateTime>(HttpStatusCode.OK, time);
        }

    The first one accepts a string, and if called with a JSON string from the client like this:

        var client = new HttpClient();
        var result = client.PostAsJsonAsync<string>("http://rasxps/AspNetWebApi/albums/rpc/ReturnString",
                                                    "Hello World").Result;

    which results in a trace like this:

        POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1
        Content-Type: application/json; charset=utf-8
        Host: rasxps
        Content-Length: 13
        Expect: 100-continue
        Connection: Keep-Alive

        "Hello World"

    it produces… wait for it: null.

    Sending a date in the same fashion:

        var client = new HttpClient();
        var result = client.PostAsJsonAsync<DateTime>("http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime",
                                                      new DateTime(2012, 1, 1)).Result;

    results in this trace:

        POST http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime HTTP/1.1
        Content-Type: application/json; charset=utf-8
        Host: rasxps
        Content-Length: 30
        Expect: 100-continue
        Connection: Keep-Alive

        "\/Date(1325412000000-1000)\/"

    (yes, still the ugly MS AJAX date, yuk! This will supposedly change by RTM, with Json.NET used for client serialization) and produces an error response:

        The parameters dictionary contains a null entry for parameter 'time' of non-nullable type 'System.DateTime' for method 'System.Net.Http.HttpResponseMessage ReturnDateTime(System.DateTime)' in 'AspNetWebApi.Controllers.AlbumApiController'. An optional parameter must be a reference type, a nullable type, or be declared as an optional parameter.

    Basically, simple parameters are not parsed properly, resulting in null being passed to the method. For the string, the call doesn't fail, but for the non-nullable date it produces an error because the method can't handle a null value. This behavior is a bit unexpected to say the least, but there's a simple solution to make this work: an explicit [FromBody] attribute:

        public string ReturnString([FromBody] string message)

    and

        public HttpResponseMessage ReturnDateTime([FromBody] DateTime time)

    which explicitly instructs Web API to read the value from the body.
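    Pulling those pieces together, here is a minimal, self-contained sketch of the [FromBody] fix end to end. The controller and route names echo the article's examples but are otherwise placeholders:

        // Server side: a simple type bound explicitly from the request body.
        public class AlbumApiController : ApiController
        {
            [HttpPost]
            public string ReturnString([FromBody] string message)
            {
                return message;
            }
        }

        // Client side: posts "Hello World" as a JSON string body.
        var client = new HttpClient();
        var result = client.PostAsJsonAsync<string>(
            "http://localhost/AspNetWebApi/albums/rpc/ReturnString",
            "Hello World").Result;
        Console.WriteLine(result.Content.ReadAsStringAsync().Result); // "Hello World"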
    UrlEncoded Form Variable Parsing

    Another similar issue I ran into is with POST form variable binding. Web API can retrieve parameters from the query string and route values, but it doesn't implicitly map parameters from POST values either. Taking our same ReturnString function from earlier and posting a message POST variable like this:

        var formVars = new Dictionary<string, string>();
        formVars.Add("message", "Some Value");
        var content = new FormUrlEncodedContent(formVars);

        var client = new HttpClient();
        var result = client.PostAsync("http://rasxps/AspNetWebApi/albums/rpc/ReturnString", content).Result;

    which produces this trace:

        POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1
        Content-Type: application/x-www-form-urlencoded
        Host: rasxps
        Content-Length: 18
        Expect: 100-continue

        message=Some+Value

    When calling ReturnString:

        public string ReturnString(string message)
        {
            return message;
        }

    unfortunately the message value is not mapped to the message parameter. This sort of mapping is simply not available in Web API. Web API does support binding to form variables, but only as part of model binding, which binds object properties to the POST variables. Sending the same message as in the previous example, you can use the following code to pick up POST variable data:

        public string ReturnMessageModel(MessageModel model)
        {
            return model.Message;
        }

        public class MessageModel
        {
            public string Message { get; set; }
        }

    Note that the model is bound and the message form variable is mapped to the Message property, as other variables would be to matching properties if there were more. This works, but it's not very dynamic. There's no real easy way to retrieve form variables (or query string values, for that matter) from Web API's Request object, as far as I can discern. Well, only if you consider this easy:

        public string ReturnString()
        {
            var formData = Request.Content.ReadAsAsync<FormDataCollection>().Result;
            return formData.Get("message");
        }

    Oddly, FormDataCollection does not support indexers, so you have to use the .Get() method, which is rather odd. If you're running under IIS/Cassini you can always resort to the old and trusty HttpContext access for request data:

        public string ReturnString()
        {
            return HttpContext.Current.Request.Form["message"];
        }

    which works fine and is easier. It's kind of a bummer that HttpRequestMessage doesn't expose some sort of raw Request object with access to dynamic data - given that it's meant to serve as a generic REST/HTTP API, that seems like a crucial missing piece. I don't see any way to read query string values either. To me personally HttpContext works, since I don't see myself using self-hosted code much.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API.

    Read the article

  • Ubuntu 11 and 12 initially fast but later bogs down, CPU pegged

    - by uos??
    I started with Ubuntu 11 a few weeks ago. It's on a Dell M4300 with an OCZ SSD. Default setup, except that I've installed the proprietary NVIDIA graphics and Broadcom wireless drivers. Dual boot with Windows.

    If I cold boot into Ubuntu, it is very fast, just like the Windows experience I'm used to. But SOMETHING happens - I haven't yet determined what - and the system gets incredibly slow and stays that way. At first I thought it had to do with Adobe Flash, because it seemed to be triggered by sites with Flash. But then I removed Flash and the problem remains. I thought it was just an overheating problem, but I've now upgraded to 12.04, which supposedly fixes the overheating problems I've read about. Perhaps the heat situation was brought on by Flash in my early cases? So I installed Jupiter for CPU management, but the thermometer reports a familiar Windows-side temperature of 53 degrees Celsius. Switching Jupiter to lower performance doesn't help.

    When I check the System Monitor application, sorting by CPU usage, there are no obvious problem processes. However, in the graphs tab, both CPU cores are pegged at 100%! I notice that the slowness seems similar to the extremely bad performance I got prior to installing the NVIDIA drivers. I'm not sure if that helps.

    This is the strangest part to me - although the temperature seems OK, even after rebooting the system remains slow, starting with GRUB2, which is very noticeably delayed, all the way through to either Ubuntu or Windows! That's right, even the Windows side suffers and takes several minutes to complete booting, whereas normally (with my SSD) it's ready to use in 15 seconds. The only way to fix it is to shut down and let the parts cool down. Or maybe it just needs a complete power-off and boot rather than a soft reboot, and temperature has nothing to do with it - is that possible? But know that I have never had this problem in Windows; even if Windows gets very hot (135 F), a reboot would be enough time for it to recover. For this reason I don't think it's a heat thing, but I can't imagine what else could be surviving the reboot.

    I'm entirely updated - there are no pending updates. I have the post-release updates of NVIDIA too, btw. If this sounds CLOSE to something you know about, but one of the details doesn't line up exactly, it might be a mistake in my perception. Are there tests you can suggest to rule something out? Thanks!
        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 23
        model name      : Intel(R) Core(TM)2 Duo CPU T9500 @ 2.60GHz
        stepping        : 6
        microcode       : 0x60c
        cpu MHz         : 800.000
        cache size      : 6144 KB
        physical id     : 0
        siblings        : 2
        core id         : 0
        cpu cores       : 2
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 10
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm ida dts tpr_shadow vnmi flexpriority
        bogomips        : 5187.00
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

        processor       : 1
        (identical to processor 0 except the following fields)
        core id         : 1
        apicid          : 1
        initial apicid  : 1
        bogomips        : 5186.94

    (Redundant figures removed. You can view them in the edits if they are still relevant.)

    ps:

        %CPU  PID   USER   COMMAND
        9.4   2399  jason  gnome-terminal
        6.2   2408  jason  bash
        17.3  1117  root   /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none
        13.7  1667  jason  compiz
        1.3   1960  jason  /usr/lib/unity/unity-panel-service
        1.3   1697  jason  python /usr/bin/jupiter
        0.9   1964  jason  /usr/lib/indicator-appmenu/hud-service
        0.6   1689  jason  nautilus -n
        0.4   1458  jason  //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session

    I should highlight specifically that GRUB2 can also be very slow. I don't know in which scenarios GRUB2 is slow, but WHEN it is slow, it is slow both before the menu appears and after the selection is made - although for the diagnosis of GRUB2 it is harder for me to tell what the normal speeds should be. With an SSD, I would expect GRUB2 to load instantly, and the GRUB2 purple to disappear instantly after the selection. The only delay to be expected is the change in graphics modes (though I couldn't guess why that ever requires any noticeable time).

    Read the article

  • Importing data from text file to specific columns using BULK INSERT

    - by Dinesh Asanka
    Bulk insert is much faster than using other techniques such as SSIS. However, when you are using bulk insert you can't insert into specific columns. If, for example, there are five columns in a table, you need to have five values for each record in the text file you are importing from. This is an issue when you are expecting default values to be inserted into tables. Let us say you have a table in which ID, Status and CreatedDate are expected to be filled in automatically, so your text file may only have FirstName and LastName values, as below:

        Dinesh,Asanka
        Saman,Liyanage
        Ruwan,Silva
        Susantha,Bathige
        Jude,Peires
        Sanjeewa,Jayawickrama

    If you use bulk insert against this table as is, you will be returned an error:

        Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 1 (ID).

    To avoid this, you will need to create a view with the columns you are expecting to fill and use bulk insert against it. If you check the table now, you will see the rows from the text file together with the default values.
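    The same column-subset problem comes up when bulk loading from .NET code. As a point of comparison only (not from the original article), here is a minimal C# sketch using SqlBulkCopy with explicit column mappings, so the identity and default columns are left for SQL Server to fill in; the connection string, table name, and file path are hypothetical:

        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        class BulkLoader
        {
            static void Main()
            {
                // Build an in-memory table with only the two supplied columns.
                var table = new DataTable();
                table.Columns.Add("FirstName", typeof(string));
                table.Columns.Add("LastName", typeof(string));

                foreach (var line in File.ReadLines("names.txt"))
                {
                    var parts = line.Split(',');
                    table.Rows.Add(parts[0], parts[1]);
                }

                using (var connection = new SqlConnection("Server=.;Database=Test;Integrated Security=true"))
                {
                    connection.Open();
                    using (var bulk = new SqlBulkCopy(connection))
                    {
                        bulk.DestinationTableName = "dbo.People";
                        // Map only the columns we supply; ID, Status and
                        // CreatedDate keep their identity/default values.
                        bulk.ColumnMappings.Add("FirstName", "FirstName");
                        bulk.ColumnMappings.Add("LastName", "LastName");
                        bulk.WriteToServer(table);
                    }
                }
            }
        }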

    Read the article

  • VSNewFile: A Visual Studio Addin to More Easily Add New Items to a Project

    - by InfinitiesLoop
    My first Visual Studio add-in! Creating add-ins is pretty simple once you get used to the CommandBar model it uses, which is apparently a general Office suite extensibility mechanism. Anyway, let me first explain my motivation for this. It started out as an academic exercise, as I have always wanted to dip my feet into a little VS extensibility. But I thought of a legitimate need for an add-in, at least in my personal experience, so it took on new life. I figured I can't be the only one who has felt this way, so I decided to publish the add-in and host it on GitHub (VSNewFile on GitHub), hoping to spur contributions.

    Adding Files the Built-in Way

    Here's the problem I wanted to solve. You're working on a project, and it's time to add a new file. Whatever it is - a class, script, html page, aspx page, or what-have-you - you go through a menu or keyboard shortcut to get to the "Add New Item" dialog. Typically, you do it by right-clicking the location where you want the file (the project or a folder of it). This brings up a dialog that contains, well, every conceivable type of item you might want to add. It's all the available item templates, which can result in anywhere from a ton to a veritable sea of choices. To be fair, this dialog has been revamped in Visual Studio 2010, which organizes it a little better than Visual Studio 2008 and adds a search box. It also loads noticeably faster.

    To me, this dialog is just getting in my way. If I want to add a JavaScript script to my project, I don't want to have to hunt for the script template item in this dialog. Yes, it is categorized, and yes, it now has a search box. But it's still all this UI to swim through when all I need is a new file in the project. I will name it. I will provide the content. I don't even need a 'template'. VS kind of realizes this: in the add menu of a class library project, for example, there is an "Add Class…" choice. But all that really does is select that project item in the dialog by default. You still must wait for the dialog, see it, and type in a name for the file. How is that really any different from hitting F2 on an existing item? It isn't.

    Adding Files the Hack Way

    What I often find myself doing, just to avoid going through this dialog, is to copy and paste an existing file, rename it, then "CTRL-A, DEL" the content. In a few short keystrokes I've got my new file. Even if the original file wasn't the right type, it doesn't matter - I will rename it anyway, including the extension. It works well enough if the place I am adding the file to doesn't have much in it already. But if there are a lot of files at that level, it sucks, because the new file will have the name "Copy of xyz", causing it to be moved into the 'C' section of the alphabetically sorted items, which might be far, far away from the original file (and so I tend to try and copy a file that starts with 'C' *evil grin*).

    Using 'Export Template'

    To be completely fair I should at least mention this feature. I'm not even sure if it is new in VS 2010 (I think so). It allows you to export a project item or items, including potential project references required by them. The export then becomes a new item in the available 'installed templates'. No doubt this is useful to help bootstrap new projects. But it still requires you to go through the 'New Item' dialog.
    Adding Files with VSNewFile

    So hopefully I have sufficiently defined the problem and got a few of you to think, "Yeah, me too!"… What VSNewFile does is let you skip the dialog entirely by adding project items directly to the context menu. But it does a bit more than that, so do read on. For example, to add a new class, you can right-click the location and pick that option. A new .cs file is instantly added to the project, and the new item is selected and put into 'rename' mode immediately.

    The default items available are shown here. But you can customize them, and you can also customize the content of each template. To do so, you create a directory in your documents folder, 'VSNewFile Templates'. In there, you drop the templates you want to use, but you name them in a particular way. For example, here's a template that will add a new menu item named "Add TITLE". It will add a project item named "SOMEFILE.foo" (or 'SOMEFILE1.foo' if that exists, etc). The format of the file name is:

        <ORDER>_<KEY>_<BASE FILENAME>_<ICON ID>_<TITLE>.<EXTENSION>

    Where:

    - <ORDER> is a number that lets you determine the order of the items in the menu (relative to each other).
    - <KEY> is a case-sensitive identifier, different for each template item. More on that later.
    - <BASE FILENAME> is the default name of the file, which doesn't matter that much, since the user will be renaming it anyway.
    - <ICON ID> is a number that dictates the icon used for the menu item. There are a huge number of built-in choices. More on that later.
    - <TITLE> is the string that will appear in the menu.

    And the contents of the file are the default content for the item (the 'template'). The content of the file can contain anything you want, of course, but it also supports two tokens, %NAMESPACE% and %FILENAME%, which will be replaced with the corresponding values. Here is the content of this sample:

        testing
        Namespace = %NAMESPACE%
        Filename = %FILENAME%
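    Just to restate the naming convention as code, here's a hypothetical C# sketch of how such a filename could be decomposed; this is purely illustrative, not the add-in's actual implementation:

        using System;
        using System.IO;

        static class TemplateName
        {
            // Parses "<ORDER>_<KEY>_<BASE FILENAME>_<ICON ID>_<TITLE>.<EXTENSION>".
            static void Main()
            {
                var fileName = "10_FOO_SOMEFILE_222_Add TITLE.foo"; // hypothetical example
                var extension = Path.GetExtension(fileName);        // ".foo"
                var parts = Path.GetFileNameWithoutExtension(fileName).Split(new[] { '_' }, 5);

                int order = int.Parse(parts[0]);   // menu ordering
                string key = parts[1];             // case-sensitive identifier
                string baseName = parts[2];        // default file name
                int iconId = int.Parse(parts[3]);  // built-in icon face id
                string title = parts[4];           // menu item caption

                Console.WriteLine(order + ": [" + key + "] '" + title + "' -> "
                    + baseName + extension + " (icon " + iconId + ")");
            }
        }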
    I kind of went back and forth on this. I could have made it so there'd be an XML or JSON file that defines the templates, instead of cramming all this data into the filename itself. But I like the simplicity of this better. It makes it easy to customize, since you can literally just throw these files around, copy them from someone else, etc., without worrying about merging data into a central description file in whatever format. Here's our new item showing up:

    Practical Use

    One immediate thing I am using this for is to make it easier to add very commonly used scripts to my web projects. For example, uh, say, jQuery? :) All I need to do is drop jQuery-1.4.2.js and jQuery-1.4.2.min.js into the templates folder, provide the order, title, etc., and instantly I can add jQuery to any project I have without even thinking about "where is jQuery? Can I copy it from that other project?"

    Using the KEY

    There are two reasons for the 'key' portion of the item name. First, it allows you to turn off the built-in, default templates, which are:

    - FILE = Add File (generic, empty file)
    - VB = Add VB Class
    - CS = Add C# Class (includes some basic usings)
    - HTML = Add HTML page (includes basic structure, doctype, etc)
    - JS = Add Script (includes an immediately-invoking function closure)

    To turn one off, just include a file with the name "_<KEY>". For example, to turn off all the items except our custom one, you do this:

    The other reason for the key is that a new Visual Studio command is created for each one. This makes it possible to bind a keyboard shortcut to one of them. So you could, for example, have a keyboard combination that adds a new web page to your website, or a new CS class to your class library, etc. Here is our sample item showing up in the keyboard bindings options. Even though the contents of the template directory may change from one launch of Visual Studio to the next, the bindings will remain attached to any item with a particular key, because the add-in takes care not to lose keyboard bindings even though the commands are completely recreated each time.

    The Icon Face ID

    Visual Studio uses a Microsoft Office style add-in mechanism, I gather. There is a predetermined set of built-in icons available. You can use your own icons when developing add-ins, of course, but I'm no designer. I just wanted to find appropriate-ish icons for the built-in templates and allow you to choose from an existing built-in icon for your own. Unfortunately, there isn't a lot out there on the interwebs that helps you figure out what the built-in icons are. There's an MSDN article that describes at length a way to create a program that lists all the icons. But I don't want to write a program to figure them out! Just show them to me! Sheesh :) Thankfully, someone out there felt the same way and used a novel hack to get the icons to show up in an Outlook toolbar. He then painstakingly took screenshots of them, one group at a time. It isn't complete - there are tens of thousands of icons - but it's good enough. If anyone has an exhaustive list, please let me, and the rest of the add-in community, know. See the Icon Face ID Reference below.

    Installing the Add-in

    It works with Visual Studio 2008 and Visual Studio 2010. Just unzip the release into your Documents\Visual Studio 20xx\Addins folder. It contains the binary and the Visual Studio ".addin" file. For example, the path to mine is:

        C:\Users\InfinitiesLoop\Documents\Visual Studio 2010\Addins

    Conclusion

    So that's it! I hope you find it as useful as I have. It's on GitHub, so if you're into this kind of thing, please do fork it and improve it!

    Reference: VSNewFile on GitHub, VSNewFile release on GitHub, Icon Face ID Reference

    Read the article

  • New January 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I am super excited to announce the January 2013 release of the Ajax Control Toolkit! I have one word to describe this release, and that word is "Charts" - we've added lots of great new chart controls to the Ajax Control Toolkit. You can download the new release directly from http://AjaxControlToolkit.CodePlex.com - or just fire the following command from the Visual Studio Library Package Manager Console window (NuGet):

        Install-Package AjaxControlToolkit

    You can also view the new chart controls by visiting the "live" Ajax Control Toolkit sample site.

    5 New Ajax Control Toolkit Chart Controls

    The Ajax Control Toolkit contains five new chart controls: the AreaChart, BarChart, BubbleChart, LineChart, and PieChart controls. (The original post includes a sample screenshot of each.) We realize that people love to customize the appearance of their charts, so all of the chart controls include appearance properties such as colors. The chart controls render the chart in the browser using SVG. They are compatible with any browser which supports SVG, including Internet Explorer 9 and newer and recent versions of Google Chrome, Mozilla Firefox, and Apple Safari. (If you attempt to display a chart in a browser which does not support SVG, you won't get an error - you just won't get anything.)

    Updates to the HTML Sanitizer

    If you are using the HtmlEditorExtender on a public-facing website, then it is really important that you enable the HTML Sanitizer to prevent Cross-Site Scripting (XSS) attacks. The HtmlEditorExtender uses the HTML Sanitizer by default. The HTML Sanitizer strips any suspicious content (like JavaScript code and CSS expressions) from the HTML submitted with the HtmlEditorExtender. We followed the recommendations of OWASP and ha.ckers.org to identify suspicious content. We updated the HTML Sanitizer with this release to protect against new types of XSS attacks. The HTML Sanitizer now has over 220 unit tests. The Ajax Control Toolkit team would like to thank Gil Cohen, who helped us identify and block additional XSS attacks.

    Change in Ajax Control Toolkit Version Format

    We ran out of numbers. The Ajax Control Toolkit was first released way back in 2006. In previous releases, the version of the Ajax Control Toolkit followed the format: release year + date. So the previous release was 60919, where 6 represents the 6th release year and 0919 represents September 19. Unfortunately, each component of the AssemblyVersion attribute uses a UInt16 data type, which has a maximum usable value of 65,534. The number 70123 is bigger than 65,534, so we had to change our version format with this release. Fortunately, the AssemblyVersion attribute actually accepts four UInt16 numbers, so we used another one. This release of the Ajax Control Toolkit is officially version 7.0123. This new version format should work for another 65,000 years. And yes, I realize that 7.0123 is less than 60,919, but we ran out of numbers.

    Summary

    I hope that you find the chart controls included with this latest release of the Ajax Control Toolkit useful. Let me know if you use them in applications that you build, and let me know if you run into any issues using the new chart controls. Next month, back to improving the File Upload control - more exciting stuff.
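    The UInt16 limit mentioned above is easy to verify for yourself. Here's a tiny C# sketch (mine, not from the release notes) showing why 70123 cannot be a single version component while 7.0123 works:

        using System;

        class VersionLimit
        {
            static void Main()
            {
                int proposed = 70123;
                try
                {
                    // Version components are 16-bit; 70123 overflows at runtime.
                    ushort component = checked((ushort)proposed);
                }
                catch (OverflowException)
                {
                    Console.WriteLine("70123 does not fit in a UInt16");
                }

                // Splitting across two components keeps each value small.
                // Note that "7.0123" parses its second component as the integer 123.
                var v = new Version("7.0123");
                Console.WriteLine(v.Major + " / " + v.Minor); // 7 / 123
            }
        }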

    Read the article

  • SharePoint 2010 Hosting :: Hiding SharePoint 2010 Ribbon From Anonymous Users

    - by mbridge
    The user interface improvements in SharePoint 2010 as a whole are truly amazing. Microsoft has brought this already impressive product leaps and bounds in terms of accessibility, standards, and usability. One thing you might be aware of is the new and quite useful "ribbon" control that appears by default at the top of every SharePoint 2010 master page. You'll see this ribbon not only in the 2010 web interface, but also throughout the entire family of Office products coming out this year. Even SharePoint Designer 2010 makes use of the ribbon in a very flexible and useful way.

    Hiding the Ribbon

    In SharePoint 2010, the ribbon is used almost exclusively for content creation and site administration. It doesn't make much sense to show the ribbon on a public-facing internet site (in fact, it can really detract from your site's design when it appears), so you'll probably want to hide the ribbon when users aren't logged in. Here's how it works:

        <SharePoint:SPSecurityTrimmedControl PermissionsString="ManagePermissions" runat="server">
            <div id="s4-ribbonrow" class="s4-pr s4-ribbonrowhidetitle">
                <!-- Ribbon code appears here... -->
            </div>
        </SharePoint:SPSecurityTrimmedControl>

    In your master page, find the SharePoint ribbon by looking for the line of code that begins with <div id="s4-ribbonrow">. Place the SPSecurityTrimmedControl code around your ribbon to conditionally hide it based on user permissions. In our example, we've hidden the ribbon from any user who doesn't have the ManagePermissions ability, which is going to be almost any user short of a site administrator.

    Other Permission Levels

    You can specify different permission levels for the SPSecurityTrimmedControl, allowing you to configure exactly who can see the SharePoint 2010 ribbon. Basically, this control will hide anything inside of it when users don't have the specified PermissionString. The available options include:

    1. List Permissions: ManageLists, CancelCheckout, AddListItems, EditListItems, DeleteListItems, ViewListItems, ApproveItems, OpenItems, ViewVersions, DeleteVersions, CreateAlerts, ViewFormPages

    2. Site Permissions: ManagePermissions, ViewUsageData, ManageSubwebs, ManageWeb, AddAndCustomizePages, ApplyThemeAndBorder, ApplyStyleSheets, CreateGroups, BrowseDirectories, CreateSSCSite, ViewPages, EnumeratePermissions, BrowseUserInfo, ManageAlerts, UseRemoteAPIs, UseClientIntegration, Open, EditMyUserInfo

    3. Personal Permissions: ManagePersonalViews, AddDelPrivateWebParts, UpdatePersonalWebParts

    You can use this control to hide anything in your master page or on related page layouts, so be sure to keep it in mind when you're trying to hide/show things conditionally based on user permission.

    The One Catch

    You may notice that the login control (or welcome control) is actually inside the ribbon by default in SharePoint 2010. You'll probably want to pull this control out of the ribbon and place it elsewhere on your page. Just look for the line of code that looks like this:

        <wssuc:Welcome id="IdWelcome" runat="server" EnableViewState="false"/>

    Move this code out of the ribbon and into another location within your master page. Save your changes, check in and approve all files, and anonymous users will never know your site is built on SharePoint 2010!

    Read the article

  • Visual Studio 10 crashed when tried to open one of solutions

    - by Michael Freidgeim
    Visual Studio 10 crashed when I tried to open one of my solutions. Closing Visual Studio and rebooting the machine didn't help. The error message that was logged (see below) didn't give any useful ideas. Finally it was fixed after I deleted the MySolution.suo file, which was quite big, and also the Resharper folders.

        Log Name:      Application
        Source:        Application Error
        Event ID:      1000
        Task Category: (100)
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Description:
        Faulting application name: devenv.exe, version: 10.0.40219.1, time stamp: 0x4d5f2a73
        Faulting module name: msenv.dll, version: 10.0.40219.1, time stamp: 0x4d5f2d48
        Exception code: 0xc0000005
        Fault offset: 0x00355770
        Faulting process id: 0x1dc0
        Faulting application start time: 0x01cd1836888599f4
        Faulting application path: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe
        Faulting module path: c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\msenv.dll
        Report Id: 9924b2f9-844e-11e1-bc19-782bcba513ea

        Event Xml:
        <Event>
          <System>
            <Provider Name="Application Error" />
            <EventID Qualifiers="0">1000</EventID>
            <Level>2</Level>
            <Task>100</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2012-04-12T03:21:31.000000000Z" />
            <EventRecordID>401998</EventRecordID>
            <Channel>Application</Channel>
            <Security />
          </System>
          <EventData>
            <Data>devenv.exe</Data>
            <Data>10.0.40219.1</Data>
            <Data>4d5f2a73</Data>
            <Data>msenv.dll</Data>
            <Data>10.0.40219.1</Data>
            <Data>4d5f2d48</Data>
            <Data>c0000005</Data>
            <Data>00355770</Data>
            <Data>1dc0</Data>
            <Data>01cd1836888599f4</Data>
            <Data>C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe</Data>
            <Data>c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\msenv.dll</Data>
            <Data>9924b2f9-844e-11e1-bc19-782bcba513ea</Data>
          </EventData>
        </Event>

    Read the article

  • Maps for Business: Generating Valid Signatures

    This video shows developers how to generate signed requests to Google Maps for Business Web Services such as the Geocoding API. It also points Maps for Business developers to pertinent documentation like developers.google.com, and shows an example of the Maps for Business welcome letter. To get a response from the Maps for Business API Web Services, developers need to know: how to find their Maps for Business client ID; how to include that client ID in requests; how to retrieve their Maps for Business digital signing key from the Maps for Business welcome materials; and how to use the key to generate a signature. The video walks developers through the signature generation steps using Python, and provides pointers to examples in other languages. From: GoogleDevelopers
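    For readers who want the gist without watching, the documented URL-signing scheme is an HMAC-SHA1 over the request path and query using the URL-safe-Base64 private key. Here is a minimal C# sketch of that algorithm (the video uses Python; the client ID, key, and URL below are placeholder values, the key being the sample from Google's public documentation):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class UrlSigner
        {
            static void Main()
            {
                // Placeholder request and key - substitute your own values.
                var url = new Uri("http://maps.googleapis.com/maps/api/geocode/json?address=Sydney&client=gme-yourclientid");
                string privateKey = "vNIXE0xscrmjlyV-12Nj_BvUPaw="; // documentation sample key

                // The key is "URL-safe" Base64: '-' and '_' stand in for '+' and '/'.
                byte[] keyBytes = Convert.FromBase64String(privateKey.Replace('-', '+').Replace('_', '/'));

                // Only the path and query are signed, not the scheme or host.
                byte[] data = Encoding.ASCII.GetBytes(url.PathAndQuery);
                byte[] hash;
                using (var hmac = new HMACSHA1(keyBytes))
                {
                    hash = hmac.ComputeHash(data);
                }

                // Encode the signature URL-safely and append it to the request.
                string signature = Convert.ToBase64String(hash).Replace('+', '-').Replace('/', '_');
                Console.WriteLine(url + "&signature=" + signature);
            }
        }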

    Read the article

  • how to use gps receiver bu-353 in ubuntu 10.10

    - by Parimal
    Hi, I have a BU-353 GPS receiver with a USB interface and I want to know how I can use it under Ubuntu. I ran the following command:

        gpsd -n -N -D 2 /dev/ttyUSB0

    and got this output:

        gpsd: launching (Version 2.94)
        gpsd: listening on port gpsd
        gpsd: running with effective group ID 1000
        gpsd: running with effective user ID 1000
        gpsd: opening GPS data source type 3 at '/dev/ttyUSB0'
        gpsd: speed 38400, 8N1
        gpsd: Garmin: garmin_gps Linux USB module not active.
        gpsd: speed 9600, 8O1
        gpsd: speed 38400, 8N1
        gpsd: gpsd_activate(): opened GPS (fd 6)
        gpsd: speed 4800, 8N1
        gpsd: NTPD ntpd_link_activate: 0
        gpsd: /dev/ttyUSB0 identified as type SiRF binary (2.687608 sec @ 4800bps)
        gpsd: detaching 127.0.0.1 (sub 1, fd 8) in detach_client
        gpsd: detaching 127.0.0.1 (sub 1, fd 8) in detach_client

    After this I started tangoGPS, which said no GPS and no gpsd found.

    Read the article

  • wcf http 504: Working on a mystery

    - by James Fleming
    Ok, so you're here because you've been trying to solve the mystery of why you're getting a 504 error. If you've made it to this lonely corner of the Internet, then the advice you're getting from other bloggers isn't the answer you are after. It wasn't the answer I needed either, so once I did solve my problem, I thought I'd share the answer with you.

    For starters, if by some miracle you landed here first, you may not already know that the 504 error is NOT coming from IIS or Cassini; that response code is coming from Fiddler:

        HTTP/1.1 504 Fiddler - Receive Failure
        Content-Type: text/html
        Connection: close
        Timestamp: 09:43:05.193

        ReadResponse() failed: The server did not return a response for this request.

    The takeaway here is that Fiddler won't help you with the diagnosis, and any further digging in that direction is a red herring.

    Assuming you've dug around a bit, you may have arrived at posts which suggest you may be getting the error because you're trying to hump too much data over the wire and have an urgent need to employ an anti-pattern due to a special case: http://delphimike.blogspot.com/2010/01/error-504-in-wcfnet-35.html

    Or perhaps you're experiencing wonky behavior using the WCF-CustomIsolated adapter in a Windows Server 2008 64-bit environment, in which case the rather fly MVP Dwight Goins' advice is what you need: http://dgoins.wordpress.com/2009/12/18/64bit-wcf-custom-isolated-%E2%80%93-rest-%E2%80%93-%E2%80%9C504%E2%80%9D-response/

    For me, none of that was helpful. I could perform a GET on a single record:

        http://localhost:8783/Criterion/Skip(0)/Take(1)

    but I couldn't get more than one record in my collection, as in:

        http://localhost:8783/Criterion/Skip(0)/Take(2)

    I didn't have a big payload or a large number of objects, as you can see by the size of one record below:

        A-1B
        f5abd850-ec52-401a-8bac-bcea22c74138
        .biological/legal mother
        This item refers to the supervisor's evaluation of the caseworker's ability to involve the biological/legal mother in the permanency planning process.
        75d8ecb7-91df-475f-aa17-26367aeb8b21
        false
        true
        Admin account
        2010-01-06T17:58:24.88
        1.20
        764a2333-f445-4793-b54d-1c3084116daa

    So while I was able to retrieve one record without a hitch (thus the record above), I wasn't able to return multiple records. I confirmed I could get each record individually (Skip(1)/Take(1)), so it stood to reason the problem wasn't with the data at all, and I suspected a serialization error. The first step to resolving this was to enable WCF tracing. Instructions on how to set it up are here: http://msdn.microsoft.com/en-us/library/ms733025.aspx. The tracing log led me to the solution:

        The use of type 'Application.Survey.Model.Criterion' as a get-only collection is not supported with NetDataContractSerializer. Consider marking the type with the CollectionDataContractAttribute attribute or the SerializableAttribute attribute or adding a setter to the property.

    So I was wrong (but close!). The problem was a deserialization issue in trying to recreate my read-only collection: http://msdn.microsoft.com/en-us/library/aa347850.aspx#Y1455

    Looking at my underlying model, I saw I did have a read-only collection. Adding a setter was all it took.
        public virtual ICollection<string> GoverningResponses
        {
            get
            {
                if (!string.IsNullOrEmpty(GoverningResponse))
                {
                    return GoverningResponse.Split(';');
                }

                return null;
            }
            // The setter is the fix; the original excerpt omitted its body,
            // so this one simply round-trips the values into the backing string.
            set
            {
                GoverningResponse = value == null ? null : string.Join(";", value);
            }
        }

    Hope this helps. If it does, post a comment.
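    The trace message above also offers CollectionDataContractAttribute as an alternative to adding a setter. A minimal sketch of that route (the type name is hypothetical, not from the original post):

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        // A dedicated collection type marked with [CollectionDataContract]
        // can be rehydrated by NetDataContractSerializer without needing a
        // setter on the property that exposes it.
        [CollectionDataContract]
        public class GoverningResponseCollection : List<string>
        {
            public GoverningResponseCollection() { }

            public GoverningResponseCollection(IEnumerable<string> items)
                : base(items) { }
        }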

    Read the article

  • Don’t Program by Fear, Question Everything

    - by João Angelo
    Perusing some code base, I recently came across a code comment that I would like to share. It was something like this:

        class Animal
        {
            public Animal()
            {
                this.Id = Guid.NewGuid();
            }

            public Guid Id { get; private set; }
        }

        class Cat : Animal
        {
            public Cat()
                : base() // Always call base since it's not always done automatically
            {
            }
        }

    Note: All class names were changed to protect the innocent.

    To clear any possible doubts, the C# specification explicitly states that:

        If an instance constructor has no constructor initializer, a constructor initializer of the form base() is implicitly provided. Thus, an instance constructor declaration of the form C(...) {...} is exactly equivalent to C(...): base() {...}

    So in conclusion, it's clearly an incorrect comment, but what I find alarming is how a comment like that gets into a code base and survives the test of time. Not to mention what it can do to someone who is making the jump from other technologies to C# and reads stuff like that.
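    A quick way to convince yourself of the equivalence (a trivial sketch of mine, not from the original post): both derived classes below get their Id assigned by the base constructor, with or without the explicit base() call.

        using System;

        class Animal
        {
            public Animal() { Id = Guid.NewGuid(); }
            public Guid Id { get; private set; }
        }

        class CatExplicit : Animal
        {
            public CatExplicit() : base() { } // explicit base() call
        }

        class CatImplicit : Animal
        {
            public CatImplicit() { } // base() is supplied by the compiler
        }

        static class Demo
        {
            static void Main()
            {
                // Both print non-empty GUIDs: the base constructor ran in both cases.
                Console.WriteLine(new CatExplicit().Id);
                Console.WriteLine(new CatImplicit().Id);
            }
        }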

    Read the article

  • Ranking Part III

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    In previous blogs, "Ranking an Introduction" and "Ranking Part II", you have already praised me in "Rank the Author" and learned how to create a new element on a page and how to place it where you need it. For this installment, I just added code to keep the number of votes (you vote by clicking one of the stars) and the total vote. Using these two, we can compute the average rating. It's a small step, but its purpose is to show that we do not need a detailed history in order to compute the average. A running total is sufficient. Please note that once you close the game, you will lose your previous total. In real life, we persist the totals in the list itself. We also keep a list of actual votes, but its purpose is to prevent double votes. If a person has already voted, his user id is already on the list and our program will check for it and bar the person from voting again. This is coded in an event receiver, which is a SharePoint server piece of code. I will show you how to do this part in a subsequent blog.

    Again, go to the page and look at the code. The gist of it is here. avg, votes, and stars are global variables that I defined before.

        function sendRate(sel) {
            // I hate long lines, so I created pieces of the message in their own vars
            var s1 = "Your Rating Was: ";
            var s2 = ".. ";
            var s3 = "\nVotes = ";
            var s4 = "\nTotal Stars = ";
            var s5 = "\nAverage = ";
            var s;
            s = parseInt(sel.id.replace("_", '')); // Get the selected star number
            votes = parseInt(votes) + 1;
            stars = parseInt(stars) + s;
            avg = parseFloat(stars) / parseFloat(votes);
            alert(s1 + sel.id + s2 + sel.title + s3 + votes + s4 + stars + s5 + avg);
        }

    Click on the link to play and examine "Ranking with Stats". That's all folks!
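    The running-total idea generalizes beyond the page script above. Here's a tiny C# sketch of mine (illustrative, not the post's code) of the same bookkeeping: only two persisted numbers are needed to answer the average at any time.

        // Minimal running-average accumulator: no vote history required.
        public class RunningRating
        {
            public int Votes { get; private set; }  // number of votes cast
            public int Stars { get; private set; }  // running total of all stars

            public void Vote(int stars)
            {
                Votes++;
                Stars += stars;
            }

            public double Average
            {
                get { return Votes == 0 ? 0.0 : (double)Stars / Votes; }
            }
        }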

    Read the article

  • Should we rename overloaded methods?

    - by Mik378
    Assume an interface containing these methods:

        Car find(long id);
        List<Car> find(String model);

    Is it better to rename them like this?

        Car findById(long id);
        List<Car> findByModel(String model);

    Indeed, any developer who uses this API won't need to look at the interface to know the possible arguments of the initial find() methods. So my question is more general: what is the benefit of using overloaded methods in code, since it reduces readability?
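    To make the trade-off concrete, here is the same API sketched in C# (hypothetical types), first overloaded and then with intention-revealing names; at the call site, only the second version tells you what the argument means:

        using System.Collections.Generic;

        public class Car { }

        public interface ICarRepositoryOverloaded
        {
            Car Find(long id);            // which Find am I calling...
            List<Car> Find(string model); // ...is visible only from the argument type
        }

        public interface ICarRepositoryNamed
        {
            Car FindById(long id);
            List<Car> FindByModel(string model);
        }

        // Call sites: repo.Find(42) vs repo.FindById(42) - the second reads
        // unambiguously even when the argument is a bare literal or variable.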

    Read the article

  • Creating and using VM Groups in VirtualBox

    - by Fat Bloke
    With VirtualBox 4.2 we introduced the Groups feature, which allows you to organize and manage your guest virtual machines collectively rather than individually. Groups are quite a powerful concept and there are a few nice features you may not have discovered yet, so here's a bit more information about groups and how they can be used.

    Creating a group

    Groups are just ad hoc collections of virtual machines, and there are several ways of creating a group.

    In the VirtualBox Manager GUI:

    - Drag one VM onto another to create a group of those 2 VMs. You can then drag and drop more VMs into that group;
    - Select multiple VMs (using Ctrl or Shift and click), then select the menu: Machine...Group; or press Cmd+U (Mac) or Ctrl+U (Windows); or right-click the multiple selection and choose Group, like this:

    From the command line: group membership is an attribute of the VM, so you can modify the VM to belong in a group. For example, to put the VM "Ubuntu" into the group "TestGroup" run this command:

        VBoxManage modifyvm "Ubuntu" --groups "/TestGroup"

    Deleting a Group

    Groups can be deleted by removing the group attribute from all the VMs that constitute that group. To do this via the command line, the syntax is:

        VBoxManage modifyvm "Ubuntu" --groups ""

    In the VirtualBox Manager, this is more easily done by right-clicking on a group header and selecting "Ungroup", like this:

    Multiple Groups

    Now that we understand that groups are just attributes of VMs, it can be seen that VMs can exist in multiple groups. For example, doing this:

        VBoxManage modifyvm "Ubuntu" --groups "/TestGroup","/ProjectX","/ProjectY"

    Results in:

    Or via the VirtualBox Manager, you can drag VMs while pressing the Alt key (Mac) or Ctrl (other platforms).

    Nested Groups

    Just like you can drag VMs around in the VirtualBox Manager, you can also drag whole groups around, and dropping a group within a group creates a nested group. Via the command line, nested groups are specified using a path-like syntax, like this:

        VBoxManage modifyvm "Ubuntu" --groups "/TestGroup/Linux"

    ...which creates a sub-group and puts the VM in it.

    Navigating Groups

    In the VirtualBox Manager, groups can be collapsed and expanded by clicking on the caret to the left in the group header. But you can also enter and leave groups too, either by using the right-arrow/left-arrow keys, or by clicking on the caret on the right-hand side of the group header, like this:

    ...leading to a view of just the group contents. You can leave or return to the parent in the same way. Don't worry if you are imprecise with your clicking: you can double-click on the entire right half of the group header to enter a group, and the left half to leave a group. Double-clicking on the left half when you're at the top will roll up or collapse the group.

    Group Operations

    The real power of groups is not simply in arranging them prettily in the Manager. Rather, it is about performing collective operations on them once you have grouped them appropriately. For example, let's say that you are working on a project (Project X) where you have a solution stack of a database VM, a middleware/app VM, and a couple of client VMs which you use to test your app. With VM groups you can start the whole stack with one operation.
    Select the group header, and choose Start. The full list of operations that may be performed on groups is:

    - Start
      - Starts from any state (boot or resume)
      - Start VMs in headless mode (hold Shift while starting)
    - Pause
    - Reset
    - Close
      - Save state
      - Send Shutdown signal
      - Poweroff
    - Discard saved state
    - Show in filesystem
    - Sort

    Conclusion

    Hopefully we've shown that the introduction of VM Groups not only makes Oracle VM VirtualBox pretty, but pretty powerful too.

    - FB

    Read the article

  • Implementing an Interceptor Using NHibernate’s Built In Dynamic Proxy Generator

    - by Ricardo Peres
    NHibernate 3.2 came with an included proxy generator, which means there is no longer the need - or the possibility, for that matter - to choose Castle DynamicProxy, LinFu or Spring. This is actually a good thing, because it means one less assembly to deploy. Apparently, this generator was based, at least partially, on LinFu. As there are not many tutorials out there demonstrating its usage, here's one, demonstrating one of the most requested features: implementing INotifyPropertyChanged. This interceptor, of course, will still feature all of NHibernate's functionality that you are used to, such as lazy loading.

    We will start by implementing an NHibernate interceptor, by inheriting from the base class NHibernate.EmptyInterceptor. This class does not do anything by itself, but it allows us to plug in behavior by overriding some of its methods, in this case, Instantiate:

        public class NotifyPropertyChangedInterceptor : EmptyInterceptor
        {
            private ISession session = null;

            private static readonly ProxyFactory factory = new ProxyFactory();

            public override void SetSession(ISession session)
            {
                this.session = session;
                base.SetSession(session);
            }

            public override Object Instantiate(String clazz, EntityMode entityMode, Object id)
            {
                Type entityType = Type.GetType(clazz);
                IProxy proxy = factory.CreateProxy(entityType, new _NotifyPropertyChangedInterceptor(), typeof(INotifyPropertyChanged)) as IProxy;

                _NotifyPropertyChangedInterceptor interceptor = proxy.Interceptor as _NotifyPropertyChangedInterceptor;
                interceptor.Proxy = this.session.SessionFactory.GetClassMetadata(entityType).Instantiate(id, entityMode);

                this.session.SessionFactory.GetClassMetadata(entityType).SetIdentifier(proxy, id, entityMode);

                return (proxy);
            }
        }

    Then we need a class that implements the NHibernate dynamic proxy behavior. Let's place it inside our interceptor, because it will only be used there:

        class _NotifyPropertyChangedInterceptor : NHibernate.Proxy.DynamicProxy.IInterceptor
        {
            private PropertyChangedEventHandler changed = delegate { };

            public Object Proxy { get; set; }

            #region IInterceptor Members

            public Object Intercept(InvocationInfo info)
            {
                Boolean isSetter = info.TargetMethod.Name.StartsWith("set_") == true;
                Object result = null;

                if (info.TargetMethod.Name == "add_PropertyChanged")
                {
                    PropertyChangedEventHandler propertyChangedEventHandler = info.Arguments[0] as PropertyChangedEventHandler;
                    this.changed += propertyChangedEventHandler;
                }
                else if (info.TargetMethod.Name == "remove_PropertyChanged")
                {
                    PropertyChangedEventHandler propertyChangedEventHandler = info.Arguments[0] as PropertyChangedEventHandler;
                    this.changed -= propertyChangedEventHandler;
                }
                else
                {
                    result = info.TargetMethod.Invoke(this.Proxy, info.Arguments);
                }

                if (isSetter == true)
                {
                    String propertyName = info.TargetMethod.Name.Substring("set_".Length);
                    this.changed(this.Proxy, new PropertyChangedEventArgs(propertyName));
                }

                return (result);
            }

            #endregion
        }

    What this does for every interceptable method (those that are either virtual or come from INotifyPropertyChanged) is:

    - For the methods that came from the INotifyPropertyChanged interface, add_PropertyChanged and remove_PropertyChanged (yes, events are methods), we add an implementation that adds or removes the event handlers to the delegate which we declared as changed;
    - For all the others, we direct them to the place where they are actually implemented, which is the Proxy field;
    - If the call is setting a property, it fires the PropertyChanged event afterwards.

    In order to use this, we need to add the interceptor to the Configuration before building the ISessionFactory:

        using (ISessionFactory factory = cfg.SetInterceptor(new NotifyPropertyChangedInterceptor()).BuildSessionFactory())
        {
            using (ISession session = factory.OpenSession())
            using (ITransaction tx = session.BeginTransaction())
            {
                Customer customer = session.Get<Customer>(100); // some id
                INotifyPropertyChanged inpc = customer as INotifyPropertyChanged;
                inpc.PropertyChanged += delegate(Object sender, PropertyChangedEventArgs e)
                {
                    // fired when a property changes
                };

                customer.Address = "some other address"; // will raise PropertyChanged
                customer.RecentOrders.ToList(); // will trigger the lazy loading
            }
        }

    Any problems, questions? Do drop me a line!

    Read the article

  • SOA PS5 Bundle Patch 4 and OSB PS5 Bundle Patch 1

    - by ShawnBailey
    Announcing the availability of SOA Suite 11g PS5 Bundled Patch 4 and OSB PS5 Bundle Patch 1. These bundled patches contain a number of high-impact fixes for PS5 and are recommended for anyone currently using this release. Please review the list of included fixes in the readmes, and if you are running with any SOA or OSB patches not included in the bundled patches, please request that Support create a one-off on top of the appropriate bundled patch. The patches can be downloaded from My Oracle Support: under 'Patches & Updates', enter '14406487' (SOA) or '14389126' (OSB) and click 'Search'. Further information on specific included fixes can also be found in the following documents on MOS: SOA 11g: Bundle Patch Reference, Doc ID 1485949.1; OSB 11g: Bundle Patch Reference, Doc ID 1499170.1.

    Read the article

  • ASP.Net MVC - how to post values to the server that are not in an input element

    - by David Carter
    Problem

    As was mentioned in a previous blog, I am building a web page that allows the user to select dates in a calendar and then shows the dates in an unordered list. The problem now is that those dates need to be sent to the server on page submit so that they can be saved to the database. If I was storing the dates in an input element, say a textbox, that wouldn't be an issue, but because they are in an html element whose contents are not posted to the server, an alternative strategy needs to be developed.

    Solution

    The approach that I took to solve this problem is as follows:

    1. Place a hidden input field on the form

        <input id="hiddenDates" name="hiddenDates" type="hidden" value="" />

    ASP.NET MVC has an Html helper with a method called Hidden() that will do this for you: @Html.Hidden("hiddenDates").

    2. Copy the values from the html element to the hidden input field before submitting the form

    The following javascript is added to the page:

        $(function () {
            $('#formCreate').submit(function () {
                PopulateHiddenDates();
            });
        });

        function PopulateHiddenDates() {
            var dateValues = '';
            $($('#dateList').children('li')).each(function (index) {
                dateValues += $(this).attr("id") + ",";
            });
            $('#hiddenDates').val(dateValues);
        }

    I'm using jQuery to bind to the form submit event so that my method to populate the hidden field gets called before the form is submitted. The dateList element is an unordered list, and by using the jQuery each function I can iterate through all the <li> items it contains, get each item's id attribute (to which I have assigned the value of the date in milliseconds) and write them to the hidden field as a comma-delimited string.

    3. Process the dates on the server

        [HttpPost]
        public ActionResult Create(string hiddenDates, int utcOffset)
        {
            List<DateTime> dates = GetDates(hiddenDates, utcOffset);
            // ... save the dates, then return the usual result
            return View();
        }

        private List<DateTime> GetDates(string hiddenDates, int utcOffset)
        {
            List<DateTime> dates = new List<DateTime>();
            var values = hiddenDates.Split(",".ToCharArray(), StringSplitOptions.RemoveEmptyEntries);

            foreach (var item in values)
            {
                // JavaScript dates are milliseconds since the Unix epoch.
                DateTime newDate = new DateTime(1970, 1, 1).AddMilliseconds(double.Parse(item)).AddMinutes(utcOffset * -1);
                dates.Add(newDate);
            }

            return dates;
        }

    By declaring a parameter with the same name as the hidden field, ASP.NET will take care of finding the corresponding entry in the form collection posted back to the server and binding it to the hiddenDates parameter! Excellent! I now have the dates the user selected and I can save them to the database. I have also used the same technique to pass back a utcOffset so that I know what timezone the user is in and can show the dates correctly to users in other timezones if necessary (this isn't strictly necessary at the moment, but I plan to introduce times later).

    Saving multiple dates from an unordered list - DONE!

    Read the article

  • Email sent from server with rDNS & SPF being blocked by Hotmail

    - by Canadaka
    I have been unable to send email to users on Hotmail or other Microsoft email servers for some time. It's been a major headache trying to find out why and how to fix the issue. The blocked emails are sent from my domain, canadaka.net. I use Google Apps to host the regular email service for my @canadaka.net email addresses. I can send email from my desktop or Gmail to a Hotmail address without any problem, but any email sent from my server on behalf of canadaka.net is blocked - it doesn't even arrive in the Junk folder.

    The IP the emails are sent from is the same IP my site is hosted on: 66.199.162.177. This IP is new to me since August 2010; I had a different IP for the previous 3-4 years. The IP is not on any credible spam lists: http://www.anti-abuse.org/multi-rbl-check-results/?host=66.199.162.177. The one list my IP is on, spamcannibal.org, seems to be out of my control; it says "no reverse DNS, MX host should have rDNS - RFC1912 2.1". But since I use Google for my email hosting, I don't have control over setting up rDNS for all the MX records. I do have reverse DNS set up for my IP though; it resolves to "mail.canadaka.net".

    I have signed up for SNDS and was approved. My IP says "All of the specified IPs have normal status."

    Sender Score: 100 - https://www.senderscore.org/lookup.php?lookup=66.199.162.177&ipLookup.x=55&ipLookup.y=14

    My McAfee threat level seems fine.

    I have a TXT SPF record set up. I am currently using xname.org as my DNS, and they don't have a field for SPF, but their FAQ says to add the SPF info as a TXT entry:

        v=spf1 a include:_spf.google.com ~all

    Some SPF-checking tools I've used detect that my domain has a valid SPF, but others don't, like Microsoft's SPF wizard: "No SPF Record Found. A and MX Records Available". I think this is because it's looking specifically for an SPF record rather than a TXT record. From my home I can run "nslookup -type=TXT canadaka.net" and it returns:

        Server: google-public-dns-a.google.com
        Address: 8.8.8.8

        Non-authoritative answer:
        canadaka.net text = "v=spf1 a include:_spf.google.com ~all"

    One strange thing I found is that I'm unable to ping hotmail.com or msn.com, or do a "telnet mail.hotmail.com 25". I am able to ping gmail.com and many other domains I tried. I tried changing my DNS servers to Google's Public DNS and did an ipconfig /flushdns, but that had no effect. I am, however, able to connect with telnet to mx1.hotmail.com.

    This is what the email headers look like when I send to a Google email server and the email is received with no troubles. You can see that SPF is passing.
        Delivered-To: [email protected]
        Received: by 10.146.168.12 with SMTP id q12cs91243yae; Sun, 27 Feb 2011 18:01:49 -0800 (PST)
        Received: by 10.43.48.7 with SMTP id uu7mr4292541icb.68.1298858509242; Sun, 27 Feb 2011 18:01:49 -0800 (PST)
        Return-Path:
        Received: from canadaka.net ([66.199.162.177]) by mx.google.com with ESMTP id uh9si8493137icb.127.2011.02.27.18.01.45; Sun, 27 Feb 2011 18:01:48 -0800 (PST)
        Received-SPF: pass (google.com: domain of [email protected] designates 66.199.162.177 as permitted sender) client-ip=66.199.162.177;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 66.199.162.177 as permitted sender) [email protected]
        Message-Id: <[email protected]
        Received: from coruscant ([127.0.0.1]:12907) by canadaka.net with [XMail 1.27 ESMTP Server] id for from ; Sun, 27 Feb 2011 18:01:29 -0800
        Date: Sun, 27 Feb 2011 18:01:29 -0800
        Subject: Test
        To: [email protected]
        From: XXXX
        Reply-To: [email protected]
        X-Mailer: PHP/5.2.13

    I can send to Gmail and other email services fine. I don't know what I'm doing wrong!

    UPDATE 1

    I have been removed from Hotmail's IP block and am now able to send emails to Hotmail, but they are all going directly to the Junk folder.

    UPDATE 2

    I used telnet to send a test message to port25.com; it seems my SPF is not being detected:

        Result: neutral (SPF-Result: None)
        canadaka.net. SPF (no records)
        canadaka.net. TXT (no records)

    I do have a TXT record; it's been there for years, though I did change it a week ago. Other sites that allow you to check your SPF detect it, but some others, like Microsoft's wizard, don't. This is what my SPF record in my xname.org DNS file looks like:

        canadaka.net. 86400 IN TXT "v=spf1 a include:_spf.google.com ~all"

    I did have a nameserver as my 4th option that doesn't have the TXT records, since it doesn't support them. So I removed it from the list and instead added wtfdns.com as my 4th and 5th nameservers, which do support TXT.

    Read the article

  • How to speed up this simple mysql query?

    - by Jim Thio
The query is simple:

SELECT TB.ID, TB.Latitude, TB.Longitude, 111151.29341326*SQRT(pow(-6.185-TB.Latitude,2)+pow(106.773-TB.Longitude,2)*cos(-6.185*0.017453292519943)*cos(TB.Latitude*0.017453292519943)) AS Distance FROM `tablebusiness` AS TB WHERE -6.2767668133836 < TB.Latitude AND TB.Latitude < -6.0932331866164 AND FoursquarePeopleCount > 5 AND 106.68123318662 < TB.Longitude AND TB.Longitude < 106.86476681338 ORDER BY Distance

See, we just look at all businesses within a rectangle. The table has 1.6 million rows; within that small rectangle there are only 67,565 businesses. The structure of the table is:

1. ID varchar(250) utf8_unicode_ci, NOT NULL
2. Email varchar(400) utf8_unicode_ci, NULL
3. InBuildingAddress varchar(400) utf8_unicode_ci, NULL
4. Price int(10), NULL
5. Street varchar(400) utf8_unicode_ci, NULL
6. Title varchar(400) utf8_unicode_ci, NULL
7. Website varchar(400) utf8_unicode_ci, NULL
8. Zip varchar(400) utf8_unicode_ci, NULL
9. Rating Star double, NULL
10. Rating Weight double, NULL
11. Latitude double, NULL
12. Longitude double, NULL
13. Building varchar(200) utf8_unicode_ci, NULL
14. City varchar(100) utf8_unicode_ci, NOT NULL
15. OpeningHour varchar(400) utf8_unicode_ci, NULL
16. TimeStamp timestamp, NOT NULL, default CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
17. CountViews int(11), NULL

The indexes are (name, type, column, cardinality):

PRIMARY, BTREE (unique), ID, 1965990
City, BTREE, City, 131066
Building, BTREE, Building, 21
OpeningHour, BTREE, OpeningHour(255), 21
Email, BTREE, Email(255), 21
InBuildingAddress, BTREE, InBuildingAddress(255), 21
Price, BTREE, Price, 21
Street, BTREE, Street(255), 982995
Title, BTREE, Title(255), 1965990
Website, BTREE, Website(255), 491497
Zip, BTREE, Zip(255), 178726
Rating Star, BTREE, Rating Star, 21
Rating Weight, BTREE, Rating Weight, 21
Latitude, BTREE, Latitude, 1965990
Longitude, BTREE, Longitude, 1965990

The query takes forever; I think there has to be something wrong there. Showing rows 0 - 29 (67,565 total, Query took 12.4767 sec)

    Read the article

  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost the performance of a simple RSS-feed aggregator. I will use only very basic .NET classes that almost every developer starts from when learning parallel programming. Of course, we will also measure how every optimization affects the performance of the feed aggregator.

Feed aggregator

Our feed aggregator works as follows: load the list of blogs, download each RSS feed, parse the feed XML, and add new posts to the database. The aggregator is run by a task scheduler every 15 minutes, for example. We will start our journey with a serial implementation of the feed aggregator. The second step is to use task parallelism to parallelize feed downloading and parsing. Our last step is to use data parallelism to parallelize the database operations. We will use the Stopwatch class to measure how much time it takes for the aggregator to download and insert all posts from all registered blogs (a sketch of such a timing harness appears at the end of this posting). After every run we empty the posts table in the database.

Serial aggregation

Before doing parallel stuff, let's take a look at the serial implementation of the feed aggregator. All tasks happen one after another.

internal class FeedClient
{
    private readonly INewsService _newsService;
    private const int FeedItemContentMaxLength = 255;

    public FeedClient()
    {
        ObjectFactory.Initialize(container =>
        {
            container.PullConfigurationFromAppConfig = true;
        });

        _newsService = ObjectFactory.GetInstance<INewsService>();
    }

    public void Execute()
    {
        var blogs = _newsService.ListPublishedBlogs();

        for (var index = 0; index < blogs.Count; index++)
        {
            ImportFeed(blogs[index]);
        }
    }

    private void ImportFeed(BlogDto blog)
    {
        if (blog == null)
            return;
        if (string.IsNullOrEmpty(blog.RssUrl))
            return;

        var uri = new Uri(blog.RssUrl);
        SyndicationContentFormat feedFormat;

        feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

        if (feedFormat == SyndicationContentFormat.Rss)
            ImportRssFeed(blog);
        if (feedFormat == SyndicationContentFormat.Atom)
            ImportAtomFeed(blog);
    }

    private void ImportRssFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = RssFeed.Create(uri);

        foreach (var item in feed.Channel.Items)
        {
            SaveRssFeedItem(item, blog.Id, blog.CreatedById);
        }
    }

    private void ImportAtomFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = AtomFeed.Create(uri);

        foreach (var item in feed.Entries)
        {
            SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
        }
    }
}

The serial implementation of the feed aggregator downloads and inserts all posts in 25.46 seconds.

Task parallelism

Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism from the MSDN page Task Parallelism (Task Parallel Library) and the Wikipedia page Task parallelism. Although finding parts of code that can safely run in parallel without synchronization issues is not an easy task, we are lucky this time. Feed import and parsing is a perfect candidate for parallel tasks. We can safely parallelize the feed import because the importing tasks don't share any resources and therefore need no synchronization.
After getting the list of blogs, we iterate through the collection and start a new TPL task for each blog feed aggregation.

internal class FeedClient
{
    private readonly INewsService _newsService;
    private const int FeedItemContentMaxLength = 255;

    public FeedClient()
    {
        ObjectFactory.Initialize(container =>
        {
            container.PullConfigurationFromAppConfig = true;
        });

        _newsService = ObjectFactory.GetInstance<INewsService>();
    }

    public void Execute()
    {
        var blogs = _newsService.ListPublishedBlogs();

        var tasks = new Task[blogs.Count];

        for (var index = 0; index < blogs.Count; index++)
        {
            tasks[index] = new Task(ImportFeed, blogs[index]);
            tasks[index].Start();
        }

        Task.WaitAll(tasks);
    }

    private void ImportFeed(object blogObject)
    {
        if (blogObject == null)
            return;
        var blog = (BlogDto)blogObject;
        if (string.IsNullOrEmpty(blog.RssUrl))
            return;

        var uri = new Uri(blog.RssUrl);
        SyndicationContentFormat feedFormat;

        feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

        if (feedFormat == SyndicationContentFormat.Rss)
            ImportRssFeed(blog);
        if (feedFormat == SyndicationContentFormat.Atom)
            ImportAtomFeed(blog);
    }

    private void ImportRssFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = RssFeed.Create(uri);

        foreach (var item in feed.Channel.Items)
        {
            SaveRssFeedItem(item, blog.Id, blog.CreatedById);
        }
    }

    private void ImportAtomFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = AtomFeed.Create(uri);

        foreach (var item in feed.Entries)
        {
            SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
        }
    }
}

You should notice the first signs of the power of TPL: we made only minor changes to our code to parallelize blog feed aggregation. On my machine this modification gives a noticeable performance boost – time is now 17.57 seconds.

Data parallelism

There is one more way to parallelize activities. The previous section introduced task- or operation-based parallelism; this section introduces data-based parallelism. According to the MSDN page Data Parallelism (Task Parallel Library), data parallelism refers to a scenario in which the same operation is performed concurrently on elements in a source collection or array. In our code we have independent collections we can process in parallel – the imported feed entries. As checking for a feed entry's existence and inserting it if it is missing from the database doesn't affect other entries, the imported feed entries collection is an ideal candidate for parallelization.
internal class FeedClient
{
    private readonly INewsService _newsService;
    private const int FeedItemContentMaxLength = 255;

    public FeedClient()
    {
        ObjectFactory.Initialize(container =>
        {
            container.PullConfigurationFromAppConfig = true;
        });

        _newsService = ObjectFactory.GetInstance<INewsService>();
    }

    public void Execute()
    {
        var blogs = _newsService.ListPublishedBlogs();

        var tasks = new Task[blogs.Count];

        for (var index = 0; index < blogs.Count; index++)
        {
            tasks[index] = new Task(ImportFeed, blogs[index]);
            tasks[index].Start();
        }

        Task.WaitAll(tasks);
    }

    private void ImportFeed(object blogObject)
    {
        if (blogObject == null)
            return;
        var blog = (BlogDto)blogObject;
        if (string.IsNullOrEmpty(blog.RssUrl))
            return;

        var uri = new Uri(blog.RssUrl);
        SyndicationContentFormat feedFormat;

        feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

        if (feedFormat == SyndicationContentFormat.Rss)
            ImportRssFeed(blog);
        if (feedFormat == SyndicationContentFormat.Atom)
            ImportAtomFeed(blog);
    }

    private void ImportRssFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = RssFeed.Create(uri);

        feed.Channel.Items.AsParallel().ForAll(a =>
        {
            SaveRssFeedItem(a, blog.Id, blog.CreatedById);
        });
    }

    private void ImportAtomFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = AtomFeed.Create(uri);

        feed.Entries.AsParallel().ForAll(a =>
        {
            SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);
        });
    }
}

We made a small change again, and as a result we parallelized the checking and saving of feed items. This change was data-centric, as we applied the same operation to all elements in a collection. On my machine I got better performance again: time is now 11.22 seconds.

Results

Let's summarize our measurement results (numbers are given in seconds): serial aggregation took 25.46, task parallelism 17.57, and task plus data parallelism 11.22. As we can see, with task parallelism feed aggregation takes about 25% less time than in the original case. When adding data parallelism to task parallelism, our aggregation takes about 2.3 times less time than in the original case.

More about TPL and PLINQ

Adding parallelism to your application can be a very challenging task. You have to carefully find the parts of your code where you can safely switch to parallel processing, and even then you have to measure the effects of parallel processing to find out whether the parallel code performs better. If you are not careful, the troubles you will face later are worse than the ones you have seen before (imagine an error that occurs on average only once per 10,000 code runs). Parallel programming is something that is hard to ignore: effective programs are able to use multiple processor cores. Using TPL you can also set the degree of parallelism, so your application doesn't use all computing cores and leaves one or more of them free for the host system and other processes, as the sketch below shows. And there are many more things in TPL that make it easier for you to start and go on with parallel programming. In the next major version all .NET languages will have built-in support for parallel programming, and there will also be new language constructs that support parallel programming.
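To make that last point concrete, here is a minimal sketch – my own illustration, not code from this posting – of capping the degree of parallelism with both PLINQ and Parallel.ForEach. The Math.Max guard is an assumption of mine to keep the value legal on single-core machines:

// Leave one core free for the host system and other processes.
int workers = Math.Max(1, Environment.ProcessorCount - 1);

// Inside ImportRssFeed: limit the number of concurrent PLINQ worker threads.
feed.Channel.Items
    .AsParallel()
    .WithDegreeOfParallelism(workers)
    .ForAll(a => SaveRssFeedItem(a, blog.Id, blog.CreatedById));

// Inside Execute: the same idea with Parallel.ForEach and ParallelOptions
// (both live in System.Threading.Tasks).
var options = new ParallelOptions { MaxDegreeOfParallelism = workers };
Parallel.ForEach(blogs, options, b => ImportFeed(b));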
Currently you can download Visual Studio Async to get some idea of what is coming.

Conclusion

Parallel programming is very challenging, but the good tools offered by Visual Studio and the .NET Framework make it much easier for us. In this posting we started with a feed aggregator that imports feed items in serial mode. With two steps we parallelized feed importing and entry inserting, gaining a 2.3x improvement in performance. Although this number is specific to my test environment, it clearly shows that parallel programming may raise the performance of your application significantly.
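For reference, the timings above can be reproduced with a harness along these lines. This is a minimal sketch assuming the FeedClient class shown earlier; the posting itself only mentions that the Stopwatch class was used:

using System;
using System.Diagnostics;

class AggregatorBenchmark
{
    static void Main()
    {
        var stopwatch = Stopwatch.StartNew(); // start timing before the import run
        new FeedClient().Execute();           // run one full aggregation pass
        stopwatch.Stop();

        Console.WriteLine("Aggregation took {0:0.00} seconds",
            stopwatch.Elapsed.TotalSeconds);
    }
}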

    Read the article

  • JQuery and the multiple date selector

    - by David Carter
Overview

I recently needed to build a web page that would allow a user to capture some information and, most importantly, select multiple dates. This functionality was core to the application and hence had to be easy and quick to do. This is a public-facing website, so it had to be intuitive and very responsive. On the face of it, it didn't seem too hard: I know enough jQuery to know what it is capable of, and I was pretty sure that there would be some plugins that would help speed things along the way. I'm using ASP.NET MVC for this project as I really like the control that it gives you over the generated html and javascript. After years of Web Forms development it makes me feel like a web developer again and puts a smile on my face; that can only be a good thing!

The Calendar

The first item that I needed on this page was a calendar, and I wanted the ability to:

- have the calendar be always visible
- select/deselect multiple dates at the same time
- bind to the select/deselect event so that I could update a separate listing of the selected dates
- allow the user to move to another month and still have the calendar remember any dates in the previous month

I was hoping that there was a jQuery plugin that would meet my requirements, and luckily there was! The jQuery datePicker does everything I want, and there is quite a bit of documentation on how to use it. It makes use of a javascript date library, date.js, which I had not come across before but which has a number of very useful date utilities that I have used elsewhere in the project. As you can see from the image there still needs to be some styling done! But there will be plenty of time for that later. The calendar clearly shows which dates the user has selected in red, and I also make use of an unordered list to show the selected dates so the user can always clearly see what has been selected, even if they move to another month on the calendar. The javascript code that is responsible for listening to events on the calendar and synchronising the list looks as follows:

<script type="text/javascript">
    $(function () {
        $('.datepicker').datePicker({ inline: true, selectMultiple: true })
        .bind(
            'dateSelected',
            function (e, selectedDate, $td, state) {
                var dateInMillisecs = selectedDate.valueOf();
                if (state) { // adding a date
                    var newDate = new Date(selectedDate);
                    // insert the new item into the correct place in the list
                    var listitems = $('#dateList').children('li').get();
                    var liToAdd = "<li id='" + dateInMillisecs + "' >" + newDate.toString('ddd dd MMM yyyy') + "</li>";
                    var targetIndex = -1;
                    for (var i = 0; i < listitems.length; i++) {
                        if (dateInMillisecs <= listitems[i].id) {
                            targetIndex = i;
                            break;
                        }
                    }
                    if (targetIndex < 0) {
                        $('#dateList').append(liToAdd);
                    }
                    else {
                        $($('#dateList').children("li")[targetIndex]).before(liToAdd);
                    }
                }
                else { // removing a date
                    $('ul #' + dateInMillisecs).remove();
                }
            }
        )
    });
</script>

When a date is selected on the calendar, a function is called with a number of parameters passed to it.
The ones I am particularly interested in are selectedDate and state. State tells me whether the user has selected or deselected the date passed in the selectedDate parameter. The <ul> that I am using to show the dates has an id of dateList, and this is what I will be adding and removing <li> items from. To make things a little more logical for the user, I decided that the dates should be sorted in chronological order; this means that each time a new date is selected it needs to be placed in the correct position in the list. One way to do this would be just to append a new <li> to the list and then sort the whole list. However, the approach I took was to get an array of all the items in the list, var listitems = $('#dateList').children('li').get(); and then check the value of each item in the array against my new date, and as soon as I found the case where the new date was less than the current item, remember that position in the list, as this is where I would insert it later. To make this work easily, I decided to store a numeric representation of each date in the id attribute of each <li> element. Fortunately javascript natively stores dates as the number of milliseconds since 1 Jan 1970: var dateInMillisecs = selectedDate.valueOf(); Please note that this is the value of the date in UTC! I always like to store dates in UTC, as I learnt a long time ago that it saves a lot of refactoring at a later date... When I convert the dates back to their original form on the server I will need the UTC offset that was used when calculating the dates; this, and how to actually serialise the dates and get them posted back, will be the subject of another post.
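The server-side conversion is left for that future post, but a minimal C# sketch of turning the posted millisecond values back into UTC dates might look like the following; the class and method names here are hypothetical, not from the post:

using System;

public static class JsDateConverter
{
    // JavaScript's Date.valueOf() returns milliseconds since 1 Jan 1970 UTC,
    // so adding the posted value to the Unix epoch recovers the UTC date.
    private static readonly DateTime UnixEpoch =
        new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

    public static DateTime FromMilliseconds(long millisecondsSinceEpoch)
    {
        return UnixEpoch.AddMilliseconds(millisecondsSinceEpoch);
    }
}

A value taken from one of the <li> id attributes can then be converted back directly; applying the user's UTC offset afterwards is what the follow-up post would cover.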

    Read the article

  • Metro: Dynamically Switching Templates with a WinJS ListView

    - by Stephen.Walther
Imagine that you want to display a list of products using the WinJS ListView control. Imagine, furthermore, that you want to use different templates to display different products. In particular, when a product is on sale, you want to display the product using a special "On Sale" template. In this blog entry, I explain how you can switch templates dynamically when displaying items with a ListView control. In other words, you learn how to use more than one template when displaying items with a ListView control.

Creating the Data Source

Let's start by creating the data source for the ListView. Nothing special here – our data source is a list of products. Two of the products, Oranges and Apples, are on sale.

(function () {
    "use strict";

    var products = new WinJS.Binding.List([
        { name: "Milk", price: 2.44 },
        { name: "Oranges", price: 1.99, onSale: true },
        { name: "Wine", price: 8.55 },
        { name: "Apples", price: 2.44, onSale: true },
        { name: "Steak", price: 1.99 },
        { name: "Eggs", price: 2.44 },
        { name: "Mushrooms", price: 1.99 },
        { name: "Yogurt", price: 2.44 },
        { name: "Soup", price: 1.99 },
        { name: "Cereal", price: 2.44 },
        { name: "Pepsi", price: 1.99 }
    ]);

    WinJS.Namespace.define("ListViewDemos", {
        products: products
    });
})();

The file above is saved with the name products.js and referenced by the default.html page described below.

Declaring the Templates and ListView Control

Next, we need to declare the ListView control and the two Template controls which we will use to display template items. The markup below appears in the default.html file:

<!-- Templates -->
<div id="productItemTemplate" data-win-control="WinJS.Binding.Template">
    <div class="product">
        <span data-win-bind="innerText:name"></span>
        <span data-win-bind="innerText:price"></span>
    </div>
</div>
<div id="productOnSaleTemplate" data-win-control="WinJS.Binding.Template">
    <div class="product onSale">
        <span data-win-bind="innerText:name"></span>
        <span data-win-bind="innerText:price"></span>
        (On Sale!)
    </div>
</div>

<!-- ListView -->
<div id="productsListView"
    data-win-control="WinJS.UI.ListView"
    data-win-options="{
        itemDataSource: ListViewDemos.products.dataSource,
        layout: { type: WinJS.UI.ListLayout }
    }">
</div>

In the markup above, two Template controls are declared. The first template is used when rendering a normal product and the second template is used when rendering a product which is on sale. The second template, unlike the first template, includes the text "(On Sale!)". The ListView control is bound to the data source which we created in the previous section: the ListView itemDataSource property is set to the value ListViewDemos.products.dataSource. Notice that we do not set the ListView itemTemplate property; we set this property in the default.js file.

Switching Between Templates

All of the magic happens in the default.js file. The default.js file contains the JavaScript code used to switch templates dynamically.
Here’s the entire contents of the default.js file:

(function () {
    "use strict";

    var app = WinJS.Application;

    app.onactivated = function (eventObject) {
        if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
            WinJS.UI.processAll().then(function () {
                var productsListView = document.getElementById("productsListView");
                productsListView.winControl.itemTemplate = itemTemplateFunction;
            });
        }
    };

    function itemTemplateFunction(itemPromise) {
        return itemPromise.then(function (item) {
            // Select either normal product template or on sale template
            var itemTemplate = document.getElementById("productItemTemplate");
            if (item.data.onSale) {
                itemTemplate = document.getElementById("productOnSaleTemplate");
            }

            // Render selected template to DIV container
            var container = document.createElement("div");
            itemTemplate.winControl.render(item.data, container);
            return container;
        });
    }

    app.start();
})();

In the code above, a function is assigned to the ListView itemTemplate property with the following line of code: productsListView.winControl.itemTemplate = itemTemplateFunction; The itemTemplateFunction returns a DOM element which is used for the template item. Depending on the value of the product onSale property, the DOM element is generated from either the productItemTemplate or the productOnSaleTemplate template.

Using Binding Converters instead of Multiple Templates

In the previous sections, I explained how you can use different templates to render normal products and on sale products. There is an alternative approach to displaying different markup for normal products and on sale products: instead of creating two templates, you can create a single template which contains separate DIV elements for a normal product and an on sale product. The following default.html file contains a single item template and a ListView control bound to the template.

<!-- Template -->
<div id="productItemTemplate" data-win-control="WinJS.Binding.Template">
    <div class="product" data-win-bind="style.display: onSale ListViewDemos.displayNormalProduct">
        <span data-win-bind="innerText:name"></span>
        <span data-win-bind="innerText:price"></span>
    </div>
    <div class="product onSale" data-win-bind="style.display: onSale ListViewDemos.displayOnSaleProduct">
        <span data-win-bind="innerText:name"></span>
        <span data-win-bind="innerText:price"></span>
        (On Sale!)
    </div>
</div>

<!-- ListView -->
<div id="productsListView"
    data-win-control="WinJS.UI.ListView"
    data-win-options="{
        itemDataSource: ListViewDemos.products.dataSource,
        itemTemplate: select('#productItemTemplate'),
        layout: { type: WinJS.UI.ListLayout }
    }">
</div>

The first DIV element is used to render a normal product and the second DIV element is used to render an "on sale" product. Notice that both DIV elements include a data-win-bind attribute. These data-win-bind attributes are used to show the "normal" markup when a product is not on sale and show the "on sale" markup when a product is on sale. They set the Cascading Style Sheet display attribute to either "none" or "block". The data-win-bind attributes take advantage of binding converters.
The binding converters are defined in the default.js file:

(function () {
    "use strict";

    var app = WinJS.Application;

    app.onactivated = function (eventObject) {
        if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
            WinJS.UI.processAll();
        }
    };

    WinJS.Namespace.define("ListViewDemos", {
        displayNormalProduct: WinJS.Binding.converter(function (onSale) {
            return onSale ? "none" : "block";
        }),
        displayOnSaleProduct: WinJS.Binding.converter(function (onSale) {
            return onSale ? "block" : "none";
        })
    });

    app.start();
})();

The ListViewDemos.displayNormalProduct binding converter converts the value true or false to the value "none" or "block". The ListViewDemos.displayOnSaleProduct binding converter does the opposite; it converts the value true or false to the value "block" or "none". (Sadly, you cannot simply place a NOT operator before the onSale property in the binding expression – you need to create both converters.) The end result is that you can display different markup depending on the value of the product onSale property: either the contents of the first or the second DIV element are displayed.

Summary

In this blog entry, I've explored two approaches to displaying different markup in a ListView depending on the value of a data item property. The bulk of this blog entry was devoted to explaining how you can assign a function to the ListView itemTemplate property which returns different templates. We created both a productItemTemplate and a productOnSaleTemplate and displayed both templates with the same ListView control. We also discussed how you can create a single template and display different markup by using binding converters. The binding converters are used to set a DIV element's display property to either "none" or "block". We created a binding converter which displays normal products and a binding converter which displays "on sale" products.

    Read the article

  • Making your WCF Web Apis to speak in multiple languages

    - by cibrax
One of the key aspects of how the web works today is content negotiation. The idea of content negotiation is based on the fact that a single resource can have multiple representations, so user agents (or clients) and servers can work together to choose one of them. The HTTP specification defines several "Accept" headers that a client can use to negotiate content with a server, and among all those, there is one for restricting the set of natural languages that are preferred as a response to a request: "Accept-Language". For example, a client can specify "es" in this header to indicate that it prefers to receive the content in Spanish, or "en" for English. However, there are certain scenarios where the "Accept-Language" header is just not enough, and you might want to have a way to pass the "accepted" language as part of the resource url as an extension. For example, http://localhost/ProductCatalog/Products/1.es returns all the descriptions for the product with id "1" in Spanish. This is useful for scenarios in which you want to embed the link somewhere, such as a document, an email or a page.

Supporting both scenarios, the header and the url extension, is really simple in the new WCF programming model. You only need to provide a processor implementation for each of them. Let's say I have a resource implementation as part of a product catalog I want to expose with the WCF web apis.

[ServiceContract]
[Export]
public class ProductResource
{
    IProductRepository repository;

    [ImportingConstructor]
    public ProductResource(IProductRepository repository)
    {
        this.repository = repository;
    }

    [WebGet(UriTemplate = "{id}")]
    public Product Get(string id, HttpResponseMessage response)
    {
        var product = repository.GetById(int.Parse(id));
        if (product == null)
        {
            response.StatusCode = HttpStatusCode.NotFound;
            response.Content = new StringContent(Messages.OrderNotFound);
        }

        return product;
    }
}

The Get method implementation in this resource assumes the desired culture will be attached to the current thread (Thread.CurrentThread.Culture). Another option is to pass the desired culture as an additional argument in the method, so my processor implementation will handle both options. This method also uses an auto-generated class for handling string resources, Messages, which is available in the different cultures that the service implementation supports. For example, Messages.resx contains "OrderNotFound": "Order Not Found", and Messages.es.resx contains "OrderNotFound": "No se encontro orden". The processor implementation below tackles the first scenario, in which the desired language is passed as part of the "Accept-Language" header.
public class CultureProcessor : Processor<HttpRequestMessage, CultureInfo>
{
    string defaultLanguage = null;

    public CultureProcessor(string defaultLanguage = "en")
    {
        this.defaultLanguage = defaultLanguage;
        this.InArguments[0].Name = HttpPipelineFormatter.ArgumentHttpRequestMessage;
        this.OutArguments[0].Name = "culture";
    }

    public override ProcessorResult<CultureInfo> OnExecute(HttpRequestMessage request)
    {
        CultureInfo culture = null;
        if (request.Headers.AcceptLanguage.Count > 0)
        {
            var language = request.Headers.AcceptLanguage.First().Value;
            culture = new CultureInfo(language);
        }
        else
        {
            culture = new CultureInfo(defaultLanguage);
        }

        Thread.CurrentThread.CurrentCulture = culture;
        Messages.Culture = culture;

        return new ProcessorResult<CultureInfo> { Output = culture };
    }
}

As you can see, the processor initializes a new CultureInfo instance with the value provided in the "Accept-Language" header and sets that instance on the current thread and on the auto-generated resource class with all the messages. In addition, the CultureInfo instance is returned as an output argument called "culture", making it possible to receive that argument in any method implementation. The following code shows the implementation of the processor for handling languages as url extensions.

public class CultureExtensionProcessor : Processor<HttpRequestMessage, Uri>
{
    public CultureExtensionProcessor()
    {
        this.OutArguments[0].Name = HttpPipelineFormatter.ArgumentUri;
    }

    public override ProcessorResult<Uri> OnExecute(HttpRequestMessage httpRequestMessage)
    {
        var requestUri = httpRequestMessage.RequestUri.OriginalString;

        var extensionPosition = requestUri.LastIndexOf(".");

        if (extensionPosition > -1)
        {
            var extension = requestUri.Substring(extensionPosition + 1);

            var query = httpRequestMessage.RequestUri.Query;

            requestUri = string.Format("{0}?{1}", requestUri.Substring(0, extensionPosition), query);

            var uri = new Uri(requestUri);

            httpRequestMessage.Headers.AcceptLanguage.Clear();
            httpRequestMessage.Headers.AcceptLanguage.Add(new StringWithQualityHeaderValue(extension));

            var result = new ProcessorResult<Uri>();
            result.Output = uri;

            return result;
        }

        return new ProcessorResult<Uri>();
    }
}

The last step is to inject both processors as part of the service configuration, as shown below:

public void RegisterRequestProcessorsForOperation(HttpOperationDescription operation, IList<Processor> processors, MediaTypeProcessorMode mode)
{
    processors.Insert(0, new CultureExtensionProcessor());
    processors.Add(new CultureProcessor());
}

Once you have configured the two processors in the pipeline, your service will start speaking different languages :). Note: url extensions don't seem to be working in the current bits when you use them in a base address. As far as I could see, ASP.NET intercepts the request first and tries to route it to a registered ASP.NET Http Handler with that extension. For example, "http://localhost/ProductCatalog/products.es" does not work, but "http://localhost/ProductCatalog/products/1.es" does.
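To exercise both negotiation styles from the consumer side, a minimal sketch using HttpClient might look like the following; this is my own illustration, with endpoint URLs borrowed from the examples above:

using System;
using System.Net.Http;
using System.Net.Http.Headers;

class Client
{
    static void Main()
    {
        var client = new HttpClient();

        // Option 1: negotiate the language through the Accept-Language header
        client.DefaultRequestHeaders.AcceptLanguage.Add(
            new StringWithQualityHeaderValue("es"));
        var byHeader = client.GetAsync(
            "http://localhost/ProductCatalog/Products/1").Result;

        // Option 2: pass the language as an url extension instead
        var byExtension = client.GetAsync(
            "http://localhost/ProductCatalog/Products/1.es").Result;

        Console.WriteLine(byHeader.Content.ReadAsStringAsync().Result);
        Console.WriteLine(byExtension.Content.ReadAsStringAsync().Result);
    }
}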

    Read the article

  • Calling Web Services using ADF 11g

    - by James Taylor
One of the benefits of ADF is the fact that it can use multiple data sources. With SOA playing a big part in today's IT landscape, applications need to be able to utilise this SOA framework to leverage functionality from multiple systems to provide a composite application. ADF provides functionality to expose web services via the ADF Business Component, so if you know how to use Business Components for a database, configuring ADF for web services is much the same. In this example I use an OSB web service that gets a customer.

1. Create a new Fusion Web Application (ADF) and click OK.
2. Provide an Application Name, GetCustomerADF, and click Next.
3. From the Project Technologies move Web Services into the Selected box. Accept the defaults and click Finish.
4. Right-click the Model project and select New.
5. In the Gallery select Web Services –> Web Service Data Control, then click OK.
6. Provide a name, GetCustomerDC, and give the URL endpoint for the Web Service, then click Next.
7. Select the web service operation you want to use for the ADF application. In my example my web service only has one operation. Click Finish.
8. Save your work, File –> Save.
9. The data control has now been created; the next steps create the UI components.
10. In your application created in step 1, find the ViewController project, right-click and choose New.
11. In the Gallery select JSF –> JSF Page.
12. Provide a name for the jsp page, GetCustomer. Also ensure that the 'Create as XML Document (*.jsp)' check box is checked.
13. I have selected the page template Oracle Three Column Layout, but you can create a layout of your choice.
14. I only want 2 columns, so I delete the last column by right-clicking the right-hand panel and selecting Delete.
15. Drag the fields you require from the web service data control to the left panel. In my example I only require the Customer ID. When you drag to the panel, select Texts –> ADF Input Text w/Label.
16. In this example I want to search on a customer based on the ID, so once I select the ID I want to execute the request. To do this I need a button. Drag the operation object under the fields created in step 15 and select Methods –> ADF Button.
17. You now need to provide the mappings. Choose 'Show EL Expression Builder'.
18. Navigate to the bindings, ADFBindings –> bindings –> parametersIterator –> currentRow.
19. Click OK.
20. Drag and drop the return information. I just want the results shown in a form, with all fields displayed.
21. Now it is time to test. Right-click the jspx page created in steps 11 – 21 and select Run. A browser should start; enter valid values and test.

    Read the article

  • New OTL Top Error Documents

    - by Oracle_EBS
We would like to take this opportunity to announce new documents aimed at easing your experience when troubleshooting Oracle Time and Labor issues. To this end we would like to highlight related and updated documentation regarding the most-reported OTL issues. Similar to the iRecruitment top error document updates announced in our EBS HCM Newsletter for December 2011, we proactively analyzed the issues reported on Oracle Time and Labor, identifying and consolidating knowledge content for the top 3 - 4 error messages in My Oracle Support documents. These new documents are as follows:

Document | Content Type | Note ID
Oracle Time and Labor (OTL) Timekeeper issues | Functional | 1380612.1
Oracle Time and Labor (OTL) Approval issues | Functional | 1383990.1
Oracle Time and Labor (OTL) Retrieval issues | Functional | 1385426.1

These documents are now available via our Oracle Time and Labor Information Center, Doc ID 1293475.1. As always, we very much welcome your feedback should you use these documents. Please add your views by using the "Rate This Document" feature should you wish to share your experience and any further improvement suggestions.

    Read the article

< Previous Page | 457 458 459 460 461 462 463 464 465 466 467 468  | Next Page >