Search Results


  • Image Preview in ASP.NET MVC

    - by imran_ku07
      Introduction: Previewing an image is a great way to improve the UI of your site. It is also best to check the file type and size, and to show a preview, before the whole form is submitted. There are ways to do this with plain JavaScript, but they do not work in all browsers (Firefox 3, for example). In this article I will show how to do it in an ASP.NET MVC application, including the case where the upload control sits inside a nested form.

      Description: Create a new ASP.NET MVC project and then add a file upload control and an image control to your view.

        <form id="form1" method="post" action="NerdDinner/ImagePreview/AjaxSubmit">
            <table>
                <tr>
                    <td>
                        <input type="file" name="imageLoad1" id="imageLoad1" onchange="ChangeImage(this,'#imgThumbnail')" />
                    </td>
                </tr>
                <tr>
                    <td align="center">
                        <img src="images/TempImage.gif" id="imgThumbnail" height="200px" width="200px" />
                    </td>
                </tr>
            </table>
        </form>

      Note that NerdDinner refers to the virtual directory name, ImagePreview is the controller, and ImageLoad is the action that serves the preview image; you will see both shortly.

      I will use the popular jQuery Form plug-in, which turns a form into an AJAX form with very little code. Download jQuery and the plug-in from the jQuery site and add both script references to your page.

        <script src="NerdDinner/Scripts/jquery-1.3.2.js" type="text/javascript"></script>
        <script src="NerdDinner/Scripts/jquery.form.js" type="text/javascript"></script>

      Then add the JavaScript function:

        <script type="text/javascript">
            function ChangeImage(fileId, imageId) {
                // Submit form1 asynchronously via the jQuery Form plug-in.
                $("#form1").ajaxSubmit({
                    success: function (responseText) {
                        // Refresh the preview; the timestamp query string stops the
                        // browser from serving a cached image.
                        var d = new Date();
                        $(imageId)[0].src = "NerdDinner/ImagePreview/ImageLoad?a=" + d.getTime();
                    }
                });
            }
        </script>

      This function submits the form named form1 asynchronously to the ImagePreviewController's AjaxSubmit method and, once the response arrives, sets the image's src property to the ImageLoad action method. The query string value prevents the browser from serving a cached image.

      Now create a new controller named ImagePreviewController.

        public class ImagePreviewController : Controller
        {
            [AcceptVerbs(HttpVerbs.Post)]
            public ActionResult AjaxSubmit(int? id)
            {
                // Cache the uploaded file in Session so ImageLoad can stream it back.
                Session["ContentLength"] = Request.Files[0].ContentLength;
                Session["ContentType"] = Request.Files[0].ContentType;
                byte[] b = new byte[Request.Files[0].ContentLength];
                Request.Files[0].InputStream.Read(b, 0, Request.Files[0].ContentLength);
                Session["ContentStream"] = b;
                return Content(Request.Files[0].ContentType + ";" + Request.Files[0].ContentLength);
            }

            public ActionResult ImageLoad(int? id)
            {
                // Stream the cached image back to the browser and clear the Session entries.
                byte[] b = (byte[])Session["ContentStream"];
                int length = (int)Session["ContentLength"];
                string type = (string)Session["ContentType"];
                Response.Buffer = true;
                Response.Charset = "";
                Response.Cache.SetCacheability(HttpCacheability.NoCache);
                Response.ContentType = type;
                Response.BinaryWrite(b);
                Response.Flush();
                Session["ContentLength"] = null;
                Session["ContentType"] = null;
                Session["ContentStream"] = null;
                Response.End();
                return Content("");
            }
        }

      The AjaxSubmit action method saves the image in Session and returns the content type and content length in the response. The ImageLoad action method writes the image contents to the response and then clears the Session entries.
Just run your application and see the effect.

Checking the Size and Content Type of the File: You may have noticed that the AjaxSubmit action method returns both the content type and the content length. You can check both properties before submitting the complete form (see the validation sketch at the end of this article).

    $(myform).ajaxSubmit({
        success: function (responseText) {
            var contentType = responseText.substring(0, responseText.indexOf(';'));
            var contentLength = responseText.substring(responseText.indexOf(';') + 1);
            // Here you can do your validation
            var d = new Date();
            $(imageId)[0].src = "http://weblogs.asp.net/MoneypingAPP/ImagePreview/ImageLoad?a=" + d.getTime();
        }
    });

Handling the Nested Form Case: The code above works when the page contains only one form, but that is not always the case. You may have a form that wraps all of your controls, and you do not want to submit the whole form just to get a preview. In that case, create a form element dynamically with JavaScript, append a clone of the file upload control to it, and submit that form asynchronously.

    function ChangeImage(fileId, imageId) {
        // Build a throwaway form that contains only the file input.
        var myform = document.createElement("form");
        myform.action = "NerdDinner/ImagePreview/AjaxSubmit";
        myform.enctype = "multipart/form-data";
        myform.method = "post";
        var imageLoad = document.getElementById(fileId).cloneNode(true);
        myform.appendChild(imageLoad);
        document.body.appendChild(myform);
        $(myform).ajaxSubmit({
            success: function (responseText) {
                var contentType = responseText.substring(0, responseText.indexOf(';'));
                var contentLength = responseText.substring(responseText.indexOf(';') + 1);
                var d = new Date();
                $(imageId)[0].src = "http://weblogs.asp.net/MoneypingAPP/ImagePreview/ImageLoad?a=" + d.getTime();
                document.body.removeChild(myform);
            }
        });
    }

The dynamic form must be appended to the document body before the request is sent, and removed again after the response is received.
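To make the "here you can do your validation" comment concrete, below is one possible way the success callback could check the values returned by AjaxSubmit before refreshing the preview. The 1 MB limit, the accepted image types and the alert messages are assumptions of mine, not part of the original article; adjust them to your needs.

    $("#form1").ajaxSubmit({
        success: function (responseText) {
            var contentType = responseText.substring(0, responseText.indexOf(';'));
            var contentLength = parseInt(responseText.substring(responseText.indexOf(';') + 1), 10);

            // Assumed rules: only JPEG/PNG/GIF and a maximum size of 1 MB.
            if (!/^image\/(jpeg|pjpeg|png|gif)$/.test(contentType)) {
                alert("Please select a JPEG, PNG or GIF image.");
                return;
            }
            if (contentLength > 1024 * 1024) {
                alert("The selected image must be smaller than 1 MB.");
                return;
            }

            // Validation passed: refresh the preview, bypassing the browser cache.
            var d = new Date();
            $("#imgThumbnail")[0].src = "NerdDinner/ImagePreview/ImageLoad?a=" + d.getTime();
        }
    });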

    Read the article

  • Extended JMS Support

    - by ACShorten
    In a previous post I discussed the real-time JMS integration we added in FW4.1 and also made available as patches for FW2.2. There are some additional aspects of this integration I did not mention which may be of interest:

    JMS Topic Support - In that post I concentrated on JMS Queue support, but the MDB and the outgoing real-time JMS also support JMS Topics. JMS Queues are typically used for point-to-point, decoupled integration, while JMS Topics are used for hub-style integration based on Publish and Subscribe.

    JMS Selector Support - By default the MDB will process every message on a JMS resource (Queue or Topic). If you want to filter messages selectively, you can use JMS Selectors to specify the conditions under which the MDB processes a message. JMS Selectors use an SQL-like syntax to express filters on elements of the JMS Header and on JMS Message Properties (for example, JMSPriority > 4). Note: JMS Selectors do not support filters on body elements.

    JMS Header Support - It is possible to place custom information in the JMS Header and JMS Message Properties of outgoing messages (so that other applications can use JMS Selectors as well). This is only available after installing Patches 11888040 (FW4.1) and 11850795 (FW2.2).

    These facilities, coupled with the JMS facilities described in the previous posts, give the product JMS integration capabilities that can be used with configuration rather than coding. Of course, the JMS facility I have described can also be used in conjunction with SOA Suite to provide greater levels of traceability and management.

    Read the article

  • Host AngularJS (Html5Mode) in ASP.NET vNext

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/06/10/host-angularjs-html5mode-in-asp.net-vnext.aspxMicrosoft had announced ASP.NET vNext in BUILD and TechED recently and as a developer, I found that we can add features into one ASP.NET vNext application such as MVC, WebAPI, SignalR, etc.. Also it's cross platform which means I can host ASP.NET on Windows, Linux and OS X.   If you are following my blog you should knew that I'm currently working on a project which uses ASP.NET WebAPI, SignalR and AngularJS. Currently the AngularJS part is hosted by Express in Node.js while WebAPI and SignalR are hosted in ASP.NET. I was looking for a solution to host all of them in one platform so that my SignalR can utilize WebSocket. Currently AngularJS and SignalR are hosted in the same domain but different port so it has to use ServerSendEvent. It can be upgraded to WebSocket if I host both of them in the same port.   Host AngularJS in ASP.NET vNext Static File Middleware ASP.NET vNext utilizes middleware pattern to register feature it uses, which is very similar as Express in Node.js. Since AngularJS is a pure client side framework in theory what I need to do is to use ASP.NET vNext as a static file server. This is very easy as there's a build-in middleware shipped alone with ASP.NET vNext. Assuming I have "index.html" as below. 1: <html data-ng-app="demo"> 2: <head> 3: <script type="text/javascript" src="angular.js" /> 4: <script type="text/javascript" src="angular-ui-router.js" /> 5: <script type="text/javascript" src="app.js" /> 6: </head> 7: <body> 8: <h1>ASP.NET vNext with AngularJS</h1> 9: <div> 10: <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> | 11: <a href="javascript:void(0)" data-ui-sref="view2">View 2</a> 12: </div> 13: <div data-ui-view></div> 14: </body> 15: </html> And the AngularJS JavaScript file as below. Notices that I have two views which only contains one line literal indicates the view name. 1: 'use strict'; 2:  3: var app = angular.module('demo', ['ui.router']); 4:  5: app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) { 6: $stateProvider.state('view1', { 7: url: '/view1', 8: templateUrl: 'view1.html', 9: controller: 'View1Ctrl' }); 10:  11: $stateProvider.state('view2', { 12: url: '/view2', 13: templateUrl: 'view2.html', 14: controller: 'View2Ctrl' }); 15: }]); 16:  17: app.controller('View1Ctrl', function ($scope) { 18: }); 19:  20: app.controller('View2Ctrl', function ($scope) { 21: }); All AngularJS files are located in "app" folder and my ASP.NET vNext files are besides it. The "project.json" contains all dependencies I need to host static file server. 1: { 2: "dependencies": { 3: "Helios" : "0.1-alpha-*", 4: "Microsoft.AspNet.FileSystems": "0.1-alpha-*", 5: "Microsoft.AspNet.Http": "0.1-alpha-*", 6: "Microsoft.AspNet.StaticFiles": "0.1-alpha-*", 7: "Microsoft.AspNet.Hosting": "0.1-alpha-*", 8: "Microsoft.AspNet.Server.WebListener": "0.1-alpha-*" 9: }, 10: "commands": { 11: "web": "Microsoft.AspNet.Hosting server=Microsoft.AspNet.Server.WebListener server.urls=http://localhost:22222" 12: }, 13: "configurations" : { 14: "net45" : { 15: }, 16: "k10" : { 17: "System.Diagnostics.Contracts": "4.0.0.0", 18: "System.Security.Claims" : "0.1-alpha-*" 19: } 20: } 21: } Below is "Startup.cs" which is the entry file of my ASP.NET vNext. What I need to do is to let my application use FileServerMiddleware. 
1: using System; 2: using Microsoft.AspNet.Builder; 3: using Microsoft.AspNet.FileSystems; 4: using Microsoft.AspNet.StaticFiles; 5:  6: namespace Shaun.AspNet.Plugins.AngularServer.Demo 7: { 8: public class Startup 9: { 10: public void Configure(IBuilder app) 11: { 12: app.UseFileServer(new FileServerOptions() { 13: EnableDirectoryBrowsing = true, 14: FileSystem = new PhysicalFileSystem(System.IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "app")) 15: }); 16: } 17: } 18: } Next, I need to create "NuGet.Config" file in the PARENT folder so that when I run "kpm restore" command later it can find ASP.NET vNext NuGet package successfully. 1: <?xml version="1.0" encoding="utf-8"?> 2: <configuration> 3: <packageSources> 4: <add key="AspNetVNext" value="https://www.myget.org/F/aspnetvnext/api/v2" /> 5: <add key="NuGet.org" value="https://nuget.org/api/v2/" /> 6: </packageSources> 7: <packageSourceCredentials> 8: <AspNetVNext> 9: <add key="Username" value="aspnetreadonly" /> 10: <add key="ClearTextPassword" value="4d8a2d9c-7b80-4162-9978-47e918c9658c" /> 11: </AspNetVNext> 12: </packageSourceCredentials> 13: </configuration> Now I need to run "kpm restore" to resolve all dependencies of my application. Finally, use "k web" to start the application which will be a static file server on "app" sub folder in the local 22222 port.   Support AngularJS Html5Mode AngularJS works well in previous demo. But you will note that there is a "#" in the browser address. This is because by default AngularJS adds "#" next to its entry page so ensure all request will be handled by this entry page. For example, in this case my entry page is "index.html", so when I clicked "View 1" in the page the address will be changed to "/#/view1" which means it still tell the web server I'm still looking for "index.html". This works, but makes the address looks ugly. Hence AngularJS introduces a feature called Html5Mode, which will get rid off the annoying "#" from the address bar. Below is the "app.js" with Html5Mode enabled, just one line of code. 1: 'use strict'; 2:  3: var app = angular.module('demo', ['ui.router']); 4:  5: app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) { 6: $stateProvider.state('view1', { 7: url: '/view1', 8: templateUrl: 'view1.html', 9: controller: 'View1Ctrl' }); 10:  11: $stateProvider.state('view2', { 12: url: '/view2', 13: templateUrl: 'view2.html', 14: controller: 'View2Ctrl' }); 15:  16: // enable html5mode 17: $locationProvider.html5Mode(true); 18: }]); 19:  20: app.controller('View1Ctrl', function ($scope) { 21: }); 22:  23: app.controller('View2Ctrl', function ($scope) { 24: }); Then let's went to the root path of our website and click "View 1" you will see there's no "#" in the address. But the problem is, if we hit F5 the browser will be turn to blank. This is because in this mode the browser told the web server I want static file named "view1" but there's no file on the server. So underlying our web server, which is built by ASP.NET vNext, responded 404. To fix this problem we need to create our own ASP.NET vNext middleware. What it needs to do is firstly try to respond the static file request with the default StaticFileMiddleware. If the response status code was 404 then change the request path value to the entry page and try again. 
1: public class AngularServerMiddleware 2: { 3: private readonly AngularServerOptions _options; 4: private readonly RequestDelegate _next; 5: private readonly StaticFileMiddleware _innerMiddleware; 6:  7: public AngularServerMiddleware(RequestDelegate next, AngularServerOptions options) 8: { 9: _next = next; 10: _options = options; 11:  12: _innerMiddleware = new StaticFileMiddleware(next, options.FileServerOptions.StaticFileOptions); 13: } 14:  15: public async Task Invoke(HttpContext context) 16: { 17: // try to resolve the request with default static file middleware 18: await _innerMiddleware.Invoke(context); 19: Console.WriteLine(context.Request.Path + ": " + context.Response.StatusCode); 20: // route to root path if the status code is 404 21: // and need support angular html5mode 22: if (context.Response.StatusCode == 404 && _options.Html5Mode) 23: { 24: context.Request.Path = _options.EntryPath; 25: await _innerMiddleware.Invoke(context); 26: Console.WriteLine(">> " + context.Request.Path + ": " + context.Response.StatusCode); 27: } 28: } 29: } We need an option class where user can specify the host root path and the entry page path. 1: public class AngularServerOptions 2: { 3: public FileServerOptions FileServerOptions { get; set; } 4:  5: public PathString EntryPath { get; set; } 6:  7: public bool Html5Mode 8: { 9: get 10: { 11: return EntryPath.HasValue; 12: } 13: } 14:  15: public AngularServerOptions() 16: { 17: FileServerOptions = new FileServerOptions(); 18: EntryPath = PathString.Empty; 19: } 20: } We also need an extension method so that user can append this feature in "Startup.cs" easily. 1: public static class AngularServerExtension 2: { 3: public static IBuilder UseAngularServer(this IBuilder builder, string rootPath, string entryPath) 4: { 5: var options = new AngularServerOptions() 6: { 7: FileServerOptions = new FileServerOptions() 8: { 9: EnableDirectoryBrowsing = false, 10: FileSystem = new PhysicalFileSystem(System.IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, rootPath)) 11: }, 12: EntryPath = new PathString(entryPath) 13: }; 14:  15: builder.UseDefaultFiles(options.FileServerOptions.DefaultFilesOptions); 16:  17: return builder.Use(next => new AngularServerMiddleware(next, options).Invoke); 18: } 19: } Now with these classes ready we will change our "Startup.cs", use this middleware replace the default one, tell the server try to load "index.html" file if it cannot find resource. The code below is just for demo purpose. I just tried to load "index.html" in all cases once the StaticFileMiddleware returned 404. In fact we need to validation to make sure this is an AngularJS route request instead of a normal static file request. 1: using System; 2: using Microsoft.AspNet.Builder; 3: using Microsoft.AspNet.FileSystems; 4: using Microsoft.AspNet.StaticFiles; 5: using Shaun.AspNet.Plugins.AngularServer; 6:  7: namespace Shaun.AspNet.Plugins.AngularServer.Demo 8: { 9: public class Startup 10: { 11: public void Configure(IBuilder app) 12: { 13: app.UseAngularServer("app", "/index.html"); 14: } 15: } 16: } Now let's run "k web" again and try to refresh our browser and we can see the page loaded successfully. In the console window we can find the original request got 404 and we try to find "index.html" and return the correct result.   Summary In this post I introduced how to use ASP.NET vNext to host AngularJS application as a static file server. I also demonstrated how to extend ASP.NET vNext, so that it supports AngularJS Html5Mode. 
You can download the source code here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • SignalR Auto Disconnect when Page Changed in AngularJS

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/30/signalr-auto-disconnect-when-page-changed-in-angularjs.aspxIf we are using SignalR, the connection lifecycle was handled by itself very well. For example when we connect to SignalR service from browser through SignalR JavaScript Client the connection will be established. And if we refresh the page, close the tab or browser, or navigate to another URL then the connection will be closed automatically. This information had been well documented here. In a browser, SignalR client code that maintains a SignalR connection runs in the JavaScript context of a web page. That's why the SignalR connection has to end when you navigate from one page to another, and that's why you have multiple connections with multiple connection IDs if you connect from multiple browser windows or tabs. When the user closes a browser window or tab, or navigates to a new page or refreshes the page, the SignalR connection immediately ends because SignalR client code handles that browser event for you and calls the "Stop" method. But unfortunately this behavior doesn't work if we are using SignalR with AngularJS. AngularJS is a single page application (SPA) framework created by Google. It hijacks browser's address change event, based on the route table user defined, launch proper view and controller. Hence in AngularJS we address was changed but the web page still there. All changes of the page content are triggered by Ajax. So there's no page unload and load events. This is the reason why SignalR cannot handle disconnect correctly when works with AngularJS. If we dig into the source code of SignalR JavaScript Client source code we will find something below. It monitors the browser page "unload" and "beforeunload" event and send the "stop" message to server to terminate connection. But in AngularJS page change events were hijacked, so SignalR will not receive them and will not stop the connection. 1: // wire the stop handler for when the user leaves the page 2: _pageWindow.bind("unload", function () { 3: connection.log("Window unloading, stopping the connection."); 4:  5: connection.stop(asyncAbort); 6: }); 7:  8: if (isFirefox11OrGreater) { 9: // Firefox does not fire cross-domain XHRs in the normal unload handler on tab close. 10: // #2400 11: _pageWindow.bind("beforeunload", function () { 12: // If connection.stop() runs runs in beforeunload and fails, it will also fail 13: // in unload unless connection.stop() runs after a timeout. 14: window.setTimeout(function () { 15: connection.stop(asyncAbort); 16: }, 0); 17: }); 18: }   Problem Reproduce In the codes below I created a very simple example to demonstrate this issue. Here is the SignalR server side code. 1: public class GreetingHub : Hub 2: { 3: public override Task OnConnected() 4: { 5: Debug.WriteLine(string.Format("Connected: {0}", Context.ConnectionId)); 6: return base.OnConnected(); 7: } 8:  9: public override Task OnDisconnected() 10: { 11: Debug.WriteLine(string.Format("Disconnected: {0}", Context.ConnectionId)); 12: return base.OnDisconnected(); 13: } 14:  15: public void Hello(string user) 16: { 17: Clients.All.hello(string.Format("Hello, {0}!", user)); 18: } 19: } Below is the configuration code which hosts SignalR hub in an ASP.NET WebAPI project with IIS Express. 
1: public class Startup 2: { 3: public void Configuration(IAppBuilder app) 4: { 5: app.Map("/signalr", map => 6: { 7: map.UseCors(CorsOptions.AllowAll); 8: map.RunSignalR(new HubConfiguration() 9: { 10: EnableJavaScriptProxies = false 11: }); 12: }); 13: } 14: } Since we will host AngularJS application in Node.js in another process and port, the SignalR connection will be cross domain. So I need to enable CORS above. In client side I have a Node.js file to host AngularJS application as a web server. You can use any web server you like such as IIS, Apache, etc.. Below is the "index.html" page which contains a navigation bar so that I can change the page/state. As you can see I added jQuery, AngularJS, SignalR JavaScript Client Library as well as my AngularJS entry source file "app.js". 1: <html data-ng-app="demo"> 2: <head> 3: <script type="text/javascript" src="jquery-2.1.0.js"></script> 1:  2: <script type="text/javascript" src="angular.js"> 1: </script> 2: <script type="text/javascript" src="angular-ui-router.js"> 1: </script> 2: <script type="text/javascript" src="jquery.signalR-2.0.3.js"> 1: </script> 2: <script type="text/javascript" src="app.js"></script> 4: </head> 5: <body> 6: <h1>SignalR Auto Disconnect with AngularJS by Shaun</h1> 7: <div> 8: <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> | 9: <a href="javascript:void(0)" data-ui-sref="view2">View 2</a> 10: </div> 11: <div data-ui-view></div> 12: </body> 13: </html> Below is the "app.js". My SignalR logic was in the "View1" page and it will connect to server once the controller was executed. User can specify a user name and send to server, all clients that located in this page will receive the server side greeting message through SignalR. 1: 'use strict'; 2:  3: var app = angular.module('demo', ['ui.router']); 4:  5: app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) { 6: $stateProvider.state('view1', { 7: url: '/view1', 8: templateUrl: 'view1.html', 9: controller: 'View1Ctrl' }); 10:  11: $stateProvider.state('view2', { 12: url: '/view2', 13: templateUrl: 'view2.html', 14: controller: 'View2Ctrl' }); 15:  16: $locationProvider.html5Mode(true); 17: }]); 18:  19: app.value('$', $); 20: app.value('endpoint', 'http://localhost:60448'); 21: app.value('hub', 'GreetingHub'); 22:  23: app.controller('View1Ctrl', function ($scope, $, endpoint, hub) { 24: $scope.user = ''; 25: $scope.response = ''; 26:  27: $scope.greeting = function () { 28: proxy.invoke('Hello', $scope.user) 29: .done(function () {}) 30: .fail(function (error) { 31: console.log(error); 32: }); 33: }; 34:  35: var connection = $.hubConnection(endpoint); 36: var proxy = connection.createHubProxy(hub); 37: proxy.on('hello', function (response) { 38: $scope.$apply(function () { 39: $scope.response = response; 40: }); 41: }); 42: connection.start() 43: .done(function () { 44: console.log('signlar connection established'); 45: }) 46: .fail(function (error) { 47: console.log(error); 48: }); 49: }); 50:  51: app.controller('View2Ctrl', function ($scope, $) { 52: }); When we went to View1 the server side "OnConnect" method will be invoked as below. And in any page we send the message to server, all clients will got the response. If we close one of the client, the server side "OnDisconnect" method will be invoked which is correct. But is we click "View 2" link in the page "OnDisconnect" method will not be invoked even though the content and browser address had been changed. 
This might cause many SignalR connections remain between the client and server. Below is what happened after I clicked "View 1" and "View 2" links four times. As you can see there are 4 live connections.   Solution Since the reason of this issue is because, AngularJS hijacks the page event that SignalR need to stop the connection, we can handle AngularJS route or state change event and stop SignalR connect manually. In the code below I moved the "connection" variant to global scope, added a handler to "$stateChangeStart" and invoked "stop" method of "connection" if its state was not "disconnected". 1: var connection; 2: app.run(['$rootScope', function ($rootScope) { 3: $rootScope.$on('$stateChangeStart', function () { 4: if (connection && connection.state && connection.state !== 4 /* disconnected */) { 5: console.log('signlar connection abort'); 6: connection.stop(); 7: } 8: }); 9: }]); Now if we refresh the page and navigated to View 1, the connection will be opened. At this state if we clicked "View 2" link the content will be changed and the SignalR connection will be closed automatically.   Summary In this post I demonstrated an issue when we are using SignalR with AngularJS. The connection cannot be closed automatically when we navigate to other page/state in AngularJS. And the solution I mentioned below is to move the SignalR connection as a global variant and close it manually when AngularJS route/state changed. You can download the full sample code here. Moving the SignalR connection as a global variant might not be a best solution. It's just for easy to demo here. In production code I suggest wrapping all SignalR operations into an AngularJS factory. Since AngularJS factory is a singleton object, we can safely put the connection variant in the factory function scope.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
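As a rough sketch of the factory approach suggested in the summary above (my own illustration, not code from the post), the SignalR plumbing and the $stateChangeStart handler can live together in one AngularJS factory. It reuses the '$', 'endpoint' and 'hub' values registered in app.js; the factory name 'signalrHub' and its exact shape are assumptions.

    var app = angular.module('demo');

    app.factory('signalrHub', ['$rootScope', '$', 'endpoint', 'hub',
        function ($rootScope, $, endpoint, hub) {
            var connection = $.hubConnection(endpoint);
            var proxy = connection.createHubProxy(hub);

            // Stop the connection whenever the ui-router state changes.
            $rootScope.$on('$stateChangeStart', function () {
                if (connection.state !== $.signalR.connectionState.disconnected) {
                    connection.stop();
                }
            });

            return {
                // Register handlers before calling start() so the hub is subscribed.
                on: function (eventName, callback) {
                    proxy.on(eventName, function () {
                        var args = arguments;
                        $rootScope.$apply(function () {
                            callback.apply(proxy, args);
                        });
                    });
                },
                invoke: function () {
                    return proxy.invoke.apply(proxy, arguments);
                },
                start: function () {
                    return connection.start();
                }
            };
        }]);

    // View1Ctrl would then take 'signalrHub' as a dependency instead of creating
    // the hub connection itself.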

    Read the article

  • How to deal with transport level security policy with OSB

    - by Jian Liang
    Recently, we received a use case for Oracle Service Bus (OSB) 11gPS4 to consume a Web Service which is secured by HTTP transport level security policy. The WSDL of the remote web service looks like following where the part marked in red shows the security policy: <?xml version='1.0' encoding='UTF-8'?> <definitions xmlns:wssutil="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tns="https://httpsbasicauth" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.xmlsoap.org/wsdl/" targetNamespace="https://httpsbasicauth" name="HttpsBasicAuthService"> <wsp:UsingPolicy wssutil:Required="true"/> <wsp:Policy wssutil:Id="WSHttpBinding_IPartyServicePortType_policy"> <wsp:ExactlyOne> <wsp:All> <ns1:TransportBinding xmlns:ns1="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy"> <wsp:Policy> <ns1:TransportToken> <wsp:Policy> <ns1:HttpsToken RequireClientCertificate="false"/> </wsp:Policy> </ns1:TransportToken> <ns1:AlgorithmSuite> <wsp:Policy> <ns1:Basic256/> </wsp:Policy> </ns1:AlgorithmSuite> <ns1:Layout> <wsp:Policy> <ns1:Strict/> </wsp:Policy> </ns1:Layout> </wsp:Policy> </ns1:TransportBinding> <ns2:UsingAddressing xmlns:ns2="http://www.w3.org/2006/05/addressing/wsdl"/> </wsp:All> </wsp:ExactlyOne> </wsp:Policy> <types> <xsd:schema> <xsd:import namespace="https://proxyhttpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=1"/> </xsd:schema> <xsd:schema> <xsd:import namespace="https://httpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=2"/> </xsd:schema> </types> <message name="echoString"> <part name="parameters" element="tns:echoString"/> </message> <message name="echoStringResponse"> <part name="parameters" element="tns:echoStringResponse"/> </message> <portType name="HttpsBasicAuth"> <operation name="echoString"> <input message="tns:echoString"/> <output message="tns:echoStringResponse"/> </operation> </portType> <binding name="HttpsBasicAuthSoapPortBinding" type="tns:HttpsBasicAuth"> <wsp:PolicyReference URI="#WSHttpBinding_IPartyServicePortType_policy"/> <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/> <operation name="echoString"> <soap:operation soapAction=""/> <input> <soap:body use="literal"/> </input> <output> <soap:body use="literal"/> </output> </operation> </binding> <service name="HttpsBasicAuthService"> <port name="HttpsBasicAuthSoapPort" binding="tns:HttpsBasicAuthSoapPortBinding"> <soap:address location="https://localhost:7002/WS/HttpsBasicAuthService"/> </port> </service> </definitions> The security assertion in the WSDL (marked in red) indicates that this is the HTTP transport level security policy which requires one way SSL with default authentication (aka. basic authenticate with username/password). Normally, there are two ways to handle web service security policy with OSB 11g: Use WebLogic 9.x policy Use OWSM Since OSB doesn’t support WebLogic 9.x WSSP transport level assertion (except for WS transport), when we tried to create the business service based on the imported WSDL, OSB complained with the following message: [OSB Kernel:398133]The service is based on WSDL with Web Services Security Policies that are not natively supported by Oracle Service Bus. Please select OWSM Policies - From OWSM Policy Store option and attach equivalent OWSM security policy. 
For the Business Service, you can either add the necessary client policies manually by clicking the Add button, or let Oracle Service Bus automatically pick compatible client policies by clicking the Add Compatible button. Unfortunately, when we tried OWSM we could not find http_token_policy, because OSB PS4 does not support the OWSM http_token_policy. It seemed we had run into an unsupported situation in which no appropriate policy was available from either WebLogic or OWSM.

As this security policy requires one-way SSL with basic authentication at the transport level, a possible workaround is to meet the remote service's requirement at the transport level without using a web service policy. We can simply use OSB to establish the SSL connection and provide the username/password for authentication at the transport level; in this case the business service within OSB is transparent to the web service policy. However, we still need to deal with the OSB console's complaint about the unsupported security policy, because the WSDL validation failure prevents the OSB console from moving forward. With help from the OSB Product Management team, we finally came up with the following solutions.

Solution 1: OSB PS5

The good news is that http_token_policy is available in OSB PS5. With OSB PS5, you can simply add the OWSM oracle/wss_http_token_over_ssl_client_policy to the business service. The simplest solution is to upgrade to OSB PS5, where the OWSM solution is provided out of the box. But if upgrading is not an immediate option, you might want to consider the two workaround solutions described below.

Solution 2: Modifying the WSDL

This solution addresses the OSB console's complaint by removing the security policy from the WSDL imported into OSB. Without the security policy, the OSB console allows the business service to be created from the modified WSDL. Bear in mind that the WSDL is modified only on the OSB side via the OSB console; no change is required on the remote web service. The main steps of this solution:

- Connect to the OSB console
- Import the remote WSDL into OSB
- Remove the security assertion (the part marked in red above) from the imported WSDL
- Create a service account; in our sample we simply use the user weblogic
- Create the business service, check "Basic" for Authentication, and select the created service account
- Make sure that OSB consumes the web service via https

This solution requires modifying the WSDL. It is suitable for any OSB version prior to PS5 (10g or 11g) without OWSM. However, modifying the WSDL by hand is troublesome: it requires the user to remember that the original WSDL was edited, it forces you to repeat the edit each time you re-import the service WSDL when changes occur at the service level, and it prevents you from importing the WSDL via UDDI.

Solution 3: Using the original WSDL

This solution keeps the WSDL intact and ignores the embedded policy by using OWSM. By design, OWSM does not like a WSDL with an embedded security assertion. Since OWSM does not provide a feature to explicitly ignore the embedded policy in a remote WSDL, in this solution we use OWSM in a slightly tricky way to ignore it.
The main steps of this solution:

- Connect to the OSB console
- Import the remote WSDL into OSB
- Create a service account
- Create the business service, check "Basic" for Authentication, and select the created service account
- Because the imported WSDL is intact, the OSB Kernel:398133 error is expected; ignore it for the moment and navigate to the Policies page of the business service
- Select "From OWSM Policy Store" and click the "Add" button; the list of policies will pop up
- Here is the tricky part: select an arbitrary policy and click "Cancel"
- Update and save

By clicking the "Cancel" button we did not add any OWSM policy to the business service, yet the embedded policy is ignored. Yes, this is tricky. According to the Oracle OSB Product Manager, a future release of OWSM will add a "None" button that allows the embedded policy to be ignored explicitly. This solution keeps the imported WSDL intact, which is its big advantage over solution 2. It is suitable for an OSB 11g domain (versions prior to PS5) with OWSM configured.

This blog addressed the unsupported transport-level web service security policy with OSB PS4. To summarize: if you are using OSB PS5 or are in a position to upgrade to PS5, the recommendation is to use the OWSM out-of-the-box transport-level security policy directly. With releases prior to 11g PS5, you can consider solution 2 or 3, depending on whether OWSM is configured.

    Read the article

  • Remove a Digital Camera’s IR Filter for IR Photography on the Cheap

    - by Jason Fitzpatrick
    Whether you have a DSLR or a point-and-shoot, this simple hack allows you to shoot awesome IR photographs without the expense of a high-quality IR filter (or the accompanying loss of light that comes with using it). How does it work? You’ll need to take apart your camera and remove a single fragile layer of IR blocking glass from the CCD inside the camera body. After doing so, you’ll have a camera that sees infrared light by default, no special add-on filters necessary. Because it sees the IR light without the filters, you’ll also skip the light loss that occurs with an add-on IR filter. The downside? You’re altering the camera in a permanent and warranty-voiding way. This is most definitely not a hack for your brand new $2,000 DSLR, but it is a really fun hack to try out on an old point-and-shoot camera or your circa-2004 depreciated DSLR. Hit up the link below to see the process performed on an old Canon point-and-shoot; we’d strongly recommend searching for a break-down guide for your specific camera model before attempting the trick on your own gear. Are You Brave Enough to IR-ize Your Camera [DIY Photography]

    Read the article

  • Messing with the Team

    - by Robert May
    Good Product Owners will help the team be the best that they can be.  Bad product owners will mess with the team and won’t care about the team.  If you’re a product owner, seek to do good and avoid bad behavior at all costs.  Remember, this is for YOUR benefit and you have much power given to you.  Use that power wisely. Scope Creep The product owner has several tools at his disposal to inject scope into an iteration.  First, the product owner can use defects to inject scope.  To do this, they’ll tell the team what functionality that they want to see in a feature.  Then, after the feature is developed, the Product Owner will decide that they don’t really like how the functionality behaves.  To change it, rather than creating a new story, they’ll add a defect.  The functionality is correct, as designed, but the Product Owner doesn’t like it.  By creating the defect, the Product Owner destroys the trust that the team has of the product owner.  They may not be able to count the story, because the Product Owner changed the story in the iteration, and the team then ends up looking like they have low velocity for something over which they have no control.  This is bad.  One way to deal with this is to add “Product Owner Time” to the iteration.  This will slow the velocity, but then the ScrumMaster can tell stake holders that this time is strictly in place to deal with bad behavior of the Product Owner. Another mechanism often used to inject Scope is the concept of directed development.  Outside of planning, stand-ups, or any other meeting, the Product Owner will take a developer aside and ask them to complete a task for them.  This is bad!  The team should be allocating all of their time to development.  If the Product Owner asks for a favor, then time that would normally be used for development will be used for a pet project of the Product Owner and the team will not get credit for this work.  Selfish product owners do this, and I typically see people who were “managers” do this behavior.  Authoritarian command and control development environments also see this happen.  The best thing that can happen is for the team member to report the issue to the ScrumMaster and the ScrumMaster to get very aggressive with management and the Product Owner to try and stop the behavior.  This may result in the ScrumMaster being fired, but if the behavior continues, Scrum is doomed.  This problem is especially bad in cases where the team member’s direct supervisor is the Product Owner.  I don’t recommend that the Product Owner or ScrumMaster have a direct report relationship with team members, since team members need the ability to say no.  To work around this issue, team members need to say no.  If that fails, team members need to add extra time to the iteration to deal with the scope creep injection and accept the lower velocity. As discussed above, another mechanism for injecting scope is by changing acceptance tests after the work is complete.  This is similar to adding defects to change scope and is bad.  To get around, add time for Product Owner uncertainty to the iteration and make sure that stakeholders are aware of the need to add this time because of the Product Owner. Refusing to Prioritize Refusing to prioritize causes chaos for the team.  From the team’s perspective, things that are not important will be worked on while things that the team knows are vital will be ignored.  A poor Product Owner will often pick the stories for the iteration on a whim.  
This leads to the team working on many different aspects of the product and results in a lower velocity, since each iteration the team must switch context to the new area of development. The team will also experience confusion about priorities.  In one iteration, Feature X was the highest priority and had to be done.  Then, the following iteration, even though parts of Feature X still need to be completed, no stories to address them will be in the iteration.  However, three iterations later, Feature X will again become high priority. This will cause the team to not trust the Product Owner, and eventually, they’ll stop caring about the features they implement.  They won’t know what is important, so to insulate themselves from the ever changing chaos, they’ll become apathetic to all features.  Team members are some of the most creative people in a company.  By losing their engagement, the company is going to have a substandard product because the passion for the product won’t be in the team. Other signs that the Product Owner refuses to prioritize is that no one outside of the product owner will be consulted on priorities.  Additionally, the product, release, and iteration backlogs will be weak or non-existent. Dealing with this issue is not easy.  This really isn’t something the team can fix, short of taking over the role of Product Owner themselves.  An appeal to the stake holders might work, but only if the Product Owner isn’t a “manager” themselves.  The ScrumMaster needs to protect the team and do what they can to either get the Product Owner to prioritize or have the Product Owner replaced. Managing the Team A Product Owner that is also the “boss” of team members is a Scrum team that is waiting to fail.  If your boss tells you to do something, failing to do that something can cause you to be fired.  The team needs the ability to tell the Product Owner NO.  If the product owner introduces scope creep, the team has a responsibility to tell the Product Owner no.  If the Product Owner tries to get the team to commit to more than they can accomplish in an iteration, the team needs the ability to tell the Product Owner no. If the Product Owner is your boss and determines your pay increases, you’re probably not going to ever tell them no, and Scrum will likely fail.  The team can’t do much in this situation. Another aspect of “managing the team” that often happens is the Product Owner tries to tell the team how to develop the stories that are in the iteration.  This is one reason why I recommend that Product Owners are NOT technical people.  That way, the team can come up with the tasks that are needed to accomplish the stories and the Product Owner won’t know better.  If the Product Owner is technical, the ScrumMaster will need to take great care to protect the team from the ScrumMaster changing how the team thinks they need to implement the stories. Product Owners can also try to manage the team by their body language.  If the team says a task is going to take 6 hours to complete, and the Product Owner disagrees, they will use some kind of sour body language to indicate this disagreement.  In weak teams, this may cause the team to revise their estimate down, which will result in them taking longer than estimated and may result in them missing the iteration.  The ScrumMaster will need to make sure that the Product Owner doesn’t send such messages and that the team ignores them and estimates what they REALLY think it will take to complete the tasks.  
Forcing the team to deal with such items in the retrospective can be helpful. Absenteeism The team is completely dependent upon the Product Owner to develop features for the customer.  The Product Owner IS the voice of the customer and without them, the team will lack direction.  Being the Product Owner is a full time job!  If the Product Owner cannot dedicate daily time with the team, a different product owner should be found. The Product Owner needs to attend every stand-up, planning meeting, showcase, and retrospective that the team has.  The team also must be able to have instant communication with the product owner.  They must not be required to schedule meetings to speak with their product owner.  The team must be the highest priority task that the Product Owner has. The best way to work around an absent Product Owner is to appoint a new Product Owner in the team.  This person will be responsible for making the decisions that the Product Owner should be making and to act as the liaison to the absent Product Owner.  If the delegate Product Owner doesn’t have authority to make decisions for the team, Scrum will fail.  If the Product Owner is absent, the ScrumMaster should seek to have that Product Owner replaced by someone who has the time and ability to be a real Product Owner. Making it Personal Too often Product Owners will become convinced that their ideas are the ones that matter and that anyone who disagrees is making a personal attack on them.  Remember that Product Owners will inherently be at odds with many people, simply because they have the need to prioritize.  Others will frequently question prioritization because they only see part of the picture that Product Owners face. Product Owners must have a thick skin and think egos.  If they don’t, they tend to make things personal, which causes them to become emotional and causes them to take actions that can destroy the trust that team members have in the Product Owner. If a Product Owner is making things person, the best thing that team members can do is reassure them that its not personal, but be firm about doing what is best for the Company and for the users.  The ScrumMaster should also spend significant time coaching the Product Owner on how to not react emotionally and how to accept criticism without becoming defensive. Conclusion I’m sure there are other ways that a Product Owner can mess with the team, but these are the most common that I’ve seen.  I would encourage all Product Owners to seek to be a good Product Owner.  If you find yourself behaving in any of the bad product owner ways, change your behavior today!  Your team will thank you. Remember, being Product Owner is very difficult!  Product Owner is one of the most difficult roles in Scrum.  However, it can also be one of the most rewarding roles in Scrum, since Product Owners literally see their ideas brought to life on the computer screen.  Product Owners need to be very patient, even in the face of criticism and need to be willing to make tough decisions on priority, but then not become offended when others disagree with those decisions.  Companies should spend the time needed to find the right product owners for their teams.  Doing so will only help the company to write better software. Technorati Tags: Scrum,Product Owner

    Read the article

  • Metro: Query Selectors

    - by Stephen.Walther
    The goal of this blog entry is to explain how to perform queries using selectors when using the WinJS library. In particular, you learn how to use the WinJS.Utilities.query() method and the QueryCollection class to retrieve and modify the elements of an HTML document. Introduction to Selectors When you are building a Web application, you need some way of easily retrieving elements from an HTML document. For example, you might want to retrieve all of the input elements which have a certain class. Or, you might want to retrieve the one and only element with an id of favoriteColor. The standard way of retrieving elements from an HTML document is by using a selector. Anyone who has ever created a Cascading Style Sheet has already used selectors. You use selectors in Cascading Style Sheets to apply formatting rules to elements in a document. For example, the following Cascading Style Sheet rule changes the background color of every INPUT element with a class of .required in a document to the color red: input.red { background-color: red } The “input.red” part is the selector which matches all INPUT elements with a class of red. The W3C standard for selectors (technically, their recommendation) is entitled “Selectors Level 3” and the standard is located here: http://www.w3.org/TR/css3-selectors/ Selectors are not only useful for adding formatting to the elements of a document. Selectors are also useful when you need to apply behavior to the elements of a document. For example, you might want to select a particular BUTTON element with a selector and add a click handler to the element so that something happens whenever you click the button. Selectors are not specific to Cascading Style Sheets. You can use selectors in your JavaScript code to retrieve elements from an HTML document. jQuery is famous for its support for selectors. Using jQuery, you can use a selector to retrieve matching elements from a document and modify the elements. The WinJS library enables you to perform the same types of queries as jQuery using the W3C selector syntax. Performing Queries with the WinJS.Utilities.query() Method When using the WinJS library, you perform a query using a selector by using the WinJS.Utilities.query() method.  The following HTML document contains a BUTTON and a DIV element: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <button>Click Me!</button> <div style="display:none"> <h1>Secret Message</h1> </div> </body> </html> The document contains a reference to the following JavaScript file named \js\default.js: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { WinJS.Utilities.query("button").listen("click", function () { WinJS.Utilities.query("div").clearStyle("display"); }); } }; app.start(); })(); The default.js script uses the WinJS.Utilities.query() method to retrieve all of the BUTTON elements in the page. The listen() method is used to wire an event handler to the BUTTON click event. When you click the BUTTON, the secret message contained in the hidden DIV element is displayed. 
The clearStyle() method is used to remove the display:none style attribute from the DIV element. Under the covers, the WinJS.Utilities.query() method uses the standard querySelectorAll() method. This means that you can use any selector which is compatible with the querySelectorAll() method when using the WinJS.Utilities.query() method. The querySelectorAll() method is defined in the W3C Selectors API Level 1 standard located here: http://www.w3.org/TR/selectors-api/ Unlike the querySelectorAll() method, the WinJS.Utilities.query() method returns a QueryCollection. We talk about the methods of the QueryCollection class below. Retrieving a Single Element with the WinJS.Utilities.id() Method If you want to retrieve a single element from a document, instead of matching a set of elements, then you can use the WinJS.Utilities.id() method. For example, the following line of code changes the background color of an element to the color red: WinJS.Utilities.id("message").setStyle("background-color", "red"); The statement above matches the one and only element with an Id of message. For example, the statement matches the following DIV element: <div id="message">Hello!</div> Notice that you do not use a hash when matching a single element with the WinJS.Utilities.id() method. You would need to use a hash when using the WinJS.Utilities.query() method to do the same thing like this: WinJS.Utilities.query("#message").setStyle("background-color", "red"); Under the covers, the WinJS.Utilities.id() method calls the standard document.getElementById() method. The WinJS.Utilities.id() method returns the result as a QueryCollection. If no element matches the identifier passed to WinJS.Utilities.id() then you do not get an error. Instead, you get a QueryCollection with no elements (length=0). Using the WinJS.Utilities.children() method The WinJS.Utilities.children() method enables you to retrieve a QueryCollection which contains all of the children of a DOM element. For example, imagine that you have a DIV element which contains children DIV elements like this: <div id="discussContainer"> <div>Message 1</div> <div>Message 2</div> <div>Message 3</div> </div> You can use the following code to add borders around all of the child DIV element and not the container DIV element: var discussContainer = WinJS.Utilities.id("discussContainer").get(0); WinJS.Utilities.children(discussContainer).setStyle("border", "2px dashed red");   It is important to understand that the WinJS.Utilities.children() method only works with a DOM element and not a QueryCollection. Notice that the get() method is used to retrieve the DOM element which represents the discussContainer. Working with the QueryCollection Class Both the WinJS.Utilities.query() method and the WinJS.Utilities.id() method return an instance of the QueryCollection class. The QueryCollection class derives from the base JavaScript Array class and adds several useful methods for working with HTML elements: addClass(name) – Adds a class to every element in the QueryCollection. clearStyle(name) – Removes a style from every element in the QueryCollection. conrols(ctor, options) – Enables you to create controls. get(index) – Retrieves the element from the QueryCollection at the specified index. getAttribute(name) – Retrieves the value of an attribute for the first element in the QueryCollection. hasClass(name) – Returns true if the first element in the QueryCollection has a certain class. include(items) – Includes a collection of items in the QueryCollection. 
listen(eventType, listener, capture) – Adds an event listener to every element in the QueryCollection. query(query) – Performs an additional query on the QueryCollection and returns a new QueryCollection. removeClass(name) – Removes a class from the every element in the QueryCollection. removeEventListener(eventType, listener, capture) – Removes an event listener from every element in the QueryCollection. setAttribute(name, value) – Adds an attribute to every element in the QueryCollection. setStyle(name, value) – Adds a style attribute to every element in the QueryCollection. template(templateElement, data, renderDonePromiseContract) – Renders a template using the supplied data.  toggleClass(name) – Toggles the specified class for every element in the QueryCollection. Because the QueryCollection class derives from the base Array class, it also contains all of the standard Array methods like forEach() and slice(). Summary In this blog post, I’ve described how you can perform queries using selectors within a Windows Metro Style application written with JavaScript. You learned how to return an instance of the QueryCollection class by using the WinJS.Utilities.query(), WinJS.Utilities.id(), and WinJS.Utilities.children() methods. You also learned about the methods of the QueryCollection class.
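As a quick illustration of the QueryCollection methods listed above (my own sketch, not code from the original article), the snippet below assumes the page contains INPUT elements with the class required and that CSS rules for the highlight and missing classes already exist.

    var required = WinJS.Utilities.query("input.required");

    // setAttribute() and addClass() are applied to every matched element.
    required.setAttribute("placeholder", "required");
    required.addClass("highlight");

    // QueryCollection derives from Array, so the standard array methods work too.
    required.forEach(function (element) {
        console.log(element.id);
    });

    // listen() wires the handler to every element in the collection.
    required.listen("blur", function (e) {
        if (e.target.value === "") {
            WinJS.Utilities.addClass(e.target, "missing");
        } else {
            WinJS.Utilities.removeClass(e.target, "missing");
        }
    });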

    Read the article

  • Create nice animation on your ASP.NET Menu control using jQuery

    - by hajan
    In this blog post, I will show how you can apply some nice animation effects on your ASP.NET Menu control. ASP.NET Menu control offers many possibilities, but together with jQuery, you can make very rich, interactive menu accompanied with animations and effects. Lets start with an example: - Create new ASP.NET Web Application and give it a name - Open your Default.aspx page (or any other .aspx page where you will create the menu) - Our page ASPX code is: <form id="form1" runat="server"> <div id="menu">     <asp:Menu ID="Menu1" runat="server" Orientation="Horizontal" RenderingMode="List">                     <Items>             <asp:MenuItem NavigateUrl="~/Default.aspx" ImageUrl="~/Images/Home.png" Text="Home" Value="Home"  />             <asp:MenuItem NavigateUrl="~/About.aspx" ImageUrl="~/Images/Friends.png" Text="About Us" Value="AboutUs" />             <asp:MenuItem NavigateUrl="~/Products.aspx" ImageUrl="~/Images/Box.png" Text="Products" Value="Products" />             <asp:MenuItem NavigateUrl="~/Contact.aspx" ImageUrl="~/Images/Chat.png" Text="Contact Us" Value="ContactUs" />         </Items>     </asp:Menu> </div> </form> As you can see, we have ASP.NET Menu with Horizontal orientation and RenderMode=”List”. It has four Menu Items where for each I have specified NavigateUrl, ImageUrl, Text and Value properties. All images are in Images folder in the root directory of this web application. The images I’m using for this demo are from Free Web Icons. - Next, lets create CSS for the LI and A tags (place this code inside head tag) <style type="text/css">     li     {         border:1px solid black;         padding:20px 20px 20px 20px;         width:110px;         background-color:Gray;         color:White;         cursor:pointer;     }     a { color:White; font-family:Tahoma; } </style> This is nothing very important and you can change the style as you want. - Now, lets reference the jQuery core library directly from Microsoft CDN. <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.min.js"></script> - And we get to the most interesting part, applying the animations with jQuery Before we move on writing jQuery code, lets see what is the HTML code that our ASP.NET Menu control generates in the client browser.   <ul class="level1">     <li><a class="level1" href="Default.aspx"><img src="Images/Home.png" alt="" title="" class="icon" />Home</a></li>     <li><a class="level1" href="About.aspx"><img src="Images/Friends.png" alt="" title="" class="icon" />About Us</a></li>     <li><a class="level1" href="Products.aspx"><img src="Images/Box.png" alt="" title="" class="icon" />Products</a></li>     <li><a class="level1" href="Contact.aspx"><img src="Images/Chat.png" alt="" title="" class="icon" />Contact Us</a></li> </ul>   So, it generates unordered list which has class level1 and for each item creates li element with an anchor with image + menu text inside it. If we want to access the list element only from our menu (not other list element sin the page), we need to use the following jQuery selector: “ul.level1 li”, which will find all li elements which have parent element ul with class level1. 
Hence, the jQuery code is:   <script type="text/javascript">     $(function () {         $("ul.level1 li").hover(function () {             $(this).stop().animate({ opacity: 0.7, width: "170px" }, "slow");         }, function () {             $(this).stop().animate({ opacity: 1, width: "110px" }, "slow");         });     }); </script>   I’m using hover, so that the animation will occur once we go over the menu item. The two different functions are one for the over, the other for the out effect. The following line $(this).stop().animate({ opacity: 0.7, width: "170px" }, "slow");     does the real job. So, this will first stop any previous animations (if any) that are in progress and will animate the menu item by giving to it opacity of 0.7 and changing the width to 170px (the default width is 110px as in the defined CSS style for li tag). This happens on mouse over. The second function on mouse out reverts the opacity and width properties to the default ones. The last parameter “slow” is the speed of the animation. The end result is:   The complete ASPX code: <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server">     <title>ASP.NET Menu + jQuery</title>     <style type="text/css">         li         {             border:1px solid black;             padding:20px 20px 20px 20px;             width:110px;             background-color:Gray;             color:White;             cursor:pointer;         }         a { color:White; font-family:Tahoma; }     </style>     <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.min.js"></script>     <script type="text/javascript">         $(function () {             $("ul.level1 li").hover(function () {                 $(this).stop().animate({ opacity: 0.7, width: "170px" }, "slow");             }, function () {                 $(this).stop().animate({ opacity: 1, width: "110px" }, "slow");             });         });     </script> </head> <body>     <form id="form1" runat="server">     <div id="menu">         <asp:Menu ID="Menu1" runat="server" Orientation="Horizontal" RenderingMode="List">                         <Items>                 <asp:MenuItem NavigateUrl="~/Default.aspx" ImageUrl="~/Images/Home.png" Text="Home" Value="Home"  />                 <asp:MenuItem NavigateUrl="~/About.aspx" ImageUrl="~/Images/Friends.png" Text="About Us" Value="AboutUs" />                 <asp:MenuItem NavigateUrl="~/Products.aspx" ImageUrl="~/Images/Box.png" Text="Products" Value="Products" />                 <asp:MenuItem NavigateUrl="~/Contact.aspx" ImageUrl="~/Images/Chat.png" Text="Contact Us" Value="ContactUs" />             </Items>         </asp:Menu>     </div>     </form> </body> </html> Hope this was useful. Regards, Hajan

    Read the article

  • 2D character controller in unity (trying to get old-school platformers back)

    - by Notbad
    These days I'm trying to create a 2D character controller with Unity (using physics). I'm fairly new to physics engines and it is really hard to get the control feel I'm looking for. I would be really happy if anyone could suggest a solution for a problem I'm finding. This is my FixedUpdate right now:

    public void FixedUpdate()
    {
        Vector3 v = new Vector3(0, -10000 * Time.fixedDeltaTime, 0);
        _body.AddForce(v);
        v.y = 0;
        if (state(MovementState.Left))
        {
            v.x = -_walkSpeed * Time.fixedDeltaTime + v.x;
            if (Mathf.Abs(v.x) > _maxWalkSpeed)
                v.x = -_maxWalkSpeed;
        }
        else if (state(MovementState.Right))
        {
            v.x = _walkSpeed * Time.fixedDeltaTime + v.x;
            if (Mathf.Abs(v.x) > _maxWalkSpeed)
                v.x = _maxWalkSpeed;
        }
        _body.velocity = v;
        Debug.Log("Velocity: " + _body.velocity);
    }

    I'm trying here to just move the rigid body by applying gravity and a linear force for left and right. I have set up a physics material with no bounce, 0 friction while moving and 1 friction when standing still. The main problem is that I have colliders with slopes, and the velocity changes between going up the slope (slower), going down the slope (faster) and walking on a flat collider (normal). How could this be fixed? As you can see, I'm always applying the same velocity on the x axis. The player is set up with a sphere at the feet position, which is the rigidbody I'm applying forces to. Any other tips that could make my life easier with this are welcome :). P.S. On my way home I noticed I could solve this by applying a constant force parallel to the surface the player is walking on, but I don't know if it is the best method.
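    A minimal sketch of that last idea (an editor's illustration, not the poster's code): read the ground normal from the contact points, build the slope tangent from it, and drive the velocity along that tangent, so the speed is the same uphill, downhill and on flat ground. The class name SlopeWalker, the _walkSpeed field and the use of Input.GetAxisRaw are assumptions made for the example; the airborne case is left untouched, so gravity can keep working as before.

        using UnityEngine;

        public class SlopeWalker : MonoBehaviour
        {
            public float _walkSpeed = 5f;              // target speed along the ground, in m/s
            private Vector3 _groundNormal = Vector3.up;
            private bool _grounded;
            private Rigidbody _body;

            void Awake()
            {
                _body = GetComponent<Rigidbody>();     // requires a Rigidbody on the same object
            }

            void OnCollisionStay(Collision collision)
            {
                // remember the normal of whatever we are standing on this step
                _groundNormal = collision.contacts[0].normal;
                _grounded = true;
            }

            void FixedUpdate()
            {
                float input = Input.GetAxisRaw("Horizontal");   // -1, 0 or +1

                if (_grounded && input != 0f)
                {
                    // tangent of the surface, pointing to the right of the screen
                    Vector3 tangent = Vector3.Cross(_groundNormal, Vector3.forward).normalized;

                    // same speed along the surface whether it is flat, uphill or downhill
                    _body.velocity = tangent * input * _walkSpeed;
                }

                _grounded = false;   // OnCollisionStay sets it again while we touch the ground
            }
        }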

    Read the article

  • Continuous Physics Engine's Collision Detection Techniques

    - by Griffin
    I'm working on a purely continuous physics engine, and I need to choose algorithms for broad and narrow phase collision detection. "Purely continuous" means I never do intersection tests, but instead want to find ways to catch every collision before it happens, and put each into a "planned collisions" stack that is ordered by TOI. Broad Phase: The only continuous broad-phase method I can think of is encasing each body in a circle and testing if each circle will ever overlap another. This seems horribly inefficient however, and lacks any culling. I have no idea what continuous analogs might exist for today's discrete collision culling methods such as quad-trees either. How might I go about preventing inappropriate and pointless broad tests, the way a discrete engine does? Narrow Phase: I've managed to adapt the narrow SAT to a continuous check rather than a discrete one, but I'm sure there are other, better algorithms out there in papers or sites you guys might have come across. What fast or accurate algorithms do you suggest I use, and what are the advantages/disadvantages of each? Final Note: I say techniques and not algorithms because I have not yet decided on how I will store different polygons, which might be concave, convex, round, or even have holes. I plan to make a decision on this based on what the algorithm requires (for instance, if I choose an algorithm that breaks down a polygon into triangles or convex shapes, I will simply store the polygon data in this form).
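    For the circle-encasing broad phase described above, "will these two circles ever overlap during the step?" reduces to solving a quadratic in time. The following is a small, self-contained C# sketch of that test (an editor's illustration, not code from the post); the class and method names are invented for the example.

        using System;
        using System.Numerics;

        public static class SweptCircle
        {
            // Earliest time t in [0, 1] at which two circles moving with constant
            // velocity over the step come into contact, or null if they never touch.
            public static float? TimeOfImpact(
                Vector2 p1, Vector2 v1, float r1,
                Vector2 p2, Vector2 v2, float r2)
            {
                Vector2 dp = p2 - p1;                  // relative position
                Vector2 dv = v2 - v1;                  // relative velocity
                float r = r1 + r2;

                // |dp + dv * t|^2 = r^2  =>  a*t^2 + 2*b*t + c = 0
                float a = Vector2.Dot(dv, dv);
                float b = Vector2.Dot(dp, dv);
                float c = Vector2.Dot(dp, dp) - r * r;

                if (c <= 0f) return 0f;                // already overlapping
                if (a == 0f || b >= 0f) return null;   // no relative motion, or separating

                float discriminant = b * b - a * c;
                if (discriminant < 0f) return null;    // closest approach never reaches r

                float t = (-b - MathF.Sqrt(discriminant)) / a;
                return (t >= 0f && t <= 1f) ? t : (float?)null;
            }
        }

    A pair whose TimeOfImpact is non-null would then be pushed onto the TOI-ordered "planned collisions" stack; culling (the quad-tree analog the post asks about) is a separate problem this sketch does not address.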

    Read the article

  • The Grenelle II Act In France: A Milestone Towards Integrated Reporting

    - by Evelyn Neumayr
    By Elena Avesani, Principal Product Strategy Manager, Oracle In July of 2010, France took a significant step towards mandating integrated sustainability and financial reporting for all large companies with a new law called Grenelle II. Article 225 of Grenelle II requires that many listed companies on the French stock exchanges incorporate information on the social and environmental consequences of their activities into their annual reports, as well as their societal commitments for sustainable development. The decree that implements Article 225 of Grenelle II was passed in April 2012. Grenelle II is the strongest governmental mandate yet in support of sustainability reporting. The law defines the phase-in process, with large listed companies expected to comply in their 2012 reports and smaller companies expected to comply with their 2014 annual reports. This extra-financial information will have to be embedded in the annual management report, approved by the Board of Directors, verified by a third-party body and given to the annual general meeting. The subjects that must be reported on are grouped into Environmental, Social, and Governance categories. Oracle solutions can help organizations integrate financial and sustainability reporting and provide a more accurate and auditable approach to collecting, consolidating, and reporting such environmental, social, and economic metrics. Through Oracle Environmental Accounting and Reporting and Oracle Hyperion Financial Management Sustainability Starter Kit organizations can collect environmental, social and governance data and collect and consolidate corporate sustainability reporting data from multiple systems and business units. For more information about these solutions please contact [email protected].

    Read the article

  • Gimme Gimme Gimme!

    - by steve.diamond
    Today is my birthday. And you know, there used to be a time when I dreaded birthdays. But now, as I reach my 37th year (that's my Polar Body Test age), I'm re-learning to really really appreciate being here. Now, what the heck does any of this have to do with CRM or this blog? Easy! Here is the present I would like from you. 1) Please tell us how we're doing on this blog. Do you like what you're seeing? Do you NOT like what you're seeing? Why? What types of topics would you like to see more or less of from us? Do you think we're running too much of an Oracle infomercial here? Conversely, would you like us to spend more time focusing on Oracle solutions? If so, which ones are of most interest to you? 2) Let's assume you DO like what you're seeing and reading here. Please tell a friend. Pass it on. You can write a comment below or submit a comment on our Facebook Fan page (http://facebook.com/oraclecrm). If you're an Oracle employee, please simply send me an email. And if you work here at HQ, bring me some key lime pie. And last but not least, thank you!

    Read the article

  • A Reusable Builder Class for Javascript Testing

    - by Liam McLennan
    Continuing on my series of builders for C# and Ruby here is the solution in Javascript. This is probably the implementation with which I am least happy. There are several parts that did not seem to fit the language. This time around I didn’t bother with a testing framework, I just append some values to the page with jQuery. Here is the test code: var initialiseBuilder = function() { var builder = builderConstructor(); builder.configure({ 'Person': function() { return {name: 'Liam', age: 26}}, 'Property': function() { return {street: '127 Creek St', manager: builder.a('Person') }} }); return builder; }; var print = function(s) { $('body').append(s + '<br/>'); }; var build = initialiseBuilder(); // get an object liam = build.a('Person'); print(liam.name + ' is ' + liam.age); // get a modified object liam = build.a('Person', function(person) { person.age = 999; }); print(liam.name + ' is ' + liam.age); home = build.a('Property'); print(home.street + ' manager: ' + home.manager.name); and the implementation: var builderConstructor = function() { var that = {}; var defaults = {}; that.configure = function(d) { defaults = d; }; that.a = function(type, modifier) { var o = defaults[type](); if (modifier) { modifier(o); } return o; }; return that; }; I still like javascript’s syntax for anonymous methods, defaults[type]() is much clearer than the Ruby equivalent @defaults[klass].call(). You can see the striking similarity between Ruby hashes and javascript objects. I also prefer modifier(o) to the equivalent Ruby, yield o.

    Read the article

  • Androidify Turns You into an Android-style Avatar

    - by ETC
    Androidify, a free application for Android phones, lets you create an Android-style avatar of yourself (or anyone, for that matter). Swap out the clothes, adjust the size of your noggin, add a pirate eye-patch; the customization options are abundant. Much like the Mii creator on the Wii, Androidify starts you off with the basics: skin color, hair style and color, adjustment of the body size. After that you can tweak things like clothing, accessories, facial hair and more. Once you’re done you can set it as your contact image, send it to others to use as your contact image, save it as a photo, and otherwise upload it with the social media tools you have on your phone. Hit up the link below to read more and grab a copy. Androidify [Android Market]

    Read the article

  • Denali Paging–Key seek lookups

    - by Dave Ballantyne
    In my previous post, “Denali Paging – is it win.win?”, I demonstrated the use of the Paging functionality within Denali. On reflection, I think I may have been a little unfair, and I had always planned to continue my investigations to the next step. In Paul's article, he uses a combination of CTEs to first scan the ordered keys, which are then filtered using TOP and ROW_NUMBER, and then uses those keys to seek the data. So what happens if we replace the scanning portion of the code with the Denali paging functionality? Here's the original procedure; we are going to replace the functionality of the Keys and SelectedKeys CTEs:

    CREATE PROCEDURE dbo.FetchPageKeySeek
            @PageSize   BIGINT,
            @PageNumber BIGINT
    AS
    BEGIN
            -- Key-Seek algorithm
            WITH    Keys
            AS      (
                    -- Step 1 : Number the rows from the non-clustered index
                    -- Maximum number of rows = @PageNumber * @PageSize
                    SELECT  TOP (@PageNumber * @PageSize)
                            rn = ROW_NUMBER() OVER (ORDER BY P1.post_id ASC),
                            P1.post_id
                    FROM    dbo.Post P1
                    ORDER   BY P1.post_id ASC
                    ),
                    SelectedKeys
            AS      (
                    -- Step 2 : Get the primary keys for the rows on the page we want
                    -- Maximum number of rows from this stage = @PageSize
                    SELECT  TOP (@PageSize)
                            SK.rn,
                            SK.post_id
                    FROM    Keys SK
                    WHERE   SK.rn > ((@PageNumber - 1) * @PageSize)
                    ORDER   BY SK.post_id ASC
                    )
            SELECT  -- Step 3 : Retrieve the off-index data
                    -- We will only have @PageSize rows by this stage
                    SK.rn,
                    P2.post_id,
                    P2.thread_id,
                    P2.member_id,
                    P2.create_dt,
                    P2.title,
                    P2.body
            FROM    SelectedKeys SK
            JOIN    dbo.Post P2
                    ON  P2.post_id = SK.post_id
            ORDER   BY SK.post_id ASC;
    END;

    and here is the replacement procedure using paging:

    CREATE PROCEDURE dbo.FetchOffsetPageKeySeek
            @PageSize   BIGINT,
            @PageNumber BIGINT
    AS
    BEGIN
            -- Key-Seek algorithm
            WITH    SelectedKeys
            AS      (
                    SELECT  post_id
                    FROM    dbo.Post P1
                    ORDER   BY post_id ASC
                    OFFSET  @PageSize * (@PageNumber - 1) ROWS
                    FETCH NEXT @PageSize ROWS ONLY
                    )
            SELECT  P2.post_id,
                    P2.thread_id,
                    P2.member_id,
                    P2.create_dt,
                    P2.title,
                    P2.body
            FROM    SelectedKeys SK
            JOIN    dbo.Post P2
                    ON  P2.post_id = SK.post_id
            ORDER   BY SK.post_id ASC;
    END;

    Notice how all I have done is replace the functionality of the Keys and SelectedKeys CTEs with the paging functionality. So, what is the comparative performance now? Exactly the same amount of IO and memory usage, but it's now pretty obvious that in terms of CPU and overall duration we are onto a winner.

    Read the article

  • Frank Buytendijk on Prahalad, Business Best Practices

    - by Bob Rhubart
    In his video on the questionable value of some business best practices, Frank Buytendijk mentions a recent HBR article by business guru C.K. Prahalad. I just learned that Prahalad passed away this past weekend at the age of 68 (Information Week obit). A couple of years ago I had the good fortune to attend Mr. Prahalad’s keynote address at a Gartner event. He had an audience of software architects absolutely mesmerized as he discussed technology’s role in the changing nature of business competition. The often dysfunctional relationship between IT and business has been, and will probably always be, a hot-button issue. But during Prahalad’s keynote, there was a palpable sense that the largely technical audience was having some kind of breakthrough, that they had achieved a new level of understanding about the importance of the relationship between the two camps. Fortunately, Prahalad leaves behind a significant body of work that will remain a valuable resource as business and the technology that supports it continue to evolve.

    Read the article

  • Oye! Help Build OTN America Latina!

    - by rickramsey
    Yes, tango is passion, but it is passion born of romance. Not passion born of lust. As it is so often portrayed today. Understand that, and you will begin to understand why life in Latin America is so rich. image courtesy of Continental Magazine. You don't often get a chance to shape the direction of a technical comunidad. Somebody else gets there first and pretty soon everyone is in a rathole about the relevance of rutabagas. Or rutabagels as my public-school-educated hijas prefer to call them. Well, OTN American Latina is just starting up. If you're a techie who speaks Spanish or Portuguese, or if you just like hanging out with techies latinoamericanos (and who doesn't?), here's how to get in on the fun: Why Portuguese Speaking Techies Should Join Why Spanish Speaking Techies Should Join And here are the sites themselves: OTN America Latina in Brazilian Portuguese OTN America Latina in Spanish If you're not sure which site to visit, just remember that Brazilian Portuguese is Spanish spoken with a little body English. Ricardo System Admin and Developer Community of the Oracle Technology Network

    Read the article

  • Reflections based on distance from plane

    - by Andrea Benedetti
    Consider, for example, a surface like a volleyball court: the legs and shoes of the players are reflected, with a blur effect, but their bodies and the stadium are not (like every object that is not near the court). I've already made a reflection effect, but it works as a specular reflection, and I need to achieve an effect like the photo above. So, I would like to make a reflection that is based on the distance between the object and the plane, so that a close object reflects more than an object that is positioned far away from the plane. What is the best way to achieve this effect? My first idea was to use the depth value (taken from the reflected camera), and use that value to blend between the reflection and the court. But I don't know if that is the correct way. Edit: as my rendering engine I use Ogre, which already provides a reflection system: reflecting the camera through a plane (obviously I can select the models to draw from the reflected camera). After a render-to-texture pass I can blend the reflected texture with the original plane. So, if possible, I'm looking for a way that best suits my system.
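    One way to express the "closer means stronger reflection" rule is a simple falloff on the distance to the plane, used as the blend weight between the reflected texture and the court. The snippet below is only an editor's sketch of that falloff in C# (the names and the linear falloff curve are assumptions); in practice the same formula would typically live in the pixel shader of the blending pass, fed by the depth or height recovered from the reflected camera.

        using System;

        public static class PlanarReflection
        {
            // height       : distance of the reflected point above the court plane
            // fadeDistance : height at which the reflection has completely faded out
            public static float ReflectionWeight(float height, float fadeDistance)
            {
                float t = Math.Clamp(height / fadeDistance, 0f, 1f);
                return 1f - t;                 // 1 at the plane, 0 at fadeDistance and beyond
            }

            // final colour = lerp(courtColour, reflectedColour, BlendAlpha(...))
            public static float BlendAlpha(float height, float fadeDistance, float baseReflectivity)
            {
                return ReflectionWeight(height, fadeDistance) * baseReflectivity;
            }
        }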

    Read the article

  • Thoughts on my new template language?

    - by Ralph
    Let's start with an example:

    using "html5"
    using "extratags"

    html {
        head {
            title "Ordering Notice"
            jsinclude "jquery.js"
        }
        body {
            h1 "Ordering Notice"
            p "Dear @name,"
            p "Thanks for placing your order with @company. It's scheduled to ship on {@ship_date|dateformat}."
            p "Here are the items you've ordered:"
            table {
                tr {
                    th "name"
                    th "price"
                }
                for(@item in @item_list) {
                    tr {
                        td @item.name
                        td @item.price
                    }
                }
            }
            if(@ordered_warranty)
                p "Your warranty information will be included in the packaging."
            p(class="footer") {
                "Sincerely,"
                br
                @company
            }
        }
    }

    The "using" keyword indicates which tags to use. "html5" might include all the html5 standard tags, but your tag names wouldn't have to be based on their HTML counterparts at all if you didn't want to. The "extratags" library, for example, might add an extra tag called "jsinclude" which gets replaced with something like <script type="text/javascript" src="@content"></script>. Tags can optionally be followed by an opening brace. They will automatically be closed at the closing brace. If no brace is used, they will be closed after taking one element. Variables are prefixed with the @ symbol. They may be used inside double-quoted strings. I think I'll use single-quotes to indicate "no variable substitution" like PHP does. Filter functions can be applied to variables like @variable|filter. Arguments can be passed to the filter: @variable|filter:@arg1,arg2="y". Attributes can be passed to tags by including them in (), like p(class="classname"). Some questions: Which symbol should I use to prefix variables? @ (like Razor), $ (like PHP), or something else? Should the @ symbol be necessary in "for" and "if" statements? It's kind of implied that those are variables. Tags and controls (like if, for) presently have the exact same syntax. Should I do something to differentiate the two? If so, what? Do you like the attribute syntax? (round brackets) I'll add more questions in a few minutes, once I get some feedback.

    Read the article

  • Who practices, or is likely to practice, the IEEE Software Engineering? [closed]

    - by user72757
    There is an interesting issue in Software Engineering which I'd like to explore. The issue is firstly what is and what is not software engineering. Secondly, if software engineering is what the IEEE defines it to be, what are good examples of companies which practice the SE? Detailed question: Software engineering (SE) is the application of a systematic, disciplined, quantifiable approach to the design, development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software. [updated definition, originating in 610.12-1990 - IEEE Standard Glossary of Software Engineering Terminology] If we consider as SE only those approaches that 100% match the above definition, we naturally get to SWEBOK (Software Engineering Body of Knowledge) which is created by the IEEE and the ACM. I'm seeking the answer to this: How can I find a company outside the defence industry which practices the SE as defined by IEEE? Clues: SE originates in 1968 NATO conference. The Software Engineering Institute (SEI) is based in the US at Carnegie Mellon University. Funding of the SEI is largely done by the US DoD. Defence industry uses the SE and sometimes has a partnership with the IEEE (as in case of Boeing). Possible decomposition of my big question into smaller chunks: a) Where is anyone who acknowledges the IEEE Software Engineering standards at work and perhaps even uses some of them? http://cs.hbg.psu.edu/cmpsc487/IEEEStds_List.htm b) Where can I find a person or a company building around SWEBOK? http://www.computer.org/portal/web/swebok/html/contents c) What is an example of a company professionally using CSDP (apart from those at IEEE website)? Does anyone have any possible contribution to this question?

    Read the article

  • Inverted textures

    - by brainydexter
    I'm trying to draw textures aligned with this physics body whose coordinate system's origin is at the center of the screen. (XNA) SpriteBatch has its default origin set to the top-left corner. I got the textures to be positioned correctly, but I noticed my textures are vertically inverted. That is, an arrow texture pointing up, when rendered, points down. I'm not sure where I am going wrong with the math. My approach is to convert everything into the physics engine's meter units and draw accordingly.

    Matrix proj = Matrix.CreateOrthographic(scale * graphics.GraphicsDevice.Viewport.AspectRatio, scale, 0, 1);
    Matrix view = Matrix.Identity;

    effect.World = Matrix.Identity;
    effect.View = view;
    effect.Projection = proj;
    effect.TextureEnabled = true;
    effect.VertexColorEnabled = true;
    effect.Techniques[0].Passes[0].Apply();

    SpriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend, null, DepthStencilState.Default, RasterizerState.CullNone, effect);
    m_Paddles[1].Draw(gameTime);
    SpriteBatch.End();

    where Paddle::Draw looks like:

    SpriteBatch.Draw(paddleTexture,
        mBody.Position,
        null,
        Color.White,
        0f,
        new Vector2(16f, 16f), // origin of the texture
        0.1875f,               // width of box is 3*2 = 6 meters. texture is 32 pixels wide. to make it 6 meters wide in world space: 6/32 = 0.1875f
        SpriteEffects.None,
        0);

    The orthographic projection matrix seems fine to me, but I am obviously doing something wrong somewhere! Can someone please help me figure out what I am doing wrong here? Thanks.
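    One common workaround, sketched below as an editor's suggestion rather than a confirmed fix for this exact setup: SpriteBatch builds its sprite quads assuming +Y points down, so under a y-up orthographic projection every sprite comes out mirrored vertically even though its position is correct. Keeping the projection as it is and asking SpriteBatch to flip each sprite puts the texture the right way up again; only the SpriteEffects argument of the Draw call changes.

        SpriteBatch.Draw(paddleTexture,
            mBody.Position,
            null,
            Color.White,
            0f,
            new Vector2(16f, 16f),          // origin of the texture, in texels
            0.1875f,                        // 6 meters wide / 32 texels = 0.1875
            SpriteEffects.FlipVertically,   // compensate for the y-up projection
            0);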

    Read the article

  • Ajax-based data loading using jQuery.load() function in ASP.NET

    - by hajan
    In general, jQuery has made Ajax very easy by providing a low-level interface, shorthand methods and helper functions, which all give us great features for handling Ajax requests in our ASP.NET webs. The simplest way to load data from the server and place the returned HTML in the browser is to use the jQuery.load() function. The very first time I started playing with this function, I didn't believe it would work that easily. What you can do with this method is simply pass a given URL as a parameter to the load function and display the content in the selector after which this function is chained. So, to clear this up, let me give you one very simple example:

    $("#result").load("AjaxPages/Page.html");

    As you can see from the above image, after clicking the ‘Load Content’ button which fires the above code, we are making an Ajax GET and the response is the entire page HTML. So, rather than using (old) iframes, you can now use this method to load other HTML pages inside the page from where the script with the load function is called. This method is equivalent to the jQuery Ajax GET method $.get(url, data, function () { }), only that $.load() is a method rather than a global function and has an implicit callback function. To provide a callback to your load, you can simply add a function as the second parameter, see the example:

    $("#result").load("AjaxPages/Page.html", function () { alert("Page.html has been loaded successfully!") });

    Since load is part of the chain that follows the given jQuery selector where the content should be loaded, it means that the $.load() function won't execute if there is no such selector found within the DOM. Another interesting thing to mention, and maybe you've asked yourself, is how we know whether a GET or a POST method type is executed. It's simple: if we provide 'data' as the second parameter to the load function, then POST is used, otherwise GET is assumed.

    POST
    $("#result").load("AjaxPages/Page.html", { "name": "hajan" }, function () { ////callback function implementation });

    GET
    $("#result").load("AjaxPages/Page.html", function () { ////callback function implementation });

    Another important feature that $.load() has (and $.get() does not) is loading page fragments. Using jQuery's selector capability, you can do this:

    $("#result").load("AjaxPages/Page.html #resultTable");

    In our Page.html, the content now is: So, after the call, only the table with id resultTable will load in our page. As you can see, we have loaded only the table with id resultTable (1) inside the div with id result (2). This is a great feature since we won't need to filter the returned HTML content again in our callback function on the master page from where we have called the $.load() function. Besides the fact that you can simply call static HTML pages, you can also use this function to load dynamic ASPX pages or ASP.NET ASHX handlers. Let's say we have another page (ASPX) in our AjaxPages folder with the name GetProducts.aspx. This page has a repeater control (or anything you want to bind dynamic server-side content to) that displays a set of data in it. Now, I want to filter the data in the repeater based on the query string parameter provided when calling that page. For example, if I call the page using GetProducts.aspx?category=computers, it will load only computers… so, this will filter the products automatically by the given category.
The example ASPX code of GetProducts.aspx page is: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="GetProducts.aspx.cs" Inherits="WebApplication1.AjaxPages.GetProducts" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <div> <table id="tableProducts"> <asp:Repeater ID="rptProducts" runat="server"> <HeaderTemplate> <tr> <th>Product</th> <th>Price</th> <th>Category</th> </tr> </HeaderTemplate> <ItemTemplate> <tr> <td> <%# Eval("ProductName")%> </td> <td> <%# Eval("Price") %> </td> <td> <%# Eval("Category") %> </td> </tr> </ItemTemplate> </asp:Repeater> </ul> </div> </form> </body> </html> The C# code-behind sample code is: public partial class GetProducts : System.Web.UI.Page { public List<Product> products; protected override void OnInit(EventArgs e) { LoadSampleProductsData(); //load sample data base.OnInit(e); } protected void Page_Load(object sender, EventArgs e) { if (Request.QueryString.Count > 0) { if (!string.IsNullOrEmpty(Request.QueryString["category"])) { string category = Request.QueryString["category"]; //get query string into string variable //filter products sample data by category using LINQ //and add the collection as data source to the repeater rptProducts.DataSource = products.Where(x => x.Category == category); rptProducts.DataBind(); //bind repeater } } } //load sample data method public void LoadSampleProductsData() { products = new List<Product>(); products.Add(new Product() { Category = "computers", Price = 200, ProductName = "Dell PC" }); products.Add(new Product() { Category = "shoes", Price = 90, ProductName = "Nike" }); products.Add(new Product() { Category = "shoes", Price = 66, ProductName = "Adidas" }); products.Add(new Product() { Category = "computers", Price = 210, ProductName = "HP PC" }); products.Add(new Product() { Category = "shoes", Price = 85, ProductName = "Puma" }); } } //sample Product class public class Product { public string ProductName { get; set; } public decimal Price { get; set; } public string Category { get; set; } } Mainly, I just have sample data loading function, Product class and depending of the query string, I am filtering the products list using LINQ Where statement. If we run this page without query string, it will show no data. If we call the page with category query string, it will filter automatically. 
Example: /AjaxPages/GetProducts.aspx?category=shoes The result will be: or if we use category=computers, like this /AjaxPages/GetProducts.aspx?category=computers, the result will be: So, now using jQuery.load() function, we can call this page with provided query string parameter and load appropriate content… The ASPX code in our Default.aspx page, which will call the AjaxPages/GetProducts.aspx page using jQuery.load() function is: <asp:RadioButtonList ID="rblProductCategory" runat="server"> <asp:ListItem Text="Shoes" Value="shoes" Selected="True" /> <asp:ListItem Text="Computers" Value="computers" /> </asp:RadioButtonList> <asp:Button ID="btnLoadProducts" runat="server" Text="Load Products" /> <!-- Here we will load the products, based on the radio button selection--> <div id="products"></div> </form> The jQuery code: $("#<%= btnLoadProducts.ClientID %>").click(function (event) { event.preventDefault(); //preventing button's default behavior var selectedRadioButton = $("#<%= rblProductCategory.ClientID %> input:checked").val(); //call GetProducts.aspx with the category query string for the selected category in radio button list //filter and get only the #tableProducts content inside #products div $("#products").load("AjaxPages/GetProducts.aspx?category=" + selectedRadioButton + " #tableProducts"); }); The end result: You can download the code sample from here. You can read more about jQuery.load() function here. I hope this was useful blog post for you. Please do let me know your feedback. Best Regards, Hajan

    Read the article

  • Why does XFBML work everywhere but in Chrome?

    - by Andrei
    I am trying to add a simple Like button to my Facebook Canvas app (iframe). The button (and all other XFBML elements) works in Safari, Firefox and Opera, but not in Google Chrome. How can I find the problem? EDIT1: This is the ERB layout in my Rails app:

    <html xmlns:fb='http://www.facebook.com/2008/fbml' xmlns='http://www.w3.org/1999/xhtml'>
    ...
    <body>
    ...
    <div id="fb-root"></div>
    <script>
      window.fbAsyncInit = function() {
        FB.init({ appId: '<%= @app_id %>', status: true, cookie: true, xfbml: true });
        FB.XFBML.parse();
      };
      (function() {
        var e = document.createElement('script');
        e.async = true;
        e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js#appId=<%=@app_id%>&amp;xfbml=1';
        document.getElementById('fb-root').appendChild(e);
      }());
      FB.XFBML.parse();
    </script>
    <fb:like></fb:like>
    ...

    JS error messages in the Chrome inspector:

    Uncaught ReferenceError: FB is not defined (anonymous function)
    Uncaught TypeError: Cannot call method 'appendChild' of null window (anonymous function)

    Probably similar to http://forum.developers.facebook.net/viewtopic.php?id=84684

    Read the article

  • Oracle Sun Solaris 11.1 Completes EAL4+ Common Criteria Evaluation

    - by Joshua Brickman-Oracle
    Oracle is pleased to announce that the Oracle Solaris 11.1 operating system has achieved a Common Criteria certification at Evaluation Assurance Level (EAL) 4, augmented by Flaw Remediation, under the Canadian Communications Security Establishment’s (CSEC) Canadian Common Criteria Scheme (CCCS). EAL4 is the highest level achievable for commercial software, and is the highest level mutually recognized by 26 countries under the current Common Criteria Recognition Arrangement (CCRA). Oracle Solaris 11.1 is conformant to the BSI Operating System Protection Profile v2.0 with the following four extended packages: (1) Advanced Management, (2) Extended Identification and Authentication, (3) Labeled Security, and (4) Virtualization. Common Criteria is an international framework (ISO/IEC 15408) which defines a common approach for evaluating the security features and capabilities of Information Technology security products. A certified product is one that a recognized Certification Body asserts as having been evaluated by a qualified, accredited, and independent evaluation laboratory competent in the field of IT security evaluation to the requirements of the Common Criteria and Common Methodology for Information Technology Security Evaluation. Oracle Solaris, the industry’s most widely deployed UNIX™ operating system, delivers mission-critical cloud infrastructure with built-in virtualization, simplified software lifecycle management, cloud-scale data management, and advanced protection for public, private, and hybrid cloud environments. It provides a suite of technologies and applications that create an operating system with optimal performance. Oracle Solaris 11.1 includes key technologies such as Trusted Extensions, the Oracle Solaris Cryptographic Framework, Zones, the ZFS File System, Image Packaging System (IPS), and multiple boot environments. The Oracle Solaris 11.1 Certification Report and Security Target can be viewed on the Communications Security Establishment Canada (CSEC) site and on the Common Criteria Portal. For more information on Oracle’s participation in the Common Criteria program, please visit the main Common Criteria information page here: (http://www.oracle.com/technetwork/topics/security/oracle-common-criteria-095703.html). For a complete list of Oracle products with Common Criteria certifications and FIPS 140-2 validations, please see the Security Evaluations website here: (http://www.oracle.com/technetwork/topics/security/security-evaluations-099357.html).

    Read the article

< Previous Page | 175 176 177 178 179 180 181 182 183 184 185 186  | Next Page >