Search Results

Search found 19966 results on 799 pages for 'wild thing'.


  • WLS MBeans

    - by Jani Rautiainen
    WLS provides a set of Managed Beans (MBeans) to configure, monitor and manage WLS resources. We can use the WLS MBeans to automate some of the tasks related to the configuration and maintenance of the WLS instance. The MBeans can be accessed a number of ways; using various UIs and programmatically using Java or WLST Python scripts.For customization development we can use the features to e.g. manage the deployed customization in MDS, control logging levels, automate deployment of dependent libraries etc. This article is an introduction on how to access and use the WLS MBeans. The goal is to illustrate the various access methods in a single article; the details of the features are left to the linked documentation.This article covers Windows based environment, steps for Linux would be similar however there would be some differences e.g. on how the file paths are defined. MBeansThe WLS MBeans can be categorized to runtime and configuration MBeans.The Runtime MBeans can be used to access the runtime information about the server and its resources. The data from runtime beans is only available while the server is running. The runtime beans can be used to e.g. check the state of the server or deployment.The Configuration MBeans contain information about the configuration of servers and resources. The configuration of the domain is stored in the config.xml file and the configuration MBeans can be used to access and modify the configuration data. For more information on the WLS MBeans refer to: Understanding WebLogic Server MBeans WLS MBean reference Java Management Extensions (JMX)We can use JMX APIs to access the WLS MBeans. This allows us to create Java programs to configure, monitor, and manage WLS resources. In order to use the WLS MBeans we need to add the following library into the class-path: WL_HOME\lib\wljmxclient.jar Connecting to a WLS MBean server The WLS MBeans are contained in a Mbean server, depending on the requirement we can connect to (MBean Server / JNDI Name): Domain Runtime MBean Server weblogic.management.mbeanservers.domainruntime Runtime MBean Server weblogic.management.mbeanservers.runtime Edit MBean Server weblogic.management.mbeanservers.edit To connect to the WLS MBean server first we need to create a map containing the credentials; Hashtable<String, String> param = new Hashtable<String, String>(); param.put(Context.SECURITY_PRINCIPAL, "weblogic");        param.put(Context.SECURITY_CREDENTIALS, "weblogic1");        param.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote"); These define the user, password and package containing the protocol. Next we create the connection: JMXServiceURL serviceURL =     new JMXServiceURL("t3","127.0.0.1",7101,     "/jndi/weblogic.management.mbeanservers.domainruntime"); JMXConnector connector = JMXConnectorFactory.connect(serviceURL, param); MBeanServerConnection connection = connector.getMBeanServerConnection(); With the connection we can now access the MBeans for the WLS instance. For a complete example see Appendix A of this post. For more details refer to Accessing WebLogic Server MBeans with JMX Accessing WLS MBeans The WLS MBeans are structured hierarchically; in order to access content we need to know the path to the MBean we are interested in. The MBean is accessed using “MBeanServerConnection. getAttribute” API.  
WLS provides entry points to the hierarchy allowing us to navigate all the WLS MBeans in the hierarchy (MBean Server / JMX object name): Domain Runtime MBean Server com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean Runtime MBean Servers com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean Edit MBean Server com.bea:Name=EditService,Type=weblogic.management.mbeanservers.edit.EditServiceMBean For example we can access the Domain Runtime MBean using: ObjectName service = new ObjectName( "com.bea:Name=DomainRuntimeService," + "Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean"); Same syntax works for any “child” WLS MBeans e.g. to find out all application deployments we can: ObjectName domainConfig = (ObjectName)connection.getAttribute(service,"DomainConfiguration"); ObjectName[] appDeployments = (ObjectName[])connection.getAttribute(domainConfig,"AppDeployments"); Alternatively we could access the same MBean using the full syntax: ObjectName domainConfig = new ObjectName("com.bea:Location=DefaultDomain,Name=DefaultDomain,Type=Domain"); ObjectName[] appDeployments = (ObjectName[])connection.getAttribute(domainConfig,"AppDeployments"); For more details refer to Accessing WebLogic Server MBeans with JMX Invoking operations on WLS MBeans The WLS MBean operations can be invoked with MBeanServerConnection. invoke API; in the following example we query the state of “AppsLoggerService” application: ObjectName appRuntimeStateRuntime = new ObjectName("com.bea:Name=AppRuntimeStateRuntime,Type=AppRuntimeStateRuntime"); Object[] parameters = { "AppsLoggerService", "DefaultServer" }; String[] signature = { "java.lang.String", "java.lang.String" }; String result = (String)connection.invoke(appRuntimeStateRuntime,"getCurrentState",parameters, signature); The result returned should be "STATE_ACTIVE" assuming the "AppsLoggerService" application is up and running. WebLogic Scripting Tool (WLST) The WebLogic Scripting Tool (WLST) is a command-line scripting environment that we can access the same WLS MBeans. The tool is located under: $MW_HOME\oracle_common\common\bin\wlst.bat Do note that there are several instances of the wlst script under the $MW_HOME, each of them works, however the commands available vary, so we want to use the one under “oracle_common”. The tool is started in offline mode. In offline mode we can access and manipulate the domain configuration. In online mode we can access the runtime information. We connect to the Administration Server : connect("weblogic","weblogic1", "t3://127.0.0.1:7101") In both online and offline modes we can navigate the WLS MBean using commands like "ls" to print content and "cd" to navigate between objects, for example: All the commands available can be obtained with: help('all') For details of the tool refer to WebLogic Scripting Tool and for the commands available WLST Command and Variable Reference. Also do note that the WLST tool can be invoked from Java code in Embedded Mode. Running Scripts The WLST tool allows us to automate tasks using Python scripts in Script Mode. The script can be manually created or recorded by the WLST tool. 
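    As a hedged sketch of what a hand-written script might contain - the connection values are the ones used earlier in this article (127.0.0.1:7101, weblogic/weblogic1), and the MBean tree paths can vary between WLS versions:
        connect("weblogic", "weblogic1", "t3://127.0.0.1:7101")
        domainRuntime()                                          # switch to the domain runtime MBean tree
        cd('AppRuntimeStateRuntime/AppRuntimeStateRuntime')
        # same operation that is invoked through JMX above
        print cmo.getCurrentState('AppsLoggerService', 'DefaultServer')
        disconnect()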
Example commands of recording a script: startRecording("c:/temp/recording.py") <commands that we want to record> stopRecording() We can run the script from WLST: execfile("c:/temp/recording.py") We can also run the script from the command line: C:\apps\Oracle\Middleware\oracle_common\common\bin\wlst.cmd c:/temp/recording.py There are various sample scripts are provided with the WLS instance. UI to Access the WLS MBeans There are various UIs through which we can access the WLS MBeans. Oracle Enterprise Manager Fusion Middleware Control Oracle WebLogic Server Administration Console Fusion Middleware Control MBean Browser In the integrated JDeveloper environment only the Oracle WebLogic Server Administration Console is available to us. For more information refer to the documentation, one noteworthy feature in the console is the ability to record WLST scripts based on the navigation. In addition to the UIs above the JConsole included in the JDK can be used to access the WLS MBeans. The JConsole needs to be started with specific parameter to force WLS objects to be used and jar files in the classpath: "C:\apps\Oracle\Middleware\jdk160_24\bin\jconsole" -J-Djava.class.path=C:\apps\Oracle\Middleware\jdk160_24\lib\jconsole.jar;C:\apps\Oracle\Middleware\jdk160_24\lib\tools.jar;C:\apps\Oracle\Middleware\wlserver_10.3\server\lib\wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote For more details refer to the Accessing Custom MBeans from JConsole. Summary In this article we have covered various ways we can access and use the WLS MBeans in context of integrated WLS in JDeveloper to be used for Fusion Application customization development. References Developing Custom Management Utilities With JMX for Oracle WebLogic Server Accessing WebLogic Server MBeans with JMX WebLogic Server MBean Reference WebLogic Scripting Tool WLST Command and Variable Reference Appendix A package oracle.apps.test; import java.io.IOException;import java.net.MalformedURLException;import java.util.Hashtable;import javax.management.MBeanServerConnection;import javax.management.MalformedObjectNameException;import javax.management.ObjectName;import javax.management.remote.JMXConnector;import javax.management.remote.JMXConnectorFactory;import javax.management.remote.JMXServiceURL;import javax.naming.Context;/** * This class contains simple examples on how to access WLS MBeans using JMX. */public class BlogExample {    /**     * Connection to the WLS MBeans     */    private MBeanServerConnection connection;    /**     * Constructor that takes in the connection information for the      * domain and obtains the resources from WLS MBeans using JMX.     * @param hostName host name to connect to for the WLS server     * @param port port to connect to for the WLS server     * @param userName user name to connect to for the WLS server     * @param password password to connect to for the WLS server     */    public BlogExample(String hostName, String port, String userName,                       String password) {        super();        try {            initConnection(hostName, port, userName, password);        } catch (Exception e) {            throw new RuntimeException("Unable to connect to the domain " +                                       hostName + ":" + port);        }    }    /**     * Default constructor.     * Tries to create connection with default values. Runtime exception will be     * thrown if the default values are not used in the local instance.     
*/    public BlogExample() {        this("127.0.0.1", "7101", "weblogic", "weblogic1");    }    /**     * Initializes the JMX connection to the WLS Beans     * @param hostName host name to connect to for the WLS server     * @param port port to connect to for the WLS server     * @param userName user name to connect to for the WLS server     * @param password password to connect to for the WLS server     * @throws IOException error connecting to the WLS MBeans     * @throws MalformedURLException error connecting to the WLS MBeans     * @throws MalformedObjectNameException error connecting to the WLS MBeans     */    private void initConnection(String hostName, String port, String userName,                                String password)                                 throws IOException, MalformedURLException,                                        MalformedObjectNameException {        String protocol = "t3";        String jndiroot = "/jndi/";        String mserver = "weblogic.management.mbeanservers.domainruntime";        JMXServiceURL serviceURL =            new JMXServiceURL(protocol, hostName, Integer.valueOf(port),                              jndiroot + mserver);        Hashtable<String, String> h = new Hashtable<String, String>();        h.put(Context.SECURITY_PRINCIPAL, userName);        h.put(Context.SECURITY_CREDENTIALS, password);        h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,              "weblogic.management.remote");        JMXConnector connector = JMXConnectorFactory.connect(serviceURL, h);        connection = connector.getMBeanServerConnection();    }    /**     * Main method used to invoke the logic for testing     * @param args arguments passed to the program     */    public static void main(String[] args) {        BlogExample blogExample = new BlogExample();        blogExample.testEntryPoint();        blogExample.testDirectAccess();        blogExample.testInvokeOperation();    }    /**     * Example of using an entry point to navigate the WLS MBean hierarchy.     */    public void testEntryPoint() {        try {            System.out.println("testEntryPoint");            ObjectName service =             new ObjectName("com.bea:Name=DomainRuntimeService,Type=" +"weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");            ObjectName domainConfig =                (ObjectName)connection.getAttribute(service,                                                    "DomainConfiguration");            ObjectName[] appDeployments =                (ObjectName[])connection.getAttribute(domainConfig,                                                      "AppDeployments");            for (ObjectName appDeployment : appDeployments) {                String resourceIdentifier =                    (String)connection.getAttribute(appDeployment,                                                    "SourcePath");                System.out.println(resourceIdentifier);            }        } catch (Exception e) {            throw new RuntimeException(e);        }    }    /**     * Example of accessing WLS MBean directly with a full reference.     * This does the same thing as testEntryPoint in slightly difference way.     
*/    public void testDirectAccess() {        try {            System.out.println("testDirectAccess");            ObjectName appDeployment =                new ObjectName("com.bea:Location=DefaultDomain,"+                               "Name=AppsLoggerService,Type=AppDeployment");            String resourceIdentifier =                (String)connection.getAttribute(appDeployment, "SourcePath");            System.out.println(resourceIdentifier);        } catch (Exception e) {            throw new RuntimeException(e);        }    }    /**     * Example of invoking operation on a WLS MBean.     */    public void testInvokeOperation() {        try {            System.out.println("testInvokeOperation");            ObjectName appRuntimeStateRuntime =                new ObjectName("com.bea:Name=AppRuntimeStateRuntime,"+                               "Type=AppRuntimeStateRuntime");            String identifier = "AppsLoggerService";            String serverName = "DefaultServer";            Object[] parameters = { identifier, serverName };            String[] signature = { "java.lang.String", "java.lang.String" };            String result =                (String)connection.invoke(appRuntimeStateRuntime, "getCurrentState",                                          parameters, signature);            System.out.println("State of " + identifier + " = " + result);        } catch (Exception e) {            throw new RuntimeException(e);        }    }}
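    Not covered in the article, but as a hedged sketch of how the Appendix A class might be compiled and run standalone on Windows - the WL_HOME value and jar location follow the jconsole example above and should be adjusted for your install:
        set WL_HOME=C:\apps\Oracle\Middleware\wlserver_10.3
        javac -cp %WL_HOME%\server\lib\wljmxclient.jar oracle\apps\test\BlogExample.java
        java -cp %WL_HOME%\server\lib\wljmxclient.jar;. oracle.apps.test.BlogExample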

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
    A few days ago I had a curious thought: With all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems and so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each other just for raw throughput of the request getting to the endpoint and back out to the client with as little processing in the actual endpoint logic as possible (aka Hello World!). I want to clarify that this is merely an informal test for my own curiosity and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads, so that often it's not that critical to test load performance. This post is not meant to make a point or even come to a conclusion about which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub I looked at this data for these technologies: ASP.NET Web API, ASP.NET MVC, WebForms, ASP.NET WebPages, ASMX AJAX Services (couldn't get AJAX/JSON to run on IIS8), WCF REST, and raw ASP.NET HttpHandlers. It's quite a mixed bag, of course, and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is more to satisfy my curiosity than anything and I thought it interesting enough to discuss on the blog :-) First test: Raw Throughput The first thing I did is test raw throughput for the various technologies. This is the least practical test of course since you're unlikely to ever create the equivalent of a 'Hello World' request in a real life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this request I create the simplest Hello World request that I could come up with for each tech. Http Handler The first is the lowest level approach which is an HTTP handler. public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public bool IsReusable { get { return true; } } } WebForms Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simply does this in CodeBehind without any markup in the ASPX page: public partial class HelloWorld_CodeBehind : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("Hello World. Time is: " + DateTime.Now.ToString() ); Response.End(); } } while the Markup page only contains some static output via an expression: <%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %> Hello World. Time is <%= DateTime.Now %> ASP.NET WebPages WebPages is the freestanding Razor implementation of ASP.NET. 
    Here's the simple HelloWorld.cshtml page: Hello World @DateTime.Now WCF REST WCF REST was the token REST implementation for ASP.NET before WebAPI and the in-between step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class: [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } } WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client. ASP.NET AJAX (ASMX Services) I also wanted to test ASP.NET AJAX services because prior to WebAPI this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012 removed adding of ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So, I didn't test this (but the code is there - you might be able to test this on a Windows 7 box). ASP.NET MVC Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller: public class MvcPerformanceController : Controller { public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } } ASP.NET WebAPI Next up is WebAPI which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response: public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } } Testing Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much. The testing I did is pretty informal since it was mainly to satisfy my curiosity - here's how I did this: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line looks like this: ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld which hits the specified URL 100,000 times with a load factor of 20 concurrent requests. This results in output like this: It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of variance. You might also want to do an IISRESET to clear the Web Server. 
    Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers and provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple. I ran each test with: 100,000 iterations, a load factor of 20 concurrent connections, an IISRESET before starting, and a short warm up run for API and MVC to make sure startup cost is mitigated. Here is the batch file I used for the test: IISRESET REM make sure you add REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin REM to your path so ab.exe can be found REM Warm up ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson ab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt I ran each of these tests 3 times and took the average score for Requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests but the values used here are the medians. Part of this has to do with the fact I ran the tests on my local machine - results would probably be more consistent running the load test on a separate machine hitting across the network. I ran these tests locally on my laptop which is a Dell XPS with a quad-core Sandy Bridge i7-2720QM @ 2.20GHz and a fast SSD drive on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you can try running these tests on a separate machine hitting the local machine. If I remember correctly IIS 7 and 8 on client OSs don't throttle, so the performance here should be representative. Results Ok, let's cut straight to the chase. Below are the results from the tests… It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower and the slowest by far is WCF REST (which again I find surprising). As mentioned at the start the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performance of the data engines. All it really does is give you an idea of the raw throughput for the technology from time of request to reaching the endpoint and returning minimal text data back to the client, which indicates full round trip performance. But it's still interesting to see that Web Forms performs better in throughput than either MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on them, but that's beyond the scope of this throughput test. But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling. 
    Even the slowest tech managed 5700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24 hour period. Remember these are not static pages, but dynamic requests that are being served. Another test - JSON Data Service Results For the second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since it doesn't make a ton of sense to return data from them (OTOH, returning text from the APIs didn't make a ton of sense either :-) In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this: public class Person { public Person() { Id = 10; Name = "Rick"; Entered = DateTime.Now; } public int Id { get; set; } public string Name { get; set; } public DateTime Entered { get; set; } } Here are the updated handler classes that use Person: Handler public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { var action = context.Request.QueryString["action"]; if (action == "json") JsonRequest(context); else TextRequest(context); } public void TextRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public void JsonRequest(HttpContext context) { var json = JsonConvert.SerializeObject(new Person(), Formatting.None); context.Response.ContentType = "application/json"; context.Response.Write(json); } public bool IsReusable { get { return true; } } } This code adds a little logic to check for an action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response. WCF REST [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } [OperationContract] [WebGet(ResponseFormat=WebMessageFormat.Json,BodyStyle=WebMessageBodyStyle.WrappedRequest)] public Person HelloWorldJson() { // Add your operation implementation here return new Person(); } } For WCF REST all I have to do is add a method with the Person result type. ASP.NET MVC public class MvcPerformanceController : Controller { // // GET: /MvcPerformance/ public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } public JsonResult HelloWorldJson() { return Json(new Person(), JsonRequestBehavior.AllowGet); } } For MVC all I have to do for a JSON response is return a JSON result. ASP.NET internally uses JavaScriptSerializer. ASP.NET WebAPI public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. 
Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } [HttpGet] public Person HelloWorldJson() { return new Person(); } [HttpGet] public HttpResponseMessage HelloWorldJson2() { var response = new HttpResponseMessage(HttpStatusCode.OK); response.Content = new ObjectContent<Person>(new Person(), GlobalConfiguration.Configuration.Formatters.JsonFormatter); return response; } } Testing and Results To run these data requests I used the following ab.exe commands:REM JSON RESPONSES ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt The results from this test run are a bit interesting in that the WebAPI test improved performance significantly over returning plain string content. Here are the results:   The performance for each technology drops a little bit except for WebAPI which is up quite a bit! From this test it appears that WebAPI is actually significantly better performing returning a JSON response, rather than a plain string response. Snag with Apache Benchmark and 'Length Failures' I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the graph shows performance improved significantly from with JSON results from 5580 to 6530 or so which is a 15% improvement (while all others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed a bunch of errors on about 10% of the requests. Check out this report: Notice the Failed Request count. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON? Turns out: No it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, and so to make sure I re-ran the test with Fiddler attached and runiisning the ab.exe test by using the -X switch: ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson which showed that indeed all requests where returning proper HTTP 200 results with full content. However ab.exe was reporting the errors. After some closer inspection it turned out that the dates varying in size altered the response length in dynamic output. For example: these two results: {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"} {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"} are different in length for the number which results in 68 and 69 bytes respectively. The same URL produces different result lengths which is what ab.exe reports. I didn't notice at first bit the same is happening when running the ASHX handler with JSON.NET result since it uses the same serializer that varies the milliseconds. Moral: You can typically ignore Length failures in Apache Benchmark and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though. Another interesting Side Note: Perf drops over Time As I was running these tests repeatedly I was finding that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6500 req/sec and in subsequent runs it keeps dropping until it would stabalize somewhere around 5900 req/sec occasionally jumping lower. 
    For these tests this is why I did the IISRESET and warm up for the individual tests. This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. On subsequent runs everything stays stable, but the performance starts going downwards. This applies to all the technologies - Handlers, Web Forms, MVC, Web API - curious to see if others test this and see similar results. Doing an IISRESET then resets everything and performance starts off at peak again… Summary As I stated at the outset, these were informal tests to satisfy my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST, which was by far the slowest, and the raw handler, which was by far the fastest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance; it's also about suitability for the job and ease of implementation. The strengths of each technology will more than make up for the minor performance differences we see in these tests. However, to me it's important to get an occasional reality check and compare where new technologies are heading. Oftentimes old stuff that's been optimized and designed for a time of less horsepower can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0, which is great. To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my Tweets late last week regarding WebAPI's slow responses: that was with string content, which is in fact considerably slower. Luckily, where it counts, with serialized JSON and XML, WebAPI actually performs better. But I do wonder what would make generic string content slower than serialized output? This stresses another point: Don't take a single test as the final gospel and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediately that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time… I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff… Resources Source Code on GitHub Apache HTTP Server Project (ab.exe is part of the binary distribution) © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, Web API
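    Not part of the original post, but since each run above redirects ab.exe output to a .txt file, a small hedged C# sketch like the following can pull the "Requests per second" line out of those files to make comparing the repeated runs per technology quicker (file names follow the batch file shown earlier):
        using System;
        using System.IO;
        using System.Linq;

        class AbSummary
        {
            // Usage: AbSummary.exe handler.txt Mvc.txt WebApi.txt ...
            static void Main(string[] args)
            {
                foreach (var file in args)
                {
                    // ab.exe writes a line like: "Requests per second:    5731.21 [#/sec] (mean)"
                    var line = File.ReadLines(file)
                                   .FirstOrDefault(l => l.StartsWith("Requests per second"));
                    Console.WriteLine("{0,-25} {1}", Path.GetFileName(file), line ?? "no result found");
                }
            }
        }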

    Read the article

  • Developing Spring Portlet for use inside Weblogic Portal / Webcenter Portal

    - by Murali Veligeti
    We need to understand the main difference between portlet workflow and servlet workflow.The main difference between portlet workflow and servlet workflow is that, the request to the portlet can have two distinct phases: 1) Action phase 2) Render phase. The Action phase is executed only once and is where any 'backend' changes or actions occur, such as making changes in a database. The Render phase then produces what is displayed to the user each time the display is refreshed. The critical point here is that for a single overall request, the action phase is executed only once, but the render phase may be executed multiple times. This provides a clean separation between the activities that modify the persistent state of your system and the activities that generate what is displayed to the user.The dual phases of portlet requests are one of the real strengths of the JSR-168 specification. For example, dynamic search results can be updated routinely on the display without the user explicitly re-running the search. Most other portlet MVC frameworks attempt to completely hide the two phases from the developer and make it look as much like traditional servlet development as possible - we think this approach removes one of the main benefits of using portlets. So, the separation of the two phases is preserved throughout the Spring Portlet MVC framework. The primary manifestation of this approach is that where the servlet version of the MVC classes will have one method that deals with the request, the portlet version of the MVC classes will have two methods that deal with the request: one for the action phase and one for the render phase. For example, where the servlet version of AbstractController has the handleRequestInternal(..) method, the portlet version of AbstractController has handleActionRequestInternal(..) and handleRenderRequestInternal(..) methods.The Spring Portlet Framework is designed around a DispatcherPortlet that dispatches requests to handlers, with configurable handler mappings and view resolution, just as the DispatcherServlet in the Spring Web Framework does.  Developing portlet.xml Let's start the sample development by creating the portlet.xml file in the /WebContent/WEB-INF/ folder as shown below: <?xml version="1.0" encoding="UTF-8"?> <portlet-app version="2.0" xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <portlet> <portlet-name>SpringPortletName</portlet-name> <portlet-class>org.springframework.web.portlet.DispatcherPortlet</portlet-class> <supports> <mime-type>text/html</mime-type> <portlet-mode>view</portlet-mode> </supports> <portlet-info> <title>SpringPortlet</title> </portlet-info> </portlet> </portlet-app> DispatcherPortlet is responsible for handling every client request. When it receives a request, it finds out which Controller class should be used for handling this request, and then it calls its handleActionRequest() or handleRenderRequest() method based on the request processing phase. The Controller class executes business logic and returns a View name that should be used for rendering markup to the user. The DispatcherPortlet then forwards control to that View for actual markup generation. As you can see, DispatcherPortlet is the central dispatcher for use within Spring Portlet MVC Framework. Note that your portlet application can define more than one DispatcherPortlet. 
If it does so, then each of these portlets operates its own namespace, loading its application context and handler mapping. The DispatcherPortlet is also responsible for loading application context (Spring configuration file) for this portlet. First, it tries to check the value of the configLocation portlet initialization parameter. If that parameter is not specified, it takes the portlet name (that is, the value of the <portlet-name> element), appends "-portlet.xml" to it, and tries to load that file from the /WEB-INF folder. In the portlet.xml file, we did not specify the configLocation initialization parameter, so let's create SpringPortletName-portlet.xml file in the next section. Developing SpringPortletName-portlet.xml Create the SpringPortletName-portlet.xml file in the /WebContent/WEB-INF folder of your application as shown below: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd"> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/> <property name="prefix" value="/jsp/"/> <property name="suffix" value=".jsp"/> </bean> <bean id="pointManager" class="com.wlp.spring.bo.internal.PointManagerImpl"> <property name="users"> <list> <ref bean="point1"/> <ref bean="point2"/> <ref bean="point3"/> <ref bean="point4"/> </list> </property> </bean> <bean id="point1" class="com.wlp.spring.bean.User"> <property name="name" value="Murali"/> <property name="points" value="6"/> </bean> <bean id="point2" class="com.wlp.spring.bean.User"> <property name="name" value="Sai"/> <property name="points" value="13"/> </bean> <bean id="point3" class="com.wlp.spring.bean.User"> <property name="name" value="Rama"/> <property name="points" value="43"/> </bean> <bean id="point4" class="com.wlp.spring.bean.User"> <property name="name" value="Krishna"/> <property name="points" value="23"/> </bean> <bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource"> <property name="basename" value="messages"/> </bean> <bean name="/users.htm" id="userController" class="com.wlp.spring.controller.UserController"> <property name="pointManager" ref="pointManager"/> </bean> <bean name="/pointincrease.htm" id="pointIncreaseController" class="com.wlp.spring.controller.IncreasePointsFormController"> <property name="sessionForm" value="true"/> <property name="pointManager" ref="pointManager"/> <property name="commandName" value="pointIncrease"/> <property name="commandClass" value="com.wlp.spring.bean.PointIncrease"/> <property name="formView" value="pointincrease"/> <property name="successView" value="users"/> </bean> <bean id="parameterMappingInterceptor" class="org.springframework.web.portlet.handler.ParameterMappingInterceptor" /> <bean id="portletModeParameterHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeParameterHandlerMapping"> <property name="order" value="1" /> <property name="interceptors"> <list> <ref bean="parameterMappingInterceptor" /> </list> </property> <property name="portletModeParameterMap"> <map> <entry key="view"> <map> <entry key="pointincrease"> <ref bean="pointIncreaseController" /> </entry> <entry key="users"> <ref bean="userController" /> </entry> </map> </entry> </map> </property> </bean> 
<bean id="portletModeHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeHandlerMapping"> <property name="order" value="2" /> <property name="portletModeMap"> <map> <entry key="view"> <ref bean="userController" /> </entry> </map> </property> </bean> </beans> The SpringPortletName-portlet.xml file is an application context file for your MVC portlet. It has a couple of bean definitions: viewController. At this point, remember that the viewController bean definition points to the com.ibm.developerworks.springmvc.ViewController.java class. portletModeHandlerMapping. As we discussed in the last section, whenever DispatcherPortlet gets a client request, it tries to find a suitable Controller class for handling that request. That is where PortletModeHandlerMapping comes into the picture. The PortletModeHandlerMapping class is a simple implementation of the HandlerMapping interface and is used by DispatcherPortlet to find a suitable Controller for every request. The PortletModeHandlerMapping class uses Portlet mode for the current request to find a suitable Controller class to use for handling the request. The portletModeMap property of portletModeHandlerMapping bean is the place where we map the Portlet mode name against the Controller class. In the sample code, we show that viewController is responsible for handling View mode requests. Developing UserController.java In the preceding section, you learned that the viewController bean is responsible for handling all the View mode requests. Your next step is to create the UserController.java class as shown below: public class UserController extends AbstractController { private PointManager pointManager; public void handleActionRequest(ActionRequest request, ActionResponse response) throws Exception { } public ModelAndView handleRenderRequest(RenderRequest request, RenderResponse response) throws ServletException, IOException { String now = (new java.util.Date()).toString(); Map<String, Object> myModel = new HashMap<String, Object>(); myModel.put("now", now); myModel.put("users", this.pointManager.getUsers()); return new ModelAndView("users", "model", myModel); } public void setPointManager(PointManager pointManager) { this.pointManager = pointManager; } } Every controller class in Spring Portlet MVC Framework must implement the org.springframework.web. portlet.mvc.Controller interface directly or indirectly. To make things easier, Spring Framework provides AbstractController class, which is the default implementation of the Controller interface. As a developer, you should always extend your controller from either AbstractController or one of its more specific subclasses. Any implementation of the Controller class should be reusable, thread-safe, and capable of handling multiple requests throughout the lifecycle of the portlet. In the sample code, we create the ViewController class by extending it from AbstractController. Because we don't want to do any action processing in the HelloSpringPortletMVC portlet, we override only the handleRenderRequest() method of AbstractController. Now, the only thing that HelloWorldPortletMVC should do is render the markup of View.jsp to the user when it receives a user request to do so. To do that, return the object of ModelAndView with a value of view equal to View. Developing web.xml According to Portlet Specification 1.0, every portlet application is also a Servlet Specification 2.3-compliant Web application, and it needs a Web application deployment descriptor (that is, web.xml). 
Let’s create the web.xml file in the /WEB-INF/ folder as shown in listing 4. Follow these steps: Open the existing web.xml file located at /WebContent/WEB-INF/web.xml. Replace the contents of this file with the code as shown below: <servlet> <servlet-name>ViewRendererServlet</servlet-name> <servlet-class>org.springframework.web.servlet.ViewRendererServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>ViewRendererServlet</servlet-name> <url-pattern>/WEB-INF/servlet/view</url-pattern> </servlet-mapping> <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/applicationContext.xml</param-value> </context-param> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> The web.xml file for the sample portlet declares two things: ViewRendererServlet. The ViewRendererServlet is the bridge servlet for portlet support. During the render phase, DispatcherPortlet wraps PortletRequest into ServletRequest and forwards control to ViewRendererServlet for actual rendering. This process allows Spring Portlet MVC Framework to use the same View infrastructure as that of its servlet version, that is, Spring Web MVC Framework. ContextLoaderListener. The ContextLoaderListener class takes care of loading Web application context at the time of the Web application startup. The Web application context is shared by all the portlets in the portlet application. In case of duplicate bean definition, the bean definition in the portlet application context takes precedence over the Web application context. The ContextLoader class tries to read the value of the contextConfigLocation Web context parameter to find out the location of the context file. If the contextConfigLocation parameter is not set, then it uses the default value, which is /WEB-INF/applicationContext.xml, to load the context file. The Portlet Controller interface requires two methods that handle the two phases of a portlet request: the action request and the render request. The action phase should be capable of handling an action request and the render phase should be capable of handling a render request and returning an appropriate model and view. While the Controller interface is quite abstract, Spring Portlet MVC offers a lot of controllers that already contain a lot of the functionality you might need – most of these are very similar to controllers from Spring Web MVC. The Controller interface just defines the most common functionality required of every controller - handling an action request, handling a render request, and returning a model and a view. How rendering works As you know, when the user tries to access a page with PointSystemPortletMVC portlet on it or when the user performs some action on any other portlet on that page or tries to refresh that page, a render request is sent to the PointSystemPortletMVC portlet. In the sample code, because DispatcherPortlet is the main portlet class, Weblogic Portal / Webcenter Portal calls its render() method and then the following sequence of events occurs: The render() method of DispatcherPortlet calls the doDispatch() method, which in turn calls the doRender() method. After the doRenderService() method gets control, first it tries to find out the locale of the request by calling the PortletRequest.getLocale() method. This locale is used while making all the locale-related decisions for choices such as which resource bundle should be loaded or which JSP should be displayed to the user based on the locale. 
After that, the doRenderService() method starts iterating through all the HandlerMapping classes configured for this portlet, calling their getHandler() method to identify the appropriate Controller for handling this request. In the sample code, we have configured only PortletModeHandlerMapping as a HandlerMapping class. The PortletModeHandlerMapping class reads the value of the current portlet mode, and based on that, it finds out, the Controller class that should be used to handle this request. In the sample code, ViewController is configured to handle the View mode request so that the PortletModeHandlerMapping class returns the object of ViewController. After the object of ViewController is returned, the doRenderService() method calls its handleRenderRequestInternal() method. Implementation of the handleRenderRequestInternal() method in ViewController.java is very simple. It logs a message saying that it got control, and then it creates an instance of ModelAndView with a value equal to View and returns it to DispatcherPortlet. After control returns to doRenderService(), the next task is to figure out how to render View. For that, DispatcherPortlet starts iterating through all the ViewResolvers configured in your portlet application, calling their resolveViewName() method. In the sample code we have configured only one ViewResolver, InternalResourceViewResolver. When its resolveViewName() method is called with viewName, it tries to add /WEB-INF/jsp as a prefix to the view name and to add JSP as a suffix. And it checks if /WEB-INF/jsp/View.jsp exists. If it does exist, it returns the object of JstlView wrapping View.jsp. After control is returned to the doRenderService() method, it creates the object PortletRequestDispatcher, which points to /WEB-INF/servlet/view – that is, ViewRendererServlet. Then it sets the object of JstlView in the request and dispatches the request to ViewRendererServlet. After ViewRendererServlet gets control, it reads the JstlView object from the request attribute and creates another RequestDispatcher pointing to the /WEB-INF/jsp/View.jsp URL and passes control to it for actual markup generation. The markup generated by View.jsp is returned to user. At this point, you may question the need for ViewRendererServlet. Why can't DispatcherPortlet directly forward control to View.jsp? Adding ViewRendererServlet in between allows Spring Portlet MVC Framework to reuse the existing View infrastructure. You may appreciate this more when we discuss how easy it is to integrate Apache Tiles Framework with your Spring Portlet MVC Framework. The attached project SpringPortlet.zip should be used to import the project in to your OEPE Workspace. SpringPortlet_Jars.zip contains jar files required for the application. Project is written on Spring 2.5.  The same JSR 168 portlet should work on Webcenter Portal as well.  Downloads: Download WeblogicPotal Project which consists of Spring Portlet. Download Spring Jars In-addition to above you need to download Spring.jar (Spring2.5)
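    The controllers above return the view names users and pointincrease, which InternalResourceViewResolver maps to JSPs under /jsp/, but the article does not show those pages. As a minimal hedged sketch, a users.jsp rendering the model returned by UserController.handleRenderRequest() might look like this (JSTL assumed, markup purely illustrative):
        <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
        <p>Generated at: <c:out value="${model.now}"/></p>
        <ul>
          <c:forEach items="${model.users}" var="user">
            <li><c:out value="${user.name}"/> - <c:out value="${user.points}"/> points</li>
          </c:forEach>
        </ul>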

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
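    As an aside (not part of the original test harness), the grant and actual usage for the running query can also be watched directly from a second session; a minimal sketch using the memory grants DMV:
        SELECT  session_id, requested_memory_kb, granted_memory_kb,
                used_memory_kb, max_used_memory_kb
        FROM    sys.dm_exec_query_memory_grants
        WHERE   session_id <> @@SPID ;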
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
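(As a brief aside, the 'text in row' option mentioned above is controlled through sp_tableoption in much the same way as the option we are about to use for MAX data. A minimal sketch, assuming we wanted to keep small TEXT values on the data page for this post's TestTEXT table; the numeric value is the maximum number of bytes to keep in-row, and 'ON' would use the default 256-byte limit:

-- Hypothetical: store TEXT values up to 1000 bytes in-row for dbo.TestTEXT
EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestTEXT', @OptionName = 'text in row', @OptionValue = '1000' ;

None of the tests below rely on this setting; it is shown only for completeness.)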
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; MAXOOR Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn't spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting 'large value types out of row'.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row; values only move when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.
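If you want to check whether oversized grants are actually causing other requests to queue on a busy server, the resource semaphore DMVs and the wait statistics give a quick picture. This is only a sketch using the standard SQL Server 2008 DMV names, not part of the original test scripts:

-- How much workspace memory is granted, and how many requests are waiting for it
SELECT RS.resource_semaphore_id, RS.target_memory_kb, RS.available_memory_kb, RS.grantee_count, RS.waiter_count
FROM sys.dm_exec_query_resource_semaphores AS RS ;

-- Cumulative waits for memory grants since the last restart or stats clear
SELECT WS.wait_type, WS.waiting_tasks_count, WS.wait_time_ms
FROM sys.dm_os_wait_stats AS WS
WHERE WS.wait_type = N'RESOURCE_SEMAPHORE' ;

A non-zero waiter_count, or steadily climbing RESOURCE_SEMAPHORE wait time, is a sign that excessive grants are a real bottleneck rather than a theoretical one.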
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • The Incremental Architect's Napkin – #3 – Make Evolvability inevitable

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspx The easier something is to measure, the more likely it is to be produced. Deviations between what is and what should be can be readily detected. That's what automated acceptance tests are for. That's what sprint reviews in Scrum are for. It's no small wonder our software looks like it looks. It has all the traits whose conformance with requirements can easily be measured. And it's lacking traits which cannot easily be measured. Evolvability (or Changeability) is such a trait. If an operation is correct, if an operation is fast enough, that can be checked very easily. But whether Evolvability is high or low, that cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function or Cyclomatic Complexity or test coverage. But there is no threshold value signalling "evolvability too low"; also Evolvability is hardly tangible for the customer. Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it's needed like any other requirement. Or even more. Because without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built. Such fundamental importance is in stark contrast with its immeasurability. To compensate for this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves. Since we cannot measure Evolvability, though, we cannot simply start watching it more. Instead we need to establish practices to keep it high (enough) at all times. Chefs have known that for a long time. That's why everybody in a restaurant kitchen is constantly seeing to cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then can the health of the patrons be guaranteed and production efficiency kept constantly high. Still, a kitchen's level of cleanliness is easier to measure than software Evolvability. That's why important practices like reviews, pair programming, or TDD are not enough, I guess. What we need to keep Evolvability in focus and high is… to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum's sprints of 4, 2, even 1 week are too long. Kanban's flow of user stories across the board is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance checked by the customer (or his/her representative, e.g. a Product Owner). For me there are several reasons for such a fixed and short cycle time for each increment: Clear expectations Absolute estimates ("This will take X days to complete.") are near impossible in software development as explained previously. Too much unplanned research and engineering work lurks in every feature. And then there are pervasive interruptions of work by peers and management. However, the smaller the scope, the better our absolute estimates become. That's because we understand better what the requirements really are and what the solution should look like.
But maybe more importantly, the shorter the timespan, the more we can control how we use our time. So much can happen over the course of a week or longer. But if push comes to shove I can block out all distractions and interruptions for a day or possibly two. That's why I believe we can give rough absolute estimates on 3 levels: Noon Tonight Tomorrow Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you how long it will take you to implement a user story or bug fix, you can say, "It'll be fixed by noon.", or you can say, "I can manage to implement it until tonight before I leave.", or you can say, "You'll get it by tomorrow night at the latest." Yes, I believe all else would be naive. If you're not confident you can get something done by tomorrow night (some 34h from now) you just cannot reliably commit to any timeframe. That means you should not promise anything, you should not even start working on the issue. So when estimating use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories. If you like absolute estimates, here you go. But don't do deep estimates. Don't estimate dozens of issues; don't think ahead ("Issue A is a Tonight, then B will be a Tomorrow, after that it's C as a Noon, finally D is a Tonight - that's what I'll do this week."). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues. To be blunt: Yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow. Trust through reliability Our trade is lacking trust. Customers don't trust software companies/departments much. Managers don't trust developers much. I find that perfectly understandable in the light of what we're trying to accomplish: delivering software in the face of uncertainty by the means of material goods production. Customers as well as managers still expect software development to be close to the production of houses or cars. But that's a fundamental misunderstanding. Software development is development. It's basically research. As software developers we're constantly executing experiments to find out what really provides value to users. We don't know what they need, we just have mediated hypotheses. That's why we cannot reliably deliver on preposterous demands. So trust is out of the window in no time. If we switch to delivering in short cycles, though, we can regain trust. Because estimates - explicit or implicit - of up to 32 hours at most can be satisfied. I'd say: reliability over scope. It's more important to reliably deliver what was promised than to cover a lot of requirement area. So when in doubt promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always. Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don't wait for some Kanban board to show you that flow can be improved by scheduling smaller stories. You don't need to learn that the hard way. Just start with small batches of three different sizes. Fast feedback What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end?
Why let the mental model of the issue and its solution dissipate? If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly you probably are not in the mood anymore to go back to something you deemed done a long time ago. It's boring, it's frustrating to open up that mental box again. Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises. Checking finished issues for acceptance is the most important task of a Product Owner. It's even more important than planning new issues. Because as long as work started is not released (accepted) it's potential waste. So before starting new work, better make sure work already done has value. By putting the emphasis on acceptance rather than planning, true pull is established. As long as planning and starting work is more important, it's a push process. Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow. After acceptance the developer(s) can start working on the next issue. Flexibility As if reliability/trust and fast feedback for less waste weren't enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it's neither D nor E, but G. And after G it's D. With Spinning, priorities can be changed every 32 hours at the latest. And nothing is lost. Because what got accepted is of value. It provides an incremental value to the customer/user. Or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty. I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It's unnecessary at best, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you're able to change your course at any time. Premature completion Customers/management are used to premeditating budgets. They want to know exactly how much to pay for a certain amount of requirements. That's understandable. But it does not match the nature of software development. We should know that by now. Maybe there's somewhere in the world some team who can consistently deliver on scope, quality, time, and budget. Great! Congratulations! I, however, haven't seen such a team yet. Which does not mean it's impossible, but I think it's nothing I can recommend to strive for. Rather I'd say: Don't try this at home. It might hurt you one way or the other. However, what we can do is allow customers/management to stop work on features at any moment.
With Spinning, every 32 hours a feature can be declared as finished - even though it might not be completed according to its initial definition. I think progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn't it more important to constantly move forward? Step by step. We're not running sprints, we're not running marathons, not even ultra-marathons. We're in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases). Whoever can only think in terms of completed requirements shuts out the chance for saving money. The requirements for a feature are mostly uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4–32 hour increment the Product Owner can do an experiment (or invite users to an experiment) to see if a particular trait of the software system is already good enough. And if so, she can switch her attention to a different aspect. In the end, requirements A, B, C then could be finished just 70%, 80%, and 50%. What the heck? It's good enough - for now. 33% money saved. Wouldn't that be splendid? Isn't that a stunning argument for any budget-sensitive customer? You can save money and still get what you need? Pull on practices So far, in addition to more trust, more flexibility, less money spent, Spinning led to "doing less", which also means less code, which of course means higher Evolvability per se. Last but not least, though, I think Spinning's short acceptance cycles have one more effect. They exert pull-power on all sorts of practices known for increasing Evolvability. If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn't 90% of the developer community practicing automated tests consistently? I think the answer is simple: Because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger. With Spinning that's different. If you're practicing Spinning you cannot avoid all those practices. With Spinning you very quickly realize you cannot deliver reliably even on your 32 hour promises. Spinning thus pulls on developers to adopt principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do that. Because first the Product Owner and then management will notice an increasing difficulty in delivering value within 32 hours.
There, finally, emerges a way to measure Evolvability: The more frequently developers tell the Product Owner there is no way to deliver anything worthy of feedback until tomorrow night, the poorer Evolvability is. Don't count the "WTF!", count the "No way!" utterances. In closing For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development but be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development "under pressure". Software needs to be changed more often, in smaller increments. Each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment. It's sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales. Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work on what's there, not what might be possible in the future. But I digress… In my view a Spinning cycle - which is not easy to reach, which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that's the challenge we need to accept if we're serious about increasing Evolvability. Fortunately higher Evolvability is not the only outcome of Spinning. Customers/management will like the increased flexibility and "getting more bang for the buck".

    Read the article

  • Configuring Oracle iPlanet WebServer / Oracle Traffic Director to use crypto accelerators on T4-1 servers

    - by mv
    Configuring Oracle iPlanet Web Server / Oracle Traffic Director to use crypto accelerators on T4-1 servers Jyri had written a technical article on Configuring Solaris Cryptographic Framework and Sun Java System Web Server 7 on Systems With UltraSPARC T1 Processors. I tried to find out what has changed since then in T4. I have used a T4-1 SPARC system with Solaris 10. Results slightly vary for Solaris 11.  For Solaris 11, the T4 optimization was implemented in libsoftcrypto.so while it was in pkcs11_softtoken_extra.so for Solaris 10. Overview of T4 processors is here in this blog. Many thanx to Chi-Chang Lin and Julien for their help. 1. Install Oracle iPlanet Web Server / Oracle Traffic Director.  Go to instance/config directory.  # cd /opt/oracle/webserver7/https-hostname.fqdn/config 2. List default PKCS#11 Modules # ../../bin/modutil -dbdir . -listListing of PKCS #11 Modules-----------------------------------------------------------1. NSS Internal PKCS #11 Moduleslots: 2 slots attachedstatus: loadedslot: NSS Internal Cryptographic Servicestoken: NSS Generic Crypto Servicesslot: NSS User Private Key and Certificate Servicestoken: NSS Certificate DB2. Root Certslibrary name: libnssckbi.soslots: 1 slot attachedstatus: loadedslot: NSS Builtin Objectstoken: Builtin Object Token----------------------------------------------------------- 3. Initialize the soft token data store in the $HOME/.sunw/pkcs11_softtoken/ directory # pktool setpin keystore=pkcs11Enter token passphrase: olderpasswordCreate new passphrase: passwordRe-enter new passphrase: passwordPassphrase changed. 4. Offload crypto operations to Solaris Crypto Framework on T4 $ ../../bin/modutil -dbdir . -nocertdb -add SCF -libfile /usr/lib/libpkcs11.so -mechanisms RSA:AES:SHA1:MD5 Module "SCF" added to database. Note that -nocertdb means modutil won't try to open the NSS softoken key database. It doesn't even have to be present. PKCS#11 library used is /usr/lib/libpkcs11.so. If the server is running in 64 bit mode, we have to use /usr/lib/64/libpkcs11.so Unlike T1 and T2, in T4 we do not have to disable mechanisms in softtoken provider using cryptoadm. 5. List again to check that a new module SCF is added # ../../bin/modutil -dbdir . -list Listing of PKCS #11 Modules-----------------------------------------------------------1. NSS Internal PKCS #11 Moduleslots: 2 slots attachedstatus: loadedslot: NSS Internal Cryptographic Servicestoken: NSS Generic Crypto Servicesslot: NSS User Private Key and Certificate Servicestoken: NSS Certificate DB2. SCFlibrary name: /usr/lib/libpkcs11.soslots: 2 slots attachedstatus: loadedslot: Sun Metaslottoken: Sun Metaslotslot: n2rng/0 SUNW_N2_Random_Number_Generator token: n2rng/0 SUNW_N2_RNG 3. Root Certs library name: libnssckbi.so slots: 1 slot attached status: loaded slot: NSS Builtin Objects token: Builtin Object Token----------------------------------------------------------- 6.  Create certificate in “Sun Metaslot” : I have used certutil, but you must use Admin Server CLI / GUI # ../../bin/certutil -S -x -n "Server-Cert" -t "CT,CT,CT" -s "CN=*.fqdn" -d . -h "Sun Metaslot"Enter Password or Pin for "Sun Metaslot": password 7. Verify that the certificate is created properly in “Sun Metslaot” # ../../bin/certutil -L -d . -h "Sun Metaslot"Certificate Nickname Trust AttributesSSL,S/MIME,JAR/XPIEnter Password or Pin for "Sun Metaslot": passwordSun Metaslot:Server-Cert CTu,Cu,Cu# 8. Associate this newly created certificate to http listener using Admin CLI/GUI. 
After that server.xml should have <http-listener> ...    <ssl>        <server-cert-nickname>Sun Metaslot:Server-Cert</server-cert-nicknamer>    </ssl> Note the prefix "Sun Metaslot" 9. Disable PKCS#11 bypass To use the accelerated AES algorithm, turn off PKCS#11 bypass, and configure modutil to have the AES mechanism go to the Metaslot. After you disable PKCS#11 bypasss using Admin GUI/CLI,  check that server.xml should have <server> ....    <pkcs11>         <enabled>1</enabled>         <allow-bypass>0</allow-bypass>     </pkcs11> With PKCS#11 bypass enabled, Oracle iPlanet Web Server will only use the RSA capability of the T4, provided certificate and key are stored in the T4 slot (Metaslot). Actually, the RSA op is never bypassed in NSS, it's always done with PKCS#11 calls. So the bypass settings won't affect the behavior of the probes for RSA at all. The only thing that matters if where the RSA key and certificate live, ie. which PKCS#11 token, and thus which PKCS#11 module gets called to do the work. If your certificate/key are in the NSS certificate/key db, you will see libsoftokn3/libfreebl libraries doing the RSA work. If they are in the Sun Metaslot, it should be the Solaris code. 10. Start the server instance # ../bin/startserv Oracle iPlanet Web Server 7.0.16 B09/14/2012 03:33Please enter the PIN for the "Sun Metaslot" token: password...info: HTTP3072: http-listener-1: https://hostname.fqdn:80 ready to accept requestsinfo: CORE3274: successful server startup 11. Figure out which process to run this DTrace script on # ps -eaf | grep webservd | grep -v dogwebservd 18224 18223 0 13:17:25 ? 0:07 webservd -d /opt/oracle/webserver7/https-hostname.fqdn/config -r /opt/root 18225 18224 0 13:17:25 ? 0:00 webservd -d /opt/oracle/webserver7/https-hostname.fqdn/config -r /opt/ (For Oracle Traffic Director look for process named "trafficd") We see that the child process id is “18225” 12. Clients for testing : You can use any browser. I used NSS tool tstclnt for testing $cat > req.txtGET /index.html HTTP/1.0 For checking both RSA and AES, I used cipher “:0035” which is TLS_RSA_WITH_AES_256_CBC_SHA $./tstclnt -h hostname -p 80 -d . -T -f -o -v -c “:0035” < req.txt 13. How do I make sure that crypto accelerator is being used 13.1 Create DTrace script The following D script should be able to uncover whether T4-specific crypto routine are being called or not. It also displays stats per second. # cat > t4crypto.d#!/usr/sbin/dtrace -spid$target::*rsa*:entry,pid$target::*yf*:entry{    @ops[probemod, probefunc] = count();}tick-1sec{    printa(@ops);    trunc(@ops);} Invoke with './t4crypto.d -p <pid> ' 13.2 EXPECTED PROBES FOR Solaris 10 : If offloading to T4 HW are correctly set up, the expected DTrace output would have these probes and libraries library Operations PROBES pkcs11_softtoken_extra.so RSA soft_decrypt_rsa_pkcs_decode, soft_encrypt_rsa_pkcs_encode soft_rsa_crypt_init_common soft_rsa_decrypt, soft_rsa_encrypt soft_rsa_decrypt_common, soft_rsa_encrypt_common AES yf_aes_instructions_present yf_aes_expand256, yf_aes256_cbc_decrypt, yf_aes256_cbc_encrypt, yf_aes256_load_keys_for_decrypt, yf_aes256_load_keys_for_encrypt, Note that these are for 256, same for 128, 192... these are for cbc, same for ecb, ctr, cfb128... 
DES yf_des_expand, yf_des_instructions_present yf_des_encrypt libmd_psr.so MD5 yf_md5_multiblock, yf_md5_instruction_present SHA1 yf_sha1_instruction_present, yf_sha1_multibloc 13.3 SAMPLE OUTPUT FOR CIPHER TLS_RSA_WITH_AES_256_CBC_SHA (0x0035) ON T4 SPARC SOLARIS 10 WITHOUT PKCS#11 BYPASS # ./t4crypto.d -p 18225 pkcs11_softtoken_extra.so.1   soft_decrypt_rsa_pkcs_decode    1 pkcs11_softtoken_extra.so.1   soft_rsa_crypt_init_common      1 pkcs11_softtoken_extra.so.1   soft_rsa_decrypt                1 pkcs11_softtoken_extra.so.1   big_mp_mul_yf                   2 pkcs11_softtoken_extra.so.1   mpm_yf_mpmul                    2 pkcs11_softtoken_extra.so.1   mpmul_arr_yf                    2 pkcs11_softtoken_extra.so.1   rijndael_key_setup_enc_yf       2 pkcs11_softtoken_extra.so.1   soft_rsa_decrypt_common         2 pkcs11_softtoken_extra.so.1   yf_aes_expand256                2 pkcs11_softtoken_extra.so.1   yf_aes256_cbc_decrypt           3 pkcs11_softtoken_extra.so.1   yf_aes256_load_keys_for_decrypt 3 pkcs11_softtoken_extra.so.1   big_mont_mul_yf                 6 pkcs11_softtoken_extra.so.1   mm_yf_montmul                   6 pkcs11_softtoken_extra.so.1   yf_des_instructions_present     6 pkcs11_softtoken_extra.so.1   yf_aes256_cbc_encrypt           8 pkcs11_softtoken_extra.so.1   yf_aes256_load_keys_for_encrypt 8 pkcs11_softtoken_extra.so.1   yf_mpmul_present                8 pkcs11_softtoken_extra.so.1   yf_aes_instructions_present    13 pkcs11_softtoken_extra.so.1   yf_des_encrypt                 18 libmd_psr.so.1                yf_md5_multiblock              41 libmd_psr.so.1                yf_md5_instruction_present     72 libmd_psr.so.1                yf_sha1_instruction_present    82 libmd_psr.so.1                yf_sha1_multiblock             82 This indicates that both RSA and AES ops are done in Solaris Crypto Framework. 13.4 SAMPLE OUTPUT FOR CIPHER TLS_RSA_WITH_AES_256_CBC_SHA (0x0035) ON T4 SPARC SOLARIS 10 WITH PKCS#11 BYPASS # ./t4crypto.d -p 18225 pkcs11_softtoken_extra.so.1   soft_decrypt_rsa_pkcs_decode 1 pkcs11_softtoken_extra.so.1   soft_rsa_crypt_init_common   1 pkcs11_softtoken_extra.so.1   soft_rsa_decrypt             1 pkcs11_softtoken_extra.so.1   soft_rsa_decrypt_common      1 pkcs11_softtoken_extra.so.1   big_mp_mul_yf                2 pkcs11_softtoken_extra.so.1   mpm_yf_mpmul                 2 pkcs11_softtoken_extra.so.1   mpmul_arr_yf                 2 pkcs11_softtoken_extra.so.1   big_mont_mul_yf              6 pkcs11_softtoken_extra.so.1   mm_yf_montmul                6 pkcs11_softtoken_extra.so.1   yf_mpmul_present             8 For this cipher, when I enable PKCS#11 bypass, Only RSA probes are being hit AES probes are not being hit. 13.5 ustack() for RSA operations / probefunc == "soft_rsa_decrypt" / Shows that libnss3.so is calling C_* functions of libpkcs11.so which is calling functions of pkcs11_softtoken_extra.so for both cases with and without bypass. 
When PKCS#11 bypass is disabled (allow-bypass is 0) pkcs11_softtoken_extra.so.1`soft_rsa_decrypt pkcs11_softtoken_extra.so.1`soft_rsa_decrypt_common+0x94 pkcs11_softtoken_extra.so.1`soft_unwrapkey+0x258 pkcs11_softtoken_extra.so.1`C_UnwrapKey+0x1ec libpkcs11.so.1`meta_unwrap_key+0x17c libpkcs11.so.1`meta_UnwrapKey+0xc4 libpkcs11.so.1`C_UnwrapKey+0xfc libnss3.so`pk11_AnyUnwrapKey+0x6b8 libnss3.so`PK11_PubUnwrapSymKey+0x8c libssl3.so`ssl3_HandleRSAClientKeyExchange+0x1a0 libssl3.so`ssl3_HandleClientKeyExchange+0x154 libssl3.so`ssl3_HandleHandshakeMessage+0x440 libssl3.so`ssl3_HandleHandshake+0x11c libssl3.so`ssl3_HandleRecord+0x5e8 libssl3.so`ssl3_GatherCompleteHandshake+0x5c libssl3.so`ssl_GatherRecord1stHandshake+0x30 libssl3.so`ssl_Do1stHandshake+0xec libssl3.so`ssl_SecureRecv+0x1c8 libssl3.so`ssl_Recv+0x9c libns-httpd40.so`__1cNDaemonSessionDrun6M_v_+0x2dc When PKCS#11 bypass is enabled (allow-bypass is 1) pkcs11_softtoken_extra.so.1`soft_rsa_decrypt pkcs11_softtoken_extra.so.1`soft_rsa_decrypt_common+0x94 pkcs11_softtoken_extra.so.1`C_Decrypt+0x164 libpkcs11.so.1`meta_do_operation+0x27c libpkcs11.so.1`meta_Decrypt+0x4c libpkcs11.so.1`C_Decrypt+0xcc libnss3.so`PK11_PrivDecryptPKCS1+0x1ac libssl3.so`ssl3_HandleRSAClientKeyExchange+0xe4 libssl3.so`ssl3_HandleClientKeyExchange+0x154 libssl3.so`ssl3_HandleHandshakeMessage+0x440 libssl3.so`ssl3_HandleHandshake+0x11c libssl3.so`ssl3_HandleRecord+0x5e8 libssl3.so`ssl3_GatherCompleteHandshake+0x5c libssl3.so`ssl_GatherRecord1stHandshake+0x30 libssl3.so`ssl_Do1stHandshake+0xec libssl3.so`ssl_SecureRecv+0x1c8 libssl3.so`ssl_Recv+0x9c libns-httpd40.so`__1cNDaemonSessionDrun6M_v_+0x2dc libnsprwrap.so`ThreadMain+0x1c libnspr4.so`_pt_root+0xe8 13.6 ustack() FOR AES operations / probefunc == "yf_aes256_cbc_encrypt" / When PKCS#11 bypass is disabled (allow-bypass is 0) pkcs11_softtoken_extra.so.1`yf_aes256_cbc_encrypt pkcs11_softtoken_extra.so.1`aes_block_process_contiguous_whole_blocks+0xb4 pkcs11_softtoken_extra.so.1`aes_crypt_contiguous_blocks+0x1cc pkcs11_softtoken_extra.so.1`soft_aes_encrypt_common+0x22c pkcs11_softtoken_extra.so.1`C_EncryptUpdate+0x10c libpkcs11.so.1`meta_do_operation+0x1fc libpkcs11.so.1`meta_EncryptUpdate+0x4c libpkcs11.so.1`C_EncryptUpdate+0xcc libnss3.so`PK11_CipherOp+0x1a0 libssl3.so`ssl3_CompressMACEncryptRecord+0x264 libssl3.so`ssl3_SendRecord+0x300 libssl3.so`ssl3_FlushHandshake+0x54 libssl3.so`ssl3_SendFinished+0x1fc libssl3.so`ssl3_HandleFinished+0x314 libssl3.so`ssl3_HandleHandshakeMessage+0x4ac libssl3.so`ssl3_HandleHandshake+0x11c libssl3.so`ssl3_HandleRecord+0x5e8 libssl3.so`ssl3_GatherCompleteHandshake+0x5c libssl3.so`ssl_GatherRecord1stHandshake+0x30 libssl3.so`ssl_Do1stHandshake+0xec Shows that libnss3.so is calling C_* functions of libpkcs11.so which is calling functions of pkcs11_softtoken_extra.so However when PKCS#11 bypass is disabled (allow-bypass is 1) this stack isn't getting called. 14. LIST OF ALL THE PROBES MATCHED BY D SCRIPT FOR REFERENCE # ./t4crypto.d -p 18225 -l ID PROVIDER MODULE FUNCTION NAME ... 
55720 pid18225 libmd_psr.so.1 yf_md5_instruction_present entry 55721 pid18225 libmd_psr.so.1 yf_sha256_instruction_present entry 55722 pid18225 libmd_psr.so.1 yf_sha512_instruction_present entry 55723 pid18225 libmd_psr.so.1 yf_sha1_instruction_present entry 55724 pid18225 libmd_psr.so.1 yf_sha256 entry 55725 pid18225 libmd_psr.so.1 yf_sha256_multiblock entry 55726 pid18225 libmd_psr.so.1 yf_sha512 entry 55727 pid18225 libmd_psr.so.1 yf_sha512_multiblock entry 55728 pid18225 libmd_psr.so.1 yf_sha1 entry 55729 pid18225 libmd_psr.so.1 yf_sha1_multiblock entry 55730 pid18225 libmd_psr.so.1 yf_md5 entry 55731 pid18225 libmd_psr.so.1 yf_md5_multiblock entry 55732 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_instructions_present entry 55733 pid18225 pkcs11_softtoken_extra.so.1 rijndael_key_setup_enc_yf entry 55734 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_expand128 entry 55735 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_encrypt128 entry 55736 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_decrypt128 entry 55737 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_expand192 entry 55738 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_encrypt192 entry 55739 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_decrypt192 entry 55740 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_expand256 entry 55741 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_encrypt256 entry 55742 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_decrypt256 entry 55743 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_load_keys_for_encrypt entry 55744 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_load_keys_for_encrypt entry 55745 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_load_keys_for_encrypt entry 55746 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_ecb_encrypt entry 55747 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_ecb_encrypt entry 55748 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_ecb_encrypt entry 55749 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_cbc_encrypt entry 55750 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_cbc_encrypt entry 55751 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_cbc_encrypt entry 55752 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_ctr_crypt entry 55753 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_ctr_crypt entry 55754 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_ctr_crypt entry 55755 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_cfb128_encrypt entry 55756 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_cfb128_encrypt entry 55757 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_cfb128_encrypt entry 55758 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_load_keys_for_decrypt entry 55759 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_load_keys_for_decrypt entry 55760 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_load_keys_for_decrypt entry 55761 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_ecb_decrypt entry 55762 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_ecb_decrypt entry 55763 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_ecb_decrypt entry 55764 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_cbc_decrypt entry 55765 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_cbc_decrypt entry 55766 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_cbc_decrypt entry 55767 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_cfb128_decrypt entry 55768 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_cfb128_decrypt entry 55769 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_cfb128_decrypt entry 55771 pid18225 pkcs11_softtoken_extra.so.1 yf_des_instructions_present entry 55772 pid18225 pkcs11_softtoken_extra.so.1 yf_des_expand entry 55773 
pid18225 pkcs11_softtoken_extra.so.1 yf_des_encrypt entry 55774 pid18225 pkcs11_softtoken_extra.so.1 yf_mpmul_present entry 55775 pid18225 pkcs11_softtoken_extra.so.1 yf_montmul_present entry 55776 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_montmul entry 55777 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_montsqr entry 55778 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_restore_func entry 55779 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_ret_from_mont_func entry 55780 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_execute_slp entry 55781 pid18225 pkcs11_softtoken_extra.so.1 big_modexp_ncp_yf entry 55782 pid18225 pkcs11_softtoken_extra.so.1 big_mont_mul_yf entry 55783 pid18225 pkcs11_softtoken_extra.so.1 mpmul_arr_yf entry 55784 pid18225 pkcs11_softtoken_extra.so.1 big_mp_mul_yf entry 55785 pid18225 pkcs11_softtoken_extra.so.1 mpm_yf_mpmul entry 55786 pid18225 libns-httpd40.so nsapi_rsa_set_priv_fn entry ... 55795 pid18225 libnss3.so prepare_rsa_priv_key_export_for_asn1 entry 55796 pid18225 libresolv.so.2 sunw_dst_rsaref_init entry 55797 pid18225 libnssutil3.so NSS_Get_SEC_UniversalStringTemplate entry ... 55813 pid18225 libsoftokn3.so prepare_low_rsa_priv_key_for_asn1 entry 55814 pid18225 libsoftokn3.so rsa_FormatOneBlock entry 55815 pid18225 libsoftokn3.so rsa_FormatBlock entry 55816 pid18225 libnssdbm3.so lg_prepare_low_rsa_priv_key_for_asn1 entry 55817 pid18225 libfreebl_32fpu_3.so rsa_build_from_primes entry 55818 pid18225 libfreebl_32fpu_3.so rsa_is_prime entry 55819 pid18225 libfreebl_32fpu_3.so rsa_get_primes_from_exponents entry 55820 pid18225 libfreebl_32fpu_3.so rsa_PrivateKeyOpNoCRT entry 55821 pid18225 libfreebl_32fpu_3.so rsa_PrivateKeyOpCRTNoCheck entry 55822 pid18225 libfreebl_32fpu_3.so rsa_PrivateKeyOpCRTCheckedPubKey entry 55823 pid18225 pkcs11_kernel.so.1 key_gen_rsa_by_value entry 55824 pid18225 pkcs11_kernel.so.1 get_rsa_private_key entry 55825 pid18225 pkcs11_kernel.so.1 get_rsa_public_key entry 55826 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_encrypt entry 55827 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_decrypt entry 55828 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_crypt_init_common entry 55829 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_encrypt_common entry 55830 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_decrypt_common entry 55831 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_sign_verify_init_common entry 55832 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_sign_common entry 55833 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_verify_common entry 55834 pid18225 pkcs11_softtoken_extra.so.1 generate_rsa_key entry 55835 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_genkey_pair entry 55836 pid18225 pkcs11_softtoken_extra.so.1 get_rsa_sha1_prefix entry 55837 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_digest_sign_common entry 55838 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_digest_verify_common entry 55839 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_verify_recover entry 55840 pid18225 pkcs11_softtoken_extra.so.1 rsa_pri_to_asn1 entry 55841 pid18225 pkcs11_softtoken_extra.so.1 asn1_to_rsa_pri entry 55842 pid18225 pkcs11_softtoken_extra.so.1 soft_encrypt_rsa_pkcs_encode entry 55843 pid18225 pkcs11_softtoken_extra.so.1 soft_decrypt_rsa_pkcs_decode entry 55844 pid18225 pkcs11_softtoken_extra.so.1 soft_sign_rsa_pkcs_encode entry 55845 pid18225 pkcs11_softtoken_extra.so.1 soft_verify_rsa_pkcs_decode entry 55770 profile tick-1sec

    Read the article

  • Skinny controller in ASP.NET MVC 4

    - by thangchung
    Rails community are always inspire a lot of best ideas. I really love this community by the time. One of these is "Fat models and skinny controllers". I have spent a lot of time on ASP.NET MVC, and really I did some miss-takes, because I made the controller so fat. That make controller is really dirty and very hard to maintain in the future. It is violate seriously SRP principle and KISS as well. But how can we achieve that in ASP.NET MVC? That question is really clear after I read "Manning ASP.NET MVC 4 in Action". It is simple that we can separate it into ActionResult, and try to implementing logic and persistence data inside this. In last 2 years, I have read this from Jimmy Bogard blog, but in that time I never had a consideration about it. That's enough for talking now. I just published a sample on ASP.NET MVC 4, implemented on Visual Studio 2012 RC at here. I used EF framework at here for implementing persistence layer, and also use 2 free templates from internet to make the UI for this sample. In this sample, I try to implementing the simple magazine website that managing all articles, categories and news. It is not finished at all in this time, but no problems, because I just show you about how can we make the controller skinny. And I wanna hear more about your ideas. The first thing, I am abstract the base ActionResult class like this:    public abstract class MyActionResult : ActionResult, IEnsureNotNull     {         public abstract void EnsureAllInjectInstanceNotNull();     }     public abstract class ActionResultBase<TController> : MyActionResult where TController : Controller     {         protected readonly Expression<Func<TController, ActionResult>> ViewNameExpression;         protected readonly IExConfigurationManager ConfigurationManager;         protected ActionResultBase (Expression<Func<TController, ActionResult>> expr)             : this(DependencyResolver.Current.GetService<IExConfigurationManager>(), expr)         {         }         protected ActionResultBase(             IExConfigurationManager configurationManager,             Expression<Func<TController, ActionResult>> expr)         {             Guard.ArgumentNotNull(expr, "ViewNameExpression");             Guard.ArgumentNotNull(configurationManager, "ConfigurationManager");             ViewNameExpression = expr;             ConfigurationManager = configurationManager;         }         protected ViewResult GetViewResult<TViewModel>(TViewModel viewModel)         {             var m = (MethodCallExpression)ViewNameExpression.Body;             if (m.Method.ReturnType != typeof(ActionResult))             {                 throw new ArgumentException("ControllerAction method '" + m.Method.Name + "' does not return type ActionResult");             }             var result = new ViewResult             {                 ViewName = m.Method.Name             };             result.ViewData.Model = viewModel;             return result;         }         public override void ExecuteResult(ControllerContext context)         {             EnsureAllInjectInstanceNotNull();         }     } I also have an interface for validation all inject objects. This interface make sure all inject objects that I inject using Autofac container are not null. 
    Its implementation is shown below:

    public interface IEnsureNotNull
    {
        void EnsureAllInjectInstanceNotNull();
    }

    Next, I implement the HomePageViewModelActionResult class like this:

    public class HomePageViewModelActionResult<TController> : ActionResultBase<TController> where TController : Controller
    {
        #region variables & ctors
        private readonly ICategoryRepository _categoryRepository;
        private readonly IItemRepository _itemRepository;
        private readonly int _numOfPage;

        public HomePageViewModelActionResult(Expression<Func<TController, ActionResult>> viewNameExpression)
            : this(viewNameExpression,
                   DependencyResolver.Current.GetService<ICategoryRepository>(),
                   DependencyResolver.Current.GetService<IItemRepository>())
        {
        }

        public HomePageViewModelActionResult(
            Expression<Func<TController, ActionResult>> viewNameExpression,
            ICategoryRepository categoryRepository,
            IItemRepository itemRepository)
            : base(viewNameExpression)
        {
            _categoryRepository = categoryRepository;
            _itemRepository = itemRepository;
            _numOfPage = ConfigurationManager.GetAppConfigBy("NumOfPage").ToInteger();
        }
        #endregion

        #region implementation
        public override void ExecuteResult(ControllerContext context)
        {
            base.ExecuteResult(context);

            var cats = _categoryRepository.GetCategories();

            var mainViewModel = new HomePageViewModel();
            var headerViewModel = new HeaderViewModel();
            var footerViewModel = new FooterViewModel();
            var mainPageViewModel = new MainPageViewModel();

            headerViewModel.SiteTitle = "Magazine Website";

            if (cats != null && cats.Any())
            {
                headerViewModel.Categories = cats.ToList();
                footerViewModel.Categories = cats.ToList();
            }

            mainPageViewModel.LeftColumn = BindingDataForMainPageLeftColumnViewModel();
            mainPageViewModel.RightColumn = BindingDataForMainPageRightColumnViewModel();

            mainViewModel.Header = headerViewModel;
            mainViewModel.DashBoard = new DashboardViewModel();
            mainViewModel.Footer = footerViewModel;
            mainViewModel.MainPage = mainPageViewModel;

            GetViewResult(mainViewModel).ExecuteResult(context);
        }

        public override void EnsureAllInjectInstanceNotNull()
        {
            Guard.ArgumentNotNull(_categoryRepository, "CategoryRepository");
            Guard.ArgumentNotNull(_itemRepository, "ItemRepository");
            Guard.ArgumentMustMoreThanZero(_numOfPage, "NumOfPage");
        }
        #endregion

        #region private functions
        private MainPageRightColumnViewModel BindingDataForMainPageRightColumnViewModel()
        {
            var mainPageRightCol = new MainPageRightColumnViewModel();
            mainPageRightCol.LatestNews = _itemRepository.GetNewestItem(_numOfPage).ToList();
            mainPageRightCol.MostViews = _itemRepository.GetMostViews(_numOfPage).ToList();
            return mainPageRightCol;
        }

        private MainPageLeftColumnViewModel BindingDataForMainPageLeftColumnViewModel()
        {
            var mainPageLeftCol = new MainPageLeftColumnViewModel();
            var items = _itemRepository.GetNewestItem(_numOfPage);
            if (items != null && items.Any())
            {
                var firstItem = items.First();
                if (firstItem == null)
                    throw new NoNullAllowedException("First Item".ToNotNullErrorMessage());
                if (firstItem.ItemContent == null)
                    throw new NoNullAllowedException("First ItemContent".ToNotNullErrorMessage());
                mainPageLeftCol.FirstItem = firstItem;
                if (items.Count() > 1)
                {
                    mainPageLeftCol.RemainItems = items.Where(x => x.ItemContent != null && x.Id != mainPageLeftCol.FirstItem.Id).ToList();
                }
            }
            return mainPageLeftCol;
        }
        #endregion
    }

    As the final step, I go into the HomeController and add a few lines of code:

    [Authorize]
    public class HomeController : BaseController
    {
        [AllowAnonymous]
        public ActionResult Index()
        {
            return new HomePageViewModelActionResult<HomeController>(x => x.Index());
        }

        [AllowAnonymous]
        public ActionResult Details(int id)
        {
            return new DetailsViewModelActionResult<HomeController>(x => x.Details(id), id);
        }

        [AllowAnonymous]
        public ActionResult Category(int id)
        {
            return new CategoryViewModelActionResult<HomeController>(x => x.Category(id), id);
        }
    }

    As you can see, the code in the controller is really skinny; all of the logic has moved into the custom ActionResult classes. Some people will say that this just moves the code out of the controller into another class, so it is still hard to maintain - that it merely shifts the complicated code from one place to another. But if you look at it in more detail, the split is meaningful: you keep the code that deals with the HttpContext (and similar concerns) in the controller, and delegate the other logic - business rules, data persistence, and so on - to the custom ActionResult classes. Tell me what you think; I am really willing to hear your ideas. All of the source code can be found here.
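    The custom ActionResult classes above resolve their repositories and configuration helper through DependencyResolver.Current, so the Autofac container has to be wired up at application start. The original post does not show that wiring; the sketch below is only an illustration of what it might look like, and the concrete EF repository and configuration class names are hypothetical, not types taken from the sample.

    // A minimal Autofac bootstrap sketch (assumed, not from the original sample).
    using System.Web.Mvc;
    using Autofac;
    using Autofac.Integration.Mvc;

    public static class ContainerConfig
    {
        public static void Register()
        {
            var builder = new ContainerBuilder();

            // Let Autofac build the MVC controllers in this assembly.
            builder.RegisterControllers(typeof(MvcApplication).Assembly);

            // Hypothetical EF-backed implementations of the repository interfaces.
            builder.RegisterType<EfCategoryRepository>().As<ICategoryRepository>().InstancePerHttpRequest();
            builder.RegisterType<EfItemRepository>().As<IItemRepository>().InstancePerHttpRequest();
            builder.RegisterType<ExConfigurationManager>().As<IExConfigurationManager>().SingleInstance();

            // Expose the container through MVC's DependencyResolver, which the
            // custom ActionResult classes call to resolve their dependencies.
            DependencyResolver.SetResolver(new AutofacDependencyResolver(builder.Build()));
        }
    }

    Calling ContainerConfig.Register() from Application_Start would then make DependencyResolver.Current.GetService<T>() return non-null instances, which is exactly what EnsureAllInjectInstanceNotNull() verifies.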

    Read the article

  • How to deal with configuration style warnings occurring from TexLive 2012 installation?

    - by JJD
    I followed the advice of izx on how to install TexLive 2012 using the texlive-backports PPA. Before I started I removed all TexLive-related packages. The installation finished and everything seems to work fine. The only thing I noticed are some warnings in the output of the installer. Here is an excerpt of the output: Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg There are more of that kind in the rest of the output: $ sudo apt-get install texlive Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: latex-beamer latex-xcolor libgraphite3 libkpathsea6 libptexenc1 lmodern pgf prosper ps2eps tex-common tex-gyre texlive-base texlive-binaries texlive-common texlive-doc-base texlive-extra-utils texlive-font-utils texlive-fonts-recommended texlive-fonts-recommended-doc texlive-generic-recommended texlive-latex-base texlive-latex-base-doc texlive-latex-recommended texlive-latex-recommended-doc texlive-pstricks texlive-pstricks-doc tipa ttf-marvosym Suggested packages: texlive-doc-en purifyeps chktex latexmk dvipng xindy dvidvi fragmaster lacheck latexdiff t1utils The following NEW packages will be installed: latex-beamer latex-xcolor libgraphite3 libkpathsea6 libptexenc1 lmodern pgf prosper ps2eps tex-common tex-gyre texlive texlive-base texlive-binaries texlive-common texlive-doc-base texlive-extra-utils texlive-font-utils texlive-fonts-recommended texlive-fonts-recommended-doc texlive-generic-recommended texlive-latex-base texlive-latex-base-doc texlive-latex-recommended texlive-latex-recommended-doc texlive-pstricks texlive-pstricks-doc tipa ttf-marvosym 0 upgraded, 29 newly installed, 0 to remove and 17 not upgraded. Need to get 0 B/274 MB of archives. After this operation, 450 MB of additional disk space will be used. Do you want to continue [Y/n]? Preconfiguring packages ... Selecting previously unselected package tex-common. (Reading database ... 290206 files and directories currently installed.) Unpacking tex-common (from .../tex-common_3.13~ubuntu12.04.1_all.deb) ... Selecting previously unselected package lmodern. Unpacking lmodern (from .../lmodern_2.004.1-5~precise1_all.deb) ... Selecting previously unselected package tex-gyre. Unpacking tex-gyre (from .../tex-gyre_2.004.1-4~precise1_all.deb) ... Selecting previously unselected package libgraphite3. Unpacking libgraphite3 (from .../libgraphite3_1%3a2.3.1-0.2build1_amd64.deb) ... Selecting previously unselected package libkpathsea6. Unpacking libkpathsea6 (from .../libkpathsea6_2012.20120628-1~ubuntu12.04.1_amd64.deb) ... Selecting previously unselected package libptexenc1. Unpacking libptexenc1 (from .../libptexenc1_2012.20120628-1~ubuntu12.04.1_amd64.deb) ... Selecting previously unselected package texlive-common. Unpacking texlive-common (from .../texlive-common_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-binaries. Unpacking texlive-binaries (from .../texlive-binaries_2012.20120628-1~ubuntu12.04.1_amd64.deb) ... Selecting previously unselected package texlive-doc-base. Unpacking texlive-doc-base (from .../texlive-doc-base_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-base. 
Unpacking texlive-base (from .../texlive-base_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-base. Unpacking texlive-latex-base (from .../texlive-latex-base_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-recommended. Unpacking texlive-latex-recommended (from .../texlive-latex-recommended_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package latex-xcolor. Unpacking latex-xcolor (from .../latex-xcolor_2.11-1_all.deb) ... Selecting previously unselected package pgf. Unpacking pgf (from .../archives/pgf_2.10-1_all.deb) ... Selecting previously unselected package latex-beamer. Unpacking latex-beamer (from .../latex-beamer_3.10-1_all.deb) ... Selecting previously unselected package texlive-generic-recommended. Unpacking texlive-generic-recommended (from .../texlive-generic-recommended_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-pstricks. Unpacking texlive-pstricks (from .../texlive-pstricks_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package prosper. Unpacking prosper (from .../prosper_1.00.4+cvs.2007.05.01-4_all.deb) ... Selecting previously unselected package ps2eps. Unpacking ps2eps (from .../ps2eps_1.68-1_amd64.deb) ... Selecting previously unselected package ttf-marvosym. Unpacking ttf-marvosym (from .../ttf-marvosym_0.1+dfsg-2_all.deb) ... Selecting previously unselected package texlive-fonts-recommended. Unpacking texlive-fonts-recommended (from .../texlive-fonts-recommended_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive. Unpacking texlive (from .../texlive_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-extra-utils. Unpacking texlive-extra-utils (from .../texlive-extra-utils_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-font-utils. Unpacking texlive-font-utils (from .../texlive-font-utils_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-fonts-recommended-doc. Unpacking texlive-fonts-recommended-doc (from .../texlive-fonts-recommended-doc_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-base-doc. Unpacking texlive-latex-base-doc (from .../texlive-latex-base-doc_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-recommended-doc. Unpacking texlive-latex-recommended-doc (from .../texlive-latex-recommended-doc_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-pstricks-doc. Unpacking texlive-pstricks-doc (from .../texlive-pstricks-doc_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package tipa. Unpacking tipa (from .../tipa_2%3a1.3-17~precise1_all.deb) ... Processing triggers for doc-base ... Processing 5 added doc-base files... Registering documents with scrollkeeper... Processing triggers for man-db ... Processing triggers for fontconfig ... Processing triggers for install-info ... Setting up tex-common (3.13~ubuntu12.04.1) ... Running mktexlsr. This may take some time... done. texlive-base is not ready, delaying updmap-sys call texlive-base is not ready, skipping fmtutil-sys --all call Setting up lmodern (2.004.1-5~precise1) ... 
Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up tex-gyre (2.004.1-4~precise1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up libgraphite3 (1:2.3.1-0.2build1) ... Setting up libkpathsea6 (2012.20120628-1~ubuntu12.04.1) ... Setting up libptexenc1 (2012.20120628-1~ubuntu12.04.1) ... Setting up texlive-common (2012.20120611-3~ubuntu12.04.1) ... Setting up texlive-binaries (2012.20120628-1~ubuntu12.04.1) ... update-alternatives: using /usr/bin/xdvi-xaw to provide /usr/bin/xdvi.bin (xdvi.bin) in auto mode. update-alternatives: using /usr/bin/bibtex.original to provide /usr/bin/bibtex (bibtex) in auto mode. mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST... mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... mktexlsr: Updating /var/lib/texmf/ls-R... mktexlsr: Done. Building format(s) --refresh. This may take some time... done. Setting up texlive-doc-base (2012.20120611-1~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up ps2eps (1.68-1) ... Setting up ttf-marvosym (0.1+dfsg-2) ... Setting up texlive-fonts-recommended-doc (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-latex-base-doc (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-latex-recommended-doc (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-pstricks-doc (2012.20120611-1~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Processing triggers for tex-common ... 
Running mktexlsr. This may take some time... done. texlive-base is not ready, delaying updmap-sys call Setting up texlive-base (2012.20120611-3~ubuntu12.04.1) ... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN... mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... mktexlsr: Updating /var/lib/texmf/ls-R... mktexlsr: Done. /usr/bin/tl-paper: setting paper size for dvips to a4. /usr/bin/tl-paper: setting paper size for dvipdfmx to a4. /usr/bin/tl-paper: setting paper size for xdvi to a4. /usr/bin/tl-paper: setting paper size for pdftex to a4. Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Running mktexlsr. This may take some time... done. Building format(s) --all. This may take some time... done. Processing triggers for tex-common ... Running updmap-sys. This may take some time... done. Running mktexlsr /var/lib/texmf ... done. Setting up texlive-generic-recommended (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-fonts-recommended (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-extra-utils (2012.20120611-1~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-font-utils (2012.20120611-1~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-latex-base (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Running mktexlsr. This may take some time... done. Building format(s) --all --cnffile /etc/texmf/fmt.d/10texlive-latex-base.cnf. This may take some time... done. Processing triggers for tex-common ... Running mktexlsr. This may take some time... done. Running updmap-sys. This may take some time... done. Running mktexlsr /var/lib/texmf ... done. Setting up texlive-pstricks (2012.20120611-1~ubuntu12.04.1) ... 
Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up tipa (2:1.3-17~precise1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-latex-recommended (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Processing triggers for tex-common ... Running mktexlsr. This may take some time... done. Running updmap-sys. This may take some time... done. Running mktexlsr /var/lib/texmf ... done. Setting up prosper (1.00.4+cvs.2007.05.01-4) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Running mktexlsr. This may take some time... done. Setting up texlive (2012.20120611-3~ubuntu12.04.1) ... Setting up latex-xcolor (2.11-1) ... mktexlsr: Updating /usr/local/share/texmf/ls-R... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST... mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... mktexlsr: Updating /var/lib/texmf/ls-R... mktexlsr: Done. Setting up pgf (2.10-1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Processing triggers for tex-common ... Running mktexlsr. This may take some time... done. Setting up latex-beamer (3.10-1) ... mktexlsr: Updating /usr/local/share/texmf/ls-R... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST... mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... mktexlsr: Updating /var/lib/texmf/ls-R... mktexlsr: Done. Processing triggers for libc-bin ... ldconfig deferred processing now taking place What exactly is 10lmodern.cfg good for? How can I prevent this warnings? Here is the output of sudo update-updmap: $ sudo update-updmap Regenerating '/var/lib/texmf/updmap.cfg-DEBIAN'... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg done. Regenerating '/var/lib/texmf/updmap.cfg-TEXLIVEDIST'... done. 
update-updmap has updated the following file(s): /var/lib/texmf/updmap.cfg-DEBIAN /var/lib/texmf/updmap.cfg-TEXLIVEDIST If you want to enable the map files with this new file, you should run updmap-sys or updmap.

    Read the article

  • Scaling-out Your Services by Message Bus based WCF Transport Extension – Part 1 – Background

    - by Shaun
    Cloud computing gives us more flexibility with computing resources: we can provision and deploy an application or service as multiple instances across multiple machines. As the number of service instances grows, balancing the incoming messages and workload becomes a new challenge. Currently there are two approaches we can use to pass incoming messages to the service instances; I will call them dispatcher mode and pulling mode.   Dispatcher Mode The dispatcher mode introduces a role that is responsible for finding the best service instance to process each request. The image below describes the shape of this mode. Four clients communicate with the service through the underlying transport. For example, if we are using HTTP, the clients might all connect to the same service URL. On the server side there is a dispatcher listening on that URL and retrieving all messages. When a message comes in, the dispatcher finds a proper service instance to process it. There are three mechanisms for choosing the instance: Round-robin: the dispatcher always sends the message to the next instance. For example, if the dispatcher sent the previous message to instance 2, the next message goes to instance 3, regardless of whether instance 3 is busy at that moment. Random: the dispatcher picks a service instance at random and, like round-robin, ignores whether the instance is busy. Sticky: the dispatcher sends all related messages to the same service instance. This approach is typically used when the service methods are stateful or session-based. As you can see, none of these approaches is truly load balanced. Clients send messages at any time, and each message may take a different amount of processing time on the server side. This means that in some cases a few service instances are very busy while others are almost idle. For example, with round-robin it could happen that most of the simple messages go to instance 1 while the complex ones go to instance 3, even though instance 1 is idle. This causes several problems in our architecture. First, the response to the clients might be slower than it should be. As shown in the figure above, messages 6 and 9 could be processed by instance 1 or instance 2, but in reality they were dispatched to the busy instance 3 because of the dispatcher and round-robin mode. Second, if many requests arrive from the clients in a very short period, the service instances can fill up with pending tasks and some instances might crash. Third, if we host our service instances on a cloud platform, for example Windows Azure, computing resources are billed by deployment time rather than actual CPU usage, so every idle service instance is wasting our money. Finally, the dispatcher becomes the bottleneck of the system, since every incoming message must be routed through it. If we are using HTTP or TCP as the transport, the dispatcher would be a network load balancer; if we want more capacity we have to scale it up or buy an expensive hardware load balancer, in addition to scaling out the service instances. Pulling Mode Pulling mode does not need a dispatcher to route the messages. All service instances listen on the same transport and, whenever they are idle, try to retrieve the next suitable message to process.
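    To make the idea concrete, each instance essentially runs a small self-managed receive loop. The sketch below is my own illustration of that loop, not code from this series; the IMessageSource interface and ProcessMessage method are placeholders.

    // Hedged sketch of a pulling-mode worker; the bus abstraction is hypothetical.
    public class PullingWorker
    {
        private readonly IMessageSource _source;   // e.g. a queue or message bus client
        private volatile bool _running = true;

        public PullingWorker(IMessageSource source)
        {
            _source = source;
        }

        public void Run()
        {
            while (_running)
            {
                // Block until the bus hands this idle instance the next pending message.
                var message = _source.ReceiveNext();
                if (message != null)
                {
                    ProcessMessage(message);   // business logic for one message
                }
            }
        }

        public void Stop()
        {
            _running = false;
        }

        private void ProcessMessage(object message)
        {
            // Handle the message, then loop back and pull the next one.
        }
    }

    public interface IMessageSource
    {
        object ReceiveNext();   // returns the next pending message, blocking while the queue is empty
    }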
    Since there is no dispatcher in pulling mode, it places some requirements on the transport. The transport must support multiple clients connecting and multiple servers listening. HTTP and TCP do not allow multiple listeners on the same address and port, so they cannot be used for pulling mode directly. All messages in the transport must be FIFO, meaning the older message must be received before the newer one. Message selection would be a plus: both service and client could specify selection criteria and receive only certain kinds of messages. This feature is not mandatory, but it is very useful when implementing the request-reply and duplex WCF channel modes; otherwise we must keep an in-memory dictionary to store the reply messages. I will explain more about this in the following articles. A message bus, or message queue, is the best candidate transport for pulling mode. First, it allows multiple applications to listen on the same queue, and it is FIFO. Some message buses also support message selection, such as TIBCO EMS and RabbitMQ; others provide an in-memory dictionary that can store the reply messages, for example Redis. The principle of pulling mode is that the service instances are self-managed: each instance retrieves the next pending incoming message as soon as it finishes its current task. This gives us several benefits and solves the problems we met in dispatcher mode. Each incoming message is received by the best instance available to process it, so the load is well balanced; it will not happen that some instances are busy while others are idle, because an idle instance simply pulls more work. Since all instances keep themselves as busy as possible, we can use fewer instances than in dispatcher mode, which is more cost effective. Since there is no dispatcher in the system, there is no bottleneck. When we introduce more service instances, in dispatcher mode we have to reconfigure the dispatcher to know about the new instances, but in pulling mode the instances are self-managed and no extra change is needed at all. And if many messages arrive at once, the message bus simply queues them, so the service instances will not crash. Those are the benefits of pulling mode, but it introduces some problems as well. Process tracking and debugging become more difficult: because the instances are self-managed, we cannot know in advance which instance will process a given message, so we need more information to support debugging and tracing. Real-time response may not be supported: every instance only picks up the next message after finishing the current one, so this may not be a good solution for real-time requests. Comparing the pros and cons above, pulling mode is the better solution for a distributed system architecture, because what we need most is scalability, cost effectiveness and self-management.   WCF and WCF Transport Extensibility Windows Communication Foundation (WCF) is a framework for building service-oriented applications, and in the .NET world it is the standard way to implement a service. In this series I am going to demonstrate how to implement pulling mode on top of a message bus by extending WCF. I do not want to dive deep into every related area of WCF, but I will highlight its transport extensibility.
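    As an early preview of what that transport extensibility looks like in code, the heart of it is a custom transport binding element. The skeleton below is only my illustration of its general shape; the class name, the "net.bus" scheme and the choice to advertise only the request-reply channels are assumptions, not details taken from this series (the real classes are built up step by step in the later posts).

    using System;
    using System.ServiceModel.Channels;

    // Hedged sketch of a message-bus transport binding element.
    public class MessageBusTransportBindingElement : TransportBindingElement
    {
        // The URI scheme endpoints would use when addressed over this transport (illustrative).
        public override string Scheme
        {
            get { return "net.bus"; }
        }

        public override BindingElement Clone()
        {
            return new MessageBusTransportBindingElement();
        }

        // Advertise which channel shapes (MEPs) this transport can build.
        public override bool CanBuildChannelFactory<TChannel>(BindingContext context)
        {
            return typeof(TChannel) == typeof(IRequestChannel);
        }

        public override bool CanBuildChannelListener<TChannel>(BindingContext context)
        {
            return typeof(TChannel) == typeof(IReplyChannel);
        }

        public override IChannelFactory<TChannel> BuildChannelFactory<TChannel>(BindingContext context)
        {
            // Would return a factory that creates client-side channels sending over the bus.
            throw new NotImplementedException("Client-side channel factory goes here.");
        }

        public override IChannelListener<TChannel> BuildChannelListener<TChannel>(BindingContext context)
        {
            // Would return a listener that creates server-side channels pulling from the bus.
            throw new NotImplementedException("Server-side channel listener goes here.");
        }
    }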
When we implemented an RPC foundation there are many aspects we need to deal with, for example the message encoding, encryption, authentication and message sending and receiving. In WCF, each aspect is represented by a channel. A message will be passed through all necessary channels and finally send to the underlying transportation. And on the other side the message will be received from the transport and though the same channels until the business logic. This mode is called “Channel Stack” in WCF, and the last channel in the channel stack must always be a transport channel, which takes the responsible for sending and receiving the messages. As we are going to implement the WCF over message bus and implement the pulling mode scaling-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus. Before we deep into the transport channel, let’s have a look on the message exchange patterns that WCF defines. Message exchange pattern (MEP) defines how client and service exchange the messages over the transportation. WCF defines 3 basic MEPs which are datagram, Request-Reply and Duplex. Datagram: Also known as one-way, or fire-forgot mode. The message sent from the client to the service, and no need any reply from the service. The client doesn’t care about the message result at all. Request-Reply: Very common used pattern. The client send the request message to the service and wait until the reply message comes from the service. Duplex: The client sent message to the service, when the service processing the message it can callback to the client. When callback the service would be like a client while the client would be like a service. In WCF, each MEP represent some channels associated. MEP Channels Datagram IInputChannel, IOutputChannel Request-Reply IRequestChannel, IReplyChannel Duplex IDuplexChannel And the channels are created by ChannelListener on the server side, and ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement. The TransportBindingElement is created by the Binding, which can be defined as a new binding or from a custom binding. For more information about the transport channel mode, please refer to the MSDN document. The figure below shows the transport channel objects when using the request-reply MEP. And this is the datagram MEP. And this is the duplex MEP. After investigated the WCF transport architecture, channel mode and MEP, we finally identified what we should do to extend our message bus based transport layer. They are: Binding: (Optional) Defines the channel elements in the channel stack and added our transport binding element at the bottom of the stack. But we can use the build-in CustomBinding as well. TransportBindingElement: Defines which MEP is supported in our transport and create the related ChannelListener and ChannelFactory. This also defines the scheme of the endpoint if using this transport. ChannelListener: Create the server side channel based on the MEP it’s. We can have one ChannelListener to create channels for all supported MEPs, or we can have ChannelListener for each MEP. In this series I will use the second approach. ChannelFactory: Create the client side channel based on the MEP it’s. We can have one ChannelFactory to create channels for all supported MEPs, or we can have ChannelFactory for each MEP. In this series I will use the second approach. 
Channels: Based on the MEPs we want to support, we need to implement the channels accordingly. For example, if we want our transport support Request-Reply mode we should implement IRequestChannel and IReplyChannel. In this series I will implement all 3 MEPs listed above one by one. Scaffold: In order to make our transport extension works we also need to implement some scaffold stuff. For example we need some classes to send and receive message though out message bus. We also need some codes to read and write the WCF message, etc.. These are not necessary but would be very useful in our example.   Message Bus There is only one thing remained before we can begin to implement our scaling-out support WCF transport, which is the message bus. As I mentioned above, the message bus must have some features to fulfill all the WCF MEPs. In my company we will be using TIBCO EMS, which is an enterprise message bus product. And I have said before we can use any message bus production if it’s satisfied with our requests. Here I would like to introduce an interface to separate the message bus from the WCF. This allows us to implement the bus operations by any kinds bus we are going to use. The interface would be like this. 1: public interface IBus : IDisposable 2: { 3: string SendRequest(string message, bool fromClient, string from, string to = null); 4:  5: void SendReply(string message, bool fromClient, string replyTo); 6:  7: BusMessage Receive(bool fromClient, string replyTo); 8: } There are only three methods for the bus interface. Let me explain one by one. The SendRequest method takes the responsible for sending the request message into the bus. The parameters description are: message: The WCF message content. fromClient: Indicates if this message was came from the client. from: The channel ID that this message was sent from. The channel ID will be generated when any kinds of channel was created, which will be explained in the following articles. to: The channel ID that this message should be received. In Request-Reply and Duplex MEP this is necessary since the reply message must be received by the channel which sent the related request message. The SendReply method takes the responsible for sending the reply message. It’s very similar as the previous one but no “from” parameter. This is because it’s no need to reply a reply message again in any MEPs. The Receive method takes the responsible for waiting for a incoming message, includes the request message and specified reply message. It returned a BusMessage object, which contains some information about the channel information. The code of the BusMessage class is 1: public class BusMessage 2: { 3: public string MessageID { get; private set; } 4: public string From { get; private set; } 5: public string ReplyTo { get; private set; } 6: public string Content { get; private set; } 7:  8: public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content) 9: { 10: MessageID = messageId; 11: From = fromChannelId; 12: ReplyTo = replyToChannelId; 13: Content = content; 14: } 15: } Now let’s implement a message bus based on the IBus interface. Since I don’t want you to buy and install the TIBCO EMS or any other message bus products, I will implement an in process memory bus. This bus is only for test and sample purpose. It can only be used if the service and client are in the same process. Very straightforward. 
1: public class InProcMessageBus : IBus 2: { 3: private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue; 4: private readonly object _lock; 5:  6: public InProcMessageBus() 7: { 8: _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>(); 9: _lock = new object(); 10: } 11:  12: public string SendRequest(string message, bool fromClient, string from, string to = null) 13: { 14: var entity = new InProcMessageEntity(message, fromClient, from, to); 15: _queue.TryAdd(entity.ID, entity); 16: return entity.ID.ToString(); 17: } 18:  19: public void SendReply(string message, bool fromClient, string replyTo) 20: { 21: var entity = new InProcMessageEntity(message, fromClient, null, replyTo); 22: _queue.TryAdd(entity.ID, entity); 23: } 24:  25: public BusMessage Receive(bool fromClient, string replyTo) 26: { 27: InProcMessageEntity e = null; 28: while (true) 29: { 30: lock (_lock) 31: { 32: var entity = _queue 33: .Where(kvp => kvp.Value.FromClient == fromClient && (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To))) 34: .FirstOrDefault(); 35: if (entity.Key != Guid.Empty && entity.Value != null) 36: { 37: _queue.TryRemove(entity.Key, out e); 38: } 39: } 40: if (e == null) 41: { 42: Thread.Sleep(100); 43: } 44: else 45: { 46: return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content); 47: } 48: } 49: } 50:  51: public void Dispose() 52: { 53: } 54: } The InProcMessageBus stores the messages in the objects of InProcMessageEntity, which can take some extra information beside the WCF message itself. 1: public class InProcMessageEntity 2: { 3: public Guid ID { get; set; } 4: public string Content { get; set; } 5: public bool FromClient { get; set; } 6: public string From { get; set; } 7: public string To { get; set; } 8:  9: public InProcMessageEntity() 10: : this(string.Empty, false, string.Empty, string.Empty) 11: { 12: } 13:  14: public InProcMessageEntity(string content, bool fromClient, string from, string to) 15: { 16: ID = Guid.NewGuid(); 17: Content = content; 18: FromClient = fromClient; 19: From = from; 20: To = to; 21: } 22: }   Summary OK, now I have all necessary stuff ready. The next step would be implementing our WCF message bus transport extension. In this post I described two scaling-out approaches on the service side especially if we are using the cloud platform: dispatcher mode and pulling mode. And I compared the Pros and Cons of them. Then I introduced the WCF channel stack, channel mode and the transport extension part, and identified what we should do to create our own WCF transport extension, to let our WCF services using pulling mode based on a message bus. And finally I provided some classes that need to be used in the future posts that working against an in process memory message bus, for the demonstration purpose only. In the next post I will begin to implement the transport extension step by step.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Security Issues with Single Page Apps

    - by Stephen.Walther
    Last week, I was asked to do a code review of a Single Page App built using the ASP.NET Web API, Durandal, and Knockout (good stuff!). In particular, I was asked to investigate whether there any special security issues associated with building a Single Page App which are not present in the case of a traditional server-side ASP.NET application. In this blog entry, I discuss two areas in which you need to exercise extra caution when building a Single Page App. I discuss how Single Page Apps are extra vulnerable to both Cross-Site Scripting (XSS) attacks and Cross-Site Request Forgery (CSRF) attacks. This goal of this blog post is NOT to persuade you to avoid writing Single Page Apps. I’m a big fan of Single Page Apps. Instead, the goal is to ensure that you are fully aware of some of the security issues related to Single Page Apps and ensure that you know how to guard against them. Cross-Site Scripting (XSS) Attacks According to WhiteHat Security, over 65% of public websites are open to XSS attacks. That’s bad. By taking advantage of XSS holes in a website, a hacker can steal your credit cards, passwords, or bank account information. Any website that redisplays untrusted information is open to XSS attacks. Let me give you a simple example. Imagine that you want to display the name of the current user on a page. To do this, you create the following server-side ASP.NET page located at http://MajorBank.com/SomePage.aspx: <%@Page Language="C#" %> <html> <head> <title>Some Page</title> </head> <body> Welcome <%= Request["username"] %> </body> </html> Nothing fancy here. Notice that the page displays the current username by using Request[“username”]. Using Request[“username”] displays the username regardless of whether the username is present in a cookie, a form field, or a query string variable. Unfortunately, by using Request[“username”] to redisplay untrusted information, you have now opened your website to XSS attacks. Here’s how. Imagine that an evil hacker creates the following link on another website (hackers.com): <a href="/SomePage.aspx?username=<script src=Evil.js></script>">Visit MajorBank</a> Notice that the link includes a query string variable named username and the value of the username variable is an HTML <SCRIPT> tag which points to a JavaScript file named Evil.js. When anyone clicks on the link, the <SCRIPT> tag will be injected into SomePage.aspx and the Evil.js script will be loaded and executed. What can a hacker do in the Evil.js script? Anything the hacker wants. For example, the hacker could display a popup dialog on the MajorBank.com site which asks the user to enter their password. The script could then post the password back to hackers.com and now the evil hacker has your secret password. ASP.NET Web Forms and ASP.NET MVC have two automatic safeguards against this type of attack: Request Validation and Automatic HTML Encoding. Protecting Coming In (Request Validation) In a server-side ASP.NET app, you are protected against the XSS attack described above by a feature named Request Validation. If you attempt to submit “potentially dangerous” content — such as a JavaScript <SCRIPT> tag — in a form field or query string variable then you get an exception. Unfortunately, Request Validation only applies to server-side apps. Request Validation does not help in the case of a Single Page App. In particular, the ASP.NET Web API does not pay attention to Request Validation. You can post any content you want – including <SCRIPT> tags – to an ASP.NET Web API action. 
For example, the following HTML page contains a form. When you submit the form, the form data is submitted to an ASP.NET Web API controller on the server using an Ajax request: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title></title> </head> <body> <form data-bind="submit:submit"> <div> <label> User Name: <input data-bind="value:user.userName" /> </label> </div> <div> <label> Email: <input data-bind="value:user.email" /> </label> </div> <div> <input type="submit" value="Submit" /> </div> </form> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { user: { userName: ko.observable(), email: ko.observable() }, submit: function () { $.post("/api/users", ko.toJS(this.user)); } }; ko.applyBindings(viewModel); </script> </body> </html> The form above is using Knockout to bind the form fields to a view model. When you submit the form, the view model is submitted to an ASP.NET Web API action on the server. Here’s the server-side ASP.NET Web API controller and model class: public class UsersController : ApiController { public HttpResponseMessage Post(UserViewModel user) { var userName = user.UserName; return Request.CreateResponse(HttpStatusCode.OK); } } public class UserViewModel { public string UserName { get; set; } public string Email { get; set; } } If you submit the HTML form, you don’t get an error. The “potentially dangerous” content is passed to the server without any exception being thrown. In the screenshot below, you can see that I was able to post a username form field with the value “<script>alert(‘boo’)</script”. So what this means is that you do not get automatic Request Validation in the case of a Single Page App. You need to be extra careful in a Single Page App about ensuring that you do not display untrusted content because you don’t have the Request Validation safety net which you have in a traditional server-side ASP.NET app. Protecting Going Out (Automatic HTML Encoding) Server-side ASP.NET also protects you from XSS attacks when you render content. By default, all content rendered by the razor view engine is HTML encoded. For example, the following razor view displays the text “<b>Hello!</b>” instead of the text “Hello!” in bold: @{ var message = "<b>Hello!</b>"; } @message   If you don’t want to render content as HTML encoded in razor then you need to take the extra step of using the @Html.Raw() helper. In a Web Form page, if you use <%: %> instead of <%= %> then you get automatic HTML Encoding: <%@ Page Language="C#" %> <% var message = "<b>Hello!</b>"; %> <%: message %> This automatic HTML Encoding will prevent many types of XSS attacks. It prevents <script> tags from being rendered and only allows &lt;script&gt; tags to be rendered which are useless for executing JavaScript. (This automatic HTML encoding does not protect you from all forms of XSS attacks. For example, you can assign the value “javascript:alert(‘evil’)” to the Hyperlink control’s NavigateUrl property and execute the JavaScript). The situation with Knockout is more complicated. If you use the Knockout TEXT binding then you get HTML encoded content. 
On the other hand, if you use the HTML binding then you do not: <!-- This JavaScript DOES NOT execute --> <div data-bind="text:someProp"></div> <!-- This Javacript DOES execute --> <div data-bind="html:someProp"></div> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { someProp : "<script>alert('Evil!')<" + "/script>" }; ko.applyBindings(viewModel); </script>   So, in the page above, the DIV element which uses the TEXT binding is safe from XSS attacks. According to the Knockout documentation: “Since this binding sets your text value using a text node, it’s safe to set any string value without risking HTML or script injection.” Just like server-side HTML encoding, Knockout does not protect you from all types of XSS attacks. For example, there is nothing in Knockout which prevents you from binding JavaScript to a hyperlink like this: <a data-bind="attr:{href:homePageUrl}">Go</a> <script src="Scripts/jquery-1.7.1.min.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { homePageUrl: "javascript:alert('evil!')" }; ko.applyBindings(viewModel); </script> In the page above, the value “javascript:alert(‘evil’)” is bound to the HREF attribute using Knockout. When you click the link, the JavaScript executes. Cross-Site Request Forgery (CSRF) Attacks Cross-Site Request Forgery (CSRF) attacks rely on the fact that a session cookie does not expire until you close your browser. In particular, if you visit and login to MajorBank.com and then you navigate to Hackers.com then you will still be authenticated against MajorBank.com even after you navigate to Hackers.com. Because MajorBank.com cannot tell whether a request is coming from MajorBank.com or Hackers.com, Hackers.com can submit requests to MajorBank.com pretending to be you. For example, Hackers.com can post an HTML form from Hackers.com to MajorBank.com and change your email address at MajorBank.com. Hackers.com can post a form to MajorBank.com using your authentication cookie. After your email address has been changed, by using a password reset page at MajorBank.com, a hacker can access your bank account. To prevent CSRF attacks, you need some mechanism for detecting whether a request is coming from a page loaded from your website or whether the request is coming from some other website. The recommended way of preventing Cross-Site Request Forgery attacks is to use the “Synchronizer Token Pattern” as described here: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet When using the Synchronizer Token Pattern, you include a hidden input field which contains a random token whenever you display an HTML form. When the user opens the form, you add a cookie to the user’s browser with the same random token. When the user posts the form, you verify that the hidden form token and the cookie token match. Preventing Cross-Site Request Forgery Attacks with ASP.NET MVC ASP.NET gives you a helper and an action filter which you can use to thwart Cross-Site Request Forgery attacks. 
For example, the following razor form for creating a product shows how you use the @Html.AntiForgeryToken() helper: @model MvcApplication2.Models.Product <h2>Create Product</h2> @using (Html.BeginForm()) { @Html.AntiForgeryToken(); <div> @Html.LabelFor( p => p.Name, "Product Name:") @Html.TextBoxFor( p => p.Name) </div> <div> @Html.LabelFor( p => p.Price, "Product Price:") @Html.TextBoxFor( p => p.Price) </div> <input type="submit" /> } The @Html.AntiForgeryToken() helper generates a random token and assigns a serialized version of the same random token to both a cookie and a hidden form field. (Actually, if you dive into the source code, the AntiForgeryToken() does something a little more complex because it takes advantage of a user’s identity when generating the token). Here’s what the hidden form field looks like: <input name=”__RequestVerificationToken” type=”hidden” value=”NqqZGAmlDHh6fPTNR_mti3nYGUDgpIkCiJHnEEL59S7FNToyyeSo7v4AfzF2i67Cv0qTB1TgmZcqiVtgdkW2NnXgEcBc-iBts0x6WAIShtM1″ /> And here’s what the cookie looks like using the Google Chrome developer toolbar: You use the [ValidateAntiForgeryToken] action filter on the controller action which is the recipient of the form post to validate that the token in the hidden form field matches the token in the cookie. If the tokens don’t match then validation fails and you can’t post the form: public ActionResult Create() { return View(); } [ValidateAntiForgeryToken] [HttpPost] public ActionResult Create(Product productToCreate) { if (ModelState.IsValid) { // save product to db return RedirectToAction("Index"); } return View(); } How does this all work? Let’s imagine that a hacker has copied the Create Product page from MajorBank.com to Hackers.com – the hacker grabs the HTML source and places it at Hackers.com. Now, imagine that the hacker trick you into submitting the Create Product form from Hackers.com to MajorBank.com. You’ll get the following exception: The Cross-Site Request Forgery attack is blocked because the anti-forgery token included in the Create Product form at Hackers.com won’t match the anti-forgery token stored in the cookie in your browser. The tokens were generated at different times for different users so the attack fails. Preventing Cross-Site Request Forgery Attacks with a Single Page App In a Single Page App, you can’t prevent Cross-Site Request Forgery attacks using the same method as a server-side ASP.NET MVC app. In a Single Page App, HTML forms are not generated on the server. Instead, in a Single Page App, forms are loaded dynamically in the browser. Phil Haack has a blog post on this topic where he discusses passing the anti-forgery token in an Ajax header instead of a hidden form field. He also describes how you can create a custom anti-forgery token attribute to compare the token in the Ajax header and the token in the cookie. See: http://haacked.com/archive/2011/10/10/preventing-csrf-with-ajax.aspx Also, take a look at Johan’s update to Phil Haack’s original post: http://johan.driessen.se/posts/Updated-Anti-XSRF-Validation-for-ASP.NET-MVC-4-RC (Other server frameworks such as Rails and Django do something similar. For example, Rails uses an X-CSRF-Token to prevent CSRF attacks which you generate on the server – see http://excid3.com/blog/rails-tip-2-include-csrf-token-with-every-ajax-request/#.UTFtgDDkvL8 ). 
For example, if you are creating a Durandal app, then you can use the following razor view for your one and only server-side page: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> @Html.AntiForgeryToken() <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that this page includes a call to @Html.AntiForgeryToken() to generate the anti-forgery token. Then, whenever you make an Ajax request in the Durandal app, you can retrieve the anti-forgery token from the razor view and pass the token as a header: var csrfToken = $("input[name='__RequestVerificationToken']").val(); $.ajax({ headers: { __RequestVerificationToken: csrfToken }, type: "POST", dataType: "json", contentType: 'application/json; charset=utf-8', url: "/api/products", data: JSON.stringify({ name: "Milk", price: 2.33 }), statusCode: { 200: function () { alert("Success!"); } } }); Use the following code to create an action filter which you can use to match the header and cookie tokens: using System.Linq; using System.Net.Http; using System.Web.Helpers; using System.Web.Http.Controllers; namespace MvcApplication2.Infrastructure { public class ValidateAjaxAntiForgeryToken : System.Web.Http.AuthorizeAttribute { protected override bool IsAuthorized(HttpActionContext actionContext) { var headerToken = actionContext .Request .Headers .GetValues("__RequestVerificationToken") .FirstOrDefault(); ; var cookieToken = actionContext .Request .Headers .GetCookies() .Select(c => c[AntiForgeryConfig.CookieName]) .FirstOrDefault(); // check for missing cookie or header if (cookieToken == null || headerToken == null) { return false; } // ensure that the cookie matches the header try { AntiForgery.Validate(cookieToken.Value, headerToken); } catch { return false; } return base.IsAuthorized(actionContext); } } } Notice that the action filter derives from the base AuthorizeAttribute. The ValidateAjaxAntiForgeryToken only works when the user is authenticated and it will not work for anonymous requests. Add the action filter to your ASP.NET Web API controller actions like this: [ValidateAjaxAntiForgeryToken] public HttpResponseMessage PostProduct(Product productToCreate) { // add product to db return Request.CreateResponse(HttpStatusCode.OK); } After you complete these steps, it won’t be possible for a hacker to pretend to be you at Hackers.com and submit a form to MajorBank.com. The header token used in the Ajax request won’t travel to Hackers.com. This approach works, but I am not entirely happy with it. The one thing that I don’t like about this approach is that it creates a hard dependency on using razor. Your single page in your Single Page App must be generated from a server-side razor view. A better solution would be to generate the anti-forgery token in JavaScript. Unfortunately, until all browsers support a way to generate cryptographically strong random numbers – for example, by supporting the window.crypto.getRandomValues() method — there is no good way to generate anti-forgery tokens in JavaScript. So, at least right now, the best solution for generating the tokens is the server-side solution with the (regrettable) dependency on razor. Conclusion The goal of this blog entry was to explore some ways in which you need to handle security differently in the case of a Single Page App than in the case of a traditional server app. 
In particular, I focused on how to prevent Cross-Site Scripting and Cross-Site Request Forgery attacks in the case of a Single Page App. I want to emphasize that I am not suggesting that Single Page Apps are inherently less secure than server-side apps. Whatever type of web application you build – regardless of whether it is a Single Page App, an ASP.NET MVC app, an ASP.NET Web Forms app, or a Rails app – you must constantly guard against security vulnerabilities.

    Read the article

  • iPhone SDK vs Windows Phone 7 Series SDK Challenge, Part 1: Hello World!

    In this series, I will be taking sample applications from the iPhone SDK and implementing them on Windows Phone 7 Series.  My goal is to do as much of an apples-to-apples comparison as I can.  This series will be written to not only compare and contrast how easy or difficult it is to complete tasks on either platform, how many lines of code, etc., but I'd also like it to be a way for iPhone developers to either get started on Windows Phone 7 Series development, or for developers in general to learn the platform. Here's my methodology:
    Run the iPhone SDK app in the iPhone Simulator to get a feel for what it does and how it works, without looking at the implementation.
    Implement the equivalent functionality on Windows Phone 7 Series using Silverlight.
    Compare the two implementations based on complexity, functionality, lines of code, number of files, etc.
    Add some functionality to the Windows Phone 7 Series app that shows off a way to make the scenario more interesting or leverages an aspect of the platform, or uses a better design pattern to implement the functionality.
    You can download Microsoft Visual Studio 2010 Express for Windows Phone CTP here, and the Expression Blend 4 Beta here. Hello World! Of course no first post would be allowed if it didn't focus on the hello world scenario.  The iPhone SDK follows that tradition with the Your First iPhone Application walkthrough.  I will say that the developer documentation for iPhone is pretty good.  There are plenty of walkthroughs and they break things down into nicely sized steps and do a good job of bringing the user along.  As expected, this application is quite simple.  It comprises a text box, a label, and a button.  When you push the button, the label changes to Hello plus the word you typed into the text box.  Makes perfect sense for a starter application.  There's not much to this but it covers a few basic elements: laying out basic UI, handling user input, hooking up events, and formatting text. So, let's get started building a similar app for Windows Phone 7 Series! Implementing the UI: UI in Silverlight (and therefore Windows Phone 7) is defined in XAML, which is a declarative XML language also used by WPF on the desktop.  For anyone that's familiar with similar types of markup, it's relatively straightforward to learn, but has a lot of power in it once you get it figured out.  We'll talk more about that. This UI is very simple.  When I look at this, I note a couple of things: the elements are arranged vertically, and they are all centered. So, let's create our Application and then start with the UI.  Once you have the VS 2010 Express for Windows Phone tool running, create a new Windows Phone Project, and call it Hello World: Once created, you'll see the designer on one side and your XAML on the other: Now, we can create our UI in one of three ways:
    1. Use the designer in Visual Studio to drag and drop the components
    2. Use the designer in Expression Blend 4 to drag and drop the components
    3. Enter the XAML by hand in either of the above
    We'll start with (1), then kind of move to (3) just for instructional value. To develop this UI in the designer: First, delete all of the markup inside of the Grid element (LayoutRoot).
You should be left with just this XAML for your MainPage.xaml (I shortened all the xmlns declarations below for brevity):
1: <phoneNavigation:PhoneApplicationPage
2:     x:Class="HelloWorld.MainPage"
3:     xmlns="...[snip]"
4:     FontFamily="{StaticResource PhoneFontFamilyNormal}"
5:     FontSize="{StaticResource PhoneFontSizeNormal}"
6:     Foreground="{StaticResource PhoneForegroundBrush}">
7:
8:     <Grid x:Name="LayoutRoot" Background="{StaticResource PhoneBackgroundBrush}">
9:
10:     </Grid>
11:
12: </phoneNavigation:PhoneApplicationPage>
We'll be adding XAML at line 9, so that's the important part. Now, click on the center area of the phone surface; open the Toolbox and double click StackPanel; double click TextBox; double click TextBlock; double click Button. That will create the necessary UI elements, but they won't be arranged quite right.  We'll fix that in a second.
Here's the XAML that we end up with:
1: <StackPanel Height="100" HorizontalAlignment="Left" Margin="10,10,0,0" Name="stackPanel1" VerticalAlignment="Top" Width="200">
2:     <TextBox Height="32" Name="textBox1" Text="TextBox" Width="100" />
3:     <TextBlock Height="23" Name="textBlock1" Text="TextBlock" />
4:     <Button Content="Button" Height="70" Name="button1" Width="160" />
5: </StackPanel>
The designer does its best at guessing what we want, but in this case we want things to be a bit simpler, so we'll just clean it up a bit.  We want the items to be centered and we want them to have a little bit of a margin on either side, so here's what we end up with.  I've also made it match the values and style from the iPhone app:
1: <StackPanel Margin="10">
2:     <TextBox Name="textBox1" HorizontalAlignment="Stretch" Text="You" TextAlignment="Center"/>
3:     <TextBlock Name="textBlock1" HorizontalAlignment="Center" Margin="0,100,0,0" Text="Hello You!" />
4:     <Button Name="button1" HorizontalAlignment="Center" Margin="0,150,0,0" Content="Hello"/>
5: </StackPanel>
Now let's take a look at what we've done there.
Line 1: We removed all of the formatting from the StackPanel, except for Margin, as that's all we need.  Since our parent element is a Grid, by default the StackPanel will be sized to fit in that space.  The Margin says that we want to reserve 10 pixels on each side of the StackPanel.
Line 2: We've set the HorizontalAlignment of the TextBox to Stretch, which says that it should fill its parent's size horizontally.  We want to do this so the TextBox is always full-width.  We also set TextAlignment to Center, to center the text.
Line 3: In contrast to the TextBox above, we don't care how wide the TextBlock is, just so long as it is big enough for its text.  That'll happen automatically, so we just set its HorizontalAlignment to Center.  We also set a Margin above the TextBlock of 100 pixels to bump it down a bit, per the iPhone UI.
Line 4: We do the same things here as in Line 3.
Here's how the UI looks in the designer: Believe it or not, we're almost done! Implementing the App Logic: Now, we want the TextBlock to change its text when the Button is clicked.  In the designer, double click the Button to be taken to the event handler for the Button's Click event.  In that event handler, we take the Text property from the TextBox, format it into a string, then set it into the TextBlock.  That's it!
1: private void button1_Click(object sender, RoutedEventArgs e)
2: {
3:     string name = textBox1.Text;
4:
5:     // if there isn't a name set, just use "World"
6:     if (String.IsNullOrEmpty(name))
7:     {
8:         name = "World";
9:     }
10:
11:     // set the value into the TextBlock
12:     textBlock1.Text = String.Format("Hello {0}!", name);
13:
14: }
We use the String.Format() method to handle the formatting for us.  Now all that's left is to test the app in the Windows Phone Emulator and verify it does what we think it does! And it does! Comparing against the iPhone: Looking at the iPhone example, there are basically three things that you have to touch as the developer: (1) the UI in the Nib file, (2) the app delegate, and (3) the view controller. Counting lines is a bit tricky here, but to try to keep this even, I'm going to only count lines of code that I could not have (or would not have) generated with the tooling.  Meaning, I'm not counting XAML and I'm not counting operations that happen in the Nib file with the XCode designer tool.  So in the case of the above, even though I modified the XAML, I could have done all of those operations using the visual designer tool.  And normally I would have, but the XAML is more instructive (and fewer steps!).  I'm interested in things that I, as the developer, have to figure out in code.  I'm also not counting lines that just have a curly brace on them, or lines that are generated for me (e.g. method names that are generated for me when I make a connection, etc.). So, by that count, here's what I get from the code listing for the iPhone app found here: HelloWorldAppDelegate.h: 6, HelloWorldAppDelegate.m: 12, MyViewController.h: 8, MyViewController.m: 18. That gives me a grand total of about 44 lines of code on iPhone.  I really do recommend looking at the iPhone code for a comparison to the above. Now, for the Windows Phone 7 Series application, the only code I typed was in the event handler above: Main.Xaml.cs: 4. So, a total of 4 lines of code on Windows Phone 7.  And more importantly, the process is just A LOT simpler.  For example, I was surprised that the User Interface Designer in XCode doesn't automatically create instance variables for me and wire them up to the corresponding elements.  I assumed I wouldn't have to write this code myself (and risk getting it wrong!).  I don't need to worry about view controllers or anything.  I just write my code.  This blog post up to this point has covered almost every aspect of this app's development in a few pages.  The iPhone tutorial has 5 top-level steps with 2-3 sub-sections each. Now, it's worth pointing out that the iPhone development model uses the Model View Controller (MVC) pattern, which is a very flexible and powerful pattern that enforces proper separation of concerns.  But it's fairly complex and difficult to understand when you first walk up to it.
Here at Microsoft we've dabbled in MVC a bit, with frameworks like MFC on Visual C++ and with the ASP.NET MVC framework now.  Both are very powerful frameworks.  But one of the reasons we've stayed away from MVC with client UI frameworks is that it's difficult to tool.  We haven't seen the type of value that beats "double click, write code!" for the broad set of scenarios. Another thing to think about is how many of those lines of code were focused on my app's functionality, and conversely, how many lines of code were boilerplate plumbing?  In both examples, the actual number of functional code lines is similar.  I count most of them in MyViewController.m, in the changeGreeting method.  It's about 7 lines of code that do the work of taking the value from the TextBox and putting it into the label, versus 4 on the Windows Phone 7 side.  But, unfortunately, on iPhone I still have to write that other 37 lines of code just to get there. 10% of the code, 1 file instead of 4; it's just much simpler. Making Some Tweaks: It turns out I can actually do this application with ZERO lines of code, if I'm willing to change the spec a bit. The data binding functionality in Silverlight is incredibly powerful.  What I can do is databind the TextBox's value directly to the TextBlock.  Take some time looking at the XAML below.  You'll see that I have added another nested StackPanel and two more TextBlocks.  Why?  Because that's how I build that string, and the nested StackPanel will lay things out horizontally for me, as specified by the Orientation property.
1: <StackPanel Margin="10">
2:     <TextBox Name="textBox1" HorizontalAlignment="Stretch" Text="You" TextAlignment="Center"/>
3:     <StackPanel Orientation="Horizontal" HorizontalAlignment="Center" Margin="0,100,0,0" >
4:         <TextBlock Text="Hello " />
5:         <TextBlock Name="textBlock1" Text="{Binding ElementName=textBox1, Path=Text}" />
6:         <TextBlock Text="!" />
7:     </StackPanel>
8:     <Button Name="button1" HorizontalAlignment="Center" Margin="0,150,0,0" Content="Hello" Click="button1_Click" />
9: </StackPanel>
Now, the real action is there in the bolded TextBlock.Text property: Text="{Binding ElementName=textBox1, Path=Text}". That does all the heavy lifting.
It sets up a databinding between the TextBox.Text property on textBox1 and the TextBlock.Text property on textBlock1. As I change the text of the TextBox, the label updates automatically. In fact, I don't even need the button any more, so I could get rid of that altogether.  And no button means no event handler.  No event handler means no C# code at all.
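For readers who prefer code-behind, here is a minimal sketch (my own, not from the original post) of how the same element-to-element binding could be created in C# instead of XAML; it assumes the element names textBox1 and textBlock1 from the markup above and would typically run in the page constructor after InitializeComponent():

using System.Windows.Controls;
using System.Windows.Data;

// Hypothetical helper inside the MainPage partial class:
// binds textBlock1.Text to textBox1.Text, equivalent to
// Text="{Binding ElementName=textBox1, Path=Text}" in XAML.
private void BindLabelToTextBox()
{
    var binding = new Binding("Text")      // Path=Text
    {
        ElementName = "textBox1"           // source element, as in the XAML
    };

    // Apply the binding to the TextBlock's Text dependency property.
    textBlock1.SetBinding(TextBlock.TextProperty, binding);
}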

    Read the article

  • C#/.NET Little Wonders: The ConcurrentDictionary

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#.  In this series of posts, we will discuss how the concurrent collections have been developed to help alleviate these multi-threading concerns.  Last week's post began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  Today's post discusses the ConcurrentDictionary<TKey,TValue> (originally I had intended to discuss ConcurrentBag this week as well, but ConcurrentDictionary had enough information to create a very full post on its own!).  Finally, next week we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here. Recap: As you'll recall from the previous post, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  While these were convenient because you didn't have to worry about writing your own synchronization logic, they were a bit too finely grained, and if you needed to perform multiple operations under one lock, the automatic synchronization didn't buy much. With the advent of .NET 2.0, the original collections were succeeded by the generic collections, which are fully type-safe but eschew automatic synchronization.  This cuts both ways in that you have a lot more control as a developer over when and how fine-grained you want to synchronize, but on the other hand if you just want simple synchronization it creates more work. With .NET 4.0, we get the best of both worlds in generic collections.  A new breed of collections was born, called the concurrent collections, in the System.Collections.Concurrent namespace.  These amazing collections are fine-tuned to have the best overall performance for situations requiring concurrent access.  They are not meant to replace the generic collections, but to simply be an alternative to creating your own locking mechanisms. Among those concurrent collections were the ConcurrentStack<T> and ConcurrentQueue<T>, which provide classic LIFO and FIFO collections with a concurrent twist.  As we saw, some of the traditional methods that required calls to be made in a certain order (like checking for not IsEmpty before calling Pop()) were replaced in favor of an umbrella operation that combined both under one lock (like TryPop()). Now, let's take a look at the next in our series of concurrent collections! For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentDictionary – the fully thread-safe dictionary: The ConcurrentDictionary<TKey,TValue> is the thread-safe counterpart to the generic Dictionary<TKey, TValue> collection.  Obviously, both are designed for quick – O(1) – lookups of data based on a key.  If you think of algorithms where you need lightning fast lookups of data and don't care whether the data is maintained in any particular ordering or not, the unsorted dictionaries are generally the best way to go. Note: as a side note, there are sorted implementations of IDictionary, namely SortedDictionary and SortedList, which are stored as an ordered tree and an ordered list respectively.  While these are not as fast as the non-sorted dictionaries – they are O(log2 n) – they are a great combination of both speed and ordering, and still greatly outperform a linear search.
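As a quick illustration of that sorted-versus-unsorted trade-off (this sketch is mine, not from the post), both types live in System.Collections.Generic; the hashed dictionary gives O(1) lookups with no ordering guarantee, while the sorted dictionary pays O(log n) per lookup but always enumerates keys in comparer order:

using System;
using System.Collections.Generic;

class DictionaryOrderingDemo
{
    static void Main()
    {
        // Unsorted: O(1) lookups, enumeration order is undefined.
        var hashed = new Dictionary<string, int> { { "B", 2 }, { "A", 1 }, { "C", 3 } };

        // Sorted: O(log n) lookups, enumeration follows the key comparer.
        var sorted = new SortedDictionary<string, int> { { "B", 2 }, { "A", 1 }, { "C", 3 } };

        Console.WriteLine(string.Join(",", hashed.Keys));  // order not guaranteed
        Console.WriteLine(string.Join(",", sorted.Keys));  // A,B,C
    }
}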
Now, once again keep in mind that if all you need to do is load a collection once and then allow multi-threaded reading you do not need any locking.  Examples of this tend to be situations where you load a lookup or translation table once at program start, then keep it in memory for read-only reference.  In such cases locking is completely non-productive. However, most of the time when we need a concurrent dictionary we are interleaving both reads and updates.  This is where the ConcurrentDictionary really shines!  It achieves its thread-safety with no common lock to improve efficiency.  It actually uses a series of locks to provide concurrent updates, and has lockless reads!  This means that the ConcurrentDictionary gets even more efficient the higher the ratio of reads-to-writes you have. ConcurrentDictionary and Dictionary differences For the most part, the ConcurrentDictionary<TKey,TValue> behaves like it’s Dictionary<TKey,TValue> counterpart with a few differences.  Some notable examples of which are: Add() does not exist in the concurrent dictionary. This means you must use TryAdd(), AddOrUpdate(), or GetOrAdd().  It also means that you can’t use a collection initializer with the concurrent dictionary. TryAdd() replaced Add() to attempt atomic, safe adds. Because Add() only succeeds if the item doesn’t already exist, we need an atomic operation to check if the item exists, and if not add it while still under an atomic lock. TryUpdate() was added to attempt atomic, safe updates. If we want to update an item, we must make sure it exists first and that the original value is what we expected it to be.  If all these are true, we can update the item under one atomic step. TryRemove() was added to attempt atomic, safe removes. To safely attempt to remove a value we need to see if the key exists first, this checks for existence and removes under an atomic lock. AddOrUpdate() was added to attempt an thread-safe “upsert”. There are many times where you want to insert into a dictionary if the key doesn’t exist, or update the value if it does.  This allows you to make a thread-safe add-or-update. GetOrAdd() was added to attempt an thread-safe query/insert. Sometimes, you want to query for whether an item exists in the cache, and if it doesn’t insert a starting value for it.  This allows you to get the value if it exists and insert if not. Count, Keys, Values properties take a snapshot of the dictionary. Accessing these properties may interfere with add and update performance and should be used with caution. ToArray() returns a static snapshot of the dictionary. That is, the dictionary is locked, and then copied to an array as a O(n) operation.  GetEnumerator() is thread-safe and efficient, but allows dirty reads. Because reads require no locking, you can safely iterate over the contents of the dictionary.  The only downside is that, depending on timing, you may get dirty reads. Dirty reads during iteration The last point on GetEnumerator() bears some explanation.  Picture a scenario in which you call GetEnumerator() (or iterate using a foreach, etc.) and then, during that iteration the dictionary gets updated.  This may not sound like a big deal, but it can lead to inconsistent results if used incorrectly.  The problem is that items you already iterated over that are updated a split second after don’t show the update, but items that you iterate over that were updated a split second before do show the update.  
Thus you may get a combination of items that are “stale” because you iterated before the update, and “fresh” because they were updated after GetEnumerator() but before the iteration reached them. Let’s illustrate with an example, let’s say you load up a concurrent dictionary like this: 1: // load up a dictionary. 2: var dictionary = new ConcurrentDictionary<string, int>(); 3:  4: dictionary["A"] = 1; 5: dictionary["B"] = 2; 6: dictionary["C"] = 3; 7: dictionary["D"] = 4; 8: dictionary["E"] = 5; 9: dictionary["F"] = 6; Then you have one task (using the wonderful TPL!) to iterate using dirty reads: 1: // attempt iteration in a separate thread 2: var iterationTask = new Task(() => 3: { 4: // iterates using a dirty read 5: foreach (var pair in dictionary) 6: { 7: Console.WriteLine(pair.Key + ":" + pair.Value); 8: } 9: }); And one task to attempt updates in a separate thread (probably): 1: // attempt updates in a separate thread 2: var updateTask = new Task(() => 3: { 4: // iterates, and updates the value by one 5: foreach (var pair in dictionary) 6: { 7: dictionary[pair.Key] = pair.Value + 1; 8: } 9: }); Now that we’ve done this, we can fire up both tasks and wait for them to complete: 1: // start both tasks 2: updateTask.Start(); 3: iterationTask.Start(); 4:  5: // wait for both to complete. 6: Task.WaitAll(updateTask, iterationTask); Now, if I you didn’t know about the dirty reads, you may have expected to see the iteration before the updates (such as A:1, B:2, C:3, D:4, E:5, F:6).  However, because the reads are dirty, we will quite possibly get a combination of some updated, some original.  My own run netted this result: 1: F:6 2: E:6 3: D:5 4: C:4 5: B:3 6: A:2 Note that, of course, iteration is not in order because ConcurrentDictionary, like Dictionary, is unordered.  Also note that both E and F show the value 6.  This is because the output task reached F before the update, but the updates for the rest of the items occurred before their output (probably because console output is very slow, comparatively). If we want to always guarantee that we will get a consistent snapshot to iterate over (that is, at the point we ask for it we see precisely what is in the dictionary and no subsequent updates during iteration), we should iterate over a call to ToArray() instead: 1: // attempt iteration in a separate thread 2: var iterationTask = new Task(() => 3: { 4: // iterates using a dirty read 5: foreach (var pair in dictionary.ToArray()) 6: { 7: Console.WriteLine(pair.Key + ":" + pair.Value); 8: } 9: }); The atomic Try…() methods As you can imagine TryAdd() and TryRemove() have few surprises.  Both first check the existence of the item to determine if it can be added or removed based on whether or not the key currently exists in the dictionary: 1: // try add attempts an add and returns false if it already exists 2: if (dictionary.TryAdd("G", 7)) 3: Console.WriteLine("G did not exist, now inserted with 7"); 4: else 5: Console.WriteLine("G already existed, insert failed."); TryRemove() also has the virtue of returning the value portion of the removed entry matching the given key: 1: // attempt to remove the value, if it exists it is removed and the original is returned 2: int removedValue; 3: if (dictionary.TryRemove("C", out removedValue)) 4: Console.WriteLine("Removed C and its value was " + removedValue); 5: else 6: Console.WriteLine("C did not exist, remove failed."); Now TryUpdate() is an interesting creature.  
You might think from it’s name that TryUpdate() first checks for an item’s existence, and then updates if the item exists, otherwise it returns false.  Well, note quite... It turns out when you call TryUpdate() on a concurrent dictionary, you pass it not only the new value you want it to have, but also the value you expected it to have before the update.  If the item exists in the dictionary, and it has the value you expected, it will update it to the new value atomically and return true.  If the item is not in the dictionary or does not have the value you expected, it is not modified and false is returned. 1: // attempt to update the value, if it exists and if it has the expected original value 2: if (dictionary.TryUpdate("G", 42, 7)) 3: Console.WriteLine("G existed and was 7, now it's 42."); 4: else 5: Console.WriteLine("G either didn't exist, or wasn't 7."); The composite Add methods The ConcurrentDictionary also has composite add methods that can be used to perform updates and gets, with an add if the item is not existing at the time of the update or get. The first of these, AddOrUpdate(), allows you to add a new item to the dictionary if it doesn’t exist, or update the existing item if it does.  For example, let’s say you are creating a dictionary of counts of stock ticker symbols you’ve subscribed to from a market data feed: 1: public sealed class SubscriptionManager 2: { 3: private readonly ConcurrentDictionary<string, int> _subscriptions = new ConcurrentDictionary<string, int>(); 4:  5: // adds a new subscription, or increments the count of the existing one. 6: public void AddSubscription(string tickerKey) 7: { 8: // add a new subscription with count of 1, or update existing count by 1 if exists 9: var resultCount = _subscriptions.AddOrUpdate(tickerKey, 1, (symbol, count) => count + 1); 10:  11: // now check the result to see if we just incremented the count, or inserted first count 12: if (resultCount == 1) 13: { 14: // subscribe to symbol... 15: } 16: } 17: } Notice the update value factory Func delegate.  If the key does not exist in the dictionary, the add value is used (in this case 1 representing the first subscription for this symbol), but if the key already exists, it passes the key and current value to the update delegate which computes the new value to be stored in the dictionary.  The return result of this operation is the value used (in our case: 1 if added, existing value + 1 if updated). Likewise, the GetOrAdd() allows you to attempt to retrieve a value from the dictionary, and if the value does not currently exist in the dictionary it will insert a value.  This can be handy in cases where perhaps you wish to cache data, and thus you would query the cache to see if the item exists, and if it doesn’t you would put the item into the cache for the first time: 1: public sealed class PriceCache 2: { 3: private readonly ConcurrentDictionary<string, double> _cache = new ConcurrentDictionary<string, double>(); 4:  5: // adds a new subscription, or increments the count of the existing one. 6: public double QueryPrice(string tickerKey) 7: { 8: // check for the price in the cache, if it doesn't exist it will call the delegate to create value. 9: return _cache.GetOrAdd(tickerKey, symbol => GetCurrentPrice(symbol)); 10: } 11:  12: private double GetCurrentPrice(string tickerKey) 13: { 14: // do code to calculate actual true price. 
15: } 16: } There are other variations of these two methods which vary whether a value is provided or a factory delegate, but otherwise they work much the same. Oddities with the composite Add methods The AddOrUpdate() and GetOrAdd() methods are totally thread-safe, on this you may rely, but they are not atomic.  It is important to note that the methods that use delegates execute those delegates outside of the lock.  This was done intentionally so that a user delegate (of which the ConcurrentDictionary has no control of course) does not take too long and lock out other threads. This is not necessarily an issue, per se, but it is something you must consider in your design.  The main thing to consider is that your delegate may get called to generate an item, but that item may not be the one returned!  Consider this scenario: A calls GetOrAdd and sees that the key does not currently exist, so it calls the delegate.  Now thread B also calls GetOrAdd and also sees that the key does not currently exist, and for whatever reason in this race condition it’s delegate completes first and it adds its new value to the dictionary.  Now A is done and goes to get the lock, and now sees that the item now exists.  In this case even though it called the delegate to create the item, it will pitch it because an item arrived between the time it attempted to create one and it attempted to add it. Let’s illustrate, assume this totally contrived example program which has a dictionary of char to int.  And in this dictionary we want to store a char and it’s ordinal (that is, A = 1, B = 2, etc).  So for our value generator, we will simply increment the previous value in a thread-safe way (perhaps using Interlocked): 1: public static class Program 2: { 3: private static int _nextNumber = 0; 4:  5: // the holder of the char to ordinal 6: private static ConcurrentDictionary<char, int> _dictionary 7: = new ConcurrentDictionary<char, int>(); 8:  9: // get the next id value 10: public static int NextId 11: { 12: get { return Interlocked.Increment(ref _nextNumber); } 13: } Then, we add a method that will perform our insert: 1: public static void Inserter() 2: { 3: for (int i = 0; i < 26; i++) 4: { 5: _dictionary.GetOrAdd((char)('A' + i), key => NextId); 6: } 7: } Finally, we run our test by starting two tasks to do this work and get the results… 1: public static void Main() 2: { 3: // 3 tasks attempting to get/insert 4: var tasks = new List<Task> 5: { 6: new Task(Inserter), 7: new Task(Inserter) 8: }; 9:  10: tasks.ForEach(t => t.Start()); 11: Task.WaitAll(tasks.ToArray()); 12:  13: foreach (var pair in _dictionary.OrderBy(p => p.Key)) 14: { 15: Console.WriteLine(pair.Key + ":" + pair.Value); 16: } 17: } If you run this with only one task, you get the expected A:1, B:2, ..., Z:26.  But running this in parallel you will get something a bit more complex.  My run netted these results: 1: A:1 2: B:3 3: C:4 4: D:5 5: E:6 6: F:7 7: G:8 8: H:9 9: I:10 10: J:11 11: K:12 12: L:13 13: M:14 14: N:15 15: O:16 16: P:17 17: Q:18 18: R:19 19: S:20 20: T:21 21: U:22 22: V:23 23: W:24 24: X:25 25: Y:26 26: Z:27 Notice that B is 3?  This is most likely because both threads attempted to call GetOrAdd() at roughly the same time and both saw that B did not exist, thus they both called the generator and one thread got back 2 and the other got back 3.  However, only one of those threads can get the lock at a time for the actual insert, and thus the one that generated the 3 won and the 3 was inserted and the 2 got discarded.  
This is why, on these methods, your factory delegates should be careful not to have any logic that would be unsafe if the value they generate is pitched in favor of another item generated at roughly the same time.  As such, it is probably a good idea to keep those generators as stateless as possible. Summary: The ConcurrentDictionary is a very efficient and thread-safe version of the Dictionary generic collection.  It has all the benefits of type-safety that its generic collection counterpart does, and in addition is extremely efficient, especially when there are more reads than writes concurrently.
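As a closing aside to the factory-delegate discussion above: a widely used mitigation (not covered in the post itself) is to store Lazy<T> values, so that any GetOrAdd() race only produces throw-away Lazy wrappers while the expensive factory runs at most once per key. A minimal sketch, reusing the hypothetical price-cache idea from earlier:

using System;
using System.Collections.Concurrent;

public static class LazyPriceCache
{
    private static readonly ConcurrentDictionary<string, Lazy<double>> _cache =
        new ConcurrentDictionary<string, Lazy<double>>();

    public static double QueryPrice(string tickerKey)
    {
        // A losing thread may still build a discarded Lazy<double>, but the
        // costly GetCurrentPrice() only executes for the Lazy that was stored,
        // because Value is read on the instance GetOrAdd() returns.
        var lazy = _cache.GetOrAdd(
            tickerKey,
            key => new Lazy<double>(() => GetCurrentPrice(key)));

        return lazy.Value;
    }

    // Hypothetical placeholder for the real price lookup.
    private static double GetCurrentPrice(string tickerKey)
    {
        return 42.0;
    }
}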

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is store. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.Hard Link Issues in a Component LibraryAnother reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to require a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change.  So hard linking the DLL can be problematic for a number reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.Enter Dynamic LoadingSo rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future.But there are also a couple of downsides:No assembly reference means only dynamic access - no compiler type checking or IntellisenseRequirement for the host application to have reference to JSON.NET or else get runtime errorsThe former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.So there are a few things that are needed to make this work:Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)Load types into dynamic variablesUse Reflection for a few tasks like statics/enumsThe dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is it doesn't handle all aspects of Reflection. 
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non instance members, so there's still a little bit of work left to do with Reflection.Dynamic Object InstantiationThe first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might have not been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable project, assemblies are just in time loaded only when they are accessed.Instantiating a type is a two step process: Finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:/// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). 
Here's what the factory function looks like in JsonSerializationUtils:/// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; }This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config setting that might be useful to set, but the default seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.Another feature I need is have Enum values serialized as strings rather than numeric values which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.Doing the actual JSON ConversionFinally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files so I created four methods that handle these tasks two each for serialization and deserialization for string and file.Here's what the File Serialization looks like:/// <summary> /// Serializes an object instance to a JSON file. 
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer.You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.Using the JsonSerializationUtils WrapperThe final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, that is responsible for handling reading and writing JSON values to and from files. The provider is simple a small wrapper around the SerializationUtils component and there's very little code to make this work now:The whole provider looks like this:/// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to XmlConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile,false,true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } }This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases.Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.Already, there are several other places in some other tools where I use JSON serialization this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use replacing quite a bit of code that was previously in use. 
And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy. Performance? Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy. For the configuration component, speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a web server - one would be well served to create a hard dependency. Dynamic Loading - Worth it? On occasion dynamic loading makes sense. But there's a price to be paid in added code complexity and a performance hit. But for some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files and loading down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful… © Rick Strahl, West Wind Technologies, 2005-2013. Posted in .NET, C#
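The post relies on a ReflectionUtils.GetStaticProperty() helper to read static and enum values (such as ReferenceLoopHandling.Ignore) by name, but its implementation is not shown. Purely as a hypothetical sketch of how such a helper could work, reusing the same assembly-scanning idea as the GetTypeFromName() routine above (member and type names here are illustrative, not the library's actual API):

using System;
using System.Reflection;

public static class ReflectionHelpers
{
    // Hypothetical helper: resolves a type by name and reads a public static
    // field or property. Enum members such as ReferenceLoopHandling.Ignore
    // surface as public static fields, so fields are checked first.
    public static object GetStaticMember(string typeName, string memberName)
    {
        Type type = Type.GetType(typeName, false);
        if (type == null)
        {
            // Fall back to scanning already-loaded assemblies, as in GetTypeFromName().
            foreach (Assembly asm in AppDomain.CurrentDomain.GetAssemblies())
            {
                type = asm.GetType(typeName, false);
                if (type != null) break;
            }
        }
        if (type == null) return null;

        FieldInfo field = type.GetField(memberName, BindingFlags.Public | BindingFlags.Static);
        if (field != null) return field.GetValue(null);

        PropertyInfo prop = type.GetProperty(memberName, BindingFlags.Public | BindingFlags.Static);
        return prop != null ? prop.GetValue(null, null) : null;
    }
}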

    Read the article

  • Issue 15: Oracle Exadata Marketing Campaigns

    - by rituchhibber
         PARTNER FOCUS Oracle Exadata Marketing Campaign. Steve McNickle, VP Europe, cVidya. Steve McNickle is VP Europe for cVidya, an innovative provider of revenue intelligence solutions for telecom, media and entertainment service providers including AT&T, BT, Deutsche Telecom and Vodafone. The company's product portfolio helps operators and service providers maximise margins, improve customer experience and optimise ecosystem relationships through revenue assurance, fraud and security management, sales performance management, pricing analytics, and inter-carrier services. cVidya has partnered with Oracle for more than a decade. -- Please could you tell us a little about cVidya's partnering history with Oracle, and expand on your Oracle Exastack accreditations? "cVidya was established just over ten years ago and we've had a strong relationship with Oracle almost since the very beginning. Through our Revenue Intelligence work with some of the world's largest service providers we collect tremendous amounts of information, amounting to billions of records per day. We help our clients to collect, store and analyse that data to ensure that their end customers are getting the best levels of service, are billed correctly, and are happy that they are on the correct price plan. We have been an Oracle Gold level partner for seven years, and crucially just two months ago we were also accredited as Oracle Exastack Optimized for MoneyMap, our core Revenue Assurance solution. Very soon we also expect to be Oracle Exastack Optimized for DRMap, our Data Retention solution." What unique capabilities and customer benefits does Oracle Exastack add to your applications? "Oracle Exastack enables us to deliver radical benefits to our customers. A typical mobile operator in the UK might handle between 500 million and two billion call data record details daily. Each transaction needs to be validated, billed correctly and fraud checked. Because of the enormous volumes involved, our clients demand scalable infrastructure that allows them to efficiently acquire, store and process all that data within controlled cost, space and environmental constraints. We have proved that the Oracle Exadata system can process data up to seven times faster and load it as much as 20 times faster than other standard best-of-breed server approaches. With the Oracle Exadata Database Machine they can reduce their datacentre equipment from, say, the six or seven cabinets that they needed in the past down to just one. This dramatic simplification delivers incredible value to the customer by cutting down enormously on all of their significant cost, space, energy, cooling and maintenance overheads. The Oracle Exastack Program has given our clients the ability to switch their focus from reactive to proactive. Traditionally they may have spent 80 percent of their day processing, and just 20 percent enabling end customers to see advanced analytics, and avoiding issues before they occur. With our solutions and Oracle Exadata they can now switch that balance around entirely, resulting not only in reduced revenue leakage, but a far higher focus on proactive leakage prevention." How has the Oracle Exastack Program transformed your customer business?
"We can already see the impact. Oracle solutions allow our delivery teams to achieve successful deployments, happy customers and self-satisfaction, and the power of Oracle's Exa solutions is easy to measure in terms of their transformational ability. We gained our first sale into a major European telco by demonstrating the major performance gains that would transform their business. Clients can measure the ease of organisational change, the early prevention of business issues, the reduction in manpower required to provide protection and coverage across all their products and services, plus of course end customer satisfaction. If customers know that that service is provided accurately and that their bills are calculated correctly, then over time this satisfaction can be attributed to revenue intelligence and the underlying systems which provide it. Combine this with the further integration we have with the other layers of the Oracle stack, including the telecommunications offerings such as NCC, OCDM and BRM, and the result is even greater customer value—not to mention the increased speed to market and the reduced project risk." What does the Oracle Exastack community bring to cVidya, both in terms of general benefits, and also tangible new opportunities and partnerships? "A great deal. We have participated in the Oracle Exastack community heavily over the past year, and have had lots of meetings with Oracle and our peers around the globe. It brings us into contact with like-minded, innovative partners, who like us are not happy to just stand still and want to take fresh technology to their customer base in order to gain enhanced value. We identified three new partnerships in each of two recent meetings, and hope these will open up new opportunities, not only in areas that exactly match where we operate today, but also in some new associative areas that will expand our reach into new business sectors. Notably, thanks to the Exastack community we were invited on stage at last year's Oracle OpenWorld conference. Appearing so publically with Oracle senior VP Judson Althoff elevated awareness and visibility of cVidya and has enabled us to participate in a number of other events with Oracle over the past eight months. We've been involved in speaking opportunities, forums and exhibitions, providing us with invaluable opportunities that we wouldn't otherwise have got close to." How has Exastack differentiated cVidya as an ISV, and helped you to evolve your business to the next level? "When we are selling to our core customer base of Tier 1 telecommunications providers, we know that they want more than just software. They want an enduring partnership that will last many years, they want innovation, and a forward thinking partner who knows how to guide them on where they need to be to meet market demand three, five or seven years down the line. Membership of respected global bodies, such as the Telemanagement Forum enables us to lead standard adherence in our area of business, giving us a lot of credibility, but Oracle is also involved in this forum with its own telecommunications portfolio, strengthening our position still further. When we approach CEOs, CTOs and CIOs at the very largest Tier 1 operators, not only can we easily show them that our technology is fantastic, we can also talk about our strong partnership with Oracle, and our joint embracing of today's standards and tomorrow's innovation." Where would you like cVidya to be in one year's time? 
"We want to get all of our relevant products Oracle Exastack Optimized. Our MoneyMap Revenue Assurance solution is already Exastack Optimised, our DRMAP Data Retention Solution should be Exastack Optimised within the next month, and our FraudView Fraud Management solution within the next two to three months. We'd then like to extend our Oracle accreditation out to include other members of the Oracle Engineered Systems family. We are moving into the 'Big Data' space, and so we're obviously very keen to work closely with Oracle to conduct pilots, map new technologies onto Oracle Big Data platforms, and embrace and measure the benefits of other Oracle systems, namely Oracle Exalogic Elastic Cloud, the Oracle Exalytics In-Memory Machine and the Oracle SPARC SuperCluster. We would also like to examine how the Oracle Database Appliance might benefit our Tier 2 service provider customers. Finally, we'd also like to continue working with the Oracle Communications Global Business Unit (CGBU), furthering our integration with Oracle billing products so that we are able to quickly deploy fraud solutions into Oracle's Engineered System stack, give operational benefits to our clients that are pre-integrated, more cost-effective, and can be rapidly deployed rapidly and producing benefits in three months, not nine months." Chris Baker ,Senior Vice President, Oracle Worldwide ISV-OEM-Java Sales Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? "Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straight-forward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best – building applications to meet the unique industry and functional requirements of their customer. We want to ensure that we deliver a best in class application platform so the ISV is free to concentrate their effort on their application functionality and user experience We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. 
It's really important that architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success." How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need? "We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems including Exadata and Exalogic. So, for example, they can become 'Database Ready' which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready and Oracle Solaris Ready which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme." How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly? "One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in the cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build-in to our products to help them to fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented at an on-premise environment. 
But it wasn't just the configuration of the application that took the time, it was actually the infrastructure - the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry." What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships? "My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. The only way that our partners will be able to provide a single vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data – a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world based on Oracle." What opportunities are immediately opened to new ISV partners joining the OPN? "As you know OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation." Finally, are there any other messages that you would like to share with the Oracle ISV community? "The crucial message that I always like to reinforce is architecture, architecture and architecture! 
The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: “I will be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud”. The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them."

--

Gergely Strbik is Oracle Hardware and Software Product Manager for Avnet in Hungary. Avnet Technology Solutions is an Oracle Value Added Distributor focused on the development of the existing Oracle channel. This includes the recruitment and enablement of Oracle partners as well as driving deeper adoption of Oracle's technology and application products within the IT channel.

"The main business benefits of ODA for our customers and partners are scalability, flexibility, a great price point for the high performance delivered, and the easily configurable embedded Linux operating system. People welcome a lower point of entry and the ability to grow capacity on demand as their business expands."

"Marketing and selling the ODA requires another way of thinking because it is an appliance. We have to transform the ways in which our partners and customers think from buying hardware and software independently to buying complete solutions. Successful early adopters and satisfied customer reactions will certainly help us to sell the ODA. We will have more experience with the product after the first deliveries and installations—end users need to see the power and benefits for themselves."

"Our typical ODA customers will be those looking for complete solutions from a single reseller partner who is also able to manage the appliance. They will have enjoyed using Oracle Database but now want a new product that is able to unlock new levels of performance. A higher proportion of potential customers will come from our existing Oracle base, with around 30% from new business, but we intend to evangelise the ODA on the market to see how we can change this balance as all our customers adjust to the concept of 'Hardware and Software, Engineered to Work Together'."

    Read the article

  • Cluster Node Recovery Using Second Node in Solaris Cluster

    - by Onur Bingul
    Assumptions:
    - Node 0a is the cluster node that has crashed and can no longer boot.
    - Node 0b is the node still in the cluster and in production with services active.
    - Both nodes have their boot disk mirrored via SDS/SVM.

    We have several options to clone the boot disk from node 0b:
    - make a copy over the network using the ufsdump command piped to ufsrestore
    - make a copy by inserting the disk locally on node 0b and creating a third mirror with SDS
    - make a copy by inserting the disk locally on node 0b and using the dd command

    In this procedure we are going to use the dd command (from my experience this is the best option). Bear in mind that in the examples provided we work on Sun Fire V240 systems, which have SCSI internal disks. In the case of Fibre Channel (FC) internal disks you must pay attention to the unique identifier, or World Wide Name (WWN), associated with each FC disk (in this case take a look at infodoc #40133 in order to recreate the device tree correctly).

    Procedure:

    On node 0b the boot disk is c1t0d0 (c1t1d0 is its mirror) and this is the VTOC:
    * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
           0      2    00          0   2106432   2106431
           1      3    01    2106432  74630784  76737215
           2      5    00          0 143349312 143349311
           4      7    00   76737216  50340672 127077887
           5      4    00  127077888  14683968 141761855
           6      0    00  141761856   1058304 142820159
           7      0    00  142820160    529152 143349311
    We will insert the new disk on node 0b and it will be seen as c1t2d0.

    1) On node 0b make a copy via dd from disk c1t0d0s2 to disk c1t2d0s2:
    # dd if=/dev/rdsk/c1t0d0s2 of=/dev/rdsk/c1t2d0s2 bs=8192k
    A copy of a 72GB disk will take approximately 45 minutes.
    Note: as an alternative, to make an identical copy of root over the network follow Document ID: 47498, Title: Sun[TM] Cluster 3.0: How to Rebuild a node with Veritas Volume Manager.

    2) Perform an fsck on the c1t2d0 data slices:
       1.  fsck -o f /dev/rdsk/c1t2d0s0 (root)
       2.  fsck -o f /dev/rdsk/c1t2d0s4 (/var)
       3.  fsck -o f /dev/rdsk/c1t2d0s5 (/usr)
       4.  fsck -o f /dev/rdsk/c1t2d0s6 (/globaldevices)

    3) Mount the root file system in order to edit the following files and change the node name:
    # mount /dev/dsk/c1t2d0s0 /mnt
    Change the hostname from 0b to 0a:
    # cd /mnt/etc
    # vi hosts
    # vi hostname.bge0
    # vi hostname.bge2
    # vi nodename

    4) Change /mnt/etc/vfstab from the current contents:
    /dev/md/dsk/d201        -       -       swap    -       no      -
    /dev/md/dsk/d200        /dev/md/rdsk/d200       /       ufs     1       no      -
    /dev/md/dsk/d205        /dev/md/rdsk/d205       /usr    ufs     1       no      logging
    /dev/md/dsk/d204        /dev/md/rdsk/d204       /var    ufs     1       no      logging
    #/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     logging
    swap    -       /tmp    tmpfs   -       yes     -
    /dev/md/dsk/d206        /dev/md/rdsk/d206       /global/.devices/node@2 ufs     2       no      global
    to this (unencapsulate the disk from SDS/SVM):
    /dev/dsk/c1t0d0s1       -       -       swap    -       no      -
    /dev/dsk/c1t0d0s0       /dev/rdsk/c1t0d0s0      /       ufs     1       no      -
    /dev/dsk/c1t0d0s5       /dev/rdsk/c1t0d0s5      /usr    ufs     1       no      logging
    /dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4      /var    ufs     1       no      logging
    #/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     logging
    swap    -       /tmp    tmpfs   -       yes     -
    /dev/dsk/c1t0d0s6       /dev/rdsk/c1t0d0s6      /global/.devices/node@1 ufs     2       no      global
    It is important that the global device partition (slice 6) in the new vfstab points to the physical partition of the disk (in our case slice 6). Be careful with the name you use for the new disk. In this case we define it as c1t0d0 because we will insert it as target 0 in node 0a, but this could be different based on the configuration you are working on.

    5) Remove the following entry from /mnt/etc/system (part of the unencapsulation procedure):
    rootdev:/pseudo/md@0:0,200,blk

    6) Correct the link shared -> ../../global/.devices/node@2/dev/md/shared so that it points to the nodeid of node 0a (in our case nodeid 1):
    # cd /mnt/dev/md
    How it is now (node 0b has nodeid 2):
    lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared -> ../../global/.devices/node@2/dev/md/shared
    # rm shared
    # ln -s ../../global/.devices/node@1/dev/md/shared shared
    How it is going to be (with nodeid 1 for node 0a):
    lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared -> ../../global/.devices/node@1/dev/md/shared

    7) Change the nodeid (in our case from 2 to 1):
    # cd /mnt/etc/cluster
    # vi nodeid

    8) Change /mnt/etc/path_to_inst to reflect the correct nodeid for node 0a:
    # cd /mnt/etc
    # vi path_to_inst
    Change entries from node@2 to node@1 with the vi command ":%s/node@2/node@1/g"

    9) Write the bootblock to the disk... just in case:
    # /usr/sbin/installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/c1t2d0s0
    Now the disk is ready to be inserted in node 0a in order to boot up the node.

    10) Boot node 0a with the command "boot -sx"... this is because we need to make some changes in the ccr files in order to recreate the did environment.

    11) Modify the cluster ccr:
    # cd /etc/cluster/ccr
    # rm did_instances
    # rm did_instances.bak
    # vi directory  (remove the did_instances line)
    # /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/directory
    # grep ccr_gennum /etc/cluster/ccr/directory
    ccr_gennum -1
    # /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure
    # grep ccr_gennum /etc/cluster/ccr/infrastructure
    ccr_gennum -1

    12) Bring node 0a down again to the ok prompt and then issue the command "boot -r".
    Now the node will join the cluster, and with the scstat and metaset commands you can verify functionality. The next step is to encapsulate the boot disk in SDS/SVM and create the mirrors. In our case node 0b has metadevice names starting from d200, so on node 0a we need to create metadevices starting from d100. This is just an example, you can have different names. The important thing to remember is that the boot-disk metadevices have different names on each node.

    13) Remove the metadevices pointing to the boot and mirror disks (inherited from node 0b):
    # metaclear -r -f d200
    # metaclear -r -f d201
    # metaclear -r -f d204
    # metaclear -r -f d205
    # metaclear -r -f d206
    Verify from metastat that no metadevices are set for the boot and mirror disks.

    14) Encapsulate the boot disk:
    # metainit -f d110 1 1 c1t0d0s0
    # metainit d100 -m d110
    # metaroot d100

    15) Reboot node 0a.

    16) Create all the metadevices for the remaining slices on the boot disk:
    # metainit -f d111 1 1 c1t0d0s1
    # metainit d101 -m d111
    # metainit -f d114 1 1 c1t0d0s4
    # metainit d104 -m d114
    # metainit -f d115 1 1 c1t0d0s5
    # metainit d105 -m d115
    # metainit -f d116 1 1 c1t0d0s6
    # metainit d106 -m d116

    17) Edit the vfstab in order to specify the metadevices just created:
    old:
    /dev/dsk/c1t0d0s1       -       -       swap    -       no      -
    /dev/md/dsk/d100        /dev/md/rdsk/d100       /       ufs     1       no      -
    /dev/dsk/c1t0d0s5       /dev/rdsk/c1t0d0s5      /usr    ufs     1       no      logging
    /dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4      /var    ufs     1       no      logging
    #/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     logging
    swap    -       /tmp    tmpfs   -       yes     -
    /dev/dsk/c1t0d0s6       /dev/rdsk/c1t0d0s6      /global/.devices/node@1 ufs     2       no      global
    new:
    /dev/md/dsk/d101        -       -       swap    -       no      -
    /dev/md/dsk/d100        /dev/md/rdsk/d100       /       ufs     1       no      -
    /dev/md/dsk/d105        /dev/md/rdsk/d105       /usr    ufs     1       no      logging
    /dev/md/dsk/d104        /dev/md/rdsk/d104       /var    ufs     1       no      logging
    #/dev/md/dsk/d106       /dev/md/rdsk/d106       /globaldevices  ufs     2       yes     logging
    swap    -       /tmp    tmpfs   -       yes     -
    /dev/md/dsk/d106        /dev/md/rdsk/d106       /global/.devices/node@1 ufs     2       no      global

    18) Reboot node 0a in order to check the new SDS/SVM boot configuration.

    19) Label the mirror disk c1t1d0 with the VTOC of the boot disk c1t0d0:
    # prtvtoc /dev/dsk/c1t0d0s2 > /var/tmp/VTOC_c1t0d0
    # fmthard -s /var/tmp/VTOC_c1t0d0 /dev/rdsk/c1t1d0s2

    20) Put DB replicas on slice 7 of disk c1t1d0:
    # metadb -a -c 3 /dev/dsk/c1t1d0s7

    21) Create the metadevices for mirror disk c1t1d0 and attach the new mirror sides:
    # metainit d120 1 1 c1t1d0s0
    # metattach d100 d120
    # metainit d121 1 1 c1t1d0s1
    # metattach d101 d121
    # metainit d124 1 1 c1t1d0s4
    # metattach d104 d124
    # metainit d125 1 1 c1t1d0s5
    # metattach d105 d125
    # metainit d126 1 1 c1t1d0s6
    # metattach d106 d126
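    As a quick sanity check after step 21 (this uses the same metadevice names as the examples above; adjust them to your own layout), cluster membership and the progress of the mirror resyncs can be verified with something like:
    # scstat
    # metastat d100
    # metastat | grep -i "resync"
    # metadb -i
    # metaset
    scstat should show both nodes online, metastat will report a resync in progress on the freshly attached submirrors until they complete, and metadb -i confirms the state database replicas created in step 20.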

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is store. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.Hard Link Issues in a Component LibraryAnother reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to require a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change.  So hard linking the DLL can be problematic for a number reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.Enter Dynamic LoadingSo rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future.But there are also a couple of downsides:No assembly reference means only dynamic access - no compiler type checking or IntellisenseRequirement for the host application to have reference to JSON.NET or else get runtime errorsThe former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.So there are a few things that are needed to make this work:Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)Load types into dynamic variablesUse Reflection for a few tasks like statics/enumsThe dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is it doesn't handle all aspects of Reflection. 
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non instance members, so there's still a little bit of work left to do with Reflection.Dynamic Object InstantiationThe first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might have not been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable project, assemblies are just in time loaded only when they are accessed.Instantiating a type is a two step process: Finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:/// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). 
Here's what the factory function looks like in JsonSerializationUtils:/// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; }This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config setting that might be useful to set, but the default seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.Another feature I need is have Enum values serialized as strings rather than numeric values which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.Doing the actual JSON ConversionFinally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files so I created four methods that handle these tasks two each for serialization and deserialization for string and file.Here's what the File Serialization looks like:/// <summary> /// Serializes an object instance to a JSON file. 
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer.You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.Using the JsonSerializationUtils WrapperThe final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, that is responsible for handling reading and writing JSON values to and from files. The provider is simple a small wrapper around the SerializationUtils component and there's very little code to make this work now:The whole provider looks like this:/// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to XmlConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile,false,true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } }This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases.Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.Already, there are several other places in some other tools where I use JSON serialization this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use replacing quite a bit of code that was previously in use. 
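To put the pieces together, here's a minimal usage sketch of the new provider from application code. The configuration class and file name below are made up for illustration; the provider members used are the ones shown above:

// hypothetical configuration class for demonstration purposes
public class MyAppConfiguration : AppConfiguration
{
    public string ApplicationName { get; set; }
    public int MaxDisplayItems { get; set; }

    public MyAppConfiguration()
    {
        ApplicationName = "My App";
        MaxDisplayItems = 25;
    }
}

// somewhere in application startup
var provider = new JsonFileConfigurationProvider<MyAppConfiguration>()
{
    JsonConfigurationFile = "MyAppConfiguration.json"
};

var config = new MyAppConfiguration();
provider.Write(config);   // persists the settings as JSON; JSON.NET is loaded dynamically on first use
provider.Read(config);    // reads the JSON file back into the existing instance

Nothing in the calling code knows - or cares - that JSON.NET was loaded dynamically behind the scenes.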
And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.

Performance?
Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. This will change though depending on the size of the objects serialized - the larger the object the more processing time is spent inside the actual dynamically activated components and the less difference there will be. Dynamic code is always slower, but how much it really affects your application primarily depends on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling' as I do here, the overhead is small enough not to matter. All that being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy performance. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web server - one would be well served to create a hard dependency.

Dynamic Loading - Worth it?
Dynamic loading is not something you need very often, but on occasion it makes sense. There's a price to be paid in added code and a performance hit which depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files, adding dependencies and loading down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in .NET  C#

    Read the article

  • Did I lose my RAID again?

    - by BarsMonster
    Hi! A little history: two years ago I was really excited to find out that mdadm is so powerful that it can even reshape arrays, so you can start with a smaller array and then grow it as you need. I bought 3x1Tb drives and made a RAID-5. It was fine for a year. Then I bought two more, tried to reshape to RAID-6 out of 5 drives, and due to some mess with superblock versions lost all content. I had to rebuild it from scratch, and 2Tb of data were gone. Yesterday I bought 2 more drives, and this time I had everything: a properly built array, a UPS. I disabled the write-intent bitmap, added the 2 new drives as spares and ran a command to grow the array to 7 disks. It started working, but the speed was ridiculously slow, ~100kb/sec. After processing the first 37Mb at that amazing speed, one of the old HDDs failed. I properly shut down the PC and disconnected the failed drive. After bootup it appeared that the write-intent bitmap had been recreated because it was still in the mdadm config, so I removed it from the config and rebooted again. Now all I see is that all mdadm processes deadlock and don't do anything.

    PID  USER PR NI  VIRT RES SHR S %CPU %MEM TIME+   COMMAND
    1937 root 20  0 12992 608 444 D    0  0.1 0:00.00 mdadm
    2283 root 20  0 12992 852 704 D    0  0.1 0:00.01 mdadm
    2287 root 20  0     0   0   0 D    0  0.0 0:00.01 md0_reshape
    2288 root 18 -2 12992 820 676 D    0  0.1 0:00.01 mdadm

    And all I see in mdstat is:

    $ cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid6 sdb1[1] sdg1[4] sdf1[7] sde1[6] sdd1[0] sdc1[5]
          2929683456 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [7/6] [UU_UUUU]
          [>....................]  reshape =  0.0% (37888/976561152) finish=567604147.2min speed=0K/sec

    I've already tried mdadm 2.6.7, 3.1.4 and 3.2 - nothing helps. Did I lose my data again? Any suggestions on how I can make this work? OS is Ubuntu Server 10.04.2. PS. Needless to say, the data is inaccessible - I cannot mount /dev/md0 to save the most valuable data. You can see my disappointment - the very specific thing I was excited about has failed twice, taking 5Tb of my data with it. 
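    For reference, the sequence I used to add the spares and start the grow was roughly the following (typed from memory, so treat the device names as approximate):

    $ sudo mdadm /dev/md0 --add /dev/sdX1 /dev/sdY1
    $ sudo mdadm --grow /dev/md0 --raid-devices=7

    The first command added the two new partitions as spares, the second kicked off the reshape from 5 to 7 devices.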
Update: It appears there is some nice info in kern.log: 21:38:48 ...: [ 166.522055] raid5: reshape will continue 21:38:48 ...: [ 166.522085] raid5: device sdb1 operational as raid disk 1 21:38:48 ...: [ 166.522091] raid5: device sdg1 operational as raid disk 4 21:38:48 ...: [ 166.522097] raid5: device sdf1 operational as raid disk 5 21:38:48 ...: [ 166.522102] raid5: device sde1 operational as raid disk 6 21:38:48 ...: [ 166.522107] raid5: device sdd1 operational as raid disk 0 21:38:48 ...: [ 166.522111] raid5: device sdc1 operational as raid disk 3 21:38:48 ...: [ 166.523942] raid5: allocated 7438kB for md0 21:38:48 ...: [ 166.524041] 1: w=1 pa=2 pr=5 m=2 a=2 r=7 op1=0 op2=0 21:38:48 ...: [ 166.524050] 4: w=2 pa=2 pr=5 m=2 a=2 r=7 op1=0 op2=0 21:38:48 ...: [ 166.524056] 5: w=3 pa=2 pr=5 m=2 a=2 r=7 op1=0 op2=0 21:38:48 ...: [ 166.524062] 6: w=4 pa=2 pr=5 m=2 a=2 r=7 op1=0 op2=0 21:38:48 ...: [ 166.524068] 0: w=5 pa=2 pr=5 m=2 a=2 r=7 op1=0 op2=0 21:38:48 ...: [ 166.524073] 3: w=6 pa=2 pr=5 m=2 a=2 r=7 op1=0 op2=0 21:38:48 ...: [ 166.524079] raid5: raid level 6 set md0 active with 6 out of 7 devices, algorithm 2 21:38:48 ...: [ 166.524519] RAID5 conf printout: 21:38:48 ...: [ 166.524523] --- rd:7 wd:6 21:38:48 ...: [ 166.524528] disk 0, o:1, dev:sdd1 21:38:48 ...: [ 166.524532] disk 1, o:1, dev:sdb1 21:38:48 ...: [ 166.524537] disk 3, o:1, dev:sdc1 21:38:48 ...: [ 166.524541] disk 4, o:1, dev:sdg1 21:38:48 ...: [ 166.524545] disk 5, o:1, dev:sdf1 21:38:48 ...: [ 166.524550] disk 6, o:1, dev:sde1 21:38:48 ...: [ 166.524553] ...ok start reshape thread 21:38:48 ...: [ 166.524727] md0: detected capacity change from 0 to 2999995858944 21:38:48 ...: [ 166.524735] md: reshape of RAID array md0 21:38:48 ...: [ 166.524740] md: minimum _guaranteed_ speed: 1000 KB/sec/disk. 21:38:48 ...: [ 166.524745] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reshape. 21:38:48 ...: [ 166.524756] md: using 128k window, over a total of 976561152 blocks. 21:39:05 ...: [ 166.525013] md0: 21:42:04 ...: [ 362.520063] INFO: task mdadm:1937 blocked for more than 120 seconds. 21:42:04 ...: [ 362.520068] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 21:42:04 ...: [ 362.520073] mdadm D 00000000ffffffff 0 1937 1 0x00000000 21:42:04 ...: [ 362.520083] ffff88002ef4f5d8 0000000000000082 0000000000015bc0 0000000000015bc0 21:42:04 ...: [ 362.520092] ffff88002eb5b198 ffff88002ef4ffd8 0000000000015bc0 ffff88002eb5ade0 21:42:04 ...: [ 362.520100] 0000000000015bc0 ffff88002ef4ffd8 0000000000015bc0 ffff88002eb5b198 21:42:04 ...: [ 362.520107] Call Trace: 21:42:04 ...: [ 362.520133] [<ffffffffa0224892>] get_active_stripe+0x312/0x3f0 [raid456] 21:42:04 ...: [ 362.520148] [<ffffffff81059ae0>] ? default_wake_function+0x0/0x20 21:42:04 ...: [ 362.520159] [<ffffffffa0228413>] make_request+0x243/0x4b0 [raid456] 21:42:04 ...: [ 362.520169] [<ffffffffa0221a90>] ? release_stripe+0x50/0x70 [raid456] 21:42:04 ...: [ 362.520179] [<ffffffff81084790>] ? autoremove_wake_function+0x0/0x40 21:42:04 ...: [ 362.520188] [<ffffffff81414df0>] md_make_request+0xc0/0x130 21:42:04 ...: [ 362.520194] [<ffffffff81414df0>] ? md_make_request+0xc0/0x130 21:42:04 ...: [ 362.520205] [<ffffffff8129f8c1>] generic_make_request+0x1b1/0x4f0 21:42:04 ...: [ 362.520214] [<ffffffff810f6515>] ? mempool_alloc_slab+0x15/0x20 21:42:04 ...: [ 362.520222] [<ffffffff8116c2ec>] ? 
alloc_buffer_head+0x1c/0x60 21:42:04 ...: [ 362.520230] [<ffffffff8129fc80>] submit_bio+0x80/0x110 21:42:04 ...: [ 362.520236] [<ffffffff8116c849>] submit_bh+0xf9/0x140 21:42:04 ...: [ 362.520244] [<ffffffff8116f124>] block_read_full_page+0x274/0x3b0 21:42:04 ...: [ 362.520251] [<ffffffff81172c90>] ? blkdev_get_block+0x0/0x70 21:42:04 ...: [ 362.520258] [<ffffffff8110d875>] ? __inc_zone_page_state+0x35/0x40 21:42:04 ...: [ 362.520265] [<ffffffff810f46d8>] ? add_to_page_cache_locked+0xe8/0x160 21:42:04 ...: [ 362.520272] [<ffffffff81173d78>] blkdev_readpage+0x18/0x20 21:42:04 ...: [ 362.520279] [<ffffffff810f484b>] __read_cache_page+0x7b/0xe0 21:42:04 ...: [ 362.520285] [<ffffffff81173d60>] ? blkdev_readpage+0x0/0x20 21:42:04 ...: [ 362.520290] [<ffffffff81173d60>] ? blkdev_readpage+0x0/0x20 21:42:04 ...: [ 362.520297] [<ffffffff810f57dc>] do_read_cache_page+0x3c/0x120 21:42:04 ...: [ 362.520304] [<ffffffff810f5909>] read_cache_page_async+0x19/0x20 21:42:04 ...: [ 362.520310] [<ffffffff810f591e>] read_cache_page+0xe/0x20 21:42:04 ...: [ 362.520317] [<ffffffff811a6cb0>] read_dev_sector+0x30/0xa0 21:42:04 ...: [ 362.520324] [<ffffffff811a7fcd>] amiga_partition+0x6d/0x460 21:42:04 ...: [ 362.520331] [<ffffffff811a7938>] check_partition+0x138/0x190 21:42:04 ...: [ 362.520338] [<ffffffff811a7a7a>] rescan_partitions+0xea/0x2f0 21:42:04 ...: [ 362.520344] [<ffffffff811744c7>] __blkdev_get+0x267/0x3d0 21:42:04 ...: [ 362.520350] [<ffffffff81174650>] ? blkdev_open+0x0/0xc0 21:42:04 ...: [ 362.520356] [<ffffffff81174640>] blkdev_get+0x10/0x20 21:42:04 ...: [ 362.520362] [<ffffffff811746c1>] blkdev_open+0x71/0xc0 21:42:04 ...: [ 362.520369] [<ffffffff811419f3>] __dentry_open+0x113/0x370 21:42:04 ...: [ 362.520377] [<ffffffff81253f8f>] ? security_inode_permission+0x1f/0x30 21:42:04 ...: [ 362.520385] [<ffffffff8114de3f>] ? inode_permission+0xaf/0xd0 21:42:04 ...: [ 362.520391] [<ffffffff81141d67>] nameidata_to_filp+0x57/0x70 21:42:04 ...: [ 362.520398] [<ffffffff8115207a>] do_filp_open+0x2da/0xba0 21:42:04 ...: [ 362.520406] [<ffffffff811134a8>] ? unmap_vmas+0x178/0x310 21:42:04 ...: [ 362.520414] [<ffffffff8115dbfa>] ? alloc_fd+0x10a/0x150 21:42:04 ...: [ 362.520421] [<ffffffff81141769>] do_sys_open+0x69/0x170 21:42:04 ...: [ 362.520428] [<ffffffff811418b0>] sys_open+0x20/0x30 21:42:04 ...: [ 362.520437] [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b 21:42:04 ...: [ 362.520446] INFO: task mdadm:2283 blocked for more than 120 seconds. 21:42:04 ...: [ 362.520450] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
21:42:04 ...: [ 362.520454] mdadm D 0000000000000000 0 2283 2212 0x00000000 21:42:04 ...: [ 362.520462] ffff88002cca7d98 0000000000000086 0000000000015bc0 0000000000015bc0 21:42:04 ...: [ 362.520470] ffff88002ededf78 ffff88002cca7fd8 0000000000015bc0 ffff88002ededbc0 21:42:04 ...: [ 362.520478] 0000000000015bc0 ffff88002cca7fd8 0000000000015bc0 ffff88002ededf78 21:42:04 ...: [ 362.520485] Call Trace: 21:42:04 ...: [ 362.520495] [<ffffffff81543a97>] __mutex_lock_slowpath+0xf7/0x180 21:42:04 ...: [ 362.520502] [<ffffffff8154397b>] mutex_lock+0x2b/0x50 21:42:04 ...: [ 362.520508] [<ffffffff8117404d>] __blkdev_put+0x3d/0x190 21:42:04 ...: [ 362.520514] [<ffffffff811741b0>] blkdev_put+0x10/0x20 21:42:04 ...: [ 362.520520] [<ffffffff811741f3>] blkdev_close+0x33/0x60 21:42:04 ...: [ 362.520527] [<ffffffff81145375>] __fput+0xf5/0x210 21:42:04 ...: [ 362.520534] [<ffffffff811454b5>] fput+0x25/0x30 21:42:04 ...: [ 362.520540] [<ffffffff811415ad>] filp_close+0x5d/0x90 21:42:04 ...: [ 362.520546] [<ffffffff81141697>] sys_close+0xb7/0x120 21:42:04 ...: [ 362.520553] [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b 21:42:04 ...: [ 362.520559] INFO: task md0_reshape:2287 blocked for more than 120 seconds. 21:42:04 ...: [ 362.520563] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 21:42:04 ...: [ 362.520567] md0_reshape D ffff88003aee96f0 0 2287 2 0x00000000 21:42:04 ...: [ 362.520575] ffff88003cf05a70 0000000000000046 0000000000015bc0 0000000000015bc0 21:42:04 ...: [ 362.520582] ffff88003aee9aa8 ffff88003cf05fd8 0000000000015bc0 ffff88003aee96f0 21:42:04 ...: [ 362.520590] 0000000000015bc0 ffff88003cf05fd8 0000000000015bc0 ffff88003aee9aa8 21:42:04 ...: [ 362.520597] Call Trace: 21:42:04 ...: [ 362.520608] [<ffffffffa0224892>] get_active_stripe+0x312/0x3f0 [raid456] 21:42:04 ...: [ 362.520616] [<ffffffff81059ae0>] ? default_wake_function+0x0/0x20 21:42:04 ...: [ 362.520626] [<ffffffffa0226f80>] reshape_request+0x4c0/0x9a0 [raid456] 21:42:04 ...: [ 362.520634] [<ffffffff81084790>] ? autoremove_wake_function+0x0/0x40 21:42:04 ...: [ 362.520644] [<ffffffffa022777a>] sync_request+0x31a/0x3a0 [raid456] 21:42:04 ...: [ 362.520651] [<ffffffff81052713>] ? __wake_up+0x53/0x70 21:42:04 ...: [ 362.520658] [<ffffffff814156b1>] md_do_sync+0x621/0xbb0 21:42:04 ...: [ 362.520668] [<ffffffff810387b9>] ? default_spin_lock_flags+0x9/0x10 21:42:04 ...: [ 362.520675] [<ffffffff8141640c>] md_thread+0x5c/0x130 21:42:04 ...: [ 362.520681] [<ffffffff81084790>] ? autoremove_wake_function+0x0/0x40 21:42:04 ...: [ 362.520688] [<ffffffff814163b0>] ? md_thread+0x0/0x130 21:42:04 ...: [ 362.520694] [<ffffffff81084416>] kthread+0x96/0xa0 21:42:04 ...: [ 362.520701] [<ffffffff810131ea>] child_rip+0xa/0x20 21:42:04 ...: [ 362.520707] [<ffffffff81084380>] ? kthread+0x0/0xa0 21:42:04 ...: [ 362.520713] [<ffffffff810131e0>] ? child_rip+0x0/0x20 21:42:04 ...: [ 362.520718] INFO: task mdadm:2288 blocked for more than 120 seconds. 21:42:04 ...: [ 362.520721] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
21:42:04 ...: [ 362.520725] mdadm D 0000000000000000 0 2288 1 0x00000000 21:42:04 ...: [ 362.520733] ffff88002cca9c18 0000000000000086 0000000000015bc0 0000000000015bc0 21:42:04 ...: [ 362.520741] ffff88003aee83b8 ffff88002cca9fd8 0000000000015bc0 ffff88003aee8000 21:42:04 ...: [ 362.520748] 0000000000015bc0 ffff88002cca9fd8 0000000000015bc0 ffff88003aee83b8 21:42:04 ...: [ 362.520755] Call Trace: 21:42:04 ...: [ 362.520763] [<ffffffff81543a97>] __mutex_lock_slowpath+0xf7/0x180 21:42:04 ...: [ 362.520771] [<ffffffff812a6d50>] ? exact_match+0x0/0x10 21:42:04 ...: [ 362.520777] [<ffffffff8154397b>] mutex_lock+0x2b/0x50 21:42:04 ...: [ 362.520783] [<ffffffff811742c8>] __blkdev_get+0x68/0x3d0 21:42:04 ...: [ 362.520790] [<ffffffff81174650>] ? blkdev_open+0x0/0xc0 21:42:04 ...: [ 362.520795] [<ffffffff81174640>] blkdev_get+0x10/0x20 21:42:04 ...: [ 362.520801] [<ffffffff811746c1>] blkdev_open+0x71/0xc0 21:42:04 ...: [ 362.520808] [<ffffffff811419f3>] __dentry_open+0x113/0x370 21:42:04 ...: [ 362.520815] [<ffffffff81253f8f>] ? security_inode_permission+0x1f/0x30 21:42:04 ...: [ 362.520821] [<ffffffff8114de3f>] ? inode_permission+0xaf/0xd0 21:42:04 ...: [ 362.520828] [<ffffffff81141d67>] nameidata_to_filp+0x57/0x70 21:42:04 ...: [ 362.520834] [<ffffffff8115207a>] do_filp_open+0x2da/0xba0 21:42:04 ...: [ 362.520841] [<ffffffff810ff0e1>] ? lru_cache_add_lru+0x21/0x40 21:42:04 ...: [ 362.520848] [<ffffffff8111109c>] ? do_anonymous_page+0x11c/0x330 21:42:04 ...: [ 362.520855] [<ffffffff81115d5f>] ? handle_mm_fault+0x31f/0x3c0 21:42:04 ...: [ 362.520862] [<ffffffff8115dbfa>] ? alloc_fd+0x10a/0x150 21:42:04 ...: [ 362.520868] [<ffffffff81141769>] do_sys_open+0x69/0x170 21:42:04 ...: [ 362.520874] [<ffffffff811418b0>] sys_open+0x20/0x30 21:42:04 ...: [ 362.520882] [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b 21:44:04 ...: [ 482.520065] INFO: task mdadm:1937 blocked for more than 120 seconds. 21:44:04 ...: [ 482.520071] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 21:44:04 ...: [ 482.520077] mdadm D 00000000ffffffff 0 1937 1 0x00000000 21:44:04 ...: [ 482.520087] ffff88002ef4f5d8 0000000000000082 0000000000015bc0 0000000000015bc0 21:44:04 ...: [ 482.520096] ffff88002eb5b198 ffff88002ef4ffd8 0000000000015bc0 ffff88002eb5ade0 21:44:04 ...: [ 482.520104] 0000000000015bc0 ffff88002ef4ffd8 0000000000015bc0 ffff88002eb5b198 21:44:04 ...: [ 482.520112] Call Trace: 21:44:04 ...: [ 482.520139] [<ffffffffa0224892>] get_active_stripe+0x312/0x3f0 [raid456] 21:44:04 ...: [ 482.520154] [<ffffffff81059ae0>] ? default_wake_function+0x0/0x20 21:44:04 ...: [ 482.520165] [<ffffffffa0228413>] make_request+0x243/0x4b0 [raid456] 21:44:04 ...: [ 482.520175] [<ffffffffa0221a90>] ? release_stripe+0x50/0x70 [raid456] 21:44:04 ...: [ 482.520185] [<ffffffff81084790>] ? autoremove_wake_function+0x0/0x40 21:44:04 ...: [ 482.520194] [<ffffffff81414df0>] md_make_request+0xc0/0x130 21:44:04 ...: [ 482.520201] [<ffffffff81414df0>] ? md_make_request+0xc0/0x130 21:44:04 ...: [ 482.520212] [<ffffffff8129f8c1>] generic_make_request+0x1b1/0x4f0 21:44:04 ...: [ 482.520221] [<ffffffff810f6515>] ? mempool_alloc_slab+0x15/0x20 21:44:04 ...: [ 482.520229] [<ffffffff8116c2ec>] ? alloc_buffer_head+0x1c/0x60 21:44:04 ...: [ 482.520237] [<ffffffff8129fc80>] submit_bio+0x80/0x110 21:44:04 ...: [ 482.520244] [<ffffffff8116c849>] submit_bh+0xf9/0x140 21:44:04 ...: [ 482.520252] [<ffffffff8116f124>] block_read_full_page+0x274/0x3b0 21:44:04 ...: [ 482.520258] [<ffffffff81172c90>] ? 
blkdev_get_block+0x0/0x70 21:44:04 ...: [ 482.520266] [<ffffffff8110d875>] ? __inc_zone_page_state+0x35/0x40 21:44:04 ...: [ 482.520273] [<ffffffff810f46d8>] ? add_to_page_cache_locked+0xe8/0x160 21:44:04 ...: [ 482.520280] [<ffffffff81173d78>] blkdev_readpage+0x18/0x20 21:44:04 ...: [ 482.520286] [<ffffffff810f484b>] __read_cache_page+0x7b/0xe0 21:44:04 ...: [ 482.520293] [<ffffffff81173d60>] ? blkdev_readpage+0x0/0x20 21:44:04 ...: [ 482.520299] [<ffffffff81173d60>] ? blkdev_readpage+0x0/0x20 21:44:04 ...: [ 482.520306] [<ffffffff810f57dc>] do_read_cache_page+0x3c/0x120 21:44:04 ...: [ 482.520313] [<ffffffff810f5909>] read_cache_page_async+0x19/0x20 21:44:04 ...: [ 482.520319] [<ffffffff810f591e>] read_cache_page+0xe/0x20 21:44:04 ...: [ 482.520327] [<ffffffff811a6cb0>] read_dev_sector+0x30/0xa0 21:44:04 ...: [ 482.520334] [<ffffffff811a7fcd>] amiga_partition+0x6d/0x460 21:44:04 ...: [ 482.520341] [<ffffffff811a7938>] check_partition+0x138/0x190 21:44:04 ...: [ 482.520348] [<ffffffff811a7a7a>] rescan_partitions+0xea/0x2f0 21:44:04 ...: [ 482.520355] [<ffffffff811744c7>] __blkdev_get+0x267/0x3d0 21:44:04 ...: [ 482.520361] [<ffffffff81174650>] ? blkdev_open+0x0/0xc0 21:44:04 ...: [ 482.520367] [<ffffffff81174640>] blkdev_get+0x10/0x20 21:44:04 ...: [ 482.520373] [<ffffffff811746c1>] blkdev_open+0x71/0xc0 21:44:04 ...: [ 482.520380] [<ffffffff811419f3>] __dentry_open+0x113/0x370 21:44:04 ...: [ 482.520388] [<ffffffff81253f8f>] ? security_inode_permission+0x1f/0x30 21:44:04 ...: [ 482.520396] [<ffffffff8114de3f>] ? inode_permission+0xaf/0xd0 21:44:04 ...: [ 482.520403] [<ffffffff81141d67>] nameidata_to_filp+0x57/0x70 21:44:04 ...: [ 482.520410] [<ffffffff8115207a>] do_filp_open+0x2da/0xba0 21:44:04 ...: [ 482.520417] [<ffffffff811134a8>] ? unmap_vmas+0x178/0x310 21:44:04 ...: [ 482.520426] [<ffffffff8115dbfa>] ? alloc_fd+0x10a/0x150 21:44:04 ...: [ 482.520432] [<ffffffff81141769>] do_sys_open+0x69/0x170 21:44:04 ...: [ 482.520438] [<ffffffff811418b0>] sys_open+0x20/0x30 21:44:04 ...: [ 482.520447] [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b 21:44:04 ...: [ 482.520458] INFO: task mdadm:2283 blocked for more than 120 seconds. 21:44:04 ...: [ 482.520462] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 21:44:04 ...: [ 482.520467] mdadm D 0000000000000000 0 2283 2212 0x00000000 21:44:04 ...: [ 482.520475] ffff88002cca7d98 0000000000000086 0000000000015bc0 0000000000015bc0 21:44:04 ...: [ 482.520483] ffff88002ededf78 ffff88002cca7fd8 0000000000015bc0 ffff88002ededbc0 21:44:04 ...: [ 482.520490] 0000000000015bc0 ffff88002cca7fd8 0000000000015bc0 ffff88002ededf78 21:44:04 ...: [ 482.520498] Call Trace: 21:44:04 ...: [ 482.520508] [<ffffffff81543a97>] __mutex_lock_slowpath+0xf7/0x180 21:44:04 ...: [ 482.520515] [<ffffffff8154397b>] mutex_lock+0x2b/0x50 21:44:04 ...: [ 482.520521] [<ffffffff8117404d>] __blkdev_put+0x3d/0x190 21:44:04 ...: [ 482.520527] [<ffffffff811741b0>] blkdev_put+0x10/0x20 21:44:04 ...: [ 482.520533] [<ffffffff811741f3>] blkdev_close+0x33/0x60 21:44:04 ...: [ 482.520541] [<ffffffff81145375>] __fput+0xf5/0x210 21:44:04 ...: [ 482.520547] [<ffffffff811454b5>] fput+0x25/0x30 21:44:04 ...: [ 482.520554] [<ffffffff811415ad>] filp_close+0x5d/0x90 21:44:04 ...: [ 482.520560] [<ffffffff81141697>] sys_close+0xb7/0x120 21:44:04 ...: [ 482.520568] [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b 21:44:04 ...: [ 482.520574] INFO: task md0_reshape:2287 blocked for more than 120 seconds. 
21:44:04 ...: [ 482.520578] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 21:44:04 ...: [ 482.520582] md0_reshape D ffff88003aee96f0 0 2287 2 0x00000000 21:44:04 ...: [ 482.520590] ffff88003cf05a70 0000000000000046 0000000000015bc0 0000000000015bc0 21:44:04 ...: [ 482.520597] ffff88003aee9aa8 ffff88003cf05fd8 0000000000015bc0 ffff88003aee96f0 21:44:04 ...: [ 482.520605] 0000000000015bc0 ffff88003cf05fd8 0000000000015bc0 ffff88003aee9aa8 21:44:04 ...: [ 482.520612] Call Trace: 21:44:04 ...: [ 482.520623] [<ffffffffa0224892>] get_active_stripe+0x312/0x3f0 [raid456] 21:44:04 ...: [ 482.520633] [<ffffffff81059ae0>] ? default_wake_function+0x0/0x20 21:44:04 ...: [ 482.520643] [<ffffffffa0226f80>] reshape_request+0x4c0/0x9a0 [raid456] 21:44:04 ...: [ 482.520651] [<ffffffff81084790>] ? autoremove_wake_function+0x0/0x40 21:44:04 ...: [ 482.520661] [<ffffffffa022777a>] sync_request+0x31a/0x3a0 [raid456] 21:44:04 ...: [ 482.520668] [<ffffffff81052713>] ? __wake_up+0x53/0x70 21:44:04 ...: [ 482.520675] [<ffffffff814156b1>] md_do_sync+0x621/0xbb0 21:44:04 ...: [ 482.520685] [<ffffffff810387b9>] ? default_spin_lock_flags+0x9/0x10 21:44:04 ...: [ 482.520692] [<ffffffff8141640c>] md_thread+0x5c/0x130 21:44:04 ...: [ 482.520699] [<ffffffff81084790>] ? autoremove_wake_function+0x0/0x40 21:44:04 ...: [ 482.520705] [<ffffffff814163b0>] ? md_thread+0x0/0x130 21:44:04 ...: [ 482.520711] [<ffffffff81084416>] kthread+0x96/0xa0 21:44:04 ...: [ 482.520718] [<ffffffff810131ea>] child_rip+0xa/0x20 21:44:04 ...: [ 482.520725] [<ffffffff81084380>] ? kthread+0x0/0xa0 21:44:04 ...: [ 482.520730] [<ffffffff810131e0>] ? child_rip+0x0/0x20 21:44:04 ...: [ 482.520735] INFO: task mdadm:2288 blocked for more than 120 seconds. 21:44:04 ...: [ 482.520739] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 21:44:04 ...: [ 482.520743] mdadm D 0000000000000000 0 2288 1 0x00000000 21:44:04 ...: [ 482.520751] ffff88002cca9c18 0000000000000086 0000000000015bc0 0000000000015bc0 21:44:04 ...: [ 482.520759] ffff88003aee83b8 ffff88002cca9fd8 0000000000015bc0 ffff88003aee8000 21:44:04 ...: [ 482.520767] 0000000000015bc0 ffff88002cca9fd8 0000000000015bc0 ffff88003aee83b8 21:44:04 ...: [ 482.520774] Call Trace: 21:44:04 ...: [ 482.520782] [<ffffffff81543a97>] __mutex_lock_slowpath+0xf7/0x180 21:44:04 ...: [ 482.520790] [<ffffffff812a6d50>] ? exact_match+0x0/0x10 21:44:04 ...: [ 482.520797] [<ffffffff8154397b>] mutex_lock+0x2b/0x50 21:44:04 ...: [ 482.520804] [<ffffffff811742c8>] __blkdev_get+0x68/0x3d0 21:44:04 ...: [ 482.520810] [<ffffffff81174650>] ? blkdev_open+0x0/0xc0 21:44:04 ...: [ 482.520816] [<ffffffff81174640>] blkdev_get+0x10/0x20 21:44:04 ...: [ 482.520822] [<ffffffff811746c1>] blkdev_open+0x71/0xc0 21:44:04 ...: [ 482.520829] [<ffffffff811419f3>] __dentry_open+0x113/0x370 21:44:04 ...: [ 482.520837] [<ffffffff81253f8f>] ? security_inode_permission+0x1f/0x30 21:44:04 ...: [ 482.520843] [<ffffffff8114de3f>] ? inode_permission+0xaf/0xd0 21:44:04 ...: [ 482.520850] [<ffffffff81141d67>] nameidata_to_filp+0x57/0x70 21:44:04 ...: [ 482.520857] [<ffffffff8115207a>] do_filp_open+0x2da/0xba0 21:44:04 ...: [ 482.520864] [<ffffffff810ff0e1>] ? lru_cache_add_lru+0x21/0x40 21:44:04 ...: [ 482.520871] [<ffffffff8111109c>] ? do_anonymous_page+0x11c/0x330 21:44:04 ...: [ 482.520878] [<ffffffff81115d5f>] ? handle_mm_fault+0x31f/0x3c0 21:44:04 ...: [ 482.520885] [<ffffffff8115dbfa>] ? 
alloc_fd+0x10a/0x150 21:44:04 ...: [ 482.520891] [<ffffffff81141769>] do_sys_open+0x69/0x170 21:44:04 ...: [ 482.520897] [<ffffffff811418b0>] sys_open+0x20/0x30 21:44:04 ...: [ 482.520905] [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b 21:46:04 ...: [ 602.520053] INFO: task mdadm:1937 blocked for more than 120 seconds. 21:46:04 ...: [ 602.520059] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 21:46:04 ...: [ 602.520065] mdadm D 00000000ffffffff 0 1937 1 0x00000000 21:46:04 ...: [ 602.520075] ffff88002ef4f5d8 0000000000000082 0000000000015bc0 0000000000015bc0 21:46:04 ...: [ 602.520084] ffff88002eb5b198 ffff88002ef4ffd8 0000000000015bc0 ffff88002eb5ade0 21:46:04 ...: [ 602.520091] 0000000000015bc0 ffff88002ef4ffd8 0000000000015bc0 ffff88002eb5b198 21:46:04 ...: [ 602.520099] Call Trace: 21:46:04 ...: [ 602.520127] [<ffffffffa0224892>] get_active_stripe+0x312/0x3f0 [raid456] 21:46:04 ...: [ 602.520142] [<ffffffff81059ae0>] ? default_wake_function+0x0/0x20 21:46:04 ...: [ 602.520153] [<ffffffffa0228413>] make_request+0x243/0x4b0 [raid456] 21:46:04 ...: [ 602.520162] [<ffffffffa0221a90>] ? release_stripe+0x50/0x70 [raid456] 21:46:04 ...: [ 602.520171] [<ffffffff81084790>] ? autoremove_wake_function+0x0/0x40 21:46:04 ...: [ 602.520180] [<ffffffff81414df0>] md_make_request+0xc0/0x130 21:46:04 ...: [ 602.520187] [<ffffffff81414df0>] ? md_make_request+0xc0/0x130 21:46:04 ...: [ 602.520197] [<ffffffff8129f8c1>] generic_make_request+0x1b1/0x4f0 21:46:04 ...: [ 602.520206] [<ffffffff810f6515>] ? mempool_alloc_slab+0x15/0x20 21:46:04 ...: [ 602.520215] [<ffffffff8116c2ec>] ? alloc_buffer_head+0x1c/0x60 21:46:04 ...: [ 602.520222] [<ffffffff8129fc80>] submit_bio+0x80/0x110 21:46:04 ...: [ 602.520229] [<ffffffff8116c849>] submit_bh+0xf9/0x140 21:46:04 ...: [ 602.520237] [<ffffffff8116f124>] block_read_full_page+0x274/0x3b0 21:46:04 ...: [ 602.520244] [<ffffffff81172c90>] ? blkdev_get_block+0x0/0x70 21:46:04 ...: [ 602.520252] [<ffffffff8110d875>] ? __inc_zone_page_state+0x35/0x40 21:46:04 ...: [ 602.520259] [<ffffffff810f46d8>] ? add_to_page_cache_locked+0xe8/0x160 21:46:04 ...: [ 602.520266] [<ffffffff81173d78>] blkdev_readpage+0x18/0x20 21:46:04 ...: [ 602.520273] [<ffffffff810f484b>] __read_cache_page+0x7b/0xe0 21:46:04 ...: [ 602.520279] [<ffffffff81173d60>] ? blkdev_readpage+0x0/0x20 21:46:04 ...: [ 602.520285] [<ffffffff81173d60>] ? blkdev_readpage+0x0/0x20 21:46:04 ...: [ 602.520292] [<ffffffff810f57dc>] do_read_cache_page+0x3c/0x120 21:46:04 ...: [ 602.520300] [<ffffffff810f5909>] read_cache_page_async+0x19/0x20 21:46:04 ...: [ 602.520306] [<ffffffff810f591e>] read_cache_page+0xe/0x20 21:46:04 ...: [ 602.520314] [<ffffffff811a6cb0>] read_dev_sector+0x30/0xa0 21:46:04 ...: [ 602.520321] [<ffffffff811a7fcd>] amiga_partition+0x6d/0x460 21:46:04 ...: [ 602.520328] [<ffffffff811a7938>] check_partition+0x138/0x190 21:46:04 ...: [ 602.520335] [<ffffffff811a7a7a>] rescan_partitions+0xea/0x2f0 21:46:04 ...: [ 602.520342] [<ffffffff811744c7>] __blkdev_get+0x267/0x3d0 21:46:04 ...: [ 602.520348] [<ffffffff81174650>] ? blkdev_open+0x0/0xc0 21:46:04 ...: [ 602.520354] [<ffffffff81174640>] blkdev_get+0x10/0x20 21:46:04 ...: [ 602.520359] [<ffffffff811746c1>] blkdev_open+0x71/0xc0 21:46:04 ...: [ 602.520367] [<ffffffff811419f3>] __dentry_open+0x113/0x370 21:46:04 ...: [ 602.520375] [<ffffffff81253f8f>] ? security_inode_permission+0x1f/0x30 21:46:04 ...: [ 602.520383] [<ffffffff8114de3f>] ? 
inode_permission+0xaf/0xd0 21:46:04 ...: [ 602.520390] [<ffffffff81141d67>] nameidata_to_filp+0x57/0x70 21:46:04 ...: [ 602.520397] [<ffffffff8115207a>] do_filp_open+0x2da/0xba0 21:46:04 ...: [ 602.520404] [<ffffffff811134a8>] ? unmap_vmas+0x178/0x310 21:46:04 ...: [ 602.520413] [<ffffffff8115dbfa>] ? alloc_fd+0x10a/0x150 21:46:04 ...: [ 602.520419] [<ffffffff81141769>] do_sys_open+0x69/0x170 21:46:04 ...: [ 602.520425] [<ffffffff811418b0>] sys_open+0x20/0x30 21:46:04 ...: [ 602.520434] [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b 21:46:04 ...: [ 602.520443] INFO: task mdadm:2283 blocked for more than 120 seconds. 21:46:04 ...: [ 602.520447] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 21:46:04 ...: [ 602.520451] mdadm D 0000000000000000 0 2283 2212 0x00000000 21:46:04 ...: [ 602.520460] ffff88002cca7d98 0000000000000086 0000000000015bc0 0000000000015bc0 21:46:04 ...: [ 602.520468] ffff88002ededf78 ffff88002cca7fd8 0000000000015bc0 ffff88002ededbc0 21:46:04 ...: [ 602.520475] 0000000000015bc0 ffff88002cca7fd8 0000000000015bc0 ffff88002ededf78 21:46:04 ...: [ 602.520483] Call Trace: 21:46:04 ...: [ 602.520492] [<ffffffff81543a97>] __mutex_lock_slowpath+0xf7/0x180 21:46:04 ...: [ 602.520500] [<ffffffff8154397b>] mutex_lock+0x2b/0x50 21:46:04 ...: [ 602.520506] [<ffffffff8117404d>] __blkdev_put+0x3d/0x190 21:46:04 ...: [ 602.520512] [<ffffffff811741b0>] blkdev_put+0x10/0x20 21:46:04 ...: [ 602.520518] [<ffffffff811741f3>] blkdev_close+0x33/0x60 21:46:04 ...: [ 602.520526] [<ffffffff81145375>] __fput+0xf5/0x210 21:46:04 ...: [ 602.520533] [<ffffffff811454b5>] fput+0x25/0x30 21:46:04 ...: [ 602.520539] [<ffffffff811415ad>] filp_close+0x5d/0x90 21:46:04 ...: [ 602.520545] [<ffffffff81141697>] sys_close+0xb7/0x120 21:46:04 ...: [ 602.520552] [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b

    Read the article

  • Mouse Clicks, Reactive Extensions and StreamInsight Mashup

    I had an hour spare this afternoon so I wanted to have another play with Reactive Extensions in .Net and StreamInsight. I also didn't want to simply use a console window as a way of gathering events, so I decided to use a Windows form instead. The task I set myself was this: whenever I click on my form I want to subscribe to the event and output its location to the console window, along with the timestamp of the event. In addition to this, for every mouse click I do, I want to know how many mouse clicks have happened in the last 5 seconds. The second point here is really interesting. I have often found this when working with people on problems: it is how you ask the question that determines how you tackle the problem. I will show 2 ways of possibly answering the second question depending on how the question is interpreted. As a side effect of this example I will show how time in StreamInsight can stand still. This is an important concept and we can see it in the output later. Now to the code. I will break it all down in this blog post but you can download the solution and see it all together. I created a Console application and then instantiated a Windows form.   frm = new Form(); Thread g = new Thread(CallUI); g.SetApartmentState(ApartmentState.STA); g.Start();   CallUI looks like this:   static void CallUI() { System.Windows.Forms.Application.Run(frm); frm.Activate(); frm.BringToFront(); }   Now what we need to do is create an observable from the MouseClick event on the form. For this we use the Reactive Extensions.   var lblevt = Observable.FromEvent<MouseEventArgs>(frm, "MouseClick").Timestamp();   As mentioned earlier I have two objectives in this example, and to solve the first I am going to again use the Reactive Extensions. Let's subscribe to the MouseClick event and output the location and timestamp to the console. lblevt.Subscribe(evt => { Console.WriteLine("Clicked: {0}, {1} ", evt.Value.EventArgs.Location,evt.Timestamp); }); That takes care of objective #1, but what about the second objective? For that we need some temporal windowing, and this means StreamInsight. First we need to turn our Observable collection of MouseClick events into a PointStream: Server s = Server.Create("Default"); Microsoft.ComplexEventProcessing.Application a = s.CreateApplication("MouseClicks"); var input = lblevt.ToPointStream( a, evt => PointEvent.CreateInsert( evt.Timestamp, new { loc = evt.Value.EventArgs.Location.ToString(), ts = evt.Timestamp.ToLocalTime().ToString() }), AdvanceTimeSettings.IncreasingStartTime);   Now that we have created our PointStream we need to do something with it, and this is where we get to our second objective. It is pretty clear that we want some kind of windowing, but what? Here is one way of doing it. It might not be what you wanted, but again it depends on how the second objective is interpreted:   var q = from i in input.TumblingWindow(TimeSpan.FromSeconds(5), HoppingWindowOutputPolicy.ClipToWindowEnd) select new { CountOfClicks = i.Count() };   The above code creates tumbling windows of 5 seconds and counts the number of events in each window. If there are no events in the window then no result is output. Likewise, until an event (MouseClick) is issued we do not see anything in the output (strictly speaking it is the CTI strapped to our MouseClick events that flushes the events through the StreamInsight engine, not the events themselves). This approach is centred around the windows and not the events.
Until the windows complete and a CTI is issued, no events are pushed through. An alternate way of answering our second question is below:   var q = from i in input.AlterEventDuration(evt => TimeSpan.FromSeconds(5)).SnapshotWindow(SnapshotWindowOutputPolicy.Clip) select new { CountOfClicks = i.Count() };   In this code we extend the duration of each MouseClick to five seconds. We then create Snapshot Windows over those events. Snapshot windows are discussed in detail here. With this solution we are centred around the events; it is the events that are driving the output. Let's have a look at the output from this solution as it may be a little confusing. First though, let me show how we get the output from StreamInsight into the Console window: foreach (var x in q.ToPointEnumerable().Where(e => e.EventKind != EventKind.Cti)) { Console.WriteLine(x.Payload.CountOfClicks); }   OK, so now to the output. In the original post the table at the top shows the output from the routine and the table at the bottom helps to explain it; in that bottom table the blue cells mark the instants where a click was issued, the yellow cells mark the five-second duration and boundaries of each event, and the numbers along the bottom are the snapshot counts. One further thing that helps: for our PointStream we set the issuing of CTIs to IncreasingStartTime, which means the CTI is placed right at the start of the event, so it will not flush the event with which it was issued but will flush those prior to it. Reconstructed as plain text, the output (clicks, and the counts written to the console after each one) was:

    Clicked 22:40:16
    Clicked 22:40:18  -> 1
    Clicked 22:40:20  -> 2
    Clicked 22:40:22  -> 3, 2
    Clicked 22:40:24  -> 3, 2
    Clicked 22:40:32  -> 3, 2, 1

    secs:   16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32
    counts, in order along that timeline: 1, 2, 3, 2, 3, 2, 3, 2, 1

What we can see here in the output is that the counts include all the end edges that have occurred between the mouse clicks. If we look specifically at the mouse click at 22:40:32, we see that 3 events are returned to us: an End Edge count at 22:40:25, an End Edge count at 22:40:27 and an End Edge count at 22:40:29. Another thing we notice is that until we actually issue a CTI at 22:40:32, those last 3 snapshot window counts will never be reported. Hopefully this has helped to explain a few concepts around StreamInsight and the IObservable() pattern. You can download this solution from here and play. You will need the Reactive Framework from here and StreamInsight 1.1.

    Read the article

  • Slow NFS and GFS2 performance

    - by Tiago
    Recently I've designed and configured a 4 node cluster for a webapp that does lots of file handling. The cluster have been broken down into 2 main roles, webserver and storage. Each role is replicated to a second server using drbd in active/passive mode. The webserver does a NFS mount of the data directory of the storage server and the latter also has a webserver running to serve files to browser clients. In the storage servers I've created a GFS2 FS to hold the data which is wired to drbd. I've chose GFS2 mainly because the announced performance and also because the volume size which has to be pretty high. Since we entered production I've been facing two problems that I think are deeply connected. First of all, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operations. By analyzing the logs I've found out that NFS stops answering for a while and outputs the following log lines: Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK In this case, the hang lasted for 16 seconds but sometimes it takes 1 or 2 minutes to resume normal operations. My first guess was this was happening due to heavy load of the NFS mount and that by increasing RPCNFSDCOUNT to a higher value, this would become stable. 
I've increased it several times and apparently, after a while, the logs started appearing less times. The value is now on 32. After further investigating the issue, I've came across a different hang, despite the NFS messages still appear in the logs. Sometimes, the GFS2 FS simply hangs which causes both the NFS and the storage webserver to serve files. Both stay hang for a while and then they resume normal operations. This hangs leaves no trace on client side (also leaves no NFS ... not responding messages) and, on the storage side, the log system appears to be empty, even though the rsyslogd is running. The nodes connect themselves through a 10Gbps non-dedicated connection but I don't think this is an issue because the GFS2 hang is confirmed but connecting directly to the active storage server. I've been trying to solve this for a while now and I've tried different NFS configuration options, before I've found out the GFS2 FS is also hanging. The NFS mount is exported as such: /srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25) And the NFS client mounts with: mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data After some tests, these were the configurations that yielded more performance to the cluster. I am desperate to find a solution for this as the cluster is already in production mode and I need to fix this so that this hangs won't happen in the future and I don't really know for sure what and how I should be benchmarking. What I can tell is that this is happening due to heavy loads as I have tested the cluster earlier and this problems weren't happening at all. Please tell me if you need me to provide configuration details of the cluster, and which do you want me to post. As last resort I can migrate the files to a different FS but I need some solid pointers on whether this will solve this problems as the volume size is extremely large at this point. The servers are being hosted by a third-party enterprise and I don't have physical access to them. Best regards. EDIT 1: The servers are physical servers and their specs are: Webservers: Intel Bi Xeon E5606 2x4 2.13GHz 24GB DDR3 Intel SSD 320 2 x 120GB Raid 1 Storage: Intel i5 3550 3.3GHz 16GB DDR3 12 x 2TB SATA Initially there was a VRack setup between the servers but we've upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10Gbps connection between them. Please note that it is the same connection that is used for public access. They use a single IP (using IP Failover) to connect between them and to allow for a graceful failover. NFS is therefore over a public connection and not under any private network (it was before the upgrade, were the problem still existed). The firewall was configured and tested thoroughly but I disabled it for a while to see if the problem still occurred, and it did. From my knowledge the hosting provider isn't blocking or limiting the connection between either the servers and the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). Hope this helps figuring out the problem. 
EDIT 2: Relevant software versions: CentOS 2.6.32-279.9.1.el6.x86_64 nfs-utils-1.2.3-26.el6.x86_64 nfs-utils-lib-1.1.5-4.el6.x86_64 gfs2-utils-3.0.12.1-32.el6_3.1.x86_64 kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64 drbd84-utils-8.4.2-1.el6.elrepo.x86_64 DRBD configuration on storage servers: #/etc/drbd.d/storage.res resource storage { protocol C; on <server1 fqdn> { device /dev/drbd0; disk /dev/vg_storage/LV_replicated; address <server1 ip>:7788; meta-disk internal; } on <server2 fqdn> { device /dev/drbd0; disk /dev/vg_storage/LV_replicated; address <server2 ip>:7788; meta-disk internal; } } NFS Configuration in storage servers: #/etc/sysconfig/nfs RPCNFSDCOUNT=32 STATD_PORT=10002 STATD_OUTGOING_PORT=10003 MOUNTD_PORT=10004 RQUOTAD_PORT=10005 LOCKD_UDPPORT=30001 LOCKD_TCPPORT=30001 (can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?) GFS2 configuration: # gfs2_tool gettune <mountpoint> incore_log_blocks = 1024 log_flush_secs = 60 quota_warn_period = 10 quota_quantum = 60 max_readahead = 262144 complain_secs = 10 statfs_slow = 0 quota_simul_sync = 64 statfs_quantum = 30 quota_scale = 1.0000 (1, 1) new_files_jdata = 0 Storage network environment: eth0 Link encap:Ethernet HWaddr <mac address> inet addr:<ip address> Bcast:<bcast address> Mask:<ip mask> inet6 addr: <ip address> Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0 TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2630984979622 (2.3 TiB) TX bytes:1648430431523 (1.4 TiB) eth0:0 Link encap:Ethernet HWaddr <mac address> inet addr:<ip failover address> Bcast:<bcast address> Mask:<ip mask> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 The IP addresses are statically assigned with the given network configurations: DEVICE="eth0" BOOTPROTO="static" HWADDR=<mac address> ONBOOT="yes" TYPE="Ethernet" IPADDR=<ip address> NETMASK=<net mask> and DEVICE="eth0:0" BOOTPROTO="static" HWADDR=<mac address> IPADDR=<ip failover> NETMASK=<net mask> ONBOOT="yes" BROADCAST=<bcast address> Hosts file to allow for a graceful NFS failover in conjunction with NFS option fsid=25 set on both storage servers: #/etc/hosts <storage ip failover address> active.storage.vlan <webserver ip failover address> active.service.vlan As you can see, packet errors are down to 0. I've also ran ping for a long time without any packet loss. MTU size is the normal 1500. As there is no VLan by now, this is the MTU used to communicate between servers. The webservers' network environment is similar. One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key point for me to think this is some kind of heavy load problem with either NFS or GFS2. If you need further configuration details please tell me. EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stop responding. After the reboot, I noticed the filesystem was extremely slow, and I was not being able to serve a single file through either NFS or httpd, perhaps due to cache warming or so. Nevertheless, I've been monitoring the server closely and the following error came up in dmesg. The source of the problem is clearly GFS, which is waiting for a lock and ends up starving after a while. INFO: task nfsd:3029 blocked for more than 120 seconds. 
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. nfsd D 0000000000000000 0 3029 2 0x00000080 ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098 Call Trace: [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40 [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2] [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50 [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2] [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2] [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2] [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2] [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90 [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2] [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2] [<ffffffff81188cf4>] vfs_create+0xb4/0xe0 [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd] [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd] [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd] [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc] [<ffffffff810602a0>] ? default_wake_function+0x0/0x20 [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc] [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd] [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd] [<ffffffff81091de6>] kthread+0x96/0xa0 [<ffffffff8100c14a>] child_rip+0xa/0x20 [<ffffffff81091d50>] ? kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20

    Read the article

  • Hibernate + PostgreSQL : relation does not exist - SQL Error: 0, SQLState: 42P01

    - by tommy599
    Hello, I am having some problems trying to work with PostgreSQL and Hibernate, more specifically, the issue mentioned in the title. I've been searching the net for a few hours now but none of the found solutions worked for me. I am using Eclipse Java EE IDE for Web Developers. Build id: 20090920-1017 with HibernateTools, Hibernate 3, PostgreSQL 8.4.3 on Ubuntu 9.10. Here are the relevant files: Message.class package hello; public class Message { private Long id; private String text; public Message() { } public Long getId() { return id; } public void setId(Long id) { this.id = id; } public String getText() { return text; } public void setText(String text) { this.text = text; } } Message.hbm.xml <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <hibernate-mapping package="hello"> <class name="Message" table="public.messages"> <id name="id" column="id"> <generator class="assigned"/> </id> <property name="text" column="messagetext"/> </class> </hibernate-mapping> hibernate.cfg.xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd"> <hibernate-configuration> <session-factory> <property name="hibernate.connection.driver_class">org.postgresql.Driver</property> <property name="hibernate.connection.password">bar</property> <property name="hibernate.connection.url">jdbc:postgresql:postgres/tommy</property> <property name="hibernate.connection.username">foo</property> <property name="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</property> <property name="show_sql">true</property> <property name="log4j.logger.org.hibernate.type">DEBUG</property> <mapping resource="hello/Message.hbm.xml"/> </session-factory> </hibernate-configuration> Main package hello; import org.hibernate.Session; import org.hibernate.SessionFactory; import org.hibernate.Transaction; import org.hibernate.cfg.Configuration; public class App { public static void main(String[] args) { SessionFactory sessionFactory = new Configuration().configure() .buildSessionFactory(); Message message = new Message(); message.setText("Hello Cruel World"); message.setId(2L); Session session = null; Transaction transaction = null; try { session = sessionFactory.openSession(); transaction = session.beginTransaction(); session.save(message); } catch (Exception e) { System.out.println("Exception attemtping to Add message: " + e.getMessage()); } finally { if (session != null && session.isOpen()) { if (transaction != null) transaction.commit(); session.flush(); session.close(); } } } } Table structure: foo=# \d messages Table "public.messages" Column | Type | Modifiers -------------+---------+----------- id | integer | messagetext | text | Eclipse console output when I run it Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Environment <clinit> INFO: Hibernate 3.5.1-Final Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Environment <clinit> INFO: hibernate.properties not found Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Environment buildBytecodeProvider INFO: Bytecode provider name : javassist Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Environment <clinit> INFO: using JDK 1.4 java.sql.Timestamp handling Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Configuration configure INFO: configuring from resource: /hibernate.cfg.xml Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Configuration 
getConfigurationInputStream INFO: Configuration resource: /hibernate.cfg.xml Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Configuration addResource INFO: Reading mappings from resource : hello/Message.hbm.xml Apr 28, 2010 11:13:54 PM org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues INFO: Mapping class: hello.Message -> public.messages Apr 28, 2010 11:13:54 PM org.hibernate.cfg.Configuration doConfigure INFO: Configured SessionFactory: null Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: Using Hibernate built-in connection pool (not for production use!) Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: Hibernate connection pool size: 20 Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: autocommit mode: false Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: using driver: org.postgresql.Driver at URL: jdbc:postgresql:postgres/tommy Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: connection properties: {user=foo, password=****} Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: RDBMS: PostgreSQL, version: 8.4.3 Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JDBC driver: PostgreSQL Native Driver, version: PostgreSQL 8.4 JDBC4 (build 701) Apr 28, 2010 11:13:54 PM org.hibernate.dialect.Dialect <init> INFO: Using dialect: org.hibernate.dialect.PostgreSQLDialect Apr 28, 2010 11:13:54 PM org.hibernate.engine.jdbc.JdbcSupportLoader useContextualLobCreation INFO: Disabling contextual LOB creation as createClob() method threw error : java.lang.reflect.InvocationTargetException Apr 28, 2010 11:13:54 PM org.hibernate.transaction.TransactionFactoryFactory buildTransactionFactory INFO: Using default transaction strategy (direct JDBC transactions) Apr 28, 2010 11:13:54 PM org.hibernate.transaction.TransactionManagerLookupFactory getTransactionManagerLookup INFO: No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Automatic flush during beforeCompletion(): disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Automatic session close at end of transaction: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JDBC batch size: 15 Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JDBC batch updates for versioned data: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Scrollable result sets: enabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JDBC3 getGeneratedKeys(): enabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Connection release mode: auto Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Default batch fetch size: 1 Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Generate SQL with comments: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Order SQL updates by primary key: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Order SQL inserts for batching: disabled Apr 28, 2010 11:13:54 PM 
org.hibernate.cfg.SettingsFactory createQueryTranslatorFactory INFO: Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory Apr 28, 2010 11:13:54 PM org.hibernate.hql.ast.ASTQueryTranslatorFactory <init> INFO: Using ASTQueryTranslatorFactory Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Query language substitutions: {} Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JPA-QL strict compliance: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Second-level cache: enabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Query cache: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory createRegionFactory INFO: Cache region factory : org.hibernate.cache.impl.NoCachingRegionFactory Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Optimize cache for minimal puts: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Structured second-level cache entries: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Echoing all SQL to stdout Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Statistics: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Deleted entity synthetic identifier rollback: disabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Default entity-mode: pojo Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Named query checking : enabled Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Check Nullability in Core (should be disabled when Bean Validation is on): enabled Apr 28, 2010 11:13:54 PM org.hibernate.impl.SessionFactoryImpl <init> INFO: building session factory Apr 28, 2010 11:13:55 PM org.hibernate.impl.SessionFactoryObjectFactory addInstance INFO: Not binding factory to JNDI, no JNDI name configured Hibernate: insert into public.messages (messagetext, id) values (?, ?) Apr 28, 2010 11:13:55 PM org.hibernate.util.JDBCExceptionReporter logExceptions WARNING: SQL Error: 0, SQLState: 42P01 Apr 28, 2010 11:13:55 PM org.hibernate.util.JDBCExceptionReporter logExceptions SEVERE: Batch entry 0 insert into public.messages (messagetext, id) values ('Hello Cruel World', '2') was aborted. Call getNextException to see the cause. 
Apr 28, 2010 11:13:55 PM org.hibernate.util.JDBCExceptionReporter logExceptions WARNING: SQL Error: 0, SQLState: 42P01 Apr 28, 2010 11:13:55 PM org.hibernate.util.JDBCExceptionReporter logExceptions SEVERE: ERROR: relation "public.messages" does not exist Position: 13 Apr 28, 2010 11:13:55 PM org.hibernate.event.def.AbstractFlushingEventListener performExecutions SEVERE: Could not synchronize database state with session org.hibernate.exception.SQLGrammarException: Could not execute JDBC batch update at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:92) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:179) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206) at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:375) at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137) at hello.App.main(App.java:31) Caused by: java.sql.BatchUpdateException: Batch entry 0 insert into public.messages (messagetext, id) values ('Hello Cruel World', '2') was aborted. Call getNextException to see the cause. at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2569) at org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:459) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1796) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:407) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2708) at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268) ... 8 more Exception in thread "main" org.hibernate.exception.SQLGrammarException: Could not execute JDBC batch update at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:92) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:179) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206) at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:375) at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137) at hello.App.main(App.java:31) Caused by: java.sql.BatchUpdateException: Batch entry 0 insert into public.messages (messagetext, id) values ('Hello Cruel World', '2') was aborted. Call getNextException to see the cause. 
at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2569) at org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:459) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1796) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:407) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2708) at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70) at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268) ... 8 more PostgreSQL log file: 2010-04-28 23:13:55 EEST LOG: execute S_1: BEGIN 2010-04-28 23:13:55 EEST ERROR: relation "public.messages" does not exist at character 13 2010-04-28 23:13:55 EEST STATEMENT: insert into public.messages (messagetext, id) values ($1, $2) 2010-04-28 23:13:55 EEST LOG: unexpected EOF on client connection If I copy/paste the query into the psql command line, fill in the values and add a ; after it, it works. Everything is lowercase, so I don't think that's the issue. If I switch to MySQL with the same code and project (I only change the driver, URL and authentication), it works. In the Eclipse Data Source Explorer I can ping the DB and it succeeds. The weird thing is that I can't see the tables from there either: it expands the public schema but it doesn't expand the tables. Could it be some permission issue? Thanks!
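
    Given the symptoms at the end (the INSERT works when pasted into psql, yet Eclipse cannot expand the tables either), one way to narrow this down is to bypass Hibernate and ask plain JDBC, using the exact connection settings from the posted hibernate.cfg.xml, which database and schema the connection actually lands in and whether a messages table is visible from it. The following is a minimal sketch, not part of the original question: the class name is made up, and the URL and credentials are simply copied from the configuration above.

        import java.sql.Connection;
        import java.sql.DatabaseMetaData;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class CheckMessagesTable {
            public static void main(String[] args) throws Exception {
                Class.forName("org.postgresql.Driver");
                // Same settings as in the question's hibernate.cfg.xml
                Connection con = DriverManager.getConnection("jdbc:postgresql:postgres/tommy", "foo", "bar");
                Statement st = con.createStatement();
                ResultSet rs = st.executeQuery("select current_database(), current_schema()");
                if (rs.next()) {
                    System.out.println("connected to database=" + rs.getString(1) + ", schema=" + rs.getString(2));
                }
                rs.close();
                st.close();
                // List every table named "messages" that is visible through this connection
                DatabaseMetaData md = con.getMetaData();
                ResultSet tables = md.getTables(null, null, "messages", new String[] { "TABLE" });
                while (tables.next()) {
                    System.out.println(tables.getString("TABLE_SCHEM") + "." + tables.getString("TABLE_NAME"));
                }
                tables.close();
                con.close();
            }
        }

    If public.messages does not show up here, the problem lies with the connection settings or schema visibility rather than with the Hibernate mapping; if it does show up, the mapping (table="public.messages") is the next place to look.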

    Read the article

  • How to implement EntityDataSource Where IN entity sql clause

    - by TonyS
    I want to pass a number of values into a parameter of the EntityDataSource, e.g.: Where="it.ORDER_ID IN {@OrderIdList}" (this is a property on the EntityDataSource) <WhereParameters> <asp:ControlParameter Name="OrderIdList" Type="Int16" ControlID="OrderFilterControl" PropertyName="OrderIdList" /> </WhereParameters> This doesn't work as ORDER_ID is of type int32 and I need to pass in multiple values, e.g. {1,2,3} etc The next thing I tried was setting the Where clause in code-behind and this all works except I can't get data binding on DropDownLists to work. By this I mean no value is returned from the bound dropdownlists in the EntityDataSource Updating Event. My ideal solution would be to use a WhereParameter on the EntityDataSource but any help is appreciated. Thanks, Tony. A complete code example follows using the AdventureWorks db: Public Class EntityDataSourceWhereInClause Inherits System.Web.UI.Page Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load CustomersEntityDataSource.Where = WhereClause ''# reset after each postback as its lost otherwise End Sub Private Sub cmdFilterCustomers_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles cmdFilterCustomers.Click Dim CustomerIdList As New Generic.List(Of Int32) For Each item As ListItem In CustomerIdCheckBoxList.Items If item.Selected Then CustomerIdList.Add(item.Value) End If Next Dim CustomerCsvList As String = String.Join(", ", CustomerIdList.Select(Function(o) o.ToString()).ToArray()) WhereClause = "it.CustomerID IN {" & CustomerCsvList & "}" CustomersEntityDataSource.Where = WhereClause FormView1.PageIndex = 0 End Sub ''# save between postbacks the custom Where IN clause Public Property WhereClause() As String Get Return ViewState("WhereClause") End Get Set(ByVal value As String) ViewState.Add("WhereClause", value) End Set End Property Private Sub CustomersEntityDataSource_Updating(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.EntityDataSourceChangingEventArgs) Handles CustomersEntityDataSource.Updating Dim c = CType(e.Entity, EntityFrameworkTest.Customer) If c.Title.Length = 0 Then Response.Write("Title is empty string, so will save like this!") End If End Sub End Class <%@ Page Language="vb" AutoEventWireup="false" CodeBehind="EntityDataSourceWhereInClause.aspx.vb" Inherits="EntityFrameworkTest.EntityDataSourceWhereInClause" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <%''# filter control %> <div> <asp:EntityDataSource ID="CustomerIdListEntityDataSource" runat="server" ConnectionString="name=AdventureWorksLT2008Entities" DefaultContainerName="AdventureWorksLT2008Entities" EnableFlattening="False" EntitySetName="Customers" Select="it.[CustomerID]" OrderBy="it.[CustomerID]"> </asp:EntityDataSource> <asp:CheckBoxList ID="CustomerIdCheckBoxList" runat="server" DataSourceID="CustomerIdListEntityDataSource" DataTextField="CustomerID" DataValueField="CustomerID" RepeatDirection="Horizontal"> </asp:CheckBoxList> <asp:Button ID="cmdFilterCustomers" runat="server" Text="Apply Filter" /> </div> <% ''# you get this error passing in CSV in the where clause ''# The element type 'Edm.Int32' and the CollectionType 'Transient.collection[Edm.String(Nullable=True,DefaultValue=,MaxLength=,Unicode=,FixedLength=)]' are not compatible. 
The IN expression only supports entity, primitive, and reference types. Near WHERE predicate, line 6, column 15. ''# so have coded it manually in code-behind Where="it.CustomerID IN {@OrderIdList}" %> <asp:EntityDataSource ID="CustomersEntityDataSource" runat="server" ConnectionString="name=AdventureWorksLT2008Entities" DefaultContainerName="AdventureWorksLT2008Entities" EnableFlattening="False" EnableUpdate="True" EntitySetName="Customers" AutoGenerateOrderByClause="false"> </asp:EntityDataSource> <%''# updating works with DropDownLists until the Where clause is set in code %> <asp:FormView ID="FormView1" runat="server" AllowPaging="True" CellPadding="4" DataKeyNames="CustomerID" DataSourceID="CustomersEntityDataSource" ForeColor="#333333"> <EditItemTemplate> CustomerID: <asp:Label ID="CustomerIDLabel1" runat="server" Text='<%# Eval("CustomerID") %>' /> <br /> NameStyle: <asp:CheckBox ID="NameStyleCheckBox" runat="server" Checked='<%# Bind("NameStyle") %>' /> <br /> Title: <%''# the SelectedValue is not Bound to the EF object if the Where clause is updated in code-behind %> <asp:DropDownList ID="ddlTitleBound" runat="server" DataSourceID="TitleEntityDataSource" DataTextField="Title" DataValueField="Title" AutoPostBack="false" AppendDataBoundItems="true" SelectedValue='<%# Bind("Title") %>'> </asp:DropDownList> <asp:EntityDataSource ID="TitleEntityDataSource" runat="server" ConnectionString="name=AdventureWorksLT2008Entities" DefaultContainerName="AdventureWorksLT2008Entities" EnableFlattening="False" EntitySetName="Customers" Select="it.[Title]" GroupBy="it.[Title]" ViewStateMode="Enabled"> </asp:EntityDataSource> <br /> FirstName: <asp:TextBox ID="FirstNameTextBox" runat="server" Text='<%# Bind("FirstName") %>' /> <br /> MiddleName: <asp:TextBox ID="MiddleNameTextBox" runat="server" Text='<%# Bind("MiddleName") %>' /> <br /> LastName: <asp:TextBox ID="LastNameTextBox" runat="server" Text='<%# Bind("LastName") %>' /> <br /> Suffix: <asp:TextBox ID="SuffixTextBox" runat="server" Text='<%# Bind("Suffix") %>' /> <br /> CompanyName: <asp:TextBox ID="CompanyNameTextBox" runat="server" Text='<%# Bind("CompanyName") %>' /> <br /> SalesPerson: <asp:TextBox ID="SalesPersonTextBox" runat="server" Text='<%# Bind("SalesPerson") %>' /> <br /> EmailAddress: <asp:TextBox ID="EmailAddressTextBox" runat="server" Text='<%# Bind("EmailAddress") %>' /> <br /> Phone: <asp:TextBox ID="PhoneTextBox" runat="server" Text='<%# Bind("Phone") %>' /> <br /> PasswordHash: <asp:TextBox ID="PasswordHashTextBox" runat="server" Text='<%# Bind("PasswordHash") %>' /> <br /> PasswordSalt: <asp:TextBox ID="PasswordSaltTextBox" runat="server" Text='<%# Bind("PasswordSalt") %>' /> <br /> rowguid: <asp:TextBox ID="rowguidTextBox" runat="server" Text='<%# Bind("rowguid") %>' /> <br /> ModifiedDate: <asp:TextBox ID="ModifiedDateTextBox" runat="server" Text='<%# Bind("ModifiedDate") %>' /> <br /> <asp:LinkButton ID="UpdateButton" runat="server" CausesValidation="True" CommandName="Update" Text="Update" /> &nbsp;<asp:LinkButton ID="UpdateCancelButton" runat="server" CausesValidation="False" CommandName="Cancel" Text="Cancel" /> </EditItemTemplate> <EditRowStyle BackColor="#999999" /> <FooterStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" /> <HeaderStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" /> <ItemTemplate> CustomerID: <asp:Label ID="CustomerIDLabel" runat="server" Text='<%# Eval("CustomerID") %>' /> <br /> NameStyle: <asp:CheckBox ID="NameStyleCheckBox" runat="server" Checked='<%# Bind("NameStyle") 
%>' Enabled="false" /> <br /> Title: <asp:Label ID="TitleLabel" runat="server" Text='<%# Bind("Title") %>' /> <br /> FirstName: <asp:Label ID="FirstNameLabel" runat="server" Text='<%# Bind("FirstName") %>' /> <br /> MiddleName: <asp:Label ID="MiddleNameLabel" runat="server" Text='<%# Bind("MiddleName") %>' /> <br /> LastName: <asp:Label ID="LastNameLabel" runat="server" Text='<%# Bind("LastName") %>' /> <br /> Suffix: <asp:Label ID="SuffixLabel" runat="server" Text='<%# Bind("Suffix") %>' /> <br /> CompanyName: <asp:Label ID="CompanyNameLabel" runat="server" Text='<%# Bind("CompanyName") %>' /> <br /> SalesPerson: <asp:Label ID="SalesPersonLabel" runat="server" Text='<%# Bind("SalesPerson") %>' /> <br /> EmailAddress: <asp:Label ID="EmailAddressLabel" runat="server" Text='<%# Bind("EmailAddress") %>' /> <br /> Phone: <asp:Label ID="PhoneLabel" runat="server" Text='<%# Bind("Phone") %>' /> <br /> PasswordHash: <asp:Label ID="PasswordHashLabel" runat="server" Text='<%# Bind("PasswordHash") %>' /> <br /> PasswordSalt: <asp:Label ID="PasswordSaltLabel" runat="server" Text='<%# Bind("PasswordSalt") %>' /> <br /> rowguid: <asp:Label ID="rowguidLabel" runat="server" Text='<%# Bind("rowguid") %>' /> <br /> ModifiedDate: <asp:Label ID="ModifiedDateLabel" runat="server" Text='<%# Bind("ModifiedDate") %>' /> <br /> <asp:LinkButton ID="EditButton" runat="server" CausesValidation="False" CommandName="Edit" Text="Edit" /> </ItemTemplate> <PagerSettings Position="Top" /> <PagerStyle BackColor="#284775" ForeColor="White" HorizontalAlign="Center" /> <RowStyle BackColor="#F7F6F3" ForeColor="#333333" /> </asp:FormView> </form>

    Read the article

  • axis2 web service behave differently when tested with web service client or with local test class

    - by Stefano
    Hello, I need to update a facade over some web service proxy classes for a third-party web service and expose them as a service. This is for two reasons: to maintain the same interface for all applications that need to use the system (the system is migrating and there are a few differences in the third-party WS, e.g. method names), and to expose a simplified interface. The third party has provided me with a manual and some pregenerated proxy classes for their service (the Java file says it was generated with Axis2 1.4). I've used NetBeans 6.8 and the Axis2 plugin to create an Axis2 service. This service contains the proxy classes and the facade class, which instantiates the web service proxy and calls its methods; the facade class is exposed as a service. I've used Axis2 1.4 (at the beginning, later 1.5) and Tomcat 6.0. The first test I did was to call the facade methods from inside the project itself, and it worked. Then I created a new project with a JAX-WS web service client to call my class deployed on Axis2. At this point two strange things happened: In the Axis2 services page the third-party proxy class appeared as if it were a new service (if I try to get the WSDL, Axis raises an error). E.g. the proxy interface is named WebServiceAPI (_stub is the concrete class) and, after the first call to my service, I find a new "WebServicesAPI1272968932531_1" service inside Axis. The call obviously fails. I began to sniff SOAP messages with Wireshark and found that the messages produced when using the proxy classes directly from my facade test class differ from the messages created after being deployed on Axis. I've noticed they differ by the presence of the SOAP header in the failing message. Any help would be greatly appreciated: maybe I messed something up, or there might be some incompatibilities or version mistakes? Below I've added the signature of the third-party proxy, its implementation and the different SOAP messages: /* * WebServicesAPI.java * This file was auto-generated from WSDL * by the Apache Axis2 version: 1.4 Built on : Apr 26, 2008 (06:24:30 EDT) */ package com.ibm.eci.wsapi; public interface WebServicesAPI { public com.ibm.eci.wsapi.ArrayOfstring getWorkItemHistory( java.lang.String stateKey,java.lang.String logonID,com.ibm.eci.wsapi.RepoItemHandle workItemHandle) throws java.rmi.RemoteException,com.ibm.eci.wsapi.ExceptionException0; ...etc The concrete class is: /** * WebServicesAPIStub.java * * This file was auto-generated from WSDL * by the Apache Axis2 version: 1.4 Built on : Apr 26, 2008 (06:24:30 EDT) */ package com.ibm.eci.wsapi; /* * WebServicesAPIStub java implementation */ public class WebServicesAPIStub extends org.apache.axis2.client.Stub implements WebServicesAPI{ protected org.apache.axis2.description.AxisOperation[] _operations; ...
public com.ibm.eci.wsapi.ArrayOfstring getWorkItemHistory( java.lang.String stateKey297,java.lang.String logonID298,com.ibm.eci.wsapi.RepoItemHandle workItemHandle299) throws java.rmi.RemoteException ,com.ibm.eci.wsapi.ExceptionException0{ org.apache.axis2.context.MessageContext _messageContext = null; try{ org.apache.axis2.client.OperationClient _operationClient = _serviceClient.createClient(_operations[0].getName()); _operationClient.getOptions().setAction("\"\""); _operationClient.getOptions().setExceptionToBeThrownOnSOAPFault(true); addPropertyToOperationClient(_operationClient,org.apache.axis2.description.WSDL2Constants.ATTR_WHTTP_QUERY_PARAMETER_SEPARATOR,"&"); // create a message context _messageContext = new org.apache.axis2.context.MessageContext(); // create SOAP envelope with that payload org.apache.axiom.soap.SOAPEnvelope env = null; com.ibm.eci.wsapi.GetWorkItemHistoryE dummyWrappedType = null; env = toEnvelope(getFactory(_operationClient.getOptions().getSoapVersionURI()), stateKey297, logonID298, workItemHandle299, dummyWrappedType, optimizeContent(new javax.xml.namespace.QName("http://wsapi.eci.ibm.com", "getWorkItemHistory"))); //adding SOAP soap_headers _serviceClient.addHeadersToEnvelope(env); // set the message context with that soap envelope _messageContext.setEnvelope(env); // add the message contxt to the operation client _operationClient.addMessageContext(_messageContext); //execute the operation client _operationClient.execute(true); org.apache.axis2.context.MessageContext _returnMessageContext = _operationClient.getMessageContext( org.apache.axis2.wsdl.WSDLConstants.MESSAGE_LABEL_IN_VALUE); org.apache.axiom.soap.SOAPEnvelope _returnEnv = _returnMessageContext.getEnvelope(); java.lang.Object object = fromOM( _returnEnv.getBody().getFirstElement() , com.ibm.eci.wsapi.GetWorkItemHistoryResponseE.class, getEnvelopeNamespaces(_returnEnv)); return getGetWorkItemHistoryResponse_return((com.ibm.eci.wsapi.GetWorkItemHistoryResponseE)object); ... 
The failing SOAP message (generated by the JAX-WS client calling the Axis-deployed service) is:

POST /vbr_wsapi/services/WebServicesAPI.Endpoint HTTP/1.1
Content-Type: text/xml; charset=UTF-8
SOAPAction: ""
User-Agent: Axis2
Host: n0611049:9083
Transfer-Encoding: chunked

<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
    <wsa:To>http://n0611049:9083/vbr_wsapi/services/WebServicesAPI.Endpoint</wsa:To>
    <wsa:MessageID>urn:uuid:A31AD99897F9045E981272964443982</wsa:MessageID>
    <wsa:Action>""</wsa:Action>
  </soapenv:Header>
  <soapenv:Body>
    <ns1:initializeProps xmlns:ns1="http://wsapi.eci.ibm.com">
      <props><val>client.locale=it_IT</val></props>
    </ns1:initializeProps>
  </soapenv:Body>
</soapenv:Envelope>

HTTP/1.1 500 Internal Server Error
Content-Type: text/xml; charset=UTF-8
Content-Language: en-US
Transfer-Encoding: chunked
Connection: Close
Date: Tue, 04 May 2010 09:16:15 GMT
Server: WebSphere Application Server/7.0

<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
    <wsa:Action>http://www.w3.org/2005/08/addressing/fault</wsa:Action>
    <wsa:RelatesTo>urn:uuid:A31AD99897F9045E981272964443982</wsa:RelatesTo>
    <wsa:FaultDetail>
      <wsa:ProblemAction>
        <wsa:Action>""</wsa:Action>
      </wsa:ProblemAction>
    </wsa:FaultDetail>
  </soapenv:Header>
  <soapenv:Body>
    <soapenv:Fault xmlns:wsa="http://www.w3.org/2005/08/addressing">
      <faultcode>wsa:ActionNotSupported</faultcode>
      <faultstring>The [action] cannot be processed at the receiver.</faultstring>
      <detail />
    </soapenv:Fault>
  </soapenv:Body>
</soapenv:Envelope>

The successful call (generated by my local test class, not yet deployed to Axis):

POST /vbr_wsapi/services/WebServicesAPI.Endpoint HTTP/1.1
Content-Type: text/xml; charset=UTF-8
SOAPAction: ""
User-Agent: Axis2
Host: n0611049:9083
Transfer-Encoding: chunked

<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <ns1:initializeProps xmlns:ns1="http://wsapi.eci.ibm.com">
      <props><val>client.locale=it_IT</val></props>
    </ns1:initializeProps>
  </soapenv:Body>
</soapenv:Envelope>

HTTP/1.1 200 OK
Content-Type: text/xml; charset=UTF-8
Content-Language: en-US
Transfer-Encoding: chunked
Date: Tue, 04 May 2010 09:40:03 GMT
Server: WebSphere Application Server/7.0

<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <dlwmin:initializePropsResponse xmlns:dlwmin="http://wsapi.eci.ibm.com">
      <return>e0e40cc51ceb0adf96c582bb6e047b3d0f</return>
    </dlwmin:initializePropsResponse>
  </soapenv:Body>
</soapenv:Envelope>

POST /vbr_wsapi/services/WebServicesAPI.Endpoint HTTP/1.1
Content-Type: text/xml; charset=UTF-8
SOAPAction: ""
User-Agent: Axis2
Host: n0611049:9083
Transfer-Encoding: chunked

<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <ns1:logon xmlns:ns1="http://wsapi.eci.ibm.com">
      <stateKey>e0e40cc51ceb0adf96c582bb6e047b3d0f</stateKey>
      <systemID>----</systemID>
      <authBundle>
        <password>-----</password>
        <sealed>false</sealed>
        <username>---</username>
      </authBundle>
    </ns1:logon>
  </soapenv:Body>
</soapenv:Envelope>

HTTP/1.1 200 OK
Content-Type: text/xml; charset=UTF-8
Content-Language: en-US
Transfer-Encoding: chunked
Date: Tue, 04 May 2010 09:40:21 GMT
Server: WebSphere Application Server/7.0

<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <dlwmin:logonResponse xmlns:dlwmin="http://wsapi.eci.ibm.com">
      <return>e0e40cc51ceb0adf96c582bb6e047b3d10</return>
    </dlwmin:logonResponse>
  </soapenv:Body>
</soapenv:Envelope>

... goes on with other calls
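
The fault above is the Axis2/WebSphere endpoint rejecting the WS-Addressing headers added by the JAX-WS client (note the empty wsa:Action), while the successful trace carries no addressing headers at all. Below is a minimal, illustrative Java sketch of one way to suppress those headers on a JAX-WS client. The WSDL URL, service QName, port interface, and operation signature are assumptions reconstructed from the traces, not the real generated artifacts, so adjust them to the actual WSDL.

import java.net.URL;
import java.util.List;

import javax.jws.WebService;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;
import javax.xml.ws.soap.AddressingFeature;

public class WsapiClientSketch {

    // Stand-in for the port interface wsimport would generate from the WSDL;
    // the operation signature is a guess based on the traced SOAP body.
    @WebService(targetNamespace = "http://wsapi.eci.ibm.com")
    public interface WebServicesAPI {
        String initializeProps(List<String> props);
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical WSDL location and service QName, adjust to the real values.
        URL wsdl = new URL("http://n0611049:9083/vbr_wsapi/services/WebServicesAPI.Endpoint?wsdl");
        QName serviceName = new QName("http://wsapi.eci.ibm.com", "WebServicesAPIService");
        Service service = Service.create(wsdl, serviceName);

        // AddressingFeature(false) asks the JAX-WS runtime not to add the
        // wsa:To / wsa:MessageID / wsa:Action headers, so the outgoing request
        // looks like the plain SOAP body in the successful trace above.
        WebServicesAPI port = service.getPort(WebServicesAPI.class, new AddressingFeature(false));

        String stateKey = port.initializeProps(List.of("client.locale=it_IT"));
        System.out.println("stateKey: " + stateKey);
    }
}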

  • Understanding and Implementing a Force-based graph layout algorithm

    - by zcourts
    I'm trying to implement a force base graph layout algorithm, based on http://en.wikipedia.org/wiki/Force-based_algorithms_(graph_drawing) My first attempt didn't work so I looked at http://blog.ivank.net/force-based-graph-drawing-in-javascript.html and https://github.com/dhotson/springy I changed my implementation based on what I thought I understood from those two but I haven't managed to get it right and I'm hoping someone can help? JavaScript isn't my strong point so be gentle... If you're wondering why write my own. In reality I have no real reason to write my own I'm just trying to understand how the algorithm is implemented. Especially in my first link, that demo is brilliant. This is what I've come up with //support function.bind - https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Function/bind#Compatibility if (!Function.prototype.bind) { Function.prototype.bind = function (oThis) { if (typeof this !== "function") { // closest thing possible to the ECMAScript 5 internal IsCallable function throw new TypeError("Function.prototype.bind - what is trying to be bound is not callable"); } var aArgs = Array.prototype.slice.call(arguments, 1), fToBind = this, fNOP = function () {}, fBound = function () { return fToBind.apply(this instanceof fNOP ? this : oThis || window, aArgs.concat(Array.prototype.slice.call(arguments))); }; fNOP.prototype = this.prototype; fBound.prototype = new fNOP(); return fBound; }; } (function() { var lastTime = 0; var vendors = ['ms', 'moz', 'webkit', 'o']; for(var x = 0; x < vendors.length && !window.requestAnimationFrame; ++x) { window.requestAnimationFrame = window[vendors[x]+'RequestAnimationFrame']; window.cancelAnimationFrame = window[vendors[x]+'CancelAnimationFrame'] || window[vendors[x]+'CancelRequestAnimationFrame']; } if (!window.requestAnimationFrame) window.requestAnimationFrame = function(callback, element) { var currTime = new Date().getTime(); var timeToCall = Math.max(0, 16 - (currTime - lastTime)); var id = window.setTimeout(function() { callback(currTime + timeToCall); }, timeToCall); lastTime = currTime + timeToCall; return id; }; if (!window.cancelAnimationFrame) window.cancelAnimationFrame = function(id) { clearTimeout(id); }; }()); function Graph(o){ this.options=o; this.vertices={}; this.edges={};//form {vertexID:{edgeID:edge}} } /** *Adds an edge to the graph. If the verticies in this edge are not already in the *graph then they are added */ Graph.prototype.addEdge=function(e){ //if vertex1 and vertex2 doesn't exist in this.vertices add them if(typeof(this.vertices[e.vertex1])==='undefined') this.vertices[e.vertex1]=new Vertex(e.vertex1); if(typeof(this.vertices[e.vertex2])==='undefined') this.vertices[e.vertex2]=new Vertex(e.vertex2); //add the edge if(typeof(this.edges[e.vertex1])==='undefined') this.edges[e.vertex1]={}; this.edges[e.vertex1][e.id]=e; } /** * Add a vertex to the graph. 
If a vertex with the same ID already exists then * the existing vertex's .data property is replaced with the @param v.data */ Graph.prototype.addVertex=function(v){ if(typeof(this.vertices[v.id])==='undefined') this.vertices[v.id]=v; else this.vertices[v.id].data=v.data; } function Vertex(id,data){ this.id=id; this.data=data?data:{}; //initialize to data.[x|y|z] or generate random number for each this.x = this.data.x?this.data.x:-100 + Math.random()*200; this.y = this.data.y?this.data.y:-100 + Math.random()*200; this.z = this.data.y?this.data.y:-100 + Math.random()*200; //set initial velocity to 0 this.velocity = new Point(0, 0, 0); this.mass=this.data.mass?this.data.mass:Math.random(); this.force=new Point(0,0,0); } function Edge(vertex1ID,vertex2ID){ vertex1ID=vertex1ID?vertex1ID:Math.random() vertex2ID=vertex2ID?vertex2ID:Math.random() this.id=vertex1ID+"->"+vertex2ID; this.vertex1=vertex1ID; this.vertex2=vertex2ID; } function Point(x, y, z) { this.x = x; this.y = y; this.z = z; } Point.prototype.plus=function(p){ this.x +=p.x this.y +=p.y this.z +=p.z } function ForceLayout(o){ this.repulsion = o.repulsion?o.repulsion:200; this.attraction = o.attraction?o.attraction:0.06; this.damping = o.damping?o.damping:0.9; this.graph = o.graph?o.graph:new Graph(); this.total_kinetic_energy =0; this.animationID=-1; } ForceLayout.prototype.draw=function(){ //vertex velocities initialized to (0,0,0) when a vertex is created //vertex positions initialized to random position when created cc=0; do{ this.total_kinetic_energy =0; //for each vertex for(var i in this.graph.vertices){ var thisNode=this.graph.vertices[i]; // running sum of total force on this particular node var netForce=new Point(0,0,0) //for each other node for(var j in this.graph.vertices){ if(thisNode!=this.graph.vertices[j]){ //net-force := net-force + Coulomb_repulsion( this_node, other_node ) netForce.plus(this.CoulombRepulsion( thisNode,this.graph.vertices[j])) } } //for each spring connected to this node for(var k in this.graph.edges[thisNode.id]){ //(this node, node its connected to) //pass id of this node and the node its connected to so hookesattraction //can update the force on both vertices and return that force to be //added to the net force this.HookesAttraction(thisNode.id, this.graph.edges[thisNode.id][k].vertex2 ) } // without damping, it moves forever // this_node.velocity := (this_node.velocity + timestep * net-force) * damping thisNode.velocity.x=(thisNode.velocity.x+thisNode.force.x)*this.damping; thisNode.velocity.y=(thisNode.velocity.y+thisNode.force.y)*this.damping; thisNode.velocity.z=(thisNode.velocity.z+thisNode.force.z)*this.damping; //this_node.position := this_node.position + timestep * this_node.velocity thisNode.x=thisNode.velocity.x; thisNode.y=thisNode.velocity.y; thisNode.z=thisNode.velocity.z; //normalize x,y,z??? 
//total_kinetic_energy := total_kinetic_energy + this_node.mass * (this_node.velocity)^2 this.total_kinetic_energy +=thisNode.mass*((thisNode.velocity.x+thisNode.velocity.y+thisNode.velocity.z)* (thisNode.velocity.x+thisNode.velocity.y+thisNode.velocity.z)) } cc+=1; }while(this.total_kinetic_energy >0.5) console.log(cc,this.total_kinetic_energy,this.graph) this.cancelAnimation(); } ForceLayout.prototype.HookesAttraction=function(v1ID,v2ID){ var a=this.graph.vertices[v1ID] var b=this.graph.vertices[v2ID] var force=new Point(this.attraction*(b.x - a.x),this.attraction*(b.y - a.y),this.attraction*(b.z - a.z)) // hook's attraction a.force.x += force.x; a.force.y += force.y; a.force.z += force.z; b.force.x += this.attraction*(a.x - b.x); b.force.y += this.attraction*(a.y - b.y); b.force.z += this.attraction*(a.z - b.z); return force; } ForceLayout.prototype.CoulombRepulsion=function(vertex1,vertex2){ //http://en.wikipedia.org/wiki/Coulomb's_law // distance squared = ((x1-x2)*(x1-x2)) + ((y1-y2)*(y1-y2)) + ((z1-z2)*(z1-z2)) var distanceSquared = ( (vertex1.x-vertex2.x)*(vertex1.x-vertex2.x)+ (vertex1.y-vertex2.y)*(vertex1.y-vertex2.y)+ (vertex1.z-vertex2.z)*(vertex1.z-vertex2.z) ); if(distanceSquared==0) distanceSquared = 0.001; var coul = this.repulsion / distanceSquared; return new Point(coul * (vertex1.x-vertex2.x),coul * (vertex1.y-vertex2.y), coul * (vertex1.z-vertex2.z)); } ForceLayout.prototype.animate=function(){ if(this.animating) this.animationID=requestAnimationFrame(this.animate.bind(this)); this.draw(); } ForceLayout.prototype.cancelAnimation=function(){ cancelAnimationFrame(this.animationID); this.animating=false; } ForceLayout.prototype.redraw=function(){ this.animating=true; this.animate(); } $(document).ready(function(){ var g= new Graph(); for(var i=0;i<=100;i++){ var v1=new Vertex(Math.random(), {}) var v2=new Vertex(Math.random(), {}) var e1= new Edge(v1.id,v2.id); g.addEdge(e1); } console.log(g); var l=new ForceLayout({ graph:g }); l.redraw(); });
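
For comparison with the pseudocode on the Wikipedia page linked above, here is a compact Java sketch of the same loop with illustrative constants and a deliberately simple graph. Two details worth checking against the posted code: the pseudocode advances the position by the velocity (position := position + timestep * velocity) rather than overwriting the position with the velocity, and the kinetic-energy term uses the squared magnitude of the velocity (vx^2 + vy^2), not the square of the sum of its components.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class ForceLayoutSketch {

    static final double REPULSION = 200.0;   // Coulomb constant (illustrative)
    static final double ATTRACTION = 0.06;   // Hooke / spring constant (illustrative)
    static final double DAMPING = 0.9;
    static final double TIMESTEP = 0.05;

    static class Node {
        double x, y;      // position
        double vx, vy;    // velocity
        double mass = 1.0;
    }

    // nodes: all vertices; edges: index pairs into the node list
    static void layout(List<Node> nodes, List<int[]> edges) {
        double kineticEnergy;
        int iterations = 0;
        do {
            kineticEnergy = 0;
            for (int i = 0; i < nodes.size(); i++) {
                Node n = nodes.get(i);
                double fx = 0, fy = 0;

                // Coulomb repulsion from every other node
                for (int j = 0; j < nodes.size(); j++) {
                    if (i == j) continue;
                    Node o = nodes.get(j);
                    double dx = n.x - o.x, dy = n.y - o.y;
                    double d2 = Math.max(dx * dx + dy * dy, 0.001); // avoid division by zero
                    double coul = REPULSION / d2;
                    fx += coul * dx;
                    fy += coul * dy;
                }

                // Hooke attraction along every edge incident to this node
                for (int[] e : edges) {
                    if (e[0] != i && e[1] != i) continue;
                    Node o = nodes.get(e[0] == i ? e[1] : e[0]);
                    fx += ATTRACTION * (o.x - n.x);
                    fy += ATTRACTION * (o.y - n.y);
                }

                // velocity := (velocity + timestep * force) * damping
                n.vx = (n.vx + TIMESTEP * fx) * DAMPING;
                n.vy = (n.vy + TIMESTEP * fy) * DAMPING;

                // position := position + timestep * velocity (added to, not replaced by, the velocity)
                n.x += TIMESTEP * n.vx;
                n.y += TIMESTEP * n.vy;

                // kinetic energy uses the squared magnitude of the velocity
                kineticEnergy += n.mass * (n.vx * n.vx + n.vy * n.vy);
            }
        } while (kineticEnergy > 0.01 && ++iterations < 10000); // iteration cap as a safety net
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        List<Node> nodes = new ArrayList<>();
        for (int i = 0; i < 20; i++) {
            Node n = new Node();
            n.x = rnd.nextDouble() * 200 - 100;
            n.y = rnd.nextDouble() * 200 - 100;
            nodes.add(n);
        }
        List<int[]> edges = new ArrayList<>();
        for (int i = 1; i < nodes.size(); i++) edges.add(new int[] { i - 1, i }); // a simple chain
        layout(nodes, edges);
        System.out.printf("node 0 settled at (%.2f, %.2f)%n", nodes.get(0).x, nodes.get(0).y);
    }
}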

  • Optimizing transition/movement smoothness for a 2D flash game.

    - by Tom
    Update 6: Fenomenas suggested me to re-create everything as simple as possible. I had my doubts that this would make any difference as the algorithm remains the same, and performance did not seem to be the issue. Anyway, it was the only suggestion I got so here it is: 30 FPS: http://www.feedpostal.com/test/simple/30/SimpleMovement.html 40 FPS: http://www.feedpostal.com/test/simple/40/SimpleMovement.html 60 FPS: http://www.feedpostal.com/test/simple/60/SimpleMovement.html 100 FPS: http://www.feedpostal.com/test/simple/100/SimpleMovement.html The code: package { import flash.display.Sprite; import flash.events.Event; import flash.events.KeyboardEvent; import flash.utils.getTimer; [SWF(width="800", height="600", frameRate="40", backgroundColor="#000000")] public class SimpleMovement extends Sprite { private static const TURNING_SPEED:uint = 180; private static const MOVEMENT_SPEED:uint = 400; private static const RADIAN_DIVIDE:Number = Math.PI/180; private var playerObject:Sprite; private var shipContainer:Sprite; private var moving:Boolean = false; private var turningMode:uint = 0; private var movementTimestamp:Number = getTimer(); private var turningTimestamp:Number = movementTimestamp; public function SimpleMovement() { //step 1: create player object playerObject = new Sprite(); playerObject.graphics.lineStyle(1, 0x000000); playerObject.graphics.beginFill(0x6D7B8D); playerObject.graphics.drawRect(0, 0, 25, 50); //make it rotate around the center playerObject.x = 0 - playerObject.width / 2; playerObject.y = 0 - playerObject.height / 2; shipContainer = new Sprite(); shipContainer.addChild(playerObject); shipContainer.x = 100; shipContainer.y = 100; shipContainer.rotation = 180; addChild(shipContainer); //step 2: install keyboard hook when stage is ready addEventListener(Event.ADDED_TO_STAGE, stageReady, false, 0, true); //step 3: install rendering update poll addEventListener(Event.ENTER_FRAME, updatePoller, false, 0, true); } private function updatePoller(event:Event):void { var newTime:Number = getTimer(); //turning if (turningMode != 0) { var turningDeltaTime:Number = newTime - turningTimestamp; turningTimestamp = newTime; var rotation:Number = TURNING_SPEED * turningDeltaTime / 1000; if (turningMode == 1) shipContainer.rotation -= rotation; else shipContainer.rotation += rotation; } //movement if (moving) { var movementDeltaTime:Number = newTime - movementTimestamp; movementTimestamp = newTime; var distance:Number = MOVEMENT_SPEED * movementDeltaTime / 1000; var rAngle:Number = shipContainer.rotation * RADIAN_DIVIDE; //convert degrees to radian shipContainer.x += distance * Math.sin(rAngle); shipContainer.y -= distance * Math.cos(rAngle); } } private function stageReady(event:Event):void { //install keyboard hook stage.addEventListener(KeyboardEvent.KEY_DOWN, keyDown, false, 0, true); stage.addEventListener(KeyboardEvent.KEY_UP, keyUp, false, 0, true); } private final function keyDown(event:KeyboardEvent):void { if ((event.keyCode == 87) && (!moving)) //87 = W { movementTimestamp = getTimer(); moving = true; } if ((event.keyCode == 65) && (turningMode != 1)) //65 = A { turningTimestamp = getTimer(); turningMode = 1; } else if ((event.keyCode == 68) && (turningMode != 2)) //68 = D { turningTimestamp = getTimer(); turningMode = 2; } } private final function keyUp(event:KeyboardEvent):void { if ((event.keyCode == 87) && (moving)) moving = false; //87 = W if (((event.keyCode == 65) || (event.keyCode == 68)) && (turningMode != 0)) turningMode = 0; //65 = A, 68 = D } } } The results were 
as I expected. Absolutely no improvement. I really hope that someone has another suggestion as this thing needs fixing. Also, I doubt it's my system as I have a pretty good one (8GB RAM, Q9550 QuadCore intel, ATI Radeon 4870 512MB). Also, everyone else I asked so far had the same issue with my client. Update 5: another example of a smooth flash game just to demonstrate that my movement definitely is different! See http://www.spel.nl/game/bumpercraft.html Update 4: I traced the time before rendering (EVENT.RENDER) and right after rendering (EVENT.ENTER_FRAME), the results: rendering took: 14 ms rendering took: 14 ms rendering took: 12 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 14 ms rendering took: 12 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 14 ms rendering took: 12 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 14 ms rendering took: 14 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 24 ms rendering took: 18 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms 
rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 232 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms rendering took: 14 ms rendering took: 16 ms rendering took: 12 ms rendering took: 14 ms rendering took: 12 ms The range is 12-16 ms. During these differences, the shocking/warping/flickering movement was already going on. There is also 1 peak of 232ms, at this time there was a relatively big warp. This is however not the biggest problme, the biggest problem are the continuous small warps during normal movement. Does this give anyone a clue? Update 3: After testing, I know that the following factors are not causing my problem: Bitmap's quality - changed with photoshop to an uglier 8 colours optimized graphic, no improvement at all. Constant rotation of image while turning - disabled it, no improvement at all Browser rendering - tried to use the flash player standalone, no improvement at all I am 100% convinced that the problem lies in either my code or in my algorithm. Please, help me out. It has been almost two weeks (1 week that I asked this question on SO) now and I still have to get my golden answer. Update 1: see bottom for full flex project source and a live demo demonstrating my problem. I'm working on a 2d flash game. Player ships are created as an object: ships[id] = new GameShip(); When movement and rotation information is available, this is being directed to the corresponding ship: ships[id].setMovementMode(1); //move forward Now, within this GameShip object movement works using the "Event.ENTER_FRAME" event: addEventListener(Event.ENTER_FRAME, movementHandler); The following function is then being run: private final function movementHandler(event:Event):void { var newTimeStamp:uint = UtilLib.getTimeStamp(); //set current timeStamp var distance:Number = (newTimeStamp - movementTimeStamp) / 1000 * movementSpeed; //speed = x pixels forward every 1 second movementTimeStamp = newTimeStamp; //update old timeStamp var diagonalChange:Array = getDiagonalChange(movementAngle, distance); //the diagonal position update based on angle and distance charX += diagonalChange[0]; charY += diagonalChange[1]; if (shipContainer) { //when the container is ready to be worked with shipContainer.x = charX; shipContainer.y = charY; } } private final function getDiagonalChange(angle:Number, distance:Number):Array { var rAngle:Number = angle * Math.PI/180; //convert degrees to radian return [Math.sin(rAngle) * distance, (Math.cos(rAngle) * distance) * -1]; } When the object is no longer moving, the event listener will be removed. The same method is being used for rotation. 
Everything works almost perfectly. I've set the project's target FPS to 100 and created an FPS counter. According to the counter, the average FPS in Firefox is around 100, while the peak is 1000 and the low is 22; I think the extremes only occur during client initialization (startup). The problem is that the ship appears almost perfectly smooth, when it should be exactly that without the "almost". It's as if the ship is "flickering" very fast: you can't actually see it, but it's hard to keep your eyes focused on the object while it's moving. Also, every now and then there is a small framerate spike, as if the client skips a couple of frames, and you see it warp briefly. It is difficult to describe the real problem precisely, but in general the movement is not perfectly smooth. So, do you have any suggestions on how to make the movement or transition of objects perfectly smooth? Update 1: I re-created the client to demonstrate my problem. Please check it out. The client: http://feedpostal.com/test/MovementTest.html The ActionScript project (full source): http://feedpostal.com/test/MovementTest.rar An example of a smooth flash game (not created by me): http://www.gamesforwork.com/games/swf/Mission%20Racing_august_10th_2009.swf It took me a pretty long time to recreate this client-side version; I hope it helps with solving the problem. Please note: yes, it is actually pretty smooth, but it is definitely not smooth enough.
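
One technique commonly used against exactly this symptom (not taken from the poster's code) is to decouple the simulation from the render rate: advance the physics on a fixed timestep and interpolate the drawn position between the last two simulated states, so the 12-16 ms frame-time jitter seen in the traces no longer shows up as uneven movement. A small illustrative Java sketch of the pattern:

public class FixedTimestepSketch {

    static final double DT = 1.0 / 60.0;   // fixed simulation step in seconds
    static final double SPEED = 400.0;     // pixels per second, as in the question

    static double prevX;                   // position after the previous simulation step
    static double currX;                   // position after the latest simulation step

    // advance the simulation by exactly one fixed step
    static void simulate() {
        prevX = currX;
        currX += SPEED * DT;
    }

    // blend the two most recent simulated positions for drawing
    static double renderX(double alpha) {
        return prevX * (1 - alpha) + currX * alpha;
    }

    public static void main(String[] args) throws InterruptedException {
        double accumulator = 0;
        long last = System.nanoTime();

        for (int frame = 0; frame < 10; frame++) {
            long now = System.nanoTime();
            accumulator += (now - last) / 1e9;   // real time elapsed since the last frame
            last = now;

            // run zero or more fixed steps, depending on how much real time passed
            while (accumulator >= DT) {
                simulate();
                accumulator -= DT;
            }

            // draw using the leftover fraction of a step, so motion stays even
            System.out.printf("frame %d renders x = %.2f%n", frame, renderX(accumulator / DT));

            Thread.sleep(14); // stand-in for an uneven frame delay (12-16 ms in the trace)
        }
    }
}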

  • How to use Koala Facebook Graph API?

    - by reko
    I am a Rails newbie. I want to use Koala's Graph API. In my controller @graph = Koala::Facebook::API.new('myFacebookAccessToken') @hello = @graph.get_object("my.Name") When I do this, I get something like this { "id"=>"123456", "name"=>"First Middle Last", "first_name"=>"First", "middle_name"=>"Middle", "last_name"=>"Last", "link"=>"http://www.facebook.com/MyName", "username"=>"my.name", "birthday"=>"12/12/1212", "hometown"=>{"id"=>"115200305133358163", "name"=>"City, State"}, "location"=>{"id"=>"1054648928202133335", "name"=>"City, State"}, "bio"=>"This is my awesome Bio.", "quotes"=>"I am the master of my fate; I am the captain of my soul. - William Ernest Henley\r\n\r\n"Don't go around saying the world owes you a living. The world owes you nothing. It was here first.\" - Mark Twain", "work"=>[{"employer"=>{"id"=>"100751133333", "name"=>"Company1"}, "position"=>{"id"=>"105763693332790962", "name"=>"Position1"}, "start_date"=>"2010-08", "end_date"=>"2011-07"}], "sports"=>[{"id"=>"104019549633137", "name"=>"Sport1"}, {"id"=>"103992339636529", "name"=>"Sport2"}], "favorite_teams"=>[{"id"=>"105467226133353743", "name"=>"Fav1"}, {"id"=>"19031343444432369133", "name"=>"Fav2"}, {"id"=>"98027790139333", "name"=>"Fav3"}, {"id"=>"104055132963393331", "name"=>"Fav4"}, {"id"=>"191744431437533310", "name"=>"Fav5"}], "favorite_athletes"=>[{"id"=>"10836600585799922", "name"=>"Fava1"}, {"id"=>"18995689436787722", "name"=>"Fava2"}, {"id"=>"11156342219404022", "name"=>"Fava4"}, {"id"=>"11169998212279347", "name"=>"Fava5"}, {"id"=>"122326564475039", "name"=>"Fava6"}], "inspirational_people"=>[{"id"=>"16383141733798", "name"=>"Fava7"}, {"id"=>"113529011990793335", "name"=>"fava8"}, {"id"=>"112032333138809855566", "name"=>"Fava9"}, {"id"=>"10810367588423324", "name"=>"Fava10"}], "education"=>[{"school"=>{"id"=>"13478880321332322233663", "name"=>"School1"}, "type"=>"High School", "with"=>[{"id"=>"1401052755", "name"=>"Friend1"}]}, {"school"=>{"id"=>"11482777188037224", "name"=>"School2"}, "year"=>{"id"=>"138383069535219", "name"=>"2005"}, "type"=>"High School"}, {"school"=>{"id"=>"10604484633093514", "name"=>"School3"}, "year"=>{"id"=>"142963519060927", "name"=>"2010"}, "concentration"=>[{"id"=>"10407695629335773", "name"=>"c1"}], "type"=>"College"}, {"school"=>{"id"=>"22030497466330708", "name"=>"School4"}, "degree"=>{"id"=>"19233130157477979", "name"=>"c3"}, "year"=>{"id"=>"201638419856163", "name"=>"2011"}, "type"=>"Graduate School"}], "gender"=>"male", "interested_in"=>["female"], "relationship_status"=>"Single", "religion"=>"Religion1", "political"=>"Political1", "email"=>"[email protected]", "timezone"=>-8, "locale"=>"en_US", "languages"=>[{"id"=>"10605952233759137", "name"=>"English"}, {"id"=>"10337617475934611", "name"=>"L2"}, {"id"=>"11296944428713061", "name"=>"L3"}], "verified"=>true, "updated_time"=>"2012-02-24T04:18:05+0000" } How do I show this entire hash in the view in a good format? This is what I did from what ever I learnt.. In my view <% @hello.each do |key, value| %> <li><%=h "#{key.to_s} : #{value.to_s}" %></li> <% end %> This will get the entire thing converted to a list... It works awesome if its just one key.. but how to work with multiple keys and show only the information... something like when it outputs hometown : City, State rather than something like hometown : {"id"=>"115200305133358163", "name"=>"City, State"} Also for education if I just say education[school][name] to display list of schools attended? 
The error I get is "can't convert String into Integer". I also tried this in my controller, but I get the same error: @fav_teams = @hello["favorite_teams"]["name"]. Also, how can I save all of these to the database, e.g. just the list of all schools rather than their id numbers? Update: The way I plan to save to my database is: say for a user model, I want to save :facebook_id, :facebook_name, :facebook_firstname, ...., :facebook_hometown. Here I only want to save the name; when it comes to education, I want to save school, concentration, and type. I have no idea how to achieve this. Looking forward to your help, thanks!
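
The underlying task is walking a nested hash/array structure and pulling out the name fields: hometown is a single hash, so @hello["hometown"]["name"] works, while education and favorite_teams are arrays, which is why indexing them with a string key raises "can't convert String into Integer". A small illustrative sketch of the same traversal, written here in Java against a hand-built map/list structure shaped like the response above (the sample values are copied from it; the class name is made up):

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GraphResponseSketch {

    public static void main(String[] args) {
        // Hand-built structure shaped like the Graph API response shown above.
        Map<String, Object> hello = new LinkedHashMap<>();
        hello.put("hometown", Map.of("id", "115200305133358163", "name", "City, State"));
        hello.put("education", List.of(
                Map.of("school", Map.of("name", "School1"), "type", "High School"),
                Map.of("school", Map.of("name", "School3"), "type", "College")));

        // A value nested one level down: hometown -> name
        Map<?, ?> hometown = (Map<?, ?>) hello.get("hometown");
        System.out.println("hometown : " + hometown.get("name"));

        // A list of records: education -> each entry -> school -> name.
        // Indexing the list itself with a string key (education["school"]) is what
        // raises "can't convert String into Integer" in Ruby, because education is an Array.
        List<?> education = (List<?>) hello.get("education");
        for (Object entry : education) {
            Map<?, ?> school = (Map<?, ?>) ((Map<?, ?>) entry).get("school");
            System.out.println("school : " + school.get("name"));
        }
    }
}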
