Search Results

Search found 3766 results on 151 pages for 'singleton scope'.

Page 46/151 | < Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >

  • How to execute "eval" without writing "eval" in JavaScript

    - by Infinity
    Here's the deal: we have a big JS library that we want to compress, but YUI Compressor doesn't fully compress the code if it finds an "eval" statement, out of fear that it will break something else. That's great and all, but we know exactly what is getting eval'd, so we don't want it to get conservative just because there's an eval statement in MooTools JSON.decode. So basically the question is: is there any alternative (maybe creative) way of writing an expression that returns the eval function? I tried a few, but no dice: window['eval'](stuff); window['e'+'val'](stuff); // stuff runs in the global scope, we need local scope this['eval'](stuff); // this.eval is not a function (new Function( "with(this) { return " + '(' + stuff + ')' + "}"))() // global scope again Any ideas? Thx

    Read the article

  • WCF InProcFactory error

    - by Terence Lewis
    I'm using IDesign's ServiceModelEx assembly to provide additional functionality over and above what's available in standard WCF. In particular I'm making use of InProcFactory to host some WCF services within my process using Named Pipes. However, my process also declares a TCP endpoint in its configuration file, which I host and open when the process starts. At some later point, when I try to host a second instance of this service using the InProcFactory through the named pipe (from a different service in the same process), for some reason it picks up the TCP endpoint in the configuration file and tries to re-host this endpoint, which throws an exception as the TCP port is already in use from the first hosting. Here is the relevant code from InProcFactory.cs in ServiceModelEx: static HostRecord GetHostRecord<S,I>() where I : class where S : class,I { HostRecord hostRecord; if(m_Hosts.ContainsKey(typeof(S))) { hostRecord = m_Hosts[typeof(S)]; } else { ServiceHost<S> host; if(m_Singletons.ContainsKey(typeof(S))) { S singleton = m_Singletons[typeof(S)] as S; Debug.Assert(singleton != null); host = new ServiceHost<S>(singleton,BaseAddress); } else { host = new ServiceHost<S>(BaseAddress); } string address = BaseAddress.ToString() + Guid.NewGuid().ToString(); hostRecord = new HostRecord(host,address); m_Hosts.Add(typeof(S),hostRecord); host.AddServiceEndpoint(typeof(I),Binding,address); if(m_Throttles.ContainsKey(typeof(S))) { host.SetThrottle(m_Throttles[typeof(S)]); } // This line fails because it tries to open two endpoints, instead of just the named-pipe one host.Open(); } return hostRecord; }
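    One possible direction, sketched under assumptions (the ConfigFreeServiceHost type below is mine, not part of ServiceModelEx): ServiceHost loads every endpoint declared in the <system.serviceModel> config section during ApplyConfiguration, so a host subclass that skips that step will open only the endpoints added in code, such as the named-pipe one:

        // Hypothetical helper: suppresses config-file endpoints so only
        // programmatically added endpoints are opened by host.Open().
        class ConfigFreeServiceHost : ServiceHost
        {
            public ConfigFreeServiceHost(Type serviceType, params Uri[] baseAddresses)
                : base(serviceType, baseAddresses) { }

            public ConfigFreeServiceHost(object singletonInstance, params Uri[] baseAddresses)
                : base(singletonInstance, baseAddresses) { }

            protected override void ApplyConfiguration()
            {
                // Deliberately not calling base.ApplyConfiguration(): the TCP
                // endpoint from the config file is never loaded here, so the
                // port-in-use exception cannot be triggered by this host.
                // Any behaviors normally read from config must be added in code.
            }
        }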

    Read the article

  • Resolution Problem with HttpRequestScoped in Autofac

    - by Page Brooks
    I'm trying to resolve the AccountController in my application, but it seems that I have a lifetime scoping issue. builder.Register(c => new MyDataContext(connectionString)).As<IDatabase>().HttpRequestScoped(); builder.Register(c => new UnitOfWork(c.Resolve<IDatabase>())).As<IUnitOfWork>().HttpRequestScoped(); builder.Register(c => new AccountService(c.Resolve<IDatabase>())).As<IAccountService>().InstancePerLifetimeScope(); builder.Register(c => new AccountController(c.Resolve<IAccountService>())).InstancePerDependency(); I need MyDataContext and UnitOfWork to be scoped at the HttpRequestLevel. When I try to resolve the AccountController, I get the following error: No scope matching the expression 'value(Autofac.Builder.RegistrationBuilder`3+<c__DisplayClass0[...]).lifetimeScopeTag.Equals(scope.Tag)' is visible from the scope in which the instance was requested. Do I have my dependency lifetimes set up incorrectly?
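    For what it's worth, a sketch of the usual resolution path (assuming the Autofac.Integration.Web ContainerProvider from that era, whose RequestLifetime property exposes the per-request scope): HttpRequestScoped registrations are only visible from a lifetime scope tagged as the HTTP-request scope, so the controller has to be resolved from there rather than from the root container:

        // Resolve via the per-request lifetime scope, where the
        // HttpRequestScoped IDatabase and IUnitOfWork are visible.
        var requestScope = _containerProvider.RequestLifetime;
        var controller = requestScope.Resolve<AccountController>();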

    Read the article

  • Resolving HttpRequestScoped Instances outside of a HttpRequest in Autofac

    - by Page Brooks
    Suppose I have a dependency that is registered as HttpRequestScoped so there is only one instance per request. How could I resolve a dependency of the same type outside of an HttpRequest? For example: // Global.asax.cs Registration builder.Register(c => new MyDataContext(connString)).As<IDatabase>().HttpRequestScoped(); _containerProvider = new ContainerProvider(builder.Build()); // This event handler gets fired outside of a request // when a cached item is removed from the cache. public void CacheItemRemoved(string k, object v, CacheItemRemovedReason r) { // I'm trying to resolve like so, but this doesn't work... var dataContext = _containerProvider.ApplicationContainer.Resolve<IDatabase>(); // Do stuff with data context. } The above code throws a DependencyResolutionException when it executes the CacheItemRemoved handler: No scope matching the expression 'value(Autofac.Builder.RegistrationBuilder`3+<c__DisplayClass0[MyApp.Core.Data.MyDataContext,Autofac.Builder.SimpleActivatorData,Autofac.Builder.SingleRegistrationStyle]).lifetimeScopeTag.Equals(scope.Tag)' is visible from the scope in which the instance was requested.
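    A hedged sketch of one workaround: since no scope carrying the HTTP-request tag exists inside the cache callback, open a nested lifetime scope with that tag yourself (the "httpRequest" tag value is an assumption; Autofac's web integration tags its request scope internally):

        public void CacheItemRemoved(string k, object v, CacheItemRemovedReason r)
        {
            // A scope carrying the request tag makes HttpRequestScoped
            // registrations resolvable; it is disposed at the end of the block.
            using (var scope = _containerProvider.ApplicationContainer
                                   .BeginLifetimeScope("httpRequest")) // tag assumed
            {
                var dataContext = scope.Resolve<IDatabase>();
                // Do stuff with data context.
            }
        }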

    Read the article

  • static void classes

    - by ivor
    Hello, I'm tidying up some of my code with the correct scope on some methods and attributes. (I have two classes, and at the moment a number of members are declared public just to get things working, but I feel I should make them private where possible, for better practice.) When working in Eclipse, after I change one method from public to private, it suggests fixing the resulting error by dropping the scope modifier altogether, so the method declaration reads just "static void" instead of public/private static void. Is it better to have no modifier rather than private or public, or is the default scope equivalent to public anyway? Thanks

    Read the article

  • How can I make rake assets:precompile build to the right location?

    - by Micah Gideon Modell
    I'm deploying my Rails 3 app to a subdirectory of my hosting service, and therefore I'm using both a scope statement in my routes.rb and a config.assets.prefix. However, this causes rake assets:precompile to build into public/sapa/assets instead of just into public/assets (since my prefix simply accounts for the scope). I can copy the files to the right location and everything will work, but I'd love for someone to tell me a better way (one must exist, right?). /config/application.rb config.assets.prefix = "/sapa/assets" /config/routes.rb scope "sapa" do … end Any help would be appreciated.

    Read the article

  • Adding a row to an existing datatable in JSF

    - by shyamb
    Hi, I have a requirement to change an existing JSF 1.1 project: I need to add an additional row to a datatable on the click of a button. Currently the datatable loads 3 rows from the backing bean, and this new button should add an additional row to the datatable on each click. Using the suggestion provided by http://balusc.blogspot.com/2006/06/using-datatables.html I was able to display the additional row on the UI, but I could not save the new data back to the database because the backing bean is in request scope, and I cannot change the scope of this bean as it would create other issues. Can somebody provide me with a solution to display the new row and also save the data back to the database while the backing bean is in request scope? Thanks Shyam

    Read the article

  • Using classes in PHP to store functions

    - by Artur
    Hello! I need some advice on my PHP code organisation. I need classes where I can store different functions, and I need access to those classes in different parts of my project. Creating an object of these classes each time is too wasteful, so I've found two ways to solve it. First is to use static methods, like class car { public static $wheels_count = 4; public static function change_wheels_count($new_count) { car::$wheels_count = $new_count; } } Second is to use the singleton pattern: class Example { // Hold an instance of the class private static $instance; // The singleton method public static function singleton() { if (!isset(self::$instance)) { $c = __CLASS__; self::$instance = new $c; } return self::$instance; } } But the author of the article about singletons said that if I have too many singletons in my code I should restructure it. But I need a lot of such classes. Can anybody explain the pros and cons of each way? Which is mostly used? Are there more beautiful ways?

    Read the article

  • Asymptotic runtime of list-to-tree function

    - by Deestan
    I have a merge function which takes time O(log n) to combine two trees into one, and a listToTree function which converts an initial list of elements to singleton trees and repeatedly calls merge on each successive pair of trees until only one tree remains. Function signatures and relevant implementations are as follows: merge :: Tree a -> Tree a -> Tree a --// O(log n) where n is size of input trees singleton :: a -> Tree a --// O(1) empty :: Tree a --// O(1) listToTree :: [a] -> Tree a --// Supposedly O(n) listToTree = listToTreeR . (map singleton) listToTreeR :: [Tree a] -> Tree a listToTreeR [] = empty listToTreeR (x:[]) = x listToTreeR xs = listToTreeR (mergePairs xs) mergePairs :: [Tree a] -> [Tree a] mergePairs [] = [] mergePairs (x:[]) = [x] mergePairs (x:y:xs) = merge x y : mergePairs xs This is a slightly simplified version of exercise 3.3 in Purely Functional Data Structures by Chris Okasaki. According to the exercise, I shall now show that listToTree takes O(n) time. Which I can't. :-( There are trivially ceil(log n) recursive calls to listToTreeR, meaning ceil(log n) calls to mergePairs. The running time of mergePairs is dependent on the length of the list, and the sizes of the trees. The length of the list is 2^h-1, and the sizes of the trees are log(n/(2^h)), where h=log n is the first recursive step, and h=1 is the last recursive step. Each call to mergePairs thus takes time (2^h-1) * log(n/(2^h)) I'm having trouble taking this analysis any further. Can anyone give me a hint in the right direction?
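    A sketch of the standard argument in LaTeX (assuming, per the merge signature above, that merging two trees of size 2^(h-1) costs O(h)): after round h of mergePairs there are n/2^h trees of size 2^h each, so round h performs n/2^h merges at cost O(h) apiece, and the per-round costs decay geometrically:

        % Total work across all \lceil \log_2 n \rceil rounds of mergePairs:
        T(n) \;=\; \sum_{h=1}^{\lceil \log_2 n \rceil} \frac{n}{2^h}\, c\, h
             \;\le\; c\, n \sum_{h=1}^{\infty} \frac{h}{2^h}
             \;=\; 2\, c\, n \;=\; O(n)
        % Key step: \sum_{h \ge 1} h/2^h converges (to 2), so the total is
        % linear rather than the naive O(n log n) bound.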

    Read the article

  • watchpoint in GDB

    - by Tim
    Hi, If I set a watchpoint for a variable local to the current scope, it is automatically deleted when execution goes out of that scope. Is there any way to set it once and have it kept alive automatically whenever the same scope is entered again? Is there any way to set a conditional watchpoint, like "watch var1 if var1==0"? In my case, the condition doesn't work: gdb stops whenever var1's value is changed, instead of only when "var1==0" becomes true. My gdb is GNU gdb 6.8-debian. Thanks and regards!

    Read the article

  • C# going nuts when I declare variables with the same name as the ones in a lambda

    - by Rubys
    I have the following code (it generates a quadratic function given a, b, and c): Func<double, double, double, Func<double, double>> funcGenerator = (a, b, c) => f => f * f * a + b * f + c; Up until now, lovely. But then, if I try to declare a variable named a, b, c or f, Visual Studio pops up a "A local variable named 'f' could not be declared at this scope because it would give a different meaning to 'f', which is used in a child scope." Basically, this fails, and I have no idea why, because a child scope doesn't even make any sense to me here. Func<double, double, double, Func<double, double>> funcGenerator = (a, b, c) => f => f * f * a + b * f + c; var f = 3; // Fails var d = 3; // Fine What's going on here?
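    A minimal sketch of the rule at work (the compiler error is CS0136; the rename shown is the usual fix): the method body is the parent scope and the lambda body is a child scope, so a local f declared after the lambda would give the simple name f two meanings in overlapping scopes, even though the lambda appears first:

        Func<double, double, double, Func<double, double>> funcGenerator =
            (a, b, c) => f => f * f * a + b * f + c;

        // var f = 3;  // error CS0136: 'f' would mean something different here
        //             // than the 'f' used in the lambda's child scope above
        var d = 3;     // fine: 'd' is not used in any child scope

        // Renaming the lambda parameter removes the conflict entirely:
        Func<double, double, double, Func<double, double>> quadratic =
            (a, b, c) => x => x * x * a + b * x + c;
        var f = 3;     // now legal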

    Read the article

  • What's the UNC path for the local computer from a remote machine?

    - by KaluSingh Gabbar
    I am writing a small utility program in IronPython to install applications on a remote machine using ManagementClass, which uses WMI. Now, the script installs an application on Machine_B from Machine_A; it works fine as long as the msi file is on a local drive of the target machine (Machine_B, in this case). I want to be able to do the same thing with the .msi file being on the host (Machine_A) machine. network_scope = r"\\%Machine_B\root\cimv2" scope = ManagementScope(network_scope, options) scope.Connect() mp = ManagementPath("Win32_Product") ogo = ObjectGetOptions() mc = ManagementClass(scope, mp, ogo) inParams = mc.GetMethodParameters ("Install") inParams["PackageLocation"] = r"C:\installs\python-3.1.1.msi" inParams["AllUsers"] = True retVal = mc.InvokeMethod ("Install", inParams, None) print retVal ["ReturnValue"].ToString() PROBLEM: [Machine A] --- where I am running the script, and where I want to host the .msi file [Machine B] --- where I want to install the application So, how can I define the UNC path for the local machine? What should inParams["PackageLocation"] be?
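    Since ManagementClass here is the .NET System.Management API, a C# sketch of the same call with a UNC path pointing back at the hosting machine (the "installs" share name is an assumption). One caveat worth checking: Win32_Product.Install runs under the remote machine's SYSTEM account, which typically cannot authenticate to a network share, so the share may need to grant read access to the machine account:

        using System.Management;

        var scope = new ManagementScope(@"\\Machine_B\root\cimv2", new ConnectionOptions());
        scope.Connect();

        var mc = new ManagementClass(scope, new ManagementPath("Win32_Product"),
                                     new ObjectGetOptions());
        var inParams = mc.GetMethodParameters("Install");
        // UNC path back to the machine hosting the .msi (share name assumed):
        inParams["PackageLocation"] = @"\\Machine_A\installs\python-3.1.1.msi";
        inParams["AllUsers"] = true;
        var outParams = mc.InvokeMethod("Install", inParams, null);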

    Read the article

  • Is It Possible To Spring Autowire the same Instance of a prototype scoped class in two places

    - by Mark
    Hi. (I changed the example to better express the situation.) I am using Spring 2.5 and have the following situation: @Component @Scope("prototype") class Foo { } class A { @Autowired Foo fooA; } class B { @Autowired Foo fooB; } class C { @Autowired Foo fooC; } I am trying to understand if there is some way to use @Autowired and bind the same instance of Foo onto fooA and fooB while binding a different instance to fooC. I understand that it will work if the scope of Foo is singleton, but I am wondering if there is a way to achieve the same goal while using a prototype scope. Also, please explain: is this a correct usage of the autowiring concept, or am I trying to abuse the Spring framework's purpose?

    Read the article

  • changing css properties via javascript

    - by tic
    I need a function to change the appearance of some elements in my HTML page "on the fly", but I have not been able to manage it. The problem is that I cannot use a command like document.write ('body {background-color: #cccccc;}'); because I need to make the changes effective when the page is already loaded, using a link like <a onmouseclick="Clicker(1)" href="#">clic</a>, and I cannot use a command like document.body.style.background='#cccccc'; because I do not know whether it can be applied to other, less simple cases: I need to change the appearance of elements such as td.myclass, or sibling elements such as th[scope=col]+th[scope=col]+th[scope=col]. How can I do it? Thanks!

    Read the article

  • How can I redirect the stdout of ironpython in C#?

    - by Begtostudy
    public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void button3_Click(object sender, EventArgs e) { try { var strExpression = @" import sys sys.stdout=my.write print 'ABC' "; var engine = Python.CreateEngine(); var scope = engine.CreateScope(); var sourceCode = engine.CreateScriptSourceFromString(strExpression); scope.SetVariable("my", this); var actual = sourceCode.Execute(scope); textBox1.Text += actual; } catch (System.Exception ex) { MessageBox.Show(ex.ToString()); } } public void write(string s) { textBox1.Text += s; } } But the exception says there is no 'write'. Where am I going wrong? Thanks!
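    sys.stdout must be a file-like object exposing a write method, and sys.stdout=my.write assigns the bare method instead (assigning sys.stdout = my, the form itself, may also work since it exposes write). A hedged alternative that sidesteps sys.stdout entirely: the DLR hosting API can redirect the engine's output to any Stream, which the form then reads back. A minimal sketch:

        // Redirect IronPython's standard output to a MemoryStream via the
        // hosting API, then surface whatever the script printed.
        var engine = Python.CreateEngine();
        var output = new MemoryStream();
        engine.Runtime.IO.SetOutput(output, Encoding.UTF8);

        engine.Execute("print 'ABC'");

        string captured = Encoding.UTF8.GetString(output.ToArray());
        textBox1.Text += captured; // "ABC" plus a trailing newline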

    Read the article

  • scala xml rewrite rule (or, simple pattern help)

    - by williamstw
    I'm missing some fairly simple syntax, I gather. I'm trying to rewrite an element's label to something else and keep everything else intact. object htmlRule extends RewriteRule { override def transform(n: Node): Seq[Node] = n match { case Elem(prefix, "document", attribs, scope, child@_*) => Elem(prefix, "html", attribs, scope, child) case other => other } } Now, I ask for an explanation of two things: 1) What exactly does "child@_*" mean in plain English? 2) How can I capture the value of "child@_*" and just let it pass right through to the new element? Currently, I get the following error, which makes sense. [error] found : Seq[scala.xml.Node] [error] required: scala.xml.Node [error] Elem(prefix, "html", attribs, scope, child) I'm not wedded to this either, so if there's a better way to simply change the element name of a specific node, let's hear it... Thanks, --tim

    Read the article

  • Spring FactoryBean and scopes working together

    - by TTar
    I'd like to use FactoryBeans and scopes together. Specifically, I'd like the object created and returned by a FactoryBean to be placed into a specified (perhaps custom) scope. The issue is that the following: <bean class="x.y.z.TestFactoryBean" scope="test" /> results in the FactoryBean itself being scoped, and has somewhat unpredictable behaviour on the object created by the factory. I understand why this is; the factory itself is a first-class Spring-managed bean and has its own lifecycle. However, I can't find a way to specify that the object returned from the factory should itself be scoped. On the other hand, this does exactly what I want (as long as TestFactoryBean does NOT implement the FactoryBean interface): <bean class="x.y.z.TestFactoryBean" name="testFactory" /> <bean class="x.y.z.TestBean" factory-bean="testFactory" factory-method="getObject" scope="test" /> So the real question is, how can I make Spring behave like it does in the 2nd example above, but using real FactoryBeans?

    Read the article

  • How to define an angular directive inside an angular directive's link function?

    - by user2316667
    I want to create an angular directive inside of a link function; however, the directive created cannot be compiled. See this JSFiddle: http://jsfiddle.net/v47uvsj5/5/ Uncommenting this directive in the global space works as expected. app.directive('test', function () { return { templateUrl: 'myform', // wraps script tag with id 'myform' restrict: 'E', require: "^mydir", replace: true, scope: { }, link: function (scope, element, attrs, mydirCtrl) { scope.remove = function () { element.remove(); mydirCtrl.remove(); } } } }); But the exact same code inside the link function fails. The reason I want to do this is that I want the user (who is going to be myself) to be able to provide only a script tag's id, via an id attribute, to my main directive, which will in turn create a 'wrapper' directive with a 'remove' method. This way, in the script tag, all one needs to do is implement the 'remove'.

    Read the article

  • Developer Dashboard in SharePoint 2010

    - by jcortez
    Introducing the Developer Dashboard As a SharePoint developer (or IT Professional), how many times have you had the pleasure of figuring out why a particular page on your site is taking too long to render? I'm sure one of the techniques you have employed in troubleshooting is the process of elimination - removing individual web parts from the page hoping to identify which web part is misbehaving. One of the new features of SharePoint 2010 is the Developer Dashboard. This dashboard provides tracing and performance information that can be useful when you are trying to troubleshoot pages that are loading too slowly. The Developer Dashboard is turned off by default, and I'll go over 3 different ways to display it. (The original post shows a screenshot of the dashboard rendered at the bottom of a page.) You can see on the left side the different events that fired during the page processing pipeline and how long these events took. This is where you will see individual web parts being processed and how long each took to complete (obviously the kind of processing depends on what the web part does). On the right side you see the different database calls issued through the SharePoint Object Model to process the page. You will notice that each of these database queries is actually a hyperlink, and clicking on it displays a pop-up window that shows the actual SQL Query Text, the Call Stack that triggered the database call, and the IO statistics of that query.
    Enabling the Developer Dashboard Option 1: Managed Code (the original listing was a screenshot; a reconstruction appears at the end of this excerpt). The Developer Dashboard is a farm-wide setting, and this code won't work if it is used within a web part hosted on any non-Central Admin site. The SPDeveloperDashboardLevel enum has three possible values: On, Off, and OnDemand. Setting it to On will always display the Developer Dashboard at the bottom of the page. Setting it to Off will hide the Developer Dashboard. Setting it to OnDemand will add an icon at the top right corner of the page where a Site Collection Admin can toggle the display of the Developer Dashboard for a particular site collection. In my opinion, OnDemand is the best setting when troubleshooting a page or during development, since a Site Collection Admin can turn it on or off, and for a particular site only. The first cool thing about this is that the Site Collection Admin who turned it on will be the only one to see the Developer Dashboard output. Everyday users won't see the Developer Dashboard output even if it was turned on by a Site Collection Admin. If you need more flexibility on who gets to see the Developer Dashboard output, you can set SPDeveloperDashboardSettings.RequiredPermissions to control which group of users will have permission to see the output.
    Option 2: Using stsadm Using stsadm, you can run the following command to configure the Developer Dashboard: STSADM -o setproperty -pn developer-dashboard -pv OnDemand To successfully execute this command, be sure that you are running as a Farm Admin.
    Option 3: Using PowerShell For all scripts in SharePoint 2010, I prefer writing them as PowerShell scripts. Though the stsadm command is less verbose, the PowerShell equivalent is pretty straightforward and uses the SharePoint Object Model. You can of course parameterize the value that gets assigned to the DisplayLevel property so you can turn it On, Off or OnDemand depending on the parameter.
    Events and the Developer Dashboard Now, don't assume that all the code inside your web part or page will show up in the Developer Dashboard complete with all the great troubleshooting information. Only a finite set of events is monitored by default (for a web part, that means events in the base web part class). Let's say you have a click event that could take some time, for example a web service call, and you want to include troubleshooting information for this event in the Developer Dashboard. Enter SPMonitoredScope, which is also a new feature in SharePoint 2010. In SharePoint 2010, everything is executed within a "Monitored Scope", and each scope has a set of "Monitors" that measure and count calls and timings, which appear in the Developer Dashboard. The way to get your custom code included in the Developer Dashboard is to wrap it inside a new monitored scope (the original listing was a screenshot; a sketch follows below). Such code would include your new scope "My long web service call" in the Developer Dashboard and would log the time it took to complete processing. In my opinion, wrapping your custom code in an SPMonitoredScope is a SharePoint development best practice, since it gives you visibility and a better understanding of the performance of your components.
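    The two listings referenced above were screenshots in the original post; below is a hedged reconstruction of both, using the documented SharePoint 2010 API (SPWebService and SPMonitoredScope; the slow call is a hypothetical stand-in):

        // Option 1, reconstructed: set the farm-wide Developer Dashboard level.
        SPDeveloperDashboardSettings settings =
            SPWebService.ContentService.DeveloperDashboardSettings;
        settings.DisplayLevel = SPDeveloperDashboardLevel.OnDemand;
        settings.Update();

        // SPMonitoredScope, reconstructed: wrap slow custom code so its timing
        // shows up as a named scope in the Developer Dashboard.
        using (new SPMonitoredScope("My long web service call"))
        {
            CallSomeSlowWebService(); // hypothetical long-running call
        }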

    Read the article

  • Dependency Injection in ASP.NET MVC NerdDinner App using Unity 2.0

    - by shiju
    In my previous post, Dependency Injection in ASP.NET MVC NerdDinner App using Ninject, we did dependency injection in the NerdDinner application using Ninject. In this post, I demonstrate how to apply Dependency Injection in the ASP.NET MVC NerdDinner App using Microsoft Unity Application Block (Unity) v 2.0.
    Unity 2.0 Unity 2.0 is available on Codeplex at http://unity.codeplex.com . In earlier versions of Unity, the ObjectBuilder generic dependency injection mechanism was distributed as a separate assembly; it is now integrated into the Unity core assembly, so you no longer need to reference the ObjectBuilder assembly in your applications. Two additional built-in lifetime managers - HierarchicalLifetimeManager and PerResolveLifetimeManager - have been added to Unity 2.0.
    Dependency Injection in NerdDinner using Unity In my Ninject post on NerdDinner, we discussed the interfaces and concrete types of the NerdDinner application and how to inject dependencies into controller constructors. The following steps will configure Unity 2.0 to apply controller injection in the NerdDinner application.
    Step 1 – Add a reference to the Unity Application Block Open the NerdDinner solution and add references to Microsoft.Practices.Unity.dll and Microsoft.Practices.Unity.Configuration.dll. You can download Unity from http://unity.codeplex.com .
    Step 2 – Controller Factory for Unity The controller factory is responsible for creating controller instances. We extend the built-in default controller factory with our own factory to make Unity work with ASP.NET MVC. public class UnityControllerFactory : DefaultControllerFactory { protected override IController GetControllerInstance(RequestContext reqContext, Type controllerType) { IController controller; if (controllerType == null) throw new HttpException( 404, String.Format( "The controller for path '{0}' could not be found " + "or it does not implement IController.", reqContext.HttpContext.Request.Path)); if (!typeof(IController).IsAssignableFrom(controllerType)) throw new ArgumentException( string.Format( "Type requested is not a controller: {0}", controllerType.Name), "controllerType"); try { controller = MvcUnityContainer.Container.Resolve(controllerType) as IController; } catch (Exception ex) { throw new InvalidOperationException(String.Format( "Error resolving controller {0}", controllerType.Name), ex); } return controller; } } public static class MvcUnityContainer { public static IUnityContainer Container { get; set; } }
    Step 3 – Register Types and Set Controller Factory private void ConfigureUnity() { //Create UnityContainer IUnityContainer container = new UnityContainer() .RegisterType<IFormsAuthentication, FormsAuthenticationService>() .RegisterType<IMembershipService, AccountMembershipService>() .RegisterInstance<MembershipProvider>(Membership.Provider) .RegisterType<IDinnerRepository, DinnerRepository>(); //Set container for Controller Factory MvcUnityContainer.Container = container; //Set Controller Factory as UnityControllerFactory ControllerBuilder.Current.SetControllerFactory( typeof(UnityControllerFactory)); } Unity 2.0 provides a fluent interface for type configuration, so you can now call all the methods in a single statement. The Unity configuration specified in the ConfigureUnity method above says to inject an instance of DinnerRepository when there is a request for IDinnerRepository, to inject an instance of FormsAuthenticationService when there is a request for IFormsAuthentication, and to inject an instance of AccountMembershipService when there is a request for IMembershipService. The AccountMembershipService class has a dependency on the ASP.NET Membership provider, so we configure it to inject the instance of the Membership Provider. After registering the types, we set UnityControllerFactory as the current controller factory. //Set container for Controller Factory MvcUnityContainer.Container = container; //Set Controller Factory as UnityControllerFactory ControllerBuilder.Current.SetControllerFactory( typeof(UnityControllerFactory)); When you register a type by using the RegisterType method, the default behavior is for the container to use a transient lifetime manager: it creates a new instance of the registered, mapped, or requested type each time you call the Resolve or ResolveAll method or when the dependency mechanism injects instances into other classes. The following are the lifetime managers provided by Unity 2.0: ContainerControlledLifetimeManager - Implements a singleton behavior for objects. The object is disposed of when you dispose of the container. ExternallyControlledLifetimeManager - Implements a singleton behavior, but the container doesn't hold a reference to the object, which will be disposed of when out of scope. HierarchicalLifetimeManager - Implements a singleton behavior for objects. However, child containers don't share instances with parents. PerResolveLifetimeManager - Implements a behavior similar to the transient lifetime manager, except that instances are reused across build-ups of the object graph. PerThreadLifetimeManager - Implements a singleton behavior for objects, but limited to the current thread. TransientLifetimeManager - Returns a new instance of the requested type for each call. (default behavior) We can also create custom lifetime managers for the Unity container. The following code creates a custom lifetime manager that stores the resolved instance in the current HttpContext. public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable { public override object GetValue() { return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName]; } public override void RemoveValue() { HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName); } public override void SetValue(object newValue) { HttpContext.Current.Items[typeof(T).AssemblyQualifiedName] = newValue; } public void Dispose() { RemoveValue(); } }
    Step 4 – Modify Global.asax.cs to configure the Unity container In the Application_Start event, we call the ConfigureUnity method to configure the Unity container and set the controller factory to UnityControllerFactory: void Application_Start() { RegisterRoutes(RouteTable.Routes); ViewEngines.Engines.Clear(); ViewEngines.Engines.Add(new MobileCapableWebFormViewEngine()); ConfigureUnity(); }
    Download Code You can download the modified NerdDinner code from http://nerddinneraddons.codeplex.com
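    A short usage sketch for the custom lifetime manager above (my addition, not from the original post): pass it to RegisterType together with an InjectionConstructor so each HTTP request gets, and disposes, its own MyDataContext:

        // Resolved instance is kept in HttpContext.Current.Items, so it lives
        // exactly as long as the current request.
        container.RegisterType<IDatabase, MyDataContext>(
            new HttpContextLifetimeManager<IDatabase>(),
            new InjectionConstructor(connectionString));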

    Read the article

  • Metro: Namespaces and Modules

    - by Stephen.Walther
    The goal of this blog entry is to describe how you can use the Windows JavaScript (WinJS) library to create namespaces. In particular, you learn how to use the WinJS.Namespace.define() and WinJS.Namespace.defineWithParent() methods. You also learn how to hide private methods by using the module pattern. Why Do We Need Namespaces? Before we do anything else, we should start by answering the question: Why do we need namespaces? What function do they serve? Do they just add needless complexity to our Metro applications? After all, plenty of JavaScript libraries do just fine without introducing support for namespaces. For example, jQuery has no support for namespaces and jQuery is the most popular JavaScript library in the universe. If jQuery can do without namespaces, why do we need to worry about namespaces at all? Namespaces perform two functions in a programming language. First, namespaces prevent naming collisions. In other words, namespaces enable you to create more than one object with the same name without conflict. For example, imagine that two companies – company A and company B – both want to make a JavaScript shopping cart control and both companies want to name the control ShoppingCart. By creating a CompanyA namespace and CompanyB namespace, both companies can create a ShoppingCart control: a CompanyA.ShoppingCart and a CompanyB.ShoppingCart control. The second function of a namespace is organization. Namespaces are used to group related functionality even when the functionality is defined in different physical files. For example, I know that all of the methods in the WinJS library related to working with classes can be found in the WinJS.Class namespace. Namespaces make it easier to understand the functionality available in a library. If you are building a simple JavaScript application then you won’t have much reason to care about namespaces. If you need to use multiple libraries written by different people then namespaces become very important. Using WinJS.Namespace.define() In the WinJS library, the most basic method of creating a namespace is to use the WinJS.Namespace.define() method. This method enables you to declare a namespace (of arbitrary depth). The WinJS.Namespace.define() method has the following parameters: · name – A string representing the name of the new namespace. You can add nested namespace by using dot notation · members – An optional collection of objects to add to the new namespace For example, the following code sample declares two new namespaces named CompanyA and CompanyB.Controls. Both namespaces contain a ShoppingCart object which has a checkout() method: // Create CompanyA namespace with ShoppingCart WinJS.Namespace.define("CompanyA"); CompanyA.ShoppingCart = { checkout: function (){ return "Checking out from A"; } }; // Create CompanyB.Controls namespace with ShoppingCart WinJS.Namespace.define( "CompanyB.Controls", { ShoppingCart: { checkout: function(){ return "Checking out from B"; } } } ); // Call CompanyA ShoppingCart checkout method console.log(CompanyA.ShoppingCart.checkout()); // Writes "Checking out from A" // Call CompanyB.Controls checkout method console.log(CompanyB.Controls.ShoppingCart.checkout()); // Writes "Checking out from B" In the code above, the CompanyA namespace is created by calling WinJS.Namespace.define(“CompanyA”). Next, the ShoppingCart is added to this namespace. The namespace is defined and an object is added to the namespace in separate lines of code. 
A different approach is taken in the case of the CompanyB.Controls namespace. The namespace is created and the ShoppingCart object is added to the namespace with the following single line of code: WinJS.Namespace.define( "CompanyB.Controls", { ShoppingCart: { checkout: function(){ return "Checking out from B"; } } } ); Notice that CompanyB.Controls is a nested namespace. The top level namespace CompanyB contains the namespace Controls. You can declare a nested namespace using dot notation and the WinJS library handles the details of creating one namespace within the other. After the namespaces have been defined, you can use either of the two shopping cart controls. You call CompanyA.ShoppingCart.checkout() or you can call CompanyB.Controls.ShoppingCart.checkout(). Using WinJS.Namespace.defineWithParent() The WinJS.Namespace.defineWithParent() method is similar to the WinJS.Namespace.define() method. Both methods enable you to define a new namespace. The difference is that the defineWithParent() method enables you to add a new namespace to an existing namespace. The WinJS.Namespace.defineWithParent() method has the following parameters: · parentNamespace – An object which represents a parent namespace · name – A string representing the new namespace to add to the parent namespace · members – An optional collection of objects to add to the new namespace The following code sample demonstrates how you can create a root namespace named CompanyA and add a Controls child namespace to the CompanyA parent namespace: WinJS.Namespace.define("CompanyA"); WinJS.Namespace.defineWithParent(CompanyA, "Controls", { ShoppingCart: { checkout: function () { return "Checking out"; } } } ); console.log(CompanyA.Controls.ShoppingCart.checkout()); // Writes "Checking out" One significant advantage of using the defineWithParent() method over the define() method is the defineWithParent() method is strongly-typed. In other words, you use an object to represent the base namespace instead of a string. If you misspell the name of the object (CompnyA) then you get a runtime error. Using the Module Pattern When you are building a JavaScript library, you want to be able to create both public and private methods. Some methods, the public methods, are intended to be used by consumers of your JavaScript library. The public methods act as your library’s public API. Other methods, the private methods, are not intended for public consumption. Instead, these methods are internal methods required to get the library to function. You don’t want people calling these internal methods because you might need to change them in the future. JavaScript does not support access modifiers. You can’t mark an object or method as public or private. Anyone gets to call any method and anyone gets to interact with any object. The only mechanism for encapsulating (hiding) methods and objects in JavaScript is to take advantage of functions. In JavaScript, a function determines variable scope. A JavaScript variable either has global scope – it is available everywhere – or it has function scope – it is available only within a function. If you want to hide an object or method then you need to place it within a function. 
    For example, the following code contains a function named doSomething() which contains a nested function named doSomethingElse(): function doSomething() { console.log("doSomething"); function doSomethingElse() { console.log("doSomethingElse"); } } doSomething(); // Writes "doSomething" doSomethingElse(); // Throws ReferenceError You can call doSomethingElse() only within the doSomething() function. The doSomethingElse() function is encapsulated in the doSomething() function. The WinJS library takes advantage of function encapsulation to hide all of its internal methods. All of the WinJS methods are defined within self-executing anonymous functions. Everything is hidden by default. Public methods are exposed by explicitly adding the public methods to namespaces defined in the global scope. Imagine, for example, that I want a small library of utility methods. I want to create a method for calculating sales tax and a method for calculating the expected ship date of a product. The following library encapsulates the implementation of my library in a self-executing anonymous function: (function (global) { // Public method which calculates tax function calculateTax(price) { return calculateFederalTax(price) + calculateStateTax(price); } // Private method for calculating state tax function calculateStateTax(price) { return price * 0.08; } // Private method for calculating federal tax function calculateFederalTax(price) { return price * 0.02; } // Public method which returns the expected ship date function calculateShipDate(currentDate) { currentDate.setDate(currentDate.getDate() + 4); return currentDate; } // Export public methods WinJS.Namespace.define("CompanyA.Utilities", { calculateTax: calculateTax, calculateShipDate: calculateShipDate } ); })(this); // Show expected ship date var shipDate = CompanyA.Utilities.calculateShipDate(new Date()); console.log(shipDate); // Show price + tax var price = 12.33; var tax = CompanyA.Utilities.calculateTax(price); console.log(price + tax); In the code above, the self-executing anonymous function contains four functions: calculateTax(), calculateStateTax(), calculateFederalTax(), and calculateShipDate(). The following statement is used to expose only the calculateTax() and the calculateShipDate() functions: // Export public methods WinJS.Namespace.define("CompanyA.Utilities", { calculateTax: calculateTax, calculateShipDate: calculateShipDate } ); Because the calculateTax() and calculateShipDate() functions are added to the CompanyA.Utilities namespace, you can call these two methods outside of the self-executing function. These are the public methods of your library which form the public API. The calculateStateTax() and calculateFederalTax() methods, on the other hand, are forever hidden within the black hole of the self-executing function. These methods are encapsulated and can never be called outside of the scope of the self-executing function. These are the internal methods of your library. Summary The goal of this blog entry was to describe why and how you use namespaces with the WinJS library. You learned how to define namespaces using both the WinJS.Namespace.define() and WinJS.Namespace.defineWithParent() methods. We also discussed how to hide private members and expose public members using the module pattern.

    Read the article

  • The Incremental Architect's Napkin – #3 – Make Evolvability inevitable

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspx The easier something is to measure, the more likely it will be produced. Deviations between what is and what should be can be readily detected. That's what automated acceptance tests are for. That's what sprint reviews in Scrum are for. It's no small wonder our software looks like it looks. It has all the traits whose conformance with requirements can easily be measured. And it's lacking traits which cannot easily be measured. Evolvability (or Changeability) is such a trait. If an operation is correct, if an operation is fast enough, that can be checked very easily. But whether Evolvability is high or low, that cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function or Cyclomatic Complexity or test coverage. But there is no threshold value signalling "evolvability too low"; also Evolvability is hardly tangible for the customer. Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it's needed like any other requirement. Or even more. Because without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built. Such fundamental importance is in stark contrast with its immeasurability. To compensate this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves. Since we cannot measure Evolvability, though, we cannot start watching it more. Instead we need to establish practices to keep it high (enough) at all times. Chefs have known that for long. That's why everybody in a restaurant kitchen is constantly seeing after cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then can the health of the patrons be guaranteed and production efficiency kept constantly high. Still, a kitchen's level of cleanliness is easier to measure than software Evolvability. That's why important practices like reviews, pair programming, or TDD are not enough, I guess. What we need to keep Evolvability in focus and high is… to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum's sprints of 4, 2, even 1 week are too long. Kanban's flow of user stories across the board is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance checked by the customer (or his/her representative, e.g. a Product Owner). For me there are several reasons for such a fixed and short cycle time for each increment: Clear expectations Absolute estimates ("This will take X days to complete.") are near impossible in software development, as explained previously. Too much unplanned research and engineering work lurks in every feature. And then there are pervasive interruptions of work by peers and management.
    But maybe more importantly, the shorter the timespan the more we can control how we use our time. So much can happen over the course of a week and longer timespans. But if push comes to shove I can block out all distractions and interruptions for a day or possibly two. That's why I believe we can give rough absolute estimates on 3 levels: Noon Tonight Tomorrow Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you how long it will take you to implement a user story or bug fix, you can say, "It'll be fixed by noon.", or you can say, "I can manage to implement it until tonight before I leave.", or you can say, "You'll get it by tomorrow night at latest." Yes, I believe all else would be naive. If you're not confident to get something done by tomorrow night (some 34h from now) you just cannot reliably commit to any timeframe. That means you should not promise anything, you should not even start working on the issue. So when estimating use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories. If you like absolute estimates, here you go. But don't do deep estimates. Don't estimate dozens of issues; don't think ahead ("Issue A is a Tonight, then B will be a Tomorrow, after that it's C as a Noon, finally D is a Tonight - that's what I'll do this week."). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues. To be blunt: Yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow. Trust through reliability Our trade is lacking trust. Customers don't trust software companies/departments much. Managers don't trust developers much. I find that perfectly understandable in the light of what we're trying to accomplish: delivering software in the face of uncertainty by means of material-good production. Customers as well as managers still expect software development to be close to the production of houses or cars. But that's a fundamental misunderstanding. Software development is development. It's basically research. As software developers we're constantly executing experiments to find out what really provides value to users. We don't know what they need, we just have mediated hypotheses. That's why we cannot reliably deliver on preposterous demands. So trust is out of the window in no time. If we switch to delivering in short cycles, though, we can regain trust. Because estimates - explicit or implicit - up to 32 hours at most can be satisfied. I'd say: reliability over scope. It's more important to reliably deliver what was promised than to cover a lot of requirement area. So when in doubt promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always. Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don't wait for some Kanban board to show you that flow can be improved by scheduling smaller stories. You don't need to learn that the hard way. Just start with small batches of three different sizes. Fast feedback What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end?
    Why let the mental model of the issue and its solution dissipate? If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly, you probably are not in the mood anymore to go back to something you deemed done a long time ago. It's boring, it's frustrating to open up that mental box again. Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises. Checking finished issues for acceptance is the most important task of a Product Owner. It's even more important than planning new issues. Because as long as work started is not released (accepted) it's potential waste. So before starting new work, better make sure work already done has value. By putting the emphasis on acceptance rather than planning, true pull is established. As long as planning and starting work is more important, it's a push process. Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow. After acceptance the developer(s) can start working on the next issue. Flexibility As if reliability/trust and fast feedback for less waste weren't enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it's neither D nor E, but G. And after G it's D. With Spinning, every 32 hours at latest priorities can be changed. And nothing is lost. Because what got accepted is of value. It provides an incremental value to the customer/user. Or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty. I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It's unnecessary at best, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you're able to change your course at any time. Premature completion Customers/management are used to premeditating budgets. They want to know exactly how much to pay for a certain amount of requirements. That's understandable. But it does not match the nature of software development. We should know that by now. Maybe there's somewhere in the world some team who can consistently deliver on scope, quality, and time, and budget. Great! Congratulations! I, however, haven't seen such a team yet. Which does not mean it's impossible, but I think it's nothing I can recommend to strive for. Rather I'd say: Don't try this at home. It might hurt you one way or the other. However, what we can do is allow customers/management to stop work on features at any moment.
    With Spinning, every 32 hours a feature can be declared as finished - even though it might not be completed according to the initial definition. I think progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn't it more important to constantly move forward? Step by step. We're not running sprints, we're not running marathons, not even ultra-marathons. We're in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases). Whoever can only think in terms of completed requirements shuts out the chance for saving money. The requirements for a feature are mostly uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4–32 hour increment the Product Owner can do an experiment (or invite users to an experiment) to see if a particular trait of the software system is already good enough. And if so, she can switch the attention to a different aspect. In the end, requirements A, B, C then could be finished just 70%, 80%, and 50%. What the heck? It's good enough - for now. 33% money saved. Wouldn't that be splendid? Isn't that a stunning argument for any budget-sensitive customer? You can save money and still get what you need? Pull on practices So far, in addition to more trust, more flexibility, less money spent, Spinning led to "doing less", which also means less code, which of course means higher Evolvability per se. Last but not least, though, I think Spinning's short acceptance cycles have one more effect. They exert pull-power on all sorts of practices known for increasing Evolvability. If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn't 90% of the developer community practicing automated tests consistently? I think the answer is simple: Because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger. With Spinning that's different. If you're practicing Spinning you cannot avoid all those practices. With Spinning you very quickly realize you cannot deliver reliably even on your 32 hour promises. Spinning thus is pulling on developers to adopt principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do that. Because first the Product Owner, then management, will notice an increasing difficulty to deliver value within 32 hours.
    There, finally, there emerges a way to measure Evolvability: The more frequently developers tell the Product Owner there is no way to deliver anything worthy of feedback until tomorrow night, the poorer Evolvability is. Don't count the "WTF!", count the "No way!" utterances. In closing For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development but be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development "under pressure". Software needs to be changed more often, in smaller increments. Each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment. It's sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales. Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work on what's there, not what might be possible in the future. But I digress… In my view a Spinning cycle - which is not easy to reach, which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that's the challenge we need to accept if we're serious about increasing Evolvability. Fortunately higher Evolvability is not the only outcome of Spinning. Customers/management will like the increased flexibility and "getting more bang for the buck".

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to SQL Error Actions

    - by pinaldave
    This blog post is inspired by SQL Programming Joes 2 Pros: Programming and Development for Microsoft SQL Server 2008 – SQL Exam Prep Series 70-433 – Volume 4. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] This is a follow-up to my earlier blog post on the same subject - SQL SERVER – Introduction to SQL Error Actions – A Primer. In that article we discussed the basic terminology of error handling. The article further covers the following important concepts of error handling: Introduction to SQL Error Actions, Statement Termination, Scope Abortion, Batch Termination. The above three are the most important concepts related to error handling in SQL Server. There are many more things one has to learn, but without the beginner's fundamentals one can't learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right. Quiz 1.) Which SQL Server error action happens for errors with a severity of 11-16 when you set the XACT_ABORT setting to ON? You will get Statement Termination. You will get Scope Abortion. You will get Batch Abortion. You will get Connection Termination. SQL Server will pick the error action. 2.) Which SQL Server error action happens for errors with a severity of 11-16 when you set the XACT_ABORT setting to OFF? You will get Statement Termination You will get Scope Abortion You will get Batch Abortion You will get Connection Termination SQL Server will pick the error action Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change your answers, you still have a chance. Solution 1) 3 2) 5 Now let us check the answers: compare your answers to the ones above. I am very confident you got them correct. Available at USA: Amazon India: Flipkart | IndiaPlaza Volume: 1, 2, 3, 4, 5 Please leave your feedback in the comment area for the quiz and video. Did you know all the answers of the quiz? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Two Wifi Icons in Panel [Solved]

    - by Alex
    I have the exact same problem in 13.10 as this user: Two Wifi indicators in panel. (My screenshots are omitted.) Here are some screenshots from another user: http://ubuntuforums.org/showthread.php?t=2183020&p=12825563 ifconfig and iwconfig outputs:

    $ ifconfig
    lo      Link encap:Local Loopback
            inet addr:XXXXXX Mask:XXXXXXX
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING MTU:65536 Metric:1
            RX packets:2243 errors:0 dropped:0 overruns:0 frame:0
            TX packets:2243 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:209889 (209.8 KB) TX bytes:209889 (209.8 KB)

    wlan0   Link encap:Ethernet HWaddr XXXXXXXXX
            inet addr:XXXXXX Bcast:XXXXXXXX Mask:XXXXXXX
            inet6 addr: XXXXXXX Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            RX packets:5925 errors:0 dropped:0 overruns:0 frame:0
            TX packets:3361 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:2951818 (2.9 MB) TX bytes:630579 (630.5 KB)

    $ iwconfig
    lo      no wireless extensions.

    wlan0   IEEE 802.11abgn ESSID:"XXXXX"
            Mode:Managed Frequency:2.437 GHz Access Point: XXXXXXXX
            Bit Rate=72.2 Mb/s Tx-Power=15 dBm
            Retry long limit:7 RTS thr:off Fragment thr:off
            Power Management:on
            Link Quality=49/70 Signal level=-61 dBm
            Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
            Tx excessive retries:153 Invalid misc:472 Missed beacon:0

    Read the article

< Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >