Search Results

Search found 20887 results on 836 pages for 'instance fields'.


  • Divide and Conquer Algo to find maximum difference between two ordered elements

    - by instance
    Given an array arr[] of integers, find the maximum difference between any two elements such that the larger element appears after the smaller one in arr[]. Examples: if the array is [2, 3, 10, 6, 4, 8, 1, 7] the returned value should be 8 (the difference between 10 and 2); if the array is [7, 9, 5, 6, 3, 2] the returned value should be 2 (the difference between 7 and 9). My algorithm: I thought of using a divide-and-conquer approach. Explanation: split 2, 3, 10, 6, 4, 8, 1, 7 into 2,3,10,6 and 4,8,1,7; then into 2,3 and 10,6 and 4,8 and 1,7; then into the pairs 2 and 3, 10 and 6, 4 and 8, 1 and 7. Since the elements stay in their original order, I get the maximum pairwise difference at this level, here 6 (between 1 and 7). Now I move back up, merging the blocks, and at each merge I also take the difference between the minimum of the first block and the maximum of the second block, and keep doing this to the end. I am not able to implement this in my code. Can anyone please provide pseudocode for this?
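    A minimal sketch of the divide-and-conquer recurrence described above (in C#, since the question names no language; the tuple-returning Solve helper and all names are illustrative, not from the question). Each call returns the minimum, maximum and best gain of its range, and the merge step compares the right half's maximum against the left half's minimum:

    using System;

    class MaxGain
    {
        // Returns (min, max, best) for a[lo..hi], where best is the largest
        // a[j] - a[i] with i < j inside that range (int.MinValue while no pair exists).
        static (int Min, int Max, int Best) Solve(int[] a, int lo, int hi)
        {
            if (lo == hi) return (a[lo], a[lo], int.MinValue); // single element: no pair yet
            int mid = (lo + hi) / 2;
            var left = Solve(a, lo, mid);
            var right = Solve(a, mid + 1, hi);
            // best pair lies entirely in one half, or spans the split:
            // max of the second block minus min of the first block
            int best = Math.Max(Math.Max(left.Best, right.Best), right.Max - left.Min);
            return (Math.Min(left.Min, right.Min), Math.Max(left.Max, right.Max), best);
        }

        static void Main()
        {
            Console.WriteLine(Solve(new[] { 2, 3, 10, 6, 4, 8, 1, 7 }, 0, 7).Best); // 8 (10 - 2)
            Console.WriteLine(Solve(new[] { 7, 9, 5, 6, 3, 2 }, 0, 5).Best);        // 2 (9 - 7)
        }
    }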

    Read the article

  • A simple Dynamic Proxy

    - by Abhijeet Patel
    Frameworks such as EF4 and MOQ do what most developers consider "dark magic". For instance, in EF4, when you use a POCO for an entity you can opt in to behaviors such as "lazy loading" and "change tracking" at runtime merely by ensuring that your type has the following characteristics: the class must be public and not sealed; the class must have a public or protected parameterless constructor; the class must have public or protected properties. Adhere to this and your type is magically endowed with these behaviors without any additional programming on your part. Behind the scenes the framework subclasses your type at runtime and creates a "dynamic proxy" which has these additional behaviors, and when you navigate properties of your POCO, the framework replaces the POCO type with derived type instances. The MOQ framework does similar magic. Let's say you have a simple interface:

    public interface IFoo
    {
        int GetNum();
    }

    We can verify that GetNum() was invoked on a mock like so:

    var mock = new Mock<IFoo>(MockBehavior.Default);
    mock.Setup(f => f.GetNum());
    var num = mock.Object.GetNum();
    mock.Verify(f => f.GetNum());

    Behind the scenes the MOQ framework is generating a dynamic proxy by implementing IFoo at runtime. The call to mock.Object returns the dynamic proxy, on which we then call GetNum and then verify that this method was invoked. No dark magic at all, just clever programming is what's going on here, just not visible and hence it appears magical! Let's create a simple dynamic proxy generator which accepts an interface type and dynamically creates a proxy implementing the interface type specified at runtime.

    public static class DynamicProxyGenerator
    {
        public static T GetInstanceFor<T>()
        {
            Type typeOfT = typeof(T);
            var methodInfos = typeOfT.GetMethods();
            AssemblyName assName = new AssemblyName("testAssembly");
            var assBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(assName, AssemblyBuilderAccess.RunAndSave);
            var moduleBuilder = assBuilder.DefineDynamicModule("testModule", "test.dll");
            var typeBuilder = moduleBuilder.DefineType(typeOfT.Name + "Proxy", TypeAttributes.Public);

            typeBuilder.AddInterfaceImplementation(typeOfT);
            var ctorBuilder = typeBuilder.DefineConstructor(
                MethodAttributes.Public,
                CallingConventions.Standard,
                new Type[] { });
            var ilGenerator = ctorBuilder.GetILGenerator();
            ilGenerator.EmitWriteLine("Creating Proxy instance");
            ilGenerator.Emit(OpCodes.Ret);

            foreach (var methodInfo in methodInfos)
            {
                var methodBuilder = typeBuilder.DefineMethod(
                    methodInfo.Name,
                    MethodAttributes.Public | MethodAttributes.Virtual,
                    methodInfo.ReturnType,
                    methodInfo.GetParameters().Select(p => p.ParameterType).ToArray()
                    );
                var methodILGen = methodBuilder.GetILGenerator();
                methodILGen.EmitWriteLine("I'm a proxy");
                if (methodInfo.ReturnType == typeof(void))
                {
                    methodILGen.Emit(OpCodes.Ret);
                }
                else
                {
                    if (methodInfo.ReturnType.IsValueType || methodInfo.ReturnType.IsEnum)
                    {
                        MethodInfo getMethod = typeof(Activator).GetMethod("CreateInstance", new Type[] { typeof(Type) });
                        LocalBuilder lb = methodILGen.DeclareLocal(methodInfo.ReturnType);
                        methodILGen.Emit(OpCodes.Ldtoken, lb.LocalType);
                        methodILGen.Emit(OpCodes.Call, typeof(Type).GetMethod("GetTypeFromHandle"));
                        methodILGen.Emit(OpCodes.Call, getMethod);
                        methodILGen.Emit(OpCodes.Unbox_Any, lb.LocalType);
                    }
                    else
                    {
                        methodILGen.Emit(OpCodes.Ldnull);
                    }
                    methodILGen.Emit(OpCodes.Ret);
                }
                typeBuilder.DefineMethodOverride(methodBuilder, methodInfo);
            }

            Type constructedType = typeBuilder.CreateType();
            var instance = Activator.CreateInstance(constructedType);
            return (T)instance;
        }
    }

    Dynamic proxies are created by calling into the following main types: AssemblyBuilder, TypeBuilder, ModuleBuilder and ILGenerator. These types enable dynamically creating an assembly and emitting .NET modules and types in that assembly, all using IL instructions. Let's break down the code above a bit and examine it piece by piece.

    Type typeOfT = typeof(T);
    var methodInfos = typeOfT.GetMethods();
    AssemblyName assName = new AssemblyName("testAssembly");
    var assBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(assName, AssemblyBuilderAccess.RunAndSave);
    var moduleBuilder = assBuilder.DefineDynamicModule("testModule", "test.dll");
    var typeBuilder = moduleBuilder.DefineType(typeOfT.Name + "Proxy", TypeAttributes.Public);

    We are instructing the runtime to create an assembly called "test.dll", and in this assembly we then emit a new module called "testModule". We then emit a new type definition named "<typeName>Proxy" into this new module. This is the definition of the "dynamic proxy" for type T.

    typeBuilder.AddInterfaceImplementation(typeOfT);
    var ctorBuilder = typeBuilder.DefineConstructor(
        MethodAttributes.Public,
        CallingConventions.Standard,
        new Type[] { });
    var ilGenerator = ctorBuilder.GetILGenerator();
    ilGenerator.EmitWriteLine("Creating Proxy instance");
    ilGenerator.Emit(OpCodes.Ret);

    The newly created type implements type T and defines a default parameterless constructor in which we emit a call to Console.WriteLine. This call is not necessary, but we do it so that we can see firsthand when the proxy is constructed, i.e. when our default constructor is invoked.

    var methodBuilder = typeBuilder.DefineMethod(
        methodInfo.Name,
        MethodAttributes.Public | MethodAttributes.Virtual,
        methodInfo.ReturnType,
        methodInfo.GetParameters().Select(p => p.ParameterType).ToArray()
        );

    We then iterate over each method declared on type T and add a method definition of the same name into our "dynamic proxy" definition.

    if (methodInfo.ReturnType == typeof(void))
    {
        methodILGen.Emit(OpCodes.Ret);
    }

    If the return type specified in the method declaration of T is void, we simply return.
    if (methodInfo.ReturnType.IsValueType || methodInfo.ReturnType.IsEnum)
    {
        MethodInfo getMethod = typeof(Activator).GetMethod("CreateInstance",
                                                  new Type[] { typeof(Type) });
        LocalBuilder lb = methodILGen.DeclareLocal(methodInfo.ReturnType);
        methodILGen.Emit(OpCodes.Ldtoken, lb.LocalType);
        methodILGen.Emit(OpCodes.Call, typeof(Type).GetMethod("GetTypeFromHandle"));
        methodILGen.Emit(OpCodes.Call, getMethod);
        methodILGen.Emit(OpCodes.Unbox_Any, lb.LocalType);
    }

    If the return type in the method declaration of T is either a value type or an enum, then we need to create an instance of the value type and return that instance to the caller. In order to accomplish that we need to do the following:
    1) Get a handle to the Activator.CreateInstance method.
    2) Declare a local variable of the return type specified on the method declaration of T (obtained from the MethodInfo) and push the token of that type onto the evaluation stack; in reality a RuntimeTypeHandle is what is pushed onto the stack.
    3) Invoke the GetTypeFromHandle method (a static method on the Type class), passing in the RuntimeTypeHandle pushed onto the stack previously as an argument; the result of this invocation is a Type object (representing the method's return type), which is pushed onto the top of the evaluation stack.
    4) Invoke Activator.CreateInstance, passing in the Type object from step 3; the result of this invocation is an instance of the value type, boxed as a reference type and pushed onto the top of the evaluation stack.
    5) Unbox the result and place it into the local variable of the return type defined in step 2.

    methodILGen.Emit(OpCodes.Ldnull);

    If the return type is a reference type then we just load a null onto the evaluation stack.

    methodILGen.Emit(OpCodes.Ret);

    Emit a return statement to return whatever is on top of the evaluation stack (null or an instance of a value type) back to the caller.

    Type constructedType = typeBuilder.CreateType();
    var instance = Activator.CreateInstance(constructedType);
    return (T)instance;

    Now that we have a definition of the "dynamic proxy" implementing all the methods declared on T, we can create an instance of the proxy type and return it typed as T. The caller can now invoke the generator and request a dynamic proxy for any type T. In our example, when the client invokes GetNum() we get back 0. Let's add a new method on the interface, DayOfWeek GetDay():

    public interface IFoo
    {
        int GetNum();
        DayOfWeek GetDay();
    }

    When GetDay() is invoked, the "dynamic proxy" returns "Sunday", since that is the default value of the DayOfWeek enum. This is a very trivial example of dynamic proxies; frameworks like MOQ have a far more sophisticated implementation of this paradigm, wherein you can instruct the framework to create proxies which return specified values for a method implementation.
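    As a quick sanity check, here is how a caller might exercise the generator (a minimal usage sketch; it assumes the IFoo interface and the DynamicProxyGenerator class exactly as defined above):

    IFoo proxy = DynamicProxyGenerator.GetInstanceFor<IFoo>(); // prints "Creating Proxy instance"
    int num = proxy.GetNum();        // prints "I'm a proxy"; num is 0
    DayOfWeek day = proxy.GetDay();  // prints "I'm a proxy"; day is DayOfWeek.Sunday
    Console.WriteLine("{0} / {1}", num, day);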

    Read the article

  • Super constructor call must be the first statement in a Java constructor [closed]

    - by Val
    I know the answer: "we need rules to prevent shooting yourself in the foot." OK, I make millions of programming mistakes every day. To be prevented, we need one simple rule: prohibit all of the JLS and do not use Java. If we explain everything by "not shooting your foot," this is reasonable. But there is not much reason in such a reason. When I programmed in Delphi, I always wanted the compiler to check whether I read uninitialized variables. I discovered myself that it is stupid to read an uncertain variable, because it leads to unpredictable results and is obviously erroneous. Just by looking at the code I could see if there was an error. I wished the compiler could do this job. It is also a reliable signal of a programming error if a function does not return any value. But I never wanted it to enforce the super constructor first. Why? You say that constructors just initialize fields. Super fields are derived; extra fields are introduced. From the goal point of view, it does not matter in which order you initialize the variables. I have studied parallel architectures and can say that all the fields could even be assigned in parallel... What? Do you want to use the uninitialized fields? Stupid people always want to take away our freedoms and break the JLS rules the God gives to us! Please, policeman, take away that person! Where do I say so? I'm talking only about initializing/assigning the fields, not about using them. The Java compiler already defends me from the mistake of accessing the uninitialized. Some cases sneak through, but this example shows how this stupid rule does not save us from read-accessing fields that are incompletely initialized during construction:

    public class BadSuper {
        String field;
        public String toString() {
            return "field = " + field;
        }
        public BadSuper(String val) {
            field = val;
            // yea, super-first does not protect from accessing
            // unconstructed subclass fields. The subclass constructor
            // must be called before super()!
            System.err.println(this);
        }
    }

    public class BadPost extends BadSuper {
        Object o;
        public BadPost(Object o) {
            super("str");
            this.o = o;
        }
        public String toString() {
            // the superconstructor will boom here, because o is not initialized!
            return super.toString() + ", obj = " + o.toString();
        }
        public static void main(String[] args) {
            new BadSuper("test 1");
            new BadPost(new Object());
        }
    }

    It shows that, actually, subclass fields would have to be initialized before the superclass's! Meantime, the Java requirement "saves" us from specializing a class by specializing what the super constructor argument is:

    public class MyKryo extends Kryo {
        class MyClassResolver extends DefaultClassResolver {
            public Registration register(Registration registration) {
                System.out.println(MyKryo.this.getDepth());
                return super.register(registration);
            }
        }
        MyKryo() {
            // cannot instantiate MyClassResolver in super
            super(new MyClassResolver(), new MapReferenceResolver());
        }
    }

    Try to make it compilable. It is always painful, especially when you cannot assign the argument later. Initialization order is not important for initialization in general. I could understand that you should not use super methods before initializing super. But the requirement for super to be the first statement is different. It only keeps you from writing code that does useful things simply. I do not see how this adds safety. Actually, safety is degraded, because we need to use ugly workarounds. Doing post-initialization outside the constructors also degrades safety (otherwise, why do we need constructors?) and defeats Java's final safety enforcer. To conclude:
    - Reading the uninitialized is a bug.
    - Initialization order is not important from the computer-science point of view. Doing initialization or computations in a different order is not a bug.
    - Guarding against read access to the uninitialized is good, but compilers fail to detect all such bugs.
    - Making super the first statement does not solve the problem; it "prevents" shooting the right things but not the foot.
    - It requires inventing workarounds where, because of the complexity of analysis, it is easier to shoot yourself in the foot.
    - Doing post-initialization outside the constructors degrades safety (otherwise, why do we need constructors?) and degrades it further by defeating the final access modifier.

    When the Java forum was alive, Java bigots attacked me for these thoughts. In particular, they disliked that fields can be initialized in parallel, saying that natural development ensures correctness. When I replied that you could use advanced engineering to create a human right away, without "developing" any ape first, and it would still be an ape, they stopped listening to me, because modern technology cannot afford it. OK, take something simpler. How do you produce a Renault? Should you construct an Automobile first? No, you start by producing a Renault and, once completed, you'll see that it is an automobile. So the requirement to produce fields in "natural order" is unnatural. In the case of an alarm clock or an armchair, which are still a clock and a chair, you may need to first develop the base (clock and chair) and then add the extras. So I can give examples where super fields must be initialized first and, oppositely, where they need to be initialized later. The order does not exist in advance, so the compiler cannot be aware of the proper order. Only the programmer/constructor knows it. The compiler should not take more responsibility and enforce the wrong order onto the programmer. Saying that I cannot initialize some fields because I did not initialize the others is like saying "you cannot initialize the thing because it is not initialized." This is the kind of argument we have. So, to conclude once more: a feature that "protects" me from doing things in a simple and right way, in order to enforce something that does not add noticeably to bug elimination, is a strongly negative thing, and it pisses me off, together with all the arguments in its support I've seen so far. This is "a conceptual question about software development": should there be a requirement to call super() first or not? I do not know. If you do, or have an idea, you have a place to answer. I think that I have provided enough arguments against this feature. Let's hear from the ones who benefit from it. Let it just be something more than a simple, abstract and stupid "write your own language" or "protection" kind of argument. Why do we need it in the language that I am going to develop?

    Read the article

  • What's Bringing SharePoint 2007 Server to a Halt?

    - by juanlarios
    I've been having issues with my test environment and I'm hoping someone has run into this problem and can point me in the right direction. I noticed: SharePoint server memory is through the roof at times, and so is the CPU usage; most of the CPU usage is a SQL process. I'm running out of disk space all the time. I looked in the logs located in the 12 hive and sure enough I have 1 GB log files that are hard to open because of their size. The following are the error messages that are flooding my SharePoint logs:

    04/05/2010 16:02:36.99  OWSTIMER.EXE (0x0B94)  0x0BA4  Windows SharePoint Services  Timer  5uuf  Monitorable  The previous instance of the timer job 'Variations Propagate Page Job Definition', id '{F9A73EB4-90FE-4574-AD99-B4034056F915}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped.  Consider increasing the interval between jobs.

    04/05/2010 15:59:51.51  OWSTIMER.EXE (0x0B94)  0x0BA4  Windows SharePoint Services  Timer  5uuf  Monitorable  The previous instance of the timer job 'Profile Synchronization', id '{A05E3439-8DCD-449A-9D9E-46D601CACAA2}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped.  Consider increasing the interval between jobs.

    04/05/2010 15:56:25.53  OWSTIMER.EXE (0x0B94)  0x0BA4  Windows SharePoint Services  Timer  5uuf  Monitorable  The previous instance of the timer job 'Scheduled Unpublish', id '{6298F93F-388D-46B9-809E-CEDBB8659661}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped.  Consider increasing the interval between jobs.

    04/05/2010 15:54:14.73  OWSTIMER.EXE (0x0B94)  0x0BA4  Windows SharePoint Services  Timer  5uuf  Monitorable  The previous instance of the timer job 'Config Refresh', id '{C42DA970-3DA3-4AA2-94E5-8499C5B80A3E}' for service '{7F6D2CBE-8071-4A30-B313-7C9989FC2D87}' is still running, so the current instance will be skipped.  Consider increasing the interval between jobs.

    I've been googling around but haven't found much. I know one other person posted something about this back in 2008, but no answers were reached. I have already checked the databases to see if any of them have gone offline for whatever reason, but from SQL everything is fine. I recently created a new SSP and deleted an old one, so I thought maybe that was causing it, and who knows, maybe that causes some of the problems or maybe all of them. I'm running the configuration wizard to see if anything changes. Please, if someone has had similar issues, let me know.

    Read the article

  • How to execute a for loop with sed in the terminal

    - by vipin8169
    I want to execute a for loop with a sed command and am getting an error:

    for i in <comma-separated server name list>; do "command"; echo $i; done

    where command = sed '/^$/d' /home/nextag/instance.properties | grep -vc '#'

    I'm getting the following error:

    -bash: sed "/^$/d" /home/nextag/instance.properties|grep -vc#: No such file or directory
    lu1

    What is the correct way to execute this command to get the desired output? I tried this as well:

    for i in lu1;do 'sed \'/^$/d\' /home/nextag/instance.properties|grep -vc \'#\'';echo $i;done

    Also, can someone explain the part '/^$/d' in sed '/^$/d' /home/nextag/instance.properties | grep -vc '#'?

    Read the article

  • Steps for Applying/Rolling Back a Patch in Rolling Fashion in a RAC Environment

    - by JaneZhang
    In a RAC environment, many patches can be applied in a rolling fashion: the nodes are patched one at a time, so while one node is down for patching the remaining nodes keep servicing the database, and the database as a whole stays available with no downtime. Note that not every patch supports rolling installation, so before you start, check whether the patch is a rolling patch; if it is, you can proceed as below.

    In general, a rolling patch installation follows this outline:
    1. Verify that the patch supports rolling installation.
    2. Shut down the instances on the first node and apply the patch there.
    3. Restart the first node, then shut down the instances on the next node.
    4. Continue the opatch session so the remaining node(s) get patched, restarting each node when done.
    5. Verify the inventory on all nodes, and always follow the specific steps in the patch readme.

    The readme for this kind of patch typically describes the procedure like this:
    1) As the oracle user, copy the patch to the target directory on the node.
    2) Unzip the patch.
    3) On node 1, shut down the database instance (and the +ASM instance, if any) running out of this ORACLE_HOME.
    4) Apply the patch on node 1:
       $cd $ORACLE_HOME/OPatch/10082277
       $opatch apply
    5) When opatch prompts that the local system is ready, confirm and wait for the local node to be patched.
    6) On node 1, restart the database instance (and the +ASM instance, if any) running out of this ORACLE_HOME.
    7) On node 2, shut down the database instance (and the +ASM instance, if any) running out of this ORACLE_HOME.
    8) Return to the opatch session and confirm so the remote node gets patched.
    9) On node 2, restart the database instance (and the +ASM instance, if any).
    10) Verify the patch on both nodes; done.

    The following demonstrates applying patch 8575528 to a 10.2.0.4 RAC in rolling fashion:

    1) As the oracle user, place the patch in the target directory, e.g. $ORACLE_HOME/OPatch:
    $ pwd
    /u01/app/oracle/OPatch
    $ ls
    docs  emdpatch.pl  jlib  opatch  opatch.ini  opatch.pl  opatchprereqs  p8575528_10204_Linux-x86.zip

    2) Unzip the patch:
    su - oracle
    $ unzip p8575528_10204_Linux-x86.zip
    Archive:  p8575528_10204_Linux-x86.zip
      creating: 8575528/
      creating: 8575528/files/
      creating: 8575528/files/lib/
      creating: 8575528/files/lib/libserver10.a/
     inflating: 8575528/files/lib/libserver10.a/kks1.o
     inflating: 8575528/files/lib/libserver10.a/kksc.o
     inflating: 8575528/files/lib/libserver10.a/kksh.o
     inflating: 8575528/files/lib/libserver10.a/ksmp.o
      creating: 8575528/etc/
      creating: 8575528/etc/config/
     inflating: 8575528/etc/config/inventory
     inflating: 8575528/etc/config/actions
      creating: 8575528/etc/xml/
     inflating: 8575528/etc/xml/GenericActions.xml
     inflating: 8575528/etc/xml/ShiphomeDirectoryStructure.xml
     inflating: 8575528/README.txt

    $ ls
    8575528  docs  emdpatch.pl  jlib  opatch  opatch.ini  opatch.pl  opatchprereqs  p8575528_10204_Linux-x86.zip

    3) Check whether the patch can be applied to RAC in rolling fashion:
    $ $ORACLE_HOME/OPatch/opatch query -all /u01/app/oracle/OPatch/8575528 | grep rolling
    Patch is a rolling patch: true  <===== rolling is supported

    4) On node 1, shut down the database instance running out of this ORACLE_HOME (and the ASM instance, if any):
    $srvctl stop instance -d <dbname> -i <instance_name>
    $srvctl stop asm -n <nodename>
    For example:
    $srvctl stop instance -d ONEPIECE -i ONEPIECE1
    $srvctl stop asm -n nascds14
    $ crs_stat -t
    Name           Type           Target    State     Host
    ------------------------------------------------------------
    ora....E1.inst application    OFFLINE   OFFLINE
    ora....SM1.asm application    OFFLINE   OFFLINE

    5) Apply the patch on node 1:
    $su - oracle
    $cd /u01/app/oracle/OPatch/8575528
    $opatch apply
    Invoking OPatch 10.2.0.4.2
    Oracle Interim Patch Installer version 10.2.0.4.2
    Copyright (c) 2007, Oracle Corporation.  All rights reserved.
    Oracle Home       : /u01/app/oracle
    Central Inventory : /home/oracle/oraInventory
      from           : /etc/oraInst.loc
    OPatch version    : 10.2.0.4.2
    OUI version       : 10.2.0.4.0
    OUI location      : /u01/app/oracle/oui
    Log file location : /u01/app/oracle/cfgtoollogs/opatch/opatch2012-06-13_01-27-38AM.log
    ApplySession applying interim patch '8575528' to OH '/u01/app/oracle'
    Running prerequisite checks...
    OPatch detected the node list and the local node from the inventory.  OPatch will patch the local system then propagate the patch to the remote nodes.
    This node is part of an Oracle Real Application Cluster.
    Remote nodes: 'nascds15'
    Local node: 'nascds14'
    Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
    (Oracle Home = '/u01/app/oracle')
    Is the local system ready for patching? [y|n]
    y  <====== answer y
    User Responded with: Y
    Backing up files and inventory (not for auto-rollback) for the Oracle Home
    Backing up files affected by the patch '8575528' for restore. This might take a while...
    Backing up files affected by the patch '8575528' for rollback. This might take a while...
    Patching component oracle.rdbms, 10.2.0.4.0...
    Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kks1.o"
    Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kksc.o"
    Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kksh.o"
    Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/ksmp.o"
    Running make for target ioracle
    ApplySession adding interim patch '8575528' to inventory
    Verifying the update...
    Inventory check OK: Patch ID 8575528 is registered in Oracle Home inventory with proper meta-data.
    Files check OK: Files from Patch ID 8575528 are present in Oracle Home.
    The local system has been patched.  You can restart Oracle instances on it.
    Patching in rolling mode.
    The node 'nascds15' will be patched next.
    Please shutdown Oracle instances running out of this ORACLE_HOME on 'nascds15'.
    (Oracle Home = '/u01/app/oracle')
    Is the node ready for patching? [y|n]

    6) Do not answer the opatch prompt yet; leave the session waiting at this point.

    7) On node 1, restart the ASM and database instances:
    $srvctl start asm -n <nodename>
    $srvctl start instance -d <dbname> -i <instance_name>
    For example:
    $srvctl start asm -n nascds14
    $srvctl start instance -d ONEPIECE -i ONEPIECE1
    $crs_stat -t
    ora....E1.inst application    ONLINE    ONLINE    nascds14
    ora....SM1.asm application    ONLINE    ONLINE    nascds14

    8) On node 2, shut down the ASM and database instances:
    $srvctl stop instance -d <dbname> -i <instance_name>
    $srvctl stop asm -n <nodename>
    $srvctl stop instance -d ONEPIECE -i ONEPIECE2
    $srvctl stop asm -n nascds15
    $crs_stat
    ora....E2.inst application    OFFLINE   OFFLINE
    ora....SM2.asm application    OFFLINE   OFFLINE

    9) Return to the waiting opatch session and answer the prompt so the remote node gets patched:
    Is the node ready for patching? [y|n]
    y  <==== answer y
    User Responded with: Y
    Updating nodes 'nascds15'
      Apply-related files are:
        FP = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt"
        DP = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_dirs.txt"
        MP = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt"
        RC = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remote_cmds.txt"
    Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt" with actual path.
    Propagating files to remote nodes...
    Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_dirs.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_dirs.txt" with actual path.
    Propagating directories to remote nodes...
    Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt" with actual path.
    Running command on remote node 'nascds15':
    cd /u01/app/oracle/rdbms/lib; /usr/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle || echo REMOTE_MAKE_FAILED::>&2
    The node 'nascds15' has been patched.  You can restart Oracle instances on it.
    There were relinks on remote nodes.  Remember to check the binary size and timestamp on the nodes 'nascds15'.
    The following make commands were invoked on remote nodes:
    'cd /u01/app/oracle/rdbms/lib; /usr/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle'
    OPatch succeeded.

    10) On node 2, restart the ASM and database instances:
    $srvctl start asm -n <nodename>
    $srvctl start instance -d <dbname> -i <instance_name>
    For example:
    $srvctl start asm -n nascds15
    $srvctl start instance -d ONEPIECE -i ONEPIECE2

    11) On each node, verify that the patch is registered in the inventory with $ORACLE_HOME/OPatch/opatch lsinventory:
    [oracle@nascds14 8575528]$ $ORACLE_HOME/OPatch/opatch lsinventory
    Invoking OPatch 10.2.0.4.2
    Oracle Interim Patch Installer version 10.2.0.4.2
    Copyright (c) 2007, Oracle Corporation.  All rights reserved.
    Oracle Home       : /u01/app/oracle
    Central Inventory : /home/oracle/oraInventory
      from           : /etc/oraInst.loc
    OPatch version    : 10.2.0.4.2
    OUI version       : 10.2.0.4.0
    OUI location      : /u01/app/oracle/oui
    Log file location : /u01/app/oracle/cfgtoollogs/opatch/opatch2012-06-13_01-44-11AM.log
    Lsinventory Output file location : /u01/app/oracle/cfgtoollogs/opatch/lsinv/lsinventory2012-06-13_01-44-11AM.txt
    --------------------------------------------------------------------------------
    Installed Top-level Products (2):
    Oracle Database 10g                                                  10.2.0.1.0
    Oracle Database 10g Release 2 Patch Set 3                            10.2.0.4.0
    There are 2 products installed in this Oracle Home.
    Interim patches (1) :
    Patch  8575528      : applied on Wed Jun 13 01:28:24 CST 2012  <<<<<<<<<<<<<<<<<<<
      Created on 17 Aug 2010, 07:56:36 hrs PST8PDT
      Bugs fixed:
        8575528
    Rac system comprising of multiple nodes
     Local node = nascds14
     Remote node = nascds15
    --------------------------------------------------------------------------------
    OPatch succeeded.

    The following demonstrates rolling back patch 8575528 from a 10.2.0.4 RAC in rolling fashion:

    1) On node 1, shut down the database instance running out of this ORACLE_HOME (and the ASM instance, if any):
    $srvctl stop instance -d <dbname> -i <instance_name>
    $srvctl stop asm -n <nodename>
    For example:
    $srvctl stop instance -d ONEPIECE -i ONEPIECE1
    $srvctl stop asm -n nascds14
    $crs_stat -t
    Name           Type           Target    State     Host
    ------------------------------------------------------------
    ora....E1.inst application    OFFLINE   OFFLINE
    ora....SM1.asm application    OFFLINE   OFFLINE

    2) Roll the patch back on node 1:
    $su - oracle
    $cd $ORACLE_HOME/OPatch/8575528
    $opatch rollback -id 8575528
    Invoking OPatch 10.2.0.4.2
    Oracle Interim Patch Installer version 10.2.0.4.2
    Copyright (c) 2007, Oracle Corporation.  All rights reserved.
    Oracle Home       : /u01/app/oracle
    Central Inventory : /home/oracle/oraInventory
      from           : /etc/oraInst.loc
    OPatch version    : 10.2.0.4.2
    OUI version       : 10.2.0.4.0
    OUI location      : /u01/app/oracle/oui
    Log file location : /u01/app/oracle/cfgtoollogs/opatch/opatch2012-06-13_18-22-10PM.log
    RollbackSession rolling back interim patch '8575528' from OH '/u01/app/oracle'
    Running prerequisite checks...
    OPatch detected the node list and the local node from the inventory.  OPatch will patch the local system then propagate the patch to the remote nodes.
    This node is part of an Oracle Real Application Cluster.
    Remote nodes: 'nascds15'
    Local node: 'nascds14'
    Please shut down Oracle instances running out of this ORACLE_HOME on all the nodes.
    (Oracle Home = '/u01/app/oracle')
    Are all the nodes ready for patching? [y|n]
    y  <========= answer y
    User Responded with: Y
    Backing up files affected by the patch '8575528' for restore. This might take a while...
    Patching component oracle.rdbms, 10.2.0.4.0...
    Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kks1.o"
    Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kksc.o"
    Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kksh.o"
    Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/ksmp.o"
    Running make for target ioracle
    RollbackSession removing interim patch '8575528' from inventory
    Patching in rolling mode.
    The node 'nascds15' will be patched next.
    Please shutdown Oracle instances running out of this ORACLE_HOME on 'nascds15'.
    (Oracle Home = '/u01/app/oracle')
    Is the node ready for patching? [y|n]

    3) Do not answer the opatch prompt yet; leave the session waiting at this point.

    4) On node 1, restart the ASM and database instances:
    $srvctl start asm -n <nodename>
    $srvctl start instance -d <dbname> -i <instance_name>
    For example:
    $srvctl start asm -n nascds14
    $srvctl start instance -d ONEPIECE -i ONEPIECE1
    $crs_stat -t
    ora....E1.inst application    ONLINE    ONLINE    nascds14
    ora....SM1.asm application    ONLINE    ONLINE    nascds14

    5) On node 2, shut down the ASM and database instances:
    $srvctl stop instance -d <dbname> -i <instance_name>
    $srvctl stop asm -n <nodename>
    $srvctl stop instance -d ONEPIECE -i ONEPIECE2
    $srvctl stop asm -n nascds15
    $crs_stat -t
    ora....E2.inst application    OFFLINE   OFFLINE
    ora....SM2.asm application    OFFLINE   OFFLINE

    6) Return to the waiting opatch session and answer the prompt so the rollback runs on the remote node:
    The node 'nascds15' will be patched next.
    Please shutdown Oracle instances running out of this ORACLE_HOME on 'nascds15'.
    (Oracle Home = '/u01/app/oracle')
    Is the node ready for patching? [y|n]
    y  <========= answer y
    User Responded with: Y
    Updating nodes 'nascds15'
      Rollback-related files are:
        FR = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remove_files.txt"
        DR = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remove_dirs.txt"
        FP = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt"
        MP = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt"
        RC = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remote_cmds.txt"
    Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remove_dirs.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remove_dirs.txt" with actual path.
    Removing directories on remote nodes...
    Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt" with actual path.
    Propagating files to remote nodes...
    Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_dirs.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_dirs.txt" with actual path.
    Propagating directories to remote nodes...
    Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt" with actual path.
    Running command on remote node 'nascds15':
    cd /u01/app/oracle/rdbms/lib; /usr/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle || echo REMOTE_MAKE_FAILED::>&2
    The node 'nascds15' has been patched.  You can restart Oracle instances on it.
    There were relinks on remote nodes.  Remember to check the binary size and timestamp on the nodes 'nascds15'.
    The following make commands were invoked on remote nodes:
    'cd /u01/app/oracle/rdbms/lib; /usr/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle'
    OPatch succeeded.

    7) On node 2, restart the ASM and database instances:
    $srvctl start asm -n <nodename>
    $srvctl start instance -d <dbname> -i <instance_name>
    For example:
    $srvctl start asm -n nascds15
    $srvctl start instance -d ONEPIECE -i ONEPIECE2

    8) On each node, verify that the patch has been removed from the inventory:
    $ $ORACLE_HOME/OPatch/opatch lsinventory
    Invoking OPatch 10.2.0.4.2
    Oracle Interim Patch Installer version 10.2.0.4.2
    Copyright (c) 2007, Oracle Corporation.  All rights reserved.
    Oracle Home       : /u01/app/oracle
    Central Inventory : /home/oracle/oraInventory
      from           : /etc/oraInst.loc
    OPatch version    : 10.2.0.4.2
    OUI version       : 10.2.0.4.0
    OUI location      : /u01/app/oracle/oui
    Log file location : /u01/app/oracle/cfgtoollogs/opatch/opatch2012-06-13_19-40-41PM.log
    Lsinventory Output file location : /u01/app/oracle/cfgtoollogs/opatch/lsinv/lsinventory2012-06-13_19-40-41PM.txt
    --------------------------------------------------------------------------------
    Installed Top-level Products (2):
    Oracle Database 10g                                                  10.2.0.1.0
    Oracle Database 10g Release 2 Patch Set 3                            10.2.0.4.0
    There are 2 products installed in this Oracle Home.
    There are no Interim patches installed in this Oracle Home.
    Rac system comprising of multiple nodes
     Local node = nascds14
     Remote node = nascds15
    --------------------------------------------------------------------------------
    OPatch succeeded.

    Read the article

  • extensible database design: automatic ALTER TABLE or serialize() field BLOB?

    - by mario
    I want an adaptable database schema, but still want to use a simple table data gateway in my application, where I just pass a $data[] array for storing. The basic columns are settled in the initial table schema. There will, however, arise a couple of meta fields later (ca. 10-20). I want some flexibility there and don't want to adapt the database manually each time, or, worse, change the application logic just because of new fields. So now there are two options which seem workable yet not overkill. But I'm not sure about the scalability or database drawbacks.

    (1) Automatic ALTER TABLE. Whenever the $data array is to be saved, the keys are compared against the current database columns. New columns get defined before the $data is INSERTed into the table. Actually seems simple enough in test code:

    function save($data, $table="forum") {
        // columns
        if ($new_fields = array_diff(array_keys($data), known_fields($table))) {
            extend_schema($table, $new_fields, $data);
        }
        // save
        $columns = implode("`, `", array_keys($data));
        $qm = str_repeat(",?", count(array_keys($data)) - 1);
        echo ("INSERT INTO `$table` (`$columns`) VALUES (?$qm);");
    }

    function known_fields($table) {
        return unserialize(@file_get_contents("db:$table")) ?: array("id");
    }

    function extend_schema($table, $new_fields, $data) {
        foreach ($new_fields as $field) {
            echo("ALTER TABLE `$table` ADD COLUMN `$field` VARCHAR;");
        }
    }

    Since it is mostly meta information fields, adding them just as VARCHAR seems sufficient. Nobody will query by them anyway. So the database is really just used as storage here. However, while I might want to add a lot of new $data fields on the go, they will not always be populated.

    (2) serialize() fields into a BLOB. Any new/extraneous meta fields could be opaque to the database. Sorting out the virtual fields from the real database columns is simple, and the meta fields can just be serialize()d into a blob/text field:

    function ext_save($data, $table="forum") {
        $db_fields = array("id", "content", "flags", "ext");
        // disjoin
        foreach (array_diff(array_keys($data), $db_fields) as $key) {
            $data["ext"][$key] = $data[$key];
            unset($data[$key]);
        }
        $data["ext"] = serialize($data["ext"]);
    }

    Unserializing and unpacking this 'ext' column on read queries is a minor overhead. The advantage is that there won't be any sparsely filled columns in the database, so I guess it's more compact and faster than the auto ALTER TABLE approach. Of course, this method prevents ever using one of the new fields in a WHERE or GROUP BY clause. But I think none of the possible meta fields (user_agent, author_ip, author_img, votes, hits, last_modified, ...) would/should ever be used there anyway. So I currently prefer the 'ext' blob approach, even if it's a one-way ticket.
    - How are such columns usually called? (looking for examples/docs)
    - Would you use XML serialization for (very theoretical) in-database queries?
    - The adapting table schema seems a "cleaner" interface, even if most columns might remain empty then. How does that impact speed? How many such sparse VARCHAR fields can MySQL/InnoDB stomach?
    - But most importantly: is there any standard implementation for this? A pseudo ORM with automatic ALTER TABLE tricks? Storing a simple column list seems workable, but something like PDO::getColumnMeta would be more robust.

    Read the article

  • Array help: IndexOutOfRangeException was unhandled

    - by Michael Quiles
    I am trying to populate combo boxes from a text file, using a comma as the delimiter. Everything was working fine, but now when I debug I get an "IndexOutOfRangeException was unhandled" error. I guess I need a fresh pair of eyes to see where I went wrong. I commented on the line that gets the error: // Fname = fields[1];

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Drawing.Printing;
    using System.Linq;
    using System.Text;
    using System.Windows.Forms;
    using System.IO;

    namespace Sullivan_Payroll
    {
        public partial class xEmpForm : Form
        {
            bool complete = false;

            public xEmpForm()
            {
                InitializeComponent();
            }

            private void xEmpForm_Resize(object sender, EventArgs e)
            {
                this.xCenterPanel.Left = Convert.ToInt16((this.Width - this.xCenterPanel.Width) / 2);
                this.xCenterPanel.Top = Convert.ToInt16((this.Height - this.xCenterPanel.Height) / 2);
                Refresh();
            }

            private void exitToolStripMenuItem_Click(object sender, EventArgs e)
            {
                // Exits the application
                this.Close();
            }

            private void xEmpForm_FormClosing(object sender, FormClosingEventArgs e) // use this on xtrip calculator
            {
                DialogResult Response;
                if (complete == true)
                {
                    Application.Exit();
                }
                else
                {
                    Response = MessageBox.Show("Are you sure you want to Exit?", "Exit",
                        MessageBoxButtons.YesNo, MessageBoxIcon.Question, MessageBoxDefaultButton.Button2);
                    if (Response == DialogResult.No)
                    {
                        complete = false;
                        e.Cancel = true;
                    }
                    else
                    {
                        complete = true;
                        Application.Exit();
                    }
                }
            }

            private void xEmpForm_Load(object sender, EventArgs e)
            {
                // file sources
                string fileDept = "source\\Department.txt";
                string fileSex = "source\\Sex.txt";
                string fileStatus = "source\\Status.txt";

                if (File.Exists(fileDept))
                {
                    using (System.IO.StreamReader sr = System.IO.File.OpenText(fileDept))
                    {
                        string dept = "";
                        while ((dept = sr.ReadLine()) != null)
                        {
                            this.xDeptComboBox.Items.Add(dept);
                        }
                    }
                }
                else
                {
                    MessageBox.Show("The Department file can not be found.", "Error",
                        MessageBoxButtons.OK, MessageBoxIcon.Error);
                }

                if (File.Exists(fileSex))
                {
                    using (System.IO.StreamReader sr = System.IO.File.OpenText(fileSex))
                    {
                        string sex = "";
                        while ((sex = sr.ReadLine()) != null)
                        {
                            this.xSexComboBox.Items.Add(sex);
                        }
                    }
                }
                else
                {
                    MessageBox.Show("The Sex file can not be found.", "Error",
                        MessageBoxButtons.OK, MessageBoxIcon.Error);
                }

                if (File.Exists(fileStatus))
                {
                    using (System.IO.StreamReader sr = System.IO.File.OpenText(fileStatus))
                    {
                        string status = "";
                        while ((status = sr.ReadLine()) != null)
                        {
                            this.xStatusComboBox.Items.Add(status);
                        }
                    }
                }
                else
                {
                    MessageBox.Show("The Status file can not be found.", "Error",
                        MessageBoxButtons.OK, MessageBoxIcon.Error);
                }
            }

            private void xFileSaveMenuItem_Click(object sender, EventArgs e)
            {
                const string fileNew = "source\\New Staff.txt";
                string recordIn;
                FileStream outFile = new FileStream(fileNew, FileMode.Create, FileAccess.Write);
                StreamWriter writer = new StreamWriter(outFile);
                for (int count = 0; count <= this.xEmployeeListBox.Items.Count - 1; count++)
                {
                    this.xEmployeeListBox.SelectedIndex = count;
                    recordIn = this.xEmployeeListBox.SelectedItem.ToString();
                    writer.WriteLine(recordIn);
                }
                writer.Close();
                outFile.Close();
                this.xDeptComboBox.SelectedIndex = -1;
                this.xStatusComboBox.SelectedIndex = -1;
                this.xSexComboBox.SelectedIndex = -1;
                MessageBox.Show("your file is saved");
            }

            private void xViewFacultyMenuItem_Click(object sender, EventArgs e)
            {
                const string fileStaff = "source\\Staff.txt";
                const char DELIM = ',';
                string Lname, Fname, Depart, Stat, Sex, Salary, cDept, cStat, cSex;
                double Gtotal;
                string recordIn;
                string[] fields;

                cDept = this.xDeptComboBox.SelectedItem.ToString();
                cStat = this.xStatusComboBox.SelectedItem.ToString();
                cSex = this.xSexComboBox.SelectedItem.ToString();

                FileStream inFile = new FileStream(fileStaff, FileMode.Open, FileAccess.Read);
                StreamReader reader = new StreamReader(inFile);
                recordIn = reader.ReadLine();
                while (recordIn != null)
                {
                    fields = recordIn.Split(DELIM);
                    Lname = fields[0];
                    Fname = fields[1]; // this is where the error appears
                    Depart = fields[2];
                    Stat = fields[3];
                    Sex = fields[4];
                    Salary = fields[5];
                    Fname = fields[1].TrimStart(null);
                    Depart = fields[2].TrimStart(null);
                    Stat = fields[3].TrimStart(null);
                    Sex = fields[4].TrimStart(null);
                    Salary = fields[5].TrimStart(null);
                    Gtotal = double.Parse(Salary);
                    if (Depart == cDept && cStat == Stat && cSex == Sex)
                    {
                        this.xEmployeeListBox.Items.Add(recordIn);
                    }
                    recordIn = reader.ReadLine();
                }
                reader.Close();
                inFile.Close();

                if (this.xEmployeeListBox.Items.Count >= 1)
                {
                    this.xFileSaveMenuItem.Enabled = true;
                    this.xFilePrintMenuItem.Enabled = true;
                    this.xEditClearMenuItem.Enabled = true;
                }
                else
                {
                    this.xFileSaveMenuItem.Enabled = false;
                    this.xFilePrintMenuItem.Enabled = false;
                    this.xEditClearMenuItem.Enabled = false;
                    MessageBox.Show("Records not found");
                }
            }

            private void xEditClearMenuItem_Click(object sender, EventArgs e)
            {
                this.xEmployeeListBox.Items.Clear();
                this.xDeptComboBox.SelectedIndex = -1;
                this.xStatusComboBox.SelectedIndex = -1;
                this.xSexComboBox.SelectedIndex = -1;
                this.xFileSaveMenuItem.Enabled = false;
                this.xFilePrintMenuItem.Enabled = false;
                this.xEditClearMenuItem.Enabled = false;
            }
        }
    }

    Source file --
    Anderson, Kristen, Accounting, Assistant, Female, 43155
    Ball, Robin, Accounting, Instructor, Female, 42723
    Chin, Roger, Accounting, Full, Male,59281
    Coats, William, Accounting, Assistant, Male, 45371
    Doepke, Cheryl, Accounting, Full, Female, 52105
    Downs, Clifton, Accounting, Associate, Male, 46887
    Garafano, Karen, Finance, Associate, Female, 49000
    Hill, Trevor, Management, Instructor, Male, 38590
    Jackson, Carole, Accounting, Instructor, Female, 38781
    Jacobson, Andrew, Management, Full, Male, 56281
    Lewis, Karl, Management, Associate, Male, 48387
    Mack, Kevin, Management, Assistant, Male, 45000
    McKaye, Susan, Management, Instructor, Female, 43979
    Nelsen, Beth, Finance, Full, Female, 52339
    Nelson, Dale, Accounting, Full, Male, 54578
    Palermo, Sheryl, Accounting, Associate, Female, 45617
    Rais, Mary, Finance, Instructor, Female, 27000
    Scheib, Earl, Management, Instructor, Male, 37389
    Smith, Tom, Finance, Full, Male, 57167
    Smythe, Janice, Management, Associate, Female, 46887
    True, David, Accounting, Full, Male, 53181
    Young, Jeff, Management, Assistant, Male, 43513
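    For what it's worth, one plausible cause given the loop above: string.Split never returns fewer than one element, so a blank or truncated line in Staff.txt (a trailing newline at the end of the file is enough) yields a single-element array, and fields[1] then throws. A small standalone sketch (illustrative, not the poster's code) demonstrates the effect and a guard:

    using System;

    class SplitCheck
    {
        static void Main()
        {
            const char DELIM = ',';
            // the second entry simulates a blank trailing line in the data file
            string[] lines = { "Anderson, Kristen, Accounting, Assistant, Female, 43155", "" };
            foreach (string recordIn in lines)
            {
                string[] fields = recordIn.Split(DELIM);
                if (fields.Length < 6)
                {
                    // a blank line splits into a single empty field, so skip it
                    Console.WriteLine("Skipping malformed line: \"" + recordIn + "\"");
                    continue;
                }
                Console.WriteLine(fields[1].TrimStart(null)); // Kristen
            }
        }
    }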

    Read the article

  • Custom ASP.NET Routing to an HttpHandler

    - by Rick Strahl
    As of version 4.0, ASP.NET natively supports routing via the now built-in System.Web.Routing namespace. Routing features are automatically integrated into the HttpRuntime via a few custom interfaces.

    New Web Forms Routing Support

    In ASP.NET 4.0 there are a host of improvements, including routing support baked into Web Forms via a RouteData property available on the Page class and a RouteCollection.MapPageRoute() route handler that makes it easy to route to Web Forms. Mapping ASP.NET Page routes is as simple as setting up the routes with MapPageRoute:

    protected void Application_Start(object sender, EventArgs e)
    {
        RegisterRoutes(RouteTable.Routes);
    }

    void RegisterRoutes(RouteCollection routes)
    {
        routes.MapPageRoute("StockQuote", "StockQuote/{symbol}", "StockQuote.aspx");
        routes.MapPageRoute("StockQuotes", "StockQuotes/{symbolList}", "StockQuotes.aspx");
    }

    To access the route data in the page, you can then use the new Page class RouteData property to retrieve the dynamic route data information:

    public partial class StockQuote1 : System.Web.UI.Page
    {
        protected StockQuote Quote = null;

        protected void Page_Load(object sender, EventArgs e)
        {
            string symbol = RouteData.Values["symbol"] as string;
            StockServer server = new StockServer();
            Quote = server.GetStockQuote(symbol);
            // display stock data in Page View
        }
    }

    Simple, quick, and it doesn't require much explanation. If you're using Web Forms, most of your routing needs should be served just fine by this simple mechanism. Kudos to the ASP.NET team for putting this in the box and making it easy!

    How Routing Works

    Handling routing in ASP.NET involves these steps:

    - Registering routes
    - Creating a custom RouteHandler to retrieve an HttpHandler
    - Attaching RouteData to your HttpHandler
    - Picking up route information in your request code

    Registering routes makes ASP.NET aware of the routes you want to handle via the static RouteTable.Routes collection. You basically add routes to this collection to let ASP.NET know which URL patterns it should watch for. You typically hook up routes in a RegisterRoutes method that fires from Application_Start, as I did in the example above, to ensure routes are added only once when the application first starts up. When you create a route, you pass in a RouteHandler instance, which ASP.NET caches and reuses as routes are matched. Once registered, ASP.NET monitors the routes, and if a match is found just prior to HttpHandler instantiation, ASP.NET uses the RouteHandler registered for the route and calls GetHandler() on it to retrieve an HttpHandler instance. The RouteHandler.GetHandler() method is responsible for creating an instance of an HttpHandler that is to handle the request and, if necessary, for assigning any additional custom data to the handler. At minimum you probably want to pass the RouteData to the handler so the handler can identify the request based on the route data available. To do this you typically add a RouteData property to your handler and then assign the property from the RouteHandler's request context. This is essentially how Page.RouteData comes into being, and this approach should work well for any custom handler implementation that requires RouteData. It's a shame that ASP.NET doesn't have a top-level intrinsic object accessible off the HttpContext object to provide route data more generically, but since RouteData is directly tied to HttpHandlers, and not all handlers support it, that might cause some confusion about when it's actually available.
    Bottom line is that if you want to hold on to RouteData you have to assign it to a custom property of the handler, or else pass it to the handler via the Context.Items[] collection, where it can be retrieved on an as-needed basis. It's important to understand that routing is hooked up via RouteHandlers, which are responsible for loading HttpHandler instances. RouteHandlers are invoked for every request that matches a route, and through this RouteHandler instance the handler gains access to the current RouteData. Because of this logic it's important to understand that routing is really tied to HttpHandlers and not available prior to handler instantiation, which is pretty late in the HttpRuntime's request pipeline. IOW, routing works with handlers but not earlier in the pipeline within modules. Specifically, ASP.NET calls RouteHandler.GetHandler() from the PostResolveRequestCache HttpRuntime pipeline event, which fires just before handler resolution.

    Non-Page Routing: You Need to Build Custom RouteHandlers

    If you need to route to a custom HttpHandler or other non-Page (and non-MVC) endpoint in the HttpRuntime, there is no generic mapping support available. You need to create a custom RouteHandler that can manage creating an instance of an HttpHandler that is fired in response to a routed request. Depending on what you are doing, this process can be simple or fairly involved, as your code is responsible for deciding, based on the route data provided, which handler to instantiate, and more importantly how to pass the route data on to the handler. Luckily, creating a RouteHandler is easy by implementing the IRouteHandler interface, which has only a single GetHttpHandler(RequestContext context) method. In this method you can pick up the requestContext.RouteData, instantiate the HttpHandler of choice, and assign the RouteData to it. Then pass back the handler and you're done. Here's a simple example of a GetHttpHandler() method that dynamically creates a handler based on a passed-in handler type:

    /// <summary>
    /// Retrieves an Http Handler based on the type specified in the constructor
    /// </summary>
    /// <param name="requestContext"></param>
    /// <returns></returns>
    IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
    {
        IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

        // If we're dealing with a Callback Handler
        // pass the RouteData for this route to the Handler
        if (handler is CallbackHandler)
            ((CallbackHandler)handler).RouteData = requestContext.RouteData;

        return handler;
    }

    Note that this code checks for a specific type of handler and, if it matches, assigns the RouteData to this handler. This is optional, but it's quite a common scenario if you want to work with RouteData.
    If the handler you need to instantiate isn't under your control but you still need to pass RouteData to handler code, an alternative is to pass the RouteData via the HttpContext.Items collection:

    IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
    {
        IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;
        requestContext.HttpContext.Items["RouteData"] = requestContext.RouteData;
        return handler;
    }

    The code in the handler implementation can then pick up the RouteData from the context collection as needed:

    RouteData routeData = HttpContext.Current.Items["RouteData"] as RouteData;

    This isn't as clean as having an explicit RouteData property, but it does have the advantage that the route data is visible anywhere in the handler's code chain. It's definitely preferable to create a custom property on your handler, but the Context workaround works in a pinch when you don't own the handler code and have dynamic code executing as part of the handler execution.

    An Example of a Custom RouteHandler: Attribute-Based Route Implementation

    In this post I'm going to discuss a custom routing implementation I built for my CallbackHandler class in the West Wind Web & Ajax Toolkit. CallbackHandler can very easily be used for creating AJAX, REST and POX requests following RPC-style method mapping. You can pass parameters via URL query string, POST data or raw data structures, and you can retrieve results as JSON, XML or raw string/binary data. It's a quick and easy way to build service interfaces with no fuss. As a quick review, here's how CallbackHandler works: you create an Http Handler that derives from CallbackHandler, you implement methods that have a [CallbackMethod] attribute, and that's it. Here's an example of a CallbackHandler implementation in an ashx.cs based handler:

    // RestService.ashx.cs
    public class RestService : CallbackHandler
    {
        [CallbackMethod]
        public StockQuote GetStockQuote(string symbol)
        {
            StockServer server = new StockServer();
            return server.GetStockQuote(symbol);
        }

        [CallbackMethod]
        public StockQuote[] GetStockQuotes(string symbolList)
        {
            StockServer server = new StockServer();
            string[] symbols = symbolList.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries);
            return server.GetStockQuotes(symbols);
        }
    }

    CallbackHandler makes it super easy to create a method on the server, pass data to it via POST, query string or raw JSON/XML data, and then retrieve the results easily back in various formats. This works wonderfully, and I've used these tools in many projects for myself and with clients. But one thing that has been missing is the ability to create clean URLs. Typical URLs looked like this:

    http://www.west-wind.com/WestwindWebToolkit/samples/Rest/StockService.ashx?Method=GetStockQuote&symbol=msft
    http://www.west-wind.com/WestwindWebToolkit/samples/Rest/StockService.ashx?Method=GetStockQuotes&symbolList=msft,intc,gld,slw,mwe&format=xml

    which works and is clear enough, but is also clearly very ugly. It would be much nicer if URLs could look like this:

    http://www.west-wind.com/WestwindWebtoolkit/Samples/StockQuote/msft
    http://www.west-wind.com/WestwindWebtoolkit/Samples/StockQuotes/msft,intc,gld,slw?format=xml

    (The virtual root in this sample is WestwindWebToolkit/Samples and StockQuote/{symbol} is the route.)
    (If you use FireFox, try the JSONView plug-in to make it easier to view JSON content.)

    So, taking a clue from the WCF REST tools that use RouteUrls, I set out to create a way to specify RouteUrls for each of the endpoints.
    The change made basically allows changing the above to:

    [CallbackMethod(RouteUrl="RestService/StockQuote/{symbol}")]
    public StockQuote GetStockQuote(string symbol)
    {
        StockServer server = new StockServer();
        return server.GetStockQuote(symbol);
    }

    [CallbackMethod(RouteUrl = "RestService/StockQuotes/{symbolList}")]
    public StockQuote[] GetStockQuotes(string symbolList)
    {
        StockServer server = new StockServer();
        string[] symbols = symbolList.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries);
        return server.GetStockQuotes(symbols);
    }

    where a RouteUrl is specified as part of the Callback attribute. And with the changes made with RouteUrls I can now get URLs like the second set shown earlier. So how does that work? Let's find out...

    How to Create Custom Routes

    As mentioned earlier, routing is made up of several steps:

    - Creating a custom RouteHandler to create HttpHandler instances
    - Mapping the actual routes to the RouteHandler
    - Retrieving the RouteData and actually doing something useful with it in the HttpHandler

    In the CallbackHandler routing example above this works out to something like this:

    - Create a custom RouteHandler that includes a property to track the method to call
    - Set up the routes using Reflection against the class, looking for any RouteUrls in the CallbackMethod attribute
    - Add a RouteData property to the CallbackHandler so we can access the RouteData in the code of the handler

    Creating a Custom Route Handler

    To make the above work, I created a custom RouteHandler class that includes the actual IRouteHandler implementation as well as a generic and static method to automatically register all routes marked with the [CallbackMethod(RouteUrl="...")] attribute. Here's the code:

    /// <summary>
    /// Route handler that can create instances of CallbackHandler derived
    /// callback classes. The route handler tracks the method name and
    /// creates an instance of the service in a predictable manner
    /// </summary>
    /// <typeparam name="TCallbackHandler">CallbackHandler type</typeparam>
    public class CallbackHandlerRouteHandler : IRouteHandler
    {
        /// <summary>
        /// Method name that is to be called on this route.
        /// Set by the automatically generated RegisterRoutes
        /// invocation.
        /// </summary>
        public string MethodName { get; set; }

        /// <summary>
        /// The type of the handler we're going to instantiate.
        /// Needed so we can semi-generically instantiate the
        /// handler and call the method on it.
        /// </summary>
        public Type CallbackHandlerType { get; set; }

        /// <summary>
        /// Constructor to pass in the two required components we
        /// need to create an instance of our handler.
        /// </summary>
        /// <param name="methodName"></param>
        /// <param name="callbackHandlerType"></param>
        public CallbackHandlerRouteHandler(string methodName, Type callbackHandlerType)
        {
            MethodName = methodName;
            CallbackHandlerType = callbackHandlerType;
        }

        /// <summary>
        /// Retrieves an Http Handler based on the type specified in the constructor
        /// </summary>
        /// <param name="requestContext"></param>
        /// <returns></returns>
        IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
        {
            IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

            // If we're dealing with a Callback Handler
            // pass the RouteData for this route to the Handler
            if (handler is CallbackHandler)
                ((CallbackHandler)handler).RouteData = requestContext.RouteData;

            return handler;
        }

        /// <summary>
        /// Generic method to register all routes from a CallbackHandler
        /// that have RouteUrls defined on the [CallbackMethod] attribute
        /// </summary>
        /// <typeparam name="TCallbackHandler">CallbackHandler Type</typeparam>
        /// <param name="routes"></param>
        public static void RegisterRoutes<TCallbackHandler>(RouteCollection routes)
        {
            // find all methods
            var methods = typeof(TCallbackHandler).GetMethods(BindingFlags.Instance | BindingFlags.Public);
            foreach (var method in methods)
            {
                var attrs = method.GetCustomAttributes(typeof(CallbackMethodAttribute), false);
                if (attrs.Length < 1)
                    continue;

                CallbackMethodAttribute attr = attrs[0] as CallbackMethodAttribute;
                if (string.IsNullOrEmpty(attr.RouteUrl))
                    continue;

                // Add the route
                routes.Add(method.Name,
                           new Route(attr.RouteUrl,
                                     new CallbackHandlerRouteHandler(method.Name, typeof(TCallbackHandler))));
            }
        }
    }

    The RouteHandler implements IRouteHandler, and its responsibility, via the GetHandler method, is to create an HttpHandler based on the route data. When ASP.NET calls GetHandler it passes a requestContext parameter which includes a requestContext.RouteData property. This parameter holds the current request's route data as well as an instance of the current RouteHandler. If you look at GetHttpHandler() you can see that the code creates an instance of the handler we are interested in and then sets the RouteData property on the handler. This is how you can pass the current request's RouteData to the handler. The RouteData object also has a RouteData.RouteHandler property that is available to the handler later, which is useful in order to get additional information about the current route. In our case the RouteHandler includes a MethodName property that identifies the method to execute in the handler, since that value no longer comes from the URL and we need to figure out the method name some other way. The method name is mapped explicitly when the RouteHandler is created, and here the static method that auto-registers all CallbackMethods with RouteUrls sets the method name when it creates the routes while reflecting over the methods (more on this in a minute). The important point here is that you can attach additional properties to the RouteHandler, and you can later access the RouteHandler and its properties in the handler to pick up these custom values. This is a crucial feature in that the RouteHandler serves to pass additional context to the handler so it knows what actions to perform. The automatic route registration is handled by the static RegisterRoutes<TCallbackHandler> method. This method is generic and totally reusable for any CallbackHandler type handler.
To register a CallbackHandler and any RouteUrls it has defined you simply use code like this in Application_Start (or other application startup code):

protected void Application_Start(object sender, EventArgs e)
{
    // Register Routes for RestService
    CallbackHandlerRouteHandler.RegisterRoutes<RestService>(RouteTable.Routes);
}

If you have multiple CallbackHandler style services you can make multiple calls to RegisterRoutes for each of the service types. RegisterRoutes internally uses reflection to run through all the methods of the handler, looking for CallbackMethod attributes and whether a RouteUrl is specified. If it is, a new instance of a CallbackHandlerRouteHandler is created and the name of the method and the type are set:

routes.Add(method.Name,
           new Route(attr.RouteUrl,
                     new CallbackHandlerRouteHandler(method.Name, typeof(TCallbackHandler))));

While the routing with CallbackHandlerRouteHandler is set up automatically for all methods that use the RouteUrl attribute, you can also use code to hook up those routes manually and skip using the attribute. The code for this is straightforward and just requires that you manually map each individual route to each method you want routed:

protected void Application_Start(object sender, EventArgs e)
{
    RegisterRoutes(RouteTable.Routes);
}

void RegisterRoutes(RouteCollection routes)
{
    routes.Add("StockQuote Route",
               new Route("StockQuote/{symbol}",
                         new CallbackHandlerRouteHandler("GetStockQuote", typeof(RestService))));

    routes.Add("StockQuotes Route",
               new Route("StockQuotes/{symbolList}",
                         new CallbackHandlerRouteHandler("GetStockQuotes", typeof(RestService))));
}

I think it’s clearly easier to have CallbackHandlerRouteHandler.RegisterRoutes() do this automatically for you based on RouteUrl attributes, but some people have a real aversion to attaching logic via attributes. Just realize that the option to manually create your routes is available as well.

Using the RouteData in the Handler

A RouteHandler’s responsibility is to create an HttpHandler and, as mentioned earlier, natively IHttpHandler doesn’t have any support for RouteData. In order to utilize RouteData in your handler code you have to pass the RouteData to the handler. In my CallbackHandlerRouteHandler, when it creates the HttpHandler instance it creates the instance and then assigns the custom RouteData property on the handler:

IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

if (handler is CallbackHandler)
    ((CallbackHandler)handler).RouteData = requestContext.RouteData;

return handler;

Again, this only works if you actually add a RouteData property to your handler explicitly as I did in my CallbackHandler implementation:

/// <summary>
/// Optionally store RouteData on this handler
/// so we can access it internally
/// </summary>
public RouteData RouteData { get; set; }

and the RouteHandler needs to set it when it creates the handler instance. Once you have the route data in your handler you can access Route Keys and Values and also the RouteHandler.
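To make the consumption side concrete, here is a minimal sketch of a plain handler that reads the route data; this is not part of the toolkit - StockQuoteHandler and its output are made up for illustration, and it assumes the route handler (or the HttpContext.Items workaround shown at the top of this post) has handed it the RouteData:

using System.Web;
using System.Web.Routing;

public class StockQuoteHandler : IHttpHandler
{
    // Assigned by CallbackHandlerRouteHandler.GetHttpHandler() as shown above
    public RouteData RouteData { get; set; }

    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // Read the {symbol} URL segment the route captured
        string symbol = RouteData != null
            ? RouteData.Values["symbol"] as string
            : null;

        // The RouteHandler instance is also reachable for additional context
        var routeHandler = RouteData != null
            ? RouteData.RouteHandler as CallbackHandlerRouteHandler
            : null;
        string methodName = routeHandler != null ? routeHandler.MethodName : null;

        context.Response.ContentType = "text/plain";
        context.Response.Write("Method: " + methodName + ", Symbol: " + symbol);
    }
}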
Since my RouteHandler has a custom property for the MethodName, to retrieve it from within the handler I can do something like this now to retrieve the MethodName (this example is actually not in the handler; target is an instance passed to the processor):

// check for Route Data method name
if (target is CallbackHandler)
{
    var routeData = ((CallbackHandler)target).RouteData;
    if (routeData != null)
        methodToCall = ((CallbackHandlerRouteHandler)routeData.RouteHandler).MethodName;
}

When I need to access the dynamic values in the route (symbol in StockQuote/{symbol}) I can retrieve it easily with the Values collection (RouteData.Values["symbol"]). In my CallbackHandler processing logic I’m basically looking for matching parameter names to Route parameters:

// look for parameters in the route
if (routeData != null)
{
    string parmString = routeData.Values[parameter.Name] as string;
    adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
}

And with that we’ve come full circle. We’ve created a custom RouteHandler() that passes the RouteData to the handler it creates. We’ve registered our routes to use the RouteHandler, and we’ve utilized the route data in our handler. For completeness’ sake, here’s the routine that executes a method call based on the parameters passed in; one of the options is to retrieve the inbound parameters off RouteData (as well as from POST data or QueryString parameters):

internal object ExecuteMethod(string method, object target, string[] parameters,
                              CallbackMethodParameterType paramType,
                              ref CallbackMethodAttribute callbackMethodAttribute)
{
    HttpRequest Request = HttpContext.Current.Request;
    object Result = null;

    // Stores parsed parameters (from string JSON or QueryString Values)
    object[] adjustedParms = null;

    Type PageType = target.GetType();
    MethodInfo MI = PageType.GetMethod(method, BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
    if (MI == null)
        throw new InvalidOperationException("Invalid Server Method.");

    object[] methods = MI.GetCustomAttributes(typeof(CallbackMethodAttribute), false);
    if (methods.Length < 1)
        throw new InvalidOperationException("Server method is not accessible due to missing CallbackMethod attribute");

    if (callbackMethodAttribute != null)
        callbackMethodAttribute = methods[0] as CallbackMethodAttribute;

    ParameterInfo[] parms = MI.GetParameters();
    JSONSerializer serializer = new JSONSerializer();

    RouteData routeData = null;
    if (target is CallbackHandler)
        routeData = ((CallbackHandler)target).RouteData;

    int parmCounter = 0;
    adjustedParms = new object[parms.Length];
    foreach (ParameterInfo parameter in parms)
    {
        // Retrieve parameters out of QueryString or POST buffer
        if (parameters == null)
        {
            // look for parameters in the route
            if (routeData != null)
            {
                string parmString = routeData.Values[parameter.Name] as string;
                adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
            }
            // GET parameters are parsed as plain string values - no JSON encoding
            else if (HttpContext.Current.Request.HttpMethod == "GET")
            {
                // Look up the parameter by name
                string parmString = Request.QueryString[parameter.Name];
                adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
            }
            // POST parameters are treated as methodParameters that are JSON encoded
            else if (paramType == CallbackMethodParameterType.Json)
                //string newVariable = methodParameters.GetValue(parmCounter) as string;
                adjustedParms[parmCounter] = serializer.Deserialize(Request.Params["parm" + (parmCounter + 1).ToString()],
                                                                    parameter.ParameterType);
            else
                adjustedParms[parmCounter] = SerializationUtils.DeSerializeObject(Request.Params["parm" + (parmCounter + 1).ToString()],
                                                                                  parameter.ParameterType);
        }
        else if (paramType == CallbackMethodParameterType.Json)
            adjustedParms[parmCounter] = serializer.Deserialize(parameters[parmCounter], parameter.ParameterType);
        else
            adjustedParms[parmCounter] = SerializationUtils.DeSerializeObject(parameters[parmCounter], parameter.ParameterType);

        parmCounter++;
    }

    Result = MI.Invoke(target, adjustedParms);

    return Result;
}

The code basically uses Reflection to loop through all the parameters available on the method and tries to assign the parameters from RouteData, QueryString or POST variables. The parameters are converted into their appropriate types and then used to eventually make a Reflection based method call. What’s sweet is that the RouteData retrieval is just another option for dealing with the inbound data in this scenario, and it adds exactly two lines of code plus the code to retrieve the MethodName I showed previously: a seriously low impact addition that adds a lot of extra value to this endpoint callback processing implementation.

Debugging your Routes

If you create a lot of routes it’s easy to run into Route conflicts where multiple routes have the same path and overlap with each other. This can be difficult to debug, especially if you are using automatically generated routes like the routes created by CallbackHandlerRouteHandler.RegisterRoutes. Luckily there’s a tool that can help you out with this nicely. Phil Haack created a RouteDebugging tool you can download and add to your project. The easiest way to grab this and add it to your project is to use NuGet (Add Library Package from your project’s References node), which adds a RouteDebug assembly to your project. Once installed you can easily debug your routes with this simple line of code, which needs to be added at application startup:

protected void Application_Start(object sender, EventArgs e)
{
    CallbackHandlerRouteHandler.RegisterRoutes<StockService>(RouteTable.Routes);

    // Debug your routes
    RouteDebug.RouteDebugger.RewriteRoutesForTesting(RouteTable.Routes);
}

Any routed URL then displays something like this: the screen shows you your current route data and all the routes that are mapped, along with a flag that displays which route was actually matched. This is useful: if you have any overlap of routes you will be able to see which routes are triggered; the first one in the sequence wins. This tool has saved my ass on a few occasions, and with NuGet it’s easy to add it to your project in a few seconds and then remove it when you’re done.

Routing Around

Custom routing seems slightly complicated on first blush due to its disconnected components of RouteHandler, route registration and mapping of custom handlers. But once you understand the relationship between a RouteHandler, the RouteData and how to pass it to a handler, utilizing Routing becomes a lot easier, as you can easily pass context from the registration to the RouteHandler and through to the HttpHandler. The most important thing to understand when building custom routing solutions is to figure out how to map URLs in such a way that the handler can figure out all the pieces it needs to process the request.
This can be via URL routing parameters and, as I did in my example, by passing additional context information as part of the RouteHandler instance that provides the proper execution context. In my case this ‘context’ was the method name, but it could be an actual static value like an enum identifying an operation or category in an application. Basically, user-supplied data comes in through the URL, and static application-internal data can be passed via RouteHandler property values.

Routing can make your application URLs easier to read by non-techie types regardless of whether you’re building Service type or REST applications, or full-on Web interfaces. Routing in ASP.NET 4.0 makes it possible to create just about any extensionless URLs you can dream up with custom RouteHandlers.

References

Sample Project: Includes the sample CallbackHandler service discussed here along with compiled versions of the Westwind.Web and Westwind.Utilities assemblies. (requires .NET 4.0/VS 2010)
West Wind Web Toolkit: Includes the full implementation of CallbackHandler and the Routing Handler.
West Wind Web Toolkit Source Code: Contains the full source code to the Westwind.Web and Westwind.Utilities assemblies used in these samples. Includes the source described in the post. (Latest build in the Subversion Repository)
CallbackHandler Source: (Relevant code to this article tree in the Westwind.Web assembly)
JSONView FireFox Plugin: A simple FireFox plugin to easily view JSON data natively in FireFox. For IE you can use a registry hack to display JSON as raw text.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  AJAX  HTTP  

    Read the article

  • How to create Custom ListForm WebPart

    - by DipeshBhanani
Most everyone who works extensively on SharePoint (including me) dislikes using the out-of-box list forms (DispForm.aspx, EditForm.aspx, NewForm.aspx) as an interface. These OOB list forms tie developers' hands when it comes to customization: adding just one postback event for a dropdown field so it can populate other fields in NewForm.aspx or EditForm.aspx is a headache, and on top of that, clients ask for such things all the time. So here I am going to give you a tour of the SharePoint customization world. In this blog I will explain how to create a custom ListForm WebPart. In my next blogs I will explain easy deployment of list forms through Features and, last, give guidance on using SharePoint web controls.

1. First, create a class library project through Visual Studio and inherit the class from the WebPart class.

    public class CustomListForm : WebPart

2. Declare the variables and properties which we are going to use throughout the class. You will get to know these once you see them in use.

        #region "Variable Declaration"

        Table spTableCntl;
        FormToolBar formToolBar;
        Literal ltAlertMessage;
        Guid SiteId;
        Guid ListId;
        int ItemId;
        string ListName;

        #endregion

        #region "Properties"

        SPControlMode _ControlMode = SPControlMode.New;
        [Personalizable(PersonalizationScope.Shared),
         WebBrowsable(true),
         WebDisplayName("Control Mode"),
         WebDescription("Set Control Mode"),
         DefaultValue(""),
         Category("Miscellaneous")]
        public SPControlMode ControlMode
        {
            get { return _ControlMode; }
            set { _ControlMode = value; }
        }

        #endregion

The property ControlMode is used to identify the mode of the list form. The property is of type SPControlMode, which is an enum with the values Display, Edit, New and Invalid. When we add this WebPart to DispForm.aspx, EditForm.aspx and NewForm.aspx, we will set the WebPart property ControlMode to Display, Edit and New respectively.

3. Now we need to override the CreateChildControls method and write code to manually add the SharePoint web controls related to each list field as well as the ToolBar controls.

        protected override void CreateChildControls()
        {
            base.CreateChildControls();

            try
            {
                SiteId = SPContext.Current.Site.ID;
                ListId = SPContext.Current.ListId;
                ListName = SPContext.Current.List.Title;

                if (_ControlMode == SPControlMode.Display || _ControlMode == SPControlMode.Edit)
                    ItemId = SPContext.Current.ItemId;

                SPSecurity.RunWithElevatedPrivileges(delegate()
                {
                    using (SPSite site = new SPSite(SiteId))
                    {
                        // creating a new SPSite with credentials of System Account
                        using (SPWeb web = site.OpenWeb())
                        {
                            //<Custom Code for creating form controls>
                        }
                    }
                });
            }
            catch (Exception ex)
            {
                ShowError(ex, "CreateChildControls");
            }
        }

Here we are assuming that we are developing this WebPart to plug into list forms. Hence we will get the List Id and List Name from the current context.
We can have an Item Id only in Display and Edit mode. We put our code inside RunWithElevatedPrivileges to elevate privileges to the System Account. Now let's dig into the main code and expand "//<Custom Code for creating form controls>". Before initializing any SharePoint control, we need to set the context of the SharePoint web controls explicitly so that they are instantiated with the elevated System Account user. The following line does the job:

    // To create SharePoint controls with new web object and System Account credentials
    SPControl.SetContextWeb(Context, web);

First, let's add the main table as a container for all controls:

    // Table to render webpart
    Table spTableMain = new Table();
    spTableMain.CellPadding = 0;
    spTableMain.CellSpacing = 0;
    spTableMain.Width = new Unit(100, UnitType.Percentage);
    this.Controls.Add(spTableMain);

Now we need to add the top toolbar with Save and Cancel buttons:

    // Add Row and Cell for Top ToolBar
    TableRow spRowTopToolBar = new TableRow();
    spTableMain.Rows.Add(spRowTopToolBar);
    TableCell spCellTopToolBar = new TableCell();
    spRowTopToolBar.Cells.Add(spCellTopToolBar);
    spCellTopToolBar.Width = new Unit(100, UnitType.Percentage);

    ToolBar toolBarTop = (ToolBar)Page.LoadControl("/_controltemplates/ToolBar.ascx");
    toolBarTop.CssClass = "ms-formtoolbar";
    toolBarTop.ID = "toolBarTbltop";
    toolBarTop.RightButtons.SeparatorHtml = "<td class=ms-separator> </td>";

    if (_ControlMode != SPControlMode.Display)
    {
        SaveButton btnSave = new SaveButton();
        btnSave.ControlMode = _ControlMode;
        btnSave.ListId = ListId;

        if (_ControlMode == SPControlMode.New)
            btnSave.RenderContext = SPContext.GetContext(web);
        else
        {
            btnSave.RenderContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
            btnSave.ItemContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
            btnSave.ItemId = ItemId;
        }
        toolBarTop.RightButtons.Controls.Add(btnSave);
    }

    GoBackButton goBackButtonTop = new GoBackButton();
    toolBarTop.RightButtons.Controls.Add(goBackButtonTop);
    goBackButtonTop.ControlMode = SPControlMode.Display;

    spCellTopToolBar.Controls.Add(toolBarTop);

Here we have used SaveButton and GoBackButton, which are internal SharePoint web controls for the save and cancel functionality. I have set some of the properties of the Save button with an if-else condition because we will not have an Item Id in New mode. The Item Id property is used to identify which SharePoint list item needs to be saved. Now, add the form toolbar to the page, which contains the "Attach File", "Delete Item" etc. buttons:
    // Add Row and Cell for FormToolBar
    TableRow spRowFormToolBar = new TableRow();
    spTableMain.Rows.Add(spRowFormToolBar);
    TableCell spCellFormToolBar = new TableCell();
    spRowFormToolBar.Cells.Add(spCellFormToolBar);
    spCellFormToolBar.Width = new Unit(100, UnitType.Percentage);

    FormToolBar formToolBar = new FormToolBar();
    formToolBar.ID = "formToolBar";
    formToolBar.ListId = ListId;
    if (_ControlMode == SPControlMode.New)
        formToolBar.RenderContext = SPContext.GetContext(web);
    else
    {
        formToolBar.RenderContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
        formToolBar.ItemContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
        formToolBar.ItemId = ItemId;
    }
    formToolBar.ControlMode = _ControlMode;
    formToolBar.EnableViewState = true;

    spCellFormToolBar.Controls.Add(formToolBar);

The ControlMode property takes care of which buttons are displayed on the toolbar, e.g. "Attach File" and "Delete Item" in new/edit forms, and "New Item", "Edit Item", "Delete Item", "Manage Permissions" etc. in display forms. Now add the main section, which contains the form field controls:

    // Add a blank row with height of 5px to render space between ToolBar table and Control table
    TableRow spRowLine1 = new TableRow();
    spTableMain.Rows.Add(spRowLine1);
    TableCell spCellLine1 = new TableCell();
    spRowLine1.Cells.Add(spCellLine1);
    spCellLine1.Height = new Unit(5, UnitType.Pixel);
    spCellLine1.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

    // Add Row and Cell for Form Controls Section
    TableRow spRowCntl = new TableRow();
    spTableMain.Rows.Add(spRowCntl);
    TableCell spCellCntl = new TableCell();

    // Create Form Field controls and add them in the table "spTableCntl"
    CreateFieldControls(web);
    // Add the table variable "spTableCntl" containing all form controls to the page
    spRowCntl.Cells.Add(spCellCntl);
    spCellCntl.Width = new Unit(100, UnitType.Percentage);
    spCellCntl.Controls.Add(spTableCntl);

    TableRow spRowLine2 = new TableRow();
    spTableMain.Rows.Add(spRowLine2);
    TableCell spCellLine2 = new TableCell();
    spRowLine2.Cells.Add(spCellLine2);
    spCellLine2.CssClass = "ms-formline";
    spCellLine2.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

    // Add blank row with height of 5 pixels
    TableRow spRowLine3 = new TableRow();
    spTableMain.Rows.Add(spRowLine3);
    TableCell spCellLine3 = new TableCell();
    spRowLine3.Cells.Add(spCellLine3);
    spCellLine3.Height = new Unit(5, UnitType.Pixel);
    spCellLine3.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

You can also add a bottom toolbar to get the same look and feel as the OOB forms; I am not adding it here to keep this blog from getting too lengthy. At last, you need to write the following lines to allow unsafe updates for the Save and Delete buttons:
    // Allow unsafe update on web for save button and delete button
    if (this.Page.IsPostBack && this.Page.Request["__EventTarget"] != null
        && (this.Page.Request["__EventTarget"].Contains("IOSaveItem")
        || this.Page.Request["__EventTarget"].Contains("IODeleteItem")))
    {
        SPContext.Current.Web.AllowUnsafeUpdates = true;
    }

So that's all; we have finished writing the custom code for adding the field controls. But something most important was skipped: in the code above I called the function CreateFieldControls(web) to add the SharePoint field controls to the page. Let's see the implementation of that function:

    private void CreateFieldControls(SPWeb pWeb)
    {
        SPList listMain = pWeb.Lists[ListId];
        SPFieldCollection fields = listMain.Fields;

        // Main table to render all fields
        spTableCntl = new Table();
        spTableCntl.BorderWidth = new Unit(0);
        spTableCntl.CellPadding = 0;
        spTableCntl.CellSpacing = 0;
        spTableCntl.Width = new Unit(100, UnitType.Percentage);
        spTableCntl.CssClass = "ms-formtable";

        SPContext controlContext = SPContext.GetContext(this.Context, ItemId, ListId, pWeb);

        foreach (SPField listField in fields)
        {
            string fieldDisplayName = listField.Title;
            string fieldInternalName = listField.InternalName;

            // Skip if the field is a system field or hidden
            if (listField.Hidden || listField.ShowInVersionHistory == false)
                continue;

            // Skip read-only fields when not in display mode
            if (_ControlMode != SPControlMode.Display && listField.ReadOnlyField == true)
                continue;

            FieldLabel fieldLabel = new FieldLabel();
            fieldLabel.FieldName = listField.InternalName;
            fieldLabel.ListId = ListId;

            BaseFieldControl fieldControl = listField.FieldRenderingControl;
            fieldControl.ListId = ListId;
            // Assign unique id using Field Internal Name
            fieldControl.ID = string.Format("Field_{0}", fieldInternalName);
            fieldControl.EnableViewState = true;

            // Assign control mode
            fieldLabel.ControlMode = _ControlMode;
            fieldControl.ControlMode = _ControlMode;

            switch (_ControlMode)
            {
                case SPControlMode.New:
                    fieldLabel.RenderContext = SPContext.GetContext(pWeb);
                    fieldControl.RenderContext = SPContext.GetContext(pWeb);
                    break;
                case SPControlMode.Edit:
                case SPControlMode.Display:
                    fieldLabel.RenderContext = controlContext;
                    fieldLabel.ItemContext = controlContext;
                    fieldLabel.ItemId = ItemId;

                    fieldControl.RenderContext = controlContext;
                    fieldControl.ItemContext = controlContext;
                    fieldControl.ItemId = ItemId;
                    break;
            }

            // Add row to display a field row
            TableRow spCntlRow = new TableRow();
            spTableCntl.Rows.Add(spCntlRow);

            // Add the cells for containing field label and control
            TableCell spCellLabel = new TableCell();
            spCellLabel.Width = new Unit(30, UnitType.Percentage);
            spCellLabel.CssClass = "ms-formlabel";
            spCntlRow.Cells.Add(spCellLabel);
            TableCell spCellControl = new TableCell();
            spCellControl.Width = new Unit(70, UnitType.Percentage);
            spCellControl.CssClass = "ms-formbody";
            spCntlRow.Cells.Add(spCellControl);

            // Add the controls to the table cells
            spCellLabel.Controls.Add(fieldLabel);
            spCellControl.Controls.Add(fieldControl);

            // Add description if there is any, in case of New and Edit mode
            if (_ControlMode != SPControlMode.Display && listField.Description != string.Empty)
            {
                FieldDescription fieldDesc = new FieldDescription();
                fieldDesc.FieldName = fieldInternalName;
                fieldDesc.ListId = ListId;
                spCellControl.Controls.Add(fieldDesc);
            }

            // Disable Name (Title) in Edit mode
            if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Name")
            {
                TextBox txtTitlefield = (TextBox)fieldControl.Controls[0].FindControl("TextField");
                txtTitlefield.Enabled = false;
            }
        }
        fields = null;
    }

First of all, I declared a list object and put the list fields in a field collection object called fields. Then I added a table as the container for all controls and assigned the CSS class "ms-formtable" so that it gets the consistent look and feel of SharePoint. Now it's time to navigate through all the fields and add them as required. We don't need to add hidden or system fields here, and we also don't want to display read-only fields in new and edit forms. The following lines do this job:

            // Skip if the field is a system field or hidden
            if (listField.Hidden || listField.ShowInVersionHistory == false)
                continue;

            // Skip read-only fields when not in display mode
            if (_ControlMode != SPControlMode.Display && listField.ReadOnlyField == true)
                continue;

Let's move to the next lines of code:

            FieldLabel fieldLabel = new FieldLabel();
            fieldLabel.FieldName = listField.InternalName;
            fieldLabel.ListId = ListId;

            BaseFieldControl fieldControl = listField.FieldRenderingControl;
            fieldControl.ListId = ListId;
            // Assign unique id using Field Internal Name
            fieldControl.ID = string.Format("Field_{0}", fieldInternalName);
            fieldControl.EnableViewState = true;

            // Assign control mode
            fieldLabel.ControlMode = _ControlMode;
            fieldControl.ControlMode = _ControlMode;

We use the FieldLabel control to display the field title. The advantage of using FieldLabel is that SharePoint automatically adds a red star beside the field label to mark it as mandatory if the field requires a value. Here is the most important part to understand: the BaseFieldControl. It renders the appropriate web control according to the type of the field. For example, for a single line of text it renders a textbox; for a lookup it renders a dropdown. Additionally, the ControlMode property tells the control which mode (display/edit/new) to render in. In display mode it renders a label with the field value; in edit mode it renders the respective control populated with the item value; and in new mode it renders the respective control with an empty value. Please note that a dropdown is not always what gets rendered for a Lookup or Choice field; you need to understand which controls are rendered for which list fields.
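If you want to see for yourself which control a given field yields, a quick diagnostic sketch (my own addition, not part of the WebPart above) is to dump the concrete type that FieldRenderingControl returns for each field; a Text field typically yields a TextField, a DateTime field a DateTimeField, a Lookup field a LookupField, and so on:

    // Sketch: log the concrete control type each list field renders with
    foreach (SPField listField in listMain.Fields)
    {
        BaseFieldControl ctl = listField.FieldRenderingControl;
        if (ctl == null)
            continue;   // some fields have no rendering control

        System.Diagnostics.Debug.WriteLine(
            listField.InternalName + " -> " + ctl.GetType().Name);
    }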
I am planning to write a separate blog on this, which I hope to publish very soon. Moreover, we also need to assign list-field-specific properties like List Id, Field Name etc. to identify which SharePoint list field is attached to the control:

            switch (_ControlMode)
            {
                case SPControlMode.New:
                    fieldLabel.RenderContext = SPContext.GetContext(pWeb);
                    fieldControl.RenderContext = SPContext.GetContext(pWeb);
                    break;
                case SPControlMode.Edit:
                case SPControlMode.Display:
                    fieldLabel.RenderContext = controlContext;
                    fieldLabel.ItemContext = controlContext;
                    fieldLabel.ItemId = ItemId;

                    fieldControl.RenderContext = controlContext;
                    fieldControl.ItemContext = controlContext;
                    fieldControl.ItemId = ItemId;
                    break;
            }

Here I have separate code for New mode versus Edit/Display mode because we will not have an Item Id to assign in New mode. We also need to set the CSS classes for the cells containing the label and the controls so that they are rendered with the SharePoint theme:

            spCellLabel.CssClass = "ms-formlabel";
            spCellControl.CssClass = "ms-formbody";

The FieldDescription control is used to add the field description if there is any.

Now it's time to add some more customization:

            // Disable Name (Title) in Edit mode
            if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Name")
            {
                TextBox txtTitlefield = (TextBox)fieldControl.Controls[0].FindControl("TextField");
                txtTitlefield.Enabled = false;
            }

The above code disables the title field in edit mode. You can add more code here to achieve further customization according to your requirements. Some examples:

            // Adding post back event on UserField to auto populate some other dependent field
            // in new mode and disable it in edit mode
            if (_ControlMode != SPControlMode.Display && fieldDisplayName == "Manager")
            {
                if (fieldControl.Controls[0].FindControl("UserField") != null)
                {
                    PeopleEditor pplEditor = (PeopleEditor)fieldControl.Controls[0].FindControl("UserField");
                    if (_ControlMode == SPControlMode.New)
                        pplEditor.AutoPostBack = true;
                    else
                        pplEditor.Enabled = false;
                }
            }

            // Add JavaScript event on dropdown field. Don't forget to add the JavaScript function on the page.
            if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Designation")
            {
                DropDownList ddlCategory = (DropDownList)fieldControl.Controls[0];
                ddlCategory.Attributes.Add("onchange", string.Format("javascript:DropdownChangeEvent('{0}');return false;", ddlCategory.ClientID));
            }

Following are the screenshots of my custom ListForm WebPart. Let's play a game: check out the OOB list forms of SharePoint, compare them with these screens, and find the differences.

DispForm.aspx:

EditForm.aspx:

NewForm.aspx:

Enjoy the SharePoint Soup!!!

    Read the article

  • Why does my mac have page outs when I have inactive memory?

    - by Chace Fields
    From what I've read, page outs are a sign that you don't have enough RAM. I have also read that if you have inactive memory available, then the machine will use that memory when starting up new programs. I have about 2GB of inactive memory and very little available memory. Once this happens, my page outs go up. Why is this and do I need more memory? I have 8GB of RAM. I'm running VMware Fusion 5 with 2GB allotted to Windows 8.

    Read the article

  • IIS7 ASP.NET application - 2 identical apps in 2 identical app pools, 1 is responsive and 1 is not

    - by Ben
I have an ASP.NET (v4.0) web app that is installed in a virtual directory (as an application) and is hosted in its own app pool. This is repeated for each instance of the app (i.e. per customer). The app pools are integrated (not classic) mode and LoadUserProfile is set to true. Otherwise, default settings. Each instance currently has its own copy of the code/config, and its own data folder (basic file read/writes). 1 instance of this app runs well (operation used for comparison takes ~4 seconds). Every other instance runs slowly (from 10-25 seconds for the same operation). If I move the slower instance to the "fastest" app pool that instance springs to life. If I move the faster instance into the slower app pool that instance slows to a crawl. The app pools were created in the same way initially - manually. I later used the powershell copy routine to ensure an exact copy of the faster app pool and still the same behaviour. Comparing the apppool.config files shows they are identical barring the virtual directory assignments. There are no shared resources that are being blocked, so far as I can tell, and I tested that by shutting down the performant app pool and restarting... slow is still slow, and then when I restart that app pool (so it's loaded last) it's still faster...
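One way to rule out configuration drift between the pools is to dump their effective settings with the Microsoft.Web.Administration API and diff the output; a rough diagnostic sketch (the property list here is only a sample, not exhaustive):

using System;
using Microsoft.Web.Administration;

class AppPoolDump
{
    static void Main()
    {
        using (ServerManager manager = new ServerManager())
        {
            foreach (ApplicationPool pool in manager.ApplicationPools)
            {
                // Print a few of the settings that commonly differ between pools
                Console.WriteLine("{0}: pipeline={1}, clr={2}, 32bit={3}, loadUserProfile={4}, idleTimeout={5}",
                    pool.Name,
                    pool.ManagedPipelineMode,
                    pool.ManagedRuntimeVersion,
                    pool.Enable32BitAppOnWin64,
                    pool.ProcessModel.LoadUserProfile,
                    pool.ProcessModel.IdleTimeout);
            }
        }
    }
}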

    Read the article

  • If USB is not listed in BIOS as a boot option, does that mean the machine can't boot from USB?

    - by Chace Fields
    I just purchased an Asus Zenbook Prime UX31A-DH51 with Windows 8. I want to wipe the drive and do a clean install but USB is not listed as a boot option in the BIOS. Does this mean it is not possible? Here is a photo of my BIOS options. This is the only option I get when I click Add New Boot Option. Not sure if I can add USB here. * Update * Asus tech emailed and said: "Unfortunately with Windows 8 you can not boot from bios."

    Read the article

  • Django FileField not saving to upload_to location

    - by Erik
    I have an Attachment model that has a FileField in a Django 1.4.1 app. This FileField has a callable upload_to parameter which, per the Django docs should be called when the form (and therefore the model) is saved. When I run FormTest below, the upload_to callable is never called and the file therefore does not appear in the location provided by the upload_to method. What am I doing wrong? Notice that in the passing tests in ModelTest (also below), the upload_to method works as expected. Test: from core.forms.attachments import AttachmentForm from django.test import TestCase import unittest from django.core.files.uploadedfile import SimpleUploadedFile from django.core.files.storage import default_storage def suite(): return unittest.TestSuite( [ unittest.TestLoader().loadTestsFromTestCase(FormTest), ] ) class FormTest(TestCase): def test_form_1(self): filename = 'filename' f = file(filename) data = {'name':'name',} file_data = {'attachment_file':SimpleUploadedFile(f.name,f.read()),} form = AttachmentForm(data=data,files=file_data) self.assertTrue(form.is_valid()) attachment = form.save() root_directory = 'attachments' upload_location = root_directory + '/' + attachment.directory + '/' + filename self.assertTrue(attachment.attachment_file) # Fails self.assertTrue(default_storage.exists(upload_location)) # Fails Attachment Model: from django.db import models from parent_mixins import Parent_Mixin import uuid from django.db.models.signals import pre_delete,pre_save from dirtyfields import DirtyFieldsMixin def upload_to(instance,filename): return 'attachments/' + instance.directory + '/' + filename def uuid_directory_name(): return uuid.uuid4().hex class Attachment(DirtyFieldsMixin,Parent_Mixin,models.Model): attachment_file = models.FileField(blank=True,null=True,upload_to=upload_to) directory = models.CharField(blank=False,default=uuid_directory_name,null=False,max_length=32) name = models.CharField(blank=False,default=None,null=False,max_length=128) class Meta: app_label = 'core' def __str__(self): return unicode(self).encode('utf-8') def __unicode__(self): return unicode(self.name) @models.permalink def get_absolute_url(self): return('core_attachments_update',(),{'pk': self.pk}) # def save(self,*args,**kwargs): # super(Attachment,self).save(*args,**kwargs) def pre_delete_callback(sender, instance, *args, **kwargs): if not isinstance(instance, Attachment): return if not instance.attachment_file: return instance.attachment_file.delete(save=False) def pre_save_callback(sender, instance, *args, **kwargs): if not isinstance(instance, Attachment): return if not instance.attachment_file: return if instance.is_dirty(): dirty_fields = instance.get_dirty_fields() if 'attachment_file' in dirty_fields: old_attachment_file = dirty_fields['attachment_file'] old_attachment_file.delete() pre_delete.connect(pre_delete_callback) pre_save.connect(pre_save_callback) Attachment Form: from ..models.attachments import Attachment from crispy_forms.helper import FormHelper from crispy_forms.layout import Div,Layout,HTML,Field,Fieldset,Button,ButtonHolder,Submit from django import forms class AttachmentFormHelper(FormHelper): form_tag=False layout = Layout( Div( Div( Field('name',css_class='span4'), Field('attachment_file',css_class='span4'), css_class='span4', ), css_class='row', ), ) class AttachmentForm(forms.ModelForm): helper = AttachmentFormHelper() class Meta: fields=('attachment_file','name') model = Attachment class AttachmentInlineFormHelper(FormHelper): form_tag=False form_style='inline' layout = 
Layout( Div( Div( Field('name',css_class='span4'), Field('attachment_file',css_class='span4'), Field('DELETE',css_class='span4'), css_class='span4', ), css_class='row', ), ) class AttachmentInlineForm(forms.ModelForm): helper = AttachmentInlineFormHelper() class Meta: fields=('attachment_file','name') model = Attachment UPDATE I also do testing on the Attachment model class with these unit tests -- which all pass: from core.models.attachments import Attachment from core.models.attachments import upload_to from django.test import TestCase import unittest from django.core.files.storage import default_storage from django.core.files.base import ContentFile def suite(): return unittest.TestSuite( [ unittest.TestLoader().loadTestsFromTestCase(ModelTest), ] ) class ModelTest(TestCase): def test_model_minimum_fields(self): attachment = Attachment(name='name') attachment.attachment_file.save('test.txt',ContentFile("hello world")) attachment.save() self.assertEqual(str(attachment),'name') self.assertEqual(unicode(attachment),'name') self.assertTrue(attachment.directory) # def test_model_full_fields(self): # attachment = Attachment() # attachement.save() def test_file_operations_basic(self): root_directory = 'attachments' filename = 'test.txt' attachment = Attachment(name='name') attachment.attachment_file.save(filename,ContentFile('test')) attachment.save() upload_location = root_directory + '/' + attachment.directory + '/' + filename self.assertEqual(upload_to(attachment,filename),upload_location) self.assertTrue(default_storage.exists(upload_location)) def test_file_operations_delete(self): root_directory = 'attachments' filename = 'test.txt' attachment = Attachment(name='name') attachment.attachment_file.save(filename,ContentFile('test')) attachment.save() upload_location = upload_to(attachment,filename) attachment.delete() self.assertFalse(default_storage.exists(upload_location)) def test_file_operations_change(self): root_directory = 'attachments' filename_1 = 'test_1.txt' attachment = Attachment(name='name') attachment.attachment_file.save(filename_1,ContentFile('test')) attachment.save() upload_location_1 = upload_to(attachment,filename_1) self.assertTrue(default_storage.exists(upload_location_1)) filename_2 = 'test_2.txt' attachment.attachment_file.save(filename_2,ContentFile('test')) attachment.save() upload_location_2 = upload_to(attachment,filename_2) self.assertTrue(default_storage.exists(upload_location_2)) self.assertFalse(default_storage.exists(upload_location_1))

    Read the article

  • C# XMLSerializer fails with List<T>

    - by Redshirt
Help... I'm using a singleton class to save all my settings info. It's first utilized by calling Settings.ValidateSettings(@"C:\MyApp"). The problem I'm having is that 'List Contacts' is causing the xmlserializer to fail to write the settings file, or to load said settings. If I comment out the List then I have no problems saving/loading the xml file. What am I doing wrong... Thanks in advance

// The actual settings to save
public class MyAppSettings
{
    public bool FirstLoad { get; set; }
    public string VehicleFolderName { get; set; }
    public string ContactFolderName { get; set; }

    public List<ContactInfo> Contacts
    {
        get
        {
            if (contacts == null)
                contacts = new List<ContactInfo>();
            return contacts;
        }
        set { contacts = value; }
    }
    private List<ContactInfo> contacts;
}

// The class in which the settings are manipulated
public static class Settings
{
    public static string SettingPath;
    private static MyAppSettings instance;

    public static MyAppSettings Instance
    {
        get
        {
            if (instance == null)
                instance = new MyAppSettings();
            return instance;
        }
        set { instance = value; }
    }

    public static void InitializeSettings(string path)
    {
        SettingPath = Path.GetFullPath(path + "\\MyApp.xml");
        if (File.Exists(SettingPath))
        {
            LoadSettings();
        }
        else
        {
            Instance.FirstLoad = true;
            Instance.VehicleFolderName = "Cars";
            Instance.ContactFolderName = "Contacts";
            SaveSettingsFile();
        }
    }

    // load the settings from the xml file
    private static void LoadSettings()
    {
        XmlSerializer ser = new XmlSerializer(typeof(MyAppSettings));
        TextReader reader = new StreamReader(SettingPath);
        Instance = (MyAppSettings)ser.Deserialize(reader);
        reader.Close();
    }

    // Save the settings to the xml file
    public static void SaveSettingsFile()
    {
        XmlSerializer ser = new XmlSerializer(typeof(MyAppSettings));
        TextWriter writer = new StreamWriter(SettingPath);
        ser.Serialize(writer, Settings.Instance);
        writer.Close();
    }

    public static bool ValidateSettings(string initialFolder)
    {
        try
        {
            Settings.InitializeSettings(initialFolder);
        }
        catch (Exception e)
        {
            return false;
        }
        // Do some validation logic here
        return true;
    }
}

// A utility class to contain each contact detail
public class ContactInfo
{
    public string ContactID;
    public string Name;
    public string PhoneNumber;
    public string Details;
    public bool Active;
    public int SortOrder;
}
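For what it's worth, XmlSerializer copes with List<T> of public types out of the box, so a minimal repro that reuses the classes above but cuts the file I/O out of the picture can help isolate where it actually fails; a sketch:

using System;
using System.IO;
using System.Xml.Serialization;

class SerializerRepro
{
    static void Main()
    {
        MyAppSettings settings = new MyAppSettings
        {
            FirstLoad = true,
            VehicleFolderName = "Cars",
            ContactFolderName = "Contacts"
        };
        settings.Contacts.Add(new ContactInfo { ContactID = "1", Name = "Test", Active = true });

        XmlSerializer ser = new XmlSerializer(typeof(MyAppSettings));
        using (StringWriter writer = new StringWriter())
        {
            ser.Serialize(writer, settings);
            // If this prints clean XML, the List<ContactInfo> itself is fine and the
            // problem is more likely in the surrounding file handling
            Console.WriteLine(writer.ToString());
        }
    }
}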

    Read the article

  • Implementing INotifyPropertyChanged with PostSharp 1.5

    - by no9
Hello all. I'm new to .NET and WPF, so I hope I will ask the question correctly. I am using INotifyPropertyChanged implemented using PostSharp 1.5:

[Serializable, DebuggerNonUserCode,
 AttributeUsage(AttributeTargets.Assembly | AttributeTargets.Class, AllowMultiple = false, Inherited = false),
 MulticastAttributeUsage(MulticastTargets.Class, AllowMultiple = false, Inheritance = MulticastInheritance.None, AllowExternalAssemblies = true)]
public sealed class NotifyPropertyChangedAttribute : CompoundAspect
{
    public int AspectPriority { get; set; }

    public override void ProvideAspects(object element, LaosReflectionAspectCollection collection)
    {
        Type targetType = (Type)element;
        collection.AddAspect(targetType, new PropertyChangedAspect { AspectPriority = AspectPriority });
        foreach (var info in targetType.GetProperties(BindingFlags.Public | BindingFlags.Instance).Where(pi => pi.GetSetMethod() != null))
        {
            collection.AddAspect(info.GetSetMethod(), new NotifyPropertyChangedAspect(info.Name) { AspectPriority = AspectPriority });
        }
    }
}

[Serializable]
internal sealed class PropertyChangedAspect : CompositionAspect
{
    public override object CreateImplementationObject(InstanceBoundLaosEventArgs eventArgs)
    {
        return new PropertyChangedImpl(eventArgs.Instance);
    }

    public override Type GetPublicInterface(Type containerType)
    {
        return typeof(INotifyPropertyChanged);
    }

    public override CompositionAspectOptions GetOptions()
    {
        return CompositionAspectOptions.GenerateImplementationAccessor;
    }
}

[Serializable]
internal sealed class NotifyPropertyChangedAspect : OnMethodBoundaryAspect
{
    private readonly string _propertyName;

    public NotifyPropertyChangedAspect(string propertyName)
    {
        if (string.IsNullOrEmpty(propertyName))
            throw new ArgumentNullException("propertyName");
        _propertyName = propertyName;
    }

    public override void OnEntry(MethodExecutionEventArgs eventArgs)
    {
        var targetType = eventArgs.Instance.GetType();
        var setSetMethod = targetType.GetProperty(_propertyName);
        if (setSetMethod == null)
            throw new AccessViolationException();

        var oldValue = setSetMethod.GetValue(eventArgs.Instance, null);
        var newValue = eventArgs.GetReadOnlyArgumentArray()[0];
        if (oldValue == newValue)
            eventArgs.FlowBehavior = FlowBehavior.Return;
    }

    public override void OnSuccess(MethodExecutionEventArgs eventArgs)
    {
        var instance = eventArgs.Instance as IComposed<INotifyPropertyChanged>;
        var imp = instance.GetImplementation(eventArgs.InstanceCredentials) as PropertyChangedImpl;
        imp.OnPropertyChanged(_propertyName);
    }
}

[Serializable]
internal sealed class PropertyChangedImpl : INotifyPropertyChanged
{
    private readonly object _instance;

    public PropertyChangedImpl(object instance)
    {
        if (instance == null)
            throw new ArgumentNullException("instance");
        _instance = instance;
    }

    public event PropertyChangedEventHandler PropertyChanged;

    internal void OnPropertyChanged(string propertyName)
    {
        if (string.IsNullOrEmpty(propertyName))
            throw new ArgumentNullException("propertyName");

        var handler = PropertyChanged as PropertyChangedEventHandler;
        if (handler != null)
            handler(_instance, new PropertyChangedEventArgs(propertyName));
    }
}

Then I have a couple of classes (User and Address) that are marked with [NotifyPropertyChanged]. It works fine. But what I want is that when the child object changes (in my example Address), the parent object gets notified (in my case User). Would it be possible to expand this code so it automatically creates listeners on parent objects that listen for changes in their child objects?
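The aspects above don't wire parent and child together, so child changes won't bubble up automatically. One way to get the behavior without changing the aspects is to hook the child's PropertyChanged event in the parent's property setter; a hand-wired sketch (User and Address stand in for your own classes, and it assumes the child exposes INotifyPropertyChanged via the composition aspect or directly):

using System.ComponentModel;

public class User : INotifyPropertyChanged
{
    private Address _address;

    public Address Address
    {
        get { return _address; }
        set
        {
            var oldChild = _address as INotifyPropertyChanged;
            if (oldChild != null)
                oldChild.PropertyChanged -= OnChildPropertyChanged;

            _address = value;

            var newChild = _address as INotifyPropertyChanged;
            if (newChild != null)
                newChild.PropertyChanged += OnChildPropertyChanged;

            OnPropertyChanged("Address");
        }
    }

    private void OnChildPropertyChanged(object sender, PropertyChangedEventArgs e)
    {
        // Re-raise the child change as a change of the parent property
        OnPropertyChanged("Address");
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

Automating the same hook inside NotifyPropertyChangedAspect.OnSuccess (subscribing to the new value whenever the property type implements INotifyPropertyChanged) would give the attribute-driven version of this.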

    Read the article

  • Why do I need two Instances in Windows Azure?

    - by BuckWoody
Windows Azure as a Platform as a Service (PaaS) means that there are various components you can use in it to solve a problem:

- Compute "Roles" - Computers running an OS and optionally IIS - you can have more than one "Instance" of a given Role
- Storage - Blobs, Tables and Queues for Storage
- Other Services - Things like the Service Bus, Azure Connection Services, SQL Azure and Caching

It's important to understand that some of these services are Stateless and others maintain State. Stateless means (at least in this case) that a system might disappear from one physical location and appear elsewhere. You can think of this as a cashier at the front of a store. If you're in line, a cashier might take his break, and another person might replace him. As long as the order proceeds, you as the customer aren't really affected except for the few seconds it takes to change them out. The cashier function in this example is stateless.

The Compute Role Instances in Windows Azure are Stateless. To upgrade hardware, because of a fault or many other reasons, a Compute Role's Instance might stop on one physical server, and another will pick it up. This is done through the controlling fabric that Windows Azure uses to manage the systems. It's important to note that storage in Azure does maintain State. Your data will not simply disappear - it is maintained - in fact, it's maintained three times in a single datacenter and all those copies are replicated to another for safety. Going back to our example, storage is similar to the cash register itself. Even though a cashier leaves, the record of your payment is maintained.

So if a Compute Role Instance can disappear and re-appear, the things running on that first Instance would stop working. If you wrote your code in a Stateless way, then another Role Instance simply re-starts that transaction and keeps working, just like the other cashier in the example. But if you only have one Instance of a Role, then when the Role Instance is re-started, or when you need to upgrade your own code, you can face downtime, since there's only one. That means you should deploy at least two of each Role Instance not only for scale to handle load, but so that the first "cashier" has someone to replace them when they disappear. It's not just a good idea - to gain the Service Level Agreement (SLA) for our uptime in Azure it's a requirement. We point this out right in the Management Portal when you deploy the application.

When you deploy a Role Instance you can also set the "Upgrade Domain". Placing Roles on separate Upgrade Domains means that you have a continuous service whenever you upgrade (more on upgrades in another post) - the process looks like this for two Roles. This example covers the scenario for upgrade, so you have four roles total - One Web and one Worker running the "older" code, and one of each running the new code. In all those Roles you want at least two instances, and this example shows that you're covered for High Availability and upgrade paths.

The take-away is this - always plan for forward-facing Roles to have at least two copies. For Worker Roles that do background processing, there are ways to architect around this number, but it does affect the SLA if you have only one.
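For a classic Cloud Service the instance count lives in the service configuration rather than in code; a minimal ServiceConfiguration.cscfg fragment (the service and role names here are placeholders) looks roughly like this:

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <!-- Two instances: needed for the compute SLA and for staying up during upgrades -->
    <Instances count="2" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>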

    Read the article

  • UAT Testing for SOA 10G Clusters

    - by [email protected]
A lot of customers ask how to verify their SOA clusters and make them production ready. Here is a list that I recommend using for 10G SOA Clusters.

Test cases for each component - Oracle Application Server 10G

General Application Server test cases

This section covers very general test cases to make sure that the Application Server cluster has been set up correctly and that you can start and stop all the components in the server via opmnctl and the AS Console.

Test Case 1: Check if you can see AS instances in the console
Implementation:
1. Log on to the AS Console and check whether you can see all the nodes in your AS cluster. You should be able to see all the Oracle AS instances that are part of the cluster. This means that the OPMN clustering worked and the AS instances successfully joined the AS cluster.
Result:
All the instances in the AS cluster should be listed in the EM console. If the instances are not listed, here are the files to check to see if OPMN joined the cluster properly: $ORACLE_HOME\opmn\logs\opmn.log and $ORACLE_HOME\opmn\logs\opmn.dbg. If OPMN did not join the cluster properly, please check the opmn.xml file to make sure the discovery multicast address and port are correct (see this link for opmn documentation). Restart the whole instance using opmnctl stopall followed by opmnctl startall. Log on to the AS Console to see if the instance is listed as part of the cluster.

Test Case 2: Check to see if you can start/stop each component
Implementation:
1. Check each OC4J component on each AS instance.
2. Start each and every component through the AS Console to see if they will start and stop.
3. Do that for each and every instance.
Result:
Each component should start and stop through the AS Console. You can also verify that a component started by checking opmnctl status after logging onto each box associated with the cluster.

Test Case 3: Add/modify a datasource entry through the AS Console on a remote AS instance (not on the instance where EM is physically running)
Implementation:
1. Pick an OC4J instance.
2. Create a new data-source through the AS Console.
3. Modify an existing data-source or connection pool (optional).
Result:
Open $ORACLE_HOME\j2ee\<oc4j_name>\config\data-sources.xml to see if the new (and/or the modified) connection details and data-source exist. If they do, then the AS Console has successfully updated a remote file and MBeans are communicating correctly.

Test Case 4: Start and stop AS instances using the opmnctl @cluster command
Implementation:
1. Go to $ORACLE_HOME\opmn\bin and use opmnctl @cluster to start and stop the AS instances.
Result:
Use opmnctl @cluster status to check for start and stop statuses.

HTTP server test cases

This section will deal with use cases to test HTTP server failover scenarios.
    Test Case 4 - Start and stop AS instances using the opmnctl @cluster command

    Implementation:
    1. Go to $ORACLE_HOME\opmn\bin and use opmnctl @cluster startall and opmnctl @cluster stopall to start and stop the AS instances.

    Result:
    Use opmnctl @cluster status to check the start and stop statuses.

    HTTP server test cases

    This section deals with use cases that test HTTP server failover scenarios. In these examples the HTTP server will be talking to the BPEL console (or any other web application that the client wants), so the URL will be http://hostname:port/BPELConsole. (A small client loop for hammering this URL while you fail servers over is sketched at the end of this section.)

    Test Case 1 - Shut down one of the HTTP servers while accessing the BPEL console and see the request routed to the second HTTP server in the cluster

    Implementation:
    1. Access the BPELConsole.
    2. Check $ORACLE_HOME\Apache\Apache\logs\access_log for the timestamp and the URL that was accessed by the user. The entry will look like this:
       1xx.2x.2xx.xxx [24/Mar/2009:16:04:38 -0500] "GET /BPELConsole=System HTTP/1.1" 200 15
    3. After you have figured out which HTTP server the request ran on, shut that HTTP server down using opmnctl stopproc - this is a graceful shutdown.
    4. Access the BPELConsole again. (Note that you should have a load balancer in front of the HTTP servers and have configured the Apache virtual host; see the EDG for the steps.)
    5. Check $ORACLE_HOME\Apache\Apache\logs\access_log on the surviving node for the timestamp and the URL accessed by the user; the entry will look like the one above.

    Result:
    Even though you shut down one HTTP server, the request is routed to the surviving HTTP server, which is then able to route the request to the BPEL Console, and you are able to access the console. By checking the access log file you can confirm that the request is being picked up by the surviving node.

    Test Case 2 - Repeat the same test as above, but instead of calling opmnctl stopproc, pull the network cord of one of the HTTP servers so that the LBR routes the request to the surviving HTTP node. This simulates a network failure.

    Test Case 3 - In Test Case 1 we simulated a graceful shutdown; in this case we will simulate an Apache crash

    Implementation:
    1. Use opmnctl status -l to get the PID of the HTTP server that you would like to bring down forcefully.
    2. On Linux, use kill -9 <PID> to kill the HTTP server.
    3. Access the BPEL console.

    Result:
    As you kill the HTTP server, OPMN will restart it. The restart may be so quick that the LBR still routes the request to the same server. One way to check whether the HTTP server restarted is to compare the new PID and the timestamp in the access log for the BPEL console.
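    To automate the "access the BPELConsole" step while you shut nodes down, any small HTTP client loop will do. Below is a minimal sketch in C#; the URL is a placeholder for your load balancer's virtual host, not a value from this note.

        using System;
        using System.Net;
        using System.Threading;

        class FailoverProbe
        {
            static void Main()
            {
                // Placeholder: point this at the LBR virtual host in front of
                // the HTTP servers, not at an individual node.
                const string url = "http://lbr-host:7777/BPELConsole";

                using (var client = new WebClient())
                {
                    while (true)
                    {
                        try
                        {
                            string page = client.DownloadString(url);
                            Console.WriteLine("{0:HH:mm:ss} OK ({1} bytes)", DateTime.Now, page.Length);
                        }
                        catch (WebException ex)
                        {
                            // A failure here means the LBR had no healthy node to route to.
                            Console.WriteLine("{0:HH:mm:ss} FAILED: {1}", DateTime.Now, ex.Message);
                        }
                        Thread.Sleep(2000); // probe every 2 seconds during the failover
                    }
                }
            }
        }

    Run it while you execute Test Cases 1-3; an unbroken stream of OK lines across the shutdown confirms the failover was transparent to clients.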
    BPEL test cases

    This section covers scenarios dealing with BPEL clustering using jGroups, BPEL deployment, and testing related to BPEL failover.

    Test Case 1 - Verify that jGroups has initialized correctly

    There is no real testing in this use case, just a visual verification, by looking at log files, that jGroups has initialized correctly.

    Implementation:
    Check the opmn log for the BPEL container on all nodes at $ORACLE_HOME/opmn/logs/<group name>~<container name>~<group name>~1.log. This log file contains jGroups-related information during startup and steady-state operation. Soon after startup you should find log entries for UDP or TCP.

    Example jGroups log entries for UDP:

    Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
    INFO: sockets will use interface 144.25.142.172
    Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
    INFO: socket information:
    local_addr=144.25.142.172:1127, mcast_addr=228.8.15.75:45788, bind_addr=/144.25.142.172, ttl=32
    sock: bound to 144.25.142.172:1127, receive buffer size=64000, send buffer size=32000
    mcast_recv_sock: bound to 144.25.142.172:45788, send buffer size=32000, receive buffer size=64000
    mcast_send_sock: bound to 144.25.142.172:1128, send buffer size=32000, receive buffer size=64000
    Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
    -------------------------------------------------------
    GMS: address is 144.25.142.172:1127
    -------------------------------------------------------

    Example jGroups log entries for TCP:

    Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
    INFO: server socket created on 144.25.142.172:7900
    Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
    -------------------------------------------------------
    GMS: address is 144.25.142.172:7900
    -------------------------------------------------------

    In the log below, "server socket created on" indicates that the TCP socket is established on the node's own IP address and port, and "created socket to" shows that the second node has connected to the first node, matching the log file above by IP address and port.

    Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
    INFO: server socket created on 144.25.142.173:7901
    Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
    -------------------------------------------------------
    GMS: address is 144.25.142.173:7901
    -------------------------------------------------------
    Apr 3, 2008 6:25:41 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable getConnection
    INFO: created socket to 144.25.142.172:7900

    Result:
    By reviewing the log files you can confirm that BPEL clustering at the jGroups level is working and that the jGroups channel is communicating.

    Test Case 2 - Test connectivity between BPEL nodes

    Implementation:
    Test connections between the different cluster nodes using ping, telnet, and traceroute. The presence of firewalls and the number of hops between cluster nodes can affect performance, as firewalls have a tendency to take down connections after some time or simply block them. Also reference Metalink Note 413783.1, "How to Test Whether Multicast is Enabled on the Network," and the standalone probe sketched below.

    Result:
    Using the above tools you can confirm whether multicast is working and whether the BPEL nodes are communicating.
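    Multicast problems are the most common reason jGroups fails to form a cluster, so it is worth having a standalone probe that takes BPEL out of the picture: run a receiver on one node, a sender on another, and see whether the packet crosses. A minimal C# sketch follows; the group address and port are taken from the example log above (mcast_addr=228.8.15.75:45788) and must be replaced with the values from your own opmn.xml/jGroups configuration.

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        class MulticastCheck
        {
            // Assumption: these match the mcast_addr your jGroups configuration uses.
            static readonly IPAddress Group = IPAddress.Parse("228.8.15.75");
            const int Port = 45788;

            static void Main(string[] args)
            {
                if (args.Length > 0 && args[0] == "send")
                {
                    // Sender: run this on one cluster node.
                    using (var sender = new UdpClient())
                    {
                        byte[] payload = Encoding.ASCII.GetBytes("multicast-test");
                        sender.Send(payload, payload.Length, new IPEndPoint(Group, Port));
                        Console.WriteLine("Sent test packet to {0}:{1}", Group, Port);
                    }
                }
                else
                {
                    // Receiver: start this on the other cluster node first.
                    using (var receiver = new UdpClient(Port))
                    {
                        receiver.JoinMulticastGroup(Group);
                        var remote = new IPEndPoint(IPAddress.Any, 0);
                        Console.WriteLine("Waiting for a packet on {0}:{1}...", Group, Port);
                        byte[] data = receiver.Receive(ref remote); // blocks until a packet arrives
                        Console.WriteLine("Received '{0}' from {1}", Encoding.ASCII.GetString(data), remote);
                    }
                }
            }
        }

    If the receiver never sees the packet, jGroups over UDP will not form a cluster either, and you will need to fix the network or switch to TCP.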
    Test Case 3 - Test deployment of a BPEL suitcase to one BPEL node

    Implementation:
    1. Deploy a HelloWorld BPEL suitcase (or any other client-specific BPEL suitcase) to only one BPEL instance, using ant, JDeveloper, or the BPEL console.
    2. Log on to the second BPEL console to check whether the BPEL suitcase has been deployed there as well.

    Result:
    If jGroups has been configured correctly and is communicating, BPEL clustering allows you to deploy a suitcase to a single node: jGroups notifies the second instance of the deployment, and that instance goes to the DB and picks up the new deployment after receiving the notification. The result is that the new deployment is "deployed" to each node while you only deploy to a single BPEL instance in the BPEL cluster.

    Test Case 4 - Test that the BPEL server fails over and that all asynch processes are picked up by the secondary BPEL instance

    Implementation:
    1. Deploy 2 asynch processes (a minimal sketch of the child process follows below):
       - a ParentAsynch process which calls a ChildAsynch process with a variable telling it how many times to loop or how many seconds to sleep
       - a ChildAsynch process that loops or sleeps or has an onAlarm
    2. Make sure that the processes are deployed to both servers.
    3. Shut down one BPEL server.
    4. On the active BPEL server, call ParentAsynch a few times (use the load generation page).
    5. When you have enough ParentAsynch instances, shut down this BPEL instance and start the other one. Wait until this BPEL instance has shut down fully before starting the second one.
    6. Log on to the BPEL console and verify that the instances were picked up by the second BPEL node and completed.

    Result:
    The BPEL instances fail over to the secondary node and the flows complete.
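    For step 1 of Test Case 4, the child process only needs to burn time between the initiating receive and the callback. Below is a hypothetical BPEL 1.1 fragment of such a ChildAsynch process; the partner link, port types, and operation names follow the usual Oracle async conventions and are assumptions, not code from this note.

        <!-- Hypothetical ChildAsynchProcess fragment: simulates work with a wait -->
        <sequence name="main">
          <!-- Instance-creating receive for the asynchronous initiate call -->
          <receive name="receiveInput" partnerLink="client"
                   portType="client:ChildAsynchProcess" operation="initiate"
                   variable="inputVariable" createInstance="yes"/>
          <!-- Sleep long enough to keep instances open while the node fails over;
               an onAlarm inside a pick or scope works just as well -->
          <wait name="simulateWork" for="'PT60S'"/>
          <!-- Asynchronous callback to the caller (ParentAsynch) -->
          <invoke name="callbackClient" partnerLink="client"
                  portType="client:ChildAsynchProcessCallback" operation="onResult"
                  inputVariable="outputVariable"/>
        </sequence>

    Deploy it alongside a ParentAsynch process that invokes it in a loop, and you will have enough open instances to observe the failover.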
    ESB test cases

    This section covers the use cases involved in testing an ESB cluster. For this section, please follow Metalink Note 470267.1, which covers the basic tests to verify your ESB cluster.

    Read the article

  • Visual Studio Code Analysis: CA0001 Error Running Code Analysis - object reference not set to an instance of an object

    - by sturdytree
    For a WPF application being developed in VS 2012 (Ultimate), the application runs fine when code analysis is disabled for a particular project; enabling it results in the error above. This was working fine until recently (i.e. the project ran with code analysis enabled), and the only recent change I can think of is removing NHibernate Profiler (via NuGet). I'd be grateful for any pointers on how to debug this, or on how to get a more detailed log/error message.

    Read the article

  • How to handle Append Only text fields in a Sharepoint DataSheet view?

    - by Telos
    We've created a SharePoint site to track a process. Eventually we're going to turn it into a workflow, but in the meantime there's a list we all have to look at, which shows the various dates by which each piece is supposed to be finished. So basically my group needs to see and update columns X, Y, Z and Comments while ignoring the other 30 billion or so columns. That works great in datasheet view, because we can easily see our columns and update them right there without drilling into the item and browsing through all the other crap we don't need. The problem is the Comments field, in which we really need to see the last actual comment made. Unfortunately, whenever anyone saves the record, the field comes back blank (unless they entered a new comment), and the last actual comment is lost unless you drill into the item. Is there some way to get the datasheet view to show all the entries? I should also note that I know very little about SharePoint 2007... so detailed answers would be nice!
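    A note on why this happens, which may help whoever picks this up: an Append Only field never overwrites anything - each comment is stored on the list item version in which it was entered, and the datasheet only sees the current version, hence the blank. The full trail is still there and can be pulled out with the server object model. A hedged sketch follows; the site URL, list name, item ID, and field name ("Comments") are illustrative assumptions.

        using System;
        using Microsoft.SharePoint;

        class AppendOnlyCommentDump
        {
            static void Main()
            {
                // Placeholder URL and names: substitute your own site, list, and field.
                using (SPSite site = new SPSite("http://server/sites/tracking"))
                using (SPWeb web = site.OpenWeb())
                {
                    SPList list = web.Lists["Process Tracking"];
                    SPListItem item = list.GetItemById(1);

                    // Each saved version carries the comment entered at that point,
                    // so walking the versions recovers the whole comment trail.
                    foreach (SPListItemVersion version in item.Versions)
                    {
                        string comment = version["Comments"] as string;
                        if (!string.IsNullOrEmpty(comment))
                        {
                            Console.WriteLine("v{0} ({1}): {2}",
                                version.VersionLabel, version.Created, comment);
                        }
                    }
                }
            }
        }

    As far as I know the datasheet view itself cannot render version history, so a calculated column or a small web part fed by code like this is the usual workaround.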

    Read the article

  • How to Choose Fields While Using ExportToExcel in jqGrid?

    - by João Guilherme
    Hi! I have this jqGrid:

    <trirand:JQGrid runat="server" ID="JQGrid1" OnRowEditing="JQGrid1_RowEditing"
        RenderingMode="Optimized" oncellbinding="JQGrid1_CellBinding" Height="350"
        EditUrl="/Ferramenta/Transacoes/TransacoesT.aspx">
        <AppearanceSettings HighlightRowsOnHover="true" />
        <Columns>
            <trirand:JQGridColumn DataField="IdLancamento" PrimaryKey="True" Visible="false" />
            <trirand:JQGridColumn DataField="IdCategoria" Visible="false" />
            <trirand:JQGridColumn DataField="DataLancamento" Editable="true"
                DataFormatString="{0:dd/MM/yy}" HeaderText="Data" Width="65"
                TextAlign="Center" CssClass="font_data" />
            <trirand:JQGridColumn DataField="Descricao" Editable="true" HeaderText="Descrição" Width="330" />
            <trirand:JQGridColumn DataField="NomeCategoria" Editable="true" EditType="DropDown"
                EditorControlID="ddlCategorias" HeaderText="Categoria">
                <Formatter>
                    <trirand:CustomFormatter FormatFunction="DefineUrl" />
                </Formatter>
            </trirand:JQGridColumn>
            <trirand:JQGridColumn DataField="Valor" Editable="false" DataFormatString="{0:C}"
                HeaderText="Valor" Width="80" TextAlign="Center" />
        </Columns>
        <ClientSideEvents RowSelect="editRow" />
        <PagerSettings PageSize="20" />
        <ToolBarSettings ShowEditButton="false" ShowRefreshButton="True" ShowAddButton="false"
            ShowDeleteButton="false" ShowSearchButton="false" />
        <SortSettings InitialSortColumn=""></SortSettings>
    </trirand:JQGrid>
    <asp:LinkButton ID="lbExportar" runat="server" onclick="lbExportar_Click">Exportar todas as transações</asp:LinkButton>

    When I use the ExportToExcel method:

    JQGrid1.ExportToExcel("export.xls");

    it includes the first column, IdLancamento, which is not visible, and also another column that is only used in the query. Is it possible to choose the columns that get exported?
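    One approach worth trying, since the export appears to ignore Visible="false": prune the server-side Columns collection just before exporting. The sketch below is hypothetical - it assumes ExportToExcel walks JQGrid1.Columns and that the collection supports RemoveAt, and the page class name is illustrative, so verify against your version of jqGrid for ASP.NET.

        using System;
        using Trirand.Web.UI.WebControls; // jqGrid for ASP.NET server controls

        public partial class TransacoesT : System.Web.UI.Page
        {
            // Click handler for the lbExportar LinkButton declared in the markup.
            protected void lbExportar_Click(object sender, EventArgs e)
            {
                // Columns to leave out of the spreadsheet (assumed: the hidden keys).
                string[] skip = { "IdLancamento", "IdCategoria" };

                // Walk backwards so RemoveAt doesn't shift the indexes still to visit.
                for (int i = JQGrid1.Columns.Count - 1; i >= 0; i--)
                {
                    if (Array.IndexOf(skip, JQGrid1.Columns[i].DataField) >= 0)
                    {
                        JQGrid1.Columns.RemoveAt(i);
                    }
                }

                // The mutation only lives for this request; the next page load
                // rebuilds the grid's columns from the markup.
                JQGrid1.ExportToExcel("export.xls");
            }
        }

    If RemoveAt turns out not to be available on the collection, another option to test is toggling each column's Visible flag and checking whether your build of ExportToExcel honors it.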

    Read the article
