Search Results

Search found 14801 results on 593 pages for 'base de datos'.


  • libvirt upgrade caused vms to not see drives (boot media not found)

    - by bias
    I upgraded to Ubuntu 12.04.1, and now libvirt (via OpenNebula) successfully runs VMs, but they aren't finding the two drives (specifically, the boot drive). One is "hd", the other is "cdrom". The machine boots but fails and displays something like "boot media not found hd" (this was in a VNC terminal and I didn't copy the output anywhere, so that's not the verbatim message). I tried constructing a new disk using the new version of qemu (via vmbuilder), and this new machine has the same problem as the old machine. In case it matters (I can't see why it would), I'm using OpenNebula to manage the machines. There's nothing relevant in any of the logs (syslog, libvirtd, oned), which is to say nothing interesting or anomalous is reported when the machine is brought up.

    Versions:

      libvirt 0.9.8-2ubuntu17.4
      qemu-kvm 1.0+noroms-0ubuntu14.3

    The relevant portions of the libvirt XML config:

      <os>
        <type arch='x86_64' machine='pc-1.0'>hvm</type>
        <boot dev='hd'/>
      </os>
      ...
      <devices>
        <emulator>/usr/bin/kvm</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/one//203/images/disk.0'/>
          <target dev='sda' bus='scsi'/>
          <alias name='scsi0-0-0'/>
          <address type='drive' controller='0' bus='0' unit='0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/var/lib/one//203/images/disk.1'/>
          <target dev='sdc' bus='scsi'/>
          <readonly/>
          <alias name='scsi0-0-2'/>
          <address type='drive' controller='0' bus='0' unit='2'/>
        </disk>
        <controller type='scsi' index='0'>
          <alias name='scsi0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </controller>
        <memballoon model='virtio'>
          <alias name='balloon0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </memballoon>
        ...
      </devices>

    The libvirt/qemu log contains:

      2012-11-25 22:19:24.328+0000: starting up
      LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 256 -smp 1,sockets=1,cores=1,threads=1 -name one-204 -uuid 4be6c276-19e8-bdc2-e9c9-9ca5352f2be3 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-204.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device lsi,id=scsi0,bus=pci.0,addr=0x5 -drive file=/var/lib/one//204/images/disk.0,if=none,id=drive-scsi0-0-0,format=qcow2 -device scsi-disk,bus=scsi0.0,scsi-id=0,drive=drive-scsi0-0-0,id=scsi0-0-0,bootindex=1 -drive file=/var/lib/one//204/images/disk.1,if=none,media=cdrom,id=drive-scsi0-0-2,readonly=on,format=raw -device scsi-disk,bus=scsi0.0,scsi-id=2,drive=drive-scsi0-0-2,id=scsi0-0-2 -netdev tap,fd=18,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:00:68,bus=pci.0,addr=0x3 -netdev tap,fd=19,id=hostnet1 -device rtl8139,netdev=hostnet1,id=net1,mac=02:00:ad:f0:1b:94,bus=pci.0,addr=0x4 -usb -vnc 0.0.0.0:204 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
      kvm: -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:00:68,bus=pci.0,addr=0x3: pci_add_option_rom: failed to find romfile "pxe-rtl8139.rom"
      kvm: -device rtl8139,netdev=hostnet1,id=net1,mac=02:00:ad:f0:1b:94,bus=pci.0,addr=0x4: pci_add_option_rom: failed to find romfile "pxe-rtl8139.rom"
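    One workaround worth trying, sketched below as an assumption rather than a confirmed fix: take the emulated lsi SCSI controller out of the boot path by putting the boot disk on the virtio bus instead. The device name vda is illustrative; the source file is copied from the config above.

      <disk type='file' device='disk'>
        <driver name='qemu' type='qcow2'/>
        <source file='/var/lib/one//203/images/disk.0'/>
        <target dev='vda' bus='virtio'/>
      </disk>

    Apply with "virsh define <domain-xml-file>" followed by a fresh "virsh start", or adjust the OpenNebula template so the deployment file is regenerated with the new bus.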


  • Installing .NET application on IIS 7.5 issues

    - by Juw
    Really need some help here. I am at a loss. I am trying to install a web service that some other guy wrote in .NET. I have some basic IIS understanding. The web service works just fine on my dev computer, but now I am trying to move it to a production server and bad things happen. The web service was located in the C:\inetpub\wwwroot\ dir on the dev server, but on this production server it is to be located in D:\services\. I have managed to install an application on the production server and everything seems fine and dandy, but when I "Test Settings" in the initial setup I get an "Invalid application path" error. I can just close that down and still install it. However, when I try to access the web service with http://myserver.com/webservice/GetData, nothing happens: just a blank page, and when I check the response headers, a 500 error. I don't know what is going on here or where the problem is. I am posting the config file here so someone hopefully might notice something odd. Thanks in advance!

    EDIT: The config file is from my dev server. I just copied it to my production server... but that obviously didn't work :-)

    UPDATE: I noticed that my dev server runs in an application pool with .NET 4 in "classic" mode. On the production server it was .NET 4 but in "integrated" mode, so I changed it to "classic". I still get a blank page, but checking the log outputs this:

      2012-10-03 14:57:00 ip removed GET /boo/GetData - 80 - ip removed Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:15.0)+Gecko/20100101+Firefox/15.0.1 404 2 1260 203

      <?xml version="1.0" encoding="UTF-8"?>
      <configuration>
        <system.web>
          <identity impersonate="true" /> <!-- Impersonate NT AUTHORITY/IUSR -->
          <compilation targetFramework="4.0">
            <assemblies>
              <add assembly="System.Data.Entity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b7735c561131e089" />
            </assemblies>
          </compilation>
          <pages controlRenderingCompatibilityVersion="3.5" clientIDMode="AutoID" />
        </system.web>
        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
          <httpErrors existingResponse="PassThrough" />
          <httpProtocol>
            <customHeaders>
              <add name="Access-Control-Allow-Origin" value="*" />
            </customHeaders>
          </httpProtocol>
          <directoryBrowse enabled="false" />
        </system.webServer>
        <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
          <standardEndpoints>
            <webHttpEndpoint>
              <!-- Configure the WCF REST service base address via the global.asax.cs file and the default endpoint via the attributes on the <standardEndpoint> element below -->
              <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true" />
            </webHttpEndpoint>
          </standardEndpoints>
        </system.serviceModel>
        <connectionStrings>
          <add name="Entities" connectionString="metadata=res://*/DataModel.csdl|res://*/DataModel.ssdl|res://*/DataModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=someip;initial catalog=db_90;User ID=user1;Password=access2;multipleactiveresultsets=True;App=EntityFramework&quot;" providerName="System.Data.EntityClient" />
        </connectionStrings>
      </configuration>
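    Two hedged observations, not a confirmed diagnosis: extensionless WCF REST routes like /GetData are normally served by the integrated pipeline, so the switch to "classic" mode plausibly explains the 404 in the log, and a 500 on an otherwise blank page often means ASP.NET 4 was never registered with IIS on the new box. A minimal sketch of both checks (the application pool name is illustrative):

      rem Register ASP.NET 4 with IIS, in case the production server never had it wired up
      %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i

      rem Put the pool back on the integrated pipeline, matching what the WCF REST endpoints expect
      %windir%\system32\inetsrv\appcmd.exe set apppool "webservice" /managedPipelineMode:Integrated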


  • Windows Media Player won't launch on Vista - how to repair or reinstall it?

    - by rpm1200
    My friend asked me to look at her Acer Aspire laptop with Vista Home Premium as it is no longer playing DVDs. I found that Windows Media Player would not launch. I found this thread, which contained a number of suggestions, none of which solved the problem. Here is what I tried:

    - Tried running WMP via the desktop shortcut, the QuickLaunch bar, or going to Program Files\Windows Media Player\wmplayer.exe. In all cases, wmplayer would launch then terminate immediately (verified through the Processes tab in Task Manager).
    - Tried running wmplayer.exe as Administrator. The UAC dialog would come up, I'd approve, then wmplayer would launch and terminate immediately.
    - Uninstalled all non-Microsoft media programs except RealPlayer, iTunes, QuickTime, and Acer Arcade (the laptop owner uses all those apps).
    - Tried running Program Files\Windows Media Player\setup_wm.exe as Administrator; it launched but said that a newer version of WMP was already installed.
    - Deleted the "Windows Media" folder located under %userprofile%\appdata\local\Microsoft, then tried starting WMP; wmplayer would launch and terminate immediately.
    - Registered wmp.dll by typing "regsvr32 wmp.dll" in an Administrator cmd window, then tried starting WMP; wmplayer would launch and terminate immediately.
    - Ran "SFC /SCANFILE" in an Administrator cmd window; it reported that it found invalid system files and could not fix them, and to look at the log file cbs.log. The log file shows that there are broken files associated with Windows Sidebar (which the user does not use) but none relating to WMP.
    - Logged off to Safe Mode and ran "SFC /SCANFILE" in an Administrator cmd window again; same results.
    - Tried to download and install XP WMP; the microsoft.com site recognizes the OS as Genuine and allows the download, but when I launch the installer it says the system is not Genuine. Clicking the link directs me back to IE, where I can authenticate the system as Genuine. The installer still fails to recognize the system as Genuine. It is a Genuine Vista installation.
    - Tried to run this update (KB931621). The installer said it did not apply to the system.
    - Set Windows Media Player as default in Program Access and Defaults. Same results.
    - Tried running "for %a in (%systemroot%\system32\wm*.dll) do regsvr32 /s %a" in an Administrator cmd window; same results.
    - Went to this Knowledge Base article (947541) and ran the Microsoft Fix It. The Fix It ran successfully, but WMP would still launch and terminate immediately.
    - Rebooted multiple times in the process of doing all of these steps.

    After all this, I looked in the Application and Security logs. No events pertaining to WMP were logged. The computer was preinstalled with Vista Home Premium and I have the Acer backup DVDs which will reimage the drive. I do not have Vista install DVDs, and reimaging the system is not an option. I'd also rather not restore the system to an earlier point unless it's absolutely necessary. What else can I do to repair or reinstall WMP?


  • Network update solutions for a company of ~20 (5 local, 15 remote)?

    - by Margaret
    Hi all. This is probably going to be a bit up in the air, because we're still in the "reaching towards solutions" phase, but I figured I'd see what you guys had to say. Plus, I honestly know very little about systems and what is good and bad practice.

    My organisation has always more or less worked on the concept of local machines; since it primarily employed contractors who were working from home, each of those people was largely responsible for their own machine, backup procedures, and the like. We're now expanding, though we're still reasonably small (we're up to about 20 staff members). Most people still work remotely, but we have a central office where about five people are working. We're getting large enough that we're starting to think it would be a good idea to have a central file server and things like that: if someone gets hit by a bus, we want someone else to know where to look for the files to continue their work.

    A lot of the people who work for us remotely work on projects for other companies as well, so I don't want to force them to log in to our server whenever they're on a network. But I do want to make connecting as painless as possible, to improve utilisation. The other thing is that we're getting more people who would like to remote into the office server and do their work there. Our current remote connection application is an SSH install that allows people access to the network; the problem is, it's a black box to me, and I've never understood how to even connect to it (despite supposedly being de facto sysadmin). Thus far I've been able to bounce questions about how to get it working to the guy who does know it well, but he's leaving the company soon. So we probably need a solution for this that I actually understand. We were knocking around the idea of implementing a VPN with some form of remote desktop, and someone mentioned that this was largely a matter of purchasing a router capable of it; I'm not sure of the truth of that statement.

    This is what we have in the office:

    - Two shiny new i7 servers, each running Windows Server 2008. The precise eventual layout is still being debated a little, but the current suggestion is that one does the primary database crunching, while the other is a warm backup of the databases and also runs Reporting Services. They currently have SQL Server 2008 installed on them, which is being connected to via the 'sa' account. We're hoping to make each person use their own account (preferably one tied to the 'central' password we set up, so we can use Windows Authentication).
    - An older server, running XP Pro, that we are currently using as a test bed for a project that requires access to older versions of software. This machine is also being used to take backups, but I'm thinking of moving that functionality elsewhere.
    - A spare desktop from a guy who left the company (XP Pro). We're thinking of bumping up the hard disk space and using it as the magical file server that's going to solve everything.
    - Assorted desktops, laptops, etc., at least one for each person in the office (a mix of Win XP and Win 7; occasionally a person who normally works remotely might drop in to the office and bring a laptop bearing Vista, but it's pretty rare). All are set up as local user accounts at the moment; I don't know if it's the best arrangement.

    Purchasing more hardware is not a big problem, but we figure we might as well make use of what we've got first.
    Is Active Directory a big magic wand that's going to solve all the world's problems? Is there some other arrangement we should be looking to instead?


  • Set source address to use tun device does not work (Debian Squeeze)

    - by A. Donda
    There have been similar questions on StackExchange, but none of the answers helped me, so I'll try a question of my own.

    I have a VPN connection via OpenVPN. By default, all traffic is redirected through the tunnel using OpenVPN's "two more specific routes" trick, but I disabled that. My routing table is like this:

      198.144.156.141 192.168.2.1     255.255.255.255 UGH 0 0 0 eth0
      10.30.92.5      0.0.0.0         255.255.255.255 UH  0 0 0 tun1
      10.30.92.1      10.30.92.5      255.255.255.255 UGH 0 0 0 tun1
      192.168.2.0     0.0.0.0         255.255.255.0   U   0 0 0 eth0
      0.0.0.0         10.30.92.5      0.0.0.0         UG  0 0 0 tun1
      0.0.0.0         192.168.2.1     0.0.0.0         UG  0 0 0 eth0

    And the interface configuration is like this:

      # ifconfig
      eth0  Link encap:Ethernet HWaddr XX-XX-
            inet addr:192.168.2.100 Bcast:192.168.2.255 Mask:255.255.255.0
            inet6 addr: fe80::211:9ff:fe8d:acbd/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            RX packets:394869 errors:0 dropped:0 overruns:0 frame:0
            TX packets:293489 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:388519578 (370.5 MiB) TX bytes:148817487 (141.9 MiB)
            Interrupt:20 Base address:0x6f00

      tun1  Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
            inet addr:10.30.92.6 P-t-P:10.30.92.5 Mask:255.255.255.255
            UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
            RX packets:64 errors:0 dropped:0 overruns:0 frame:0
            TX packets:67 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:100
            RX bytes:9885 (9.6 KiB) TX bytes:4380 (4.2 KiB)

    plus the lo device. The routing table has two default routes: one via eth0 through my local network router (DSL modem) at 192.168.2.1, and another via tun1 through the VPN's gateway. With this configuration, if I connect to a site, the direct route is chosen (because it has fewer hops?):

      # traceroute 8.8.8.8 -n
      traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
       1  192.168.2.1    0.427 ms  0.491 ms  0.610 ms
       2  213.191.89.13  17.981 ms  20.137 ms  22.141 ms
       3  62.109.108.48  23.681 ms  25.009 ms  26.401 ms
      ...

    This is fine, because my goal is to send only traffic from specific applications through the tunnel (esp. transmission, using its -i / bind-address-ipv4 option). To test whether this can work at all, I check it first with traceroute's -s option:

      # traceroute 8.8.8.8 -n -s 10.30.92.6
      traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
       1  * * *
       2  * * *
       3  * * *
      ...

    This I take to mean that connecting with the tunnel's local address as source is not possible. What is possible (though only as root) is to specify the source interface:

      # traceroute 8.8.8.8 -n -i tun1
      traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
       1  10.30.92.1      129.337 ms  297.758 ms  297.725 ms
       2  * * *
       3  198.144.152.17  297.653 ms  297.652 ms  297.650 ms
      ...

    So apparently the tun1 interface is working and it is possible to send packets through it. But selecting the source interface is not implemented in my actual target application (transmission), so I would like to get source address selection to work. What am I doing wrong?
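    A minimal policy-routing sketch of the usual fix, offered as an assumption rather than a verified answer for this setup (the table name "vpn" and number 100 are illustrative; the addresses come from the question): packets whose source address is the tunnel address need their own routing table whose default route points into tun1.

      echo "100 vpn" >> /etc/iproute2/rt_tables
      ip route add default via 10.30.92.5 dev tun1 table vpn
      ip rule add from 10.30.92.6 lookup vpn
      # loosen reverse-path filtering so replies arriving on tun1 aren't dropped
      sysctl -w net.ipv4.conf.all.rp_filter=2

    With that in place, a process bound to 10.30.92.6 (e.g. transmission with bind-address-ipv4) should have its traffic routed through the tunnel instead of the eth0 default route.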


  • Network config / gear question

    - by mcgee1234
    I have been tasked with setting up a fairly straightforward rack in a data center (we do not even need a whole rack, but this is the smallest allotment available). In a nutshell, 4 to 6 servers need to be able to reach 2 (maybe 3) vendors, and the servers need to be reachable over the internet. A little more detail: the networks the servers need to reach are inside of the data center and are "trusted". Connections to these networks will be achieved through intra-data-center cross connects. It is kind of like a manufacturing line where we receive data from one vendor (burstable up to 200 Mbits), churn through it on the servers, and then send out data to another vendor (bursts up to 20 Mbits). This series of events is very latency sensitive, so much so that it is common practice not to use NAT or a firewall on these segments (or so I hear). To reach the servers over the internet, I plan to use a site-to-site VPN. (This part is only relevant as far as hardware selection goes.)

    I have 2 configurations in mind:

    1. A Cisco 2911 (2921) (with the additional WAN ports module) and a layer 2 switch; in this scenario, I would also use the router for VPN.
    2. A Cisco 3560 layer 3 switch to interconnect the networks inside of the data center, and an ASA 5510 (which is total overkill, but the 5505 is not rack mountable) as a firewall for the WAN side (internet) and VPN.

    I envision the second setup to be as follows:

      Internet - ASA - 3560
      Vendors - 3560 - Servers

    The general idea is that the ASA acts as a firewall and VPN device and the 3560 does all the heavy lifting. The first is a fairly traditional setup, but my concern is performance. The second is somewhat unorthodox in that the vendors are directly connected to the layer 3 switch without passing through a firewall. Based on my understanding, however, a layer 3 switch will perform substantially better because it will do hardware (ASIC) rather than software switching. (Note that number 2 is a little over the budget, but not unworkable (double negative, ugh).)

    Since this is my first time dealing with a data center, I am not sure what the IP space is going to look like. I suspect I will retain a block (or blocks) of public IPs, VLAN them to individual interfaces for the vendor connections and the servers (which will not be reachable from the WAN side, of course), and set up routing on the switch. So here are my questions:

    1. Is there a substantial performance difference between 1 and 2, i.e. hardware-based switching on a layer 3 switch vs. software-based switching on the 2911? I have trawled the internet and found a lot of Cisco literature, but nothing that I could really use to get a good handle on it.
    2. The vendors we connect to are secure and trusted (famous last words), and as I understand it, it is common practice not to NAT or firewall these connections (because of the aforementioned latency sensitivity). But what kind of latency are we really talking about if I push the data through a router (or even the ASA, for that matter)? For our purposes, 5 ms will not kill us; 20 or 30 can be very costly. Others measure in microseconds, but they are out of our league.
    3. Are there any issues with using public IPs on a layer 3 switch?

    I am certainly not married to either of these configs, and I am totally open to any ideas. My knowledge (and I use the term loosely) is largely from books, so I welcome any advice / insight. Thanks in advance.


  • How broken is routing strategy that causes a martian packet (so far only) during tracepath?

    - by lkraav
    I believe I've achieved a table that routes packets from and to eth1/192.168.3.x through 192.168.3.1, and packets from and to eth0/192.168.1.x through 192.168.1.1 (helpful source).

    Question: when doing tracepath from 192.168.3.20 (from within a vserver), I'm getting

      kernel: [318535.927489] martian source 192.168.3.20 from 212.47.223.33, on dev eth0

    at or near the target IP, while intermediary hops pass without it (log below). I don't understand why this packet is arriving on eth0 instead of eth1, even after reading this:

      Note that you may see packets from non-routable IP addresses when running the traceroute or tracepath commands. While packets cannot be routed to these routers, packets sent between 2 routers only need to know the address of the next hop within the local networks, which could be a non-routable address.

    Can someone explain that paragraph in human language? Based on short initial trials so far, everything else seems to work without causing martians. Is this contained to the nature of tracepath operation, or do I have some other bigger routing problem that will cause work traffic breakage?

    Side note: is it possible to inspect a martian packet with tcpdump or wireshark or anything of the sort? I have not been able to get it to show up on my own.

      vserver-20 / # tracepath -n 212.47.223.33
       1:  192.168.3.2      0.064ms pmtu 1500
       1:  192.168.3.1      1.076ms
       1:  192.168.3.1      1.259ms
       2:  90.191.8.2       1.908ms
       3:  90.190.134.194   2.595ms
       4:  194.126.123.94   2.136ms asymm  5
       5:  195.250.170.22   2.266ms asymm  6
       6:  212.47.201.86    2.390ms asymm  7
       7:  no reply
       8:  no reply
       9:  no reply
      ^C

    Host routing:

      $ sudo ip addr
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
      2: sit0: <NOARP> mtu 1480 qdisc noop state DOWN
          link/sit 0.0.0.0 brd 0.0.0.0
      3: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
          link/ether 00:24:1d:de:b3:5d brd ff:ff:ff:ff:ff:ff
          inet 192.168.1.2/24 scope global eth0
      4: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
          link/ether 00:0c:46:46:a3:6a brd ff:ff:ff:ff:ff:ff
          inet 192.168.3.2/27 scope global eth1
          inet 192.168.3.20/27 brd 192.168.3.31 scope global secondary eth1 # linux-vserver instance

      $ sudo ip route
      default via 192.168.1.1 dev eth0 metric 3
      unreachable 127.0.0.0/8 scope host
      192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.2
      192.168.3.0/27 dev eth1 proto kernel scope link src 192.168.3.2

      $ sudo ip rule
      0:     from all lookup local
      32764: from all to 192.168.3.0/27 lookup dmz
      32765: from 192.168.3.0/27 lookup dmz
      32766: from all lookup main
      32767: from all lookup default

      $ sudo ip route show table dmz
      default via 192.168.3.1 dev eth1 metric 4
      192.168.3.0/27 dev eth1 scope link metric 4

    Gateway routing:

      # ip route
      10.24.0.2 dev tun0 proto kernel scope link src 10.24.0.1
      10.24.0.0/24 via 10.24.0.2 dev tun0
      192.168.3.0/24 dev br-dmz proto kernel scope link src 192.168.3.1
      192.168.1.0/24 dev br-lan proto kernel scope link src 192.168.1.1
      $ISP_NET/23 dev eth0.1 proto kernel scope link src $WAN_IP
      default via $ISP_GW dev eth0.1

    Additional background: Options for non-virtualized network interface isolation?
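    On the side note, a minimal sketch of two ways to make the martians visible (a suggestion, assuming root on the host; the addresses are the ones from the question):

      # have the kernel log every martian with source, destination and ingress interface
      sysctl -w net.ipv4.conf.all.log_martians=1

      # capture the offending packets on the interface the kernel names (eth0 here);
      # -e prints the link-level header so you can see which MAC delivered them
      tcpdump -n -e -i eth0 host 212.47.223.33

    Since martians are logged on ingress, capturing on the receiving interface (eth0) rather than the expected one (eth1) is what should show them.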


  • The dynamic Type in C# Simplifies COM Member Access from Visual FoxPro

    - by Rick Strahl
    I've written quite a bit about Visual FoxPro interoperating with .NET in the past, both for ASP.NET interacting with Visual FoxPro COM objects and for Visual FoxPro calling into .NET code via COM Interop. COM Interop with Visual FoxPro has a number of problems, but one of them at least got a lot easier with the introduction of dynamic type support in .NET.

    One of the biggest problems with COM Interop has been that it's been really difficult to pass dynamic objects from FoxPro to .NET and get them properly typed. The only way that any strong typing can occur in .NET for FoxPro components is via COM type library exports of Visual FoxPro components. Due to limitations in Visual FoxPro's type library support, as well as the dynamic nature of the Visual FoxPro language where few things are or can be described in the form of a COM type library, a lot of useful interaction between FoxPro and .NET required the use of messy Reflection code in .NET. Reflection is .NET's base interface to runtime type discovery and dynamic execution of code without requiring strong typing. In FoxPro terms it's similar to EVALUATE() functionality, albeit with a much more complex API and corresponding syntax. The Reflection APIs are fairly powerful, but they are rather awkward to use and require a lot of code. Even with the creation of wrapper utility classes for common EVAL() style Reflection functionality, dynamically accessing COM objects passed to .NET is often pretty tedious and ugly.

    Let's look at a simple example. In the following code I use some FoxPro code to dynamically create an object in code and then pass this object to .NET. An alternative might be to create a new object on the fly by using SCATTER NAME on a database record. How the object is created is inconsequential, other than the fact that it's not defined as a COM object: it's a pure FoxPro object that is passed to .NET. Here's the code:

      *** Create .NET COM Instance
      loNet = CREATEOBJECT('DotNetCom.DotNetComPublisher')

      *** Create a Customer Object Instance (factory method)
      loCustomer = GetCustomer()
      loCustomer.Name = "Rick Strahl"
      loCustomer.Company = "West Wind Technologies"
      loCustomer.creditLimit = 9999999999.99
      loCustomer.Address.StreetAddress = "32 Kaiea Place"
      loCustomer.Address.Phone = "808 579-8342"
      loCustomer.Address.Email = "[email protected]"

      *** Pass Fox Object and echo back values
      ? loNet.PassRecordObject(loCustomer)

      RETURN

      FUNCTION GetCustomer
      LOCAL loCustomer, loAddress
      loCustomer = CREATEOBJECT("EMPTY")
      ADDPROPERTY(loCustomer,"Name","")
      ADDPROPERTY(loCustomer,"Company","")
      ADDPROPERTY(loCustomer,"CreditLimit",0.00)
      ADDPROPERTY(loCustomer,"Entered",DATETIME())
      loAddress = CREATEOBJECT("Empty")
      ADDPROPERTY(loAddress,"StreetAddress","")
      ADDPROPERTY(loAddress,"Phone","")
      ADDPROPERTY(loAddress,"Email","")
      ADDPROPERTY(loCustomer,"Address",loAddress)
      RETURN loCustomer
      ENDFUNC

    Now, prior to .NET 4.0 you'd have to access this object passed to .NET via Reflection, and the method code to do this would look something like this in the .NET component:

      public string PassRecordObject(object FoxObject)
      {
          // *** using raw Reflection
          string Company = (string) FoxObject.GetType().InvokeMember(
              "Company",
              BindingFlags.GetProperty, null,
              FoxObject, null);

          // using the easier ComUtils wrappers
          string Name = (string) ComUtils.GetProperty(FoxObject, "Name");

          // Getting Address object - then getting child properties
          object Address = ComUtils.GetProperty(FoxObject, "Address");
          string Street = (string) ComUtils.GetProperty(Address, "StreetAddress");

          // using ComUtils 'Ex' functions you can use . syntax
          string StreetAddress = (string) ComUtils.GetPropertyEx(FoxObject, "Address.StreetAddress");

          return Name + Environment.NewLine +
                 Company + Environment.NewLine +
                 StreetAddress + Environment.NewLine + " FOX";
      }

    Note that the FoxObject is passed in as type object, which has no specific type. Since the object doesn't exist in .NET as a type signature, it is passed without any specific type information as plain, non-descript object. To retrieve a property, Reflection APIs like Type.InvokeMember or Type.GetProperty().GetValue() etc. need to be used. I made this code a little simpler by using the Reflection wrappers I mentioned earlier, but even with those ComUtils calls the code is pretty ugly, requiring passing the objects for each call and casting each element.

    Using .NET 4.0 Dynamic Typing makes this Code a lot cleaner

    Enter .NET 4.0 and the dynamic type. Changing the input parameter of the .NET method from type object to dynamic makes the code that accesses the FoxPro component inside of .NET much more natural:

      public string PassRecordObjectDynamic(dynamic FoxObject)
      {
          // *** dynamic dispatch replaces the raw Reflection calls
          string Company = FoxObject.Company;
          string Name = FoxObject.Name;

          // *** '.' syntax works across child objects as well
          string Address = FoxObject.Address.StreetAddress;

          return Name + Environment.NewLine +
                 Company + Environment.NewLine +
                 Address + Environment.NewLine + " FOX";
      }

    As you can see, the parameter is of type dynamic, which, as the name implies, performs Reflection lookups and evaluation on the fly, so all the Reflection code in the last example goes away. The code can use regular object '.' syntax to reference each of the members of the object. You can access properties and call methods this way using natural object language. Also note that all the type casts that were required in the Reflection code go away; dynamic types, like var, can infer the type to cast to based on the target assignment. As long as the type can be inferred by the compiler at compile time (i.e. the left side of the expression is strongly typed), no explicit casts are required. Note that although you get to use plain object syntax in the code above, you don't get IntelliSense in Visual Studio, because the type is dynamic and thus has no hard type definition in .NET.
    The above example calls a .NET component from VFP, but it also works the other way around. Another frequent scenario is .NET code calling into a FoxPro COM object that returns a dynamic result. Assume you have a FoxPro COM object that returns a FoxPro cursor record as an object:

      DEFINE CLASS FoxData AS SESSION OlePublic

      cAppStartPath = ""

      FUNCTION INIT
      THIS.cAppStartPath = ADDBS( JustPath(Application.ServerName) )
      SET PATH TO ( THIS.cAppStartpath )
      ENDFUNC

      FUNCTION GetRecord(lnPk)
      LOCAL loCustomer

      SELECT * FROM tt_Cust WHERE pk = lnPk ;
          INTO CURSOR TCustomer

      IF _TALLY < 1
         RETURN NULL
      ENDIF

      SCATTER NAME loCustomer MEMO
      RETURN loCustomer
      ENDFUNC

      ENDDEFINE

    If you call this from a .NET application, you can now retrieve this data via COM Interop and cast the result as dynamic to simplify the data access of the dynamic FoxPro type that was created on the fly:

      int pk = 0;
      int.TryParse(Request.QueryString["id"], out pk);

      // Create Fox COM Object with COM Callable Wrapper
      FoxData foxData = new FoxData();

      dynamic foxRecord = foxData.GetRecord(pk);
      string company = foxRecord.Company;
      DateTime entered = foxRecord.Entered;

    This code looks simple and natural, as it should be; heck, you could write code like this in days long gone by in scripting languages like classic ASP, for example. Compared to the Reflection code that was previously necessary to run similar code, this is much easier to write, understand, and maintain.

    For COM Interop and Visual FoxPro operation, dynamic type support in .NET 4.0 is a huge improvement, and it certainly makes it much easier to deal with FoxPro code that calls into .NET, regardless of whether you're using COM for calling Visual FoxPro objects from .NET (ASP.NET calling a COM component and getting a dynamic result returned) or whether FoxPro code is calling into a .NET COM component from a FoxPro desktop application. At one point or another FoxPro likely ends up passing complex dynamic data to .NET, and for this the dynamic typing makes coding much cleaner and more readable, without having to create custom Reflection wrappers. As a bonus, the dynamic runtime that underlies the dynamic type is fairly efficient in terms of making Reflection calls, especially if members are repeatedly accessed.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in COM, FoxPro, .NET, CSharp
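    One caveat worth adding as an aside (an editorial addition, not part of the article): because dynamic member access is resolved only at runtime, a misspelled or missing member on the FoxPro object fails when the line executes rather than at compile time. A minimal sketch of defensive access, assuming a dynamic COM result like foxRecord above:

      using System;
      using System.Runtime.InteropServices;
      using Microsoft.CSharp.RuntimeBinder;

      public static class DynamicSafety
      {
          public static string SafeCompany(dynamic foxRecord)
          {
              try
              {
                  return foxRecord.Company; // bound at runtime, not compile time
              }
              catch (RuntimeBinderException)
              {
                  // thrown for plain .NET dynamic objects when the member is missing
                  return string.Empty;
              }
              catch (COMException)
              {
                  // COM IDispatch objects surface unknown members as COM errors instead
                  return string.Empty;
              }
          }
      }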


  • From Binary to Data Structures

    - by Cédric Menzi
    Table of Contents

    - Introduction
    - PE file format and COFF header
    - COFF file header
    - BaseCoffReader
    - Byte4ByteCoffReader
    - UnsafeCoffReader
    - ManagedCoffReader
    - Conclusion
    - History

    This article is also available on CodeProject.

    Introduction

    Sometimes you want to parse well-formed binary data and bring it into your objects to do some dirty stuff with it. In the Windows world, most data structures are stored in a special binary format. Either we call a WinAPI function, or we want to read from special files like images, spool files, executables, or maybe the previously announced Outlook Personal Folders File. Most specifications for these files can be found in the MSDN Library: Open Specification. In my example, we are going to get the COFF (Common Object File Format) file header from a PE (Portable Executable). The exact specification can be found here: PECOFF.

    PE file format and COFF header

    Before we start, we need to know how this file is formatted. The referenced figure shows an overview of the Microsoft PE executable format (source: Microsoft). Our goal is to get the PE header. As we can see, the image starts with an MS-DOS 2.0 header, which is not important for us. From the documentation we can read: "...After the MS-DOS stub, at the file offset specified at offset 0x3c, is a 4-byte...". With this information we know our reader has to jump to location 0x3c and read the offset to the signature. The signature is always 4 bytes, which ensures that the image is a PE file. The signature is: PE\0\0. To prove this, we first seek to the offset 0x3c and check whether the file contains the signature. So we need to declare some constants, because we do not want magic numbers:

      private const int PeSignatureOffsetLocation = 0x3c;
      private const int PeSignatureSize = 4;
      private const string PeSignatureContent = "PE";

    Then we write a method for moving the reader to the correct location to read the offset of the signature. With this method we always move the underlying Stream of the BinaryReader to the start location of the PE signature:

      private void SeekToPeSignature(BinaryReader br)
      {
          // seek to the offset for the PE signature
          br.BaseStream.Seek(PeSignatureOffsetLocation, SeekOrigin.Begin);

          // read the offset
          int offsetToPeSig = br.ReadInt32();

          // seek to the start of the PE signature
          br.BaseStream.Seek(offsetToPeSig, SeekOrigin.Begin);
      }

    Now we can check if it is a valid PE image by reading whether the next 4 bytes contain the content PE:

      private bool IsValidPeSignature(BinaryReader br)
      {
          // read 4 bytes to get the PE signature
          byte[] peSigBytes = br.ReadBytes(PeSignatureSize);

          // convert it to a string and trim \0 at the end of the content
          string peContent = Encoding.Default.GetString(peSigBytes).TrimEnd('\0');

          // check if PE is in the content
          return peContent.Equals(PeSignatureContent);
      }

    With this basic functionality we have a good base reader class to try the different methods of parsing the COFF file header.
    COFF file header

    The COFF header has the following structure:

      Offset  Size  Field
      0       2     Machine
      2       2     NumberOfSections
      4       4     TimeDateStamp
      8       4     PointerToSymbolTable
      12      4     NumberOfSymbols
      16      2     SizeOfOptionalHeader
      18      2     Characteristics

    If we translate this table to code, we get something like this:

      [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
      public struct CoffHeader
      {
          public MachineType Machine;
          public ushort NumberOfSections;
          public uint TimeDateStamp;
          public uint PointerToSymbolTable;
          public uint NumberOfSymbols;
          public ushort SizeOfOptionalHeader;
          public Characteristic Characteristics;
      }

    BaseCoffReader

    All readers do the same thing, so we go to the patterns library in our head and see that the Strategy pattern and the Template Method pattern stick out of the bookshelf. I have decided to use the Template Method pattern in this case, because Parse() should handle the IO for all implementations, and the concrete parsing should be done in its derived classes:

      public CoffHeader Parse()
      {
          using (var br = new BinaryReader(File.Open(_fileName, FileMode.Open, FileAccess.Read, FileShare.Read)))
          {
              SeekToPeSignature(br);
              if (!IsValidPeSignature(br))
              {
                  throw new BadImageFormatException();
              }
              return ParseInternal(br);
          }
      }

      protected abstract CoffHeader ParseInternal(BinaryReader br);

    First we open the BinaryReader and seek to the PE signature, then we check if it contains a valid PE signature, and the rest is done by the derived implementations.

    Byte4ByteCoffReader

    The first solution uses the BinaryReader. It is the general way to get the data. We only need to know the order, the data type, and its size. If we read byte for byte, we could comment out the first line in the CoffHeader structure, because we have control over the order of the member assignment:

      protected override CoffHeader ParseInternal(BinaryReader br)
      {
          CoffHeader coff = new CoffHeader();
          coff.Machine = (MachineType)br.ReadInt16();
          coff.NumberOfSections = (ushort)br.ReadInt16();
          coff.TimeDateStamp = br.ReadUInt32();
          coff.PointerToSymbolTable = br.ReadUInt32();
          coff.NumberOfSymbols = br.ReadUInt32();
          coff.SizeOfOptionalHeader = (ushort)br.ReadInt16();
          coff.Characteristics = (Characteristic)br.ReadInt16();
          return coff;
      }

    If the structure is as short as the COFF header here and the specification will never change, there is probably no reason to change the strategy. But if a data type is changed, a new member is added, or the ordering of members changes, the maintenance costs of this method are very high.

    UnsafeCoffReader

    Another way to bring the data into this structure is using a "magical" unsafe trick. As above, we know the layout and order of the data structure. Now we need the StructLayout attribute, because we have to ensure that the .NET runtime allocates the structure in the same order as it is specified in the source code. We also need to enable "Allow unsafe code (/unsafe)" in the project's build properties. Then we need to add the following constructor to the CoffHeader structure:

      [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
      public struct CoffHeader
      {
          public CoffHeader(byte[] data)
          {
              unsafe
              {
                  fixed (byte* packet = &data[0])
                  {
                      this = *(CoffHeader*)packet;
                  }
              }
          }
      }

    The "magic" trick is in the statement: this = *(CoffHeader*)packet;. What happens here? We have a fixed-size block of data somewhere in memory, and because a struct in C# is a value type, the assignment operator = copies the whole data of the structure, not only the reference.
    To fill the structure with data, we need to pass the data as bytes into the CoffHeader structure. This can be achieved by reading the exact size of the structure from the PE file:

      protected override CoffHeader ParseInternal(BinaryReader br)
      {
          return new CoffHeader(br.ReadBytes(Marshal.SizeOf(typeof(CoffHeader))));
      }

    This solution is the fastest way to parse the data and bring it into the structure, but it is unsafe, and it could introduce some security and stability risks.

    ManagedCoffReader

    In this solution we use the same structure-assignment approach as above, but we replace the unsafe part in the constructor with the following managed part:

      [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
      public struct CoffHeader
      {
          public CoffHeader(byte[] data)
          {
              IntPtr coffPtr = IntPtr.Zero;
              try
              {
                  int size = Marshal.SizeOf(typeof(CoffHeader));
                  coffPtr = Marshal.AllocHGlobal(size);
                  Marshal.Copy(data, 0, coffPtr, size);
                  this = (CoffHeader)Marshal.PtrToStructure(coffPtr, typeof(CoffHeader));
              }
              finally
              {
                  Marshal.FreeHGlobal(coffPtr);
              }
          }
      }

    Conclusion

    We saw that we can parse well-formed binary data into our data structures using different approaches. The first is probably the clearest, because we know each member, its size, and its ordering, and we have control over reading the data for each member. But if a member is added or the structure changes for some reason, we need to change the reader. The two other solutions use the structure-assignment approach. In the unsafe implementation we need to compile the project with the /unsafe option: we increase the performance, but we take on some security risks.
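    A short usage sketch (an added example, not from the article; it assumes the concrete readers take the file name through the base class constructor, which the article's _fileName field implies):

      // Parse the COFF header of an executable with the managed reader variant
      var reader = new ManagedCoffReader(@"C:\Windows\notepad.exe");
      CoffHeader header = reader.Parse();
      Console.WriteLine("Machine: {0}, Sections: {1}, Timestamp: {2}",
          header.Machine, header.NumberOfSections, header.TimeDateStamp);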


  • Multi-tenant ASP.NET MVC – Introduction

    - by zowens
    I've read a few different blogs that talk about multi-tenancy and how to resolve some of the issues surrounding it. What I've come to realize is that these implementations overcomplicate the issues and give only a muddy implementation! I've seen some really illogical code out there. I have recently been building a multi-tenancy framework for internal use at eagleenvision.net. Through this process, I've realized a few different techniques that make building multi-tenant applications actually quite easy. I will be posting a few different entries on the issue and my personal implementation. In this first post, I will discuss what multi-tenancy means and how my implementation will be structured.

    So what's the problem?

    Here's the deal. Multi-tenancy is basically a technique for reuse of web application code. A multi-tenant application is an application that runs a single instance for multiple clients. Here the "client" means different URL bindings on IIS using ASP.NET MVC. The problem with different instances of the, essentially, same application is that you have to spin up different instances of ASP.NET. As the number of running instances of ASP.NET grows, so does the memory footprint of IIS. Stack Exchange shifted its architecture to multi-tenancy in March. As the blog post explains, multi-tenancy saves cost in terms of memory utilization and physical disc storage. If you use the same code base for many applications, multi-tenancy just makes sense. You'll reduce the amount of work it takes to synchronize the site implementations, and you'll thank your lucky stars later for choosing to use one application for multiple sites. Multi-tenancy allows the freedom of extensibility while relying on some pre-built code.

    You'd think this would be simple.

    I have actually seen a real lack of reference material on the subject in terms of ASP.NET MVC. This is somewhat surprising given the number of users of ASP.NET MVC. However, I will certainly fill the void ;). Implementing a multi-tenant application takes a little thinking. It's not straightforward, because the possibilities of implementation are endless. I have yet to see a great implementation of a multi-tenant MVC application. The only one that comes close to what I have in mind is Rob Ashton's implementation (all the entries are listed on this page). There's some really nasty code in there... something I'd really like to avoid. He has also written a library (MvcEx) that attempts to aid multi-tenant development. This code is even worse, in my honest opinion. Once I start seeing Reflection.Emit, I have to assume the worst :) In all seriousness, if his implementation makes sense to you, use it! It's a fine implementation that should be given a look; at least look at the code. I will reference MvcEx going forward as a comparison to my implementation. I will explain why my approach differs from MvcEx and how it is better or worse (hopefully better).

    Core Goals of my Multi-Tenant Implementation

    The first and foremost goal is to use Inversion of Control containers to my advantage. As you will see throughout this series, I pass around containers quite frequently and rely on their use heavily. I will be using StructureMap in my implementation. However, you could probably use your favorite IoC tool instead. <RANT> However, please don't be stupid and abstract your IoC tool. Each IoC is powerful, and by abstracting the capabilities, you're doing yourself a real disservice. Who in the world swaps out IoC tools...? No one! </RANT> (It had to be said.) I will outline some of the goodness of StructureMap as we go along. It is really an invaluable tool in my tool belt and simple to use in my multi-tenant implementation. The second core goal is to represent a tenant as easily as possible. Just as a dependency container will be a first-class citizen, so will a tenant. This allows us to easily extend and use tenants, and it will also allow different ways of "plugging in" tenants into your application. In my implementation, there will be a single dependency container for a single tenant, which enables isolation of the tenant's dependencies (see the sketch at the end of this post). The third goal is to use composition as a means to delegate "core" functions out to the tenant. More on this later.

    Features

    In MvcEx, "Modules" are a code element of the infrastructure. I have simplified this concept and named it "Features". A feature is a simple element of an application. Controllers can be specified to have a feature, and actions can have "sub-features". Each tenant can select the features it needs, and the other features will be hidden to the tenant's users. My implementation doesn't require something to be a feature. A controller can be common to all tenants. For example (as you will see), I have a "Content" controller that will return the CSS, JavaScript and images for a tenant. This is logic common to all tenants and shouldn't be hidden or considered a "feature"; Content is a core component.

    Up next

    My next post will be all about the code. I will reveal some of the foundation of the way I do multi-tenancy. I will have posts dedicated to Foundation, Controllers, Views, Caching, Content and how to set up the tenants. Each post will go in depth on the issues and implementation details, while adhering to my core goals outlined in this post. As always, comment with questions or DM me on twitter or send me an email.
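    To make the per-tenant container goal concrete, here is a minimal sketch of the idea (an illustration, not the author's actual foundation code; the tenant and repository types are assumptions):

      using System.Collections.Generic;
      using StructureMap;

      public class Tenant
      {
          public string HostName { get; set; }      // the URL binding that identifies the tenant
          public IContainer Container { get; set; } // the tenant's isolated dependency container
      }

      public static class TenantRegistry
      {
          private static readonly Dictionary<string, Tenant> _tenants = new Dictionary<string, Tenant>();

          public static void Register(string hostName, IContainer container)
          {
              _tenants[hostName] = new Tenant { HostName = hostName, Container = container };
          }

          public static Tenant ForHost(string hostName)
          {
              return _tenants[hostName]; // resolve the tenant from the request's host header
          }
      }

      // Usage: each tenant gets its own StructureMap container with its own registrations, e.g.
      // TenantRegistry.Register("tenant1.example.com",
      //     new Container(x => x.For<IProductRepository>().Use<SqlProductRepository>()));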


  • Metro: Introduction to the WinJS ListView Control

    - by Stephen.Walther
    The goal of this blog entry is to provide a quick introduction to the ListView control: just the bare minimum that you need to know to start using the control. When building Metro style applications using JavaScript, the ListView control is the primary control that you use for displaying lists of items. For example, if you are building a product catalog app, then you can use the ListView control to display the list of products.

    The ListView control supports several advanced features that I plan to discuss in future blog entries. For example, you can group the items in a ListView, you can create master/detail views with a ListView, and you can efficiently work with large sets of items with a ListView. In this blog entry, we'll keep things simple and focus on displaying a list of products.

    There are three things that you need to do in order to display a list of items with a ListView:

    1. Create a data source
    2. Create an Item Template
    3. Declare the ListView

    Creating the ListView Data Source

    The first step is to create (or retrieve) the data that you want to display with the ListView. In most scenarios, you will want to bind a ListView to a WinJS.Binding.List object. The nice thing about the WinJS.Binding.List object is that it enables you to take a standard JavaScript array and convert the array into something that can be bound to the ListView. It doesn't matter where the JavaScript array comes from: it could be a static array that you declare, or you could retrieve the array as the result of an Ajax call to a remote server. The following JavaScript file, named products.js, contains a list of products which can be bound to a ListView.

      (function () {
          "use strict";

          var products = new WinJS.Binding.List([
              { name: "Milk", price: 2.44 },
              { name: "Oranges", price: 1.99 },
              { name: "Wine", price: 8.55 },
              { name: "Apples", price: 2.44 },
              { name: "Steak", price: 1.99 },
              { name: "Eggs", price: 2.44 },
              { name: "Mushrooms", price: 1.99 },
              { name: "Yogurt", price: 2.44 },
              { name: "Soup", price: 1.99 },
              { name: "Cereal", price: 2.44 },
              { name: "Pepsi", price: 1.99 }
          ]);

          WinJS.Namespace.define("ListViewDemos", {
              products: products
          });
      })();

    The products variable represents a WinJS.Binding.List object. This object is initialized with a plain-old JavaScript array which represents an array of products. To avoid polluting the global namespace, the code above uses the module pattern and exposes the products using a namespace. The list of products is exposed to the world as ListViewDemos.products. To learn more about the module pattern and namespaces in WinJS, see my earlier blog entry: http://stephenwalther.com/blog/archive/2012/02/22/metro-namespaces-and-modules.aspx

    Creating the ListView Item Template

    The ListView control does not know how to render anything; it doesn't know how you want each list item to appear. To get the ListView control to render something useful, you must create an Item Template. Here's what our template for rendering an individual product looks like:

      <div id="productTemplate" data-win-control="WinJS.Binding.Template">
          <div class="product">
              <span data-win-bind="innerText:name"></span>
              <span data-win-bind="innerText:price"></span>
          </div>
      </div>

    This template displays the product name and price from the data source. Normally, you will declare your template in the same file as you declare the ListView control. In our case, both the template and the ListView are declared in the default.html file.
    To learn more about templates, see my earlier blog entry: http://stephenwalther.com/blog/archive/2012/02/27/metro-using-templates.aspx

    Declaring the ListView

    The final step is to declare the ListView control in a page. Here's the markup for declaring a ListView:

      <div data-win-control="WinJS.UI.ListView"
           data-win-options="{
               itemDataSource:ListViewDemos.products.dataSource,
               itemTemplate:select('#productTemplate')
           }">
      </div>

    You declare a ListView by adding the data-win-control attribute to an HTML DIV tag. The data-win-options attribute is used to set two properties of the ListView. The ListView is associated with its data source through the itemDataSource property. Notice that the data source is ListViewDemos.products.dataSource and not just ListViewDemos.products; you need to associate the ListView with the dataSource property. The ListView is associated with its item template with the help of the itemTemplate property. The ID of the item template, #productTemplate, is used to select the template from the page.

    Here's what the complete version of the default.html page looks like:

      <!DOCTYPE html>
      <html>
      <head>
          <meta charset="utf-8">
          <title>ListViewDemos</title>

          <!-- WinJS references -->
          <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
          <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
          <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>

          <!-- ListViewDemos references -->
          <link href="/css/default.css" rel="stylesheet">
          <script src="/js/default.js"></script>
          <script src="/js/products.js" type="text/javascript"></script>

          <style type="text/css">
              .product {
                  width: 200px;
                  height: 100px;
                  border: white solid 1px;
              }
          </style>
      </head>
      <body>
          <div id="productTemplate" data-win-control="WinJS.Binding.Template">
              <div class="product">
                  <span data-win-bind="innerText:name"></span>
                  <span data-win-bind="innerText:price"></span>
              </div>
          </div>

          <div data-win-control="WinJS.UI.ListView"
               data-win-options="{
                   itemDataSource:ListViewDemos.products.dataSource,
                   itemTemplate:select('#productTemplate')
               }">
          </div>
      </body>
      </html>

    Notice that the page above includes a reference to the products.js file:

      <script src="/js/products.js" type="text/javascript"></script>

    The page above also contains a Template control which contains the ListView item template. Finally, the page includes the declaration of the ListView control.

    Summary

    The goal of this blog entry was to describe the minimal set of steps which you must complete to use the WinJS ListView control to display a simple list of items. You learned how to create a data source, declare an item template, and declare a ListView control.
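    One detail the listing above takes for granted (an added note, not from the article): the data-win-control attributes are only instantiated after WinJS processes the page, which the Visual Studio project template normally wires up in the referenced default.js. A minimal sketch, assuming the standard template:

      // default.js - activate declaratively bound WinJS controls such as the ListView
      (function () {
          "use strict";

          var app = WinJS.Application;

          app.onactivated = function (args) {
              // processAll scans the DOM for data-win-control attributes
              // and instantiates the controls (Template, ListView, ...)
              args.setPromise(WinJS.UI.processAll());
          };

          app.start();
      })();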


  • NLog Exception Details Renderer

    - by jtimperley
    Originally posted on: http://geekswithblogs.net/jtimperley/archive/2013/07/28/nlog-exception-details-renderer.aspxI recently switch from Microsoft's Enterprise Library Logging block to NLog.  In my opinion, NLog offers a simpler and much cleaner configuration section with better use of placeholders, complemented by custom variables. Despite this, I found one deficiency in my migration; I had lost the ability to simply render all details of an exception into our logs and notification emails. This is easily remedied by implementing a custom layout renderer. Start by extending 'NLog.LayoutRenderers.LayoutRenderer' and overriding the 'Append' method. using System.Text; using NLog; using NLog.Config; using NLog.LayoutRenderers;   [ThreadAgnostic] [LayoutRenderer(Name)] public class ExceptionDetailsRenderer : LayoutRenderer { public const string Name = "exceptiondetails";   protected override void Append(StringBuilder builder, LogEventInfo logEvent) { // Todo: Append details to StringBuilder } }   Now that we have a base layout renderer, we simply need to add the formatting logic to add exception details as well as inner exception details. This is done using reflection with some simple filtering for the properties that are already being rendered. I have added an additional 'Register' method, allowing the definition to be registered in code, rather than in configuration files. This complements by 'LogWrapper' class which standardizes writing log entries throughout my applications. using System; using System.Collections.Generic; using System.Linq; using System.Reflection; using System.Text; using NLog; using NLog.Config; using NLog.LayoutRenderers;   [ThreadAgnostic] [LayoutRenderer(Name)] public sealed class ExceptionDetailsRenderer : LayoutRenderer { public const string Name = "exceptiondetails"; private const string _Spacer = "======================================"; private List<string> _FilteredProperties;   private List<string> FilteredProperties { get { if (_FilteredProperties == null) { _FilteredProperties = new List<string> { "StackTrace", "HResult", "InnerException", "Data" }; }   return _FilteredProperties; } }   public bool LogNulls { get; set; }   protected override void Append(StringBuilder builder, LogEventInfo logEvent) { Append(builder, logEvent.Exception, false); }   private void Append(StringBuilder builder, Exception exception, bool isInnerException) { if (exception == null) { return; }   builder.AppendLine();   var type = exception.GetType(); if (isInnerException) { builder.Append("Inner "); }   builder.AppendLine("Exception Details:") .AppendLine(_Spacer) .Append("Exception Type: ") .AppendLine(type.ToString());   var bindingFlags = BindingFlags.Instance | BindingFlags.Public; var properties = type.GetProperties(bindingFlags); foreach (var property in properties) { var propertyName = property.Name; var isFiltered = FilteredProperties.Any(filter => String.Equals(propertyName, filter, StringComparison.InvariantCultureIgnoreCase)); if (isFiltered) { continue; }   var propertyValue = property.GetValue(exception, bindingFlags, null, null, null); if (propertyValue == null && !LogNulls) { continue; }   var valueText = propertyValue != null ? 
propertyValue.ToString() : "NULL"; builder.Append(propertyName) .Append(": ") .AppendLine(valueText); }   AppendStackTrace(builder, exception.StackTrace, isInnerException); Append(builder, exception.InnerException, true); }   private void AppendStackTrace(StringBuilder builder, string stackTrace, bool isInnerException) { if (String.IsNullOrEmpty(stackTrace)) { return; }   builder.AppendLine();   if (isInnerException) { builder.Append("Inner "); }   builder.AppendLine("Exception StackTrace:") .AppendLine(_Spacer) .AppendLine(stackTrace); }   public static void Register() { Type definitionType; var layoutRenderers = ConfigurationItemFactory.Default.LayoutRenderers; if (layoutRenderers.TryGetDefinition(Name, out definitionType)) { return; }   layoutRenderers.RegisterDefinition(Name, typeof(ExceptionDetailsRenderer)); LogManager.ReconfigExistingLoggers(); } } For brevity I have removed the Trace, Debug, Warn, and Fatal methods. They are modelled after the Info methods. As mentioned above, note how the log wrapper automatically registers our custom layout renderer reducing the amount of application configuration required. using System; using NLog;   public static class LogWrapper { static LogWrapper() { ExceptionDetailsRenderer.Register(); }   #region Log Methods   public static void Info(object toLog) { Log(toLog, LogLevel.Info); }   public static void Info(string messageFormat, params object[] parameters) { Log(messageFormat, parameters, LogLevel.Info); }   public static void Error(object toLog) { Log(toLog, LogLevel.Error); }   public static void Error(string message, Exception exception) { Log(message, exception, LogLevel.Error); }   private static void Log(string messageFormat, object[] parameters, LogLevel logLevel) { string message = parameters.Length == 0 ? messageFormat : string.Format(messageFormat, parameters); Log(message, (Exception)null, logLevel); }   private static void Log(object toLog, LogLevel logLevel, LogType logType = LogType.General) { if (toLog == null) { throw new ArgumentNullException("toLog"); }   if (toLog is Exception) { var exception = toLog as Exception; Log(exception.Message, exception, logLevel, logType); } else { var message = toLog.ToString(); Log(message, null, logLevel, logType); } }   private static void Log(string message, Exception exception, LogLevel logLevel, LogType logType = LogType.General) { if (exception == null && String.IsNullOrEmpty(message)) { return; }   var logger = GetLogger(logType); // Note: Using the default constructor doesn't set the current date/time var logInfo = new LogEventInfo(logLevel, logger.Name, message); logInfo.Exception = exception; logger.Log(logInfo); }   private static Logger GetLogger(LogType logType) { var loggerName = logType.ToString(); return LogManager.GetLogger(loggerName); }   #endregion   #region LogType private enum LogType { General } #endregion } The following configuration is similar to what is provided for each of my applications. The 'application' variable is all that differentiates the various applications in all of my environments, the rest has been standardized. Depending on your needs to tweak this configuration while developing and debugging, this section could easily be pushed back into code similar to the registering of our custom layout renderer.   
<?xml version="1.0"?>
<configuration>
  <configSections>
    <section name="nlog" type="NLog.Config.ConfigSectionHandler, NLog"/>
  </configSections>
  <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <variable name="application" value="Example"/>
    <targets>
      <target type="EventLog" name="EventLog" source="${application}" log="${application}"
              layout="${message}${onexception: ${newline}${exceptiondetails}}"/>
      <target type="Mail" name="Email" smtpServer="smtp.example.local"
              from="[email protected]" to="[email protected]"
              subject="(${machinename}) ${application}: ${level}"
              body="Machine: ${machinename}${newline}Timestamp: ${longdate}${newline}Level: ${level}${newline}Message: ${message}${onexception: ${newline}${exceptiondetails}}"/>
    </targets>
    <rules>
      <logger name="*" minlevel="Debug" writeTo="EventLog" />
      <logger name="*" minlevel="Error" writeTo="Email" />
    </rules>
  </nlog>
</configuration>

    Now go forward and create your custom exceptions, without concern for whether their custom properties will appear in your exception logs and notifications.
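    As a quick end-to-end check, a minimal console driver could look like the following. This is a sketch assuming the classes and configuration above; the exception messages and logger call are illustrative only.

using System;

public static class Program
{
    public static void Main()
    {
        try
        {
            // Force a nested failure so both outer and inner exception details get rendered.
            try
            {
                throw new InvalidOperationException("Inner failure");
            }
            catch (Exception inner)
            {
                throw new ApplicationException("Outer failure", inner);
            }
        }
        catch (Exception ex)
        {
            // LogWrapper's static constructor registers the renderer, so the
            // EventLog and Mail targets can expand ${exceptiondetails}.
            LogWrapper.Error("Nightly import failed", ex);
        }
    }
}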

    Read the article

  • Compiling examples for consuming the REST Endpoints for WCF Service using Agatha

    - by REA_ANDREW
    I recently made two contributions to the Agatha Project by Davy Brion over on Google Code, and one of the things I wanted to follow up with was a post showing examples and some seemingly required tidbits.  The contributions which I made were:

    - Support for StructureMap
    - REST (JSON and XML) support for the service contract

    I want to format the examples I have made so they fit in with the current format of examples over on Agatha, and hopefully create and submit a third patch which will include these examples to help others who wish to use these additions. Whilst building these examples for both XML and JSON I have learnt a couple of things which I feel are not really well documented, but are extremely good practice and once known make perfect sense.  I have chosen a real basic e-commerce context for my example Requests and Responses, and have also made use of the excellent tool AutoMapper, again on Google Code.

    Setting the scene

    I have followed the Pipes and Filters Pattern with the IQueryable interface on my Repository and exposed the following methods to query Products:

IQueryable<Product> GetProducts();
IQueryable<Product> ByCategoryName(this IQueryable<Product> products, string categoryName)
Product ByProductCode(this IQueryable<Product> products, String productCode)

    I have an interface for the IProductRepository, but for the concrete implementation I have simply created a protected getter which populates a private List<Product> with 100 test products with random data.  Another good reason for following an interface based approach is that it will demonstrate usage of my first contribution, the StructureMap support.  Finally, the two Domain Objects I have made are Product and Category, as shown below:

public class Product
{
    public String ProductCode { get; set; }
    public String Name { get; set; }
    public Decimal Price { get; set; }
    public Decimal Rrp { get; set; }
    public Category Category { get; set; }
}

public class Category
{
    public String Name { get; set; }
}

    Requirements for the REST Support

    One of the things which you will notice with Agatha is that you do not have to decorate your Request and Response objects with the WCF Service Model Attributes like DataContract, DataMember etc… Unfortunately, from what I have seen, these are required if you want the same types to work with your REST endpoint.  I have not tried, but I assume the same result can be achieved by simply decorating the same classes with the Serializable Attribute.  Without this the operation will fail. Another surprising thing I have found is that it did not work until I used the following Attribute parameters: Name and Namespace, e.g.

[DataContract(Name = "GetProductsRequest", Namespace = "AgathaRestExample.Service.Requests")]
public class GetProductsRequest : Request
{
}

    Although I was surprised by this, things kind of explained themselves when I got round to figuring out the exact construct required for both the XML and the JSON.  One of the things which you already know and are then reminded of is that each of your Requests and Responses ultimately inherits from an abstract base class.  This information needs to be represented in a way native to the format being used.  I have seen this in XML, but I had not seen the format which is required for the JSON.

    JSON Consumer Example

    I have used jQuery to create the example, and I simply want to make two requests to the server which, as you will know with Agatha, are transmitted inside an array to reduce the service calls.
    I have also used a tool called json2, which is again over at Google Code, simply to convert my JSON expression into its string format for transmission.  You will notice that I specify the type of Request I am using and the relevant Namespace it belongs to.  Also notice that the second request has a parameter, so each of these two objects represents an abstract Request and the parameters of the object describe it.

<script type="text/javascript">
    var bodyContent = $.ajax({
        url: "http://localhost:50348/service.svc/json/processjsonrequests",
        global: false,
        contentType: "application/json; charset=utf-8",
        type: "POST",
        processData: true,
        data: JSON.stringify([
            { __type: "GetProductsRequest:AgathaRestExample.Service.Requests" },
            { __type: "GetProductsByCategoryRequest:AgathaRestExample.Service.Requests", CategoryName: "Category1" }
        ]),
        dataType: "json",
        success: function(msg) {
            alert(msg);
        }
    }).responseText;
</script>

    XML Consumer Example

    For the XML consumer example I have chosen to use a simple Console Application and make a WebRequest to the service using the XML as a request.  I have made a crude static method which simply reads from an XML file, replaces some value with a parameter and returns the formatted XML.  I say crude, but it shows how XML templates for each type of Request could be made, with a wrapper utility in whatever language you use to combine the requests which are required.  The following XML is the same Request array as shown above, but simply in the XML format.

<?xml version="1.0" encoding="utf-8" ?>
<ArrayOfRequest xmlns="http://schemas.datacontract.org/2004/07/Agatha.Common"
                xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <Request i:type="a:GetProductsRequest" xmlns:a="AgathaRestExample.Service.Requests"/>
  <Request i:type="a:GetProductsByCategoryRequest" xmlns:a="AgathaRestExample.Service.Requests">
    <a:CategoryName>{CategoryName}</a:CategoryName>
  </Request>
</ArrayOfRequest>

    It is funny because I remember submitting a question to StackOverflow asking whether there was a REST client generation tool similar to what Microsoft used for their RestStarterKit, but which could be applied to existing services which have REST endpoints attached.  I could not find any, but this is now definitely something which I am going to build, as I think it is extremely useful to have; it also should not be too difficult based on the information I now know about the above.  Finally, I thought that the Strategy Pattern would lend itself really well to this type of thing, so it can accommodate different languages.
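    As a rough sketch of that Strategy Pattern idea (the interface and class names here are hypothetical, not part of the patches), each wire format could get its own formatter that wraps a set of already-serialized requests:

using System.Collections.Generic;
using System.Linq;

// Hypothetical abstraction: one strategy per wire format.
public interface IRequestFormatter
{
    string ContentType { get; }
    string WrapRequests(IEnumerable<string> serializedRequests);
}

public class JsonRequestFormatter : IRequestFormatter
{
    public string ContentType { get { return "application/json; charset=utf-8"; } }

    public string WrapRequests(IEnumerable<string> serializedRequests)
    {
        // Agatha expects the requests inside a single array.
        return "[" + string.Join(",", serializedRequests.ToArray()) + "]";
    }
}

public class XmlRequestFormatter : IRequestFormatter
{
    public string ContentType { get { return "text/xml"; } }

    public string WrapRequests(IEnumerable<string> serializedRequests)
    {
        // Wraps the requests in the ArrayOfRequest element shown above.
        return "<ArrayOfRequest xmlns=\"http://schemas.datacontract.org/2004/07/Agatha.Common\""
             + " xmlns:i=\"http://www.w3.org/2001/XMLSchema-instance\">"
             + string.Concat(serializedRequests.ToArray())
             + "</ArrayOfRequest>";
    }
}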
    I think that is about it. I have included the code for the example Console app which I made below, in case anyone wants to have a mooch at the code.  As I said above, I want to reformat these to fit in with the current examples over on the Agatha project, but also, now thinking about it, make a Documentation Web method… {brain ticking} :-)  Cheers for now, and here is the final bit of code:

static void Main(string[] args)
{
    var request = WebRequest.Create("http://localhost:50348/service.svc/xml/processxmlrequests");
    request.Method = "POST";
    request.ContentType = "text/xml";

    using (var writer = new StreamWriter(request.GetRequestStream()))
    {
        writer.WriteLine(GetExampleRequestsString("Category1"));
    }

    var response = request.GetResponse();
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        Console.WriteLine(reader.ReadToEnd());
    }

    Console.ReadLine();
}

static string GetExampleRequestsString(string categoryName)
{
    var data = File.ReadAllText(Path.Combine(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location), "ExampleRequests.xml"));
    data = data.Replace("{CategoryName}", categoryName);
    return data;
}
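    For symmetry with the XML console app, the JSON endpoint can be exercised the same way. This is a sketch that reuses the URL and __type hints from the jQuery example above; everything else about the service is as described in the post.

using System;
using System.IO;
using System.Net;

class JsonConsoleExample
{
    static void Main()
    {
        var request = WebRequest.Create("http://localhost:50348/service.svc/json/processjsonrequests");
        request.Method = "POST";
        request.ContentType = "application/json; charset=utf-8";

        // Same request array as the jQuery sample; the __type hint tells Agatha
        // which concrete Request type each JSON object represents.
        const string payload =
            "[{\"__type\":\"GetProductsRequest:AgathaRestExample.Service.Requests\"}," +
            "{\"__type\":\"GetProductsByCategoryRequest:AgathaRestExample.Service.Requests\"," +
            "\"CategoryName\":\"Category1\"}]";

        using (var writer = new StreamWriter(request.GetRequestStream()))
        {
            writer.Write(payload);
        }

        using (var reader = new StreamReader(request.GetResponse().GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}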

    Read the article

  • WinInet Apps failing when Internet Explorer is set to Offline Mode

    - by Rick Strahl
    Ran into a nasty issue last week when all of a sudden many of my old applications that are using WinInet for HTTP access started failing. Specifically, the WinInet HttpSendRequest() call started failing with an error of 2, which when retrieving the error boils down to: WinInet Error 2: The system cannot find the file specified Now this error can pop up in many legitimate scenarios with WinInet such as when no Internet connection is available or the HTTP configuration (usually configured in Internet Explorer’s options) is misconfigured. The error typically means that the server in question cannot be found or more specifically an Internet connection can’t be established. In this case the problem started suddenly and was causing some of my own applications (old Visual FoxPro apps using my own wwHttp library) and all Adobe Air applications (which apparently uses WinInet for its basic HTTP stack) along with a few more oddball applications to fail instantly when trying to connect via HTTP. Most other applications – all of my installed browsers, email clients, various social network updaters all worked just fine. It seems it was only WinInet apps that were failing. Yet oddly Internet Explorer appeared to be working. So the problem seemed to be isolated to those ‘classic’ applications using WinInet. WinInet’s base configuration uses the Internet Explorer options dialog. To check this out I typically go to the Internet Explorer options and find the Connection tab, and check out the LAN Setup. Make sure there are no rogue proxy settings or configuration scripts that are invalid. Trying with Auto-configuration on and off also can often fix ‘real’ configuration errors. This time however this wasn’t a problem – nothing in the LAN configuration was set (all default). I also played with the Automatic detection of settings which also had no effect. I also tried to use Fiddler to see if that would tell me something. Fiddler has a few additional WinInet configuration options in its configuration. Running Fiddler and hitting an HTTP request using WinInet would never actually hit Fiddler – the failure would occur before WinInet ever fired up the HTTP connection to go through the Fiddler HTTP proxy. And the Culprit is: Internet Explorer’s Work Offline Option The culprit in this situation was Internet Explorer which at some point, unknown to me switched into Offline Mode and was then shut down: When this Offline mode is checked when IE is running *or* if IE gets shut down with this flag set, all applications using WinInet by default assume that it’s running in offline mode. Depending on your caching HTTP headers and whether the page was cached previously you may or may not get a response or an error. For an independent non-browser application this will be highly unpredictable and likely result in failures getting online – especially if the application forces requests to always reload by disabling HTTP caching (as I do on most of my dynamic HTTP clients). What makes this especially tricky is that even when IE is in offline mode in the browser, you can still browse around the Web *if* you have a connection. IE will try to load anything it has cached from the local cache, but as soon as you hit a URL that isn’t cached it will automatically try to access that URL and uncheck the Work Offline option. Conversely if you get knocked off the Internet and browse in IE 9, IE will automatically go into offline mode. 
    I never explicitly set offline mode – it just automatically sets itself on and off depending on the connection. The problem is, if you're not using IE all the time (as I do – rarely and just for testing, so usually a few commonly used URLs) and you left it in offline mode when you exit, offline mode stays set, which results in the above head scratcher. Ack. This isn't new behavior in IE 9, BTW – this behavior has always been there, but I think what's different is that IE now automatically switches between online and offline modes without notifying you at all, so it's hard to tell when you are offline.

    Fixing the Issue in your Code

    If you have an application that is using WinInet, there's a WinInet option called INTERNET_OPTION_IGNORE_OFFLINE. I just checked this out in my own applications and Internet Explorer 9 and it works, but apparently it's been broken for some older releases (I can't confirm how far back though) – lots of posts seem to suggest the flag doesn't work. However, in IE 9 at least, it does seem to work if you call InternetSetOption before you call HttpOpenRequest with the HTTP session handle. In FoxPro code I use:

DECLARE INTEGER InternetSetOption ;
   IN WININET.DLL ;
   INTEGER HINTERNET,;
   INTEGER dwFlags,;
   INTEGER @dwValue,;
   INTEGER cbSize

lnOptionValue = 1   && BOOL TRUE pass by reference

*** Ignore IE's offline mode for this session
lnResult=InternetSetOption(this.hHttpSession,;
   INTERNET_OPTION_IGNORE_OFFLINE ,;  && 77
   @lnOptionValue ,4)

DECLARE INTEGER HttpOpenRequest ;
   IN WININET.DLL ;
   INTEGER hHTTPHandle,;
   STRING lpzReqMethod,;
   STRING lpzPage,;
   STRING lpzVersion,;
   STRING lpzReferer,;
   STRING lpzAcceptTypes,;
   INTEGER dwFlags,;
   INTEGER dwContextw

hHTTPResult=HttpOpenRequest(THIS.hHttpsession,;
   lcVerb,;
   tcPage,;
   NULL,NULL,NULL,;
   INTERNET_FLAG_RELOAD + ;
   IIF(THIS.lsecurelink,INTERNET_FLAG_SECURE,0) + ;
   this.nHTTPServiceFlags,0)

    And this fixes the issue, at least for IE 9. In my FoxPro wwHttp class I now call this by default to never get bitten by this again. This solves the problem permanently for my HTTP client. I never want to see offline operation in an HTTP client API – it's just too unpredictable in handling errors, and the last thing you want is getting unpredictably stale data. Problem solved, but this behavior is – well, ugly. But then that's to be expected from an API that's based on Internet Explorer, eh?

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in HTTP, Windows
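    For .NET applications that drive WinInet directly via P/Invoke, the equivalent of the FoxPro fix might look like this. This is a sketch: the wrapper class and method names are mine, but InternetSetOption in wininet.dll and option value 77 (INTERNET_OPTION_IGNORE_OFFLINE) are as described above, and the session handle is assumed to come from InternetOpen.

using System;
using System.Runtime.InteropServices;

static class WinInetOffline
{
    const int INTERNET_OPTION_IGNORE_OFFLINE = 77;

    [DllImport("wininet.dll", SetLastError = true)]
    static extern bool InternetSetOption(IntPtr hInternet, int dwOption,
                                         ref int lpBuffer, int dwBufferLength);

    // Call this before HttpOpenRequest with the session handle,
    // mirroring the FoxPro code above.
    public static bool IgnoreOfflineMode(IntPtr hHttpSession)
    {
        int ignore = 1; // BOOL TRUE, passed by reference as in the FoxPro declaration
        return InternetSetOption(hHttpSession, INTERNET_OPTION_IGNORE_OFFLINE,
                                 ref ignore, sizeof(int));
    }
}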

    Read the article

  • Why can't I reinstall MySQL?

    - by Johannes Nielsen
    I've been looking all around the Internet for an answer but didn't find anything. I hope you can help me now. I have a server with MySQL. From one day to another, MySQL didn't let me enter with my root password anymore (Access denied for user 'root'@'localhost' using password: 'YES'). So I tried two ways to reset the password.

    No. 1: I typed:

shell> /etc/init.d/mysqld stop

    to stop MySQL. Then I restarted it, skipping the grant tables:

shell> mysqld_safe --skip-grant-tables

    So I was able to log in as root and change the password using:

mysql> UPDATE mysql.user SET Password = PASSWORD('MyNewPassword') WHERE User = 'root';
FLUSH PRIVILEGES;

    I restarted MySQL and tried to log in as root with my new password - didn't work. So I tried the solution that's described here: http://dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html (I don't want to post it here because this post is already pretty long). Didn't work either. Actually it made it worse, because since that day, every time I try to start MySQL, it doesn't even ask me for my password, but I get:

shell> ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

    Well, I've looked up what it means and found that my mysqld.sock is missing. I tried to create it using touch, but MySQL can't start with that socket. Now I'm trying to reinstall MySQL, but every time I type in

shell> apt-get --purge remove mysql-server mysql-common mysql-client

    in that or any other order, or every one of those three alone, I get:

shell> Reading package lists... Done
shell> Building dependency tree
shell> Reading state information... Done
shell> Package mysql-client is not installed, so not removed
shell> Package mysql-server is not installed, so not removed
shell> You might want to run 'apt-get -f install' to correct these:
shell> The following packages have unmet dependencies:
shell> libmysqlclient18 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
shell> libmysqlclient18:i386 : Depends: mysql-common:i386 (>= 5.5.28-0ubuntu0.12.04.2)
shell> mysql-client-5.5 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
shell> mysql-server-5.5 : PreDepends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
shell> psa-firewall : Depends: plesk-core (>= 11.0.9) but it is not installable
shell> Depends: mysql-server but it is not going to be installed
shell> psa-spamassassin : Depends: plesk-core (>= 11.0.9) but it is not installable
shell> psa-vpn : Depends: plesk-core (>= 11.0.9) but it is not installable
shell> Depends: plesk-base (>= 11.0.9) but it is not installable
shell> Depends: mysql-server but it is not going to be installed
shell> E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    So I said to myself "let's just remove those files with dependencies, too" (that psa-stuff, since plesk is virtual and can't be uninstalled)... Guess what happened:

shell> Reading package lists... Done
shell> Building dependency tree
shell> Reading state information... Done
shell> Package mysql-client is not installed, so not removed
shell> Package mysql-server is not installed, so not removed
shell> You might want to run 'apt-get -f install' to correct these:
shell> The following packages have unmet dependencies:
shell> libmysqlclient18 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
shell> libmysqlclient18:i386 : Depends: mysql-common:i386 (>= 5.5.28-0ubuntu0.12.04.2)
shell> mysql-client-5.5 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
shell> mysql-server-5.5 : PreDepends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
shell> E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    Of course I tried apt-get -f install, too many times even. What am I doing wrong? No matter which other packages I include in apt-get --purge remove, I always get new dependencies. Do I have to delete every MySQL-related directory and file manually? Hope there's someone out there who can help me! Cheers!

    EDIT: After trying

apt-get purge mysql-server mysql-common mysql-client libmysqlclient18 libmysqlclient18:i386 mysql-client-5.5 mysql-server-5.5 psa-firewall psa-spamassassin psa-vpn

Reading package lists... Done
Building dependency tree
Reading state information... Done
Package mysql-client is not installed, so not removed
Package mysql-server is not installed, so not removed
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
libdbd-mysql-perl : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
libmyodbc : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
libqt4-sql-mysql:i386 : Depends: libmysqlclient18:i386 (>= 5.5.13-1) but it is not going to be installed
php5-mysql : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
ruby-mysql : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    So I tried to remove all these and got:

Building dependency tree
Reading state information... Done
Package mysql-client is not installed, so not removed
Package mysql-server is not installed, so not removed
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
libmysql-ruby1.8 : Depends: ruby-mysql but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    And actually I think removing that file, too, solved my problem :-S Next time I'll try everything before asking :D Thank you Eric for keeping me encouraged to just go on removing :D

    Read the article

  • Top Tweets SOA Partner Community – March 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity SOA Community ?SOA Community Newsletter February 2012 wp.me/p10C8u-o0 Marc ?Reading through the #OFM 11.1.1.6 , patchset 5 documentation. What is the best way to upgrade your whole dev…prd street. SOA Community Thanks for the successful and super interesting #sbidays ! Wonderful discussions around the Integration, case management and security tracks Torsten Winterberg Schon den neuen Opitz Technology-Blog gebookmarked? The Cattle Crew bit.ly/yLPwBD wird ab sofort regelmäßig Erkenntnisse posten. OTNArchBeat ? Unit Testing Asynchronous BPEL Processes Using soapUI | @DanielAmadei bit.ly/x9NsS9 Rolando Carrasco ?Video de Human Task en BPM 11g. Por @edwardo040. bit.ly/wki9CA cc @OracleBPM @OracleSOA @soacommunity View video Marcel Mertin SOA Security Hands-On by Dirk Krafzig and Mamoon Yunus at #sbidays is also great! SOA Community Workshop day #sbidays #BPMN2.0 by Volker Stiehl from #SAP great training – now I can model & execute in #bpmsuite #soacommunity Simone Geib ?Just updated our advanced #soasuite #otn page with a number of very interesting @orclateamsoa blog posts: bit.ly/advancedsoasui… OTNArchBeat ? Start Small, Grow Fast: SOA Best Practices article by @biemond, @rluttikhuizen, @demed bit.ly/yem9Zv Steffen Miller ? Nice new features in SOA Suite Business Rules #PS5 Testing rules with scenarios and output validation bit.ly/zj64Q3 @SOACOMMUNITY OTNArchBeat ? Reply SOA Blackbelt training by David Shaffer, April 30th–May 4th 2012 bit.ly/xGdC24 OTNArchBeat ? What have BPM, big data, social tools, and business models got in common? | Andy Mulholland bit.ly/xUkOGf SOA Community ? Live hacking at #sbidays – cheaper shopping, bias cracking, payment systems, secure your SOA! pic.twitter.com/y7YaIdug SOA Community Future #BPM & #ACM solutions can make use of ontology’s, based on #sqarql #sbidays pic.twitter.com/xLb1Z5zs Simone Geib ? @soacommunity: SOA Blackbelt training by David Shaffer, April 30th–May 4th 2012 wp.me/p10C8u-nX Biemond Changing your ADF Connections in Enterprise Manager with PS5: With Patch Set 5 of Fusion Middleware you can fina… bit.ly/zF7Rb1 Marc ? HUGE (!) CPU and Heap improvement on Oracle Fusion Middleware tinyurl.com/762drzp @wlscommunity @soacommunity #OSB #SOA #WLS SOA Community ?Networking @ SOA & BPM Partner Community blogs.oracle.com/soacommunity/e… #soacommunity #otn #opn #oracle SOA Community ?Published the SOA Partner Community newsletter February edition – READ it. Not yet a member? oracle.com/goto/emea/soa #soacommunity #otn #opn AMIS, Oracle & Java Blog by Lucas Jellema: "Book Review: Do More with SOA Integration: Best of Packt (december 2011, various authors)" bit.ly/wq633E Jon petter hjulstad @SOASimone Excellent summary! Lots of new features! Simone Geib ?Do you want to know what’s new in #soasuite #PS5? Go to bit.ly/xBX06f and let me know what you think SOA Community ? Unit Testing Asynchronous BPEL Processes Using soapUI oracle.com/technetwork/ar… #soacommunity #soa #otn #oracle #bpel Retweeted by SOA Community View media Retweeted by SOA Community Eric Elzinga ? Oracle Fusion Middleware Partner Community Forum Malage, The Overview, bit.ly/AA9BKd #ofmforum SOA&Cloud Symposium ? The February issue of the Service Technology Magazine is now published. servicetechmag.com SOA Community ? Oracle SOA Suite 11g Database Growth Management – must read! oracle.com/technetwork/da… #soacommunity #soa #purging demed ? 
Have you exposed internal processes to mobile devices using #oraclesoa? Interested in an article? DM me! #osb #rest #multichannel #mobile orclateamsoa ? A-Team SOA Blog: Enhanced version of Thread Dump Analyzer (TDA A-Team) ow.ly/1hpk7l SOA Community Reply BPM Suite #PS5 (11.1.1.6) available for download soacommunity.wordpress.com/2012/02/22/soa… Send us your feedback! #soacommunity #bpmsuite #opn SOA Community ? SOA Suite #PS5 (11.1.1.6) available for download soacommunity.wordpress.com/2012/02/22/soa… Send us your feedback! #soacommunity #soasuite SOA Community BPM Suite #PS5 1(1.1.1.6) available for download. List of new BPM features blogs.oracle.com/soacommunity/e… #soacommunity #bpm #bpmsuite #opn OracleBlogs BPM in Utilties Industry ow.ly/1hC3fp Retweeted by SOA Community OTNArchBeat ? Demystifying Oracle Enterprise Gateway | Naresh Persaud bit.ly/xtDNe2 OTNArchBeat ? Architect’s Guide to Big Data; Test BPEL Processes Using SoapUI; Development Debate bit.ly/xbDYSo Frank Nimphius ? Finished my book review of "Do More with SOA Integration: Best of Packt ". Here are my review comments: bit.ly/x2k9OZ Lucas Jellema ? That is my one stop-and-go download center for #PS5 : edelivery.oracle.com/EPD/Download/g… Lucas Jellema ? Interesting piece of documentation: Fusion Applications Extensibility Guide – docs.oracle.com/cd/E15586_01/f… source for design time @ run time inspira Lucas Jellema ? Strongly improved support for testing Business Rules at Design Time in #PS5 see docs.oracle.com/cd/E23943_01/u… Lucas Jellema ? SOA Suite 11gR1 PS5: new BPEL Component testing – docs.oracle.com/cd/E23943_01/d… Lucas Jellema ? PS5 available for CEP (Complex Event Processing) – a personal favorite of mine : oracle.com/technetwork/mi… Lucas Jellema ?What’s New in Fusion Developer’s Guide 11gR1 PS5: docs.oracle.com/cd/E23943_01/w… Lucas Jellema ? BPMN Correlation (FMW 11gR1 PS5): docs.oracle.com/cd/E23943_01/d… Lucas Jellema ? Modifying running BPM Process instances (FMW 11gR1 PS5): docs.oracle.com/cd/E23943_01/d… Lucas Jellema ? SOA Suite 11gR1 PS5 – new aggregation pattern: docs.oracle.com/cd/E23943_01/d… routing multiple messages to same instance Melvin van der Kuijl ? Automating Testing of SOA Composite Applications in PS5. docs.oracle.com/cd/E23943_01/d… Cato Aune ? SOA suite PS5 Enterprise Deployment Guide is available in ePub docs.oracle.com/cd/E23943_01/c… . Much better than pdf on Galaxy Note SOA Community ?JDeveloper 11.1.1.6 is available for download bit.ly/wGYrwE #soacommunity SOA Community ? Your first experience #PS5 – let us know @soacommunity – send us your tweets and blog posts! #soacommunity Jon petter hjulstad ? WLS 10.3.6 New features, ex better logging of jdbc use: docs.oracle.com/cd/E23943_01/w… Heidi Buelow ? Get it now! RT @soacommunity: BPM Suite PS5 11.1.1.6 available for download bit.ly/AgagT5 #bpm #soacommunity Jon petter hjulstad ?SOA Suite PS5 EDG contains OSB! docs.oracle.com/cd/E23943_01/c… Jon petter hjulstad ? Testing Oracle Rules from JDeveloper is easier in PS5: docs.oracle.com/cd/E23943_01/u… Biemond® ? What’s New in Oracle Service Bus 11.1.1.6.0 oracle.com/technetwork/mi… Jon petter hjulstad ? Adminguide New and Changed Features for PS5, ex GridLink data sources: docs.oracle.com/cd/E23943_01/c… Retweeted by SOA Community Andreas Koop ? Unbelievable! #OFM Doc Lib growth from 11gPS4->11gPS5 by 1.2G! 
SOA Community ODI PS5 is available oracle.com/technetwork/mi… #odi #soacommunity SOA Community Service Bus 11g Development Cookbook soacommunity.wordpress.com/2012/02/20/ser… #osb #soacommunity #ace #opn For regular information on Oracle SOA Suite become a member in the SOA Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Technorati Tags: soacommunity,twitter,Oracle,SOA Community,Jürgen Kress,OPN,SOA,BPM

    Read the article

  • Parallelism in .NET – Part 6, Declarative Data Parallelism

    - by Reed
    When working with a problem that can be decomposed by data, we have a collection, and some operation being performed upon the collection.  I've demonstrated how this can be parallelized using the Task Parallel Library and imperative programming using imperative data parallelism via the Parallel class.  While this provides a huge step forward in terms of power and capabilities, in many cases special care must still be given for relatively common scenarios.

    C# 3.0 and Visual Basic 9.0 introduced a new, declarative programming model to .NET via the LINQ Project.  When working with collections, we can now write software that describes what we want to occur without having to explicitly state how the program should accomplish the task.  By taking advantage of LINQ, many operations become much shorter, more elegant, and easier to understand and maintain.  Version 4.0 of the .NET Framework extends this concept into the parallel computation space by introducing Parallel LINQ.

    Before we delve into PLINQ, let's begin with a short discussion of LINQ.  LINQ, the extensions to the .NET Framework which implement language integrated query, set, and transform operations, is implemented in many flavors.  For our purposes, we are interested in LINQ to Objects.  When dealing with parallelizing a routine, we typically are dealing with in-memory data storage.  More data-access oriented LINQ variants, such as LINQ to SQL and LINQ to Entities in the Entity Framework, fall outside of our concern, since the parallelism there is the concern of the database engine processing the query itself.

    LINQ (LINQ to Objects in particular) works by implementing a series of extension methods, most of which work on IEnumerable<T>.  The language enhancements use these extension methods to create a very concise, readable alternative to the traditional foreach statement.  For example, let's revisit our minimum aggregation routine we wrote in Part 4:

double min = double.MaxValue;
foreach(var item in collection)
{
    double value = item.PerformComputation();
    min = System.Math.Min(min, value);
}

    Here, we're doing a very simple computation, but writing this in an imperative style.  This can be loosely translated to English as:

    - Create a very large number, and save it in min
    - Loop through each item in the collection. For every item:
      - Perform some computation, and save the result
      - If the computation is less than min, set min to the computation

    Although this is fairly easy to follow, it's quite a few lines of code, and it requires us to read through the code, step by step, line by line, in order to understand the intention of the developer. We can rework this same statement, using LINQ:

double min = collection.Min(item => item.PerformComputation());

    Here, we're after the same information.  However, this is written using a declarative programming style.
    When we see this code, we'd naturally translate this to English as:

    - Save the Min value of collection, determined via calling item.PerformComputation()

    That's it – instead of multiple logical steps, we have one single, declarative request.  This makes the developer's intentions very clear, and very easy to follow.  The system is free to implement this using whatever method is required.

    Parallel LINQ (PLINQ) extends LINQ to Objects to support parallel operations.  This is a perfect fit in many cases when you have a problem that can be decomposed by data.  To show this, let's again refer to our minimum aggregation routine from Part 4, but this time, let's review our final, parallelized version:

// Safe, and fast!
double min = double.MaxValue;

// Make a "lock" object
object syncObject = new object();

Parallel.ForEach(
    collection,
    // First, we provide a local state initialization delegate.
    () => double.MaxValue,
    // Next, we supply the body, which takes the original item, loop state,
    // and local state, and returns a new local state
    (item, loopState, localState) =>
    {
        double value = item.PerformComputation();
        return System.Math.Min(localState, value);
    },
    // Finally, we provide an Action<TLocal>, to "merge" results together
    localState =>
    {
        // This requires locking, but it's only once per used thread
        lock (syncObject)
            min = System.Math.Min(min, localState);
    }
);

    Here, we're doing the same computation as above, but fully parallelized.  Describing this in English becomes quite a feat:

    - Create a very large number, and save it in min
    - Create a temporary object we can use for locking
    - Call Parallel.ForEach, specifying three delegates
      - For the first delegate: initialize a local variable to hold the local state to a very large number
      - For the second delegate: for each item in the collection, perform some computation and save the result; if the result is less than our local state, save the result in local state
      - For the final delegate: take a lock on our temporary object to protect our min variable, and save the min of our min and local state variables

    Although this solves our problem, and does it in a very efficient way, we've created a set of code that is quite a bit more difficult to understand and maintain.

    PLINQ provides us with a very nice alternative.  In order to use PLINQ, we need to learn one new extension method that works on IEnumerable<T> – ParallelEnumerable.AsParallel().  That's all we need to learn in order to use PLINQ: one single method.  We can write our minimum aggregation in PLINQ very simply:

double min = collection.AsParallel().Min(item => item.PerformComputation());

    By simply adding ".AsParallel()" to our LINQ to Objects query, we converted this to using PLINQ and running this computation in parallel!  This can be loosely translated into English easily, as well:

    - Process the collection in parallel
    - Get the Minimum value, determined by calling PerformComputation on each item

    Here, our intention is very clear and easy to understand.  We just want to perform the same operation we did in serial, but run it "as parallel".  PLINQ completely extends LINQ to Objects: the entire functionality of LINQ to Objects is available.  By simply adding a call to AsParallel(), we can specify that a collection should be processed in parallel.  This is simple, safe, and incredibly useful.
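    To make the comparison concrete, here is a self-contained sketch of the pattern; the Item class and its computation are stand-ins invented for the example, and everything else follows the post.

using System;
using System.Linq;

class PlinqMinDemo
{
    class Item
    {
        private readonly double _seed;
        public Item(double seed) { _seed = seed; }

        // Stand-in for an expensive, thread-safe computation.
        public double PerformComputation() { return Math.Sqrt(_seed) * 42.0; }
    }

    static void Main()
    {
        var collection = Enumerable.Range(1, 1000000)
                                   .Select(i => new Item(i))
                                   .ToList();

        // The only change from the sequential query is AsParallel().
        double min = collection.AsParallel()
                               .Min(item => item.PerformComputation());

        Console.WriteLine(min);
    }
}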

    Read the article

  • Database users in the Oracle Utilities Application Framework

    - by Anthony Shorten
    I mentioned the product database users fleetingly in the last blog post and they deserve a better mention. This applies to all versions of the Oracle Utilities Application Framework. The Oracle Utilities Application Framework uses up to three users initially as part of the base operations of the product. The type of database supported (the framework supports Oracle, IBM DB2 and Microsoft SQL Server) dictates the number of users used and their permissions. For publishing brevity I will outline what is available for the Oracle database and, in summary, mention where it differs for the other databases supported. For Oracle database customers we ship three distinct database users:

    Administration User (SPLADM or CISADM by default) - This is the database user that actually owns the schema. This user is not used by the product to do any DML (Data Manipulation Language) SQL other than what is necessary for maintenance of the database. This database user performs all the DCL (Data Control Language) and DDL (Data Definition Language) against the database. It is typically reserved for Database Administration use only.

    Product Read Write User (SPLUSER or CISUSER by default) - This is the database user used by the product itself to execute DML (Data Manipulation Language) statements against the schema owned by the Administration user. This user has the appropriate read and write permission to objects within the schema owned by the Administration user. For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user.

    Product Read User (SPLREAD or CISREAD by default) - This is the database user that has read only permission to the schema owned by the Administration user. It is used for reporting or any part of the product or interface that requires read permissions to the database (for example, products that have ConfigLab and Archiving use this user for remote access). For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user.

    You may notice the words "by default" in the list above. The values supplied with the installer are the default and can be changed to whatever the site standard or implementation wants to use (as long as they conform to the standards supported by the underlying database). You can even create multiples of each within the same database, all pointing to the same schema. To manage the permissions for the users, there is a utility provided with the installation (oragensec for Oracle, db2gensec for DB2 or msqlgensec for SQL Server) that generates the security definitions for the above users. That can be executed a number of times for each schema to give users appropriate permissions. For example, it is possible to define more than one read/write user to access the database. This is a common technique used by implementations to have a different user per access mode (to separate online and batch). In fact you can also allocate additional security (such as resource profiles in Oracle) to limit the impact of specific users at the database. To facilitate users and permissions, in Oracle for example, we create a CISREAD role (read only role) and a CISUSER role (read write role) that can be allocated to the appropriate database user. When the security permissions utility, oragensec in this case, is executed it uses the role to determine the permissions.
    To give you a case study, my underpowered laptop has multiple installations on it of multiple products, but I have one database. I create a different schema for each product and each version (with my own naming convention to help me manage the databases). I create individual users on each schema and run oragensec to maintain the permissions for each appropriately. It works fine as long as I have set up the userids appropriately. This means:

    Creating the users with the appropriate roles. I use the common CISUSER and CISREAD roles across versions and across Oracle Utilities Application Framework products. Just remember to associate the CISUSER role with the database user you want to use for read/write operations, and the CISREAD role with the user you wish to use for the read only operations. The role is treated as a tag to indicate to the oragensec utility which permissions to assign to the user. The utilities for the other database types essentially do the same, obviously using the technology available within those databases.

    Running oragensec for the read write user and read only user against the appropriate administration user (I will abbreviate the user to ADM user). This ensures the right permissions are allocated to the right users for the right products. To help me there, I use the same prefix on the user name for the same product. For example, my Oracle Utilities Application Framework V4 environment has the administration user set to FW4ADM and the associated FW4USER and FW4READ as the users for the product to use. For my MWM environment I used MWMADM for the administration user and MWMUSER and MWMREAD for my associated users. You get the picture. When I run oragensec (once for each ADM user), I know what other users to associate with it.

    Rerunning oragensec against the users whenever I apply upgrades, service packs or database based single fixes. This assures that the users stay in synchronization with the ADM user.

    As a side note, for those who do not understand the difference between DML, DCL and DDL:

    DDL (Data Definition Language) - These are SQL statements that define the database schema and the structures within. SQL statements such as CREATE and DROP are examples of DDL SQL statements.

    DCL (Data Control Language) - These are the SQL statements that define the database level permissions to DDL maintained objects within the database. SQL statements such as GRANT and REVOKE are examples of DCL SQL statements.

    DML (Data Manipulation Language) - These are SQL statements that alter the data within the tables. SQL statements such as SELECT, INSERT, UPDATE and DELETE are examples of DML SQL statements.

    Hope this has clarified the database user support. Remember, in Oracle Utilities Application Framework V4 we enhanced this by also supporting CLIENT_IDENTIFIER to allow the database to still use the administration user for the main processing but make the database session more traceable.

    Read the article

  • Issue 15: The Benefits of Oracle Exastack

    - by rituchhibber
         SOLUTIONS FOCUS The Benefits of Oracle Exastack Paul ThompsonDirector, Alliances and Solutions Partner ProgramsOracle EMEA Alliances & Channels RESOURCES -- Oracle PartnerNetwork (OPN) Oracle Exastack Program Oracle Exastack Ready Oracle Exastack Optimized Oracle Exastack Labs and Enablement Resources Oracle Exastack Labs Video Tour SUBSCRIBE FEEDBACK PREVIOUS ISSUES Exastack is a revolutionary programme supporting Oracle independent software vendor partners across the entire Oracle technology stack. Oracle's core strategy is to engineer software and hardware together, and our ISV strategy is the same. At Oracle we design engineered systems that are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. Oracle innovates and optimises performance at every layer of the stack to simplify business operations, drive down costs and accelerate business innovation. Our engineered systems are optimised to achieve enterprise performance levels that are unmatched in the industry. Faster time to production is achieved by implementing pre-engineered and pre-assembled hardware and software bundles. Our strategy of delivering a single-vendor stack simplifies and reduces costs associated with purchasing, deploying, and supporting IT environments for our customers and partners. In parallel to this core engineered systems strategy, the Oracle Exastack Program enables our Oracle ISV partners to leverage a scalable, integrated infrastructure that delivers their applications tuned, tested and optimised for high-performance. Specifically, the Oracle Exastack Program helps ISVs run their solutions on the Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4 - integrated systems products in which the software and hardware are engineered to work together. These products provide OPN members with a lower cost and high performance infrastructure for database and application workloads across on-premise and cloud based environments. Ready and Optimized Oracle Partners can now leverage our new Oracle Exastack Program to become Oracle Exastack Ready and Oracle Exastack Optimized. Partners can achieve Oracle Exastack Ready status through their support for Oracle Solaris, Oracle Linux, Oracle VM, Oracle Database, Oracle WebLogic Server, Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. By doing this, partners can demonstrate to their customers that their applications are available on the latest major releases of these products. The Oracle Exastack Ready programme helps customers readily differentiate Oracle partners from lesser software developers, and identify applications that support Oracle engineered systems. Achieving Oracle Exastack Optimized status demonstrates that an OPN member has proven itself against goals for performance and scalability on Oracle integrated systems. This status enables end customers to readily identify Oracle partners that have tested and tuned their solutions for optimum performance on an Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. These ISVs can display the Oracle Exadata Optimized, Oracle Exalogic Optimized or Oracle SPARC SuperCluster Optimized logos on websites and on all their collateral to show that they have tested and tuned their application for optimum performance. 
Deliver higher value to customers Oracle's investment in engineered systems enables ISV partners to deliver higher value to customer business processes. New innovations are enabled through extreme performance unachievable through traditional best-of-breed multi-vendor server/software approaches. Core product requirements can be launched faster, enabling ISVs to focus research and development investment on core competencies in order to bring value to market as quickly as possible. Through Exastack, partners no longer have to worry about the underlying product stack, which allows greater focus on the development of intellectual property above the stack. Partners are not burdened by platform issues and can concentrate simply on furthering their applications. The advantage to end customers is that partners can focus all efforts on business functionality, rather than bullet-proofing underlying technologies, and so will inevitably deliver application updates faster. Exastack provides ISVs with a number of flexible deployment options, such as on-premise or Cloud, while maintaining one single code base for applications regardless of customer deployment preference. Customers buying their solutions from Exastack ISVs can therefore be confident in deploying on their own networks, on private clouds or into a public cloud. The underlying platform will support all conceivable deployments, enabling a focus on the ISV's application itself that wouldn't be possible with other vendor partners. It stands to reason that Exastack accelerates time to value as well as lowering implementation costs all round. There is a big competitive advantage in partners being able to offer customers an optimised, pre-configured solution rather than an assortment of components and a suggested fit. Once a customer has decided to buy an Oracle Exastack Ready or Optimized partner solution, it will be up and running without any need for the customer to conduct testing of its own. Operational costs and complexity are also reduced, thanks to streamlined customer support through standardised configurations and pro-active monitoring. 'Engineered to Work Together' is a significant statement of Oracle strategy. It guarantees smoother deployment of a single vendor solution, clear ownership with no finger-pointing and the peace of mind of the Oracle Support Centre underpinning the entire product stack. Next steps Every OPN member with packaged applications must seriously consider taking steps to become Exastack Ready, or Exastack Optimized at the first opportunity. That first step down the track is to talk to an expert on the OPN Portal, at the Oracle Partner Business Center or to discuss the next steps with the closest Oracle account manager. Oracle Exastack lab environments and other technical enablement resources are available for OPN members wishing to further their knowledge of Oracle Exastack and qualify their applications for Oracle Exastack Optimized. New Boot Camps and Guided Learning Paths (GLPs), tailored specifically for ISVs, are available for Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle Linux, Oracle Solaris, Oracle Database, and Oracle WebLogic Server. More information about these GLPs and Boot Camps (including delivery dates and locations) are posted on the OPN Competency Center and corresponding OPN Knowledge Zones. Learn more about Oracle Exastack labs and ISV specific enablement resources. 
"Oracle Specialized partners are of course front-and-centre, with potential customers clearly directed to those partners and to Exadata Ready partners as a matter of priority." --More OpenWorld 2011 highlights for Oracle partners and customers Oracle Application Testing Suite 9.3 application testing solution for Web, SOA and Oracle Applications Oracle Application Express Release 4.1 improving the development of database-centric Web 2.0 applications and reports Oracle Unified Directory 11g helping customers manage the critical identity information that drives their business applications Oracle SOA Suite for healthcare integration Oracle Enterprise Pack for Eclipse 11g demonstrating continued commitment to the developer and open source communities Oracle Coherence 3.7.1, the latest release of the industry's leading distributed in-memory data grid Oracle Process Accelerators helping to simplify and accelerate time-to-value for customers' business process management initiatives Oracle's JD Edwards EnterpriseOne on the iPad meeting the increasingly mobile demands of today's workforces Oracle CRM On Demand Release 19 Innovation Pack introducing industry-leading hosted call centre and enterprise-marketing capabilities designed to drive further revenue and productivity while reducing costs and improving the customer experience Oracle's Primavera Portfolio Management 9 for businesses delivering on project portfolio goals with increased versatility, transparency and accuracy Oracle's PeopleSoft Human Capital Management (HCM) 9.1 On Demand Standard Edition helping customers manage their long-term investment in enterprise-wide business applications New versions of Oracle FLEXCUBE Universal Banking and Oracle FLEXCUBE Investor Servicing for Financial Institutions, as well as Oracle Financial Services Enterprise Case Management, Oracle Financial Services Pricing Management, Oracle Financial Management Analytics and Oracle Tax Analytics Oracle Utilities Network Management System 1.11 offering new modelling and analysis features to improve distribution-grid management for electric utilities Oracle Communications Network Charging and Control 4.4 helping communications service providers (CSPs) offer their customers more flexible charging options Plus many, many more technology announcements, enhancements, momentum news and community updates -- Oracle OpenWorld 2012 A date has already been set for Oracle OpenWorld 2012. Held once again in San Francisco, exhibitors, partners, customers and Oracle people will gather from 30 September until 4 November to meet, network and learn together with the rest of the global Oracle community. Register now for Oracle OpenWorld 2012 and save $$$! We'll reward your early planning for Oracle OpenWorld 2012 with reduced rates. Super Saver deals are now available! -- Back to the welcome page

    Read the article

  • Silverlight Recruiting Application Part 6 - Adding an Interview Scheduling Module/View

    Between the last post and this one I went ahead and carried the ideas for the Jobs module and view into the Applicants module and view; they're both doing more or less the same thing, except with different objects being at their core.  Made for an easy cut-and-paste operation with a few items being switched from one to another.  Now that we have the ability to add postings and applicants, wouldn't it be nice if we could schedule an interview?  Of course it would!

    Scheduling Module

    I think you get the drift from previous posts that these project structures start looking somewhat similar.  The interview scheduling module is no different than the rest: it gets a SchedulingModule.cs file at the root that inherits from IModule, and there is a single SchedulerView.xaml and SchedulerViewModel.cs set up for our V+VM.  We have one unique concern as we enter into this: RadScheduler deals with AppointmentsSource, not ItemsSource, so there are some special considerations to take into account when planning this module.

    First, I need something which inherits from AppointmentBase.  This is the core of the RadScheduler appointment, and if you are planning to do any form of custom appointment, you'll want it to inherit from this.  Then you can add on functionality as needed.  Here is my addition to the mix, the InterviewAppointment:

public class InterviewAppointment : AppointmentBase
{
    private int _applicantID;
    public int ApplicantID
    {
        get { return this._applicantID; }
        set
        {
            if (_applicantID != value)
            {
                _applicantID = value;
                OnPropertyChanged("ApplicantID");
            }
        }
    }

    private int _postingID;
    public int PostingID
    {
        get { return _postingID; }
        set
        {
            if (_postingID != value)
            {
                _postingID = value;
                OnPropertyChanged("PostingID");
            }
        }
    }

    private string _body;
    public string Body
    {
        get { return _body; }
        set
        {
            if (_body != value)
            {
                _body = value;
                OnPropertyChanged("Body");
            }
        }
    }

    private int _interviewID;
    public int InterviewID
    {
        get { return _interviewID; }
        set
        {
            if (_interviewID != value)
            {
                _interviewID = value;
                OnPropertyChanged("InterviewID");
            }
        }
    }

    public override IAppointment Copy()
    {
        IAppointment appointment = new InterviewAppointment();
        appointment.CopyFrom(this);
        return appointment;
    }

    public override void CopyFrom(IAppointment other)
    {
        base.CopyFrom(other);
        var appointment = other as InterviewAppointment;
        if (appointment != null)
        {
            ApplicantID = appointment.ApplicantID;
            PostingID = appointment.PostingID;
            Body = appointment.Body;
            InterviewID = appointment.InterviewID;
        }
    }
}

    Nothing too exciting going on here, we just make sure that our custom fields are persisted (specifically set in CopyFrom at the bottom) and notifications are fired; otherwise this ends up exactly like the standard appointment as far as interactions, etc.
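    Wiring the new appointment type into the view model might look roughly like this. A sketch only: the ObservableCollection bound to RadScheduler's AppointmentsSource and the Subject/Start/End members on AppointmentBase are my assumptions about the Telerik API, and the sample values are invented.

using System;
using System.Collections.ObjectModel;

public class SchedulerViewModel
{
    public ObservableCollection<InterviewAppointment> Appointments { get; private set; }

    public SchedulerViewModel()
    {
        Appointments = new ObservableCollection<InterviewAppointment>();

        // A sample interview slot; the custom fields ride along with the
        // standard appointment data and survive Copy/CopyFrom.
        Appointments.Add(new InterviewAppointment
        {
            Subject = "Interview - Senior Developer posting",
            Start = DateTime.Today.AddHours(10),
            End = DateTime.Today.AddHours(11),
            PostingID = 1,
            ApplicantID = 42,
            Body = "Initial phone screen follow-up"
        });
    }
}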
    But if we've got custom appointment items... that also means we need to customize what our appointment dialog window will look like.

    Customizing the Edit Appointment Dialog

    This initially sounds a lot more intimidating than it really is.  The first step here depends on what you're dealing with for theming, but for ease of everything I went ahead and extracted my templates in Blend for RadScheduler so I could modify it as I pleased.  For the faint of heart, the RadScheduler template is a few thousand lines of goodness, since there are some very complex things going on in that control.  I've gone ahead and trimmed down the template parts I don't need as much as possible, so what is left is all that is relevant to the Edit Appointment Dialog.  Here's the resulting Xaml, with line numbers, so I can explain further:

001.<UserControl.Resources>
002.    <!-- begin Necessary Windows 7 Theme Resources for EditAppointmentTemplate -->
003.    <helpers:DataContextProxy x:Key="DataContextProxy" />
004.
005.    <telerik:Windows7Theme x:Key="Theme" />
006.    <SolidColorBrush x:Key="DialogWindowBackground"
007.                     Color="White" />
008.    <SolidColorBrush x:Key="CategorySelectorBorderBrush"
009.                     Color="#FFB1B1B1" />
010.    <LinearGradientBrush x:Key="RadToolBar_InnerBackground"
011.                         EndPoint="0.5,1"
012.                         StartPoint="0.5,0">
013.        <GradientStop Color="#FFFDFEFF"
014.                      Offset="0" />
015.        <GradientStop Color="#FFDDE9F7"
016.                      Offset="1" />
017.        <GradientStop Color="#FFE6F0FA"
018.                      Offset="0.5" />
019.        <GradientStop Color="#FFDCE6F4"
020.                      Offset="0.5" />
021.    </LinearGradientBrush>
022.    <Style x:Key="FormElementTextBlockStyle"
023.           TargetType="TextBlock">
024.        <Setter Property="HorizontalAlignment"
025.                Value="Right" />
026.        <Setter Property="VerticalAlignment"
027.                Value="Top" />
028.        <Setter Property="Margin"
029.                Value="15, 15, 0, 2" />
030.    </Style>
031.    <Style x:Key="FormElementStyle"
032.           TargetType="FrameworkElement">
033.        <Setter Property="Margin"
034.                Value="10, 10, 0, 2" />
035.    </Style>
036.    <SolidColorBrush x:Key="GenericShallowBorderBrush"
037.                     Color="#FF979994" />
038.    <telerik:BooleanToVisibilityConverter x:Key="BooleanToVisibilityConverter" />
039.    <telerikScheduler:ImportanceToBooleanConverter x:Key="ImportanceToBooleanConverter" />
040.    <telerikScheduler:NullToVisibilityConverter x:Key="NullToVisibilityConverter" />
041.    <telerikScheduler:InvertedNullToVisibilityConverter x:Key="InvertedNullToVisibilityConverter" />
042.    <scheduler:ResourcesSeparatorConverter x:Key="ResourcesSeparatorConverter" />
043.    <DataTemplate x:Key="IconDataEditTemplate">
044.        <Image Source="/Telerik.Windows.Controls.Scheduler;component/Themes/Office/Images/cal.png"
045.               Margin="3,3,0,0"
046.               Width="16"
047.               Height="16" />
048.    </DataTemplate>
049.    <DataTemplate x:Key="SingleSelectionTemplate">
050.        <Grid VerticalAlignment="Stretch"
051.              HorizontalAlignment="Stretch">
         HorizontalAlignment="Stretch"> 052.            <Grid.RowDefinitions> 053.                <RowDefinition Height="Auto" /> 054.            </Grid.RowDefinitions> 055.            <Grid.ColumnDefinitions> 056.                <ColumnDefinition Width="Auto" 057.                                  MinWidth="84" /> 058.                <ColumnDefinition Width="Auto" 059.                                  MinWidth="200" /> 060.            </Grid.ColumnDefinitions> 061.            <TextBlock x:Name="SelectionNameLabel" 062.                       Margin="0,13,4,2" 063.                       Text="{Binding ResourceType.DisplayName}" 064.                       Style="{StaticResource FormElementTextBlockStyle}" 065.                       Grid.Column="0" /> 066.            <telerikInput:RadComboBox ItemsSource="{Binding ResourceItems}" 067.                                      Width="185" 068.                                      Margin="5,10,20,2" 069.                                      HorizontalAlignment="Left" 070.                                      Grid.Column="1" 071.                                      ClearSelectionButtonVisibility="Visible" 072.                                      ClearSelectionButtonContent="Clear All" 073.                                      DisplayMemberPath="Resource.DisplayName" 074.                                      telerik:StyleManager.Theme="{StaticResource Theme}" 075.                                      SelectedItem="{Binding SelectedItem, Mode=TwoWay}" /> 076.        </Grid> 077.    </DataTemplate> 078.    <DataTemplate x:Key="MultipleSelectionTemplate"> 079.        <Grid VerticalAlignment="Stretch" 080.              HorizontalAlignment="Stretch"> 081.            <Grid.RowDefinitions> 082.                <RowDefinition Height="Auto" /> 083.            </Grid.RowDefinitions> 084.            <Grid.ColumnDefinitions> 085.                <ColumnDefinition Width="Auto" 086.                                  MinWidth="84" /> 087.                <ColumnDefinition Width="Auto" 088.                                  MinWidth="200" /> 089.            </Grid.ColumnDefinitions> 090.            <TextBlock x:Name="SelectionNameLabel" 091.                       Grid.Column="0" 092.                       Text="{Binding ResourceType.DisplayName}" 093.                       Margin="0,13,4,2" 094.                       Style="{StaticResource FormElementTextBlockStyle}" /> 095.            <telerikInput:RadComboBox Grid.Column="1" 096.                                      Width="185" 097.                                      HorizontalAlignment="Left" 098.                                      Margin="5,10,20,2" 099.                                      ItemsSource="{Binding ResourceItems}" 100.                                      SelectedIndex="{Binding SelectedIndex, Mode=TwoWay}" 101.                                      ClearSelectionButtonVisibility="Visible" 102.                                      ClearSelectionButtonContent="Clear All" 103.                                      telerik:StyleManager.Theme="{StaticResource Theme}"> 104.                <telerikInput:RadComboBox.ItemTemplate> 105.                    <DataTemplate> 106.                        <Grid HorizontalAlignment="Stretch" 107.                              VerticalAlignment="Stretch"> 108.                            <CheckBox VerticalAlignment="Center" 109.                                      HorizontalContentAlignment="Stretch" 110.                                      
VerticalContentAlignment="Center" 111.                                      IsChecked="{Binding IsChecked, Mode=TwoWay}" 112.                                      Content="{Binding Resource.DisplayName}"> 113.                                <CheckBox.ContentTemplate> 114.                                    <DataTemplate> 115.                                        <TextBlock HorizontalAlignment="Stretch" 116.                                                   VerticalAlignment="Stretch" 117.                                                   Text="{Binding Content, RelativeSource={RelativeSource TemplatedParent}}" /> 118.                                    </DataTemplate> 119.                                </CheckBox.ContentTemplate> 120.                            </CheckBox> 121.                        </Grid> 122.                    </DataTemplate> 123.                </telerikInput:RadComboBox.ItemTemplate> 124.            </telerikInput:RadComboBox> 125.        </Grid> 126.    </DataTemplate> 127.    <scheduler:ResourceTypeTemplateSelector x:Key="ItemTemplateSelector" 128.                                            MultipleSelectionTemplate="{StaticResource MultipleSelectionTemplate}" 129.                                            SingleSelectionTemplate="{StaticResource SingleSelectionTemplate}" /> 130.    <!-- end Necessary Windows 7 Theme Resources for EditAppointmentTemplate -->  131.       132.    <ControlTemplate x:Key="EditAppointmentTemplate" 133.                     TargetType="telerikScheduler:AppointmentDialogWindow"> 134.        <StackPanel Background="{TemplateBinding Background}" 135.                    UseLayoutRounding="True"> 136.            <StackPanel Grid.Row="0" 137.                        Orientation="Horizontal" 138.                        Background="{StaticResource RadToolBar_InnerBackground}" 139.                        Grid.ColumnSpan="2" 140.                        Height="0"> 141.                <!-- Recurrence buttons --> 142.                <Border Margin="1,1,0,0" 143.                        Background="#50000000" 144.                        HorizontalAlignment="Left" 145.                        VerticalAlignment="Center" 146.                        Width="2" 147.                        Height="16"> 148.                    <Border Margin="0,0,1,1" 149.                            Background="#80FFFFFF" 150.                            HorizontalAlignment="Left" 151.                            Width="1" /> 152.                </Border> 153.                <Border Margin="1,1,0,0" 154.                        Background="#50000000" 155.                        HorizontalAlignment="Left" 156.                        VerticalAlignment="Center" 157.                        Width="2" 158.                        Height="16"> 159.                    <Border Margin="0,0,1,1" 160.                            Background="#80FFFFFF" 161.                            HorizontalAlignment="Left" 162.                            Width="1" /> 163.                </Border> 164.                <TextBlock telerik:LocalizationManager.ResourceKey="ShowAs" 165.                           VerticalAlignment="Center" 166.                           Margin="5,0,0,0" /> 167.                <telerikInput:RadComboBox ItemsSource="{TemplateBinding TimeMarkers}" 168.                                          Width="100" 169.                                          Height="20" 170.                                          VerticalAlignment="Center" 171.                                          
Margin="5,0,0,0" 172.                                          ClearSelectionButtonVisibility="Visible" 173.                                          ClearSelectionButtonContent="Clear" 174.                                          SelectedItem="{Binding TimeMarker,RelativeSource={RelativeSource TemplatedParent},Mode=TwoWay}" 175.                                          telerik:StyleManager.Theme="{StaticResource Theme}"> 176.                    <telerikInput:RadComboBox.ItemTemplate> 177.                        <DataTemplate> 178.                            <StackPanel Orientation="Horizontal"> 179.                                <Rectangle Fill="{Binding TimeMarkerBrush}" 180.                                           Margin="2" 181.                                           Width="12" 182.                                           Height="12" /> 183.                                <TextBlock Text="{Binding TimeMarkerName}" 184.                                           Margin="2" /> 185.                            </StackPanel> 186.                        </DataTemplate> 187.                    </telerikInput:RadComboBox.ItemTemplate> 188.                </telerikInput:RadComboBox> 189.                <telerik:RadToggleButton x:Name="High" 190.                                         BorderThickness="0" 191.                                         Background="{StaticResource RadToolBar_InnerBackground}" 192.                                         DataContext="{TemplateBinding EditedAppointment}" 193.                                         telerik:StyleManager.Theme="{StaticResource Theme}" 194.                                         IsChecked="{Binding Importance,Mode=TwoWay, Converter={StaticResource ImportanceToBooleanConverter},ConverterParameter=High}" 195.                                         Margin="2,2,0,2" 196.                                         Width="23" 197.                                         Height="23" 198.                                         HorizontalContentAlignment="Center" 199.                                         ToolTipService.ToolTip="High importance" 200.                                         CommandParameter="High" 201.                                         Command="telerikScheduler:RadSchedulerCommands.SetAppointmentImportance"> 202.                    <StackPanel HorizontalAlignment="Center"> 203.                        <Path Stretch="Fill" 204.                              Height="10" 205.                              HorizontalAlignment="Center" 206.                              VerticalAlignment="Top" 207.                              Width="5.451" 208.                              Data="M200.39647,58.840393 C200.39337,58.336426 201.14566,57.683922 202.56244,57.684292 C204.06589,57.684685 204.73764,58.357765 204.72783,58.992363 C205.04649,61.795574 203.04713,64.181099 202.47388,66.133446 C201.93753,64.154961 199.9471,61.560352 200.39647,58.840393 z"> 209.                            <Path.Fill> 210.                                <LinearGradientBrush EndPoint="1.059,0.375" 211.                                                     StartPoint="-0.457,0.519"> 212.                                    <GradientStop Color="#FFFF0606" 213.                                                  Offset="0.609" /> 214.                                    <GradientStop Color="#FFBF0303" 215.                                                  Offset="0.927" /> 216.                                </LinearGradientBrush> 217.                            
</Path.Fill> 218.                        </Path> 219.                        <Ellipse Height="3" 220.                                 HorizontalAlignment="Center" 221.                                 Margin="0,-1,0,0" 222.                                 VerticalAlignment="Top" 223.                                 Width="3"> 224.                            <Ellipse.Fill> 225.                                <RadialGradientBrush> 226.                                    <GradientStop Color="#FFFF0606" 227.                                                  Offset="0" /> 228.                                    <GradientStop Color="#FFBF0303" 229.                                                  Offset="1" /> 230.                                </RadialGradientBrush> 231.                            </Ellipse.Fill> 232.                        </Ellipse> 233.                    </StackPanel> 234.                </telerik:RadToggleButton> 235.                <telerik:RadToggleButton x:Name="Low" 236.                                         HorizontalContentAlignment="Center" 237.                                         BorderThickness="0" 238.                                         Background="{StaticResource RadToolBar_InnerBackground}" 239.                                         DataContext="{TemplateBinding EditedAppointment}" 240.                                         IsChecked="{Binding Importance,Mode=TwoWay, Converter={StaticResource ImportanceToBooleanConverter},ConverterParameter=Low}" 241.                                         Margin="0,2,0,2" 242.                                         Width="23" 243.                                         Height="23" 244.                                         ToolTipService.ToolTip="Low importance" 245.                                         CommandParameter="Low" 246.                                         telerik:StyleManager.Theme="{StaticResource Theme}" 247.                                         Command="telerikScheduler:RadSchedulerCommands.SetAppointmentImportance"> 248.                    <Path Stretch="Fill" 249.                          Height="12" 250.                          HorizontalAlignment="Center" 251.                          VerticalAlignment="Top" 252.                          Width="9" 253.                          Data="M222.40353,60.139881 L226.65768,60.139843 L226.63687,67.240196 L229.15347,67.240196 L224.37816,71.394943 L219.65274,67.240196 L222.37572,67.219345 z" 254.                          Stroke="#FF0365A7"> 255.                        <Path.Fill> 256.                            <LinearGradientBrush EndPoint="1.059,0.375" 257.                                                 StartPoint="-0.457,0.519"> 258.                                <GradientStop Color="#FFBBE4FF" /> 259.                                <GradientStop Color="#FF024572" 260.                                              Offset="0.836" /> 261.                                <GradientStop Color="#FF43ADF4" 262.                                              Offset="0.466" /> 263.                            </LinearGradientBrush> 264.                        </Path.Fill> 265.                    </Path> 266.                </telerik:RadToggleButton> 267.            </StackPanel > 268.            <Border DataContext="{TemplateBinding EditedAppointment}" 269.                    Background="{Binding Category.CategoryBrush}" 270.                    Visibility="{Binding Category,Converter={StaticResource NullToVisibilityConverter}}" 271.                   
 CornerRadius="3" 272.                    Height="20" 273.                    Margin="5,10,5,0"> 274.                <TextBlock Text="{Binding Category.DisplayName}" 275.                           VerticalAlignment="Center" 276.                           Margin="5,0,0,0" /> 277.            </Border> 278.            <Grid VerticalAlignment="Stretch" 279.                  HorizontalAlignment="Stretch" 280.                  DataContext="{TemplateBinding EditedAppointment}" 281.                  Background="{TemplateBinding Background}"> 282.                <Grid.RowDefinitions> 283.                    <RowDefinition Height="Auto" /> 284.                    <RowDefinition Height="Auto" /> 285.                    <RowDefinition Height="Auto" /> 286.                    <RowDefinition Height="Auto" /> 287.                    <RowDefinition Height="Auto" /> 288.                    <RowDefinition Height="Auto" /> 289.                    <RowDefinition Height="Auto" /> 290.                    <RowDefinition Height="Auto" /> 291.                    <RowDefinition Height="Auto" /> 292.                    <RowDefinition Height="Auto" /> 293.                </Grid.RowDefinitions> 294.                <Grid.ColumnDefinitions> 295.                    <ColumnDefinition Width="Auto" 296.                                      MinWidth="100" /> 297.                    <ColumnDefinition Width="Auto" 298.                                      MinWidth="200" /> 299.                </Grid.ColumnDefinitions> 300.                <!-- Subject --> 301.                <TextBlock x:Name="SubjectLabel" 302.                           Grid.Row="0" 303.                           Grid.Column="0" 304.                           Margin="0,15,0,2" 305.                           telerik:LocalizationManager.ResourceKey="Subject" 306.                           Style="{StaticResource FormElementTextBlockStyle}" /> 307.                <TextBox x:Name="Subject" 308.                         Grid.Row="0" 309.                         Grid.Column="1" 310.                         MinHeight="22" 311.                         Padding="4 2" 312.                         Width="340" 313.                         HorizontalAlignment="Left" 314.                         Text="{Binding Subject, Mode=TwoWay}" 315.                         MaxLength="255" 316.                         telerik:StyleManager.Theme="{StaticResource Theme}" 317.                         Margin="10,12,20,2" /> 318.                <!-- Description --> 319.                <TextBlock x:Name="DescriptionLabel" 320.                           Grid.Row="1" 321.                           Grid.Column="0" 322.                           Margin="0,13,0,2" 323.                           telerik:LocalizationManager.ResourceKey="Body" 324.                           Style="{StaticResource FormElementTextBlockStyle}" /> 325.                <TextBox x:Name="Body" 326.                         VerticalAlignment="top" 327.                         Grid.Row="1" 328.                         Grid.Column="1" 329.                         Height="Auto" 330.                         MaxHeight="82" 331.                         Width="340" 332.                         HorizontalAlignment="Left" 333.                         MinHeight="22" 334.                         Padding="4 2" 335.                         TextWrapping="Wrap" 336.                         telerik:StyleManager.Theme="{StaticResource Theme}" 337.                         Text="{Binding Body, Mode=TwoWay}" 338.                         
AcceptsReturn="true" 339.                         Margin="10,10,20,2" 340.                         HorizontalScrollBarVisibility="Auto" 341.                         VerticalScrollBarVisibility="Auto" /> 342.                <!-- Start/End date --> 343.                <TextBlock x:Name="StartDateLabel" 344.                           Grid.Row="2" 345.                           Grid.Column="0" 346.                           Margin="0,13,0,2" 347.                           telerik:LocalizationManager.ResourceKey="StartTime" 348.                           Style="{StaticResource FormElementTextBlockStyle}" /> 349.                <telerikScheduler:DateTimePicker x:Name="StartDateTime" 350.                                                 Height="22" 351.                                                 Grid.Row="2" 352.                                                 Grid.Column="1" 353.                                                 HorizontalAlignment="Left" 354.                                                 Margin="10,10,20,2" 355.                                                 Style="{StaticResource FormElementStyle}" 356.                                                 SelectedDateTime="{Binding Start, Mode=TwoWay}" 357.                                                 telerikScheduler:StartEndDatePicker.EndPicker="{Binding ElementName=EndDateTime}" 358.                                                 IsTabStop="False" 359.                                                 IsEnabled="False" /> 360.                <TextBlock x:Name="EndDateLabel" 361.                           Grid.Row="3" 362.                           Grid.Column="0" 363.                           Margin="0,13,0,2" 364.                           telerik:LocalizationManager.ResourceKey="EndTime" 365.                           Style="{StaticResource FormElementTextBlockStyle}" /> 366.                <telerikScheduler:DateTimePicker x:Name="EndDateTime" 367.                                                 Height="22" 368.                                                 Grid.Row="3" 369.                                                 Grid.Column="1" 370.                                                 HorizontalAlignment="Left" 371.                                                 Margin="10,10,20,2" 372.                                                 Style="{StaticResource FormElementStyle}" 373.                                                 IsTabStop="False" 374.                                                 IsEnabled="False" 375.                                                 SelectedDateTime="{Binding End, Mode=TwoWay}" /> 376.                <!-- Is-all-day selector --> 377.                <CheckBox x:Name="AllDayEventCheckbox" 378.                          IsChecked="{Binding IsAllDayEvent, Mode=TwoWay}" 379.                          Grid.Row="4" 380.                          Grid.Column="1" 381.                          Margin="10,10,20,2" 382.                          HorizontalAlignment="Left" 383.                          telerik:StyleManager.Theme="{StaticResource Theme}" 384.                          telerik:LocalizationManager.ResourceKey="AllDayEvent"> 385.                    <telerik:CommandManager.InputBindings> 386.                        <telerik:InputBindingCollection> 387.                            <telerik:MouseBinding Command="telerikScheduler:RadSchedulerCommands.ChangeTimePickersVisibility" 388.                                                  Gesture="LeftClick" /> 389.                        
</telerik:InputBindingCollection> 390.                    </telerik:CommandManager.InputBindings> 391.                </CheckBox> 392.                <Grid Grid.Row="5" 393.                      Grid.ColumnSpan="2"> 394.                    <Grid.ColumnDefinitions> 395.                        <ColumnDefinition Width="Auto" 396.                                          MinWidth="100" /> 397.                        <ColumnDefinition Width="Auto" 398.                                          MinWidth="200" /> 399.                    </Grid.ColumnDefinitions> 400.                    <Grid.RowDefinitions> 401.                        <RowDefinition Height="Auto" /> 402.                        <RowDefinition Height="Auto" /> 403.                    </Grid.RowDefinitions> 404.                    <TextBlock Text="Applicant" 405.                               Margin="0,13,0,2" 406.                               Style="{StaticResource FormElementTextBlockStyle}" /> 407.                    <telerikInput:RadComboBox IsEditable="False" 408.                                              Grid.Column="1" 409.                                              Height="24" 410.                                              VerticalAlignment="Center" 411.                                              ItemsSource="{Binding Source={StaticResource DataContextProxy}, Path=DataSource.ApplicantList}" 412.                                              SelectedValue="{Binding ApplicantID, Mode=TwoWay}" 413.                                              SelectedValuePath="ApplicantID" 414.                                              DisplayMemberPath="FirstName" /> 415.                       416.                    <TextBlock Text="Job" 417.                               Margin="0,13,0,2" 418.                               Grid.Row="1" 419.                               Style="{StaticResource FormElementTextBlockStyle}" /> 420.                    <telerikInput:RadComboBox IsEditable="False" 421.                                              Grid.Column="1" 422.                                              Grid.Row="1" 423.                                              Height="24" 424.                                              VerticalAlignment="Center" 425.                                              ItemsSource="{Binding Source={StaticResource DataContextProxy}, Path=DataSource.JobsList}" 426.                                              SelectedValue="{Binding PostingID, Mode=TwoWay}" 427.                                              SelectedValuePath="PostingID" 428.                                              DisplayMemberPath="JobTitle"/> 429.                </Grid> 430.                    <!-- Resources --> 431.                <Grid x:Name="ResourcesLayout" 432.                      Grid.Row="7" 433.                      Grid.Column="0" 434.                      Grid.ColumnSpan="2" 435.                      MaxHeight="130" 436.                      Margin="20,5,20,0"> 437.                    <Border Margin="0" 438.                            BorderThickness="1" 439.                            BorderBrush="{StaticResource GenericShallowBorderBrush}" 440.                            Visibility="{Binding ElementName=ResourcesScrollViewer, Path=ComputedVerticalScrollBarVisibility}"></Border> 441.                    <ScrollViewer x:Name="ResourcesScrollViewer" 442.                                  IsTabStop="false" 443.                                  Grid.Row="6" 444.                                  Grid.Column="0" 445.    
                              Grid.ColumnSpan="2" 446.                                  Margin="1" 447.                                  telerik:StyleManager.Theme="{StaticResource Theme}" 448.                                  VerticalScrollBarVisibility="Auto"> 449.                        <scheduler:ResourcesItemsControl x:Name="PART_Resources" 450.                                                         HorizontalAlignment="Left" 451.                                                         Padding="0,2,0,5" 452.                                                         IsTabStop="false" 453.                                                         ItemsSource="{TemplateBinding ResourceTypeModels}" 454.                                                         ItemTemplateSelector="{StaticResource ItemTemplateSelector}" /> 455.                    </ScrollViewer> 456.                </Grid> 457.                <StackPanel x:Name="FooterControls" 458.                            Margin="5 10 10 10" 459.                            Grid.Row="8" 460.                            Grid.Column="1" 461.                            HorizontalAlignment="Left" 462.                            Orientation="Horizontal"> 463.                    <telerik:RadButton x:Name="OKButton" 464.                                       Margin="5" 465.                                       Padding="10 0" 466.                                       MinWidth="80" 467.                                       Command="telerikScheduler:RadSchedulerCommands.SaveAppointment" 468.                                       telerik:StyleManager.Theme="{StaticResource Theme}" 469.                                       telerikNavigation:RadWindow.ResponseButton="Accept" 470.                                       telerik:LocalizationManager.ResourceKey="SaveAndCloseCommandText"> 471.                    </telerik:RadButton> 472.                    <telerik:RadButton x:Name="CancelButton" 473.                                       Margin="5" 474.                                       Padding="10 0" 475.                                       MinWidth="80" 476.                                       telerik:LocalizationManager.ResourceKey="Cancel" 477.                                       telerik:StyleManager.Theme="{StaticResource Theme}" 478.                                       telerikNavigation:RadWindow.ResponseButton="Cancel" 479.                                       Command="telerik:WindowCommands.Close"> 480.                    </telerik:RadButton> 481.                </StackPanel> 482.            </Grid> 483.            <vsm:VisualStateManager.VisualStateGroups> 484.                <vsm:VisualStateGroup x:Name="RecurrenceRuleState"> 485.                    <vsm:VisualState x:Name="RecurrenceRuleIsNull"> 486.                        <Storyboard> 487.                            <ObjectAnimationUsingKeyFrames Storyboard.TargetName="StartDateTime" 488.                                                           Storyboard.TargetProperty="IsEnabled" 489.                                                           Duration="0"> 490.                                <DiscreteObjectKeyFrame KeyTime="0" 491.                                                        Value="True" /> 492.                            </ObjectAnimationUsingKeyFrames> 493.                            <ObjectAnimationUsingKeyFrames Storyboard.TargetName="EndDateTime" 494.                                                           Storyboard.TargetProperty="IsEnabled" 495.              
                                             Duration="0"> 496.                                <DiscreteObjectKeyFrame KeyTime="0" 497.                                                        Value="True" /> 498.                            </ObjectAnimationUsingKeyFrames> 499.                            <ObjectAnimationUsingKeyFrames Storyboard.TargetName="AllDayEventCheckbox" 500.                                                           Storyboard.TargetProperty="IsEnabled" 501.                                                           Duration="0"> 502.                                <DiscreteObjectKeyFrame KeyTime="0" 503.                                                        Value="True" /> 504.                            </ObjectAnimationUsingKeyFrames> 505.                        </Storyboard> 506.                    </vsm:VisualState> 507.                    <vsm:VisualState x:Name="RecurrenceRuleIsNotNull"> 508.                        <Storyboard> 509.                            <ObjectAnimationUsingKeyFrames Storyboard.TargetName="StartDateTime" 510.                                                           Storyboard.TargetProperty="IsEnabled" 511.                                                           Duration="0"> 512.                                <DiscreteObjectKeyFrame KeyTime="0" 513.                                                        Value="False" /> 514.                            </ObjectAnimationUsingKeyFrames> 515.                            <ObjectAnimationUsingKeyFrames Storyboard.TargetName="EndDateTime" 516.                                                           Storyboard.TargetProperty="IsEnabled" 517.                                                           Duration="0"> 518.                                <DiscreteObjectKeyFrame KeyTime="0" 519.                                                        Value="False" /> 520.                            </ObjectAnimationUsingKeyFrames> 521.                            <ObjectAnimationUsingKeyFrames Storyboard.TargetName="AllDayEventCheckbox" 522.                                                           Storyboard.TargetProperty="IsEnabled" 523.                                                           Duration="0"> 524.                                <DiscreteObjectKeyFrame KeyTime="0" 525.                                                        Value="False" /> 526.                            </ObjectAnimationUsingKeyFrames> 527.                        </Storyboard> 528.                    </vsm:VisualState> 529.                </vsm:VisualStateGroup> 530.            </vsm:VisualStateManager.VisualStateGroups> 531.        </StackPanel> 532.    </ControlTemplate> 533.    <DataTemplate x:Key="AppointmentDialogWindowHeaderDataTemplate"> 534.        <StackPanel Orientation="Horizontal" 535.                    MaxWidth="400"> 536.            <TextBlock telerik:LocalizationManager.ResourceKey="Event" 537.                       Visibility="{Binding Appointment.IsAllDayEvent, Converter={StaticResource BooleanToVisibilityConverter}}" /> 538.            <TextBlock telerik:LocalizationManager.ResourceKey="Appointment" 539.                       Visibility="{Binding Appointment.IsAllDayEvent, Converter={StaticResource InvertedBooleanToVisibilityConverter}}" /> 540.            <TextBlock Text=" - " /> 541.            <TextBlock x:Name="SubjectTextBlock" 542.                       Visibility="{Binding Appointment.Subject, Converter={StaticResource NullToVisibilityConverter}}" 543.                       
Text="{Binding Appointment.Subject}" /> 544.            <TextBlock telerik:LocalizationManager.ResourceKey="Untitled" 545.                       Visibility="{Binding Appointment.Subject, Converter={StaticResource InvertedNullToVisibilityConverter}}" /> 546.        </StackPanel> 547.    </DataTemplate> 548.    <Style x:Key="EditAppointmentStyle" 549.           TargetType="telerikScheduler:AppointmentDialogWindow"> 550.        <Setter Property="IconTemplate" 551.                Value="{StaticResource IconDataEditTemplate}" /> 552.        <Setter Property="HeaderTemplate" 553.                Value="{StaticResource AppointmentDialogWindowHeaderDataTemplate}" /> 554.        <Setter Property="Background" 555.                Value="{StaticResource DialogWindowBackground}" /> 556.        <Setter Property="Template" 557.                Value="{StaticResource EditAppointmentTemplate}" /> 558.    </Style> 559.</UserControl.Resources>
The first line there is the DataContextProxy I mentioned previously; we use that again to work a bit of magic in this template (for reference, a minimal sketch of a typical DataContextProxy appears at the end of this post). Where we start getting into the dialog in question is line 132, but line 407 is where things start getting interesting. The ItemsSource points at a list that exists in my ViewModel (or code-behind, if that is used as the DataContext), the SelectedValue is the item I am actually binding from the applicant (note the TwoWay binding), and SelectedValuePath and DisplayMemberPath ensure the proper applicant is displayed from the collection. You will also see something similar starting on line 420, where I do the same for the Jobs we'll be displaying. Just to wrap up the Xaml, here's the RadScheduler declaration that ties this all together and will be the main focus of our view:
01.<telerikScheduler:RadScheduler x:Name="xJobsScheduler" 02.                  Grid.Row="1" 03.                  Grid.Column="1" 04.                  Width="800" 05.                  MinWidth="600" 06.                  Height="500" 07.                  MinHeight="300" 08.                  AppointmentsSource="{Binding Interviews}" 09.                  EditAppointmentStyle="{StaticResource EditAppointmentStyle}" 10.                  command:AppointmentAddedEventClass.Command="{Binding AddAppointmentCommand}" 11.                  command:ApptCreatedEventClass.Command="{Binding ApptCreatingCommand}" 12.                  command:ApptEditedEventClass.Command="{Binding ApptEditedCommand}" 13.                  command:ApptDeletedEventClass.Command="{Binding ApptDeletedCommand}"> 14.</telerikScheduler:RadScheduler>
Now, we get to the ViewModel and what it takes to get that rigged up. And for those of you who remember the jobs post, those command: attached properties in the Xaml point to attached behavior commands that reproduce the respective events (a sketch of what one of these looks like is also at the end of this post). This becomes very handy when we're setting up the code-behind version. ;)
ViewModel
I've been liking this approach so far, so I'm going to put the entire ViewModel here and then go into the lines of interest. Of course, feel free to ask me questions about anything that isn't clear (by line number, ideally) so I can help out if I have missed anything important:
001.public class SchedulerViewModel : ViewModelBase 002.{ 003.    private readonly IEventAggregator eventAggregator; 004.    private readonly IRegionManager regionManager; 005.   006.    public RecruitingContext context; 007.   008.    private ObservableItemCollection<InterviewAppointment> _interviews = new ObservableItemCollection<InterviewAppointment>(); 009.
public ObservableItemCollection<InterviewAppointment> Interviews 010.    { 011.        get { return _interviews; } 012.        set 013.        { 014.            if (_interviews != value) 015.            { 016.                _interviews = value; 017.                NotifyChanged("Interviews"); 018.            } 019.        } 020.    } 021.   022.    private QueryableCollectionView _jobsList; 023.    public QueryableCollectionView JobsList 024.    { 025.        get { return this._jobsList; } 026.        set 027.        { 028.            if (this._jobsList != value) 029.            { 030.                this._jobsList = value; 031.                this.NotifyChanged("JobsList"); 032.            } 033.        } 034.    } 035.   036.    private QueryableCollectionView _applicantList; 037.    public QueryableCollectionView ApplicantList 038.    { 039.        get { return _applicantList; } 040.        set 041.        { 042.            if (_applicantList != value) 043.            { 044.                _applicantList = value; 045.                NotifyChanged("ApplicantList"); 046.            } 047.        } 048.    } 049.   050.    public DelegateCommand<object> AddAppointmentCommand { get; set; } 051.    public DelegateCommand<object> ApptCreatingCommand { get; set; } 052.    public DelegateCommand<object> ApptEditedCommand { get; set; } 053.    public DelegateCommand<object> ApptDeletedCommand { get; set; } 054.   055.    public SchedulerViewModel(IEventAggregator eventAgg, IRegionManager regionmanager) 056.    { 057.        // set Unity items 058.        this.eventAggregator = eventAgg; 059.        this.regionManager = regionmanager; 060.   061.        // load our context 062.        context = new RecruitingContext(); 063.        LoadOperation<Interview> loadOp = context.Load(context.GetInterviewsQuery()); 064.        loadOp.Completed += new EventHandler(loadOp_Completed); 065.   066.        this._jobsList = new QueryableCollectionView(context.JobPostings); 067.        context.Load(context.GetJobPostingsQuery()); 068.   069.        this._applicantList = new QueryableCollectionView(context.Applicants); 070.        context.Load(context.GetApplicantsQuery()); 071.   072.        AddAppointmentCommand = new DelegateCommand<object>(this.AddAppt); 073.        ApptCreatingCommand = new DelegateCommand<object>(this.ApptCreating); 074.        ApptEditedCommand = new DelegateCommand<object>(this.ApptEdited); 075.        ApptDeletedCommand = new DelegateCommand<object>(this.ApptDeleted); 076.   077.    } 078.   079.    void loadOp_Completed(object sender, EventArgs e) 080.    { 081.        LoadOperation loadop = sender as LoadOperation; 082.   083.        foreach (var ent in loadop.Entities) 084.        { 085.            _interviews.Add(EntityToAppointment(ent as Interview)); 086.        } 087.    } 088.   089.    #region Appointment Adding 090.   091.    public void AddAppt(object obj) 092.    { 093.        // now we have a new InterviewAppointment to add to our QCV :) 094.        InterviewAppointment newInterview = obj as InterviewAppointment; 095.   096.        this.context.Interviews.Add(AppointmentToEntity(newInterview)); 097.        this.context.SubmitChanges((s) => 098.        { 099.            ActionHistory myAction = new ActionHistory(); 100.            myAction.InterviewID = newInterview.InterviewID; 101.            myAction.PostingID = newInterview.PostingID; 102.            myAction.ApplicantID = newInterview.ApplicantID; 103.            
myAction.Description = String.Format("Interview with {0} has been created by {1}", newInterview.ApplicantID.ToString(), "default user"); 104.            myAction.TimeStamp = DateTime.Now; 105.            eventAggregator.GetEvent<AddActionEvent>().Publish(myAction); 106.        } 107.            , null); 108.    } 109.   110.    public void ApptCreating(object obj) 111.    { 112.        // handled in the behavior, just a placeholder to ensure it runs :) 113.    } 114.   115.    #endregion 116.   117.    #region Appointment Editing 118.   119.    public void ApptEdited(object obj) 120.    { 121.        Interview editedInterview = (from x in context.Interviews 122.                            where x.InterviewID == (obj as InterviewAppointment).InterviewID 123.                            select x).SingleOrDefault(); 124.   125.        CopyAppointmentEdit(editedInterview, obj as InterviewAppointment); 126.   127.        context.SubmitChanges((s) => { 128.            ActionHistory myAction = new ActionHistory(); 129.            myAction.InterviewID = editedInterview.InterviewID; 130.            myAction.PostingID = editedInterview.PostingID; 131.            myAction.ApplicantID = editedInterview.ApplicantID; 132.            myAction.Description = String.Format("Interview with {0} has been modified by {1}", editedInterview.ApplicantID.ToString(), "default user"); 133.            myAction.TimeStamp = DateTime.Now; 134.            eventAggregator.GetEvent<AddActionEvent>().Publish(myAction); } 135.            , null); 136.    } 137.   138.    #endregion 139.   140.    #region Appointment Deleting 141.   142.    public void ApptDeleted(object obj) 143.    { 144.        Interview deletedInterview = (from x in context.Interviews 145.                                      where x.InterviewID == (obj as InterviewAppointment).InterviewID 146.                                      select x).SingleOrDefault(); 147.   148.        context.Interviews.Remove(deletedInterview); 149.        context.SubmitChanges((s) => 150.        { 151.            ActionHistory myAction = new ActionHistory(); 152.            myAction.InterviewID = deletedInterview.InterviewID; 153.            myAction.PostingID = deletedInterview.PostingID; 154.            myAction.ApplicantID = deletedInterview.ApplicantID; 155.            myAction.Description = String.Format("Interview with {0} has been deleted by {1}", deletedInterview.ApplicantID.ToString(), "default user"); 156.            myAction.TimeStamp = DateTime.Now; 157.            eventAggregator.GetEvent<AddActionEvent>().Publish(myAction); 158.        } 159.            , null); 160.    } 161.   162.    #endregion 163.   164.    #region Appointment Helpers :) 165.   166.    public Interview AppointmentToEntity(InterviewAppointment ia) 167.    { 168.        Interview newInterview = new Interview(); 169.        newInterview.Subject = ia.Subject; 170.        newInterview.Body = ia.Body; 171.        newInterview.Start = ia.Start; 172.        newInterview.End = ia.End; 173.        newInterview.ApplicantID = ia.ApplicantID; 174.        newInterview.PostingID = ia.PostingID; 175.        newInterview.InterviewID = ia.InterviewID; 176.   177.        return newInterview; 178.    } 179.   180.    public InterviewAppointment EntityToAppointment(Interview ia) 181.    { 182.        InterviewAppointment newInterview = new InterviewAppointment(); 183.        newInterview.Subject = ia.Subject; 184.        newInterview.Body = ia.Body; 185.        newInterview.Start = ia.Start; 186.        
newInterview.End = ia.End; 187.        newInterview.ApplicantID = ia.ApplicantID; 188.        newInterview.PostingID = ia.PostingID; 189.        newInterview.InterviewID = ia.InterviewID; 190.   191.        return newInterview; 192.    } 193.   194.    public void CopyAppointmentEdit(Interview entityInterview, InterviewAppointment appointmentInterview) 195.    { 196.        entityInterview.Subject = appointmentInterview.Subject; 197.        entityInterview.Body = appointmentInterview.Body; 198.        entityInterview.Start = appointmentInterview.Start; 199.        entityInterview.End = appointmentInterview.End; 200.        entityInterview.ApplicantID = appointmentInterview.ApplicantID; 201.        entityInterview.PostingID = appointmentInterview.PostingID; 202.    } 203.   204.    #endregion 205.}
One thing we're doing here which you won't see in any of the other ViewModels is creating a duplicate collection. I know this is something that will be streamlined for RadScheduler down the line, but with WCF RIA changing as it does I wanted to ensure functionality would remain consistent as I continued development on this application. So, I do a little bit of duplication, but for the greater good. This all takes place starting on line 79: for every entity that comes back, we add it to the collection that is bound to RadScheduler. Otherwise, the DelegateCommands that you see correspond directly to the events they are named after. In each case, rather than sending over the full event arguments, I just send in the appointment in question (coming through as the object obj in all cases) so I can add (line 91), edit (line 119), and delete (line 142) appointments like normal. This just ensures they get updated back to my database. Also, the one bit of code you won't see is for the Appointment Creating (line 110) event; that is because in the command I've created I simply make the replacement I need to:
1.void element_AppointmentCreating(object sender, AppointmentCreatingEventArgs e) 2.{ 3.    e.NewAppointment = new InterviewAppointment(); 4.    base.ExecuteCommand(); 5.}
And the ViewModel is none the wiser; as far as it is concerned the appointments just work, since as they are created they become InterviewAppointments. End result? My EditAppointmentDialog is fully customized, and adding, editing, and deleting appointments works like a charm. I can even 'edit' by moving appointments around RadScheduler, so as they are dropped into a timeslot they perform their full edit routine and things get updated.
And then, the Code-Behind Version
Perhaps the thing I like the most about doing one then the other is I get to steal 90% or more of the code from the MVVM version. For example, the only real changes to the Code-Behind Xaml file are in the control declaration, in which I use events instead of attached-behavior-event-commands:
01.<telerikScheduler:RadScheduler x:Name="xJobsScheduler" 02.                  Grid.Row="1" 03.                  Grid.Column="1" 04.                  Width="800" 05.                  MinWidth="600" 06.                  Height="500" 07.                  MinHeight="300" 08.                  EditAppointmentStyle="{StaticResource EditAppointmentStyle}" 09.                  AppointmentAdded="xJobsScheduler_AppointmentAdded" 10.                  AppointmentCreating="xJobsScheduler_AppointmentCreating" 11.                  AppointmentEdited="xJobsScheduler_AppointmentEdited" 12.
AppointmentDeleted="xJobsScheduler_AppointmentDeleted"> 13.</telerikScheduler:RadScheduler> Easy, right?  Otherwise, all the same styling in UserControl.Resources was re-used, right down to the DataContextProxy that lets us bind to a collection from our viewmodel (in this case, our code-behind) to use within the DataTemplate.  The code conversion gets even easier, as I could literally copy and paste almost everything from the ViewModel to my Code-Behind, just a matter of pasting the right section into the right event.  Here's the code-behind as proof: 001.public partial class SchedulingView : UserControl, INotifyPropertyChanged 002.{ 003.    public RecruitingContext context; 004.   005.    private QueryableCollectionView _jobsList; 006.    public QueryableCollectionView JobsList 007.    { 008.        get { return this._jobsList; } 009.        set 010.        { 011.            if (this._jobsList != value) 012.            { 013.                this._jobsList = value; 014.                this.NotifyChanged("JobsList"); 015.            } 016.        } 017.    } 018.   019.    private QueryableCollectionView _applicantList; 020.    public QueryableCollectionView ApplicantList 021.    { 022.        get { return _applicantList; } 023.        set 024.        { 025.            if (_applicantList != value) 026.            { 027.                _applicantList = value; 028.                NotifyChanged("ApplicantList"); 029.            } 030.        } 031.    } 032.   033.    private ObservableItemCollection<InterviewAppointment> _interviews = new ObservableItemCollection<InterviewAppointment>(); 034.    public ObservableItemCollection<InterviewAppointment> Interviews 035.    { 036.        get { return _interviews; } 037.        set 038.        { 039.            if (_interviews != value) 040.            { 041.                _interviews = value; 042.                NotifyChanged("Interviews"); 043.            } 044.        } 045.    } 046.   047.    public SchedulingView() 048.    { 049.        InitializeComponent(); 050.   051.        this.DataContext = this; 052.   053.        this.Loaded += new RoutedEventHandler(SchedulingView_Loaded); 054.    } 055.   056.    void SchedulingView_Loaded(object sender, RoutedEventArgs e) 057.    { 058.        this.xJobsScheduler.AppointmentsSource = Interviews; 059.   060.        context = new RecruitingContext(); 061.           062.        LoadOperation loadop = context.Load(context.GetInterviewsQuery()); 063.        loadop.Completed += new EventHandler(loadop_Completed); 064.   065.        this._applicantList = new QueryableCollectionView(context.Applicants); 066.        context.Load(context.GetApplicantsQuery()); 067.   068.        this._jobsList = new QueryableCollectionView(context.JobPostings); 069.        context.Load(context.GetJobPostingsQuery()); 070.    } 071.   072.    void loadop_Completed(object sender, EventArgs e) 073.    { 074.        LoadOperation loadop = sender as LoadOperation; 075.   076.        _interviews.Clear(); 077.   078.        foreach (var ent in loadop.Entities) 079.        { 080.            _interviews.Add(EntityToAppointment(ent as Interview)); 081.        } 082.    } 083.   084.    private void xJobsScheduler_AppointmentAdded(object sender, Telerik.Windows.Controls.AppointmentAddedEventArgs e) 085.    { 086.        // now we have a new InterviewAppointment to add to our QCV :) 087.        InterviewAppointment newInterview = e.Appointment as InterviewAppointment; 088.   089.        
this.context.Interviews.Add(AppointmentToEntity(newInterview)); 090.        this.context.SubmitChanges((s) => 091.        { 092.            ActionHistory myAction = new ActionHistory(); 093.            myAction.InterviewID = newInterview.InterviewID; 094.            myAction.PostingID = newInterview.PostingID; 095.            myAction.ApplicantID = newInterview.ApplicantID; 096.            myAction.Description = String.Format("Interview with {0} has been created by {1}", newInterview.ApplicantID.ToString(), "default user"); 097.            myAction.TimeStamp = DateTime.Now; 098.            context.ActionHistories.Add(myAction); 099.            context.SubmitChanges(); 100.        } 101.            , null); 102.    } 103.   104.    private void xJobsScheduler_AppointmentCreating(object sender, Telerik.Windows.Controls.AppointmentCreatingEventArgs e) 105.    { 106.        e.NewAppointment = new InterviewAppointment(); 107.    } 108.   109.    private void xJobsScheduler_AppointmentEdited(object sender, Telerik.Windows.Controls.AppointmentEditedEventArgs e) 110.    { 111.        Interview editedInterview = (from x in context.Interviews 112.                                     where x.InterviewID == (e.Appointment as InterviewAppointment).InterviewID 113.                                     select x).SingleOrDefault(); 114.   115.        CopyAppointmentEdit(editedInterview, e.Appointment as InterviewAppointment); 116.   117.        context.SubmitChanges((s) => 118.        { 119.            ActionHistory myAction = new ActionHistory(); 120.            myAction.InterviewID = editedInterview.InterviewID; 121.            myAction.PostingID = editedInterview.PostingID; 122.            myAction.ApplicantID = editedInterview.ApplicantID; 123.            myAction.Description = String.Format("Interview with {0} has been modified by {1}", editedInterview.ApplicantID.ToString(), "default user"); 124.            myAction.TimeStamp = DateTime.Now; 125.            context.ActionHistories.Add(myAction); 126.            context.SubmitChanges(); 127.        } 128.            , null); 129.    } 130.   131.    private void xJobsScheduler_AppointmentDeleted(object sender, Telerik.Windows.Controls.AppointmentDeletedEventArgs e) 132.    { 133.        Interview deletedInterview = (from x in context.Interviews 134.                                      where x.InterviewID == (e.Appointment as InterviewAppointment).InterviewID 135.                                      select x).SingleOrDefault(); 136.   137.        context.Interviews.Remove(deletedInterview); 138.        context.SubmitChanges((s) => 139.        { 140.            ActionHistory myAction = new ActionHistory(); 141.            myAction.InterviewID = deletedInterview.InterviewID; 142.            myAction.PostingID = deletedInterview.PostingID; 143.            myAction.ApplicantID = deletedInterview.ApplicantID; 144.            myAction.Description = String.Format("Interview with {0} has been deleted by {1}", deletedInterview.ApplicantID.ToString(), "default user"); 145.            myAction.TimeStamp = DateTime.Now; 146.            context.ActionHistories.Add(myAction); 147.            context.SubmitChanges(); 148.        } 149.            , null); 150.    } 151.   152.    #region Appointment Helpers :) 153.   154.    public Interview AppointmentToEntity(InterviewAppointment ia) 155.    { 156.        Interview newInterview = new Interview(); 157.        newInterview.Subject = ia.Subject; 158.        newInterview.Body = ia.Body; 159.        
newInterview.Start = ia.Start; 160.        newInterview.End = ia.End; 161.        newInterview.ApplicantID = ia.ApplicantID; 162.        newInterview.PostingID = ia.PostingID; 163.        newInterview.InterviewID = ia.InterviewID; 164.   165.        return newInterview; 166.    } 167.   168.    public InterviewAppointment EntityToAppointment(Interview ia) 169.    { 170.        InterviewAppointment newInterview = new InterviewAppointment(); 171.        newInterview.Subject = ia.Subject; 172.        newInterview.Body = ia.Body; 173.        newInterview.Start = ia.Start; 174.        newInterview.End = ia.End; 175.        newInterview.ApplicantID = ia.ApplicantID; 176.        newInterview.PostingID = ia.PostingID; 177.        newInterview.InterviewID = ia.InterviewID; 178.   179.        return newInterview; 180.    } 181.   182.    public void CopyAppointmentEdit(Interview entityInterview, InterviewAppointment appointmentInterview) 183.    { 184.        entityInterview.Subject = appointmentInterview.Subject; 185.        entityInterview.Body = appointmentInterview.Body; 186.        entityInterview.Start = appointmentInterview.Start; 187.        entityInterview.End = appointmentInterview.End; 188.        entityInterview.ApplicantID = appointmentInterview.ApplicantID; 189.        entityInterview.PostingID = appointmentInterview.PostingID; 190.    } 191.   192.    #endregion 193.   194.    #region INotifyPropertyChanged Members 195.   196.    public event PropertyChangedEventHandler PropertyChanged; 197.   198.    public void NotifyChanged(string propertyName) 199.    { 200.        if (string.IsNullOrEmpty(propertyName)) 201.            throw new ArgumentException("propertyName"); 202.   203.        PropertyChanged(this, new PropertyChangedEventArgs(propertyName)); 204.    } 205.   206.    #endregion 207.}
Nice... right? :) One really important thing to note as well: see on line 51 where I set the DataContext before the Loaded event? This is super important, because if you don't have it set before the usercontrol is loaded, the DataContextProxy has no context to use and your EditAppointmentDialog Job/Applicant dropdowns will be blank and empty. Trust me on this; it took a little bit of debugging to figure out that setting the DataContext post-Loaded only leads to disaster and frustration. Otherwise, the only other real difference is that instead of sending an ActionHistory item through an event to get added to the database and saved, I do that right in the callback from submitting.
The Result
Again, I only have to post one picture, because these bad boys used nearly identical code for both the MVVM and the code-behind views, so the end result is one and the same. So what have we learned here today? One, for the most part this MVVM thing is somewhat easy. Yeah, you sometimes have to write a bunch of extra code, but with the help of a few useful snippets you can turn the process into a pretty streamlined little workflow. Heck, this story gets even easier, as you can see in this blog post by Michael Washington; specifically, run a find on 'InvokeCommandAction' and you'll see the section regarding the command on TreeView in Blend 4. Brilliant! MVVM never looked so sweet! Otherwise, it is business as usual with RadScheduler for Silverlight whichever path you choose for your development. Between now and the next post, I'll be cleaning up styles a bit (those RadComboBoxes are a little too close to my labels!)
and adding some styling to the RowDetailsViews for Applicants and Jobs, so you can see all the info for an appointment in the dropdown tab view. Otherwise, we're about ready to call a wrap on this one.
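One last aside, as promised above: the DataContextProxy declared at the very top of the resources (line 003) never actually appears in the listings. I haven't reproduced this project's exact helper, so treat the following as a minimal sketch of the usual Silverlight pattern (generally credited to Dan Wahlin): a FrameworkElement that captures the hosting view's DataContext when it loads and re-exposes it through a bindable DataSource dependency property. The namespace is an assumption chosen to match the helpers: prefix in the Xaml.
using System.Windows;
using System.Windows.Data;

namespace RecruitingSL.Helpers   // assumption: whatever the helpers: xmlns maps to
{
    // Declared in UserControl.Resources; when it loads it inherits the view's
    // DataContext and rebinds it to the DataSource property below.
    public class DataContextProxy : FrameworkElement
    {
        public DataContextProxy()
        {
            this.Loaded += (s, e) =>
            {
                // Bind DataSource to whatever DataContext this proxy inherited.
                Binding binding = new Binding();
                binding.Source = this.DataContext;
                this.SetBinding(DataContextProxy.DataSourceProperty, binding);
            };
        }

        public static readonly DependencyProperty DataSourceProperty =
            DependencyProperty.Register("DataSource", typeof(object), typeof(DataContextProxy), null);

        public object DataSource
        {
            get { return GetValue(DataSourceProperty); }
            set { SetValue(DataSourceProperty, value); }
        }
    }
}
With something like that in place, the bindings on lines 411 and 425 (Path=DataSource.ApplicantList and Path=DataSource.JobsList) can reach the view's DataContext even though they live inside a DataTemplate, which is exactly the bit of magic mentioned earlier.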
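Similarly, the command: attached properties on the RadScheduler declaration point at attached-behavior event classes that were covered back in the jobs post rather than shown here. The real versions clearly share a base behavior class (note the base.ExecuteCommand() call in the AppointmentCreating override above), but a simplified, standalone sketch of the idea for the AppointmentAdded case might look like this; the class shape is my assumption, while the event and its args come straight from the code-behind listing:
using System.Windows;
using System.Windows.Input;
using Telerik.Windows.Controls;

// Sketch of an attached-behavior event class: re-raises a control event as an
// ICommand so a ViewModel can handle it without code-behind.
public static class AppointmentAddedEventClass
{
    public static readonly DependencyProperty CommandProperty =
        DependencyProperty.RegisterAttached("Command", typeof(ICommand),
            typeof(AppointmentAddedEventClass), new PropertyMetadata(OnCommandChanged));

    public static void SetCommand(DependencyObject obj, ICommand value) { obj.SetValue(CommandProperty, value); }
    public static ICommand GetCommand(DependencyObject obj) { return (ICommand)obj.GetValue(CommandProperty); }

    private static void OnCommandChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        RadScheduler scheduler = d as RadScheduler;
        if (scheduler == null)
            return;

        // Forward just the appointment (not the full event args), matching
        // what the ViewModel's DelegateCommands expect as their obj parameter.
        // A production version would guard against re-subscribing here.
        scheduler.AppointmentAdded += (s, args) =>
        {
            ICommand command = GetCommand(scheduler);
            if (command != null && command.CanExecute(args.Appointment))
                command.Execute(args.Appointment);
        };
    }
}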

    Read the article

  • Interesting things – Twitter annotations and your phone as a web server

    - by jamiet
    I overheard/read a couple of things today that really made me, data junkie that I am, take a step back and think, “Hmmm, yeah, that could be really interesting”, and I wanted to make a note of them here so that (a) I could bring them to the attention of anyone who happens to read this and (b) I can maybe come back here in a few years and see if either of these have come to fruition.
    Your phone as a web server
    While listening today to Jon Udell’s (twitter) “Interviews with Innovators” podcast, in which he interviewed Herbert Van de Sompel (twitter) about his Memento project, I caught the following remarks from Jon and Herbert:
    Jon: [some people] really had this vision of a web of servers, the notion that every node on the internet, every connected entity, is potentially a server and a client…we can see where we’re getting to a point where these endpoint devices we have in our pockets are going to be massively capable and it may be in the not too distant future that significant chunks of the web archive will be cached all over the place including on your own machine…
    Herbert: wasn’t it Opera who at one point turned your browser into a server?
    That really got my brain ticking. We all carry a mobile phone with us, and therefore we all potentially carry a mobile web server with us as well; to my mind the only things really stopping that from happening are the capabilities of the phone hardware, the capabilities of the network infrastructure and the will to just bloody do it. Certainly all the standards required for addressing a web server on a phone already exist (to this uninitiated observer DNS and IPv6 seem to solve that problem), so why not? I tweeted about the idea and Rory Street answered back with “why would you want a phone to be a web server?”. It’s a fair question and one that I would like to try and answer. Mobile phones are increasingly becoming our window onto the world as we use them to upload messages to Twitter, record our location on FourSquare or interact with our friends on Facebook, but in each of these cases some other service is acting as our intermediary; to see what I’m thinking you have to go via Twitter, to see where I am you have to go to FourSquare (I’m using ‘I’ liberally, I don’t actually use FourSquare before you ask). Why should this have to be the case? Why can’t that data be decentralised? Why can’t we be masters of our own data universe? If my phone acted as a web server then I could expose all of that information without needing those intermediary services. I see a time when we can pass around URLs such as the following:
    http://jamiesphone.net/location/current – Where is Jamie right now?
    http://jamiesphone.net/location/2010-04-21 – Where was Jamie on 21st April 2010?
    http://jamiesphone.net/thoughts/current – What’s on Jamie’s mind right now?
    http://jamiesphone.net/blog – What documents is Jamie sharing with me?
    http://jamiesphone.net/calendar/next7days – Where is Jamie planning to be over the next 7 days?
    and those URLs get served off of the phone in our pockets. If we govern that data then we can control who has access to it and (crucially) how long it’s available for. Want to wipe yourself off the face of the web? It’s pretty easy if you’re in control of all the data – just turn your phone off. None of this exists today but I look forward to a time when it does.
    Opera really were onto something last June when they announced Opera Unite (admittedly Unite only works because Opera provide an intermediary DNS-alike system – it isn’t totally decentralised).
    Opening up Twitter annotations
    Last week Twitter held their first developer conference, called Chirp, where they announced an upcoming new feature called ‘Twitter Annotations’; in short, this will allow us to attach metadata to a tweet, thus enhancing the tweet itself. Think of it as a richer version of hashtags. To think of it another way, Twitter are turning their data into a humongous Entity-Attribute-Value or triple-tuple store. That alone has huge implications both for the web and Twitter as a whole – the ability to enrich that 140 characters of data and thus make it more useful is indeed compelling. However, today I stumbled upon a blog post from Eugene Mandel entitled Tweet Annotations – a Way to a Metadata Marketplace? where he proposed the idea of allowing tweets to have metadata added by people other than the person who tweeted the original tweet. This idea really fascinated me, especially when I read some of the potential uses that Eugene and his commenters suggested. They included:
    Amazon could attach an ISBN to a tweet that mentions a book. Specialist client apps for book lovers could be built up around this metadata.
    Advertisers could pay to place adverts in metadata. The revenue generated from those adverts could be shared with the tweeter or the people who add the metadata.
    Granted, allowing anyone to add metadata to a tweet has the potential to create a spam problem the like of which we haven’t even envisaged, but spam hasn’t halted the growth of the web and neither should it halt the growth of data annotations. The original tweeter should of course be able to determine who can add metadata and whether it should be moderated. As Eugene says himself: Opening publishing tweet annotations to anyone will open the way to a marketplace of metadata where client developers, data mining companies and advertisers can add new meaning to Twitter and build innovative businesses. What Eugene and his followers did not mention is what I think is potentially the most fascinating use of opening up annotations. Google’s success today is built on their PageRank algorithm, which measures the validity of a web page by the number of incoming links to it and the PageRank of the sites containing those links – it’s a system built on reputation. Twitter annotations could open up a new paradigm however – let’s call it ‘People Rank’ – where reputation can be measured by the metadata that people choose to apply to links and the websites containing those links. It’s not hard to see why Google and Microsoft have paid big bucks to get access to the Twitter firehose! Neither of these features, phones as a web server or the ability to add annotations to other people’s tweets, exist today, but I strongly believe that they could dramatically enhance the web as we know it. I hope to look back on this blog post in a few years in the knowledge that these ideas have been put into place. @Jamiet
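    Just to make the phone-as-web-server idea tangible, here's a rough sketch of the kind of tiny endpoint router such a phone could run. Everything in it is hypothetical illustration of mine: the paths mirror the imagined jamiesphone.net URLs above, and the responses are placeholder values, not anything that exists today.

        using System;
        using System.Net;
        using System.Text;

        class PhoneWebServer
        {
            static void Main()
            {
                // Listen for requests; on a real phone, hardware and network
                // constraints (and IPv6/DNS addressing) are the hard parts.
                HttpListener listener = new HttpListener();
                listener.Prefixes.Add("http://+:8080/");
                listener.Start();

                while (true)
                {
                    HttpListenerContext context = listener.GetContext();
                    string path = context.Request.Url.AbsolutePath;
                    string body;

                    // Each 'personal data' URL maps to a local lookup, so the data
                    // stays under the owner's control until somebody asks for it.
                    switch (path)
                    {
                        case "/location/current":
                            body = "51.5074,-0.1278";                // placeholder coordinates
                            break;
                        case "/thoughts/current":
                            body = "Thinking about data ownership."; // placeholder
                            break;
                        default:
                            context.Response.StatusCode = 404;
                            body = "Not found";
                            break;
                    }

                    byte[] bytes = Encoding.UTF8.GetBytes(body);
                    context.Response.ContentLength64 = bytes.Length;
                    context.Response.OutputStream.Write(bytes, 0, bytes.Length);
                    context.Response.Close();
                }
            }
        }

    Note how the 'right to disappear' falls out of this architecture for free: turn the phone off and the data really is offline.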

    Read the article

  • Top 10 Reasons SQL Developer is Perfect for Oracle Beginners

    - by thatjeffsmith
    Learning new technologies can be daunting. If you’ve never used a Mac before, you’ll probably be a bit baffled at first. But you’re probably at least coming from a desktop computing background (Windows), so you have a common frame of reference. But what if you’re just now learning to use a relational database? Yes, you’ve played with Access a bit, but now your employer or college instructor has charged you with becoming proficient with Oracle Database. Here are 10 reasons why I think Oracle SQL Developer is the perfect vehicle to help get you started.
    1. It’s free
    No need to break into one of these… No start-up costs, no need to wrangle budget dollars from your company. Students don’t have any money after books and lab fees anyway, and most employees don’t like having to ask for ‘special’ software. So avoid all of that and make sure the free stuff doesn’t suit your needs first. Upgrades are available on a regular basis, also at no cost, and support is freely available via our public forums.
    2. It will run pretty much anywhere
    Windows – check. OSX (Apple) – check. Unix – check. Linux – check. No need to start up a Windows VM to run your Windows-only software on your lab machine.
    3. Anyone can install it
    There’s no installer, no registry to be updated, no admin privs to be obtained. If you can download and extract files to your machine or USB storage device, you can run it. You can be up and running with SQL Developer in under 5 minutes. Here’s a video tutorial to see how to get started.
    4. It’s ubiquitous
    I admit it, I learned a new word yesterday and I wanted an excuse to use it. SQL Developer’s everywhere. It’s had over 2,500,000 downloads in the past year, and is one of the most downloaded items from OTN. This means if you need help, there’s probably someone sitting nearby who can assist, and since they’re in the same tool as you, they’ll be speaking the same language.
    5. Simple User Interface
    Up-up-down-down-left-right-left-right-A-B-A-B-START will get you 30 lives, but you already knew that, right? You connect, you see your objects, you click on your objects. Or, you can use the worksheet to write your queries and programs in. There’s only one toolbar, and just a few buttons. If you’re like me, video games became less fun when each button had 6 action items mapped to it. I just want the good ole ‘A’, ‘B’, ‘SELECT’, and ‘START’ controls. If you’re new to Oracle, you shouldn’t have the double workload of learning a new, complicated tool as well.
    6. It’s not a ‘black box’
    Click through your objects, but also get the SQL that drives the GUI. As you use the wizards to accomplish tasks for you, you can view the SQL statement being generated on your behalf. Just because you have a GUI doesn’t mean you’re ceding your responsibility to learn the underlying code that makes the database work.
    7. It’s four tools in one
    It’s not just a query tool. Maybe you need to design a data model first? Or maybe you need to migrate your Sybase ASE database to Oracle for a new project? Or maybe you need to create some reports? SQL Developer does all of that. So once you get comfortable with one part of the tool, the others will be much easier to pick up as your needs change.
    8. Great learning resources available
    Videos, blogs, hands-on learning labs – you name it, we got it. Why wait for someone to train you, when you can train yourself at your own pace?
    9. You can use it to teach yourself SQL
    Instead of being faced with the white-screen-of-panic, you can visually build your queries by dragging and dropping tables and views into the Query Builder. Yes, ‘just like Access’ – only better. And as you build your query, toggle to the Worksheet panel and see the SQL statement. Again, SQL Developer is not a black box. If you prefer to learn by trial and error, the worksheet will attempt to suggest the next bit of your SQL statement with its completion insight feature. And if you have syntax errors, those will be highlighted – just like the misspelled words in your favorite word processor.
    10. It scales to match your experience level
    You won’t be a n00b forever. In 6-8 months, when you’re ready to tackle something a bit more complicated, like XML DB or Oracle Spatial, the tool is already there waiting on you. No need to go out and find the ‘advanced’ tool.
    11. Wait, you said this was a ‘Top 10’ list?
    Yes. Yes, I did. I’m using this ‘trick’ to get you to continue reading because I’m going to say something you might not want to hear. Are you ready? Tools won’t replace experience, failure, hard work, and training. Just because you have the keys to the car doesn’t mean you’re ready to head out on the race track. While SQL Developer reduces the barriers to entry, it does not completely remove them. Many experienced folks simply do not like tools. Rather, they don’t like the people who pick up tools without the know-how to properly use them. If you don’t understand what ‘TRUNCATE’ means, don’t try it out. Try picking up a book first. Of course, it’s very nice to have your own sandbox to play in, so you don’t upset the other children. That’s why I really like our Dev Days Database VirtualBox image. It’s your own database to learn and experiment with.

    Read the article

  • Twitter traffic might not be what it seems

    - by Piet
    Are you using bit.ly stats to measure interest in the links you post on Twitter? I’ve been hearing for a while about people claiming to get the majority of their traffic originating from Twitter these days. Now, I’ve been playing with the Twitter ruby gem recently, doing various experiments which I’ll not go into in detail here because they could be regarded as spamming… if I conducted them on a large scale, that is. It’s scary to see people actually engaging with @replies crafted with some regular expressions and Eliza-like trickery on status updates found using the Twitter API. I’m wondering how Twitter is going to contain the coming spam-flood. When posting links I used bit.ly as a URL shortener, since this one seems to be the de facto standard on Twitter. A nice thing about bit.ly is that it shows some basic stats about the redirects it performs for your shortened links. To my surprise, most links posted almost immediately resulted in several visitors. Now, seeing that I was posting the links together with some information concerning what the link is about, I concluded that the people who were actually clicking the links should be very targeted visitors. This felt a bit like free AdWords, and I suddenly started to understand why everyone was raving about getting traffic from Twitter. How wrong I was! (And I think several thousand online marketers with me.) On the destination site I used a traffic logging solution that works by including a little JavaScript snippet in your pages. It seemed that somehow all visitors disappeared after the bit.ly redirect and before getting to the site, because I was hardly seeing any visitors there. So I started investigating what was happening: by looking at the logfiles of the destination site, and by making my own ‘shortened’ URLs by doing redirects using a very short domain name I own. This way, I could check the Apache access_log before the redirects. Most user agents turned out to be bots without a doubt. Here’s an excerpt of user agents awk’ed from Apache’s access_log for a time period of about one hour, right after posting some links:
    AideRSS 2.0 (postrank.com)
    Java/1.6.0_13
    Java/1.6.0_14
    libwww-perl/5.816
    MLBot (www.metadatalabs.com/mlbot)
    Mozilla/4.0 (compatible;MSIE 5.01; Windows -NT 5.0 - real-url.org)
    Mozilla/5.0 (compatible; Twitturls; +http://twitturls.com)
    Mozilla/5.0 (compatible; Viralheat Bot/1.0; +http://www.viralheat.com/)
    Mozilla/5.0 (Danger hiptop 4.6; U; rv:1.7.12) Gecko/20050920
    Mozilla/5.0 (X11; U; Linux i686; en-us; rv:1.9.0.2) Gecko/2008092313 Ubuntu/9.04 (jaunty) Firefox/3.5
    OpenCalaisSemanticProxy
    PycURL/7.18.2
    PycURL/7.19.3
    Python-urllib/1.17
    Twingly Recon
    twitmatic
    Twitturly / v0.6
    Wget/1.10.2 (Red Hat modified)
    Wget/1.11.1 (Red Hat modified)
    Of the few user agents that seem ‘real’ at first, half originate from an IP address used by Amazon EC2, and I doubt people are setting up proxies on there. Oh yeah, Googlebot (the real deal, from a legit Google-owned address) is sucking up posted links like fresh oysters. I guess Google is trying to make sure in advance to never be beaten by Twitter in the ‘realtime search’ department. Actually, I think it’d be almost stupid NOT to post any new pages/posts/websites on Twitter; it must be one of the fastest ways to get a Googlebot visit.
    Same experiment with a real, established Twitter account
    Now, because I was posting the URLs either as ‘status’ messages or directed @people, on a test account with hardly any (human) followers, I checked again using the Twitter accounts from a commercial site I’m involved with. These accounts all have between 500 and 1000 targeted (I think) followers. I checked the destination access_logs and also added ‘my’ redirect after the bit.ly redirect: same results, although with a seemingly slightly higher real-visitor/bot ratio. Btw: one of these accounts was ‘punished’ with a one-week lock recently because it sent the same status update (just one!) that had been sent shortly before from another account. They got an email explaining the lock because the account didn’t act according to their TOS. I can’t find anything in their TOS about it, can you? I don’t think Twitter is on the right track punishing a legit account, knowing the trickery I had been doing with its API went totally unpunished. I might be wrong though, I often am. On the other hand: this commercial site reported targeted traffic and actual signups from visitors coming from Twitter. The ones that are really real visitors are also very targeted. I’m just not sure if the amount of work involved could hold up against an AdWords campaign.
    Reposting the same link over and over again helps
    One thing I noticed: it helps to keep on reposting the same links at regular intervals. I guess most people only look at their first page when checking out recent posts of the ones they’re following, or don’t look too far back when performing a search. Now, this probably isn’t according to the Twitter TOS. Actually, it might be spamming, but no one is obligated to follow anyone else of course. This way, I was getting more real visitors and fewer bots. To my surprise (when my programmer’s hat is on) there were still repeated visits from the same bots coming from the same IP addresses. Did they expect to find something else when visiting for a 2nd or 3rd time? (Actually, this gave me an idea: you can’t change a link once it’s posted, but you can change where it redirects to.) Most bots were smart enough not to follow the same link again though. Are you successful in getting real visitors from Twitter? Are you only relying on bit.ly to provide traffic stats?
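    If you want to reproduce the user-agent tally yourself, here's a rough sketch of the idea; the log path and the assumption of Apache's 'combined' log format (user agent as the last double-quoted field) are mine, not the author's:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Text.RegularExpressions;

        class UserAgentTally
        {
            static void Main()
            {
                // In the 'combined' format, the user agent is the final
                // double-quoted field on each log line.
                Regex uaPattern = new Regex("\"([^\"]*)\"\\s*$");
                var counts = new Dictionary<string, int>();

                foreach (string line in File.ReadLines("/var/log/apache2/access_log"))
                {
                    Match m = uaPattern.Match(line);
                    if (!m.Success) continue;

                    string ua = m.Groups[1].Value;
                    counts[ua] = counts.TryGetValue(ua, out int n) ? n + 1 : 1;
                }

                // Most frequent agents first -- the bots tend to float straight to the top.
                foreach (var pair in counts.OrderByDescending(p => p.Value))
                    Console.WriteLine("{0,6}  {1}", pair.Value, pair.Key);
            }
        }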

    Read the article

  • Upgrading Windows 8 boot to VHD to Windows 8.1 – Step by step guide

    - by Liam Westley
    Originally posted on: http://geekswithblogs.net/twickers/archive/2013/10/19/upgrading-windows-8-boot-to-vhd-to-windows-8.1ndashstep-by.aspx
    Boot to VHD – dual booting Windows 7 and Windows 8 became easy
    When Windows 8 arrived, quite a few people decided that they would still dual boot their machines, and instead of mucking about with resizing disk partitions to free up space for Windows 8 they decided to use the boot from VHD feature to create a huge hard disc image into which Windows 8 could be installed. Scott Hanselman wrote this installation guide, while I myself used the installation guide from Ed Bott of ZDNet fame. Boot to VHD is a great solution: it achieves a dual boot, can be backed up easily and has virtually no effect on the original Windows 7 partition. As a developer who has dual booted Windows operating systems for years, hacking boot.ini files, I found boot to VHD a much easier solution.
    Upgrade to Windows 8.1 – ah, you can’t do that on a virtual disk installation (boot to VHD)
    Last week the final version of Windows 8.1 arrived, and I went into the Windows Store to upgrade. Luckily I’m on a fast download service, and use an SSD, because once the upgrade was downloaded and prepared, Windows informed me that This PC can’t run Windows 8.1, and provided the reason: You can’t install Windows on a virtual drive. You can see an image of the message and the discussion that sparked my search for a solution in this Microsoft TechNet forum post. I was determined not to have to resize partitions yet again and fiddle with VHD-to-disk utilities and back again, and in the end I did succeed in upgrading to a Windows 8.1 boot to VHD partition. It takes quite a bit of effort though…
    tl;dr – the simple steps of the upgrade
    - Boot into Windows 7 – make a copy of your Windows 8 VHD, to become Windows 8.1
    - Enable Hyper-V in your Windows 8 (the original boot to VHD partition)
    - Create a new virtual machine, attaching the copy of your Windows 8 VHD
    - Start the virtual machine and upgrade it via the Windows Store to Windows 8.1
    - Shut down the virtual machine
    - Boot into Windows 7 – use the bcdedit tool to create a new Windows 8.1 boot to VHD option (pointing at the copy)
    - Boot into the new Windows 8.1 option
    - Reactivate Windows 8.1 (it will have become deactivated by running under Hyper-V)
    - Remove the original Windows 8 VHD, and in Windows 7 use bcdedit to remove it from the boot menu
    Things you’ll need
    - A system that can run Hyper-V under Windows 8 (Intel i5, i7 class CPU)
    - Enough space to have your original Windows 8 boot to VHD and a copy at the same time
    - An ISO or DVD for Windows 8 to create a bootable Windows 8 partition
    Step by step guide
    1. Boot to your base o/s, the real one, Windows 7.
    2. Make a copy of the Windows 8 VHD file that you use to boot Windows 8 (via boot from VHD) – I copied it from a folder on C: called VHD-Win8 to VHD-Win8.1 on my N: drive.
    3. Reboot your system into Windows 8, and enable Hyper-V if not already present (this may require a reboot).
    4. Use the Hyper-V manager to create a new Hyper-V machine, using half your system memory, and use the option to attach an existing VHD on the main IDE controller – this will be the new copy you made in Step 2.
    5. Start the virtual machine and use Connect to view it; you’ll probably discover it cannot boot, as there is no boot record.
    6. If this is the case, go to Hyper-V manager and edit the Settings for the virtual machine to attach an ISO of a Windows 8 DVD to the second IDE controller.
    7. Start the virtual machine and use Connect to view it; it should now attempt a fresh installation of Windows 8. Select Advanced Options and choose Repair – this will make the VHD bootable.
    8. When the setup reboots your virtual machine, turn off the virtual machine and remove the ISO of the Windows 8 DVD from the virtual machine settings.
    9. Start the virtual machine and use Connect to view it. You will see devices being re-discovered (including your quad CPU becoming a single CPU). Eventually you should see the Windows login screen.
    10. You may notice that your desktop background (Win+D) has turned black, as your Windows installation has become deactivated due to the hardware changes between your real PC and Hyper-V.
    11. Fortunately, becoming deactivated does not stop you using the Windows Store, where you can select the update to Windows 8.1.
    12. You can now watch the progress of the Windows 8.1 update: downloading, preparing to update, checking compatibility, gathering info, preparing to restart, and finally, confirm restart – remember that you are restarting the virtual machine sitting on the copy of the VHD, not the Windows 8 boot to VHD you are currently using to run Hyper-V (confused yet?).
    13. After the reboot you get the real upgrade messages: setting up x%, xx% (quite slow); after a while, Getting ready; Applying PC settings x%, xx% (really slow); Updating your system (fast); Setting up a few more things x% (quite slow); Getting ready, again.
    14. Accept the license terms, choose Express settings, and confirm your previous password.
    15. Next, I had to set up a Microsoft account – which is possibly now required, and not optional. Using the Microsoft account required a two-factor authorization via text message – a 7-digit code for me.
    16. Finalising settings, a blank screen, then: HI .. We’re setting up things for you (similar to the original Windows 8 install); ‘You can get new apps from the Store’, below which is ‘Installing your apps’ – I had Windows Media Center, which counts as an app from the Store; ‘Taking care of a few things’, below which is ‘Installing your apps’; ‘Taking care of a few things’, below ‘Don’t turn off your PC’; ‘Getting your apps ready’, below ‘Don’t turn off your PC’; ‘Almost ready’, below ‘Don’t turn off your PC’; … finally, we get the Windows 8.1 start menu, and a quick Win+D to check the desktop confirmed all the application icons I expected, pinned items on the taskbar, and one app moaning about a missing drive.
    17. At this point the upgrade is complete – you can shut down the virtual machine.
    18. Reboot from the original Windows 8 and return to Windows 7 to configure booting to the Windows 8.1 copy of the VHD.
    19. In an administrator command prompt, do the following with the bcdedit tool (from an MSDN blog about configuring VHD to boot in Windows 7):
        Type bcdedit to list the current boot options, so you can copy the GUID (complete with brackets/braces) for the original Windows 8 boot to VHD.
        Create a new menu option as a copy of the Windows 8 option: bcdedit /copy {originalguid} /d "Windows 8.1"
        Point the new Windows 8.1 option to the copy of the VHD: bcdedit /set {newguid} device vhd=[D:]\Image.vhd
        Do the same for the OS device: bcdedit /set {newguid} osdevice vhd=[D:]\Image.vhd
        Set autodetection of the HAL (may already be set): bcdedit /set {newguid} detecthal on
    20. Reboot from Windows 7 and select the new option ‘Windows 8.1’ on the boot menu, and you’ll have some messages to look at as your hardware is redetected (as you are back from 1 CPU to 4 CPUs): ‘Getting devices ready’, blank, then xx%, with an occasional blank screen for the graphics driver (fast-ish); a Getting Ready message (fast).
    21. You will have to suffer one final reboot; choose ‘Windows 8.1’ and you can now log in to a lovely Windows 8.1 start screen running on non-virtualized hardware via boot to VHD.
    22. After checking everything is running fine, you can now choose to Activate Windows, which for me was a toll-free phone call to the automated system where you type in lots of numbers to be given a whole bunch of new activation codes.
    23. Once you’re happy with your new Windows 8.1 boot to VHD, and no longer need the Windows 8 boot to VHD, feel free to delete the old one. I do believe once you upgrade, you are no longer licensed to use it anyway.
    There, that was simple, wasn’t it? Looking at the huge list of steps it took to perform this upgrade, you may wonder whether I think this is worth it. Well, I think it is worth booting to VHD. It makes backups a snap (go to Windows 7, copy the VHD, and you’ve backed up the o/s) and helps with disk management – want to move the o/s? You can move the VHD and repoint the boot menu to the new location. The downside is that Microsoft has completely neglected to support boot to VHD as an upgradable option. Quite a poor decision in my opinion, and if you read Twitter and the forums, quite a few people agree with that view. It’s a shame this got missed in the work on creating the upgrade packages for Windows 8.1.

    Read the article

  • Oracle Data Mining a Star Schema: Telco Churn Case Study

    - by charlie.berger
    There is a complete and detailed Telco Churn case study "How to" blog series just posted by Ari Mozes, ODM Dev. Manager. In it, Ari provides detailed guidance on how to leverage various strengths of Oracle Data Mining, including the ability to:
    - mine star schemas, joining tables and views together to obtain a complete 360-degree view of a customer
    - combine transactional data, e.g. call detail record (CDR) data, etc.
    - define complex data transformation, model build and model deploy analytical methodologies inside the Database
    His blog is posted in a multi-part series. Below are some opening excerpts from the first 3 blog entries. This is an excellent resource for anyone, novice to skilled data miner, who wants to gain competitive advantage by mining their data inside the Oracle Database. Many thanks Ari!
    Mining a Star Schema: Telco Churn Case Study (1 of 3)
    One of the strengths of Oracle Data Mining is the ability to mine star schemas with minimal effort. Star schemas are commonly used in relational databases, and they often contain rich data with interesting patterns. While dimension tables may contain interesting demographics, fact tables will often contain user behavior, such as phone usage or purchase patterns. Both of these aspects - demographics and usage patterns - can provide insight into behavior. Churn is a critical problem in the telecommunications industry, and companies go to great lengths to reduce the churn of their customer base. One case study [1] describes a telecommunications scenario involving understanding, and identification of, churn, where the underlying data is present in a star schema. That case study is a good example for demonstrating just how natural it is for Oracle Data Mining to analyze a star schema, so it will be used as the basis for this series of posts......
    Mining a Star Schema: Telco Churn Case Study (2 of 3)
    This post will follow the transformation steps as described in the case study, but will use Oracle SQL as the means for preparing data. Please see the previous post for background material, including links to the case study and to scripts that can be used to replicate the stages in these posts.
    1) Handling missing values for call data records
    The CDR_T table records the number of phone minutes used by a customer per month and per call type (tariff). For example, the table may contain one record corresponding to the number of peak (call type) minutes in January for a specific customer, and another record associated with international calls in March for the same customer. This table is likely to be fairly dense (most type-month combinations for a given customer will be present) due to the coarse level of aggregation, but there may be some missing values. Missing entries may occur for a number of reasons: the customer made no calls of a particular type in a particular month, the customer switched providers during the timeframe, or perhaps there is a data entry problem. In the first situation, the correct interpretation of a missing entry would be to assume that the number of minutes for the type-month combination is zero. In the other situations, it is not appropriate to assume zero, but rather to derive some representative value to replace the missing entries. The referenced case study takes the latter approach. The data is segmented by customer and call type, and within a given customer-call type combination, an average number of minutes is computed and used as a replacement value. In SQL, we need to generate additional rows for the missing entries and populate those rows with appropriate values. To generate the missing rows, Oracle's partition outer join feature is a perfect fit.

        select cust_id, cdre.tariff, cdre.month, mins
        from cdr_t cdr partition by (cust_id)
             right outer join
             (select distinct tariff, month from cdr_t) cdre
             on (cdr.month = cdre.month and cdr.tariff = cdre.tariff);

    .......
    Mining a Star Schema: Telco Churn Case Study (3 of 3)
    Now that the "difficult" work is complete - preparing the data - we can move to building a predictive model to help identify and understand churn. The case study suggests that separate models be built for different customer segments (high, medium, low, and very low value customer groups). To reduce the data to a single segment, a filter can be applied:

        create or replace view churn_data_high as
        select * from churn_prep where value_band = 'HIGH';

    It is simple to take a quick look at the predictive aspects of the data on a univariate basis. While this does not capture the more complex multivariate effects as would occur with the full-blown data mining algorithms, it can give a quick feel for the predictive aspects of the data as well as validate the data preparation steps. Oracle Data Mining includes a predictive analytics package which enables quick analysis.

        begin
          dbms_predictive_analytics.explain(
            'churn_data_high', 'churn_m6', 'expl_churn_tab');
        end;
        /
        select * from expl_churn_tab where rank <= 5 order by rank;

        ATTRIBUTE_NAME       ATTRIBUTE_SUBNAME EXPLANATORY_VALUE RANK
        -------------------- ----------------- ----------------- ----------
        LOS_BAND                                      .069167052          1
        MINS_PER_TARIFF_MON  PEAK-5                   .034881648          2
        REV_PER_MON          REV-5                    .034527798          3
        DROPPED_CALLS                                 .028110322          4
        MINS_PER_TARIFF_MON  PEAK-4                   .024698149          5

    From the above results, it is clear that some predictors do contain information to help identify churn (explanatory value > 0). The strongest univariate predictor of churn appears to be the customer's (binned) length of service. The second strongest churn indicator appears to be the number of peak minutes used in the most recent month. The subname column contains the interior piece of the DM_NESTED_NUMERICALS column described in the previous post. By using the object-relational approach, many related predictors are included within a single top-level column. .....
    NOTE: These are just EXCERPTS. Click here to start reading the Oracle Data Mining a Star Schema: Telco Churn Case Study from the beginning.

    Read the article
