Search Results

Search found 27945 results on 1118 pages for 'ld library path'.

  • Converting a PV vm back into an HVM vm

    - by wim.coekaerts
    I have been doing some Oracle VM benchmark work in the last week or two in my off hours, and yesterday I wanted to convert one of my VMs that was based on a paravirt kernel into a VM that just boots as a regular hardware virt (HVM) VM with a standard x86-64 kernel. It took me a little while to figure out the fastest way, so now that I have it pretty much down I wanted to share the steps. A PV kernel uses pygrub and a paravirt kernel image that lives on the VM image's virtual disk. Since this disk image does not have to be bootable, it doesn't contain a boot sector, and if you just restart the VM in HVM mode the virtual BIOS will not do much because it can't start the boot process from disk.
    The first thing I do is make a backup of my vm.cfg file :-) and then edit it as follows. The original file contains:
    bootloader = '/usr/bin/pygrub'
    I replace that with:
    acpi = 1
    apic = 1
    builder = 'hvm'
    device_model = '/usr/lib/xen/bin/qemu-dm'
    kernel = '/usr/lib/xen/boot/hvmloader'
    Then I change the disk entries: I change my xvd disks to hd disks and I copy over the ISO image of my install DVD. In the case of my VM template it was based on OL5U4, so I downloaded Enterprise-R5-U4-Server-x86_64-dvd.iso and added it as a CD device.
    disk = ['file:/ovs/OVM_EL5U4_X86_64_11202RAC_PVM/System.img,xvda,w',
            'file:/ovs/OVM_EL5U4_X86_64_11202RAC_PVM/Oracle11202RAC_x86_64-xvdb.img,xvdb,w',
           ]
    becomes
    disk = ['file:/ovs/OVM_EL5U4_X86_64_11202RAC_PVM/System.img,hda,w',
            'file:/ovs/OVM_EL5U4_X86_64_11202RAC_PVM/Oracle11202RAC_x86_64-xvdb.img,hdb,w',
            'file:/ovs/OVM_EL5U4_X86_64_11202RAC_PVM/Enterprise-R5-U4-Server-x86_64-dvd.iso,hdc:cdrom,r',
           ]
    and I add boot='d' so the VM boots from the CD device. For the network devices (vifs) I change:
    vif = ['bridge=xenbr2,type=netfront']
    to
    vif = ['bridge=xenbr2,type=ioemu']
    That should do it. Next, inside the VM, I copy over the regular kernel rpm that I want to end up running in HVM mode. In this example it was kernel-2.6.18-164.0.0.0.1.el5.x86_64.rpm; I will use it later in the process, and I simply put it in /root. At this point I just start the VM with xm create vm.cfg and open my VNC console to the VM console. Oracle Linux will boot from the ISO image; I just go through the install steps and click on Upgrade existing (not re-install). Because the VM is the same release as the ISO, the install won't actually do anything and it will run through almost instantly. When the "Reboot" button pops up, don't reboot. Switch to the command prompt console: hit Alt-F2 to go to the shell prompt. Now it's easy:
    umount /mnt/sysimage/boot
    cd /mnt/sysimage
    chroot .
    mount /dev/hda1 (if that was your /boot partition)
    export PATH=/sbin:$PATH (just to clean that up)
    Edit /etc/modprobe.conf and comment out the xen modules (just put a # in front). Install grub: if your /boot is hda1 then that is (hd0,0), so run grub, then root (hd0,0), setup (hd0), and exit grub. Now you have a good boot sector, grub is installed and you have your grub.conf file. Install the new kernel: cd root (this is your old /root in your PV image) and rpm -ivh kernel-2.6.18-164.0.0.0.1.el5.x86_64.rpm. Remove (or comment out) boot='d' in your vm.cfg, restart the VM, and you should be good to go: regular grub should start and load your environment.
    Caveats: this assumes you used labels for your filesystems. If /etc/fstab has devices listed then you would have to rename these devices before rebooting as well; if you had a /dev/xvda disk it would become /dev/hda or /dev/sda. All in all it is a relatively short and simple process.

    Read the article

  • Barcodes and Bugs

    - by Tim Dexter
    A great mail from Mike at Browning last week. He has been through the wringer getting his BIP barcoding sorted out, but he's now out of the woods. Here's the final result. By way of explanation, an excerpt from Mike's email:
    This is an example of the GS1_128 carton shipping labels we are now producing with BIP in our web application for our vendors who drop ship products to our dealers. It produces 4 labels per printed page, in PDF format, on peel & stick label paper. Each label has a unique carton number, and a unique carton serial number in the SSCC-18 barcode. This example is for Cabelas (each customer has slightly different GS1-128 label format requirements – custom template for each - a pain!). I am using custom java encoders I wrote for the UPC and SSCC-18 barcodes, and a standard encoder (code128b) for the ShipTo zip barcode. Is there any way yet to get around that SUPER ANNOYING bug when opening the rtf template in MS Word, and it replaces my xsl code text in the barcode fields with gibberish??? Every time I open it I have to re-enter all the xsl code. Not only to be able to read & edit it, but also to get it to work in BIP (BIP doesn't like the gibberish if I upload the template that has it).
    Mike's last point, regarding the annoying bug in the template builder, is one that I have experienced occasionally. The development team have looked at it and found it to be an issue with MS Word and not a plugin problem. That's all well and good, but how can you get around it? Well, you can take advantage of the font mapping that BIP offers to get the barcodes into the PDF output. As many of you know, to get a barcode font to appear in the PDF output you need to employ the xdo.cfg file in the template builder config directory. You would normally have an entry such as this:
    <font family="Code 128" style="normal" weight="normal">
      <truetype path="C:\windows\fonts\128R00.TTF" />
    </font>
    to map a barcode font so that it renders in the PDF output when testing from the template builder plugin.
    Mike's issue is only present when the form field is highlighted with a barcode font; the other fields in the template are OK. What you can do is bend the config entry so that you do not have to use the barcode font in the template at all. Change the entry to something like:
    <font family="Calibri" style="normal" weight="normal">
      <truetype path="C:\windows\fonts\128R00.TTF" />
    </font>
    Note that we are mapping Calibri, a human-readable and non-'erroring' font in the template, to the Code 128 barcode font. Where you used to highlight the field with the barcode font in MS Word, you now use the Calibri font instead. At run time, BIP will go looking for the Calibri font mapping and will drop in the Code 128 font. Of course, Calibri is just an example; you need to pick a font that you are not going to use anywhere else in the layout.

    Read the article

  • Mind Reading with the Raspberry Pi

    - by speakjava
    Mind Reading With The Raspberry Pi At JavaOne in San Francisco I did a session entitled "Do You Like Coffee with Your Dessert? Java and the Raspberry Pi".  As part of this I showed some demonstrations of things I'd done using Java on the Raspberry Pi.  This is the first part of a series of blog entries that will cover all the different aspects of these demonstrations. A while ago I had bought a MindWave headset from Neurosky.  I was particularly interested to see how this worked as I had had the opportunity to visit Neurosky several years ago when they were still developing this technology.  At that time the 'headset' consisted of a headband (very much in the Bjorn Borg style) with a sensor attached and some wiring that clearly wasn't quite production ready.  The commercial version is very simple and easy to use: there are two sensors, one which rests on the skin of your forehead, the other is a small clip that attaches to your earlobe. Typical EEG sensors used in hospitals require lots of sensors and they all need copious amounts of conductive gel to ensure the electrical signals are picked up.  Part of Neurosky's innovation is the development of this simple dry-sensor technology.  Having put on the sensor and turned it on (it powers off a single AAA size battery) it collects data and transmits it to a USB dongle plugged into a PC, or in my case a Raspberry Pi. From a hacking perspective the USB dongle is ideal because it does not require any special drivers for any complex, low level USB communication.  Instead it appears as a simple serial device, which on the Raspberry Pi is accessed as /dev/ttyUSB0.  Neurosky have published details of the command protocol.  In addition, the MindSet protocol document, including sample code for parsing the data from the headset, can be found here. To get everything working on the Raspberry Pi using Java the first thing was to get serial communications going.  Back in the dim distant past there was the Java Comm API.  Sadly this has grown a bit dusty over the years, but there is a more modern open source project that provides compatible and enhanced functionality, RXTXComm.  This can be installed easily on the Pi using sudo apt-get install librxtx-java.  Next I wrote a library that would send commands to the MindWave headset via the serial port dongle and read back data being sent from the headset.  The design is pretty simple, I used an event based system so that code using the library could register listeners for different types of events from the headset.  You can download a complete NetBeans project for this here.  This includes javadoc API documentation that should make it obvious how to use it (incidentally, this will work on platforms other than Linux.  I've tested it on Windows without any issues, just by changing the device name to something like COM4). To test this I wrote a simple application that would connect to the headset and then print the attention and meditation values as they were received from the headset.  Again, you can download the NetBeans project for that here. Oracle recently released a developer preview of JavaFX on ARM which will run on the Raspberry Pi.  I thought it would be cool to write a graphical front end for the MindWave data that could take advantage of the built in charts of JavaFX.  Yet another NetBeans project is available here.  Screen shots of the app, which uses a very nice dial from the JFxtras project, are shown below. 
I probably should add labels for the EEG data so the user knows which trace is low alpha, which is mid gamma, and so on. Given that I'm not a neurologist, I suspect it won't increase my understanding of what the (rather random-looking) traces mean. In the next blog I'll explain how I connected a LEGO motor to the GPIO pins on the Raspberry Pi and then used my mind to control the motor!

    Read the article

  • Information on upgrading Kinect Applications to MS SDK Beta 2.

    - by mbcrump
    Introduction Microsoft recently released the Kinect for Windows SDK Beta 2. It contains many enhancements and fixes that can be found here. The only problem with it is that a lot of current demo applications no longer function properly. Today, I’m going to walk you through a typical scenario of upgrading a Kinect application built with Beta 1 to Beta 2. Note: This tutorial covers WPF, but you can use the same techniques for WinForms. 1) Fix the references Let’s start with a fairly popular Kinect demo called Kinect User Interface Demo. This project uses the beta 1 version of Microsoft.Research.Kinect.dll and version 1.0.0.0 of Coding4Fun’s Kinect library. After you download the source code and extract the zip you will see the following references in Visual Studio 2010: Pay attention to the following references as these are the .dlls that you will have to update: Coding4Fun.Kinect.Wpf Microsoft.Research.Kinect If you click on Coding4Fun.Kinect.Wpf file you will see the following version information (v1.0.0.0): This needs to be upgraded to the Coding4Fun Kinect library built against Beta 2. So head over to http://c4fkinect.codeplex.com/ and hit download and you will have the following files. Go ahead and hit the delete key on your keyboard to remove the Coding4Fun.Kinect.Wpf.dll file from your project. Select “Add Reference” and navigate out to the folder where you extracted the files and select Coding4Fun.Kinect.Wpf.dll. If you click on the Coding4Fun.Kinect.Wpf.dll file and check properties it should be listed at 1.1.0.0: Fix Microsoft.Research.Kinect.dll The official SDK Beta 2 released a new .dll that you will need to reference in your application. Go ahead and select Microsoft.Research.Kinect.dll in your application and hit the Delete key on your keyboard. Go ahead and select Add Reference again and select Microsoft.Research.Kinect.dll from the .NET tab. Double check and make sure the version number is 1.0.0.45 as shown below. References fixed – Runtime needs to be updated. So we have fixed the references in a typical Kinect application that uses Microsoft’s SDK and C4F Kinect libraries. Now, we will need to update the runtime. All Beta 1 Kinect applications will instantiate the Runtime with the following code: Can you see that it is now marked with [Depreciated]? That means we need to update it before Microsoft decides to remove it from future versions of the SDK. We can fix this very easily by replacing this code: readonly Runtime _runtime = new Runtime(); with Microsoft.Research.Kinect.Nui.Runtime _nui; and adding similar code to our Loaded event as shown below public MainWindow() { InitializeComponent(); Loaded += new RoutedEventHandler(MainWindow_Loaded); } void MainWindow_Loaded(object sender, RoutedEventArgs e) { if (Runtime.Kinects.Count == 0) { txtInfo.Text = "Missing Kinect"; } else { _nui = Runtime.Kinects[0]; _nui.Initialize(RuntimeOptions.UseColor); // Video Frame Ready Event can happen now!!! //_nui.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(_nui_VideoFrameReady); _nui.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color); } } In this sample, I am testing to see if a Kinect is detected and if it is then I initialize the runtime with my first Kinect by using the Runtime.Kinects[0]. You can also specify other Kinect devices here. The rest of the code is standard code that you simply modify however you wish (ie Skeletal, Depth, etc) depending on what type of video feed you want. 
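For reference, here is a minimal sketch of what the commented-out VideoFrameReady handler above might look like once the runtime is initialized. The WPF Image element name (imgVideo) is my own placeholder, not part of the original demo, and the conversion assumes the Beta-era PlanarImage layout (Bits, Width, Height, BytesPerPixel); treat it as an illustration rather than the project's actual code.

// Assumes: using System.Windows.Media; using System.Windows.Media.Imaging;
//          using Microsoft.Research.Kinect.Nui;
// and a WPF element declared as <Image x:Name="imgVideo" /> (hypothetical name).
void _nui_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    // Grab the raw BGR32 frame delivered by the Beta 2 runtime.
    PlanarImage image = e.ImageFrame.Image;

    // Wrap the pixel buffer in a BitmapSource so WPF can display it.
    imgVideo.Source = BitmapSource.Create(
        image.Width, image.Height,
        96, 96,                        // DPI
        PixelFormats.Bgr32,
        null,                          // no palette
        image.Bits,
        image.Width * image.BytesPerPixel);
}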
Conclusion As you can see it really wasn’t that painful to upgrade your project to Beta 2. I would recommend that you go ahead and upgrade to Beta 2 as future versions of the SDK will use these methods.  Thanks for reading. Subscribe to my feed

    Read the article

  • Running a WebLogic Portal (WLP) 10.3.4 Domain as a Windows Service

    - by user647124
    To start a WLP server as a Windows service it is simplest to make your own script based on the provided standard script located at WL_HOME\server\bin\installSvc.cmd. The standard script works fine for a plain WLS domain, but lacks some classpath entries and options necessary for WLP. Start by making a copy of the installSvc.cmd script and naming it something specific to your domain. Next, just under SETLOCAL you will find where WL_HOME is defined. Here you add the definitions you would normally add in a script that later calls installSvc.cmd (as per the standard documentation):
    set DOMAIN_NAME=gnma_test_domain
    set USERDOMAIN_HOME=D:\my_test_domain
    set SERVER_NAME=AdminServer
    set WLS_USER=weblogic
    set WLS_PW=gnmaAdmin01
    set PRODUCTION_MODE=true
    set MEM_ARGS=-Xms512m -Xmx512m
    set MW_HOME=C:\Oracle\Middleware
    Note: I had heard of people using this approach who had issues with the length of the command line. This may be due to their use of the default domain path; in the example above, I use a shorter path. At this point, edit DOMAIN_HOME\bin\startWebLogic.cmd and set it to echo both the classpath and the options. Then start the domain and capture the output of those echoes, then shut the domain back down. Now REM out the existing CLASSPATH definition, then use the outputs you captured earlier to set the CLASSPATH and JAVA_OPTIONS like this:
    REM set CLASSPATH=%WEBLOGIC_CLASSPATH%;%CLASSPATH%; C:\Oracle\Middleware\wlportal_10.3\portal\lib\security\wsrp-security-providers.jar
    set CLASSPATH=%MW_HOME%\patch_wls1034\profiles\default\sys_manifest_classpath\weblogic_patch.jar;%MW_HOME%\patch_wlp1034\profiles\default\sys_manifest_classpath\weblogic_patch.jar;%MW_HOME%\patch_oepe1111\profiles\default\sys_manifest_classpath\weblogic_patch.jar;%MW_HOME%\patch_ocm1033\profiles\default\sys_manifest_classpath\weblogic_patch.jar;%MW_HOME%\JROCKI~1.1-3\lib\tools.jar;%WL_HOME%\server\lib\weblogic_sp.jar;%WL_HOME%\server\lib\weblogic.jar;%MW_HOME%\modules\features\weblogic.server.modules_10.3.4.0.jar;%WL_HOME%\server\lib\webservices.jar;%MW_HOME%\modules\ORGAPA~1.1/lib/ant-all.jar;%MW_HOME%\modules\NETSFA~1.0_1/lib/ant-contrib.jar;%WL_HOME%\common\derby\lib\derbyclient.jar;%WL_HOME%\server\lib\xqrl.jar;%WL_HOME%\server\lib\xquery.jar;%WL_HOME%\server\lib\binxml.jar
    set JAVA_OPTIONS= -Xverify:none -ea -da:com.bea... -da:javelin... -da:weblogic... -ea:com.bea.wli... -ea:com.bea.broker... -ea:com.bea.sbconsole... -Dplatform.home=%WL_HOME% -Dwls.home=%WL_HOME%\server -Dweblogic.home=%WL_HOME%\server -Dweblogic.wsee.bind.suppressDeployErrorMessage=true -Dweblogic.wsee.skip.async.response=true -Dweblogic.management.discover=true -Dwlw.iterativeDev=true -Dwlw.testConsole=true -Dwlw.logErrorsToConsole=true -Dweblogic.ext.dirs=%MW_HOME%\patch_wls1034\profiles\default\sysext_manifest_classpath;%MW_HOME%\patch_wlp1034\profiles\default\sysext_manifest_classpath;%MW_HOME%\patch_oepe1111\profiles\default\sysext_manifest_classpath;%MW_HOME%\patch_ocm1033\profiles\default\sysext_manifest_classpath;%MW_HOME%\wlportal_10.3\p13n\lib\system;%MW_HOME%\wlportal_10.3\light-portal\lib\system;%MW_HOME%\wlportal_10.3\portal\lib\system;%MW_HOME%\wlportal_10.3\info-mgmt\lib\system;%MW_HOME%\wlportal_10.3\analytics\lib\system;%MW_HOME%\wlportal_10.3\apps\lib\system;%MW_HOME%\wlportal_10.3\info-mgmt\deprecated\lib\system;%MW_HOME%\wlportal_10.3\content-mgmt\lib\system -Dweblogic.alternateTypesDirectory=%MW_HOME%\wlportal_10.3\portal\lib\security
    And that's it.
Looks really simple, but it took me quite some time to gather all the necessary pieces to make it work. Hopefully you found this before going through half as much research yourself. The example here uses a domain with only the Admin server and no managed servers; for a variety of reasons I only want the Admin server to run as a service. The standard documentation, along with the example above, should allow you to expand this to include managed servers should you feel the need.

    Read the article

  • Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

    - by darrenm
    Solaris 11 brought both ZFS encryption and the Immutable Zones feature and I've talked about the combination in the past.  Solaris 11.1 adds a fully supported method of storing zones in their own ZFS using shared storage so lets update things a little and put all three parts together. When using an iSCSI (or other supported shared storage target) for a Zone we can either let the Zones framework setup the ZFS pool or we can do it manually before hand and tell the Zones framework to use the one we made earlier.  To enable encryption we have to take the second path so that we can setup the pool with encryption before we start to install the zones on it. We start by configuring the zone and specifying an rootzpool resource: # zonecfg -z eizoss Use 'create' to begin configuring a new zone. zonecfg:eizoss> create create: Using system default template 'SYSdefault' zonecfg:eizoss> set zonepath=/zones/eizoss zonecfg:eizoss> set file-mac-profile=fixed-configuration zonecfg:eizoss> add rootzpool zonecfg:eizoss:rootzpool> add storage \ iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 zonecfg:eizoss:rootzpool> end zonecfg:eizoss> verify zonecfg:eizoss> commit zonecfg:eizoss> Now lets create the pool and specify encryption: # suriadm map \ iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 PROPERTY VALUE mapped-dev /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0 # echo "zfscrypto" > /zones/p # zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \ /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0 # zpool export eizoss Note that the keysource example above is just for this example, realistically you should probably use an Oracle Key Manager or some other better keystorage, but that isn't the purpose of this example.  Note however that it does need to be one of file:// https:// pkcs11: and not prompt for the key location.  Also note that we exported the newly created pool.  The name we used here doesn't actually mater because it will get set properly on import anyway. So lets go ahead and do our install: zoneadm -z eizoss install -x force-zpool-import Configured zone storage resource(s) from: iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 Imported zone zpool: eizoss_rpool Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install Image: Preparing at /zones/eizoss/root. AI Manifest: /tmp/manifest.xml.ujaq54 SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: eizoss Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/solaris/release/ Please review the licenses for the following packages post-install: consolidation/osnet/osnet-incorporation (automatically accepted, not displayed) Package licenses may be viewed using the command: pkg info --license <pkg_fmri> DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 187/187 33575/33575 227.0/227.0 384k/s PHASE ITEMS Installing new actions 47449/47449 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 929.606 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. 
Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install
That was really all we had to do; when the install is done, boot it up as normal. The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool the zone is stored on. Due to how inheritance works in ZFS he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone), or he can create encrypted datasets inside the zone that use keys of his own choosing. The output below shows the two cases: rpool is inheriting the key material from the global zone (note we can see the value of the keysource property but we don't use it inside the zone, nor does that path need to be (or is it) accessible inside the zone), whereas rpool/export/home/bob has set keysource locally.
# zfs get encryption,keysource rpool rpool/export/home/bob
NAME                   PROPERTY    VALUE                       SOURCE
rpool                  encryption  on                          inherited from $globalzone
rpool                  keysource   passphrase,file:///zones/p  inherited from $globalzone
rpool/export/home/bob  encryption  on                          local
rpool/export/home/bob  keysource   passphrase,prompt           local

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the "Data Hub" – a codename for a project that ironically actually does what the codename says.
    In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases, made changes to one of the copies. It's difficult to know which location or version of the data is authoritative. Then there's the problem of accessing the data. It's fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing up the security, connection-string, and exploration issues all over again.
    Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it's available to you - and only you and your organization if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options that you or your users can use to access that data. You're then able, if you wish, to combine that data with other data in one location.
    So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) that know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I'll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you'll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage. After the data is entered in, and you've assigned meta-data to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924  You can make HTTP calls instead of code, and the data can even be exposed as an OData Feed.
    As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
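As a rough illustration of the "simple API calls" option mentioned above, an OData feed exposed by the service can be read with an ordinary authenticated HTTP GET. The URL, data-area name and account credentials below are placeholders I made up, not real Data Hub values; check the MSDN link above for the actual endpoint format.

// Minimal sketch: pull the first few rows of a published data set as an OData (Atom) feed.
// Endpoint and credentials are hypothetical placeholders.
using System;
using System.IO;
using System.Net;

class DataHubSketch
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create(
            "https://contoso.datahub.example.com/odata/Sales/Customers?$top=10");
        request.Credentials = new NetworkCredential("accountName", "accountKey");
        request.Accept = "application/atom+xml"; // OData feeds default to Atom

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd()); // raw OData payload
        }
    }
}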

    Read the article

  • Server-Sent Events using GlassFish (TOTD #179)

    - by arungupta
    Bhakti blogged about Server-Sent Events on GlassFish and I've been planning to try it out for past some days. Finally, I took some time out today to learn about it and build a simplistic example showcasing the touch points. Server-Sent Events is developed as part of HTML5 specification and provides push notifications from a server to a browser client in the form of DOM events. It is defined as a cross-browser JavaScript API called EventSource. The client creates an EventSource by requesting a particular URL and registers an onmessage event listener to receive the event notifications. This can be done as shown var url = 'http://' + document.location.host + '/glassfish-sse/simple';eventSource = new EventSource(url);eventSource.onmessage = function (event) { var theParagraph = document.createElement('p'); theParagraph.innerHTML = event.data.toString(); document.body.appendChild(theParagraph);} This code subscribes to a URL, receives the data in the event listener, adds it to a HTML paragraph element, and displays it in the document. This is where you'll parse JSON and other processing to display if some other data format is received from the URL. The URL to which the EventSource is subscribed to is updated on the server side and there are multipe ways to do that. GlassFish 4.0 provide support for Server-Sent Events and it can be achieved registering a handler as shown below: @ServerSentEvent("/simple")public class MySimpleHandler extends ServerSentEventHandler { public void sendMessage(String data) { try { connection.sendMessage(data); } catch (IOException ex) { . . . } }} And then events can be sent to this handler using a singleton session bean as shown: @Startup@Statelesspublic class SimpleEvent { @Inject @ServerSentEventContext("/simple") ServerSentEventHandlerContext<MySimpleHandler> simpleHandlers; @Schedule(hour="*", minute="*", second="*/10") public void sendDate() { for(MySimpleHandler handler : simpleHandlers.getHandlers()) { handler.sendMessage(new Date().toString()); } }} This stateless session bean injects ServerSentEventHandlers listening on "/simple" path. Note, there may be multiple handlers listening on this path. The sendDate method triggers every 10 seconds and send the current timestamp to all the handlers. The client side browser simply displays the string. The HTTP request headers look like: Accept: text/event-streamAccept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3Accept-Encoding: gzip,deflate,sdchAccept-Language: en-US,en;q=0.8Cache-Control: no-cacheConnection: keep-aliveCookie: JSESSIONID=97ff28773ea6a085e11131acf47bHost: localhost:8080Referer: http://localhost:8080/glassfish-sse/faces/index2.xhtmlUser-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5 And the response headers as: Content-Type: text/event-streamDate: Thu, 14 Jun 2012 21:16:10 GMTServer: GlassFish Server Open Source Edition 4.0Transfer-Encoding: chunkedX-Powered-By: Servlet/3.0 JSP/2.2 (GlassFish Server Open Source Edition 4.0 Java/Apple Inc./1.6) Notice, the MIME type of the messages from server to the client is text/event-stream and that is defined by the specification. 
The code in Bhakti's blog can be further simplified by using the recently-introduced Twitter API for Java as shown below: @Schedule(hour="*", minute="*", second="*/10") public void sendTweets() { for(MyTwitterHandler handler : twitterHandler.getHandlers()) { String result = twitter.search("glassfish", String.class); handler.sendMessage(result); }} The complete source explained in this blog can be downloaded here and tried on GlassFish 4.0 build 34. The latest promoted build can be downloaded from here and the complete source code for the API and implementation is here. I tried this sample on Chrome Version 19.0.1084.54 on Mac OS X 10.7.3.

    Read the article

  • Windows Phone 7 and WS-Trust

    - by Your DisplayName here!
    A question that I often hear these days is: “Can I connect a Windows Phone 7 device to my existing enterprise services?”. Well – since most of my services are typically issued token based, this requires support for WS-Trust and WS-Security on the client. Let’s see what’s necessary to write a WP7 client for this scenario. First I converted the Silverlight library that comes with the Identity Training Kit to WP7. Some things are not supported in WP7 WCF (like message inspectors and some client runtime hooks) – but besides that this was a simple copy+paste job. Very nice! Next I used the WSTrustClient to request tokens from my STS: private WSTrustClient GetWSTrustClient() {     var client = new WSTrustClient(         new WSTrustBindingUsernameMixed(),         new EndpointAddress("https://identity.thinktecture.com/…/issue.svc/mixed/username"),         new UsernameCredentials(_txtUserName.Text, _txtPassword.Password));     return client; } private void _btnLogin_Click(object sender, RoutedEventArgs e) {     _client = GetWSTrustClient();       var rst = new RequestSecurityToken(WSTrust13Constants.KeyTypes.Bearer)     {         AppliesTo = new EndpointAddress("https://identity.thinktecture.com/rp/")     };       _client.IssueCompleted += client_IssueCompleted;     _client.IssueAsync(rst); } I then used the returned RSTR to talk to the WCF service. Due to a bug in the combination of the Silverlight library and the WP7 runtime – symmetric key tokens seem to have issues currently. Bearer tokens work fine. So I created the following binding for the WCF endpoint specifically for WP7. <customBinding>   <binding name="mixedNoSessionBearerBinary">     <security authenticationMode="IssuedTokenOverTransport"               messageSecurityVersion="WSSecurity11 WSTrust13 WSSecureConversation13 WSSecurityPolicy12 BasicSecurityProfile10">       <issuedTokenParameters keyType="BearerKey" />     </security>     <binaryMessageEncoding />     <httpsTransport/>   </binding> </customBinding> The binary encoding is not necessary, but will speed things up a little for mobile devices. I then call the service with the following code: private void _btnCallService_Click(object sender, RoutedEventArgs e) {     var binding = new CustomBinding(         new BinaryMessageEncodingBindingElement(),         new HttpsTransportBindingElement());       _proxy = new StarterServiceContractClient(         binding,         new EndpointAddress("…"));     using (var scope = new OperationContextScope(_proxy.InnerChannel))     {         OperationContext.Current.OutgoingMessageHeaders.Add(new IssuedTokenHeader(Globals.RSTR));         _proxy.GetClaimsAsync();     } } works. download
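One piece not shown above is the IssueCompleted handler that captures the returned RSTR (later referenced as Globals.RSTR). A minimal sketch might look like the following; the event-args member names, the Globals holder class and the button name are my assumptions rather than the library's confirmed API, so check the converted Silverlight library source for the exact signature.

// Hypothetical completion handler: stash the issued token so it can later be attached
// to outgoing messages via new IssuedTokenHeader(Globals.RSTR).
void client_IssueCompleted(object sender, IssueCompletedEventArgs e)
{
    if (e.Error != null)
    {
        MessageBox.Show(e.Error.Message);   // surface STS errors instead of failing silently
        return;
    }

    Globals.RSTR = e.Response;              // assumed property holding the RequestSecurityTokenResponse
    _btnCallService.IsEnabled = true;       // assumed button name; allow the service call now
}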

    Read the article

  • Juggling with JDKs on Apple OS X

    - by Blueberry Coder
    I recently got a shiny new MacBook Pro to help me support our ADF Mobile customers. It is really a wonderful piece of hardware, although I am still adjusting to Apple's peculiar keyboard layout. Did you know, for example, that the « delete » key actually performs a « backspace »? But I disgress... As you may know, ADF Mobile development still requires JDeveloper 11gR2, which in turn runs on Java 6. On the other hand, JDeveloper 12c needs JDK 7. I wanted to install both versions, and wasn't sure how to do it.   If you remember, I explained in a previous blog entry how to install JDeveloper 11gR2 on Apple's OS X. The trick was to use the /usr/libexec/java_home command in order to invoke the proper JDK. In this case, I could have done the same thing; the two JDKs can coexist without any problems, since they install in completely different locations. But I wanted more than just installing JDeveloper. I wanted to be able to select my JDK when using the command line as well. On Windows, this is easy, since I keep all my JDKs in a central location. I simply have to move to the appropriate folder or type the folder name in the command I want to execute. Problem is, on OS X, the paths to the JDKs are... let's say convoluted.  Here is the one for Java 6. /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home The Java 7 path is not better, just different. /Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home Intuitive, isn't it? Clearly, I needed something better... On OS X, the default command shell is bash. It is possible to configure the shell environment by creating a file named « .profile » in a user's home folder. Thus, I created such a file and put the following inside: export JAVA_7_HOME=$(/usr/libexec/java_home -v1.7) export JAVA_6_HOME=$(/usr/libexec/java_home -v1.6) export JAVA_HOME=$JAVA_7_HOME alias java6='export JAVA_HOME=$JAVA_6_HOME' alias java7='export JAVA_HOME=$JAVA_7_HOME'  The first two lines retrieve the current paths for Java 7 and Java 6 and store them in two environment variables. The third line marks Java 7 as the default. The last two lines create command aliases. Thus, when I type java6, the value for JAVA_HOME is set to JAVA_6_HOME, for example.  I now have an environment which works even better than the one I have on Windows, since I can change my active JDK on a whim. Here a sample, fresh from my terminal window. fdesbien-mac:~ fdesbien$ java6 fdesbien-mac:~ fdesbien$ java -version java version "1.6.0_65" Java(TM) SE Runtime Environment (build 1.6.0_65-b14-462-11M4609) Java HotSpot(TM) 64-Bit Server VM (build 20.65-b04-462, mixed mode) fdesbien-mac:~ fdesbien$ fdesbien-mac:~ fdesbien$ java7 fdesbien-mac:~ fdesbien$ java -version java version "1.7.0_45" Java(TM) SE Runtime Environment (build 1.7.0_45-b18) Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode) fdesbien-mac:~ fdesbien$ Et voilà! Maximum flexibility without downsides, just I like it. 

    Read the article

  • Cream of the Crop

    - by KemButller
    JD Edwards has been working hard to ensure that you shouldn't have to work so hard! Yet there are still JD Edwards customers that may not be up to speed on all the new and or improved tools and utilities we have delivered, all designed to make your life easier. So today, I want to share what I consider to be the cream of the crop….those items that every customer should know about and leverage to make ERP life just a little bit (or A LOT) easier! These are my top picks, the cream of a very good crop! Explore and enjoy, and gain some of your time back to do with as you please. · www.runjde.com It’s where to go when you need to know! The Resource Kits available on www.runjde.com provide comprehensive Resource Kits (guides) by user type. The guides provide brief descriptions of the wide array of resources that are available to JD Edwards’s eco system and links to each of those resources. · My Oracle Support (MOS) Information Centers This link will take you to an index that is designed to provide you with simple and quick navigation to the available EnterpriseOne Information Centers. This index provides links to: · EnterpriseOne Application specific Information Centers · EnterpriseOne Tools and Technology Information Centers · EnterpriseOne Performance Information Center · EnterpriseOne 9.1 and 9.0 Information Centers Information Centers give Oracle the ability to aggregate content for a given focus area and present this content in categories for easy browsing by our customers. Information Centers offer a variety of focused dynamic content organized around one or more of the following tasks. · Overview · Use · Troubleshooting · Patching and Maintenance · Install and Configure · Upgrade · Optimize Performance · Security · Certify JD Edwards Newsletters Be in the know by reading the Global Customer Support Product Newsletters. They are PACKED with news and information covering a wide range of topics and news. It is a must read if you want to know what’s happening in the JD Edwards universe! Read the latest EntepriseOne newsletter Read the latest World newsletter Learn How to receive notification when a new newsletter edition is published Oracle Learning Library – (OLL) Oracle Learn Library is the place to go for easy access to JD Edwards Application and Tools training. For a comprehensive view of the training available for a specific product/functional area, explore the Knowledge Paths For Net Change (new feature) training, explore the TOI sessions (TOI stands for Transfer Of Information). Tip: Be sure to experiment with the search filters! · www.upgradejde.com The site designed to help customers and partners with the process of upgrading JD Edwards. The site is a wealth of information, tools and resources designed to assist in the evaluation, planning and execution steps required when upgrading. Of note is the wildly successful upgrade strategy known as “The Art of the Possible” wherein JD Edwards and many of our partners hold free workshops to teach customers how to conduct upgrades in 100 days or less. Equally important is the fact that on www.upgradejde.com, customers can gain visibility into planned enhancements using the Product and Technology Feature Catalogs. The catalogs are great for creating customer specific reports about the net change between older releases and current or planned releases. 
Examples of other key resources on www.upgradejde.com are the product data base changes between releases, extensibility guides, (formerly known as programmer’s guides), whitepapers, ROI calculators and much more!

    Read the article

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article continues the series on the asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information. The series covers: Asynchronous Functions, TPL Dataflow, and the Task-based Asynchronous Pattern. So, let's continue with Part II: TPL Dataflow.
    Definition (quoting the Async CTP documentation): "TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#." This means: data manipulation processed asynchronously. "TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications." Data manipulation is another hot area when designing asynchronous and parallel applications: how do you sync data access in a parallel environment? How do you avoid concurrency issues? How do you notify when data is available? How do you control how much data is waiting to be consumed? And so on.
    Dataflow Blocks. TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides storage for some kind of data (instances of <T>). So, let's review the data processing blocks available. Blocks are categorized into three groups: Buffering Blocks, Executor Blocks, and Joining Blocks. Think of them as electronic circuitry components :)
    1. BufferBlock<T>: a FIFO (First In, First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers but only one will actually process each item).
    2. BroadcastBlock<T>: the same FIFO queue for messages (instances of <T>), but it links the receiving event to all consumers (it makes the data available for consumption to N consumers). The developer can provide a function to make a copy of the data if necessary.
    3. WriteOnceBlock<T>: it stores only one value and once it's been set, it can never be replaced or overwritten again (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value.
    4. ActionBlock<TInput>: this executor block allows us to define an operation to be executed when posting data to the queue. Thus, we must pass in a delegate/lambda when creating the block. Posting data will result in an execution of the delegate for each item in the queue. You can also specify how many parallel executions to allow (degree of parallelism).
    5. TransformBlock<TInput, TOutput>: this executor block is designed to transform each input, which is why it defines an output type. It ensures messages are processed and delivered in order.
    6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock but produces one or more outputs from each input.
    7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs).
    8. JoinBlock<T1, T2, …>: it generates tuples from all inputs (it aggregates inputs). Inputs could be of any type you want (T1, T2, etc.).
    9. BatchJoinBlock<T1, T2, …>: aggregates tuples of collections.
It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>). Next time I will show some examples of usage for each TDF block. * Images taken from Microsoft’s Async CTP documentation.
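In the meantime, here is a small self-contained sketch of the kind of pipeline these blocks enable. It is written against the TPL Dataflow API as it later shipped in the System.Threading.Tasks.Dataflow package; namespaces and packaging differed slightly in the CTP builds, so treat it as illustrative rather than CTP-exact.

// A tiny two-stage pipeline: TransformBlock parses strings to ints, ActionBlock prints them.
using System;
using System.Threading.Tasks.Dataflow;

class PipelineSketch
{
    static void Main()
    {
        // Stage 1: transform each input string into an int (executor block with an output).
        var parse = new TransformBlock<string, int>(s => int.Parse(s));

        // Stage 2: consume each parsed value (executor block without an output).
        var print = new ActionBlock<int>(n => Console.WriteLine("Got {0}", n));

        // Wire the stages together and let completion flow downstream.
        parse.LinkTo(print, new DataflowLinkOptions { PropagateCompletion = true });

        foreach (var s in new[] { "1", "2", "3" })
            parse.Post(s);

        parse.Complete();          // no more input
        print.Completion.Wait();   // wait for the pipeline to drain
    }
}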

    Read the article

  • The five steps of business intelligence adoption: where are you?

    - by Red Gate Software BI Tools Team
    When I was in Orlando and New York last month, I spoke to a lot of business intelligence users. What they told me suggested a path of BI adoption. The user’s place on the path depends on the size and sophistication of their organisation. Step 1: A company with a database of customer transactions will often want to examine particular data, like revenue and unit sales over the last period for each product and territory. To do this, they probably use simple SQL queries or stored procedures to produce data on demand. Step 2: The results from step one are saved in an Excel document, so business users can analyse them with filters or pivot tables. Alternatively, SQL Server Reporting Services (SSRS) might be used to generate a report of the SQL query for display on an intranet page. Step 3: If these queries are run frequently, or business users want to explore data from multiple sources more freely, it may become necessary to create a new database structured for analysis rather than CRUD (create, retrieve, update, and delete). For example, data from more than one system — plus external information — may be incorporated into a data warehouse. This can become ‘one source of truth’ for the business’s operational activities. The warehouse will probably have a simple ‘star’ schema, with fact tables representing the measures to be analysed (e.g. unit sales, revenue) and dimension tables defining how this data is aggregated (e.g. by time, region or product). Reports can be generated from the warehouse with Excel, SSRS or other tools. Step 4: Not too long ago, Microsoft introduced an Excel plug-in, PowerPivot, which allows users to bring larger volumes of data into Excel documents and create links between multiple tables.  These BISM Tabular documents can be created by the database owners or other expert Excel users and viewed by anyone with Excel PowerPivot. Sometimes, business users may use PowerPivot to create reports directly from the primary database, bypassing the need for a data warehouse. This can introduce problems when there are misunderstandings of the database structure or no single ‘source of truth’ for key data. Step 5: Steps three or four are often enough to satisfy business intelligence needs, especially if users are sophisticated enough to work with the warehouse in Excel or SSRS. However, sometimes the relationships between data are too complex or the queries which aggregate across periods, regions etc are too slow. In these cases, it can be necessary to formalise how the data is analysed and pre-build some of the aggregations. To do this, a business intelligence professional will typically use SQL Server Analysis Services (SSAS) to create a multidimensional model — or “cube” — that more simply represents key measures and aggregates them across specified dimensions. Step five is where our tool, SSAS Compare, becomes useful, as it helps review and deploy changes from development to production. For us at Red Gate, the primary value of SSAS Compare is to establish a dialog with BI users, so we can develop a portfolio of products that support creation and deployment across a range of report and model types. For example, PowerPivot and the new BISM Tabular model create a potential customer base for tools that extend beyond BI professionals. We’re interested in learning where people are in this story, so we’ve created a six-question survey to find out. Whether you’re at step one or step five, we’d love to know how you use BI so we can decide how to build tools that solve your problems. 
So if you have sixty seconds to spare, tell us on the survey!

    Read the article

  • Why is this beat detection code failing to register some beats properly?

    - by Quincy
    I made this SoundAnalyzer class to detect beats in songs: class SoundAnalyzer { public SoundBuffer soundData; public Sound sound; public List<double> beatMarkers = new List<double>(); public SoundAnalyzer(string path) { soundData = new SoundBuffer(path); sound = new Sound(soundData); } // C = threshold, N = size of history buffer / 1024 B = bands public void PlaceBeatMarkers(float C, int N, int B) { List<double>[] instantEnergyList = new List<double>[B]; GetEnergyList(B, ref instantEnergyList); for (int i = 0; i < B; i++) { PlaceMarkers(instantEnergyList[i], N, C); } beatMarkers.Sort(); } private short[] getRange(int begin, int end, short[] array) { short[] result = new short[end - begin]; for (int i = 0; i < end - begin; i++) { result[i] = array[begin + i]; } return result; } // get a array of with a list of energy for each band private void GetEnergyList(int B, ref List<double>[] instantEnergyList) { for (int i = 0; i < B; i++) { instantEnergyList[i] = new List<double>(); } short[] samples = soundData.Samples; float timePerSample = 1 / (float)soundData.SampleRate; int sampleIndex = 0; int nextSamples = 1024; int samplesPerBand = nextSamples / B; // for the whole song while (sampleIndex + nextSamples < samples.Length) { complex[] FFT = FastFourier.Calculate(getRange(sampleIndex, nextSamples + sampleIndex, samples)); // foreach band for (int i = 0; i < B; i++) { double energy = 0; for (int j = 0; j < samplesPerBand; j++) energy += FFT[i * samplesPerBand + j].GetMagnitude(); energy /= samplesPerBand; instantEnergyList[i].Add(energy); } if (sampleIndex + nextSamples >= samples.Length) nextSamples = samples.Length - sampleIndex - 1; sampleIndex += nextSamples; samplesPerBand = nextSamples / B; } } // place the actual markers private void PlaceMarkers(List<double> instantEnergyList, int N, float C) { double timePerSample = 1 / (double)soundData.SampleRate; int index = N; int numInBuffer = index; double historyBuffer = 0; //Fill the history buffer with n * instant energy for (int i = 0; i < index; i++) { historyBuffer += instantEnergyList[i]; } // If instantEnergy / samples in buffer < instantEnergy for the next sample then add beatmarker. while (index + 1 < instantEnergyList.Count) { if(instantEnergyList[index + 1] > (historyBuffer / numInBuffer) * C) beatMarkers.Add((index + 1) * 1024 * timePerSample); historyBuffer -= instantEnergyList[index - numInBuffer]; historyBuffer += instantEnergyList[index + 1]; index++; } } } For some reason it's only detecting beats from 637 sec to around 641 sec, and I have no idea why. I know the beats are being inserted from multiple bands since I am finding duplicates, and it seems that it's assigning a beat to each instant energy value in between those values. It's modeled after this: http://www.flipcode.com/misc/BeatDetectionAlgorithms.pdf So why won't the beats register properly?

    Read the article

  • Modernizr Rocks HTML5

    - by Laila
    HTML5 is a moving target. At the moment, we don't know what will be in future versions. In most circumstances, this really matters to the developer. When you're using Adobe Air, you can be reasonably sure what works, what is there, and what isn't, since you have a version of the browser built in. With Metro, you can assume that you're going to be using at least IE 10. If, however, you are using HTML5 in a web application, then you are going to rely heavily on Feature Detection. Feature Detection is a collection of techniques that tell you, via JavaScript, whether the current browser has a given feature natively implemented or not. Feature Detection isn't just there for the esoteric stuff such as Geo-location, progress bars, <canvas> support, the new <input> types, Audio, Video, web workers or storage, but is required even for semantic markup, since old browsers make a pig's ear out of rendering it. Feature detection can't rely just on reading the browser version and inferring from that what works. Instead, you must use JavaScript to check that an HTML5 feature is there before using it. The problem with relying on the user-agent is that it takes a lot of historical data to work out what version does what, and, anyway, the user-agent can be, and sometimes is, spoofed. The open-source library Modernizr is just about the most essential JavaScript library for anyone using HTML5, because it provides APIs to test for most of the CSS3 and HTML5 features before you use them, and is intelligent enough to alter semantic markup into 'legacy' markup, using shims on page-load, for old browsers. It also allows you to check which video codecs are installed for playing video, and it provides media queries and conditional resource-loading (formerly YepNope.js). Generally, Modernizr gives you the choice of what to do about browsers that don't support the feature you want. Often, the best choice is graceful degradation, but the resource-loading feature allows you to dynamically load JavaScript shims, called 'polyfills', that replace the standard API for missing or defective HTML5 functionality. As the Modernizr site says, 'Yes, not only can you use HTML5 today, but you can use it in the past, too!' The evolutionary progress of HTML5 requires a more defensive style of JavaScript programming, where the programmer adopts a mindset of fearing the worst (IE 6) rather than assuming the best, whilst exploiting as many of the new HTML features as possible for the requirements of the site or HTML application. Why would anyone want the distraction of developing their own techniques to do this when Modernizr exists to do it for you? Laila

    Read the article

  • Silverlight Binding with multiple collections

    - by George Evjen
    We're designing some sport-specific applications. In one of our views we have a grid view that is bound to an observable collection of Teams. This is pretty straightforward in terms of getting Teams bound to the GridView. <telerik:RadGridView Grid.Row="0" Grid.Column="0" x:Name="UsersGrid" ItemsSource="{Binding TeamResults}" SelectedItem="{Binding SelectedTeam, Mode=TwoWay}"> <telerik:RadGridView.Columns> <telerik:GridViewDataColumn Header="Name/Group" DataMemberBinding="{Binding TeamName}" MinWidth="150"></telerik:GridViewDataColumn> </telerik:RadGridView.Columns> </telerik:RadGridView> We use the observable collection of teams as our items source and then bind the TeamName property to the first column. You can set the binding to Mode=TwoWay; we use a dialog to edit the selected item, so our binding here is not two-way. The issue comes when we want to bind to a property that contains another collection. To continue the code from above, we have an observable collection of Teams, and within that collection we have a collection of KeyPeople. We get this collection using RIA Services with the code below. return _TeamsRepository.All().Include("KeyPerson"); Here we are getting all the teams and also including the KeyPerson entity. So when our Load is done we will end up with an observable collection of Teams with a navigation property / entity of KeyPerson. Within this KeyPerson entity is a list of people associated with that particular team. We want to display the head coach from this list of KeyPersons. The list currently contains ten or more people bound to this particular team, but we just want to display the head coach in the column next to the team name. The issue becomes: how do we bind to this included entity? I have found about three different ways to solve this issue. The way that seemed to fit us best is to utilize the features within RIA Services. We can create client-side properties that will do the work for us. In the client-side library we will create a partial class of Team, which ends up in a file called Team.shared.cs. The code below is what we will put into our partial Team class. public KeyPerson Coach { get { if (this.KeyPerson != null && this.KeyPerson.Any()) { return this.KeyPerson.Where(x => x.RelationshipType == "HeadCoach").FirstOrDefault(); } return null; } } We will return just the person who is the head coach and then be able to bind that and any other additional properties that we need. <telerik:GridViewDataColumn Header="Coach" DataMemberBinding="{Binding Coach.Name}" MinWidth="150"></telerik:GridViewDataColumn> There are other ways we could have solved this issue, but we felt that creating a partial class through RIA Services best suited our needs.
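    As a sketch of one of the 'other ways' alluded to above, a value converter could pick the head coach out of the KeyPerson collection at binding time instead of a shared partial class. The converter below is hypothetical; it reuses the KeyPerson, RelationshipType and Name members from the post and would be wired up with something like DataMemberBinding="{Binding KeyPerson, Converter={StaticResource HeadCoachConverter}}".

    using System;
    using System.Collections.Generic;
    using System.Globalization;
    using System.Linq;
    using System.Windows.Data;

    // Hypothetical alternative: bind the column to KeyPerson and let a converter pick out the head coach.
    public class HeadCoachConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
        {
            var people = value as IEnumerable<KeyPerson>;
            if (people == null)
                return null;

            var coach = people.FirstOrDefault(x => x.RelationshipType == "HeadCoach");
            return coach != null ? coach.Name : null;
        }

        public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        {
            throw new NotSupportedException();
        }
    }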

    Read the article

  • Auto-organized / smart inventory system?

    - by VeXe
    For the past week I've been working on an inventory system with Unity3D. At first I got help from the guys at Design3, but it wasn't too long till we split paths, because I really didn't like the way they did their code; it didn't have any smell of OOP whatsoever. I took it a few steps further: items take more than one slot, an advanced placement system (items try their best to find the closest fit), a local mouse system (the mouse gets trapped in the active bag area), etc. Here's a demo of my work. What we would like to have in our game is an auto-organizing feature - not auto-sort. We want this feature because our inventory is going to be in 'real-time' - not like in Resident Evil 1, 2, 3 etc., where you would pause the game and do things in your inventory. Now imagine yourself in a sticky situation, surrounded by zombies, and you don't have bullets. You look around and see that there are bullets nearby on the ground, so you go for them and try to pick them up, but they don't fit! You look at your inventory and find out that if you reorganize some of the items, they will fit! Now the player in that situation doesn't have time to reorganize, because he's surrounded by zombies and will die if he stops to organize the inventory and make space (remember, the inventory is real-time, no pausing) - wouldn't it be nice for that to happen automatically? Yes! (I believe this has been implemented in some games like Dungeon Siege or something, so it's surely doable.) Take a look at this picture for example: Yes, if you auto-sort you will get your spaces, but it's bad because: 1. It's expensive: it doesn't need a whole sort operation to free those spaces; in the first picture, just slide the red item at the bottom to the very left, and you get the same spaces that you got from the auto-sort. 2. It's annoying to the player: "Who the F told you to re-order my stuff?" I'm not asking for "how to write the code" for this, I'm just asking for some guidance: where to look, what algorithms are involved? Is this something related to graphs and shortest-path stuff? I hope not, cuz I didn't manage to continue my college studies :/ But even if it is, just tell me and I will learn the stuff related to it. Notice there could be more than just one solution. So I guess the first thing I have to do is figure out if the situation is 'solvable' - if I know how to determine whether a situation is solvable or not, then I can 'solve' it. I just need to know the conditions that make it 'solvable'. And I believe there must be some algorithm/data structure for this. Here's a pic for more than one solution of trying to fit a 1x3 item: The arrows show just one of the solutions, but if you look you will find more than one. This is what I ultimately want: not auto-sorting, but finding a solution and applying it. Note that if I spend time on it I will come up with a way to solve it, but it wouldn't be the best way; it's like holding a car wheel with your feet instead of your hands! XD Or just like trying to solve an issue that requires arrays, but you're not yet aware of their existence! So what is the right approach to this? Hope somebody helps, thanks a lot in advance :)
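    Not an answer to the 'is it solvable after a reshuffle' question, but as a starting point, the cheap check (does the item fit anywhere right now?) is easy to express if the bag is modelled as a boolean grid. The grid representation and the method name below are assumptions, sketched in plain C# rather than Unity-specific code.

    static class InventorySketch
    {
        // occupied[x, y] marks used cells; returns true if a width x height item
        // fits somewhere in the grid as it currently stands.
        public static bool FitsSomewhere(bool[,] occupied, int width, int height)
        {
            int cols = occupied.GetLength(0);
            int rows = occupied.GetLength(1);

            for (int x = 0; x <= cols - width; x++)
            {
                for (int y = 0; y <= rows - height; y++)
                {
                    bool free = true;
                    for (int dx = 0; dx < width && free; dx++)
                        for (int dy = 0; dy < height && free; dy++)
                            if (occupied[x + dx, y + dy])
                                free = false;

                    if (free)
                        return true;
                }
            }
            return false;
        }
    }

    Deciding whether some rearrangement would open up a large enough hole is essentially a 2D packing problem, so exact searches get expensive quickly; practical implementations tend to use greedy heuristics instead, for example re-placing the existing items largest-first and then checking whether the new item fits.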

    Read the article

  • Dynamically Changing the Display Names of Menus and Popups

    - by Geertjan
    Very interesting thing and handy to know when needed is the fact that "menuText" and "popupText" (from org.openide.awt.ActionRegistration) can be changed dynamically, via "putValue" as shown below for "popupText". The Action class, in this case, needs to be eager, hence you won't receive the object of interest via the constructor, but you can easily use the global Lookup for that purpose instead, as also shown below. import java.awt.event.ActionEvent; import java.text.DateFormat; import java.text.SimpleDateFormat; import javax.swing.AbstractAction; import org.netbeans.api.project.Project; import org.netbeans.api.project.ProjectInformation; import org.netbeans.api.project.ProjectUtils; import org.openide.awt.ActionID; import org.openide.awt.ActionReference; import org.openide.awt.ActionRegistration; import org.openide.util.Utilities; @ActionID( category = "Project", id = "org.ptt.DemoProjectAction") @ActionRegistration( lazy = false, displayName = "NOT-USED") @ActionReference(path = "Projects/Actions", position = 0) public final class DemoProjectAction extends AbstractAction{ private final ProjectInformation context; public DemoProjectAction() { putValue("popupText", "Select Me To See Current Time!"); context = ProjectUtils.getInformation( Utilities.actionsGlobalContext().lookup(Project.class)); } @Override public void actionPerformed(ActionEvent e) { refresh(); } protected void refresh() { DateFormat formatter = new SimpleDateFormat("HH:mm:ss"); String formatted = formatter.format(System.currentTimeMillis()); putValue("popupText", "Time: " + formatted + " (" + context.getDisplayName() +")"); } } Now, let's do something semi useful and display, in the popup, which is available when you right-click a project, the time since the last change was made anywhere in the project, i.e., we can listen recursively to any changes done within a project and then update the popup with the newly acquired information, dynamically: import java.awt.event.ActionEvent; import java.text.DateFormat; import java.text.SimpleDateFormat; import javax.swing.AbstractAction; import org.netbeans.api.project.Project; import org.netbeans.api.project.ProjectUtils; import org.openide.awt.ActionID; import org.openide.awt.ActionReference; import org.openide.awt.ActionRegistration; import org.openide.filesystems.FileAttributeEvent; import org.openide.filesystems.FileChangeListener; import org.openide.filesystems.FileEvent; import org.openide.filesystems.FileRenameEvent; import org.openide.util.Utilities; @ActionID( category = "Project", id = "org.ptt.TrackProjectTimerAction") @ActionRegistration( lazy = false, displayName = "NOT-USED") @ActionReference( path = "Projects/Actions", position = 0) public final class TrackProjectTimerAction extends AbstractAction implements FileChangeListener { private final Project context; private Long startTime; private Long changedTime; private DateFormat formatter; public TrackProjectTimerAction() { putValue("popupText", "Enable project time tracker"); this.formatter = new SimpleDateFormat("HH:mm:ss"); context = Utilities.actionsGlobalContext().lookup(Project.class); context.getProjectDirectory().addRecursiveListener(this); } @Override public void actionPerformed(ActionEvent e) { startTimer(); } protected void startTimer() { startTime = System.currentTimeMillis(); String formattedStartTime = formatter.format(startTime); putValue("popupText", "Timer started: " + formattedStartTime + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")"); } @Override public void 
fileChanged(FileEvent fe) { changedTime = System.currentTimeMillis(); formatter = new SimpleDateFormat("mm:ss"); String formattedLapse = formatter.format(changedTime - startTime); putValue("popupText", "Time since last change: " + formattedLapse + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")"); startTime = changedTime; } @Override public void fileFolderCreated(FileEvent fe) {} @Override public void fileDataCreated(FileEvent fe) {} @Override public void fileDeleted(FileEvent fe) {} @Override public void fileRenamed(FileRenameEvent fre) {} @Override public void fileAttributeChanged(FileAttributeEvent fae) {} }

    Read the article

  • Instead of the specified Texture, black circles on a green background are getting rendered. Why?

    - by vinzBad
    I'm trying to render a Texture via OpenGL. But instead of the texture black circles on a green background are rendered. (They scale, depending what the rotation of the texture is) Example: The texture I'm trying to render is the following: This is the code I use to render the texture, it's located in my Sprite-class. public void Render() { Matrix4 matrix = Matrix4.CreateTranslation(-OriginX, -OriginY, 0) * Matrix4.CreateRotationZ(Rotation) * Matrix4.CreateTranslation(X, Y, 0); Vector2[] corners = { new Vector2(0,0), //top left new Vector2(Width ,0),//top right new Vector2(Width,Height),//bottom rigth new Vector2(0,Height)//bottom left }; //copy the corners to the uv coordinates Vector2[] uv = corners.ToArray<Vector2>(); //transform the coordinates for (int i = 0; i < 4; i++) corners[i] = new Vector2(Vector3.Transform(new Vector3(corners[i]), matrix)); //GL.Color3(TintColor); GL.BindTexture(TextureTarget.Texture2D, _ID); GL.Begin(BeginMode.Quads); { for (int i = 0; i < 4; i++) { GL.TexCoord2(uv[i]); GL.Vertex3(corners[i].X, corners[i].Y, _layerDepth); } } GL.End(); if (EnableDebugDraw) { GL.Color3(Color.Violet); GL.PointSize(3); GL.Begin(BeginMode.Points); { for (int i = 0; i < 4; i++) GL.Vertex2(corners[i]); } GL.End(); GL.Color3(Color.Green); GL.Begin(BeginMode.Points); GL.Vertex2(X, Y); GL.End(); } } This is how I setup OpenGL. public static void SetupGL() { GL.Enable(EnableCap.AlphaTest); GL.AlphaFunc(AlphaFunction.Greater, 0.1f); GL.Enable(EnableCap.Texture2D); GL.Hint(HintTarget.PerspectiveCorrectionHint, HintMode.Nicest); } With this function I load the texture: public static uint LoadTexture(string path) { uint id; GL.GenTextures(1, out id); GL.BindTexture(TextureTarget.Texture2D, id); Bitmap bitmap = new Bitmap(path); BitmapData data = bitmap.LockBits(new System.Drawing.Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb); GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0, OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0); bitmap.UnlockBits(data); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear); return id; } And here I call Sprite.Render() protected override void OnRenderFrame(FrameEventArgs e) { GL.ClearColor(Color.MidnightBlue); GL.Clear(ClearBufferMask.ColorBufferBit); _sprite.Render(); SwapBuffers(); base.OnRenderFrame(e); } As I stole this code from the Textures-Example from OpenTK, I don't understand why this doesn't work.
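    One detail that is easy to miss in the flattened listing: the uv array is copied straight from corners, so the texture coordinates end up in pixel units (0..Width, 0..Height) rather than the 0..1 range that fixed-function OpenGL expects, which makes the texture repeat many times across the quad. As a hedged sketch rather than a confirmed fix, the conventional mapping for drawing the image exactly once would look like this inside Render(), reusing the question's corners, _ID and _layerDepth:

    // Conventional 0..1 texture coordinates for drawing the image once
    // (corner order matches the corners array: top left, top right, bottom right, bottom left).
    Vector2[] uv =
    {
        new Vector2(0f, 0f),
        new Vector2(1f, 0f),
        new Vector2(1f, 1f),
        new Vector2(0f, 1f)
    };

    GL.BindTexture(TextureTarget.Texture2D, _ID);
    GL.Begin(BeginMode.Quads);
    for (int i = 0; i < 4; i++)
    {
        GL.TexCoord2(uv[i]);                                  // normalized texture coordinate
        GL.Vertex3(corners[i].X, corners[i].Y, _layerDepth);  // transformed screen-space corner
    }
    GL.End();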

    Read the article

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means that each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or, with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also in the CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc.). Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many: announcing the arrival or departure of a member; updating partition assignment maps across the cluster; creating or destroying a NamedCache; invalidating a cache entry from a large number of client-side near caches; distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors); and invoking clear() on a NamedCache. The first few of these are operations that are primarily routed through a single senior member, and also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc.). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation. In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical. For the non-operational concerns (near caches, queries, etc.), the application itself will determine how much load is placed on the cluster.
Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.
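    For reference, the packet budget from the first paragraph, written out (the ~70,000 packets/second figure is the post's own illustrative number):

    \frac{70\,000\ \text{packets/s}}{500\ \text{members}} = 140\ \text{cluster-wide messages/s per server},
    \qquad
    \frac{140}{10\ \text{members/machine}} = 14\ \text{per member}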

    Read the article

  • Disabling the right-click sub menu using JQuery

    - by nikolaosk
    Recently I needed to disable the right-click contextual menu in an HTML page for a very simple HTML application I was creating for a friend. This is going to be a short post where I will demonstrate how to disable the right-click contextual menu. I will use the very popular jQuery library. Please download the library (minified version) from http://jquery.com/download. Please find here all my posts regarding jQuery. In this hands-on example I will be using Expression Web 4.0. This is not a free application, so you can use any HTML editor you like. You can use Visual Studio 2012 Express edition; you can download it here. I am going to create a very simple HTML 5 page with some text and an image. The HTML markup for the page follows. <!DOCTYPE html><html lang="en"> <head> <title>HTML 5, CSS3 and JQuery</title> <meta http-equiv="Content-Type" content="text/html;charset=utf-8" > <link rel="stylesheet" type="text/css" href="style.css"> <script type="text/javascript" src="jquery-1.8.2.min.js"> </script><script type="text/javascript"> (function ($) { $(document).bind('contextmenu', function () { return false;}); })(jQuery); </script> </head> <body> <div id="header"> <h1>Learn cutting edge technologies</h1> <h2>HTML 5, JQuery, CSS3</h2> </div> <figure> <img src="html5.png" alt="HTML 5"></figure> <div id="main"> <h2>HTML 5</h2> <article> <p> HTML5 is the latest version of HTML and XHTML. The HTML standard defines a single language that can be written in HTML and XML. It attempts to solve issues found in previous iterations of HTML and addresses the needs of Web Applications, an area previously not adequately covered by HTML. </p> </article> </div> </body> </html> This is the jQuery code I use: (function ($) { $(document).bind('contextmenu', function () { return false;}); })(jQuery); I simply disable/cancel the contextmenu event. When I load the simple page in the browser and right-click, the context menu does not appear. Hope it helps!!!

    Read the article

  • Slide 2d Vector to destination over a period of time

    - by SchautDollar
    I am making a library of GUI controls for games I make with XNA. I am currently developing the library as I make a game so I can test the features and find errors/bugs and hopefully smash them right away. My current issue is on a slide feature I want to implement for my base class that all controls inherit. My goal is to get the control to slide to a specified point over a specified amount of time. Here is the #region containing the code #region Slide private bool sliding; private Vector2 endPoint; private float slideTimeLeft; private float speed; private bool wasEnabled; private Vector2 slideDirection; private float slideDistance; public void Slide(Vector2 startPoint, Vector2 endPoint, float slideTime) { this.location = startPoint; Slide(endPoint,slideTime); } public void Slide(Vector2 endPoint, float slideTime) { this.wasEnabled = this.enabled; this.enabled = false; this.sliding = true; Vector2 tempLength = endPoint - this.location; this.slideDistance = tempLength.Length(); //Was this.slideDistance = (float)Math.Sqrt(tempLength.LengthSquared()); this.speed = slideTime / this.slideDistance; this.endPoint = endPoint; this.slideTimeLeft = slideTime; } private void UpdateSlide(GameTime gameTime) { if (this.sliding) { this.slideTimeLeft -= gameTime.ElapsedGameTime.Milliseconds; if (this.slideTimeLeft >= 0 ) { if ((this.endPoint-this.location).Length() != 0){//Was if (this.endPoint.LengthSquared() > 0 || this.location.LengthSquared() > 0) { this.slideDirection = Vector2.Normalize(this.endPoint - this.location); } this.location += this.slideDirection * speed * gameTime.ElapsedGameTime.Milliseconds;//This is where I believe the issue is, but I'm not sure. It seems right to me... (Even though it doesn't work) } else { this.enabled = this.wasEnabled; this.location = this.endPoint;//After time, the controls position will get set to be the endpoint. this.sliding = false; } } } #endregion this.location is the location of the control elsewhere defined in the class. I have looked at this blog as a huge reference and have googled around quite and have looked on many forums but can't find anything that shows how to implement it. Please and Thanks for your time! EDIT: I have switched this line "this.location += this.slideDirection * speed * gameTime.ElapsedGameTime.Milliseconds;" several times to see what it does. My issue is getting the control to smoothly move to the end location. It moves after the time has expired, but It doesn't move other then that except flash in my face. EDIT2: I have used the first slide method with 3 parameters and it works except it doesn't do it in a period of time and once it gets to its destination, it starts moving randomly towards the previous location and the end location.
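    For comparison, a common way to express 'move to a point over a fixed time' in XNA is to interpolate on normalized elapsed time rather than integrating a per-frame velocity. This is only a minimal hedged sketch, not the library's actual design: slideStart, slideDuration and slideElapsed are assumed fields that would be set when Slide() is called, while sliding, enabled, wasEnabled, endPoint and location come from the post.

    // Minimal sketch of time-based interpolation; slideStart, slideDuration (ms) and
    // slideElapsed are assumed fields, initialised when the slide begins.
    private Vector2 slideStart;
    private float slideDuration;
    private float slideElapsed;

    private void UpdateSlide(GameTime gameTime)
    {
        if (!this.sliding)
            return;

        this.slideElapsed += gameTime.ElapsedGameTime.Milliseconds;
        float t = MathHelper.Clamp(this.slideElapsed / this.slideDuration, 0f, 1f);

        // Position is interpolated directly between the start and end points,
        // so it lands exactly on endPoint when the time is up.
        this.location = Vector2.Lerp(this.slideStart, this.endPoint, t);

        if (t >= 1f)
        {
            this.sliding = false;
            this.enabled = this.wasEnabled;
        }
    }

    Incidentally, the post computes speed as slideTime / slideDistance; for a velocity-based approach the units would normally be distance over time (slideDistance / slideTime), which may be part of why the control only jumps at the end.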

    Read the article

  • MVC HTML.RenderAction – Error: Duration must be a positive number

    - by BarDev
    On my website I want the user to have the ability to login/logout from any page. When the user select login button a modal dialog will be present to the user for him to enter in his credentials. Since login will be on every page, I thought I would create a partial view for the login and add it to the layout page. But when I did this I got the following error: Exception Details: System.InvalidOperationException: Duration must be a positive number. There are other ways to work around this that would not using partial views, but I believe this should work. So to test this, I decided to make everything simple with the following code: Created a layout page with the following code @{Html.RenderAction("_Login", "Account");} In the AccountController: public ActionResult _Login() { return PartialView("_Login"); } Partial View _Login <a id="signin">Login</a> But when I run this simple version this I still get this error: Exception Details: System.InvalidOperationException: Duration must be a positive number. Source of error points to "@{Html.RenderAction("_Login", "Account");}" There are some conversations on the web that are similar to my problem, which identifies this as bug with MVC (see links below). But the links pertain to Caching, and I'm not doing any caching. OuputCache Cache Profile does not work for child actions http://aspnet.codeplex.com/workitem/7923 Asp.Net MVC 3 Partial Page Output Caching Not Honoring Config Settings Asp.Net MVC 3 Partial Page Output Caching Not Honoring Config Settings Caching ChildActions using cache profiles won't work? Caching ChildActions using cache profiles won't work? I'm not sure if this makes a difference, but I'll go ahead and add it here. I'm using MVC 3 with Razor. Update Stack Trace [InvalidOperationException: Duration must be a positive number.] System.Web.Mvc.OutputCacheAttribute.ValidateChildActionConfiguration() +624394 System.Web.Mvc.OutputCacheAttribute.OnActionExecuting(ActionExecutingContext filterContext) +127 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func1 continuation) +72 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func1 continuation) +784922 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodWithFilters(ControllerContext controllerContext, IList1 filters, ActionDescriptor actionDescriptor, IDictionary2 parameters) +314 System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +784976 System.Web.Mvc.Controller.ExecuteCore() +159 System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +335 System.Web.Mvc.<c_DisplayClassb.b_5() +62 System.Web.Mvc.Async.<c_DisplayClass1.b_0() +20 System.Web.Mvc.<c_DisplayClasse.b_d() +54 System.Web.Mvc.<c_DisplayClass4.b_3() +15 System.Web.Mvc.ServerExecuteHttpHandlerWrapper.Wrap(Func`1 func) +41 System.Web.HttpServerUtility.ExecuteInternal(IHttpHandler handler, TextWriter writer, Boolean preserveForm, Boolean setPreviousPage, VirtualPath path, VirtualPath filePath, String physPath, Exception error, String queryStringOverride) +1363 [HttpException (0x80004005): Error executing child request for handler 'System.Web.Mvc.HttpHandlerUtil+ServerExecuteHttpHandlerAsyncWrapper'.] 
System.Web.HttpServerUtility.ExecuteInternal(IHttpHandler handler, TextWriter writer, Boolean preserveForm, Boolean setPreviousPage, VirtualPath path, VirtualPath filePath, String physPath, Exception error, String queryStringOverride) +2419 System.Web.HttpServerUtility.Execute(IHttpHandler handler, TextWriter writer, Boolean preserveForm, Boolean setPreviousPage) +275 System.Web.HttpServerUtilityWrapper.Execute(IHttpHandler handler, TextWriter writer, Boolean preserveForm) +94 System.Web.Mvc.Html.ChildActionExtensions.ActionHelper(HtmlHelper htmlHelper, String actionName, String controllerName, RouteValueDictionary routeValues, TextWriter textWriter) +838 System.Web.Mvc.Html.ChildActionExtensions.RenderAction(HtmlHelper htmlHelper, String actionName, String controllerName, RouteValueDictionary routeValues) +56 ASP._Page_Views_Shared_SiteLayout_cshtml.Execute() in c:\Projects\Odat Projects\Odat\Source\Presentation\Odat.PublicWebSite\Views\Shared\SiteLayout.cshtml:80 System.Web.WebPages.WebPageBase.ExecutePageHierarchy() +280 System.Web.Mvc.WebViewPage.ExecutePageHierarchy() +104 System.Web.WebPages.WebPageBase.ExecutePageHierarchy(WebPageContext pageContext, TextWriter writer, WebPageRenderingBase startPage) +173 System.Web.WebPages.WebPageBase.Write(HelperResult result) +89 System.Web.WebPages.WebPageBase.RenderSurrounding(String partialViewName, Action1 body) +234 System.Web.WebPages.WebPageBase.PopContext() +234 System.Web.Mvc.ViewResultBase.ExecuteResult(ControllerContext context) +384 System.Web.Mvc.<>c__DisplayClass1c.<InvokeActionResultWithFilters>b__19() +33 System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilter(IResultFilter filter, ResultExecutingContext preContext, Func1 continuation) +784900 System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilter(IResultFilter filter, ResultExecutingContext preContext, Func1 continuation) +784900 System.Web.Mvc.ControllerActionInvoker.InvokeActionResultWithFilters(ControllerContext controllerContext, IList1 filters, ActionResult actionResult) +265 System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +784976 System.Web.Mvc.Controller.ExecuteCore() +159 System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +335 System.Web.Mvc.<c_DisplayClassb.b_5() +62 System.Web.Mvc.Async.<c_DisplayClass1.b_0() +20 System.Web.Mvc.<c_DisplayClasse.b_d() +54 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +453 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +371 Update When I Break in Code, it errors at @{Html.RenderAction("_Login", "Account");} with the following exception. The inner exception Error executing child request for handler 'System.Web.Mvc.HttpHandlerUtil+ServerExecuteHttpHandlerAsyncWrapper'. 
at System.Web.HttpServerUtility.ExecuteInternal(IHttpHandler handler, TextWriter writer, Boolean preserveForm, Boolean setPreviousPage, VirtualPath path, VirtualPath filePath, String physPath, Exception error, String queryStringOverride) at System.Web.HttpServerUtility.Execute(IHttpHandler handler, TextWriter writer, Boolean preserveForm, Boolean setPreviousPage) at System.Web.HttpServerUtilityWrapper.Execute(IHttpHandler handler, TextWriter writer, Boolean preserveForm) at System.Web.Mvc.Html.ChildActionExtensions.ActionHelper(HtmlHelper htmlHelper, String actionName, String controllerName, RouteValueDictionary routeValues, TextWriter textWriter) at System.Web.Mvc.Html.ChildActionExtensions.RenderAction(HtmlHelper htmlHelper, String actionName, String controllerName, RouteValueDictionary routeValues) at ASP._Page_Views_Shared_SiteLayout_cshtml.Execute() in c:\Projects\Odat Projects\Odat\Source\Presentation\Odat.PublicWebSite\Views\Shared\SiteLayout.cshtml:line 80 at System.Web.WebPages.WebPageBase.ExecutePageHierarchy() at System.Web.Mvc.WebViewPage.ExecutePageHierarchy() at System.Web.WebPages.WebPageBase.ExecutePageHierarchy(WebPageContext pageContext, TextWriter writer, WebPageRenderingBase startPage) at System.Web.WebPages.WebPageBase.Write(HelperResult result) at System.Web.WebPages.WebPageBase.RenderSurrounding(String partialViewName, Action1 body) at System.Web.WebPages.WebPageBase.PopContext() at System.Web.Mvc.ViewResultBase.ExecuteResult(ControllerContext context) at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClass1c.<InvokeActionResultWithFilters>b__19() at System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilter(IResultFilter filter, ResultExecutingContext preContext, Func1 continuation) Answer Thanks Darin Dimitrov Come to find out, my AccountController had the following attribute [System.Web.Mvc.OutputCache(NoStore =true, Duration = 0, VaryByParam = "*")]. I don't believe this should caused a problem, but when I removed the attribute everything worked. BarDev
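    The resolution above matches MVC 3's restriction that an OutputCache attribute applied to a child action must specify a positive Duration. A hedged before/after sketch; the second controller name and the 10-second duration are illustrative only:

    using System.Web.Mvc;

    // Triggers "Duration must be a positive number" when _Login is rendered through
    // Html.RenderAction, because Duration = 0 is not valid for child action caching.
    [OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")]
    public class AccountController : Controller
    {
        public ActionResult _Login()
        {
            return PartialView("_Login");
        }
    }

    // Works: remove the attribute (as the poster did), or give the child action a
    // positive duration if it genuinely should be cached.
    public class FixedAccountController : Controller
    {
        [OutputCache(Duration = 10, VaryByParam = "none")]
        public ActionResult _Login()
        {
            return PartialView("_Login");
        }
    }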

    Read the article

< Previous Page | 498 499 500 501 502 503 504 505 506 507 508 509  | Next Page >