Search Results

Search found 3524 results on 141 pages for 'persistence delegate'.

Page 104 of 141

  • Does RazorEngine require MVC3 to be installed?

    - by tkha007
    I am working on a web project that uses MVC2. I decided to try out RazorEngine to do some e-mail templating. This appeared to work fine when I was prototyping using an MVC2 project so I assumed that RazorEngine would work fine for my e-mail templating solution. What I had forgotten at the time was that I actually had MVC3 installed on my local development machine. After deploying the project on a pre-test server I get the following error in the logs when the application attempts to do anything with RazorEngine: Could not load file or assembly 'System.Web.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified. File name: 'System.Web.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' at RazorEngine.Compilation.DefaultCompilerServiceFactory.CreateCompilerService(Language language) at RazorEngine.Templating.TemplateService.CreateTemplateType(String razorTemplate, Type modelType) at RazorEngine.Templating.TemplateService.CreateTemplate[T](String razorTemplate, T model) at RazorEngine.Templating.TemplateService.Parse[T](String razorTemplate, T model) at RazorEngine.Razor.Parse[T](String razorTemplate, T model) at System.Dynamic.UpdateDelegates.UpdateAndExecute3[T0,T1,T2,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2) at Persistence.Utility.RazorEngineHelper.Parse(String templateName, Object model) in The fact that it can't find 'System.Web.Razor' means that this DLL does not exist on the deployed server. The only difference I can think of between the deployment server and my local dev machine is that the deployment server does not have MVC3 installed, but I may be mistaken because the deployment server is not something I normally control and as such I don't have a lot of information about it. It is meant to host this particular application, so there have been previous deployments of this application to this server. This is the first time I'm making a deployment with RazorEngine as a dependency.
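    The exception shows the application failing to resolve System.Web.Razor, Version=1.0.0.0, the assembly that the MVC 3 / ASP.NET Web Pages 1.0 setup normally places in the GAC; an MVC2-only server will not have it. A quick way to confirm this on the pre-test server is a stand-alone check like the hypothetical sketch below (not part of the original question); if the load fails, either install MVC 3 / ASP.NET Web Pages on the server or deploy System.Web.Razor.dll with the application (reference it and set Copy Local = True).

        using System;
        using System.Reflection;

        // Hypothetical diagnostic: confirms whether System.Web.Razor 1.0 can be resolved
        // on this machine (from the GAC or from the application's bin folder).
        class RazorAssemblyCheck
        {
            static void Main()
            {
                const string razorAssembly =
                    "System.Web.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35";
                try
                {
                    Assembly assembly = Assembly.Load(razorAssembly);
                    Console.WriteLine("Loaded from: " + assembly.Location);
                    Console.WriteLine("RazorEngine should be able to find it here.");
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Could not load System.Web.Razor: " + ex.Message);
                    Console.WriteLine("Install MVC 3 / ASP.NET Web Pages on this server, or deploy");
                    Console.WriteLine("System.Web.Razor.dll to the application's bin folder (Copy Local = True).");
                }
            }
        }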

    Read the article

  • how to pass instance variables between handlers (routes) in sinatra (without flash, sessions, class variable or db)?

    - by jj_
    Say you have: get '/' do haml :index end get '/form' do haml :form end post '/form' do @message = params[:message] redirect to ('/') --- how to pass @message here? end I'd like the @message instance variable to be available (passed to) in "/" action as well, so I can show it in haml view. How can I do that without using session, flash, a @@class_variable, or db persistence? I'd simply like to pass values as if I was working with passing values between methods. I don't want to use session cookies because the user could have them turned off, I don't like it being a class variable which is exposed to all code, and I don't need the overhead of a db. Thanks edit: This is another question explaining a very easy way to deal with this in Rails: Passing parameters in rails redirect_to This is some more info I gathered from forums. The following works for Rails; I've tried it in Sinatra but no luck, but please try it, maybe I did something wrong, I don't know, and if this code helps someone come up with a new idea, please share it. If you are redirecting to action2 at the end of action1, just append the value to the end of the redirect: my_var = <some logic> redirect_to :action => 'action2', :my_var => my_var On the same thread another user proposes the following: def action1 redirect_to :action => 'action2', :value => params[:current_varaible] end def action2 puts params[:value].inspect end source: http://www.ruby-forum.com/topic/134953 Can something like this work in Sinatra? Thanks

    Read the article

  • Failed MDADM Array With Ext.4 Partition - "e2fsck: unable to set superblock flags on /dev/md0"

    - by Matthew Hodgkins
    Had a power failure and now my mdadm array is having problems. sudo mdadm -D /dev/md0 [hodge@hodge-fs ~]$ sudo mdadm -D /dev/md0 /dev/md0: Version : 0.90 Creation Time : Sun Apr 25 01:39:25 2010 Raid Level : raid5 Array Size : 8790815232 (8383.57 GiB 9001.79 GB) Used Dev Size : 1465135872 (1397.26 GiB 1500.30 GB) Raid Devices : 7 Total Devices : 7 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Sat Aug 7 19:10:28 2010 State : clean, degraded, recovering Active Devices : 6 Working Devices : 7 Failed Devices : 0 Spare Devices : 1 Layout : left-symmetric Chunk Size : 128K Rebuild Status : 10% complete UUID : 44a8f730:b9bea6ea:3a28392c:12b22235 (local to host hodge-fs) Events : 0.1307608 Number Major Minor RaidDevice State 0 8 81 0 active sync /dev/sdf1 1 8 97 1 active sync /dev/sdg1 2 8 113 2 active sync /dev/sdh1 3 8 65 3 active sync /dev/sde1 4 8 49 4 active sync /dev/sdd1 7 8 33 5 spare rebuilding /dev/sdc1 6 8 16 6 active sync /dev/sdb sudo mount -a [hodge@hodge-fs ~]$ sudo mount -a mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so sudo fsck.ext4 /dev/md0 [hodge@hodge-fs ~]$ sudo fsck.ext4 /dev/md0 e2fsck 1.41.12 (17-May-2010) fsck.ext4: Group descriptors look bad... trying backup blocks... /dev/md0: recovering journal fsck.ext4: unable to set superblock flags on /dev/md0 sudo dumpe2fs /dev/md0 | grep -i superblock [hodge@hodge-fs ~]$ sudo dumpe2fs /dev/md0 | grep -i superblock dumpe2fs 1.41.12 (17-May-2010) Primary superblock at 0, Group descriptors at 1-524 Backup superblock at 32768, Group descriptors at 32769-33292 Backup superblock at 98304, Group descriptors at 98305-98828 Backup superblock at 163840, Group descriptors at 163841-164364 Backup superblock at 229376, Group descriptors at 229377-229900 Backup superblock at 294912, Group descriptors at 294913-295436 Backup superblock at 819200, Group descriptors at 819201-819724 Backup superblock at 884736, Group descriptors at 884737-885260 Backup superblock at 1605632, Group descriptors at 1605633-1606156 Backup superblock at 2654208, Group descriptors at 2654209-2654732 Backup superblock at 4096000, Group descriptors at 4096001-4096524 Backup superblock at 7962624, Group descriptors at 7962625-7963148 Backup superblock at 11239424, Group descriptors at 11239425-11239948 Backup superblock at 20480000, Group descriptors at 20480001-20480524 Backup superblock at 23887872, Group descriptors at 23887873-23888396 Backup superblock at 71663616, Group descriptors at 71663617-71664140 Backup superblock at 78675968, Group descriptors at 78675969-78676492 Backup superblock at 102400000, Group descriptors at 102400001-102400524 Backup superblock at 214990848, Group descriptors at 214990849-214991372 Backup superblock at 512000000, Group descriptors at 512000001-512000524 Backup superblock at 550731776, Group descriptors at 550731777-550732300 Backup superblock at 644972544, Group descriptors at 644972545-644973068 Backup superblock at 1934917632, Group descriptors at 1934917633-1934918156 sudo e2fsck -b 32768 /dev/md0 [hodge@hodge-fs ~]$ sudo e2fsck -b 32768 /dev/md0 e2fsck 1.41.12 (17-May-2010) /dev/md0: recovering journal e2fsck: unable to set superblock flags on /dev/md0 sudo dmesg | tail [hodge@hodge-fs ~]$ sudo dmesg | tail EXT4-fs (md0): ext4_check_descriptors: Checksum for group 0 failed (59837!=29115) EXT4-fs (md0): group descriptors corrupted! 
EXT4-fs (md0): ext4_check_descriptors: Checksum for group 0 failed (59837!=29115) EXT4-fs (md0): group descriptors corrupted! Please Help!!!

    Read the article

  • Linux Software RAID recovery

    - by Zoredache
    I am seeing a discrepancy between the output of mdadm --detail and mdadm --examine, and I don't understand why. This output mdadm --detail /dev/md2 /dev/md2: Version : 0.90 Creation Time : Wed Mar 14 18:20:52 2012 Raid Level : raid10 Array Size : 3662760640 (3493.08 GiB 3750.67 GB) Used Dev Size : 1465104256 (1397.23 GiB 1500.27 GB) Raid Devices : 5 Total Devices : 5 Preferred Minor : 2 Persistence : Superblock is persistent Seems to contradict this. (the same for every disk in the array) mdadm --examine /dev/sdc2 /dev/sdc2: Magic : a92b4efc Version : 0.90.00 UUID : 1f54d708:60227dd6:163c2a05:89fa2e07 (local to host) Creation Time : Wed Mar 14 18:20:52 2012 Raid Level : raid10 Used Dev Size : 1465104320 (1397.23 GiB 1500.27 GB) Array Size : 2930208640 (2794.46 GiB 3000.53 GB) Raid Devices : 5 Total Devices : 5 Preferred Minor : 2 The array was created like this. mdadm -v --create /dev/md2 \ --level=raid10 --layout=o2 --raid-devices=5 \ --chunk=64 --metadata=0.90 \ /dev/sdg2 /dev/sdf2 /dev/sde2 /dev/sdd2 /dev/sdc2 Each of the 5 individual drives has partitions like this. Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00057754 Device Boot Start End Blocks Id System /dev/sdc1 2048 34815 16384 83 Linux /dev/sdc2 34816 2930243583 1465104384 fd Linux raid autodetect Backstory So the SATA controller failed in a box I provide some support for. The failure was an ugly one and so individual drives fell out of the array a little at a time. While there are backups, they are not really done as frequently as we really need. There is some data that I am trying to recover if I can. I got additional hardware and I was able to access the drives again. The drives appear to be fine, and I can get the array and filesystem active and mounted (using read-only mode). I am able to access some data on the filesystem and have been copying that off, but I am seeing lots of errors when I try to copy the most recent data. When I am trying to access that most recent data I am getting errors like below which makes me think that the array size discrepancy may be the problem. Mar 14 18:26:04 server kernel: [351588.196299] dm-7: rw=0, want=6619839616, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.196309] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.196313] dm-7: rw=0, want=6619839616, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.199260] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.199264] dm-7: rw=0, want=20647626304, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.202446] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.202450] dm-7: rw=0, want=19973212288, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.205516] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.205520] dm-7: rw=0, want=8009695096, limit=6442450944

    Read the article

  • Simple mdadm RAID 1 not activating spare

    - by Nick Liu
    I had created two 2TB HDD partitions (/dev/sdb1 and /dev/sdc1) in a RAID 1 array called /dev/md0 using mdadm on Ubuntu 12.04 LTS Precise Pangolin. The command sudo mdadm --detail /dev/md0 used to indicate both drives as active sync. Then, for testing, I failed /dev/sdb1, removed it, then added it again with the command sudo mdadm /dev/md0 --add /dev/sdb1 watch cat /proc/mdstat showed a progress bar of the array rebuilding, but I wouldn't spend hours watching it, so I assumed that the software knew what it was doing. After the progress bar was no longer showing, cat /proc/mdstat displays: md0 : active raid1 sdb1[2](S) sdc1[1] 1953511288 blocks super 1.2 [2/1] [U_] And sudo mdadm --detail /dev/md0 shows: /dev/md0: Version : 1.2 Creation Time : Sun May 27 11:26:05 2012 Raid Level : raid1 Array Size : 1953511288 (1863.01 GiB 2000.40 GB) Used Dev Size : 1953511288 (1863.01 GiB 2000.40 GB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Mon May 28 11:16:49 2012 State : clean, degraded Active Devices : 1 Working Devices : 2 Failed Devices : 0 Spare Devices : 1 Name : Deltique:0 (local to host Deltique) UUID : 49733c26:dd5f67b5:13741fb7:c568bd04 Events : 32365 Number Major Minor RaidDevice State 1 8 33 0 active sync /dev/sdc1 1 0 0 1 removed 2 8 17 - spare /dev/sdb1 I've been told that mdadm automatically replaces removed drives with spares, but /dev/sdb1 isn't being moved into the expected position, RaidDevice 1. UPDATE (30 May 2012): A badblocks destructive read-write test of the entire /dev/sdb yielded no errors as expected; both HDDs are new. As of the latest edit, I assembled the array with this command: sudo mdadm --assemble --force --no-degraded /dev/md0 /dev/sdb1 /dev/sdc1 The output was: mdadm: /dev/md0 has been started with 1 drive (out of 2) and 1 rebuilding. Rebuilding looks like it's progressing normally: md0 : active raid1 sdc1[1] sdb1[2] 1953511288 blocks super 1.2 [2/1] [U_] [>....................] recovery = 0.6% (13261504/1953511288) finish=2299.7min speed=14060K/sec unused devices: <none> I'm now waiting on this rebuild, but I'm expecting /dev/sdb1 to become a spare just like the five or six times that I've tried rebuilding before. UPDATE (31 May 2012): Yeah, it's still a spare. Ugh! UPDATE (01 June 2012): I'm trying Adrian Kelly's suggested command: sudo mdadm --assemble --update=resync /dev/md0 /dev/sdb1 /dev/sdc1 Waiting on the rebuild now... My questions are: Why isn't the spare drive becoming active sync? How can I make the spare drive become active?

    Read the article

  • Why do many applications close after opening a document or doing specific actions?

    - by Mohsen Farjami
    I have some encrypted pdf files that have no problem; on my last Windows installation I could open them easily with Adobe Reader 9.2 and other pdf readers. But now, I can only open non-encrypted pdf files and one encrypted file with Adobe Reader. Every time I open almost any encrypted pdf, it closes itself. Also, when I tried to search a folder for a keyword with Foxit Reader, it closed once. This is not related to Adobe Reader, because I have the same problem with Word 2007. When I open a document, sometimes it closes instantly, sometimes it closes after a few seconds and sometimes it is stable. My Windows installation is fresh; I installed it a few days ago. I have ESET Smart Security 5.2 and I have updated it today. OS: XP Pro SP3, RAM: 3 GB, CPU: 2 GHZ, HDD: 320 GB My installed applications: Adobe AIR Adobe Flash Player 11 ActiveX Adobe Flash Player 11 Plugin Adobe Photoshop CS4 Adobe Reader 9.2 Atheros Wireless LAN Client Adapter Babylon Bluetooth Stack for Windows by Toshiba CCleaner Conexant HD Audio Dell Touchpad ESET Smart Security Farsi (101) Custom Foxit Reader Framing Studio 3.27 Google Chrome Hard Disk Sentinel PRO HDAUDIO Soft Data Fax Modem with SmartCP Intel(R) Graphics Media Accelerator Driver IrfanView (remove only) Java(TM) 6 Update 18 K-Lite Mega Codec Pack 8.8.0 Microsoft .NET Framework 2.0 Service Pack 1 Microsoft .NET Framework 3.0 Service Pack 1 Microsoft .NET Framework 3.5 Microsoft Data Access Components KB870669 Microsoft Office 2007 Primary Interop Assemblies Microsoft Office Enterprise 2007 Microsoft User-Mode Driver Framework Feature Pack 1.0.0 (Pre-Release 5348) Mozilla Firefox 7.0.1 (x86 en-US) Notepad++ Office Tab FreeEdition 8.50 ParsQuran PerfectDisk 12 Professional Registry First Aid RICOH R5C83x/84x Flash Media Controller Driver Ver.3.54.06 Sahar Money Manager 2.5 Stickies 7.1d The KMPlayer (remove only) TurboLaunch 5.1.2 Unlocker 1.9.1 USB Safely Remove 4.2 Virastyar Visual Studio 2005 Tools for Office Second Edition Runtime Winamp Windows Internet Explorer 8 Windows Media Player 11.0.5358.4826 Windows XP Service Pack 3 WinRAR 4.11 (32-bit) WorkPause 1.2 Z Dictionary My startup applications: WorkPause USB Safely Remove TurboLaunch SunJavaUpdateSched Stickies rfagent Persistence ParsQuran Daily Verse ITSecMng IgfxTray HotKeysCmds Hard Disk Sentinel egui disable shift+delete CTFMON.EXE Bluetooth Manager Babylon Client Apoint AdobeCS4ServiceManager Adobe Reader Speed Launcher Adobe ARM What should I do to solve it? If you recommend installing Windows again, what guarantees that it won't happen again?

    Read the article

  • Hot-swap drive got new name, can I change it on-the-fly?

    - by T.J. Crowder
    One of the HDDs in my server's RAID config failed, so I took it out of the array and had the data center hot-swap it. They've done that, but now the new drive is /dev/sdc rather than /dev/sda. I suspect — correct me if I'm wrong — that if I reboot the server, it will be /dev/sda again, so I'm hesitant to add it back to the array as /dev/sdc because I don't want to lay a trap for myself to fall into on the next reboot. I'd just as soon not reboot the server if I don't need to (if I do need to, well, too bad for me). Is there a way I can change the device name from /dev/sdc to /dev/sda without rebooting? This is on Ubuntu 10.04 LTS. It's an md array ("Linux Software RAID"), where currently one of the devices (there are a couple of them) looks like this ("degraded" because I've removed the old /dev/sda from it): # mdadm --detail /dev/md0 /dev/md0: Version : 00.90.03 Creation Time : Sun Oct 11 21:07:54 2009 Raid Level : raid1 Array Size : 97536 (95.27 MiB 99.88 MB) Used Dev Size : 97536 (95.27 MiB 99.88 MB) Raid Devices : 2 Total Devices : 1 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Thu Jun 30 09:31:16 2011 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 0 Spare Devices : 0 UUID : 496be7a5:ab9177ed:7792c71e:7dc17aa4 Events : 0.112 Number Major Minor RaidDevice State 0 8 17 0 active sync /dev/sdb1 1 0 0 1 removed Thanks, Update: Reading through the kernel md documentation, I suspect that if the name changes on reboot, it won't matter. (Good design, that.) Here's why: Boot time autodetection of RAID arrays When md is compiled into the kernel (not as module), partitions of type 0xfd are scanned and automatically assembled into RAID arrays. This autodetection may be suppressed with the kernel parameter "raid=noautodetect". As of kernel 2.6.9, only drives with a type 0 superblock can be autodetected and run at boot time. The kernel parameter "raid=partitionable" (or "raid=part") means that all auto-detected arrays are assembled as partitionable. I do have md compiled into the kernel, so I'm rebuilding the array now and will do the reboot to see what happens. Even if it works, the above doesn't answer the question I actually asked, so unless someone comes along and answers that question in the meantime (I'd be interested, even if it's not necessary for what I'm doing this very moment), I'll just delete the question to keep noise down.

    Read the article

  • Cannot install grub to RAID1 (md0)

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS and my /sda HDD was replaced several days ago. I used these commands to replace it: # go to superuser sudo bash # see RAID state mdadm -Q -D /dev/md0 # State should be "clean, degraded" # remove broken disk from RAID mdadm /dev/md0 --fail /dev/sda1 mdadm /dev/md0 --remove /dev/sda1 # see partitions fdisk -l # shutdown computer shutdown now # physically replace old disk by new # start system again # see partitions fdisk -l # copy partitions from sdb to sda sfdisk -d /dev/sdb | sfdisk /dev/sda # recreate id for sda sfdisk --change-id /dev/sda 1 fd # add sda1 to RAID mdadm /dev/md0 --add /dev/sda1 # see RAID state mdadm -Q -D /dev/md0 # State should be "clean, degraded, recovering" # to see status you can use cat /proc/mdstat This is my mdadm output after the sync: /dev/md0: Version : 0.90 Creation Time : Wed Feb 17 16:18:25 2010 Raid Level : raid1 Array Size : 470455360 (448.66 GiB 481.75 GB) Used Dev Size : 470455360 (448.66 GiB 481.75 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Thu Nov 1 15:19:31 2012 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 92e6ff4e:ed3ab4bf:fee5eb6c:d9b9cb11 Events : 0.11049560 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 8 17 1 active sync /dev/sdb1 After the rebuild completed, "fdisk -l" says that I have no valid partition table on /dev/md0. This is my fdisk -l output: Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00057d19 Device Boot Start End Blocks Id System /dev/sda1 * 63 940910984 470455461 fd Linux raid autodetect /dev/sda2 940910985 976768064 17928540 5 Extended /dev/sda5 940911048 976768064 17928508+ 82 Linux swap / Solaris Disk /dev/sdb: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000667ca Device Boot Start End Blocks Id System /dev/sdb1 * 63 940910984 470455461 fd Linux raid autodetect /dev/sdb2 940910985 976768064 17928540 5 Extended /dev/sdb5 940911048 976768064 17928508+ 82 Linux swap / Solaris Disk /dev/md0: 481.7 GB, 481746288640 bytes 2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table This is my grub install output: root@answe:~# grub-install /dev/sda /usr/sbin/grub-setup: warn: Attempting to install GRUB to a disk with multiple partition labels or both partition label and filesystem. This is not supported yet.. /usr/sbin/grub-setup: error: embedding is not possible, but this is required for cross-disk install. root@answe:~# grub-install /dev/sdb Installation finished. No error reported. So 1) "update-grub" finds only /sda and /sdb Linux, not /md0 2) "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices /dev/md0" I cannot load my system except from /sdb1 and /sda1, but in DEGRADED mode... Can anybody resolve this issue? I have a big headache with this.

    Read the article

  • Add Machine Key to machine.config in a Load Balancing environment for multiple versions of the .NET Framework

    - by davidb
    I have two web servers behind an F5 load balancer. Each web server has identical applications to the other. There was no issue until the config of the load balancer changed from source address persistence to least connections. Now in some applications I receive this error: Server Error in '/' Application. Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Web.HttpException: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster. Source Error: The source code that generated this unhandled exception can only be shown when compiled in debug mode. To enable this, please follow one of the below steps, then request the URL: Add a "Debug=true" directive at the top of the file that generated the error. Example: or: 2) Add the following section to the configuration file of your application: Note that this second technique will cause all files within a given application to be compiled in debug mode. The first technique will cause only that particular file to be compiled in debug mode. Important: Running applications in debug mode does incur a memory/performance overhead. You should make sure that an application has debugging disabled before deploying into production scenario. How do I add a machine key to the machine.config file? Do I do it at server level in IIS or at website/application level for each site? Do the validation and decryption keys have to be the same across both web servers or are they different? Should they be different for each machine.config version of .NET? I cannot find any documentation for this scenario.
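    As background for the question: once the load balancer stops pinning clients to one server, every server in the farm must use the same explicit <machineKey> (AutoGenerate cannot be used), placed either in each application's web.config or in machine.config for every .NET version hosting the affected sites. The console program below is a hypothetical helper, not from the original post, that prints a <machineKey> element you could paste identically on both web servers; the key sizes and the SHA1/AES choices are common defaults rather than requirements.

        using System;
        using System.Security.Cryptography;

        // Hypothetical helper: generates a fixed <machineKey> element to copy onto every
        // server in the farm so viewstate MAC validation succeeds regardless of which
        // server handles the postback.
        class MachineKeyGenerator
        {
            static string RandomHex(int byteLength)
            {
                var bytes = new byte[byteLength];
                var rng = new RNGCryptoServiceProvider();
                rng.GetBytes(bytes);
                return BitConverter.ToString(bytes).Replace("-", string.Empty);
            }

            static void Main()
            {
                // 64-byte validation key and 32-byte decryption key are commonly used sizes;
                // SHA1/AES are supported by both .NET 2.0 SP1+ and .NET 4.x.
                Console.WriteLine("<machineKey");
                Console.WriteLine("    validationKey=\"" + RandomHex(64) + "\"");
                Console.WriteLine("    decryptionKey=\"" + RandomHex(32) + "\"");
                Console.WriteLine("    validation=\"SHA1\" decryption=\"AES\" />");
            }
        }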

    Read the article

  • mdadm raid1 fails to resync

    - by JuanD
    Hello, I'm trying to solve this problem I'm having with an mdadm raid1. I have an ubuntu 9.04 server running on a software 2-drive raid1 with mdadm. Yesterday, one of the drives failed, and so I replaced it with a brand new drive of the same size. I removed the faulty drive, copied the partition from the remaining good drive to the new drive and then added it to the raid. It re-synced and the system worked fine, until the drive that hadn't failed, was also labeled failed. Now I had the raid running solely on the new drive. So I purchased another drive and repeated the procedure above. So now I had 2 brand new drives and the raid was syncing. However, after a few minutes I checked /proc/mdstat and the raid was no longer syncing. mdadm --detail /dev/md1 shows: (sdb is the first new drive, and sdc is the second new drive) root@dola:/home/jjaramillo# mdadm --detail /dev/md1 /dev/md1: Version : 00.90 Creation Time : Sat Dec 20 00:42:05 2008 Raid Level : raid1 Array Size : 974711680 (929.56 GiB 998.10 GB) Used Dev Size : 974711680 (929.56 GiB 998.10 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Wed Jun 2 10:09:35 2010 State : clean, degraded Active Devices : 1 Working Devices : 2 Failed Devices : 0 Spare Devices : 1 UUID : bba497c6:5029ba0b:bfa4f887:c0dc8f3d Events : 0.5395594 Number Major Minor RaidDevice State 2 8 35 0 spare rebuilding /dev/sdc3 1 8 19 1 active sync /dev/sdb3 I've tried removing and re-adding the drive a few times, but the same thing happens. The raid fails to resync. I've looked at /var/log/messages, and found the following: Jun 2 07:57:36 dola kernel: [35708.917337] sd 5:0:0:0: [sdb] Unhandled sense code Jun 2 07:57:36 dola kernel: [35708.917339] sd 5:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jun 2 07:57:36 dola kernel: [35708.917342] sd 5:0:0:0: [sdb] Sense Key : Medium Error [current] [descriptor] Jun 2 07:57:36 dola kernel: [35708.917346] Descriptor sense data with sense descriptors (in hex): Jun 2 07:57:36 dola kernel: [35708.917348] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 Jun 2 07:57:36 dola kernel: [35708.917357] 00 43 9e 47 Jun 2 07:57:36 dola kernel: [35708.917360] sd 5:0:0:0: [sdb] Add. Sense: Unrecovered read error - auto reallocate failed So it looks like there's some kind of error on sdb (the first new drive). My question is, what would be the best approach to get the raid up and running again? I've thought about dd'ing the /dev/md1 to a blank hard drive, then re-doing the raid from scratch and loading the data back, but there could be an easier solution.. Any help would be appreciated.

    Read the article

  • JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g

    - by John-Brown.Evans
    This example shows the steps to create a simple JMS queue in WebLogic Server 11g for testing purposes. For example, to use with the two sample programs QueueSend.java and QueueReceive.java which will be shown in later examples. Additional, detailed information on JMS can be found in the following Oracle documentation: Oracle® Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server 11g Release 1 (10.3.6) Part Number E13738-06 http://docs.oracle.com/cd/E23943_01/web.1111/e13738/toc.htm 1. Introduction and Definitions A JMS queue in Weblogic Server is associated with a number of additional resources: JMS Server A JMS server acts as a management container for resources within JMS modules. Some of its responsibilities include the maintenance of persistence and state of messages and subscribers. A JMS server is required in order to create a JMS module. JMS Module A JMS module is a definition which contains JMS resources such as queues and topics. A JMS module is required in order to create a JMS queue. Subdeployment JMS modules are targeted to one or more WLS instances or a cluster. Resources within a JMS module, such as queues and topics are also targeted to a JMS server or WLS server instances. A subdeployment is a grouping of targets. It is also known as advanced targeting. Connection Factory A connection factory is a resource that enables JMS clients to create connections to JMS destinations. JMS Queue A JMS queue (as opposed to a JMS topic) is a point-to-point destination type. A message is written to a specific queue or received from a specific queue.
    The objects used in this example are:

    Object Name            Type                 JNDI Name
    TestJMSServer          JMS Server
    TestJMSModule          JMS Module
    TestSubDeployment      Subdeployment
    TestConnectionFactory  Connection Factory   jms/TestConnectionFactory
    TestJMSQueue           JMS Queue            jms/TestJMSQueue

    2. Configuration Steps
    The following steps are done in the WebLogic Server Console, beginning with the left-hand navigation menu.

    2.1 Create a JMS Server
    Services > Messaging > JMS Servers, select New. Name: TestJMSServer, Persistent Store: (none), Target: soa_server1 (or choose an available server), Finish. The JMS server should now be visible in the list with Health OK.

    2.2 Create a JMS Module
    Services > Messaging > JMS Modules, select New. Name: TestJMSModule, leave the other options empty. Targets: soa_server1 (or choose the same one as the JMS server), press Next. Leave "Would you like to add resources to this JMS system module" unchecked and press Finish.

    2.3 Create a SubDeployment
    A subdeployment is not necessary for the JMS queue to work, but it allows you to easily target subcomponents of the JMS module to a single target or group of targets. We will use the subdeployment in this example to target the following connection factory and JMS queue to the JMS server we created earlier. Services > Messaging > JMS Modules, select TestJMSModule, select the Subdeployments tab and New Subdeployment. Name: TestSubdeployment, press Next. Here you can select the target(s) for the subdeployment. You can choose either Servers (i.e. WebLogic managed servers, such as soa_server1) or JMS Servers such as the JMS Server created earlier. As the purpose of our subdeployment in this example is to target a specific JMS server, we will choose the JMS Server option. Select the TestJMSServer created earlier and press Finish.

    2.4 Create a Connection Factory
    Services > Messaging > JMS Modules, select TestJMSModule and press New. Select Connection Factory and Next. Name: TestConnectionFactory, JNDI Name: jms/TestConnectionFactory, leave the other values at default. On the Targets page, select the Advanced Targeting button and select TestSubdeployment, then press Finish. The connection factory should be listed on the following page with TestSubdeployment and TestJMSServer as the target.

    2.5 Create a JMS Queue
    Services > Messaging > JMS Modules, select TestJMSModule and press New. Select Queue and Next. Name: TestJMSQueue, JNDI Name: jms/TestJMSQueue, Template: None, press Next. Subdeployments: TestSubdeployment, Finish. The TestJMSQueue should be listed on the following page with TestSubdeployment and TestJMSServer. Confirm the resources for the TestJMSModule: using the Domain Structure tree, navigate to soa_domain > Services > Messaging > JMS Modules, then select TestJMSModule and you should see the resources listed. The JMS queue is now complete and can be accessed using the JNDI names jms/TestConnectionFactory and jms/TestJMSQueue. In the following blog post in this series, I will show you how to write a message to this queue, using the WebLogic sample Java program QueueSend.java.

    Read the article

  • New MySQL Cluster 7.3 Previews: Foreign Keys, NoSQL Node.js API and Auto-Tuned Clusters

    - by Mat Keep
    At this week's MySQL Connect conference, Oracle previewed an exciting new wave of developments for MySQL Cluster, further extending its simplicity and flexibility by expanding the range of use-cases, adding new NoSQL options, and automating configuration. What’s new: Development Release 1: MySQL Cluster 7.3 with Foreign Keys Early Access “Labs” Preview: MySQL Cluster NoSQL API for Node.js Early Access “Labs” Preview: MySQL Cluster GUI-Based Auto-Installer In this blog, I'll introduce you to the features being previewed. Review the blogs listed below for more detail on each of the specific features discussed. Save the date!: A live webinar is scheduled for Thursday 25th October at 0900 Pacific Time / 1600UTC where we will discuss each of these enhancements in more detail. Registration will be open soon and published to the MySQL webinars page. MySQL Cluster 7.3: Development Release 1 The first MySQL Cluster 7.3 Development Milestone Release (DMR) previews Foreign Keys, bringing powerful new functionality to MySQL Cluster while eliminating development complexity. Foreign Key support has been one of the most requested enhancements to MySQL Cluster – enabling users to simplify their data models and application logic – while extending the range of use-cases for both custom projects requiring referential integrity and packaged applications, such as eCommerce, CRM, CMS, etc. Implementation The Foreign Key functionality is implemented directly within the MySQL Cluster data nodes, allowing any client API accessing the cluster to benefit from them – whether they are SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA, HTTP/REST or the new Node.js API - discussed later.) The core referential actions defined in the SQL:2003 standard are implemented: CASCADE RESTRICT NO ACTION SET NULL In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the Cluster continues to serve both read and write requests during the operation.  This represents a further enhancement to MySQL Cluster's support for on-line schema changes, i.e. adding and dropping indexes, adding columns, etc.  Read this blog for a demonstration of using Foreign Keys with MySQL Cluster.  Getting Started with MySQL Cluster 7.3 DMR1: Users can download either the source or binary and evaluate the MySQL Cluster 7.3 DMR with Foreign Keys now! (Select the Development Release tab). MySQL Cluster NoSQL API for Node.js Node.js is hot! In a little over 3 years, it has become one of the most popular environments for developing next generation web, cloud, mobile and social applications. Bringing JavaScript from the browser to the server, the design goal of Node.js is to build new real-time applications supporting millions of client connections, serviced by a single CPU core. Making it simple to further extend the flexibility and power of Node.js to the database layer, we are previewing the Node.js Javascript API for MySQL Cluster as an Early Access release, available for download now from http://labs.mysql.com/. Select the following build: MySQL-Cluster-NoSQL-Connector-for-Node-js Alternatively, you can clone the project at the MySQL GitHub page.  Implemented as a module for the V8 engine, the new API provides Node.js with a native, asynchronous JavaScript interface that can be used to both query and receive results sets directly from MySQL Cluster, without transformations to SQL.
Figure 1: MySQL Cluster NoSQL API for Node.js enables end-to-end JavaScript development Rather than just presenting a simple interface to the database, the Node.js module integrates the MySQL Cluster native API library directly within the web application itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed, persistence layer delivering 99.999% availability. The new Node.js API joins a rich array of NoSQL interfaces available for MySQL Cluster. Whichever API is chosen for an application, SQL and NoSQL can be used concurrently across the same data set, providing the ultimate in developer flexibility.  Get started with MySQL Cluster NoSQL API for Node.js tutorial MySQL Cluster GUI-Based Auto-Installer Compatible with both MySQL Cluster 7.2 and 7.3, the Auto-Installer makes it simple for DevOps teams to quickly configure and provision highly optimized MySQL Cluster deployments – whether on-premise or in the cloud. Implemented with a standard HTML GUI and Python-based web server back-end, the Auto-Installer intelligently configures MySQL Cluster based on application requirements and auto-discovered hardware resources Figure 2: Automated Tuning and Configuration of MySQL Cluster Developed by the same engineering team responsible for the MySQL Cluster database, the installer provides standardized configurations that make it simple, quick and easy to build stable and high performance clustered environments. The auto-installer is previewed as an Early Access release, available for download now from http://labs.mysql.com/, by selecting the MySQL-Cluster-Auto-Installer build. You can read more about getting started with the MySQL Cluster auto-installer here. Watch the YouTube video for a demonstration of using the MySQL Cluster auto-installer Getting Started with MySQL Cluster If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a singe host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3 and the Early Access previews). Or use the new MySQL Cluster Auto-Installer! Download the Guide to Scaling Web Databases with MySQL Cluster (to learn more about its architecture, design and ideal use-cases). Post any questions to the MySQL Cluster forum where our Engineering team and the MySQL Cluster community will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu) And if you have any feedback, please post them to the Comments section here or in the blogs referenced in this article. Summary MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. The first Development Release of MySQL Cluster 7.3 and the Early Access previews give you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and other features that you want to see in future releases, by using the comments of this blog.

    Read the article

  • Diagnose PC Hardware Problems with an Ubuntu Live CD

    - by Trevor Bekolay
    So your PC randomly shuts down or gives you the blue screen of death, but you can’t figure out what’s wrong. The problem could be bad memory or hardware related, and thankfully the Ubuntu Live CD has some tools to help you figure it out. Test your RAM with memtest86+ RAM problems are difficult to diagnose—they can range from annoying program crashes to crippling reboot loops. Even if you’re not having problems, when you install new RAM it’s a good idea to thoroughly test it. The Ubuntu Live CD includes a tool called Memtest86+ that will do just that—test your computer’s RAM! Unlike many of the Live CD tools that we’ve looked at so far, Memtest86+ has to be run outside of a graphical Ubuntu session. Fortunately, it only takes a few keystrokes. Note: If you used UNetbootin to create an Ubuntu flash drive, then memtest86+ will not be available. We recommend using the Universal USB Installer from Pendrivelinux instead (persistence is possible with Universal USB Installer, but not mandatory). Boot up your computer with a Ubuntu Live CD or USB drive. You will be greeted with this screen: Use the down arrow key to select the Test memory option and hit Enter. Memtest86+ will immediately start testing your RAM. If you suspect that a certain part of memory is the problem, you can select certain portions of memory by pressing “c” and changing that option. You can also select specific tests to run. However, the default settings of Memtest86+ will exhaustively test your memory, so we recommend leaving the settings alone. Memtest86+ will run a variety of tests that can take some time to complete, so start it running before you go to bed to give it adequate time. Test your CPU with cpuburn Random shutdowns – especially when doing computationally intensive tasks – can be a sign of a faulty CPU, power supply, or cooling system. A utility called cpuburn can help you determine if one of these pieces of hardware is the problem. Note: cpuburn is designed to stress test your computer – it will run it fast and cause the CPU to heat up, which may exacerbate small problems that otherwise would be minor. It is a powerful diagnostic tool, but should be used with caution. Boot up your computer with a Ubuntu Live CD or USB drive, and choose to run Ubuntu from the CD or USB drive. When the desktop environment loads up, open the Synaptic Package Manager by clicking on the System menu in the top-left of the screen, then selecting Administration, and then Synaptic Package Manager. Cpuburn is in the universe repository. To enable the universe repository, click on Settings in the menu at the top, and then Repositories. Add a checkmark in the box labeled “Community-maintained Open Source software (universe)”. Click close. In the main Synaptic window, click the Reload button. After the package list has reloaded and the search index has been rebuilt, enter “cpuburn” in the Quick search text box. Click the checkbox in the left column, and select Mark for Installation. Click the Apply button near the top of the window. As cpuburn installs, it will caution you about the possible dangers of its use. Assuming you wish to take the risk (and if your computer is randomly restarting constantly, it’s probably worth it), open a terminal window by clicking on the Applications menu in the top-left of the screen and then selecting Applications > Terminal. Cpuburn includes a number of tools to test different types of CPUs. 
    If your CPU is more than six years old, see the full list; for modern AMD CPUs, use the terminal command burnK7 and for modern Intel processors, use the terminal command burnP6 Our processor is an Intel, so we ran burnP6. Once it started up, it immediately pushed the CPU up to 99.7% total usage, according to the Linux utility “top”. If your computer is having a CPU, power supply, or cooling problem, then your computer is likely to shutdown within ten or fifteen minutes. Because of the strain this program puts on your computer, we don’t recommend leaving it running overnight – if there’s a problem, it should crop up relatively quickly. Cpuburn’s tools, including burnP6, have no interface; once they start running, they will start driving your CPU until you stop them. To stop a program like burnP6, press Ctrl+C in the terminal window that is running the program. Conclusion The Ubuntu Live CD provides two great testing tools to diagnose a tricky computer problem, or to stress test a new computer. While they are advanced tools that should be used with caution, they’re extremely useful and easy enough that anyone can use them.

    Read the article

  • Knockoutjs - stringify to handling observables and custom events

    - by Renso
    Goal: Once your viewmodel has been built and populated with data, at some point the goal of it all is to persist the data to the database (or some other media). Regardless of where you want to save it, your client-side viewmodel needs to be converted to a JSON string and sent back to the server. Environment considerations: jQuery 1.4.3+ Knockoutjs version 1.1.2   How to: So let’s set the stage, you are using Knockoutjs and you have a viewmodel with some Knockout dependencies. You want to make sure it is in the proper JSON format and via ajax post it to the server for persistence.   First order of business is to deal with the viewmodel (JSON) object. To most, the JSON stringifier sounds familiar. The JSON stringifier converts JavaScript data structures into JSON text. JSON does not support cyclic data structures, so be careful to not give cyclical structures to the JSON stringifier. You may ask, is this the best way to do it? What about those observables and other Knockout properties that I don’t want to persist or want their actual value persisted and not their function, etc. Not sure if you were aware, but KO already has a method; ko.utils.stringifyJson() - it's mostly just a wrapper around JSON.stringify. (which is native in some browsers, and can be made available by referencing json2.js in others). What it does that the regular stringify does not is automatically convert an observable, dependentObservable, or observableArray to its underlying value in the JSON. Hold on! There is a new feature in this version of Knockout, the ko.toJSON. It is part of the core library and it will clone the view model’s object graph, so you don’t mess it up after you have stringified it, and unwrap all its observables. It's smart enough to avoid reference cycles. Since you are using the MVVM pattern it would assume you are not trying to reference DOM nodes from your view. Wait a minute. I can already see this info on the http://knockoutjs.com/examples/contactsEditor.html website, why mention it all here? First off, this is a much nicer blog, no orange! At this time, you may want to have a look at the blog and see what I am talking about. See the save event, how they stringify the view model’s contacts only? That’s cool, but what if your view model is a representation of the object you want to persist, meaning it has no property that represents the json object you want to persist; it is the view model itself. The example in http://knockoutjs.com/examples/contactsEditor.html assumes you have a list of contacts you may want to persist. In the example here, you want to persist the view model itself. 
The viewmodel here looks something like this:     var myViewmodel = {         accountName: ko.observable(""),         accountType: ko.observable("Active")     };     myViewmodel.isItActive = ko.dependentObservable(function () {         return myViewmodel.accountType() == "Active";     });     myViewmodel.clickToSaveMe = function() {         SaveTheAccount();     }; Here is the function in charge of saving the account: Function SaveTheAccount() {     $.ajax({         data: ko.toJSON(viewmodel),         url: $('#ajaxSaveAccountUrl').val(),         type: "POST",         dataType: "json",         async: false,         success: function (result) {             if (result && result.Success == true) {                 $('#accountMessage').html('<span class="fadeMyContainerSlowly">The account has been saved</span>').show();                 FadeContainerAwaySlowly();             }         },         error: function (xmlHttpRequest, textStatus, errorThrown) {             alert('An error occurred: ' + errorThrown);         }     }); //ajax }; Try run this and your browser will eventually freeze up or crash. Firebug will tell you that you have a repetitive call to the first function call in your model that keeps firing infinitely.  What is happening is that Knockout serializes the view model to a JSON string by traversing the object graph and firing off the functions, again-and-again. Not sure why it does that, but it does. So what is the work around: Nullify your function calls and then post it:         var lightweightModel = viewmodel.clickToSaveMe = null;         data: ko.toJSON(lightweightModel), So then I traced the JSON string on the server and found it having issues with primitive types. C#, by the way. So I changed ko.toJSON(model) to ko.toJS(model), and that solved my problem. Of course you could just create a property on the viewmodel for the account itself, so you only have to serialize the property and not the entire viewmodel. If that is an option then that would be the way to go. If your view model contains other properties in the view model that you also want to post then that would not be an option and then you’ll know what to watch out for. Hope this helps.
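    For completeness, the ajax call above posts the serialized view model to a server-side endpoint. The ASP.NET MVC controller below is a hypothetical sketch of that endpoint (the Account type, action name and route are assumptions for illustration, not taken from the original post); it returns the { Success: true } shape the success callback checks for. Note that automatic binding of a JSON request body arrived with MVC 3's JsonValueProviderFactory, so on MVC 2 you would register a JSON value provider or deserialize the body manually.

        using System.Web.Mvc;

        // Hypothetical server-side counterpart to the SaveTheAccount() ajax call.
        public class AccountController : Controller
        {
            [HttpPost]
            public JsonResult SaveAccount(Account account)
            {
                // Persist the account here (database, service call, etc.).
                bool saved = account != null && !string.IsNullOrEmpty(account.AccountName);

                // Shape matches what the success callback checks: result.Success == true.
                return Json(new { Success = saved });
            }
        }

        // Assumed shape mirroring the Knockout view model's unwrapped properties.
        public class Account
        {
            public string AccountName { get; set; }
            public string AccountType { get; set; }
        }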

    Read the article

  • Christian Radio Locator iPhone app

    - by Tim Hibbard
    For the last three months or so I've been working on an iPhone (and iPad) app in my spare time. It all started when I took the kids to Minneapolis and had a hard time finding radio stations to listen to on the trip. I looked in the App Store for an app that would use my GPS to show me Christian radio stations nearby, but there wasn't one. So I decided to build my own. Using public information from the FCC and a few other sources, I built a database in Google docs that contains the frequency for all Christian radio stations, where the tower is located and how far the tower can reach. I also included any streaming audio information and other contact information like Facebook or Twitter that I could find. Google spreadsheets publish in JSON format (yes, really) and Xcode can automatically deserialize JSON into a properly formatted entity. This is one area that Xcode is far superior to C#. In a just a few lines of code, I can have a list of in-memory strongly typed objects from a web-based JSON feed. To accomplish the same thing natively in .NET would be much more work and wouldn't feel nearly as clean when it was said and done. The snazzy icon shown above was built by my very talented wife. She hasn't yet provided any feedback on the app's user interface, which is why it is so plain and boring. I used a navigation view controller and EGO pull to refresh table view to construct the main window. Pulling down to refresh initiates a GPS lookup, which queries the database for radio stations in range (yes, you can pass parameters to Google spreadsheets and get a subset back in JSON). Pulling up on the table extends the range of the search and includes stations that may not be close enough to get clear audio. This feature is not that intuitive and the next version contains an update to that functionality. Tapping a cell will show a detail view that displays additional information about the station. The user can click to view the station on a map, click to listen to an online stream (if available) or click to see the station's Facebook or Twitter pages. Swiping back and forth on the table changes the information that is displayed on the right hand side of the table cell. It scrolls through the city where the tower is located, how far the phone is from the tower, the range of the tower and in the next version a signal strength indicator. This was pretty easy to implement once I figured out how to assign the gesture recognizer delegate.  Tapping and holding on a cell will jump the user to the map view screen. Which is pretty cool, but very hard for even a power user to discover. To tackle the issue of discoverability, the next version has a series of instructions displayed at the bottom of the screen to show the user the various shortcuts. Once the user has performed the swipes and long holds, the instructions disappear. I've learned a lot developing this app. Spending over a decade exclusively in .NET made the learning curve a bit steep, but once I learned the structure and syntax of Objective-C, I've learned to appreciate the power and simplicity of it. Here are a few screenshots. I would really appreciate any feedback and especially iTunes reviews. Technically it is open source and a smart googler could probably find it. I just haven't promoted it as open source.     Cross posted from timhibbard.com

    Read the article

  • Extended Logging with Caller Info Attributes

    - by João Angelo
.NET 4.5 caller info attributes may be one of those features that do not get much airtime, but nonetheless are a great addition to the framework. These attributes allow you to programmatically access information about the caller of a given method, more specifically the full path of the code file, the member name of the caller and the line number at which the method was called. They are implemented by taking advantage of C# 4.0 optional parameters and are a compile-time feature, so as an added bonus the returned member name is not affected by obfuscation. The main usage scenario is for tracing and debugging routines, as we will see right now. In this sample code I’ll be using NLog, but the example is also applicable to other logging frameworks like log4net.

First, a helper class, without any dependencies, that can be used anywhere to obtain caller information:

using System;
using System.IO;
using System.Runtime.CompilerServices;

public sealed class CallerInfo
{
    private CallerInfo(string filePath, string memberName, int lineNumber)
    {
        this.FilePath = filePath;
        this.MemberName = memberName;
        this.LineNumber = lineNumber;
    }

    public static CallerInfo Create(
        [CallerFilePath] string filePath = "",
        [CallerMemberName] string memberName = "",
        [CallerLineNumber] int lineNumber = 0)
    {
        return new CallerInfo(filePath, memberName, lineNumber);
    }

    public string FilePath { get; private set; }

    public string FileName
    {
        get { return this.fileName ?? (this.fileName = Path.GetFileName(this.FilePath)); }
    }

    public string MemberName { get; private set; }

    public int LineNumber { get; private set; }

    public override string ToString()
    {
        return string.Concat(this.FilePath, "|", this.MemberName, "|", this.LineNumber);
    }

    private string fileName;
}

Then an extension class specific to the NLog Logger:

using System;
using System.Runtime.CompilerServices;
using NLog;

public static class LoggerExtensions
{
    public static void TraceMemberEntry(
        this Logger logger,
        [CallerFilePath] string filePath = "",
        [CallerMemberName] string memberName = "",
        [CallerLineNumber] int lineNumber = 0)
    {
        LogMemberEntry(logger, LogLevel.Trace, filePath, memberName, lineNumber);
    }

    public static void TraceMemberExit(
        this Logger logger,
        [CallerFilePath] string filePath = "",
        [CallerMemberName] string memberName = "",
        [CallerLineNumber] int lineNumber = 0)
    {
        LogMemberExit(logger, LogLevel.Trace, filePath, memberName, lineNumber);
    }

    public static void DebugMemberEntry(
        this Logger logger,
        [CallerFilePath] string filePath = "",
        [CallerMemberName] string memberName = "",
        [CallerLineNumber] int lineNumber = 0)
    {
        LogMemberEntry(logger, LogLevel.Debug, filePath, memberName, lineNumber);
    }

    public static void DebugMemberExit(
        this Logger logger,
        [CallerFilePath] string filePath = "",
        [CallerMemberName] string memberName = "",
        [CallerLineNumber] int lineNumber = 0)
    {
        LogMemberExit(logger, LogLevel.Debug, filePath, memberName, lineNumber);
    }

    public static void LogMemberEntry(
        this Logger logger,
        LogLevel logLevel,
        [CallerFilePath] string filePath = "",
        [CallerMemberName] string memberName = "",
        [CallerLineNumber] int lineNumber = 0)
    {
        const string MsgFormat = "Entering member {1} at line {2}";
        InternalLog(logger, logLevel, MsgFormat, filePath, memberName, lineNumber);
    }

    public static void LogMemberExit(
        this Logger logger,
        LogLevel logLevel,
        [CallerFilePath] string filePath = "",
        [CallerMemberName] string memberName = "",
        [CallerLineNumber] int lineNumber = 0)
    {
        const string MsgFormat = "Exiting member {1} at line {2}";
        InternalLog(logger, logLevel, MsgFormat, filePath, memberName, lineNumber);
    }

    private static void InternalLog(
        Logger logger,
        LogLevel logLevel,
        string format,
        string filePath,
        string memberName,
        int lineNumber)
    {
        if (logger == null)
            throw new ArgumentNullException("logger");
        if (logLevel == null)
            throw new ArgumentNullException("logLevel");

        logger.Log(logLevel, format, filePath, memberName, lineNumber);
    }
}

Finally, a usage example:

using NLog;

internal static class Program
{
    private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

    private static void Main(string[] args)
    {
        Logger.TraceMemberEntry(); // Compile time feature

        // Next three lines output the same except for line number
        Logger.Trace(CallerInfo.Create().ToString());
        Logger.Trace(() => CallerInfo.Create().ToString());
        Logger.Trace(delegate() { return CallerInfo.Create().ToString(); });

        Logger.TraceMemberExit();
    }
}

NOTE: Code for the helper class and Logger extension is also available here.
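For readers who want to see the attributes in isolation, here is a minimal sketch (the class and method names are my own, not part of the original post) showing how the compiler substitutes the caller values at each call site:

using System;
using System.Runtime.CompilerServices;

internal static class CallerInfoDemo
{
    // The compiler fills in the optional parameters at every call site,
    // so the values are baked in at compile time.
    public static void Report(
        string message,
        [CallerMemberName] string memberName = "",
        [CallerLineNumber] int lineNumber = 0)
    {
        Console.WriteLine("{0} (called from {1}, line {2})", message, memberName, lineNumber);
    }

    private static void Main()
    {
        Report("Hello"); // prints: Hello (called from Main, line ...)
    }
}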

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, Key-Value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL, where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database.

The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes.

Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for on-line DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster.

Let's look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster.

Memcached Implementation for InnoDB

The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API.

Figure 1: Memcached API Implementation for InnoDB

With the Memcached daemon running in the same process space, users get very low latency access to their data while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB, including support for Foreign Keys, XA transactions and complex JOIN operations.

Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 Transactions per Second.

Figure 2: Over 9x Faster INSERT Operations

The delivered performance demonstrates that MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts, with the added assurance of transactional guarantees. You can check out the latest Memcached / InnoDB developments and benchmarks here. You can learn how to configure the Memcached API for InnoDB here.

Memcached Implementation for MySQL Cluster

Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster. Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking, to ensure complete synchronization between the database and cache.

Figure 3: Memcached API Implementation with MySQL Cluster

Implementation is simple:

1. The application sends reads and writes to the Memcached process (using the standard Memcached API).
2. This invokes the Memcached Driver for NDB (which is part of the same process).
3. The NDB API is called, providing for very quick access to the data held in MySQL Cluster’s data nodes.

The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn’t have to care – it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized (a minimal client sketch follows at the end of this post).

Using Memcached for Schema-less Data

By default, every Key / Value is written to the same table, with each Key / Value pair stored in a single row – thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table. Of course, if the application needs to access the same data through SQL, then developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster.

Conclusion

Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database. See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar, or take a look at the docs. Don't hesitate to use the comments section below for any questions you may have.
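As a rough illustration of what "just uses the Memcached API" means in practice, here is a minimal sketch of a client issuing a set and a get over the standard Memcached text protocol. The host, port and key are assumptions, and in a real application you would use an existing Memcached client library rather than raw sockets; this only shows that the wire protocol the MySQL plug-ins speak is the ordinary Memcached one:

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

internal static class MemcachedSketch
{
    private static void Main()
    {
        // Assumed endpoint: the memcached daemon/plug-in listening on its default port.
        using (var client = new TcpClient("127.0.0.1", 11211))
        using (var stream = client.GetStream())
        using (var writer = new StreamWriter(stream, Encoding.ASCII) { NewLine = "\r\n", AutoFlush = true })
        using (var reader = new StreamReader(stream, Encoding.ASCII))
        {
            string key = "user:1234";
            string value = "{\"name\":\"Mat\"}";

            // set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
            writer.WriteLine("set {0} 0 0 {1}", key, Encoding.ASCII.GetByteCount(value));
            writer.WriteLine(value);
            Console.WriteLine(reader.ReadLine()); // expect "STORED"

            // get <key>\r\n -> VALUE <key> <flags> <bytes>\r\n<data>\r\nEND\r\n
            writer.WriteLine("get {0}", key);
            Console.WriteLine(reader.ReadLine()); // "VALUE user:1234 0 <bytes>"
            Console.WriteLine(reader.ReadLine()); // the stored value
            Console.WriteLine(reader.ReadLine()); // "END"
        }
    }
}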

    Read the article

  • WCF RIA Services DomainContext Abstraction Strategies–Say That 10 Times!

    - by dwahlin
The DomainContext available with WCF RIA Services provides a lot of functionality that can help track object state and handle making calls from a Silverlight client to a DomainService. One of the questions I get quite often in our Silverlight training classes (and see often in various forums and other areas) is how the DomainContext can be abstracted out of ViewModel classes when using the MVVM pattern in Silverlight applications. It’s not something that’s super obvious at first, especially if you don’t work with delegates a lot, but it can definitely be done. There are various techniques and strategies that can be used, but I thought I’d share some of the core techniques I find useful.

To start, let’s assume you have the following ViewModel class (this is from my Silverlight Firestarter talk, available to watch online here if you’re interested in getting started with WCF RIA Services):

public class AdminViewModel : ViewModelBase
{
    BookClubContext _Context = new BookClubContext();

    public AdminViewModel()
    {
        if (!DesignerProperties.IsInDesignTool)
        {
            LoadBooks();
        }
    }

    private void LoadBooks()
    {
        _Context.Load(_Context.GetBooksQuery(), LoadBooksCallback, null);
    }

    private void LoadBooksCallback(LoadOperation<Book> books)
    {
        Books = new ObservableCollection<Book>(books.Entities);
    }
}

Notice that BookClubContext is being used directly in the ViewModel class. There’s nothing wrong with that of course, but if other ViewModel objects need to load books then code would be duplicated across classes. Plus, the ViewModel has direct knowledge of how to load data, and I like to make it more loosely-coupled.

To do this I create what I call a “Service Agent” class. This class is responsible for getting data from the DomainService and returning it to a ViewModel. It only knows how to get and return data, but doesn’t know how data should be stored and isn’t used with data binding operations. An example of a simple ServiceAgent class is shown next. Notice that I’m using the Action<T> delegate to handle callbacks from the ServiceAgent to the ViewModel object. Because LoadBooks accepts an Action<ObservableCollection<Book>>, the callback method in the ViewModel must accept ObservableCollection<Book> as a parameter. The callback is initiated by calling the Invoke method exposed by Action<T>:

public class ServiceAgent
{
    BookClubContext _Context = new BookClubContext();

    public void LoadBooks(Action<ObservableCollection<Book>> callback)
    {
        _Context.Load(_Context.GetBooksQuery(), LoadBooksCallback, callback);
    }

    public void LoadBooksCallback(LoadOperation<Book> lo)
    {
        // Check for errors of course...keeping this brief
        var books = new ObservableCollection<Book>(lo.Entities);
        var action = (Action<ObservableCollection<Book>>)lo.UserState;
        action.Invoke(books);
    }
}

This can be simplified by taking advantage of lambda expressions. Notice that in the following code I don’t have a separate callback method and don’t have to worry about passing or casting any user state (the user state is the 3rd parameter in the _Context.Load method call shown above):

public class ServiceAgent
{
    BookClubContext _Context = new BookClubContext();

    public void LoadBooks(Action<ObservableCollection<Book>> callback)
    {
        _Context.Load(_Context.GetBooksQuery(), (lo) =>
        {
            var books = new ObservableCollection<Book>(lo.Entities);
            callback.Invoke(books);
        }, null);
    }
}

A ViewModel class can then call into the ServiceAgent to retrieve books, yet never know anything about the DomainContext object or even know how data is loaded behind the scenes:

public class AdminViewModel : ViewModelBase
{
    ServiceAgent _ServiceAgent = new ServiceAgent();

    public AdminViewModel()
    {
        if (!DesignerProperties.IsInDesignTool)
        {
            LoadBooks();
        }
    }

    private void LoadBooks()
    {
        _ServiceAgent.LoadBooks(LoadBooksCallback);
    }

    private void LoadBooksCallback(ObservableCollection<Book> books)
    {
        Books = books;
    }
}

You could also handle the LoadBooksCallback method using a lambda if you wanted to minimize code, just like I did earlier with the LoadBooks method in the ServiceAgent class. If you’re into Dependency Injection (DI), you could create an interface for the ServiceAgent type, reference it in the ViewModel and then inject in the object to use at runtime (a sketch of this follows below). There are certainly other techniques and strategies that can be used, but the code shown here provides an introductory look at the topic that should help get you started abstracting the DomainContext out of your ViewModel classes when using WCF RIA Services in Silverlight applications.
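To illustrate the DI point above, here is a minimal sketch of what extracting an interface for the ServiceAgent might look like; the interface name and the constructor injection are my own additions, not part of the original sample:

public interface IServiceAgent
{
    void LoadBooks(Action<ObservableCollection<Book>> callback);
}

public class ServiceAgent : IServiceAgent
{
    BookClubContext _Context = new BookClubContext();

    public void LoadBooks(Action<ObservableCollection<Book>> callback)
    {
        _Context.Load(_Context.GetBooksQuery(), (lo) =>
        {
            var books = new ObservableCollection<Book>(lo.Entities);
            callback.Invoke(books);
        }, null);
    }
}

public class AdminViewModel : ViewModelBase
{
    private readonly IServiceAgent _ServiceAgent;

    // The concrete ServiceAgent (or a test double) is injected at runtime.
    public AdminViewModel(IServiceAgent serviceAgent)
    {
        _ServiceAgent = serviceAgent;
        if (!DesignerProperties.IsInDesignTool)
        {
            _ServiceAgent.LoadBooks(books => Books = books);
        }
    }
}

The design benefit is that the ViewModel depends only on the interface, so unit tests can supply a fake IServiceAgent that returns canned data without touching the DomainContext at all.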

    Read the article

  • Windows Workflow Foundation (WF) and things I wish were more intuitive

    - by pjohnson
I've started using Windows Workflow Foundation, and so far have run into a few things that aren't incredibly obvious. Microsoft did a good job of providing a ton of samples, which is handy because you need them to get anywhere with WF. The docs are thin, so I've been bouncing between samples and downloadable labs to figure out how to implement various activities in a workflow.

Code separation or not? You can create a workflow and activity in Visual Studio with or without code separation, i.e. just a .cs "Component" style object with a Designer.cs file, or a .xoml XML markup file with code behind (beside?) it. Absent any obvious advantage to one or the other, I used code separation for workflows and any complex custom activities, and no code separation for custom activities that just inherit from the Activity class and thus don't have anything special in the designer. So far, so good.

Workflow Activity Library project type - What's the point of this separate project type? So far I don't see much advantage to keeping your custom activities in a separate project. I prefer to have as few projects as needed (and no fewer). The Designer's Toolbox window seems to find your custom activities just fine no matter where they are, and the debugging experience doesn't seem to be any different.

Designer Properties - This is about the designer, and not specific to WF, but nevertheless something that's hindered me a lot more in WF than in Windows Forms or elsewhere. The Properties window does a good job of showing you property values when you hover the mouse over the values, but it doesn't do the same to let you find out what a control's type is. So maybe if I named all my activities "x1" and "x2" instead of helpful self-documenting names like "listenForStatusUpdate", I could see enough of the type to determine what it is, but with any names longer than those, all I get of the type is "System.Workflow.Act" or "System.Workflow.Compone". Even hitting the dropdown doesn't expand any wider, like the debugger quick watch "smart tag" popups do when you scroll through members. The only way I've found around this in VS 2008 is to widen the Properties dialog, losing precious designer real estate, then shrink it back down when you're done to see what you were doing. Really?

WF Designer - This is about the designer, and I believe is specific to WF. I should be able to edit the XML in a .xoml file, or drag and drop using the designer. With WPF (at least in VS 2010 Ultimate), these are side by side, and changes to one instantly update the other. With WF, I have to right-click on the .xoml file, choose Open With, and pick XML Editor to edit the text. It looks like this is one way where WF didn't get the same attention WPF got during .NET Fx 3.0 development.

Service - In the WF world, this is simply a class that talks to the workflow about things outside the workflow, not to be confused with how the term "service" is used in every other context I've seen in the Windows and .NET world, i.e. an executable that waits for events or requests from a client and services them (Windows service, web service, WCF service, etc.).

ListenActivity - Such a great concept, yet so unintuitive. It seems you need at least two branches (EventDrivenActivity instances), one for your positive condition and one for a timeout. The positive condition has a HandleExternalEventActivity, and the timeout has a DelayActivity followed by however you want to handle the delay, e.g. a ThrowActivity. The timeout is simple enough; wiring up the HandleExternalEventActivity is where things get fun. You need to create a service (see above), and an interface for that service (this seems more complex than should be necessary--why not have activities just wire to a service directly?). And you need to create a custom EventArgs class that inherits from ExternalDataEventArgs--you can't create an ExternalDataEventArgs event handler directly, even if you don't need to add any more information to the event args, despite ExternalDataEventArgs not being marked as an abstract class; there is no compiler error or warning or any other indication that you're doing something wrong, until you run it, find that it always times out, and get to check every place mentioned here to see why. Your interface and service need an event that consumes your custom EventArgs class, and a method to fire that event. You need to call that method from somewhere. Then you get to hope that you did everything just right, or that you can step through code in the debugger before your Delay timeout expires. Yes, it's as much fun as it sounds. (A minimal sketch of these pieces follows at the end of this post.)

TransactionScopeActivity - I had the bright idea of putting one in as a placeholder, then filling in the database updates later. That caused this error: The workflow hosting environment does not have a persistence service as required by an operation on the workflow instance "[GUID]". ...which is about as helpful as "Object reference not set to an instance of an object" and even more fun to debug. Google led me to this Microsoft Forums hit, and from there I figured out it didn't like that the activity had no children. Again, a Validator on TransactionScopeActivity would have pointed this out to me at design time, rather than handing me a nearly useless error at runtime. Easily enough, I disabled the activity and that fixed it.

I still see huge potential in my work where WF could make things easier and more flexible, but there are some seriously rough edges at the moment. Maybe I'm just spoiled by how much easier and more intuitive development elsewhere in the .NET Framework is.
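To make the ListenActivity wiring described above a little more concrete, here is a minimal sketch of the pieces involved: the custom EventArgs, the [ExternalDataExchange] interface and a host-side service that raises the event. The type and member names are my own, and registering the service with an ExternalDataExchangeService on the workflow runtime is omitted:

using System;
using System.Workflow.Activities;

// WF requires a type derived from ExternalDataEventArgs,
// even if no extra payload is needed (as noted above).
[Serializable]
public class StatusUpdateEventArgs : ExternalDataEventArgs
{
    public StatusUpdateEventArgs(Guid instanceId, string status)
        : base(instanceId)
    {
        this.Status = status;
    }

    public string Status { get; private set; }
}

// The "service" interface the HandleExternalEventActivity binds to.
[ExternalDataExchange]
public interface IStatusUpdateService
{
    event EventHandler<StatusUpdateEventArgs> StatusUpdated;
}

// Host-side implementation that raises the event into the workflow runtime.
public class StatusUpdateService : IStatusUpdateService
{
    public event EventHandler<StatusUpdateEventArgs> StatusUpdated;

    public void RaiseStatusUpdated(Guid workflowInstanceId, string status)
    {
        var handler = this.StatusUpdated;
        if (handler != null)
        {
            handler(this, new StatusUpdateEventArgs(workflowInstanceId, status));
        }
    }
}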

    Read the article

  • Big Data – Operational Databases Supporting Big Data – RDBMS and NoSQL – Day 12 of 21

    - by Pinal Dave
In yesterday’s blog post we learned the importance of the Cloud in the Big Data Story. In this article we will understand the role of Operational Databases Supporting the Big Data Story. Even though we keep on talking about Big Data architecture, it is extremely crucial to understand that a Big Data system can’t just exist in isolation. There are many needs of the business that can only be fulfilled with the help of operational databases. Just having a system which can analyze big data may not solve every single data problem.

Real World Example

Think about it this way: you are using Facebook and you have just updated your information about your current relationship status. In the next few seconds the same information is also reflected in the timeline of your partner as well as a few of your immediate friends. After a while you will notice that the same information is now also available to your remote friends. Later on, when someone searches for all the relationship changes among their friends, your change of relationship will also show up in the same list. Now here is the question – do you think Big Data architecture is doing every single one of these changes? Do you think that the immediate reflection of your relationship change to your family members is also because of the technology used in Big Data? Actually, the answer is that Facebook uses MySQL to do various updates in the timeline as well as various events we do on their homepage. It is really difficult to part from the operational databases in any real world business.

Now we will see a few examples of operational databases.

Relational Databases (This blog post)
NoSQL Databases (This blog post)
Key-Value Pair Databases (Tomorrow’s post)
Document Databases (Tomorrow’s post)
Columnar Databases (The Day After’s post)
Graph Databases (The Day After’s post)
Spatial Databases (The Day After’s post)

Relational Databases

We have earlier discussed the RDBMS role in the Big Data story in detail, so we will not cover it extensively over here. The relational database is pretty much everywhere in most of the businesses which have been around for many years. The importance and existence of the relational database are always going to be there as long as there is meaningful structured data around. There are many different kinds of relational databases, for example Oracle, SQL Server, MySQL and many others. If you are looking for an open source and widely accepted database, I suggest trying MySQL, as that has been very popular in the last few years. I also suggest you try out PostgreSQL as well. Besides many other essential qualities, PostgreSQL has very interesting licensing policies. The PostgreSQL license allows modifications and distribution of the application in open or closed (source) form. One can make any modifications and keep them private as well as contribute them to the community. I believe this one quality makes it much more interesting to use, and it will play a very important role in the future.

Nonrelational Databases (NoSQL)

We have also covered Nonrelational Databases in earlier blog posts. NoSQL actually stands for Not Only SQL Databases. There are plenty of NoSQL databases out in the market and selecting the right one is always very challenging. Here are a few of the properties which are very essential to consider when selecting the right NoSQL database for operational purposes:

Data and Query Model
Persistence of Data and Design
Eventual Consistency
Scalability

Though all of the above properties are interesting to have in any NoSQL database, the one which most attracts me is Eventual Consistency.

Eventual Consistency

RDBMS uses ACID (Atomicity, Consistency, Isolation, Durability) as the key mechanism for ensuring data consistency, whereas nonrelational DBMS uses BASE for the same purpose. BASE stands for Basically Available, Soft state and Eventual consistency. Eventual consistency is widely deployed in distributed systems. It is a consistency model used in distributed computing which expects the unexpected, often. In a large distributed system there are always various nodes joining and various nodes being removed, as they are often using commodity servers. This happens either intentionally or accidentally. Even though one or more nodes are down, it is expected that the entire system still functions normally. Applications should be able to do various updates as well as retrieval of data successfully without any issue. Additionally, this also means that the system is expected to return the same updated data anytime from all the functioning nodes. Irrespective of when any node joins the system, if it is marked to hold some data it should eventually contain the same updated data. (A toy illustration of this idea follows at the end of this post.)

As per Wikipedia - Eventual consistency is a consistency model used in distributed computing that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. In other words - informally, if no additional updates are made to a given data item, all reads to that item will eventually return the same value.

Tomorrow

In tomorrow’s blog post we will discuss various other Operational Databases supporting Big Data.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
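As a toy illustration of the eventual consistency idea (not how any particular NoSQL database implements it), consider two replicas that accept writes independently and later reconcile to the last updated value. All names here are my own invention:

using System;
using System.Collections.Generic;

// Toy model only: each replica keeps the latest (timestamp, value) pair per key,
// and an anti-entropy pass copies newer versions to another replica so that
// reads eventually return the same last updated value everywhere.
internal class Replica
{
    private readonly Dictionary<string, Tuple<long, string>> store =
        new Dictionary<string, Tuple<long, string>>();

    public void Write(string key, string value, long timestamp)
    {
        Tuple<long, string> current;
        if (!store.TryGetValue(key, out current) || timestamp > current.Item1)
        {
            store[key] = Tuple.Create(timestamp, value);
        }
    }

    public string Read(string key)
    {
        Tuple<long, string> current;
        return store.TryGetValue(key, out current) ? current.Item2 : null;
    }

    public void SyncWith(Replica other)
    {
        foreach (var entry in store)
        {
            other.Write(entry.Key, entry.Value.Item2, entry.Value.Item1);
        }
    }
}

internal static class EventualConsistencyDemo
{
    private static void Main()
    {
        var a = new Replica();
        var b = new Replica();

        a.Write("status", "single", timestamp: 1);
        b.Write("status", "married", timestamp: 2); // a later update lands on another node

        // Once updates stop and the replicas exchange state, both return "married".
        a.SyncWith(b);
        b.SyncWith(a);
        Console.WriteLine(a.Read("status") + " / " + b.Read("status"));
    }
}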

    Read the article

  • MySQL Connect Only 10 Days Away - Focus on InnoDB Sessions

    - by Bertrand Matthelié
    Time flies and MySQL Connect is only 10 days away! You can check out the full program here as well as in the September edition of the MySQL newsletter. Mat recently blogged about the MySQL Cluster sessions you’ll have the opportunity to attend, and below are those focused on InnoDB. Remember you can plan your schedule with Schedule Builder. Saturday, 1.00 pm, Room Golden Gate 3: 10 Things You Should Know About InnoDB—Calvin Sun, Oracle InnoDB is the default storage engine for Oracle’s MySQL as of MySQL Release 5.5. It provides the standard ACID-compliant transactions, row-level locking, multiversion concurrency control, and referential integrity. InnoDB also implements several innovative technologies to improve its performance and reliability. This presentation gives a brief history of InnoDB; its main features; and some recent enhancements for better performance, scalability, and availability. Saturday, 5.30 pm, Room Golden Gate 4: Demystified MySQL/InnoDB Performance Tuning—Dimitri Kravtchuk, Oracle This session covers performance tuning with MySQL and the InnoDB storage engine for MySQL and explains the main improvements made in MySQL Release 5.5 and Release 5.6. Which setting for which workload? Which value will be better for my system? How can I avoid potential bottlenecks from the beginning? Do I need a purge thread? Is it true that InnoDB doesn't need thread concurrency anymore? These and many other questions are asked by DBAs and developers. Things are changing quickly and constantly, and there is no “silver bullet.” But understanding the configuration setting’s impact is already a huge step in performance improvement. Bring your ideas and problems to share them with others—the discussion is open, just moderated by a speaker. Sunday, 10.15 am, Room Golden Gate 4: Better Availability with InnoDB Online Operations—Calvin Sun, Oracle Many top Web properties rely on Oracle’s MySQL as a critical piece of infrastructure for serving millions of users. Database availability has become increasingly important. One way to enhance availability is to give users full access to the database during data definition language (DDL) operations. The online DDL operations in recent MySQL releases offer users the flexibility to perform schema changes while having full access to the database—that is, with minimal delay of operations on a table and without rebuilding the entire table. These enhancements provide better responsiveness and availability in busy production environments. This session covers these improvements in the InnoDB storage engine for MySQL for online DDL operations such as add index, drop foreign key, and rename column. Sunday, 11.45 am, Room Golden Gate 7: Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster—Andrew Morgan and John Duncan, Oracle Ever-increasing performance demands of Web-based services have generated significant interest in providing NoSQL access methods to MySQL (MySQL Cluster and the InnoDB storage engine of MySQL), enabling users to maintain all the advantages of their existing relational databases while providing blazing-fast performance for simple queries. Get the best of both worlds: persistence; consistency; rich SQL queries; high availability; scalability; and simple, flexible APIs and schemas for agile development. This session describes the memcached connectors and examines some use cases for how MySQL and memcached fit together in application architectures. 
It does the same for the newest MySQL Cluster native connector, an easy-to-use, fully asynchronous connector for Node.js. Sunday, 1.15 pm, Room Golden Gate 4: InnoDB Performance Tuning—Inaam Rana, Oracle The InnoDB storage engine has always been highly efficient and includes many unique architectural elements to ensure high performance and scalability. In MySQL 5.5 and MySQL 5.6, InnoDB includes many new features that take better advantage of recent advances in operating systems and hardware platforms than previous releases did. This session describes unique InnoDB architectural elements for performance, new features, and how to tune InnoDB to achieve better performance. Sunday, 4.15 pm, Room Golden Gate 3: InnoDB Compression for OLTP—Nizameddin Ordulu, Facebook and Inaam Rana, Oracle Data compression is an important capability of the InnoDB storage engine for Oracle’s MySQL. Compressed tables reduce the size of the database on disk, resulting in fewer reads and writes and better throughput by reducing the I/O workload. Facebook pushes the limit of InnoDB compression and has made several enhancements to InnoDB, making this technology ready for online transaction processing (OLTP). In this session, you will learn the fundamentals of InnoDB compression. You will also learn the enhancements the Facebook team has made to improve InnoDB compression, such as reducing compression failures, not logging compressed page images, and allowing changes of compression level. Not registered yet? You can still save US$ 300 over the on-site fee – Register Now!

    Read the article

  • What Counts For a DBA – Decisions

    - by Louis Davidson
    It’s Friday afternoon, and the lead DBA, a very talented guy, is getting ready to head out for two well-earned weeks of vacation, with his family, when this error message pops up in his inbox: Msg 211, Level 23, State 51, Line 1. Possible schema corruption. Run DBCC CHECKCATALOG. His heart sinks. It’s ten…no eight…minutes till it’s time to walk out the door. He glances around at his coworkers, competent to handle many problems, but probably not up to the challenge of fixing possible database corruption. What does he do? After a few agonizing moments of indecision, he clicks shut his laptop. He’ll just wait and see. It was unlikely to come to anything; after all, it did say “possible” schema corruption, not definite. In that moment, his fate was sealed. The start of the solution to the problem (run DBCC CHECKCATALOG) had been right there in the error message. Had he done this, or at least took two of those eight minutes to delegate the task to a coworker, then he wouldn’t have ended up spending two-thirds of an idyllic vacation (for the rest of the family, at least) dealing with a problem that got consistently worse as the weekend progressed until the entire system was down. When I told this story to a friend of mine, an opera fan, he smiled and said it described the basic plotline of almost every opera or ‘Greek Tragedy’ ever written. The particular joy in opera, he told me, isn’t the warbly voiced leading ladies, or the plump middle-aged romantic leads, or even the music. No, what packs the opera houses in Italy is the drama of characters who, by the very nature of their life-experiences and emotional baggage, make all sorts of bad choices when faced with ordinary decisions, and so move inexorably to their fate. The audience is gripped by the spectacle of exotic characters doomed by their inability to see the obvious. I confess, my personal experience with opera is limited to Bugs Bunny in “What’s Opera, Doc?” (Elmer Fudd is a great example of a bad decision maker, if ever one existed), but I was struck by my friend’s analogy. If all the DBA cubicles were a stage, I think we would hear many similarly tragic tales, played out to music: “Error handling? We write our code to never experience errors, so nah…“ “Backups failed today, but it’s okay, we’ll back up tomorrow (we’ll back up tomorrow)“ And similarly, they would leave their audience gasping, not necessarily at the beauty of the music, or poetry of the lyrics, but at the inevitable, grisly fate of the protagonists. If you choose not to use proper error handling, or if you choose to skip a backup because, hey, you haven’t had a server crash in 10 years, then inevitably, in that moment you expected to be enjoying a vacation, or a football game, with your family and friends, you will instead be sitting in front of a computer screen, paying for your poor choices. Tragedies are very much part of IT. Most of a DBA’s day to day work has limited potential to wreak havoc; paperwork, timesheets, random anonymous threats to developers, routine maintenance and whatnot. However, just occasionally, you, as a DBA, will face one of those decisions that really matter, and which has the possibility to greatly affect your future and the future of your user’s data. Make those decisions count, and you’ll avoid the tragic fate of many an operatic hero or villain.

    Read the article

  • DeveloperDeveloperDeveloper! Scotland 2010 - DDDSCOT

    - by Plip
DDD in Scotland was held on the 8th May 2010 in Glasgow, and I was there, not as is usual at these kinds of things as an organiser, but actually as a speaker and delegate. The weekend started for me back on Thursday with the arrival of Dave Sussman at my place in Lancashire; after a curry and watching the Election night TV coverage we retired to our respective beds (yes, I know, I hate to shatter the illusion that we sleep in the same bed wearing matching pyjamas, but there it is) ready for the drive up to Glasgow the following afternoon. Before heading up to Glasgow we had to pick up Young Mr Hardy from Wigan, then we began the four hour drive back in time... Something that struck me on the journey up is just how beautiful Scotland is. The menacing landscapes bordered with fluffy sheep and whirly-ma-gigs are awe inspiring - well worth driving up if you ever get the chance. Anywho, we arrived in Glasgow, got settled into the hotel and went in search of speakers for pre-conference drinks and food. We discovered a gaggle (I believe that's the collective term) of speakers in the bar and when we reached critical mass headed off to the Speakers Dinner location. During dinner, SOMEONE set my hair on FIRE. That's all I'm going to say on the matter. Whilst I was enjoying my evening there was something nagging at me: I realised that I should really write my session, as I was due to give it the following morning. So after a few more drinks I headed back to the hotel and got some well earned sleep (and washed the fire damage out of my hair). Next day, we headed off to the conference, which was a lovely stroll through Glasgow City Centre. None of us got mugged, murdered (or set on fire), arriving safely at the venue, which was a bonus. I was asked to read out the opening slides for Barry Carr's session, which I did diligently and with such professionalism that I shocked even myself. At which point I realised that in just over an hour I had to give my presentation, so I headed back to the speaker room to finish writing it. Wham, bam and it was all over. The session seemed to go well. I was speaking on Exception Driven Development, which isn't so much a technical solution but rather a mindset around how one should treat exceptions and their code. To be honest, I've not been so nervous giving a session for years - something about this topic worried me. I was concerned I was being too abstract in my thinking, or that what I was saying was so obvious that everyone would know it, but it seems to have been well received, which makes me a happy speaker. Craig Murphy has some brilliant pictures of DDD Scotland 2010. After my session was done I grabbed some lunch and headed back to the hotel and into town to do some shopping (thus my conspicuous omission from the above photo). Later on we headed out to the geek dinner, which again was a rum affair, followed by a few drinks and a little boogie woogie. All in all a well run, well attended conference, by the community for the community. I tip my hat to the whole team who put on DDD Scotland!

    Read the article

  • Implementing Search for BlogReader Windows 8 Sample

    - by Harish Ranganathan
The BlogReader sample is an excellent place to start building up your Windows 8 development skills. The tutorial is available here and the complete source code is available here.

Create a project called WindowsBlogReader and create pages for ItemsPage.xaml, SplitPage.xaml and DetailPage.xaml, then copy the corresponding code blocks from the sample listed above. Create a class file FeedData.cs and copy the code. Finally, create a class DateConverter.cs and copy the code associated with it. With that you should be able to build and run the project. There seems to be one issue in the sample feeds listed, in that the first week (feed1) doesn’t seem to expose it. So you can skip that and use the second feed as the first feed. You will end up with one feed less, but it works.

I demonstrated this at the recent TechDays in Chennai: how we can use the Search Contract and implement search within the blog titles.

First off, we need to declare that the app will be using the Search Contract, in the Package.appxmanifest file.

Next, we need a handle on the Search Contract when the user types in the search window in the Charms menu. If you have completed the code sample from the link above, you will have ItemsPage.xaml and ItemsPage.xaml.cs. Open ItemsPage.xaml.cs and import the namespaces System.Collections.ObjectModel and System.Linq. In the ItemsPage() constructor, right after this.InitializeComponent();, add the following code:

Windows.ApplicationModel.Search.SearchPane.GetForCurrentView().QuerySubmitted += ItemsPage_QuerySubmitted;

This event is fired when users open the Search panel from the Charms menu, type something and hit enter. We need to handle the event declared in the delegate. For that we need to pull the FeedDataSource instantiation up to the root of the class to make it global. So, add the following as the first line within the partial class:

FeedDataSource feedDataSource;

Also, modify the LoadState method as follows:

protected override void LoadState(Object navigationParameter, Dictionary<String, Object> pageState)
{
    feedDataSource = (FeedDataSource)App.Current.Resources["feedDataSource"];
    if (feedDataSource != null)
    {
        this.DefaultViewModel["Items"] = feedDataSource.Feeds;
    }
}

Next is to implement the ItemsPage_QuerySubmitted method:

void ItemsPage_QuerySubmitted(Windows.ApplicationModel.Search.SearchPane sender, Windows.ApplicationModel.Search.SearchPaneQuerySubmittedEventArgs args)
{
    this.DefaultViewModel["Items"] = from dynamic item in feedDataSource.Feeds
                                     where item.Title.Contains(args.QueryText)
                                     select item;
}

As you can see, we are using almost the same DefaultViewModel, with the change that we use a LINQ query to search the feeds for a Title that matches the query text. With this we are ready to run the app. Run the app, bring up the Charms menu with the Windows + C key combination and type some text to search within the blog. You can see that it filters the blogs which have the matching text.

We can modify the above LINQ query to search for the text in other attributes like the description or the actual blog content (see the sketch below). I have uploaded the complete code since the original WindowsBlogReader code is not available for download. You can download it from here.

Note: this code is provided as-is without any warranties. Cheers!!!
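As a minimal sketch of that extension, assuming the feed items also expose a Description property (the property name is an assumption; adjust it to whatever your FeedData class defines), the query could match either field:

void ItemsPage_QuerySubmitted(Windows.ApplicationModel.Search.SearchPane sender, Windows.ApplicationModel.Search.SearchPaneQuerySubmittedEventArgs args)
{
    string query = args.QueryText;

    // Same DefaultViewModel binding as before, but the filter now checks
    // both the title and the (assumed) description of each feed item.
    this.DefaultViewModel["Items"] = from dynamic item in feedDataSource.Feeds
                                     where item.Title.Contains(query)
                                        || item.Description.Contains(query)
                                     select item;
}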

    Read the article

  • Partner Blog: Hub City Media Introduces iPad Application for Oracle Identity Analytics

    - by Tanu Sood
    About the Writer:Steve Giovannetti is CTO of Hub City Media, Inc., a company that specializes in implementation and product development on the Oracle Identity Management platform. Recently, Hub City Media announced the introduction of iPad application IdentityCert for Oracle Identity Analytics. This post explore the business use cases and application of IdentityCert.Hub City Media(HCM) has been deploying certification solutions based on Oracle Identity Analytics since it first appeared on the market as Vaau RBACx. With each deployment we've seen the same pattern repeat time and time again:1. Customers suffering under the weight of manual access certification regimens deploy Oracle Identity Analytics (OIA) for automated certification. 2. OIA improves the frequency, speed, accuracy, and participation of certifications across the organization. 3. Then the certifiers, typically managers and supervisors, ask, “Is there any easier way to do these certifications offline?”The current version of OIA has a way to export certification data to a spreadsheet.  For some customers, we've leveraged this feature and combined it with some of our own custom code to provide a solution based on spreadsheet exports and imports.  Customers export the certification to Microsoft Excel, complete it, and then import the spreadsheet to OIA. It worked well for offline certification, but if the user accidentally altered the format of the spreadsheet, the import of the data could fail. We were close to a solution but it wasn’t reliable.Over the past few years, we've seen the proliferation of Apple iOS devices, specifically the iPhone and iPad, in the enterprise.  As our customers were asking for offline certification, we noticed the same population of users traditionally responsible for access certification, were early adopters of the iPad. The environment seemed ideal for us to create an iPad application to support offline certifications using Oracle Identity Analytics. That’s why we created IdentityCert™.IdentityCert allows users to view their analytics dashboard, complete user certifications, and resolve policy violations with OIA, from their iPads.The current IdentityCert analytics dashboard displays the same charts that are available in the Oracle Identity Analytics product. However, we plan to expand the number of available analytics in future releases.The main function of IdentityCert is user certification which can be performed quickly and efficiently using a simple touch interface. Managers tap into a certification, use simple gestures to claim users and certify their access.  Certifications can be securely downloaded to IdentityCert and can be completed with or without a network connection. The user can upload the completed certifications once they are connected to a cellular or wi-fi network.Oracle Identity Analytics can generate policy violation notifications based on detective scans of identity warehouse or via preventative analysis of identity access requests. IdentityCert allows users to view all policy violations, resolve, or delegate them to appropriate users. IdentityCert also analyzes the policy violation expression and produces more human friendly descriptions of the policy violation which improves the ability of users to resolve the violation. IdentityCert can be deployed quickly into a customer's environment. 
It is deployed with Hub City Media's ID Services to connect Oracle Identity Analytics securely with the iPad application.Oracle Identity Management 11g R2 is an important evolutionary release. Oracle's Identity Management suite has more characteristics of a cohesive platform. This platform provides an integrated set of identity services that can be used to protect, manage, and audit security within the enterprise. At HCM we take the platform concept a step further and see it as an opportunity to create unique solutions for Oracle Identity Management customers. IdentityCert is our commitment to this platform. You can download IdentityCert from the Apple iOS App Store today. It includes a demo dataset that you can use to explore the functions of the product without any server infrastructure. Download it. Give it a try. We would appreciate your interest and welcome any feedback.Resources:Press Release: Hub City Media Introduces iPad Application IdentityCert™ for Oracle Identity AnalyticsApp Store Download: http://bit.ly/IdentityCertOracle Identity Governance Suite

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >