Search Results

Search found 31410 results on 1257 pages for 'disk based'.


  • Formula to calculate probability of unrecoverable read error during RAID rebuild

    - by OlafM
    I need to compare the reliability of different RAID systems with either consumer or enterprise drives. The formula for the probability of hitting an error during a rebuild, ignoring mechanical problems, is simple: error_probability = 1 - (1 - per_bit_error_rate)^bits_read. With 3 TB drives I get a 38% probability of experiencing a URE (unrecoverable read error) for a 2+1 disk RAID5 (4.7% for enterprise drives), 21% for a RAID1 (2.4% for enterprise drives), and a 51% probability of error during recovery for the 3+1 RAID5 often used by owners of SOHO products like Synology boxes. Most people don't know about this. Calculating the error for single-disk tolerance is easy; my question concerns systems tolerant of multiple disk failures (RAID6/Z2, RAIDZ3, and RAID1 with multiple disks). If only the first disk is used for the rebuild and the second one is read again from the beginning in case of a URE, then the error probability is the one calculated above, squared (14.5% for consumer RAID5 2+1, 4.5% for consumer RAID1 1+2). However, I suppose (at least in ZFS, which has full checksums!) that the second parity/available disk is read only where needed, meaning that only a few sectors are required: how many UREs can possibly happen on the first disk? Not many, otherwise the error probability for single-disk-tolerance systems would skyrocket even more than I calculated. If I'm correct, a second parity disk would practically lower the risk to extremely low values. Am I correct?
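
    For reference, a worked instance of the formula above, assuming the usual datasheet URE rates of 1 error per 10^14 bits read for consumer drives and 1 per 10^15 for enterprise drives (these rates are assumptions here, not quoted specs for any particular model):

        bits_read for a 2+1 RAID5 rebuild = 2 surviving disks x 3 TB x 8 bits/byte = 4.8 x 10^13 bits
        error_probability = 1 - (1 - 10^-14)^(4.8 x 10^13) ≈ 1 - e^-0.48 ≈ 38%
        enterprise drives (10^-15 rate):  1 - e^-0.048 ≈ 4.7%
        RAID1 (one 3 TB disk read):       1 - e^-0.24  ≈ 21%
        3+1 RAID5 (three disks read):     1 - e^-0.72  ≈ 51%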

    Read the article

  • Is there a way in Windows 7 to disable "journaling"?

    - by Psycogeek
    C:\$Extend\$UsnJrnl:$J:$DATA — here is a picture, finally. The large strip in the center of the top band is the largest chunk; the grey areas elsewhere are the various clusters belonging to it. On the right, the big long grey line is $LogFile (not paging), and it is 63 MB. Paging (500 MB) is the dark cyan chunk, next to the yellow MFT reserve in the inner rings. The disk was defragged so the pieces could be seen more easily. Not all clusters of this type of file are tagged, but the idea is there. The disk has 4k clusters and is now about 12 GB in size. Each little block in the picture is 0.81 MB and represents 207 clusters. The dark green section is mostly the whole WinSxS pile — also interesting when they keep telling us it doesn't take much disk space. Wikipedia suggests that in previous NT systems USN journaling would be turned on when enabled (which assumes it could also be turned off?). What aspect, service, or program is putting that stuff all over the disk in clusters tagged as $Jrnl$, even if it is not actual USN journaling? Is it possible in a Windows 7 system to completely disable the journaling, and what would be the ramifications of that? On a Windows XP NTFS system I do not recall seeing this quantity of disk clusters used by these $Jrnl$ names, so I do not recall this being necessary in such quantity for the NTFS file system itself. I understand that it would not be there if it did not have a useful function :-) Information about how wonderful it is, is fine, if that information helps track down what parts of the system create and use it. The "Change Journals" documentation states: "Change journals are also needed to recover file system indexing." Hmm, that might explain some of them, or why it was left on the disk — a crash while background indexing?
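
    For what it's worth, the USN change journal itself can be inspected and deleted from an elevated command prompt — a minimal sketch, with the caveat that Windows or services that depend on the journal (search indexing, backup) may simply recreate it, and deleting it forces those services to rescan the volume:

        rem Show the current journal's size and allocation delta:
        fsutil usn queryjournal C:
        rem Delete the change journal on C:
        fsutil usn deletejournal /d C: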

    Read the article

  • Application (was Firefox) crash on first load on Ubuntu Linux on older Dell Laptop

    - by Ira Baxter
    I've had a Dell Latitude laptop since about 2000 without managing to destroy it. A month ago the Windows 2000 system on it did something stupid to its file system and Windows was completely lost. No point in reinstalling Windows 2000, so I installed Ubuntu Linux on the laptop. Everything seems normal (installed, rebooted, I can log in, run GnuChess, poke about). ... but ... when I attempt to launch Firefox from the top bar menu icon, I get a bunch of disk activity, the whirling cursor icon goes round a bit, and then (WAS: everything stops: icon, mouse. Literally nothing happens for 5 minutes. Ubuntu is dead, as far as I can tell. EDIT: on further investigation, the spinning icon and the touchpad-operated mouse freeze. There's apparently a little disk activity occurring about every 5 seconds. I wait 5-10 minutes; the behavior doesn't change.) A reboot, and I can repeat this reliably. So on the face of it, everything works but Firefox. That seems really strange. The only odd thing about this system when Firefox is booting is that while it has an Ethernet port (which worked fine under Windows), it isn't actually plugged into an Ethernet. As this is the first Firefox launch since the Ubuntu install, maybe Firefox mishandles Internet access? Why would that crash Ubuntu? (I need to go try the obvious experiment of plugging it in.) EDIT: I tried to run the Disk Manager tool — not that I cared what it was, just a menu-available application. It started up like Firefox, I got a little tag in the lower left saying Disk P*** something had started, and then the same behavior as Firefox. At this point, I don't think it's the Ethernet. Is it possible that the Ubuntu disk driver can't handle the disk controller in this older laptop? The install seemed to go fine.

    Read the article

  • How ZFS handles online replacement in a RAID-Z (theoretical)

    - by Kevin
    This is a somewhat theoretical question about ZFS and RAID-Z. I'll use a three-disk single-parity array as an example for clarity, but the problem can be extended to any number of disks and any parity. Suppose we have disks A, B, and C in the pool, and that it is clean. Suppose now that we physically add disk D with the intention of replacing disk C, and that disk C is still functioning correctly and is only being replaced as preventive maintenance. Some admins might just yank C and install D, which is a little tidier as devices need not change IDs — however, this leaves the array temporarily degraded, so for this example suppose we install D without offlining or removing C. The Solaris docs indicate that we can replace a disk without first offlining it, using a command such as: zpool replace pool C D. This should cause a resilvering onto D. Let us say that resilvering proceeds "downwards" along a "cursor" (I don't know the actual terminology used in the internal implementation). Suppose now that midway through the resilvering, disk A fails. In theory, this should be recoverable, as above the cursor B and D contain sufficient parity and below the cursor B and C contain sufficient parity. However, whether this is actually recoverable depends upon internal design decisions in ZFS which I am not aware of (and which the manual doesn't spell out). If ZFS continues to send writes to C below the cursor, then we are fine. If, however, ZFS internally treats C as though it were gone, resilvering D only from parity between A and B and only writing A and B below the cursor, then we're toast. Some experimenting could answer this question, but I was hoping maybe someone on here already knows which way ZFS handles this situation. Thank you in advance for any insight!
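
    A sketch of the replacement sequence described above, assuming the pool really is named pool — the resilver's progress (the "cursor") can at least be watched from outside while the experiment runs:

        zpool replace pool C D   # attach D and begin resilvering C's data onto it
        zpool status -v pool     # reports "resilver in progress" with a percent-done figure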

    Read the article

  • Cloned Win7: Keyboard doesn't work

    - by Marc
    I cloned my old Windows 7 hard disk to a shiny new Seagate Momentus XT 500 GB using the free EaseUS Disk Copy tool on my laptop. After the clone process I used the Windows 7 installation disc to run the automatic startup repair. This took maybe 15 minutes, and then my cloned disk was able to start. Now the cloned disk boots up to the login screen, and then I can't do anything because my keyboard just doesn't work. I tried connecting an external USB keyboard, but this didn't help; the mouse works fine. Note that the keyboard works fine in the BIOS and in the Windows startup options menu. I booted into Safe Mode, and again the keyboard is not working at all. I also noticed that the words "Press CTRL+ALT+Delete to login" are now shown in an italic font; they used to be shown non-italic on the original disk. I have now put the original disk back in place of the clone, and with it everything works fine. Does anybody have an idea how I can get my keyboard back?

    Read the article

  • SATA DVD drive refuses to read movie DVDs

    - by poke
    Hey, I have a problem with my DVD drive (Asus DRW-2014L1T, most current firmware installed) on Windows 7 x64. When I insert a movie DVD and Windows starts to access the drive (for autoplay, or when I manually click on the drive icon), my computer hangs in a peculiar way while trying to read the disk: Explorer stops responding, and several programs won't run or their launch is horribly delayed (like Device Manager). In the end, I can't access the movie and can't even eject the disk (probably because Windows is still trying to access it). To get the disk out of the drive I then have to reboot (which sometimes doesn't work either) and eject the disk before Windows boots. The BIOS recognizes the drive just fine, and Windows is also able to read data disks (tried it with some software disks), but it refuses any movies. I have checked the region code in Device Manager, and it is correct. My notebook reads the disks just fine, by the way. I remember having the same problem with an older drive as well, but I don't remember what I did to make it work again (maybe I never fixed the problem back then). I do remember, however, that booting with the disk already inserted made Windows recognize it — but that doesn't work in this situation either. Do you have any idea how to fix this problem?

    Read the article

  • Western Digital HDD disappears and reappears in BIOS

    - by tbkn23
    I know many people have asked about similar problems, but I have a very specific case where I can't understand what's going on. I have a 3 TB Western Digital Caviar Green disk connected in my desktop, which also has a Seagate 1.5 TB disk and 2 SSD drives (OCZ and SanDisk). After working fine for quite some time (probably more than a year), my Caviar Green drive suddenly disappeared from Windows. I checked the BIOS, and it wasn't there either. I opened my PC and played with the connectors, power, etc., but nothing helped. I even tried switching connectors with those of the 1.5 TB disk, and nothing changed: the 1.5 TB Seagate was there, but the 3 TB WD was not. Now for the strange part. I have another desktop at home, so I took out my 3 TB drive, connected it there, and it worked fine! I copied the most important files off it and then made another attempt in the original desktop. Surprise! It now appeared in the BIOS and worked fine. I even ran the SMART test with the WD tools, and it said everything was intact. It doesn't end here. After leaving it overnight in the original desktop, it had disappeared again by morning. I repeated the entire process, connecting it to the second desktop, and there it was again, working fine. Now for my question: what's going on? The disk keeps appearing and disappearing in my original desktop while the other drives there work fine, and the SMART test says the disk is fine. Any ideas? Is the disk defective and should it be replaced? Or maybe there's a problem with the controller in the desktop? I'm using a Gigabyte GA-880GA-UD3H motherboard and have tried connecting the drive to both controllers (SATA2 and SATA3). EDIT: Power options are set never to turn off hard drives.

    Read the article

  • Boot records messed up on dual-boot (Windows 7 and Ubuntu) machine with SSD and HDD

    - by Michael
    I have a Lenovo IdeaPad Y570 with two hard drives, an SSD and a normal HDD, both managed by RapidDrive, with Windows 7 pre-installed. First, I shrank my 500 GB HDD a little to make some space for a Linux installation. Then I installed Linux Mint 12 onto it, also installing GRUB onto that drive (/dev/sdb); the installation program would not allow me to install GRUB on sda. Then I replaced Linux Mint with Ubuntu 12.04 but installed GRUB onto the SSD (which is /dev/sda and was the default option). After that I could not boot into Windows; only Ubuntu worked. So I did some research and tried rewriting the Windows MBR onto sda1, reinstalling GRUB, and replacing GRUB 2 with GRUB Legacy, and now I think my partition tables are totally messed up. Here is the fdisk -l output:

        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 64.0 GB, 64023257088 bytes
        255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      411647      204800    7  HPFS/NTFS/exFAT
        /dev/sda2          411648  1009430959   504509656    7  HPFS/NTFS/exFAT

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x5e5d1cc8

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        1979   884389887   442193954+  12  Compaq diagnostics
        /dev/sdb2       884391934   976771071    46189569    5  Extended
        /dev/sdb5       884391936   937705471    26656768   83  Linux
        /dev/sdb6       937707520   967006207    14649344   83  Linux
        /dev/sdb7       967008256   976771071     4881408   82  Linux swap / Solaris

    I also can't mount any Windows partitions to recover data. When I open GParted, the whole sda disk appears unallocated and it states "can not have a partition outside the disk!"; the end-sector address of /dev/sda2 also confuses me. If I boot from the SSD, it throws some MBR error and won't boot; if I boot from the HDD, I only get the GRUB shell. How do I restore the partition tables? I can only boot the machine from a live CD. Thanks for any help.
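
    A sketch of one commonly suggested starting point from the 12.04 live session, assuming networking is available — the Boot-Repair tool for the GRUB/MBR side and testdisk for scanning the partition layout. These are suggestions to investigate, not a guaranteed fix, and neither writes to disk until explicitly told to:

        sudo add-apt-repository ppa:yannubuntu/boot-repair
        sudo apt-get update && sudo apt-get install boot-repair testdisk
        boot-repair        # GUI: "Recommended repair" reinstalls and reconfigures GRUB
        sudo testdisk      # can deep-scan a drive for lost partitions before any write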

    Read the article

  • Can't mount hard drive. Ubuntu 12.04

    - by Sam
    I am trying to recover some pictures on my 320 GB hard disk, so I booted a live Ubuntu CD and am in that right now. In the devices list it shows my USB drive, but not the 320 GB hard disk. I can see the disk in Disk Utility (it says it's on /dev/sda), but it's not mounted, and it says it has a few bad sectors but is otherwise OK. In Disk Usage Analyzer it says my maximum capacity is 13.4 GB, so it's definitely not using the 320 GB hard disk. I tried the following:

        sudo mkdir /media/newhd             (worked)
        sudo mount /dev/sda /media/newhd    (didn't work: it says I must specify the filesystem type)

    I then tried:

        fsck.ext4 -f /dev/sda

    which didn't work either. It said: "Superblock invalid, trying backup blocks...", then: "Bad magic number in super-block while trying to open /dev/sda. The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock." Does anyone have any ideas? The whole problem started when my Windows Vista machine said "Can't find operating system". Any ideas on how I can get onto my hard drive at /dev/sda?
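
    A sketch of the usual next steps, assuming the pictures live on an NTFS partition (the disk ran Windows Vista): the filesystem sits on a partition such as /dev/sda1, not on the bare /dev/sda device, which is why both commands above failed — and fsck.ext4 only applies to ext filesystems, so its "bad magic number" complaint is expected on an NTFS or whole-disk target:

        sudo fdisk -l /dev/sda                                # list the actual partitions on the disk
        sudo mount -t ntfs-3g -o ro /dev/sda1 /media/newhd    # mount the first partition read-only (safer for recovery)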

    Read the article

  • Windows Home Server 2011, No disks "suitable for a backup destination"

    - by Scott Beeson
    I recently installed Windows Home Server 2011 and love it. However, when I try to set up server backups, it says no suitable disks are available. Initially, before I set up my RAID, it found one of my twin drives and said it would work; once I set up the mirroring, that one was no longer available (obviously). However, I have an internal SATA 1 TB drive and an external USB 2.0 1 TB drive hooked up. Both are recognized by Disk Management, yet WHS 2011 still says nothing is suitable for backups. The two drives' details are as follows (edit to clarify: the system partition is on Disk 0, not listed below; the two below are the ones that SHOULD be available for server backups):

        Disk 1: Dynamic; "Data" (D:), 931.51 GB NTFS, Healthy
        Disk 3: Basic; 200 MB, Healthy (EFI System Partition); "Backup", 930.66 GB NTFS, Healthy (Primary Partition)

    What's a bit odd is that in Disk Management the "Backup" volume does not show a drive letter, even though I assigned Z: (which is reflected in "My Computer"). I also cannot make this a dynamic disk, as it says that is unsupported by the device.

    Read the article

  • Partition problem trying to install Windows 7 Starter

    - by ant2009
    Hello. HP Mini 210 here. I am trying to install Windows 7 Starter. Currently I have Fedora 14 Xfce installed, and I have allocated a 24 GB NTFS partition on the hard disk for Windows. My current partitions are as follows:

        /dev/sda2        97G  4.9G   91G   6% /
        tmpfs           494M   92K  494M   1% /dev/shm
        /dev/sda1       485M   68M  392M  15% /boot
        /dev/sda5       169G   26G  135G  16% /home

    I have created a bootable USB drive to install Windows 7 Starter. When the computer boots into the Windows setup and I select the partition I want to install Windows on, I get the following message: "Setup was unable to create a new system partition or locate an existing system partition." This is setup displaying all my partitions:

        Disk 0 Partition 1    500 MB      0  Primary
        Disk 0 Partition 2    97.7 GB     0  Primary
        Disk 0 Partition 3    4 GB        0  Primary
        Disk 0 Partition 4    171.3 GB    0  Logical
        Disk 0 Partition 5    24.6 GB  24.5  Logical   <-- trying to install on this NTFS partition

    I have also tried deleting the partition in setup and creating a new one, and formatting the partition. However, I still get the same error message. Many thanks for any advice.

    Read the article

  • Shrinking Windows and recovery partitions on the Samsung new Series 9

    - by bobbaluba
    I just bought a Samsung NP900X3C, and as I was about to install Linux I noticed that the Windows and recovery partitions occupy a major portion of the disk. The disk is a 128 GB SSD, and I want to keep the Windows partition in order to play some games once in a while, but the Windows partition already has 45 GB used (with no installed programs) and the recovery partition is 20 GB. That leaves under 60 GB for Linux, which is not optimal, since Linux is what I'm going to be using most of the time, and there would be no room for games on the Windows partition. There are also two small partitions whose purpose I don't know: one of 100 MB at the start of the disk that I'm guessing is some kind of boot partition, and one of 5 GB that is described as an OS/2 hidden C: drive. What I'm wondering is: can I delete the recovery partition? And what about the mystical 5 GB partition? Here is what fdisk reports:

        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 128.0 GB, 128035676160 bytes
        255 heads, 63 sectors/track, 15566 cylinders, total 250069680 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x83953ffc

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      206847      102400    7  HPFS/NTFS/exFAT
        /dev/sda2          206848   198273023    99033088    7  HPFS/NTFS/exFAT
        /dev/sda3       198273024   207276031     4501504   84  OS/2 hidden C: drive
        /dev/sda4       207276032   250068991    21396480   27  Hidden NTFS WinRE

    Read the article

  • List of available whitepapers as at 04 May 2010

    - by Anthony Shorten
    The following list covers the whitepapers available, from My Oracle Support, for any Oracle Utilities Application Framework based product (KB Id - Document Title: Contents):

        559880.1 - ConfigLab Design Guidelines: Whitepaper outlining how to design and implement a ConfigLab solution.
        560367.1 - Technical Best Practices for Oracle Utilities Application Framework Based Products: Whitepaper summarizing common technical best practices used by partners, implementation teams and customers.
        560382.1 - Performance Troubleshooting Guideline Series: A set of whitepapers on tracking performance at each tier in the framework. The individual whitepapers are as follows:
            Concepts - general concepts and performance troubleshooting processes.
            Client Troubleshooting - general troubleshooting of the browser client with common issues and resolutions.
            Network Troubleshooting - general troubleshooting of the network with common issues and resolutions.
            Web Application Server Troubleshooting - general troubleshooting of the web application server with common issues and resolutions.
            Server Troubleshooting - general troubleshooting of the operating system with common issues and resolutions.
            Database Troubleshooting - general troubleshooting of the database with common issues and resolutions.
            Batch Troubleshooting - general troubleshooting of the background processing component of the product with common issues and resolutions.
        560401.1 - Software Configuration Management Series: A set of whitepapers on how to manage customization (code and data) using the tools provided with the framework. The individual whitepapers are as follows:
            Concepts - general concepts and introduction.
            Environment Management - principles and techniques for creating and managing environments.
            Version Management - integration of version control and version management of configuration items.
            Release Management - packaging configuration items into a release.
            Distribution - distribution and installation of releases across environments.
            Change Management - generic change management processes for product implementations.
            Status Accounting - status reporting techniques using product facilities.
            Defect Management - generic defect management processes for product implementations.
            Implementing Single Fixes - discussion of the single fix architecture and how to use it in an implementation.
            Implementing Service Packs - discussion of service packs and how to use them in an implementation.
            Implementing Upgrades - discussion of the upgrade process and common techniques for minimizing the impact of upgrades.
        773473.1 - Oracle Utilities Application Framework Security Overview: Whitepaper summarizing the security facilities in the framework. Updated for OUAF 4.0.1.
        774783.1 - LDAP Integration for Oracle Utilities Application Framework based products: Whitepaper summarizing how to integrate an external LDAP based security repository with the framework.
        789060.1 - Oracle Utilities Application Framework Integration Overview: Whitepaper summarizing the various common integration techniques used with the product (with case studies).
        799912.1 - Single Sign On Integration for Oracle Utilities Application Framework based products: Whitepaper outlining a generic process for integrating an SSO product with the framework.
        807068.1 - Oracle Utilities Application Framework Architecture Guidelines: This whitepaper outlines the different variations of architecture that can be considered. Each variation includes advice on configuration and other considerations.
        836362.1 - Batch Best Practices for Oracle Utilities Application Framework based products: This whitepaper outlines the common and best practices implemented by sites all over the world. Updated for OUAF 4.0.1.
        856854.1 - Technical Best Practices V1 Addendum: Addendum to Technical Best Practices for Oracle Utilities Application Framework Based Products containing only V1.x-specific advice.
        942074.1 - XAI Best Practices: This whitepaper outlines the common integration tasks and best practices for the Web Services Integration provided by the Oracle Utilities Application Framework. Updated for OUAF 4.0.1.
        970785.1 - Oracle Identity Manager Integration Overview: This whitepaper outlines the principles of the prebuilt integration between Oracle Utilities Application Framework based products and Oracle Identity Manager, used to provision user and user group security information.
        1068958.1 - Production Environment Configuration Guidelines (New!): Whitepaper outlining common production-level settings for the products.

    Read the article

  • SJS AS 9.1 U2 (GF v2 U2) - Patch 25 // GF v2.1 - Patch 19 // Sun GlassFish Enterprise Server v2.1.1 Patch 13

    - by arungupta
    SJS AS 9.1 U2 (GF v2 U2) patch 25 is a commercial (restricted) patch (see Overview of GFv2) available as part of Oracle's commercial support for GlassFish. This release is also patch 19 of GlassFish 2.1 and patch 13 of GlassFish 2.1.1. The file-based patches were released on Sep 1, 2011; the package-based patches were released on Sep 13, 2011.

    Release overview:

        Description:
            SJS AS 9.1 U2 (GFv2 U2) - Patch 25 - file- and package-based patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.
            GlassFish 2.1 - Patch 19 - file- and package-based patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.
            GlassFish 2.1.1 - Patch 13 - file- and package-based patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.
        Patch Ids - this release comes in 3 different variants:
            Package-based patches with HADB:
                Solaris SPARC - [128640-27]
                Solaris i586 - [128641-27]
                Linux RPM - [128642-27]
            File-based patches with HADB:
                Solaris SPARC - [128643-27]
                Solaris i586 - [128644-27]
                Linux - [128645-27]
                Windows - [128646-27]
            File-based patches without HADB:
                Solaris SPARC - [128647-27]
                Solaris i586 - [128648-27]
                Linux - [128649-27]
                Windows - [128650-27]
                AIX - [137916-27]
        Update date: Nov 23, 2011
        Comment: Commercial (for-fee) release with regular bug fixes. This is patch 25 for SJS AS 9.1 U2; it is also patch 19 for GlassFish v2.1 and patch 13 for GlassFish v2.1.1. It contains the fixes from the previous patches plus fixes for 18 unique defects.
        Status: CURRENT

    Bugs fixed in this patch:

        [12823919]: RESPONSE BYTECHUNK FLUSH WILL GENERATE A MIMEHEADER WHEN SESSION REPLICATION ON
        [12818767]: INTEGRATE NEW GRIZZLY 1.0.40
        [12807660]: BUILD, STAGE AND INTEGRATING HADB
        [12807643]: INTEGRATE MQ 4.4 U2 P4
        [12802648]: GLASSFISH BUILD FAILED DUE TO METRO INTEGRATION
        [12799002]: JNDI RESOURCE NOT ENABLED IF TARGETTING USING ADMIN GUI ON GF 2.1.1 PATCH 11
        [12794672]: ORG.APACHE.JASPER.RUNTIME.BODYCONTENTIMPL DOES NOT COMPACT CB BUFFER
        [12772029]: BUG 12308270 - NEED HOTFIX FROM GF RUNNING OPENSSO
        [12749346]: VERSION CHANGES FOR GLASSFISH V2.1.1 PATCH 13
        [12749151]: INTEGRATING METRO 1.6.1-B01 INTO GF 2.1.1 P13
        [12719221]: PORTUNIFICATION WSTCPPROTOCOLFINDER.FIND NULLPOINTEREXCEPTION THROWN
        [12695620]: HADB: LOGBUFFERSIZE CALCULATED INCORRECTLY FOR VALUES 120 MB AND THE MEMORY FO
        [12687345]: ENVIRONMENT VARIABLE PARSING FOR SUN_APPSVR_NOBACKUP CAN FAIL DEPENDING ENV VARS
        [12547651]: GLASSFISH DISPLAY BUG
        [12359965]: GEREQUESTURI RETURNS URI WITH NULL PREPENDED INTERMITTENT AFTER UPGRADE
        [12308270]: SUNBT7020210 ENHANCE JAXRPC SOAP RESPONSE USE PREVIOUS CONFIGURED NAMESPACE PREF
        [12308003]: SUNBT7018895 FAILURE TO DEPLOY OR RUN WEBSERVICE AFTER UPDATING TO GF 2.1.1 P07
        [12246256]: SUNBT6739013 [RN]GLASSFISH/SUN APPLICATION INSTALLER CRASHES ON LINUX

    Additional notes: more details about these bugs can be found at My Oracle Support.

    Read the article

  • Oracle GoldenGate Active-Active Part 1

    - by Nick_W
    My name is Nick Wagner, and I'm a recent addition to the Oracle Maximum Availability Architecture (MAA) product management team. I've spent the last 15+ years working on database replication products, the last 10 of them on the Oracle GoldenGate product, so most of my posts will probably be focused on OGG. One question that comes up all the time is around active-active replication with Oracle GoldenGate: how do I know if my application is a good fit for active-active replication with GoldenGate? To answer that, it really comes down to how you plan on handling conflict resolution. I will delve into topology and deployment in a later blog, but here is a simple architecture: [diagram: SystemA and SystemB replicating to each other]. The two most common resolution routines are host-based resolution and timestamp-based resolution. Host-based resolution is used less often, but works with the fewest application changes. Think of it like this: any transactions from SystemA always take precedence over any transactions from SystemB. If there is a conflict on SystemB, the record from SystemA will overwrite it; if there is a conflict on SystemA, it will be ignored. It is quite a bit less restrictive, and in most cases, as long as all the tables have primary keys, host-based resolution will work just fine. Timestamp-based resolution, on the other hand, is a little trickier. In this case, you decide which record wins based on timestamps: does the older record get overwritten by the newer record, or vice versa? This method not only requires primary keys on every table, it also requires every table to have a timestamp/date column that is updated each time a record is inserted or updated. Most homegrown applications can be customized to include these requirements, but it's more difficult with 3rd-party applications, and might even be impossible for large ERP-type applications. If your database has these features - whether it's primary keys for host-based resolution, or primary keys and timestamp columns for timestamp-based resolution - then your application could be a great candidate for active-active replication. But table structure is not the only requirement. The other consideration applies when there is a conflict: do I need to perform any notification, or track down the user whose data was overwritten? In most cases I don't think it's necessary, but if it is required, OGG can always create an exceptions table that contains all of the overwritten transactions so that people can be notified. It's a bit of extra work to implement this type of option, but if the business requires it, it can be done. Unless someone is constantly monitoring this exceptions table or an automated process deals with exceptions, there will be a delay in getting a response back to the end user. Ideally, when setting up active-active resolution we can include some simple procedural steps or configuration options that reduce, or in some cases eliminate, the potential for conflicts. This makes the whole implementation that much easier and more foolproof. I'll cover these in my next blog.
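
    To make the timestamp-based option concrete, here is a sketch of what conflict detection and resolution can look like in a Replicat parameter file — the table and column names (sales.orders, last_mod_ts) are hypothetical, and the exact clauses should be checked against the OGG reference for your release:

        MAP sales.orders, TARGET sales.orders,
          COMPARECOLS (ON UPDATE ALL, ON DELETE ALL),
          RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMAX (last_mod_ts)));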

    Read the article

  • SOA, Governance, and Drugs

    Why is IT governance important in service oriented architecture (SOA)? IT governance provides a framework for making appropriate decisions based on company guidelines and accepted standards. This framework also outlines each stakeholder's responsibilities and authority when making important architectural or design decisions. Furthermore, this framework of governance defines parameters and constraints that give context and perspective when making decisions. Governance as it applies to SOA ensures that specific design principles and patterns are used when developing and maintaining services. When governance is consistently applied to systems, the following benefits are achieved, according to Anne Thomas Manes (2010). Governance makes sure that services conform to standard interface patterns and common data modeling practices, and promotes the incorporation of existing system functionality by building on top of other available services across a system. Governance defines development standards based on proven design principles and patterns that promote reuse and composition. Governance provides developers a set of proven design principles, standards and practices that reduce dependencies between system components; by following these guidelines, individual components will be easier to maintain. Personally, I am a fan of IT governance and feel that it is a valuable part of any corporate IT department. However, how it is implemented can really affect its value. Companies need to find a way to ensure that governance does not become extreme in its policies and procedures. I know that I would really dislike working under a completely totalitarian or laissez-faire version of governance. Developers need to be able to be creative in their designs, and too much governance can impede the design process and prevent the most optimal design from being developed. On the other hand, with no governance enforced, no standards will be followed and accepted design patterns will be ignored. I have personally had to spend a lot of time working in this particular scenario, and I have found that the concept of code reuse and composition is almost nonexistent; because of this, too much time and money is wasted redeveloping aspects of an application that already exist within the system as a whole. I think moving forward we will see a staggered form of IT governance, whether for SOA or IT in general. Depending on the size of a company and the size of its IT department, I can see IT governance as a layered approach: the top layer is defined by enterprise architects who focus on abstract concepts pertaining to high-level design, general guidelines, acceptable best practices, and recommended design patterns. The next layer is defined by solution architects or department managers who further expand on the abstracted guidelines defined by the enterprise architects; this layer contains further definitions as to when various design patterns, coding standards, and best practices are to be applied, based on the context of the solutions being developed by the department. The final layer is defined by the system designer or a solutions architect assigned to a project: they define which design patterns will be used in a solution, the naming conventions, and how the system will function, within the best practices defined by the previous layers.
    This layered approach allows IT departments to be flexible, in that system designers have creative leeway in designing solutions to meet the needs of the business, but they must operate within the confines of the abstracted IT governance guidelines. A real-world example of this can be seen in the United States as it pertains to governance of the people: the US government defines rules and regulations in the abstract, and the state governments take these guidelines and apply them based on the will of the people in each individual state. Furthermore, the county or city governments are the ones that actually enforce these rules, based on how they are interpreted by the local community. To further define my example: the United States government holds that marijuana is illegal. Each individual state has the option to treat this regulation as it wishes; the state of Florida determines that all uses of the drug are illegal, but the state of California legally allows the use of marijuana for medicinal purposes only. Based on these accepted practices, each local government enforces the rules: a police officer will arrest anyone in the state of Florida for having this drug on them as they walk down the street, but in California, a person with a medical prescription for the drug will not be arrested. References: Thomas Manes, Anne (2010). Understanding SOA Governance: http://www.soamag.com/I40/0610-2.php

    Read the article

  • SBS 2008 Backup Drive Full - Error Code '2147942512'

    - by HK1
    We are using Windows Backup on SBS 2008 SP2 and backing up to 1 TB external hard drives. Recently, after switching drives, our backup started failing because the backup drive is full and auto-delete isn't automatically deleting older backups/shadow copies. I'm trying to get more information to help me effectively prevent this problem from recurring in the future. How I can tell that the drive is getting full: in Event Viewer under Windows Logs > Application, I'm seeing Event ID 517, but it fails to show an intelligible description. However, under Applications and Services Logs > Microsoft > Windows > Backup > Operational, I'm seeing an event with an ID of 5 and a description like this: "Backup started at '10/4/2011 12:30:12 PM' failed with following error code '2147942512'." One of the most informative posts I've found on this error is located on Microsoft's TechNet forums. In that post, a Microsoft representative gives this hazy explanation: the "auto-delete feature to ensure that at least some old backup copies are maintained on the disk -- does not automatically delete backups if space utilization by older copies is less than 1/8 of the disk size or in other words, 13% of the disk size. that means if the one full backup copy does not fit in the 7/8 of the disk size, backup may fail with disk full error. auto-delete will not automatically delete older versions to reclaim more older versions of backup." In the above explanation, I do not understand what is meant by "older copies", except that it appears anything older than the very last shadow copy would be considered an "older copy". I'm going to assume that this problem, where auto-delete will not work, affects any hard drive that is large enough to make an effective backup drive - in other words, any drive large enough to hold more than one backup/shadow copy at once. The same MS representative proposes the solution of using a larger backup drive. I can't understand how this will help; it appears to me it will simply delay the problem until a later date. In order to resolve the problem for now, I did the following:

        1. Assign the backup drive a drive letter under Disk Management.
        2. Run the command line with administrative rights.
        3. diskshadow.exe [enter]
        4. delete shadows oldest x: [enter]    (where x: is the letter you assigned your backup drive)

    I manually ran the above command some 60 or 80 times to free up about 200 GB of space on my 1 TB external hard drive. However, I do not feel this is a satisfactory solution to prevent the problem from happening again in the future. Does anyone have a solution to prevent your Windows Server backup drive from getting full?
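
    As an aside, the manual repetition can be scripted — a sketch, assuming the backup drive is Z: and an elevated interactive prompt (diskshadow accepts a script file via its /s switch):

        rem del-oldest.txt contains the single line:  delete shadows oldest Z:
        for /L %i in (1,1,60) do diskshadow /s del-oldest.txt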

    Read the article

  • Is there a C pre-processor which eliminates #ifdef blocks based on values defined/undefined?

    - by Jonathan Leffler
    Original question: What I'd like is not a standard C pre-processor, but a variation on it which would accept from somewhere - probably the command line via -DNAME1 and -UNAME2 options - a specification of which macros are defined, and would then eliminate dead code. It may be easier to understand what I'm after with some examples:

        #ifdef NAME1
        #define ALBUQUERQUE "ambidextrous"
        #else
        #define PHANTASMAGORIA "ghostly"
        #endif

    If the command were run with '-DNAME1', the output would be:

        #define ALBUQUERQUE "ambidextrous"

    If the command were run with '-UNAME1', the output would be:

        #define PHANTASMAGORIA "ghostly"

    If the command were run with neither option, the output would be the same as the input. This is a simple case - I'd be hoping that the code could handle more complex cases too. To illustrate with a real-world but still simple example:

        #ifdef USE_VOID
        #ifdef PLATFORM1
        #define VOID void
        #else
        #undef VOID
        typedef void VOID;
        #endif /* PLATFORM1 */
        typedef void * VOIDPTR;
        #else
        typedef mint VOID;
        typedef char * VOIDPTR;
        #endif /* USE_VOID */

    I'd like to run the command with -DUSE_VOID -UPLATFORM1 and get the output:

        #undef VOID
        typedef void VOID;
        typedef void * VOIDPTR;

    Another example:

        #ifndef DOUBLEPAD
        #if (defined NT) || (defined OLDUNIX)
        #define DOUBLEPAD 8
        #else
        #define DOUBLEPAD 0
        #endif /* NT */
        #endif /* !DOUBLEPAD */

    Ideally, I'd like to run with -UOLDUNIX and get the output:

        #ifndef DOUBLEPAD
        #if (defined NT)
        #define DOUBLEPAD 8
        #else
        #define DOUBLEPAD 0
        #endif /* NT */
        #endif /* !DOUBLEPAD */

    This may be pushing my luck! Motivation: a large, ancient code base with lots of conditional code. Many of the conditions no longer apply - the OLDUNIX platform, for example, is no longer made and no longer supported, so there is no need for references to it in the code. Other conditions are always true. For example, features are added with conditional compilation so that a single version of the code can be used both for older versions of the software where the feature is not available and for newer versions where it is available (more or less). Eventually, the old versions without the feature are no longer supported - everything uses the feature - so the condition on whether the feature is present should be removed, along with the 'when feature is absent' code. I'd like a tool to do the job automatically, because it will be faster and more reliable than doing it manually (which is rather critical when the code base includes 21,500 source files). (A really clever version of the tool might read #include'd files to determine whether the control macros - those specified by -D or -U on the command line - are defined in those files. I'm not sure whether that's truly helpful except as a backup diagnostic. Whatever else it does, though, the pseudo-pre-processor must not expand macros or include files verbatim. The output must be source similar to, but usually simpler than, the input code.)

    Status report (one year later): After a year of use, I am very happy with 'sunifdef', recommended by the selected answer. It hasn't made a mistake yet, and I don't expect it to. The only quibble I have with it is stylistic. Given an input such as:

        #if (defined(A) && defined(B)) || defined(C) || (defined(D) && defined(E))

    and run with '-UC' (C is never defined), the output is:

        #if defined(A) && defined(B) || defined(D) && defined(E)

    This is technically correct, because '&&' binds tighter than '||', but it is an open invitation to confusion.
    I would much prefer it to include parentheses around the sets of '&&' conditions, as in the original:

        #if (defined(A) && defined(B)) || (defined(D) && defined(E))

    However, given the obscurity of some of the code I have to work with, for that to be the biggest nit-pick is a strong compliment; it is a valuable tool to me.

    The new kid on the block: Having checked the URL for inclusion in the information above, I see that (as predicted) there is a new program called Coan that is the successor to 'sunifdef'. It has been available on SourceForge since January 2010. I'll be checking it out... further reports later this year, or maybe next year, or sometime, or never.
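
    For anyone landing here, a sketch of the filter-style invocation against the second example above — this assumes sunifdef keeps the classic unifdef stdin/stdout conventions, so check the man page of the version you install:

        sunifdef -DUSE_VOID -UPLATFORM1 < original.c > simplified.c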

    Read the article

  • How to change the data in Telerik's RadGrid based on Calendar's selected dates?

    - by Jronny
    I was creating another user control with Telerik's RadGrid and Calendar controls:

        <%@ Register Assembly="Telerik.Web.UI" Namespace="Telerik.Web.UI" TagPrefix="telerik" %>
        <table class="style1">
          <tr>
            <td>From</td>
            <td>To</td>
          </tr>
          <tr>
            <td><asp:Calendar ID="Calendar1" runat="server" SelectionMode="Day"></asp:Calendar></td>
            <td><asp:Calendar ID="Calendar2" runat="server" SelectionMode="Day"></asp:Calendar></td>
          </tr>
          <tr>
            <td><asp:Button ID="btnSubmit" runat="server" Text="Submit" OnClick="btnSubmit_Click" /></td>
            <td><asp:Button ID="btnClear" runat="server" Text="Clear" OnClick="btnClear_Click" /></td>
          </tr>
        </table>
        <telerik:RadGrid ID="RadGrid1" runat="server">
          <MasterTableView CommandItemDisplay="Top"></MasterTableView>
        </telerik:RadGrid>

    and I am using LINQ in the code-behind:

        Entities1 entities = new Entities1();
        public static object DataSource = null;

        protected void Page_Load(object sender, EventArgs e)
        {
            if (DataSource == null)
            {
                DataSource = (from entity in entities.nsc_moneytransaction
                              select new { date = entity.transaction_date.Value,
                                           username = entity.username,
                                           cashbalance = entity.cash_balance })
                             .OrderByDescending(a => a.date);
            }
            BindData();
        }

        public void BindData()
        {
            RadGrid1.DataSource = DataSource;
        }

        protected void btnSubmit_Click(object sender, EventArgs e)
        {
            DateTime startdate = new DateTime();
            DateTime enddatedate = new DateTime();
            if (Calendar1.SelectedDate != null && Calendar2.SelectedDate != null)
            {
                startdate = Calendar1.SelectedDate;
                enddatedate = Calendar2.SelectedDate;
                var queryDateRange = from entity in entities.nsc_moneytransaction
                                     where DateTime.Parse(entity.transaction_date.Value.ToShortDateString()) >= DateTime.Parse(startdate.ToShortDateString())
                                        && DateTime.Parse(entity.transaction_date.Value.ToShortDateString()) <= DateTime.Parse(enddatedate.ToShortDateString())
                                     select new { date = entity.transaction_date.Value, username = entity.username, cashbalance = entity.cash_balance };
                DataSource = queryDateRange.OrderByDescending(a => a.date);
            }
            else if (Calendar1.SelectedDate != null)
            {
                startdate = Calendar1.SelectedDate;
                var querySetDate = from entity in entities.nsc_moneytransaction
                                   where entity.transaction_date.Value == startdate
                                   select new { date = entity.transaction_date.Value, username = entity.username, cashbalance = entity.cash_balance };
                DataSource = querySetDate.OrderByDescending(a => a.date);
            }
            BindData();
        }

        protected void btnClear_Click(object sender, EventArgs e)
        {
            Calendar1.SelectedDates.Clear();
            Calendar2.SelectedDates.Clear();
        }

    The problems are: (1) when I click the submit button, the data in the RadGrid is not changed; (2) how can we check whether nothing is selected in the Calendar controls? A date (01/01/0001) is set even if we do not select anything from the calendar, so Calendar1.SelectedDate != null is not enough. =( Thanks.
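
    A sketch of the two likely culprits, offered as a guess from the code above rather than a confirmed answer: the grid's data source is assigned but never re-bound, and Calendar.SelectedDate is a non-nullable DateTime, so the null comparisons always succeed:

        public void BindData()
        {
            RadGrid1.DataSource = DataSource;
            RadGrid1.DataBind();   // or RadGrid1.Rebind(); without this the grid keeps rendering the old data
        }

        // SelectedDate can never be null; an empty selection is DateTime.MinValue (01/01/0001)
        bool hasStart = Calendar1.SelectedDate != DateTime.MinValue;
        bool hasEnd   = Calendar2.SelectedDate != DateTime.MinValue;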

    Read the article

  • "Best" language /architecture for browser-based app with ODBC and sockets? (subjective)

    - by mawg
    Sorry to ask a subjective question, but I would welcome some advice. I am an experienced programmer of embedded s/w, but haven't done much network programming, although I have done a fair bit of hobbyist PHP. Anyway, I have to develop what is probably a fairly general type of app, as shown in this crude diagram:

        ----------------------------------------------------------------
        | Browser / user interface                                     |
        | Takes input from user form and writes data to d/b.           |
        | Also gets data and updates browser contents when the d/b     |
        | contents are changed because of info received over TCP/IP.   |
        ----------------------------------------------------------------
        |                             ODBC                             |
        ----------------------------------------------------------------
        |                           database                           |
        ----------------------------------------------------------------
        |                             ODBC                             |
        ----------------------------------------------------------------
        | Socket (TCP/IP)                                              |
        | Sends data out when d/b is updated from browser.             |
        | Also updates d/b when data are received over TCP/IP.         |
        ----------------------------------------------------------------

    As I say, I imagine this to be a fairly typical architecture - am I right? The client is insisting on MSIE: unless I can show compelling technical reasons for Firefox or another browser, it will have to be MSIE (are there any compelling technical reasons?). So, with MSIE (almost) a given, I had thought to use PHP, since I know it, but the client seems awfully keen on Java (which ought to be OK since I am conversant with C++). It would seem to make sense to use the same language for the "upper" interface between the web pages (which the app generates) and the d/b, and for the "lower" interface between the d/b and the socket (a single language means a single set of tools, etc.). So the (probably highly subjective) question is: which language should I choose? As I say, the client is keen on Java. Any compelling reason why not? Is it generally a good choice for the sort of thing described here? Any other hints & tips gratefully appreciated (and up-voted): URLs, books, tool-chain suggestions, etc.

    Read the article

  • Spring 3 simple extensionless URL mappings with annotation-based mapping - impossible?

    - by caerphilly
    Hi, I'm using Spring 3 and trying to set up a simple web app using annotations to define controller mappings. This seems to be incredibly difficult without peppering all the URLs with *.form or *.do. Because part of the site needs to be password-protected, these URLs are all under /secure; there is a <security-constraint> in web.xml protecting everything under that root. I want to map all the Spring controllers under /secure/app/. Example URLs would be:

        /secure/app/landingpage
        /secure/app/edit/customer/{id}

    each of which I would handle with an appropriate jsp/xml/whatever. So, in web.xml I have this:

        <servlet>
            <servlet-name>dispatcher</servlet-name>
            <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
        </servlet>
        <servlet-mapping>
            <servlet-name>dispatcher</servlet-name>
            <url-pattern>/secure/app/*</url-pattern>
        </servlet-mapping>

    And in dispatcher-servlet.xml I have this:

        <context:component-scan base-package="controller" />

    In the controller package I have a controller class:

        package controller;

        import org.springframework.stereotype.Controller;
        import org.springframework.web.bind.annotation.RequestMapping;
        import org.springframework.web.bind.annotation.RequestMethod;
        import org.springframework.web.servlet.ModelAndView;
        import javax.servlet.http.HttpServletRequest;

        @Controller
        @RequestMapping("/secure/app/main")
        public class HomePageController
        {
            public HomePageController() { }

            @RequestMapping(method = RequestMethod.GET)
            public ModelAndView getPage(HttpServletRequest request)
            {
                ModelAndView mav = new ModelAndView();
                mav.setViewName("main");
                return mav;
            }
        }

    Under /WEB-INF/jsp I have a main.jsp, and a suitable view resolver set up to point to it. I had things working when mapping the dispatcher using *.form, but I can't get anything working using the above code. When Spring starts up, it appears to map everything correctly:

        13:22:36,762 INFO main annotation.DefaultAnnotationHandlerMapping:399 - Mapped URL path [/secure/app/main] onto handler [controller.HomePageController@2a8ab08f]

    I also noticed this line, which looked suspicious:

        13:25:49,578 DEBUG main servlet.DispatcherServlet:443 - No HandlerMappings found in servlet 'dispatcher': using default

    At run time, any attempt to view /secure/app/main just returns a 404 error in Tomcat, with this log output:

        13:25:53,382 DEBUG http-8080-1 servlet.DispatcherServlet:842 - DispatcherServlet with name 'dispatcher' determining Last-Modified value for [/secure/app/main]
        13:25:53,383 DEBUG http-8080-1 servlet.DispatcherServlet:850 - No handler found in getLastModified
        13:25:53,390 DEBUG http-8080-1 servlet.DispatcherServlet:690 - DispatcherServlet with name 'dispatcher' processing GET request for [/secure/app/main]
        13:25:53,393 WARN http-8080-1 servlet.PageNotFound:962 - No mapping found for HTTP request with URI [/secure/app/main] in DispatcherServlet with name 'dispatcher'
        13:25:53,393 DEBUG http-8080-1 servlet.DispatcherServlet:677 - Successfully completed request

    So... Spring maps a URL, and then "forgets" about that mapping a second later? What is going on? Thanks.
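
    One sketch of a likely cause, offered as a hypothesis rather than a confirmed diagnosis: with the servlet mapped to /secure/app/*, the path evaluated inside the servlet is just /main, while the annotation declares the full /secure/app/main, so nothing matches at request time. Two ways to align them:

        // (a) map the controller relative to the servlet mapping
        @Controller
        @RequestMapping("/main")
        public class HomePageController { /* ... */ }

        <!-- (b) or tell the handler mapping in dispatcher-servlet.xml to match on the full request path -->
        <bean class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping">
            <property name="alwaysUseFullPath" value="true" />
        </bean>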

    Read the article

  • How can I line up WPF items in a Horizontal WrapPanel so they line up based on an arbitrary vertical

    - by Scott Whitlock
    I'm trying to create a View in WPF and having a hard time figuring out how to set it up. Here's what I'm trying to build: my ViewModel exposes an IEnumerable property called Items. Each item is an event on a timeline, and each one implements ITimelineItem. The ViewModel for each item has its own DataTemplate to display it. I want to display all the items on the timeline connected by a line; I'm thinking a WrapPanel inside a ListView would work well for this. However, the height of each item will vary depending on the information it displays. Some items will have graphic objects right on the line (like a circle or a diamond, or whatever), and some have annotations above or below the line. So it seems complicated to me. It seems that each item on the timeline has to render its own segment of the line. That's fine. But the distance from the top of the item to the line (and from the bottom of the item to the line) can vary. If one item has the line 50 px down from the top and the next item has it 100 px down from the top, then the first item needs 50 px of padding so that the line segments join up. I think I could solve that problem; however, we only need to add padding if the two items are on the same row in the WrapPanel! Let's say there are 5 items and only room on the screen for 3 across... the WrapPanel will put the other two on the next row. That's OK, but it means only the first 3 need to pad together, and the last 2 need to pad together. This is what's giving me a headache. Is there another approach I could look at?

    Read the article

  • PHP, CodeIgniter: How to set date/time based on a user's timezone/location globally in a web app?

    - by Abs
    Hello all, I have just realised that if I add a particular record to my MySQL database, it gets the date/time of the server, not of the particular user and wherever they are located — which makes my search-by-date function useless! Users will not be able to search by when they added a record in their own timezone, only by when it was added in the server's timezone. Is there a way in CodeIgniter to globally set the time and date specific to a user's location (maybe using their IP), so that every time I call date() or time() that user's timezone is used? What I am actually asking is probably: how do I make my application respect each user's timezone? Maybe it's better to store each user's timezone in their profile, keep a standard time (the server's time) in the database, and then convert the time for each user? Thanks all.
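
    A sketch of the profile-based approach from the last sentence, assuming each user's profile stores a timezone identifier such as 'Europe/London' (a hypothetical $userTimezone below) and that times are stored in a common reference zone, UTC here:

        <?php
        // store the current moment in UTC when writing to MySQL
        $utc = new DateTime('now', new DateTimeZone('UTC'));
        $stored = $utc->format('Y-m-d H:i:s');

        // convert back to the user's zone when displaying or comparing search dates
        $local = new DateTime($stored, new DateTimeZone('UTC'));
        $local->setTimezone(new DateTimeZone($userTimezone)); // $userTimezone comes from the profile
        echo $local->format('Y-m-d H:i:s');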

    Read the article

  • In Reporting Services, how do I filter a drop-down parameter list based on another selected parameter?

    - by Lee Englestone
    Question: In a Reporting Services report, how do I filter a second drop-down list of cars to show only the cars whose ManufacturerId equals the selected manufacturer (from the first drop-down list)?

    Report datasets — I have two:

        Dataset 1: a list of manufacturers, from a stored procedure Report_Manufacturers_P.
        Dataset 2: a list of cars, including a column holding each car's manufacturer id, from a stored procedure Report_Cars_P.

    Report parameters — on the report I have two:

        Parameter 1: ManufacturerId, set from a drop-down list of manufacturers (Dataset 1).
        Parameter 2: CarId, set from a drop-down list of cars (Dataset 2).

    I've tried creating another sproc, Report_Manufacturer_Cars_P, that takes the ManufacturerId as an integer and returns a list of cars made by that manufacturer. Any ideas? Selecting a manufacturer doesn't seem to kick off anything that filters the car list. Thanks in advance, -- Lee
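
    For what it's worth, the usual cascading-parameter setup is to have the second parameter's available-values dataset reference the first parameter; SSRS then refreshes the car list whenever the manufacturer changes. A sketch with hypothetical column names:

        -- query behind Report_Manufacturer_Cars_P, used as the CarId parameter's available values
        SELECT CarId, CarName
        FROM Cars
        WHERE ManufacturerId = @ManufacturerId
        ORDER BY CarName;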

    Read the article
