Search Results

Search found 39511 results on 1581 pages for 'exc bad access'.

  • CentOS - Add additional hard drive raid arrays on Dell Perc 5/i card

    - by Quanano
    We have a Dell PowerEdge 2900 system with a Dell Perc 5/i card and 4 SAS hard drives attached, with NTFS partitions on them. We installed CentOS on one RAID array on this controller with a different controller, and it is working fine. We are now trying to access the drives shown above, but they are not being shown in /dev as sdb, etc. sda is the drive we installed CentOS on, and it has sda1, sda2, sda3, etc. The CD-ROM has been picked up as well. If I scan for SCSI devices, the Perc and Adaptec controllers are both found. sg0 is the CD-ROM and sg2 is the CentOS install drive; I think sg1 is the other drive, but I cannot see any way to mount the partitions, as only the drive itself is listed in /dev. Thanks.

    EXTRA INFO

    fdisk -l:

        Disk /dev/sda: 72.7 GB, 72746008576 bytes
        255 heads, 63 sectors/track, 8844 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x11e3119f

        Device Boot   Start    End     Blocks    Id  System
        /dev/sda1  *      1     64     512000    83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/sda2        64   8845   70528000    8e  Linux LVM

        Disk /dev/mapper/vg_lal2server-lv_root: 34.4 GB, 34431041536 bytes
        255 heads, 63 sectors/track, 4186 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/mapper/vg_lal2server-lv_root doesn't contain a valid partition table

        Disk /dev/mapper/vg_lal2server-lv_swap: 21.1 GB, 21139292160 bytes
        255 heads, 63 sectors/track, 2570 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/mapper/vg_lal2server-lv_swap doesn't contain a valid partition table

        Disk /dev/mapper/vg_lal2server-lv_home: 16.6 GB, 16647192576 bytes
        255 heads, 63 sectors/track, 2023 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/mapper/vg_lal2server-lv_home doesn't contain a valid partition table

    These are all from the install HDD, not the additional hard drives.

    modprobe a320raid:

        FATAL: Module a320raid not found.

    lsscsi -v:

        [0:0:0:0]   cd/dvd   TSSTcorp CDRWDVD TS-H492C  DE02  /dev/sr0
          dir: /sys/bus/scsi/devices/0:0:0:0  [/sys/devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0]
        [4:0:10:0]  enclosu  DP BACKPLANE               1.05  -
          dir: /sys/bus/scsi/devices/4:0:10:0  [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:0:10/4:0:10:0]
        [4:2:0:0]   disk     DELL PERC 5/i              1.03  /dev/sda
          dir: /sys/bus/scsi/devices/4:2:0:0  [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:2:0/4:2:0:0]

    lsmod:

        Module               Size     Used by
        fuse                 66285    0
        des_generic          16604    0
        ecb                  2209     0
        md4                  3461     0
        nls_utf8             1455     0
        cifs                 278370   0
        autofs4              26888    4
        ipt_REJECT           2383     0
        ip6t_REJECT          4628     2
        nf_conntrack_ipv6    8748     2
        nf_defrag_ipv6       12182    1  nf_conntrack_ipv6
        xt_state             1492     2
        nf_conntrack         79453    2  nf_conntrack_ipv6,xt_state
        ip6table_filter      2889     1
        ip6_tables           19458    1  ip6table_filter
        ipv6                 322029   31 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
        bnx2                 79618    0
        ses                  6859     0
        enclosure            8395     1  ses
        dcdbas               9219     0
        serio_raw            4818     0
        sg                   30124    0
        iTCO_wdt             13662    0
        iTCO_vendor_support  3088     1  iTCO_wdt
        i5000_edac           8867     0
        edac_core            46773    3  i5000_edac
        i5k_amb              5105     0
        shpchp               33482    0
        ext4                 364410   3
        mbcache              8144     1  ext4
        jbd2                 88738    1  ext4
        sd_mod               39488    3
        crc_t10dif           1541     1  sd_mod
        sr_mod               16228    0
        cdrom                39771    1  sr_mod
        megaraid_sas         77090    2
        aic79xx              129492   0
        scsi_transport_spi   26151    1  aic79xx
        pata_acpi            3701     0
        ata_generic          3837     0
        ata_piix             22846    0
        radeon               1023359  1
        ttm                  70328    1  radeon
        drm_kms_helper       33236    1  radeon
        drm                  230675   3  radeon,ttm,drm_kms_helper
        i2c_algo_bit         5762     1  radeon
        i2c_core             31276    4  radeon,drm_kms_helper,drm,i2c_algo_bit
        dm_mirror            14101    0
        dm_region_hash       12170    1  dm_mirror
        dm_log               10122    2  dm_mirror,dm_region_hash
        dm_mod               81500    11 dm_mirror,dm_log

    blkid:

        /dev/sda1: UUID="bc4777d9-ae2c-4c58-96ea-cedb342b8338" TYPE="ext4"
        /dev/sda2: UUID="j2wRZr-Mlko-QWBR-BndC-V2uN-vdhO-iKCuYu" TYPE="LVM2_member"
        /dev/mapper/vg_lal2server-lv_root: UUID="9238208a-1daf-4c3c-aa9b-469f0387ebee" TYPE="ext4"
        /dev/mapper/vg_lal2server-lv_swap: UUID="dbefb39c-5871-4bc9-b767-1ef18f12bd3d" TYPE="swap"
        /dev/mapper/vg_lal2server-lv_home: UUID="ec698993-08b7-443e-84f0-9f9cb31c5da8" TYPE="ext4"

    dmesg shows:

        megaraid_sas: fw state:c0000000
        megasas: fwstate:c0000000, dis_OCR=0
        scsi2 : LSI SAS based MegaRAID driver
        scsi 2:0:0:0: Direct-Access SEAGATE ST3146855SS     S527 PQ: 0 ANSI: 5
        scsi 2:0:1:0: Direct-Access SEAGATE ST3146855SS     S527 PQ: 0 ANSI: 5
        scsi 2:0:2:0: Direct-Access SEAGATE ST3146855SS     S527 PQ: 0 ANSI: 5
        scsi 2:0:3:0: Direct-Access SEAGATE ST3146855SS     S527 PQ: 0 ANSI: 5
        scsi 2:0:4:0: Direct-Access HITACHI HUS154545VLS300 D590 PQ: 0 ANSI: 5
        scsi 2:0:5:0: Direct-Access HITACHI HUS154545VLS300 D590 PQ: 0 ANSI: 5
        scsi 2:0:8:0: Direct-Access FUJITSU MBA3073RC       D305 PQ: 0 ANSI: 5
        scsi 2:0:9:0: Direct-Access FUJITSU MBA3073RC       D305 PQ: 0 ANSI: 5

    i.e. the 3 RAID arrays: Seagate, Hitachi, and Fujitsu hard drives respectively.

    FURTHER UPDATE: I have installed the MegaRAID Storage Manager console and connected to the server. It appears that the two CentOS installation hard drives are OK. The other 6 drives form one RAID array of 4 disks and one RAID array of 2 disks. Those drives are listed as (Foreign) Unconfigured Good.
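
    The "(Foreign) Unconfigured Good" status is the key detail: the controller sees array metadata written by a different controller and will not expose those virtual disks until the foreign configuration is imported. A hedged sketch using MegaCli (the install path and adapter number 0 are assumptions; the same import can also be done from the MegaRAID Storage Manager GUI or the PERC BIOS):

        # list any foreign configurations the controller can see
        /opt/MegaRAID/MegaCli/MegaCli64 -CfgForeign -Scan -a0
        # preview what an import would restore, then import it
        /opt/MegaRAID/MegaCli/MegaCli64 -CfgForeign -Preview -a0
        /opt/MegaRAID/MegaCli/MegaCli64 -CfgForeign -Import -a0

    After a successful import the virtual disks should appear as /dev/sdb and so on; a reboot or a SCSI bus rescan may be needed.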

    Read the article

  • SSD install - what do I need to watch out for when reconfiguring SATA ports?

    - by tim11g
    I installed a Samsung 840 SSD in a Windows 7 machine. It seems to be working fine, but I'm not seeing the expected performance. The AS SSD benchmark gives 76 for read and 138 for write. At the upper left of the benchmark it says "pciide - BAD" and "31K - BAD". I'm assuming "pciide - BAD" means the motherboard (Gigabyte GA-P35-DS4) is configured for IDE emulation and needs to change to native SATA (AHCI). I don't know what the "31K" refers to. (A screenshot of the BIOS settings accompanied the original question but is not reproduced here.) I saw an article indicating that changing the SATA mode of the boot drive can cause problems (blue screen): "Error message occurs after you change the SATA mode of the boot drive". What is the correct procedure to change the SATA mode without causing a system failure? Apply the registry change from the MSFT article above first, then reboot and change the SATA mode? Will the SATA mode change in the BIOS affect other drives?
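
    For what it's worth, the procedure in the Microsoft KB article referenced above is exactly that order: enable the AHCI driver in the registry first, then reboot and flip the BIOS setting. A sketch, run from an elevated command prompt; this assumes the stock Microsoft msahci driver (enable iaStorV as well only if the Intel driver is in play):

        reg add "HKLM\SYSTEM\CurrentControlSet\services\msahci" /v Start /t REG_DWORD /d 0 /f
        rem only needed if the Intel AHCI/RAID (iaStorV) driver is or will be in use
        reg add "HKLM\SYSTEM\CurrentControlSet\services\iaStorV" /v Start /t REG_DWORD /d 0 /f

    After the reboot with AHCI enabled, Windows installs the new storage driver. Other drives on the same controller switch modes along with it, which is normally harmless for data disks.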

    Read the article

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check for bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know a) why the file system is modified at all, and b) why this seems to happen every time I check, and not only in case of an error (like bad blocks)? Here's the output:

        linux-box# fsck.ext3 -c /dev/sdx1
        e2fsck 1.40.2 (12-Jul-2007)
        Checking for bad blocks (read-only test): done
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information
        Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
        Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks
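
    A likely explanation, offered as a hedged suggestion: with -c, e2fsck runs a badblocks scan and then rewrites the file system's bad-blocks inode with the result, even when the list is empty, so the check itself counts as a modification and the exit status becomes 1 ("errors corrected"). One way to separate the two concerns, assuming the same device name as above:

        # read-only surface scan, makes no changes to the file system
        badblocks -sv /dev/sdx1
        # forced check without -c; exit code 0 here means the file system itself is clean
        fsck.ext3 -f /dev/sdx1; echo "fsck exit code: $?"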

    Read the article

  • TCP 30 small packets per second flood connection with server

    - by Denis Ermolin
    I'm testing the connection between a Flash client and a cloud server (boost::asio for the software) over TCP. My connection with the server is already really poor: 120 ms ping on average. I found that when I start to send packets of 2 bytes (not counting the TCP header) at 30 packets/s, the ping grows to 170-200 ms on average. I think that's really bad, and that my bad connection and bad cloud provider are the reason for this high ping without any load. What do you think? (I tested my software; it can handle about 50k small packets/s, so the software is not the problem.) I measure ping through the Flash client: I send a packet with a timestamp, and the server immediately echoes it back to the client.
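
    Before blaming the provider, it is worth ruling out Nagle's algorithm: it holds back small writes until the previous segment is acknowledged, and its interaction with delayed ACKs commonly adds tens to hundreds of milliseconds per small packet. In boost::asio it can be disabled per socket; a one-line sketch, assuming a connected ip::tcp::socket named socket:

        socket.set_option(boost::asio::ip::tcp::no_delay(true));  // disable Nagle for low-latency small writes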

    Read the article

  • What is the procedure to replace a failing hard drive in a RAID array?

    - by slayton
    Three years ago a co-worker set up a software RAID-6 array on Ubuntu 9.04, and I'm now getting messages from the OS that one drive has bad sectors and should be replaced. I'd like to remove this drive and replace it with a new one; however, I have never done this before, and I'm terrified that in the process of fixing the array I'm going to end up ruining it. I know the device ID of the array, and I know the device IDs of the individual drives in the array. Additionally, I physically have the bad drive. What are the steps to replace the bad drive with a new drive and get the array running again?
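
    For a Linux md array the usual sequence looks like the sketch below. The device names (/dev/md0 for the array, /dev/sdd1 for the failing member, /dev/sda for a healthy member) are placeholders; substitute your own before running anything:

        # mark the failing member as faulty, then pull it from the array
        mdadm --manage /dev/md0 --fail /dev/sdd1
        mdadm --manage /dev/md0 --remove /dev/sdd1
        # after physically swapping the disk, copy the partition layout from a healthy member
        sfdisk -d /dev/sda | sfdisk /dev/sdd
        # add the new partition and watch the rebuild progress
        mdadm --manage /dev/md0 --add /dev/sdd1
        cat /proc/mdstat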

    Read the article

  • Can a power supply affect I/O?

    - by user101289
    I have a dev server machine running Ubuntu 12.04. For a long while it has been throwing intermittent errors: it would suddenly tell me "File system is read only" or drop into a GRUB error console on boot. I've done disk checks, bad-block scans, etc., and no real problems with the main SATA drive were detected. Finally the drive would not be detected at all; but neither would other drives I plugged in (via SATA). I plugged the supposedly "bad" drive into another server and it worked fine, with no issues, for days, so I assumed the motherboard had a bad SATA controller, and replaced the motherboard with an identical model. I put the drive back into the original machine with the new motherboard and rebooted, and got the same issues: I/O errors, failure to read the drive at all, dropping into GRUB, etc. I'm wondering if there could be some other issue with this machine that's not related to the drive. Possibly the power supply? Thanks for any ideas.
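
    A failing PSU can certainly cause intermittent I/O errors, but it helps to collect evidence first. A quick diagnostic pass, assuming smartmontools is installed and the drive is /dev/sda:

        # SMART attributes that usually move when the drive itself is failing
        smartctl -a /dev/sda | grep -i -e reallocated -e pending -e uncorrect
        # kernel-side link resets and I/O errors point at cabling, controller, or power instead
        dmesg | grep -i -e 'ata[0-9]' -e 'i/o error'

    If SMART stays clean on this machine while the kernel logs show link resets across multiple drives, the shared components (power, cables, backplane) become the prime suspects.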

    Read the article

  • Can't obtain reference to EKReminder array retrieved from fetchRemindersMatchingPredicate

    - by Scionwest
    When I create an NSPredicate via EKEventStore's predicateForRemindersInCalendars: and pass it to EKEventStore's fetchRemindersMatchingPredicate:completion:, I can loop through the reminders array provided inside the completion block; but when I try to store a reference to the reminders array, or copy it into a local variable or instance variable, both arrays remain empty. The reminders array is never copied to them. This is the method I am using. In it, I create a predicate, pass it to the event store, and then loop through all of the reminders, logging their titles via NSLog. I can see the reminder titles at runtime thanks to NSLog, but the local arrayOfReminders object is empty. I also try to add each reminder to an NSMutableArray instance variable, but once I leave the completion block, the instance variable remains empty. Am I missing something here? Can someone please tell me why I can't grab a reference to all of the reminders for use throughout the app? I am not having any issues at all accessing and storing EKEvents, but for some reason I can't do it with EKReminders.

        - (void)findAllReminders {
            NSPredicate *predicate = [self.eventStore predicateForRemindersInCalendars:nil];
            __block NSArray *arrayOfReminders = [[NSArray alloc] init];

            [self.eventStore fetchRemindersMatchingPredicate:predicate completion:^(NSArray *reminders) {
                arrayOfReminders = [reminders copy]; // Does not work.
                for (EKReminder *reminder in reminders) {
                    [self.remindersForTheDay addObject:reminder];
                    NSLog(@"%@", reminder.title);
                }
            }];

            // Always = 0;
            if ([self.remindersForTheDay count]) {
                NSLog(@"Instance Variable has reminders!");
            }
            // Always = 0;
            if ([arrayOfReminders count]) {
                NSLog(@"Local Variable has reminders!");
            }
        }

    The eventStore getter is where I perform my instantiation and get access to the event store.

        - (EKEventStore *)eventStore {
            if (!_eventStore) {
                _eventStore = [[EKEventStore alloc] init];
                // respondsToSelector indicates iOS 6 support.
                if ([_eventStore respondsToSelector:@selector(requestAccessToEntityType:completion:)]) {
                    // Request access to user calendar
                    [_eventStore requestAccessToEntityType:EKEntityTypeEvent completion:^(BOOL granted, NSError *error) {
                        if (granted) {
                            NSLog(@"iOS 6+ Access to EventStore calendar granted.");
                        } else {
                            NSLog(@"Access to EventStore calendar denied.");
                        }
                    }];
                    // Request access to user Reminders
                    [_eventStore requestAccessToEntityType:EKEntityTypeReminder completion:^(BOOL granted, NSError *error) {
                        if (granted) {
                            NSLog(@"iOS 6+ Access to EventStore Reminders granted.");
                        } else {
                            NSLog(@"Access to EventStore Reminders denied.");
                        }
                    }];
                } else {
                    // iOS 5.x and lower support if the selector is not available
                    NSLog(@"iOS 5.x < Access to EventStore calendar granted.");
                }
                for (EKCalendar *cal in self.calendars) {
                    NSLog(@"Calendar found: %@", cal.title);
                }
                [_eventStore reset];
            }
            return _eventStore;
        }

    Lastly, just to show that I am initializing my remindersForTheDay instance variable using lazy instantiation:

        - (NSMutableArray *)remindersForTheDay {
            if (!_remindersForTheDay)
                _remindersForTheDay = [[NSMutableArray alloc] init];
            return _remindersForTheDay;
        }

    I've read through the Apple documentation and it doesn't provide any explanation that I can find to answer this. I read through the Blocks Programming docs, and they state that you can access local and instance variables without issues from within a block, but for some reason the above code does not work.
    Any help would be greatly appreciated; I've scoured Google for answers but have yet to get this figured out. Thanks everyone! Johnathon.
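
    The behavior described above is explained by asynchrony rather than block capture: fetchRemindersMatchingPredicate:completion: returns immediately, and the completion block runs later, after both count checks have already executed against still-empty arrays. A minimal sketch of the usual fix, doing the work inside the completion block and hopping back to the main queue (remindersDidLoad is a hypothetical method of your own, not an EventKit API):

        [self.eventStore fetchRemindersMatchingPredicate:predicate completion:^(NSArray *reminders) {
            self.remindersForTheDay = [reminders mutableCopy];
            dispatch_async(dispatch_get_main_queue(), ^{
                [self remindersDidLoad]; // update the UI or continue processing here
            });
        }];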

    Read the article

  • Validate Java SAML signature from C#

    - by Adrya
    How can i validate in .Net C# a SAML signature created in Java? Here is the SAML Signature that i get from Java: <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> </ds:CanonicalizationMethod> <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"> </ds:SignatureMethod> <ds:Reference URI="#_e8bcba9d1c76d128938bddd5ae8c68e1"> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"> </ds:Transform> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <ec:InclusiveNamespaces xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="code ds kind rw saml samlp typens #default xsd xsi"> </ec:InclusiveNamespaces> </ds:Transform> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"> </ds:DigestMethod> <ds:DigestValue>zEL7mB0Wkl+LtjMViO1imbucXiE=</ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue> jpIX3WbX9SCFnqrpDyLj4TeJN5DGIvlEH+o/mb9M01VGdgFRLtfHqIm16BloApUPg2dDafmc9DwL Pyvs3TJ/hi0Q8f0ucaKdIuw+gBGxWFMcj/U68ZuLiv7U+Qe7i4ZA33rWPorkE82yfMacGf6ropPt v73mC0bpBP1ubo5qbM4= </ds:SignatureValue> <ds:KeyInfo> <ds:X509Data> <ds:X509Certificate> MIIDBDCCAeygAwIBAgIIC/ktBs1lgYcwDQYJKoZIhvcNAQEFBQAwNzERMA8GA1UEAwwIQWRtaW5D QTExFTATBgNVBAoMDEVKQkNBIFNhbXBsZTELMAkGA1UEBhMCU0UwHhcNMDkwMjIzMTAwMzEzWhcN MTgxMDE1MDkyNTQyWjBaMRQwEgYDVQQDDAsxMC41NS40MC42MTEbMBkGA1UECwwST24gRGVtYW5k IFBsYXRmb3JtMRIwEAYDVQQLDAlPbiBEZW1hbmQxETAPBgNVBAsMCFNvZnR3YXJlMIGfMA0GCSqG SIb3DQEBAQUAA4GNADCBiQKBgQCk5EqiedxA6WEE9N2vegSCqleFpXMfGplkrcPOdXTRLLOuRgQJ LEsOaqspDFoqk7yJgr7kaQROjB9OicSH7Hhsu7HbdD6N3ntwQYoeNZ8nvLSSx4jz21zvswxAqw1p DoGl3J6hks5owL4eYs2yRHvqgqXyZoxCccYwc4fYzMi42wIDAQABo3UwczAdBgNVHQ4EFgQUkrpk yryZToKXOXuiU2hNsKXLbyIwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSiviFUK7DUsjvByMfK g+pm4b2s7DAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUHAwEwDQYJKoZIhvcNAQEF BQADggEBAKb94tnK2obEyvw8ZJ87u7gvkMxIezpBi/SqXTEBK1by0NHs8VJmdDN9+aOvC5np4fOL fFcRH++n6fvemEGgIkK3pOmNL5WiPpbWxrx55Yqwnr6eLsbdATALE4cgyZWHl/E0uVO2Ixlqeygw XTfg450cCWj4yfPTVZ73raKaDTWZK/Tnt7+ulm8xN+YWUIIbtW3KBQbGomqOzpftALyIKLVtBq7L J0hgsKGHNUnssWj5dt3bYrHgzaWLlpW3ikdRd67Nf0c1zOEgKHNEozrtRKiLLy+3bIiFk0CHImac 1zeqLlhjrG3OmIsIjxc1Vbc0+E+z6Unco474oSGf+D1DO+Y= </ds:X509Certificate> </ds:X509Data> </ds:KeyInfo> </ds:Signature> I know to parse SAML, i need to validate the signature. I tried this: public bool VerifySignature() { X509Certificate2 certificate = null; XmlDocument doc = new XmlDocument(); XmlElement xmlAssertionElement = this.GetXml(doc); doc.AppendChild(xmlAssertionElement); // Create a new SignedXml object and pass it // the XML document class. SamlSignedXml signedXml = new SamlSignedXml(xmlAssertionElement); // Get signature XmlElement xmlSignature = this.Signature; if (xmlSignature == null) { return false; } // Load the signature node. signedXml.LoadXml(xmlSignature); // Get the certificate used to sign the assertion if information about this // certificate is available in the signature of the assertion. foreach (KeyInfoClause clause in signedXml.KeyInfo) { if (clause is KeyInfoX509Data) { if (((KeyInfoX509Data)clause).Certificates.Count > 0) { certificate = (X509Certificate2)((KeyInfoX509Data)clause).Certificates[0]; } } } if (certificate == null) { return false; } return signedXml.CheckSignature(certificate, true); } It validates the signature of a SAML signed in .Net but not of this Java one.

    Read the article

  • Fix corrupt NTFS partition without Windows

    - by Capt.Nemo
    My NTFS partition has gotten corrupt somehow (it's a relic from the days when I had Windows installed). I'm putting the debug output of fdisk and blkid here. At the same time, any OS is unable to mount my root partition, which is located next to my NTFS partition. I'm not sure if this has anything to do with it, though. I get the following error while trying to mount my root partition (sda5):

        mount: wrong fs type, bad option, bad superblock on /dev/sda5,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

        ubuntu@ubuntu:~$ dmesg | tail
        [ 1019.726530] Descriptor sense data with sense descriptors (in hex):
        [ 1019.726533] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
        [ 1019.726551] 1a 3e ed 92
        [ 1019.726558] sd 0:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
        [ 1019.726568] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 1a 3e ed 40 00 01 00 00
        [ 1019.726584] end_request: I/O error, dev sda, sector 440331666
        [ 1019.726602] JBD: Failed to read block at offset 462
        [ 1019.726609] ata1: EH complete
        [ 1019.726612] JBD: recovery failed
        [ 1019.726617] EXT4-fs (sda5): error loading journal

    When I open GParted (using the live CD), I get an exclamation-mark warning next to my NTFS drive (the warning text itself is an image and is not reproduced here). Is there a way to run chkdsk without using Windows? My attempt to run fsck results in the following:

        ubuntu@ubuntu:~$ sudo fsck /dev/sda
        fsck from util-linux-ng 2.17.2
        e2fsck 1.41.14 (22-Dec-2010)
        fsck.ext2: Superblock invalid, trying backup blocks...
        fsck.ext2: Bad magic number in super-block while trying to open /dev/sda
        The superblock could not be read or does not describe a correct ext2 filesystem.
        If the device is valid and it really contains an ext2 filesystem (and not swap
        or ufs or something else), then the superblock is corrupt, and you might try
        running e2fsck with an alternate superblock: e2fsck -b 8193 <device>

    Update: I was able to fix the NTFS partition by running chkdsk off HBCD, but it seems that the superblock problem still remains.
    Update 2: Fixed the superblock issue using e2fsck -c /dev/sda5.
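
    Two hedged observations on the output above: the failing fsck was run against the whole disk /dev/sda rather than the partition /dev/sda5, which by itself produces the "Bad magic number" error, and the NTFS side can be repaired without Windows using ntfsfix from the live CD's ntfsprogs/ntfs-3g package. Device names below are placeholders; double-check them before running:

        # basic NTFS repair; also flags the volume for a chkdsk on the next Windows boot
        sudo ntfsfix /dev/sdXN          # replace sdXN with the actual NTFS partition
        # print where the backup superblocks live (-n = dry run, writes nothing)
        sudo mke2fs -n /dev/sda5
        # then point e2fsck at one of the listed backups, e.g.:
        sudo e2fsck -b 32768 /dev/sda5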

    Read the article

  • Missing Operating System after trying to upgrade to Ubuntu 11

    - by Mauricio
    Hi there! After trying to upgrade from Ubuntu 10.04 to 11, the upgrade process stopped while running, and then I got an "out of disk, grub rescue" message when booting. After running Boot Repair, I got these results. Now I get "Missing Operating System" when trying to boot. Below I show some results from commands I gathered from help forums, but I have still reached no solution. Could you please help me? Any enlightenment will be very helpful! Disk Utility says "Disk has a few bad sectors", and when trying to run the self-test I get "FAILED (Read)". Here is what GParted says about the /dev/sda1 partition (ext4):

        Flags: boot
        Status: not mounted
        Warning: e2label: Attempt to read block from filesystem resulted in short read while trying to open /dev/sda1
        Couldn't find valid filesystem superblock
        Unable to read the contents of this filesystem!

    From sudo fdisk -l I got:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e0596

        Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *     2048  607428607  303713280  83  Linux
        /dev/sda2    607430654  625141759    8855553   5  Extended
        /dev/sda5    607430656  625141759    8855552  82  Linux swap / Solaris

        Disk /dev/sdb: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000c3c41

        Device Boot      Start        End     Blocks  Id  System
        /dev/sdb1   *       63  625137344  312568641   c  W95 FAT32 (LBA)

    From sudo fdisk /dev/sda1 I got:

        fdisk: unable to read /dev/sda1: Inappropriate ioctl for device

    From sudo mount /dev/sda1 /mnt I got:

        mount: wrong fs type, bad option, bad superblock on /dev/sda1,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    From sudo update-grub I got:

        error: cannot read from `/dev/sda'.
        /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?).
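
    Given that the disk self-test already fails with a read error, it is safer to image the partition first and repair the copy rather than the failing disk. A sketch using GNU ddrescue, assuming a second disk with enough space is mounted at /mnt/rescue:

        # first pass: copy what is readable, skipping the bad areas (-n = no scraping)
        sudo ddrescue -n /dev/sda1 /mnt/rescue/sda1.img /mnt/rescue/sda1.log
        # repair the image against a backup superblock (32768 is typical for 4K-block filesystems)
        sudo e2fsck -b 32768 /mnt/rescue/sda1.img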

    Read the article

  • Google analytics - drop in traffic

    - by Andy
    Bit of a general question here. We are in the process of converting a number of our clients from older web sites to new ones. The problem we are getting (and sorry for being so general here) is a sharp decline in traffic as reported on Google Analytics. It's not a gradual decline; it seems to hit almost as soon as the new site goes live. I've just got a few questions to see if there is something we are doing wrong: a) We are using the same Analytics accounts going from the old site to the new. Is this a bad idea? b) The actual Analytics code is integrated into the pages using a server-side include. Is this a bad idea? c) We structure our sites differently from our old sites. The old sites would pretty much have all the web pages in the root directory, and hyperlinks would link to the page files, e.g. <a href="somepage.aspx">Link</a>. Our new sites have a directory structure that pretty much reflects the navigation structure, and hyperlinks link to the page's directory instead of the actual page, e.g. <a href="/new-items/shoes/">New shoes</a>. Is this a bad idea? I'm really searching for a needle in a haystack here. I'd appreciate any help or advice as to why we are getting such a sharp and sudden drop in traffic. Again, sorry this is such a general question. Thanks in advance.

    Read the article

  • Algorithm to measure how "diffused" 5,000 pennies are in an economy?

    - by makerofthings7
    Please allow me to use this example/metaphor to describe an algorithm I need.

    Objects: There are 5 thousand pennies. There are 50 cups. There is a tracking history (a passport "stamp", etc.) associated with each penny as it moves between cups.

    Definition: I'll define a "highly diffused" penny as one that passes through many cups. A "poorly diffused" penny is one that just passes back and forth between two cups.

    Question: How can I objectively measure the diffusion of a penny in terms of: the number of moves the penny has gone through; the number of cups the penny has been in; and a unit of time (day, week, month)?

    Why am I doing this? I want to detect if a cup is hoarding pennies.

    Resistance from bad actors: Since hoarding is bad, the "bad cup" may simply solicit a partner and move pennies back and forth between the two of them. This reduces the amount of time a coin isn't in transit, and would skew hoarding detection. A solution might be to detect whether a cup (or set of cups) are common "partners" with each other, though I'm not sure how to think through this problem.

    Broad applicability: Any assistance would be helpful, since I would think this algorithm is common to economics, the study of migration patterns (of animals, or of citizens of a country), and other naturally occurring phenomena... and probably exists as a term or concept I'm unfamiliar with.
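
    One concrete scoring sketch (an illustration, not a standard named metric): within each time window, treat a penny's move history as a distribution over cups and take its entropy, normalized by the total number of cups. A penny shuttling between two colluding cups scores low even if it moves constantly, because entropy rewards spread, not volume:

        q_i          = (moves of penny p involving cup i) / (total moves of penny p)
        H(p)         = -sum over i of  q_i * log(q_i)
        diffusion(p) = H(p) / log(N)        where N = number of cups; the score lies in [0, 1]

    Computing this per day, week, or month supplies the time dimension, and a cup whose resident pennies have persistently low diffusion scores becomes a hoarding (or partner-trading) suspect.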

    Read the article

  • Why does my root filesystem keep becoming read-only?

    - by Scott Severance
    I've lately been having an issue with my root filesystem becoming readonly. It happens some amount of time after boot. I don't know exactly when it happens, as I don't usually notice it until something such as suspending the computer or printing fails. It seems to be fairly random. Since most of my system is on that partition, I can't re-mount it without rebooting. After this happens, the system runs a fsck. Sometimes it prompts to fix problems; other times it apparently finds none. To troubleshoot, I've searched through the logs but found nothing relevant. This might be due in part to not knowing when the actual errors took place. The filesystem is apparently good to begin with, as when fsck runs its fixes it doesn't report any errors. I've scanned the disk with SpinRite. A while ago, SpinRite found and recovered from some bad sectors on the hard drive. I ran a level 4 scan (a thorough scan) after this probem appeared, but SpinRite found nothing. The SMART data reports that the disk is OK with 63 bad sectors. The number of bad sectors hasn't changed recently. I realize that the disk isn't in the best of conditions, and I have complete backups in case of catastrophic failure. Yet the lack of errors in the logs, combined with SpinRite's test results and the unchanged SMART data makes me think that this problem has some cause other than disk failure. Other than disk failure, what could cause my symptoms?
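
    One hedged avenue for pinning down the trigger: the root filesystem is normally mounted with errors=remount-ro, so the read-only flip is the kernel reacting to an ext filesystem error, and that error is logged at the moment it happens. Assuming a stock Ubuntu install with root on /dev/sda1:

        # is / currently read-only? (look for "ro" in the options field)
        grep ' / ' /proc/mounts
        # what the superblock recorded: filesystem state, error count, error behaviour
        sudo tune2fs -l /dev/sda1 | grep -i -e 'state' -e 'behavior' -e 'error'
        # kernel messages from current and previous boots around the remount
        grep -i -e 'remounting filesystem read-only' -e 'i/o error' /var/log/kern.log*

    Matching the timestamp of the remount message against the SMART data may show whether the bad sectors are involved after all.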

    Read the article

  • Layout Columns - Equal Height

    - by Kyle
    I remember first starting out using tables for layouts and learned that I should not be doing that. I am working on a new site and cannot seem to do equal-height columns without using tables. Here is an example of the attempt with div tags:

        <div class="row">
          <div class="column">column1</div>
          <div class="column">column2</div>
          <div class="column">column3</div>
          <div style="clear:both"></div>
        </div>

    What I tried with that was making the columns float left and setting their widths to 33%, which works fine. I use the clear:both div so that the row is the height of the biggest column, but the columns themselves end up with different heights based on how much content they have. I have found many fixes, which mostly involve CSS hacks that just make it look right, but that's not what I want. I thought of doing it in JavaScript, but then it would look different for those who disable JavaScript. The only true way of doing it that I can think of is using tables, since the cells all have equal heights in the same row. But I know it's bad to use tables. After searching forever I then came across this: http://intangiblestyle.com/lab/equal-height-columns-with-css/ What it seems to do is exactly the same as tables, since it's just setting the display exactly like tables. Would using that be just as bad as using tables?

    Edit: @Su' I have looked into "faux columns" and do not think that is what I want. I think I would be able to implement better designs for my site using the display:table method. I posted this question because I wasn't sure if I should use it, since I have always heard it's bad to use tables in website layouts.
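
    For reference, the linked technique boils down to a few lines of CSS; this sketch assumes the same .row/.column markup as above (the clear:both div becomes unnecessary):

        .row    { display: table; width: 100%; table-layout: fixed; }
        .column { display: table-cell; width: 33.33%; }

    The usual view is that this is not the same offense as layout tables: the objection to tables is semantic (marking up non-tabular content with table elements in the HTML), whereas display:table only changes how existing, meaningful markup is rendered.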

    Read the article

  • Getting rank for keywords that I don't want to appear on my website [duplicate]

    - by Rober
    This question already has an answer here: Which keyword should I use. colors or colours or a combination of both? 2 answers One of my products has two names. One of them is what I consider correct and thus it is what I want to appear on my website. The other name is incorrect for me, so I would like to avoid it. But I know that many people will search my product using the "bad" name. How could I get the "bad" name indexed for my site on search engines even if nobody can read it there? Of course, I want to do it "legally" so that no engine will ban my site considering it as cloaking, black hat SEO, etc... EDIT: Having that "bad" name on my backlinks is not an option. For example I would perceive user reviews connecting my site to that word as a negative point. Maybe having my site as a search result for that word could be negative as well, but I think it is worth it.

    Read the article

  • Ubuntu 10.04 logs itself out overnight

    - by Corey
    Every night when I leave work, I lock the screen via ubuntu's "power" button in the top right hand panel. When I come to work in the morning, I'm greeted with the log-in screen. This doesn't happen every night, but most. I'm running ubuntu 10.04 on a Dell inspiron. I've included some HW specs, and also dmesg output. Please let me know what other logs may be useful. thanks! Corey ~$ dmesg [20559.696062] type=1503 audit(1285957687.048:16): operation="open" pid=6212 parent=1 profile="/usr/bin/evince" requested_mask="::r" denied_mask="::r" fsuid=1000 ouid=0 name="/usr/local/lib/libltdl.so.7.2.2" [21127.951621] type=1503 audit(1285958255.300:17): operation="open" pid=6390 parent=1 profile="/usr/bin/evince" requested_mask="::r" denied_mask="::r" fsuid=1000 ouid=0 name="/usr/local/lib/libltdl.so.7.2.2" [291038.528014] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung [291038.528025] render error detected, EIR: 0x00000000 [291038.528042] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 22973891 at 22973890) [291038.828014] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung [291038.828023] render error detected, EIR: 0x00000000 [291038.828042] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 22973894 at 22973890) ~$ lspci -vv 00:00.0 Host bridge: Intel Corporation 4 Series Chipset DRAM Controller (rev 03) Subsystem: Dell Device 02e1 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ >SERR- <PERR- INTx- Latency: 0 Capabilities: <access denied> Kernel driver in use: agpgart-intel Kernel modules: intel-agp 00:02.0 VGA compatible controller: Intel Corporation 4 Series Chipset Integrated Graphics Controller (rev 03) Subsystem: Dell Device 02e1 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 27 Region 0: Memory at fe400000 (64-bit, non-prefetchable) [size=4M] Region 2: Memory at d0000000 (64-bit, prefetchable) [size=256M] Region 4: I/O ports at dc00 [size=8] Capabilities: <access denied> Kernel driver in use: i915 Kernel modules: i915 00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01) Subsystem: Dell Device 02e1 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 32 bytes Interrupt: pin A routed to IRQ 16 Region 0: Memory at feaf8000 (64-bit, non-prefetchable) [size=16K] Capabilities: <access denied> Kernel driver in use: HDA Intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 01) Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 32 bytes Bus: primary=00, secondary=01, subordinate=01, sec-latency=0 I/O behind bridge: 00001000-00001fff Memory behind bridge: 80000000-801fffff Prefetchable memory behind bridge: 0000000080200000-00000000803fffff Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- 
<MAbort- <SERR- <PERR- BridgeCtl: Parity- SERR+ NoISA+ VGA- MAbort- >Reset- FastB2B- PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: <access denied> Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 01) Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 32 bytes Bus: primary=00, secondary=02, subordinate=02, sec-latency=0 I/O behind bridge: 0000e000-0000efff Memory behind bridge: feb00000-febfffff Prefetchable memory behind bridge: 00000000fdf00000-00000000fdffffff Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR- BridgeCtl: Parity- SERR+ NoISA+ VGA- MAbort- >Reset- FastB2B- PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: <access denied> Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB Controller: Intel Corporation N10/ICH7 Family USB UHCI Controller #1 (rev 01) Subsystem: Dell Device 02e1 Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 23 Region 4: I/O ports at d880 [size=32] Kernel driver in use: uhci_hcd 00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01) Subsystem: Dell Device 02e1 Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin B routed to IRQ 19 Region 4: I/O ports at d800 [size=32] Kernel driver in use: uhci_hcd 00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01) Subsystem: Dell Device 02e1 Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin C routed to IRQ 18 Region 4: I/O ports at d480 [size=32] Kernel driver in use: uhci_hcd 00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01) Subsystem: Dell Device 02e1 Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin D routed to IRQ 16 Region 4: I/O ports at d400 [size=32] Kernel driver in use: uhci_hcd 00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01) (prog-if 20) Subsystem: Dell Device 02e1 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 23 Region 0: Memory at feaf7c00 (32-bit, non-prefetchable) [size=1K] Capabilities: <access denied> Kernel driver in use: ehci_hcd 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1) (prog-if 01) Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- 
<MAbort- >SERR- <PERR- INTx- Latency: 0 Bus: primary=00, secondary=03, subordinate=03, sec-latency=32 Secondary status: 66MHz- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort+ <SERR- <PERR- BridgeCtl: Parity- SERR+ NoISA+ VGA- MAbort- >Reset- FastB2B- PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: <access denied> 00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01) Subsystem: Dell Device 02e1 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Capabilities: <access denied> Kernel modules: iTCO_wdt, intel-rng 00:1f.2 IDE interface: Intel Corporation N10/ICH7 Family SATA IDE Controller (rev 01) (prog-if 8f [Master SecP SecO PriP PriO]) Subsystem: Dell Device 02e1 Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin B routed to IRQ 19 Region 0: I/O ports at d080 [size=8] Region 1: I/O ports at d000 [size=4] Region 2: I/O ports at cc00 [size=8] Region 3: I/O ports at c880 [size=4] Region 4: I/O ports at c800 [size=16] Capabilities: <access denied> Kernel driver in use: ata_piix 00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01) Subsystem: Dell Device 02e1 Control: I/O+ Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Interrupt: pin B routed to IRQ 5 Region 4: I/O ports at 0400 [size=32] Kernel modules: i2c-i801 02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02) Subsystem: Dell Device 02e1 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 32 bytes Interrupt: pin A routed to IRQ 26 Region 0: I/O ports at e800 [size=256] Region 2: Memory at fdfff000 (64-bit, prefetchable) [size=4K] Region 4: Memory at fdfe0000 (64-bit, prefetchable) [size=64K] Expansion ROM at febe0000 [disabled] [size=128K] Capabilities: <access denied> Kernel driver in use: r8169 Kernel modules: r8169 log$ tail -n 15 Xorg.0.log.old for help. Please also check the log file at "/var/log/Xorg.0.log" for additional information. (II) Power Button: Close (II) UnloadModule: "evdev" (II) Power Button: Close (II) UnloadModule: "evdev" (II) USB Optical Mouse: Close (II) UnloadModule: "evdev" (II) Dell Dell USB Entry Keyboard: Close (II) UnloadModule: "evdev" (II) Macintosh mouse button emulation: Close (II) UnloadModule: "evdev" (II) AIGLX: Suspending AIGLX clients for VT switch ddxSigGiveUp: Closing log

    Read the article

  • OpenRemoteBaseKey() credentials

    - by sgibbons
    I'm attempting to use PowerShell to access a remote registry like so:

        $reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey("LocalMachine", $server)
        $key = $reg.OpenSubKey($subkeyPath)

    Depending on some factors that I'm not yet able to determine, I either get

        Exception calling "OpenSubKey" with "1" argument(s): "Requested registry access is not allowed."

    or

        System.UnauthorizedAccessException: Attempted to perform an unauthorized operation.
        at Microsoft.Win32.RegistryKey.Win32ErrorStatic(Int32 errorCode, String str)
        at Microsoft.Win32.RegistryKey.OpenRemoteBaseKey(RegistryHive hKey, String machineName)

    It seems pretty clear that this is because the user I'm running the PowerShell script as doesn't have the appropriate credentials to access the remote registry. I'd like to be able to supply a set of credentials to use for the remote registry access, but I can find no documentation anywhere of a way to do this. I'm also not clear on exactly where to specify which users are allowed to access the registry remotely.
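
    OpenRemoteBaseKey has no overload that accepts credentials; it always uses the calling user's token against the Remote Registry service on the target. A common workaround is to run the registry read on the remote machine under explicit credentials via PowerShell remoting; a sketch, assuming remoting is enabled on $server and using a hypothetical key path:

        $cred = Get-Credential
        Invoke-Command -ComputerName $server -Credential $cred -ScriptBlock {
            # runs on the remote machine, under the supplied credentials
            Get-ItemProperty -Path 'HKLM:\SOFTWARE\MyApp' -Name 'Version'
        }

    As for who may connect remotely at all, that is controlled by the ACL on the winreg key (HKLM\SYSTEM\CurrentControlSet\Control\SecurePipeServers\winreg) on the target machine.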

    Read the article

  • Maximum number of memory segments that Notes can support has been exceeded

    - by Sagy
    Hi all, I am using Domino.dll to access an NSF file in C#.NET 2.0. I am using multiple threads to access 4 NSF files at a time. It works fine for small NSF files, but if I try to access large NSF files I get an OutOfMemoryException and "Maximum number of memory segments that Notes can support has been exceeded". This exception usually occurs when I access a NotesDocument object from a large view folder in a while loop. I am releasing the instance of the NotesDocument by using Marshal.ReleaseComObject(NotesDocument), but it still throws the same exception. My goal is to access multiple NSF files at a time (max 4), including large ones (possibly gigabytes). Kindly help if you have a solution. Thanks.
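
    Releasing only the current document is often not enough: every COM proxy obtained inside the loop, including the one returned for the next document, holds a Notes memory handle until it is released. A hedged sketch of the usual walk-and-release pattern, assuming a NotesView named view from the Domino COM API:

        NotesDocument doc = view.GetFirstDocument();
        while (doc != null)
        {
            // fetch the successor before releasing the current proxy
            NotesDocument next = view.GetNextDocument(doc);
            // ... process doc here ...
            System.Runtime.InteropServices.Marshal.ReleaseComObject(doc);
            doc = next;
        }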

    Read the article

  • Getting selected row in inputListOfValues returnPopupListener

    - by Frank Nimphius
    Model-driven lists of values in Oracle ADF are configured on the ADF Business Components attribute that should be updated with the user's value selection. The value lookup can be configured to be displayed as a select list, combo box, input list of values, or combo box with list of values. Displaying the list in an af:inputListOfValues component shows the attribute value in an input text field with an icon attached to it for the user to launch the list-of-values dialog. The list-of-values dialog allows users to use a search form to filter the lookup data list and to select an entry, whose return value is then added as the value of the af:inputListOfValues component. Note: the model-driven LOV can be configured in ADF Business Components to update multiple attributes with the user selection, though the most common use case is to update the value of a single attribute. A question on OTN was how to access the row of the selected return value on the ADF Faces front end. For this, you need to know that there is a Model property defined on the af:inputListOfValues that references the ListOfValuesModel implementation in the model. It is the value of this Model property that you need to get access to. The af:inputListOfValues has a ReturnPopupListener property that you can use to configure a managed bean method to receive notification when the user closes the LOV popup dialog by selecting the Ok button. This listener is not triggered when the Cancel button is pressed. The managed bean signature can be created declaratively in Oracle JDeveloper 11g using the Edit option in the context menu next to the ReturnPopupListener field in the Property Inspector. The empty method signature looks as shown below:

        public void returnListener(ReturnPopupEvent returnPopupEvent) {
        }

    The ReturnPopupEvent object gives you access to the RichInputListOfValues component instance, which represents the af:inputListOfValues component at runtime. From here you access the Model property of the component to get a handle to the CollectionModel. The CollectionModel returns an instance of JUCtrlHierBinding in its getWrappedData method. Though there is no tree binding definition for the list-of-values dialog defined in the PageDef, it exists. Once you have access to this, you can read the row the user selected in the list-of-values dialog.
    See the following code:

        public void returnListener(ReturnPopupEvent returnPopupEvent) {
            // access the UI component instance from the return event
            RichInputListOfValues lovField =
                (RichInputListOfValues) returnPopupEvent.getSource();

            // the LOV model gives us access to the collection model and the
            // ADF tree binding used to populate the lookup table
            ListOfValuesModel lovModel = lovField.getModel();
            CollectionModel collectionModel =
                lovModel.getTableModel().getCollectionModel();

            // the collection model wraps an instance of the ADF
            // FacesCtrlHierBinding, which is cast to JUCtrlHierBinding
            JUCtrlHierBinding treeBinding =
                (JUCtrlHierBinding) collectionModel.getWrappedData();

            // the selected rows are defined in a RowKeySet; as the LOV table only
            // supports single selection, there is only one entry in the set
            RowKeySet rks = (RowKeySet) returnPopupEvent.getReturnValue();

            // the ADF Faces table row key is a list containing the oracle.jbo.Key
            List tableRowKey = (List) rks.iterator().next();

            // get the iterator binding for the LOV lookup table binding
            DCIteratorBinding dciter = treeBinding.getDCIteratorBinding();

            // get the selected row by its JBO key
            Key key = (Key) tableRowKey.get(0);
            Row rw = dciter.findRowByKeyString(key.toStringFormat(true));

            // work with the row
            // ...
        }

    Read the article

  • OFM 11g: OAM SSO for Forms and ADF Faces

    - by olaf.heimburger
    In my blog entry OFM 11g: Implementing OAM SSO with Forms we set the foundation for providing a complete Single Sign-On solution based on Oracle Access Manager (OAM). This foundation should now be used to combine Forms 11g and ADF Faces 11g applications with a transparent login. The Beginning Before we start, lets re-consider the requirements to achieve the ultimate goal. These are:- Access to the Forms 11g Application must be authenticated by OAM (protected). Access to the ADF Faces 11g Application must be authenticated by OAM (protected). Switching from one application to the other should not result in a re-authentication (aka single sign-on). User identity should be availble to the application without any extra work in the application code. All these are the common requirements for a single sign-on solution. The challenge here is that Forms relies on Oracle AS SSO (OSSO or "the old SSO") while ADF Faces is quite open and can be protected by Oracle AS SSO and Oracle Access Manager SSO (OAM SSO or "the modern SSO"). Both application types can use their own login mechanism. The Forms 11g Application To demonstrate the SSO functionality, we use the standard Forms test (/forms/frmservlet?form=test.fmx). Although this shows nothing specific in the Forms application, it is good enough to demonstrate that it is protected. The ADF Faces 11g Application With ADF 11g you can develop quite a number of useful Faces based applications. Among many features, it comes with the ADF Security feature that provides you with functionality to protect your pages, regions, and even TaskFlows from un-authenticated usage in a declarative way.To demonstrate that functionality a sample application with different access levels plus a login dialog is used. This application comes with a publc page that has protected content (a button). Once you are authenticated for the application, the protected content and some personalisation (the users name) is shown. Protecting Forms 11g As already explained in the OFM 11g: Implementing OAM SSO with Forms, the easiest way to protect a Forms application is to configure it as a OSSO partner application, setup mod_osso, test it, migrate OSSO to OAM SSO with the Upgrade Agent, reconfigure mod_osso, and you are done.Sort of. By default the OAM is configured to run in co-exist mode. This means that a user has to re-authenticate to the Forms application when logged into an OAM SSO application before. To avoid this, you must disable the co-exist mode, for example by using WLST and issue the disableCoexistMode on the OAM server. Protecting ADF Faces 11g To protect an ADF Faces 11g application we have to consider two scenarios: Use a HTTPD server in front of WLS Use WLS without a HTTPD server Both scenarios have their pro's and cons' and we won't get into details and just describe how to configure both. Scenario 1: HTTPD Server with WLS In this scenario we have to setup the environment in some steps:- Configure a WebGate at OAMThis configuration can be done through the OAM console or by a script. No matter which way you choose, the WebGate configuration files will be created for you. Install the OAM WebGate into an HTTPD serverThe type of webgate you need to install depends on you HTTPD server. With Oracle HTTP Server 11g you can use the latest OAM 11g WebGate. With other HTTPD servers you must resort to OAM 10g WebGates. A OAM 11g WebGate can use the pre-created configuration files supplied during the WebGate configuration at OAM. 
An OAM 10g WebGate asks for the specific configuration and verifies it during installation. Configure the WLS plugin to forward the requests to WLSAgain, depending on your HTTPD Server you have different plugins to forward requests to WLS. With OHS 11g you can use the pre-installed mod_wl_ohs plugin. Its configuration is quite simple and straightforward. Configure an OAM SSPI Provider as a IdentityAsserter in WLS to retrieve the user identifierThis configuration is quite important as it retrieves the user identifier for the next step. If you have a SOA Suite installation within your OFM_HOME, the necessary software is already installed and you only need to setup your Security Realm within WLS.You can do this by pointing your browser to the WLS Console, log in as administrator, select the Security Realm (usually myrealm), and select Providers. We add the OAMIdentityAsserter as the first SSPI Provider. It is important that the Control Flag is set to SUFFICIENT. Every other configuration can be left as is, no changes are necessary here. Configure an OAM Identity Provider to get the real user identityIn OFM 11g: Implementing OAM SSO with Forms we have configured an OID as Identity Store. To get the user identity we need to configure the same OID as an SSPI Provider for WLS. This will retrieve the real user information from OID and creates the JAAS Subject and Principals to be used by any application within WLS.Again, you can do this by pointing your browser to the WLS Console, log in as administrator, select the Security Realm (usually myrealm), and select Providers. Now add the OIDAuthenticator as the second SSPI Provider. It is important that the Control Flag is set to OPTIONAL. After we saved this setup, we need to configure this provider by setting the Provider Specific details to access OID. Scenario 2: WLS only This scenario is a bit easier but requires more work in the WLS setup:- Configure a WebGate at OAMThis configuration can be done through the OAM console or by a script. No matter which way you choose, the WebGate configuration files will be created for you. Configure the OAM SSPI Provider as IdentityAuthenticator to authenticate and set the user identifierWhen using the OAM SSPI Provider as OAMAuthenticator we create it with the Control Flag as SUFFICIENT. Afte saving it, the Provider Specific settings must be configured to allow the OAM SSPI Provider to connect to the OAM Server. Configure an OAM Identity Provider to get the real user identity providerAgain, you can do this by pointing your browser to the WLS Console, log in as administrator, select the Security Realm (usually myrealm), and select Providers. Now add the OIDAuthenticator as the second SSPI Provider. It is important that the Control Flag is set to OPTIONAL. After we saved this setup, we need to configure this provider by setting the Provider Specific details to access OID. Configure ADF 11g Application for OAM Actually, there are no changes to be made within the ADF application. We only need to add the value CLIENT_CERT to the <auth-mode> tag in the <login-config> tag in the web.xml file. Testing To test the configuration, simply point your browser to one of both appliction URLs. OAM should kick in and redirect you to the OAM Login page. After you have entered the correct credentials, access to the URLs is granted and you will see the application. Enjoy!
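
    The disableCoexistMode step mentioned earlier is issued through WLST against the OAM server; a minimal session sketch, with the host, port, and credentials as placeholders (the exact connection details depend on your domain):

        connect('weblogic', '<password>', 't3://adminhost:7001')
        disableCoexistMode()   # OAM-specific WLST command referenced above
        exit()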

    Read the article

  • Protecting offline IRM rights and the error "Unable to Connect to Offline database"

    - by Simon Thorpe
    One of the most common problems I get asked about Oracle IRM is in relation to the error message "Unable to Connect to Offline database". This error message is a result of how Oracle IRM is protecting the cached rights on the local machine and if that cache has become invalid in anyway, this error is thrown. Offline rights and security First we need to understand how Oracle IRM handles offline use. The way it is implemented is one of the main reasons why Oracle IRM is the leading document security solution and demonstrates our methodology to ensure that solutions address both security and usability and puts the balance of these two in your control. Each classification has a set of predefined roles that the manager of the classification can assign to users. Each role has an offline period which determines the amount of time a user can access content without having to communicate with the IRM server. By default for the context model, which is the classification system that ships out of the box with Oracle IRM, the offline period for each role is 3 days. This is easily changed however and can be as low as under an hour to as long as years. It is also possible to switch off the ability to access content offline which can be useful when content is very sensitive and requires a tight leash. So when a user is online, transparently in the background, the Oracle IRM Desktop communicates with the server and updates the users rights and offline periods. This transparent synchronization period is determined by the server and communicated to all IRM Desktops and allows for users rights to be kept up to date without their intervention. This allows us to support some very important scenarios which are key to a successful IRM solution. A user doesn't have to make any decision when going offline, they simply unplug their laptop and they already have their offline periods synchronized to the maximum values. Any solution that requires a user to make a decision at the point of going offline isn't going to work because people forget to do this and will therefore be unable to legitimately access their content offline. If your rights change to REMOVE your access to content, this also happens in the background. This is very useful when someone has an offline duration of a week and they happen to make a connection to the internet 3 days into that offline period, the Oracle IRM Desktop detects this online state and automatically updates all rights for the user. This means the business risk is reduced when setting long offline periods, because of the daily transparent sync, you can reflect changes as soon as the user is online. Of course, if they choose not to come online at all during that week offline period, you cannot effect change, but you take that risk in giving the 7 day offline period in the first place. If you are added to a NEW classification during the day, this will automatically be synchronized without the user even having to open a piece of content secured against that classification. This is very important, consider the scenario where a senior executive downloads all their email but doesn't open any of it. Disconnects the laptop and then gets on a plane. During the flight they attempt to open a document attached to a downloaded email which has been secured against an IRM classification the user was not even aware they had access to. Because their new role in this classification was automatically synchronized their experience is a good one and the document opens. 
    More information on how the Oracle IRM classification model works can be found in this article by Martin Abrahams.

    So what about problems accessing the offline rights database?

    So, onto the core issue: when these rights are cached to your machine, they are stored in an encrypted database. The encryption of this offline database is keyed to the instance of the IRM Desktop installation and the Windows user account. Why? Because what you do not want is for someone to obtain their rights for content and then copy these files across hundreds of other machines, thereby getting access to sensitive content across many environments. The IRM server has a setting which controls how many unique machines you can cache these rights on. This exists because people typically access IRM content on more than one computer: their work desktop, a laptop, and often a home computer. So Oracle IRM allows for the usability of caching rights on more than one computer whilst retaining strong security over this cache.

    So what happens if these files are corrupted in some way? That's when you will see the error "Unable to Connect to Offline database". The most common instance of this is when you are using virtual machines and copy them from one computer to the next. The virtual machine software, VMware Workstation for example, makes changes to the unique information of that virtual machine and as such invalidates the offline database.

    How do you solve the problem?

    The resolution, however, is simple. You just delete all of the offline database files on the machine, and they will be recreated with working encryption when the Oracle IRM Desktop next starts. This does mean, though, that the IRM server will think you have your rights cached on more than one computer, and you will need to re-request your rights even though you are only going to be accessing them on one machine, because the server still thinks the old cache is valid. So be aware: it is good practice to increase the server limit from the default of 1 to, say, 3 or 4. This is done using the Enterprise Manager instance of IRM.

    To delete these offline files I have a simple .bat file you can use: Download DeleteOfflineDBs.bat. Note that this uses pskill to stop irmBackground.exe from running; this process is part of the IRM Desktop and holds open a lock on the offline database. Either kill it from Task Manager or use pskill as part of the script.
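    The linked .bat file is not reproduced here, but its steps are simple enough to sketch. Below is a hypothetical Python equivalent, not the actual script: the irmBackground.exe process name comes from the article, taskkill stands in for pskill, and the offline database location is an assumption that should be replaced with the path used by your IRM Desktop version.

    ```python
    import os
    import shutil
    import subprocess
    from pathlib import Path

    # Stop the IRM Desktop background process that holds a lock on the
    # offline database (the article's script uses pskill for this step).
    subprocess.run(["taskkill", "/F", "/IM", "irmBackground.exe"], check=False)

    # ASSUMPTION: the actual offline database path varies by IRM Desktop
    # version; substitute the correct location for your installation.
    offline_db_dir = Path(os.environ["LOCALAPPDATA"]) / "Oracle" / "IRM" / "OfflineDB"

    # Delete the cached offline database; the IRM Desktop recreates it
    # with working encryption the next time it starts.
    if offline_db_dir.exists():
        shutil.rmtree(offline_db_dir)
        print(f"Removed offline database files under {offline_db_dir}")
    else:
        print(f"No offline database found at {offline_db_dir}")
    ```

    Remember the server-side cache limit discussed above: after a cleanup like this, re-requesting rights counts as another cached copy until the old cache entry expires or the limit is raised.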

    Read the article

  • Java - How to change context root of a dynamic web project in Eclipse

    - by Yatendra Goel
    I have developed a dynamic web project in Eclipse. Currently I can access it through my browser using the following URL: http://localhost:8080/MyDynamicWebApp. Now I want to change the access URL to http://localhost:8080/app. I changed the context root through project properties | Web Project Settings | Context Root, but it is not working; the web app still has the same access URL as before. I have re-deployed the application on Tomcat, restarted Tomcat, and done everything else that should be done, but the access URL is unchanged. I found that there was no server.xml file attached to the WAR file. Then how is Tomcat determining that the context root of my web app is /MyDynamicWebApp, and why is it still letting me access the application through that URL?
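    For reference, a common resolution (a sketch, not verified against this poster's setup): Eclipse WTP stores the context root in the project's .settings/org.eclipse.wst.common.component file rather than in the WAR, and a stale deployment often keeps the old value. Removing the project from the server, cleaning the server, and re-adding the project usually makes the new context root take effect. Alternatively, the context can be pinned in Tomcat's own configuration; in the snippet below the docBase value is assumed from the project name.

    ```xml
    <!-- In Tomcat's conf/server.xml, inside the <Host> element.
         docBase is assumed to match the deployed directory or WAR name. -->
    <Context path="/app" docBase="MyDynamicWebApp" reloadable="true"/>
    ```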

    Read the article

  • How to Share Files Between User Accounts on Windows, Linux, or OS X

    - by Chris Hoffman
    Your operating system provides each user account with its own folders when you set up several different user accounts on the same computer. Shared folders allow you to share files between user accounts. This process works similarly on Windows, Linux, and Mac OS X, which are all powerful multi-user operating systems with similar folder and file permission systems.

    Windows

    On Windows, the "Public" user's folders are accessible to all users. You'll find this folder under C:\Users\Public by default. Files you place in any of these folders will be accessible to other users, so it's a good way to share music, videos, and other types of files between users on the same computer. Windows even adds these folders to each user's libraries by default. For example, a user's Music library contains the user's music folder under C:\Users\NAME\ as well as the public music folder under C:\Users\Public\. This makes it easy for each user to find the shared, public files. It also makes it easy to make a file public: just drag and drop a file from the user-specific folder to the public folder in the library. Libraries are hidden by default on Windows 8.1, so you'll have to unhide them to do this.

    These Public folders can also be used to share folders publicly on the local network. You'll find the Public folder sharing option under Advanced sharing settings in the Network and Sharing Control Panel.

    You could also choose to make any folder shared between users, but this will require messing with folder permissions in Windows. To do this, right-click a folder anywhere in the file system and select Properties. Use the options on the Security tab to change the folder's permissions and make it accessible to different user accounts. You'll need administrator access to do this.

    Linux

    This is a bit more complicated on Linux, as typical Linux distributions don't come with a special user folder all users have read-write access to. The Public folder on Ubuntu is for sharing files between computers on a network. Instead, you can use Linux's permissions system to give other user accounts read or read-write access to specific folders. The process below is for Ubuntu 14.04, but it should be identical on any other Linux distribution using GNOME with the Nautilus file manager, and similar for other desktop environments, too.

    Locate the folder you want to make accessible to other users, right-click it, and select Properties. On the Permissions tab, give "Others" the "Create and delete files" permission. Click the Change Permissions for Enclosed Files button and give "Others" the "Read and write" and "Create and Delete Files" permissions (a sketch after this excerpt shows the equivalent permission bits). Other users on the same computer will then have read and write access to your folder. They'll find it under /home/YOURNAME/folder under Computer. To speed things up, they can create a link or bookmark to the folder so they always have easy access to it.

    Mac OS X

    Mac OS X creates a special Shared folder that all user accounts have access to. This folder is intended for sharing files between different user accounts. It's located at /Users/Shared. To access it, open the Finder and click Go > Computer, then navigate to Macintosh HD > Users > Shared. Files you place in this folder can be accessed by any user account on your Mac.

    These tricks are useful if you're sharing a computer with other people and you all have your own user accounts (maybe your kids have their own limited accounts). You can share a music library, downloads folder, picture archive, videos, documents, or anything else you like without keeping duplicate copies.
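    As a rough illustration of what the Nautilus permission change described above amounts to (a sketch; the folder path is a placeholder, and directories additionally need the execute/search bit so other users can enter them):

    ```python
    import os
    import stat
    from pathlib import Path

    # ASSUMPTION: replace with the folder you actually want to share.
    shared = Path.home() / "Shared"

    def open_to_others(path: Path) -> None:
        """Grant 'others' read/write; directories also get the execute
        (search) bit so other users can traverse into them."""
        extra = stat.S_IROTH | stat.S_IWOTH
        if path.is_dir():
            extra |= stat.S_IXOTH
        os.chmod(path, path.stat().st_mode | extra)

    # Apply to the folder itself and everything inside it.
    open_to_others(shared)
    for p in shared.rglob("*"):
        open_to_others(p)
    ```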

    Read the article

  • Can Google Employees See My Saved Google Chrome Passwords?

    - by Jason Fitzpatrick
    Storing your passwords in your web browser seems like a great time saver, but are the passwords secure and inaccessible to others (even employees of the browser company) when squirreled away?

    Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader MMA is curious whether Google employees have (or could have) access to the passwords he stores in Google Chrome:

    I understand that we are really tempted to save our passwords in Google Chrome. The likely benefit is twofold:

    - You don't need to (memorize and) input those long and cryptic passwords.
    - These are available wherever you are, once you log in to your Google account.

    The last point sparked my doubt. Since the password is available anywhere, the storage must be in some central location, and this should be at Google. Now, my simple question is: can a Google employee see my passwords? Searching over the Internet revealed several articles/messages:

    - Do you save passwords in Chrome? Maybe you should reconsider: Talks about your passwords being stolen by someone who has access to your computer account. Nothing is mentioned about the security or vulnerability of the central storage. There is even a response from the Chrome browser security tech lead about this issue.
    - Chrome's insane password security strategy: Mostly along the same line. You can steal passwords from somebody if you have access to their computer account.
    - How to Steal Passwords Saved in Google Chrome in 5 Simple Steps: Teaches you how to actually perform the act mentioned in the previous two when you have access to somebody else's account.

    There are many more (including this one at this site), mostly along the same lines: points, counter-points, huge debates. I refrain from mentioning them here; simply run a search if you want to find them.

    Coming back to my original query: can a Google employee see my password? Since I can view the password using a simple button, it can definitely be decrypted, even if it is stored encrypted. This is very different from the passwords saved in Unix-like OSes, where the saved password can never be seen in plain text. Those systems use a one-way hashing algorithm to protect your passwords, and the hashed password is stored in the passwd or shadow file. When you attempt to log in, the password you type is hashed again and compared with the entry in the file that stores your passwords. If they match, it must be the same password, and you are allowed access. Thus, a superuser can change my password and can block my account, but he can never see my password.

    So are his concerns well founded, or will a little insight dispel his worry?

    The Answer

    SuperUser contributor Zeel helps put his mind at ease:

    Short answer: No*

    Passwords stored on your local machine can be decrypted by Chrome, as long as your OS user account is logged in, and you can then view them in plain text. At first this seems horrible, but how did you think auto-fill worked? When that password field gets filled in, Chrome must insert the real password into the HTML form element, or else the page wouldn't work right and you could not submit the form. And if the connection to the website is not over HTTPS, the plain text is then sent over the internet. In other words, if Chrome can't get the plain text passwords, they are totally useless. A one-way hash is no good, because we need to use them.
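    (As a quick illustration of the hash-and-compare login scheme the question describes, here is a generic Python sketch; the algorithm and iteration count are illustrative, not what any particular OS uses.)

    ```python
    import hashlib
    import hmac
    import os

    def hash_password(password: str, salt: bytes) -> bytes:
        """Derive a one-way hash; the original password cannot be recovered."""
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

    def verify(password: str, salt: bytes, stored: bytes) -> bool:
        """Re-hash the login attempt and compare it with the stored digest."""
        return hmac.compare_digest(hash_password(password, salt), stored)

    salt = os.urandom(16)                    # random salt, stored with the hash
    stored = hash_password("hunter2", salt)
    print(verify("hunter2", salt, stored))   # True  -> access granted
    print(verify("wr0ng", salt, stored))     # False -> access denied
    ```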
    Now, the passwords are in fact encrypted, and the only way to get them back to plain text is to have the decryption key. That key is your Google password, or a secondary key you can set up (a sketch of such a password-derived scheme appears after this excerpt). When you sign in to Chrome and sync, the Google servers transmit the encrypted passwords, settings, bookmarks, auto-fill data, etc., to your local machine, where Chrome decrypts the information and is able to use it. On Google's end, all that info is stored in its encrypted state, and they do not have the key to decrypt it. Your account password is checked against a hash to log in to Google, and even if you let Chrome remember it, that encrypted version is hidden in the same bundle as the other passwords, impossible to access. So an employee could probably grab a dump of the encrypted data, but it wouldn't do them any good, since they would have no way to use it.*

    So no, Google employees can not** access your passwords, since they are encrypted on their servers.

    * However, do not forget that any system that can be accessed by an authorized user can be accessed by an unauthorized user. Some systems are easier to break than others, but none are fail-proof. That being said, I think I will trust Google, and the millions they spend on security systems, over any other password storage solution. And heck, I'm a wimpy nerd; it would be easier to beat the passwords out of me than to break Google's encryption.

    ** I am also assuming that there isn't a person who just happens to work for Google gaining access to your local machine. In that case you are screwed, but employment at Google isn't actually a factor any more. Moral: hit Win + L before leaving your machine.

    While we agree with Zeel that it's a pretty safe bet (as long as your computer is not compromised) that your passwords are in fact safe while stored in Chrome, we prefer to encrypt all our logins and passwords in a LastPass vault.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
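    As a rough sketch of the password-derived encryption Zeel describes (an illustration only, assuming the third-party cryptography package; this is not Chrome's actual scheme, and the parameters are arbitrary):

    ```python
    import base64
    import os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_password(password: str, salt: bytes) -> bytes:
        """Derive a symmetric key from the account password."""
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=480_000)
        return base64.urlsafe_b64encode(kdf.derive(password.encode()))

    salt = os.urandom(16)
    key = key_from_password("my-account-password", salt)

    # The server can store this token without being able to read it;
    # only a client that can re-derive the key can decrypt it.
    token = Fernet(key).encrypt(b"site password: hunter2")
    print(Fernet(key).decrypt(token))  # b'site password: hunter2'
    ```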

    Read the article
