Search Results

Search found 10748 results on 430 pages for 'disk encryption'.

Page 149/430 | < Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >

  • Running Ubuntu 10.04 without a laptop's primary display

    - by riteshmnayak
    I have an IBM ThinkPad (R50e) whose display is broken. I would still like to use the laptop by connecting it to an external monitor and keyboard/mouse. This is what I did: I removed the hard disk from the broken IBM, put it in a working IBM, and installed 10.04 on it. It booted fine and I installed many packages. I then put the hard disk back into the broken-display IBM, thinking I could use it by connecting it to an external monitor that I own. Well, it turns out that while booting, the display shows up, but because the display shifts from the VGA output to the primary display mid-boot, the laptop does not boot. Is there a way to force the laptop not to use its primary display while booting? I looked at RandR and also grub.conf settings, but nothing seemed to work. Please help!
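    For reference, on a KMS-capable Intel graphics stack one commonly suggested approach is to disable the internal panel with a kernel parameter. A minimal sketch, not a verified fix for this exact model; the output name LVDS-1 is an assumption, so check the names xrandr reports on the machine:

      # /etc/default/grub -- append the video= parameter to the existing line
      GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=LVDS-1:d"
      sudo update-grub    # then reboot with the external monitor attached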

    Read the article

  • Renaming an Invalid Filename in NTFS

    - by Sadeq Dousti
    Recently, I loaned my flash disk to one of my friends, who uses Mac OS. He copied a file onto it whose name includes a backslash (\). The flash disk is NTFS-formatted. Windows does not allow such filenames, and it can neither open, rename, nor delete the file. There are naive approaches to this problem, like: formatting the flash disk; giving it back to my friend and asking him to rename it; booting into a live Linux and renaming it there. However, I'm looking for something more clever, like a program that can do the trick under Windows. PS: There's a tool called NTFSWalker which can browse the MFT records of the NTFS volume, but it is unable to make any changes to them.
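    If the live-Linux route mentioned above turns out to be acceptable after all, the rename is a one-liner once the volume is mounted with ntfs-3g. A sketch; the device name is an example and must be checked (e.g. in dmesg) first:

      sudo mount -t ntfs-3g /dev/sdb1 /mnt        # /dev/sdb1 is an example device
      mv '/mnt/back\slash.txt' '/mnt/backslash.txt'   # quotes keep the shell from eating the backslash
      sudo umount /mnt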

    Read the article

  • Prevent Windows 7 from unplugging an external HDD [migrated]

    - by marverix
    I have installed Windows 7 on my media server. I plugged in a 500 GB external HDD via USB. I changed the power plan to Best Performance and changed the advanced power settings to never turn off the HDD, etc. Yesterday I even wrote a PowerShell script (it creates and deletes a folder on this disk) and added it to the Task Scheduler to run every 5 minutes starting from system boot. And nothing! After some time (I really can't say when) the disk turns off and Windows shows "Unknown Device" in Device Manager. Then only a system reboot or a disk reboot helps. Any ideas how to prevent Windows 7 from stopping my external HDD? Cheers!
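    One setting the power plan GUI does not expose directly is USB selective suspend, which frequently powers down external enclosures regardless of the hard-disk timeout. A sketch of turning it off from an elevated command prompt; the GUIDs are the standard USB settings subgroup and the selective-suspend setting:

      powercfg -setacvalueindex SCHEME_CURRENT 2a737441-1930-4402-8d77-b2bebba308a3 48e6b7a6-50f5-4782-a5d4-53bb8f07e226 0
      powercfg -setactive SCHEME_CURRENT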

    Read the article

  • How to extend a logical volume in VMware

    - by Mercer
    I have CentOS 6.3 in a virtual machine, with 2 disks: Disk #1 = 18 GB, Disk #2 = 20 GB.

    [root@vm ~]# df -h
    Filesystem                         Size  Used Avail Use% Mounted on
    /dev/mapper/vg_system-lv_root     1008M  250M  708M  27% /
    tmpfs                              1.9G     0  1.9G   0% /dev/shm
    /dev/sda1                          194M   31M  154M  17% /boot
    /dev/mapper/vg_system-lv_home      504M   17M  462M   4% /home
    /dev/mapper/vg_system-lv_opt       2.0G   68M  1.9G   4% /opt
    /dev/mapper/vg_produits-lv_grid    6.9G  2.5G  4.1G  38% /opt/grid
    /dev/mapper/vg_produits-lv_oracle  6.9G  144M  6.4G   3% /opt/oracle
    /dev/mapper/vg_system-lv_tmp       2.8G   71M  2.6G   3% /tmp
    /dev/mapper/vg_system-lv_usr       2.5G  1.6G  799M  67% /usr
    /dev/mapper/vg_system-lv_var       2.0G  278M  1.6G  15% /var

    So I want to extend /tmp to 10 GB and /opt/oracle to 13 GB. Thanks.
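    For illustration, the usual online-grow sequence on CentOS 6 looks roughly like this, assuming the filesystems are ext3/ext4 and the volume groups still have free extents (check with vgs; otherwise the new disk must first be added with pvcreate/vgextend):

      vgs                                     # confirm free space in vg_system / vg_produits
      lvextend -L 10G /dev/vg_system/lv_tmp
      resize2fs /dev/vg_system/lv_tmp         # grows the mounted filesystem online
      lvextend -L 13G /dev/vg_produits/lv_oracle
      resize2fs /dev/vg_produits/lv_oracle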

    Read the article

  • Restoring factory recovery partition (Samsung)

    - by user974896
    I have upgraded the HDD in a Samsung laptop and would like to restore the "Press F4 for recovery" partition. My new disk has three partitions: System, Windows, and Recovery. I did a dd of the recovery partition from the old HDD to the new one (from a live CD) and also set the diag flag on the new HDD. F4 recovery still does not work. tl;dr: I partitioned my new disk like the old one, with the exception of the exact sizes, and copied the recovery partition from old to new via live CD, but F4 recovery does not work.
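    For context, the clone-and-flag procedure described above amounts to something like the following from a live CD. This is only a sketch: the device and partition numbers are examples and must be verified with fdisk -l first, and the diag flag shown here is parted's MBR diagnostic flag, which may not match the way the vendor marks the partition:

      dd if=/dev/sda3 of=/dev/sdb3 bs=1M conv=noerror,sync   # old recovery partition -> new one
      parted /dev/sdb set 3 diag on                          # mark the new partition as diagnostic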

    Read the article

  • Mimicking Google's Persistent Disks -- Is this a logical FreeBSD disaster recovery strategy?

    - by Casey Jordan
    I am looking into FreeBSD to provide a more comprehensive backup and disaster recovery strategy for database servers. Ideally I want to mimic what Google is doing with "Persistent disks" https://developers.google.com/compute/docs/disks#snapshots I am hoping someone who knows more about FreeBSD can validate these ideas/questions: I have read that FreeBSD can take instant disk snapshots; therefore, if our databases trigger a consistent state (block all writes and flush buffers to disk), I would assume I could take snapshots every hour without service interruption for more than a few seconds. Is this true? Is there a way to take snapshots and back them up offsite easily? Can this be done incrementally, so as to transfer only the disk space actually used? If a rollback needed to be done, how long does this typically take? Is a rollback also instantaneous? Thanks!
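    Not an authoritative answer, but as a sketch of what that workflow can look like on FreeBSD when the data sits on ZFS (pool and dataset names here are made up; UFS offers mksnap_ffs and dump -L instead):

      zfs snapshot tank/db@hourly-01        # near-instant, taken right after the DB quiesce
      zfs send tank/db@hourly-01 | ssh backuphost zfs recv backup/db               # first full copy
      zfs send -i tank/db@hourly-01 tank/db@hourly-02 | ssh backuphost zfs recv backup/db   # incremental
      zfs rollback tank/db@hourly-02        # rollback to a snapshot is likewise near-instant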

    Read the article

  • Database implementation question?

    - by gundam
    Consider a disk with a sector size of 512 bytes, 2000 tracks/surface, 50 sectors/track, 5 double-sided platters, and an average seek time of 10 msec. Assume a block size of 1024 bytes is selected, and that a file containing 100,000 records of 100 bytes each is to be stored on the disk, where NO record may span 2 blocks. How many blocks are needed to store the entire file? If the file is arranged sequentially on disk, how many surfaces are required? Now, I have calculated that 10,000 blocks are needed to store the 100,000 records, but I am not sure how to find the number of surfaces required. I have only calculated that the capacity of a track is 25 KB and the capacity of a surface is 50,000 KB, but I don't know how to calculate the number of surfaces. Could anyone help me get the answer? Thanks a lot!!
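    One way to finish the calculation, assuming a sequential file simply fills whole tracks in order:

      1 block  = 1024 B = 2 sectors; 10 records of 100 B fit per block (no spanning)
      blocks   = 100,000 records / 10 records per block      = 10,000 blocks
      1 track  = 50 sectors x 512 B = 25 KB                  = 25 blocks per track
      tracks   = 10,000 blocks / 25 blocks per track         = 400 tracks
      surfaces = ceil(400 tracks / 2,000 tracks per surface) = 1 surface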

    Read the article

  • Is it possible to create a Mirror or Stripe volume for the boot partition in Windows 2008/R2?

    - by Georgios
    Hello, I have a server with two identical disks, and I have installed Windows Server 2008 R2 on C:, which is a 60 GB volume on Disk 0. Using Disk Management, I have attempted to create both a Mirror and a Striped volume on Disk 1, but every time I get the same error: "No extents found in the plex". This error occurs after Windows has converted both disks to Dynamic. The fact that the manager lets me attempt this suggests that it should be possible, but I have been unable to find any solution to this error. Any ideas on how to solve it? Thanks, Georgios
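    For reference, the equivalent operation from diskpart looks like the sketch below; it assumes both disks are already dynamic and Disk 1 has at least 60 GB of unallocated space. If it fails the same way, the command-line error text can help narrow the cause down:

      DISKPART> select volume c
      DISKPART> add disk=1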

    Read the article

  • How to use an HFS-formatted pen drive in Windows 7?

    - by row-sun
    I recently used Disk Utility on my MacBook Pro to format my 8 GB pen drive to install OS X. After that I reformatted the pen drive from Disk Utility as FAT32 so that I would be able to use it in Windows. But in Windows the pen drive does not show up. When I right-click on My Computer, click Manage and then Disk Management, the pen drive is listed there, but it doesn't show up in Explorer and I can't use it. I have tried many things, but I'm still not able to use it in Windows, though I can use it in Mac OS X. Could anyone help? Thanks.
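    If the data on the stick is expendable, re-partitioning it from an elevated command prompt often brings it back. A sketch; the disk number is an example, so verify it against the drive size in list disk first, because clean wipes the selected disk:

      DISKPART> list disk
      DISKPART> select disk 1
      DISKPART> clean
      DISKPART> create partition primary
      DISKPART> format fs=fat32 quick
      DISKPART> assign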

    Read the article

  • Windows 7 System Image Restore to SSD - SSD not detected?

    - by steviey
    I recently bought a new SSD (OCZ Vertex 2 120GB) as a replacement for my laptop HDD. I created a system image on an external drive using the Win 7 Backup and Restore tool and then restored this image to the SSD using the Win 7 disc. AHCI is enabled in Win 7 and the BIOS, but it seems that Windows is not detecting the disk as an SSD, because the disk is still offered for defragging in the defragger, and Prefetch and Superfetch were also still enabled. I have manually disabled scheduled defragging, Prefetch and Superfetch. I have also checked that TRIM is enabled using: fsutil behavior query DisableDeleteNotify and the result is '0' (so it is). Performance seems OK, but not quite as fast as I expected. Was restoring from a Win 7 generated system image a mistake? Am I getting optimal SSD performance?
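    One commonly suggested check, offered here as a hedged suggestion rather than a guaranteed fix: Windows 7 classifies a disk as an SSD based on the Windows Experience Index results, so re-running the assessment from an elevated prompt after an image restore is supposed to make it re-detect the drive and disable scheduled defrag on its own:

      winsat formal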

    Read the article

  • Occasional HDD Related Hangs

    - by diolemo
    I have an issue that seems to be disk related. Occasionally my system blocks for 30 or so seconds and is mostly unresponsive. Things already in memory that require no disk access continue to function, but anything requiring disk access (however small) hangs. 30 seconds later everything is back to normal. I have 2 Samsung drives in RAID 0, 1 WD drive (which has the pagefile), and an SSD used for caching (Intel Smart Response). This is a new install of Windows, and the drives were fine in my old PC (~1 month ago). Has anyone experienced anything similar? Any suggestions to solve this one? Thanks in advance.

    Read the article

  • After deleting log files, Ubuntu server still says there is no space

    - by Mark
    My Ubuntu server has stopped due to a lack of disk space. I deleted some log files which had grown huge very quickly, but df -h still shows I have no space left. When I run du -sh /* I can see that I should have plenty of disk space left after deleting the logs. I ran lsof +L1 and it brought up two files: /var/log/mail.log and /var/log/mail.err, the two logs I had deleted. I restarted Apache, Postfix and MySQL (MySQL won't restart, because of the lack of disk space, I think), but df -h still shows no space.
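    The classic cause is a daemon still holding the deleted files open, so their blocks are not released until the file handle goes away. A sketch of reclaiming the space without a full reboot; the PID and fd number below are examples that must be read off the lsof output:

      lsof +L1                  # note the PID and FD columns for mail.log / mail.err
      : > /proc/1234/fd/5       # truncate the deleted-but-open file in place
      service postfix restart   # or simply bounce the daemon holding the handle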

    Read the article

  • Trouble loading an ISO

    - by crocyson
    I've created a boot disk and ISO image with Paragon. I boot up the virtual PC with the disc in and I get to my recovery page, but when I am supposed to select an ISO image, my virtual PC doesn't acknowledge any of the physical hard drives on my host PC. I've also tried copying/pasting the files into my datastore, and they do not show up. I've tried the option during setup to start with the ISO, but again I am not able to browse to the external hard drive that I have stored the ISO on. VMware will acknowledge the drive and say that it is connected, but I can't browse to it. Am I doing something wrong? I created the backup disc and ISO image on the external drive with Paragon.

    Read the article

  • LVM vs RAID0 vs RAID "linear" - Combine 2 disks as one, data recovery?

    - by leto
    Hi there, given two 2 TB USB external disks that have to be combined into one 4 TB volume and formatted with one big filesystem (XFS), I have a small question to ask. Does LVM provide better data recovery should one disk be unplugged or damaged, i.e. by being able to recover the data of the still-working disk, or is everything lost? I would appreciate a solution where only the data of one disk is lost and I can recover the content of the other with the usual filesystem/LVM/RAID tools. Is that possible with LVM or RAID "linear"? This is for storing unimportant files that can be retrieved from backup, but I want to save time :) Thank you in advance
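    For comparison when evaluating this, an md "linear" array over the two disks can be built as sketched below. The device names are examples, and mkfs destroys existing data on them:

      mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb1 /dev/sdc1
      mkfs.xfs /dev/md0
      mount /dev/md0 /mnt/big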

    Read the article

  • Recovering VM data from a crashed XenServer install

    - by user58983
    We're using XenServer 5.6 from Citrix to run our virtual machines. The box is old and creaky, and the hard drive went (we had a super limited budget of $0 at the time). Turns out the only machine on there we actually needed was the only one not backed up... Anyway, we've replaced the hard drive and reinstalled XenServer. However, while trying to find our old VM disk images, I have no idea where they sit, and I'm beginning to think they're on an LVM partition. Is there any way to access the disk images from the old VMs on this hard disk, which is attached to the XenServer machine via a USB-SATA converter?
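    They most likely are on LVM: XenServer's local storage repository is a volume group, and each VM disk (VDI) is a logical volume inside it. A sketch of hunting for them once the old disk is attached; the VHD-<uuid> naming matches 5.x LVHD storage, but verify on the actual system:

      pvscan           # look for a volume group named like VG_XenStorage-<uuid>
      vgchange -ay     # activate its logical volumes
      lvs              # the VDIs show up as LVs, e.g. VHD-<uuid> or LV-<uuid>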

    Read the article

  • Accessing a broken mdadm RAID

    - by CarstenCarsten
    Hi! I used a Western Digital MyBook World (a SOHO NAS running Linux) as backup for my Linux box. Suddenly, the MyBook World does not boot up any more. So I opened the box, removed the hard disk, put it into an external USB HDD case, and connected it to my Linux box.

    [ 530.640301] usb 2-1: new high speed USB device using ehci_hcd and address 3
    [ 530.797630] scsi7 : usb-storage 2-1:1.0
    [ 531.794844] scsi 7:0:0:0: Direct-Access WDC WD75 00AAKS-00RBA0 PQ: 0 ANSI: 2
    [ 531.796490] sd 7:0:0:0: Attached scsi generic sg3 type 0
    [ 531.797966] sd 7:0:0:0: [sdc] 1465149168 512-byte logical blocks: (750 GB/698 GiB)
    [ 531.800317] sd 7:0:0:0: [sdc] Write Protect is off
    [ 531.800327] sd 7:0:0:0: [sdc] Mode Sense: 38 00 00 00
    [ 531.800333] sd 7:0:0:0: [sdc] Assuming drive cache: write through
    [ 531.803821] sd 7:0:0:0: [sdc] Assuming drive cache: write through
    [ 531.803836] sdc: sdc1 sdc2 sdc3 sdc4
    [ 531.815831] sd 7:0:0:0: [sdc] Assuming drive cache: write through
    [ 531.815842] sd 7:0:0:0: [sdc] Attached SCSI disk

    The dmesg output looks normal, but I was wondering why the hard disk was not mounted at all, and why there are 4 different partitions on it. fdisk showed the following:

    root@ubuntu:/home/ubuntu# fdisk /dev/sdc
    WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').
    Command (m for help): p
    Disk /dev/sdc: 750.2 GB, 750156374016 bytes
    255 heads, 63 sectors/track, 91201 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00007c00
    Device Boot   Start     End     Blocks   Id  System
    /dev/sdc1         4     369    2939895   fd  Linux raid autodetect
    /dev/sdc2       370     382     104422+  fd  Linux raid autodetect
    /dev/sdc3       383     505     987997+  fd  Linux raid autodetect
    /dev/sdc4       506   91201  728515620   fd  Linux raid autodetect

    Oh no! Everything seems to have been created as mdadm software RAID. Calling mdadm --examine on the different partitions seems to confirm that. I think the only partition I am interested in is /dev/sdc4 (because it is the largest), but nevertheless I called mdadm --examine on every partition.
    root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc1
    /dev/sdc1:
              Magic : a92b4efc
            Version : 00.90.00
               UUID : 5626a2d8:070ad992:ef1c8d24:cd8e13e4
      Creation Time : Wed Feb 20 00:57:49 2002
         Raid Level : raid1
      Used Dev Size : 2939776 (2.80 GiB 3.01 GB)
         Array Size : 2939776 (2.80 GiB 3.01 GB)
       Raid Devices : 2
      Total Devices : 1
    Preferred Minor : 1
        Update Time : Sun Nov 21 11:05:27 2010
              State : clean
     Active Devices : 1
    Working Devices : 1
     Failed Devices : 1
      Spare Devices : 0
           Checksum : 4c90bc55 - correct
             Events : 16682
          Number   Major   Minor   RaidDevice State
    this     0       8        1        0      active sync   /dev/sda1
       0     0       8        1        0      active sync   /dev/sda1
       1     1       0        0        1      faulty removed

    root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc2
    /dev/sdc2:
              Magic : a92b4efc
            Version : 00.90.00
               UUID : 9734b3ee:2d5af206:05fe3413:585f7f26
      Creation Time : Wed Feb 20 00:57:54 2002
         Raid Level : raid1
      Used Dev Size : 104320 (101.89 MiB 106.82 MB)
         Array Size : 104320 (101.89 MiB 106.82 MB)
       Raid Devices : 2
      Total Devices : 1
    Preferred Minor : 2
        Update Time : Wed Oct 27 20:19:08 2010
              State : clean
     Active Devices : 1
    Working Devices : 1
     Failed Devices : 1
      Spare Devices : 0
           Checksum : 55560b40 - correct
             Events : 9884
          Number   Major   Minor   RaidDevice State
    this     0       8        2        0      active sync   /dev/sda2
       0     0       8        2        0      active sync   /dev/sda2
       1     1       0        0        1      faulty removed

    root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc3
    /dev/sdc3:
              Magic : a92b4efc
            Version : 00.90.00
               UUID : 08f30b4f:91cca15d:2332bfef:48e67824
      Creation Time : Wed Feb 20 00:57:54 2002
         Raid Level : raid1
      Used Dev Size : 987904 (964.91 MiB 1011.61 MB)
         Array Size : 987904 (964.91 MiB 1011.61 MB)
       Raid Devices : 2
      Total Devices : 1
    Preferred Minor : 3
        Update Time : Sun Nov 21 11:05:27 2010
              State : clean
     Active Devices : 1
    Working Devices : 1
     Failed Devices : 1
      Spare Devices : 0
           Checksum : 39717874 - correct
             Events : 73678
          Number   Major   Minor   RaidDevice State
    this     0       8        3        0      active sync
       0     0       8        3        0      active sync
       1     1       0        0        1      faulty removed

    root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc4
    /dev/sdc4:
              Magic : a92b4efc
            Version : 00.90.00
               UUID : febb75ca:e9d1ce18:f14cc006:f759419a
      Creation Time : Wed Feb 20 00:57:55 2002
         Raid Level : raid1
      Used Dev Size : 728515520 (694.77 GiB 746.00 GB)
         Array Size : 728515520 (694.77 GiB 746.00 GB)
       Raid Devices : 2
      Total Devices : 1
    Preferred Minor : 4
        Update Time : Sun Nov 21 11:05:27 2010
              State : clean
     Active Devices : 1
    Working Devices : 1
     Failed Devices : 1
      Spare Devices : 0
           Checksum : 2f36a392 - correct
             Events : 519320
          Number   Major   Minor   RaidDevice State
    this     0       8        4        0      active sync
       0     0       8        4        0      active sync
       1     1       0        0        1      faulty removed

    If I read the output correctly, the second mirror member was removed because it was faulty. Is there ANY way to see the contents of the largest partition, or to see which files are broken? I see that everything is RAID1, which is only mirroring, so this should essentially be a normal copy of the data. I am anxious about doing anything with mdadm, for fear of destroying the data on the hard disk. I would be very thankful for any help.
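    One cautious way to get at the data, since each partition is one surviving half of a RAID1 mirror: assemble the degraded array from the single member and mount it read-only. A sketch; the md device number is arbitrary:

      mdadm --assemble --run /dev/md4 /dev/sdc4   # --run starts the array despite the missing mirror
      mount -o ro /dev/md4 /mnt                   # read-only, so nothing on the disk is changed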

    Read the article

  • GRUB boot failed after upgrading to Ubuntu 12.04 LTS

    - by user138021
    A "no such disk" error occurred. I tried to format and reinstall 12.04; the problem remained. I also repaired with Boot-Repair; the problem remained. http://paste.ubuntu.com/1224005/ is the URL of the details. The server is a Dell R710, the BIOS is set to UEFI, and the disk is RAID0 with GPT. I typed commands at the grub rescue prompt: ls (hd0,gpt2) shows "no such partition", and set shows prefix=((null),gpt2)/grub. I don't know why /efi/ubuntu/grubx64.efi doesn't recognize the disk. Another strange thing is that there is only 1 file in /efi/ubuntu.
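    For reference, the usual way out of grub rescue is to point it at whichever partition really holds /boot/grub and hand off to the normal module. A sketch; (hd0,gpt2) is only an example, and the right name must come from what ls actually prints on this box:

      grub rescue> ls                              # lists the disks/partitions GRUB can see
      grub rescue> set root=(hd0,gpt2)             # use the partition that contains /boot/grub
      grub rescue> set prefix=(hd0,gpt2)/boot/grub
      grub rescue> insmod normal
      grub rescue> normal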

    Read the article

  • OS X Leopard (10.5) Server -- Unwanted Emails

    - by David
    Our OS X Leopard (10.5) server is sending out an email EVERY DAY with a Disk Full notification for an external HDD we use for Time Machine backups (which manages its own disk space, so the disk-full notifications are useless). In the Server Admin app, under Settings > Notifications, the disk-space notification is clearly disabled. The email header and message: http://pastebin.com/V5PZ3Phk I've been told to check mail.log, but it's mostly empty (except for a few entries in 2011 saying "Postfix mail system is not running"). I've also been told to check the crontabs, but I'm not sure how to do that on OS X (or any OS, for that matter). Can someone explain how to do this? Also, if there are any other suggestions on places/things to look at/for, my ears are all open!
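    Listing the crontabs works the same on OS X as on any other Unix; on 10.5, though, most scheduled jobs live in launchd rather than cron, so those are worth checking too. A sketch:

      crontab -l                       # current user's crontab
      sudo crontab -u root -l          # root's crontab
      cat /etc/crontab 2>/dev/null     # legacy system-wide table, often absent
      ls /Library/LaunchDaemons /System/Library/LaunchDaemons   # launchd jobs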

    Read the article

  • Unable to read DVD on laptop

    - by usabilitest
    I have a DVD that I created on an HP laptop. I believe it was set up to be accessed as a USB device. I no longer have that laptop, and I would still like to access the files on that DVD from my current Dell laptop, but it cannot open the disc. My guess is that the session was not closed on the disc, and for some reason the new laptop cannot read it. My question is: is there any way I can access the files using the new laptop, or do I need to find a similar HP to try to access the files and possibly close the session? Are there any utilities out there that could help me? Thanks.

    Read the article

  • Check SSD stability before returning it as broken?

    - by data_jepp
    I have the Transcend SSD 720 256 GB 2.5" 7 mm disk. Randomly I get disk I/O errors in uTorrent, and copying large files to an external drive will fail. I have removed it from my laptop since I need maximum stability; I use my laptop in exams. How can I really bench it and make it sweat while it is connected to my desktop? I want to re-create some of the failures, as I need proof if I am to return it. The firmware is updated, and the BIOS on the laptop is updated. I could just completely zero the disk and then check for bad sectors, but is there anything else I could use for testing? Edit: It has 7 bad sector replacement operations. What does that mean?
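    Two stock tools that help build a paper trail, sketched below with an example device name: smartctl dumps the attribute that "bad sector replacement operations" usually maps to (reallocated/retired blocks), and badblocks can sweep every sector. As shown, both are read-only; badblocks becomes destructive only with -w:

      smartctl -a /dev/sdb     # look at Reallocated_Sector_Ct / Retired_Block_Count
      badblocks -sv /dev/sdb   # non-destructive read test of the whole device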

    Read the article

  • Oracle Database 11g R2 is now also supported under SAP

    - by Lajos Sárecz
    Since Easter, Oracle Database 11g R2 can also be used under SAP. It is well known that SAP only issues support for Release 2, so this is truly welcome news for SAP users, who can now apply the following 11g R2 features in SAP environments:
    • Advanced Compression option (for tables, RMAN backups, expdp, and the Data Guard network)
    • Real Application Testing
    • Oracle Database 11g Release 2 Database Vault
    • Oracle Database 11g Release 2 RAC
    • Advanced Encryption for tablespaces, RMAN backups, expdp, and the Data Guard network
    • Direct NFS
    • Deferred Segments
    • Online Patching
    This means, for example, that the SAP database, or the backups made from it, can be compressed; experience so far shows compression ratios of 2-4x, depending on the database. The risk of database upgrades and of any other changes to the database infrastructure can be reduced significantly by using Real Application Testing. Administrator roles can be separated using Database Vault, and the new features of Real Application Clusters 11g R2 also become available. With Transparent Data Encryption, tablespaces and backups can be encrypted in a way that is fully transparent to the application, while the data cannot be deciphered by accessing the media directly. The Direct NFS client is now supported, which improves NFS access speed considerably. With Deferred Segments, table segments are allocated only when data is actually inserted into the table; this is useful because application installers typically create every table, yet many tables never receive any data, so both installation time and database size can be reduced. Online Patching makes it possible to install patches without downtime. These are attractive capabilities, I think, and it is worth planning the upgrade of the database under your SAP systems for the near future, since Premier Support for the 10g release expires this summer. For the upgrade I definitely recommend using Real Application Testing, which lets the upgrade be tested in a test environment under production load. The Sun Oracle Database Machine and Exadata are unfortunately not yet supported under SAP, as the ASM certification has not been completed; according to the news, this is expected in early 2011.

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is a MCITP Database Administrator and Developer, who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006 he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting.

    On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change and learn. In all that time, he has not lost his focus on helping the larger community. I am honored that he has accepted to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan:

    SQL Server troubleshooting is all about correlating related pieces of information together to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, "So and so application is slow, what's wrong with the SQL Server." One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn't related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further.

    The SQL Server is slow: we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_exec_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() shows a high write stall rate for tempdb, while our user databases show high read stall rates on the data files. A quick check of some performance counters: Page Life Expectancy on the server is bouncing up and down in the 50-150 range, the Free Pages counter consistently hits zero, and the Free List Stalls/sec counter keeps jumping over 10, but Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem?

    In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs: it is a dual dual-core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max server memory is configured at 6GB, and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside of this system, and they all relate to what the root cause of the performance problem is on the system.
If we were to query sys.dm_exec_query_stats for the TOP 10 queries, by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. Ok, quick check, the plans are pretty big, I see some large index seeks, that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, and the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem right? Not exactly! If we were to look at how much memory the plan cache is taking by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I’ll give you a second to go back and read that again. Yes, you read it correctly, it says 75% of the memory under 8GB, but you don’t have to take my word for it, you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely adhoc workload against SQL Server and this leads to plan cache bloat, and up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to cause complete flushing of the buffer cache, not just once initially, but then another time during the queries execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well. The real clue here is the Memory counters for the instance; Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is fluctuating between 50 and 150 constantly is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to the Page Life Expectancy counter, the consistent bottoming out of Free List Pages along with Free List Stalls/sec consistently spiking over 10, and you have the perfect memory pressure scenario. All of sudden it may not be that our disk subsystem is the problem, but is instead an innocent bystander and victim. Side Note: The Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. The Books Online and a number of other references will tell you that this counter should remain on average above 300 which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems and most published documents pre-date the predominance of 64 bit computing and easy availability to larger amounts of memory in SQL Servers. 
As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade. Back to our problem and its investigation: There are two things really wrong with this server; first the plan cache is excessively consuming memory and bloated in size and we need to look at that and second we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on there were a lot of single use plans found in sys.dm_exec_cached_plans (where usecounts=1). Single use plans waste space in the plan cache, especially when they are adhoc plans for statements that had concatenated filter criteria that is not likely to reoccur with any frequency.  SQL Server 2005 doesn’t natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney, showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database. You didn’t write the app, and you can’t change its design? Ok, well you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED and that might help. If neither of these help, we could periodically dump the plan cache for that database, as discussed as being a problem in Kalen’s blog post referenced above; not an ideal scenario. The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB  [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache.  If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Pack 2, 3, and 4 these days which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post. In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn’t much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn’t until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause.  I couldn’t believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen. 
    As the DBA, you have to sit back and listen to all that it's telling you, and then evaluate the big picture: how all the data you can gather from SQL Server about performance relates together. One of the greatest tools out there is actually free, in the form of the Diagnostic Scripts for SQL Server 2005 and 2008 created by MVP Glenn Alan Berry. Glenn's scripts collect a majority of the information that SQL has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you.

    When I read Pinal's blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O related wait type. I thought I'd chat with him briefly on Google Talk/Twitter DM and point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful. Instead he asked that I write a guest blog for this. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running, and where it may be having problems; it is up to us to play detective and find out how all that information comes together to tell us what's really the problem.

    This blog post is written by Jonathan Kehayias (Blog | Twitter). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Queued Loadtest to remove Concurrency issues using Shared Data Service in OpenScript

    - by stefan.thieme(at)oracle.com
    Queued Processing to remove Concurrency issues in Loadtest Scripts

    Some scripts act on information returned by the server, e.g. act on the first item in the returned list of pending tasks/actions. This may lead to concurrency issues if the virtual users simulated in a load test scenario are not synchronized in some way. As the load test cases should be carried out in a comparable and straightforward manner, simply cancelling a transaction in case a collision occurs is clearly not an option. If you increase the number of virtual users, this approach would lead to a high number of requests for the early steps in your transaction (e.g. login, retrieve list of action points, assign an action point to the virtual user), but later steps would be rarely visited successfully or at all, depending on the application logic.

    A way to tackle this problem is to enqueue the virtual users in a Shared Data Service queue. Only the first virtual user in this queue will be allowed to carry out the critical steps (retrieve list of action points, assign an action point to the virtual user) in your transaction at any one time. Once a virtual user has passed the critical path, it will dequeue itself from the head of the queue and continue with its actions. This does theoretically allow virtual users to run in parallel all steps of the transaction which are not part of the critical path. In practice this is rarely the case, and it does not allow adding more than N users to a transaction without causing delays due to virtual users waiting in the queue, where N is the time of the total transaction divided by the sum of the time of all critical steps in this transaction. This problem can be circumvented by allowing multiple queues to act on individual segments of the list of actions, e.g. a per-country filter, an "ends with 0..9" filter, etc. This would require additional handling of these additional queues, and of slots for the virtual users at the head of each queue, in order to maintain mutually exclusive access to the first element in the list returned by the server at any one time during the load test. Such improved handling of multiple queues and/or multiple slots is beyond the subject of this paper.

    Shared Data Services Pre-Requisites

    Start WebLogic Server to host Shared Data Services
    You will have to make sure that your WebLogic server is installed and started. Shared Data Services may not work if you installed only the minimal installation package for OpenScript. If, however, you installed the default package including OLT and OTM, you may follow the instructions below to start and verify the WebLogic installation. To start the WebLogic Server deployed underneath Oracle Load Testing and/or Oracle Test Manager, go to your Start menu, Oracle Application Testing Suite, and select the Restart Oracle Application Testing Suite Application Service entry from the Tools submenu. To verify the service has been started, you can run the Microsoft Management Console for Services by selecting Run from the Start menu and entering services.msc. Look for the entry that reads Oracle Application Testing Suite Application Service; once it has changed its status from Starting to Started, you can proceed to verify the login. Please note that this may take several minutes, I would say up to 10 minutes depending on the strength of your CPU horsepower.

    Verify WebLogic Server user credentials
    You will have to make sure that your WebLogic Server is installed and started. Next open the Oracle WebLogic Server Administration Console on http://localhost:8088/console. It may take a while until the application is deployed and started; a deployment page may be displayed until the Administration Console has been deployed on the fly. Afterwards you can log in using the username oats and the password that you selected at install time for your Application Testing Suite administrative purposes. This will bring up the Home page of your WebLogic Server; at this point you have verified that you are able to log in with these credentials. If you want to check the details, navigate to Security Realms, myrealm, Users and Groups tab. Here you could add users to your WebLogic Server which could be used in the later steps. Details on the Groups required for such a custom user to work exceed this quick overview and have to be selected with the WebLogic Server Administration Guide in mind.

    Shared Data Services pre-requisites for Load testing
    OpenScript Preferences have to be set to enable Encryption and provide a default Shared Data Service Connection for Playback. These are the pre-requisites you want to use for load testing with Shared Data Services. Please note that the usage of the Connection Parameters (individual directive in the script) for Shared Data Services did not play back reliably in the current version 9.20.0370 of Oracle Load Testing (OLT), and encryption of credentials still seemed to be mandatory as well.

    General Encryption settings
    Select OpenScript Preferences from the View menu and navigate to the General, Encryption entry in the tree on the left. Select the Encrypt script data option from the list and enter the same password that you used for securing your WebLogic Server Administration Console.

    Enable global shared data access credentials
    Select OpenScript Preferences from the View menu and navigate to the Playback, Shared Data entry in the tree on the left. Enable the global shared data access credentials and enter the Address, User name and Password determined for your WebLogic Server to host Shared Data Services. Please note that you may want to replace the localhost in Address with the host's real name in case you plan to run load tests with Loadtest Agents running on remote systems.

    Queued Processing of Transactions

    Enable Shared Data Services Module in Script Properties
    The Shared Data Services Module has to be enabled for each script that wants to employ the Shared Data Service Queue functionality in OpenScript. It can be enabled under the Script menu by selecting Script Properties. On the Script Properties dialog, select the Modules section and check Shared Data to enable the Shared Data Service Module for your script. Checking the Shared Data Services option will effectively add a line to your script code that adds the sharedData ScriptService to your script class of IteratingVUserScript:

      @ScriptService oracle.oats.scripting.modules.sharedData.api.SharedDataService sharedData;

    Record your script
    Record your script as usual and then add the following things for Queue handling: in the Initialize code block, before the first step and after the last step of your critical path, and in the Finalize code block. The Java code to be added at the individual locations is explained in the following sections in full detail.

    Create a Shared Data Queue in Initialize
    To create a Shared Data Queue, go to the Java view of your script and enter the following statements into the initialize() code block:

      info("Create queueA with life time of 120 minutes");
      sharedData.createQueue("queueA", 120);

    This will create an instantiation of the Shared Data Queue object named queueA which is maintained for up to 120 minutes. If you want to use the code for multiple scripts, make sure to use a different queue name for each one here and in the subsequent steps. You may even consider using a dynamic queue name based on the filters of your result list being concurrently accessed.

    Prepare a unique id for each Iteration
    In order to keep track of individual virtual users in our queue, we need to create a unique identifier from the virtual user id and the used username, right after retrieving the next record from our databank file:

      getDatabank("Usernames").getNextDatabankRecord();
      getVariables().set("usernameValue1", "VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}");
      String usernameValue = getVariables().get("usernameValue1");
      info("Now running virtual user " + usernameValue);

    As you can see from the above code block, we have set the OpenScript variable usernameValue1 to VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}, which is a concatenation of the virtual user id and the iteration number for general uniqueness, as well as the username from our databank, the timestamp and a random number for making it further unique and to ease spotting of errors. Not all of these fields are actually required to make it really unique, but adding the queue name may also be considered to help troubleshoot multiple queues. The value is then retrieved with the getVariables().get() method call and assigned to the usernameValue String used throughout the script. Please note that moving the getDatabank("Usernames").getNextDatabankRecord(); call to the initialize block was later considered, to remove concurrency of multiple virtual users running with the same userid and therefore accessing the same "My Inbox" in step 6. This will effectively give each virtual user a userid from the databank file; make sure you have enough userids to remove this second hurdle.

    Enqueue and attend Queue before Critical Path
    To maintain the right order of virtual users being allowed into the critical path of the transaction, the following pseudo step has to be added in front of the first critical step. In the case of this example, this is right in front of the step where we retrieve the list of actions from which we select the first to be assigned to us.

      beginStep("[0] Waiting in the Queue", 0);
      {
        info("Enqueued virtual user " + usernameValue + " at the end of queueA");
        sharedData.offerLast("queueA", usernameValue);
        info("Wait until the user is the first in queueA");
        String queueValue1 = null;
        do {
          // we wait for at least 0.7 seconds before we check the head of the
          // queue. This is the time it takes one user to move through the
          // critical path, i.e. pass steps [5] Enter country and [6] Assign
          // to me
          Thread.sleep(700);
          queueValue1 = (String) sharedData.peekFirst("queueA");
          info("The first user in queueA is currently: '" + queueValue1 + "' " + queueValue1.getClass() + " length " + queueValue1.length());
          info("The current user is '" + usernameValue + "' " + usernameValue.getClass() + " length " + usernameValue.length() + ": indexOf " + usernameValue.indexOf(queueValue1) + " equals " + usernameValue.equals(queueValue1));
        } while ( queueValue1.indexOf(usernameValue) < 0 );
        info("Now the user is the first in queueA");
      }
      endStep();

    This will enqueue the username at the tail of our queue. It will wait for at least 700 milliseconds, the time it takes for one user to exit the critical path, and then compare the head of our queue with its username. This last step is repeated while the two are not equal (indexOf less than zero). If they are equal, indexOf will yield a value of zero or larger and we will perform the critical steps.

    Dequeue after Critical Path
    After the virtual user has left the critical path and completed its last step, the following code block needs to dequeue the virtual user. In the case of our example, this is right after the action has actually been assigned to the virtual user. This will allow the next virtual user to retrieve the list of actions still available and in turn let him make his selection/assignment.

      info("Get and remove the current user from the head of queueA");
      String pollValue1 = (String) sharedData.pollFirst("queueA");

    The current user is removed from the head of the queue. The next one will now be able to match his username against the head of the queue.

    Clear and Destroy Queue for Finish
    When the script has completed, it should clear and destroy the queue. This code block can be put in the finish block of your script and/or in a separate script, in order to clear and remove the queue in case you have spotted an error or want to reset the queue for some reason.

      info("Clear queueA");
      sharedData.clearQueue("queueA");
      info("Destroy queueA");
      sharedData.destroyQueue("queueA");

    The users waiting in queueA are cleared and the queue is destroyed. If you have scripts still executing, they will be caught in a loop. I found it better to maintain a separate Reset Queue script which contains only the following code in the initialize() block. I use this script to make sure the queue is cleared in between multiple load test runs. It could even be added as the first script in a larger scenario, which would execute it only once at the very start of the load test and make sure the queues do not contain any stale entries.

      info("Create queueA with life time of 120 minutes");
      sharedData.createQueue("queueA", 120);
      info("Clear queueA");
      sharedData.clearQueue("queueA");

    This will create a Shared Data Queue instance of queueA and clear all entries from this queue.

    Monitoring Queue
    While creating the scripts, it was useful to monitor the contents, i.e. the current first user, of the queue. The following code block will make sure the Shared Data Queue is accessible in the initialize() block:

      info("Create queueA with life time of 120 minutes");
      sharedData.createQueue("queueA", 120);

    In the run() block, the following code will continuously monitor the first element of the queue and write an informational message with the current usernameValue to the Result window:

      info("Monitor the first users in queueA");
      String queueValue1 = null;
      do {
        queueValue1 = (String) sharedData.peekFirst("queueA");
        if (queueValue1 != null)
          info("The first user in queueA is currently: '" + queueValue1 + "' " + queueValue1.getClass() + " length " + queueValue1.length());
      } while ( true );

    This script can be run from OpenScript in parallel to a load test performed by Oracle Load Testing. However, it is not recommended to run this in a production load test, as the performance impact is unknown. Accessing the queue's head with the peekFirst() method has been reported with about 2 seconds response time by both OpenScript and OLT. It is advised to log a Service Request to see if this could be lowered in future releases of Application Testing Suite, as pollFirst(), and even offerLast() writing to the tail of the queue, usually returned after an average 0.1 seconds.

    Debugging Queue
    While debugging the scripts, the following was useful to remove single entries from the head of the queue, i.e. the current first user. The following code block will make sure the Shared Data Queue is accessible in the initialize() block:

      info("Create queueA with life time of 120 minutes");
      sharedData.createQueue("queueA", 120);

    In the run() block, the following code will remove the first element of the queue and write an informational message with the current usernameValue to the Result window:

      info("Get and remove the current user from the head of queueA");
      String pollValue1 = (String) sharedData.pollFirst("queueA");
      info("The first user in queueA was currently: '" + pollValue1 + "' " + pollValue1.getClass() + " length " + pollValue1.length());

    References
    Oracle Functional Testing OpenScript User's Guide Version 9.20 [E15488-05], Chapter 17 Using the Shared Data Module
    http://download.oracle.com/otn/nt/apptesting/oats-docs-9.21.0030.zip
    Oracle Fusion Middleware Oracle WebLogic Server Administration Console Online Help 11g Release 1 (10.3.4) [E13952-04], Administration Console Online Help - Manage users and groups
    http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e13952/taskhelp/security/ManageUsersAndGroups.htm

    Read the article

< Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >