Search Results

Search found 16940 results on 678 pages for 'disk drive'.

Page 219/678

  • Beginner Geek: How To Change the Boot Order in Your Computer’s BIOS

    - by Chris Hoffman
    The boot order in your computer’s BIOS controls which device it loads the operating system from. Modify your boot order to force your computer to boot from a USB drive, CD or DVD drive, or another hard drive. You may need to change this setting when booting from another device, whether you’re running an operating system from a live USB drive or installing a new operating system from a disc. Note: This process will look different on each computer. The instructions here will guide you through the process, but the screenshots won’t look exactly the same.

    Read the article

  • Device being used by VxVM

    - by Onur Bingul
    If you are using VxVM, you may have issues when you try to unconfigure a disk:

      root@techsupport2 # cfgadm -c unconfigure c1::dsk/c1t3d0
      cfgadm: Component system is busy, try again: failed to offline:
          Resource              Information
          ----------------      -------------------------
          /dev/dsk/c1t3d0       Device being used by VxVM

    The "cfgadm unconfigure" command fails here. The way to resolve this is to disable the disk's path from DMP control. Since there is only one path to this disk, the "-f" (force) option needs to be used:

      root@techsupport2 # vxdmpadm -f disable path=c1t3d0s2
      root@techsupport2 # vxdmpadm getsubpaths
      NAME         STATE[A]      PATH-TYPE[M]  DMPNODENAME  ENCLR-NAME   CTLR   ATTRS
      ================================================================================
      c1t6d0       ENABLED(A)    -             disk_0       disk         c1     -
      c1t3d0       DISABLED(M)   -             disk_1       disk         c1     -
      c1t0d0s2     ENABLED(A)    -             disk_2       disk         c1     -
      c1t1d0       ENABLED(A)    -             disk_3       disk         c1     -
      c3t47d0      ENABLED(A)    -             sun35100_0   sun35100     c3     -
      c3t47d1      ENABLED(A)    -             sun35100_1   sun35100     c3     -
      c3t47d2s2    ENABLED(A)    -             sun35100_2   sun35100     c3     -
      c3t47d3s2    ENABLED(A)    -             sun35100_3   sun35100     c3     -

    You can see the path is now disabled from DMP.

      root@techsupport2 # cfgadm -c unconfigure c1::dsk/c1t3d0

    Now you can unconfigure the disk.
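    If the path was only disabled temporarily (for example while swapping the failing disk), a plausible follow-up, sketched below under the assumption that the same controller and target are reused, is to reconfigure the device and hand the path back to DMP once the hardware work is done:

      # Reconfigure the device at the cfgadm level once the new disk is in place
      root@techsupport2 # cfgadm -c configure c1::dsk/c1t3d0

      # Re-enable the path in DMP and have VxVM rescan for it
      root@techsupport2 # vxdmpadm enable path=c1t3d0s2
      root@techsupport2 # vxdisk scandisks

      # Confirm the path shows ENABLED again
      root@techsupport2 # vxdmpadm getsubpaths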

    Read the article

  • Using virt-install to mount multiple cdrom drives/images

    - by Dana the Sane
    I would like to create a Windows XP guest from the Windows XP upgrade CD I have, along with one of a few full versions I have around. However, when I reach the stage in the installer where I am prompted to insert a full-version CD, the installer can't find it, i.e.: "Setup could not read the CD you inserted, or the CD is not a valid Windows CD." Is there a work-around for this? My Googling didn't uncover anything. I've tried various combinations of mounting .iso files and specifying disks, such as:

      $ sudo virt-install --accelerate --connect qemu:///system -n xpsp1 -r 2048 --disk ./vm/winxp_sp1.iso,device=cdrom --disk ./vm/windows.qcow2,size=12 --vnc --noautoconsole --os-type windows --os-variant winxp --vcpus 2 -c /dev/cdrom --check-cpu

    If I try to specify multiple cdrom drives, I receive an error:

      $ virt-install --accelerate --connect qemu:///system -n xpsp1 -r 2048 --disk ./vm/winxp_sp1.iso,device=cdrom --disk /dev/cdrom,device=cdrom --disk ./vm/windows.qcow2,size=12 --vnc --noautoconsole --os-type windows --os-variant winxp --vcpus 2 --check-cpu
      Starting install...
      ERROR    IDE CDROM must use 'hdc', but target in use.
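    One hedged workaround, rather than defining two CD-ROMs up front, is to start the install with a single CD-ROM attached and swap its media when setup asks for the full-version CD. The sketch below assumes the guest is named xpsp1 as in the commands above and that the CD-ROM ended up on target hdc; check with virsh first, and note that change-media needs a reasonably recent libvirt (on older versions, virsh attach-disk ... --type cdrom --mode readonly does the same swap).

      # See which target device the cdrom was given (commonly hdc)
      virsh domblklist xpsp1

      # When Windows setup prompts for the full-version CD, repoint the same
      # cdrom device at the other ISO (or at the physical drive)
      virsh change-media xpsp1 hdc /path/to/full-version.iso --update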

    Read the article

  • SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded to improve space, speed or throughput. Today we will look at another IO-related wait type.

    From Book On-Line: Occurs when a task is waiting for I/Os to finish.

    ASYNC_IO_COMPLETION Explanation: Tasks are waiting for I/O to finish. If your application connected to SQL Server is processing the data very slowly, this type of wait can occur. Several long-running database operations like BACKUP, CREATE DATABASE, ALTER DATABASE or other operations can also create this wait type.

    Reducing ASYNC_IO_COMPLETION wait: When it is an issue related to IO, one should check the following things associated with the IO subsystem:
    Look at the programming and see if there is any application code which processes the data slowly (like an inefficient loop, etc.). It should be re-written to avoid this wait type.
    Proper placement of the files is very important. We should check the file system for proper placement of the files – LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc.
    Check the File Statistics and see if there are high IO Read and IO Write Stalls: SQL SERVER – Get File Statistics Using fn_virtualfilestats.
    Check the event log and error log for any errors or warnings related to IO.
    If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly, but the SAN administrator would not accept that. After some investigation, he agreed to change the HBA Queue Depth on the development (test environment) setup. As soon as we changed the HBA Queue Depth to a much higher value, there was a sudden big improvement in the performance.
    It is very likely that there are no proper indexes on the system and so there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce lots of CPU, Memory and IO (considering the covering index has fewer columns than the clustered table; it depends upon the situation). You can refer to the following two articles I wrote that talk about how to optimize indexes: Create Missing Indexes, Drop Unused Indexes.

    Checking Memory Related Perfmon Counters
    SQLServer: Memory Manager\Memory Grants Pending (Consistently higher value than 0-2)
    SQLServer: Memory Manager\Memory Grants Outstanding (Consistently higher value; benchmark)
    SQLServer: Buffer Manager\Buffer Cache Hit Ratio (Higher is better, greater than 90% for a usually smooth-running system)
    SQLServer: Buffer Manager\Page Life Expectancy (Consistently lower value than 300 seconds)
    Memory: Available Mbytes (Information only)
    Memory: Page Faults/sec (Benchmark only)
    Memory: Pages/sec (Benchmark only)

    Checking Disk Related Perfmon Counters
    Average Disk sec/Read (Consistently higher than 4-8 milliseconds is not good)
    Average Disk sec/Write (Consistently higher than 4-8 milliseconds is not good)
    Average Disk Read/Write Queue Length (Consistently higher than the benchmark is not good)

    Read all the posts in the Wait Types and Queue series.
    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book On-Line for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
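    As a quick first check before touching file placement or the SAN, the wait can be confirmed from sys.dm_os_wait_stats; the call below is only a sketch, and the server name and trusted connection are assumptions to adjust for your environment.

      # Summarise the IO-related wait types accumulated since the last restart / stats clear
      sqlcmd -S YOURSERVER -E -Q "
      SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
      FROM sys.dm_os_wait_stats
      WHERE wait_type IN ('ASYNC_IO_COMPLETION', 'IO_COMPLETION')
      ORDER BY wait_time_ms DESC;"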

    Read the article

  • Fix overlapping partitions

    - by Alex
    I have a problem with overlapping partitions. GParted shows my whole disk as unallocated area; the output of fdisk is below:

      alex@alex-ThinkPad-SL510:~$ sudo fdisk -l /dev/sda

      Disk /dev/sda: 320.1 GB, 320072933376 bytes
      255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0xfb4b9b90

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *        2048     2457599     1227776    7  HPFS/NTFS/exFAT
      /dev/sda2         2457600   571351724   284447062+   7  HPFS/NTFS/exFAT
      /dev/sda3       571342846   604661759    16659457    5  Extended
      /dev/sda4       604661760   625137663    10237952    7  HPFS/NTFS/exFAT
      /dev/sda5       598650880   604661759     3005440   82  Linux swap / Solaris
      /dev/sda6       571342848   598650879    13654016   83  Linux

      Partition table entries are not in disk order

    Do I understand correctly that the overlapping partitions are sda2 and sda3 (sda2 and sda6 overlap too, because sda6 is the first chunk of sda3, and sda3 has type "extended")? Are sda2 and sda3 the cause of the problem? How can I fix it without deleting partitions? My OS is Ubuntu 12.04, 64-bit. Thanks in advance.
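    Whatever repair is eventually attempted, it is worth saving the current table and double-checking the overlap in sectors first; the commands below are a generic sketch and do not modify anything on disk.

      # Keep a restorable text copy of the current partition table
      sudo sfdisk -d /dev/sda > sda-partition-table.txt

      # Print the layout in sectors, which makes the sda2/sda3/sda6 overlap easy to see
      # (parted may warn about the overlap, but it will still show the numbers)
      sudo parted /dev/sda unit s print free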

    Read the article

  • What's up with OCFS2?

    - by wcoekaer
    On Linux there are many filesystem choices, and even from Oracle we provide a number of filesystems, all with their own advantages and use cases. Customers often confuse ACFS with OCFS or OCFS2, which then causes assumptions to be made, such as one replacing the other, etc... I thought it would be good to write up a summary of how OCFS2 got to where it is, what we're still up to, how it is different from other options, and how this really is a cool native Linux cluster filesystem that we worked on for many years and that is still widely used.

    Work on a cluster filesystem at Oracle started many years ago, in the early 2000's, when the Oracle Database Cluster development team wrote a cluster filesystem for Windows that was primarily focused on providing an alternative to raw disk devices and helping customers with the deployment of Oracle Real Application Clusters (RAC). Oracle RAC is a cluster technology that lets us make a cluster of Oracle Database servers look like one big database. The RDBMS runs on many nodes and they all work on the same data. It's a Shared Disk database design. There are many advantages to doing this, but I will not go into detail as that is not the purpose of my write-up. Suffice it to say that Oracle RAC expects all the database data to be visible in a consistent, coherent way across all the nodes in the cluster. To do that, there were/are a few options:
    1) use raw disk devices that are shared, through SCSI, FC, or iSCSI
    2) use a network filesystem (NFS)
    3) use a cluster filesystem (CFS), which basically gives you a filesystem that's coherent across all nodes using shared disks. It is sort of (but not quite) a combination of options 1 and 2, except that you don't do network access to the files; the files are effectively locally visible as if it were a local filesystem.

    So OCFS (Oracle Cluster FileSystem) on Windows was born. Since Linux was becoming a very important and popular platform, we decided that we would also make this available on Linux, and thus the porting of OCFS/Windows started. The first version of OCFS was really primarily focused on replacing the use of raw devices with a simple filesystem that lets you create files and provides direct IO to these files to get basically native raw disk performance. The filesystem was not designed to be fully POSIX compliant and it did not have anywhere near good/decent performance for regular file create/delete/access operations. Cache coherency was easy since it was basically always direct IO down to the disk device, and this ensured that any time one issued a write() command it would go directly down to the disk and not return until the write() was completed. Same for read(): any sort of read from a datafile was a read() operation that went all the way to disk and returned. We did not cache any data when it came down to Oracle data files.

    So while OCFS worked well for that, it did not have much of a normal filesystem feel and it was not something that could be submitted to the kernel mailing list for inclusion into Linux as another native Linux filesystem (setting aside the Windows porting code ...). It did its job well: it was very easy to configure, node membership was simple, locking was disk based (so very slow, but it existed), and you could create regular files and do regular filesystem operations to a certain extent, but anything that was not database data file related was just not very useful in general. Log files, OK; standard filesystem use, not so much. Up to this point, all the work was done, at Oracle, by Oracle developers.
    Once OCFS (1) was out for a while and there was a lot of use in the database RAC world, many customers wanted to do more and were asking for features that you'd expect in a normal native filesystem, a real "general purpose cluster filesystem". So the team sat down and basically started from scratch to implement what's now known as OCFS2 (Oracle Cluster FileSystem release 2). Some basic criteria were:
    Design it with a real Distributed Lock Manager and use the network for lock negotiation instead of the disk
    Make it a Linux native filesystem instead of a native shim layer and a portable core
    Support standard POSIX compliance and be fully cache coherent with all operations
    Support all the filesystem features Linux offers (ACLs, extended attributes, quotas, sparse files, ...)
    Be modern: support large files, 32/64-bit, journaling, ordered data journaling, endian neutrality so we can mount across endianness and architectures, ...

    Needless to say, this was a huge development effort that took many years to complete. A few big milestones happened along the way... OCFS2 was developed in the open; we did not have a private tree that we worked on without external code review from the Linux filesystem maintainers. Great folks like Christoph Hellwig reviewed the code regularly to make sure we were not doing anything out of line, and we submitted the code for review on lkml a number of times to see if we were getting close to it being included into the mainline kernel. Using this development model is standard practice for anyone who wants to write code that goes into the kernel and have any chance of doing so without a complete rewrite or... shall I say, flamefest when it is submitted. It saved us a tremendous amount of time by not having to re-fit code for it to be in a Linus-acceptable state. Some other filesystems that were trying to get into the kernel and didn't follow an open development model had a much harder time and got much harsher criticism.

    In March 2006, when Linus released 2.6.16, OCFS2 officially became part of the mainline kernel; it had been accepted a little earlier, in the release candidates, but with 2.6.16 OCFS2 officially became part of the mainline Linux kernel tree as one of the many filesystems. It was the first cluster filesystem to make it into the kernel tree. Our hope was that it would then end up getting picked up by the distribution vendors, to make it easy for everyone to have access to a CFS. Today the source code for OCFS2 is approximately 85,000 lines of code. We made OCFS2 production-ready, with full support for customers that ran the Oracle database on Linux; no extra or separate support contract was needed. OCFS2 1.0.0 started being built for RHEL4 on x86, x86-64, ppc, s390x and ia64; for RHEL5, builds started with OCFS2 1.2. SuSE was very interested in high availability and clustering and decided to build and include OCFS2 with SLES9 for their customers, and was, next to Oracle, the main contributor to the filesystem for both new features and bug fixes. Source code was always available even prior to inclusion into mainline, and as of 2.6.16, the source code was just part of a Linux kernel download from kernel.org, which it still is today. So the latest OCFS2 code is always in the upstream mainline Linux kernel. OCFS2 is the cluster filesystem used in Oracle VM 2 and Oracle VM 3 as the virtual disk repository filesystem.
    Since the filesystem is in the Linux kernel, it's released under the GPL v2. The release model has always been that new feature development happened in the mainline kernel, and we then built consistent, well-tested snapshots that had versions: 1.2, 1.4, 1.6, 1.8. But these releases were effectively just snapshots in time that were tested for stability and release quality. OCFS2 is very easy to use: there's a simple text file that contains the node information (hostname, node number, cluster name) and a file that contains the cluster heartbeat timeouts. It is very small, and very efficient. As Sunil Mushran wrote in the manual: OCFS2 is an efficient, easily configured, quickly installed, fully integrated and compatible, feature-rich, architecture and endian neutral, cache coherent, ordered data journaling, POSIX-compliant, shared disk cluster file system.

    Here is a list of some of the important features that are included:
    Variable Block and Cluster sizes: Supports block sizes ranging from 512 bytes to 4 KB and cluster sizes ranging from 4 KB to 1 MB (increments in powers of 2).
    Extent-based Allocations: Tracks the allocated space in ranges of clusters, making it especially efficient for storing very large files.
    Optimized Allocations: Supports sparse files, inline-data, unwritten extents, hole punching and allocation reservation for higher performance and efficient storage.
    File Cloning/Snapshots: REFLINK is a feature which introduces copy-on-write clones of files in a cluster-coherent way.
    Indexed Directories: Allows efficient access to millions of objects in a directory.
    Metadata Checksums: Detects silent corruption in inodes and directories.
    Extended Attributes: Supports attaching an unlimited number of name:value pairs to filesystem objects like regular files, directories, symbolic links, etc.
    Advanced Security: Supports POSIX ACLs and SELinux in addition to the traditional file access permission model.
    Quotas: Supports user and group quotas.
    Journaling: Supports both ordered and writeback data journaling modes to provide filesystem consistency in the event of a power failure or system crash.
    Endian and Architecture neutral: Supports a cluster of nodes with mixed architectures. Allows concurrent mounts on nodes running 32-bit and 64-bit, little-endian (x86, x86_64, ia64) and big-endian (ppc64) architectures.
    In-built Cluster-stack with DLM: Includes an easy-to-configure, in-kernel cluster stack with a distributed lock manager.
    Buffered, Direct, Asynchronous, Splice and Memory Mapped I/Os: Supports all modes of I/O for maximum flexibility and performance.
    Comprehensive Tools Support: Provides a familiar EXT3-style tool-set that uses similar parameters for ease of use.

    The filesystem was distributed for Linux distributions in separate RPM form, and this had to be built for every single kernel errata release or every updated kernel provided by the vendor. We provided builds from Oracle for Oracle Linux and all kernels released by Oracle, and for Red Hat Enterprise Linux. SuSE provided the modules directly for every kernel they shipped. With the introduction of the Unbreakable Enterprise Kernel for Oracle Linux and our interest in reducing the overhead of building filesystem modules for every minor release, we decided to make OCFS2 available as part of UEK. There was no more need for separate kernel modules; everything was built in, and a kernel upgrade automatically updated the filesystem, as it should.
    UEK allowed us to avoid having to backport new upstream filesystem code into an older kernel version; backporting features into older versions introduces risk and requires extra testing because the code is basically partially rewritten. The UEK model works really well for continuing to provide OCFS2 without that extra overhead. Because the RHEL kernel did not contain OCFS2 as a kernel module (it is in the source tree, but it is not built by the vendor in kernel module form), we stopped adding the extra packages for Oracle Linux with its RHEL-compatible kernel and for RHEL. Oracle Linux customers/users obviously get OCFS2 included as part of the Unbreakable Enterprise Kernel, SuSE customers get it from SuSE, distributed with SLES, and Red Hat can decide to distribute OCFS2 to their customers if they choose to, as it's just a matter of compiling the module and making it available.

    OCFS2 today, in the mainline kernel, is pretty much feature complete in terms of integration with every filesystem feature Linux offers, and it is still actively maintained, with Joel Becker being the primary maintainer. Since we use OCFS2 as part of Oracle VM, we continue to look at interesting new functionality to add (REFLINK was a good example), and as such we continue to enhance the filesystem where it makes sense. Bugfixes and any sort of code that goes into the mainline Linux kernel that affects filesystems automatically also modify OCFS2, so it is in-kernel and actively maintained, but there is not a lot of new development happening at this time. We continue to fully support OCFS2 as part of Oracle Linux and the Unbreakable Enterprise Kernel, and other vendors make their own decisions on support, as it's really a Linux cluster filesystem now more than something that we provide to customers. It really is just part of Linux, like EXT3 or BTRFS, etc.; the OS distribution vendors decide.

    Do not confuse OCFS2 with ACFS (ASM Cluster Filesystem), also known as Oracle Cloud File System. ACFS is a filesystem that's provided by Oracle on various OS platforms and really integrates into Oracle ASM (Automatic Storage Management). It's a very powerful cluster filesystem, but it's not distributed as part of the operating system; it's distributed with the Oracle Database product and installs with and lives inside Oracle ASM. ACFS obviously is fully supported on Linux (Oracle Linux, Red Hat Enterprise Linux), but OCFS2, independently, as a native Linux filesystem, also is and continues to be supported. ACFS is very much tied into the Oracle RDBMS; OCFS2 is just a standard native Linux filesystem with no ties into Oracle products. Customers running the Oracle database and ASM really should consider using ACFS, as it also provides storage/clustered volume management. Customers wanting to use a simple, easy-to-use, generic Linux cluster filesystem should consider using OCFS2. To learn more about OCFS2 in detail, you can find good documentation on http://oss.oracle.com/projects/ocfs2 in the Documentation area, or get the latest mainline kernel from http://kernel.org and read the source.

    One final, unrelated note - since I am not always able to publicly answer or respond to comments, I do not want to selectively publish comments from readers. Sometimes I forget to publish comments, sometimes I publish them, and sometimes I would publish them but cannot publicly comment on them for some reason, and then it becomes a very one-sided stream. So for now I am not going to publish comments from anyone, to be fair to all sides.
You are always welcome to email me and I will do my best to respond to technical questions, questions about strategy or direction are sometimes not possible to answer for obvious reasons.
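    As a footnote to the technical part above: the "simple text file" configuration mentioned earlier really is small. Below is a minimal sketch of a two-node setup; the cluster name, node names, addresses and device are made up, the ocfs2-tools/o2cb packages are assumed to be installed, and the o2cb cluster stack must be online before mounting. The file is normally generated with ocfs2console or o2cb_ctl; if written by hand, note that the parser is picky about indentation.

      # /etc/ocfs2/cluster.conf - identical on every node
      cluster:
              node_count = 2
              name = demo

      node:
              ip_port = 7777
              ip_address = 192.168.1.101
              number = 0
              name = node1
              cluster = demo

      node:
              ip_port = 7777
              ip_address = 192.168.1.102
              number = 1
              name = node2
              cluster = demo

      # Make the filesystem once, from one node, then mount it on both (as root)
      mkfs.ocfs2 -b 4K -C 64K -N 2 -L demo_vol /dev/sdb1
      mount -t ocfs2 /dev/sdb1 /srv/shared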

    Read the article

  • Best approach to utilize RamDisk for Chrome?

    - by laggingreflex
    I use a lot of tabs, and after a while the less recently opened tabs take some time to become responsive, which I guess is because they're being un-cached to the HDD as they're not required. So after creating a RAM disk I have two options: use the --disk-cache-dir="G:/" switch to do what it does, or what I'm currently doing: using a directory junction for "[...]\AppData\Local\Google\Chrome\User Data\Default" to move that entire folder over to the RAM disk. I thought this would be better than just the disk cache, but what do I know. Is it? As one can guess, it'll be a pain saving/loading the RAM disk image each time I start Chrome, but if it really is better than the former approach I'll write a script or something.

    Read the article

  • Strategy for using snapshots to back up Ubuntu Linux server?

    - by MountainX
    I need some backup advice for my home file server. Here are the mount points, volume groups, logical volumes and used/total space of all the volumes on my Ubuntu 8.10 home file server:

      /         vgA/lvRoot          [7.5G/50G]
      /tmp      vgB/lvTmp           [195M/30G]
      /var      vgB/lvVar           [780M/30G]
      swap      vgB/lvSwap          [16.00 GB]
      /media1   vgC/lvMedia1        [400G/975G]
      /media2   vgC/lvMedia2        [75G/295G]
      /boot     partition (no volume group)  [95M/200M]
      /video    partition (no volume group)  [450G/950G]
      /backups  vgD/lvBackupTarget  [800G/925G]
      /home     vgE/lvHome          [85G/200G]

    I have just added a 2.0 TB external USB drive that I would like to use to back up everything. (It will be a close fit to get it all on one 2.0 TB drive. I actually have a 2nd external USB drive if needed.) I'd like to back up "/", /var, /media1, /media2 and /home. I'll deal with /boot and /video separately since they are not logical volumes. For all the logical volumes I'm anticipating taking snapshots and then copying those snapshots to the 2.0 TB external USB drive. I have never done a task like that before. If I do that, I could use the tutorial I found here: http://www.howtoforge.com/linux_lvm_snapshots

    My questions are:
    1. What is the best overall strategy? Is it LVM snapshots, as I'm assuming?
    2. How should I prepare, subdivide and mount the 2.0 TB external USB drive?
    2.a. Should I create one or more regular partitions, or should I create a physical volume with one or more logical volumes?
    2.b. Would it be advisable to exactly mirror the source pv/lv layout on the external drive, and if so, is this a good strategy?
    3. What's the best way to get the snapshots onto the external drive? dd?

    Even though this is a strategy question, feedback with actual commands is appreciated. I need step-by-step cookbook-style help because I don't do much server admin work.

    (Background: This is a home file server that I have rarely had to touch in about 2 years. It has done its job without much intervention. The really old PC that I used to back everything up recently failed, so I'm replacing that with the external USB drive(s) and I'd like to upgrade my backup strategy at the same time. Previously, I just copied stuff from /backups over to the other computer, and that would not have made things very easy in a real restore situation. The /backups mount point contains backup copies of "most" of the important data on a file-by-file basis, but it does not contain copies of /boot, etc. BTW, the actual internal HDD that holds /backups is separate from the other storage devices.)

    EDIT: I'll propose a strategy... The idea came from a comment here: LVM mirroring VS RAID1: "LVM mirrors are for replication of a logical volume to a different physical volume. It's essentially meant to 'move the data to a different disk'. The mirror is then broken..." That would fit my requirements well. Here is an ideal situation:
    1. establish the LV mirror on the external drive
    2. break the link with the mirror
    3. create a (persistent) snapshot on the mirror
    4. after a week, resync the mirror with the original source and update the mirror
    5. break the link and create another snapshot on the mirror
    Obviously, the mirror will be like a weekly full backup, and the snapshots on the mirror will represent earlier points in time. If this would work and if it would be time-efficient, it would give a nice full & differential type backup on the external drive based on LVM. I have not heard of a strategy like this before. Will it work? Could it be scripted? Thoughts?
EDIT 2: Creating Portable DiskSafes With LoopbackFS And LVM Snapshots This article seems intriguing: http://www.howtoforge.com/creating-portable-disksafes-with-loopbackfs-and-lvm-snapshots Unfortunately, I don't understand exactly how to map those ideas to the strategy I'm proposing above. I'm going to ask this last bit as a separate question. I will leave my original question in place because I still desire feedback on the overall best strategy. At this moment I'm assuming it is LVM mirroring in the style of "Creating Portable DiskSafes with LVM Snapshots" but that might be wrong.
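    For question 3 above, one common pattern is to snapshot each LV, copy the mounted snapshot to the external drive with rsync (or the raw snapshot device with dd), and then drop the snapshot. The sketch below uses /home as an example and makes two assumptions: the external drive is already mounted at /mnt/usbbackup, and vgE has a few gigabytes of unallocated extents to hold the snapshot's copy-on-write data.

      # Create a snapshot of /home (the 5G is headroom for changes made during the copy)
      sudo lvcreate --snapshot --size 5G --name lvHome_snap /dev/vgE/lvHome

      # Mount it read-only and copy the files to the external drive
      sudo mkdir -p /mnt/snap
      sudo mount -o ro /dev/vgE/lvHome_snap /mnt/snap
      sudo rsync -aHAX --delete /mnt/snap/ /mnt/usbbackup/home/

      # Clean up
      sudo umount /mnt/snap
      sudo lvremove -f /dev/vgE/lvHome_snap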

    Read the article

  • SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded to improve space, speed or throughput. Today we will look at an IO-related wait type.

    From Book On-Line: Occurs while waiting for I/O operations to complete. This wait type generally represents non-data page I/Os. Data page I/O completion waits appear as PAGEIOLATCH_* waits.

    IO_COMPLETION Explanation: Tasks are waiting for I/O to finish. This is a good indication that IO needs to be looked at here.

    Reducing IO_COMPLETION wait: When it is an issue concerning the IO, one should look at the following things related to the IO subsystem:
    Proper placement of the files is very important. We should check the file system for proper placement of files – LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc.
    Check the File Statistics and see if there are high IO Read and IO Write Stalls: SQL SERVER – Get File Statistics Using fn_virtualfilestats.
    Check the event log and error log for any errors or warnings related to IO.
    If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly, but the SAN administrator would not accept that. After some investigation, he agreed to change the HBA Queue Depth on the development (test environment) setup, and as soon as we changed the HBA Queue Depth to a much higher value, there was a sudden big improvement in the performance.
    It is very possible that there are no proper indexes in the system and there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce lots of CPU, Memory and IO (considering the covering index has fewer columns than the clustered table; it depends upon the situation). You can refer to the two articles that I wrote about how to optimize indexes: Create Missing Indexes, Drop Unused Indexes.

    Checking Memory Related Perfmon Counters
    SQLServer: Memory Manager\Memory Grants Pending (Consistently higher value than 0-2)
    SQLServer: Memory Manager\Memory Grants Outstanding (Consistently higher value; benchmark)
    SQLServer: Buffer Manager\Buffer Cache Hit Ratio (Higher is better, greater than 90% for a usually smooth-running system)
    SQLServer: Buffer Manager\Page Life Expectancy (Consistently lower value than 300 seconds)
    Memory: Available Mbytes (Information only)
    Memory: Page Faults/sec (Benchmark only)
    Memory: Pages/sec (Benchmark only)

    Checking Disk Related Perfmon Counters
    Average Disk sec/Read (Consistently higher than 4-8 milliseconds is not good)
    Average Disk sec/Write (Consistently higher than 4-8 milliseconds is not good)
    Average Disk Read/Write Queue Length (Consistently higher than the benchmark is not good)

    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book On-Line for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Types, SQL White Papers, T SQL, Technology
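    The File Statistics check mentioned above can also be run straight from the DMVs; the call below is only a sketch (the server name and trusted connection are assumptions) and lists the files with the highest write stalls. It uses sys.dm_io_virtual_file_stats, which returns the same kind of numbers as the older fn_virtualfilestats the article refers to.

      # Per-file IO stall figures, worst writers first
      sqlcmd -S YOURSERVER -E -Q "
      SELECT DB_NAME(database_id) AS database_name, file_id,
             num_of_reads, io_stall_read_ms, num_of_writes, io_stall_write_ms
      FROM sys.dm_io_virtual_file_stats(NULL, NULL)
      ORDER BY io_stall_write_ms DESC;"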

    Read the article

  • How to "un-automount" external harddrive?

    - by Timon
    So I dualboot 12.10 and Win7. Both OSs are on the primary SSD while all commonly used data (documents, movies, music, profiles etc) is on a secondary NTFS-formatted HDD. Since I needed the NTFS drive to automatically mount in Ubuntu right at startup, I downloaded ntfs-config and set it to automount my NTFS drive. Problem is, I also accidentally told it to automount my external hard drive (which is also NTFS formatted). When booting up Ubuntu, it now checks for the presence of that drive every single time, which is getting annoying 'cause I don't always have it connected. I've tried un- and reinstalling ntfs-config, telling it to not automount the external HD, but to no avail. Any suggestions?
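    ntfs-config normally does its automounting by writing entries to /etc/fstab, so a likely fix (a sketch; the UUID and mount point below are placeholders) is to remove or soften the entry for the external drive rather than reinstalling the tool:

      # Find the external drive's UUID while it is plugged in
      sudo blkid

      # Look for the matching line that ntfs-config added
      grep -n ntfs /etc/fstab

      # Either delete that line, or keep it but stop boot from checking/waiting on
      # the drive by adding noauto (and/or nofail) to its options, e.g.:
      # UUID=XXXX-XXXX  /media/external  ntfs-3g  defaults,noauto,nofail  0  0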

    Read the article

  • Why your Netapp is so slow...

    - by Darius Zanganeh
    Have you ever wondered why your Netapp FAS box is slow and doesn't perform well at large-block workloads? In this blog entry I will give you a little bit of information that will probably help you understand why it's so slow and why you shouldn't use it for applications that read and write in large blocks like 64k, 128k, 256k and up. Of course, since I work for Oracle at this time, I will show you why the ZS3 storage boxes are excellent choices for these types of workloads.

    Netapp's Fundamental Problem
    The fundamental problem you have running these workloads on Netapp is the backend block size of their WAFL file system. Every application block on a Netapp FAS ends up in a 4k chunk on a disk. Reference: Netapp TR-3001 Whitepaper. Netapp has demonstrated this large-block performance gap in at least two different ways: they have NEVER posted an SPC-2 benchmark, yet they have posted SPC-1 and SPECsfs results, both recently; and in 2011 they purchased Engenio to try and fill this gap in their portfolio.

    Block Size Matters
    So why does block size matter anyway? Many applications use large block chunks of data, especially in the Big Data movement. Some examples are SAS Business Analytics, Microsoft SQL Server, and Hadoop (HDFS blocks are even 64MB!). Now let me boil this down for you. If an application such as MS SQL is writing data in a 64k chunk, then before Netapp actually writes it to disk it will have to split it into 16 different 4k writes and 16 different disk IOPS. When the application later goes to read that 64k chunk, the Netapp will again have to do 16 different disk IOPS. In comparison, the ZS3 Storage Appliance can write in variable block sizes ranging from 512b to 1MB. So if you put the same MSSQL database on a ZS3, you can set the specific LUNs for this database to 64k, and an application read or write then requires only a single disk IO. That is 16x faster! But, back to the problem with your Netapp: you will VERY quickly run out of disk IO and hit a wall. Now, all arrays have some fancy prefetch algorithm and some nice cache, and maybe even flash-based cache such as a PAM card in your Netapp, but with large-block workloads you will usually blow through the cache and still need significant disk IO. Also, because these datasets are usually very large and usually not dedupable, they are usually not good candidates for an all-flash system. You can do some simple math in Excel and very quickly you will see why it matters. Here are a couple of READ examples using SAS and MSSQL. Assume these are the READ IOPS the application needs even after all the fancy cache and algorithms. Here is an example with 128k blocks. Notice the number of drives on the Netapp! Here is an example with 64k blocks. You can easily see that the Oracle ZS3 can do dramatically more work with dramatically fewer drives. This doesn't even take into account that the ONTAP system will likely run out of CPU way before you get to these drive numbers, so you would be buying many more controllers. So with all that said, let's look at the ZS3 and why you should consider it for any workload you're running on Netapp today.

    ZS3 World Record Price/Performance in the SPC-2 Benchmark
    ZS3-2 is #1 in Price-Performance: $12.08
    ZS3-2 is #3 in Overall Performance: 16,212 MBPS
    Note: The number one overall spot in the world is held by an AFA at 33,477 MBPS, but at a price-performance of $29.79. A customer could purchase 2 x ZS3-2 systems with relatively the same performance as in that benchmark and walk away with $600,000 in their pocket.
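    The drive-count comparison in those examples boils down to one line of arithmetic: backend IOPS = application IOPS x (application block size / backend block size), divided by what a single spindle can deliver. The sketch below just automates that calculation; the IOPS figures are assumed example numbers, not measurements from either vendor.

      #!/bin/bash
      # Rough spindle-count estimate for a large-block workload on a 4k-backend array
      APP_IOPS=20000        # reads the application needs even after cache
      APP_BLOCK_KB=64       # application IO size
      BACKEND_BLOCK_KB=4    # backend chunk size (4k in the WAFL case described above)
      DRIVE_IOPS=200        # assumed per-drive IOPS for a fast spindle

      BACKEND_IOPS=$(( APP_IOPS * APP_BLOCK_KB / BACKEND_BLOCK_KB ))
      DRIVES=$(( (BACKEND_IOPS + DRIVE_IOPS - 1) / DRIVE_IOPS ))
      echo "Backend IOPS needed: $BACKEND_IOPS  ->  roughly $DRIVES drives"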

    Read the article

  • Ubuntu Server 12.04 LTS on Hyper-V 2012

    - by user137533
    I have the following scenario: a Hyper-V 2012 server core installation. On top of this I created a virtual machine on which I tried installing Ubuntu Server 12.04, which should not have any compatibility issues according to what Microsoft and Ubuntu are saying (although it is not officially supported). I start, run the installation and everything is OK, no problems detecting the network device or the hard drive (unlike Debian, which didn't even detect the hard drive). Once the installation is complete it asks me to reboot, it unmounts the "DVD drive" and reboots. Once it tries to start again I get the following error: Boot failure. Reboot and Select proper Boot device or Insert Boot Media in the selected Boot device. It seems not to be booting from the virtual hard drive. The hard drive is set up in SCSI mode, with nothing mounted on the IDE controller (no ISO image or anything else). Does anyone have any ideas on what I can do to solve this?

    Read the article

  • Windows 7 won't read from NAS on LAN

    - by Alfy
    I've got a Linkstation NAS drive on a local network. Having just got a new laptop with Windows 7 Home Professional, I can no longer read anything off the drive. I've tried accessing the drive using \\192.168.1.55\share, using ftp programs such as WinSCP and Filezilla, and even using Firefox to hit ftp://192.168.1.55. The really annoying thing is that through these methods I can see the files on the drive, ruling out any kind of connection issues. I can navigate through the NAS file system, but as soon as I try to copy a file off the NAS, things just stop working. Accessing the drive through a Windows XP machine works fine. So far I've tried: Disabling firewalls. Adding the LmCompatibilityLevel key to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa. Using the 40 - 56 bit encryption instead of the 128 bit. Has anyone got any suggestions of what I can check or try? This is driving me crazy and I'm totally out of ideas.

    Read the article

  • Installing Windows 7 destroyed my dual boot setup

    - by ped
    I have a laptop on which I have two drives with separate Windows XP installs, one barebones for music production, the other a "normal" Windows XP with Office etc. (unfortunately the BIOS won't give a boot disk choice). Normally I would be presented with two Windows XP entries on booting. Selecting the second one would get me into the "normal" installation on disk 1 (C:). Selecting the first in the boot order would give me D:\ (disk 2) with the barebones XP. However, I installed Windows 7 Home onto disk 1 (C:), and there were no dual boot options anymore. I then installed DualBoot Pro and added the Windows XP disk (D:). The options now show up, but selecting Windows XP just turns into a reboot back to where I started.

    Read the article

  • Do I need to rebuild the array after putting in a new hot spare?

    - by Shade34321
    So my experience with RAID is minimal. So I figured I'd come and ask here. We have a 16-drive RAID system that has 15 drives in RAID 5 with a hot spare left over. Recently one of the drives in the RAID was giving errors, so I cloned it over to the hot spare and put a new drive in its spot. I made the new drive the hot spare, as I was told. I was told to rebuild the array after putting in the new drive as a hot spare, so I tried and wasn't able to. So my question is: do I need to rebuild it, and if so, why did it tell me I couldn't? Thanks! UPDATE: So I've come back up to work and looked at the RAID, and it pulled the hot spare into the RAID and kicked out another drive.

    Read the article

  • Getting FC11 to run under VMware Server, converted from physical machine

    - by Kristian
    I have a FC11 installation that I have converted to a VMware disk image to run on my VMware Server. I converted it with qemu-img, as the VMware Converter software apparently only converts Linux hosts to VMware Infrastructure servers. The disk image boots fine (grub is loaded and boots the kernel), but it seems the disk is not found by the kernel, and the boot process stalls. Hotplugging USB devices works (the kernel prints debug information) and I'm able to press keys (Ctrl-Alt-Delete, for instance). The VMware guest OS is set to RedHat Enterprise Linux 5 (32 bit), and I have tried the LSI Logic, LSI Logic SAS and VMware Accelerated SCSI controllers, to no avail. I'm able to boot an installer disk, get into rescue mode and mount the filesystem, so my question is: what do I need to do to the guest kernel / initrd image to make it recognize the virtual disk?
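    A common cause in this situation is that the initrd built on the physical machine does not include the driver for the virtual SCSI controller, so a hedged next step from rescue mode is to rebuild it with the LSI Logic (mpt*) modules forced in. This is only a sketch; the kernel version is a placeholder (look under /lib/modules in the guest), and it assumes the LSI Logic controller type.

      # From the FC11 rescue environment, with the installed system mounted
      chroot /mnt/sysimage

      # Rebuild the initrd with the LSI Logic SCSI modules included
      KVER=2.6.xx.x-xxx.fc11.i686      # placeholder: run 'ls /lib/modules' to find the real version
      mkinitrd --with=mptbase --with=mptscsih --with=mptspi \
               -f /boot/initrd-$KVER.img $KVER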

    Read the article

  • Good Free Backup Tool - with provisos

    - by vaccano
    I have seen some backup questions around, but they are not quite what I am looking for. I would like to have a backup of my entire hard drive (to an external drive). I would like it to be the kind that takes a base backup and then just backs up the changes since the last backup. I would like it to be able to produce a fully restorable image of my hard drive (not just key files). Lastly, I would like it to be free (or super cheap). (The above requirements are important, but I will have to drop them if they up the price, as my boss will not pay for them.) I have a 250 GB solid-state hard drive backing up to a 1TB external hard drive, using Windows XP.

    Read the article

  • usb vs firewire for connecting two RAID 0 disks

    - by Arne
    I have a 2TB and a 4TB RAID 0 external drive (both have two physical hard drives in them). Both have a FW800, a FW400 and a USB port. My MacBook Pro has one FW800 port and two USB ports. I want to copy data from the 4TB drive to the 2TB drive. Is it better to A - connect both directly to the laptop, one with USB and one with FW800, or B - connect the 4TB drive to the laptop with FW800 and the 2TB drive to the 4TB drive using a FW400 cable? Has anyone had problems daisy-chaining RAID 0 disks using FW? Thanks!

    Read the article

  • Flexible virtualization infrastructure Design with libvirt

    - by Lessfoe
    I'm going to install a CentOS 6 server with virtualization (libvirtd) capabilities on a DELL server with hardware RAID 5 of around 6T of disk space (it has 4x2T disks on a PERC700 RAID controller). I'm then going to install some guests which require few resources, except one that needs 500GB of disk space, 8/16GB of RAM and good performance. I was thinking about file images for guest storage, but I'm not sure about the 500GB VM that needs good performance, for which an LVM device could be better. So my question is what would be the best layout concerning:
    RAID setup (RAID5, or RAID1 + 1 disk for OS only)
    disk partitioning (using the entire disk / leaving free space for future use and extending it with LVM)
    guest storage management (LVM devices or file images (considering the 500GB VM that is performance demanding) or mixed)
    Where to put guest storage? /var/lib/libvirt/images, or maybe a custom dir separated from the system, /home/VMs.
    Thanks in advance for any hint.
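    If the 500GB guest does end up on an LVM device, libvirt can manage the volume group directly as a storage pool, which keeps provisioning scriptable. The sketch below assumes a volume group called vg_guests already exists on the RAID array; the pool, VG and volume names are illustrative only.

      # Expose an existing volume group to libvirt as a 'logical' storage pool
      virsh pool-define-as vg_guests logical --source-name vg_guests --target /dev/vg_guests
      virsh pool-start vg_guests
      virsh pool-autostart vg_guests

      # Carve out the 500GB LV for the IO-heavy guest
      virsh vol-create-as vg_guests bigguest-disk 500G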

    Read the article

  • Why Clonezilla needs to reinstall grub?

    - by Pablo
    By default CloneZilla Live will reinstall grub after a restore. In my particular case I am restoring a disk image to the same physical disk without any partitioning/size change. I am expecting to end up with exactly the same state I had right after the backup. However, reinstalling grub renders my system unbootable. If I remove that option [-g] then everything is fine. My question is: in what cases, and why, does CloneZilla ever need to re-install grub when restoring a disk image?

    Read the article

  • How to rescan and remount drives on Ubuntu Hardy or Jaunty?

    - by pts
    When I connect a USB drive to an Ubuntu Hardy or Jaunty system, the system mounts the partitions found on the drive and opens a Nautilus window for each mounted partition. Within Nautilus, I am able to unmount partitions. What I need is a command or action which forces the system to rescan the available drives and partitions, and automount each not-mounted partition, including those which I've manually unmounted from Nautilus. sudo /etc/init.d/udev restart or ... reload doesn't do this. As of now, I just unplug the USB drive and connect it again, which forces a scan and a mount on that drive. But I want to force the rescan and remount without unplugging anything, preferably without the user having to know device or drive names.
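    Two commands come close, as a sketch; whether the desktop actually automounts in response depends on the hal/udev stack of that era, so this is not guaranteed on Hardy/Jaunty.

      # Re-emit 'add' events for block devices so the automounter sees them again
      # (on Hardy, the older standalone 'udevtrigger' command is the equivalent)
      sudo udevadm trigger --action=add --subsystem-match=block

      # Ask the SCSI layer (USB disks appear as SCSI) to rescan each host adapter
      for h in /sys/class/scsi_host/host*; do
          echo "- - -" | sudo tee "$h/scan" > /dev/null
      done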

    Read the article

  • Ubuntu 12.10 + Windows 7 - No option to install alongside windows 7

    - by user1828314
    I have a 64-bit Windows 7 OS installed at the moment. I have used GParted to shrink the current Windows 7 partition on my 720GB HDD to 200GB. I have then made a new 200GB NTFS partition which I will keep for later as a shared drive between both Windows 7 and Ubuntu. So in GParted I now have three partitions, which were all automatically created by the Windows 7 installation; I only shrank the 3rd one from the 698GB or so that it was to 200GB, and created the 200GB partition for the shared drive. I first tried creating another 200GB partition at this stage to install Ubuntu to, but when I burnt the DVD and loaded it, Ubuntu gave me no option to install alongside Windows, only the option to erase the entire disk and install Ubuntu on the blank drive... not what I want to do. So I tried installing it by clicking 'Something else'; it downloaded all the install files but didn't install. I then had a lot of problems getting the DVD drive to work and whatnot, but now have this fixed so I can use Windows again. So now I've used GParted to delete the partitions, so again I'm left with the three partitions which Windows 7 automatically installs and a 200GB NTFS partition I will later use as a shared drive. Booting from the Ubuntu disc again, there is still no option to install alongside Windows 7. How do I get it to do so? All I would like is Windows 7 and Ubuntu on a dual boot, with a 200GB NTFS partition to dump my work onto so that I can access it from both OSs. Thanks.
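    One commonly reported cause of the missing "install alongside" option is an MBR disk that already has four primary partitions (the three Windows ones plus the new NTFS partition), leaving the installer nowhere to create another; checking that only takes a moment. This is just a sketch of the check, run from the Ubuntu live session, not a confirmed diagnosis.

      # List the partitions and their types; on an MBR/msdos disk only four primary
      # partitions are possible, so one of them must be extended/logical for Ubuntu
      # to get partitions of its own
      sudo parted -l
      sudo fdisk -l /dev/sda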

    Read the article

  • Xen DomU on DRBD device: barrier errors

    - by Halfgaar
    I'm testing setting up a Xen DomU with DRBD storage for easy failover. Most of the time, immediately after booting the DomU, I get an IO error:

      [ 3.153370] EXT3-fs (xvda2): using internal journal
      [ 3.277115] ip_tables: (C) 2000-2006 Netfilter Core Team
      [ 3.336014] nf_conntrack version 0.5.0 (3899 buckets, 15596 max)
      [ 3.515604] init: failsafe main process (397) killed by TERM signal
      [ 3.801589] blkfront: barrier: write xvda2 op failed
      [ 3.801597] blkfront: xvda2: barrier or flush: disabled
      [ 3.801611] end_request: I/O error, dev xvda2, sector 52171168
      [ 3.801630] end_request: I/O error, dev xvda2, sector 52171168
      [ 3.801642] Buffer I/O error on device xvda2, logical block 6521396
      [ 3.801652] lost page write due to I/O error on xvda2
      [ 3.801755] Aborting journal on device xvda2.
      [ 3.804415] EXT3-fs (xvda2): error: ext3_journal_start_sb: Detected aborted journal
      [ 3.804434] EXT3-fs (xvda2): error: remounting filesystem read-only
      [ 3.814754] journal commit I/O error
      [ 6.973831] init: udev-fallback-graphics main process (538) terminated with status 1
      [ 6.992267] init: plymouth-splash main process (546) terminated with status 1

    The manpage of drbdsetup says that LVM (which I use) doesn't support barriers (better known as tagged command queuing or native command queuing), so I configured the drbd device not to use barriers. This can be seen in /proc/drbd (by "wo:f", meaning flush, the next method drbd chooses after barrier):

      3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
         ns:2160152 nr:520204 dw:2680344 dr:2678107 al:3549 bm:9183 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

    And on the other host:

      3: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----
         ns:0 nr:2160152 dw:2160152 dr:0 al:0 bm:8052 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

    I also enabled the option disable_sendpage, as per the drbd docs:

      cat /sys/module/drbd/parameters/disable_sendpage
      Y

    I also tried adding barriers=0 to fstab as a mount option. Still it sometimes says:

      [ 58.603896] blkfront: barrier: write xvda2 op failed
      [ 58.603903] blkfront: xvda2: barrier or flush: disabled

    I don't even know if ext3 has a nobarrier option. And, because only one of my storage systems is battery backed, it would not be smart anyway. Why does it still complain about barriers when I disabled them? Both hosts are:

      Debian: 6.0.4
      uname -a: Linux 2.6.32-5-xen-amd64
      drbd: 8.3.7
      Xen: 4.0.1

    Guest:

      Ubuntu 12.04 LTS
      uname -a: Linux 3.2.0-24-generic pvops

    drbd resource:

      resource drbdvm {
        meta-disk internal;
        device /dev/drbd3;

        startup {
          # The timeout value when the last known state of the other side was available. 0 means infinite.
          wfc-timeout 0;
          # Timeout value when the last known state was disconnected. 0 means infinite.
          degr-wfc-timeout 180;
        }

        syncer {
          # This is recommended only for low-bandwidth lines, to only send those
          # blocks which really have changed.
          #csums-alg md5;

          # Set to about half your net speed
          rate 60M;

          # It seems that this option moved to the 'net' section in drbd 8.4. (later release than Debian has currently)
          verify-alg md5;
        }

        net {
          # The manpage says this is recommended only in pre-production (because of its performance), to determine
          # if your LAN card has a TCP checksum offloading bug.
          #data-integrity-alg md5;
        }

        disk {
          # Detach causes the device to work over-the-network-only after the
          # underlying disk fails. Detach is not default for historical reasons, but is
          # recommended by the docs.
          # However, the Debian defaults in drbd.conf suggest the machine will reboot in that event...
          on-io-error detach;

          # LVM doesn't support barriers, so disabling it. It will revert to flush. Check wo: in /proc/drbd.
          # If you don't disable it, you get IO errors.
          no-disk-barrier;
        }

        on host1 {
          # universe is a VG
          disk /dev/universe/drbdvm-disk;
          address 10.0.0.1:7792;
        }

        on host2 {
          # universe is a VG
          disk /dev/universe/drbdvm-disk;
          address 10.0.0.2:7792;
        }
      }

    DomU cfg:

      bootloader = '/usr/lib/xen-default/bin/pygrub'
      vcpus = '2'
      memory = '512'

      #
      #  Disk device(s).
      #
      root = '/dev/xvda2 ro'
      disk = [
          'phy:/dev/drbd3,xvda2,w',
          'phy:/dev/universe/drbdvm-swap,xvda1,w',
      ]

      #
      #  Hostname
      #
      name = 'drbdvm'

      #
      #  Networking
      #
      # fake IP for posting
      vif = [ 'ip=1.2.3.4,mac=00:16:3E:22:A8:A7' ]

      #
      #  Behaviour
      #
      on_poweroff = 'destroy'
      on_reboot = 'restart'
      on_crash = 'restart'

    In my test setup: the primary host's storage is a 9650SE SATA-II RAID PCIe with battery. The secondary is software RAID 1. Isn't DRBD+Xen widely used? With these problems, it's not going to work.
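    On the ext3 side of the question: ext3's mount option is barrier=0 / barrier=1 (singular), not barriers=0 or nobarrier, which may be why the fstab change had no visible effect. If switching barriers off at the filesystem level is ever wanted, and weighing the data-safety concern already noted above, a sketch inside the guest would be:

      # One-off, inside the DomU
      sudo mount -o remount,barrier=0 /

      # Or persistently in the guest's /etc/fstab, e.g.:
      # /dev/xvda2  /  ext3  errors=remount-ro,barrier=0  0  1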

    Read the article
