Search Results

Search found 28880 results on 1156 pages for 'check disk'.

Page 64 of 1156

  • Does a 3ware "ECC-ERROR" matter on a JBOD when I have ZFS?

    - by Stefan Lasiewski
    I have a FreeBSD 8.x machine running ZFS with a 3ware 9690SA controller. The 3ware controller shows an ECC-ERROR on one of the disks:

      //host> /c0 show
      VPort Status     Unit Size      Type Phy Encl-Slot Model
      ------------------------------------------------------------------
      p0    OK         u0   279.39 GB SAS  0   -         SEAGATE ST3300657SS
      p1    OK         u0   279.39 GB SAS  1   -         SEAGATE ST3300657SS
      p2    OK         u1   931.51 GB SAS  2   -         SEAGATE ST31000640SS
      p3    ECC-ERROR  u2   931.51 GB SAS  3   -         SEAGATE ST31000640SS
      p4    OK         u3   931.51 GB SAS  4   -         SEAGATE ST31000640SS

    /c0 show events shows no ECC errors in its recent history. ZFS does not currently detect any errors; zpool status says "No known data errors". My question: is this ECC-ERROR something that I need to be concerned about? According to the 3ware CLI 9.5.2 manual, an ECC-ERROR means that the 3ware controller caught a read error for one or more sectors on this drive. This sometimes occurs when a RAID array is recovering from a failed disk. I believe ECC-ERRORs can also be detected when the 3ware controller verifies each disk. None of the drives have failed and thus there was no drive rebuild, so I assume 3ware discovered a bad sector when it ran its weekly auto-verify scan of the disks. Is this a safe assumption? According to our logs, ZFS has not detected any bad sectors on this drive. ZFS can work around read errors -- if ZFS detects a bad sector on the drive, it will simply mark that sector as bad and never use it again. From the ZFS perspective one bad sector isn't a big deal, although it might indicate that the drive is starting to go bad.
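
    A quick way to cross-check the controller's complaint is to force ZFS to read and verify every block with a scrub. A minimal sketch, assuming the pool is named tank (substitute your pool's name):

      # read and checksum every block in the pool
      zpool scrub tank
      # afterwards, look at the per-device READ/CKSUM error counters
      zpool status -v tank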

    Read the article

  • How to configure VirtualBox server for performance at home

    - by BluJai
    I currently have two physical Ubuntu Server 10.10 servers at home: one serves as our firewall/router/DHCP/VPN server, and the other performs double duty as a file server and a VirtualBox host for an Ubuntu Desktop 10.10 machine that I use over remote connections (via NoMachine) for various thin-client purposes that are irrelevant to my question. What I'd like to accomplish is to consolidate the two physical machines into a single dedicated VirtualBox host (most likely running Ubuntu Server 10.10). Note that I'd like to stick with VirtualBox (if possible) because I'm most comfortable with it and use it daily at both home and work. Specifically, I plan to have one VM set up as the file server, another as the firewall/router/DHCP/VPN (or possibly split those a bit), and a third, which is the only current VM (already VirtualBox), which is the thin-client host.

    My question comes down to performance and/or recommendations about the file server VM. The file server hosts about 6 terabytes of data across 4 drives. What I'd like to do is use raw disk access from the VM directly to the existing disks. However, I'm curious what performance advantage/disadvantage that would have compared to using shared folders from the VM host, basically having each whole drive served as a shared folder to the VM, which would then serve it to the other machines on the network. I don't know if virtual disks would even work in this scenario, and I certainly wouldn't want a drive to be filled with just a single 1.5 TB disk-image file.

    For context (not to solicit additional advice): I want to virtualize these machines because I intend to make regular use of VirtualBox's snapshot capabilities for the VMs' system disks (which will be virtual drives), and I have some physical space/power needs to address (as I mentioned, this is at home).
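
    For reference, VirtualBox's raw disk access goes through a small VMDK wrapper file rather than a normal disk image, so no 1.5 TB image file is involved. A sketch, assuming the data disk shows up on the host as /dev/sdb (a hypothetical device name; the user running the VM needs read/write access to it):

      # create a wrapper VMDK that passes the whole physical disk to the VM
      VBoxManage internalcommands createrawvmdk \
          -filename /home/user/datadisk.vmdk -rawdisk /dev/sdb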

    Read the article

  • Cannot access drive in Windows 7 after scandisk lockup, but can in safe mode....

    - by Matt Thompson
    I ran scandisk on my external USB drive due to the inability to delete a few files. Windows asked me if I wanted to unmount the drive before the scan, warning me that it would be unusable until the scan was finished, and I said yes. During the scan, my machine locked up and I was forced to reboot. When it came back up, I was unable to access the drive, getting the error "L: is not accessible, access is denied". Computer Management sees the drive and shows the proper amount of disk space used.

    I booted into safe mode and can access the drive with no problems, though I noticed in Explorer that all the folders have locks on them. I booted back into Windows normally but still could not access the drive, getting the same error as above. However, if I right-click on the drive, select Properties, go to Customize, and under the folder pictures area select Choose File, a window opens up showing the root of the drive with all the folders accessible; again, each folder icon has a lock on it. I can even copy files from the drive to another one. So the files are not gone, and Windows can obviously access the drive no matter what it thinks, so there has to be a problem with the flag Windows put on the drive when it ran the original scan that failed.

    I was able to run a scan both in safe mode with no problems and in Windows. In Windows, I received the cannot-access error the first time I ran scandisk on it, but if I try again, it works fine. Any ideas on how to clear the flag that Windows set, so I can access the drive normally again?
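
    If the root cause turns out to be ownership/ACLs left over from the interrupted scan, one common repair (a sketch, run from an elevated command prompt and assuming the drive letter is L:) is to retake ownership and reset the permissions recursively:

      takeown /F L:\ /R /D Y
      icacls L:\ /reset /T

    Only do this on a drive you own outright, since it rewrites every ACL on the volume.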

    Read the article

  • PC only boots from Linux-based media and won't boot from DOS-based media

    - by Xolstice
    I have this problem where the PC only seems to boot from a floppy disk or CD if it was created as Linux-based bootable media. If it was created as DOS-based bootable media, the system just freezes at the starting point of the boot process. I originally asked this under question 139515 for CD booting only, and based on the given answers, I was under the impression the problem was with the CD-ROM drive; however, I have since installed a newly purchased CD-ROM drive and the same freezing occurs. This then made me try the DOS bootable floppy disk approach, and I was quite surprised that it exhibited the same freezing problem. I then tried a Linux bootable floppy, and everything booted from it without any issues. As I mentioned in my original question, the PC was booting just fine from the DOS-based bootable CD, and then it suddenly decided to pull this freezing stunt. I can't remember if I changed anything in the BIOS settings that may have caused the problem, but I am wondering if that could be the case; it is currently using the Award Modular BIOS v4.60PGMA. Can anyone help?

    Read the article

  • Quota, AD and C#

    - by Gnial0id
    First off, my mother tongue is not English, so I apologize for any mistakes. I'm working on a WS2008R2 server with an Active Directory, and a web platform manages this AD with C# code. A group of users has to be able to create user accounts, and during the procedure a disk quota for the new account is (and has to be) created. As the "creator" must not be a member of the Administrators group, access to the C: disk is denied. So, I want to perform the File Server Resource Manager operations from C# code under a non-admin account. The code is correct; it works normally under an admin account. So the problem comes down to the permissions on the hard drive. I've looked for help on the Internet, without success. It seems that quota delegation is impossible; only an admin can perform this. A colleague helped me a bit and found the "Bypass traverse checking" setting (a GPO user right) on a forum, but it doesn't seem to be the right way. Any help would be appreciated.
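
    One workaround sketch: rather than calling FSRM from the non-admin web context, shell out to the dirquota command-line tool under a dedicated privileged service account. The account, domain, template name and password handling below are all assumptions to adapt:

      using System;
      using System.Diagnostics;
      using System.Security;

      static class QuotaHelper
      {
          // Sketch: create the quota via the FSRM command-line tool, run under
          // a dedicated privileged service account instead of the web user's.
          public static void CreateQuota(string folder)
          {
              // hypothetical password; load it from a secure store in practice
              var password = new SecureString();
              foreach (char c in "S3rvicePassw0rd") password.AppendChar(c);

              var psi = new ProcessStartInfo();
              psi.FileName = "dirquota";
              psi.Arguments = "quota add /path:\"" + folder + "\" /sourcetemplate:\"200 MB Limit\"";
              psi.Domain = "MYDOMAIN";       // hypothetical domain
              psi.UserName = "svc-fsrm";     // hypothetical privileged account
              psi.Password = password;
              psi.UseShellExecute = false;   // required when supplying credentials

              using (Process p = Process.Start(psi))
              {
                  p.WaitForExit();
                  if (p.ExitCode != 0)
                      throw new InvalidOperationException("dirquota failed: " + p.ExitCode);
              }
          }
      }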

    Read the article

  • Copy all installed programs & files in a hard disk (which has 32 bit Windows 7) and clone/transfer it to another computer which has 64 bit Windows 7

    - by galacticninja
    I recently got a new PC with 64-bit Windows 7 installed. The current PC that I am using has 32-bit Windows 7 installed. I would like to know if there is software that can copy all my installed programs and files on the hard disk of the 32-bit Windows 7 PC and transfer them to the newer PC's hard disk, which has a 64-bit version of Windows 7. This is essentially like "cloning" a hard disk, except that I would like to use a 64-bit OS on the target drive instead of also using the 32-bit OS of the source drive. I would like to do this so I can avoid reinstalling and reconfiguring my installed programs and files again on the new PC. If possible, I would like the new PC to work as my previous PC did, with the installed programs, configuration and files intact, except that the OS is now 64-bit and the hard disk has a larger capacity. I have heard of programs that can clone a hard disk, but my concern is that the 32-bit Windows 7 OS would also be cloned to the new 64-bit PC. If it is not possible to transfer my installed programs and settings the way I described, is there software that can make it easier to migrate my installed programs, their configurations and my files from a 32-bit Windows 7 PC to a 64-bit Windows 7 PC? Details: I have a SATA-to-USB connector/adapter to copy files from the current hard disk to the newer one. The two PCs are connected through LAN, so I can also transfer files over LAN. Both PCs have only one hard disk.
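
    Note that installed 32-bit programs generally cannot be moved onto a 64-bit Windows install by copying alone (registry entries, Program Files vs Program Files (x86), and WoW64 redirection all differ), which is why migration tools move files and settings rather than applications. For the data portion, a sketch using robocopy, assuming the old disk appears as E: through the USB adapter and the new data location is D: (both hypothetical letters):

      robocopy E:\Users\me D:\Users\me /E /COPYALL /R:1 /W:1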

    Read the article

  • Dell PowerEdge T710, add a new hard disk, how to?

    - by user1340802
    I need to add a new hard disk to a PowerEdge T710 running VMware ESXi 4. The hard disk is a 'normal' 1TB desktop hard disk (that is, it did not come from Dell, and I have no rack/caddy to plug it into any of the front bays). I would like to add this disk, as easily as possible, for a virtual machine that needs space. I have found that there is an available SATA cable with its power connector, so may I just plug the disk into those and use the empty 5 1/4" slot available under the CD drive (with a 5 1/4"-to-3 1/2" bay adapter)? (Even if, this way, it seems that I bypass the RAID controller that owns the front bays.) That seems easier than adding the disk to the already-defined RAID (by the way, I am also not sure how to do that, and I would not risk messing up things that already work). What other operations would I have to do? (Sorry, I am a real beginner at VMware ESXi and PowerEdge management :/ I have seen that there is some management from the BIOS (Ctrl+R at startup) so that the disk will be seen or initialized. I am really not sure of the steps needed...) Thank you, best.

    Read the article

  • HD working with IDE USB adapter but not recognised by bios

    - by Rajeeva
    I have a Windows XP Pentium III desktop with two hard drives. The first one has the OS and is luckily working. A few days ago the second drive, on the secondary master IDE channel, was unable to read some files; since then it was failing and reviving intermittently, and now it always shows as failed on the IDE channel. While the HD was intermittently failing, I was able to copy some data from it to the other drive. During that time, if the system was running and the hard disk failed, the system froze and I had to reboot.

    Then I got a new 80 GB HDD similar to the failing one (same make, a Seagate Barracuda), a new data cable for the drive, and an IDE-to-USB adapter. I installed the new hard drive in the previous drive's place (secondary master) and formatted it; it worked for one day and then it also failed. Meanwhile I connected the old HD through the IDE/USB adapter and could view all the data; some of that data I was able to back up from the old HD to the new HD before the new HD failed.

    I have tried connecting the new HD on the primary channel as the slave disk, but when I do that the BIOS detects neither the OS drive nor the new drive, and the system does not boot. Surprisingly, the older (previously failed) HD and the new HD both work fine on the USB channel with the IDE/USB adapter. I have ruled out any problem with the secondary channel, since the DVD-ROM I was earlier using as primary slave is now connected as secondary master and works fine. I am really confused by this behavior on my system. Please, can anybody try to solve this for me? Thanks.

    Read the article

  • 750Gig Hard Drive shows full with only 315Gigs used

    - by Chris Kelly
    I have a Win7 laptop with a 750 Gig C: drive. It came from the manufacturer partitioned with 714 Gigs usable. I installed programs, music files, etc. up to 285 Gigs, and as of a few weeks ago it showed 285 Gigs used. Two weeks of house guests later, it shows the HD as full. I deleted some files, but Windows still shows 652 Gigs used on this drive while there are only 285 Gigs of files on it. Relevant details: I am Administrator on the laptop and have fair knowledge of what I am doing. I did not restore from backup, restore from a mirror, upgrade HDs, or do anything else that would have touched the partition structure; just daily use as an imaging machine and for the web. I have checked the partitions under Disk Management: no change, still partitioned with 714 Gigs usable. I have looked through the C: drive by hand, showing hidden files and folders: no change. I have used JDiskReport to double-check: it shows I have only 285 Gigs on the C: drive. I triple-checked with TreeSize run as Administrator, and it also shows 285 Gigs on the C: drive, yet Windows 7 still shows the drive as almost full. I used the Windows 7 utilities to check for disk errors, and defragged the drive: no errors shown and no change after the defrag.
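
    One classic cause of exactly this symptom, used space that no file-level tool can see, is Volume Shadow Copy storage (System Restore and Previous Versions), which TreeSize and JDiskReport cannot enumerate. A quick check from an elevated command prompt, with an arbitrary 10GB cap as an example:

      vssadmin list shadowstorage
      rem if the shadow storage turns out to be huge, cap it:
      vssadmin resize shadowstorage /for=C: /on=C: /maxsize=10GB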

    Read the article

  • mysql disk io keeps increasing ... is that normal?

    - by trustfundbaby
    So I've been trying to figure out this disk IO problem I have been having with my Linode VPS. Over the last day or two I've just left watch -n1 pidstat -d running in a console window, and the output looks like this: (screenshot not included). Monitoring it over the last few days, I've noticed that my problem lies with the init, searchd, and mysql processes. searchd is Sphinx, and all its indexes are on disk, so disk IO there is inevitable (apparently). What I can't understand is why the disk reads (kB_rd/s) for mysql refuse to stabilize and just keep going up; it started out at 154 yesterday and is up to what you see in the screenshot. Disk writes (kB_wr/s), meanwhile, have remained pretty constant the entire time. My VPS only has 768MB RAM; my mysql db has a size of about 220MB, and after running mysqltuner.pl and reading a bit about it, I've been advised to set my innodb_buffer_pool_size to 220MB, but I simply cannot afford to do that ... I have it up to 150MB. My question is twofold: why does the init process have that much disk reading to do, and why is mysql doing so much disk reading?
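
    For reference, the setting under discussion lives in my.cnf and takes effect after a mysqld restart. A sketch with the 150MB value mentioned above:

      [mysqld]
      # how much of the InnoDB working set can stay cached in RAM;
      # reads that miss this pool fall through to disk
      innodb_buffer_pool_size = 150M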

    Read the article

  • Linq where clause with multiple conditions and null check

    - by SocialAddict
    I'm trying to check in LINQ whether a date is not null, and if it isn't, check that it's a past date:

      QuestionnaireRepo.FindAll(q => !q.ExpiredDate.HasValue || q.ExpiredDate > DateTime.Now)
                       .OrderByDescending(order => order.CreatedDate);

    I need the second check to only apply if the first is true. I am using a single repository pattern, and FindAll accepts a where clause. Any ideas? There are lots of similar questions on here, but none that give the answer. I'm very new to LINQ, as you may have guessed :) Edit: I get the results I require now, but it will be checking the conditional on null values in some cases. Is this not a bad thing?
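
    For what it's worth, || in plain C# short-circuits left to right, so the Value comparison below is only evaluated when HasValue is true; the null case never reaches it. (When a LINQ provider translates the predicate to SQL, NULL handling happens in the generated SQL instead.) A sketch making the intent explicit:

      var results = QuestionnaireRepo
          .FindAll(q => !q.ExpiredDate.HasValue               // no expiry set, or...
                     || q.ExpiredDate.Value > DateTime.Now)   // ...not yet expired
          .OrderByDescending(q => q.CreatedDate);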

    Read the article

  • Autotools automatic invocation of lcov after 'make check'

    - by disown
    I have successfully set up an autotools project where the tests compile with instrumentation, so I can get a test coverage report. I can get the report by running lcov in the source dir after a successful 'make check'. I now face the problem that I want to automate this step. I would like to add this to 'make check' or make it a separate goal 'make check-coverage'. Ideally I would like to parse the result and fail if the coverage falls below a certain percentage. The problem is that I cannot figure out how to add a custom target at all. The closest I got was finding this example autotools config, but I can't see where in that project the goal 'make lcov' is added. I can only see some configure flags in m4/auxdevel.m4. Any tips?
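
    For the custom-target half of the question: automake copies rules it does not recognize straight through to the generated Makefile, so a hand-written rule in Makefile.am works. A sketch (the file names and report directory are arbitrary choices):

      # Makefile.am: run the test suite, then collect and render coverage data
      check-coverage: check
      	lcov --capture --directory . --output-file coverage.info
      	genhtml coverage.info --output-directory coverage-report

    Failing below a threshold could then be a small script that parses the output of 'lcov --summary coverage.info', run as the rule's last step.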

    Read the article

  • JSF : Better way to check for existence of <h:message for="id"/>

    - by user552809
    I have a form in which validation error messages need to be displayed below the input elements. The error needs to be highlighted by showing an error bubble around the error message and the input text. To achieve this, I need to check for the existence of h:messages for individual elements. I am able to check for the existence of global error messages as follows: <h:panelGroup rendered="#{not empty facesContext.messages}"> </h:panelGroup> How can I check the same for a specific client id (say, first name)? Something like facesContext.messages("creditCardNo"). The solution I have currently is to create a custom resolver, but I was wondering if there is a better one.
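
    If this is JSF 2.0, FacesContext also exposes getMessageList(String clientId), and containers with EL 2.2 support let you call it straight from an EL expression. A sketch, where 'ccForm:creditCardNo' is a hypothetical full client id and the style class is made up:

      <h:panelGroup styleClass="errorBubble"
                    rendered="#{not empty facesContext.getMessageList('ccForm:creditCardNo')}">
          <h:message for="creditCardNo"/>
      </h:panelGroup>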

    Read the article

  • Deserialized xml - check if has child nodes without knowing specific type

    - by AndyC
    I have deserialized an xml file into a C# object and have an "object" containing a specific node I have selected from this file. I need to check if this node has child nodes. I do not know the specific type of the object at any given time. At the moment I am just re-serializing the object into a string, and loading it into an XmlDocument before checking the HasChildNodes property, however when I have thousands of nodes to check this is extremely resource intensive and slow. Can anyone think of a better way I can check if the object I have contains child nodes? Many thanks :)
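
    One way to avoid the reserialize-and-reload round trip is to ask the deserialized object itself, via reflection: with XmlSerializer-style classes, child elements surface as non-null complex properties or non-empty collections. A rough sketch (the property conventions are assumptions about your generated classes):

      using System.Collections;
      using System.Reflection;

      static class XmlNodeInspector
      {
          // Sketch: treat any non-null complex property or non-empty
          // collection property as evidence of a child node.
          public static bool HasChildNodes(object node)
          {
              foreach (PropertyInfo p in node.GetType().GetProperties())
              {
                  if (p.GetIndexParameters().Length > 0) continue;  // skip indexers
                  if (p.PropertyType == typeof(string) || p.PropertyType.IsValueType)
                      continue;                                     // text/attribute values, not children
                  object value = p.GetValue(node, null);
                  if (value == null) continue;
                  var list = value as IEnumerable;
                  if (list == null) return true;                    // a single nested element
                  foreach (object item in list) return true;        // a non-empty collection
              }
              return false;
          }
      }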

    Read the article

  • JScript JSON Object Check

    - by George
    I'm trying to check whether json[0]['DATA']['name'][0]['DATA']['first_0'] exists, when in some instances json[0]['DATA']['name'] contains nothing. I can check json[0]['DATA']['name'] using:

      if (json[0]['DATA']['name'] == '') { // DOES NOT EXIST }

    however:

      if (json[0]['DATA']['name'][0]['DATA']['first_0'] == '' ||
          json[0]['DATA']['name'][0]['DATA']['first_0'] == 'undefined') { // DOES NOT EXIST }

    throws the error "json[0]['DATA']['name'][0]['DATA'] is null or not an object". I understand this is because the array 'name' doesn't contain anything in this case, but in other cases first_0 does exist and json[0]['DATA']['name'] does return a value. Is there a way I can check json[0]['DATA']['name'][0]['DATA']['first_0'] directly, without having to do the following?

      if (json[0]['DATA']['name'] == '') {
          if (json[0]['DATA']['name'][0]['DATA']['first_0'] != 'undefined') { // OBJECT EXISTS }
      }
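
    A guard chain avoids dereferencing a missing level, because && stops at the first empty or undefined value (note it treats '' and undefined alike, matching the checks above):

      var name = json[0] && json[0]['DATA'] && json[0]['DATA']['name'];
      var first = name && name[0] && name[0]['DATA'] && name[0]['DATA']['first_0'];
      if (first) {
          // first_0 exists and is non-empty
      }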

    Read the article

  • Can't check more than one RadioButton across multiple items in a Treeview

    - by Mike Johnston
    I'm using a TreeView control to present a list of Questions. Using the Prism.DataTemplateSelector, I'm loading a View (.xaml file) that represents a single Question into each node in the TreeView. In the View for that question is a ListBox containing RadioButtons (one for each item in a Picklist object that the ListBox is bound to). The radio buttons work as expected within a question, but when I check a RadioButton on another node/question in the TreeView, the check for the button in the Question I was editing before disappears. In other words, I'm only able to check one RadioButton across the whole list of Questions/Items bound to the containing TreeView. How do I scope the group of RadioButtons in the ListBox to the single question instead of all the questions in the TreeView?
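
    The usual cause is that all the RadioButtons end up in one implicit group, so WPF enforces a single selection across the whole tree; giving each question's buttons their own GroupName scopes the selection. A sketch, where PicklistItems, Text, IsSelected and QuestionId are hypothetical members of the view-models involved:

      <ListBox ItemsSource="{Binding PicklistItems}">
          <ListBox.ItemTemplate>
              <DataTemplate>
                  <!-- one group per question, so a check in another node can't clear this one -->
                  <RadioButton Content="{Binding Text}"
                               IsChecked="{Binding IsSelected, Mode=TwoWay}"
                               GroupName="{Binding DataContext.QuestionId,
                                           RelativeSource={RelativeSource FindAncestor,
                                                           AncestorType={x:Type ListBox}}}"/>
              </DataTemplate>
          </ListBox.ItemTemplate>
      </ListBox>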

    Read the article

  • TFS 2008 ignores team project check-in settings

    - by JoshEarl
    I'm trying to set up our TFS 2008 instance to require that projects build before they can be checked in. I have created a check-in policy using the out of the box "Builds" policy, but I'm still able to check broken projects in after mangling the code and attempting to build the project. We're a small shop, and TFS was originally set up with our team's Active Directory group listed as TFS admins. Is this the problem? Do check-in policies apply to TFS admins? Any other suggestions?

    Read the article

  • How to check if canvas objects overlap each other

    - by ?????? ???????
    I'm trying to check if two objects (e.g. a rectangle and a triangle) on an HTML5 canvas are overlapping each other. Currently I can only check that by looking at the screen (having set globalCompositeOperation='lighter'). My first idea would have been to scan the whole canvas for the "lighter" blended color (from the setting above), but for that I would have to look at every single pixel, which would be rather costly for what I need. Is there a (better) alternative to automatically check if they are overlapping? Best regards.
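
    If axis-aligned bounding boxes are a good enough approximation for the shapes involved, the check is cheap and needs no pixel scanning. A sketch, assuming each object tracks x, y, width and height:

      function overlaps(a, b) {
          // the boxes intersect exactly when they overlap on both axes
          return a.x < b.x + b.width  && b.x < a.x + a.width &&
                 a.y < b.y + b.height && b.y < a.y + a.height;
      }

      if (overlaps(rect, triangleBounds)) {
          // only now pay for a finer per-pixel or edge test
      }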

    Read the article

  • Changing the value of a DataGridComboBoxColumn on checking/unchecking of check boxes in a DataGridCheckBoxColumn

    - by MD
    I have a WPF data grid with two data template columns: one has a check box as its data template and the other has a combo box. My requirement is that I need to disable a few of the options in the combo box depending on whether the check box is checked or unchecked, for each individual row. With the code that I have tried, I am able to change the values of the combo boxes, but the change applies to the whole column and not to individual rows. Please let me know how to target the combo box belonging to the corresponding check box in a particular row.
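
    The key is that a cell's DataContext inside a DataGridTemplateColumn is the row's own item, so binding the combo box to properties of that item (rather than to shared state on the parent view-model) keeps the effect per row. A sketch, where IsFlagged and AllowedOptions are hypothetical members of the row class, which would raise PropertyChanged for AllowedOptions whenever the check box toggles IsFlagged:

      <DataGridTemplateColumn Header="Options">
          <DataGridTemplateColumn.CellTemplate>
              <DataTemplate>
                  <!-- bound to the row item, so each row stays independent -->
                  <ComboBox ItemsSource="{Binding AllowedOptions}"
                            IsEnabled="{Binding IsFlagged}"/>
              </DataTemplate>
          </DataGridTemplateColumn.CellTemplate>
      </DataGridTemplateColumn>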

    Read the article

  • Check if a symlink has changed

    - by BCS
    I have a daemon that, when it's started, loads its data from a directory that happens to be a symlink. Periodically, new data is generated and the symlink updated. I want a bash script that will check if the current symlink is the same as the old one (that the daemon started with) and if not, restart the daemon. My current thought is:

      if [[ ! -e $old_dir || $(readlink "$data_dir") == $(readlink "$old_dir") ]]; then
          echo restart
          ...
          ln "$(readlink "$data_dir")" "$old_dir" -sf
      else
          echo no restart
      fi

    The abstract requirement is: each time the script runs, it needs to check if a symlink on a given path now points to something other than it did the last time, and if so, do something. (The alternative would be to check if the data at the path has changed, but I don't see that being any cleaner.) My questions: Is this a good approach? Does anyone have a better idea? Where should I put $old_dir?
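
    An alternative that sidesteps the second symlink entirely: record the last-seen target in a small state file and compare on each run. A sketch (the state file's location is an arbitrary choice):

      state_file=/var/run/mydaemon.link-target   # hypothetical path
      current=$(readlink -f "$data_dir")
      last=$(cat "$state_file" 2>/dev/null)

      if [[ "$current" != "$last" ]]; then
          echo "$current" > "$state_file"
          echo restart    # restart the daemon here
      else
          echo no restart
      fi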

    Read the article

  • Check if exists, if so, update by 1++, if not insert

    - by Scarface
    Hey guys, quick question. I currently have an insert statement:

      $query = "INSERT INTO new_mail VALUES ('$to1', '0')";

    where the fields are username and message_number. Currently, to check if the entry exists, I do a select query and then check the number of rows with mysql_num_rows (PHP). If rows == 1, I get the current message_number, set it equal to $row['message_number'] + 1, and then update that entry with another query. Is there an easier way to do all this in just MySQL, with one query (check if it exists; if not, insert; if so, increase message_number by 1)?
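
    MySQL can collapse the whole select/check/update dance into one statement with INSERT ... ON DUPLICATE KEY UPDATE, provided username carries a UNIQUE or PRIMARY KEY index; that index is what triggers the update path. A sketch (and note it's safer to pass $to1 as a bound parameter than to interpolate it):

      -- assumes a UNIQUE (or PRIMARY KEY) index on new_mail.username
      INSERT INTO new_mail (username, message_number)
      VALUES ('$to1', 0)
      ON DUPLICATE KEY UPDATE message_number = message_number + 1;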

    Read the article

  • Check If Stored Procedure Returns Value

    - by Eric
    Hello all, I am using LINQ to SQL in VS 2010, and I have the following stored procedure to check a username and password:

      ALTER PROCEDURE dbo.CheckUser
      (
          @username varchar(50),
          @password varchar(50)
      )
      AS
          SELECT * FROM Users
          WHERE UserName = @username AND Password = @password

    The problem I'm having is that it throws an exception if the username and password are incorrect. I'd like to perform a check to see if there is a return value, rather than using try/catch to determine whether the procedure returned a value. Should I do this check in code (C#)? Or is there a way to do it in SQL? Thanks.
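
    On the C# side, the generated LINQ to SQL method returns an enumerable result you can probe without a try/catch; the exception described is typical of calling Single() or First() on an empty result, while SingleOrDefault() just yields null. A sketch (MyDataContext is a hypothetical DataContext name; requires using System.Linq):

      using (var db = new MyDataContext())
      {
          var user = db.CheckUser(username, password).SingleOrDefault();
          if (user == null)
          {
              // no matching row: wrong username or password
          }
      }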

    Read the article

  • .htaccess Permission denied. Unable to check htaccess file

    - by Josh
    Hi, I have a strange problem when adding a sub-domain to our virtual server. I have done similar sub-domains before and they have worked fine. When I try to access the sub-domain I get a 403 Forbidden error. I checked the error logs and found the following:

      pcfg_openfile: unable to check htaccess file, ensure it is readable

    I've searched Google and could only find solutions regarding file and folder permissions, which I have checked; that didn't solve it. I also saw problems attributed to FrontPage Extensions, but those are not installed on the server. Edit: Forgot to say that there isn't a .htaccess file in the directory of the sub-domain.
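
    That particular Apache error usually means the web-server user cannot traverse (execute) one of the parent directories on the way to the docroot, which also fits there being no .htaccess file at all. A sketch for spotting the offending component (the path is hypothetical):

      # show owner and permissions of every directory on the way down
      namei -m /var/www/subdomain.example.com

      # grant traverse rights on whichever directory lacks them
      chmod o+x /var/www/subdomain.example.com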

    Read the article

  • postfix check warns me that some files differ

    - by Nicolas BADIA
    If I run postfix check on my Debian Squeeze server, I get this:

      postfix/postfix-script: warning: /var/spool/postfix/lib/libnss_nisplus-2.11.3.so and /lib/libnss_nisplus-2.11.3.so differ
      postfix/postfix-script: warning: /var/spool/postfix/lib/libnss_files-2.11.3.so and /lib/libnss_files-2.11.3.so differ
      postfix/postfix-script: warning: /var/spool/postfix/lib/libnss_compat-2.11.3.so and /lib/libnss_compat-2.11.3.so differ
      postfix/postfix-script: warning: /var/spool/postfix/lib/libnss_hesiod-2.11.3.so and /lib/libnss_hesiod-2.11.3.so differ
      postfix/postfix-script: warning: /var/spool/postfix/lib/libnss_nis-2.11.3.so and /lib/libnss_nis-2.11.3.so differ
      postfix/postfix-script: warning: /var/spool/postfix/lib/libnss_dns-2.11.3.so and /lib/libnss_dns-2.11.3.so differ

    Does anybody know a solution to fix this?
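
    Postfix runs partly chrooted under /var/spool/postfix and keeps private copies of those NSS libraries there; the warnings just mean the copies have gone stale, typically after a libc upgrade. On Debian the init script refreshes the chroot copies at startup, so a restart is usually the whole fix. A sketch:

      /etc/init.d/postfix restart
      postfix check    # the warnings should now be gone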

    Read the article
