Search Results

Search found 20484 results on 820 pages for 'small projects'.

  • Apache refusing to change DocumentRoot

    - by mingos
    I've installed Zend Server CE 5.1.0 on Windows 7 Ultimate 64-bit in its default location, meaning the path to my htdocs is C:\Program Files (x86)\Zend\Apache2\htdocs. Not something that I would like to type each time I check out a project from SVN in Eclipse or something. I'd like to set the DocumentRoot to a different folder, namely D:\www.

    What I've done

    I edited conf/httpd.conf, with the significant lines being:

        DocumentRoot "D:\www"
        <Directory "D:\www">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
        Include conf/extra/httpd-vhosts.conf

    I edited conf/extra/httpd-vhosts.conf to add a virtual host:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot D:\www
            ServerName localhost
            ServerAlias localhost
            SetEnv APPLICATION_ENV development
            SetEnv APPLICATION_DOMAIN localhost
        </VirtualHost>
        <VirtualHost *:80>
            DocumentRoot D:\www\UmbraCMS
            ServerName umbracms.local
            ServerAlias umbracms.local
            SetEnv APPLICATION_ENV development
            SetEnv APPLICATION_DOMAIN umbracms.local
        </VirtualHost>

    I edited C:\Windows\System32\drivers\etc\hosts to add this line:

        127.0.0.1 umbracms.local

    I also added a PHP project to D:\www\UmbraCMS and restarted Apache. Actually, I restarted the computer, too, just in case.

    What's supposed to happen

    After typing http://umbracms.local/ in the browser's address bar, I want to see my PHP project launch, obviously.

    What's actually happening

    No matter whether I type http://umbracms.local/ or http://localhost/, I'm taken to the Zend test page, located in C:\Program Files (x86)\Zend\Apache2\htdocs\index.html, as if neither the DocumentRoot change nor name-based virtual hosting had taken effect. Interestingly, when I put another project in C:\Program Files (x86)\Zend\Apache2\htdocs\bugraid\ and then typed http://localhost/bugraid in the browser, the project actually opened, or at least tried to, as it completely ignored the project's .htaccess file.

    Extra considerations

    Zend Server's Apache version is 2.2.16, PHP version is 5.3.0. I've installed MySQL CE 5.5.13 separately, and it works, both from the command line and via MySQL Workbench. I have XAMPP installed, but none of its components are started up. It's got its own install of Apache 2.2.17 and MySQL 5.5.1; its PHP version is 5.3.5 (I think).

    Question

    Have you had a similar situation before? What else might need taking care of in order to have Zend Server's Apache use D:\www as the document root for my PHP projects?
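
    One thing worth checking before digging deeper is whether the running Apache is actually reading the files edited above; a minimal sketch from an elevated command prompt (the path is the default Zend Server location, and the binary may be named apache.exe rather than httpd.exe in this bundle):

        cd "C:\Program Files (x86)\Zend\Apache2\bin"
        rem Show build settings, including SERVER_CONFIG_FILE (which httpd.conf is loaded)
        httpd.exe -V
        rem Dump the parsed DocumentRoot and VirtualHost configuration
        httpd.exe -S
        rem Syntax-check the configuration; a silently failing Include leaves the old settings active
        httpd.exe -t

    If -S does not list umbracms.local, the vhosts file is not being included by the configuration that is actually in use.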

  • In search of a good audio player for Ubuntu 9.10

    - by Joe Casadonte
    If this should be marked Community Wiki, please let me know.

    I'm switching from XP to Ubuntu, and I have been very disappointed with the selection of media players available. I'm primarily interested in an audio player, but integrated video and library management is OK, too. My criteria:

    - Must be able to play audio CDs (I'm shocked how many apps do away with this right away)
    - Must be able to play MP3 & WAV; OGG, SHN, FLAC are all bonuses
    - Repeat and Shuffle modes are a must
    - FreeDB / GraceNote through a proxy is a must (if it can read a PAC file, that would be awesome)
    - It needs to be really small, e.g. skinnable or an applet
    - Ability to execute a playlist is a plus
    - Gapless MP3 playback is a plus

    I'm running Gnome, but I'm not totally averse to a KDE app. Command-line only is also a viable option. Some that I've tried:

    - RhythmBox - probably the best of the lot that I've tried; I don't like its mini mode (doesn't show the song being played) and I can't figure out how to get it to hit FreeDB/GraceNote through a proxy
    - Songbird - can't play CDs, playlist management is atrocious
    - Banshee
    - Jajuk
    - Maybe a couple more.

    Thanks!

    UPDATE: I tried out VLC, Amarok and Songbird (again). VLC I eventually got to work (I had some kind of bad configuration). It seemed way more involved than I was looking for in a music player, and in general more geared to video than audio. I couldn't fathom its library management, which I think it has; maybe it doesn't, and that's why I couldn't figure it out. Amarok looked very promising, but the library management was not to my liking, and the way it handled a playlist with both MP3 and WAV is inexplicable at best. I did like some aspects of the UI, but not enough to keep it. Songbird is very finicky, but I like the library management. Sort of. It kept telling me my Watch folder was invalid, even though it clearly was accessible. Playlist management is bizarre, and the message that it was deleting source files whenever I deleted a playlist had me too worried to keep using it. Had it been able to play CDs, maybe I would have persevered. Audacious, while a bit odd at times, does seem to do what I want. If it had a library manager, I wouldn't have bothered trying any of the others.

    Thanks for the help, everyone!

  • networked storage for a research group, 10-100 TB

    - by Marc
    This is related to this post: http://serverfault.com/questions/80854/scalable-24-tb-nas-for-research-department but perhaps a little more general.

    Background: We're a research lab of around 10 people who do a lot of experiments that involve taking pictures at one of several lab setups and then analyzing them on one of several lab computers. Each experiment may produce 2 or 3 GB of data, and we are generating data at the rate of about 10 TB/year. Right now, we are storing the data on a 6-bay Netgear ReadyNAS Pro, but even with 2 TB drives, this only gives us 10 TB of storage. Also, right now we are not backing up at all. Our short-term backup plan is to get a second ReadyNAS, put it in a different building and mirror the one drive onto the other. Obviously, this is somewhat non-ideal.

    Our options:

    1) We can pay our university $400/TB/year for "backed up" online storage. We trust them more than we trust us, but not a whole lot.

    2) We can continue to buy small NASs and mirror them between offices. One limit, although stupid, is that we don't have an unlimited number of ethernet jacks.

    3) We can try to implement our own data storage solution, which is why I'm asking you guys.

    One thing to consider is that we're a very transient population and none of us are network administration experts. I will probably be here only another year or so, and graduate students, who are here the longest, have a 5-6 year time scale. So nothing can require expert oversight. Our data transfer rates are low - most of the data will just sit on the server waiting for someone to look at it once or twice - so we don't need a really high-speed system.

    Given these constraints, can someone recommend a fairly low-cost, scalable, more or less turnkey shared data storage system with backup in a separate physical location? Does such a thing exist, or should we just pay the university to take care of it for us?

    As a second question, our professor just got tenure and is putting together a budget. Here the goal is to ask for as much as you can and hope you get a fraction of it. So the same question, minus the low-cost. Without budget constraints, can you recommend a scalable, turnkey, backed-up storage system? Thanks

  • Separate zone exceptions for each view in BIND

    - by Stefan M
    Problem: Separate zones by query source network and return different records for LAN clients compared to WAN clients.

    I've implemented this at home on a small ALIX router with BIND 9.4: one view called "lan" and one view called "wan". The "lan" view had just the root.hints file and one zone. The "wan" view had many other zones, including a copy of the one zone from the "lan" view, but with different records. Querying domain1.tld from the LAN would give me local records. Querying domain1.tld from the WAN would give me external records. Querying domain2.tld from the LAN would give me the same records as from the WAN, as it only existed in the WAN view.

    Now I'm trying to re-implement this on a larger scale and suddenly my view is unable to query anything outside itself. This is natural according to the bind-users list, and they suggest I copy all my zones into my LAN view. I'm hoping someone here has a better solution, because that means I'll have to copy, and maintain, thousands of zone files in multiple views. This is unfeasible.

    My configuration at home resembles this:

        acl lanClients {
            192.168.22.0/24;
            127.0.0.1;
        };

        view "intranet" {
            match-clients { lanClients; };
            recursion yes;
            notify no;

            // Standard zones //
            zone "." {
                type hint;
                file "etc/root.hint";
            };

            zone "domain1.tld" {
                type master;
                file "intranet/domain1.tld";
            };
        };

        view "internet" {
            match-clients { !localnets; any; };
            recursion no;
            allow-transfer { slaveDNS; };
            include "master.zones";
        };

    Requests from the LAN for domain1.tld give local records, requests from the WAN give remote records. This works fine both at home and in my new BIND 9.7 on a larger scale. The difference is that at home I have somehow managed to make my LAN get remote records for domains in master.zones, without specifying those zones as duplicates in the "intranet" view. Trying this on a larger scale with BIND 9.7, I get no results at all except for the zones specified in the view. What am I missing? I've tried the same configuration for BIND 9.7.
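
    If duplicating the zones into the LAN view really is unavoidable, one way to keep that maintainable is to declare each shared zone once in an include file that every view pulls in, rather than copying thousands of zone statements by hand; a rough sketch, with file names as assumptions:

        // shared.zones -- declared once, pulled into each view
        zone "domain2.tld" {
            type master;
            file "common/domain2.tld";    // the same zone file can back both views
        };

        view "intranet" {
            match-clients { lanClients; };
            recursion yes;
            include "shared.zones";
            zone "domain1.tld" { type master; file "intranet/domain1.tld"; };
        };

        view "internet" {
            match-clients { !localnets; any; };
            recursion no;
            include "shared.zones";
            zone "domain1.tld" { type master; file "internet/domain1.tld"; };
        };

    The zone files themselves can be shared; it is only the zone declarations that have to appear inside each view.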

  • Google Apps For Business, SSO, AD FS 2.0 and AD

    - by Dominique dutra
    We are a small company with 22 people in the office. We had a lot of problems with e-mail in the past, so I decided to change over to Google Apps for Business. It is the perfect solution for us, except for one thing: I need to be able to control access to the mailboxes. Only users inside the office, authenticated to AD, or users authenticated to our VPN can connect to Gmail. From what I've read it is possible using the SSO (Single Sign-On) solution provided by Google - but I am having some trouble finding consistent information about it.

    First of all, our infrastructure: Windows Server 2008 R2 Active Directory, one domain only. Kerio Control for QoS and VPN. That's about it on our side. On Google Apps' side, I have one account and 3 domains that my users use to log in. The main domain has most of the users, but there are a couple of people that log in using one of the subdomains. I have 3 domains because I run mail for 3 companies and wanted them all to be within the same control panel.

    Well, I found some guides on the internet, but none of them cover the AD FS installation part. I've read somewhere that I needed to download AD FS 2.0 directly from Microsoft.com, because the one that came with Windows Server was an old version. I downloaded it (adfsSetup.exe) and tried to install, but got an error saying that I needed Windows Server 2008 SP2 for that program. My Windows Server 2008 is R2.

    I really need some help here; this is very important, and I don't want to have to pay $1000 for an SSO solution when I have an AD set up. Can someone please point me in the right direction? Where I can find an AD FS 2.0 setup compatible with R2 would be a good start - or is the one that came with R2 already the 2.0 version?

    After the initial setup, there are some guides on the internet about the Google Apps part. It seems to be really easy. I also tried adding the AD FS role, but there are a bunch of options which I have no idea what they mean, and I couldn't find any guide covering that on the internet. I don't have a lot of experience with Windows Server, but we have a company which is certified and provides us with support. I can ask for their help with the later setup, but I don't think AD FS is a very common thing to deal with.

  • How is DNS used by individual processes?

    - by atroon
    When resolving FQDNs or machine names to IP addresses on my local network (mycompany.internal) I can use dig on the command line (Linux/Mac) or nslookup (Windows) to query the configured server and get a response. But trying to enter the FQDN or even just the machine name in a ping command or in a web browser results in 'Unknown Host' or DNS errors. Here's a sample, this one from the Mac:

        mac:~ atroon$ dig server.mycompany.internal

        ; <<>> DiG 9.6.0-APPLE-P2 <<>> server.mycompany.internal
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 5219
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;server.mycompany.internal.    IN    A

        ;; ANSWER SECTION:
        server.mycompany.internal.  1200  IN  A  172.16.254.36

        ;; Query time: 0 msec
        ;; SERVER: 172.16.254.8#53(172.16.254.8)
        ;; WHEN: Wed Dec 16 11:39:15 2009
        ;; MSG SIZE  rcvd: 55

        mac:~ atroon$ ping server.mycompany.internal
        ping: cannot resolve server.mycompany.internal: Unknown host

    I cannot for the life of me figure this one out. The DNS server is a SBS 2003 box which handles AD, some file/print, etc. for a small company network. This issue happens to me about three times a week, and when I'm connected to the local network directly, the same switch as the server even. I can make any connection I want with IP addresses, I just can't make DNS work. Additionally, at the same time I'm experiencing this, other users are fine, which makes me think it's a problem on my Mac. But what sort of problem? How can dig send a query and get a reply, and ping say 'unknown host'? I'm posting here vs. serverfault because I think this is a local problem, not a server problem... but if anyone can point me at the server, I guess we'll head down the street a domain or two.
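
    Since dig queries the configured DNS server directly while ping and the browser go through the system resolver, comparing the two paths on the Mac while the problem is happening may narrow it down; a small sketch using the host name from the example above:

        # What the system resolver (not dig) actually returns for the name
        dscacheutil -q host -a name server.mycompany.internal

        # Which resolvers and search domains the OS is currently using,
        # including any scoped/VPN resolvers that dig never sees
        scutil --dns

        # Flush the local resolver cache (handled by mDNSResponder on 10.6)
        dscacheutil -flushcache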

  • Event ID 17890 (A significant part... paged out.) with SQL Server 2008

    - by Godeke
    I have a machine that has SQL Server 2008 Standard installed. Periodically (about once an hour) I am getting Event ID 17890 several times in a row. An example:

        6:28:54  "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 0 seconds. Working set (KB): 10652, committed (KB): 628428, memory utilization: 1%%."
        6:34:27  "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 332 seconds. Working set (KB): 169780, committed (KB): 546124, memory utilization: 31%%."
        6:38:55  "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 600 seconds. Working set (KB): 245068, committed (KB): 546124, memory utilization: 44%%."

    This pattern repeated at 7:26 - 7:37, 8:26 - 8:36, 9:24 - 9:35 and so on, with the same increasing working set and memory utilization pattern. I don't have any (known) background tasks running at this time. Backups run at 2:00. It subsided from 11:00 at night until it resumed at 4:00 in the morning, and has been continuing the intermittent 10-minute glitch periods.

    As this server has plenty of RAM (the commit charge has peaked at 2,871,564 of 4,194,012 physical), I disabled the paging files after reading several items I dug up searching Google, none of which changed the situation. The pattern documented above is from after removing the paging files, so I'm not even sure where the SQL process's memory could be paging to. I also changed the SQL process memory to have a minimum of 500MB and a maximum of 2GB of RAM (as this is a light-duty database server serving only a small workgroup).

    Has anyone encountered this? Prior to disabling the page files this error would cause 5 minutes of disk thrashing that disabled access to the databases, files, IIS webs and so on. Since disabling the page files it just logs strange things, but I'm not seeing a performance drop at least. Any suggestions would be welcome.
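
    For reference, a hedged sketch of how the min/max memory change described above is typically applied and verified in T-SQL (the values mirror the 500MB/2GB settings mentioned):

        -- Show advanced options so the memory settings are visible
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;

        -- Pin SQL Server between 500 MB and 2 GB, matching the settings mentioned above
        EXEC sp_configure 'min server memory (MB)', 500;
        EXEC sp_configure 'max server memory (MB)', 2048;
        RECONFIGURE;

        -- Verify what is currently configured and running
        EXEC sp_configure 'min server memory (MB)';
        EXEC sp_configure 'max server memory (MB)';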

  • How much did it cost our competitor to DDoS us at 50 Gbps for two weeks?

    - by MiniQuark
    I know that this question may sound like an invalid serverfault question, but I believe that it's quite valid: the amount of time and effort that a sysadmin should spend on DDoS protection is a direct function of typical DDoS prices. Let me rephrase this: protecting a web site against small attacks is one thing, but resisting 50 Gbps of UDP flood is another and requires time & money. Deciding whether or not to spend that time & money depends on whether such an attack is likely or not, and this in turn depends on how cheap and simple such an attack is for the attacker.

    So here's the full story: our company has been the victim of a massive DDoS attack (over 50 Gbps of UDP traffic, full-time for 2 weeks). We are pretty sure that it's one of our competitors, and we actually know which one, because we were the only two remaining competitors on a very big request for proposal, and the DDoS attack magically stopped the day we won (double hurray, by the way)! These people have proved in the past that they are very dishonest, but we know that they are not technical at all, so we believe that they simply paid for some botnet DDoS service.

    I would like to know how much these services typically cost, for such a large-scale attack. Please do not give any link to such services; I would really hate to give these people any publicity. I understand that a hacker could very well do this for free, but what's a typical price for such an attack if our competitors paid for it through some kind of botnet service? It is really starting to scare me (if we're talking thousands of dollars here, then I am really going to freak out: who knows, they might just hire a hit-man one day?). Of course we filed a complaint, but the police say that they cannot do much about it (DDoS attacks are virtually untraceable, so they say), and our suspicions are not enough to justify them raiding our competitor's offices to search for proof.

    For your information, we have now changed our infrastructure to be able to sustain such attacks: we now use a major CDN service so that our servers are not directly affected by DDoS attacks. Requests for dynamic pages do get proxied to our servers, but for low-level attacks (UDP floods or SYN floods, for example) we only receive legitimate traffic, so we're fine. If they decide to launch higher-level attacks (HTTP floods or slowloris attacks, for example), most of the load should be handled by the CDN... at least I hope so!

    Thank you very much for your help.

  • OS X server large scale storage and backup

    - by user135217
    I really hope this question doesn't come across as trolling or asking for buying advice. It's not intended.

    I've just started working for a small ad agency (40 employees). I actually quit being a system administrator a few years ago (too stressful!), but the company we're currently outsourcing our IT stuff to is doing such a bad job that I've felt compelled to get involved and do what I can to improve things.

    At the moment, all the company's data is stored on an 8TB external FireWire drive attached to a Mac Mini running OS X Server 10.6, which provides file sharing (using AFP) for the whole company. There is a single backup drive, which is actually a caddy containing two 3TB hard drives arranged in RAID 0 (arrggghhhh!), which someone brings in as and when, and copies over all the data using Carbon Copy Cloner. That's the entirety of the infrastructure, and the whole backup and restore strategy. I've been having sleepless nights.

    I've just started augmenting the backup process with FreeBSD, ZFS, sparse bundles and snapshot sends to get everything offsite. I think this is a workable behind-the-scenes solution, but for people's day-to-day use I'm struggling. Given the quantity and importance of the data, I think we should really be looking towards enterprise-level storage solutions, high availability and so on, but the whole company is all Mac all the time, and I cannot find equipment that will do what we need. No more Xserve; no rack storage; no large-scale storage at all apart from that Pegasus R6 that doesn't seem all that great; the Mac Pro has Fibre Channel, but it's not a real server and it's ludicrously expensive; Xsan looks like it's on the way out; things like heartbeatd and failoverd have apparently been removed from Lion Server; the new Mac Mini only has Thunderbolt, which severely limits our choices; the list goes on and on.

    I'm really, really not trying to troll here. I love Macs, but I just genuinely don't know where I'm supposed to look for server stuff. I have considered Linux or FreeBSD and netatalk for serving files, with all the server-y goodness those OSes bring, but some of the things I've read make me wonder if it's really the way to go. Also, in my own (admittedly quite cursory) experiments with it, I've struggled to get decent transfer speeds. I guess there's also the possibility of switching everyone off AFP and making them use SMB or NFS, but I understand that this can cause big problems with resource forks and file locks.

    I figure there must be plenty of all-Mac companies out there. If you're the sysadmin at one, what do you use? Any suggestions very gratefully received.

  • Impact of the L3 cache on performance - worth a dual-processor system?

    - by Dan Nissenbaum
    I will be purchasing a new high-end system, and I would like to have a better sense of whether a dual-processor Xeon system (I am looking at the new, high-end Xeon E5-2687W) might, realistically, provide a noticeable performance improvement due to the doubling of the L3 cache (20 MB per CPU). (This is in addition to the occasional added advantage due to the doubling of cores and RAM.) My usage scenario is, roughly, that I have many background applications running at any time - 3 or 4 data compression/backup applications, a low-impact web server, one or two virtual machines at any given time (usually fairly idle), and perhaps 20 utility programs that utilize a noticeable (but small) portion of the CPU cores. In total, when I am not actively using the computer, about 25% of the total CPU power is utilized in my current i7-970 6-core (12 thread) system. When I am doing routine work, the CPU utilization often exceeds 50%, and occasionally hits 75%-80%. The Xeon E5-2687W is not only a second-generation i7 (so should improve performance for that reason), but also has 8 cores (16 threads), rather than 6 cores. For this reason, I expect to run into the 75% CPU range even less frequently. Nonetheless, the ability to double the cores and the RAM is a consideration. However, in the end, I believe this decision comes down to whether the doubling of the L3 cache will provide a noticeable improvement. There are many benchmarks, and a lot of discussion, regarding CPU power. However, I find very little discussion of L3 cache utilization, and how increases in the L3 cache (such as doubling it with dual processors) affect performance. For example: If there are only two processes running, but each benefits from a large L3 cache (such as might be the case for background processes that frequently scan the file system), perhaps the overall system performance might noticeably improve with dual CPU's - even if only a single core is active on each CPU - due to each process having double the effective L3 cache. I am hoping that someone has a sense of the benefits of increasing (or doubling) the L3 cache size. Note: the CPU I am considering (the Xeon E5-2687W) has 20 MB L3 cache, so a system with dual CPU's would have 40 MB L3 cache.

  • Has anyone achieved true differential sync with rsync in ESXi?

    - by Julius
    Berate me later on the fact that I'm using the service console to do anything in ESXi...

    I've got a working rsync binary (v3.0.4) that I can use in ESXi 4.1U1. I tend to use rsync over cp when copying VMs or backups from one local datastore to another local datastore. I've used rsync to copy data from one ESXi box to another, but that was just for small files.

    I'm now trying to do true differential syncs of backups taken via ghettoVCB between my primary ESXi machine and a secondary one. But even when I do this locally (one datastore to another datastore on the same ESXi machine) rsync appears to copy the files in their entirety. I've got two VMDKs totalling 80GB in size, and rsync still takes anywhere between 1 and 2 hours, but the VMDKs aren't growing that much daily.

    Below is the rsync command I'm executing. I am copying locally because ultimately these files will get copied onto a datastore created from a LUN on a remote system. It's not an rsync that'll be serviced by an rsync daemon on a remote system.

        rsync -avPSI VMBACKUP_2011-06-10_02-27-56/* VMBACKUP_2011-06-01_06-37-11/ --stats --itemize-changes --existing --modify-window=2 --no-whole-file
        sending incremental file list
        >f..t...... VM-flat.vmdk
          42949672960 100%   15.06MB/s    0:45:20 (xfer#1, to-check=5/6)
        >f..t...... VM.vmdk
                  556 100%    4.24kB/s    0:00:00 (xfer#2, to-check=4/6)
        >f..t...... VM.vmx
                 3327 100%   25.19kB/s    0:00:00 (xfer#3, to-check=3/6)
        >f..t...... VM_1-flat.vmdk
          42949672960 100%   12.19MB/s    0:56:01 (xfer#4, to-check=2/6)
        >f..t...... VM_1.vmdk
                  558 100%    2.51kB/s    0:00:00 (xfer#5, to-check=1/6)
        >f..t...... STATUS.ok
                   30 100%    0.02kB/s    0:00:01 (xfer#6, to-check=0/6)

        Number of files: 6
        Number of files transferred: 6
        Total file size: 85899350391 bytes
        Total transferred file size: 85899350391 bytes
        Literal data: 2429682778 bytes
        Matched data: 83469667613 bytes
        File list size: 129
        File list generation time: 0.001 seconds
        File list transfer time: 0.000 seconds
        Total bytes sent: 2432530094
        Total bytes received: 5243054

        sent 2432530094 bytes  received 5243054 bytes  295648.92 bytes/sec
        total size is 85899350391  speedup is 35.24

    Is this because ESXi is itself making so many changes to the VMDKs that, as far as rsync is concerned, the entire file has to be retransmitted? Has anyone actually achieved a true differential sync with ESXi?
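
    A variation that may be worth trying with the same rsync 3.0.4 binary: by default rsync builds a whole new temporary copy of each destination file, so even a small delta on a 40GB flat VMDK costs close to a full rewrite; --inplace avoids that, though it generally cannot be combined with --sparse (-S), which is dropped here:

        rsync -avPI --inplace --no-whole-file --existing --modify-window=2 \
              --stats --itemize-changes \
              VMBACKUP_2011-06-10_02-27-56/* VMBACKUP_2011-06-01_06-37-11/

    Note that the stats above already show mostly matched data (about 2.4GB literal out of 80GB), so the time is likely going into rebuilding the destination files rather than into the delta transfer itself.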

  • Radeon HD4850 serious issues when using DirectX 10

    - by ricsmania
    Hello, I have a problem with my video card. Whenever I run a DirectX 10 game, it works for a few seconds (10 or so) and then starts displaying nothing but big polygons. I have tested this with Crysis and Resident Evil 5, both have the same problems. The same games running under DirectX 9 work fine, except for some small black squares once in a while. I have the following specs: Asus P7P55D LE Intel Core i5 750 Sapphire Radeon HD4850 1GB 2x2GB Patriot Viper II Sector 5, DDR3 1600 MHz OCZ Stealth X Stream 500SXS 500W At first I thought it could be the video card overheating (it has stock cooling), but the game crashes even when it's running at 50 degrees C, and it's never been higher than 70. I also thought it could be the PSU, but as far as I know 500W is enough for this computer, especially because I haven't overclocked anything. My OS is Windows 7 X64 and I am using Catalyst 10.10, but I have also tried many older versions with no success. I don't think there is a problem with the card itself, or else it wouldn't run DirectX 9 games I believe. I have spent many hours searching for a solution but I couldn't, so any help is appreciated. Thank you. EDIT: I did some further investigation about the problem, and it seems taspeotis was right, it might be related to memory. I slightly underclocked the memory from 993 to 965 MHz and the problem went away completely. Both the black squares using DirectX 9 and the weird polygons using DirectX 10. I was using RE DirectX 10 Benchmark, as it consistently crashed around the same point, and now I can play the full benchmark with no artifacts at all. Unfortunately, the underclock has an obvious hit in performance. Although it's not critical, it's definitely noticeable. So, if the video memory test software showed no erros, but the card needs an underclock to work, what might be the problem? Temperature? Voltage? By the way, I couldn't find what the default voltage for this card is. And what is a good software to try and increase it? I tried Ati Tray Tools but it has a bug that increases the clock speed dramatically whenever I change something in the Overclock tab, so I'm afraid it might fry my card. Worst case scenario, if I don't find I solution I will try to slightly increase the GPU clock to compensate for the memory clock. Thank you again.

  • Searching For a Desktop Security Software to harden Windows machines, anybody?

    - by MosheH
    I'm a network administrator of a small/medium network. I'm looking for software (free or not) which can harden Windows computers (XP and Win7), for the purpose of hardening standalone desktop computers (not in a domain network).

    Note: The computers are completely isolated (standalone), so I can't use Active Directory group policy. Moreover, there are too many restrictions that I need to apply, so it is not practical to set them up manually (one by one).

    Basically what I'm looking for is software that can restrict and disable access for specific user accounts on the system. For example:

    - User John can only open one application and nothing else -- he doesn't see any icons on the desktop or start menu, except for one or two applications which I want to allow. He can't right-click on the desktop, the taskbar icons are not shown, there are no folder options, etc...
    - User Mary can open a specific application and copy data to one folder on the D drive.
    - User Dan has access to all drives but cannot install software, and so on...

    So far, I've found only the following solutions, but they all seem to miss one or more features:

    Desktop restriction software:

    1. Faronics WINSelect - The application seems to answer most of our needs except one feature which is very important to us but seems to be missing from WINSelect, which is "restriction per profile". WINSelect only allows setting up restrictions which are applied system-wide. If I have multiple user accounts on the system and want to apply different restrictions for each user, I can't.
    2. Deskman - Same thing, no restriction per profile.
    3. Desktop Security Rx - not relevant, no Win7 support.

    The only software that I've found which offers restriction per profile is "1st Security Agent", but its GUI is very complicated and not very intuitive.

    It's worth mentioning that I'm not looking for "internet kiosk software", although they share some features with the one I need. All I need is software (like http://www.faronics.com/standard/winselect/) that offers a way to restrict the Windows user interface.

    So if anybody knows of hardening software which allows setting up user restrictions on Windows systems, it will be a big, big, big help for me! Thanks to you all

  • Deployment/provisioning tool for commercial applications (not developed in-house)

    - by mfinni
    I help manage a few hosted commercial applications, and we have a lot of manual processes involved when doing new customer-instance deployments into the shared (multitenant) environment. Allow me to describe the most relevant features, and then we can talk about the tools. We have an application on AIX, that requires dozens of changes to config files (some plain text, some XML) as well as a good number of commands to be run on multiple servers - some to start the new instance, some to restart our shared authentication and reporting engines, etc. The config changes follow templates, of course. The servers in question will also depend on the initial conditions specified by the implementer/deployer - we may choose to deploy a given customer to our servers in Europe, or one set of servers may be active-active whereas a different set of servers is active-passive - in short, there's a lot of complications. We have another application that run on IIS 6 and SQL. The DBAs don't want any automation of the SQL components and that's fine with me, but automating the IIS bit would be great. For a new customer instance, we make a filesystem copy of a template Virtual Directory target named after the new customer, make a new AppPool to match, edit a VirDir template .xml file to replace the filepaths and AppPool names with the new ones, and then make a new VirDir from the modified template XML to point to the new filesystem folder and app pool. For the first case, something like ControlTier or Chef might be good. For the second, the new(ish) Web Deploy from MS would probably do a good job. Has anyone used these tools or others to do something similar for applications? More of a nice-to-have, not a fixed requirement - Has anyone used anything that works on both platforms? I'm looking for something free, because the official word is that within a year, we will have whatever HP has renamed the OpsWare suite, which should be able to do stuff like this. Edit - based on someone's suggestion, looking at CFengine for the AIX application, it doesn't seem to address my pain. The problem isn't keeping a given config synced across dozens of servers, we have rsync for that. The problem is that onboarding a new customer instance touches dozens of files, putting pieces of the same or similar information into them - some are new stanzas in existing files, some are new files, and some are new directories. This is a several-hours-long process that is also error-prone because it's mostly done by hand. I guess I'm looking for config-file generation and management. I have built a small Perl script to do something similar for a much smaller case - it binds a CSV file into variables, and then does a copy-and-search-and-replace from a set of template config files. I could probably do the same here.
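
    As an illustration of the CSV-driven templating idea mentioned above, a minimal sketch in plain shell (the file names, column layout and @TOKEN@ placeholders are all assumptions, not part of the existing Perl script):

        #!/bin/sh
        # customers.csv is assumed to look like: name,env,apppool
        while IFS=, read -r NAME ENV APPPOOL; do
            mkdir -p "generated/$NAME"
            for TEMPLATE in templates/*.tmpl; do
                OUT="generated/$NAME/$(basename "$TEMPLATE" .tmpl)"
                # Substitute placeholder tokens with the values from the CSV row
                sed -e "s/@CUSTOMER@/$NAME/g" \
                    -e "s/@ENV@/$ENV/g" \
                    -e "s/@APPPOOL@/$APPPOOL/g" \
                    "$TEMPLATE" > "$OUT"
            done
        done < customers.csv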

  • NGiNX performance degrades over time.

    - by Rylea Stark
    So here's the situation: I run a small cluster - a dedicated box for MySQL, and a dedicated PHP-FPM/NGINX box. Nginx talks to php-fpm via socket. As far as I can tell the problem does not lie in php-fpm; it lies somewhere in my configuration.

    What happens is that the site loads instantly for a few moments after starting, then slowly starts to degrade to load times of greater than 2 seconds, eventually taking 12 seconds to complete a load. PHP is configured to close a child after 175 requests, and to spawn 20 at start and have a max of 60.

    Not really sure where the bottleneck is; most of my code is optimized and works flawlessly, but these issues with nginx will most likely force me to switch back over to Apache, and I really don't want to do that. NGINX.conf configuration below.

        user www-data;
        worker_processes 4;
        worker_cpu_affinity 0001 0010 0100 1000;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 512;
            multi_accept on;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            resolver_timeout 5s;
            satisfy all;

            ## Size Limits
            limit_zone brainbug $binary_remote_addr 5m;
            client_body_buffer_size 8k;
            client_header_buffer_size 75M;
            client_max_body_size 1k;
            large_client_header_buffers 2 1k;

            ## Timeouts
            client_body_timeout 60;
            client_header_timeout 60;
            keepalive_timeout 60;
            send_timeout 60;

            ## General Options
            ignore_invalid_headers on;
            recursive_error_pages on;
            sendfile on;
            server_name_in_redirect off;
            server_tokens off;

            ## TCP options
            tcp_nodelay on;
            #tcp_nopush on;
            output_buffers 128 512k;

            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 7;
            gzip_proxied any;
            gzip_min_length 0;
            gzip_buffers 32 32k;
            gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/jpeg image/png image/gif;
            ## Disable GZIP for MSIE 1-6
            gzip_disable "MSIE [1-6].(?!.*SV1)";
            ## Set a vary header so downstream proxies don't send cached gzipped content to IE6
            gzip_vary on;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }
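
    One low-impact way to see whether requests are piling up inside nginx as the slowdown develops is the stub_status page; a sketch, assuming the module is compiled in and a localhost-only listener is acceptable:

        server {
            listen 127.0.0.1:8080;
            location /nginx_status {
                stub_status on;
                access_log off;
                allow 127.0.0.1;
                deny all;
            }
        }

    Polling it while the site degrades (for example with watch and curl) shows whether active connections keep climbing, which would point at the nginx/php-fpm handoff rather than the PHP code itself.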

  • Getting a boot error when starting the computer

    - by Rob Avery IV
    I was in the middle of watching a movie on Netflix when suddenly everything started crashing. First, explorer.exe closed down, then Google Chrome. I had multiple things running in the background (Steam, Raptr, etc.). Individually, each of those apps closed down also. When they did, a small dialog box popped up for each of them, one at a time, saying that it was missing a file, it couldn't run anymore, or something similar to that. It also had some jumbled-up "code" with numbers and letters that I couldn't read.

    Ever since then, every time I turn my computer on, it will run for a few seconds and give this error: "Reboot and select proper boot device or insert boot media in selected boot device and press a key_". No matter how many times I try to reboot it, it always gives me the same error.

    A day after this happened I was able to start the computer, but before it booted, it told me that I didn't shut down the computer properly and asked how I wanted to run the OS (Run Windows in Safe Mode, Run Windows Normally, etc.). Once I logged in, everything went SUPER slow and everything crashed almost instantly. The only thing I opened was Microsoft Security Essentials, and I only got in about two clicks before it was "Not Responding". Then the whole computer froze and I had to restart it. Now it's back to saying what it originally said: "Reboot and select proper boot device or insert boot media in selected boot device and press a key_".

    I built this PC back in February 2012. Here are the specs:

    - OS: Windows 7 Ultimate
    - CPU: AMD 8-core
    - GPU: Nvidia GeForce GTX 560 Ti
    - RAM: 16GB
    - Hard Drive: Hitachi Deskstar 750GB

    I'm usually very good about taking care of my PC. I don't download anything that's not from a trusted site or source. I don't open up any spam email or go to any harmful websites like porn or movie-streaming sites. I am very careful with the things I do with my PC and don't do many DIFFERENT things with it. I use it pretty often, especially for video games and doing homework in Eclipse. Also, it's good to note that I don't have Norton or any similar antivirus software installed; I have Microsoft Security Essentials installed but never did a scan. Thanks!

  • PDF has garbled text when copy pasting

    - by ngm
    I'm trying to copy and paste text from a PDF file. However, whenever I paste the original text it is a huge mess of garbled characters. The text looks like the following (this is just one small extract): 4$/)5=$13! ,4&1*%-! )5'$! 1$2$)&,$40! 65))! .*5)1! -#$! )/'8*/8$03! (4/+$6&4;0!/'1!-&&)0!*0$1!.9!/,,)5%/-5&'!1$2$)&,$403!5'!+*%#!-#$! 0/+$!6/9! -#/-! &,$4/-5'8! 090-$+! 1$2$)&,$40! .*5)1!1$25%$! 1452$40! /'1! &-#$4! 090-$+! 0&(-6/4$! %&+,&'$'-0! *0$1! .9! /,,)5%/-5&'! 1$2$)&,$40!-&1/97!"#$!+5M!&(!,4&1*%-!)5'$!/'1!,4&1*%-!1$2$)&,$40! 65))! .$!+*%#!+&4$! $2$')9! ./)/'%$13! #&6$2$43! -#/'! -#$!+5M! &(! &,$4/-5'8!090-$+!/'1!/,,)5%/-5&'!1$2$)&,$40!-&1/97! )*+*+, C<88,?>8513AG<5A14, I've tried it in both Adobe and Foxit PDF readers. I did a 'Save as text' in Adobe Reader and the resultant text file is the same garbled text. Any ideas how I can get this text out non-garbled? (Other than manual typing... there's a lot of text to extract.)

  • How should I use LVM with Ganeti?

    - by javano
    I am building a small Ganeti cluster on some low-end hardware (I only have the resources given, sadly). I am confused as to the use of LVM with DRBD.

    I have two instances and three nodes. What I want is instance1 replicated between nodes 1 & 2, and instance2 replicated between nodes 3 & 2 (so node2 is doing nothing except waiting for either node1 or node3 to fail, as it is the secondary node for both instances). This is because node2 is a lower hardware spec than 1 and 3, so I just want it as a hot spare. How can I achieve this? I don't want instance1 being replicated to node3, for example, nor instance2 replicated to node1.

    Nodes 1 & 2 have /dev/sda5, which is 150GB (for example). Nodes 2 & 3 have /dev/sda6, which is 75GB (for example). Using just nodes 1 & 2, after looking at the Ganeti docs I would:

        vgcreate my-vg

    Next I would create the cluster via gnt-cluster init with VG = "my-vg". It is here that I believe I am missing some knowledge. I believe that what I need to do is create the same logical volume on nodes 1 & 2 in volume group "my-vg", consisting solely of /dev/sda5, and call it "lv1". Then create a logical volume on nodes 2 & 3 consisting solely of /dev/sda6 in "my-vg", called "lv2". When creating instance1 I would then use "-vg=lv1 -n node1:node2", and when creating instance2 I would use "-vg=lv2 -n node3:node2".

    I briefly had a go at this today and I'm dubious whether this will be possible. When trying to create instance2, "lv2" won't exist on node1 (the cluster master), so I don't believe it will allow the instance creation. Could I create a 1kb partition (/dev/sda6) on node1 and put it into an LV called "lv2", or is that too flaky? Is this setup possible? Thank you.
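
    For what it's worth, a sketch of how the volume-group side of that plan is usually expressed in LVM and Ganeti terms - the names are placeholders, the exact gnt-* flags vary between Ganeti versions, and whether a per-instance volume group can be selected at all is exactly the open question here:

        # On nodes 1 & 2: make /dev/sda5 available to LVM
        pvcreate /dev/sda5
        vgcreate my-vg /dev/sda5

        # On nodes 2 & 3: the smaller partition in its own volume group
        pvcreate /dev/sda6
        vgcreate my-vg-small /dev/sda6

        # Cluster init points Ganeti at a volume group; Ganeti then carves its own
        # LVs out of it, and placement is chosen per instance with -n primary:secondary
        gnt-cluster init --vg-name my-vg cluster.example.org
        gnt-instance add -t drbd -n node1:node2 -o debootstrap -s 20G instance1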

  • inews failed: "No colon-space in "X-MS-TNEF-Correlator:"

    - by wolfgangsz
    We run a news server for our engineering teams, which is also linked to the code repositories (so that all engineers can subscribe to any changes in the repos or just the projects they are interested in). On quite a regular basis (several times a day) I (as the sysadmin for that server) receive bounces from innd with the above as the first line. The news server simply rejects these messages and the articles don't get posted. Here is an example:

        inews failed:
        inews: cannot send article to server: 441 437 No colon-space in "X-MS-TNEF-Correlator:" header
        inews: article not posted

        -------- Article Contents

        Path: aminocom.com!ctaylor
        From: [email protected] (Cameron Taylor)
        Newsgroups: amino.qa.reports
        Content-Language: en-US
        Content-Type: multipart/alternative; boundary="_000_A2AB95742ADD524795C13EDE8F8CCD201A798C0Eukswaex01_"
        MIME-Version: 1.0
        Subject: [QA REPORT] MDK 400 release 3.4.33 **PRE-RELEASE**
        Message-ID:
        Date: Thu, 9 Sep 2010 16:15:16 +0000
        X-Received: from uk-swa-ex02.aminocom.com (uk-swa-ex02.aminocom.com [10.171.3.10]) by theoline.aminocom.com (8.14.3/8.13.8) with ESMTP id o89GF8tx019494 for ; Thu, 9 Sep 2010 17:15:08 +0100
        X-Received: from uk-swa-ex01.aminocom.com ([10.171.3.9]) by uk-swa-ex02 ([10.171.3.10]) with mapi; Thu, 9 Sep 2010 17:15:18 +0100
        X-To: QA Reports
        X-Thread-Topic: [QA REPORT] MDK 400 release 3.4.33 **PRE-RELEASE**
        X-Thread-Index: ActQOjBdms0CSJsORNSxRIMSZ4H3Ow==
        X-Accept-Language: en-US, en-GB
        X-MS-Has-Attach:
        X-MS-TNEF-Correlator:
        X-Auto-Response-Suppress: DR, OOF, AutoReply

        --_000_A2AB95742ADD524795C13EDE8F8CCD201A798C0Eukswaex01_
        Content-Type: text/plain; charset="us-ascii"
        Content-Transfer-Encoding: quoted-printable

        SQA Test Report
        [QA REPORT] MDK 400 release 3.4.33 **PRE-RELEASE**
        Status
        .... (rest of the message is not important)

    And yes, quite clearly this header doesn't have anything after the colon. The man page for innd doesn't specify why it rejects these messages, it just says it rejects them. So far I have found out these headers are linked to messages in RTF format (coming from Outlook clients), where normally the formatting information would be stored in a winmail.dat attachment. The clients all use MS Exchange 2010 servers to send their mail (identified above as uk-swa-ex02.aminocom.com), which forwards the messages to the news server.

    Does anybody know what advice I need to give these users to avoid their articles getting bounced? Or can I change the behaviour of innd? Or do I need to filter these headers out before innd processes the articles?

  • How to get the best LINPACK result and conquer the Top500?

    - by knweiss
    Given a large Linux HPC cluster with hundreds/thousands of nodes, what are your best practices to get the best possible LINPACK benchmark (HPL) result to submit for the Top500 supercomputer list?

    To give you an idea what kind of answers I would appreciate, here are some sub-questions (with links):

    - How do you tune the parameters (N, NB, P, Q, memory alignment, etc.) for the HPL.dat file (without spending too much time trying each possible permutation - especially with large problem sizes N)?
    - Are there any Top500 submission rules to be aware of? What is allowed, what isn't?
    - Which MPI product, which version? Does it make a difference?
    - Any special host order in your MPI machine file?
    - Do you use CPU pinning?
    - How do you configure your interconnect? Which interconnect?
    - Which BLAS package do you use for which CPU model? (Intel MKL, AMD ACML, GotoBLAS2, etc.)
    - How do you prepare for the big run (on all nodes)? Start with small runs on a subset of nodes and then scale up? Is it really necessary to run LINPACK with a big run on all of the nodes (or is extrapolation allowed)?
    - How do you optimize for the latest Intel/AMD CPUs? Hyperthreading? NUMA?
    - Is it worth it to recompile the software stack or do you use precompiled binaries? Which settings? Which compiler optimizations, which compiler? (What about profile-based compilation?)
    - How do you get the best result given only a limited amount of time to do the benchmark run? (You can block a huge cluster forever.)
    - How do you prepare the individual nodes (stopping system daemons, freeing memory, etc.)?
    - How do you deal with hardware faults (ruining a huge run)?
    - Are there any must-read documents or websites about this topic?

    E.g. I would love to hear some background stories of some of the current Top500 systems and how they did their LINPACK benchmark.

    I deliberately don't want to mention concrete hardware details or discuss hardware recommendations because I don't want to limit the answers. However, feel free to mention hints e.g. for specific CPU models.
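
    As a small illustration of the first sub-question, a common starting point is to size N so the matrix fills roughly 80% of total cluster memory (8 bytes per double-precision element) and then round it down to a multiple of NB; a sketch with made-up node counts and memory sizes:

        # 64 nodes with 24 GiB each; NB=192 is just a typical value to sweep around
        nodes=64; mem_per_node=$((24 * 1024 * 1024 * 1024)); nb=192
        total_bytes=$((nodes * mem_per_node))
        # N ~= sqrt(0.80 * total_memory / 8), rounded down to a multiple of NB
        n=$(echo "sqrt(0.80 * $total_bytes / 8)" | bc)
        n=$(( (n / nb) * nb ))
        echo "Suggested starting N: $n"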

  • How to rate-limit concurrent sessions with nginx or haproxy?

    - by bantic
    I'm currently using nginx to reverse-proxy requests from web clients that are doing long-polling to an upstream. Since we're doing long polling (as opposed to websockets), when a client connects it will make multiple http connections to the server in serial, re-establishing a connection every time the server sends it some data (or timing out and re-establishing if the server has nothing to say for 10 seconds). What I'd like to do is limit the number of concurrent web clients. Since the clients are constantly making new HTTP requests instead of keeping a single request open, it's a little tricky to count the total number of web clients (because it's not the same as total number of concurrently connected http clients). The method I've come up with is to track http requests by the originating IP address, and store the IP address somewhere with a TTL of 20 seconds. If a request comes in whose IP isn't recognized, then we check the total number of unexpired stored IP addresses; if that's less than the maximum then we allow this request through. And if a request comes in with an IP address that we can find in the look-up table that hasn't yet expired, then it is allowed through as well. All requests that are allowed through have their IPs added to the table (if not there before) and the TTL refreshed to 20 seconds again. I had actually whipped something together that worked correctly this way using nginx along with the Redis 2.0 Nginx Module (and the nginx lua module to simplify the conditional branching), using redis to store my IP addresses with a TTL (the SETEX command), and checking the table size with the DBSIZE command. This worked but the performance was horrible. nginx and redis ended up using lots of cpu and the machine could only handle a very small number of concurrent requests. The new stick-table and tracking counters that were added to Haproxy in version 1.5 (via a commission from serverfault) seem like they might be ideal to implement exactly this sort of rate limiting, because the stick-table can track IP addresses and automatically expire entries. However, I don't see an easy way to get a total count of the unexpired entries in the stick table, which would be necessary to know the number of connected web clients. I'm curious if anyone has any suggestions, for nginx or haproxy or even for something else not mentioned here that I haven't thought of yet.
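
    A hedged sketch of what the HAProxy 1.5 side might look like - the 20-second expiry matches the scheme described above, but the directives should be double-checked against the 1.5 documentation, and reading the table's entry count from inside the configuration is still the missing piece (the stats socket's "show table" output is one place the count is visible):

        backend clients
            # one entry per source IP, expiring 20s after the last request
            stick-table type ip size 100k expire 20s store conn_rate(20s)

        frontend web
            bind *:80
            # record/refresh the client IP in the table on every connection
            tcp-request connection track-sc1 src table clients
            default_backend app

        # From the shell, the current entries can be inspected via the stats socket
        # (assuming one is configured), e.g.:
        #   echo "show table clients" | socat stdio /var/run/haproxy.stat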

  • Secure, efficient, version-preserving, filename-hiding backup implemented in this way?

    - by barrycarter
    I tried writing a "perfect" backup program (below), but ran into problems (also below). Is there an efficient/working version of this?: Assumptions: you're backing up from 'local', which you own and has limited disk space to 'remote', which has infinite disk space and belongs to someone else, so you need encryption. Network bandwidth is finite. 'local' keeps a db of backed-up files w/ this data for each file: filename, including full path file's last modified time (mtime) sha1sum of file's unencrypted contents sha1sum of file's encrypted contents Given a list of files to backup (some perhaps already backed up), the program runs 'find' and gets the full path/mtime for each file (this is fairly efficient; conversely, computing the sha1sum of each file would NOT be efficient) The program discards files whose filename and mtime are in 'local' db. The program now computes the sha1sum of the (unencrypted contents of each remaining file. If the sha1sum matches one in 'local' db, we create a special entry in 'local' db that points this file/mtime to the file/mtime of the existing entry. Effectively, we're saying "we have a backup of this file's contents, but under another filename, so no need to back it up again". For each remaining file, we encrypt the file, take the sha1sum of the encrypted file's contents, rsync the file to its sha1sum. Example: if the file's encrypted sha1sum was da39a3ee5e6b4b0d3255bfef95601890afd80709, we'd rsync it to /some/path/da/39/a3/da39a3ee5e6b4b0d3255bfef95601890afd80709 on 'remote'. Once the step above succeeds, we add the file to the 'local' db. Note that we efficiently avoid computing sha1sums and encrypting unless absolutely necessary. Note: I don't specify encryption method: this would be user's choice. The problems: We must encrypt and backup 'local' db regularly. However, 'local' db grows quickly and rsync'ing encrypted files is inefficient, since a small change in 'local' db means a big change in the encrypted version of 'local' db. We create a file on 'remote' for each file on 'local', which is ugly and excessive. We query 'local' db frequently. Even w/ indexes, these queries are slow, since we're often making one query for each file. Would be nice to speed this up by batching queries or something. Probably other problems that I've now forgotten.

  • Can't Move Windows to 2nd Monitor without Left Mouse and Ctrl Key

    - by John C
    I have two very frustrating problems that maybe someone can help me with.

    I have 2 monitors (different sizes and resolutions) set up with the "Extended" monitor arrangement in Win7. My problem is this: I cannot move a window from my primary monitor (larger, higher resolution, on the right in front of me) to my secondary monitor (smaller, lower resolution) by just grabbing the title bar with the left mouse button and dragging it to the left. Windows 7 "snaps" it back to the primary monitor on the right as soon as the window is physically in the 2nd monitor's area while I'm holding the left mouse button. I can prevent this problem by holding down the Ctrl key with the left mouse button, but this is extremely annoying to me.

    Also, I typically "lose" focus if I try typing input on the 2nd monitor: keystroke accuracy from my keyboard is erratic when it is translated into input on the 2nd screen. There is no problem with typing input on the primary monitor.

    I find this extremely annoying in Windows 7, and turning off the "snap" feature via the Control Panel does NOT work for me. Win7 stubbornly refuses to move my selected window to my 2nd monitor without me "forcing" it to with the Ctrl key. Please tell me this is not a Win7 feature.

    Also, on my system, Windows key + Shift + Left arrow (pressed together), or the same combo with the Right arrow key, doesn't do anything whatsoever. The Windows key with "+" does, however, maximize the current window across both monitors, and I can "restore" it with the Windows key and "-" back to its original monitor and size.

    I have tried various solutions, including changing the resolutions of one or both of my monitors, which sometimes "temporarily helps" but then reverts back to the problem. Also, if I swap the logical (not physical) layout, so that I tell Win7 the monitors are set up in the reversed arrangement (large monitor on the left and small on the right), this also sometimes helps for a while - and is very strange and awkward to work with "backwards". But all of these solutions stop working. The only solution that consistently works for moving the windows is to hold the Ctrl key down as I move a window with the left mouse button on the title bar. Even that, however, doesn't prevent the loss of typing focus for me on the 2nd monitor, while at the same time typing on the 1st monitor is fine.

    Any help on moving my windows from one monitor to my 2nd monitor without having to press the Ctrl key while holding down my left mouse button will be appreciated. Also, any help on gaining typing "focus" on my 2nd screen will be helpful too. Thanks - John

  • Why does writing a file to an NFS share send a COMMIT operation to the NFS server?

    - by Antonis Christofides
    I have a Debian squeeze (2.6.32-5-amd64) which is at the same time a NFS4 server and client (it mounts itself through NFS4). The local directory that leads directly to disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab:

        192.168.1.75:/mydir  /nfs4mounts/mydir  nfs4  soft  0  0

    I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes 4 files per second or so. I can greatly increase speed if I add async to /etc/exports. (Writing a single large file to the NFS-mounted directory goes at more than 100 MB/s.)

    I examine the server statistics and I see that whenever a file is written, it is "committed" (this also happens with NFSv3):

        root@debianvboxtest:~# mount -t nfs4 192.168.1.75:/mydir /mnt
        root@debianvboxtest:~# nfsstat|grep -A 2 'nfs v4 operations'
        Server nfs v4 operations:
        op0-unused   op1-unused   op2-future   access       close        commit
        0         0% 0         0% 0         0% 10        4% 1         0% 1         0%

        root@debianvboxtest:~# echo 'hello' >/mnt/test1056
        root@debianvboxtest:~# nfsstat|grep -A 2 'nfs v4 operations'
        Server nfs v4 operations:
        op0-unused   op1-unused   op2-future   access       close        commit
        0         0% 0         0% 0         0% 11        4% 2         0% 2         0%

    Now in the RFC, I read this:

        The COMMIT operation is similar in operation and semantics to the POSIX fsync(2)
        system call that synchronizes a file's state with the disk (file data and metadata
        is flushed to disk or stable storage). COMMIT performs the same operation for a
        client, flushing any unsynchronized data and metadata on the server to the
        server's disk or stable storage for the specified file.

    I don't understand why the client commits. I don't think that the "echo" shell built-in command runs fsync; if echo wrote to a local file and then the machine went down, the file might be lost. In contrast, the NFS client appears to be sending a COMMIT upon completion of the echo. Why?

    I am reluctant to use the async NFS server option, because it would apparently ignore COMMIT. I feel as if I had a local filesystem and I had to choose between syncing every file upon close and ignoring fsync altogether. What have I understood wrong?
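
    One way to test the assumption about echo and fsync directly is to trace the client-side write; a small sketch (the syscall list is just a guess at the interesting ones):

        # Trace what the shell actually does when writing through the NFS mount
        strace -f -e trace=open,write,fsync,fdatasync,close \
               sh -c 'echo hello > /mnt/test-trace'

    If no fsync shows up, the COMMIT is presumably being issued by the kernel NFS client itself when the file is closed, rather than by anything in userspace.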

  • Windows Server 2008 Software Raid 5 - Data integrity issues

    - by Fopedush
    I've got a server running Windows Server 2008 R2, with a (Windows native) software RAID-5 array. The array consists of 7x 1TB Western Digital RE3 and RE4 drives. I have offline backups of this array.

    The problem is this: I noticed a few days ago, after copying a large file to the disk, that there was an integrity issue with that file - it was a ~12GB file that I had downloaded via uTorrent. After moving it to the RAID array, I used uTorrent to relocate the download location and performed a re-check so I could seed it from that location. The re-check found that only 6308/6310 chunks of the copied file were intact.

    My next step was to write a quick PowerShell script that would copy files to the array while performing a SHA1 hash of the original and resultant files and comparing them. Smaller files (100-1000MB) copied over just fine. When I started copying larger data (~15GB), I found that the hash check failed about two-thirds of the time. The corrupt files had very, very small inconsistencies - less than .01%. I further eliminated the possibility of networking or client issues by placing this large file on the C:\ of the server and copying it repeatedly from there to the array, seeing similar results. Copying the data via Explorer, PowerShell, or the standard Windows command prompt yields the same results. None of the copies fail or report any problems. The RAID array itself is listed as healthy in Disk Management.

    After a few experiments, I shut down the server and ran memtest overnight. No errors were detected. A basic run of chkdsk found no problems, but I did not use the /R flag, as I was unsure how that might affect a software RAID-5 volume. I next ran Crystal Disk Info to check the SMART data on the drives - but found that CDI only detected 5 out of 7 of the disks in the array. I have no idea why. Nevertheless, CDI shows the following "caution" flags on a single one of the drives:

        05  199  199  140  000000000001  Reallocated Sectors Count
        C5  200  200  __0  000000000001  Current Pending Sector Count

    Which is a little bit alarming, but I don't really know what to do with the information. I hardly feel like one reallocated sector could be causing this.

    At this point, I'm looking for some guidance on what to do next. I need to determine the cause of this issue, but I'm hesitant to run chkdsk /R or any bootable disk health checkers because I'm afraid they might break the array. I've considered triggering a re-sync of the array, but I'm not actually sure how to do that without doing something silly like manually dropping a disk and then restoring it. Any advice that could help me ferret out the precise cause of this issue would be greatly appreciated.
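
    For reference, a minimal sketch of the copy-and-compare check described above, written against the .NET SHA1 class so it runs on the PowerShell 2.0 that ships with Server 2008 R2 (paths are placeholders):

        function Get-Sha1([string]$Path) {
            $sha1   = [System.Security.Cryptography.SHA1]::Create()
            $stream = [System.IO.File]::OpenRead($Path)
            try     { ([BitConverter]::ToString($sha1.ComputeHash($stream))) -replace '-','' }
            finally { $stream.Close() }
        }

        $src = 'C:\test\bigfile.bin'
        $dst = 'E:\raid\bigfile.bin'

        Copy-Item $src $dst
        $a = Get-Sha1 $src
        $b = Get-Sha1 $dst
        if ($a -eq $b) { "OK:       $a" } else { "MISMATCH: $a vs $b" }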
