Search Results

Search found 5490 results on 220 pages for 'quick'.

Page 153/220

  • Robocopy silently missing files

    - by John Hunt
    I'm using Robocopy to sync data from our server's hard disk to an external disk as a backup. It's a pretty simple solution but pretty much the best/easiest one we could come up with - we use two external disks and rotate them offsite. Anyway, here's the script (with the comments taken out) that I'm using to do it. It works very well, it's quick and almost 100% complete - however it's acting pretty strange with a few files (note company name has been changed in paths to protect the innocent):

        @ECHO OFF
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        SET prefix="E:\backup_log-"
        SET source_dir="M:\Company Names Data\Working Folder\_ADMIN_BACKUP_FILES\COMPA AANY Business Folder_Backup_040407\COMPANY_sales order register\BACKUP CLIENT FOLDERS & CURRENT JOBS pre 270404\CLIENT SALES ORDER REGISTER"
        SET dest_dir="E:\dest"
        SET log_fname=%prefix%%date:~-4,4%%date:~-10,2%%date:~-7,2%.log
        SET what_to_copy=/COPY:DAT /MIR
        SET options=/R:0 /W:0 /LOG+:%log_fname% /NFL /NDL
        ROBOCOPY %source_dir% %dest_dir% %what_to_copy% %options%
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        cscript msg.vbs "Backup completed at %DATESTAMP% - Logs can be found on the E: drive."
        :END

    Normally the source would just be M:\Comapany name data\ but I altered the script a bit to test the problem. The following files in the source are not copied to the dest:

        Someclient\SONICP~1.DOC
        Someclient\SONICP~2.DOC
        Someclient\SONICP~3.DOC

    However, files in the same directory named:

        TIMESH~1.XLS
        TIMESH~2.XLS

    are copied. I'm able to open the files that aren't copied with no trouble at all, and they certainly weren't opened when I ran robocopy so it's not a locking issue. Robocopy is running as administrator so it's not a permissions issue. There's no trace these files were even attempted to be copied as there are no errors being output in the log or in my command prompt. Does anyone have any suggestions as to what this might be? Busted hard disk? Cheers, John.
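
    A rough Python sketch of the same wrapper (not the poster's script; the paths are placeholders and only the Robocopy switches above are assumed) mainly shows the log name built with datetime instead of the locale-dependent %DATE%/%TIME% substrings, which is one fragile spot in the batch version:

        # Hypothetical rewrite of the wrapper above; paths are placeholders.
        import subprocess
        from datetime import datetime

        source_dir = r"M:\Company Name Data"   # placeholder
        dest_dir = r"E:\dest"                  # placeholder
        log_fname = r"E:\backup_log-" + datetime.now().strftime("%Y%m%d") + ".log"

        cmd = ["robocopy", source_dir, dest_dir, "/COPY:DAT", "/MIR",
               "/R:0", "/W:0", "/LOG+:" + log_fname, "/NFL", "/NDL"]
        result = subprocess.run(cmd)
        # Robocopy exit codes 0-7 indicate success/partial copies; 8 and above indicate failures.
        print("robocopy exit code:", result.returncode)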

  • Command prompt hangs/freezes/crashes sporadically

    - by Leonard Challis
    I'm finding it very difficult to Google this; I don't seem to be able to find anyone with the same issue, and I don't know enough about the Windows operating system to troubleshoot. The machine(s) we are seeing the problem on are Windows 7 (Professional), both 64-bit and 32-bit.

    The problem is the command prompt freezing up, seemingly at random. When it does freeze nothing will bring it back to life (i.e. a keypress), and it's nothing to do with QuickEdit mode either. It doesn't seem to happen when I run standard commands, such as cd, dir, etc., but when I run different programs from the command line. The annoying thing is that sometimes the prompt will freeze and at other times it won't, using the same program/command in the prompt. To add to the frustration, one of my colleagues who had the same problem seems not to have experienced it for a few days now (we're pretty heavy on the command line). It's not a VPN/RDP thing as suggested in other questions and forum posts, as I've seen this both locally and remotely.

    I thought it was to do with the return code signifying an error or some error state in the program, i.e.:

        C:\Users\leonardc>mysql -u lalala
        ERROR 1045 (28000): Access denied for user 'lalala'@'localhost' (using password: NO)

    but this isn't always the case either. In fact the above command hasn't crashed the shell before. Elevating the prompt to run as Administrator doesn't seem to have any bearing on the problem either. Disabling my anti-virus doesn't have an effect either.

    Update: I tried the same commands in PowerShell, but I still get the same problem: it will freeze at random times (more often than not, as with the command prompt, but not always). It's not the same as the command prompt in that one might work while the other doesn't, but then the next time I try to run the same command in both it will suddenly be different again.

  • Apache2 Segmentation fault with wsgi_module

    - by a coder
    Apache 2.2.3 is running as an existing web server under RHEL 5. I am attempting to set up Trac using wsgi_module. RHEL 5 ships with Python 2.4, so in order to use the current version of Trac (1.0) I needed to install it with easy_install-2.6. Trac works with the default mod_python, however users strongly discourage using this module as it is officially dead. Using RHEL's package manager, I downloaded/installed python26-mod_wsgi.so. I backed up the httpd.conf, then made the following additions:

        LoadModule wsgi_module modules/python26-mod_wsgi.so
        #...#
        WSGIScriptAlias /trac /www/virtualhosts/trac/deploy/cgi-bin/trac.wsgi
        <Directory /www/virtualhosts/trac/deploy/cgi-bin>
            WSGIApplicationGroup %{GLOBAL}
            Order deny,allow
            Allow from all
        </Directory>

    Next I moved trac.conf to trac.conf.bak (it contains mod_python calls). I tested the configuration using:

        apachectl configtest

    Syntax is OK. So I reloaded the server config using:

        service httpd reload

    At this time, all virtualhosted sites stopped responding. I restored my backup copy of httpd.conf, reloaded the server config, and the virtualhosted sites are being served again. A quick look at the httpd error_log shows:

        [Mon Oct 08 10:20:04 2012] [info] mod_wsgi (pid=28282): Initializing Python.
        [Mon Oct 08 10:20:04 2012] [info] mod_wsgi (pid=28280): Attach interpreter ''.
        [Mon Oct 08 10:20:04 2012] [debug] proxy_util.c(1817): proxy: grabbed scoreboard slot 0 in child 28283 for worker proxy:reverse
        [Mon Oct 08 10:20:04 2012] [debug] proxy_util.c(1836): proxy: worker proxy:reverse already initialized
        [Mon Oct 08 10:20:04 2012] [debug] proxy_util.c(1930): proxy: initialized single connection worker 0 in child 28283 for (*)
        [Mon Oct 08 10:20:04 2012] [info] mod_wsgi (pid=28283): Initializing Python.
        [Mon Oct 08 10:20:04 2012] [notice] child pid 28249 exit signal Segmentation fault (11)
        [Mon Oct 08 10:20:04 2012] [notice] child pid 28250 exit signal Segmentation fault (11)
        [Mon Oct 08 10:20:04 2012] [notice] child pid 28251 exit signal Segmentation fault (11)

    There are many similar lines; this is just a snip of the log file. Suggestions on what could be going on to cause the segmentation faults?
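
    One thing that can help narrow a segfault like this down is to point the same WSGIScriptAlias at a trivial WSGI script first, so that mod_wsgi under python26 is exercised without Trac or setuptools in the picture; a minimal sketch (the file name is hypothetical):

        # Hypothetical test.wsgi -- minimal WSGI app to confirm that the
        # python26-mod_wsgi build serves requests before involving Trac.
        def application(environ, start_response):
            output = 'mod_wsgi is alive\n'
            response_headers = [('Content-Type', 'text/plain'),
                                ('Content-Length', str(len(output)))]
            start_response('200 OK', response_headers)
            return [output]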

  • Can I recover a rm -rf-ed Mercurial repository?

    - by WishCow
    I made the mistake of wiping out my entire project directory with a quick "rm -rf project". Of course, the .hg directory went with it. I had about 15-20 changesets that I have not pushed to anyone, and I would really, really like to get those back. The system is an Ubuntu machine, the partition where the delete happened is ext3, and the project consists mostly of PHP files.

    I know about the guideline to not write to the disk in question. The first idea was to use the tool named scalpel to get the PHP files back, diff them with the current version from the repo, and somehow carve the changes out. While it succeeded, it did not recover the file names (or there is a switch I'm missing), so I'm left with a few thousand sequentially named .php files, and combing through them is not an option.

    Can a kind soul please save me, and suggest a way to:

        a) get the repo back, or
        b) get the files back, with filenames

    For those wondering how I did such a stupid thing: I was working on a file in Vim which I wanted to remove from the repository:

        :!hg rm %

    This complained that the file is in a subrepository, so I specified the following:

        :!hg rm % -R engine

    which complained that the file has modifications, use -f to force. And this is when, somehow, I made up the following command:

        :!rm -rf % -R engine

    Somehow, seeing "force" makes me do a rm -rf by reflex.
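
    If only option (b) works out, a small script can at least narrow the thousands of recovered, sequentially named files down to the ones that belonged to the project; a sketch only, where the directory and the marker string are hypothetical placeholders:

        # Hypothetical helper: list recovered .php files that contain a string
        # known to be unique to the project (a class name, a domain, etc.).
        import os

        recovered_dir = "/mnt/recovery/scalpel-output"   # placeholder
        marker = "SomeProjectClassName"                  # placeholder

        for root, dirs, files in os.walk(recovered_dir):
            for name in files:
                if name.endswith(".php"):
                    path = os.path.join(root, name)
                    with open(path, "rb") as f:
                        if marker.encode() in f.read():
                            print(path)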

  • Slow boot for OS and external devices

    - by Derek Van Cuyk
    I have been having this problem intermittently, but as of yesterday it has become more consistent. It originally started when I rebooted my PC at home and the OS (Windows 8) sat in a loop, appearing to do nothing while loading. I figured that since this was a new installation, something may have just become corrupted, and I decided to reinstall. So I tried to boot off of the thumb drive which had the installation ISO and encountered pretty much the same issue. Same with the DVD drive. So, I rebooted once again and left it to load the entire night just to see if it ever would, and sure enough this morning Windows had finally loaded. Authentication had the same problem, albeit not quite as long (it took about 5 minutes to authenticate). However, once I was in, everything appeared to be working fine and as quick as normal, with the exception of when I tried to scan the C drive for any errors, which ran unbearably slow (45 minutes before I left for work and it was not finished scanning a 64GB SSD drive).

    I should mention that I have had this issue before, but never when loading the OS. It previously occurred when trying to install Windows 7 from a different DVD drive than the one I have now. It took me about 3 hours to do it since I had to wait sometimes 30+ min for each step to finish processing.

    Does anyone have an idea as to what can cause this? I am assuming it is the motherboard since it is responsible for communication with all the devices I'm having issues with, but I cannot find anyone else who has had a problem like this and don't want to drop more money on a MB if it isn't the problem.

    Hardware:
        Motherboard: Asus M4A78T-E Socket AM3/ AMD 790GX/ Hybrid CrossFireX
        Hard Drive: Kingston SSDNow V+180 64GB Micro SATA II 3GB/S 1.8 Inch Solid State Drive SVP180S2/64G
        Optical Drive: Samsung Blu-Ray Combo Internal 12XReadable and DVD-Writable Drive with Lightscribe SH-B123L/BSBP

    Thanks, Derek

  • CPU-adaptive compression

    - by liori
    Hello,

    Let's assume I need to send some data from one computer to another over a pretty fast network... for example a standard 100Mbit connection (~10MB/s). My disk drives are standard HDDs, so their speed is somewhere between 30MB/s and 100MB/s. So I guess that compressing the data on the fly could help.

    But... I don't want to be limited by CPU. If I choose an algorithm that is intensive on CPU, the transfer will actually go slower than without compression. This is difficult with compressors like GZIP and BZIP2 because you usually set the compression strength once for the whole transfer, and my data streams are sometimes easy, sometimes hard to compress; this makes the process suboptimal because sometimes I do not use the full CPU, and sometimes the bandwidth is underutilized.

    Is there a compression program that would adapt to the current CPU/bandwidth and hit the sweet spot so that the transfer will be optimal? Ideally for Linux, but I am still curious about all solutions. I'd love to see something compatible with GZIP/BZIP2 decompressors, but this is not necessary.

    So I'd like to optimize total transfer time, not simply the amount of bytes to send. Also, I don't need real-time decompression... real-time compression is enough. The destination host can process the data later in its spare time. I know this doesn't change much (compression is usually much more CPU-intensive than decompression), but if there's a solution that could use this fact, all the better.

    Each time I am transferring different data, and I really want to make these one-time transfers as quick as possible. So I won't benefit from getting multiple transfers faster due to stronger compression.

    Thanks,
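
    For what the idea could look like in practice, here is a rough sketch (not an existing tool) that compresses each chunk with zlib and lowers the compression level whenever compressing takes longer than sending, raising it again when the network is the bottleneck:

        # Sketch only: per-chunk adaptive zlib compression for a socket stream.
        # Each chunk is framed as a 4-byte big-endian length plus the compressed payload.
        import struct, time, zlib

        CHUNK = 1 << 20  # 1 MiB per chunk

        def send_adaptive(sock, fileobj, level=6):
            while True:
                data = fileobj.read(CHUNK)
                if not data:
                    break
                t0 = time.monotonic()
                payload = zlib.compress(data, level)
                t_compress = time.monotonic() - t0

                t0 = time.monotonic()
                sock.sendall(struct.pack(">I", len(payload)) + payload)
                t_send = time.monotonic() - t0

                # CPU-bound: back off. Network-bound with CPU headroom: compress harder.
                if t_compress > t_send and level > 1:
                    level -= 1
                elif t_compress < 0.5 * t_send and level < 9:
                    level += 1

    The receiving side only has to read each length prefix and call zlib.decompress on the payload, which keeps decompression cheap and matches the point above that only real-time compression is required.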

  • How do I boot [embedded] linux from sd card?

    - by Brandon Yates
    I am hacking together a quick embedded Linux system on a DM816x EVM board. Previously I have been using TFTP and NFS to load my kernel and root filesystem to the board. I am now trying to switch over to loading everything from an SD card. I have my card partitioned such that U-Boot and my kernel image are in one partition, and my rootFS is in another partition. At power-on, U-Boot starts correctly and successfully launches the kernel. However, the kernel is unable to mount the root file system. It appears that it doesn't recognize any SD (mmc) cards. It gives this error message:

        VFS: Cannot open root device "mmcblk0p2" or unknown-block(2,0)
        Please append a correct "root=" boot option; here are the available partitions:
        1f00     256 mtdblock0 (driver?)
        1f01       8 mtdblock1 (driver?)
        1f02    2560 mtdblock2 (driver?)
        1f03    1272 mtdblock3 (driver?)
        1f04    2432 mtdblock4 (driver?)
        1f05     128 mtdblock5 (driver?)
        1f06    4352 mtdblock6 (driver?)
        1f07  204928 mtdblock7 (driver?)
        1f08   50304 mtdblock8 (driver?)
        Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(2,0)

    I feel like I'm missing something fundamental here. Why does it not recognize the root device I am trying to load from? Here is the U-Boot boot script that is running:

        setenv bootargs console=ttyO2,115200n8 root=/dev/mmcblk0p2 rw mem=124M earlyprink vram=50M ti816xfb.vram=0:16M,1:16M,2:6M ip=off noinitrd;
        mmc init;
        fatload mmc 1 0x80009000 uImage;
        bootm 0x80009000

  • (Zywall USG 300) NAT bypassed when accessing in-house server from LAN via domain name

    - by mschr
    My situation is like this: I host a number of websites from within our joint network solution. On the network there are basically 3 categories:

        - the known public, registered via MAC, given a static DHCP lease
        - the anonymous LAN connections, given a lease from a specific DHCP range
        - switches, unix hosts, firewall

    Now, consider the following hosts which are of interest:

        111.111.111.111 (Zywall USG 300 WAN)
        192.168.1.1 (ZyWall USG 300 LAN) - load balances and bw monitors plus handles NAT
        192.168.1.2 (Linux www) - serves mydomain1.tld and mydomain2.tld
        192.168.123.123 (Random LAN client) - accesses mydomain1.tld from LAN
        23.234.12.253 (Random External client) - accesses mydomain1.tld via WAN

    DNS A records are set up so that both mydomain1.tld and mydomain2.tld point to 111.111.111.111, and the Linux www host serves the http parts with VirtualHost configurations, setting up the document roots per ServerName; this is not so interesting though. A NAT rule translates 111.111.111.111:80 to 192.168.1.2:80 (1:1 NAT).

    Our problem follows: when accessing http://mydomain1.tld from outside the joint network (23.234.12.253 example host), everything is fine; the Zywall receives requests via port 80 and maps them to the Linux host's httpd. However, once trying to go through the NAT from the LAN side (in-house, 192.168.123.123 example host), one gets filtered by the Zywall's port 80 firewall. I know this only because port 443 is open for the administration interface and https://mydomain1.tld prompts for the Zywall login.

    So my conclusion is that the LAN clients that access 111.111.111.111 are in fact routed to 192.168.1.1 whilst bypassing the NAT table. I need to know how to set up NAT / Policy Route so that LAN -> WAN -> LAN will function with proper network translations instead of doing the 'quick nameserver lookup' or whatever this might be.
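
    A quick way to confirm that the LAN-side failure is about how the Zywall handles LAN traffic to its own WAN address (rather than anything on the web server) is to send the same request from an inside client to both the LAN IP and the WAN IP with the same Host header; a sketch, using the addresses from the list above:

        # Sketch: compare the direct LAN path with the hairpin NAT path from a LAN client.
        import socket

        def fetch_status(ip, host="mydomain1.tld", port=80):
            try:
                s = socket.create_connection((ip, port), timeout=5)
            except OSError as e:
                return "connect failed: %s" % e
            req = "GET / HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n" % host
            s.sendall(req.encode())
            status_line = s.recv(1024).split(b"\r\n")[0]
            s.close()
            return status_line.decode(errors="replace")

        # 192.168.1.2 is the direct path; 111.111.111.111 is the NAT path in question.
        for ip in ("192.168.1.2", "111.111.111.111"):
            print(ip, "->", fetch_status(ip))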

  • USB Device Not Recognized

    - by Franky Chanyau
    Ok, this one gets a little bit complicated, but bear with me :D

    A client brought her computer in to be fixed about a week ago. She says she tried charging a new phone she bought from China, and immediately afterwards her USB keyboard and mouse stopped working (typical). I had a quick look at it, but because I did not have time I did a simple system restore, and it seemed as if the issue was fixed. I promptly sent it back to her, but a few days back she called saying that the issue has returned.

    Turns out the computer was riddled with some virus that also corrupted her XP install, so I had to format the whole thing (yes, I tried repairing). I hoped that the format would fix the keyboard and mouse issue, but the whole thing has escalated and the computer will throw the "USB Device not recognized" error when I plug anything into the many USB ports it has. I have installed all the drivers (including the chipset drivers) for the PC and even tried the unplugging-from-the-power-for-a-while trick, still no luck. I am sure it is not a hardware issue, but may be wrong. This is way over my head. Can anyone help?

    Computer: HP Compaq DC7100, Intel Pentium 4, 512MB RAM
    OS: Windows XP Professional SP2

  • sharing a folder between linux and windows over the internet

    - by valya
    Hello,

    Currently my job is to make websites with Django. I use many things like virtualenv, PIL, etc. The problem is, I can't stand Linux on my desktop. I like it on servers; it's great to use it over SSH. But for the desktop? No way. But for development Linux is quite essential. Of course almost everything is ported to Windows, but it's not as simple to use as on Linux. For example, the Windows shell is awful in comparison with Linux.

    So I've tried Cygwin, but it's too damn slow. Every time the Django dev server reloads, it takes almost 20-30 seconds. In comparison, when using "native" Python on Windows or Linux, it reloads instantly. Even worse, Cygwin makes my whole system very slow.

    I've been thinking about it and have thought up a way to go. I can share a folder with my application with some Linux box. The dev server and everything will run on that box, while I'll be happy editing files and running the browser on my Windows 7. An SSH shell is much quicker and handier than Cygwin. Currently there are no Linux boxes in my home network (except for my android phone :) but I have several VDS boxes with Debian.

    So, how do I share a Windows folder with a VDS box? I can't rely on my desktop IP but I can rely on the VDS's one. I need the sharing to be as quick as possible (well, 2-3 seconds ping is OK) and "native" for both systems, so I could use the folder like a normal folder in both Windows and Linux.

  • RDP problem with Vista and Windows 7 destination

    - by MadBison
    I use a server at home to host a bunch of concurrently running Hyper-V VMs with different OSes and software for testing. I have Vista on the laptop, all latest SPs and patches. The server is Server 2008 R2, fully patched. The guests are a mix of XP, Vista, Server 2008 and Windows 7.

    If I connect to the Win XP or Server 2008 guests using RDP, it is always good. Very quick, no speed issues. If I connect to the Vista or Win 7 guests, the response time is so slow it is unusable. Usually 6 or 8 seconds, and at times it is too long to measure! This happens from both the laptop running Vista, and the server running Server 2008 R2.

    Does anyone know what the issue is with RDP to Vista and Windows 7 destinations? I did read this: http://blog.tmcnet.com/blog/tom-keating/microsoft/remote-desktop-slow-problem-solved.asp and that is not the problem I have; I applied that change to all PCs.

  • Lucid Lynx login issue

    - by Bart Silverstrim
    Recently upgraded from Karmic to Lynx. The upgrade seemed to go well, no noticeable issues.

    I logged in, and my window manager wasn't starting. An application would appear, but sans control buttons and border, so I figured the window manager needed to be given a swift kick. Opened a web browser and a quick google had me run "metacity --replace &" and everything popped up. I re-ran the Compiz configuration tool to enable my rotating desktop cube the way I liked it, and had to reconfigure my desktop switcher to the right number of desktops (although the first time I ran it, it crashed on the panel and reloaded... odd, but once it relaunched it seemed fine.)

    Today I installed updates, rebooted and logged in for the second time since my upgrade. Again, the window manager was dead, my Compiz settings were gone, and the workspaces were set back to four (and when I clicked on the preferences to change them, it crashed on the panel and reloaded again). Resetting everything made things look somewhat normal again. I'm guessing it'll work until I reboot again.

    Googling around isn't turning up similar complaints about Lucid Lynx and the window manager. Before I go deleting preference files, does anyone else know of this kind of issue and what could be done about it? Or should I start taking the stab-in-the-dark approach of deleting preference files, hoping one of them is corrupt or has something unsupported in it that's throwing LL for a loop?

  • Being a more attractive job candidate - Certs XOR Degree

    - by Zephyr Pellerin
    I'm currently working in an IT position, where I do helpdesk stuff, and predominantly security related issues/consulting (in the loosest sense of the term) in-house and for service-contract clients (as the only/acting CCSP [I guess I should say only person with Cisco experience] in my organization). I've professionally written kernel mode drivers for a gaming company, among other things that I'm proud to put on a resume. I think of myself as very reasonably qualified as a system administrator, with excellent Cisco experience, among other things I think would make a good addition to almost any IT staff in need of a new employee.

    However, something has always tripped me up - Human Resources. Let me explain: I decided to skip the university route, and I'm immensely glad that I did. The computer science graduates that I've met and work with rarely know much of anything about computers (until they gain some 'real' experience); even when asked about theoretical computing fundamentals they can rattle something off about Turing completeness, but rarely do they understand the mathematical underpinnings. In short, I figured that instead of going to college, I'd rather pick up some real world experience.

    However, apparently, employers rarely think the same way. A quick perusal of jobs through the standard job search engine yields nothing short of a conspiracy to exclude anyone without 'A Bachelors Degree in Computer Science or Equivalent'. Interviews I've had in the past have almost always been entangled with 1. my age (which I can't really change) and 2. lack of a degree. Employers frequently disregard the CCNA/CCSP, the experience I've gained through internships, and my extensive experience in x86 assembly and C, among so many other things I like to think are valuable to employers - all because I don't have a piece of paper.

    So, AS AN EMPLOYER - is it even worth working on my CCIE? Or should I pad my resume with certifications that are easier to acquire (like CISSP, MCSE, Network+, etc.)? Or should I ditch the whole idea and head back to get a Mathematics or CS degree?

  • Looking for suitable backup solution Mac OS X to offsite Centos 6 server 1TB of working data

    - by Brady
    I'll start by saying what we have in place currently:

        - An on-site file server (Mac OS X Server) that is used by GFX designers; they have a working 1TB of data.
        - An offsite server with 2TB of available storage (CentOS 6).
        - The Mac OS X server rsyncs data to the offsite server every 6 hours (rsync -avz --delete --progress -e ssh ...).
        - The Mac OS X server does a full data backup to LTO 4 tape on a 10 day recycle (Mon-Fri for 2 weeks).
        - rsync pushes about 60GB of file changes a day.

    The problem:

        - The onsite tape backup is failing, as 1TB of graphics files don't compress well enough to fit onto an 800GB LTO4 tape.
        - Backup is incredibly slow when doing a full backup.
        - It's a pain in the backside getting people to remember to change the tape; it often gets forgotten, etc.

    The quick solution: buy an LTO5 drive and tapes. However, this has been turned down because of the cost...

    What I would like:

        - Something that works the same way rsync works: only changed data is sent over the wire, and it can be scheduled to run multiple times during the day.
        - Data that is sent is compressed and sent over SSH.
        - Something that keeps a 14-day retention but doesn't keep duplicate data. So, as an example, if I have 1TB of working data and 60GB of changes are made each day, then I expect around 1.84TB of data to be stored on the offsite server.
        - To work with the Mac OS X server and the CentOS 6 server.
        - Not cost an arm and a leg; it must be a cheaper solution than buying an LTO5 drive with tapes (around £1500).
        - Be able to be set up to run autonomously.
        - Have some sort of control panel that will allow an admin to easily restore a file/folder.

    Any recommendations?
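
    One way to get the "changed data only, 14-day retention, no duplicate data" behaviour with the tools already in place is rsync's --link-dest snapshot pattern, where unchanged files are hard-linked into the previous day's snapshot and only changed files take up new space; a rough sketch run from the Mac side (host and paths are placeholders, not the real setup):

        # Sketch: daily hard-linked snapshot using rsync --link-dest.
        import subprocess
        from datetime import date, timedelta

        SRC = "/Volumes/Work/"                             # placeholder
        DEST = "backup@offsite.example.com:/backups/work"  # placeholder

        today = date.today().isoformat()
        yesterday = (date.today() - timedelta(days=1)).isoformat()

        subprocess.check_call([
            "rsync", "-az", "--delete", "-e", "ssh",
            "--link-dest=../%s" % yesterday,   # relative to the destination directory
            SRC, "%s/%s/" % (DEST, today),
        ])
        # Expiring the retention window is then just deleting snapshot directories
        # older than 14 days on the CentOS server.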

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high availability applications (i.e., apps that are up 99.5% with 5 seconds page-to-page navigation). We factor in both scheduled and unscheduled downtime to determine this availability number. However, we recently added a CDN into the mix, which complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers.

    We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside of our network, which means we're hitting the CDN about 75% of the time.

    Management has decided that we take the worst case scenario to measure availability. So if our origin servers are having problems, yet the CDN is serving content just fine, we still take a hit on availability. The same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability!

    I'm just wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers? I look at apple.com, for instance, as a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data, because I don't believe that we need to unnecessarily hurt ourselves on these metrics. We are making business decisions based on these numbers.

    I can say, however, that given these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quick). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., the CDN) is influencing the numbers.

    Thoughts? (I mistakenly posted this question on StackOverflow, sorry in advance for the cross-post.)
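
    To make the difference concrete, a traffic-weighted availability can be put next to the worst-case rule currently in use; a tiny illustration with made-up numbers (75/25 is the traffic split described above, the availability figures are hypothetical):

        # Illustration only -- the availability figures are invented.
        cdn_share, origin_share = 0.75, 0.25
        cdn_availability, origin_availability = 0.9999, 0.9900

        weighted = cdn_share * cdn_availability + origin_share * origin_availability
        worst_case = min(cdn_availability, origin_availability)

        print("traffic-weighted availability: %.4f" % weighted)    # 0.9974
        print("worst-case availability:       %.4f" % worst_case)  # 0.9900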

  • OpenVZ Can't initialize containers after install

    - by Tonino Jankov
    I have installed OpenVZ on CentOS 6 on a dedicated server. I followed the quick installation guide on the OpenVZ wiki. After installing through yum, I don't know why, but grub.conf wasn't automatically updated to accommodate the new kernel, so I had to do it manually. I edited grub.conf, added the OpenVZ kernel and rebooted - it went fine. The server came up into the OpenVZ kernel and it worked; it started the openvz service by itself. But after I created a container, added an IP to it and attempted to start it, I couldn't. Here is the output from the shell:

        [root@cloud2 ~]# vzctl start 86
        Starting container ...
        Container is mounted
        Container start failed (try to check kernel messages, e.g. "dmesg | tail")
        Container is unmounted
        [root@cloud2 ~]# dmesg | tail
        [ 1973.401596] CT: 86: failed to start with err=-105
        [ 2107.113850] Failed to initialize the ICMP6 control socket (err -105).
        [ 2107.155523] CT: 86: stopped
        [ 2107.155543] CT: 86: failed to start with err=-105
        [ 6348.282184] Failed to initialize the ICMP6 control socket (err -105).
        [ 6348.330348] CT: 86: stopped
        [ 6348.330361] CT: 86: failed to start with err=-105
        [45184.024002] Failed to initialize the ICMP6 control socket (err -105).
        [45184.072086] CT: 86: stopped
        [45184.072099] CT: 86: failed to start with err=-105
        [root@cloud2 ~]#

    I don't know what is wrong. I tried different templates - Debian 6, CentOS 6, i386, amd64 - but the issue is the same. What is the problem?

  • My Computer hangs for a few minutes just after startup, and then is fine.

    - by EvilChookie
    So I just built myself a reasonably beefy computer, and I installed Windows 7 on it. However, I start the machine up each morning and within a few minutes the computer will semi-hang. That is, the mouse is responsive, and most of the time I can open Task Manager or a new tab in Chrome. Occasionally windows will be labelled as 'Not responding'. Then the machine will get over its problem, and will be nice and quick until I turn it off.

    Here are my specs:

        CPU: AMD Phenom-II X4 955 Black (Quad Core, 3.2GHz)
        RAM: 4GB of DDR3 1300
        MOBO: ASUS M4A785T-M (Latest BIOS)
        HARD DRIVES: 2x1TB Western Digital Caviar Blacks in RAID-0
        OS: Windows 7 Ultimate x64
        GPU: ASUS GT240 1GB

    I believe this issue relates to the RAID array, as I didn't have the lockup problem before I created the array. I purchased a second drive and reformatted after creating the RAID array, since the single drive was a little on the pokey side (compared to the rest of the computer).

    What I have tried:

        - Updated RAID drivers
        - Malware checks
        - Windows Updates
        - Disabled unnecessary services
        - CPU and disk activity appears to be low (via Resource Monitor)
        - No strange errors in the error log

    Any thoughts?

  • SBS 2003 boot stalls at acpitabl.dat

    - by John
    I have an SBS 2003 server that had been running for 3 years without any problems, and a few days ago it started freezing during boot. The system is using two 500 GB drives in RAID1 (Intel Matrix 7.5). After trying to load in safe mode, the boot stops on acpitabl.dat.

    The first idea was that there is a problem with the RAID, although the disk status was OK and the RAID status was Rebuild. I tried to boot with each drive; one gives me the same problem, and the other drive fails to load. I took both drives out and checked them on a different machine. One drive is dead, the other is without any problems. I returned the good drive to the SBS 2003 box, where the status changed to Degraded, but the problem is still the same.

    I also have a clean SBS 2003 copy installed on this drive (a previous installation), which loads smoothly and quickly. So I believe the main problem is this installed version of SBS 2003. I did not make any hardware changes, and did not make any updates (not sure about any automatic Windows updates lately).

    Since there are tons of posts about this problem, and no clear solution, I am trying to figure out how to repair the SBS 2003 installation, since there are some installed programs on this installation which I cannot re-install without additional issues.

  • Can I format a Veritas cluster shared volume from windows?

    - by spaghettidba
    We have a Microsoft Failover Cluster with dynamic disks managed by Veritas Storage Foundation. Today the sysadmins added a new disk for SQL Server, but the cluster size on the volume was wrong, so I issued a quick format to change it. The disk volume failed, the SQL Server group failed as well, and the cluster became unresponsive. After some minutes I managed to fail over to a passive node.

    The SAN admins say it's my fault because I shouldn't have formatted the disk from the Windows format applet; I should have used Veritas Enterprise Administrator instead. Can a format operation bring a whole cluster group offline this way?

    Relevant error messages - from the event log:

        The cluster resource host subsystem (RHS) stopped unexpectedly. An attempt will be made to restart it. This is usually due to a problem in a resource DLL. Please determine which resource DLL is causing the issue and report the problem to the resource vendor.

    From the cluster.log:

        ERR [RCM] rcm::RcmResControl::DoResourceControl: ERROR_RESOURCE_CALL_TIMED_OUT(5910)' because of 'Control(STORAGE_GET_DISK_INFO_EX) to resource 'NameOfTheDiskGroup' timed out.'

    Veritas documentation - excerpt from Symantec's documentation:

        Note: Before manually creating the resource, you must format the cluster-shared volume with NTFS using the VEA GUI and mount it on the node where you are trying to create the resource.

    Does this mean the disk cannot be formatted from Windows? I don't read it that way. For the record, I have formatted many disks using the Windows applet in the past and nothing bad happened.

  • Can't pin modified shortcuts to the Windows 7 task bar

    - by Coder
    I have a shortcut to a .bat file which I pin to the task bar using a workaround (using another icon), and this seems to work. Now I make a copy of that shortcut, point it to a different .bat file, rename it, and I can't pin this one to the task bar. I have to find some other new, unused icon to pin, pin it, then modify it manually.

    The other problem this causes is that Windows seems to track which icons were pinned, even if they are modified after the fact. As such, if I use Media Player as my dummy icon, pin it, then alter its name and shortcut to point to a .bat file, I can't re-pin Windows Media Player, and if I select unpin from Windows Media Player, it unpins my shortcut to my .bat file. I can't believe how ridiculous this is.

    Is there a way to pin anything I want to the taskbar (i.e. a .bat file in my case) that does not cause problems like this? Is there an easy way I can copy an existing shortcut, modify it and re-pin it to the taskbar?

    The reason I want to copy it is because I start a .bat file (in particular git bash) and I set properties on the window like QuickEdit, increase the screen buffer and set its position and size manually. I don't want to have to do this to every single icon I want to pin, since they will be identical aside from the shortcut target.

  • Tool to Save a Range of Disk Clusters to a File

    - by Synetech inc.
    Hi,

    Yesterday I deleted a (fragmented) archive file, only to find that it had not extracted correctly, so I was left stranded. Fortunately there was not much space free on the drive, so most of the space marked as free was from the now-deleted archive. I pulled up a disk editor and—painfully—managed to get a list of cluster ranges from the FAT that were marked as unused.

    My task then was to save these ranges of clusters to files so that I could examine them, try to determine which parts were from the archive, and recombine them to attempt to restore the deleted file. This turned out to be a huge pain in the butt because the disk editor did not have the ability to select a range of clusters, so I had to navigate to the start of each range and hold down Ctrl+Shift+PgDn until I reached the end of the range (which usually took forever!)

    I did a quick Google search to see if I could find a command-line tool (preferably with Windows and DOS versions) that would allow me to issue commands such as:

        SAVESECT -c 0xBEEF 0xCAFE FOO.BAR   ::save clusters 0xBEEF-0xCAFE to FOO.BAR
        SAVESECT -s 1111 9876 BAZ.BIN       ::save sectors 1111-9876 to BAZ.BIN

    Sadly my search came up empty. Any ideas?

    Thanks!
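
    Failing a ready-made tool, the same job can be scripted against the raw volume; a sketch only, which assumes administrator rights, a known cluster size, a known byte offset for the start of the data area, and sector-aligned reads (the device path and all sizes below are placeholders):

        # Sketch: dump a range of FAT clusters from a raw volume to a file.
        # Assumes the volume is opened unbuffered, reads stay sector-aligned,
        # and cluster numbering starts at 2 at data_start (as in FAT).
        cluster_size = 32 * 1024       # placeholder: bytes per cluster
        data_start = 0x4000 * 512      # placeholder: byte offset of cluster 2
        first, last = 0xBEEF, 0xCAFE   # cluster range to save, as in the example above

        with open(r"\\.\E:", "rb", buffering=0) as vol, open("FOO.BAR", "wb") as out:
            vol.seek(data_start + (first - 2) * cluster_size)
            for _ in range(first, last + 1):
                out.write(vol.read(cluster_size))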

  • get-eventlog issue

    - by Jim B
    I wanted to get a quick report of some log entries I saw on a server, so I ran:

        Get-Eventlog -logname system -newest 10 -computer fs1 | fl

    I got events back, however the descriptions were all wrong. Here's an example:

        Index              : 1260055
        EntryType          : Warning
        InstanceId         : 2186936367
        Message            : The description for Event ID '-2108030929' in Source 'W32Time' cannot be found. The local computer
                             may not have the necessary registry information or message DLL files to display the message, or you
                             may not have permission to access them. The following information is part of the event:'time.windows.com,0x1'
        Category           : (0)
        CategoryNumber     : 0
        ReplacementStrings : {time.windows.com,0x1}
        Source             : W32Time
        TimeGenerated      : 1/25/2010 10:43:31 AM
        TimeWritten        : 1/25/2010 10:43:31 AM
        UserName           :

    Note that if I pull the event ID property it's correct (in this case 38). Is this a known issue, or is something wrong? The messages resolve fine via Event Viewer locally and remotely.

    Here is the PowerShell version info:

        Name             : ConsoleHost
        Version          : 2.0
        InstanceId       : bc58fcf8-bba3-4ca8-8972-17dbd5d9ff08
        UI               : System.Management.Automation.Internal.Host.InternalHostUserInterface
        CurrentCulture   : en-US
        CurrentUICulture : en-US
        PrivateData      : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
        IsRunspacePushed : False
        Runspace         : System.Management.Automation.Runspaces.LocalRunspace

    Here is the revised version info:

        Name                      Value
        ----                      -----
        CLRVersion                2.0.50727.3603
        BuildVersion              6.0.6002.18111
        PSVersion                 2.0
        WSManStackVersion         2.0
        PSCompatibleVersions      {1.0, 2.0}
        SerializationVersion      1.1.0.1
        PSRemotingProtocolVersion 2.1

  • Configure IIS Web Site for alternate Port and receive Access Permission error

    - by Andrew J. Brehm
    When I configure IIS to run a Web site on port 1414, I get the following error:

        ---------------------------
        Internet Information Services (IIS) Manager
        ---------------------------
        The process cannot access the file because it is being used by another process. (Exception from HRESULT: 0x80070020)

    However, according to netstat the port is not in use. Completely aside from IIS, I wrote a test program (just to open the port and test it):

        TcpListener tcpListener;
        tcpListener = new TcpListener(IPAddress.Any, port);

        try
        {
            tcpListener.Start();
            Console.WriteLine("Press \"q\" key to quit.");
            ConsoleKeyInfo key;
            do
            {
                key = Console.ReadKey();
            } while (key.KeyChar != 'q');
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
        tcpListener.Stop();

    The result was an exception and the following ex.Message:

        An attempt was made to access a socket in a way forbidden by its access permissions

    The port was available, but its "access permissions" are not allowing me access. This remains after several restarts. The port is not reserved or in use as far as I know; while IIS says it is in use, netstat and my test program say it is not, and my test program receives the error that I am not allowed to access the port. The test program ran elevated.

    The IIS site is running MQSeries, but the MQ listener also cannot start on port 1414 because of this issue. A quick search of my registry found nothing interesting for port 1414.

    What are socket access permissions, and how can I correct mine to allow access?

  • OS X borked; need to backup outside of Time Machine

    - by rlbgator
    Quick background: iMac G5 (the white one; 4 years old?) running Leopard 10.5.something.

    Time Machine started failing on me, and every time I touch the Finder, things beachball like crazy. Booting from the install disk and then using Disk Utility to "Repair Disk" also fails. I'm left with the conclusion that I have a corrupt file somewhere important that's (i) keeping TM from working and (ii) messing with basic functionality.

    I am not (yet) savvy enough in OS X to know what logs to look in, or how to decipher them - but 'corrupt file' seems to be the likely case, based on my readings of apple.com forum threads.

    So I think I need to back up outside of Time Machine, then install a fresh OS X on a new drive (or maybe SpinRite the current drive?). I'm able to attach a (non-Time Machine) external USB drive, so I dragged all 3 Users' folders to that... am I done backing up? Am I going to have a massive permissions problem trying to put things back together after a re-install?

    Thanks for reading.

  • Vagrant doesn't detect chef-solo unless re-installed

    - by nightowl
    I am using Vagrant to test my Chef recipes in Amazon AWS, and I am encountering an irritating issue: I initially assumed that Vagrant would install Chef itself (as it does when using VirtualBox as the provider), but it seems that this needs to be done using the cloud-init script. However, even after I successfully installed the chef gem via cloud-init I was still getting the following error:

        The chef binary (either chef-solo or chef-client) was not found

    A quick google of this error suggested three probable causes:

        1. Chef had failed to install
        2. It had installed, but the directory was not in the $PATH environment variable
        3. It had installed and was in the $PATH, but with incorrect permissions

    I logged in and double checked; chef-solo and chef-client were installed, the path variable for the user, sudo and root all included /usr/local/bin, and permissions were all fine.

    I managed to solve this problem by uninstalling and reinstalling the gem using sudo gem install chef. I don't understand why this should resolve the issue, and it is a bit of a problem if I have to ssh into a test box and manually install the gem every time. Does anyone have any suggestions why this might be happening?
