Search Results

Search found 7570 results on 303 pages for 'doug hope'.

Page 197/303 | < Previous Page | 193 194 195 196 197 198 199 200 201 202 203 204  | Next Page >

  • Eclipse Indigo freezes on 'Open Type' search

    - by NickGreen
    When I search for a Java class with Ctrl+Shift+T (the Open Type popup), Eclipse freezes as soon as I type a single character. It usually takes about 8 seconds to unfreeze, but sometimes it never comes back. While it is frozen, the Eclipse process uses about 1 GB of memory and the CPU sits at 100%! I've tried creating a new workspace, adjusting eclipse.ini (perm size, different memory values), starting with -clean and, at last, reinstalling the whole IDE. Nothing helps. My eclipse.ini:

      -startup
      plugins/org.eclipse.equinox.launcher_1.2.0.v20110502.jar
      --launcher.library
      plugins/org.eclipse.equinox.launcher.gtk.linux.x86_64_1.1.100.v20110505
      -product
      org.eclipse.epp.package.jee.product
      --launcher.defaultAction
      openFile
      -showsplash
      org.eclipse.platform
      --launcher.XXMaxPermSize
      768m
      --launcher.defaultAction
      openFile
      -vmargs
      -server
      -Dosgi.requiredJavaVersion=1.5
      -Xmn128m
      -Xms1024m
      -Xmx1024m
      -Xss2m
      -XX:PermSize=128m
      -XX:MaxPermSize=128m
      -XX:+UseParallelGC
      -Djava.library.path=/usr/lib/jni

    I'm using the following plugins: JRebel and m2e. I'm desperate for a solution because this problem costs me a great deal of time. System: Ubuntu 12.04 LTS 64-bit, 4 GB RAM, Intel Core i7 860 @ 2.8 GHz. I hope somebody knows a solution. Thank you for your time.

    Read the article

  • How to reduce celeryd memory consumption?

    - by Gringo Suave
    I'm using celery 2.5.1 with Django on a micro EC2 instance with 613 MB of memory, so I have to keep memory consumption down. Currently I'm using it only for the scheduler ("celery beat") as a web interface to cron, though I hope to use it for more in the future. I've noticed it is the biggest consumer of memory on my micro machine even though I have configured the number of workers to one. I don't have many other options set in settings.py:

      import djcelery
      djcelery.setup_loader()
      BROKER_BACKEND = 'djkombu.transport.DatabaseTransport'
      CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
      CELERY_RESULT_BACKEND = 'database'
      BROKER_POOL_LIMIT = 2
      CELERYD_CONCURRENCY = 1
      CELERY_DISABLE_RATE_LIMITS = True
      CELERYD_MAX_TASKS_PER_CHILD = 20
      CELERYD_SOFT_TASK_TIME_LIMIT = 5 * 60
      CELERYD_TASK_TIME_LIMIT = 6 * 60

    Here are the details via top:

      PID   USER   NI  CPU%  VIRT  SHR   RES  MEM%  Command
      1065  wuser  10  0.0   283M  4548  85m  14.3  python manage_prod.py celeryd --beat
      1025  wuser  10  1.0   577M  6368  67m  11.2  python manage_prod.py celeryd --beat
      1071  wuser  10  0.0   578M  2384  62m  10.6  python manage_prod.py celeryd --beat

    That's about 214 MB of memory (and not much of it shared) to run a cron job occasionally. Have I done anything wrong, or can this be reduced about ten-fold somehow? ;)

    Update: here's my upstart config:

      description "Celery Daemon"
      start on (net-device-up and local-filesystems)
      stop on runlevel [016]
      nice 10
      respawn
      respawn limit 5 10
      chdir /home/wuser/wuser/
      env CELERYD_OPTS=--concurrency=1
      exec sudo -u wuser -H /usr/bin/python manage_prod.py celeryd --beat --concurrency=1 --loglevel info --logfile /var/tmp/celeryd.log

    Update 2: I notice there is one root process, one user child process, and two grandchildren from that, so I don't think it's a matter of duplicate startup.

      root   34580  1556   sudo -u wuser -H /usr/bin/python manage_prod.py celeryd
      wuser  577M   67548  +- python manage_prod.py celeryd --beat --concurrency=1
      wuser  578M   63784  +- python manage_prod.py celeryd --beat --concurrency=1
      wuser  271M   76260  +- python manage_prod.py celeryd --beat --concurrency=1

    Read the article

  • Stack Managed Switches over a distance

    - by Joel Coel
    We have several buildings with stacked switches where the distance between the stacked units is considerable: separate floors, or opposite ends of a hallway. They are 3Com switches that stack using Cat6 cabling. These switches are coming up on 12 years old now, and as I look around at replacements it seems no one supports this scenario any more. Stackable switches want to use fiber links (it costs more for me to run and terminate the fiber stacking cables than to purchase the switch) or other custom cables that seem intended only to jump up to the next unit in a rack. What have others done to support stacking over a distance? I'm considering breaking the stacked switches up into separately managed entities and just bridging from the root switch in each building, but I'd really like to avoid that for what I hope are obvious reasons. The closest thing I've found is from Netgear, using HDMI cables for the stacking connection... I could try to support that by running an additional Cat6 line and re-terminating both links into a single HDMI port, but I have concerns over that approach as well.

    Read the article

  • Setting up ssh config file with id_rsa through tunnel

    - by Rubens
    I've been struggling to set up a working configuration that opens a connection to a third machine by passing through another one, using an id_rsa key (which asks me for a password) to connect to that third machine. I've asked this question in another forum, but received no answer that was very helpful. The problem, better described, goes as follows:

      Local machine:        user1@localhost
      Intermediary machine: user1@inter
      Remote target:        user2@final

    I'm able to make the entire connection using a pseudo-tty:

      ssh -t inter ssh user2@final

    (this asks me for the password of the id_rsa file I have on machine "inter"). However, to speed things up, I'd like to set up my .ssh/config file so that I can connect to machine "final" simply with:

      ssh final

    What I've got so far -- which does not work -- in my .ssh/config file:

      Host inter
          User user1
          HostName inter.com
          IdentityFile ~/.ssh/id_rsa

      Host final
          User user2
          HostName final.com
          IdentityFile ~/.ssh/id_rsa_2
          ProxyCommand ssh inter nc %h %p

    The id_rsa file is used to connect to the middle machine (this requires no password typing), and the id_rsa_2 file is used to connect to machine "final" (this one requests a password). I've tried mixing in some LocalForward and/or RemoteForward fields, and putting the id_rsa files on both the first and second machines, but no configuration I tried succeeded. I hope somebody can help me here! Regards!

    P.S.: the thread where I tried to get some help: http://www.linuxquestions.org/questions/linux-general-1/proxycommand-on-ssh-config-file-4175433750/
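    For comparison, a minimal sketch of the same setup using ssh's built-in forwarding instead of nc (this assumes OpenSSH 5.4 or newer on the local machine and that id_rsa_2 lives locally; the host names are the placeholders from above):

      Host inter
          User user1
          HostName inter.com
          IdentityFile ~/.ssh/id_rsa

      Host final
          User user2
          HostName final.com
          IdentityFile ~/.ssh/id_rsa_2
          ProxyCommand ssh -W %h:%p inter

    With this, "ssh final" tunnels through inter and authenticates to final directly from the local machine, so the second key (and its password prompt) stays local. OpenSSH 7.3 and later can express the same thing with a single "ProxyJump inter" line.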

    Read the article

  • Vista: window focus problem

    - by GateKiller
    Sorry for the vague title, but this one is hard to explain, so bear with me please. I'm using Windows Vista at work for web development, and sometimes when I click or Alt-Tab to a background window, the window gets focus but is not brought to the front. In order to bring the window to the front, I have to click on the application's border (when the resize cursor appears) and the window then jumps to the front. I've had this problem for about a year now and it happens at least a dozen times a day, but it doesn't do this all the time; it seems random. I hope I have explained the issue fully (and you've understood it) and would appreciate any constructive answers or comments to solve this problem. Example: if I Alt-Tab from Google Chrome to Notepad and this problem randomly occurs, Google Chrome will remain in front of Notepad; however, I will be able to type text into Notepad while its window is behind Google Chrome. Clicking on Notepad's content area will not bring it to the front, but clicking its window border will. Video example: http://vimeo.com/19388998 (in this video I clicked from Google Chrome to UltraEdit and Chrome stayed in front, but as you can see, I can still type in UltraEdit). I'm starting to believe this could be a bug in Google Chrome, so I'll keep watching to see whether it also happens between other applications.

    Read the article

  • Mounting fuse sshfs fails when invoked by Cron on FreeBSD 9.0

    - by Tal
    I have a remote server's filesystem that I'm attempting to mount locally on a FreeBSD 9 machine via FUSE sshfs and cron, for a backup routine. I have SSH keys set up between the boxes to allow passwordless login as the root user on the local machine. Cron is set to run the following script (in root's crontab):

      #!/bin/sh
      echo "Mounting Share"
      /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server

    As root, I can run this script on the command line without issue, and the share mounts successfully without my being asked for a password. Yet when run by cron, the script fails. The path to sshfs is identical to the output of "which sshfs". Here is the email root receives from the cron daemon:

      X-Cron-Env: <SHELL=/bin/sh>
      X-Cron-Env: <HOME=/root>
      X-Cron-Env: <PATH=/usr/bin:/bin>
      X-Cron-Env: <LOGNAME=root>
      X-Cron-Env: <USER=root>
      Mounting Share
      fuse: failed to exec mount program: No such file or directory
      fuse: failed to mount file system: No such file or directory

    I'm stumped as to why I'm getting "No such file or directory" in this instance; it seems especially odd given that the paths appear to be correct. I've also compared the output of env on the shell with env inserted into the script, and I don't see any environment variables that should cause this trouble. At boot, FUSE reports its version as:

      fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8

    Help me, ServerFault wizards, you're my only hope!
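    One hedged guess worth checking: "failed to exec mount program" is what the FUSE library prints when it cannot find the external mount helper (mount_fusefs on FreeBSD), and the cron environment above only carries PATH=/usr/bin:/bin. A minimal sketch of the script with a fuller PATH set explicitly (the directory list is an assumption; adjust it to wherever mount_fusefs actually lives on this box):

      #!/bin/sh
      PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
      export PATH
      echo "Mounting Share"
      /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server

    If the interactive shell mounts fine and cron does not, comparing the two PATH values against the directory that holds mount_fusefs should confirm or rule this out quickly.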

    Read the article

  • ASUS laptop doesn't charge/use the battery after reinstalling Windows 7

    - by Stan
    I've done a clean install of Windows 7 x64 on an ASUS X501A laptop. The battery is detected and shows in the system tray as "plugged in, charging". However, the charge level stays at 76%, and if the AC cord is unplugged the laptop turns off. The laptop does not turn on without being plugged in either. Everything worked perfectly prior to the reinstall. I've tried:

      - Downloading and installing all the ASUS drivers, including the ATK ACPI driver
      - Checking the BIOS; there do not seem to be any battery-related settings
      - Flashing the BIOS to the latest version
      - Uninstalling "Microsoft ACPI-Compliant Control Method Battery" in Device Manager, as suggested on the internet
      - A full power discharge/ATX reset as suggested by ASUS support: remove the mains charger, remove the battery, press and hold the power button for 10 seconds, reconnect battery and mains, and turn on

    I have a feeling all this may have something to do with the EFI BIOS that comes on the laptop. During the reinstall I had to delete all partitions and start anew, because the Windows installer complained about the improper order of GPT partitions. The EFI System Partition was recreated by the installer, and I am guessing that it may be missing the particular ACPI driver needed to make the battery work. I've tried researching this but could not come up with any useful info. I am hoping someone here may know a bit more about this and can help me understand what's going on and how to fix it. Barring that, I'll have to re-image the drive from an identical ASUS laptop with a stock install and hope it fixes things.

    Read the article

  • Why does domain.com appear as theplanet.com when hosted on HostGator?

    - by silow
    I have a script that's supposed to detect the URL of its caller website. If the caller is another website, it should give something like http://callersite.com. I'm using this line of PHP code (though I suspect this won't matter for sysadmins):

      gethostbyaddr($_SERVER['REMOTE_ADDR'])

    I'm testing with a caller site that's hosted on HostGator. What I'm noticing, though, is that I don't get callersite.com; I get something like 1a.12.12ab.static.theplanet.com. I don't know what theplanet.com is or why I'm not getting callersite.com. Also, what do I need to do to really get the domain of the site making a call to my script?

    Thanks for the explanation. Some have advised I use $_SERVER['HTTP_REFERER'], but it's not what I'm after. My script acts as an API: another website makes a curl request to it, gets an output, and later on presents it to the user. So the HTTP referrer gives false, since callersite.com is making a direct call to me. So, any hope?
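    For context, a small PHP sketch of what that call is actually doing (nothing beyond the standard library is assumed): gethostbyaddr() performs a reverse DNS (PTR) lookup on the connecting IP address, so it returns whatever name the owner of that IP block published. For a site on shared hosting, that is usually the hosting company's or data centre's server name (The Planet was a data-centre provider used by HostGator at the time), not the caller's own domain.

      <?php
      // Hypothetical illustration: the PTR record belongs to whoever owns the IP,
      // not to the website whose code is making the request.
      $ip  = $_SERVER['REMOTE_ADDR'];   // IP of the server calling the API
      $ptr = gethostbyaddr($ip);        // reverse DNS name registered for that IP
      echo "$ip resolves back to $ptr\n";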

    Read the article

  • Network switches for LAN party

    - by guywhoneedsahand
    I am working on setting up the network for a small LAN party (fewer than 16 people). Most of them do not have wireless cards in their rigs, so I need to set up some way for everyone to a) play LAN games and b) access the internet. The LAN party will probably take place in my basement, where I have enough space. However, the basement is not wired up to the router, which is actually on the floor above. I made a cantenna a while back that can boost my computer's wireless performance significantly. How can I use this to provide internet and LAN to guests? My hope was that I could use a switch like this for the LAN: http://www.newegg.com/Product/Product.aspx?Item=N82E16833181166 - but how can I give people access to the internet? Is there such a thing as a network extender / 16-port switch? Obviously, the internet performance doesn't need to be stellar, because the games will be using the LAN, so I am looking to provide some usable internet for web browsing and very high-speed LAN for games. Thanks!

    Read the article

  • Windows 2008 R2 DHCP Overlapping Scopes

    - by Buska
    We are trying to troubleshoot a scope overlap problem. We have multiple device types that we wish to give different ranges of a 16-bit subnet. E.g., X devices we wish to give 192.168.2.1-192.168.2.254/16, and Y devices we wish to give 192.168.3.1-192.168.3.254/16. We are trying to accomplish this by creating different scopes and using the option 60 class identifier. The problem is that DHCP won't allow us to create these scopes with 16-bit masks because of the potential overlap. We aren't overlapping the address pools, so why does DHCP care, and can we work around this? If this isn't possible, how can I assign specific ranges by device type without creating multiple scopes? Any thoughts would be helpful.

    UPDATE:

      Entire scope:       192.168.0.0/16
      Gateway:            192.168.1.1/16
      Device Hardware A:  192.168.20.1-192.168.20.254/16
      Device Hardware B:  192.168.26.1-192.168.26.254/16
      Device Hardware C:  192.168.85.1-192.168.85.254/16

    We tried to set up multiple scopes for each device type (A, B, C) but couldn't specify a 16-bit mask, as Scope A could technically overlap Scope B even though our start and end addresses don't. I hope this makes more sense. Thanks for your thoughts.

    Read the article

  • Looking for advice on using dd to backup a dual boot laptop.

    - by AvatarOfChronos
    My question boils down to this: if I do

      dd if=/dev/sda of=usbdrive

    can anybody confirm that this will get everything, including the MBR, partition information and all four partitions, and create a drive that I can swap with the failing internal drive without losing anything? If this is done while the computer is running, will it still copy everything? At this point I'm afraid to shut down the computer for fear of it never starting again. Secondly, how tolerant is dd of failing drives? Has anybody used it to recover a half-dead drive before who can share any potential pitfalls? Did it get the data OK, or is this going to be a hope-for-the-best kind of situation? And lastly, if the USB drive is larger than the failing internal drive, will I still be able to expand the partitions later so I'm not losing space? This last part seems silly to ask, but with my current streak of bad luck I'll end up overwriting some magic bit and forever turning a 640 GB HDD into a 500 GB HDD. Also, if anybody has a better solution to create a complete clone that gets everything, I'm all for hearing about it.

    PostScript: I had been making periodic backups; however, whatever miasma killed the laptop also got the NAS :(

    Post PostScript: both devices were on a UPS system.
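    For what it's worth, a hedged sketch of how dd is usually pointed at a questionable drive (device names are placeholders; by default dd aborts at the first read error, which is why the conv flags matter here):

      # Whole-disk copy: MBR, partition table and all four partitions included.
      # conv=noerror,sync keeps going past bad sectors, padding them with zeros.
      dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync

      # GNU ddrescue is usually a better fit for a half-dead drive: it retries
      # bad areas last and keeps a map file so an interrupted copy can resume.
      ddrescue -f /dev/sda /dev/sdb rescue.map

    Note that an image taken while the filesystems are mounted and changing can end up internally inconsistent, so a copy made from a live CD/USB with everything unmounted is the safer variant if the machine survives a reboot.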

    Read the article

  • Microphone doesn't work

    - by mandy
    I'm having trouble with my built-in microphone. Even if I use a headset with a mic, it doesn't really work. The weird thing is, if I clap, the green level bars on the speaker icon jump, but if I speak they don't. I have also tried some recordings, but I cannot hear myself, and adjusting the volume didn't help at all. I tried a system restore; still no change. I updated the driver in Device Manager, but it said it's up to date and the devices are working properly. Then I decided to recover the whole system (by holding zero and pressing the power button), and to my surprise the settings became different and most programs were deleted, even my files. It's like it was formatted, and I'm so sad that the mic still wasn't fixed. I really don't know what to do now. My laptop model is a Toshiba Satellite M840. I want to return it to the setup it had just before I recovered the system, bring back all the programs that were installed, and, most of all, fix my microphone so I can use Skype and other video-calling applications again. I hope someone can help me. Thanks a lot!

    Read the article

  • Changing the prompt in telnet

    - by wim
    With some help from people on here, I was able to set a custom prompt in an SSH session (thanks!). Now I need to do the same in telnet, but I'm not sure what syntax I could use for that. Basically, the telnet prompt is just a > character; I need to modify it to something I can detect more reliably in automation jobs. I hope this makes sense. From inside telnet, trying to escape the command with a bang, like !PS1=spam and !PS2=eggs, did not change it.

      wim@wim-acer:~$ ssh [email protected] -i ~/.ssh/guest_nopassphrase -t "export PS1='Sending a custom prompt \w \$ '; exec sh"
      Sending a custom prompt ~ $ set
      HOME='/var/tmp'
      IFS=' '
      LOGNAME='guest'
      PATH='/sbin:/usr/sbin:/bin:/usr/bin'
      PPID='1128'
      PS1='Sending a custom prompt \w $ '
      PS2='> '
      PS4='+ '
      PWD=''
      SHELL='/bin/sh'
      TERM='xterm'
      USER='guest'
      Sending a custom prompt ~ $ telnet localhost
      <snip>
      Entering character mode
      Escape character is '^]'.
      > !set
      CONSOLE='/dev/ttyp0'
      HOME='/var/tmp'
      IFS=' '
      LOGNAME='root'
      PATH='/sbin:/bin:/usr/sbin:/usr/bin'
      PPID='546'
      PREVLEVEL='N'
      PS1='\w \$ '
      PS2='> '
      PS4='+ '
      PWD='/var/tmp'
      RESPAWN_COUNT='1'
      RESPAWN_LAST='0'
      RESPAWN_MAX='5'
      RESPAWN_TIME='5'
      ROOTDEV='/dev/sla1'
      RUNLEVEL='5'
      SHELL='/bin/false'
      TERM='linux'
      USER='root'
      >
      >
      Connection closed by foreign host
      Sending a custom prompt ~ $
      Connection to 192.168.1.124 closed.
      wim@wim-acer:~$

    Read the article

  • Changing the current URL but serving content from another (same domain) - ProxyPass?

    - by zigojacko
    I've been banging my head against the wall with this for months now, so I hope someone on here will finally be able to advise what is needed. I have some URLs like this:

      domain.com/category/subcat/filter/brand

    and I wish to rewrite them to:

      domain.com/category/brand-subcat

    Content loads fine at the first URL; I just want to show it at a different URL. Is URL masking the correct term for this? I have a RewriteRule in .htaccess that, as far as I believe, should do this job:

      RewriteRule ^([a-zA-Z]+)/([a-zA-Z]+)/filter/([a-zA-Z]+)$ $1/$3-$2

    This isn't actually modifying the URL at all, though, on a Magento website (mod_rewrite is enabled and plenty of other rewrites work from the same .htaccess). So firstly, I want to know: is what I am trying to achieve definitely possible? If so, what is this process even called? Secondly, does this need to be handled using ProxyPass and a [P] flag on the rewrite rule? I assume the Apache server doesn't have mod_proxy enabled currently, because when I add a [P] flag the URL returns a 403 Forbidden error with the full server path for the current URL. Please could anyone kindly advise what on earth I need to do to achieve this?
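    On the two specific questions, a hedged sketch only (the pattern is a guess at the intended mapping): if the content lives on the same host, an internal rewrite that maps the public brand-subcat form onto the real filter path needs neither mod_proxy nor the [P] flag, for example:

      RewriteRule ^([a-zA-Z]+)/([a-zA-Z]+)-([a-zA-Z]+)$ $1/$3/filter/$2 [L]

    (the visitor requests /category/brand-subcat and Apache internally serves /category/subcat/filter/brand without changing the address bar). The [P] flag only comes into play when the target is proxied to another host, and it does require mod_proxy and mod_proxy_http to be loaded first (a2enmod proxy proxy_http on Debian/Ubuntu builds), which may explain the failure seen when adding it here.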

    Read the article

  • Setup shared internet connection on virtualbox with fixed IP

    - by Tom
    I am a web developer, and until recently I was using Ubuntu as my OS. For many reasons I have switched back to Windows. I still want to keep my server on a Linux platform, so I set up my local server as a virtual machine. Everything works great, but I'm struggling a little with the networking. Since I work in different places and visit clients, I connect to all sorts of networks with different settings. That means the possible IP range is very dynamic, which causes issues when I work on my local server. At the moment I have a dynamic IP on the host and a static IP on the guest. That way I can access the server from my host (by adding a record to the hosts file). I also have an internet connection on the guest. But once I change networks, it no longer works (assuming the new network has a different configuration). My question is: how do I set up host-guest networking so that, no matter what network I connect to, I keep the static IP on the guest (which is registered in the hosts file on my host, so I can access the web server) and still have an internet connection on the guest? I hope that makes sense. Thank you.
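    A common way to get both properties at once, sketched here with assumed names (a VM called "devserver" and VirtualBox's default host-only network vboxnet0), is to give the guest two adapters: a NAT adapter for internet access, which keeps working on whatever network the host joins, and a host-only adapter carrying the fixed IP that the host's hosts file points at:

      VBoxManage hostonlyif create
      VBoxManage modifyvm "devserver" --nic1 nat
      VBoxManage modifyvm "devserver" --nic2 hostonly --hostonlyadapter2 vboxnet0

    Inside the guest, the first interface stays on DHCP while the second gets a static address on the host-only subnet (192.168.56.x by default); that address never changes regardless of which client network the laptop is plugged into.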

    Read the article

  • proxy pass domain FROM default apache port 80 TO nginx on another port

    - by user10580
    I'm still learning server things, so I hope the title is descriptive enough. Basically I have sub.domain.com that I want to run on nginx at port 8090. I want to leave Apache alone and have it catch all default traffic on port 80. So I am trying something with a name-based virtual host that proxies to sub.domain.com:8090; nothing is working yet and I have no idea what the right syntax could be. Any ideas? Most of what I found was about passing TO Apache FROM nginx, but I want to do the opposite.

      LoadModule proxy_module modules/mod_proxy.so
      LoadModule proxy_http_module modules/mod_proxy_http.so

      <VirtualHost sub.domain.com:80>
          ProxyPreserveHost On
          ProxyRequests Off
          ServerName sub.domain.com
          DocumentRoot /home/app/public
          ServerAlias sub.domain.com
          ProxyPass / http://appname:8090/    (also tried localhost and sub.domain.com)
          ProxyPassReverse / http://appname:8090/
      </VirtualHost>

    When I do this I get:

      [warn] module proxy_module is already loaded, skipping
      [warn] module proxy_http_module is already loaded, skipping
      [error] (EAI 2)Name or service not known: Could not resolve host name sub.domain.com -- ignoring!

    And yes, the app is working (I have it running on port 80 with another subdomain), and it works at sub.domain.com:8090.
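    A hedged sketch of how such a vhost is usually written ("appname" replaced by a loopback address, the rest kept as placeholders): the <VirtualHost> directive expects an IP or wildcard rather than a hostname, and Apache tries to resolve whatever is put there at startup, which is where the "Could not resolve host name" warning tends to come from; name-based matching is done by ServerName instead. Pointing ProxyPass at 127.0.0.1 also avoids depending on DNS for the backend:

      <VirtualHost *:80>
          ServerName sub.domain.com
          ProxyPreserveHost On
          ProxyRequests Off
          ProxyPass        / http://127.0.0.1:8090/
          ProxyPassReverse / http://127.0.0.1:8090/
      </VirtualHost>

    This assumes nginx is listening on 127.0.0.1:8090 on the same box; other sites keep being served by Apache's remaining vhosts on port 80 as before.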

    Read the article

  • Cannot get at data in my NAS

    - by Ben
    I've got a bit of an issue that I'm hoping you can help me with. I have an Iomega ix4 as my NAS. This runs Linux and each drive in the box has 2 partitions: one for the OS and RAID info, and the second for the actual data. I had it configured as RAID5. Recently one of the drives failed. At this point all of the data was available, it was just reporting a failed drive. I had a drive of the same capacity (although not the exact same spec) which I swapped in place of the failed drive. It recognised it, and started to rebuild the data protection. So far so good ... or so I thought. The next day, after data protection had finished reconstructing, the NAS was telling me that 4 new drives had been added, and wanted confirmation to overwrite the data. Obviously I declined to do this. I swapped the failed drive back in again, in the hope that it would return to its previous state of the data being accessible, but one failed disk. However it didn't - it still tells me that the NAS has 4 new drives in it. I am hopeful that the actual data is untouched, so what I need to do is get it to rebuild the RAID without touching the data on the disks. I have ssh access, and have run stuff like mdadm --examine to see what I can find. The mdadm.conf file has no entry in the "definitions of existing MD arrays" section. I have not run any actual rebuilding commands as yet, because this is entering an area which I am out of my depth in. Please can someone advise the best way of getting my data? Thanks.
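    Before touching anything write-related, a hedged sketch of the read-only checks usually run in this situation (device names are assumptions; on the ix4 the data half of each disk is typically the second partition, sda2 through sdd2):

      # What the kernel currently thinks is assembled, if anything.
      cat /proc/mdstat

      # Print the md superblock of each data partition; --examine only reads.
      mdadm --examine /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

    Comparing the Array UUID, Update Time and Events counter across the four outputs shows which members still agree with each other, which is the information anyone helping with a non-destructive reassembly will ask for first.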

    Read the article

  • pfSense routing between two routers with shared network

    - by JohnCC
    I have a network set up using two pfSense routers, arranged like this:

      DMZ1   WAN1        WAN2   DMZ2
        |      |            |     |
        |      |            |     |
        \____ PF1          PF2 ___/
               |            |
               \__TRUSTED___/

    Each pfSense router has its own separate WAN connection and a separate DMZ network attached to it. They share a common TRUSTED LAN between them. The machines on the trusted network have PF1 as their default gateway. PF1 has a static route defined to DMZ2 via PF2, and PF2 has a static route to DMZ1 via PF1. There is NAT to the WAN, but the internal networks (DMZ1/2 and TRUSTED) use different RFC1918 subnets. I inherited this arrangement, and it all used to work fine. I made a config change to PF1 (relating to multicast) and machines on DMZ2 suddenly could not talk to TRUSTED. I rolled the change back, but the problem persisted. What I guess you'd hope would happen is that TCP packets would go DMZ2 - PF2 - TRUSTED and on return TRUSTED - PF1 - PF2 - DMZ2. That's the only way I can see it would have worked. However, PF1 drops the returning packets; I've verified this using tcpdump. I've worked around it by adding static routes to DMZ2 via PF2 on the servers on TRUSTED, but some devices on there do not support static routes, so this is not ideal. Is there a way to make this arrangement work decently, or is the design inherently flawed? Thanks!

    Read the article

  • Lost Windows 7 boot after EasyBCD with EFI

    - by drent
    I've got a Lenovo Y580 with a 64 GB SSD and a 1 TB HDD, set up using GPT and booting via (U?)EFI. I was trying to add my Linux Mint installation to the Windows boot manager using EasyBCD (I didn't realise it doesn't handle EFI), and it wiped my boot partition/loader. Now I cannot seem to get Windows back (and I still can't get a bootable Linux Mint). Using the System Recovery utility, Startup Repair can't "see" Windows (perhaps because I'm using a 7 Pro disk to recover Home Premium?). In the command prompt, the bootrec tools don't do anything, and bootsect won't run because it says it's for BIOS only and I've booted with EFI. I can see the EFI data on the 200 MB SSD partition using diskpart, but I don't know how to add Windows back onto whatever bootloader I have/need. At the moment the only options I can see are:

      - Do a fresh install of Windows and hope that the setup remains as fast as the default one (the SSD is some kind of cache for Windows, but I can't quite see how it works given that the rest of the SSD is unpartitioned space). This seems like overkill given that Windows was working fine until EasyBCD deleted it.
      - Try forcing BIOS mode and see if that somehow magically fixes things.
      - Try converting from GPT to MBR to try and use the bootrec/bootsect tools (and maybe back again), which seems like a really bad idea.

    Anyone have any ideas?
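    Purely as a hedged sketch (drive letters as seen inside the recovery environment may differ, and this exact machine can't be verified from here): when Windows was installed in UEFI/GPT mode, its boot files can usually be rebuilt from the recovery command prompt by giving the EFI System Partition a letter in diskpart and then pointing bcdboot at the Windows folder, which copies the boot files and recreates the BCD store without touching the rest of the disk:

      diskpart
        list volume
        select volume <number of the ~200 MB EFI volume>
        assign letter=S
        exit
      bcdboot C:\Windows /s S:

    That is a far smaller intervention than a reinstall or a GPT-to-MBR conversion, so it may be worth trying before either of those options.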

    Read the article

  • Hyper-V vss-writer not making current copies [migrated]

    - by Martinnj
    I'm using DiskShadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again. The first script – the one that runs correctly but fails to do what it's supposed to:

      # DiskShadow script file to backup VM from a Hyper-V host
      # First, delete any shadow copies of the drives. System Drives needs to be included.
      Delete Shadows volume C:
      Delete Shadows volume D:
      Delete Shadows volume E:

      # Ensure that shadow copies will persist after DiskShadow has run
      set context persistent

      # make sure the path already exists
      set verbose on

      begin backup
      add volume D: alias VirtualDisk
      add volume C: alias SystemDrive

      # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
      # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines!
      writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}

      create
      end backup

      # Backup is exposed as drive X: make sure your drive letter X is not in use
      Expose %VirtualDisk% X:
      Exit

    The next script is just a robocopy, and then an unexpose. Now, when I run the above script I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay, because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct. While the Hyper-V writer is active, two things happen on the guest machines: the Debian/Linux machine goes to a saved state and restores when done, all fine; the Windows guests report "creating VSS snapshot sets" or something similar. Then X: gets exposed and I can copy the .vhd files over. The problem is that, for some reason, the VHD files I get seem to be old copies: they are missing files, users and updates that are on the actual machines. I also tried putting the machines into a saved state manually; it didn't change the outcome. I hope someone here has an idea of how to solve this.

    Read the article

  • Reusing Raid 5 Drive?

    - by User125
    We have two servers (an ML530 G2 and a DL380 G2) with identical HP 10K RPM SCSI drives in RAID 5. One is decommissioned and the other will be decommissioned shortly. However, one of the drives in the production server has failed. My hope was to take one of the drives from the decommissioned server and pop it into the production server; both are running RAID 5. I broke the array on the decommissioned server, which to my knowledge should have wiped out all the volume and partition information. However, I do not know whether it is then safe to take a drive from the decommissioned server and use it to replace the failed drive. Will the existing array see it as a replacement drive, wipe it and rebuild? Or will it fail because the drive was used in an array before? Is there any remnant data that resides on the drives after deleting a RAID 5 array? These servers are 10-15 years old, so we're just trying to keep them alive until we decommission them. I'm not looking to pay a premium to find a vendor that still sells replacement drives for this system.

    Read the article

  • Application that will identify percentage of your system disk bandwidth used on a user-application by user-application basis?

    - by Warren P
    I always (subjectively) feel my computer is far too slow (however fast it is), so I'm always looking for ways to measure and understand what my computer is actually doing that makes it seem "slow" to me. It has been my observation that my software-developer workload is most often disk-bound (I am waiting for disk I/O) rather than CPU-bound. What makes it worse is that I am using a corporate PC that has in-memory active-scanning anti-virus software that I do not have control over, as well as some IT-department-mandated services that seem to suck up a lot of the available hard-disk bandwidth. The best tool I have seen (in Windows 7) is the Resource Monitor, which I usually access from the button in Task Manager. The disk I/O page, however, seems to label disk activity at a very low level (for example, showing the Volume Shadow Storage, which is flushing information obviously written by something ELSE other than VSS itself, and writes to pagefile.sys, which are obviously due to virtual-memory faults in some application). What I would like to know is whether a utility exists that can add up all direct disk input and output by user-level process, or find the process or service that caused the VM or VSS activity. In that way, I hope, you could establish a real idea of how much of your computer's precious disk-subsystem bandwidth is attributable to a particular application. Here's a scenario:

      - MyApp.exe writes 100k/s and reads 100k/s directly.
      - VSS ends up writing another 100k/s.
      - Page faults caused inside MyApp.exe cause another 100k/s of writes.

    So the total "cost" of MyApp.exe running during a period of time (let's say 1 second) is 400k/s, whereas you can only directly observe half of that in Resource Monitor. Is there a smarter disk-IO-watching piece of software I can use?

    Read the article

  • Server format & Reinstall while keeping Server & domain ID

    - by Chris
    Hi everyone. I want to reinstall my 2008 R2 server from scratch, due to multiple Active Directory issues. I have only one server running AD and a spare machine to use if necessary. Is there a way to save just the user accounts and the domain SID, so that I can start with a clean server that uses the same name as before? I can reassign file security, but I do not want to have to rejoin all the users to a new domain. Also, all users are mapped to folders on the server. What I hope to do is a clean install of the server without having to mess with the users' machines. Can someone please tell me the procedure to accomplish this? Any help appreciated! Thanks guys, but I could be here all day telling you every error I am getting, so can we please keep this to the question of how to do a reinstall and keep the same SID? I just want to start over without having to rejoin all the clients to a new domain. Is there a tool that can back up the server SID and the AD domain name so that I could restore them, without restoring any other data? I might not be using the correct terminology here, but hopefully you understand what I am asking. Thanks.

    Read the article

  • I need to build a mini computer lab with 6 PCs

    - by chiurox
    Hello everyone. This weekend I need to build a mini computer lab with 5-6 PCs. The purpose is a small computer class that will be taking place; they're mostly going to be doing office work and everyday tasks, so nothing high-performance. However, I hope it will last for at least 2 more years. I already have parts for 2 PCs: one is a P4 3.0 GHz and another is a Celeron 2.4 GHz, both socket 478. For the other 4 PCs, I'm wondering if I should buy individual parts and put them together myself, or get workstations like those Dell OptiPlex machines with P4s. To put things into perspective, I am not currently in the US; I'm in South America, and prices here are ridiculous. Another really important thing is that I need to share an internet connection between a total of 8 PCs at this place. Right now I only have a crappy wireless router; should I get another one, or go with a switch/hub? I'm not experienced in this matter. Thanks guys!

    Read the article

  • Apache2 403 permission denied on Ubuntu 12.04

    - by skeniver
    I have a sub-directory in my /var/www folder called prod, which is password protected. It was all working fine until I asked my server admin to help me set up allow-all access to one particular file. Now the entire folder just gives me a 403 error. This is the sites-enabled file:

      <VirtualHost *:80>
          ServerAdmin [email protected]

          # Server name
          ServerName prod.xxx.co.uk
          DocumentRoot /var/www/prod

          <Directory /var/www/prod>
              Options Indexes FollowSymLinks MultiViews +ExecCGI Includes
              AllowOverride None
              Order allow,deny
              AuthType Basic
              AuthName "Please log in"
              AuthUserFile /home/ubuntu/.htpasswd
              Require valid-user
          </Directory>

          <Directory /var/www/prod/xxx/cgi-bin/api.pl>
              Allow from All
              Satisfy Any
          </Directory>

          ScriptAlias /xxx/cgi-bin/ /var/www/prod/xxx/cgi-bin/

          ErrorLog ${APACHE_LOG_DIR}/prod.xxx.error.log

          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn

          CustomLog ${APACHE_LOG_DIR}/prod.xxx.access.log combined
      </VirtualHost>

    Now he's unsure why this is blocking me out completely. No permissions have been changed, but this is the /var/www/ folder:

      4 drwxr-xr-x 2 root root     4096 Jan  3 21:10 images
      4 drwxr-sr-x 4 root www-data 4096 Mar 31 14:47 jslib
      4 drwxr-xr-x 7 root root     4096 Jun  2 13:00 prod

    When I try to visit http://prod.xxx.co.uk, I don't get asked for the password; I just get a 403. I hope I've given enough information... is anyone able to spot something he can't?
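    One detail that stands out, offered as a hedged guess rather than a confirmed diagnosis: Ubuntu 12.04 ships Apache 2.2, where "Order allow,deny" with no matching "Allow" directive denies every host, and that host check is evaluated in addition to the Basic auth (the default is "Satisfy All"), so the whole directory returns 403 before the password prompt is ever sent. A minimal sketch of the same directory block with an explicit allow added:

      <Directory /var/www/prod>
          Options Indexes FollowSymLinks MultiViews +ExecCGI Includes
          AllowOverride None
          Order allow,deny
          Allow from all
          AuthType Basic
          AuthName "Please log in"
          AuthUserFile /home/ubuntu/.htpasswd
          Require valid-user
      </Directory>

    With that in place, host-based access passes for everyone and Basic auth alone decides who gets in, while the separate api.pl block keeps its "Satisfy Any" exemption.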

    Read the article
