Search Results

Search found 22569 results on 903 pages for 'win32 process'.

  • How do I stop my IIS App Pool making a request to wpad.mydomain.com?

    - by Programming Hero
    As part of some performance troubleshooting, I've monitored the slow startup of a "cold" App Pool (one without an active worker process) in IIS. When using a built-in account, the App Pool starts in sub-second time. When using a custom local account the App Pool takes 30+ seconds to start processing requests. The service appears to be making requests to wpad.mydomain.com, an address it does not have access to, which causes it to wait 30 seconds for a response before eventually timing out. As a workaround, I've added the hostname to the server's hosts file, to direct the traffic to the local machine, which returns much faster (1-2 seconds). What do I need to do to stop IIS making this request when this identity is used for the App Pool?
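
    The hosts-file workaround described above is a one-liner; pinning the name to the loopback address makes the WPAD probe fail fast instead of timing out (wpad.mydomain.com is the questioner's placeholder):

        # C:\Windows\System32\drivers\etc\hosts
        # short-circuit the proxy auto-detection (WPAD) lookup
        127.0.0.1    wpad.mydomain.com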

  • Where can I store driver files so that Windows sees them when it 'searches automatically' for them?

    - by qroberts
    I am in the process of creating a few generic images and I have downloaded all the drivers for all the models of machines we use here. I can extract these drivers to any location but I am not sure where Windows looks when it is searching for drivers. Is there a driver store somewhere in Windows that it searches through? These images will be created for: Windows XP, Windows 7 x86/x64 Are the locations different between Windows XP and Windows 7? Are they different if the OS is x64 based? I know Windows likes to differentiate 32/64bit software all over the place, not sure if they do the same with driver stores.
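
    A sketch of the two mechanisms I know of, one per OS generation; paths and folder names here are examples. On Windows XP, the DevicePath registry value lists the folders searched during plug-and-play installs; on Windows 7 (x86 and x64 alike), drivers can instead be pre-staged into the driver store (C:\Windows\System32\DriverStore) with pnputil:

        :: stage-drivers.bat (batch-file syntax, so %% becomes a literal %)
        :: Windows XP: add C:\Drivers to the driver search path
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v DevicePath /t REG_EXPAND_SZ /d "%%SystemRoot%%\inf;C:\Drivers" /f

        :: Windows 7: pre-stage every INF under C:\Drivers into the driver store
        for /r C:\Drivers %%f in (*.inf) do pnputil -a "%%f"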

  • How can I speed up boot on one of my machines?

    - by Korneel Bouman
    I have a Gateway all-in-one machine (2.0 GHz Intel Core 2 Duo T7250 dual-core processor, 2 GB RAM - full specs) on which I installed 10.10. Once it has booted it's fine, but it takes forever to get there. This is what happens:
    1. Boot starts, with the cursor flashing for about 10-15 seconds.
    2. The cursor disappears for 1.5-2 minutes.
    3. The cursor reappears, blinks a few seconds more, and boot finishes in another 10 seconds.
    4. Login screen.
    I have another machine with marginally better specs that boots up in no time (basically the above minus the two-minute delay). Things I've done:
    - Enabled verbose mode for GRUB: nothing is shown until after the two-minute pause.
    - Checked syslog: the last message before the pause is a message from ALSA saying the process is already running (or something similar... going from memory here...).
    It could be something sound related, as the built-in speakers are not working (the sound card is recognized, though, and headphones work). Anyway, it's not the end of the world, but it's annoying and I'd like to know what's going on... Many thanks, and let me know if more info is needed.
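
    One hedged suggestion for pinning down the gap: the bootchart package in the 10.10 repositories records the whole boot and renders a timeline image, which usually makes a two-minute stall and its culprit obvious:

        # profile the boot (assumes Ubuntu 10.10's bootchart package)
        sudo apt-get install bootchart
        sudo reboot
        # after logging back in, inspect the rendered chart:
        ls /var/log/bootchart/*.png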

  • Solaris 10: Identify a PID and the CPU it's running on

    - by Marcus
    I have multiple instances of a database running on a Solaris system. I'd like to prove that each database process is being handled by a different CPU. Essentially, I want to be able to do something like a ps -ef | grep <process_name> to get the PIDs, and then run another command (if required) to identify the CPU... Is prstat able to do this? I'm assuming that as each database instance is started, each one uses a different CPU. I'm not sure if I'm understanding this correctly... The reason I want to do this is that Sun hardware has slow CPUs, but lots of them. Therefore, to get the best performance out of it, I need to try to spread the load among the CPUs... Thanks
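
    As far as I recall, prstat reports per-process CPU percentages rather than which processor a PID last ran on; by default the scheduler already balances the load, so binding just makes the placement explicit and provable. A hedged sketch using psrinfo and pbind, both standard on Solaris 10 (PIDs and processor IDs below are examples):

        psrinfo                  # enumerate processor IDs
        pgrep <process_name>     # collect the database PIDs
        pbind -b 0 <pid1>        # bind the first instance to processor 0
        pbind -b 1 <pid2>        # bind the second instance to processor 1
        pbind -q <pid1>          # verify the binding took effect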

  • Hard disk failure. Can I recover my "move"d folders?

    - by Doug
    I am in the process of moving all my files from an old laptop to a new one. I just moved 11 GB of data from my old laptop to an external hard drive, and upon moving it out to the new laptop's drive, the external drive is getting a CRC error (Data Error: Cyclic Redundancy Check). Now I am looking for a solution to recover the files that I moved off my old laptop (not the external). I understand that they are just marked for potential overwriting to free up space. I was getting ready to test out GetDataBack, but it says to install it on a healthy Windows system and attach the drive that needs recovery as an external. However, I don't want to turn off my computer without first getting the okay, since the data is in a "moved" state. Please help! What can I do to recover the moved files? I haven't touched the old laptop since the move.

  • BackupPC backup has not finished in 12 hours(!)

    - by chronoz
    I installed BackupPC today on a server and set it to do a backup 12 hours ago... While it has been backing up ever since, it seems very, very slow and has not completed yet. It's just backing up a test server with a total disk usage of 1.8 GB. What could cause the backup process to be so slow? rsnapshot always worked wonderfully fast, but I want to improve my backup solution. df shows that usage on the backup disk is actually still increasing.

  • batch file to disable network share on Windows XP

    - by Robb
    Loosely related to the question "Network Share causing Cygwin to run slowly after 'ls'", I'd like to write a little batch file that I can execute to disconnect the host from any network shares, and subsequently another batch file to reconnect. Ideally, this would be something that I can execute from a PuTTY terminal, SSHed into the box running Cygwin. I'm pretty sure the batch files can be written easily, but I don't know about executing them from a PuTTY terminal. Regardless, I'd still like the batch files anyway. For the sake of simplicity, my process would be:
    1. Log into the server via PuTTY
    2. Run a batch file to disconnect shares
    3. Do what I need to do
    4. Run a batch file to reconnect shares
    5. Exit the session, closing PuTTY
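
    A minimal sketch of the two batch files (drive letters and UNC paths are examples). From a PuTTY session into Cygwin's sshd they can be run with cmd /c:

        :: disconnect.bat -- drop all mapped shares; /y answers the confirmation prompt
        net use * /delete /y

        :: reconnect.bat -- re-map the shares this host normally uses
        net use Z: \\fileserver\share1
        net use Y: \\fileserver\share2

    Usage from the SSH session: cmd /c disconnect.bat, then later cmd /c reconnect.bat.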

  • Running a service as root

    - by kovica
    I have a Java program that I use to automate the process of creating VPN settings for clients. The program calls a couple of bash scripts and creates and copies files around. I have to run it under the root user because the whole VPN config is under /etc/openvpn, and for this directory I need root privileges. On the same machine I have the Glassfish application server, which will call the mentioned Java program. Glassfish runs under a non-root user. What is the best, most secure way of running the program as root, without entering a password, if I run it via sudo?
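
    The usual pattern, sketched below, is to grant the glassfish user passwordless sudo for that one program only, rather than a blanket NOPASSWD (the script path and user name here are examples). Keep the target root-owned and not writable by glassfish, or the restriction is meaningless:

        # /etc/sudoers.d/glassfish -- edit with visudo, never directly
        # let the glassfish user run exactly one command as root, no password
        glassfish ALL=(root) NOPASSWD: /usr/local/bin/make-vpn-config.sh

    The Java program then invokes: sudo /usr/local/bin/make-vpn-config.sh <args>.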

  • mongodb replication: no primary elected

    - by Max
    I have three servers with mongod installed on them, running as a replica set. Suddenly the two secondaries became unavailable (the mongod process died) - I think because they were too stale. The problem is that the original PRIMARY is now a SECONDARY, and my application doesn't work because it can't connect to a PRIMARY. I mean, in what way does that help me, if the replica set can't fail over?! Am I missing something? Furthermore, I am asking myself: why did the secondaries die / why are they too stale? What can I do about it? FYI: my database is quite big (40 GB on disk).
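
    A replica set needs a majority of voting members up to elect a PRIMARY, which is why one survivor out of three stays SECONDARY. A hedged sketch of the documented escape hatch, a forced reconfig from the mongo shell on the survivor (the member index is an example; the stale secondaries will need a full resync afterwards):

        // run in the mongo shell on the surviving member
        rs.status()                        // confirm which members are down
        cfg = rs.conf()
        cfg.members = [cfg.members[0]]     // keep only the reachable member
        rs.reconfig(cfg, {force: true})    // lets the lone member become PRIMARY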

  • Windows 2003 Server - File Permissions

    - by nickstan
    I have a Windows 2003 web server with a tree of folders that contains around 100 GB of small images. I need to update the permissions on this folder to add a new user with access. I tried to do this by right-clicking on the folder and adding the new user, but the process never completed. I left it running for around an hour, but it started to heavily impact the performance of the server. Is there any other way to change these folder permissions without affecting server performance? Many thanks, Nick
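
    A hedged alternative: applying the ACL change from the command line avoids Explorer enumerating the tree in the GUI, though recursing 100 GB of small files will still take a while and is best done off-peak. A sketch with example names:

        :: grant DOMAIN\newuser read access across the tree
        :: /E edits the existing ACLs instead of replacing them, /T recurses
        cacls "D:\images" /E /T /G DOMAIN\newuser:R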

  • Windows Vista events: Diagnostics-Performance: how to read this information?

    - by Ice
    Hi, I am wondering how long the boot process takes, and am looking in %SystemRoot%\system32\eventvwr.msc /s. Some entries are marked as critical, like: boot took 184707 ms, sometimes 211855 ms or 269767 ms. Some are errors, like: "This process performs many disk activities and lowers the performance of Windows: Filename: ntoskrnl.exe". Why are the startups marked as critical? Are these values normal on a Dell Precision M90 (Intel Centrino dual-core CPU, 2 GB RAM, 80 GB disk)? Some entries marked as errors show the time for shutdown, but there are some like the one I quoted in this question - what is the meaning of that?
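
    For what it's worth, the critical startup entries are Event ID 100 (boot performance); Vista flags a boot as critical when its duration crosses built-in thresholds, and 184,707 ms is roughly a three-minute boot. A sketch for pulling the recent ones from the command line:

        :: five most recent boot-duration events, newest first, as plain text
        wevtutil qe "Microsoft-Windows-Diagnostics-Performance/Operational" /q:"*[System[(EventID=100)]]" /c:5 /rd:true /f:text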

  • How to install a new TFS checkin policy on a TFS 2010 server?

    - by rayrayrayraydog
    We've recently upgraded our TFS server from 2008 to TFS 2010. We've been researching a couple of new add-on checkin policies we want to install. The only problem is that all the documentation I can find on adding new policies appears to be specific to TFS 2008 or earlier. Those steps involve adding new registry keys which do not exist on our TFS 2010 server. Does anybody know where the process of installing new checkin policies for TFS 2010, so they can be applied to Team Projects, is documented? Thanks!
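
    For what it's worth, checkin policies are registered on each client machine running Visual Studio, not on the TFS server itself, which may be why no server-side key exists. A sketch of the VS 2010 client registration (value name and DLL path are examples; on 64-bit Windows the key sits under SOFTWARE\Wow6432Node instead):

        Windows Registry Editor Version 5.00

        ; value name = policy name shown in VS, data = full path to the policy assembly
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\10.0\TeamFoundation\SourceControl\Checkin Policies]
        "MyCheckinPolicy"="C:\\Policies\\MyCheckinPolicy.dll"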

  • How to dual boot Ubuntu 12.10 and Windows XP sp3 on Dell Dimension 8250 desktop using 2 hard drives

    - by user106055
    I'd like instructions for dual-booting Ubuntu 12.10 and Windows XP (SP3) on my Dell Dimension 8250 desktop (this is old and has 1.5 GB RAM, which is the maximum). I will be using two hard drives: Windows XP is already on a 120 GB drive, and Ubuntu 12.10 will go on a separate 80 GB hard drive. Both drives are IDE, using an 80-conductor cable where the blue 40-pin connector connects to the motherboard, the middle (gray) connector is normally used for the slave (device 1), and the black connector at the very end of the cable is meant for the master drive (device 0), or a single drive if only one is used.

    First, I do not wish the XP drive to have its boot modified by Ubuntu in any way. It should remain untouched... virgin. Let me know where the XP drive and the Ubuntu drive should be connected, based upon the cable I've mentioned above, as well as the jumper settings for both during the whole process. I'm just guessing, but should I remove the XP drive, put the empty Ubuntu drive in its place, and install Ubuntu? By the way, I have already made the DVD ISO disk.

    For your information, the BIOS for this machine is version A03. When I tap F12 to get to the boot menu, I have the following choices:
    1. Normal (this will take me to a black screen with white type, giving me the choice to boot to XP or to my external USB backup recovery drive)
    2. Diskette Drive
    3. Hard-Disk Drive C:
    4. IDE CD-ROM Drive (note that if the CD drive is empty, it will then go to the DVD drive)
    5. System Setup
    6. IDE Drive Diagnostics
    7. Boot to Utility Partition (this is Dell's various testing utilities)

    Thank you in advance for your help. Guy
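
    On the "should I remove the XP drive" guess: unplugging the XP disk while installing is indeed the simplest guarantee that its MBR is never touched, and afterwards the F12 menu picks the boot disk. A short sketch of verifying, once Ubuntu is installed and both disks are reconnected (sdX stands for the 80 GB Ubuntu drive; identify it first with sudo fdisk -l):

        # ensure GRUB lives only on the Ubuntu disk
        sudo grub-install /dev/sdX
        sudo update-grub    # adds an XP menu entry if os-prober can see the XP disk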

  • How to monitor RAM usage for Hyper-V VMs?

    - by Mac
    A bit of context first: on Windows 2008 Standard x64 with 8 GB RAM, I have 5 VMs running, which should take up 1664 MB RAM (3*256 MB + 384 MB + 512 MB). There is nothing else running on this server except the basic OS components (this is not a Core installation). I know that each VM will use more RAM on the host than what has been configured in Hyper-V. But when I run the Task Manager, it says 6.7 GB used! If I sum up the RAM used by each process in the Task Manager (showing all users' processes), I get to something around 1 GB... So: how can I check how much RAM each VM is really using on the host (it does not seem to be available via Task Manager)? Note that I am aware my problem could be unrelated to VM RAM usage, but I would still very much like to know how to do this.
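
    The discrepancy has a known cause: VM memory is allocated by the VID driver in kernel space, so it never appears under any process in Task Manager. Per-VM figures are exposed as performance counters instead; a sketch (counter-set name is from memory of 2008-era Hyper-V, so verify it with the -ListSet call first):

        # PowerShell on the host
        Get-Counter -ListSet "Hyper-V VM Vid Partition"
        Get-Counter '\Hyper-V VM Vid Partition(*)\Physical Pages Allocated'
        # pages are 4 KB, so multiply by 4096 for bytes per VM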

  • Is Clonezilla a good option for a daily batch-file-based backup of a Windows XP PC?

    - by rossmcm
    Having just been through the process of rebuilding a Windows XP desktop machine after the disk died, I'm anxious to make the next time a lot less painful. I didn't lose any data, but reinstalling everything took ages. Clonezilla seems to be a highly mentioned free backup tool. How easy would it be to implement the following?
    1. A nightly unattended backup of the desktop's disk image to another network machine (or a second drive in the machine), hopefully with compression.
    2. Restore from that image using USB boot media, so that if I come in to work and find the hard drive has tanked, it is just a matter of replacing the dead drive with a new one, booting from the USB stick, choosing the image to restore, and then finding something else to do for an hour or two. When it is finished I would hopefully be back to where I was.
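
    On point 1: Clonezilla is scriptable; an interactive run ends by printing the equivalent ocs-sr command line, which can then be scheduled. The sketch below is from memory and the flags are assumptions, so run the wizard once and copy the exact line it echoes:

        # unattended "save whole disk sda as a compressed image", roughly:
        /usr/sbin/ocs-sr -q2 -j2 -z1p -i 4096 -p true savedisk nightly-img sda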

  • How do you choose your first job as a programmer? [on hold]

    - by sliter
    In brief: I am a recently graduated CS student. I am looking for a job these days, but I have no idea what kind of software development job I'd like (embedded systems, web development, or something else...), and I am looking for your advice. Here is a little more: while I was a student, I had a one-year internship as a system engineer at a semiconductor company, where I wrote Linux drivers, tuned system performance, etc. I was happy with this experience, as it allowed me to deepen my understanding of the operating system and various low-level things, and I thought, "Hmm, I will continue in the embedded area after I graduate". At the end of my studies, I am doing another internship in web development, both front-end and back-end, and I also very much enjoy the process of learning new things and making them work (Backbone, Node, socketio, etc.). Now, when I am looking for a software development position, I do not know what to apply for! All I know is that I want a job which allows me to keep up with the trends instead of repeating myself. But besides this, I've no idea what specific type of job I want to do. Turn back to embedded systems? Continue with web development? Change to other promising areas (data mining)? All these development positions make no big difference to me. But I think this is not good, and I need some criteria for choosing. So I am looking for advice, and I would really appreciate it if you could share your experience.

  • How do I make a Live USB without destroying my Windows computer?

    - by user71089
    Unlike every other post I have read, let me begin by saying that I am TERRIFIED of destroying my Windows 7 home system in the attempt to make a bootable Ubuntu thumb drive. I very specifically DO NOT want to attempt configuring a dual-boot system. What I am seeking is a portable operating system on a thumb drive, through which I can run my Corel software via VirtualBox-4.1.16-78094-Win. I want to be able to use my own software on my work computer, without any worries about screwing up the host system, anywhere I go. Supposedly, I just made a bootable thumb drive, having successfully loaded the ISO for ubuntu-12.04-desktop-i386 using Pendrivelinux's Universal-USB-Installer-1.9.0.1. However, all that I get is a thumb drive that wants to install Ubuntu onto my PC's HDD. I am not finding any clear path to this end in the posts I am reading. Like it or not, Windows is a fact of life for me. The goal is to be able to use my software on my work PC without doing anything intrusive that will cost me my job. If I have a meltdown on my PC at home trying to make this happen, it will be nearly as bad. Is this even do-able? Can the process be made clear? Thank you, Ubuntu world...!

  • about Linux read/write only permissions

    - by Bimal
    My question looks similar to another thread, "Linux directory permissions read write but not delete". Here, I want to create a directory with permissions such that:
    1. A user can create/upload any files.
    2. A user can re-upload and overwrite the files.
    3. A user cannot remove the files any more.
    I am on CentOS 5.5, a basic user only. How can I do that? Or is there any third-party software that can be installed to do this? Or should I create a new process which will lock the permissions right after a new file is uploaded via SSH (see the sketch below)?
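
    A sketch of that last idea, assuming root access to set it up and the inotify-tools package (names and paths are examples). The directory is mode 1775, owned root:uploaders: group write lets users create and overwrite files, while the sticky bit plus root ownership stops them deleting. Note it breaks uploaders that write a temp file and rename it:

        #!/bin/bash
        # lock-uploads.sh -- re-own each file as soon as it is fully written
        DIR=/srv/incoming
        inotifywait -m -e close_write --format '%w%f' "$DIR" | while read -r f; do
            chown root:uploaders "$f"   # user no longer owns it; sticky bit blocks deletion
            chmod 664 "$f"              # group write keeps overwriting possible
        done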

  • How much RAM to be able to convert large (5-6 MB) JPEGs? [closed]

    - by cosmicbdog
    I've got a project where we want to process large JPEGs (5-6 MB) with Apache and PHP (using the GD library). My understanding is that the server converts the image into a BMP, making it quite RAM-heavy, and currently we're unable to do it with our 1 GB of memory. Here's the error we get: Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 17408 bytes). How much RAM should we be looking at running with to process images of this size? Edit: As Chris S the purist highlighted below, my post is apparently vague. I am doing the most basic and common manipulation of an image: say, turning it from a 4352px x 3264px JPEG of 5 MB in size into a 900px x 675px file.
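
    The error is PHP's memory_limit (67108864 bytes = 64 MB), not the machine's total RAM. GD decodes the JPEG into an uncompressed in-memory bitmap, so the cost scales with pixel dimensions, not file size; a common rule of thumb is width x height x 4 bytes plus overhead. A sketch of the estimate (the 1.7 fudge factor is a rough assumption):

        <?php
        // estimate what imagecreatefromjpeg() will need before calling it
        $info  = getimagesize('photo.jpg');       // [0] = width, [1] = height
        $bytes = $info[0] * $info[1] * 4 * 1.7;   // 4 bytes/pixel (RGBA) + overhead
        printf("~%.0f MB needed\n", $bytes / 1048576);
        // 4352 x 3264 => roughly 96 MB, well past the 64 MB limit above
        ini_set('memory_limit', '256M');          // or raise it in php.ini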

  • Upgrade PHP v5.3.3 to v5.3.4

    - by Ty01
    I currently have PHP v5.3.3 installed and configured. Everything is working perfectly, but I would like to keep PHP up to date and upgrade to v5.3.4. Can someone please describe the usual upgrade process for PHP when compiling manually? For example: is it just as easy as downloading the newest source, uncompressing it, and compiling it using the same (or comparable) options that were used for the last version? Is there anything that has to be removed or changed from the previously installed version? I'm clueless. Please help!
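
    For a point release, the usual pattern is to configure the new source tree with exactly the flags the old build used and install over the same prefix (make install does not touch php.ini). A sketch, with the flags left as a placeholder to copy from the first command's output:

        # recover the configure flags from the running build
        php -i | grep "Configure Command"
        cd php-5.3.4
        ./configure <flags copied from above>
        make && make test
        sudo make install    # replaces the 5.3.3 binaries in the same prefix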

  • decouple software components via name convention

    - by csteinmueller
    I'm currently evaluating alternatives for refactoring a driver-management component. In my multi-tier architecture I have:
    - Base class: DAL.Device (my entity)
    - Interfaces: BL.IDriver (handles the data processing between application and device), BL.IDriverCreator (creates an IDriver from a Device), BL.IDriverFactory (handles the driver creation requests)
    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed, via a type check within the business layer / DriverFactory. That means every new driver needs (a) changing code within the DriverFactory and (b) referencing the new IDriver implementation / assembly. From a customer's point of view, that means every new driver, used or not, needs a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like name convention (see "Caliburn.Micro: Xaml Made Easy"): BL.RestDriver, BL.RestDriverCreator, DAL.RestDevice. After receiving the RestDevice within the IDriverFactory, I can load all driver DLLs via reflection and do name splitting/comparing (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also leads to comparing strings). My question: is that a good approach across layer borders? If not, what would be a good approach?
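
    A minimal C# sketch of the convention-based lookup described above (the folder path, names, and object return type are assumptions; a real version would cache the scan and validate the discovered types):

        using System;
        using System.IO;
        using System.Linq;
        using System.Reflection;

        public static class DriverCreatorLocator
        {
            // maps e.g. RestDevice -> RestDriverCreator, found in any DLL in pluginDir
            public static object CreateFor(object device, string pluginDir)
            {
                string prefix = device.GetType().Name.Replace("Device", "");

                var creatorType = Directory.GetFiles(pluginDir, "*.dll")
                    .Select(Assembly.LoadFrom)
                    .SelectMany(a => a.GetTypes())
                    .FirstOrDefault(t => t.Name == prefix + "DriverCreator");

                if (creatorType == null)
                    throw new InvalidOperationException("No driver creator found for " + prefix);

                return Activator.CreateInstance(creatorType);
            }
        }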

  • How to maintain VPS server?

    - by clorz
    Assuming I have no experience in running them, what would be a good maintenance routine for a VPS running a mail server and a LAMP stack with a couple of sites? I've had one for quite a while now, but was doing what I felt was right without any guidance. It's an Ubuntu server, and the only thing I do is SSH in there once a month and apt-get update, apt-get upgrade. Last year it suggested upgrading the distribution, which I did. I waded through a bunch of diffs, broke the mail server in the process, and fixed it later on, so it turned out fine. Was this the right thing to do, or should I have stuck with the old version, just updating the packages? Is the routine any different if it's Fedora?
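
    For what it's worth, the monthly routine can be automated on Ubuntu so security fixes don't wait for the next login; a sketch:

        # the manual routine
        sudo apt-get update && sudo apt-get upgrade
        # hands-off security updates via a daily job
        sudo apt-get install unattended-upgrades
        sudo dpkg-reconfigure -plow unattended-upgrades
        # the Fedora/CentOS equivalent of the manual routine: yum update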

  • Red Hat 6.5 - sysctl -w net.ipv6.conf.default.accept_redirects=0

    - by kjbradley
    I am in the process of writing a Red Hat 6.5 kickstart disc with hardened security. I have run a program to determine where the weaknesses are in my system, and apparently there is a medium-severity problem in accepting IPv6 redirects. When I implement the following line in the post script of my kickstart, I can't access any websites externally with wget, or ssh/scp in from my computer: sysctl -w net.ipv6.conf.default.accept_redirects=0. Is there a workaround for this, so that the system will still be hardened but I will be able to access external systems?
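
    For reference, the persistent form of that setting belongs in /etc/sysctl.conf; note that "default" only seeds interfaces created later, so hardening guides normally pair it with "all". On its own, refusing ICMPv6 redirects should not block outbound wget or inbound ssh, so the breakage may come from another hardening line applied alongside it:

        # /etc/sysctl.conf -- reload with: sysctl -p
        net.ipv6.conf.all.accept_redirects = 0
        net.ipv6.conf.default.accept_redirects = 0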

  • What maintenance is required for a Postfix setup?

    - by JonLim
    I've taken a look at the setup and configuration process for a Postfix server, planning to use it just for sending emails out from my server. So far, I have these steps:
    1. Set up Postfix
    2. Configure Postfix
    3. Install DKIM
    4. Set SPF records
    5. Tune for performance
    6. Debug
    Seems rather straightforward. However, I was just wondering: are there any actions I should be taking for periodic maintenance of my Postfix setup (a sketch of such a routine follows below)? Thanks! EDIT: Also, just curious, how long would this entire setup ideally take? 30-60 minutes? More?
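
    A sketch of the periodic checks I'd expect for a send-only Postfix (the log path assumes a Debian-style layout):

        postqueue -p                  # anything stuck in the queue?
        postqueue -f                  # retry delivery of deferred mail
        postsuper -d ALL deferred     # or give up on it entirely
        tail /var/log/mail.log        # watch for bounces and rejections
        postconf -n                   # review non-default settings after upgrades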
