Search Results

Search found 24382 results on 976 pages for 'tutor process procedure f'.


  • How to make IE 9 stop reading all the fonts every time?

    - by Mehrdad
    Process Monitor showed me that IE 9 reads every installed font each time it loads on my system, which adds a 1- to 2-second delay to every launch. (I tested this by removing my fonts -- it loaded much more quickly.) It gets a little annoying, because IE is the best handler I have for MHT files, so I don't want to switch to something else. Is there any way to stop it from doing that? (The "Hide Fonts" feature in Windows 7 doesn't help.)

    Read the article

  • Cancel/Kill SQL-Server BACKUP in SUSPENDED state (WRITELOG)

    - by Sebastian Seifert
    I have a SQL Server 2008 R2 Express instance on which backups are made by running sqlmaint from the Windows Task Scheduler. Several backups ran into an error and got stuck in the SUSPENDED state with wait type WRITELOG. How can I get these backup processes to stop so they release their resources? Simply killing the processes doesn't work: the process stays in KILLED/ROLLBACK for a long time and hasn't changed in several hours.
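
    A minimal diagnostic sketch in T-SQL (run in SSMS or via sqlcmd; 53 is a placeholder session id, not from the original question): it lists the stuck requests and then asks the engine how far the rollback of a killed session has progressed.

        -- List the stuck requests and their wait types
        SELECT session_id, status, command, wait_type, percent_complete
        FROM sys.dm_exec_requests
        WHERE command LIKE 'BACKUP%' OR status = 'suspended';

        -- After KILL <spid>, report rollback progress (53 is a placeholder spid).
        -- A rollback that sits at 0% for hours usually only clears with a
        -- restart of the SQL Server service.
        KILL 53 WITH STATUSONLY;

    This only diagnoses and kills; it does not address why the WRITELOG wait appeared in the first place.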

    Read the article

  • New Exchange 2010 CAS cannot find domain controllers

    - by NorbyTheGeek
    I am experiencing problems migrating from Exchange 2003 to Exchange 2010. I am on the first step: installing a new 2010 Client Access Server role. The Active Directory domain functional level is 2003, and all domain controllers are 2003 R2. The only existing Exchange 2003 server (Exchange 2003 Standard with SP2) happens to be housed on one of the domain controllers. IPv6 is enabled and working on all domain controllers, servers, and routers, including this new Exchange server.

    After installing the CAS role on a new 2008 R2 server (Hyper-V VM), I am receiving 2114 events: Process MSEXCHANGEADTOPOLOGYSERVICE.EXE (PID=1600). Topology discovery failed, error 0x80040a02 (DSC_E_NO_SUITABLE_CDC). Look up the Lightweight Directory Access Protocol (LDAP) error code specified in the event description. To do this, use Microsoft Knowledge Base article 218185, "Microsoft LDAP Error Codes." Use the information in that article to learn more about the cause and resolution to this error. Use the Ping or PathPing command-line tools to test network connectivity to local domain controllers.

    Prior to each, I receive the following 2080 event: Process MSEXCHANGEADTOPOLOGYSERVICE.EXE (PID=1600). Exchange Active Directory Provider has discovered the following servers with the following characteristics: (Server name | Roles | Enabled | Reachability | Synchronized | GC capable | PDC | SACL right | Critical Data | Netlogon | OS Version)
      In-site:
        b.company.intranet CDG 1 0 0 1 0 0 0 0 0
        s.company.intranet CDG 1 0 0 1 0 0 0 0 0
      Out-of-site:
        a.company.intranet CD- 1 0 0 0 0 0 0 0 0
        o.company.intranet CD- 1 0 0 0 0 0 0 0 0
        g.company.intranet CD- 1 0 0 0 0 0 0 0 0

    Connectivity between the new Exchange server and all domain controllers works over both IPv4 and IPv6. I have verified that the new Exchange server is a member of the following groups: Exchange Servers, Exchange Domain Servers, Exchange Install Domain Servers, and Exchange Trusted Subsystem. Heck, I even put the new Exchange server into Domain Admins just to see if it would help. It didn't. I can't find any evidence of Active Directory replication problems, and all pre-setup tasks (/PrepareLegacyExchangePermissions, /PrepareSchema, /PrepareAD, /PrepareDomain) completed successfully. The only Active Directory problem I haven't been able to resolve is that I am unable to get my IPv6 subnets into Sites and Services. Where should I proceed from here?
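
    A short diagnostic sketch, run from the new Exchange server (the domain and DC names are taken from the event text above; dcdiag assumes the AD DS management tools are installed on the box). These commands only verify that a suitable GC-capable domain controller can be located -- they change nothing.

        rem Ask the DC locator for a global-catalog-capable DC for the domain
        nltest /dsgetdc:company.intranet /gc

        rem List all DCs the locator knows about for the domain
        nltest /dclist:company.intranet

        rem Run the standard health checks against one of the in-site DCs
        dcdiag /s:b.company.intranet /v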

    Read the article

  • Can mod_fcgid maintain a hard-minimum number of available appserver processes?

    - by user9795
    ...and if so, how? I'm using Apache2 + mod_fcgid to serve a Perl Catalyst application, on a box that I own, and I'd like mod_fcgid to maintain a minimum number of spun-up processes ready to go. The docs say that FcgidMinProcessesPerClass only enforces "a minimum number of processes that will be retained in a process class after finishing requests". How do I get Apache to start up with a certain number of appserver subprocesses on an idle server, without using artificial load to get there?

    Read the article

  • Diagnosing high sys CPU load - low I/O

    - by incous
    A Linux server running Ubuntu 12.04 LTS with LAMP has shown strange behaviour since last week:
    - CPU %sys is higher than before, nearly equal to %usr (previously %sys was small compared with %usr)
    - I/O has dropped to a half or a third of what it was the week before
    I have tried to diagnose the process/CPU with a few commands (top/vmstat/mpstat/sar), and it looks like the timer/resched interrupts may be a bit high. I don't know what that means, so I'm open to any suggestion.
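
    A sketch of the kind of follow-up sampling that usually narrows this down, using the same sysstat/procps tools the question already mentions (intervals and temp paths are just examples):

        # Per-CPU breakdown including %irq and %soft, sampled every 5 seconds, 3 times
        mpstat -P ALL 5 3

        # Which processes are burning system (kernel) time
        pidstat -u 5 3

        # Snapshot the interrupt counters twice and diff them to see which lines
        # (LOC = local timer, RES = rescheduling) are growing fastest
        cat /proc/interrupts > /tmp/irq.1; sleep 10; cat /proc/interrupts > /tmp/irq.2
        diff /tmp/irq.1 /tmp/irq.2 | head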

    Read the article

  • Am I setting this up correctly? [closed]

    - by codrgii
    I'm having a problem with mod_security. I have installed it, but I am not sure how to write the rules for it. I want rules that prevent all the major attacks: cross-site scripting, remote file inclusion, etc. I'm using mod_security 2.6.5 and Apache 2.2 with PHP 5.3.10. I went to this site http://www.gotroot.com/mod_security+rules but I am not sure how to set up the rules, which ones to use, or how to add them properly in httpd.conf. Would someone please explain the process and also recommend rules for someone in my position?
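
    For orientation only, a rough sketch of the usual layout (paths are examples, not from the question): the downloaded rule files go into one directory,

        # Copy the downloaded .conf rule files into one directory
        mkdir -p /etc/apache2/modsecurity.d
        cp /path/to/downloaded-rules/*.conf /etc/apache2/modsecurity.d/

    and httpd.conf then enables the engine and pulls them all in, guarded by a module check:

        <IfModule security2_module>
            SecRuleEngine On
            Include /etc/apache2/modsecurity.d/*.conf
        </IfModule>

    Running apachectl configtest before reloading catches rule files written for a different ModSecurity version.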

    Read the article

  • How much does HDD cache matter with Linux softraid?

    - by Jawa
    I'm in the process of renewing/expanding my disk sets, but not quite sure what kind of disks to get, cache-wise. What difference does a disk cache of 16/32/64 MB make in capacities of, say, 1/1.5/2 TB SATA disks? The disks will be used in a webapp server and in a media workstation, with Linux's softraid in RAID-1/RAID-5 configurations. Note that as both purposes are purely for a hobby, the price tag for a dozen disks is a big issue.
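
    If it helps to measure rather than guess, a quick benchmark sketch for comparing candidate disks (device and mount paths are placeholders; the dd line only writes to the named test file):

        # Cached vs. raw sequential read throughput of a candidate disk
        hdparm -tT /dev/sdX

        # Sequential write that bypasses the page cache, so the drive's own cache matters
        dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=2048 oflag=direct conv=fdatasync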

    Read the article

  • Regarding Unix Move Command

    - by user38993
    I need to write a Unix shell script tran.sh that moves the CSV input files from the /exp/files folder to the /exp/ready directory. The CSV input files are written to /exp/files by an FTP server whose behavior I cannot trivially change. In the tran.sh script I need to ensure, before moving a CSV input file out of /exp/files, that no other process is still writing to the file. How can I do it?
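
    One common approach, sketched under the assumption that the fuser utility is available (the script name and paths come from the question itself):

        #!/bin/sh
        # tran.sh -- move CSV files that no process (e.g. the FTP server) still has open
        SRC=/exp/files
        DST=/exp/ready

        for f in "$SRC"/*.csv; do
            [ -e "$f" ] || continue          # nothing matched the glob
            if ! fuser -s "$f" 2>/dev/null; then
                mv "$f" "$DST"/              # no writer left, safe to move
            fi
        done

    fuser returns success while any process still has the file open, so the move only happens once the upload has finished.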

    Read the article

  • Leopard Macbook very slow after waking up from sleep / cron?

    - by yairchu
    Problem: Occasionally, my MacBook becomes very slow after waking up from sleep. I open Activity Monitor and notice some processes like makewhatis are taking 100% CPU; I kill the process(es) and then everything works fine again. Questions: My guess is that these processes are cron jobs -- is that correct? Is it OK to kill them? Is there a way to stop this from happening? Is this fixed in Snow Leopard? I'm using Leopard (10.5.8) on a MacBook5,1.
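
    A small sketch for checking whether the culprit is the periodic maintenance jobs rather than per-user cron entries (commands as on a stock Leopard install; only the last line changes anything, by running the weekly set at a time of your choosing):

        # makewhatis is normally launched from the weekly periodic scripts
        ls /etc/periodic/daily /etc/periodic/weekly /etc/periodic/monthly

        # Any crontab entries that might also fire after wake
        sudo crontab -l; cat /etc/crontab 2>/dev/null

        # Run the weekly jobs by hand so they don't queue up behind a wake-from-sleep
        sudo periodic weekly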

    Read the article

  • Make Excel 2007 open hyperlinks in Firefox

    - by skypecakes
    In Excel 2007, when I click a hyperlink, it opens in IE. I'm running XP Professional SP3. Firefox is set as my default browser; links in Word and Outlook open in Firefox, but Excel opens them in IE. Anyone know how to fix it? Edit: Process Explorer shows the command line for IE to be "C:\Program Files\Internet Explorer\IEXPLORE.EXE" -Embedding. Thanks!

    Read the article

  • amavisd Net server pid file already exists after system crash and startup

    - by Simiyu
    Hi all. Whenever I have an unclean shutdown, which is most often due to power failure, I get problems with amavis starting up. The error "amavisd Net server pid_file already exists for running process" appears when I start it in debug mode, so I always have to delete the amavisd.pid and amavisd.lock files manually before it starts. Is there a way I can stop this from happening, or a way to delete the files during reboot in the case of an unclean shutdown? Thanks
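
    A minimal sketch of a boot-time cleanup (the pid/lock paths below are common defaults and may differ on your system; typically this goes near the top of the amavisd init script or in a small script run before amavisd starts):

        #!/bin/sh
        # Remove a stale amavisd pid/lock pair left behind by an unclean shutdown,
        # but only if no process with that pid is actually running.
        PIDFILE=/var/amavis/amavisd.pid
        LOCKFILE=/var/amavis/amavisd.lock

        if [ -f "$PIDFILE" ] && ! kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
            rm -f "$PIDFILE" "$LOCKFILE"
        fi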

    Read the article

  • Apache - suExec - FastCGI - PHP = security issue

    - by Jari V.
    I installed Apache with FastCGI (mod_fastcgi), suExec and PHP on my local development box. It is working perfectly, except for one thing. Let's say I have two users: user1 - /home/user1/public_html and user2 - /home/user2/public_html. I discovered a serious security hole in my configuration: I can include a file from user2's web root in a user1 script. How can I prevent this? Any tips? The php-cgi process is running under the correct user.
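
    With php-cgi under suEXEC, a per-user open_basedir is one common way to close this; a sketch of a wrapper script (paths are assumptions, one wrapper per user, referenced from that user's FastCGI configuration):

        #!/bin/sh
        # /home/user1/cgi-bin/php.fcgi -- example wrapper executed via suEXEC as user1.
        # Confines PHP to this user's docroot plus /tmp, so an include() of a file
        # under /home/user2/public_html is refused.
        exec /usr/bin/php-cgi -d open_basedir=/home/user1/public_html:/tmp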

    Read the article

  • What's the best way to achieve an RPO of zero and the lowest possible RTO (less than 15 minutes) with SQL 2008 R2?

    - by Adrian Hope-Bailie
    We are running a payments (EFT transaction processing) application which processes high volumes of transactions 24/7, and we are currently investigating a better way of replicating the database to our disaster recovery site. Our current and previous strategies have included using both DoubleTake and Redgate to replicate data to a warm stand-by. DoubleTake is the supported solution from the payments software vendor, but their (DoubleTake's) support in South Africa is very poor; we had a few issues we simply could never resolve, so we had to give up on DoubleTake. We have been using Redgate to manually read the data from the primary site (via queries) and write to the DR site, but this is a bad solution, and it gets the software vendor hot and bothered whenever we have support issues, as it has a tendency to interfere with the payment application, which is very DB-intensive.

    We recently upgraded the whole system to run on SQL 2008 R2 Enterprise, which means we should probably be looking at using some of the built-in replication features. The server has 2 fairly large databases with a mixture of tables containing highly volatile transactional data and pretty static configuration data. Replication would be done over a WAN link to a separate physical site and needs to achieve the following objectives:
    - RPO: zero loss. This is transactional data with financial impact, so we can't lose anything.
    - RTO: tending to zero. The business depends on our ability to process transactions; every minute we are down we are losing money.

    I have looked at a few of the other questions/answers but none meet our case exactly: "SQL Server 2008 failover strategy - Log shipping or replication?", "How to achieve the following RTO & RPO with logshipping only using SQL Server?", and "What is the best of two approaches to achieve DB Replication?". My current thinking is that we should use mirroring, but I am concerned that for RPO 0 we will need to do delayed commits, and this could impact the performance of the primary DB, which is not an option.

    Our current DR process is to:
    - Stop incoming traffic to the primary site and allow all in-flight transactions to complete.
    - Allow the replication to DR to complete.
    - Change network routing to route to the DR site.
    - Start all applications and services on the secondary site (ideally we can change this to a warmer stand-by whereby the applications are already running but not processing any transactions).

    In other words, the DR database needs to catch up with the primary as quickly as possible and be ready for processing as the new primary. We would then need to be able to reverse this when we are ready to switch back. Is there a better option than mirroring (should we be doing log shipping too), and can anyone suggest other considerations that we should keep in mind?
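
    For reference, a hedged T-SQL sketch of the mirroring setting that determines the RPO (database and endpoint names are placeholders): synchronous, FULL-safety mirroring is what gives RPO 0, at the cost of exactly the commit latency the question is worried about.

        -- On the principal, point the database at the mirror's endpoint (placeholder names)
        ALTER DATABASE Payments SET PARTNER = 'TCP://dr-server.company.local:5022';

        -- Synchronous commit: a transaction is not acknowledged until the mirror has hardened its log
        ALTER DATABASE Payments SET PARTNER SAFETY FULL;

        -- Planned, manual failover when switching processing to the DR site
        ALTER DATABASE Payments SET PARTNER FAILOVER;

    Whether that commit latency over the WAN is acceptable is the real question, and only a test against the actual link can answer it.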

    Read the article

  • monitoring services, CPU, memory remotely on a Windows server machine

    - by ToastMan
    I'm looking for a tool that is able to (remotely) monitor CPU and memory on a Windows server and, most importantly, tell me which service/process is using them. Or is it possible to monitor a specific running service? We have a server that freezes on a regular basis and we're trying to find the culprit without using a local debugger. It would be great if the monitoring software came with an agent that we can install on the remote clients for maximum accuracy. Any suggestions are very much appreciated.
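
    Even without a third-party tool, the built-in performance counters can be sampled remotely; a sketch (the server name is a placeholder) that logs per-process CPU and working set to a CSV for later review:

        rem Sample every 15 seconds, 240 samples (one hour), against the remote box
        typeperf "\\SERVER01\Process(*)\% Processor Time" "\\SERVER01\Process(*)\Working Set" -si 15 -sc 240 -f CSV -o server01-procs.csv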

    Read the article

  • Explorer is missing half of the tray icons in XP

    - by Ither
    Hi, lately and with no explanation, half of the tray icons disappear every time I start up XP SP3. I use Process Explorer (procexp.exe) to look for the missing processes and they are still there. When I kill and restart explorer.exe, the tray is complete again. I don't know how to diagnose or repair the problem. Any suggestions? Thanks in advance.

    Read the article

  • How do I quickly switch to/from the front panel speakers in Ubuntu?

    - by Jephir
    I have speakers attached to my front panel sound output that I switch to frequently. Currently the process is to open Terminal, type "alsamixer", scroll over to "Front Panel", and press "M" to activate it. Although this doesn't seem like much, it's a hassle when switching between outputs frequently. Are there any faster alternatives, such as a button that can be placed on the GNOME panel or a shortcut key that can be used?
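
    The same mute toggle alsamixer performs can be done with a single amixer command, which can then be bound to a panel launcher or a keyboard shortcut (the control name 'Front Panel' is taken from the question and may differ between sound cards):

        # Flip the front-panel output on/off in one step
        amixer set 'Front Panel' toggle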

    Read the article

  • Why does Photoshop CS5's Photomerge result immediately disappear?

    - by koiyu
    I have a bunch of JPG files which I want to stitch together with Photoshop's Photomerge function. I choose File → Automate → Photomerge... and browse for the files. Photoshop opens the files and starts analyzing. I see the progress bar filling and different phases mentioned on it; nothing weird there. When the merging is done (and if I don't blink), I can see the Layers palette is populated with the chosen files and, judging quickly from the layer thumbnails, they're properly aligned. Sometimes the image window itself can be seen, but not always. The problem is that the layers and the image disappear in a flash. There is no error message. Everything is as it was before starting the Photomerge; no file has been changed, and I can continue to use Photoshop normally.

    This is what I've tried so far:
    - Loaded a folder with 38 JPG images, 4272 x 2848 and ~5 megabytes per file
    - Loaded the same files, but chose Use Files instead of Use Folder in the Photomerge window
    - Loaded 19 JPG images, 4272 x 2848 and ~5 megabytes per file
    - Loaded 10, then 5, then 3 of those images
    - Scaled the images to 2256 x 1504 and under ~1 megabyte per file, and loaded sets of 38, 19, 10, 5 and 3
    The following steps were tested with these smaller files and with a set of 5 images:
    - Read Adobe's forums and gradually reduced the amount of RAM Photoshop uses from ~80% to 50% (though I didn't understand the logic behind this)
    - Would have reduced the cache tile size to 128K, but it was already set to that
    - Disabled OpenGL
    - Scaled the images to 800 x 533 and ~100 kilobytes per file, and loaded a set of 5
    - Read more unanswered threads around the internet
    In between each test I closed and reopened Photoshop. This is the first time I've even tried using Photomerge. Am I doing something wrong? How can I locate the problem? How do I fix this?

    Photoshop is the 64-bit Extended CS5 version. I'm on a mid-2010 quad-core (i5) iMac with up-to-date Mac OS X 10.6.6.

    Edit: Weird. First loading the images into one file via File → Scripts → Load Files into Stack… and then using Edit → Auto-Align Layers…, which is effectively the same as Photomerge (even the dialog looks kind of the same), works! Even with the original JPGs, without any issues. This doesn't fix Photomerge, though.

    Read the article

  • Squid Log Rotation and Sarg

    - by beakersoft
    We have just set up Squid as our proxy, and I was going to use Sarg to analyze the log files. I had initially set the Squid logs to rotate every day so they don't get huge. The problem is I can't see an option in the Sarg config to read a folder full of Squid log files (say *.log). Is there an easy way to do this, or am I going to have to write a bash script or something to process them all into one file before I have Sarg read it? Cheers, Luke
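
    A small sketch of the concatenate-then-report approach mentioned in the question (the log paths are common defaults and may differ; rotated logs are often gzipped, hence zcat):

        #!/bin/sh
        # Merge the rotated Squid logs into one temporary file and feed it to sarg
        TMPLOG=$(mktemp)

        cat  /var/log/squid/access.log.1    >> "$TMPLOG" 2>/dev/null
        zcat /var/log/squid/access.log.*.gz >> "$TMPLOG" 2>/dev/null

        sarg -l "$TMPLOG"
        rm -f "$TMPLOG"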

    Read the article

  • Magic key in Linux Kernel

    - by Masi
    What is the purpose of the following command?
        sudo echo t > /proc/sysrq-trigger
    I run it, but I can see no difference in the magic key output in dmesg. The name "trigger" suggests to me that the sysrq facility is involved in the process.
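
    Two details worth checking, sketched below: the sysrq facility has to be enabled at all, and with sudo the shell redirection in the quoted command is performed by the unprivileged shell, so the tee form is the one that reliably reaches /proc/sysrq-trigger.

        # Non-zero means (some of) the sysrq functions are enabled
        cat /proc/sys/kernel/sysrq

        # 't' asks the kernel to dump the current task list to the kernel log
        echo t | sudo tee /proc/sysrq-trigger
        dmesg | tail -n 50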

    Read the article

  • Two VHosts Use Same DocumentRoot, PHP Not Working on Second VHost

    - by thegrip
    I'm helping maintain an e-commerce site that is run on Magento. This site is an outlet for our wholesale customers. We have recently decided to open a second store to reach out to our retail customers, and we decided to set up a second website inside our Magento installation so that we can share the products across both stores. I'm in the process of setting up this new site on the server, but have run into an issue.

    I've set up the second vhost for the new retail site, and I've made the DocumentRoot for this vhost the same as for the wholesale site, so we can use one Magento application for both sites. This is where the error occurs: when I browse to the new store it triggers a download of the index.php file. So I know the DocumentRoot directive is working, but it seems like PHP is being broken in the process. I'm using Plesk to manage the server. I've made sure that PHP is turned on in both vhosts and still get the same issue. Does this sound like a problem with PHP breaking, or is it possible my vhost.conf file is set up incorrectly? (Although the vhost is managed by Plesk and appears correct.) Any help will be much appreciated.

    EDIT: Here's the vhost config (generated by Plesk):

        <VirtualHost IPADDRESS:80>
            ServerName domain.com:80
            ServerAdmin "[email protected]"
            DocumentRoot /var/www/vhosts/domain.com/subdomains/tk/httpdocs
            CustomLog /var/www/vhosts/domain.com/statistics/logs/access_log plesklog
            ErrorLog /var/www/vhosts/domain.com/statistics/logs/error_log
            <IfModule mod_ssl.c>
                SSLEngine off
            </IfModule>
            <Directory /var/www/vhosts/domain.com/subdomains/tk/httpdocs>
                <IfModule mod_php4.c>
                    php_admin_flag engine on
                    php_admin_flag safe_mode off
                    php_admin_value open_basedir "/var/www/vhosts/domain.com/subdomains/tk/httpdocs:/tmp"
                </IfModule>
                <IfModule mod_php5.c>
                    php_admin_flag engine on
                    php_admin_flag safe_mode off
                    php_admin_value open_basedir "/var/www/vhosts/mkdesigngroup.com/subdomains/tk/httpdocs:/tmp"
                </IfModule>
                Options -Includes -ExecCGI
            </Directory>
            Include /var/www/vhosts/domain.com/subdomains/tk/conf/vhost.conf
        </VirtualHost>

    And here's what I've added in vhost.conf (which is included by Plesk):

        DocumentRoot /var/www/vhosts/domain.com/subdomains/dev/httpdocs

    -grip
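
    A hedged guess, based only on the config above: the php_admin_flag block that enables the PHP engine is scoped to the tk httpdocs directory, while the added DocumentRoot points at the dev httpdocs directory, which has no such block -- hence the raw index.php download. A sketch of the corresponding block for vhost.conf (it simply mirrors the directives already shown, re-scoped to the dev path):

        <Directory /var/www/vhosts/domain.com/subdomains/dev/httpdocs>
            <IfModule mod_php5.c>
                php_admin_flag engine on
                php_admin_flag safe_mode off
                php_admin_value open_basedir "/var/www/vhosts/domain.com/subdomains/dev/httpdocs:/tmp"
            </IfModule>
        </Directory>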

    Read the article

  • Installing and configuring Zend Framework 2 server-wide [Ubuntu] and test driving ZendSkeletonApplication

    - by kinologik
    I'm trying to have ZF2 installed for all my subdomains at once (Ubuntu 12.04). ZF2 just released its first stable version, so I wanted to install it on my development server and finally get my hands dirty with it. I downloaded ZF2 and unzipped the files into /var/ZF2/ (which now contains Zend/[all components]). I then edited /etc/php5/apache2/php.ini and added the path to the ZF2 files: include_path = ".:/var/ZF2". I then downloaded the ZendSkeletonApplication and unzipped it in /var/www/skeleton.

    I know it is suggested to use composer.phar to install a ZF2 application, but: I don't want to make a local installation of ZF2 -- I want a server-wide installation so I can use my Zend components on all my domains/subdomains on my development server. And before using any automatic installation process, I'd really like to understand that process by doing it manually at first.

    Obviously, something goes wrong when I fire up ZendSkeletonApplication, and I get the following when I hit the URL http://www.myDevServer.com/skeleton/public/ :

        Fatal error: Uncaught exception 'RuntimeException' with message 'Unable to load ZF2. Run `php composer.phar install` or define a ZF2_PATH environment variable.' in /var/www/skeleton/init_autoloader.php:48 Stack trace: #0 /var/www/skeleton/public/index.php(9): include() #1 {main} thrown in /var/www/skeleton/init_autoloader.php on line 48

    I have skimmed through the docs, tutorials and the like, but there is no straightforward answer to this kind of configuration. In the official doc, in the (very short) installation chapter, I see a reference to adding an include path in PHP, but no example (http://zf2.readthedocs.org/en/latest/ref/installation.html): "Once you have a copy of Zend Framework available, your application needs to be able to access the framework classes found in the library folder. Though there are several ways to achieve this, your PHP include_path needs to contain the path to Zend Framework's library." But then, when I get to the "Getting Started" chapter, it's all composer.phar and nothing else (http://zf2.readthedocs.org/en/latest/user-guide/skeleton-application.html). I'm no sysadmin, just a Zend enthusiast. I'm pretty sure this PEBKAC problem might be obvious for those who already got into the previous ZF2 betas. Thanks for helping me out.

    EDIT: Problem was resolved, thanks to Daniel M. Just setting up ZF2_PATH in httpd.conf was all that was needed:

        SetEnv ZF2_PATH /var/ZF2

    I also removed the include_path reference in php.ini and everything works just fine, so I have no idea why Zend suggested to include it there in their official docs.

    Read the article

  • Where is the Camera Codec feature available in Windows 8?

    - by Rowland Shaw
    If you try to install the Camera Codec Pack for Windows 7 on Windows 8, you get an error: "This version of the Microsoft Camera Codec Pack is not compatible with Windows 8 or Windows Server 2012. You can get the codec pack through Windows Update on Windows 8." However, I cannot see anything in Windows Update that would suggest I can download this, even as an optional update. Is it just the case that it is not yet live, as everything filters through the RTM process, or is it hidden away as something else?

    Read the article
