Search Results

Search found 5237 results on 210 pages for 'lightweight processes'.


  • UW-IMAP server, high load for one user

    - by Bruce Garlock
    We have been experiencing a very strange anomaly with one specific user on our UW-IMAP server. We have about 75 users on the server, and one particular user, who is about in the middle as far as used storage goes, keeps having issues with slow speed. Most of our users use Thunderbird 2 or Thunderbird 3 (mostly 2, because of the performance issues we have had with 3). This user was on 3, and I downgraded him to 2. The performance has gotten better, but according to the imapd processes on the server, his username is using the most CPU % and CPU time. I've already done all the usual troubleshooting: started the profile from scratch, compacted folders, re-indexed, moved him to a newer, faster computer, etc. Still, this user's imapd process is always using the most CPU on the server. For comparison, we set up another user who has more usage, folders, etc. than he does, but we don't see that user's imapd process taking up most of the CPU. So it almost sounds like a particular email may be the culprit, but how can we find it, if that's the problem? This has been going on for a while, and he is a management person, so his patience is about to end. Does anyone have any ideas?
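    If a single corrupt or oversized message is the culprit, a quick pass over the user's mail folders on the server can narrow it down. This is only a sketch, assuming mbox-style folders under the user's home directory; adjust the path and size threshold to your layout.

        du -sh /home/username/mail/* | sort -h | tail -20    # twenty largest folders, biggest last
        find /home/username/mail -type f -size +50M          # suspiciously large mbox files
        grep -c '^From ' /home/username/mail/INBOX           # rough message count in one mbox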

    Read the article

  • Win 7: something is causing excessive disk usage, maybe Chrome?

    - by camcam
    On my Win 7 computer, this happens: I work for several hours without problems, also using Chrome. Suddenly, just after refreshing a page in Chrome, the disk starts being used excessively (at this point I can close Chrome or not, it doesn't matter, the disk won't stop). The disk LED is on all the time and I hear the drive running like crazy, and the whole computer is slowed down a little. It lasts 5-10 minutes. In the meantime, I go to Windows Task Manager, observe which processes are using the disk and turn them off one by one, but with no success in stopping the excessive disk usage. After approximately 10 minutes everything stops. I go to Chrome (or re-open it) and refresh the page, with mixed results: sometimes the whole process repeats immediately, sometimes not. Basically, it is almost always Chrome refreshing a random page that starts the excessive disk usage, but killing the Chrome process does not stop the disk. Going to the same page in Firefox does not cause problems. Windows Search is turned off. I would like to know what is really happening. Perhaps there is a utility which would allow me to see which process is really using the disk, so that I can disable the service? (Not Chrome, because killing Chrome does not change anything.) Or even better, perhaps there is a way to fix it?

    Read the article

  • In APC+PHP, how much RAM is too much? Is it okay to set apc.shm_size to many GB?

    - by Jeremy Clarke
    On our server we have a LOT of RAM for our traffic levels (16 GB). The HTTP processes regularly eat up all the CPU and need to be restarted without even getting close to using swap, so I'm looking for ways to spend RAM to ease the load on Apache (and/or help the separate MySQL server, which may be what is breaking Apache). I have many WordPress installs on the HTTPD instance, so APC sometimes uses as much as 900 MB of RAM (according to the apc.php charts). Just in case, I have apc.shm_size set to 1600 MB, which is more than it needs but not more than I can spare. This means there is usually lots of extra RAM available to APC, but also very little turnover, and fragmentation is never more than 1%. Is this dangerous? Should I be slimming down APC to less than 1 GB just on principle? Should I be expecting some turnover within APC in the name of bringing its overall footprint down? Having so much memory devoted to APC means that in top/htop every single httpd process shows ~1.9 GB in the VIRT memory column. Obviously this is shared memory and not used per-process, but could it be hurting our server? NOTE: The problem with the server remains unclear, but the effect is that about 60 times a day all 8 CPUs fill up to 100% and everything stops working until Monit sees that Apache is broken and restarts it (Monit also looks after the MySQL server). I'm not sure if APC is even part of the problem, but I'm trying to optimize everything just in case.
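    One way to reassure yourself about the ~1.9 GB VIRT figure is to confirm it is a single shared mapping rather than per-process memory. A rough sketch only; the binary is apache2 on Debian-style systems, and an mmap-backed APC build shows up as one large anonymous mapping rather than a SysV segment:

        ipcs -m                                  # SysV shared memory segments, if APC uses shm
        PID=$(pgrep -o httpd)                    # oldest Apache worker
        pmap -x "$PID" | sort -k2 -n | tail -5   # largest mappings; APC appears as one big shared map
        ps -o pid,rss,vsz,cmd -p "$PID"          # RSS is the per-process cost that actually matters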

    Read the article

  • "What happens?" server performance monitor

    - by AlexAtNet
    Hello! After reviewing some threads about server monitoring software, I ended up with a simple question: which of the server monitoring tools should I use for automatic detection of "abnormal" situations, with recommendations on how to fix them? I'm looking for software that checks system performance after installation and calculates some average load values (memory, CPU, etc.). Then, when something happens (say, CPU load increases to 20%), it tries to detect the reason. If it is Apache, it should check the access logs. If it is MySQL, it should check the MySQL logs and tell me what happened. If it is because some user is decoding a lot of images, I'd like to know which command was executed, when, and under which user name. The same for disk usage, memory, number of processes, threads and so on. Ideally, this software should periodically check the system and report problems: errors in the PHP error log, outdated packages, security vulnerabilities. In other words, I'm looking for software that will keep my simple Debian/Apache/PHP/MySQL server healthy without forcing me to monitor the charts every day. I hope such a program exists. Thanks, Alex
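    Nothing below replaces a real monitoring suite, but a minimal cron watchdog can at least capture "which command, when, and which user" whenever the load spikes. A sketch only; the threshold, script path and log path are arbitrary choices:

        #!/bin/sh
        # /usr/local/sbin/load-snapshot.sh -- run from cron every minute
        THRESHOLD=4
        LOAD=$(cut -d ' ' -f1 /proc/loadavg | cut -d. -f1)   # integer part of 1-minute load
        if [ "$LOAD" -ge "$THRESHOLD" ]; then
            {
                date
                uptime
                ps aux --sort=-%cpu | head -15    # top CPU consumers, with user and command line
                ps aux --sort=-rss  | head -15    # top memory consumers
            } >> /var/log/load-snapshot.log
        fi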

    Read the article

  • task blocked for more than

    - by Manuel Sopena Ballesteros
    I have a webserver with the configuration below: VMware ESXi environment, cPanel installed, CentOS release 6.5 (Final), 4 CPUs, 2 GB RAM, 2x VM disks of 100 GB each, LVM. My issue is that I am getting kernel panics quite frequently. This is a list of some of the blocked processes I could see on the console: mysqld, queueprocd, httpd, suphp, vmtoolsd, loop0, auditd. These are my sar logs:
        Linux 2.6.32-431.3.1.el6.x86_64 (test01)  08/22/2014  _x86_64_  (4 CPU)
        12:00:01 AM  CPU  %user  %nice  %system  %iowait  %steal  %idle
        12:10:01 AM  all  26.86   0.01     0.98     0.57    0.00  71.57
        12:20:01 AM  all   1.78   0.02     1.03     0.08    0.00  97.09
        12:30:01 AM  all  26.34   0.02     0.85     0.05    0.00  72.74
        12:40:01 AM  all  27.12   0.01     1.11     1.22    0.00  70.54
        12:50:01 AM  all   1.59   0.02     0.94     0.13    0.00  97.32
        01:00:01 AM  all  26.10   0.01     0.77     0.04    0.00  73.07
        01:10:01 AM  all  27.51   0.01     1.16     0.14    0.00  71.18
        01:20:01 AM  all   1.80   0.07     1.06     0.08    0.00  96.99
        01:30:01 AM  all  26.19   0.01     0.78     0.05    0.00  72.96
        01:40:01 AM  all  26.62   0.02     0.87     0.05    0.00  72.45
        01:50:02 AM  all   1.35   0.01     0.87     0.02    0.00  97.75
        02:00:01 AM  all  26.11   0.02     0.69     0.02    0.00  73.17
        02:10:01 AM  all  26.73   0.02     0.89     0.14    0.00  72.21
        02:20:01 AM  all   1.45   0.01     0.92     0.04    0.00  97.58
        02:30:01 AM  all  26.59   0.01     1.06     0.03    0.00  72.31
        02:40:01 AM  all  26.27   0.01     0.72     0.05    0.00  72.95
        02:50:01 AM  all   0.86   0.01     0.50     0.09    0.00  98.53
        03:00:01 AM  all  25.61   0.02     0.39     0.03    0.00  73.96
        03:10:01 AM  all  26.30   0.08     0.66     0.14    0.00  72.82
        03:20:01 AM  all   0.81   0.01     0.51     0.04    0.00  98.63
        03:30:02 AM  all  26.15   0.02     0.53     0.07    0.00  73.24
        03:40:01 AM  all  26.06   0.01     0.47     0.04    0.00  73.42
        03:50:01 AM  all   0.96   0.02     0.51     0.03    0.00  98.48
        Average:     all  17.69   0.02     0.79     0.14    0.00  81.36

        06:58:14 AM  LINUX RESTART

        07:00:01 AM  CPU  %user  %nice  %system  %iowait  %steal  %idle
        07:10:01 AM  all   1.04   0.02     0.57     0.95    0.00  97.42
        07:20:02 AM  all   0.66   0.01     0.39     0.06    0.00  98.87
        07:30:01 AM  all  25.71   0.01     0.45     0.16    0.00  73.67
        07:40:01 AM  all  25.88   0.01     0.35     0.08    0.00  73.68
    As you can see, the server became unresponsive at 03:50 AM and I had to reset the VM at 06:58 AM to fix it. dmesg does not show any relevant information. I don't see any bottleneck in sar; any idea what I can check next? Thank you very much.
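    A few things that may be worth checking next, sketched as commands (run as root inside the guest; iostat needs the sysstat package, and the VMware Tools command is only there if the tools are installed):

        dmesg | grep -i "blocked for more than"     # which tasks hang, and in which kernel code path
        iostat -xk 5 3                              # per-device await/%util, to catch slow virtual disks
        cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio
        vmware-toolbox-cmd stat balloon             # memory reclaimed by the ESXi host, if any

    Hung-task messages combined with otherwise idle CPU often point at storage latency on the ESXi datastore or host-side memory ballooning rather than anything inside the VM, so the host's view of the datastore is worth a look as well.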

    Read the article

  • IIS virtual location path

    - by Worp
    Sorry in advance, for this seems to be a very noobish question and should be easily fixable, yet I can't work it out: I am setting up Windows authentication for a website running on IIS 8.5. The Yii framework of the website takes input like http://localhost/mywebsite/index.php/site/login and processes it, using the "login" action of the "site" controller. I flipped on URL Rewrite to have nicer URLs, leaving the URL as /mywebsite/site/login. Now I need to set up Windows authentication for this location. Specifically, ONLY for this exact location. Only the /site/login location of mywebsite needs to have authentication; every other location needs to stay anonymous. Since it is a "virtual location", I don't know how to do it. I can set up Windows authentication for files, directories, virtual directories, etc., but not for virtual locations that do not map to any file, only to a controller/action in Yii. There is a working counterpart in Apache, but I can't "translate" it into IIS. I have read that IIS does have the "~" symbol, but I just can't make it work yet. Could it be used to achieve authentication on a location basis? I have also looked at virtual directories, which seem to simply be a kind of "symlink" to actual folders on the hard drive. Can they be used differently, to "create a virtual folder in a location that doesn't really exist, to manage its properties"? Help is much appreciated.

    Read the article

  • Apache config: Permissions, Directories and Locations

    - by James Murphy
    I'm trying to get my head around Apache configuration to fix a problem I'm having, but after a few hours I've decided to ask here. This is what I've got at the moment:
        DocumentRoot "/var/www/html"
        <Directory />
            Options None
            AllowOverride None
            Deny from all
        </Directory>
        <Directory /var/svn>
            Options FollowSymLinks
            AllowOverride None
            Allow from all
        </Directory>
        <Directory /opt/hg>
            Options FollowSymLinks
            AllowOverride None
            Allow from all
        </Directory>
        <Location /hg>
            AuthType Digest
            AuthName "Engage HG"
            AuthDigestProvider file
            AuthUserFile /opt/hg/hgweb.users
            Require valid-user
        </Location>
        WSGISocketPrefix /var/run/wsgi
        WSGIDaemonProcess hg processes=3 threads=15
        WSGIProcessGroup hg
        WSGIScriptAlias /hg "/opt/hg/hgweb.wsgi"
        <Location /svn>
            DAV svn
            SVNPath /var/svn/repos
            AuthType Basic
            AuthName "Subversion"
            AuthUserFile /etc/httpd/conf/users
            require valid-user
        </Location>
    I'm trying to get my head around how it's all laid out and how directories relate to locations, etc. For /hg I get asked for a password, but for /svn I get a 403 Forbidden. The error I get is:
        [client 10.80.10.169] client denied by server configuration: /var/www/html/svn
    When I remove the entry it works fine, but I can't figure out how to get it linking to the /var/svn directory.
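    A likely reading of that error is that mod_dav_svn is not actually handling the /svn location, so the request falls through to the filesystem under DocumentRoot, where the "Deny from all" on <Directory /> rejects it. A quick sanity check, sketched for a CentOS/RHEL-style layout (paths are assumptions):

        apachectl -M 2>/dev/null | grep -E 'dav_module|dav_svn'   # both must be loaded
        grep -r 'mod_dav_svn' /etc/httpd/conf.d/                  # the LoadModule line usually lives here
        tail -n 20 /var/log/httpd/error_log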

    Read the article

  • Is it okay to use an administrator account for everyday use if UAC is on?

    - by Valentin Radu
    Since I switched to Windows 7 about 3 years ago, and now using Windows 8.1, I have become familiar with the concept of User Account Control and have used my PC the following way: a standard account which I use for everyday work, and the built-in Administrator account activated and used only to elevate processes when they request it, or to "Run as administrator" applications when I need to. However, recently, after reading more about User Account Control, I started wondering whether my way of working is good, or whether I should use an administrator account for everyday work, since an administrator account is not elevated until requested by apps, or until I request it via the "Run as administrator" option. I am asking this because I read somewhere that the built-in Administrator account is a true administrator, by which I mean UAC doesn't pop up when logged in with it, and I am scared of running into problems if malicious software ever comes into play. I should mention that I do not use it on a daily basis, just when I need to elevate some apps. I barely log into it 10 times a year... So, which is better? Thanks for your answers! And Happy New Year, of course! P.S. I asked this a year ago (:P) and I think I should reiterate it: is an administrator account as safe these days as a standard account coupled with the built-in Administrator account when needed?

    Read the article

  • Preserve embedded album art when converting from .flac to .ogg

    - by Profpatsch
    I want to convert my archived .flac library to .ogg for daily use. Using find ./ -iname '*.flac' -print0 | xargs -0 -n1 oggenc -q6 on the root music folder and then deleting every .flac (I have copies of them in the archive) seems straightforward. After trying it with one file, it worked and all of the tags were transferred too, except for one: embedded album art! I always prefer embedded covers over folder images, since I have some albums with varying covers. One possible solution is discussed here, but the script only works if the image is already extracted: Embed album art in OGG through command line in linux. One possible solution I thought about was extracting the album art from every song (not every song has one, though, and some even have 2 or 3!), temporarily saving it, and then using the script to include it in the finished .ogg. But then I want to increase the number of processes xargs runs simultaneously to save time, so the temp images need to have distinct names. Is there a (Linux) program that knows how to handle this? Or is there a finished script floating around somewhere? It would be nice if oggenc supported adding embedded cover art; it really is a shame, since these two formats should (in theory) share the same tag format. Edit: 15 days and no one has even tried to answer. It’s funny, most of my questions don’t get answered. Too hard? Wrong SE site?
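    One way to keep the temp covers distinct is to have each xargs invocation go through a tiny per-file wrapper that creates its own temp file. A sketch only: "embed-cover-in-ogg.sh" stands in for the embedding script from the linked answer (not a real tool name), and only the first embedded picture is handled.

        #!/bin/sh
        # flac2ogg.sh -- usage: ./flac2ogg.sh /path/to/song.flac
        flac="$1"
        ogg="${flac%.flac}.ogg"
        cover="$(mktemp --suffix=.jpg)"              # unique per invocation, safe with xargs -P
                                                     # (the .jpg extension is a guess; data may be PNG)
        oggenc -q6 -o "$ogg" "$flac"
        if metaflac --export-picture-to="$cover" "$flac" 2>/dev/null; then
            ./embed-cover-in-ogg.sh "$ogg" "$cover"  # hypothetical embedding step
        fi
        rm -f "$cover"

        # then run it in parallel:
        # find . -iname '*.flac' -print0 | xargs -0 -n1 -P4 ./flac2ogg.sh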

    Read the article

  • Windows 7 hangs with 100% disk activity but only when online

    - by jeremy
    I have the same problem as seemingly many other people here, and I think we might all be experiencing the same issue: a compatibility issue in Windows 7 between the hard drive and the network controller or its drivers. I've tried firmware updates of my entire board and wiping my drive and reinstalling from scratch. Yet the problem persists, which suggests it is an operating system error, as the hard drive checks out 100% physically. Additionally, the only time it does not occur is in Safe Mode WITHOUT networking. With networking, there are spikes in disk access every so often and a huge flow of processes accessing the disk simultaneously that literally "stick" the disk; physically jolting my computer unsticks it. Again, this has been tested for hours in a professional service environment, and without network access on, things are fine. As soon as network access is available, the disk access occasionally cranks up to 100% and sticks everything. I'm using Microsoft Security Essentials, but this also happened under Norton, then McAfee. Again, it happened again after a complete wipe, so the likelihood of malware causing it seems low. I don't visit insecure sites anyway, as far as I know. This, to me, narrows it down to a Windows 7 process that is somehow repeatedly corrupted, perhaps a corrupt .dll or driver, causing a conflict at the operating system level and temporary hard drive failure. I would encourage anyone who knows more about this stuff (which is probably most people!) to take a shot at this one, and I would encourage anyone else with a sticking hard drive in Windows 7 64-bit to check whether it occurs during Safe Mode without networking.

    Read the article

  • How to connect to DB2 when the password ends with '!' in Windows

    - by AngocA
    I am facing a problem using the DB2 tools with a generic account whose generated password ends with a bang sign '!'. I am not allowed to change the password because it is already used by other processes. I know the user is valid and I can connect to the database with its credentials, but not from all DB2 tools. When using the Control Center it is okay. When using the Command Editor (GUI) or the Command Window, I get this error message:
        connect to WAREHOUS user administrator using !
        SQL0104N  An unexpected token "!" was found following "<identifier>".
        Expected tokens may include: "NEW".  SQLSTATE=42601
    Let's say that my password is: pass@! I am trying to use
        c:\>db2 connect to sample user administrator using "pass@!"
    or
        c:\>db2 connect to sample user administrator using pass@!
    and in both cases I get the same error message. I could change the way I connect, but it is not useful for me, for example:
        c:\>db2 connect to sample user administrator
        Enter current password for administrator:
    But I cannot use that from a batch file easily. I would like to know how I can connect from the Command Editor, in order to use this user from the graphical tools. BTW, I know that the Control Center is deprecated.

    Read the article

  • VPS Memory Exhausted Even With Light Settings

    - by user101570
    Linux noob here. I have a 256 MB VPS running Ubuntu 11.04 server, and when I run "free -m" the result shows all memory being used (including the second line re: buffers/cache). I found this very strange, considering I only have 5 Apache processes running, each chewing up about 20 MB. MySQL is taking up 30 MB. To my knowledge, and according to "top", I have no other memory hogs operating. Settings that may be relevant:
        PHP memory_limit = 32M
        MySQL key_buffer = 16M
        Prefork MPM MaxClients = 10
    So when I reviewed these settings, I naturally thought MaxClients was too high, so I tried switching it to 5. Now not only does my memory still show as 100% used, but my website loads much, much slower, despite getting no traffic aside from mine at the moment. I don't understand this. I thought a single Apache process handles all requests from a client received within the "KeepAliveTimeout" window, which I've set to 2 seconds. With my initial config of 10 MaxClients, my page load times are around .3ms, so a single process should handle that no problem, correct? So next I went to the extreme of 1 for MaxClients. My memory is still at 100% usage and my site loads painfully slowly. I'm a noob at a complete loss here. According to the many tutorials I've read on basic server setup, I should be good to go. Help! Please! Edit:
                     total   used   free   shared   buffers   cached
        Mem:           256    256      0        0         0        0
        -/+ buffers/cache:    256      0
        Swap:            0      0      0
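    It may be worth confirming where the memory is really going before tuning further. A sketch; the beancounters file only exists on OpenVZ/Virtuozzo containers, where "free" showing zero buffers/cache is normal and the real limits (and failure counts) live there:

        ps aux --sort=-rss | head -15                        # biggest resident processes
        cat /proc/user_beancounters 2>/dev/null | grep -E 'resource|privvmpages|failcnt'
        apache2ctl -V 2>/dev/null | grep -i mpm              # confirm which MPM is actually in use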

    Read the article

  • NFS-shared file-system is locking up

    - by fredden
    Our NFS-shared file system is locking up. Please feel free to ask any questions you feel are relevant. :) When it happens, there are a lot of processes in "disk sleep" state and the load averages on our machines sky-rocket. The machines are responsive over SSH, but the majority of our websites (apache + mod_php) just hang, as does our email system (exim + dovecot). Any websites which don't require write access to the file system continue to operate. The load averages continue to rise until some kind of time-out is reached, but for at least 10-15 minutes. I've seen load averages over 800, yet the machines are still responsive for actions which don't require writing to the shared file system. I've been investigating a variety of options, which have all turned out to be red herrings: nagios, proftpd, bind, cron tasks. I'm seeing these messages in the file server's system log:
        Jul 30 09:37:17 fs0 kernel: [1810036.560046] statd: server localhost not responding, timed out
        Jul 30 09:37:17 fs0 kernel: [1810036.560053] nsm_mon_unmon: rpc failed, status=-5
        Jul 30 09:37:17 fs0 kernel: [1810036.560064] lockd: cannot monitor node2
        Jul 30 09:38:22 fs0 kernel: [1810101.384027] statd: server localhost not responding, timed out
        Jul 30 09:38:22 fs0 kernel: [1810101.384033] nsm_mon_unmon: rpc failed, status=-5
        Jul 30 09:38:22 fs0 kernel: [1810101.384044] lockd: cannot monitor node0
    Software involved: VMware, Debian lenny (64-bit), ancient Red Hat (32-bit, version 7 I believe), Debian etch (32-bit), NFS, apache2 + mod_php, exim, dovecot, bind, amanda, proftpd, nagios, cacti, drbd, heartbeat, keepalived, LVS, cron, ssmtp, NIS, svn, puppet, memcache, mysql, postgres, Joomla!, Magento, Typo3, Midgard, Symfony, custom PHP apps.
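    Those log lines point at NLM/NSM locking (lockd talking to rpc.statd) rather than NFS data traffic itself: lockd on the file server cannot reach statd on localhost, so lock requests from the clients stall. A sketch of things to check on the file server (Debian paths assumed):

        rpcinfo -p localhost | grep -E 'status|nlockmgr'   # are statd and lockd registered with portmap?
        ps ax | grep '[r]pc.statd'                         # is rpc.statd actually running?
        /etc/init.d/nfs-common restart                     # on Debian this (re)starts rpc.statd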

    Read the article

  • rpmbuild gives seg fault

    - by Deepti Jain
    I am trying to build an RPM using the rpmbuild tool. I have source code whose build produces around 30 GB of binaries. The software for which I am making the RPM has dozens of executables. When I copy only the binaries of a single executable (e.g. init), my RPM builds successfully. But when I dump the entire build into the RPM, rpmbuild does everything but gives a segfault at the end. Here is my spec file:
        # This is a sample spec file for wget
        %define _topdir /root/mywget
        %define name source
        %define release 1
        %define version 1.12
        %define _builddir /root/mywget/BUILD/glenlivet
        %define _buildrootdir /root/mywget/BUILDROOT
        %define _buildroot /root/mywget/BUILDROOT
        %define _sourcedir /root/mywget/SOURCES

        BuildRoot: %{_buildroot}
        Summary: GNU source
        License: GPL
        Name: %{name}
        Version: %{version}
        Release: %{release}
        Source: %{name}-%{version}.tar.gz
        Prefix: /usr
        Group: Development/Tools

        %description
        The GNU sample program downloads files from the Internet using the command-line.

        %prep
        %setup -q -n glenlivet

        %build
        cd %{_builddir}
        make all

        %install
        rm -rf %{_buildrootdir}
        mkdir -p %{_buildrootdir}/bin
        cp -p -r %{_builddir}/build/obj-x64/* %{_buildrootdir}/bin/

        %files
        %defattr(-,root,root)
        /bin/*
    If I only copy some of the binaries (let's say one utility and its dependent binaries) it works fine. But when I try to copy the entire build, I get a segfault. I get the segfault after rpmbuild has executed the %prep, %build and %install sections. rpmbuild also processes my source file:
        Processing files: source-1.12-1
        Finding Provides:
        Finding Requires:
        Finding Supplements:
        Provides: ......
        Requires: ......
        Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT
        Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT
        Segmentation fault
    Any clue what is going wrong, or where rpmbuild fails? Thanks in advance.
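    A couple of ways to see where rpmbuild actually dies, sketched below (adjust the spec path to your layout; the trace file name is an example). With a 30 GB payload it may also be worth testing whether the crash is size-related by bisecting the %files list, since very large payloads have historically run into rpm/cpio size limits.

        ulimit -c unlimited                                    # allow a core dump from the crash
        strace -f -o /tmp/rpmbuild.trace rpmbuild -ba /root/mywget/SPECS/source.spec
        tail -50 /tmp/rpmbuild.trace                           # the last syscalls before the SIGSEGV
        ls -l core*                                            # inspect with gdb if a core was written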

    Read the article

  • 404 not found error for virtual host

    - by qubit
    Hello, in my /etc/apache2/sites-enabled I have a file site2.com.conf, which defines a virtual host as follows:
        <VirtualHost *:80>
            ServerAdmin hostmaster@wharfage
            ServerName site2.com
            ServerAlias www.site2.com site2.com
            DirectoryIndex index.html index.htm index.php
            DocumentRoot /var/www
            LogLevel debug
            ErrorLog /var/log/apache2/site2_error.log
            CustomLog /var/log/apache2/site2_access.log combined
            ServerSignature Off
            <Location />
                Options -Indexes
            </Location>
            Alias /favicon.ico /srv/site2/static/favicon.ico
            Alias /static /srv/site2/static
            # Alias /media /usr/local/lib/python2.5/site-packages/django/contrib/admin/media
            Alias /admin/media /var/lib/python-support/python2.5/django/contrib/admin/media
            WSGIScriptAlias / /srv/site2/wsgi/django.wsgi
            WSGIDaemonProcess site2 user=samj group=samj processes=1 threads=10
            WSGIProcessGroup site2
        </VirtualHost>
    I do the following to enable the site: 1) in /etc/apache2/sites-enabled I run the command a2ensite site2.com.conf; 2) I then get a message that the site was successfully enabled, and then I run /etc/init.d/apache2 reload. But if I navigate to www.site2.com, I get 404 Not Found. I do have an index.html in /var/www (permissions 777, ownership www-data:www-data), and I have also verified that a symlink was created for site2.com.conf in /etc/apache2/sites-enabled. Any way to fix this? Thank you.
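    One thing to keep in mind: with WSGIScriptAlias / every URL, including /, is handed to the Django application, so the index.html in /var/www is never consulted and the 404 most likely comes from Django's URL resolver rather than from Apache. A sketch of how to confirm which side is answering:

        apache2ctl -S                                   # is this vhost actually selected for site2.com?
        curl -sI http://www.site2.com/ | head -5        # status line and headers for the 404
        tail -n 20 /var/log/apache2/site2_error.log     # with LogLevel debug you can see the WSGI handler run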

    Read the article

  • SSD for swap on Ubuntu server

    - by grs
    Currently I am reading SSD reviews and I wonder exactly how much I would benefit if I moved the 24 GB swap from a 7200 rpm HDD to an SSD. Has anyone implemented swap space on an SSD? Is this generally a good idea? On a side note: I read that ext4 has much better performance if the journal is on an SSD. Anyone with such a setup? Thanks! Edit: Here I will answer the questions posted: Occasionally, relatively rarely, I hit swap. I know what swap is for and that it is better to get more RAM. When the server begins to swap, its performance degrades (not a surprise). The idea is, if I have a few memory-hungry processes running, to improve the overall system performance at that time by using an SSD for swap instead of slower rotational media. In the end, I want to be able to log in faster and check the server state during swapping, instead of waiting at the login prompt. And from what I see, SSD is cheaper per GB than RAM. Would I have better server performance during swapping (as rare as it is) using an SSD compared to an HDD? Where would 10k or 15k rpm HDDs rate in this scenario? Thank you all for your quick and prompt answers!
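    If you do try it, swap priorities let the kernel prefer the SSD partition and keep the existing HDD swap as overflow. A sketch only; the device names are examples:

        mkswap /dev/sdb2                    # the SSD partition
        swapon -p 10 /dev/sdb2              # higher priority = used first
        swapon -p 5  /dev/sda3              # existing HDD swap as fallback
        swapon -s                           # verify both are active with their priorities
        # /etc/fstab equivalents:
        #   /dev/sdb2  none  swap  sw,pri=10  0  0
        #   /dev/sda3  none  swap  sw,pri=5   0  0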

    Read the article

  • How to autorun wpa_supplicant on Debian startup

    - by The Electric Muffin
    I'd like to run wpa_supplicant -D wext -i wlan0 -c /etc/wpa_supplicant.conf on Debian startup (runlevels 2-5). I found some vague instructions in a related question that said to put a script in /etc/init.d/ and then symlink to it from the appropriate /etc/rcRUNLEVEL.d/ directories. However, I noticed that there are already some files named "wpasupplicant" that probably run at startup:
        /etc/network/if-down.d/wpasupplicant
        /etc/network/if-post-down.d/wpasupplicant
        /etc/network/if-pre-up.d/wpasupplicant
        /etc/network/if-up.d/wpasupplicant
    They are all symlinks to the same script, /etc/wpa_supplicant/ifupdown.sh. It has a comment at the beginning saying it "[...] allows ifup(8), and ifdown(8) to manage wpa_supplicant(8) and wpa_cli(8) processes running in daemon mode." However, the closest it gets to calling wpa_supplicant itself is (in functions.sh):
        WPA_SUP_BIN="/sbin/wpa_supplicant"
        [snip]
        start-stop-daemon --start --oknodo $DAEMON_VERBOSITY \
            --name $WPA_SUP_PNAME --startas $WPA_SUP_BIN --pidfile $WPA_SUP_PIDFILE \
            -- $WPA_SUP_OPTIONS $WPA_SUP_CONF
        [snip]
        start-stop-daemon --stop --oknodo $DAEMON_VERBOSITY \
            --exec $WPA_SUP_BIN --pidfile $WPA_SUP_PIDFILE
    Does that mean it's safe to make an init.d script for wpa_supplicant, and if so, what would it look like? General info: Debian Squeeze (5.0), official wpasupplicant package (v0.6.10-2.1). The full contents of my system's functions.sh and ifupdown.sh are here (dependent, of course, on my system's uptime; it's a five-year-old laptop that greatly enjoys overheating): functions.sh ifupdown.sh
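    For what it's worth, those hooks are designed so that a "wpa-conf /etc/wpa_supplicant.conf" line in the wlan0 stanza of /etc/network/interfaces starts wpa_supplicant for you, which is usually the cleaner route. If you do want a standalone init script, a minimal sketch could look like this (the script name, pidfile path and runlevels are my choices, not anything shipped by the package):

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          wpa-supplicant-wlan0
        # Required-Start:    $local_fs
        # Required-Stop:     $local_fs
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: wpa_supplicant on wlan0
        ### END INIT INFO

        DAEMON=/sbin/wpa_supplicant
        PIDFILE=/var/run/wpa_supplicant.wlan0.pid
        OPTIONS="-B -P $PIDFILE -D wext -i wlan0 -c /etc/wpa_supplicant.conf"

        case "$1" in
          start)
            # -B backgrounds the daemon, -P writes the pidfile start-stop-daemon tracks
            start-stop-daemon --start --oknodo --pidfile "$PIDFILE" --exec "$DAEMON" -- $OPTIONS
            ;;
          stop)
            start-stop-daemon --stop --oknodo --pidfile "$PIDFILE" --exec "$DAEMON"
            ;;
          restart)
            "$0" stop; "$0" start
            ;;
          *)
            echo "Usage: $0 {start|stop|restart}" >&2; exit 1
            ;;
        esac
        exit 0

    Installed as /etc/init.d/wpa-supplicant-wlan0 and registered with "update-rc.d wpa-supplicant-wlan0 defaults", it covers runlevels 2-5.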

    Read the article

  • All of the NTFS hard links damaged, where are hardlinks stored and how to recover them?

    - by String Xu
    This is Windows 7 x64 SP1 on an NTFS file system. All hard links within the C:\Windows\System32 folder have disappeared, and Windows can't boot, because even the OS loader, C:\Windows\System32\boot\Winload.exe, is gone. Nevertheless, the original files are still located in the corresponding C:\Windows\winsxs folders. After booting into the Recovery Environment and copying a Winload.exe (x64) from another folder, Windows gave an error pointing out that "ntoskrnl.exe is corrupted or missing... its file digital signature cannot be verified". When trying to boot in Safe Mode, the message above was shown after a screen prompting "Loaded \Windows\system32\config\system". Because smss.exe is not yet loaded at this early boot stage, there are no dumps or logs. Based on my study, ntoskrnl.exe depends on the following files:
        C:\Windows\System32\PSHED.DLL
        C:\Windows\System32\hal.dll
        C:\Windows\System32\kdcom.dll
        C:\Windows\System32\clfs.sys
        C:\Windows\System32\ci.dll
    I copied all of the files above from their corresponding folders and verified their MD5 sums against a well-operating Windows 7 x64 SP1, but the boot error is still the same: "ntoskrnl.exe is corrupted or missing...". Background: Before the reboot, there was a Windows update going on. Then something unknown happened; almost all processes failed to run, including the Windows Task Manager, taskmgr.exe. After mounting the hard disk on another computer, it seems that all hard links within the C:\Windows\System32 folder are gone. I tried several data-recovery programs, but they were not able to find the disappeared NTFS hard links. So the question is: where is the information about those hard links stored? And how can I recover them? Do they depend on some Windows service, or are they stored in the registry?

    Read the article

  • All of the NTFS hard links disappear, where are hardlinks stored on disk and how to recover them?

    - by Osiris
    This is Windows 7 x64 SP1 on an NTFS file system. All hard links within the C:\Windows\System32 folder have disappeared, and Windows can't boot, because even the OS loader, C:\Windows\System32\boot\Winload.exe, is gone. Nevertheless, the original files are still located in the corresponding C:\Windows\winsxs folders. After booting into the Recovery Environment and copying a Winload.exe (x64) from another folder, Windows gave an error pointing out that "ntoskrnl.exe is corrupted or missing... its file digital signature cannot be verified". When trying to boot in Safe Mode, the message above was shown after a screen prompting "Loaded \Windows\system32\config\system". Because smss.exe is not yet loaded at this early boot stage, there are no dumps or logs. Based on my study, ntoskrnl.exe depends on the following files:
        C:\Windows\System32\PSHED.DLL
        C:\Windows\System32\hal.dll
        C:\Windows\System32\kdcom.dll
        C:\Windows\System32\clfs.sys
        C:\Windows\System32\ci.dll
    I copied all of the files above from their corresponding folders and verified their MD5 sums against a well-operating Windows 7 x64 SP1, but the boot error is still the same: "ntoskrnl.exe is corrupted or missing...". Background: Before the reboot, there was a Windows update going on. Then something unknown happened; almost all processes failed to run, including the Windows Task Manager, taskmgr.exe. After mounting the hard disk on another computer, it seems that all hard links within the C:\Windows\System32 folder are gone. I tried several data-recovery programs, but they were not able to find the disappeared NTFS hard links. So the question is: where is the information about those hard links stored? And how can I recover them? Do they depend on some Windows service, or are they stored in the registry?

    Read the article

  • How do you set max execution time of PHP's CLI component?

    - by cwd
    How do you set the max execution time of PHP's CLI component? I have a CLI script that has gone into an infinite loop and I'm not sure how to kill it without restarting. I used Quicksilver to launch it, so I can't press Control+C at the command line. I tried running ps -A (show all processes) but php is not showing up in that list, so perhaps it has timed out on its own; but how do you manually set the time limit? I tried to find information about where I should set the max_execution_time setting. I'm used to setting this for the version of PHP that runs with Apache, but I have no idea where to set it for the version of PHP that lives in /usr/bin. I did see the following quote, which does seem to be accurate (see screenshot below), but having an unlimited execution time doesn't seem like a good idea. "Keep in mind that for CLI SAPI max_execution_time is hardcoded to 0. So it seems to be changed by ini_set or set_time_limit but it isn't, actually. The only references I've found to this strange decision are deep in the bugtracker (http://bugs.php.net/37306) and in php.ini (comments for the 'max_execution_time' directive)." (via http://php.net/manual/en/function.set-time-limit.php) ini_set('max_execution_time') has no effect. I also tried the same thing and got the same result with set_time_limit(7).
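    To find and kill the runaway script from a terminal, something like the following works on both OS X and Linux (the script path is an example). Since the CLI SAPI pins max_execution_time to 0, an external watchdog is the more reliable guard for future runs:

        ps axww | grep -i '[p]hp'            # shows full command lines, including the script path
        pgrep -fl php                        # same idea, if pgrep is available
        kill <pid>                           # escalate to kill -9 <pid> only if TERM is ignored
        # for future runs (GNU coreutils 'timeout'; 'gtimeout' from the coreutils package on OS X):
        timeout 60 php /path/to/script.php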

    Read the article

  • Windows 7 Paging file apparently not being used

    - by Daniel F.
    I'm running Windows 7 Home Premium 32-bit on a motherboard with 24 GB of RAM. Of those 24 GB, 20 GB are assigned as a RAM disk via ASRock XFastRAM. This RAM disk has the drive letter X assigned to it. On X:\ I'm storing the temporary files folder as well as pagefile.sys. Pagefile.sys is 6 GB in size. X:\ usually has around 14 GB of free space, so the temporary files are negligible; it's mostly the browsers which store their caches on there. Now my issue is that Firefox is crashing a lot on me. No error message pops up, but I know this is because it's out of memory. I could kind of live with that, but now that I've switched from Eclipse to Android Studio, I know that I'm in trouble, because Java isn't capable of allocating what it needs, and Android Studio, together with the Java instances it launches, is quite a memory hog. So I tried to figure out what's wrong, and apparently Windows isn't swapping memory out to the paging file. While my applications are crashing (Firefox) or not starting (Java VMs), the paging file is constantly using only around 15% of its size (checked with Performance Monitor); 15% equals roughly 1 GB. I know that the correct solution would be to switch to 64-bit Windows, but I had to use the 32-bit version because of driver issues I had about two years ago, and I guess I'll have them again if I reformat and install the 64-bit version. Also, the machine is running quite stably; the only issue is the memory, so I'd like to use it as it is (with the apps installed and configured). Is there a way to make Windows use the paging file more efficiently? None of my processes require more than 1 GB; I'd just like it to swap out some seldom-used stuff, like GoogleCrashHandler.exe and the like, in order to have "more physical memory available". Is that possible?

    Read the article

  • Win7 'locking' process/files/folders?

    - by Dynde
    I've had a fair bit of problems with files/folders/processes sometimes being 'locked' by Windows. The weird thing is, it's not the traditional kind of lock, I think, where tools like UnlockIT and WhoLockMe would work. It seems that just giving it a little time often helps, making me think it could be the HDD, the memory, or something in Windows. A scenario: I go into a folder (without opening anything at all), go back up, and cut or drag-move the folder somewhere else; it says "Action can't be completed because the folder or a file in it is open in another program". After waiting sometimes 20 seconds, sometimes a little longer, I can move it. Another scenario is deleting a bunch of files in a folder; it appears that everything is gone, but then suddenly, after a few seconds, an .exe file pops back up and I can't delete it. After waiting a few minutes and pressing refresh, it's gone. I have the strangest feeling that there's a problem with either the HDD or the memory. I already tried disabling the Windows indexing service, with no luck. Does anyone have any ideas? EDIT: I should say that I have a very fast system: 16 GB DDR3 RAM, i7-2600K CPU, SSD main drive, so I really should not be experiencing the sort of problems where one might say it's "reasonable" for the system not to respond right away. Edit 2: And I updated the SSD firmware a couple of months ago, so it shouldn't be buggy early firmware either.

    Read the article

  • pfsense 2.0 traffic priority - set full priority for single host

    - by Waxhead
    I have a network with several computers all on the same network, and since I have very limited bandwidth I would like to prioritize traffic almost like a CPU scheduler prioritizes processes. Example: Computer A: used for web stuff: YouTube, downloads, news, email, etc. Computer B: transferring files over HTTP. Computer C: transferring files over FTP, rsync, whatever. What I would like to do is to give A up to, for example, 90% of the available bandwidth IF A requires it. The leftover (10%) is divided between B and C (5% each if both are busy). If A is not utilizing all the bandwidth, then of course B and C should share the full bandwidth (50% each as long as both are maxing out their bandwidth). All computers are on the same network (192.168.1.0 - 192.168.1.10, for example). I'd appreciate it if anyone could shed some light on how I should set up my network to achieve this. To be honest, I actually need a step-by-step guide on how to set this up. Network setup: ADSL modem configured in bridge mode (1500 kbps / 300 kbps):
        [ADSL modem (bridge)] <- [pfSense 2.0] <- [switch] <- [Computers A, B, C... etc.]

    Read the article

  • Is there a utility to visualise / isolate and watch application calls

    - by MyStream
    Note: I'm not sure what to search for, so guidance on that may be just as valuable as an answer. I'm looking for a way to visually compare the activity of two applications (in this case a webserver with PHP communicating with the system, MySQL, network devices, etc.) such that I can compare the performance at a glance. I know there are tools that generate data dumps from benchmarks for Apache, and some tracing tools for PHP that you can dump and analyse, but what I'm looking for is something that can report performance metrics visually from data on calls (what called what, how long did it take, how much memory did it consume, how can that be represented visually in a call stack) and present it graphically, as if it were a topology or layered visual, with different elements of system calls occupying different layers. A typical visual might consist of (e.g. using swim-lane diagrams as just one analogy):
        Network (details here relevant to network diagnostics)
          | request in                          ^ back out
          v                                     |
        Linux (details here related to firewall/routing diagnostics)
          |                                     ^ back to network
          v                                     |
        Apache (details here related to the web request)
          |                                     ^ response back to Apache
          v                                     |
        PHP (etc.)  ----------> other accesses to PHP files/resources
          |                                     ^
          v                                     |
        MySQL (total time): each call listed + time + tables hit / records returned
    My aim would be to be able to 'inspect' a request, or a range of requests over a period of time, to see what constituted the activity at that point in time and trace it from beginning to end as a diagnostic tool. Is there any such work in this direction? I realise it would be intensive on the server, but the intention is to benchmark and analyse processes against each other for both educational and professional reasons, and a visual aid is a great eye-opener compared to raw statistics or dozens of discrete activity-vs-time graphs. It's hard to show the full cycle. Any pointers welcome. Thanks! FROM COMMENTS: XHProf in conjunction with other programs such as the Percona Toolkit (percona.com/doc/percona-toolkit/2.0/pt-pmp.html) for MySQL; run Apache with httpd -X & (single-threaded debug mode, in the background), then attach with strace -> KCachegrind.
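    A rough sketch of the single-process tracing workflow from that comment, run as root (the binary is apache2 rather than httpd on Debian-family systems; the URL and file paths are examples):

        apachectl stop
        httpd -X &                                      # one worker, no forking, easy to attach to
        strace -ttT -f -o /tmp/request.trace -p $! &
        curl -s http://localhost/index.php > /dev/null
        less /tmp/request.trace                         # every syscall with wall-clock time and duration
        # pt-pmp from the Percona Toolkit gives a comparable aggregate stack view for mysqld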

    Read the article

  • How to have PHP and mod_wsgi python app on the same domain?

    - by Lazik
    I am using Apache with mod_wsgi (Python 3) on Ubuntu 12.04. I have a Python app (bottle) which is at www.mysite.com/. In my Python app I have routes like www.mysite.com/abbb?q=blab. I would like the path www.mysite.com/forum to resolve to a PHP app (Simple Machines Forum). Ideally I would like to use Apache to handle the forum part and pass it to PHP (instead of coding it in the Python app); I don't know if that's possible. I'm new to this. I have read https://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines#The_Apache_Alias_Directive but I don't understand how to use it. Here is my Apache conf for the mod_wsgi app; I don't know how to specify the PHP portion.
        <VirtualHost *:80>
            ServerName www.ex.com
            ServerAlias ex.com *.ex.com
            RewriteEngine On
            RewriteCond %{HTTP_HOST} !^www\.
            RewriteRule ^(.*)$ http://www.%{HTTP_HOST}$1 [R=301,L]
            WSGIDaemonProcess ex user=www-data group=www-data processes=1 threads=5
            WSGIScriptAlias / /var/www/vhosts/ex/app.wsgi
            <Directory /var/www/vhosts/ex>
                WSGIProcessGroup ex
                WSGIApplicationGroup %{GLOBAL}
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>
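    The page linked above describes the usual approach: an Alias directive takes precedence over WSGIScriptAlias, so a /forum alias carved out inside the same VirtualHost never reaches the bottle app and can be served by mod_php instead. A sketch only, with an assumed install path for the forum:

        Alias /forum /var/www/forum
        <Directory /var/www/forum>
            Options FollowSymLinks
            DirectoryIndex index.php
            Order deny,allow
            Allow from all
        </Directory>

    This block goes inside the existing <VirtualHost *:80>, and mod_php (libapache2-mod-php5 on 12.04) must be enabled so the .php files are executed rather than downloaded.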

    Read the article
