Search Results

Search found 39047 results on 1562 pages for 'process control'.

  • What is the 'best practice' for installing perl modules on Solaris/OpenSolaris?

    - by AndrewR
    I'm currently in the process of writing setup instructions for some software I've written that is implemented as a set of Perl modules. Having done this for various flavours of Linux, I'm now doing the same for Solaris/OpenSolaris (v10 only). Part of the setup process is to make sure that dependent Perl modules are installed. This has been pretty easy on Linux, as the Perl modules I require tend to be within the distro's packaging system (e.g. yum install perl-Cache-Cache). This is not the case on Solaris, so I'm working on setup instructions that use the CPAN module to fetch dependent modules (e.g. perl -MCPAN -e 'install Cache::Cache'). This works OK, but there are known problems with modules that require things to be built with a C compiler: the generated Makefile assumes you're using Sun's compiler and uses command-line options not understood by gcc, which you may be using instead. Consulting the Internet has thrown up a number of solutions: (1) install and use Sun's compiler, (2) use the perlgcc wrapper script, or (3) edit the makefiles by hand (yuk). All of these work. My question to those more familiar with Solaris than me is: is one of these the 'best' or 'most commonly used' method?
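
    To illustrate the perlgcc route, a minimal sketch assuming the module has already been unpacked by hand; the directory name, /usr/perl5/bin path and use of gmake are assumptions and may differ on your system:

        # Build a CPAN module with gcc via the perlgcc wrapper instead of Sun's cc
        cd Cache-Cache-1.06                  # hypothetical unpacked module directory
        /usr/perl5/bin/perlgcc Makefile.PL   # generate a Makefile with gcc-compatible flags
        gmake && gmake test && gmake install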

    Read the article

  • Virus cleanup + dying drive = XP Automatic Updates crashing in esent.dll

    - by quack quixote
    Background: I'm doing system recovery on an old WinXP SP1 system brought to me on suspicion of virus infection. After taking preliminary backups, I used MalwareBytes to detect and clean the infection. I might've even gotten it all. In the process, I've discovered (a) the system drive is showing signs of impending failure, and (b) the owner has been using the system's old crusty IE-6 instead of the up-to-date Firefox I've provided for him. So naturally, thinking I had a relatively stable system, I tried to hit the Windows Update site to install IE-8, in case further training doesn't stick. The update site told me it needed to update the installer, and I started that process. Soon after, wuauclt.exe started crashing, reporting addresses in module esent.dll. There's a Microsoft KB (910437) on a problem with that DLL, so I downloaded the hotfix and installed it. The crashing did not stop. I attempted to install SP3 from the offline installer, but that didn't fix the issue either. The system is reporting a few hard drive / IDE controller errors, but they don't correlate to the crashes, so they aren't the direct cause. I've also attempted to roll back to the time between the infection removal and the first crashes, but that doesn't help.

    Question: The hotfix I tried to install dealt with a problem in the transaction logs of the Extensible Storage Engine (ESE) database. I suspect this issue is similar, but that the database itself (whatever the ESE database is) is corrupted. Is there a way to clean or clear this database so that system operation returns to normal? Can someone enlighten me as to what the ESE database actually is, and where it resides? Can I just locate some files and delete them to bring this under control?
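
    One commonly suggested approach, sketched below, assumes the ESE database involved is the Windows Update datastore under %windir%\SoftwareDistribution; renaming the folder forces Windows Update to rebuild it from scratch (run from an administrator command prompt, and treat the folder location as an assumption):

        rem Stop Automatic Updates, move its datastore aside, then restart the service
        net stop wuauserv
        ren %windir%\SoftwareDistribution SoftwareDistribution.old
        net start wuauserv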

    Read the article

  • Browser says "Waiting for www.xyz.com" for a very long time

    - by Phil
    When I load my website (hosted with Ipage), the browser often takes an incredibly long time saying "Waiting for www.xyz.com ..." before any elements of the site actually appear. After this "Waiting for" phase, the text, images and everything else load quite fast. I contacted my host with my tracert result and they said they optimized my website database and increased the memory available to PHP on my account to 64 MB. They also said they "have checked the issue by accessing my website and found that it is loading fine without any slowness. It seems to be a temporary issue. Please try to access your website with different browser and network." I tried different browsers and networks, but this "Waiting for" phase always takes too long. My website is http://www.surreyextra.com/ . It's WordPress and BuddyPress. I'm in the UK while the Ipage host is located in the USA; could this potentially be a problem? I have tried a number of optimizations, like minifying my CSS and JS files and using caching, but the problem hasn't improved. So is it my host's fault, and should I contact them again?

    Read the article

  • Migrating to Windows Server 2008 R2 Domain Controllers - a few Questions/Issues

    - by Chris
    Ok so here's our setup: We have 2 Windows 2003 Domain Controllers. I am trying to replace them with Windows 2008 R2. The 2003 servers are named DC01 and DC02. The 2008 R2 servers are DC1 and DC2. I prepared the Windows Server 2003 forest schema for a domain controller that runs Windows Server 2008 or Windows Server 2008 R2. Then, with both of the new servers up as member servers, I ran dcpromo on DC1 using the advanced option and added it successfully to my existing domain. Its roles are GC, DNS and Active Directory Domain Services. I transferred the PDC Emulator, RID Pool Manager, and Infrastructure Master roles to DC1. The Schema Master and Domain Naming Master are still on DC01. The first issue I'm encountering is that when I dcpromo DC2 and select "Replicate data over the network from an existing domain controller", choosing to replicate from DC1, I get the following error: "Failed to identify the requested replica partner (dc1.xxx.org) as a valid domain controller with a machine account for (DC2$). This is likely due to either the machine account not being replicated to this domain controller because of replication latency or the domain controller not advertising the Active Directory Domain Services. Please consider retrying the operation with \dc01.xxx.org as the replica partner. The server is unwilling to process the request." Is this because the Schema Master and Domain Naming Master roles are still on the old DC01? And if so, if I transfer the Schema Master and Domain Naming Master roles to DC1, what is the risk of breaking my AD? I'm a little paranoid because this process HAS to be transparent. ANY downtime or interruption will result in me getting a verbal ass kicking from my I.T. Director. Both of the new servers' DNS settings point to the old DNS servers (DC01 and DC02), not to themselves, by the way.
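
    As a quick sanity check before moving the remaining roles, a sketch of two commands that report the current FSMO role holders and overall replication health (run from any DC; server names follow the question):

        rem Show which DCs currently hold the five FSMO roles
        netdom query fsmo
        rem Summarise replication status and failures across all DCs
        repadmin /replsummary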

    Read the article

  • Site hanging in iis7 - how do I troubleshoot?

    - by Chris Foot
    I am currently having a problem with a Windows 2008 server running IIS 7. The server runs several sites but only seems to have the issue with one particular site. Every so often, the whole server slows to a crawl with nearly all requests timing out! Invariably, when we log in to take a look there is always an IIS process using up around 90% CPU. Looking into the worker processes in IIS there are usually one or two requests that have been running for a long time. They are always in the ExecuteRequestHandler state with ManagedPipeline as the module name, and the current ones I'm looking at have been running for 7686248 (what units is this in? it doesn't say). It is also not always the same page; in fact, we have seen at least 3 different pages listed under "url" when this has happened. It seems that the only way to bring the server back to life is to kill the 90% process! The site is running under .NET 4.0 and the code on it is very similar to other sites on the server which do not have the problem! How do I start troubleshooting this?
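
    One low-overhead way to catch the offender in the act is to list currently executing requests from the command line the next time the CPU spikes; a sketch (the elapsed time appcmd reports for each request is, as far as I know, in milliseconds, so 7686248 would be a bit over two hours):

        rem List the requests currently executing in all worker processes
        %windir%\system32\inetsrv\appcmd.exe list requests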

    Read the article

  • Requesting better explanation for expires headers

    - by syn4k
    I have successfully implemented expires headers; however, for several days I have been stumped by one thing. This article: http://www.tipsandtricks-hq.com/how-to-add-far-future-expires-headers-to-your-wordpress-site-1533 states: "Keep in mind that when you use expires header the files are cached in the browser until it expires so do not use this on files that changes frequently." Other sites indicate the same in my reading. But this doesn't seem to be true. I have updated an image, using the same name, several times. Each time I update and refresh my browser, the new image (with the same name) displays. I understand from this article that the old image should display unless I use a new name. Do you happen to know where the misunderstanding is? I have verified that the image in question has expires headers set on it:

    Request Headers:
        Host: domain.com
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.28) Gecko/20120306 Firefox/3.6.28 FirePHP/0.5
        Accept: image/png,image/*;q=0.8,*/*;q=0.5
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://domain.com/index.php
        Cookie: __utma=1.61479883.1332439113.1332783348.1332796726.4; __utmz=1.1332439113.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none);PHPSESSID=lv2hun9klt2nhrdkdbqt8abug7; __utmb=1.33.10.1332796726; __utmc=1; ck_authorized=true
        x-insight: activate
        If-Modified-Since: Mon, 26 Mar 2012 21:55:33 GMT
        Cache-Control: max-age=0

    Response Headers:
        Date: Mon, 26 Mar 2012 22:06:50 GMT
        Server: Apache/2.2.3 (CentOS)
        Connection: close
        Expires: Wed, 25 Apr 2012 22:06:50 GMT
        Cache-Control: max-age=2592000
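
    Part of the answer is likely visible in the captured request: a manual browser refresh sends Cache-Control: max-age=0 together with If-Modified-Since, which forces revalidation even inside the Expires window, so a refresh will always pick up the new image. To inspect what the server actually sends, a quick sketch (the image URL is a placeholder):

        # Inspect the caching headers served for the image
        curl -sI http://domain.com/wp-content/uploads/example.png | grep -iE 'expires|cache-control|last-modified|etag'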

    Read the article

  • How do I restore a Windows Server 2008 R2 bare metal backup to a Windows Server 2012 R2 Hyper-V instance?

    - by Michael J. Gray
    I have been trying to find a simple way to migrate a physical Windows Server 2008 R2 installation over to a virtual machine hosted on Windows Server 2012 R2 Datacenter Edition with Hyper-V. I came across the bare metal backup feature on Windows Server 2008 R2 and assumed I would be able to easily back it up and simply restore it into a new virtual machine by booting the installation media and getting into the Windows recovery process. When I attempted this, Hyper-V got into a network-based restore process, but I do not have a PXE server or anything like that and I would rather not set one up. I tried mounting the VHD produced in the bare metal backup, just to see if it would somehow work, but it of course did not and failed with an error related to an incorrect boot device. I checked the virtual machine's BIOS settings and everything looked fine. I did not expect this to work anyway, so I stopped pursuing this method any further. Is there a way to take my bare metal backup and restore it into a virtual machine without a PXE server or SCVMM? I am open to using proprietary tools, but since the last time I did this I used Norton Ghost, which is no longer supported, I figured I would try doing it with what is readily available.
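
    An alternative worth considering is a physical-to-virtual conversion rather than a bare metal restore; a sketch using the Sysinternals Disk2vhd tool on the running 2008 R2 box (the drive letter and output path are assumptions), after which the resulting VHD can be attached to a new Generation 1 Hyper-V virtual machine:

        rem Capture the running system volume into a VHD usable by Hyper-V
        disk2vhd.exe C: D:\export\server2008r2.vhd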

    Read the article

  • How can one make vim change terminal colors?

    - by amn
    I am using command-line vim running from an xterm (which runs sh). I have color in vim according to a color scheme I like. The problem is, as usual with 256-color terminals and truecolor color schemes, the colors are wrong. Now, I know I can do a gazillion things to fix this, including installing gvim, but I like my terminal. In fact, using xrdb [-merge] with an .Xresources file, I now actually have xterm override the color values, and the theme now looks perfect. Since I may be switching to another theme, I need some workflow to have vim actually do what xrdb does - reset the terminal color palette. Because right now I have to reset color values with xrdb ... first, then launch another xterm to actually use these values, then launch vim from that newly opened xterm to get the exact colors. The way I understand it is that a vim color scheme, like any other terminal application, uses colors by referencing their ids, and X resources set the values themselves. I think I saw somewhere on the Internet that terminal control character sequences can reset actual color values; in fact, I am sure they can - I managed to set my terminal background color at runtime. How would I make vim execute these sequences to match values for the color scheme? And is there any reference to these control sequences, as part of any standard?
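
    For reference, xterm's palette can indeed be changed at runtime with OSC escape sequences (documented in xterm's ctlseqs); a sketch that sets palette slot 1 and the default background from the shell, with arbitrary example colour values:

        # OSC 4 sets an indexed palette colour; OSC 11 sets the default background
        printf '\033]4;1;rgb:cc/24/1d\007'
        printf '\033]11;#1d2021\007'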

    Read the article

  • Moving my OpenID from Livejournal to... something else.

    - by T-Boy
    I've actually been an early user of OpenID, although there are still some questions about OpenID that I've never really had satisfactorily answered. Now, I understand that if I have full control over my domain, I can set it up so that I can delegate the task of authenticating to another OpenID service provider. The problem is, what I'd like to do is get the Livejournal server to pass the authentication to someone else, instead of having LJ do it. Preferably, what I'd like is for Livejournal, when asked by an authenticating provider, to say, "No, I don't do it anymore -- go to this address". The plan was that this address would then be in a domain I fully control, which would then pass it on to whichever service provider I choose. I don't even know if I've gotten my understanding of OpenID right, if all these shenanigans are necessary, if my question makes sense, or if it's even possible with a service provider like Livejournal. (I tried tagging this with livejournal, and it told me I couldn't because I don't have enough reputation. Oh well; one must start somewhere. Sorry for the inconvenience!)
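
    For context, OpenID delegation normally works the other way around: you add two link tags to a page on a domain you control, pointing at the provider that does the actual authentication. A sketch of that markup (URLs are placeholders); getting LiveJournal itself to forward elsewhere would require LiveJournal to serve equivalent tags for your profile page, which is the part that isn't under your control:

        <!-- OpenID 1.1 delegation tags on a page you control -->
        <link rel="openid.server" href="https://openid.example-provider.com/server">
        <link rel="openid.delegate" href="https://your-name.example-provider.com/">
        <!-- OpenID 2.0 equivalents -->
        <link rel="openid2.provider" href="https://openid.example-provider.com/server">
        <link rel="openid2.local_id" href="https://your-name.example-provider.com/">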

    Read the article

  • Varnish returning 503, FetchError (could not get storage)

    - by Archan
    On the current setup we're running into a problem with Varnish. We're running CentOS 5.7 x86_64 xenpv, with cPanel WHM, hosted at VPS.net. Sometimes we will receive a Guru Meditation from Varnish, and when we look in the varnishlog with the command varnishlog -d -c -m TxStatus:503, it returns output similar to the following:

        15 VCL_call     c recv
        15 VCL_acl      c NO_MATCH devs
        15 VCL_return   c pass
        15 VCL_call     c hash
        15 Hash         c ****
        15 Hash         c *************
        15 VCL_return   c hash
        15 VCL_call     c pass pass
        15 Backend      c 12 default default
        15 TTL          c 1835862523 RFC 0 -1 -1 1332454056 0 1332454055 375007920 0
        15 VCL_call     c fetch hit_for_pass
        15 ObjProtocol  c HTTP/1.1
        15 ObjResponse  c OK
        15 ObjHeader    c Date: Thu, 22 Mar 2012 22:07:35 GMT
        15 ObjHeader    c Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 mod_fcgid/2.3.6
        15 ObjHeader    c X-Powered-By: PHP/5.3.9
        15 ObjHeader    c Expires: Thu, 19 Nov 1981 08:52:00 GMT
        15 ObjHeader    c Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        15 ObjHeader    c Pragma: no-cache
        15 ObjHeader    c Content-Type: text/html; charset=utf-8
        15 ObjHeader    c X-Cacheable: NO:Cache-Control=private
        15 FetchError   c chunked read_error: 12 (Could not get storage)
        15 VCL_call     c error deliver
        15 VCL_call     c deliver deliver

    As far as I could gather, we could try increasing the nuke_limit, but we currently have a nuke_limit of 500, and when running varnishstat -1 -f n_lru_nuked we "only" get a total of 1031, even though we have seen the error happen on several pages. When we then run top to see how much memory Varnish is using, it only shows that it is using 763m, although we've set it to be allowed to use 1200m. Any ideas what the problem could be?
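
    "Could not get storage" generally means the fetch could not obtain space from the configured storage backend, so checking how storage is sized and how aggressively objects are being nuked is a reasonable first step. A sketch (counter names vary between Varnish versions, and the storage size and parameter values below are illustrative only):

        # Inspect storage counters and LRU nuking activity
        varnishstat -1 | egrep 'n_lru_nuked|SM[AF]'
        # Example of starting varnishd with a larger malloc storage and a higher nuke_limit
        varnishd -a :80 -b 127.0.0.1:8080 -s malloc,1536m -p nuke_limit=1000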

    Read the article

  • Oracle: Getting ORA-01195 and ORA-01110 when attempting resetlogs

    - by MacAnthony
    I am trying to get our database to start up. When I log in to sqlplus and do a startup, I get the message:

        Total System Global Area  534462464 bytes
        Fixed Size                  2215064 bytes
        Variable Size             331350888 bytes
        Database Buffers          192937984 bytes
        Redo Buffers                7958528 bytes
        Database mounted.
        ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

    So I do a shutdown, startup mount (which works fine) and then run:

        SQL> alter database recover using backup controlfile until cancel;
        alter database recover using backup controlfile until cancel
        *
        ERROR at line 1:
        ORA-00283: recovery session canceled due to errors
        ORA-19909: datafile 1 belongs to an orphan incarnation
        ORA-01110: data file 1: '/<path>/system01.dbf'

        SQL> alter database open resetlogs;
        alter database open resetlogs
        *
        ERROR at line 1:
        ORA-01195: online backup of file 1 needs more recovery to be consistent
        ORA-01110: data file 1: '/<path>/system01.dbf'

    I know I've used instructions to get past this error before, but I seem to be having trouble tracking them down. A bit of history: we wanted to refresh the data in this instance from another db, so we attempted an expdp/impdp into this instance. The impdp did not complete correctly, got an end-of-file error message and hung (I still have the message in a log if it's important). Since the instance would still start at that point, we decided to use the hotbackup process we have to restore the db. The hotbackups are from another server/instance. We went through the same process 2 weeks ago. Recreating the control file is the point at which we hit the issue above.
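
    Since ORA-19909 complains about an orphan incarnation, it may help to look at the incarnation history the mounted control file knows about before attempting further recovery; a hedged sketch of the query (this only diagnoses - changing the current incarnation is normally done through RMAN):

        -- List the incarnations recorded in the mounted control file
        SELECT incarnation#, resetlogs_change#, resetlogs_time, status
          FROM v$database_incarnation
         ORDER BY incarnation#;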

    Read the article

  • Authenticate domain-user credentials on unjoined virtual machine?

    - by bwerks
    Hi all, This question may sound silly, and perhaps a bit insane, but--is there any way to run a process on a machine not joined to a domain using credentials from a user in that domain? In my case, I'm running virtual machines installed with release binaries from our build process, as well as Visual Studio. Visual Studio is there to debug our release binaries, however it's being executed with vm-local user credentials. This means that it can't authenticate to our TFS deployment when executing "tf.exe view" to utilize our Source Server for debugging. Team Explorer manages to authenticate to TFS using a UI prompt, however I suspect that it's because we supply it with the TFS deployment's URI, and it's designed to display a prompt to facilitate workgroup scenarios; i.e. it's not like we're getting it for free. My instincts tell me the only way to authenticate on this vm is to join it or somehow form a one-way trust or something, but is there an easier way? For automation we're going to want to script this eventually, but I'm first surveying the feasibility of the thing.
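
    For what it's worth, Windows has a built-in way to hand a process domain credentials on a machine that isn't joined to the domain: runas with the /netonly switch uses the supplied credentials only for outbound network access, which is often enough for tools like tf.exe talking to TFS. A sketch (the domain, user and path are placeholders):

        rem Launch Visual Studio so that its network connections authenticate as a domain user
        runas /netonly /user:CORPDOMAIN\builduser "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe"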

    Read the article

  • Dell Driver Support for Latitude E6320 Windows 7 Enterprise

    - by IamPolaris
    I recently did a reinstall of Windows 7 Enterprise on a Dell Latitude E6320, which is a 64-bit system. After the install process, and doing the typical Windows Update stuff, I looked at my Device Manager and found that I had devices with missing drivers. After going to the Dell Support site, looking at the files, and doing some sleuthing, I found the following support document: http://downloads.dell.com/utility/Latitude%20E-Family%20%20Mobile%20Precision%20Re-Image%20How-To%20Guide%20-%20A03%20Rev%203%200.pdf This document hints in Appendix C that the Broadcom USH is the ControlPoint Security device and the Unknown device is the Micro freefall sensor. The Network Controller is my wireless, as I cannot connect wirelessly, and the final missing driver I am not sure about. Attempting to install the ControlPoint Security exe from the support page will not work: after downloading, I am given the message that I am attempting to install a 32-bit driver on a 64-bit machine, EVEN THOUGH I selected the Win7 64-bit option from the support page. Beyond that, some of the drivers (which are confusingly described and hard to understand) and the system utilities that are supposed to make this process simpler will either a) not run because they are 32-bit exe's, or b) fail because the support page cannot find the file I attempted to download. Is there anything I can do to get (at the very least) my wireless running, and ideally all of my drivers? A solution which assumes Dell is completely incompetent would be ideal. :P Some forums have said that I should download the chipset driver, others say to get the system utility file (DSS_UTIL_WIN_R282536.EXE). I have had no luck as of yet...

    Read the article

  • Content not being compressed even though I'm using zlib in php.ini

    - by Tola Odejayi
    I've edited my php.ini file so that it has these two entries:

        zlib.output_compression = On
        zlib.output_compression_level = 4

    However, after restarting Apache, when I request PHP pages, the headers returned in the response indicate that my server is still NOT serving compressed pages (here are selected headers as viewed using Chrome's Network feature):

        Cache-Control: no-cache, must-revalidate, max-age=0
        Connection: Keep-Alive
        Content-Type: text/html; charset=UTF-8
        Date: Mon, 17 Sep 2012 23:46:13 GMT
        Expires: Wed, 11 Jan 1984 05:00:00 GMT
        Last-Modified: Mon, 17 Sep 2012 23:46:13 GMT
        Pragma: no-cache
        Proxy-Connection: Keep-Alive
        Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.2.17
        Transfer-Encoding: chunked
        Via: 1.1 XXX-PRXY-07
        X-Powered-By: PHP/5.2.17

    What might I be doing wrong? Is there any other setting that I need to change?

    EDIT: Here is another set of headers returned to another computer:

        Cache-Control: no-cache, must-revalidate, max-age=0
        Connection: close
        Content-Type: text/html; charset=UTF-8
        Date: Thu, 20 Sep 2012 09:45:26 GMT
        Expires: Wed, 11 Jan 1984 05:00:00 GMT
        Last-Modified: Thu, 20 Sep 2012 09:45:26 GMT
        Pragma: no-cache
        Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.2.17
        Transfer-Encoding: chunked
        Vary: Cookie
        X-Powered-By: PHP/5.2.17
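
    One thing worth ruling out first: compression is negotiated per request, and an intermediate proxy (note the Via: 1.1 XXX-PRXY-07 header in the first capture) can strip or refuse it. A quick sketch for testing from the command line while explicitly advertising gzip support (the URL is a placeholder):

        # If zlib.output_compression is working, the dumped headers should include "Content-Encoding: gzip"
        curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip,deflate' http://example.com/somepage.php | grep -i 'content-encoding'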

    Read the article

  • TCP Server Memory management: #Connections Vs. #Requests

    - by Andrew
    Given that there is no theoretical limit to the number of concurrent TCP connections a Windows 2008 server can handle, the only real constraint is that each connection consumes server memory. Unfortunately, memory is not unlimited (and I want to use only physical memory). For example, let's say we have 2 GB of server memory. Now there are two extreme cases:

    Case 1: If we allocate a 64 KB buffer for each connection (only to receive the incoming request), then 32,768 connections can consume all 2 GB of memory. This will not leave any memory to queue or process incoming requests from those connections.

    Case 2: On the other hand, let's say a single connection (or very few) continuously keeps sending request buffers (for example, video streaming from one connection to another) and the server cannot process them in time. Those buffers will pile up in the server and eventually occupy most of the server's memory, leaving no memory for new connections thereafter.

    This is the real dilemma in server design that has been bugging me badly for many days. If I can decide on a maximum request buffer size per connection and a maximum number of queued requests per connection, then, based on available server memory, that automatically sets a limit on the maximum number of concurrent connections. How do I decide on these limits to achieve the best performance and throughput? I am just looking for perfect utilization of server resources. Are there any standard guidelines or empirical data that someone could share with me, please?
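
    To make the trade-off concrete, a back-of-the-envelope sketch under assumed numbers (a 64 KB receive buffer plus room for four queued 64 KB requests per connection):

        # 2 GB total / (64 KB recv buffer + 4 queued 64 KB requests) per connection
        echo $(( (2 * 1024 * 1024 * 1024) / ((1 + 4) * 64 * 1024) ))   # => 6553 connections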

    Read the article

  • Linux: can't downgrade ATI graphic driver

    - by java.is.for.desktop
    I had the old ATI 10.9 driver, and it worked fine. Sadly, I decided to upgrade it to the new 10.12. After that, HDMI stopped working. So I want to downgrade back to 10.9. I did everything by the book, but it fails. As root, I do:

        cd /usr/share/ati
        sh ./fglrx-uninstall.sh

    The uninstall process tells me that everything uninstalled fine. And after a reboot, I have the default open-source ATI driver that comes with X11 (or the kernel?). The ATI control panel shortcut points to nowhere. So the driver seems uninstalled. Now I install the older (10.9) proprietary ATI driver. It also finishes successfully. But after a reboot, the ATI control panel tells me that I still have 10.12. And it seems to be true, because HDMI doesn't work. So, how can I completely purge the new non-working driver and downgrade to the old one?
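
    Before reinstalling, it may help to verify what is actually loaded and what the uninstaller left behind; a short sketch of checks (the search paths are assumptions and vary by distribution):

        # Is the proprietary kernel module still loaded, and what does the driver report?
        lsmod | grep -i fglrx
        fglrxinfo
        # Look for leftover fglrx files that a partial uninstall may have missed
        find /usr/lib* /etc/ati /usr/share/ati -iname '*fglrx*' 2>/dev/null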

    Read the article

  • Two different subwoofers aren't working on my machine or my phone

    - by Philluminati
    I have speakers that came with my computer: two small desktop speakers and a subwoofer with a bass volume control on the back. It has worked for years. I was listening to Spotify on my speakers as loud as they would possibly go, with the bass turned up to max, and suddenly the subwoofer stopped working. I've plugged the speakers into my Android HTC Desire Z handset and again, the desktop speakers play music but the subwoofer doesn't (even after fiddling with the volume control). So I figured I'd broken it. I went to Amazon and bought a replacement, this one: http://www.amazon.co.uk/dp/B002N46YD8/ref=pe_217191_31005151_dp_1 but it doesn't work either, on either my desktop or my Android phone. I had a play with alsamixer and the LFE and Center controls are switched on and the speakers are okay... but still no bass. Am I unlucky enough to have bought a new subwoofer that is already broken out of the box, or is there something else wrong that I could look into? Are there any other tests I could perform to see if the problem is me or not?
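
    Since the question asks what else to test: one extra data point is to drive the speakers with a pure low-frequency tone, which the subwoofer (rather than the small desktop speakers) should reproduce audibly; a sketch using ALSA's speaker-test on the default device:

        # Play a 50 Hz sine tone on both stereo channels; a working subwoofer should clearly rumble
        speaker-test -c 2 -t sine -f 50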

    Read the article

  • Windows 7 search does not return results from indexed folders

    - by Dilbert
    I am experiencing this issue over and over again and I just cannot seem to find the answer. It doesn't make sense, but search simply does not return results from folders that certainly have these files inside. It's weird that this technology has existed for more than 5 years now (it could be added to Windows XP as an add-on), and they still haven't got it right. My folder contains 10 image files with .png extensions. Two scenarios: Scenario 1: I exclude the folder using Indexing Options. Search works. Scenario 2: I turn on indexing for this folder. Search does not work. Of course, Agent Ransack returns results every time. When I check Advanced Options for the Indexing Options inside Control Panel, .png files are checked in the File Types tab, using the "File Properties filter". What's the deal with this? [Edit] To clarify, this doesn't happen with all folders, but it does with more than one. For the "problematic" folders, even *.* doesn't return a single result. I found some advice to clear the archive and read-only attributes for all files (doesn't make sense, but hey), but it didn't work. Indexing status in Control Panel is: "Indexing complete. 100,000 items indexed." The folder is included in the list. The file types list contains the .png extension (although searching doesn't work with any filter, not even *.*).

    Read the article

  • Not able to access external Hard disk

    - by Jash Jacob
    I have a 1TB external hard drive which I'm currently not able to access. When I open the external drive in Finder, it shows as empty. When I use the "Get Info" option, the dialog box states it has about 300GB free. I tried to get into the external drive using Terminal, but had no luck. Checking in Disk Utility, it showed that I have a large number of files but ZERO folders. I tried to "Repair Disk"; partway through that process the external drive got unmounted. I checked this drive on Windows. I was able to open almost all the folders, but I wasn't able to copy anything onto the external drive. One folder caused my Windows computer to hang, so I connected the drive back to my MacBook Pro and tried to access it through Terminal (this time it worked!), and then I tried to delete the folder with the rm command and got an "input/output error". What should I do to recover the files in that folder? How can I access my external drive on my Mac?
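
    If you want to gather more information before choosing a recovery approach, macOS's diskutil can check the volume from Terminal without writing to it; a sketch (the volume name is a placeholder):

        # Identify the external disk and its volume, then run a read-only consistency check
        diskutil list
        diskutil verifyVolume /Volumes/MyExternalDrive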

    Read the article

  • In solaris, how monitor & auto-respond to critical events

    - by mamcx
    I have a website that randomly fails. It is running on OpenSolaris, hosted at Joyent. I have a monitoring service that alerts me when the site is down, but I want an "insider" tool on the box that tells me why it happened. Is it because the CPU is too high? Out of memory? Which process failed? Is it possible to have a backtrace of that? Everything is running under the Solaris Service Management Facility. The web server is Cherokee, the database is MySQL and the language is Python/Django. I want the simplest setup to monitor this and auto-respond, i.e. restart the web server or the Django process in case of failure. I prefer a low-overhead tool. I don't need the fancy monitoring that some tools have - no graphs or SMS alerts. I only need to know what failed, have it restarted if possible (maybe up to n times), and have a log somewhere for when I check it.
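
    Since everything already runs under SMF, part of this comes for free: SMF restarts a failed service itself and records why it went down. A sketch of commands to inspect that (the Cherokee FMRI and log filename are placeholders - use whatever svcs reports on your box):

        # Show services that are not running, with the reason and a pointer to their log file
        svcs -xv
        # Tail the SMF log for a particular service, e.g. the web server
        tail -50 /var/svc/log/network-cherokee:default.log
        # Restart a service by hand if needed
        svcadm restart svc:/network/cherokee:default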

    Read the article

  • UAC being turned off every time Windows 7 starts

    - by Mehper C. Palavuzlar
    I have a strange problem on my HP laptop. This began happening recently. Whenever I start my machine, Windows 7 Action Center displays the following warning: "You need to restart your computer for UAC to be turned off." I never disable UAC, but obviously some process or virus (I'm not sure, only guessing) causes this. As soon as I get this warning, I head for the UAC settings and re-enable UAC to dismiss it. This is a bothersome situation, as I really don't know what causes the problem. I have run a full scan on the computer for any probable virus activity, but TrendMicro OfficeScan said that no viruses were found. Malwarebytes' Anti-Malware could not find any malicious items either. There are no other strange incidents on the machine. Everything works fine except this bizarre incident. How can I learn what process is trying to turn off UAC? What way should I follow to overcome this problem?
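
    For what it's worth, the UAC on/off state is just a registry value, so one way to narrow this down is to check it at each boot (and then watch what writes to it with a tool such as Process Monitor); a sketch of the check:

        rem 0x1 means UAC is enabled, 0x0 means something has turned it off
        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLUA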

    Read the article

  • CC.NET + SVN : Server certificate issue

    - by MSI
    I am trying to set up Continuous Integration in our office. Being a puny little developer, I am facing this supposedly infamous problem: "Source control operation failed: svn: OPTIONS of 'https://trunkURL': Server certificate verification failed: issuer is not trusted". So I tried the following solution: run the CC.NET service (the server running as a Windows service) under a domain account (rather than the default LOCAL SYSTEM) and accept the certificate permanently from a command prompt under that user by running svn log/list on the repo. Doesn't help :(. I am getting the following from my artifact/log files (or dashboard):

        ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed:
        svn: OPTIONS of 'https://TrunkURL': Server certificate verification failed: issuer is not trusted (https://ServerAdd).
        Process command: E:\(svn.exe Path) log https://TrunkURL -r "{2010-11-08T02:12:20Z}:{2010-11-08T02:13:21Z}" --verbose --xml --no-auth-cache --non-interactive
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModificationsWithLogging(ISourceControl sc, IIntegrationResult from, IIntegrationResult to)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild)
           at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to)
           at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

    We are using VisualSVN Server and CC.NET for this adventure. Tips and suggestions will be highly appreciated. Thanks.
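
    Note that the log shows svn being invoked with --no-auth-cache --non-interactive, so a certificate that was never cached for the service account will always fail. Two sketches of ways around that, assuming Subversion 1.6 or later and a placeholder service account name:

        rem Option 1: cache the certificate interactively while running as the service account
        runas /user:CORPDOMAIN\ccnet_service cmd
        svn list https://TrunkURL
        rem (answer 'p' at the prompt to permanently accept the server certificate)

        rem Option 2: tell non-interactive svn calls to accept the untrusted certificate anyway
        svn list https://TrunkURL --non-interactive --trust-server-cert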

    Read the article

  • Microsoft Mouse and Keyboard Center Needed for Keyboard but Doesn't Support Mouse

    - by eljay
    I recently built a new computer (running Win7 Pro 64-bit) that includes the Microsoft Sidewinder X4 Keyboard. To make use of all the extra features of this keyboard I need the Mouse and Keyboard Center. I just ran Windows Update for the first time on this system and the Mouse and Keyboard Center was included in the update. I'm left-handed and before the update I had the mouse set up for lefty use. Now after the update, it's been set to righty use and the original mouse control panel applet no longer allows the assignment of buttons. For that there's a link to the Mouse and Keyboard Center which does not support my oldish mice. (I have an IntelliMouse Optical and a Creative Mouse Lite Pro.) So I need the new utility for my keyboard, but I have to be right-handed to use my mouse? Really! I tried changing HKCU\Control Panel\Mouse\SwapMouseButtons to 1, but a reboot set it back to 0. Is there some way I can change my mouse back to left-handed? Thanx -eljay

    Read the article

  • /usr/bin/install hangs, apparently due to SELinux

    - by Cooper
    I'm trying to use the GNU coreutils install utility; however, it is hanging:

        /usr/bin/install -v test_file test_dir/
        `test_file' -> `test_dir/test_file'

    I see the same behavior whether I run as a normal user or as root/sudo. I ran an strace -f, and this is the end of the output:

        ...
        read(4, "<username>\t-d\tsystem_u:object_r:ho"..., 4096) = 2197 <0.000012>
        brk(0x6e3b1000) = 0x6e3b1000 <0.000009>
        mmap(NULL, 29138944, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2abd831ae000 <0.000014>
        munmap(0x2abd815dd000, 29138944) = 0 <0.003466>

    The read() is reading from /etc/selinux/targeted/contexts/files/file_contexts.homedirs, apparently successfully. It appears that the process is hanging right after the munmap, but continues to eat 100% CPU. My two questions are: 1) Is there any good way to see what is going on with the process? I'm currently too lazy to compile a debug version of install that I can run gdb on - but a strong suggestion in an answer here may motivate me to do so if needed. 2) Any idea what the SELinux issue could be? I'm not too familiar with SELinux. Additional info of possible relevance:

        # ls -Z
        drwxr-xr-x my_user 7001 user_u:object_r:user_home_t test_dir
        -rw-r--r-- my_user 7001 user_u:object_r:user_home_t test_file
        # id
        ... context=user_u:system_r:unconfined_t
        # uname -a
        Linux hostname 2.6.18-238.1.1.el5 #1 SMP Tue Jan 4 13:32:19 EST 2011 x86_64 x86_64 x86_64 GNU/Linux

    I am suspicious that SELinux + Quest Authentication Services (QAS) is causing the issue. QAS is generally well behaved, but it did cause /etc/selinux/targeted/contexts/files/file_contexts.homedirs to get quite large (~18k users, ~23 lines per user). Update: install -v -Z user_u:object_r:user_home_t file dir/ seems to work. Can anyone suggest why, given that SELinux is in permissive mode (see comments)?
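
    On question 1, a debug build may not be necessary: attaching to the spinning process usually shows where the time is going, and the SELinux tools can report what context the destination should get. A sketch (symbols permitting, and assuming the usual libselinux utilities are installed):

        # Attach to the spinning install process and grab a backtrace
        gdb -p "$(pgrep -n install)" -batch -ex bt
        # What context would SELinux assign to the destination file?
        matchpathcon ~/test_dir/test_file
        # Confirm whether SELinux is actually permissive right now
        getenforce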

    Read the article
