Search Results

Search found 10023 results on 401 pages for 'manage processes'.

Page 308/401

  • Nginx server 301 Moved permanently

    - by user145714
    When I did a curl -v http://site-wordpress.com:81 I received this result:

        About to connect() to site-wordpress.com port 81 (#0)
        Trying ip... connected
        Connected to site-wordpress.com (ip) port 81 (#0)
        GET / HTTP/1.1
        User-Agent: curl/7.19.7 (x86_64-unknown-linux-gnu) libcurl/7.19.7 NSS/3.12.6.2 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
        Host: site-wordpress.com:81
        Accept: */*

        < HTTP/1.1 301 Moved Permanently
        < Server: nginx/1.2.4
        < Date: Fri, 16 Nov 2012 16:28:19 GMT
        < Content-Type: text/html; charset=UTF-8
        < Transfer-Encoding: chunked
        < Connection: keep-alive
        < X-Pingback: The URL above/xmlrpc.php
        < Location: The URL above

    Seems like this line in my fastcgi_params is causing grief:

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    If I remove this line, I get HTTP/1.1 200 OK but I get a blank page. This is my config:

        server {
            listen 81;
            server_name site-wordpress.com;
            root /var/www/html/site;
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            index index.php;

            if (!-e $request_filename){
                rewrite ^(.*)$ /index.php break;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;   # port where FastCGI processes were spawned
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
                include /etc/nginx/mime.types;
            }

            location ~ \.css {
                add_header Content-Type text/css;
            }

            location ~ \.js {
                add_header Content-Type application/x-javascript;
            }
        }

    This config works with the IP and port 80. But now I need to use a domain name and port 81, which doesn't work. Could someone please help? Thanks.
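    One direction worth checking (a minimal sketch, not a verified fix): the X-Pingback header shows the 301 is generated by WordPress itself, which issues a canonical redirect to its configured site URL, so the siteurl/home options in the WordPress database most likely need to include :81 as well. On the nginx side, a try_files-based server block avoids the if/rewrite combination and keeps SCRIPT_FILENAME set explicitly; the block below assumes the same document root and a PHP FastCGI backend on 127.0.0.1:9000.

        server {
            listen 81;
            server_name site-wordpress.com;
            root /var/www/html/site;
            index index.php;

            # send non-existent paths to WordPress instead of using rewrite/if
            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                # keep SCRIPT_FILENAME here so the stock fastcgi_params file can stay untouched
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
            }
        }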

    Read the article

  • iptables : how to allow incoming ftp traffic?

    - by logansama
    Hi, still fighting my way through the jungle that is called iptables. I have managed to allow FTP access outside of our LAN; both of these would work (NOTE: eth0 is the LAN interface and eth1 is the WAN interface):

        iptables -t filter -A FORWARD -i eth0 -p tcp --dport 20:21 -j ACCEPT

    or

        iptables -A FORWARD -i eth0 -o eth1 -p tcp --sport 20:21 --dport 1024:65535 -j ACCEPT

    But when I connect to an external FTP server I manage to log in, and all is fine until it wishes to list the directory content. Then nothing happens, as the data is blocked because I do not have a rule set up to allow it (my last rule on the FORWARD chain blocks all traffic). I have tried a gazillion rules (many of which I did not understand) to try and allow the FTP traffic back through my server. One such rule, for example, was:

        iptables -A FORWARD -i eth1 -o eth0 -p tcp --sport 20:21 --dport 1024:65535 -j ACCEPT

    But I cannot get the directory listing to work; it just times out after a while. Would anyone know how to build a rule which would allow the FTP listing / allow such traffic back? Or have a link to sources I could work through? Thank you.
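    The usual answer, sketched here under the same eth0 = LAN / eth1 = WAN assumption, is to load the FTP connection-tracking helper and then accept RELATED/ESTABLISHED traffic generically, so the separate data connection the listing uses is matched whichever port it ends up on:

        # load the helper that watches the FTP control channel (PORT/PASV commands)
        modprobe nf_conntrack_ftp

        # outbound FTP control connections from the LAN
        iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 21 \
                 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT

        # replies plus the data connections the helper has marked RELATED
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

    To make the module load survive a reboot it can be listed in /etc/modules on Debian-style systems - an assumption about the distribution in use.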

    Read the article

  • How to setup Joomla CMS as a backend for iPhone app

    - by srik
    I would like my iPhone app to get dynamic content off the net. This content should be managed using a CMS. I have gone ahead and installed Joomla on my server and will be using the Joomla web interface to create and manage content. I would now like the iPhone app to log in to my server and fetch the content. I do not want the complete web pages for my iPhone app. Instead, I want the content in the form of XML or JSON or some serialized format so that I can use the data in a custom layout native to the app. So I am looking for two things in particular:

        1. How to set up HTTP-based authentication for my iPhone app to access data from my server.
        2. How to access the content in a serialized format (XML, JSON, etc.).

    Are there plugins/extensions/components I can use to achieve this? Any advice on how this can be achieved would be helpful. I am completely new to setting up/using a CMS.
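    For a rough idea of the shape such a request could take (everything below is illustrative - the component name, task and credentials are hypothetical, since stock Joomla does not expose articles this way without an extension), the app would end up issuing something like:

        # hypothetical endpoint provided by a JSON-capable Joomla extension
        curl -u appuser:secret \
          "http://example.org/index.php?option=com_myapi&task=articles.list&format=json"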

    Read the article

  • McAfee ePolicy-Orchestrator (ePO) - policy ownership by groups?

    - by bkr
    Is there a way to grant ownership of an ePO policy to a group? Alternatively, is there a permission that can be set that would allow owners of an ePO policy to add other owners to that policy without making them ePO admins? In the case I'm looking at, ePO is deployed within a large heterogeneous organization with a large amount of delegation in the form of create/modify policy rights, to allow multiple IT departments to customize to their needs for their sections of the system tree. The problem is that the policies are owned by the creator of the policy. This causes problems when they leave (staff turnover) or when other people on their teams need the ability to modify the existing policy. Unfortunately, as far as I can see, only someone who is an ePO admin can change the owners. Even the owner of the policy cannot add other owners (unless they are also an ePO admin). Ideally, I should be able to assign ownership of a policy to a group, since that would be easier to manage than me or another admin having to continually fix policy ownership or remove orphaned policies. Even just allowing the owners of the policies to add other owners would be sufficient. How are other people handling policy ownership when dealing with a large amount of delegated control of policies? Is there a way to delegate this out without making users full ePO admins?

    Read the article

  • FTP account ownership on vhost directory makes Apache not run website correctly

    - by CodeShining
    I've purchased a virtual server, where I'm given a non-root, sudo-enabled user. I need an FTP account that is not that sudo-able account, so I created a no-login account just for that purpose. I've set up VSFTPd correctly, also enabling the "userlist" feature to specify which users are permitted to use FTP. Then I created an empty directory under my sudo-able account and gave ownership of it to the second account. To make it easier to understand: the main account (the one I use to manage my VPS) is called ubuntu and the FTP user is named ftpuser, and I created a directory /home/ubuntu/mywebsite, giving ownership to ftpuser:ftpuser. Then I uploaded a WordPress website, whose default permissions are 755 and 644. The issue is that Apache has no privileges to serve the website. How can I make the website run properly, and which way is the most secure? Should I run that virtual host as another user (if that's possible)? Should I force the FTP user to use the www-data group (if that's possible) and run with permissions like 775 and 664? How can I solve this issue? Any help is appreciated. I'd like to run it using the default permissions, so that updates won't break anything (because of permissions being reset).
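    A sketch of one common arrangement, assuming Apache runs as www-data (the Ubuntu default). The usual blocker is the home directory itself: /home/ubuntu is typically mode 750, so www-data can never traverse into mywebsite no matter what the files inside are set to. Allowing traversal on the parent and handing the web server's group to the site keeps the stock 755/644 files working:

        # let other users (including www-data) traverse, but not list, the home dir
        sudo chmod o+x /home/ubuntu

        # keep ftpuser as owner so FTP uploads still work, but give the group
        # to Apache and let new files inherit it via the setgid bit
        sudo chown -R ftpuser:www-data /home/ubuntu/mywebsite
        sudo find /home/ubuntu/mywebsite -type d -exec chmod g+s {} \;

    Placing the document root outside /home entirely (for example under /var/www) is the other frequently recommended option, since it avoids loosening permissions on a home directory.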

    Read the article

  • Splitting an HTTP request into multiple byte-range requests

    - by redpola
    I have arrived at the unusual situation of having two completely independent Internet connections to my home. This has the advantage of redundancy, etc., but the drawback that both connections max out at about 6Mb/s. So one individual outbound HTTP request is directed by my "intelligent gateway" (TP-LINK ER6120) out over one or the other connection for its lifetime. This works fine over complex web pages and utilises both external connections well. However, single-HTTP-request downloads are limited to the maximum rate of one of the two connections. So I'm thinking, surely I can set up some kind of proxy server to direct all my HTTP requests to. For each incoming HTTP request, the proxy server would issue multiple byte-range requests for the desired data and manage the reassembly and delivery of that data to the client's request. I can see this has some overhead, and also some edge cases where there will be blocking problems waiting for data. I also imagine webmasters of single servers would rather I didn't hit them with 8 byte-range requests instead of one. How can I achieve this HTTP request deconstruction/reconstruction? Or am I just barking mad?
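    As an illustration of the underlying mechanism (a sketch only, assuming the origin server advertises Accept-Ranges: bytes), the same effect can be produced by hand with two ranged requests stitched back together; segmented-download tools such as aria2c automate exactly this:

        # fetch the first and second halves of a ~100 MB file in parallel
        curl -s -r 0-52428799          -o part1 http://example.com/big.iso &
        curl -s -r 52428800-104857599  -o part2 http://example.com/big.iso &
        wait
        cat part1 part2 > big.iso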

    Read the article

  • Linux iptables / conntrack performance issue

    - by tim
    I have a test setup in the lab with 4 machines:

        2 old P4 machines (t1, t2)
        1 Xeon 5420 DP 2.5 GHz, 8 GB RAM (t3), Intel e1000
        1 Xeon 5420 DP 2.5 GHz, 8 GB RAM (t4), Intel e1000

    to test Linux firewall performance, since we got bitten by a number of syn-flood attacks in the last months. All machines run Ubuntu 12.04 64bit. t1, t2 and t3 are interconnected through a 1GB/s switch; t4 is connected to t3 via an extra interface. So t3 simulates the firewall, t4 is the target, and t1, t2 play the attackers generating a packet storm through it (192.168.4.199 is t4):

        hping3 -I eth1 --rand-source --syn --flood 192.168.4.199 -p 80

    t4 drops all incoming packets to avoid confusion with gateways, performance issues of t4, etc. I watch the packet stats in iptraf. I have configured the firewall (t3) as follows:

        stock 3.2.0-31-generic #50-Ubuntu SMP kernel
        rhash_entries=33554432 as kernel parameter
        sysctl as follows:
            net.ipv4.ip_forward = 1
            net.ipv4.route.gc_elasticity = 2
            net.ipv4.route.gc_timeout = 1
            net.ipv4.route.gc_interval = 5
            net.ipv4.route.gc_min_interval_ms = 500
            net.ipv4.route.gc_thresh = 2000000
            net.ipv4.route.max_size = 20000000

    (I have tweaked a lot to keep t3 running when t1+t2 are sending as many packets as possible.) The results of these efforts are somewhat odd:

        t1+t2 manage to send about 200k packets/s each; t4 in the best case sees around 200k in total, so half of the packets are lost.
        t3 is nearly unusable on the console even though packets are flowing through it (high numbers of soft-irqs).
        The route cache garbage collector is nowhere near predictable, and in the default settings is overwhelmed by very few packets/s (<50k packets/s).
        Activating stateful iptables rules makes the packet rate arriving on t4 drop to around 100k packets/s, effectively losing more than 75% of the packets.

    And this - here is my main concern - with two old P4 machines sending as many packets as they can, which means nearly everyone on the net should be capable of this. So here goes my question: did I overlook some important point in the config or in my test setup? Are there any alternatives for building a firewall system, especially on SMP systems?
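    A few knobs that are commonly suggested for this kind of setup (untested sketches; the values are examples to be sized to the box's RAM): grow the conntrack table and its hash, and, for the flood target specifically, exempt that traffic from connection tracking in the raw table so the stateful rules stop being the bottleneck.

        # enlarge the connection-tracking table and hash
        sysctl -w net.netfilter.nf_conntrack_max=1048576
        echo 262144 > /sys/module/nf_conntrack/parameters/hashsize

        # skip conntrack for the flooded service in both directions
        iptables -t raw -A PREROUTING -p tcp -d 192.168.4.199 --dport 80 -j NOTRACK
        iptables -t raw -A PREROUTING -p tcp -s 192.168.4.199 --sport 80 -j NOTRACK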

    Read the article

  • Trying to retrieve data with a thermaltake blacX enclosure: Windows 7 believes the drive to be "uninitialized"

    - by Peeter Joot
    I have a laptop that won't boot. It appears to be a power problem ... the laptop auto-turns-off within about 10 seconds of pressing the power button (with the power button lit temporarily and no display, with or without an external monitor). I've followed the Dell troubleshooting guide, which suggested reseating the memory modules and the hard drives, but that didn't help. Before trying to have the laptop serviced, I wanted to get some data off of the hard drive. I bought a Thermaltake BlacX enclosure, intending to use it both to retrieve the drive data and, later, as external storage. Following the instructions (insert cables, insert drive, power on) goes fine, and Windows 7 on another laptop installs the device driver software. However, no drive letter shows up in 'Computer'. Under Computer -> Manage -> Storage I see the drive is there, and there's an option to "initialize" the drive. The Windows "initialize" dialog gives me the option to pick between "MBR" and "GPT" partitioning, which sounds like a good way to destroy the data on the drive. I'm thinking that I've purchased the wrong device for the job (or that my old drive is damaged). The old drive to recover info from is a Western Digital 500G/7200rpm SATA drive, if that is relevant. Both the original laptop and the one I'm using for recovery are running Windows 7. Does anybody have experience with using a BlacX enclosure to recover data off an already formatted drive?

    Read the article

  • I have a perl script that is supposed to run indefinitely. It's being killed... how do I determine who or what kills it?

    - by John O
    I run the perl script in screen (I can log in and check debug output). Nothing in the logic of the script should be capable of killing it quite this dead. I'm one of only two people with access to the server, and the other guy swears that it isn't him (and we both have quite a bit of money riding on it continuing to run without a hitch). I have no reason to believe that some hacker has managed to get a shell or anything like that. I have very little reason to suspect the admins of the host operation (bandwidth/cpu-wise, this script is pretty lightweight). Screen continues to run, but at the end of the output of the perl script I see "Killed" and it has dropped back to a prompt. How do I go about testing what is whacking the damn thing? I've checked crontab, nothing in there that would kill random/non-random processes. Nothing in any of the log files gives any hint. It will run from 2 to 8 hours, it would seem (and on my mac at home, it will run well over 24 hours without a problem). The server is running Ubuntu version something or other, I can look that up if it matters.
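    Two checks that usually narrow this down quickly (sketched for a stock Ubuntu box; the audit rule assumes the auditd package is installed): first see whether the kernel's OOM killer is responsible, and if not, log kill() syscalls so the next occurrence records who sent the signal.

        # did the out-of-memory killer do it?
        dmesg | grep -i -E 'out of memory|killed process'
        grep -i -E 'oom|killed process' /var/log/syslog /var/log/kern.log

        # otherwise, audit kill() so the sender's PID/UID is recorded
        sudo auditctl -a exit,always -F arch=b64 -S kill -k script_killed
        # ...after the script dies again:
        sudo ausearch -k script_killed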

    Read the article

  • Windows 7 breaks even in safe mode

    - by delenda
    Hi, I have a Dell XPS M1730 with Windows 7 installed. I noticed last night that after a few hours of use, the fans kicked into full and I couldn't do anything without it taking forever. Minimising windows, opening device manager or even opening process explorer took minutes and a game install I had just started took nearly 4 hours to complete. When procexp finally loaded, the refresh was so slow that it was mostly useless. From what I could gather, it was reporting 60% idle processes with procexp using nearly 40%. There were no hardware interrupts listed. When I rebooted, the problem went away for about 10 minutes and then the same thing happened. The issue persists in safe mode and even after I removed the graphics drivers, which have been an issue in the past, it still happens. Icons flash quite quickly on the desktop periodically and screen refresh is painfully slow. When booting now, the fans kick in to full as soon as the windows logon box comes up and it's taking 10 minutes to bring the desktop up. Chkdsk reports nothing and the raid check says that everything is fine. I'm thinking hardware failure, probably HDD but wanted some other opinions. I'm planning to try a linux live cd to see if it works without using the hard disks. If anyone has any input, it would be greatly appreciated. Delenda

    Read the article

  • NAS for Mac OS X Server

    - by SamAdmin
    I'm using Mac OS X Server and want to allow the users that connect to their network accounts to store their data on a NAS drive. I want the users to authenticate against the Lion server, as this allows better policies and management for me, while their AFP share is located on the NAS drive. I've looked into home directories and network logins; however, I don't want the users to connect into a different login environment - just authentication against their account on the Lion server, with the Finder taking them to their own storage area located on the NAS drive. Currently I am using FreeNAS for both authentication and storage; however, there are getting to be far too many people to manage each AFP share and account for, plus just using FreeNAS is extremely limiting for expansion, and if something goes wrong with one entity the entire system goes down. Using the Lion server for user accounts and policies will be much better for this expanding business. I have looked into LDAP - using the Lion server as an LDAP server for FreeNAS to authenticate against - however I have had issues with this and thought a different approach could be better from the other side of the situation: providing the account with somewhere to store data, rather than the AFP share authenticating against an LDAP server. Am I wrong to try it this way? Is it possible to logically add storage to a Mac OS X Server so that it is recognised as a local drive and can be used for network accounts?
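    One frequently used pattern (an untested sketch with made-up host, path and export names): mount the NAS export on the Lion server itself via autofs, so that from the server's point of view the storage behaves like a local path that share points and home folders can sit on.

        # /etc/auto_master - add a direct-map line:
        /-    auto_nas    -nobrowse,nosuid

        # /etc/auto_nas - map a local path to the NAS export (names are examples):
        /Shared/nasdata    -fstype=nfs,rw    nas.example.com:/export/userdata

        # reload the automounter:
        sudo automount -vc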

    Read the article

  • How can I get multiple video cards to work on linux?

    - by user17943
    I installed Fedora 12. I have 2 ATI cards that I used on Windows to run 4 monitors. A recurring problem has been getting them detected in Linux. Only my secondary card is picked up by Linux; when I manage the displays it detects the 2 monitors connected to that card. What are the specific steps I should take to get the second card detected? Supposedly there is a tool, system-config-xfree, but I don't have it and yum can't find it. Also, I heard it has something to do with editing the xorg.conf file or something to that effect. I have absolutely no idea how to find the "bus id" of my card, or look up the horizontal refresh rates, etc. I would probably have no problem following the documentation and editing the file if I knew a good way to find these values. Someone also suggested installing Linux twice and saving the xorg.conf it generates each time (with a different card each time) and then merging the two by hand. That is like killing a fly with a hammer, though; when I do this again in the future it'd be nice not to have it take twice as long. Thanks
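    A sketch of the usual manual route (the driver name and bus numbers below are placeholders to be replaced with real values): the bus IDs come from lspci, and each card then gets its own Device section in /etc/X11/xorg.conf.

        # find the bus IDs of both cards
        lspci | grep -i vga
        # e.g. "01:00.0 VGA compatible controller: ATI ..."  ->  BusID "PCI:1:0:0"

        # /etc/X11/xorg.conf - one Device section per card (values are examples):
        Section "Device"
            Identifier  "Card0"
            Driver      "radeon"
            BusID       "PCI:1:0:0"
        EndSection

        Section "Device"
            Identifier  "Card1"
            Driver      "radeon"
            BusID       "PCI:2:0:0"
        EndSection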

    Read the article

  • redirecting output from telnet / nc to file in script fails when cron'd

    - by qhartman
    So, I have a device on my network which sits there listening on a port for a connection, and when a connection is made it dumps ASCII data out. I need to capture that data to a file. I wrote a dead simple shell script that does this:

        #!/bin/bash
        # Config Variables. Age is in Days.
        DATA_ROOT=/root/data
        FILENAME=data_`date +%F`.dat
        HOST=device
        COMPRESS_AGE=3

        # Sanity Checks
        if [ ! -e $DATA_ROOT ]
        then
            echo "The directory $DATA_ROOT seems to not exist. Please create it."
            exit 1
        fi

        if [ -e $DATA_ROOT/$FILENAME ]
        then
            echo "You seem to have extracted data already today. Aborting"
            exit 1
        fi

        # Get Data
        nc $HOST 2202 > $DATA_ROOT/$FILENAME

        # Compress old Data
        find $DATA_ROOT -type f -mtime +$COMPRESS_AGE -exec gzip {} \;

        exit 0

    It works great when I run it by hand, but when I run it from cron, it doesn't capture any of the output. If I replace nc with telnet I see the initial telnet headers about escape sequences and whatnot, but not the data. Ideas? I've tried forcing bash to act like an interactive shell with -i. I've tried redirecting both stderr and stdout. I know it's got to be some silly simple thing, but I'm utterly failing. This is driving me nuts...

    EDIT: I also just noticed that the nc processes from all my previous attempts at this have been sitting there sleeping, and when I killed them, cron sent me a bunch of nonsensical error messages. At least now I have something to dig into!
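    A plausible explanation and fix (hedged, since netcat variants differ): under cron there is no terminal on stdin, so nc sits waiting to read from it and never finishes. Giving it an explicit empty stdin and an idle timeout usually makes the capture line behave the same way it does interactively:

        # -w gives an idle timeout; </dev/null stops nc waiting on a terminal that cron never provides.
        # Depending on the netcat flavour, -d (don't read stdin) or -q may be the right switch instead.
        nc -w 30 $HOST 2202 </dev/null > $DATA_ROOT/$FILENAME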

    Read the article

  • Is there a way to log commands that a user runs in Windows 7?

    - by camster342
    I manage a large enterprise environment, and while we try to advise users not to, there are inevitably users that need local admin access to their machines. The problem is that some of these users like to "fiddle" and sometimes screw up their machines in "wonderful" ways. Is there an easy way to log what a user does on a machine, specifically in the command prompt? Maybe there are third-party tools I could use to log this information? With Linux, which I used in past ages, you could look at a user's bash history file to see what commands they have run. While I realise that specific log could also be altered by the user if they wanted to cover their tracks, that is the sort of log I'm looking for. If there are ways I can also log other system-configuration changes they make (not necessarily command-line based), that's also useful. I know about event/system logs and so on, but they don't necessarily catch all the information I need to figure out how the user has buggered their machine this time.
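    One built-in starting point (a sketch, not a complete auditing solution): enable process-creation auditing via local or domain policy, so every program a user starts - including console commands - is written to the Security log as event 4688, which can then be queried or collected centrally.

        rem turn on success auditing for process creation
        auditpol /set /subcategory:"Process Creation" /success:enable

        rem later, pull the most recent process-creation events as text
        wevtutil qe Security "/q:*[System[(EventID=4688)]]" /c:20 /f:text /rd:true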

    Read the article

  • No partition on USB Flash Drive?

    - by Skytunnel
    A friend gave me a corrupted USB memory stick to try to recover data from. But I've had some unusual results, so I thought I'd share to see if anyone is familiar with this problem... First off I just tried opening it from my own PC. Windows prompted to format the drive, which I of course declined. I downloaded TestDisk to analyse the drive, and right away I noticed something strange: in the list of drives it comes up as

        Disk /dev/sdc - 6144 B - USB Flash Drive

    That's right, the first USB flash drive smaller than a floppy disk!? Moving on anyway... the first analysis comes up with:

        Partition sector doesn't have the endmark 0xAA55

    TestDisk's Quick Search gave no results, so I moved on to Deeper Search:

        No partition found or selected for recovery

    This left me stumped. I tried a couple of other programs with no success. I did manage to get a backup image, but it was just as small as TestDisk indicated, so there was nothing of use on it. After a few hours trying various suggestions from other sources, I gave in and just tried formatting the drive. But it returned the message: "Windows was unable to complete the format." From googling that, the suggestion was to delete the partition. But there is no partition to delete in this case. Most recently I've tried formatting from cmd, and got this result:

        Format D: /FS:FAT32
        The type of the file system is RAW
        The new file system is FAT32
        Verifying 0M
        11 bad sectors were encountered during the format.
        These sectors cannot be guaranteed to have been cleaned
        The volume is too small for FAT32

    Anyone got any suggestions? UPDATE: As per the suggestion from @Karen, I tried running a CLEAN from DISKPART, with results as follows:

        DiskPart has encountered an error: The request could not be performed because of an I/O device error.

    Read the article

  • Grub rescue, unknown file system. Can't boot into Windows 7

    - by Sam J
    So I'm confused, and I'm going to use this question both to get clarification and to fix my computer. Some background: I had Windows 7 on a 1 TB HDD and decided to partition my hard drive into two ~500 GB partitions, one for Windows 7 and one for Ubuntu or whatever flavour I desired (like a sandbox partition...). I installed Ubuntu but the installation had issues, so I decided to uninstall. Note that before uninstallation I had to press F12 when I turned the machine on to boot from my primary HDD, then choose which OS I wanted to use. Undesirable, but it worked. Anyway, after I decided to uninstall Ubuntu I went into Windows 7, Start -> Computer -> Manage, and deleted the EXT4 filesystem (the Ubuntu partition), giving me 4xx GB of free space. However, since restarting Windows 7, I am now unable to boot Windows. When I DON'T hit F12, I see a blank screen with a flashing underscore. When I DO hit F12, I choose my primary HDD and then I get a GRUB error:

        Unknown filesystem: grub rescue _

    Something I'm unclear on: GRUB boots Linux partitions, right? What boots Windows? Is GRUB "overwriting" the Windows bootloader? How can I completely get Windows back to normal (i.e., it boots automatically without hitting F12)? Thanks for any help, I'm on a live CD version of Ubuntu right now until I can get back on Windows.
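    A common recovery path, sketched under the assumption that a Windows 7 install or repair disc is available: deleting the Ubuntu partition removed the files GRUB needs, but GRUB's first stage is still in the MBR, which is why nothing boots on its own. Rewriting the Windows boot code from the recovery environment's command prompt usually restores automatic booting:

        rem from the Windows 7 disc: Repair your computer -> Command Prompt
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd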

    Read the article

  • Subdomain is preventing my search results from rising as expected in page rank

    - by culov
    My problem is that I have a site which requires a dedicated page for every city I choose to support. Early on, I decided to use subdomains rather than a directory directly after my domain (i.e. I used la.truxmap.com rather than truxmap.com/la). I realize now that this was a major mistake, because Google seems to treat la.truxmap.com as a completely different site from ny.truxmap.com. So for instance, if I search "la food truck map" my site will be near the top; however, if I search "nyc food truck map" I'm nowhere in sight, because ny.truxmap.com wouldn't be very high in the page rank by itself, and it doesn't get the boost that it ought to be getting from the better-known la.truxmap.com. So a mistake I made a year ago is now haunting my page rank. I'd like to know what the most painless way of resolving my dilemma might be. I have received so much press at la.truxmap.com that I can't just kill the site, but could I redirect all requests at la.truxmap.com to truxmap.com/la, and do the same for all supported cities, without trashing the current, satisfactory page rank results I'm getting from la.truxmap.com? EDIT: I left out some critical information. I am using Google Apps to manage my domain (that is, to add the subdomains) and Google App Engine to host my site. Thus, Google Apps provides a simple mechanism to mask truxmap.appspot.com (the App Engine domain) as la.truxmap.com, but I don't see how I can mask it as truxmap.com/la. If I can get this done, then I can just 301 redirect la.truxmap.com to truxmap.com/la as suggested below. Thanks so much!

    Read the article

  • CentOS server. What does it mean when the total used RAM does not equal the sum of RES?

    - by Michael Green
    I'm having a problem with a virtual hosted server running CentOS. In the past month a process (Java based) that had been running fine started having problems getting memory when the JVM was started. One strange thing I've noticed is that when I start the process, top shows it using 470mb of RAM while the 'used' memory immediately drops by over 1GB. If I run 'top', the total RES used across all processes falls short of the 'used' listed at the top by almost 700mb. The support person says this means I have a memory leak in my process. I don't know what to believe, because I would expect a memory leak to simply waste the memory the process is allocated, not to consume additional memory that doesn't show up using 'top'. I'm a developer and not a server guy, so I'm appealing to the experts. To me, if the total RES memory doesn't add up to the total 'used', it indicates that something is wrong with my virtual server set-up. Would you also suspect a memory-leaking Java process in this case? If I use free, before:

        total       used      free     shared   buffers   cached
        Mem:        2097152   149264   1947888  0         0         0
        -/+ buffers/cache:    149264   1947888
        Swap:       0         0        0

    free after:

        total       used      free     shared   buffers   cached
        Mem:        2097152   1094116  1003036  0         0         0
        -/+ buffers/cache:    1094116  1003036
        Swap:       0         0        0

    So it looks as though the process is using (or causing to be used) nearly 1GB of RAM. Since the process (based on top) is only using 452mb, does that mean that the kernel is all of a sudden using an additional 500mb?
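    A few non-destructive checks that often explain this kind of gap (a sketch, assuming shell access to the guest; on container-style virtualisation such as OpenVZ, which is common for VPS hosting and consistent with the zeroed buffers/cached columns above, host-imposed bean counters rather than a leak are frequently the answer):

        # does per-process RSS really fall short of "used"?
        ps aux --sort=-rss | head -15

        # kernel-side consumers that per-process RES never shows
        egrep -i 'slab|committed_as|pagetables' /proc/meminfo

        # present only on OpenVZ/Virtuozzo guests; failcnt > 0 means host-imposed limits were hit
        cat /proc/user_beancounters 2>/dev/null | head -20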

    Read the article

  • Outlook 2010 IMAP account - send on behalf

    - by Master of Celebration
    So I was looking for a way to manage the mail distribution of online shops, newsfeeds, etc., and have a nice solution via distribution groups, a.k.a. alias addresses. For example, I register an account on eBay using "[email protected]" (where org.com is my company, obviously). That address is an alias and can be managed on my on-premises mail server, setting the destination to somebody's mailbox independently of logging on to eBay - in case somebody else is to do the eBay stuff, I can quickly change the destination of that alias :-) So far, so good - and now to the problem: using Microsoft Outlook 2010 and an IMAP account on our mail server, I cannot figure out how to remove that "on behalf of" string visible in the from-field when sending a message under that [email protected] address. That's quite a pity, because eBay in particular doesn't accept/forward mails not coming from the registered address. Using other mail clients (e.g. Mozilla Thunderbird), the problem does not occur, so I guess it's Outlook-specific. I cannot "grant" permission to "send as", because that address is not a mailbox, but rather an alias only. Furthermore, the mail accounts are not Exchange, but IMAP! Does anybody have any other ideas to "remove" that annoying string? Consideration: we have to use Microsoft Outlook for some reason! :-)

    Read the article

  • Ubuntu + LigHTTPd: Server requests taking ages

    - by ctrl_freak
    I've had an issue since upgrading my distro a couple of weeks ago from Hardy: receiving data after making a request has increasing intervals of nothing, as you can see from the picture below.

        http://i49.tinypic.com/2w5lvr9.png

    I have since reinstalled fresh from an Ubuntu 10.04 Server (i386) disk, but am still having the same issues. I'm running on a LigHTTPd, MySQL, PHP5 stack. The surprising thing is that local browsing using lynx is super fast, as expected. Initially, after reinstalling, I copied over the old configuration files from the previous installation, but I have since reinstalled LigHTTPd and rebuilt the config file from scratch. The only correlation I could find was that I had attempted installation of ionCube and Zend Optimizer for a script I was testing; however, I would think that could no longer have an impact seeing as I have reinstalled the OS. I have also removed Suhosin just in case, but it had no effect. I'm thinking it possibly has something to do with networking, but I wouldn't know where to start. The server is manually assigned an IP by its MAC address on the router. The fact that the time seems to be exponential (to a point) worries me. I've tried strace'ing the LigHTTPd and MySQL processes; however, I couldn't see anything obvious, not that I'd really know what I'm looking for. RAM and CPU usage don't seem to be out of the ordinary, but I can't say it's perfect. I'm hoping someone has experienced the same, or can point me in a direction, as searching has proved fruitless when I don't know anything specific. Config files can be posted if requested.
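    A way to narrow down where the stall sits (a sketch, assuming curl is available on a client machine): break one request into phases. A long gap before time_connect points at DNS or the network path, while a long gap before time_starttransfer points at lighttpd/PHP on the server itself.

        curl -o /dev/null -s \
          -w 'dns:%{time_namelookup}  connect:%{time_connect}  ttfb:%{time_starttransfer}  total:%{time_total}\n' \
          http://your-server/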

    Read the article

  • System randomly freezes yet mouse still moves

    - by user784446
    This problem has lasted for the past 48 hours. The first time it happened, a program I was running stopped responding, so I tried to end it from Task Manager. The processes were at first listed fine, until hovered over. Eventually, after a few persistent clicks, the mouse stopped moving too, despite having worked until then, and the screen went blank shortly thereafter. The second time it occurred, items on the screen stopped responding - hovering over the taskbar or such wouldn't elicit a response. Sound would still play, however. Eventually, the mouse became unresponsive and the system restarted itself. I suspect that it may be a problem with my SSD. After looking through some Google search results, I downloaded HD Tune Pro to determine if there's a problem with the drive. The results flagged a problem with the reallocated sector count, and an error scan also revealed 48 bad sectors. Also, an attempt to back up the contents of the most important areas of the drive returned a few Explorer "Error: cannot read source from disk" errors. Should I ditch the drive and use another, or is there anything that can be done to repair it?

        SSD: OCZ Petrol 64gb
        CPU: AMD Athlon II X4 640
        RAM: Generic 3GB DDR2
        Motherboard: Gigabyte MA74GM-S2H
        OS: Windows 7 Ultimate x64

    Thanks!
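    Before attempting repairs, the usual advice is to image whatever is still readable with a tool that tolerates read errors, then look at the drive's own SMART counters. A sketch assuming a Linux live CD, the SSD at /dev/sda, and a second disk mounted at /mnt/backup:

        sudo apt-get install gddrescue smartmontools
        # copy everything readable first; bad areas can be retried later via the log file
        sudo ddrescue -f -n /dev/sda /mnt/backup/ssd.img /mnt/backup/ssd.log
        # reallocated / pending sector counts confirm whether the SSD is failing
        sudo smartctl -a /dev/sda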

    Read the article

  • How do I set up Tomcat 7's server.xml to access a network share with an different url?

    - by jneff
    I have Apache Tomcat 7.0 installed on a Windows 2008 R2 Server. Tomcat has access to a share '\\server\share' that has a documents folder that I want to access using '/foo/Documents' in my web application. My application is able to access the documents when I set the file path to '//server/share/documents/doc1.doc'. I don't want the file server's path to be exposed in my link to the file in my application. I want to be able to set the path to '/foo/Documents/doc1.doc'. In http://www3.ntu.edu.sg/home/ehchua/programming/howto/Tomcat_More.html, under 'Setting the Context Root Directory and Request URL of a Webapp', item number two says that I can rename the path by putting a context into the server.xml file. So I put:

        <Host name="localhost" appBase="webapps"
              unpackWARs="true" autoDeploy="true">

            <!-- SingleSignOn valve, share authentication between web applications
                 Documentation at: /docs/config/valve.html -->
            <!--
            <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
            -->

            <!-- Access log processes all example.
                 Documentation at: /docs/config/valve.html
                 Note: The pattern used is equivalent to using pattern="common" -->
            <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                   prefix="localhost_access_log." suffix=".txt"
                   pattern="%h %l %u %t &quot;%r&quot; %s %b" />

            <Context path="/foo" docBase="//server/share" reloadable="false"></Context>

        </Host>

    The Context at the bottom was added. Then I tried to pull the file using '/foo/Documents/doc1.doc' and it didn't work. What do I need to do to get it to work correctly? Should I be using an alias instead? Are there other security issues that this may cause?
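    Two points that commonly bite here, with a sketch rather than a verified fix: the account running the Tomcat service must itself be able to read \\server\share (the default Local System account generally cannot reach UNC paths, so a domain or service account is usually needed), and the context is often easier to manage as its own descriptor file rather than inside server.xml:

        <!-- conf/Catalina/localhost/foo.xml : the file name supplies the /foo path,
             so no path attribute is set here -->
        <Context docBase="//server/share" reloadable="false" />

    If only the documents subfolder should be exposed, pointing docBase at //server/share/documents and requesting /foo/doc1.doc keeps the rest of the share unpublished.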

    Read the article

  • Physical Debian to VMWare: vmware-converter, dd-image or otherwise?

    - by Dabu
    We have two Debian Lenny production machines, both running large commercial websites. Now these machines need to be moved, and in the process they need to be virtualized to VMware ESX. If you believe the information on the internet, there are several ways to accomplish this. The easiest for us would be to use our weekly dd backup, where the whole disk is imaged; however, I have no experience with this kind of technology or whether it is really possible. The second-best way would be via an application on the source machine virtualizing it and generating an ESX-compatible VM. However, the software is beta and unsupported, and after installation nothing really works (the /etc/init.d/vmware-converter script doesn't actually do anything; start and stop reply with success messages, yet ps shows that there are no new processes). The worst way, with the most work, would be to install a new machine and set it up manually, copying files and databases as needed. This part is clear in its execution, and my question(s) do not touch it. Is my first way possible? Has anyone done this yet, or better, has a page with instructions? Or is there a help page that explains how to correctly install, run and use the vmware-converter tool on a Debian installation (it's possible that I did something wrong during installation already)? Thank you.
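    On the first option: a raw dd image can in principle be turned into a disk that ESX can attach, though this is a sketch of the general route rather than a guarantee the guest will boot unmodified (device names, drivers and /etc/fstab may need fixing inside the VM afterwards).

        # convert the raw dd image into a VMDK that a new VM can use as its disk
        qemu-img convert -f raw -O vmdk weekly-backup.dd lenny-disk.vmdk
        # then move it onto the ESX datastore and import it there
        # (e.g. vmkfstools -i on the ESX host), attach it to a new VM and boot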

    Read the article

  • Ubuntu: Memory Leak

    - by Keener
    I'm having trouble finding where this memory leak is occurring. I'm running Ubuntu 8.04 LTS on a Dell XPS M1530. I have 3GB of RAM, and I'm finding that after about an hour or so of use, top shows me 2GB+ used. The strange thing is that when I add up the memory percentages by PID, either from top or ps aux, I find that I should only be using about 20-25% of my available RAM. What brought this to my attention was that I've begun running VMware Server again. Now, obviously the RAM usage spikes when I load a virtual machine, but the memory VMware is using does not account for the memory usage I'm seeing via top or free. Stopping VMware Server releases the memory which was allocated to it, but I'm still unable to find where this RAM is being used. After a complete reboot, of course, the memory is fine, but very quickly it climbs to 60-80% usage, with the processes only appearing to account for a third of that. Any ideas where I should look for more information on what this could be?
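    One thing worth ruling out before calling it a leak (a quick sketch): on Linux the difference between "used" in top/free and the per-process totals is very often just the page cache and buffers, which the kernel hands back on demand; the "-/+ buffers/cache" row of free shows usage with that excluded.

        free -m                                   # the -/+ buffers/cache row is the figure that matters
        grep -E -i 'memfree|buffers|^cached|slab' /proc/meminfo
        ps aux --sort=-rss | head -15             # biggest resident processes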

    Read the article
