Search Results

Search found 12569 results on 503 pages for 'root plist'.


  • How to resume XMPP groupchat window in Irssi (using bitlbee)?

    - by mcnesium
    I use Bitlbee to chat on XMPP networks within my IRC client Irssi. This works great so far, and recently I started using XMPP multi-user chats as an alternative to IRC channels. I set up a channel using

        chat add <account> <[email protected]>

    in the &bitlbee control window, set

        chan <room> set autojoin true

    and entered /join #room in the &bitlbee window to join that groupchat. It then appears as its own Irssi window in the status bar. This seems to work OK too, with one exception: since I idle in the channels 24/7, my Irssi has to cope with the ISP's nightly 24h DSL disconnection. After it automatically reconnects, it does sort of rejoin that XMPP groupchat, but the traffic of the groupchat does not go back to its own Irssi window. Instead it keeps flooding &bitlbee with messages from root telling me about a

        Groupchat Message from unknown JID <jid>: <message>

    which is the traffic of the groupchat. The groupchat window itself is gone after the reconnect, and I again have to /join #room in &bitlbee to get it back. Even worse, the window number is unused until I rejoin the groupchat, and if I get a query from any network, the query nests in that unused window spot, so I first have to move the query off that spot and then move the rejoined groupchat back to its window number. I want my groupchat window to resume after the reconnect just like every other IRC channel. How can I get this done? Any ideas?

    Read the article

  • iptables: How to combine DNAT and SNAT to use a secondary IP address?

    - by Que_273
    There are lots of questions on here about iptables DNAT/SNAT setups, but I haven't found one that solves my current problem. I have services bound to the IP address of eth0 (e.g. 192.168.0.20), and I also have an IP address on eth0:0 (192.168.0.40) which is shared with another server. Only one server is active, so this alias interface comes and goes depending on which server is active. In order to get traffic accepted by the service, a DNAT rule is used to change the destination IP:

        iptables -t nat -A PREROUTING -d 192.168.0.40 -p udp --dport 7100 -j DNAT --to-destination 192.168.0.20

    I also wish all outbound traffic from this service to appear to come from the shared IP, so that return responses will work in the event of an active-standby failover:

        iptables -t nat -A POSTROUTING -p udp --sport 7100 -j SNAT --to-source 192.168.0.40

    My problem is that the SNAT rule is not always applied. Inbound traffic creates a connection tracking entry like this:

        [root]# conntrack -L -p udp
        udp 17 170 src=192.168.0.185 dst=192.168.0.40 sport=7100 dport=7100 src=192.168.0.20 dst=192.168.0.185 sport=7100 dport=7100 [ASSURED] mark=0 secmark=0 use=2

    which means the POSTROUTING chain is not consulted, and outbound traffic leaves with the real IP address as the source. I am thinking I can set up a NOTRACK rule in the raw table to prevent conntracking for this port number, but is there a better or more efficient way to make this work?

    Edit - alternative question: is there a way (in CentOS/Linux) to have an interface that can be bound to but not used, such that it can be attached to or detached from the network when a shared IP address is swapped between servers?
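
    For what it's worth, a minimal sketch of the NOTRACK idea mentioned above (port 7100 taken from the question, rules untested). One caveat worth hedging on: NAT itself is implemented via connection tracking, so with NOTRACK in place the DNAT/SNAT rules above would no longer apply either, and the service would have to bind the shared address directly:

        # raw table is traversed before conntrack; NOTRACK skips tracking entirely
        iptables -t raw -A PREROUTING -p udp --dport 7100 -j NOTRACK
        iptables -t raw -A OUTPUT     -p udp --sport 7100 -j NOTRACK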

    Read the article

  • S3sync not working

    - by user57833
    Hello, I managed to get s3sync to upload my test folder to Amazon S3 and can see it in the AWS Management Console. Downloading the data back to a test folder results in the following error message:

        root@mybucketname:/var/s3sync# ./week_download.sh
        s3Prefix backups/weekly
        localPrefix /var/s3sync/testdown/weekly
        s3TreeRecurse mybucketname backups/weekly
        Creating new connection
        Trying command list_bucket mybucketname prefix backups/weekly max-keys 200 delimiter / with 100 retries left
        Response code: 200
        prefix found: /
        s3TreeRecurse mybucketname backups/weekly /
        Trying command list_bucket mybucketname prefix backups/weekly/ max-keys 200 delimiter / with 100 retries left
        Response code: 200
        S3 item backups/weekly/
        s3 node object init. Name: Path:backups/weekly Size:0 Tag:d41d8cd98f00b204e9800998ecf8427e Date:Fri Oct 29 14:21:53 UTC 2010
        local node object init. Name: Path:/var/s3sync/testdown/weekly/ Size: Tag: Date:
        source:
        dest:
        Update node
        s3sync.rb:638:in `initialize': No such file or directory - /var/s3sync/testdown/weekly/.s3syncTemp (Errno::ENOENT)
            from s3sync.rb:638:in `open'
            from s3sync.rb:638:in `updateFrom'
            from s3sync.rb:393:in `main'
            from s3sync.rb:735

    I am using the following download script:

        #!/bin/bash
        # script to download from s3 to a local directory
        cd /var/s3sync/
        export AWS_ACCESS_KEY_ID=nothing to see here
        export AWS_SECRET_ACCESS_KEY=nothing to see here
        export SSL_CERT_DIR=/var/s3sync/certs
        ruby s3sync.rb -r -v -d --progress --make-dirs mybucket:backups/weekly /var/s3sync/testdown
        # copy and modify the line above for each additional folder to be synced

    Any ideas? Does the download script need to download to the source of Amazon S3, i.e. the testup folder? I was hoping that in the event of a complete failure, when the original folders no longer exist, it would just download everything for me. Note: I changed my bucket names to "mybucketname" so that they are not public!
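
    A hunch worth testing, not a confirmed fix: the backtrace shows s3sync failing to create its temp file inside a local directory that may not exist yet, and --make-dirs appears to create directories for downloaded items rather than the top-level destination. Creating the destination tree first is cheap to try:

        # make sure the local destination exists before the sync runs
        mkdir -p /var/s3sync/testdown/weekly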

    Read the article

  • Network driver for Hyper-V restore from Windows Home Server

    - by Philipp Schmid
    I have backed up a Windows Server 2008 instance running virtualized on Hyper-V to a Windows Home Server 2008 SP1 machine (I know I should have backed up the VHD instead). Now I need to restore the contents of the VM from WHS. I created a restore CD ISO and used it to boot a new VM. It all works as advertised up to the point where the restore process wants to load the network drivers (it only finds 4 disk drivers on the restore CD, but no network drivers). So I created a virtual floppy and copied the contents of 'Home Server Drivers for Restore' onto it. But no luck! I tried moving the 4 subdirectories into the root of the floppy, but that didn't work either. Finally, I started another instance of WS 2008 to identify the network driver that the virtualized instance is using (%WINDOWS%\system32\drivers\netvsc60.sys) and copied that file onto the virtual floppy, without success. Does anyone have any suggestions on how to get networking working on a Hyper-V instance running off the Windows Home Server Restore CD?

    UPDATE: As suggested by delenda, I added a legacy network adapter to my VM, and indeed I now get a network driver listed! However, the WHS is still not found, even after entering the home server name manually. PHS

    Read the article

  • Re-packaging commercial software into RPM packages

    - by gac
    The situation is this - I have a small CentOS 5 "cluster" (currently 7 machines, but potential for more) which runs a commercially available software package that's distributed essentially in tarball format (it's actually a zip file with a mixture of Windows/Linux binaries and an installation shell script with no potential for automation). I'd like to re-package this somehow into an RPM package (ideally one that I can throw onto a self-hosted yum repository) in order to keep these "cluster" machines both up to date and consistent. I could do 7 manual installations, but there's scope for error. As I understand it, I'll need to accomplish the following tasks:

        - add a non-privileged user to the target system for running the daemon without unnecessary root privileges
        - package up the binary files themselves from the final installation location on a separate build machine (probably under /opt/package for sanity's sake). No source is available.
        - add a firewall hole so that end-users can communicate with the "cluster" nodes
        - add a cron task which can start the daemon on @reboot

    I'm finding plenty of good packaging resources so far, but all are based on the traditional method (i.e. as if I were the vendor packaging up my source files), rather than re-packaging a ton of binary files from an already-installed instance of the application, which is the only option available to me. Does anyone have any good resources they can share for achieving this goal? Thanks!
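
    As a starting point only, here is a skeleton spec file for the binary-repack approach - the package name, paths, port, and user are placeholders, and the scriptlets are assumptions to adapt rather than a vendor-supported procedure:

        Name:           mypkg
        Version:        1.0
        Release:        1
        Summary:        Repackaged vendor binaries
        License:        Proprietary
        BuildRoot:      %{_tmppath}/%{name}-%{version}-root
        # pre-built binaries: skip automatic dependency generation
        AutoReqProv:    no

        %description
        Vendor binaries repackaged from an installed instance.

        %install
        rm -rf %{buildroot}
        mkdir -p %{buildroot}/opt/mypkg
        cp -a /opt/package/* %{buildroot}/opt/mypkg/

        %pre
        # create the unprivileged daemon user if it does not exist yet
        getent passwd mypkguser >/dev/null || \
            useradd -r -d /opt/mypkg -s /sbin/nologin mypkguser

        %post
        # firewall hole and @reboot start; adjust the port and daemon path
        /sbin/iptables -I INPUT -p tcp --dport 7000 -j ACCEPT || :
        echo '@reboot mypkguser /opt/mypkg/bin/daemon' > /etc/cron.d/mypkg

        %files
        %defattr(-,mypkguser,mypkguser,-)
        /opt/mypkg

    Note the iptables line is not persistent across reboots; a real package would edit /etc/sysconfig/iptables instead.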

    Read the article

  • Erratic response times with Apache 2.0.52 on Red Hat 4

    - by Kevin
    Under load, we've noticed response times from Apache vary greatly for the same 7k image. They can range anywhere from 0.01 seconds to 25 seconds or more. Unfortunately, due to corporate policy constraints, we are pretty much stuck on Apache 2.0.52. I'm at best an Apache novice, so I'm in over my head with this problem. My focus recently has turned to our choice of MPM module. We use the worker model on a dual-core, hyper-threaded blade. It doesn't appear that swapping is an issue, and I don't see any signs of a hardware problem. I've read that worker is optimal on hardware with many CPUs, while prefork is more suitable for our specific hardware profile. I can see conceptually how choosing the wrong MPM could result in this erratic behavior, but I'm not confident that it's the root cause here. Has anyone else seen this type of range in response times for simple static content? What else should I be looking into here?
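
    For reference when comparing configs, these are the stock worker-MPM defaults from the Apache 2.0 era, not a tuning recommendation. If MaxClients (or the thread pool) is exhausted under load, requests queue, and even a 7k static file can stall for many seconds, which matches the symptom:

        <IfModule worker.c>
            StartServers         2
            MaxClients         150
            MinSpareThreads     25
            MaxSpareThreads     75
            ThreadsPerChild     25
            MaxRequestsPerChild  0
        </IfModule>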

    Read the article

  • How can I check that all of my files copied correctly in a batch file?

    - by rima
    Dear all friends, I have a batch file that copies all the files from a src place to a dest place. I used the xcopy command. Now I want to make sure all of my files copied correctly and then delete all the files in the src folder - do you have any idea? I don't know if there is any command for deleting a folder with all the files and folders inside it. Please advise me. My source folder has the structure below:

        root
        |
        [sub folder1]
        |   filex.s
        |   filei.z
        [sub folder2]
        |   filep.a
        |   fileq.q
        [sub folder3]
        |   filex.s
        |   filei.z
        |   filsi.w
        file1.xx
        file2.cc
        file3.ss
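
    A sketch of one way to do this in a batch file - the paths are placeholders; /e copies the whole tree including subfolders, /v verifies each file as it is written, and rd /s /q removes a folder with everything inside it:

        @echo off
        xcopy "C:\src" "D:\dest" /e /v /y

        rem only remove the source if xcopy reported success
        if errorlevel 1 goto :failed
        rd /s /q "C:\src"
        goto :eof

        :failed
        echo Copy reported errors - source left in place.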

    Read the article

  • Out of memory errors but not actually out of memory...

    - by commradepolski
    So, myself and my fellow support techs have been fighting with this issue and we still don't know what the problem is. Let's start off with the system specs:

        - Windows XP 32-bit Corporate (SP2 and SP3)
        - Intel D975XBX2 mobo
        - 4 GB of RAM
        - Intel Core 2 Quad Q6600
        - ATI Radeon HD 3600 - 512 MB

    After a few hours of working on the machine, the end user will begin to see the following symptoms:

        - out of memory messages
        - title bars and menus don't draw in properly
        - problems accessing network resources
        - problems opening documents such as MS Word and MS PowerPoint files and text files
        - problems opening Explorer windows
        - general instability

    We have looked at Task Manager while this issue was occurring, and all indicators, like PF usage, threads, handles, etc., are normal. We have been having trouble pinpointing the root cause of this issue. It is also not confined to one user; it affects 8-10. So far we have tried:

        - resetting CMOS (waiting to see results)
        - replacing the video card (didn't help)
        - Windows updates (didn't help)
        - updating network drivers (didn't help)
        - switching a user from a 1 Gbps to a 100 Mbps network connection (awaiting results)
        - swapping the affected user's hardware (waiting for results)
        - increasing the desktop heap size (helped for a bit, but then the issue became more frequent)
        - applying the /3GB switch to XP (didn't help)
        - increasing, decreasing, and setting the page file to the system-managed state (didn't help)

    We did have a power outage at the office a couple of weeks ago, and all these issues became more frequent. Prior to the power outage it might take a week or so for the users to experience the issues, but since the power outage it takes 3-4 hours or less. We haven't had reports of the above issues causing BSODs, although that would be easier to diagnose :). Any help is greatly appreciated.
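
    For anyone comparing notes: title bars and menus failing to draw are consistent with desktop heap exhaustion, and the heap sizes mentioned above live in this registry value (XP defaults shown; the second SharedSection number governs the interactive desktop heap):

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems\Windows
            SharedSection=1024,3072,512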

    Read the article

  • Mail Server using Postfix

    - by unknown (google)
    I have currently set up my web application on an Amazon EC2 server. As is well known, sending email from EC2 is a problem. As a cheap and long-lasting solution, instead of using "authsmtp", is it possible to rent a server and use it as a mail server? I am currently looking for cheap hosting which will give me root access so that it can be configured and used as a relayhost. I am currently using Postfix as the MTA. Has anyone implemented this before? I am curious about the feasibility of this solution. I guess the common requirements are:

        1. a dedicated IP which is not blacklisted
        2. an open relay (open to my server only)

    Any tips for header configuration to keep the mails out of the spam folder? This is like exactly cloning authsmtp for personal use. Any suggestions for other mail server software instead of Postfix? Another problem is reverse DNS for this server. Should a PTR record be present if the server is used as a relayhost?
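
    A minimal sketch of the two halves, assuming Postfix on both ends - relay.example.com and the EC2 address are placeholders. Restricting mynetworks to the EC2 host is what keeps the rented box from being a true open relay:

        # on the EC2 instance (main.cf): hand all outbound mail to the rented relay
        relayhost = [relay.example.com]:587

        # on the rented relay (main.cf): accept relaying only from the EC2 host
        mynetworks = 127.0.0.0/8, 203.0.113.10/32
        smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination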

    Read the article

  • Slower/cached Linux file system required

    - by Chopper3
    I know it sounds odd, but I need a slower or cached filesystem. I have a lot of firewalls that are syslog'ing their data to a pair of Linux VMs, which write these messages to their 'local' (actually FC SAN attached) ext3-formatted disks and also forward them to our Splunk servers. The problem is that the syslog server is writing these messages as hundreds, sometimes thousands, of tiny ~4k writes per second back to our FC SAN - which can handle this workload right now, but our firewall traffic is going to grow by at least a factor of 5000% (really) in the coming months, and that'll be a pain for the SAN. I want to fix the root cause before it's a problem. So I need some help figuring out a way of getting these writes cached or held off from the 'physical' disks in some way, so that the VMs fire off larger but less frequent writes - there's no way of avoiding the writes, but there's no need for so many tiny ones. I've looked at the various ext3 options and set noatime and nodiratime, but that hasn't made much of a dent in the problem. Obviously I'm investigating other filesystems, but I thought I'd throw this out in case others have the same problem in the future. Oh, and I can't just forward these messages to Splunk; our firewall team insists they stay in their original format for diag purposes.
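
    Two knobs that may be worth a look, offered as ideas rather than a proven fix: classic sysklogd fsyncs after every line unless the file name is prefixed with '-', and ext3's commit= mount option batches journal flushes into larger, less frequent writes (facility, paths, and device below are placeholders):

        # /etc/syslog.conf - leading '-' tells syslogd not to sync after every message
        local0.*    -/var/log/firewalls.log

        # /etc/fstab - let ext3 hold dirty data for up to 60s before flushing
        /dev/sdb1  /var/log  ext3  noatime,nodiratime,commit=60  0 0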

    Read the article

  • Master does not appear to be a git repository error

    - by EmmyS
    I've inherited a position and instructions for creating a new git repository. Unfortunately I've run into problems, and no one here knows what to do. Hoping someone can help me out. Here are the instructions I was left:

        Create a new repository:
        1. For these steps you need to be in the gitosis-admin repository. If you don't have it, in a suitable parent folder do:
               git clone [email protected]:gitosis-admin.git
        2. Edit the gitosis.conf file - in the gitosis-admin root, under the [group base-repo] section, add the name of the new repo to the end of the "writable =" line.
        3. Commit the change and push back to the gitosis-admin master.
        For the next commands, my_new_project represents the name of your project:
               mkdir my_new_project
               cd my_new_project
               git init
        Copy in any files you want to use to start the repo, then:
               git commit -a -m "Initializing new repository"
               git remote add origin [email protected]:my_new_project.git
               git push master
               git push master:qa

    So I did steps 1 and 2 with no problem. It created a local folder on my machine called gitosis-admin, and I edited the gitosis.conf file as indicated. But when I try to do step 3 (which I assume is git push gitosis-admin master), bash tells me:

        fatal: 'master' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    What am I doing wrong?
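
    In case it helps anyone landing here: "git push master" makes git treat 'master' as the name of a remote, which is exactly what the error message is complaining about. Assuming the remote is called origin, as in the "git remote add" line above, the intended commands would presumably be:

        # push the local branch 'master' to the remote named 'origin'
        git push origin master
        git push origin master:qa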

    Read the article

  • Windows XP cannot access admin share

    - by barlop
    I have 3 systems, A, B, and Compx, all on XP, but computers A and B have an issue with Compx. Compx has network shares I can access. I can do \\compx and get some. But I cannot access the admin share c$: \\compx\c$ gives a login prompt, and I can't get any user/pass to work. I looked at permissions but don't see an issue. Nevertheless, I will describe what I see in the permissions.

    In the security tab of C:, I have Administrators, CREATOR OWNER, Everyone, bob, SYSTEM, and Users (6 entries). "CREATOR OWNER" has nothing ticked, and I can't seem to change that. If I tick them so they all get ticked and click Apply, it takes 2.5 minutes to complete its operation, and then they all untick. This isn't the root of the problem, though, since I get the same in the share I can access. In Advanced, I see those 6 entries, all "Full Control", all applying to "This folder, subfolders and files" - except CREATOR OWNER, which is "Subfolders and files only".

    I look at the properties for the share I can see. It looks the same, except that in Security > Advanced, double-clicking any of the entries shows the boxes all ticked but greyed out. That's not the problem, though, since I can access that share. So, I don't know what the problem is.
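
    One hedged guess, since the normal shares work but c$ does not: XP's Simple File Sharing maps all network logons to Guest, which blocks administrative shares regardless of NTFS permissions. It can be switched off in Folder Options > View ("Use simple file sharing"), or via the registry:

        rem 0 = classic model, 1 = force network logons to authenticate as Guest
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v forceguest /t REG_DWORD /d 0 /f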

    Read the article

  • named responding recursively to norecurse queries

    - by Keks
    I have a server on which named is running, intercepted by a second named server which it is not aware of. Querying the first named server results in timeouts. The server tries to resolve the query recursively; during that, the firewall redirects the DNS request from the first named server to the second one (the query from the first one is addressed to e.g. a root server and has its "recursion desired" bit set to 0). Despite that, the second named responds to this request with an answer that is entirely resolved, or at least resolved one level further than the first named server expects. So it ends up with a timeout even though it got a correct name server, or even the full IP, for the queried domain.

    In the first case, the first name server tries to follow the authority domain, ignoring the corresponding glue record, and ends up in a loop it aborts:

        queried: google.com -> got from named#2: ns1.google.com
        -> ignore glue record and query: ns1.google.com -> got authority from named#2: google.com

    In the second case, it ignores the answer section with the correct IP and instead tries to follow the name servers from the authority section, which ends up in the same dead end as case 1. So how can it be that the second named responds with recursive results even though the bit was explicitly set to 0 in the request from the first named?
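
    For reproducing this from the shell (the server address is a placeholder), dig can send the same non-recursive query the first named sends and show whether the intercepting server honours the cleared RD bit:

        # +norecurse clears the RD bit, like an iterative query from named
        dig @192.0.2.53 +norecurse www.example.com A

        # for comparison: the same query with recursion requested
        dig @192.0.2.53 +recurse www.example.com A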

    Read the article

  • mod_rewrite changes case even if not matching RewriteCond?

    - by kirdie
    I have a really strange problem with my MediaWiki, which I want to serve articles of the form mywiki.org/MyArticle. Now I got most of it to work using the following code, but it mysteriously cannot display the logo anymore.

        RewriteEngine On
        # don't rewrite valid requests to files and directories
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
        # mywiki.org/MyArticle gets rewritten to mywiki.org/index.php/MyArticle
        RewriteRule ^/(.*)$ /index.php/$1 [L,QSA]

    Now when I type mywiki.org/img/logo.jpg into my browser, the address changes to http://wiki.geoknow.eu/Img/logo.jpg (capital I) and I get the empty article page, but the image is definitely there (in my document root under the img folder):

        /var/www/mywiki.org$ ls img
        logo.jpg

    So far so bad. But now it gets really crazy: when I add

        RewriteCond %{REQUEST_URI} !^/.*\.jpg

    my address still gets rewritten, and my access log says:

        - - [05/Dec/2012:16:30:21 +0100] "GET /Img/geoknow_logo.jpg HTTP/1.1" 404 509 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Firefox/17.0"

    Where does that capital I in Img come from? The rule is not even executed, because at least one condition is definitely not met now, and I haven't defined any to-lowercase transformation anywhere. What is happening there, and how can I repair it?

    P.S.: Now all of a sudden the problem has gone away (the image is displayed as it should be, and there is no capital replacement anymore). What can cause this, and why does it spontaneously appear and disappear?
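
    A hedged observation for future readers: capitalising the first letter of a path component is characteristic of mod_speling rather than mod_rewrite - when a request misses, CheckSpelling looks for near-matches (including case variants) and answers with a redirect. If that module happens to be loaded, disabling it is one cheap test:

        # mod_speling "fixes" miscased URLs by redirecting, e.g. /img -> /Img
        <IfModule mod_speling.c>
            CheckSpelling Off
        </IfModule>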

    Read the article

  • Is it possible to create a live Linux ISO containing a Windows XP virtual machine?

    - by mark
    I would like to have a Linux live system that contains a Windows XP virtual machine. This would be run from a bootable USB flash drive. My attempts so far have been unsuccessful. I created a Lubuntu 12.04 virtual machine with VMware. I updated and configured it to my needs, and installed VirtualBox. I then created a Windows XP VM with VirtualBox inside the Lubuntu VM. I tested everything, and everything worked, including USB devices. I installed Remastersys in the Lubuntu VM, copied the XP VM folder to the /etc/skel folder, then created the custom ISO with Remastersys. I burned the ISO and tested it on a laptop. It worked flawlessly. All programs and wireless networking worked. My problem was the XP VM. VirtualBox started fine but would not run the VM. I got the following error:

        Result Code: NS_ERROR_FAILURE (0x80004005)
        Component: VirtualBox
        Interface: IVirtualBox {c28be65f-1a8f-43b4-81f1-eb60cb516e66}

    I ran Remastersys again, changing the permissions on the skel folder to R/W for everyone. I also logged into Lubuntu as root and ran Remastersys again. Each ISO I created worked fine but would not start the XP VM inside. On the last attempt, VirtualBox gave me an access error stating it cannot access the virtual disk.

    Is what I want to do possible? In theory I don't see why it would not work. Is it a permissions issue? Should I create the ISO and then add the XP VM by editing the ISO by hand? Is using a VM rather than real hardware as a build machine a problem? Any ideas? Keep any responses in layman's terms - I am still a Linux novice.
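
    A shot in the dark, assuming the VM files really do arrive via /etc/skel: files seeded that way can end up owned by the wrong user in the live session, and VirtualBox's "cannot access the virtual disk" error is consistent with that. The paths below are hypothetical - adjust to wherever the VM folder lands:

        # hand the copied VM files to the live-session user before starting VirtualBox
        sudo chown -R "$USER":"$USER" "$HOME/.VirtualBox" "$HOME/xp-vm"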

    Read the article

  • Strange SSH login

    - by Hikaru
    I am running a Debian server and I have received a strange email warning about an SSH login. It says that the user mail logged in via SSH from a remote address:

        Environment info:
        USER=mail
        SSH_CLIENT=92.46.127.173 40814 22
        MAIL=/var/mail/mail
        HOME=/var/mail
        SSH_TTY=/dev/pts/7
        LOGNAME=mail
        TERM=xterm
        PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games
        LANG=en_US.UTF-8
        SHELL=/bin/sh
        KRB5CCNAME=FILE:/tmp/krb5cc_8
        PWD=/var/mail
        SSH_CONNECTION=92.46.127.173 40814 my-ip-here 22

    I looked in /etc/shadow and found that no password is set for the account:

        mail:*:15316:0:99999:7:::

    I found these lines for the login in auth.log:

        Jun 3 02:57:09 gw sshd[2090]: pam_winbind(sshd:auth): getting password (0x00000388)
        Jun 3 02:57:09 gw sshd[2090]: pam_winbind(sshd:auth): pam_get_item returned a password
        Jun 3 02:57:09 gw sshd[2091]: pam_winbind(sshd:auth): user 'mail' granted access
        Jun 3 02:57:09 gw sshd[2091]: Accepted password for mail from 92.46.127.173 port 45194 ssh2
        Jun 3 02:57:09 gw sshd[2091]: pam_unix(sshd:session): session opened for user mail by (uid=0)
        Jun 3 02:57:10 gw CRON[2051]: pam_unix(cron:session): session closed for user root

    and lots of auth failures for this user. There are no lines with a COMMAND string for this user. Nothing was found with rkhunter or by inspecting processes with ps aux, and there are no suspicious connections in the netstat output (as far as I can see). Can anyone tell me how this is possible and what else should be done? Thanks in advance.
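
    A hedged note plus a stopgap: the auth.log lines show pam_winbind, not pam_unix, granting access, so the '*' in /etc/shadow is moot - the password was accepted on the winbind/domain side. While investigating, the account can be barred from sshd and given no usable shell:

        # /etc/ssh/sshd_config - refuse logins for the mail account
        DenyUsers mail

        # and remove its shell
        usermod -s /usr/sbin/nologin mail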

    Read the article

  • mysqldump --where with = operator doesn't get all rows - help!

    - by JonathanLIVE
    I have a situation with a particular table that now thinks it contains 4 petabytes of data. I know that sounds cool, but I assure you, it is only on a 60 GB partition. The table has 9 fields in it. One of them is a domain_id field. It is the best field to identify rows by, as there are only approximately 6300 distinct values. The only other field option to match on has over 2 million values, and that's just more difficult. I cannot do a straight mysqldump because it will attempt to output all 4 PB of data and fill the drive long before it gets close, so I need to surgically remove the good stuff, destroy the db, and recreate it. I believe that if I can do a dump for each domain_id value, then I will get most of the usable data out of it. This is what I am trying to use:

        mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table --max_allowed_packet=1000000000 database table --where="domain_id=10" > domains10.sql

    Using this, I expect every row with domain_id 10 to be exported. However, when I check the export, I am only getting 1 row, even though the db shows many, many rows when I look at it. It is as though the operator just finds one and then gives up. I have tried various operators. Using < or > I am able to get more of the data, but the export stops short at certain rows where the data has been compromised. With over 6000 ids to go through, I can't easily narrow down which rows are being affected in the export. So, what I need is an operator that will do what I thought = would do: simply give me an export of all records that match the specific field value. Also note, the only way I got this DB accessible at all is with innodb_force_recovery = 3. So I need to get this right, because after this is done, I have to drop the db in order to make MySQL functional again. Looking forward to any helpful answers.
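
    A sketch of the per-id loop described above, assuming the distinct ids can still be read out of the table ("database" and "table" are the question's own placeholders). Each id lands in its own file, so one corrupt range does not take the whole export with it:

        #!/bin/bash
        # pull the distinct ids, then dump each one separately
        for id in $(mysql -u root -N -e 'SELECT DISTINCT domain_id FROM table' database); do
            mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table \
                database table --where="domain_id=$id" > "domains$id.sql"
        done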

    Read the article

  • IPTables configuration help

    - by Sam
    I'm after some help with setting up iptables. Mostly the configuration is working, but regardless of what I try, I cannot allow localhost to access the local Apache only (i.e. localhost may access localhost:80 only). Here is my script:

        #!/bin/bash
        # Allow root to access external web and ftp
        iptables -t filter -A OUTPUT -p tcp --dport 21 --match owner --uid-owner 0 -j ACCEPT
        iptables -t filter -A OUTPUT -p tcp --dport 80 --match owner --uid-owner 0 -j ACCEPT
        # Allow DNS queries
        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT
        # Allow in- and outbound SSH to/from any server
        iptables -A INPUT -p tcp -s 0/0 --dport 22 -j ACCEPT
        iptables -A OUTPUT -p tcp -d 0/0 --sport 22 -j ACCEPT
        # Accept ICMP requests
        iptables -A INPUT -p icmp -s 0/0 -j ACCEPT
        iptables -A OUTPUT -p icmp -d 0/0 -j ACCEPT
        # Accept connections from any local machines but disallow localhost access to networked machines
        iptables -A INPUT -s 10.0.1.0/24 -j ACCEPT
        iptables -A OUTPUT -d 10.0.1.0/24 -j DROP
        # Drop ALL other traffic
        iptables -A OUTPUT -p tcp -d 0/0 -j DROP
        iptables -A OUTPUT -p udp -d 0/0 -j DROP

    Now I have tried many permutations and I'm obviously missing something. I place the new rules above the in/outbound SSH ones, so it's not rule precedence. If someone could give me the heads-up on allowing only the local machine to access the local web server, that'd be great. Cheers guys.
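
    A sketch of the usual answer for loopback-only access: traffic from localhost to localhost travels over the lo interface, so matching on the interface rather than on addresses is what typically does the trick (placed before the final DROP rules):

        # let localhost reach the local Apache, and let the replies back out
        iptables -A INPUT  -i lo -p tcp --dport 80 -j ACCEPT
        iptables -A OUTPUT -o lo -p tcp --sport 80 -j ACCEPT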

    Read the article

  • How do I rsync an entire folder based on the existence of a specific file type in that folder

    - by inquam
    I have a server set up that receives movies into a folder. I then serve these movies using DLNA. But all kinds of files end up in that initial folder: pictures, music, documents, etc. I thought I'd fix this by running the following inside that folder:

        rsync -rvt --include='*/' --include='*.avi' --include='*.mkv' --exclude='*' . ../Movies/

    This works: it scans the given folder and moves all the found movies of the given extension types to the Movies folder. But I wonder if there is any way to tell rsync that, if a folder is found containing a movie of the given extension types, it should sync the entire folder, including other files such as .srt. This is to make it easier for me to get subtitles moved along with the movie. I have a solution figured out via a script made in PHP (yes, I actually do most of my scripting on Linux in PHP... just a habit that stuck a long time ago), but if rsync can handle it from the start, that would be super. Also, I have noticed that this rsync invocation actually copies all the root folders in the given folder: if no movie is in a folder, it will create an empty folder. How do I prevent rsync from doing this, saving me the trouble of deleting all the empty folders in Movies?
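
    Two hedged ideas, assuming the movies sit one level deep as in the question: rsync's --prune-empty-dirs (-m) answers the empty-folder part directly, and a find loop can promote "contains a movie" to "sync the whole folder":

        # -m skips directories that would end up empty after filtering
        rsync -rvtm --include='*/' --include='*.avi' --include='*.mkv' --exclude='*' . ../Movies/

        # whole-folder variant: sync any directory containing at least one movie
        find . -type f \( -name '*.avi' -o -name '*.mkv' \) -printf '%h\n' | sort -u |
        while read -r dir; do
            rsync -rvt "$dir" ../Movies/
        done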

    Read the article

  • WordPress on Apache is redirecting all https to http

    - by Krist van Besien
    I have a problem with a WordPress site on a server I admin. I don't know anything about WordPress, however. My problem is that we want the site to be accessed over https, but somehow all requests to https:// URLs are answered by the server with a 302 redirecting to http. The WordPress site itself is configured to use https, and we see that the links in the generated pages are all https links. In the Apache config there are no rewrite rules and no redirects. However, any request to an https:// URL is answered with a redirect to the equivalent http URL. And I really would like to know where these redirects are coming from - what is generating them. I've increased the log level on the webserver to DEBUG, but did not get any info there. I tried to enable debug logging in WordPress per the recipe I found here: http://codex.wordpress.org/Debugging_in_WordPress - but did not get a debug.log file in the directory where one should appear. I'm really at a loss here and need to fix this urgently. Any hints as to where to start looking? Apache is 2.2.14 on Ubuntu. There are several other virtual hosts on this server using PHP and https without any problem.

    Edit: I created a small info.php script and dropped it in the webserver's root. Calling it yields the output of the script; no redirect is generated. This suggests that it's not the webserver but WordPress that is doing it. A second thing I noticed is that the redirect comes with several cookies, one of which has "httponly" set. Could that be it?
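
    For the record, a hedged wp-config.php sketch of the two things usually checked first here: WordPress 302-redirects any request whose scheme does not match its configured home/site URL, and the debug.log file only appears once WP_DEBUG_LOG is enabled (example.com is a placeholder):

        // force the canonical URLs to https so WordPress stops "correcting" them
        define('WP_HOME',    'https://www.example.com');
        define('WP_SITEURL', 'https://www.example.com');

        // write debug output to wp-content/debug.log
        define('WP_DEBUG', true);
        define('WP_DEBUG_LOG', true);
        define('WP_DEBUG_DISPLAY', false);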

    Read the article

  • SSH hangs when executing command remotely

    - by Serty Oan
    Client: OpenSSH_5.1p1 Debian-5ubuntu1 (Ubuntu 9.04)
    Server: OpenSSH_5.1p1 Debian-5 (Proxmox 2.6.24-7-pve)

    I use SSH to execute commands remotely on the server (the check_by_ssh module of Nagios). But SSH hangs from time to time when trying to execute commands. I can log in to the server via SSH, but not execute a simple 'ls'. And it seems to block all clients from the same IP address. Authentication is not the problem, whether made by SSH keys or password.

        ssh -l root -p 2222 server.domain.tld 'ls'

    Here is the client debug info:

        debug1: Entering interactive session.
        debug2: callback start
        debug2: client_session2_setup: id 0
        debug1: Sending environment.
        debug3: Ignored env ORBIT_SOCKETDIR
        *** skipping approx 40 ignored env vars
        debug1: Sending command: ls
        debug2: channel 0: request exec confirm 1

    It hangs there. Then, after a random amount of time, it works again (without my doing anything). Killing all sshd processes on the server seems to work too. It works from PuTTY. I saw that some people had trouble like this due to ISP reverse DNS problems, but that does not seem to be the case here. It can work for hours and then not work for half an hour or so. What could explain this behaviour?
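
    A debugging step rather than an answer: running a second sshd in the foreground on a spare port shows the server's view of the stall, which the client-side -v output above cannot:

        # on the server: a one-off sshd in debug mode on an unused port
        /usr/sbin/sshd -d -d -d -p 2223

        # on the client: reproduce against the debug instance
        ssh -vvv -l root -p 2223 server.domain.tld 'ls'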

    Read the article

  • Configure New Server for .htaccess

    - by Phil T
    I have a new LAMP CentOS 5 server I am setting up, trying to copy the configuration from another web server I have. I am stuck with what I think is a mod_rewrite problem. If I go to http://old-server.com/any_page_name.php, it correctly routes through some handling code in index.php and shows me a graceful "Page Cannot Be Displayed" message. But if I go to http://new-server.com/any_page_name.php, I get an ugly Apache 404 Not Found error. I looked in both httpd.conf files and they both have only one reference to mod_rewrite:

        LoadModule rewrite_module modules/mod_rewrite.so

    So it seems like that should be fine. At the bottom of httpd.conf I have this code:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html
            ServerName new-server.com
            ErrorLog logs/new-server.com-error_log
            CustomLog logs/new-server.com-access_log common
        </VirtualHost>

    Then in the root of /var/www/html I have the exact same .htaccess file, which looks like this:

        RewriteEngine on
        Options +FollowSymlinks
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . index.php [L]
        ErrorDocument 404 /page-unavailable/
        <files ~ "\.tpl$">
        order deny,allow
        allow from none
        deny from all
        </files>

    So I don't see why the page load at old-server.com works fine while new-server.com doesn't route through index.php like I want it to. Thanks.
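
    One hedged first check when an .htaccess file is silently ignored on a fresh CentOS box: the stock httpd.conf ships with AllowOverride None for /var/www/html, which disables .htaccess processing entirely. Something like this, adjusted to the actual path, re-enables it:

        <Directory "/var/www/html">
            Options FollowSymLinks
            AllowOverride All
        </Directory>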

    Read the article

  • Nginx/Aegir/Drupal permissions: 403 Forbidden

    - by nlam
    New to nginx. Installed on Mac OS for use with Aegir and Drupal. It's running great, but I have a problem with permissions. My hostmaster installation is here:

        /var/aegir/hostmaster-6.x-1.7/

    The hostmaster settings file is here:

        /var/aegir/hostmaster-6.x-1.7/sites/aegir.ldev/settings.php

    Permissions for settings.php are set to 440 automatically by hostmaster, but I'm getting a 403 Forbidden page because of this. If I give read permission to "other", the site works great (444 or even 004). Drupal is also telling me that the file system paths are not writable (sites/aegir.ldev/files and sites/aegir.ldev/private); I would have to change the permissions there too. Moreover, I would have to change permissions for every site installed by hostmaster. Anyway. In my nginx.conf I have the following:

        user "myuser" _www;

    Owner and group for settings.php, sites/example.ldev/files, and sites/example.ldev/private are "myuser" and "_www". Changing permissions to 004 solves the problem but really confuses me. Why does "other" have permission and not owner or group? I've checked the processes running in Activity Monitor: nginx is running as "myuser", except for one process running as root. So I'm stumped. Hope someone can help.
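
    One hedged line of investigation: with nginx serving Drupal, settings.php is read by the PHP-FPM worker, not by nginx itself, so it is the FPM user and group that must match the file's owner/group. Comparing the two is quick:

        # who actually owns the running workers?
        ps aux | egrep 'nginx|php-fpm'

        # and who owns the file they must read?
        ls -l /var/aegir/hostmaster-6.x-1.7/sites/aegir.ldev/settings.php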

    Read the article

  • Enabling HTTP access on port 80 on CentOS 6.3 from the console

    - by Hugo
    I have a CentOS 6.3 box running on Parallels and I'm trying to open port 80 so it is accessible from outside. I tried the GUI solution from this post and it works, but I need to get it done from a script. Tried to do this:

        sudo /sbin/iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        sudo /sbin/iptables-save
        sudo /sbin/service iptables restart

    This creates exactly the same iptables entries as the GUI tool, except it does not work:

        $ telnet xx.xxx.xx.xx 80
        Trying xx.xxx.xx.xx...
        telnet: connect to address xx.xxx.xx.xx: Connection refused
        telnet: Unable to connect to remote host

    UPDATE:

        $ netstat -ntlp
        (No info could be read for "-p": geteuid()=500 but you should be root.)
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address    Foreign Address    State
        tcp   0      0      0.0.0.0:3306     0.0.0.0:*          LISTEN
        tcp   0      0      127.0.0.1:6379   0.0.0.0:*          LISTEN
        tcp   0      0      0.0.0.0:111      0.0.0.0:*          LISTEN
        tcp   0      0      0.0.0.0:80       0.0.0.0:*          LISTEN
        tcp   0      0      0.0.0.0:22       0.0.0.0:*          LISTEN
        tcp   0      0      127.0.0.1:631    0.0.0.0:*          LISTEN
        tcp   0      0      127.0.0.1:25     0.0.0.0:*          LISTEN
        tcp   0      0      0.0.0.0:37439    0.0.0.0:*          LISTEN
        tcp   0      0      :::111           :::*               LISTEN
        tcp   0      0      :::22            :::*               LISTEN
        tcp   0      0      ::1:631          :::*               LISTEN
        tcp   0      0      :::60472         :::*               LISTEN

        $ sudo cat /etc/sysconfig/iptables
        # Generated by iptables-save v1.4.7 on Wed Dec 12 18:04:25 2012
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [5:640]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        COMMIT
        # Completed on Wed Dec 12 18:04:25 2012
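
    A reading of that saved ruleset, hedged only in that it assumes the listing above is current: -A appended the port 80 ACCEPT after the catch-all REJECT line, and iptables stops at the first matching rule, so the new rule can never fire. Inserting it above the REJECT and saving should match what the GUI produces:

        # insert before the REJECT (rule 5 in the INPUT chain) instead of appending
        sudo /sbin/iptables -I INPUT 5 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        # persist the running ruleset to /etc/sysconfig/iptables
        sudo /sbin/service iptables save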

    Read the article

  • Disk operations in Windows 7 are slow

    - by Skadlig
    My computer started lagging last Sunday. I tried to reboot it and it failed. Trying to boot into safe mode takes around two hours; it mainly freezes on two files, scsiport.sys and classpnp.sys. When it has finally started, all disk operations are really slow. After it has run for a while, things get faster, probably because data has moved into RAM. Before that, it froze on another file associated with Avast, but uninstalling Avast didn't really help. A critical Windows update was installed on Sunday, but rolling back the update didn't help. I had a guess about the sound card, but disabling the sound card drivers also didn't help. I have an inkling that it might be Intel Rapid Storage Technology acting up, but it won't let me reinstall that from safe mode, and I haven't been able to log into normal mode for a while. I would appreciate suggestions on how to get into normal mode again and/or what the root cause might be.
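
    A first check worth doing, since multi-hour boot hangs around classpnp.sys/scsiport.sys are a classic sign of a drive struggling to read sectors: query the SMART status and schedule a surface scan from an elevated command prompt:

        rem quick SMART verdict for each physical drive
        wmic diskdrive get model,status

        rem full read test with bad-sector recovery, runs at next boot for C:
        chkdsk C: /r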

    Read the article
