Search Results

Search found 19256 results on 771 pages for 'boost log'.


  • Redhat Linux password fail on ssh

    - by Stephopolis
    I am trying to SSH into my Linux machine from my Mac. If I am physically at the machine I can log in with my password just fine, but over SSH it refuses me. I am getting:

        Permission denied (publickey,keyboard-interactive)

    I thought that it might be caused by some changes that I recently made to system-auth, but I restored everything to what I believe was the original format:

        #%PAM-1.0
        # This file is auto-generated.
        # User changes will be destroyed the next time authconfig is run.
        auth        required      pam_env.so
        auth        sufficient    pam_fprintd.so
        auth        sufficient    pam_unix.so nullok try_first_pass
        auth        requisite     pam_succeed_if.so uid >= 500 quiet
        auth        required      pam_deny.so
        account     required      pam_unix.so
        account     sufficient    pam_localuser.so
        account     sufficient    pam_succeed_if.so uid < 500 quiet
        account     required      pam_permit.so
        password    requisite     pam_cracklib.so try_first_pass retry=3
        password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
        password    required      pam_deny.so
        session     optional      pam_keyinit.so revoke
        session     required      pam_limits.so
        session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
        session     required      pam_unix.so

    But I still could not SSH in. I tried removing my password altogether and that didn't seem to help either. It still asks, and even when I enter an empty string (nothing) it still fails me out. Any advice?
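
    One hedged first check (not from the original post, and assuming a stock OpenSSH setup): the "publickey,keyboard-interactive" offer comes from sshd_config rather than from system-auth, so it is worth confirming that sshd allows password/keyboard-interactive logins and uses PAM at all, and watching the auth log during a failed attempt.

        # run on the Linux machine; directive names are standard OpenSSH
        grep -Ei 'PasswordAuthentication|ChallengeResponseAuthentication|UsePAM|AllowUsers|AllowGroups' /etc/ssh/sshd_config
        tail -f /var/log/secure    # watch sshd/PAM messages during a failed login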

    Read the article

  • Issue with a secure login - Why am I being redirected to the insecure login?

    - by mstrmrvls
    I'm having some issues getting a website working at my place of work. The issue was raised when a "double login" occurred from the secure login site. The second login was actually being prompted by the HTTP domain and not HTTPS. In essence the situation is like this:

    1. The user navigates to https://mysite.com/something
    2. The login prompt pops up
    3. The user enters username and password
    4. The user is presented with ANOTHER login prompt (IE will say it's insecure, and the address bar reflects that)
    5. If the user puts their password into the insecure prompt, they will log in to the insecure site. If they hit cancel, they are presented with a 401 page
    6. Navigating back to https://somesite.com/something will bypass the login prompt and log them in to the secure site automatically (cookie maybe)

    I'm a bit confused as to why the user isn't being logged in properly the first time (redirected to non-SSL), yet any subsequent login is okay. I've been trying to use Fiddler to see what is happening after the user puts in their password the first time, and to get Fiddler to automatically log in to the site (with no luck). I believe the website in question is using Basic/Digest authentication. Thanks for any help.
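
    One hedged way to trace this outside the browser (the URL is the placeholder from the question): capture the redirect and authentication headers that follow the first login, which shows exactly where the session drops to plain HTTP.

        curl -kv https://mysite.com/something 2>&1 | grep -Ei 'HTTP/1|^< location|www-authenticate'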

    Read the article

  • Setting up a test and live environment - how?

    - by Sean
    I am a bit new to servers and stuff, so I had a question. I have my development team working on my website. They are in different countries, and currently they put all the work live on the test site. But the test site is open to anyone who knows the URL. It is behind a directory, but this affects my QA process because I cannot use the accurate URL structures while keeping the general public from seeing it. So what I want to do is:

    Have my site live on the net but only for me and my team, like an internal network. I will also need to mirror this to my live site when I put it live. So I guess this is something like setting up a staging and live environment.

    How do I do it, and are both environments on the same physical server or do I need to buy two servers? If I set up a staging environment, how will my team and I access it, since we are all spread out? I assume we need to log into something to access it. What about the URL - do I need a different URL for the test site, or can I use the same live URL for the test site? I plan to get a dedicated server + CDN for my site. A sketch of one common approach follows below.
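
    A minimal sketch of that approach (assuming the staging site runs on Apache; the hostname, paths and password file are made up for illustration): keep staging on a subdomain of the real domain so URL structures match the live site, and put HTTP basic auth in front of it.

        # .htaccess (or vhost config) for staging.example.com
        AuthType Basic
        AuthName "Staging"
        AuthUserFile /etc/apache2/.staging-htpasswd
        Require valid-user

    The user file is created with `htpasswd -c /etc/apache2/.staging-htpasswd someuser`. Both environments can live on one server as separate virtual hosts; a second server only becomes necessary for isolation or load reasons.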

    Read the article

  • Snort not detecting outgoing traffic

    - by Reacen
    I'm using Snort 2.9 on Windows Server 2008 R2 x64, with a very simple configuration that goes like this:

        # Entire content of Snort.conf:
        alert tcp any any -> any any (sid:5000000; content:"_secret_"; msg:"TRIGGERED";)

        # command line:
        snort.exe -c etc/Snort.conf -l etc/log -A console

    Using my browser, I send the string "_secret_" in the URL to my server (where Snort is located). Example:

        http://myserver.com/index.php?_secret_

    Snort receives it and throws an alert. It works, no problem! But when I try something like this:

        <?php
        // (index.php)
        header('XTest: _secret_'); // header
        echo '_secret_';           // data
        ?>

    If I just request http://myserver.com/index.php, it does not work or detect anything in the outgoing traffic, even though the PHP file sends the same string both in a header and in the body, with no compression or encoding whatsoever (I checked using Wireshark). This looks to me like a Snort problem. No matter what I do, it only detects received packets. Did anyone ever face this sort of problem with Snort? Any idea how to fix it?
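
    One hedged possibility (an assumption, not something visible in the config): with TCP checksum offloading on the NIC, locally generated packets are often captured with "bad" checksums, and Snort silently ignores them. Telling Snort to skip checksum verification is a quick way to test that theory.

        snort.exe -c etc/Snort.conf -l etc/log -A console -k none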

    Read the article

  • Getting PAM/user info into php - something like Net_Finger instead of a db?

    - by digitaltoast
    I've got a very small user group who just need to log in, upload, check and then move specific files to a different area when ready. Right now, I use the nginx PAM auth module to log them in against their Unix accounts. As their login is their home directory, I've already got the info to send the uploads to the right area - one line of PHP and no database needed. But I'm maintaining a separate DB just so PHP can welcome them, grab their email and send them an email when processed. Yes, sure, I could use NoSQL or SQLite instead so as to not need a whole MySQL install. But it occurred to me that, since I've got all these blank user fields for phone numbers that I could populate with any data, I could use something like PHP's Net_Finger. Which failed for me with:

        sudo pear install Net_Finger
        Starting to download Net_Finger-1.0.1.tgz (1,618 bytes)
        ....done: 1,618 bytes
        could not extract the package.xml file from "/build/buildd/php5-5.5.9+dfsg/pear-build-download/Net_Finger-1.0.1.tgz"
        Download of "pear/Net_Finger" succeeded, but it is not a valid package archive
        Error: cannot download "pear/Net_Finger"

    At which point I thought I'd stop and take a Server Fault reality check: is this a really bad/dangerous/stupid idea just to stop me having to maintain details in two places rather than one? Is there a better way? Googling shows that it's not an oft-asked thing, so perhaps with good reason?
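
    A hedged alternative that avoids finger entirely (it assumes PHP's POSIX extension is installed and that nginx exposes the authenticated login as REMOTE_USER - both assumptions): read the GECOS field straight from the passwd database.

        <?php
        // hypothetical sketch: name/email packed into GECOS, comma-separated
        $pw    = posix_getpwnam($_SERVER['REMOTE_USER']);
        $gecos = explode(',', $pw['gecos']);
        $name  = $gecos[0] !== '' ? $gecos[0] : $pw['name'];
        $email = isset($gecos[3]) ? $gecos[3] : '';
        echo 'Welcome, ' . htmlspecialchars($name);
        ?>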

    Read the article

  • sendmail appends server name to external domains when relaying

    - by Chris
    My server is set to send all email to a corporate relay server. For the company domain, it works perfectly. I've recently found that emails sent to an outside domain are getting the hostname of my server appended to the address before being sent. Here is the log entry for one such attempt:

        Nov 6 09:46:45 myservername sendmail[45023]: rA6EkjiI045023: [email protected], delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30590, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (rA6Ekj2g045037 Message accepted for delivery)
        Nov 6 09:46:45 myservername sendmail[45061]: rA6Ekj2g045037: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=120885, relay=relay.company.com [x.x.x.x], dsn=2.0.0, stat=Sent (ok: Message 342335947 accepted)

    Notice the email address difference between it being accepted by my server for delivery (correct email address) and being sent to and accepted by the corporate relay (incorrect, with the server name appended). To make it more interesting, the application on my server uses email for user account verification/activation. In August, this particular user was able to register his account and activate it. I have made no configuration changes to mail since setting the server up over a year ago. DNS is also a corporate service. I've never touched my /etc/resolv.conf configuration:

        domain company.com
        nameserver <ip1>
        nameserver <ip2>
        search myservername

    Thanks!
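
    One hedged way to see where the rewrite happens (the address below is a placeholder, not the redacted one from the logs): run sendmail's address test mode and walk the recipient through rulesets 3 and 0.

        # on the sending server
        echo '3,0 someuser@example.com' | sendmail -bt

    It may also be worth a look at that `search myservername` line in /etc/resolv.conf - a hostname in the search list is unusual and could plausibly feed a bogus domain into sendmail's canonicalization, though that is only a guess.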

    Read the article

  • How do I set up Tomcat 7's server.xml to access a network share with a different URL?

    - by jneff
    I have Apache Tomcat 7.0 installed on a Windows 2008 R2 server. Tomcat has access to a share '\\server\share' that has a documents folder that I want to access using '/foo/Documents' in my web application. My application is able to access the documents when I set the file path to '//server/share/documents/doc1.doc'. I don't want the file server's path to be exposed in my link to the file in my application. I want to be able to set the path to '/foo/Documents/doc1.doc'. In http://www3.ntu.edu.sg/home/ehchua/programming/howto/Tomcat_More.html under 'Setting the Context Root Directory and Request URL of a Webapp', item number two says that I can rename the path by putting a context into the server.xml file. So I put:

        <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">

          <!-- SingleSignOn valve, share authentication between web applications
               Documentation at: /docs/config/valve.html -->
          <!--
          <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
          -->

          <!-- Access log processes all example.
               Documentation at: /docs/config/valve.html
               Note: The pattern used is equivalent to using pattern="common" -->
          <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                 prefix="localhost_access_log." suffix=".txt"
                 pattern="%h %l %u %t &quot;%r&quot; %s %b" />

          <Context path="/foo" docBase="//server/share" reloadable="false"></Context>

        </Host>

    The context at the bottom was added. Then I tried to pull the file using '/foo/Documents/doc1.doc' and it didn't work. What do I need to do to get it to work correctly? Should I be using an alias instead? Are there other security issues that this may cause?
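
    A hedged variation to try (assuming the Windows service account running Tomcat can read the share): define the context in its own descriptor instead of server.xml, and spell the docBase as a UNC path.

        <!-- conf/Catalina/localhost/foo.xml; the context path /foo comes from the file name -->
        <Context docBase="\\server\share" reloadable="false" />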

    Read the article

  • Apache LocationMatch does not work for group

    - by dma_k
    I would like to configure Apache to proxy mldonkey running at localhost. Initially I used the following configuration:

        <IfModule mod_proxy.c>
          <LocationMatch /(mldonkey|bittorrent)/>
            ProxyPass http://localhost:4080/
            ProxyPassReverse http://localhost:4080/
          </LocationMatch>
        </IfModule>

    and it didn't work! error.log reads:

        [error] [client 192.168.1.1] File does not exist: /var/www/mldonkey

    which means that Apache does not intercept the URL. However, when I change the regexp to the following:

        <LocationMatch /mldonkey/>

    it starts to work (i.e. mod_proxy functions OK). I have tried the following alternatives:

        <LocationMatch ^/(mldonkey|bittorrent)/>
        <LocationMatch ^/(mldonkey|bittorrent)/.*>
        <LocationMatch ^/(mldonkey|bittorrent)>
        <LocationMatch /(mldonkey|bittorrent)>
        <LocationMatch "^/(mldonkey|bittorrent)/">
        <LocationMatch "/(mldonkey|bittorrent)">
        <LocationMatch "/(mldonkey)">
        <LocationMatch "/(mldonkey)/">

    with no positive result. I am stuck. Please give me a hint where to look. P.S. Apache Server 2.2.19. P.P.S. I would be happy if <LocationMatch> would work, without using the heavy artillery of mod_rewrite.
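
    A hedged alternative for Apache 2.2 (an assumption about the cause, not a confirmed fix): skip <LocationMatch> and let mod_proxy do the regex matching itself.

        <IfModule mod_proxy.c>
          ProxyPassMatch ^/(mldonkey|bittorrent)/(.*)$ http://localhost:4080/$2
          ProxyPassReverse /mldonkey/ http://localhost:4080/
          ProxyPassReverse /bittorrent/ http://localhost:4080/
        </IfModule>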

    Read the article

  • Varnish going sick

    - by junke1990
    I'm having trouble with Varnish: it works for a couple of views and then just goes sick... The weird thing is that it does work for about 20 or 30 requests. If I call Apache directly, it works fine. I'm running Varnish version 3.0.3-1 on Debian Squeeze and, for now, Apache on port 80 and Varnish on port 8080 on the same server. I'm using https://github.com/mattiasgeniar/varnish-3.0-configuration-templates as a base for my VCLs and modified the VCLs to support Concrete5. Anyone have a clue on how I should debug this?

        backend default {
          .host = "127.0.0.1";
          .port = "80";
          .connect_timeout = 1.5s;
          .first_byte_timeout = 45s;
          .between_bytes_timeout = 30s;
          .probe = {
            .url = "/";
            .timeout = 1s;
            .interval = 10s;
            .window = 10;
            .threshold = 8;
          }
        }

    LOG:

        0 CLI             - Rd ping
        0 CLI             - Wr 200 19 PONG 1353791312 1.0
        0 CLI             - Rd ping
        0 CLI             - Wr 200 19 PONG 1353791315 1.0
        0 Backend_health  - default Still sick 4--X-R- 0 8 10 0.000689 0.000000 HTTP/1.1 301 Moved Permanently

    (the 301 is because I check for www.)
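
    A hedged reading of that health line (the probe URL below is made up): the probe fetches "/", gets the 301 to the www. host, and anything other than 200 counts as a failed poll, so the backend is marked sick and Varnish stops using it. Two ways to test that theory in the backend definition:

        .probe = {
          .url = "/health.txt";        # hypothetical path that answers 200 directly
          # or keep .url = "/" and accept the redirect as healthy:
          # .expected_response = 301;
          .timeout = 1s;
          .interval = 10s;
          .window = 10;
          .threshold = 8;
        }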

    Read the article

  • What tools can I use to locate the IP of a machine on my network?

    - by user134918
    I am logged in to a remote Windows Server machine, using Remote Desktop from another Windows machine, and am trying to attach it to a VPN for a LAN that I am also connected to locally. I can connect the remote machine to the VPN, but when I do so, I lose my Remote Desktop connection. I am now in a situation where I know (or think) that the remote machine is on my LAN, but do not know what its current IP is and therefore cannot connect to it again. I do not have any control over the infrastructure; all I have is a remote machine that I do control, and another machine, also under my control, that is connected to the same LAN that I'm trying to get the remote machine onto via the existing VPN. What tools are available for Windows to allow me to locate the machine on my LAN again? I am imagining that there must be a tool that broadcasts the machine's new IP using multicast, or tries to log in to a server component running somewhere with a known IP. Effectively, I am looking for some software that I can run on my remote machine, as well as my local machine, to allow me to discover the new IP address (on the LAN) assigned to the remote machine after connecting to the VPN.
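
    A hedged low-tech option from the local machine (the 192.168.1.x range is a placeholder for whatever the VPN subnet actually is): ping-sweep the subnet so every live host lands in the ARP cache, then look for the remote machine's MAC address.

        for /L %i in (1,1,254) do @ping -n 1 -w 50 192.168.1.%i > nul
        arp -a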

    Read the article

  • How and where do you manage your domain names?

    - by Saif Bechan
    In the past several years of doing web development I have often needed to buy new domain names. I also changed registrars a lot, so over the years I have accumulated multiple domain names scattered across different registrars all over the world. Now I want to bring a little structure into my business, and I am at the point where I want easy, convenient control over my domain names. Does anyone have an idea of the best way to structure this? I have made some suggestions; maybe you can comment on them for me.

    1) Just leave it as it is. To make adjustments I have to log into different panels, and for some registrars I have to email the changes.

    2) Transfer all the domains to one registrar. This will cost a lot, about 10 USD per domain name. But if I can find a registrar where I have full control over DNS, this is worth looking at.

    Can you give me some comments on how you are doing things now? Maybe also which registrar you prefer.

    Read the article

  • Identifying Exchange 2010 regular process that is walking the mailbox database

    - by toongeneral
    I have an Exchange 2010 server running on a SAN-backed platform. The platform does block-level backups on a snapshot/incremental basis that only capture changed data. I was surprised to see a regular period of time where the data changes were happening at a high, sustained rate. Due to the way this system works, that can lead to 1.2TB of stored data per month. The regularity implied a scheduled task, but it is not a fixed interval; it is approximately every 26-32hrs. The disks were performing read operations of ~5MB/s and write operations of ~4.5MB/s, for a period of 3-4hrs. The total written data was ~55-60GB. Reading on TechNet, I am wondering if the following is causing this: http://blogs.technet.com/b/exchange/archive/2011/12/14/database-maintenance-in-exchange-2010.aspx#checksumming The somewhat restrictive thing is that the process only happens at most once every 24 hours. I was able to investigate while it was running, finding the following:

    - the process is store.exe
    - it is working on the mailbox database files
    - while running, it is generating .log files (in the mailbox database folder) consistent with database changes
    - the mailbox database is ~60GB in size, which fits with the total data changes on each iteration

    I have currently switched to a fixed maintenance window, as a test. It's not clear whether this is the cause, as the symptoms fit but are not conclusive. Does anyone have any suggestions for additional troubleshooting?
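
    A hedged check from the Exchange Management Shell (property names as I recall them for Exchange 2010, so treat them as an assumption): confirm whether 24x7 background maintenance (the checksumming pass) is enabled on the database and what maintenance schedule is set.

        Get-MailboxDatabase -Status | Format-List Name,BackgroundDatabaseMaintenance,MaintenanceSchedule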

    Read the article

  • Cannot get MSSQL working with SQL Server 2005

    - by Ryan
    I'm a MySQL/Apache user trying my hand with IIS and SQL Server, so please have patience if this is a stupid question. I'm using IIS 7.5, PHP 5.3.13 and SQL Server 2005. IIS is running on port 90; not sure if that will make a difference or not. I know my SQL Server is running because I can explore/connect to it in Server Management Studio. I know PHP is configured properly, because //localhost:90/phpinfo.php works fine. I updated the extension line in php.ini to:

        extension=ext/php_msql.dll

    EDIT - However, when I run phpinfo(), under the "configure command" row this is present: --without-mssql

    I found/downloaded ntwdblib.dll and placed it in both system32 and the PHP root. All these things were supposed to fix the issue, and they haven't. This is the code I'm using, straight from php.net:

        <?php
        // Server in this format: <computer>\<instance name> or
        // <server>,<port> when using a non-default port number
        $server = 'localhost';

        // Connect to MSSQL
        $link = mssql_connect($server, 'uname', 'pwd');

        if (!$link) {
            die('Something went wrong while connecting to MSSQL');
        }
        ?>

    Obviously I'm using a real username and password, but when I load the file in my browser, I receive a 500 error. Upon checking the log, this is what is displayed:

        2012-06-25 12:41:29 ::1 GET /test.php - 90 - ::1 Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/536.5+(KHTML,+like+Gecko)+Chrome/19.0.1084.56+Safari/536.5 500 0 0 5

    That (to me) doesn't help much. What am I doing wrong? Thank you.
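
    A hedged first check before debugging the connection itself (the filename mix-up is only a guess): php_msql.dll is the old mSQL extension, while the SQL Server one is php_mssql.dll, and a quick script shows which, if either, actually loaded.

        <?php
        // run as e.g. check.php; sqlsrv is Microsoft's alternative driver for SQL Server
        var_dump(extension_loaded('msql'));
        var_dump(extension_loaded('mssql'));
        var_dump(extension_loaded('sqlsrv'));
        var_dump(function_exists('mssql_connect'));
        ?>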

    Read the article

  • Explain why .bash_logout won't run commands?

    - by Droogans
    So I've been wondering how to run these two lines of code every time I close an open instance of Terminal:

        history -c
        cat /dev/null > ~/.bash_history

    I export HISTFILE=5 on startup, but still want to flush that out when I'm done. I've tried looking around a bit in a couple of places and haven't had much luck. I run Linux Mint, and would also note here that I ran into a similar issue with .bash_profile; eventually, I discovered I needed to place all startup code in .bashrc, so maybe that has something to do with it. Here's my .bash_logout file:

        #!/bin/bash
        # ~/.bash_logout: executed by bash(1) when login shell exits.

        # when leaving the console clear the screen to increase privacy

        if [ "$SHLVL" = 1 ]; then
            history -c
            cat /dev/null > ~/.bash_history
            [ -x /usr/bin/clear_console ] && /usr/bin/clear_console -q
        fi

        #this does nothing on exit...
        echo 'logout';
        sleep 2s

    I've tried rearranging this script many ways. I'm not sure if I don't understand how bash works, or if any of this is running in the first place. Does the fact that I run an X server make bash consider Terminal something that isn't a logout on exit?
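
    A hedged workaround that sidesteps the login-shell question entirely (it goes in ~/.bashrc, which the question notes is the file that actually runs): clear history from an EXIT trap, which fires for every interactive shell, login or not.

        # in ~/.bashrc
        trap 'history -c; cat /dev/null > ~/.bash_history' EXIT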

    Read the article

  • Route outbound connections from local network through VPN

    - by Sharkos
    I have a server A running OpenVPN, an OpenVPN client B (a rooted Android phone, as it happens) and a third party C (a laptop, tablet, etc.) tethered to B. B can use the VPN to access the internet via A; C can use the tethered connection WITHOUT the VPN to access the internet via B. However, with the VPN on B active, I cannot load information from the internet on C. A appears to log similar traffic inbound and outbound when B or C attempt to load a webpage, say, but the VPN on device B reports no inbound traffic when the connection originated from C. Where should I look for packets being dropped, and what ip rules should I use to make sure they are passed back through the VPN and into the local network B <- C? (I'll obviously post whatever further information is needed.)

    Further info. Without VPN:

        root@android:/ # ip route
        default via [B's External Gateway] dev rmnet0
        [B's External Subnet] dev rmnet0  proto kernel  scope link  src [B's External IP]
        [B's External Gateway] dev rmnet0  scope link
        192.168.43.0/24 dev wlan0  proto kernel  scope link  src 192.168.43.1

    With VPN:

        root@android:/ # ip route
        0.0.0.0/1 dev tun0  scope link
        default via [B's External Gateway] dev rmnet0
        [B's External Subnet] dev rmnet0  proto kernel  scope link  src [B's External IP]
        [B's External Gateway] dev rmnet0  scope link
        [External address of A] dev tun0  scope link
        128.0.0.0/1 dev tun0  scope link
        172.16.0.0/24 dev tun0  scope link
        172.16.0.8/30 dev tun0  proto kernel  scope link  src 172.16.0.10
        192.168.43.0/24 dev wlan0  proto kernel  scope link  src 192.168.43.1
        192.168.168.0/24 dev tun0  scope link
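
    A hedged sketch of the NAT/forwarding rules usually needed on B for this topology (interface names are taken from the routes above; whether the tethering app already adds equivalent rules for rmnet0 is unknown):

        # run on B as root
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 192.168.43.0/24 -o tun0 -j MASQUERADE
        iptables -A FORWARD -i wlan0 -o tun0 -s 192.168.43.0/24 -j ACCEPT
        iptables -A FORWARD -i tun0 -o wlan0 -m state --state ESTABLISHED,RELATED -j ACCEPT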

    Read the article

  • environment variables generated by at command

    - by Jordan Arseno
    I'm inspecting /var/spool/cron/atjobs/a001cf01570e44 with cat, after running the at command from PHP using exec(). It looks like at has prepended the script with lots of Apache environment variables:

        #!/bin/sh
        # atrun uid=33 gid=33
        # mail www-data 0
        umask 22
        APACHE_RUN_DIR=/var/run/apache2; export APACHE_RUN_DIR
        APACHE_PID_FILE=/var/run/apache2.pid; export APACHE_PID_FILE
        PATH=/usr/local/bin:/usr/bin:/bin; export PATH
        APACHE_LOCK_DIR=/var/lock/apache2; export APACHE_LOCK_DIR
        LANG=C; export LANG
        APACHE_RUN_USER=www-data; export APACHE_RUN_USER
        APACHE_RUN_GROUP=www-data; export APACHE_RUN_GROUP
        APACHE_LOG_DIR=/var/log/apache2; export APACHE_LOG_DIR
        PWD=/home/jordanarseno/webroot/public_html/myapp; export PWD
        cd /home/jordanarseno/webroot/public\_html/myapp || {
            echo 'Execution directory inaccessible' >&2
            exit 1
        }
        curl -k http://localhost/myapp/crons/this_action/3

    The last line is the only real command I sent along with at via stdin. What is the purpose of these variables? Where is this procedure stored?
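
    A hedged demonstration of what is going on (paths are illustrative): at(1) snapshots the submitting process's environment and working directory into the job file so the job later runs under the same conditions, which is why a job queued from Apache/PHP carries the APACHE_* variables while one queued from a stripped-down shell does not.

        env -i PATH=/usr/bin:/bin at now + 1 minute <<'EOF'
        env > /tmp/at-env-from-clean-shell.txt
        EOF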

    Read the article

  • Removed Old Domain Trust. Now Progress (9.1D) can't open DB File

    - by RLH
    My company has an old server running Progress 9.1D on a Windows 2000 VM, which was used by our company OS (Vantage 6 by Epicor). Vantage was our primary OS for a very long time. About 2 years ago, we migrated to a larger, corporate OS and cancelled our service contract with Epicor. Yesterday, we removed an AD trust between the corporate domain and the old AD domain we used in the days of Vantage. After restarting the virtual server, I have been able to start the ProService for 9.1D Windows service; however, I cannot get Vantage to start back up. When I run the application, I get the error listed below.

    Transcript:

        ** Could not connect to server for database [progress db file], errno 0. (1432)

    How can I fix this? FYI, I haven't had to work with Progress in years, and even then I wouldn't have considered myself a "novice" - I'm even less knowledgeable than that title would suggest. Vantage had a lot of internal tools, and I recall that Epicor support managed to prevent .pf scripts from being executed. If there was a Progress-specific patch that needed to be applied, you had to do it within the Vantage software OR they had to remote into the machine to fix it. I may not be able to run a .pf script, but I do know that I can log into the console-based server application. (Yes, I can't even recall what that utility was called. It is sad.) It's been a long time and I never had to dig into Progress that much. Please help, and feel free to ask questions. If you need more info, I'll update this post.

    Read the article

  • Apache on Mac OS X

    - by Michal K.
    I am having problems with Apache on Mac OS X Lion 10.7.5. I have these VirtualHosts:

        <VirtualHost *:80>
          ServerName devel.dev
          DocumentRoot /var/www
        </VirtualHost>

        <VirtualHost *:80>
          ServerName test.dev
          DocumentRoot /var/www
          <Directory "/var/www">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
          </Directory>
        </VirtualHost>

    I have configured httpd.conf with ServerRoot = /usr/htdocs (an empty directory). test.dev and devel.dev always say "It works!". Why? In /var/www I have index.php, which contains only one letter, "k" (for testing).

    Edit, more info: I have restarted Apache a million times. The file with the VirtualHosts is included. error.log:

        [Tue Oct 02 20:03:55 2012] [notice] caught SIGTERM, shutting down
        [Tue Oct 02 20:03:55 2012] [warn] mod_bonjour: Cannot stat template index file '/System/Library/User Template/English.lproj/Sites/index.html'.
        [Tue Oct 02 20:03:55 2012] [notice] Digest: generating secret for digest authentication ...
        [Tue Oct 02 20:03:55 2012] [notice] Digest: done
        [Tue Oct 02 20:03:55 2012] [notice] Apache/2.2.22 (Unix) DAV/2 PHP/5.3.15 with Suhosin-Patch configured -- resuming normal operations

    When I stop Apache, localhost still displays "It works!"
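
    Two hedged things worth checking against that config (both are guesses from the symptoms, not confirmed causes): Apache 2.2 ignores name-based vhosts on *:80 unless NameVirtualHost is declared before the <VirtualHost> blocks, and index.php is only served automatically if DirectoryIndex lists it. The fact that "It works!" persists after stopping Apache also hints that a second httpd instance (e.g. OS X Web Sharing) may be answering on port 80.

        NameVirtualHost *:80
        DirectoryIndex index.php index.html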

    Read the article

  • Microsoft Word 2008 on the Mac sometimes "Disappears" documents, really.

    - by Ross Charette
    This happens in a computer lab environment and has happened at least 3 times. We are running Microsoft Office 2008 for Mac on Leopard; everything is updated. Our users' home directories are on a network drive, but the /Library/Cache folder is local. Typically a student will have a Word file that they have been working on; it has been saved before they even logged onto the computer that day. They log on, open the document, click the save icon (not File > Save), sometimes even save multiple times, then close Word. The document is now gone. It's not hidden, and there are no autosaves or anything in the Cache folder. It is definitely not in the trash or trashes folder. It can't be found by clicking on it in 'recent documents'. Searching meticulously through every folder in their home drive turns up nothing. They look using Finder; I look ssh'd in as root into their home using ls -la. I look for similar files in case they renamed it by mistake. It's gone. Disappeared. Vaporized. It's happened to at least 3 different users in the past year. Much whining. Any idea?

    Read the article

  • Exchange 2007 issue with internet receive connector

    - by user223779
    I have an issue with yahoo.co.uk. If I send a mail from within the Yahoo web console, the mail arrives in my inbox on the Exchange server. If I send mail from an iPhone configured to send via a mailbox with Yahoo settings, the mail is dropped. It is not the phone: I can send perfectly fine to other Exchange 2007 servers at the same service pack level. Looking at the SMTP receive log below, this message sent from the phone stops after "354 Start mail input; end with <CRLF>.<CRLF>":

        ,<,EHLO nm26-vm7.bullet.mail.ir2.yahoo.com,
        ,,250-mail.marcocm.com Hello [212.82.97.49],
        ,,250-SIZE 10485760,
        ,,250-PIPELINING,
        ,,250-DSN,
        ,,250-ENHANCEDSTATUSCODES,
        ,,250-AUTH,
        ,,250-8BITMIME,
        ,,250-BINARYMIME,
        ,,250 CHUNKING,
        ,<,MAIL FROM:,
        ,*,08D13F3CADECA060;2014-06-04T11:26:50.898Z;1,receiving message
        ,,250 2.1.0 Sender OK,
        ,<,RCPT TO:,
        ,,250 2.1.5 Recipient OK,
        ,<,DATA,
        ,,354 Start mail input; end with .,
        ,+,,

    This is a message hitting the same server, sent from Yahoo webmail:

        ,"220 mail.marcocm.com Microsoft ESMTP MAIL Service ready at Wed, 4 Jun 2014 12:29:26 +0100",
        ,<,EHLO nm4-vm6.bullet.mail.ir2.yahoo.com,
        ,,250-mail.xxx.com Hello [212.82.96.104],
        ,,250-SIZE 10485760,
        ,,250-PIPELINING,
        ,,250-DSN,
        ,,250-ENHANCEDSTATUSCODES,
        ,,250-AUTH,
        ,,250-8BITMIME,
        ,,250-BINARYMIME,
        ,,250 CHUNKING,
        ,<,MAIL FROM:,
        ,*,08D13F3CADECA06B;2014-06-04T11:29:26.237Z;1,receiving message
        ,,250 2.1.0 Sender OK,
        ,<,RCPT TO:,
        ,,250 2.1.5 Recipient OK,
        ,<,DATA,
        ,,354 Start mail input; end with .,
        2,,250 2.6.0 <[email protected] Queued mail for delivery,
        <,QUIT,
        ,,221 2.0.0 Service closing transmission channel,
        ,-,,Local
        ,+,,

    Any thoughts on how to fix this issue are much appreciated.
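
    A hedged next step (the connector name is a placeholder; cmdlets are from the Exchange 2007 management shell): raise protocol logging to verbose on the internet-facing receive connector and list the transport agents, since the session dies right after the 354 response, which is where content/anti-spam agents typically act.

        Set-ReceiveConnector "Internet Receive Connector" -ProtocolLoggingLevel Verbose
        Get-TransportAgent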

    Read the article

  • Random HTTP 413 error on Apache2/PHP/Joomla site

    - by jfab
    I have a Joomla site, and every once in a while when I submit something via a form, I get an HTTP 413 error:

        Request Entity Too Large
        The requested resource /index.php does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit.

    In the error.log file I get:

        Invalid Content-Length, referer: [site]/index.php

    It doesn't seem this has anything to do with the actual size of the request, for the following reasons:

    a) I tinkered with the configuration of both Apache and PHP. In Apache I tried increasing LimitRequestBody, and in PHP post_max_size, max_input_vars, memory_limit, and even upload_max_filesize. Every value is far beyond what is sent in a typical request that generates an error.

    b) The error pops up quite randomly, and often just hitting refresh lets me get through.

    c) I checked the request in Fiddler to make sure everything is right with the Content-Length stated in the header and with the content of the request itself. Everything appears to be in order. A curious thing is that when I resent the exact same request via Fiddler, I never got the error. It seems I can only recreate it through a browser.

    So I'm at my wit's end here. I don't even know where to look for the problem anymore. I don't know if it's Apache or PHP (though I can't find anything in the PHP error logs, so maybe that means Apache is the more likely culprit?), or PHP in general, or my Joomla site in particular (my bets were on Joomla until I recreated the error with a test script containing a very basic POST form, though it does pop up much more often on the Joomla site). If anyone can give any advice on where to even begin with this, I'll be very grateful!
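
    One hedged avenue (purely a guess from the "Invalid Content-Length" wording): a filtering or request-handling module sitting in front of the body parsing, rather than the size limits themselves. Listing the loaded modules is a cheap way to see whether such a module is even in play (apache2ctl on Debian-style layouts).

        apachectl -M 2>/dev/null | grep -Ei 'security|reqtimeout'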

    Read the article

  • Cannot Install Windows 7 SP1 (64-bit)

    - by Clever Human
    I have tried every way I know to get Windows 7 SP1 to install. It fails every time. Below is what looks like the relevant content of the CBS.log file. If there are further details that would help, or more information I can gather, I will get it.

        2011-08-15 10:32:52, Info CBS Startup: Package: Package_for_KB976902~31bf3856ad364e35~amd64~~6.1.1.17514 completed startup processing, new state: Installed, original: Installed, targeted: Installed. hr = 0x80070490
        2011-08-15 10:32:52, Info CBS WER: Generating failure report for package: Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, status: 0x80070490, failure source: CBS Other, start state: Partially Installed, target state: Installed, client id: SP Coordinater Engine
        2011-08-15 10:32:52, Info CBS Failed to query DisableWerReporting flag. Assuming not set... [HRESULT = 0x80070002 - ERROR_FILE_NOT_FOUND]
        2011-08-15 10:32:52, Info CBS Failed to add %windir%\winsxs\pending.xml to WER report because it is missing. Continuing without it...
        2011-08-15 10:32:52, Info CBS Failed to add %windir%\winsxs\pending.xml.bad to WER report because it is missing. Continuing without it...
        2011-08-15 10:32:52, Info CBS SQM: Reporting package change completion for package: Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, current: Partially Installed, original: Partially Installed, target: Installed, status: 0x80070490, failure source: CBS Other, failure details: "(null)", client id: SP Coordinater Engine, initiated offline: False, execution sequence: 517, first merged sequence: 517
        2011-08-15 10:32:52, Info CBS SQM: Upload requested for report: PackageChangeEnd_Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, session id: 101457924, sample type: Standard
        2011-08-15 10:32:52, Info CBS SQM: Ignoring upload request because the sample type is not enabled: Standard

    I have downloaded the service pack and run it from the EXE, I have installed it from Windows Update, and I have run every troubleshooting step I could find. Nothing has worked so far. Any advice would be appreciated.
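
    A hedged suggestion for the 0x80070490 ("element not found") failure, beyond what was already tried: check the servicing store for corruption and then read the resulting report before re-attempting SP1. The KB number is quoted from memory, so verify it before downloading.

        sfc /scannow
        rem after installing the System Update Readiness Tool (KB947821), review:
        rem %windir%\Logs\CBS\CheckSUR.log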

    Read the article

  • SSH with public/private key to iMac fails.

    - by bennedich
    I'm trying to connect to my iMac (server) from my MacBook (client) on my LAN. Both have Mac OS X 10.6.4; the server is running on a new, clean install of the OS. When I just activate Remote Login in System Preferences, everything works fine. But when I set up SSH to only work with a public/private key, I get the following error messages in the server log, depending on whether I use an RSA passphrase or not.

    With passphrase (case 1):

        PAM: user account has expired for <myServerUserName> from 192.168.X.X via 192.168.X.Y

    Without passphrase (case 2):

        Failed publickey for <myServerUserName> from 192.168.X.X port AAAAA ssh2

    This is my setup procedure:

    1. Create a private and public key on the client with the command ssh-keygen -t rsa. In case 1 I also set a passphrase.
    2. Move id_rsa.pub to the server path /Users/<myServerUserName>/.ssh/
    3. In this folder execute cat id_rsa.pub > authorized_keys
    4. Making sure Remote Login isn't active, execute sudo /usr/sbin/sshd -d on the server.
    5. Back on the client, type ssh -v -v -v <myServerUserName>@192.168.X.Y and get prompted to accept the RSA key fingerprint. This is NOT the same fingerprint as the one from when I created the private/public key (should it be?). I accept.

    Depending on the case:

    CASE 1: The client gets prompted for a password, and the response is permission denied even though the correct password is given. Back on the server I can read the error message stated above for case 1: PAM: user account has expired...

    CASE 2: The client gets the message "Connection closed by 192.168.X.Y". Back on the server I can read the error message stated above for case 2: Failed publickey...

    What could possibly cause this?
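
    A hedged check for case 2 (sshd's StrictModes silently rejects keys when the home directory or key files are group- or world-writable; the placeholder username is reused from the question):

        chmod 755 /Users/<myServerUserName>
        chmod 700 /Users/<myServerUserName>/.ssh
        chmod 600 /Users/<myServerUserName>/.ssh/authorized_keys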

    Read the article

  • linux kernel buffer memory is zero

    - by user64772
    Hi all. There is one question I can't find an answer to on Google. I have many Linux boxes, mostly with SLES or openSUSE, different versions and kernels. On some of them I am facing a problem with slow Oracle transactions. It is an intermittent problem, and when I log in to the box at such a time I see that Oracle is blocked in the kernel function sync_page:

        # while :; do ps axo stat,pid,cmd,wchan | egrep '^D|^R'; echo --; sleep 5; done
        D    3483  hald-addon-storage: polling   ide_do_drive_cmd
        Ds   4635  ora_dbw0_orcl                 sync_page
        Ds   4637  ora_lgwr_orcl                 sync_page
        Ds   4639  ora_ckpt_orcl                 sync_page
        D   11210  oracleorcl (LOCAL=NO)         sync_page
        D   12457  [smtpd]                       sync_page
        R+  12458  ps axo stat,pid,cmd,wchan     -
        --
        Ds   4635  ora_dbw0_orcl                 sync_page
        Ds   4637  ora_lgwr_orcl                 sync_page
        Ds   4639  ora_ckpt_orcl                 sync_page
        D   11210  oracleorcl (LOCAL=NO)         sync_page
        R+  12501  ps axo stat,pid,cmd,wchan     -
        --
        Ds   4635  ora_dbw0_orcl                 sync_page
        Ds   4637  ora_lgwr_orcl                 sync_page
        Ds   4639  ora_ckpt_orcl                 sync_page
        D   11210  oracleorcl (LOCAL=NO)         sync_page
        R+  12535  ps axo stat,pid,cmd,wchan     -
        --
        Ds   4635  ora_dbw0_orcl                 sync_page
        Ds   4637  ora_lgwr_orcl                 sync_page
        Ds   4639  ora_ckpt_orcl                 sync_page
        D   11210  oracleorcl (LOCAL=NO)         sync_page
        R+  12570  ps axo stat,pid,cmd,wchan     -
        --

    So I think the box has run out of memory for disk buffers, but memory looks fine:

                     total       used       free     shared    buffers     cached
        Mem:       4149084    3994552     154532          0          0    2424328
        -/+ buffers/cache:    1570224    2578860
        Swap:      3148700     750696    2398004

    I think this is the problem: the buffers figure is zero, so we must be writing directly to disk. But why is it zero? I have tried to Google this and found nothing. Can anyone help?
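
    A hedged note plus a quick check (the interpretation of the columns is general `free` behaviour, not specific to these boxes): the "buffers" column only counts block-device/metadata buffers, while file data sits under "cached", so a zero there does not by itself mean the page cache is gone. Watching dirty/writeback pages and I/O wait during a slow spell says more about whether writes are really going straight to disk.

        vmstat 1 10
        grep -E '^(Dirty|Writeback):' /proc/meminfo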

    Read the article

  • Configure nginx for slow connections to avoid corrupted downloads

    - by user1850273
    We have a Windows 2003 server running nginx 1.3.8. Our problem is users with slow connections (about 10 KB/s). Our server serves our program's update files, and when these users download from our server the downloaded file is incomplete or corrupted. (Users cannot download the file with a download manager, and the problem shows up in IE.) For example, on a slow connection a 25 MB file finishes after only 2 MB has been downloaded. On high-speed connections there is no problem. Also, when we redirect these slow connections to another port (e.g. 50005) with the same config, the downloads are much better, but still not as good as on other servers. Which config must we apply to avoid these download stops or corrupted downloads on slow connections? This is our server config:

        worker_processes  1;

        events {
            worker_connections  1024;
        }

        http {
            include       mime.types;
            default_type  application/octet-stream;

            log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                              '$status $body_bytes_sent '
                              '"$http_user_agent"';

            access_log  logs/access.log  main;

            sendfile           off;
            keepalive_timeout  60;

            server {
                listen       80;
                server_name  localhost;

                location / {
                    root   html;
                    deny   127.0.0.3;
                    index  index.html index.htm;
                }
            }

            server_tokens off;
        }

    Our server uses htaccess password accounting and we cannot use IIS on Windows. Which solution do you think is better: IIS with an extension to use Apache htaccess, or Apache for Windows instead of nginx? Thank you.
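
    A few hedged directives to experiment with for slow clients (the directive names are standard nginx; the values are guesses to test against, not known-good settings):

        send_timeout       300s;   # allow long gaps between writes to a 10 KB/s client
        keepalive_timeout  300;
        output_buffers     2 64k;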

    Read the article
