Search Results

Search found 17625 results on 705 pages for 'techno log'.


  • Apache httpd processes and PHP out of memory

    - by Ofri
    I have a VPS running Apache, PHP and MySQL on CentOS, with a single Drupal website installed. The VPS has 256MB of RAM (which could be the root cause of all my problems... maybe I just need more). Whenever I try to open my website from multiple browser tabs (about 8... not 800) all at once, Apache crashes! I have this in the log:

      [Wed Oct 24 11:26:31 2012] [error] [client xxx] PHP Fatal error: Out of memory (allocated 28049408) (tried to allocate 201335 bytes) in xxx on line 2139, referer: xxx

    I have read many, many posts here, but I think there is something fundamental that I'm missing. If I understand correctly, some PHP script tried to allocate 200K after having already allocated 28MB, and failed to do so. First question: should this cause Apache to crash? Next, I watched the 'top' command while running my little test. Indeed I see 7 httpd processes, each reserving about 30MB - which explains why my RAM runs out. How do I prevent Apache from creating new processes until it runs out of memory? I tried configuring /etc/httpd/conf/httpd.conf like this:

      <IfModule prefork.c>
      StartServers 1
      MinSpareServers 1
      MaxSpareServers 1
      ServerLimit 1
      MaxClients 1
      MaxRequestsPerChild 100
      </IfModule>

    But got the exact same result! What am I missing? Thanks a lot!
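
    Two hedged things to check, given those numbers: the <IfModule prefork.c> block is silently ignored unless Apache was actually built with the prefork MPM, and ServerLimit changes only take effect on a full restart, not a graceful reload. A sketch:

      httpd -l                  # compiled-in modules must list prefork.c for that block to apply
      service httpd restart     # full restart; a graceful reload does not re-read ServerLimit

    With ~30MB per PHP-enabled httpd process in 256MB of RAM, a more realistic ceiling is MaxClients 4-6 (rather than 1), paired with a lower memory_limit in php.ini so no single script can take the whole budget. And a PHP out-of-memory fatal by itself only kills that one request; what takes the whole server down is more likely the kernel OOM killer stepping in once all the workers together exhaust RAM.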


  • Setting the default permissions for files uploaded via FTP to a directory

    - by Kerri
    Disclaimer: I'm just a web designer/coder, and server admin stuff is my weakest point of them all. So be easy on me (and very specific). I'm using a simple CMS (Unify) on a site, where part of the functionality is that the client can upload files to a specified directory (using FTP). The permissions for the upload directory are set to 755. But when files are uploaded through the interface, they are uploaded with permissions set to 640 (instead of 644), so site visitors cannot access the files. When I emailed the CMS's support about this, they told me that it was a server setting, and I need to make sure that files uploaded through FTP are set to 644. Makes perfect sense, but I have no idea how to do this. Any help would be greatly appreciated. This site is hosted on a shared server at Network Solutions (Unix), so my access options are limited. I can edit .htaccess files and php.ini, but that's about all I have access to. It appears I can't even log on via shell. ETA: 11/11/2010 Thanks all. I was able to work around this problem by setting up the CMS's settings in a different way. I'd be interested in following up on Nick O'Niel's suggestions, because I think he's on the right track, but unfortunately I can't access the necessary files on this particular server. So, anyway, I'm leaving this open, since the original question isn't exactly resolved. Unfortunately, I probably can't put a correct answer to the test, since the shared server in question has nearly all of its config files tightly locked down.
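
    For anyone who does have access to the FTP daemon's configuration, the knob in question is the daemon's umask: 640 means the server is applying umask 027, and 644 needs 022. A sketch for the common daemons (which one Network Solutions runs, and whether they expose the setting, is the unknown here):

      # vsftpd (/etc/vsftpd.conf)
      local_umask=022
      # proftpd (/etc/proftpd.conf)
      Umask 022
      # pure-ftpd (startup flag: file umask : directory umask)
      pure-ftpd -U 022:022

    Without server access, a fallback some CMSs allow is chmod'ing after upload (e.g. a hook calling PHP's chmod($file, 0644)), but that depends entirely on the CMS exposing such a hook.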


  • Can't install MySQL 5.1 on a Windows machine because the last install left artifacts

    - by Zombies
    After uninstalling MySQL 5.1 (64-bit version) I cannot install the win32 version! Apparently the devs felt it necessary to leave helpful artifacts behind? I have rebooted my machine, but with no effect. Running this:

      C:\Users\User1>net start mysql
      The MySQL service is starting.
      The MySQL service could not be started.
      A system error has occurred.
      System error 1067 has occurred.
      The process terminated unexpectedly.

    And ran this:

      C:\Program Files (x86)\MySQL\MySQL Server 5.1\bin>mysqld --console
      100213 10:52:58 [Note] Plugin 'FEDERATED' is disabled.
      InnoDB: Error: log file .\ib_logfile0 is of different size 0 10485760 bytes
      InnoDB: than specified in the .cnf file 0 25165824 bytes!
      100213 10:52:59 [ERROR] Plugin 'InnoDB' init function returned error.
      100213 10:52:59 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
      100213 10:52:59 [ERROR] Unknown/unsupported table type: INNODB
      100213 10:52:59 [ERROR] Aborting
      100213 10:52:59 [Note] mysqld: Shutdown complete

    Update: For some reason it looks like it is installing the 32-bit DB into the old 64-bit directory... will look into this... (the bin directory is going into the 32-bit Program Files directory).
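
    The InnoDB lines name the actual blocker: the old install left redo logs sized 10485760 bytes while the new my.ini asks for 25165824, and InnoDB refuses to start on the mismatch. A sketch of the usual fix (datadir path assumed from the prompt above - check the datadir line in my.ini first, and back the directory up if the old ibdata files matter):

      net stop mysql
      cd "C:\Program Files (x86)\MySQL\MySQL Server 5.1\data"
      ren ib_logfile0 ib_logfile0.old
      ren ib_logfile1 ib_logfile1.old
      net start mysql

    With the old logs moved aside, InnoDB recreates them at the configured size on the next start.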


  • Ubuntu hard disk getting SATA errors

    - by Henadzy
    I am getting "UNC" errors on a hard disk on Ubuntu 9.10. It slows down my system; applications stop responding for long stretches. But when I mount the filesystem on another computer, it works properly.

    disk: SAMSUNG HD161HJ (SATA)

    syslog:
      Apr 25 00:28:25 vare6gin kernel: [ 885.773839] ata3.00: exception Emask 0x1 SAct 0x1e SErr 0x0 action 0x6 frozen
      Apr 25 00:28:25 vare6gin kernel: [ 885.773845] ata3.00: Ata error. fis:0x21
      Apr 25 00:28:25 vare6gin kernel: [ 885.773861] ata3.00: cmd 60/08:08:3f:00:ad/00:00:10:00:00/40 tag 1 ncq 4096 in
      Apr 25 00:28:25 vare6gin kernel: [ 885.773864] res 51/40:24:67:c8:91/40:00:05:00:00/40 Emask 0x9 (media error)
      Apr 25 00:28:25 vare6gin kernel: [ 885.773871] ata3.00: status: { DRDY ERR }
      Apr 25 00:28:25 vare6gin kernel: [ 885.773877] ata3.00: error: { UNC }
      [...snip 3 similar repeats of last 4 lines; see revision history for full log...]
      Apr 25 00:28:25 vare6gin kernel: [ 885.773970] ata3: hard resetting link
      Apr 25 00:28:25 vare6gin kernel: [ 885.773974] ata3: nv: skipping hardreset on occupied port
      Apr 25 00:28:25 vare6gin kernel: [ 886.240073] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      Apr 25 00:28:25 vare6gin kernel: [ 886.256277] ata3.00: configured for UDMA/133
      Apr 25 00:28:25 vare6gin kernel: [ 886.256305] ata3: EH complete
      Apr 25 00:28:27 vare6gin kernel: [ 888.176088] ata3: EH in SWNCQ mode,QC:qc_active 0xF sactive 0xF
      Apr 25 00:28:27 vare6gin kernel: [ 888.176099] ata3: SWNCQ:qc_active 0xF defer_bits 0x0 last_issue_tag 0x3
      Apr 25 00:28:27 vare6gin kernel: [ 888.176102] dhfis 0xF dmafis 0x1 sdbfis 0x0
      Apr 25 00:28:27 vare6gin kernel: [ 888.176109] ata3: ATA_REG 0x51 ERR_REG 0x40
      Apr 25 00:28:27 vare6gin kernel: [ 888.176113] ata3: tag : dhfis dmafis sdbfis sacitve
      Apr 25 00:28:27 vare6gin kernel: [ 888.176120] ata3: tag 0x0: 1 1 0 1
      Apr 25 00:28:27 vare6gin kernel: [ 888.176126] ata3: tag 0x1: 1 0 0 1
      Apr 25 00:28:27 vare6gin kernel: [ 888.176131] ata3: tag 0x2: 1 0 0 1
      Apr 25 00:28:27 vare6gin kernel: [ 888.176136] ata3: tag 0x3: 1 0 0 1
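
    UNC here is the drive reporting an uncorrectable media error, so the disk itself is the prime suspect even if the filesystem mounts fine elsewhere (errors can be sector-specific, and the other machine may simply not touch the bad sectors). A sketch of how to confirm with smartmontools, device name assumed to be /dev/sda:

      sudo smartctl -a /dev/sda | egrep -i 'Reallocated|Pending|Offline_Uncorrectable'
      sudo smartctl -t long /dev/sda    # surface self-test; results show up in the -a output later

    Non-zero Current_Pending_Sector or Reallocated_Sector_Ct counts on a drive that stalls the system are usually the point where backing up and replacing it beats further debugging.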


  • Setting up a squid proxy server that in turn connects through another proxy server [closed]

    - by AnkurVj
    My institute uses the Squid proxy server, and its authentication mechanism requires a username and password to be entered. This means that I can log in on only one machine at a time, and Internet access for me is restricted to that machine. I sometimes require Internet access on multiple machines simultaneously. What previously worked for me was the following: on one of my own machines, A, I set up a Squid proxy server that allowed all local machines without any username and password. I configured the rest of the machines to use machine A as their proxy server. On machine A I logged into the institute proxy server using my browser. This way, I could access the Internet from all my machines, by effectively channeling my requests through server A. Recently, I lost that machine and its configuration, and now I have tried to set it up again in the same manner. However, I can't seem to remember exactly how I made it work. I keep getting Connection Refused (111) on the other machines. My guess is that my squid server isn't able to forward requests from the other machines to the actual squid server. I could use any help in debugging this problem. I don't want to use alternatives such as ssh tunneling. This solution has worked for me in the past; I just don't remember how to set it up the same way again.
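
    For reference, a minimal squid.conf sketch of the two-hop setup described above (host name, port and credentials are placeholders, not known values):

      # on machine A: accept LAN clients without auth...
      http_port 3128
      acl localnet src 192.168.0.0/16
      http_access allow localnet
      http_access deny all
      # ...and hand everything to the institute proxy, authenticating upstream
      cache_peer proxy.institute.example parent 3128 0 no-query default login=myuser:mypass
      never_direct allow all

    Connection refused (111) on the clients usually just means nothing is listening on the port they were pointed at, so checking that squid on A is actually running and bound (netstat -tlnp | grep 3128) is worth doing before touching the forwarding rules.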


  • Postfix message ID originating process?

    - by Anders Braüner Nielsen
    Last night my postfix mail server (Debian Squeeze with dovecot, roundcube, opendkim and spamassassin enabled) started sending out spam from a single domain of mine, like these:

      $ cat mail.log | grep D6930B76EA9
      Jul 31 23:50:09 myserver postfix/pickup[28675]: D6930B76EA9: uid=65534 from=<[email protected]>
      Jul 31 23:50:09 myserver postfix/cleanup[27889]: D6930B76EA9: message-id=<[email protected]>
      Jul 31 23:50:09 myserver postfix/qmgr[7018]: D6930B76EA9: from=<[email protected]>, size=957, nrcpt=1 (queue active)
      Jul 31 23:50:09 myserver postfix/error[7819]: D6930B76EA9: to=<[email protected]>, relay=none, delay=0.03, delays=0.02/0/0/0, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mta5.am0.yahoodns.net[66.196.118.33] while sending RCPT TO)

    The domain in question did not have any accounts enabled, only a catchall alias set through postfixadmin - most emails were sent from a specific address I use frequently, but some were also sent from bogus addresses. None of the other virtual domains handled by postfix were affected. How can I find out what process was feeding postfix/sendmail, or get more info on where the messages originated? As far as I can tell PHP mail() wasn't used, and I've run several open relay tests. I did a little tinkering after the attack (removed winbind from the server and IPv6 addresses from main.cf) and it seems to have subsided, but I still have no idea how my server was suddenly sending out spam. Maybe I fixed it - maybe I didn't. Can anyone help figure out how I was compromised? Anywhere else I should look? I've run Linux Malware Detect on recently changed files, but nothing was found.
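
    One detail in the log worth reading closely: the message entered through postfix/pickup with uid=65534 (usually the "nobody" account), which means it was handed to the local sendmail binary by a process running as that user - typically a web app or cron job - rather than arriving over SMTP. A crude but effective way to catch the caller is to wrap the sendmail binary (a sketch; adjust the path if postfix's sendmail lives elsewhere):

      mv /usr/sbin/sendmail /usr/sbin/sendmail.real
      printf '%s\n' '#!/bin/sh' \
        '{ date; id; ps -o args= -p $PPID; } >> /tmp/sendmail-callers.log' \
        'exec /usr/sbin/sendmail.real "$@"' > /usr/sbin/sendmail
      chmod 755 /usr/sbin/sendmail
      # (undo with: mv /usr/sbin/sendmail.real /usr/sbin/sendmail)

    The next injection then logs the parent process's command line, which usually names the compromised script directly.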


  • Applying ACLs to a Dovecot public namespace

    - by larsks
    I have a public namespace defined in my dovecot (dovecot-2.0.9) configuration that looks like this:

      namespace {
        type = public
        separator = .
        prefix = news.
        location = maildir:/var/spool/news
        subscriptions = no
      }

    I would like to make all the mailboxes in this namespace read-only. I've got the following configuration for the ACL plugin:

      plugin {
        acl = vfile:/etc/dovecot/acls:cache_secs=300
      }

    After perusing the documentation, it seemed as if, given a mail folder /var/spool/news/.foo.bar, I could place the following into /var/spool/news/.foo.bar/dovecot-acl:

      anyone rl

    But that doesn't have any effect. I also tried creating a file /usr/local/etc/dovecot/acls/news.foo.bar with the same contents, but that didn't do anything, either. I've turned on mail debugging:

      mail_debug = yes

    But the log doesn't produce anything that appears relevant to ACL processing. I'm curious whether anyone has gotten this to work correctly, and if so whether you could provide some configuration examples. Also, if there's any way to do this that doesn't involve per-mailbox configuration (e.g., the ability to apply an ACL to news.* or something), that would be awesome. Getting the documented behavior for default ACLs working would be a step in the right direction.
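
    One step that is easy to miss, and that matches the silent log (a sketch of the relevant dovecot.conf lines): the ACL plugin does nothing unless it is actually loaded, both globally and for IMAP:

      mail_plugins = $mail_plugins acl
      protocol imap {
        mail_plugins = $mail_plugins imap_acl
      }

    When the plugin is loaded, mail_debug = yes normally produces acl/vfile lines naming each ACL file it tries to read - so a log with no ACL mentions at all is itself evidence that the plugin never initialized.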


  • Solaris MySQL failure and unable to restart

    - by Iscariot
    Environment: Solaris 10. This MySQL server has been up and running for 6 months. Today, all of a sudden, it crashed. Typing 'mysql' as a regular user gives the error:

      ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock'

    Typing mysql as root says mysql: not found. SMF tries to start mysqld; it stays up for 9-10 seconds, then the process restarts. Below are the application logs.

    Application-database-mysql_mysql-csk.log:
      [ May 30 22:37:52 Enabled. ]
      [ May 30 22:37:58 Rereading configuration. ]
      [ May 30 22:37:59 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ]
      /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid
      [ May 30 22:37:59 Method "start" exited with status 0 ]
      [ May 30 22:38:13 Stopping because all processes in service exited. ]
      [ May 30 22:38:13 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ]
      [ May 30 22:38:13 Method "stop" exited with status 0 ]
      [ May 30 22:38:13 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ]
      /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid
      [ May 30 22:38:13 Method "start" exited with status 0 ]
      [ May 30 22:38:25 Stopping because all processes in service exited. ]
      [ May 30 22:38:25 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ]
      [ May 30 22:38:25 Method "stop" exited with status 0 ]

    I am hoping someone might have run into this before and might know how to fix it.
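
    The SMF log above only records the restart loop; mysqld's own error log is where the crash reason lands. A sketch of where to look first (the datadir comes from the mysqld_safe line above; the .err filename is mysqld's usual hostname-based default):

      svcs -xv                       # SMF's view of which service is failing and why
      ls /dbpool1/data/*.err
      tail -100 /dbpool1/data/*.err  # the actual mysqld/InnoDB crash messages

    And 'mysql: not found' as root is likely just PATH: root's profile probably doesn't include /opt/coolstack/mysql/bin.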


  • Single m0n0wall - Two LAN Subnets - How To Setup

    - by SnAzBaZ
    I have two LAN subnets that I need to link together: 192.168.4.0/24 and 192.168.5.0/24. There is a m0n0wall running on 192.168.4.1. Its LAN connection goes out to our network switch, and its WAN port goes out to our ADSL modem. WAN is connected via PPPoE. The 192.168.4.0 subnet contains all of our office workstations. The 192.168.5.0 subnet contains development servers and test machines that need to obtain internet access and be "managed" by computers on the 192.168.4.0 subnet, but need to be on their own subnet as well. I have a DrayTek 2820N configured on 192.168.5.1 with its WAN2 port configured as 192.168.4.25 and a default gateway of 192.168.4.1. Machines on the 5.0 subnet can connect to the internet via the m0n0wall just fine. I configured a static route on the m0n0wall LAN interface: network 192.168.5.0/24, gateway 192.168.4.25. Machines on the 5.0 subnet can ping machines on the 4.0 network, but the reverse does not work. I configured a new firewall rule on the m0n0wall that allows any traffic on the LAN interface with a source IP of 192.168.4.25. The DrayTek firewall is currently configured to pass all traffic regardless. When I try to ping a machine on the 5.0 subnet from 4.0, I see this in my m0n0wall log:

      BLOCK 14:45:27.888157 LAN 192.168.4.25 192.168.4.37, type echoreply/0 ICMP

    So the reply is being sent from the 5.0 subnet, but it is not being allowed to reach my workstation because the firewall is blocking it. Why is the firewall blocking it? I hope the explanation of my network is clear; please ask if you require further clarification. Thank you.
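
    A hedged reading of that log line rather than a definitive answer: the blocked packet's source is 192.168.4.25, not a 192.168.5.x address, which suggests the DrayTek is NATing replies out of WAN2; and because the workstations are steered to 192.168.4.25 (ICMP redirect from the m0n0wall, or NAT), the m0n0wall never sees the outbound request, so its stateful filter treats the returning echoreply as unsolicited and drops it regardless of the pass rule. Two things usually untangle this kind of one-armed routing:

      1. Disable NAT on the DrayTek's WAN2 so replies keep their 192.168.5.x source
         and stay symmetric with the static route.
      2. Make sure the pass rule sits above any block rule on the LAN tab
         (m0n0wall evaluates rules first-match-wins), e.g.:
         Pass  LAN  any  192.168.5.0/24  192.168.4.0/24  (dev subnet traffic)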


  • iptables blocking access to most hosts, but some accesses are being logged

    - by epo
    What am I getting wrong? A while back I locked down my web hosting service while hardening it, or at least trying to. Apache listens on port 80 only, and I set up iptables using the following:

      IPS="list of IPs"
      iptables --new-chain webtest
      # Accept all established connections
      iptables -A INPUT --protocol tcp --dport 80 --jump webtest
      iptables -A INPUT --match state --state ESTABLISHED,RELATED --jump ACCEPT
      iptables -A webtest --match state --state ESTABLISHED,RELATED --jump ACCEPT
      for ip in $IPS; do
          iptables -A webtest --match state --state NEW --source $ip --jump ACCEPT
      done
      iptables -A webtest --jump DROP

    However, looking at my Apache logs I notice various entries in access_log, e.g.:

      221.192.199.35 - - [16/May/2010:13:04:31 +0100] "GET http://www.wantsfly.com/prx2.php?hash=926DE27C156B40E55E4CFC8F005053E2D81E6D688AF0 HTTP/1.0" 404 206 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
      201.228.144.124 - - [16/May/2010:11:54:16 +0100] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 226 "-" "-"
      207.46.195.224 - - [16/May/2010:04:06:48 +0100] "GET /robots.txt HTTP/1.1" 200 311 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)"

    How are these slipping through? I don't mind the indexing bots (though I am a little surprised to see them get through). I suppose they must be getting through using the ESTABLISHED,RELATED rules. And no, I can't for the life of me remember why the first match state rule is there. So, two questions: is there a better way to set up iptables to restrict access to specified hosts? And how exactly are these three examples slipping through?
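
    Some sketch diagnostics for both questions: per-rule packet counters show which rule is actually accepting these requests, and a separate possibility worth ruling out is that Apache is also reachable over IPv6, which these IPv4 rules never see:

      iptables -L INPUT -nv --line-numbers     # packet counters per rule
      iptables -L webtest -nv --line-numbers
      iptables-save                            # the full live ruleset, including anything added elsewhere
      ip6tables -L -nv                         # an empty v6 INPUT with policy ACCEPT lets v6 straight in
      netstat -tlnp | grep :80                 # is httpd bound on 0.0.0.0, :::, or both?

    The ESTABLISHED,RELATED accepts only match packets belonging to connections that were already admitted as NEW, so they should not let strangers in by themselves - which makes the counters the fastest way to find the real hole.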


  • qmail does not deliver to remote mailboxes for my own domain

    - by lorenzo-s
    Sorry for the title; I don't know how to sum up this situation. I have a web server at mydomain.com, running qmail for website-related mail delivery (i.e. newsletters, sign-up confirmations, etc). qmail here is used only to send mail, because I have a fully working Google Apps Gmail account associated with mydomain.com for normal email receiving. qmail runs fine when sending email to remote addresses, for example to [email protected], but fails when sending to [email protected]. I think it's because the server believes it has to manage mailboxes for mydomain.com locally, instead of handing them off to Gmail. Here is /var/log/qmail/current for two emails: the first is sent without problems to example.com, the second fails because it's for mydomain.com:

      2012-11-15 15:04:11.551933500 new msg 262580
      2012-11-15 15:04:11.551936500 info msg 262580: bytes 5604 from <[email protected]> qp 5185 uid 33
      2012-11-15 15:04:11.575910500 starting delivery 316: msg 262580 to remote [email protected]
      2012-11-15 15:04:11.575912500 status: local 0/10 remote 1/20
      2012-11-15 15:04:12.189828500 delivery 316: success: 74.125.136.27_accepted_message./Remote_host_said:_250_2.0.0_OK_1352991894_j49si13055539eep.9/
      2012-11-15 15:04:12.189830500 status: local 0/10 remote 0/20
      2012-11-15 15:04:12.189831500 end msg 262580
      2012-11-15 16:49:20.270332500 new msg 262580
      2012-11-15 16:49:20.270336500 info msg 262580: bytes 2192 from <[email protected]> qp 5479 uid 33
      2012-11-15 16:49:20.315125500 starting delivery 323: msg 262580 to local [email protected]
      2012-11-15 16:49:20.315128500 status: local 1/10 remote 0/20
      2012-11-15 16:49:20.320855500 delivery 323: failure: Sorry,_no_mailbox_here_by_that_name._(#5.1.1)/
      2012-11-15 16:49:20.320858500 status: local 0/10 remote 0/20
      2012-11-15 16:49:20.372911500 bounce msg 262580 qp 5484
      2012-11-15 16:49:20.372914500 end msg 262580

    As you can see, it says: Sorry,_no_mailbox_here_by_that_name. I can't say it's wrong :) How do I solve this? How do I let Google Apps Gmail manage incoming email for mydomain.com, even for messages sent by the mydomain.com qmail server itself?
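
    The usual mechanics behind that bounce (a sketch, paths per a stock qmail install): qmail delivers locally to any domain listed in /var/qmail/control/locals or /var/qmail/control/virtualdomains, and only does an MX lookup for domains it does not consider its own. So the check and fix run along these lines:

      grep -n mydomain.com /var/qmail/control/locals /var/qmail/control/virtualdomains
      # remove the mydomain.com entry from whichever file lists it
      # (keep it in rcpthosts only if you still accept inbound SMTP for it),
      # then tell qmail-send to reread its config:
      svc -h /service/qmail-send     # or: kill -HUP <pid of qmail-send>

    Once the domain is no longer local, qmail looks up the MX for mydomain.com and hands the message to Google like any other remote address.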


  • How do I keep a bridge enabled on a bonded interface?

    - by jlawer
    I'm working on setting up a pair of CentOS 6.3 servers that will run a couple of KVM VMs, and have come across a problem setting up a bridge on a bond. I am using mode 4 (802.3ad) bonding on a pair of stacked Dell PowerConnect 5524 switches connecting to R320 servers. There are two links (one to each switch) that form a link aggregation group (802.3ad / LACP bonding). On top of the bond I have VLAN tagging. I've verified this is a problem on multiple other bonding modes, so it isn't just a mode 4 issue. I am testing what happens when one link is dropped (i.e. switch dies, cable breaks, etc). If I don't have a bridge (for KVM), everything works fine and failover happens as expected. If I have the bridge enabled, it works fine until failover (unplugging a cable). When failover happens, /var/log/messages shows the slave link going down, followed within a second by:

      kernel: br1: port 1(bond0.8) entering disabled state

    The thing is, /proc/net/bonding/bond0 shows the link is up as expected (simply with only one slave instead of two). If I plug the cable back in, it recovers and brings the bridge back to an enabled state. I actually tested this while a ping was running, and if the timing is right a packet will actually leave the system after the link is lost but before the disabled message occurs. This disabled state I assumed was STP, but I have disabled STP in the bridge configuration and the issue still occurs; brctl showstp br1 still shows the link as disabled when it is running without a slave. I also switched between the NICs in the server (I have 2x Broadcom and 4x Intel); it doesn't matter which configuration I use. Does anyone know of a way to force the bridge to stay enabled, or why it's detecting the bond as disabled when it isn't?
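
    For comparison, a minimal sketch of the interface stack this usually takes on CentOS 6 (VLAN 8 guessed from the bond0.8 in the log message; files trimmed to the relevant keys):

      # /etc/sysconfig/network-scripts/ifcfg-bond0.8
      DEVICE=bond0.8
      VLAN=yes
      BRIDGE=br1
      ONBOOT=yes

      # /etc/sysconfig/network-scripts/ifcfg-br1
      DEVICE=br1
      TYPE=Bridge
      STP=off
      DELAY=0
      ONBOOT=yes

    Worth capturing during the failover test: cat /sys/class/net/bond0.8/operstate alongside brctl showstp br1. If operstate flips to down when a slave is pulled, the bridge is only reporting what the VLAN device told it, and the hunt moves to why carrier loss propagates from the bond up through the VLAN device instead of being absorbed by the bond.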


  • Catastrophic Failure opening ODBC via Citrix

    - by Joshdan
    We recently had our Citrix server crash unexpectedly. When it came back up, there was a new issue - every ODBC connection fails with "Catastrophic Failure" (0x8000FFFF). The issue is limited to Citrix / ICA connections; logging in as the same user via RDP works as usual. The following code is my minimal test case (for wscript):

      ''// test_odbc.vbs
      strConn = "Driver={Microsoft Text Driver (*.txt; *.csv)};Dbq=c:\files\;"
      Set rs = CreateObject("ADODB.recordset")
      strSQL = "SELECT * FROM myFile.csv"
      wscript.echo "Press OK to Test"
      ''// This line breaks over Citrix, but not over Terminal Services
      ''// ----------------------
      rs.open strSQL, strConn, 3,3
      ''// ----------------------
      wscript.echo rs("a")

    Any insight would be greatly appreciated. Windows Server 2003 SP1, Citrix MetaFrame Presentation Server 4.0. Clients include at least versions 10.2-11 running on 2000-Vista and OS X. The ODBC error happens whether a DSN is used or not, on at least Access, MS-SQL, and CSV, and over connections both through the SSL gateway and direct. There have been a few users actually able to log in without trouble, but I can't pin down anything special about them.
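
    If it helps to narrow down the layer at fault, here is a sketch of a differential test: the same query through the Jet OLEDB text provider bypasses ODBC entirely, so if this works in an ICA session while the ODBC version throws 0x8000FFFF, the damage is in the ODBC stack rather than ADO/MDAC as a whole:

      ''// test_oledb.vbs -- hypothetical counterpart to the ODBC test case
      strConn = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\files\;" & _
                "Extended Properties=""text;HDR=Yes;FMT=Delimited"";"
      Set rs = CreateObject("ADODB.recordset")
      rs.Open "SELECT * FROM myFile.csv", strConn, 3, 3
      wscript.echo rs("a")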


  • Application runs fine manually but fails as a scheduled task

    - by user42540
    I wasn't sure if this should go here or on Stack Overflow. I have an application that loads some files from a network share (the input folder), extracts certain data from them, and saves new files (zipped with SharpZipLib) on a different network share (the output folder). This application runs fine when you open it directly, but when it is run as a scheduled task, it fails in numerous places. The application is scheduled on a Win 2003 server. Let me say right off the bat: the scheduled task is set to use the same login account that I am currently logged in with, so it's not failing because it's using the LocalSystem account. Something else is going on here. Originally, the application assigned a drive letter to the input folder using WNetGetConnectionA(). I don't remember why this was done; someone else on our team did that and she's gone now. I think there was some issue with using the WinZip command line with a UNC path. I switched from the WinZip command line utility to SharpZipLib because there were other issues with the WinZip command line. Anyway, the application failed when trying to assign a drive letter with the error "connection already established". That wasn't true, and even after trying WNetCancelConnection() it still didn't work. Then I decided to just map the drive manually on the server. Now when the app calls Directory.Exists(inputFolderPath) it returns false, even though the directory does exist. So, for whatever reason, I cannot read this directory from within the application, yet I can manually navigate to the folder in Windows Explorer and open files. The app's log file shows that the user executing it on the schedule is the user I expect, not LocalSystem. Any ideas?
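
    One mechanism consistent with these symptoms (an assumption to verify, not a diagnosis): drive mappings belong to a logon session, so a letter mapped in the interactive session does not exist in the scheduled task's session even under the same account - and Directory.Exists() swallows access errors and simply returns false. A minimal probe to run as the task, with a hypothetical share path:

      // ShareCheck.cs - log what the task's session actually sees
      using System;
      using System.IO;

      class ShareCheck
      {
          static void Main()
          {
              string unc = @"\\fileserver\input";   // hypothetical; use the real UNC
              string mapped = @"Y:\";               // the letter the task expects
              File.AppendAllText(@"C:\temp\sharecheck.log",
                  DateTime.Now + " UNC: " + Directory.Exists(unc) +
                  " mapped: " + Directory.Exists(mapped) + Environment.NewLine);
          }
      }

    If the UNC works where the letter does not, mapping the drive inside the task itself (or using UNC paths throughout) is the usual way out.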


  • Reliable backup software for Windows network/Samba shares

    - by Eli
    Hi all, I have a Win2003 server that works as a PDC for a number of XP boxes and a couple of related FreeBSD boxes. I need to back up roaming profiles, non-roaming profiles via network shares, local hard drive data, and files on the FreeBSD boxes via Samba shares. I have tried Genie Backup Manager and Backup4all Pro, and both have excellent features, but both also begin to fail disastrously within a few days of use. Mostly, the errors seem to come from the backup catalog getting out of sync with itself. Whatever it is, there is no excuse for backup software that says it backed up files when it really didn't, or a log saying it backed up exactly the same file 10,000 times in a single run, or flat-out crashing, or any of the other myriad problems I've run into with these. Really sad for products that fill such an important need. Anyway, does anyone know of backup software that works reliably and can do the following?

    Scheduled backups for multiple jobs, without a user logged in.
    Backup from local hard drives or network shares.
    Incremental backups.

    Thanks! Edit: Selected solution: I've added my (hopefully final) solution as an answer.


  • Exchange 2010 issuing NDRs to Hotmail/Live & few other domains on receipt of message

    - by John Patrick Dandison
    I'm working through a beast of an issue at the moment:

    Exchange 2010, single server, on-premises
    Hybrid deployment to Office 365
    ESMTP filtering turned off on the ASA
    Certain domains (most consistently Hotmail/Live) cannot send us mail

    At one point we couldn't send out either, but I created a new Send Connector that forces HELO instead of EHLO. I turned on SMTP logging; an example of a failed inbound message connection is below. I've read that it could be that reverse DNS is the problem, i.e., the Exchange banner SMTP address needs to reverse-DNS back to the same IP. Since it's the default Exchange connector, its banner is the server's name, but the DNS name of the MX record is different. I'm waiting for the PTR records to update to reflect the internal name as well. Is that the right direction? Is this all DNS, or something different?

    SMTP session log (single failed session for illustration):

      SMTPSubmit SMTPAcceptAnySender SMTPAcceptAuthoritativeDomainSender AcceptRoutingHeaders
      220 ExchangeServerName.internalSubDomain.example.com Microsoft ESMTP MAIL Service ready at Mon, 15 Oct 2012 09:57:24 -0400
      EHLO col0-omc3-s4.col0.hotmail.com
      250-ExchangeServerName.internalSubDomain.example.com Hello [65.55.34.142]
      250-SIZE
      250-PIPELINING
      250-DSN
      250-ENHANCEDSTATUSCODES
      250-STARTTLS
      250-X-ANONYMOUSTLS
      250-AUTH NTLM LOGIN
      250-X-EXPS GSSAPI NTLM
      250-8BITMIME
      250-BINARYMIME
      250-CHUNKING
      250-XEXCH50
      250-XRDST
      250 XSHADOW
      MAIL FROM:<[email protected]> 08CF5268DABBD9AA;2012-10-15T13:57:24.564Z;1
      250 2.1.0 Sender OK
      RCPT TO:<[email protected]>
      250 2.1.5 Recipient OK
      XXXX 1282 LAST
      Tarpit for '0.00:00:05'
      500 5.3.3 Unrecognized command
      XXXXXXXXX from COL002-W38 ([65.55.34.135]) by col0-omc3-s4.col0.hotmail.com with Microsoft SMTPSVC(6.0.3790.4675);
      Tarpit for '0.00:00:05'
      500 5.3.3 Unrecognized command " XXXX 15 Oct 2012 06:57:24 -0700"
      Tarpit for '0.00:00:05'
      500 5.3.3 Unrecognized command XXXXXXXXXXX <[email protected]>
      Tarpit for '0.00:00:05'
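
    A hedged observation on that trace: XXXX-masked commands followed by 500 5.3.3 are the classic fingerprint of Cisco ASA ESMTP inspection still rewriting the session - Hotmail advertises CHUNKING, sends BDAT, the ASA masks the command it doesn't understand, and Exchange then sees garbage. Re-verifying the ASA is probably worth more than DNS here, though fixing the advertised FQDN is still proper hygiene. Sketches of both checks (connector and host names are placeholders):

      ! on the ASA: is any policy still applying esmtp inspection?
      show service-policy | include esmtp

      # Exchange Management Shell: make the advertised name match public DNS
      # (Exchange restricts changing the Fqdn on the default connector while
      # ExchangeServer auth is enabled, so a dedicated Internet receive
      # connector may be needed)
      Get-ReceiveConnector | fl Identity,Fqdn,Bindings
      Set-ReceiveConnector "SERVER\Internet Receive" -Fqdn mail.example.com

      # and confirm forward and reverse DNS agree:
      nslookup mail.example.com
      nslookup 203.0.113.10        # your public IP; the PTR should return the same name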


  • Can't display RSSI values in Wireshark

    - by Giovanni Soldi
    I am trying to analyze the uplink wireless traffic generated by my Sony Ericsson phone and captured by my D-Link router, on which I installed the DD-WRT firmware. To do this, first I log in to the router and enable the prism0 interface by typing:

      wl -i eth1 monitor 1

    and then I start capturing packets by typing:

      tcpdump -i prism0 ether src xx:xx:xx:xx:xx:xx -s0 -w /tmp/smbshare/sony_ericsson_test.pcap

    where xx:xx:xx:xx:xx:xx is the MAC address of my Sony Ericsson phone. After a while I transfer the sony_ericsson_test.pcap file to my computer and open it with Wireshark. To display the RSSI values I follow this procedure: Edit - Preferences... - Columns - press the "Add" button - as "Field type" choose "IEEE 802.11 RSSI" - then choose the name "Power" and click the "Apply" button. The problem is that the "Power" column is empty, with no RSSI values. Does anyone have a clue why the RSSI values are not displayed? Maybe I am missing a step. Looking forward to hearing from any of you! Thanks in advance for your help!
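
    A hedged explanation plus a way to test it: Wireshark can only fill that column if each frame carries a radio header (radiotap, Prism or PPI) recorded at capture time - if tcpdump wrote plain 802.11 frames, there is simply no RSSI anywhere in the file. A sketch:

      capinfos sony_ericsson_test.pcap     # check the 'File encapsulation' line
      tshark -r sony_ericsson_test.pcap -T fields -e frame.number -e radiotap.dbm_antsignal | head

    Empty radiotap fields mean the capture has no radio headers; if the DD-WRT build's tcpdump and driver support it, capturing with an explicit radio link type preserves them:

      tcpdump -i prism0 -y IEEE802_11_RADIO -s0 -w /tmp/smbshare/test_radio.pcap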


  • In Solaris, how to monitor & auto-respond to critical events

    - by mamcx
    I have a website that randomly fails. It is running on OpenSolaris on Joyent. I have a monitoring service that alerts me when the site is down, but I want an "insider" tool that tells me why it happened. Is it because the CPU is too high? Out of memory? Which process failed? Is it possible to get a backtrace of that? Everything is running under the Solaris Service Management Facility. The web server is Cherokee, the database is MySQL and the language is Python/Django. I want the simplest setup that can monitor this and auto-respond, i.e. restart the web server or the Django process in case of failure. I prefer a low-overhead tool. I don't need the fancy monitoring that some tools have - no graphs or SMS alerts. I only need to know what failed, have it restarted if possible (maybe up to n times), and have a log somewhere for when I check it.
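
    Since everything already runs under SMF, part of this is built in: SMF restarts failed services on its own and records why they died. For the "restart on hang and tell me why" part, monit is one common low-overhead add-on. Sketches of both (service name and pid file path are guesses):

      svcs -xv                          # which service is down and why, with a pointer to its log
      tail /var/svc/log/*cherokee*.log  # the restarter's log for the web server

      # /etc/monitrc fragment: restart on hang or CPU runaway, and log the reason
      check process cherokee with pidfile /var/run/cherokee.pid
        start program = "/usr/sbin/svcadm enable -t cherokee"
        stop program  = "/usr/sbin/svcadm disable -t cherokee"
        if failed port 80 protocol http then restart
        if cpu usage > 90% for 5 cycles then restart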


  • Server 2008 R2 stuck on installing updates

    - by volody
    I have a 2008 R2 64-bit server that is stuck installing updates. It shows the message "Please do not power off or unplug your machine. Installing 57 of 61..", and has shown it for 3 hours now. I tried to connect to this server using MMC:

      runas /user:AdministratorAccountName@ComputerName "mmc %windir%\system32\compmgmt.msc"
      RUNAS ERROR: Unable to run - mmc C:\Windows\system32\compmgmt.msc
      1311: There are currently no logon servers available to service the logon request.

    What else can I do? Update: Remote Desktop does not work; it just disconnects after entering the username and password from a Win7 64-bit computer. Update: I have disconnected the server. Now Server Manager shows the message "Unexpected error refreshing Server Manager: The remote procedure call failed." in Roles Summary. The event log shows:

      Could not discover the state of the system. An unexpected exception was found:
      System.Runtime.InteropServices.COMException (0x800706BE): The remote procedure call failed. (Exception from HRESULT: 0x800706BE)
         at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo)
         at Microsoft.Windows.ServerManager.ComponentInstaller.ThrowHResult(Int32 hr)
         at Microsoft.Windows.ServerManager.ComponentInstaller.CreateSessionAndPackage(IntPtr& session, IntPtr& package)
         at Microsoft.Windows.ServerManager.ComponentInstaller.InitializeUpdateInfo()
         at Microsoft.Windows.ServerManager.ComponentInstaller.Initialize()
         at Microsoft.Windows.ServerManager.Common.Provider.RefreshDiscovery()
         at Microsoft.Windows.ServerManager.LocalResult.PerformDiscovery()
         at Microsoft.Windows.ServerManager.ServerManagerModel.CreateLocalResult(RefreshType refreshType)
         at Microsoft.Windows.ServerManager.ServerManagerModel.InternalRefreshModelResult(Object state)

    Update: A manual update hangs on (Installing update 58 of 62...) Security Update for Windows Server 2008 R2 x64 Edition (KB979309). Update: It could be an issue with the admin user having been renamed.
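
    Once the machine is reachable again, the servicing logs usually name the update that wedged (standard log locations on 2008 R2; a sketch for finding the failure, not a fix):

      findstr /i /c:"error" /c:"failed" %windir%\Logs\CBS\CBS.log > %temp%\cbs-errors.txt
      notepad %temp%\cbs-errors.txt
      findstr /i /c:"fatal" %windir%\WindowsUpdate.log

    If CBS.log shows corruption in the servicing store, the System Update Readiness tool (CheckSUR, KB947821) is Microsoft's repair path for exactly that on 2008 R2; failing that, hiding KB979309 and retrying the rest of the batch isolates whether that one update is the culprit.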


  • Outlook 2010 on WinXP runs once then refuses to run again until reboot

    - by msorens
    Since I installed Outlook 2010 on a new machine (WinXP Pro SP3) a couple of months back, I have had a quite annoying issue: if I close Outlook and then attempt to restart it, I get a small pop-up saying only "Cannot start Microsoft Outlook". I found one workaround, but not a terribly practical one: rebooting. If I reboot and then launch Outlook, it opens fine. Here is what I know:

    Since I can run Outlook just fine after a reboot, I do not see that a system restore, an OS reinstall, or the like would help.
    I tried "outlook.exe /resetnavpane" and "outlook.exe /safe", but those give the same error.
    There are no entries in the event log.
    There is no instance of Outlook in the process list once I close the program, so this does not seem to be an alias for "Outlook is already running".
    As far as I have found, my situation is unique among reports of similar incidents: I have uncovered no other reports saying Outlook would run fine on first launch, or that a reboot would again allow it to run.

    Suggestions?
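
    One avenue that fits these symptoms (an assumption to verify, not a known fix): outlook.exe can be gone from the process list while some other process still holds a lock on the data file (.pst/.ost), blocking the next launch until a reboot clears it - antivirus and indexing tools are the usual holders. Sysinternals handle.exe can show such a lock:

      handle.exe outlook     REM any open handles whose path mentions outlook
      handle.exe .pst        REM or search for the data file specifically

    If a locker shows up, the fix is aimed at that process (exclusions, settings) rather than at Outlook itself.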


  • SSH Interactive mode not working

    - by Ekin Koc
    I have a Debian-based Linux server that has been running for a year or so without any problems. A couple of days ago, SSH interactive mode stopped working for no apparent reason. I mean, I can open an SSH connection just fine, the server greets me with a shell, but I just can't type anything. However, if I send commands like ssh [email protected] cat /var/log/messages, I get the response. I dug through several logs and found one message which feels remotely relevant to the problem:

      sh kernel: [10222733.062511] ------------[ cut here ]------------
      sh kernel: [10222733.062522] WARNING: at /build/buildd-linux-2.6_2.6.32-39-amd64-7yVIH2/linux-2.6-2.6.32/debian/build/source_amd64_none/drivers/char/tty_ldisc.c:738 tty_ldisc_reinit+0x46/0x7b()
      sh kernel: [10222733.062526] Hardware name: PowerEdge R210 II
      sh kernel: [10222733.062528] Modules linked in: ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_conntrack nf_defrag_ipv4 ip_tables x_tables sha1_generic arc4 ecb ppp_mppe ppp_async crc_ccitt ppp_generic slhc loop snd_pcm snd_timer snd soundcore snd_page_alloc i2c_i801 i2c_core pcspkr evdev joydev dcdbas container button processor ext3 jbd mbcache sg sd_mod sr_mod crc_t10dif cdrom usb_storage usbhid hid mpt2sas ahci ehci_hcd libata scsi_transport_sas usbcore bnx2 nls_base scsi_mod fan thermal thermal_sys [last unloaded: scsi_wait_scan]
      sh kernel: [10222733.062568] Pid: 8662, comm: sshd Not tainted 2.6.32-5-amd64 #1
      sh kernel: [10222733.062569] Call Trace:
      sh kernel: [10222733.062572] [<ffffffff811ff056>] ? tty_ldisc_reinit+0x46/0x7b
      sh kernel: [10222733.062574] [<ffffffff811ff056>] ? tty_ldisc_reinit+0x46/0x7b

    Is there any way to get sshd working in interactive mode again? I tried restarting sshd, but that is no help. And somehow I cannot reboot the server: I tried sending shutdown -r now and reboot, but it refuses to go down. Should I go ahead and request a physical reboot?
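
    A hedged reading: the WARNING comes from the kernel's tty line-discipline code with sshd as the triggering process, so the wedged part looks like pty allocation in the kernel, not sshd's configuration - which would explain why restarting sshd changes nothing and why command mode (no pty) still works. Two quick tests, and a fallback for the refused reboot:

      ssh -T user@host            # never allocates a pty; should behave like the working command mode
      ssh -t user@host /bin/bash  # forces a pty; hanging here confirms the tty layer

      # if shutdown/reboot are stuck behind the same kernel problem, SysRq can
      # force a reboot (unclean - filesystems are not unmounted first):
      echo 1 > /proc/sys/kernel/sysrq
      echo b > /proc/sysrq-trigger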


  • NTBackup (on WS2k3) fails to back up remote server (WS2k8R2) with "Error: is not a valid drive, or you do not have access."

    - by Mark A
    We run an NTBackup job on Windows Server 2003 R2 SP2 with all updates (as of Q4 2011). It works well backing up two WS2k3 servers as well as the backup server itself. However, we have been unable to successfully back up our Windows Server 2008 R2 machine ("G5-01"). It often runs for about 2GB worth of backup and then dies with one of the error messages below; it should be more like 20GB for the full server. We have tried using the admin share (C$), an explicitly shared drive, UNC paths and mapped drives. The result is the same each time; the only thing that varies is the amount of data backed up before it chokes. We've also run NTBackup from the UI, from the command line and as a scheduled task. We are backing up to 400/800GB tapes and they have plenty of space available (blank media).

      Error: \\G5-01\c is not a valid drive, or you do not have access.
      Error: \\G5-01\c$ is not a valid drive, or you do not have access.
      Error: Y: is not a valid drive, or you do not have access.
      Error: Could not access or create backup catalog files. Verify that you have full access to the working folder and there is disk space available.

    The job is run as Administrator, and we have no problems logging onto the server and transferring files. The Event Log on the WS2k8 machine is not much help, as it shows success audits for each login. All of the hardware involved (HP DL360 G3, HP LTO Ultrium 3, Adaptec 39320A) has the latest supported drivers. We've seemingly tried a bunch of different options, but are wondering where to look next to resolve the backup issue. We've been super happy with our reliable scheduled job for years, but this one is stumping us!
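
    A sketch for isolating the failure outside NTBackup, run on the 2003 box under the same account as the job (account name is a placeholder):

      net use \\G5-01\c$ /user:DOMAIN\backupacct *
      dir \\G5-01\c$\
      xcopy \\G5-01\c$\Users c:\temp\probe /e /c /h /y

    If a plain large copy also dies a couple of GB in, the problem is at the SMB/network layer (NIC offload settings on the 2008 R2 side are a usual suspect when transfers reset mid-stream) rather than in NTBackup's credentials; if the copy runs clean, the "backup catalog files" error points back at NTBackup's working folder on the 2003 side instead.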


  • Automating first time login process in Windows Server 2008 R2 SP1 virtual machine

    - by George Durzi
    I have a set of Windows 2008 Server R2 SP1 Enterprise Edition virtual machines running in Hyper-V. The host server has 64GB of RAM and two SSD drives (one drive for the host OS, and the second one for the VMs). The virtual machines are as follows: Domain Controller: 4GB RAM Exchange Server: 4GB RAM Terminal Services: 50GB RAM We use this setup for a travelling training class where users remote desktop to one of the VMs - let's call it the Terminal Services or "TS" VM - where tools such as Visual Studio are installed. The students go through some labs on the TS VMs in Visual Studio. Overall, this setup works great. However, when users are collectively logging in for the first time, the VM really struggles to keep up while all the user profiles are created. It can take some users up to 10 minutes to login. The number varies from 30 to 40 students. A workaround to this would be to manually remote desktop to the TS virtual machine using all the accounts to ensure that the local profile is created in advance. I'm looking for a way to automate the first time login process on the TS virtual machine. I am envisioning iterating through the accounts in a certain Active Directory OU, and then somehow initiating a remote desktop session to the TS VM to log them in for the first time. Are there ways to do this? Thanks
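
    A rough sketch of that iteration idea (PowerShell with the AD module on the host; the OU, host name and password handling are all hypothetical placeholders - in particular, a plaintext password in a script is only tolerable for lab accounts):

      Import-Module ActiveDirectory
      $users = Get-ADUser -Filter * -SearchBase "OU=Students,DC=training,DC=local"
      foreach ($u in $users) {
          # seed credentials so mstsc logs in without prompting
          cmdkey /generic:TERMSRV/TS-VM /user:"TRAINING\$($u.SamAccountName)" /pass:"P@ssw0rd1"
          Start-Process mstsc -ArgumentList "/v:TS-VM"
          Start-Sleep -Seconds 30              # allow the profile to finish building
          cmdkey /delete:TERMSRV/TS-VM
      }

    The sessions this leaves behind still need logging off (qwinsta and logoff against the TS box), and pacing the loop keeps it from recreating the very login storm it is meant to avoid.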


  • w2k3 AD DC Demotion fails with "no other AD DC for that domain can be contacted"

    - by Kstro21
    I've got a small office with a single W2k3 SP2 DC (bad idea, but it is real). Now I want to do a clean install on that PC, so I got another one, installed W2k3 SP2, added it to the domain, ran dcpromo and set it to be a GC. Up to here everything is OK. Then I tried dcpromo on the primary DC, but it fails with:

      The box indicating that this domain controller is the last controller for the domain mydomain.com is unchecked. However, no other Active Directory domain controllers for that domain can be contacted. Do you wish to proceed anyway? If you click Yes, any Active Directory changes that have been made on this domain controller will be lost.

    So I started moving all the roles to the new server as described here. When all was OK with the roles, I tried the same again, but got the same result. I tried moving DNS to the new server, but that made no difference. I shut down the old server, then tried to log into a workstation, but it failed saying the domain is not available; I also couldn't add a new workstation to the domain, so I had to power the old server back on. So, if I successfully moved all the roles and DNS to the new server:

    why does dcpromo give such a message on the old server?
    why is the domain unavailable when I shut down the old server?
    if I click Yes when dcpromo gives the warning on the old server, will I lose all users, computers, OUs, etc.?
    am I missing some steps to make this work?

    Hope you can help me. Thanks
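
    Before trusting either server's opinion, the standard health checks (a sketch; both tools ship with the 2003 Support Tools, and the SRV query shows whether clients can even find the new DC):

      dcdiag /v
      repadmin /replsummary
      repadmin /showrepl
      nslookup -type=SRV _ldap._tcp.dc._msdcs.mydomain.com

    The symptoms described - dcpromo seeing no other DC, and the domain vanishing when the old box goes down - both fit the workstations and the new DC still pointing at the old server for DNS. Every machine, including the new DC itself, needs the new DC's IP as its primary DNS before the old one is demoted.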


  • Davical + LDAP + NTLM

    - by slavizh
    I have set up a Davical server on CentOS, configured to use LDAP: the users authenticate to the Davical server with their usernames and passwords. I am using Lightning as the client software for calendaring. Using Lightning requires entering a username and password every time, so I decided to set up NTLM. I want users who are logged into the domain to use the calendar server through Lightning without entering a username and password. I've set up NTLM on the Davical server. But when a user tries to reach the calendar through Lightning, the server first asks for the NTLM username and password and then asks for the LDAP username and password. It becomes something like double authentication. The problem is that NTLM requires domain\username and password, while Davical through LDAP requires only username and password. So my questions are:

    Is there a way to change something in Davical so that Davical with LDAP accepts domain\username-and-password authentication? That way, the second authentication might proceed silently through NTLM and the users would use Lightning without entering usernames and passwords.
    Is there a way I can make this double authentication become one, using only NTLM?

    P.S. We have a Samba domain with an LDAP server, and our users use Thunderbird for their mail; I want to add Lightning too, so they have a calendar service. But I don't want them to enter a username and password for the calendar every time they log in. I know they can save that password, but that is not an option for my organization.
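
    For the NTLM half, a sketch of the Apache side with mod_auth_ntlm_winbind (a common choice against a Samba domain; the directives below are that module's standard ones). Whether Davical can then be told to trust the webserver's REMOTE_USER instead of re-authenticating against LDAP is the part to verify in its documentation, and is the assumption this whole approach hinges on:

      <Location /caldav.php>
        AuthName "CalDAV"
        AuthType NTLM
        NTLMAuth on
        NTLMAuthHelper "/usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp"
        require valid-user
      </Location>

    If Davical only re-prompts because NTLM hands it DOMAIN\username while LDAP knows a plain username, stripping the domain prefix before the LDAP lookup (rather than stacking NTLM on top of the existing auth) may be the smaller change.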

