Search Results

Search found 27238 results on 1090 pages for 'local variable'.


  • Configure Domino to use SMTP routing and hMailServer

    - by Sébastien Lachance
    I have been trying for a couple of days to set up a Domino 8.5 server. Basically, I want everything to run inside a local network. Right now I can send email to other users in the Domino directory without any mail address. I am pretty new to all this, so maybe the answer will be really obvious. What I need is to be able to send a mail from somewhere else to a Domino user and have it redirected to his account. On the Domino server I also have hMailServer installed on port 25, so I configured Domino to use port 26. These are the steps I followed to get where I am now:

    - I set the fully qualified Internet host name to "preview.notes".
    - I changed the SMTP Listener task to Enabled, to turn on the listener so that the server can receive messages routed via SMTP routing.
    - I set up SMTP routing within the local Internet domain (http://www.h2l.com/help/help85%5Fadmin.nsf/f4b82fbb75e942a6852566ac0037f284/7f9738a49efc4f58852574d500097b01?OpenDocument).
    - I modified the person document to use the [email protected] address.
    - I am using hMailServer (which has the local "preview.local" domain name) to send mail to [email protected].

    When sending mail I get an error telling me that the DNS is not set up correctly. Would using the Domino SMTP server instead of hMailServer solve the problem? I can telnet to the Domino SMTP server.
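
    A quick way to verify which daemon answers on which port, assuming the host name preview.notes resolves locally, is to talk SMTP by hand; a sketch of a session, where the addresses are placeholders, not the poster's redacted ones:

        telnet preview.notes 26
        EHLO preview.local
        MAIL FROM:<test@preview.local>
        RCPT TO:<someone@preview.notes>
        QUIT

    If the RCPT line is rejected with a DNS or relay error, the problem is in Domino's Internet domain configuration rather than in hMailServer.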

    Read the article

  • What is the correct way to move a file?

    - by Joe McDonald
    We had an issue at my work where I cut and pasted some files, and immediately a ton of files were lost. I've been working in IT for 10+ years; I know how to cut and paste a file. Well, when it went up to my managers as to why the files were lost, they blamed my cut and paste for all the problems and asked why someone as knowledgeable as me would ever cut and paste a file. Didn't I know that was totally the wrong way to move a file? According to them, the correct way to move a file is to drag it: cutting and pasting moves that 1+ GB file (on the server) to the clipboard (on my PC), which, obviously, will cause problems, whereas dragging a file never touches the clipboard. To be honest, I don't believe that for a minute. I believe that when I cut and paste text it goes to the clipboard; I've seen it in old versions of Windows. But when right-clicking on 100+ files totalling 1+ GB, I can't believe that all that data is immediately copied out of whatever share I'm on at the server, across my wireless connection to my laptop's local clipboard, just to go back to the server to another share. It seems they would build some logic into the server OS or my local OS (more likely my local OS) that defers the move action until I click paste, and if the files are staying local to where they were before, just moves them. So, who's right?
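
    For what it's worth, a move that demonstrably never routes data through a workstation can be done from the command line on the server itself; a minimal sketch, with hypothetical paths, assuming both shares live on the same file server:

        robocopy D:\share1\folder D:\share2\folder /MOVE /E /R:2 /W:5

    robocopy /MOVE deletes from the source only after a successful copy, which also gives some protection against the "files simply vanished" failure mode described above.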

    Read the article

  • gSoap not working with correct pkg-config

    - by O.O
    I run:

        soapcpp2 myClass.hpp -dsoap

    and get this error:

        Package gsoap++ was not found in the pkg-config search path.
        Perhaps you should add the directory containing `gsoap++.pc'
        to the PKG_CONFIG_PATH environment variable
        No package 'gsoap++' found
        Package gsoap++ was not found in the pkg-config search path.
        Perhaps you should add the directory containing `gsoap++.pc'
        to the PKG_CONFIG_PATH environment variable
        No package 'gsoap++' found

    The path is set...

        $ echo $PKG_CONFIG_PATH
        :/home/someUser/SOAP/gsoap-2.8/:/home/someUser/SOAP/gsoap-2.8/
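
    One thing to check, as a sketch: pkg-config needs the directory that actually contains gsoap++.pc, which is usually a pkgconfig subdirectory rather than the source root. The exact subpath below is an assumption; the find command locates the real one:

        $ find /home/someUser/SOAP/gsoap-2.8 -name 'gsoap++.pc'
        $ export PKG_CONFIG_PATH=/home/someUser/SOAP/gsoap-2.8/gsoap/lib/pkgconfig
        $ pkg-config --exists gsoap++ && echo found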

    Read the article

  • linux "date -s" command not working to change date on a server

    - by nelaar
    I have tried many different forms of this command but nothing seems to work:

        # date +%T --set="12:19:06"
        12:19:06
        # date
        Mon Nov 26 12:37:32 SAST 2012

    Changing the date on this server, which is running as a VM, is not working. Our messages log shows messages like this:

        ntpd[3496]: time correction of -1098 seconds exceeds sanity limit (1000); set clock manually to the correct UTC time.

    Our server is now about 20 minutes out. It seems our server has not been updating the time correctly for a few days:

        Nov 22 19:29:23 hostname ntpd[1818]: time reset -998.577519 s
        Nov 22 19:32:34 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10
        Nov 22 19:33:39 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1
        Nov 22 19:52:30 hostname ntpd[1818]: time reset -998.992426 s
        Nov 22 19:55:47 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10
        Nov 22 19:56:53 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1
        Nov 22 20:13:04 hostname ntpd[1818]: time reset -999.374412 s
        Nov 22 20:16:40 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10
        Nov 22 20:17:44 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1
        Nov 22 20:32:02 hostname ntpd[1818]: time reset -999.716832 s
        Nov 22 20:35:28 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10
        Nov 22 20:36:16 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1
        Nov 22 20:56:39 hostname ntpd[1818]: time correction of -1000 seconds exceeds sanity limit (1000); set clock manually to the correct UTC time.
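
    A common sequence for a clock that has drifted past ntpd's sanity limit, as a sketch: stop ntpd, step the clock once, then restart. This assumes an init script at /etc/init.d/ntpd and that the hypervisor's own time sync is not fighting the guest (a frequent cause of exactly this behaviour in VMs):

        /etc/init.d/ntpd stop
        # step date and time in one shot
        date -s "2012-11-26 12:19:06"
        # persist the corrected time to the (virtual) hardware clock
        hwclock --systohc
        /etc/init.d/ntpd start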

    Read the article

  • need some help figuring out clamav & monit monitoring error...unixsocket...

    - by Ronedog
    I need a bit of help figuring something out. First off, I'm not very well versed with FreeBSD servers, but with some direction hopefully I can get this fixed. I'm using FreeBSD and installed Monit so I could monitor some of the processes that run: tomcat, apache, mysql, sendmail, clamav. So far, I'm only successful in getting apache and mysql monitored. I'm getting this error for clamav in /var/log/monit.log:

        'clamavd' failed, cannot open a connection to UNIX[/usr/local/etc/rc.d/clamav-clamd]

    My config for clamav in /etc/monitrc is:

        ####################################################################
        # CLAMAV Virus Checks
        ####################################################################
        check process clamavd with pidfile /var/run/clamav/clamd.pid
          group virus
          start program = "/usr/local/etc/rc.d/clamav-clamd start"
          stop program = "/usr/local/etc/rc.d/clamav-clamd stop"
          if failed unixsocket /usr/local/etc/rc.d/clamav-clamd then restart
          if 5 restarts within 5 cycles then timeout

    Honestly, I really don't know much of what's going on here. My host who helped me get the box set up basically installed clamav, but doesn't offer this kind of detail in supporting me, so I'm left to figure this stuff out on my own; I own the box, but they provide the ISP service. Is there anyone who can help me troubleshoot this? Thanks for your help in advance.
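
    Worth noting, as an observation: the unixsocket test above points at the rc.d start script, not at clamd's listening socket, which is exactly what the UNIX[...] error is complaining about. A sketch of the fix, assuming clamd's LocalSocket is at the usual FreeBSD location (check LocalSocket in clamd.conf for the real path):

        # in /etc/monitrc: test the daemon's socket, not the rc script
        if failed unixsocket /var/run/clamav/clamd.sock then restart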

    Read the article

  • How to enable SMTPS (465) in Postfix on CentOS

    - by user197284
    I need help enabling SMTPS. I use Postfix and Dovecot with MySQL (virtual domains). I do not know how to enable SMTPS (465). I have already added the TLS-related settings, key, and certificate in /etc/postfix/main.cf. OS: CentOS 6.4, 64-bit. Here is my /etc/postfix/master.cf:

        # ==========================================================================
        # service type  private unpriv  chroot  wakeup  maxproc command + args
        #               (yes)   (yes)   (yes)   (never) (100)
        # ==========================================================================
        smtp      inet  n       -       n       -       -       smtpd
          -o content_filter=smtp-amavis:127.0.0.1:10024
          -o receive_override_options=no_address_mappings
        pickup    fifo  n       -       n       60      1       pickup
          -o content_filter=
          -o receive_override_options=no_header_body_checks
        cleanup   unix  n       -       n       -       0       cleanup
        qmgr      fifo  n       -       n       300     1       qmgr
        #qmgr     fifo  n       -       n       300     1       oqmgr
        tlsmgr    unix  -       -       n       1000?   1       tlsmgr
        rewrite   unix  -       -       n       -       -       trivial-rewrite
        bounce    unix  -       -       n       -       0       bounce
        defer     unix  -       -       n       -       0       bounce
        trace     unix  -       -       n       -       0       bounce
        verify    unix  -       -       n       -       1       verify
        flush     unix  n       -       n       1000?   0       flush
        proxymap  unix  -       -       n       -       -       proxymap
        smtp      unix  -       -       n       -       -       smtp
        # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
        relay     unix  -       -       n       -       -       smtp
          -o fallback_relay=
        #  -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
        showq     unix  n       -       n       -       -       showq
        error     unix  -       -       n       -       -       error
        discard   unix  -       -       n       -       -       discard
        local     unix  -       n       n       -       -       local
        virtual   unix  -       n       n       -       -       virtual
        lmtp      unix  -       -       n       -       -       lmtp
        anvil     unix  -       -       n       -       1       anvil
        scache    unix  -       -       n       -       1       scache
        #
        # ====================================================================
        # Interfaces to non-Postfix software. Be sure to examine the manual
        # pages of the non-Postfix software to find out what options it wants.
        # ====================================================================
        maildrop  unix  -       n       n       -       -       pipe
          flags=DRhu user=vmail argv=/usr/local/bin/maildrop -d ${recipient}
        uucp      unix  -       n       n       -       -       pipe
          flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
        ifmail    unix  -       n       n       -       -       pipe
          flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
        bsmtp     unix  -       n       n       -       -       pipe
          flags=Fq. user=foo argv=/usr/local/sbin/bsmtp -f $sender $nexthop $recipient
        #
        # spam/virus section
        #
        smtp-amavis unix -      -       y       -       2       smtp
          -o smtp_data_done_timeout=1200
          -o disable_dns_lookups=yes
          -o smtp_send_xforward_command=yes
        127.0.0.1:10025 inet n  -       y       -       -       smtpd
          -o content_filter=
          -o smtpd_helo_restrictions=
          -o smtpd_sender_restrictions=
          -o smtpd_recipient_restrictions=permit_mynetworks,reject
          -o mynetworks=127.0.0.0/8
          -o smtpd_error_sleep_time=0
          -o smtpd_soft_error_limit=1001
          -o smtpd_hard_error_limit=1000
          -o receive_override_options=no_header_body_checks
          -o smtpd_bind_address=127.0.0.1
          -o smtpd_helo_required=no
          -o smtpd_client_restrictions=
          -o smtpd_restriction_classes=
          -o disable_vrfy_command=no
          -o strict_rfc821_envelopes=yes
        #
        # Dovecot LDA
        dovecot   unix  -       n       n       -       -       pipe
          flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d ${recipient}
        #
        # Vacation mail
        vacation  unix  -       n       n       -       -       pipe
          flags=Rq user=vacation argv=/var/spool/vacation/vacation.pl -f ${sender} -- ${recipient}
        retry     unix  -       -       n       -       -       error
        proxywrite unix -       -       n       -       1       proxymap

    Please help me enable SMTPS. I have amavis enabled.
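
    The stock Postfix answer for port 465 is an smtps entry in master.cf that runs smtpd in TLS wrapper mode; a sketch (the SASL line assumes SASL is already configured in main.cf, and the firewall must also allow port 465):

        smtps     inet  n       -       n       -       -       smtpd
          -o smtpd_tls_wrappermode=yes
          -o smtpd_sasl_auth_enable=yes

    followed by a postfix reload.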

    Read the article

  • Wake for Network Access Apache server in OS X 10.8, follow-up

    - by Gary
    Sorry, I can't seem to post this response within the same thread. Thank you both (Zoredache and Gordon) for your answers, but the fix seems temporary. I entered the command you suggested, and it seemed to work:

        ...smith$ Registering Service ApacheNoDoz._http._tcp.local port 80
        DATE: ---Fri 14 Sep 2012---
        12:04:15.813  ...STARTING...
        12:04:16.566  Got a reply for service ApacheNoDoz._http._tcp.local.: Name now registered and active

    So, I checked for it on my G5:

        Browsing for _http._tcp
        Timestamp     A/R Flags if Domain   Service Type   Instance Name
        (lots of Bonjour printers omitted)...
        12:07:38.370  Add   2  4 local.     _http._tcp.    ApacheNoDoz
        12:07:45.921  Rmv   0  4 local.     _http._tcp.    ApacheNoDoz

    So, it was running at 12:07:38, at which time the host was asleep. But shortly after, the registration seems to have been removed, and I don't know why. Does this mean I can never let the CPU sleep, or is there something else I have to set? Thanks, again.
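
    One thing worth checking, as a sketch: a dns-sd -R registration lives only as long as the process that made it, and it survives sleep only if a Bonjour sleep proxy (an Apple TV, AirPort base station, or another awake Mac) is on the network to answer on the sleeper's behalf. You can look for an active sleep proxy like this:

        dns-sd -B _sleep-proxy._udp local

    If nothing shows up, the Rmv event above is expected as soon as the registering process stops answering.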

    Read the article

  • Exchange 2010 - Certificate error on internal Outlook 2013 connections

    - by Lorenz Meyer
    I have an Exchange 2010 server and Outlook 2003 clients. The Exchange server has a wildcard SSL certificate *.domain.com installed (for use with autodiscover.domain.com and mail.domain.com). The local FQDN of the Exchange server is exch.domain.local. With this configuration there was no problem. Then I started upgrading all Outlook 2003 clients to Outlook 2013, and I consistently get a certificate error in Outlook:

        The name on the security certificate is invalid or does not match the name of the site

    I understand why I get that error: Outlook 2013 is connecting to exch.domain.local while the certificate is for *.domain.com. I was ready to buy a SAN (Subject Alternative Names) certificate containing the three names exch.domain.local, mail.domain.com, and autodiscover.domain.com. But there is a hindrance: the certificate provider (in my case GoDaddy) requires that each domain be validated as our property, and that is not possible for an internal domain that is not accessible from the Internet. So this turns out not to be an option. Creating a self-signed SAN certificate with an Enterprise CA is another barely viable option: there would be a certificate error on every webmail access, and I would have to install the certificate on all Outlook clients. What is a recommended, viable solution? Is it possible to disable certificate checking in Outlook? Or how could I change the Exchange server configuration so that the public domain name is used for all connections? Or is there another solution I'm not thinking of? Any advice is welcome.
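
    The usual fix is the poster's last option: repoint Exchange's internal URLs at the public name so clients never see exch.domain.local. A sketch in the Exchange Management Shell, where the server name EXCH and the virtual-directory identities are assumptions based on the question (internal DNS must also resolve mail.domain.com and autodiscover.domain.com to the server):

        Set-ClientAccessServer -Identity EXCH -AutoDiscoverServiceInternalUri https://autodiscover.domain.com/Autodiscover/Autodiscover.xml
        Set-WebServicesVirtualDirectory -Identity "EXCH\EWS (Default Web Site)" -InternalUrl https://mail.domain.com/EWS/Exchange.asmx
        Set-OABVirtualDirectory -Identity "EXCH\OAB (Default Web Site)" -InternalUrl https://mail.domain.com/OAB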

    Read the article

  • Unable to connect via NetBIOS Name

    - by grom
    I can't connect to machines/shares by NetBIOS names. Below is console output showing the problem.

        C:\>nbtstat -n

        Local Area Connection:
        Node IpAddress: [192.168.1.100] Scope Id: []

                        NetBIOS Local Name Table

           Name               Type         Status
        ---------------------------------------------
        BEAST          <00>  UNIQUE      Registered
        WORKGROUP      <00>  GROUP       Registered
        BEAST          <20>  UNIQUE      Registered
        WORKGROUP      <1E>  GROUP       Registered
        WORKGROUP      <1D>  UNIQUE      Registered
        ..__MSBROWSE__.<01>  GROUP       Registered

        C:\>nbtstat -A 192.168.1.3

        Local Area Connection:
        Node IpAddress: [192.168.1.100] Scope Id: []

                   NetBIOS Remote Machine Name Table

           Name               Type         Status
        ---------------------------------------------
        BRCLAPTOP      <00>  UNIQUE      Registered
        WORKGROUP      <00>  GROUP       Registered
        BRCLAPTOP      <20>  UNIQUE      Registered
        WORKGROUP      <1E>  GROUP       Registered

        MAC Address = 00-1C-BF-14-B8-6E

        C:\>ping beast

        Pinging beast [fe80::59b8:179f:b90b:a63f%11] with 32 bytes of data:
        Reply from fe80::59b8:179f:b90b:a63f%11: time<1ms
        Reply from fe80::59b8:179f:b90b:a63f%11: time<1ms
        Reply from fe80::59b8:179f:b90b:a63f%11: time<1ms
        Reply from fe80::59b8:179f:b90b:a63f%11: time<1ms

        Ping statistics for fe80::59b8:179f:b90b:a63f%11:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

        C:\>ping brclaptop
        Ping request could not find host brclaptop. Please check the name and try again.

        C:\>nbtstat -a brclaptop

        Local Area Connection:
        Node IpAddress: [192.168.1.100] Scope Id: []

        Host not found.
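
    A couple of quick diagnostics, as a sketch: NetBIOS name resolution is broadcast-based by default, so it is worth confirming the node type and that NetBIOS over TCP/IP is enabled on the interface, and forcing IPv4 to rule out the IPv6 link-local path that resolved "beast" above:

        C:\>ipconfig /all | findstr /i "Node NetBIOS"
        C:\>ping -4 brclaptop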

    Read the article

  • use ssh tunnel with phpmyadmin

    - by JohnMerlino
    I have been using an SSH tunnel to bypass the firewall of a remote MySQL server. On my Ubuntu 12.04 installation it works via the terminal, and it works when using a program called MySQL Workbench. However, that program freezes often and I want to try phpMyAdmin as an alternative. However, I cannot connect to the remote server using the SSH tunnel in phpMyAdmin, although I can connect locally. These are the steps I've tried:

    1) Open a tunnel, listening on localhost:3307 and forwarding everything to xxx.xxx.xxx.xxx:3306 (I used 3307 because MySQL on my local machine uses the default port 3306):

        ssh -L 3307:localhost:3306 [email protected]

    So now I have the tunnel port open alongside my local MySQL installation's default port:

        $ netstat -tln
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State
        tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN
        tcp        0      0 127.0.0.1:3307          0.0.0.0:*               LISTEN
        tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
        ...

    2) Now I can easily connect to the remote server via localhost using the terminal:

        $ mysql -u user.name -p -h 127.0.0.1 -P 3307

    Notice that I explicitly specify 3307 as the port, so traffic is forwarded to the remote server, and hence it logs me in to the remote server. Unfortunately, the localhost/phpmyadmin login interface doesn't allow you to specify a port. So I modified the config-db.php file and changed the $dbport variable to 3307, under the impression that the phpMyAdmin interface would now work with port 3307:

        $ sudo vim /etc/phpmyadmin/config-db.php
        $dbport='3307';

    Then I restarted the MySQL server. Unfortunately, it didn't work. When I use the remote credentials to log in, it gives me the error:

        #1045 Cannot log in to the MySQL server
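
    For what it's worth, config-db.php is dbconfig-common's file for phpMyAdmin's own control database; the servers you can log in to are defined in config.inc.php. A sketch of an extra server entry for the Debian/Ubuntu layout, added to /etc/phpmyadmin/config.inc.php (127.0.0.1 rather than localhost is deliberate: localhost would use the Unix socket and bypass the tunnel):

        $i++;
        $cfg['Servers'][$i]['verbose']      = 'remote via SSH tunnel';
        $cfg['Servers'][$i]['host']         = '127.0.0.1';
        $cfg['Servers'][$i]['port']         = '3307';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';
        $cfg['Servers'][$i]['auth_type']    = 'cookie';

    The login page then shows a server drop-down; restarting MySQL is not needed, since the tunnel endpoint is what changes.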

    Read the article

  • Slow RDP after server joins domain

    - by Chris Grove
    We're having RDP issues with Amazon cloud servers that we recently joined to an Active Directory domain. The setup is:

    - A local office network
    - A virtual private cloud in Amazon
    - An IPSec tunnel between the two networks
    - A number of Windows 2008 R2 servers on both networks
    - An AD domain (call it abc.net), with one domain controller in each network; both domain controllers are new, fresh installs

    Before we had the domain set up, we had local accounts on the cloud computers which were used for RDP access. Our idea was to get all of the servers onto the domain so we could use domain logins instead of per-server local logins. Before the cloud servers were in the domain, RDP (from the office network or through a VPN to the cloud) worked great. After we joined the cloud servers to the domain, RDP from the office became very slow: a few minutes to log in, long frequent pauses when the interface is unresponsive, generally just a slow and frustrating experience. This is a problem regardless of whether a domain or local login is used for RDP. Oddly, when outside of the office network and connecting to the cloud directly with the VPN, RDP is still very responsive. Any idea why RDP from office to cloud is suddenly very slow after the cloud servers joined the domain? What can I look at in our configuration to address this? Any help is greatly appreciated.
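
    Not an answer, but a sketch of where to look first: this pattern is often name resolution or site affinity, with servers on one side of the tunnel answering DNS or authentication for the other side. Hypothetical checks from a cloud server:

        nslookup abc.net                 :: should list both DCs quickly
        nltest /dsgetdc:abc.net          :: which DC this server actually uses
        nltest /dsgetsite                :: whether AD sites/subnets are defined

    If the cloud servers are locating the office DC across the tunnel, defining AD sites and subnets for each network usually fixes the logon delays.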

    Read the article

  • How to configure network on Windows Server 2008

    - by Gokhan Ozturk
    I have an IBM x3400 server machine with Windows Server 2008 R2 installed on it, but since I am not an expert on networking I have some problems. These roles are installed on my server:

    - Active Directory
    - DNS
    - File Sharing
    - Hyper-V
    - IIS
    - VPN

    There are two network cards in it, configured like this:

        Local Connection 1: IP 192.168.30.3,   mask 255.255.255.0, gateway 192.168.30.2, DNS 127.0.0.1
        Local Connection 2: IP 192.168.30.101, mask 255.255.255.0, gateway 192.168.30.6, DNS 127.0.0.1

    My problem is that with these gateways the server shares Internet to all computers, and that is not what I want. I want to use Local Connection 1 for the internal network; I am giving all computers 192.168.30.3 as gateway and DNS. Local Connection 2 is for Hyper-V and VPN connections. 192.168.30.2 and 192.168.30.6 are my modems' gateways; I am using the 192.168.30.6 external IP for VPN connections. There are two 24-port switches with a connection between them, and the two Ethernet cards are connected directly to them. The modems are connected to the switches as well (the modems are not near the server; they are somewhere in the building). I disabled the network bridge and removed all Ethernet cards from it. With this configuration, all computers can ping my server's IP (192.168.30.3), but from the server I cannot ping any clients (Request timeout). What is the best way to configure my network? Thank you. Regards
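
    A sketch of one common cleanup for this kind of setup: two NICs in the same subnet, each with its own default gateway, confuses Windows routing, so keep a single default gateway and inspect what the routing table actually contains. The interface name below is taken from the question; verify the output of route print before changing anything:

        route print
        netsh interface ipv4 set address name="Local Connection 2" static 192.168.30.101 255.255.255.0
        rem no gateway given on the second NIC: one default gateway per server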

    Read the article

  • do I need to create an AD site for VPN network

    - by ykyri
    I have a Windows domain at the 2008 R2 functional level. There are four GC DCs in four different physical locations, and I have a Kerio-based VPN network for replication and remote administration. Here is how the network is configured:

        dc1: local IP 192.168.0.10,  VPN IP 192.168.1.10
        dc2: local IP 10.10.8.11,    VPN IP 192.168.1.11
        dc3: local IP 10.10.9.12,    VPN IP 192.168.1.12
        dc4: local IP 10.10.10.13,   VPN IP 192.168.1.13

    That's simple, and replication and everything else works fine, but when running dcdiag on dc3 I get an error:

        A warning event occurred. EventID: 0x000016AF
        During the past 4.12 hours there have been 216 connections to this
        Domain Controller from client machines whose IP addresses don't map
        to any of the existing sites in the enterprise. <...>
        The log(s) may contain additional unrelated debugging information.
        To filter out the needed information, please search for lines which
        contain text 'NO_CLIENT_SITE:'. The first word after this string is
        the client name and the second word is the client IP address.

    Here are example lines from netlogon.log:

        05/30 12:07:39 DOMAIN.NAME: NO_CLIENT_SITE: dc2 192.168.1.11
        05/31 09:52:11 DOMAIN.NAME: NO_CLIENT_SITE: dc4 192.168.1.13
        05/31 19:49:31 DOMAIN.NAME: NO_CLIENT_SITE: adm-note 192.168.1.101
        07/01 05:16:26 DOMAIN.NAME: NO_CLIENT_SITE: dc1 192.168.1.10

    All VPN-joined computers generate the same log line as above; adm-note, for example, is the administrator's notebook, which also has VPN. The question is: should I add a new AD site and bind the VPN subnet 192.168.1.0/24 to that site?
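
    As a sketch: the warning only means 192.168.1.0/24 is not defined as a subnet in AD, so mapping it to a site silences it. A new site is only needed if the VPN should have its own replication topology; binding the subnet to an existing site is enough. Assuming a 2012-era ActiveDirectory PowerShell module is available (otherwise use Active Directory Sites and Services), with the site name as a placeholder:

        New-ADReplicationSubnet -Name "192.168.1.0/24" -Site "Default-First-Site-Name"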

    Read the article

  • Fetch new mails (also from subfolders) from another IMAP server as new mail in Postfix

    - by Tobi
    Hi everyone. I have installed Postfix on a server with aliases and domains from a MySQL database. It is configured to forward some addresses to other mail accounts and also delivers some mail into local mailboxes that are queried over a Dovecot IMAP server. For this example, let there be two users: [email protected], whose mail just gets forwarded to, let's say, [email protected]; and [email protected], who accesses his mail from local IMAP. Now I want to fetch some mails from another mail server and handle them as if they were sent to a user of my mail server. Let's say these correlations exist: [email protected] has two external accounts, [email protected] and [email protected]; [email protected] also has one external account, [email protected]. The problem is that new mail on that other mail server is not always in the inbox; it might be in subfolders, mailinglists/all or mailinglists/it, but also in mailinglists/some-other-department, which is not interesting and should not be delivered. I already found a program called fetchmail, but I cannot find out how to fetch subfolders or decide which subfolders are fetched.
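
    fetchmail can do this with its folder keyword, which takes an explicit list of IMAP mailboxes per poll; folders not listed are never touched. A sketch of a .fetchmailrc, with every host, user, and folder name a placeholder since the real ones are redacted above (check fetchmail(1) for the exact list quoting on your version):

        poll imap.other-server.example protocol imap
          user "remoteuser1" password "secret"
          folder "INBOX" "mailinglists/all" "mailinglists/it"
          smtphost localhost
          is localuser1 here

    Delivery goes via SMTP to the local Postfix, so the fetched messages flow through the same alias and forwarding rules as directly received mail.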

    Read the article

  • How to access variables in an erb-subtemplate in puppet?

    - by c33s
    example.pp:

        $foo = 'bar'
        $content = template('mymodule/maintemplate.erb')

    maintemplate.erb:

        <% bar = foo + "extra" %>
        foobar = scope_function_template(['mymodule/subtemplate.erb'])

    subtemplate.erb:

        <%# here i want to access the variable bar %>
        <%= bar %>

    There is the function <%= scope.lookupvar('::bar') %>. Is there a kind of parent::bar in ERB templating? Or can I pass some variables to the subtemplate, or can I only access the outer variable (of the .pp file) with ::foo?
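
    A sketch of the usual workaround: local Ruby variables in one ERB template are not visible to a template rendered through the template function; only Puppet scope variables are. So compute the value in the manifest and look it up in the subtemplate:

        # example.pp
        $foo = 'bar'
        $bar = "${foo}extra"
        $content = template('mymodule/maintemplate.erb')

        # subtemplate.erb
        <%= scope.lookupvar('bar') %>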

    Read the article

  • pcfg_openfile: unable to check htaccess file, ensure it is readable

    - by rxt
    After moving a website folder on my local development machine to another drive, then moving it back, I got a 403 error. Most of this problem probably had to do with rights that got messed up. After deleting the code and restoring it from SVN, the rights seemed all right, but the error stayed. The setup is a bit complex, as follows:

    - I have Ubuntu 10.4 as development machine, trying to mimic the server as much as possible
    - We use Eclipse + SVN and I create all projects in a local folder under my user account
    - In /var/www-vhosts I create folders for each vhost, like this one: test.localhost
    - test.local/index.php includes the index file of the project
    - test.local/.htaccess is a symbolic link to the htaccess file in a project subfolder

    I get the following error in the Apache error log:

        [Thu Jul 08 15:55:56 2010] [crit] [client 127.0.0.1] (13)Permission denied: /var/www-vhosts/test.localhost/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    The problem seems to be the .htaccess file, or the link to it:

    - When I empty the htaccess, nothing changes
    - When I remove the link, the index include produces some output (in the Apache error log)
    - When I remove the link and replace it with the actual file, I get another error:

        [Thu Jul 08 16:47:54 2010] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www-vhosts/test.localhost/test

    I'm lost here and don't know what to do next. Do you have any ideas what I can try? This setup has worked before, but I don't know what is different now.
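
    A sketch of the usual culprits for both errors, assuming the project lives somewhere under /home (the exact path below is hypothetical): Apache needs execute (traverse) permission on every directory between / and the link target, and FollowSymLinks on the vhost directory:

        # let the apache user traverse into the project directory
        chmod o+x /home /home/user /home/user/workspace /home/user/workspace/test

        # in the vhost config
        <Directory /var/www-vhosts/test.localhost>
            Options +FollowSymLinks
        </Directory>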

    Read the article

  • nginx: js file loads inconsistently on every refresh

    - by poymode
    I have this nginx problem wherein a JS file in a Rails app loads inconsistently. Whenever I access the JS file in the browser and refresh the page, the scrollbar changes length, meaning sometimes it loads half the JS page, sometimes the whole thing, and sometimes just a part of it. The JS file size is 71K. My nginx server is on a different server, separate from my Rails app. When I access the JS file directly through the app server, let's say 10.48.30.150:3000/javascripts/file.js, it works fine and never shows a half-loaded page. But when I go through the nginx server which upstreams the Rails app, I get the inconsistent page loads. Here is my nginx http conf:

        error_log /usr/local/nginx/logs/error.log;
        pid /usr/local/nginx/logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            server_names_hash_bucket_size 256;
            access_log /usr/local/nginx/logs/access.log;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 0;
            tcp_nodelay on;
            #gzip on;
            #gzip_min_length 4096;
            #gzip_buffers 16 8k;
            #gzip_types application/x-javascript text/css text/plain;
            large_client_header_buffers 4 8k;
            client_max_body_size 2G;
            include /usr/local/nginx/conf.d/*.conf;
        }
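
    Not a diagnosis, but the classic cause of truncated proxied responses is nginx failing to write to its proxy temp directory: responses larger than the in-memory proxy buffers are spooled to disk, and a permission error there cuts the body short. A sketch of what to check, assuming the default source-build paths and worker user:

        # look for '(13: Permission denied)' on proxy_temp in the error log
        grep proxy_temp /usr/local/nginx/logs/error.log
        # make the temp path writable by the worker user (often 'nobody')
        chown -R nobody /usr/local/nginx/proxy_temp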

    Read the article

  • How to determine JAVA_HOME on Debian/Ubuntu?

    - by Witek
    On Ubuntu it is possible to have multiple JVMs installed at the same time. The default one is selected with update-alternatives, but this does not set the JAVA_HOME environment variable, due to a Debian policy. I am writing a launcher script (bash) which starts a Java application, and this application needs the JAVA_HOME environment variable. So how do I get the path of the JVM which is currently selected by update-alternatives?
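
    A sketch that cooperates with the alternatives system: resolve the symlink chain behind the java binary and strip the trailing /bin/java. The caveat is that for a JDK this may land in the bundled jre directory, which is usually what a launcher needs anyway:

        #!/bin/bash
        # /usr/bin/java -> /etc/alternatives/java -> the real JVM binary
        java_bin=$(readlink -f "$(command -v java)")
        export JAVA_HOME=${java_bin%/bin/java}
        echo "$JAVA_HOME"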

    Read the article

  • nginx logrotate config

    - by TomOP
    What's the best way to rotate nginx logfiles? In my opinion, I should create a file "nginx" in /etc/logrotate.d/, fill it with the following code, and do a /etc/init.d/syslog restart after that. This would be my config (I haven't tested it yet):

        /usr/local/nginx/logs/*.log {
            # rotate the logfile(s) daily
            daily
            # adds extension like YYYYMMDD instead of simply adding a number
            dateext
            # If log file is missing, go on to next one without issuing an error msg
            missingok
            # Save logfiles for the last 49 days
            rotate 49
            # Old versions of log files are compressed with gzip
            compress
            # Postpone compression of the previous log file to the next rotation cycle
            delaycompress
            # Do not rotate the log if it is empty
            notifempty
            # create mode owner group
            create 644 nginx nginx
            # after logfile is rotated and nginx.pid exists, send the USR1 signal
            postrotate
                [ ! -f /usr/local/nginx/logs/nginx.pid ] || kill -USR1 `cat /usr/local/nginx/logs/nginx.pid`
            endscript
        }

    I have both the access.log and error.log files in /usr/local/nginx/logs/ and want to rotate both daily. Can anyone please tell me if "dateext" is correct? I want the log filename to be something like "access.log-2010-12-04". One more thing: can I do the log rotation every day at a specific time (e.g. 11 pm)? If so, how? Thanks.
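
    Two notes, as a sketch: dateext alone appends -YYYYMMDD, so the dashed form needs an explicit dateformat; and logrotate has no clock of its own, it runs whenever cron invokes it (on Debian-style systems from /etc/cron.daily), so an 11 pm rotation means calling logrotate from your own cron entry:

        # in the logrotate stanza, for access.log-2010-12-04 style names
        dateformat -%Y-%m-%d

        # crontab entry to run logrotate at 23:00 instead of cron.daily
        0 23 * * * /usr/sbin/logrotate /etc/logrotate.conf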

    Read the article

  • Httpd and LDAP Authentication not working for sub-pages

    - by DavisTasar
    I just recently installed a Nagios implementation, and I'm trying to get LDAP authentication working for httpd on Red Hat. (Apache's nagios.conf below, sanitized of course.)

        ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin"

        <Directory "/usr/local/nagios/sbin">
            #SSLRequireSSL
            Options ExecCGI
            AllowOverride none
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthzLDAPAuthoritative off
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>

        Alias /nagios "/usr/local/nagios/share"

        <Directory /usr/local/nagios/share>
            #SSLRequireSSL
            Options None
            AllowOverride none
            AuthBasicProvider ldap
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthzLDAPAuthoritative off
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>

    Now, the initial authentication works, so when you first hit the page you can log in just fine. However, when you go anywhere else, it prompts for authentication, fails (asking for a re-prompt), and gives this error message:

        [Mon Oct 21 14:46:23 2013] [error] [client 172.28.9.30] access to /nagios/cgi-bin/statusmap.cgi failed, reason: verification of user id '<myuseraccount>' not configured, referer: http://<nagiosserver>/nagios/side.php

    I'm almost certain it's a simple flag or option, but I just can't find it, and I don't have a lot of experience working with Apache. Any assistance or help would be greatly appreciated.
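
    Worth noting, as an observation rather than a confirmed fix: the /nagios/share block declares AuthBasicProvider ldap, but the cgi-bin block, which serves the failing statusmap.cgi, does not, so that block falls back to the default file provider; "verification of user id ... not configured" is the message mod_auth_basic logs in exactly that case. A sketch of the one-line change:

        <Directory "/usr/local/nagios/sbin">
            ...
            AuthType Basic
            AuthBasicProvider ldap
            ...
        </Directory>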

    Read the article

  • Why is piping dd through gzip so much faster than a direct copy?

    - by Foo Bar
    I wanted to back up a path from a computer in my network to another computer in the same network over a 100 MBit/s line. For this I did:

        dd if=/local/path of=/remote/path/in/local/network/backup.img

    which gave me a very low network transfer speed of about 50 to 100 kB/s, which would have taken forever. So I stopped it and decided to gzip it on the fly to make the amount to transfer smaller:

        dd if=/local/folder | gzip > /remote/path/in/local/network/backup.img.gz

    But now I get something like 1 MB/s network transfer speed, a factor of 10 to 20 faster. After noticing this, I tested it on several paths and files and it was always the same. Why does piping dd through gzip increase the transfer rate by a large factor, instead of merely reducing the byte length of the stream by a large factor? I'd expected even a small decrease in transfer rate, due to the higher CPU consumption while compressing, but instead I get a double plus. Not that I'm unhappy, but I'm just wondering. ;)
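
    One plausible factor, sketched here as a guess rather than a diagnosis: dd without a bs= option copies in 512-byte blocks, and over a network filesystem each tiny write can become its own round trip; gzip in the middle both coalesces the stream through its internal buffering and shrinks it. A larger block size alone may show most of the same speedup:

        dd if=/local/path of=/remote/path/in/local/network/backup.img bs=1M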

    Read the article

  • IPTables configuration help

    - by Sam
    I'm after some help with setting up iptables. Mostly the configuration is working, but regardless of what I try I cannot allow localhost to access the local Apache only (i.e. localhost to access localhost:80 only). Here is my script:

        #!/bin/bash
        # Allow root to access external web and ftp
        iptables -t filter -A OUTPUT -p tcp --dport 21 --match owner --uid-owner 0 -j ACCEPT
        iptables -t filter -A OUTPUT -p tcp --dport 80 --match owner --uid-owner 0 -j ACCEPT
        # Allow DNS queries
        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT
        # Allow in and outbound SSH to/from any server
        iptables -A INPUT -p tcp -s 0/0 --dport 22 -j ACCEPT
        iptables -A OUTPUT -p tcp -d 0/0 --sport 22 -j ACCEPT
        # Accept ICMP requests
        iptables -A INPUT -p icmp -s 0/0 -j ACCEPT
        iptables -A OUTPUT -p icmp -d 0/0 -j ACCEPT
        # Accept connections from any local machines but disallow localhost
        # access to networked machines
        iptables -A INPUT -s 10.0.1.0/24 -j ACCEPT
        iptables -A OUTPUT -d 10.0.1.0/24 -j DROP
        # Drop ALL other traffic
        iptables -A OUTPUT -p tcp -d 0/0 -j DROP
        iptables -A OUTPUT -p udp -d 0/0 -j DROP

    Now I have tried many permutations and I'm obviously missing something. I place the new rules above the inbound/outbound SSH rules, so it's not the precedence order. If someone could give me a heads-up on allowing only the local machine to access the local web server, that'd be great. Cheers guys.
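
    A sketch of rules scoped to the loopback interface only, to be inserted before the OUTPUT drops (untested against the rest of the script; note that loopback traffic traverses both OUTPUT and INPUT, and the server's replies carry source port 80, so all four directions need a rule):

        # localhost -> localhost:80 and the replies, on lo only
        iptables -A OUTPUT -o lo -p tcp --dport 80 -j ACCEPT
        iptables -A OUTPUT -o lo -p tcp --sport 80 -j ACCEPT
        iptables -A INPUT  -i lo -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT  -i lo -p tcp --sport 80 -j ACCEPT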

    Read the article

  • upstart scripts: run a task after networking goes up

    - by The Journeyman geek
    I'm working on moving my current server setup to newer hardware, and migrating from Ubuntu Karmic Koala to Lucid Lynx. Currently I'm using gw6c (compiled from the gogo6 website, as opposed to the version from the repositories) to get IPv6 access for my systems. On the Karmic Koala system, I used a simple init.d script to get the IPv6 client started:

        #! /bin/sh
        /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf

    I figured since this runs at any runlevel, it should translate to:

        respawn
        console none
        start on startup
        stop on shutdown
        script
            exec /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf
            emit free6_ipv6_started
        end script

    This works fine started from initctl, but it apparently fails to start when it boots, its status being stop/waiting. It works fine (and respawns) when started otherwise. Any ideas on where I'm going wrong, and what would be the appropriate 'start on' argument? EDIT: the exact error is 'init: gw6c main process (xxx) ended with status 8', followed by the process respawning, with xxx being a PID, I suspect. I'm also half suspecting this is because gw6c starts before networking does, and I need my eth0 up before gw6c is.
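
    A sketch of a stanza that waits for the interface instead of starting at boot, assuming Upstart's net-device-up event (emitted via ifup on Lucid) and that eth0 is the uplink:

        # /etc/init/gw6c.conf
        start on net-device-up IFACE=eth0
        stop on runlevel [016]
        respawn
        exec /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf

    That would also explain exit status 8 at boot: the tunnel client cannot reach its broker before the network is up.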

    Read the article

  • can't install anything anymore with apt-get

    - by Aymane Shuichi
    Welcome. This is the log I get when trying to install anything (php5-fpm, after removing it):

        apt-get install php5-fpm
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        php5-fpm is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up php5-fpm (5.4.4-14+deb7u10) ...
        insserv: warning: script 'S55IptabLes' missing LSB tags and overrides
        insserv: warning: script 'S55IptabLex' missing LSB tags and overrides
        insserv: There is a loop between service IptabLes and mountnfs if started
        insserv: loop involving service mountnfs at depth 8
        insserv: loop involving service networking at depth 7
        insserv: loop involving service mountnfs-bootclean at depth 10
        insserv: There is a loop between service rc.local and mountall if started
        insserv: loop involving service mountall at depth 6
        insserv: loop involving service checkfs at depth 5
        insserv: loop involving service kbd at depth 11
        insserv: There is a loop between service rc.local and mountall-bootclean if started
        insserv: loop involving service mountall-bootclean at depth 7
        insserv: loop involving service urandom at depth 9
        insserv: There is a loop between service IptabLes and mountdevsubfs if started
        insserv: loop involving service mountdevsubfs at depth 2
        insserv: loop involving service udev at depth 1
        insserv: There is a loop at service rc.local if started
        insserv: There is a loop at service IptabLes if started
        insserv: Starting IptabLes depends on rc.local and therefore on system facility `$all' which can not be true! (repeated 99 times)
        insserv: Max recursions depth 99 reached
        insserv: loop involving service postfix at depth 2
        insserv: There is a loop between service IptabLes and udev if started
        insserv: loop involving service mountkernfs at depth 1
        insserv: loop involving service IptabLes at depth 1

    And here is the error I get:

        insserv: exiting now without changing boot order!
        update-rc.d: error: insserv rejected the script header
        dpkg: error processing php5-fpm (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         php5-fpm
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The biggest operation I performed before this was updating nginx from 1.2 to 1.6, thanks to this guide: How to upgrade nginx from 1.2 to 1.6 on Debian 7. Please help!
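
    A sketch of a way out, with a caveat: insserv is choking on the IptabLes/IptabLex init scripts, which are not Debian packages; scripts with these names are associated with a known Linux DDoS malware family, so consider that the box may be compromised before doing anything else. Assuming the scripts are indeed unwanted:

        # take them out of the boot order, then retry the stalled configure step
        update-rc.d -f IptabLes remove
        update-rc.d -f IptabLex remove
        rm -f /etc/init.d/IptabLes /etc/init.d/IptabLex
        apt-get install -f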

    Read the article
