Search Results

Search found 2388 results on 96 pages for 'rare man'.


  • Troubleshooting Application Timeouts in SQL Server

    - by Tara Kizer
    I recently received the following email from a blog reader: "We are having an OLTP database instance, using SQL Server 2005 with little to moderate traffic (10-20 requests/min). There are also bulk imports that occur at regular intervals in this DB and the import duration ranges between 10secs to 1 min, depending on the data size. Intermittently (2-3 times in a week), we face an issue, where queries get timed out (default of 30 secs set in application). On analyzing, we found two stored procedures, having queries with multiple table joins inside them, taking a long time (5-10 mins) to execute, when ideally the execution duration ranges between 5-10 secs. The execution plan of the same displayed a Clustered Index Scan happening instead of a Clustered Index Seek. All required indexes are found to be present and index fragmentation is also minimal, as we rebuild indexes regularly along with updating statistics. With no other alternate options occurring to us, we restarted SQL Server and thereafter the performance was back on track. But sometimes it was still giving timeout errors for some hits and so we also restarted IIS, and that stopped the problem as of now." Rather than respond directly to the blog reader, I thought it would be more interesting to share my thoughts on this issue in a blog post. There are a few things that I can think of that could cause abnormal timeouts: blocking, a bad plan in cache, outdated statistics, or a hardware bottleneck. To determine if blocking is the issue, we can easily run sp_who/sp_who2 or a query directly on sysprocesses (select * from master..sysprocesses where blocked <> 0).  If blocking is present and consistent, then you'll need to determine whether or not to kill the parent blocking process.  Killing a process will cause the transaction to roll back, so you need to proceed with caution.  Killing the parent blocking process is only a temporary solution, so you'll need to do more thorough analysis to figure out why the blocking was present.  You should look into missing indexes and perhaps consider changing the database's isolation level to READ_COMMITTED_SNAPSHOT. The blog reader mentions that the execution plan shows a clustered index scan when a clustered index seek is normal for the stored procedure.  A clustered index scan might have been chosen either because that is what is in cache already or because of out-of-date statistics.  The blog reader mentions that bulk imports occur at regular intervals, so outdated statistics is definitely something that could cause this issue.  The blog reader may need to update statistics after imports are done if the imports are changing a lot of data (greater than 10%).  If the statistics are good, then the query optimizer might have chosen to scan rather than seek in a previous execution because the scan was determined to be less costly due to the value of an input parameter.  If this parameter value is rare, then its execution plan in cache is what we call a bad plan.  You want the best plan in cache for the most frequent parameter values.  If a bad plan is a recurring problem on your system, then you should consider rewriting the stored procedure.  You might want to break up the code into multiple stored procedures so that each can have a different execution plan in cache. To remove a bad plan from cache, you can recompile the stored procedure.  An alternative method is to run DBCC FREEPROCCACHE, which drops the entire procedure cache.
    It is better to recompile stored procedures than to drop the procedure cache: dropping the procedure cache affects all plans in cache rather than just the bad ones, so there will be a temporary performance penalty until the plans are loaded into cache again. To determine if there is a hardware bottleneck occurring, such as slow I/O or high CPU utilization, you will need to run Performance Monitor on the database server.  Hopefully you already have a baseline of the server so you know what is normal and what is not.  Be on the lookout for I/O requests taking longer than 12 milliseconds and CPU utilization over 90%.  The servers that I support typically are under 30% CPU utilization, but your baseline could be higher and be within a normal range. If restarting the SQL Server service fixes the problem, then the problem was most likely due to blocking or a bad plan in the procedure cache.  Rather than restarting the SQL Server service, which causes downtime, the blog reader should instead investigate the possibilities mentioned above.  Proceed with caution when restarting the SQL Server service, as all transactions that have not completed will be rolled back at startup.  This crash recovery process could take longer than normal if there was a long-running transaction in flight when the service was stopped.  Until the crash recovery process is completed on a database, it is unavailable to your applications. If restarting IIS fixes the problem, then the problem might not have been inside SQL Server.  Prior to taking this step, you should analyze the possibilities mentioned above. If you can think of other reasons why the blog reader is facing this issue a few times a week, I'd love to hear your thoughts via a blog comment.
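
    To make the first two checks concrete, here is a minimal T-SQL sketch (the object names are placeholders, not from the reader's system):

      -- Look for blocking chains: any row with a non-zero blocked spid
      SELECT spid, blocked, waittime, lastwaittype, cmd
      FROM master..sysprocesses
      WHERE blocked <> 0;

      -- Evict only the suspect plan by marking the procedure for recompilation
      EXEC sp_recompile N'dbo.SuspectProcedure';

      -- Refresh statistics on a heavily imported table after a bulk load
      UPDATE STATISTICS dbo.ImportedTable WITH FULLSCAN;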

    Read the article

  • Centos CMake Does Not Install Using gcc 4.7.2

    - by Devin Dixon
    A similar problem has been reported here with no solution: https://www.centos.org/modules/newbb/print.php?form=1&topic_id=42696&forum=56&order=ASC&start=0 I've added the devtools repo and upgraded gcc on CentOS: cd /etc/yum.repos.d wget http://people.centos.org/tru/devtools-1.1/devtools-1.1.repo yum --enablerepo=testing-1.1-devtools-6 install devtoolset-1.1-gcc devtoolset-1.1-gcc-c++ scl enable devtoolset-1.1 bash The result is this for my gcc: [root@hhvm-build-centos cmake-2.8.11.1]# gcc -v Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/lto-wrapper Target: x86_64-redhat-linux Configured with: ../configure --prefix=/opt/centos/devtoolset-1.1/root/usr --mandir=/opt/centos/devtoolset-1.1/root/usr/share/man --infodir=/opt/centos/devtoolset-1.1/root/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --disable-build-with-cxx --disable-build-poststage1-with-cxx --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --enable-languages=c,c++,fortran,lto --enable-plugin --with-linker-hash-style=gnu --enable-initfini-array --disable-libgcj --with-ppl --with-cloog --with-mpc=/home/centos/rpm/BUILD/gcc-4.7.2-20121015/obj-x86_64-redhat-linux/mpc-install --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux Thread model: posix gcc version 4.7.2 20121015 (Red Hat 4.7.2-5) (GCC) I then tried to install CMake from http://www.cmake.org/cmake/resources/software.html#latest But I keep running into this error: Linking CXX executable ../bin/ccmake /opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: CMakeFiles/ccmake.dir/CursesDialog/cmCursesMainForm.cxx.o: undefined reference to symbol 'keypad' /opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: note: 'keypad' is defined in DSO /lib64/libtinfo.so.5 so try adding it to the linker command line /lib64/libtinfo.so.5: could not read symbols: Invalid operation collect2: error: ld returned 1 exit status gmake[2]: *** [bin/ccmake] Error 1 gmake[1]: *** [Source/CMakeFiles/ccmake.dir/all] Error 2 gmake: *** [all] Error 2 The problem seems to come from the newly installed gcc, because the build works with the default compiler. Is there a solution to this problem?
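
    As the linker note suggests, 'keypad' lives in libtinfo, which the devtoolset linker does not pull in implicitly. A hedged workaround (assuming the CMake bootstrap script honors LDFLAGS, which releases of this era do) is to link it explicitly:

      # Make sure the ncurses/tinfo development files are present
      yum install ncurses-devel
      # Rebuild, telling the linker to link libtinfo explicitly
      cd cmake-2.8.11.1
      LDFLAGS="-ltinfo" ./bootstrap
      gmake && gmake install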

    Read the article

  • Alter charset and collation in all columns in all tables in MySQL

    - by The Disintegrator
    I need to execute these statements on all tables, for all columns. alter table table_name charset=utf8; alter table table_name alter column column_name charset=utf8; Is it possible to automate this in any way inside MySQL? I would prefer to avoid mysqldump. Update: Richard Bronosky showed me the way :-) The query I needed to execute on every table: alter table DBname.DBfield CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci; Crazy query to generate all the other queries: SELECT distinct CONCAT( 'alter table ', TABLE_SCHEMA, '.', TABLE_NAME, ' CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;' ) FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = 'DBname'; I only wanted to execute it in one database. It was taking too long to execute all in one pass. It turned out that it was generating one query per field per table, when only one query per table was necessary (distinct to the rescue). Writing the output to a file was how I realized it. To generate the output to a file: mysql -B -N --user=user --password=secret -e "SELECT distinct CONCAT( 'alter table ', TABLE_SCHEMA, '.', TABLE_NAME, ' CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;' ) FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = 'DBname';" > alter.sql And finally, to execute all the queries: mysql --user=user --password=secret < alter.sql Thanks Richard. You're the man!
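
    A quick hedged sanity check after running alter.sql ('DBname' is the same placeholder as above): confirm every table now reports the expected collation, since CONVERT TO CHARACTER SET changes both the table default and the columns.

      mysql -B -N --user=user --password=secret -e \
        "SELECT TABLE_NAME, TABLE_COLLATION FROM information_schema.TABLES \
         WHERE TABLE_SCHEMA = 'DBname';"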

    Read the article

  • centos postfix send email problem

    - by Catalin
    I have a big problem with postfix. I can receive mail in webmin and outlook but I can't send (I can only send locally, user to user). Dovecot is working just fine. Sendmail is disabled. Please help me. postfix -n postfix: invalid option -- n postfix: fatal: usage: postfix [-c config_dir] [-Dv] command [root@xprivatecams usr]# postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix debug_peer_level = 2 home_mailbox = Maildir/ html_directory = no inet_interfaces = all mail_owner = postfix mailbox_command = mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man milter_default_action = accept smtpd_tls_auth_only = no milter_protocol = 2 mydestination = $myhostname, localhost.$mydomain, localhost myhostname = xprivatecams.com mynetworks = 94.177.41.0/24, 127.0.0.0/8 newaliases_path = /usr/bin/newaliases.postfix non_smtpd_milters = inet:localhost:20207 queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES sample_directory = /usr/share/doc/postfix-2.3.3/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtp_tls_note_starttls_offer = yes smtp_use_tls = yes smtpd_milters = inet:localhost:20207 smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_authenticated_header = yes smtpd_sasl_local_domain = smtpd_sasl_security_options = noanonymous smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem smtpd_tls_auth_only = no smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key smtpd_tls_loglevel = 1 smtpd_tls_received_header = yes smtpd_tls_session_cache_timeout = 3600s smtpd_use_tls = yes tls_random_source = dev:/dev/urandom unknown_local_recipient_reject_code = 550 Jan 18 00:46:17 xprivatecams postfix/postfix-script: starting the Postfix mail system Jan 18 00:46:17 xprivatecams postfix/master[15545]: daemon started -- version 2.3.3, configuration /etc/postfix Jan 18 00:48:00 xprivatecams postfix/pickup[15546]: EDE7EA8001B: uid=0 from=<[email protected]> Jan 18 00:48:00 xprivatecams postfix/cleanup[15817]: EDE7EA8001B: message-id=<[email protected]> Jan 18 00:48:00 xprivatecams opendkim[2776]: EDE7EA8001B: DKIM-Signature header added Jan 18 00:48:01 xprivatecams postfix/qmgr[15547]: EDE7EA8001B: from=<[email protected]>, size=615, nrcpt=1 (queue active) Jan 18 00:48:31 xprivatecams postfix/smtp[15820]: connect to mail.flabell.com[72.47.224.75]: Connection timed out (port 25) Jan 18 00:48:31 xprivatecams postfix/smtp[15820]: EDE7EA8001B: to=<[email protected]>, relay=none, delay=30, delays=0.08/0.03/30/0, dsn=4.4.1, status=deferred (connect to mail.flabell.com[72.47.224.75]: Connection timed out) telnet 94.177.41.70 25 Trying 94.177.41.70... Connected to xprivatecams.com (94.177.41.70). Escape character is '^]'. 220 xprivatecams.com ESMTP Postfix ehlo me 250-xprivatecams.com 250-PIPELINING 250-SIZE 10240000 250-VRFY 250-ETRN 250-STARTTLS 250-AUTH LOGIN PLAIN 250-AUTH=LOGIN PLAIN 250-ENHANCEDSTATUSCODES 250-8BITMIME 250 DSN
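
    The log tells the story: outbound delivery dies with "connect to mail.flabell.com...: Connection timed out (port 25)", which on a hosted server usually means the provider filters outbound port 25. A hedged sketch of the usual fix, relaying through a provider smarthost (the host name and credentials below are placeholders):

      postconf -e 'relayhost = [smtp.provider.example]:587'
      postconf -e 'smtp_sasl_auth_enable = yes'
      postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
      postconf -e 'smtp_sasl_security_options = noanonymous'
      echo '[smtp.provider.example]:587 user:password' > /etc/postfix/sasl_passwd
      postmap /etc/postfix/sasl_passwd
      /etc/init.d/postfix restart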

    Read the article

  • SAN with iSCSI-Target Performance Horrendous

    - by Justin
    We have a poor man's SAN set up in a 1U Ubuntu server running iSCSI-Target with two 300GB drives in RAID-0. We are then using it for block-level storage for virtual machines. The hypervisor is connected to the SAN via gigabit on a dedicated VLAN and interfaces. We only have a single virtual machine set up and are doing some benchmarks. If we run hdparm -t /dev/sda1 from the virtual machine, we get 'ok' performance of 75MB/s from the virtual machine to the SAN. Then we basically compile a package with ./configure and make. Things start ok, but then all of a sudden the load average on the SAN grows to 7+ and things slow down to a crawl. When we SSH into the SAN and run top, sure enough the load is 7+, but the CPU usage is basically nothing, and the server has 1.5GB of memory available. When we kill the compile on the virtual machine, the load on the SAN slowly goes back to sub-1 figures. What in the world is causing this? How can we diagnose this further? Here are two screenshots from the SAN during high load. 1> Output of iotop on the SAN: 2> Output of top on the SAN:
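
    A load average of 7+ with an idle CPU typically means processes stuck in uninterruptible I/O wait (state D), which counts toward load without using any CPU. A hedged sketch of how to confirm this on the SAN while the compile runs:

      # Per-device latency and utilization (sysstat package); watch await and %util
      iostat -x 2
      # Processes in uninterruptible sleep inflate the load average
      ps -eo state,pid,cmd | awk '$1 == "D"'
      # Dirty page writeback pressure from the incoming iSCSI writes
      grep -i dirty /proc/meminfo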

    Read the article

  • RIDC Accelerator for Portal

    - by Stefan Krantz
    What is RIDC? Remote IntraDoc Client is a Java-enabled API that leverages simple transport protocols like Socket, HTTP and JAX/WS to execute content service operations in WebCenter Content Server. By design, each operation in the Content Server executes statelessly and returns a complete result for the request. Each request object simply specifies, in Map format (key and value pairs), what service to call and what parameter settings to apply. The response is built on the same Map format (key and value pairs). The possibilities with RIDC are endless, since you can consume any available service (even custom-made ones), and RIDC can be executed from any Java SE application that needs WebCenter Content Services. WebCenter Portal and the example Accelerator RIDC adapter framework WebCenter Portal currently integrates and leverages WebCenter Content Services to enable the use cases available in the portal today, like Content Presenter and Doc Lib. However, the current use cases cover only a few of the scenarios that the Content Server has to offer; in addition, it is not rare that customer requirements call for steps and functionality that are provided by WebCenter Content but are not part of the use cases in WebCenter Portal. The good news is RIDC; the second piece of good news is that WebCenter Portal already leverages RIDC and has a connection management framework in place. The million-dollar question is how to leverage this infrastructure for custom use cases. Oracle A-Team has, during its customer interactions, produced an accelerator adapter framework that reuses and leverages the existing connections provisioned in the WebCenter Portal application (it works for WebCenter Spaces as well), along with a comprehensive design pattern that minimizes the work involved in exposing functionality. Let me introduce the RIDCCommon framework for accelerating WebCenter Content consumption from WebCenter Portal, including Spaces. How do I get started? Through a few easy steps you will be on your way: Extract the zip file RIDCCommon.zip to the WebCenter Portal Application file structure (PortalApp) Open your Portal Application in JDeveloper (PS4/PS5) and select to open the project in your application - this will add the project as a member of the application Update the Portal project dependencies to include the new RIDCCommon project Make sure that your WebCenter Content Server connection is marked as primary (a checkbox at the top of the connection properties form) By this stage you should have a similar structure in your JDeveloper application: Portal project, PortalWebAssets project, RIDCCommon project. Since the API comes with some example operations that have already been exposed as DataControl actions, if you open the Data Controls accordion you should see the following: How do I implement my own operation? 
Create a new Java class in, for example, com.oracle.ateam.portal.ridc.operation and call it GetDocInfoOperation Extend the abstract class com.oracle.ateam.portal.ridc.operation.RIDCAbstractOperation and implement the interface com.oracle.ateam.portal.ridc.operation.IRIDCOperation The only method you are actually required to implement is execute(RIDCManager, IdcClient, IdcContext) The best practice for setting object references on the operation is through the constructor, as in the example below: public GetDocInfoOperation(String dDocName). By leveraging the constructor you can easily force the implementing class to pass the right information; you can also overload the constructor with more or fewer parameters as required Implement the execute method; the work to perform here is creating a new request binder and retrieving a response binder based on the information in the request binder. In this case that information is the dDocName for which we want the DocInfo Secondly, you have to process the response binder by extracting the information you need and storing it in a simple POJO Java bean. In the example below we do this in private void processResult(DataBinder responseData) - the new SearchDataObject is a member of the GetDocInfoOperation, so we can return it from an access method. Since the RIDCCommon API leverages the template pattern for operations, you are now required to add a method that enables access to the result after the operation has executed. In the example below we added the method public SearchDataObject getDataObject() - this method returns the pre-processed SearchDataObject from the execute method  That is it; as you can see in the code below, you do not need more than 32 lines of very simple code 1: public class GetDocInfoOperation extends RIDCAbstractOperation implements IRIDCOperation { 2: private static final String DOC_INFO_BY_NAME = "DOC_INFO_BY_NAME"; 3: private String dDocName = null; 4: private SearchDataObject sdo = null; 5: 6: public GetDocInfoOperation(String dDocName) { 7: super(); 8: this.dDocName = dDocName; 9: } 10:   11: public boolean execute(RIDCManager manager, IdcClient client, 12: IdcContext userContext) throws Exception { 13: DataBinder dataBinder = createNewRequestBinder(DOC_INFO_BY_NAME); 14: dataBinder.putLocal(DocumentAttributeDef.NAME.getName(), dDocName); 15: 16: DataBinder responseData = getResponseBinder(dataBinder); 17: processResult(responseData); 18: return true; 19: } 20: 21: private void processResult(DataBinder responseData) { 22: DataResultSet rs = responseData.getResultSet("DOC_INFO"); 23: for(DataObject dobj : rs.getRows()) { 24: this.sdo = new SearchDataObject(dobj); 25: } 26: super.setMessage(responseData.getLocal(ATTR_MESSAGE)); 27: } 28: 29: public SearchDataObject getDataObject() { 30: return this.sdo; 31: } 32: } How do I execute my operation? 
In the previous section we described how to create an operation, so by now you should be ready to execute it. Step one: add a method to the class com.oracle.ateam.portal.datacontrol.ContentServicesDC or to a class of your own choice. Remember that the RIDCManager is a very light object and can be created where needed Create a method signature that looks like this: public SearchDataObject getDocInfo(String dDocName) throws Exception In the method body, create a new instance of GetDocInfoOperation and meet the constructor requirements by passing the dDocName: GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName) Execute the operation via the RIDCManager instance: rMgr.executeOperation(docInfo) Return the result by accessing it from the executed operation: return docInfo.getDataObject() 1: private RIDCManager rMgr = null; 2: private String lastOperationMessage = null; 3:   4: public ContentServicesDC() { 5: super(); 6: this.rMgr = new RIDCManager(); 7: } 8: .... 9: public SearchDataObject getDocInfo(String dDocName) throws Exception { 10: GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName); 11: boolean boolVal = rMgr.executeOperation(docInfo); 12: lastOperationMessage = docInfo.getMessage(); 13: return docInfo.getDataObject(); 14: }   Get the binaries! The enclosed code is an example that can be used as a reference for consuming and leveraging similar use cases; the user has to ensure appropriate quality and support.  Download link: https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/RIDCCommon.zip RIDC API Reference: http://docs.oracle.com/cd/E23943_01/apirefs.1111/e17274/toc.htm

    Read the article

  • cyrus-imapd does not work with sasldb2, but postfix does

    - by Felix Chang
    CentOS 6, 64-bit: when I use POP3 to access cyrus-imapd: S: +OK li557-53 Cyrus POP3 v2.3.16-Fedora-RPM-2.3.16-6.el6_2.5 server ready <3176565056.1354071404@li557-53> C: USER [email protected] S: +OK Name is a valid mailbox C: PASS abcabc S: -ERR [AUTH] Invalid login C: QUIT It fails with USER "abc" too. my imapd.conf: configdirectory: /var/lib/imap partition-default: /var/spool/imap admins: cyrus sievedir: /var/lib/imap/sieve sendmail: /usr/sbin/sendmail hashimapspool: true sasl_pwcheck_method: auxprop sasl_mech_list: PLAIN LOGIN tls_cert_file: /etc/pki/cyrus-imapd/cyrus-imapd.pem tls_key_file: /etc/pki/cyrus-imapd/cyrus-imapd.pem tls_ca_file: /etc/pki/tls/certs/ca-bundle.crt allowplaintext: true #defaultdomain: myabc.com loginrealms: myabc.com sasldblistuser2: [email protected]: userPassword But postfix works fine with the same user. /etc/sasl2/smtpd.conf pwcheck_method: auxprop mech_list: plain login log_level:7 saslauthd_path:/var/run/saslauthd/mux /etc/postfix/main.cf queue_directory = /var/spool/postfix command_directory = /usr/sbin daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix mail_owner = postfix myhostname = localhost mydomain = myabc.com myorigin = $mydomain inet_interfaces = all inet_protocols = all mydestination = $myhostname, localhost.$mydomain, localhost,$mydomain local_recipient_maps = unknown_local_recipient_reject_code = 550 mynetworks_style = subnet mynetworks = 192.168.0.0/24, 127.0.0.0/8 relay_domains = $mydestination alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases home_mailbox = Maildir/ mailbox_transport = lmtp:unix:/var/lib/imap/socket/lmtp debug_peer_level = 2 debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5 sendmail_path = /usr/sbin/sendmail.postfix newaliases_path = /usr/bin/newaliases.postfix mailq_path = /usr/bin/mailq.postfix setgid_group = postdrop html_directory = no manpage_directory = /usr/share/man sample_directory = /usr/share/doc/postfix-2.6.6/samples readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES smtpd_sasl_auth_enable = yes smtpd_sasl_local_domain = $myhostname smtpd_sasl_security_options = noanonymous smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination smtpd_sasl_security_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination message_size_limit = 15728640 broken_sasl_auth_clients=yes Please help.
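
    A hedged place to start, given that postfix authenticates the same user: the POP3 failure may be a realm mismatch (the sasldb entry is [email protected], so a bare USER abc has no entry, and the full login must resolve to that exact realm). A sketch for re-creating and checking the entry (paths assume the default sasldb location):

      # Re-create the user with an explicit realm matching loginrealms
      saslpasswd2 -c -u myabc.com abc
      # Confirm exactly what cyrus should find
      sasldblistusers2
      # The cyrus user must be able to read the database
      chown cyrus:mail /etc/sasldb2
      chmod 640 /etc/sasldb2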

    Read the article

  • What's up with stat on MacOSX/Darwin? Or filesystems without names...

    - by Charles Stewart
    In response to a question I asked on SO, "Give the mount point of a path", one respondent suggested using stat to get the device name associated with the volume of a given path. This works nicely on Linux, but gives crazy results on MacOSX 10.4. For my system, df and mount give: cas cas$ df Filesystem 512-blocks Used Avail Capacity Mounted on /dev/disk0s3 58342896 49924456 7906440 86% / devfs 194 194 0 100% /dev fdesc 2 2 0 100% /dev <volfs> 1024 1024 0 100% /.vol automount -nsl [166] 0 0 0 100% /Network automount -fstab [170] 0 0 0 100% /automount/Servers automount -static [170] 0 0 0 100% /automount/static /dev/disk2s1 163577856 23225520 140352336 14% /Volumes/Snapshot /dev/disk2s2 409404102 5745938 383187960 1% /Volumes/Sparse cas cas$ mount /dev/disk0s3 on / (local, journaled) devfs on /dev (local) fdesc on /dev (union) <volfs> on /.vol automount -nsl [166] on /Network (automounted) automount -fstab [170] on /automount/Servers (automounted) automount -static [170] on /automount/static (automounted) /dev/disk2s1 on /Volumes/Snapshot (local, nodev, nosuid, journaled) /dev/disk2s2 on /Volumes/Sparse (asynchronous, local, nodev, nosuid) Trying to get the devices from the mount points, though: cas cas$ df | grep -e/ | awk '{print $NF}' | while read line; do echo $line $(stat -f"%Sdr" $line); done / disk0s3r /dev ???r /dev ???r /.vol ???r /Network ???r /automount/Servers ???r /automount/static ???r /Volumes/Snapshot disk2s1r /Volumes/Sparse disk2s2r Here, I'm feeding each of the mount points scraped from df to stat, outputting the results of the "%Sdr" format string, which is supposed to be the device name: Cf. stat(1) man page: The special output specifier S may be used to indicate that the output, if applicable, should be in string format. May be used in combination with: ... dr Display actual device name. What's going on? Is it a bug in stat, or some Darwin VFS weirdness? Postscript Per Andrew McGregor, try passing "%Sd" to stat for more weirdness. It lists some apparently arbitrary subset of files from CWD...
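
    For the practical goal (mapping a mount point back to its device), a hedged workaround is to let df do the resolution instead of stat, since it prints the filesystem source directly; pseudo-filesystems will still report their fs name rather than a /dev node:

      df | awk 'NR>1 {print $NF}' | while read -r mp; do
        dev=$(df -P "$mp" | awk 'NR==2 {print $1}')
        echo "$mp -> $dev"
      done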

    Read the article

  • Subversion 1.6 + SASL : Only works with plaintext 'userPassword'?

    - by SiegeX
    I'm attempting to set up svnserve with SASL support on my Slackware 13.1 server and after some trial and error I'm able to get it to work with the configuration listed below: svnserve.conf [general] anon-access = read auth-access = write realm = myrepo [sasl] use-sasl = true min-encryption = 128 max-encryption = 256 /etc/sasl2/svn.conf pwcheck_method: auxprop auxprop_plugin: sasldb sasldb_path: /etc/sasl2/my_sasldb mech_list: DIGEST-MD5 sasldb users $ sasldblistusers2 -f /etc/sasl2/my_sasldb test@myrepo: cmusaslsecretOTP test@myrepo: userPassword You'll notice that the output of sasldblistusers2 shows my test user as having both an encrypted cmusaslsecretOTP secret and a plaintext userPassword entry; i.e., if I were to run strings /etc/sasl2/my_sasldb I would see the test user's password in plaintext. These two password entries were created with the following command, recommended by the Subversion book: saslpasswd2 -c -f /etc/sasl2/my_sasldb -u myrepo test After reading man saslpasswd2 I see the following option: -n Don't set the plaintext userPassword property for the user. Only mechanism-specific secrets will be set (e.g. OTP, SRP) This is exactly what I want: suppress the plaintext password and use only the mechanism-specific secret (OTP in my case). So I clear out /etc/sasl2/my_sasldb and rerun saslpasswd2 as: saslpasswd2 -n -c -f /etc/sasl2/my_sasldb -u myrepo test I then follow it up with a sasldblistusers2 and I see: $ sasldblistusers2 -f /etc/sasl2/my_sasldb test@myrepo: cmusaslsecretOTP Perfect, I think - now I have only encrypted passwords... except that neither the Linux svn client nor the Windows TortoiseSVN client can connect to my repo anymore. They both present me with the user/pass challenge but that's as far as I get. TLDR So, what is the point of SVN supporting SASL if my sasldb must store its passwords in plaintext to work?

    Read the article

  • PBS batch jobs - the qalter command

    - by Ryan Budney
    I've got a giant computation running on a Scientific Linux cluster. At present I have over 600 jobs parked in the queue, waiting for processor time, while a few are running. I'm trying to use the qalter command on some of the idle but scheduled jobs. I'd like to schedule them for a later time, so that other users can jump part of the queue, sort of as an act of politeness. Is this doable? For example, JOBNAME 292399 is currently idle, scheduled to be run whenever a spot in the queue opens up. But if I run qalter -a 10051000 292398 followed by qrerun 292398 I get qrerun: Request invalid for state of job 292398.euler. From the qalter documentation, I thought 10051000 refers to tomorrow (Oct 5th, 10am) but perhaps I'm misunderstanding something? If I'm going about this the wrong way, please let me know. The main thing I'm looking for is a command that's easily scriptable, so that I can modify when my queued tasks get run. qalter seems good for those purposes if I can get it working. I'd rather avoid running qdel and re-qsubbing the computations, as there's a bookkeeping issue on which tasks to restart (vs. which ones not to). I want to avoid that kind of bookkeeping. From googling around I notice some qalter commands have rather different date formats, but the above appears to be correct, as far as I can tell from the man docs. Any help would be appreciated.
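
    For what it's worth, 10051000 does parse as Oct 5th 10:00 under the documented [[[[CC]YY]MM]DD]hhmm[.SS] format; the failing step is more likely qrerun, which only applies to running (or, on some systems, completed) jobs - an idle job just needs the new start time, which the scheduler honors on its own. A hedged, scriptable sketch (the job id is the one from the example; qhold/qrls availability depends on the PBS flavor):

      # Defer the idle job until Oct 5th, 10:00; no qrerun needed
      qalter -a 10051000 292399
      # Or park it indefinitely and release it later
      qhold 292399
      qrls 292399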

    Read the article

  • Apache Won't Restart After Compiling PHP with Postgres

    - by gonzofish
    I've compiled PHP (v5.3.1) with Postgres using the following configure: ./configure \ --build=x86_64-redhat-linux-gnu \ --host=x86_64-redhat-linux-gnu \ --target=x86_64-redhat-linux-gnu \ --program-prefix= \ --prefix=/usr/ \ --exec-prefix=/usr/ \ --bindir=/usr/bin/ \ --sbindir=/usr/sbin/ \ --sysconfdir=/etc \ --datadir=/usr/share \ --includedir=/usr/include/ \ --libdir=/usr/lib64 \ --libexecdir=/usr/libexec \ --localstatedir=/var \ --sharedstatedir=/usr/com \ --mandir=/usr/share/man \ --infodir=/usr/share/info \ --cache-file=../config.cache \ --with-libdir=lib64 \ --with-config-file-path=/etc \ --with-config-file-scan-dir=/etc/php.d \ --with-pic \ --disable-rpath \ --with-pear \ --with-pic \ --with-bz2 \ --with-exec-dir=/usr/bin \ --with-freetype-dir=/usr \ --with-png-dir=/usr \ --with-xpm-dir=/usr \ --enable-gd-native-ttf \ --with-t1lib=/usr \ --without-gdbm \ --with-gettext \ --without-gmp \ --with-iconv \ --with-jpeg-dir=/usr \ --with-openssl \ --with-zlib \ --with-layout=GNU \ --enable-exif \ --enable-ftp \ --enable-magic-quotes \ --enable-sockets \ --enable-sysvsem \ --enable-sysvshm \ --enable-sysvmsg \ --with-kerberos \ --enable-ucd-snmp-hack \ --enable-shmop \ --enable-calendar \ --with-libxml-dir=/usr \ --enable-xml \ --with-system-tzdata \ --with-mime-magic=/usr/share/file/magic \ --with-apxs2=/usr/sbin/apxs \ --with-mysql=/usr/include/mysql \ --without-gd \ --with-dom=/usr/include/libxml2/libxml \ --disable-dba \ --without-unixODBC \ --disable-pdo \ --enable-xmlreader \ --enable-xmlwriter \ --without-sqlite \ --without-sqlite3 \ --disable-phar \ --enable-fileinfo \ --enable-json \ --without-pspell \ --disable-wddx \ --with-curl=/usr/include/curl \ --enable-posix \ --with-mcrypt \ --enable-mbstring \ --with-pgsql=/mnt/mv/pgsql I'm using Postgres 8.4.0 and Apache 2.2.8; I have the following line in my Apache conf file: LoadModule php5_module /usr/lib64/httpd/modules/libphp5.so And when I attempt to restart Apache, I get the following error message: Starting httpd: httpd: Syntax error on line 205 of /etc/httpd/conf/httpd.conf: Cannot load /usr/lib64/httpd/modules/libphp5.so into server: /usr/lib64/httpd/modules/libphp5.so: undefined symbol: lo_import_with_oid Now, I know that this is a problem between Postgres and PHP because lo_import_with_oid is a function in the Postgres source which allows the importing of large objects; also, if I remove the --with-pgsql option, PHP and Apache get along great. I've scoured the Internet looking for answers all day, but to no avail. Does anyone have any insight into what is causing my problem?
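
    lo_import_with_oid first appeared in the libpq shipped with PostgreSQL 8.4, so this error usually means the PHP module is resolving an older system libpq at runtime rather than the 8.4 copy it was built against. A hedged sketch of how to confirm and fix (the /mnt/mv/pgsql prefix is taken from the configure line above):

      # Which libpq does the module actually load?
      ldd /usr/lib64/httpd/modules/libphp5.so | grep libpq
      # Does the 8.4 library export the symbol?
      nm -D /mnt/mv/pgsql/lib/libpq.so | grep lo_import_with_oid
      # Prefer the 8.4 libs at runtime if an older one wins
      echo '/mnt/mv/pgsql/lib' > /etc/ld.so.conf.d/pgsql84.conf
      ldconfig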

    Read the article

  • Can reprepro accept a new version of a package into the repository?

    - by kai
    I have installed a package into my own debian package repository like so: $ sudo reprepro -b /var/packages/ubuntu includedeb maverick my-package_0.8-0_all.deb my-package_0.8-0_all.deb: component guessed as 'main' Exporting indices... I have installed my package on a few machines using apt-get install. I have now added new features to my software and would like to add a new minor version of my package to the repository so that I may update my machines using apt-get upgrade. I try to do this like so: $ sudo reprepro -b /var/packages/ubuntu includedeb maverick my-package_0.9-0_all.deb my-package_0.9-0_all.deb: component guessed as 'main' Skipping inclusion of 'my-package' '1.0-0' in 'maverick|main|i386', as it has already '1.0-0'. Skipping inclusion of 'my-package' '1.0-0' in 'maverick|main|amd64', as it has already '1.0-0'. It looks like I need to tell reprepro that this is a new version of the same package but I have no idea how to do this. I have read the reprepro man page several times and searched on the net for a couple of hours but I have not found any answers. Am I missing something? Many thanks.
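
    reprepro only accepts versions strictly newer than what the distribution already holds (note the message reports '1.0-0' as already present), so a lower or equal version string is skipped. A hedged sketch of the usual ways out - bump the Debian revision above the stored version, or drop the stale entry first:

      # See what the repo currently holds for this package
      reprepro -b /var/packages/ubuntu list maverick my-package
      # Either rebuild with a higher version string, or remove and re-include:
      reprepro -b /var/packages/ubuntu remove maverick my-package
      reprepro -b /var/packages/ubuntu includedeb maverick my-package_0.9-0_all.deb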

    Read the article

  • Can't install NPM after installing Node on EC2 Linux instance?

    - by frequent
    I'm making my first attempt at getting a Node server set up on an Amazon EC2 Linux instance. I think I made it quite far. The first problem I ran into was that, when trying to make Node, the connection timed out after a while, so I needed three attempts until I got this: LINK(target) /home/ec2-user/node/out/Release/node: Finished touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_header.stamp touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_provider.stamp touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_ustack.stamp touch /home/ec2-user/node/out/Release/obj.target/node_etw.stamp make[1]: Leaving directory `/home/ec2-user/node/out' ln -fs out/Release/node node Which tells me "Node is done", although I'm not sure it is also working as it should. Following this, this and this tutorial, I'm now stuck at installing npm. I think I first cloned into the wrong folder, which always gave me error 127, but even when I do this: cd ~ git clone git://github.com/isaacs/npm.git cd npm sudo -s PATH=/usr/local/bin:$PATH make install I'm still getting this: #after cloning# make[1]: Entering directory `/root/npm' node cli.js install bash: node: command not found make[1]: *** [node_modules/.bin/ronn] Error 127 make[1]: Leaving directory `/root/npm' make: *** [man/man3/start.3] Error 2 Question: Since I'm pretty much a newbie at everything I'm trying here, can someone please tell me what I'm doing wrong and how to get npm to install? Also, in case I cloned into the wrong folder, is there a way to remove the "false clone" or is this not written to disk until I call make install and I don't need to worry? Thanks for helping out!
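
    A hedged reading of the failure: "node: command not found" under sudo -s means root's PATH never picked up the freshly built binary, and npm's Makefile shells out to node. A sketch of the fix (paths follow the build output above):

      # Install node system-wide so root can find it...
      cd ~/node && sudo make install
      # ...or just symlink the built binary
      sudo ln -s /home/ec2-user/node/out/Release/node /usr/local/bin/node
      # The half-built clone under /root is only files on disk; safe to delete
      sudo rm -rf /root/npm
      # Then retry the npm install from your own home directory
      cd ~ && git clone git://github.com/isaacs/npm.git && cd npm
      sudo PATH=/usr/local/bin:$PATH make install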

    Read the article

  • Unable to receive any emails using postfix, dovecot, mysql, and virtual domain/mailboxes

    - by stkdev248
    I have been working on configuring my mail server for the last couple of weeks using postfix, dovecot, and mysql. I have one virtual domain and a few virtual mailboxes. Using squirrelmail I have been able to log into my accounts and send emails out (e.g. I can send to googlemail just fine), however I am not able to receive any emails--not from the outside world nor from within my own network. I am able to telnet in using localhost, my private ip, and my public ip on port 25 without any problems (I've tried it from the server itself and from another computer on my network). This is what I get in my logs when I send an email from my googlemail account to my mail server: mail.log Apr 14 07:36:06 server1 postfix/qmgr[1721]: BE01B520538: from=, size=733, nrcpt=1 (queue active) Apr 14 07:36:06 server1 postfix/pipe[3371]: 78BC0520510: to=, relay=dovecot, delay=45421, delays=45421/0/0/0.13, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied) Apr 14 07:36:06 server1 postfix/pipe[3391]: 8261B520534: to=, relay=dovecot, delay=38036, delays=38036/0.06/0/0.12, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3378]: 63927520532: to=, relay=dovecot, delay=38105, delays=38105/0.02/0/0.17, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3375]: 07F65520522: to=, relay=dovecot, delay=39467, delays=39467/0.01/0/0.17, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3381]: EEDE9520527: to=, relay=dovecot, delay=38361, delays=38360/0.04/0/0.15, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3379]: 67DFF520517: to=, relay=dovecot, delay=40475, delays=40475/0.03/0/0.16, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3387]: 3C7A052052E: to=, relay=dovecot, delay=38259, delays=38259/0.05/0/0.13, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3394]: BE01B520538: to=, relay=dovecot, delay=37682, delays=37682/0.07/0/0.11, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:07 server1 postfix/pipe[3384]: 3C7A052052E: to=, relay=dovecot, delay=38261, delays=38259/0.04/0/1.3, dsn=4.3.0, status=deferred (temporary failure. 
Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:39:23 server1 postfix/anvil[3368]: statistics: max connection rate 1/60s for (smtp:209.85.213.169) at Apr 14 07:35:32 Apr 14 07:39:23 server1 postfix/anvil[3368]: statistics: max connection count 1 for (smtp:209.85.213.169) at Apr 14 07:35:32 Apr 14 07:39:23 server1 postfix/anvil[3368]: statistics: max cache size 1 at Apr 14 07:35:32 Apr 14 07:41:06 server1 postfix/qmgr[1721]: ED6005203B7: from=, size=1463, nrcpt=1 (queue active) Apr 14 07:41:06 server1 postfix/pipe[4594]: ED6005203B7: to=, relay=dovecot, delay=334, delays=334/0.01/0/0.13, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:51:06 server1 postfix/qmgr[1721]: ED6005203B7: from=, size=1463, nrcpt=1 (queue active) Apr 14 07:51:06 server1 postfix/pipe[4604]: ED6005203B7: to=, relay=dovecot, delay=933, delays=933/0.02/0/0.12, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) mail-dovecot-log (the log I set for debugging): Apr 14 07:28:26 auth: Info: mysql(127.0.0.1): Connected to database postfixadmin Apr 14 07:28:26 auth: Debug: sql([email protected],127.0.0.1): query: SELECT password FROM mailbox WHERE username = '[email protected]' Apr 14 07:28:26 auth: Debug: client out: OK 1 [email protected] Apr 14 07:28:26 auth: Debug: master in: REQUEST 1809973249 3356 1 7cfb822db820fc5da67d0776b107cb3f Apr 14 07:28:26 auth: Debug: sql([email protected],127.0.0.1): SELECT '/home/vmail/mydomain.com/some.user1' as home, 5000 AS uid, 5000 AS gid FROM mailbox WHERE username = '[email protected]' Apr 14 07:28:26 auth: Debug: master out: USER 1809973249 [email protected] home=/home/vmail/mydomain.com/some.user1 uid=5000 gid=5000 Apr 14 07:28:26 imap-login: Info: Login: user=, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=3360, secured Apr 14 07:28:26 imap([email protected]): Debug: Effective uid=5000, gid=5000, home=/home/vmail/mydomain.com/some.user1 Apr 14 07:28:26 imap([email protected]): Debug: maildir++: root=/home/vmail/mydomain.com/some.user1/Maildir, index=/home/vmail/mydomain.com/some.user1/Maildir/indexes, control=, inbox=/home/vmail/mydomain.com/some.user1/Maildir Apr 14 07:48:31 imap([email protected]): Info: Disconnected: Logged out bytes=85/681 From the output above I'm pretty sure that my problems all stem from (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ), but I have no idea why I'm getting that error. I have the permissions on that log set just like the other mail logs: root@server1:~# ls -l /var/log/mail* -rw-r----- 1 syslog adm 196653 2012-04-14 07:58 /var/log/mail-dovecot.log -rw-r----- 1 syslog adm 62778 2012-04-13 21:04 /var/log/mail.err -rw-r----- 1 syslog adm 497767 2012-04-14 08:01 /var/log/mail.log Does anyone have any idea what I may be doing wrong? Here are my main.cf and master.cf files: main.cf: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. 
append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = server1.mydomain.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all # Virtual Configs virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 virtual_mailbox_base = /home/vmail virtual_mailbox_domains = mysql:/etc/postfix/mysql_virtual_mailbox_domains.cf virtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_mailbox_maps.cf virtual_alias_maps = mysql:/etc/postfix/mysql_virtual_alias_maps.cf relay_domains = mysql:/etc/postfix/mysql_relay_domains.cf smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unauth_destination, reject_unauth_pipelining, reject_invalid_hostname smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous virtual_transport=dovecot dovecot_destination_recipient_limit = 1 master.cf: # # Postfix master process configuration file. For details on the format # of the file, see the master(5) manual page (command: "man 5 master"). # # Do not forget to execute "postfix reload" after editing this file. # # ========================================================================== # service type private unpriv chroot wakeup maxproc command + args # (yes) (yes) (yes) (never) (100) # ========================================================================== smtp inet n - - - - smtpd #smtp inet n - - - 1 postscreen #smtpd pass - - - - - smtpd #dnsblog unix - - - - 0 dnsblog #tlsproxy unix - - - - 0 tlsproxy #submission inet n - - - - smtpd # -o smtpd_tls_security_level=encrypt # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #smtps inet n - - - - smtpd # -o smtpd_tls_wrappermode=yes # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #628 inet n - - - - qmqpd pickup fifo n - - 60 1 pickup cleanup unix n - - - 0 cleanup qmgr fifo n - n 300 1 qmgr #qmgr fifo n - - 300 1 oqmgr tlsmgr unix - - - 1000? 1 tlsmgr rewrite unix - - - - - trivial-rewrite bounce unix - - - - 0 bounce defer unix - - - - 0 bounce trace unix - - - - 0 bounce verify unix - - - - 1 verify flush unix n - - 1000? 
0 flush proxymap unix - - n - - proxymap proxywrite unix - - n - 1 proxymap smtp unix - - - - - smtp # When relaying mail as backup MX, disable fallback_relay to avoid MX loops relay unix - - - - - smtp -o smtp_fallback_relay= # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - - - - showq error unix - - - - - error retry unix - - - - - error discard unix - - - - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - - - - lmtp anvil unix - - - - 1 anvil scache unix - - - - 1 scache # # ==================================================================== # Interfaces to non-Postfix software. Be sure to examine the manual # pages of the non-Postfix software to find out what options it wants. # # Many of the following services use the Postfix pipe(8) delivery # agent. See the pipe(8) man page for information about ${recipient} # and other message envelope options. # ==================================================================== # # maildrop. See the Postfix MAILDROP_README file for details. # Also specify in main.cf: maildrop_destination_recipient_limit=1 # maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient} # # ==================================================================== # # Recent Cyrus versions can use the existing "lmtp" master.cf entry. # # Specify in cyrus.conf: # lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4 # # Specify in main.cf one or more of the following: # mailbox_transport = lmtp:inet:localhost # virtual_transport = lmtp:inet:localhost # # ==================================================================== # # Cyrus 2.1.5 (Amos Gouaux) # Also specify in main.cf: cyrus_destination_recipient_limit=1 # #cyrus unix - n n - - pipe # user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user} # # ==================================================================== # Old example of delivery via Cyrus. # #old-cyrus unix - n n - - pipe # flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user} # # ==================================================================== # # See the Postfix UUCP_README file for configuration details. # uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) # # Other external delivery methods. # ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension} mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user} dovecot unix - n n - - pipe flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}
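
    The permission error is self-consistent: master.cf runs the dovecot transport with flags user=vmail:vmail, while the debug log (like the other mail logs) is owned by syslog:adm, so deliver cannot open it. A hedged sketch of two ways out - let vmail own that one file, or give the LDA its own log via dovecot's lda log_path:

      # Option 1: hand the debug log to the uid that deliver runs as
      chown vmail:vmail /var/log/mail-dovecot.log
      # Option 2 (dovecot.conf sketch): a dedicated, writable LDA log
      #   protocol lda {
      #     log_path = /home/vmail/dovecot-deliver.log
      #   }
      touch /home/vmail/dovecot-deliver.log
      chown vmail:vmail /home/vmail/dovecot-deliver.log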

    Read the article

  • Secure iptables config for Samba

    - by Eric
    I'm trying to set up an iptables config such that outbound connections from my CentOS 6.2 server are allowed ONLY if they are of state ESTABLISHED. Currently, the following setup is working great for sshd, but all the Samba rules get totally ignored for a reason I cannot figure out. iptables Bash script to set up ALL rules: # Remove all existing rules iptables -F # Set default chain policies iptables -P INPUT DROP iptables -P FORWARD DROP iptables -P OUTPUT DROP # Allow incoming SSH iptables -A INPUT -i eth0 -p tcp --dport 22222 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --sport 22222 -m state --state ESTABLISHED -j ACCEPT # Allow incoming Samba iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p udp --dport 137:138 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p udp --sport 137:138 -m state --state ESTABLISHED -j ACCEPT iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p tcp --dport 139 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p tcp --sport 139 -m state --state ESTABLISHED -j ACCEPT # Enable these rules service iptables restart iptables rule list after running the above script: [root@repoman ~]# iptables -L Chain INPUT (policy DROP) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp dpt:22222 state NEW,ESTABLISHED Chain FORWARD (policy DROP) target prot opt source destination Chain OUTPUT (policy DROP) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp spt:22222 state ESTABLISHED Ultimately, I'm trying to restrict Samba the same way I have done for sshd. In addition, I'm trying to restrict connections to the following IP address range: 10.1.1.12 - 10.1.1.19 Can you guys offer some pointers or possibly even a full-blown solution? I've read man iptables quite extensively, so I'm not sure why the Samba rules are getting thrown out. Additionally, removing the -s 10.1.1.0/24 flags doesn't change the fact that the rules get ignored.
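
    One hedged but likely culprit: on CentOS, "service iptables restart" flushes the live rules and reloads whatever was last saved to /etc/sysconfig/iptables - which would discard every rule the script just added, keeping only the previously saved set (consistent with only the SSH rules surviving). A sketch of the fix, plus an iprange match for the .12-.19 restriction:

      # Persist the freshly built ruleset before (or instead of) restarting
      service iptables save
      service iptables restart
      # Restrict Samba to 10.1.1.12-10.1.1.19 instead of the whole /24
      iptables -A INPUT -i eth0 -p tcp --dport 139 \
        -m iprange --src-range 10.1.1.12-10.1.1.19 \
        -m state --state NEW,ESTABLISHED -j ACCEPT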

    Read the article

  • pure-ftpd not listening on specified port

    - by Jason McLaren
    I installed the pure-ftpd package (version 1.0.35-1) on an Ubuntu 12.04 box (an EC2 instance based on the standard Ubuntu 12.04 AMI). The pure-ftpd daemon is running (verified with ps), though there is no PID file (I expected one to be created by the /etc/init.d/pure-ftpd script). Here's the resulting command that gets run by the init.d script: /usr/sbin/pure-ftpd -l pam -O clf:/var/log/pure-ftpd/transfer.log -o -8 UTF-8 -u 1000 -E -B -g /var/run/pure-ftpd/pure-ftpd.pid Here's my real problem: the FTP server isn't actually listening on any port (checked with netstat and nmap). So I can't FTP to the server (either locally using localhost or remotely using the public IP address). I tried adding a Bind file to /etc/pure-ftpd/conf and restarting, but it didn't help. When I installed pure-ftpd, it replaced inetd with openbsd-inetd, but did not run it since there were no services enabled. So inetd is not listening on port 21 either. (Apparently Ubuntu has a no-inetd-by-default policy, according to https://lists.ubuntu.com/archives/ubuntu-users/2010-September/227905.html .) I want to run pure-ftpd by itself (not with inetd) anyway, since the /etc/init.d/pure-ftpd script requires no inetd if you use the UploadScript feature. I'm not familiar with how Ubuntu handles network services (and can't find any relevant docs besides generic man pages), so I'm probably missing something obvious. Nothing seems out of the ordinary with /etc/hosts.allow (empty) or hosts.deny (empty), and I didn't add any firewall rules (iptables -L shows that the firewall is in its initial state). I've checked the pure-ftpd docs; not sure what else to look at. Any help would be appreciated, thanks!
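
    On Debian/Ubuntu, pure-ftpd is driven by a wrapper: /etc/default/pure-ftpd-common decides standalone vs. inetd, and each file under /etc/pure-ftpd/conf holds one option's value (Bind expects "address,port", matching the daemon's -S flag). A hedged sketch of the checks:

      # Should read STANDALONE_OR_INETD=standalone
      grep STANDALONE /etc/default/pure-ftpd-common
      # Bind to all interfaces on port 21
      echo '0.0.0.0,21' | sudo tee /etc/pure-ftpd/conf/Bind
      sudo service pure-ftpd restart
      # Verify the listener (and which process owns it)
      sudo netstat -tlnp | grep :21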

    Read the article

  • How to add a Linux Partition on FreeBSD

    - by Ömer
    Today I installed FreeBSD 9.0 PPC on my Mac mini G4 with a 40GB HDD. During installation (using the FreeBSD utility 'gpart') I allocated a total of about 23GB for FreeBSD, leaving 17GB totally free (neither partitioned nor formatted) for a later Linux installation. Now, when I try to install Linux (Ubuntu 10.10 PPC) on the remaining 17GB, the Linux/Ubuntu installer (or Linux's Disk Utility, for that matter) presumably wants a Linux partition, and when I try to add a (Linux) partition on that area using Linux DU it fails with this message: Error creating partition: helper exited with exit code 1: In part_add_partition: device_file=/dev/hda, start=23363101696, size=16644660224, type= Entering MS-DOS parser (offset=0, size=40007761920) No MSDOS_MAGIC found Exiting MS-DOS parser Entering Apple parser Mac MAGIC found, block_size=512 map_count = 17 Leaving Apple parser Apple partition table detected containing partition table scheme = 2 got it Error: The partition's data region doesn't occupy the entire partition. ped_disk_new() failed Now, I'm trying to add a Linux partition on FreeBSD running on the hard disk. I use the seemingly most suitable tool for this job: gpart. Here is the 'gpart show ad0'. But it seems unable to add a Linux partition, because "man gpart" lists neither "Linux partition" nor anything like Ext2 or Ext3/Ext4. The closest thing to a Linux partition in gpart is "mbr", but it doesn't work: #gpart add -t mbr ado So, how does one properly add a Linux partition on FreeBSD? Thanks.
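
    A hedged sketch (type names depend on the partitioning scheme in use; these are the aliases the gpart man pages document, and they may not all be available on 9.0/PPC):

      # GPT and MBR schemes know Linux aliases directly
      gpart add -t linux-data -s 17G ad0
      # On an MBR scheme the raw type id also works (131 = 0x83, Linux)
      gpart add -t '!131' -s 17G ad0
      # The Apple Partition Map on PPC Macs has no Linux alias; Linux there
      # expects the raw APM type string Apple_UNIX_SVR2, which may work as:
      gpart add -t '!Apple_UNIX_SVR2' -s 17G ad0

    If gpart refuses all of these on the APM disk, creating the partition from the Ubuntu side with mac-fdisk on the free space may be the simpler route.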

    Read the article

  • Mounting NAS drive with cifs using credentials file through fstab does not work

    - by mahatmanich
    I can mount the drive in the following way, no problem there: mount -t cifs //nas/home /mnt/nas -o username=username,password=pass\!word,uid=1000,gid=100,rw,suid However if I try to mount it via fstab, it fails: //nas/home /mnt/nas cifs iocharset=utf8,credentials=/home/username/.smbcredentials,uid=1000,gid=100 0 0 auto The .smbcredentials file looks like this: username=username password=pass\!word Note the ! in my password, which I am escaping in both instances. I also made sure there are no EOL characters in the file, using :set noeol binary (from "Mount CIFS Credentials File has Special Character"). chmod on the .credentials file is 0600, chown is root:root, and the file is under ~/. Why does it work on the one side and not with fstab? I am running Ubuntu 12 LTS, and mount.cifs -V gives me mount.cifs version: 5.1. Any help and suggestions would be appreciated ... UPDATE: /var/log/syslog shows the following: [26630.509396] Status code returned 0xc000006d NT_STATUS_LOGON_FAILURE [26630.509407] CIFS VFS: Send error in SessSetup = -13 [26630.509528] CIFS VFS: cifs_mount failed w/return code = -13 UPDATE no 2 Debugging with strace, mount through fstab: strace -f -e trace=mount mount -a Process 4984 attached Process 4983 suspended Process 4985 attached Process 4984 suspended Process 4984 resumed Process 4985 detached [pid 4984] --- SIGCHLD (Child exited) @ 0 (0) --- [pid 4984] mount("//nas/home", ".", "cifs", 0, "ip=<internal ip>,unc=\\\\nas\\home"...) = -1 EACCES (Permission denied) mount error(13): Permission denied Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) Process 4983 resumed Process 4984 detached Mount through terminal: strace -f -e trace=mount mount -t cifs //nas/home /mnt/nas -o username=user,password=pass\!wd,uid=1000,gid=100,rw,suid Process 4990 attached Process 4989 suspended Process 4991 attached Process 4990 suspended Process 4990 resumed Process 4991 detached [pid 4990] --- SIGCHLD (Child exited) @ 0 (0) --- [pid 4990] mount("//nas/home", ".", "cifs", 0, "ip=<internal ip>,unc=\\\\nas\\home"...) = 0 Process 4989 resumed Process 4990 detached
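
    The likely catch (hedged, but it fits the NT_STATUS_LOGON_FAILURE in syslog): the backslash is shell escaping and belongs only on the interactive command line; mount.cifs reads the credentials file literally, so password=pass\!word in the file sends a backslash as part of the password. The file should contain the raw value:

      # /home/username/.smbcredentials -- values are taken literally, no escaping
      username=username
      password=pass!word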

    Read the article

  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives: an HP LTO2 with 200/400 GB cartridges. The st driver reports it like this:

        scsi 1:0:0:0: Sequential-Access HP Ultrium 2-SCSI T65D

    I can store and retrieve files like a charm using tar; both "tar cf /dev/st0 somedirectory" and "tar xf /dev/st0" work flawlessly. However, what I really want to back up are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried something like

        dd if=/dev/VG/LV bs=64k of=/dev/st0

    to achieve this, but there seem to be various problems with this approach. Firstly, I would like to be able to store more than one LV on a single tape. I guess I could seek on the tape to concatenate the data, but I don't think that would work well in an automated scenario with many different LVs of various sizes. Secondly, I would like to store a small XML file along with the raw data, containing some information about the VM in the LV. I could dump everything to a directory and tar it up, but that's not very desirable: I would have to set aside huge amounts of scratch space. Is there an easier way to achieve this? Thirdly, from googling around, it seems wise to use something like mbuffer when writing to the tape, to prevent what Wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device:

        mbuffer -t -m 10M -p 80 -f -o $TAPE

    So I've tried this:

        dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0

    Note the added "-d 64k" to account for the 64k block size of the tape. However, reading data back from a tape written this way never seems to yield any useful results; dd has been running for ages now and has managed to transfer only 361M of data from the tape. What's wrong here?
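
    One likely culprit, sketched below and untested: a block-size mismatch between write and read. dd without an explicit bs= reads 512-byte records, while the tape was written in 64k blocks, which can make the read side crawl or fail. The sketch assumes mbuffer's -s flag (block size) and the non-rewinding device /dev/nst0, which also addresses the first concern: each LV becomes its own tape file, one after another.

        # Sketch, untested: write one LV per tape file, 64k blocks throughout.
        dd if=/dev/VG/LV bs=64k | mbuffer -m 10M -p 80 -s 64k -o /dev/nst0
        # /dev/nst0 does not rewind on close, so the next LV (or a small
        # metadata file) can be written as the following tape file.
        # Reading back must use the SAME 64k block size:
        mt -f /dev/nst0 rewind
        mbuffer -i /dev/nst0 -s 64k -m 10M | dd of=/tmp/restore.img bs=64k

    The per-VM XML file could be handled the same way, written as its own small tape file before each image, which avoids the scratch-space problem entirely.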

    Read the article

  • Can IIS (Ideally Azure) do SSL Proxying?

    - by Acoustic
    My team has been asked to add a new feature to a project we're working on, and none of us can find authoritative details on whether it's possible with Windows/IIS. The short of it is that we're hoping to have customers update their DNS with a CNAME record pointing their website to our server instead of theirs (the whys are trivial: it's what the app does on behalf of your site). We're using a reverse proxy with several custom modules to serve particular content from the original servers. So far everything works perfectly, until we encounter SSL. Is there a way to have IIS serve up an SSL certificate from another server? In other words, is there a way to be a trusted man in the middle? I'm hoping that's possible, so that we don't have to require all our clients to re-issue their SSL certs. Frankly, we don't want to have to manage hundreds of certs. I'd also like to avoid a UCC situation if there's a way to, because it seems to require re-creating the cert each time a client is added. So, any pointers on proxying/hosting SSL (or even dynamic SSL hosting like http://www.globalsign.com/cloud/) would be appreciated.

    Read the article

  • "Add correct host key in known_hosts" / multiple ssh host keys per hostname?

    - by Samuel Edwin Ward
    Trying to ssh into a computer I control, I'm getting the familiar message:

        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
        Someone could be eavesdropping on you right now (man-in-the-middle attack)!
        It is also possible that a host key has just been changed.
        The fingerprint for the RSA key sent by the remote host is [...].
        Please contact your system administrator.
        Add correct host key in /home/sward/.ssh/known_hosts to get rid of this message.
        Offending RSA key in /home/sward/.ssh/known_hosts:86
        RSA host key for [...] has changed and you have requested strict checking.
        Host key verification failed.

    I did indeed change the key. And I've read a few dozen postings saying that the way to resolve this problem is to delete the old key from the known_hosts file. But what I would like is for ssh to accept both the old key and the new key. The language in the error message ("Add correct host key") suggests there should be some way to add the correct host key without removing the old one. I have not been able to figure out how to do that. Is this possible, or is the error message just extremely misleading?
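
    For what it's worth, known_hosts can hold multiple keys for the same host name, and ssh will accept a connection that matches any of them, so appending the new key rather than replacing line 86 should do exactly this. A sketch, with the host name as a placeholder and assuming the known_hosts entries are not hashed:

        # Append the new host key without touching the old entry:
        ssh-keyscan -t rsa myhost.example.com >> ~/.ssh/known_hosts
        # Verify the fingerprint out-of-band before trusting what keyscan returned:
        ssh-keygen -lf ~/.ssh/known_hosts | grep myhost.example.com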

    Read the article

  • How do I install PHP 5.3 on CentOS?

    - by fivelitresofsoda
    I have to install PHP 5.3 on my CentOS server. If I do "yum install php", the base repository installs 5.1.6, which is too old for the applications I need to install. So I've been trying to use the IUS repository, following the official instructions from IUS:

        [root@linuxbox ~]# wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-2.ius.el5.noarch.rpm
        [root@linuxbox ~]# wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        [root@linuxbox ~]# rpm -Uvh ius-release*.rpm epel-release*.rpm

    OK. Now I simply do "yum install php53", etc. for everything I need... but I get this error:

        Running rpm_check_debug
        Running Transaction Test
        Finished Transaction Test
        Transaction Check Error:
          file /usr/bin/php from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /usr/bin/php-cgi from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /usr/share/man/man1/php.1.gz from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /etc/php.ini from install of php53u-common-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-common-5.1.6-27.el5_5.3.x86_64

        Error Summary
        -------------

    I have no idea how to solve this. I think I have to delete the base packages first. However, as someone new to Linux, I don't know how to do that.
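
    The conflict is exactly what the error describes: the stock 5.1.6 packages own the same file paths the php53u packages want to install. A sketch of the usual fix, removing the base packages before installing the IUS equivalents; the package names below are taken from the error output, so check what is actually installed first:

        # See which php packages the base repo installed:
        yum list installed | grep -i php
        # Remove the conflicting 5.1.6 packages (yum skips any that aren't present):
        yum erase php php-cli php-common
        # Then install the IUS 5.3 equivalents:
        yum install php53u php53u-cli php53u-common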

    Read the article

  • How does one change the UUID of a Volume on Mac OS X 10.6?

    - by Emmel
    Does anyone know how to change the UUID of a volume? The background for this question is that I have a duplicate-UUID issue: /Volumes/OldMacHD has a UUID of XYZ, and /Volumes/Mirror1 has a UUID of XYZ as well (the same UUID! I bet that's because OldMacHD used to be part of this mirror). I got these UUIDs via 'diskutil info /dev/thatdisknumber | grep UUID'. I'd like to change the UUID of Mirror1. By chance I discovered the hfs.util utility, since these are HFS volumes after all. The man page for hfs.util says that the -s flag changes the UUID. However, if you run hfs.util by itself, it shows every option except -s! Grr. I tried it anyway:

        sudo /System/Library/Filesystems/hfs.fs/hfs.util -s /dev/disk4

    (that's the RAID volume). Nothing happens: no error message, no success message, and the UUID stays exactly the same. I tried it while the volume was unmounted as well. Any ideas?
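
    One thing worth trying, sketched below as a guess rather than a verified fix: hfs.util generally operates on the partition slice that actually carries the HFS+ filesystem (e.g. disk4s2) rather than the whole-disk node, and its other modes take the bare device name without the /dev/ prefix. The slice name here is hypothetical; find the real one first.

        # Sketch, untested -- find the slice that holds the HFS+ volume:
        diskutil list disk4
        diskutil unmount /Volumes/Mirror1
        # Pass the slice, without the /dev/ prefix:
        sudo /System/Library/Filesystems/hfs.fs/hfs.util -s disk4s2
        diskutil mount disk4s2
        diskutil info disk4s2 | grep UUID    # confirm the UUID actually changed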

    Read the article

  • Can any iSCSI NAS appliance replicate / clone a LUN to an external drive?

    - by Boden
    I would like to back up, using Windows Imaging, to some kind of NAS appliance; I believe this will require the NAS to support iSCSI. I would then like the appliance to support replicating the iSCSI LUN to an external eSATA or USB disk connected directly to the appliance. I've found plenty of NAS appliances that can do iSCSI, and plenty that can replicate to an external drive, but none so far that can do both at once: on the devices I've looked at, the replication feature stops working once the volume is an iSCSI LUN. The idea here is to back up to an appliance located in a secure office far away from the server room, with offsite backups to external hard drives managed from the appliance. The benefits of such a setup would be:
    1) It is very unlikely that fire or random theft would affect both the server-room backup and the "remote" backup appliance.
    2) Offsite backups could be managed by multiple trusted people without granting them access to the server room.
    3) Windows Imaging provides a poor man's deduplication, so each backup volume can contain a decent backup history.
    I understand why this would be a non-trivial thing to implement, but I'm wondering whether such a thing exists? Preferably a tabletop, low- to medium-cost device. Alternative solutions are welcome. NOTE: I'm backing up very few but very large files, so file replication is not a good option.

    Read the article
