Search Results

Search found 4585 results on 184 pages for 'master subdocument'.


  • How to Create Features for Windows SharePoint Services 3.0

    To customise a SharePoint (WSS 3.0) site, you'll need to understand 'Features'. The 'Feature' framework has become the most important method of customising a SharePoint site, because a site is now defined by a list of Features, a layout page and a master page. One templated site can be turned into another by toggling Features and maybe switching the layout page or master page. Charles Lee explains.
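
    For illustration, the skeleton of a Feature definition looks roughly like the following. This is a minimal sketch; the GUID, title and folder name are placeholder values, not taken from the article:

        <?xml version="1.0" encoding="utf-8"?>
        <!-- Feature.xml, deployed under 12\TEMPLATE\FEATURES\MyFeature\ -->
        <Feature xmlns="http://schemas.microsoft.com/sharepoint/"
                 Id="11111111-2222-3333-4444-555555555555"
                 Title="My Custom Feature"
                 Scope="Web">
          <!-- elements.xml holds the actual customisations this Feature provisions -->
          <ElementManifests>
            <ElementManifest Location="elements.xml" />
          </ElementManifests>
        </Feature>

    A Feature like this would typically be installed and activated with stsadm -o installfeature and stsadm -o activatefeature.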

    Read the article

  • Git workflow for small teams

    - by janos
    I'm working on a git workflow to implement in a small team. The core ideas in the workflow:

    - there is a shared project master that all team members can write to
    - all development is done exclusively on feature branches
    - feature branches are code reviewed by a team member other than the branch author
    - the feature branch is eventually merged into the shared master and the cycle starts again

    The article explains the steps in this cycle in detail: https://github.com/janosgyerik/git-workflows-book/blob/small-team-workflow/chapter05.md Does this make sense or am I missing something?
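
    A minimal sketch of one cycle of this workflow (the branch name is a placeholder):

        # start a feature branch off the up-to-date shared master
        git checkout master && git pull origin master
        git checkout -b feature/login-form
        # ...commit work, then publish the branch for review
        git push -u origin feature/login-form
        # after the review is approved, merge back into the shared master
        git checkout master
        git merge --no-ff feature/login-form
        git push origin master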

    Read the article

  • Migrate LDAP to another server

    - by user3215
    I have an Ubuntu (8.10) LDAP server running. I am migrating from my old servers to an Ubuntu 10.04.2 server, and I have set up almost everything freshly on the new servers. How can I cleanly migrate LDAP to the new server after installing it? Searching on Google, I found the following commands to export and import LDAP: slapcat -l master.ldif and slapadd -c -l master.ldif. Is there anything I have to do before/after using those commands? Any precautions?
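
    A minimal sketch of the full sequence, assuming the stock Debian/Ubuntu slapd layout (the paths and service names are assumptions, not from the question):

        # on the old server: export from a stopped (quiesced) directory
        sudo /etc/init.d/slapd stop
        sudo slapcat -l master.ldif
        scp master.ldif newserver:/tmp/
        # also carry over /etc/ldap/ (slapd.conf or cn=config, plus any custom schemas)

        # on the new server: import, fix ownership, restart
        sudo /etc/init.d/slapd stop
        sudo slapadd -c -l /tmp/master.ldif
        sudo chown -R openldap:openldap /var/lib/ldap   # slapadd ran as root
        sudo /etc/init.d/slapd start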

    Read the article

  • How Important is Keyword Selection?

    If you are planning to be one of the best SEO experts in town, or even the best, there are a few things you must consider first. Are you a master of the SEO basics? Whether you are self-taught or trained by an SEO specialist, you must master the basics. Also, how good are you at keyword selection?

    Read the article

  • I'm confused about which skill to learn at my internship

    - by iyad al aqel
    I'm a software engineering student having my internship this summer. The company asked me to choose one or two skills that I want to master, and they will guide me and give me small tasks up to medium-sized projects to master them. Now I'm confused: shall I continue with web development and learn .NET, given that I've been working with PHP for 2 years, or enter the mobile development world and learn Android? Any advice, guys?

    Read the article

  • Think Before You Leap - Life is Dangerous for Change Agents

    - by technodrone
    So you want to introduce agile methods to your team... The following are some "lessons learned" from someone who advocated agile/scrum to a group that was not ready for it.

    "Change agents, in my experience, face negative consequences. Sometimes, most of the time at the beginning, it's painful. This is the question you might have to ask yourself: do you want to be a developer on a scrum project, or do you want to be a scrum master managing the process? I think with proper mentoring/training you can become a good scrum master. But is that what you want? If yes, you can go ahead and take the training. If you want to be a developer, you may not need to be certified as a scrum master. You can just pick it up from a book such as Mike Cohn's new book Succeeding with Agile; I am reading it now. It's good.

    In my experience, I wasted my resources by trying to change the culture. It cost me a lot. Instead, I should have focused on technical practices that are core to agile, then looked for teams that are good at agile. I would have saved a lot of energy and time. Try baby steps first yourself in the company, and next with the team, starting with technical practices like writing unit tests, SOLID principles, patterns, refactoring, continuous integration, pairing, and peer code reviews. These have an inherent pull that can bring collaboration from a team. Once you see team adoption of the core practices, then you can introduce scrum concepts like user stories/task boards etc. This idea of leading by example seems to be working for most of the agile folks. You can pitch the core practices to the manager and the team, and start showing them how you are doing. You can put together a road map for agile adoption and pitch it to your manager. I would include the need for scrum master training as part of the road map."

    I thought about his advice for a couple of weeks and read about the pitfalls of technical debt and of a team not having prior awareness of agile methods. The more I read and think about it, the more I think he was right. What do you think?

    Read the article

  • Book Review - Windows 7 Administrator's Pocket Consultant

    If you are a Windows geek or an administrator, then you should master all the advanced concepts associated with Windows 7. Windows 7 Administrator's Pocket Consultant by renowned Windows expert William R. Stanek provides a glimpse of all the concepts related to the management and administration of Windows 7. Does this book help you in your quest to master Windows 7? Find out by reading Anand's review.

    Read the article

  • Dynamic Bind9 + DHCP

    - by AcidRod75
    I have been working on setting up a server for my internal network. So far I have a working isc-dhcp-server that can update a chrooted BIND9 (on the same machine). I need to add some static entries on the DNS so users can resolve the websites that reside in our DMZ. What I had tried already was to modify /etc/bind/named.conf.local with this info:

        //
        // Do any local configuration here
        //
        // Consider adding the 1918 zones here, if they are not used in your
        // organization
        //include "/etc/bind/zones.rfc1918";
        key DHCP_UPDATER {
            algorithm HMAC-MD5.SIG-ALG.REG.INT;
            secret "MySuperSecretHash"; // (this is not the real value BTW)
        };
        zone "quality.internal" IN {
            type master;
            file "/var/lib/bind/quality.internal.db";
            allow-update { key DHCP_UPDATER; };
        };
        zone "0.10.10.in-addr.arpa" {
            type master;
            file "/var/lib/bind/rev.10.10.0.in-addr.arpa";
            allow-update { key DHCP_UPDATER; };
        };
        logging {
            channel query.log {
                file "/var/log/named/query.log";
                severity debug 3;
            };
            category queries { query.log; };
        };
        --- EOF ----

    Then I added these 2 entries:

        zone "ourserver.internal" IN {
            type master;
            file "/var/lib/bind/ourserver.internal.db";
        };
        zone "0.16.172.in-addr.arpa" {
            type master;
            file "/var/lib/bind/rev.172.16.0.in-addr.arpa";
        };
        ---- EOF ----

    So I created the files ourserver.internal.db and rev.172.16.0.in-addr.arpa, placed them BOTH in /var/lib/bind/, changed the permissions so the bind user can access them, and restarted the service. When I do an NSLOOKUP of www.ourserver.internal I get:

        Server: 127.0.0.1
        Address: 127.0.0.1#53
        ** server can't find www.ourserver.internal: NXDOMAIN

    BUT when I do a reverse lookup...

        Server: 127.0.0.1
        Address: 127.0.0.1#53
        5.0.16.172.in-addr.arpa name = www.ourserver.internal

    I do not understand what's wrong. Some help with this will save me from installing a new DNS server at the DMZ JUST to host internal site names. Thank you in advance. BTW: the server I'm using has Ubuntu Server 11.10, fully patched.
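
    Two checks worth running first, as a debugging sketch using the paths from the config above (named-checkconf and named-checkzone ship with BIND9):

        named-checkconf /etc/bind/named.conf
        named-checkzone ourserver.internal /var/lib/bind/ourserver.internal.db
        # then restart and look in syslog/query.log for
        # "zone ourserver.internal/IN: loaded serial ..."

    One common cause of this exact symptom (an assumption to verify, not a diagnosis): a record in the forward zone file missing its trailing dot, so www.ourserver.internal. silently expands to www.ourserver.internal.ourserver.internal. while the reverse zone, built separately, stays correct.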

    Read the article

  • PS3 controller failed to pair

    - by Michael Ropy
    I'm trying to pair my PS3 controller with Trusty. I've tried this so far:

        $ sudo sixpair
        Current Bluetooth master: xx:xx:xx:xx:xx:xx
        Setting master bd_addr to xx:xx:xx:xx:xx:xx

    And then I unplug my controller and run this command:

        $ sixad --start
        D-Bus setup failed: Name already in use
        sixad-bin[4843]: started
        sixad-bin[4843]: sixad started, press the PS button now

    But when I press the PS button, nothing happens! I've searched a lot on Google and this process works everywhere else, but I don't know why it doesn't work here... thanks for your help...

    Read the article

  • Announcing Oracle MDM YouTube Channel!

    - by Michelle Kimihira
    We are excited about the new Oracle MDM YouTube channel, where you can watch videos related to Master Data Management. You will find product videos and customer videos. Be sure to subscribe to the channel so you don't miss out! Take a moment to visit us at: http://www.youtube.com/oraclemdm. Additional Information Product Information on Oracle.com: Oracle Fusion Middleware Follow us on Twitter Read and Subscribe to our bi-monthly Data Integration and Master Data Management Newsletter

    Read the article

  • How to properly document functionality in an agile project?

    - by RoboShop
    So recently we've just finished the first phase of our project. We used agile with fortnightly sprints, and whilst the application turned out well, we're now turning our eyes to some of the maintenance tasks. One maintenance task is that all of our documentation appears in the form of specs. These specs describe 1 or more stories and generally are a body of work which a few devs could knock over in a week. For development that works really well: every two weeks the devs get handed a spec and it's a nice discrete chunk of work that they can just do.

    From a documentation point of view, this has become a mess. The problem with writing specs that are focused on delivering just-in-time requirements to developers is that we haven't placed much emphasis on the big picture. Specs come from all different angles: one could be describing a standard function, another could be describing parts of a workflow, another could be describing a particular screen... And now we have business rules about our application scattered across 120 documents. Looking for the document with a particular business rule or function is quite hard, because you don't know which document has this information, and making a change request is equally hard because, once again, we are unsure about which spec to change.

    So we have maybe a couple of weeks of lull before it's back to specing out functionality for the next phase, but in this time I'd like to revisit our processes. I think the way we have worked so far, in terms of delivering fortnightly specs, works well. But we also need a way to manage our documentation so that our business rules for a given function / workflow are easy to locate and change. I have two ideas.

    One is that we compile all of our specs into a series of master specs, broken down by a few broad functional areas. The specs describe the sprint; the master specs describe the system. The only problems I can see are: 1) our existing 120 specs are not all neatly defined into broad functional areas, and some will require breaking up, merging etc., which will take a lot of time; and 2) we'll be writing specs and updating master specs in each new sprint, which seems like double the work, and then do the devs look at the spec or the master spec?

    My other suggestion is to concede that our documentation is too big of a mess, and manage that mess going forward. So we go through each spec, assign keywords to it, and then when we want to find a function, we search for its keyword. The problem I can see: 1) business rules are still scattered everywhere; keywords just make them easier to find.

    Anyway, if anyone has any decent ideas or any experience to share about how best to manage documentation, I would really appreciate it.

    Read the article

  • ssh connection error

    - by evaG
    I'm trying to log into an Ubuntu desktop. I get the following error message:

        PTY allocation request failed

    What does it mean, and how can I connect to my desktop? Thanks.

    Edit:

        debug1: Reading configuration data /home/evag/.ssh/config
        debug1: /home/evag/.ssh/config line 1: Applying options for *
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: auto-mux: Trying existing master
        debug1: mux_client_request_session: master session id: 2
        PTY allocation request failed
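
    The debug output shows the client reusing a multiplexed master connection ("auto-mux: Trying existing master"). One hypothesis, sketched below: the shared control master is stale or has exhausted its session limit, so the PTY request on the new channel fails. -O and -t are standard OpenSSH client options; the hostname is a placeholder:

        ssh -O exit mydesktop     # tear down the existing control master
        ssh -t mydesktop          # retry, forcing PTY allocation on a fresh connection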

    Read the article

  • Creating ASP.NET Admin Pages

    In the tutorial series for ASP.NET 3.5 Master Page Design using Query String, I explained in detail how to create an ASP.NET website using master pages and query string techniques. This tutorial series will show you how to create admin pages for your ASP.NET MasterPage Query String driven website.

    Read the article

  • EBS + DB: Oracle certification (original Japanese text garbled)

    - by Nishibu
    (Japanese post; most of the text was garbled in extraction. The recoverable references are to the ORACLE MASTER Gold 10g Database and ORACLE MASTER Gold 11g Database certifications, EBS R12, and the Oracle E-Business Suite R12 Applications Database Administrator Certified Professional certification.)

    Read the article

  • How to use Salt Stack with minions all behind NAT (not publicly accessible, default salt ports not open)?

    - by MountainX
    Can Salt Stack minions communicate with the salt master from behind NAT/firewalls, etc., using standard ports that would be open by default in all consumer NAT routers (and without the minions having a public DNS record or static IP)? I'm working my way through my first salt tutorial, and this is where I'm stuck. I am able to configure iptables on the Ubuntu salt-master, but I have no control over the routers/NAT that the minions will sit behind. So far I tried these settings:

        /etc/salt/master:
            publish_port: 465
            ret_port: 443

        /etc/salt/minion:
            master_port: 465

    That did not work. Background: I have a custom-developed application presently running on about 40 Kubuntu laptops (& more planned). Every few months I have to update the application. (Often this just amounts to replacing a .jar file, which requires root permissions.) I also have to run Ubuntu updates and a few other minor things. I've been doing it manually, one by one, using TeamViewer to log into each client. I would like to dramatically improve this process. The two options I'm aware of are:

    - use reverse ssh tunnels and bash scripts. I tested this and it works, but I don't get any of the reporting, etc., I would get with Salt Stack.
    - use Salt Stack (or a similar) management tool. But I need a really simple tool; I can't invest any time in a big learning curve.

    I looked at Puppet and a bunch of related tools. The only one I found that looked simple enough for me (so far) was Salt Stack. But I'm stuck now because my minion can't reach the salt-master, as stated above. I appreciate suggestions.
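
    For reference, a sketch of the stock configuration (these are Salt's defaults, plus a placeholder master address): minions open outbound connections to the master on ZeroMQ ports 4505/4506, so the minion side needs no port forwarding at all; only the master must be reachable on those two ports:

        # /etc/salt/master -- default ZeroMQ ports
        publish_port: 4505
        ret_port: 4506

        # /etc/salt/minion -- the minion connects *out* to the master,
        # which is NAT-friendly; no public IP or DNS record needed on the minion
        master: salt.example.com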

    Read the article

  • Unable to receive any emails using postfix, dovecot, mysql, and virtual domain/mailboxes

    - by stkdev248
    I have been working on configuring my mail server for the last couple of weeks using postfix, dovecot, and mysql. I have one virtual domain and a few virtual mailboxes. Using squirrelmail I have been able to log into my accounts and send emails out (e.g. I can send to googlemail just fine), however I am not able to receive any emails, not from the outside world nor from within my own network. I am able to telnet in using localhost, my private IP, and my public IP on port 25 without any problems (I've tried it from the server itself and from another computer on my network). This is what I get in my logs when I send an email from my googlemail account to my mail server:

    mail.log:

        Apr 14 07:36:06 server1 postfix/qmgr[1721]: BE01B520538: from=, size=733, nrcpt=1 (queue active)
        Apr 14 07:36:06 server1 postfix/pipe[3371]: 78BC0520510: to=, relay=dovecot, delay=45421, delays=45421/0/0/0.13, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied)
        Apr 14 07:36:06 server1 postfix/pipe[3391]: 8261B520534: to=, relay=dovecot, delay=38036, delays=38036/0.06/0/0.12, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied)
        Apr 14 07:36:06 server1 postfix/pipe[3378]: 63927520532: to=, relay=dovecot, delay=38105, delays=38105/0.02/0/0.17, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied)
        Apr 14 07:36:06 server1 postfix/pipe[3375]: 07F65520522: to=, relay=dovecot, delay=39467, delays=39467/0.01/0/0.17, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied)
        Apr 14 07:36:06 server1 postfix/pipe[3381]: EEDE9520527: to=, relay=dovecot, delay=38361, delays=38360/0.04/0/0.15, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied)
        Apr 14 07:36:06 server1 postfix/pipe[3379]: 67DFF520517: to=, relay=dovecot, delay=40475, delays=40475/0.03/0/0.16, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied)
        Apr 14 07:36:06 server1 postfix/pipe[3387]: 3C7A052052E: to=, relay=dovecot, delay=38259, delays=38259/0.05/0/0.13, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied)
        Apr 14 07:36:06 server1 postfix/pipe[3394]: BE01B520538: to=, relay=dovecot, delay=37682, delays=37682/0.07/0/0.11, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied)
        Apr 14 07:36:07 server1 postfix/pipe[3384]: 3C7A052052E: to=, relay=dovecot, delay=38261, delays=38259/0.04/0/1.3, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied)
        Apr 14 07:39:23 server1 postfix/anvil[3368]: statistics: max connection rate 1/60s for (smtp:209.85.213.169) at Apr 14 07:35:32
        Apr 14 07:39:23 server1 postfix/anvil[3368]: statistics: max connection count 1 for (smtp:209.85.213.169) at Apr 14 07:35:32
        Apr 14 07:39:23 server1 postfix/anvil[3368]: statistics: max cache size 1 at Apr 14 07:35:32
        Apr 14 07:41:06 server1 postfix/qmgr[1721]: ED6005203B7: from=, size=1463, nrcpt=1 (queue active)
        Apr 14 07:41:06 server1 postfix/pipe[4594]: ED6005203B7: to=, relay=dovecot, delay=334, delays=334/0.01/0/0.13, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied)
        Apr 14 07:51:06 server1 postfix/qmgr[1721]: ED6005203B7: from=, size=1463, nrcpt=1 (queue active)
        Apr 14 07:51:06 server1 postfix/pipe[4604]: ED6005203B7: to=, relay=dovecot, delay=933, delays=933/0.02/0/0.12, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied)

    mail-dovecot-log (the log I set for debugging):

        Apr 14 07:28:26 auth: Info: mysql(127.0.0.1): Connected to database postfixadmin
        Apr 14 07:28:26 auth: Debug: sql([email protected],127.0.0.1): query: SELECT password FROM mailbox WHERE username = '[email protected]'
        Apr 14 07:28:26 auth: Debug: client out: OK 1 [email protected]
        Apr 14 07:28:26 auth: Debug: master in: REQUEST 1809973249 3356 1 7cfb822db820fc5da67d0776b107cb3f
        Apr 14 07:28:26 auth: Debug: sql([email protected],127.0.0.1): SELECT '/home/vmail/mydomain.com/some.user1' as home, 5000 AS uid, 5000 AS gid FROM mailbox WHERE username = '[email protected]'
        Apr 14 07:28:26 auth: Debug: master out: USER 1809973249 [email protected] home=/home/vmail/mydomain.com/some.user1 uid=5000 gid=5000
        Apr 14 07:28:26 imap-login: Info: Login: user=, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=3360, secured
        Apr 14 07:28:26 imap([email protected]): Debug: Effective uid=5000, gid=5000, home=/home/vmail/mydomain.com/some.user1
        Apr 14 07:28:26 imap([email protected]): Debug: maildir++: root=/home/vmail/mydomain.com/some.user1/Maildir, index=/home/vmail/mydomain.com/some.user1/Maildir/indexes, control=, inbox=/home/vmail/mydomain.com/some.user1/Maildir
        Apr 14 07:48:31 imap([email protected]): Info: Disconnected: Logged out bytes=85/681

    From the output above I'm pretty sure that my problems all stem from (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied), but I have no idea why I'm getting that error. I have the permissions on that log set just like the other mail logs:

        root@server1:~# ls -l /var/log/mail*
        -rw-r----- 1 syslog adm 196653 2012-04-14 07:58 /var/log/mail-dovecot.log
        -rw-r----- 1 syslog adm  62778 2012-04-13 21:04 /var/log/mail.err
        -rw-r----- 1 syslog adm 497767 2012-04-14 08:01 /var/log/mail.log

    Does anyone have any idea what I may be doing wrong? Here are my main.cf and master.cf files:

    main.cf:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = server1.mydomain.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination =
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        # Virtual Configs
        virtual_uid_maps = static:5000
        virtual_gid_maps = static:5000
        virtual_mailbox_base = /home/vmail
        virtual_mailbox_domains = mysql:/etc/postfix/mysql_virtual_mailbox_domains.cf
        virtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_mailbox_maps.cf
        virtual_alias_maps = mysql:/etc/postfix/mysql_virtual_alias_maps.cf
        relay_domains = mysql:/etc/postfix/mysql_relay_domains.cf
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unauth_destination, reject_unauth_pipelining, reject_invalid_hostname
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        virtual_transport=dovecot
        dovecot_destination_recipient_limit = 1

    master.cf:

        #
        # Postfix master process configuration file. For details on the format
        # of the file, see the master(5) manual page (command: "man 5 master").
        #
        # Do not forget to execute "postfix reload" after editing this file.
        #
        # ==========================================================================
        # service type  private unpriv  chroot  wakeup  maxproc command + args
        #               (yes)   (yes)   (yes)   (never) (100)
        # ==========================================================================
        smtp      inet  n       -       -       -       -       smtpd
        #smtp     inet  n       -       -       -       1       postscreen
        #smtpd    pass  -       -       -       -       -       smtpd
        #dnsblog  unix  -       -       -       -       0       dnsblog
        #tlsproxy unix  -       -       -       -       0       tlsproxy
        #submission inet n      -       -       -       -       smtpd
        #  -o smtpd_tls_security_level=encrypt
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #smtps    inet  n       -       -       -       -       smtpd
        #  -o smtpd_tls_wrappermode=yes
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #628      inet  n       -       -       -       -       qmqpd
        pickup    fifo  n       -       -       60      1       pickup
        cleanup   unix  n       -       -       -       0       cleanup
        qmgr      fifo  n       -       n       300     1       qmgr
        #qmgr     fifo  n       -       -       300     1       oqmgr
        tlsmgr    unix  -       -       -       1000?   1       tlsmgr
        rewrite   unix  -       -       -       -       -       trivial-rewrite
        bounce    unix  -       -       -       -       0       bounce
        defer     unix  -       -       -       -       0       bounce
        trace     unix  -       -       -       -       0       bounce
        verify    unix  -       -       -       -       1       verify
        flush     unix  n       -       -       1000?   0       flush
        proxymap  unix  -       -       n       -       -       proxymap
        proxywrite unix -       -       n       -       1       proxymap
        smtp      unix  -       -       -       -       -       smtp
        # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
        relay     unix  -       -       -       -       -       smtp
                -o smtp_fallback_relay=
        #       -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
        showq     unix  n       -       -       -       -       showq
        error     unix  -       -       -       -       -       error
        retry     unix  -       -       -       -       -       error
        discard   unix  -       -       -       -       -       discard
        local     unix  -       n       n       -       -       local
        virtual   unix  -       n       n       -       -       virtual
        lmtp      unix  -       -       -       -       -       lmtp
        anvil     unix  -       -       -       -       1       anvil
        scache    unix  -       -       -       -       1       scache
        #
        # ====================================================================
        # Interfaces to non-Postfix software. Be sure to examine the manual
        # pages of the non-Postfix software to find out what options it wants.
        #
        # Many of the following services use the Postfix pipe(8) delivery
        # agent. See the pipe(8) man page for information about ${recipient}
        # and other message envelope options.
        # ====================================================================
        #
        # maildrop. See the Postfix MAILDROP_README file for details.
        # Also specify in main.cf: maildrop_destination_recipient_limit=1
        #
        maildrop  unix  -       n       n       -       -       pipe
                flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
        #
        # ====================================================================
        #
        # Recent Cyrus versions can use the existing "lmtp" master.cf entry.
        #
        # Specify in cyrus.conf:
        #   lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
        #
        # Specify in main.cf one or more of the following:
        #   mailbox_transport = lmtp:inet:localhost
        #   virtual_transport = lmtp:inet:localhost
        #
        # ====================================================================
        #
        # Cyrus 2.1.5 (Amos Gouaux)
        # Also specify in main.cf: cyrus_destination_recipient_limit=1
        #
        #cyrus    unix  -       n       n       -       -       pipe
        #  user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
        #
        # ====================================================================
        # Old example of delivery via Cyrus.
        #
        #old-cyrus unix -       n       n       -       -       pipe
        #  flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
        #
        # ====================================================================
        #
        # See the Postfix UUCP_README file for configuration details.
        #
        uucp      unix  -       n       n       -       -       pipe
                flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
        #
        # Other external delivery methods.
        #
        ifmail    unix  -       n       n       -       -       pipe
                flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
        bsmtp     unix  -       n       n       -       -       pipe
                flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
        scalemail-backend unix - n      n       -       2       pipe
                flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
        mailman   unix  -       n       n       -       -       pipe
                flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
        dovecot   unix  -       n       n       -       -       pipe
                flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}
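
    Given that master.cf pipes delivery to /usr/lib/dovecot/deliver as user=vmail:vmail while the log is owned by syslog:adm with mode 640, one plausible fix (an assumption drawn from the logs above, not a confirmed solution) is to make the debug log writable by vmail and then flush the deferred queue:

        sudo chown vmail:vmail /var/log/mail-dovecot.log
        sudo postqueue -f      # re-attempt the deferred deliveries

    Alternatively, point Dovecot's log_path at a location the vmail user can write, or remove the debug log setting once it is no longer needed.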

    Read the article

  • How to configure keepalived on Amazon EC2?

    - by oeegee
    I read some articles, e.g. "Keepalived over GRE tunnel for failover on VPS environment" (http://blog.killtheradio.net/how-tos/keepalived-haproxy-and-failover-on-the-cloud-or-any-vps-without-multicast/), but I don't know how to configure this, and what is this architecture called? I only know how to set up a Master/Backup configuration in keepalived. What I want to know is how keepalived works. I want to design this:

        XMPP Server(EC2)
                |
        -------------------------------------------------
        keepalived Master(EC2)  -  keepalived Backup(EC2)
        HAProxy #1                 HAProxy#2
        -------------------------------------------------
                |
        Casandra#1  Casandra#2  Casandra#3  Casandra#4

    Thanks! What I want to know is how to run keepalived with the unicast patch module; ELB is expensive. This is the first overall design:

        [Flow] ELB -- XMPP Server -- ELB -- Casandra

        ELB
         |
        XMPP#1  XMPP#2  XMPP#3  XMPP#4
         |
        ELB
         |
        Casandra#1  Casandra#2  Casandra#3  Casandra#4

    Then I changed the first design:

        [Flow] ELB -- XMPP Server -- HAProxy Master(Casandra Farm) -- Casandra

        ELB
         |
        XMPP#1  XMPP#2  XMPP#3  XMPP#4
         |
        -------------------------------------------------
        keepalived Master(EC2)  -  keepalived Backup(EC2)
        HAProxy#1                  HAProxy#2
        -------------------------------------------------
         |
        Casandra#1  Casandra#2  Casandra#3  Casandra#4

    This is the second:

        [Flow] ELB -- HAProxy(XMPP Farm) -- XMPP Server -- HAProxy(Casandra Farm) -- Casandra

    Is it OK?

        ELB
         |
        HAProxy#1  HAProxy#2  HAProxy#3  HAProxy#4
        XMPP#1     XMPP#2     XMPP#3     XMPP#4
         |
        Casandra#1  Casandra#2  Casandra#3  Casandra#4
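
    Since EC2 blocks the multicast that VRRP normally uses, keepalived's unicast mode is the usual workaround. A hypothetical sketch for the HAProxy pair (all addresses are placeholders, and it requires a keepalived build with unicast support):

        vrrp_instance VI_1 {
            state MASTER              # BACKUP on the peer
            interface eth0
            virtual_router_id 51
            priority 100              # lower on the peer
            unicast_src_ip 10.0.0.10  # this node
            unicast_peer {
                10.0.0.11             # the other HAProxy node
            }
            virtual_ipaddress {
                10.0.0.100            # floating IP the XMPP servers point at
            }
        }

    Note that a plain floating IP doesn't move between EC2 instances by itself; on AWS this is typically paired with an Elastic IP reassociation script in a notify_master handler.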

    Read the article

  • Sharing disk volumes across OpenVZ guests to reduce Package Management Overhead

    - by andyortlieb
    Is it feasible to create a single "master" OpenVZ guest which would only be used for package management, and use something like mount --bind on several other OpenVZ guests to sort of trick them into using the environment installed by the master guest? The point of this would be that users can maintain their own containers and yet stay in sync with the master development environment, so they'll always have the latest & greatest requirements without worrying too much about system administration. If they need to install their own packages, they could put them in /opt or /usr/local (or set a path to their home directory). To rephrase, I would like several (developers', for example) OpenVZ guests whose /bin, /usr (and so on...) actually refer to the same disk location as that of a master OpenVZ guest, which can be started up to install and update common packages for the environment shared by all of this group of OpenVZ guests. For what it's worth, we're running Debian 6.

    Edit: I have tried mounting (bind, and readonly) /bin, /lib, /sbin, /usr in this fashion and it refuses to start the containers, stating that files are already mounted or otherwise in use:

        Starting container ...
        vzquota : (error) Quota on syscall for id 1102: Device or resource busy
        vzquota : (error) Possible reasons:
        vzquota : (error) - Container's root is already mounted
        vzquota : (error) - there are opened files inside Container's private area
        vzquota : (error) - your current working directory is inside Container's
        vzquota : (error)   private area
        vzquota : (error) Currently used file(s):
        /var/lib/vz/private/1102/sbin
        /var/lib/vz/private/1102/usr
        /var/lib/vz/private/1102/lib
        /var/lib/vz/private/1102/bin
        vzquota on failed [3]

    If I unmount these four volumes and start the guest, and then mount them after the guest has started, the guest never sees them mounted.
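
    One approach that may sidestep the vzquota error (a sketch under the assumption that bind-mounting into the private area before start is what trips the quota check): use an OpenVZ per-container mount action script, which runs after the container root is mounted, and bind into the root rather than the private area. The container ID and master path are placeholders:

        #!/bin/bash
        # /etc/vz/conf/1102.mount -- executed by "vzctl start 1102"
        source /etc/vz/vz.conf
        source ${VE_CONFFILE}
        MASTER_ROOT=/var/lib/vz/root/101   # the package-management master guest
        for d in bin lib sbin usr; do
            mount -n --bind ${MASTER_ROOT}/${d} ${VE_ROOT}/${d}
        done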

    Read the article

  • Samba PDC share slow with LDAP backend

    - by hmart
    The scenario: I have a SUSE SLES 11.1 SP1 machine as Samba master PDC with an LDAP backend. One share holds the database files for a client-server application. I log XP and Windows 7 machines into the local domain (example.local); the login is a little slow but works. The client computers have an executable which opens, reads and writes the database files from the server share.

    The problem: when running Samba with the LDAP password backend, the client application runs VERY SLOW, with a maximum transfer rate of about 2.5 Mbit per second. If I disable LDAP, the client app's speed increases 20x, with a transfer rate of 50 Mbit/sec, and it runs smoothly. I'm testing with just two users and two machines, so concurrency or LDAP size shouldn't be the problem here.

    The suspect: the LDAP and smb.conf [global] section configuration.

    The question: what can I do? I've googled a lot, but still have no answer.

    Slow smb.conf WITH LDAP:

        [global]
        workgroup = zmartsoft.local
        passdb backend = ldapsam:ldap://127.0.0.1
        printing = cups
        printcap name = cups
        printcap cache time = 750
        cups options = raw
        map to guest = Bad User
        logon path = \\%L\profiles\.msprofile
        logon home = \\%L\%U\.9xprofile
        logon drive = P:
        usershare allow guests = Yes
        add machine script = /usr/sbin/useradd -c Machine -d /var/lib/nobody -s /bin/false %m$
        domain logons = Yes
        domain master = Yes
        local master = Yes
        netbios name = server
        os level = 65
        preferred master = Yes
        security = user
        wins support = Yes
        idmap backend = ldap:ldap://127.0.0.1
        ldap admin dn = cn=Administrator,dc=zmartsoft,dc=local
        ldap group suffix = ou=Groups
        ldap idmap suffix = ou=Idmap
        ldap machine suffix = ou=Machines
        ldap passwd sync = Yes
        ldap ssl = Off
        ldap suffix = dc=zmartsoft,dc=local
        ldap user suffix = ou=Users
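
    One frequently suggested tweak for slow ldapsam setups, offered here as an assumption to test rather than a guaranteed fix: tell Samba that the LDAP directory is authoritative for POSIX attributes, which avoids a cascade of extra lookups on every connection:

        [global]
            ldapsam:trusted = yes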

    Read the article

  • BIND9 server types

    - by aGr
    I was configuring DNS on my server using BIND9. Everything seems to work, but I have a question regarding my config file. I've ended up with this configuration in /etc/bind/named.conf.local:

        zone "example.com" {
            type master;
            file "/etc/bind/db.example.com";
            allow-transfer { 192.168.1.1; };
        };
        zone "1.168.192.in-addr.arpa" {
            type master;
            notify no;
            file "/etc/bind/db.192";
            allow-transfer { 192.168.1.1; };
        };
        forwarders {
            10.253.22.140;
            10.253.22.141;
        };

    I've read about the different types of DNS servers, like primary master etc. The first two parts (the two zone blocks) correspond to a primary master configuration: the first for "classic" forward lookups, the second for reverse lookups. The last part (forwarders) is the caching-server configuration and contains the IPs of my ISP's DNS servers, so all names resolved through those servers will be cached. Simple question: am I right? Does my description make sense? Or can one server only be either a master or a caching server, not both?
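
    One detail worth checking: a bare forwarders block isn't valid at the top level of named.conf; it belongs inside the options block (on Debian/Ubuntu usually /etc/bind/named.conf.options), roughly like this:

        options {
            // ...existing options...
            forwarders {
                10.253.22.140;
                10.253.22.141;
            };
        };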

    Read the article

  • Hudson deploy specific git revision

    - by brad
    I'm using Hudson to auto-deploy my Rails app to Heroku. In my main build job I pull from a Git repo (hosted using gitosis on the same machine), master branch, with the following:

        URL of repository: /home/git/repositories/my_app.git
        Name of repository: origin
        Refspec: +refs/heads/master:refs/remotes/origin/master
        Branches to build: master

    Then, assuming all tests pass, I want to kick off a new build that is the deploy to Heroku. I can't, however, figure out how to get that deploy build to check out the particular revision that this build was using. I understand there's a parameterized trigger plugin that would allow me to pass this revision number, but I don't know how I can tell Hudson to check out this particular revision in the deploy build. I'm pretty sure this just has to do with my limited knowledge of git, but where in Hudson's git configuration is there an option to check out a particular revision? Otherwise, many commits could happen while a build is running, and when it kicks off a deploy build, that deploy build would just check out the HEAD of the branch, which may not be the same as the code that was pushed that triggered this build. I don't fully understand why I have a refspec in Hudson but then also specify a branch to build; I thought these were the same thing. Can a refspec somehow specify the revision number? How would this be referenced if it was passed through with the parameterized trigger plugin? (I've never used that plugin, but someone else recommended it as a way to pass vars to a new build; if there's another way I'm all ears.)
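
    One way this is commonly wired up, as a sketch only: it assumes the Git plugin exports the built revision as GIT_COMMIT, the parameterized trigger passes it along, and a "heroku" remote is configured; all names are illustrative:

        # shell build step in the deploy job
        git fetch origin
        git checkout -f "$GIT_COMMIT"            # the exact revision the tests ran against
        git push -f heroku "$GIT_COMMIT:master"  # deploy that revision to Heroku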

    Read the article

  • Unable to receive emails from Ubuntu postfix mail server

    - by Paddington
    I am unable to receive emails on an Ubuntu 11.04 server running postfix with the Plesk control panel. I can't see the mails even on webmail. I am able to send emails, and I am not getting any error messages on the email client when I try to receive. Here is the output of the logs:

        tail -f /usr/local/psa/var/log/maillog
        Aug 29 10:38:31 cp9 postfix/tlsmgr[3811]: fatal: open database /var/lib/postfix/smtpd_scache.db: Invalid argument
        Aug 29 10:38:32 cp9 postfix/master[27738]: warning: process /usr/lib/postfix/tlsmgr pid 3811 exit status 1
        Aug 29 10:38:32 cp9 postfix/master[27738]: warning: /usr/lib/postfix/tlsmgr: bad command startup -- throttling
        Aug 29 10:38:36 cp9 pop3d: Connection, ip=[::ffff:196.201.7.158]
        Aug 29 10:38:36 cp9 pop3d: IMAP connect from @ [::ffff:196.201.7.158]INFO: LOGIN, [email protected], ip=[::ffff:196.201.7.158]
        Aug 29 10:38:37 cp9 pop3d: 1346229517.874008 LOGOUT, [email protected], ip=[::ffff:196.201.7.158], top=0, retr=0, time=1, rcvd=24, sent=1716, maildir=/var/qmail/mailnames/essentialhuku.co.za/earle/Maildir
        Aug 29 10:14:05 cp9 postfix/tlsmgr[1133]: fatal: open database /var/lib/postfix/smtpd_scache.db: Invalid argument
        Aug 29 10:14:06 cp9 postfix/master[27738]: warning: process /usr/lib/postfix/tlsmgr pid 1133 exit status 1
        Aug 29 10:14:06 cp9 postfix/master[27738]: warning: /usr/lib/postfix/tlsmgr: bad command startup -- throttling
        Aug 29 10:14:08 cp9 pop3d: Connection, ip=[::ffff:196.201.7.158
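
    The repeating tlsmgr fatal points at a corrupt or incompatible TLS session-cache database. A plausible first step (an assumption based on that log line, not a verified fix; the cache files are safe to delete, as postfix recreates them):

        sudo /etc/init.d/postfix stop
        sudo rm -f /var/lib/postfix/smtpd_scache.db /var/lib/postfix/smtp_scache.db
        sudo /etc/init.d/postfix start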

    Read the article

  • Changing Corosync/Heartbeat pair's active node based on MySQL/Galera cluster state

    - by Hace
    Background: I'm planning on building a High Availability "cluster" for our Zabbix instance by placing two physical servers in one server room and two in another server room. In each server room one of the physical servers will run Zabbix on RHEL and the other will run Zabbix's MySQL database, also on RHEL. I'd prefer synchronous replication for the MySQL nodes, so I'm planning on using Galera in a master-slave configuration. The Zabbix instances on the two Zabbix servers would be controlled by Heartbeat/Corosync (although Red Hat Cluster Suite is also an option...). If the Zabbix server in Server Room A goes down, the one in Server Room B becomes active (and vice versa). Ditto for the MySQL servers/instances. If either of those cases happens, however, the connection between the Zabbix server and the MySQL server becomes significantly slower, as it has to travel over the WAN.

    Question: Is it possible to configure the Heartbeat/Corosync pair to instruct the MySQL/Galera cluster to change the master node to the one (if available) in the same server room as the active Heartbeat/Corosync node? And (more challengingly) is it possible to do the same in the other direction, i.e. have the Galera cluster change the active Heartbeat/Corosync server to be in the same room as the active MySQL master server in case of a failover, in order to avoid unnecessary WAN transfers between the application and its DB?

    Theories: Most likely I can get Corosync to run something that'd log in to one of the DB nodes to change the MySQL/Galera master, but I don't know if it's really possible to do anything similar in the other direction in Galera. Is it possible to define a "service" in Corosync/Heartbeat so that both the service and its MySQL service would migrate as one if possible? Using the DB server that's across the WAN should still be a better option than DB downtime. Am I just using too many tools to solve a problem that'd be far simpler with something else?
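
    On the Pacemaker side, the "migrate as one" part is expressible as a resource group or a colocation constraint. A hypothetical crm shell sketch with made-up resource names:

        # zabbix-server always runs where mysql-proxy runs, so a failover
        # of either drags the other to the same site
        group zabbix-stack mysql-proxy zabbix-server
        # or, as an explicit constraint:
        colocation zabbix-with-db inf: zabbix-server mysql-proxy

    The reverse direction (Galera picking a new "master" and then steering the cluster manager) has no built-in mechanism as far as I know; it would take a custom resource agent or notification script.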

    Read the article

  • Redis as a substitute for Memcache

    - by Boban P.
    We have a distributed web app, and for now, as session handler, we use two separate instances of memcache in redundancy, so everything that is written to one memcache is also written to the other. Memcache is fairly easy to install, use, and maintain, but we have one problem: if one memcache fails, everything is fine; PHP communicates with the other instance, which has all the data (although half of the connections have a delay because they try to use the failed one, wait a little, and then contact the other memcache). When the failed instance comes back to life again, it starts up empty. If an established session requests data from that instance, the session fails, the user logs out, and that happens to half of the users. So we are thinking about switching to redis for session handling, and maybe keeping memcache for cache only. My questions are:

    1. If we set up redis instances as master-slave and the master fails, can sentinel promote the slave as the new master? And when the old master comes back to life, will it stay a slave or not?
    2. Does redis call malloc at startup to allocate part of memory, like memcache or varnish, or does it call malloc for every key inserted? And what are the pros and cons of that?
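
    For the first question, a minimal Sentinel sketch (the IPs and quorum are placeholders): Sentinel promotes a slave when the master is unreachable, and with default settings the old master is reconfigured as a slave of the new master when it returns, rather than reclaiming its role:

        # sentinel.conf -- run Sentinel on at least three nodes for a sane quorum
        sentinel monitor mymaster 192.168.0.1 6379 2
        sentinel down-after-milliseconds mymaster 5000
        sentinel failover-timeout mymaster 60000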

    Read the article

  • Can a pool of memcache daemons be used to share sessions more efficiently?

    - by Tom
    We are moving from a one-webserver setup to a two-webserver setup, and I need to start sharing PHP sessions between the two load-balanced machines. We already have memcached installed (and started), so I was pleasantly surprised that I could accomplish sharing sessions between the new servers by changing only 3 lines in the php.ini file (the session.save_handler and session.save_path). I replaced:

        session.save_handler = files

    with:

        session.save_handler = memcache

    Then on the master webserver I set the session.save_path to point to localhost:

        session.save_path="tcp://localhost:11211"

    and on the slave webserver I set the session.save_path to point to the master:

        session.save_path="tcp://192.168.0.1:11211"

    Job done. I tested it and it works. But... obviously using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes. I'm a little concerned by this, but I am a bit more worried about the network traffic between the two webservers (especially as we scale up), because whenever someone is load balanced to the slave webserver their sessions will be fetched across the network from the master webserver. I was wondering if I could define two save_paths so the machines look in their own session storage before using the network. For example:

        Master: session.save_path="tcp://localhost:11211, tcp://192.168.0.2:11211"
        Slave:  session.save_path="tcp://localhost:11211, tcp://192.168.0.1:11211"

    Would this successfully share sessions across the servers AND help performance, i.e. save network traffic 50% of the time? Or is this technique only for failovers (e.g. when one memcache daemon is unreachable)? Note: I'm not really asking specifically about memcache replication; more about whether the PHP memcache client can peek inside each memcache daemon in a pool, return a session if it finds one, and only create a new session if it doesn't find one in any of the stores. As I'm writing this I'm thinking I'm asking a bit much from PHP, lol... Assume: no sticky sessions, round-robin load balancing, LAMP servers.
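
    A note on how the stock memcache session handler treats a pool, based on the extension's documented behavior and worth verifying against your version: multiple servers are hashed across, not tried in order, so a list like

        session.save_path = "tcp://localhost:11211, tcp://192.168.0.2:11211"

    spreads sessions over both daemons (roughly half local, half remote) rather than preferring the local one. The extra entry mainly helps as a failover target when memcache.allow_failover is enabled (the default); newer versions of the extension also offer memcache.session_redundancy to write each session to several servers at once.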

    Read the article
