Search Results


  • rsyslog - template - regex data for insertion into db

    - by Mike Purcell
    I've been googling around for the last few days looking for a solid example of how to regex a log entry for desired data, which is then to be inserted into a database, but apparently my google-fu is lacking. What I am trying to do is track when an email is sent, and then track the remote MTA response, specifically the DSN code. At this point I have two templates set up, one for each situation:

        # /etc/rsyslog.conf
        ...
        $Template tpl_custom_header, "MPurcell: CUSTOM HEADER Template: %msg%\n"
        $Template tpl_response_dsn, "MPurcell: RESPONSE DSN Template: %msg%\n"

        # /etc/rsyslog.d/mail
        if $programname == 'mail-myapp' then /var/log/mail/myapp.log
        if ($programname == 'mail-myapp') and ($msg contains 'X-custom_header') then /var/log/mail/test.log;tpl_custom_header
        if ($programname == 'mail-myapp') and ($msg contains 'dsn=') then /var/log/mail/test.log;tpl_response_dsn
        & ~

    Example log entries:

        MPurcell: CUSTOM HEADER Template: D921940A1A: prepend: header X-custom_header: 101 from localhost[127.0.0.1]; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<localhost>: headername: message-id
        MPurcell: RESPONSE DSN Template: D921940A1A: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[2607:f8b0:400e:c02::1a]:25, delay=2, delays=0.12/0.01/0.82/1.1, dsn=2.0.0, status=sent (250 2.0.0 OK 1372378600 o4si2828280pac.279 - gsmtp)

    From the CUSTOM HEADER template I would like to extract D921940A1A and the X-custom_header value, 101. From the RESPONSE DSN template I would like to extract D921940A1A and "dsn=2.0.0".
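
    One possible direction, sketched with rsyslog's property replacer (the ERE patterns below are illustrative assumptions, not tested against these exact messages): submatch regexes can pull fields straight out of %msg% inside a template.

        # extract the 10-hex-digit queue ID and the dsn= field from %msg%;
        # BLANK substitutes an empty string when a pattern does not match
        $Template tpl_dsn_extract,"%msg:R,ERE,0,BLANK:[A-F0-9]{10}--end% %msg:R,ERE,0,BLANK:dsn=[0-9.]+--end%\n"

    From there, rsyslog's ommysql or ompgsql output modules can pair such a template with an INSERT statement to get the extracted values into a database.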

  • Setup staging with multiple SVN

    - by Kapil Sharma
    We are a startup setting up new environments for a product to be released soon. The planned server structure and release flow are as shown in the image below. It ideally has a local server (or staging server, shown in green) in the local office, without a public IP address, and a production server (red) at Amazon EC2. Both the local and production servers have their own SVN copy. Management here wants to update the production server from the production SVN without giving access to it to developers (including freelancers/contract employees). So for developers there is a local SVN on the local server. Another purpose of the local SVN is to keep a copy of the code on a local server which is under our direct control.

    There are some technical concerns, like how code on the local server will be updated from the local SVN and committed to the production SVN, but the bigger question is: is this structure correct? The major requirement remains: don't give developers access to the production SVN. What other options are there to achieve that?

    Another minor question, if it fits here: if the above structure is correct, is it possible for an SVN checkout to be updated from one repository (the local SVN) but commit to another (the production SVN)? If yes, how?

    Edit: An answer has been accepted, but for the bounty I'm still looking for an answer to: is this structure correct? What are its pros and cons? A technical solution is already provided by the accepted answer.
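
    One common pattern that fits the "developers never touch the production SVN" requirement (a sketch, not from the accepted answer; URLs and paths are placeholders): mirror the local repository into the production one with svnsync, so only the release manager ever runs the sync.

        # one-time setup of the production mirror
        svnadmin create /srv/svn/production
        # note: the mirror needs a pre-revprop-change hook that exits 0
        svnsync init file:///srv/svn/production http://local-server/svn/project
        # run at each release to pull all new revisions across
        svnsync sync file:///srv/svn/production

    As for the minor question: a single working copy is bound to one repository, so it cannot update from one and commit to another; a relay of this kind (or a vendor-branch style import) is the usual workaround.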

  • Does mailx send mail using an SMTP relay or does it directly connect to the target SMTP server?

    - by iamrohitbanga
    Suppose I send a mail using the following command:

        mailx [email protected]

    Does mailx first try to find the SMTP server of my ISP to relay the mail, or does it connect directly? Does it depend on whether my PC has a public IP address or is behind a NAT? How do I check the settings of mailx on my PC? How can I verify this using tcpdump?
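
    A sketch of answering the last question empirically (the interface name is an assumption; on most Linux systems mailx actually hands the message to the local MTA, which makes the relay-or-direct decision):

        # watch outbound SMTP while sending a test message
        sudo tcpdump -i eth0 -nn 'tcp port 25' &
        echo "test body" | mailx -s "test" [email protected]

    If the captured SYN goes to your ISP's smarthost, the mail is being relayed; a direct delivery shows a connection to the IP of the recipient domain's MX record, which typically fails from behind NAT on ISPs that block outbound port 25.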

  • When to use Squid on the server side?

    - by ajsie
    So I have set up Apache serving my PHP pages. I read about Squid but don't understand why/how I should use it to speed up my web server. From what I've learned, Squid is located in the same network (or another one) and caches content requested by web browsers; when another browser wants the same page, Squid returns the locally cached copy, so the request never reaches the Apache server (faster response time for the client, and reduced load for the server). So it seems that Squid is for the client side (web browser) and has nothing to do with the server side (Apache). But then some people describe how they have sped up Apache using Squid, so I'm confused. Can Squid be used on the server side too? And how does that work?
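
    Yes: run as a reverse proxy (an "accelerator"), Squid sits in front of Apache and serves repeat requests from its cache, so Apache/PHP only sees cache misses. A minimal squid.conf sketch, assuming Apache is moved to 127.0.0.1:8080 (hostname and values are illustrative):

        http_port 80 accel defaultsite=www.example.com
        cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=apache
        acl our_site dstdomain www.example.com
        http_access allow our_site
        cache_peer_access apache allow our_site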

  • FFMPEG: how to add watermark to video?

    - by DocWiki
    My platform: Ubuntu 10.10 + FFmpeg 0.5.3 (installed from source). I am trying to add a watermark to a .MOV video with FFmpeg 0.5.3 and imlib2.so. (Please note FFmpeg 0.6+ doesn't support imlib2.so, which is why I use 0.5.3.) Here is my command:

        ffmpeg -sameq -i example.mov -vhook '/usr/local/lib/vhook/imlib2.so -x 0 -y 0 -i /var/www/files/watermark.png' newexample.mov

    Here is the output:

        FFmpeg version 0.5.3, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --enable-avfilter --enable-filter=movie --enable-avfilter-lavf
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 1 / 52.20. 1
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libavfilter    0. 4. 0 /  0. 4. 0
          built on Jul  3 2011 12:05:08, gcc: 4.4.5
        Seems stream 1 codec frame rate differs from container frame rate: 59.94 (5994/100) -> 29.97 (30000/1001)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'example.mov':
          Duration: 00:03:14.06, start: 0.000000, bitrate: 3350 kb/s
            Stream #0.0(eng): Audio: aac, 48000 Hz, stereo, s16
            Stream #0.1(eng): Video: h264, yuv420p, 1150x647, 29.97 tbr, 29.97 tbn, 59.94 tbc
        Output #0, mov, to 'newexample.mov':
            Stream #0.0(eng): Video: mpeg4, yuv420p, 1150x647, q=2-31, 200 kb/s, 90k tbn, 29.97 tbc
            Stream #0.1(eng): Audio: 0x0000, 48000 Hz, stereo, s16, 64 kb/s
        Stream mapping:
          Stream #0.1 -> #0.0
          Stream #0.0 -> #0.1
        Unsupported codec for output stream #0.1

    What could the problem be? Is it AAC or H.264 that is not supported? I installed libavcodec-extra-52, libfaac, libfaad, etc., but the error is the same. Do I have to install following this instruction: HOWTO: Install and use the latest FFmpeg and x264? Or is there a simpler solution?
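
    The failing stream #0.1 in the output section is the audio stream, and its codec shows as 0x0000, i.e. no AAC encoder was found for the MOV output, so the problem is AAC rather than H.264. Two hedged workarounds (exact behaviour depends on the build):

        # keep the original AAC track instead of re-encoding it (ffmpeg 0.5.x)
        ffmpeg -sameq -i example.mov -acodec copy \
            -vhook '/usr/local/lib/vhook/imlib2.so -x 0 -y 0 -i /var/www/files/watermark.png' \
            newexample.mov

        # on newer ffmpeg builds (vhook was removed), the overlay filter
        # replaces imlib2.so for watermarking
        ffmpeg -i example.mov -i /var/www/files/watermark.png \
            -filter_complex 'overlay=0:0' -acodec copy newexample.mov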

  • Why are there many processes listed under the same title in htop?

    - by javanix
    Can anyone explain to me why there are sometimes 10 or 15 processes with the same title and "stats" listed in htop? I'm guessing there are multiple threads running - but that many of them obviously couldn't be running concurrently. Is there any sort of performance hit taken if a process uses say, 15 non-concurrent threads vs. 10 non-concurrent threads?
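
    What htop shows by default is one line per thread, not per process: a multi-threaded program appears 10 or 15 times with identical stats. The H key (or Setup, then Display options, "Hide userland threads") collapses them. A quick way to confirm, with an illustrative PID:

        # NLWP = number of lightweight processes (threads) owned by the PID
        ps -o pid,nlwp,cmd -p 1234

    As for cost: threads that are blocked or idle consume little beyond their stack memory, so 15 mostly idle threads versus 10 is rarely a measurable performance hit.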

  • Home server - HP Proliant Microserver - Software and setup - OS on USB stick?

    - by Lloyd Watkin
    I've just purchased an HP ProLiant MicroServer for home use. I want to set it up with a web server, Samba shares, the usual stuff. My question is really about system setup. It has an internal USB socket, so I attempted to install a copy of Fedora 14 onto it. I turned off X/GNOME, but it still ran like a pig. I've now put the OS on one of the internal disks (250 GB, 7200 rpm), but I was wondering if there is a way to utilise the internal USB to give me better power saving, allowing the hard drives to be shut down when not in use. How would you set this server up? I'd rather not go to the extra cost of an SSD right now, but if that's the best way then so be it.
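
    Whichever disk ends up holding the OS, the data disks can be told to spin down when idle, which is where most of the saving is. A sketch with hdparm (device names and the timeout are assumptions):

        # -S 120 means 120 * 5 s = 10 minutes of inactivity before spindown
        sudo hdparm -S 120 /dev/sdb /dev/sdc

    The catch is that anything that logs or indexes onto those disks keeps waking them, so the OS, logs, and swap need to live elsewhere (the USB stick idea, or the 250 GB disk) for the spindown to actually happen.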

  • FTP Server vsftpd change ftp:nogroup

    - by pygorex1
    I'm running vsftpd using the Debian Lenny package. ftp:nogroup is the user/group that uploads files and owns uploaded files. However, a problem is arising - another process is also writing files to the FTP directory as myprocess:mygroup with restrictive file permissions, preventing vsftpd from overwriting the myprocess authored files. Is it possible to tell vsftpd to use a different user/group for uploading files? (preferably as myprocess:mygroup or ftp:mygroup)
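
    One possibility, sketched as an untested assumption about the setup: vsftpd's guest mode maps authenticated FTP logins onto an arbitrary local user, so uploads would be owned by myprocess and the overwrite conflict disappears.

        # vsftpd.conf: map ftp logins to the local user "myprocess"
        guest_enable=YES
        guest_username=myprocess
        # relax the umask so group members can overwrite uploads
        local_umask=002

    The non-vsftpd alternative is to make the other process write group-writable files as myprocess:mygroup and add the ftp user to mygroup.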

  • Some HTTPS connections via NAT fail, but work on the firewall itself

    - by hnxn
    I am having trouble establishing some HTTPS connections from internal machines, even though these same connections work if initiated on the firewall itself. The firewall machine is running Ubuntu 10.04.1 and Shorewall 4.4.6. The internet connection is Bell PPPoE DSL (in Canada). I have tried various MTU settings; it doesn't seem to make any difference. Other protocols (HTTP, FTP, etc.) generally work. The problem seems limited to certain sites; this one never works from an internal machine, but always works from the firewall itself. From an internal machine:

        $ wget https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        --2011-01-13 20:51:31--  https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        Resolving images.fedex.com... 184.24.96.69
        Connecting to images.fedex.com|184.24.96.69|:443... connected.
        ^C

    From the firewall:

        $ wget https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        --2011-01-13 20:58:28--  https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        Resolving images.fedex.com... 184.24.96.69
        Connecting to images.fedex.com|184.24.96.69|:443... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 840 [image/gif]
        Saving to: `corp_logo.gif'
        2011-01-13 20:58:28 (149 MB/s) - `corp_logo.gif' saved [840/840]

    This URL always works from both the internal machines and the firewall: https://encrypted.google.com/images/logos/ssl_logo_lg.gif. Any troubleshooting tips would be greatly appreciated!
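
    The symptom pattern (TCP connects, then the handshake stalls once the server sends its large certificate packets) is the classic PPPoE path-MTU problem; it spares the firewall because the firewall's own stack knows the PPPoE MTU, while NATed clients assume 1500. A guess worth testing is MSS clamping on forwarded traffic; in shorewall.conf:

        # clamp TCP MSS to the path MTU on forwarded packets
        CLAMPMSS=Yes

    This is equivalent to the usual iptables rule: iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu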

  • GRUB error: unknown filesystem

    - by Ali
    I replaced my old laptop drive, which was a Win7/Ubuntu dual boot, with an SSD. Now I have connected the old drive through a USB adapter and I want to boot from it. But this comes up:

        unknown filesystem
        grub rescue>

    As I need the programs from the old drive, I have to boot from it from time to time, and I don't want to install that software on the new drive. It takes some time to exchange the drives, so I want to boot from USB. How can I fix this?
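
    GRUB on the old drive is looking for its files on the partition it was installed from, which has a different device name now that the drive hangs off USB. A sketch of the usual rescue sequence (the partition numbers are guesses; ls shows what is actually there):

        grub rescue> ls
        (hd0) (hd0,msdos1) (hd0,msdos5) ...
        grub rescue> ls (hd0,msdos5)/boot/grub    # find the partition holding grub's .mod files
        grub rescue> set prefix=(hd0,msdos5)/boot/grub
        grub rescue> set root=(hd0,msdos5)
        grub rescue> insmod normal
        grub rescue> normal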

  • Supervisor vs cron job

    - by Guandalino
    Currently I'm using Supervisor to monitor a process and restart it when it stops for some reason. The problem is that if Supervisor itself crashes, the process stops being monitored. So I thought I would schedule a cron job to check that Supervisor is running, and restart it if necessary. The next thing I'm considering is to get rid of Supervisor and check my process directly from the cron job. I have read that Supervisor sometimes uses too much memory (to be verified, though). What are the pros of having Supervisor monitor the process versus a cron job?
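
    A sketch of the cron half of the idea (names and paths are illustrative); the same one-liner works whether it watches supervisord or the process itself:

        # crontab entry: every 5 minutes, restart supervisord if it is gone
        */5 * * * * pgrep -x supervisord >/dev/null || /etc/init.d/supervisor start

    The trade-off in the question boils down to this: cron gives at best one restart per minute and keeps no state, while Supervisor restarts within seconds and captures stdout/stderr; running both (Supervisor watching the process, cron as a dead-man switch for Supervisor) keeps both properties.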

  • rsync - Exclude files that are over a certain size?

    - by Rory
    I am doing a backup of my desktop to a remote machine. I'm basically doing

        rsync -a ~ example.com:backup/

    However there are loads of large files, e.g. Wikipedia dumps etc. Most of the files I care a lot about are small, such as Firefox cookie files, or .bashrc. Is there some invocation of rsync that will exclude files that are over a certain size? That way I could copy all files that are less than 10MB first, then do all files. That way I can do a fast backup of the most important files, then a longer backup of everything else.
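
    rsync has exactly this switch: --max-size (with a matching --min-size). A sketch of the two-pass backup described above:

        # pass 1: everything under 10 MB (fast; catches dotfiles and cookies)
        rsync -a --max-size=10m ~ example.com:backup/
        # pass 2: the rest
        rsync -a ~ example.com:backup/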

  • OpenWRT based gateway with dnsmasq and internal server with bind

    - by Peter
    I have a router based on OpenWRT, which has dnsmasq 2.59. Inside my local area network I have a BIND name server. This server has internal and external views for a couple of my domains. My router forwards port 53 TCP and UDP from the outside IP (router WAN) to this server. For external clients everything works fine. In order to organize the internal view, I decided to add these exceptions to /etc/dnsmasq.conf (192.168.1.1 is the IP address of the BIND server):

        server=/mydomain1.com/192.168.1.1
        server=/mydomain2.com/192.168.1.1
        server=/mydomain3.com/192.168.1.1

    According to the dnsmasq man page:

        More specific domains take precedence over less specific domains, so:
        --server=/google.com/1.2.3.4 --server=/www.google.com/2.3.4.5
        will send queries for *.google.com to 1.2.3.4, except *www.google.com,
        which will go to 2.3.4.5

    so this domain name, with all its sub-domains, is supposed to be forwarded to my name server. Everything works (SOA, NS, MX, CNAME, TXT, SRV, etc.) except for the A record:

        # nslookup -type=a mydomain1.com
        Server:   192.168.1.100
        Address:  192.168.1.100#53

        *** Can't find mydomain1.com: No answer

    (192.168.1.100 is the IP address of my router, i.e. dnsmasq.) However, I can get the answer for the TXT record query:

        # nslookup -type=txt mydomain1.com
        Server:   192.168.1.100
        Address:  192.168.1.100#53

        mydomain1.com  text = "v=spf1 include:mydomain1.com -all"

    When I specify the local IP of my name server (direct access to the server, bypassing dnsmasq), the results are:

        # nslookup -type=a mydomain1.com 192.168.1.1
        Server:   192.168.1.1
        Address:  192.168.1.1#53

        Name:    mydomain1.com
        Address: 192.168.1.1

    There is a similar situation with the MX record:

        C:\>nslookup -type=mx mydomain1.com
        Server:  router.lan
        Address: 192.168.1.100

        mydomain1.com  MX preference = 10, mail exchanger = mail.mydomain1.com
        mydomain1.com  nameserver = ns.mydomain1.com
        mail.mydomain1.com  internet address = 192.168.1.1
        ns.mydomain1.com    internet address = 192.168.1.1

        C:\>nslookup -type=a mail.mydomain1.com
        Server:  router.lan
        Address: 192.168.1.100

        *** No address (A) records available for mail.mydomain1.com

    This is the dig result:

        # dig +nocmd mydomain1.com any +multiline +noall +answer
        mydomain1.com. 86400 IN SOA ns.mydomain1.com. hostmaster.mydomain1.com. (
                        121204007  ; serial
                        28800      ; refresh (8 hours)
                        7200       ; retry (2 hours)
                        604800     ; expire (1 week)
                        3600       ; minimum (1 hour)
                        )
        mydomain1.com. 86400 IN NS ns.mydomain1.com.
        mydomain1.com. 86400 IN A 192.168.1.1
        mydomain1.com. 604800 IN MX 10 mail.mydomain1.com.
        mydomain1.com. 3600 IN TXT "v=spf1 include:mydomain1.com -all"

    When I try to ping:

        # ping mydomain1.com
        ping: cannot resolve mydomain1.com: Unknown host

    Is this a bug in dnsmasq 2.59? How do I deal with this problem?
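
    A plausible explanation, offered as an assumption rather than a diagnosis: OpenWRT ships dnsmasq with DNS-rebind protection enabled, and that feature silently discards upstream A/AAAA answers pointing into private ranges such as 192.168.1.1, which is exactly why TXT, MX, and SOA come through while A records vanish. dnsmasq can whitelist the affected zones:

        # /etc/dnsmasq.conf: allow RFC 1918 addresses in answers for these zones
        rebind-domain-ok=/mydomain1.com/mydomain2.com/mydomain3.com/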

  • SFTP chroot results in broken pipe

    - by Patrick Pruneau
    I have a website to which I want to add some restricted access in a sub-folder. For this, I've decided to use a chroot with SFTP (I mostly followed this link: http://shapeshed.com/chroot_sftp_users_on_ubuntu_intrepid/). So far I've created a user (sio2104) and a group (magento). After following the guide, my folder listing looks like this:

        -rw-r--r--  1 root root   27 2012-02-01 14:23 index.html
        -rw-r--r--  1 root root   21 2012-02-01 14:24 info.php
        drwx------ 15 root root 4096 2012-02-25 00:31 magento

    As you can see, I've chowned the magento folder root:root, because I wanted to jail the user in (and everything else, by the way). Inside the magento folder I chowned everything sio2104:magento so they can access what they want. Finally, I added this to the sshd_config file:

        #Subsystem sftp /usr/lib/openssh/sftp-server
        Subsystem sftp internal-sftp
        Match Group magento
          ChrootDirectory /usr/share/nginx/www/magento
          ForceCommand internal-sftp
          AllowTCPForwarding no
          X11Forwarding no
          PasswordAuthentication yes
          #UsePAM yes

    And the result is... well, I can enter my login and password, and it all ends with a "broken pipe" error:

        $ sftp [email protected]
        [....some debug....]
        [email protected]'s password:
        debug1: Authentication succeeded (password).
        Authenticated to 10.20.0.50 ([10.20.0.50]:22).
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        Write failed: Broken pipe
        Connection closed

    Verbose mode gives nothing to help. Does anyone have an idea of what I've done wrong? If I log in with SSH or SFTP as my personal user, everything works fine.
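
    A likely culprit, guessed from the listing above: sshd insists that every path component of ChrootDirectory be owned by root and not group- or world-writable, yet still traversable by the user, and drwx------ root:root means sio2104 cannot even enter the jail, so the session dies on the spot. A sketch of the usual layout:

        # jail root: owned by root, mode 755 so the user can traverse it
        chown root:root /usr/share/nginx/www/magento
        chmod 755 /usr/share/nginx/www/magento
        # give the user a writable area inside the jail
        mkdir -p /usr/share/nginx/www/magento/upload
        chown sio2104:magento /usr/share/nginx/www/magento/upload

    Checking /var/log/auth.log on the server usually names the offending directory explicitly ("bad ownership or modes for chroot directory ...").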

  • Hosting websites in our workplace's custom-built datacentre

    - by i.h4d35
    I'm faced with a unique learning opportunity at work at the moment. Due to the slowdown (amongst other reasons), the powers that be at my office have decided to abandon our hosting providers (both shared and dedicated hosting) and host the websites in our office's datacentre. We're running 7 websites, with an average of about 900 unique hits per day at the moment. We have 2 servers set aside for this: a Dell PowerEdge 1850 (2x 3 GHz Intel Xeon, 4 GB RAM, 73 GB HDD) and an HP DL380 G3 (2.8 GHz Intel Xeon, 6 GB RAM, 73 GB HDD).

    a) I would like to know the pros and cons of going ahead with this project. All the sites will be hosted on a single IP. In all probability, the OS is going to be CentOS.

    b) Do you think I should bring virtualization into the equation (KVM/Xen)? I was thinking in terms of separate instances for the DB server and the frontend, though I do not know if this is the best way to go.

    c) Should I try using cloud stacks like OpenStack to make it look like websites hosted on some sort of public cloud? (Something I checked out here.) Here is something else I came across, which looks similar to what needs to be done at our office.

    About the websites: of the 7, 4 are basic static websites which give a lot of information about a few local institutions. The remaining 3 are local product-based websites developed in PHP, where end users can view products and order them online. I am trying to take this as a learning experience where I can build something from scratch and save the company a little something in the process. The migration needs to be completed by Easter, so I guess that gives us some time (or am I being overly optimistic?). I am confused here and would appreciate all the help I can get. Thanks in advance.

  • Millions of files in PHP's tmp folder - how to delete them?

    - by Jonatan Littke
    Hey. I've got a tmp folder with 14 million PHP session files in my home directory. At least that's what I think it is; it's not like I could ls it or anything. How can I empty this folder? I've tried using find with the -exec rm {} \; option, but that didn't work. ls 'sess_0*' | xargs rm didn't either. I'm currently running rm -rf tmp, but after two hours the folder appears to be the same size.

    Reference info: I suddenly encountered an error where sessions could no longer be written to disk:

        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: open(/var/www/clients/client1/web1/tmp/sess_8e12742b62aa68a3f9476ec80222bbfb, O_RDWR) failed: No space left on device (28) in Unknown on line 0
        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/www/clients/client1/web1/tmp) in Unknown on line 0

    I ran:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/md0              457G  126G  308G  29% /
        tmpfs                 1.8G     0  1.8G   0% /lib/init/rw
        udev                   10M  664K  9.4M   7% /dev
        tmpfs                 1.8G     0  1.8G   0% /dev/shm

    But as you can see, the disk isn't full. So I had a look in the syslog, which says the following 20 times per second:

        kernel: [19570794.361241] EXT3-fs warning (device md0): ext3_dx_add_entry: Directory index full!

    This led me to think of a full folder, obviously, but since my web folder only has 60k files (having counted them), I guessed it was the tmp folder (the local one, for this instance of PHP) that messed things up. Some commands I ran:

        $ sudo ls sess_a* | xargs rm -f
        bash: /usr/bin/sudo: Argument list too long

        $ find . -exec rm {} \;
        rm: cannot remove directory '.'
        find: cannot fork: Cannot allocate memory

    I'm running Debian Lenny, PHP5, ISPConfig, suEXEC and FastCGI.
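
    The glob-based attempts fail because the shell expands millions of names into one argument list, and find -exec rm {} \; forks a new rm per file until the box runs out of memory. A sketch of approaches that stream the deletion instead:

        # GNU find unlinks entries as it walks the directory; no glob, no fork per file
        find /var/www/clients/client1/web1/tmp -maxdepth 1 -type f -name 'sess_*' -delete

        # equivalent with xargs, feeding rm 1000 names per invocation
        find . -maxdepth 1 -type f -name 'sess_*' -print0 | xargs -0 -n 1000 rm -f

    The -type f test also avoids the "cannot remove directory '.'" error, since a bare find . matches the directory itself first.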

  • Synchronous network audio

    - by intuited
    I'd like to have an audio transmission shared among computers on a LAN. Although there are various systems to do this -- shoutcast/icecast, pulseaudio, etc. -- I'm not aware of any that provide synchronization. I'd like to have different computers in the house playing the same audio, and have the same sample playing at the same time. Is there a system which can do this?
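
    One candidate worth testing (a sketch; it aligns playback by buffering and is not guaranteed to be sample-accurate): PulseAudio's RTP modules multicast one stream that every receiver on the LAN plays.

        # sender: create a sink whose monitor is multicast over RTP
        pactl load-module module-null-sink sink_name=rtp
        pactl load-module module-rtp-send source=rtp.monitor

        # on each receiving machine
        pactl load-module module-rtp-recv

    Clock drift between sound cards still accumulates over long sessions, so how close "the same sample at the same time" gets in practice is worth measuring before relying on it.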

  • CentOS 5.5 [read-only file system] issue after rebooting

    - by canu johann
    I have a virtual server running CentOS 5.5 (hosted by a Japanese company called Sakura). Since yesterday, connections over SSH could not be established. I contacted the support centre, who told me to restart the VS from the control panel. After restarting, I got the messages below:

        Connected to domain wwwxxxxxx.sakura.ne.jp
        Escape character is ^]
        [  OK  ]
        Setting hostname localhost.localdomain:  [  OK  ]
        Setting up Logical Volume Management:   No volume groups found
        [  OK  ]
        Checking filesystems
        Checking all file systems.
        [/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/vda3
        / contains a file system with errors, check forced.
        /: Inodes that were part of a corrupted orphan linked list found.
        /: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
                (i.e., without -a or -p options)
        cat: /proc/self/attr/current: Invalid argument
        Welcome to CentOS
        Starting udev: [  OK  ]
        Setting hostname localhost.localdomain:  [  OK  ]
        Setting up Logical Volume Management:   No volume groups found
        [  OK  ]
        Checking filesystems
        Checking all file systems.
        [/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/vda3
        / contains a file system with errors, check forced.
        /: Inodes that were part of a corrupted orphan linked list found.
        /: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
                (i.e., without -a or -p options)
        [FAILED]

        *** An error occurred during the file system check.
        *** Dropping you to a shell; the system will reboot
        *** when you leave the shell.
        *** Warning -- SELinux is active
        *** Disabling security enforcement for system recovery.
        *** Run 'setenforce 1' to reenable.
        /etc/rc.d/rc.sysinit: line 53: /selinux/enforce: Read-only file system
        Give root password for maintenance
        (or type Control-D to continue):
        bash: cannot set terminal process group (-1): Inappropriate ioctl for device
        bash: no job control in this shell
        bash: cannot create temp file for here-document: Read-only file system
        (the line above repeats several times)

        (Repair filesystem) 1 # setenforce 1
        setenforce: SELinux is disabled
        (Repair filesystem) 2 # echo 1
        (Repair filesystem) 4 # /etc/init.d/sshd status
        openssh-daemon is stopped
        (Repair filesystem) 5 # /etc/init.d/sshd start
        Starting sshd: NET: Registered protocol family 10
        lo: Disabled Privacy Extensions
        touch: cannot touch `/var/lock/subsys/sshd': Read-only file system
        (Repair filesystem) 6 # sudo /etc/init.d/sshd start
        sudo: sorry, you must have a tty to run sudo
        (Repair filesystem) 7 #

    I have 4 sites in production and I need to restart the server quickly (SSH + HTTPD, ...). Thank you for your time.
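
    The boot log asks for exactly one thing: a manual fsck of the root device named in the message. A sketch of the recovery from the maintenance shell (device name taken from the log above):

        # full check; -y answers yes to each orphan-inode repair prompt
        fsck.ext4 -f -y /dev/vda3
        # then leave the shell (Ctrl-D) or reboot; the root fs should come back read-write
        reboot

    Starting sshd from the repair shell fails precisely because / is still mounted read-only; the fsck-then-reboot order matters.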

  • Start multiple Firefoxes; Xephyr rootless mode

    - by Vi
    How can I have multiple independent instances of Mozilla Firefox 3.5 on the same X server, but started from different user accounts (and consequently different profiles)? My only limited success was with Xephyr :1 and DISPLAY=:1 /usr/local/bin/firefox, but Xephyr has no equivalent of Cygwin/X's "rootless" mode, so it is not comfortable. The idea is to have one Firefox instance for various "Serious Business" things and another for regular browsing, with dozens of add-ons, securely isolated.
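
    A sketch of doing this without Xephyr (user names and profile names are illustrative): Firefox's -no-remote flag lets several instances coexist on one display, and xhost's server-interpreted entries let a second local account draw on it.

        # allow the second local account onto this X display
        xhost +si:localuser:otheruser

        # first instance, current user, dedicated profile
        # (create profiles beforehand with: firefox -CreateProfile serious)
        firefox -no-remote -P serious &

        # second instance under the other account, same display
        sudo -u otheruser DISPLAY=:0 firefox -no-remote -P browsing &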

  • RAID 5 GPT partitioning

    - by user39325
    I have a Dell PowerEdge R710 server with five 1 TB disks, all of them in RAID 5. I was trying to install CentOS but it says "Your boot partition is on a disk using the GPT partitioning scheme...". I read somewhere that CentOS can't install on a 2 TB disk, so I made some partitions smaller, but it's not working. Any idea? P.S. I am going to install Proxmox on it, but Proxmox likewise doesn't accept 2 TB disks...
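
    Some background plus a sketch (device name and sizes are illustrative): five 1 TB disks in RAID 5 present roughly a 4 TB volume, past the 2 TiB limit of MBR, so the installer must use GPT, and BIOS-booting a GPT disk requires a tiny "BIOS boot" partition for GRUB to embed itself into:

        # GPT label, 1 MiB BIOS-boot partition, then a /boot partition
        parted /dev/sda mklabel gpt
        parted /dev/sda mkpart biosboot 1MiB 2MiB
        parted /dev/sda set 1 bios_grub on
        parted /dev/sda mkpart boot ext3 2MiB 502MiB

    An alternative often used on PERC controllers: carve a small virtual disk (under 2 TB) out of the array for the OS and leave the remainder as a second virtual disk for data.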

  • Is there any wiki that supports ACL, AD integration and an API? [closed]

    - by goutham
    Possible Duplicate: Which wiki satisfies ACL, AD and API?

    Is there any wiki that supports ACLs, AD and an API? My requirement is a wiki that does three things:

    1. Uses ACLs (access control lists: who can access which pages).
    2. Has AD (Active Directory) integration.
    3. Is scriptable via an API (meaning I can create a wiki page through an API in a program instead of logging in and manually typing in the page).

    Your help is appreciated. Thanks in advance, Goutham
