Search Results

Search found 14531 results on 582 pages for 'proxy pass'.


  • Solaris 10 x86 mirror: making the second disk bootable in case of failure

    - by Kani
    Did a mirror (RAID1) with Solaris 10 in x86. Everything OK. Now, I´m trying to make the second disk booteable, this is: from grub or in case of failure of disk1. I edited /boot/grub/menu.lst: #---------- ADDED BY BOOTADM - DO NOT EDIT ---------- title Solaris 10 9/10 s10x_u9wos_14a X86 findroot (rootfs1,0,a) kernel /platform/i86pc/multiboot module /platform/i86pc/boot_archive #---------------------END BOOTADM-------------------- #---------- ADDED BY BOOTADM - DO NOT EDIT ---------- title Solaris failsafe findroot (rootfs1,0,a) kernel /boot/multiboot -s module /boot/amd64/x86.miniroot-safe #---------------------END BOOTADM-------------------- #---------- ADDED BY BOOTADM - DO NOT EDIT ---------- title Solaris failsafe findroot (rootfs1,0,a) kernel /boot/multiboot kernel/unix -s module /boot/x86.miniroot-safe #---------------------END BOOTADM-------------------- #Make second disk booteable!!!!!!! title alternate boot findroot (rootfs1,1,a) kernel /boot/multiboot kernel/unix -s module /boot/x86.miniroot-safe But is not working. In the BIOS, when I select "alternate boot" I get: Error 15: 15 file not found also, how to configure to GRUB to make the disk2 to boot in case of error in disk1? Additionally, I did (but not related to GRUB): eeprom altbootpath=/devices/pci@0,0/pci108e,5352@1f,2/disk@1,0:a Here is the output of some commands that may help you: /sbin/biosdev 0x80 /pci@0,0/pci108e,5352@1f,2/disk@0,0 0x81 /pci@0,0/pci108e,5352@1f,2/disk@1,0 ls -l /dev/dsk/c1t?d0s0 lrwxrwxrwx 1 root root 50 Jul 7 12:01 /dev/dsk/c1t0d0s0 -> ../../devices/pci@0,0/pci108e,5352@1f,2/disk@0,0:a lrwxrwxrwx 1 root root 50 Jul 7 12:01 /dev/dsk/c1t1d0s0 -> ../../devices/pci@0,0/pci108e,5352@1f,2/disk@1,0:a more /boot/solaris/bootenv.rc setprop ata-dma-enabled '1' setprop atapi-cd-dma-enabled '0' setprop ttyb-rts-dtr-off 'false' setprop ttyb-ignore-cd 'true' setprop ttya-rts-dtr-off 'false' setprop ttya-ignore-cd 'true' setprop ttyb-mode '9600,8,n,1,-' setprop ttya-mode '9600,8,n,1,-' setprop lba-access-ok '1' setprop prealloc-chunk-size '0x2000' setprop bootpath '/pci@0,0/pci108e,5352@1f,2/disk@0,0:a' setprop keyboard-layout 'US-English' setprop console 'text' setprop altbootpath '/pci@0,0/pci108e,5352@1f,2/disk@1,0:a' cat /etc/vfstab #device device mount FS fsck mount mount #to mount to fsck point type pass at boot options # fd - /dev/fd fd - no - /proc - /proc proc - no - #/dev/dsk/c1t0d0s1 - - swap - no - /dev/md/dsk/d1 - - swap - no - /dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no - /devices - /devices devfs - no - sharefs - /etc/dfs/sharetab sharefs - no - ctfs - /system/contract ctfs - no - objfs - /system/object objfs - no - swap - /tmp tmpfs - yes - df -h Filesystem size used avail capacity Mounted on /dev/md/dsk/d0 909G 11G 889G 2% / /devices 0K 0K 0K 0% /devices ctfs 0K 0K 0K 0% /system/contract proc 0K 0K 0K 0% /proc mnttab 0K 0K 0K 0% /etc/mnttab swap 14G 972K 14G 1% /etc/svc/volatile objfs 0K 0K 0K 0% /system/object sharefs 0K 0K 0K 0% /etc/dfs/sharetab /usr/lib/libc/libc_hwcap1.so.1 909G 11G 889G 2% /lib/libc.so.1 fd 0K 0K 0K 0% /dev/fd swap 14G 40K 14G 1% /tmp swap 14G 28K 14G 1% /var/run
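
    A GRUB menu entry alone does not make the second disk bootable from the BIOS; the GRUB stage files also have to be installed on that disk. As a sketch (assuming the second submirror's root slice is c1t1d0s0, as the ls -l output above suggests; adjust to your layout):

        # Install GRUB onto the second disk so the BIOS can boot it directly
        /sbin/installgrub -fm /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
        # Refresh the boot archive afterwards
        bootadm update-archive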

    Read the article

  • Tutorial for configuring OpenVPN [on hold]

    - by user2699451
    I have been through 10+ tutorials on setting up a OpenVPN, and each tutorial gives a different problem... Does anyone know of a decent and helpful website/tutorial which I could go to to get it set up? I have been battling through it for almost 2 months now. Yes, I have also bugged forums.openvpn, but I think I have "reached my post limit" with them. I have to configure it remotely via ssh. UPDATE: okay, I have been asked to be more clear on the topic I followed this tutorial (as a example) - http://www.servermom.com/how-to-build-openvpn-server-on-centos-6-x/732/ I had no issues setting up, etc. except when I boot into windows and run the OpenVPN GUI Client, it connects and gives this error: WARNING: Bad encapsulated packet length from peer (21331), which must be 0 and <= 1576 -- please ensure that --tun-mtu or --link-mtu is equal on both peers -- this condition could also indicate a possible active attack on the TCP link -- [Attemping restart...] Here is my server config: port 1194 #- port proto udp #- protocol dev tun tun-mtu 1500 tun-mtu-extra 32 mssfix 1450 reneg-sec 0 ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt cert /etc/openvpn/easy-rsa/2.0/keys/server.crt key /etc/openvpn/easy-rsa/2.0/keys/server.key dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem plugin /usr/lib64/openvpn/plugin/lib/openvpn-auth-pam.so /etc/pam.d/login #- Co$ #plugin /etc/openvpn/radiusplugin.so /etc/openvpn/radiusplugin.cnf #- Uncomment$ client-cert-not-required username-as-common-name server 10.8.0.0 255.255.255.0 push "redirect-gateway def1" push "dhcp-option DNS 8.8.8.8" push "dhcp-option DNS 8.8.4.4" keepalive 5 30 comp-lzo persist-key persist-tun status 1194.log verb 3 and my client config: client dev tun proto udp remote [server ip] 1194 # - Your server IP and OpenVPN Port resolv-retry infinite nobind tun-mtu 1500 tun-mtu-extra 32 mssfix 1450 persist-key persist-tun ca ca.crt auth-user-pass comp-lzo reneg-sec 0 verb 3 OpenVPN Client Log: Thu Oct 31 11:51:29 2013 OpenVPN 2.0.9 Win32-MinGW [SSL] [LZO] built on Oct 1 2006 Thu Oct 31 11:51:44 2013 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port. Thu Oct 31 11:51:44 2013 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info. Thu Oct 31 11:51:44 2013 LZO compression initialized Thu Oct 31 11:51:44 2013 Control Channel MTU parms [ L:1576 D:140 EF:40 EB:0 ET:0 EL:0 ] Thu Oct 31 11:51:44 2013 Data Channel MTU parms [ L:1576 D:1450 EF:44 EB:135 ET:32 EL:0 AF:3/1 ] Thu Oct 31 11:51:44 2013 Local Options hash (VER=V4): '2547efd2' Thu Oct 31 11:51:44 2013 Expected Remote Options hash (VER=V4): '77cf0943' Thu Oct 31 11:51:44 2013 Attempting to establish TCP connection with x.x.x.x:1194 Thu Oct 31 11:51:44 2013 TCP connection established with x.x.x.x:1194 Thu Oct 31 11:51:44 2013 TCPv4_CLIENT link local: [undef] Thu Oct 31 11:51:44 2013 TCPv4_CLIENT link remote: x.x.x.x:1194 // after this it just hangs, nothing happens So I dont know what I am doing wrong but I am getting a bit impatient and on each forum I post this, I get stupid/unrelated/unhelpful answers...
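
    One detail in the client log is worth ruling out before anything else: the server config above says proto udp, yet the log shows the client "Attempting to establish TCP connection", despite proto udp in the client config - a TCP/UDP mismatch by itself can explain the hang and the bad-packet-length warning, and the 2.0.9 GUI (built in 2006) may be loading a different config file than the one shown. A quick server-side check, as a sketch:

        # On the CentOS server (as root): confirm OpenVPN listens on UDP 1194, not TCP
        netstat -anup | grep 1194    # should show the openvpn process
        netstat -antp | grep 1194    # should show nothing if proto udp is in effect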

    Read the article

  • How is incoming SMTP mail being delivered despite blocked port

    - by Josh
    I setup a MX mail server, everything works despite port 25 being blocked, I'm stumped as to why I am able to receive email with this setup, and what the consequences might be if I leave it this way. Here are the details: Connections to SMTP over port 25 and 587 both reliably connect over my local network. Connections to SMTP over port 25 are blocked from external IPs (the ISP is blocking the port). Connections to Submission SMTP over port 587 from external IPs are reliable. Emails sent from gmail, yahoo, and a few other addresses all are being delivered. I haven't found an email provider that fails to deliver mail to my MX. So, with port 25 blocked, I am assuming other MTA servers fallback to port 587, otherwise I can't imagine how the mail is received. I know port 25 shouldn't be blocked, but so far it works. Are there mail servers that this will not work with? Where can I find more about how this is working? -- edit More technical detail, to validate that I'm not missing something silly. Obviously in the transcript below I've replaced my actual domain with example.com. # DNS MX record points to the A record. $ dig example.com MX +short 1 example.com $ dig example.com A +short <Public IP address> # From a public server (not my ISP hosting the mail server) # We see port 25 is blocked, but port 587 is open $ telnet example.com 25 Trying <public ip>... telnet: Unable to connect to remote host: Connection refused # Let's try openssl $ openssl s_client -starttls smtp -crlf -connect example.com:25 connect: Connection refused connect:errno=111 # Again from a public server, we see port 587 is open $ telnet example.com 587 Trying <public ip>... Connected to example.com. Escape character is '^]'. 220 example.com ESMTP Postfix ehlo example.com 250-example.com 250-PIPELINING 250-SIZE 10485760 250-VRFY 250-ETRN 250-STARTTLS 250-ENHANCEDSTATUSCODES 250-8BITMIME 250-DSN 250-BINARYMIME 250 CHUNKING quit 221 2.0.0 Bye Connection closed by foreign host. Here is a portion from the mail log when receiving a message from gmail: postfix/postscreen[93152]: CONNECT from [209.85.128.49]:48953 to [192.168.0.10]:25 postfix/postscreen[93152]: PASS NEW [209.85.128.49]:48953 postfix/smtpd[93160]: connect from mail-qe0-f49.google.com[209.85.128.49] postfix/smtpd[93160]: 7A8C31C1AA99: client=mail-qe0-f49.google.com[209.85.128.49] The log shows that a connection was made to the local IP on port 25 (I'm not doing any port mapping, so it is port 25 on the public IP too). Seeing this leads me to hypothesize that the ISP block on port 25 only occurs when a connection is made from an IP address that is not known to be a mail server. Any other theories?
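
    The maillog excerpt already answers part of this: postscreen logged "CONNECT ... to [192.168.0.10]:25", so Gmail's MTA reached port 25 directly. MTAs deliver to port 25 as published by the MX record and do not fall back to 587, which is for authenticated submission; that supports the theory that the ISP block is selective rather than total. A sketch for confirming which port and daemon each inbound message actually hits:

        # Show recent inbound connections and which Postfix daemon accepted them
        sudo grep -E 'postscreen|postfix/smtpd.*connect from' /var/log/maillog | tail -n 20
        # (on Debian/Ubuntu the log is /var/log/mail.log instead)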

    Read the article

  • PHP & MySQL on Mac OS X: Access denied for GUI user

    - by Eirik Lillebo
    Hey! This question was first posted to Stack Overflow, but as it is perhaps just as much a server issue I though it might be just as well to post it here also. I have just installed and configured Apache, MySQL, PHP and phpMyAdmin on my Macbook in order to have a local development environment. But after I moved one of my projects over to the local server I get a weird MySQL error from one of my calls to mysql_query(): Access denied for user '_securityagent'@'localhost' (using password: NO) First of all, the query I'm sending to MySQL is all valid, and I've even testet it through phpMyAdmin with perfect result. Secondly, the error message only happens here while I have at least 4 other mysql connections and queries per page. This call to mysql_query() happens at the end of a really long function that handles data for newly created or modified articles. This basically what it does: Collect all the data from article form (title, content, dates, etc..) Validate collected data Connect to database Dynamically build SQL query based on validated article data Send query to database before closing the connection Pretty basic, I know. I did not recognize the username "_securityagent" so after a quick search I came across this from and article at Apple's Developer Connection talking about some random bug: Mac OS X's security infrastructure gets around this problem by running its GUI code as a special user, "_securityagent". Then I tried put a var_dump() on all variables used in the mysql_connect() call, and every time it returns the correct values (where username is not "_securityagent" of course). Thus I'm wondering if anyone has any idea why 'securityagent' is trying to connect to my database - and how I can keep this error from occurring when I call mysql_query(). Update: Here is the exact code I'm using to connect to the database. But a little explanation must follow: The connection error happens at a call to mysql_query() in function X in class_1 class_1 uses class_2 to connect to database class_2 reads a config file with the database connection variables (host, user, pass, db) class_2 connect to the database through the following function: var $SYSTEM_DB_HOST = ""; function connect_db() { // Reads the config file include('system_config.php'); if (!($SYSTEM_DB_HOST == "")) { mysql_connect($SYSTEM_DB_HOST, $SYSTEM_DB_USER, $SYSTEM_DB_PASS); @mysql_select_db($SYSTEM_DB); return true; } else { return false; } }

    Read the article

  • Ubuntu server 10.04 doesn't boot into installed Gnome desktop automatically

    - by Tong Wang
    I've installed Ubuntu server 10.04 and then installed Gnome desktop on top of it, because I am new to Linux and its command line, I need the GUI desktop to help me get around. However, the problem I got is that the server doesn't boot into the GUI desktop when powered on. It's booting into a shell like this: Gave up waiting for root device. Common problems: - Boot args (cat /proc/cmdline) - Check rootdelay= (did the system wait long enought?) - check root= (did the system wait for the right device?) - Missing modules (cat /proc/modules; ls /dev) ALERT! /dev/mapper/cecdata-root does not exist. Dropping to a shell! BusyBox v1.13.3 (Ubuntu 1:1.13.3-1ubuntu11) built-in shell (ash) Enter 'help' for a list of built-in commands. (initramfs) result of (cat /proc/cmdline) BOOT_IMAGE=/vmlinuz-2.6.32-28-server root=/dev/mapper/cecdata-root ro quiet Then I have type "exit" to exit the shell and then it boots into Gnome. Any idea what's wrong? Edit: add output for the following commands wt@cecdata:~$ ls /dev/mapper/ cecdata-root cecdata-swap_1 control wt@cecdata:~$ fdisk -l wt@cecdata:~$ wt@cecdata:~$ cat /etc/fstab # /etc/fstab: static file system information. # # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 /dev/mapper/cecdata-root / ext4 errors=remount-ro 0 1 # /boot was on /dev/sda1 during installation UUID=1635be41-d025-405e-b4a3-6f0abedb7aab /boot ext2 defaults 0 2 /dev/mapper/cecdata-swap_1 none swap sw 0 0 wt@cecdata:~$ Adding output for lsmod wt@cecdata:~$ lsmod Module Size Used by fbcon 39270 71 tileblit 2487 1 fbcon font 8053 1 fbcon bitblit 5811 1 fbcon softcursor 1565 1 bitblit dell_wmi 2177 0 dcdbas 6918 0 vga16fb 12757 1 vgastate 9857 1 vga16fb psmouse 64576 0 serio_raw 4950 0 power_meter 9473 0 bnx2 72874 0 lp 9336 0 parport 37160 1 lp mptsas 50592 2 usbhid 41116 0 mptscsih 37167 1 mptsas hid 83568 1 usbhid mptbase 91674 2 mptsas,mptscsih scsi_transport_sas 33021 1 mptsas
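
    Since the box boots fine once the initramfs shell is exited, the LVM root volume is simply not ready when the initramfs first looks for it. Two low-risk things to try from the running system, as a sketch:

        # Rebuild the initramfs so the LVM/device-mapper pieces are definitely included
        sudo update-initramfs -u -k all
        # Give the root LV more time to appear: add rootdelay=90 to
        # GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub (GRUB 2), then regenerate
        sudo update-grub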

    Read the article

  • Windows can see Ubuntu Server printer, but can't print to it

    - by Mike
    I have an old desktop that I'm trying to set up as a home backup/print server. Backup was trivial, but am having issues getting the printing to work. The printer is connected to the server running Ubuntu Server 9.10 (no gui). If I access the printer via http://hostname:631/printers/, I am able to print a test page, so I know the printer is working; however, I am having no luck from Windows. Windows can see the printer when browsed via \hostname\, but I am unable to connect. Windows says "Windows cannot connect to the printer" without indicating why. Any suggestions? From /etc/samba/smb.conf: [global] workgroup = WORKGROUP dns proxy = no security = user username map = /etc/samba/smbusers encrypt passwords = true passdb backend = tdbsam obey pam restrictions = yes unix password sync = yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . pam password change = yes map to guest = bad user load printers = yes printing = cups printcap name = cups [printers] comment = All Printers browseable = no path = /var/spool/samba writable = no printable = yes guest ok = yes read only = yes create mask = 0700 [print$] comment = Printer Drivers path = /var/lib/samba/printers browseable = yes read only = yes guest ok = yes From /etc/cups/cupsd.conf: LogLevel warn SystemGroup lpadmin Port 631 Listen /var/run/cups/cups.sock Browsing On BrowseOrder allow,deny BrowseAllow all BrowseRemoteProtocols CUPS BrowseAddress @LOCAL BrowseLocalProtocols CUPS dnssd DefaultAuthType Basic <Location /> Order allow,deny Allow all </Location> <Location /admin> Order allow,deny Allow all </Location> <Location /admin/conf> AuthType Default Require user @SYSTEM Order allow,deny Allow all </Location> <Policy default> <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job CUPS-Get-Document> Require user @OWNER @SYSTEM Order deny,allow </Limit> <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices> AuthType Default Require user @SYSTEM Order deny,allow </Limit> <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs> AuthType Default Require user @SYSTEM Order deny,allow </Limit> <Limit CUPS-Authenticate-Job> Require user @OWNER @SYSTEM Order deny,allow </Limit> <Limit All> Order deny,allow </Limit> </Policy> <Policy authenticated> <Limit Create-Job Print-Job Print-URI> AuthType Default Order deny,allow </Limit> <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job CUPS-Get-Document> AuthType Default Require user @OWNER @SYSTEM Order deny,allow </Limit> <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default> AuthType Default Require user @SYSTEM Order deny,allow </Limit> <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs 
Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs> AuthType Default Require user @SYSTEM Order deny,allow </Limit> <Limit Cancel-Job CUPS-Authenticate-Job> AuthType Default Require user @OWNER @SYSTEM Order deny,allow </Limit> <Limit All> Order deny,allow </Limit> </Policy>
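
    That Windows error often just means the client found the share but could not fetch a driver from the [print$] share, rather than a connectivity problem; still, it is worth confirming first that Samba itself accepts the guest connection Windows is attempting. A sketch of the usual checks on the Ubuntu box (the log path depends on the smb.conf log settings):

        # Validate smb.conf and dump the effective share settings
        testparm -s
        # Confirm the printer share is visible and reachable without credentials
        smbclient -L //localhost -N
        # Watch what Samba logs while Windows tries to connect
        tail -f /var/log/samba/log.smbd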

    Read the article

  • Puppet - Possible to use software design patterns in modules?

    - by Mike Purcell
    As I work with puppet, I find myself wanting to automate more complex setups, for example vhosts for X number of websites. As my puppet manifests get more complex I find it difficult to apply the DRY (don't repeat yourself) principle. Below is a simplified snippet of what I am after, but doesn't work because puppet throws various errors depending up whether I use classes or defines. I'd like to get some feed back from some seasoned puppetmasters on how they might approach this solution. # site.pp import 'nodes' # nodes.pp node nodes_dev { $service_env = 'dev' } node nodes_prod { $service_env = 'prod' } import 'nodes/dev' import 'nodes/prod' # nodes/dev.pp node 'service1.ownij.lan' inherits nodes_dev { httpd::vhost::package::site { 'foo': } httpd::vhost::package::site { 'bar': } } # modules/vhost/package.pp class httpd::vhost::package { class manage($port) { # More complex stuff goes here like ensuring that conf paths and uris exist # As well as log files, which is I why I want to do the work once and use many notify { $service_env: } notify { $port: } } define site { case $name { 'foo': { class 'httpd::vhost::package::manage': port => 20000 } } 'bar': { class 'httpd::vhost::package::manage': port => 20001 } } } } } That code snippet gives me a Duplicate declaration: Class[Httpd::Vhost::Package::Manage] error, and if I switch the manage class to a define, and attempt to access a global or pass in a variable common to both foo and bar, I get a Duplicate declaration: Notify[dev] error. Any suggestions how I can implement the DRY principle and still get puppet to work? -- UPDATE -- I'm still having a problem trying to ensure that some of my vhosts, which may share a parent directory, are setup correctly. Something like this: node 'service1.ownij.lan' inherits nodes_dev { httpd::vhost::package::site { 'foo_sitea': } httpd::vhost::package::site { 'foo_siteb': } httpd::vhost::package::site { 'bar': } } What I need to happen is that sitea and siteb have the same parent "foo" folder. The problem I am having is when I call a define to ensure the "foo" folder exists. Below is the site define as I have it, hopefully it will make sense what I am trying to accomplish. class httpd::vhost::package { File { owner => root, group => root, mode => 0660 } define site() { $app_parts = split($name, '[_]') $app_primary = $app_parts[0] if ($app_parts[1] == '') { $tpl_path_partial_app = "${app_primary}" $app_sub = '' } else { $tpl_path_partial_app = "${app_primary}/${app_parts[1]}" $app_sub = $app_parts[1] } include httpd::vhost::log::base httpd::vhost::log::app { $name: app_primary => $app_primary, app_sub => $app_sub } } } class httpd::vhost::log { class base { $paths = [ '/tmp', '/tmp/var', '/tmp/var/log', '/tmp/var/log/httpd', "/tmp/var/log/httpd/${service_env}" ] file { $paths: ensure => directory } } define app($app_primary, $app_sub) { $paths = [ "/tmp/var/log/httpd/${service_env}/${app_primary}", "/tmp/var/log/httpd/${service_env}/${app_primary}/${app_sub}" ] file { $paths: ensure => directory } } } The include httpd::vhost::log::base works fine, because it is "included", which means it is only implemented once, even though site is called multiple times. The error I am getting is: Duplicate declaration: File[/tmp/var/log/httpd/dev/foo]. I looked into using exec, but not sure this is the correct route, surely others have had to deal with this before and any insight is appreciated as I have been grappling with this for a few weeks. Thanks.

    Read the article

  • How do I parse file paths separated by a space in a string?

    - by user1130637
    Background: I am working in Automator on a wrapper to a command line utility. I need a way to separate an arbitrary number of file paths delimited by a single space from a single string, so that I may remove all but the first file path to pass to the program. Example input string: /Users/bobby/diddy dum/ding.mp4 /Users/jimmy/gone mia/come back jimmy.mp3 ... Desired output: /Users/bobby/diddy dum/ding.mp4 Part of the problem is the inflexibility on the Automator end of things. I'm using an Automator action which returns unescaped POSIX filepaths delimited by a space (or comma). This is unfortunate because: 1. I cannot ensure file/folder names will not contain either a space or comma, and 2. the only inadmissible character in Mac OS X filenames (as far as I can tell) is :. There are options which allow me to enclose the file paths in double or single quotes, or angle brackets. The program itself accepts the argument of the aforementioned input string, so there must be a way of separating the paths. I just do not have a keen enough eye to see how to do it with sed or awk. At first I thought I'll just use sed to replace every [space]/ with [newline]/ and then trim all but the first line, but that leaves the loophole open for folders whose names end with a space. If I use the comma delimiter, the same happens, just for a comma instead. If I encapsulate in double or single quotation marks, I am opening another can of worms for filenames with those characters. The image/link is the relevant part of my Automator workflow. -- UPDATE -- I was able to achieve what I wanted in a rather roundabout way. It's hardly elegant but here is working generalized code: path="/Users/bobby/diddy dum/ding.mp4 /Users/jimmy/gone mia/come back jimmy.mp3" # using colon because it's an inadmissible Mac OS X # filename character, perfect for separating # also, unlike [space], multiple colons do not collapse IFS=: # replace all spaces with colons colpath=$(echo "$path" | sed 's/ /:/g') # place words from colon-ized file path into array # e.g. three spaces -> three colons -> two empty words j=1 for word in $colpath do filearray[$j]="$word" j=$j+1 done # reconstruct file path word by word # after each addition, check file existence # if non-existent, re-add lost [space] and continue until found name="" for seg in "${filearray[@]}" do name="$name$seg" if [[ -f "$name" ]] then echo "$name" break fi name="$name " done All this trouble because the default IFS doesn't count "emptiness" between the spaces as words, but rather collapses them all.
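
    For reference, a shorter sketch of the same greedy idea used in the update above: grow the candidate path one word at a time and stop at the first string that names an existing file. Like the workflow's delimiter, it collapses runs of spaces (the colon-IFS version above is what handles that corner case):

        #!/bin/bash
        input="/Users/bobby/diddy dum/ding.mp4 /Users/jimmy/gone mia/come back jimmy.mp3"
        first=""
        candidate=""
        for word in $input; do                           # default IFS: split on whitespace
            candidate="${candidate:+$candidate }$word"   # re-join with single spaces
            if [[ -f "$candidate" ]]; then               # stop at the first real file
                first="$candidate"
                break
            fi
        done
        echo "$first"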

    Read the article

  • Tying down a cloud by virtualizing everything and then locking VMs to real hardware as necessary

    - by tudor
    I'm looking for a cloud software solution that: Can run on both server and desktop machines; Virtualizes hardware and has the option of exposing each real machine to the cloud; Allows a VM to be "locked" to a set of real hardware capabilities and stay there until moved (e.g. a user's "real" desktop); Allows a VM to link to some types of devices elsewhere (e.g. USB/serial via ethernet); and Is geography-aware to control movement of VMs between real networks. I'm aware that this may be the holy grail of virtualization, and I've searched alot. Some solutions appear to meet some criteria but not others. Most cloud implementations appear to ignore real hardware, for example. I realise that this may be solved by using three different implementations in combination: A standard cloud server farm. A bare-metal network backup utility with PXEBoot. VNC and/or VDI. (VNC obviously would require the real hardware to be running.) This combination, however, has some serious drawbacks that I'd like to solve by treating it as one system. My explanation follows... I have a network of real servers and desktops in multiple locations. I've virtualized servers before using Virtualbox and that's worked quite well. I've even connected USB devices to VMs on servers. I would like to virtualize the desktops in all my offices to facilitate movement of desktops, remote access (e.g. VDI) and bare-metal backups. However, I know that there are problems with this. For example, some desktops have specific hardware (e.g. 3D graphics cards, USB devices, etc) that limit their mobility. Geographic constraints also limit movement in that VMs can be moved easily within offices, but transferring between offices is not always preferable. What I would like to find is a system that can virtualize everything from bare-metal easily by maintaining an abstraction layer on each client and server machine that exposes the hardware available and runs as a cloud. Then certain VMs would be "locked" to specific hardware (so that, e.g. the VM runs only on their own desktop.) This would be required for situations where speed is important (e.g. 3D graphics pass-through). In addition, abstracted low-speed devices (e.g. USB) could be piped from real hardware to a VM in the cloud. This is important since if a VM is taken down, another VM can connect to the real hardware for minimum downtime.

    Read the article

  • Single-Signon options for Exchange 2010

    - by freiheit
    We're working on a project to migrate employee email from Unix/open-source (courier IMAP, exim, squirrelmail, etc) to Exchange 2010, and trying to figure out options for single-signon for Outlook Web Access. So far all the options I've found are very ugly and "unsupportable", and may simply not work with Forefront. We already have JA-SIG CAS for token-based single-signon and Shibboleth for SAML. Users are directed to a simple in-house portal (a Perl CGI, really) that they use to sign in to most stuff. We have an HA OpenLDAP cluster that's already synchronized against another AD domain and will be synchronized with the AD domain Exchange will be using. CAS authenticates against LDAP. The portal authenticates against CAS. Shibboleth authenticates with CAS but pulls additional data from LDAP. We're moving in the direction of having web services authenticate against CAS or Shibboleth. (Students are already on SAML/Shibboleth authenticated Google Apps for Education) With Squirrelmail we have a horrible hack linked to from that portal page that authenticates against CAS, gets your original plaintext password (yes, I know, evil), and gives you an HTTP form pre-filled with all the necessary squirrelmail login details with javaScript onLoad stuff to immediately submit the form. Trying to find out exactly what is possible with Exchange/OWA seems to be difficult. "CAS" is both the acronym for our single-signon server and an Exchange component. From what I've been able to tell there's an addon for Exchange that does SAML, but only for federating things like free/busy calendar info, not authenticating users. Plus it costs additional money so there's no way to experiment with it to see if it can be coaxed into doing what we want. Our plans for the Exchange cluster involve Forefront Threat Management Gateway (the new ISA) in the DMZ front-ending the CAS servers. So, the real question: Has anybody managed to make Exchange authenticate with CAS (token-based single-signon) or SAML, or with something I can reasonably likely make authenticate with one of those (such as anything that will accept apache's authentication)? With Forefront? Failing that, anybody have some tips on convincing OWA Forms Based Authentication (FBA) into letting us somehow "pre-login" the user? (log in as them and pass back cookies to the user, or giving the user a pre-filled form that autosubmits like we do with squirrelmail). This is the least-favorite option for a number of reasons, but it would (just barely) satisfy our requirements. From what I hear from the guy implementing Forefront, we may have to set OWA to basic authentication and do forms in Forefront for authentication, so it's possible this isn't even possible. I did find CasOwa, but it only mentions Exchange 2007, looks kinda scary, and as near as I can tell is mostly the same OWA FBA hack I was considering slightly more integrated with the CAS server. It also didn't look like many people had had much success with it. And it may not work with Forefront. There's also "CASifying Outlook Web Access 2", but that one scares me, too, and involves setting up a complex proxy config, which seems more likely to break. And, again, doesn't look like it would work with Forefront. Am I missing something with Exchange SAML (OWA Federated whatchamacallit) where it is possible to configure to do user authentication and not just free/busy access authorization?

    Read the article

  • A hidden program (virus) sends hundreds of e-mails - do you have experience with something similar?

    - by Aristos
    Yesterday I ran the usual automatic updates from Microsoft on a tablet computer. This tablet has the Comodo firewall and an old NOD32. Soon afterwards I noticed that something starts sending hundreds of SMTP e-mails the moment the tablet is connected to the internet. The previous time I installed updates, some 'virus' also got onto the computer, but I found it easily and stopped it from running - I located it with Autoruns from Sysinternals and Process Explorer. That virus also broke the automatic updates from Microsoft, and I lost a lot of time fixing them. This is my usual practice when someone asks me to remove a virus from XP: I use Process Explorer and Autoruns to locate the program and then delete it from everywhere. However, this last one is very hard to locate.
    0. I deleted everything from the temp directories, searched everywhere for suspicious files, and ran NOD32.
    1. I used TCPView to see which program is sending the SMTP traffic (I see hundreds of open SMTP connections sending e-mails), but the connections were opened by the main service process.
    2. I used Process Monitor to see what is happening, but again it only pointed at the main service doing the work.
    3. I started killing many things in Process Explorer, but did not find the one that sends the e-mails.
    4. I opened Autoruns many times but did not find anything suspicious there either; I disabled some entries, but nothing changed.
    5. Since the last time I suspected this virus had reached the computer and only partially removed it, it had broken Windows Update; fixing that cost me a lot of time searching the internet for the error - it turned out to be just a DLL registration.
    6. From what I can tell, something is triggered after the Microsoft update.
    7. For the moment I have blocked the e-mail ports and am trying to find a way to locate it.
    I want to note that everything is genuine - and I mean everything. I believe this virus came from a web page, or from an e-mail that this computer received in the past. Any help or information is appreciated: if you have seen anything similar, if you know how this virus sends e-mails and how I can locate it, if you know any anti-virus or anti-spyware program that might find it, or if you know how a virus can get in after the Microsoft updates. Many thanks.

    Read the article

  • nginx crashes on ssl after about a minute

    - by Scott
    Here are my configuration files ssl.conf # HTTPS server # server { listen 443 ssl; server_name api.domain.com; error_log /var/log/nginx/api.error.log; location / { root /var/www/api.domain.com; index index.html index.php index.php; try_files $uri $uri/ /index.php?$args; } ssl on; ssl_certificate /etc/nginx/api.domain.com.crt; ssl_certificate_key /etc/nginx/api.domain.com.key; ssl_session_timeout 5m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { # root html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_param SCRIPT_FILENAME /var/www/api.domain.com$fastcgi_script_name; fastcgi_param HTTPS on; include fastcgi_params; } location ~ /\.ht { deny all; } } nginx.conf user nginx; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; gzip on; include /etc/nginx/conf.d/*.conf; } I have a server running on port 80 that runs with no issues. As soon as I turn on this api server running on ssl, it works for about a minute and then crashes and gives a 504 Gateway Time-out. Running nginx/1.2.3
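
    A 504 from nginx usually points at the upstream (the PHP FastCGI process on 127.0.0.1:9000) going away or hanging rather than at the SSL layer itself. A sketch of the first things to look at the moment it "crashes":

        # Is the PHP backend still listening once the 504s start?
        sudo netstat -tlnp | grep ':9000'
        # What do nginx and PHP-FPM log at that moment? (PHP-FPM log path varies by distro)
        sudo tail -n 50 /var/log/nginx/api.error.log
        sudo tail -n 50 /var/log/php-fpm/error.log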

    Read the article

  • nginx, php-cgi and "No input file specified."

    - by Stephen Belanger
    I'm trying to get nginx to play nice with php-cgi, but it's not quite working how I'd like. I'm using some set variables to allow for dynamic host names--basically anything.local. I know that stuff is working because I can access static files properly, however php files don't work. I get the standard "No input file specified." error which normally occurs when the file doesn't exist, but it definitely does exist and the path is correct because I can access the static files in the same path. It could possibly be a permissions thing, but I'm not sure how that could be an issue. I'm running this on Windows under my own user account, so I think it should have permission unless php-cgi is running under a different user without me telling it to. . Here's my config; worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; gzip on; server { # Listen for HTTP listen 80; # Match to local host names. server_name *.local; # We need to store a "cleaned" host. set $no_www $host; set $no_local $host; # Strip out www. if ($host ~* www\.(.*)) { set $no_www $1; rewrite ^(.*)$ $scheme://$no_www$1 permanent; } # Strip local for directory names. if ($no_www ~* (.*)\.local) { set $no_local $1; } # Define default path handler. location / { root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs; index index.php index.html index.htm; # Route non-existent paths through Kohana system router. try_files $uri $uri/ /index.php?kohana_uri=$request_uri; } # pass PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi.conf; } # Prevent access to system files. location ~ /\. { return 404; } location ~* ^/(modules|application|system) { return 404; } } }

    Read the article

  • Windows 7 The boot selection failed because a required device is inaccessible 0xc000000f

    - by piratejackus
    I have a problem with my Windows 7, hardware : Acer 3820TG Operating Systems : Windows 7 and Ubuntu 10.04 dual Case: When I try to boot my windows 7 I see an error: "Window failed to start. A recent hardware or software change might be the cause. To fix the problem: 1.Insert.... 2. .... ... status : 0xc000000f info : The boot selection failed because a required device is inaccessible .... " I can't exactly remember what were my last actions on Windows. I already searched this error and applied the proposed solutions, I created a repair USB (because I don't have a CD-ROM nor a Windows 7 CD) such as; -repair operating system :it says it cannot repair it -checking disk (chkdsk D: /f /r) : it checks the disk without a problem or error and it takes pretty long (more than a hour). But when I restart, still the same error. -I didn't create a restore point so I pass this option -I don't have a system image -I tried to run windows recovery (I have a recovery partition) but there are just two options: 1- Format the operating system but retain user data (copies the files under users to c\backup folder, but when I searched deeper I found that there are some people who already tried this option and couldn't find their user files under backup directory). Plus, I have unfortunately just one partition D (it is a fault I know) because I use always Ubuntu. So this is not applicable in my situation 2- Format entire system (Windows). I keep my valuable data in windows but not in user folder. I was reaching them from Windows. -I tried to repair windows boot by: bootrec /fixMBR bootrec /fixBoot bootrec /rebuildBCD I lost all grub menu, and reinstalled it. - ubuntuforums.org/showthread.php?t=1014708&page=29 nothing changed, same error. I created a thread in microsoft forums - http://social.answers.microsoft.com/Forums/en-US/w7install/thread/69517faf-850a-45fd- 8195-6d4ed831f805 but I couldn't find a solution. Before I run chkdsk from usb repair disk I couldn't able to mount Windows (NTFS) partition from Ubuntu, I was getting "couldn't mount file system, error code 2". I tried to fix ntfs partition from ubuntu and got "segmentation fault". I also created a thread on ubuntuforums for this mount problem: - http://ubuntuforums.org/showthread.php?t=1606427 So, after chkdsk, I could enable to mount windows partition but all I see in this partition is chkdsk logs, no any other data. Now, I don't think I lost my data because I don't get any filesystem errors, just the boot section, but this log files under windows partition makes me afraid. I see that Microsoft developers don't have a solution yet for this error. If you need any information to get more idea I can give, maybe I miss some points or it could be complicated. Thanks in advance.

    Read the article

  • Windows 7 file-based backup service

    - by Ben Voigt
    I'm looking for a good replacement for Lazy Mirror, since it doesn't support Windows 7 well. Pros: One of the things I really loved about Lazy Mirror is that it always maintains a "full" backup, but does so by only copying modified files. As each file was copied, the old version got archived (moved to an out-of-the-way location). So after mirroring ran, there'd be a complete copy of the file system, which could even be booted if necessary. At the same time, extra space on the backup media was used to store as many older versions of files as possible, without wasting space storing multiple copies of the same version. It seems that with Windows 7 backup, there'd be wasted space storing the same data in both the system image and file backup. It was completely file-based, but also aware of the registry (it had a feature to dump the live registry to hive files in the correct format). The backups were normal NTFS filesystems, no special tool was needed to read them. It automatically cleaned out the oldest previous versions when space ran out (unlike Windows 7 backup which apparently simply starts failing the the backup media fills.) It copied all file attributes including security. Cons: It doesn't deal well with junction points, symbolic links, and hard links. It didn't run as a service without lots of help from firesrv or srvany, and then you couldn't interact with the GUI. Running as a service was necessary to be able to mirror protected OS files. It didn't have open file handling, except for registry hives. I guess that the file-by-file archive and replacement could leave mismatched sets of files, if the mirror was interrupted. This would be the advantage of incremental backup techniques that require old full backup + all intermediate incremental backups to restore. But I don't see this as presenting much of a problem, you'd really only have a boot failure if you had a mixture of pre- and post-service pack files, and I can run a full image backup using another tool before applying a service pack. Does anyone know of a tool that does both full-system backup and storage of old versions of files like Lazy Mirror did (without storing the same data multiple times), and also can run as a service in Windows 7? Free is best of course, but a reasonably priced paid program (e.g. It would be absolutely awesome if it also triggered a backup/mirror pass when a particular external drive was plugged in and generated popup warnings if backups hadn't been run recently)

    Read the article

  • How to redirect http requests to https (nginx)

    - by spuder
    There appear to be many questions and guides out there that instruct how to setup nginx to redirect http requests to https. Many are outdated, or just flat out wrong. # MANAGED BY PUPPET upstream gitlab { server unix:/home/git/gitlab/tmp/sockets/gitlab.socket; } # setup server with or without https depending on gitlab::gitlab_ssl variable server { listen *:80; server_name gitlab.localdomain; server_tokens off; root /nowhere; rewrite ^ https://$server_name$request_uri permanent; } server { listen *:443 ssl default_server; server_name gitlab.localdomain; server_tokens off; root /home/git/gitlab/public; ssl on; ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem; ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers AES:HIGH:!ADH:!MDF; ssl_prefer_server_ciphers on; # individual nginx logs for this gitlab vhost access_log /var/log/nginx/gitlab_access.log; error_log /var/log/nginx/gitlab_error.log; location / { # serve static files from defined root folder;. # @gitlab is a named location for the upstream fallback, see below try_files $uri $uri/index.html $uri.html @gitlab; } # if a file, which is not found in the root folder is requested, # then the proxy pass the request to the upsteam (gitlab puma) location @gitlab { proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_redirect off; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-Ssl on; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_pass http://gitlab; } } I've restarted after every configuration change, and yet I still only get the 'Welcome to nginx' page when visiting http://192.168.33.10. whereas https://192.168.33.10 works perfectly. Why will nginx still not redirect http requests to https? I've also tried the following configurations listen *:80; server_name <%= @fqdn %>; #root /nowhere; #rewrite ^ https://$server_name$request_uri? permanent; #rewrite ^ https://$server_name$request_uri permanent; #return 301 https://$server_name$request_uri; #return 301 http://$server_name$request_uri; #return 301 http://192.168.33.10$request_uri; return 301 http://$host$request_uri; The logs tailf /var/log/nginx/access.log 192.168.33.1 - - [22/Oct/2013:03:41:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0" 192.168.33.1 - - [22/Oct/2013:03:44:43 +0000] "GET / HTTP/1.1" 200 133 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0" tailf /var/log/nginx/gitlab_error.lob 2013/10/22 02:29:14 [crit] 27226#0: *1 connect() to unix:/home/git/gitlab/tmp/sockets/gitlab.socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.33.1, server: gitlab.localdomain, request: "GET / HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/", host: "192.168.33.10" Resources http://wiki.nginx.org/Pitfalls How to make nginx redirect How to force or redirect to SSL in nginx? nginx ssl redirect Nginx & Https Redirection https://www.tinywp.in/301-redirect-wordpress/ How to force or redirect to SSL in nginx?
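
    Getting the stock "Welcome to nginx" page for http://192.168.33.10 suggests the request is being answered by the distribution's default server block rather than by this vhost at all: the :80 block above only matches the name gitlab.localdomain, and a request made by bare IP carries the IP as its Host header. A sketch for confirming which block answers and whether the 301 is actually emitted:

        # By IP: whichever block is the default for *:80 answers this one
        curl -sI http://192.168.33.10 | head -n 5
        # With the vhost's name: this should return 301 with an https:// Location header
        curl -sI -H 'Host: gitlab.localdomain' http://192.168.33.10 | head -n 5
        # Spot a competing default site still enabled on port 80
        grep -rE 'listen|server_name' /etc/nginx/sites-enabled/ /etc/nginx/conf.d/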

    Read the article

  • Vagrant - Failed login, ssh not set up

    - by motleydev
    This question is two fold because somewhere in my attempts to solve the problem, I created a new one. First: I was trying to vagrant up using a Vagranfile based on the standard hashicorp/precise32 box. Everything worked up until default: SSH auth method: private key where it would eventually time out. Enabling gui in the Vagrantfile showed that the machine never actually logs in. I can use the standard user/pass and log in from that point but the vagrant up process still remains at that prior status. Here's where my understanding might be a little dim. I've tried setting auth method from the insecure_pass_key to my root ~/.ssh/id_rsa or whichever one I wanted to use. I'm not entirely sure where to put a copy of the public key or my authorized keys file. I've got a .vagrant.d folder in my user dir (I'm on OSX) which seems to contain the box images. I've got a .vagrant folder in my directory with the Vagrantfile which seems to contain the specific machine I am building off of. I've tried pouring over the docs and forums but I seem to be missing a key concept here. And Now: After a host of tips/tricks such as rolling back by VirtualBox install, uninstalling/reinstalling Vagrant and VritualBox several times, when I try to run vagrant ssh-config with a Vagrantfile based on the same hashicorp/precise32 it says that the box is not enabled for SSH. Specifically, this error: The provider for this Vagrant-managed machine is reporting that it is not yet ready for SSH. Depending on your provider this can carry different meanings. Make sure your machine is created and running and try again. Additionally, check the output of `vagrant status` to verify that the machine is in the state that you expect. If you continue to get this error message, please view the documentation for the provider you're using. So now I am slightly up a creek. Any help would be appreciated if not just clarifying a concept. Some pertinent info: I'm on OSX Maverics Due to the fact I'm running a dual HD system with system files on one HD and user files on another, my permissions are a little wonky and VBoxManage will only let me run commands via Sudo - not sure if it's pertinent - but maybe. I have no idea what I'm doing. That part is perhaps more important.
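
    Before swapping keys around, it usually pays to get more signal out of Vagrant itself. A sketch (nothing here changes the box):

        # Verbose run: shows exactly where the SSH handshake stalls
        VAGRANT_LOG=debug vagrant up 2>&1 | tee vagrant-debug.log
        # What Vagrant currently believes about the machine and its SSH settings
        vagrant status
        vagrant ssh-config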

    Read the article

  • Cannot access http://localhost/phpmyadmin/

    - by nik parsa
    I installed WampServer 2. MySQL had no password set, and phpMyAdmin showed the error below. I then went into MySQL and set the root password with the command below - I can log in with that password from the MySQL console, but I still cannot access phpMyAdmin. I restarted the services too.
        UPDATE mysql.user SET password=PASSWORD('root') WHERE user='root';
    Then I went to config.inc and changed the password to this one:
        $cfg['Servers'][$i]['password'] = 'root';
    Again I restarted, and again no result. The phpMyAdmin page shows:
        Welcome to phpMyAdmin
        Error
        MySQL said: Documentation
        #1045 - Access denied for user 'root'@'localhost' (using password: NO)
        phpMyAdmin tried to connect to the MySQL server, and the server rejected the connection. You should check the host, username and password in your configuration and make sure that they correspond to the information given by the administrator of the MySQL server.
    config.inc:
        <?php
        /* Servers configuration */
        $i = 0;
        /* Server: localhost [1] */
        $i++;
        $cfg['Servers'][$i]['verbose'] = 'localhost';
        $cfg['Servers'][$i]['host'] = 'localhost';
        $cfg['Servers'][$i]['port'] = '';
        $cfg['Servers'][$i]['socket'] = '';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';
        $cfg['Servers'][$i]['extension'] = 'mysqli';
        $cfg['Servers'][$i]['auth_type'] = 'config';
        $cfg['Servers'][$i]['user'] = 'root';
        $cfg['Servers'][$i]['password'] = 'root';
        $cfg['Servers'][$i]['AllowNoPassword'] = true;
        /* End of servers configuration */
        $cfg['DefaultLang'] = 'en-utf-8';
        $cfg['ServerDefault'] = 1;
        $cfg['UploadDir'] = '';
        $cfg['SaveDir'] = '';
        /* rajk - for blobstreaming */
        $cfg['Servers'][$i]['bs_garbage_threshold'] = 50;
        $cfg['Servers'][$i]['bs_repository_threshold'] = '32M';
        $cfg['Servers'][$i]['bs_temp_blob_timeout'] = 600;
        $cfg['Servers'][$i]['bs_temp_log_threshold'] = '32M';
        ?>
    The password line was originally $cfg['Servers'][$i]['password'] = ''; Setting AllowNoPassword to false and restarting does not change the error either. With Erika's help I understood that phpMyAdmin cannot read this config.inc file - how do I make it read from this file?
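
    The "(using password: NO)" part of the error means phpMyAdmin sent an empty password, which fits the suspicion above that it is reading a different config.inc.php than the one being edited (WampServer normally keeps one per phpMyAdmin version under wamp\apps\phpmyadminX.X.X). A sketch for checking the two halves separately:

        # 1) Prove the credentials themselves work (the same command works from the
        #    Windows command prompt if mysql.exe is on the PATH)
        mysql -u root -proot -e "SELECT CURRENT_USER();"
        # 2) Then make sure the config.inc.php being edited is the one inside
        #    wamp\apps\phpmyadmin<version>\ that the phpMyAdmin alias actually serves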

    Read the article

  • Authenticate by libpam-mysql and libnss-mysql (CentOS)

    - by Chris
    I'm trying to get MySQL to function as a backend for authenticating users on CentOS 6.3. So far I have successfully installed and configured libnss-mysql. I can test this by doing: # groups testuser testuser : sftp Testuser is a member of the sftp group in fact, all MySQL based useraccounts will be hardcoded to it. The sftp group is chrooted and forced to use internal-sftp so they cannot do anything but access their home directory. Then I configured pam-mysql and PAM to allow mysql logins. This also works.. When SELinux is not enforcing. When I do setenforce 1 users can no longer login. Error: Permission denied, please try again. This is my pam_mysql.conf file: users.host=localhost users.db_user=nss-pam-user users.db_passwd=*********** users.database=sftpusers users.table=users users.user_column=username users.password_column=password users.password_crypt=6 verbose=1 My /etc/pam.d/sshd: #%PAM-1.0 auth sufficient pam_sepermit.so auth include password-auth auth required pam_mysql.so config_file=/etc/pam_mysql.conf account sufficient pam_nologin.so account include password-auth account required pam_mysql.so config_file=/etc/pam_mysql.conf password include password-auth # pam_selinux.so close should be the first session rule session required pam_selinux.so close session required pam_loginuid.so # pam_selinux.so open should only be followed by sessions to be executed in the user context session required pam_selinux.so open env_params session optional pam_keyinit.so force revoke session include password-auth And to be complete the contents of some log files.. /var/logs/secure Nov 20 14:52:20 hostname unix_chkpwd[4891]: check pass; user unknown Nov 20 14:52:20 hostname unix_chkpwd[4891]: password check failed for user (testuser) Nov 20 14:52:20 hostname sshd[4880]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.10.107 user=testuser Nov 20 14:52:22 sftpusers sshd[4880]: Failed password for testuser from 192.168.10.107 port 51849 ssh2 /var/logs/audit/audit.log type=USER_AUTH msg=audit(1353420107.070:812): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=pubkey acct="testuser" exe="/usr/sbin/sshd" hostname=? addr=192.168.10.107 terminal=ssh res=failed' type=USER_AUTH msg=audit(1353420112.312:813): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=PAM:authentication acct="testuser" exe="/usr/sbin/sshd" hostname=192.168.10.107 addr=192.168.10.107 terminal=ssh res=failed' type=USER_AUTH msg=audit(1353420112.456:814): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=password acct="testuser" exe="/usr/sbin/sshd" hostname=? addr=192.168.10.107 terminal=ssh res=failed' I tried to let audit2why explain the problem but it remains silent even though there are some errors. Does anyone see the problem? Thanks! EDIT: Turns out it's almost working with setenforce 0 I can mkdir foobar but if I do a single ls I get an error: Received message too long 16777216
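
    Since logins only fail with setenforce 1, the next step is to get SELinux to say which access it is denying; audit2why staying silent can simply mean the denial is covered by a dontaudit rule. A sketch (ausearch and audit2allow come from the audit and policycoreutils-python packages on CentOS 6):

        # Show recent AVC denials right after a failed login attempt
        sudo ausearch -m avc -ts recent
        # If nothing appears, temporarily disable dontaudit rules and retry the login
        sudo semodule -DB        # re-enable later with: sudo semodule -B
        # See whether an existing boolean already covers the access
        getsebool -a | grep -i mysql
        # Otherwise build and load a local policy module from the captured denials
        sudo ausearch -m avc -ts recent | audit2allow -M pam_mysql_local
        sudo semodule -i pam_mysql_local.pp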

    Read the article

  • Routing not working correctly using the Laravel framework

    - by samayres1992
    I'm using the book wrote by one of the guys that created laravel, so I'd like to think for the most part this code isn't horribly wrong. My server is setup with nginx serving all static files and apache2 serving php. My config for each are the following: apache2: <VirtualHost *> # Host that will serve this project. ServerName litl.it # The location of our projects public directory. DocumentRoot /var/www/litl.it/laravel/public # Useful logs for debug. CustomLog /var/log/apache.access.log common ErrorLog /var/log/apache.error.log # Rewrites for pretty URLs, better not to rely on .htaccess. <Directory /var/www/litl.it/laravel/public> <IfModule mod_rewrite.c> Options -MultiViews RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^ index.php [L] </IfModule> </Directory> nginx: server { # Port that the web server will listen on. listen 80; # Host that will serve this project. server_name litl.it *.litl.it; # Useful logs for debug. access_log /var/log/nginx.access.log; error_log /var/log/nginx.error.log; rewrite_log on; # The location of our projects public directory. root /var/www/litl.it/laravel/public; # Point index to the Laravel front controller. index index.php; location / { # URLs to attempt, including pretty ones. try_files $uri $uri/ /index.php?$query_string; } # Remove trailing slash to please routing system. if (!-d $request_filename) { rewrite ^/(.+)/$ /$1 permanent; } # PHP FPM configuration. location ~* \.php$ { proxy_pass http://127.0.0.1:8080; include /etc/nginx/proxy_params; try_files index index.php $uri =404; include /etc/nginx/fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root/php/$fastcgi_script_name; } # We don't need .ht files with nginx. location ~ /\.ht { deny all; } location @proxy { proxy_pass http://127.0.0.1:8080; include /etc/nginx/proxy_params; } error_page 403 /error/403.html; error_page 404 /error/404.html; error_page 405 /error/405.html; error_page 500 501 502 503 504 /error/5xx.html; location ^~ /error/ { internal; root /var/www/litl.it/lavarel/public/error; } } I'm including these server configs, as I feel this maybe the issue? Here is my incredibly basic routing file that should return "routing is working" on domain.com/test but instead it just returns the homepage. <?php Route::get('/', function() { return View::make('hello'); }); Route::get('/test', function() { return "routing is working"; }); Any ideas where I'm going wrong, I'm following this tutorial very closely and I'm confused why there is issue. Thanks!
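
    One likely culprit given this setup: when try_files falls back to /index.php?$query_string, the URI that nginx then proxies to Apache is /index.php, so Laravel may only ever see the homepage route regardless of the original path. Hitting each layer separately narrows it down - a sketch:

        # Apache directly (bypasses nginx): does mod_rewrite + Laravel resolve /test?
        curl -s http://127.0.0.1:8080/test
        # Through nginx with the right Host header: if only this one shows the homepage,
        # the path is being lost in the nginx php location / proxy_pass hop
        curl -s -H 'Host: litl.it' http://127.0.0.1/test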

    Read the article

  • System Requirements of a write-heavy applications serving hundreds of requests per second

    - by Rolando Cruz
    NOTE: I am a self-taught PHP developer who has little to none experience managing web and database servers. I am about to write a web-based attendance system for a very large userbase. I expect around 1000 to 1500 users logged-in at the same time making at least 1 request every 10 seconds or so for a span of 30 minutes a day, 3 times a week. So it's more or less 100 requests per second, or at the very worst 1000 requests in a second (average of 16 concurrent requests? But it could be higher given the short timeframe that users will make these requests. crosses fingers to avoid 100 concurrent requests). I expect two types of transactions, a local (not referring to a local network) and a foreign transaction. local transactions basically download userdata in their locality and cache it for 1 - 2 weeks. Attendance equests will probably be two numeric strings only: userid and eventid. foreign transactions are for attendance of those do not belong in the current locality. This will pass in the following data instead: (numeric) locality_id, (string) full_name. Both requests are done in Ajax so no HTML data included, only JSON. Both type of requests expect at the very least a single numeric response from the server. I think there will be a 50-50 split on the frequency of local and foreign transactions, but there's only a few bytes of difference anyways in the sizes of these transactions. As of this moment the userid may only reach 6 digits and eventid are 4 to 5-digit integers too. I expect my users table to have at least 400k rows, and the event table to have as many as 10k rows, a locality table with at least 1500 rows, and my main attendance table to increase by 400k rows (based on the number of users in the users table) a day for 3 days a week (1.2M rows a week). For me, this sounds big. But is this really that big? Or can this be handled by a single server (not sure about the server specs yet since I'll probably avail of a VPS from ServInt or others)? I tried to read on multiple server setups Heatbeat, DRBD, master-slave setups. But I wonder if they're really necessary. the users table will add around 500 1k rows a week. If this can't be handled by a single server, then if I am to choose a MySQL replication topology, what would be the best setup for this case? Sorry, if I sound vague or the question is too wide. I just don't know what to ask or what do you want to know at this point.

    Read the article

  • Simple Cisco ASA 5505 config issue

    - by Ben Sebborn
    I have a Cisco ASA set up with two interfaces:

        inside:  192.168.2.254 / 255.255.255.0  SecLevel: 100
        outside: 192.168.3.250 / 255.255.255.0  SecLevel: 0

    I have a static route set up to allow PCs on the inside network to access the internet via a gateway on the outside interface (3.254):

        outside 0.0.0.0 0.0.0.0 192.168.3.254

    This all works fine. I now need to be able to access a PC on the outside interface (3.253) from a PC on the inside interface on port 35300. I understand I should be able to do this with no problems, as I'm going from a higher security level to a lower one. However, I can't get any connection. Do I need to set up a separate static route? Perhaps the route above is overriding what I need to be able to do (is it routing ALL traffic through the gateway?). Any advice on how to do this would be appreciated. I am configuring this via ASDM, but the config can be seen below.

    Result of the command: "show running-config"

        : Saved
        :
        ASA Version 8.2(5)
        !
        hostname ciscoasa
        domain-name xxx.internal
        names
        name 192.168.2.201 dev.xxx.internal description Internal Dev server
        name 192.168.2.200 Newserver
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Ethernet0/2
        !
        interface Ethernet0/3
         shutdown
        !
        interface Ethernet0/4
         shutdown
        !
        interface Ethernet0/5
         shutdown
        !
        interface Ethernet0/6
         shutdown
        !
        interface Ethernet0/7
         shutdown
        !
        interface Vlan1
         nameif inside
         security-level 100
         ip address 192.168.2.254 255.255.255.0
        !
        interface Vlan2
         nameif outside
         security-level 0
         ip address 192.168.3.250 255.255.255.0
        !
        !
        time-range Workingtime
         periodic weekdays 9:00 to 18:00
        !
        ftp mode passive
        clock timezone GMT/BST 0
        clock summer-time GMT/BDT recurring last Sun Mar 1:00 last Sun Oct 2:00
        dns domain-lookup inside
        dns server-group DefaultDNS
         name-server Newserver
         domain-name xxx.internal
        same-security-traffic permit inter-interface
        object-group service Mysql tcp
         port-object eq 3306
        object-group protocol TCPUDP
         protocol-object udp
         protocol-object tcp
        access-list inside_access_in extended permit ip any any
        access-list outside_access_in remark ENABLES OUTSDIE ACCESS TO DEV SERVER!
        access-list outside_access_in extended permit tcp any interface outside eq www time-range Workingtime inactive
        access-list outside_access_in extended permit tcp host www-1.xxx.com interface outside eq ssh
        access-list inside_access_in_1 extended permit tcp any any eq www
        access-list inside_access_in_1 extended permit tcp any any eq https
        access-list inside_access_in_1 remark Connect to SSH services
        access-list inside_access_in_1 extended permit tcp any any eq ssh
        access-list inside_access_in_1 remark Connect to mysql server
        access-list inside_access_in_1 extended permit tcp any host mysql.xxx.com object-group Mysql
        access-list inside_access_in_1 extended permit tcp any host mysql.xxx.com eq 3312
        access-list inside_access_in_1 extended permit object-group TCPUDP host Newserver any eq domain
        access-list inside_access_in_1 extended permit icmp any any
        access-list inside_access_in_1 remark Draytek Admin
        access-list inside_access_in_1 extended permit tcp any 192.168.3.0 255.255.255.0 eq 4433
        access-list inside_access_in_1 remark Phone System
        access-list inside_access_in_1 extended permit tcp any 192.168.3.0 255.255.255.0 eq 35300 log disable
        pager lines 24
        logging enable
        logging asdm warnings
        logging from-address [email protected]
        logging recipient-address [email protected] level errors
        mtu inside 1500
        mtu outside 1500
        ip verify reverse-path interface inside
        ip verify reverse-path interface outside
        ipv6 access-list inside_access_ipv6_in permit tcp any any eq www
        ipv6 access-list inside_access_ipv6_in permit tcp any any eq https
        ipv6 access-list inside_access_ipv6_in permit tcp any any eq ssh
        ipv6 access-list inside_access_ipv6_in permit icmp6 any any
        icmp unreachable rate-limit 1 burst-size 1
        icmp permit any outside
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        nat (inside) 1 0.0.0.0 0.0.0.0
        static (inside,outside) tcp interface www dev.xxx.internal www netmask 255.255.255.255
        static (inside,outside) tcp interface ssh dev.xxx.internal ssh netmask 255.255.255.255
        access-group inside_access_in in interface inside control-plane
        access-group inside_access_in_1 in interface inside
        access-group inside_access_ipv6_in in interface inside
        access-group outside_access_in in interface outside
        route outside 0.0.0.0 0.0.0.0 192.168.3.254 10
        route outside 192.168.3.252 255.255.255.255 192.168.3.252 1
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
        timeout tcp-proxy-reassembly 0:01:00
        timeout floating-conn 0:00:00
        dynamic-access-policy-record DfltAccessPolicy
        aaa authentication telnet console LOCAL
        aaa authentication enable console LOCAL
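    A hedged way to narrow this down (the inside host 192.168.2.10 and source port 12345 are made-up values for illustration; 192.168.3.253 and 35300 come from the question): once an access-group is applied to the inside interface, it replaces the implicit "permit everything to a lower security level" behaviour, so the existing inside_access_in_1 entry for 192.168.3.0/24 eq 35300 has to actually match the traffic (TCP vs UDP, destination port vs source port). The ASA's built-in packet-tracer shows exactly which rule handles or drops the flow:

        packet-tracer input inside tcp 192.168.2.10 12345 192.168.3.253 35300

    If the service on 3.253 uses UDP rather than TCP, an additional ACE along these lines would be needed (a sketch, not taken from the question):

        access-list inside_access_in_1 extended permit udp any host 192.168.3.253 eq 35300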

    Read the article

  • Postfix: How to apply header_checks only for specific Domains?

    - by Lukas
    Basically what I want to do is rewrite the From: header using header_checks, but only if the mail goes to a certain domain. The problem with header_checks is that I can't check for a combination of To: and From: headers. Now I was wondering if it is possible to use header_checks in combination with smtpd_restriction_classes or something similar. I've found a lot of information about header_checks and multiple header fields when searching the net, all of it basically telling me that one can't combine two headers in a single check. But I didn't find any information on whether it is possible to only do a header check if a condition (e.g. mail goes to example.com) is met.

    Edit: While doing some more research I found an article which suggests adding a service to Postfix's master.cf, using a transport map to pass mail for the domain to that service, and defining a separate header_checks for it with -o. The thing is that I can't get it to work. What I did so far is add the service to master.cf:

        example unix - - n - - smtpd -o header_checks=regexp:/etc/postfix/check_headers_example

    and add the following line to the transport map:

        example.com example:

    Last but not least, I have two regexp files for header checks: one for the newly added service, and one to redirect answers to the rewritten domain. check_headers_example:

        /From:(.*)@mydomain.ain>(.*)/ REPLACE From:[email protected]>$2

    Obviously if someone answers, the mail would go to nirvana, so I have the following header_checks defined in the main Postfix process:

        /To:(.*)<(.*)@mydomain.example.com>(.*)/ REDIRECT [email protected]$2

    Somehow the transport is ignored. Any help is appreciated.

    Edit 2: I'm still stuck... I did try the following:

        smtpd_restriction_classes = header_rewrite
        header_rewrite = regexp:/etc/postfix/rewrite_headers_domain
        smtpd_recipient_restrictions = (some checks) check_recipient_access hash:/etc/postfix/rewrite_table, (more checks)

    In the rewrite_table the following entry exists:

        /From:(.*)@mydomain.ain>(.*)/ REPLACE From:[email protected]>$2

    All it gets me is a NOQUEUE: reject: 451 4.3.5 Server configuration error. I couldn't find any resources on how you would do that, but some people say it isn't possible.

    Edit 3: The reason I asked this question is that we have a customer (let's say customer.com) who uses some aliases that forward mail to a domain, let's say example.com. The mail server at example.com does not accept any mail from an external server that comes from a sender @example.com, so all mail written from example.com to [email protected] is rejected in the end. An exception on example.com's mail server is not possible. We didn't really solve this problem, but will try to work around it by using lists (Mailman) instead of aliases. This is not really nice though, nor a real solution. I'd appreciate any suggestions on how this could be done properly.
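    One hedged variant of the master.cf approach (the file names and the "rewrite_example" transport name are assumptions, and smtp_header_checks requires Postfix 2.5 or later): instead of a second smtpd listener, route mail for the destination domain through a dedicated smtp client and attach the header check to that client only, so the rewrite applies exclusively to mail leaving towards that domain.

        # /etc/postfix/main.cf
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/transport  (run "postmap /etc/postfix/transport" after editing)
        example.com    rewrite_example:

        # /etc/postfix/master.cf  (the service name must match the transport name above)
        rewrite_example unix - - n - - smtp
            -o smtp_header_checks=regexp:/etc/postfix/check_headers_example

    The 451 "Server configuration error" in Edit 2 is consistent with the fact that a restriction class must expand to access restrictions, not to a header_checks map, so that route is unlikely to work as written.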

    Read the article

  • Could I centralize batch files more efficiently?

    - by PeanutsMonkey
    I am new to the world of batch scripting, so please forgive what may appear to be basic questions. I am learning as I get assigned different jobs, and I am a huge proponent of automation where possible. I have several batch files that perform several tasks. Each of these files had its paths hard-coded, e.g. c:\temp, d:\data, etc., in the batch file. Initially I moved these to a text file I could read from a batch file, e.g.

        for /f "tokens=1,2 delims==" %%R in (config.txt) do (
            if %%R==bdata set bdata=%%S
            if %%R==cdata set cdata=%%S
        )

    The config.txt file contains these values:

        bdata=c:\temp
        cdata=d:\data

    I realized that each time I needed to create a new variable, I would have to update the config.txt file as well as the config.bat file. I decided to move all the values to just the config.bat file, as follows:

        set bdata=c:\temp
        set cdata=d:\data

    I then updated each of the existing batch files to use the variables rather than the hard-coded paths. I also added the following lines of code to each batch file except config.bat. (The only additional line added to config.bat is @echo off.)

        @echo off
        setlocal enableextensions enabledelayedexpansion
        call config.bat

    I then have another batch file, start.bat, that centralizes calling all the batch files in sequence. The reason I am using start /wait is that there have been instances where delete.bat runs before compress.bat has had a chance to finish.

        start /wait compress.bat
        start /wait validate.bat
        start /wait delete.bat

    Questions:

    Is this the best way to centralize values and, if not, what is a better way? Do I need to specify setlocal enableextensions enabledelayedexpansion in all the existing batch files? Do all the batch files have to have @echo off, or is it sufficient for just the config.bat file? Is start /wait the best way to call multiple files, and can I pass values from one batch file to another using that command? All the batch files have different functions, e.g. move, delete, etc., however they all use %%a or %%b. Is this okay?

    For example, validate.bat has the code

        for %%a in (%bdata%\*.*) do if "%%~xa" == "" move /Y "%bdata%\%%~xa" "%bdata%\%done%"

    and delete.bat has the code

        for %%a in (%bdata%\*.*) do if "%%~xa" == ".txt" del "%%a"
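    A hedged alternative sketch for start.bat (the script names come from the question; the error handling is an assumption about the desired behaviour): call runs each script in the same console and only returns when it finishes, so it gives the same sequencing as start /wait, and any variables set by config.bat in this parent script are inherited by every child script, which removes the need for each child to re-call config.bat.

        @echo off
        setlocal enableextensions enabledelayedexpansion
        rem Load the shared paths once; the child scripts inherit these variables.
        call config.bat
        rem Run each step in order, stopping if one reports an error.
        call compress.bat
        if errorlevel 1 goto :eof
        call validate.bat
        if errorlevel 1 goto :eof
        call delete.bat

    The "if errorlevel 1" checks only help if the child scripts actually set a non-zero exit code on failure (for example with "exit /b 1").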

    Read the article

  • IE8 Refuses to run Javascript from Local Hard Drive

    - by Josh Stodola
    I have a problem that started at work recently, and the network manager is certain he did not change anything in group policy. Anyway, here is a detailed description of the problem. My machine is Windows XP SP3, and I use IE8 to browse. We have McAfee anti-virus software that I am unable to configure. I use the following file to test:

        <!DOCTYPE html>
        <html>
        <head>
            <title>Javascript Test</title>
        </head>
        <body>
            <script type="text/javascript">
                document.write("<h1>PASS</h1>");
            </script>
            <noscript>
                <h1>FAIL</h1>
            </noscript>
        </body>
        </html>

    When I open this file from the C: drive, it fails every time. If I run it anywhere else (a local/remote web server or a mapped network drive), it works just fine. When I am simply browsing the Internet, Javascript on web sites works just fine. It is only failing for files run from my C: drive. Additionally, I have had a couple of other programmers in the department try this file on their C: drives, and it works fine for them, so I don't believe it is a group policy thing. I need to fix this because I do extensive testing from my C: drive, and I am accustomed to doing so. I don't want to get into the habit of moving files to a different drive just to test.

    Things I have tried:

    - Enabled "Allow Active Content to Run Files on My Computer" in Options | Advanced | Security
    - Enabled "Allow Active Scripting" in Options | Security | Custom Level
    - Verified that "Script" was not checked as disabled in the Developer Toolbar
    - Added localhost to Trusted Sites in Options
    - Disabled McAfee completely (momentarily, with help from the network admin)
    - Used an older DOCTYPE in my test HTML page
    - Re-installed IE8 completely
    - Ran regsvr32 on JScript.dll
    - Slammed keyboard

    I am sure there is a setting somewhere that will fix this problem, possibly in the registry. I would not be surprised if it was related to the Developer Toolbar. At this point I do not know where else to look. Can anyone help me resolve this problem?

    EDIT: Regardless of the bounty, this issue is still ongoing.
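    One hedged thing to check (this assumes the failure comes from IE's Local Machine zone lockdown, which is only a guess given that colleagues' machines behave differently): IE8 applies stricter rules to pages opened from the local disk, and a per-file workaround is to add a Mark of the Web comment so the file is treated as belonging to the Internet zone, where the normal scripting settings apply:

        <!-- saved from url=(0014)about:internet -->
        <!DOCTYPE html>
        <html>
        ...

    The number in parentheses is the character count of the URL that follows ("about:internet" is 14 characters). If every local file fails only on this one machine, comparing the Security tab's zone settings and the keys under HKLM\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl with a working colleague's registry is another low-risk check.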

    Read the article
