Search Results

Search found 19093 results on 764 pages for 'max path'.


  • iptables rule(s) to send openvpn traffic from clients over an sshuttle tunnel?

    - by Sam Martin
    I have an Ubuntu 12.04 box with OpenVPN. The VPN is working as expected -- clients can connect, browse the Web, etc. The OpenVPN server IP is 10.8.0.1 on tun0. On that same box, I can use sshuttle to tunnel into another network to access a Web server on 10.10.0.9. sshuttle does its magic using the following iptables commands:

        iptables -t nat -N sshuttle-12300
        iptables -t nat -F sshuttle-12300
        iptables -t nat -I OUTPUT 1 -j sshuttle-12300
        iptables -t nat -I PREROUTING 1 -j sshuttle-12300
        iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.10.0.0/24 -p tcp --to-ports 12300 -m ttl ! --ttl 42
        iptables -t nat -A sshuttle-12300 -j RETURN --dest 127.0.0.0/8 -p tcp

    Is it possible to forward traffic from OpenVPN clients over the sshuttle tunnel to the remote Web server? I'd ultimately like to be able to set up any complicated tunneling on the server, and have relatively "dumb" clients (iPad, etc.) be able to access the remote servers via OpenVPN. Below is a basic diagram of the scenario.

    [Edit: added output from the OpenVPN box]

        $ sudo iptables -nL -v -t nat
        Chain PREROUTING (policy ACCEPT 1498 packets, 252K bytes)
        pkts bytes target          prot opt in out source    destination
        1512  253K sshuttle-12300  all  --  *  *   0.0.0.0/0 0.0.0.0/0

        Chain INPUT (policy ACCEPT 322 packets, 58984 bytes)
        pkts bytes target prot opt in out source destination

        Chain OUTPUT (policy ACCEPT 584 packets, 43241 bytes)
        pkts bytes target          prot opt in out source    destination
         587 43421 sshuttle-12300  all  --  *  *   0.0.0.0/0 0.0.0.0/0

        Chain POSTROUTING (policy ACCEPT 589 packets, 43595 bytes)
        pkts bytes target     prot opt in out  source      destination
        1175 76298 MASQUERADE all  --  *  eth0 10.8.0.0/24 0.0.0.0/0

        Chain sshuttle-12300 (2 references)
        pkts bytes target   prot opt in out source    destination
          17  1076 REDIRECT tcp  --  *  *   0.0.0.0/0 10.10.0.0/24 TTL match TTL != 42 redir ports 12300
           0     0 RETURN   tcp  --  *  *   0.0.0.0/0 127.0.0.0/8

        $ sudo iptables -nL -v -t filter
        Chain INPUT (policy ACCEPT 97493 packets, 30M bytes)
        pkts bytes target prot opt in out source destination

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source      destination
        131K  109M ACCEPT all  --  *  *   0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED
        1370 89160 ACCEPT all  --  *  *   10.8.0.0/24 0.0.0.0/0
           0     0 REJECT all  --  *  *   0.0.0.0/0   0.0.0.0/0   reject-with icmp-port-unreachable

    [Edit 2: more OpenVPN server output]

        $ netstat -r
        Kernel IP routing table
        Destination  Gateway      Genmask          Flags MSS Window irtt Iface
        default      192.168.1.1  0.0.0.0          UG      0 0         0 eth0
        10.8.0.0     10.8.0.2     255.255.255.0    UG      0 0         0 tun0
        10.8.0.2     *            255.255.255.255  UH      0 0         0 tun0
        192.168.1.0  *            255.255.255.0    U       0 0         0 eth0

    [Edit 3: still more debug output] IP forwarding appears to be enabled correctly on the OpenVPN server:

        # find /proc/sys/net/ipv4/conf/ -name forwarding -ls -execdir cat {} \;
        18926 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/all/forwarding
        1
        18954 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/default/forwarding
        1
        18978 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/eth0/forwarding
        1
        19003 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/lo/forwarding
        1
        19028 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/tun0/forwarding
        1

    Client routing table:

        $ netstat -r
        Routing tables

        Internet:
        Destination  Gateway      Flags  Refs  Use   Netif Expire
        0/1          10.8.0.5     UGSc      8    48  tun0
        default      192.168.1.1  UGSc      2  1652  en1
        10.8.0.1/32  10.8.0.5     UGSc      1     0  tun0
        10.8.0.5     10.8.0.6     UHr      13     0  tun0
        10.10.0/24   10.8.0.5     UGSc      0     0  tun0
        <snip>

    Traceroute from client:

        $ traceroute 10.10.0.9
        traceroute to 10.10.0.9 (10.10.0.9), 64 hops max, 52 byte packets
         1  10.8.0.1 (10.8.0.1)  5.403 ms  1.173 ms  1.086 ms
         2  192.168.1.1 (192.168.1.1)  4.693 ms  2.110 ms  1.990 ms
         3  l100.my-verizon-garbage (client-ext-ip)  7.453 ms  7.089 ms  6.248 ms
         4  * * *
         5  10.10.0.9 (10.10.0.9)  14.915 ms !N *  6.620 ms !N
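
    A hedged first step for anyone landing here (an editor's sketch, not from the original post): since the FORWARD chain already accepts 10.8.0.0/24, it is worth confirming whether VPN-client packets ever traverse the sshuttle NAT chain at all, and whether sshuttle has a non-loopback listener for redirected tun0 connections. The remote host name below is a placeholder:

        # watch whether packets from VPN clients ever enter the sshuttle chain
        sudo watch iptables -t nat -L sshuttle-12300 -v -n
        # if the REDIRECT counters never move for 10.8.0.0/24 sources, try making
        # sshuttle listen on all interfaces so redirected tun0 connections have a
        # local listener to land on:
        sshuttle -l 0.0.0.0:12300 -r user@remote-host 10.10.0.0/24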

    Read the article

  • Samba Does Not List Files In Shared Directory

    - by Sean M
    I am running Samba on a CentOS server, and I am experiencing a problem where it allows me to connect to the server and see a share, but shows the share as an empty directory. I find this behavior strange. Here is the stanza in my smb.conf for the given share:

        [seanm]
        path = /home/seanm
        writeable = yes
        valid users = seanm, root
        read only = No

    Here's what I see on the server side:

        [seanm@server ~]$ ls -l
        -rw-r--r-- 1 seanm seanm 40 Jan  4 13:45 pangram.txt

    And yet:

        [seanm@client ~]$ smbclient //server/seanm -U seanm -W WORKGROUP
        Enter seanm's password:
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.33-3.29.el5_5.1]
        smb: \> ls
          .                                   D        0  Fri Jan  7 10:08:55 2011
          ..                                  D        0  Fri Jan  7 07:58:31 2011

                58994 blocks of size 262144. 50356 blocks available

    This behavior is present on both a Windows client and a Linux client system. The behavior is present with the firewall on and with the firewall off, so it's not that. Neither /var/log/messages nor /var/log/secure has any complaints about Samba. I doubt that SELinux is a problem; just in case, here are the relevant settings:

        [root@server ~]# getsebool -a | grep samba
        samba_domain_controller --> off
        samba_enable_home_dirs --> on
        samba_export_all_ro --> off
        samba_export_all_rw --> off
        samba_share_fusefs --> off
        samba_share_nfs --> off
        use_samba_home_dirs --> on
        virt_use_samba --> off

    What am I doing wrong here, and what can I do to fix it?
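
    A hedged diagnostic sketch (an editor's addition, not from the thread): the booleans cover Samba home-dir access, but per-file SELinux labels can still block directory listings even when the connection succeeds:

        testparm -s                          # verify the [seanm] stanza smbd actually loads
        ls -Z /home/seanm                    # inspect SELinux labels on the files themselves
        sudo restorecon -Rv /home/seanm      # reset labels if they look wrong
        smbclient //server/seanm -U seanm -d 3 -c 'ls'   # raise client debug level for hints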

    Read the article

  • auto-mounting shared folders in VirtualBox

    - by brannerchinese
    I am writing to ask what the effect of the auto-mounting process is in VirtualBox, and where the folders can be accessed within a guest Linux system if auto-mount is used. I have VirtualBox 4.0.4 installed on Mac OS 10.6.7, with Guest Additions apparently running correctly. The guest OS is Ubuntu 10.04, and I observe no apparent problems with it.

    I find that if the shared folders have "auto-mount" unchecked in the VirtualBox settings, they can then be mounted using the prescribed syntax

        sudo mount -t vboxsf folder_name path_to_mount_point

    and all works as it is supposed to. But if the auto-mount option is checked, then I find that I can no longer mount the shared folders manually. I get the error

        mounting failed with the error: Invalid argument

    and the folders also do not appear to mount anywhere else accessible to me. Using the syntax sudo mount -t vboxsf without specifying a path installs them in /media, with their names prefixed with sf_, but they are not easily accessible there and I have not been able to change their owner using chown, either. Thanks for your patient explanation.
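
    For what it's worth, a hedged sketch of the usual way in (not from the original post): auto-mounted shares land in /media/sf_<name> owned by root with group vboxsf, so group membership, rather than chown, is the expected access path:

        sudo usermod -aG vboxsf "$USER"   # Guest Additions creates the vboxsf group
        # log out and back in (or run `newgrp vboxsf`) so the new group takes effect
        ls -l /media/sf_folder_name       # should now be readable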

    Read the article

  • Best NIC config when virtual servers need iSCSI storage?

    - by icky2000
    I have a Windows 2008 server running Hyper-V. There are 6 NICs on the server configured like this:

        NIC01 & NIC02: teamed administrative interface (RDP, mgmt, etc)
        NIC03:         connected to iSCSI VLAN #1
        NIC04:         connected to iSCSI VLAN #2
        NIC05:         dedicated to one virtual switch for VMs
        NIC06:         dedicated to another virtual switch for VMs

    The iSCSI NICs are obviously used for storage to host the VMs. I put half the VMs on the host on the switch assigned to NIC05 and the other half on the switch assigned to NIC06. We have multiple production networks that the VMs could appear on, so the switch ports that NIC05 & NIC06 are connected to are trunked, and we then tag the NIC on the VM for the appropriate VLAN. There is no clustering on this host.

    Now I wish to assign some iSCSI storage directly to a VM. As I see it, I have two options:

        1. Add the iSCSI VLANs to the trunked ports (NIC05 and NIC06), add two NICs to the VM that needs iSCSI storage, and tag them for the iSCSI VLANs.
        2. Create two additional virtual switches on the host, assign one to NIC03 and one to NIC04, then add two NICs to the VM that needs iSCSI storage and let them share that path to the SAN with the host.

    I'm wondering how much overhead the VLAN tagging in Hyper-V has, and I haven't seen any discussion about that. I'm also a bit concerned that something funky on the iSCSI-connected VM could saturate the iSCSI NICs, or cause some other problem that could threaten storage access for the entire host, which would be bad. Any thoughts or suggestions? How do you configure your hosts when VMs connect directly to iSCSI?

    Read the article

  • Varnish does not start properly (crashes after startup) with no error messages

    - by Matthew Savage
    I am running Varnish (2.0.4 from the Ubuntu unstable apt repository, though I have also used the standard repository) in a test environment (virtual machines) on Ubuntu 9.10, soon to be 10.04. When I have a working configuration and the server starts successfully, everything seems fine; however if, for whatever reason, I stop and then restart the varnish daemon, it doesn't always start up properly, and there are no errors going into syslog or messages to indicate what might be wrong. If I run varnish in debug mode (-d) and issue start when prompted, then seven times out of ten it will run, but occasionally it will just shut down "silently".

    My startup command is (the $1 allows me to pass -d to the script this lives in):

        varnishd -a :80 $1 \
            -T 127.0.0.1:6082 \
            -s malloc,1GB \
            -f /home/deploy/mysite.vcl \
            -u deploy \
            -g deploy \
            -p obj_workspace=4096 \
            -p sess_workspace=262144 \
            -p listen_depth=2048 \
            -p overflow_max=2000 \
            -p ping_interval=2 \
            -p log_hashstring=off \
            -h classic,5000009 \
            -p thread_pool_max=1000 \
            -p lru_interval=60 \
            -p esi_syntax=0x00000003 \
            -p sess_timeout=10 \
            -p thread_pools=1 \
            -p thread_pool_min=100 \
            -p shm_workspace=32768 \
            -p thread_pool_add_delay=1

    and the VCL looks like this:

        # nginx/passenger server, HTTP:81
        backend default {
            .host = "127.0.0.1";
            .port = "81";
        }

        sub vcl_recv {
            # Don't cache the /useradmin or /admin path
            if (req.url ~ "^/(useradmin|admin|session|sessions|login|members|logout|forgot_password)") {
                pipe;
            }
            # If cache is 'regenerating' then allow for old cache to be served
            set req.grace = 2m;
            # Forward to cache lookup
            lookup;
        }

        # This should be obvious
        sub vcl_hit {
            deliver;
        }

        sub vcl_fetch {
            # See link #16, allow for old cache serving
            set obj.grace = 2m;
            if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
                deliver;
            }
            remove obj.http.Set-Cookie;
            remove obj.http.Etag;
            set obj.http.Cache-Control = "no-cache";
            set obj.ttl = 7d;
            deliver;
        }

    Any suggestions would be greatly appreciated; this is driving me absolutely crazy, especially because it's such an inconsistent behaviour.
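
    A hedged debugging sketch (an editor's addition, assuming these flags are supported by the installed varnishd build): compile the VCL on its own, and rule out resource limits that can kill the daemon without a syslog entry:

        varnishd -C -f /home/deploy/mysite.vcl   # compile-check the VCL; syntax errors surface here
        ulimit -n                                # thread_pool_max=1000 needs a generous fd limit
        dmesg | tail                             # a 1GB malloc storage on a small VM can invite the OOM killer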

    Read the article

  • using gmail as email relay for sendmail

    - by Nikita
    I used to be able to send emails using a gmail account & sendmail configured using one of the guides on the Internet, for example: http://appgirl.net/blog/configuring-sendmail-to-relay-through-gmail-smtp/

    This is a small server and I've recently moved it to a different house, and sendmail has stopped working. The only thing different in the network setup is a new router. In the log files, I see the following error:

        ...stat=Deferred: smtp.gmail.com: No route to host

    When I run from the command line:

        strace sendmail -f A -t B -u "Subject" -m "Message" -tls=yes ssl=yes -s smtp.gmail.com:587 -xu A -xp XYZ

    it hangs on this call:

        recvfrom(3, "m0\201\203\0\1\0\0\0\0\0\0\4ares\3lan\0\0\34\0\1", 8192, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.1.254")}, [16]) = 26
        close(3) = 0
        time(NULL) = 1339997943
        open("/etc/localtime", O_RDONLY) = 3
        fstat64(3, {st_mode=S_IFREG|0644, st_size=3477, ...}) = 0
        fstat64(3, {st_mode=S_IFREG|0644, st_size=3477, ...}) = 0
        mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb76ff000
        read(3, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\4\0\0\0\4\0\0\0\0"..., 4096) = 3477
        _llseek(3, -24, [3453], SEEK_CUR) = 0
        read(3, "\nEST5EDT,M3.2.0,M11.1.0\n", 4096) = 24
        close(3) = 0
        munmap(0xb76ff000, 4096) = 0
        socket(PF_FILE, SOCK_DGRAM|SOCK_CLOEXEC, 0) = 3
        connect(3, {sa_family=AF_FILE, path="/dev/log"}, 110) = 0
        send(3, "<18>Jun 18 01:39:03 sendmail[268"..., 96, MSG_NOSIGNAL) = 96
        nanosleep({60, 0},

    So it looks like at some point it tries to resolve the DNS name, but I don't have anything running on port 53, so it dies out and then just hangs. The other interesting thing is that msmtp works just fine on the same server.

    Update: "ares" in the strace output is actually the name of my server, but the .254 IP address is the address of the router.

    Could anyone tell me why this is happening, or what further steps I can take to investigate the issue? Thanks!
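
    A hedged line of investigation (not from the thread): the strace shows a DNS query for "ares.lan" being answered by the router at 192.168.1.254, so the first thing to rule out is the new router's DNS forwarder:

        dig smtp.gmail.com @192.168.1.254 +short   # does the router's resolver answer at all?
        cat /etc/resolv.conf                       # if not, point at a resolver that works...
        echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf   # ...e.g. a public one, as a test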

    Read the article

  • MongoDB data directory transfer and upgrade

    - by KPL
    I just transferred my data directory (of Mongo 1.6.5) to a new server and installed Mongo 2.0 on it. I set the data directory path and did sudo service mongod restart. It failed, and the log file output says this:

        ***** SERVER RESTARTED *****
        Sun Oct 9 07:51:47 [initandlisten] MongoDB starting : pid=8224 port=27017 dbpath=/database/mongodb 64-bit host=domU-12-31-39-09-35-81
        Sun Oct 9 07:51:47 [initandlisten] db version v2.0.0, pdfile version 4.5
        Sun Oct 9 07:51:47 [initandlisten] git version: 695c67dff0ffc361b8568a13366f027caa406222
        Sun Oct 9 07:51:47 [initandlisten] build info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
        Sun Oct 9 07:51:47 [initandlisten] options: { auth: "true", config: "/etc/mongod.conf", dbpath: "/database/mongodb", fork: "true", logappend: "true", logpath: "/var/log/mongo/mongod.log", nojournal: "true" }
        Sun Oct 9 07:51:47 [initandlisten] couldn't open /database/mongodb/local.ns errno:1 Operation not permitted
        Sun Oct 9 07:51:47 [initandlisten] error couldn't open file /database/mongodb/local.ns terminating
        Sun Oct 9 07:51:47 dbexit:
        Sun Oct 9 07:51:47 [initandlisten] shutdown: going to close listening sockets...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: going to flush diaglog...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: going to close sockets...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: waiting for fs preallocator...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: closing all files...
        Sun Oct 9 07:51:47 [initandlisten] closeAllFiles() finished
        Sun Oct 9 07:51:47 [initandlisten] shutdown: removing fs lock...
        Sun Oct 9 07:51:47 dbexit: really exiting now

    I have already run it with --upgrade once.
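
    A hedged guess at the cause (not from the post): "errno:1 Operation not permitted" on local.ns usually means the transferred files are owned by the wrong user, often root after a manual --upgrade run. A sketch, assuming the daemon runs as a "mongod" user:

        ls -l /database/mongodb/local.ns          # check who owns the transferred files
        sudo chown -R mongod:mongod /database/mongodb
        sudo service mongod restart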

    Read the article

  • Problems installing Windows service via Group Policy in a domain

    - by CraneStyle
    I'm reasonably new to Group Policy administration, and I'm trying to deploy an MSI installer via Active Directory to install a service. In reality, I'm a software developer trying to test how my service will be installed in a domain environment.

    My test environment:

        Server 2003 Domain Controller
        About 10 machines (between XP SP3 and Server 2008), all joined to my domain
        No other real setup or Active Directory configuration, apart from things like getting DNS right

    I suspect that I may be missing a step in Group Policy that says I need to grant an explicit permission somewhere, but I have no idea where that might be or what it will say.

    What I've done: I followed the documentation from Microsoft in How to Deploy Software via Group Policy, so I believe all those steps are correct (I used the UNC path, verified NTFS permissions, and verified the computers and users are members of groups that are assigned to receive the policy, etc).

    If I deploy the software via the Computer Configuration, when I reboot the target machine it logs Event ID 108 and says: "Failed to apply changes to software installation settings. Software changes could not be applied. A previous log entry with details should exist. The error was: An operations error occurred." There are no previous log entries to check, which is weird, because if it ever actually tried to invoke the Windows installer it should log any sort of failure of my application's installer.

    If I open a command prompt and manually run:

        msiexec /qb /i \\[host]\[share]\installer.msi

    it installs the service just fine.

    If I deploy the software via the User Configuration, when I log that user in, the Event Log says that software changes were applied successfully, but my service isn't installed. However, when deployed via the User Configuration, even though it's not installed, when I go to Control Panel - Add/Remove Programs and click on Add New Programs, my service installer is being advertised and I can install/remove it from there. (This does not happen when it's assigned to computers.)

    Hopefully that wall of text was enough information to get me going. Thanks all for the help.

    Read the article

  • Graphite not running

    - by River
    I'm currently trying to install graphite 0.9.9 on a gentoo box using these instructions from the graphite wiki. Essentially, it fronts graphite using apache and mod_wsgi. Everything seems to have gone well, except that apache / the graphite webapp never seem to return a response to the web browser (the browser continuously waits to load the page). I've turned on the graphite debug info, but the only message in the log files is this, repeated over and over again in info.log (with the pid always changing):

        Thu Feb 23 01:59:38 2012 :: graphite.wsgi - pid 4810 - reloading search index

    These instructions have worked for me before to set up graphite on an Ubuntu machine. I suspect that mod_wsgi is dying, but I have confirmed that mod_wsgi works fine when not serving the graphite webapp. This is what my graphite.conf vhost file looks like:

        WSGISocketPrefix /etc/httpd/wsgi/

        <VirtualHost *:80>
            ServerName # Server name
            DocumentRoot "/opt/graphite/webapp"
            ErrorLog /opt/graphite/storage/log/webapp/error.log
            CustomLog /opt/graphite/storage/log/webapp/access.log common

            # I've found that an equal number of processes & threads tends
            # to show the best performance for Graphite (ymmv).
            WSGIDaemonProcess graphite processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120
            WSGIProcessGroup graphite
            WSGIApplicationGroup %{GLOBAL}
            WSGIImportScript /opt/graphite/conf/graphite.wsgi process-group=graphite application-group=%{GLOBAL}

            WSGIScriptAlias / /opt/graphite/conf/graphite.wsgi

            Alias /content/ /opt/graphite/webapp/content/
            <Location "/content/">
                SetHandler None
            </Location>

            # XXX In order for the django admin site media to work you
            # must change @DJANGO_ROOT@ to be the path to your django
            # installation, which is probably something like:
            # /usr/lib/python2.6/site-packages/django
            Alias /media/ "/usr/lib64/python2.6/site-packages/django/contrib/admin/media/"
            <Location "/media/">
                SetHandler None
            </Location>

            # The graphite.wsgi file has to be accessible by apache. It won't
            # be visible to clients because of the DocumentRoot though.
            <Directory /opt/graphite/conf/>
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>
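
    A hedged place to look (an editor's addition): the endlessly repeating "reloading search index" line often points at the daemon process not being able to write where it needs to, so checking socket-dir and storage permissions is cheap, assuming httpd runs as the "apache" user:

        ls -ld /etc/httpd/wsgi/                             # WSGISocketPrefix dir must exist and be writable by apache
        sudo chown -R apache:apache /opt/graphite/storage   # index, log and whisper dirs alike
        tail -f /opt/graphite/storage/log/webapp/error.log  # watch while a request hangs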

    Read the article

  • How to effectively have less php-cgi processes running?

    - by João Pinto Jerónimo
    My server is a Linode 512, and on it I run a Wordpress MU with 3 websites (they don't get a lot of visitors) and a couple of NodeJS apps. I needed to switch to Lighttpd because Apache 2 was using about 59% of the server's RAM, but now I have the php-cgi processes taking up about 43.6% of the server's RAM:

        2 processes use 16.5% of the RAM each
        4 processes use 1.8% of the RAM each
        4 more processes use 0.8% of the RAM each

    How can I have fewer of these processes? I'm almost sure they're not all needed for the traffic this server gets. I tried only allowing 2 children, but I still have those 10. This is my fastcgi.server section in lighttpd.conf:

        fastcgi.server = ( ".php" =>
            ( "localhost" =>
                (
                    "socket" => "/var/run/lighttpd/php-fastcgi.socket",
                    "bin-path" => "/usr/bin/php-cgi",
                    "bin-environment" => (
                        "PHP_FCGI_CHILDREN" => "2",
                        "PHP_FCGI_MAX_REQUESTS" => "4000"
                    )
                )
            )
        )

    What else can I do to tune lighttpd to use less RAM?
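
    A hedged explanation of the count (not from the thread): lighttpd multiplies spawners by children, roughly max-procs x (PHP_FCGI_CHILDREN + 1), which is how "2 children" can still yield ten processes. A sketch with "max-procs" as the knob to try:

        # in lighttpd.conf, inside the fastcgi.server block:
        #   "max-procs" => 1,   # one watcher; PHP itself then forks PHP_FCGI_CHILDREN workers
        ps axo rss,comm | grep php-cgi   # confirm the count and per-process RSS afterwards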

    Read the article

  • 'which' isn't working on Linode servers (Ubuntu 10.04)

    - by chrisjlee
    Currently trying to configure a Linode server running on Ubuntu 10.04. I utilized a StackScript (default Drupal profile), which seemed to run successfully; the log indicates so as well. Then I ssh'd into the server (as root) to try to configure PHP. When I run which php or which php5, they both return nothing. A which python returns something, though. I know the default path to PHP, but I usually just use which as confirmation that PHP exists. Do I have to modify some configuration to enable which to work? Tab completion also doesn't seem to work when I apt-get install.

    Update: Thanks for the suggestions, guys. I've run a couple of commands with no luck either:

        [ root@ ~ ] $ dpkg -l | grep php
        [ root@ ~ ] $ apt-get install php5-cli
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package php5-cli is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package php5-cli has no installation candidate

    Then I tried installing php and php-cli together:

        [ root@ ~ ] $ sudo apt-get install php5 php5-cli
        sudo: unable to resolve host xxxxxxx
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package php5 is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package php5 has no installation candidate
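
    A hedged reading of those errors (an editor's addition): "no installation candidate" plus broken tab completion usually just means the package index was never fetched, and the sudo warning is a missing hosts entry:

        sudo apt-get update          # refresh the package lists the StackScript may never have fetched
        apt-cache policy php5        # should now show a candidate version
        echo "127.0.1.1 $(hostname)" | sudo tee -a /etc/hosts   # silence "unable to resolve host"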

    Read the article

  • Forcing programs to be installed to another drive

    - by zyboxenterprises
    I have an SSD as my main Windows drive, with a 640GB 2.5" HDD partitioned to store programs and user settings, and also to act as backup (it's the only thing I had lying around at the time of building my PC). The task was to make the PC as fast as possible, while having increased storage capacity available to store normal user data, and to assist in my small data recovery business.

    The problem is that whenever I install a program, it installs to C:\Program Files\ [(x86) for the 32-bit programs], although I have changed the environment variables. This wouldn't normally be an issue; however, every installation program points its shortcut to my 640GB HDD. The root layout of both drives, to clarify:

        Program files get installed to C:\
        Program shortcuts are always pointed to Z:\, my 640GB HDD

    Modifying the relevant environment variables doesn't do anything. I looked at this, but it only talks about modifying the registry and environment variables, which I have already done. I install to the Z:\ drive if the installation program lets me change the installation path, but the installation programs sometimes don't let me change this. Is there a way that I can force every program to install to the relevant location on Z:\? Perhaps I'm missing something here?

    Edit: Found this program; would it be appropriate to use in my case? I would be able to move the entire Program Files (and its x86 version) to Z:\ without impacting performance.

    Read the article

  • Time Machine is getting stuck at "Preparing to Back Up" and my Trash isn't emptying

    - by zarose
    I have encountered two separate problems, but I am putting them in the same question in case they are related.

    First, my Trash would not empty. It seems to be getting stuck on certain files: I will reset my Macbook and some of the files will be deleted, and then if I remove a file or two at random, more can be deleted. Some of these files had strange characters in their names. I tried changing the names to single characters, but this did not help.

    Next, I attempted to back up my Macbook using Time Machine. I plugged in the HDD I've been using for this, but every time I try to start the backup, Time Machine gets stuck at "Preparing to Back Up". I definitely need to know how to fix the Time Machine problem, but I am curious how to solve the Trash problem as well, and whether or not these problems are related.

    EDIT: Console.app logged the following this morning before I left on a trip. I did not bring the HDD with me.

        6/5/12 7:41:28.312 AM com.apple.backupd: Starting standard backup
        6/5/12 7:41:46.877 AM com.apple.backupd: Error -35 while resolving alias to backup target
        6/5/12 7:41:58.368 AM com.apple.backupd: Backup failed with error: 19
        6/5/12 7:59:08.999 AM com.apple.backupd: Starting standard backup
        6/5/12 7:59:10.187 AM com.apple.backupd: Backing up to: /Volumes/Seagate 3TB Mac/Backups.backupdb
        6/5/12 7:59:13.308 AM com.apple.backupd: Event store UUIDs don't match for volume: Macintosh HD
        6/5/12 7:59:13.331 AM com.apple.backupd: Event store UUIDs don't match for volume: Blank
        6/5/12 7:59:13.683 AM com.apple.backupd: Deep event scan at path:/ reason:must scan subdirs|new event db|
        6/5/12 8:23:31.807 AM com.apple.backupd: Backup canceled.
        6/5/12 8:23:33.373 AM com.apple.backupd: Stopping backup to allow backup destination disk to be unmounted or ejected.
        6/5/12 9:51:21.572 PM com.apple.backupd: Starting standard backup
        6/5/12 9:51:22.515 PM com.apple.backupd: Error -35 while resolving alias to backup target
        6/5/12 9:51:32.741 PM com.apple.backupd: Backup failed with error: 19
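
    Two hedged checks from the sidelines (not from the post): locked files are a classic cause of a stuck Trash, and the log's "Deep event scan" line explains a long "Preparing to Back Up" phase. In Terminal:

        ls -lO ~/.Trash                 # the "uchg" (locked) flag often blocks emptying
        chflags -R nouchg ~/.Trash/*    # clear the flag, then empty normally
        # for Time Machine: after the "Event store UUIDs don't match" lines it has
        # to re-walk the whole volume once, so let the deep scan run before cancelling.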

    Read the article

  • Process PHP files from a network share in a vmware virtual machine

    - by nhinkle
    As a testing environment, I have set up a vmware virtual machine running Windows Server 2008 R2. I have Apache and PHP installed (as part of the xampp package). I am doing the development outside of the VM, and so want Apache to serve PHP files from a VM shared folder (which appears as a network share in the VM). I have done this by creating an NTFS symbolic link in Apache's htdocs directory. I can access this directory from the browser, and plain-text files are readable. However, PHP fails to process files, instead returning the following error:

        Warning: Unknown: failed to open stream: No such file or directory in Unknown on line 0

        Fatal error: Unknown: Failed opening required 'C:/xampplite/htdocs/path/to/file.php' (include_path='.;C:\xampplite\php\PEAR') in Unknown on line 0

    It appears to be a permissions issue -- PHP doesn't seem to be allowed to read the file to process it. However, Apache has no problem opening files in the directory. I cannot figure out how to give PHP the necessary permissions to process the file. Does anybody know of a way to make this work, or else another solution for getting the files into the VM automatically while I develop on the host machine?

    Read the article

  • Mail not piping in postfix

    - by user220912
    I have set up a postfix server and wanted to test piping mail to a script of mine, where I can make use of it and filter the mails. I wrote a test script for that which just logs the information in a txt file, but I don't see any changes on sending the mail.

    My postconf -n output:

        alias_database = hash:/etc/aliases
        append_dot_mydomain = no
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        mail_owner = postfix
        mailbox_size_limit = 0
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = yantratech.co.in, localhost.localdomain, localhost
        myhostname = tcmailer8.in
        mynetworks = 103.8.128.62, 103.8.128.69/101, 168.100.189.0/28, 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        recipient_delimiter = +
        relayhost =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/pki/tls/certs/tcmailer8.in.cert
        smtpd_tls_key_file = /etc/pki/tls/private/localhost.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        transport_maps = hash:/etc/postfix/transport
        virtual_alias_maps = hash:/etc/postfix/virtual
        virtual_gid_maps = static:5000
        virtual_mailbox_base = /home/vmail
        virtual_mailbox_domains = /etc/postfix/vhosts
        virtual_mailbox_maps = hash:/etc/postfix/vmaps
        virtual_minimum_uid = 1000
        virtual_uid_maps = static:5000

    Here's my transport:

        [email protected] email_route

    my main.cf declaration:

        transport_maps = hash:/etc/postfix/transport

    my master.cf declaration:

        email_route unix - n n - - pipe
          flags=FR user=nobody argv=/etc/postfix/test.php -f $(sender) -- $(recipient)

    and my php script:

        #!/usr/bin/php
        <?php
        $fh = fopen('/etc/postfix/testmail.txt','a');
        fwrite($fh, "Hello it works\n");
        fclose($fh);
        ?>

    I am sending mails through telnet on localhost.
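
    A hedged checklist (an editor's addition, not from the post) for the usual reasons a pipe transport never fires:

        sudo postmap /etc/postfix/transport      # hash: tables must be recompiled after every edit
        sudo chmod 755 /etc/postfix/test.php     # the argv target must be executable by user "nobody"
        sudo postfix reload
        tail -f /var/log/mail.log                # watch for the pipe being invoked (log path assumed)
        # also note: user=nobody usually cannot create files under /etc/postfix;
        # pointing the test script at /tmp while debugging removes one variable.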

    Read the article

  • Mount an VHD on Mac OS X

    - by janm
    Is it possible (and how) to mount a VHD file created by Windows 7 in OS X? I found some information about how to do this on Linux: there is a FUSE filesystem, "vdfuse", which uses the VirtualBox libraries to mount filesystems supported by VirtualBox. However, I was unable to compile the package on OS X because nearly all the headers are missing, and I doubt that it would work anyway...

    EDIT #2: Okay, I got my hands dirty and finally compiled vdfuse (http://forums.virtualbox.org/viewtopic.php?f=26&t=33355&start=0) on OS X. As a starting point I used macfuse (http://code.google.com/p/macfuse/) and looked at the example file systems. This led me to the following build script:

        infile=vdfuse.c
        outfile=vdfuse
        incdir="your/path/to/vbox/headers"
        INSTALL_DIR="/Applications/VirtualBox.app/Contents/MacOS"
        CFLAGS="-pipe"
        gcc -arch i386 "${infile}" \
            "${INSTALL_DIR}"/VBoxDD.dylib \
            "${INSTALL_DIR}"/VBoxDDU.dylib \
            "${INSTALL_DIR}"/VBoxVMM.dylib \
            "${INSTALL_DIR}"/VBoxRT.dylib \
            "${INSTALL_DIR}"/VBoxDD2.dylib \
            "${INSTALL_DIR}"/VBoxREM.dylib \
            -o "${outfile}" \
            -I"${incdir}" -I"/usr/local/include/fuse" \
            -Wl,-rpath,"${INSTALL_DIR}" \
            -lfuse_ino64 \
            -Wall ${CFLAGS}

    You actually don't need to compile VirtualBox on your machine, just install a recent version of VirtualBox. So now I can partially mount VHDs: the separate partitions appear as block files Partition1, Partition2, ... on my mount point. However, Mac OS X does not include a loopback file system, and macfuse's loopback fs does not work with block files, so we need a loopback fs to mount the block files as actual partitions.
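
    One hedged avenue for that last missing piece (an editor's suggestion, untested here): hdiutil can often attach a raw partition image directly, which would stand in for the missing loopback fs. The mount point and VHD file name below are placeholders:

        mkdir -p ~/vhdmount
        ./vdfuse -f disk.vhd ~/vhdmount          # exposes Partition1, Partition2, ...
        hdiutil attach -imagekey diskimage-class=CRawDiskImage ~/vhdmount/Partition1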

    Read the article

  • Defining Virtual and Real User Directories with Dovecot & Postfix

    - by blankabout
    Following a wobble described in this question, we now have virtual and real users authenticating with Dovecot. The problem now is that the real users (who have been on the system for years) can no longer access their mail. I'm guessing that it is because Dovecot is configured to point to the virtual mailboxes but not the real mailboxes. These are snippets from the config files:

    /etc/dovecot/dovecot.conf:

        !include conf.d/*.conf

    /etc/dovecot/conf.d/10-auth.conf:

        passdb {
          driver = passwd-file
          # Path for passwd-file. Also set the default password scheme.
          args = scheme=cram-md5 /etc/cram-md5.pwd
        }
        userdb {
          driver = static
          #args = mail_uid=dovecot mail_gid=dovecot /etc/dovecot/userdb
          args = uid=vmail gid=vmail home=/var/spool/vhosts/%d/%n /etc/dovecot/userdb
        }

    and the passwd-file entry:

        [email protected]:::::/var/spool/vhosts/virtualdomain.com/:/bin/false::

    We think the problem is that the Dovecot file 10-auth.conf does not contain a method of accessing the mailboxes for the real users. We have looked around on this site, dovecot.org, and done the usual googling, but cannot find anywhere that describes how to set up virtual users alongside legacy real users. Any help would be appreciated, especially by our real users, who would like the contents of their inboxes to be available! If any further config is required, please let me know.
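
    A hedged sketch of the standard fix (an editor's addition, based on Dovecot's support for stacked auth databases rather than this thread): list a second passdb/userdb pair so authentication falls through from the passwd-file to the system users. The exact file layout is assumed:

        sudo tee -a /etc/dovecot/conf.d/10-auth.conf >/dev/null <<'EOF'
        passdb {
          driver = pam
        }
        userdb {
          driver = passwd
        }
        EOF
        sudo doveadm reload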

    Read the article

  • Configuring a MySQL 5.1 Instance on Windows 7 Professional x64 Fails

    - by Thomas Owens
    I'm trying to set up my laptops to function as mobile development environments. Installing the software on my Linux machine and getting it configured was fairly straightforward; however, I'm having trouble getting MySQL 5.1 Server installed and configured on Windows 7 Professional 64-bit.

    I'm currently using the Windows MSI installer for the complete MySQL 5.1 system (as opposed to the Essentials installer also available). I've tried to install using both the 32-bit and 64-bit versions of MySQL 5.1 -- the same events occur in both. I've installed both the Server Instance Configuration Wizard and Workbench, and everything appears to be installed just fine.

    When I open the Instance Configuration Wizard, I select Detailed Configuration. On the next screen, I select Development Environment, then Multifunctional Database on the screen after that. I leave the InnoDB settings unchanged. I select Manual Setting with 5 concurrent connections. I enable TCP/IP Networking on port 3306 and Enable Strict Mode. I select the Standard Character Set. I check the boxes for Install as a Windows Service (and provide the name "MySQL") and Include the Bin Directory in Windows PATH. On the next screen, I set my root user name and password. I do not enable root access from remote machines, and I also do not create an anonymous account.

    On the final screen of the wizard, when I click "Execute", the first two tasks (Prepare Configuration and Write Configuration File) complete. However, when it reaches Start Service, the wizard hangs and becomes unresponsive ("Not Responding" appears in the title bar and Task Manager).

    I would really like to be able to use both my Windows and Linux laptops as full-blown mobile development environments, but I can't do that without being able to run MySQL. Has anyone encountered this problem before? What options do I have to correct it?

    Read the article

  • What is the alternative of Apache's global Alias in IIS instead of adding a Virtual Directory to every single sites one by one?

    - by Sk8erPeter
    In Apache, there's a way I can make phpMyAdmin available globally to all VirtualHosts I set up. It looks like this:

        <IfModule mod_alias.c>
            Alias /phpmyadmin "c:/AppServ/www/phpMyAdmin"
        </IfModule>

    This way I reach phpMyAdmin by appending /phpmyadmin to any of my domain names, and I can see phpMyAdmin's initial page. (So it works for all my domains: http://example_1.com/phpmyadmin, http://example_2.com/phpmyadmin, and http://example_3.com/phpmyadmin all work.)

    In IIS, there's an "Add Virtual Directory..." option when right-clicking on a given site. Here I can set up, e.g., phpMyAdmin's path to be reached by appending /phpmyadmin to the given domain (e.g. http://example_1.com/phpmyadmin), but isn't there a "global" setting similar to Apache's Alias? Or do I have to add a virtual directory to every given site one by one? I'm just curious; it's not hard work to do it, but I'm interested in whether another method exists. Thanks in advance!

    Read the article

  • Syncing Google Desktop Scratch Pad

    - by Anders Frey
    I'm a long-time user of Google Desktop Scratch Pad, and I would like to be able to put the note in the cloud and make it accessible from all my devices. I'm working towards changing the file path Scratch Pad uses to retrieve its .txt so that it points to a Dropbox folder. As Desktop Scratch Pad is discontinued, I've had no luck in retrieving the API, but what I've got so far is this:

        The scratch pad data is located at:
            C:\Users\[user]\AppData\Local\Google\Google Desktop\a3d83d5fa2e9\scratchpad.txt

        The registry keys related to Google Desktop are located at:
            HKEY_CURRENT_USER\Software\Google\Google Desktop

        I'm guessing the Scratch Pad app itself is registered at:
            HKEY_CURRENT_USER\Software\Google\Google Desktop\Components

    I have limited experience with the registry, so I'm not able to translate the binary and hexadecimal values, but I'm hoping that the path location is in there somewhere. I've tried using a bunch of other note apps (including the 'new' scratch pad in Chrome) but haven't been able to find one that suits my needs like Desktop Scratch Pad does. Hence the effort in this matter. I may be way off, and I'm not sure if this is possible to do, but I'm looking forward to hearing your thoughts.

    Read the article

  • IIS 7, FastCGI, PHP and custom php.ini files

    - by Marlon
    I'm running PHP 5.3, FastCGI, and IIS 7 on Windows Server 2008. I have a site which I would like to configure with its own php.ini settings, but things aren't working as expected. I am following the tutorial located here. This is what I have done so far:

        1. Configured a new website with its own AppPool.
        2. Selected PHP 5.3.6 from the PHP Manager available on the website home on IIS (not the web server home, which sets the global version of PHP).
        3. Added the following lines to the section of the applicationHost.config file located at system32/inetsrv/config:

            <application fullPath="C:\Program Files (x86)\PHP\v5.3\php-cgi.exe"
                         arguments="-d open_basedir=C:\inetpub\wwwroot\kickasswebsite.com"
                         maxInstances="4" idleTimeout="300" activityTimeout="30"
                         requestTimeout="90" instanceMaxRequests="200" protocol="NamedPipe"
                         queueLength="1000" flushNamedPipe="false" rapidFailsPerMinute="10">
                <environmentVariables>
                    <environmentVariable name="PHPRC" value="c:\inetpub\wwwroot\kickasswebsite.com" />
                </environmentVariables>
            </application>

        4. Created a php.ini file located in C:\inetpub\wwwroot\kickasswebsite.com (the root of the website) containing:

            register_globals = on

        5. Ran test.php, which simply outputs everything the call to phpinfo() returns.

    At this point, I observe that the global setting is register_globals = off (as it should be), but the local setting is also register_globals = off, even though I specified it differently in the php.ini file I created at the root of the site. Furthermore, I see these settings in the phpinfo() output:

        Configuration File (php.ini) Path        C:\Windows
        Loaded Configuration File                C:\Program Files (x86)\PHP\v5.3\php.ini
        Scan this dir for additional .ini files  (none)
        Additional .ini files parsed             (none)

    What am I messing up, or is there a different way to go about this?

    Read the article

  • "one-off" use of http_proxy in a Chef remote_file resource

    - by user169200
    I have a use case where most of my remote_file resources and yum resources download files directly from an internal server. However, there is a need to download one or two files with remote_file from outside our firewall, which must go through an HTTP proxy. If I set the http_proxy setting in /etc/chef/client.rb, it adversely affects the recipe's ability to download yum and other files from internal resources.

    Is there a way to have a remote_file resource download a remote URL through a proxy without setting the http_proxy value in /etc/chef/client.rb? In my sample code below, I'm downloading a redmine bundle from rubyforge.org, which requires my servers to go through a corporate proxy. I came up with a ruby_block before and after the remote_file resource that sets the http_proxy and "unsets" it. I'm looking for a cleaner way to do this.

        ruby_block "setenv-http_proxy" do
          block do
            Chef::Config.http_proxy = node['redmine']['http_proxy']
            ENV['http_proxy'] = node['redmine']['http_proxy']
            ENV['HTTP_PROXY'] = node['redmine']['http_proxy']
          end
          action node['redmine']['rubyforge_use_proxy'] ? :create : :nothing
          notifies :create_if_missing, "remote_file[redmine-bundle.zip]", :immediately
        end

        remote_file "redmine-bundle.zip" do
          path "#{Dir.tmpdir}/redmine-#{attrs['version']}-bundle.zip"
          source attrs['download_url']
          mode "0644"
          action :create_if_missing
          notifies :decompress, "zipp[redmine-bundle.zip]", :immediately
          notifies :create, "ruby_block[unsetenv-http_proxy]", :immediately
        end

        ruby_block "unsetenv-http_proxy" do
          block do
            Chef::Config.http_proxy = nil
            ENV['http_proxy'] = nil
            ENV['HTTP_PROXY'] = nil
          end
          action node['redmine']['rubyforge_use_proxy'] ? :create : :nothing
        end

    Read the article

  • Multiple rack apps on nginx + passenger, one as root, the other not...config help

    - by cannikin
    So I've got two apps I want to run on a server. One app I would like to be the "default" app -- that is, all URLs should be sent to this app by default, except for a certain path, let's call it /foo:

        http://mydomain.com/        -> app1
        http://mydomain.com/apples  -> app1
        http://mydomain.com/foo     -> app2

    My two Rack apps are installed like so:

        /var
          /www
            /apps
              /app1
                app.rb
                config.ru
                /public
              /app2
                app.rb
                config.ru
                /public
            app1 -> apps/app1/public
            app2 -> apps/app2/public

    (app1 and app2 are symlinks to their respective apps' public directories). This is the Passenger setup for sub-URIs described here: http://www.modrails.com/documentation/Users%20guide%20Nginx.html#deploying_rack_to_sub_uri

    With the following config I've got /foo going to app2:

        server {
            listen 80;
            server_name mydomain.com;
            root /var/www;
            passenger_enabled on;
            passenger_base_uri /app1;
            passenger_base_uri /app2;

            location /foo {
                rewrite ^.*$ /app2 last;
            }
        }

    Now, how do I get app1 to pick up everything else? I've tried the following (placed after the location /foo directive), but I get a 500 with an infinite internal redirect in error.log:

        location / {
            rewrite ^(.*)$ /app1$1 last;
        }

    I hoped that the last directive would prevent that infinite redirect, but I guess not. /foo gets the same error. Any ideas? Thanks!
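
    A hedged alternative layout (an editor's sketch following Passenger's own sub-URI recipe, not from the thread): serve app1 as the root application and mount only app2 at a sub-URI, so no catch-all rewrite is needed at all:

        # symlink app2's public dir into app1's public dir under the sub-URI name
        sudo ln -s /var/www/apps/app2/public /var/www/apps/app1/public/foo
        # then in the vhost (nginx directives, not shell):
        #   root /var/www/apps/app1/public;
        #   passenger_enabled on;
        #   passenger_base_uri /foo;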

    Read the article

  • Mapped network drive missing from My Computer and Explorer

    - by matt wilkie
    On a Windows XP Pro SP3 machine, one network drive refuses to show up in My Computer or Explorer. The missing drive letter is G:, if that matters. Other mappings work fine, and other profiles on the same machine have no problem mapping G:. I can access G: just fine by typing it into the address bar or in a CMD shell.

    I've used TweakUI to toggle hide/show G: with no difference; TweakUI says G: should be visible. I've logged off and on between toggles to make sure the settings take effect. I've looked at the registry key

        HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer

    and made sure it's zeroed. [insert ref link here]

    We've limped along with this broken setup for some time, just working around it, but some applications do not allow typing in a path when choosing a place to save files, and it's reached the point where it's intolerable. So, does anyone have any idea why XP won't show this drive letter, or how to fix it?

    Read the article

  • Error compiling PHP 5.5.9 on CentOS 6.5 during make command

    - by Chris Mancini
    Here is the error message:

        cc: internal compiler error: Killed (program cc1)
        Please submit a full bug report,
        with preprocessed source if appropriate.
        See <file:///usr/share/doc/gcc-4.6/README.Bugs> for instructions.
        make: *** [ext/fileinfo/libmagic/apprentice.lo] Error 1

    The very last thing make was processing is apprentice.lo, which (per the path) is part of the fileinfo extension's libmagic. I am using Ansible to provision my instance. It is a Digital Ocean single-core 512MB VM. I have been using Vagrant / Ansible with the same config locally for dev and it has compiled fine; this is the first cloud VM I am attempting to provision. The only difference is that the base image for my DO server comes from DO, while for local dev I built my own Vagrant box via VirtualBox from a stock CentOS basic server install (I pull it down from my Dropbox). The problem has been experienced by others and reported as a PHP bug.

    My php Ansible role, up to the error:

        ---
        - name: Download php source
          get_url: url={{ php_source_url }} dest=/tmp
          register: get_url_result

        - name: untar the source package
          command: tar -xvf php-{{ php_version }}.tar.gz chdir=/tmp
          when: get_url_result.changed or php_reinstall

        - name: configure php 5.5
          command: >
            ./configure --prefix={{ php_prefix }} --with-config-file-path={{ php_config_file_path }}
            --enable-fpm --enable-ftp --enable-mbstring --enable-pdo --enable-soap
            --enable-sockets=shared --enable-zip --with-curl --with-fpm-group={{ nginx_group }}
            --with-fpm-user={{ nginx_user }} --with-freetype-dir=/usr/lib64/ --with-gd
            --with-jpeg-dir=/usr/lib64/ --with-libdir=lib64 --with-mcrypt --with-openssl
            --with-pdo-mysql --with-pear --with-readline --with-tidy --with-xsl --with-zlib
            --without-pdo-sqlite --without-sqlite3
            chdir=/tmp/php-{{ php_version }}
          when: get_url_result.changed or php_reinstall

        - name: make clean when reinstalling
          command: make clean chdir=/tmp/php-{{ php_version }}
          when: php_reinstall

        - name: make php
          command: make chdir=/tmp/php-{{ php_version }}
          when: get_url_result.changed or php_reinstall

    Thanks in advance for any help. :)
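
    A hedged diagnosis (an editor's, not from the post): "internal compiler error: Killed" on a 512MB VM is the classic signature of the OOM killer reaping cc1 mid-build; adding swap before the make step usually gets the compile through. Swap size below is an assumption:

        sudo dd if=/dev/zero of=/swapfile bs=1M count=1024   # 1GB swap file
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile && sudo swapon /swapfile
        dmesg | grep -i "out of memory"                      # confirm the OOM killer was the culprit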

    Read the article
