Search Results

Search found 968 results on 39 pages for 'eli jun'.

Page 31/39 | < Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >

  • How can I use the Homebrew Python with Homebrew MacVim on Mountain Lion?

    - by Stephen Jennings
    I originally asked and answered this question: How can I use the Homebrew Python version with Homebrew MacVim? These instructions worked on Snow Leopard using Xcode 4.0.1 and associated developer tools. However, they no longer seem to work on Mountain Lion with Xcode 4.4.1. My goal is to leave the system's version of Python completely untouched, and to only install PyPI packages into Homebrew's site-packages directory. I want to use the vim_bridge package in MacVim, so I need to compile MacVim against the Homebrew version of Python. I've edited the MacVim formula to add these to the arguments: --enable-pythoninterp=dynamic --with-python-config-dir=/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/lib/python2.7/config Then I install with the command: brew install macvim --override-system-vim --custom-icons --with-cscope --with-lua However, it still seems to be somehow using Python 2.7.2 from the system. This seems strange to me because it also seems to be using the correct executable. :python print(sys.version) 2.7.2 (default, Jun 20 2012, 16:23:33) [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] :python print(sys.executable) /usr/local/bin/python $ /usr/local/bin/python --version Python 2.7.3 $ /usr/local/bin/python -c "import sys; print(sys.version)" 2.7.3 (default, Aug 12 2012, 21:17:22) [GCC 4.2.1 Compatible Apple Clang 4.0 ((tags/Apple/clang-421.0.60))] $ readlink /usr/local/lib/python2.7/config /usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/lib/python2.7/config I've removed everything in /usr/local and reinstalled Homebrew by running these commands: $ ruby <(curl -fsSkL raw.github.com/mxcl/homebrew/go) $ brew install git mercurial python ruby $ brew install macvim (nope, still broken) $ brew remove macvim $ ln -s /usr/local/Cellar/python/..../python2.7/config /usr/local/lib/python2.7/config $ brew install macvim
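    One way to see which Python runtime the MacVim binary actually loaded, since sys.executable can be misleading when the interpreter is linked dynamically, is to check sys.prefix from inside MacVim:

      :python import sys; print(sys.prefix)

    If that still prints /System/Library/Frameworks/Python.framework/Versions/2.7 rather than the Homebrew Cellar path, the editor is loading the system libpython at runtime even though the config dir was overridden; running otool -L against the Vim binary inside MacVim.app (exact path varies by install) shows which Python framework it was linked against.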

    Read the article

  • Weird .#filename files on remote ssh-connected systems after mcedit

    - by etranger
    I'm using MacFusion sshfs in combination with Midnight Commander, and when I edit remote text files with mcedit, weird symlinks are created on the remote system. $ ls -l .* lrwxr-xr-x 1 user group 34 Jun 27 01:54 .#filename.txt -> [email protected] where etranger is my local login name, and mbp is a hostname of my notebook running MacOS. symlinks can be removed by running remote rm command, but cannot be deleted on the mac-fuse mounted volume and thus pollutes the filesystem. I cannot figure what part of software is responsible for this, and how I could fix this, any help is appreciated. EDIT: This appears to be mcedit behavior as documented here: https://dev.openwrt.org/ticket/8245 Apparently, sshfs fails to remove symlink to the lock file for some reason (".#" in filename, perhaps), and it pollutes the filesystem. A quick workaround is possible, using another bug of Midnight Commander: editing (F4) the broken symlink effectively converts it to a missing lock file it was supposed to point to, and removes the symlink itself. The newly created file may then be deleted normally. EDIT 2: Unchecking "Follow symlink" in MacFusion apparently allows sshfs to remove dead symlinks, so the problem disappears completely.
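    Until that sshfs/mcedit behaviour changes, the dead lock-file symlinks can be swept up over a plain ssh session instead of through the FUSE mount; a minimal sketch, with user, host and directory as placeholders:

      $ ssh user@remotehost 'find ~/target/dir -maxdepth 1 -name ".#*" -type l -delete'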

    Read the article

  • [Ubuntu 10.04] mdadm - Can't get RAID5 Array To Start

    - by Matthew Hodgkins
    Hello, after a power failure my RAID array refuses to start. When I boot I have to sudo mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 to get mdadm to notice the array. Here are the details (after I force assemble). sudo mdadm --misc --detail /dev/md0: /dev/md0: Version : 00.90 Creation Time : Sun Apr 25 01:39:25 2010 Raid Level : raid5 Used Dev Size : 1465135872 (1397.26 GiB 1500.30 GB) Raid Devices : 6 Total Devices : 6 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Thu Jun 17 23:02:38 2010 State : active, Not Started Active Devices : 6 Working Devices : 6 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 128K UUID : 44a8f730:b9bea6ea:3a28392c:12b22235 (local to host hodge-fs) Events : 0.1249691 Number Major Minor RaidDevice State 0 8 65 0 active sync /dev/sde1 1 8 81 1 active sync /dev/sdf1 2 8 97 2 active sync /dev/sdg1 3 8 49 3 active sync /dev/sdd1 4 8 33 4 active sync /dev/sdc1 5 8 17 5 active sync /dev/sdb1 mdadm.conf: # by default, scan all partitions (/proc/partitions) for MD superblocks. # alternatively, specify devices to scan, using wildcards if desired. DEVICE partitions /dev/sdb1 /dev/sdb1 # auto-create devices with Debian standard permissions CREATE owner=root group=disk mode=0660 auto=yes # automatically tag new arrays as belonging to the local system HOMEHOST <system> # definitions of existing MD arrays ARRAY /dev/md0 level=raid5 num-devices=6 UUID=44a8f730:b9bea6ea:3a28392c:12b22235 Any help would be appreciated.
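    One detail worth a second look is the DEVICE line in that mdadm.conf: it lists /dev/sdb1 twice and omits the other five members, so the boot-time scan may never consider the whole set. A sketch of a cleanup, assuming Ubuntu's default paths (back up the file first):

      $ sudo cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
      $ sudo sed -i 's/^DEVICE .*/DEVICE partitions/' /etc/mdadm/mdadm.conf
      $ sudo mdadm --examine --scan     # compare the ARRAY line it prints with the one in the file
      $ sudo update-initramfs -u        # so early-boot assembly picks up the corrected config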

    Read the article

  • ffmpeg rotate mp4 90°

    - by shox
    Hi, can I rotate (+ save / re-encode) a .mp4 with ffmpeg? The only thing I found was on the mailing list saying -vfilters "rotate=90", but ffmpeg says no vfilters. Tried -vf; it says there is no rotate. If I try to do it in VLC it simply does not rotate and kills the audio (did the VLC encoding EVER work? Every single freakin video I throw at it gets fu****d up in some way -_- ) I'm on a Mac and don't have iWork. Any ideas? Thanks FFmpeg version git-svn-r23607, Copyright (c) 2000-2010 the FFmpeg developers built on Jun 14 2010 23:52:55 with gcc 4.2.1 (Apple Inc. build 5659) configuration: libavutil 50.19. 0 / 50.19. 0 libavcodec 52.76. 0 / 52.76. 0 libavformat 52.68. 0 / 52.68. 0 libavdevice 52. 2. 0 / 52. 2. 0 libavfilter 1.20. 0 / 1.20. 0 libswscale 0.11. 0 / 0.11. 0 Hyper fast Audio and Video encoder
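    Later ffmpeg builds ship a transpose video filter that does exactly this; the 2010 build quoted above predates it, so the commands below assume an updated ffmpeg:

      $ ffmpeg -i input.mp4 -vf "transpose=1" -acodec copy output.mp4   # 1 = 90° clockwise
      $ ffmpeg -i input.mp4 -vf "transpose=2" -acodec copy output.mp4   # 2 = 90° counter-clockwise

    The video stream is re-encoded (rotation cannot be done with -vcodec copy), while the audio is copied through untouched.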

    Read the article

  • cannot log into mysql locally

    - by Lostsoul
    When I try to log into mysql locally using the command: mysql -u root -p I get this error: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) I can access the server remotely(not as root) and my web pages are using the mysql fine, but locally I cannot log on(which I need because I need to create some users). Only change I made was to attach another drive to the server and move the sql data there. Here's my.cnf [mysqld] datadir=/media/ephemeral0/data/mysql socket=/media/ephemeral0/data/mysql/mysql.sock user=mysql # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 # adding more config skip-external-locking long_query_time=1 slow_query_log slow_query_log_file=/var/log/log-slow-queries.log log-bin=mysql-bin server-id= 1 [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid myisam_recover_options I read I need to edit the socket info in my.cnf to make sure it points to the right socket file..I double checked and the file exists(although it starts with an S when I do ls -l "srwxrwxrwx 1 mysql mysql 0 Jun 21 03:43 mysql.sock"). I'm not really sure how to resolve this. I have tried to reboot and ran yum update to make sure I was running the latest packages. Please help!
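    The server now creates its socket under the new datadir, but the command-line client still looks in /var/lib/mysql. Two ways around that, reusing the socket path from the my.cnf above:

      $ mysql --socket=/media/ephemeral0/data/mysql/mysql.sock -u root -p

      # or make it permanent in /etc/my.cnf:
      [client]
      socket=/media/ephemeral0/data/mysql/mysql.sock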

    Read the article

  • vhost configuration for owncloud

    - by Razer
    I'm using apache2 for hosting owncloud. I configured a vhost file for owncloud, but every time I go on the site my browser downloads a ruby file. Here is my vhost configuration: <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName http://rsserver.fritz.box DocumentRoot /var/www/owncloud/ <Directory /var/www/owncloud/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> Apache error log tells me: [Sat Jun 16 20:46:04 2012] [error] [client xx.xx.xx.xx] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/owncloud/core/templates/403.php mod_rewrite is enabled. Where is the problem?
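    The vhost above does enable FollowSymLinks, but with AllowOverride All any .htaccess shipped inside the ownCloud tree can reset Options for its own directory, which is what the error message points at. A quick check, and the one-line addition if such a file turns up (a sketch, not the canonical ownCloud config):

      $ grep -R "^Options" /var/www/owncloud --include=.htaccess
      # in any .htaccess that sets Options without the flag, add:
      Options +FollowSymLinks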

    Read the article

  • What's wrong with this HTTP POST request?

    - by bigboy
    I'm trying to fuzz a server using the Sulley fuzzing framework. I observe the following stream in Wireshark. The error talks about a problem with JSON parsing, however, when I try the same HTTP POST request using Google Chrome's Postman extension, it succeeds. Can anyone please explain what could be wrong about this HTTP POST request? The JSON seems valid. POST /restconf/config HTTP/1.1 Host: 127.0.0.1:8080 Accept: */* Content-Type: application/yang.data+json { "toaster:toaster" : { "toaster:toasterManufacturer" : "Geqq", "toaster:toasterModelNumber" : "asaxc", "toaster:toasterStatus" : "_." }} HTTP/1.1 400 Bad Request Server: Apache-Coyote/1.1 Content-Type: */* Transfer-Encoding: chunked Date: Sat, 07 Jun 2014 05:26:35 GMT Connection: close 152 <?xml version="1.0" encoding="UTF-8" standalone="no"?> <errors xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf"> <error> <error-type>protocol</error-type> <error-tag>malformed-message</error-tag> <error-message>Error parsing input: Root element of Json has to be Object</error-message> </error> </errors> 0
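    Notably, the captured request carries neither a Content-Length nor a Transfer-Encoding header, so the server most likely reads an empty body and then complains that the JSON root is missing. Replaying the call with curl, which adds Content-Length automatically, is a quick way to confirm (same endpoint as above):

      $ curl -v -X POST http://127.0.0.1:8080/restconf/config \
            -H 'Content-Type: application/yang.data+json' \
            -d '{ "toaster:toaster" : { "toaster:toasterManufacturer" : "Geqq" } }'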

    Read the article

  • uWSGI cannot find "application" using Flask and Virtualenv

    - by skyler
    Using uWSGI to serve a simple wsgi app, (a simple "Hello, World") my configuration works, but when I try to run a Flask app, I get this in uWSGI's error logs: current working directory: /opt/python-env/coefficient/lib/python2.6/site-packages writing pidfile to /var/run/uwsgi.pid detected binary path: /opt/uwsgi/uwsgi setuid() to 497 your memory page size is 4096 bytes detected max file descriptor number: 1024 lock engine: pthread robust mutexes uwsgi socket 0 bound to TCP address 127.0.0.1:3031 fd 3 Python version: 2.6.6 (r266:84292, Jun 18 2012, 14:18:47) [GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] Set PythonHome to /opt/python-env/coefficient/ *** Python threads support is disabled. You can enable it with --enable-threads *** Python main interpreter initialized at 0xbed3b0 your server socket listen backlog is limited to 100 connections *** Operational MODE: single process *** added /opt/python-env/coefficient/lib/python2.6/site-packages/ to pythonpath. unable to find "application" callable in file /var/www/coefficient/flask.py unable to load app 0 (mountpoint='') (callable not found or import error) *** no app loaded. going in full dynamic mode *** *** uWSGI is running in multiple interpreter mode ***` Note in particular this part of the log: unable to find "application" callable in file /var/www/coefficient/flask.py unable to load app 0 (mountpoint='') (callable not found or import error) **no app loaded. going in full dynamic mode** This is my Flask app: from flask import Flask app = Flask(__name__) @app.route("/") def hello(): return "Hello, World, from Flask!" Before I added my Virtualenv's pythonpath to my configuration file, I was getting an ImportError for Flask. I solved this though, I believe (I'm not receiving errors about it anymore) and here is my complete configuration file: uwsgi: #socket: /tmp/uwsgi.sock socket: 127.0.0.1:3031 daemonize: /var/log/uwsgi.log pidfile: /var/run/uwsgi.pid master: true vacuum: true #wsgi-file: /var/www/coefficient/coefficient.py wsgi-file: /var/www/coefficient/flask.py processes: 1 virtualenv: /opt/python-env/coefficient/ pythonpath: /opt/python-env/coefficient/lib/python2.6/site-packages This is how I start uWSGI, from an rc script: /opt/uwsgi/uwsgi --yaml /etc/uwsgi/conf.yaml --uid uwsgi And if I try to view the Flask program in a browser, I get this: **uWSGI Error** Python application not found Any help is appreciated.
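    Two things stand out: the Flask object is called app, while uWSGI looks for a callable named application by default, and a module named flask.py shadows the flask package it tries to import. A sketch of the config change (the new file name is an assumption; anything that isn't flask.py will do):

      uwsgi:
        wsgi-file: /var/www/coefficient/coefficient_app.py
        callable: app

    Alternatively, adding a line application = app at the bottom of the module keeps the existing configuration working.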

    Read the article

  • Website cannot be accessed with google DNS because of unsigned DNS

    - by Sinan Samet
    I get this error: Inconsistent security for stakeholdergame.com - DS found at parent, but no DNSKEY found at child. On http://dnscheck.pingdom.com/?domain=stakeholdergame.com People can't access my site with google public DNS because of this. How do I solve this problem? dig @ns1.haveabyte.nl stakeholdergame.com DS shows me this ; <<>> DiG 9.8.3-P1 <<>> @ns1.haveabyte.nl stakeholdergame.com DS ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42223 ;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; WARNING: recursion requested but not available ;; QUESTION SECTION: ;stakeholdergame.com. IN DS ;; AUTHORITY SECTION: stakeholdergame.com. 14400 IN SOA ns1.haveabyte.nl. hostmaster.stakeholdergame.com. 2014030300 14400 3600 1209600 86400 ;; Query time: 21 msec ;; SERVER: 79.170.93.174#53(79.170.93.174) ;; WHEN: Tue Jun 10 11:20:41 2014 ;; MSG SIZE rcvd: 100
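    The message means the parent zone still publishes a DS record while the zone itself no longer serves a DNSKEY, so validating resolvers such as Google Public DNS reject every answer. The fix is either to re-sign the zone and publish the matching DNSKEY, or to have the registrar withdraw the DS record; these dig checks show the current state:

      $ dig @ns1.haveabyte.nl stakeholdergame.com DNSKEY +short   # currently returns nothing
      $ dig @8.8.8.8 stakeholdergame.com A                        # SERVFAIL while DS and DNSKEY disagree
      $ dig @8.8.8.8 stakeholdergame.com A +cd                    # checking disabled, should answer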

    Read the article

  • Why is OpenSSH not using the user specified in ssh_config?

    - by Jordan Evens
    I'm using OpenSSH from a Windows machine to connect to a Linux Mint 9 box. My Windows user name doesn't match the ssh target's user name, so I'm trying to specify the user to use for login using ssh_config. I know OpenSSH can see the ssh_config file since I'm specifying the identify file in it. The section specific to the host in ssh_config is: Host hostname HostName hostname IdentityFile ~/.ssh/id_dsa User username Compression yes If I do ssh username@hostname it works. Trying using ssh_config only gives: F:\>ssh -v hostname OpenSSH_5.6p1, OpenSSL 0.9.8o 01 Jun 2010 debug1: Connecting to hostname [XX.XX.XX.XX] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa type -1 debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa-cert type -1 debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa type 2 debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3p1 Debia n-3ubuntu5 debug1: match: OpenSSH_5.3p1 Debian-3ubuntu5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'hostname' is known and matches the RSA host key. debug1: Found key in /cygdrive/f/progs/OpenSSH/home/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa debug1: Offering DSA public key: /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa debug1: Authentications that can continue: publickey debug1: No more authentication methods to try. Permission denied (publickey). I was under the impression that (as outlined in this question: How to make ssh log in as the right user?) specifying User username in ssh_config should work. Why isn't OpenSSH using the username specified in ssh_config?
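    Judging by the identity-file paths in that log, this OpenSSH build keeps its per-user files under /cygdrive/f/progs/OpenSSH/home/.ssh, so the Host block is only honoured if it sits in a config file in that same directory. Pointing ssh at the file explicitly is a quick way to verify (path assumed from the log above):

      F:\>ssh -F /cygdrive/f/progs/OpenSSH/home/.ssh/config -v hostname

    If that works, the original ssh_config simply isn't the file this build is reading.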

    Read the article

  • Wget save cookies not working

    - by TrymBeast
    I've been trying to login in the pyload through the web api, but wget is not saving the cookies and I don't understand why. I'm using the following command: wget --delete-after --keep-session-cookies --save-cookies=my_cookies.txt --post-data="username=USERNAME&password=PASSWORD" http://localhost:8000/api/login But the content of my_cookies.txt is: # HTTP cookie file. # Generated by Wget on 2012-06-23 22:31:33. # Edit at your own risk. When I run the same command but in debug mode I get the following output that includes the set cookie in the header response: DEBUG output created by Wget 1.10.2 (Red Hat modified) on linux-gnueabi. --22:31:11-- http://localhost:8000/api/login Resolving localhost... 127.0.0.1 Caching localhost => 127.0.0.1 Connecting to localhost|127.0.0.1|:8000... connected. Created socket 3. Releasing 0x000504d0 (new refcount 1). ---request begin--- POST /api/login HTTP/1.0 User-Agent: Wget/1.10.2 (Red Hat modified) Accept: */* Host: localhost:8000 Connection: Keep-Alive Content-Type: application/x-www-form-urlencoded Content-Length: 32 ---request end--- [POST data: username=USERNAME&password=PASSWORD] HTTP request sent, awaiting response... ---response begin--- HTTP/1.1 200 OK Content-Length: 34 Content-Type: application/json Cache-Control: no-cache, must-revalidate Set-cookie: beaker.session.id=405390ddc809efed54820638c95d7997; expires=Tue, 19-Jan-2038 04:14:07 GMT; Path=/ Connection: Keep-Alive Date: Sat, 23 Jun 2012 21:31:11 GMT Server: CherryPy/3.1.2 WSGI Server ---response end--- 200 OK hs->local_file is: login (not existing) Registered socket 3 for persistent reuse. TEXTHTML is on. Length: 34 [application/json] Saving to: `login' 100%[=======================================>] 34 --.-K/s in 0s 22:31:11 (1.28 MB/s) - `login' saved [34/34] Removing file due to --delete-after in main(): Removing login. Saving cookies to my_cookies.txt. Done saving cookies. Can anyone tell me what am I doing wrong? Thanks in advance!
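    One thing worth ruling out is the host name itself: cookie handling for dot-less domains such as localhost has historically been flaky in some wget builds. Running the same login through curl (same endpoint and POST body) confirms whether the server side is fine:

      $ curl -c my_cookies.txt -d "username=USERNAME&password=PASSWORD" http://localhost:8000/api/login
      $ cat my_cookies.txt    # should now contain the beaker.session.id cookie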

    Read the article

  • How do I solve this error: ssh_exchange_identification: Connection closed by remote host

    - by bernie
    data@server01:~$ ssh [email protected] -vvv OpenSSH_5.5p1 Debian-6, OpenSSL 0.9.8o 01 Jun 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 10.7.4.1 [10.7.4.1] port 22. debug1: Connection established. debug3: Not a RSA1 key file /home/data/.ssh/id_rsa. debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/data/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048 debug1: identity file /home/data/.ssh/id_rsa-cert type -1 debug1: identity file /home/data/.ssh/id_dsa type -1 debug1: identity file /home/data/.ssh/id_dsa-cert type -1 ssh_exchange_identification: Connection closed by remote host
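    The remote host is closing the connection before the protocol banner, which usually points at TCP wrappers, a connection limit, or a broken sshd rather than at key problems. Checks to run on 10.7.4.1 (log path assumes a Debian-style layout):

      $ grep -i sshd /etc/hosts.deny /etc/hosts.allow
      $ sudo tail -n 50 /var/log/auth.log    # sshd normally logs why it dropped the connection
      $ sudo /usr/sbin/sshd -t               # syntax-check sshd_config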

    Read the article

  • Libvirt / QEmu Machine Fails and Refuses Restart Because of Memory Allocation Errors

    - by Elmar Weber
    I'm having a problem with libvirt. On a system restart all virtual machines (VMs) are started without a problem and keep running. Then at some point in time a set of machines shuts down according to their log. When I try to restart the machine, I'm getting an error that the memory allocation failed, although more than enough memory is free. server ~ # free total used free shared buffers cached Mem: 16176648 16025476 151172 0 285432 950300 -/+ buffers/cache: 14789744 1386904 Swap: 0 0 0 server ~ # virsh start zimbra error: Failed to start domain zimbra error: Unable to read from monitor: Connection reset by peer server ~ # tail -n 4 /var/log/libvirt/qemu/zimbra.log LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 3072 -smp 2,sockets=2,cores=1,threads=1 -name zimbra -uuid d05ddb7a-83c4-a77b-d8bc-a322648520cf -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/zimbra.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/var/lib/libvirt/images/zimbra.img,if=none,id=drive-ide0-0-0,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,fd=19,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:21:a9:ad,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 192.168.1.2:25 -k de -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 char device redirected to /dev/pts/2 Failed to allocate 3221225472 B: Cannot allocate memory 2012-07-06 08:42:56.076+0000: shutting down server ~ # uname -a Linux server 3.2.0-26-generic #41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux The system is a Ubuntu 12.04 server. The problem seems to occurs since the last restart, which was due to a number of package upgrades and a kernel upgrade. I tried booting with the previous kernel, the problem persists. I was not able to pinpoint an exact event when the machines fail, they do it at nearly the same time. The last time a duplicity job was running, this was not always the case however. Any suggestions on how to debug this? Best regards, elm
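    One detail in that free output: the "-/+ buffers/cache" row shows roughly 14.8 GB genuinely in use and only about 1.4 GB free or reclaimable, so a 3 GB guest really cannot be allocated; something on the host is holding the memory. A first pass at finding it:

      server ~ # ps aux --sort=-rss | head -n 10        # largest resident processes
      server ~ # grep -i -e commit -e slab /proc/meminfo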

    Read the article

  • Spam in Whois: How is it done and how do I protect my domain?

    - by user2964971
    Yes, there are answered questions regarding spam in Whois. But still unclear: How do they do it? How should I respond? What precautions can I take? For example: Whois for google.com [...] Server Name: GOOGLE.COM.ZOMBIED.AND.HACKED.BY.WWW.WEB-HACK.COM IP Address: 217.107.217.167 Registrar: DOMAINCONTEXT, INC. Whois Server: whois.domaincontext.com Referral URL: http://www.domaincontext.com Server Name: GOOGLE.COM.ZZZZZ.GET.LAID.AT.WWW.SWINGINGCOMMUNITY.COM IP Address: 69.41.185.195 Registrar: TUCOWS DOMAINS INC. Whois Server: whois.tucows.com Referral URL: http://domainhelp.opensrs.net Server Name: GOOGLE.COM.ZZZZZZZZZZZZZ.GET.ONE.MILLION.DOLLARS.AT.WWW.UNIMUNDI.COM IP Address: 209.126.190.70 Registrar: PDR LTD. D/B/A PUBLICDOMAINREGISTRY.COM Whois Server: whois.PublicDomainRegistry.com Referral URL: http://www.PublicDomainRegistry.com Server Name: GOOGLE.COM.ZZZZZZZZZZZZZZZZZZZZZZZZZZ.HAVENDATA.COM IP Address: 50.23.75.44 Registrar: PDR LTD. D/B/A PUBLICDOMAINREGISTRY.COM Whois Server: whois.PublicDomainRegistry.com Referral URL: http://www.PublicDomainRegistry.com Server Name: GOOGLE.COMMAS2CHAPTERS.COM IP Address: 216.239.32.21 Registrar: CRAZY DOMAINS FZ-LLC Whois Server: whois.crazydomains.com Referral URL: http://www.crazydomains.com [...] >>> Last update of whois database: Thu, 05 Jun 2014 02:10:51 UTC <<< [...] >>> Last update of WHOIS database: 2014-06-04T19:04:53-0700 <<< [...]
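    Those entries are registered nameserver (host) objects, not domains: anyone can create a host record named GOOGLE.COM.whatever, and the registry whois matches every object starting with the queried string, so the listed domain itself is untouched and needs no extra protection. Restricting the query to the domain object filters out the noise (keyword syntax of the Verisign whois server, as far as I can tell):

      $ whois -h whois.verisign-grs.com "domain google.com"
      $ whois -h whois.verisign-grs.com "=google.com"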

    Read the article

  • ddclient to update namecheap subdomain?

    - by LF4
    I have a subdomain that I want to update with ddclient. I configured the ddclient to get the IP from dyndns but it's not updating the subdomain on namecheap. They said to use yourdomain.com as the login instead of my actual domain. Has anyone been able to get namecheap DNS updated with ddclient? I'm running CentOS 6.2 with ddclient 3.7.3. When I run ddclient I get the following. CONNECT: checkip.dyndns.org CONNECTED: using HTTP SENDING: GET / HTTP/1.0 SENDING: Host: checkip.dyndns.org SENDING: User-Agent: ddclient/3.7.3 SENDING: Connection: close SENDING: RECEIVE: HTTP/1.1 200 OK RECEIVE: Content-Type: text/html RECEIVE: Server: DynDNS-CheckIP/1.0 RECEIVE: Connection: close RECEIVE: Cache-Control: no-cache RECEIVE: Pragma: no-cache RECEIVE: Content-Length: 106 RECEIVE: RECEIVE: <html><head><title>Current IP Check</title></head><body>Current IP Address: IPADD</body></html> Use of uninitialized value in string ne at /usr/sbin/ddclient line 1998. WARNING: skipping update of lf4bot from <nothing> to IPADD WARNING: last updated <never> but last attempt on Fri Jun 15 22:46:21 2012 failed. WARNING: Wait at least 5 minutes between update attempts. ddclient.conf File daemon=300 # check every 300 seconds syslog=yes # log update msgs to syslog mail=root # mail all msgs to root mail-failure=root # mail failed update msgs to root pid=/var/run/ddclient.pid # record PID in file. ssl=yes # use ssl-support. Works with use=web, web=checkip.dyndns.org/, web-skip='IP Address' # found after IP Address protocol=namecheap \ server=dynamicdns.park-your-domain.com \ login=yourdomain.com \ password=PASSWORD \ lf4bot
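    For protocol=namecheap, ddclient expects login= to be the bare domain, password= to be the Dynamic DNS password generated in the Namecheap panel (not the account password), and the final entry to be just the subdomain label. A sketch of the stanza in the same continuation style as above, followed by a forced retry (cache path is the usual location; adjust if different):

      protocol=namecheap \
      server=dynamicdns.park-your-domain.com \
      login=yourdomain.com \
      password=dynamic-dns-password-from-the-namecheap-panel \
      lf4bot

      $ sudo rm -f /var/cache/ddclient/ddclient.cache
      $ sudo ddclient -daemon=0 -debug -verbose -noquiet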

    Read the article

  • Delete ARP cache on Mac OS when moving from one Wifi network to the other

    - by Puneet
    I am facing wireless connectivity problems when I move from one Wifi network to the other. Here is how it happens: I am at my friend's place. I connect to his Wifi. His Wifi router IP address is 192.168.0.1. Everything is fine. I close my laptop, come back to my house, open my laptop and connect to the Wifi network at my place. Different ESSID, but the Wifi router address is the same 192.168.0.1. At this point I can't get to anything on the internet. To debug I try to see if I can ping the router (192.168.0.1); I can't. I get a no route to host. Meanwhile airport tells me I'm connected to Wifi. I look at the arp cache and I see a permanent entry for 192.168.0.1 ? (192.168.0.1) at 5c:d9:98:65:73:6c on en1 permanent [ethernet] This permanent bit looks problematic. I go ahead and delete the arp cache entry and all is fine with the world until I go back to my friend's place, where the same situation plays out. Now my question is, why the hell is this happening? If there is no way around it, can I run a script on Wifi connect/disconnect to clear out the arp cache? I'm using Mac OS X $uname -a Darwin 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
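    Clearing the stale entry can be scripted; on OS X the relevant commands are:

      #!/bin/sh
      # forget the cached entry for the router so it is re-learned on the new network
      sudo arp -d 192.168.0.1 2>/dev/null
      # or flush the whole table
      sudo arp -a -d

    Hooking such a script into a network-change watcher (for instance a launchd job watching /Library/Preferences/SystemConfiguration) is one way to run it automatically on connect/disconnect.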

    Read the article

  • How to allow writing to a mounted NFS partition

    - by Cerin
    How do you allow a specific user permission to write to an NFS partition? I've mounted an NFS share on my localhost (a Fedora install), and I can read and write as root, but I'm unable to write as the apache user, even though all the files and directories in the share on my localhost and remote host are owned by apache. For example, I've mounted it via this line in my /etc/fstab: remotehost:/data/media /data/media nfs _netdev,soft,intr,rw,bg 0 0 And both locations are owned by apache: [root@remotehost ~]# ls -la /data total 24 drwxr-xr-x. 6 root root 4096 Jan 6 2011 . dr-xr-xr-x. 28 root root 4096 Oct 31 2011 .. drwxr-xr-x 4 apache apache 4096 Jan 14 2011 media [root@localhost ~]# ls -la /data total 16 drwxr-xr-x 4 apache apache 4096 Dec 7 2011 . dr-xr-xr-x. 27 root root 4096 Jun 11 15:51 .. drwxrwxrwx 5 apache apache 4096 Jan 31 2011 media However, when I try and write as the apache user, I get a "Permission denied" error. [root@localhost ~]# sudo -u apache touch /data/media/test.txt' touch: cannot touch `/data/media/test.txt': Permission denied But of course it works fine as root. What am I doing wrong?
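    NFSv3 compares numeric UIDs, not user names, so apache must have the same uid and gid on both machines; matching names with different numbers produce exactly this symptom. A quick check on both ends:

      [root@localhost ~]# id apache
      [root@remotehost ~]# id apache
      # the uid= and gid= values must match; if they differ, align them with
      # usermod/groupmod on one side (or move to an id-mapping setup)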

    Read the article

  • Local references to old server name remain after Windows 2003 server rename

    - by imagodei
    I have a standalone Win 2003 server with Windows Sharepoint Services (WSS3) running on it. I had to rename the server and I had bunch of problems resulting from this. Note that the server is not in AD environment. Most obvious problems were with Sharepoint, which didn't work. I was somewhat naive to think it will work in the first place, but OK - I've solved this using step 1 & 3 from this site (TNX) Other curious behavior/problems remain. Most disturbing is that Sharepoint isn't able to send email notifications to participants. I noticed there are several references to old server name everywhere I look: in Registry, in Windows Internal Database (MICROSOFT##SSEE). I see instances of old server name in the Sharepoint Central Administration - Operations - Servers in farm. There is reference to a servers: oldname.domain.local oldname.local On one of those servers there is also Windows SharePoint Services Outgoing E-Mail Service (Stopped). Also, when I try to telnet locally to the mail server (Simple Mail Transfer Protocol (SMTP) service), I get a response: 220 oldname.domain.local Microsoft ESMTP MAIL Service, Version: 6.0.3790.4675 ready at Tue, 15 Jun 2010 13:56:19 +0200 IMO these strange naming problems are also the reason why email notifications from within Sharepoint don't work. Can anyone tell me how to correct/replace those references to oldservername? Why is the email service insisting on old name? Of course I would like to try it without reinstalling the server. TNX!
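    WSS 3.0 also records the machine name in its configuration database, so the stale entries have to be renamed there; stsadm has an operation for it (run from the WSS bin directory, old and new names as placeholders):

      cd "%CommonProgramFiles%\Microsoft Shared\web server extensions\12\BIN"
      stsadm -o renameserver -oldservername OLDNAME -newservername NEWNAME
      iisreset

    The SMTP banner comes from the IIS SMTP virtual server's own fully-qualified domain name (Delivery > Advanced in its properties), which is worth updating in the same pass.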

    Read the article

  • Trouble with backslash characters and rsyslog writing to postgres

    - by Flimzy
    I have rsyslog 4.6.4 configured to write mail logs to a PostgreSQL database. It all works fine, until the log message contains a backslash, as in this example: Jun 12 11:37:46 dc5 postfix/smtp[26475]: Vk0nYDKdH3sI: to=<[email protected], relay=----.---[---.---.---.---]:25, delay=1.5, delays=0.77/0.07/0.3/0.35, dsn=4.3.0, status=deferred (host ----.---[199.85.216.241] said: 451 4.3.0 Error writing to file d:\pmta\spool\B\00000414, status = ERROR_DISK_FULL in "DATA" (in reply to end of DATA command)) The above is the log entry, as written to /var/log/mail.log. It is correct. The trouble is that the backslash characters in the file name are interpreted as escapes when sent to the following SQL recipe: $template dcdb, "SELECT rsyslog_insert(('%timereported:::date-rfc3339%'::TIMESTAMPTZ)::TIMESTAMP,'%msg:::escape-cc%'::TEXT,'%syslogtag%'::VARCHAR)",STDSQL :syslogtag, startswith, "postfix" :ompgsql:/var/run/postgresql,dc,root,;dcdb As a result, the rsyslog_insert() stored procedure gets the following value for as msg: Vk0nYDKdH3sI: to=<[email protected], relay=----.---[---.---.---.---]:25, delay=1.5, delays=0.77/0.07/0.3/0.35, dsn=4.3.0, status=deferred (host ----.---[199.85.216.241] said: 451 4.3.0 Error writing to file d:pmtaspoolB The \p, \s, \B and \0 in the file name are interpreted by PostgreSQL as literal p, s, and B followed by a NULL character, thus early-terminating the string. This behavior can be easiily confirmed with: dc=# SELECT 'd:\pmta\spool\B\00000414'; ?column? -------------- d:pmtaspoolB (1 row) dc=# Is there a way to correct this problem? Is there a way I'm not finding in the rsyslog docs to turn \ into \\?
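    One approach is to stop PostgreSQL treating backslashes in ordinary string literals as escapes, which is the pre-9.1 default behaviour; the parameter exists from 8.2 onwards and can be set in postgresql.conf so it also applies to rsyslog's connection:

      -- quick interactive test
      SET standard_conforming_strings = on;
      SELECT 'd:\pmta\spool\B\00000414';   -- now returns the full literal string
      -- to make it permanent: standard_conforming_strings = on in postgresql.conf, then reload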

    Read the article

  • apache: can't renew ssl certificate

    - by Caballero
    I have a GoDaddy SSL certificate for one website on my dedicated server running CentOS 5.3 / Apache 2.2.3. I have renewed the certificate on GoDaddy recently; however, it is now showing as expired on my website. I've since re-keyed the certificate and re-uploaded the domain.key, domain.crt and bundle.crt (example file names) files to the server and restarted apache, but the certificate still shows as expired. I'm running out of clues. I've tried replacing the content of the .crt files with gibberish and restarting apache - it's still showing that the certificate is expired, even though it shouldn't be picked up at all. I eventually rebooted the dedicated server, still no luck. I'm using the free SSL check tool http://www.digicert.com/help/ which clearly shows all the green checks except one - certificate is expired. Does anyone have any idea what might be causing this? Could there be some kind of caching going on here? UPDATE: after running openssl x509 -in domain.crt -noout -enddate I'm getting this output: notAfter=Jun 2 08:16:51 2013 GMT So I assume this means I have the right certificate on the server and yet the old expired one shows on the web...
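    Comparing what is on disk with what Apache actually serves usually settles whether this is caching or a second certificate file still being referenced (hostname below is a placeholder):

      $ openssl x509 -in domain.crt -noout -enddate -subject
      $ echo | openssl s_client -connect www.example.com:443 2>/dev/null | openssl x509 -noout -enddate -subject
      $ grep -R "SSLCertificateFile" /etc/httpd/conf /etc/httpd/conf.d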

    Read the article

  • SVN Authentication with LDAP and Active Directory

    - by Alex Holsgrove
    I am having a few problems getting SVN authentication to work with LDAP / Active Directory. My SVN installation works fine, but after enabling LDAP in my apache vhost, I just can't get my users to authenticate. I can use a selection of LDAP browsers to successfully connect to Active Directory, but just can't seem to get this to work. SVN is setup in /var/local/svn Server is svn.domain.local For testing, my repository is /var/local/svn/test My vhost file is as follows: <VirtualHost *:80> ServerAdmin [email protected] ServerAlias svn.domain.local ServerName svn.domain.local DocumentRoot /var/www/svn/ <Location /test> DAV svn #SVNListParentPath On SVNPath /var/local/svn/test AuthzSVNAccessFile /var/local/svn/svnaccess AuthzLDAPAuthoritative off AuthType Basic AuthName "SVN Server" AuthBasicProvider ldap AuthLDAPBindDN "CN=adminuser,OU=SBSAdmin Users,OU=Users,OU=MyBusiness,DC=domain,DC=local" AuthLDAPBindPassword "admin password" AuthLDAPURL "ldap://192.168.1.6:389/OU=SBSUsers,OU=Users,OU=MyBusiness,DC=domain,DC=local?sAMAccountName?sub?(objectClass=*)" Require valid-user </Location> CustomLog /var/log/apache2/svn/access.log combined ErrorLog /var/log/apache2/svn/error.log </VirtualHost> In my error.log, I don't seem to get any bind errors (should I be looking elsewhere?), but just the following: [Thu Jun 21 09:51:38 2012] [error] [client 192.168.1.142] user alex: authentication failure for "/test/": Password Mismatch, referer: http://svn.domain.local/test/ At the end of "AuthLDAPURL", I have seen people using TLS and NONE but neither seem to help in my case. I have the ldap modules loaded and have checked as much as I know, so any help would be most welcome. Thanks
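    "Password Mismatch" from mod_authnz_ldap generally means the user entry was found but the re-bind with the supplied password failed, so testing the same search and bind with ldapsearch separates Apache from Active Directory (values copied from the vhost above, passwords prompted interactively):

      $ ldapsearch -x -H ldap://192.168.1.6:389 \
            -D "CN=adminuser,OU=SBSAdmin Users,OU=Users,OU=MyBusiness,DC=domain,DC=local" -W \
            -b "OU=SBSUsers,OU=Users,OU=MyBusiness,DC=domain,DC=local" "(sAMAccountName=alex)" dn
      $ ldapsearch -x -H ldap://192.168.1.6:389 -D "alex@domain.local" -W -b "" -s base

    If the second bind fails, the problem is the credentials or the account rather than the Apache configuration; note also that the user has to sit under the OU=SBSUsers base given in AuthLDAPURL.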

    Read the article

  • Serve a specific set of error pages for different subdirectories

    - by navitronic
    I am currently trying to set up 2 different sets of ErrorDocuments for separate folders within a website. I have 2 folders within the root of a site: demo/ live/ Any requests that return 404's or 403's within the demo folder need to load one set of pages for the Apache ErrorDocuments, e.g. ErrorDocument 404 /statuses/demo-404.html ErrorDocument 403 /statuses/demo-403.html And the live folder needs to go to similarly named files. ErrorDocument 404 /statuses/live-404.html ErrorDocument 403 /statuses/live-403.html So far I have tried placing an .htaccess file in both directories with the ErrorDocument directives set up pointing to the specific files; the 404 works fine and references the correct page. However, the 403s do not work and revert to the server default when trying to access forbidden folders within the demo directory; the logs indicate the following: [Wed Jun 16 04:47:44 2010] [crit] [client 115.64.131.144] (13)Permission denied: /home/abstract/public_html/demo/xxx/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable Is this correct? Would apache revert to default because it is trying to look for the htaccess in a folder it doesn't have permission in? Why wouldn't it work its way back through the folder tree? Can I make it do this?
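    Since Apache cannot even read the .htaccess inside the forbidden folder, the per-directory configuration is never consulted and the default 403 page wins; declaring the directives in the vhost per directory avoids relying on .htaccess, roughly like this (document root taken from the log line above):

      <Directory /home/abstract/public_html/demo>
          ErrorDocument 404 /statuses/demo-404.html
          ErrorDocument 403 /statuses/demo-403.html
      </Directory>
      <Directory /home/abstract/public_html/live>
          ErrorDocument 404 /statuses/live-404.html
          ErrorDocument 403 /statuses/live-403.html
      </Directory>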

    Read the article

  • tomcat 'document base does not exist' error (but it does)

    - by SpliFF
    Gentoo / Tomcat 6 INFO: Starting Servlet Engine: Apache Tomcat/6.0.20 Sep 8, 2009 10:34:51 AM org.apache.catalina.core.StandardContext resourcesStart SEVERE: Error starting static Resources java.lang.IllegalArgumentException: Document base /www/rivervalley/site does not exist or is not a readable directory at org.apache.naming.resources.FileDirContext.setDocBase(Unknown Source) at org.apache.catalina.core.StandardContext.resourcesStart(Unknown Source) at org.apache.catalina.core.StandardContext.start(Unknown Source) at org.apache.catalina.core.ContainerBase.start(Unknown Source) at org.apache.catalina.core.StandardHost.start(Unknown Source) at org.apache.catalina.core.ContainerBase.start(Unknown Source) oh really? then how come: ls -la /www/rivervalley/site/ drwxr-xr-x 12 tomcat tomcat 4096 Sep 8 09:56 . drwxr-xr-x 16 tomcat tomcat 4096 Jun 29 16:22 .. -rwxr--r-- 1 tomcat tomcat 520 Jul 3 02:15 Application.cfm drwxr-xr-x 2 tomcat tomcat 4096 Sep 8 09:56 WEB-INF and ... tomcat 18916 1.0 5.5 1159188 167892 ? Ssl 10:37 0:11 /opt/sun-jdk-1.5.0.18/bin/java -Djava.util.loggin Hell, ANY account can read that directory so the claim is utter nonsense. What else can cause this? Here's my relevant server.xml section: <Host name="rivervalley" appBase="webapps" unpackWARs="false" autoDeploy="false" xmlValidation="false" xmlNamespaceAware="false"> <Context path="" docBase="/www/rivervalley/site" /> </Host>
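    Tomcat needs execute (search) permission on every directory along the path, not just on the leaf, and the check runs as the tomcat user; these two commands usually show where it breaks:

      $ sudo -u tomcat ls /www/rivervalley/site    # fails if any parent directory lacks +x for tomcat
      $ namei -m /www/rivervalley/site             # prints the permissions of each path component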

    Read the article

  • Incorrect durations mp4 file created by ffmpeg (avconv)

    - by Ruslan Sharipov
    Example usage: avconv -i rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af -vcodec copy -an -sn -map 0 -f segment -segment_format mp4 -segment_time 60 -y %05d.mp4 avconv version 0.8.3-6:0.8.3-1+b1, Copyright (c) 2000-2012 the Libav developers built on Jun 15 2012 13:54:35 with gcc 4.7.0 HandShake: client signature does not match! Metadata: height 480.00 remote_addr: sdp_session {sdp_session,0, {sdp_o,"-","1289703354974145","1289703354974145",inet4, "10.1.12.99"}, "Media Presentation", {inet4,"0.0.0.0"}, {0,0}, [{"control","*"},{"range","npt=0.0 start 30400239.52 timeshift_duration 319250.58 timeshift_size 120000.00 width 640.00 [flv @ 0x1d36a40] Estimating duration from bitrate, this may be inaccurate Input #0, flv, from 'rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af': Duration: N/A, start: 0.000000, bitrate: N/A Stream #0.0: Video: h264 (Baseline), yuvj420p, 640x480 [PAR 1:1 DAR 4:3], 1k tbr, 1k tbn, 2k tbc Output #0, segment, to '%05d.mp4': Metadata: encoder : Lavf53.21.0 Stream #0.0: Video: libx264, yuvj420p, 640x480 [PAR 1:1 DAR 4:3], q=2-31, 1k tbn, 1k tbc Stream mapping: Stream #0:0 -> #0:0 (copy) Press ctrl-c to stop encoding ^Cframe= 9566 fps= 36 q=-1.0 Lsize= -0kB time=318.25 bitrate= -0.0kbits/s video:30348kB audio:0kB global headers:0kB muxing overhead -100.000071% Received signal 2: terminating. Result: serafim@yard:~/video2$ ls 00000.mp4 00001.mp4 00002.mp4 00003.mp4 00004.mp4 00005.mp4 Now try to play the files in a player such as VLC. Here's what we get: the first fragment (00000.mp4) plays well, no problems, but from the second one (00001.mp4 and beyond) the bug manifests itself: the first 60 seconds of 00001.mp4 are a black screen, and the video only starts playing at second 61. Attachments: https://dl.dropbox.com/u/760901/rtmp_and_mp4.zip How do I get rid of the black-screen delay at the beginning of the segments? Maybe there are parameters to pass to ffmpeg, or third-party software able to correct the resulting mp4 segments?
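    Segments after the first keep the absolute timestamps of the live stream, so players treat the start of each later file as missing data; newer ffmpeg/avconv builds can rewrite the timestamps per segment (the flag below does not exist in 0.8.3, so this assumes an updated build):

      ffmpeg -i rtmp://maps.lo.ufanet.ru/live/STREAMKEY -vcodec copy -an -sn -map 0 \
             -f segment -segment_format mp4 -segment_time 60 -reset_timestamps 1 -y %05d.mp4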

    Read the article

  • DPMS does not work: the monitor is not switched off

    - by bortzmeyer
    I have a monitor which was properly switched off by my Debian PC when unused. I attached it to another machine and, this time, it is never switched off. In /etc/X11/xorg.conf, I have: Section "Monitor" Identifier "Generic Monitor" Option "DPMS" It is recognized when X11 starts: (II) Loading extension DPMS ... (II) VESA(0): DPMS capabilities: StandBy Suspend Off; RGB/Color Display ... (**) Option "dpms" (**) VESA(0): DPMS enabled The operating system is Debian stable "lenny". The graphics card is: 00:02.0 VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 02) (prog-if 00 [VGA controller]) Subsystem: Hewlett-Packard Company Device 2a6f Flags: bus master, fast devsel, latency 0, IRQ 5 Memory at fe900000 (32-bit, non-prefetchable) [size=512K] I/O ports at b080 [size=8] Memory at d0000000 (32-bit, prefetchable) [size=256M] Memory at fe800000 (32-bit, non-prefetchable) [size=1M] Capabilities: [90] Message Signalled Interrupts: Mask- 64bit- Queue=0/0 Enable-Capabilities: [d0] Power Management version 2 X11 is: X.Org X Server 1.4.2 Release Date: 11 June 2008 X Protocol Version 11, Revision 0 Build Operating System: Linux Debian (xorg-server 2:1.4.2-10.lenny2) Current Operating System: Linux ludwigVII 2.6.26-2-686 #1 SMP Sun Jun 21 04:57:38 UTC 2009 i686 Build Date: 08 June 2009 09:12:57AM
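    A quick way to tell whether the X server honours DPMS at all, independent of any screensaver or power manager that may be overriding the xorg.conf setting:

      $ xset q | grep -A 3 DPMS        # shows the timeouts and whether DPMS is Enabled
      $ xset +dpms dpms 300 600 900    # standby/suspend/off after 5/10/15 minutes
      $ xset dpms force off            # the monitor should power down immediately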

    Read the article
