Search Results

Search found 29619 results on 1185 pages for 'external script'.


  • JBOD with PERC H810

    - by primero
    I'm wondering if anybody has ever used Dell storage products like the MD3220 array in a JBOD configuration. From what I can tell, only the PERC H810 will work for external JBOD, but that information is not terribly specific, and for some reason I couldn't find many examples on the web of people configuring Dell storage products as JBOD. My question is: is it possible to connect to an MD3220 array, or other Dell arrays, using a PERC H810 controller and use it as JBOD, and if so, do I have to configure every disk in the array as a RAID 0 volume?

    Read the article

  • Can't Connect to IIS Ftp Site under Amazon EC2

    - by h3n
    IIS 7.5 setup:

        FTP Firewall Support: data channel port range 49152-65535, external IP set to the Amazon EC2 static IP
        FTP IPv4 Address Restriction: allow the Amazon EC2 static IP
        FTP Authentication: Anonymous: Enabled, Basic: Disabled, IIS Manager: Enabled
        FTP Authorization: Allow All Users: Read/Write
        Windows Firewall (Inbound): open port 21, open port range 49152-65535
        Windows Firewall (Outbound): open port 20
        Amazon EC2 Security Group: Custom TCP Rule: 21, Custom TCP Rule: 49152-65535

    It works in Internet Explorer when I type ftp://localhost on the server itself, but when I enter the Amazon EC2 static IP (ftp://IPADDRESS) it doesn't connect. I can't connect with FileZilla either.
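    For reference, a minimal sketch of how that firewall-support configuration is usually applied from an elevated command prompt on IIS 7.5 - the site name 'Default FTP Site' and the public IP 203.0.113.10 are placeholders, and the port range is the one quoted above:

        rem data channel range IIS hands out in PASV replies, plus the public IP it advertises
        %windir%\system32\inetsrv\appcmd set config /section:system.ftpServer/firewallSupport /lowDataChannelPort:49152 /highDataChannelPort:65535 /commit:apphost
        %windir%\system32\inetsrv\appcmd set config -section:system.applicationHost/sites "/[name='Default FTP Site'].ftpServer.firewallSupport.externalIp4Address:203.0.113.10" /commit:apphost
        rem restart the FTP service so the new data channel range takes effect
        net stop ftpsvc && net start ftpsvc
        rem inbound firewall rules for the control port and the passive data range
        netsh advfirewall firewall add rule name="FTP control" dir=in action=allow protocol=TCP localport=21
        netsh advfirewall firewall add rule name="FTP passive data" dir=in action=allow protocol=TCP localport=49152-65535

    (Outbound port 20 only matters for active-mode FTP; passive transfers ride the inbound data range.)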

    Read the article

  • flask, lighttpd with fastcgi can't get it to work

    - by kurojishi
    I'm trying to deploy a simple Flask script to a lighttpd server with FastCGI. This is the lighttpd configuration file, built using the Flask documentation http://flask.pocoo.org/docs/deploying/fastcgi/#configuring-lighttpd

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_compress",
            "mod_redirect",
            "mod_rewrite",
            "mod_fastcgi",
        )
        server.document-root = "/var/www"
        server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
        server.errorlog = "/var/log/lighttpd/error.log"
        server.pid-file = "/var/run/lighttpd.pid"
        server.username = "www-data"
        server.groupname = "www-data"
        index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", " index.lighttpd.html" )
        url.access-deny = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
        var.home_dir = "/var/lib/lighttpd"
        var.socket_dir = home_dir + "sockets/"
        ## Use ipv6 if available
        #include_shell "/usr/share/lighttpd/use-ipv6.pl"
        dir-listing.encoding = "utf-8"
        server.dir-listing = "enable"
        compress.cache-dir = "/var/cache/lighttpd/compress/"
        compress.filetype = ( "application/x-javascript", "text/css", "text/html", "text/plain" )
        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"
        fastcgi.server = ("weibo/callback.fcgi" =>
            ((
                "socket" => "/tmp/weibocrawler-fcgi.sock",
                "bin-path" => "/var/www/weibo/callback.fcgi",
                "check-local" => "disable",
                "max-procs" => 1
            ))
        )
        url.rewrite-once = (
            "^(/weibo($|/.*))$" => "$1",
            "^(/.*)$" => "weibo/callback.fcgi$1"

    and this is the script I'm trying to run:

        #!/home/nrl/kuro/weiboenv/bin/python
        from flup.server.fcgi import WSGIServer
        from callback import app

        if __name__ == '__main__':
            WSGIServer(application, bindAddress='/tmp/weibocrawler-fcgi.sock').run()

    but when I test the configuration file I get this error:

        2013-07-02 17:15:42: (configfile.c.912) source: lighttpd.conf.new line: 52 pos: 1 parser failed somehow near here: weibo/callback.fcgi$1

    When I remove the url rewrite, I get these errors in the log even though the daemon starts:

        2013-07-02 16:25:53: (log.c.166) server started
        2013-07-02 16:25:53: (mod_fastcgi.c.1104) the fastcgi-backend fcgi.py failed to start:
        2013-07-02 16:25:53: (mod_fastcgi.c.1108) child exited with status 2 fcgi.py
        2013-07-02 16:25:53: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version. If this is PHP on Gentoo, add 'fastcgi' to the USE flags.
        2013-07-02 16:25:53: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed.
        2013-07-02 16:25:53: (server.c.938) Configuration of plugins failed. Going down.
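    Two things stand out in the configuration and script as quoted: the url.rewrite-once block never closes its opening parenthesis, which is consistent with the "parser failed somehow near here" message at line 52, and the FastCGI script imports app but hands an undefined name, application, to WSGIServer, so the spawned backend would die immediately with a NameError. For comparison, a balanced rewrite block with the same two rules would look like this:

        url.rewrite-once = (
            "^(/weibo($|/.*))$" => "$1",
            "^(/.*)$" => "weibo/callback.fcgi$1"
        )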

    Read the article

  • Booting the redis server without errors

    - by Tylër
    Redis usually boots with the following warnings:

        tyler@tyler-vortex:~/pens$ ./src/redis-server
        [3690] 01 Dec 10:56:05 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
        [3690] 01 Dec 10:56:05 # Unable to set the max number of files limit to 10032 (Operation not permitted), setting the max clients configuration to 992.

    Other errors found:

        tyler@tyler-vortex:~/redis$ sudo ./utils/install_server.sh
        Welcome to the redis service installer
        This script will help you easily set up a running redis server
        Please select the redis port for this instance: [6379] Selecting default: 6379
        Please select the redis config file name [/etc/redis/6379.conf] Selected default - /etc/redis/6379.conf
        Please select the redis log file name [/var/log/redis_6379.log] Selected default - /var/log/redis_6379.log
        Please select the data directory for this instance [/var/lib/redis/6379] Selected default - /var/lib/redis/6379
        Please select the redis executable path [/usr/local/bin/redis-server]
        cat: ./redis.conf.tpl: No such file or directory
        cat: ./redis_init_script.tpl: No such file or directory
        ERROR: Could not write init script to /tmp/6379.conf. Aborting!

    Furthermore, I would like to know how to configure redis not to consume so much RAM. I followed the memory configuration from the redis site (http://redis.io/topics/virtual-memory), but the "vm-*" settings do not exist in redis.conf. Do I have to create them?

    Edit: I ran the installer. After that, I believe I no longer have access via ./src/redis-server, because this happens:

        tyler@tyler-vortex:~$ cd redis/
        tyler@tyler-vortex:~/redis$ ./src/redis-server
        [2616] 01 Dec 22:29:30 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
        [2616] 01 Dec 22:29:30 # Opening port 6379: bind: Address already in use
        tyler@tyler-vortex:~/redis$

    There's another detail: redis now starts with the system.

        redis 127.0.0.1:6379> exit
        tyler@tyler-vortex:~/redis$ ./src/redis-cli
        redis 127.0.0.1:6379> exit

    ...but how can I get the console output I had before installing via the .sh script?
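    For what it's worth, a minimal sketch of how the two symptoms above are usually handled, assuming the installer eventually registered an init script under the default name redis_6379 (adjust the name if the installer reported something else):

        # stop the service instance registered by install_server.sh, so a foreground
        # ./src/redis-server can bind port 6379 again
        sudo /etc/init.d/redis_6379 stop

        # raise the open-files limit (needs root or a raised hard limit) before starting
        # redis in the foreground, so the "Unable to set the max number of files" warning goes away
        ulimit -n 10032
        ./src/redis-server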

    Read the article

  • setting up synergy (multi-computer shared input devices) new to mac

    - by pedalpete
    I've just gotten a Mac, and the first thing I'm trying to do in setting it up for dev is to set up Synergy. I've run into countless issues, and am making progress, but that 'Macs just work' theory is getting tired really fast. I'd prefer running the Mac as the server, as it's a desktop - why use my PC (laptop) as the server? So I downloaded Synergy and tried to edit the 'synergy.conf' file, first with TextMate, then with the AppleScript editor. Neither of these applications will open the file, even after changing TextMate to use plain text format and switching off a few other things to make it into a plain text editor. No good. Uh, isn't that what these script/text editors are for? Then I found SynergyKM, which is a package to run Synergy on OS X. It took LOTS of fiddling, but I think I finally kinda got it figured out. I've got my PC and Mac both running Synergy. However, my PC will only connect to Synergy using the IP address. No problem; however, my Mac won't connect to itself as a Synergy server using the IP address. If I use the IP address as a screen in the server configuration file, I get 'Error: unknown screen name mymac'. I know this shouldn't be super complicated, and this is possibly not the place to find answers about Synergy, but I couldn't find a better place, and there has been some talk of Synergy here. Any chance somebody can help with this?
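    For reference, a minimal sketch of the synergy.conf layout being described, with an aliases section mapping a screen name to an address - the screen names and the IP here are placeholders. The "unknown screen name" error is what Synergy reports when the name a client (or the server itself) connects with has no matching screens entry or alias:

        section: screens
            mymac:
            mypc:
        end

        section: links
            mymac:
                right = mypc
            mypc:
                left = mymac
        end

        section: aliases
            mymac:
                192.168.1.10
        end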

    Read the article

  • How to use cyg-wrapper to fork a new tab in win32 gvim

    - by Peter Nore
    I would like to set up an alias in my Cygwin .bashrc that translates pathnames Unix-to-DOS and passes them to Windows gvim in a new tab of an existing instance. I am trying to use Luc Hermitte's cyg-wrapper script for running native win32 applications from Cygwin, as per this vim tip. Luc's example of how to use his script is:

        alias vi='cyg-wrapper.sh "C:/Progra~1/Edition/vim/vim63/gvim.exe" --binary-opt=-c,--cmd,-T,-t,--servername,--remote-send,--remote-expr'

    I do not understand this example, because most of these vim parameters (-c, --cmd, --servername, --remote-send, --remote-expr, etc.) require more information, and I have not found any example of how to supply the additional information to cyg-wrapper.sh. For example, calling

        C:/Progra~1/Edition/vim/vim63/gvim.exe --servername=GVIM --remote-tab-silent file1 &

    will open file1 in a new tab of an existing (or not yet existing) instance GVIM, but calling gvim --servername accomplishes nothing on its own. Unfortunately, though, the corresponding cyg-wrapper phrase does not work:

        cyg-wrapper.sh "C:/Progra~1/Edition/vim/vim63/gvim.exe" --binary-opt=--servername=GVIM,--remote-tab-silent --fork=2 file1

    If run twice, this actually opens up two instances of gvim; it is as if the servername 'GVIM' is being stripped and ignored. How do you supply a servername to gvim --servername, or a .vimrc to gvim -u, using cyg-wrapper.sh? Furthermore, why is it that programs must be passed to cyg-wrapper.sh in the relatively obscure "mixed form"? For example, if I try

        cyg-wrapper.sh "/cygdrive/c/path/to/GVimPortable.exe" --binary-opt=--servername=GVIM,--remote-tab-silent --fork=2

    I get "Invalid switch - "/cygdrive"." See also: getting-gvim-to-automatically-translate-a-cygwin-path alias-to-open-gvim-cream-version-from-cygwin-shell

    Read the article

  • ssl_error_rx_record_too_long connection error with SSL site in Firefox

    - by Thomas
    I am trying to set up SSL in Apache, but when I go to the server in Firefox I get the following error message:

        An error occurred during a connection to sludge.home.
        SSL received a record that exceeded the maximum permissible length.
        (Error code: ssl_error_rx_record_too_long)

    My virtual host config file looks like this:

        <IfDefine SSL>
        <VirtualHost *:443>
            ServerName sludge.home
            SSLEngine on
            SSLCertificateFile /usr/local/apache/cert/server.crt
            SSLCertificateKeyFile /usr/local/apache/cert/server.pem
            SSLProtocol -SSLv2
            SSLCipherSuite HIGH:!ADH:!EXP:!MD5:!NULL
            DocumentRoot "/usr/local/apache/htdocs"
            ServerAdmin [email protected]
            <Location />
                AuthType Digest
                AuthName "private area"
                AuthDigestDomain /
                AuthDigestProvider file
                AuthUserFile /usr/local/apache/passwd/digest_pw
                Require valid-user
            </Location>
            <Directory /usr/local/apache/htdocs/bugz>
                AddHandler cgi-script .cgi
                Options +Indexes +ExecCGI
                DirectoryIndex index.cgi
                AllowOverride Limit FileInfo Indexes
            </Directory>
            <Directory /usr/local/apache/htdocs/bugzilla>
                AddHandler cgi-script .cgi
                Options +Indexes +ExecCGI
                DirectoryIndex index.cgi
                AllowOverride Limit FileInfo Indexes
            </Directory>
            <Directory />
                Options FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
            </Directory>
            <Directory "/usr/local/apache/htdocs">
                Options Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <IfModule dir_module>
                DirectoryIndex index.html
            </IfModule>
            <FilesMatch "^\.ht">
                Order allow,deny
                Deny from all
                Satisfy All
            </FilesMatch>
        </VirtualHost>
        </IfDefine>

    A telnet to this server reveals that the server is sending plain HTML back. Furthermore, it seems as though mod_ssl is not loaded/working, even though when I run httpd -l it shows up as being statically compiled in. I have exhausted most avenues I can think of.
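    A short sketch of the checks that usually narrow this error down - plain HTTP on port 443 typically means SSL was never enabled for that listener, for example because the server was not started with -DSSL (so the <IfDefine SSL> block is skipped) or there is no Listen 443. The httpd path below assumes the /usr/local/apache prefix used in the config:

        # dump the parsed vhosts with the SSL define set; *:443 should appear here
        /usr/local/apache/bin/httpd -DSSL -S
        # syntax-check the config the same way
        /usr/local/apache/bin/httpd -DSSL -t
        # see what is actually being spoken on 443
        openssl s_client -connect sludge.home:443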

    Read the article

  • How to permanently "renice" a process on Mac OS X (or iOS, etc)?

    - by mralexgray
    I use a nice (free) process manager called ATMonitor for Mac OS X that has a lot of cool hidden features... one of which is being able to click on a running process and set the "renice" from +20 (lowest priority) to -20 (highest priority). The best part: it sticks between restarts. So if you want XYZ to get full attention all the time, you set it once and it's done. I want to do the same thing (renice a process) on an iPad running a particular daemon, and I don't know how to set a renice permanently. I can do it once, and it works fine, but the setting is lost on a reboot. I read somewhere:

        Now, as for permanently resetting the priority of a process, this can't be done directly. You can fake it, however, with a shell script that starts the app and then immediately renices it. Give that script a ".command" extension and it will be double-clickable in the GUI. Not very elegant, but it gets the job done.

    But as it says, not very elegant, and I don't think this is how ATMonitor does it. I found this thread... http://superuser.com ...and they gave a way to do it as a launch argument, but no apparent way to save it as a persistent value - for instance, if the program wasn't going to be started by launchd. How do I set a permanent renice level, per executable binary, independent of its PID, and regardless of when, how or why it was launched?
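    For reference, a minimal sketch of the wrapper approach quoted above (the daemon path is a placeholder). For jobs that launchd starts, the job's plist can also carry a Nice integer key, which applies the priority on every launch without any wrapper:

        #!/bin/sh
        # start the daemon, then immediately renice it to the highest priority
        /path/to/mydaemon &
        renice -n -20 -p $!
        wait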

    Read the article

  • Seeing 10.x.x.x addresses in my BitTorrent client

    - by Legend
    Lately, I have been seeing a number of IP addresses starting with 10.x.x.x in my BitTorrent peer list. Aren't these IP addresses supposed to be private? I don't understand how I am able to see the private address instead of the WAN (external) IP of the client... Does anyone know why?

    Read the article

  • Process runs slower as a scheduled task than it does interactively

    - by Charlie
    I have a scheduled task which is very CPU- and IO-intensive, and takes about four hours to run (building source code, if you're curious). The task is a Powershell script which spawns various sub-processes to do its work. When I run the same process interactively from a Powershell prompt, as the same user account, it runs in about two and a half hours. The task is running on Windows Server 2008 R2. What I want to know is why it takes so much longer to run as a scheduled task - more than an hour longer. One thing I noticed is that the task scheduler runs at Below-Normal priority, so when my task starts, it inherits the same lowered priority. However, I've updated the script to set the Powershell process priority back to Normal, and it still takes just as long. Anybody have an idea what could be different between the two scenarios? I've ruled out differences in processor and IO load - this task is the only thing the system is used for, so there's nothing else running that could be competing for resources.
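    For reference, a sketch of the in-script priority adjustment mentioned above, plus the task-level setting that is easy to overlook (the XML element assumes the Task Scheduler 2.0 schema used on Server 2008 R2):

        # raise the scheduled PowerShell host back to Normal CPU priority
        $proc = [System.Diagnostics.Process]::GetCurrentProcess()
        $proc.PriorityClass = [System.Diagnostics.ProcessPriorityClass]::Normal

        # Separately, the task definition has its own <Priority> element under <Settings>
        # (export the task to XML, change the default 7 to 4, re-import). A value of 7 runs
        # the task below normal priority and is often reported to lower I/O priority too,
        # which a PriorityClass change inside the script does not necessarily undo.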

    Read the article

  • Linux HA cluster w/Xen, Heartbeat, Pacemaker. domU does not failover to secondary node

    - by Kendall
    I am having the following problem with an OpenSuSE + Heartbeat + Pacemaker + Xen HA cluster: when the node a Xen domU is running on is "dead", the Xen domU that was running on it is not restarted on the second node. The cluster is set up with two nodes, each running OpenSuSE-11.3, Heartbeat 3.0, and Pacemaker 1.0 in CRM mode. For storage I am using a LUN on an iSCSI SAN device; the LUN is formatted with OCFS2 and managed with LVM. The Xen domU has two logical volumes, one for root and the other for swap. I am using IPMI cards for STONITH devices, and a dedicated ethernet link for heartbeat communications. The ha.cf file is as follows:

        keepalive 1
        deadtime 10
        warntime 5
        udpport 694
        ucast eth1
        auto_failback off
        node dhcp-166
        node stage
        use_logd yes
        crm yes

    My resources look as follows:

        crm(live)configure# show
        node $id="5c1aa924-bba4-4f95-a367-6c9a58ac4a38" dhcp-166
        node $id="cebc92eb-af24-4833-aaf0-672adf80b58e" stage
        primitive Xen-Util ocf:heartbeat:Xen \
            meta target-role="Started" \
            operations $id="Xen-Util-operations" \
            op start interval="0" timeout="60" start-delay="0" \
            op stop interval="0" timeout="120" \
            params xmfile="/etc/xen/vm/xen-util"
        primitive my-stonith stonith:external/ipmi \
            params hostname="dhcp-166" ipaddr="192.168.3.106" userid="ADMIN" passwd="xxx" \
            op monitor interval="2m" timeout="60s"
        primitive my-stonith2 stonith:external/ipmi \
            params hostname="stage" ipaddr="192.168.3.105" userid="ADMIN" passwd="xxx" \
            op monitor interval="2m" timeout="60s"
        property $id="cib-bootstrap-options" \
            dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \
            cluster-infrastructure="Heartbeat"

    The Xen domU config file is as follows:

        name = "xen-util"
        bootloader = "/usr/lib/xen/boot/domUloader.py"
        #bootargs = "xvda1:/vmlinuz-xen,/initrd-xen"
        bootargs = "--entry=xvda1:/boot/vmlinuz-xen,/boot/initrd-xen"
        memory = 4096
        disk = [ 'phy:vg_xen/xen-util-root,xvda1,w', 'phy:vg_xen/xen-util-swap,xvda2,w', ]
        root = "/dev/xvda1"
        vif = [ 'mac=00:16:3e:42:42:06' ]
        #vfb = [ 'type=vnc,vncunused=0,vnclisten=192.168.3.172' ]
        extra = ""

    Say domU "Xen-Util" is running on node "stage"; if "stage" goes down, "Xen-Util" does not restart on node "dhcp-166". It seems to want to try, as an "xm list" will show it for a few seconds, and if you "xm console xen-util" it will give a message like "copying /boot/kernel.gz from xvda1 to /var/lib/xen/tmp/kernel.a53gs for booting". However, it never gets past that, eventually gives up, and no longer appears in "xm list". Now, when node "stage" comes back online after being power cycled, it detects that "Xen-Util" isn't running, and starts it (on stage). I've tried starting "Xen-Util" on node "dhcp-166" without the cluster running, and it works fine. No problems. So I know it works in that respect. Any ideas? Thanks!

    Read the article

  • Weblogic 12 and the CLI

    - by Rig
    I am working with WebLogic on Fedora 19 and am attempting to use the CLI tools, to no avail. It appears these were deprecated as far back as WebLogic 9; however, I was assured they are still there and still functional, and as it stands I have a need to use them if they are in fact functional. What appears to be the case is that the weblogic jar file is not being loaded onto the classpath by this script, and it still fails after I try to add those jars manually via java -classpath <path>. I've spent a lot of time so far trying to get this sorted out, but I'm wondering what I may be missing here. My Java runtime is version 7, Fedora is 19, and WebLogic is 12.1. When I run env after running the provided set-environment script, it appears to have had no impact from what I can see. (I'll add that output later when I get back to that machine.) I'm mostly a Windows developer, so some of this is a topic I'm not well versed in.

        [foo@localhost bin]$ ./setDomainEnv.sh
        [foo@localhost bin]$ java weblogic.Admin -url t3://localhost:7001 -username <username> -password <password> HELP
        Error: Could not find or load main class weblogic.Admin
        [foo@localhost bin]$ ls -ltar
        total 72
        drwxr-x---  2 foo foo  4096 Jun 2 07:36 service_migration
        drwxr-x---  2 foo foo  4096 Jun 2 07:36 server_migration
        drwxr-x---  2 foo foo  4096 Jun 2 07:36 nodemanager
        -rwxr-x---  1 foo foo  1267 Jun 2 07:36 setStartupEnv.sh
        -rwxr-x---  1 foo foo  1105 Jun 2 07:36 startNodeManager.sh
        -rwxr-x---  1 foo foo  5765 Jun 2 07:36 startWebLogic.sh
        -rwxr-x---  1 foo foo  2001 Jun 2 07:36 stopWebLogic.sh
        -rwxr-x---  1 foo foo  3170 Jun 2 07:36 startManagedWebLogic.sh
        -rwxr-x---  1 foo foo  2776 Jun 2 07:36 stopManagedWebLogic.sh
        -rwxrwxrwx  1 foo foo 14136 Jun 2 07:36 setDomainEnv.sh
        -rwxr-x---  1 foo foo  2060 Jun 2 07:36 startComponent.sh
        drwxr-x---  5 foo foo  4096 Jun 2 07:36 .
        -rwxr-x---  1 foo foo  1726 Jun 2 07:36 stopComponent.sh
        drwxr-x--- 12 foo foo  4096 Jun 2 07:45 ..
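    One thing worth noting in the transcript above: running ./setDomainEnv.sh executes the script in a subshell, so none of the variables it sets (including CLASSPATH) survive into the interactive shell, which matches the observation that env shows no change afterwards. A minimal sketch of the usual invocation, with the weblogic.jar location being an assumption about a default 12.1 install layout:

        # source the environment script instead of executing it
        cd $DOMAIN_HOME/bin
        . ./setDomainEnv.sh

        # or point the classpath at weblogic.jar explicitly
        java -cp $WL_HOME/server/lib/weblogic.jar weblogic.Admin -url t3://localhost:7001 \
            -username <username> -password <password> HELP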

    Read the article

  • Opening Time-Machine OSX backup files on Windows 7?

    - by user39279
    Hi, I have Time Machine backups on a Western Digital external HD. The backups were made on my now-dead Mac G4 running OS X Leopard. I am waiting on a new iMac, but in the meantime I need to access some of my backed-up files urgently. I have a laptop running Windows 7, so is there any safe way of accessing some of the files from the Time Machine backup on my laptop while still being able to do a full restore when the iMac arrives? Thanks.

    Read the article

  • IIS reverse proxy to windows authenticated internal site

    - by keithwarren7
    I have an internal Windows-authenticated website that I need to expose anonymously to external users.

        external: http://foo.com/ (public)
        internal: http://privatefoo/ (requires Windows auth)

    I want people hitting foo.com to see no security prompt and just get access to privatefoo. I know this is possible in a simple reverse proxy setup, but does anyone know how to make the proxy provide the Windows credentials?

    Read the article

  • Firewire 800 data transfer too slow on Macbook Pro Unibody and Win7 RC

    - by dtmunir
    I'm trying to back up some data to an external hard drive and am finding the transfer rate to be unbearably slow. My environment is as follows:

        MacBook Pro Unibody (late 2008)
        Windows 7 RC, 64-bit
        LaCie Rugged 500GB portable hard drive

    I have tried a number of methods, including simple copying in Explorer, TeraCopy, CrashPlan, and Windows Backup. I am averaging around 1 MB/s, which seems terribly slow. How do I identify the cause of this slow file transfer, and then how do I go about addressing it?

    Read the article

  • How to install GNU make in Windows 7?

    - by Azhar
    I am trying to install GNU make 3.82 on Windows 7. I downloaded make-3.82.tar.gz, but it does not contain any setup file. There is a build process given on the GNU site, but when I navigate to the extracted folder in the command prompt and run ./configure, it throws the error "'.' is not recognized as an internal or external command, operable program or batch file". The installation procedure is given below, but I am not able to understand how to carry it out. Please help.
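    For context, a sketch of the native-Windows route the make source tree provides: ./configure only works in a Unix-style shell (MSYS or Cygwin), but the tarball also ships a build_w32.bat for building with Visual Studio. Treat the exact file and output names as something to verify against the extracted archive:

        rem from a Visual Studio command prompt, inside the extracted source directory
        cd make-3.82
        build_w32.bat
        rem copy the resulting make executable (gnumake.exe) somewhere on your PATH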

    Read the article

  • How to run some commands after booting from ArchLinux disk? Or how to change some settings in .iso before booting?

    - by Alexander Ovchinnikov
    How do I install Arch Linux with the traditional installer when I only have SSH access to the server? There is a nice guide: https://wiki.archlinux.org/index.php/Install_from_SSH I tried to test this on my home VPS:

        Start the VPS with any bootable Linux CD and log in to the remote server (VPS)
        wget http://mirrors.kernel.org/archlinux/iso/latest/archlinux-2010.05-netinstall-x86_64.iso
        dd if=archlinux-2010.05-netinstall-x86_64.iso of=/dev/sda
        reboot
        ...

    I can see that it works, but without an SSH connection. I need a script which will run these commands after the reboot:

        1. aif -p partial-configure-network   (and supply some information about my server IP, etc.)
        2. /etc/rc.d/sshd start               (need to start sshd)
        3. echo "sshd: ALL" >> /etc/hosts.allow   (to allow me to log in to the server; by default all is denied)
        4. passwd                             (by default it is empty, and I can't log in via SSH with an empty password)

    Can I edit the .iso, or maybe /dev/sda? Maybe I need to write a script which will start after the system boots and do these things, or maybe I can set these settings by default so the system starts with the correct settings (I think that is possible at least for steps 2 and 3). Thank you!
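    The post-boot part is often handled by dropping the four commands above into a script that the freshly booted system runs once, for example from rc.local. A rough sketch, assuming the image on /dev/sda (or the ISO) can be mounted and edited beforehand; note that aif normally prompts interactively, and the chpasswd line stands in for the interactive passwd step:

        #!/bin/sh
        # run once at first boot: bring up the network and sshd, then set a root password
        aif -p partial-configure-network
        /etc/rc.d/sshd start
        echo "sshd: ALL" >> /etc/hosts.allow
        echo "root:SomeTemporaryPassword" | chpasswd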

    Read the article

  • python mysqldb - mysql server gone away - can't reconnect

    - by david.barkhuizen
    Hi folks, when attempting to import a bunch of data into MySQL tables using Python and MySQLdb, I run into the error '2006 - MySQL server has gone away', and then I am unable to reconnect within the script. I initially re-use a connection object across transactions (delineated by conn.commit()); when I first encounter this exception, if I create a new connection by calling MySQLdb.connect(), this new connection also fails with the same exception. The error does not occur immediately - I can pump a fair amount of data into the db, but it then faithfully occurs after I have inserted a couple of thousand records, so roughly once the db has committed a certain transaction volume it always falls over like this. If I rerun the script WITHOUT restarting the db server, it resumes where it left off, pumps in some data, then falls over again. Before anyone recommends changing time-out settings: does anyone know why I am not able to establish a new connection after the initial failure, even if I try a couple of times, waiting a couple of seconds between attempts? (BTW, I'm running Windows 7, MySQL server 5.1.48, MySQLdb 1.2.3.gamma.1, Python 2.6.)
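    For illustration, a minimal sketch of the reconnect-and-retry pattern being described, for Python 2.6 with MySQLdb (the connection parameters are placeholders). Worth noting: with bulk inserts, error 2006 is frequently a symptom of a single statement exceeding the server's max_allowed_packet, in which case reconnecting alone will not help:

        import time
        import MySQLdb
        from MySQLdb import OperationalError

        def connect():
            return MySQLdb.connect(host="localhost", user="user",
                                   passwd="secret", db="import_db")

        conn = connect()

        def execute(sql, args=None, retries=2):
            """Run a statement, reconnecting on error 2006 (server has gone away)."""
            global conn
            for attempt in range(retries + 1):
                try:
                    cur = conn.cursor()
                    cur.execute(sql, args)
                    return cur
                except OperationalError as e:
                    if e.args[0] != 2006 or attempt == retries:
                        raise
                    try:
                        conn.close()   # old handle is dead; ignore errors on close
                    except Exception:
                        pass
                    time.sleep(2)
                    conn = connect()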

    Read the article

  • How to register an agent with launchd

    - by Konrad Rudolph
    I’m unable to schedule a periodic launch with launchctl/launchd on OS X (Leopard). Basically, I’m unable to find a step-by-step list of instructions on the web and the intuitive approach doesn’t work. The sync.plist file:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>net.madrat.utils.sync</string>
            <key>Program</key>
            <string>rsync</string>
            <key>ProgramArguments</key>
            <array>
                <string>-ar</string>
                <string>/path/to/folder/</string>
                <string>/path/to/backup/</string>
            </array>
            <key>StartInterval</key>
            <integer>7200</integer>
        </dict>
        </plist>

    I’ve put this script inside the path ~/Library/LaunchAgents. Next, I’ve registered the script using

        launchctl load ~/Library/LaunchAgents/sync.plist

    Finally, to test that it works, I started the job:

        launchctl start net.madrat.utils.sync

    Nothing happened. Manually executing the rsync command in the terminal yields the expected result. I’m fairly sure that the job was registered correctly because if I try to start a non-existing job, I get an error message (which I didn’t get in the above command). What did I do wrong?
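    A small sketch of the checks that usually show what launchd did with a job like this - the label comes from the plist above; one thing worth confirming is whether launchd resolves the bare "rsync" in Program with its minimal PATH, or whether an absolute /usr/bin/rsync is needed:

        # validate the plist syntax
        plutil -lint ~/Library/LaunchAgents/sync.plist

        # is the job loaded, and what was its last exit status?
        launchctl list | grep net.madrat.utils.sync

        # watch what launchd logs when the job is kicked off
        tail -f /var/log/system.log &
        launchctl start net.madrat.utils.sync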

    Read the article

  • memcached append() php ubuntu - bad protocol

    - by awongh
    I am running Ubuntu Gutsy (7.10) with PHP 5 and I am trying to get memcached running locally. I installed everything as per the docs: the memcached daemon, the PHP PECL extension, libevent, etc. But now I can only run half of the example script for memcached append():

        <?php
        $m = new Memcached();
        $m->addServer('localhost', 11211);
        $m->setOption(Memcached::OPT_COMPRESSION, false);

        $m->set('foo', 'abc');
        $m->append('foo', 'def');
        var_dump($m->get('foo'));
        ?>

    The script terminates at append() with a RES_BAD_PROTOCOL error message. It still runs the get(). I don't know why memcached would otherwise be working fine (connect, set, get - with the correct value of 'abc') and yet not work for append; it also doesn't work with prepend. I believe I have the setup correct, but I am not sure. Maybe there are compatibility problems between the versions of the dependencies? Thanks much.
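    One way to tell whether the daemon or the PHP extension is at fault is to exercise append directly over memcached's text protocol; a quick sketch, with the key and values mirroring the script above:

        printf 'set foo 0 0 3\r\nabc\r\nappend foo 0 0 3\r\ndef\r\nget foo\r\nquit\r\n' | nc localhost 11211
        # expected: STORED, STORED, then "VALUE foo 0 6" / "abcdef" if the daemon itself handles append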

    Read the article

  • wget not working with domain on local machine

    - by user568829
    Basically, I have some PHP scripts that need to be run as cron jobs. Let's say the script needing to be run is:

        http://admin.somedomain.com/cron_jobs/get_stats

    If I run the script from the local machine, it gives me a 404 Not Found error. So I entered the following into /etc/hosts:

        XX.XX.XX.45 admin.somedomain.com

    Now wget works fine from the local machine to that domain. However, when I restart Apache, that domain no longer works. Here is the config for that site in /etc/apache2/sites-available:

        NameVirtualHost XX.XX.XX.45:80
        <VirtualHost XX.XX.XX.45:80>
            ServerName admin.somedomain.com
            DocumentRoot /var/www/admin.somedomain.com/
            <Directory "/var/www/admin.somedomain.com">
                allowoverride all
                Options Indexes
                order deny,allow
                allow from all
            </Directory>
            ErrorLog /var/log/apache2/admin.somedomain.com-error_log
            CustomLog /var/log/apache2/admin.somedomain.com-access_log combined
        </VirtualHost>

    It just goes to the default site config showing "It works". If I take that entry out of /etc/hosts and restart Apache, the website at that domain works fine again. Can anyone point me in the right direction here? Thanks.
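    Two quick checks that usually pinpoint this kind of vhost-selection problem: dumping Apache's parsed vhost map, and requesting the URL by IP with an explicit Host header so /etc/hosts is taken out of the picture (IP and hostname as in the question):

        # show which vhost owns XX.XX.XX.45:80 and which one is the default
        apache2ctl -S

        # hit the site by IP but with the right Host header, bypassing /etc/hosts
        wget --header="Host: admin.somedomain.com" -O - http://XX.XX.XX.45/cron_jobs/get_stats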

    Read the article

  • Force Windows Local Subnet Traffic through a Gateway

    - by Beerey
    Hi all, we are attempting to route all traffic from a certain machine through a gateway. This works OK for traffic destined for subnets outside of the machine's own subnet. However, traffic to machines in the same subnet as the source machine goes through an On-link route in Windows. This means that the default gateway is ignored and traffic within the subnet (for example, 192.168.50.10 to 192.168.50.11) flows directly:

        Destination     Netmask          Gateway    Interface        Metric
        192.168.50.0    255.255.255.0    On-link    192.168.50.214   276

    This route can be deleted from Windows, but when the machine is rebooted it always comes back.

    Adding a persistent static route via the gateway with a lower metric doesn't work, since Windows will still try the On-link route after the persistent route fails.
    Putting each machine in its own VLAN isn't an option due to the setup we have.
    Adding a startup script to delete the route isn't a great option either, since users will have full admin access to the machine and might disable the script.
    We cannot transparently intercept all network traffic on the subnet using gratuitous ARPs or transparent proxying, since there are other machines on the subnet which use a different gateway.

    The only way we have gotten it to work is by adding a persistent route to the gateway for the subnet traffic and deleting the On-link route after each reboot. The question, then: is there a way to permanently remove this On-link route? If not, is there a way to otherwise force even local-subnet traffic to go through a gateway?
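    For reference, a sketch of the workaround described in the last paragraph, expressed as commands - the subnet and interface address come from the route shown above, the gateway IP 192.168.50.1 is a placeholder, and the delete arguments may need adjusting to match the entry listed by "route print":

        rem persistent route sending same-subnet traffic via the gateway
        route -p add 192.168.50.0 mask 255.255.255.0 192.168.50.1 metric 1

        rem drop the automatic On-link route again after each boot (e.g. from a SYSTEM-level
        rem scheduled task, which ordinary admin users are less likely to notice and disable)
        route delete 192.168.50.0 mask 255.255.255.0 192.168.50.214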

    Read the article
