Search Results

Search found 23627 results on 946 pages for 'alter script'.


  • Server Performance

    - by sb12
    I know very little about performance tuning of servers, so I thought I'd post this here as I start some research on it, just to get some direction. I am in the process of migrating from my old server to a new one; both are 64-bit machines. One is a few years old, the other brand new (PowerEdge R410).

    The old server spec is: 2 CPUs, 3.4 GHz Pentiums, 8 GB of RAM, Fedora 11 currently installed.
    The new server spec is: 16 CPUs, 3.2 GHz Xeons, 16 GB of RAM, CentOS 6.2 installed. The new server also has RAID10; there is no RAID on the old one.

    Both servers currently have the same MySQL database with the same data migrated. I wrote a Perl script that simply steps through each row of a table in the database (about 18000 rows) and updates a value in that row; every row in the table is updated. Out of curiosity I ran this Perl script on both machines, just to see how the new server would perform vs. the old one, and it produced an interesting result: the old server completed twice as fast as the new one. Looking at the database, both are configured exactly the same (the new one being a dump of the old one).

    Does anyone have any ideas why this would be, given the hardware gap between the two? As I said, I'm about to start some digging, but I thought I'd put this up here to maybe get some good direction. Many thanks in advance.
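
    For reference, a minimal way to reproduce this kind of row-by-row benchmark from the shell, using the stock mysql client (the database, table, and column names here are hypothetical stand-ins for the poster's schema):

        #!/bin/bash
        # Time N single-row UPDATEs, one statement per round trip,
        # mimicking a Perl script that updates every row individually.
        DB=testdb; TABLE=mytable; COL=val
        time for id in $(mysql -N -B "$DB" -e "SELECT id FROM $TABLE"); do
            mysql "$DB" -e "UPDATE $TABLE SET $COL = $COL + 1 WHERE id = $id"
        done

    Because each UPDATE commits individually, a run like this is dominated by per-transaction flush cost rather than CPU, which is one reason a newer machine (e.g. RAID10 honouring write barriers, with the default innodb_flush_log_at_trx_commit=1) can appear slower than an older disk that is more relaxed about fsync.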

    Read the article

  • Default or fink python and lxml under 10.6.8

    - by songdogtech
    Ah, confusion. I'm trying to install a Python library called lxml, as needed by a Python script. I've been through numerous SU questions and answers but haven't been able to make much progress. I run easy_install lxml and get:

        Processing lxml-3.0.1-py2.6-macosx-10.6-universal.egg
        lxml 3.0.1 is already the active version in easy-install.pth
        Using /Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg
        Processing dependencies for lxml
        Finished processing dependencies for lxml

    but when I run my script, I get:

        File "scraper.py", line 3, in <module>
            import lxml.html
        File "/Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg/lxml/html/__init__.py", line 42, in <module>
            from lxml import etree
        ImportError: dlopen(/Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg/lxml/etree.so, 2): Symbol not found: _htmlParseChunk
            Referenced from: /Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg/lxml/etree.so
            Expected in: flat namespace
            in /Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg/lxml/etree.so

    I think maybe I'm not using the correct Python install? I've installed Python with Fink, but should I use OS X's Python? This is in my .profile:

        test -r /sw/bin/init.sh && . /sw/bin/init.sh

    which points to the Fink install. echo $PATH gives me:

        /sw/bin:/sw/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/X11R6/bin

    Should I change that to point to Snow Leopard's Python (which is 2.6.1)? In Library/ there are the lxml libraries I need, it appears, as well as requests. And whereis python gives me /usr/bin/python. What do I do? How do I get Python to use these libraries, and which Python?
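
    A quick way to see which interpreter and which libxml2 are actually involved (the paths below match those in the question; otool ships with Apple's developer tools):

        # Which python binary runs when you type "python"?
        which python
        python -c "import sys; print(sys.prefix)"

        # Which shared libraries does the failing extension link against?
        otool -L /Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg/lxml/etree.so

    If otool shows etree.so linked against a Fink libxml2 (under /sw/lib) while the interpreter loading it is Apple's /usr/bin/python, or vice versa, that mismatch would explain the missing _htmlParseChunk symbol.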

    Read the article

  • Selenium server causes crazy load on server - how to prevent?

    - by Eric
    I'm running this Linux:

        Linux host.themepark.com 2.6.32-220.4.1.el6.x86_64 #1 SMP Tue Jan 24 02:13:44 GMT 2012 x86_64 x86_64 x86_64 GNU/Linux

    And I run the Selenium standalone server on my box with this command:

        java -jar /home/l/cron/selenium-server-standalone-2.24.1.jar > /logs/selenium.log 2>&1 &

    Here's the problem: as soon as I do that, the server load starts skyrocketing. I even went back and downloaded older versions of the Selenium server, but I got the same results with 2.23.1, 2.23.0, and 2.19.0. Note that the server load starts going nuts before I issue ANY commands to Selenium or do anything else; all I'm doing is firing up the server, per the command above.

    This used to work perfectly on my server without causing massive load, so something has changed, but I'm not sure what. My server is a managed VPS, so I don't know if some kind of auto-update script kicked in... but it's a problem.

    (Incidentally, even though the server load climbs like crazy, everything still works: after firing up Selenium, my server creates a screen with Xvfb so Firefox will be happy, then a PHP script talks to Selenium to do what it needs to do before shutting everything down. It takes a LONG time, and the load gets all the way up to 8 [!!!] before it is finished, which kills my web server and makes the main site horribly unresponsive... but it does get everything done.)

    Any suggestions for what is going on, why it's started doing this, and, most importantly, how I can make Selenium not kill the server when it starts up would be GREATLY appreciated!
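
    One low-risk way to keep the JVM from starving everything else while the cause is investigated (nice and ionice are standard on CentOS 6; the paths mirror the question):

        # Start Xvfb and Selenium at the lowest CPU/IO priority so the
        # web server keeps responding even if Selenium spins.
        Xvfb :99 -screen 0 1024x768x24 > /dev/null 2>&1 &
        DISPLAY=:99 nice -n 19 ionice -c3 \
            java -jar /home/l/cron/selenium-server-standalone-2.24.1.jar \
            > /logs/selenium.log 2>&1 &

    This doesn't explain the load, but it stops a runaway Selenium from making the main site unresponsive; running top -H against the Java PID would show whether a single thread is spinning.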

    Read the article

  • Why do weekly tasks created via PowerShell using a different user fail with error 0x41306

    - by Danny Tuppeny
    We have some scripts that create scheduled jobs using PowerShell as part of our application. When testing them recently, I noticed that some of them always failed immediately, and no output was ever produced (they don't even appear in the Get-Job list). After many days of tweaking, we've managed to isolate it to any jobs that are set to run weekly.

    Below is a script that creates two jobs that do exactly the same thing. When we run this on our domain and provide credentials of a domain user, then force both jobs to run in the Task Scheduler GUI (right-click > Run), the daily one runs fine (0x0 result) and the weekly one fails (0x41306).

    Note: if I don't provide the -Credential param, both jobs work fine. The jobs only fail if the task is both weekly and running as this domain user.

    I can't find information on why this is happening, nor think of any reason it would behave differently for weekly jobs. The "History" tab in the Task Scheduler has almost no useful information, just "Task stopping due to user request" and "Task terminated", both of which have no useful info:

        Task Scheduler terminated "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" instance of the "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" task.
        Task Scheduler stopped instance "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" of task "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" as request by user "MyDomain\SomeUser".

    What's up with this? Why do weekly tasks run differently, and how can I diagnose this issue? This is PowerShell v3 on Windows Server 2008 R2. I've been unable to reproduce this locally, but I don't have a user set up in the same way as the one in our production domain (I'm working on this, but I wanted to post this ASAP in the hope someone knows what's happening!).

        Import-Module PSScheduledJob
        $Action = { "Executing job!" }
        $cred = Get-Credential "MyDomain\SomeUser"

        # Remove previous versions (to allow re-running this script)
        Get-ScheduledJob Test1 | Unregister-ScheduledJob
        Get-ScheduledJob Test2 | Unregister-ScheduledJob

        # Create two identical jobs, with different triggers
        Register-ScheduledJob "Test1" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Weekly -At 1:25am -DaysOfWeek Sunday)
        Register-ScheduledJob "Test2" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Daily -At 1:25am)

    Read the article

  • SSH session closing whilst virtualenv session stays open (I think)

    - by ing0
    I've been developing some sites using Flask recently (running on Debian within a virtualenv), and when I am testing I can run it on a port, let's say port 5000. So I run the script like so:

        . env/bin/activate    # go into the virtual environment
        python file.py        # run the python script

    And I will be given this message:

        Running on http://0.0.0.0:5000/

    So this all works great and I can access my site on this port fine. However... my rubbish ISP always does this thing where it resets something around 1am every morning. I have no idea what this is; everything runs like normal, but I always get disconnected from any SSH sessions that are open. This leaves the app running, and all I can do is call:

        lsof -i

    which will show me the process, but if I kill it and then rerun it, things get weird. The "Running on http://0.0.0.0:5000" message still shows, but I cannot connect to it anymore. I've tried changing the port number, and it seems the only thing that works is trying again later on, or on another day.

    Now I'm assuming that something on my server resets between these times, and I would like to think it was maybe that virtualenv session timing out, but I cannot find out how to do this manually. Does anyone know?
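
    For what it's worth, a virtualenv has no session or timeout; activating it only adjusts $PATH for the shell that sourced it. What does die with the SSH session is any foreground process attached to that terminal. A common way to decouple the dev server from the SSH connection, and to make sure a stale copy is really gone before restarting (file.py as in the question):

        # Kill any leftover instance still bound to the port
        pkill -f file.py            # or: fuser -k 5000/tcp

        # Restart it detached from the terminal so an ISP reset can't orphan it
        . env/bin/activate
        nohup python file.py > flask.log 2>&1 &

    If the port still appears dead after this, some half-killed process may be holding the socket; netstat -tlnp | grep 5000 shows exactly which PID owns it.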

    Read the article

  • Secure iptables config for Samba

    - by Eric
    I'm trying to set up an iptables config such that outbound connections from my CentOS 6.2 server are allowed ONLY if they are of state ESTABLISHED. Currently, the following setup is working great for sshd, but all the Samba rules get totally ignored, for a reason I cannot figure out.

    The Bash script that sets up ALL the rules:

        # Remove all existing rules
        iptables -F

        # Set default chain policies
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT DROP

        # Allow incoming SSH
        iptables -A INPUT -i eth0 -p tcp --dport 22222 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22222 -m state --state ESTABLISHED -j ACCEPT

        # Allow incoming Samba
        iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p udp --dport 137:138 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p udp --sport 137:138 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p tcp --dport 139 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p tcp --sport 139 -m state --state ESTABLISHED -j ACCEPT

        # Enable these rules
        service iptables restart

    The iptables rule list after running the above script:

        [root@repoman ~]# iptables -L
        Chain INPUT (policy DROP)
        target     prot opt source      destination
        ACCEPT     tcp  --  anywhere    anywhere     tcp dpt:22222 state NEW,ESTABLISHED

        Chain FORWARD (policy DROP)
        target     prot opt source      destination

        Chain OUTPUT (policy DROP)
        target     prot opt source      destination
        ACCEPT     tcp  --  anywhere    anywhere     tcp spt:22222 state ESTABLISHED

    Ultimately, I'm trying to restrict Samba the same way I have done for sshd. In addition, I'm trying to restrict connections to the following IP address range: 10.1.1.12 - 10.1.1.19. Can you guys offer some pointers, or possibly even a full-blown solution? I've read man iptables quite extensively, so I'm not sure why the Samba rules are getting thrown out. Additionally, removing the -s 10.1.1.0/24 flags doesn't change the fact that the rules get ignored.
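
    One detail worth noticing in the script above: on CentOS, service iptables restart does not save the running rules. It flushes them and reloads whatever was last saved to /etc/sysconfig/iptables, which discards every rule the script just added except those already in the saved file. A sketch of how the Samba section might look with that fixed, and with the narrower address range expressed via the iprange match (port 445 added because modern SMB clients use it alongside 139):

        # Incoming Samba, limited to 10.1.1.12-10.1.1.19
        iptables -A INPUT -i eth0 -m iprange --src-range 10.1.1.12-10.1.1.19 \
                 -p udp --dport 137:138 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -m iprange --src-range 10.1.1.12-10.1.1.19 \
                 -p tcp -m multiport --dports 139,445 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -m iprange --dst-range 10.1.1.12-10.1.1.19 \
                 -p udp --sport 137:138 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -m iprange --dst-range 10.1.1.12-10.1.1.19 \
                 -p tcp -m multiport --sports 139,445 -m state --state ESTABLISHED -j ACCEPT

        # Persist the running ruleset instead of restarting the service
        service iptables save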

    Read the article

  • upstart config to start sync daemon as non-root user

    - by Rudiger Wolf
    I am planning to use inosync to sync data from a master server to several client servers. I have created a user called rsyncuser on both master and slaves, with access permissions and passwordless ssh access from the master to the slave servers. Inosync works when I use it from the command line as rsyncuser.

    Next I want this to start automatically when the server is turned on. I figured upstart is the way to get this working, but I am unable to find the right upstart command to get it going. Here is my upstart conf file. The problem seems to be around running "inosync -d -c /etc/inosync/inosync_rsyncuser.py" as a given user. As you can see, I have tried a number of various options!

        description "start inosync to sync data to other CDN Servers as rsyncuser"
        console output

        #start on startup
        #stop on shutdown
        start on (net-device-up and local-filesystems)
        stop on runlevel [016]
        #start on runlevel [2345]
        #stop on runlevel [!2345]
        #kill timeout 30

        env RUN_AS_USER=rsyncuser
        expect fork

        script
            echo "Inosync updtart job seems to have started" >> /tmp/upstart.log
            # exec sudo -u rsyncuser -c "ls -la" >> /tmp/upstart.log 2>&1
            # LOGFILE=/var/log/logfile.`date +%Y-%m-%d`.log
            # exec su - $RUN_AS_USER -c "inosync -d -c /etc/inosync/inosync_rsyncuser.py" >> $LOGFILE 2>&1
            # exec su -c "ls -la" >> /tmp/upstart.log 2>&1
            # emit inosync_running
        end script
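
    A minimal stanza that has worked for running a daemon as another user under older upstart versions (which lack setuid support), using su to drop privileges. The paths and user are taken from the question, but treat this as a sketch rather than a tested job:

        description "inosync data sync as rsyncuser"
        start on (net-device-up and local-filesystems)
        stop on runlevel [016]
        respawn

        # If inosync's -d flag makes it daemonize (fork), add "expect daemon"
        # so upstart tracks the final PID instead of the su wrapper.
        exec su -s /bin/sh rsyncuser -c \
            "inosync -d -c /etc/inosync/inosync_rsyncuser.py >> /var/log/inosync.log 2>&1"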

    Read the article

  • Managing Many External Hosts Using EC2 and Route 53

    - by futureal
    Looking for a "best practice" answer to managing externally addressable hosts using the combination of Amazon EC2 and Amazon Route 53, without using Elastic IPs for each host. In my scenario I will have 30+ hosts that need to be accessible from outside EC2, so directly using internal DNS will not work.

    In the past, I have addressed hosts by assigning an Elastic IP to the host (let's say 55.55.55.55) and then creating an associated A record. For example, to create "ec2-corp01.mydomain.com" I might do:

        ec2-corp01.mydomain.com.  A  55.55.55.55  300

    Then on that EC2 instance I would assign the Elastic IP of 55.55.55.55, and everything works fine. Of course, to make this work I need one Elastic IP per instance, which is something I'd like to avoid if possible; I'd like the infrastructure to be more dynamic. So my thought is to try something like this:

    1. Create a script that queries the internal EC2 tools to determine an instance's private hostname
    2. On instance boot, call that script to determine its hostname, then use the command-line Route 53 interface to find and update that record to its current internal hostname
    3. Since the host will have a relatively low TTL (let's say 300 as above, or 5 minutes), it should take effect pretty quickly

    Is this a good idea? Is there a better or more widely accepted way to handle it? If it IS a good idea, what type of record should I be creating? A CNAME that points to the internal host, like ec2-55-55-55-55.compute-1.amazonaws.com? Is an A record better or worse? Thanks!
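
    A sketch of steps 1 and 2 with today's tooling. The instance metadata service at 169.254.169.254 is standard; the hosted zone ID and domain are placeholders, and the AWS CLI is used purely as an illustration (at the time of the question a tool like dnscurl or cli53 would fill the same role):

        #!/bin/bash
        # Run at boot: point this instance's DNS name at its current EC2 hostname.
        ZONE_ID="Z1234567890"               # hypothetical hosted zone ID
        NAME="ec2-corp01.mydomain.com."
        # public-hostname gives the ec2-*.compute-1.amazonaws.com name;
        # use local-hostname instead for the internal-only name.
        TARGET=$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)

        aws route53 change-resource-record-sets --hosted-zone-id "$ZONE_ID" \
          --change-batch "{\"Changes\":[{\"Action\":\"UPSERT\",\"ResourceRecordSet\":{
            \"Name\":\"$NAME\",\"Type\":\"CNAME\",\"TTL\":300,
            \"ResourceRecords\":[{\"Value\":\"$TARGET\"}]}}]}"

    A CNAME to the ec2-*.compute-1.amazonaws.com name has the nice property that it resolves to the private IP from inside EC2 and the public IP from outside, which an A record pinned to a single address cannot do.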

    Read the article

  • Apache + mod_fcgid + perl = error 500

    - by f-aminov
    Hi guys! I'm trying to set up Apache 2.2 with mod_fcgid and libapache2-mod-perl2, with no luck. I've created a fcgi-bin directory in the root directory of my website and put a test.fcgi file there with the following content:

        #!/usr/bin/perl
        use CGI;
        print "This is test.fcgi!\n";

    While trying to access it via http://www.website.dom/fcgi-bin/test.fcgi I get error 500 (Internal Server Error). Here is my vhost config:

        <VirtualHost 95.131.29.226:8080>
            ServerName website.com
            DocumentRoot /var/www/data/website.com
            SuexecUserGroup user group
            ServerAlias www.website.com
            AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
            <Directory "/var/www/data/website.com/fcgi-bin/">
                Options +ExecCGI
                Allow from all
                Order allow,deny
                AddHandler fcgid-script .fcgi
            </Directory>
        </VirtualHost>

    fcgid.conf:

        <IfModule mod_fcgid.c>
            AddHandler fcgid-script .fcgi
            SocketPath /var/lib/apache2/fcgid/sock
            IdleTimeout 3600
            ProcessLifeTime 7200
            MaxProcessCount 8
            DefaultMaxClassProcessCount 2
            IPCConnectTimeout 8
            IPCCommTimeout 60
        </IfModule>

    SuExec log:

        [2010-04-06 03:02:47]: uid: (500/equ) gid: (502/equ) cmd: test.fcgi

    Apache error log:

        test!
        test!
        [Tue Apr 06 03:02:51 2010] [notice] mod_fcgid: process /var/www/data/website.com/fcgi-bin/test.fcgi(26267) exit(communication error), terminated by calling exit(), return code: 0
        [Tue Apr 06 03:02:53 2010] [notice] mod_fcgid: process /var/www/data/website.com/fcgi-bin/test.fcgi(26261) exit(server exited), terminated by calling exit(), return code: 0

    I have no clue why I'm getting error 500, but when I run this file from the console ($ perl /var/www/data/website.com/fcgi-bin/test.fcgi) everything works fine without any errors... Any suggestions on how to solve this problem would be greatly appreciated. Thank you!
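
    Two things stand out in the test script itself: it never prints a CGI header (Apache treats missing headers as an error even when the script exits cleanly), and it exits after one response instead of entering the FastCGI accept loop that the fcgid-script handler expects, which matches the "exit(communication error)" notices in the log. A hedged sketch of a test file that satisfies both, assuming the FCGI module from CPAN is installed:

        cat > /var/www/data/website.com/fcgi-bin/test.fcgi <<'EOF'
        #!/usr/bin/perl
        use FCGI;                           # requires the FCGI CPAN module
        my $request = FCGI::Request();
        while ($request->Accept() >= 0) {   # serve requests until mod_fcgid retires us
            print "Content-Type: text/plain\r\n\r\n";
            print "This is test.fcgi!\n";
        }
        EOF
        chmod 755 /var/www/data/website.com/fcgi-bin/test.fcgi

    Running the original from the console "works" precisely because a terminal doesn't care about headers or the FCGI protocol; only Apache does.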

    Read the article

  • Netcat (nc) traditional package for RHEL 6.x?

    - by HTTP500
    I'm trying to use the Percona Apache Monitoring [Cacti] template for Memcached. They do indeed warn that you can't use the OpenBSD version of the package, and provide a solution for Ubuntu/Debian users, i.e.:

        You need nc on the server. Some versions of nc accept different command-line
        options. You can change the options used by configuring the PHP script. If you
        don't want to do this for some reason, then you can install a version of nc
        that conforms to the expectations coded in the script's default configuration
        instead. On Debian/Ubuntu, netcat-openbsd does not work, so you need the
        netcat-traditional package, and you need to switch to /bin/nc.traditional...

    Since the RHEL 6.x version indeed comes from OpenBSD (confirmed by rpm -qi nc), how does one go about getting this installed on RHEL/CentOS? Is anyone else running these Percona templates on RHEL/CentOS? What did you do? alien the Debian package?

    Update 1: FWIW, I tried to use GNU netcat by compiling it from source, but it doesn't seem to have the exact options required by the Cacti template either (i.e. there is no analogue for -C or -q1, so it seems).

    Update 2: I alien[ed] the netcat-traditional_1.10-38_amd64.deb package to make a .tgz, and it does produce a binary "nc.traditional", and that version has the -q option but no -C.

    Cheers
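
    A quick way to check whether whatever nc you end up with behaves the way the template's default configuration expects (memcached's stats command, CRLF line endings, connection closed promptly; host and port here are examples):

        # With a traditional nc that supports -q (quit N seconds after EOF):
        printf 'stats\r\nquit\r\n' | /bin/nc.traditional -q1 127.0.0.1 11211

        # Without -C (CRLF translation), send the \r\n yourself as above;
        # memcached generally accepts a bare \n for text commands too:
        printf 'stats\nquit\n' | nc 127.0.0.1 11211

    If either invocation prints a block of STAT lines, pointing the PHP script's configuration at that binary and those flags should satisfy the template.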

    Read the article

  • ghettoVCB issue

    - by romgo75
    I have set up a ghettoVCB script in order to back up three VMs. I put it in a crontab, but I have an issue. In my backup folder I have three different folders, one for each VM. In each folder I have the following files:

        -rw-r--r--    1 root     root         1263 Mar 17 01:51 vm1-2010-03-16--2.gz
        -rw-r--r--    1 root     root         1263 Mar 17 00:41 vm1-2010-03-16--3.gz
        -rw-r--r--    1 root     root         1261 Mar 18 01:22 vm1-2010-03-17--1.gz
        drwxr-xr-x    1 root     root          980 Mar 19 23:39 vm1-2010-03-19

    The problem is the last folder: it seems a backup didn't finish the process. When I read the logs concerning this folder I get:

        2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/datastore1/backup/
        2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
        2010-03-19 23:00:01 -- info: CONFIG - DISK_BACKUP_FORMAT = zeroedthick
        2010-03-19 23:00:01 -- info: CONFIG - ADAPTER_FORMAT = buslogic
        2010-03-19 23:00:01 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
        2010-03-19 23:00:01 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
        2010-03-19 23:00:01 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
        2010-03-19 23:00:01 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
        2010-03-19 23:00:01 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
        2010-03-19 23:00:01 -- info: CONFIG - LOG_LEVEL = info
        2010-03-19 23:00:01 -- info: CONFIG - BACKUP_LOG_OUTPUT = stdout
        2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
        2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
        2010-03-19 23:00:01 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
        2010-03-19 23:39:35 -- info: Initiate backup for vm1
        2010-03-19 23:39:35 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-03-19" for vm1
        Destination disk format: VMFS zeroedthick
        Cloning disk '/vmfs/volumes/datastore1/vm1/vm1_1.vmdk'...
        Clone: 0% done. ... Clone: 9% done.
        Failed to clone disk : The file already exists (39).
        Destination disk format: VMFS zeroedthick
        Cloning disk '/vmfs/volumes/datastore1/vm1/vm1.vmdk'...
        2010-03-20 00:46:20 -- info: Removing snapshot from vm1 ...
        Clone: 7% done. ... Clone: 16% done.
        2010-03-19 23:51:19 -- info: Removing snapshot from vm1 ...

    I can't run ghettoVCB anymore because the VM has a snapshot which has not been deleted. I know how to delete the snapshot, but I don't know why the VCB script is not able to handle rotation of the VM backups. Any ideas? Thanks!
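
    The "file already exists (39)" line suggests a leftover destination VMDK from an earlier interrupted run, after which snapshot removal never completed. A hedged sketch of a pre-flight check one might run on the ESX(i) host before each backup, using the stock vim-cmd tooling (the VM id must be looked up first; <vmid> is a placeholder):

        # Find the VM's id, then inspect and clear any stale ghettoVCB snapshot
        vim-cmd vmsvc/getallvms | grep vm1
        vim-cmd vmsvc/get.snapshotinfo <vmid>      # shows ghettoVCB-snapshot-* if present
        vim-cmd vmsvc/snapshot.removeall <vmid>    # consolidate before the next backup

        # Also remove the half-written destination directory so the clone
        # doesn't hit "The file already exists (39)" again:
        rm -rf /vmfs/volumes/datastore1/backup/vm1/vm1-2010-03-19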

    Read the article

  • Replacing latex with unicode symbols

    - by Elazar Leibovich
    Often, during a conversation or an email, or at a forum, I would like to type some math, but I don't need full equation support; Unicode symbols should suffice. What I need is an easy way to type math-related Unicode symbols. Since I already know LaTeX, it makes sense to use the LaTeX symbol mnemonics to type the math symbols.

    What I currently did was write an AutoHotKey script which automatically replaces \latexSymbol with the corresponding Unicode symbol, using the "hotstrings" AutoHotKey feature. However, the AutoHotKey hotstrings proved unstable for many strings: having a couple of tens of lines would cause AHK to fail to recognize the strings from time to time. Any other solution? (No, Alt+unicode number isn't convenient enough.)

    Attached is my AHK script. The PutUni function is taken from here.

        ::\infty::
        PutUni("e2889e")
        return
        ::\sum::
        PutUni("e28891")
        return
        ::\int::
        PutUni("e288ab")
        return
        ::\pm::
        PutUni("c2b1")
        return
        ::\alpha::
        PutUni("c991")
        return
        ::\beta::
        PutUni("c992")
        return
        ::\phi::
        PutUni("c9b8")
        return
        ::\delta::
        PutUni("ceb4")
        return
        ::\pi::
        PutUni("cf80")
        return
        ::\omega::
        PutUni("cf89")
        return
        ::\in::
        PutUni("e28888")
        return
        ::\notin::
        PutUni("e28889")
        return
        ::\iff::
        PutUni("e28794")
        return
        ::\leq::
        PutUni("e289a4")
        return
        ::\geq::
        PutUni("e289a5")
        return
        ::\sqrt::
        PutUni("e2889a")
        return
        ::\neq::
        PutUni("e289a0")
        return
        ::\subset::
        PutUni("e28a82")
        return
        ::\nsubset::
        PutUni("e28a84")
        return
        ::\nsubseteq::
        PutUni("e28a88")
        return
        ::\subseteq::
        PutUni("e28a86")
        return
        ::\prod::
        PutUni("e2888f")
        return
        ::\N::
        PutUni("e28495")
        return

    Read the article

  • Rsync: push files from Linux to Windows. ssh issue - connection refused

    - by piyush c
    For some reason I want to run a script to move files from a Linux machine to Windows. I have installed cwRsync on my Windows machine and am able to connect to the Linux machine. When I execute the following command:

        rsync -e "ssh -l piyush" -Wgovz --timeout 120 --delay-updates --remove-sent-files /usr/local/src/piyush/sync/* "[email protected]:/cygdrive/d/temp"

    where 10.0.0.60 is my Windows machine and I am running the above command on Linux (CentOS 5.5), I get the following error message:

        ssh: connect to host 10.0.0.60 port 22: Connection refused
        rsync: connection unexpectedly closed (0 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(463) [sender=2.6.8]

        [root@localhost sync]# ssh [email protected]
        ssh: connect to host 10.0.0.60 port 22: Connection refused

    I have modified my firewall settings on Windows to allow all ports. I think this issue is because there is no SSH daemon on my Windows machine, so I tried installing OpenSSH on it and running ssh-agent, but that didn't help. I tried a similar command on my Windows machine to pull files from Linux, and it works fine. I want the command for the Linux machine so that I can embed it in a shell script. Can you suggest what I am missing?

    I already have cwRsync installed on Windows, running in daemon mode using the --daemon option, and I am able to log in using ssh from the Windows machine to the Linux machine. When I issue the command below, it just blocks for 120 seconds (the timeout I specified in the command) and exits saying there is a timeout:

        rsync -e "ssh -l piyush" -Wgovz --timeout 120 --delay-updates --remove-sent-files /usr/local/src/piyush/sync/* "[email protected]:/cygdrive/d/temp"

    After starting rsync on Windows, I checked that rsync is running, Windows firewall settings are set to minimal, and on the Linux machine I stopped the iptables service so that port 873 (the default rsync port) is not blocked. What could be the reason that the Linux machine is not able to connect to the rsync daemon on the Windows machine?
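
    Worth noting: the -e ssh option and a daemon-mode target are mutually exclusive transports. The command above always dials port 22 because of -e, even though cwRsync is listening as a daemon on 873. A sketch of the daemon-mode syntax (it assumes a module, here called temp, is defined in cwRsync's rsyncd.conf on the Windows side):

        # Double-colon (or rsync://) selects the rsync daemon on port 873 -- no ssh involved
        rsync -Wgovz --timeout 120 --delay-updates --remove-sent-files \
            /usr/local/src/piyush/sync/* "[email protected]::temp/"

        # Quick connectivity probe: listing modules proves port 873 is reachable
        rsync [email protected]::

    If that probe also times out, the block is network-level (a firewall on either end, or cwRsync bound to the wrong interface) rather than anything in the file-transfer options.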

    Read the article

  • Fedora, ssh and sudo

    - by Ricky Robinson
    I have to run a script remotely on several Fedora machines through ssh. Since the script requires root privileges, I do:

        $ ssh me@remote_host "sudo touch test_sudo"   # just a simple example
        sudo: no tty present and no askpass program specified

    The remote machines are configured in such a way that the password for sudo is never asked for. For the above error, the most common fix is to allocate a pseudo-terminal with the -t option in ssh:

        $ ssh -t me@remote_host "sudo touch test_sudo"
        sudo: no tty present and no askpass program specified

    Let's try to force this allocation with -t -t:

        $ ssh -t -t me@remote_host "sudo touch test_sudo"
        sudo: no tty present and no askpass program specified

    Nope, it doesn't work. In /etc/sudoers, of course, I have this line:

        #Defaults    requiretty

    ... but I can't manually change it on tens of remote machines. Am I missing something here? Is there an easy fix?

    EDIT: Here is the sudoers file of a host where ssh me@host "sudo stat ." works. Here is the sudoers file of a host where it doesn't work.

    EDIT 2: Running tty on a host where it works:

        $ ssh me@host_ok tty
        not a tty
        $ ssh -t me@host_ok tty
        /dev/pts/12
        Connection to host_ok closed.
        $ ssh -t -t me@host_ok tty
        /dev/pts/12
        Connection to host_ok closed.

    Now on a host where it doesn't work:

        $ ssh me@host_ko tty
        not a tty
        $ ssh -t me@host_ko tty
        not a tty
        Connection to host_ko closed.
        $ ssh -t -t me@host_ko tty
        not a tty
        Connection to host_ko closed.
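
    The tty test is the telling part: even ssh -tt fails to produce a pty on host_ko, so a requiretty setting there can never be satisfied. Two things worth checking on a bad host, sketched below:

        # 1. Is requiretty active? (run from a console/login shell on host_ko)
        grep -n 'requiretty' /etc/sudoers

        # 2. Does the authorized key used for login carry a no-pty restriction?
        #    That would explain "not a tty" even with ssh -tt.
        grep -n 'no-pty' ~/.ssh/authorized_keys

    If requiretty turns out to be the blocker and editing every sudoers by hand is impractical, a per-user exemption can be pushed out once per machine (sudoers.d is supported on Fedora):

        echo 'Defaults:me !requiretty' > /tmp/me-nott
        sudo cp /tmp/me-nott /etc/sudoers.d/me-nott && sudo chmod 440 /etc/sudoers.d/me-nott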

    Read the article

  • Windows 7 - system error 5 problem

    - by Ian
    My wife has just had a new computer for Christmas (with an upgrade from Vista to Windows 7) and has joined the home network. We are using a mix of Windows XP and Ubuntu boxes linked via a switch. We are all in the same workgroup (no domain). Internet access, DHCP, and DNS are provided by an SME server that thinks it is a domain controller (although we are not using a domain).

    I need to run a script to back up my wife's machine (venus). In the past the script creates a share on a machine with lots of space (leda) and then executes the line:

        PSEXEC \\venus -u admin -p adminpassword -c -f d:\Progs\snapshot.exe C: \\leda\Venus\C-drive.SNA

    With my wife's old XP machine, this would run the Sysinternals utility, copy snapshot.exe to her machine and run it, which would then back up her C: drive to the share on leda. I cannot get this to work with Windows 7, nor can I link through to the C$ share on her machine; that gives me a permissions error (system error 5). The admin account is a full admin account, and yes, I do know the password. The ordinary shares on her machine work fine!

    I guess I'm missing something that Microsoft has built into Windows 7, but what? The machine is running Windows 7 Business, with Windows Firewall, AVG antivirus, and all the crap-ware you get with a new PC removed.

    Thanks
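
    One Windows 7 default worth knowing about here: on machines in a workgroup (not a domain), UAC "remote restrictions" strip the administrator token from network logons, so admin shares like C$ and tools like PSEXEC get access denied (system error 5) even with the right password. The commonly cited registry switch, run in an elevated prompt on the Windows 7 machine itself:

        reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System ^
            /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

    A reboot (or at least retrying the connection) afterwards is advisable; note that this deliberately weakens UAC for remote local-account logons, which is exactly the trade-off being made.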

    Read the article

  • Installing VirtualBox on BackTrack 5

    - by m0skit0
    I'm getting this error when running VirtualBox's installation script:

        $ sudo ~/Downloads/VirtualBox-4.1.14-77440-Linux_x86.run
        Verifying archive integrity... All good.
        Uncompressing VirtualBox for Linux installation...........
        VirtualBox Version 4.1.14 r77440 (2012-04-12T16:20:44Z) installer
        Removing previous installation of VirtualBox 4.1.14 r77440 from /opt/VirtualBox
        Installing VirtualBox to /opt/VirtualBox
        tar: Record size = 8 blocks
        Python found: python, installing bindings...
        Building the VirtualBox kernel modules
        Error! Bad return status for module build on kernel: 3.2.6 (i686)
        Consult the make.log in the build directory
        /var/lib/dkms/vboxhost/4.1.14/build/ for more information.
        ERROR: binary package for vboxhost: 4.1.14 not found

    Here's the log:

        $ cat /var/lib/dkms/vboxhost/4.1.14/build/make.log
        DKMS make.log for vboxhost-4.1.14 for kernel 3.2.6 (i686)
        Sun May 13 14:32:52 CEST 2012
        make: Entering directory `/usr/src/linux-headers-3.2.6'
        /usr/src/linux-headers-3.2.6/arch/x86/Makefile:39: /usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu: No such file or directory
        make: *** No rule to make target `/usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu'.  Stop.
        make: Leaving directory `/usr/src/linux-headers-3.2.6'

    The /usr/src/linux-headers-3.2.6/arch/x86/ directory:

        $ ls /usr/src/linux-headers-3.2.6/arch/x86/
        Kconfig        Makefile  ia32    lguest    mm        pci       tools  video
        Kconfig.cpu    boot      kernel  lib       net       platform  um     xen
        Kconfig.debug  crypto    kvm     math-emu  oprofile  power     vdso

    Makefile references to "cpu":

        $ cat /usr/src/linux-headers-3.2.6/arch/x86/Makefile | grep cpu
        include $(srctree)/arch/x86/Makefile_32.cpu
        # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)

    Before upgrading to 3.x I didn't have this problem; the script would install VirtualBox correctly. Any ideas on what might be causing this? Thanks in advance!
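
    The error means the installed header tree is incomplete: arch/x86/Makefile_32.cpu simply isn't in the tree DKMS is building against. On BackTrack 5 the commonly suggested remedy is to rebuild the kernel source/header tree with the distro's helper and patch up the include layout, roughly as follows (hedged; the prepare-kernel-sources helper comes from BackTrack's own tooling, so verify it exists on your install):

        # Regenerate the kernel source tree BackTrack builds modules against
        prepare-kernel-sources
        cd /usr/src/linux
        cp -rf include/generated/* include/linux/

        # Then re-run the VirtualBox installer
        sudo ~/Downloads/VirtualBox-4.1.14-77440-Linux_x86.run

    Alternatively, copying the missing Makefile_32.cpu from a matching vanilla 3.2.6 source tarball into /usr/src/linux-headers-3.2.6/arch/x86/ addresses this particular failure.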

    Read the article

  • Variable directory names over SCP

    - by nedm
    We have a backup routine that previously ran from one disk to another on the same server, but we have recently moved the source data to a remote server and are trying to replicate the job via scp. We need to run the script on the target server, and we've set up key-based scp (no username/password required) between the two servers. Using scp to copy specific files and directories works perfectly:

        scp -r -p -B [email protected]:/mnt/disk1/bsource/filename.txt /mnt/disk2/btarget/

    However, our previous routine iterates through directories on the source disk to determine which files to copy, then runs them individually through gpg encryption. Is there any way to do this using only scp? Again, this script needs to run from the target server, and the user the job runs under only has scp (no ssh) access to the target system. The old job looked something like this:

        # Change to source dir
        cd /mnt/disk1

        # Variable to hold directories named by date YYYYMMDD
        j="20000101/"

        # Iterate through directories in the current dir
        # to get the most recent folder name
        for i in $(ls -d */); do
            if [ "$j" \< "$i" ]; then
                j=${i%/*}
            fi
        done

        # Encrypt individual files from $j to the target directory
        cd ./${j%%}/bsource/
        for k in $(ls -p | grep -v /$); do
            sudo /usr/bin/gpg -e -r "Backup Key" --batch --no-tty -o "/mnt/disk2/btarget/$k.gpg" "/mnt/disk1/$j/bsource/$k"
        done

    Can anyone suggest how to do this via scp from the target system? Thanks in advance.
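
    scp itself has no way to list a remote directory, so the date-folder discovery has to come from somewhere else. If the restricted account is still allowed the SFTP subsystem (often the case even when interactive ssh is blocked; that's an assumption to verify), the listing can be done in batch mode and the rest of the old loop kept nearly intact:

        # Ask sftp for the remote directory names, keep the lexically
        # greatest one (newest YYYYMMDD)
        j=$(echo 'ls -1 /mnt/disk1' | sftp -b - [email protected] \
            | grep -Eo '[0-9]{8}' | sort | tail -1)

        # Pull that day's files, then encrypt locally as before
        mkdir -p /tmp/staging
        scp -r -p -B "[email protected]:/mnt/disk1/$j/bsource/*" /tmp/staging/
        for k in /tmp/staging/*; do
            gpg -e -r "Backup Key" --batch --no-tty \
                -o "/mnt/disk2/btarget/$(basename "$k").gpg" "$k"
        done

    If the SFTP subsystem is also disabled, the remaining option is to have the source side publish the folder name itself, e.g. a fixed-name "latest" symlink or a manifest file that can be fetched by plain scp.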

    Read the article

  • Adding network share as the system account breaks Win7 backup to network

    - by ChrisBenn
    (On Windows 7 Ultimate x64 SP1)

    I've been using Windows 7 backup to \\192.168.0.100\Backup\main-desktop\ for a while without issue. Yesterday I tried to set up CrashPlan to synchronize my Dropbox folder and a network share. I then found out that CrashPlan, as it runs under the SYSTEM account, can't see my user-mapped drives. So I created a startup script:

        net use O: \\192.168.0.100\Documents /USER:192.168.0.100\username password

    and set it to run on startup, after the network interface is up, as the SYSTEM account. (The username and password are the same for the net use script above, the locally logged-in user, and the explicit username/password entered in Windows Backup.)

    I woke up this morning to find error flags from Windows Backup: "Network location cannot be used" (0x800704B3). If I disable the startup task and reboot, then Windows Backup works fine.

    I'm not sure why having a mapped drive for another user is killing Windows Backup (same server, different folder). I can work around the issue by using another program to synchronize the two folders, but I'm completely in the dark as to why this happens (and it's 100% repeatable). Uninstalling the CrashPlan client doesn't change anything; it's the net use run under the SYSTEM account that breaks Windows 7 backup (to a network location).

    Read the article

  • Skip all warning prompts on ACPI shutdown?

    - by N Rahl
    When I issue an ACPI shutdown command to a Windows XP guest machine from the host VM server, I want Windows to shut down. The problem is that Windows always wants to ask some question or another rather than just shutting down. I need shutdown to be reliable, no matter what is running or going on, so I can automate shutdowns from the host machine, but I want it to be as graceful as possible, rather than just pulling the plug. Some problems:

    - If a user is logged in, ACPI shutdown causes a box to appear that says, "are you sure you want to shutdown while other users are logged in?", and this prevents shutdown until someone connects to the machine and clicks "yes". In this case, it should try its best to gracefully log out all users, using force if necessary, and then shut down without prompting.
    - Busy or non-responding programs, or programs asking to save data, can prevent Windows from shutting down until a user answers a prompt. It should attempt to save data and wait maybe 30 seconds for non-responding programs, but should get aggressive with stubborn programs: "Nope, time's up! 3, 2, 1, goodbye!"

    Is there a registry setting that I can change from "ACPI_Shutdown: shut down if Windows feels like it" to "ACPI_Shutdown: just do it; kill programs, bump users, try to be graceful about it, but when I come back I expect you to be off"? This should respond to the ACPI shutdown command and not be a script on Windows, unless that script is triggered by the ACPI power button. I'm hoping this can be changed with registry options.
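
    There is no single "force ACPI shutdown" switch, but the program-related prompts map onto a handful of real registry values that make XP end tasks automatically instead of waiting on the user. A sketch (the Control Panel\Desktop values are per user; timeouts are in milliseconds):

        reg add "HKCU\Control Panel\Desktop" /v AutoEndTasks /t REG_SZ /d 1 /f
        reg add "HKCU\Control Panel\Desktop" /v WaitToKillAppTimeout /t REG_SZ /d 30000 /f
        reg add "HKCU\Control Panel\Desktop" /v HungAppTimeout /t REG_SZ /d 5000 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v WaitToKillServiceTimeout /t REG_SZ /d 30000 /f

    This covers the "stubborn program" prompts; the "other users are logged in" dialog is tied to Fast User Switching with multiple active sessions, so forcing the extra users to log off (or disabling Fast User Switching) has to be handled separately.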

    Read the article

  • apache-user & root access

    - by ahmedshaikhm
    I want to develop a few PHP scripts that will invoke the following commands using the exec() function:

        service network restart
        crontab -u root /xyz/abc/fjs/crontab

    The issue is that Apache executes scripts as the apache user (I am on CentOS 5), and regardless of adding apache to wheel or trying every good, bad, and ugly group assignment, it does not run the commands above. Following are my configurations.

    My /etc/sudoers:

        root    ALL=(ALL)       ALL
        apache  ALL=(ALL)       NOPASSWD: ALL
        %wheel  ALL=(ALL)       ALL
        %wheel  ALL=(ALL)       NOPASSWD: ALL

    As I've tried a couple of combinations of sudoers and httpd.conf, the recent httpd.conf looks something like this:

        User apache
        Group wheel

    My PHP script:

        exec("service network start", $a);
        print_r($a);
        exec("sudo -u root service network start", $a);
        print_r($a);

    Output:

        Array
        (
            [0] => Bringing up loopback interface:  [FAILED]
            [1] => Bringing up interface eth0:  [FAILED]
            [2] => Bringing up interface eth0_1:  [FAILED]
            [3] => Bringing up interface eth1:  [FAILED]
        )
        Array
        (
            [0] => Bringing up loopback interface:  [FAILED]
            [1] => Bringing up interface eth0:  [FAILED]
            [2] => Bringing up interface eth0_1:  [FAILED]
            [3] => Bringing up interface eth1:  [FAILED]
        )

    Without any surprise, when I invoke a network services restart via ssh, using a similar user to apache, the command executes successfully. It's all about accessing such commands via the HTTP protocol. I am sure cPanel/Plesk-type software uses something like sudoers, and what I am trying to do is basically possible, but I need your help to understand which piece I am missing. Thanks a lot!
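
    Two details in the setup above tend to bite on CentOS 5: the first exec() call never actually prefixes the command with sudo, and the stock sudoers ships Defaults requiretty, which blocks sudo from processes (like Apache) that have no terminal. A more contained sketch than "apache NOPASSWD: ALL", granting only the two commands needed (edit with visudo; paths per CentOS 5):

        Defaults:apache !requiretty
        apache ALL=(root) NOPASSWD: /sbin/service network restart, /usr/bin/crontab -u root /xyz/abc/fjs/crontab

    with the PHP side calling exactly the commands listed:

        exec('sudo /sbin/service network restart 2>&1', $out, $rc);

    Granting apache NOPASSWD: ALL, as in the question, means any PHP bug is a root shell for the web server, so narrowing the grant to explicit command lines is worth doing even once it works.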

    Read the article

  • Solaris SMF to Upstart on RHEL6

    - by aaa90210
    I am planning a migration from Solaris/x86 to RHEL 6. Part of this migration will be migrating services from SMF to the RHEL 6 equivalent, which appears to be Upstart. While init.d scripts still seem to be supported, I want to take advantage of a more sophisticated init daemon, especially for features like job supervision (restarting etc.). I would like to gather some thoughts on a few points:

    1. Is Upstart an adequate job supervisor, i.e. does it preclude the need for stand-alone managers like daemontools/supervise?
    2. Upstart scripts seem very bare-bones compared to a typical init.d script. If I was porting an init.d script to Upstart, is it OK to just "exec /etc/init.d/myjob start"? This includes RHEL-installed programs like httpd.
    3. Does Upstart do anything in regards to pid files, and what are its expectations in regards to the forking model of the process?
    4. Are there any straightforward guides to the process-management aspect of Upstart, by which I mean the conditions around controlling restarting? E.g. how many times to restart the process before it goes into a maintenance state, or whether to ignore errors/core dumps in child processes of the supervised process.

    Any other relevant ideas or guides would be appreciated. TIA
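
    On points 2-4, a minimal Upstart job illustrating the supervision knobs (syntax as in Upstart 0.6.x, the version RHEL 6 ships; the daemon name is a placeholder):

        # /etc/init/myjob.conf
        description "myjob under upstart supervision"
        start on runlevel [2345]
        stop on runlevel [016]

        respawn                 # restart the job if it dies
        respawn limit 5 60      # give up after 5 deaths within 60 seconds

        # Upstart tracks the PID itself, so no pid file is needed -- but the
        # forking model must be declared: omit "expect" for a foreground
        # process, use "expect fork" / "expect daemon" for one or two forks.
        expect daemon
        exec /usr/sbin/mydaemon

    Wrapping "exec /etc/init.d/myjob start" technically runs, but the init.d script exits after launching the daemon, so Upstart would supervise the wrapper rather than the service; pointing exec at the real binary (with "expect" matched to its forking) is what makes respawn meaningful.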

    Read the article

  • How do you install .net4 on a Server 2008 r2 machine through psremoting in powershell?

    - by Jake
    I need to write a script that installs .NET 4 remotely, using PowerShell, on a group of Server 2008 R2 machines. I based my script on http://social.technet.microsoft.com/Forums/en-US/winserverpowershell/thread/3045eb24-7739-4695-ae94-5aa7052119fd/.

        enter-pssession -computername localhost
        $arglist = "/q /norestart /log C:\Users\tempuser\Desktop\dotnetfx4"
        $filepath = "C:\Users\tempuser\Desktop\dotNetFx40_Full_setup.exe"
        Start-Process -FilePath $filepath -ArgumentList $arglist -Wait -PassThru

    After running the command I get the following log errors (running the same lines locally installs .NET without error):

        Action: Downloading Item
        Failed to CreateJob : hr= 0x80200014
        Action: Performing actions on all Items
        Action: Performing Action on Exe at C:\Users\tempuser\Desktop\dotnetfx4\SetupUtility.exe
        Exe (C:\Users\tempuser\Desktop\dotnetfx4\SetupUtility.exe) succeeded.
        Exe Log File: dd_SetupUtility.txt
        Action complete
        Action: ServiceControl - Stop clr_optimization_v2.0.50727_32
        ServiceControl operation succeeded!
        Action complete
        Action: ServiceControl - Stop clr_optimization_v2.0.50727_64
        ServiceControl operation succeeded!
        Action complete
        Action: Performing Action on Exe at C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\Windows6.1-KB958488-v6001-x64.msu
        Exe (C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\Windows6.1-KB958488-v6001-x64.msu) failed with 0x5 - Access is denied.
        PerformOperation on exe returned exit code 5 (translates to HRESULT = 0x5)
        Action complete
        OnFailureBehavior for this item is to Rollback.
        Action: Performing actions on all Items
        Action complete
        Action complete
        Action: Downloading http://go.microsoft.com/fwlink/?LinkId=164184&clcid=0x409 using WinHttp
        WinHttpDetectAutoProxyConfigUrl failed with error: 12180
        Unable to retrieve Proxy information although WinHttpGetIEProxyConfigForCurrentUser called succeeded
        Action complete
        C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\TMPF279.tmp.exe: Verifying signature for netfx_Core.mzz
        C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\TMPF279.tmp.exe Signature verified successfully for netfx_Core.mzz
        Action complete
        Decompression completed with code: 16389
        Decompression of payload failed: C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\netfx_Core.mzz
        Action complete
        Final Result: Installation failed with error code: (0x80074005) (Elapsed time: 0 00:00:28).

    Is there some security setting, or perhaps something else, I've missed?

    Read the article

  • What's with the accesses to $random_existing_file/cache/df.php?

    - by Bernd Jendrissek
    Occasionally I eyeball Apache's access_log, and lately I've been noticing these accesses to URLs that I don't serve. They're correctly 404'ed, but I'd like to know just who and what is involved here. "Obviously" it's some sort of vulnerability probing; I'd like to know which. (Not that it affects me, but I like to know the score.) Here's an example:

        69.89.31.206 - - [28/Nov/2012:17:36:34 +0200] "GET /cvfull.pdf/cache/df.php HTTP/1.1" 404 489 "-" "-"

    Oddly, all 26 attempts are to either /cache/df.php or /cvfull.pdf/cache/df.php; they come in pairs. A few weeks ago it was zx.php, now it's df.php, so I'm assuming the target is the same. Perhaps I should be flattered that a script is thinking of hiring me. Seriously, my CV is one of only two PDF files on my site, so I can only guess that non-PDF URLs aren't interesting?

    I've tried Googling for "cache df php", but my Google-fu is weak at the best of times, so I can only find a few reports of other script attacks. What's the vulnerability being scanned for here?
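
    For tallying this kind of probe traffic, a couple of one-liners against the combined log format shown above (the log path is an example):

        # Which URLs containing cache/*.php are being probed, and how often?
        awk '$7 ~ /cache\/[a-z]{2}\.php/ {print $7}' /var/log/httpd/access_log | sort | uniq -c | sort -rn

        # Which client IPs are doing it?
        grep 'cache/df.php' /var/log/httpd/access_log | awk '{print $1}' | sort | uniq -c | sort -rn

    The empty referrer and user agent ("-" "-") in the sample line are themselves typical of bulk scanners built on raw HTTP libraries rather than browsers.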

    Read the article

  • IIS 7 - 403 Access Denied error on wwwroot trying to redirect to /owa

    - by cparker4486
    I'm trying to set up a redirect from http://mail.mydomain.com to https://mail.mydomain.com/owa. I was unsuccessful in doing this using IIS's HTTP Redirect, so I looked at other options. The one I settled on is to create a default document in the wwwroot folder to handle the redirect. I created a file called index.aspx (and added index.aspx to the list of default documents) and put the following code in it:

        <script runat="server">
            private void Page_Load(object sender, System.EventArgs e)
            {
                Response.Status = "301 Moved Permanently";
                Response.AddHeader("Location","https://mail.mydomain.com/owa");
            }
        </script>

    Instead of getting a redirect, I get:

        403 - Forbidden: Access is denied.
        You do not have permission to view this directory or page using the credentials that you supplied.

    I've been trying to find an answer to this but have been unsuccessful so far. One thing I did try was to add the Everyone group to wwwroot with read access; no change. The AppPool for Default Web Site is DefaultAppPool and the identity is ApplicationPoolIdentity. (I don't know what these things are, but maybe knowing this will help you.) Thanks!
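
    For reference, IIS 7's built-in httpRedirect feature can usually do this without a default document, and it can be set from an elevated command prompt with appcmd (standard IIS 7 path; adjust the site name as needed):

        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" ^
            /section:httpRedirect /enabled:true ^
            /destination:"https://mail.mydomain.com/owa" ^
            /exactDestination:true /httpResponseStatus:Permanent

    If the aspx approach is kept instead, a 403 with these symptoms often means the request never reached the page at all (e.g. ASP.NET not registered for the site, or no default-document match, so IIS fell back to a forbidden directory listing); the detailed error page's "Module" field narrows down which handler generated the 403.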

    Read the article
