Search Results

Search found 5695 results on 228 pages for 'logoff scripts'.

Page 61/228

  • can a python script know that another instance of the same script is running... and then talk to it?

    - by Justin Grant
    I'd like to prevent multiple instances of the same long-running Python command-line script from running at the same time, and I'd like the new instance to be able to send data to the original instance before the new instance commits suicide. How can I do this in a cross-platform way? Specifically, I'd like to enable the following behavior: "foo.py" is launched from the command line, and it will stay running for a long time -- days or weeks, until the machine is rebooted or the parent process kills it. Every few minutes the same script is launched again, but with different command-line parameters. When launched, the script should see if any other instances are running. If other instances are running, then instance #2 should send its command-line parameters to instance #1, and then instance #2 should exit. Instance #1, if it receives command-line parameters from another script, should spin up a new thread and (using the command-line parameters sent in the step above) start performing the work that instance #2 was going to perform.
    So I'm looking for two things: how can a Python program know another instance of itself is running, and how can one Python command-line program communicate with another? Making this more complicated, the same script needs to run on both Windows and Linux, so ideally the solution would use only the Python standard library and not any OS-specific calls, although if I need to have a Windows codepath and a *nix codepath (and a big if statement in my code to choose one or the other), that's OK if a "same code" solution isn't possible. I realize I could probably work out a file-based approach (e.g. instance #1 watches a directory for changes and each instance drops a file into that directory when it wants to do work), but I'm a little concerned about cleaning up those files after a non-graceful machine shutdown. I'd ideally be able to use an in-memory solution. But again I'm flexible: if a persistent-file-based approach is the only way to do it, I'm open to that option.
    More details: I'm trying to do this because our servers are using a monitoring tool which supports running Python scripts to collect monitoring data (e.g. results of a database query or web service call), which the monitoring tool then indexes for later use. Some of these scripts are very expensive to start up but cheap to run after startup (e.g. making a DB connection vs. running a query), so we've chosen to keep them running in an infinite loop until the parent process kills them. This works great, but on larger servers 100 instances of the same script may be running, even if they're only gathering data every 20 minutes each. This wreaks havoc with RAM, DB connection limits, etc. We want to switch from 100 processes with 1 thread to one process with 100 threads, each executing the work that, previously, one script was doing. But changing how the scripts are invoked by the monitoring tool is not possible. We need to keep the invocation the same (launch a process with different command-line parameters) but change the scripts to recognize that another one is active, and have the "new" script send its work instructions (from the command-line params) over to the "old" script.
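    One cross-platform approach (strictly a minimal sketch, not a finished solution) is to treat a fixed localhost TCP port as both the single-instance lock and the communication channel: the first instance that binds the port becomes instance #1 and listens; any later instance fails to bind, connects instead, forwards its command-line parameters, and exits. The port number and do_work() below are hypothetical placeholders for the real monitoring job.
        import socket
        import sys
        import threading

        PORT = 47200  # hypothetical fixed port; must be unique per script

        def do_work(args):
            # Placeholder for the real monitoring work driven by these parameters.
            print("working on", args)

        def handle(conn):
            # Instance #1: read the forwarded parameters and start a worker thread.
            with conn:
                args = conn.recv(4096).decode().split("\0")
            threading.Thread(target=do_work, args=(args,), daemon=True).start()

        def main(argv):
            try:
                # If the bind succeeds, we are instance #1: listen for later instances.
                server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                server.bind(("127.0.0.1", PORT))
                server.listen(5)
            except OSError:
                # Port already taken: hand our parameters to instance #1 and exit.
                with socket.create_connection(("127.0.0.1", PORT)) as client:
                    client.sendall("\0".join(argv).encode())
                return
            threading.Thread(target=do_work, args=(argv,), daemon=True).start()
            while True:
                conn, _ = server.accept()
                handle(conn)

        if __name__ == "__main__":
            main(sys.argv[1:])
    Because the socket disappears with its process, there is nothing to clean up after a non-graceful shutdown, which avoids the stale-file problem of the directory-watching approach.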

    Read the article

  • Google docs spreadsheet not loading

    - by Pythonista's Apprentice
    I have a Google spreadsheet with a lot of very important data and some scripts that were working well. At some point, the browser crashed and I reloaded the page. After that, I can't access that (only that!) spreadsheet any more! I tried it from other Google accounts, but it doesn't work. All I get is the "Loading..." message in the browser tab and nothing more (the loading process never completes). I also can't copy the file or download it! I.e., I could lose all the information in my spreadsheet and also lose all my scripts! (I never thought something like this could happen with a Google product.) How can I solve this problem? Thanks in advance for any help!

    Read the article

  • %sessionname% returns incorrect session name

    - by Samuel Walker
    I have a virtualised Windows XP SP3 machine, which I am connecting to over Remote Desktop. One of my scripts needs to use the %sessionname% variable. However, this returns incorrect information: C:\>%sessionname% constantly returns RDP-Tcp#5, instead of the value for the currently connected session (RDP-Tcp#35 or similar) as shown in Task Manager. This causes my scripts to contain incorrect information. What can I do to resolve this? Edit -- further information: a restart appears to solve the problem for the first connection, but subsequent connections have the numbers fall out of sync again.

    Read the article

  • How to automatically set default quota limits for users on an XFS filesystem when a new account is created

    - by acidburn2k
    I guess the title explains the problem pretty well. Do you have an idea for a mechanism that will automatically assign default quota values to every new account created (sort of like how the skel scheme works, but for quotas)? I am looking for a generic, clean solution, not some ugly cron-based scripts or wrapper scripts for creating users. I would also like to avoid any external, unmaintained stuff (like forgotten PAM modules and such). Anything that could lead to overhead and extra work in the future isn't really a solution, nor is checking for new accounts every minute.

    Read the article

  • LSB Script: how do I know if something goes wrong?

    - by ianaz
    How do I know if an LSB script fails to load, and where do I check the log of the LSB scripts? I added two scripts with the following command: update-rc.d scriptname defaults. Only one of them launches the things I need. It does not seem to be a script error, since if I launch it with /etc/init.d/scriptname it works. This is my script:
        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides: nodes
        # Required-Start: $remote_fs $syslog
        # Required-Stop: $remote_fs $syslog
        # Default-Start: 2 3 4 5
        # Default-Stop: 0 1 6
        # Short-Description: Starts all node apps
        # Description: Starts all node apps like AAM, AMT,...
        ### END INIT INFO
        echo "Launch Node applications with forever"
        export PATH=/usr/local/bin:$PATH
        # Starts the redis server
        redis-server
        # Starts AAM
        forever -o /var/log/AAM.log -e /var/log/AAM.log --spinSleepTime 2000 -m 5 start /var/nodejs/AAM/app.js

    Read the article

  • overload environment

    - by Richo
    I've recently switched to keeping my home directory in an svn repo across all my machines, meaning that my utility scripts, configuration (irssi, vim, zsh, screen etc.) as well as my .profile and so forth are easier to keep up to date across all the places I log in. I use a set of sourced .local files to override them on a per-site basis as required. As it stands, many of my scripts inherit some form of configuration, and for the most part I've been setting an environment variable in .profile and then, if needed on a per-site basis, overriding it in .profile.local. This works great, but are there pitfalls in having a stack of environment variables? Compared with my default environment from within an X session, before any of my personal configuration, I have not even increased it by 50%, but some of the machines I work on are low on resources. Am I bloating my system unnecessarily, or being needlessly paranoid? Should I start moving this config into separate flat files that are loaded as needed? That means extra infrastructure, or alternately writing a single module for storing config that all of my utilities can inherit.

    Read the article

  • How to allow Mac OS X's native Apache/PHP installation to access WebServer directories?

    - by Martin Bean
    I have a problem bugging me with Mac OS X's native Apache/PHP installation. With my PHP scripts, I have to alter the file permissions on each folder I want to access. For example, in an upload script I would have to set the destination directory to 'read & write' for the group 'everyone'. However, I believe this is not best practice, and I would like all of my directories to be readily writable by PHP. My scripts are stored in /Library/WebServer/Documents/, which is Mac OS X's default directory for serving web pages locally.

    Read the article

  • CentOS 5.5 Package documentation

    - by fthinker
    Usually, when I install a common package like PostgreSQL, MySQL, or Python using yum, it installs the files held within those packages into locations specific to CentOS itself. It may also install scripts specific to CentOS. These paths may not be the same as the defaults found in the source distributions on the PostgreSQL, MySQL, or Python project websites, and the scripts are usually unique to CentOS. Recently, when I installed PostgreSQL under Ubuntu, I found some very nice distribution-specific information about how the install was organized and how to use the package in an Ubuntu way. I found this information in /usr/share/doc/. Is there any such information included with CentOS?

    Read the article

  • monitor a folder and send files via ftp to clients

    - by user73109
    I am looking for software that will monitor a specific folder and, when a file is created in it, send that file off via FTP to a client associated with that folder by the software. I have tried software such as SmartFTP and CuteFTP, and they don't seem to monitor folders very consistently. Some of the options with them were to write scripts to delete duplicated files from the transfer queue. I really don't want to have to write scripts for software I purchase. I am not opposed to scripting, but I feel I shouldn't have to write scripts to make their software properly do something it says it does out of the box. I am currently trying to do this on a Windows XP box, though running on Server 2003 is an option if it would make things easier. I really just want to be pointed in the right direction; this is all fairly foreign to me.
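    For what it's worth, the behavior described (watch a folder, push each new file to a per-client FTP destination) is small enough to sketch with nothing but the Python standard library. The host, credentials, and folder below are hypothetical, and a production version would also wait for a file to stop growing before uploading it.
        import ftplib
        import os
        import time

        # Hypothetical values: one watched folder mapped to one client's FTP account.
        WATCH_DIR = r"C:\outgoing\clientA"
        FTP_HOST, FTP_USER, FTP_PASS = "ftp.example.com", "user", "secret"

        def upload(path):
            # Send a single new file to the client's server in binary mode.
            with ftplib.FTP(FTP_HOST, FTP_USER, FTP_PASS) as ftp, open(path, "rb") as fh:
                ftp.storbinary("STOR " + os.path.basename(path), fh)

        def watch():
            seen = set(os.listdir(WATCH_DIR))
            while True:
                time.sleep(10)  # simple polling; no OS-specific change notifications
                current = set(os.listdir(WATCH_DIR))
                for name in sorted(current - seen):
                    upload(os.path.join(WATCH_DIR, name))
                seen = current

        if __name__ == "__main__":
            watch()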

    Read the article

  • PHP Mail Not Sending Messages

    - by Kenton de Jong
    I realize this question title is pretty overused, but I couldn't find an answer to my problem. This could be because I'm not too good with PHP and I don't understand the problem, or because I have a different issue, but I thought I would post it and see if somebody can help me. I developed a website for a local church in my city. I made the site on my computer, put it onto my website as a subdirectory and tested it all. It worked great. One of the things the client wanted was an email form that can send emails. I made it and all was good. I then uploaded it onto the church server and thought it went well too. But then we decided to try the email form out, and for some reason it didn't work. I made the email form by having the user select a recipient (pastor, office manager, etc.) with a radio button, which changes the action of the email form. I just did something like this:
        if (recipent == "pastor") {
            document.forms[0].action = "../scripts/php/pastor_contact.php";
        } else if (recipent == "pastoralAssist") {
            document.forms[0].action = "../scripts/php/pastoral_assist_contact.php";
        } else if (recipent == "famMinistry") {
            document.forms[0].action = "../scripts/php/sacra_assist_contact.php";
        } else if (recipent == "sacraAssist") {
            document.forms[0].action = "../scripts/php/fam_ministry_contact.php";
        }
    I know this isn't the cleanest, but it works great. The PHP files then all look very similar to this (just a different email):
        <?php
        $name = $_POST['name'];
        $email = $_POST['email'];
        $phone = $_POST['phone'];
        $message = $_POST['message'];
        $formcontent = "From: $name \n Email: $email \n Phone Number: $phone \n Message: $message";
        $recipient = "[email protected]";
        $subject = $_POST['subject'];
        $mailheader = "$subject \r\n";
        mail($recipient, $subject, $formcontent, $mailheader) or die("There seems to be an error with this form. Sorry about the inconveince. We are working to get this fixed.");
        header('Location: ../../quickylinks/message_sent.html');
        ?>
    What this does, briefly, is collect the information from the email form, submit it as an email, and then redirect the user to a "Message Sent" page. This works on my server but not on theirs, so I believe it's something to do with their server. You can see their server information here and mine here. When the user sends the message, they get "There seems to be an error with this form. Sorry about the inconveince. We are working to get this fixed." and the email doesn't go through, although the code is the same on my server and it works fine there. My initial thought was that PHP wasn't installed on their server (rare, but it does happen). But it was. So then I thought maybe it was installed but the "mail" function was disabled, so I tried the following PHP code:
        <?php
        if (function_exists('mail')) {
            echo 'mail() is available';
        } else {
            echo 'mail() has been disabled';
        }
        ?>
    And it came back with "mail() is available". So now I'm stuck and I don't know what the problem could be. As I said, I'm not very good at PHP yet, so if somebody could give a detailed answer, I would be really, really thankful! Thank you so much!

    Read the article

  • How do I allow a (local) user to start/stop services with a scheduled task?

    - by Mulmoth
    Hi, on a Windows 2008 R2 server I have two small .cmd scripts to start/stop a certain service. They look like this: net start MyService and net stop MyService. I want to execute these scripts via a scheduled task, and I thought it would be best to create a local user for this job. The user is not a member of the Administrators group. But the scripts fail with exit code 2. When I log on with this local user and try to execute these scripts on the command line, I see a message like (maybe not exactly translated from German to English): Error code 5: Access denied. It doesn't matter whether I start the command line as Administrator or not. How can this local user gain the rights to do the job?

    Read the article

  • how to check if something is in the queue of torque?

    - by kloop
    I want to re-run some jobs that completed prematurely under Torque. These jobs are run through .job scripts (using qsub). However, I don't want to re-run a job which is already in the queue. Given a script filename, how can I know whether it is already in Torque's queue (using qstat?) or not? I prefer to do it programmatically, of course, so any one-liner that searches for a given script name would be great. I will note that I can grep submit_args in qstat -f, but I can't get it to display the whole script name when it is too long. This is crucial. EDIT: I managed to solve it using the following command, which works because all my scripts end in the string "job":
        qstat -x | perl -pi -e 's/\<\//\n/g' | grep job$ | grep -v submit_args | perl -pi -e 's/Job_Id\>\<Job_Name\>//'
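    If a short script is acceptable instead of a one-liner, here is a minimal sketch of the same check in Python. It assumes qstat -x emits the usual <Data><Job>...</Job></Data> XML and that the job name to match is passed as the first argument (qsub typically uses the script's basename as the job name).
        import subprocess
        import sys
        import xml.etree.ElementTree as ET

        def job_in_queue(job_name):
            # qstat -x prints one XML document covering every queued/running job,
            # or nothing at all when the queue is empty.
            out = subprocess.run(["qstat", "-x"], capture_output=True, text=True).stdout
            if not out.strip():
                return False
            root = ET.fromstring(out)
            return any(job.findtext("Job_Name") == job_name for job in root.iter("Job"))

        if __name__ == "__main__":
            # Exit status 0 if the job is in the queue, 1 otherwise, for use in shell tests.
            sys.exit(0 if job_in_queue(sys.argv[1]) else 1)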

    Read the article

  • who deleted my files?

    - by akalter
    I have some Linux servers. On two of our servers we have MySQL, and we have a daily backup on both machines, but the scripts are different. I looked at both scripts: in one of them I saw the "delete older files" step, but on the other the older files are also being deleted, and not by the script. I am trying to discover what deletes my files. Because of that, I want to use the same script on both machines: the script with the deletion also copies the files to another server, and I want to do that on both servers. Does anyone have an idea who (or what) deleted my older backups? Thank you!

    Read the article

  • OS X - Automatically Set Execute Permissions for New Files?

    - by i help X u
    I'm using OS X 10.6.4 and am trying to set a folder to automatically enable execute permissions on new script files copied or created in that directory. I have used Sandbox 2 to set every permission for the folder to enabled, with sticky bits and the inherit flag set, but I still have to manually set the execute flag using chmod for every new file. I've done chmod -R a+rwxs ~/scripts and I've done chmod 7777 ~/scripts, and the permissions for the folder show as drwsrwsrwt+. But if I add a new script file, it's set to "-rw-r--r--+" (the default). I looked at setting "umask 000" in the .profile file, but the default value for files is 666 with a umask of 022, so that's not relevant, since I would need a default value of 777 for files. I have figured out how to use chmod in an AppleScript triggered by a folder action to automate this, but I'm wondering if there is a simple ACL or chmod setting I'm missing. So, is there a way to automatically set execute permission for new files (without using a folder action and AppleScript)?
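    As far as I know, no umask or ACL inheritance setting will add the execute bit to newly created files (a umask can only clear bits, and most programs create files as 666), so some kind of watcher has to do it. As an alternative to the AppleScript folder action, here is a minimal polling sketch; the folder path and interval are just examples.
        import os
        import stat
        import time

        WATCH_DIR = os.path.expanduser("~/scripts")  # example folder to keep executable

        def make_executable(path):
            # Add u+x, g+x and o+x on top of whatever mode the file already has.
            mode = os.stat(path).st_mode
            os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

        def watch(interval=5):
            while True:
                for name in os.listdir(WATCH_DIR):
                    path = os.path.join(WATCH_DIR, name)
                    if os.path.isfile(path) and not os.access(path, os.X_OK):
                        make_executable(path)
                time.sleep(interval)

        if __name__ == "__main__":
            watch()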

    Read the article

  • How can I get bash to perform tab-completion for my aliases?

    - by dstarh
    I have a bunch of bash completion scripts set up (mostly using bash-it and some set up manually). I also have a bunch of aliases set up for common tasks, like gco for git checkout. Right now I can type git checkout d<Tab> and "develop" is completed for me, but when I type gco d<Tab> it does not complete. I'm assuming this is because the completion script is completing on git, and it fails to see gco. Is there a way to generically/programmatically get all of my completion scripts to work with my aliases? Not being able to complete when using the alias kind of defeats the purpose of the alias.

    Read the article

  • Comparing two specific properties of a CSV using Compare-Object isn't giving the expected results

    - by MDMarra
    I have a list of users from two separate domains. These lists are in CSV format, and I only care about the SAMAccountName, which is a field in these CSVs. The code that I'm working with is currently:
        $domain1 = Import-CSV C:\Scripts\Temp\domain1.xxx.org.csv | Select-Object SAMAccountName
        $domain2 = Import-CSV C:\Scripts\Temp\domain2.xxx.org.csv | Select-Object SAMAccountName
        Compare-Object ($domain1) ($domain2)
    This is returning only a handful of results (which aren't accurate), in this format: @{samaccountname=SomeUser} => Obviously, Compare-Object isn't evaluating the objects as strings. How do I make this work?

    Read the article

  • Batch file to open multiple cmd prompts

    - by JHarris
    I am trying to write a batch file that will automate the following manual process:
        1. Open a new cmd prompt (prompt1)
        2. Run a bat file (b1)
        3. Run a program (that will continue to run)
        4. Minimize prompt1
        5. Open a new cmd prompt (prompt2)
        6. Run a bat file (b1)
        7. Run a different program (that will continue to run)
        8. Minimize prompt2
    I've found ways to open multiple instances of cmd to run different things, but after I've run the first thing (b1), I then need to run a program in that same cmd window. I currently have:
        start /min cmd /k C:\Users\db2admin\python_environment\Scripts\activate.bat
        start /min cmd /k C:\Users\db2admin\python_environment\Scripts\activate.bat
    This opens the two windows and runs the bat, great, but now I need to execute another command (running a python file) in each of the cmd windows. How do I send commands to each prompt?

    Read the article

  • Practical way of keeping up-to-date backup servers?

    - by ftkg
    What is the approach generally used when you want to have backup physical servers? Currently I have a Linux server running a database, a samba share, a webapp and some scripts, and a Windows server running some third-party software. What I would like is to have a backup server ready to enter production in case of failure, but how do I keep the two up to date? I've seen some expensive solutions for Windows; for Linux, I've wondered if I really have to build an array of scripts.

    Read the article

  • Crontab -- scheduling my backups

    - by Garfonzo
    I want to do a backup every Friday night (no, this is not the whole backup routine, just part of it). Each Friday night's backup will not be overwritten until 4 weeks later. So, essentially, I have four revolving backups: week1, week2, week3, and week4. Now, I need the week1 backup script to run every 4 weeks, but I also want week2's script to run every four weeks. I know that I can tell crontab to execute something every X weeks/days/hours/whatever. However, how do I set it up so that each of these four scripts actually runs on a different week? How do I avoid all 4 scripts running on the same night, then dutifully waiting for weeks only to all run again? Thanks.
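    One common way around this (shown here only as a sketch, with hypothetical slot names and script paths) is to schedule a single cron entry every Friday night and let a small dispatcher pick which of the four rotating backups to overwrite, for example from the ISO week number modulo 4.
        import datetime
        import subprocess
        import sys

        SLOTS = ["week1", "week2", "week3", "week4"]  # hypothetical slot names

        def slot_for_today():
            # The ISO week number modulo 4 maps each Friday onto a repeating 4-week cycle.
            # (The cycle can hiccup across a year boundary when a year has 53 ISO weeks.)
            week = datetime.date.today().isocalendar()[1]
            return SLOTS[week % 4]

        if __name__ == "__main__":
            # Hypothetical per-slot backup script, e.g. /usr/local/bin/backup-week1.sh
            script = "/usr/local/bin/backup-%s.sh" % slot_for_today()
            sys.exit(subprocess.call([script]))
    The crontab side then stays a single weekly line, and the rotation logic lives in one place.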

    Read the article

  • Avoid PuTTY SSH terminal crashing when disconnecting from server

    - by JBoy
    I'm connecting via SSH to a remote 'live' server where I have some bash scripts automated via the crontab. When an error happens in some of the automation scripts on the server, the connection to the server is killed. That is fine with me, but the problem is that PuTTY closes the entire window, which is a behavior I don't want. I have checked all around the web (unfortunately the PuTTY site does not have a support page), but found nothing. Under PuTTY's options I have tried all the menus, expanding all options, but I still can't find the right one; I would expect it to be under Window > Behaviour. Do you have any advice? Thanks.

    Read the article

  • Can I use Cygwin as a replacement for Ubuntu, for bash script testing?

    - by Jeroen De Meerleer
    Next Wednesday I'm having an exam on Operating Systems. In this exam there will also be a part on bash scripting. The teacher himself will test the scripts in a virtual machine running Ubuntu. I, however, am having serious trouble running the latest Ubuntu (14.04 LTS) in a virtual machine (GNOME runs very slowly). So I'm thinking about using Cygwin, which is doing the job great for another course. The teacher already confirmed I can use that, but I suspect he doesn't know it at all. I've already tested the scripts we made in class and they all run without errors, but I'm quite sure there are some things I have to watch out for. My question: would you use Cygwin as a replacement for the Ubuntu VM? Or should I stick with the VM (maybe by using a different config/platform)?

    Read the article

  • Effectively managing crontab

    - by jakenoble
    My crontab looks something like this:
        1 * * * * /var/www/cron/site1.sh > /dev/null 2>&1
        0 * * * * /var/www/cron/site2.sh > /dev/null 2>&1
        3 * * * * /var/www/cron/site3.sh > /dev/null 2>&1
    This works great and lets me place all the nasty little script calls in one place, rather than making crontab harder to read than it already is. But this fails massively when site2.sh needs one script to run once a day, another to run once a week and another to run every 5 minutes. And of course it gets worse as new scripts get added with different timings. Is there a better way? EDIT: By better I mean more manageable; having a large crontab is not manageable, but neither is having scripts all over the place. Not necessarily a GUI.

    Read the article

  • Ubuntu upstart hangs on interactive start & stop

    - by danorton
    How do I get Ubuntu upstart to not hang on interactive start & stop? I have created many upstart scripts that work fine during init, but often hang when I enter them at the console. If I CTRL+C out, all that happens is that the job changes state. The script is never run. I'm running Ubuntu Lucid on a Xen virtual server with a Linux 2.6.39 kernel. Below is merely a representative example of many scripts that behave this way:
        description "apache2"
        start on local-filesystems \
            and (net-device-up IFACE=lo) \
            and (runlevel [2345])
        stop on runlevel [016]
        respawn
        respawn limit 10 5
        expect daemon
        script
            . /etc/apache2/envvars
            /usr/sbin/apache2ctl start
        end script

    Read the article
