Search Results

Search found 5564 results on 223 pages for 'scripts'.


  • IIS PHP internal server error

    - by user1633206
    I developed a website using PHP/MySQL, running on an IIS server through the CGI server API. After two weeks it suddenly started returning error 500. The site has a lot of scripts; the index.php home page still works, but every script that performs a header redirection fails. What's wrong with my scripts? The Live HTTP Headers add-on for Firefox reports:

        GET /allplans.php?lang=ar&cat=1 HTTP/1.1
        Host: www.myhost.com... [edited]
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:14.0) Gecko/20100101 Firefox/14.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive
        Cache-Control: max-age=0

        HTTP/1.1 500 Internal Server Error
        Content-Type: text/html
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        Date: Mon, 03 Sep 2012 08:09:13 GMT
        Content-Length: 1208
        Connection: Keep-Alive
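
    A hedged diagnostic sketch, not a confirmed cause: under IIS's CGI/FastCGI handler, a PHP fatal error or a "headers already sent" failure surfaces as a bare 500 when display_errors is off. Dropping something like this at the top of a failing script such as allplans.php makes the underlying error visible; the redirect target below is a placeholder:

        <?php
        // surface the real PHP error instead of IIS's generic 500 page
        ini_set('display_errors', '1');
        error_reporting(E_ALL);

        // header() fails if any output (even whitespace or a BOM) went out first
        if (headers_sent($file, $line)) {
            die("Output already started at $file:$line - the redirect cannot work");
        }
        header('Location: /target.php'); // placeholder target
        exit;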

    Read the article

  • Missing menu items for Azure SQL tables within SQL Server Management Studio?

    - by Sid
    I have a table (say Table1) that is replicated via the SQL Data Sync Agent across a local SQL Server 2012 and an Azure SQL server (part of Microsoft Azure). To the best of my understanding, everything about Table1 (schema, table values, etc.) is identical. However, when I list Table1 and right-click it in Microsoft SQL Server Management Studio 2012 (SSMS), I get some very different menu options, even for seemingly basic operations. Let's focus only on the 'Design' menu item: it is visible for Table1 on the local SQL server in SSMS; it is missing for Table1 on Azure SQL via SSMS; and it is visible for Table1 (as Open Table Definition) on Azure SQL when reached via Visual Studio 2012 (Server Explorer - Data connections). This is seen in the screenshots below. Now, I use scripts for the real work (especially when I need to check in the SQL scripts), but this difference still concerns me to some extent. Am I witnessing just a tools artifact in SQL Server Management Studio when connecting to Azure SQL, or is it something more serious about the limitations of Azure SQL itself (although just showing the Design surface is so basic)?

    Read the article

  • All terminal commands (like ls, cd, edit, open) are returning errors on my Mac

    - by park
    From what I can tell from reading other questions and answers, my .bash_profile file may be corrupt. If I type echo $PATH in Terminal, the result is:

        /usr/local/git/bin

    From what I've read, that's not what the result is supposed to be. But I also can't get any of the commands (like edit or subl, for Sublime Text 2) to open the .bash_profile file so I can edit it. I was able to open the file in TextEdit using cmd-shift-., and here's what's in the file:

        [[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"
        PATH=$PATH:~/bin
        export PATH
        export PATH=/usr/local/git/bin

    But the file is locked, so I can't edit it there either. I'm very new to programming and in the middle of trying to install everything on my Mac to go through a Ruby on Rails tutorial. I can't even check my version of Ruby, since even ruby -v returns:

        -bash: ruby: command not found

    Any help would be greatly appreciated. Thanks.
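
    The last line of that file replaces the entire PATH instead of appending to it, which would explain every standard command going missing. A hedged recovery sketch, assuming stock macOS system paths and typed with absolute paths since the usual commands are unreachable:

        # restore a sane PATH for this session only
        export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:$PATH

        # clear the Finder "Locked" flag, then fix the offending line to append
        /usr/bin/chflags nouchg ~/.bash_profile
        /usr/bin/sed -i '' 's|^export PATH=/usr/local/git/bin$|export PATH=$PATH:/usr/local/git/bin|' ~/.bash_profile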

    Read the article

  • How do I stop GNU Freetalk from automatically filling in the buddy name?

    - by Journeyman Geek
    I'm using GNU Freetalk along with expect in order to send notifications to my phone - Freetalk has a readline interface, and I use expect to build a series of non-interactive scripts that send information to another Jabber account. I'd like these scripts to end Freetalk 'properly', that is to say:

        user@domainname message one
        user@domainname message two
        /quit

    which would print out:

        message one
        message two

    and then quit. However, Freetalk 'helpfully' prepends user@domainname automatically, so I get:

        message one
        message two
        /quit

    as the output. The expect script still ends, but there's a delay. How would I stop Freetalk from adding the 'buddy' address automatically?

    Read the article

  • Can a Python script know that another instance of the same script is running... and then talk to it?

    - by Justin Grant
    I'd like to prevent multiple instances of the same long-running Python command-line script from running at the same time, and I'd like the new instance to be able to send data to the original instance before the new instance commits suicide. How can I do this in a cross-platform way? Specifically, I'd like to enable the following behavior: foo.py is launched from the command line and stays running for a long time, days or weeks, until the machine is rebooted or the parent process kills it. Every few minutes the same script is launched again, but with different command-line parameters. When launched, the script should check whether any other instances are running. If other instances are running, then instance #2 should send its command-line parameters to instance #1, and then instance #2 should exit. Instance #1, if it receives command-line parameters from another instance, should spin up a new thread and (using the command-line parameters sent in the step above) start performing the work that instance #2 was going to perform. So I'm looking for two things: how can a Python program know another instance of itself is running, and then how can one Python command-line program communicate with another? Making this more complicated, the same script needs to run on both Windows and Linux, so ideally the solution would use only the Python standard library and not any OS-specific calls. Although if I need a Windows codepath and a *nix codepath (and a big if statement in my code to choose one or the other), that's OK if a same-code solution isn't possible. I realize I could probably work out a file-based approach (e.g., instance #1 watches a directory for changes and each instance drops a file into that directory when it wants to do work), but I'm a little concerned about cleaning up those files after a non-graceful machine shutdown. I'd ideally be able to use an in-memory solution. But again I'm flexible: if a persistent-file-based approach is the only way to do it, I'm open to that option. More details: I'm trying to do this because our servers use a monitoring tool which supports running Python scripts to collect monitoring data (e.g., results of a database query or web service call), which the monitoring tool then indexes for later use. Some of these scripts are very expensive to start up but cheap to run after startup (e.g., making a DB connection vs. running a query), so we've chosen to keep them running in an infinite loop until the parent process kills them. This works great, but on larger servers 100 instances of the same script may be running, even if they're only gathering data every 20 minutes each. This wreaks havoc with RAM, DB connection limits, etc. We want to switch from 100 processes with 1 thread each to one process with 100 threads, each executing the work that, previously, one script was doing. But changing how the scripts are invoked by the monitoring tool is not possible. We need to keep the invocation the same (launch a process with different command-line parameters) but change the scripts to recognize that another one is active, and have the "new" script send its work instructions (from the command-line params) over to the "old" script.
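
    A hedged sketch of one common answer (Python 3, standard library only; the port number is an arbitrary assumption): binding a fixed localhost TCP port doubles as the "am I the only instance?" check on both Windows and Linux, and the same socket carries the command-line handoff, so there are no lock files to clean up after a crash:

        import socket
        import sys
        import threading

        PORT = 47200  # hypothetical fixed port; the bind doubles as the singleton lock

        def do_work(args):
            print('working on', args)  # placeholder for the real monitoring job

        def serve(sock):
            # instance #1: accept parameter handoffs from later instances forever
            while True:
                conn, _ = sock.accept()
                with conn:
                    # single recv keeps the sketch short; real code should loop
                    args = conn.recv(65536).decode().split('\0')
                threading.Thread(target=do_work, args=(args,), daemon=True).start()

        def main():
            try:
                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                sock.bind(('127.0.0.1', PORT))  # fails if another instance holds the port
                sock.listen(5)
            except OSError:
                # an instance is already running: hand over our args and exit
                with socket.create_connection(('127.0.0.1', PORT)) as conn:
                    conn.sendall('\0'.join(sys.argv[1:]).encode())
                sys.exit(0)
            threading.Thread(target=do_work, args=(sys.argv[1:],), daemon=True).start()
            serve(sock)

        if __name__ == '__main__':
            main()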

    Read the article

  • Setting up sendmail to perform mail routing

    - by Diden
    Is sendmail able to do the following? Forward many user emails to Office 365:

        [email protected] -> [email protected]
        [email protected] -> [email protected]
        [email protected] -> [email protected]
        [email protected] -> [email protected]

    Forward the following to a separate server to run PHP scripts (an autoresponder will be sent to the sender):

        [email protected]
        [email protected]

    Does anyone know of any sample configuration I could use to get started? Is there a good autoresponder for sendmail? Our email is hosted on Office 365, which does not allow us to run scripts, so I was considering this option. Is this viable? Please refer to the diagram for more information. Thank you.
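
    A hedged sketch of the usual sendmail mechanics (every address, path, and script name below is a placeholder, since the real addresses are redacted above): per-address routing lives in /etc/mail/virtusertable, and handing mail to a script goes through an /etc/aliases pipe on the box that runs PHP:

        # /etc/mail/virtusertable -- rebuild with: makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable
        [email protected]     [email protected]
        [email protected]     [email protected]
        [email protected]     info-script

        # /etc/aliases on the PHP server -- run newaliases after editing
        info-script: "|/usr/bin/php /var/scripts/handle_info.php"

    For the autoresponder itself, the vacation program that ships alongside sendmail is the traditional answer, wired up through the same alias mechanism.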

    Read the article

  • Google docs spreadsheet not loading

    - by Pythonista's Apprentice
    I have a Google spreadsheet with a lot of very important data and some scripts that were working well. At some point the browser crashed and I reloaded the page. After that, I can't access that (and only that!) spreadsheet any more. I tried it from other Google accounts, but it doesn't work. All I get is the "Loading..." message in the browser tab and nothing more (the loading process never completes). I also can't copy the file or download it! That is, I could lose all the information in my spreadsheet and also lose all my scripts! (I never thought something like this could happen with a Google product.) How can I solve this problem? Thanks in advance for any help!

    Read the article

  • %sessionname% returns incorrect session name

    - by Samuel Walker
    I have a virtualised Windows XP SP3 machine which I am connecting to over Remote Desktop. One of my scripts needs to use the %sessionname% variable. However, this returns incorrect information:

        C:\>echo %SESSIONNAME%

    constantly returns RDP-Tcp#5, instead of the value for the currently connected session (RDP-Tcp#35 or similar) as shown in Task Manager. This causes my scripts to contain incorrect information. What can I do to resolve this? Edit - further information: a restart appears to solve the problem for the first connection, but on subsequent connections the numbers fall out of sync again.
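
    A hedged workaround sketch: environment variables are captured when a process starts, so a shell or service that outlives the reconnect keeps serving the stale %SESSIONNAME%. Querying the session table live sidesteps the cached value; this assumes the query tool present on XP/2003-era Windows:

        @echo off
        rem the currently attached session is the line query session marks with ">"
        for /f "tokens=1" %%s in ('query session ^| findstr /b ">"') do set "LIVESESSION=%%s"
        rem strip the leading ">" marker
        set "LIVESESSION=%LIVESESSION:~1%"
        echo Current session: %LIVESESSION%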

    Read the article

  • LSB script: how do I know if something goes wrong?

    - by ianaz
    How do I know if an LSB script fails to load, and where do I check the log of the LSB scripts? I added two scripts with the following command:

        update-rc.d scriptname defaults

    but only one of them launches the things I need. It does not seem to be a script error, since it works if I launch it with /etc/init.d/scriptname. This is my script:

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          nodes
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Starts all node apps
        # Description:       Starts all node apps like AAM, AMT,...
        ### END INIT INFO

        echo "Launch Node applications with forever"
        export PATH=/usr/local/bin:$PATH

        # Starts the redis server
        redis-server

        # Starts AAM
        forever -o /var/log/AAM.log -e /var/log/AAM.log --spinSleepTime 2000 -m 5 start /var/nodejs/AAM/app.js
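
    Two hedged observations rather than a confirmed diagnosis: init scripts get no terminal, so their output vanishes at boot unless redirected, and redis-server runs in the foreground by default, which would block everything after it during boot. (When testing by hand, an already-running redis makes the new redis-server exit immediately, so the script appears to work.) A sketch of both fixes, with an arbitrarily chosen log path:

        # at the top of the script: capture all output for post-boot debugging
        exec >> /var/log/nodes-init.log 2>&1

        # let redis detach so the forever line below is actually reached
        redis-server --daemonize yes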

    Read the article

  • How to automatically set default quota limits for users on XFS filesystem, when the new account is created

    - by acidburn2k
    I guess the title explains the problem pretty well. Do you have an idea for a mechanism that automatically assigns default quota values to every new account as it is created (sort of how the skel scheme works, but for quotas)? I am looking for a generic, clean solution, not ugly cron-based scripts or wrapper scripts around user creation. I would also like to avoid any external, unmaintained stuff (like forgotten PAM modules and such). Anything that could lead to overhead and extra work in the future isn't really a solution, nor is checking for new accounts every minute.
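
    XFS has this built in; a hedged sketch (mount point and numbers are placeholders): the limit command in xfs_quota takes -d to set default limits applied to every user ID without an explicit limit, so new accounts pick them up with no per-account step at all:

        # set default block limits for all users on the filesystem mounted at /home
        xfs_quota -x -c 'limit -u bsoft=900m bhard=1g -d' /home

        # verify: the report shows the defaults applied to users with no explicit limits
        xfs_quota -x -c 'report -u' /home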

    Read the article

  • Overloading the environment

    - by Richo
    I've recently switched to keeping my home directory in an svn repo shared across all my machines, meaning that my utility scripts, configuration (irssi, vim, zsh, screen, etc.) as well as my .profile and so forth are easier to keep up to date across all the places I log in. I use a set of sourced .local files to override them on a per-site basis as required. As it stands, many of my scripts inherit some form of configuration, and for the most part I've been setting an environment variable in .profile and then, if needed on a per-site basis, overriding it in .profile.local. This works great, but are there pitfalls in having a stack of environment variables? Comparing against my default environment within an X session, before any of my personal configuration loads, I haven't even increased it by 50%, but some of the machines I work on are low-resource. Am I bloating my system unnecessarily, or being needlessly paranoid? Should I start moving this config into separate flat files that are loaded as needed? That means extra infrastructure, or alternatively writing a single module for storing config that all of my utilities can inherit.

    Read the article

  • How to allow Mac OS X's native Apache/PHP installation to access WebServer directories?

    - by Martin Bean
    I have a problem bugging me with Mac OS X's native Apache/PHP installation. With my PHP scripts, I have to alter the file permissions on each folder I want to access. For example, in an upload script I would have to set the destination directory to 'read & write' for the group 'everyone'. However, I believe this is not best practice, and I would like all of my directories to be readily writable by PHP. My scripts are stored in /Library/WebServer/Documents/, which is Mac OS X's default directory for serving web pages locally.
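
    A hedged sketch of the usual alternative (the uploads folder name is a placeholder): Apple's bundled Apache runs PHP as the _www user, so giving that user's group ownership of the writable tree avoids opening it to everyone:

        # make Apache's group the owner of the upload tree and grant it write access
        sudo chown -R :_www /Library/WebServer/Documents/uploads
        sudo chmod -R g+w /Library/WebServer/Documents/uploads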

    Read the article

  • CentOS 5.5 Package documentation

    - by fthinker
    Usually, when I install a common package like PostgreSQL, MySQL, or Python using yum, it installs the files held within those packages into locations specific to CentOS itself, and it may also install scripts specific to CentOS. These paths may not match the defaults found in the source distributions on the PostgreSQL, MySQL, or Python project websites, and the scripts are usually unique to CentOS. Recently, when I installed PostgreSQL under Ubuntu, I found some very nice distribution-specific information about how the install was organized and how to use the package in an Ubuntu way. I found this information in /usr/share/doc/. Is there any such information included with CentOS?
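
    CentOS packages carry the same kind of notes; a hedged sketch of where to look (the package name is an example):

        # list the documentation files a package installed
        rpm -qd postgresql-server

        # most of them land under /usr/share/doc/<name>-<version>/
        ls /usr/share/doc/postgresql-server-*/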

    Read the article

  • Monitor a folder and send files via FTP to clients

    - by user73109
    I am looking for software that will monitor a specific folder and, when a file is created in it, send that file off via FTP to the client associated with that folder by the software. I have tried software such as SmartFTP and CuteFTP, and they don't seem to monitor folders very consistently. Some of the options with them required writing scripts to delete duplicated files from the transfer queue. I really don't want to have to write scripts for software I purchase. I am not opposed to scripting, or to writing it, but I feel I shouldn't have to script my way around purchased software to make it do something it says it does out of the box. I am currently trying to do this on a Windows XP box, though running it on Server 2003 is an option if that would make things easier. I really just want to be pointed in the right direction; this is all fairly foreign to me.
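
    If the off-the-shelf tools keep disappointing, a hedged sketch of how small the core task is in script form (Python standard library only; host, credentials, and folder are placeholders):

        import ftplib
        import os
        import time

        WATCH_DIR = r'C:\outgoing\clientA'  # hypothetical per-client folder
        FTP_HOST, FTP_USER, FTP_PASS = 'ftp.example.com', 'user', 'secret'

        seen = set(os.listdir(WATCH_DIR))
        while True:
            time.sleep(10)  # poll interval
            for name in sorted(set(os.listdir(WATCH_DIR)) - seen):
                path = os.path.join(WATCH_DIR, name)
                with ftplib.FTP(FTP_HOST, FTP_USER, FTP_PASS) as ftp:
                    with open(path, 'rb') as f:
                        ftp.storbinary('STOR ' + name, f)  # upload the new file
                seen.add(name)

    A real version would also need to wait until a file has finished being written before uploading it; this only shows the watch-and-send loop.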

    Read the article

  • PHP Mail Not Sending Messages

    - by Kenton de Jong
    I realize this question title is pretty overused, but I couldn't find an answer to my problem. This could be because I'm not too good with PHP and don't understand the problem, or because I have a different issue, but I thought I would post it and see if somebody can help. I developed a website for a local church in my city. I built the site on my computer, put it onto my own website as a subdirectory, and tested it all; it worked great. One of the things the client wanted was an email form that can send messages. I made it, and all was good. I then uploaded it to the church's server and thought that went fine too. But then we decided to try the email form out, and for some reason it didn't work. The form has the user select a recipient (pastor, office manager, etc.) with a radio button, which changes the action of the email form. I just did something like this:

        if (recipent == "pastor") {
            document.forms[0].action = "../scripts/php/pastor_contact.php";
        } else if (recipent == "pastoralAssist") {
            document.forms[0].action = "../scripts/php/pastoral_assist_contact.php";
        } else if (recipent == "famMinistry") {
            document.forms[0].action = "../scripts/php/sacra_assist_contact.php";
        } else if (recipent == "sacraAssist") {
            document.forms[0].action = "../scripts/php/fam_ministry_contact.php";
        }

    I know this isn't the cleanest, but it works great. The PHP files then all look very similar to this (just with a different email address):

        <?php
        $name = $_POST['name'];
        $email = $_POST['email'];
        $phone = $_POST['phone'];
        $message = $_POST['message'];

        $formcontent = "From: $name \n Email: $email \n Phone Number: $phone \n Message: $message";
        $recipient = "[email protected]";
        $subject = $_POST['subject'];
        $mailheader = "$subject \r\n";

        mail($recipient, $subject, $formcontent, $mailheader)
            or die("There seems to be an error with this form. Sorry about the inconvenience. We are working to get this fixed.");
        header('Location: ../../quickylinks/message_sent.html');
        ?>

    What this does, briefly, is collect the information from the email form, submit it as an email, and then redirect the user to a "Message Sent" page. This works on my server but not on theirs, so I believe it's something to do with their server. You can see their server information here and mine here. When the user sends the message, they get "There seems to be an error with this form. Sorry about the inconvenience. We are working to get this fixed." and the email doesn't go through, although the code is the same on my server, where it works fine. My initial thought was that PHP wasn't installed on their server (rare, but it does happen), but it was. So then I thought maybe it was installed but the mail function was disabled, so I tried the following PHP code:

        <?php
        if (function_exists('mail')) {
            echo 'mail() is available';
        } else {
            echo 'mail() has been disabled';
        }
        ?>

    And it came back with "mail() is available". So now I'm stuck, and I don't know what the problem could be. As I said, I'm not very good at PHP yet, so if somebody could give a detailed answer, I would be really, really thankful! Thank you so much!
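
    A hedged note on a common cause of exactly this symptom (works on one host, fails on another): the fourth argument to mail() is meant to carry extra headers such as From:, and stricter mail setups reject messages whose headers are malformed or missing a sender, while permissive ones let them through. A sketch of that change plus surfacing the real failure reason:

        <?php
        // send a genuine From header instead of repeating the subject,
        // and log why mail() failed rather than guessing
        $mailheader = "From: $email\r\n";

        if (!mail($recipient, $subject, $formcontent, $mailheader)) {
            $err = error_get_last();
            error_log('mail() failed: ' . ($err ? $err['message'] : 'no details available'));
        }
        ?>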

    Read the article

  • How do I allow a (local) user to start/stop services with a scheduled task?

    - by Mulmoth
    Hi, on a Windows 2008 R2 server I have two small .cmd scripts to start and stop a certain service. They look like this:

        net start MyService

    and

        net stop MyService

    I want to execute these scripts via a scheduled task, and I thought it would be best to create a local user for this job. The user is not a member of the Administrators group, but the scripts fail with exit code 2. When I log on with this local user and try to execute these scripts on the command line, I see a message like (perhaps not translated exactly from German to English):

        Error code 5: Access denied

    It doesn't matter whether I start the command line as Administrator or not. How can this local user gain the rights to do the job?
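
    One hedged way to sidestep granting service rights to the user at all: have the Task Scheduler run the scripts as the built-in SYSTEM account, which can start and stop services by default. The task name and schedule below are placeholders; the service name is from the question:

        rem create the task running as SYSTEM (no password prompt for /ru SYSTEM)
        schtasks /create /tn "StartMyService" /tr "net start MyService" /sc daily /st 06:00 /ru SYSTEM

    If the task really must run as the dedicated local user, that user needs the service's start/stop permission granted explicitly (for example with Microsoft's SubInACL tool), since non-administrators have no such right by default.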

    Read the article

  • Who deleted my files?

    - by akalter
    I have some Linux servers. On two of them we run MySQL, with a daily backup on both machines, but the backup scripts are different. I have read both scripts: in one of them I can see the "delete older files" step, but on the other the old files are deleted without it being in the script. I am trying to discover what deletes my files. I want to use the same script on both machines, because the script with the deletion also copies the files to another server, and I want that to happen on both servers. Does anyone have an idea what deleted my older backups? Thank you!
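
    A hedged sketch for catching the culprit (the backup path is a placeholder): check root's crontab and the /etc/cron.* directories first, since a second scheduled cleanup job is the usual answer; failing that, an audit watch records every process that touches the directory:

        # likely suspects: another scheduled job
        crontab -l -u root
        ls /etc/cron.daily /etc/cron.d

        # otherwise, audit the directory and later inspect who deleted what
        auditctl -w /var/backups/mysql -p wa -k backup-delete
        ausearch -k backup-delete -i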

    Read the article

  • How to check whether something is in Torque's queue?

    - by kloop
    I want to re-run some jobs that completed prematurely under Torque. These jobs are run through .job scripts (using qsub). However, I don't want to re-run a job which is already in the queue. Given a script filename, how can I know whether it is already in Torque's queue (using qstat?) or not? I would prefer to do it programmatically, of course, so any one-liner that searches for a given script name would be great. I will note that I can grep for submit_args in qstat -f, but I can't get it to display the whole script name when it is too long. This is crucial. EDIT: I managed to solve it using the following command, which works because all my scripts end in the string "job":

        qstat -x | perl -pe 's/<\//\n/g' | grep 'job$' | grep -v submit_args | perl -pe 's/Job_Id><Job_Name>//'

    Read the article

  • OS X - Automatically Set Execute Permissions for New Files?

    - by i help X u
    I'm using OS X 10.6.4 and am trying to set a folder to automatically enable execute permissions on new script files copied into or created in a directory. I have used Sandbox 2 to enable every permission for the folder, with sticky bits and the inherit flag set, but I still have to manually set the execute flag with chmod for every new file. I've done:

        chmod -R a+rwxs ~/scripts

    I've done:

        chmod 7777 ~/scripts

    And the permissions for the folder show as drwsrwsrwt+. But if I add a new script file, it's set to -rw-r--r--+ (the default). I looked at setting umask 000 in the .profile file, but the default creation mode for files is 666 with a umask of 022, so that's not relevant, since I would need a default value of 777 for files. I have figured out how to use chmod in an AppleScript triggered by a folder action to automate this, but I'm wondering if there is a simple ACL or chmod setting I'm missing. So, is there a way to automatically set execute permission for new files (without using a folder action and AppleScript)?
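
    A hedged sketch of the ACL route (the principal 'everyone' matches the question; file_inherit is the piece that plain mode bits cannot express). The underlying limitation stands: an inherited ACL can grant execute to new files, but nothing in umask can, because files are created from mode 666:

        # clear previous ACL experiments, then add an inheritable allow entry
        chmod -RN ~/scripts
        chmod +a "everyone allow read,write,execute,file_inherit,directory_inherit" ~/scripts

        # verify: -le lists the ACL entries that new files should inherit
        /bin/ls -le ~/scripts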

    Read the article

  • How can I get bash to perform tab-completion for my aliases?

    - by dstarh
    I have a bunch of bash completion scripts set up (mostly using bash-it, and some configured manually). I also have a bunch of aliases set up for common tasks, like gco for git checkout. Right now I can type git checkout d<Tab> and develop is completed for me, but when I type gco d<Tab> it does not complete. I'm assuming this is because the completion script triggers on git and never sees gco. Is there a way to generically/programmatically get all of my completion scripts to work with my aliases? Not being able to complete when using the alias rather defeats the purpose of the alias.
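
    For the git aliases specifically there is a hedged one-liner: git's completion file defines a helper for binding an alias to a subcommand's completion function (the source path varies by install and is an assumption here):

        # load git's completion rules, then attach checkout's completion to gco
        source /usr/local/etc/bash_completion.d/git-completion.bash
        __git_complete gco _git_checkout

    For arbitrary aliases there is no built-in equivalent; the usual approach is a small function that inspects each alias and generates a matching complete wrapper.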

    Read the article

  • Comparing two specific properties of a CSV using Compare-Object isn't giving the expected results

    - by MDMarra
    I have a list of users from two separate domains. These lists are in CSV format, and I only care about the SAMAccountName, which is a field in these CSVs. The code I'm working with is currently:

        $domain1 = Import-CSV C:\Scripts\Temp\domain1.xxx.org.csv | Select-Object SAMAccountName
        $domain2 = Import-CSV C:\Scripts\Temp\domain2.xxx.org.csv | Select-Object SAMAccountName
        Compare-Object ($domain1) ($domain2)

    This returns only a handful of results (which aren't accurate), in this format:

        @{samaccountname=SomeUser}

    Obviously, Compare-Object isn't evaluating the objects as strings. How do I make this work?
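
    A hedged sketch of the usual fix: Select-Object emits wrapper objects whose properties Compare-Object won't compare unless told to, so either name the property explicitly or unwrap to plain strings first (paths are from the question):

        # option 1: compare on the property explicitly
        Compare-Object $domain1 $domain2 -Property SAMAccountName

        # option 2: unwrap to strings with -ExpandProperty
        $domain1 = Import-Csv C:\Scripts\Temp\domain1.xxx.org.csv | Select-Object -ExpandProperty SAMAccountName
        $domain2 = Import-Csv C:\Scripts\Temp\domain2.xxx.org.csv | Select-Object -ExpandProperty SAMAccountName
        Compare-Object $domain1 $domain2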

    Read the article

  • Practical way of keeping up-to-date backup servers?

    - by ftkg
    What is the approach generally used when you want to keep backup physical servers up to date? Currently I have a Linux server running a database, a Samba share, a webapp, and some scripts, plus a Windows server running some third-party software. What I would like is a ready standby server that can enter production in case of failure, but how do I keep the pair in sync? I've seen some expensive solutions for Windows; for Linux, I've wondered whether I really have to build an array of scripts.
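
    On the Linux side, a hedged sketch of the minimal home-grown layer many setups start from (host names and paths are placeholders), run from cron on whatever schedule the data can tolerate:

        # mirror files to the standby, pruning anything deleted on the primary
        rsync -az --delete /srv/share/ standby:/srv/share/

        # ship a fresh database dump alongside it
        mysqldump --all-databases | ssh standby 'cat > /srv/backup/all.sql'

    This gives a warm standby rather than true replication; anything written between the last sync and a failure is lost.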

    Read the article

  • Batch file to open multiple cmd prompts

    - by JHarris
    I am trying to write a batch file that will automate the following manual process:

        1. Open a new cmd prompt (prompt1)
        2. Run a bat file (b1)
        3. Run a program (that will continue to run)
        4. Minimize prompt1
        5. Open a new cmd prompt (prompt2)
        6. Run the bat file (b1) again
        7. Run a different program (that will continue to run)
        8. Minimize prompt2

    I've found ways to open multiple instances of cmd to run different things, but after running the first thing (b1) I then need to run a program in the same cmd window. I currently have:

        start /min cmd /k C:\Users\db2admin\python_environment\Scripts\activate.bat
        start /min cmd /k C:\Users\db2admin\python_environment\Scripts\activate.bat

    This opens the two windows and runs the bat, great, but now I need to execute another command (running a Python file) in each of the cmd windows. How do I send commands to each prompt?
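
    A hedged sketch of the usual trick: cmd /k accepts a whole quoted command string, so chaining with && runs the activation script and then the long-running program in the same window (the Python file paths are placeholders; the first quoted argument to start is its window title):

        start "env1" /min cmd /k "C:\Users\db2admin\python_environment\Scripts\activate.bat && python C:\apps\first.py"
        start "env2" /min cmd /k "C:\Users\db2admin\python_environment\Scripts\activate.bat && python C:\apps\second.py"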

    Read the article
