Search Results

Search found 23627 results on 946 pages for 'alter script'.

  • Looking for a way to execute a task on all files in a directory (recursively) on Windows

    - by stzzz1
    I have a huge number of mp4 video files that need a volume boost. I need a way to run an ffmpeg audio filter on every file under a specified base directory (and in its subdirectories as well). My problem is that I'm working on a Windows computer and I have no knowledge of its shell syntax. I would like to do the equivalent of what this bash script does:

        TARGET_FILES=$(find /path/to/dir -type f -name *.mp4)
        for f in $TARGET_FILES
        do
            ffmpeg -i $f -af 'volume=4.0' output.$f
        done

    I spent quite some time this afternoon looking for a solution, but the recursive part of what I need (which is so simple with find!) isn't clear to me. Any help would be greatly appreciated!
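
    One possible approach, sketched here in Python rather than batch (assuming Python and ffmpeg are available on the machine; the base directory and the boosted_ output prefix are placeholders):

        import os
        import subprocess

        BASE_DIR = r"C:\path\to\dir"  # placeholder base directory

        for root, _dirs, files in os.walk(BASE_DIR):
            for name in files:
                if not name.lower().endswith(".mp4"):
                    continue
                src = os.path.join(root, name)
                dst = os.path.join(root, "boosted_" + name)
                # Boost the audio 4x and copy the video stream unchanged.
                subprocess.run(
                    ["ffmpeg", "-i", src, "-af", "volume=4.0", "-c:v", "copy", dst],
                    check=True,
                )

    os.walk plays the role that find plays in the bash version: it visits every subdirectory below the base directory.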

  • Some $_SERVER parameters missing when accessing PHP script via Cron

    - by Jakobud
    I have a script that I need to run with PHP via cron. The original author of the script made a lot of use of certain $_SERVER parameters (like REQUEST_URI). But it appears that certain variables don't exist when running PHP from the command line or via cron. For example, there is no request URI, so it makes sense that the REQUEST_URI parameter wouldn't be available. Is there any way around this other than to completely rewrite the script to avoid using special $_SERVER parameters that aren't universally available?

  • Linux- passwordless ssh from system (root) script

    - by redmoskito
    What's the easiest way to have a system script (running as root) execute remote commands over ssh? I've written some scripts that execute commands remotely via ssh, and they work great when I run them as myself, as I've set up ssh-agent and keys for passwordless login. I'd like to call these when my laptop docks and undocks. I've been successful at running arbitrary scripts when docking/undocking, but since the ACPI event scripts run as root, running my ssh script fails during authentication. I tried using sudo with the -u and -i flags to simulate running the script as my user, e.g.:

        sudo -u redmoskito -i /home/redmoskito/bin/remote_command

    which successfully finds my private key and tries to use it, but the ssh-agent credentials are still missing, so it still needs my passphrase.
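
    A minimal sketch of one workaround, in Python, assuming it is acceptable to give root its own dedicated passphrase-less key that is authorized on the remote host (the key path, account, host and remote command below are placeholders):

        import subprocess

        # Instead of borrowing the user's ssh-agent, the ACPI hook uses a key that
        # belongs to root, so no agent or passphrase is involved.
        subprocess.run(
            [
                "ssh",
                "-i", "/root/.ssh/dock_key",   # placeholder dedicated key
                "-o", "BatchMode=yes",         # fail instead of prompting
                "redmoskito@remotehost",       # placeholder account/host
                "~/bin/remote_command",        # placeholder remote command
            ],
            check=True,
        )

    The same ssh invocation works just as well directly in the ACPI event script; the point is the dedicated key plus BatchMode, not the wrapper.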

  • Replace single character in Windows filenames

    - by Matt Rogish
    I have a Win2k3 server that has a whole bunch of files that need to be renamed. Basically, I just need all - (dashes) replaced with _ (underscores), no matter where they are in the string. Assume that there are no duplicates. I can do this on my Mac with a little script, but the files are too large and too numerous to transfer to my Mac, rename, and then move back to the server. I would love to do this in a command shell and not have to download a renamer or any additional software. Thanks!
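
    The question asks for a plain command-shell solution; if having Python on the server is acceptable instead, a minimal sketch (the root folder is a placeholder):

        from pathlib import Path

        BASE_DIR = Path(r"C:\path\to\files")  # placeholder root folder

        for path in BASE_DIR.rglob("*"):
            if path.is_file() and "-" in path.name:
                # Rename in place, swapping every dash for an underscore.
                path.rename(path.with_name(path.name.replace("-", "_")))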

  • How to run scripts within a telnet session?

    - by wenzi
    I want to connect to a remote host using telnet. There is no username/password verification, just:

        telnet remotehost

    Then I need to input some commands for initialization, and then I need to repeat the following command:

        cmd argument

    argument is read from a local file; in this file there are many lines, and each line is an argument. After running one "cmd argument", the remote host will output some results: it may output a line with the string "OK", or output many lines, one of which contains the string "ERROR", and I need to do something according to the results. Basically, the script is like:

        initialization_cmd  # some initial commands
        while read line
        do
            cmd $line  # here the remote host will output results; how can I put the results into a variable?
            # here I want to judge the results, like
            if $results contain "OK"; then
                echo $line >> good_result_log
            else
                echo $line >> bad_result_log
            fi
        done < local_file

    good_result_log and bad_result_log are local files. Is it possible or not? Thanks! NOTE: I can't control the remote host; I can only run the initial commands and "cmd $line" on it.
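
    One way to script this kind of interactive telnet dialogue is Python's telnetlib (in the standard library up through Python 3.12; the host, port and command names below are placeholders taken from the pseudo-code above):

        from telnetlib import Telnet

        HOST = "remotehost"  # placeholder host

        good = open("good_result_log", "a")
        bad = open("bad_result_log", "a")

        with Telnet(HOST, 23, timeout=10) as tn:
            tn.write(b"initialization_cmd\n")  # whatever initialization is needed
            with open("local_file") as args:
                for line in args:
                    arg = line.strip()
                    if not arg:
                        continue
                    tn.write(("cmd %s\n" % arg).encode())
                    # Block until the device prints either OK or ERROR, then log the argument.
                    index, _match, _text = tn.expect([b"OK", b"ERROR"], timeout=30)
                    (good if index == 0 else bad).write(arg + "\n")

        good.close()
        bad.close()

    expect() returns the index of whichever pattern appeared first (or -1 on timeout), which stands in for the "judge the results" branch in the pseudo-code.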

  • Need help creating a batch file to replace "." with "-"

    - by Brandon Ogle
    Like the title says, I need help creating a batch file to replace "." with "-"; however, I need to preserve the file extensions, and it needs to work through the subfolders. I found another post on here that got the effect I wanted, but it also swapped the "." in my file extensions. I am completely ignorant regarding shell scripting, so please be detailed in your response. I have no idea what any of the switches mean, or even how to specify the path for that matter. Thanks!
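
    As in the earlier renaming question, if Python is available on the machine, a short sketch avoids the delayed-expansion quirks of batch string substitution (the root folder is a placeholder):

        import os

        BASE_DIR = r"C:\path\to\files"  # placeholder root folder

        for root, _dirs, files in os.walk(BASE_DIR):
            for name in files:
                stem, ext = os.path.splitext(name)  # split the extension off first
                if "." in stem:
                    new_name = stem.replace(".", "-") + ext
                    os.rename(os.path.join(root, name), os.path.join(root, new_name))

    Splitting with os.path.splitext before doing the replacement is what keeps the extension's own dot intact.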

  • Documenting software updates semi-automatically in Mac OS X by parsing log files?

    - by Martin
    I'd like to document changes I made to my computer (running Mac OS X 10.6.8) to be able to identify the sources of eventual problems. Mostly I install updates when a piece of software notifies me about a newer version and offers me a dialog to download and install the update. Currently I'm documenting those updates "by hand" by noting in a text file when I have, e.g., installed a Flash Player update or updated another 3rd-party application. I wonder if I could achieve this more easily and semi-automatically by parsing system log files for certain strings like "install" and that way directly get the relevant information:

        what has been installed (software and version)
        when it was installed
        where it was installed / what has changed

    Is there a way to extract such information from the existing log files with a script?
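
    On that vintage of Mac OS X, the Installer and Software Update both write to /var/log/install.log, so a first pass could be as small as the following Python sketch (the "Installed ..." pattern is a guess at the common log wording, not a complete parser):

        import re

        LOG = "/var/log/install.log"

        # Match lines such as:  ... Installer[123]: Installed "Flash Player" (11.2)
        pattern = re.compile(r'Installed "(?P<name>[^"]+)"(?: \((?P<version>[^)]*)\))?')

        with open(LOG, errors="replace") as fh:
            for line in fh:
                m = pattern.search(line)
                if m:
                    timestamp = line[:15]  # syslog-style "Mon DD HH:MM:SS" prefix
                    print(timestamp, m.group("name"), m.group("version") or "")

    Third-party updaters do not necessarily log there, so this only covers what went through Installer or Software Update.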

  • Set one homepage for all browsers simultaneously

    - by MorganTiley
    For testing purposes, I would like to set the homepage of all browsers installed on my computer — Internet Explorer, Chrome, Firefox and Safari — to the same page. I'd like to do this all at the same time. I imagine that there's an application that has an input box for the URL I want to use as homepage and an OK button to set it in all browsers. Where can I get a utility or script that can do this?

  • Launch script after SFTP disconnect

    - by Mates
    I'm currently using Caja (basically the same as Nautilus) to connect to my server over SSH and work with files. What I'm looking for is a way to launch a simple script when I disconnect. I can launch a script after disconnecting from the TTY by putting it into the ~/.bash_logout file, but that is not executed when disconnecting from a file manager. The only idea I have is to set up a cron job that would periodically check for running sftp-server or sshd processes and launch the script when no such process is running. Is there any easier way to do this?
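
    A minimal sketch of that cron-job idea in Python (run it every minute from cron; the marker file and the script path are placeholders):

        import getpass
        import os
        import subprocess

        FLAG = "/tmp/sftp_was_active"                    # placeholder marker file
        ON_DISCONNECT = "/home/user/bin/after_sftp.sh"   # placeholder script to launch

        def sftp_session_running():
            # pgrep exits with 0 when at least one matching process exists.
            return subprocess.run(
                ["pgrep", "-u", getpass.getuser(), "sftp-server"],
                stdout=subprocess.DEVNULL,
            ).returncode == 0

        if sftp_session_running():
            open(FLAG, "w").close()              # remember that a session was seen
        elif os.path.exists(FLAG):
            os.remove(FLAG)
            subprocess.run([ON_DISCONNECT])      # last session is gone: fire the script

    The marker file is what turns "no session right now" into "a session just ended", so the script fires once per disconnect rather than every minute.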

  • How can I modify the BAT file in this post so it will randomly select 1 background contained in a folder containing multiple backgrounds?

    - by Radical924
    Is there a way to modify the .bat file from this post ("How do I set the desktop background on Windows from a script?") so that it will randomly select one background image from a folder containing multiple images? I would also like the background to change, at random, to one of the images in that same folder. If this is possible, how would I modify the .bat file below?

        @echo off
        reg add "hkcu\control panel\desktop" /v wallpaper /t REG_SZ /d "" /f
        reg add "hkcu\control panel\desktop" /v wallpaper /t REG_SZ /d "C:\[LOCATION OF WALLPAPER HERE]" /f
        reg delete "hkcu\Software\Microsoft\Internet Explorer\Desktop\General" /v WallpaperStyle /f
        reg add "hkcu\control panel\desktop" /v WallpaperStyle /t REG_SZ /d 2 /f
        RUNDLL32.EXE user32.dll,UpdatePerUserSystemParameters
        exit

    I also noticed that this .bat file usually won't work (9 times out of 10); I receive an "ERROR: The system was unable to find the specified registry key or value." I have Windows 7 64-bit Home Premium, Service Pack 1.
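
    An alternative to patching the .bat: a short Python sketch that picks a random image and sets it through the Win32 SystemParametersInfo call instead of the registry-plus-RUNDLL32 route (the wallpaper folder is a placeholder):

        import ctypes
        import os
        import random

        WALLPAPER_DIR = r"C:\Users\Public\Wallpapers"  # placeholder folder of images

        SPI_SETDESKWALLPAPER = 20
        SPIF_UPDATEINIFILE = 0x01
        SPIF_SENDCHANGE = 0x02

        images = [f for f in os.listdir(WALLPAPER_DIR)
                  if f.lower().endswith((".jpg", ".bmp", ".png"))]
        choice = os.path.join(WALLPAPER_DIR, random.choice(images))

        # Set the wallpaper, persist it in the user profile, and broadcast the change.
        ctypes.windll.user32.SystemParametersInfoW(
            SPI_SETDESKWALLPAPER, 0, choice, SPIF_UPDATEINIFILE | SPIF_SENDCHANGE)

    Going through the API also avoids deleting anything from the Internet Explorer\Desktop\General key, which is where the intermittent "registry key or value" error comes from in the .bat version.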

  • Best 'free' option for alert notifications other than email/SMS

    - by Eureka Ikara
    I'm looking for a Linux script solution that can send alerts to a service such as Twitter, Skype or Google Talk and deliver them to Android and iPhone clients. I have found twurl for Twitter (the previous Bash scripts that used curl are no longer supported), and twurl looks promising, but I haven't seen how to get the Android Twitter client to make a distinctive sound when a tweet arrives. I found some information about Skype4Py from several years ago that supports Skype chats, but it doesn't look like it is currently maintained. I have tried a few CLI clients for XMPP/Google Talk, including xmpp4r-simple and freetalk, but found xmpp4r-simple buggy, and freetalk succeeded in sending one chat message while most never arrived. Whatever is used needs to support Android and iPhone clients. The reason email is problematic is that Gmail gets very upset when emails start flooding in every minute as a result of alerts. Any suggestions?

  • Run script when Varnish starts

    - by kipusoep
    I'd like to run a script when Varnish starts. This script should make web requests to a web server (Varnish's backend), which then makes sure Varnish's cache gets filled with all pages residing on that web server. So this script makes sure everything is in Varnish's cache when Varnish (re)starts, because we're using Varnish as a cache and for fail-over (the web server should be able to be down for, let's say, a week without any consequences). What are the possibilities to do this? We can't just edit /etc/init.d/varnish and /usr/sbin/varnishd, because they can get overwritten when updating Varnish. Thanks!
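
    The warm-up part can be a small standalone script that simply requests every page through Varnish, called from your own init script or rc.local rather than from the files the varnish package owns. A Python sketch, assuming the URLs to pre-load are kept in a plain text file (the file path is a placeholder):

        import urllib.request

        URL_LIST = "/etc/varnish/warmup-urls.txt"  # placeholder: one URL per line

        with open(URL_LIST) as fh:
            for line in fh:
                url = line.strip()
                if not url or url.startswith("#"):
                    continue
                try:
                    # Fetching through Varnish pulls the page from the backend into the cache.
                    urllib.request.urlopen(url, timeout=30).read()
                except Exception as exc:
                    print("warm-up failed for %s: %s" % (url, exc))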

  • Adding custom script on ESXi 5.0

    - by Quzar
    I have an ESXi server on which I would like to run a custom script on every boot, containing esxcli and other commands. I have tried adding the script to init.d and creating an rc.local.d folder with a script in it, but the /etc folder gets rebuilt on startup. I've also tried modifying state.tgz and local.tgz in the /bootbank folder in order to force these files to appear, but that does not seem to work either. Is there any way I can run custom commands on boot? Note: I've tried the advice in "ESXi boot process / state storage" to no avail; it seems the system was changed between 4.1 and 5.0.

  • Client-Side script to upload attachments to the Sharepoint 2007 list

    - by Clone of Anton Makrushin
    Hello. I have no good script-writing experience. I have a list created on MOSS 2007 with about 1000 items and attachments enabled. I need to attach a file (*.jpg) from a local folder to each list item. I don't have administrator privileges on the MOSS server, only contributor rights. Here is my script:

        $web = new-Object system.Net.WebClient
        $web.Credentials = [System.Net.CredentialCache]::DefaultCredentials
        $web.Headers.Add("user-agent", "PowerShell Script")
        $web.UploadFile('http://ruglbsrvsps/IT/Lists/Test1/', 'C:\temp\Attachments\14\Img1.jpg' )

    Test1 is the target list; Item1, Item2 and Item3 are list items, without attachments, created manually. When I run the script, it returns a byte array and does not upload the file to the list item. Can you fix my script or advise a better solution for my task (attaching a batch of files to MOSS list items, with only contributor rights on the target SharePoint 2007 list)? Thank you.

  • jQuery script removal

    - by VictorS
    I am working on a page (ASP.NET 3.5) that shows an alert when the "Save" button is pressed, i.e. this in the page code-behind:

        Page.ClientScript.RegisterStartupScript(this.GetType(), "alertMsg", "alert('" + Message + "');", true);

    So when I look at the page after a successful save, I see a script tag added:

        <script type="text/javascript">
        //<![CDATA[
        alert('Save Sucessful.');
        </script>

    The problem is that there is another button that redirects to another page, and on that page there is a button to jump back to this page, i.e. javascript:history.go(-1); So if you save, then go to the other page and come back, you see the alert again. Unless there is a better way of handling this situation, I think I need to remove that script when I redirect from the page. Can I do it with jQuery, i.e. on the redirect button's click, remove the above script from the page?

  • How to prevent code/option injection in a bash script

    - by asmaier
    I have written a small bash script called "isinFile.sh" for checking if the first term given to the script can be found in the file "file.txt":

        #!/bin/bash
        FILE="file.txt"
        if [ `grep -w "$1" $FILE` ]; then
            echo "true"
        else
            echo "false"
        fi

    However, running the script like

        > ./isinFile.sh -x

    breaks the script, since -x is interpreted by grep as an option. So I improved my script:

        #!/bin/bash
        FILE="file.txt"
        if [ `grep -w -- "$1" $FILE` ]; then
            echo "true"
        else
            echo "false"
        fi

    using -- as an argument to grep. Now running

        > ./isinFile.sh -x
        false

    works. But is using -- the correct and only way to prevent code/option injection in bash scripts? I have not seen it in the wild, only found it mentioned in ABASH: Finding Bugs in Bash Scripts.

  • BIRT 2.5.2 report generates empty table data when run from a cron job

    - by Trueblood
    I've got a shell script that runs genReport.sh in order to create a .pdf-formatted report; it works perfectly when it's run from the command line. The data source for the report is a ClearQuest database. When it's run from a cron job, the .pdf file is created, except that only the various report and column headers are displayed and the data of the report is missing. No errors are reported to STDERR during the execution of the script. This screams "environment variable" to me. Currently, the shell script defines the following: CQ_HOME, BIRT_HOME, ODBCINI, ODBCINST, LD_LIBRARY_PATH. If it's an environment issue, what part of the environment am I missing?

  • Running Perl Scripts on servers that don't have the modules

    - by envinyater
    I need to run a Perl script that gathers system information and that will be deployed to and executed on different Unix servers. Right now I am writing and testing it, and I'm receiving this error:

        Can't locate XML/DOM.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at test.pl line 7.
        BEGIN failed--compilation aborted at test.pl line 7.

    So I am simply using XML::DOM, which should be part of Perl, but it isn't for this version (5.10.1) on this particular server. Anyway, is there a way I can design my script and package the modules into it while keeping the .pl extension, which is a requirement for this script?

  • SWFUpload: is it possible to upload multiple files in a single PHP script execution?

    - by user176333
    Hello, I'm trying to implement SWFUpload in an existing PHP upload workflow. My current backend script, however, expects 2 files to be uploaded in a single PHP script execution (i.e. it expects the $_FILES array to contain 2 entries). So I'm queueing 2 files with SWFUpload and starting the upload. However, it appears SWFUpload calls the PHP backend script once for each queued file. I'd rather modify SWFUpload to send the files in a single backend script execution instead of having to adjust the backend script. Is anyone familiar with this? I've searched various resources (like the SWFUpload docs and forum) but have not found similar topics. Thanks in advance.

  • PHP set timeout for script, set_time_limit not working

    - by tehalive
    I have a command-line PHP script that runs a wget request using each member of an array with foreach. This wget request can sometimes take a long time so I want to be able to set a timeout for killing the script if it goes past 15 seconds for example. I have PHP safemode disabled and tried set_time_limit(15) early in the script, however it continues indefinitely. I've given up troubleshooting set_time_limit() and was trying to find other ways to kill the script after 15 seconds of execution. However, I'm not sure if it's possible to check the time a script has been running while it's in the middle of a wget request at the same time (a do while loop did not work). Thanks for any tips!

  • Powershell 2.0 error handling - Command line call vs. ISE

    - by Gromix
    Hi, in the context of deployment scripts, I would like to capture any error that happens and stop immediately. I have noticed some significant differences between the following calls:

        powershell.exe -File Script.ps1
        powershell.exe -Command "& '.\Script.ps1'"
        powershell.exe .\Script.ps1

    For example, the -File call handles errors in exactly the same way as the ISE. The other two seem to ignore the $ErrorActionPreference variable and do not seem to catch Write-Error in try/catch blocks. Could someone help me understand the implications of each one, and why they behave differently? Thanks, Romain

  • Communicate between content script and options page

    - by Gaurang Tandon
    I have seen many similar questions already, but all are about background page to content script messaging. Summary: my extension has an options page and a content script. The content script handles the storage functionality (chrome.storage manipulation). Whenever a user changes a setting on the options page, I want to send a message to the content script to store the new data. My code, first the options page (options.js):

        var data = "abcd"; // let data
        chrome.tabs.query({ active: true, currentWindow: true }, function (tabs) {
            chrome.tabs.sendMessage(tabs[0].id, "storeData:" + data, function(response){
                console.log(response); // gives undefined :(
            });
        });

    and the content script:

        chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
            // not working
        });

    My question: why isn't this approach working? Is there any other (better) approach for this procedure?

  • Problem with Ruby script output being stored into a file

    - by nickf
    I have a Ruby script that outputs a heap of text. As an example:

        puts "line 1"
        puts "line 2"
        puts "line 3"
        # etc... (obviously, this isn't how my script works..)

    There's not a lot of data - perhaps about 8kb of character data in total. When I run the script on the command line, it works as expected:

        $ ./my-script.rb
        line 1
        line 2
        line 3

    But, when I push it into a file, the output is truncated at exactly 4096 bytes:

        $ ./my-script.rb > output.txt

    What would cause it to stop at 4kb?

  • Auto SSH and execute script

    - by rohanbk
    I have roughly 12 computers that each have the same script on them. This script merely pings all the other machines and prints out whether each machine is "reachable" or "unreachable". However, it is inefficient to log in to each machine manually using ssh to execute this script. Suppose I'm logged into node 1. Is there any way for me to log in to nodes 2-12 automatically using SSH, execute the ping script, pipe the results to a file, log out, and proceed to the next machine? Some kind of bash shell script? I'm afraid I'm at a loss here, since I haven't had experience with shell scripting before.
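
    The same loop can be written in whatever language is handy; a Python sketch (the node names and the path to the existing ping script are placeholders, and key-based ssh login from node 1 to the others is assumed):

        import subprocess

        NODES = ["node%d" % i for i in range(2, 13)]  # placeholder hostnames node2..node12
        PING_SCRIPT = "~/bin/ping_all.sh"             # placeholder path to the existing script

        with open("ping_results.txt", "w") as out:
            for node in NODES:
                out.write("==== %s ====\n" % node)
                # BatchMode makes ssh fail instead of hanging on a password prompt.
                result = subprocess.run(
                    ["ssh", "-o", "BatchMode=yes", node, PING_SCRIPT],
                    capture_output=True, text=True)
                out.write(result.stdout)
                if result.returncode != 0:
                    out.write("ssh to %s failed: %s\n" % (node, result.stderr.strip()))

    Each ssh exits when the remote script finishes, so the loop naturally "logs out" and proceeds to the next machine.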

  • [R] multiple functions in one R script

    - by Philipp
    Hi, I guess it's a stupid question, but I don't get it :-( I wrote an R script which creates heatmaps out of xls files. I am calling this R script with a Perl system call and passing over all the arguments. This all works fine. Now I wanted to make the R script less confusing by writing different functions in it, for example:

        args <- commandArgs(TRUE)

        parsexls <- function(filepath) {
          data <- read.xls(...)
          assign("data", data, globalenv())
        }

        reorder <- function(var) {
          data <- data[order...]
          assign("data", data, globalenv())
        }

    When I want to call the functions with

        parsexls(args[1])
        reorder(args[2])

    nothing happens. But when I place the parsexls(args[1]) call in the script between the two functions shown above, the file is parsed correctly! The reorder(args[2]) call seems never to be read. Any ideas what I am doing wrong? Phil

    Read the article
