Search Results

Search found 47712 results on 1909 pages for 'looking for a script'.

Page 275/1909 | < Previous Page | 271 272 273 274 275 276 277 278 279 280 281 282  | Next Page >

  • rename multiple files with unique name

    - by psaima
    I have a tab-delimited list of hundreds of names in the following format:

        old_name    new_name
        apple       orange
        yellow      blue

    All of my files have unique names and end with the *.txt extension, and they are all in the same directory. I want to write a script that will rename the files by reading my list, so apple.txt should be renamed to orange.txt. I have searched around but I couldn't find a quick way to do this. I can change one file at a time with 'rename' or with perl ("perl -p -i -e 's///g' *.txt"), and a few files with sed, but I don't know how I can use my list as input and write a shell script to make the changes for all files in a directory. I don't want to write hundreds of rename commands in a shell script. Any suggestions will be most welcome!
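
    A minimal sketch of one way to do this, assuming the list lives in a file called rename_map.txt with the old and new basenames separated by a tab (the filename and column order are assumptions):

        #!/usr/bin/env bash
        # Read a tab-delimited map of old/new basenames and rename old.txt -> new.txt.
        # Assumes the map file is named rename_map.txt and sits in the same directory.
        while IFS=$'\t' read -r old new; do
            [ -z "$old" ] && continue                 # skip blank lines
            if [ -e "${old}.txt" ]; then
                mv -v -- "${old}.txt" "${new}.txt"
            else
                echo "warning: ${old}.txt not found" >&2
            fi
        done < rename_map.txt

    Running it once with mv replaced by echo is a cheap way to preview the renames before committing to them.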

    Read the article

  • Do all domains on the same shared hosting server have the same IP or ID

    - by silow
    Here's what I've got: siteA.com and siteB.com are hosted on HostGator. They're hosted on the same account of a shared hosting server (not VPS or dedicated). script.php is a script on an external site that each of these 2 sites accesses. I noticed that when siteA.com or siteB.com accesses the outside script.php, the script identifies them both as 1a.12.12ab.static.theplanet.com (apparently because HostGator uses theplanet.com servers). The fact that they're identified as the same value isn't surprising, because after all they're hosted on the same account /home/user123/public_html. What I'm wondering about is other websites that are hosted on the same shared hosting server, but under other accounts: basically, websites that are under another developer's control but just happen to share the same hardware (hosting server). Do they also have the exact same identifier 1a.12.12ab.static.theplanet.com, or does that change by account?
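
    A quick empirical check (a sketch; it assumes each account has shell or cron access and outbound HTTP, and api.ipify.org is just one example of a public "what is my IP" echo service) is to print the outbound address that an external script would see from each account and compare them:

        #!/usr/bin/env bash
        # Print the public IP this account presents to outside servers,
        # then reverse-resolve it to the hostname the external script logs.
        ip=$(curl -s https://api.ipify.org)
        echo "outbound IP: $ip"
        echo "reverse DNS: $(dig +short -x "$ip")"

    If two accounts on the same box print the same address, an external script will see them as the same identifier.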

    Read the article

  • How to get the Host value inside ~/.ssh/config

    - by iconoclast
    Within a ~/.ssh/config or ssh_config file, %h will give you the HostName value, but how do you get the Host ("alias") value? Why would I want to do that? Well, here's an example:

        Host some_host_alias
            HostName 1.2.3.4
            User my_user_name
            PasswordAuthentication no
            IdentityFile ~/.ssh/some_host_alias.rsa.id
            LocalCommand some_script.sh %h    # <---- this is the critical line

    If I pass %h to the script, then it uses 1.2.3.4, which fails to give it all the options it needs to connect to that machine. I need to pass some_host_alias, but I can't find the % variable for that. (And: yes! I'm aware of the risk of recursion. That's solved inside the script.) UPDATE: Kenster pointed out that I could just hard-code the Host value as an argument to the script. Of course this will work in the example I gave, but it won't work if I'm using pattern matching for the Host.
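
    A sketch of one possibility, assuming a reasonably recent OpenSSH: the TOKENS section of ssh_config(5) lists %n as the original remote hostname as given on the command line, which is the alias when you run ssh some_host_alias, so it can be passed to LocalCommand instead of %h:

        Host some_host_alias
            HostName 1.2.3.4
            User my_user_name
            IdentityFile ~/.ssh/some_host_alias.rsa.id
            PermitLocalCommand yes
            # %n expands to the name typed on the command line (the alias);
            # %h would expand to the resolved HostName (1.2.3.4).
            LocalCommand some_script.sh %n

    Whether %n is accepted in LocalCommand depends on the OpenSSH version, so it is worth checking the local man page before relying on it.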

    Read the article

  • Is it possible to keep only one Database for both web and desktop applications?

    - by B4NZ41
    I'm running into trouble with my business model, so let me explain. I have been developing a piece of software for a year and a few months. It's for the food industry, more exactly software for: delivery, take-away, table reservations, POS, accounts payable and receivable, printing (receipts), kitchen order monitors, customer order control and the fiscal area. I have separated the software into two main areas: a web area, and a desktop area (used by admins only) that is installed locally.

    1 - Web Area (basically does the following):
    - Show the catalog with the products
    - Customers make orders
    - Customers pay for the orders
    - etc., as mentioned above

    2 - Desktop Area:
    - Manage orders
    - Manage customers
    - Manage suppliers
    - Manage accounts payable and receivable
    - etc., as mentioned above

    The web area is hosted on an online web server (scripts and database are online). The desktop area is hosted locally on a Linux machine with a local database and local script files. My question is: is it possible to keep only one database for both applications? If YES, please, what is the best approach? My technical environment follows.
    Database: I actually have two databases working and I would love to keep only one.
    Operating system: Linux (kernel 2.6.x and above) or Windows (XP and above).
    For the web area: Apache, PHP, Python, JavaScript, shell script and MySQL.
    For the desktop area: PHP-GTK2, Apache, PHP, MySQL and shell script.
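
    One common approach (a sketch only; the schema name, user, password and IP are placeholders) is to keep just the online MySQL database and let the desktop machine connect to it remotely, so the local install stops carrying its own copy. On the online server that roughly means allowing remote connections in my.cnf and granting the shop's fixed IP a restricted account:

        # Run on the online server; 203.0.113.10 stands in for the shop's static IP.
        # my.cnf must also allow remote connections (bind-address not limited to 127.0.0.1).
        mysql -u root -p <<'SQL'
        CREATE USER 'desktop_app'@'203.0.113.10' IDENTIFIED BY 'strong_password';
        GRANT SELECT, INSERT, UPDATE, DELETE ON shop_db.* TO 'desktop_app'@'203.0.113.10';
        FLUSH PRIVILEGES;
        SQL

    The trade-off is that the desktop area then depends on the internet link being up; if the shop must keep working offline, replication or synchronization between the two databases is the usual alternative.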

    Read the article

  • How do I get debuild to put the binary in /usr/bin?

    - by SammySP
    I have recently been trying to package a small Python utility to put on my PPA and I've almost got it to work, but I'm having problems making the package install the binary (a chmod +x Python script) under /usr/bin. Instead it installs under /. I have this directory structure - http://db.tt/0KhIYQL. My package Makefile is like so:

        TARGET=usr/bin/txtrevise

        make:
            chmod +x $(TARGET)

        install:
            cp -r $(TARGET) $(DESTDIR)

    I've used $(DESTDIR), as I understand it, to place the file under the debian subdir when debuild is run. I have the txtrevise script, my executable, under the usr/bin folder at the root of my package. I also have the Makefile and usr/bin/txtrevise in my tarball: txtrevise_1.1.original.tar.gz. However, when I build this and look inside the Debian package, txtrevise is always at the root of the package instead of under usr/bin, and will be installed to / instead of /usr/bin. How can I get debuild to put the script in the right place? Thanks. Any help would be greatly appreciated. I'm stumped.
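
    A sketch of the usual fix: cp -r usr/bin/txtrevise $(DESTDIR) copies the file into the top of the staging directory, so the install target has to create usr/bin under $(DESTDIR) and copy into it (paths follow the layout described above):

        TARGET=usr/bin/txtrevise

        install:
            install -d $(DESTDIR)/usr/bin
            install -m 0755 $(TARGET) $(DESTDIR)/usr/bin/

    install -d creates the destination directory and install -m 0755 copies the script with the executable bit already set, which also makes the separate chmod step unnecessary.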

    Read the article

  • Throttling bandwidth on a per group basis

    - by Robreylen
    I am wondering if it is possible to create a bandwidth shaping/throttling script that shapes traffic based on user group. That is, if user1 and user2 are in group1, they will have 1mb/s download and 1mb/s upload, whilst if user3 and user4 are in group2, they will have 256kb/s download and 256kb/s upload. I've read a bit about this and I found some iptables and tc implementations of a per-user solution, but I have not seen anything for a user group. Hopefully it can be implemented simply, in the form of custom iptables rules and a script using tc or the like. Here is a script I was looking into that does a system-wide throttle: http://atmail.com/kb/2009/throttling-bandwidth/ I assume user-group throttling is possible, since throttling on a per-user basis is possible. Thanks for any info you can provide for this question.
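
    A sketch of one way to shape uploads per group on the server itself, assuming traffic leaves via eth0 (the interface, rates and group names are placeholders): mark packets by the owning group with iptables, then map each mark onto an HTB class with tc:

        #!/bin/bash
        # Mark outgoing packets by the Unix group of the sending process,
        # then shape each mark with HTB.
        IFACE=eth0

        # Root qdisc plus one class per group: 1mbit for group1, 256kbit for group2.
        tc qdisc add dev $IFACE root handle 1: htb default 30
        tc class add dev $IFACE parent 1: classid 1:1 htb rate 100mbit
        tc class add dev $IFACE parent 1:1 classid 1:10 htb rate 1mbit ceil 1mbit
        tc class add dev $IFACE parent 1:1 classid 1:20 htb rate 256kbit ceil 256kbit

        # Send firewall-marked packets into the matching class.
        tc filter add dev $IFACE parent 1: protocol ip handle 10 fw flowid 1:10
        tc filter add dev $IFACE parent 1: protocol ip handle 20 fw flowid 1:20

        # Mark locally generated packets by group ownership.
        iptables -t mangle -A OUTPUT -m owner --gid-owner group1 -j MARK --set-mark 10
        iptables -t mangle -A OUTPUT -m owner --gid-owner group2 -j MARK --set-mark 20

    This only covers upload (egress) shaping; limiting downloads per group is harder, since incoming traffic has no local process owner yet, and usually involves an IFB device or policing.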

    Read the article

  • init.d service died

    - by jerluc
    Adapting some code from a Linux forum, I've added a service script to /etc/init.d on my Ubuntu Natty server to start/stop/restart node.js. It literally was working the first day I made it, but then today, after viewing my website this morning, the server threw a 404, and upon further inspection the node.js process was gone. So I went to start the service again, only this time node.js didn't start at all, and ever since I haven't been able to get my service script working. Below is the entire script:

        #!/bin/sh
        #
        # Node Server Startup
        #
        case "$1" in
          start)
                echo -n "Starting node: "
                daemon node /usr/local/www/server.js
                echo
                touch /var/lock/subsys/node
                ;;
          stop)
                echo -n "Shutting down node: "
                killall node
                echo
                rm -f /var/lock/subsys/node
                rm -f /var/run/node.pid
                ;;
          status)
                status node
                ;;
          restart)
                $0 stop
                $0 start
                ;;
          reload)
                echo -n "Reloading node: "
                killall node -HUP
                echo
                ;;
          *)
                echo "Usage: $0 {start|stop|restart|reload|status}"
                exit 1
        esac
        exit 0

    Thanks for any help!
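
    One thing worth noting: daemon and status are helper functions from Red Hat's /etc/init.d/functions and are not normally available on Ubuntu. A sketch of the same start/stop logic using Ubuntu's start-stop-daemon instead (the node binary path is an assumption; the server path is taken from the question):

        #!/bin/sh
        # Sketch: Ubuntu-style init script for node using start-stop-daemon.
        NODE=/usr/local/bin/node
        SERVER=/usr/local/www/server.js
        PIDFILE=/var/run/node.pid

        case "$1" in
          start)
                echo "Starting node"
                start-stop-daemon --start --background --make-pidfile \
                    --pidfile $PIDFILE --exec $NODE -- $SERVER
                ;;
          stop)
                echo "Stopping node"
                start-stop-daemon --stop --pidfile $PIDFILE --retry 5
                rm -f $PIDFILE
                ;;
          restart)
                $0 stop
                $0 start
                ;;
          *)
                echo "Usage: $0 {start|stop|restart}"
                exit 1
                ;;
        esac
        exit 0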

    Read the article

  • Automate setup of constrained kerberos delegation in AD

    - by Grhm
    I have a web app that uses some backend servers (UNC, HTTP and SQL). To get this working I need to configure ServicePrincipalNames for the account running the IIS AppPool and then allow Kerberos delegation to the backend services. I know how to configure this through the "Delegation" tab of the AD Users and Computers tool. However, the application is going to be deployed to a number of Active Directory environments. Configuring delegation manually has proved to be error-prone, and debugging the issues that misconfiguration causes is time consuming. I'd like to create an installation script or program that can do this for me. Does anyone know how to script or programmatically set constrained delegation within AD? Failing that, how can I script reading the allowed services for a user, to validate that it has been set up correctly?

    Read the article

  • how to check if something is in the queue of torque?

    - by kloop
    I want to re-run some jobs that completed prematurely under Torque. These jobs are run through .job scripts (using qsub). However, I don't want to re-run a job which is already in the queue. Given a script filename, how can I know whether it is already in Torque's queue (using qstat?) or not? I prefer to do it programmatically, of course, so any one-liner that searches for a given script name would be great. I will note that I can grep submit_args in qstat -f, but I can't get it to display the whole script name when it is too long. This is crucial. EDIT: I managed to solve it using the following command:

        qstat -x | perl -pi -e 's/\<\//\n/g' | grep job$ | grep -v submit_args | perl -pi -e 's/Job_Id\>\<Job_Name\>//'

    This works because all my scripts end in the string "job".
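
    A sketch of an alternative that avoids the truncation problem: qstat -f prints full attribute values but wraps long ones onto continuation lines that start with a tab, so joining those lines first makes the full submit_args greppable (the script name below is a placeholder):

        #!/bin/bash
        # Exit 0 if a job submitted from the given script is already queued or running.
        script="my_analysis.job"   # placeholder

        if qstat -f 2>/dev/null | sed -e ':a;N;$!ba;s/\n\t//g' \
                | grep -q "submit_args = .*${script}"; then
            echo "$script is already in the queue"
        else
            echo "$script not queued; safe to resubmit"
            # qsub "$script"
        fi

    The sed expression slurps the whole qstat -f output and removes every newline-plus-tab pair, which is exactly the wrapping qstat introduces.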

    Read the article

  • who deleted my files?

    - by akalter
    I have some Linux servers. On two of them we have MySQL, and we have a daily backup on both machines, but the scripts are different. I looked at both scripts. In one of them I can see the "delete older files" step, but on the other machine old files are being deleted even though nothing in the script does it. I am trying to discover what is deleting my files. I want to use the same script on both machines, because the script that does the deletion also copies the files to another server, and I want that to happen on both servers. Does anyone have an idea what deleted my older backups? Thank you!
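
    One way to catch the culprit (a sketch; it assumes auditd is installed and that the backups live under /var/backups, both of which are assumptions) is to put an audit watch on the backup directory and later see which user, process and command triggered the deletions:

        # Watch the backup directory for writes and attribute changes, tagged with a key.
        auditctl -w /var/backups -p wa -k backup-delete

        # Later, list the matching events with uid, pid and executable decoded.
        ausearch -k backup-delete -i

    Checking every crontab as well (crontab -l for each user, plus /etc/cron.d, /etc/cron.daily and friends) is worth doing first, since a forgotten cron entry on the second machine is the most likely explanation.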

    Read the article

  • YouSendIt Alternative?

    - by user4855
    Looking for a reasonably priced alternative to YouSendIt's exorbitant pricing for an embedded, unbranded (i.e. no "Uploads by SomeCompany", or at the very least discreet, subtle co-branding) file upload solution for my client's print shop website. To do what we want to do with YouSendIt, we're looking at a corporate account of $995 USD plus a $29.99 USD monthly fee, which is only sold pro-rated, so you have to buy the entire year's worth. To me this is just unacceptable, considering the commodity pricing of storage and bandwidth nowadays. For data, we're looking at roughly 10MB per upload, with perhaps 250-1000 uploads per month, with transient data storage of no more than 30 days (and more than likely 1-2 business days), for a total of 10 GB transfer (upload) and 10 GB transfer (download, to the print shop) at the very max each month. Any ideas? Everything I've found through searching seems to be geared more towards personal file sharing and not for embedding into websites. Thanks

    Read the article

  • Run a service after networking is ready on Ubuntu?

    - by TK Kocheran
    I'm trying to start a service that depends on networking being started, whenever the computer is rebooted. I have a few questions: Is this easily possible from an /etc/init.d script? I have tried creating a script here (conforming to the standards), but I'm really doubtful that it's even running on boot, let alone working. When I test it manually, it works. I've seen the new Upstart service, but as far as how that actually works, I'm completely in the dark. How can I make a script that runs on boot, after networking has been started? If I could run it after connecting to a wireless network, even better :)
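
    A sketch of what an Upstart job for this can look like on Ubuntu (the job name and script path are placeholders; whether the non-loopback interface event fires for a given wireless card is worth verifying):

        # /etc/init/my-network-script.conf
        description "run my script once networking is up"

        # Fires when any interface other than loopback comes up.
        start on net-device-up IFACE!=lo
        stop on runlevel [016]

        exec /usr/local/bin/my-startup-script.sh

    Another commonly used trigger is "start on (local-filesystems and net-device-up IFACE!=lo)", which additionally waits for the filesystems to be mounted.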

    Read the article

  • File type actions for ruby scripts

    - by Kovags
    Hello, I just installed the Ruby interpreter and created the file test.rb. In the Folder Options, I created the rb file type and an action called Run, and assigned the application C:\Ruby192\bin\ruby.exe "%1". So it's possible to get into the Windows XP command line and run the script simply by doing this:

        C:\>test.rb

    But when I need to send parameters to the script, I can't simply do the following:

        C:\>test.rb parameter1 parameter2

    I'll have to do the following instead:

        C:\Ruby192\bin\ruby.exe c:\test.rb parameter1 parameter2

    I just noticed that I'm able to edit the action the following way to pass more parameters: C:\Ruby192\bin\ruby.exe "%1" "%2" "%3". That allows me to give 2 parameters to the script, but in some cases I need to pass a handful of parameters, and it doesn't seem right to append "%5" "%6" "%7" ad nauseam. What's the canonical way to do it?
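
    A sketch of the usual answer, assuming the association can be (re)defined with assoc and ftype from a command prompt ("rbfile" is just a made-up type name): the %* placeholder in the command template forwards all remaining arguments, however many there are:

        assoc .rb=rbfile
        ftype rbfile="C:\Ruby192\bin\ruby.exe" "%1" %*

        rem Optional: let "test param1 param2" work without typing the .rb extension.
        set PATHEXT=%PATHEXT%;.RB

    The same "%1" %* template can also be entered in the Folder Options action dialog instead of listing "%2" "%3" explicitly.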

    Read the article

  • Windows, Apache and MSSQL Authentication

    - by user1114330
    I have a create-database script written in Perl. I remember it working just fine on another machine. A couple of years later, using a Vista machine, I am trying to use it again and it keeps failing. The main difference is that now I am using Apache instead of IIS. In the script the IUSR account is granted permissions, as it needs to write to the database as part of another program. IIS has been uninstalled on this machine but the IUSR account still exists. NT AUTHORITY\IUSR can also be seen in the logins drop-down in MSSQL (2012). The machine is running Vista Home Edition. However, when running the script I get errors saying that NT AUTHORITY\IUSR cannot be found. I also tried COMPUTERNAME\IUSR just for the heck of it, and of course it was not found. I tried IUSR alone as well, and for some reason the user still isn't being "found". Any ideas?

    Read the article

  • Kill child process when the parent exits

    - by kolypto
    I'm preparing a script for Docker, which allows only one top-level process, and that process should receive the signals so we can stop it. Therefore, I have a script like this: one application writes to syslog (a bash script in this sample), and the other one just prints it.

        #! /usr/bin/env bash
        set -eu
        tail -f /var/log/syslog &
        exec bash -c 'while true ; do logger aaaaaaaaaaaaaaaaaaa ; sleep 1 ; done'

    Almost solved: when the top-level bash process gets SIGTERM, it exits, but tail -f continues to run. How do I instruct tail -f to exit when the parent process exits? E.g. it should also get the signal. Note: I can't use bash traps, since exec on the last line replaces the process completely.
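
    One approach that fits the exec constraint (a sketch, assuming GNU coreutils tail): exec keeps the same PID, so tail can simply be told to exit when that PID disappears via --pid:

        #! /usr/bin/env bash
        set -eu
        # $$ is this shell's PID; exec below replaces the image but keeps the PID,
        # so when the main process dies, tail notices and exits on its own.
        tail --pid=$$ -f /var/log/syslog &
        exec bash -c 'while true ; do logger aaaaaaaaaaaaaaaaaaa ; sleep 1 ; done'

    tail polls the given PID while following the file, so it exits shortly after the main process stops rather than instantly, which is usually acceptable for log tailing.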

    Read the article

  • Ideas to automate customer order processing? [on hold]

    - by user2753657
    I am looking for a way to automate the order processing in my webshop. Normally, a user buys a product in my webshop; then I receive an order confirmation email with the order details, address, etc. After receiving the order email, I log in to my supplier's website and input the order details manually. My supplier then ships the item to the address I specified. I am looking for ideas on how to automate this process, especially in the case where I receive, for example, 4-5 order emails at one time (and not one by one with several hours between them). I was looking at the program Winautomation, but I am not sure if it fits my needs. Any ideas are appreciated. Thanks!

    Read the article

  • Need ability to set configuration options using single method which will work across multiple server configurations.

    - by JMC Creative
    I'm trying to set post_max_size and upload_max_filesize in a specific directory for a web application I'm building. I've tried the following in a .htaccess file in the script directory (upload.php is the script that needs the special configuration):

        <Files upload.php>
            php_value upload_max_filesize 9998M
            php_value post_max_size 9999M
        </Files>

    That doesn't work at all. I've tried it without the scriptname specificity, where the only thing in the .htaccess file is:

        php_value upload_max_filesize 9998M
        php_value post_max_size 9999M

    This works on my PC-based XAMPP server, but throws a "500 Misconfiguration Error" on my production server. I've also tried creating a php.ini file in the directory with:

        post_max_size = 9999M
        upload_max_filesize = 9998M

    But this also doesn't always work. And lastly, using the following in the PHP script doesn't work either, supposedly because the settings have already been compiled by the time the parser reaches the line (?):

        <?php
        ini_set('post_max_size','9999M');
        ini_set('upload_max_filesize','9998M');
        ?>
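
    A likely explanation for the difference between the two servers (an assumption about how the production host runs PHP): php_value lines in .htaccess only work when PHP runs as an Apache module (mod_php); under CGI/FastCGI they produce exactly that 500 error, and the per-directory mechanism there is a .user.ini file instead (PHP 5.3+). A sketch:

        ; .user.ini placed in the same directory as upload.php (CGI/FastCGI setups only).
        ; Both directives are PHP_INI_PERDIR, which is also why ini_set() cannot change
        ; them from inside the script.
        post_max_size = 9999M
        upload_max_filesize = 9998M

    The user_ini.filename and user_ini.cache_ttl settings in the main php.ini control whether, and how quickly, such files are picked up.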

    Read the article

  • Copy any file with a specific file extension in subfolders into a folder

    - by Onyxius
    I found a script on here that uses 7-Zip to extract all the files in all the sub-folders of a specific folder and put them in their own folders, using the script below. What I need is to add to it (or maybe use another script if I have to) so I can specify where I want those files to go, instead of putting them in their own folder within the folder. I don't know how to do this and hope someone will be able to help. Thanks for the help.

        @echo on
        FOR /D /r %%F in ("*") DO (
            pushd %CD%
            cd %%F
            FOR %%X in (*.rar *.zip *.tar) DO (
                "C:\Program Files\7-zip\7z.exe" x -o"%%~nX" "%%X"
            )
            popd
        )
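
    A small tweak that does this (a sketch; C:\Extracted is a placeholder for wherever the files should land): point 7-Zip's -o switch at a fixed destination instead of a folder named after each archive:

        @echo on
        set DEST=C:\Extracted
        FOR /D /r %%F in ("*") DO (
            pushd %CD%
            cd %%F
            FOR %%X in (*.rar *.zip *.tar) DO (
                REM Extract everything into %DEST%; append \%%~nX after %DEST%
                REM to keep one sub-folder per archive instead.
                "C:\Program Files\7-zip\7z.exe" x -o"%DEST%" "%%X"
            )
            popd
        )

    With the x command the archives' internal folder structure is preserved under the destination; using e instead flattens everything into one folder.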

    Read the article

  • Run a specific command from a directory

    - by Cameron Kilgore
    I have a bash script where I need to run an init utility within a directory that has a configuration file defined. I don't think it's possible to explicitly tell the utility to run the file as an argument, so what I need to do is go to the directory with the config file and then run the command. I have some logic in place, but it's not working -- the utility never runs. Is there any way I can tell the script to go to this directory and then run the script?

        cd /var/www/testing-dev.example.co
        eval "standardprofile"
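
    A sketch of the usual pattern, assuming standardprofile is the utility and is on the PATH (eval is not needed just to run a command): chain the cd and the command, or put both in a subshell so the rest of the script keeps its original working directory:

        #!/usr/bin/env bash
        # Only run the utility if the cd succeeded.
        cd /var/www/testing-dev.example.co && standardprofile

        # Or, in a subshell, so the caller's working directory is untouched afterwards:
        ( cd /var/www/testing-dev.example.co && standardprofile )

    If the "utility" is actually a file in that directory rather than a command on the PATH, it needs to be invoked as ./standardprofile.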

    Read the article

  • Remote location's status "Disconnected", how to keep it connected (OK)? [net-use-command]

    - by AZ
    I have a script that keeps running, and under some scenarios it needs to contact a server (\\us-sign). From time to time, if this server goes uncontacted for a while, the next time my script needs it, it will ask for my credentials. What I found is that after this happens, net use displays the server as Disconnected. If I type "net use \\us-sign", it asks for neither user nor password, which makes me believe my credentials for the server are still valid; nonetheless, its status remains Disconnected. This script is supposed to help us automate some procedures, but having to keep watch over it in case it asks for credentials kind of defeats the purpose. How can I keep its status "OK" no matter how long it goes without being contacted?
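
    One workaround (a sketch; the share name and the interval are placeholders, and /savecred is not available on every Windows edition) is to map the share with stored credentials and touch it periodically from a scheduled task, so the session never sits idle long enough to be dropped:

        net use \\us-sign\share /savecred /persistent:yes

        rem Keep-alive: list the share every 10 minutes so the connection stays active.
        schtasks /create /sc minute /mo 10 /tn "KeepUsSignAlive" ^
            /tr "cmd /c dir \\us-sign\share >nul"

    The server side also has an idle-session timeout (the LanmanServer autodisconnect setting), which is another knob if the keep-alive alone is not enough.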

    Read the article

  • Solutions for scheduling cronjob

    - by Shamsul
    I have set up a list of cron jobs. Some of the cron scripts take a long time, 1-5 hours, and that is increasing day by day. I do not want to run two cron scripts at the same time; it is not because of a dependency, but because my server does not have enough memory to handle two big operations at once. So I need a solution whereby a scheduled script will not start until the previous script has finished. I have 10-15 cron jobs in the list, and 5 of them I do not want to overlap. Can anyone help me find a solution?
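
    A common approach (a sketch; the lock path and script names are placeholders) is to wrap the heavy jobs with flock so they all share one lock file and either wait for, or skip, each other:

        # In the crontab, serialize the heavy jobs on a single lock file; each job
        # waits here until the previous holder of the lock has finished.
        # 0 1 * * *  flock /var/lock/heavy-jobs.lock /usr/local/bin/nightly_report.sh
        # 0 3 * * *  flock /var/lock/heavy-jobs.lock /usr/local/bin/rebuild_index.sh

        # Or, inside each script, skip this run entirely if another heavy job is running:
        exec 9>/var/lock/heavy-jobs.lock
        if ! flock -n 9; then
            echo "another heavy job is still running, skipping this run" >&2
            exit 0
        fi
        # ... long-running work here ...

    Waiting (the first form) keeps every run, just later than scheduled; skipping (the second form) keeps the schedule but drops runs, so the choice depends on which matters more.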

    Read the article

  • Citrix has issues resolving network shares

    - by George
    We are having a weird issue with our Citrix server (version 4.5, sitting on Windows 2003 R2), where a couple of users have issues resolving a single shared network drive. We use a logon script to map all shared drives. The weird part is that of the 3 shared drives, users can access 2, but the 3rd one goes to the old server (even though the logon script points to the new server), and the issue is limited to a few users. I had them log off and log back in, with no success. It happens only in Citrix. The file server being accessed is Windows 2008 R2. Like I said, we use a logon script to map the network drive. I understand I might be a little confusing; I will gladly reword the post.
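
    One thing worth trying (a sketch; the drive letter and share paths are placeholders): have the logon script drop any existing mapping before re-mapping, since a persistent connection remembered from the old server can win over what the script sets:

        net use X: /delete /y
        net use X: \\newserver\share /persistent:no

    Using /persistent:no keeps Windows from resurrecting the old mapping at the next logon, which is a common cause of one stubborn drive pointing at a decommissioned server for only some profiles.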

    Read the article

  • Unable to debug javascript?

    - by linkme69
    I'm having some problems debugging an encoded JavaScript. The script I'm referring to is given in this link over here. The encoding here is simple: it works by shifting the Unicode values by whatever CodeKey was used during encoding. The code that does the decoding is given in plain English below:

        <script language="javascript">
        function dF(s){
          var s1=unescape(s.substr(0,s.length-1));
          var t='';
          for(i=0;i<s1.length;i++)
            t+=String.fromCharCode(s1.charCodeAt(i)-s.substr(s.length-1,1));
          document.write(unescape(t));
        }

    I'm interested in knowing or understanding the values (e.g. s1, t). For example, when i=0, what values would s1.charCodeAt(i) and s.substr(s.length-1,1) hold? The reason I'm doing this is to understand how the CodeKey really works. I don't see anything in the code above which tells it to decode on the basis of the CodeKey value. The only thing I can point to in the encoded text is the last character, which is set to 1, 2, 3 or 4 depending upon the CodeKey selected during the encoding process. One can verify this using the link I have given above. To debug, I'm using the Firebug addon with the script running on localhost on my WAMP server. I'm able to put a breakpoint on the JS using Firebug, but I'm unable to retrieve any of the user-defined parameters or functions I mentioned above. I want to know, in this context, what would be the best way to debug this encoded JS.
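
    A simple way to see those values without fighting the debugger (a sketch; it assumes the decoder can be copied and edited locally) is to instrument the loop and log to the console instead of writing the result to the document:

        // Instrumented copy of dF(): logs the key and each decoded character code.
        function dF(s) {
          var key = s.substr(s.length - 1, 1);        // last char of the payload = CodeKey
          var s1 = unescape(s.substr(0, s.length - 1));
          var t = '';
          console.log('key =', key, 'payload length =', s1.length);
          for (var i = 0; i < s1.length; i++) {
            var code = s1.charCodeAt(i);
            console.log('i =', i, 'charCodeAt =', code, 'decoded =', code - key);
            t += String.fromCharCode(code - key);
          }
          console.log('decoded string:', unescape(t));  // inspect rather than document.write
        }

    The subtraction code - key works because the - operator coerces the one-character key string to a number, which is also how the original decoder applies the CodeKey.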

    Read the article

  • Ubuntu upstart hangs on interactive start & stop

    - by danorton
    How do I get Ubuntu upstart to not hang on interactive start & stop? I have created many upstart scripts that work fine during init, but often hang when I enter them at the console. If I CTRL+C out, all that happens is that the job changes state. The script is never run. I'm running Ubuntu Lucid on a Xen virtual server with a Linux 2.6.39 kernel. Below is merely a representative example of many scripts that behave this way:

        description "apache2"

        start on local-filesystems \
            and (net-device-up IFACE=lo) \
            and (runlevel [2345])
        stop on runlevel [016]

        respawn
        respawn limit 10 5
        expect daemon

        script
            . /etc/apache2/envvars
            /usr/sbin/apache2ctl start
        end script

    Read the article

  • Play sound on menu items on hover [migrated]

    - by Mahalia Samuels
    How can I go about making a sound clip play when an element is hovered over? I'm using the latest WordPress, and the parent theme I am using gives me an option to paste scripts which it will embed in the <head> section. Is there a script which I can just add to my <head> that will play the sound when the cursor moves over my menu item class .menu li? I spent several hours trying to figure this out the other day, to no avail. I've done this before in a simple HTML website:

        <head>
        <script language="javascript" type="text/javascript">
        function playSound(soundfile) {
          document.getElementById("dummy").innerHTML =
            "<embed src=\""+soundfile+"\" hidden=\"true\" autostart=\"true\" loop=\"false\" />";
        }
        </script>
        </head>

        <li><a href="index.html" onmouseover="playSound('lib/hover.mp3');"
               onclick="playSound('lib/click.mp3');"><span>Home</span></a></li>

    Not sure how to deploy this on WordPress though.
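
    A sketch of one way to do it with the jQuery that WordPress already loads (the MP3 path is a placeholder, and the selector assumes the theme really renders the menu as .menu li), pasted into the theme's head-scripts box inside <script> tags:

        jQuery(function ($) {
          // HTML5 audio object; point the URL at wherever the clip is uploaded.
          var hoverSound = new Audio('/wp-content/uploads/hover.mp3');
          $('.menu li').on('mouseenter', function () {
            hoverSound.currentTime = 0;   // restart it if it is already playing
            hoverSound.play();
          });
        });

    Wrapping the code in jQuery(function ($) { ... }) matters because WordPress loads jQuery in noConflict mode, so the bare $ is not defined globally.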

    Read the article
