Search Results

Search found 22893 results on 916 pages for 'client scripting'.


  • control a bash script with variables from an external file

    - by perler
    I would like to control a bash script like this: #!/bin/sh USER1=_parsefromfile_ HOST1=_parsefromfile_ PW1=_parsefromfile_ USER2=_parsefromfile_ HOST2=_parsefromfile_ PW2=_parsefromfile_ imapsync \ --buffersize 8192000 --nosyncacls --subscribe --syncinternaldates --IgnoreSizeErrors \ --host1 $HOST1 --user1 $USER1 --password1 $PW1 --ssl1 --port1 993 --noauthmd5 \ --host2 $HOST2 --user2 $USER2 --password2 $PW2 --ssl2 --port2 993 --noauthmd5 --allowsizemismatch with parameters from a control file like this: host1 user1 password1 host2 user2 password2 anotherhost1 anotheruser1 anotherpassword1 anotherhost2 anotheruser2 anotherpassword2 where each line represents one run of the script with the parameters extracted and made into variables. What would be the most elegant way of doing this? PAT
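
    A minimal sketch of one way to drive this, assuming the control file is whitespace-separated with the six fields per line in the order shown, and hypothetically named control.txt:

        #!/bin/sh
        # Run one imapsync per line of control.txt.
        while read -r HOST1 USER1 PW1 HOST2 USER2 PW2; do
            [ -z "$HOST1" ] && continue   # skip blank lines
            imapsync \
                --buffersize 8192000 --nosyncacls --subscribe --syncinternaldates --IgnoreSizeErrors \
                --host1 "$HOST1" --user1 "$USER1" --password1 "$PW1" --ssl1 --port1 993 --noauthmd5 \
                --host2 "$HOST2" --user2 "$USER2" --password2 "$PW2" --ssl2 --port2 993 --noauthmd5 --allowsizemismatch
        done < control.txt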

    Read the article

  • Rsync: General file/folder synchronization

    - by Rey Leonard Amorato
    I have a file server, which is in charge of pulling a folder tree from multiple workstations on a daily basis. My current method for this is rsync, which works pretty well provided directory names and/or files remain the same; however, when files are renamed or moved about within subdir1, rsync will copy them over to the server, creating duplicates. I have to manually find and delete extraneous files/folders that had been left on the server during previous syncs. Note that I cannot use rsync's --delete flag, because any sync from a workstation would then mirror that particular folder tree instead of merging it into the server. Visual diagram: Server: Workstation1 Workstation2 Workstation(n) Folder* Folder* Folder* Folder* -subdir1 -subdir1 -subdir1 -subdir(n) -file1 -file1 -file2 -file(n) -file2 -file(n) Is there a simple script (preferably in bash, nothing fancy) that can accomplish the deletion of the extraneous files/folders in the event a file is renamed or moved to a different subdir? Is there a different program, much like rsync, that can accomplish this task autonomously and in a much simpler manner? I have looked at unison, but I did not like the fact that it keeps a local database for the syncing info. Any tips at all as to how I am supposed to tackle this? Thank you in advance for your help. EDIT: I have tried unison just recently and I can safely say it is out of the question now. Unison is a bi-directional synchronization tool and, from my testing, it mirrors the files existing on the server to all workstations. This is unwanted. Preferably, I would want files/folders to stay within their respective workstations and just merge to the server, AKA uni-directional sync, but with renames/moves propagated to the server. I might have to look into Git/Mercurial/Bazaar as mentioned by Kyle, but I am still unsure if they are fit for the job.
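
    One commonly suggested workaround is to give --delete a scope it can mirror safely by pulling each workstation into its own server-side subtree; this is only a sketch (host and path names are hypothetical), and it does change the layout, so it may not satisfy the merge requirement as stated:

        #!/bin/bash
        # Renames/moves on a workstation are then propagated, because --delete
        # only ever removes files inside that workstation's own subtree.
        for ws in workstation1 workstation2; do
            rsync -av --delete "$ws:/path/to/Folder/" "/srv/backup/$ws/Folder/"
        done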

    Read the article

  • Scripts on UNC paths take very long to run

    - by Álvaro G. Vicario
    I have several scripts in UNC paths (from Windows batch files to PHP scripts). No matter how I run them (double-click in Explorer, my editor's run-command menu, or the Windows command prompt), they take really long to start running (like 14 seconds). Once they get started they run normally. This doesn't happen if I run them from mapped drives. I'm using Windows XP Professional SP3 inside an Active Directory domain, and the files are hosted on a Windows Server box (not sure about the version; it's an HP dedicated file server with a bundled OS). Why does it happen? Is there a way to speed things up while using UNC paths?
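
    One quick experiment while troubleshooting (not a fix as such) is to let cmd.exe map the UNC path to a temporary drive letter with pushd, which makes it easy to confirm the delay is tied to UNC addressing rather than to the script itself; the share name below is hypothetical:

        @echo off
        rem pushd maps \\fileserver\scripts to a free drive letter, popd removes it again
        pushd \\fileserver\scripts
        call myscript.bat
        popd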

    Read the article

  • CentOS Client - Unable to Establish iSCSI connection with multiple interfaces on the initiator

    - by slashdot
    So after upgrading to CentOS 6.2, I am seemingly no longer able to log in to my iSCSI targets. I have multiple interfaces on different subnets on the system, and I first thought that it had to do with the fact that I may not be binding the correct interfaces, which seems to be the case when looking at netstat, as this is clearly wrong: [root]? netstat -na|grep .90 tcp 0 1 10.10.100.60:42354 10.10.8.90:3260 SYN_SENT tcp 0 1 10.10.100.60:40777 10.10.9.90:3260 SYN_SENT I then went ahead and disabled all but one interface, and as a result netstat appears to be correct, but the issue with login remains. I am positive that the target never sees a packet, because I see nothing but SYN_SENT. I know the problem is on my client, because the target is servicing multiple systems, none of which are CentOS 6.2. At this point I am pretty confident that some things changed between CentOS 6.0/6.1 and 6.2. So, if anyone has any thoughts or has run into this, I would very much like to hear them. [root]? iscsiadm --mode node --targetname iqn.2011-12.dom.homer:01:lab-centos-servers-00001 --portal 10.10.8.90:3260,2 --interface=sw-iscsi-0 --login Logging in to [iface: sw-iscsi-0, target: iqn.2011-12.dom.homer:01:lab-centos-servers-00001, portal: 10.10.8.90,3260] (multiple) iscsiadm: Could not login to [iface: sw-iscsi-0, target: iqn.2011-12.dom.homer:01:lab-centos-servers-00001, portal: 10.10.8.90,3260]. iscsiadm: initiator reported error (8 - connection timed out) iscsiadm: Could not log into all portals [root]? netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 10.10.8.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2.7 10.10.9.0 0.0.0.0 255.255.255.0 U 0 0 0 eth3.7 10.10.100.0 0.0.0.0 255.255.252.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth2 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth3 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth2.7 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth3.7 0.0.0.0 10.10.100.1 0.0.0.0 UG 0 0 0 eth0 Output of ip addr show for the two interfaces involved: [root]? for i in 2.7 3.7; do ip addr show eth$i; done 6: eth2.7@eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 00:0c:29:94:5b:8d brd ff:ff:ff:ff:ff:ff inet 10.10.8.60/24 brd 10.10.8.255 scope global eth2.7 inet6 fe80::20c:29ff:fe94:5b8d/64 scope link valid_lft forever preferred_lft forever 7: eth3.7@eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 00:0c:29:94:5b:97 brd ff:ff:ff:ff:ff:ff inet 10.10.9.60/24 brd 10.10.9.255 scope global eth3.7 inet6 fe80::20c:29ff:fe94:5b97/64 scope link valid_lft forever preferred_lft forever Update 01/06/2012: This issue is getting even more interesting by the day, it seems. I went a few weeks back and grabbed a snapshot of this system from before upgrading to 6.2. I spun up a new system from the snapshot, and reconfigured interface info and host keys, as well as iSCSI initiator and iscsi interface info to match the new MACs. I changed nothing else. Then I attempted to connect to my targets, and had no issues at all. I cannot say that this was unexpected. I then went ahead and compared sysctl settings from both systems and there were differences after the upgrade, but nothing seemingly relevant to iSCSI or IP that could contribute to this. I also noticed that two sessions per connection were enabled by default after the upgrade, but I changed it back to 1 session in /etc/iscsi/iscsid.conf.
    On the problematic system we can see that the source interface is seemingly wrong, but even when I disable the 10.10.100 interface, problems persist. So, while this may be relevant, I could not validate it for certain. Needless to say, further research is necessary. Something is clearly different between releases. The working system is on 6.1, and the non-working one is on 6.2. ::Working System:: tcp 0 0 10.10.8.210:39566 10.10.8.90:3260 ESTABLISHED tcp 0 0 10.10.9.210:46518 10.10.9.90:3260 ESTABLISHED [root]? ip route show 10.10.8.0/24 dev eth2.6 proto kernel scope link src 10.10.8.210 10.10.9.0/24 dev eth3.7 proto kernel scope link src 10.10.9.210 10.10.100.0/22 dev eth0 proto kernel scope link src 10.10.100.210 169.254.0.0/16 dev eth0 scope link metric 1002 169.254.0.0/16 dev eth2.6 scope link metric 1006 169.254.0.0/16 dev eth3.7 scope link metric 1007 default via 10.10.100.1 dev eth0 ::Non-working System:: tcp 0 1 10.10.100.60:44737 10.10.9.90:3260 SYN_SENT tcp 0 1 10.10.100.60:55479 10.10.8.90:3260 SYN_SENT [root]? ip route show 10.10.8.0/24 dev eth2.6 proto kernel scope link src 10.10.8.60 10.10.9.0/24 dev eth3.7 proto kernel scope link src 10.10.9.60 10.10.100.0/22 dev eth0 proto kernel scope link src 10.10.100.60 169.254.0.0/16 dev eth0 scope link metric 1002 169.254.0.0/16 dev eth2.6 scope link metric 1006 169.254.0.0/16 dev eth3.7 scope link metric 1007 default via 10.10.100.1 dev eth0 And the result is still the same: [root]? iscsiadm: Could not login to [iface: sw-iscsi-0, target: iqn.2011-12.dom.homer:01:lab-centos-servers-00001, portal: 10.10.8.90,3260]. iscsiadm: initiator reported error (8 - connection timed out) iscsiadm: Could not login to [iface: sw-iscsi-1, target: iqn.2011-12.dom.homer:02:lab-centos-servers-00001, portal: 10.10.9.90,3260]. iscsiadm: initiator reported error (8 - connection timed out) iscsiadm: Could not log into all portals Update 01/08/2012: I believe I have been able to figure out the answer to my issue. It is quite obscure and I doubt this will happen to anyone else any time soon. It turns out that setting iface.iscsi_ifacename and iface.hwaddress in the interfaces configuration file is not legal. When one manually adds an iscsi target, such as below, all settings from the interface config file are copied into the node config file that gets created by the command below. The result is the parameters iface.iscsi_ifacename and iface.hwaddress together in the same config file. These parameters are seemingly mutually exclusive, which does not exactly make sense, or there is perhaps an oversight in the codepath. Perhaps I will investigate further. # iscsiadm -m node --op new -T iqn.2011-12.dom.homer:01:lab-centos-servers-00001 -p 10.10.8.90,3260,2 -I sw-iscsi-0 # iscsiadm -m node --op new -T iqn.2011-12.dom.homer:02:lab-centos-servers-00001 -p 10.10.9.90,3260,2 -I sw-iscsi-1 Notice that below I commented out iface.hwaddress and iface.ipaddress, after which I re-added the targets with the same command as above. All works just fine. [root]?
cat * # BEGIN RECORD 2.0-872.33.el6 iface.iscsi_ifacename = sw-iscsi-0 iface.net_ifacename = eth2.6 #iface.hwaddress = XX:XX:XX:XX:XX:XX #iface.ipaddress = 10.10.8.60 iface.transport_name = tcp iface.vlan_id = 6 iface.vlan_priority = 0 iface.iface_num = 0 iface.mtu = 0 iface.port = 0 # END RECORD # BEGIN RECORD 2.0-872.33.el6 iface.iscsi_ifacename = sw-iscsi-1 iface.net_ifacename = eth3.7 #iface.hwaddress = XX:XX:XX:XX:XX:XX #iface.ipaddress = 10.10.9.60 iface.transport_name = tcp iface.vlan_id = 7 iface.vlan_priority = 0 iface.iface_num = 0 iface.mtu = 0 iface.port = 0 # END RECORD Again, the chances of this happening to someone else are slim to none, so this is likely a waste of time to type up. But if someone does encounter this issue, I hope this post will help.

    Read the article

  • Bash Completion Script Help

    - by inxilpro
    So I'm just starting to learn about bash completion scripts, and I started to work on one for a tool I use all the time. First I built the script using a set list of options: _zf_comp() { local cur prev actions COMPREPLY=() cur="${COMP_WORDS[COMP_CWORD]}" prev="${COMP_WORDS[COMP_CWORD-1]}" actions="change configure create disable enable show" COMPREPLY=($(compgen -W "${actions}" -- ${cur})) return 0 } complete -F _zf_comp zf This works fine. Next, I decided to dynamically create the list of available actions. I put together the following command: zf | grep "Providers and their actions:" -A 100 | grep -P "^\s*\033\[36m\s*zf" | awk '{gsub(/[[:space:]]*/, "", $3); print $3}' | sort | uniq | awk '{sub("\-", "\\-", $1); print $1}' | tr \\n " " | sed 's/^ *\(.*\) *$/\1/' Which basically does the following: Grabs all the text in the "zf" command after "Providers and their actions:" Grabs all the lines that start with "zf" (I had to do some fancy work here 'cause the ZF command prints in color) Grab the second piece of the command and remove any spaces from it (the spaces part is probably not needed any more) Sort the list Get rid of any duplicates Escape dashes (I added this when trying to debug the problem—probably not needed) Trim all new lines Trim all leading and ending spaces The above command produces: $ zf | grep "Providers and their actions:" -A 100 | grep -P "^\s*\033\[36m\s*zf" | awk '{gsub(/[[:space:]]*/, "", $3); print $3}' | sort | uniq | awk '{sub("\-", "\\-", $1); print $1}' | tr \\n " " | sed 's/^ *\(.*\) *$/\1/' change configure create disable enable show $ So it looks to me like it's producing the exact same string as I had in my original script. But when I do: _zf_comp() { local cur prev actions COMPREPLY=() cur="${COMP_WORDS[COMP_CWORD]}" prev="${COMP_WORDS[COMP_CWORD-1]}" actions=`zf | grep "Providers and their actions:" -A 100 | grep -P "^\s*\033\[36m\s*zf" | awk '{gsub(/[[:space:]]*/, "", $3); print $3}' | sort | uniq | awk '{sub("\-", "\\-", $1); print $1}' | tr \\n " " | sed 's/^ *\(.*\) *$/\1/'` COMPREPLY=($(compgen -W "${actions}" -- ${cur})) return 0 } complete -F _zf_comp zf My autocompletion starts acting up. First, it won't autocomplete anything with an "n" in it, and second, when it does autocomplete ("zf create" for example) it won't let me backspace over my completed command. The first issue I'm completely stumped on. The second I'm thinking might have to do with escape characters from the colored text. Any ideas? It's driving me crazy!
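
    If the colour codes are indeed the culprit, one thing worth trying (a sketch, not a verified fix) is stripping the ANSI escape sequences before parsing; note the assumption that, with the escapes gone, the listing lines look like "zf <action> <provider>", so the action becomes the second field:

        _zf_comp() {
            local cur actions
            COMPREPLY=()
            cur="${COMP_WORDS[COMP_CWORD]}"
            # Remove ANSI colour escapes, keep everything after the header line,
            # then pull out the action column.
            actions=$(zf 2>/dev/null | sed $'s/\033\\[[0-9;]*m//g' \
                | sed -n '/Providers and their actions:/,$p' \
                | awk '$1 == "zf" { print $2 }' | sort -u | tr '\n' ' ')
            COMPREPLY=($(compgen -W "${actions}" -- "${cur}"))
            return 0
        }
        complete -F _zf_comp zf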

    Read the article

  • Spamassassin command to tag & move mail with an X-Spam-Score of 10+ to a new dir?

    - by ane
    Have a maildir with tens of thousands of messages in it, about 70% of which are spam. Would like to: Run /usr/local/bin/spamassassin against it, tagging each message if the score is 10 or greater Have a tcsh shell or perl one-liner grep all mails with a spam score of over 10 and move those mails to /tmp/spam What commands can I run to accomplish this? Pseudocode: /usr/local/bin/spamassassin ./Maildir/cur/* -tagscore10 grep "X-Spam-Score: [10-100]" ./Maildir/cur/* | mv %1 /tmp/spam
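
    A sketch in plain sh, assuming SpamAssassin's stock tagging is acceptable and that its default X-Spam-Level header is present (ten or more asterisks means a score of 10+):

        #!/bin/sh
        mkdir -p /tmp/spam
        for msg in ./Maildir/cur/*; do
            # Tag the message (adds the X-Spam-* headers), then put it back in place.
            /usr/local/bin/spamassassin < "$msg" > "$msg.tagged" && mv "$msg.tagged" "$msg"
            # Move anything whose level line has 10 or more stars.
            if grep -q '^X-Spam-Level: \*\{10,\}' "$msg"; then
                mv "$msg" /tmp/spam/
            fi
        done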

    Read the article

  • Windows file compare (FC) spurious differences

    - by user165568
    I'm getting differences like this: a.txt Betty Davis Cathy Edwards b.txt Betty Davis Cathy Edwards There are only two lines listed in the diff (which doesn't make sense). No CR/LF/newline funnies. The difference just moves down if I delete lines. Same problem on Win7 and Win2K. The difference seems to go away if I remove all empty lines from the files. The empty lines are correctly terminated too. I am using /C /W (ignore case, ignore whitespace). Has anyone seen this before? What am I doing wrong? How can I fix it? There are real diffs in the file (missing, extra, or re-spelled names) but the files are byte-for-byte identical at the listed diff.
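
    When chasing something like this, it can help to make FC print line numbers so the report pins down exactly which physical lines it believes differ (the /N switch is a standard FC option):

        fc /N /C /W a.txt b.txt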

    Read the article

  • Windows PowerShell ISE doesn't import PSCX 2.0 module

    - by Alexander
    Hi, I am using PowerShell 2.0 with the PSCX 2.0 module. When writing PS scripts inside Windows PowerShell ISE, no cmdlets from the PSCX module are available. For example, running "Get-DriveInfo" from Windows PowerShell ISE causes an error, while running "Get-DriveInfo" from PowerShell works fine. I guess Windows PowerShell ISE doesn't load my PS profile (this would be mad). Does anyone know why, and what to do to get it to work?
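
    For what it's worth, the ISE reads its own profile file (what $profile reports inside the ISE, typically Microsoft.PowerShellISE_profile.ps1 rather than Microsoft.PowerShell_profile.ps1), so a quick check along these lines may show whether that is the cause:

        # Run these inside the ISE:
        $profile                    # which profile file the ISE tries to load
        Test-Path $profile          # does that file exist yet?
        Import-Module Pscx          # importing by hand should make the cmdlets appear
        Get-Command Get-DriveInfo   # confirm the PSCX cmdlet is now visible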

    Read the article

  • Implementing dry-run in bash scripts

    - by Apikot
    How would one implement a dry-run option in a bash script? I can think of wrapping every single command in an if and echoing the command instead of running it when the script is in dry-run mode. Another way would be to define a function and then pass each command call through that function. Something like: function _run () { if [[ "$DRY_RUN" ]]; then echo $@ else $@ fi } _run mv /tmp/file /tmp/file2 DRY_RUN=true _run mv /tmp/file /tmp/file2 Is this just wrong, or is there a much better way of doing it?
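
    A slightly tightened variant of the same idea (a sketch; the main differences are quoting "$@" so arguments with spaces survive, and printing a shell-quoted form of the command in dry-run mode):

        #!/bin/bash
        _run() {
            if [[ -n "$DRY_RUN" ]]; then
                printf 'DRY RUN:'; printf ' %q' "$@"; printf '\n'
            else
                "$@"
            fi
        }

        _run mv /tmp/file /tmp/file2
        DRY_RUN=true _run mv "/tmp/file with spaces" /tmp/file2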

    Read the article

  • Powershell script to delete sub folders and files if creation date is >7 days but maintain parent folders of sub folders and files <7 days old

    - by Mark
    I'm currently using the PowerShell script below to delete all files, directories and subdirectories of "$dump_path" that are seven days or older, based upon the creation date and not the modified date. The problem with this script is this: If folder "A" is seven (or more) days old it will be deleted even if its sub folders and files are less than seven days old. What I would like this script to do is this: Delete all files from the root and in all sub folders of "$dump_path" that are seven or more days old, but maintain the parent folder(s) of files and folders that are less than seven days old, even if that means the parent folders are more than seven days old. If all subfolders and files within a parent folder are seven days old or older, then the parent can be deleted. Slightly obscure problem I know, but the intention is to have a 7 day retention period of all data in a 'sandbox' location of our shared areas. Also, it would be an added bonus if it could generate a log of what it deletes and e-mail it out post-deletion. Thank you for reading and I hope that all makes sense! Mark # set folder path $dump_path = "c:\temp" # set minimum age of files and folders $max_days = "-7" # get the current date $curr_date = Get-Date # determine how far back we go based on current date $del_date = $curr_date.AddDays($max_days) # delete the files and folders Get-ChildItem $dump_path | Where-Object { $_.CreationTime -lt $del_date } | Remove-Item -Recurse
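
    A sketch of the usual two-pass shape for this kind of retention rule, written against PowerShell 2.0 (so no -File/-Directory switches): delete old files first, then remove any directories left completely empty, deepest first, so emptied parents go too. The logging/e-mail part is left out.

        $dump_path = "c:\temp"
        $del_date  = (Get-Date).AddDays(-7)

        # Pass 1: delete files older than the cutoff (by creation time)
        Get-ChildItem $dump_path -Recurse |
            Where-Object { -not $_.PSIsContainer -and $_.CreationTime -lt $del_date } |
            Remove-Item -Force

        # Pass 2: remove directories that are now completely empty
        Get-ChildItem $dump_path -Recurse |
            Where-Object { $_.PSIsContainer } |
            Sort-Object { $_.FullName.Length } -Descending |
            Where-Object { (Get-ChildItem $_.FullName -Recurse -Force | Measure-Object).Count -eq 0 } |
            Remove-Item -Force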

    Read the article

  • Process files in a folder that haven't previously been processed

    - by Paul
    I have a series of files in a directory that I need to carry an action out on using a script. Once the action is done, then I want to keep a log that the file has been processed, so that the next time the script is run, it does not attempt to carry out the action again. So let's say I can find all the files that should be processed like this: for i in `find /logfolder -name '20*.log'` ; do process_log $i echo $i >> processedlogsfile done So I have a file containing the logs I have processed, and my goal would be to modify the for loop such that these processed logs are not processed a second time. Doing a manual scan each time seems inefficient, particularly as processedlogsfile gets bigger: if grep -iq "$i" processedlogsfile ; then continue; fi It would be good if these files could be excluded when setting up the for loop. Note that the OS in question is a Linux derivative, a management appliance, with a limited toolset (no attr command for example) and so no way to install additional utilities (well, it is possible but not an option). Most common bash shell commands are available though. Also, the filenames and locations of the processed files must remain where they are - they can't be altered to reflect their processed status.
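
    A sketch that filters the already-processed names out before the loop starts, using only find, grep and standard shell (grep -F -x -f treats each line of the log as a fixed, full-line pattern to exclude):

        #!/bin/bash
        touch processedlogsfile   # make sure the list exists on the first run

        find /logfolder -name '20*.log' | grep -vxFf processedlogsfile |
        while read -r i; do
            process_log "$i" && echo "$i" >> processedlogsfile
        done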

    Read the article

  • Windows Handling Piped Commands Error Redirection

    - by jpmartins
    Warning: I am no expert on building scripts, and sorry for my lousy English. In the case of generating a CSV from a database query, I'm using the following commands. ... CALL java.exe -classpath ... com.xigole.util.sql.Jisql -user dmfodbc -pf pwd.file -driver com.sybase.jdbc3.jdbc.SybDriver -cstring %constr% -c ; -input 42.sql -formatter csv -delimiter ; 2>>%LOGFILE% | CALL grep -v -e "SELECT right" -e "executing: " -e " rows affect" %FicheiroR% 2>>%LOGFILE% ... I'm using the Windows implementation of grep. The 2>>%LOGFILE% on both the java and grep commands is causing an error message indicating the file is being used by another process. The ugly workaround I have come up with is to redirect the grep errors to a temporary %LOGFILE%.aux: java ... | grep ... 2>>%LOGFILE%.aux type %LOGFILE%.aux >> %LOGFILE% del %LOGFILE%.aux What is a better solution?
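
    One way around the clash (a sketch) is to give the whole pipeline a single stderr redirection instead of letting both sides open the log at the same time, so neither process holds the file against the other:

        (CALL java.exe -classpath ... com.xigole.util.sql.Jisql ... | CALL grep -v -e "SELECT right" -e "executing: " -e " rows affect" %FicheiroR%) 2>>%LOGFILE%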

    Read the article

  • Why does this batch script terminate unexpectedly?

    - by neurolysis
    This batch script terminates when %CHECKCONTINUE% is given a null value by not inputting anything on line 13 (SET /p CHECKCONTINUE=Okay to continue? (y/n):). Why is this? @ECHO OFF SETLOCAL TITLE Registry restore script REM Restores registry settings and disables the cloud SET %CHECKCONTINUE%= :listaction ECHO I'm about to... ECHO 1.) Remove the registry data that specifies settings for TF2 ECHO 2.) Forcibly disable Steam Cloud. ECHO. SET /p CHECKCONTINUE=Okay to continue? (y/n): REM No? IF %CHECKCONTINUE%==n GOTO exit IF %CHECKCONTINUE%==no GOTO exit REM Yes? IF %CHECKCONTINUE%==y GOTO start IF %CHECKCONTINUE%==yes GOTO start REM Did they put something else? IF DEFINED %CHECKCONTINUE% GOTO loop-notvalid REM Did they not put anything at all? IF NOT DEFINED %CHECKCONTINUE% GOTO loop-noreply :start REM Delete application specific data REG DELETE HKEY_CURRENT_USER\Software\Valve\Source\tf\Settings /f REG DELETE HKEY_CURRENT_USER\Software\Valve\Steam\Apps\440 /f REM Disable Steam Cloud for TF2 REG ADD HKEY_CURRENT_USER\Software\Valve\Steam\Apps\440 /v Cloud /t REG_DWORD /d "0x0" /f :exit ENDLOCAL EXIT :loop-notvalid ECHO. ECHO That's not a valid reply. Try again. ECHO. SET %CHECKCONTINUE%= GOTO listaction :loop-noreply ECHO. ECHO You must enter a reply. ECHO. SET %CHECKCONTINUE%= GOTO listaction
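
    For comparison, the usual defensive pattern quotes both sides of each IF so an empty reply still parses, and clears the variable with SET CHECKCONTINUE= rather than SET %CHECKCONTINUE%= (the latter expands the empty variable instead of naming it); a sketch of just the prompt-and-check part:

        SET CHECKCONTINUE=
        SET /p CHECKCONTINUE=Okay to continue? (y/n):
        IF /I "%CHECKCONTINUE%"=="n"   GOTO exit
        IF /I "%CHECKCONTINUE%"=="no"  GOTO exit
        IF /I "%CHECKCONTINUE%"=="y"   GOTO start
        IF /I "%CHECKCONTINUE%"=="yes" GOTO start
        IF "%CHECKCONTINUE%"=="" GOTO loop-noreply
        GOTO loop-notvalid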

    Read the article

  • Automate SQL Server 2008 backup script failing to run

    - by Techboy
    I have created a maintenance plan, but when I try to execute it I get the error: Message [298] SQLServer Error: 15404, Could not obtain information about Windows NT group/user 'XX\Administrator', error code 0x534. [SQLSTATE 42000] (ConnIsLoginSysAdmin) I have given the administrator db owner access but still get the error. What am I doing wrong?
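
    Error 15404 on an Agent job is usually about the account that owns the job rather than database permissions, so one thing commonly checked (sketch only; the job name below is hypothetical, whatever name the maintenance plan created) is the job owner, switching it to a login the server can always resolve:

        -- list Agent jobs and their owners
        EXEC msdb.dbo.sp_help_job;

        -- reassign the job to a different owner
        EXEC msdb.dbo.sp_update_job
            @job_name = N'MaintenancePlan.Subplan_1',
            @owner_login_name = N'sa';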

    Read the article

  • netsh doesn't provide commands

    - by Petr Marek
    According to Netsh Commands for Wired Local Area Network (LAN) in Windows Server 2008 and Windows Server 2008 R2, netsh should provide commands such as netsh add profile filename="profile.xml" interface="Local Area Connection" but that's an unknown command for my netsh. Even if I enter netsh show /? it shows me only two options: 'show alias' and 'show helper'. Maybe some library/module or something is missing? I tested with admin permissions in PowerShell.
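
    For reference, the commands in that document live under the lan context, and on a workstation that context only works once the Wired AutoConfig service (dot3svc) is running; a sketch of what would normally be typed:

        sc config dot3svc start= auto
        sc start dot3svc
        netsh lan show interfaces
        netsh lan add profile filename="profile.xml" interface="Local Area Connection"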

    Read the article

  • Script to monitor free space in Hdd

    - by s.mihai
    I have a server app that crashes when the HDD free space is a multiple of 4 GB (on a Windows Server 2003). In general I keep track of that myself weekly, since I use the machine from time to time. Can you point out an app or script (I don't want to install PowerShell; is this doable?) that copies some larger files from one folder to another to get the free space out of the multiple-of-4-GB range? Best regards, Mike
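
    Without PowerShell, plain VBScript (cscript.exe ships with Server 2003) can do the arithmetic; the following is only a sketch of the idea, with a hypothetical filler file and a hypothetical 64 MB "danger zone" around each 4 GB multiple:

        ' checkspace.vbs - run as: cscript //nologo checkspace.vbs
        Const FILLER_SRC = "D:\filler\pad.bin"    ' some ~100 MB file kept for this purpose
        Const FILLER_DST = "D:\padding\pad.bin"
        Set fso = CreateObject("Scripting.FileSystemObject")
        If Not fso.FolderExists("D:\padding") Then fso.CreateFolder("D:\padding")
        free = fso.GetDrive("D:").FreeSpace               ' free space in bytes
        fourGB = 4 * 1024 ^ 3
        remainder = free - Int(free / fourGB) * fourGB    ' distance above the last 4 GB multiple
        If remainder < 64 * 1024 ^ 2 Or (fourGB - remainder) < 64 * 1024 ^ 2 Then
            fso.CopyFile FILLER_SRC, FILLER_DST, True     ' consume some space to move away from the multiple
        End If
        WScript.Echo "Free: " & free & " bytes, offset from last 4 GB multiple: " & remainder

    Scheduled Tasks could then run it weekly, matching the current manual routine.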

    Read the article

  • Can I find which script outputs which error?

    - by ibrahim
    There is a script which calls other scripts, and they call others... I don't know exactly which scripts are called or how many of them there are. I only know that some of them add iptables rules, and I get this error when I call the root script: iptables: No chain/target/match by that name. iptables: No chain/target/match by that name. My problem is that I cannot find which script outputs these errors. Is there any way or tool to find that out?
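
    Two low-tech options (sketches only): run the top script with sh -x so every command is echoed as it executes, or temporarily stand a logging wrapper in front of iptables so each caller records itself. For the wrapper, move the real binary aside (for example to /sbin/iptables.real) and install something like this as /sbin/iptables, remembering to put the original back afterwards:

        #!/bin/sh
        # Log who called us (parent PID and its command line), then pass the call through.
        PARENT=`tr '\0' ' ' < /proc/$PPID/cmdline`
        echo "`date` ppid=$PPID parent=$PARENT args: $*" >> /tmp/iptables-calls.log
        exec /sbin/iptables.real "$@"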

    Read the article

  • How to check that all ZFS snapshots within a pool are without holds before destroying that pool

    - by Graham Perrin
    Question Already I can check each snapshot of a filesystem individually, manually. I would prefer to check all at once (all with a single command or script). Please: can that be done with a script? Background From the man page for zfs(8): zfs holds [-H] [-r] snapshot… … -r Specifies that a hold with the given tag is applied recursively to the snapshots of all descendent file systems. I wondered whether recent snapshots are treated as descendants of older snapshot. No: Last login: Sat Dec 8 09:02:26 on ttys003 macbookpro08-centrim:~ gjp22$ zfs holds -r gjp22@2012-12-08-081957 NAME TAG TIMESTAMP macbookpro08-centrim:~ gjp22$ zfs holds -r gjp22@2012-10-28-212255 NAME TAG TIMESTAMP gjp22@2012-10-28-212255 problem with LocalStorage for WOT for Safari Mon Oct 29 6:44 2012 macbookpro08-centrim:~ gjp22$ zfs hold experiment gjp22@2012-12-08-081957 macbookpro08-centrim:~ gjp22$ zfs holds -r gjp22@2012-10-28-212255 NAME TAG TIMESTAMP gjp22@2012-10-28-212255 problem with LocalStorage for WOT for Safari Mon Oct 29 6:44 2012 macbookpro08-centrim:~ gjp22$ zfs holds -r gjp22@2012-12-08-081957 NAME TAG TIMESTAMP gjp22@2012-12-08-081957 experiment Sat Dec 8 9:04 2012 macbookpro08-centrim:~ gjp22$ zfs holds -r gjp22@2012-10-28-212255 NAME TAG TIMESTAMP gjp22@2012-10-28-212255 problem with LocalStorage for WOT for Safari Mon Oct 29 6:44 2012 macbookpro08-centrim:~ gjp22$
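
    A sketch of checking the whole pool in one go, assuming the pool/filesystem is gjp22 as in the transcript and that this build's zfs accepts multiple snapshot arguments (as the zfs holds synopsis suggests):

        zfs list -H -r -t snapshot -o name gjp22 | xargs zfs holds

    Any snapshot that appears in the output with a TAG still has a hold; a header-only listing means the snapshots are hold-free.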

    Read the article

  • Running ssh-agent from a shell script

    - by Dan
    I'm trying to create a shell script that, among other things, starts up ssh-agent and adds a private key to the agent. Example: #!/bin/bash # ... ssh-agent $SHELL ssh-add /path/to/key # ... The problem with this is that ssh-agent apparently kicks off another instance of $SHELL (in my case, bash); from the script's perspective it has executed everything, and ssh-add and anything below it is never run. How can I run ssh-agent from my shell script and keep it moving on down the list of commands?
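
    The usual pattern (a sketch) is to evaluate ssh-agent's output in the script's own environment instead of handing the agent a new shell, so execution carries straight on to ssh-add:

        #!/bin/bash
        # ...
        eval "$(ssh-agent -s)"                  # sets SSH_AUTH_SOCK / SSH_AGENT_PID here
        ssh-add /path/to/key
        # ... remaining commands run with the agent available ...
        trap 'ssh-agent -k > /dev/null' EXIT    # optional: shut the agent down on exit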

    Read the article

  • pam_exec.so PAM module does not export variable PAM_USER as stated in the documentation

    - by davidparks21
    I'm trying to use the pam_exec.so PAM module to execute a script which needs to know the username/password coming from the application (OpenVPN in this case). I have a script that executes printenv >>afile, but I don't see all the environment variables that the man page states pam_exec.so exports (namely PAM_USER, I think); I only see the following: PAM_SERVICE=openvpn PAM_TYPE=auth PWD=/usr/local/openvpn/bin SHLVL=1 A__z="*SHLVL I do successfully pick up the password off of STDIN and output it with this same script, but for the life of me I can't get the username. Any thoughts on what I should try next?
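
    For comparison, a minimal pam_exec setup that does see PAM_USER looks roughly like this (a sketch; the script path is hypothetical, and expose_authtok is the documented way to also receive the password on stdin):

        # /etc/pam.d/openvpn (or whatever service file OpenVPN's PAM plugin points at)
        auth  required  pam_exec.so  expose_authtok  /usr/local/bin/pam-dump.sh

        #!/bin/bash
        # /usr/local/bin/pam-dump.sh
        read -r PASSWORD    # only present when expose_authtok is set
        echo "user=${PAM_USER} service=${PAM_SERVICE} type=${PAM_TYPE}" >> /tmp/pam-dump.log
        exit 0

    If PAM_USER is empty even in a setup like this, the value the application hands to PAM (rather than pam_exec itself) would be the next thing to look at.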

    Read the article

  • Run Linux command as predefined user

    - by vijay.shad
    Hi all, I have created a shell script to start a server program: startup.sh start When the above command executes, it tries to start the server as adminuser. To achieve this, my script is written like this: SUBIT="su - adminuser -c " SERVER_BOX_COMMAND_A="Server" ############## # Function to start cluster function start(){ $SUBIT "$SERVER_BOX_COMMAND_A" } When I execute the command, it asks for a password. Is there any other way to do this so it will not ask for a password? I have seen this behavior in the JBoss startup script provided with JBoss. That script changes the user to jboss and then starts the JBoss server. I want my script to behave the same way.
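
    A common shape for this is to let the script detect whether it is already running as root (for example when launched from init, which is how the JBoss script avoids the prompt) and only fall back to sudo otherwise; a sketch, with the sudoers rule shown as a comment:

        SUBIT="su - adminuser -c "
        SERVER_BOX_COMMAND_A="Server"

        # Function to start cluster
        function start(){
            if [ "$(id -u)" -eq 0 ]; then
                # root can su to adminuser without being asked for a password
                $SUBIT "$SERVER_BOX_COMMAND_A"
            else
                # needs a sudoers entry such as:
                #   youruser ALL=(adminuser) NOPASSWD: /path/to/Server
                sudo -u adminuser $SERVER_BOX_COMMAND_A
            fi
        }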

    Read the article

  • How to power off a hard drive to essentially "hot swap"

    - by Brandon
    I'm looking for either freeware or the programming basics to power a hard drive off/on. Mounting and unmounting a hard drive is simple enough just using the command prompt in Windows XP. Now I need to be able to power down the hard drive so it will not become damaged when being unplugged. I would prefer this to be something simply doable in the command prompt, a simple script, or at worst C++/C#. Freeware that meets this exact requirement would also do the job. This script/program will run on Windows XP with .NET 2.0 SP1.
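
    If a command-line route is acceptable, Microsoft's devcon.exe (a free download, roughly Device Manager for the console) can remove the disk's device node before it is unplugged and rescan afterwards; this is only a sketch and the instance ID shown is hypothetical, to be copied from the find output:

        :: list disk devices and their instance IDs
        devcon find =diskdrive
        :: remove the device node of the disk about to be unplugged
        devcon remove "@IDE\DISKXXXXXXX\YYYYYYY"
        :: after re-attaching the disk, bring it back
        devcon rescan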

    Read the article

  • Disable "Windows Firewall with Advanced Security" for all profiles(Domain,Public,Standard) in local GP using script help! Windows 7 Clients

    - by JoBo
    We need Windows 7 with the Windows firewall turned off, so the GOLD image has the Windows firewall turned off for all profiles (Domain, Public, Standard) and the Windows service disabled. Now the same GOLD image deployed with MDT (Apply Local GPO) has Windows Firewall enabled under "Windows Firewall with Advanced Security" as part of the task sequence, and we need to remove it. These machines are now on a domain where we have no rights/control over the domain-level GPO, but we have local admin rights on these machines. We have a requirement to set "Windows Firewall with Advanced Security" to "Not Configured" or "Off" on these machines. In gpedit.msc, if we manually go to "Windows Firewall with Advanced Security" after enabling the Windows Firewall service, we can clear the settings, but doing the same manually on all machines is extra effort. Changing values in the registry gets reverted on machine restart, as the setting is applied from the local GPO. Also, using GPMC we can connect to a remote computer and set it to Not Configured manually or using a .wfw file, but we are looking for a script or a lower-effort method to accomplish this. Please suggest. NB: CIA has already reported a similar issue ("How do I turn off Windows 7 Firewall via script or through automation?"), but running netsh advfirewall set allprofiles state off on the already deployed machines did not make a change (the FW service on all machines is disabled in the GOLD image). Thanks and regards, Jose
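
    With local admin rights only, the pieces that usually get scripted are the locally applied policy values, the effective firewall state and the service itself; the sketch below assumes no domain GPO re-applies the settings and that the local GPO's registry.pol no longer carries the firewall values (otherwise they return at the next policy refresh). MpsSvc is the Windows 7 firewall service.

        :: drop the locally applied "Windows Firewall with Advanced Security" policy values
        reg delete "HKLM\SOFTWARE\Policies\Microsoft\WindowsFirewall" /f
        :: turn the firewall off for all profiles while the service is still running
        netsh advfirewall set allprofiles state off
        :: disable the service, as in the GOLD image (takes full effect after a reboot)
        sc config MpsSvc start= disabled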

    Read the article
