Search Results

Search found 6392 results on 256 pages for 'bash history'.

Page 37/256 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • Kill child process when the parent exits

    - by kolypto
    I'm preparing a script for Docker, which allows only one top-level process; that process should receive the signals so we can stop it. I therefore have a script like this: one application writes to syslog (a bash loop in this sample), and the other one just prints it.

        #!/usr/bin/env bash
        set -eu
        tail -f /var/log/syslog &
        exec bash -c 'while true ; do logger aaaaaaaaaaaaaaaaaaa ; sleep 1 ; done'

    Almost solved: when the top-level bash process gets SIGTERM, it exits, but tail -f continues to run. How do I instruct tail -f to exit when the parent process exits? E.g. it should also get the signal. Note: I can't use bash traps, since the exec on the last line replaces the process completely.
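    One approach, sketched below: GNU tail has a --pid option that makes it exit once a given process dies. Because exec replaces the shell's image but keeps its PID, $$ still identifies the logger loop after the exec.

        #!/usr/bin/env bash
        set -eu
        # GNU tail exits on its own once the watched PID disappears;
        # the exec below keeps this shell's PID, so tail follows the loop.
        tail -f /var/log/syslog --pid=$$ &
        exec bash -c 'while true ; do logger aaaaaaaaaaaaaaaaaaa ; sleep 1 ; done'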

    Read the article

  • Improving grep performance over a huge file

    - by rogerio_marcio
    I have FILE_A, which has over 300K lines, and FILE_B, which has over 30M lines. I created a bash script that greps each line of FILE_A in FILE_B and writes the result to a new file. The whole process takes more than 5 hours. I'm looking for suggestions on how to improve the performance of my script. I'm using grep -F -m 1 as the grep command. FILE_A looks like this:

        123456789
        123455321

    and FILE_B is like this:

        123456789,123456789,730025400149993,
        123455321,123455321,730025400126097,

    So with bash I have a while loop that picks the next line from FILE_A and greps for it in FILE_B. When the pattern is found in FILE_B, I write it to result.txt.

        while read -r line; do
            grep -F -m1 "$line" 30MFile
        done < 300KFile

    Thanks a lot in advance for your help.
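    A likely fix, sketched under the assumption that every key in FILE_A is a fixed string: grep can take all patterns at once with -f, turning 300K grep invocations into a single pass over FILE_B.

        # Read all patterns from 300KFile and scan 30MFile once.
        grep -F -f 300KFile 30MFile > result.txt

        # Alternative sketch: awk builds a hash of the keys and streams
        # FILE_B once, matching on the first comma-separated field.
        awk -F',' 'NR==FNR { keys[$0]; next } $1 in keys' 300KFile 30MFile > result.txt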

    Read the article

  • How to get current gnome keyboard layout from terminal

    - by ftiaronsem
    For use in a bash script, I need to get the gnome keyboard layout the user is currently using. For example, if the user sets the keyboard layout to en-us, I need a bash command that prints this. How can I get that information? Update: setxkbmap -query unfortunately does not work. Below is the output with the en (first command) and the de (second command) layout activated; it is identical in both cases. Switching the keyboard layout seems to have some relation to the gnome session configuration.

        setxkbmap -query
        rules:      evdev
        model:      pc105
        layout:     us,de
        variant:    ,
        options:    terminate:ctrl_alt_bksp,lv3:ralt_switch,grp:alts_toggle

        setxkbmap -query
        rules:      evdev
        model:      pc105
        layout:     us,de
        variant:    ,
        options:    terminate:ctrl_alt_bksp,lv3:ralt_switch,grp:alts_toggle
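    A hedged sketch for GNOME 3: the active layout is stored as an index into the configured input sources, readable with gsettings. The exact schema keys vary between GNOME versions, so verify them first with `gsettings list-recursively org.gnome.desktop.input-sources`.

        # Index of the active source (assumed key name; varies by version):
        idx=$(gsettings get org.gnome.desktop.input-sources current | awk '{print $2}')
        # The list of sources, e.g. [('xkb', 'us'), ('xkb', 'de')]:
        gsettings get org.gnome.desktop.input-sources sources
        # The entry at position $idx is the active layout.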

    Read the article

  • Setup CRON weekly backup

    - by sadmicrowave
    I want to back up my /var/lib/mysql and /var/www folders and save them as tar.gz files on my mounted network file server (uslons001). Here is my bash file, located at /etc/cron.weekly/mysqlbackup.sh:

        #!/bin/bash
        mkdir ~/uslons001/`date +%d%m%y`
        tar -czf ~/uslons001/`date +%d%m%y`/mysql.tar.gz /var/lib/mysql
        tar -czf ~/uslons001/`date +%d%m%y`/www.tar.gz /var/www
        tar -czf ~/uslons001/`date +%d%m%y`.tar.gz ~/uslons001/`date +%d%m%y`
        echo Backup Completed `date` >> ~/backuplog

    It works PERFECTLY when I execute it in a shell, but it never runs from cron, so I must not be setting the cron job up properly. My cron job looks like this:

        30 7 * * fri /etc/cron.weekly/mysqlbackup.sh

    which should execute at 7:30 AM every Friday. What am I doing wrong? UPDATE1 - changed the cron job line to the following, still with no luck:

        44 8 * * 5 /etc/cron.weekly/mysqlbackup.sh

    Is there a cron error log file that I can read to help pinpoint the problem?
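    Two hedged leads worth checking: on Ubuntu, cron logs to syslog, and scripts placed in /etc/cron.weekly are run by run-parts, which skips filenames containing dots, so mysqlbackup.sh would be ignored there regardless of any crontab line.

        # See whether cron tried to run anything:
        grep CRON /var/log/syslog | tail
        # If relying on /etc/cron.weekly itself, the file must be
        # executable and must not have a .sh suffix:
        sudo mv /etc/cron.weekly/mysqlbackup.sh /etc/cron.weekly/mysqlbackup
        sudo chmod +x /etc/cron.weekly/mysqlbackup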

    Read the article

  • /etc/profile not being sourced

    - by Marc
    For 11.04, I did a fresh install of my system. Part of that install was rvm, which puts an rvm.sh in /etc/profile.d/. This doesn't work, because /etc/profile (which sources every readable *.sh file in /etc/profile.d/) is not being loaded. According to the documentation, /etc/profile is only sourced when bash runs as a login shell. To verify this, I invoked bash --login, after which rvm was available. This worked for me in previous versions of Ubuntu without any configuration; that is, a fresh install of 10.10 correctly sources /etc/profile and /etc/profile.d/. My question is: am I doing anything wrong, or are there new assumptions in Natty that have broken this? My current workaround is to source /etc/profile from ~/.bashrc (which is awful, since profile is meant to load before the bashrc files, but it does the trick).
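    A hedged workaround sketch: terminal emulators typically start interactive non-login shells, which read ~/.bashrc but never /etc/profile. Either enable "Run command as a login shell" in the terminal's profile preferences, or replicate the profile.d loop in ~/.bashrc (this mirrors the stock /etc/profile logic):

        # In ~/.bashrc; a sketch assuming the standard profile.d layout:
        if [ -d /etc/profile.d ]; then
            for f in /etc/profile.d/*.sh; do
                [ -r "$f" ] && . "$f"
            done
            unset f
        fi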

    Read the article

  • How can I determine whether a shell script runs as root or not?

    - by EvilPhoenix
    This is something I've been curious about. I write a lot of small bash scripts (.sh files) for tasks I do routinely. Some of those tasks require everything to be run as the superuser. Is it possible, within the bash script, to check before anything else runs whether the script is being run as superuser and, if not, print a message saying "You must be superuser to use this script" and then terminate the script? The other side of that is that I'd like the script to run without the error when the user is superuser. Any ideas on how to code this (if statements, etc.)?
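    A minimal sketch of the common idiom: bash exposes the effective user ID as EUID, and root's is 0.

        #!/usr/bin/env bash
        # Abort early unless running as root.
        if [[ $EUID -ne 0 ]]; then
            echo "You must be superuser to use this script" >&2
            exit 1
        fi
        # ... the rest of the script runs only as superuser ...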

    Read the article

  • Any way to list similar commands?

    - by Septagram
    When you type a command name wrong, bash often does this:

        septi@norbert:~$ good
        No command 'good' found, did you mean:
         Command 'gold' from package 'binutils' (main)
         Command 'gmod' from package 'gmod' (universe)
         Command 'goo' from package 'goo' (universe)
         Command 'god' from package 'god' (universe)
         Command 'geod' from package 'proj-bin' (universe)
         Command 'gord' from package 'scotch' (universe)
        good: command not found

    Or sometimes it does this:

        septi@norbert:~$ nftp
        No command 'nftp' found, but there are 23 similar ones
        nftp: command not found

    Is there any way to ask bash to show those 23 similar commands? And is there a way to list similar commands, including ones that aren't installed yet, without actually running the application (ftp, for example)?
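    A hedged lead: those messages come from Ubuntu's command-not-found package rather than from bash itself, and its handler can be invoked directly to list the candidates, installed or not.

        # Sketch, assuming the stock Ubuntu install path:
        /usr/lib/command-not-found nftp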

    Read the article

  • How can a script detect if the user is idle

    - by josinalvo
    I want to check, inside a bash script (*), how long the user of an X session has been idle. The user doesn't have to be using bash, just X. If the user just moved the mouse, for example, a good answer would be "idle for 0 seconds". If he hasn't touched the computer in 5 minutes, a good answer would be "idle for 300 seconds". The reason for not using xautolock directly is to be able to implement more complex behavior. For example: if the user is idle for 10 minutes, try to suspend; if he is idle for 5 more minutes, shut down (I know it sounds odd, but suspend does not always work here...). (*) It could also be another language.
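    A sketch using xprintidle (packaged under that name on Ubuntu), which prints the X session's idle time in milliseconds; the thresholds and power commands below are illustrative assumptions.

        #!/usr/bin/env bash
        idle=$(( $(xprintidle) / 1000 ))
        echo "idle for ${idle} seconds"
        if   [ "$idle" -ge 900 ]; then sudo shutdown -h now   # 15 min: shut down
        elif [ "$idle" -ge 600 ]; then sudo pm-suspend        # 10 min: suspend
        fi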

    Read the article

  • Help writing server script to ban IPs from a list

    - by Chev_603
    I have a VPS that I use as an openvpn and web server. For some reason, my apache log files are filled with thousands of these hack attempts:

        "POST /xmlrpc.php HTTP/1.0" 404 395

    These attempts fill up 90% of my logs. I think it's a WordPress vulnerability they're looking for. Obviously they are not successful (I don't even have WordPress on my server), but it's annoying and probably resource-consuming as well. I am trying to write a bash script that will: 1. search the apache logs and grab the offending IPs (even if they try it once); 2. sort them into a list with each unique IP on a separate line; and 3. block them using iptables rules. I am a bash newb, and so far my script does everything except step 3. I can block the IPs manually, but that's tedious, and besides, this is Linux and it's perfectly capable of doing it for me. I also want the script to be customizable, so that I (or anyone else who wants to use it) can change the variables to suit whatever situation may come up in the future. Here is the script so far:

        #!/bin/bash
        ## IP LIST GENERATOR
        ## Author: Chev Young
        ## Script to search Apache logs and list IPs based on custom filters

        ## Define our variables:
        DIRECT=~/Script      ## Location of script & where to put results/temp files
        LOGFILE=/var/log/apache2/access.log  ## Logfile to search for offenders
        TEMPLIST=xml_temp    ## Temporary file name
        IP_LIST=ipstoban     ## Name of results file
        FILTER1=xmlrpc       ## What are we looking for? (Requests we want to ban)

        cd $DIRECT
        if [ ! -f $TEMPLIST ]; then
            touch $TEMPLIST  ## Create temp file
        fi

        cat $LOGFILE | grep $FILTER1 >> $DIRECT/$TEMPLIST
        ## Only interested in the IPs, so:
        sed -e 's/\([0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+\).*$/\1/' -e t -e d $DIRECT/$TEMPLIST | sort | uniq > $DIRECT/$IP_LIST
        rm $TEMPLIST  ## Clean temp file
        echo "Done. Results located at $DIRECT/$IP_LIST"

    So I need help with the next part of the script, which should ban the IPs (incoming, and perhaps outgoing too) from the resulting $IP_LIST file. I don't care whether it uses UFW or iptables directly, as long as it bans the IPs. I'd probably run it as a cron task. What I'm having trouble with is understanding how to use each line of the result file as a separate variable, to do something like:

        ufw deny $IP1 $IP2 $IP3, etc.

    Any ideas? Thanks.
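    A hedged sketch for step 3: rather than one variable per address, read the result file line by line with a while/read loop and ban each address as you go, shown here with iptables (a ufw variant is commented as an alternative).

        while IFS= read -r ip; do
            iptables -I INPUT -s "$ip" -j DROP
            # or: ufw insert 1 deny from "$ip" to any
        done < "$DIRECT/$IP_LIST"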

    Read the article

  • What's the difference between set, export and env and when should I use each?

    - by Oli
    Every so often I'll bash out a bash script, and it strikes me there are a few ways of setting a variable:

        key=value
        env key=value
        export key=value

    When you're inside a script or a single command (for instance, I'll often chain a variable with a Wine launcher to set the right Wine prefix) these seem to be completely interchangeable, but surely that can't be the case. What's the difference between these three methods, and can you give me an example of when I would specifically want to use each one? Definitely related to "What is the difference between `VAR=...` and `export VAR=...`?", but I want to know how env fits into this too, and some examples showing the benefits of each would be nice as well :)
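    A hedged illustration of the practical difference (some_cmd below is a stand-in for any program): plain assignment stays in the current shell, export makes the variable inherited by child processes, and env sets it for one command's environment only.

        key=value                        # shell variable; children don't see it
        bash -c 'echo "${key:-unset}"'   # prints "unset"

        export key=value                 # environment variable; children inherit it
        bash -c 'echo "$key"'            # prints "value"

        env key=other some_cmd           # "other" is visible only inside some_cmd;
                                         # the current shell's $key is untouched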

    Read the article

  • sed problem

    - by moata_u
    Hello there. I am facing a problem with the sed command. I was trying to write a bash script that does the following: 1. search for the line that contains :@, 2. then save the line that contained :@ and replace it with a new line, as follows:

        #!/bin/bash
        echo "Please enter the ip address of your file"
        read ipnumber
        find=`grep ':@' application.properties`   # find the line
        input="connection.url=jdbc\:oracle\:thin\:@$ipnumber\:1521\:billz"   # preparing new line
        echo sed "s/'${find}'/'${input}'/g" application.properties   # replace old with new line

    The problem is that nothing happens! I already tried using "${find}" instead of '${find}'.
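    A hedged sketch of a working version: the stray echo prints the sed command instead of running it, and the single quotes around ${find} become literal characters inside the double-quoted expression. Using -i edits the file in place, and | as the delimiter avoids clashes with the pattern's contents.

        #!/bin/bash
        echo "Please enter the ip address"
        read ipnumber
        find=$(grep ':@' application.properties)
        input="connection.url=jdbc:oracle:thin:@${ipnumber}:1521:billz"
        sed -i "s|${find}|${input}|g" application.properties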

    Read the article

  • can't control pianobar after using echo

    - by Ubuntuusr22222
    I have a script that starts pianobar (a Pandora player) and autoloads it into tty2 after booting. I'm running Ubuntu Precise 12.04. It's a pretty simple script:

        #!/bin/bash
        sleep 5
        echo "2" | pianobar

    This works: it selects station 2 and begins playing. But when I try to type in commands, nothing happens (like pushing "p" for pause). It shows the letter for a second, then hides it. If I try to exit with ctrl+z, it just sits there and I can't use it at all. If I run this instead, it works fine but doesn't auto-select the second station:

        #!/bin/bash
        sleep 5
        pianobar

    Is there any way to write this so it automatically inputs "2" and then lets me take control from there? Or am I stuck with having to select 2 every time I boot up?
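    A hedged sketch: piping into pianobar replaces its stdin, which is why the keyboard stops working afterwards. pianobar can also read commands from a fifo at ~/.config/pianobar/ctl, so the station choice can be written there while the terminal stays attached to the keyboard.

        #!/bin/bash
        sleep 5
        # Create the control fifo if it doesn't exist yet:
        [ -p ~/.config/pianobar/ctl ] || mkfifo ~/.config/pianobar/ctl
        # Write the station selection shortly after pianobar starts:
        ( sleep 2; echo '2' > ~/.config/pianobar/ctl ) &
        pianobar   # stdin stays on the keyboard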

    Read the article

  • Run two shell files in parallel with threads

    - by user1149157
    How can I run two shell files in parallel so that they do not share the same JVM? Maybe I should use threads, but then how do I run the two shell files from two threads?

    File 1:

        #!/bin/bash
        #
        # Script for running several experimentations on the same JVM
        # Usage: TRACE_DIR NB_EXPE Factories...
        #
        param="parameter1"
        another="parameter2"
        for ((i = 10; i >= 0; i -= 1))
        do
            echo "run my file with param another"
        done

    File 2:

        #!/bin/bash
        #
        # Script for running several experimentations on the same JVM
        # Usage: TRACE_DIR NB_EXPE Factories...
        #
        a="101"
        b="400"
        c="500"
        echo "run my programme with a b c"
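    A minimal sketch: shell scripts run as processes, not threads, so launching both in the background gives each its own process tree (and hence its own JVM), and wait blocks until both finish. file1.sh and file2.sh are placeholder names.

        #!/bin/bash
        ./file1.sh &
        ./file2.sh &
        wait   # block until both background jobs have exited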

    Read the article

  • How to auto-restart a python script on failure?

    - by norm
    This post describes how to keep a child process alive in a bash script: http://stackoverflow.com/questions/696839/how-do-i-write-a-bash-script-to-restart-a-process-if-it-dies That worked great for calling another bash script. However, I tried executing something similar where the child process is a Python script:

        #!/bin/bash
        PYTHON=/usr/bin/python2.6

        function myprocess {
            $PYTHON daemon.py start
        }

        NOW=$(date +"%b-%d-%y")
        until myprocess; do
            echo "$NOW Prog crashed. Restarting..." >> error.txt
            sleep 1
        done

    Now the behaviour is completely different. It seems the python script is no longer a child of the bash script but seems to have 'taken over' the bash script's PID, so there is no longer a bash wrapper around the called script... why?
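    A hedged guess at the cause, with a sketch: if daemon.py's `start` mode daemonizes (double-forks and detaches, as daemon-style scripts usually do), the foreground process exits immediately and the loop has nothing left to supervise. Running the script in a foreground mode keeps it a direct child; `run` here is a hypothetical foreground mode to adapt to the actual script. The date is also computed inside the loop so each log entry gets a fresh timestamp.

        #!/bin/bash
        PYTHON=/usr/bin/python2.6
        until "$PYTHON" daemon.py run; do
            echo "$(date +%b-%d-%y) Prog crashed. Restarting..." >> error.txt
            sleep 1
        done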

    Read the article

  • Rewriting git history to convert master branch to development branch?

    - by gct
    I'm looking to restructure my git repo to use a new branching model I came across: http://nvie.com/git-model But right now all my history lives in the master branch. I'd like to rearrange it (possibly using git-filter-branch?) so that all that history is in a branch called development. Is this possible? It's definitely beyond my limited git skills.
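    A minimal sketch of the usual answer: branches are just pointers, so no history rewriting is needed. Rename master to development, then start a fresh master from whichever commit should represent the last release.

        git branch -m master development     # rename the branch in place
        git checkout -b master development   # new master at the same commit
        # (move master's tip later with: git reset --hard <commit>)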

    Read the article

  • Linux Commands: Making Bash Error Messages Friendlier

    Linux Planet: "Bash error messages, like so many error messages, can be more cryptic than helpful. But the good news is bash has a built-in mechanism for creating your own customized error messages, and you don't have to be an ace programmer to do it. Ubuntu and openSUSE already use this; Akkana Peck shows us how to do it ourselves."
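    A hedged guess at the mechanism the article covers: since bash 4, a missing command triggers the command_not_found_handle function (this is what Ubuntu's command-not-found hook defines), and redefining it customizes the message.

        # Sketch: drop into ~/.bashrc to replace "command not found".
        command_not_found_handle() {
            echo "Sorry, '$1' is not installed or not in your PATH." >&2
            return 127
        }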

    Read the article

  • Given a main function and a cleanup function, how (canonically) do I return an exit status in Bash/Linux?

    - by Zac B
    Context: I have a bash script (a wrapper for other scripts, really) that does the following pseudocode:

        do a main function
        if the main function returns:
            returncode=$?      # most recent return code
        if the main function runs longer than a timeout:
            kill the main function
            returncode=140     # the semi-canonical "exceeded allowed wall clock time" status
        run a cleanup function
        if the cleanup function returns an error (nonzero return code):
            exit $?            # exit with the status returned from the cleanup function
        else:
            # cleanup was successful
            ....

    Question: What should happen after the last line? If the cleanup function was successful but the main function was not, should my program return 0 (for the successful cleanup) or $returncode, which contains the (possibly nonzero and unsuccessful) return code of the main function? For a specific application the answer would be easy: "it depends on what you need the script for." However, this is more of a general/canonical question (and if this is the wrong place for it, kill it with fire): in Bash (or Linux in general) programming, do you typically return the status that "means" something (i.e. $returncode), or do you ignore such subjectivities and simply return the code of the most recent function? This isn't Bash-specific: if I have a standalone executable of any kind, how, canonically, should it behave in these cases? Obviously this is somewhat debatable. Even if there is a system for these things, I'm sure a lot of people ignore it. All the same, I'd like to know. Cheers!
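    One common convention, sketched below (not a standard, just widespread practice): report the primary failure, and let a cleanup failure surface only when the main work succeeded.

        main_status=0
        main || main_status=$?
        cleanup
        cleanup_status=$?
        if [ "$main_status" -ne 0 ]; then
            exit "$main_status"      # the "real" failure wins
        fi
        exit "$cleanup_status"       # main succeeded; cleanup decides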

    Read the article

  • WebLogic history: an interview with Laurie Pitman, by Qualogy

    - by JuergenKress
    In all the years that I have been working with WebLogic, the BEA and Oracle eras are the best known, with WebLogic evolving into a worldwide enterprise platform for Java applications used by multinationals around the globe. But how did it all begin? Apart from the sparse info you find on some Internet pages, I was eager to hear it in person from one of the founders of WebLogic back in 1995, before the BEA era: Laurie Pitman. Four young people (Carl Resnikoff, Paul Ambrose, Bob Pasker, and Laurie Pitman) became friends and colleagues around the time of the first release of Java in 1995. Between the four of them, they had an MA in American history, an MA in piano, an MS in library systems, a BS in chemistry, and a BS in computer science. They had come together somewhat serendipitously, interested in building web tools exclusively in Java for the emerging Internet web application market. They found many things to like about each other, some overlap in their interests, but also a lot of well-placed differences, which made the partnership particularly interesting. They made it formal in January 1996 by incorporating. Read the complete article here. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Learn How Ancestry.com Helps Families Uncover Their History with Oracle WebCenter

    - by Christie Flanagan
    Delivering Exceptional Online Customer Experiences. Ancestry.com is the world's largest online family history resource, providing an engaging and interactive customer experience to more than 1.7 million members. With smart search technology, a wealth of learning resources, and a worldwide community of family history enthusiasts, Ancestry.com helps people discover their roots and tell their unique family stories. Key to Ancestry.com's success has been the delivery of an online customer experience that converts site visitors into paying subscribers and keeps them coming back. To help achieve this goal, Ancestry.com turned to Oracle's Web experience management solution, Oracle WebCenter Sites. Join us as executives from Ancestry.com and Oracle discuss how Oracle's Web experience management solution is helping them deliver engaging online experiences. Learn how: Ancestry.com selected Oracle WebCenter Sites to meet their demanding Web experience management requirements; the company was able to get up and running quickly despite a complex technology stack and challenging integration requirements with legacy systems; and Ancestry.com empowered business users to manage the online experience and significantly reduce time to market for their online campaigns and initiatives. Register now for the webcast: Thursday, June 28, 2012, 10 a.m. PT / 1 p.m. ET. Presented by: Blane Nelson, Chief Architect, Applications, Ancestry.com; and Christie Flanagan, Director of Product Marketing, Oracle WebCenter Sites, Oracle.

    Read the article

  • autocomplete not working on one server, works on others

    - by dogmatic69
    I have Ubuntu 10.10 x64 and x86 running on various servers, and autocomplete works on all of them bar one. The issue: apt-<tab> would show a list of options, but sudo apt-<tab> would not. After fiddling with it for a few hours, I found that /etc/bash_completion did not exist on the broken server. After copying the one from a working server, it now works, but still not properly: sudo apt-get ins<tab> does not do anything. Listing the files in /etc/bash_completion.d/ on the working server shows about 50 files; the broken one has only two or three. I don't think I can just copy those files, though, as they might provide completions for things that are not even installed. TL;DR: autocomplete broken, how can I fix it? It seems like it's disabled somewhere; why is this? EDIT: OK, it was never installed...

        $ sudo apt-get install bash-completion
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed
          bash-completion
        0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
        Need to get 140kB of archives.
        After this operation, 1,061kB of additional disk space will be used.
        Get:1 http://archive.ubuntu.com/ubuntu/ maverick-updates/main bash-completion all 1:1.2-2ubuntu1.1 [140kB]
        Fetched 140kB in 0s (174kB/s)
        Selecting previously deselected package bash-completion.
        (Reading database ... 23808 files and directories currently installed.)
        Unpacking bash-completion (from .../bash-completion_1%3a1.2-2ubuntu1.1_all.deb) ...
        Processing triggers for man-db ...
        Setting up bash-completion (1:1.2-2ubuntu1.1) ...

    It's now kind of working, but still wonky: apt-get ins<tab> gives sudo apt-get insserv as the option. Also, apt-get install php5<tab> gives apt-get install php5/ instead of the php5-* options.
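    A hedged check for the remaining wonkiness: the completions only take effect in shells that actually source them. On Ubuntu this is wired up in /etc/bash.bashrc; if that block is commented out there, enabling it, or adding the stock snippet to ~/.bashrc, loads them in new shells.

        # Sketch, mirroring the stock Ubuntu snippet:
        if ! shopt -oq posix; then
            if [ -f /etc/bash_completion ]; then
                . /etc/bash_completion
            fi
        fi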

    Read the article

  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database: The last place I want to visit is a hospital. With the monsoon season starting and intermittent rains, it has become almost routine to catch a fever every other year (seriously, I hate it). When I visit my doctor, it is always interesting how he quizzes me: the routine questions of "How many days have you had this?", "Is there any pattern?", "Did you get drenched in the rain?", "Do you have any other symptoms?" and so on. The idea is that the doctor wants to find an anomaly or a pattern that will point him to a viral or bacterial cause. Most of the time they get it from experience, and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution. SQL Server has its own way to find out whether the server's data files are in a consistent state: the DBCC commands.

    Back to SQL Server: In real life, a database consistency check is one of the critical operations a DBA generally doesn't give much priority to. Many readers of my blogs have asked: how do we know whether the database is consistent? How do I read the output of DBCC CHECKDB and tell whether everything is right or not? My common answer to all of them is to look at the bottom of the checkdb (or checktable) output for the line below.

        CHECKDB found 0 allocation errors and 0 consistency errors in database 'DatabaseName'.

    The above is a "good sign" because we are seeing zero allocation errors and zero consistency errors. If you are seeing non-zero errors, then there is some problem with the database. Sample output is shown below:

        CHECKDB found 0 allocation errors and 2 consistency errors in database 'DatabaseName'.
        repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName).

    If we see non-zero errors, then most of the time (not always) we get repair options, depending on the level of corruption. There is a risk involved with the repair_allow_data_loss option above: we could lose data. Sometimes the option would be repair_rebuild, which is a little safer. Though these options are available, it is important to find the root cause of the problem.

    Among the standard reports there is one that shows the history of checkdb executions for the selected database. Since this is a database-level report, we right-click on the database, click Reports, click Standard Reports, and then choose "Database Consistency History". The information in this report is picked up from the default trace. If the default trace is disabled, or there has been no checkdb run, or the information has already rolled out of the default trace, the report says so very clearly: "Currently, no execution history of CHECKDB is available or default trace is not enabled."

    To demonstrate, I caused corruption in one of my databases and did the following steps: ran CheckDB so that errors were reported; fixed the corruption, losing the data, using the repair option; then ran CheckDB again to confirm the corruption was cleared. After that I launched the report. If you are lazy like me and don't want to run the report manually for each database, the query below is handy: it is the same query the report runs behind the scenes. All I have done is remove the filter on the database name (commented out at the end).
        DECLARE @curr_tracefilename VARCHAR(500);
        DECLARE @base_tracefilename VARCHAR(500);
        DECLARE @indx INT;

        SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1;
        SET @curr_tracefilename = REVERSE(@curr_tracefilename);
        SELECT @indx = PATINDEX('%\%', @curr_tracefilename);
        SET @curr_tracefilename = REVERSE(@curr_tracefilename);
        SET @base_tracefilename = LEFT(@curr_tracefilename, LEN(@curr_tracefilename) - @indx) + '\log.trc';

        SELECT SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), 36,
                   PATINDEX('%executed%', TEXTData) - 36) AS command,
               LoginName,
               StartTime,
               CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%found%', TEXTData) + 6,
                   PATINDEX('%errors %', TEXTData) - PATINDEX('%found%', TEXTData) - 6)) AS errors,
               CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%repaired%', TEXTData) + 9,
                   PATINDEX('%errors.%', TEXTData) - PATINDEX('%repaired%', TEXTData) - 9)) AS repaired,
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%time:%', TEXTData) + 6,
                   PATINDEX('%hours%', TEXTData) - PATINDEX('%time:%', TEXTData) - 6) + ':' +
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%hours%', TEXTData) + 6,
                   PATINDEX('%minutes%', TEXTData) - PATINDEX('%hours%', TEXTData) - 6) + ':' +
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%minutes%', TEXTData) + 8,
                   PATINDEX('%seconds.%', TEXTData) - PATINDEX('%minutes%', TEXTData) - 8) AS time
        FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
        WHERE EventClass = 22
          AND SUBSTRING(TEXTData, 36, 12) = 'DBCC CHECKDB'
          -- AND DatabaseName = @DatabaseName;

    Don't get worried about the logic above. All it does is read the trace files and parse entries like the one below, pulling out the underlined pieces of information:

        DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors.
        Elapsed time: 0 hours 0 minutes 0 seconds. Internal database snapshot has split point
        LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001.

    Hopefully from now on you will run checkdb and understand its importance. As responsible DBAs I am sure you are already doing it; let me know how often you actually run it in your production environment. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article
