Search Results

Search found 239 results on 10 pages for 'wc'.

  • Executing system command in PHP differs between browser and command line

    - by Amit
    Hi, I have to execute a Linux "more" command in PHP from a particular offset, format the result, and display it in the browser. My code for this is: <html> <head> <META HTTP-EQUIV=REFRESH CONTENT=10> <META HTTP-EQUIV=PRAGMA CONTENT=NO-CACHE> <title>Runtime Access log</title> </head> <body> <?php $moreCommand = "more +3693 /var/log/apache2/access_log | grep -v -e '.jpg' -e '.jpeg' -e '.css' -e '.js' -e '.bmp' -e '.ico'| wc -l"; exec($moreCommand, $accessDisplay); echo "<br/>No of lines are : $accessDisplay[0] <br/>"; ?> The output in the browser is: No of lines are : 3428 (this is wrong). Executing the same command from the command line gives a different output. My code snippet for that is: <?php $moreCommand = "more +3693 /var/log/apache2/access_log | grep -v -e '.jpg' -e '.jpeg' -e '.css' -e '.js' -e '.bmp' -e '.ico'| wc -l"; exec($moreCommand, $accessDisplay); echo "No of lines are : $accessDisplay[0] \n"; ?> The output at the command line is: No of lines are : 279 (this is correct). Running the command directly on the command line also gives 279. I cannot understand why the output of the same command is wrong in the browser; it is actually giving the line count of the whole file, ignoring the offset parameter. Please help!! Thanks, Amit
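
    A likely culprit, offered as a guess: "more" behaves differently when its output is not a terminal (as it is under Apache/PHP), and can ignore the +3693 offset. A terminal-independent rewrite using tail, as a sketch (same log path assumed; the dots are escaped here, since an unescaped '.jpg' matches any character before "jpg"):

        # tail -n +K prints from line K onward regardless of whether
        # stdout is a terminal, unlike "more +K".
        tail -n +3693 /var/log/apache2/access_log \
          | grep -v -e '\.jpg' -e '\.jpeg' -e '\.css' -e '\.js' -e '\.bmp' -e '\.ico' \
          | wc -l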

    Read the article

  • Images from URL to ListView

    - by Andres
    I have a ListView in which I show video results from YouTube. Everything works fine, but it seems a bit slow, and that might be due to my code. Are there any suggestions on how I can make this better? Maybe loading the images directly from the URL instead of using a WebClient? I am adding the ListView items in a loop from video feeds returned by a query using the YouTube API. The piece of code which I think is slowing it down is this: Feed<Video> videoFeed = request.Get<Video>(query); int i = 0; foreach (Video entry in videoFeed.Entries) { string[] info = printVideoEntry(entry).Split(','); WebClient wc = new WebClient(); wc.DownloadFile(@"http://img.youtube.com/vi/" + info[0].ToString() + "/hqdefault.jpg", info[0].ToString() + ".jpg"); string[] row1 = { "", info[0].ToString(), info[1].ToString() }; ListViewItem item = new ListViewItem(row1, i); YoutubeList.Items.Add(item); imageListSmall.Images.Add(Bitmap.FromFile(info[0].ToString() + @".jpg")); imageListLarge.Images.Add(Bitmap.FromFile(info[0].ToString() + @".jpg")); } public static string printVideoEntry(Video video) { return video.VideoId + "," + video.Title; } As you can see, I use a WebClient to download the images so I can then use them in my ListView. It works, but what I'm concerned about is speed. Any suggestions? Maybe a different control altogether?

    Read the article

  • Can you do this with Hudson?

    - by damian
    I want to create a Hudson job that takes an id as a parameter and uses that id to compute the svn repo path. Where I work you have an svn path for every issue that you resolve, and all the issues are then merged into a single svn path. What I want to do is run static code analysis on the individual issues, so I thought I could have an Ant build.xml that I use for every issue and parametrize the job with the issue id. I have tried to achieve that, but the svn path does not expand the parameter. I have tried #issueId, %issueId%, ${issueId} and ${env.issueId} without success. It fails with errors like: Location 'http://svn-path:8181/svn/devSet/issues/${env.chuid}' does not exist Checking out a fresh workspace because C:\Documents and Settings\dnoseda\.hudson\jobs\test\workspace\${env.chuid} doesn't exist Checking out http://svn-path:8181/svn/devSet/issues/${env.chuid} ERROR: Failed to check out http://svn-path:8181/svn/devSet/issues/${env.chuid} org.tmatesoft.svn.core.SVNException: svn: '/svn/!svn/bc/46190/devSet/issues/$%7Benv.chuid%7D' path not found: 404 Not Found (http://svn-path:8181) at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64) at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51) at I am beginning to think that I cannot do what I want. Do you know how to set up the correct configuration to achieve this? Thanks for any help. Edit: The section of the job configuration where I want to use this parameter is this: <scm class="hudson.scm.SubversionSCM"> <locations> <hudson.scm.SubversionSCM_-ModuleLocation> <remote>http://svn-path:8181/svn/devSet/issues/${env.issueid}</remote> </hudson.scm.SubversionSCM_-ModuleLocation> </locations>
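
    If the Subversion SCM plugin in this Hudson version does not expand parameters in the repository URL (a plausible cause, not confirmed), one common workaround is to leave the SCM section empty and check out in a shell build step, where job parameters arrive as environment variables. A sketch, assuming a parameterized job with a parameter named issueId and a hypothetical static-analysis Ant target:

        # Shell build step; Hudson exports job parameters as environment variables.
        rm -rf issue-checkout
        svn checkout "http://svn-path:8181/svn/devSet/issues/${issueId}" issue-checkout
        # Run the analysis against the fresh checkout ("static-analysis" is a placeholder target).
        ant -f issue-checkout/build.xml static-analysis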

    Read the article

  • Install Ubuntu 12.04 in UEFI mode on an HP Pavilion dv6-6c40ca

    - by Marlen T. B.
    I have recently (as of July 2012) bought an HP Pavilion dv6-6c40ca laptop. It came pre-installed with Windows 7 on an MBR. I installed Ubuntu 12.04 on it on a GPT partition in what I think is BIOS emulation mode. I made a BIOS-grub partition so the install didn't fail; that is what it is for, right? Now I want to upgrade to UEFI mode. How would I install Ubuntu 12.04 in UEFI mode on an HP Pavilion dv6-6c40ca? Or is it impossible? My laptop, despite being new, may not be UEFI 2.0+ capable. If it isn't, how can I install a software UEFI (i.e. a DUET such as the one by tianocore)? Or is that impossible too? A link to my laptop's specs is: http://h10025.www1.hp.com/ewfrf/wc/document?docname=c03137924&tmp_task=prodinfoCategory&cc=ca&dlc=en&lang=en&lc=en&product=5218530 My laptop should have a UEFI given this link from HP: http://h10025.www1.hp.com/ewfrf/wc/document?cc=us&lc=en&docname=c01442956#N218. From that link I draw a quote: "That means most notebooks distributed with Windows Vista, and all notebooks distributed with Windows 7, have the UEFI environment." My laptop had Windows 7 Home Premium pre-installed. OK, following the comments so far. NOTE: I am trying to do this on an external drive so I can see if it works. I have partitioned the drive using GParted as a GPT drive, created a 200MB partition at the beginning of the drive with a FAT32 file system, given that partition a label of "EFI", and set the boot flag on it. What should I do next to install Ubuntu 12.04? Given the link https://help.ubuntu.com/community/UEFIBooting#Selecting_the_.28U.29EFI_Graphic_Protocol in my first read-through (just to see if I will understand everything before I start), I get to step 2.3, "Install GRUB2 in (U)EFI systems". The first line is "Boot into Linux (any live ISO) preferably in UEFI mode." Um... how do you tell what mode your live CD is in?! And how do you change it if the mode is wrong?
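
    On that last question, a widely used check from a terminal in the live session, as a sketch (/sys/firmware/efi exists only when the kernel was booted through UEFI):

        if [ -d /sys/firmware/efi ]; then
            echo "Booted in UEFI mode"
        else
            echo "Booted in BIOS/CSM mode"
        fi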

    Read the article

  • Ubuntu bash command

    - by pedro
    Hello, I want to show the number of lines, words, and characters of all configuration files "/etc/*.conf" (command "wc"). How can I modify the command so that the error messages are not shown?
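
    One way to do this, as a sketch: send standard error to /dev/null so permission errors and other noise are hidden while the counts still print:

        # Lines, words and characters for each .conf file; errors discarded.
        wc /etc/*.conf 2>/dev/null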

    Read the article

  • How do I choose the number of connections for a load balancer?

    - by user105196
    I want to add a hardware load balancer for Apache, and I need to know how many people are connected to my server in order to choose the type of load balancer: Local Load Balancing with SSL - 250 Connections Local Load Balancing with SSL - 500 Connections Local Load Balancing with SSL - 1000 Connections I ran the following commands at the same time: netstat -nt|grep -c :443 (all connections, waiting and ESTABLISHED) result: 1208 netstat -ant | grep 443 | grep EST | wc -l (just ESTABLISHED connections) result: 106 My question: which is the correct value for choosing the load balancer, all connections or just ESTABLISHED?
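
    ESTABLISHED is usually the figure that matters for sizing concurrent connections, but a single netstat snapshot can be misleading. A sketch for sampling the peak over time (port 443 and a writable log path assumed):

        # Log the count of concurrently ESTABLISHED :443 connections once a minute;
        # the maximum over a busy day is a better sizing input than one sample.
        while true; do
            count=$(netstat -ant | grep ':443 ' | grep -c ESTABLISHED)
            echo "$(date '+%F %T') $count" >> /var/log/https-conn-count.log
            sleep 60
        done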

    Read the article

  • zsh conditional statement help

    - by Roy Rico
    Feeling kinda dumb right now: why is my conditional always true? I've tried # this should let me know what's not a directory or # symbolic link. whoa=`find ${MUSICDIR} ! -type l ! -type d | wc -l` # I would expect that if it's 0 (meaning nothing was found), # one of these statements would evaluate to false, but so far # both always evaluate to true if [[ "${whoa}" != "0" ]] if [[ ${whoa} -gt 0 ]] What am I missing?
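
    Two common causes, offered as guesses: on BSD-derived systems wc -l pads its output with leading spaces, so the string comparison against "0" is always true; and if MUSICDIR is unset, GNU find defaults to the current directory and really does find files. A sketch that guards against both:

        # Fail loudly if MUSICDIR is unset, and strip any padding wc may add.
        whoa=$(find "${MUSICDIR:?MUSICDIR is not set}" ! -type l ! -type d | wc -l | tr -d ' ')
        if (( whoa > 0 )); then
            echo "found $whoa entries that are neither directories nor symlinks"
        fi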

    Read the article

  • How to exclude directories from Mozy custom backup sets using spotlight queries

    - by bromfiets
    I would like to create custom backup sets for Mozy which exclude certain directories. For example, I would like to back up my iTunes folder but exclude all podcasts. I have created a backup set which searches in /Users/me/Music and used the query kMDItemPath == "*Podcasts*"wc to exclude all matching files. However, nothing matches. Queries which use the kMDItemFSName Spotlight attribute work fine, but any query using kMDItemPath doesn't seem to work at all. What am I doing wrong?
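
    One way to sanity-check a query before handing it to Mozy, as a sketch: mdfind runs raw Spotlight queries from the terminal, so if kMDItemPath is not actually in the Spotlight index, the first command below returns nothing either:

        # No output from the first line suggests the attribute is not queryable;
        # kMDItemFSName is the usual substitute.
        mdfind -onlyin /Users/me/Music 'kMDItemPath == "*Podcasts*"wc'
        mdfind -onlyin /Users/me/Music 'kMDItemFSName == "*Podcasts*"c'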

    Read the article

  • Monitoring Events in your BPEL Runtime - RSS Feeds?

    - by Ramkumar Menon
    @10g - It had been a while since I'd tried something different, so here's what I did this week! Whenever our developers deployed processes to the BPEL runtime, or a process got turned off due to connectivity issues, or someone retired a process, I needed to know. So here's what I did. Step 1: Downloaded the Quartz libraries and went through the documentation to understand what it takes to schedule a recurring job. Step 2: Cranked out two components using Oracle JDeveloper, within a new Web Project. a) A simple Java class named FeedUpdater that extends org.quartz.Job. All this class does is connect to your BPEL runtime [via opmn:ormi] and fetch all events that occurred in the last "n" minutes. Events? If that doesn't ring a bell, they are right there on the BPEL Console: click on "Administration > Process Log", and what you see are events. The API to retrieve the events is: //get the locator reference for the domain you are interested in. Locator l = ... //Predicate to retrieve events for the last "n" minutes WhereCondition wc = new WhereCondition(...) //get all the events you need. BPELProcessEvent[] events = l.listProcessEvents(wc); After you get these events, write them out into an RSS feed XML structure and stream it into a file that resides either in your Apache htdocs or wherever else it can be accessed via HTTP. You can read all about RSS 2.0 here. At a high level, here is how it looks: <?xml version = '1.0' encoding = 'UTF-8'?> <rss version="2.0"> <channel> <title>Live Updates from the Development Environment</title> <link>http://soadev.myserver.com/feeds/</link> <description>Live Updates from the Development Environment</description> <lastBuildDate>Fri, 19 Nov 2010 01:03:00 PST</lastBuildDate> <language>en-us</language> <ttl>1</ttl> <item> <guid>1290213724692</guid> <title>Process compiled</title> <link>http://soadev.myserver.com/BPELConsole/mdm_product/administration.jsp?mode=processLog&amp;processName=&amp;dn=all&amp;eventType=all&amp;eventDate=600&amp;Filter=++Filter++</link> <pubDate>Fri Nov 19 00:00:37 PST 2010</pubDate> <description>SendPurchaseOrderRequestService: 3.0 Time : Fri Nov 19 00:00:37 PST 2010</description> </item> ...... </channel> </rss> For writing out XML content, read through the Oracle XML Parser APIs [search around for oracle.xml.parser.v2]. b) Now that my "Job" was done, my job was half done. Next, I wrote a simple scheduler servlet that schedules the above "Job" class to be executed every "n" minutes. It is very straightforward; here is the primary section of the code: try { Scheduler sched = StdSchedulerFactory.getDefaultScheduler(); //get n and make a trigger that executes every "n" seconds Trigger trigger = TriggerUtils.makeSecondlyTrigger(n); trigger.setName("feedTrigger" + System.currentTimeMillis()); trigger.setGroup("feedGroup"); JobDetail job = new JobDetail("SOA_Feed" + System.currentTimeMillis(), "feedGroup", FeedUpdater.class); sched.scheduleJob(job,trigger); } catch(Exception ex) { ex.printStackTrace(); throw new ServletException(ex.getMessage()); } Look up the Quartz API and documentation; it will make this look much simpler. Now that both components were ready, I packaged the application into a war file and deployed it onto my application server. When the servlet initialized, the "n"-second schedule was set. From then on, the servlet kept populating the RSS feed file. I just ensured that my "Job" code keeps only the 30 latest events, so that the feed file stays small and under control [a few KBs]. Next I opened the feed XML in my browser: it requested a subscription, and there I was, watching new deployments and life-cycle events pop up on my browser toolbar every 5 (actually n) minutes! Well, you could do it in a browser or reader of your choice, or perhaps read them like you read an email in your Thunderbird!

    Read the article

  • How to make whoopsie more silent (log clutter with "online" messages)

    - by Rmano
    I know what whoopsie is from the answers to What is the 'whoopsie' process and how can I remove it? I do not want to stop error reporting, as I think error reporting is the least a user can do to help Ubuntu. But since the upgrade to 13.10, whoopsie has grown quite chatty. I have literally hundreds of messages like this in my logs: SYS: Nov 4 14:40:48 samsung-romano whoopsie[1156]: online SYS: Nov 4 14:41:56 whoopsie[1156]: last message repeated 4 times SYS: Nov 4 14:42:56 whoopsie[1156]: last message repeated 2 times SYS: Nov 4 14:43:56 whoopsie[1156]: last message repeated 2 times SYS: Nov 4 14:44:56 whoopsie[1156]: last message repeated 2 times % zgrep whoopsie /var/log/syslog*gz | wc -l 773 Is there a way to tell whoopsie to be less verbose? (The odd output format is from SLogger, a homemade program I wrote ages ago to check system log files, but this is basically the content of /var/log/syslog.)
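
    If whoopsie itself has no documented verbosity switch (I could not find one), rsyslog can drop its messages before they reach the log. A sketch, assuming the stock rsyslog shipped with Ubuntu 13.10:

        # /etc/rsyslog.d/10-drop-whoopsie.conf
        # Discard everything logged by the whoopsie daemon, then:
        #   sudo service rsyslog restart
        :programname, isequal, "whoopsie" stop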

    Read the article

  • sed problem with scripting

    - by Pablo Ramos
    I am trying to run a script that uses sed. I am running it like this: for et in 1 # 2 3 do if [ -d ET$et ]; then rm -rf ET$et; fi mkdir ET$et cd ET$et cp $home/step_$i/FDE/diabatA/run.adf . cp $home/step_$i/FDE/diabatA/mas$i.xyz . awk1=`awk '/type=fde/{print NR }' run.adf | head -1` awk2=`$(echo "$a+379" | bc -l )` sed -n "$awk1,"$awk2"p" run.adf > first awk3=`awk '/ATOMS/{print NR +1}' first` awk4=`cat mas$i.xyz | wc -l` awk4=$( echo "$awk4-1" | bc -l ) awk5=`awk "/ATOMS/{print NR +"${awk4}" }" run.adf` sed -n "$awk3,"$awk4"p" first > atoms par=$( echo "$awk4-99" | bc -l ) rho1=$(cat atoms | head -34 ) rho2=$(cat atoms | head -64 | tail -31) rho3=$(cat atoms | head -97 | tail -33) rhoall=$(cat atoms | tail -${par} ) echo -e "$rho1\n$rho2\n$rhoall" > eje done but it is telling me this: (standard_in) 1: syntax error sed: -e expression #1, char 6: unexpected `,' sed: -e expression #1, char 1: unknown command: `,' Please, I appreciate any help with this issue... Thanks Pablo
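
    For what it's worth, both errors point at awk2: $a is never set, so bc receives just "+379" and prints "(standard_in) 1: syntax error", leaving awk2 empty and the sed range malformed. The backticks around $(...) also try to execute bc's output as a command. A sketch of the minimal fix, using shell arithmetic instead:

        # $awk1 is presumably the intended variable; $a is never defined.
        awk2=$(( awk1 + 379 ))
        sed -n "${awk1},${awk2}p" run.adf > first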

    Read the article

  • File access slow after deletion of many files

    - by stefan
    I recently accidentally created millions of files in one folder (roughly 5 million) and, due to limitations, I couldn't process them correctly (maximum argument count exceeded for wc / ls and the like). So I deleted them, which took quite a while, but now they're gone. I deleted the files with a regular rm; they weren't any system files. So the files are definitively deleted, but the system is now very slow at file operations: ls, cat and tab auto-completion freeze the terminal for several seconds. Is this some sort of fragmentation issue? Is it an issue with the files somehow still being present?
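
    One thing worth checking, as a guess: on ext3/ext4 a directory inode never shrinks after entries are removed, so lookups in that folder stay slow even once it is empty. A sketch (the path is a placeholder):

        # A healthy near-empty directory is typically 4096 bytes; a size in the
        # megabytes here means the bloated directory index is still being scanned.
        ls -ld /path/to/that/folder
        # If it is huge, recreating the directory restores normal speed:
        mkdir /path/to/that/folder.new
        mv /path/to/that/folder/* /path/to/that/folder.new/ 2>/dev/null
        rmdir /path/to/that/folder && mv /path/to/that/folder.new /path/to/that/folder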

    Read the article

  • Installing Cairo to get FastRWeb working for the R gWidgetsWWW2 package

    - by hhh
    I want to install FastRWeb for R, but it requires Cairo. How can I install Cairo? compilation terminated. make: *** [xlib-backend.o] Error 1 ERROR: compilation failed for package ‘Cairo’ * removing ‘/home/xfz/R/i686-pc-linux-gnu-library/2.13/Cairo’ ERROR: dependency ‘Cairo’ is not available for package ‘FastRWeb’ * removing ‘/home/xfz/R/i686-pc-linux-gnu-library/2.13/FastRWeb’ The downloaded packages are in ‘/tmp/Rtmpno8hhF/downloaded_packages’ Warning messages: 1: In install.packages("FastRWeb", , "http://rforge.net/", type = "source") : installation of package 'Cairo' had non-zero exit status 2: In install.packages("FastRWeb", , "http://rforge.net/", type = "source") : installation of package 'FastRWeb' had non-zero exit status I cannot tell which Cairo is meant here; a search returns 16 entries. It is apparently some library. $ apt-cache search libcairo|wc 16 132 996 Perhaps related: http://stackoverflow.com/questions/9826128/r-making-r-rook-program-into-rscript-program-r http://stackoverflow.com/questions/9812547/r-gui-vizualiser-with-command-line-access-browser-based-letting-users-to-s Some related packages: FastRWeb and RServe for the gWidgetsWWW2 pkg.
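
    The failing xlib-backend.o points at missing development headers; the runtime libcairo packages that apt-cache lists are not enough to compile against. The usual fix on Debian/Ubuntu, as a sketch (package names from memory):

        # Headers needed to build the R Cairo package from source:
        sudo apt-get install libcairo2-dev libxt-dev
        # then, back in R:
        #   install.packages("Cairo")
        #   install.packages("FastRWeb", , "http://rforge.net/", type = "source")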

    Read the article

  • How to check last changes in filesystem or directory with bash?

    - by Robert Vila
    After the system unmounted the root partition, I noticed that some files are missing from the filesystem: the wifi and gwibber icons disappeared from the indicator applet. I want to check whether other files are missing, using the ls program and the locate program, which works on an index of a previous state of the filesystem. Thus, locate '/usr/share/icons/*' | xargs ls -d 2>&1 >/dev/null serves that purpose, and I can count the nonexistent files like this: locate '/usr/share/icons/*' | xargs ls -d 2>&1 >/dev/null | wc -l except for the case where filenames have blank spaces in them; and, not very surprisingly, that is the case with Ubuntu (OMG!! It is no longer "forbidden" like in the good old times). If I use: locate '/usr/share/icons/*' | xargs -Iñ ls -d 'ñ' 2>&1 >/dev/null it does not work, because there is some kind of interference between the redirections of the standard outputs and the use of the -I parameter. Can anyone please help me with this syntax or suggest another approach?
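
    Null-delimiting the stream sidesteps the whitespace problem entirely and avoids -I altogether, so the redirections keep working as before. A sketch, assuming GNU locate and findutils:

        # -0 on both sides makes spaces in filenames harmless; ls's error lines
        # (the nonexistent files) go down the pipe to wc, stdout is discarded.
        locate -0 '/usr/share/icons/*' | xargs -0 ls -d 2>&1 >/dev/null | wc -l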

    Read the article

  • Disable shutdown/suspend if another user is logged in via ssh

    - by Denwerko
    I remember that in Ubuntu versions around 9.04 it was possible to prevent a user from shutting down (and maybe suspending) the system if another user was logged in; something like PolicyKit or similar. Is it possible to do in 11.04? Thanks. Edit: if someone needs it (at their own risk), a little change in /usr/lib/pm-utils/bin/pm-action will allow a user to suspend the machine if he is the only user logged in, or when run via sudo pm-suspend. Probably not the best piece of code, but it works for now. diff -r 805887c5c0f6 pm-action --- a/pm-action Wed Jun 29 23:32:01 2011 +0200 +++ b/pm-action Wed Jun 29 23:37:23 2011 +0200 @@ -47,6 +47,14 @@ exit 1 fi +if [ "$(id -u )" == 0 -o `w -h | cut -f 1 -d " " | sort | uniq | wc -l` -eq 1 ]; then + echo "either you're root or root isn't here and you're the only user, continuing" 1>&2 + else + echo "Not suspending, root is here or there are more users" 1>&2 + exit 2 + fi + + remove_suspend_lock() { release_lock "${STASHNAME}.lock" The question still stands: is it possible to forbid shutdown or suspend when more than one user is logged in (without rewriting a system file)?
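
    A cleaner variant of the guard in the patch above, offered as a sketch (who lists login sessions; sort -u collapses the same user logged in twice):

        # Allow the action for root, or when exactly one distinct user is logged in.
        users_logged_in=$(who | awk '{print $1}' | sort -u | wc -l)
        if [ "$(id -u)" -eq 0 ] || [ "$users_logged_in" -le 1 ]; then
            echo "proceeding with suspend" 1>&2
        else
            echo "refusing: other users are logged in" 1>&2
            exit 2
        fi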

    Read the article

  • Attempting to GREP details of a Java error

    - by BOMEz
    I'm running Ubuntu 11 and I'm having some issues with grep. I have a shell script (see below) which essentially checks whether a certain Java program of mine is running and, if not, runs it. That part works out great! If my Java application throws any kind of exception, however, I would like to capture that information and email it to myself. How can I go about checking whether the call to java -jar /bin/MyApp.jar fails? I tried piping it to grep, but that doesn't seem to work. Below is the full script that I've written: #Check if MyApp.jar is running, if not run it. if [ $(ps aux | grep 'java' | grep -v grep | wc -l | tr -s "\n") -eq 0 ] then echo "PacketCapture Starting...\n" java -jar /bin/MyApp.jar echo "PacketCapture Started.\n" else echo "PacketCapture already running.\n" fi
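
    One approach, as a sketch: keep the JVM's stderr and test its exit status; an uncaught exception both prints a stack trace and normally exits nonzero. The address and log path below are placeholders, and a working local mail command is assumed:

        # Start the app in the background; mail the captured stderr if it dies.
        ( java -jar /bin/MyApp.jar 2> /tmp/myapp-err.log \
            || mail -s "MyApp.jar failed" me@example.com < /tmp/myapp-err.log ) &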

    Read the article

  • How to count differences between two files on Linux?

    - by Zsolt Botykai
    Hi all, I need to work with large files and must find the differences between two of them. I don't need the differing bits themselves, just the number of differences. For the differing rows I came up with diff --suppress-common-lines --speed-large-files -y File1 File2 | wc -l and it works, but is there a better way to do it? And how do I count the exact number of differences (with standard tools like bash, diff, awk, sed, or some old version of perl)? Thanks in advance
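
    To count changed lines rather than hunks, one idiom, as a sketch ('^[<>]' matches exactly the lines diff marks as differing on either side):

        # Lines only in File1 are prefixed '<', lines only in File2 '>'.
        diff --speed-large-files File1 File2 | grep -c '^[<>]'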

    Read the article

  • Count number of occurrences of a pattern in a file (even on same line)

    - by jrdioko
    When searching for number of occurrences of a string in a file, I generally use: grep pattern file | wc -l However, this only finds one occurrence per line, because of the way grep works. How can I search for the number of times a string appears in a file, regardless of whether they are on the same or different lines? Also, what if I'm searching for a regex pattern, not a simple string? How can I count those, or, even better, print each match on a new line?
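
    grep's -o flag answers both halves of the question, as a sketch (GNU grep; -o prints every match on its own line, including several per input line, and works with -E regexes too):

        # Every match on its own line, then count them:
        grep -o 'pattern' file | wc -l
        # Same idea with an extended regex:
        grep -Eo '[0-9]+\.[0-9]+' file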

    Read the article
