Search Results

Search found 239 results on 10 pages for 'wc'.

Page 6/10

  • How to "grep" out specific line ranges of a file

    - by Mike
    There are often times I'll grep -n whatev file to find what I'm looking for. Say the output is:

        1234: whatev 1
        5555: whatev 2
        6643: whatev 3

    If I want to then just extract the lines between 1234 and 5555, is there a tool to do that? For static files I have a script that does wc -l on the file and then does the math to split it out with tail & head, but that doesn't work out so well with log files that are constantly being written to.
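    A minimal sketch of one approach, using sed's address ranges (1234 and 5555 being the line numbers grep reported; assumes GNU or BSD sed):

        # print only lines 1234 through 5555, then quit so sed never reads the rest
        sed -n '1234,5555p;5555q' file

    The trailing 5555q makes sed exit after the last wanted line, which also avoids scanning to the end of a log that is still being appended to.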

    Read the article

  • output redirection in UNIX

    - by Happy Mittal
    I am a beginner in UNIX and I am having some difficulty with input/output redirection.

        ls -l > temp
        cat temp

    Why is the temp file itself shown in the listing, and moreover, why does it show 0 characters?

        wc temp > temp
        cat temp

    Here the output is "0 0 0 temp". Why are the lines, words and characters all 0? Please help me understand this concept.
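    A short demonstration of what is happening: the shell creates (or truncates) the redirection target before it runs the command on the left of the >, so that command always sees an empty temp:

        ls -l > temp      # the shell creates temp first, so ls lists it with 0 characters
        wc temp > temp    # the shell truncates temp before wc reads it, hence "0 0 0 temp"
        wc temp > temp2   # redirect to a different file and the real counts appear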

    Read the article

  • Data extract from website URL

    - by user2522395
    With the script below I am able to extract all links of a particular website, but I need to know how I can extract data from the extracted links, especially things like email addresses and phone numbers if they are there. Please help me modify the existing script to get this result, or provide a full sample script if you have one.

        Private Sub btnGo_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnGo.Click
            'url must be in this format: http://www.example.com/
            Dim aList As ArrayList = Spider("http://www.qatarliving.com", 1)
            For Each url As String In aList
                lstUrls.Items.Add(url)
            Next
        End Sub

        Private Function Spider(ByVal url As String, ByVal depth As Integer) As ArrayList
            'aReturn is used to hold the list of urls
            Dim aReturn As New ArrayList
            'aStart is used to hold the new urls to be checked
            Dim aStart As ArrayList = GrabUrls(url)
            'temp array to hold data being passed to new arrays
            Dim aTemp As ArrayList
            'aNew is used to hold new urls before being passed to aStart
            Dim aNew As New ArrayList
            'add the first batch of urls
            aReturn.AddRange(aStart)
            'if depth is 0 then only return 1 page
            If depth < 1 Then Return aReturn
            'loops through the levels of urls
            For i = 1 To depth
                'grabs the urls from each url in aStart
                For Each tUrl As String In aStart
                    'grabs the urls and returns non-duplicates
                    aTemp = GrabUrls(tUrl, aReturn, aNew)
                    'add the urls to be checked to aNew
                    aNew.AddRange(aTemp)
                Next
                'swap urls to aStart to be checked
                aStart = aNew
                'add the urls to the main list
                aReturn.AddRange(aNew)
                'clear the temp array
                aNew = New ArrayList
            Next
            Return aReturn
        End Function

        Private Overloads Function GrabUrls(ByVal url As String) As ArrayList
            'will hold the urls to be returned
            Dim aReturn As New ArrayList
            Try
                'regex string used: thanks google
                Dim strRegex As String = "<a.*?href=""(.*?)"".*?>(.*?)</a>"
                'i used a webclient to get the source; web requests might be faster
                Dim wc As New WebClient
                'put the source into a string
                Dim strSource As String = wc.DownloadString(url)
                Dim HrefRegex As New Regex(strRegex, RegexOptions.IgnoreCase Or RegexOptions.Compiled)
                'parse the urls from the source
                Dim HrefMatch As Match = HrefRegex.Match(strSource)
                'used later to get the base domain without subdirectories or pages
                Dim BaseUrl As New Uri(url)
                'while there are urls
                While HrefMatch.Success = True
                    'loop through the matches
                    Dim sUrl As String = HrefMatch.Groups(1).Value
                    'if it's a page or sub directory with no base url (domain)
                    If Not sUrl.Contains("http://") AndAlso Not sUrl.Contains("www") Then
                        'add the domain plus the page
                        Dim tURi As New Uri(BaseUrl, sUrl)
                        sUrl = tURi.ToString
                    End If
                    'if it's not already in the list then add it
                    If Not aReturn.Contains(sUrl) Then aReturn.Add(sUrl)
                    'go to the next url
                    HrefMatch = HrefMatch.NextMatch
                End While
            Catch ex As Exception
                'catch ex here. I left it blank while debugging
            End Try
            Return aReturn
        End Function

        Private Overloads Function GrabUrls(ByVal url As String, ByRef aReturn As ArrayList, ByRef aNew As ArrayList) As ArrayList
            'overloads function to check duplicates in aNew and aReturn
            'temp url arraylist
            Dim tUrls As ArrayList = GrabUrls(url)
            'used to return the list
            Dim tReturn As New ArrayList
            'check each item to see if it exists, so as not to grab the urls again
            For Each item As String In tUrls
                If Not aReturn.Contains(item) AndAlso Not aNew.Contains(item) Then
                    tReturn.Add(item)
                End If
            Next
            Return tReturn
        End Function
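    As a rough starting point for the data-extraction part (a sketch only: the patterns below are simplified, will miss some valid addresses and match some invalid ones), you can fetch each extracted link and grep the source for email- and phone-shaped strings. Shown here as shell commands so the patterns are easy to test before porting them into the VB.NET code:

        # fetch a page and pull out email-like strings (simplified pattern; assumes curl is available)
        curl -s http://www.example.com/ | grep -Eo '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' | sort -u

        # crude phone-number match: 7+ digits allowing spaces, dashes, dots and parentheses
        curl -s http://www.example.com/ | grep -Eo '\+?[0-9][0-9()[:space:].-]{6,}[0-9]' | sort -u

    The same patterns can be dropped into a second Regex in GrabUrls and applied to strSource instead of the anchor-tag pattern.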

    Read the article

  • MEF C# Service - DLL Updating

    - by connerb
    Currently, I have a C# service that runs off of many .dlls and imports modules/plugins at startup. I would like to create an update system that stops the service, deletes any files it is told to delete (old versions), downloads new versions from a server, and starts the service again. I believe I have coded this correctly except for the delete part: as long as I am not overwriting anything, the file will download, but if I try to overwrite something it won't work, which is why I am trying to delete it beforehand. However, when I call File.Delete() on the path I want, it gives me "Access to the path is denied". Here is my code:

        new Thread(new ThreadStart(() =>
        {
            ServiceController controller = new ServiceController("client");
            controller.Stop();
            controller.WaitForStatus(ServiceControllerStatus.Stopped);
            try
            {
                if (um.FilesUpdated != null)
                {
                    foreach (FilesUpdated file in um.FilesUpdated)
                    {
                        if (file.OldFile != null)
                        {
                            File.Delete(Path.Combine(Utility.AssemblyDirectory, file.OldFile));
                        }
                        if (file.NewFile != null)
                        {
                            wc.DownloadFile(cs.UpdateUrl + "/updates/client/" + file.NewFile, Path.Combine(Utility.AssemblyDirectory, file.NewFile));
                        }
                    }
                }
                if (um.ModulesUpdated != null)
                {
                    foreach (ModulesUpdated module in um.ModulesUpdated)
                    {
                        if (module.OldModule != null)
                        {
                            File.Delete(Path.Combine(cs.ModulePath, module.OldModule));
                        }
                        if (module.NewModule != null)
                        {
                            wc.DownloadFile(cs.UpdateUrl + "/updates/client/modules/" + module.NewModule, Path.Combine(cs.ModulePath, module.NewModule));
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                Logger.log(ex);
            }
            controller.Start();
        })).Start();

    I believe it is because the files are in use, but I can't seem to unload them. I thought stopping the service would work, but apparently not. I have also checked the files and they are not read-only (but the folder, which is located in Program Files, is, and I couldn't get it to not be read-only either programmatically or manually). The service is also being run as an administrator (NT AUTHORITY\SYSTEM). I've read about unloading the AppDomain, but AppDomain.Unload(AppDomain.CurrentDomain) threw an exception as well. I'm not sure if this is a problem with MEF or with my program not having the correct permissions... I would assume it's mainly because the file is in use.

    Read the article

  • "Possible SYN flooding" in log despite low number of SYN_RECV connections

    - by al4
    Recently we had an Apache server which was responding very slowly due to SYN flooding. The workaround was to enable tcp_syncookies (net.ipv4.tcp_syncookies=1 in /etc/sysctl.conf). I posted a question about this here if you want more background.

    After enabling syncookies we started seeing the following message in /var/log/messages approximately every 60 seconds:

        [84440.731929] possible SYN flooding on port 80. Sending cookies.

    Vinko Vrsalovic informed me that this means the SYN backlog is getting full, so I raised tcp_max_syn_backlog to 4096. At some point I also lowered tcp_synack_retries to 3 (down from the default of 5) by issuing sysctl -w net.ipv4.tcp_synack_retries=3. After doing this, the frequency seemed to drop, with the interval of the messages varying between roughly 60 and 180 seconds. Next I issued sysctl -w net.ipv4.tcp_max_syn_backlog=65536, but I am still getting the message in the log.

    Throughout all this I've been watching the number of connections in the SYN_RECV state (by running watch --interval=5 'netstat -tuna | grep "SYN_RECV" | wc -l'), and it never goes higher than about 240, much much lower than the size of the backlog. Yet I have a Red Hat server which hovers around 512 (the limit on that server is the default of 1024).

    Are there any other TCP settings which would limit the size of the backlog, or am I barking up the wrong tree? Should the number of SYN_RECV connections in netstat -tuna correlate to the size of the backlog?

    Update

    As best I can tell I'm dealing with legitimate connections here; netstat -tuna | wc -l hovers around 5000. I've been researching this today and found this post from a last.fm employee, which has been rather useful. I've also discovered that tcp_max_syn_backlog has no effect when syncookies are enabled (as per this link).

    So as a next step I set the following in sysctl.conf:

        net.ipv4.tcp_syn_retries = 3        # default=5
        net.ipv4.tcp_synack_retries = 3     # default=5
        net.ipv4.tcp_max_syn_backlog = 65536 # default=1024
        net.core.wmem_max = 8388608         # default=124928
        net.core.rmem_max = 8388608         # default=131071
        net.core.somaxconn = 512            # default=128
        net.core.optmem_max = 81920         # default=20480

    I then set up my response-time test, ran sysctl -p and disabled syncookies with sysctl -w net.ipv4.tcp_syncookies=0. After doing this the number of connections in the SYN_RECV state still remained around 220-250, but connections were starting to delay again. Once I noticed these delays I re-enabled syncookies and the delays stopped.

    I believe what I was seeing was still an improvement over the initial state, but some requests were still delayed, which is much worse than having syncookies enabled. So it looks like I'm stuck with them enabled until we can get some more servers online to cope with the load. Even then, I'm not sure I see a valid reason to disable them again, as they're only sent (apparently) when the server's buffers get full. But the SYN backlog doesn't appear to be full with only ~250 connections in the SYN_RECV state! Is it possible that the SYN flooding message is a red herring and it's something other than the syn_backlog that's filling up?

    If anyone has any other tuning options I haven't tried yet, I'd be more than happy to try them out, but I'm starting to wonder if the syn_backlog setting isn't being applied properly for some reason.
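    One way to check the backlog that is actually in effect (a diagnostic sketch; listen backlogs are also capped by what the application passes to listen() and by net.core.somaxconn, so the sysctl alone may not be what applies):

        # for LISTEN sockets, ss shows the configured accept-queue limit in the Send-Q column
        ss -ltn 'sport = :80'

    If the Send-Q shown for port 80 is far below 65536, the limit being hit is the one Apache passed to listen() (ListenBacklog, default 511), not tcp_max_syn_backlog.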

    Read the article

  • Windows 7 Drivers for Hp Tx2000 tablet Pc

    - by iceman
    Has anyone tried the new tablet features of Windows 7 on the HP tablet TX2001au: http://h10025.www1.hp.com/ewfrf/wc/product?product=3653674&lc=en&dlc=&cc=us&lang=&softwareitem=ob-77258-1&dest_page=product&os=4062 ? HP has only Vista drivers and no Windows 7 drivers yet: link and link. The main issues after installing Windows 7 are that the graphics card (Nvidia GeForce Go 6150) and the coprocessor are not detected. The latest 191.07 GeForce/ION driver installer says that it cannot find any compatible hardware. By the way, I still haven't been able to get XP running on this; the Nvidia GeForce Go 6150 driver seems to be customized by HP! If you install the laptopvideo2go.com drivers on XP (after MS nags that it can't verify the publisher of the driver), it shows the blue screen of death. Is there a way to debug the BSOD?

    Read the article

  • error: cannot fork() for status: Resource temporarily unavailable (git)

    - by Elnaz Shahmehr
    Whenever I want to do something in git (add, remove, pull, push to GitHub), I get this error in my terminal. Thanks in advance!

        selnaz:iOS-Tidinfo Lnaz$ git add .
        error: cannot fork() for status: Resource temporarily unavailable
        fatal: Could not run git status --porcelain
        fatal: git status --porcelain failed
        fatal: git status --porcelain failed
        fatal: git status --porcelain failed
        fatal: git status --porcelain failed
        fatal: git status --porcelain failed
        fatal: git status --porcelain failed

    Edit:

        selnaz:iOS-Tidinfo Lnaz$ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 256
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 709
        virtual memory          (kbytes, -v) unlimited

    Edit 2:

        selnaz:iOS-Tidinfo Lnaz$ ps xfu | wc -l
        ps: illegal option -- f
        usage: ps [-AaCcEefhjlMmrSTvwXx] [-O fmt | -o fmt] [-G gid[,gid...]]
                  [-u] [-p pid[,pid...]] [-t tty[,tty...]] [-U user[,user...]]
               ps [-L]
        0
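    This is macOS, so the f in ps xfu fails (BSD ps has no f option). A couple of diagnostic commands that should work there (a sketch: fork() failing with "Resource temporarily unavailable" usually means the per-user process limit, 709 above, has been hit):

        # count your processes with BSD-style flags
        ps xu | wc -l

        # raise the per-user process limit for the current shell session
        ulimit -u 1024

    If the count is close to 709, find and quit whatever is spawning all those processes; raising the limit only buys headroom.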

    Read the article

  • How to run sed on over 10 million files in a directory?

    - by Sandro
    I have a directory that has 10144911 files in it. So far I've tried the following:

        for f in `ls`; do sed -i -e 's/blah/blee/g' $f; done

    (the ls is in backticks) - crashed my shell.

        ls | xargs -0 sed -i -e 's/blah/blee/g'

    Too many args for sed.

        find . -name "*.txt" -exec sed -i -e 's/blah/blee/g' {} \;

    Couldn't fork any more: no more memory.

    Any other ideas on how to create this kind of command? The files don't need to communicate with each other. ls | wc -l seems to work (very slowly), so it must be possible.
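    A sketch of a batched approach that avoids both the argument-list limit and one fork per file (assumes GNU find/xargs and that the files are all in this one directory):

        # NUL-delimited so odd filenames survive; xargs batches many files per sed invocation
        find . -maxdepth 1 -type f -print0 | xargs -0 -n 500 sed -i -e 's/blah/blee/g'

    The original ls | xargs -0 attempt fails because ls does not emit NUL-delimited output; -print0 and -0 are the matching pair. Capping each batch with -n 500 keeps any single sed invocation's argument list, and memory use, modest.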

    Read the article

  • Is there a maximum of open files per process in Linux?

    - by Malax
    My question is pretty simple and is actually stated in the title. One of my applications throws "too many open files" errors at me, even though the limit for the user the application runs as is higher than the default of 1024 (lsof -u $USER reports 3000 open fds). Because I cannot imagine why this happens, I guess there might be a maximum per process. Any idea is very much appreciated!

    Edit: some values that might help...

        root@Debian-60-squeeze-64-minimal ~ # ulimit -n
        100000
        root@Debian-60-squeeze-64-minimal ~ # tail -n 4 /etc/security/limits.conf
        myapp soft nofile 100000
        myapp hard nofile 1000000
        root  soft nofile 100000
        root  hard nofile 1000000
        root@Debian-60-squeeze-64-minimal ~ # lsof -n -u myapp | wc -l
        2708
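    Yes: the nofile limit is itself a per-process limit (RLIMIT_NOFILE), and the limit a running process is actually subject to can differ from what ulimit shows in a fresh shell, because limits are inherited at fork time (a daemon started at boot may never have seen your limits.conf changes). A way to check the live process directly (process name hypothetical):

        # show the limits the running process actually has
        grep 'open files' /proc/$(pgrep -o myapp)/limits

        # count that process's open fds directly, without lsof
        ls /proc/$(pgrep -o myapp)/fd | wc -l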

    Read the article

  • Unix Interview Question

    - by Rachel
    I am doing some interviews right now, and recently I was asked these questions in an interview and was not sure of the answers. In your opinion, are these kinds of questions worthwhile for an interview process, and if yes, how would you go about approaching them?

    How do you get the number of files in a directory without using wc?
    How do you get all files in descending order of size?
    What is the significance of ? in file searching?

    I would appreciate it if you could provide answers to these questions so that I can learn something from them, as I am not sure about them.
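    For reference, hedged sketches of the usual answers (many valid variants exist):

        # files in a directory without wc: let grep or find do the counting
        ls -1 | grep -c .
        find . -maxdepth 1 -type f | grep -c .

        # all files in descending order of size
        ls -lS
        du -a . | sort -rn

    The ? in shell globbing (and in find -name patterns) matches exactly one character, where * matches any number; e.g. file?.txt matches file1.txt but not file10.txt.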

    Read the article

  • Archive Recorded Show from DVR

    - by BillN
    If I have an AT&T U-verse DVR, is there a way to copy the information off the DVR to a PC for archiving/burning to a DVD? I know that I could connect the A/V outs on the DVR to the A/V inputs on a DVD recorder, or to a capture card on a PC, and play the show while recording in real time on the DVD or PC, but if the show is, say, a World Cup game, this would take 2-3 hours each. My thought is that the DVR is just a hard drive, and since the files have to be stored in a digital format, it should be possible to do a copy of some sort. I'm currently recording all of the WC games, but if I don't watch them right away, I'm concerned that I will push out other programming that the rest of the family has recorded, making me unpopular at home. Note: I'm talking about archiving for personal use, not redistribution or anything.

    Read the article

  • VFS: file-max limit 1231582 reached

    - by Rick Koshi
    I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors. Things like:

        ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23

    Yes, my system can't consistently run an 'ls' command. :( I note several errors in my dmesg output:

        # dmesg | tail
        [2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null)
        [2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000]
        [4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null)
        [4982666.631444] VFS: file-max limit 1231582 reached
        [4982666.764240] VFS: file-max limit 1231582 reached
        [4982767.360574] VFS: file-max limit 1231582 reached
        [4982901.904628] VFS: file-max limit 1231582 reached
        [4982964.930556] VFS: file-max limit 1231582 reached
        [4982966.352170] VFS: file-max limit 1231582 reached
        [4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000]

    Obviously, the file-max errors look suspicious, being clustered together and recent.

        # cat /proc/sys/fs/file-max
        1231582
        # cat /proc/sys/fs/file-nr
        1231712 0       1231582

    That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network.

        # lsof | wc
        16046  148253 1882901
        # ps -ef | wc
        574  6104 44260

    I saw some documentation saying:

        file-max & file-nr: The kernel allocates file handles dynamically, but as yet it doesn't free them again. The value in file-max denotes the maximum number of file handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit. Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles. Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached".

    My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal to me to have Linux systems stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly-idle system is holding over a million files open.

    Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was/is back.)

    Thanks in advance for any help.

    And another update: My system was basically unusable, so I decided I had no option but to reboot. But before I did, I carefully quit one process at a time, checking /proc/sys/fs/file-nr after each termination. I found that, predictably, the number of open files gradually went down as I closed things down. Unfortunately, it wasn't a large effect. Yes, I was able to clear up 5000-10000 open files, but there were still over 1.2 million left. I shut down just about everything: all interactive shells (except for the one ssh I left open to finish closing down), httpd, even nfs service. Basically everything in the process table that wasn't a kernel process, and there were still an appalling number of files apparently left open.

    After the reboot, I found that /proc/sys/fs/file-nr showed about 2000 files open, which is much more reasonable. Starting up 2 Xvnc sessions as usual, along with the dozen or so monitoring windows I like to keep open, brought the total up to about 4000 files. I can see nothing wrong with that, of course, but I've obviously failed to identify the root cause. I'm still looking for ideas, since I definitely expect it to happen again.

    And another update, the next day: I watched the system carefully, and discovered that /proc/sys/fs/file-nr showed a growth of about 900 open files per hour. I shut down the system's only NFS client for the night, and the growth stopped. Mind you, it didn't free up the resources, but it did at least stop consuming more. Is this a known bug with NFS? I'll be bringing the NFS client back online today, and I'll narrow it down further.

    If anyone is familiar with this behavior, feel free to jump in with "Yeah, NFS4 has this problem, go back to NFS3" or something like that.
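    One way to see whether the handles belong to any user-space process at all (a diagnostic sketch; handles held inside the kernel, for example by the in-kernel NFS client or server, do not show up under any PID, which would be consistent with NFS being the culprit):

        # open fds per process, largest first
        for p in /proc/[0-9]*; do
            printf '%s %s\n' "$(ls "$p/fd" 2>/dev/null | wc -l)" "${p##*/}"
        done | sort -rn | head

    If the per-process totals come nowhere near the file-nr figure, the missing handles are being held inside the kernel rather than by processes, which matches what you saw when shutting everything down.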

    Read the article

  • squid running out of sockets

    - by drscroogemcduck
    I have a setup where squid sits in front of a java server and acts as a reverse proxy. Recently I load tested the site, and if I fire 100 threads at it, each making a request using jmeter, I start getting errors in my load-test tool like "no route to host", even though the load-test tool and the server are on the same machine. If I run the following command, where port 82 is the port my squid server is running on:

        netstat -ann | grep 82 | wc -l

    I get 22000 or so, and most of them are in TIME_WAIT. I'm thinking that maybe the huge number of sockets in the TIME_WAIT state is starving the box of resources.
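    A couple of knobs that usually help with TIME_WAIT exhaustion on Linux (a sketch; test under load before relying on them in production):

        # allow reusing sockets in TIME_WAIT for new outbound connections
        sysctl -w net.ipv4.tcp_tw_reuse=1

        # widen the ephemeral port range available for outbound connections
        sysctl -w net.ipv4.ip_local_port_range='1024 65535'

    Also note that grep 82 matches "82" anywhere in the line (addresses, other ports, PIDs); grep ':82 ' against the local-address column gives a more truthful count.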

    Read the article

  • apache2 slow responding (debian)

    - by baloo
    I'm running an apache2 2.2.9 web server with mod_python and mpm_worker_module. The current config for the MPM is:

        ServerLimit          32
        StartServers         10
        MaxClients          800
        MinSpareThreads      25
        MaxSpareThreads      75
        ThreadsPerChild      25
        MaxRequestsPerChild   0

    The server has 1 GB of RAM and a 100 Mbit connection. Checking netstat -na | grep ESTABLISHED | wc -l gives me a number between 50 and 60. The load is about 1.0. Every page load is also cached by memcached. I can't see why the server is so slow in responding to new connections, sometimes dropping them completely. I also tried disabling iptables to make sure it's not because of a full state table or something like that. The only thing in dmesg is a lot of spam about "TCP: Treason uncloaked!"
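    One quick way to see whether the worker pool itself is saturated (assuming mod_status is enabled at its stock handler location):

        # machine-readable scoreboard: BusyWorkers/IdleWorkers tell you if MaxClients is the bottleneck
        curl -s 'http://localhost/server-status?auto'

    With ThreadsPerChild 25 and ServerLimit 32 the configuration tops out at exactly the MaxClients of 800 threads, so if BusyWorkers is nowhere near 800 the stall is more likely downstream (mod_python requests blocking on the backend) than in the MPM limits.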

    Read the article

  • Calculate disk space occupied by many .png files

    - by Alexander Farber
    I have 357 .png files located in different subdirectories of the current dir:

        settings# find . -name \*.png | wc -l
        357
        settings# find . -name \*.png | head
        ./assets/authenticationIcons/audio.png
        ./assets/authenticationIcons/bbid.png
        ./assets/authenticationIcons/camera.png
        ./bin/icons/ca_video_chat.png
        ./bin/icons/ca_voice_control.png
        ./bin/icons/ca_vpn.png
        ./bin/icons/ca_wifi.png

    Is there a one-liner to calculate the total disk space occupied by them (before I pngcrush them)? I've tried, unsuccessfully (this prints per-file sizes rather than a total):

        settings# find . -name \*.png | xargs du -s
        4 ./assets/support/wifi_locked_icon_white.png
        1 ./assets/support/wifi_vpn_icon_connected.png
        1 ./assets/support/wi_fi.png
        1 ./assets/support/wi_fi_conected.png
        8 ./bin/blackberry-tablet-icon.png
        2 ./bin/icons/ca_about.png
        2 ./bin/icons/ca_accessibility.png
        2 ./bin/icons/ca_accounts.png
        2 ./bin/icons/ca_airplane_mode.png
        2 ./bin/icons/ca_application_permissions.png
        1 ./bin/icons/ca_balance.png
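    A sketch of one way to get a single grand total (assumes GNU du, which understands --files0-from; -c appends a "total" line):

        find . -name '*.png' -print0 | du -ch --files0-from=- | tail -n 1

    On systems without GNU du, summing apparent file sizes by hand also works (note du reports disk blocks, stat reports bytes; stat -c %s is GNU, BSD/macOS would use stat -f %z):

        find . -name '*.png' -exec stat -c %s {} + | awk '{s+=$1} END {print s " bytes"}'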

    Read the article

  • Check if folders exist in Git repository... testing if a sub-string exists in bash with NULL as a separator

    - by Craig Francis
    I have a common git "post-receive" script for several projects, and it needs to perform different actions if an /app/ or /public/ folder exists in the root. Using:

        FOLDERS=`git ls-tree -d --name-only -z master`;

    I can see the directory listing, and I would like to use the RegExp support in bash to run something like:

        if [[ "$FOLDERS" =~ app ]]; then
            ...
        fi

    But that won't work if there was something like an "app lication" folder... I specified the "-z" option in the git "ls-tree" command so I could use the \0 (null) character as a separator, but I'm not sure how to test for that in the bash RegExp. Likewise, I know there is support for specifying a particular path in the ls-tree command, and I could then pipe that to "wc -l", but I'd have thought it was quicker to get a full directory listing of the root (not recursive) and then test for the 2 (or more) folders within the returned output.

    Possibly related to: http://stackoverflow.com/questions/7938094/git-how-to-check-which-files-exist-and-their-content-in-a-shared-bare-repos
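    One wrinkle worth knowing: bash command substitution cannot store NUL bytes in a variable, so the \0 separators are already gone from $FOLDERS by the time the RegExp runs. A sketch that sidesteps the problem by matching whole lines instead (newline-separated output, grep -x for an exact full-line match):

        if git ls-tree -d --name-only master | grep -qx 'app'; then
            echo "top-level app/ exists"   # will not fire for "app lication"
        fi

    Because -x anchors the match to the entire line, spaces embedded in other folder names cannot cause false positives.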

    Read the article

  • How can I get the size of an Amazon S3 bucket?

    - by Garret Heaton
    I'd like to graph the size (in bytes, and # of items) of an Amazon S3 bucket and am looking for an efficient way to get the data. The s3cmd tools provide a way to get the total file size using s3cmd du s3://bucket_name, but I'm worried about its ability to scale since it looks like it fetches data about every file and calculates its own sum. Since Amazon charges users in GB-Months it seems odd that they don't expose this value directly. Although Amazon's REST API returns the number of items in a bucket, [s3cmd] doesn't seem to expose it. I could do s3cmd ls -r s3://bucket_name | wc -l but that seems like a hack. The Ruby AWS::S3 library looked promising, but only provides the # of bucket items, not the total bucket size. Is anyone aware of any other command line tools or libraries (prefer Perl, PHP, Python, or Ruby) which provide ways of getting this data?
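    If the newer AWS command line tool is available, it can produce both figures in one call (a sketch, assuming credentials are configured; under the hood this is still a LIST over every object, so it has the same scaling caveat as s3cmd du):

        # prints "Total Objects:" and "Total Size:" at the end of the listing
        aws s3 ls s3://bucket_name --recursive --summarize | tail -n 2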

    Read the article

  • I/O redirection using cygwin and mingw

    - by KLee1
    I have written a program in C and have compiled it using MinGW. When I try to run that program in Cygwin, it seems to behave normally (i.e. prints correct output etc.). However, when I try to pipe the output to another program so that I can parse information from it, the piping does not seem to work: the second program receives no input. I have confirmed this by using the following commands.

    This command seems to work fine:

        ./prog

    Performing this command returns nothing:

        ./prog | cat

    This command verifies the first:

        ./prog | wc

    which returns:

        0 0 0

    I know that the script (including the piping from the program) works perfectly fine in an all-Linux environment. Does anyone have any idea why the piping isn't working in Cygwin? Thanks!
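    A quick way to narrow this down (a diagnostic sketch: redirecting to a file makes stdout non-interactive just like a pipe does, so it separates stdio buffering problems from Cygwin pipe problems):

        # if out.txt is empty too, the issue is stdio buffering (or an abnormal exit) in prog
        # itself, not the Cygwin pipe; fflush(stdout) or setvbuf() in the C source would be the fix
        ./prog > out.txt; wc out.txt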

    Read the article

  • TOP CPU usage for whole system

    - by heike
    I am using a machine where cat /proc/cpuinfo | grep processor | wc -l returns 8. I am trying to load the server using a load generator that I wrote, and to capture the idle figure reported by the TOP command (the software being tested on the server runs as root). Applying the load as an increasing step function, I capture the idle state every second and look at the result. The strange thing is that when I increase the load every minute, the idle figure is in fact increasing (??). This honestly does not make sense... I thought that with more load, the idle figure would decrease and the CPU usage would increase. Is there any reasonable explanation for this behaviour, maybe in the server utilization itself? Thanks for any feedback. (OK, no idea about the down vote, but I have looked into this behaviour a lot and cannot find anything reasonable to explain it.)
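    In case it helps reproduce this, per-CPU idle can be sampled without top (mpstat comes with sysstat; on an 8-core box the all-CPU average can stay high even while one core saturates):

        # one line per CPU per second; %idle is the last column
        mpstat -P ALL 1

    Note also that top's %id is an average over its refresh interval, so a load that arrives in bursts shorter than the interval can easily coexist with a high reported idle figure.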

    Read the article

  • HP C4400 is paper out

    - by borges
    I have an HP C4400 printer and it claims to be out of paper (which is not true) and doesn't even try to pull the paper in. I have already followed these instructions with no success: http://h10025.www1.hp.com/ewfrf/wc/document?docname=c00786157&cc=us&lc=en&dlc=en&product=3418705

    Two strange things: I used the printer successfully minutes before this problem, and it prints its test page successfully (by pressing the cancel and color buttons simultaneously). How can I fix this problem?

    PS: Sorry about my poor English.

    Read the article

  • Apache has many PHP session files

    - by PiTheNumber
        # ls /var/lib/php5 | wc -l
        7488
        # ls -la
        -rw------- 1 wwwrun www  0 Nov  9 15:30 sess_vtuh671rlafdidfjmgjfu6065p4tfieg
        -rw------- 1 wwwrun www  0 Nov 12 02:30 sess_vu9pn476oiqbsd20q4s2brt60b9vg90d
        -rw------- 1 wwwrun www  0 Nov  9 15:07 sess_vuonfs2cqsdiq8ja51ornh6lp5j9mf93
        -rw------- 1 wwwrun www  0 Nov  9 16:02 sess_vuutcad8as55il34db3uqhqrsltd4q6o
        -rw------- 1 wwwrun www  0 Nov  9 23:26 sess_vv2mrv5dnlnts6das4g5jlfldael4l0e
        -rw------- 1 wwwrun www 44 Nov  9 20:35 sess_vvc0cfjuvk3lqb5m97fv6gsmv6bjhsdk
        -rw------- 1 wwwrun www  0 Nov  9 10:33 sess_vvq82fhj9lg29gaejemlb2lrk25mqv7d
        -rw------- 1 wwwrun www  0 Nov  9 20:36 sess_vvtd4ka8rfmcroa34unl06916ubj8sb9

    Most of them are empty. There are not that many users on the server, so I wonder where those files came from. Is this a problem, or how does Apache handle those files? Do they get deleted automatically? Could this be caused by a bad PHP file?
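    For background: PHP creates one sess_* file per session ID (every visitor, crawlers included, that triggers session_start() gets one), and cleanup is governed by session.gc_maxlifetime plus the gc_probability/gc_divisor lottery; on some distributions file-based GC is disabled in favour of a cron job. A hedged manual-cleanup sketch, matching the default 1440-second lifetime:

        # remove session files untouched for more than 24 minutes (run as root; path as in the question)
        find /var/lib/php5 -maxdepth 1 -name 'sess_*' -type f -mmin +24 -delete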

    Read the article

  • Measuring custom statistics with sar

    - by Will Glass
    I have a server application which I think is leaking file handles. I want to track the usage of file descriptors over time on my Linux (Ubuntu) server. I've figured out that I can track the number of file descriptors in use by a process with lsof -p `pgrep the-process-name` | wc -l. Since I'm already using sysstat and sar to track various metrics, I thought it'd be nice to display this with sar as well; I want to measure it every 10 minutes. Is it possible to add a custom metric to sar? Then I can easily report it out. If not, I'll write a simple cron job to collect this data and store it separately in a log file.
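    As far as I know, sadc (the collector behind sar) has a fixed set of kernel counters and no hook for user-defined metrics, so the cron-job fallback is the practical route. A minimal sketch (hypothetical log path; /etc/crontab syntax with the user field, and note that % must be escaped in crontabs):

        # /etc/crontab: sample the fd count every 10 minutes
        */10 * * * * root echo "$(date '+\%F \%T') $(lsof -p "$(pgrep the-process-name)" | wc -l)" >> /var/log/fdcount.log

    For the system-wide view, sar -v already reports the kernel's file-handle counter (shown as file-nr or file-sz depending on the sysstat version), which can corroborate a per-process leak.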

    Read the article
