Search Results

Search found 20140 results on 806 pages for 'output formatting'.


  • Cron job checking for changes in Git repository

    - by HNygard
    We have just moved our server configs to a Git repository, so there should not be any changes in any of the repository folders. I was thinking about setting up a cron job to check for uncommitted changes. How could a cron job be set up to check for changes in a Git repository? Grepping the output of the git status command might do it, but grep and cron jobs are not my strong side. Here is sample output from git status, standing in the folder containing the git repository (e.g. /path/gitrepo/) with changed files:

        $ git status
        # On branch master
        # Changes not staged for commit:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   apache2/sites-enabled/000-default
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       apache2/conf.d/test
        no changes added to commit (use "git add" and/or "git commit -a")

    And standing in the folder when there are no changes:

        $ git status
        # On branch master
        nothing to commit (working directory clean)

    Update: being synced up with origin is not important; there should simply be no local changes. Local files that must be in place go into the .gitignore file. In addition to the server configs there are also Git repos for content (static web sites, web apps, Wordpress, etc.). None of the repositories should have local changes. We might use Puppet in the long run, since it is being used for development of one of the web apps.
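
    A minimal sketch of such a cron check, assuming the repository path and a working local mail setup; git status --porcelain prints nothing when the tree is clean:

        #!/bin/sh
        # Hypothetical checker: mails a report if the working tree is dirty.
        # Repository path and recipient are assumptions.
        REPO=/path/gitrepo
        cd "$REPO" || exit 1
        if [ -n "$(git status --porcelain)" ]; then
            git status --short | mail -s "Uncommitted changes in $REPO" root
        fi

    Installed with a crontab entry such as:

        */10 * * * * /usr/local/bin/check-gitrepo.sh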

    Read the article

  • hg clone has stopped working on my Vista box

    - by vkraemer
    I have a Windows Vista machine that had been connecting to http://hg.netbeans.org productively for a while... until recently. Lately, when I attempt to pull or clone, the operation appears to stall. I see the following messages when I attempt to clone:

        destination directory: web-main
        requesting all changes
        adding changesets

    And then... nothing happens. I have opened the Task Manager and there doesn't appear to be any significant network activity for HOURS. I can contact the server with Firefox and see the proper output, and I can clone from the repo on Solaris and/or Mac OS X, so the issue doesn't appear to be at the 'other end'. I had been running a fairly old version of Mercurial before this started happening. After it started, I upgraded to Mercurial 1.5.2, which did not help resolve the issue at all. What are the likely causes and workarounds for this?
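
    One hedged way to narrow this down: run the clone with Mercurial's own tracing enabled and see which request it hangs on:

        hg --debug --traceback clone http://hg.netbeans.org/web-main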

    Read the article

  • Symlink path can be followed manually, but `cd` returns Permission denied

    - by Ricket
    I am trying to access the directory /usr/software/test/agnostic. There are several symlinks involved in this path. As you can see from the transcript below, I am unable to cd directly to the path, but I can check each step of the way and cd through the symlinked directories until I reach the destination. Why is this? (And how do I fix it?) This is Ubuntu 12.10 with bash.

        > ls /usr/software/test/agnostic
        ls: cannot access /usr/software/test/agnostic: Permission denied
        > cd /usr/software/test
        > cd agnostic
        bash: cd: agnostic: Permission denied
        > pwd -P
        /x/eng/localtest/arch/x86_64-redhat-rhel5
        > ls -al | grep agnostic
        lrwxrwxrwx 1 root root 15 Oct 23  2007 agnostic -> noarch/agnostic
        > ls -al | grep noarch
        ...
        lrwxrwxrwx 1 root root 23 Oct 23  2007 noarch -> /x/eng/localtest/noarch
        > cd noarch
        > cd agnostic
        bash: cd: agnostic: Permission denied
        > ls -al | grep agnostic
        lrwxrwxrwx 1 5808 dip 4 Oct  5  2010 agnostic -> main
        > cd main
        > ls
        (correct output of `ls`)
        > pwd
        /usr/software/test/noarch/main
        > pwd -P
        /x/eng/localtest/noarch/main
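
    A sketch of one way to pin down the failing component: namei -l resolves each symlink in turn and prints the owner and permission bits of every directory along the way, so a missing execute (search) bit shows up immediately:

        $ namei -l /usr/software/test/agnostic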

    Read the article

  • zfs pool error, how to determine which drive failed in the past

    - by Kendrick
    I had been copying data from my pool so that I could rebuild it with a different version, to move away from Solaris 11 to something portable between FreeBSD/OpenIndiana etc. It was copying at 20 MB/s the other day, which is about all my desktop drive can handle writing from the network. Suddenly last night it went down to 1.4 MB/s. I ran zpool status today and got this:

          pool: store
         state: ONLINE
        status: One or more devices has experienced an unrecoverable error. An
                attempt was made to correct the error. Applications are unaffected.
        action: Determine if the device needs to be replaced, and clear the errors
                using 'zpool clear' or replace the device with 'zpool replace'.
           see: http://www.sun.com/msg/ZFS-8000-9P
          scan: none requested
        config:

                NAME          STATE     READ WRITE CKSUM
                store         ONLINE       0     0     0
                  raidz1-0    ONLINE       0     0     0
                    c8t3d0p0  ONLINE       0     0     2
                    c8t4d0p0  ONLINE       0     0    10
                    c8t2d0p0  ONLINE       0     0     0

    It is currently a 3 x 1 TB drive array. What tools would best be used to determine what the error was and which drive is failing? Per the admin doc: "The second section of the configuration output displays error statistics. These errors are divided into three categories: READ – I/O errors occurred while issuing a read request. WRITE – I/O errors occurred while issuing a write request. CKSUM – Checksum errors. The device returned corrupted data as the result of a read request." It said low counts could be anything from a power flux to a disk event, but gave no suggestions as to which tools to check and determine with.
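
    A hedged sketch of the usual next steps on Solaris; the checksum counts above already point at c8t4d0p0 and c8t3d0p0, and the device path for smartctl is an assumption (smartmontools must be installed):

        # per-device soft/hard/transport error counters kept by the kernel
        iostat -En
        # SMART health for the most suspect disk
        smartctl -a /dev/rdsk/c8t4d0p0

    Rising "Hard Errors" or reallocated-sector counts on one disk would make it the replacement candidate; clean counters would point instead at cabling or power.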

    Read the article

  • Let varnish send old data from cache while it's fetching a new one?

    - by mark
    I'm caching dynamically generated pages (PHP-FPM, NGINX) and have Varnish in front of them; this works very well. However, once the cache timeout is reached, I see this:

        1. new client requests page
        2. varnish recognizes the cache timeout
        3. client waits
        4. varnish fetches new page from backend
        5. varnish delivers new page to the client (and has the page cached, too,
           for the next request, which gets it instantly)

    What I would like to do is:

        1. client requests page
        2. varnish recognizes the timeout
        3. varnish delivers the old page to the client
        4. varnish fetches the new page from the backend and puts it into the cache

    In my case it's not a site where outdated information is a big problem, especially not when we're talking about a cache timeout of a few minutes. I don't want to punish users by making them wait in line; I'd rather deliver something immediately. Is that possible in some way? To illustrate, here's a sample output of running siege for 5 minutes against my server, which was configured to cache for one minute:

        HTTP/1.1,200,  1.97, 12710,/,1,2013-06-24 00:21:06
        ...
        HTTP/1.1,200,  1.88, 12710,/,1,2013-06-24 00:21:20
        ...
        HTTP/1.1,200,  1.93, 12710,/,1,2013-06-24 00:22:08
        ...
        HTTP/1.1,200,  1.89, 12710,/,1,2013-06-24 00:22:22
        ...
        HTTP/1.1,200,  1.94, 12710,/,1,2013-06-24 00:23:10
        ...
        HTTP/1.1,200,  1.91, 12709,/,1,2013-06-24 00:23:23
        ...
        HTTP/1.1,200,  1.93, 12710,/,1,2013-06-24 00:24:12
        ...

    I left out the hundreds of requests completing in 0.02 s or so. But it still concerns me that there are going to be users having to wait almost 2 seconds for their raw HTML. Can't we do any better here? (I came across "Varnish send while cache"; it sounded similar, but not exactly what I'm trying to do.)
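
    This is what Varnish's grace mode is for: keep objects past their TTL and hand out the stale copy while a refresh is fetched. A minimal sketch for Varnish 3 syntax, with the 5-minute grace windows as assumptions:

        sub vcl_recv {
            # be willing to serve objects up to 5 minutes past their TTL
            set req.grace = 5m;
        }

        sub vcl_fetch {
            # keep expired objects around 5 minutes so grace can use them
            set beresp.grace = 5m;
        }

    One caveat: in Varnish 3 the single request that triggers the refresh still waits for the backend; only the requests arriving during that fetch get the stale object. Fully background refreshes came later with Varnish 4's fetch model.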

    Read the article

  • Auto-focus xdvi after running viewdvi in Emacs with AUCTeX

    - by D Connors
    I've been using Emacs with AUCTeX to edit my LaTeX documents for a few days now, but there's something that's really bugging me. As it should, C-c C-c RET compiles the file, and repeating the command views the output in xdvi. The minor mode TeX-source-specials-mode is also enabled, so instead of opening a new window, xdvi reloads the window that's already open, brings it to the front, and jumps to wherever point was in Emacs (forward search). Now here's the problem: even though the xdvi window is brought to the front, it's not focused. Instead, the Emacs window keeps focus (and that's where any keyboard input goes). I keep forgetting that, which leads me to accidentally editing the source file while trying to navigate in xdvi. Not to mention I'm forced to alt-tab to focus xdvi, and alt-tab twice if I just want to get back to Emacs. Is there a way around this problem? I just want xdvi to be focused whenever I run the view command from Emacs.
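
    One hedged workaround sketch, assuming wmctrl is installed and that AUCTeX's view command can be pointed at a wrapper script; everything here is a placeholder to adapt:

        #!/bin/sh
        # Hypothetical viewer wrapper: run xdvi as usual, then steal focus.
        # "$@" carries whatever arguments the view command passes along.
        xdvi "$@" &
        sleep 1
        wmctrl -a xdvi   # focus the first window whose title contains "xdvi"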

    Read the article

  • Allow SFTP in iptables

    - by Kevin Orriss
    I have just purchased a VPS from Linode and am going through the setup guide. I have everything running (Apache2, PHP, MySQL etc.) but I am being denied access via SFTP when using FileZilla to upload a file. This is my second time installing the server, as I missed a section the first time. I was able to connect to my server through SFTP in FileZilla the first time; the things I missed were adding a new user and editing the iptables firewall. So it would seem that the guide I have been following has blocked SFTP but allowed SSH. Here is the iptables file:

        *filter

        # Allow all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT

        # Accept all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow all outbound traffic - you can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT

        # Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL)
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT

        # Allow SSH connections
        #
        # The -dport number should be the same port number you set in sshd_config
        #
        -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT

        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

        # Log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT

        COMMIT

    All I would like is a line I can put in there that allows SFTP over port 22. Thank you for reading this.
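
    Worth noting before adding rules: SFTP is a subsystem of SSH and rides the same TCP port 22, so the existing SSH ACCEPT rule above already covers it; a separate SFTP rule does not exist. That makes the missing user account the more likely culprit. A hedged check from a client (username and host are placeholders):

        # verbose SFTP attempt; watch where the handshake or authentication fails
        sftp -v user@your-linode-host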

    Read the article

  • linux: per-process monitor, every 10 minutes, with history access

    - by Inbar Rose
    I really didn't know a better way to ask my question, hence the horribly named question. I'll explain what I want to do; maybe that will help you help me. I would like my Linux machine to continuously monitor (every 10 minutes) all the processes on the machine. The information I require from each process is the name, CPU usage, allocated (virtual) memory, and resident (RAM) memory. The periodic reports would look something like this:

        PROCESS   CPU   RAM   VIRTUAL
        name1     %     MB    MB
        name2     %     MB    MB
        ...etc...

    These reports should be stored in such a way that I can access them later by giving a date/time scope (range). For instance, if I want to see the history of my processes from 12:00:00 1.12.12 till 12:00:00 2.12.12, it should give me the history of the processes for every 10 minutes between those date/time borders. The format of the result is not important; it will be handled by a script anyway and can be modified into anything I need. I have looked into a few things so far, among them sar, free(1), and top(1), but have not found something that clearly meets my needs. It should be a simple issue; I can already see all this information by simply looking at htop, but I need a tool that will gather the desired fields for each process every 10 minutes and then let me extract slices of that data based on date/time ranges. Note: I have limited experience with Linux, so please give detailed information. Note 2: the desired output will be something like this (after receiving the desired range):

        CPU USAGE BY PROCESS:
        proc_nameA  1,2,2,2,2,2......   numbers represent % usage every 10 minutes
        proc_nameB  4,3,3,6,1,2......

    The same idea with the other information.
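
    A minimal collector sketch for these requirements, run from cron every 10 minutes; the log directory and MB units are assumptions:

        #!/bin/sh
        # Hypothetical snapshot script: one timestamped line per process.
        ts=$(date '+%Y-%m-%d %H:%M')
        mkdir -p /var/log/proc-history
        ps -eo comm,pcpu,rss,vsz --no-headers | \
          awk -v ts="$ts" '{ printf "%s %s %s%% %.0fMB %.0fMB\n", ts, $1, $2, $3/1024, $4/1024 }' \
          >> /var/log/proc-history/$(date '+%Y%m%d').log

    Installed with a crontab entry such as:

        */10 * * * * /usr/local/bin/proc-snapshot.sh

    Extracting a time slice is then just filtering the timestamp column of the daily files with grep or awk.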

    Read the article

  • Gentoo server time issue, can't manually set time, NTPD won't correct, too big of a time difference

    - by kevingreen
    So, my server is living in the future; unfortunately I can't get lottery numbers or stock picks out of it. It thinks this is the time:

        Thu Nov  7 04:07:18 EST 2013

    Not correct. I tried to set the time manually via date in a few ways:

        # date -s "06 NOV 2013 14:48:00"
        # date 110614482013      # same output, same problem

    This outputs Wed Nov 6 14:48:00 EST 2013, but when I check the date again, it's still set to Nov 7th 04:00 or whatever. I checked my system messages, and I see this pop up often:

        Nov 7 03:54:00 www ntpd[4482]: time correction of -47927 seconds exceeds sanity limit (1000); set clock manually to the correct UTC time.

    Which makes sense; we're way off the correct time. But I can't seem to manually fix it. So now what? Also, I'm wondering whether my hardware clock is set up correctly; hwclock doesn't return any values. Would that be causing issues? This is a virtualized server, so I don't have direct access to the hypervisor, but I can talk to the people who do, assuming I can explain myself well enough. Thanks
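
    A hedged sequence for stepping a badly wrong clock; the init script name is an assumption for a Gentoo-style layout, and ntpd refuses corrections this large unless told otherwise:

        /etc/init.d/ntpd stop
        ntpd -g -q           # -g permits one arbitrarily large initial correction, -q exits after setting
        hwclock --systohc    # write the corrected system time into the hardware clock
        /etc/init.d/ntpd start

    If the clock immediately drifts back into the future after this, something outside the guest (typically the hypervisor's own time sync) is overriding it, which would be the question to take to whoever runs the host.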

    Read the article

  • fglrx-legacy-driver not seeing Radeon HD 4650 AGP

    - by Rocket Hazmat
    I am running Debian Squeeze on an old Dell Dimension 8300 box. It has an AGP Radeon HD 4650 card. I use this machine to mine bitcoins, and today I noticed that the machine had rebooted! My precious uptime! Anyway, my miner wouldn't start, so I figured I might as well update my graphics driver; maybe that would fix the issue. I went to amd.com and downloaded the newest driver (12.6 legacy), but after installing it, aticonfig gave an error:

        aticonfig: No supported adapters detected

    I uninstalled the driver and figured I'd try to install it from apt. AMD has dropped support for the HD 4000 series in fglrx, forcing me to use fglrx-legacy-driver (currently only in experimental). In order to install this, I had to update libc6 (and some other important packages, like gcc) to their wheezy versions. I finally got fglrx-legacy-driver installed, but I still got:

        aticonfig: No supported adapters detected

    Why isn't the driver finding my video card? I have a hunch it has something to do with the fact that it's an AGP video card. Here is the output of lspci -v (why does it say Kernel driver in use: fglrx_pci?):

        01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV730 Pro AGP [Radeon HD 4600 Series] (prog-if 00 [VGA controller])
                Subsystem: Advanced Micro Devices [AMD] nee ATI Device 0028
                Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 16
                Memory at e0000000 (32-bit, prefetchable) [size=256M]
                I/O ports at de00 [size=256]
                Memory at fe9f0000 (32-bit, non-prefetchable) [size=64K]
                Expansion ROM at fea00000 [disabled] [size=128K]
                Capabilities: [50] Power Management version 3
                Capabilities: [58] AGP version 3.0
                Kernel driver in use: fglrx_pci
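
    A couple of hedged checks that usually narrow this down: whether the fglrx kernel module actually loaded against the running kernel, and what the kernel's AGP layer reports at boot:

        dmesg | grep -iE 'fglrx|agp'   # module load errors and AGP bridge detection
        lsmod | grep fglrx             # is the module resident at all?

    A module built against a different kernel, or a missing agpgart bridge driver, would both produce "No supported adapters detected" despite lspci seeing the card.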

    Read the article

  • mysql disk io keeps increasing ... is that normal?

    - by trustfundbaby
    So I've been trying to figure out this disk I/O problem I have been having with my Linode VPS. Over the last day or two I've just left watch -n1 pidstat -d running in a console window (screenshot of the output not reproduced here). Monitoring it over the last few days, I've noticed that my problem lies with the init, searchd, and mysql processes. searchd is Sphinx, and all its indexes are on disk, so disk I/O there is inevitable (apparently). What I can't understand is why the disk reads (kB_rd/s) for mysql refuse to stabilize and just keep going up: they started out at 154 yesterday and have climbed ever since, while disk writes (kB_wr/s) have remained pretty constant the entire time. My VPS has only 768 MB RAM, and my MySQL db has a size of about 220 MB. After running mysqltuner.pl and reading a bit about it, I've been advised to set innodb_buffer_pool_size to 220 MB, but I simply cannot afford to do that; I have it up to 150 MB. My question is twofold. Why does the init process have that much disk reading to do? Why is mysql doing so much disk reading?
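
    A hedged way to confirm the buffer-pool theory: compare how often InnoDB has to go to disk versus how often a read is satisfied from memory:

        # Innodb_buffer_pool_reads         = logical reads that missed the pool and hit disk
        # Innodb_buffer_pool_read_requests = all logical reads
        mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'"

    If the miss counter grows steadily, the working set no longer fits in the 150 MB pool, which would explain reads creeping up as data or query mix grows.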

    Read the article

  • Failed Software RAID0 on Linux - Attempting to recover data

    - by Gizmo_the_Great
    I have a two-disk RAID0 software RAID (not hardware RAID) that is reported to have failed during boot, and my OS won't start. Using a Live CD, I get the following output:

        sudo mdadm -E /dev/sdc1 /dev/sdd1

        /dev/sdc1:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : 3710713d:fb301031:84b61247:d1d53e0f
                   Name : HP-xw9300:0
          Creation Time : Sun Sep  1 15:22:26 2013
             Raid Level : -unknown-
           Raid Devices : 0

         Avail Dev Size : 1465145328 (698.64 GiB 750.15 GB)
            Data Offset : 16 sectors
           Super Offset : 8 sectors
                  State : active
            Device UUID : ad427cd2:9f885f57:7f41015f:90f8f6af

            Update Time : Sun Jun  8 12:35:11 2014
               Checksum : a37407ff - correct
                 Events : 1

            Device Role : spare
            Array State :  ('A' == active, '.' == missing)

        /dev/sdd1:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : 3710713d:fb301031:84b61247:d1d53e0f
                   Name : HP-xw9300:0
          Creation Time : Sun Sep  1 15:22:26 2013
             Raid Level : -unknown-
           Raid Devices : 0

         Avail Dev Size : 976771056 (465.76 GiB 500.11 GB)
            Data Offset : 16 sectors
           Super Offset : 8 sectors
                  State : active
            Device UUID : 2ea0199d:cb08d9e7:0830448a:a1e1e348

            Update Time : Sun Jun  8 13:06:19 2014
               Checksum : 8883c492 - correct
                 Events : 1

            Device Role : spare
            Array State :  ('A' == active, '.' == missing)

    GParted lists both disks, detects the flags as 'Raid', and lists the data usage. Can anyone please help me re-assemble the array, just so that I can copy off some of the data that I have not backed up recently? Thanks
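
    Both superblocks carry the same Array UUID but report Raid Level '-unknown-' and role 'spare', which usually means the metadata was damaged rather than the data itself. The standard last resort is to recreate the array over the existing data with the original parameters. A heavily hedged sketch; device order and chunk size are assumptions and must match the original creation exactly, and imaging both disks with dd first is strongly advisable:

        # stop any half-assembled array first
        sudo mdadm --stop /dev/md0
        # recreating rewrites only the superblocks, not the data area,
        # but a wrong order or chunk size yields unreadable garbage
        sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 --metadata=1.2 \
            /dev/sdc1 /dev/sdd1
        # then attempt a strictly read-only mount
        sudo mount -o ro /dev/md0 /mnt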

    Read the article

  • How much free memory should I have on my webserver?

    - by neanderslob
    I have a webserver that's currently hosting two Wordpress sites and some Java-based collaboration software. The server has 2 GB of memory and is currently using about 1.8 GB. Right now what's on here is pretty much a pilot project that's getting negligible traffic, so I think it's pretty clear that I'll be needing more memory. I was wondering how I might anticipate my memory needs, based on the traffic the server gets, if I were to release it. I've poked around on Google and what I've found has been a bit tenuous. Is there a good heuristic to use when calculating memory demands as a function of the base (no-traffic) load on the server? For reference, the output of free -m can be seen below:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1832        215          0          0          0
        -/+ buffers/cache:       1832        215
        Swap:            0          0          0

    To me this looks like memory actually used, not an illusion due to caching or anything else. I figure the demands of my collaboration software will have to be experimentally tested, so here's free -m without that software running:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1109        938          0          0          0
        -/+ buffers/cache:       1109        938
        Swap:            0          0          0

    My plan B to figure this out is to add a bunch of swap space to the server, give it some traffic and adjust according to the amount of swap that gets used. I was just wondering if anyone has a good rule of thumb to estimate how much memory I should plan on in advance... or whether what I'm thinking is nuts. Many thanks in advance (I'm really quite new to this).
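
    As a starting point for the estimate, a hedged sketch: look at per-process resident sizes, and in particular what one web-serving worker costs, since that is roughly what scales with concurrent traffic:

        # largest resident processes first
        ps -eo rss,comm --sort=-rss | head -15

    A common back-of-the-envelope is then (memory left after the fixed services) divided by (resident size of one Apache/PHP worker) as the number of concurrent requests the box can absorb before swapping.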

    Read the article

  • iptables port redirection on Ubuntu

    - by Xi.
    I have an Apache server running on 8100. When I open http://localhost:8100 in a browser, the site renders correctly. Now I would like to direct all requests on 80 to 8100, so that the site can be accessed without the port number. I am not familiar with iptables, so I searched for solutions online. This is one of the methods I have tried:

        user@ubuntu:~$ sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        user@ubuntu:~$ sudo iptables -A INPUT -p tcp --dport 8100 -j ACCEPT
        user@ubuntu:~$ sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8100

    It's not working: the site works on 8100 but not on 80. If I print out the rules using "iptables -t nat -L -n -v", this is what I see:

        user@ubuntu:~$ sudo iptables -t nat -L -n -v
        Chain PREROUTING (policy ACCEPT 14 packets, 2142 bytes)
         pkts bytes target     prot opt in   out   source      destination
            0     0 REDIRECT   tcp  --  *    *     0.0.0.0/0   0.0.0.0/0    tcp dpt:80 redir ports 8100

        Chain INPUT (policy ACCEPT 14 packets, 2142 bytes)
         pkts bytes target     prot opt in   out   source      destination

        Chain OUTPUT (policy ACCEPT 177 packets, 13171 bytes)
         pkts bytes target     prot opt in   out   source      destination

        Chain POSTROUTING (policy ACCEPT 177 packets, 13171 bytes)
         pkts bytes target     prot opt in   out   source      destination

    The OS is Ubuntu running in VMware. I thought this would be a simple task, but I have been working on it for hours without success. :( What am I missing?
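
    One detail worth checking, assuming the test is a browser pointed at http://localhost on the same machine: locally generated packets never traverse the PREROUTING chain, only OUTPUT does, and the zero packet counter on the REDIRECT rule above is consistent with that. For local tests a companion rule is needed:

        # redirect locally generated traffic on port 80 as well
        sudo iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 80 -j REDIRECT --to-ports 8100

    Requests arriving from other hosts would hit the existing PREROUTING rule as intended.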

    Read the article

  • Upstart multiple instances of service not working

    - by Dax
    I started playing with MongoDB on Lucid. Now I would like to run a DB and a config server on the same box. They both use the same binary to launch, but with different config files and running on different ports. All directories for log and lib are split, so one goes to mongodb and the other to mongoconf. Each process can be started without any problems on its own:

        start mongodb
        stop mongodb
        start mongoconf
        stop mongoconf

    But if I try to start both, the second one just starts and exits. Using 'initctl log-priority debug' I got the following in the logs:

        Jan  6 12:44:12 mongo4 init: event_finished: Finished started event
        Jan  6 12:44:12 mongo4 init: job_process_handler: Ignored event 1 (1) for process 5690
        Jan  6 12:44:12 mongo4 init: mongoconf (mongoconf) main process (5690) terminated with status 1
        Jan  6 12:44:12 mongo4 init: mongoconf (mongoconf) goal changed from start to stop
        Jan  6 12:44:12 mongo4 init: mongoconf (mongoconf) state changed from running to stopping

    man 5 init shows that you can use instance names to differentiate the two. I tried using 'instance mongoconf' in one Upstart script and 'instance mongodb' in the other, and it still fails. I can manually start the other process, so there are definitely no conflicts on port numbers or directories. Any ideas on what to try, or how to get output on why it 'terminated with status 1'?
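
    Status 1 is mongod's own exit code, so the quickest win is usually capturing what the daemon prints before dying. A hedged sketch for the job file; the paths are assumptions:

        # inside /etc/init/mongoconf.conf -- log the daemon's stdout/stderr
        script
            exec /usr/bin/mongod --config /etc/mongoconf.conf \
                >> /var/log/mongoconf-upstart.log 2>&1
        end script

    Frequent culprits when two mongod instances share a box are a pidfilepath or dbpath accidentally shared by both config files; the captured log line would name the conflict directly.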

    Read the article

  • Google Chrome on Linux not using ALSA for sound

    - by DarkMoon
    On my laptop, I've got an .asoundrc file that outputs sound to my USB headset. This works fine for SMPlayer and Firefox. However, Google Chrome (at least Flash-based and HTML5-based video, and HTML5-based audio, in Chrome) plays through the laptop speakers instead. I've tried running Chrome from a command line, hoping there would be some helpful output, but no such luck. I've searched Google for whether Chrome even uses ALSA, or something else, but I have been unsuccessful. This question seems to be the same issue, but no suggestion was made. Anyone have any ideas? I'm running Gentoo with a 3.10.17 kernel, 1.0.27 ALSA utils, 2.6.5 FVWM, and 36.0.1985.143 Chrome. If you need more info, please let me know. EDIT: I've configured the USB headset as the default ALSA device. Volume levels for both headset and onboard are set and un-muted using alsamixer. My .asoundrc file is as follows:

        ctl.!default {
            type hw
            card Headset
        }

        pcm.dmixer {
            type dmix
            ipc_key 1024
            slave {
                pcm {
                    type hw
                    card Headset
                }
                period_size 1024
                buffer_size 4096
            }
            bindings {
                0 0
                1 1
            }
        }

        pcm.!default {
            type plug
            slave.pcm dmixer
        }
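
    A hedged first check: confirm whether the ALSA "default" PCM really reaches the headset outside any browser, which separates an .asoundrc problem from a Chrome one:

        # play test tones through the ALSA default device
        speaker-test -D default -c 2 -t wav

    If this comes out of the headset while Chrome still uses the speakers, the suspicion shifts to Chrome opening a sound card by index rather than through the default PCM; pinning the USB card's index via the snd-usb-audio module options is one avenue to try (an assumption, not a confirmed Chrome behavior).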

    Read the article

  • Locate devices within a building

    - by ams0
    The situation: our company is spread between two floors of a building. Every employee has a laptop (MacBook Air or MacBook Pro) and an iPhone. We have static DHCP mappings and DNS resolution, so every phone gets a name like employeeiphone.example.com and every laptop gets employeelaptop.example.com on the Ethernet interface (the wifi gets a dynamic IP from a small range dedicated to the purpose). We know each and every MAC address of phones and laptops, since we do static DHCP mapping (the ISC DHCP server runs on Linux). At each floor we have a Netgear stack of two switches, connected to each other via 10 GB fiber. No VLANs so far. At every floor there are 4 AirPort Extremes forming a single SSID network with WPA2 authentication. The request: our CTO wants to know who is present on which floor. My solution (so far): every switch keeps a table of MAC address and originating port, and on each switch stack all the MAC addresses coming from the other floor are listed on port 48 (the fiber link). So I came up with the following (an SNMP sketch for step 1 follows below):

        1. Get the table from each switch via SNMP
        2. Filter out the ones associated with port 48
        3. Grep dhcpd.conf, removing all entries not *laptop and not *iphone
        4. Match the two lists for each switch, output in JSON or XML
        5. Present the results on a dashboard for all to see

    I wrote it in bash with a lot of awk and sed. It kinda works, but for some reason I always have stale entries in the switch lookup tables, making it unreliable; some people may have put their laptop to sleep, their iPhones drop connections after a while if not woken up, and so on. I have searched left and right, and we are prepared to spend a little on the project too (RFIDs?). Does anybody do something similar? I can provide the script if needed (although it's really specific to our switches and naming scheme). Thanks! P.S. Perhaps this is a question for Stack Overflow? Please move it if so.
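
    For step 1, a hedged sketch of the SNMP side, assuming the switches expose the standard BRIDGE-MIB and a read community of "public":

        # dot1dTpFdbPort: the bridge forwarding table, MAC -> bridge port
        snmpwalk -v2c -c public switch-floor1 1.3.6.1.2.1.17.4.3.1.2

    On the staleness: the forwarding table only holds MACs seen within the switch's aging time (often 300 seconds by default), so sleeping laptops legitimately disappear while other entries linger until they age out. Polling more often than the aging interval and treating "seen in the last N polls" as present is the usual workaround.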

    Read the article

  • Am I obliged to use ipv6 tunnel services if I want to be able to use it?

    - by Zagorax
    I was looking into configuring Slackware to use IPv6, but all the instructions I found talk about using an IPv6 tunnel that encapsulates IPv6 packets inside IPv4 and sends them to an external router, which extracts the IPv6 request and sends a reply (or at least this is what I understood). Is that necessary? Isn't there a way to configure a pure IPv6 system? If yes, could you please point me to a guide that clearly explains how to enable IPv6 without this trick? I would like to configure my Slackware desktop first, and then do the same with my CentOS server. EDIT: maybe I gave you too little information, sorry. Here is some more, thanks to the posted guide:

        ~$ test -f /proc/net/if_inet6 && echo "Running kernel is IPv6 ready"
        Running kernel is IPv6 ready

    So IPv6 is enabled in my kernel. Some other output from ifconfig, route, and the /etc/resolv.conf content (with OpenDNS):

        ~$ /sbin/ifconfig wlan0 | grep inet6
        inet6 addr: fe80::21f:3bff:fe60:cc5b/64 Scope:Link

        ~$ /sbin/route -A inet6 | grep wlan0
        fe80::/64   ::   U   256 0 0 wlan0
        ff00::/8    ::   U   256 0 0 wlan0

        ~$ cat /etc/resolv.conf
        inet6 nameserver 2620:0:ccc::2
        nameserver 208.67.222.222
        nameserver 208.67.220.220

    But still, with ping6 I can only ping localhost (::1); everything else is unreachable. Normal ping works fine. That is why I was asking whether I am obliged to use a tunnel.
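
    The clue in the output above is that wlan0 only has a link-local (fe80::) address: a global IPv6 address has to come either from the network (router advertisements or DHCPv6 on a natively IPv6-enabled uplink) or from a tunnel broker when the ISP offers no native IPv6. A hedged check, assuming the ndisc6 package is available:

        # ask the local routers whether anyone advertises an IPv6 prefix
        rdisc6 wlan0

    No answer means no native IPv6 on the link, in which case a tunnel really is the only option; that is a property of the upstream network, not a limitation of Slackware.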

    Read the article

  • package manager cannot create users? installation script fails?

    - by RapidWebs
    It seems that when trying to install any package which requires the creation of a system user and group, the installer fails with the error:

        install: invalid user 'username'

    When it happened the first time, I assumed it was related to the package or its installation script. But now I am attempting a different package and, alas, the same issue arises! Note: /etc/passwd and /etc/group have not been made immutable with chattr. Here is example output from an attempted installation (of Varnish):

        Selecting previously unselected package varnish.
        Unpacking varnish (from .../varnish_3.0.2-1ubuntu0.1_amd64.deb) ...
        Processing triggers for man-db ...
        Setting up libvarnishapi1 (3.0.2-1ubuntu0.1) ...
        Setting up varnish (3.0.2-1ubuntu0.1) ...
        install: invalid user `varnish'
        dpkg: error processing varnish (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
         varnish

    Note: running Ubuntu 12.04. For now, I have resolved the issue by manually creating the users and running apt-get install -f, but this does not resolve the underlying issue. Any ideas?
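
    A hedged way to reproduce by hand what the postinst does, to see which layer is broken (NSS lookup versus user creation); the dummy name is a placeholder:

        getent passwd varnish                       # does NSS resolve the user at all?
        useradd --system --no-create-home dummy1    # can users be created manually?
        getent passwd dummy1 && userdel dummy1      # clean up if it worked

    If useradd succeeds but getent cannot see the result, the suspicion shifts to /etc/nsswitch.conf or a broken libnss package, which would fit the libc6/gcc juggling described above.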

    Read the article

  • phpMyadmin issue on cpanel

    - by user1149244
    I logged in to our cPanel to view phpMyAdmin, but I can't open it. It gives me this error message:

        Warning: session_write_close() [function.session-write-close]: write failed: No space left on device (28) in /usr/local/cpanel/base/3rdparty/phpMyAdmin/index.php on line 42

        Warning: session_write_close() [function.session-write-close]: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/cpanel/userhomes/cpanelphpmyadmin/sessions) in /usr/local/cpanel/base/3rdparty/phpMyAdmin/index.php on line 42

        Warning: Cannot modify header information - headers already sent by (output started at /usr/local/cpanel/base/3rdparty/phpMyAdmin/index.php:42) in /usr/local/cpanel/base/3rdparty/phpMyAdmin/index.php on line 99

    I contacted support to assist me with this, but they just told me that the server is unmanaged and that I should free up the partition. I don't know how to do that; I'm a noob on this matter. They said /var/log/btmp is over 8 GB, and I want to delete that file. How would I do this? I need to free up space on the partition, so I need some steps for that and for clearing /var/log/btmp. I have been googling around but haven't found anything.
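
    A hedged sequence for this situation; truncating btmp in place (rather than deleting it) keeps the file valid for the processes that hold it open:

        df -h                             # confirm which filesystem is at 100%
        du -sh /var/log/*                 # find the big offenders
        cat /dev/null > /var/log/btmp     # empty the 8 GB failed-login log in place

    btmp records failed login attempts, so one growing this fast is also a hint to tighten SSH (key-only auth, fail2ban, or a non-standard port), or it will simply fill up again.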

    Read the article

  • Server load increases by lot of httpd request with same PID

    - by user3740955
    I can see that my server load increases to the 200-300 range. A week ago the maximum load was around 20-25. In top and ps -ef I can see a lot of httpd processes, and the PPID of most of the httpd processes is the same PID; when I checked, that parent process is owned by root. Please let me know how I can reduce the server load. I have searched a lot for this but have not been able to find a proper solution. Please see below part of the ps -ef output:

        apache   29698  2062  1 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29700  2062  3 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29701  2062 10 16:54 ?        00:00:02 /usr/sbin/httpd
        apache   29702  2062  0 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29703  2062  1 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29705  2062  0 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29706  2062  3 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29707  2062  0 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29708  2062  1 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29709  2062  0 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29710  2062  0 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29711  2062  0 16:54 ?        00:00:00 /usr/sbin/httpd
        apache   29712  2062  0 16:54 ?        00:00:00 /usr/sbin/httpd

        Server version: Apache/2.2.3
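
    Children all sharing one root-owned parent is normal for Apache's prefork model, so the question is what the children are busy with, not the process tree itself. A hedged check, assuming mod_status is (or can be) enabled:

        # scoreboard of every worker and the request it is currently serving
        apachectl fullstatus

    Workers piling up in the 'W' (sending reply) state on similar URLs would point at one slow application path or a crawler hammering the site, rather than at Apache itself.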

    Read the article

  • Looking for a powershell script that can pull a file from a set of PC's and FTP

    - by DangeRuss
    I'm looking to write a script (preferably PowerShell) that will essentially copy a file from a bunch of PCs and FTP it to a server. The structure of the environment is that we have a file on multiple PCs (around 50 or so) that needs to be placed on a server. Sometimes one of the PCs may be turned off, so the script would first need to ensure the PC is up and running (maybe a ping result), then go into a directory on that PC, pull the file off of it, rename it, place it into a source directory, then remove the original. The naming convention doesn't matter, but a date/time stamp would be easiest. Ideally it would be best to first move all the files to the source directory to save on FTP bandwidth, but since the files are all named the same, they must be renamed during the move. Move, not copy, because the directory needs to be empty so the file can be re-created the next day. Once everything is in the source directory, all the files need to be FTP'd to a server for processing. After all of this, we need to know which PCs on the list did not respond, so we can retrieve their files manually; the script should output a file (txt is fine) listing the PCs that were offline. Everything is on one domain, and the script will be run from a server with admin creds. Thank you!
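
    A hedged sketch of the shape this could take, assuming admin shares are reachable (\\PC\c$\...) and a plain FTP server; every path, host name, and credential below is a placeholder:

        # collect-and-ftp.ps1 -- hypothetical, adjust paths and credentials
        $pcs     = Get-Content .\pclist.txt
        $staging = 'C:\staging'
        $offline = @()

        foreach ($pc in $pcs) {
            if (Test-Connection -ComputerName $pc -Count 1 -Quiet) {
                $stamp = Get-Date -Format 'yyyyMMdd-HHmmss'
                # move (not copy) so the source directory is left empty
                Move-Item "\\$pc\c$\data\report.txt" (Join-Path $staging "$pc-$stamp.txt")
            } else {
                $offline += $pc
            }
        }

        $offline | Set-Content .\offline.txt    # PCs to fetch manually

        $wc = New-Object System.Net.WebClient
        $wc.Credentials = New-Object System.Net.NetworkCredential('ftpuser', 'ftppass')
        Get-ChildItem $staging | ForEach-Object {
            $wc.UploadFile("ftp://ftpserver/incoming/$($_.Name)", $_.FullName)
        }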

    Read the article

  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we noticed that the memory is going down and down. On the small instance we are running Apache and Tomcat 6. Tomcat is started with the following JVM parameters:

        -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m

    We use Nagios to monitor things like pending updates, free disk space, and memory. Everything else is behaving as expected, but our memory is going down all the time. Our app receives approximately half a million hits a day. When I shut down Apache and Tomcat and ran free -m, we had only 594 MB free out of the 1.7 GB of memory. Not much else is running on the instance, and when running the top command I cannot see where the memory is going. The app we run on Tomcat is a Grails webapp. Could there be a memory leak in our application? I read online that a small Amazon instance is fine for running Apache and Tomcat, and I found a few posts showing how to configure them to limit memory usage, steps I have already performed. The memory is not being used up as quickly now, but it is still decreasing over time. We have other EC2 small instances running Grails apps where the memory stays fairly stable, though they would not be receiving as much traffic. Any help with this is greatly appreciated. The output of free -m on my server is as follows:

                     total       used       free     shared    buffers     cached
        Mem:          1657       1380        277          0        158        773
        -/+ buffers/cache:        447       1209
        Swap:          895          0        895

    In your opinion, does this look OK? At what stage would the OS give back memory? Would it wait until free memory reaches 0%, or is this OS dependent?
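
    The free -m output above is actually healthy: the 158 MB of buffers and 773 MB of page cache count as "used", but the kernel hands them back on demand, so applications really occupy only 447 MB (the "-/+ buffers/cache" row). A hedged one-liner to watch the number that matters:

        # applications' real usage vs. memory reclaimable from cache
        free -m | awk '/buffers\/cache/ { print "in use:", $3 "MB  available:", $4 "MB" }'

    The cache is only released when something needs the memory, so a steadily shrinking top-row "free" column is expected Linux behavior, not evidence of a leak on its own.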

    Read the article

  • Can I use Upstart to start a script which requires the user's X session?

    - by ledneb
    I wrote a script which greps the output of synclient to determine whether a laptop's touchpad has miraculously turned itself off (Ubuntu seems to /love/ doing this recently) and, if so, turns it back on. The script is something like this:

        #!/bin/bash
        while true ; do
            # synclient -l lists parameters as "TouchpadOff = 1"
            if synclient -l | grep -q "TouchpadOff[[:space:]]*=[[:space:]]*1" ; then
                synclient TouchpadOff=0
            fi
            sleep 3
        done

    (I don't have the laptop to hand right now, but you get the point! I will update later from my laptop if that's incorrect.) So I tried running this as an Upstart job so my touchpad can heal itself without any interaction. But it seems synclient doesn't find the current user's X session when my script is started by Upstart. I tried running it with something like su -c myscript.sh ledneb in my script stanza, but to no avail. Should I be looking in the direction of /etc/X11/xinit/xinitrc rather than Upstart? Is there a proper way to have this script run in the context of the current (or even a hard-coded) user's X session?
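
    synclient needs to know which X display and auth cookie to use, and an Upstart job has neither in its environment. A hedged sketch of the job file; the display number, start event, and home path are all assumptions:

        # /etc/init/touchpad-heal.conf (hypothetical name and paths)
        start on runlevel [2345]    # assumes X is up by then; adjust to your display manager's event
        stop on runlevel [!2345]

        env DISPLAY=:0
        env XAUTHORITY=/home/ledneb/.Xauthority

        exec su -c /home/ledneb/touchpad-heal.sh ledneb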

    Read the article

  • Why would the Apache parent process restart silently?

    - by miracle
    I run Apache 2.2.9 with mpm prefork on Debian Lenny. Following http://httpd.apache.org/docs/2.2/mod/prefork.html, I would expect there to be one parent process, running as root and listening as configured, which starts child processes as defined by the Min/Max/etc. directives. I expect the children to be restarted as per MaxRequestsPerChild, but the parent process to stay put with one process ID until I restart it manually. Out of a little paranoia, I started monitoring listening ports including process IDs: a cron job every 20 minutes runs netstat -ap | grep LISTEN and diffs the output. Sometimes (about once per day) I see a series of diffs like this:

        8c8
        < tcp6       0      0 [::]:www     [::]:*      LISTEN      6194/apache2
        ---
        > tcp6       0      0 [::]:www     [::]:*      LISTEN      6607/apache2
        10c10
        < tcp6       0      0 [::]:https   [::]:*      LISTEN      6194/apache2
        ---
        > tcp6       0      0 [::]:https   [::]:*      LISTEN      6607/apache2

    Over a period of an hour or three, the parent would change its PID at least once every 20 minutes, without any explanation in the log files or any other hint that anything is going wrong. This is not what I expected. What am I missing?
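
    A hedged place to look: anything that restarts Apache out of band, most commonly logrotate or another cron job. A graceful restart keeps the parent's PID, but a full stop/start (which "/etc/init.d/apache2 restart" does) replaces it, matching the netstat diffs. A quick correlation on Debian:

        # does any logrotate config restart (rather than reload) apache2?
        grep -rn apache2 /etc/logrotate.d/
        # did cron fire around the times the PID changed?
        grep CRON /var/log/syslog | grep -iE 'apache|logrotate'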

    Read the article
