Search Results

Search found 26179 results on 1048 pages for 'linux from scratch'.

Page 430/1048 | < Previous Page | 426 427 428 429 430 431 432 433 434 435 436 437  | Next Page >

  • What could cause a file system to spontaneously unmount or become invalid for a short time?

    - by Ichorus
    We've got DB2 LUW running on a RHEL box. We had a crash of DB2, and IBM came back and said that a file DB2 was trying to access (through open64()) unmounted or became invalid. We have done nothing but restart the database, and things seem to be running fine. Also, the file in question looks perfectly normal now:

      $ cd /db/log/TEAMS/tmsinst/NODE0000/TEAMS/T0000000/
      $ ls -l
      total 557604
      -rw------- 1 tmsinst tmsinst 570425344 Jan 14 10:24 C0000000.CAT
      $ file C0000000.CAT
      C0000000.CAT: data
      $ lsattr C0000000.CAT
      ------------- C0000000.CAT
      $ ls -l
      total 557604
      -rw------- 1 tmsinst tmsinst 570425344 Jan 14 10:24 C0000000.CAT

    With those facts in hand (please correct me if I am misinterpreting the data), what could cause a file system to 'spontaneously unmount or become invalid for a short time'? What should my next step be? This is on Dell hardware, and we ran their diagnostic tools against the hardware; it came back clean.
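
    A minimal sketch of the kind of log check that usually comes next on RHEL, assuming the crash time is known and the standard syslog locations are in use:

      # kernel-side evidence of the filesystem going away or turning read-only
      dmesg | grep -iE 'remount|read-only|i/o error|ext3'
      grep -iE 'unmount|remount|i/o error' /var/log/messages*
      # storage-path events (multipath/SAN) logged around the crash time
      grep -i multipath /var/log/messages*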

    Read the article

  • Upgrading phpmyadmin (and other packages) on Debian Squeeze

    - by westexasman
    I just set up a new VM with Debian Squeeze (the latest stable release, 6.0.4). I am going for a web server, so I installed the usual: Apache, PHP5, MySQL, phpMyAdmin, etc. Everything went well, and everything is working. My question is about upgrading packages. I noticed the phpMyAdmin version is 3.3.7; the latest is 3.4.10.1. Doing apt-get update/upgrade does not upgrade the package. How does one go about upgrading packages on a Debian Squeeze server if apt-get update/upgrade does not do it? Thanks!
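
    For what it's worth, a newer phpMyAdmin for Squeeze would normally come from squeeze-backports rather than the main archive, since stable only ships security fixes for 3.3.7. A hedged sketch (verify the repository line and whether the package is actually available in backports):

      # add the squeeze-backports repository
      echo 'deb http://backports.debian.org/debian-backports squeeze-backports main' \
          >> /etc/apt/sources.list
      apt-get update
      # backports are never pulled in automatically; ask for them explicitly
      apt-get -t squeeze-backports install phpmyadmin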

    Read the article

  • Migrating to ssh key authentication; implications of adding sbin directories to a user's $PATH

    - by ancillary
    I'm in the process of migrating to keys for authentication on my CentOS boxes. I have it all set up and working, but was a bit taken aback when I noticed service (and other things) didn't work the way I was accustomed to. Even after su'ing to root, I still had to call the full path for it to work (which I assume is expected/normal behavior). I also assume this is because root (what I was using and am used to) and the newly created, key-using user have different $PATHs. Specifically, I noticed the sbin directories missing from the user's path. If I were to add those paths (/sbin, /usr/sbin, /usr/local/sbin) to a profile.d .sh script for this new key-loving user, would: 1) I be opening up the system in ways I shouldn't be? 2) I be doing something I needn't do, save for reasons of laziness? 3) I create other potential problems? Thanks.
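
    A minimal sketch of the kind of profile.d snippet being described (the file name is arbitrary):

      # /etc/profile.d/sbin-path.sh -- hypothetical file name
      for d in /usr/local/sbin /usr/sbin /sbin; do
          case ":$PATH:" in
              *":$d:"*) ;;           # already present, skip
              *) PATH="$PATH:$d" ;;
          esac
      done
      export PATH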

    Read the article

  • Openmeetings: error when running red5.sh: "Address already in use"

    - by takpar
    Hi, I am trying to run OpenMeetings on my CentOS VPS. When I run

      $ ./red5.sh

    it eventually says Bootstrap Complete, but a few lines before that it says:

      Caused by: java.net.BindException: Address already in use
              at sun.nio.ch.Net.bind(Native Method)
      ...

    I have tried red5.sh both as root and as a normal user; both give an error like that. Any suggestions?
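
    A quick way to see what is already holding the port; red5's defaults are commonly 5080 (HTTP) and 1935 (RTMP), but check your red5 configuration, as those numbers are an assumption here:

      # list listening sockets with the owning process
      netstat -tlnp | grep -E ':(5080|1935) '
      # or, equivalently
      lsof -i :5080 -i :1935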

    Read the article

  • Is there a way to prevent NetworkManager from storing the password for a wireless network?

    - by tolomea
    Our corporate wireless network uses continuously changing passwords with RSA tokens, so every time we need to connect to the wireless we need to enter a new password off the RSA token. For extra fun, using the wrong password a couple of times in a row causes the user's account to be locked. NetworkManager automatically stores and reuses the password, with the net result that it is constantly getting my account locked. Is there some way to prevent it from storing my password for that network? Or perhaps some way to get the GNOME keyring to not store it?
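
    On newer NetworkManager releases (0.9 and later) each secret carries flags, and flag value 2 means "not saved, prompt every time". A hedged sketch of the keyfile approach, assuming the network is 802.1x/WPA-Enterprise (which the RSA-token setup suggests) and that the keyfile plugin is in use:

      # /etc/NetworkManager/system-connections/<your-ssid>
      [802-1x]
      # 2 = not saved: NetworkManager asks for the password on every connect
      password-flags=2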

    Read the article

  • The "service" command and environment variables

    - by varesa
    I am trying to start a service that requires an environment variable to be set to a certain path. I set this variable in /etc/profile.d/. However, when I start this service using the service command, it doesn't work. From man service: "service runs a System V init script in as predictable environment as possible, removing most environment variables and with current working directory set to /." So it seems that service is removing my variables. How should I set the variables up to keep them from being removed? Or is that something I should not do? I could start the service manually using the init scripts, or even hardcode the path into the script, but I'd like to know how to use it with the service command.
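
    On RHEL-style systems the usual place for this is /etc/sysconfig/<servicename>, which most init scripts source themselves, so the value survives service's environment scrubbing. A minimal sketch (whether your particular init script sources this file, and the variable name, are assumptions to check):

      # /etc/sysconfig/myservice -- hypothetical service name
      MY_APP_HOME=/opt/myapp        # the path the daemon needs
      export MY_APP_HOME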

    Read the article

  • RHEL 5: listing missing critical patches/packages

    - by Vinnie Biros
    I'm trying to figure out if there is an easy way to identify the missing critical patches/packages on my RHEL 5 boxes. This is for audit purposes, and I was trying to figure out if there is an RPM command or something of the sort that would accomplish this easily. I know that on my Solaris 10 boxes I can run the "smpatch analyze" command, which displays this information for me. Does anyone know of anything similar for RHEL 5? Thanks.
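
    On RHEL 5 the yum-security plugin provides roughly the same view, assuming the box is registered with RHN so the errata metadata is available; a minimal sketch:

      # install the plugin, then list outstanding security updates and errata
      yum install yum-security
      yum --security check-update     # packages with pending security updates
      yum list-security               # the RHSA/RHBA/RHEA errata that apply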

    Read the article

  • How do I use command line and wmctrl to make a window larger than the screen to get a huge screenshot?

    - by Mnebuerquo
    I use a program which makes a large image which I have to scroll to view. The program has no way to save the image, and I have no access to the source to modify it. The only way I have to get the image from the program is by screenshot. My goal is to save the full size image without having to piece together individual screenshots. I'm using this script to try taking a screenshot:

      #!/bin/bash
      window=$(wmctrl -l | grep "Program$" | awk '{print $1}')
      wmctrl -v -i -r $window -e '0,0,0,6030,5828'
      wmctrl -i -a $window
      import -window $window ~/Desktop/screenshot.png

    This uses wmctrl to get the window id ($window) for a window named "Program". It then tries to resize the window to the desired dimensions. It uses ImageMagick (import) to save a screenshot.png on the user's Desktop. All of this works except the resize step. I can resize the window using wmctrl -r -e, but sizes greater than the screen size don't work. I'm using Ubuntu 10.04 and the GNOME desktop. I run two monitors, but I've tried this with one of them disabled. Is there a way to resize the window larger than my screen to get a huge screenshot?
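
    One thing that may be worth trying before the oversized resize, since many window managers ignore geometry requests on maximized windows, is clearing the maximized hints first (whether this then lets the window exceed the screen size depends on the window manager):

      wmctrl -i -r $window -b remove,maximized_vert,maximized_horz
      wmctrl -i -r $window -e '0,0,0,6030,5828'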

    Read the article

  • File permissions on web server

    - by plua
    I have just read a useful article on file permissions, and I am about to implement an as-strict-as-possible file permissions policy on our web server. Our situation: we have a web server accessed through sftp by different users from within our company, and we have the general public accessing Apache, sometimes uploading files through PHP. I distinguish folders and files by their use. So, based on this reading, here is my plan. All people who need to upload files will have separate users, but all of those users will belong to two groups: uploaders and webserver. Apache will belong to the group webserver.

      Directories
      Permission: 771
      Owner: user:uploaders
      Explanation: to access files in the folder, everybody needs execute permission. Only uploaders will be adding/removing files, so they also get r+w permission.

      Files within the web root
      Permission: 664
      Owner: user:uploaders
      Explanation: they will be uploaded and changed by different users, so both owner and group need w+r permissions. The web server only needs to read files, so r permission only.

      Upload directories
      Permission: 771
      Owner: user:webserver
      Explanation: when files need to be uploaded, Apache needs to be able to write to this directory. I figure it is safer to set the group to webserver, giving Apache sufficient privileges (all uploaders also belong to this group and will have the same permissions) while safeguarding the folder from writes by "others".

      Uploaded files
      Permission: 664
      Owner: user:webserver
      Explanation: after uploading, Apache might need to delete files, but that is no problem because it has w+r permission on the folder. So there is no need to make the file itself any more accessible than r access for the group.

    Not being an expert on file permissions, my question is whether or not this is the best possible policy for our situation. Any suggestions welcome.
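
    A hedged sketch of the commands that would apply the scheme above (the paths, the owning user, and the group names are placeholders taken from the plan, not from the original post):

      # web root: owned by an uploader account, group uploaders
      chown -R someuser:uploaders /var/www/site
      find /var/www/site -type d -exec chmod 771 {} \;
      find /var/www/site -type f -exec chmod 664 {} \;

      # upload directory: group webserver so Apache can write into it
      chown -R someuser:webserver /var/www/site/uploads
      find /var/www/site/uploads -type d -exec chmod 771 {} \;
      find /var/www/site/uploads -type f -exec chmod 664 {} \;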

    Read the article

  • DNS caching server config problem

    - by Alex
    I have a BIND DNS caching-only server setup that is working. I am bringing up a new AD domain controller that will also be a DNS server for that AD, but I don't want it responding to any DNS queries except those that are AD related. So my goal is to leave this caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller. However, it does not seem to be working, which leaves me to think that my caching server is not forwarding properly. For example, this AD is going to have a naming convention of hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, queries to my caching server time out (I get a response from the caching server if I query mydomain.local, but none of the objects in that domain). Any suggestions? Here is my named.conf file:

      options {
          directory "/var/named";
          listen-on { 192.168.0.14; 127.0.0.1; };
          forwarders { ; ; };
          forward first;
      };

      zone "." in {
          type hint;
          file "db.cache";
      };

      zone "0.0.127.in-addr.arpa" in {
          type master;
          file "db.127.0.0";
      };

      //forward zone for mydomain.local
      zone "mydomain.local" {
          type forward;
          forwarders { 192.168.1.21; };
      };
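
    A hedged sketch of how to confirm where the lookup dies, querying each server directly (the IP addresses are the ones from the config above):

      # ask the domain controller directly -- this reportedly works
      dig @192.168.1.21 dc1.mydomain.local A
      # ask the caching server -- this is the path that times out
      dig @192.168.0.14 dc1.mydomain.local A +time=5
      # watching named's query log or syslog during the second query shows
      # whether the request is being forwarded at all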

    Read the article

  • DHCP server inside a virtual machine can't see other machines

    - by William
    Hi, I set up a private network of virtual machines, and one of the machines is the DHCP server for the group. I want to specify a next-server for the DHCP server, but I'm having trouble connecting to any of the machines that I lease IPs to. I'm just trying to do a simple ping/ssh to 10.0.0.252 (a machine with a lease) but it doesn't seem to respond. Any advice? I'm assuming I need to be able to connect to my next-server, but maybe I'm wrong. Thanks.
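
    A hedged diagnostic sketch, run from the DHCP server, to see whether anything from the leased machine is arriving at all (the interface name is an assumption; substitute the one on the private network):

      # confirm the leased address has a visible MAC on this segment
      arp -n | grep 10.0.0.252
      # watch the private interface while pinging the machine
      tcpdump -n -i eth1 host 10.0.0.252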

    Read the article

  • Pass parameters to a script securely

    - by codeholic
    What is the best way to pass parameters to a forked script securely? E.g. passing parameters through command-line operands is not secure, since anyone who has an account on the host can run ps and see them. An unnamed pipe is quite secure, as far as I understand, isn't it? I mean passing parameters to the STDIN of the forked process. What about passing parameters in environment variables? Is that secure? What about passing parameters by other means I didn't mention?
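
    A minimal sketch of the stdin/pipe approach in shell (the script name is a placeholder); the secret never appears in the child's argv, so ps cannot show it:

      # parent: feed the secret to the child's stdin over a pipe
      printf '%s\n' "$SECRET" | ./child.sh

      # child.sh -- reads the value from stdin; it never touches the command line
      #!/bin/sh
      IFS= read -r secret
      # ... use "$secret" from here on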

    Read the article

  • Why does Exim put emails on hold if there are frozen messages in the queue?

    - by user51932
    I have a CentOS server with cPanel working as an SMTP server, which currently uses 20 different hostnames and IP addresses to deliver email for an email newsletter service. However, it's extremely slow in sending emails. It's sending something like 10 emails per minute, which I check by running the "exim -bpc" command. What could be affecting this? One thing I'm supposing is that there are frozen messages in the queue which are slowing down the sending until they're sent out, and are putting new messages on hold. What are the most common reasons a message can get frozen? Also, would it be more efficient to use 20 different small VPSs to send out email rather than one large VPS with the 20 different hostnames and IPs on it?
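
    A hedged sketch of the usual frozen-message triage with Exim's own tools (exiqgrep normally ships alongside Exim on cPanel boxes):

      # count and list frozen messages
      exim -bpr | grep -c frozen
      exiqgrep -z -i                  # print only the IDs of frozen messages
      # see why a particular one froze
      exim -Mvl <message-id>
      # if they are undeliverable junk, remove them from the queue
      exiqgrep -z -i | xargs --no-run-if-empty exim -Mrm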

    Read the article

  • Apache virtual host subdomains point to the same directory

    - by Jakobud
    I have set up subdomains using Apache before and have never really run into any big problems. But on this (I believe CentOS) server belonging to one of my clients, I don't understand what I'm doing wrong. Here is the .conf that Apache is loading:

      Listen 80
      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName www.thedomain.com
          DocumentRoot /u1/thedomain.com/public
          RailsEnv production
      </VirtualHost>

      <VirtualHost *:80>
          ServerName subdomain.thedomain.com
          DocumentRoot /u1/subdomain.thedomain.com/public_html
      </VirtualHost>

    When I access either the primary or the subdomain address, they both point to the primary www.thedomain.com content. Any thoughts? UPDATE: Yes, I did a configtest and a graceful restart after making the changes.
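
    A quick check that usually pinpoints this kind of problem is asking Apache how it actually parsed the name-based virtual hosts; a hedged sketch (the binary may be apachectl or httpd depending on the build):

      # dump the parsed virtual host table; both ServerNames should show under *:80
      apachectl -S        # or: httpd -S
      # if the subdomain vhost is missing here, the file is probably not being
      # Included, or another NameVirtualHost/vhost definition is loaded first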

    Read the article

  • Creating rescue / install USB flash disk for CentOS

    - by wwwpanda
    With the CentOS installation CDs you can install the OS, and you can also boot into "rescue" mode and chroot-mount the system partition for troubleshooting, even when the system is installed on hardware RAID drives. How can we create a similar thing on a USB flash drive? I tried to do it with UNetbootin, but when booting from the USB stick the CentOS setup eventually still requires the presence of the CDs. Ultimately, I want to use this USB flash drive for remote disaster recovery through, say, the HP iLO or Dell iDRAC remote consoles.

    Read the article

  • Package temperature above threshold, cpu clock throttled

    - by drN
    I am running 64-bit Ubuntu 11.10 on an i7 with 8 GB of RAM. I thought of putting this on askubuntu.com but decided that maybe the question has a much broader appeal. I get the following messages when I run math simulations:

      CPUn: Core temperature above threshold, cpu clock throttled (total events = xxxxxxx)
      CPUn: Package temperature above threshold, cpu clock throttled (total events = xxxxxxx)

    I realize that this is a hardware warning message (machine check exception, correct me if I am wrong). How do I turn these messages off? Since it doesn't seem to have a detrimental effect on my calculations or my computer (presumably), I don't like it cluttering up my virtual console screen with hundreds of these messages.
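
    If the goal is only to keep the warnings off the console, a hedged sketch of lowering the console log level (this silences the printing, not the thermal throttling itself; these warnings are emitted at a high severity, so the threshold may need to go quite low before they stop appearing):

      # show only very-high-severity kernel messages on the console
      sudo dmesg -n 2
      # or via sysctl; the first number is the console log level
      sudo sysctl -w kernel.printk="2 4 1 7"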

    Read the article

  • What music streaming app fits my needs on Ubuntu?

    - by Jim
    I'm looking for an application to stream Internet radio on Ubuntu. I like listening to Radio Paradise while I work. Right now, I'm using Amarok. "Movie Player" sometimes refuses to open the stream, and VLC doesn't keep its window title updated with the currently playing track. Amarok has nice translucent notifications when tracks change, but track changes in streams don't trigger the notifications. Mostly, I want something that reliably opens streams and makes it easy to see the name of the track that's playing. If it has a built-in directory of streaming radio stations, that would be a big benefit.

    Read the article

  • Puppet: is it ok to "force" certname when you expect to shuffle nodes around?

    - by Luke404
    We all know (there is a good example on SF) that Puppet hostname detection can be... fun. At our company (and I guess we're not alone in this) we usually pre-configure servers at our offices and test them before bringing the gear to a remote datacenter and racking it. Of course the reverse DNS will change when doing that, even if we don't change the actual hostname of the system. We're slowly drafting our Puppet setup and I'd like to be sure those moves won't create problems. My idea is to explicitly configure the desired full FQDN of the system as certname in puppet.conf at server provision time (before the very first puppet run). My process would look something like this:

      1. basic OS installation
      2. basic network configuration, enough to reach the internet and resolve DNS
      3. install Puppet and set up certname
      4. start Puppet and let it manage the whole configuration
      5. test, fix problems in config (via Puppet), re-test, and so on...
      6. manually stop Puppet
      7. set up the new network configuration for the datacenter network
      8. move the machine to the DC
      9. turn it on; Puppet should automatically start and keep on doing its job

    The process is supported by detecting the environment in Puppet's manifests (e.g. based on subnet, like they do at Wikimedia) and modifying configuration as needed (e.g. resolv.conf contents appropriate for each network). Each node's certname will never change for the whole system life cycle. Is there any problem with this approach? Could it be improved?
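
    A minimal sketch of the puppet.conf fragment being described (the FQDN is a placeholder chosen at provision time, and the file path assumes a Puppet 2.x/3.x layout):

      # /etc/puppet/puppet.conf
      [main]
      # pin the certificate name so later hostname/reverse-DNS changes don't matter
      certname = web01.dc1.example.com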

    Read the article

  • Getting a weird error whenever I try restarting Apache

    - by Binny Zupnick
    I'm trying to install Apache, PHP5, MySQL, and phpMyAdmin, and I'm following a tutorial, but this error keeps happening:

      apache2: Syntax error on line 227 of /etc/apache2/apache2.conf: Could not open configuration file /etc/phpmyadmin/apache.conf: No such file or directory
      Action 'configtest' failed.
      The Apache error log may have more information.
      ...fail!

    I've tried removing all of them and reinstalling, but to no avail. I'm pulling my hair out over this, so thanks in advance! =) Edit: during the tutorial I screwed up and deleted something, lol, so I know that's the issue; I just don't know what to do about it now.
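
    A hedged sketch of the two usual ways out: either recreate the missing phpMyAdmin snippet by purging and reinstalling the package, or remove the reference to it from apache2.conf (the line number will differ on other setups):

      # option 1: purge removes the old conffiles, reinstalling recreates
      #           /etc/phpmyadmin/apache.conf
      apt-get purge phpmyadmin
      apt-get install phpmyadmin

      # option 2: comment out the Include of /etc/phpmyadmin/apache.conf
      #           around line 227 of /etc/apache2/apache2.conf, then retest
      apache2ctl configtest && /etc/init.d/apache2 restart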

    Read the article

  • Is 10/100 Ethernet LAN transfer faster than USB 1.0?

    - by dag729
    I have an old laptop (PIII 800 MHz with 256 MB of RAM) that I wish to use as my home server. It'll have to serve just two people, so I think I'll be more than OK as far as the RAM and the CPU go. The issue is the data, because the internal hard disk is 12 GB, which is... ridiculous! I have more than 60 GB of mixed storage and counting (images, videos and music) on an external USB hard disk. I could put the disk in my desktop PC just to serve the big files over Ethernet, or leave it inside its USB enclosure attached to the laptop. The question is: which of these solutions will be faster? USB 1.0 attached to the server (laptop), or the disk installed in the desktop serving files via 10/100 Ethernet to the laptop on demand?
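
    A rough back-of-the-envelope comparison (nominal rates; real throughput will be lower): USB 1.x full speed is 12 Mbit/s, which works out to roughly 1 to 1.2 MB/s in practice, while 100 Mbit/s Fast Ethernet is 12.5 MB/s theoretical and commonly delivers 8 to 11 MB/s, so the Ethernet path is on the order of ten times faster than a USB 1.x link.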

    Read the article

  • Timeout ssh sessions after inactivity?

    - by Insyte
    PCI requirement 8.5.15 states: "If a session has been idle for more than 15 minutes, require the user to re-enter the password to re-activate the terminal." The first, and most obvious, way to deal with ssh sessions that are idling at the bash prompt is by enforcing a read-only, global $TMOUT of 900. Unfortunately, that only covers sessions sitting at the bash prompt. The spirit of the PCI spec would also require killing sessions running top/vim/etc. I've considered writing a */1 cron job that parses the output of "/usr/bin/w" and kills the associated shell, but that seems like a blunt instrument. Any ideas for something that would actually do what the spec requires and just lock the terminal? I've looked at away and vlock; they both seem great for voluntarily locking your terminal, but I need a cron/daemon task that will enforce locking.
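
    For reference, a minimal sketch of the read-only TMOUT piece described above (it only covers shells idling at a prompt, not sessions sitting in top or vim):

      # /etc/profile.d/tmout.sh -- hypothetical file name
      TMOUT=900
      readonly TMOUT
      export TMOUT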

    Read the article

  • Comparison of Unix shells

    - by Andy White
    Of the major Unix shells (bash, ksh, tcsh, zsh, others?), are there any compelling reasons to use one over another? Which is the most interactive/command-line friendly? Which is the most conducive/intuitive for writing scripts? Are there any major built-in features that one shell offers that others don't? Are any of these shells really good for one type of function, but not another? Or are they all pretty well-rounded/flexible? Is it just a matter of personal preference? I can make this community wiki if anyone prefers.

    Read the article

  • Is it reasonable to make a RAID-1 array with a ram disk and a physical disk to maximize read performance and protect data?

    - by Petr Pudlák
    In one of the answers on SO (I forgot which one) I've seen a suggestion to make a RAID-1 array composed of a RAM disk and a physical partition. By adding the physical partition with --write-mostly and enabling --write-behind the system should read everything instantly from the RAM disk but still save all data to the physical partition so that the data are preserved and the RAID array can be assembled again after reboot. Is such a setup reasonable? Will it perform any better in some scenario than having just the physical partition and perhaps tweaking the kernel to favor disk cache (swappiness and vfs_cache_pressure)?
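
    For reference, a hedged sketch of what such an array could look like with mdadm (device names are placeholders; --write-behind requires a write-intent bitmap, and the RAM-disk device must be recreated and re-added after every reboot):

      # RAM disk as the fast leg, real partition marked write-mostly
      mdadm --create /dev/md0 --level=1 --raid-devices=2 \
            --bitmap=internal --write-behind=1024 \
            /dev/ram0 --write-mostly /dev/sdb1
      mkfs.ext4 /dev/md0
      # after a reboot the array comes up degraded on /dev/sdb1 only;
      # re-add the RAM disk and let it resync
      mdadm /dev/md0 --add /dev/ram0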

    Read the article
