Search Results

Search found 24933 results on 998 pages for 'arch linux'.


  • DNS caching server config problem

    - by Alex
    I have a BIND DNS caching-only server setup that is working. I am bringing up a new AD domain controller that will also be a DNS server for that AD, but I don't want it responding to any DNS queries except those that are AD related. So my goal is to leave this caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller. However, it does not seem to be working, which leads me to think that my caching server is not forwarding properly. For example, this AD is going to have a naming convention of hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, queries to my caching server time out (I get a response from the caching server if I query mydomain.local, but none for the objects in that domain). Any suggestions? Here is my named.conf file:
      options {
          directory "/var/named";
          listen-on { 192.168.0.14; 127.0.0.1; };
          forwarders { ; ; };
          forward first;
      };
      zone "." in {
          type hint;
          file "db.cache";
      };
      zone "0.0.127.in-addr.arpa" in {
          type master;
          file "db.127.0.0";
      };
      //forward zone for mydomain.local
      zone "mydomain.local" {
          type forward;
          forwarders { 192.168.1.21; };
      };
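    Not a confirmed fix, just a minimal sketch of the usual adjustment for this kind of setup: make the forward zone forward-only so the caching server never falls back to root-hint recursion for the AD domain (zone name and domain-controller address taken from the question above):
      // named.conf on the caching server
      zone "mydomain.local" {
          type forward;
          forward only;                    // never retry via the root hints
          forwarders { 192.168.1.21; };    // the AD domain controller
      };
    After editing, "rndc reload" (or a service restart) picks up the change; if DNSSEC validation is enabled in the options block, it can also interfere with forwarded private zones and may need to be relaxed.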


  • Migrating to ssh key authentication; implications of adding sbin's to users $PATH

    - by ancillary
    I'm in the process of migrating to keys for authentication on my CentOS boxes. I have it all set up and working, but was a bit taken aback when I noticed service (and other things) didn't work the way I was accustomed to. Even after su'ing to root, I still had to call the full path for it to work (which I assume to be expected/normal behavior). I also assume this is because there are different $PATHs for root (what I was using and am used to) and the newly created, key-using user. Specifically, I noticed the sbin's of the world missing from the user's path. If I were to add those paths (/sbin, /usr/sbin, /usr/local/sbin) to a profile.d .sh script for this new key-loving user, would:
      1. I be opening up the system in ways I shouldn't be?
      2. I be doing something I needn't do save for reasons of laziness?
      3. I create other potential problems?
    Thanks.
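    For reference, a minimal sketch of the kind of profile.d snippet being described (the filename is hypothetical). Adding sbin directories to $PATH only exposes command names; the binaries themselves still enforce their own privilege checks:
      # /etc/profile.d/sbin-path.sh
      # Append the sbin directories if they are not already present in PATH.
      for d in /usr/local/sbin /usr/sbin /sbin; do
          case ":$PATH:" in
              *":$d:"*) ;;                # already there
              *) PATH="$PATH:$d" ;;
          esac
      done
      export PATH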


  • Upgrading phpmyadmin (and other packages) on Debian Squeeze

    - by westexasman
    I just set up a new VM with Debian Squeeze (latest stable release, 6.0.4). I am going for a web server, so I installed the usual... apache, php5, mysql, phpmyadmin, etc. Everything went well, everything is working. My question is about upgrading packages. I noticed the phpmyadmin version is 3.3.7... the latest is 3.4.10.1. Doing apt-get update/upgrade does not upgrade the package. How does one go about upgrading packages on a Debian Squeeze server if apt-get update/upgrade does not work? Thanks!
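    One hedged possibility rather than a definitive answer: Debian stable only ships security and bug fixes for the packaged version, so a newer upstream release generally has to come from squeeze-backports, and only if the package was actually backported. A sketch of that route (the mirror path may differ):
      # Add the backports archive and refresh the package lists.
      echo "deb http://backports.debian.org/debian-backports squeeze-backports main" \
          >> /etc/apt/sources.list
      apt-get update
      # Pull the backported version explicitly, if one exists.
      apt-get -t squeeze-backports install phpmyadmin
    Failing that, the remaining options are installing upstream's release alongside the packaged version or waiting for the next stable release.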


  • Run application with other user

    - by user62367
    OS: Fedora 14. GUI: GNOME. I need to run an application as a user other than the "default" one I normally use. Purpose: create a ".desktop" file on my desktop to run e.g. Google Chrome as another user (NOT ROOT - so beesu doesn't count). There aren't any gksu or kdesu packages in Fedora 14. Why? So I want to create a user with "adduser SOMEONE", and I want to run e.g. Google Chrome as "SOMEONE" - then it will have minimum permissions, "more security". Thank you!
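    A hedged sketch of one way this is commonly done without gksu: allow the second local user to talk to your X display, then launch through sudo -u. The user name comes from the question; the paths and the sudoers entry are assumptions about how a password prompt would be avoided:
      # One-time, from your own session: admit the local user SOMEONE to the display.
      xhost +SI:localuser:SOMEONE

      # Sudoers rule (add via visudo) so no password is asked:
      #   yourlogin ALL=(SOMEONE) NOPASSWD: /usr/bin/google-chrome

      # Exec line for the .desktop file:
      #   Exec=sudo -u SOMEONE -H /usr/bin/google-chrome
    The -H flag makes Chrome use SOMEONE's home directory for its profile rather than yours.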


  • slow software raid

    - by Jure1873
    I've got software RAID 1 for / and /home and it seems I'm not getting the right speed out of it. Reading from md0 I get around 100 MB/sec; reading from sda or sdb I get around 95-105 MB/sec. I thought I would get more speed (while reading data) from two drives. I don't know what the problem is. I'm using kernel 2.6.31-18.
      hdparm -tT /dev/md0
      /dev/md0:
       Timing cached reads:         2078 MB in 2.00 seconds = 1039.72 MB/sec
       Timing buffered disk reads:   304 MB in 3.01 seconds =  100.96 MB/sec
      hdparm -tT /dev/sda
      /dev/sda:
       Timing cached reads:         2084 MB in 2.00 seconds = 1041.93 MB/sec
       Timing buffered disk reads:   316 MB in 3.02 seconds =  104.77 MB/sec
      hdparm -tT /dev/sdb
      /dev/sdb:
       Timing cached reads:         2150 MB in 2.00 seconds = 1075.94 MB/sec
       Timing buffered disk reads:   302 MB in 3.01 seconds =  100.47 MB/sec
    Edit: RAID 1
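    For what it's worth, this is expected behaviour for md RAID 1 rather than a fault: a single sequential reader is serviced from one mirror, so it tops out at single-disk speed, and the second spindle only helps when there are concurrent readers. A rough sketch of a test that shows the aggregate effect (device name as in the question; run on an otherwise idle box):
      # Flush the page cache so the disks are measured, not RAM.
      sync; echo 3 > /proc/sys/vm/drop_caches
      # Two independent sequential readers on the mirror at once; their combined
      # throughput should noticeably exceed a single hdparm run.
      dd if=/dev/md0 of=/dev/null bs=1M count=2048 &
      dd if=/dev/md0 of=/dev/null bs=1M count=2048 skip=8192 &
      wait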


  • Rpm removal does not remove delivered dirs and leaves garbage

    - by Jim
    I deliver an application via an RPM. This application delivers various directories and files; e.g. a file structure is copied under /opt/internal/com. I was expecting that on rpm -e all of the file structure delivered under /opt/internal/com would be removed, but it is not. There are directories in the file structure that are non-empty. Is this the reason? These (non-empty) directories were created by the RPM installation, so I would expect them to be "owned" by the RPM and removed automatically. Is this wrong? Am I supposed to remove them manually?
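    A hedged note on the likely mechanics: rpm -e only removes what the package explicitly owns, and it only removes an owned directory if it is empty at erase time. Directories merely created as a side effect of packaged paths, or still containing files written at runtime, are left behind. A sketch of the spec-file side, assuming the tree should be owned by the package (the config subdirectory is purely illustrative):
      # %files section of the .spec: own the directories explicitly so
      # "rpm -e" knows to remove them once they are empty.
      %files
      %dir /opt/internal/com
      %dir /opt/internal/com/config
      /opt/internal/com/config/*
    Files created at runtime are never owned by the package; they have to be cleaned up in a %postun scriptlet or by hand.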


  • Why is 'grep -i' so slow? How to do it faster for ASCII?

    - by Vi.
    Consider:
      $ time lzop -d < tvtropes-index.lzo | egrep -B 5 '[Dd][eE][sS][cC][eE][nN][dD] ?[Ff][rR][oO][mM]'
      real    0m0.438s
      $ time lzop -d < tvtropes-index.lzo | egrep -B 5 'descend ?from' -i
      real    0m11.294s
    Both search case insensitively. Why is the -i version so slow? How do I make grep -i fast without entering things like [iI][nN] [tT][hH][iI][sS] [wW][aA][Yy]? For example, perl -ne 'print if /descend ?from/i' works fast, but '-B 5' is not as trivial to implement as in grep (as well as other options).
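    In case it helps, the usual culprit here is the locale rather than grep itself: under a UTF-8 locale GNU grep does multibyte-aware case conversion, while in the C locale -i stays fast for ASCII-only patterns. A sketch using the same pipeline as above:
      # Force the byte-oriented C locale for this command only.
      time lzop -d < tvtropes-index.lzo | LC_ALL=C egrep -i -B 5 'descend ?from'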


  • Macvlan based interface pings from host but not from namespace

    - by jtlebi
    My setup:
      Private network vboxnet1 10.0.7.0/24
      1 Host, ubuntu desktop
      1 VM, ubuntu server (VirtualBox)
    Addressing layout:
      HOST: 10.0.7.1
      VM: 10.0.7.101
      VM "mac" NAMESPACE: 10.0.7.102
    On the VM, I ran the following commands:
      ip netns add mac                            # create a new namespace
      ip link add link eth0 mac0 type macvlan     # create a new macvlan interface
      ip link set mac0 netns mac
    In the mac namespace, inside the VM:
      ip link set lo up
      ip link set mac up
      ip addr add 10.0.7.102/24 dev mac0
    So we basically end up with (like Inception?):
      +---------------------------+
      | Host: 10.0.7.1            |
      |  +----------------------+ |
      |  | VM: 10.0.7.101       | |
      |  |  +-----------------+ | |
      |  |  | NS: 10.0.7.102  | | |
      |  |  +-----------------+ | |
      |  +----------------------+ |
      +---------------------------+
    What works:
      - Ping between Host and VM
      - Ping between NS and NS
      - dhclient from NS
    What does not work:
      - ping between NS and VM
      - ping between NS and Host
    Where I started to go nuts:
      - tcpdump on host (the real machine) actually shows ARP requests AND replies
      - tcpdump on NS shows ARP requests sent to the host
      - tcpdump on VM makes the whole mess work (!) -- ping starts to get answers when tcpdump is started on the VM?!
    So, I bet you were eager for it, my question is: how do I make it work? I suspect something's wrong with ARP on the macvlan inside the NS but can't figure out what exactly... Btw, I did the same experiments with the mac0 interface directly on the VM (no namespace) and it worked flawlessly.
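    A hedged guess at the two usual suspects rather than a verified fix for this exact topology: macvlan children cannot talk to their parent interface's own address by design (which would explain NS to VM failing no matter what), and VirtualBox host-only adapters drop frames for MAC addresses they don't know unless promiscuous mode is allowed - which is why starting tcpdump on the VM, putting eth0 into promiscuous mode, suddenly makes things flow. A sketch of the corresponding changes (VM name and adapter number are assumptions):
      # On the real host: let the host-only adapter pass frames for the
      # macvlan's extra MAC address.
      VBoxManage modifyvm "ubuntu-server" --nicpromisc2 allow-all

      # Optional, on the VM: if the VM itself must reach the namespace, put the
      # VM's address on a second macvlan so both ends are macvlan peers
      # (the address would then come off eth0).
      ip link add link eth0 machost type macvlan mode bridge
      ip addr add 10.0.7.101/24 dev machost
      ip link set machost up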


  • What is the best way to handle the multitude of different logs created all around the place?

    - by Low Kian Seong
    I run a few applications which create their own logs. I also run cron scripts on the same server to import data for my app. When these cron jobs error out, the default is to send mail to the user that runs the job. There are just too many places I need to check - logs and mail - for stuff that might have gone wrong. My question is: what is the best way to handle this? Or, even better, is there something like a log parser application which will go through all the system logs when something really goes wrong, instead of me having to go through them daily?
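    One common pattern, offered only as a sketch: funnel everything - cron jobs included - through syslog, and let a daily summarizer such as logwatch do the first pass over /var/log (the importer path below is hypothetical):
      # Crontab entry: log the importer's output via syslog instead of mail.
      #   0 2 * * *  /usr/local/bin/import-data 2>&1 | logger -t import-data
      # Install a daily digest tool that scans the system logs and mails a summary.
      yum install logwatch     # or: apt-get install logwatch
    For more than one server, the same idea scales by pointing rsyslog/syslog-ng at a central log host and summarizing there.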


  • APF, IPTABLES, Fedora 15 - Not blocking correctly

    - by RichardW11
    I just got a new remote server which came with Fedora 15. I first tried to run APF but it gave me the error "apf(18031): {glob} unable to load iptables module (ip_tables), aborting." I then changed SET_MONOKERN="0" to SET_MONOKERN="1" to resolve that problem. However, with my config file showing
      BLK_P2P_PORTS="1214,2323,4660_4678,6257,6699,6346,6347,6881_6889,6346,7778"
    the ports show up as closed instead of being filtered. Any idea why this would be happening?
      22/tcp   open   ssh
      80/tcp   open   http
      443/tcp  open   https
      2323/tcp closed 3d-nfsd
      4662/tcp closed edonkey
      6346/tcp closed gnutella
      6699/tcp closed napster
      6881/tcp closed bittorrent-tracker
      7778/tcp closed interwise
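    A hedged pointer rather than a diagnosis: to a port scanner, "closed" means the server answered with a TCP RST - either because no firewall rule matched at all (and nothing is listening there), or because a rule REJECTs with tcp-reset - whereas a silent DROP is what shows up as "filtered". Checking the live ruleset on the box shows which case applies:
      # Is there actually a rule for one of the blocked ports, and is it DROP or REJECT?
      iptables -L -n -v | grep -w 2323
      # No matching rule at all would suggest APF isn't loading its P2P block list.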


  • GNOME Screensaver Widgets

    - by Dark Falcon
    Is there a way to add widgets to a Gnome screensaver? I think this can be done with KDE 4, but I've never liked KDE very much. I'm a programmer and comfortable with writing code if needed. I'd like to be able to:
      - See the weather and forecast
      - Control Rhythmbox
      - Use a flash card widget for reviewing musical concepts
    The reason I want these on the screensaver is that I have login restrictions. I would like to be able to do a very limited subset of activities without having to log in.


  • How to automatically take daily HDD backup?

    - by user13107
    I think my HDD might have crashed. I don't want to lose data if this happens again. I have a dual-boot Windows/Ubuntu system. What is the best way to back up data and other software settings in Ubuntu? I don't care much about the Windows partition; the important stuff is in Ubuntu. I have a 1 TB external HDD (the laptop HDD is 500 GB total). One way would be to run rsync every day (via a cron job) to back up everything to the external HDD. What might be better ways of achieving this backup? Also, is instant backup software recommended? Are there any disadvantages of instant backup as opposed to a daily rsync?
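    A minimal sketch of the cron-driven rsync route mentioned in the question; the script name, mount point and exclusion list are assumptions that would need adjusting:
      #!/bin/bash
      # Hypothetical /etc/cron.daily/ubuntu-backup: mirror the Ubuntu system to
      # the external drive, skipping pseudo-filesystems; --delete keeps the copy exact.
      rsync -aAX --delete \
          --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp \
          --exclude=/run --exclude=/media --exclude=/mnt \
          / /media/external-1tb/ubuntu-backup/
    For versioned copies rather than a plain mirror, rsync-based tools such as rsnapshot or Back In Time add dated snapshots with little extra setup; the trade-off of any continuous/"instant" scheme is that it replicates mistakes and deletions immediately, whereas a daily run leaves a one-day window to notice them.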


  • How can a driver change the kernel page table?

    - by Naruto
    I am encountering an issue with kernel memory. When my driver finishes running, other processes fail to run: for example, if I run ls, the command crashes the kernel with a "Corrupted page table" error at a specific address. I do not know whether the page table of my driver is related to the page tables of other processes. How can my driver change the page tables of other processes? And how does a driver relate to the kernel page table? As I understand it, when the driver runs, execution switches to kernel context; the kernel has its own page table and the driver has its own one. What is the relation among the kernel page table, the page table of my driver, and the page tables of the other processes when the driver runs in kernel context?


  • running a web server with encrypted file system (all or part of it)

    - by Carlos
    Hi, I need a webserver (LAMP) running inside a virtual machine (#1) running as a service (#2) in headless mode (#3) with part or all of the filesystem encrypted (#4). The virtual machine will be started with no user intervention and provide access to a web application for users on the host machine. Points #1, #2 and #3 are checked and proved to be working fine with Sun VirtualBox, so my question is about #4: can I encrypt the whole filesystem and still access the webserver (using a browser), or will GRUB ask me for a password? If encrypting the whole filesystem is not an option, can I encrypt only /home and /var/www? Will apache/php be able to use files in /home or /var/www without asking for a password or mounting these partitions manually? Thanks
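    A hedged sketch of the usual compromise: full-disk encryption wants a passphrase at boot, which defeats unattended startup unless the key is stored somewhere unencrypted anyway, so a common pattern is to encrypt only the data partition and unlock it from a keyfile before Apache starts. Device names and paths below are illustrative only:
      # One-time setup of an encrypted partition for the web data.
      cryptsetup luksFormat /dev/sdb1
      cryptsetup luksAddKey /dev/sdb1 /root/www.key
      # At boot, from an init script that runs before apache2/httpd:
      cryptsetup luksOpen --key-file /root/www.key /dev/sdb1 wwwdata
      mount /dev/mapper/wwwdata /var/www
    Note that a keyfile kept inside the same VM only protects the data at rest somewhere else (backups, a stolen copy of the disk image); it does not protect against someone who controls the running VM.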


  • Creating ip alias on bonded interface ie. bond0:1

    - by bobothechimp
    System: HP ProLiant DL360 G5 running CentOS 5.4. The bonded interface has been working fine for a long time. I just went to add an alias the way I always have on a regular interface, and on first check it works (pinging on the local box), but it is not accessible from outside (iptables is turned off). In addition, with this setup the normal network response started to decline, hanging for around a minute before I could get a prompt on login. Here are my config files:
      [root network-scripts]# cat ifcfg-eth0
      DEVICE=eth0
      BOOTPROTO=none
      ONBOOT=yes
      MASTER=bond0
      SLAVE=yes
      USERCTL=no
      [root network-scripts]# cat ifcfg-eth1
      DEVICE=eth1
      BOOTPROTO=none
      ONBOOT=yes
      MASTER=bond0
      SLAVE=yes
      USERCTL=no
      [root network-scripts]# cat ifcfg-bond0
      DEVICE=bond0
      BONDING_OPTS="mode=1 miimon=100"
      BOOTPROTO=none
      ONBOOT=yes
      NETWORK=10.2.1.0
      NETMASK=255.255.255.0
      IPADDR=10.2.1.11
      USERCTL=no
      [root network-scripts]# cat ifcfg-bond0:1
      DEVICE=bond0:1
      BOOTPROTO=static
      ONBOOT=yes
      NETWORK=10.2.1.0
      NETMASK=255.255.255.0
      IPADDR=10.2.1.12
      USERCTL=no
    Any thoughts?
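    Two small things worth checking, offered only as a sketch: on RHEL/CentOS, alias interfaces are normally tied to their parent with ONPARENT rather than ONBOOT, and after bringing up a new address on a bonded (mode 1) interface it can help to send a gratuitous ARP so upstream switches and routers learn where 10.2.1.12 lives - stale ARP entries are a classic cause of "pings locally, unreachable from outside":
      # ifcfg-bond0:1 (sketch)
      DEVICE=bond0:1
      ONPARENT=yes
      IPADDR=10.2.1.12
      NETMASK=255.255.255.0

      # Announce the new address so neighbours refresh their ARP caches.
      arping -U -I bond0 -c 3 10.2.1.12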


  • Cloning to a smaller hard drive with DDRescue

    - by krebshack
    I am currently working with a 700 GB Seagate hard drive that's beginning to fail. I'll call this "SDB" from now on. I'd like to clone it while I'm still able to. However, the only hard drive that I have available is a 500 GB WD hard drive. I'll call this "SDC" from now on. The partition scheme on SDB is as follows: 9.77 GB is allocated to a recovery partition and the remaining 688.87 GB is allocated to a Windows partition. Both are formatted using NTFS. There is no partition scheme on SDC. I know how to clone one hard drive to another using DDRescue, but I've only done it using hard drives that are the same size. For your reference, I'll normally use the command "ddrescue -v -r 3 /dev/sdb /dev/sdc example.log". I'd like to know if it's possible to do this with DDRescue. I've read the manual from GNU (http://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html) and I haven't seen anything indicating that it is possible. I'm just looking for confirmation that this is a correct impression. If it's not possible, it would be helpful if any of y'all could suggest workarounds, but please don't feel obligated to do that. I don't want my one thread bogged down with too many questions.
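    Offered as a hedged sanity check rather than an answer: ddrescue copies sector-for-sector, so a 700 GB source cannot land on a 500 GB target as-is. The usual workarounds are either to shrink the NTFS partition on the source below the target size first (ntfsresize - risky on failing hardware), or to rescue partition by partition into image files on any storage large enough to hold them. A sketch of the per-partition form, with hypothetical partition numbers and mount point:
      # Image each partition separately; the log files let interrupted runs resume.
      ddrescue -v -r 3 /dev/sdb1 /mnt/backup/recovery.img /mnt/backup/recovery.log
      ddrescue -v -r 3 /dev/sdb2 /mnt/backup/windows.img  /mnt/backup/windows.log
    If only the used blocks matter and the 500 GB drive itself has to hold the result, a used-cluster tool such as ntfsclone (which has a --rescue mode for bad sectors) may fit where a raw image cannot.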


  • Timeout ssh sessions after inactivity?

    - by Insyte
    PCI requirement 8.5.15 states: "If a session has been idle for more than 15 minutes, require the user to re-enter the password to re-activate the terminal." The first, and most obvious, way to deal with ssh sessions that are idling at the bash prompt is by enforcing a read-only, global $TMOUT of 900. Unfortunately, that only covers sessions sitting at the bash prompt. The spirit of the PCI spec would also require killing sessions running top/vim/etc. I've considered writing a */1 cron job that parses the output of "/usr/bin/w" and kills the associated shell, but that seems like a blunt instrument. Any ideas for something that would actually do what the spec requires and just lock the terminal? I've looked at away and vlock; they both seem great for voluntarily locking your terminal, but I need a cron/daemon task that will enforce locking.
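    For readers landing here, a minimal sketch of the read-only $TMOUT piece described above (the drop-in path is assumed); as noted, it only covers shells idling at a prompt, not sessions parked inside top, vim and the like:
      # /etc/profile.d/tmout.sh - PCI 8.5.15 idle timeout for interactive shells
      TMOUT=900
      readonly TMOUT
      export TMOUT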


  • Permission denied when trying to execute a binary burned to a CD-R

    - by user16654
    On an Ubuntu 9.10 (Karmic Koala) machine, I burned a CD from the command prompt using:
      cdrecord -v speed=16 dev=0,1,0 /FPS.iso
    The CD now contains an executable and some files. I tested the CD by loading it onto another machine (Red Hat 5.3), and when I try to run the program I get the following message:
      bash: ./FPS1_1: Permission denied
    I can open other files, like text documents (the executable also comes with shared libraries). I realized I had burned the CD as root, so I burned another one as another user, but I still have the same problem. How can I get around this permission problem, or what is actually wrong? P.S. the image was in / if that helps.
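    Two usual causes, offered only as a hedged checklist: the mount options on the reading machine (removable media are often auto-mounted noexec), and the way the ISO was mastered (without Rock Ridge extensions, Unix execute bits are not preserved on the disc). Sketches of both checks - the mount point is an assumption:
      # On the Red Hat box: is the CD mounted noexec? If so, remount it with exec.
      mount | grep -i cd
      mount -o remount,exec /media/cdrom

      # When building the ISO in the first place: keep Unix permissions via Rock Ridge.
      genisoimage -R -o FPS.iso /path/to/files/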


  • Puppet: is it ok to "force" certname when you expect to shuffle nodes around?

    - by Luke404
    We all know (good example on SF) that Puppet hostname detection could be... fun. At our company (and I guess we're not alone in this) we usually pre-configure servers at our offices and test them before bringing the gear to a remote datacenter and racking them. Of course the reverse DNS will change when doing that, even if we don't change the actual hostname of the system. We're slowly drafting our Puppet setup and I'd like to be sure those moves won't create problems. My idea is to explicitly configure the desired full FQDN of the system as certname in puppet.conf at server provision time (before the very first puppet run). My process would look something like this:
      1. basic o.s. installation
      2. basic network configuration, enough to reach the internet and resolve DNS
      3. install puppet and set up certname
      4. start puppet and let it manage the whole configuration
      5. test, fix problems in config (via puppet), re-test, and so on...
      6. manually stop puppet
      7. set up the new network configuration for the datacenter network
      8. move the machine to the DC
      9. turn it on; puppet should automatically start and keep on doing its job
    The process is supported by detecting the environment in Puppet's manifests (e.g. based on subnet, like they do at Wikimedia) and modifying configuration as needed (e.g. resolv.conf contents appropriate for each network). Each node's certname will never change for the whole system life cycle. Is there any problem with this approach? Could it be improved?
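    For concreteness, a sketch of the certname pinning being proposed - the FQDN is hypothetical, and [main] is used so the setting applies to every Puppet subcommand on the node:
      # /etc/puppet/puppet.conf, written at provision time
      [main]
      certname = web01.dc1.example.com
    Since certificates are issued against certname rather than whatever reverse DNS happens to say, this should survive the office-to-datacenter move; the main caveat is that node definitions and any $fqdn-based logic in the manifests have to agree on that same name.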


  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of client configuration for rsync that allows per-site configuration options and aliases, or similar shortcuts like .ssh/config? I'm curious because I have a minimal ssh server installed on my Android phone and I also have a minimal rsync tool on it. I'm getting tired of having to root-login onto the phone and symlink both tools to the standard places the Android OS looks for executables, since the ssh server is bare bones and has a typical *bear multi-link binary for the basic unix commands (one that does not include rsync). I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to do any rsyncing of the files on my phone, but this path is always the same. I've gotten around it in the meantime with a glob approach in a shell script wrapper, but this sometimes limits the customization I can do with the rsync call. I'm just wondering if there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android') where specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc. Tre`
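    As far as I know rsync has no per-host client config of its own, so the usual answer is to put the host-specific bits in ~/.ssh/config and fold the fixed rsync options into a small wrapper. A hedged sketch - the host alias and remote rsync path come from the question, everything else (address, port, wrapper name) is assumed:
      # ~/.ssh/config
      Host android
          HostName 192.168.1.50      # the phone's address
          User root
          Port 2222                  # whatever the phone's sshd listens on

      #!/bin/sh
      # ~/bin/rsync-android (chmod +x): keeps the long rsync options in one place.
      exec rsync --rsync-path=/path/to/rsync/android/files/rsync \
                 --no-g --no-p --no-t "$@"

      # Usage:
      #   rsync-android -av android:/mnt/sdcard/ ./sdcard-backup/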


  • How do I use command line and wmctrl to make a window larger than the screen to get a huge screenshot?

    - by Mnebuerquo
    I use a program which makes a large image which I have to scroll to view. The program has no way to save the image, and I have no access to the source to modify it. The only way I have to get the image from the program is by screenshot. My goal is to save the full size image without having to piece together individual screenshots. I'm using this script to try taking a screenshot:
      #!/bin/bash
      window=$(wmctrl -l | grep "Program$" | awk '{print $1}')
      wmctrl -v -i -r $window -e '0,0,0,6030,5828'
      wmctrl -i -a $window
      import -window $window ~/Desktop/screenshot.png
    This uses wmctrl to get the window id ($window) for a window named "Program". It then tries to resize the window to the desired dimensions. It uses imagemagick (import) to save a screenshot.png on the user's Desktop. All of this works except the resize step. I can resize the window using wmctrl -r -e, but sizes greater than the screen size don't work. I'm using Ubuntu 10.04 and the Gnome Desktop. I run two monitors, but I've tried this with one of them disabled. Is there a way to resize the window larger than my screen to get a huge screenshot?
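    One avenue that sometimes works, offered with no guarantee: the window manager won't grow a window past the X screen, but the X screen itself can be enlarged into a bigger virtual framebuffer with xrandr, after which the wmctrl resize has room to succeed. A sketch (the restore line assumes a 1920x1080 panel and an output name that will differ per machine):
      # Enlarge the virtual screen beyond the physical panel, reusing $window
      # from the script above, then shoot and restore.
      xrandr --fb 6100x5900
      wmctrl -i -r $window -e '0,0,0,6030,5828'
      import -window $window ~/Desktop/screenshot.png
      xrandr --fb 1920x1080 --output DVI-0 --auto
    Whether the driver accepts a framebuffer that large depends on the hardware; the other common route is to script scrolling plus import and stitch the tiles with ImageMagick.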


  • File permissions on web server

    - by plua
    I have just read this useful article on file permissions, and I am about to implement an as-strict-as-possible file permissions policy on our webserver. Our situation: we have a web server accessed through sftp by different users from within our company, and we have the general public accessing Apache - sometimes uploading files through PHP. I distinguish folders and files by their use. So based on this reading, here is my plan. All people who need to upload files will have separate users, but all of those users will belong to two groups: uploaders and webserver. Apache will belong to the group webserver.
      Directories
        Permission: 771
        Owner: user:uploaders
        Explanation: to access files in the folder, everybody needs execute permission. Only uploaders will be adding/removing files, so they also get r+w permission.
      Files within the web root
        Permission: 664
        Owner: user:uploaders
        Explanation: they will be uploaded and changed by different users, so both owner and group need w+r permissions. The webserver only needs to read files, so r permission only.
      Upload directories
        Permission: 771
        Owner: user:webserver
        Explanation: when files need to be uploaded, Apache needs to be able to write to this directory. But I figure it is safer to change the group owner to webserver, thus giving Apache sufficient privileges (and all uploaders also belong to this group and will have the same permissions), while safeguarding against "others" writing to this folder.
      Uploaded files
        Permission: 664
        Owner: user:webserver
        Explanation: after uploading, Apache might need to delete files, but this is no problem because it has w+r permission on the folder. So no need to make these files any more accessible than r access for the group.
    Being no expert on file permissions, my question is whether or not this is the best possible policy for our situation. Any suggestions welcome.
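    For reference, a sketch of how the plan above would translate into commands; the owner name and paths are placeholders, not part of the question:
      # Web root: owned by an uploader account, group uploaders.
      chown -R alice:uploaders /var/www/site
      find /var/www/site -type d -exec chmod 771 {} +
      find /var/www/site -type f -exec chmod 664 {} +

      # Upload directory: group webserver so Apache can write into it.
      chown alice:webserver /var/www/site/uploads
      chmod 771 /var/www/site/uploads
    A setgid bit on the upload directory (chmod 2771) is also worth considering so newly uploaded files inherit the group automatically.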


  • What music streaming app fits my needs on Ubuntu?

    - by Jim
    I'm looking for an application to stream Internet radio on Ubuntu. I like listening to Radio Paradise while I work. Right now, I'm using Amarok. "Movie Player" sometimes refuses to open the stream, and VLC doesn't keep its window title updated with the currently playing track. Amarok has nice translucent notifications when tracks change, but track changes in streams don't trigger the notifications. Mostly, I want something that reliably opens streams and makes it easy to see the name of the track that's playing. If it has a built-in directory of streaming radio stations, that would be a big benefit.

