Search Results

Search found 16665 results on 667 pages for 'nhibernate configuration'.


  • Apache outputs all urls of a second domain as a subfolder of the primary domain name

    - by s_rathbone
    Hi all, would anyone be able to give me some guidance? Basically, I have a 'shared hosting' account with a large internet hosting provider, and my account lets me have multiple separate domains within its folder structure (note: not aliased domains and not subdomains). My goal is to have two domains set up, and I have already purchased the two domain names I need: the first domain is the 'primary' domain name for the root folder (e.g. www.example1.com) and the second domain name points to one of its subfolders (e.g. www.example2.com is set to the folder www.example1.com/sites/music). The problem is that when Apache returns a page of the second domain back to the browser, it writes the hyperlinks as if the site were a subfolder of the first domain (e.g. www.example2.com/index.html comes out as http://www.example1.com/sites/music/index.html). I have done some reading on this, looking through "Apache: The Definitive Guide" (O'Reilly), and although it was useful, I couldn't really find the answer. I'm guessing this is most likely an Apache setup issue in httpd.conf rather than an issue with the hosting company itself (which is why I'm posting it here). I have also been to the official Apache documentation site, and I'm guessing I might need to use something like the RewriteBase directive in .htaccess files, but I'm really not sure; I'm more of a Java programmer and have been struggling with this for a couple of days. Any guidance would be REALLY appreciated. If it helps, my hosting company is GoDaddy, and my sites are hosted on Linux. My problem was originally with WordPress, which I reinstalled a number of times in various ways to correct the problem, but I've just done a test with a very simple static HTML page, and it still has the same issue with relative URLs like this: <html> <head></head><body><a href="images/dog.html">Pictures of Dogs</a></body> </html> However, it is fine if I hardcode the URLs like this: <html> <head></head><body><a href="http://www.example2.com/images/dog.html">Pictures of Dogs</a></body> </html> Thanks heaps, Steve R
    NOW FIXED: OK, the problem has now been fixed, and I didn't need to modify any .conf or .htaccess files. The problem was that when I went to install the second application into a second domain from the GoDaddy site, one of the setup questions asks which site you want it installed to, and after that it asks for the desired folder path. However, the second domain name was already pointing to the correct subfolder of the primary domain. So when I started installing WordPress again and came to the menu to select which site it was for, and it listed only the primary domain as an option, I assumed that this was a label like "which hosting account?" or "which primary domain will your application be installed under?", because I already knew that in the next step I was specifying the folder. To correct this, make sure that your second domain is added to your domain list so that it will be listed as an option during the installation process. For further details please read tystips.com/archives/52/how2-save-money-host-multiple-wordpress-blogs-on-a-single-godaddy-hosting-account/

    Read the article

  • yum issue - error msg

    - by Monkey
    I am using Oracle Linux Server 6.2 and yum does not work. A manual wget was already used according to https://blogs.oracle.com/OTNGarage/entry/how_to_subscribe_to_the . Every run fails with something about Dropbox: yum update firefox Loaded plugins: refresh-packagekit, security http://linux.dropbox.com/fedora/6Server/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404" Trying other mirror. Error: Cannot retrieve repository metadata (repomd.xml) for repository: Dropbox. Please verify its path and try again Does anybody know a workaround?
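
    The failing repository is the third-party Dropbox repo rather than anything from Oracle, so a hedged workaround (the repo id "Dropbox" is taken from the error message; the exact file name under /etc/yum.repos.d/ may differ) is to skip or disable that repo:

        # one-off: run the update without the broken repo
        yum --disablerepo=Dropbox update firefox

        # permanent: disable the repo until its metadata is fixed
        # (edit whichever file under /etc/yum.repos.d/ defines [Dropbox])
        sed -i 's/^enabled=1/enabled=0/' /etc/yum.repos.d/dropbox.repo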

    Read the article

  • How to generate customized sudoers files in puppet depending on the environment they're deployed to?

    - by gozu
    The sysadmins are present in the sudoers files of all environments, but other sudoers are not. Different environments all have slightly different sudoers. Most of the time, 90% of users are the same and 10% vary, so we cannot have only one sudoers file for everything. Right now, we are using puppet with 10 different files with names like sudoers.production1, sudoers.production2, sudoers.production3, sudoers.testing1, sudoers.staging1 and so forth. Puppet then picks the file to deploy based on the server's $domain (ex: dbserver.staging1.acme.com) or $hardwaremodel. It works fine but it's a nightmare to maintain so many files. I'd like to autogenerate sudoers files based on the server's domain and have only one big file with all the sudoers permissions for all users and all environments. Something that looks like: User_Alias ADMINS = abe, bob, carol, dave case $domain { "staging1.acme.com" { #add dev1,dev2,tester1,tester2 to sudoers file } "testing2.acme.com" { #add tester1, tester3, tester4 to sudoers file } What's the best way to go about this? Suggestions for alternatives are welcome. I'd appreciate any tips. Update 1: For security reasons, we'd rather not concatenate a bunch of files from a folder located on a puppet client, in case someone puts a file in there (maliciously or not) and either breaks the combined file or inserts something into it. Most importantly, for usability, we'd like to keep the number of sudoers-related files (fragment or complete) on the puppet server to either 3 (prod/stage/test) or preferably 1 file. This file would (somehow) generate sudoers files on the puppet server and send one customized file to each puppet client. The purpose of this would be to make searching for a username in a single file, and removing it, quicker than doing it across 11 files. When adding a user to a bunch of environments it won't be as quick, but only one file would need to be opened and looked at, greatly reducing the chance of an omission. Our sudo version is 1.6.9p8, so we can't use a sudoers.d folder, only a single sudoers file.
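
    A hedged sketch of the single-source approach described above, keeping every per-environment user list in one manifest and rendering one ERB template (module name, paths and the @-variable syntax are assumptions and may need adjusting for older Puppet releases):

        # manifest (sketch)
        $extra_sudoers = $domain ? {
          'staging1.acme.com' => ['dev1', 'dev2', 'tester1', 'tester2'],
          'testing2.acme.com' => ['tester1', 'tester3', 'tester4'],
          default             => [],
        }
        file { '/etc/sudoers':
          owner   => 'root',
          group   => 'root',
          mode    => '0440',
          content => template('sudo/sudoers.erb'),
        }

        # templates/sudoers.erb (sketch)
        User_Alias ADMINS = abe, bob, carol, dave
        ADMINS    ALL=(ALL) ALL
        <% @extra_sudoers.each do |user| -%>
        <%= user %>    ALL=(ALL) ALL
        <% end -%>

    This keeps a single place to search when removing a user, which was the main goal stated in the update.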

    Read the article

  • How can the little guys effectively learn and use puppet?

    - by drumfire
    Six months ago, in our not-for-profit project we decided to start migrating our system management to a Puppet controlled environment because we are expecting our number of servers to grow substantially between now and a year from now. Since the decision has been made our IT guys have become a bit too annoyed a bit too often. Their biggest objections are: "We're not programmers, we're sysadmins"; Modules are available online but many differ from one another; wheels are being reinvented too often, how do you decide which one fits the bill; Code in our repo is not transparent enough, to find how something works they have to recurse through manifests and modules they might have even written themselves a while ago; One new daemon requires writing a new module, conventions have to be similar to other modules, a difficult process; "Let's just run it and see how it works" Tons of hardly known 'extensions' in community modules: 'trocla', 'augeas', 'hiera'... how can our sysadmins keep track? I can see why a large organisation would dispatch their sysadmins to puppet courses to become puppet masters. But how would smaller players get to learn puppet to a professional level if they do not go to courses and basically learn it via their browser and editor?

    Read the article

  • Linux Mint Wireless doesn't connect

    - by guisantogui
    I'm having a big problem. I've installed Linux Mint Debian Edition (LMDE), and following this tutorial http://community.linuxmint.com/tutorial/view/161 I installed the network driver. The available connections appear, but when I try to connect to my network the first time, I get this message: "(4) Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken." On the following tries, I get this other message: "(32) Insufficient privileges." I'm open to ideas. Thanks. EDIT: The last piece of the logs: Oct 5 00:22:38 gsouza-host ntpd[2116]: peers refreshed Oct 5 00:22:42 gsouza-host NetworkManager[2019]: <info> (wlan0): bringing up device. Oct 5 00:22:42 gsouza-host wpa_supplicant[2055]: nl80211: 'nl80211' generic netlink not found Oct 5 00:22:42 gsouza-host wpa_supplicant[2055]: Failed to initialize driver 'nl80211' Oct 5 00:22:42 gsouza-host wpa_supplicant[2055]: rfkill: WLAN soft blocked Oct 5 00:22:42 gsouza-host NetworkManager[2019]: <info> WiFi hardware radio set enabled Oct 5 00:22:42 gsouza-host NetworkManager[2019]: <info> WiFi now enabled by radio killswitch Oct 5 00:22:42 gsouza-host NetworkManager[2019]: <info> (wlan0): supplicant interface state: starting -> ready Oct 5 00:22:42 gsouza-host NetworkManager[2019]: <info> (wlan0): device state change: unavailable -> disconnected (reason 'supplicant-available') [20 30 42] Oct 5 00:22:42 gsouza-host NetworkManager[2019]: <info> (wlan0): supplicant interface state: ready -> inactive Oct 5 00:22:42 gsouza-host NetworkManager[2019]: <warn> Trying to remove a non-existant call id. Oct 5 00:22:42 gsouza-host wpa_supplicant[2055]: rfkill: WLAN unblocked Oct 5 00:22:44 gsouza-host avahi-daemon[1827]: Joining mDNS multicast group on interface wlan0.IPv6 with address fe80::7ae4:ff:fe4a:13a9. Oct 5 00:22:44 gsouza-host avahi-daemon[1827]: New relevant interface wlan0.IPv6 for mDNS. Oct 5 00:22:44 gsouza-host avahi-daemon[1827]: Registering new address record for fe80::7ae4:ff:fe4a:13a9 on wlan0.*. Oct 5 00:22:46 gsouza-host ntpd[2116]: Listen normally on 7 wlan0 fe80::7ae4:ff:fe4a:13a9 UDP 123 Oct 5 00:22:46 gsouza-host ntpd[2116]: peers refreshed
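
    The log above shows the adapter being reported as soft-blocked by rfkill and wpa_supplicant failing to initialise the nl80211 driver; a hedged first check is whether a soft or hard block is still in place before digging into NetworkManager permissions:

        # list the wireless kill-switch state (soft/hard blocked)
        rfkill list

        # clear a software block if one is reported
        sudo rfkill unblock wifi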

    Read the article

  • Setting up two screens in Xorg

    - by viraptor
    I've got two Nvidia cards, but Xorg activates only one of them. The following config is based on the nvidia configurator output: Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 Screen 1 "Screen1" LeftOf "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "0" EndSection Section "Module" Load "dbe" Load "extmod" Load "type1" Load "freetype" Load "glx" EndSection Section "InputDevice" Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" Identifier "Keyboard0" Driver "keyboard" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "HP LE2201w" HorizSync 24.0 - 83.0 VertRefresh 50.0 - 76.0 Option "DPMS" EndSection Section "Monitor" Identifier "Monitor1" VendorName "Unknown" ModelName "Acer AL2017" HorizSync 30.0 - 82.0 VertRefresh 56.0 - 76.0 Option "DPMS" EndSection Section "Device" Identifier "Card0" Driver "nvidia" VendorName "nVidia Corporation" BoardName "GeForce 6100 nForce 405" BusID "PCI:0:13:0" EndSection Section "Device" Identifier "Card1" Driver "nvidia" VendorName "nVidia Corporation" BoardName "GeForce 8400 GS" BusID "PCI:2:0:0" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "TwinView" "0" Option "metamodes" "nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 Option "TwinView" "0" Option "metamodes" "nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection What I see in the log file is: (==) Log file: "/var/log/Xorg.0.log", Time: Fri Mar 19 11:08:08 2010 (==) Using config file: "/etc/X11/xorg.conf" (==) ServerLayout "Layout0" (**) |-->Screen "Screen0" (0) (**) | |-->Monitor "Monitor0" (==) No device specified for screen "Screen0". Using the first device section listed. (**) | |-->Device "Card0" (**) |-->Screen "Screen1" (1) (**) | |-->Monitor "Monitor1" (==) No device specified for screen "Screen1". Using the first device section listed. (**) | |-->Device "Card0" (**) |-->Input Device "Keyboard0" (**) |-->Input Device "Mouse0" (**) Option "Xinerama" "0" (==) Automatically adding devices (==) Automatically enabling devices even though later on both cards are detected: (--) PCI:*(0:0:13:0) 10de:03d1:1019:2601 nVidia Corporation C61 [GeForce 6100 nForce 405] rev 162, Mem @ 0xfb000000/16777216, 0xd0000000/268435456, 0xfc000000/16777216, BIOS @ 0x????????/131072 (--) PCI: (0:2:0:0) 10de:0422:0000:0000 nVidia Corporation G86 [GeForce 8400 GS] rev 161, Mem @ 0xf8000000/16777216, 0xe0000000/268435456, 0xf6000000/33554432, I/O @ 0x0000bc00/128, BIOS @ 0x????????/131072 [ --- some more logs --- ] (II) Mar 19 11:08:10 NVIDIA(0): NVIDIA GPU GeForce 6100 nForce 405 (C61) at PCI:0:13:0 (II) Mar 19 11:08:10 NVIDIA(0): (GPU-0) [ --- some more logs --- ] (II) Mar 19 11:08:12 NVIDIA(GPU-1): NVIDIA GPU GeForce 8400 GS (G86) at PCI:2:0:0 (GPU-1) Unfortunately later on only one card is initialised and one screen is active. Xrandr shows only one screen too. Any ideas on how to fix it?
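
    One thing worth noting from the log: the "No device specified for screen ... Using the first device section listed" lines match the fact that the Screen sections point at "Device0" and "Device1" while the Device sections are actually named "Card0" and "Card1", so both screens fall back to the first card. A hedged correction is to make the Device references match the Device identifiers:

        Section "Screen"
            Identifier "Screen0"
            Device     "Card0"      # was "Device0"; must match a Device Identifier
            Monitor    "Monitor0"
            DefaultDepth 24
            # remaining options unchanged
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device     "Card1"      # was "Device1"
            Monitor    "Monitor1"
            DefaultDepth 24
            # remaining options unchanged
        EndSection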

    Read the article

  • Apache2 refuses to process php files - "Snow Leopard" OSX 10.6.4

    - by w-01
    I have a MacBook Pro i5. My understanding is that by default it should be able to serve PHP 5. I have uncommented the relevant line in /etc/apache2/httpd.conf: LoadModule php5_module libexec/apache2/libphp5.so I have restarted Apache with sudo apachectl -k restart, and when I try to access a file with a .php extension, Apache prompts me to download the file, i.e. instead of processing the PHP and sending me HTML, it thinks I want to download the file. When I look in the Apache error log I see this: [Fri Nov 12 10:16:14 2010] [notice] Apache/2.2.14 (Unix) PHP/5.3.2 mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_wsgi/3.2 Python/2.6.1 configured -- resuming normal operations so it looks like PHP 5 is loading properly. I'd like to know either: how do I fix this? or how do I reinstall Apache so that it's like I just installed the OS? Thanks in advance. Update: @Zayne - the end of my httpd.conf has Include /private/etc/apache2/other/*.conf and I have a file /etc/apache2/other/php.conf with the contents <IfModule php5_module> AddType application/x-httpd-php .php AddType application/x-httpd-php-source .phps <IfModule dir_module> DirectoryIndex index.html index.php </IfModule> </IfModule> @Zayne I've already copied php.ini.default to php.ini in the same folder. When I run sudo apachectl configtest I get /usr/sbin/apachectl: line 82: ulimit: open files: cannot modify limit: Invalid argument httpd: Could not reliably determine the server's fully qualified domain name, using ::1 for ServerName Syntax OK Furthermore, I decided to try apachectl -M, which shows all loaded modules. Most importantly, in the list of loaded modules I got Loaded Modules: php5_module (shared) Since the module is being loaded, it seems the issue has more to do with making Apache use the PHP engine to process the .php files... so is something wrong with the IfModule directive?
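
    Since the module loads but .php files are still offered as downloads, one hedged thing to try (a sketch, assuming the stock mod_php5 shipped with 10.6) is to map .php requests to a handler explicitly rather than relying only on the AddType lines, for example in /etc/apache2/other/php.conf:

        <IfModule php5_module>
            # hand .php requests to mod_php instead of serving them as plain files
            <FilesMatch "\.php$">
                SetHandler application/x-httpd-php
            </FilesMatch>
        </IfModule>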

    Read the article

  • Customize rsyslogd messages to show the sender of the message; not the receiver

    - by Nimmy Lebby
    I'm forwarding the WiFi router's log messages to our sysadmin box (sb3). This is the stanza in /etc/rsyslog.conf: # WiFi router log :fromhost-ip, isequal,'10.3.291.2' /var/log/wifi-router.log & ~ However, the log looks like this: Dec 23 10:41:58 sb3 dnsmasq-dhcp[253]: DHCPACK(br0) 10.3.292.133 xx:xx:xx:xx:xx:xx dg-ipad I want to customize this so that anything logged to wifi-router.log does not mention sb3 but instead indicates the sender of the log message. How would I do this?
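
    A hedged sketch using a custom rsyslog template, which substitutes the sending host's IP (the fromhost-ip property) for the receiving hostname in that one file; the filter is adapted from the stanza above, with the template name appended to the file action:

        # /etc/rsyslog.conf
        $template WifiRouterFormat,"%timegenerated% %fromhost-ip% %syslogtag%%msg%\n"
        :fromhost-ip, isequal, "10.3.291.2"    /var/log/wifi-router.log;WifiRouterFormat
        & ~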

    Read the article

  • Error with Apache, Nagios and Snorby integration

    - by user1428366
    I'm trying to use Apache to serve two different websites (Nagios and Snorby). The problem is that when I try to see the "/snorby" website, Apache sends me the "It works" page. If I try to access "/nagios" it works perfectly. Snorby is running under Ruby Passenger. These are the config files. <VirtualHost *:80> ScriptAlias /nagios/cgi-bin "/srv/nagios/sbin" <Directory "/srv/nagios/sbin"> # SSLRequireSSL Options ExecCGI AllowOverride None Order allow,deny Allow from all # Order deny,allow # Deny from all # Allow from 127.0.0.1 AuthName "Nagios Access" AuthType Basic AuthUserFile /srv/nagios/etc/htpasswd.users Require valid-user </Directory> Alias /nagios "/srv/nagios/share" <Directory "/srv/nagios/share"> # SSLRequireSSL Options None AllowOverride None Order allow,deny Allow from all # Order deny,allow # Deny from all # Allow from 127.0.0.1 AuthName "Nagios Access" AuthType Basic AuthUserFile /srv/nagios/etc/htpasswd.users Require valid-user </Directory> </VirtualHost> And the other one is this: <VirtualHost *:80> #Alias /snorby "/var/www/snorby-2.6.0/public" # !!! Be sure to point DocumentRoot to 'public'! DocumentRoot /var/www/snorby-2.6.0/public <Directory /var/www/snorby-2.6.0/public> # This relaxes Apache security settings. AllowOverride all # MultiViews must be turned off. Options -MultiViews </Directory> </VirtualHost> If I disable the Nagios webpage, the Snorby webpage works. I think the problem is Snorby, because when I try to access the IP address with the Nagios page disabled, the web application redirects me to http://myserverip/dashboard. Can anyone help me please? Thank you so much! Regards
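
    When both applications have to share one VirtualHost, a hedged sketch (directive names assume Passenger 3.x or newer; paths are taken from the config above) is to mount Snorby under a sub-URI instead of giving it the whole DocumentRoot, so the Nagios aliases keep working:

        Alias /snorby /var/www/snorby-2.6.0/public
        <Location /snorby>
            PassengerBaseURI /snorby
            PassengerAppRoot /var/www/snorby-2.6.0
        </Location>
        <Directory /var/www/snorby-2.6.0/public>
            AllowOverride all
            Options -MultiViews
        </Directory>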

    Read the article

  • Fedora Tomcat log file path

    - by Kamil
    My log file path is configured here: kamil@localhost tomcat$ grep "logs/" ./* ./log4j.properties:log4j.appender.R.File=${catalina.home}/logs/tomcat.log My CATALINA_HOME is: kamil@localhost tomcat$ sudo grep "CATALINA" ./* ... ./tomcat.conf:CATALINA_HOME="/usr/share/tomcat" The above suggests that my log file is here, and there it is: kamil@localhost tomcat$ sudo ls /usr/share/tomcat/logs/ | grep .out catalina.out So why can't I start the server? kamil@localhost tomcat$ sudo tomcat start /usr/sbin/tomcat: line 30: /logs/catalina.out: No such file or directory
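
    The bare "/logs/catalina.out" in the error suggests CATALINA_HOME is empty in the environment the script sees when /usr/sbin/tomcat is run directly; a hedged workaround is to start Tomcat through the service wrapper (which pulls in the packaged configuration) or to supply the variable explicitly:

        # preferred: let the service wrapper pick up tomcat.conf
        sudo service tomcat start

        # or provide CATALINA_HOME explicitly when calling the script directly
        sudo env CATALINA_HOME=/usr/share/tomcat tomcat start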

    Read the article

  • RAID options for a LAMP web server

    - by jetboy
    I'm due to set up a LAMP server with four drives and a RAID controller to act as a web server. The drives are 146 GB SAS, and the machine has two quad-core processors and 16 GB of RAM. There will be very few write operations to the MySQL database, and I'll be using as much caching as possible to reduce disk I/O. The question is: would I be better off splitting the drives into two RAID 1 arrays, separating sequential and random disk I/O, or would I get better overall performance putting them all in a single RAID 1+0 array?

    Read the article

  • Where are Adobe Acrobat Pro settings saved

    - by Bruce227
    Hi, I'm trying to figure out where the settings and options of an application (Adobe Acrobat Pro) are saved. This is so I can modify the settings without opening the application and going through the menu options. I'm trying to change the resolution in the TIFF settings. I have looked in the Acrobat folder under Program Files and in Documents and Settings but haven't found it. I also looked through the registry entries just to make sure but didn't see anything relevant. Anyone have any ideas? Thanks

    Read the article

  • TargetProcess 404 error after installation

    - by Priednis
    I installed TargetProcess on my laptop but when trying to open the corresponding web page (http://localhost/TargetProcess2) I get a 404 error. I am running Win7 with IIS7 (7.5.7600.16385) and MS SQL 2008 Express. As suggested in Installation guide.pdf I have performed aspnet_regiis.exe -i (however in the folder C:\Windows\Microsoft.NET\Framework\v2.0.50727 and not in C:\Windows\Microsoft.NET\Framework\v3.5.30729.1, because I simply don't have that folder, although the MS Web Platform Installer says that I have SP1 for .NET Framework 3.5 installed).
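
    A hedged note: .NET 3.5 runs on the 2.0 CLR, so aspnet_regiis.exe living only under v2.0.50727 is expected. On 64-bit Windows 7 it may also be worth registering the 64-bit side and confirming that TargetProcess2 is an IIS application rather than a plain virtual directory (paths below are the standard ones, adjust if Windows is installed elsewhere):

        %windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_regiis.exe -i

        rem list configured applications to confirm /TargetProcess2 is one of them
        %windir%\system32\inetsrv\appcmd.exe list app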

    Read the article

  • JBoss AS: use .xml files in the properties-service.xml

    - by fgysin
    The properties service (configured in properties-service.xml) in the JBoss application server lets you specify external .properties files that are loaded and can then be accessed as system properties from the deployed applications. (See http://community.jboss.org/wiki/PropertiesService for more info.) Is it also possible to load config files in XML format instead of .properties? I know it is possible for certain specific configs, for example mail-service.xml and jboss-log4j.xml... but they are both loaded directly by JBoss, and not via the properties service.
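
    A hedged aside: the properties service is documented around plain key=value .properties files, but if the application can load its own configuration, the XML form of java.util.Properties works in plain Java (file name and key are illustrative):

        import java.io.FileInputStream;
        import java.util.Properties;

        public class XmlConfigExample {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                // reads the java.util.Properties XML format (properties.dtd)
                props.loadFromXML(new FileInputStream("conf/app-config.xml"));
                System.out.println(props.getProperty("db.url"));
            }
        }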

    Read the article

  • Configuring a Fibre Channel switch

    - by lindenb
    Hi all, (I'm asking this for a friend and I don't know most of this technical stuff, so I'm sorry in advance if I'm not clear enough in describing the problem.) Where can I find any information about how to configure a Fibre Channel switch (QLogic, mini-GBIC, QME2572) to make it communicate with a Dell R905 and a Dell M905 blade server? Many thanks in advance, Pierre

    Read the article

  • Small web server hardware advice

    - by Dmitri
    We need to build a new web server for our organization. We have around a hundred small-traffic web sites, so our hardware requirements are not too tough. We run CentOS 6, Varnish+Apache, PHP, MySQL, and the Typo3 CMS for most of the websites. Here's the hardware we want to buy: SuperMicro X9SCA-F-O (we need remote management capability) (or better, X9SCM-F?) Intel Xeon E3-1220 v2 2 * 4 GB DDR3 1600 MHz Kingston ECC (KVR16E11/4) (currently we have 4 GB and it feels like enough, so no reason for 16 GB yet) Procase EB140-B-0 (1U) PSU 350W Procase MG1350, Active PFC We already have: Intel 335 120 GB SSD (for the OS, databases and important websites) and 2 * 2 TB WD Green in RAID 1 (for other data and backups). Does it look like a reasonable choice for our needs? Any issues with hardware compatibility? Any other notes?

    Read the article

  • puppetca never returns anything

    - by mrisher
    Hi: I'm trying to configure Puppet on Ubuntu, and strangely I am never able to generate a certificate because my server never shows any pending certificate requests. Put differently, on the server I am running puppetmasterd and on the client I am able to connect to the server, but the client continues printing notice: Did not receive certificate warning: peer certificate won't be verified in this SSL session and yet the server never sees the request mrisher@lab2$ puppetca --list [nothing shows up] mrisher@lab2$ puppetca --sign clientname.domain.com clientname.domain.com err: Could not call sign: Could not find certificate request for clientname.domain.com Edit: There was a suggestion that autosign was happening, but that does not seem to be it. There is no autosign.conf file, and when I run puppetmasterd --no-daemonize -d -v I receive the following output: info: Could not find certificate for 'clientname.domain.com' every time the client says notice: Did not receive certificate I checked the certs on the server and there don't seem to be any: mrisher@lab2:~$ puppetca --list --all mrisher@lab2:~$ sudo puppetca --list --all + lab2.domain.com // this is the server (master) mrisher@lab2:~$ sudo puppetca --list [blank line] mrisher@lab2:~$ Note: This is mostly running the default install from Ubuntu, if that gives any leads. Thanks for any help out there.
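
    A hedged troubleshooting sketch for this symptom (commands match the pre-2.6 tools used above; paths assume the stock Ubuntu layout): check that the client and master clocks agree, then discard the client's SSL state so it submits a fresh CSR while you watch the master:

        # on the client
        date                               # compare with the master; certificates break on clock skew
        sudo rm -rf /var/lib/puppet/ssl    # throw away stale client certs/CSRs
        sudo puppetd --test --server lab2.domain.com --waitforcert 60

        # on the master, in another shell
        sudo puppetca --list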

    Read the article

  • Can't get Monit to work

    - by Andrea
    I am trying to configure Monit on my local machine to get a taste of how it works, but I have some issues. What I am trying to do is get some evidence that Monit is up and running correctly and is actually monitoring something. My /etc/monit/monitrc looks like this: set daemon 60 set logfile /var/log/monit.log set idfile /var/lib/monit/id set statefile /var/lib/monit/state set eventqueue basedir /var/lib/monit/events slots 100 set httpd port 2812 and allow username:password check process apache2 with pidfile /usr/local/apache/logs/apache2.pid start program = "/etc/init.d/apache2 start" stop program = "/etc/init.d/apache2 stop" if failed port 6543 protocol http then exec "/usr/bin/touch /tmp/monit" If I understand correctly, since apache does not listen on port 6543 (it is just a random number) I should get an error, and as a consequence the file /tmp/monit should be created. So I start monit with sudo service monit start sudo monit monitor apache2 Unfortunately no such file is created. Instead the web console shows an error for apache - execution failed. The log says 'apache2' failed to start. What am I doing wrong? EDIT: As suggested in the comments, I ran monit in verbose mode with monit -vv monitor apache2 (the exact command suggested in the comments failed). The output is: Runtime constants: Control file = /etc/monit/monitrc Log file = /var/log/monit.log Pid file = /var/run/monit.pid Debug = True Log = True Use syslog = False Is Daemon = True Use process engine = True Poll time = 60 seconds with start delay 0 seconds Expect buffer = 256 bytes Event queue = base directory /var/lib/monit/events with 100 slots Mail from = (not defined) Mail subject = (not defined) Mail message = (not defined) Start monit httpd = True httpd bind address = Any/All httpd portnumber = 2812 httpd signature = True Use ssl encryption = False httpd auth. style = Basic Authentication The service list contains the following entries: Process Name = apache2 Pid file = /usr/local/apache/logs/apache2.pid Monitoring mode = active Start program = '/etc/init.d/apache2 start' timeout 30 second(s) Stop program = '/etc/init.d/apache2 stop' timeout 30 second(s) Existence = if does not exist 1 times within 1 cycle(s) then restart else if succeeded 1 times within 1 cycle(s) then alert Pid = if changed 1 times within 1 cycle(s) then alert Ppid = if changed 1 times within 1 cycle(s) then alert Port = if failed localhost:6543 [HTTP via TCP] with timeout 5 seconds 1 times within 1 cycle(s) then exec '/usr/bin/touch /tmp/prova-monit' timeout 0 cycle(s) else if succeeded 1 times within 1 cycle(s) then alert System Name = system_andrea-Vostro-420-Series Monitoring mode = active
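
    A hedged note on the "'apache2' failed to start" message: Monit restarts any process whose pidfile check fails, and /usr/local/apache/logs/apache2.pid looks like a source-build path rather than the one used by a distro-packaged apache2 started from /etc/init.d. A sketch of the check with a packaged-layout pidfile (the exact path is an assumption; compare with ls /var/run/apache2*):

        check process apache2 with pidfile /var/run/apache2.pid
            start program = "/etc/init.d/apache2 start"
            stop program = "/etc/init.d/apache2 stop"
            # port 6543 is the deliberately wrong port from the test above
            if failed port 6543 protocol http then exec "/usr/bin/touch /tmp/monit"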

    Read the article

  • Make dhcp assign same IP and hostname for different interfaces at one machine

    - by Egeshi
    I have a feeling the question itself looks stupid, but it is not; please let me clarify. I have dynamic DNS with BIND and NIS configured on my LAN, and a laptop which I use in both wireless and wired mode. Sometimes I have to use the wired interface to achieve higher throughput, but most of the time I don't need it and use wireless. Everything works great. The issue is that I want both interfaces to get the same IP from DHCP, just for convenient firewall setup. If I add both hosts to DHCP in this manner: # bt wireless host bt { hardware ethernet 00:1f:1f:62:60:28; fixed-address 172.16.77.110; } # bt wired host bt { hardware ethernet 00:14:22:b7:5a:de; fixed-address 172.16.77.110; } DHCP logs the following message: dhcpd: Dynamic and static leases present for 172.16.77.110 dhcpd: Remove host declaration bt-wired or remove 172.16.77.110 dhcpd: from the dynamic address pool for 172.16/16 The host records are added outside of any subnet, but it makes no difference if I put them there; the effect is still the same. This is not critical, but neither is it just a whim: even though DHCP seems to work fine for that "bt" host, I can no longer make a connection TO it from a remote machine with this (apparently incorrect) DHCP config. I'd be thankful if someone could spare a minute to advise how to configure dhcpd correctly. UPDATE: I realize there's a solution that assigns a different hostname in the DHCP config, but I would like to keep the benefits of short host names.
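
    A hedged sketch that follows the daemon's own hint ("remove 172.16.77.110 from the dynamic address pool"): keep the fixed address outside the dynamic range so the two static host entries no longer collide with a dynamic lease (range boundaries are illustrative):

        subnet 172.16.0.0 netmask 255.255.0.0 {
            # leave .77.110 out of the pool handed to dynamic clients
            range 172.16.77.111 172.16.77.200;
        }

        host bt-wireless {
            hardware ethernet 00:1f:1f:62:60:28;
            fixed-address 172.16.77.110;
        }
        host bt-wired {
            hardware ethernet 00:14:22:b7:5a:de;
            fixed-address 172.16.77.110;
        }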

    Read the article
