Search Results

Search found 3760 results on 151 pages for 'multiple entries'.


  • HOSTS ignored when disconnected [closed]

    - by Synetech
    Problem: I’m seeing a strange and extremely frustrating problem. Any system that is not connected to the Internet (Windows 7 shows the no-Internet-access icon because it cannot constantly ping Microsoft’s servers) cannot even access locally hosted servers. Hypothesis: The problem appears to be that the HOSTS file is not being used to resolve DNS entries when there are no active NICs. Tests / Reproduction: You can reproduce it as follows: disconnect a system from the Internet (make sure all wired and wireless connections are disconnected); if necessary, add an entry to the HOSTS file (e.g., 127.0.0.1 foobar or 127.0.0.1 foobar.com); open a command prompt; type ping foobar or ping foobar.com. Observations: The screenshots below show a clear and demonstrative example. In the first snap, a laptop is connected to a router wirelessly. The HOSTS file has only three entries and they resolve just fine. In the second snap, the wireless radio is turned off, so the entries in the HOSTS file are ignored. Moreover, notice that pinging localhost still works even without any active NICs (as does 127.0.0.1), but it is using the IPv6 address (which must be hard-coded). You can see the same results in Windows XP with no IPv6 installed, so it has nothing to do with IPv6. I tried pinging what should have resolved to 127.0.0.1 while the desktop system (with no wireless NICs) was connected via its Ethernet adapter, then again after pulling the cable from the router and waiting a couple of seconds, then again after plugging the cable back in. The same thing happens if, instead of pulling out the cable, the NIC is disabled through software (the [Disable] button in the NIC’s Status dialog or via Device Manager). Conclusions: It looks as though the HOSTS file is only being read and used if there is an active NIC; otherwise it is being ignored. This makes some sense in that if there are no active network adapters, then presumably there will not be any network activity, and thus no need to resolve host names via the HOSTS file. This assumption is specious, however, because it precludes locally hosted virtual servers. The HOSTS file should be used regardless of external DNS server connectivity, otherwise you cannot use simple, consistent testing/production names for locally hosted servers when not connected to the Internet (for example web servers; help servers for Visual Studio, 3ds Max, etc.; and so on). Question: Does anyone know how to force Windows to use the HOSTS file even if there are no active NICs? Appendix: Figure 1: While the wireless NIC is connected to the router (the cable modem is in standby, so no external Internet connectivity). Figure 2: With the wireless radio turned off (the Ethernet port is not connected in either case). Figure 3: Same results in XP with no IPv6.
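
    For reference, the reproduction described above boils down to something like the following (run from an elevated command prompt so the HOSTS file is writable; foobar is the placeholder name used in the steps):

        rem append a local-only name, flush the resolver cache, then test it
        echo 127.0.0.1    foobar>> %SystemRoot%\System32\drivers\etc\hosts
        ipconfig /flushdns
        ping foobar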

    Read the article

  • Update mysql database with arpwatch textfile database

    - by bVector
    I'm looking to keep arpwatch entries in a MySQL database to cross-reference with other information I'm storing based on MAC addresses. I've manually imported the arpwatch database into my MySQL database, but being a novice with databases I'm not sure of the best way to continually update the database with new entries without creating duplicates. None of the fields can be unique on its own, as even the time is duplicated frequently. I'm not interested in the actual arpwatch events like flip flop or new station, just the mac/ip/time pairings. Would a simple bash (or SQL) script do the trick? Would it be possible to make the MAC address plus the time a composite key of some sort? The database is called utility, the table is arpwatch, and the columns are mac, ip and time. A separate table named 'hosts', with columns mac, ip, type, hostname, location and notes, has mac as the primary key; this table will correlate the different IP addresses that a MAC had over time using the arpwatch table. The initial import was done with MySQL Workbench using INSERT INTO commands with creative search and replace on the text file.
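
    For illustration, a minimal sketch of the composite-key idea floated above, assuming the utility.arpwatch layout described (mac, ip, time) and a hypothetical staging table holding a freshly exported arpwatch text file:

        -- one-time change: make the mac/ip/time pairing unique so re-imports cannot duplicate it
        ALTER TABLE utility.arpwatch
            ADD UNIQUE KEY uniq_pairing (mac, ip, time);

        -- repeatable import: copy only rows not already present (duplicates are silently skipped)
        INSERT IGNORE INTO utility.arpwatch (mac, ip, time)
        SELECT mac, ip, time
        FROM utility.arpwatch_staging;   -- hypothetical table loaded from the arpwatch text file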

    Read the article

  • iPhone Cannot log into WiFi suddenly [closed]

    - by Stanley
    I suddenly get into this strange problem. My iPhone has been using the WiFi setup at my home for more than a year. Suddenly it cannot connect to the Internet despite still showing the full WiFi signal icon. I have an older iPhone 3GS and it can still browse the net using the same WiFi, so the wireless router should be working. When I check the non-functioning iPhone, it has the "Router" and the "DNS" entries blank, while the functioning iPhone has entries in both of those fields. Also, the subnet masks are different. Please help.

    Read the article

  • rm on a directory with millions of files

    - by BMDan
    Background: physical server, about two years old, 7200-RPM SATA drives connected to a 3Ware RAID card, ext3 FS mounted noatime and data=ordered, not under crazy load, kernel 2.6.18-92.1.22.el5, uptime 545 days. The directory doesn't contain any subdirectories, just millions of small (~100 byte) files, with some larger (a few KB) ones. We have a server that has gone a bit cuckoo over the course of the last few months, but we only noticed it the other day when it started being unable to write to a directory due to it containing too many files. Specifically, it started throwing this error in /var/log/messages: ext3_dx_add_entry: Directory index full! The disk in question has plenty of inodes remaining: Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sda3 60719104 3465660 57253444 6% / So I'm guessing that means we hit the limit of how many entries can be in the directory file itself. No idea how many files that would be, but it can't be more, as you can see, than three million or so. Not that that's good, mind you! But that's part one of my question: exactly what is that upper limit? Is it tunable? Before I get yelled at--I want to tune it down; this enormous directory caused all sorts of issues. Anyway, we tracked down the issue in the code that was generating all of those files, and we've corrected it. Now I'm stuck with deleting the directory. A few options here: rm -rf (dir): I tried this first. I gave up and killed it after it had run for a day and a half without any discernible impact. unlink(2) on the directory: Definitely worth consideration, but the question is whether it'd be faster to delete the files inside the directory via fsck than to delete via unlink(2). That is, one way or another, I've got to mark those inodes as unused. This assumes, of course, that I can tell fsck not to drop entries to the files in /lost+found; otherwise, I've just moved my problem. In addition to all the other concerns, after reading about this a bit more, it turns out I'd probably have to call some internal FS functions, as none of the unlink(2) variants I can find would allow me to just blithely delete a directory with entries in it. Pooh. while [ true ]; do ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null; done: This is actually the shortened version; the real one I'm running, which just adds some progress-reporting and a clean stop when we run out of files to delete, is: export i=0; time ( while [ true ]; do ls -Uf | head -n 3 | grep -qF '.png' || break; ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null; export i=$(($i+10000)); echo "$i..."; done ) This seems to be working rather well. As I write this, it's deleted 260,000 files in the past thirty minutes or so. Now, for the questions: As mentioned above, is the per-directory entry limit tunable? Why did it take "real 7m9.561s / user 0m0.001s / sys 0m0.001s" to delete a single file which was the first one in the list returned by "ls -U", and it took perhaps ten minutes to delete the first 10,000 entries with the command in #3, but now it's hauling along quite happily? For that matter, it deleted 260,000 in about thirty minutes, but it's now taken another fifteen minutes to delete 60,000 more. Why the huge swings in speed? Is there a better way to do this sort of thing? Not "store millions of files in a directory"; I know that's silly, and it wouldn't have happened on my watch.
Googling the problem and looking through SF and SO offers a lot of variations on "find" that obviously have the wrong idea; it's not going to be faster than my approach for several self-evident reasons. But does the delete-via-fsck idea have any legs? Or something else entirely? I'm eager to hear out-of-the-box (or inside-the-not-well-known-box) thinking. Thanks for reading the small novel; feel free to ask questions and I'll be sure to respond. I'll also update the question with the final number of files and how long the delete script ran once I have that. Final script output!: 2970000... 2980000... 2990000... 3000000... 3010000... real 253m59.331s user 0m6.061s sys 5m4.019s So, three million files deleted in a bit over four hours.
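
    For readability, here is a sketch of the deletion loop quoted above with the shell redirections restored. It assumes, as the original does, that every remaining file name matches '.png' and contains no whitespace, and the directory path is a placeholder:

        #!/bin/bash
        # Delete the directory's contents in 10,000-file batches, printing progress.
        cd /path/to/huge/dir || exit 1     # placeholder path
        i=0
        time (
            while true; do
                # stop once no matching files are left (ls -Uf skips sorting, includes dotfiles)
                ls -Uf | head -n 3 | grep -qF '.png' || break
                ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null
                i=$((i + 10000))
                echo "$i..."
            done
        )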

    Read the article

  • Determining currently-serving files in IIS 7

    - by Nat Papovich
    Serverfault showed me this topic, and I think I want to do the same thing, but in IIS, not Apache. I have a "dashboard" application I'm building and I want it to show what files are currently being served by IIS. They'll mostly all be large files. I believe that the ILogScripting COM interface would have been one good place to start, but it's not available in IIS 7, and it relies on the underlying IIS logs for its data. And therein, I believe, lies my problem. How do I make IIS put in, essentially, two log entries: one as the request begins, and one when the connection is closed? Also, it looks like IIS doesn't "commit" log entries as they're occurring, in "real time"; there's some kind of delay/batch job. That will cause a problem for me too. Or do I need to do something in ISAPI instead?

    Read the article

  • Problem with the hosts file in Windows XP

    - by Mee
    I have a computer with Windows XP SP2 with a weird problem. The hosts file doesn't work. No matter what I do, adding or removing entries in the file makes no difference; pinging the added names times out. I tried flushing the DNS cache (using ipconfig /flushdns), but that didn't work. I even tried restarting the DNS Client service, but that made no difference either. Removing entries also has no effect: I ping the removed names and still get a reply. Help!!! Edit: Thanks for your answers, guys, but the problem is more complicated than this. It seems I'll have to reinstall Windows.

    Read the article

  • Mail on other server

    - by takeshin
    Here is the current DNS setup: mx.example.com 3600 A 93.157.123.73 example.com 3600 A 93.157.123.93 www.example.com 3600 A 93.157.123.93 mail.example.com 3600 A 93.157.123.72 smtp.example.com 3600 CNAME mail.example.com pop3.example.com 3600 CNAME mail.example.com imap.example.com 3600 CNAME mail.example.com panel.example.com 3600 CNAME panel.example2.pl www.panel.example.com 3600 CNAME panel.example2.pl ftp.example.com 3600 CNAME example.com mysql.example.com 3600 CNAME example.com pgsql.example.com 3600 CNAME example.com *.example.com 3600 CNAME example.com example.com 3600 MX 10 mx.example.com example.com 3600 NS ns1.example2.pl example.com 3600 NS ns2.example.pl example.com 3600 TXT "v=spf1 redirect=_spf.example3.pl" My client wants to have mail on his own server alfa.otherhost.com. Which entries do I have to update? Only the MX one? example.com 3600 MX 10 alfa.otherhost.com or: example.com 3600 MX 10 mx.alfa.otherhost.com Do I need to update POP, SMTP and IMAP entries too?

    Read the article

  • fast opening and closing connection with a specific port

    - by michale
    We have a main application named "Trevor" installed on a 2008 R2 machine named "TEAMER12", which is slow now. One more application named "TVS" is also running on it, and we found there were many connections per second being made to port 5009. The netstat tool shows fast connection open/close activity on port 5009. So first it will be in listening mode as shown below TCP 0.0.0.0:5009 TEAMER12:0 LISTENING then it establishes connections like TCP 127.0.0.1:5009 TEAMER12:49519 ESTABLISHED TCP 127.0.0.1:5009 TEAMER12:60903 ESTABLISHED After that it will go into TIME_WAIT and I could see several entries like the one shown below TCP 127.0.0.1:49156 TEAMER12:5009 TIME_WAIT after that it will establish connections like TCP 127.0.0.1:60903 TEAMER12:5009 ESTABLISHED TCP 127.0.0.1:64181 TEAMER12:microsoft-ds ESTABLISHED again there will be several TIME_WAIT entries like TCP 127.0.0.1:49156 TEAMER12:5009 TIME_WAIT Finally it will establish like this TCP 172.26.127.40:139 TEAMER12:0 LISTENING TCP 172.26.127.42:139 TEAMER12:0 LISTENING TCP 172.26.127.42:5009 TEAMER12:64445 ESTABLISHED TCP 172.26.127.42:64445 TEAMER12:5009 ESTABLISHED Can anybody tell me why so many connections per second are being made to port 5009, and why the application is slow?

    Read the article

  • pnp4nagios does not generate perfdata

    - by gonvaled
    I am running nagios2, pnp4nagios-0.6.16 and php 5.2.4-2ubuntu5.19. In my setup, pnp4nagios is correctly generating perfdata, which can be seen via the web interface in graphical form for lots of services. The perfdata directory contains entries of the kind: /usr/local/pnp4nagios/var/perfdata/zeus/Disk_Space_Home.rrd /usr/local/pnp4nagios/var/perfdata/zeus/Disk_Space_Home.xml I have activated performance data for a new nagios service: define serviceextinfo { host_name zeus service_description 450average action_url /pnp4nagios/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$ } This service is generating monitoring data in the format: status_info|perf_data as required for performance gathering. But somehow the performance data related to this service is not being collected by pnp4nagios (no related entries in /usr/local/pnp4nagios/var/perfdata) Are there any pnp4nagios scripts or settings which I could use to debug this?
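
    For comparison, perfdata is produced by the real service definition that runs the check (serviceextinfo only decorates the display), so one thing worth checking is that such a service exists and has process_perf_data switched on. A hedged sketch, with the check command as a placeholder and assuming process_performance_data is already enabled in nagios.cfg:

        define service {
            host_name               zeus
            service_description     450average
            check_command           check_450average   ; placeholder check that emits status_info|perf_data
            process_perf_data       1                  ; hand this service's perfdata to pnp4nagios
            action_url              /pnp4nagios/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$
            ; other required directives (use generic-service, check intervals, contacts) omitted for brevity
        }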

    Read the article

  • SQL Server Management Studio Connect to Server List Editing

    - by Paul Farry
    I'm using SQL Server Management Studio (2005) and I have a fairly lengthy list of servers in there, and I'd like to get rid of some of them that are no longer in use, without having to set them all up again. I know that the C:\Users\*\AppData\Roaming\Microsoft\Microsoft SQL Server\90\Tools\Shell\mru.dat file can be deleted and this will remove ALL the entries, but is there any way to delete just some of them? (Coding info) I looked at the file and it is a serialised blob from the Microsoft.SqlServer.Express.ConnectionDlg.dll (class Personalization) in the application directory, but all the methods are private, so I can't just create an instance of this and then call Remove on the entries. Update: I have written an article on CodeProject explaining how this can be achieved. http://www.codeproject.com/KB/vb/AlterSQL2005MRU.aspx

    Read the article

  • SPF include: too many IP addresses

    - by sprezzatura
    I've hit a snag with SPF. The SPF record for my domain will contain four or five entries, plus it will contain: include:sgizmo.com The SPF record for sgizmo.com contains eleven entries! This, plus mine, is way over the maximum ten allowed by the RFC (and probably by most servers). I realize that there has to be a limit in order to prevent DoS attacks. However, in the real world, it is probably not unreasonable for large companies to have many server addresses. Furthermore, must I now monitor my 'include:' counterparts for changes and additions? Must I check weekly, or even daily, to ensure that some combination of changes doesn't suddenly put me over the top? It doesn't seem to me that SPF is suitable for prime time. Is there another way to do this?
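
    For what it's worth, the RFC limit counts DNS lookups (the include, a, mx, ptr and exists mechanisms and the redirect modifier), not addresses, so one common workaround is to "flatten" an include into literal ip4:/ip6: mechanisms, which cost no lookups. A sketch with placeholder documentation addresses (the trade-off being that you must then track the provider's IP changes yourself):

        example.com  3600  TXT  "v=spf1 ip4:192.0.2.10 ip4:198.51.100.0/24 ip4:203.0.113.0/24 ~all"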

    Read the article

  • Why use Google Apps Sync for Outlook to sync email?

    - by Howiecamp
    I currently use Outlook 2007 against an Exchange server for my email and will be moving to Google Apps. There are a number of ways to import your existing email and calendar entries into Google Apps Gmail (e.g. the Google Apps Sync for Outlook tool, the Google Email Uploader, and copying messages using an IMAP client), so I'm covered on the import side. I'm trying to understand the use cases for the Google Apps Sync for Outlook tool http://mail.google.com/support/bin/topic.py?topic=23333 with respect to email and calendar entries. The description says it syncs your Outlook email and calendar items with Google Apps, but doesn't using Outlook as an IMAP client against Google Apps do the same?

    Read the article

  • Know which Apps to Remove From MSConfig with this Startup Applications List

    - by Mit Naik
    Just found useful information on the Internet and thought it would help all users. This list of startup applications is a really handy resource for cleaning up msconfig entries that have overtaken old computers. It catalogs tons of different startup programs, what they do, and which ones you should delete, leave running, or decide on based on the program's usefulness. It even has a nice search box so you can search through the tens of thousands of entries. Hit the link below to check it out, and if your relatives' computer is especially broken, be sure to check out our guide to fixing your relatives' terrible computer. http://www.sysinfo.org/startuplist.php Please update the list here if you have any other tools or sites which could be helpful to others.

    Read the article

  • Make GRUB automatically boot Ubuntu

    - by Matt Robertson
    I am running a dual-boot with Ubuntu (10.10) and Windows 7. Recently I edited my /boot/grub/grub.cfg file to show only one version of Ubuntu (as opposed to several kernel versions) and Windows, simply by commenting out all other menu entries. My question is whether I can make GRUB boot a specific entry automatically. I tried removing all other menu entries, but GRUB still showed the menu with only one entry. I've also considered just setting the timeout to either 0 or 1 second, as this would basically achieve the same thing. What is the best way to do this?
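
    For reference, on Ubuntu 10.10 this is normally done in /etc/default/grub rather than by hand-editing grub.cfg (which update-grub regenerates); a minimal sketch:

        # /etc/default/grub
        GRUB_DEFAULT=0     # boot the first menu entry; a quoted menu-entry title also works
        GRUB_TIMEOUT=0     # 0 skips the menu entirely, 1 shows it for a second

        # then rebuild the real configuration:
        sudo update-grub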

    Read the article

  • Perl: Deleting multiple recurring lines where a certain criterion is met

    - by george-lule
    Dear all, I have data that looks like the below; the actual file is thousands of lines long. Event_time Cease_time Object_of_reference -------------------------- -------------------------- ---------------------------------------------------------------------------------- Apr 5 2010 5:54PM NULL SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction= BSS_ManagedFunction,BtsSiteMgr=LUGALAMBO_900 Apr 5 2010 5:55PM Apr 5 2010 6:43PM SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction= BSS_ManagedFunction,BtsSiteMgr=LUGALAMBO_900 Apr 5 2010 5:58PM NULL SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction= BSS_ManagedFunction,BtsSiteMgr=BULAGA Apr 5 2010 5:58PM Apr 5 2010 6:01PM SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction= BSS_ManagedFunction,BtsSiteMgr=BULAGA Apr 5 2010 6:01PM NULL SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction= BSS_ManagedFunction,BtsSiteMgr=BULAGA Apr 5 2010 6:03PM NULL SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction= BSS_ManagedFunction,BtsSiteMgr=KAPKWAI_900 Apr 5 2010 6:03PM Apr 5 2010 6:04PM SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction= BSS_ManagedFunction,BtsSiteMgr=KAPKWAI_900 Apr 5 2010 6:04PM NULL SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction= BSS_ManagedFunction,BtsSiteMgr=KAPKWAI_900 Apr 5 2010 6:03PM Apr 5 2010 6:03PM SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction= BSS_ManagedFunction,BtsSiteMgr=BULAGA Apr 5 2010 6:03PM NULL SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction= BSS_ManagedFunction,BtsSiteMgr=BULAGA Apr 5 2010 6:03PM Apr 5 2010 7:01PM SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction= BSS_ManagedFunction,BtsSiteMgr=BULAGA As you can see, each file has a header which describes what the various fields stand for (event start time, event cease time, affected element). The header is followed by a number of dashes. My issue is that, in the data, you see a number of entries where the cease time is NULL, i.e. the event is still active. All such entries must go, i.e. for each element where the alarm cease time is NULL, the start time, the cease time (in this case NULL) and the actual element must be deleted from the file. In the remaining data, all the text starting from the word SubNetwork up to BtsSiteMgr= must also go, along with the headers and the dashes. Final output should look like below: Apr 5 2010 5:55PM Apr 5 2010 6:43PM LUGALAMBO_900 Apr 5 2010 5:58PM Apr 5 2010 6:01PM BULAGA Apr 5 2010 6:03PM Apr 5 2010 6:04PM KAPKWAI_900 Apr 5 2010 6:03PM Apr 5 2010 6:03PM BULAGA Apr 5 2010 6:03PM Apr 5 2010 7:01PM BULAGA Below is a Perl script that I have written. It has taken care of the headers, the dashes and the NULL entries, but I have failed to delete the lines following the NULL entries so as to produce the above output. #!/usr/bin/perl use strict; use warnings; $^I=".bak"; #Backup the file before messing it up.
    open (DATAIN,"<george_perl.txt")|| die("can't open datafile: $!"); # Read in the data open (DATAOUT,">gen_results.txt")|| die("can't open datafile: $!"); #Prepare for the writing while (<DATAIN>) { s/Event_time//g; s/Cease_time//g; s/Object_of_reference//g; s/\-//g; #Preceding 4 statements are for cleaning out the headers my $theline=$_; if ($theline =~ /NULL/){ next; next if $theline =~ /SubN/; } else{ print DATAOUT $theline; } } close DATAIN; close DATAOUT; Kindly help point out any modifications I need to make to the script to make it produce the necessary output. I will be very glad for your help. Kind regards, George.
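
    For comparison, here is a hedged rework of the idea above that produces the desired three-column output. It assumes the records really do start with two timestamp columns (the second being NULL for still-active events) and that each object reference ends in BtsSiteMgr=<name>, possibly wrapped onto a following line:

        #!/usr/bin/perl
        use strict;
        use warnings;

        open my $in,  '<', 'george_perl.txt' or die "can't open input: $!";
        open my $out, '>', 'gen_results.txt' or die "can't open output: $!";

        local $/;                 # slurp the whole file so wrapped records stay together
        my $data = <$in>;

        # timestamp shape, e.g. "Apr 5 2010 5:54PM"
        my $ts = qr/\w{3}\s+\d+\s+\d{4}\s+\d{1,2}:\d{2}[AP]M/;

        while ($data =~ /($ts)\s+($ts|NULL)\s+.*?BtsSiteMgr=(\S+)/gs) {
            my ($start, $cease, $site) = ($1, $2, $3);
            next if $cease eq 'NULL';          # drop still-active events entirely
            print {$out} "$start  $cease  $site\n";
        }

        close $in;
        close $out;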

    Read the article

  • Problem with USB drivers (Windows-XP)

    - by Carl
    I obtained the drivers from the manufacturer for my HT-Link NEC USB 2.0 2-port Cardbus card. When I plugged in the card before I got the drivers, 3 new entries showed up in the Device Manager - two "NEC PCI to USB Open Host Controller" and one "Standard Enhanced PCI to USB Host controller." With the card plugged in, I uninstalled those two drivers. I then removed the card. I copied the new drivers to c:\windows\system32\drivers and the .inf file to c:\windows\inf. I also copied the drivers & inf to a new directory called c:\windows\drivers\ousb2. I reinserted the card. Windows automatically installed the same drivers as before. I selected 'update driver' on the "NEC PCI to USB..." entry and didn't see any other options. I then selected 'have disk' and pointed to c:\windows\drivers\ousb2 and got a message "The specified location does not contain information about your hardware." I then selected 'update driver' on the "Standard Enhanced PCI to USB...," and manually selected "USB 2.0 Enhanced Host Controller" (OWC 4/15/2003 2.1.3.1). Windows then automatically found a USB root hub, and I manually selected "USB 2.0 Root Hub Device" (OWC 4/15/2003 2.1.3.1). Now there are two sections in the Device Manager titled "Universal Serial Bus controllers." I plugged in my external USB hard disk adapter, and "USB Mass Storage Device" was added to the first set. Here's how it looks (w/drivers from the properties): [Universal Serial Bus controllers] Intel(R) 82801DB/DBM USB 2.0 Enhanced Host Controller - 24CD (6/1/2002 5.1.2600.0) Intel(R) 82801DB/DBM USB Universal Host Controller - 24C2 (7/1/2001 5.1.2600.5512) Intel(R) 82801DB/DBM USB Universal Host Controller - 24C4 (7/1/2001 5.1.2600.5512) Intel(R) 82801DB/DBM USB Universal Host Controller - 24C7 (7/1/2001 5.1.2600.5512) NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512) NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512) USB Mass Storage Device USB Root Hub (7/1/2001 5.1.2600.5512) (5 more USB Root Hubs - same driver) [Universal Serial Bus controllers] USB 2.0 Enhanced Host Controller (OWC 4/15/2003 2.1.3.1) USB 2.0 Root Hub Device (OWC 4/15/2003 2.1.3.1) When I unplug the card the two "NEC PCI to USB..." entries in the first set disappear, and the whole second set disappears. (I unplugged the hard disk adapter first...) The hard disk adapter still doesn't work in that Cardbus card with the new drivers. I don't think the above looks right - a second set of USB controllers listed in the Device Manager, and the NEC entries still in the first set, and the the USB mass storage device still in the first set. Any help appreciated. (Windows XP PRO SP3 w/all current updates.)

    Read the article

  • Virtualhost setup, same IP address, different DirectoryIndex's

    - by kaykills
    I am trying to set up 2 virtual host entries in apache but I'm not sure how to accomplish what I want to do. I have two domain names, both pointing to the same IP Address. I need the DirectoryIndex to be different, which is pretty much the only difference in the entries. I have the following set up: <VirtualHost *:80> ServerName firstdomain.com ServerAdmin [email protected] DocumentRoot "/srv/www" DirectoryIndex /portals/site/index.html </VirtualHost> <VirtualHost *:80> ServerName seconddomain.com ServerAdmin [email protected] DocumentRoot "/srv/www" DirectoryIndex /portals/site/index_fr.html </VirtualHost> Not sure what I need to do differently but the second entry doesn't work. The only real difference is I need the second domain to point to a different DirectoryIndex. If there is a better way to accomplish this, your help would be appreciated.
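
    If this is Apache 2.2 or earlier, one thing name-based virtual hosts need that isn't shown above is the NameVirtualHost directive for the shared address/port; without it the first <VirtualHost> block answers every request, which would match the "second entry doesn't work" symptom. A sketch of the one extra line (Apache 2.4 dropped the directive and selects by name automatically):

        # enable name-based selection for *:80 so ServerName decides which vhost answers
        NameVirtualHost *:80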

    Read the article

  • Snort [PFSense] is configured but not blocking or generating alerts!

    - by Chase Florell
    I've got PFSense V 2.0-RC1 (i386) and I've got the latest version of Snort installed. I've loaded up a bunch of rules from Oinkmaster, I've enabled all of the preprocessors, and I've ensured the service is started. When I let it sit for a while and then check my Alerts and Block lists, there are no entries. Even when I test it by logging into Skype (Skype is covered by a rule in the P2P category), I don't get any entries in the logs. If you need any further information, please let me know... I simply can't figure this one out.

    Read the article

  • Puppet claims to be unable to resolve domains even if domain properly resolves

    - by gparent
    I have a fairly simple Puppet setup: one master and one node, both running Debian Squeeze 6.0.4. I have DNS entries for the two machines, client and master respectively. Both the client's and the master's DNS entries resolve correctly on both machines to the right IPs. On my client, I have this configuration: [main] server = master.example.org logdir=/var/log/puppet vardir=/var/lib/puppet ssldir=/var/lib/puppet/ssl rundir=/var/run/puppet factpath=$vardir/lib/facter pluginsync=true templatedir=/var/lib/puppet/templates Key exchange seems to fail, according to this message in /var/log/syslog: localhost puppet-agent[11364]: Could not request certificate: getaddrinfo: Name or service not known Why is name resolution not working only for Puppet?

    Read the article

  • Eventlog entry for allowed connection in Windows Firewall

    - by Jaap
    I was seeing a lot of entries in the event log: The Windows Filtering Platform has permitted a connection. Application Information: Process ID: 4 Application Name: System Network Information: Direction: Inbound Source Address: 10.xxx.xxx.xxx Source Port: 80 Destination Address: 10.xxx.xxx.xxx Destination Port: 31773 Protocol: 6 Filter Information: Filter Run-Time ID: 67903 Layer Name: Receive/Accept Layer Run-Time ID: 44 We have a load balancer which checks every second to see if the application is still running (a health check). The logs contain large numbers of these entries, which makes the Event Viewer slow and makes it difficult to find the more interesting events. How do I make sure these messages don't end up in the event logs?
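
    For reference, these records come from the "Filtering Platform Connection" audit subcategory; a hedged example of switching off success auditing for just that subcategory (run from an elevated prompt, and note the subcategory name assumes an English-language install):

        rem stop logging permitted connections, but keep logging blocked ones
        auditpol /set /subcategory:"Filtering Platform Connection" /success:disable /failure:enable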

    Read the article

  • Linux filesystem with inodes close on the disk

    - by pts
    I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use and how should I configure it? As far as I understand, the reason why ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with: Create the filesystem on an SSD, because the seek operations on SSDs are fast. This wouldn't work, because a 2TB SSD doesn't exist, or it's prohibitively expensive. Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)? Use find /media/myfs on ext2, ext3 or ext4, instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print. Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count of more than 1, but that's not a problem since I have only a few dozen such files in my use case. Adjust some settings in /proc or sysctl so that inodes are locked into system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that? Use a filesystem which has an online defragmentation tool, which can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?
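
    On the /proc-or-sysctl idea above, the closest standard knob is vm.vfs_cache_pressure, which biases reclaim away from the dentry/inode caches rather than truly locking them in memory; a hedged sketch:

        # Default is 100; lower values make the kernel strongly prefer keeping cached
        # inodes/dentries (0 means never reclaim them, at the risk of OOM if the
        # metadata working set outgrows RAM).
        sysctl -w vm.vfs_cache_pressure=1

        # Warm the cache once so later runs are served from memory:
        ls -laR /media/myfs > /dev/null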

    Read the article

  • CentOS: safe to yum reinstall after removing 32-bit packages?

    - by virtualeyes
    As per the CentOS FAQ on removing 32-bit packages present in a 64-bit install, is it safe to perform the last step: You may also want to do this: yum reinstall \* The reason is that sometimes the /usr/share/ items (shared between BOTH packages) get removed when removing the 32-bit RPM packages. on an existing installation? (i.e. where the data & settings of possibly affected applications need to be preserved) rpm -Va shows a number of entries like: /sbin/ethtool: at least one of file's dependencies has changed since prelinking S.?..... /sbin/ethtool /usr/libexec/mysqld: at least one of file's dependencies has changed since prelinking S.?..... /usr/libexec/mysqld along with /usr/share entries with the T flag (apparently a file-time diff, which seems safe). The machine is up & running fine, but may not be whenever a reboot occurs. Any clue-in as to the real state of the machine (hosed or OK) is appreciated. Thanks

    Read the article

  • Reasons for missing IP info in `last` output on pts logins?

    - by Mike Pennington
    I have five CentOS 6 Linux systems at work, and encountered a rather strange issue that only seems to happen with my userid across all the Linux systems I have... This is an example of the problem, from entries I excerpted from the last command... mpenning pts/19 Fri Nov 16 10:32 - 10:35 (00:03) mpenning pts/17 Fri Nov 16 10:21 - 10:42 (00:21) bill pts/15 sol-bill.local Fri Nov 16 10:19 - 10:36 (00:16) mpenning pts/1 192.0.2.91 Fri Nov 16 10:17 - 10:49 (12+00:31) kkim14 pts/14 192.0.2.225 Thu Nov 15 18:02 - 15:17 (4+21:15) gduarte pts/10 192.0.2.135 Thu Nov 15 12:33 - 08:10 (11+19:36) gduarte pts/9 192.0.2.135 Thu Nov 15 12:31 - 08:10 (11+19:38) kkim14 pts/0 :0.0 Thu Nov 15 12:27 - 15:17 (5+02:49) gduarte pts/6 192.0.2.135 Thu Nov 15 11:44 - 08:10 (11+20:25) kkim14 pts/13 192.0.2.225 Thu Nov 15 09:56 - 15:17 (5+05:20) kkim14 pts/12 192.0.2.225 Thu Nov 15 08:28 - 15:17 (5+06:49) kkim14 pts/11 192.0.2.225 Thu Nov 15 08:26 - 15:17 (5+06:50) dspencer pts/8 192.0.2.130 Wed Nov 14 18:24 still logged in mpenning pts/18 alpha-console-1. Mon Nov 12 14:41 - 14:46 (00:04) You can see two of my pts login entries above that do not have a source IP address associated with them. My CentOS machines have as many as six other users that share the systems, but the mpenning userid is the only one that has this issue. Approximately 5% of my logins see this issue, but no other usernames exhibit this behavior. Questions Given the kind of scripts I keep on these systems (which control much of our network infrastructure), I'm a little spooked by this and would like to understand what would cause my logins to occasionally miss source addresses. Is there anything (other than malicious activity) that would reasonably explain the behavior? Other than bash history timestamping, are there other things I can do to track the issue down? Informational Since this started happening, I enabled bash history time-stamping (i.e. HISTTIMEFORMAT="%y-%m-%d %T " in .bash_profile) and also added a few other bash history hacks; however, that does not give clues to what happened during the previous occurrences. All the systems run CentOS 6.3... [mpenning@typo ~]$ uname -a Linux typo.local 2.6.32-279.9.1.el6.x86_64 #1 SMP Tue Sep 25 21:43:11 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux [mpenning@typo ~]$ EDIT If I use last -i mpenning, I see entries like this... mpenning pts/19 0.0.0.0 Fri Nov 16 10:32 - 10:35 (00:03) mpenning pts/17 0.0.0.0 Fri Nov 16 10:21 - 10:42 (00:21)

    Read the article

  • NFS4 / ZFS: revert ACL to clean/inherited state

    - by Keiichi
    My problem is identical to this Windows question, but pertains NFS4 (Linux) and the underlying ZFS (OpenIndiana) we are using. We have this ZFS shared via NFS4 and CIFS for Linux and Windows users respectively. It would be nice for both user groups to benefit from ACLs, but the one missing puzzle piece goes thusly: Each user has a home, where he sets a top-level, inherited ACL. He can later on refine permissions for the contained files/folders iteratively. Over time, sometimes permissions need to be generalized again to avoid increasing pollution of ACL entries. You can tweak the ACL of every single file if need be to obtain the wanted permissions, but that defeats the purpose of inherited ACLs. So, how can an ACL be completely cleared like in the question linked above? I have found nothing about what a blank, inherited ACL should look like. This usecase simply does not seem to exist. In fact, the solaris chmod manpage clearly states A- Removes all ACEs for current ACL on file and replaces current ACL with new ACL that represents only the current mode of the file. I.e. we get three new ACL entries filled with stuff representing the permission bits, which is rather useless for cleaning up. If I try to manually remove every ACE, on the last one I get chmod A0- <file> chmod: ERROR: Can't remove all ACL entries from a file Which by the way makes me think: and why not? In fact, I really want the whole file-specific ACL gone. The same holds for linux, which enumerates ACEs starting with 1(!), and verbalizes its woes less diligently nfs4_setacl -x 1 <file> Failed setxattr operation: Unknown error 524 So, what is the idea behind ACLs under Solaris/NFS? Can they never be cleaned up? Why does the recursion option for the ACL setting commands pollute all children instead of setting a single ACL and making the children inherit? Is this really the intention of the designers? I can clean up the ACLs using a windows client perfectly well, but am I supposed to tell the linux users they have to switch OS just to consolidate permissions?

    Read the article
