Search Results

Search found 12497 results on 500 pages for 'linked servers'.

  • How do you guys handle a custom yum repository?

    - by luckytaxi
    I have a bunch of tools (nagios, munin, puppet, etc.) that get installed on all my servers. I'm in the process of building a local yum repository. I know most folks just dump all the rpms into a single folder (broken down into the correct path) and then run createrepo inside the directory. However, what would happen if you had to update the rpms? I ask because I was going to give each piece of software its own folder.

    Example one: put all packages inside one folder (custom_software):

        /admin/software/custom_software/5.4/i386
        /admin/software/custom_software/5.4/x86_64
        /admin/software/custom_software/4.6/i386
        /admin/software/custom_software/4.6/x86_64

    What I'm thinking of:

        /admin/software/custom_software/nagios/5.4/i386
        /admin/software/custom_software/nagios/5.4/x86_64
        /admin/software/custom_software/nagios/4.6/i386
        /admin/software/custom_software/nagios/4.6/x86_64
        /admin/software/custom_software/puppet/5.4/i386
        /admin/software/custom_software/puppet/5.4/x86_64
        /admin/software/custom_software/puppet/4.6/i386
        /admin/software/custom_software/puppet/4.6/x86_64

    This way, if I had to update to the latest version of puppet, I could manage the files accordingly. I wouldn't know which rpms belonged to which software if I threw them all into one big folder. Does that make sense?
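
    A minimal sketch of how the per-tool layout could be refreshed after dropping in new rpms (the paths are the hypothetical ones from the question; createrepo's --update flag reuses existing metadata for unchanged packages):

        #!/bin/sh
        # Rebuild repo metadata for every tool/release/arch combination.
        BASE=/admin/software/custom_software
        for tool in nagios munin puppet; do
            for rel in 5.4 4.6; do
                for arch in i386 x86_64; do
                    repo="$BASE/$tool/$rel/$arch"
                    [ -d "$repo" ] && createrepo --update "$repo"
                done
            done
        done

    Each directory then becomes its own baseurl in a yum .repo file, so one tool can be updated by touching only its own tree.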

  • Recommended motherboard with hardware raid for Linux

    - by luison
    Hi. We want to set up an internal office server for testing jobs (LAMP), email and samba, for only about 5-10 users. We are also considering starting to virtualize, initially with a base Ubuntu Server running Xen or the free VMware Server. Our current system runs on Linux software RAID, which has worked great, but it has always been complicated to recover the boot sector when one of the drives fails, so I would now prefer a hardware RAID instead, ideally with some kind of software monitoring. For this reason, and considering we don't want to spend a fortune, I would appreciate any comments on the following options:

    - Motherboard with onboard RAID with Linux support... which could you recommend?
    - Motherboard + hardware RAID card... Adaptec does not seem to have great Linux support. 3ware seems to have a software controller which we've used at a hosting company, but it is hard to find here in Spain.
    - HP ProLiant type basic server: which one?
    - Dell small servers... any good for Linux?

    Thanks in advance for any feedback.

  • Firebird 2.5 Database Corrupt

    - by BrendanH
    We have an issue where a database hangs the server when:

    - a backup is performed (it hangs on a specific table)
    - selecting * or count(1) from a specific table
    - viewing data that is related to the table (FKs, etc.)

    We can browse the table to a certain point (using IBExpert), but after about 2900 records the machine just spikes and hangs. Performing a gfix -m does not work, and validation reports back "Record level errors = 4" no matter how many times we run gfix -m, -v, etc. The firebird.log file reports these types of messages:

        Relation has 91631 orphan backversions (9214273 in use) in table BINS (137)
        (which is apparently just a warning)

        Unable to complete network request to host "MHPLZA1".
        Error reading data from the connection.
        INET/inet_error: read errno = 10054
        SERVER/process_packet: broken port, server exiting
        Shutting down the server with 1 active connection(s) to 1 database(s), 0 active service(s)
        (if we leave the backup to run while hanging, it eventually logs this error message)

    The setup: the table in question has about 7000 records; the Firebird version is 2.5 Classic Server x64; the OS is Windows Server 2008; and this is a virtual machine (VMware) running on a massive server. (Does anyone have issues with VMs and Firebird?) We have the same setup running fine on other servers, although they are not virtual machines. Is there any way to pinpoint the issue and/or the cause?
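
    A common repair sequence for orphan backversions is a full validation followed by a backup/restore cycle; a sketch (the paths and SYSDBA password are placeholders, and the database should be taken offline for users first):

        rem Validate and mend, then rebuild the database via gbak.
        gfix -v -full -user SYSDBA -password masterkey C:\data\mydb.fdb
        gfix -mend -ignore -user SYSDBA -password masterkey C:\data\mydb.fdb
        rem -g disables garbage collection during backup, which often
        rem lets gbak get past tables with heavy backversion buildup.
        gbak -b -g -user SYSDBA -password masterkey C:\data\mydb.fdb C:\data\mydb.fbk
        gbak -c -user SYSDBA -password masterkey C:\data\mydb.fbk C:\data\mydb_restored.fdb

    This is only a sketch; whether it clears the hang depends on where the corruption actually sits.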

  • Web server with static IP from cable provider

    - by Dmitri
    I have a subscription to 5 static IP addresses and I want to run a web server from behind a router. My network config is as follows: the server's local address is 10.1.10.2 and it runs IIS on ports 80 and 443 (IIS is not my fault, it had to be done). The server's IP address is static, the subnet mask is 255.255.255.0, and the gateway is 10.1.10.1, which is the local address of the cable modem / router / gateway thingy. All looks to be in textbook order as far as the LAN goes: I can get to anything on my LAN from any computer on my LAN, whether they have a static IP or get one through DHCP from the cable modem/router thingy.

    However, I have no internet access from any of my LAN computers. I called Comcast tech support and they say they can connect to my modem/router just fine and can actually use it to ping any computer on the internet or any computer on my LAN (I checked myself; this is in fact the case). However, nothing on my LAN has internet connectivity. I tried pinging the DNS servers: nothing. I tried directly typing in web sites' IP addresses: nothing, so it doesn't seem to be a DNS issue. Any ideas? What malfunction of a router could cause such weird behavior? Any ideas or educated guesses are very much appreciated.

  • Sharepoint db issue after DB move to SQL 08

    - by JohnyV
    Recently we moved our SharePoint 2007 database from a SQL 2000 server to an x64 SQL 2008 server. All seems well, however there is a problem where SQL Server stops running and the service has to be restarted. The errors mention insufficient internal memory etc. I have tried starting the database engine with -g384, which was the default in SQL 2000 (256 is the default for 2008, I believe), but this has not rectified the issue. I was advised that the issue might be rectified by upgrading to WSS 3.0 SP2; however, when I tried to install this I got another error after the SP2 update and had to revert to a VM snapshot. The error after the service pack was:

        Server error: http://go.microsoft.com/fwlink?LinkID=96177

    So I guess I have a few questions: how can I fix the first issue and the second issue? I have checked out many forums and posts and have tried a few things and still get no joy. Any assistance would be great.

    UPDATE: I have fixed the "Server error: http://go.microsoft.com/fwlink?LinkID=96177" issue. I needed to run the WSS SP2 as well as the Office Servers SP2, then the configuration wizard, and then the MOSS configuration worked. The errors I am still getting in SQL are:

        SQL Server was unable to run a new system task, either because there is insufficient memory or the number of configured sessions exceeds the maximum allowed in the server. Verify that the server has adequate memory. Use sp_configure with option 'user connections' to check the maximum number of user connections allowed. Use sys.dm_exec_sessions to check the current number of sessions, including user processes.

        A read operation on a large object failed while sending data to the client. A common cause for this is if the application is running in READ UNCOMMITTED isolation level. The connection will be terminated.

        There is insufficient system memory in resource pool 'internal' to run this query.

    These errors are raised by a user that was created as a service account for SharePoint.
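
    As a first diagnostic step from the SQL side, it may be worth capping the buffer pool and checking session counts; a sketch of the relevant T-SQL (the 6144 MB cap is an assumption for illustration, size it to the host's RAM):

        -- Cap SQL Server's buffer pool so the OS and other processes keep headroom.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 6144;
        RECONFIGURE;

        -- Check the configured connection limit and how many sessions are live.
        EXEC sp_configure 'user connections';
        SELECT COUNT(*) AS current_sessions FROM sys.dm_exec_sessions;

    The "insufficient system memory in resource pool 'internal'" error often points at memory pressure outside the buffer pool, so the -g startup switch the post already tries (e.g. -g512) is also a legitimate knob.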

  • Why does RSA SSH authentication only work after a console log-in?

    - by smorhaim
    I set up RSA authentication on one of my Ubuntu servers; however, after every restart I can't log in via SSH with RSA. In order to log in with SSH I need to first log in via the console, and then RSA starts working. Why??? Below are my sshd config file as well as output from ssh -vv before and after a console log-in.

    Before console log-in:

        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /Users/smorhaim/.ssh/smorhaim (0x7ff8d8c242c0)
        debug2: key: /Users/smorhaim/.ssh/id_rsaadmin (0x7ff8d8c24cf0)
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/smorhaim/.ssh/smorhaim
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey
        debug1: Offering RSA public key: /Users/smorhaim/.ssh/id_rsaadmin
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey
        debug2: we did not send a packet, disable method
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    After console log-in:

        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /Users/smorhaim/.ssh/smorhaim (0x7f91c14242c0)
        debug2: key: /Users/smorhaim/.ssh/id_rsaadmin (0x7f91c1424ae0)
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/smorhaim/.ssh/smorhaim
        debug2: we sent a publickey packet, wait for reply
        debug1: Server accepts key: pkalg ssh-rsa blen 279
        debug2: input_userauth_pk_ok: fp b1:d5:90:43:be:43:52:a9:7f:05:c7:04:86:57:b3:ff
        debug1: Authentication succeeded (publickey).
        Authenticated to 10.10.30.151 ([10.10.30.151]:22).

    sshd config:

        Port 22
        Protocol 2
        ListenAddress 10.10.30.151
        UsePrivilegeSeparation yes
        SyslogFacility AUTHPRIV
        PermitRootLogin no
        PasswordAuthentication no
        ChallengeResponseAuthentication no
        UsePAM yes
        X11Forwarding yes
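
    One frequently cited cause of exactly this symptom is an encrypted (ecryptfs) home directory: ~/.ssh/authorized_keys only becomes readable after the home is mounted by a console/password login. Purely as a hedge against that case, a sketch of moving the keys outside the home directory (paths are illustrative):

        # As root: keep authorized_keys somewhere that exists before the home is mounted.
        mkdir -p /etc/ssh/authorized_keys
        cp /home/smorhaim/.ssh/authorized_keys /etc/ssh/authorized_keys/smorhaim
        chown smorhaim /etc/ssh/authorized_keys/smorhaim
        chmod 600 /etc/ssh/authorized_keys/smorhaim

        # Then in sshd_config, point sshd at the new location and restart sshd:
        #   AuthorizedKeysFile /etc/ssh/authorized_keys/%u

    If the home directory is not encrypted, this sketch does not apply and the cause lies elsewhere.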

  • Can VMWare Server 2.0 be useful in Production for easing backups?

    - by Keith Sirmons
    Howdy. Let's run this idea by the group here. I am thinking about using VMware Server in production to host:

    - a 2008 domain controller with DHCP and DNS,
    - a 2008 member server with WSUS, some antivirus software, and other "management" utilities,
    - a second 2008 member server with SQL, IIS, and file shares,

    for a medium business of 50-100 desktops. The reason I am leaning toward Server vs ESXi is backups. Using ESXi, if I want to back up the VMs, I would need a second server in the office with enough storage available to hold a copy of the vmdks. I am wondering if putting this virtual environment on top of a basic 2008 server install will allow for easier backups to both tape and/or offsite storage using JungleDisk. Can a snapshot be triggered easily via a scheduled job? I know this doesn't necessarily handle file-level restores, but I want to make sure that in a DR situation we can restore production servers quickly. Does this concept hold water? Would a very minimal install of the 2008 host remove too many resources from the actual production machines? This would be a new Dell 410 server with 12 GB RAM and (6) 600 GB 15K disks in a RAID 6, with dual Intel Xeon 2.26 GHz procs.
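
    On the "can a snapshot be triggered via a scheduled job" question: VMware ships a scriptable vmrun utility, so a scheduled task could do something along these lines (host URL, credentials, datastore path and snapshot name are placeholders, and this is a sketch rather than a tested recipe):

        rem Nightly pre-backup snapshot of a guest on VMware Server 2.0.
        vmrun -T server -h https://localhost:8333/sdk -u admin -p secret ^
            snapshot "[standard] dc01/dc01.vmx" nightly-backup

    Note that snapshotting alone is not a backup: the vmdk and snapshot files still have to be copied off (tape/JungleDisk), and quiescing inside the guest matters for SQL.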

  • Why doesn't the system information message shown when accessing an Ubuntu server match free -m?

    - by Andres
    Each time I SSH into my AWS Ubuntu servers I see a system information message showing load, memory usage and packages available to install, like this:

        Welcome to Ubuntu 12.04.3 LTS (GNU/Linux 3.2.0-51-virtual x86_64)

         * Documentation:  https://help.ubuntu.com/

          System information as of Sun Nov 10 18:06:43 EST 2013

          System load:  0.08              Processes:           127
          Usage of /:   4.9% of 98.43GB   Users logged in:     1
          Memory usage: 69%               IP address for eth0: 10.236.136.233
          Swap usage:   100%

          Graph this data and manage this system at https://landscape.canonical.com/

          13 packages can be updated.
          0 updates are security updates.

        Get cloud support with Ubuntu Advantage Cloud Guest
          http://www.ubuntu.com/business/services/cloud

        Use Juju to deploy your cloud instances and workloads.
          https://juju.ubuntu.com/#cloud-precise

        *** /dev/xvda1 will be checked for errors at next reboot ***
        *** System restart required ***

    My question is about the memory percentage shown. In this case it shows 69% memory usage, but since the swap usage was 100% I checked it myself. When I run free -m I get this:

                     total       used       free     shared    buffers     cached
        Mem:          1652       1635         17          0          4         29
        -/+ buffers/cache:       1601         51
        Swap:          895        895          0

    And that is of course closer to 100% than to 69%.
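
    One thing worth checking: the figure in the banner comes from landscape-sysinfo, which is run by the update-motd machinery and may be cached from an earlier moment rather than computed at the instant you log in. You can compare the views directly (both commands exist on stock Ubuntu 12.04; the cache path is the usual one but may vary by release):

        # What the login banner's generator reports right now:
        landscape-sysinfo

        # What the kernel reports right now:
        free -m

        # The cached banner that sshd actually prints (if present):
        cat /run/motd.dynamic

    If landscape-sysinfo run by hand agrees with free -m but disagrees with the banner, the 69% is simply stale.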

  • Unable to get defined path in 'source' type on AIX node

    - by haris
    Hi all. I am trying to create a set of users on my AIX node and to fetch their authorized_keys, which are already hosted on my server with names like 'myuser_id_dsa.pub'. Currently I am managing 2 nodes (1. SLES, 2. AIX). I defined the 'source' file paths in 2 separate contexts in fileserver.conf:

        [AIX]
        path myfiles/users/ssh/
        allow *.another.mydomain.com

        [SLES]
        path myfiles/users/keys/ssh/
        allow *.mydomain.com

    But when I run puppet, it ends successfully on my SLES node and fails on the AIX node with the following error:

        Could not describe /AIX/myuser_id_rsa.pub: Fileserver module 'AIX' not mounted

    In my code I have defined the 'source' with a $fileserver variable:

        case $operatingsystem {
            "AIX":   { $fileserver = "AIX" }
            default: { $fileserver = "SLES" }
        }

        file { "${home}/${username}/.ssh/authorized_keys":
            source => "puppet:///${fileserver}/${username}_id_dsa.pub",
            ...
        }

    Why is AIX not able to get the source path from my fileserver.conf while SLES is running absolutely fine, and how can I fix it? I have to run a similar configuration across different servers, so I can only deal with it via the case statement. Looking forward to your help. Thanks

  • Changing subnet-mask of class-c network host to 255.255.0.0

    - by Prashant Mandhare
    We have an existing class C network with the IP address range 11.22.33.0/24 (just for example). My domain controller has been configured within this subnet, so all servers within this subnet have their subnet mask set to 255.255.255.0. Now we have got a new subnet, 11.22.88.0/24 (note that only the third octet has changed), and I want all new hosts in this new subnet to join my existing DC. We configured the firewall properly to allow this (so there is no issue with the firewall), but initially I was not able to join hosts in the new subnet to the existing domain. Later I suspected the subnet mask used on the domain controller (255.255.255.0), and for testing purposes I changed it to 255.255.0.0. It worked like a charm: I was able to join subnet-2 hosts to the subnet-1 domain. Now I am wondering whether it is good practice to change the subnet mask of a class C network to 255.255.0.0. Can any issues arise from this? Experts, please provide your opinion.

  • 'pskill \\hostname winlogon' might budge a server "stuck rebooting", but why?

    - by Snoi
    Question: executing the remote (Sysinternals) command...

        pskill \\machine winlogon

    ...can budge a server that is stuck rebooting, but how/why does this work? How do you know which process to kill?

    To recreate (e.g.): you run Windows Update, allow a reboot, and... NOTHING! RDP gets cut off but the server does not reboot. Just about every other service seems to stay up.

    Further background: I've faced this problem on VMs hosted around the planet for some years, and have used various sc.exe and shutdown commands to learn the state of, and attempt remote reboots of, servers in such a state, with limited success. Most datacentres don't offer any way to see the true console or power such machines off/on; they charge $$ for you to call them to do such simple things after hours, which is when you nearly always have to run your maintenance tasks. E.g.:

        NET USE \\machine\IPC$ /USER:login password
        sc \\machine query RpcSs
        sc \\machine query TermService
        sc \\machine query wuauserv
        tasklist /s machine

    This occasionally works for me...

        shutdown /m \\machine /r /f /t 0

    ...but more often than not it fails with:

        A system shutdown is in progress (1115).

    I found the question "Can not RDP to Win 2003 box or initiate remote restart", and the answer by @Tweek worked really well, but was I just lucky? @Tweek said to run:

        pskill \\hostname winlogon

    ...and that got me past this situation in a new way (Server 2008 R2 in my most recent case) - really useful! I just need to understand whether I got lucky or there is more science here. What I'd like to know is: why the winlogon process? @Livne said to use "tasklist /s HostName" to see what the culprit is, but how do you tell from the listed output? It's just a list of running tasks; from that I would not know what to look for, nor could I see anything about the winlogon process that suggested it was the one to kill.

  • Exchange Server 2010: move mailboxes from recovered and mounted edb to user's mailbox

    - by user36090
    One of our Exchange servers crashed, and I am trying to recover the mailboxes. We had one Exchange 2003 server named "Apex" and one Exchange 2010 server named "2008Enterprise"; the Exchange 2010 server "2008Enterprise" is the one that crashed. I created a new Exchange 2010 server named "Providence" and ran this command on Providence:

        New-MailboxDatabase -Recovery -Name JBCMail -Server Providence -EdbFilePath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147\Mailbox Database 0579285147.edb" -LogFolderPath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147"

    This command executed and finished without error. I then ran:

        eseutil /p E00

    This was executed from the directory c:\data\Exchange\Mailbox\Mailbox Database 0579285147. I then mounted JBCMail with the mount command (note: I do not have my full typed command). Inside my Exchange Management Console (EMC) I can view the new mailbox database named JBCMail; it shows as mounted on the Exchange server named Providence. I can also see the crashed Exchange server named 2008Enterprise; in the EMC the crashed server's Copy Status under Server Configuration > Mailbox is ServiceDown.

    From here I need to recover three mailboxes. The mailboxes are on the Apex server. How do I move the mailboxes from Apex to Providence? And how do I restore the mailboxes from the mounted JBCMail database to the users' mailboxes? I do not fully understand how to use the Restore-Mailbox command, because when I use it, it tries to restore the mailbox to the dead Apex server:

        Restore-Mailbox -ID 'Jason Young' -RecoveryDatabase JBCMail
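
    For reference, Restore-Mailbox restores into whatever live mailbox -Identity resolves to, so the target has to be a healthy mailbox (moved to or re-created on Providence) before the restore; a sketch of the shape this usually takes (identities and folder name are illustrative):

        # Restore Jason's data from the recovery DB into his live mailbox,
        # landing it in a subfolder so nothing is overwritten.
        Restore-Mailbox -Identity 'Jason Young' `
            -RecoveryDatabase JBCMail `
            -RecoveryMailbox 'Jason Young' `
            -TargetFolder 'Recovered Items'

    If the live mailbox still points at the dead server, the mailbox needs to be re-homed in AD (or the user mail-enabled on Providence) first, which may be why the command keeps chasing the old server.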

  • Powershell Win32_NetworkAdapterConfiguration Not "seeing" PPP Adapter

    - by Ben
    I am trying to get the IP of a PPP VPN network connection, but Win32_NetworkAdapterConfiguration does not seem to "see" it. If I interrogate all adapters using my script, it sees everything but the PPP VPN adapter. Is there a specific filter or something I need to enable, or do I need a different class?

    My script:

        $colItems = Get-WmiObject Win32_NetworkAdapterConfiguration
        foreach ($objItem in $colItems) {
            Write-Host Description: $objItem.Description
            Write-Host IP Address: $objItem.IPAddress
            Write-Host ""
        }

    Script output:

        Description: WAN Miniport (SSTP)
        IP Address:
        Description: WAN Miniport (IKEv2)
        IP Address:
        Description: WAN Miniport (L2TP)
        IP Address:
        Description: WAN Miniport (PPTP)
        IP Address:
        Description: WAN Miniport (PPPOE)
        IP Address:
        Description: WAN Miniport (IPv6)
        IP Address:
        Description: WAN Miniport (Network Monitor)
        IP Address:
        Description: Intel(R) PRO/Wireless 3945ABG Network Connection
        IP Address: 192.168.2.5
        Description: WAN Miniport (IP)
        IP Address:

    ipconfig /all output:

        PPP adapter My VPN:
           Connection-specific DNS Suffix . :
           Description . . . . . . . . . . . : My VPN
           Physical Address. . . . . . . . . :
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
           IPv4 Address. . . . . . . . . . . : 10.1.8.12(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.255
           Default Gateway . . . . . . . . . : 0.0.0.0
           DNS Servers . . . . . . . . . . . : 10.1.1.3
                                               10.1.1.2
           Primary WINS Server . . . . . . . : 10.1.1.2
           Secondary WINS Server . . . . . . : 10.1.1.3
           NetBIOS over Tcpip. . . . . . . . : Enabled

        Wireless LAN adapter Wireless Network Connection:
           Connection-specific DNS Suffix . : Belkin
           Description . . . . . . . . . . . : Intel(R) PRO/Wireless 3945ABG Network Connection
           Physical Address. . . . . . . . . : 00-3F-3C-22-22-22
           DHCP Enabled. . . . . . . . . . . : Yes
           Autoconfiguration Enabled . . . . : Yes
           IPv4 Address. . . . . . . . . . . : 192.168.2.5(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Lease Obtained. . . . . . . . . . : 25 May 2010 20:33:19
           Lease Expires . . . . . . . . . . : 22 May 2020 20:33:17
           Default Gateway . . . . . . . . . : 192.168.2.1
           DHCP Server . . . . . . . . . . . : 192.168.2.1
           DNS Servers . . . . . . . . . . . : 192.168.2.1
           NetBIOS over Tcpip. . . . . . . . : Enabled

    Thanks in advance, Ben
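
    As a possible class-free alternative: the .NET NetworkInterface API does enumerate dial-up/PPP interfaces, so a sketch like the following might pick up the VPN address where the WMI class does not (untested against this exact setup):

        # Enumerate every interface, including PPP, via System.Net.NetworkInformation.
        $nics = [System.Net.NetworkInformation.NetworkInterface]::GetAllNetworkInterfaces()
        foreach ($nic in $nics) {
            if ($nic.NetworkInterfaceType -eq 'Ppp') {
                foreach ($addr in $nic.GetIPProperties().UnicastAddresses) {
                    Write-Host "$($nic.Name): $($addr.Address)"
                }
            }
        }

    NetworkInterfaceType.Ppp is a documented enum value, so filtering on it should isolate the VPN adapter.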

  • SQL Server 2008: unique problem bringing a DB online...

    - by Nai
    This is the error I am facing:

        TITLE: Microsoft.SqlServer.Smo
        Set offline failed for Database 'Go3D_Retailer'
        ------------------------------
        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\ftrow_Go3D_catalog.ndf". Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)".
        Database 'Go3D_Retailer' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
        ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5120)

    Background to this error: I've been trying to move my log-shipping destination database to another physical server for analysis purposes. Because I do not have Active Directory set up, I had to hack my process by using the same username/password on both the source and destination servers to get it to work. Following that, I used this guy's solution to move the destination database to another server. However, this error occurs when I try to bring the database back online. I don't have an E drive on my server and I have no idea why it's trying to open a file from an E drive. I have over 100 GB left on my hard disk, so it's definitely not a space issue. This sounds like a bug... any ideas? I'm running SQL Server 2008 Enterprise edition on Windows Server 2008 R2 64-bit.
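
    The ftrow_*.ndf name suggests a full-text catalog file that is still registered under its old E: path. If that is the case, one hedged approach is to repoint the file while the database is offline (the logical name below is a guess; sys.master_files shows the real one):

        -- Find the logical name and current physical path of the stray file.
        SELECT name, physical_name
        FROM sys.master_files
        WHERE database_id = DB_ID('Go3D_Retailer');

        -- Repoint it somewhere that exists on this server, then bring the DB online.
        ALTER DATABASE Go3D_Retailer
            MODIFY FILE (NAME = ftrow_Go3D_catalog,
                         FILENAME = 'D:\SQLData\ftrow_Go3D_catalog.ndf');
        ALTER DATABASE Go3D_Retailer SET ONLINE;

    Treat this as a sketch: if the catalog file was never copied over, it may need to be rebuilt rather than repointed.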

  • Xen HVM networking won't work

    - by Nathan
    I'm trying to get Xen HVM networking working in routed mode, but I am failing. Xen PV works fine with Ubuntu, but when installing as HVM the guest fails to pick up the network. I'll tell you now that I'm not that experienced with Xen, so I would appreciate any help. vm104 is the HVM that's causing me problems; here are the configs that I believe should help resolve the problem.

        [root@eros vm104]# cat vm104.cfg
        import os, re
        arch = os.uname()[4]
        if re.search('64', arch):
            arch_libdir = 'lib64'
        else:
            arch_libdir = 'lib'
        kernel = '/usr/lib/xen/boot/hvmloader'
        builder = 'hvm'
        memory = 6000
        shadow_memory = '8'
        cpu_weight = 256
        name = 'vm104'
        vif = ['type=ioemu, ip=85.25.x.y, vifname=vifvm104.0, mac=00:16:3e:52:3d:fe, bridge=xenbr0']
        acpi = 1
        apic = 1
        vnc = 1
        vcpus = 4
        vncdisplay = 3
        vncviewer = 0
        vncconsole = 1
        vnclisten = '217.118.x.y'
        vncpasswd = 'kCfb5S4tE7'
        serial = 'pty'
        disk = ['phy:/dev/vpsvg/vm104_img,hda,w',
                'file:/home/solusvm/xen/iso/Windows-Server-2008-RC2.iso,hdc:cdrom,r']
        device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'
        boot = 'cd'
        sdl = '0'
        usbdevice = 'tablet'
        pae = 1

        [root@eros /]# cat /etc/xen/xend-config.sxp | egrep -v "(^#.*|^$)"
        (xend-unix-server yes)
        (xend-unix-path /var/lib/xend/xend-socket)
        (xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')
        (network-script network-route)
        (vif-script vif-route)
        (network-script 'network-route netdev=eth0')
        (dom0-min-mem 256)
        (dom0-cpus 0)
        (vnc-listen '0.0.0.0')
        (vncpasswd '')
        (keymap 'en-us')

    The Windows install will not pick up the network. I've tried setting the IP manually, using the Xen server's IP as the gateway and setting the main IP in Windows, but no luck. If anyone needs any more information let me know; I appreciate any input!

  • Setting up ProxyPass for a Virtualmin virtual server

    - by Andy Ibanez
    I am trying to set up my web server so that I can serve multiple Ghost.org blogs. I am stuck on one crucial step. To be honest, my knowledge of servers is not great, so I request your help. Basically, a tutorial I'm reading suggests I set up a VirtualHost in this way:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName your-url.com
            ServerAlias www.your-url.com
            ProxyRequests off
            ProxyPass / http://127.0.0.1:2368/
            ProxyPassReverse / http://127.0.0.1:2368/
        </VirtualHost>

    So I have gone to the virtual site in Virtualmin to try to add everything as is, via Services > Configure Website > Edit Directives. The problem is, the previous page (Services > Configure Website) says I can't edit the port:

        This Apache virtual host belongs to the Virtualmin server linguist.andyibanez.com, so the address, port, base directory and hostname cannot be changed here.

    And whenever I try to add the ProxyRequests off directive manually in Edit Directives (the other two can be added fine), I'm simply told that the directive can't be there. So what is the right way to "add" the last three directives of the VirtualHost above to my sub-server? Maybe I'm missing a menu item that would help me with this? I request your help, as I have looked for a while; Google keeps thinking I want to serve Webmin via Apache when I google "Set up Virtualmin proxypass", and I have no clue what to do.
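
    Two hedged things worth checking before fighting the GUI. First, ProxyRequests/ProxyPass are only accepted when the proxy modules are loaded, so on a Debian/Ubuntu-style Apache (an assumption about this host):

        # Enable the reverse-proxy modules, then reload Apache.
        a2enmod proxy proxy_http
        service apache2 restart

    Second, Virtualmin also exposes proxying directly (Server Configuration > Edit Proxy Website in many versions), which writes the ProxyPass lines for you inside the existing <VirtualHost> block; that avoids hand-editing directives Virtualmin wants to own.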

  • WebLogic plug-in for Apache HTTP Server: Location directive question

    - by user39510
    We are using WebLogic Portal and Apache 2.x HTTP Server with the WebLogic plug-in for Apache for load balancing. We have an application that right now can only be accessed from one of our managed servers. What I would like to do is use the Location directive to direct all requests for that page to the one managed server, and I can't get it to work. The context that the portal tries to forward to is something like /MyWebApp?portalusername=<username>, where <username> is a legitimate user, for example /MyWebApp?portalusername=joesmith. All other applications load-balance as expected, and the plug-in keeps load balancing this one too: every now and then you get sent to the second managed server for this particular application, where it's not deployed. I tried various things in the Apache httpd.conf like the following but can't seem to get it to work. Any suggestions? The following is a snippet of the httpd.conf; it's a standard out-of-the-box httpd.conf file with the WebLogic plug-in configuration.

        <Location /MyWebApp>
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011
        </Location>

        <Location /?>
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011, myserver2:7012
        </Location>
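
    One hedged observation: <Location> sections match the URL path only, never the query string, and within a <Location> path "?" acts as a single-character wildcard, so "/?" does not mean "root plus query string". Apache also applies Location sections in file order, with later ones overriding earlier ones. A sketch of a layout that may behave as intended (server names copied from the snippet above):

        # Generic rule first: everything load-balances across the cluster...
        <Location />
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011,myserver2:7012
        </Location>

        # ...then the more specific path overrides it and pins /MyWebApp
        # (with or without ?portalusername=...) to the one server that has it.
        <Location /MyWebApp>
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011
        </Location>

    Since the query string never participates in the match, no Location stanza is needed (or possible) for the portalusername part.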

  • Cisco 837 not passing UDP traffic properly (was: DNS query problem)

    - by TessellatingHeckler
    We have a setup of ADSL line -> Cisco 837 ADSL router -> ZyXEL ZyWALL 35 firewall/NAT -> switch -> LAN. It has been fine for years; suddenly DNS resolution stopped working from the LAN to public DNS servers. There were no changes that I know of, so I can't revert anything. Current behaviour:

    - DNS requests from the LAN using TCP show up in the outbound firewall log, in the Cisco debug log, in the dns-server-firewall, and in tcpdump on the DNS server; the answer comes back; it works fine.
    - DNS requests from the LAN using UDP show up in the outbound firewall log and in the Cisco debug log, but do NOT show up in the dns-server-firewall or in tcpdump on the DNS server; they time out.
    - DNS requests from the Cisco itself using UDP show up in the dns-server-firewall and in tcpdump on the DNS server; the answer is received; it works fine.
    - netcat connections to port 53 or a random port by TCP show up in the dns-server-firewall.
    - netcat connections to port 53 or a random port by UDP do not show up in the dns-server-firewall.

    Summary: TCP seems fine throughout. UDP works from the Cisco over the ADSL, and it works from the LAN to the Cisco, but it doesn't seem to cross the Cisco 837 properly.

    Update: confirmed with netcat that any UDP traffic from the LAN is affected, not just traffic to port 53.

    Update: if I change the firewall's external IP to any other IP in the subnet, this starts working. When I put it back, it stops working. I now suspect it's an ISP issue (does that sound plausible?), and am removing the Cisco config.

  • Getting at fsid under Linux? Or an alternate way of identifying filesystems?

    - by larsks
    In an environment with automounted home directories, such that the same filesystem exported by a fileserver may be mounted multiple times on the client, I would like to be able to authoritatively identify whether two mountpoints are in fact the same filesystem. That is, if the remote server exports /home, and the local client has:

        # mount
        fileserver:/home/l/lars on /home/lars type nfs (rw...)
        fileserver:/home/b/bob on /home/bob type nfs (rw...)

    I am looking for a way to identify that /home/lars and /home/bob are in fact the same filesystem. In theory this is what the f_fsid member of the statvfs structure is for, but in all cases, for both local and remote filesystems, I am finding that the value of this member is 0. Is this some sort of client-side issue, or do most modern NFS servers simply decline to provide a useful fsid?

    The end goal of all of this is to robustly interpret the output of the quota command for NFS filesystems. For example, given the example above, running quota as myself may return something like:

        Disk quotas for user lars (uid 6580):
             Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
        otherserver:/vol/home0/a/alice
                             12  52428800  52428800              4  4294967295  4294967295
        fileserver:/home/l/lars
                        9353032   9728000  10240000         124018           0           0

    The problem here is that there exists a quota for me on otherserver which is visible in the results of the quota command, even though my home directory is actually on a different device. My plan was to look up the fsid for each mountpoint listed in the quota output and check whether it matched the fsid associated with my home directory. It looks like this won't work, so... any suggestions?
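
    One low-level cross-check before giving up on fsid: glibc's statvfs is emulated on top of statfs(2), and the raw statfs fsid is what GNU stat exposes in filesystem mode. A quick sketch to compare what the kernel actually reports per mountpoint (paths from the example above):

        # %n prints the name, %i the filesystem ID (statfs f_fsid) in hex.
        stat -f -c '%n %i' /home/lars /home/bob

    If these also come back as 0 for the NFS mounts, the server (or the NFS client code) really isn't supplying a usable fsid, and falling back to comparing the server:export column from /proc/mounts, plus knowledge of the exported tree layout, may be the only robust option; that fallback cannot by itself prove that two different export paths share one underlying filesystem, which is exactly the gap fsid was supposed to close.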

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync + cp -al method to create incremental/snapshot backups of our server tree. The backups go onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine, on which we back up about 20 TB of data from the servers.

    The problem is that each daily backup takes more than 24 hours, and the real time-killer isn't the actual rsync but the time it takes to perform a cp -al of the tree locally on the backup machine. It takes more than 12 hours just to make the hard-linked copy of the tree, and as far as I can tell the bottleneck is the disk (top shows the cp using a lot of RAM but not a lot of CPU, mostly in uninterruptible-sleep state). We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each. Obviously I need to improve the performance, because it's unusable as it stands.

    The first question is: when CentOS 6 comes out with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of a way to hint ext3 like that.
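
    Not an answer to the btrfs/ext3 questions, but one widely used variant sidesteps the cp -al pass entirely: rsync's --link-dest creates the hard links itself while it transfers, so the 12-hour local copy folds into the transfer. A sketch (paths are illustrative):

        #!/bin/sh
        # Snapshot-style backup: unchanged files become hard links into
        # yesterday's snapshot; only changed files are actually copied.
        SRC=server:/data/
        DEST=/backup/volume1
        TODAY=$(date +%F)
        rsync -a --delete \
            --link-dest="$DEST/latest" \
            "$SRC" "$DEST/$TODAY"
        ln -sfn "$TODAY" "$DEST/latest"

    The directory tree itself still has to be created each day, but that is far cheaper than hard-linking every inode in a separate cp pass.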

  • Server 2003, XP Clients, DNS issues

    - by ron
    Hello. I'm having DNS issues on my network. My DC is my DNS server, 10.76.4.11, and recently I configured a forwarder to 10.4.36.10. My workstations are not working because they cannot resolve the domain controller's name via DNS. An ipconfig /all reveals that they know the IP of the DNS server is 10.76.4.11, but if I nslookup 10.76.4.11 from a workstation, the request is forwarded to 10.4.36.10 and goes nowhere. I have since removed the forwarder, but nslookup requests on workstations are still going to 10.4.36.10. If I nslookup 10.76.4.11 on the server, it can resolve its own name, but for some reason when it receives the same request from workstations it doesn't know what to do. All the A records, CNAME records, etc. are correct. DHCP's DNS setting is correct, the GPOs are correct (even though they can't refresh because of this problem!), and the server's network adapter has its DNS set to 10.76.4.11. I just don't know. Very confused.
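
    A hedged note on the symptom: nslookup starts by doing a reverse (PTR) lookup of the DNS server's own address, so a missing reverse zone for 10.76.4.x can make nslookup look broken even when ordinary resolution works. Two quick checks with standard Windows tools (hostnames are placeholders):

        rem From a workstation: query the DC's DNS service explicitly,
        rem after clearing the local resolver cache.
        ipconfig /flushdns
        nslookup dcname.yourdomain.local 10.76.4.11

        rem On the DC itself (Support Tools on 2003): exercise the DNS tests.
        dcdiag /test:dns

    If the explicit query against 10.76.4.11 answers correctly, the problem is more likely the workstations' resolver configuration or the missing reverse zone than the zone data itself.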

  • /scripts/easyapache broke PHP and now outputs binary

    - by James Lawrie
    I ran /scripts/easyapache on one of my cPanel servers last night and since then PHP just gives 501 errors, with no logs that I can find. I tried running easyapache again and it runs as normal, but all output has been replaced with strange characters, e.g.:

        ????+?+± °?? ?-?.?... =??
        ????+?+± °?? ¦+?+?.?... =??
        ????+?+± °?? ?=?/¦+?+?.?... +?
        ????+?+± °?? ?=?/??++.?... =??
        ????+?+± °?? ??++.?... =??
        ????+?+± °?? ???+?+.?... +?
        ????+?+± °?? ?=?/????¦???.?... =??
        ????+?+± °?? +??±?+.?... =??
        ????+?+± °?? +??¦+?.?... =??

    Has anyone come across this before? Is it fixable?
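
    The mangled output looks more like a locale/terminal-encoding mismatch than binary output per se; before digging into easyapache itself, it may be worth resetting the terminal and forcing a sane locale for the session (standard shell commands, offered as a guess at the cause):

        # Restore the terminal state and retry with a known-good locale.
        reset
        export LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8
        /scripts/easyapache

    If the output is clean after that, the 501s are a separate problem, and rechecking Apache's error_log and the PHP handler configuration after the rebuild would be the next step.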

  • Nagios plug-in check_snmp receives NO SNMP data from a Cisco router

    - by Shehryar
    I have tried setting up Nagios on Ubuntu 10.10. It installed successfully and I can log in to the web interface; however, I am stuck on configuring SNMP, or I am doing something wrong. I have followed various sites and the Nagios wiki to set up the configuration (cfg) files. When I check the web interface, it gives the following error for one of my Cisco routers:

        Current Status: UNKNOWN (for 0d 2h 55m 56s)
        Status Information: SNMP problem - No data received from host
        CMD: /usr/bin/snmpget -t 1 -r 5 -m RFC1213-MIB -v 1 [authpriv] 192.168.1.1:161 ifOperStatus.1

    On the command line itself, when I type the following, it just sits there waiting and waiting:

        sudo /usr/local/nagios/libexec/check_snmp -H 192.168.1.1 -C Routers -o sysUpTime.0

    When I type the following command, I get an OK:

        /usr/bin/snmpget -v1 192.168.1.1:161 1.3.6.1.2.1.1.5.0 -c "Routers"

    I have configured SNMP properly on our Cisco device, as we can collect SNMP data via two other monitoring tools (SolarWinds and ManageEngine); we are tempted towards Nagios as it's open source. I would be grateful if someone could assist in rectifying this situation and guide me in setting up Nagios to monitor Cisco routers, switches and a few servers. We want to monitor bandwidth, CPU utilization, uptime and other necessary counters. Thanks for reading. Shehryar
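
    One pattern visible above: the snmpget that works uses a numeric OID, while the check_snmp call uses a symbolic name (sysUpTime.0), which depends on MIB files being installed and resolvable. A hedged first test is to take MIB translation out of the picture:

        # sysUpTime's numeric OID; -P 1 forces SNMP v1 to match the working snmpget.
        /usr/local/nagios/libexec/check_snmp -H 192.168.1.1 -C Routers \
            -P 1 -o 1.3.6.1.2.1.1.3.0

    If the numeric form returns OK, the fix is either to install and enable the net-snmp MIBs (the snmp-mibs-downloader package on Ubuntu) or to keep numeric OIDs in the Nagios service definitions.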

  • Distributed development staff needing a common IP range

    - by bakasan
    I work on a development staff that is geographically distributed, mostly throughout the state of CA, but several key members also must travel frequently. We rely quite heavily on a third-party provider API for a great deal of our subsystems (I can't get into who they are or what they do). The third party, however, is quite stringent on network access and has no notion of a development sandbox: access is restricted to 2 or 3 IP numbers and that's about it. Once we account for our production servers, that leaves us with an IP or two to spare for our dev team, which is still problematic as people's home IPs change, people travel, we have more than 2 devs, etc. Wide IP blocks are not permitted by the third party, nor will they allow dynamic-DNS-type services. There is no simple console to swap IPs on the fly either (e.g. if a dev's IP at home changes or they are on the road).

    As none of us are deep network experts, I'm wondering what our viable options are. Are there such things as third-party-hosted VPNs? Generally I think of a VPN as a mechanism to gain access to a home office, but the notion here would be a third-party VPN that we'd all connect to, and we'd register its IP as an origin with our provider. We've considered using Amazon EC2 to effectively host a dev environment for each dev and using that to connect, but Amazon only gives you so many static IPs (I believe 5?), so this would only be a stopgap until our team size outstrips our IP count at Amazon. Those were the only viable thoughts I had, but again, I'm far from a networking guy. I tried searching for similar threads, but I'm not even sure I know the right vernacular to look around for.

  • How to configure Hyper-V failover cluster to live migrate when dynamic memory runs out?

    - by Matt Johnson
    Appologies in advance that this is not a direct programming question, but I have a feeling that the solution involves custom powershell scripts (maybe), so this is as good a place to ask as any. I maintain a website that has a large Hyper-V cluster for SQL Servers. We are using Windows 2008 R2 SP1, and the new "dynamic memory" feature. I've already ready reviewed the Best Practices Guide, and implemented it's suggested configuration. Everything works well, except that when SQL demand increases memory pressure to expand to more memory than is available on the physical machine, the memory status goes into the "Warning" state and stays there. I assume the hypervisor is using a swapfile on the host to fulfill the memory requirement, thus slowing the virtual machine down. When this happens, there are plenty of other nodes in the cluster that have available resources. I can live-migrate the virtual server over there and everything works, and the warnings go away. Now how can I automate this? I see no menu options in either Hyper-V or the Failover Cluster Manager for performing a migration or shutdown when dynamic memory goes into the warning state. Any ideas about how to script this, or monitor it and invoke the action directly, would be helpful. If the solution involves coding, powershell would be ideal, but I could envison this as a .Net Service that monitors for this state and kicks off the migration request. I just don't know what objects are involved in doing the monitoring or kicking off the live migration. Thanks in advance.
