Search Results

Search found 9816 results on 393 pages for 'blade servers'.


  • Error when mounting the database in Exchange 2010 SP1

    - by user64060
    Hi, my company has two Exchange 2010 SP1 servers in a DAG configuration, running on Windows Server 2008 R2, in a test environment. Today I wanted to test my ability to restore from backup, so I restored the backup data to a location other than the original one. I dismounted the database, deleted all the files under the database location, and then copied the files back from the backup location to the database location. When I try to mount the database, I get the error below:

        --------------------------------------------------------
        Microsoft Exchange Error
        --------------------------------------------------------
        Failed to mount database 'mail2'.

        mail2
        Failed
        Error: Couldn't mount the database that you specified. Specified database: mail2; Error code: An Active Manager operation failed. Error: The database action failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Database: mail2, Server: mail2.e0594.cn].

        An Active Manager operation failed. Error: The database action failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Database: mail2, Server: mail2.e0594.cn]

        An Active Manager operation failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Server: mail2.e0594.cn]

        MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011)

    Any suggestions? Thanks!
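    Update: one thing I'm planning to check next, based on what I've read about file-level restores, is whether the copied-back database is in a clean shutdown state. Roughly this, although the paths and the E00 log prefix below are made up, so treat it as a sketch rather than the exact commands I ran:

        rem check the database header; look for "State: Clean Shutdown" vs "Dirty Shutdown"
        eseutil /mh "D:\Databases\mail2\mail2.edb"
        rem if it is dirty, try a soft recovery against the copied-back log files first
        eseutil /r E00 /l "D:\Databases\mail2\Logs" /d "D:\Databases\mail2"

    If the header already shows a clean shutdown, then I'm back to square one.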

    Read the article

  • VMware web UI intermittent access on CentOS

    - by PeteWilliams
    I've got a CentOS 5.2 server that I'm trying to set up as a development environment. As part of this, I planned to install VMware Server 2 and set up several virtual development servers. I've got as far as installing VMware Server 2, but access to the remote control panel only works intermittently. If I access it through Firefox at https://127.0.0.1:8333/ui/# it usually says either:

        "Connection interrupted: connection was reset before the page loaded"

    or

        "Firefox can't establish a connection to the server at 127.0.0.1"

    But every now and then it lets me in, and I'll manage a few clicks in the web UI before it kicks me out with the following error:

        "The server could not complete a request (HTTP 0 ). The server encountered an unexpected condition that prevented it from fulfilling the request. If this problem persists, please contact your system administrator."

    I've done all the updates available in CentOS except one OpenOffice one that is causing a conflict, and I re-ran vmware-config.pl after updating the kernel, though I went with all the defaults as I don't really know what I'm doing! I've since rebooted and nothing changed. I've also tried accessing the control panel remotely from another machine on the network and the results are the same. Does anyone have any ideas what might be causing this and how I can resolve it? I'm afraid I'm a developer playing at sys-admin, so I may be missing something obvious! Many thanks, Pete

    Update: I have now reinstalled both the operating system and VMware and I'm still getting the same issue. I wonder if it's a result of the settings I'm putting in on the vmware-config.pl script..?

    Read the article

  • My BIOS date/time is resetting to 2002, what should I do?

    - by Thierry Lam
    I bought my PC brand new in mid-2006. I'm currently dual-booting 32-bit Windows XP and Ubuntu Karmic Desktop. Over the last few days, when I boot up my computer, it tells me that my BIOS time is not set. I now have two choices for booting:

    Windows first: I can safely get into Windows, but the date now shows 2002/01/01 at 00:00. From there, Windows will not sync its time with its time servers; I have to advance the date myself. I can also advance the date from the BIOS at startup.

    Ubuntu first: Since the date is still set to 2002, the boot system cannot find my Linux partition. I can fix that issue by booting into Windows and setting the date properly, and the Linux issue is then fixed automatically.

    Both issues can also be avoided if I set the BIOS date/time manually when I turn on my computer. It's annoying to do that every single day, since I shut down my PC daily. Am I having an OS issue or a hardware issue? How can I resolve this problem?

    Read the article

  • sysbench memory test on ec2 small instance

    - by caribio
    I'm seeing a problem with the sysbench memory test (the default version that's compiled in). This is on Ubuntu Maverick, sysbench installed via apt-get install sysbench. Running the same thing on Ubuntu @ Rackspace worked just as expected. While the CPU and I/O tests worked fine on EC2 servers, the memory test just runs without doing anything (notice the 0M in the test results). The instance used was the publicly available 'stock' Ubuntu image with no changes to it:

        ./ec2-run-instances ami-ccf405a5 --instance-type m1.small --region us-east-1 --key mykey

    Supplying more arguments (such as: --memory-block-size=1K --memory-total-size=102400M) didn't help. What am I doing wrong? Thanks.

        sysbench --num-threads=4 --test=memory run
        sysbench 0.4.12:  multi-threaded system evaluation benchmark

        Running the test with following options:
        Number of threads: 4

        Doing memory operations speed test
        Memory block size: 1K

        Memory transfer size: 0M

        Memory operations type: write
        Memory scope type: global
        Threads started!
        Done.

        Operations performed: 0 (0.00 ops/sec)
        0.00 MB transferred (0.00 MB/sec)

        Test execution summary:
            total time:                          0.0003s
            total number of events:              0
            total time taken by event execution: 0.0000
            per-request statistics:
                 min:                            18446744073709.55ms
                 avg:                            0.00ms
                 max:                            0.00ms

        Threads fairness:
            events (avg/stddev):           0.0000/0.00
            execution time (avg/stddev):   0.0000/0.00

    Read the article

  • Cisco IOS ACL: Don't permit incoming connections just because they are from port 80

    - by cjavapro
    I am going largely from memory here, and I may not be correct on all of this. This is on a Cisco 851 (IOS) set up with a BVI / bridge-route (the servers on the inside are configured with static, public IP addresses). I would apply two access lists (both ending with deny ip any any log) on FastEthernet4 (the WAN port): one for FA4 in and another for FA4 out.

    FA4 out would have a line like:

        access-list 110 permit 98.76.54.0 0.0.0.255 gt 1023 any eq http

    I think this means that hosts in 98.76.54.* with a source port of at least 1024 can connect to any other machine on destination port 80. So then I have to allow the response to the HTTP connection. FA4 in would have a line like:

        access-list 120 permit any eq http 98.76.54.0 0.0.0.255 gt 1023

    Now the problem with that is that anybody on the outside can set their source port to 80 and then connect to any inside port that is at least 1024. How do we prevent this and require the incoming data to be a response to the outgoing data?
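    From what I half-remember, the usual answer to this is the established keyword (or reflexive ACLs / CBAC), so that the inbound line only matches packets belonging to a TCP conversation that the inside host started. Something like the lines below is what I have in mind, but since I'm going from memory please treat it as an untested sketch (I believe the protocol has to be spelled out as tcp for port matching, and that IOS wants www rather than http as the port keyword):

        access-list 110 permit tcp 98.76.54.0 0.0.0.255 gt 1023 any eq www
        access-list 120 permit tcp any eq www 98.76.54.0 0.0.0.255 gt 1023 established

    Is that the right direction, or is there a better way to do it on an 851?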

    Read the article

  • DPM Server 2010 attach agent error: administrator privileges missing?

    - by Michael
    I'm hoping you will be able to help me out with this little problem I'm having. I installed DPM 2010 in our test environment to test backups of Exchange 2010 servers. The environment includes:

        1x domain controller
        2x Exchange Server 2010
        1x DPM 2010 server

    All of these are running as Windows Server 2008 R2 virtual machines; the host machines are using Hyper-V. The problem goes like this:

        1. I tried to install the agents from the DPM server GUI, which failed saying I didn't have the correct permissions.
        2. So then I tried the manual installation using the commands from the Microsoft site: http://technet.microsoft.com/en-us/library/bb870935.aspx
        3. The agent installation worked, but when I get to attaching the agents to the DPM server it still gives me the error saying that the specified account does not have administrator rights.
        4. I tried the domain admin, users who are domain admin + local admin, and single local admins.
        5. I have turned off the Windows firewall and made sure all the services are running.

    So now I'm out of ideas and really need help; attaching the agents to the DPM server is the last thing holding me back from deploying everything to the production site. Any help would be really appreciated.

    Read the article

  • central apache log analysis of many hosts

    - by Jason Antman
    We have 30+ Apache httpd servers, and are looking to perform analysis on the logs both for historical trending and near "real time" monitoring/alerting. I'm mainly interested in things like error rates (4xx/5xx), response time, overall request rate, etc., but it would also be very useful to pull out more compute-intensive statistics like unique client IPs and user agents per unit of time.

    I'm leaning towards building this as a centralized collector/server/storage, and am also considering the possibility of storing non-apache logs (i.e. general syslog, firewall logs, etc.) in the same system. Obviously a large part of this will probably have to be custom (at least the connection between pieces and the parsing/analysis we do), but I haven't been able to find much information on people who have done stuff like this, at least at shops smaller than Google/Facebook/etc. who can throw their log data into a hundred-node compute cluster and run Map/Reduce on it. The main things I'm looking for are:

        - All open source
        - Some way of collecting logs from the Apache machines that isn't too resource-intensive, and transports them relatively quickly over the network
        - Some way of storing them (NoSQL? key-value store?) on the backend, for a given amount of time (and then rolling them up into historical averages)
        - In the middle of this, a way of graphing in near-real-time (probably also with some statistical analysis on it) and hopefully alerting off of those graphs

    Any suggestions/pointers/ideas, to either "products"/projects or descriptions of how other people do this, would be greatly helpful. Unfortunately, we're not exactly a new-age-y devops shop: lots of old stuff, homogeneous infrastructure, and strained boxes.
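    For the collection piece, the lightest-weight direction I've prototyped so far is just rsyslog's imfile module tailing the access logs on each web box and forwarding over TCP to the collector. A sketch of what I mean, using legacy rsyslog syntax with a made-up collector name, so treat it as illustrative rather than a working recipe:

        # /etc/rsyslog.d/apache-forward.conf on each apache host
        $ModLoad imfile
        $InputFileName /var/log/httpd/access_log
        $InputFileTag apache-access:
        $InputFileStateFile stat-apache-access
        $InputFileFacility local6
        $InputRunFileMonitor
        # ship the tailed entries to the central collector over TCP (@@ = TCP, @ = UDP)
        local6.* @@logcollector.example.com:514

    I'm much less sure about the storage and graphing side, which is really what I'm asking about.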

    Read the article

  • Does Active Directory on Server 2003 R2 support IPv6 subnets in Sites and Services?

    - by NorbyTheGeek
    I've been experimenting with IPv6 at our organization. The domain controllers (all 2003 R2) and most of the servers (2003 R2 / 2008 / 2008 R2) have IPv6 configured. We have a subnet assigned through a tunnel provider. Currently, the only workstation that is running IPv6 is mine (Windows 7). I have been noticing that my workstation is picking domain controllers in other sites for things like DFS, and I finally realized that I don't have the IPv6 subnets set up in Active Directory Sites and Services (ADSS). But when I try to add an IPv6 prefix in ADSS, it tells me:

        Windows cannot create the object 2001:xxxx:xxxx:xxxx::/64 because: The object name has bad syntax.

    I believe I may be using the 2008 version of the admin tools (ADSS reports version 6.1.7601.17514), so I'm wondering if maybe my 2003 R2 Active Directory schema doesn't support configuring IPv6 subnets in ADSS. Is this true?

    UPDATE: Even with the 2008 R2 schema in Active Directory, I'm having the same problem. How can I get my IPv6 subnets into Sites and Services?

    Read the article

  • Caching all files in varnish

    - by csgwro
    I want my Varnish servers to cache all files. At the backend there is lighttpd hosting only static files, and there is an md5 in the URL in case a file changes (e.g. /gfx/Bird.b6e0bc2d6cbb7dfe1a52bc45dd2b05c4.swf). However, my hit ratio is very poor (about 0.18). My config:

        sub vcl_recv {
            set req.backend=default;
            ### passing health to backend
            if (req.url ~ "^/health.html$") {
                return (pass);
            }
            remove req.http.If-None-Match;
            remove req.http.cookie;
            remove req.http.authenticate;
            if (req.request == "GET") {
                return (lookup);
            }
        }

        sub vcl_fetch {
            ### do not cache wrong codes
            if (beresp.status == 404 || beresp.status >= 500) {
                set beresp.ttl = 0s;
            }
            remove beresp.http.Etag;
            remove beresp.http.Last-Modified;
        }

        sub vcl_deliver {
            set resp.http.expires = "Thu, 31 Dec 2037 23:55:55 GMT";
        }

    I have done some performance tuning:

        DAEMON_OPTS="${DAEMON_OPTS} -p thread_pool_min=200 -p thread_pool_max=4000 -p thread_pool_add_delay=2 -p session_linger=100"

    The main URL which is missed is... /health.html. Is that pass to the backend configured correctly? With health checking disabled, the hit ratio increases to 0.45. Now mostly "/crossdomain.xml" is missed (from many domains, as it is wildcard). How can I avoid that? Should I take other headers like User-Agent or Accept-Encoding into account? I think the default hashing mechanism uses url + host/IP. Compression is used at the backend. What else can improve performance?
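    In case it matters, the way I've been watching which requests actually go to the backend is with the standard command-line tools (Varnish 2.x/3.x):

        varnishstat -1 -f client_req,cache_hit,cache_miss,s_pass
        varnishtop -i txurl    # URLs most frequently fetched from the backend

    which is how I noticed health.html and then crossdomain.xml dominating the backend fetches.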

    Read the article

  • What is the oldest hardware still in production use? How is it kept running?

    - by sleske
    In the spirit of the question What is your oldest hardware that still works?, I'd like to ask: What is the oldest hardware you know that is still in production use? And what challenges did you (or someone else) face in keeping it running (scarce documentation, no support, no spare parts available...)? Most organizations will retire / upgrade software and hardware after 5-10 years, but sometimes old software is kept running on old boxes, because it "just works". I once worked at a client site that was running a critical piece of (in-house developed) business software on a single server running HP-UX. The server was old (ca. 12-13 years), but fortunately still running without problems; however, getting spares would have been very difficult, and since software installation was undocumented, any significant system changes or even new hardware might have caused significant downtime and data loss. We eventually managed to replace it, but this is not always possible. I also read that many organizations still run decade-old mainframe hardware, particularly for highly customized systems controlling industrial machines or power plants. Which old hardware have you encountered? How did you manage these challenges? Related question: http://serverfault.com/questions/82467/should-old-servers-be-retired

    Read the article

  • Converting a Windows 2003 server

    - by Jim Bass
    We have a legacy database system based upon MS SQL running on Windows Server 2003. The client software will only run on Windows XP. We have recently had success converting a client into a virtual machine and running it under Fusion on Mac minis. So far, it is working incredibly well. So well, in fact, that we are now considering trying to convert the server to a virtual machine. This raises several questions, though:

        1. The server uses a RAID array. Does the VM virtualize the RAID array? I only ask because in my experience Windows products don't like it when you change core hardware.
        2. Is there any reason why running SQL Server on a virtual machine won't work? It will be up 24/7.
        3. Is there a different converter for servers?
        4. Will I have to track down the licensing for MS SQL and Server 2003, or will they come across OK?
        5. The company that designed the software is no longer in business. There is some fear that the software is somehow tied to the hardware configuration. We bought the hardware, but their engineers came out and configured the system. Will the virtual machine be able to spoof particular chipsets?

    Thanks! Jim Bass

    Read the article

  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an elastic load balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup, we get nothing but Miss and RefreshHits from CloudFront, which so far has defeated the purpose. Are there any additional settings needed in order to use an ELB as your custom origin? In the docs, this is referenced as a viable solution. It appears that when we point the distribution to a single server in our production cluster, CloudFront properly caches our assets. Is it possible that the sticky-sessions cookie and the subsequent header that gets added by it could be an issue?

        Cache-Control: no-cache="set-cookie"  //Added by load balancer

    Any ideas? FYI - currently, we have our custom origin pointing to a single EC2 instance, so caching is working correctly - in case you try to curl the file below. Example headers:

        curl -I http://static.quick-cdn.com/css/9850999.css
        HTTP/1.0 200 OK
        Accept-Ranges: bytes
        Cache-Control: max-age=3700
        Cache-Control: no-cache="set-cookie"
        Content-Length: 23038
        Content-Type: text/css
        Date: Thu, 12 Apr 2012 23:03:52 GMT
        Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
        Server: Apache/2.2.17 (Ubuntu)
        Vary: Accept-Encoding
        X-Cache: RefreshHit from cloudfront
        X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A==
        Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
        Connection: close

    Read the article

  • Licensing SQL Server 2012 Reporting Services w/ SharePoint 2010

    - by Evan M.
    Here's my situation: I have one VM that is running SharePoint 2010 SP1. I have a different physical server running SQL Server 2008 R2 that hosts all the configuration and content databases for SharePoint. Now we want to start providing BI capabilities to our users with SharePoint and SQL Server. With its new features, 2012 is the obvious way to go. To support this, I'm looking to build a new VM that will have SQL Server 2012 installed with Analysis Services and SSIS, which will be the platform that gets our data from our Oracle databases, puts it in a warehouse hosted by the SQL 2012 instance, and loads it into cubes. What's getting me about the platform is licensing for Reporting Services and PowerPivot. My plan was to install SSRS and PowerPivot on the current SharePoint server. But my understanding of the licensing is that instead of only the new SQL server being licensed, I'd have to license both the new server and the SharePoint server. Conversely, I could install SharePoint onto the SQL server and only have to get a second SharePoint license, but then I'd have the added complexity of deploying a separate application server, and it combines my data and application servers. Is my licensing understanding correct, or can I have SSRS and PowerPivot installed separately without incurring additional licensing costs?

    Read the article

  • use ssh tunnel with phpmyadmin

    - by JohnMerlino
    I've been using an SSH tunnel to bypass the firewall of a remote MySQL server. On my Ubuntu 12.04 installation it works via the terminal, and it works when using a program called MySQL Workbench. However, that program freezes often and I want to try phpMyAdmin as an alternative. However, I cannot connect to the remote server using the SSH tunnel with phpMyAdmin, although I can connect locally. These are the steps I've tried:

    1) Open a tunnel, listening on localhost:3307 and forwarding everything to port 3306 on the remote server (I used 3307 because MySQL on my local machine uses the default port 3306):

        ssh -L 3307:localhost:3306 [email protected]

    So now I have the port for the tunnel open as well as my local MySQL installation's default port:

        $ netstat -tln
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State
        tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN
        tcp        0      0 127.0.0.1:3307          0.0.0.0:*               LISTEN
        tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
        ...

    2) Now I can easily connect to the remote server via localhost using the terminal:

        $ mysql -u user.name -p -h 127.0.0.1 -P 3307

    Notice that I explicitly identify 3307 as the port, so traffic is forwarded to the remote server, and hence it logs me in to the remote server. Unfortunately, the localhost/phpmyadmin login interface doesn't allow you to specify a port option. So I modified the config-db.php file and changed the $dbport variable to 3307, under the impression that the phpMyAdmin interface would then work with port 3307:

        $ sudo vim /etc/phpmyadmin/config-db.php
        $dbport='3307';

    Then I restarted the MySQL server. Unfortunately, it didn't work. When I use the remote credentials to log in, it gives me the error:

        #1045 Cannot log in to the MySQL server
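    In case it matters, I've also tried keeping the tunnel up in the background instead of in a foreground terminal; that works fine for the mysql client but changes nothing for phpMyAdmin (user and host below are placeholders):

        # -f: go to background, -N: no remote command, just forward the port
        ssh -f -N -L 3307:localhost:3306 dbuser@remote-db-host
        # confirm the local end of the tunnel is listening before trying phpmyadmin
        netstat -tln | grep 3307

    I'm also starting to suspect that config-db.php (the file dbconfig-common generates) only covers phpMyAdmin's own control database, and that the server I actually log in to is defined elsewhere, but I haven't confirmed that.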

    Read the article

  • How do you back up your own files? [on hold]

    - by Antonis Christofides
    I'm a system administrator and I use rsnapshot to back up some servers and duplicity for some others. Both work fine, each with advantages and disadvantages. Despite that, I am at a loss on how to back up my own private files. I'd use duplicity to automatically back up my files to a remote server; but the problem is that once in a while I must do a full backup. My emails and important files are 9 GB, and I expect this to increase. Uploading through ADSL at 1 Mbit would take 20 hours. Too much. rsnapshot doesn't require periodic full backups (only the first time), but it must be running on the remote server and have a means to connect to my computer; if the server is compromised (or simply if the NSA decides to use it), my own machine is also compromised. Not good. The only solution I've come up with is to use encfs, use unison to synchronize the files to a remote server, and use duplicity or rsnapshot on the remote server to back up those files. In that case, the question is whether I can sync the files on many computers; is it possible for encfs to be used with the same key on many computers? I also think that if I append one character to the unencrypted file, its encrypted encfs counterpart might change a lot, so that incrementals with duplicity would be less efficient, but that's not a big deal. Maybe also, when I need to restore a file, finding the correct file to restore could be a pain, because of filename encryption. I wonder whether there is any other possibility that I've overlooked. Maybe I'm asking too much for my personal use, and I should settle for an external disk?
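    For concreteness, the setup I have in mind is roughly the following; paths and the host name are made up, and I haven't tested encfs's --reverse mode much, so this is a sketch of the idea rather than something I'm running:

        # expose an encrypted view of the plaintext tree without storing a second copy
        encfs --reverse ~/Private ~/.private-ciphertext
        # sync only the ciphertext to the remote machine; duplicity or rsnapshot then runs there
        unison ~/.private-ciphertext ssh://backuphost//srv/backup/ciphertext

    That way the remote end never sees plaintext or the encfs key.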

    Read the article

  • How to set up a Hyper-V domain with internet access

    - by fynnbob
    First off, let me say that I'm not a network admin or server guy; I know very little about that stuff. What I'm trying to do is set up a virtualized domain using Hyper-V. Here is the configuration:

        Physical server: 4 GB RAM, Windows Server 2008 R2 running Hyper-V

        Virtual environment:
        - One domain controller running Windows Server 2008 R2
        - One client running Windows Server 2008 R2

    I have been successful in setting up a virtual domain controller and adding a virtual client to that domain controller, but I'm stuck at trying to give the virtual environment Internet access. I can give the client VM Internet access if I remove it from the virtual domain, but once I add it back to the virtual domain, Internet access is gone. I've read articles describing many different ways this can be done (using RRAS with NAT, using a wireless connection, etc.), but all of those articles only cover a small piece of the setup and also seem to be geared towards people who know their way around networking and servers, which I don't. I'd like to know more, but my thing is software development and I have my hands full trying to keep up with everything in that realm. I simply want to set up a virtual domain with Internet access for testing. Can anyone point me to any "for Dummies" type information on how to set up this type of environment, or can anyone provide this kind of step-by-step help? Any help would be very much appreciated.

    Read the article

  • How can I implement ansible with per-host passwords, securely?

    - by supervacuo
    I would like to use ansible to manage a group of existing servers. I have created an ansible_hosts file, and tested successfully (with the -K option) with commands that only target a single host:

        ansible -i ansible_hosts host1 --sudo -K # + commands ...

    My problem now is that the user passwords on each host are different, but I can't find a way of handling this in Ansible. Using -K, I am only prompted for a single sudo password up-front, which then seems to be tried for all subsequent hosts without prompting:

        host1 | ...
        host2 | FAILED => Incorrect sudo password
        host3 | FAILED => Incorrect sudo password
        host4 | FAILED => Incorrect sudo password
        host5 | FAILED => Incorrect sudo password

    Research so far:

        - a StackOverflow question with one incorrect answer ("use -K") and one response by the author saying "Found out I needed passwordless sudo"
        - the Ansible docs, which say "Use of passwordless sudo makes things easier to automate, but it’s not required." (emphasis mine)
        - this security StackExchange question which takes it as read that NOPASSWD is required
        - article "Scalable and Understandable Provisioning...", which says: "running sudo may require typing a password, which is a sure way of blocking Ansible forever. A simple fix is to run visudo on the target host, and make sure that the user Ansible will use to login does not have to type a password"
        - article "Basic Ansible Playbooks", which says "Ansible could log into the target server as root and avoid the need for sudo, or let the ansible user have sudo without a password, but the thought of doing either makes my spleen threaten to leap up my gullet and block my windpipe, so I don’t"
          My thoughts exactly, but then how to extend beyond a single server?
        - ansible issue #1227, "Ansible should ask for sudo password for all users in a playbook", which was closed a year ago by mpdehaan with the comment "Haven't seen much demand for this, I think most people are sudoing from only one user account or using keys most of the time."

    So... how are people using Ansible in situations like these? Setting NOPASSWD in /etc/sudoers, reusing password across hosts or enabling root SSH login all seem rather drastic reductions in security.
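    One thing I've stumbled across, but am not sure is the intended use, is the ansible_sudo_pass inventory variable, which would at least avoid NOPASSWD, although it means storing the passwords somewhere Ansible can read them. A sketch of what I mean, with obviously fake values:

        # ansible_hosts
        [webservers]
        host1  ansible_sudo_pass=secret-for-host1
        host2  ansible_sudo_pass=secret-for-host2

        # then drop -K, since the per-host variable supplies each sudo password
        ansible -i ansible_hosts webservers --sudo -m ping

    Is that really how people handle it, or is there something less awful?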

    Read the article

  • How Do I Use Multiple Versions of OpenSSL ... One for Apache and one for PHP

    - by Ken S.
    I have an Apache 2.2 (self-compiled) server that is getting dinged during a PCI scan because it does not support TLS 1.1 or 1.2 ciphers. After some digging I found that the installed version of OpenSSL (0.9.8e) does not contain the newest TLS ciphers, so I downloaded and compiled the latest version of OpenSSL (1.0.1c) and installed it in an alternate location under /opt so it wouldn't interfere with the installed version. What I would like to do is compile Apache against the 1.0.1 libraries and keep the system-installed libraries for use with PHP, cURL, openssh, etc. I'm hoping that doing it this way will allow Apache to use the newest TLS but not break anything with any other programs that require the old libraries. I thought I could do this by adding an entry to /etc/ld.so.conf that points to the new libraries, but I think this would conflict with the existing ones, i.e. two references to libcrypto could cause everything to have issues. The main reason for doing this is that PHP cURLing to external servers has issues with the latest OpenSSL libs, which would otherwise require edits to our PHP code. Would love some guidance on how best to accomplish this.
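    What I'm currently experimenting with, in case it is relevant to an answer, is pointing the Apache build directly at the /opt copy and baking the library path into the httpd binary with an rpath, so /etc/ld.so.conf never has to know about the second OpenSSL. The paths below are just my local ones, so treat this as a sketch:

        cd httpd-2.2.x
        LDFLAGS="-Wl,-rpath,/opt/openssl-1.0.1c/lib" \
        ./configure --prefix=/usr/local/apache2 --enable-ssl --with-ssl=/opt/openssl-1.0.1c
        make && make install
        # verify httpd picked up the /opt libcrypto/libssl rather than the system 0.9.8e
        ldd /usr/local/apache2/bin/httpd | grep -E 'libssl|libcrypto'

    PHP, cURL, openssh, etc. would keep linking against the untouched system 0.9.8 libraries.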

    Read the article

  • Where do these mysterious DNS lookups come from and why are they slow?

    - by Hongli
    I have recently obtained a new dedicated server which I'm now setting up. It's running 64-bit Debian 6.0. I have cloned a fairly large git repository (177 MB including working files) onto this server. Switching to a different branch is very, very slow. On my laptop it takes 1-2 seconds; on this server it can take half a minute. After some investigation it turns out to be some kind of DNS timeout. Here's an exhibit from strace -s 128 git checkout release:

        stat("/etc/resolv.conf", {st_mode=S_IFREG|0644, st_size=132, ...}) = 0
        socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 5
        connect(5, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("213.133.99.99")}, 16) = 0
        poll([{fd=5, events=POLLOUT}], 1, 0) = 1 ([{fd=5, revents=POLLOUT}])
        sendto(5, "\235\333\1\0\0\1\0\0\0\0\0\0\35Debian-60-squeeze-64-minimal\n\17happyponies\3com\0\0\1\0\1", 67, MSG_NOSIGNAL, NULL, 0) = 67
        poll([{fd=5, events=POLLIN}], 1, 5000) = 0 (Timeout)

    This snippet repeats several times per git checkout call. My server's hostname was originally Debian-60-squeeze-64-minimal. I changed it to shell.happyponies.com by running hostname shell.happyponies.com, editing /etc/hostname and rebooting the server. I don't understand the DNS protocol, but it looks like Git is trying to look up the IP for Debian-60-squeeze-64-minimal as well as for happyponies.com. Why does Debian-60-squeeze-64-minimal come back even though I've already changed the hostname? Why does Git perform DNS lookups at all? Why are these lookups so slow? I've already verified that all DNS servers in /etc/resolv.conf are up and responding (albeit slowly), yet Git's own lookups time out. Changing the hostname back to Debian-60-squeeze-64-minimal seems to fix the slowness. Basically I just want to fix whatever DNS issues my server has, because I'm sure they will cause more problems than just slowing down git checkout. But I'm not sure what exactly the problem is or what these symptoms mean.
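    One workaround I'm trying in the meantime is making sure both hostnames resolve locally, so getaddrinfo() never has to leave the box at all (this uses the Debian 127.0.1.1 convention; adjust to taste):

        # /etc/hosts
        127.0.0.1   localhost
        127.0.1.1   shell.happyponies.com shell Debian-60-squeeze-64-minimal

        # sanity check: both names should now resolve instantly via the files backend
        getent hosts shell.happyponies.com
        getent hosts Debian-60-squeeze-64-minimal

    I expect that to make git checkout fast again, but it feels like papering over whatever is actually broken, hence the question.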

    Read the article

  • Are there any viable DNS or LDAP alternatives for distributed key/value storage and retrieval?

    - by makerofthings7
    I'm working on a software app that needs distributed, decentralized name resolution, and isn't bound to TCP/IP. Or more precisely, I need to store a "key" and look up its value, and the key may be a string, a number, or any other realistic data type. Examples:

        - With a phone number, look up a name (or with an area code, redirect to the server that handles that exchange).
        - With an IP address, get a DNS name or a Whois contact (string value).
        - With a string, get an IP (like a DNS TXT or SRV record).

    I'm thinking out of the box here and looking for any software that allows for this (more info below). Are there any secure, scalable DNS alternatives that have gained notoriety? I could ask on StackOverflow, but I think the infrastructure groups would have better insight on this.

    Edit - more info: I'm looking at "Namecoin", the DNS version of Bitcoin, and since that project is faltering, I'm looking at alternative ways to store name-value pairs, with an optional qualifier. I think a name-value pair of global interest is useful, but on a limited scale. Namecoin tried to be too much, and ended up becoming nothing. I'm trying to solve that problem by researching alternatives and applying distributed technologies where applicable. Bitcoin/Namecoin offers a Distributed Hash Table, which has some positive aspects, but it is not useful for DNS, except for root servers.

    Read the article

  • How can I create a VLAN on my Extreme switch for a separate subnet/domain?

    - by drpcken
    I'm putting together a small active directory implementation for a buddy of mine. I currently have 2 servers (one is the primary domain controller) and a couple clients. I need to test and run updates on every machine on this domain, but I would have plug them into my current LIVE domain to get it internet access. From what I've read having two separate domains on a single subnet is a bad idea (even though it is temporary) so I don't want to risk messing anything up on my production domain. I'm pretty sure I can create a separate VLAN on my extreme 48 port switch and plug this smaller domain into it on a different subnet, but I don't know the commands. Both subnets would need internet access of course (one of the things I can't wrap my head around is routing internet traffic between subnets (gateway is on production subnet). My production domain is on subnet 192.168.200.0. My new domain I want to put online would go into subnet 192.168.10.0. A shove in the right direction would be greatly appreciated. Thank you!

    Read the article

  • SBS DC DNS entries going missing?

    - by Chris W
    I've been looking at a problem on a friend's SBS (2003) server where the client PCs aren't able to connect to the server, with a variety of errors reported. Checking the server itself, the only indicator of an issue is an error 5782:

        Dynamic registration or deregistration of one or more DNS records failed with the following error: No DNS servers configured for the local system.

    Running dcdiag reports that there are no DNS records registered for the DC, so I fixed the problem by doing a netdiag /fix, after which dcdiag comes back clean and the clients are OK again. It happened a few weeks ago as well and the same fix solved it. What are the possible causes of the DC's DNS entries going missing? Is this a config option that needs tweaking, or could it be solved by something simple like scheduling the SBS server to reboot periodically? The only change they can think of that was made near the time the problem first occurred is that RRAS was started up to allow a VPN connection from a home user. NB: the server is set up with a pair of NICs in a team, so the server has a single virtual NIC providing both LAN/WAN connections to it. An external hardware firewall is in use rather than the Windows firewall.
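    In case it's relevant, my understanding is that the manual equivalent of that fix (rather than the blanket netdiag /fix) is to re-register the records and re-check, roughly:

        ipconfig /registerdns
        net stop netlogon && net start netlogon
        dcdiag /test:dns /v

    though that still wouldn't explain why the records disappear in the first place.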

    Read the article

  • Windows service fails to start with local user until password is entered again in logon tab

    - by Nick
    Basically, we have a service that uses a local account as its logon. It has all the proper permissions, and everything is working fine; the service starts and runs and all is good. Then one day, after rebooting, the service fails to start. The logs show an incorrect password. Our technicians resolve the issue by simply retyping the password into the "Log On" tab in services.msc. Unfortunately we have not been able to find the root cause. I suspect that the password stored for the service is somehow being lost. Does anyone know where the password hash might be stored so we can check it? The only activities that seem possibly related are Microsoft security patches, but we have multiple servers running the same service, we have never seen more than one affected at a time, and it's usually a different one each time this occurs. I believe this to be the same issue as this: Windows service fails to start with custom user until started once with local user. But I was unable to add comments, and it's really old.
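    In case it's useful detail, the scripted equivalent of what our technicians do in the GUI should just be re-stamping the stored credential with sc (the service name below is a placeholder, and the space after password= is required):

        sc config MyServiceName password= "TheLocalAccountPassword"
        sc start MyServiceName

    My working theory is that whatever is stored for that service gets invalidated or lost, which is why I'd like to know where it actually lives.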

    Read the article

  • Why doesn't NFS recognize a new UID?

    - by user76177
    I have two servers running RHEL 6. I have root access to both. The main server, which I will refer to as server, is a database server. The application server, which I will refer to as client, mounts a directory from server via NFS. There is a user, appuser, on both client and server. However, appuser's UID on client is 502, and appuser's UID on server is 506. Both users need read and write capability on the NFS share. To facilitate this, I made the share owned by appuser on server. Running id appuser on each yields: uid=506(appuser). Of course, client does not recognize that ownership, since appuser has a different id on client. So I did the following:

        - Changed the UID of the user in /etc/passwd on client to be 506.
        - Changed ownership of appuser's $HOME on client back to appuser so that I could log in.

    Now, when I go to look at the NFS share from the client side, I see that it is owned by 502. 502 is the OLD id for appuser on client. I can't change ownership of the NFS share from client, since that is a volume that physically resides on server. I need to make sure that the NFS share shows ownership by appuser from both server and client. What step have I missed since changing the appuser id on client? NOTE: I have not rebooted client (or anything else).
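    For completeness, here is what I'm planning to check next on the client; the mount point is a placeholder:

        id appuser                    # should now report uid=506(appuser)
        ls -ln /mnt/nfs_share         # -n prints raw numeric UIDs, bypassing any name mapping
        service rpcidmapd restart     # NFSv4 only: flush rpc.idmapd's cached id mappings

    but I'd like to understand which step I actually missed rather than just poke at it.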

    Read the article

  • Xen + Debian network after upgrading from squeeze to wheezy

    - by rush
    I've got a Debian + Xen server. After a system upgrade to the stable version the network doesn't come up after boot. Every time after reboot I need to bring it up manually. The network configuration was not changed during upgrade. Here is /etc/network/interfaces: auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 11.22.33.44 netmask 255.255.255.0 gateway 11.22.33.1 nameserver 8.8.8.8 After boot ip r shows no route and eth0 has no ip address. Manually ip and route setup goes fine and network starts working. Messages from dmesg about network I've found (looks like nothing interesting) [ 3.894401] ACPI: Fan [FAN3] (off) [ 3.894444] ACPI: Fan [FAN4] (off) [ 4.178348] e1000e 0000:00:19.0: eth0: (PCI Express:2.5GT/s:Width x1) 00:1e:67:14:66:c9 [ 4.178351] e1000e 0000:00:19.0: eth0: Intel(R) PRO/1000 Network Connection [ 4.178392] e1000e 0000:00:19.0: eth0: MAC: 10, PHY: 11, PBA No: 0100FF-0FF [ 4.178413] e1000e 0000:02:00.0: Disabling ASPM L0s L1 [ 4.178432] xen: registering gsi 16 triggering 0 polarity 1 -- [ 4.223667] ata5: DUMMY [ 4.223668] ata6: DUMMY [ 4.289153] e1000e 0000:02:00.0: eth1: (PCI Express:2.5GT/s:Width x1) 00:1e:67:14:66:c8 [ 4.289155] e1000e 0000:02:00.0: eth1: Intel(R) PRO/1000 Network Connection [ 4.289245] e1000e 0000:02:00.0: eth1: MAC: 3, PHY: 8, PBA No: 1000FF-0FF [ 4.506908] usb 1-1: new high-speed USB device number 2 using ehci_hcd [ 4.542920] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300) -- [ 10.362999] EXT4-fs (dm-23): mounted filesystem with ordered data mode. Opts: (null) [ 10.419103] EXT4-fs (dm-3): mounted filesystem with ordered data mode. Opts: (null) [ 10.988255] ADDRCONF(NETDEV_UP): eth1: link is not ready [ 13.175533] Event-channel device installed. [ 13.287555] XENBUS: Unable to read cpu state -- [ 13.288670] XENBUS: Unable to read cpu state [ 13.965939] Bridge firewalling registered [ 14.134048] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx [ 14.283862] ADDRCONF(NETDEV_UP): peth0: link is not ready [ 14.284543] ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready [ 17.800627] e1000e: peth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx [ 17.801377] ADDRCONF(NETDEV_CHANGE): peth0: link becomes ready [ 18.307278] device peth0 entered promiscuous mode [ 24.538899] eth1: no IPv6 routers present [ 28.570902] peth0: no IPv6 routers present I've upgraded two servers and I've such behaviour on two of them. How to fix this and get network starts automatically on boot?

    Read the article
