Search Results

Search found 9816 results on 393 pages for 'blade servers'.

Page 87/393

  • How do I configure IIS to allow access to network resources for PHP scripts?

    - by Dereleased
    I am currently working on a PHP front-end that joins together a series of applications running on separate servers; many of these applications generate files that I need access to, but these files (for various reasons) reside on their parent servers. If I run a bit of script from the command line, such as:

        <?php var_dump(glob("\\\\machine-name\\some\\share\\*"));

    I get the full contents of that directory, proving that PHP has no problem programmatically reading the contents of a UNC share. However, if I execute the same script from the web server, I get an empty array; more specifically, if I use functions designed to explicitly "open" a directory as though it were a file, I get access errors. I believe this to be a permissions issue, but I am not a server/network administrator type, so I'm not sure what I need to do to correct it and get my script running, and the links I've checked have not been much help, perhaps due to my lack of background as far as IIS is concerned, coupled with the fact that we are not actually using .NET for this.

    Relevant stats: Windows Server 2008 Standard SP2, IIS 7.0, PHP 5.2.9. I will be connecting to two types of servers: a few other nearly identical Server 2008 machines, and a machine running embedded XP.

    Links that have not been particularly helpful (but maybe I am just misreading them):
    http://support.microsoft.com/?id=306158
    http://support.microsoft.com/kb/207671/EN-US/
    http://support.microsoft.com/kb/280383/
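    The console-vs-IIS difference usually comes down to the worker process identity: from the console the script runs as the logged-in user, while under IIS it runs as the application pool identity (e.g. NETWORK SERVICE), which the remote machines will not honour for UNC access. A hedged sketch of the common fix, running the pool as a domain (or password-matched local) account that has read rights on the shares; the pool name and credentials are placeholders:

        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" ^
            /processModel.identityType:SpecificUser ^
            /processModel.userName:DOMAIN\php_svc /processModel.password:secret

    After recycling the pool, re-run the glob() test from a served page; if it still returns an empty array, the share and NTFS permissions on the parent servers are the next thing to check.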

    Read the article

  • Hostname vs webpage domain.

    - by Mark
    Hi all, I'm just starting to look at deploying a web page and getting into the joys of DNS etc., and I'm wondering how you set up multiple web servers, each with their own hostname/public IP address, yet have them serve up a web page from one domain.

    For example, let's say you have a website example.com, and an A record in DNS that points at its IP address of 1.2.3.4. You want to have two servers, prod1 and prod2, with some kind of load balancer in front of them for failover reasons. The way I see it, you would want the hostnames of these servers to be prod1.example.com and prod2.example.com, and perhaps loadb.example.com.

    How would you set up the DNS so this would all work? i.e. so you could ssh to any of the server domains (prod1.example.com, prod2.example.com or loadb.example.com) and also just use the www.example.com URL to go to the website. And would all these server names be resolvable from the public internet, and is that safe?

    This would be a Linux environment, for argument's sake Ubuntu, with a Django-framework dynamic website running in Apache 2.2. Cheers, Mark
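    A minimal zone-file sketch of one common layout, in BIND syntax with placeholder IPs: each box gets its own A record (so ssh prod1.example.com works), while the apex and www point at the load balancer, which is what actually answers web traffic:

        example.com.        IN A  1.2.3.4    ; load balancer
        www.example.com.    IN A  1.2.3.4
        loadb.example.com.  IN A  1.2.3.4
        prod1.example.com.  IN A  1.2.3.5
        prod2.example.com.  IN A  1.2.3.6

    Publishing prod1/prod2 in public DNS is common and not a security hole in itself (the hosts are reachable by IP regardless), but many shops still keep per-host records in an internal-only view and expose just www and the balancer.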

    Read the article

  • connecting to a network using route command

    - by ami
    I have a computer with an external IP (192.168.223.220) and also an internal address (10.1.1.20), used to connect to some servers that don't have external addresses, only 10.1.1.xx addresses. To reach these servers from other machines I used the following command:

        route ADD 10.1.1.0 MASK 255.255.255.0 192.168.223.220

    and then I was able to connect to the servers using their 10.1.1.xx addresses. The problem is that the hard disk of the main server (192.168.223.220) died and was replaced, and since then I am not able to connect to the servers as before: the route command succeeds, and I can ping 10.1.1.20 but not the other servers. Thanks. I am using Windows XP, and the printouts are:

        D:\AurosHome\Scripts> ipconfig /all

        Windows IP Configuration
            Host Name . . . . . . . . . . . . : N100-master
            Primary Dns Suffix  . . . . . . . :
            Node Type . . . . . . . . . . . . : Unknown
            IP Routing Enabled. . . . . . . . : No
            WINS Proxy Enabled. . . . . . . . : No

        Ethernet adapter Local Area Connection 3:
            Connection-specific DNS Suffix  . :
            Description . . . . . . . . . . . : Intel(R) PRO/1000 EB Network Connection with I/O Acceleration #2
            Physical Address. . . . . . . . . : 00-30-48-34-BA-B9
            Dhcp Enabled. . . . . . . . . . . : No
            IP Address. . . . . . . . . . . . : 192.168.225.180
            Subnet Mask . . . . . . . . . . . : 255.255.255.0
            Default Gateway . . . . . . . . . : 192.168.225.254
            DNS Servers . . . . . . . . . . . : 192.168.225.2

        Ethernet adapter Local Area Connection:
            Connection-specific DNS Suffix  . :
            Description . . . . . . . . . . . : Intel(R) PRO/1000 EB Network Connection with I/O Acceleration
            Physical Address. . . . . . . . . : 00-30-48-34-BA-B8
            Dhcp Enabled. . . . . . . . . . . : No
            IP Address. . . . . . . . . . . . : 10.1.1.20
            Subnet Mask . . . . . . . . . . . : 255.255.255.0
            Default Gateway . . . . . . . . . :

        Ethernet adapter Local Area Connection 2:
            Media State . . . . . . . . . . . : Media disconnected
            Description . . . . . . . . . . . : Mellanox IPoIB Adapter
            Physical Address. . . . . . . . . : 00-02-C9-25-34-0D

        D:\AurosHome\Scripts> route print

        Interface List
        0x1 ........................... MS TCP Loopback interface
        0x2 ...00 30 48 34 ba b9 ...... Intel(R) PRO/1000 EB Network Connection with I/O Acceleration #2 - Packet Scheduler Miniport
        0x3 ...00 30 48 34 ba b8 ...... Intel(R) PRO/1000 EB Network Connection with I/O Acceleration - Packet Scheduler Miniport
        0x10005 ...00 02 c9 25 34 0d ...... Mellanox IPoIB Adapter - Packet Scheduler Miniport
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway         Interface  Metric
                  0.0.0.0          0.0.0.0  192.168.225.254  192.168.225.180      10
                 10.1.1.0    255.255.255.0        10.1.1.20        10.1.1.20      10
                10.1.1.20  255.255.255.255        127.0.0.1        127.0.0.1      10
           10.255.255.255  255.255.255.255        10.1.1.20        10.1.1.20      10
                127.0.0.0        255.0.0.0        127.0.0.1        127.0.0.1       1
            192.168.225.0    255.255.255.0  192.168.225.180  192.168.225.180      10
          192.168.225.180  255.255.255.255        127.0.0.1        127.0.0.1      10
          192.168.225.255  255.255.255.255  192.168.225.180  192.168.225.180      10
                224.0.0.0        240.0.0.0        10.1.1.20        10.1.1.20      10
                224.0.0.0        240.0.0.0  192.168.225.180  192.168.225.180      10
          255.255.255.255  255.255.255.255        10.1.1.20        10.1.1.20       1
          255.255.255.255  255.255.255.255        10.1.1.20            10005       1
          255.255.255.255  255.255.255.255  192.168.225.180  192.168.225.180       1
        Default Gateway:  192.168.225.254
        Persistent Routes:
          None
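    Two things stand out in the printouts, offered as hedged suggestions rather than a diagnosis. First, the external adapter now holds 192.168.225.180, not the 192.168.223.220 the old route pointed at, and the gateway of a static route must be an on-link address, so the route on the other machines would need re-adding against the current address (the -p flag makes it survive reboots):

        route -p ADD 10.1.1.0 MASK 255.255.255.0 192.168.225.180
        route print

    Second, "IP Routing Enabled" shows "No": for other machines to reach the 10.1.1.x hosts through this server it has to forward between its two adapters, which on XP-era Windows means setting IPEnableRouter=1 under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters and rebooting; a rebuilt system disk would have lost that setting.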

    Read the article

  • Replicated filesystem and EC2 MySQL

    - by El Yobo
    I'm currently investigating migrating our infrastructure over to run on Amazon's EC2, and am trying to figure out the best way to set up a MySQL service. I'm leaning towards running our own MySQL instances rather than going with Amazon's RDS, but am still considering the best approach for performance and cost on the instance itself. In order to have persistent data, the MySQL data needs to be on an EBS volume (striped in some form of RAID, e.g. RAID0 or RAID10, to improve performance). However, EBS I/O is limited by the network interface (gigabit, so a theoretical maximum of 128 MB/s), while the ephemeral volumes have no such problem. I did see a suggestion for running two MySQL servers on an instance, with a master running on the ephemeral disk (which we would also RAID) and a slave storing changes to an EBS volume, but this has some additional overhead and complexity (two servers).

    What I was imagining is using some form of replicated file system, such that I could have:
    - a filesystem on top of a RAID0 of ephemeral volumes, to maximise performance;
    - all changes from the above immediately replicated to another RAID1 volume backed by multiple EBS volumes, to ensure no data loss.

    The advantages of this would be:
    - best possible I/O performance for the DB server, with no network delay in I/O;
    - decreased I/O on the EBS volumes (as all read I/O would be done on the ephemeral volumes), so decreased cost;
    - good data security, as it's backed onto redundant EBS volumes.

    However, I haven't seen an appropriate system to replicate all changes from one volume to the other; is there a filesystem, or any other approach, which will do this? The distributed file systems (e.g. GlusterFS, DRBD etc.) seem to focus on replicating disks between servers; can they be set up to do what I'm interested in here? I also haven't seen anything about others taking this approach. Do I have a solution in need of a problem here (i.e. is performance good enough that this whole idea is redundant)? Is there some flaw in the plan?
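    One kernel-level option that matches this pattern is an md RAID1 whose members are the ephemeral stripe on one side and an EBS-backed device on the other, with the EBS member flagged write-mostly (reads never touch it) and write-behind (slow EBS writes are allowed to lag without stalling the array). A sketch under assumed device names, with /dev/md0 as the ephemeral RAID0 and /dev/xvdf as the EBS volume or EBS RAID:

        # mirror: fast ephemeral stripe + an EBS copy that only takes writes
        mdadm --create /dev/md1 --level=1 --raid-devices=2 \
              --bitmap=internal --write-behind=256 \
              /dev/md0 --write-mostly /dev/xvdf

    Two caveats, possibly the flaw being asked about: after an instance stop/start the ephemeral side comes back empty, so the array must be rebuilt from the EBS member before MySQL starts; and write-behind means a crash can lose the writes still in flight, which is weaker than MySQL's own durability guarantees.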

    Read the article

  • Server 2012 - transparent SMB failover without shared disks, possible?

    - by TomTom
    Here is the scenario: there is a small set of data (around 200 GB) that I HAVE to keep available. These are basically shared VHD images that serve as master images for a lot of our VMs, which then run in differencing disks off them. The whole set is "mostly read only". In more detail: a file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but once a file is there it gets write protection set, and keeps it until the file is retired. Obviously, I need as much uptime as possible.

    So far we run this by keeping the directory local on every Hyper-V server; now I am thinking of moving it into our storage fabric. Due to the "it HAS to be there" requirement, I pretty much want a share-nothing architecture. DFS would be perfect for this: a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and all hosts would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers; we are trying to move into a scenario where the storage is more centralized.

    Server 2012 supports continuously available ("always on") shares, but it seems that this only works with a clustered disk behind them. Is there any way around this for read-only file stores? All the documentation points to things like a shared JBOD, but that would leave me open to file system corruption. I really plan to keep things quite separate here, vertically: 2 servers, both with SSD only for this, each with its own separate 2000 W UPS, and both with enough bandwidth to handle everything thrown at them (note to everyone thinking "this is 10G": that would be SLOW and EXPENSIVE compared to a nice InfiniBand backbone). The real crux is that this is obviously an edge case, as the files are read-only once in use.
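    If the DFS route is taken, the namespace half can be scripted on Server 2012; a hedged sketch, assuming the DFSN PowerShell module is available and using placeholder server/share names (replication between the two targets is configured separately, e.g. through DFS Management):

        New-DfsnRoot       -Path \\corp.local\vhd -TargetPath \\srv1\masters -Type DomainV2
        New-DfsnRootTarget -Path \\corp.local\vhd -TargetPath \\srv2\masters

    One caveat worth testing: DFS gives failover at referral time, not transparent failover of an open file handle, which is normally a deal-breaker for running VMs. Here it may be acceptable precisely because the VHDs are immutable masters, so a reconnect re-opens identical content.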

    Read the article

  • Developing a high-performance and scalable Zend Framework website

    - by Daniel
    We are going to develop an ads website like http://www.gumtree.com/ (it will not be exactly like it, but just to give you an idea) and we are having some issues regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator, though I understand this will create problems with scalability on multiple servers) + jQuery, with a CDN for static content, a server for images, and a scalable server for the requests and the rest.

    My questions are:
    - What do you think about my approach?
    - What solutions would you recommend on the server side (MySQL, NGINX, MongoDB or pgsql) for a scalable application expected to have a lot of traffic using PHP? I would be interested in your approach.

    Note: I'm a Zend Framework developer and don't have too much experience with the server side (to determine what the best solution for my scalable application would be).
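    Whatever the final stack, one choice that keeps the caching layer from blocking multi-server scaling is to code against ZF1's cache abstraction with a shared memcached backend, rather than a per-box data cache like eAccelerator. A minimal sketch; hosts, lifetimes and the mapper call are placeholders:

        <?php
        // Core frontend over the Memcached backend: every web node sees the same cache
        $cache = Zend_Cache::factory(
            'Core',
            'Memcached',
            array('lifetime' => 300, 'automatic_serialization' => true),
            array('servers' => array(
                array('host' => 'cache1.internal', 'port' => 11211),
            ))
        );

        if (($ads = $cache->load('latest_ads')) === false) {
            $ads = $adsMapper->fetchLatest();   // hypothetical expensive query
            $cache->save($ads, 'latest_ads');
        }

    eAccelerator (or APC) still earns its keep as an opcode cache on each web node; it is only as a shared data cache that it breaks down across servers.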

    Read the article

  • VPN with VLANs? [closed]

    - by Craig
    As usual, I'm sure I'm in way over my head on this one. My networking skills are limited, so bear with me if you will. What I have are a few testing servers at my house, as well as at a friend's house, that I want to link together so they can see each other (VPN, right? I've done those before). We want to be able to see all the servers and work with them from either location, and all the servers also need to be able to see each other. But we don't want to see each other's PCs, printers, PS3s etc. How do we pull that trick off? Multiple VLANs?... subnets?... what?

    If hardware matters, I have an old PC I was planning on loading pfSense onto, because my current el-cheapo router doesn't support VPN. The VPN linking the houses is about the only thing I'm sure on; beyond that, I'm lost. I'm not a complete noob, but like I said, I'm not so sharp with the more complex networking. I do however read well... so use lots of descriptive words, and feel free to link away to long dry articles if necessary. :-)
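    A common way to get the "servers only" behaviour without VLANs is plain subnetting: give the test servers at each house their own subnet, and have the site-to-site VPN route only those subnets. A sketch assuming pfSense/OpenVPN and made-up ranges (house A LAN 192.168.1.0/24 with servers on 10.10.1.0/24, house B LAN 192.168.2.0/24 with servers on 10.10.2.0/24):

        # OpenVPN site-to-site: advertise only the server subnets across the tunnel
        route 10.10.2.0 255.255.255.0    # in house A's config
        route 10.10.1.0 255.255.255.0    # in house B's config

    Firewall rules on the VPN interface can then restrict traffic to those two subnets, so PCs, printers and PS3s on the 192.168.x networks are never reachable across the tunnel. VLANs only become necessary if the servers cannot be physically separated onto their own interface or switch ports.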

    Read the article

  • query keepalived

    - by tdimmig
    *Note: I have trouble deciding what should go in Server Fault and what should go in Super User; if some kindly admin decides this is in the wrong place, please move it for me. Many thanks.

    I am implementing a basic HA system with keepalived. I only want to be notified of a failover in the case of hardware failure; I do, however, have the servers switch roles periodically. I have a track_script running on the backup that varies its return between 0 and 1 on an interval (once a week, once a month, whatever). Upon returning 0, the priority is raised above that of the master; upon returning 1, the priority is lowered again. This way they trade places on the configured interval.

    The question: what can I do to tell the difference between a switch caused by my script and a switch caused by one of the servers dying? I certainly want to be notified when there is an actual problem, but not every time the servers change places because of the script. I see that version 1.2.7 has SNMP support, and I may be able to use it to get some information that could tell me one way or another, but to be honest I've never used SNMP before and I don't know how to get the information I want with it (my Google-fu failed me).
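    One way to tell the two cases apart without SNMP is keepalived's notify hooks: the rotation script drops a flag file just before it flips the priorities, and the notify script only alerts when no flag is present. A sketch; paths and the alert command are placeholders:

        # keepalived.conf, on both nodes
        vrrp_instance VI_1 {
            ...
            notify_master "/usr/local/bin/vrrp_notify.sh"
        }

        # /usr/local/bin/vrrp_notify.sh
        #!/bin/sh
        if [ -f /var/run/planned_switch ]; then
            rm -f /var/run/planned_switch      # expected hand-over: stay quiet
        else
            mail -s "keepalived: unplanned failover to $(hostname)" ops@example.com < /dev/null
        fi

    The rotation track_script touches /var/run/planned_switch right before returning the value that triggers the swap, so only a failover keepalived initiated on its own produces mail.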

    Read the article

  • apache/httpd responds slower under EL6.1 than EL5.6 (centos)

    - by daniel
    I've read through other threads on performance differences between RHEL6 and RHEL5, but none seem a tight match to mine. My issue manifests itself as a slightly slower average response time (20 ms) per request. I have about 10 servers each, of the same hardware spec, on CentOS 6.1 and CentOS 5.6, and the issue is consistent across the group. I am running Ruby on Rails with Passenger under mod_worker. The Apache config is identical (checked out from the same SVN repo), Ruby and Passenger are identical builds, and the application is identical and served traffic round-robin.

    An interesting clue from server-status: the CentOS 6.1 servers have a steady 20-40 threads in the "Reading Request" state, while the CentOS 5.6 servers have around 1. I'm graphing this so I can see it trend over time. I also have a bunch of much newer machines that are significantly faster and are running CentOS 6.1; they dust all the older machines in response time, but I can see they also have a steady 20-40 threads in the "Reading Request" state. This makes me believe I can get their response time down too, if I can figure out what is holding up these requests. My gut is telling me that I need to tune some network setting in sysctl, but I haven't figured it out yet. Help is appreciated.
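    A quick way to chase the sysctl hunch is to diff the effective network settings between a 5.6 and a 6.1 box, since several TCP defaults changed between those kernels; the setting list below is illustrative, not exhaustive:

        for h in cent5-box cent6-box; do
            ssh $h 'sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem \
                    net.core.somaxconn net.ipv4.tcp_window_scaling' > /tmp/$h.sysctl
        done
        diff /tmp/cent5-box.sysctl /tmp/cent6-box.sysctl

    It is also worth capturing one slow request with tcpdump on each OS version: threads parked in "Reading Request" generally mean the server accepted the connection but is still waiting on request bytes from the client or load balancer, which points at the network path rather than at Passenger.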

    Read the article

  • Email Proxy Ideas

    - by jtnire
    Hi everyone, I wish to host some managed email servers for some customers. Each customer will have their own email server, which will be an all-in-one virtual machine running Postfix, Dovecot and some webmail suite. Even though each customer will have their own server, I do not wish to give each email server its own public-facing IP; I wish to make use of proxy servers so all customers share the same public IP.

    As for "smtp-in" from the public internet, this isn't a problem, as I can set up several MX servers (using Postfix) which will store-and-forward the mail to the correct server (using transport maps). As for IMAP access from the customer, I was thinking of using Perdition, which is an IMAP proxy; I believe that this will suit my needs.

    I am confused, however, about what to use for the "smtp-out" proxy. The customers will have to authenticate with their respective email servers, but they will have to go via a proxy of some sort, as they won't have direct access to their server instance. It probably can't be a store-and-forward proxy either. Does anyone have any idea what I could use here? Many thanks.
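    One candidate for both the IMAP side (as an alternative to Perdition) and the smtp-out side is nginx's mail proxy module: it authenticates the user against an HTTP service you write, and that service's reply tells nginx which internal server to hand the connection to, so submission is proxied per-user rather than store-and-forwarded. A sketch; ports, addresses and the auth service are placeholders:

        mail {
            auth_http 127.0.0.1:9000/auth;   # your lookup service: user -> backend VM
            server {
                listen 587;
                protocol smtp;
                smtp_auth login plain;
            }
            server {
                listen 143;
                protocol imap;
            }
        }

    The auth_http service receives the username and replies with Auth-Server/Auth-Port headers naming the customer's VM, which is exactly the per-customer routing described above.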

    Read the article

  • Having an issue trying to get Gigabit speed across my network (Ubuntu Server)

    - by user94217
    I've just started looking into the network speeds at my office. The entire network is set up to be "gigabit": this includes Gb switches, Gb network cards and Cat 5e cabling. I'm not expecting the full speed, I just want more than ~90 Mb/s. I've been running some tests with iperf and checking the hardware with ethtool.

    I have 3 servers, and when doing my checks/tests I discovered that the two backup servers can talk to each other at around 450 Mb/s, but when using either one of them to test against the main server, I only get 90 Mb/s, even though ethtool shows the network card running at 1000/Full. The only difference between the servers' network cards is the "Port" which ethtool shows: on the two backup servers the "Port" is shown as MII, yet on the main server it's shown as "Twisted Pair". When using ethtool -s to manually set the "Port" to MII on the main server, it loses all connectivity and no longer shows "Speed" or "Duplex".

    Anyway, am I doing something wrong? Is there a specific reason my main server cannot use gigabit when there appears to be no difference except the "Port"?
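    A hedged checklist for pinning this down, since "Port: MII" vs "Port: Twisted Pair" usually just reflects what the driver reports rather than a real wiring difference; the interface name is a placeholder:

        ethtool eth0                     # check Speed, Duplex, Auto-negotiation, Link detected
        ethtool -S eth0 | grep -i err    # NIC error counters: CRC/alignment errors suggest cabling
        ethtool -s eth0 autoneg on speed 1000 duplex full

    A link that negotiates 1000/Full but only moves ~90 Mb/s (suspiciously close to 100 Mb/s line rate) is classically a cable with a broken pair or a switch port stuck at 100 Mb/s, so swapping the patch lead and the switch port on the main server is worth doing before any driver work.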

    Read the article

  • Possible Hack with FTP - What are the solutions?

    - by iamrohitbanga
    I was reading the FTP RFC and had this idea. Suppose there are several public FTP servers that allow anonymous login, and I open a control connection on port 21 to each of them. Now suppose there is a web server a.com with IP address x.y.z.w listening on port 80. FTP allows a user to specify the host on which the data connection is to be set up, so I specify the host and port number of the a.com web server. Now the FTP server starts sending data to a.com; it is not a valid HTTP request, so it is rejected, but a.com notes that the invalid HTTP request came from a public FTP server and not from my IP address.

    Could this not lead to a distributed attack that utilizes all public FTP servers? Worse still, the data being sent by the FTP server could be a valid HTTP request, which would trigger a.com to send a file back to the FTP server. Is there a solution for this, or is it no problem at all?
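    For context, this is the classic FTP bounce attack, and the PORT command is exactly the hole being described: its argument names an arbitrary host, not the client. A sketch of the control-channel exchange, with placeholder addresses:

        USER anonymous
        PASS guest@
        PORT 1,2,3,4,0,80
        RETR crafted-file

    PORT's argument is h1,h2,h3,h4,p1,p2, so 1,2,3,4,0,80 asks the server to deliver the next transfer to 1.2.3.4 on port 0*256+80 = 80; a crafted file can therefore be replayed at the victim as a near-valid HTTP request. The standard mitigation, enabled by default in most modern FTP daemons, is to refuse PORT targets that differ from the control connection's source address (and to tighten PASV similarly), per the recommendations in RFC 2577.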

    Read the article

  • IPTables Reroute SSH based on Connection string?

    - by senrabdet
    We are using a cloud server (Debian Squeeze) where public ports on a public IP route traffic to internal servers. We are looking for a way, using iptables and ssh, to have iptables reroute an ssh connection to the "right" internal server based on some part of the ssh connection string (or something along those lines). This would allow us to use one common public port and then reroute ssh connections to individual servers. For example, we hope to do something like the following:

    - The user issues an ssh connection (public key encryption), such as ssh -X -v -p xxx [email protected], but maybe adds something to the string for iptables to use.
    - iptables uses some part of that string, or some other means, to reroute the connection to an internal server, using something like:

        iptables -t nat -A PREROUTING ! -s xxx.xxx.xxx.0/24 -m tcp -p tcp --dport $EXTPORT -j DNAT --to-destination $HOST:$INTPORT

    ...where $HOST is the internal IP of a server, $EXTPORT is the common public-facing port, and $INTPORT is the internal server port. It appears that the "string" aspect of iptables does not do what we want. We can currently route based on the iptables syntax we're using, but we rely on having a separate public port for each server; we are hoping to use one common public port and then reroute to specific internal servers based on some part of the ssh connection string or some other means. Any suggestions? Thanks!
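    Since the SSH stream is encrypted before iptables ever sees it, string matching cannot work; the usual way to multiplex many internal hosts behind one public port is to let sshd itself do the routing. A sketch using OpenSSH's -W forwarding, with placeholder names; clients add this to ~/.ssh/config and then simply run ssh internal1:

        Host gateway
            HostName public.example.com
            Port 2222

        Host internal1 internal2
            ProxyCommand ssh -W %h:22 gateway

    The gateway resolves internal1/internal2 to the private addresses, each user authenticates twice (once to the gateway, once to the internal box), keys can be restricted per host on the gateway, and no per-server public ports are needed, which is the property the PREROUTING rules above were trying to achieve.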

    Read the article

  • VMware ESX Linux Guest Customization

    - by andyh_ky
    Hello, I am interested in deploying several RHEL 4 Update 8 virtual machines to create a test environment. Here are the steps I am taking:

    - In off hours, P2V/V2V the production machines and convert them to templates.
    - Deploy the virtual machines with a customization specification that changes the hostname and IP address.

    I am interested in how these processes work and whether there are any options for further customization. Are the machines brought onto the network when they are powered on, before they are reconfigured? Is there a potential IP address conflict? Is there an option to run additional scripts, residing on the guest, as part of the reconfiguration, for example restoring an Oracle database? This is an option with Windows guests and sysprep, but I have been unable to locate anything showing a RHEL equivalent.

    I am dealing with a multi-tier application. The main issue I am attempting to mitigate is that the application servers reference the database servers by hostname and in tnsnames files, so I would like to script the reconfiguration of the application during deployment so that the app/db servers point to the test environment. I am OK with placing the 'cleanup' script on the source and executing it after the machine has been brought up; I am interested in automating the script's execution post clone/boot, as well as whether there could be an IP address conflict. (Cross-posted to VMTN's ESX 4 community.)
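    On the "additional scripts" question: to my knowledge the Linux guest customization of that era only rewrites identity (hostname, IP, DNS), with no supported equivalent of sysprep's run-once hook, so the usual workaround is exactly the staged cleanup script mentioned above, wired in as a one-shot init script on the template before conversion. A sketch with hypothetical paths and hostnames:

        #!/bin/sh
        # /etc/rc3.d/S99firstboot on the template: repoint the app at the test DB, run once
        sed -i 's/proddb01/testdb01/g' /u01/app/oracle/network/admin/tnsnames.ora
        sed -i 's/proddb01/testdb01/g' /opt/app/conf/app.properties
        rm -f /etc/rc3.d/S99firstboot    # self-disable after the first boot

    On the conflict question, a cautious deployment pattern is to deploy the clone with its virtual NIC disconnected, let customization apply, verify the new address from the console, and only then reconnect the NIC; that removes any window in which the clone could answer on the production address.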

    Read the article

  • Why did I lose access to the mailboxes on my old web/mail host after changing to a new one but keeping the old MX values

    - by LaserBeak
    So I changed the NS records with the registrar to point at the new web host's DNS servers, and edited the SOA record there, deleting the new host's default MX records and instead putting in the ones for the old web/mail host. The website's A record, however, points at the new web host's servers, and the site comes up fine. But none of this should cause me to lose access to the mailboxes on the old host's mail server, right?

    I log into the control panel on the old host: all the mailboxes are there, all the passwords are fine, but I can't log in using either webmail or POP3; it says incorrect login/password. I even created a new mailbox and password for it, but it would not let me log in. For what it's worth, I did not change or delete the A records in the old web host's zone file, since I am not hosting the site with them anymore, and the NS records are pointing to the other host's DNS servers/zone file, so that shouldn't matter, right?

    The old host's mail server is also not simply down. I can tell because, through the control panel, I set up a mail forward for one of the existing inboxes, and when sending mail to it, it receives and forwards it fine. From this I deduce that I have correctly entered the old host's MX records into the zone file hosted on the new host's DNS, and that mail is being sent to the old host's mail server(s) and successfully forwarded by it. But why can't I log into those accounts/inboxes anymore?
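    A quick sanity check from outside, to confirm where mail and logins are actually being aimed; example.com stands in for the real domain:

        dig +short NS example.com      # should list the new host's name servers
        dig +short MX example.com      # should still name the old host's MX hosts
        dig +short mail.example.com    # whatever name webmail/POP3 clients resolve

    One classic failure mode matching these symptoms: the webmail and POP3 clients connect to a name like mail.example.com whose A record was not carried into the new zone (or now points at the new host), so logins land on a server that has no such mailboxes, while forwarding, which follows MX, keeps working.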

    Read the article

  • Connection reset to some websites

    - by user143271
    I'm using a 2Wire 3600HGV modem/router. Starting around this afternoon, any time I try to access anything from i.imgur.com I get "The connection to i.imgur.com was interrupted" in Chrome, and the actual error is Error 101 (net::ERR_CONNECTION_RESET). It's network-wide (tested in multiple browsers on multiple computers and phones). I can access imgur.com just fine, but nothing from its content server i.imgur.com. If I disable wifi on my phone and use its 4G connection, I can access it just fine, so obviously imgur isn't down. I haven't changed any configuration on the router, and I have tried changing DNS servers (I tried Google and OpenDNS).

    It also seems that imgur is not the only site; How-To Geek and a couple of others seem to have the same problem. It looks like they are all EdgeCast CDN content servers, but not all EdgeCast CDN servers fail; Tumblr, for instance, works just fine. Does anyone have any idea what could be causing this?

    Edit: Related to the EdgeCast remark, it would appear that this is a specific EdgeCast server: gs1.wpc.edgecastcdn.net. Tumblr's content is on gs1.wac.edgecastcdn.net, so it might be on a different server.

    Edit 2: These sites all respond to ping just fine as well.
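    Since small pings succeed but HTTP connections die, one low-effort test is to see whether full-size packets make it through, because resets confined to a few CDN endpoints behind a single router are sometimes an MTU/fragmentation problem rather than a block; this is a hedged guess, not a diagnosis. From a Windows machine on the affected network:

        ping -f -l 1472 i.imgur.com    (largest payload that fits a 1500-byte MTU unfragmented)
        ping -f -l 1272 i.imgur.com    (retry smaller if the first probe fails)

    If the large probe fails where the small one works, lowering the router's MTU is the next step; if both sizes pass, attention shifts back to the 2Wire's firewall/filter state for those specific destination ranges.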

    Read the article

  • Trying to migrate old server to new. Getting duplicate name errors

    - by SpaceCowboy74
    I have an existing server on my network that is running Windows 2000 with SQL Server 2000 on it. We are in the process of moving to a Windows 2008 platform, with SQL 2008 as well. A few things are changing, though; for one, applications that were on the old server will now be on a new application server. The issue is that the developers of the original applications hard-coded the server name in the apps and/or batch files. I could change all the code, but that would require weeks of work.

    My original idea was to change the hosts and lmhosts files to point to the new servers with a different IP, so I implemented the following, where oldserver is the original server and server is the new one brought online:

        # hosts
        192.168.1.10    oldserver
        192.168.1.15    server

        # lmhosts
        192.168.1.10    oldserver    #PRE
        192.168.1.15    server       #PRE

    The problem is, when I try this, I get the following errors:

        \\server\c$
        Logon Failure: The target account name is incorrect.

        \\oldserver\c$
        A duplicate name exists on the network.

    I know about renaming servers in AD, but can't do that yet, as the original server is in production and I cannot rename it without breaking a lot of things at the moment. I want to do a proof of concept for management before renaming the servers. Any idea how I should resolve this?
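    The "duplicate name exists" error comes from the Server service refusing SMB connections addressed to a name it does not own, so aliasing via hosts/lmhosts alone is not enough. The long-standing workaround for presenting a server under an extra name is to relax strict name checking and, in a domain, register the alias's service principal names; a sketch with placeholder names, to be run on the new server and followed by a restart of the Server service:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters" ^
            /v DisableStrictNameChecking /t REG_DWORD /d 1
        setspn -A host/oldserver server
        setspn -A host/oldserver.domain.local server

    The setspn step also addresses the "target account name is incorrect" error, which Kerberos raises when the SPN resolved for the alias does not match the machine actually answering.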

    Read the article

  • ASP.NET, IIS7 and IE8 caching?

    - by jdege
    We're suddenly having problems with some of our sites serving old versions of .css and .js files to the browser. Generally these problems go away when the user clears the browser cache. Is there something we can do, either in the code or in IIS7, to convince the browser not to use the cached files?

    In our weirdest case, we have one customer whose users hit our site and get an old version of a js file. They clear the cache, load the page, get the current version, and the page runs fine. Then they load the page again and suddenly have the old version again. Any ideas as to how that might be happening? I can think of three:

    1. The browser is somehow holding on to the old version when we clear the cache, and is putting it back in the cache before the second page load.
    2. One of our servers has an old version of the file, and while the first page load after a cache clear pulls it from one of the servers with the current version, second and subsequent page loads pull it from the server that has the old version.
    3. The first load after a cache clear goes straight to our servers, while subsequent loads pull the file from the cache on the customer's web proxy.

    I have to say, all three of those scenarios seem outlandishly unlikely, but it's a repeatable behavior. Any ideas?
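    The standard way to take browser and proxy caches out of the equation is to version the static URLs, so a changed file becomes a new URL and the old one can be cached forever; a stale copy on a mismatched server or customer proxy then shows up as a 404 rather than as silently wrong code. A minimal sketch; the helper name and version source are made up:

        <!-- markup: version token pulled from the assembly or a build stamp -->
        <script src="/js/app.js?v=<%= StaticVersion.Token %>"></script>

        <!-- web.config: a long max-age is now safe because URLs change on deploy -->
        <system.webServer>
          <staticContent>
            <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="30.00:00:00" />
          </staticContent>
        </system.webServer>

    This also turns scenario 2 into something detectable: if one web server is serving a stale file, requests for the new versioned URL against it fail outright instead of quietly succeeding with old content.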

    Read the article

  • Monitoring Between EC2 Regions

    - by ABrown
    I'm working on a small EC2 project that involves a handful of servers in two different regions (US East and EU West). My first task is to implement a Nagios monitoring solution. Monitoring within a region is simple: I just use the private domain names/IPs. But I'm a little unsure of the best way to handle monitoring the second region without setting up a second Nagios install. The environment is fairly static, so I'm not going to be scripting the configuration with the EC2 tools just yet. As I see it, I have two options:

    1. Two Nagios installations (which is overkill for the small number of servers I'm dealing with). Pros: I don't have to alter the group permissions, I don't have to pay for the traffic, and there is redundancy in the monitoring solution (I could monitor the Nagios servers). Cons: two installations to deal with, and I'd need to run another server instance.
    2. Have the single installation monitor both regions. Pros: one installation to deal with. Cons: slightly reduced security (the security group will have to have NRPE, port 5666, opened for one source IP) and paying for a small amount of bandwidth at the Internet rate for data transfer between the regions.

    I guess my question is: how have others handled this problem, and what are your recommendations? Thanks!
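    If the single-install route wins, the cross-region exposure can be kept narrow: NRPE on the EU hosts only has to accept the one Nagios address, both in the EC2 security group and in NRPE's own config. A sketch using the classic EC2 API tools, with a placeholder elastic IP:

        # EU security group: open 5666 to the monitoring host only
        ec2-authorize eu-servers -P tcp -p 5666 -s 203.0.113.10/32

        # /etc/nagios/nrpe.cfg on each EU instance
        allowed_hosts=127.0.0.1,203.0.113.10

    Running NRPE with SSL (available when both ends are built with --enable-ssl), or tunnelling the checks over SSH with check_by_ssh instead, removes most of the remaining concern about crossing the public internet between regions.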

    Read the article

  • Growing a small hosting company [closed]

    - by user2353007
    We currently have a few servers: 1 WHM VPS (2 GB), 1 MS SQL VPS (2 GB), and 1 IIS VPS (2 GB). The VPS servers are doing fine as far as uptime and response times, but we would like to add the following features:

    1. monitoring with load statistics
    2. failover

    For the monitoring solution I have looked at Zabbix, Zenoss, Nagios, and a couple of cloud offerings like monitor.us and Watchdog from Zerigo. Our current hosting company suggested we get a dedicated server or VPS and install load-balancing software (not sure I like that idea). I've looked into the Rackspace and Amazon load balancers, which seem like the most feasible load-balancing solutions.

    Does anybody have any input on the monitoring and load-balancing products I'm looking into? Monitoring should cover uptime as well as give reports on memory usage, disk usage, processor usage, and which processes/websites/users are responsible for the load. It would be ideal if the load balancer worked with any IP; I'm not sure whether the Rackspace or Amazon load balancers allow balancing across servers outside their data centers. Thank you.

    Read the article

  • Adding a 2008 server to a 2003 Domain with DNS devolution?

    - by mvdwege
    I'm running into a problem adding a 2008 server to our existing 2003 domain, and as I am not a Windows admin, I'm not getting the problem here. Some reading around on TechNet seems to indicate that DNS devolution is the issue. Here's the setup: DNS for the entire company is hosted on a Unix server running BIND, including the service records for the Windows domain. Our top level is company.local, and functional domains are in subdomains, such as mgt.company.local (our management servers). Our Windows servers live mostly in office.company.local, but some of them live in mgt.company.local and customers.company.local. The 2003 servers all successfully authenticate against company.local as the Windows domain; their position in the infrastructure is set via the primary DNS suffix, under the network settings and the computer name dialog.

    Trying to do the same with a brand new 2008 install throws an error, though: "Changing the Primary Domain DNS name of this computer to office.company.local failed [...] The specified server cannot perform the requested operation". I tried googling, but the closest I came was the TechNet article on DNS devolution, and I can't make heads nor tails of how to apply that to my case.

    Addendum 2012-10-23: The problem is not joining the domain; that works. The problem is that it joins with the wrong name, as .company.local instead of .office.company.local. So far everything works, but I'm rather afraid to run production like this, because sooner or later something is going to complain about the AD name not matching DNS.
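    For reference, the primary DNS suffix the dialog is failing to set is stored in the registry, and can be written directly (the same setting is also exposed through the "Primary DNS Suffix" group policy under Computer Configuration > Administrative Templates > Network > DNS Client); a hedged sketch, followed by a reboot:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" ^
            /v "NV Domain" /t REG_SZ /d office.company.local /f

    Treat this as a workaround to test with rather than a root-cause fix: if the 2008 box then registers itself correctly as a member of office.company.local, the devolution theory can be confirmed or discarded.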

    Read the article

  • Linux Server partitioning

    - by user1717735
    There's a lot of info about this out there, but also a lot of contradictory info; that's why I need some advice. So far, on the servers I had at home for test (or even "home production") purposes, I didn't really care about partitioning, and I configured everything in / plus a swap partition, over RAID 0. Nevertheless, this pattern can't apply to production servers. I found a good starting point here, but it also depends on what the servers will be used for.

    So, basically, I have a server on which there will be Apache, PHP and MySQL. It will have to handle file uploads (up to 2 GB) and has 2 x 2 TB hard drives. I plan to set:

        /       100 GB
        /var    1000 GB   (Apache files and MySQL files will be here)
        /tmp    800 GB    (holds the PHP temp files)
        /home   96 GB
        swap    4 GB

    All of this is of course over RAID 1. But actually, it's not a big deal if I lose data being uploaded, so would it be interesting to mount /tmp over RAID 0 while keeping the rest on RAID 1? Sounds complicated…
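    Mixing levels on one pair of drives is less complicated than it sounds: md arrays are built per partition, not per disk, so the same two drives can carry a RAID1 set for the durable mounts and a RAID0 set for /tmp. A sketch with a hypothetical partition layout:

        # sda1/sdb1 sized for / + /var + /home + swap; sda2/sdb2 for /tmp
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2

    The caveat: if either drive dies, md1 (and /tmp) is gone until the drive is replaced, so uploads in flight fail even though the system itself stays up on md0.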

    Read the article

  • Conceptually, how does load balancing on the EJB tier work in GlassFish/any EJB container

    - by Benju
    I am wondering conceptually how load balancing works at the EJB tier (not web session replication) with Java EE containers like GlassFish. From what I have gleaned, your remote interface is a proxy that delegates your call to one of the many servers you may have in an environment. If things fail, are calls supposed to be able to "finish" on another server? I want to understand the basic theory behind this load balancing: why is it better than a bunch of servers all running a plain web application behind a load balancer with session affinity?
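    Concretely, in GlassFish the cluster awareness is injected through the client's naming context: the InitialContext is handed a list of IIOP endpoints, the stub it returns carries the member list, and each invocation can be dispatched (or retried, for idempotent failures) against any live member. A sketch with placeholder hosts and a placeholder JNDI name:

        import java.util.Properties;
        import javax.naming.InitialContext;

        public class EjbClient {
            public static void main(String[] args) throws Exception {
                Properties p = new Properties();
                // GlassFish-specific: candidate ORB endpoints for balancing/failover
                p.setProperty("com.sun.appserv.iiop.endpoints", "node1:3700,node2:3700");
                InitialContext ctx = new InitialContext(p);
                Object svc = ctx.lookup("java:global/app/MyServiceBean");
            }
        }

    The contrast with sticky-session web balancing: stateless bean calls carry no affinity to lose when a node dies, and for stateful beans the container replicates conversation state so another member can pick it up, rather than pinning the whole session to one box.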

    Read the article

  • Cache the result of a MySQLdb database query in memory

    - by ensnare
    Our application fetches the correct database server from a pool of database servers, so each query is really two queries, which look like this:

    1. Fetch the correct DB server.
    2. Execute the query.

    We do this so we can take DB servers online and offline as necessary, as well as for load balancing. But the first query seems like it could be cached in memory, so that it only actually queries the database every 5 or 10 minutes or so. What's the best way to do this? Thanks.
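    A minimal sketch of caching step 1 in-process with a TTL; the pool-lookup callable is a placeholder for whatever the existing query is:

        import time

        _TTL = 300          # seconds between real pool lookups
        _cached = None      # (server_info, fetched_at)

        def current_db_server(fetch_from_pool):
            """Return the pooled server choice, hitting the pool at most once per TTL."""
            global _cached
            now = time.time()
            if _cached is None or now - _cached[1] > _TTL:
                _cached = (fetch_from_pool(), now)
            return _cached[0]

    The catch with any TTL is the window after a server is pulled from the pool: pairing the cache with invalidation on connection error (drop _cached and re-fetch immediately) keeps failover at query time instead of waiting out the interval.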

    Read the article

  • How to incorporate an xml fragment into tomcat's server.xml

    - by rmarimon
    I'm building a distribution of Tomcat for many servers, and on each of these servers the realm is going to be different. I would like to have a file /etc/tomcat/realm.xml containing the realm for that installation, and have the file /var/lib/tomcat/conf/server.xml import it directly. I've tried XInclude without luck, and I'm about to resort to sed to do the import when running /etc/init.d/tomcat. Is there a better way to do this?
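    For what it's worth, the sed route can be kept tidy by shipping server.xml as a template containing a marker comment and generating the real file at start-up; a sketch for the init script, with the marker name made up:

        # in /etc/init.d/tomcat, before launching the JVM
        sed -e '/<!-- @REALM@ -->/r /etc/tomcat/realm.xml' \
            /var/lib/tomcat/conf/server.xml.in > /var/lib/tomcat/conf/server.xml

    sed's r command emits the named file's contents after each matching line, so the realm XML lands wherever the marker comment sits in the template. (Plain XInclude fails here because Tomcat's Digester-based parser does not enable XInclude processing on server.xml.)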

    Read the article
