Search Results

Search found 9816 results on 393 pages for 'blade servers'.

  • What is needed for 'Previous Versions' to be visible on the client OS?

    - by Zoredache
    I have servers with Shadow Copies enabled, taking snapshots a couple of times a day. From the server, if you look at the local devices you can see the Previous Versions being populated reliably. But from remote clients, the ability for an end user to see the Previous Versions seems to be very hit-or-miss. For the sake of this question you can assume that all my clients are Windows 7 and the servers are Windows Server 2008 R2. Is there an exhaustive list of everything that is required for an end user to see Previous Versions? Are there any requirements for a certain level of share or filesystem permissions, other than read access? Does something need to be open on the firewall, other than what is already in place for normal Windows networking?

  • Can't use HTTPS with ServerXMLHTTP object

    - by Imraan
    I am supporting a Classic ASP application that connects to a payment gateway via HTTPS. Up until recently there were no issues. A few days ago this broke without the code, IIS config, or anything else local changing, and it's broken on at least 3 separate servers. The last run of Windows Updates was in late November, but bringing the servers' updates up to date has not resolved the problem. A code snippet is below.

        Dim oHttp
        Dim strResult
        Set oHttp = CreateObject("MSXML2.ServerXMLHTTP")
        oHttp.setOption 2, 13056
        oHttp.open "POST", SOAP_ENDPOINT, false
        oHttp.setRequestHeader "Content-Type", "application/soap+xml; charset=utf-8"
        oHttp.setRequestHeader "SOAPAction", SOAP_NS & "/" & SOAP_FUNCTION
        oHttp.send SOAP_REQUEST

    Below is a dump of the error object:

        Number: -2147012852
        Description: A certificate is required to complete client authentication
        Message: A certificate is required to complete client authentication

    I initially posted the question on Stack Overflow (http://stackoverflow.com/questions/9212985/cant-use-https-with-serverxmlhttp-object) thinking it was a code issue, but further investigation seems to point to a server issue.
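
    Given that error, the gateway has most likely started requiring TLS client authentication. If a suitable certificate is installed in the machine store, ServerXMLHTTP can present it via option 3 (SXH_OPTION_SELECT_CLIENT_SSL_CERT); the store path and certificate name below are placeholders:

        ' hypothetical certificate location: machine store, "My", friendly name
        oHttp.setOption 3, "LOCAL_MACHINE\My\MyClientCert"

    If no client certificate was ever issued for these servers, the fix lies with the payment gateway (they may have changed their endpoint requirements) rather than in the ASP code.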

  • How do I know if I set up the NLB (Network Load Balancing) cluster correctly?

    - by letseatlunch
    So I'll start from the very beginning. I'm working on a web conference where we are going to show about 12 videos, for a total of about half a gig across all 12. Since all the participants are going to be watching (and also streaming/downloading) at once, it was recommended we set up a server farm. So I have 4 servers that I am trying to network together. They are all running Microsoft Server 2008, and I have spent the last three days setting them up. Now that it's done, I want to make sure everything is set up the way that I think it is and that it's all ready to go. What is the best way to do this? Really I want to make sure that the load will be split over the servers when it's showtime. Thanks for any help in advance. Dave
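
    For a quick smoke test, assuming this is Windows NLB on Server 2008 (where nlb.exe fronts the older wlbs.exe commands), each host can report cluster state from an elevated prompt:

        nlb query
        rem every node should appear in the host list, with the
        rem cluster reporting "converged"

    Then, from a client machine, repeatedly request a test page that echoes which server handled it and confirm the responses spread across all four hosts under parallel load.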

  • Hyper-V and host-installed hardware devices: can guest VMs access?

    - by gravyface
    Have a couple of servers I'd like to set up as Hyper-V servers, with a couple of Windows 2008 Standard VMs. On the host, we have a few hardware devices we'd like to be accessible to the guest; I'm not sure whether these are supported via a raw "pass-thru" on Hyper-V (which I don't have a lot of experience with), even if the same drivers are installed on the guest. The hardware in question is a Brooktrout fax card, a SCSI adapter for the tape drive, and a 9-pin serial connection to one of the core firewalls for management.

  • Move /var directories to /mnt on an EC2 instance

    - by Geoff Lanotte
    I am trying to work on a standard configuration for a set of EC2 instances running Ubuntu 12.04. These servers are going to be primarily web servers for a Ruby on Rails application. When you configure a new large instance, you are given a primary volume of 8 GB, plus ephemeral storage of 400 GB mounted at /mnt. It seems logical to me to move some directories with a potential for growth off to /mnt; I was specifically thinking of /var/www and /var/log. My question is two-fold: Is this a good idea, or are there pitfalls that I cannot see? If it is a good idea, how should I go about configuring this? I do have the ability to configure new instances and take down our old ones. My concern is, over the long term, doing this in a way that prevents downtime. I am a developer with some experience in devops, but mounting drives is something I have not faced before, so explicit directions would be greatly appreciated.
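
    One common approach, sketched under assumptions (Ubuntu paths as above; services stopped during the copy): move the data to /mnt and bind-mount it back over the original locations.

        sudo service apache2 stop            # or whatever serves the app
        sudo mkdir -p /mnt/var/www /mnt/var/log
        sudo rsync -a /var/www/ /mnt/var/www/
        sudo rsync -a /var/log/ /mnt/var/log/
        sudo mount --bind /mnt/var/www /var/www
        sudo mount --bind /mnt/var/log /var/log

    To survive reboots, add matching /etc/fstab lines (/mnt/var/www /var/www none bind 0 0, and likewise for the log directory). One pitfall worth flagging: ephemeral storage is wiped when the instance is stopped and started, so anything under /mnt must be reproducible. That is usually fine for logs you ship elsewhere, but riskier for /var/www unless every deploy rebuilds it.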

  • MS Clear Screen Saver doesn't work

    - by ufoq
    I have a problem with a really exotic thing: the Microsoft Clear Screen Saver. As its name suggests, it's a screen saver that's transparent (MS posted it as part of the W2K Resource Kit). When you move the mouse/hit a key, the "lock" dialog appears. I would like to use this to view server desktops without needing to log in. I tested it on my XP machines, and it works flawlessly. But on W2K3 servers it doesn't work: after the screensaver timeout, this error message is displayed: "The Clear Screen Saver cannot display the user desktop after the workstation has been locked"

  • Start TLS and 389 Directory

    - by Kyle Flavin
    I'm trying to configure Start TLS on 389 Directory Server, but I'm having all sorts of issues. I've been following this doc: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Directory_Server/9.0/html/Administration_Guide/managing-certs.html which specifies that I should create a certificate for both the directory server and the admin server. I've imported the CA cert on both servers. I've tried to use the same server certificate for both, but it will not allow me to do so. However, the admin and directory servers reside on the same host, so if I generate a new certificate it will need to use the same hostname; I'm not sure if that's valid... Has anyone out there set this up before? Any direction would be helpful. I have multi-master replication set up. From an external client, I'm attempting to do:

        ldapsearch -ZZ -x -h "myhost" -b "dc=example,dc=com" -D "cn=Directory Manager" -W ""

    and I'm getting a protocol error.
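
    Two checks that often narrow down a protocol error on -ZZ, assuming default paths and the OpenLDAP client tools: confirm the server actually has security switched on, and give the client the CA certificate to verify against.

        # on the directory server: nsslapd-security must be "on"
        ldapsearch -x -h localhost -s base -b "cn=config" \
            -D "cn=Directory Manager" -W nsslapd-security

        # on the client: tell the OpenLDAP tools which CA to trust
        LDAPTLS_CACERT=/path/to/ca.pem ldapsearch -ZZ -x -h myhost \
            -b "dc=example,dc=com" -D "cn=Directory Manager" -W

    A server that hasn't enabled security typically rejects the StartTLS extended operation, which the client tools report as a protocol error.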

  • Eliminate single point of failure for webservers?

    - by George Bailey
    I know that in DNS, each of the listed DNS servers will be tried until one responds. I know that in email, in the event of a failure, delivery moves to the next server in the list, or the mail is held for a period of time. As far as I know, with webservers, the browser will get one of the webserver IP addresses, try it, and if it fails it will give up. Is this correct? If so, then the only way to direct traffic away from a failed IP address would be with the DNS servers... and even that would not update immediately?
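
    For reference, the DNS-side approach is multiple A records with a short TTL; a hypothetical zone-file excerpt:

        www    60    IN    A    203.0.113.10
        www    60    IN    A    203.0.113.11

    The short TTL limits how long resolvers keep handing out a dead address, but it cannot close the window entirely, which is why load balancers or failover IPs usually sit in front of webservers that need real redundancy.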

  • Uninstalling PowerShell 1.0

    - by Ddono25
    I am attempting to standardize our PowerShell deployment and usage across all servers, which involves uninstalling PS 1.0 and installing PS 2.0 on Server 2003 machines. In searching for KB926139 through CMD and Control Panel Add/Remove Programs, it is nowhere to be found. We have KB926141 installed on these servers as the language pack update, but no initial install update. PowerShell 1.0 is installed on the servers and can be found at the default locations (%windir%\System32\WindowsPowerShell\V1.0, %windir%\Syswow64\WindowsPowerShell\V1.0). I would like to avoid deleting the registry entry in this situation, since it should be pretty simple. Any help would be appreciated. Thanks!
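
    One thing worth checking before touching the registry, assuming the usual Server 2003 hotfix layout: updates keep their own uninstaller in a hidden folder under %windir%, and Add/Remove Programs only lists them when "Show updates" is ticked.

        dir /a %windir%\$NtUninstallKB926139$
        %windir%\$NtUninstallKB926139$\spuninst\spuninst.exe

    If the folder for KB926139 is missing, the uninstaller has nothing to work with, which would also explain why the update is invisible to Add/Remove Programs.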

  • APC has no system cache entries

    - by lazzio
    I have 2 web servers to provide PHP websites. One server is Apache + PHP-FPM + APC; the other is Apache with MPM-ITK + APC. On both of these servers, APC has no system cache entries, only user cache entries, as you can see in the screenshot (APC with only user cache entries). The APC configuration is:

        apc.cache_by_default        1
        apc.canonicalize            1
        apc.coredump_unmap          0
        apc.enable_cli              0
        apc.enabled                 1
        apc.file_md5                0
        apc.file_update_protection  2
        apc.filters
        apc.gc_ttl                  3600
        apc.include_once_override   0
        apc.lazy_classes            0
        apc.lazy_functions          0
        apc.max_file_size           2
        apc.mmap_file_mask
        apc.num_files_hint          1000
        apc.preload_path
        apc.report_autofilter       0
        apc.rfc1867                 0
        apc.rfc1867_freq            0
        apc.rfc1867_name            APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix          upload_
        apc.rfc1867_ttl             3600
        apc.shm_segments            1
        apc.shm_size                256
        apc.stat                    1
        apc.stat_ctime              0
        apc.ttl                     7200
        apc.use_request_time        1
        apc.user_entries_hint       4096
        apc.user_ttl                7200
        apc.write_lock              1

    Does anyone know why APC acts like this and how to make it work well? Thank you for your help!

  • When load balancing, must all copies of static web page be exactly the same?

    - by Gilles Blanchette
    I am used to getting answers for everything on the web, but not this time... Yesterday I enabled Amazon's weighted DNS functionality to load-balance 7 websites between two different IP addresses (split 50%-50%). Both servers run IIS 8.5, and the sites run well on both sides. Today I found out that Google Webmaster Tools is reporting fetch errors for robots.txt, with close to 50% of access attempts failing. The robots.txt file is fine and accessible (even via Google's URL testing page) on both servers. Let's say the current version of the static web pages is on the first computer and an updated version of the same web pages is on the second computer. Can that be the problem? When load balancing, can static web pages be slightly different from one host server to the other? Thank you for your help
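
    One way to compare what each host is actually serving, with placeholder addresses (substitute the real IPs and hostname); the Host header makes each server pick the right site:

        curl -s -H "Host: www.example.com" http://IP_A/robots.txt | md5sum
        curl -s -H "Host: www.example.com" http://IP_B/robots.txt | md5sum

    If the hashes differ, a roughly 50% error rate lines up with the 50/50 DNS split: every fetch routed to the odd host out fails the same way.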

  • Do TCP connections work differently within the same subnet?

    - by Dean
    I've encountered some network behaviour that confuses me while trying to get Java RMI working. I use netcat to connect to a local machine:

        [my_machine]$ nc -w 1 192.168.0.100 60000 && echo success
        success

    I try to do the same to my server:

        [my_machine]$ nc -w 1 my-servers-ip 60000 && echo success

    This doesn't work, unless I explicitly listen on the server socket:

        [amazon_ec2]$ nc -l 60000
        [my_machine]$ nc -w 1 my-servers-ip 60000 && echo success
        success

    For the version that fails, the SYN packet receives a RST, ACK in response. I'm not too knowledgeable about this stuff; at this point I only have wild theories, such as the one in the question. Any ideas? Potentially useful details:

        Local machine (192.168.0.100): MacBook
        Remote machine: Amazon EC2, Amazon Linux AMI 2012.03
        Security group settings:
            22 (SSH)        0.0.0.0/0
            1099            0.0.0.0/0
            49152-65535     0.0.0.0/0
        "iptables -L" shows no rules set
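
    A useful next step, assuming eth0 is the instance's interface: watch the port on the server while re-running the failing client command.

        [amazon_ec2]$ sudo tcpdump -ni eth0 'tcp port 60000'

    If the SYN arrives and the RST comes back from the instance itself, the connection is simply being refused because nothing is listening (which matches the nc -l result), rather than being filtered in transit; a security group drop would normally show up as a timeout, not a RST.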

  • iptables: masquerading and routing

    - by nixnotwin
    I have a WAN router which is linked to the ISP over a /30 WAN subnet. It also serves as a router for a /29 public WAN subnet connected to a few of my servers; traffic from the /29 gets routed to the ISP via the /30 subnet. For a weird reason, I want to masquerade (NAT) the interface which has the /30 IP. So the interface with the /30 IP should appear masqueraded to my 192.168.1.0/24 network, and it should also act as a normal non-NAT router for my public /29 subnet. Can this be done with iptables on a Linux machine?
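
    A sketch of how this usually looks with iptables, assuming eth0 is the /30 WAN-facing interface (the interface name and sysctl step are assumptions):

        # allow the box to route between its interfaces
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # NAT only traffic sourced from the private LAN; the public /29
        # matches no rule and is routed out unchanged
        iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE

    Because the MASQUERADE rule matches on source address, the /29 hosts traverse the same interface without translation, which is exactly the split behaviour described.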

  • Forward the same port for two different IPs (Cisco)

    - by Colin
    Hi! I have a Cisco running IOS 12.0(25) responding to two different IP addresses: IP_A and IP_B. Behind this router I also have two different servers: server_A and server_B. What I want is to forward port 22 to both servers, so:

        IP_A, port 22 -> server_A, port 22
        IP_B, port 22 -> server_B, port 22

    At the moment this only works for one of them (server_A). This is my config:

        interface Ethernet0/0
         description Internet
         ip address IP_A 255.255.255.0
         ip address IP_B 255.255.255.0 secondary
         no ip directed-broadcast
         ip nat outside
         no ip mroute-cache
         no cdp enable
        ip nat pool pool_A IP_A IP_A netmask 255.255.255.0
        ip nat pool pool_B IP_B IP_B netmask 255.255.255.0
        ip nat inside source list A pool pool_A overload
        ip nat inside source list B pool pool_B overload
        ip nat inside source static tcp server_B 22 IP_B 22 extendable
        ip nat inside source static tcp server_A 22 IP_A 22 extendable
        access-list A permit server_A
        access-list B permit server_B
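
    When one static mapping silently loses out, the NAT table is the first place to look; a hedged check, assuming this IOS train supports output modifiers:

        show ip nat translations | include :22
        debug ip nat detailed

    Watching whether inbound connections to IP_B port 22 ever create a translation narrows the problem down to the static NAT entries versus the overload pools and access lists competing for the same addresses.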

  • crontab still sending emails even with > /dev/null

    - by user2344668
    I have a crontab (root) that runs a script with output sent to /dev/null, but I always get the emails whenever it runs. I only want to receive error emails.

        # Rackspace driveclient update (12pm MST)
        0 12 * * * /root/scripts/driveclient-update > /dev/null

    The only way I can get the mails to stop is to use > /dev/null 2>&1, but then I won't get error emails either. This is happening on three different CentOS servers: two are 6.3 and one is 6.4. NOTE: I have read over and over that > /dev/null is supposed to send stdout there and prevent the email when the script produces nothing but stdout, so it works for at least some people; I cannot figure out why it is not working on these servers. Here's an example of where /dev/null is supposed to work: http://www.alphadevx.com/a/384-Suppressing-Cron-Job-Email-Notifications
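
    If cron still sends mail with stdout discarded, the script is almost certainly writing to stderr on every run. One way to confirm, using a temporary capture file (path is a placeholder):

        0 12 * * * /root/scripts/driveclient-update > /dev/null 2> /tmp/driveclient.err

    Whatever lands in /tmp/driveclient.err after a run is what cron has been mailing; once that routine stderr output is fixed or filtered inside the script, the original > /dev/null line will mail only when something actually goes wrong.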

  • How should we serve files in a small bioinformatics cluster?

    - by cespinoza
    We have a small cluster of six Ubuntu servers. We run bioinformatics analyses on these clusters. Each analysis takes about 24 hours to complete, each Core i7 server can handle 2 at a time, and each takes as input about 5 GB of data and outputs about 10-25 GB of data. We run dozens of these a week. The software is a hodgepodge of custom Perl scripts and 3rd-party sequence alignment software written in C/C++.

    Currently, files are served from two of the compute nodes (yes, we're using compute nodes as file servers). Each node has five 1 TB SATA drives mounted separately (no RAID) and pooled via GlusterFS 2.0.1. Each has 3 bonded Intel PCI gigabit Ethernet cards, attached to a D-Link DGS-1224T switch ($300, 24-port, consumer-level). We are not currently using jumbo frames (not sure why, actually). The two file-serving compute nodes are then mirrored via GlusterFS, and each of the four other nodes mounts the files via GlusterFS. The files are all large (4 GB+) and are stored as bare files (no database/etc.), if that matters.

    As you can imagine, this is a bit of a mess that grew organically without forethought, and we want to improve it now that we're running out of space. Our analyses are I/O-intensive and it is a bottleneck: we're only getting 140 MB/sec between the two fileservers, and maybe 50 MB/sec from the clients (which only have single NICs). We have a flexible budget which I can probably get up to $5k or so.

    How should we spend our budget? We need at least 10 TB of storage fast enough to serve all nodes. How fast/big does the CPU/memory of such a file server have to be? Should we use NFS, ATA over Ethernet, iSCSI, GlusterFS, or something else? Should we buy two or more servers and create some sort of storage cluster, or is 1 server enough for such a small number of nodes? Should we invest in faster NICs (say, PCI Express cards with multiple connectors)? The switch? Should we use RAID? If so, hardware or software? And which RAID (5, 6, 10, etc.)? Any ideas appreciated. We're biologists, not IT gurus.
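
    Before spending, it may help to separate network limits from disk limits; a minimal check with iperf (hostname is a placeholder):

        # on one of the file-serving nodes
        iperf -s

        # from a compute node; parallel streams exercise the bonded links
        iperf -c fileserver1 -P 4

    If aggregate throughput tops out near a single gigabit link, the bonding mode or the consumer switch is the likely bottleneck and faster disks won't help; if the network tests clean, the RAID-less SATA pool is the more probable limit.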

  • AWS VPC - why have a private subnet at all?

    - by jkim
    In Amazon VPC, the VPC creation wizard allows one to create a single "public subnet", or to have the wizard create a "public subnet" and a "private subnet". Initially, the public-plus-private option seemed good for security reasons, allowing webservers to be put in the public subnet and database servers to go in the private subnet. But I've since learned that EC2 instances in the public subnet are not reachable from the Internet unless you associate an Amazon Elastic IP with the EC2 instance. So it seems that with just a single public subnet, one could simply opt not to associate an Elastic IP with the database servers and end up with the same sort of security. Can anyone explain the advantages of a public + private subnet configuration? Are the advantages of this config more to do with auto-scaling, or is it actually less secure to have a single public subnet?
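
    The structural difference lives in each subnet's route table rather than in the instances; conceptually (identifiers are hypothetical):

        # public subnet route table
        10.0.0.0/16    -> local
        0.0.0.0/0      -> igw-12345678 (internet gateway)

        # private subnet route table
        10.0.0.0/16    -> local
        0.0.0.0/0      -> NAT instance (outbound only)

    An instance in the private subnet has no route to the internet gateway at all, so its isolation does not depend on anyone remembering never to attach an Elastic IP; that defense in depth is the usual argument for the two-subnet layout.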

  • How much does an iptables router slow down a connection?

    - by RayQuang
    Hi, I would like to know if introducing a new gateway in my network will slow things down. The question may sound unclear, but here is an illustration.

    Before installing the gateway server:

        Main Router <=> switches <=> servers

    After installing the gateway server:

        Main Router <=> iptables router <=> switches <=> servers

    My question is: how much will this delay incoming/outgoing requests and file transfers? Thanks, RayQuang
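
    Measuring beats guessing here; from a client, compare round-trip times with and without the extra hop in place (address is a placeholder):

        ping -c 100 -q server_ip

    A Linux box forwarding packets through a short iptables chain typically adds latency in the sub-millisecond range, which requests and file transfers rarely notice; raw throughput is better checked with a bulk copy before and after the change.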

  • What is the "real" difference between a NAS and NFS? Or, why pick a NAS device over "mere" NFS?

    - by warren
    From an end-user perspective, what is the difference between a NAS device and using NFS exports from a file server? They seem to accomplish the same end result. The difference between a SAN and other file storage is related (in my experience) to how they are connected to the server infrastructure. However, the difference between a NAS, connecting over a standard Ethernet port, and NFS (sharing storage off specific servers, also over the network) seems more nebulous. Is there a good reason to pick a NAS filer over just running NFS on servers?

  • Safe to use high port numbers? (re: obscuring web services)

    - by sofakng
    I have a small home network and I'm trying to balance the need for security against convenience. The safest way to secure internal web servers is to only connect using VPNs, but this seems overkill to protect a DVR's remote web interface (for example). As a compromise, would it be better to use very large port numbers (e.g. five digits, up to 65531)? I've read that port scanners typically only scan the first 10,000 ports, so using very high port numbers is a bit more secure. Is this true? Are there better ways to protect web servers? (i.e. web GUIs for applications)
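
    For context on the 10,000-port claim: default scans are indeed limited, but a full sweep costs an attacker very little (hostname is a placeholder):

        # nmap's default pass covers roughly the 1,000 most common ports
        nmap target.example.com

        # -p- walks all 65535 TCP ports
        nmap -p- target.example.com

    So a five-digit port hides a service from casual, default-configured scans only; anyone probing the host specifically will still find it.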

  • Will this SPF record restrict delivery of email for the original domain?

    - by user199421
    As part of the product we offer, we send emails on behalf of our clients. Because the emails don't come from an IP associated with the client, they are sometimes flagged as spam. We advised some of our clients to add an SPF record approving us to send emails on their behalf, and we saw an immediate improvement in deliverability rates after making the change. However, one of our clients was notified by his hosting provider that the SPF record we suggested adding would "slightly restrict" all emails that don't come from our servers (including our client's own servers). The record we use is this:

        v=spf1 a mx include:ourdomain.com ~all

    So my question is whether the warning we received about this is correct, and if so, why, and what can be done to solve it (allow sending email both from the original domain and by ourselves).
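
    For reference, a mechanism-by-mechanism reading of that record (standard SPF semantics; the include domain is from the question):

        v=spf1 a mx include:ourdomain.com ~all
        #      |  |  |                    '-- everything else: softfail
        #      |  |  '-- hosts authorized by ourdomain.com's own SPF record
        #      |  '-- the domain's MX hosts
        #      '-- the IP(s) in the domain's A record

    Mail from the client's own servers passes as long as those servers match the a or mx mechanisms; senders matching nothing get ~all, a softfail ("accept but may mark"), which is presumably the "slight restriction" the host meant. Switching to -all would make that a hard fail.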

  • Can't connect to server from certain machines

    - by Joel Coel
    On a small college campus we have a VLAN set up for the computer labs. These machines get assigned IP addresses in the 192.168.7.xxx range. In the server room, all of the servers are on the default VLAN and assigned an IP address in the 10.1.1.xxx range. For the most part this works, but the lab machines are unable to connect to one of the servers; they can't even ping it. They can talk to other servers on the same switch as this server just fine. At first I thought it might be a VLAN issue, but I changed the server port's VLAN to match other known-working ports with no effect. Any ideas?
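
    Assuming these are Windows boxes (10.1.1.50 below is a placeholder for the unreachable server), two quick checks can separate routing from switching:

        tracert 10.1.1.50     (run from a lab machine)
        route print           (run on the affected server)

    If the trace dies one hop past the router while the server's neighbours answer, the usual culprit is a missing or wrong default gateway (or subnet mask) on that one server, so it never manages to reply to the 192.168.7.x network.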

  • How to point a subdomain to a nameserver?

    - by vonconrad
    I've got an old, crusty WHM/cPanel server which I'm trying to get rid of, and a new setup on shared hosting which is much cheaper in the long run. The problem is that there are a bunch of websites on the server whose domains I don't have access to. They currently point at name servers under my domain (ns.mydomain.com), but the new provider has their own name servers (ns.provider.com) which I have to use instead. My initial idea was to set up a CNAME pointing my name server at my provider's (ns.mydomain.com CNAME ns.provider.com), but I read in this question that this would be a bad idea. The accepted answer suggests using an A record instead, and I want to make sure how this would work. Assuming ns.provider.com has an IP address of 123.123.123.123, is it just a matter of doing ns.mydomain.com A 123.123.123.123? Is there any way the provider could block those requests, since the name server domain technically doesn't belong to them?
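
    In zone-file terms, the A-record suggestion amounts to this (the IP is the placeholder from the question):

        ; in the mydomain.com zone
        ns.mydomain.com.    86400    IN    A    123.123.123.123

    From the provider's side these arrive as ordinary DNS queries to 123.123.123.123; the query carries no trace of which NS name the resolver followed to get there, so as long as the zones themselves are provisioned on the provider's servers, there is nothing obvious for them to block on.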

  • Updating Applications in a Corporate Environment

    - by user145133
    I am very new to this subject and was hoping someone could shed some light on it. I am working on creating a corporate network that will obviously have multiple servers and multiple workstations. Let's say a new version of Adobe Flash comes out. I would think that you would want to test this update in a test environment before "pushing it out" to the servers and workstations. How do you go about controlling, testing, and then pushing application updates out? (I am not talking about Windows updates.) Do you use a 3rd-party sysadmin tool? Home-grown software? Any info will be greatly appreciated :)
