Search Results

Search found 9845 results on 394 pages for 'ntp servers'.


  • Change Exchange Server Name Before Upgrade

    - by ffrugone
    I need to upgrade our Exchange Server from 2003 to 2010. I'm physically changing servers as well as software. I'm worried that redirecting the Outlook clients after the upgrade is going to be troublesome. So I thought that, before doing anything else, I would change the name of the Exchange server on the clients from 'server-name.domain.com' to 'mail.domain.com' and add an entry in DNS that points 'mail.domain.com' to the same IP as 'server-name.domain.com'. However, even though I added 'mail.domain.com' to DNS, I cannot get the Exchange server name to change on the client computers. I found out that the Outlook clients check the Global Catalog for the name of the Exchange server computer. My question is: can I change the Global Catalog address of the Exchange computer from 'server-name.domain.com' to 'mail.domain.com'? Either way, is there a better way to do this? Thanks.
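
    One possible way to script the DNS alias step described above (this does not answer the Global Catalog question itself): assuming the zone is hosted on a Windows DNS server, dnscmd can add the record. The server name 'dc01' and the IP address are placeholders.

      rem create mail.domain.com as an alias (CNAME) for the existing Exchange host
      dnscmd dc01 /RecordAdd domain.com mail CNAME server-name.domain.com

      rem or add it as a plain A record pointing at the same address (192.0.2.10 is a placeholder)
      dnscmd dc01 /RecordAdd domain.com mail A 192.0.2.10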

    Read the article

  • Highlight column when a row is clicked, depending on condition

    - by Fredrik
    We have a large matrix with lists of servers on the rows and persons as columns. We mark the cell at a column/row intersection with an X if the person has access to the server. Pretty basic. But as the matrix grows, it becomes more difficult to quickly find the right person with access, so I'd like some way to make it easier to use. In the example above, I have clicked on the row "Resource B" and would like all the columns where there is an "X" (User 1, User 2) to be highlighted somehow. Then if I click the row for "Resource C", "User 1" should be highlighted.

    Read the article

  • Programmatic, script-based, or command line method to change starting program for user on Windows Server 2000/2003?

    - by Joe Majsterski
    I have written an app that we want to distribute to a large number of customers to be used as the shell program when they log onto their server with a particular admin account. I have figured out how to change the starting program by going to Administrative Tools->Computer Management->System Tools->Local Users and Groups->Users, selecting the properties for the user, going to the Environment tab, and changing the program file name under "Starting program" to my new app. But is there a way I could do this with some code that could be sent out and run on all these servers?

    Read the article

  • FTP Server with MySQL access, and POST notification

    - by TIW
    I'm looking for an FTP server solution, hosted either internally on a dedicated server or on Rackspace Cloud/AWS, that sends an HTTP POST notification when a file is uploaded and allows user accounts to be created either through an API or a MySQL database. There are several offerings that provide email notification, but has anyone come across anything that matches the above requirements? BrickFTP, being an IaaS system, is an option, but we would prefer something hosted in house. I don't believe the standard FTP servers provided with Apache can do the above ... can they?
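
    Not a definitive recommendation, but one self-hosted stack that appears to cover both requirements is pure-ftpd built with MySQL authentication and upload-script support. A rough sketch, assuming the MySQL auth settings live in /etc/pureftpd-mysql.conf; the handler path and webhook URL are hypothetical.

      # start pure-ftpd with MySQL-backed accounts and the upload-script hook (-o) enabled
      pure-ftpd -l mysql:/etc/pureftpd-mysql.conf -o -B

      # pure-uploadscript invokes the handler with the uploaded file's full path as $1
      pure-uploadscript -B -r /usr/local/sbin/notify-upload.sh

      # /usr/local/sbin/notify-upload.sh (hypothetical endpoint)
      #!/bin/sh
      curl -s -X POST -d "path=$1" https://example.com/hooks/ftp-upload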

    Read the article

  • "this network location can't be included because it is not indexed" on Windows 2008R2 Remote Desktop Services Hosting

    - by ChrisNZ
    I'm setting up a new terminal server for our users on Windows 2008 R2 (I guess I should call it Remote Desktop Services now!). When I try to change the location of "Documents" (by removing the default Documents library and adding a new one) to use the file server, i.e. \\fileserver\username\Documents, I get the message: "This network location can't be included because it is not indexed". I certainly don't want to make folders available offline, and in fact I have set the GPO to prohibit offline folders on the terminal servers. What is the best practice for document libraries on a terminal server with network file shares?

    Read the article

  • Is it possible as an Administrator to gain access to a SQL Server 2008 instance without changing any passwords?

    - by adhocgeek
    I have administrative access on our network, but I don't manage the installation of all servers or software. On some of our machines, instances of SQL Server 2008 have been installed which I need to be able to access, but since my account hasn't been explicitly granted a login, I can't get in. Is there a way to get into the database without changing anyone's password? (E.g. I could solve this by changing the password of the user who installed the instance, assuming they've set themselves up as admin, and then logging on as them, but I don't want to have to do this.)
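
    One documented route, sketched here with placeholders (verify against Microsoft's documentation for your exact build): when a SQL Server 2008 instance is started in single-user mode, members of the machine's local Administrators group can connect with sysadmin rights, so you can grant your own account a login without touching anyone's password. For a default instance, with DOMAIN\youraccount as a placeholder:

      rem restart the instance in single-user mode and connect with your Windows account
      net stop MSSQLSERVER
      net start MSSQLSERVER /m
      sqlcmd -E -S localhost
      1> CREATE LOGIN [DOMAIN\youraccount] FROM WINDOWS;
      2> EXEC sp_addsrvrolemember 'DOMAIN\youraccount', 'sysadmin';
      3> GO

      rem then return the instance to normal operation
      net stop MSSQLSERVER
      net start MSSQLSERVER

    Plan a short outage, since other applications will be disconnected while the instance is in single-user mode.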

    Read the article

  • JBoss basic access

    - by user101024
    I have JBoss 5 deployed on Solaris 10. The server's connection has unrestricted high ports (above 1023) open to the internet. I can access the box via SSH and FTP from a second server on the same subnet and from anywhere over the internet. JBoss is running on port 8080 and is accessible via http://localhost:8080 on the box itself. I cannot access it via http://ip.add.goes.here:8080, either from the other server on the same subnet or via the internet. Is there any service or configuration within JBoss or elsewhere on Solaris 10 that needs to be changed from the default to allow HTTP traffic to be served? Thanks, Kevin
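
    Assuming nothing on the Solaris side (e.g. an ipfilter rule) is blocking port 8080, the usual suspect is that JBoss 5 binds its services to localhost only by default. A quick check and the common fix, sketched:

      # see which address port 8080 is bound to (127.0.0.1 means localhost-only)
      netstat -an | grep 8080

      # start JBoss bound to all interfaces (or substitute the box's external IP)
      ./run.sh -b 0.0.0.0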

    Read the article

  • Why can't I access a webserver through a load balancer on my local network?

    - by Karptonite
    When I try to use curl (or wget, lynx, etc) to connect from a server on our local network to our website, which is on a local server behind a CoyotePoint load balancer, curl fails. Ping does not have this problem. When I curl directly to any of the servers behind that load balancer (from and to the same local network), I also have no problem. It doesn't matter whether the local server I'm curling from is behind the load balancer or not. Does anyone have any idea why I can't access my webserver through the load balancer on my local network?

    Read the article

  • Checking that tasks are executed

    - by homer5439
    I'm not sure how to explain this. Once one starts having dozens or hundreds of servers, each running some sort of periodic jobs (mostly from cron), there is the problem of making sure (or as sure as possible) that these tasks actually run. I mean, I get an email if a job fails, and no mail if it succeeds, but also no mail if it doesn't run for whatever reason. Sure, I could change the jobs to send a "successfully ran" email, only to be flooded by mails that most of the time I don't want to see. Basically, I want to be notified only if: a task ran and failed, or a task didn't run at the expected time. Is there a way to do this?
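
    One common pattern for this is a "dead man's switch": each job pings a monitoring endpoint only when it exits successfully, and the monitor raises an alert if the ping does not arrive within the expected window -- which covers both "ran and failed" and "never ran". A minimal sketch; the URL and job name are hypothetical, and hosted services as well as small self-hosted tools exist in this style.

      # crontab entry: ping the monitor only if the backup script exits 0
      30 2 * * * /usr/local/bin/backup.sh && curl -fsS --retry 3 https://monitor.example.com/ping/nightly-backup >/dev/null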

    Read the article

  • How to NFSv4 share a ZFS file system on FreeBSD?

    - by Sandra
    I'm using FreeBSD 9 and have created a ZFS file system like so: zfs create tank/project1 followed by zfs set sharenfs=on tank/project1. There are many howtos on setting up NFSv3 on FreeBSD on the net, but I can't find any on NFSv4, or for the case where the NFS share is done with ZFS. E.g. this howto says I have to restart the (NFSv3) server with nfsd -u -t -n 4, but I don't even have nfsd. When I run echo /usr/ports/*/*nfs* I only get /usr/ports/net-mgmt/nfsen, /usr/ports/net/nfsshell, /usr/ports/net/pcnfsd, /usr/ports/net/unfs3, /usr/ports/sysutils/fusefs-chironfs, /usr/ports/sysutils/fusefs-funionfs and /usr/ports/sysutils/fusefs-unionfs, so I don't see any NFSv4 servers that I could install with pkg_add. Question: How do I install and set up NFSv4, so I can mount the share from e.g. a Linux host?
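
    For reference, on FreeBSD 9 the NFSv4 server is part of the base system, so there is nothing to install from ports; it is switched on via rc.conf plus a "V4:" root line in /etc/exports. A rough sketch, to be checked against exports(5) and the Handbook; the client network below is a placeholder.

      # /etc/rc.conf
      rpcbind_enable="YES"
      nfs_server_enable="YES"
      nfsv4_server_enable="YES"
      nfsuserd_enable="YES"
      mountd_enable="YES"

      # /etc/exports -- declare the NFSv4 root; per-filesystem export lines still apply
      V4: /tank -sec=sys
      /tank/project1 -network 192.168.0.0 -mask 255.255.255.0

      # start the services
      service nfsd start
      service mountd restart

      # on a Linux client (paths are relative to the V4: root)
      mount -t nfs4 freebsd-host:/project1 /mnt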

    Read the article

  • Application running as a service is not able to create the same number of processes as when it runs from cmd

    - by Pini Reznik
    I have a Windows application which creates up to 35 processes, and it works OK when it's run from cmd. But when it is executed as a service on the same machine, it is able to create only 20 processes, and all the others are killed because of some kind of resource exhaustion problem. The problem is persistent on one Windows 2003 server but not reproducible on other servers. Can it be because the system has run out of desktop heap? http://support.microsoft.com/kb/184802 How can I check it?
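
    The KB article linked above points at the SharedSection values in the registry: the third number is the desktop heap, in KB, for non-interactive window stations, which is what services get, and it is much smaller than the interactive allocation. A hedged sketch of how to inspect it (changing it requires care and a reboot):

      reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems" /v Windows
      rem look for SharedSection=1024,3072,512 in the output:
      rem   1024 = system-wide, 3072 = interactive desktops, 512 = non-interactive (services)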

    Read the article

  • rsync generates far more traffic than expected

    - by user109459
    I use rsync for backing up one of my servers, which has 4 GB of files. When I transfer these files, the traffic isn't the expected 4 GB; it is a lot higher, about 60 GB. I also checked the traffic on my server, the backup server and the router, and all three say that there was about 60 GB of traffic. But at the end rsync says that it has only transferred 4 GB. Another problem is that I can't debug it because the problem occurs randomly.
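
    One way to narrow this down (the paths below are placeholders): run the same sync as a dry run with itemized output and statistics, which shows which files rsync thinks have changed, why, and how much literal data it expects to send.

      rsync -avn --stats --itemize-changes /source/ backupserver:/backup/path/
      # -n                  dry run, nothing is actually transferred
      # --itemize-changes   list each file that would be sent and the reason
      # --stats             totals for literal vs. matched data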

    Read the article

  • Need help setting up a VPN for remote computer connections

    - by Chowdan
    I am on a low budget right now, and I am currently in the process of starting a computer company. I need a VPN so I can run DameWare tools for working on customers'/partners' computers remotely. I will be working with Windows and some Apple and Linux machines. I have a desktop with an AMD Phenom II 965BE (currently running stable at 3.8 GHz), 8 GB of RAM, a Radeon HD 6870 (I know graphics aren't too useful) and about 1.5 TB of HDD space. I am attempting to build a network for my office, based entirely on this one machine, that would also let me connect securely to my partners' computers, so that when they have issues I can diagnose and repair them remotely. What types of servers besides a VPN server would I need to create this? I have access to all Microsoft products, so I can run Windows Server 2012, Windows Server 2008 R2, or any other Microsoft software. Thanks for the help, all.

    Read the article

  • What factors can affect the performance of an HTTP server written in C#? [on hold]

    - by Yousaf
    I am having trouble handling huge databases. I have many clients, around 100-300 (the clients are basically servers running e.g. Windows SQL Server). Each client may have 38 thousand rows/listings of data, and each row has 10-12 fields. I cannot afford to keep JSON files for each client and handle them on the main server, because of memory issues. What if I had an HTTP server written in C or C# installed on the clients, and they returned 250 rows in each response to the main server? How would factors like speed, memory or other issues affect us? What exactly am I asking? In short: if a server written in C# sends 250 rows per request, what factors can affect the performance of the server? For example: speed, processing, operating system, the implementation of the server's algorithm? How can these factors really affect performance at large scale?

    Read the article

  • Has anyone seen an HTTP 500 error when HTTPS traffic going through Pound Proxy forwards to an HTTP page?

    - by scientastic
    We have Varnish as our load balancer and reverse proxy cache for normal HTTP traffic. For HTTPS traffic, we use Pound proxy to unwrap the SSL and forward to Varnish, which then forwards to the back-end servers. This is used for our "checkout" process to encrypt credit card info in transit. However, on the last stage of checkout, users always get an HTTP 500 (Internal Server) error. By all the tests I've tried, it doesn't seem to be due to our back-end app server. Does anyone know anything about how that transition works -- the transition back from HTTPS to HTTP and the interaction between Pound and Varnish -- and why it might cause 500 errors?

    Read the article

  • Speeding up the fix of an OpenSSL bug with 8192-bit keys [on hold]

    - by rubo77
    This is related to this bug report: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=747453 OpenSSL contains a set of arbitrary limitations on the size of accepted key parameters that make unrelated software fail to establish secure connections. The problem was found while debugging an XMPP s2s connection issue where two servers with long certificate keys (8192-bit RSA) failed to establish a secure connection because OpenSSL rejected the handshake. This seems like a small problem to fix, but although there is an easy patch available in that bug report, there has been no reaction so far. The last patch that broke the 2048 barrier took 2 years to be implemented and only resulted in an increase to 4096 bit, which seems like a bad joke. Where should we report this to speed up the fix for such an issue?

    Read the article

  • Nightly backups (and maybe other tasks) causing server alerts

    - by J. Pablo Fernández
    I have two independent alert notification systems for my servers. The server is a virtual machine on Linode, and one of the alerts comes from Linode. The other monitoring system we use is New Relic. They are both watching IO utilization. Every night I get alerts from both of them saying the server is using too much IO. I run quite a few tasks in the middle of the night, but the one I have confirmed can cause IO warnings is the backups. The backup is done by s3cmd sync. I tried ionice, but it still generates the warnings. Getting warnings every night reduces the efficacy of warnings when they happen for real. For Linode I could raise the level at which a warning is issued, but that might make the whole thing useless if the level is set too high. What would be the proper solution for this?

    Read the article

  • PPTP VPN server issue: server = CentOS & client = Windows 7

    - by jmassic
    I have a CentOS server configured as a PPTP VPN server. The client is a Windows 7 machine with "Use default gateway on remote network" enabled in the advanced TCP/IPv4 properties. He can connect to the CentOS server without any problem and can access: the box of his ISP (http://192.168.1.254/), the CentOS server itself, and the website hosted by the server (over http://). But he canNOT access any other web service (google.com or 74.125.230.224). I am a beginner with web servers, so I do not know what can cause this problem.
    Note 0: The Windows 7 user must be able to access the whole internet through the CentOS PPTP VPN.
    Note 1: With "Use default gateway on remote network" UNCHECKED, it is the same problem.
    Note 2: With "Use default gateway on remote network" UNCHECKED AND "disable class based route addition" CHECKED, the Win 7 client can access Google, but via the ISP IP (no use of the VPN...). See screenshot.
    Note 3: I have already run echo 1 > /proc/sys/net/ipv4/ip_forward and iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
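
    For what it's worth, NAT alone is often not enough here: the FORWARD chain also has to accept traffic between the ppp interfaces and eth0, the ip_forward setting should be made persistent, and PPTP links frequently need the TCP MSS clamped so large packets survive the tunnel. A hedged sketch, using the interface names from the question:

      # make forwarding persistent instead of the one-off echo
      echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
      sysctl -p

      # allow forwarding between the VPN clients (ppp+) and the internet-facing interface
      iptables -A FORWARD -i ppp+ -o eth0 -j ACCEPT
      iptables -A FORWARD -i eth0 -o ppp+ -m state --state ESTABLISHED,RELATED -j ACCEPT

      # clamp MSS to the path MTU for forwarded TCP connections
      iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu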

    Read the article

  • Some Domain Clients unable to access certain websites

    - by Shaunie
    I have a small domain of around 20 clients with a 2003 R2 SP2 DC. Most of my clients can browse the internet freely and don't have a problem. However, a couple are reporting problems accessing certain sites, e.g. Hotmail, Skyscanner and BBC News. They can browse the sites sometimes; other times they get 408/409 errors. Other machines in the domain can access these sites. I have cleared the DNS cache on these machines and changed the external DNS servers on the DC, still to no avail. The main issue is that the person who cannot access Skyscanner uses it several times a day to book flights for employees going on leave or returning to work. Both clients are running XP SP3, though one machine is being exchanged for one running Windows 7 shortly. Any advice greatly appreciated. Thanks

    Read the article

  • Can't connect to memcached

    - by DMClark
    We currently have memcached running on CentOS. None of our PHP applications can connect; I have tried multiple applications trying to establish access. The most informative PHP error we get is: "Memcache::get() [function.Memcache-get]: Server 127.0.0.1 (tcp 11211) failed with: Permission denied (13) in /var/www/.." We are running memcached 1.4.5 with PECL memcache 2.25. We can telnet to the port and it works. iptables allows full access from lo to lo. We've tried this on two different servers, with both a compiled version and the RPM on CentOS 5.5, and get the same result. Is there anything fairly obvious that we are missing?
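
    Since telnet from a shell works but the PHP process gets "Permission denied (13)", one likely culprit on CentOS is SELinux blocking outbound connections made by Apache/PHP rather than iptables. A hedged check-and-fix sketch (the boolean shown is the generic one; newer policies also ship memcached-specific booleans):

      getenforce                                            # is SELinux enforcing?
      grep denied /var/log/audit/audit.log | grep httpd     # any AVC denials for httpd?
      setsebool -P httpd_can_network_connect 1              # let httpd make outbound network connections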

    Read the article

  • Question about the conditions of a VPS hosting provider

    - by baobeiii
    I'm looking into buying a VPS from a company. In their terms of service it says: User may not: a) Use 25% or more of system resources for longer than 90 seconds. There are numerous activities that could cause such problems; these include: CGI scripts, FTP, PHP, HTTP, etc. So basically you're only allowed to use a quarter of what you're paying for? Does anyone know if this is a standard restriction for most hosting providers? It seems a bit ridiculous, but I don't know what's normal in the server world. And the weird thing is that they only sell Xen servers, so why can't I use my allotted resources, since no one else can use them anyway? Thanks.

    Read the article

  • SSH not working after installing SVN server on Debian

    - by sLIDe
    Today I had to install an SVN server on my Debian server. I used this tutorial (except that I didn't do anything to allow connecting to SVN through file://, http:// or https://, only svn:// and svn+ssh://). After I installed the SVN server and configured it following that tutorial, I tried to connect to it. I could connect using the svn:// protocol, but when I tried to connect using the svn+ssh:// protocol, my server's SSH stopped responding. Even after I stopped the SVN server and restarted the SSH server, I can't connect to it.

    Read the article

  • How to set this up using VMware and AD [closed]

    - by user552585
    In my organisation we have more than 100 PCs and three highly specced IBM servers. The scenario is 300 employees, including different kinds of programmers (.NET, Java, PHP, etc.); these employees use the systems in different shifts, without stopping their work. I want all the applications they require to be available on every system, I want each user to have their own ID and password to log in, and I have to secure the organisation's data and the user data against tampering or anything else by other users. Please explain how to set this up using VMware and AD in a Microsoft environment, in a fully secured manner. Please give a brief explanation. Please help me.

    Read the article

  • Multi server management

    - by user788721
    We are running a website that allows users to create their own content and then share it through an iframe. We would like to get more servers to host the user content, plus the main one for our website. Each user has a link like xxxx.com/content989856 or xxxx.com/content45454545. We were thinking of two options: (1) using an .htaccess on the main server that redirects to the right content server (see the sketch below), but the problem is that if the main server is down, then all the content is unreachable as well; (2) using a subdomain depending on where the content is hosted, but then if we move a user's content from one domain to another, we will have to change his links as well. Do you know a better option, or are those really the only two available? I am wondering how big websites like YouTube handle this problem. Thank you very much for your help,
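
    A minimal sketch of the first option (mod_rewrite on the main server); the hostname content2.xxxx.com and the ID pattern are hypothetical, and this approach does keep the main server as a single point of failure.

      # .htaccess on the main server
      RewriteEngine On
      # example: content IDs starting with 9 live on a second content server
      RewriteRule ^content(9[0-9]*)$ http://content2.xxxx.com/content$1 [R=302,L]

    With mod_proxy available, the [P] flag could proxy instead of redirect, hiding the content servers from users, though every request would still pass through the main server.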

    Read the article

  • Best Practice for upgrading PHP On Production Systems

    - by Demic
    We have two load-balanced web servers running PHP 5.3. I've been asked by our dev team to upgrade PHP to 5.4 because they need certain functionality it will bring. The main issue is that 5.3 is the latest version that has been built into the distro's repository, so to upgrade using the package manager I'll need to add another third-party repo. I don't have a problem with this per se, but I'm concerned about using a package from a "non-official" source. The other option is to compile PHP from source, but I guess this will prevent me from using the package manager to upgrade at any stage in the future? So I'm just looking for some guidance on which way to go: compile from source, or install from any old repo that purports to supply PHP 5.4? Or perhaps there's a third option I haven't considered? Thanks in advance, Demic

    Read the article
