Search Results

Search found 9845 results on 394 pages for 'ntp servers'.


  • Need to set up an office network, suggest some hardware?

    - by Yegor
    We have 6 Windows workstations, spread out over a fairly large area. We need to share a DSL connection (upgrading to 100/100 Mbit fiber in a few months) with these machines over a 1 Gbit network. We also need Wi-Fi to be available for laptop use, and we plan to add 2 rackmount servers for internal use as well. Can someone suggest a decent (preferably low-cost) setup that will let me achieve the goals mentioned above?

    Read the article

  • Best Practice: DNS and VPN (with private network IPs)

    - by ribx
    I am trying to find the best solution for my DNS problem. We are running several services in our company that you can reach only over VPN. Other services that are reachable through the internet got the domain ... At the moment all services inside the VPN network go by .local... These have a VPN IP in the private network 192.168.252.0/24. Clients range from Linux over OS X to Windows. I can think of 4 possibilities to implement a DNS infrastructure:
    1. Most common: an internal DNS server that is pushed by the VPN. But this has several drawbacks: your DNS responses are limited to the speed of the VPN connection and your own DNS server. Because of very complex websites, this can increase the time a page takes to load quite a lot. Also, we have several VPNs that are not connected to each other, and all of them have their own DNS server.
    2. Several DNS servers locally. These have to be configured by hand, and you have to use some third-party tool like dnsmasq (see the sketch below). If you start a DNS request, you ask your locally running DNS server, which decides which server to ask for which domain name. One colleague of mine uses such a solution on OS X (I am sorry, I don't remember the name of the application).
    3. You use your domain hoster. Most of them have APIs available to manipulate your DNS entries, so you could publish your private-network information to your domain hoster. I am not sure whether they all accept private network IPs, but I guess there will be problems of the same kind as in number 4.
    4. The one we currently use, because it's the most logical choice for us: we forward the subdomain *.local.. to our own public DNS server. This works quite well with some public DNS servers like Google's, but most ISPs do not forward the answers, or don't do so consistently. For example, my ISP returns a positive result for a DNS request of a *.local.. domain only about every 10th time I run nslookup. (Can someone explain this?)
    Here the real question: is there another solution we have not thought of? Or: which of these methods do you use?
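
    For option 2, a minimal dnsmasq sketch of per-domain forwarding (the suffix and server IP below are placeholders, not values from this setup): each client runs dnsmasq, points its own resolver at 127.0.0.1, and queries for the VPN-only suffix go to the resolver reachable over the VPN while everything else goes to a normal public resolver.

        # /etc/dnsmasq.conf (hypothetical values)
        listen-address=127.0.0.1
        # VPN-only suffix is answered by the DNS server reachable over the VPN
        server=/corp.local/192.168.252.1
        # everything else goes to a public resolver
        server=8.8.8.8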

    Read the article

  • IRC server for Ubuntu

    - by Ralphz
    The Ubuntu wiki page https://help.ubuntu.com/community/IrcServer lists a few IRC servers you can use on Ubuntu. My question is: which one is your favorite, and which is the most secure? I will also need one that will allow me to monitor rooms for regular expressions and run some scripts if a regexp matches. Thanks
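
    The regex-and-script part is usually handled by a small bot or service connected to the IRCd rather than by the server itself. A minimal sketch in PHP (server name, channel, pattern and script path are all hypothetical):

        <?php
        // Connect to a local IRCd, watch one channel, and run a script
        // whenever a message matches a regular expression.
        $sock = fsockopen('irc.example.local', 6667, $errno, $errstr, 30);
        fwrite($sock, "NICK watcher\r\nUSER watcher 0 * :regex watcher\r\n");
        while (!feof($sock)) {
            $line = fgets($sock, 512);
            if (strpos($line, 'PING') === 0) {
                fwrite($sock, 'PONG' . substr($line, 4));        // keep-alive
            } elseif (preg_match('/ 001 /', $line)) {
                fwrite($sock, "JOIN #ops\r\n");                  // registered, join the room
            } elseif (preg_match('/PRIVMSG #ops :.*\bERROR\b/i', $line)) {
                exec('/usr/local/bin/alert.sh ' . escapeshellarg(trim($line)));
            }
        }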

    Read the article

  • How to set up mod_rewrite in WAMP?

    - by Martin Jenseb
    I am learning Symfony2 and, following http://symfony.com/doc/current/quick_tour/the_big_picture.html, I have this URL working: http://localhost/Symfony/web/app.php/demo/hello/Fabien. The guide says that if you use Apache with mod_rewrite enabled, you can even omit the app.php part of the URL: http://localhost/Symfony/web/demo/hello/Fabien. Last but not least, on production servers you should point your web root directory to the web/ directory to secure your installation and have an even better-looking URL: http://localhost/demo/hello/Fabien. How can I do this in WAMP Server?
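
    The usual WAMP steps, as a sketch (paths assume the default WAMP layout, and the rewrite rules mirror the .htaccess that Symfony2 ships in its web/ directory):

        # httpd.conf (WAMP tray icon -> Apache -> httpd.conf): uncomment the module
        LoadModule rewrite_module modules/mod_rewrite.so

        # and allow .htaccess overrides for the document root
        <Directory "c:/wamp/www/">
            AllowOverride All
        </Directory>

        # Symfony/web/.htaccess (routes everything that isn't a real file to app.php)
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ app.php [QSA,L]

    After saving, restart Apache from the WAMP menu; rewrite_module can also be enabled directly from the tray icon's Apache modules list.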

    Read the article

  • Persistent Issues on small business network using Cisco 871W and Catalyst Express 500

    - by Ben Campbell
    Being the most qualified (read: still not qualified) person to solve our persistent network issues, I've turned to Server Fault for guidance. I've done some searching, read related documentation on cisco.com, and tried a bit of troubleshooting. Here is the configuration:
    - 100 Mbit synchronous connection from a business internet provider (tested multiple times at 100 Mbit at the source).
    - Cisco 871W wireless access point & router is where the WAN connection starts (this serves all our wireless). The only wired connection on the 871W is the Catalyst switch listed below.
    - Cisco Catalyst Express 500 (24TT) is where all the wired connections terminate.
    - About 20 Windows workstations and servers (AD/web servers only).
    - Some services in EC2, including mail and other web servers/apps.
    - I've been TOLD the internal cabling should be gigabit-ready.
    Here are the problems:
    - Generally slow download rates from the internet to the desktops/laptops.
    - Frequent "page cannot be displayed" errors in browsers; sometimes 3 or 4 reloads are necessary, and often CSS or other content that requires the browser to connect to a different server won't load.
    - Slow speeds within the LAN when copying files from workstation to workstation. I would expect extremely fast transfers workstation to workstation / server to workstation in a network this simple.
    Several things I need to admit: I'm not primarily a network guy, funding is relatively low, and I need to be the guy who finds the solution. I understand most of the terminology and most of the technology; implementation is where I fail due to lack of experience. Getting to the point: I'm wondering whether experienced network admins think our small network should be sufficiently served by our current hardware if configured properly, or whether we should purchase new equipment and start fresh. If starting fresh is the plan, what that new equipment should be is likely a different question entirely. If I haven't provided enough information, I will happily do some troubleshooting and update with the results. I have experience using Wireshark and some other tools. Please let me know what you think would be most helpful, and thanks in advance. EDIT: I forgot to add that the Cisco appliance will not finish loading the SDM Express console. It hangs every time at "populating modules... DHCP", then eventually crashes and closes. I've rebooted the hardware and this still happens.

    Read the article

  • IIS Strategies for Accessing Secured Network Resources

    - by ErikE
    Problem: a user connects to a service on a machine, such as an IIS web site or a SQL Server database. The site or the database needs to gain access to network resources such as file shares (the most common case) or a database on a different server. Permission is denied. This is because the user the service is running under doesn't have network permissions in the first place, or, if it does, it doesn't have rights to access the remote resource. I keep running into this problem over and over again and am tired of not having a really solid way of handling it. Here are some workarounds I'm aware of:
    - Run IIS as a custom-created domain user who is granted high permissions. If permissions are granted one file share at a time, then every time I want to read from a new share I have to ask a network admin to add it for me. Eventually, with many web sites reading from many shares, it is going to get really complicated. If permissions are instead opened up wide for the user to access any file share in our domain, that seems like an unnecessary security surface area to present. This also applies to all the sites running on IIS, rather than just the selected site or virtual directory that needs the access, a further surface-area problem.
    - Still use the IUSR account but give it network permissions and set up the same user name on the remote resource (not a domain user, a local user). This also has its problems. For example, there's a file share I am using that I have full sharing rights to, but I can't log in to the machine, so I have to find the right admin and ask him to do it for me. Any time something has to change, it's another request to an admin.
    - Allow IIS users to connect as anonymous, but set the account used for anonymous access to a high-privilege one. This is even worse than giving the IIS IUSR full privileges, because it means my web site can't use any kind of security in the first place.
    - Connect using Kerberos, then delegate. This sounds good in principle but has all sorts of problems. First of all, if you're using virtual web sites where the domain name you connect to the site with is not the base machine name (as we do frequently), then you have to set up a Service Principal Name on the web server using Microsoft's SetSPN utility (see the example below). It's complicated and apparently prone to errors. Also, you have to ask your network/domain admin to change security policy for both the web server and the domain account so they are "trusted for delegation." If you don't get everything perfectly right, suddenly your intended Kerberos authentication is NTLM instead, and you can only impersonate rather than delegate, and thus cannot reach out over the network as the user. Also, this method can be problematic because sometimes you need the web site or database to have permissions that the connecting user doesn't have.
    - Create a service or COM+ application that fetches the resource for the web site. Services and COM+ packages run with their own set of credentials. Running as a high-privilege user is okay, since they can do their own security and deny requests that are not legitimate, putting control in the hands of the application developer instead of the network admin. Problems: I am using a COM+ package that does exactly this on Windows Server 2000 to deliver highly sensitive images to a secured web application. I tried moving the web site to Windows Server 2003 and was suddenly denied permission to instantiate the COM+ object, very likely due to registry permissions. I trolled around quite a bit and did not solve the problem, partly because I was reluctant to give the IUSR account full registry permissions; that seems like the same bad practice as just running IIS as a high-privilege user. Note: this is actually really simple. In a programming language of your choice, you create a class with a function that returns an instance of the object you want (an ADODB.Connection, for example) and build a DLL, which you register as a COM+ object. In your server-side code, you create an instance of the class and use the function, and since it is running under a different security context, calls to network resources work.
    - Map drive letters to shares. This could theoretically work, but in my mind it's not really a good long-term strategy. Even though mappings can be created with specific credentials, and this can be done by someone other than a network admin, it also means there are either way too many shared drives (small granularity) or too much permission granted to entire file servers (large granularity). Also, I haven't figured out how to map a drive so that the IUSR account gets the drives; mapping a drive applies to the current user, and I don't know the IUSR account password to log in as it and create the mappings.
    - Move the resources local to the web server/database. There are times when I've done this, especially with Access databases. Does the database have to live out on the file share? Sometimes it was just easiest to move the database to the web server or to the SQL database server (so the linked server to it would work). But I don't think this is a great all-around solution either, and it won't work when the resource is a service rather than a file.
    - Move the service to the final web server/database. I suppose I could run a web server on my SQL Server database machine, so the web site can connect to it using impersonation and make me happy. But do we really want random extra web servers on our database servers just so this is possible? No.
    - Virtual directories in IIS. I know that virtual directories can help make remote resources look as though they are local, and this supports using custom credentials for each virtual directory. I haven't yet worked out how this would solve the problem for system calls. Users could reach file shares directly, but this won't help, say, classic ASP code access resources. I could use a URL instead of a file path to read remote data files in a web page, but this isn't going to help me make a connection to an Access database, a SQL Server database, or any other resource that uses a connection library rather than just letting me read all the bytes and work with them.
    I wish there were some kind of "service tunnel" that I could create. Think about how a VPN makes remote resources look like they are local. With a richer aliasing mechanism, perhaps code-based, why couldn't even database connections occur under a defined security context? Why not a special Windows component that lets you specify, per user, what resources are available and what alternate credentials are used for the connection? File shares, databases, web sites, you name it. I guess I'm almost talking about a specialized local proxy server. Anyway, so there's my list. I may update it if I think of more. Does anyone have any ideas for me? My current problem today is, yet again, I need a web site to connect to an Access database on a file share. Here we go again...
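
    For the Kerberos/delegation route, registering the extra host name with SetSPN typically looks like this (host and account names are placeholders, not taken from the question):

        setspn -A HTTP/www.example.com MYDOMAIN\AppPoolAccount
        setspn -A HTTP/www MYDOMAIN\AppPoolAccount
        setspn -L MYDOMAIN\AppPoolAccount

    The -A calls add SPNs for the site's host header to the account the application pool runs as, and -L lists what is already registered so duplicates can be spotted.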

    Read the article

  • Subdomain on another IP address

    - by pixeline
    Hello, my main domain (domain.com) sits on a server with IP address 1. I need to have a subdomain (forum.domain.com) point to a server with IP address 2. Both servers are hosted at iWeb, so I have a cPanel interface to manage them, but I can't find the right way to do that. I tried a .htaccess redirection, which works, but the visible address in the browser changes too. Any help on how to do that would be appreciated. Thank you
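
    For reference, this is normally done with a plain DNS record in the zone for domain.com rather than an HTTP redirect; a sketch with a placeholder IP standing in for "IP address 2":

        forum.domain.com.    14400    IN    A    203.0.113.2

    The server at that second address then just needs a site (virtual host) configured to answer for forum.domain.com, and the browser's address bar stays on the subdomain.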

    Read the article

  • RDP access to DirectAccess Server via DirectAccess

    - by crgnz
    I have just set up a Win2008R2 DirectAccess server (and also a Win2008R2 Active Directory server). From the Internet I can log in via Remote Desktop to the AD server, but I cannot RDP into the DirectAccess server. I can ping both servers and get an IPv6 response. (I can RDP to the DirectAccess server from the internal company network.) DirectAccess is configured to allow full intranet access. I think I've hit a mental block; the answer will probably be obvious, but I just can't see it.

    Read the article

  • My .tpl file won't update!

    - by Kyle Sevenoaks
    Hi, I am running the site at www.euroworker.no; it's a Linux server and the site has a backend editor. It's a Smarty/PHP site, and when I try to update a few of the .tpl files (two or three) they don't update. I have tried uploading through FTP and that doesn't work either. I have no knowledge of how servers work or anything, please help? It runs on the LiveCart system. Thanks!
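
    Smarty renders pages from compiled copies of the templates (usually a templates_c directory), so a stale compile cache is one common reason edited .tpl files appear not to change. A hedged sketch, since how LiveCart exposes its Smarty object is an assumption here:

        <?php
        // While editing: recompile templates on every request (Smarty 2 style).
        $smarty->force_compile = true;

        // Or clear the compiled copies once so the edited .tpl files are re-read.
        $smarty->clear_compiled_tpl();

    Emptying the application's templates_c/cache directories has the same effect; it is also worth confirming that the FTP upload is actually overwriting the files rather than failing on permissions.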

    Read the article

  • I want to set up a DynDNS service, what do I need to know?

    - by Hanno Fietz
    I'm gathering data from field devices, some of which will soon be behind a cellular-to-Ethernet gateway. Some of the devices need to be polled, and since the cellular carrier will usually assign changing IPs, I'm getting a gateway which has a Dynamic DNS client built in. I would like to have the devices call my own servers instead of a public DynDNS provider. What do I need to know to get started?
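
    One self-hosted approach is a BIND zone that accepts signed dynamic updates, with each gateway (or a script polling it) pushing its current address via nsupdate; a rough sketch with placeholder names, key file and IP:

        nsupdate -k /etc/dyn-update.key <<'EOF'
        server ns1.example.com
        zone dyn.example.com
        update delete gateway1.dyn.example.com A
        update add gateway1.dyn.example.com 60 A 203.0.113.25
        send
        EOF

    On the name-server side the zone needs an allow-update (or update-policy) statement referencing the same TSIG key; the polling side then simply resolves gateway1.dyn.example.com as usual.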

    Read the article

  • Using Openfiler inside a virtual machine with VMware Fault Tolerance

    - by SoMoS
    Hello, currently I have 2 servers with Fault Tolerance working, together with another server running Openfiler as an iSCSI server (it looks like Fault Tolerance does not work without it). I would like to remove that server and run the Openfiler distribution as another Fault Tolerance-protected machine. Is this possible? This way I could save one server and also have faster disk access. Thanks in advance for your help.

    Read the article

  • Setting up a Git server

    - by lindenb
    Hi all, my team is working with several Red Hat Linux servers and we'd like to synchronize our sources from one server to another (for several distinct projects). I'd like to set up a Git server for version control; however, I'm new to Git and I'm confused by the terms ('server', 'daemon', 'repository', etc...). Moreover, we're working behind a firewall. Can anyone point me to a link about how to set up a Git server? Thanks. Pierre
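
    A minimal sketch of the simplest arrangement, a bare repository shared over SSH (host, path and user are placeholders); since it rides on SSH, it only needs the port you already use through the firewall:

        # on the server that will hold the shared repository
        mkdir -p /srv/git/project.git
        cd /srv/git/project.git && git init --bare

        # on each developer's machine
        git clone ssh://user@gitserver.example.com/srv/git/project.git
        # ...or hook an existing working copy up to it and push
        git remote add origin ssh://user@gitserver.example.com/srv/git/project.git
        git push origin master

    A "daemon" (git-daemon) and tools like gitosis only come into play for unauthenticated or more finely managed access; the bare repository itself is the "server".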

    Read the article

  • Freeware local proxy engine for Windows?

    - by Tomalak
    Is there a nice and small, freeware proxy application that runs in the system tray? It should support HTTP and HTTPS proxy connections, NTLM authentication and configurable rules (different proxy servers for different hosts). Bonus karma if it can NTLM-authenticate anonymous requests passing through it.

    Read the article

  • How does the Linux update manager work?

    - by Mr.Student
    I want to know how the update manager for Linux works. For instance, how does my Linux distro check whether there are any updates available for download, and which servers does it download these updates from? If I am dealing with third-party software that is not a part of the main distro, how do those programs interact with my update manager to notify me that they have updates available? Lastly, what would be some good literature on the subject?
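
    On Debian/Ubuntu-style distros, for example, the update manager reads a list of repository URLs and compares their published package indexes against what is installed; a hedged sketch (the repository line is a generic example and the release name varies):

        # /etc/apt/sources.list -- each line names a repository the distro checks
        deb http://archive.ubuntu.com/ubuntu lucid main universe

        # download the repositories' package indexes, then apply newer versions
        sudo apt-get update
        sudo apt-get upgrade

    Third-party software plugs into the same mechanism by adding its own repository (typically a file under /etc/apt/sources.list.d/) and signing key, which is how it appears in the same update notifications.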

    Read the article

  • Wildcard DNS setup problems

    - by Phil Jackson
    Hi, this is my first time with dedicated servers and I'm having problems setting up a wildcard subdomain. I previously tried:
        *    14400    IN    A    (serverip)
    waited 30 hours, and nothing. So I then tried:
        *    14400    IN    CNAME    actvbiv.co.uk.
    waited another 30 hours, nothing. I'm now trying:
        *.actvbiz.co.uk    14400    IN    CNAME    actvbiv.co.uk.
    Am I doing this correctly? I'm using WHM. Regards, Phil Jackson
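
    For comparison, a working wildcard entry in a zone file normally looks like one of these (domain and address are placeholders; note that the owner name and any CNAME target must both use the zone's exact spelling):

        *.example.co.uk.    14400    IN    A        203.0.113.10
        *.example.co.uk.    14400    IN    CNAME    example.co.uk.

    In WHM the same record can be added through the DNS zone editor, and the web server also needs a wildcard ServerAlias (e.g. ServerAlias *.example.co.uk) so it answers for arbitrary host names.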

    Read the article

  • The perfect server room?

    - by splattne
    What do I have to consider when I'm planning a new server room for a small company (30 PCs, 5 servers, a couple of switches, routers, UPS...)? What are the most important aspects in order to protect the hardware? What things do not belong in a server closet? Edit: You may also be interested in this question: Server Room Survival Kit. Thank you!

    Read the article

  • Windows Server 2003 Standard - how to access other PCs remotely

    - by studiohack
    I'm a novice in the world of servers, and I'm about to install Windows Server 2003 Standard on a server box I have... However, I'm curious whether there is a way to access the other PCs in my network remotely via the server (Windows XP Home and Windows 7 Home Premium). Say I'm at a friend's house and I want to access my Win7 machine via the server - how do I do it? Is it possible? Thanks!

    Read the article

  • Intel i7 vs Xeon quad core processor?

    - by jasondavis
    I know the Xeon processors have been around for a long time and are mostly used in servers, but I am curious: why do people not use Xeons in a high-performance desktop? As far as I know, the best desktop processors out there now are the i7 line. The i7s and Xeons are both quad-core processors, so what is the main difference between them? I just saw that the Mac Pros are using the quad-core Xeons instead of the i7s.

    Read the article

  • How to create an RTMP server on Linux (Gentoo)

    - by user221359
    I am trying to create an RTMP server to stream video files from my Linux server to the internet/network. I have been able to successfully use ffmpeg to stream to other RTMP servers like YouTube, but how do I go about hosting my own RTMP server locally? I have looked into ffserver, but is that what I need to create a local RTMP server? If so, could someone give me basic syntax or examples to get it working? Thanks for any information.
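
    ffserver is aimed mainly at HTTP/RTSP streaming; a common way to host an RTMP endpoint yourself is nginx built with the rtmp module, which the question doesn't mention, so treat this as one possible sketch:

        # nginx.conf (requires nginx compiled with nginx-rtmp-module)
        rtmp {
            server {
                listen 1935;
                application live {
                    live on;    # accept streams pushed to rtmp://host/live/<name>
                }
            }
        }

        # push a local file to it with ffmpeg, then play rtmp://host/live/test in a client
        ffmpeg -re -i input.mp4 -c copy -f flv rtmp://localhost/live/test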

    Read the article

  • Order of operations for environment variables

    - by alyda
    I want to understand how environment variables are set and reset (overridden). I'm running Apache/2.2.24 (Unix) with PHP/5.4.14 on a Mac. My theory is this: environment variables can be set in bash, then overridden by httpd.conf, which precedes a VirtualHost directive, which precedes php.ini, which can then be overridden by .htaccess (if allowed) and finally by PHP. I tried the following:
    1. Setting the environment variable in bash: I added export ENVIRONMENT='local' to my ~/.bashrc file, restarted Apache, and did not get any output from print_r($_ENV); (in a simple index.php file at the root of my web server). I also tried putting ENVIRONMENT='local' into /etc/environment and restarting Apache: nothing. Same for /etc/bashrc: still nothing.
    2. Setting the environment variable in httpd.conf: I added SetEnv ENVIRONMENT 'local-httpd' to the end of my /etc/apache2/httpd.conf file (but before I load other conf files, such as the virtual hosts [Include /private/etc/apache2/other/*.conf]). I now see the variable in the array from print_r($_SERVER); but not print_r($_ENV);.
    3. Setting the environment variable in httpd-vhosts.conf: I added SetEnv ENVIRONMENT 'local-vhost' to my /etc/apache2/extra/httpd-vhosts.conf file, in my generic directive that points to my default document root. I now see the variable has been overwritten (to local-vhost from local-httpd, so I know where the variable is getting set).
    4. Setting the environment variable in php.ini: while searching for a proper place to put my environment variable, I noticed that variables_order = "GPCS" was set to the production value rather than EGPCS. I changed it, restarted my server, and found that I was now getting output for print_r($_ENV); but not my expected custom variable. It also appears that I am not able to set a custom variable in this file. Please tell me if I am wrong.
    5. Setting the environment variable in .htaccess: I added SetEnv ENVIRONMENT 'local-htaccess'. This worked as expected, overwriting all other values that were set.
    6. Setting / overwriting the environment variable in PHP: if (...) { putenv('ENVIRONMENT=local'); }
    I'm asking this question because I have a lot of local and remote testing servers, some of which may or may not allow me access to modify httpd.conf, httpd-vhosts.conf, php.ini or environment variables. I want to understand what is best for those different scenarios (shared hosting, Heroku, local servers, etc.). I obviously don't know how to properly set the environment variable in bash in a way that PHP can use it; I'd like to know how to do that (as I think Heroku does something similar with heroku config set...).
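
    For the bash-to-PHP path specifically: a variable exported only in ~/.bashrc never reaches an Apache process started by launchd or an init script, which is the usual reason attempt 1 shows nothing. A minimal sketch of the pieces that do connect (where you are allowed to put each one depends on the host):

        # exported in the environment Apache is actually started from
        export ENVIRONMENT=local

        # httpd.conf or a vhost: pass that server-process variable through to requests,
        # or set one explicitly at this level
        PassEnv ENVIRONMENT
        SetEnv ENVIRONMENT local-vhost

        <?php
        // PHP under mod_php: SetEnv/PassEnv values are visible via getenv() and
        // apache_getenv(), and in $_SERVER, even when variables_order omits "E".
        $env = getenv('ENVIRONMENT');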

    Read the article

  • Custom mail server -> proxy server -> Gmail server

    - by Eugene
    So I have a custom VPS which routes email via an MX record in DNS, and I need to set up the Gmail interface via Google Apps - this step and the previous one are clear. But how can I insert a middle layer that checks email messages for special words etc., something like a SpamAssassin proxy, but a custom product? The question is: how can I set up mail to flow from my server, to the proxy server (or application), and on to the Gmail servers? Thank you for any help!
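
    One way to build that middle layer is a filtering MTA that the MX record points at, which inspects each message and then relays the domain's mail to Google's servers; a rough sketch using Postfix (an assumption, the question doesn't name an MTA) with placeholder values:

        # /etc/postfix/main.cf on the filtering relay
        relay_domains = example.com
        transport_maps = hash:/etc/postfix/transport
        # hand every message to a filter (SpamAssassin or a custom script)
        content_filter = filter:dummy

        # /etc/postfix/transport -- after filtering, deliver the domain to Google Apps
        example.com    smtp:[ASPMX.L.GOOGLE.COM]

    The "filter" service itself is defined in master.cf as a pipe to the inspection script, along the lines of Postfix's FILTER_README; flagged mail can be tagged or dropped there before it ever reaches Gmail.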

    Read the article
