Search Results

Search found 7776 results on 312 pages for 'configure in'.

Page 247 of 312

  • How do I push my initial snapshot to a subscriber server in SQL Server 2000?

    - by Kev
    I'm configuring Transactional Replication using the Push model. The scenario is: The SQL Servers: SQL01 (publisher) and SQL02 (subscriber) - both running SQL 2000 SP4. Both servers are standalone (i.e. not domain members) Both servers have their FQDN and NETBIOS names in their HOSTS files I've managed to configure SQL01 to publish my database and configured a Push subscription for SQL02 using the Push New Subscription wizard and set the Distribution Agent to update the subscription continuously. On the Push Subscription wizard "Initialise Subscription" page I've selected "Yes, initialise the schema and data" and ticked the "Start the Snapshot Agent to begin the initialisation process immediately" option. All the required services are running (SQL Agent). When I complete the wizard and browse the Replication - Publications folder I can see my publication (blue book with arrow). The publication shows the Push subscription and its status is Pending. If I look in the c:\Program Files\Microsoft SQL Server\Mssql\Repldata folder I see a number of T-SQL scripts for each table e.g. Products.bcp, Products.sch, Products.idx. What should happen now? Should my replicated database now (magically) appear on the subscription server?
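
    If the Snapshot Agent has produced the .bcp/.sch/.idx files but the subscription still shows Pending, one thing worth checking is whether the Snapshot and Distribution Agent jobs actually ran. A hedged T-SQL sketch follows (the job name is only an example; the real names are generated by the wizard and are visible under SQL Server Agent > Jobs):

        -- list the replication agent jobs and their last run status
        EXEC msdb.dbo.sp_help_job @category_name = N'REPL-Snapshot'
        EXEC msdb.dbo.sp_help_job @category_name = N'REPL-Distribution'

        -- start the Snapshot Agent job manually if it never ran
        EXEC msdb.dbo.sp_start_job @job_name = N'SQL01-MyDatabase-MyPublication-1'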

    Read the article

  • Why would I need a firewall if my server is well configured?

    - by Aitch
    I admin a handful of cloud-based (VPS) servers for the company I work for. The servers are minimal Ubuntu installs that run bits of LAMP stacks / inbound data collection (rsync). The data is large but not personal, financial or anything like that (i.e. not that interesting). Clearly on here people are forever asking about configuring firewalls and the like. I use a bunch of approaches to secure the servers, for example (but not restricted to): ssh on non-standard ports; no password logins, only known ssh keys from known IPs; https and restricted shells (rssh), generally only from known keys/IPs; servers kept minimal, up to date and patched regularly; tools like rkhunter, cfengine, lynis, denyhosts etc. for monitoring. I have extensive experience of unix sysadmin work. I'm confident I know what I'm doing in my setups. I configure /etc files. I have never felt a compelling need to install stuff like firewalls: iptables etc. Put aside for a moment the issues of physical security of the VPS. The question: I can't decide whether I am being naive, or whether the incremental protection a firewall might offer is worth the effort of learning/installing it and the additional complexity (packages, config files, possible support etc.) on the servers. To date (touch wood) I've never had any problems with security, but I am not complacent about it either.
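
    For reference, the incremental protection being weighed here is roughly a default-deny inbound policy. A minimal iptables sketch, assuming Ubuntu and sshd moved to port 2222 (the port numbers are only examples):

        # drop everything inbound by default, allow established flows back in
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # explicitly open only the services the box is meant to expose
        iptables -A INPUT -p tcp --dport 2222 -j ACCEPT   # ssh on a non-standard port
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT    # https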

    Read the article

  • Best NIC config when virtual servers need iSCSI storage?

    - by icky2000
    I have a Windows 2008 server running Hyper-V. There are 6 NICs on the server configured like this: NIC01 & NIC02: teamed administrative interface (RDP, mgmt, etc) NIC03: connected to iSCSI VLAN #1 NIC04: connected to iSCSI VLAN #2 NIC05: dedicated to one virtual switch for VMs NIC06: dedicated to another virtual switch for VMs The iSCSI NICs are used obviously for storage to host the VMs. I put half the VMs on the host on the switch assigned to NIC05 and the other half on the switch assigned to NIC06. We have multiple production networks that the VMs could appear on so the switch ports that NIC05 & NIC06 are connected to are trunked and we then tag the NIC on the VM for the appropriate VLAN. No clustering on this host. Now I wish to assign some iSCSI storage direct to a VM. As I see it I have 2 options: Add the iSCSI VLANs to the trunked ports (NIC05 and NIC06), add two NICs to the VM that needs iSCSI storage, and tag them for the iSCSI VLANs Create two additional virtual switches on the host. Assign one to NIC03 and one to NIC04. Add two NICs to the VM that needs iSCSI storage and let them share that path to the SAN with the host. I'm wondering about how much overhead the VLAN tagging in Hyper-V has and haven't seen any discussion about that. I'm also a bit concerned that something funky on the iSCSI-connected VM could saturate the iSCSI NICs or cause some other problem that could threaten storage access for the entire host which would be bad. Any thoughts or suggestions? How do you configure your hosts when VMs connect direct to iSCSI?

    Read the article

  • ipv6 with KVM on debian

    - by Eliasdx
    I have trouble setting up IPV6 on my Proxmox (KVM) server: My ISP sent me this information(xxx=placeholder): IPs: 2a01:XXX:XXX:301:: /64 Gateway: 2a01:XXX:XXX:300::1 /59 This is the interface setup on the host server: auto vmbr1 iface vmbr1 inet static address 178.XX.XX.4 broadcast 178.XX.XX.63 netmask 255.255.255.192 pointopoint 178.XX.XX.1 gateway 178.XX.XX.1 bridge_ports eth0 bridge_stp off bridge_fd 0 iface vmbr1 inet6 static address 2a01:XXX:XXX:301::2 netmask 64 up ip -6 route add 2a01:XXX:XXX:300::1 dev vmbr1 down ip -6 route del 2a01:XXX:XXX:300::1 dev vmbr1 up ip -6 route add default via 2a01:XXX:XXX:300::1 dev vmbr1 down ip -6 route del default via 2a01:XXX:XXX:300::1 dev vmbr1 On the guest: auto eth0 iface eth0 inet static address 178.xx.xx.47 netmask 255.255.255.255 broadcast 178.xx.xx.63 gateway 178.xx.xx.1 pointopoint 178.xx.xx.1 iface eth0 inet6 static pre-up modprobe ipv6 address 2a01:XXX:XXX:301::2:2 netmask 64 up ip -6 route add 2a01:XXX:XXX:300::1 dev eth0 down ip -6 route del 2a01:XXX:XXX:300::1 dev eth0 up ip -6 route add default via 2a01:XXX:XXX:300::1 dev eth0 down ip -6 route del default via 2a01:XXX:XXX:300::1 dev eth0 Ipv4 works on both host and guest but Ipv6 only works "sometimes". It's up for minutes and then down again until I change something. However I can actually ping the host and the guest from both host and guest. host:~# ip -6 neigh 2a01:XXX:XXX:301::100:2 dev vmbr1 lladdr 00:50:56:00:00:e0 REACHABLE 2a01:XXX:XXX:300::1 dev vmbr1 lladdr 00:26:88:76:18:18 router STALE host:~# ip -6 route 2a01:XXX:XXX:300::1 dev vmbr1 metric 1024 mtu 1500 advmss 1440 hoplimit 4294967295 2a01:XXX:XXX:301::/64 dev vmbr1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295 fe80::/64 dev vmbr0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295 fe80::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295 fe80::/64 dev vmbr1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295 fe80::/64 dev tap101i1d0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295 default via 2a01:XXX:XXX:300::1 dev vmbr1 metric 1024 mtu 1500 advmss 1440 hoplimit 4294967295 Does someone know why it isn't working? And is there a way to configure multiple v6 IPs from the same subnet so I can dedicate IPs to websites on a server with multiple virtualhosts?
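
    On the last point (multiple v6 addresses from the same /64 on one interface), a minimal sketch for the guest's /etc/network/interfaces, assuming the same addressing scheme as above; the extra addresses are only placeholders:

        iface eth0 inet6 static
            address 2a01:XXX:XXX:301::2:2
            netmask 64
            # additional addresses from the same /64, e.g. one per virtualhost
            up ip -6 addr add 2a01:XXX:XXX:301::2:3/64 dev eth0
            up ip -6 addr add 2a01:XXX:XXX:301::2:4/64 dev eth0
            down ip -6 addr del 2a01:XXX:XXX:301::2:3/64 dev eth0
            down ip -6 addr del 2a01:XXX:XXX:301::2:4/64 dev eth0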

    Read the article

  • How to rename network printer on Windows 7?

    - by Adrian McCarthy
    This question is similar to How do you rename a printer device in Windows 7 64 bit, except the answers there do not work, and I'll provide more information. This is a home network, not a domain. I have set up a Brother HL-5170DN. It is a network printer connected directly to an Ethernet hub. I can connect to it with Windows 7, but on Windows 7 it defaults to the name "binary_p1 on Brn37415f", which isn't very useful. And I cannot seem to change the name. I have it working with several Windows XP and Vista machines, and I can change the name on those machines. On Windows 7 Printer properties: I can see the "binary_p1" name on the General tab. I can select the text, but I cannot change it. The field is not grayed out, but I cannot type anything into it. On the Ports tab, all of the controls are grayed out (disabled). The selected Port is called "\\Brn_37415f\binary_p1", and it's described as "Client Side Rendering Provider" and the printer field says "binary_p1". On the Security tab, I can see that my account has "Manage this printer" permissions. If I choose Printer Server Properties, I can select the port and click Configure Port, but I get a dialog that says, "An error occurred during port configuration. This option is not supported." I have found many forums with people asking the same question without getting an answer. Update: No more bounties to offer, but I'm still looking for a solution to this problem.
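
    One avenue that sometimes works when the Properties dialog refuses to accept a new name is the printui.dll helper from an elevated command prompt; a hedged sketch (no guarantee it succeeds where the dialog fails, and the names here are taken from the question):

        rem rename the printer via the print UI helper (run from an elevated prompt)
        rundll32 printui.dll,PrintUIEntry /Xs /n "binary_p1 on Brn37415f" PrinterName "Brother HL-5170DN"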

    Read the article

  • Is there any way to distribute x264 encoding jobs across multiple computers (to increase the encoding speed)?

    - by Breakthrough
    Does anyone know of a current, active solution to encoding x264 videos across many computers (via the network) to increase encoding FPS? Brownie points for cross-platform and open source, but just so you all know, I usually use Windows. Programs that I have heard of, and why I do not believe they are suitable: x264farm: Not actively developed. Good interface, but does not support two-pass encoding, and fails with newer x264 builds. ELDER: Again, not actively developed, but my issue was that it didn't work with new x264 builds, and it was very difficult to configure (read: randomly stopped working). While I don't absolutely need a program which is being actively developed, I would like one that supports two-pass encoding, and works with new(er) x264 builds. Additional information: So far, I've offered (and awarded!) two separate bounties on this question since I first posted it over two years ago, and I still haven't found a solution to this problem. What I'm looking for basically is a simple program to allow me to encode x264 videos using the processing power of multiple computers connected over a LAN. Furthermore, it would be nice if it worked with new(er) x264 builds, and supported two-pass encoding. If at any time someone has an updated answer, or a new solution to this problem, please post it and it will be given some consideration.

    Read the article

  • SQL Server Installation: Is it 32 or 64 bit?

    - by CapBBeard
    Hi, Recently I was performing an OS upgrade on one of our DB servers, moving from Server 2003 to Server 2008. The DBMS is SQL Server 2005. While reinstalling SQL on the new Windows installation, I went to another of our DB servers to verify a couple of settings. Now, I always thought this second server was Server 2003 x64 + SQL 2005 x64 (from what I'd been told), but I now have my doubts about this. I now suspect that it is in fact only 32 bit SQL, however I'd like to verify this. Here's some details: The OS is definitely 64 bit. xp_msver shows Platform as NT INTEL X86 SELECT @@VERSION shows Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)... However sqlservr.exe is not shown with '* 32' in taskmgr, does anyone know why this is the case, if it is in fact 32 bit as claimed? Despite this, it does seem to be running out of the x86 program files folder. If I do the same checks on a confirmed 64 bit installation, it does give back the expected 64 bit readings, which can only prove that this server in question is only running in 32 bit. Now, that being the case, the question arises about how much memory this '32 bit' install can use. Task manager reports about 3.5GB memory usage for sqlservr.exe (The server has 16GB physical). I suspect that AWE has not been configured at all, and therefore the server will be significantly under-utilised (remembering that the OS is 64 bit) if SQL is simply using a 32bit address space. Is this assumption correct? I feel the server should have SQL reinstalled as 64 bit in order to fully utilise the hardware platform, however it is currently heavily in production; this will be no easy task. I suspect we may just have to configure AWE correctly and let it be for the time being (Unless this is a bad idea?). I apologise that this question is a little vague/lost; I'm no SQL expert, just trying to get a handle on what's going on here.
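
    On the AWE point, a hedged T-SQL sketch of what enabling it on a 32-bit SQL Server 2005 instance typically looks like (the 12 GB max server memory figure is only an example; AWE also requires the "Lock pages in memory" right for the service account):

        -- let 32-bit SQL Server address memory beyond the 32-bit limit via AWE
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'awe enabled', 1;
        EXEC sp_configure 'max server memory', 12288;  -- MB, example value
        RECONFIGURE;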

    Read the article

  • Making it Easier for Older Users to Login to Multiple Accounts

    - by Mike Hagstrom
    I currently do consulting for a small business that has multiple applications that they need to log in to. I'm trying to get them to start using Basecamp and Zendesk to make all of our lives easier when it comes to collaboration on big projects and quick helpdesk ticket items. However, I have recently been informed that it is difficult for them to remember all of these websites to log in to, even though the login information is the same. Right now they have to log in to: Windows login, Gmail. I want them to additionally log in to: Basecamp, Zendesk. This is just a generation-or-two gap between myself and them, so I'm wondering what others do to solve these problems. Is there some way we could configure USB thumbdrives with LastPass or something on them, so that when plugged into the computer they automatically log the user into their Windows account, and then when they visit, say, Basecamp they are automatically logged into that as well? I think the security risk (of a lost thumbdrive) is well worth the ability to use these extra applications. Unless anyone else has other ways of making it easier for users to log in to multiple sites?

    Read the article

  • Problems Installing slapd On Ubuntu Server 11.10

    - by Zach Dziura
    I know that there's an Ubuntu-specific StackExchange website, but I thought that I'd ask here because it's a server-specific question. If I'm wrong in my logic... Well, you people are better at this than I am! O=) On with the show! I'm in the process of installing Oracle Database 11g R2 Standard Edition onto Ubuntu Server 11.10. I found a guide on the Oracle Support Forums that walks you through the process fairly easily. Unfortunately, I'm running into issues installing one particular dependency: slapd. When I go to install it, I get this error message: (Reading database ... 64726 files and directories currently installed.) Unpacking slapd (from .../slapd_2.4.25-1.1ubuntu4.1_amd64.deb) ... Processing triggers for man-db ... Processing triggers for ufw ... Processing triggers for ureadahead ... Setting up slapd (2.4.25-1.1ubuntu4.1) ... Usage: slappasswd [options] -c format crypt(3) salt format -g generate random password -h hash password scheme -n omit trailing newline -s secret new password -u generate RFC2307 values (default) -v increase verbosity -T file read file for new password Creating initial configuration... Loading the initial configuration from the ldif file () failed with the following error while running slapadd: str2entry: invalid value for attributeType olcRootPW #0 (syntax 1.3.6.1.4.1.1466.115.121.1.15) slapadd: could not parse entry (line=1051) dpkg: error processing slapd (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: slapd E: Sub-process /usr/bin/dpkg returned an error code (1) After many Google searches and much forum trawling, I have yet to find a definitive answer as to what's going wrong. The error messages seem straightforward enough, but I have no idea how to debug this. Can anyone offer some assistance? Again, if I'm asking in the wrong place, I apologize. If I'm indeed asking properly, then thank you for any and all help!
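
    The failing step is slapadd choking on the olcRootPW value in the generated LDIF; a common workaround (an assumption on my part, not something confirmed in the post) is to supply a properly hashed admin password and rerun the package configuration, roughly:

        # generate a salted SSHA hash for the admin password
        slappasswd -s 'MySecretPassword'
        # rerun the slapd configuration step so the initial ldif is regenerated
        sudo dpkg-reconfigure slapd
        # if dpkg is still stuck on the half-configured package
        sudo dpkg --configure -a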

    Read the article

  • Privacy, VPN and routers

    - by user123189
    Ever since this ACTA push, things have started to heat up around torrents and privacy. I am using Tribler now, but it is not secure enough for me. Not enough privacy. In the past I used a Swedish PPTP VPN connection. What I observed is that when the VPN connection was down, Internet traffic wasn't cut off; rather, the downloads continued, this time with my real IP, stripping away my protection. 1st: How do I enforce a VPN connection that cuts all traffic when it goes down? That is, the moment the connection drops, all internet traffic should cease as if I'd pulled the network plug. 2nd: Is PPTP good enough, or should I ask for SSTP or IKEv2? 3rd: Should I disable IPv6? Is the VPN no longer private if I keep IPv6 active? I've 'heard some stuff' about dual-VPN routers being able to improve privacy, but nothing about how to configure one for such a task. 4th: Is there any kind of "black box" hardware that can be used to hide my IP, encrypt traffic and so on?
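
    On the first question (cutting all traffic when the VPN drops), one common approach on a Linux client is a firewall policy that only permits outbound traffic over the tunnel interface, so nothing leaks via the real IP when the tunnel is down. A minimal iptables sketch, assuming the tunnel comes up as ppp0 and using a placeholder VPN endpoint address:

        # allow the VPN control connection itself, block everything else off-tunnel
        iptables -P OUTPUT DROP
        iptables -A OUTPUT -o lo -j ACCEPT
        iptables -A OUTPUT -o ppp0 -j ACCEPT                 # traffic inside the tunnel
        iptables -A OUTPUT -d 198.51.100.10 -j ACCEPT        # placeholder VPN endpoint
        iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT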

    Read the article

  • How can I get web pages from sub.a.com using url sub.b.com?

    - by Steven
    I have developed www.mysite.com. This site can be "integrated" into my partner's website. What I do is create partner1.mysite.com and replace my header and footer with my partner's header and footer, and replace some CSS styling. This should make it as transparent as possible for the user, so that they think they are still browsing my partner's website. I see two ways to accomplish this: 1. My partner uses an IFrame to show the content from partner1.mysite.com 2. My partner creates a subdomain and points it to my subdomain. Solution 1 is easy, but I'm not sure how search engines like this, so I will try solution 2. QUESTION Can I use mysite.partner1.com but read content from partner1.mysite.com? I don't want to forward / redirect users to partner1.mysite.com. It's important that the URL is mysite.partner1.com / mysite.partner1.com/some/page. Is this possible? For testing, I have Apache configuration more or less like this: NameVirtualHost 10.0.0.17 <VirtualHost 10.0.0.17> DocumentRoot D:/wamp/www/mysite/ ServerName mysite.com </VirtualHost> <VirtualHost 10.0.0.17> DocumentRoot D:/wamp/www/mysite/ ServerName site1.mysite.com </VirtualHost> // Since this is on my localhost, I also configure site1 here <VirtualHost 10.0.0.17> DocumentRoot D:/wamp/www/site1/ ServerName site1.com </VirtualHost> <VirtualHost 10.0.0.17> ServerName mysite.site1.com --> DO SOME SORT OF FORWARDING HERE <-- </VirtualHost>
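
    What the "DO SOME SORT OF FORWARDING HERE" placeholder appears to be after is a reverse proxy, so mysite.site1.com serves the content of site1.mysite.com without redirecting the browser. A sketch, assuming mod_proxy and mod_proxy_http are enabled (domain names follow the question's naming):

        <VirtualHost 10.0.0.17>
            ServerName mysite.site1.com
            ProxyPreserveHost Off
            ProxyPass        / http://site1.mysite.com/
            ProxyPassReverse / http://site1.mysite.com/
        </VirtualHost>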

    Read the article

  • Join multiple filesystems (on multiple computers) into one big volume

    - by jm666
    Scenario: I have 10 computers, each with 12x2TB HDDs (currently) in a raidZ2 (10+2) configuration, so in each computer I have one approx. 20TB volume. Now I need those 10 separate computers (separate raid groups) joined into one big volume. What is the recommended solution? I'm thinking about FCoE (10Gb ethernet) - so, buying an FCoE (10Gb ethernet) card for each computer - and what more do I need on the hardware side? (Probably another computer, and an FCoE switch, like a Cisco Nexus?) The main question is: what do I need to install and configure on each computer? Currently they run freebsd/raidz2, but it is possible to change them to Linux/Solaris if needed. Any helpful resource that talks about how to build big volumes from smaller raid groups (on the software side) is very welcome. So, what OS, what filesystem, what software - etc. In short: I want to get approx. 200TB of storage (in one filesystem) from already existing computers/storage. I don't need fast writes, but I need good performance when reading data (as a big fileserver), and it should work transparently, so when storing data I don't want to care about which computer the data goes onto (i.e. not 10 mountpoints, but one big logical filesystem). Thanks.

    Read the article

  • Turn computer into DAS (Direct Attached Storage)

    - by Damon
    Can we build a direct-attached storage (DAS) unit by taking a computer/server, adding an HBA, and installing some appropriate software? We would use Debian as the host OS for both the DAS and the server. If so, what software do we use? And do we simply need an HBA for the DAS and the server? Or do we need more hardware? The goal is to use an older server that does not have enough room for drives but does have ECC memory, server processors, redundant power supplies, dual NICs, etc. Then find any boxes, server or not, the key being enough room for 8-12 drives, fans, etc., and turn them into DAS units; build two of these and have them connected to the server. Eventually we want to have two servers using DRBD and associated services like Heartbeat and Pacemaker to create an HA setup for our server(s), but that will take a long time to configure since I have no experience with anything related to DRBD (yet) and have a learning curve to get past, not to mention the additional cost of more hardware (two servers vs one).

    Read the article

  • How can I change the binding order of network adapters in Windows 7?

    - by Chris Farmer
    The end goal here is that I am trying to install an Oracle 10g server on my Windows 7 x64 dev box. I use DHCP, and the Oracle installer is throwing up this warning: Checking Network Configuration requirements ... Check complete. The overall result of this check is: Failed <<<< Problem: The install has detected that the primary IP address of the system is DHCP-assigned. Recommendation: Oracle supports installations on systems with DHCP-assigned IP addresses; However, before you can do this, you must configure the Microsoft LoopBack Adapter to be the primary network adapter on the system. See the Installation Guide for more details on installing the software on systems configured with DHCP. I have installed the loopback adapter, but I am not sure how to make it the primary network adapter. I see this Microsoft KB article on the subject but it's Windows XP-oriented, and I can't seem to find a comparable one for Windows 7. Some of the options it talks about don't seem to be present in the views of the adapters that I see. So, how can I make the loopback adapter become the primary adapter?

    Read the article

  • How to restart boot Windows 7 after upgrading to a SSHD on SONY VAIO with recovery discs?

    - by Boris Okun
    The original HDD on my Sony VAIO still works, but has a damaged sector 0, and I was constantly prompted to replace the HDD because of the imminent failure. I created recovery discs as instructed and used a USB external HDD for a complete backup (including a Windows image backup). After installing the SSHD and using the recovery discs to reload Windows and boot, I am getting the Windows welcome screen. Right after that, I'm getting the following message: Windows couldn't complete the installation. To install Windows on this computer, restart the installation. I have tried repeating the process many times in all kinds of different ways and I still receive the same message. Also, when I tried the other option offered, changing the partitioning, I got the message: Windows Setup could not configure Windows to run on this computer's hardware. All troubleshooting for hardware and CPU came out solid. I tried to load the image backup from the external drive, but can't load the driver. The computer doesn't see it. Does anyone have a clue, or has anyone encountered something similar?

    Read the article

  • Printing on Remote Desktop session

    - by Arindam Banerjee
    We have to connect to a Windows 2008 server using Remote Desktop from a Windows XP machine. A barcode printer is attached to the XP machine and the printer is shared as a local resource in the RDC session to the server. On the server we have to print from an application which prints either to an LPT port or to a shared printer (UNC path). For this I configure printer pooling, combining LPT1 and the (Terminal Server) TSxxx port, as I don't know of an option to access the terminal session printer via a UNC path. But I have the following issues: every time I connect to a remote session, the printer from my local Win XP machine shows up in Printers and Faxes on the Win 2008 Server (Terminal Server), but I am not allowed to manage the Win XP printer from the Terminal Server to enable pooling. On the server I have to change the security permission every time and then enable print pooling. How can I keep the security permission unchanged? Secondly, I created a batch file to enable print pooling: rundll32 printui.dll,PrintUIEntry /Xs /n "Printer (from CLIENT)" Portname "LPT1:,TS005" But every time, the printer in the terminal session connects on a different terminal session port. Is there any solution to make the TS port fixed? Help from anyone will be highly appreciated.

    Read the article

  • Managing persistent data on an Amazon EC2 web server

    - by Derek
    I've just started trying out Amazon's EC2 service for running an ASP.NET web app which uses a SQL Server 2005 Express database. I have some questions about how to configure and operate it best for reliability, and I'm hoping to tap into some collective wisdom here as this is my first foray into EC2. Here's how I have it configured currently: OS: Windows 2003 SQL Server Express 2005 Web content stored on an EBS Volume (E Drive) Database Data stored on an EBS Volume (E Drive) Database backups to "C Drive" and then copied off to S3. Elastic IP Address attached to the production instance. Now when I make a change to the OS configuration, I make a new AMI using the bundle feature. Unfortunately, I found that this results in significant downtime while the bundle is created and the new instance is started. It seems that when I'm ready to make a new AMI, I should: Start up a new temporary instance. Detach the EBS volume from the production instance. Detach the IP Address from the production instance. Attach the IP Address to the temporary instance. Attach the EBS volume to the temporary instance. Create an AMI from the production instance. After the production instance restarts, reverse the attach/detach steps to put it back in production. Is this the right order of events to prevent any chance of corrupting the EBS volume? Will the EBS volume become corrupt if I detach it while a database write is taking place? Should I snapshot the EBS volume of the production instance and attach it to the temporary instance instead? Or could taking a snapshot of the EBS volume while it's in use cause corruption? Any suggestions to improve the reliability and operations?
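
    For reference, the detach/attach/re-associate steps above map onto a handful of API calls; a rough sketch using the modern AWS CLI (which post-dates the question - the original EC2 API tools had equivalent commands), with all IDs and addresses as placeholders:

        # move the data volume and the elastic IP to the temporary instance
        aws ec2 detach-volume --volume-id vol-0123456789abcdef0
        aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
            --instance-id i-0temp0000000000000 --device xvdf
        aws ec2 associate-address --instance-id i-0temp0000000000000 --public-ip 203.0.113.10

        # snapshot the volume first if you want a consistent point-in-time copy
        aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-AMI backup"

        # build the new AMI from the (now quiesced) production instance
        aws ec2 create-image --instance-id i-0prod0000000000000 --name "web-ami-20240101"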

    Read the article

  • Deploying a Django application in a virtual Ubuntu Server

    - by mfsaint
    I have a virtualbox machine running Ubuntu Server 10.04LTS. My intention is for this machine to work like a VPS; this way I can learn and prepare for when I get a VPS service. Apache+mod_wsgi for deploying the Django app seems the right choice to me. I have the domain (marianofalcon.com.ar) but nothing else, no DNS. The problem is that I'm pretty lost with all the deployment stuff. I know how to configure mod_wsgi (with the django.wsgi file) and apache (creating a VirtualHost). Something is missing and I don't know what it is. I think that I lack networking skills and that's the big problem. Trying to host the app on a virtualbox adds some difficulty because I don't really know what IP to use. This is what I've got: file placed at: /etc/apache2/sites-available: NameVirtualHost *:80 <VirtualHost *:80> ServerAdmin [email protected] ServerName www.my-domain.com ServerAlias my-domain.com Alias /media /path/to/my/project/media DocumentRoot /path/to/my/project WSGIScriptAlias / /path/to/your/project/apache/django.wsgi ErrorLog /var/log/apache2/error.log LogLevel warn CustomLog /var/log/apache2/access.log combined </VirtualHost> django.wsgi file: import os, sys wsgi_dir = os.path.abspath(os.path.dirname(__file__)) project_dir = os.path.dirname(wsgi_dir) sys.path.append(project_dir) project_settings = os.path.join(project_dir,'settings') os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings' import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler()
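
    On the "what IP to use" point: if the VirtualBox guest uses NAT networking, the host cannot reach the guest's Apache directly. One option (a sketch, assuming NAT and a VM named "ubuntu-server") is to forward a host port to the guest's port 80; alternatively, switching the VM to bridged networking gives it its own LAN address:

        # forward host port 8080 to the guest's port 80 (VM must be powered off)
        VBoxManage modifyvm "ubuntu-server" --natpf1 "http,tcp,,8080,,80"
        # then browse to http://localhost:8080/ on the host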

    Read the article

  • When should NTPd broadcast/broadcastclient be used instead of client/server or peer modes?

    - by Luke404
    The NTP daemon is often used in its simplest mode, which is client/server: you specify one or more server directives in your ntp.conf and your clients will use those servers. In addition to that, when you run your own NTP servers, it is good practice to peer them together, so if one of them loses connectivity to its upstream servers, it will get time from its peers. But NTPd can also work with broadcast and/or multicast distribution of time data, with the documentation stating: "broadcast and multicast modes are intended for configurations involving one or a few servers and a possibly very large client population". The documentation also says elsewhere: "It is possible and frequently useful to configure a host as both broadcast client and broadcast server. A number of hosts configured this way and sharing a common broadcast address will automatically organize themselves in an optimum configuration based on stratum and synchronization distance." I can see one obvious administrative benefit: you don't have to manually specify and update your list of NTP servers in the clients' ntp.conf, so to me it looks tempting to use broadcast mode even for a small client population (say 5+ clients with 3~4 servers). I expect network traffic to be a little higher with broadcasts instead of client/server associations, but given the usual gigabit ethernet LAN the impact should be negligible unless you have a very very large number of hosts in the same broadcast domain. At the end of the day, when should broadcast mode be used or avoided? Are there pros and cons I haven't seen?
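
    As a concrete illustration of the two modes, a minimal ntp.conf sketch (addresses and hostnames are placeholders; broadcast client setups normally also want authentication, or an explicit "disable auth", to be considered):

        # --- on the LAN time servers ---
        server 0.pool.ntp.org iburst
        server 1.pool.ntp.org iburst
        peer ntp2.example.lan            # peer the local servers together
        broadcast 192.168.1.255          # announce time on the local subnet

        # --- on the clients ---
        broadcastclient                  # listen for broadcast servers instead of listing them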

    Read the article

  • Squid reverse proxy array - siblings not communicating with each other

    - by V. Romanov
    I want to set up 2 squid servers to act as reverse proxy and cache for a webserver on our intranet. The load balancing will be done with DNS round robin or just different mappings for different clients. The thing is, I want both servers to try and contact each other to see if they have the object required in cache before contacting the webserver for it (the network that servers the webserver is the bottleneck and I'm trying to eliminate it) Both squids are configured the same, here are the relevant config lines : acl dvr1_cache_it_best_tv_com dstdomain dvr1.cache.it.best-tv.com acl squid1_it_best_tv_com dstdomain squid1.it.best-tv.com acl squid2_it_best_tv_com dstdomain squid2.it.best-tv.com http_access allow dvr1_cache_it_best_tv_com http_access allow squid1_it_best_tv_com http_access allow squid2_it_best_tv_com http_access allow all http_port 8081 accel defaultsite=dvr1.cache.it.best-tv.com cache_peer dvr1.origin.it.best-tv.com parent 80 0 no-query originserver name=Proxy_dvr1_origin_it_best_tv_com cache_peer squid1.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid1_it_best_tv_com cache_peer squid2.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid2_it_best_tv_com cache_peer_access Proxy_dvr1_origin_it_best_tv_com allow dvr1_cache_it_best_tv_com cache_peer_access Proxy_squid1_it_best_tv_com allow squid1_it_best_tv_com cache_peer_access Proxy_squid1_it_best_tv_com allow squid2_it_best_tv_com cache_peer_access Proxy_squid1_it_best_tv_com allow dvr1_cache_it_best_tv_com cache_peer_access Proxy_squid2_it_best_tv_com allow squid1_it_best_tv_com cache_peer_access Proxy_squid2_it_best_tv_com allow squid2_it_best_tv_com cache_peer_access Proxy_squid2_it_best_tv_com allow dvr1_cache_it_best_tv_com just to make it clear - dvr1.cache is the alias for the proxy servers. dvr1.origin is the web server. Both servers work, both serve content and cache it and work fine. However, when I clear the cache on one server and then access it, it gets the content from the parent (DVR1_ORIGIN) instead of going to the sibling squid. What did I configure wrong? Or perhaps I don't understand the architecture correctly? I read the squid manuals but as far as I see i did it all by the book and yet it doesn't work right. Any help will be appreciated!
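
    Two details worth double-checking (assumptions on my part, not something established in the post): sibling cache lookups ride on ICP, so each squid needs its ICP port open and reachable by the other; and the cache_peer definitions use names like Proxy_Squid1_it_best_tv_com while the cache_peer_access lines reference Proxy_squid1_it_best_tv_com (different capitalisation), which may or may not matter depending on the squid version. A rough sketch of the ICP side:

        # in squid.conf on both siblings
        icp_port 3130
        icp_access allow all        # or restrict to the sibling's address

        # verify the peer/sibling state from the cache manager
        squidclient -p 8081 mgr:server_list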

    Read the article

  • Want to send my neighbors to a certain website via DNS, but don't have a clue how. [closed]

    - by Akku
    My neighbors have an unsecured WiFi router, and over the administration web UI of the router I could log in, as there was no password set. I don't know which of my neighbors these are, and I'd like to configure their router so that they come to my website instead of Google and Facebook, where I set up a warning in German. It's this page: http://www.abelssoft.de/liebenachbarn/ Basically, I just want to see if and how this is possible - I'm aware that I could just set the WiFi password and have them call their network provider to reset the thing, but I really want to see if this could work, because it would be a way cooler effect :-). So this router interface doesn't allow custom redirects, only filters. BUT I can set the DNS that is used, so I thought there might be the possibility to set up a custom DNS on a server, set it as the main DNS and redirect from Google to the URL above. Is this possible? If so, please try to detail a way that I have to go through to achieve this. Note that I'm not the super-Linux-skilled person; I have a dyndns account and a Windows machine it points to, as well as an Apache+Tomcat if that helps. I could also set up virtual machines on the Windows server and redirect to those using a different port. Or is there maybe a webservice that provides such a custom DNS?

    Read the article

  • IIS 7, FastCGI, PHP and custom php.ini files

    - by Marlon
    I'm running PHP 5.3, FastCGI, and IIS 7 on Windows Server 2008. I have a site for which I would like to configure its own php.ini settings, but things aren't working as expected. I am following the tutorial located here. This is what I have done so far: 1) Configured a new website with its own AppPool. 2) Selected PHP 5.3.6 from the PHP Manager available on the website home on IIS (not the web server home, which sets the global version of PHP) 3) Added the following lines to the <fastCgi> section of the applicationHost.config file located at system32/inetsrv/config <application fullPath="C:\Program Files (x86)\PHP\v5.3\php-cgi.exe" arguments="-d open_basedir=C:\inetpub\wwwroot\kickasswebsite.com" maxInstances="4" idleTimeout="300" activityTimeout="30" requestTimeout="90" instanceMaxRequests="200" protocol="NamedPipe" queueLength="1000" flushNamedPipe="false" rapidFailsPerMinute="10"> <environmentVariables> <environmentVariable name="PHPRC" value="c:\inetpub\wwwroot\kickasswebsite.com" /> </environmentVariables> </application> 4) I then create a php.ini file located in C:\inetpub\wwwroot\kickasswebsite.com (the location of the root of the website) register_globals = on 5) I then run test.php, which simply outputs everything the call to phpinfo() returns. At this point, I observe that the global setting for register_globals = off (as it should be), but the local setting for register_globals is also off, even though I specified it differently in the php.ini file I created at the root of the site. Furthermore, I see these settings in the phpinfo() output: Configuration File (php.ini) Path C:\Windows; Loaded Configuration File C:\Program Files (x86)\PHP\v5.3\php.ini; Scan this dir for additional .ini files (none); Additional .ini files parsed (none). What am I messing up on, or is there a different way to go about this?
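
    A possible alternative to the PHPRC environment variable (offered as an assumption, not a verified fix) is PHP's per-path INI sections, which are honored under CGI/FastCGI since PHP 5.3: keep one global php.ini and add a [PATH=...] block for the site, e.g.:

        ; in C:\Program Files (x86)\PHP\v5.3\php.ini
        [PATH=C:\inetpub\wwwroot\kickasswebsite.com]
        register_globals = On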

    Read the article

  • Freebsd jail for an small company - checklist - what shouldn't forget

    - by cajwine
    Looking for a checklist for a "small company freebsd/jail server". I have a pretty common starting point: a FreeBSD jail (remote/headless) for the company: public web, email and ftp server, and a private (maybe in the future partially public) wiki (foswiki); 4 physical persons (6 email addresses) + one admin (the others will never use ssh); the usual hardening already done on the host side (pf, sshguard etc.). My major components are: dovecot, exim, apache22, proftpd, perl5.14. Looking for a checklist of what I shouldn't forget. My plan: openssl self-signed certificates for exim, dovecot and proftpd (wildcard keys); an openssl self-signed certificate for apache (later I will go for a "trusted-signed" key). My questions are: Is it "good practice" to have one pair of wildcard SSL certificates for many programs (exim, dovecot, proftpd), or should I generate one key for each service? Should I add all 4 persons as standard (unix) users, or should I go with virtual users? Asking because: I have only a small count of users, and it is simpler to configure everything (exim, dovecot) for local users ($HOME/Maildir), plus the ability to set $HOME/.forward/vacation and so on. Are there some (special) things I should consider? (e.g. maybe in the future we will want to set up our own webmail - will that make any difference?) Any other recommendations? Thank you, hoping that this question fits into the http://serverfault.com/faq under: Server and Business Workstation operating systems, hardware, software; Operations, maintenance, and monitoring. Looking for a checklist, but please explain why you're recommending it. See Good Subjective, Bad Subjective. Related: What's your suggested mail server configuration for a FreeBSD server?
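
    On the wildcard certificate point, a minimal openssl sketch for generating a self-signed wildcard key/cert pair that could be shared by exim, dovecot and proftpd (the domain name and paths are placeholders; whether sharing one pair is wise is a separate question):

        # 10-year self-signed wildcard certificate, no passphrase on the key
        openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
            -subj "/CN=*.example.com" \
            -keyout /etc/ssl/private/wildcard.key \
            -out /etc/ssl/certs/wildcard.crt
        chmod 600 /etc/ssl/private/wildcard.key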

    Read the article

  • Can't get DNS Alias work on Ubuntu 10.04 with Apache 2

    - by Johnny
    I want to use the DNS Alias to configure one of my domain pointing to a specific directory on the server. Here is what I've done: Change the IP address in domain setting, and it works $ ping www.example.com PING example.com (124.205.62.xxx): 56 data bytes 64 bytes from 124.205.62.xxx: icmp_seq=0 ttl=48 time=53.088 ms 64 bytes from 124.205.62.xxx: icmp_seq=1 ttl=48 time=52.125 ms ^C --- example.com ping statistics --- 2 packets transmitted, 2 packets received, 0.0% packet loss round-trip min/avg/max/stddev = 52.125/52.606/53.088/0.482 ms Add sites-available and sites-enabled $ ls -l /etc/apache2/sites-available/ total 16 -rw-r--r-- 1 root root 948 2010-04-14 03:27 default -rw-r--r-- 1 root root 7467 2010-04-14 03:27 default-ssl -rw-r--r-- 1 root root 365 2010-06-09 18:27 example.com $ ls -l /etc/apache2/sites-enabled/ total 0 lrwxrwxrwx 1 root root 26 2010-06-09 15:46 000-default -> ../sites-available/default lrwxrwxrwx 1 root root 33 2010-06-09 18:17 001-example.com -> ../sites-available/example.com But it doesn't work and when I open the browser for www.example.com, it shows an 111 error: The following error was encountered: Connection to 124.205.62.48 Failed The system returned: (111) Connection refused Here is how example.com's config: $ cat /etc/apache2/sites-enabled/001-example.com <virtualhost *:80> DocumentRoot "/vhosts/example.com/htdocs/" ServerName www.example.com ServerAlias example.com <Location /> Order Deny,Allow Deny from None Allow from all </Location> #Include /etc/phpmyadmin/apache.conf ErrorLog /vhosts/example.com/logs/error.log CustomLog /vhosts/example.com/logs/access.log combined Could you please tell me how to solve this?

    Read the article
