Search Results

Search found 19928 results on 798 pages for 'multiple constructors'.


  • Multiple Homed Windows 2008 Server / Windows 7 Client

    - by Daniel Scott
    I have a small Windows 2008 network with some Windows 7 clients. The clients are both laptops with docking stations, and I would like them to communicate with the Windows 2008 server (for file sharing) through the wired network whilst they're docked. Internet connectivity for all machines (clients and server) is via a wireless LAN, so the wireless adapter in the Windows 7 clients stays active while they're docked. When the laptops are undocked, it would be nice to still be able to contact the Windows 2008 server for print sharing (and slower file sharing) - hence the server also being on the wireless LAN.

    The Windows 2008 server runs Active Directory, DHCP and DNS. It controls DHCP leases on the wired network and holds the DNS records for "myserver.mycompany.local", which is what the file-sharing clients connect to. Ideally I'd like the DNS records to return the wired IP first, so that this is the address the laptops attempt initially - but there doesn't seem to be a way to do that? At present the server's wireless-LAN IP comes out of an nslookup above the wired-LAN IP. The multi-homing works perfectly - but in the wrong order! Switch on the wireless LAN and ping myserver, and it goes to the wireless IP. Disable the wireless on the client and ping again, and after a couple of seconds it starts pinging the wired address. Does anyone have any suggestions on how to make this work in a predictable order - or even whether it can work at all?

    Alternative 1? If it can't work, would this: remove the wireless adapter from the server, put a wireless router/bridge on the wired network (set up to route to/from the wireless LAN's subnet), then configure the clients with two routes to the (now) single IP of the server, with metrics favouring direct communication over the wired LAN first?

    Alternative 2? Should I instead single-home the laptops so all of their connectivity is via the wired LAN while they're docked (and route via the Windows 2008 server - or a dedicated wireless bridge/router)? My concern here is that I'd like undocking to be seamless: if the clients are in the middle of downloading something from the internet, I wouldn't want it interrupted as they switch IP addresses onto the wireless network. Perhaps this isn't the case and I'm worried over nothing? Any thoughts? :)

    UPDATE: I seem to have cracked it - at least DNS entries come out in the order I hoped for, and pinging the server with various combinations of wired, wireless and both interfaces enabled uses the IP I want. I set the binding order of the NICs on the server (which is acting as domain controller, DHCP and DNS server) so that the wired NIC comes before the wireless adapter (Start -- type "Network Interfaces" -- select "View Network Connections" -- press Alt to show the classic dropdown menus -- Advanced -- Advanced Settings). Now an nslookup (from the client) of the server's hostname returns the wired IP first, followed by the wireless IP, and the wired IP seems to be used whenever it's contactable. Incidentally, the metrics on the wired and wireless routes (on the client) also favour the wired LAN (based on Windows' automatically assigned metrics) - but this was always the case, even when I was having trouble getting the wired IP favoured. I'm not entirely sure whether this is coincidence, or whether a DNS server running on Windows, handing back IP addresses for itself, really does take the binding order of its own network interfaces into account. It would be interesting to hear from someone who can confirm or deny that (or confirm that the binding order on the server plays a role for some other reason).
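    A related knob worth checking (a suggestion added here, not something from the post): Windows DNS can order answers by the querying client's subnet when "local net priority" (netmask ordering) is enabled, which helps only if the wired clients and the wired server NIC share a subnet. A quick sketch for inspecting and enabling it on the DNS server:

        rem inspect the current DNS server settings, including LocalNetPriority
        dnscmd /Info

        rem enable netmask ordering so clients get same-subnet A records first
        dnscmd /Config /LocalNetPriority 1

    This reorders answers per client rather than globally, so it is independent of the NIC binding-order fix described above.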

  • Set up sshd to handle multiple key pairs

    - by Warlax
    Hey guys, I am trying to set up my sshd to accept users that do not have a system user account. My approach is to use DSA public/private key pairs. I generated a key pair ($ ssh-keygen -t dsa), copied id_dsa.pub to the server machine where sshd runs, and appended the line from id_dsa.pub to ~/.ssh/authorized_keys of the single existing system user account I will use for every 'external' user. I tried to ssh as the 'external' user into the machine where I set up the authorized_keys and failed miserably. What am I missing here? Thanks.
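    Two things worth ruling out first (suggestions added here, not from the post): sshd silently ignores authorized_keys when its permissions are too open, and the login name given to ssh must be the shared system account rather than the 'external' user's own name, since sshd only knows about the system account. A sketch, where shareduser and server are placeholders:

        # on the server: key files must not be group/world writable
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # on the client: name the shared account explicitly and watch the
        # key exchange in the debug output; -i selects the private key offered
        ssh -v -i ~/.ssh/id_dsa shareduser@server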

  • Troubleshooting the source of heavy resource usage on a Windows Server 2008 machine running multiple sites

    - by batman_man
    Hi, I am running about 10 ASP.NET websites on a hosted virtual server. The server runs Server 2008; each website is backed by its own database running on SQL Server 2008 on the same box. Lately the box has seemed really slow. The only kind of discovery I could think of doing was looking in Task Manager, where I can see w3wp and sqlservr.exe jumping to 40% CPU usage every 5-10 seconds. What steps can I take to determine which of my websites is taking these resources and/or which database is getting hit the most? I have of course SSMS installed on the machine as well. As you can tell, my sysadmin skills are very, very limited - any help would be much appreciated.
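    A starting sketch (assumptions: IIS7's appcmd is present and the SQL DMVs are accessible to your login): map the busy w3wp.exe to its application pool, then ask SQL Server which cached statements burn the most CPU.

        rem list IIS worker processes with their PIDs and app pool names
        %windir%\system32\inetsrv\appcmd list wp

    And from SSMS:

        -- top CPU consumers currently in the plan cache
        SELECT TOP 10
            DB_NAME(st.dbid)           AS database_name,
            qs.total_worker_time       AS total_cpu_microseconds,
            qs.execution_count,
            SUBSTRING(st.text, 1, 200) AS query_text_start
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        ORDER BY qs.total_worker_time DESC;

    st.dbid comes back NULL for ad-hoc SQL, so treat the database column as a hint rather than a verdict.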

  • Multiple gcc on Mac OS X

    - by snihalani
    I did a port install of gcc 4.7.1 (MacPorts gcc47 4.7.1_2). I named the executable g+ and placed it in one of my $PATH directories. I use gcc 4.7.1 when I need the C++11 standard. I haven't changed the original g++, so as not to mess up Xcode. I am using eclipse-cdt and running the make all from the window. It's giving me:

        20:12:40 **** Build of configuration Default for project 2804-hw2 ****
        make all
        g+ -c -Wall -std=c++11 main.cpp -o main.o
        make: g+: No such file or directory
        make: *** [main.o] Error 1
        20:12:40 Build Finished (took 89ms)

    Here is my makefile:

        CC=g+
        CFLAGS=-c -Wall -std=c++11
        LDFLAGS=
        SOURCES=main.cpp Vector3D.cpp
        OBJECTS=$(SOURCES:.cpp=.o)
        EXECUTABLE=exec

        all: $(SOURCES) $(EXECUTABLE)

        $(EXECUTABLE): $(OBJECTS)
            $(CC) $(LDFLAGS) $(OBJECTS) -o $@

        .cpp.o:
            $(CC) $(CFLAGS) $< -o $@

        clean:
            rm $(EXECUTABLE) $(OBJECTS)

    How do I make eclipse detect my g+?
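    Eclipse launched from the Finder or Dock does not inherit a login shell's $PATH, so one workaround (a sketch; assumes the default MacPorts prefix) is to hard-code the compiler's full path in the makefile:

        # MacPorts installs its binaries under /opt/local/bin
        CC=/opt/local/bin/g+

    Alternatively, starting the Eclipse executable from a terminal session (so it inherits that shell's PATH) lets the bare g+ resolve; the exact path to the executable inside Eclipse.app varies by install.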

  • Multiple PHP SAPI configuration

    - by DTest
    I'm trying to build PHP for use as an Apache shared module (--with-apxs2) but also with the php-cgi binary (FastCGI) on Mac OS X 10.6. I'm using this ./configure:

        ./configure --prefix=/usr/local/PHP \
          --with-apxs2=/usr/local/apache/bin/apxs \
          --disable-ipv6 \
          --enable-cgi \
          --with-curl \
          --with-mysqli=/usr/local/mysql/bin/mysql_config \
          --with-openssl=/usr \
          --enable-ftp \
          --enable-shared \
          --enable-soap \
          --enable-sockets \
          --enable-zip \
          --with-zlib-dir

    It builds the Apache php5.so module just fine, but in /usr/local/PHP/bin there is no php-cgi file. If I build it without the --with-apxs2 option (and indeed, I don't even need the --enable-cgi option), the php-cgi file gets built with no problems.

    Background on my setup: PHP 5.3.4, Apache 2.2.14, Mac OS X 10.6, Tomcat with JavaBridge (which is why I need the php-cgi file). Without the apxs2 option, /usr/local/php/bin/php -v produces:

        PHP 5.3.4 (cli) (built: Dec 21 2010 21:35:14)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    and /usr/local/php/bin/php-cgi -v produces:

        PHP 5.3.4 (cgi-fcgi) (built: Dec 21 2010 21:35:12)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    My question is: what am I not understanding about PHP SAPIs that prevents building the two modules at the same time? Also, can I build it --with-apxs2 the first time, then make clean and rebuild in the same PHP directory /usr/local/php for the php-cgi files without issue?
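    For what it's worth, the commonly cited workaround (a sketch, on the assumption that this PHP release only installs one of the two SAPI binaries per pass) is a two-pass build into the same prefix, exactly as the question hints:

        # pass 1: Apache module
        ./configure --prefix=/usr/local/PHP --with-apxs2=/usr/local/apache/bin/apxs ...
        make && sudo make install

        # pass 2: CGI/FastCGI binary
        make clean
        ./configure --prefix=/usr/local/PHP --enable-cgi ...
        make && sudo make install

    Both passes should repeat the same extension flags (--with-curl, --with-mysqli, and so on) so the two SAPIs behave identically.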

  • Hyper-V: multiple virtual machines with >2TB volumes from one RAID

    - by wurlog
    I have a server with two RAIDs. RAID 0: 2x 1TB. RAID 6: 8x 2TB. The first RAID I used for the Hyper-V installation itself. The virtual machines should use the RAID 6, but how can I configure that? I need at least one file server with most of the disk space (maybe a second). But every VHD has a maximum of 2 TB, and I can't give a VM the volume directly, because other virtual machines have to be able to access the RAID 6 as well. What do I do?
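    One way to slice it (a sketch with hypothetical paths; assumes Server 2008 R2, whose diskpart can create VHDs): keep the RAID 6 as one host volume, carve out several fixed VHDs just under the 2040 GB VHD limit for the file server, and span them inside the guest if a single logical volume is needed.

        rem diskpart on the host; maximum is in MB
        create vdisk file="D:\VHDs\fileserver-data1.vhd" maximum=2080000 type=fixed
        create vdisk file="D:\VHDs\fileserver-data2.vhd" maximum=2080000 type=fixed

    The space left on the host volume remains available for the other VMs' VHDs, which is exactly what passing the whole RAID through to one VM would prevent.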

  • How to get average of multiple time series

    - by Supun Kamburugamuva
    I have a few computers running. I want to get the average CPU usage of these computers and plot it as a graph, so I've collected CPU usage at regular intervals on these machines. For each computer I have a data set of time and CPU usage, but the times at which the CPU measurements are taken on the different machines are not in sync. For example, on the first machine the CPU may be measured at times 1, 5, 9; on the second machine at times 2, 5, 8. I want to get an average data series from these different data sets. Could you point me to some resources? Thanks - Supun.
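    The standard trick (a sketch; the variable names and sample grid are invented here) is to resample every series onto one common time grid by linear interpolation, then average across machines point by point:

        import numpy as np

        # per-machine samples: (timestamps, cpu%) - timestamps need not align
        machine_a = (np.array([1, 5, 9]), np.array([20.0, 35.0, 30.0]))
        machine_b = (np.array([2, 5, 8]), np.array([50.0, 40.0, 45.0]))

        # common grid spanning the overlap of all series
        grid = np.arange(2, 9)  # times 2..8

        # interpolate each series onto the grid, then average them
        resampled = [np.interp(grid, t, cpu) for t, cpu in (machine_a, machine_b)]
        average = np.mean(resampled, axis=0)
        print(list(zip(grid, average)))

    Restricting the grid to the overlapping time range avoids np.interp's flat extrapolation at the edges.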

  • Web hosting for multiple web sites providing system isolation

    - by Justin
    We have a small number of projects where we expect the client will not be maintaining the installed versions of applications we install to power the site (such as Drupal). Given that an important part of security is keeping things updated, we don't want to host these projects on our Plesk-powered dedicated servers that currently host lots of our other clients' websites. Our goal is to find a host where we can deploy isolated instances (be these slices, virtual servers, grid servers, etc.) for each individual web site (or group of 2-3) as we need them. These instances would be completely separate, so that if one web site were hacked it would not impact any other site.

    Typical hosting requirements: Linux, Apache, PHP 5, MySQL; supports Drupal; ability to set up a cron task (but we don't need SSH access); daily backups; virtualized/cloud hosting (we want to avoid shared); pricing per site around $25/month; OS patched automatically.

    Some options we have considered that won't work: MediaTemple (two major data-center-wide security incidents and recent downtime foster doubt about this host's technical ability); Slicehost (would require us to manage the entire server, which we don't want to do); Rackspace Cloud Sites, formerly Mosso (no backup options).

    Do you have any recommended hosting options given these requirements?

  • Install a compiled Linux program on multiple computers

    - by jtnire
    I'm sorry if this sounds like a silly question, but when I compile something on Linux using the usual "./configure, make, make install" steps, how can I install the program on other servers without having to recompile? I am trying to avoid installing the build tools on production servers; however, I need the latest version of a particular piece of software, so using RPMs isn't an option in this case. Any help is appreciated. Thanks
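    A common pattern here (a sketch; the staging path and package name are invented) uses the DESTDIR convention that most autotools makefiles honour: install into a staging directory on the build box, tar it up, and unpack onto each production server.

        # on the build machine
        ./configure --prefix=/usr/local
        make
        make install DESTDIR=/tmp/staging
        tar -C /tmp/staging -czf myapp.tar.gz .

        # on each production server
        sudo tar -C / -xzf myapp.tar.gz

    This only works cleanly when the build and production machines match in architecture and shared-library versions; checkinstall is another option if you'd rather produce a real .deb/.rpm from the same make install step.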

  • Multiple nt52 entries in bootmgr

    - by SLaks
    I have a machine with Windows XP, Server 2003 R2, and Server 2008 R2. Right now, bootmgr has one entry for Server 2008 R2 and one entry for ntldr, which then leads to the ntldr boot.ini menu. Is it possible to add two different nt52 entries on two partitions so that I can access all three OSes from the bootmgr menu? Right now, Server 2008 and XP are in logical drives on an extended partition, but (I assume) I can image them onto basic partitions if necessary.
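    In principle, yes (a sketch only; the drive letters are placeholders and this has not been verified on a triple-boot box): each nt52-style entry can point at its own partition, as long as that partition carries its own ntldr, ntdetect.com and a boot.ini with a single OS entry.

        rem clone the existing real-mode entry, then retarget the copy
        bcdedit /copy {ntldr} /d "Windows XP"
        rem bcdedit prints the new entry's {guid}; point it at XP's partition
        bcdedit /set {guid} device partition=D:
        bcdedit /set {guid} path \ntldr
        bcdedit /displayorder {guid} /addlast

    With a one-line boot.ini on each partition, the second menu disappears and all three OSes show directly in bootmgr.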

  • DHCP server with multiple interfaces on Ubuntu destroys the default gateway

    - by Henrik Kjus Alstad
    I use Ubuntu, and I have many interfaces: eth0, which is my internet connection and gets its info from a DHCP server totally outside of my control, and then eth1, eth2, eth3 and eth4, for which I have created a DHCP server (ISC DHCP server). It seems to work, and I even get an IP address from the foreign DHCP server on the internet-facing interface. However, for some reason my gateway for eth0 became screwed after I installed my local DHCP server for eth1-eth4. (I think so because I got an IP for eth0, and I can ping other stuff on the local network, but I cannot get access to the internet.)

    My eth0-specific info in /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet dhcp

        auto eth1
        iface eth1 inet static
            address 10.0.1.1
            netmask 255.255.255.0
            network 10.0.1.0
            broadcast 10.0.1.255
            gateway 10.0.1.1
            mtu 8192

        auto eth2
        iface eth2 inet static
            address 10.0.2.1
            netmask 255.255.255.0
            network 10.0.2.0
            broadcast 10.0.2.255
            gateway 10.0.2.1
            mtu 8192

    My /etc/default/isc-dhcp-server:

        INTERFACES="eth1 eth2 eth3 eth4"

    So why does my local DHCP server mess up the gateway for eth0 when I tell it not to listen on eth0? Does anyone see the problem, or what I can do to fix it? The problem does seem to be the gateways. "netstat -nr" gives:

        0.0.0.0  10.X.X.X     0.0.0.0  UG  0 0 0 eth3

    It should have been:

        0.0.0.0  129.2XX.X.X  0.0.0.0  UG  0 0 0 eth0

    So for some reason my local DHCP server overrides the gateway I get from the network DHCP.

    Edit: dhcpd.conf looks like this (I included info only for the eth1 subnet):

        ddns-update-style none;
        not authoritative;

        subnet 10.0.1.0 netmask 255.255.255.0 {
          interface eth1;
          option domain-name "example.org";
          option domain-name-servers ns1.example.org, ns2.example.org;
          default-lease-time 600;
          max-lease-time 7200;
          range 10.0.1.10 10.0.1.100;
          host camera1_1 {
            hardware ethernet 00:30:53:11:24:6E;
            fixed-address 10.0.1.10;
          }
          host camera2_1 {
            hardware ethernet 00:30:53:10:16:70;
            fixed-address 10.0.1.11;
          }
        }

    Also, it seems that the gateway is correctly set if I run "/etc/init.d/networking restart" in a terminal, but that's not helpful for me. I need the correct gateway to be set during startup, and I'd rather find the source of the problem.
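    For what it's worth, the gateway lines themselves look like the culprit (a diagnosis added here, not one confirmed in the post): in /etc/network/interfaces a gateway directive installs a default route, so eth1-eth4 each try to add one as they come up, and the last interface wins over eth0's DHCP-supplied default - which matches the route via eth3 shown above. LAN-only interfaces need no gateway at all:

        # sketch: drop the gateway line from every static, LAN-only interface
        auto eth1
        iface eth1 inet static
            address 10.0.1.1
            netmask 255.255.255.0
            mtu 8192

    After removing those lines, the only default route left should be the one eth0 learns via DHCP.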

  • Only one domain not resolving via Windows DNS servers at multiple locations, but resolving at others

    - by Brett G
    I'm having quite a weird issue. We had mail delivery issues to a specific domain. After looking closer, I realized that the DNS for that domain isn't resolving via the in-house Windows 2003 SP2 DNS server:

        C:\>nslookup foodmix.net
        Server:  DC.DOMAIN.com
        Address:  10.1.1.1

        DNS request timed out.
            timeout was 2 seconds.
        DNS request timed out.
            timeout was 2 seconds.
        *** Request to DC.DOMAIN.com timed-out

    (DC.DOMAIN.com and 10.1.1.1 are generic values replacing the actual ones.) Even if I run this nslookup from the DC.DOMAIN.com server itself, I get the same result. However, all other requests are working as they should. I tried it on servers at completely separate organizations on different networks (Windows 2003 AD servers), and the weird thing is that some of those had the same exact issue. However, using public DNS servers works. I have tried clearing the DNS cache, restarting the server, restarting the services, etc. Nothing has worked.

    One possibly related event I noticed in the DNS Server event logs is event ID 5504 with the following description:

        The DNS server encountered an invalid domain name in a packet from 192.33.4.12. The packet will be rejected. The event data contains the DNS packet. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    In the data section below it, I can see ns2.webhostingstar.com mentioned, which happens to be the nameserver for the domain in question. Several discussion threads and an MS KB article pointed to disabling EDNS. I have done this via "dnscmd /config /enableednsprobes 0" and it has not fixed the issue.
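    A couple of follow-up probes that can narrow this down (a sketch; 8.8.8.8 stands in for any public resolver): compare UDP with TCP resolution, since lookups that fail only on some resolvers often point at a firewall dropping large UDP replies or blocking the TCP fallback.

        rem force TCP ("virtual circuit") from the affected DC
        nslookup -vc foodmix.net

        rem query the domain's own nameserver directly, bypassing recursion
        nslookup foodmix.net ns2.webhostingstar.com

        rem sanity check against a public resolver
        nslookup foodmix.net 8.8.8.8

    If the direct query to ns2.webhostingstar.com times out from the affected networks but works elsewhere, the problem is on the path to that nameserver rather than in the DNS servers themselves.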

  • Privoxy-like proxy that handles multiple parallel connections?

    - by overtherainbow
    Hello, I use Privoxy on my XP host to filter/rewrite web pages, but browsing through it is slower because all connections go through Privoxy's single port. According to this post on Stack Overflow, browsers by default open more than one simultaneous connection, which would explain why going through Privoxy is slower. Does someone know of a similar application that could handle more than one connection? Thank you.

  • Getting XAMPP to work with multiple versions of PHP

    - by Pennf0lio
    How can I install XAMPP to work with different versions of PHP? I use XAMPP because some of my scripts are buggy when run in WAMP, and I use WAMP because it supports different versions of PHP. Now I would like to streamline down to just XAMPP, so that my web development is easier to manage. Is it possible to configure XAMPP to work with more than one version of PHP, or do I have to look for an alternative solution? Note: I'm running on Windows 7.
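    One approach sometimes suggested (a sketch only; the paths and vhost name are hypothetical, and it has not been tested against a current XAMPP build) is to keep XAMPP's bundled PHP as the default and route individual vhosts to a second PHP build through mod_fcgid:

        # httpd-vhosts.conf sketch: this vhost runs a separate php-cgi
        <VirtualHost *:80>
            ServerName legacy.localhost
            DocumentRoot "C:/xampp/htdocs/legacy"
            <Directory "C:/xampp/htdocs/legacy">
                AddHandler fcgid-script .php
                FcgidWrapper "C:/php-5.2/php-cgi.exe" .php
                Options +ExecCGI
                Require all granted
            </Directory>
        </VirtualHost>

    This keeps one Apache while letting each site pick its PHP, which is essentially what WAMP's version switcher does globally.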

  • How to set up multiple DNS servers on an intranet

    - by Brent
    We have an Active Directory network with a mixture of Windows DNS and Linux BIND servers, and we want to use OpenDNS as our external DNS provider. I am wondering what the best way is to set up these servers (regarding forwarders, recursion, etc.). Active Directory is our main internal DNS for our domain and has 3 redundant servers; DHCP and all our servers use these as their DNS servers. Then we have a legacy AD server from an old network that is still authoritative for a bunch of domains. Finally, we have a couple of Linux BIND servers that are authoritative for a bunch of websites we host.

    Should our main AD servers point to our legacy AD server, which points to one of our BIND servers, which points to the other BIND server, which finally points out to OpenDNS? Or should our main AD servers point to all of these directly - or is there a better option? What happens if a domain is listed in 2 places - does DNS process the forwarders in order? What about root servers: if I want to use OpenDNS for "everything else", do I just list them as the last forwarders and delete the root servers from all my DNS servers? How does recursion work - in this scenario, should I be using recursion or not?
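    As a concrete reference point (a sketch of one common layout, not a prescription): have each internal server answer only what it is authoritative for and forward everything else straight to OpenDNS, skipping the daisy chain. On the BIND side that is just:

        // named.conf options: forward everything we aren't authoritative
        // for to OpenDNS (its public resolver addresses)
        options {
            forwarders { 208.67.222.222; 208.67.220.220; };
            forward only;   // never fall back to the root hints
        };

    Note that forwarders are tried for reachability rather than consulted in order per domain - the first forwarder that answers, even with NXDOMAIN, ends the query - which is why chaining authoritative servers as forwarders behaves unpredictably when a domain is listed in two places.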

  • Permissions on a Mac for an iTunes library with multiple users - idea

    - by John
    I currently have a lot of music on an external drive, with my iTunes library set up from there. However, periodically, when the external drive isn't connected, iTunes will default back to the library location in my home directory. I don't want to mess with an external drive, as my Mac HD is large enough to house the music collection. However, I have 4 family members - all with their own logins - using this same pile of music. I don't want 4 copies of the library, only one, with all libraries referencing it.

    So, what I want to do is: 1) move all music files to a shared directory at /Macintosh HD/users/music - I created this directory and adjusted permissions so all four users can read and write to it; 2) get all four accounts to reference this library instead of the external or local home locations.

    I am hoping I can just check the box to keep the library organized in my account, which is the admin, and let iTunes move it all, then delete the current libraries for each account and re-add from the new shared location. Will the iTunes organization process cause permissions issues, either by restricting file access to my account only, or removing write permissions, or any other gotcha? I am having a hard time coming up with a smooth solution that won't break everything and cause me to have mega duplicates or access issues. I would prefer not to do any XML library file editing if possible. Am I dreaming? Thanks for help.
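    The usual sticking point (an assumption added here, not from the post) is that files iTunes copies in are owned by whichever account did the import, so the other logins lose write access. An inheritable ACL on the shared folder sidesteps that; a sketch, with the folder path standing in for the one described above:

        # grant the default "staff" group full rights on the music folder,
        # inherited by everything created inside it later
        sudo chmod -R +a "group:staff allow list,search,add_file,add_subdirectory,delete_child,readattr,writeattr,file_inherit,directory_inherit" /Users/music

    With the inherit bits set, whatever iTunes creates in the folder stays writable by all four accounts regardless of who imported it.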

  • Multiple domains, Exchange 2010, mailbox access via OWA

    - by Rob
    We currently run two separate domains, where our new implementation of Exchange 2010 is on a separate domain from the users. My problem is: joe@domaina can't access his mailbox at joe@domainb via OWA, even though Full Access and Send As have been granted on domainb's mailbox to domaina's account. I keep receiving the error:

        Access is denied. The Active Directory resource couldn't be accessed. This may be because the Active Directory object doesn't exist or the object has become corrupted, or because you don't have the correct permissions.

    Anyone able to help, please? Take care
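    For reference, the grant itself is normally done in the Exchange Management Shell (a sketch reusing the names from the question; the cross-domain account must be resolvable from domainb's Exchange):

        # full mailbox access for the domaina account on the domainb mailbox
        Add-MailboxPermission -Identity "joe@domainb" -User "DOMAINA\joe" -AccessRights FullAccess

        # verify what Exchange actually recorded
        Get-MailboxPermission -Identity "joe@domainb" -User "DOMAINA\joe"

    If the permission shows up correctly but OWA still throws the Active Directory error, the usual suspects are a missing two-way trust between the domains, or OWA being asked to open the second mailbox under the wrong authentication realm.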

  • Running Magento for multiple clients - single installation vs. multiple installations

    - by Chris Hopkins
    Hi there. I am looking to set up a Magento (Community Edition) installation for multiple clients, and have researched the matter for a few days now. I can see that the Enterprise Edition has what I need in it, but, unsurprisingly, I am not willing to shell out the $12,000-odd yearly subscription. It seems there are a few options available to me, but I am worried about the performance I will get out of each.

    Option 1) A single install using the AITOC advanced permissions module. This is really what I am after: one installation, so that I can update my core files all at the same time and manage all my store users from one place. The problems are that I don't know anything about the reliability of this extra product, and that I have to pay a bit extra. I am also worried that if I have 10 stores running off this one installation it might all slow down so much that it keels over, as I have heard a lot about Magento's slowness. Module link: http://www.aitoc.com/en/magentomods_advanced_permissions.html

    Option 2) Multiple installations of Magento on one server, one for each shop. Here I have 10 Magento installations on one server all running happily away, not costing any extra money, but I now have 10 separate stores to update and maintain, which could be annoying. Also, I haven't been able to find many other people using this method, and when I have, they are usually asking how to stop their servers from dying. So this route seems like it could be even harder on my server - but if the server could take it, each Magento installation would be simpler and less likely to slow down than one installation having to run 10 shops on its own?

    Option 3) Use lots of servers and lots of Magento installations. I just so do not want to do this.

    Option 4) Buy Magento Enterprise. I do not have the money to do this.

    So which route is less likely to blow up my server? And does anyone have experience with this holy grail of a module? Thanks for reading, and thanks in advance for any help - Chris Hopkins

  • File/printer sharing issues on network with multiple OSes

    - by DanZ
    My workplace consists of computers running a variety of different operating systems, and I have been running into problems getting some of them to connect to a shared drive and printer over the network. Here is a brief description of the computers involved and the issues I have encountered:

    1: Dell desktop, Windows Vista Business -- This is the computer I want the others to connect to. It has a USB printer and eSATA hard drive enclosure that I have set up for sharing, with different accounts for the various users.

    2: Fujitsu laptop, Windows XP Tablet edition -- No problems. Can connect to both the shared printer and hard drive.

    3: Lenovo laptop, Windows Vista Business 64 bit -- No problems. Can connect to both the shared printer and drive.

    4: Apple MacBook, OS 10.4 -- Can connect to the shared drive, but not to the shared printer. I am aware that the printer issue is due to a known incompatibility between Vista and OS 10.4 and earlier with regards to Samba. It is not a big problem, however, as this computer can access a network printer.

    5: Sony laptop, Windows Vista Home Premium -- Can connect to the shared printer, but not the shared drive. It can see computer 1 and its shared drive on the network, and appears to successfully log in to user accounts. However, if you try to access the shared drive, it says you do not have permission. I have tried both standard and administrator accounts, and none can access the drive from this computer.

    6: MacBook Pro, OS 10.5 (there are two of these) -- Can connect to the shared printer, but not the shared drive. They can't see computer 1 on the network. For that matter, they also can't see each other or the older Mac, but can see and access shared folders on the XP machine (computer 2) and can see other PCs in the building. I was able to add the shared printer manually by typing in its network location, but was unable to manually add the shared drive in the same way.

    So, what I am looking for is suggestions on how to get computers 5 and 6 to connect to the shared drive. Since they can already connect to the shared printer (which is on the same computer as the shared drive), it seems reasonable that they should be able to access the drive as well.
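    For the machines in item 6, one low-level check worth trying (a sketch; the address and share name are placeholders) is to skip network browsing entirely and hit the Vista box by IP from a Mac terminal:

        # list the shares the Vista machine offers
        smbutil view //username@192.168.1.10

        # mount the shared drive directly
        mkdir /Volumes/shared
        mount_smbfs //username@192.168.1.10/SharedDrive /Volumes/shared

    If the direct mount works, the failure is in discovery/browsing (often a workgroup-name mismatch); if it fails with a permissions error like machine 5 gets, look instead at the share's NTFS ACLs and Vista's LAN Manager authentication level, which older SMB clients commonly disagree with.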

  • Plesk 10 Postfix with multiple IP addresses and SSL certificates

    - by JulianB
    We are currently running a root server with Debian 6 and Plesk 10.4.4. We have some virtual hosts using one IP address (shared) - e.g. example1.com - and another virtual host using a dedicated IP address (example2.com). Is there a way to configure Postfix to do the following:

    1. Always use the IP address of the virtual host to which the e-mail account belongs, so that an e-mail from user@example1.com will originate from the shared IP address and an e-mail from user@example2.com will originate from the dedicated IP?

    2. Use different certificates for TLS for example1.com and example2.com?

    If the latter is not possible: could any problems arise from using example1.com's certificate for example2.com users? Of course, example2.com users would have to configure their clients to use example1.com as the SMTP server name to avoid annoying security warnings. But if we could still get the effect of the first point, that would be acceptable.
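    On the first point, Postfix can pick an outbound IP per sender domain (a sketch, assuming Postfix 2.7 or later; note that Plesk may overwrite hand edits to these files, and the dedicated IP below is a placeholder):

        # main.cf
        sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport

        # /etc/postfix/sender_transport (then run: postmap /etc/postfix/sender_transport)
        @example2.com    smtp-dedicated:

        # master.cf - a second smtp transport bound to the dedicated IP
        smtp-dedicated unix -    -    n    -    -    smtp
            -o smtp_bind_address=203.0.113.10

    On the second point, Postfix of that era serves one certificate per smtpd instance, so per-domain TLS certificates mean running a second smtpd on the dedicated IP (another master.cf entry with -o smtpd_tls_cert_file=...); SNI support only arrived much later, in Postfix 3.4.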

  • $HTTP_HOST multiple rewrite

    - by nrivoli
    I have Apache 2.2, Ubuntu 12. I want to load a different environment based on HTTP_HOST. This works for the first domain only; after that I get error 500, "Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.", and I am not able to see my error...

        RewriteEngine on
        RewriteBase /

        RewriteCond %{HTTPS} off
        RewriteRule ^ - [F]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{HTTP_HOST} ^server-dev.domain.tld$ [NC]
        RewriteRule ^(.*)$ /api/dev.php/$1 [L]

        RewriteCond %{HTTP_HOST} ^server-qa.domain.tld$ [NC]
        RewriteRule ^(.*)$ /api/qa.php/$1 [L]

        #RewriteCond %{HTTP_HOST} ^localhost$ [NC]
        #RewriteRule ^(.*)$ /api/localhost.php/$1 [L]

    I am accessing https://server-dev.domain.tld/api/whatever and https://server-qa.domain.tld/api/whatever
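    The likely cause (a diagnosis added here, not confirmed in the post): RewriteCond lines bind only to the single RewriteRule that follows them, so the qa rule has no !-f/!-d guards and keeps rewriting its own output, /api/qa.php/..., until the recursion limit trips. A sketch of the fix is to repeat the guards before each rule, or exclude /api/ outright:

        RewriteCond %{REQUEST_URI} !^/api/
        RewriteCond %{HTTP_HOST} ^server-dev\.domain\.tld$ [NC]
        RewriteRule ^(.*)$ /api/dev.php/$1 [L]

        RewriteCond %{REQUEST_URI} !^/api/
        RewriteCond %{HTTP_HOST} ^server-qa\.domain\.tld$ [NC]
        RewriteRule ^(.*)$ /api/qa.php/$1 [L]

    The dev host only appeared to work because its rule still had the guards attached.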

  • Can't restore HDD using multiple programs

    - by jonney
    Please refer to my original question about this issue. I tried GetDataBack, and it doesn't seem to scan or do anything: I chose the default settings from the first screen, on step 1 I chose my damaged HDD, in step 2 nothing happens - it doesn't scan or do anything - and on step 3 it doesn't display any of my files. I have also tried TestDisk, and when I try to fix and rebuild the MFT it says:

        MFT and MFT mirror are bad. Failed to repair them.

    [screenshot of the drive's status in TestDisk not reproduced]

    I also tried a third recovery app and got no results when I tried to recover the files; the error it keeps giving was likewise shown only in a screenshot. Last but not least, I tried Recuva, and that simply says it's unable to read the MFT. Any suggestions? I have tried various apps already and none of them seems to work.
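    Independent of which recovery tool eventually works, the usual first step with a damaged filesystem (a suggestion added here; /dev/sdX and the filenames are placeholders) is to image the drive once and point every tool at the copy, e.g. with GNU ddrescue from a Linux live CD:

        # copy everything readable, logging progress so interrupted runs can resume
        ddrescue -n /dev/sdX disk.img rescue.log

    Recovery attempts against the image cannot make the original drive any worse.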

  • DNS zone file SPF configuration to support sending mail from multiple servers and gmail

    - by Tauren
    I want to configure SPF on a domain to allow mail to be sent from: the x.com website server (x.com and www.x.com - both at the same IP); its MX servers (smtp.x.com, mx.x.com, mail.x.com); another server that isn't listed as an MX server (somehost.x.com); and via Gmail using an account that has authenticated use of an @x.com address. Will this zone file work? If not, what are the problems with it?

        $ttl 38400
        @   IN  SOA ns1.x.com. hostmaster.x.com. (
                201003092  ; serial
                8H         ; refresh
                15M        ; retry
                1W         ; expire
                1H )       ; minimum
        @   NS  ns1.x.com.
        @   NS  ns2.x.com.
        @   MX  10 mx.x.com.
        @   MX  20 smtp.x.com.
        @   MX  30 mailhost.x.com.

        ; SPF records
        @        IN TXT "v=spf1 a mx a:somehost.x.com include:_spf.google.com ~all"
        mx       IN TXT "v=spf1 a -all"
        smtp     IN TXT "v=spf1 a -all"
        mailhost IN TXT "v=spf1 a -all"

    Questions: Is _spf.google.com the right thing to include for gmail.com, or is it only for Google Hosted Apps? If only for Google Apps, what should I include to send from gmail.com? If mail shouldn't be sent from anywhere else, is it safe to use -all instead of ~all? Does it make sense to add specific SPF records for each of the mail servers? Any other problems with the zone file? I want to confirm these things before making changes to my zone file. The file has SPF configured basically the same now, just without google.com and somehost, but I want to make sure I won't break things when I change it.
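    Whatever the final record looks like, it can be sanity-checked before and after the zone change (a sketch; dig queries the published TXT records, so test again after the zone reloads):

        # what the world currently sees for the domain
        dig +short TXT x.com

        # what Google's include expands to
        dig +short TXT _spf.google.com

    The second query shows what the include pulls in, which is a quick way to judge whether it covers plain gmail.com senders or only Google-hosted domains.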
