Search Results

Search found 28794 results on 1152 pages for 'turn based'.

Page 50 of 1152

  • How to move an mdadm RAID drive (EBS-based) to a different AWS instance

    - by Stanley
    We have a media-rich web application hosted on AWS, with several web servers and one NFS server. On the NFS server (a Linux machine) several EBS volumes are mounted, and we've used mdadm to combine the mounted volumes into a single RAID volume. The web servers simply access the NFS storage through a mount point.

    Amazon has now let us know that they will be performing power maintenance on this server in a couple of days' time. Since all our media lives there, the site would be unusable for the hours Amazon is working on it, and we want to prevent that downtime. My idea is to set up a temporary server, attach the EBS drives (the RAID volume) to it, and have our web servers point there during the maintenance window. This is a very high-risk operation, since several terabytes of our production data are involved.

    What would be a safe way to move our logical RAID device (md0) to a new Amazon instance? I was hoping I could start by building the new server, mounting the EBS volumes and assembling the RAID array with mdadm --assemble --scan before unmounting from the existing instance, so that I could first test that everything works while it is mounted on two instances at the same time, but I don't believe that is possible with the way filesystems work.

    "How do I move a Linux software RAID to a new machine?" suggests a way to move drives, but it isn't really a cloud-based question. Perhaps there are simpler ways to prevent downtime given that our solution is hosted in the cloud? I have considered taking an EBS snapshot, but that would replicate the many terabytes of mounted storage, so it is not a practical solution. Any ideas?
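
    A minimal command-line sketch of such a move, assuming the array is /dev/md0 mounted at /export/media (a hypothetical mount point) and that the volume/instance IDs are placeholders; the AWS CLI is used here, though the same steps can be done from the console:

    ```bash
    # On the current NFS server: stop exporting, unmount and cleanly stop the array
    exportfs -ua                          # stop NFS exports so nothing writes during the move
    umount /export/media                  # hypothetical mount point
    mdadm --stop /dev/md0                 # release the array so it can be assembled elsewhere

    # Detach each EBS volume and attach it to the new instance
    # (repeat per volume; IDs are placeholders)
    aws ec2 detach-volume --volume-id vol-0aaa1111
    aws ec2 attach-volume --volume-id vol-0aaa1111 --instance-id i-0bbb2222 --device /dev/sdf

    # On the new instance: re-assemble from the attached devices and mount
    mdadm --assemble --scan               # scans the attached volumes and recreates /dev/md0
    mount /dev/md0 /export/media
    exportfs -a
    ```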

    Read the article

  • Grant HTTP access based on unix user group

    - by Sander Marechal
    Is it possible to grant network access or HTTP access based on a user's group? At my company we want to set up an internal Composer server using Satis to manage packages for the projects we write (e.g. on repository.mycompany.com), with the packages themselves in our SVN server (svn.mycompany.com).

    We have several web servers with many different users on them. Some users should be able to reach the Composer and SVN servers; some should not. The users that should be able to reach these servers all belong to the same group. How can I set up Apache on the Composer and SVN servers to grant access only to users in that group? Alternatively, can I set up the web servers in such a way that only users from that group are able to make a connection to our Composer and SVN servers?

    The best thing we have come up with so far is using SSL client certificates: we simply place a client certificate on all servers which can be used to access Composer and SVN, and only the right user group has read access to the certificate. A bit clunky, but it may work. I'm looking for something better, though.

    Read the article

  • Looking for a host-based network monitoring solution

    - by Ole Martin Handeland
    Hi all!

    Problem

    My hosting company has a network usage graph for my dedicated server. It seems that one day earlier this month, my network usage suddenly spiked, with several hundred megabytes transferred (usually it's in the tens, not hundreds). It was probably me, but I just can't be sure who or what it was.

    Question

    So my question is: does anyone know of a host-based solution for monitoring network usage that would tell me the client's IP address and the port/service he/she used?

    What I don't want

    I'm just guessing that someone will suggest I use nagios, munin, zabbix, cacti or mrtg - I've also looked at those, but a graph of network usage will not give me the answers I'm looking for. :-)

    Almost there

    I've already looked at a lot of monitoring solutions, and I've tried ntop (http://www.ntop.org/), darkstat (http://unix4lyfe.org/darkstat/) and others. Darkstat just didn't give me the answers: although it lists a lot of statistics, and I could list the clients, it doesn't show the network usage for a particular period. Ntop is by far the best I've seen so far, but I think it mostly shows current network usage, not the historical part. I could run apt-get upgrade and download a whole bunch of software, but not see it in the log afterwards.
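
    None of this is from the post above, but one minimal host-based approach along these lines is to have iptables log every new outbound connection, so destinations and ports can be reconstructed from syslog afterwards; a sketch (chain name and log prefix are illustrative):

    ```bash
    # Log each NEW outbound connection (destination IP and port end up in syslog),
    # then accept it as normal.
    iptables -N OUT_LOG
    iptables -A OUTPUT -m state --state NEW -j OUT_LOG
    iptables -A OUT_LOG -j LOG --log-prefix "host-out: " --log-level info
    iptables -A OUT_LOG -j ACCEPT

    # The per-rule packet/byte counters also give a rough cumulative usage figure:
    iptables -L OUTPUT -n -v -x
    ```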

    Read the article

  • Setting cfengine3 class based on command output

    - by gnomie
    This question is very similar to "How can I use the output of a command in cfengine3", but I believe the answer does not apply in my case. I want to update a git repository via "git pull" and, based on whether that led to changes, trigger some follow-up action. Simplified, if there were something like "match output and set class" via some body if_output_matches, I would want to use something like this:

        bundle agent updateRepo
        {
        commands:
          "/usr/bin/git pull"
            contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"),
            classes => if_output_matches("Already up-to-date.","no_update");

        reports:
          no_update::
            "nothing updated";
        }

        body contain setuidgiddir_sh(owner,group,folder)
        {
        exec_owner => "$(owner)";
        exec_group => "$(group)";
        useshell => "true";
        chdir => "$(folder)";
        }

    So, is it possible to use the output of a - possibly expensive - command and base some decision on that? The execresult function is not a good choice for me because a) the pull may become expensive at times (not recommended according to the cfengine3 reference) and b) it does not allow specifying the user, group or working directory - which is important in my case. The repository is in user space and not owned by root.
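
    As a point of reference, here is a plain-shell sketch of what the bundle is trying to express (this is not cfengine syntax, and the user/directory variables are placeholders); it runs the pull as the right user in the right directory and keys the decision off git's "Already up-to-date." line, exactly as the classes body above intends:

    ```bash
    REPO_USER=deploy                      # placeholder for $(globals.user)
    REPO_DIR=/srv/repo                    # placeholder for $(target)

    # Run the pull as the repository owner and capture the output
    OUTPUT=$(sudo -u "$REPO_USER" git -C "$REPO_DIR" pull 2>&1)

    if echo "$OUTPUT" | grep -q "Already up-to-date."; then
        echo "nothing updated"
    else
        echo "repository changed, run follow-up actions here"
    fi
    ```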

    Read the article

  • Troubleshooting the ntfs-loop-xen combination in the Wubi-based grub of Ubuntu

    - by Registered User
    Here is the situation: I installed Ubuntu on a laptop using Wubi, inside the Windows 7 drive. (The laptop is not mine.) Everything has worked perfectly so far without any problem. We are now trying to set up a Xen (virtualization) environment on this laptop. After setting everything up cleanly, I needed to boot with the following grub entry:

        menuentry "Xen Linux 2.6.32.27" {
            insmod ntfs
            set root='(hd0,2)'
            loopback loop0 /ubuntu/disks/root.disk
            set root=(loop0)
            multiboot /boot/xen.gz
            module /boot/vmlinuz-2.6.32.27 dummy=dummy root=/dev/sda2 loop=/ubuntu/disks/root.disk ro console=tty0
            module /boot/initrd.img-2.6.32.27
        }

    Instead I got:

        error: file not found
        error: unknown command 'multiboot'
        error: unknown command 'module'
        error: unknown command 'module'

    To dig into this further, I rebooted the machine, went to the grub command prompt and manually entered each of the commands you see in the grub entry. When I reached "grub> insmod multiboot" I got the following message on screen: error: file not found.

    It looks like this Wubi + grub setup has just enough modules to use a loopback file on NTFS, but the actual /boot directory is on the loopback, not on NTFS (hd0,2). Therefore any attempt to read files from (hd0,2) simply won't work, because there's no file there. I need the insmod multiboot command and the multiboot and module commands, which are available in grub on a normal install without Wubi. But since the laptop is not mine, I am not allowed to repartition it and have to make it work in this situation, while a normal kernel still boots. How can I get the multiboot module in this Wubi-based install?

    Read the article

  • Best console-based text editor, not only for programmers [closed]

    - by robo
    I need a console-based text editor for writing both source code and human-readable texts such as emails. I need it to be user friendly, which for me means:

    - You can use it the same way as notepad or gedit.
    - You can use the mouse in it.
    - If you need your mother or girlfriend or somebody else to edit your text, they will know what to do; they will not realize it is a console and will only have the feeling it is something like notepad.
    - Copy, paste and undo work as usual, with the usual key combinations (Ctrl-C, Ctrl-V, Ctrl-Z).
    - Shift and arrows work as usual: they select the text.

    And when I return to the computer, I want to use the same text editor for programming. There I expect:

    - syntax highlighting
    - auto-indenting
    - replacing spaces with tabs
    - keyboard shortcuts for compiling
    - the possibility to configure it to use a debugger
    - autocompletion for C#, Java, C++ and other languages
    - other things I expect from IDEs.

    I have been working with and configuring vim for a few years, but it never fulfilled all of my expectations (although it almost did). I think I could get vim configured perfectly if I had a few more weeks for configuring it. Unfortunately I cannot afford to keep configuring vim forever. Is there another alternative - hopefully an editor I set up once and it works forever? What do you use? I often hear people are using emacs. Is it worth learning?

    Read the article

  • Apache address-based access control

    - by stijn
    I have an Apache instance serving different locations, e.g.

        https://host.com/jira
        https://host.com/svn
        https://host.com/websvn
        https://host.com/phpmyadmin

    Each of these has access control rules based on IP address/hostname. Some of them use the same configuration though, so I have to repeat the same rules each time:

        Order Deny,Allow
        Deny from All
        Allow from 10.35 myhome.com mycollegueshome.com

    Is there a way to make these reusable, so that I don't have to change each instance every time something changes? I.e., can I write this once and then use it for a couple of locations? Using SetEnvIf maybe? It would be nice if I could do something like this pseudo-config:

        <myaccessrule>
          Order Deny,Allow
          Deny from All
          Allow from 10.35 myhome.com mycollegueshome.com
        </myaccessrule>

        <Proxy /jira*>
          AccessRule = myaccessrule
        </Proxy>
        <Location /svn>
          AccessRule = myaccessrule
        </Location>
        <Directory /websvn>
          AccessRule = myaccessrule
        </Directory>

    Read the article

  • Using udev to create a character device based on a driver being loaded

    - by SteveCB
    I'm in the process of setting up RAID monitoring for a number of Dell servers that use the PERC 6i integrated card. We're using Nagios at present and the check_megasasctl plugin seems to fit the bill. However, the plugin relies upon the existence of:

        /dev/megaraid_sas_ioctl_node

    This device node doesn't exist by default; you have to create it by hand using something like:

        mknod /dev/megaraid_sas_ioctl_node c 253 0

    Now, to make the existence of this device node persistent across reboots, I thought I could write a udev rule, but as usual I'm missing something. I thought I could create a file such as /etc/udev/rules.d/10-local.rules that contained:

        DRIVER=="megasas" NAME="megaraid_sas_ioctl_node" MODE="0600"

    But this doesn't work - there is no device node after a reboot. Dmesg output indicates the megasas driver is loaded and functional:

        megasas: 00.00.04.01-RH1 Thu July 10 09:41:51 PST 2008
        megasas: 0x1000:0x0060:0x1028:0x1f0c: bus 1:slot 0:func 0
        megasas: FW now in Ready state

    Further, I don't see any means to instruct udev on which type of device node to create: character or block. I suspect I'm failing to understand exactly how udev is meant to work. I realise I could just cheat and run MegaCLI in /etc/rc.local, redirecting output to /dev/null; it creates the megaraid_sas_ioctl_node device node as part of its execution. I just thought using udev rules would be a) cleaner and b) a useful learning exercise. Perhaps I should just dump the above mknod command in /etc/rc.local...

    So how do I get udev to create the /dev/megaraid_sas_ioctl_node device node based on the presence of the megasas driver?

    Cheers
    Steve
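
    For completeness, the /etc/rc.local fallback mentioned above could look something like the sketch below; the major/minor numbers come from the mknod command in the post, while the guard against an already-existing node is an addition:

    ```bash
    # /etc/rc.local sketch: create the ioctl node at boot if the megasas driver
    # is loaded and the node is not already there, then restrict its permissions.
    if lsmod | grep -q '^megasas ' && [ ! -e /dev/megaraid_sas_ioctl_node ]; then
        mknod /dev/megaraid_sas_ioctl_node c 253 0
        chmod 0600 /dev/megaraid_sas_ioctl_node
    fi
    ```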

    Read the article

  • Need help finding a program to split PDFs based on text in specific areas

    - by Sean
    I really need help with this. I work for a university in the admissions office, and we get large PDFs of every document that an applicant uploads to us. What I need to do is split these PDFs into separate documents. Each of the separate documents has a heading at the top indicating what type of document it is (Statement of Purpose, Transcript, etc.).

    I found a program a few weeks ago that at first seemed to work great (A-PDF Splitter). It would look for every type of document in a combined PDF (e.g. Statement of Purpose), and if it saw that heading it would create a new PDF named "ORIGINALFILENAME-Statement of Purpose". I soon discovered, though, that the program breaks for no reason on certain PDFs, and I have to take the time to take that PDF out of the queue and start the splitting over again (and we get 250 a day). I contacted their support and they basically told me I was SOL.

    So please, if you can find me a program to split PDFs into smaller PDFs based on whether it finds particular text in a region, I will be forever in your debt. If I don't find one soon, I'm going to be spending entire days splitting PDFs by hand, and my boss isn't going to be happy.
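
    As a rough illustration of the kind of command-line tool chain that can approach this (not one of the programs mentioned above, and only a sketch): poppler-utils' pdfseparate can burst the combined file into single pages and pdftotext can check each page for a heading. It tags individual pages rather than rebuilding multi-page documents, so it is a starting point, not a finished solution:

    ```bash
    # Burst the combined PDF into one file per page
    pdfseparate combined.pdf page-%d.pdf

    # Rename any page whose text contains a given heading
    for p in page-*.pdf; do
        if pdftotext "$p" - | grep -q "Statement of Purpose"; then
            mv "$p" "combined-Statement_of_Purpose-$p"
        fi
    done
    ```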

    Read the article

  • How to set up NTFS ACLs with Access Based Enumeration

    - by Patrick Pellegrino
    We're in the process of migrating from Novell NetWare to a Windows 2K8 R2 infrastructure (AD, file server, print server, etc.). My question is about ACLs. Since NetWare and Windows are totally different, I want to be sure my thinking is right before screwing everything up! Here's the scenario:

        F:
        |
        +-- DATA   <= shared as DATA with Access Based Enumeration
            |
            +-- Folder 1
            +-- Team 1's Folder
            +-- Team 2's Folder
            ...

    In that case, by default, rights are inherited from F: down to the deepest folders. What we want:

    - The Administrators group has full control top-down.
    - Under DATA, ABE lists only the folders a user has access to (e.g. if I'm in group Team 2, I see Team 2's Folder).

    From what I understand, on DATA I should remove all inherited NTFS ACLs (e.g. the Users group), making sure to keep the Administrators group and the SYSTEM user. After that, grant Full Control (or whatever right is needed) on each folder to the groups or users that need access. Am I wrong? Is there anything I should take care of? Any help with my understanding will be much appreciated. Regards.

    Read the article

  • Denying access to a website via htaccess based on an HTTP header

    - by neekster
    I've been trying for ages to get this to work and I can't put my finger on it. What I'm trying to do is block access to a site from a number of countries, based on the CF-IPCountry header added by CloudFlare. I figured htaccess was a suitable way to do this. We are running LiteSpeed 4.2.4 on top of DirectAdmin as a control panel. The problem we are having is that the htaccess rule doesn't seem to do anything. Here's the rule we tried:

        SetEnvIf CF-IPCountry AU UnwantedCountry=1
        Order allow,deny
        Deny from env=UnwantedCountry
        Allow from all

    That makes no difference at all; connections are still accepted. Just to check that the rule was at least being processed, I changed Allow from all to Deny from all, and connections were refused. So it appears to be a problem with the variable. Here are the relevant headers that come in with the request:

        Connection: Keep-Alive
        Accept-Encoding: gzip
        CF-Connecting-IP: xx.xx.xx.xx
        CF-IPCountry: AU
        X-Forwarded-For: xx.xx.xx.xx.xx
        CF-RAY: c9062956e2d04b6
        X-Forwarded-Proto: http
        CF-Visitor: {"scheme":"http"}
        Zone-Name: xx.com.au

    Hopefully someone can help me out; this has been driving me nuts for too long. Thanks
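
    One way to take CloudFlare out of the picture while debugging (not something from the post itself) is to send the country header by hand with curl and watch the response code:

    ```bash
    # Should return 403 once the Deny rule matches the header, and 200 otherwise.
    # Replace example.com with the real site; -H injects the CloudFlare header.
    curl -s -o /dev/null -w "%{http_code}\n" -H "CF-IPCountry: AU" http://example.com/
    curl -s -o /dev/null -w "%{http_code}\n" -H "CF-IPCountry: US" http://example.com/
    ```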

    Read the article

  • Formula-based Excel page headers

    - by Jake Krohn
    I'm using the "Rows to repeat at top" function in Excel's "Page Setup" dialog to ensure that a multi-row header block appears on every printed page of my worksheet. However, I'd like to be able to change certain bits of the header based on the content of the current page. I would simply like to display the value of one cell in the first row that is printed on the page. If this is my header: Section: xx And the data looks like this (columns are Section and Name): 1 Foo 1 Bar 2 Baz I want the "xx" in the header to be "1". If, further down on the next page, the value in the Section column is "3", I want that printed in the header of the next page. I originally thought that using the "OFFSET" function might help, e.g. ="Section: "&OFFSET(A2, 1, 0) But it only shows the offset from the original placement of the header, thus only working on page 1. The end document is a PDF, so right now I'm able to go back in with the "TouchUp Text Tool" in Acrobat and add the numbers page by page. But it gets to be a tedious process with 70+ page reports. Anyone have any better ideas that don't require me mucking up the original Excel document with inserted headers every N lines? This is Excel 2008 for Mac, if it makes a difference.

    Read the article

  • In a shell script, check the version of an installed package and make a decision based on the output

    - by DJDarkViper
    I'm looking to write a cross-distro / cross-version shell script that makes sure a forced version of PHP is installed. For example: Ubuntu 12.04 has 5.3, Ubuntu 13.10 has 5.5, Debian 7 has 5.4. When run on a distro that has an old version of PHP, the script needs to update the repo to point to a package for 5.4, and if the distro has too new a version, it should downgrade to 5.4 appropriately.

    I'm still not entirely clear on the technical limits of what you can do with the shell/terminal, and I'll be perfectly frank that I'm still not totally used to the existing tools. The best I can think of at the moment is:

        php -v | grep "PHP 5"

    but that returns a bunch of potentially changeable granular characters (PHP 5.4.4-14+deb7u5 (cli) (built: Oct 3 2013 09:24:58)). I'm not sure what to pipe to after this to extract the characters I'm interested in. I'm not sure if I'm being totally clear, and I'm not sure how to ask this. Basically, in an automated shell script for Linux distros, how do I extract the PHP version (preferably just the PHP version number) and make a decision based on that output?

    EDIT: This line ended up doing pretty well:

        php -v | grep "PHP 5" | sed 's/.*PHP \([^-]*\).*/\1/' | cut -c 1-3

    A bit long in the tooth, but it gives me "5.3", "5.4" and "5.5", which is exactly what I need to work with.
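
    Building on that pipeline, a hedged sketch of the decision step might look like this (the repo-switching actions are placeholders, since they differ per distro):

    ```bash
    #!/bin/sh
    PHP_VERSION=$(php -v | grep "PHP 5" | sed 's/.*PHP \([^-]*\).*/\1/' | cut -c 1-3)

    case "$PHP_VERSION" in
        5.4) echo "PHP 5.4 already installed, nothing to do" ;;
        5.3) echo "old PHP found, point the repo at a 5.4 package here (placeholder)" ;;
        5.5) echo "newer PHP found, downgrade to 5.4 here (placeholder)" ;;
        *)   echo "unexpected PHP version: '$PHP_VERSION'" >&2; exit 1 ;;
    esac
    ```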

    Read the article

  • Linux port-based routing using iptables/ip route

    - by user42055
    I have the following setup:

         192.168.0.4        192.168.0.6        192.168.0.1
        +-----------+       +---------+        +----------+
        |WORKSTATION|-------|  LINUX  |--------| GATEWAY  |
        +-----------+       +---------+        +----------+
                     192.168.150.10 |
                                    | 192.168.150.9
                               +---------+
                               |   VPN   |
                               +---------+
                              192.168.150.1

    WORKSTATION has a default route of 192.168.0.6. LINUX has a default route of 192.168.0.1. I am trying to use the gateway as the default route, but route port 80 traffic via the VPN. Based on what I read at http://www.linuxhorizon.ro/iproute2.html I have tried this:

        echo "1 VPN" >> /etc/iproute2/rt_tables
        sysctl net.ipv4.conf.eth0.rp_filter=0
        sysctl net.ipv4.conf.tun0.rp_filter=0
        sysctl net.ipv4.conf.all.rp_filter=0
        iptables -A PREROUTING -t mangle -i eth0 -p tcp --dport 80 -j MARK --set-mark 0x1
        ip route add default via 192.168.150.9 dev tun0 table VPN
        ip rule add from all fwmark 0x1 table VPN

    When I run "tcpdump -i eth0 port 80" on LINUX and open a webpage on WORKSTATION, I don't see the traffic go through LINUX at all. When I run a ping from WORKSTATION, I get this back for some packets:

        92 bytes from 192.168.0.6: Redirect Host(New addr: 192.168.0.1)
        Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst
         4  5  00 0054 de91   0 0000  3f  01 4ed3 192.168.0.4  139.134.2.18

    Is this why my routing is not working? Do I need to put GATEWAY and LINUX on different subnets to prevent WORKSTATION being redirected to GATEWAY? Do I need to use NAT at all, or can I do this with routing alone (which is what I want)?
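
    A few read-only checks (not in the post) that help narrow down where a setup like this is failing, before even looking at the redirect question:

    ```bash
    ip rule show                            # the fwmark 0x1 rule should appear with "lookup VPN"
    ip route show table VPN                 # should contain: default via 192.168.150.9 dev tun0
    iptables -t mangle -L PREROUTING -n -v  # non-zero counters show whether the MARK rule matches at all
    ```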

    Read the article

  • Delete cell content in Libre (Open) Office based on the cell value

    - by take2
    I have a huge CSV file (tens of thousands of rows) that I need to filter based on different criteria. After trying to find a proper CSV editor, I decided to use LibreOffice Calc. CSVed is great, but it supports neither UTF-8 nor macros for advanced filtering.

    So, there are 4 columns, 3 of which contain numbers (with decimals) and 1 of which contains text. I'm trying to find a way to delete rows with macro code. I can achieve the desired behaviour with filters too, but it's annoying to type all of the filtering values over and over again, and there doesn't seem to be a way to export the filter and use it repeatedly.

    These rows should be deleted: the ones that don't contain certain words in the text column (column A). There are a few thousand different words used in that column and I want to keep only the rows that contain one of about 30 words in that column. Additionally, the numbers in the other columns should be bigger than 3.8 (column B), 4.5 (column C) and smaller than 20 (column C). The row-deletion type is "Shift up". Hopefully I have explained it well. Thanks a lot in advance for your help!

    Read the article

  • Change source address based on destination IP

    - by hgj
    We have several "router" machines that gather a lot of external IP addresses on the same host and redirect, NAT or proxy the traffic to the internal network. They also act as routers for the machines on the internal network. This works fine, however I am unable to make the routing table, so I can change the source address, based on the destination a machine from the internal network want to access. Let's say I have a router, that has public addresses P1 (5.5.5.1/24) and P2 (5.5.5.2/24). All traffic goes through P1, but if necessary, the host is reachable on P2 too. This looks like this and works fine: > ip addr ... 1: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether aa:bb:cc:dd:ee:11 brd ff:ff:ff:ff:ff:ff inet 5.5.5.1/24 brd 5.5.5.255 scope global eth1 inet 5.5.5.2/24 brd 5.5.5.255 scope global secondary eth1:p2 ... Now I want to use P2 as the source address, if I want to access the Google DNS service for example (8.8.8.8). So I add a row in the routing table like: > ip route add 8.8.8.8 via 5.5.5.254 dev eth1 src 5.5.5.2 > ip route ... default via 5.5.5.254 dev eth1 5.5.5.0/24 dev eth1 proto kernel scope link src 5.5.5.1 8.8.8.8 via 5.5.5.254 dev eth1 src 5.5.5.2 ... But this does not work. If I ping 8.8.8.8, the host still uses P1 as the source address, and does not use P2 at all for outgoing connections. Am I doing it right? I guess not...

    Read the article

  • MySQL-based authentication with crypt()ed passwords fails in Apache 2.2

    - by Fester Bestertester
    I'm trying to set up a simple CalDAV/CardDAV server with a Radicale backend and an Apache 2.2 frontend. So far it's all nice and simple, but I can't get the MySQL-based authentication to work. I'd like to authenticate users against an existing MySQL database, and I need the REMOTE_USER variable to be set (pretty much like in the configuration examples for Radicale).

    I've tried mod_auth_mysql, which authenticated the users nicely but failed to set the REMOTE_USER variable. The newer alternative seems to be mod_authn_dbd, which doesn't seem to like the crypted passwords in the MySQL database. According to the documentation, crypted passwords should work, so maybe I'm just missing a simple parameter. The configuration looks like this:

        DBDriver mysql
        DBDParams "sock=/var/run/mysqld/mysqld.sock dbname=myAuthDB user=myAuthUser pass=myAuthPW"

        <Directory />
            AllowOverride None
            Order allow,deny
            allow from all
            AuthName 'CalDav'
            AuthType Basic
            AuthBasicProvider dbd
            require valid-user
            AuthDBDUserPWQuery "SELECT crypt FROM myAuthTable WHERE id=%s"
        </Directory>

    I've tested the query; it works fine. And as mentioned before, mod_auth_mysql worked nicely against the same database but didn't set the required variables. Am I just missing some configuration parameter? Or is mod_authn_dbd just not the right tool to achieve what I want?
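
    One quick sanity check from the shell (none of this is in the post; table, user and password names are placeholders taken from the config above) is to pull a stored hash and verify by hand that it is a traditional crypt() value:

    ```bash
    # Fetch the stored hash for one user (prompts for the MySQL password)
    HASH=$(mysql -u myAuthUser -p -N -e \
        "SELECT crypt FROM myAuthTable WHERE id='sometestuser'" myAuthDB)

    # A traditional DES crypt() hash is 13 characters and its first two characters
    # are the salt; re-hash a known-good password with the same salt and compare.
    SALT=${HASH:0:2}
    openssl passwd -crypt -salt "$SALT" 'the-test-password'
    # If this matches $HASH, the stored format is fine and the problem is on the
    # Apache side rather than in the database.
    ```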

    Read the article

  • Hard-disk-based storage library

    - by Ryan M.
    We have a Tandberg T24 tape device to handle all of our long-term backups right now. We decided that we're not backing up nearly everything that we would like to and that we still have a lot of vulnerabilities. To get to where we want to be, we're going to have to back up a lot more servers than we're currently doing. All of our internal servers have some sort of directly attached drive (i.e. a LaCie RAID box or a simple portable hard drive) doing backups, but what we want to do is get those backups off-site.

    The current tape drive is directly attached via SCSI to a Windows Server 2008 file server, so to back up anything to tape it has to be funneled through the file server. With the increase that we have planned, I don't think that funneling everything through the file server is the right course of action, and I'm thinking that maybe a second backup device would be more appropriate. I would like your input on a couple of ideas:

    1) Doing HDD instead of tape. Tape is hard to deal with. We have a regular rotation cycle, so the media don't need years and years of shelf life, so I'm wondering if something HDD-based would be better.

    2) Something accessible over the network. Instead of having the device directly attached to one specific machine, have it available to all the servers over the network. Our file server is a 12-disk RAID 6 setup. I was thinking of something like that, but with no RAID involved: all disks stand alone, so they can be used/installed/removed on an individual basis. Does any such thing exist?

    Thanks for your ideas. I'm really interested to hear about some of the solutions you guys are using.

    Read the article

  • VirtualName-based local development host behind corporate proxy (MAMP)

    - by geerlingguy
    I am behind a corporate proxy server/firewall, and this firewall seems to not be too happy with my idea of local development. On my home computer (Mac/Leopard), I have MAMP running, with a rule in /etc/hosts that directs dev.example.com to 127.0.0.1, and I have a virtualhost set up in the httpd.conf file which works great for me. However, at work I set up the exact same configuration but am not able to access dev.example.com, likely due to some address/DNS translation going on via the proxy server. Here are the relevant details from Terminal:

        $ ping dev.example.com
        PING dev.example.com (127.0.0.1): 56 data bytes
        64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.025 ms

        $ host dev.example.com
        Host dev.example.com not found: 3(NXDOMAIN)

    I've tried adding dev.example.com to the list of bypass addresses in System Preferences (the 'Bypass proxy settings for these Hosts & Domains' list), but that had no effect. Is there any way I can develop locally using name-based hosts at work? I can access localhost, but can't get to dev.example.com (or any other custom virtualhosts) here at work, which complicates other matters related to the sites on which I'm working...
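
    Two quick checks on the work machine (not from the post) help separate proxy problems from resolver problems: host queries the DNS server directly and ignores /etc/hosts, which is why it returns NXDOMAIN even though ping resolves, while dscacheutil goes through the normal macOS resolver path that applications use:

    ```bash
    dscacheutil -q host -a name dev.example.com   # should show ip_address: 127.0.0.1 if /etc/hosts is honoured
    scutil --dns | head -20                       # shows the resolver/search-domain configuration in effect
    ```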

    Read the article

  • How can I turn off calculated columns in an Excel table from a macro using VBA?

    - by user41293
    I am working on a macro that inserts formulas into a cell in an Excel table. The Excel table does the automatic filling of columns and fills all the cells in that column with the formula, but all I want is one cell to have the formula. I cannot just turn off automatic formula for tables as I need to have other people use this worksheet on their systems. Is there a way to turn off the automatic filling of formulas in a table using VBA in a macro? It just needs to be temporary: I just want to turn it off, put in my formulas, then turn it back on.

    Read the article

  • How can I turn off flash fill automatically in Excel 2013?

    - by user3480643
    Flash Fill breaks a lot of things in older Excel documents. It causes maddeningly slow transfers from cell to cell after updating. I am trying to find a way to turn off "flash fill" in Excel 2013 automatically before rolling the product out to the rest of the staff in my company. Is there (preferably) a registry key that I can apply or a switch that I can include during the install that will turn this option off? Here is an image of the setting that I am looking to turn off: I haven't been able to find any documentation online about turning this off, other than this one page from MS: http://office.microsoft.com/en-ie/excel-help/turn-flash-fill-on-HA104043292.aspx

    Read the article

  • Issues with returned mail sent to web-based email domains

    - by Beeder
    My company is having issues with returned mail that we send out to external domains. A few weeks ago we replaced a firewall and changed ISP providers, and subsequently began having issues RECEIVING emails from external sources because we hadn't updated our new IPs in the DNS records. After making the necessary configuration changes and setting up SMTP forwarding over port 25 to our mail server, everything was working fine up until a few days ago, when mail we sent out started being returned to us.

    We aren't having any trouble communicating internally (to recipients on our domain), but it seems we're having trouble with outbound messages to web-based email recipients (@hotmail, @live, @yahoo, @gmail, etc.). Currently we are running Server 2003 SP2 and Exchange 2003. I'm very unfamiliar with configuring Exchange and could really use some help in narrowing down the possibilities. I did some research and am becoming suspicious of Sender ID being the culprit, due to our recent IP address change and the likelihood that Sender ID is identifying us as a fake domain. Am I going in entirely the wrong direction? Any input or guidance would be infinitely appreciated.

    This is the message that is returned when an outbound message fails; this particular one was sent to my @live.com account for testing purposes:

        Your message did not reach some or all of the intended recipients.
        The following recipient(s) could not be reached:
        [email protected] on 5/17/2012 3:02 PM
        There was a SMTP communication problem with the recipient's email server. Please contact your system administrator.
        Unfortunately, messages from xx.x.xx.x weren't sent. Please contact your Internet service provider since part of their network is on our block list.

    I tried a reverse DNS lookup and found that we are set up as forward-confirmed reverse DNS. So do I just need to contact my ISP and have them correct their DNS records, or is this something I can solve on our end?
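
    Some quick external checks from any shell (not part of the original post; 203.0.113.5 stands in for the mail server's public IP) cover the two things the bounce points at, reverse DNS and blocklists:

    ```bash
    dig +short -x 203.0.113.5                   # reverse DNS: should return the mail host's name
    dig +short $(dig +short -x 203.0.113.5)     # forward-confirm: should come back to 203.0.113.5
    dig +short 5.113.0.203.zen.spamhaus.org     # DNSBL query (octets reversed); any answer means the IP is listed
    ```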

    Read the article

  • Routing connections through VPN based on hostname (not IP range)

    - by Michal M
    This bugs me immensely. I need to connect to a client's network through VPN, but I definitely do not want to send all my traffic through the client's network, so that option is out of the question. What I need, basically, is for the OS to know that all of the client's subdomains (*.example.com) need to go through the VPN connection.

    I tried a couple of options:

    - Changing the order of services and putting the VPN on top, but this works the same as "Send all traffic over VPN connection".
    - Using the "VPN on Demand" option from the network advanced options, but this feature is quite rubbish to be honest. It seems to work only in Safari (?!) and it doesn't route the connection; it basically just triggers the OS to connect to the selected VPN.

    The reason I need it to work based on hostnames rather than an IP range is simple: my client has a lot of servers inside his network and it's impossible for me to remember all the IPs. They are all within a range, but that doesn't help me remember them. Another option would be to put the VPN connection at the bottom of the network services, untick "Send all traffic...", and then put all known hostnames in the hosts file, but considering there could be hundreds of servers (and therefore hostnames and IPs) it is a ridiculous job. And if a new server appears on the network I'd need to edit the hosts file again. Sisyphean labours.

    However, this works on Windows very simply: if a hostname is not available through the default network interface, it seems to try the VPN connection, and that works brilliantly. So how can I achieve the same on a Mac? I know the client's internal DNS addresses, if that is of any help (like directing certain domains through a different DNS).

    PS. Using latest version 10.6.6.
    PS2. I am using the VPN to access an intranet, version control servers (svn://), Samba shares and for SSH access to servers.
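
    On the DNS angle raised at the end, macOS supports per-domain resolver files under /etc/resolver, which send lookups for one domain to a specific DNS server without touching anything else. A sketch (10.0.0.53 is a placeholder for the client's internal DNS server; note this only affects name resolution, not which interface the traffic is routed over):

    ```bash
    sudo mkdir -p /etc/resolver
    printf 'nameserver 10.0.0.53\n' | sudo tee /etc/resolver/example.com
    scutil --dns | grep -A 2 example.com   # verify the extra resolver shows up
    ```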

    Read the article

  • How to turn off the display in Windows 8 without locking or making the computer go to sleep?

    - by darshshah
    How do I turn off the display in Windows 8 without locking the computer or making it go to sleep? The problem is that when I enable the 'Turn off display after x minutes' feature from Control Panel, the device goes to sleep after x minutes. It seems that both options - turn off display and put computer to sleep - are connected: the device automatically goes into sleep mode as soon as the display is turned off. So, is there any method to turn off the display of the device without it going into sleep mode?

    Someone wrote in the Microsoft forum: "The whole key to this problem seems to be the 'Turn off the Display' setting. If you have that set to 5 minutes, 10 minutes, 15 minutes, etc...the computer will go to sleep one minute after you lock the screen. With this setting set to 'Never', it doesn't do it. So something is wrong there."

    I want to turn off the display but don't want to lock the device. Is that also possible? Thanks

    Read the article

  • XML configuration versus annotation-based configuration

    - by Abarax
    In a few large projects I have been working on lately, it seems to be increasingly important to choose one or the other (XML or annotations). As projects grow, consistency is very important for maintainability. My question is: what do people prefer? Do you prefer XML-based or annotation-based configuration, or both? Everybody talks about XML configuration hell and how annotations are the answer, but what about annotation configuration hell?

    Read the article
