Search Results

Search found 489 results on 20 pages for 'routed'.

Page 12/20 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • What are different ways to reduce latency between a server and a web application? [closed]

    - by jjoensuu
    This is a question about a web application that provides SOAP web services. For the sake of this question, the web app is hosted on SERVER B, which is located in California. We have an automated, scheduled process running on SERVER A, located in New York. This scheduled process is supposed to send SOAP messages to SERVER B every so often, but it typically dies soon after starting. The vendor has now told us that the reason the process dies is the latency between SERVER A and SERVER B. The data traffic is routed through many different public networks; there is no dedicated line between SERVER A and SERVER B. As a result I have been asked to look into ways to reduce the latency between the two servers.

    So I wanted to ask: what are the different ways to reduce latency in a situation like this? For example, would it help to switch from HTTPS to some other secure protocol? (The thought here is that another alternative might require fewer handshakes than HTTPS.) Or would a VPN help? If a VPN would reduce the latency, how would it do that?

    NOTE: I am not looking for an explicit answer that would work in my specific situation. I am just looking for a simple list of technologies that could be used for this. I will still have to evaluate them and discuss them internally with others, so the list would only be a starting point. I am assuming that there exist very few ways to reduce latency between two servers that communicate across public networks using HTTPS; feel free to correct me if this assumption is wrong, and please ask if more specific information is needed.

    NOTE 2: A list of technologies is a specific answer to the question I stated in the title.

    NOTE 3: It's rather dumb to close this question when it is, after all, about me looking for information, and this information can clearly be useful to others. Luckily there are other sites where I can ask around; StackExchange seems too attached to its own philosophical principles. Many thanks
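    A quick way to see whether the time is being lost in DNS, the TCP connect, the TLS handshake, or the transfer itself is curl's timing variables. This is only a diagnostic sketch, and the endpoint URL is a placeholder for the real SOAP service on SERVER B:

        # run from SERVER A; prints each phase of one HTTPS request, in seconds
        curl -o /dev/null -s \
             -w 'dns:%{time_namelookup} connect:%{time_connect} tls:%{time_appconnect} first-byte:%{time_starttransfer} total:%{time_total}\n' \
             https://serverb.example.com/soap/endpoint

    If most of the time sits in the TLS phase, keeping the HTTPS connection alive between requests (or pooling connections in the client) avoids repeated handshakes without changing protocols at all.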

    Read the article

  • WCF Routing Service Filter Generator

    - by Michael Stephenson
    Recently I've been working with the WCF routing service, and in our case we were simply routing based on the SOAP Action. This is a pretty good approach for a standard redirection of the message when all messages matching a SOAP Action will go to the same endpoint. Using the SOAP Action also lets you be specific about which methods you expose via the router.

    One of the things which was a pain was the number of routing rules I needed to create, because we were routing for a lot of different methods. I could have explored the option of using a regular expression to match the message to its routing, but I wanted to be very specific about what's routed and not risk exposing methods I shouldn't via the router. I decided to put together a little spreadsheet so that I can generate part of the configuration I need to put in the configuration file, rather than having to type it by hand. To see how this works, download the spreadsheet from the following URL: https://s3.amazonaws.com/CSCBlogSamples/WCF+Routing+Generator.xlsx

    In the spreadsheet you will see that the cells in green are the ones which you need to amend. In the picture below you can see that you specify a prefix and suffix for the filter name, the core namespace from the web service you're generating routing rules for, and the WCF endpoint name which you want to route to. In column A you will see the green cells where you add the list of method names which you want to include routing rules for. The spreadsheet will work out the full SOAP Action and the name you will use for that filter in your WCF routing filters. In column D the spreadsheet will have generated the XML snippet which you can add to the routing filters section in your configuration file. In column E the spreadsheet will have created the XML snippet which you can add to the routing table to send messages matching each filter to the appropriate WCF client endpoint, forwarding the message to the required destination.

    Hopefully you can see that with this spreadsheet it would be very easy to produce accurate XML for the WCF routing configuration if you had a large number of routing rules. If you had additional methods in other services you can simply copy the worksheet and add multiple copies to the Excel workbook. One worksheet per service would work well.
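    The snippets the spreadsheet produces end up in the routing section of the configuration file; a minimal hand-written sketch of that shape (the service namespace, filter name and endpoint name below are made up for illustration) looks like this:

        <routing>
          <filters>
            <!-- column D: one Action filter per method -->
            <filter name="MyPrefix_GetOrder_MySuffix"
                    filterType="Action"
                    filterData="http://mycompany.com/orderservice/IOrderService/GetOrder" />
          </filters>
          <filterTables>
            <filterTable name="routingTable">
              <!-- column E: matching filter sent to the right client endpoint -->
              <add filterName="MyPrefix_GetOrder_MySuffix" endpointName="OrderServiceClient" />
            </filterTable>
          </filterTables>
        </routing>

    Each method listed in column A gets one filter entry and one matching filter-table entry, which is exactly the repetition the spreadsheet saves you from typing.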

    Read the article

  • Cisco ASA and SixXS IPv6 tunnel endpoint?

    - by Martijn Heemels
    I recently installed a Cisco ASA 5505 firewall on the edge of our LAN. The setup is simple: Internet <-- ASA <-- LAN. I would like to provide the hosts in the LAN with IPv6 connectivity by setting up a 6in4 tunnel to SixXS. It would be nice to have the ASA as tunnel endpoint so it can firewall both IPv4 and IPv6 traffic. Unfortunately the ASA apparently can't create a tunnel itself, and can't port-forward protocol 41 traffic, so I believe I would have to do one of the following instead:

    - Set up a host with its own IP outside the firewall, and have that function as tunnel endpoint. The ASA can then firewall and route the v6 subnet to the LAN.
    - Set up a host inside the firewall that functions as endpoint, separated via VLAN or whatever, and loop the traffic back into the ASA where it can be firewalled and routed. This seems contrived, but would allow me to use a VM instead of a physical machine as endpoint.
    - Any other way?

    What would you suggest is the optimal way to set this up? P.S. I do have a spare public IP address available if needed, and can spin up another VM in our VMware infrastructure.
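    For the endpoint host itself (in either variant), a plain Linux box can terminate the 6in4 tunnel with iproute2 alone. This is a sketch with placeholder addresses; the real PoP address and prefixes come from the SixXS tunnel details:

        # 6in4 (protocol 41) tunnel to the SixXS PoP; local = the host's public IPv4
        ip tunnel add sixxs mode sit local 198.51.100.10 remote 192.0.2.1 ttl 64
        ip link set sixxs up
        # your side of the tunnel /64, as assigned by SixXS
        ip -6 addr add 2001:db8:1234::2/64 dev sixxs
        # send all IPv6 traffic through the tunnel
        ip -6 route add default dev sixxs

    The ASA would then only need to permit protocol 41 to that host and route the delegated IPv6 subnet towards it.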

    Read the article

  • Exclude specific domains from Apache2 serverAlias while using a catch all *(wildcard) alias

    - by Victor S
    I have a web application that needs to support custom domains; in that regard I have set up the following name-based virtual host:

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias * *.example.com www.example.com example.com
            RailsEnv production
            RackEnv production
            DocumentRoot /srv/www/example/current/public
            <Directory /srv/www/example/current/public>
                AllowOverride all
                Options -MultiViews FollowSymLinks
            </Directory>
            ErrorLog /srv/www/example/log/error.log
            TransferLog /srv/www/example/log/access.log
        </VirtualHost>

    Notice the * as the server alias? That catches all the domains pointed at this server. However, I have other sites on this server which I want to be excluded from this catch-all. It is more economical for me to have a list of excluded domains than to manually add every domain a user may register with this service as a ServerAlias. Perhaps this is not the best way to go, but I'm looking for help with the best (relatively simple) way to set up a web app that may catch any domain, while allowing other specific domains to be routed to different apps. Thanks!
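    One way to get the exclusion without listing every customer domain: Apache checks name-based virtual hosts in configuration order, so any site declared before the catch-all claims its hostname first and never reaches the wildcard. A sketch, with the excluded site's name made up:

        # declared first: a site that must NOT fall into the catch-all
        <VirtualHost *:80>
            ServerName otherapp.example.org
            DocumentRoot /srv/www/otherapp/public
        </VirtualHost>

        # declared last: the wildcard catch-all for customer domains
        <VirtualHost *:80>
            ServerName example.com
            ServerAlias *
            DocumentRoot /srv/www/example/current/public
        </VirtualHost>

    The "exclusion list" then simply becomes the set of explicit vhosts that appear earlier in the configuration.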

    Read the article

  • Cloudmin KVM DNS hostnames not working

    - by dannymcc
    I have got a new server which has Cloudmin installed. It's working well and I can create and manage VMs as expected. The server came with a /29 subnet and I requested an additional /29 subnet to allow for more virtual machines. I didn't want to replace the existing /29 with a /28 because that would have caused disruption to my existing VMs. To make life easier I decided to configure a domain name for the Cloudmin host server, to allow for automatic hostname setup whenever I create a new virtual machine. I have a domain name (example.com) and I have created the following records:

        NS  kvm.example.com  123.123.123.123
        A   kvm.example.com  123.123.123.123

    In the above example the IP address is that of the host server; I also have two /29 subnets routed to the server. I've added the two subnets to the Cloudmin administration panel. (I've tried to hide as little information as possible without giving all of the server details away!)

    If I ping kvm.example.com I get a response from 123.123.123.123. If I ping the newly created virtual machine (example.kvm.example.com) it fails, and if I ping the IP address that's been assigned to the new virtual machine (from the second subnet) it also fails. Am I missing anything vital? Does this look (from what little information I can show) like it's set up correctly? Any help/pointers would be appreciated. For reference, the Cloudmin documentation I am using as a guide is http://www.virtualmin.com/documentation/cloudmin/gettingstarted
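    Since both the hostname and the IP of the new VM fail, two things are worth checking separately: whether the delegation for kvm.example.com actually reaches the host server's DNS, and whether the second /29 is reachable at all. A hedged sketch using dig and ping (names and the VM address mirror the placeholders above):

        # does the parent zone delegate kvm.example.com to the host server?
        dig +short NS kvm.example.com
        # does the host server itself answer for the new VM's record?
        dig +short A example.kvm.example.com @123.123.123.123
        # is the VM's address reachable at all, independent of DNS?
        ping -c 3 203.0.113.10    # placeholder for the VM's IP from the second /29

    If the second query returns nothing, the Cloudmin-managed DNS never received the VM's A record; if the ping to the VM's own IP fails too, the second /29 is probably not being forwarded by the host (IP forwarding or the Cloudmin routing setup for that range), which is a separate problem from DNS.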

    Read the article

  • TS (RD) Gateway Authentication Problem "The logon attempt failed"

    - by user2059
    I've been using TS Gateway to permit remote access for our staff for a few months now, and all has been well. Users either connect to a traditional terminal server desktop or hit our website and start a TS RemoteApp application; in both cases the connection is routed through a TS Gateway. However, I came into work this morning to find that it has stopped authenticating users through the TS Gateway, each time returning "The logon attempt failed" (as seen in the image) even though the credentials are correct. It should be noted that everything works fine if the Gateway is taken out of the equation; it's the TS Gateway component that is causing these problems. Users experience this problem whether they connect from XP SP3, Vista or 7.

    On the server, a total of 4 entries appear in the Windows Security log at exactly the same time for each failed logon attempt: two 4624 "An account was successfully logged on" messages for the user, immediately followed by two 4634 "An account was logged off" messages. This suggests that the server is accepting the credentials as correct, then booting the user off. Nothing at all is recorded in the NPS and Terminal Server logs. A reboot doesn't change things. Neither does completely removing and reinstalling the NPS and Terminal Server roles. I'm baffled as to how this can happen suddenly without warning. Any suggestions would be greatly appreciated.

    Read the article

  • Reverse SSH Tunnel

    - by chris
    I am trying to forward web traffic from a remote server to my local machine in order to test out some API integration (Tropo, PayPal, etc). Basically, I'm trying to set up something similar to what tunnlr.com provides. I've initiated the SSH tunnel with the command:

        $ ssh -nNT -R :7777:localhost:5000 user@server

    Then I can see that the server is now listening on port 7777:

        user@server:$ netstat -ant | grep 7777
        tcp    0  0 127.0.0.1:7777   0.0.0.0:*   LISTEN
        tcp6   0  0 ::1:7777         :::*        LISTEN
        user@server:$ curl localhost:7777
        Hello from local machine

    So that works fine; the curl request is actually served from the local machine. Now, how do I enable server.com:8888 to be routed through that tunnel? I've tried using nginx like so:

        upstream tunnel {
            server 0.0.0.0:7777;
        }
        server {
            listen 8888;
            server_name server.com;
            location / {
                access_log /var/log/nginx/tunnel-access.log;
                error_log /var/log/nginx/tunnel-error.log;
                proxy_pass http://tunnel;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_redirect off;
            }
        }

    From the nginx error log I see:

        [error] 11389#0: *1 connect() failed (111: Connection refused)

    I've been looking at trying to use iptables, but haven't made any progress. iptables seems like a more elegant solution than running nginx just for tunneling. Any help is greatly appreciated. Thanks!
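    Two small changes are usually enough in this kind of setup; this is a sketch, not a guaranteed fix. First, point the upstream at the loopback address the tunnel is actually bound to (the netstat output shows it only listens on 127.0.0.1 and ::1). Second, if you would rather expose port 7777 publicly and skip nginx entirely, let sshd bind remote forwards to all interfaces:

        # nginx: proxy to where the reverse tunnel really listens
        upstream tunnel {
            server 127.0.0.1:7777;
        }

        # /etc/ssh/sshd_config on the server (restart sshd afterwards)
        GatewayPorts clientspecified

        # client side: ask for a remote forward bound to all interfaces
        ssh -nNT -R 0.0.0.0:7777:localhost:5000 user@server

    With GatewayPorts enabled, http://server.com:7777 reaches the local machine directly; with the nginx change, http://server.com:8888 proxies into the tunnel as originally intended.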

    Read the article

  • AsteriskNow Migration / Shared Extension Space

    - by Aaron C. de Bruyn
    I am testing the possibility of migrating from an old Avaya phone system to AsteriskNow. The migration would cover several hundred phones, but spread out over several years. (Management wants to move buildings to the new phone system one by one as cables get cut or time permits.) Two other directives are that extensions must not change, and that there must be a GUI that other admins (non-Linux geeks) can manage. They currently use 9XXX for all extensions.

    We linked the Avaya and Asterisk boxes via a PRI card and they are both communicating. From the Avaya side, if we move (for example) extension 9001 to Asterisk, we forward the call over the PRI to the AsteriskNow box and the SIP phone rings. In AsteriskNow we have an outgoing rule '_9XXX' that routes all 4-digit extensions starting with 9 back to the Avaya.

    Here's the trouble: dialing 9001 (the extension moved over to AsteriskNow) causes the call to be routed out the PRI to the Avaya box, then the Avaya box routes the call back to Asterisk, and Asterisk routes it to the SIP phone. As we get more and more users switched over, this will use up more and more channels on the PRI card. Is there a way I can ask Asterisk to check its local extensions first, and only forward off to the Avaya system if the call matches '_9XXX' and isn't local? (I know how I can do it when editing the raw config files; I'm just looking for a way to do it in the GUI so other admins can manage it if necessary.) As a last-ditch plan, I know I can specifically add '_9001' as an outgoing call rule and send it directly to extension 9001, but I'd really hate to do that for several hundred phones.
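    For reference, the raw-dialplan version of "try local first, then the Avaya" is only a couple of lines; this is a sketch, assuming SIP extensions and the PRI trunk sitting in DAHDI group 0 (the context and trunk names are made up, and the GUI question still stands):

        [from-internal-custom]
        ; ring a local SIP extension if one exists for the dialed number...
        exten => _9XXX,1,Dial(SIP/${EXTEN},20)
        ; ...otherwise (or if unavailable) fall back to the Avaya over the PRI
        exten => _9XXX,n,Dial(DAHDI/g0/${EXTEN})
        exten => _9XXX,n,Hangup()

    The fallback works because Dial() simply continues to the next priority when the local SIP peer doesn't exist, so only the not-yet-migrated extensions ever consume PRI channels.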

    Read the article

  • OpenVPN Bridge LAN-to-LAN Configuration?

    - by Shad Reese
    I'm trying to configure an OpenVPN bridged LAN-to-LAN setup. Currently I have the OpenVPN bridged server/client setup up and running. On the server side the br-lan interface has tap0, eth0, and wlan0 in the bridge group. On the client side the br-lan interface has eth0 and wlan0 in the bridge group; the client tap0 is outside of br-lan. Currently the two bridge groups are connected via the wlan0 interfaces (the server side is the access point and the client side is the wireless client). My goal is to connect the two bridge groups with a wireless VPN pipe.

    My network configuration:

        Server: br-lan: 10.4.96.50
        Client: br-lan: 10.4.96.75
                tap0:   10.4.96.100   <---- issued by the VPN server

    Unfortunately, I'm stuck with using a bridge instead of a routed OpenVPN setup. My question is: how (if possible) do I add the client tap0 interface to the client bridge group, so that all traffic between the server and client bridge groups uses the VPN pipe?

    Server config file:

        config openvpn sample_server
            # Set to 1 to enable this instance:
            option enable 1
            option port 1194
            option proto udp
            option dev tap0
            option key /etc/easy-rsa/keys/server.key
            option dh /etc/easy-rsa/keys/dh1024.pem
            option ifconfig_pool_persist /tmp/ipp.txt
            option server_bridge "10.4.96.50 255.255.255.0 10.4.96.100 10.4.96.200"
            list push "redirect-gateway local def1"
            list push "dhcp-option DNS 10.4.96.14"
            option duplicate_cn 1
            option comp_lzo 1
            option max_clients 100
            option log /tmp/openvpn.log
            option verb 3

    Client config file:

        config 'openvpn' 'sample_client'
            option 'enable' '1'
            option 'client' '1'
            option 'dev' 'tap'
            option 'proto' 'udp'
            list 'remote' '10.4.96.50 1194'
            option 'status' /tmp/openvpn-status.log
            option 'log' /tmp/openvpn.log
            option 'ca' '/etc/easy-rsa/keys/ca.crt'
            option 'cert' '/etc/easy-rsa/keys/client.crt'
            option 'key' '/etc/easy-rsa/keys/client.key'
            option 'comp_lzo' '1'
            option 'verb' '5'

    Thanks in advance.
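    On the client, the usual way to pull tap0 into the bridge is an up script that OpenVPN runs once the interface exists. A sketch, assuming brctl is available (it is on most OpenWrt-style builds) and that the script path and options below are added by hand:

        #!/bin/sh
        # /etc/openvpn/up.sh -- hook it into the client instance with, e.g.:
        #   option up '/etc/openvpn/up.sh'
        #   option script_security '2'
        ip link set "$1" up promisc on   # $1 is the tap device name OpenVPN passes to up scripts
        brctl addif br-lan "$1"          # join it to the existing br-lan bridge group

    One caveat worth stating: with tap0 bridged into br-lan on both ends while the tunnel itself rides over wlan0, which is also in the bridge, it is easy to create a switching loop, so the wireless link between the two sites should carry only the encrypted OpenVPN traffic (or wlan0 should come out of the bridge on one side).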

    Read the article

  • Address (url) forwarding with Vyatta

    - by Trikks
    Hi, I've got this kind of noob question, I suppose. I have a very basic network setup and need help to set up some address forwarding. As seen in my illustration below, all traffic enters via the eth0 interface (85.123.32.23), and the external DNS is set up to direct all hosts to this IP as well. Now, how on earth do I filter the incoming requests to each box? The IPs are static! See the network layout here: http://vyatta.org/files/u11160/setup.png

    I do not wish to solve this by assigning tons of ports etc. In my wishful thinking something like this would be nice :)

        set service nat rule 10 type destination
        set service nat rule 10 inbound-interface eth0
        set service nat rule 10 destination address ftp.myhost.com
        set service nat rule 10 inside-address address 192.168.100.20

    This way ALL traffic to the address ftp.myhost.com (at eth0) would be routed to the internal IP, 192.168.100.20. Right, is there anyone who could point me in some direction? Maybe it's wrong to use NAT? Please help me! :)
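    One thing a NAT rule can never do is see the hostname: destination NAT matches on IP address, protocol and port, and by the time a packet arrives, ftp.myhost.com has already been resolved to 85.123.32.23 just like every other name. With a single public address the traffic therefore has to be split by port (or, for HTTP only, by a reverse proxy that reads the Host header). A hedged Vyatta sketch of the port-based version, with rule numbers and the second inside address as placeholders:

        # FTP control connections to the public address go to the FTP box
        set service nat rule 10 type destination
        set service nat rule 10 inbound-interface eth0
        set service nat rule 10 protocol tcp
        set service nat rule 10 destination address 85.123.32.23
        set service nat rule 10 destination port 21
        set service nat rule 10 inside-address address 192.168.100.20

        # web traffic to the same public address goes to the web box
        set service nat rule 20 type destination
        set service nat rule 20 inbound-interface eth0
        set service nat rule 20 protocol tcp
        set service nat rule 20 destination address 85.123.32.23
        set service nat rule 20 destination port 80
        set service nat rule 20 inside-address address 192.168.100.30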

    Read the article

  • PHP 5.3 on IIS gives 404 error in CGI mode

    - by reinier
    Slowly losing my mind here. I had PHP 5.2 working fine (ISAPI) under IIS, but for some extension I needed 5.3. So, no worries, I installed it, but it turns out ISAPI is not supplied anymore. I followed the install tutorials for FastCGI and ended up with a 500 internal server error for every PHP page served. So my current situation is: I have FastCGI removed. In my websites I have added PHP (HEAD, GET, POST) and routed it to c:\php\php-cgi.exe. Result: every PHP page I try (even the ones with just text) gives a 404 not found error, while any HTML file I put in the same folder serves without a hitch. Who can help me please... How hard can something like this be, right? For me, apparently, very hard.

    Extra information: I ran the installer as suggested below and set it to use FastCGI. My fcgiext.ini file looks like this now:

        [types]
        php=c:\php\php-cgi.exe
        [c:\php\php-cgi.exe]
        exepath=c:\php\php-cgi.exe

    - From the command line, a 3-line PHP file with just phpinfo(); works fine.
    - From the server, the same PHP file with just phpinfo(); results in the internal server 500 error.
    - From the server, a PHP file with just text works fine.
    - Changing the document types in the IIS management console and pointing the PHP extension directly to c:\php\php-cgi.exe results in a 404 for every PHP file.
    - The php.ini is the php.ini-production file which came in the distribution; no edits were made.
    - Setting the IIS PHP handler directly to PHP (not via FastCGI), c:\php\php-cgi.exe, results in the following: a PHP page with only text works fine, while a page with only phpinfo(); results in 404 not found.

    Read the article

  • Domino to Exchange 2007 (or 2010) Design Concerns?

    - by NickToyota
    Today we got the executive green light to proceed with changing from a Domino platform to Exchange. The business prefers Exchange as a messaging platform (even though, IMO, IBM Domino is fine; if it ain't broke, don't fix it, but it was not my call). I have been put in charge of making sure the Domino to Exchange migration goes as smoothly as possible. I have also been told to put together costs for this project. I have some questions and concerns regarding network design, licensing and costs.

    The current setup is as follows:

    - 1 HQ office (100 users), 1 secondary office (50 users), 5 branch offices (under 10 users each)
    - 5 different email domains
    - Windows Server 2003 functional level, with a few 2008 R2 servers
    - Lotus Domino Notes servers (one in each office)
    - IronMail appliance
    - Public Domino web mail server
    - Majority G5+ ProLiant servers
    - Domino BlackBerry Enterprise license and server
    - No VoIP phones

    My questions:

    - What are the basic hardware requirements for Exchange 2007 or 2010? Can I simply purchase a single physical server?
    - Will each office require an Exchange server, or possibly additional servers (roles)?
    - How is email routed to the smaller branch offices?
    - Standard or Enterprise licenses?

    The business has been running Domino (messaging and application services) for over 10 years and also wants Exchange to support email services, BlackBerry, Outlook Web Access, and possibly iPhone devices. Thank you, Serverfault universe.

    Read the article

  • Problems with IPsec between Cisco ASA 5505 and Juniper SSG5

    - by Oskar Kjellin
    I am trying to set up an IPsec tunnel between our ASA 5505 and a Juniper SSG5. The tunnel is up and running, but I cannot get any data through it. The local network I am on is 172.16.1.0 and the remote is 192.168.70.0, but I cannot ping anything on their network. I receive a "Phase 2 OK" when I set up the IPsec. I think this is the part of the config that is applicable; it seems like the data is not routed through the tunnel, but I am not sure...

        object network our-network
         subnet 172.16.1.0 255.255.255.0
        object network their-network
         subnet 192.168.70.0 255.255.255.0
        access-list outside_cryptomap extended permit ip object our-network object their-network
        crypto ipsec ikev1 transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto map outside_map 1 match address outside_cryptomap
        crypto map outside_map 1 set pfs
        crypto map outside_map 1 set peer THEIR_IP
        crypto map outside_map 1 set ikev1 phase1-mode aggressive
        crypto map outside_map 1 set ikev1 transform-set ESP-3DES-MD5
        crypto map outside_map 1 set ikev2 pre-shared-key *****
        crypto map outside_map 1 set reverse-route
        crypto map outside_map interface outside
        webvpn
        group-policy GroupPolicy_THEIR_IP internal
        group-policy GroupPolicy_THEIR_IP attributes
         vpn-filter value outside_cryptomap
         ipv6-vpn-filter none
         vpn-tunnel-protocol ikev1
        tunnel-group THEIR_IP type ipsec-l2l
        tunnel-group THEIR_IP general-attributes
         default-group-policy GroupPolicy_THEIR_IP
        tunnel-group THEIR_IP ipsec-attributes
         ikev1 pre-shared-key *****
         ikev2 remote-authentication pre-shared-key *****
         ikev2 local-authentication pre-shared-key *****
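    A frequent cause of exactly this symptom (phase 2 up, no traffic) on ASA 8.3 and later is missing identity NAT for the VPN subnets, so the interesting traffic gets translated before it reaches the crypto map. A hedged sketch using the object names from the config above:

        nat (inside,outside) source static our-network our-network destination static their-network their-network no-proxy-arp route-lookup

    After adding it, packet-tracer shows whether traffic is now selected for the tunnel:

        packet-tracer input inside icmp 172.16.1.10 8 0 192.168.70.10 detailed

    The vpn-filter applied in the group policy is also worth double-checking: it reuses the crypto-map ACL, but vpn-filter ACLs are evaluated with the remote network as the source, so a filter written the other way around can silently drop the decrypted traffic.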

    Read the article

  • LAN->LAN IP translation (for TortoiseSVN + Artifacts + Buffalo router)

    - by Armchair Bronco
    Here's my scenario: I've got a VisualSVN server on my main dev box at home. I'm also using Visual Studio 2010, TortoiseSVN, the VisualSVN client (for source control), and Versioned 'Artifacts' (for bug tracking). (I had to modify the fake URLs below to use only one slash because, as a new user, I can't post more than one real URL.)

    I've got my Buffalo AirStation WHR-HP-G300N router properly configured so my business partner can connect to the SVN server. I have port forwarding enabled for the internet-side IP address (like http:/99.888.77.66:443), which gets forwarded to an internal IP (like 192.168.11.6). This part is working great.

    The problem I'm having is with the integration piece between TortoiseSVN and my bug tracking system. I need to provide a bugtraq:url property, but I haven't been able to get relative paths to work, so I'm forced to use an absolute URL. On my end, I need to use the name of my server (for example: bugtraq:url = https:/my-server/svn/bla..), but this doesn't work for my partner. He needs to specify the IP address (for example: bugtraq:url = https:/999.888.77.66:443/svn/bla...).

    Is there a way to configure my router such that the IP address for this parameter gets re-routed/re-mapped to "https://my-server" if the request originates from the LAN itself? My router's software supports LAN-Internet and Internet-LAN, but I don't see LAN-LAN.
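    What this describes is usually solved either with NAT loopback (hairpin NAT) on the router or, when the router can't do that, with split DNS: both sides use the same hostname in bugtraq:url, the outside partner resolves it to the public IP, and the LAN side resolves it to the internal address. The crudest form of the LAN half is a hosts-file entry; a sketch, with the hostname made up and the path kept as vague as in the question:

        # on the LAN workstation, e.g. C:\Windows\System32\drivers\etc\hosts
        192.168.11.6    svn.example.com

    Both parties could then set the same property, something like bugtraq:url = https://svn.example.com/svn/bla..., and neither the router nor the repository settings need to know which side of the NAT the request came from.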

    Read the article

  • GRE Tunnel over IPsec with Loopback

    - by Alek
    I'm having a really hard time trying to establish a VPN connection using a GRE over IPsec tunnel. The problem is that it involves some sort of "loopback" connection which I don't understand, let alone am able to configure, and the only help I could find relates to configuring Cisco routers.

    My network is composed of a router and a single host running Debian Linux. My task is to create a GRE tunnel over an IPsec infrastructure, which is intended in particular to route multicast traffic between my network, which I am allowed to configure, and a remote network, for which I only have a form containing some setup information (IP addresses and phase information for IPsec). For now it suffices to establish communication between this single host and the remote network, but in the future it will be desirable for the traffic to be routed to other machines on my network.

    As I said, this GRE tunnel involves a "loopback" connection which I have no idea how to configure. From my previous understanding, a loopback connection is simply a local pseudo-device used mostly for testing purposes, but in this context it might be something more specific that I do not know about. I have managed to properly establish the IPsec communication using racoon and ipsec-tools, and I believe I'm familiar with the creation of tunnels and the addition of addresses to interfaces using ip, so the focus is on the GRE step. The worst part is that the remote peers do not respond to ping requests, and debugging the general setup is very difficult due to the encrypted nature of the traffic.

    There are two pairs of IP addresses involved: one pair for the GRE tunnel peer-to-peer connection and one pair for the "loopback" part. There is also an IP range involved, which is supposed to provide the final IP addresses for the hosts inside the VPN. My question is: how (or whether) can this setup be done? Do I need some special software or another daemon, or does the Linux kernel handle every aspect of the GRE/IPsec tunneling? Please let me know if any extra information could be useful. Any help is greatly appreciated.
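    The kernel does handle the GRE part natively (the ip_gre module), so no extra daemon is needed beyond racoon for the IPsec layer. On the Cisco side, a "loopback" is usually just an always-up interface holding an extra address that the GRE tunnel is sourced from; the Linux equivalent is simply assigning that address locally and using it as the tunnel's local endpoint. A sketch with placeholder addresses standing in for the ones on the form:

        # give the host the "loopback" address the remote side expects it to use
        ip addr add 10.255.0.1/32 dev lo

        # GRE tunnel sourced from that address towards the remote "loopback"
        ip tunnel add gre1 mode gre local 10.255.0.1 remote 10.255.0.2 ttl 255
        ip addr add 172.31.0.1/30 dev gre1      # the peer-to-peer address pair
        ip link set gre1 up

        # route the remote VPN range through the tunnel
        ip route add 192.168.200.0/24 dev gre1

    The IPsec policies then only need to protect GRE (IP protocol 47) between the two loopback addresses, and the multicast traffic travels inside the GRE tunnel rather than through IPsec directly.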

    Read the article

  • Configure Domino to use SMTP routing and hMailServer

    - by Sébastien Lachance
    I have been trying for a couple of days to set up a Domino 8.5 server. Basically, I want everything to run inside a local network. Right now I can send email to other users in the Domino directory without any mail address. I am pretty new to all this stuff, so maybe the answer will be really obvious. What I need is to be able to send mail from somewhere else to a Domino user, and have it redirected to his account. On the Domino server I also have hMailServer installed, on port 25; I configured Domino to use port 26. I followed these steps to get where I am now:

    - I set the fully qualified Internet host name to "preview.notes".
    - I changed the SMTP Listener task to Enabled, to turn on the listener so that the server can receive messages routed via SMTP routing.
    - I set up SMTP routing within the local Internet domain (http://www.h2l.com/help/help85%5Fadmin.nsf/f4b82fbb75e942a6852566ac0037f284/7f9738a49efc4f58852574d500097b01?OpenDocument).
    - I modified the Person document to use the [email protected] address.
    - I'm using hMailServer (which has the local "preview.local" domain name) to send mail to [email protected].

    When sending mail I get an error telling me that the DNS is not set up correctly. Would using the Domino SMTP server instead of hMailServer solve the problem? I can telnet to the Domino SMTP server.

    Read the article

  • SharePoint Records Center Submitted E-mail Records not picked up

    - by Kenneth Verburg
    We have set up a new SharePoint 2007 site with a Records Repository. We're using Exchange 2007 managed folders to route e-mails to this repository based on the 'label' attached to the e-mail, as set in the Exchange 2007 journaling options. E-mails added to a managed folder get sent to SharePoint and end up in the "Submitted E-mail Records" list of the Records Repository. That's according to plan, but the e-mails are not routed to the respective document library as defined by the label. Instead, an error appears in the event viewer for every e-mail listed in the Submitted E-mail Records list, on every interval of the records repository schedule (set to every two minutes for testing purposes): "Value cannot be null. Parameter name: g".

    Sending a document from the SharePoint site itself to the Records Repository via the Send To... link works fine, but e-mails get stuck in the list... We have set up document libraries in the Repository both with and without content types (with names matching the label and the record routing rule). Any ideas what could be wrong?

    This is in the event log; every two minutes the following error appears in the Application log:

        Source: Office SharePoint Server
        Category: Records Center
        Type: Error
        Event ID: 4975
        User: N/A
        Computer: SPS2007
        Description: Value cannot be null. Parameter name: g
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • Secondary IP (eth0:0) acts like main server IP

    - by George Tasioulis
    I have a CentOS server configured with 4 consecutive IPs:

        eth0    5.x.x.251
        eth0:0  5.x.x.252
        eth0:1  5.x.x.253
        eth0:2  5.x.x.254

    The problem is that all traffic goes out to the internet with eth0:0 (5.x.x.252) as the source IP, instead of eth0:

        # curl ifconfig.me
        5.x.x.252

    How can I fix this, so that all traffic goes out via eth0, i.e. my main IP? PS: My server is a VPS running on a Xen dom0, the latter being configured in routed-mode networking. Thanks in advance!

    Server configuration:

        # ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:x:x:x:x:AE
                  inet addr:5.x.x.251  Bcast:5.x.x.255  Mask:255.255.255.255
                  inet6 addr: fe80::x:x:x:x/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:14675569 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:9463227 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:4122016502 (3.8 GiB)  TX bytes:25959110751 (24.1 GiB)
                  Interrupt:23

        eth0:0    Link encap:Ethernet  HWaddr 00:x:x:x:x:AE
                  inet addr:5.x.x.252  Bcast:5.x.x.255  Mask:255.255.255.224
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:23

        eth0:1    Link encap:Ethernet  HWaddr 00:x:x:x:x:AE
                  inet addr:5.x.x.253  Bcast:5.x.x.255  Mask:255.255.255.224
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:23

        eth0:2    Link encap:Ethernet  HWaddr 00:x:x:x:x:AE
                  inet addr:5.x.x.254  Bcast:5.x.x.255  Mask:255.255.255.224
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:23

        # cat /etc/hosts
        127.0.0.1   localhost.localdomain localhost
        5.x.x.251   [fqdn] [hostname]

        # cat ifcfg-eth0
        DEVICE=eth0
        BOOTPROTO=static
        ONBOOT=yes
        IPADDR=5.x.x.251
        NETMASK=255.255.255.224
        SCOPE="peer 5.x.y.82"

        # cat ifcfg-eth0:0
        DEVICE=eth0:0
        BOOTPROTO=static
        ONBOOT=yes
        IPADDR=5.x.x.252
        NETMASK=255.255.255.224

        # cat route-eth0
        ADDRESS0=0.0.0.0
        NETMASK0=0.0.0.0
        GATEWAY0=5.x.y.82

        # netstat -rn
        Kernel IP routing table
        Destination     Gateway         Genmask          Flags   MSS Window  irtt Iface
        5.x.y.82        0.0.0.0         255.255.255.255  UH        0 0          0 eth0
        5.x.x.224       0.0.0.0         255.255.255.224  U         0 0          0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0      U         0 0          0 eth0
        0.0.0.0         5.x.y.82        0.0.0.0          UG        0 0          0 eth0
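    With the addressing above, eth0 itself carries a /32 (host) mask while the alias carries the /27, which may be why the kernel prefers the alias as the source for traffic leaving through the connected 5.x.x.224/27 network. One hedged way to pin the source back to the main IP is to set a preferred source on the default route (gateway and addresses mirror the redacted ones above):

        # tell the kernel which source address to prefer for the default route
        ip route change default via 5.x.y.82 dev eth0 src 5.x.x.251

        # check what it now picks, then confirm from outside
        ip route get 8.8.8.8
        curl ifconfig.me

    To survive a reboot, the same src hint can go into /etc/sysconfig/network-scripts/route-eth0 in iproute2 syntax (or an iptables SNAT rule can force the source address instead).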

    Read the article

  • How can I get my routers to forward ports correctly?

    - by Giffyguy
    My network currently looks like this (simplified): note that Router #2 is connected to the LAN interface of Router #1. This should be familiar to anyone who has seen a standard static-IP setup with an additional firewall for a residence or other small building. Router #1 is actually my cable gateway, but since it is a fully functional router/firewall, I am going to refer to it as a router.

    Now, I need to open various ports in both firewalls for incoming communication to my server; port 80 is a good example. So I've opened up port 80 in Router #2, and so far all incoming traffic at the public IP X.X.X.129 is being routed correctly. The problem is that I also need my server to respond to incoming traffic at the public IP X.X.X.130 on the WAN interface of Router #1. Naturally, I can't just tell Router #1 to forward port 80 to another public IP; port forwarding is only supported when the traffic is being directed to the LAN subnet.

    I am willing to restructure my network topology if required, with the following conditions:

    - Router #1 cannot have its WAN IP reassigned; X.X.X.130 is mandatory.
    - Router #1 cannot be moved or disconnected from the cloud.
    - The server cannot be given a second IP address.
    - I would prefer the server to have a private IP address, e.g. 10.0.0.10.
    - I'd like to keep Router #2, but it can have a private IP, e.g. 10.0.1.10.

    Following these rules, I need to get my server to receive incoming traffic on port 80 from both public IP addresses. Does anyone on SU know if this is possible? So far my only theories have been to set up a static route on either router, or to somehow combine my two subnets into a single subnet.

    Read the article

  • How to configure TFTPD32 to ignore non PXE DHCP requests?

    - by Ingmar Hupp
    I want to give our Windows guy a way of easily PXE booting machines for deployment by plugging his laptop into one of our site networks. I've set up a TFTPD32 configuration which does just that, and our normal DHCP server ignores the PXE DHCP requests because they carry a magic flag, so this part works as desired. However, I'm not sure how to configure TFTPD32 to respond only to PXE DHCP requests (the ones with the magic flag) and ignore all normal DHCP requests, so that the production machines don't get a non-routed address from the PXE server.

    How do I configure TFTPD32 to ignore these non-PXE DHCP requests? Or, if it can't, is there another equally easy-to-use piece of software that he can run on his Windows laptop? Since the TFTP part is working fine, a DHCP server with the ability to serve PXE clients only would do. Worst case, I'll have to set up a virtual machine with all this, but I'd much prefer a small, simple solution. I'm not interested in solutions that involve using the existing DHCP servers or separating machines onto their own network for deployment; the whole point is to be simple and stand-alone.
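    For the worst-case VM route mentioned above, dnsmasq's proxy-DHCP mode does exactly this: it answers only PXE clients and never hands out leases, so the production DHCP server stays authoritative. A sketch of a minimal configuration (the subnet, boot filename and TFTP root are placeholders):

        # /etc/dnsmasq.conf
        port=0                          # disable the DNS part entirely
        dhcp-range=192.168.1.0,proxy    # proxy mode: offer boot info only, no leases
        dhcp-boot=pxelinux.0
        pxe-service=x86PC,"Network boot",pxelinux
        enable-tftp
        tftp-root=/srv/tftp

    Because no addresses are ever offered, plugging this into a production network cannot hand a non-routed address to an ordinary client.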

    Read the article

  • Overriding routes on Openvpn client, iproute, iptables2

    - by sarvavijJana
    I am looking for a way to route packets either to the regular internet connection or to an established OpenVPN tunnel, based on their destination ports. This is my configuration:

    - OpenVPN server (I have no control over it)
    - OpenVPN client running Ubuntu
    - wlan0: 192.168.1.111, the internet-connected interface

    Several routes are applied by the server on connection to OpenVPN:

        /sbin/route add -net 207.126.92.3 netmask 255.255.255.255 gw 192.168.1.1
        /sbin/route add -net 0.0.0.0 netmask 128.0.0.0 gw 5.5.0.1
        /sbin/route add -net 128.0.0.0 netmask 128.0.0.0 gw 5.5.0.1

    And I need to route packets according to their destination ports, for example: ports 80 and 443 into the VPN, everything else directly to the ISP connection via 192.168.1.1.

    What I have used during my attempts:

        iptables -A OUTPUT -t mangle -p tcp -m multiport ! --dports 80,443 -j MARK --set-xmark 0x1/0xffffffff
        ip rule add fwmark 0x1 table 100
        ip route add default via 192.168.1.1 table 100

    I was trying to apply these settings using the up/down options of the OpenVPN client configuration. All my attempts resulted in successful packet delivery and response only via the VPN tunnel. For the packets routed around the VPN I used SNAT to gain a proper source address:

        iptables -A POSTROUTING -t nat -o $IF -p tcp -m multiport --dports 80,443 -j SNAT --to $IF_IP

    but those connections fail at the SYN-ACK, like this:

        "70","192.168.1.111","X.X.X.X","TCP","34314 > 81 [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=18664016 TSER=0 WS=7"
        "71","X.X.X.X","192.168.1.111","TCP","81 > 34314 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1428 TSV=531584430 TSER=18654692 WS=5"
        "72","X.X.X.X","192.168.1.111","TCP","81 > 34314 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1428 TSV=531584779 TSER=18654692 WS=5"
        "73","192.168.1.111","X.X.X.X","TCP","34343 > 81 [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=18673732 TSER=0 WS=7"

    I hope someone has already overcome such a situation or knows a better approach to fulfil these requirements. Please give me some good advice or a working solution.
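    One combination that often makes this kind of port-based split routing work, and that matches the pieces already tried above, is marking the bypass traffic, routing the mark through a separate table, SNATing it to the wlan0 address, and loosening reverse-path filtering (with redirect-gateway in place, the main table routes the remote host via the tunnel, so strict rp_filter can drop exactly the SYN-ACKs arriving back on wlan0 as seen in the capture). A hedged sketch, with addresses taken from the question:

        # mark everything that should bypass the VPN (i.e. not ports 80/443)
        iptables -t mangle -A OUTPUT -p tcp -m multiport ! --dports 80,443 -j MARK --set-mark 1

        # marked traffic uses table 100, which points straight at the LAN gateway
        ip rule add fwmark 1 table 100
        ip route add default via 192.168.1.1 dev wlan0 table 100
        ip route flush cache

        # the bypassing packets must leave with the wlan0 address, not the tunnel address
        iptables -t nat -A POSTROUTING -o wlan0 -p tcp -m multiport ! --dports 80,443 -j SNAT --to-source 192.168.1.111

        # loose reverse-path filtering so the replies arriving on wlan0 are accepted
        sysctl -w net.ipv4.conf.all.rp_filter=2
        sysctl -w net.ipv4.conf.wlan0.rp_filter=2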

    Read the article

  • nginx reverse ssl proxy with multiple subdomains

    - by BrianM
    I'm trying to find a high-level configuration example for my current situation. We have a wildcard SSL certificate for multiple subdomains which are served by several internal IIS servers:

        site1.example.com (X.X.X.194) -> IISServer01:8081
        site2.example.com (X.X.X.194) -> IISServer01:8082
        site3.example.com (X.X.X.194) -> IISServer02:8083

    I am looking to handle the incoming SSL traffic through one server entry and then pass the specific domain on to the internal IIS application. It seems I have two options:

    - Code a location section for each subdomain (seems messy from the examples I have found).
    - Forward the unencrypted traffic back to the same nginx server, configured with different server entries for each subdomain hostname (at least this appears to be an option).

    My ultimate goal is to consolidate much of our SSL traffic to go through nginx so we can use HAProxy to load balance servers. Will approach #2 work within nginx if I properly set up the proxy_set_header entries? I envision something along the lines of this in my final config file (using approach #2):

        server {
            listen Y.Y.Y.174:443;   # internally routed IP address
            server_name *.example.com;
            proxy_pass http://Y.Y.Y.174:8081;
        }
        server {
            listen Y.Y.Y.174:8081;
            server_name site1.example.com;
            -- NORMAL CONFIG ENTRIES --
            proxy_pass http://IISServer01:8081;
        }
        server {
            listen Y.Y.Y.174:8081;
            server_name site2.example.com;
            -- NORMAL CONFIG ENTRIES --
            proxy_pass http://IISServer01:8082;
        }
        server {
            listen Y.Y.Y.174:8081;
            server_name site3.example.com;
            -- NORMAL CONFIG ENTRIES --
            proxy_pass http://IISServer02:8083;
        }

    This seems like a way, but I'm not sure if it's the best way. Am I missing a simpler approach?
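    A simpler shape that avoids the double hop: because the certificate is a wildcard, every subdomain can get its own ssl-enabled server block on port 443 sharing the same certificate files, each proxying straight to its IIS backend. A hedged sketch (the certificate paths are placeholders, and note that proxy_pass belongs inside a location block rather than directly in server):

        server {
            listen X.X.X.194:443 ssl;
            server_name site1.example.com;
            ssl_certificate     /etc/nginx/ssl/wildcard.example.com.crt;
            ssl_certificate_key /etc/nginx/ssl/wildcard.example.com.key;
            location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://IISServer01:8081;
            }
        }
        # repeat for site2 -> IISServer01:8082 and site3 -> IISServer02:8083

    nginx picks the right block by server_name on the shared IP and port, so the unencrypted hop back through port 8081 is not needed at all.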

    Read the article

  • IIS 404 custom error

    - by Greg B
    I've deployed an ASP.NET 3.5 app to a 64-bit Windows 2003 R2 server. In the web.config I have the following:

        <customErrors mode="RemoteOnly" defaultRedirect="/404/">
            <error statusCode="404" redirect="/404/"/>
            <error statusCode="500" redirect="/500/"/>
        </customErrors>

    In the website properties in IIS Manager I have set the 404 and 500 errors to Type = "URL" and the same URLs as in the web.config. I have a wildcard application map to the .NET 2.0 aspnet_isapi.dll with "Verify file exists" turned off. If I try to hit a fake .aspx file I successfully get sent to the 404 page; I believe this is because there is an explicit mapping for .aspx to the .NET DLL. If I try to access a fake directory I simply receive a plain-text response saying:

        The system cannot find the file specified.

    It would appear that these requests for directories are not being routed through the .NET pipeline, which is exactly what I would expect (and need) to happen because of the wildcard application mapping. Any ideas?

    Read the article
