Search Results

Search found 5212 results on 209 pages for 'forward'.

  • Troubleshooting an overheating CPU

    - by Jeff Fry
    My father and I recently put together a new PC; specs below. From the very beginning, on boot it will often complain that the CPU is too hot. If I sit in the BIOS and watch the CPU, it'll drop back down from red to blue (<72C), at which point I've tended to just boot into Windows...and haven't had any problems. In fact, I've played a couple of hours straight of Skyrim at max settings and not had any visible issues. That said, I've occasionally walked away and come back to find that it's crashed. Yesterday it crashed (while idle) twice in 12 hours, which shifted the balance from busy-with-life to nervous-I'm-about-to-melt-something. I just installed Core Temp, which shows my 4 cores fluctuating between 70-98C. I'm guessing at this point that the CPU fan may be incorrectly installed or defective. My first thought is to either (a) add water cooling (which the case supports) and/or (b) replace the CPU fan with an after-market one. That said, I'm very open to suggestions. A note: while I certainly don't want to burn money here, I have a baby coming any day now and am still unpacking from a recent move, so if I have a choice between an option that costs money and another that takes a while...I'll happily spend a bit extra. Side question: should I be nervous to even have this on at this point? Let me know if there's something useful I could add to my report. Otherwise, I'm looking forward to your suggestions! Thanks. CPU: Intel i7-2600 with stock fan. Other HW: ASUS P8Z68-V Pro motherboard, 64G SSD boot drive, 4 older SATA HDs, GIGABYTE ATI Radeon HD6950 1 GB DDR5, 8G Kingston T1 Series RAM, Corsair 650W Gold Certified power supply, Antec P280 case.

  • Iptables ignoring a rule in the config file

    - by Overdeath
    I see a lot of established connections to my Apache server from the IP 188.241.114.22, which eventually causes Apache to hang. After I restart the service everything works fine. I tried adding a rule in iptables -A INPUT -s 188.241.114.22 -j DROP but despite that I keep seeing connections from that IP. I'm using CentOS and I'm adding the rule like this: iptables -A INPUT -s 188.241.114.22 -j DROP Right after that I save it using: service iptables save Here is the output of iptables -L -v: Chain INPUT (policy ACCEPT 120K packets, 16M bytes) pkts bytes target prot opt in out source destination 0 0 DROP all -- any any lg01.mia02.pccwbtn.net anywhere 0 0 DROP all -- any any c-98-210-5-174.hsd1.ca.comcast.net anywhere 0 0 DROP all -- any any c-98-201-5-174.hsd1.tx.comcast.net anywhere 0 0 DROP all -- any any lg01.mia02.pccwbtn.net anywhere 0 0 DROP all -- any any www.dabacus2.com anywhere 0 0 DROP all -- any any 116.255.163.100 anywhere 0 0 DROP all -- any any 94.23.119.11 anywhere 0 0 DROP all -- any any 164.bajanet.mx anywhere 0 0 DROP all -- any any 173-203-71-136.static.cloud-ips.com anywhere 0 0 DROP all -- any any v1.oxygen.ro anywhere 0 0 DROP all -- any any 74.122.177.12 anywhere 0 0 DROP all -- any any 58.83.227.150 anywhere 0 0 DROP all -- any any v1.oxygen.ro anywhere 0 0 DROP all -- any any v1.oxygen.ro anywhere Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 120K packets, 16M bytes) pkts bytes target prot opt in out source destination
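
    A rough sketch of how one might rule out rule-ordering and name-resolution surprises here (this is an assumption about the cause, not a confirmed diagnosis): appended rules only take effect after every rule above them, and -L resolves IPs to hostnames, which can hide which entry is which.

      # insert at position 1 so nothing earlier in INPUT accepts the traffic first
      iptables -I INPUT 1 -s 188.241.114.22 -j DROP
      # list numerically: resolved hostnames in plain -L output can mask the actual IPs
      iptables -nvL INPUT --line-numbers
      # make it survive a restart of the iptables service
      service iptables save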

  • OpenVPN + iptables / NAT routing

    - by Mikeage
    Hi, I'm trying to set up an OpenVPN VPN, which will carry some (but not all) traffic from the clients to the internet via the OpenVPN server. My OpenVPN server has a public IP on eth0, and is using tap0 to create a local network, 192.168.2.x. I have a client which connects from local IP 192.168.1.101 and gets VPN IP 192.168.2.3. On the server, I ran: iptables -A INPUT -i tap+ -j ACCEPT iptables -A FORWARD -i tap+ -j ACCEPT iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE On the client, the default remains to route via 192.168.1.1. In order to point it to 192.168.2.1 for HTTP, I ran ip rule add fwmark 0x50 table 200 ip route add table 200 default via 192.168.2.1 iptables -t mangle -A OUTPUT -j MARK -p tcp --dport 80 --set-mark 80 Now, if I try accessing a website on the client (say, wget google.com), it just hangs there. On the server, I can see $ sudo tcpdump -n -i tap0 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on tap0, link-type EN10MB (Ethernet), capture size 96 bytes 05:39:07.928358 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 558838 0,nop,wscale 5> 05:39:10.751921 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 559588 0,nop,wscale 5> Where 74.125.67.100 is the IP it gets for google.com . Why isn't the MASQUERADE working? More precisely, I see that the source showing up as 192.168.1.101 -- shouldn't there be something to indicate that it came from the VPN? Edit: Some routes [from the client] $ ip route show table main 192.168.2.0/24 dev tap0 proto kernel scope link src 192.168.2.4 192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.101 metric 2 169.254.0.0/16 dev wlan0 scope link metric 1000 default via 192.168.1.1 dev wlan0 proto static $ ip route show table 200 default via 192.168.2.1 dev tap0
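
    One hedged sketch of a fix, assuming the goal is simply that marked traffic should leave with a 192.168.2.x source: for locally generated packets the source address is picked before the fwmark re-route, so the packets enter tap0 still stamped 192.168.1.101 and the server's MASQUERADE rule (which only matches -s 192.168.2.0/24) never fires.

      # on the client: give tunnelled traffic a 192.168.2.x source before it leaves tap0
      iptables -t nat -A POSTROUTING -o tap0 -j MASQUERADE
      # (alternatively, widen the server-side rule so it also masquerades 192.168.1.0/24
      #  arriving over the tunnel)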

  • Fetch new Mails (Also from Subfolders) from another IMAP server as new Mail in Postfix

    - by Tobi
    Hi everyone. I have installed Postfix on a server with aliases and domains from a MySQL database. It is configured to forward some addresses to other mail accounts and also delivers some mail to local mailboxes that are queried over a Dovecot IMAP server. For this example let there be two users: [email protected], which is a user whose mail just gets forwarded to, let's say, [email protected], and [email protected], which is a user that accesses its mail over local IMAP. Now I want to fetch some mail from another mail server and handle it as if it were sent to a user of my mail server. Let's say these correlations exist: [email protected] has two external accounts, [email protected] and [email protected]; [email protected] also has one external account, [email protected]. The problem is that the new mail on that other mail server is not always in the inbox; it might be in subfolders such as mailinglists/all or mailinglists/it, but also in mailinglists/some-other-department, which is not interesting and should not be delivered. I already found a program called fetchmail but I cannot find out how to fetch subfolders or decide which subfolders are fetched.
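
    A minimal fetchmail sketch for the subfolder part, with hypothetical host and account names; the folder keyword selects which remote mailboxes are polled (check the exact list syntax against your fetchmail man page), and fetched mail is handed to the local MTA, so Postfix aliasing applies as usual.

      cat > ~/.fetchmailrc <<'EOF'
      # hypothetical external host/account names -- substitute the real ones
      poll imap.provider-a.example protocol IMAP
          user "remote-user-a" password "secret"
          is "usera@mydomain.example" here
          folder "INBOX,mailinglists/all,mailinglists/it"
          ssl keep
      EOF
      chmod 600 ~/.fetchmailrc
      fetchmail --check    # dry run: shows message counts per folder without downloading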

  • Nginx deny doesn't work for folder files

    - by user195191
    I'm trying to restrict access to my site to allow only specific IPs, and I've got the following problem: when I access www.example.com the deny works perfectly, but when I try to access www.example.com/index.php it returns the "Access denied" page AND the PHP file is downloaded directly in the browser without being processed. I do want to deny access to all the files on the website for all IPs but mine. How should I do that? Here's the config I have: server { listen 80; server_name example.com; root /var/www/example; location / { index index.html index.php; ## Allow a static html file to be shown first try_files $uri $uri/ @handler; ## If missing pass the URI to front handler expires 30d; ## Assume all files are cachable allow my.public.ip; deny all; } location @handler { ## Common front handler rewrite / /index.php; } location ~ .php/ { ## Forward paths like /js/index.php/x.js to relevant handler rewrite ^(.*.php)/ $1 last; } location ~ .php$ { ## Execute PHP scripts if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss expires off; ## Do not cache dynamic content fastcgi_pass 127.0.0.1:9001; fastcgi_param HTTPS $fastcgi_https; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; ## See /etc/nginx/fastcgi_params } }
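
    A sketch of one way to make the whitelist cover every location, not just location / (the config path below is an assumption; adjust it to wherever this server block lives): allow/deny placed at server level is inherited by the @handler and PHP locations too, so requests that fall through to them are still filtered.

      # write the whitelist once, in a shared include (hypothetical path)
      cat > /etc/nginx/ip-whitelist.conf <<'EOF'
      allow my.public.ip;
      deny all;
      EOF
      # then add "include /etc/nginx/ip-whitelist.conf;" directly inside server { },
      # above the location blocks, and drop the copy inside location /
      nginx -t && service nginx reload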

  • Office 2010 OCT Outlook Filepaths

    - by vlannoob
    I'm playing around with customizing Office 2010 installs on my network. Normally I just do a full manual install, but as the environment grows and I get lazier, it's becoming a pain to do it manually every time. I've read up on and downloaded the Office 2010 OCT tool and it looks relatively straightforward - with one exception - the Outlook profile. I can 'get around it' by just leaving it all as default (or not enabling offline use) but I'd like to customise it slightly so that it's all set up no matter who logs onto the PC. The only issue I have, and my question is: in the OCT - Outlook section, what do you enter into the Path and Filename for the OST file and the Offline Address Book settings under the Enable Offline Use section? I'm sweet with everything else - just that one section, and I think if I bugger that one it will kill the whole Outlook profile?? It would need to go into each user's unique file path for their profile, correct? I have a fair idea of what should be there but I'm struggling with the correct syntax. I know this is a stupid question....but it's late in the day and my brain is fried ;) As usual - any and all help/assistance is appreciated ;)

  • How to have LiveJournal delegate my OpenID to something else?

    - by T-Boy
    I understand that if I have full control over my domain, I can set it up so that I can delegate the task of authenticating to another OpenID service provider. The problem is, what I'd like to do is get the LiveJournal server to pass the authentication to someone else, instead of having LJ do it. Preferably what I'd like to do is get LiveJournal, when asked by a web site, to say, "No, I don't do it anymore -- go to this address". The plan was that this address would then be in a domain I fully control, which would then pass it on to whichever service provider I choose. I don't even know if I've gotten my understanding of OpenID right, if all these shenanigans are necessary, if my question makes sense, or if it's even possible with a service provider like LiveJournal. ETA: Doing a little more reading up, and examining the source for my LiveJournal user page, I note this particular line in the page's <head> area: <link rel="openid.server" href="http://www.livejournal.com/openid/server.bml" /> I suspect that changing this will allow me to forward OpenID requests to whomever I wish; so far so good. Now comes the hard part -- figuring out how to change all of that using LiveJournal's customization options, if that is at all possible (here's hoping I don't need to pay to get that functionality).

  • IP to IP forwarding with iptables [centos]

    - by FunkyChicken
    I have 2 servers. Server 1 with IP 1.1.1.1 and server 2 with IP 2.2.2.2. My domain example.com points to 1.1.1.1 at the moment, but very soon I'm going to switch to IP 2.2.2.2. I have already set up a low TTL for domain example.com, but some people will still hit the old IP for a while after I change the IP address of the domain. Now both machines run CentOS 5.8 with iptables and nginx as a webserver. I want to forward all traffic that still hits server 1.1.1.1 to 2.2.2.2 so there won't be any downtime. Now I found this tutorial: http://www.debuntu.org/how-to-redirecting-network-traffic-a-new-ip-using-iptables but I cannot seem to get it working. I have enabled IP forwarding: echo "1" > /proc/sys/net/ipv4/ip_forward After that I ran these 2 commands: /sbin/iptables -t nat -A PREROUTING -s 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80 /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE But when I load http://1.1.1.1 in my browser, I still get the pages hosted on 1.1.1.1 and not the content from 2.2.2.2. What am I doing wrong?
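
    A hedged sketch of the likely fix, assuming the intent is to catch visitors who still resolve the old address: -s 1.1.1.1 only matches packets coming *from* 1.1.1.1, while the stale visitors are sending packets *to* it, so the PREROUTING rule never matches and the local nginx answers instead.

      # match on the destination (the old public IP), not the source
      /sbin/iptables -t nat -A PREROUTING -d 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
      # rewrite the source so 2.2.2.2 replies back through server 1 instead of directly to the client
      /sbin/iptables -t nat -A POSTROUTING -d 2.2.2.2 -p tcp --dport 80 -j SNAT --to-source 1.1.1.1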

  • Slow tracepath on local LAN

    - by Simone Falcini
    I am on EXSi and I have 2 instances: Ubuntu and CentOS. These are the network configurations Ubuntu eth0 Link encap:Ethernet HWaddr 00:50:56:00:1f:68 inet addr:212.83.153.71 Bcast:212.83.153.71 Mask:255.255.255.255 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:76059 errors:0 dropped:26 overruns:0 frame:0 TX packets:7224 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:6482760 (6.4 MB) TX bytes:2080684 (2.0 MB) eth1 Link encap:Ethernet HWaddr 00:0c:29:46:5a:f2 inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:252 errors:0 dropped:0 overruns:0 frame:0 TX packets:608 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:42460 (42.4 KB) TX bytes:82474 (82.4 KB) /etc/iptables.conf *nat :PREROUTING ACCEPT [142:12571] :INPUT ACCEPT [5:1076] :OUTPUT ACCEPT [8:496] :POSTROUTING ACCEPT [8:496] -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE COMMIT *filter :INPUT ACCEPT [2:72] :FORWARD ACCEPT [4:336] :OUTPUT ACCEPT [6:328] -A INPUT -i eth1 -p tcp -j ACCEPT -A INPUT -i eth1 -p udp -j ACCEPT -A INPUT -i eth0 -p tcp --dport ssh -j ACCEPT COMMIT CentOS eth0 Link encap:Ethernet HWaddr 00:0C:29:74:1C:55 inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::20c:29ff:fe74:1c55/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:499 errors:0 dropped:0 overruns:0 frame:0 TX packets:475 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:68326 (66.7 KiB) TX bytes:82641 (80.7 KiB) The main problem is that if i execute this command from the CentOS instance ssh 192.168.1.2 it takes more than 20s to connect. It seems like it's routing the connection to the wrong network. What could it be? Thanks!
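
    Two quick checks that usually narrow this kind of delay down, assuming it is either a routing or a name-lookup problem (the target address below is the one quoted in the question):

      # run these from the machine that is slow to connect
      ip route get 192.168.1.2                                  # which interface/route the kernel actually picks
      time ssh -v -o GSSAPIAuthentication=no 192.168.1.2 true   # is the ~20s spent in DNS/GSSAPI lookups?

    If the verbose output stalls on "debug1: SSH2_MSG_SERVICE_ACCEPT" or on reverse-DNS, the delay is lookup-related rather than routing-related.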

  • Ways to parse NCSA combined based log files

    - by Kyle
    I've done a bit of site: searching with Google on Server Fault, Super User and Stack Overflow. I also checked non-site-specific results and didn't really see a question like this, so here goes... I did spot this question, related to grep and awk, which has some great knowledge, but I don't feel the text qualification challenge was addressed. This question also broadens the scope to any platform and any program. I've got squid or apache logs based on the NCSA combined format. By "based" I mean the first n columns in the file follow the NCSA combined standard; there might be more columns with custom stuff. Here is an example line from a squid combined log: 1.1.1.1 - - [11/Dec/2010:03:41:46 -0500] "GET http://yourdomain.com:8080/en/some-page.html HTTP/1.1" 200 2142 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; C) AppleWebKit/532.4 (KHTML, like Gecko)" TCP_MEM_HIT:NONE I'd like to be able to parse n logs and output specific columns, for sorting, counting, finding unique values, etc. The main challenge, what makes it a little tricky, and also why I feel this question hasn't yet been asked or answered, is the text qualification conundrum. When I spotted asql from the grep/awk question, I was very excited but then realised that it didn't support combined out of the box, something I'll look at extending I guess. Looking forward to answers, and learning new stuff! Answers don't have to be limited to a platform or program/language. For the context of this question, the platforms I use the most are Linux or OSX. Cheers
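
    A minimal awk sketch of the text-qualification trick (the filename access.log is a placeholder): splitting on the double quote first means the quoted request and user-agent each land in a single field, and only the unquoted prefix is then split on whitespace.

      # request line per source IP, most frequent first ($2 = request, $6 = user agent)
      awk -F'"' '{ split($1, pre, " "); print pre[1], $2 }' access.log \
        | sort | uniq -c | sort -rn | head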

  • Preserve embedded album art when converting from .flac to .ogg

    - by Profpatsch
    I want to convert my archived .flac library to .ogg for daily use. Using find ./ -iname '*.flac' -print0 | xargs -0 -n1 oggenc -q6 on the root music folder and then deleting every .flac (having copies of them in archive) seems straightforward; after trying it with one file it worked and all of the tags were transferred, too, except for one: embedded album art! I always prefer embedded covers over folder images, since I have some albums with varying covers. One possible solution is discussed here, but the script only works if the image is already extracted: Embed album art in OGG through command line in linux One possible solution I thought about was extracting album art from every song (not every song has one, though, and some even have 2 or 3!), temporarily saving it and then using the script to include it in the finished .ogg. But then I want to increase the number of processes xargs runs simultaneously to save time, so the temp images need to have distinct names. Is there a (linux) program that knows how to handle this? Or is there a finished script floating around somewhere? It would be nice if oggenc supported adding embedded cover art, and it really is a shame, since these two formats should (in theory) share the same tag format. Edit: 15 days and no one even tries to answer. It's funny, most of my questions don't get answered. Too hard? Wrong SE site?
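
    A rough sketch of the extraction half only, assuming the embedding itself is handled by the script from the linked question: metaflac can dump the embedded picture, and mktemp gives each file a collision-free art name, so the loop body could also be dropped into a small per-file script and driven by xargs -P for parallelism.

      find . -iname '*.flac' -print0 | while IFS= read -r -d '' f; do
          oggenc -q6 "$f"                                # writes "${f%.flac}.ogg" next to the source
          art="$(mktemp --suffix=.jpg)"                  # unique temp name, safe for parallel runs
          if metaflac --export-picture-to="$art" "$f"; then
              : # hand "$art" and "${f%.flac}.ogg" to the embedding script linked above
          fi
          rm -f "$art"
      done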

  • Easiest way to allow direct HTTPS connection in Intercept mode?

    - by Nicolo
    I know the SSL issue has been beaten to death. I'm using a DNS redirect to force my clients to use my intercept proxy. As we all know, intercepting an HTTPS connection is not possible unless I provide a fake certificate. What I want to achieve here is to allow all HTTPS requests to connect directly to the origin server, thus bypassing Squid: HTTP connections are proxied by Squid; HTTPS connections bypass Squid and connect directly. I spent the past few days googling and trying different methods but none have worked so far. I read about SSL tunneling using the CONNECT method but couldn't find any more information on it. I tried a similar method using RINETD to forward all traffic going through port 443 of my Squid back to the original IP of www.pandora.com. Unfortunately, I did not realize that all other HTTPS requests would also be forwarded to the IP of www.pandora.com. For example, https://www.gmail.com also takes me to https://www.pandora.com Since I'm running intercept mode, the forwarding needs to be dynamic and match each HTTPS domain name with its proper original IP. Can this be done in Squid or iptables? Lastly, I'm directing traffic to my Squid server using a DNS zone redirect. For example, a client requests www.google.com, my DNS server directs that request to my Squid IP, then my transparent Squid will proxy that request. Will this setup affect what I'm trying to achieve? I tried many methods but couldn't get it to work. Any takes on how to do this?
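
    One hedged way to frame it: a DNS redirect answers name lookups for the whole host, so it cannot send port 80 one way and port 443 another. A sketch of the usual alternative, assuming the clients' default gateway is a box you control (the Squid address 10.0.0.2, LAN interface eth1 and intercept port 3128 are all assumptions):

      # on the gateway: intercept web traffic only
      iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to-destination 10.0.0.2:3128
      # no rule for --dport 443, so HTTPS flows straight to the origin servers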

  • Firefox v15.0 about:newtab inaccessible via the back button

    - by Willem
    As of this morning I upgraded my Firefox to version 15.0. Or well, it did it for me. That's when my issue began. I've configured Firefox so that when I open up a new tab via Ctrl-T that it loads the about:newtab page which was released in version 14.x(?). I use this functionality extensively. I visit a page from the about:newtab page, read it, press the back button and pick another frequent website from the display I want to visit. However as of v15.0 the latter part is no longer functioning, after visiting a website from the about:newtab page the back button will not let me go back to the about:newtab page. Firefox acts as if that history entry was never there. I've tried searching the release notes to see if this change was intentional or a reported bug, but I have not found anything related so far. Is there anyway to (re)configure the new Firefox v15.0 to let me access the about:newtab page via a tab's history? EDIT: It appears that this only affects the initial 'visit' to the about:newtab page when opening a new tab, manually visiting about:newtab will register it as a history entry and allow it to be navigated to via the back and forward buttons. So I guess this changes my question to 'How can I make Firefox 15.0 treat the inital page from a newtab as a history entry'.

  • Steps to deploy a custom routing protocol

    - by user134589
    I'm a Ph.D. student and I'm researching a service-centric networking architecture with resource allocation on a large scale. What I'm looking to do is expand an existing routing protocol like OSPF with extra fields and some new message types that I need for communication between nodes. I want to manipulate the cost of a network link and I want paths to be calculated like in OSPFv2/v3, but using the cost that my algorithms have calculated. What I have: the source code of OSPF from Quagga. I am assuming I can edit this code how I want, including packet structures and creating new types. Yes, I am aware it won't be easy, but this is a 6-year research project and I am eager to develop something new, to move forward. What I need: I would like to know how I can deploy the edited OSPF source files I have (written in C) on any type of server. I have a large testbed environment available with hundreds of virtual nodes and pretty much any OS out there. So if I want to test my extended protocol, how do I make all the nodes in a network use this to communicate? I do not understand what parts of the kernel I need to edit here. I have been searching for days now and am unable to find out how to deploy a routing protocol that doesn't exist yet, without the use of an application-level framework. If somebody could push me in the right direction that'd be awesome. Note: I need this to be a routing protocol and not an application, since I want this to work on top of the network layer for performance reasons.
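
    A rough sketch of the deployment loop on one node, assuming the stock Quagga layout (prefix and config paths are the source-build defaults and may differ on your testbed images): the modified ospfd runs entirely in user space, and zebra installs whatever routes it computes into the kernel FIB via netlink, so no kernel changes are needed.

      # build the modified daemons and install them on every node
      ./configure && make && sudo make install

      # minimal config: zebra handles kernel routes, ospfd speaks (your extended) OSPF
      sudo touch /usr/local/etc/zebra.conf
      sudo tee /usr/local/etc/ospfd.conf >/dev/null <<'EOF'
      router ospf
       network 10.0.0.0/8 area 0.0.0.0
      EOF

      sudo zebra -d && sudo ospfd -d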

  • 7ZIP - Command Line Compression | Can Never Keep it Simple

    - by OneTwoYou
    I've been Googling for a few hours on how to just compress a file inside a directory and I can't find anything. I found how to compress a folder in general. Now I wish to know how I can compress a folder inside another folder, along with a file. Current code: 7zG.exe a -tzip "test.zip" dontcompressme/compressme/new.txt pause As you can see above, I don't want to compress the first folder, but only the second and whatever is within that folder. I have the 7zG.exe sitting in the main folder and I have some files that are three folders in, but I don't know how to compress only those. Here is my directory list: Folder One (don't compress) Folder Two (don't compress) Folder Three (okay to compress) Document One.txt (okay to compress) Document Two.txt (okay to compress) Index.html (okay to compress) Does anyone know how I can do this in the simplest way ever invented by man? Because whenever I go to a website from Google, it goes through all these methods for compressing a folder, but doesn't do it the way I want. It makes me kinda upset because I can't get a simple and straightforward answer. Thank you if you answer my question.
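
    A short sketch of the usual trick, assuming 7zG.exe and the output zip both stay in the top-level folder: 7-Zip stores paths relative to the current directory, so changing into the deepest folder you do *not* want archived keeps the parent folders out of the zip.

      cd "Folder One\Folder Two"
      ..\..\7zG.exe a -tzip "..\..\test.zip" "Folder Three"

    The resulting archive then starts at "Folder Three\" and contains the two documents and Index.html without the "Folder One\Folder Two" prefix.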

  • Bandwidth monitoring with iptables for non-router machine

    - by user1591276
    I came across this tutorial here that describes how to monitor bandwidth using iptables. I wanted to adapt it for a non-router machine, so I want to know how much data is going in/coming out and not passing through. Here are the rules I added: iptables -N ETH0_IN iptables -N ETH0_OUT iptables -I INPUT -i eth0 -j ETH0_IN iptables -I OUTPUT -o eth0 -j ETH0_OUT And here is a sample of the output: user@host:/tmp$ sudo iptables -x -vL -n Chain INPUT (policy ACCEPT 1549 packets, 225723 bytes) pkts bytes target prot opt in out source destination 199 54168 ETH0_IN all -- eth0 * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 1417 packets, 178128 bytes) pkts bytes target prot opt in out source destination 201 19597 ETH0_OUT all -- * eth0 0.0.0.0/0 0.0.0.0/0 Chain ETH0_IN (1 references) pkts bytes target prot opt in out source destination Chain ETH0_OUT (1 references) pkts bytes target prot opt in out source destination As seen above, there are no packet and byte values for ETH0_IN and ETH0_OUT, which is not the same result in the tutorial I referenced. Is there a mistake that I made somewhere? Thanks for your time.
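
    One hedged reading of that output: the jump rules in INPUT and OUTPUT already carry the totals (199 packets / 54168 bytes in, 201 packets / 19597 bytes out), so the accounting is working; the ETH0_IN/ETH0_OUT chains just show nothing because they contain no rules of their own. A minimal sketch if per-chain counters are wanted:

      # give each accounting chain a rule of its own to hold the counters
      iptables -A ETH0_IN  -j RETURN
      iptables -A ETH0_OUT -j RETURN
      # read them with exact byte counts
      iptables -x -vL ETH0_IN
      iptables -x -vL ETH0_OUT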

  • Fortigate restrict traffic through one external IP

    - by Tom O'Connor
    I've got a fortigate 400A at a client's site. They've got a /26 from British Telecom, and we're using 4 of those IPs as a NAT Pool. Is there a way to say that traffic from 172.18.4.40-45 can only ever come out of (and hence go back into) x.x.x.140 as the external IP? We're having some problems with SIP which looks like it's coming out of one, and trying to go back into another. I tried enabling asymmetric routing, didn't work. I tried setting a VIP, but even when I did that, it didn't appear to do anything. Any ideas? I can probably post some firewall snippets if need be.. Tell me what you want to see. SIP ALG config system settings set sip-helper disable set sip-nat-trace disable set sip-tcp-port 5061 set sip-udp-port 5061 set multicast-forward enable end Interesting Sidenote VoIP phones, with no special configuration can register fine to proxy.sipgate.co.uk, which has an IP address of 217.10.79.16. Which is cool. Two phones are using a different provider, whose proxy IP address is 178.255.x.x. These phones can register for outbound, but inbound INVITEs never make it to the phone. Is it possible that the Fortigate is having trouble with 178.255.x.x as it's got a 255 in it? Or am I just imagining things?

  • Failed none and iptables

    - by Michael
    The problem is that when I SSH to my host with PuTTY and enter the user name, the password prompt is delayed. I found this is directly related to my iptables rules and can be solved by changing the default policy to ACCEPT. If the default INPUT policy is ACCEPT, then the password prompt comes immediately. Mar 13 00:05:01 server-ubuntu sshd[6154]: Connection from 192.168.0.10 port 26304 Mar 13 00:05:06 server-ubuntu sshd[6154]: Failed none for acid from 192.168.0.10 port 26304 ssh2 However, if the default INPUT policy is DROP, I get a slight delay in getting the password prompt after I enter the username. Mar 13 00:07:12 server-ubuntu sshd[6177]: Connection from 192.168.0.10 port 26333 Mar 13 00:07:35 server-ubuntu sshd[6177]: Failed none for acid from 192.168.0.10 port 26333 ssh2 For the second case, I tried to set the default policy for the FORWARD and OUTPUT chains to ACCEPT, but it didn't help. The only rule in this case is: -A INPUT -i eth1 -m mac --mac-source 00:26:XX:XX:XX:XX -j ACCEPT where 00:26:XX:XX:XX:XX is the MAC address from which I am trying to SSH to the server's LAN (eth1). I'm sure there has to be some rule I can use, while the default INPUT chain policy is DROP, in order to get the password prompt immediately. I realize that the error message in the log is something normal and part of some verification procedure.
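
    A sketch of the usual remedy, on the assumption that the delay is sshd's own reverse-DNS / GSSAPI lookups timing out because the DROP policy also eats the replies to the server's outbound queries:

      # let replies to the server's own outbound lookups (DNS, ident) back in before the default DROP
      iptables -I INPUT 1 -m state --state ESTABLISHED,RELATED -j ACCEPT

      # or make sshd stop looking things up, in /etc/ssh/sshd_config:
      #   UseDNS no
      #   GSSAPIAuthentication no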

  • Did my registrar screw up or is this how name server propagation works?

    - by Brad
    So my company has a number of domains with a large registrar that shall go unnamed. We are making some changes to our DNS infrastructure and the first of those is we are moving our secondary DNS from one server on site to four servers offsite. So we updated the name servers for each domain at the registrar by removing the entry for the old secondary name server and adding the four new ones. I monitored the old secondary server for requests and when I saw no new requests had been made for 24 hours I shut it down. That was this morning. I assumed at this point everything was good. Unfortunately this was my mistake. I should have gone and made sure name servers at large were returning the correct NS records. So this afternoon we were performing maintenance on our primary DNS server and we shut it down. This is when I started getting alerts from our external monitoring. I checked and sure enough, the DNS server used there reported the only NS record for our primary domain was the primary name server. The new secondary servers were not listed and neither was the old secondary. Is it unreasonable of me to have assumed that because the update was from ns1.mydomain.com ns2.mydomain.com to ns1.mydomain.com ns1.backupdns.com ns2.backupdns.com ns3.backupdns.com ns4.backupdns.com in one step at the registrar that there should be no intermediate state where the only NS record was for ns1.mydomain.com? Going forward to be safe obviously I will always leave the old name servers alone until after I'm 100% sure the new ones have propagated and only then remove the old name servers from the registrar. However, I'd still like to know if my registrar screwed up or if my expectation was unreasonable.
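
    A quick, hedged way to see which side the stale data lives on (the gTLD server name applies to .com; substitute your TLD's servers otherwise): the registry usually publishes the new NS set in one step, but recursive resolvers keep serving the old delegation until its TTL, often a day or two, runs out.

      # what the .com registry is handing out right now for the domain's NS set
      dig +short NS mydomain.com @a.gtld-servers.net
      # what a public recursive resolver still has cached
      dig +short NS mydomain.com @8.8.8.8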

  • Migrating WebLogic 10.3.0 to new host. Slow managed server startup times

    - by wadevondoom
    We are migrating our Blue Martini Commerce application (only supported on WebLogic 10.3.0) to a new host (Red Hat 6.3 on a VMware ESX VM). We are seeing extremely slow startup times for our managed server(s), basically 20x slower than our current production. For instance, the Publish managed server takes ~30-45 seconds in current production and ~10 minutes in the new environment. The setup uses the same domain structure and JVM as the current production environment. The same setup files are used. We use jdk1.6.0_33 on 64-bit architecture. We used the generic 64-bit WebLogic installer and the pack/unpack utilities to migrate the domain. The JAVA_OPTS to start this server are: "-d64 -Xms256m -Xmx512m -XX:PermSize=48m -XX:MaxPermSize=256m" The sysadmins have checked /etc/sysctl.conf and /etc/limits.conf to ensure we were not hitting some kind of process limit. As I am not sure what this managed server does from a Blue Martini perspective during the startup phase, I also had the DBA check to ensure that Oracle RAC (11.2.0.3) wasn't also hitting some kind of process limit or whether there was a TNS listener issue. The new host is quite a bit stricter with its server lockdowns, so there are a few differences: Red Hat 6.3 in the new env vs RH 5.7 in current; SELinux is targeted in the new env and disabled in current; a VM in the new env vs dedicated hardware in current; iptables is disabled in current (it was enabled in the new env, but I had them disable it just in case). I apologize for not being more specific. I am mostly hoping for some tips. I do not have the typical root access I would normally have in this environment. I am just hoping for a path forward. I did a few 'kill -3's to see if there are blocked threads and I got nada. The service works for all intents and purposes; it is just painfully slow. Thank you all in advance for reading, and best regards. Wade
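
    One very common culprit for 10-minute JVM startups on freshly built VMs is SecureRandom blocking on a starved /dev/random; it may not be your issue, but it is cheap to check and the workaround is a single JVM flag:

      # how starved is the kernel's blocking entropy pool on the new VM?
      cat /proc/sys/kernel/random/entropy_avail

      # workaround: point the JVM at the non-blocking device (the /./ is deliberate)
      JAVA_OPTS="${JAVA_OPTS} -Djava.security.egd=file:/dev/./urandom"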

  • 100% uptime for a web application

    - by Chris Lively
    We received an interesting "requirement" from a client today. They want 100% uptime with off-site failover on a web application. From our web application's viewpoint, this isn't an issue. It was designed to be able to scale out across multiple database servers, etc. However, from a networking issue I just can't seem to figure out how to make it work. In a nutshell, the application will live on servers within the client's network. It is accessed by both internal and external people. They want us to maintain an off-site copy of the system that in the event of a serious failure at their premises would immediately pick up and take over. Now we know there is absolutely no way to resolve it for internal people (carrier pigeon?), but they want the external users to not even notice. Quite frankly, I haven't the foggiest idea of how this might be possible. It seems that if they lose Internet connectivity then we would have to do a DNS change to forward traffic to the external machines... Which, of course, takes time. Ideas? UPDATE I had a discussion with the client today and they clarified on the issue. They stuck by the 100% number, saying the application should stay active even in the event of a flood. However, that requirement only kicks in if we host it for them. They said they would handle the uptime requirement if the application lives entirely on their servers. You can guess my response.

  • Remote network traffic not passing through VPN

    - by John Virgolino
    We have the following topology: LAN A LAN B LAN C 10.14.0.0/16 <-VPN-> 10.18.0.0/16 --- SONICWALL <-VPN-> M0N0WALL --- 10.32.0.0/16 Traffic between LAN A and LAN B works perfectly. Traffic between LAN C and LAN B works perfectly. Traffic between LAN A and LAN C, not so much. LAN A's gateway has a route to LAN C that points to the Sonicwall. The Sonicwall has a route to LAN A pointing to the VPN gateway connecting LAN B to LAN A. Tracing packets on the Sonicwall shows the LAN C destined traffic to arrive on the Sonicwall, but it does not forward the traffic, it dies there. Traffic from LAN B gets forwarded. Tracing packets on the Sonicwall while sending traffic from LAN C destined for LAN A shows nothing. This tells me that the M0N0WALL is not forwarding traffic for the 10.14.0.0 network and the Sonicwall is not forwarding from 10.14.0.0. The SA on the Sonicwall terminates on the WAN ZONE and is defined to use an address group that incorporates both the 10.14.0.0 and 10.18.0.0 networks. The M0N0WALL is configured for the 10.18.0.0 network and I have tried with both a static route to 10.14.0.0 and without on the M0N0WALL. I tried manually adding the 10.14.0.0 network to the SA on the M0N0WALL, but that really aggravated it and the SA never came up, so I reverted. I have checked all the firewall rules to make sure nothing is blocked. All of the Sonicwall auto-added rules look right. Specs: Sonicwall TZ200, Enhanced OS M0N0WALL v1.32 I'm at a loss at this point. Any help would be appreciated.

  • Cache Control Headers with IIS 7.5

    - by Brad
    I'm trying to wrap my head around client side (web browser) caching and how it works in relation to IIS 7.5 cache control headers. In particular: If we want to force clients to reload cached resources, how must IIS be configured? Do we need to set expire web content immediately if the resources on the server have a more recent Modified Date (or ETag value)? Right now we're not setting any cache headers. So if I set a cache header of no-cache (which I think is the equivalent of expire web content immediately) will that force the web browser to obtain a new version of a particular file. Or will the browser only request a new version after it deems its current copy to be stale and then from that point forward not cache it? Would a best practice be to set a cache control flag of 1 week, then 8 days before I know I am going to make a change set the cache control down to for instance 30 minutes? But if I do that and then need to immediately expire an item from users caches because there was an issue with it how do I do that?

  • How to limit reverse SSH tunelling ports?

    - by funktku
    We have a public server which accepts SSH connections from multiple clients behind firewalls. Each of these clients create a Reverse SSH tunnel by using the ssh -R command from their web servers at port 80 to our public server. The destination port(at the client side) of the Reverse SSH Tunnel is 80 and the source port(at public server side) depends on the user. We are planning on maintaining a map of port addresses for each user. For example, client A would tunnel their web server at port 80 to our port 8000; client B from 80 to 8001; client C from 80 to 8002. Client A: ssh -R 8000:internal.webserver:80 clienta@publicserver Client B: ssh -R 8001:internal.webserver:80 clientb@publicserver Client C: ssh -R 8002:internal.webserver:80 clientc@publicserver Basically, what we are trying to do is bind each user with a port and not allow them to tunnel to any other ports. If we were using the forward tunneling feature of SSH with ssh -L, we could permit which port to be tunneled by using the permitopen=host:port configuration. However, there is no equivalent for reverse SSH tunnel. Is there a way of restricting reverse tunneling ports per user?
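
    A sketch of the reverse-tunnel counterpart that exists in newer OpenSSH (PermitListen, available from OpenSSH 7.8 onward; the reload command may be "service ssh reload" depending on the distribution), assuming one system account per client on the public server:

      cat >> /etc/ssh/sshd_config <<'EOF'
      # restrict which -R listen ports each account may bind
      Match User clienta
          PermitListen 8000
      Match User clientb
          PermitListen 8001
      Match User clientc
          PermitListen 8002
      EOF
      sshd -t && service sshd reload    # validate the config, then reload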

  • Active Directory: Determining DN or OU from log in credentials [closed]

    - by Christopher Broome
    I'm updating a PHP login process to leverage Active Directory on a Windows server. The login process seems pretty straightforward via ldap_bind, but I also want to pull some profile information from the AD server (first name, last name, etc.), which seems to require a robust distinguished name (DN). When on the Windows server I can grab this via 'dsquery user' at the command prompt, but is there a way to get the same value from just the user's login credentials in PHP? I want to avoid getting a list of hundreds of DNs when on-boarding clients and associating each with one of our users, so any means of determining this programmatically would be preferable. Otherwise, I'll know the domain and host for the request so I can at least set the DC portions of the DN, but the organizational units (OUs) seem to be pretty important for querying data. If I can find some of the root-level OU values associated with the user I can do an ldap_search and crawl. I browsed through the existing questions and found some similar ones but nothing that really addressed this, so my apologies if the obvious answer is out there. Thanks for the help.
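
    A hedged command-line sketch of the lookup (hostnames, base DN and account names below are hypothetical): AD lets an authenticated user search from the domain root by sAMAccountName, so the bind credentials double as the lookup credentials and the returned entry's DN carries the OU path; the same base and filter work with PHP's ldap_search() after the ldap_bind().

      # bind with the user's own credentials, search the domain root for their account
      ldapsearch -x -H ldap://dc01.example.local \
        -D "jsmith@example.local" -w 'users-password' \
        -b "dc=example,dc=local" \
        "(sAMAccountName=jsmith)" dn givenName sn mail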
