Search Results

Search found 50527 results on 2022 pages for 'http expires'.

Page 236/2022 | < Previous Page | 232 233 234 235 236 237 238 239 240 241 242 243  | Next Page >

  • WCF Service in Windows Services

    - by sivakumar
    I created a WCF service library and tested it successfully with the default WCF Test Client. When I host the WCF service in a Windows service, I get an error. I am using Windows XP SP3, .NET 3.5 and Visual Studio 2008. The error is: Error opening host : HTTP could not register URL "http://+:8731/WCFServerDLL/Service1/." Your process does not have access rights to this namespace (see "http://go.microsoft.com/fwlink/?LinkId=70353" for details). Following that Microsoft link, I used httpcfg. I ran "httpcfg.exe set urlacl /u http://localhost:8731/WCFServerDLL/Service1/ /a" and got the result "HttpSetServiceConfiguration completed with 0", but I still get the same error. What is the problem? Can you give me a suggestion?
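
    A plausible cause is that the URL reserved with httpcfg (http://localhost:8731/...) does not match the URL the service actually binds to (http://+:8731/...). A minimal sketch of a fix, assuming the httpcfg.exe from the Windows Support Tools and a service running as NETWORK SERVICE; the SDDL string is illustrative and should be adjusted for the account your Windows service runs as:

      REM reserve the same wildcard URL the service binds to, instead of localhost;
      REM the SDDL grants listen rights (GX) to NETWORK SERVICE (NS) - illustrative only
      httpcfg.exe set urlacl /u http://+:8731/WCFServerDLL/Service1/ /a "D:(A;;GX;;;NS)"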

    Read the article

  • Virtual Lan on the Cloud -- Help Confirm my understanding?

    - by marfarma
    [Note: Tried to post this over at ServerFault, but I don't have enough 'points' for more than one link. Powers that be, move this question over there.] Please give this a quick read and let me know if I'm missing something before I start trying to make this work. I'm not a systems admin professional, and I'd hate to end up banging my head into the wall if I can avoid it. Goals: Create a 'road-warrior' capable, star-shaped virtual LAN for consultants who spend the majority of their time on client sites, and whose firm has no physical network or servers. Enable CIFS access to a cloud-server based installation of Alfresco Allow eventual implementation of some form of single-sign-on ( OpenLDAP server ) access to Alfresco and other server applications implemented in the future Given: All servers will live in the public internet cloud (Rackspace Cloud Servers) OpenVPN Server will be a Linux distro, probably Ubuntu 9.x, installed on the same server as Alfresco (at least to start) Staff will access server applications and resources from client sites, hotels, trains, planes, coffee shops or their homes over various ISPs, using their company laptops or personal home desktops. Based on my research thus far, to accomplish this, I'll need: OpenVPN with bridging enabled to create a star-shaped "virtual" LAN http://openvpn.net/index.php/open-source/documentation/miscellaneous/76-ethernet-bridging.html A Road Warrior Network Configuration, as described in this Shorewall article (lower down the page) http://www.shorewall.net/OPENVPN.html Configure bridge addressing (probably DHCP) http://openvpn.net/index.php/open-source/faq.html#bridge-addressing Configure CIFS / Samba to accept VPN IP addresses http://serverfault.com/questions/137933/howto-access-samba-share-over-vpn-tunnel Set up client software, with keys configured for access (potentially through an OpenVPN-AS client portal) http://www.openvpn.net/index.php/access-server/download-openvpn-as/221-installation-overview.html
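
    For reference, a minimal sketch of what the bridged (tap) OpenVPN server config might look like on the Rackspace host; certificate paths and the address range are made up, and the br0 bridge itself still has to be created per the ethernet-bridging howto linked above:

      # /etc/openvpn/server.conf -- illustrative only
      port 1194
      proto udp
      dev tap0                  # layer-2 device so the tunnel can be bridged (needed for CIFS browsing)
      server-bridge 10.8.0.4 255.255.255.0 10.8.0.128 10.8.0.254
      ca   /etc/openvpn/ca.crt
      cert /etc/openvpn/server.crt
      key  /etc/openvpn/server.key
      dh   /etc/openvpn/dh1024.pem
      keepalive 10 120
      persist-key
      persist-tun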

    Read the article

  • Problem adding public key for apt

    - by highBandWidth
    I was trying to get the official MongoDB packages for Ubuntu, following the instructions at http://www.mongodb.org/display/DOCS/Ubuntu+and+Debian+packages After adding the line deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen to my sources, I need to add the PGP key, since Synaptic says W: GPG error: http://downloads-distro.mongodb.org dist Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9ECBEC467F0CEB10 Again following the instructions, I did sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10 This says Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv 7F0CEB10 gpg: requesting key 7F0CEB10 from hkp server keyserver.ubuntu.com ?: keyserver.ubuntu.com: Connection refused gpgkeys: HTTP fetch error 7: couldn't connect: Connection refused gpg: no valid OpenPGP data found. gpg: Total number processed: 0 Interestingly, I also get $ apt-key list gpg: fatal: /home/myname/.gnupg: directory does not exist! secmem usage: 0/0 bytes in 0/0 blocks of pool 0/32768 How can I get apt to use this source?
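
    The "Connection refused" suggests the default HKP keyserver port (11371) is blocked on this network; a common workaround is to ask the keyserver over port 80 instead. A sketch, assuming keyserver.ubuntu.com accepts HKP on port 80:

      sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
      sudo apt-get update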

    Read the article

  • iOrgSoft Video Converter for Mac

    - by terryhao
    IOrgSoft Video Converter for Mac (http://www.iorgsoft.com/Video-Converter-for-Mac/) is an excellent video converting and editing program for Macintosh users. A built-in video player plus trimming, splitting, joining and merging tools give you everything you need to manage your videos on a Mac. This Mac converter supports many video formats like AVI, MP4, WMV, MPEG-1,2, YouTube (FLV), Limewire, RealPlayer (RM, RMVB), QuickTime (MOV), MKV, MOD, TOD, ASF, 3GP, 3G2, AVCHD/M2TS/MTS/TS/TRP/TS, MXF, etc. Video Converter for Mac features a very clean user interface which makes this task a breeze. You can trim/clip any segments and optionally merge/join and sort them to create your personal movie, crop the frame size to remove any unwanted area just like a pair of smart scissors, and set output parameters such as video resolution, frame rate, audio codec, video codec and video quality. Converted videos can be imported into iMovie/iTunes/FCE/FCP/QuickTime Pro or played on iPad, iPod touch, iPod classic, iPod nano, iPhone, iPhone 3GS, Apple TV, PSP, PS3, Creative Zen, iRiver PMP, Archos, mobile phones and other MP4/MP3 players. Video Converter for Mac makes video conversion easy. Free download now and try it for yourself! See also: Video Editor for Mac (http://www.iorgsoft.com/Video-Editor-for-Mac/), Mod Converter (http://www.iorgsoft.com/Mod-Converter/), Mod Converter for Mac (http://www.iorgsoft.com/Mod-Converter-for-Mac/).

    Read the article

  • Squid Proxy: url_regex acl is not working?

    - by bharathi
    I am using squid proxy 3.1 in ubuntu machine. I want to allow only urls matching our pattern through our proxy server. I configured acl like below. Acl for dstdomain is working fine. If i access any url besides .zmedia.com , I got proxy connection refused. But the url_regex is not working. What i am trying here is. Allow only request from ".zmedia.com" domain and the request url should be in "/blog" context. # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 ::1 acl urlwhitelist url_regex -i ^http(s)://([a-zA-Z]+).zmedia.com/blog/.*$ acl allowdomain dstdomain .zmedia.com acl Safe_ports port 80 8080 8500 7272 # Example rule allowing access from your local networks. # Adapt to list your (internal) IP networks from where browsing # should be allowed acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl SSL_ports port 7272 # multiling http acl CONNECT method CONNECT # # Recommended minimum Access Permission configuration: # # Only allow cachemgr access from localhost http_access allow manager localhost http_access deny manager http_access deny !allowdomain http_access allow urlwhitelist http_access allow CONNECT SSL_ports http_access deny CONNECT !SSL_ports # Deny requests to certain unsafe ports http_access deny !Safe_ports # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !SSL_ports # We strongly recommend the following be uncommented to protect innocent # web applications running on the proxy server who think the only # one who can access services on "localhost" is a local user #http_access deny to_localhost # # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS # # Example rule allowing access from your local networks. # Adapt localnet in the ACL section to list your (internal) IP networks # from where browsing should be allowed http_access allow localhost # And finally deny all other access to this proxy http_access deny all # Squid normally listens to port 3128 http_port 3128 # We recommend you to use at least the following line. hierarchy_stoplist cgi-bin ? # Uncomment and adjust the following to add a disk cache directory. #cache_dir ufs /var/spool/squid 100 16 256 # Leave coredumps in the first cache dir coredump_dir /var/spool/squid append_domain .zmedia.com # Add any of your own refresh_pattern entries above these. refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern . 0 20% 4320 Please correct me , If i did anything wrong?
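
    Two details in the posted config are worth checking. In the regex, "(s)" matches a literal s, so plain http:// URLs never match the whitelist, and the unescaped dots match any character; also, for https:// requests the proxy only sees a CONNECT host:port, so a path-based url_regex cannot match them at all. A hedged sketch of the relevant lines only (the rest of the config left as posted):

      # make the scheme's "s" optional and escape the dots; allow the /blog whitelist
      # before the catch-all domain deny so it is actually reached
      acl urlwhitelist url_regex -i ^https?://([a-zA-Z0-9-]+\.)?zmedia\.com/blog/.*$
      acl allowdomain dstdomain .zmedia.com
      http_access allow urlwhitelist
      http_access deny !allowdomain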

    Read the article

  • Nginx access log shows authenticated user "admin"

    - by bearcat
    I came across a line in my Nginx access log: 218.201.121.99 - admin [12/Dec/2012:18:33:18 +0800] "GET /manager/html HTTP/1.1" 444 0 "-" "-" Let me stress that there is only one record with this IP. Notice the authenticated user "admin". After some googling, the only thing I was able to find out is that this field is the authenticated user, $remote_user (http://wiki.nginx.org/HttpCoreModule#.24remote_user), as set by the Auth Basic module (http://wiki.nginx.org/HttpAuthBasicModule). However, nowhere in my site (configuration) do I use HTTP basic authentication. What is going on? How did it get there? Was the user authenticated?

    Read the article

  • asp.net mvc 2 web application inside a Web site?

    - by Amitabh
    I have an ASP.NET Web Site deployed as a website inside IIS 7.5: http://localhost/WebSite Then I have a second ASP.NET MVC 2 web application which is deployed as a sub-application inside the above website, so the MVC application should be reachable at the following URL: http://localhost/WebSite/MvcApp/ The web site works fine, but when I browse the MVC URL http://localhost/WebSite/MvcApp/ it gives the following error: HTTP Error 403.14 - Forbidden The Web server is configured to not list the contents of this directory.
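
    A common cause of 403.14 for an MVC sub-application under IIS 7.5 is that extensionless URLs never reach the routing module. A minimal sketch of the sub-application's web.config, assuming its application pool runs in Integrated mode:

      <?xml version="1.0"?>
      <configuration>
        <system.webServer>
          <!-- let the managed UrlRoutingModule see extensionless requests such as /WebSite/MvcApp/ -->
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>
      </configuration>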

    Read the article

  • Apache MatchRedirect exception regex

    - by Arash Mousavi
    I want to redirect any HTTPS URL whose path does not start with "system_" to the same URL over plain HTTP. For example, this URL: https://exsite.tld/some/thing/that/not/start/with/pattern should redirect to: http://exsite.tld/some/thing/that/not/start/with/pattern but this URL: https://exsite.tld/system_aas3f4 shouldn't redirect. I tried: RedirectMatch ^/?((?!(system_)).*) http://exsite.tld/$1 but it doesn't work. I don't know what the problem is.
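
    The RedirectMatch with a negative lookahead can work, but only if it lives in the HTTPS (port 443) virtual host; placed in the port 80 host it never sees the https:// requests. An alternative mod_rewrite sketch for the SSL vhost, with exsite.tld taken from the question:

      RewriteEngine On
      RewriteCond %{REQUEST_URI} !^/system_
      RewriteRule ^(.*)$ http://exsite.tld$1 [R=301,L]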

    Read the article

  • fail2ban with Cloudflare

    - by tatersalad58
    I'm using fail2ban to block web vulnerability scanners. It is working correctly when visiting the site if CloudFlare is bypassed, but a user can still access it if going through it. I have mod_cloudflare installed. Is it possible to block users with IPtables when using Cloudflare? Ubuntu Server 12.04 32-bit Access.log: 112.64.89.231 - - [29/Aug/2012:19:16:01 -0500] "GET /muieblackcat HTTP/1.1" 404 469 "-" "-" Jail.conf [apache-probe] enabled = true port = http,https filter = apache-probe logpath = /var/log/apache2/access.log action = iptables-multiport[name=apache-probe, port="http,https", protocol=tcp] maxretry = 1 bantime = 30 # Test Apache-probe.conf [Definition] failregex = ^<HOST>.*"GET \/muieblackcat HTTP\/1\.1".* ignoreregex =
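
    Because visitors reach Apache through CloudFlare's edge IPs, an iptables ban on the observed client address only blocks the direct route, not traffic proxied through CloudFlare. One hedged option is to ban at CloudFlare's edge with the action fail2ban ships for it (check that /etc/fail2ban/action.d/cloudflare.conf exists in your version); the credentials below are placeholders:

      [apache-probe]
      enabled  = true
      port     = http,https
      filter   = apache-probe
      logpath  = /var/log/apache2/access.log
      # the cloudflare action calls the CloudFlare API to block the offending IP upstream
      action   = cloudflare[cfuser="you@example.com", cftoken="YOUR_API_TOKEN"]
      maxretry = 1
      bantime  = 3600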

    Read the article

  • Creating yahoo pipe from google cal feed results in german language headings [closed]

    - by kevyn
    I'm trying to create a Yahoo pipe which combines 4 google calendar RSS feeds into a single feed sorted by date. I've created a yahoo pipe to do this (Which can be found here) The problem is, the headings all appear in German! I've searched online and the only suggestion to be made is this one which suggests that: It's actually Google doing the translation based on the requester IP and doing a geolocation based on that IP. and they suggest changing the .com to a .co.uk, however this does not work for me as yahoo pipes cannot find the feed (403 error) Does anyone have a solution? if there is another solution other than yahoo pipes then I'm all ears! here are the feeds i'm trying to combine: http://www.google.com/calendar/feeds/8tqsfkbs00erv85u2shenea60s%40group.calendar.google.com/public/basic http://www.google.com/calendar/feeds/di85fkb2u1m4si1sqar9d73ghk%40group.calendar.google.com/public/basic http://www.google.com/calendar/feeds/oq5k4pevdjgb4o59muiml72i2k%40group.calendar.google.com/public/basic http://www.google.com/calendar/feeds/f1gg60fr3esdovp15gp83traec%40group.calendar.google.com/public/basic thanks in advance :-)
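
    One thing worth trying before switching domains: Google's feed endpoints generally accept an hl query parameter to force the language, which may override the IP-based guess. This is an assumption, not verified against the Calendar feeds specifically; for example:

      http://www.google.com/calendar/feeds/8tqsfkbs00erv85u2shenea60s%40group.calendar.google.com/public/basic?hl=en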

    Read the article

  • Centos does not open port/s after the rule/s are appended

    - by Charlie Dyason
    So after some battling and struggling with the firewall, i see that I may be doing something or the firewall isnt responding correctly there is has a port filter that is blocking certain ports. by the way, I have combed the internet, posted on forums, done almost everything and now hence the website name "serverfault", is my last resort, I need help What I hoped to achieve is create a pptp server to connect to with windows/linux clients UPDATED @ bottom Okay, here is what I did: I made some changes to my iptables file, giving me endless issues and so I restored the iptables.old file contents of iptables.old: # Firewall configuration written by system-config-firewall # Manual customization of this file is not recommended. *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT after iptables.old restore(back to stock), nmap scan shows: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 13:54 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.014s latency). Not shown: 997 filtered ports PORT STATE SERVICE 22/tcp open ssh 113/tcp closed ident 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 4.95 seconds if I append rule: (to accept all tcp ports incoming to server on interface eth0) iptables -A INPUT -i eth0 -m tcp -j ACCEPT nmap output: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 13:58 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.017s latency). Not shown: 858 filtered ports, 139 closed ports PORT STATE SERVICE 22/tcp open ssh 443/tcp open https 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 3.77 seconds *notice it allows and opens port 443 but no other ports, and it removes port 113...? removing previous rule and if I append rule: (allow and open port 80 incoming to server on interface eth0) iptables -A INPUT -i eth0 -m tcp -p tcp --dport 80 -j ACCEPT nmap output: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 14:01 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.014s latency). Not shown: 996 filtered ports PORT STATE SERVICE 22/tcp open ssh 80/tcp closed http 113/tcp closed ident 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 5.12 seconds *notice it removes port 443 and allows 80 but is closed without removing previous rule and if I append rule: (allow and open port 1723 incoming to server on interface eth0) iptables -A INPUT -i eth0 -m tcp -p tcp --dport 1723 -j ACCEPT nmap output: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 14:05 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.015s latency). Not shown: 996 filtered ports PORT STATE SERVICE 22/tcp open ssh 80/tcp closed http 113/tcp closed ident 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 5.16 seconds *notice no change in ports opened or closed??? after removing rules: iptables -A INPUT -i eth0 -m tcp -p tcp --dport 80 -j ACCEPT iptables -A INPUT -i eth0 -m tcp -p tcp --dport 1723 -j ACCEPT nmap output: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 14:07 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.015s latency). 
Not shown: 998 filtered ports PORT STATE SERVICE 22/tcp open ssh 113/tcp closed ident Nmap done: 1 IP address (1 host up) scanned in 5.15 seconds and returning rule: (to accept all tcp ports incoming to server on interface eth0) iptables -A INPUT -i eth0 -m tcp -j ACCEPT nmap output: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 14:07 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.017s latency). Not shown: 858 filtered ports, 139 closed ports PORT STATE SERVICE 22/tcp open ssh 443/tcp open https 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 3.87 seconds notice the eth0 changes the 999 filtered ports to 858 filtered ports, 139 closed ports QUESTION: why cant I allow and/or open a specific port, eg. I want to allow and open port 443, it doesnt allow it, or even 1723 for pptp, why am I not able to??? sorry for the layout, the editor was give issues (aswell... sigh) UPDATE @Madhatter comment #1 thank you madhatter in my iptables file: # Firewall configuration written by system-config-firewall # Manual customization of this file is not recommended. *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i eth0 -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT # ----------all rules mentioned in post where added here ONLY!!!---------- -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT if I want to allow and open port 1723 (or edit iptables to allow a pptp connection from remote pc), what changes would I make? (please bear with me, my first time working with servers, etc.) Update MadHatter comment #2 iptables -L -n -v --line-numbers Chain INPUT (policy ACCEPT 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 9 660 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 3 0 0 ACCEPT all -- eth0 * 0.0.0.0/0 0.0.0.0/0 4 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 5 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 6 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited Chain OUTPUT (policy ACCEPT 6 packets, 840 bytes) num pkts bytes target prot opt in out source destination just on a personal note, madhatter, thank you for the support , I really appreciate it! 
UPDATE MadHatter comment #3 here are the interfaces ifconfig eth0 Link encap:Ethernet HWaddr 00:1D:D8:B7:1F:DC inet addr:[server ip] Bcast:[server ip x.x.x].255 Mask:255.255.255.0 inet6 addr: fe80::21d:d8ff:feb7:1fdc/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:36692 errors:0 dropped:0 overruns:0 frame:0 TX packets:4247 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2830372 (2.6 MiB) TX bytes:427976 (417.9 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) remote nmap nmap -p 1723 [server ip] Starting Nmap 6.00 ( http://nmap.org ) at 2013-11-01 16:17 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.017s latency). PORT STATE SERVICE 1723/tcp filtered pptp Nmap done: 1 IP address (1 host up) scanned in 0.51 seconds local nmap nmap -p 1723 localhost Starting Nmap 5.51 ( http://nmap.org ) at 2013-11-01 16:19 SAST Nmap scan report for localhost (127.0.0.1) Host is up (0.000058s latency). Other addresses for localhost (not scanned): 127.0.0.1 PORT STATE SERVICE 1723/tcp open pptp Nmap done: 1 IP address (1 host up) scanned in 0.11 seconds UPDATE MadHatter COMMENT POST #4 I apologize, if there might have been any confusion, i did have the rule appended: (only after 3rd post) iptables -A INPUT -p tcp --dport 1723 -j ACCEPT netstat -apn|grep -w 1723 tcp 0 0 0.0.0.0:1723 0.0.0.0:* LISTEN 1142/pptpd There are not VPN's and firewalls between the server and "me" UPDATE MadHatter comment #5 So here is an intersting turn of events: I booted into windows 7, created a vpn connection, went through the verfication username & pword - checking the sstp then checking pptp (went through that very quickly which meeans there is no problem), but on teh verfication of username and pword (before registering pc on network), it got stuck, gave this error Connection failed with error 2147943625 The remote computer refused the network connection netstat -apn | grep -w 1723 before connecting: netstat -apn |grep -w 1723 tcp 0 0 0.0.0.0:1723 0.0.0.0:* LISTEN 1137/pptpd after the error came tried again: netstat -apn |grep -w 1723 tcp 0 0 0.0.0.0:1723 0.0.0.0:* LISTEN 1137/pptpd tcp 0 0 41.185.26.238:1723 41.13.212.47:49607 TIME_WAIT - I do not know what it means but seems like there is progress..., any thoughts???
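
    One thing that fits part of what the nmap runs show: rules added with iptables -A are appended after the final "REJECT --reject-with icmp-host-prohibited" line, so they are never evaluated (the catch-all eth0 ACCEPT sitting at position 3 in the listing is the exception). A sketch of inserting the pptp rule above the REJECT instead; the position number assumes the table shown above. Ports that stay "filtered" even then may point at a filter outside the box, e.g. at the hosting provider.

      # insert before rule 6 (the REJECT) rather than appending after it
      iptables -I INPUT 6 -p tcp --dport 1723 -j ACCEPT
      # persist across reboots on CentOS
      service iptables save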

    Read the article

  • Apache mod_proxy with SSL not redirecting

    - by simonszu
    I have a custom server running behind an apache reverse proxy. Since the custom server can only handle HTTP traffic, i am trying to use apache for wrapping proper SSL around it, and for some kind of HTTP authentication. So i enabled mod_proxy and mod_ssl and modified sites-available/default-ssl. The config is as following: <Location /server> order deny,allow allow from all AuthType Basic AuthName "Please log in" AuthUserFile /etc/apache2/htpasswd Require valid-user ProxyPass http://192.168.1.102:8181/server ProxyPassReverse http://192.168.1.102:8181/server </Location> The custom server is accessible from the internal network via the location specified in the ProxyPass directive. However, when the proxy is accessed from the outside, it presents the login prompt, and after successfully authenticated, i get a blank page with the words The resource can be found at http://192.168.1.102:8181/server. When i type the external URL again in an already authenticated browser instance, i am properly redirected to the server frontend. The access.log is full of entrys stating that my browser does successful GET requests, and the proxy is happily serving the /server ressource. However, the ressource isn't containing the server's frontend, but this blank page with these words on it.
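
    The "The resource can be found at http://192.168.1.102:8181/server" page is the body of a redirect coming back from the backend that the proxy did not rewrite. A hedged guess is a trailing-slash mismatch between the Location block and the ProxyPass targets; keeping all three paths consistent sometimes resolves it (illustrative only):

      <Location /server/>
          # ...same Auth* directives as above...
          ProxyPass        http://192.168.1.102:8181/server/
          ProxyPassReverse http://192.168.1.102:8181/server/
      </Location>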

    Read the article

  • CNet router - no field for private port

    - by Aadit M Shah
    I'm trying to configure port forwarding on my CNet router for a locally hosted HTTP server. The model number of my router is CQR-981 and the firmware version is 1.0.43. The problem is that there's no field to enter the private port of the HTTP server (the local port). According to the manual there should be one. Here's a picture of the manual: Here's a screenshot of my router page for port forwarding (with no field for private port): Is there some way I can circumvent this problem. Perhaps manually make an HTTP request to the HTTP server on the router to update the table with the private port number, or perhaps update my firmware to solve this problem.

    Read the article

  • httpd.conf for case-insensitive file serving

    - by Anton Gogolev
    I'm a complete newbie with regard to managing Apache, so excuse me if I'm phrasing something incorrectly. I have a web site -- say, http://domain.com. The problem is that when I try to open http://domain.com/index.html in a web browser it displays the page, but when I attempt to access http://domain.com/Index.html (note capital I), it responds with HTTP 404. How do I configure Apache to serve both these files (and directories, for that matter) in a case-insensitive manner? Current httpd.conf is here. EDIT Dan C, thanks for a hint. I basically want to allow users to download files from my server and don't really want them to be aware that Index.html and index.html are in fact different. I'm also very willing to know as to what are the ramifications of this decision.
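
    Apache itself follows the filesystem's case sensitivity, but mod_speling can paper over single case mismatches per request. A sketch, assuming a Debian/Ubuntu layout where the module is enabled with a2enmod:

      # after: a2enmod speling && service apache2 restart
      <IfModule mod_speling.c>
          CheckSpelling On
          # CheckCaseOnly On   # if your Apache version supports it, limits the module to case fixes
      </IfModule>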

    Read the article

  • What resources are best for staying current about information security?

    - by dr.pooter
    What types of sites do you visit, on a regular basis, to stay current on information security issues? Some examples from my list include: http://isc.sans.org/ http://www.kaspersky.com/viruswatch3 http://www.schneier.com/blog/ http://blog.fireeye.com/research/ As well as following the security heavyweights on twitter. I'm curious to hear what resources you recommend for daily monitoring. Anything specific to particular operating systems or other software. Are mailing lists still considered valuable. My goal would be to trim the cruft of all the things I'm currently subscribed to and focus on the essentials.

    Read the article

  • Wrong Outlook anywhere settings

    - by Ken Guru
    Hey all I wanted to enable NTLM authentication on OutlookAnywhere, and after doing the command Set-OutlookAnywhere -IISAuthenticationMethods Basic,NTLM, my settings got changed. This is a dump before I run the command: [PS] C:\Windows\system32Get-OutlookAnywhere ServerName : EXCAS01 SSLOffloading : False ExternalHostname : ClientAuthenticationMethod : Basic IISAuthenticationMethods : {Basic} MetabasePath : IIS:///W3SVC/1/ROOT/Rpc Path : C:\Windows\System32\RpcProxy Server : EXCAS01 AdminDisplayName : ExchangeVersion : 0.1 (8.0.535.0) Name : Rpc (Default Web Site) DistinguishedName : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN= EXCAS01,CN=Servers,CN=Exchange Administrative Grou p (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=Fi rst Organization,CN=Microsoft Exchange,CN=Services ,CN=Configuration,DC=asp,DC=ssc,DC=no Identity : EXCAS01\Rpc (Default Web Site) Guid : 289b4865-caf1-4412-95ee-6fb0dff55e8b ObjectCategory : asp.ssc.no/Configuration/Schema/ms-Exch-Rpc-Http-V irtual-Directory ObjectClass : {top, msExchVirtualDirectory, msExchRpcHttpVirtual Directory} WhenChanged : 05.01.2011 16:59:55 WhenCreated : 27.11.2009 11:20:12 OriginatingServer : IsValid : True Noticde the settings for "Name", "DistinguishedName", and "Identity". After I run the command, I ended up with this: [PS] C:\Windows\system32Get-OutlookAnywhere ServerName : EXCAS01 SSLOffloading : False ExternalHostname : ClientAuthenticationMethod : Basic IISAuthenticationMethods : {Basic, Ntlm} MetabasePath : IIS:///W3SVC/1/ROOT/Rpc Path : C:\Windows\System32\RpcProxy Server : EXCAS01 AdminDisplayName : ExchangeVersion : 0.1 (8.0.535.0) Name : EXCAS01 DistinguishedName : CN=EXCAS01,CN=HTTP,CN=Protocols,CN=EXCAS01,CN=Serv ers,CN=Exchange Administrative Group (FYDIBOHF23SP DLT),CN=Administrative Groups,CN=First Organizatio n,CN=Microsoft Exchange,CN=Services,CN=Configurati on,DC=asp,DC=ssc,DC=no Identity : EXCAS01\EXCAS01 Guid : 289b4865-caf1-4412-95ee-6fb0dff55e8b ObjectCategory : asp.ssc.no/Configuration/Schema/ms-Exch-Rpc-Http-V irtual-Directory ObjectClass : {top, msExchVirtualDirectory, msExchRpcHttpVirtual Directory} WhenChanged : 06.01.2011 09:43:50 WhenCreated : 27.11.2009 11:20:12 OriginatingServer : ASP-DC-2. IsValid : True Now, the "Name", "DistinguishedName" and "Identity" has changed, and when I try to change it back by running "Set-OutlookAnywhere -Identity "EXCAS01\Rpc (Default Web Site)", I get the following error: [PS] C:\Windows\system32Set-OutlookAnywhere -Identity "EXCAS01\Rpc (Default Web Site)" Set-OutlookAnywhere : The operation could not be performed because object 'EXCA S01\Rpc (Default Web Site)' could not be found on domain controller 'ASP-DC-2.'. Remember, the RPC over HTTP works fine with Basic authentication (even with the wrong settings), but NTLM still doesnt work. How do I change back the settings?

    Read the article

  • Samba - Is my server vulnerable to CVE-2008-1105?

    - by Joao Heleno
    Hi! I have a CentOS server that is running Samba and I want to verify the vulnerability addressed by CVE-2008-1105. What scenarios can I build in order to run the exploit that is mentioned in http://secunia.com/advisories/cve_reference/CVE-2008-1105/? http://secunia.com/secunia_research/2008-20/advisory/ says that "Successful exploitation allows execution of arbitrary code by tricking a user into connecting to a malicious server (e.g. by clicking an "smb://" link) or by sending specially crafted packets to an "nmbd" server configured as a local or domain master browser." More info: http://www.samba.org/samba/security/CVE-2008-1105.html http://secunia.com/secunia_research/2008-20/advisory/
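
    Before building exploit scenarios, it may be quicker to check whether the installed package is already patched. The upstream fix shipped in Samba 3.0.30, and CentOS backports security fixes without bumping the version number, so the package changelog is the more reliable signal. A sketch:

      smbd --version                                     # upstream fix first appeared in 3.0.30
      rpm -q --changelog samba | grep -i CVE-2008-1105   # CentOS backports; look for the CVE in the changelog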

    Read the article

  • Start screen with bash command

    - by Jeje
    I need to start screen with some bash command to execute. I'm trying screen -S test -d -m bash -c './test.php' but with no result; the screen session doesn't appear. What's more, suppose I need to start something like this vlc command: vlc -I ncurses --http-reconnect http://ip/ --sout '#duplicate{dst=std{access=http{user=,pwd=},mux=ts,dst=:51001}}' --ttl=255 --loop --repeat
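
    One likely reason the session "doesn't appear" is that screen exits as soon as the command it was given finishes, so a script that fails or returns immediately leaves nothing to attach to. A sketch that keeps a shell open afterwards, assuming test.php is meant to be run through the php CLI:

      # -dmS: start detached with a session name; exec bash keeps the window (and session) alive
      screen -dmS test bash -c 'php ./test.php; exec bash'
      screen -ls          # the session should now be listed
      screen -r test      # attach to it

      # the vlc example from the question, wrapped the same way
      screen -dmS vlc bash -c "vlc -I ncurses --http-reconnect http://ip/ --sout '#duplicate{dst=std{access=http{user=,pwd=},mux=ts,dst=:51001}}' --ttl=255 --loop --repeat"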

    Read the article

  • Using AT on Ubuntu to Background Downloads (w/ Queue)

    - by Nicholas Yost
    I am writing a PHP script, but I want to use the AT command in Ubuntu to fetch a remote file via WGET. I'm basically looking to background the process, so PHP can finish fairly quickly. I cannot find any questions on here about how to use both, but I basically want to do the following pseudo-code: <?php exec('at now -q queuename wget http://path.to/remote/file.ext'); ?> Additionally, I'd like to queue this between providers. I'd like to have each path.to have its own queue, so I only download one file from each provider at a time. Meaning: <?php exec('at now -q remote wget http://path.to/remote/file.ext /local/path'); exec('at now -q vendorone wget http://vendor.one/remote/file.ext /local/path'); exec('at now -q vendortwo wget http://vendor.two/remote/file.ext /local/path'); exec('at now -q vendorone wget http://vendor.one/remote/file.ext /local/path'); ?> This should download the files from path.to, vendor.one, vendor.two immediately, and when the first file is finished downloading from vendor.one, it starts the second file. Does that make sense? I can't find anything like this anywhere on the web, much less on SO/SF. If we can use the crontab to run a one-off wget command, thats fine too.
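
    Two details about at may matter here: it reads the job from stdin rather than from its argument list, and queue names are single letters (a-z, A-Z), not words like "vendorone". A hedged reworking of the pseudo-code with one letter per provider; the URLs and paths are the question's placeholders:

      <?php
      // `at` reads the job text from stdin, so pipe the wget command into it;
      // each provider gets its own single-letter queue (here: r, o, t)
      exec("echo 'wget -q http://path.to/remote/file.ext -P /local/path'    | at -q r now");
      exec("echo 'wget -q http://vendor.one/remote/file.ext -P /local/path' | at -q o now");
      exec("echo 'wget -q http://vendor.two/remote/file.ext -P /local/path' | at -q t now");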

    Read the article

  • Google respond differently to two identical nginx setups and 200 codes; any ideas?

    - by Yuji Tomita
    I'm rather confused... I have a linode.com VPS which has been cloned recently, so the settings are the same between nginx servers. One lives on a dev subdomain, one on a www. I'm trying to run a google experiment on my live server, which claims: Web server rejects utm_expid. Your server doesn't support added query arguments in URLs. My logs show on the dev server where it works: 74.125.186.32 - - [13/Sep/2012:13:33:45 -0700] "GET /product/iphone-case/?utm_expid=25706866-0 HTTP/1.1" 200 12521 "-" "Google_Analytics_Content_Experiments 74.125.186.32 - - [13/Sep/2012:13:33:45 -0700] "GET /product/iphone-case/?ab_reviews=True&utm_expid=25706866-0 HTTP/1.1" 200 14679 "-" "Google_Analytics_Content_Experiments My production server shows google making a second request. 74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/?ab_reviews=on&utm_expid=25706866-1 HTTP/1.1" 200 12104 "-" "Google_Analytics_Content_Experiments 74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/?utm_expid=25706866-1 HTTP/1.1" 200 12122 "-" "Google_Analytics_Content_Experiments 74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/ <--- A second request for some reason. HTTP/1.1" 200 12522 "-" "Google_Analytics_Content_Experiments I'm not sure how google determines why it needs to send a second request without the querystring. The original request has clearly sent a 200 OK status response. Does anybody have any suggestions where to look next? The HTML (compared by diff) on the two pages is exactly the same.

    Read the article

  • ASA Slow IPSec Performance with Inconsistent Window Size

    - by Brent
    I have a IPSec link between two sites over ASA 5520s running 8.4(3) and I am getting extremely poor performance when traffic passes over the IPSec VPN. CPU on the devices is ~13%, Memory at 408 MB, and active VPN sessions 2. The load on both of the the devices is particularly low. Latency between the two sites is ~40ms. Screenshot of wireshark file transfer between the two hosts over the firewall IPSec VPN performing at 10MBPS. Note the changing window size. http://imgur.com/wGTB8Cr Screenshot of wireshark file transfer between the two hosts over the firewall not going over IPSec performing at 55MBPS. Constant window size. http://imgur.com/EU23W1e I'm showing an inconsistent window size when transferring over the IPSec VPN ranging in 46,796 to 65535. When performing at 55+MBPS, the window size is consistently 65,535. Does this show a problem in my configuration of the IPSec VPN in the ASA or a Layer1/2 issue? Using ping xxxxxx -f -l I finally get a non-fragment at 1418 bytes so 1418+28 for IP/ICMP headers = 1446. I know that I have 1500 set on the ASA and Ethernet. I do have "Force Maximum segment size for TCP proxy connection to be" "1380" bytes set under Configuration Advanced TCP Options on the ASA. Using IPERF, I am getting a "TCP Window Full" every few seconds and ~3 MBPS performance. http://imgur.com/elRlMpY Show Run on the ASA http://pastebin.com/uKM4Jh76 Show cry accelerator stats http://pastebin.com/xQahnqK3

    Read the article

  • Need help to configure file:default on apache2

    - by turk182
    Hi all! I'm trying to use Xen on Ubuntu 8.04 Hardy Heron because it is a project assigned to me in my new job. I have already installed Xen and I'm running the virtual machines. According to the guide I was given, I have to configure the "default" file in the apache2 directory, like this: vi /etc/apache2/sites-available/default and inside that file write the following: NameVirtualHost * VirtualHost * ServerName "www".ejemplo.com ServerAlias ejemplo.com DocumentRoot /var/www/ ProxyRequests Off Proxy * Order deny,allow Allow from all /Proxy ProxyPass /balancer-manager ! ProxyPass / balancer://mycluster/ stickysession=BALANCEID nofailover=On ProxyPassReverse / "http"://http1.ejemplo.com/ ProxyPassReverse / "http"://http2.ejemplo.com/ Proxy balancer://mycluster BalancerMember "http://10.10.2.101:8080 loadfactor=1 BalancerMember "http://10.10.2.102:8080 loadfactor=2 ProxySet lbmethod=byrequests /Proxy Location /balancer-manager SetHandler balancer-manager Order deny,allow Allow from all /Location /VirtualHost In the BalancerMember section I'm using the IPs of the virtual machines: virtual machine 1 has IP 10.10.2.101 and virtual machine 2 has IP 10.10.2.102. Then I have to install apache2 on each virtual machine and restart apache2. The question is what I have to do to verify that all of this works: supposedly I open a browser, go to "www.ejemplo.com", and it should show something. That's why I'm asking for help; I don't know what to do, I've looked on the web and can't find anything related to this... I'll appreciate your help. Thanks! P.S. I had to enclose "www" and "http" in quotes because of this site's rules for new users. (See the reconstructed vhost below.)
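
    Reading past the quoting the site forced on the post, the intended vhost is presumably something like the sketch below (directive brackets restored; hostnames and IPs are the guide's examples). With this in place and apache2 reloaded, www.ejemplo.com also has to resolve to the proxy host for the test to work, e.g. via an /etc/hosts entry on the machine running the browser.

      NameVirtualHost *
      <VirtualHost *>
          ServerName  www.ejemplo.com
          ServerAlias ejemplo.com
          DocumentRoot /var/www/

          ProxyRequests Off
          <Proxy *>
              Order deny,allow
              Allow from all
          </Proxy>

          ProxyPass /balancer-manager !
          ProxyPass / balancer://mycluster/ stickysession=BALANCEID nofailover=On
          ProxyPassReverse / http://http1.ejemplo.com/
          ProxyPassReverse / http://http2.ejemplo.com/

          <Proxy balancer://mycluster>
              BalancerMember http://10.10.2.101:8080 loadfactor=1
              BalancerMember http://10.10.2.102:8080 loadfactor=2
              ProxySet lbmethod=byrequests
          </Proxy>

          <Location /balancer-manager>
              SetHandler balancer-manager
              Order deny,allow
              Allow from all
          </Location>
      </VirtualHost>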

    Read the article

  • Secret, unlogged, transparent, case-sensitive proxy in IIS6?

    - by Ian Boyd
    Does IIS have a secret, unlogged, transparent, case-sensitive proxy built into it? A file exists on the web-server: GET http://www.stackoverflow.com/javascript/ModifyQuoteArea.js HTTP/1.1 Accept: text/html, application/xhtml+xml, */* Accept-Language: en-US User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: www.stackoverflow.com HTTP/1.1 200 OK Connection: Keep-Alive Content-Length: 29246 Date: Mon, 07 Mar 2011 14:20:07 GMT Content-Type: application/x-javascript ETag: "5a0a6178edacb1:1c51" Server: Microsoft-IIS/6.0 Last-Modified: Fri, 02 Tue 2010 17:03:32 GMT Accept-Ranges: bytes X-Powered-By: ASP.NET ... Problem is that a changes made to the file will not get served, the old (i.e. February of last year) version keeps getting served: HTTP/1.1 200 OK Connection: Keep-Alive Content-Length: 29246 Date: Mon, 07 Mar 2011 14:23:07 GMT Content-Type: application/x-javascript ETag: "5a0a6178edacb1:1c51" Server: Microsoft-IIS/6.0 Last-Modified: Fri, 02 Tue 2010 17:03:32 GMT Accept-Ranges: bytes X-Powered-By: ASP.NET ... The same old file gets served, even though we've: renamed the file deleted the file restarted IIS The request for this file does not appear in the IIS logs (e.g. C:\WINNT\System32\LogFiles\W3SVC7\) And this only happens from the outside (i.e. the internet). If you issue the request locally on the server, then you will: - get the current file (file there) - 404 (file renamed) - 404 (file deleted) But if i change the case of the requested resource, i.e.: GET http://www.stackoverflow.com/javascript/MoDiFyQuOtEArEa.js HTTP/1.1 Accept: text/html, application/xhtml+xml, */* Accept-Language: en-US User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: www.stackoverflow.com Note: MoDiFyQuOtEArEa.js verses ModifyQuoteArea.js Then i do get the proper file (or get the 404 as i expect if the file is renamed or deleted). But any subsequent changes to the file will not show up until i change the case of the file i'm asking for. Checking the IIS logs all indicate that the (internet) requests are all coming the correct client on the internet (i.e. not from some intermediate proxy). Since the file doesn't exist on the hard drive anymore, i conclude that there is a proxy. The requests serviced from this proxy are not logged in the IIS logs. The requests for new files are logged, and from the client IP, not a proxy IP. The proxy is case sensitivie. This does not sound like something Microsoft, or IIS, would do: - a transparent proxy - case-sensitivie - unlogged - surviving restarts of IIS - surviving in a cache for hours i can't believe that our customer's IIS is doing these things. i'm assuming there is some other transparent proxy in front of IIS. Or, does IIS have a transparent, unlogged, case-sensitive, memory based, proxy, that caches content for at least 7 hours? (Come Monday morning, IIS is serving the correct file, unlogged).

    Read the article

  • idn domain not showing correct in firefox

    - by kakaala
    I have bought the domain bilvärdering.nu (http://xn--bilvrdering-o8a.nu/). In Firefox's address bar it shows up as http://xn--bilvrdering-o8a.nu/ but in Chrome and IE it shows up with the correct characters: http://bilvärdering.nu/ How can I make Firefox show the correct characters in its address bar?
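
    Firefox falls back to punycode for top-level domains that are not on its IDN whitelist, and .nu may not be whitelisted by default in your version. One workaround, from memory and worth verifying for your Firefox release, is a per-TLD preference in about:config (create it as a new boolean if it does not exist):

      network.IDN.whitelist.nu = true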

    Read the article
