Search Results

Search found 5793 results on 232 pages for 'requests'.


  • Route all requests through PageController except existing controllers (Zend Framework)

    - by ChrisRamakers
    For a new CMS I've developed a Pages module that allows me to manage the site's tree structure. Each page is reachable from the URL http://www.example.com/pageslug/ where pageslug identifies the page being called. What I want to achieve now is a route that sends all incoming requests to a single PagesController unless the request targets an existing controller (like images, for example). It's easy enough to catch all requests with a catch-all route, but how do I exclude existing controllers? This is my module bootstrap. How can I achieve this in the most preferable way?

        <?php
        class Default_Bootstrap extends Zend_Application_Module_Bootstrap
        {
            protected function _initRoute()
            {
                $this->bootstrap('frontController');
                /* @var $frontcontroller Zend_Controller_Front */
                $frontcontroller = $this->getResource('frontController');
                $router = $frontcontroller->getRouter();
                $router->addRoute(
                    'all',
                    new Zend_Controller_Router_Route('*', array(
                        'controller' => 'pages',
                        'action'     => 'view'
                    ))
                );
            }
        }
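    One way to express the exclusion is a custom route whose match() bails out when the first URL segment already resolves to a dispatchable controller. A minimal sketch, assuming ZF1's version-2 route API (routes extending Zend_Controller_Router_Route_Abstract receive the request object) and the standard dispatcher; the class name is illustrative:

        <?php
        // Sketch only: fall through to PagesController unless the first URL
        // segment maps to a real controller class.
        class Cms_Route_PageFallback extends Zend_Controller_Router_Route_Abstract
        {
            public function match($request, $partial = false)
            {
                $front      = Zend_Controller_Front::getInstance();
                $dispatcher = $front->getDispatcher();

                // Probe whether the conventional routing of this path would
                // land on an existing controller.
                $segments = explode('/', trim($request->getPathInfo(), '/'));
                $probe = clone $request;
                $probe->setControllerName(empty($segments[0]) ? 'index' : $segments[0]);

                if ($dispatcher->isDispatchable($probe)) {
                    return false; // let the normal controller handle it
                }

                return array('controller' => 'pages', 'action' => 'view');
            }

            public function assemble($data = array(), $reset = false, $encode = false)
            {
                return ''; // URL assembly not needed for this sketch
            }

            public static function getInstance(Zend_Config $config)
            {
                return new self();
            }
        }

    Registered with $router->addRoute('all', new Cms_Route_PageFallback()) in place of the '*' route above.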

    Read the article

  • Getting 401 on Twitter OAuth POST requests

    - by Baishampayan Ghose
    I am trying to use Twitter OAuth and my POST requests are failing with a 401 (Invalid OAuth Request) error. For example, if I want to post a new status update, I send an HTTP POST request to https://twitter.com/statuses/update.json with the following parameters:

        status=Testing&oauth_version=1.0&oauth_token=xxx&
        oauth_nonce=xxx&oauth_timestamp=xxx&oauth_signature=xxx&
        oauth_consumer_key=xxx&in_reply_to=xxx&oauth_signature_method=HMAC-SHA1

    My GET requests are all working fine. I can see on the mailing lists that a lot of people have had identical problems, but I could not find a solution anywhere. I am using the oauth.py Python library.
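    A 401 on POST but not GET is usually a signature base string problem: OAuth 1.0 requires the form-encoded body parameters (here, status=Testing) to be signed alongside the oauth_* parameters. A minimal sketch of what the HMAC-SHA1 signature must cover, independent of oauth.py (names and structure are illustrative):

        # Sketch: OAuth 1.0 HMAC-SHA1 signing; POST body params MUST be in the base string.
        import base64
        import hashlib
        import hmac
        import urllib

        def sign(method, url, params, consumer_secret, token_secret):
            # 1. Sort and percent-encode ALL parameters, including status=...
            norm = '&'.join('%s=%s' % (urllib.quote(k, ''), urllib.quote(v, ''))
                            for k, v in sorted(params.items()))
            # 2. Base string: METHOD & encoded-URL & encoded-params
            base = '&'.join([method.upper(), urllib.quote(url, ''), urllib.quote(norm, '')])
            # 3. Signing key: consumer secret & token secret
            key = '%s&%s' % (urllib.quote(consumer_secret, ''), urllib.quote(token_secret, ''))
            return base64.b64encode(hmac.new(key, base, hashlib.sha1).digest())

    If the body parameters are omitted from step 1, Twitter computes a different signature and rejects the request with exactly this 401.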

    Read the article

  • Jaxer and HTTP proxy requests...

    - by rakhavan
    Thanks to everyone in advance. I'm using Jaxer.Sandbox and making requests just fine. I'd like these requests to go through my HTTP proxy (like Squid, for example). Here is the code that is currently working for me:

        window.onload = function() {
            // the url to scrape
            var url = "http://www.cnn.com/";

            // our sandboxed browser
            var sandbox = new Jaxer.Sandbox();

            // open options
            var openOptions = new Jaxer.Sandbox.OpenOptions();
            openOptions.allowJavaScript = false;
            openOptions.allowMetaRedirects = false;
            openOptions.allowSubFrames = false;
            openOptions.onload = function() {
                // do something onload
            };

            // make the call
            sandbox.open(url, null, openOptions);

            // write the response
            Jaxer.response.setContents(sandbox.toHTML());
        };

    How can I send this request through a proxy server? Thanks, Reza.

    Read the article

  • Compressing as GZip WCF requests (SOAP and REST)

    - by Joannes Vermorel
    I have a .NET 3.5 web app hosted on Windows Azure that exposes several WCF endpoints (both SOAP and REST). The endpoints typically receive 100x more data than they serve (a lot of data is uploaded, much less is downloaded). Hence, I am willing to take advantage of HTTP GZip compression, not from the server viewpoint but rather from the client viewpoint, by sending compressed requests (returning compressed responses would be fine, but won't bring much gain anyway). Here is the little C# snippet used on the client side to set up WCF:

        var binding = new BasicHttpBinding();
        var address = new EndpointAddress(endPoint);
        _factory = new ChannelFactory<IMyApi>(binding, address);
        _channel = _factory.CreateChannel();

    Any idea how to adjust the behavior so that compressed HTTP requests can be made?
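    WCF in .NET 3.5 has no built-in request compression, so the usual route is a custom message encoder; the "Custom Message Encoder: Compression Encoder" sample in the WCF SDK provides a GZipMessageEncodingBindingElement for exactly this. A sketch of wiring it into the client above, assuming that sample type has been added to the project (it is not part of the framework):

        // Sketch: compose a CustomBinding with the SDK sample's gzip encoder.
        using System.ServiceModel;
        using System.ServiceModel.Channels;

        // GZipMessageEncodingBindingElement comes from the WCF SDK sample code.
        var encoding = new GZipMessageEncodingBindingElement(
            new TextMessageEncodingBindingElement());   // inner encoder: text/SOAP
        var transport = new HttpTransportBindingElement();
        var binding = new CustomBinding(encoding, transport);

        var address = new EndpointAddress(endPoint);
        _factory = new ChannelFactory<IMyApi>(binding, address);
        _channel = _factory.CreateChannel();

    Note the service side must install the matching encoder in its binding, or it cannot decompress the incoming requests.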

    Read the article

  • email tracking image duplicate requests

    - by DEH
    I am embedding tracking images within emails sent from a custom-built opt-in CRM system. The image src is an encoded .gif, such as src="12_34_675.gif". The image is served by an ASP.NET HTTP handler that decodes the src encoding and serves a transparent image. Everything works fine, but some email clients request the image multiple times, creating duplicate entries. Some clients make three calls all within one second, and some seem to make tens of calls over a day or so. Most email clients make single calls, but these few duplicates are very perplexing. I know I can code around them, but I'd really like to understand what's going on. I've checked the IIS log files, which show that the duplicate requests are coming from the client machines. I can't think what might be causing these duplicate HTTP requests. Help!

    Read the article

  • Unknown http requests of type http://<domain>/cache/<32-digit-alphanumeric-key>

    - by Siva Bathula
    I am getting a lot of incoming requests with this structure: http://<domain_name>/cache/22092e9b25c40809dfb94b6179166b26. I am running a .NET 4.0 website served from IIS 7.5. A lot of these URLs have no referrer and come in randomly with a different 32-digit alphanumeric key each time. And I do not have any resource like '.../cache/...' on my website. I just want to eliminate such requests and want to understand where these are coming from at all. Any help would be appreciated.
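    Whatever the probes turn out to be, they can be rejected cheaply at the IIS level before they reach the application; a sketch using the URL Rewrite module (assumes it is installed; the rule name is arbitrary, and the sample key above is pure hex, so tighten or widen the character class to match what actually arrives):

        <!-- Sketch: abort requests for /cache/<32-char key> at the server. -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="BlockCacheProbes" stopProcessing="true">
                <match url="^cache/[0-9a-zA-Z]{32}$" />
                <action type="AbortRequest" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>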

    Read the article

  • What's the requests/second standard for scraping websites?

    - by feydr
    This was the closest question to mine and it wasn't really answered very well, imo: http://stackoverflow.com/questions/2022030/web-scraping-etiquette. I'm looking for the answer to #1: how many requests/second should you be doing to scrape? Right now I pull from a queue of links. Every site that gets scraped has its own thread and sleeps for 1 second in between requests. I ask for gzip compression to save bandwidth. Are there standards for this? Surely all the big search engines have some set of guidelines they follow in regards to this.
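    There is no formal standard, but the non-standard Crawl-delay directive in robots.txt is the closest thing to a site declaring its own preference, so one reasonable policy is to honor it and fall back to a conservative default. A sketch (Python 2 era, matching urllib2; the one-second fallback mirrors the setup described above, and the naive regex ignores per-user-agent sections):

        # Sketch: per-domain politeness delay, preferring the site's own Crawl-delay.
        import re
        import time
        import urllib2
        from urlparse import urlparse

        last_hit = {}  # domain -> timestamp of the previous request

        def crawl_delay(domain, default=1.0):
            """Read Crawl-delay from robots.txt, falling back to `default` seconds."""
            try:
                robots = urllib2.urlopen('http://%s/robots.txt' % domain, timeout=5).read()
                m = re.search(r'(?im)^crawl-delay:\s*(\d+(?:\.\d+)?)', robots)
                if m:
                    return float(m.group(1))
            except Exception:
                pass
            return default

        def polite_fetch(url):
            domain = urlparse(url).netloc
            wait = crawl_delay(domain) - (time.time() - last_hit.get(domain, 0))
            if wait > 0:
                time.sleep(wait)
            last_hit[domain] = time.time()
            return urllib2.urlopen(url).read()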

    Read the article

  • Does Google appengine cache external requests?

    - by Andy Hume
    I have a very simple application running on App Engine that requests a web page every five minutes and parses it for a specific piece of data. Everything works fine except that the response I get back from the external request (using urllib2) doesn't reflect the latest changes to the page. Sometimes it takes a few minutes to get the latest, sometimes over an hour. Is there a transparent layer of caching that App Engine puts in place? Or is there something else I am missing here? I've looked at the caching headers of the requested page and there is no Expires or Last-Modified header sent. Update: Sometimes it will get the new version of the page for a number of requests and then randomly later get an old, out-of-date version.
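    Two cheap workarounds apply whether the cache sits in App Engine's fetch infrastructure or in some intermediary: send no-cache request headers, and/or append a throwaway query parameter so each fetch has a unique URL. A sketch (the parameter name is arbitrary, and this assumes the target page tolerates extra query parameters):

        # Sketch: discourage any intermediate cache from serving a stale copy.
        import time
        import urllib2

        url = 'http://example.com/page-to-scrape'
        req = urllib2.Request(
            url + '?_=%d' % int(time.time()),      # cache-busting query param
            headers={'Cache-Control': 'no-cache',  # HTTP/1.1 caches
                     'Pragma': 'no-cache'})        # older HTTP/1.0 caches
        body = urllib2.urlopen(req).read()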

    Read the article

  • How to deal with multiple Facebook requests simultaneously

    - by Peter Warbo
    I'm using the Facebook SDK for my app. I have created a singleton class FacebookHelper to deal with all Facebook-related logic. Whenever I make a Facebook request (i.e. download friends) I set an enum, e.g. FacebookRequestDownloadFriends, so that FacebookHelper knows how to handle errors and success for that request (since handling can differ between the different requests). This solution has worked out fine until now, because now I'm making 2 Facebook requests at the same time: when I set the enum for the first request, e.g. FacebookRequestDownloadFriends, it is shortly overwritten by the second request, FacebookRequestDownloadEvents, so there will obviously be confusion. How can I deal with this issue without having to refactor too much code?
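    The smallest change is to stop sharing one enum field and instead key the request type by the request object itself, looking it up again in the callback. A sketch, assuming the classic FBRequest/FBRequestDelegate style of the older iOS SDK (FacebookRequestType is the asker's own enum; adapt the lookup to block-based APIs if that is what's in use):

        // Sketch: remember each request's purpose instead of one shared enum.
        @property (nonatomic, retain) NSMutableDictionary *typesByRequest;

        - (void)trackRequest:(FBRequest *)request type:(FacebookRequestType)type {
            // NSValue wrapper lets the request object act as a dictionary key
            [self.typesByRequest setObject:[NSNumber numberWithInt:type]
                                    forKey:[NSValue valueWithNonretainedObject:request]];
        }

        - (void)request:(FBRequest *)request didLoad:(id)result {
            NSValue *key = [NSValue valueWithNonretainedObject:request];
            FacebookRequestType type = [[self.typesByRequest objectForKey:key] intValue];
            [self.typesByRequest removeObjectForKey:key];
            switch (type) {
                case FacebookRequestDownloadFriends: /* handle friends result */ break;
                case FacebookRequestDownloadEvents:  /* handle events result  */ break;
            }
        }

    Concurrent requests then no longer clobber each other, and the existing per-type handling code can move into the switch largely unchanged.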

    Read the article

  • receiving OPTIONS instead of GET requests?

    - by Urs
    Hi, all I want to achieve is to implement a servlet providing a JSON feed for my FullCalendar application. When I inspect http://arshaw.com/js/fullcalendar/examples/json.html with Firebug, I see that GET requests are sent to fetch the JSON feed. However, when I use this example within my scenario, FullCalendar seems to send OPTIONS requests. The only difference is that I replaced events: "json-events.php" with "http://localhost:8080/CalendarServletTest/HelloWorldServlet" (the URL of my servlet). What am I missing? Or is this really a bug?
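    The OPTIONS request is the browser's CORS preflight: the servlet URL (localhost:8080) is a different origin than the page serving the calendar, so the real GET is only sent once the preflight succeeds. A sketch of answering it in the servlet (the allowed origin "*" is an assumption; restrict it in real use):

        // Sketch: answer the CORS preflight so the browser proceeds with the GET.
        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class HelloWorldServlet extends HttpServlet {
            @Override
            protected void doOptions(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                addCorsHeaders(resp);
            }

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                addCorsHeaders(resp);              // the real response needs them too
                resp.setContentType("application/json");
                resp.getWriter().write("[]");      // the actual event feed goes here
            }

            private void addCorsHeaders(HttpServletResponse resp) {
                resp.setHeader("Access-Control-Allow-Origin", "*");
                resp.setHeader("Access-Control-Allow-Methods", "GET, OPTIONS");
                resp.setHeader("Access-Control-Allow-Headers", "Content-Type");
            }
        }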

    Read the article

  • Distributing requests to Selenium Grid RC's?

    - by intervigil
    I've got a situation here where I have a central Selenium Grid hub and several RCs running on my GoGrid account. When I access it to run tests, it basically queues all the incoming test requests and executes them serially on only one of the RCs, instead of spreading them out to use the available RCs. The tests come from multiple projects, so I'm not looking to parallelize the tests themselves, just to split the requests that come from multiple projects across the multiple RCs. From everything I've read, it seems like Selenium Grid should be doing this already, yet I only see one RC used to run every single test. Is there something I'm missing?

    Read the article

  • Detecting metadata-only read requests in windows filesystem

    - by HyLian
    Hello, I'm developing a kind of filesystem driver. All read requests that Windows makes to my filesystem go through my driver implementation. I would like to distinguish between "normal" read requests and those that only want the metadata from the file (Windows reads the first 4K of the file and then stops reading). Does Windows mark these metadata reads in some way? It would be very useful in order to treat those two kinds of operations differently. In a typical CreateFile call we have the AccessMode, ShareMode, CreationDisposition and FlagsAndAttributes parameters (each being a DWORD); I'm not sure if it's possible to extract some clue about the requested operation from them. Thanks for reading :)

    Read the article

  • Sending multiple requests simultaneously to the Server using Selenium with Java

    - by gagneet
    I wish to send multiple requests to the server simultaneously. The problem statement is:

    1. Read a text file containing multiple URLs.
    2. Open each URL in the web browser.
    3. Collect the cookie information for each call, and store it to a file.
    4. Send another call: http://myserver.com:1111/cookie?out=text
    5. Store the output (body text) of this call to a separate file for each call made in step 4.
    6. Open the next URL in the text file given in step 1 and repeat steps 1-6.

    The above is to be run with multi-threading, so that I can send around 5-10 URL requests simultaneously. I have implemented something in Selenium using Java, but have not been able to do the multi-threading approach. Code is given below:

        package com.cookie.selenium;

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.FileReader;
        import java.io.FileWriter;
        import java.io.IOException;

        import com.thoughtworks.selenium.*;

        public class ReadURL extends SeleneseTestCase {
            public void setUp() throws Exception {
                setUp("http://www.myserver.com/", "*chrome");
            }

            public static void main(String args[]) {
                Selenium selenium = new DefaultSelenium("localhost", 4444, "*chrome", "http://myserver");
                selenium.start();
                selenium.setTimeout("30000000");
                try {
                    BufferedReader inputfile = new BufferedReader(new FileReader("C:\\url.txt"));
                    BufferedReader cookietextfile = new BufferedReader(new FileReader("C:\\text.txt"));
                    BufferedWriter cookiefile = new BufferedWriter(new FileWriter("C:\\cookie.txt"));
                    BufferedWriter outputfile = null;
                    String str;
                    String cookiestr = "http://myserver.com:1111/cookie?out=text";
                    String filename = null;
                    int i = 0;

                    while ((str = inputfile.readLine()) != null) {
                        selenium.createCookie("T=222redHyt345&f=5&r=fg&t=100", "");
                        selenium.open(str);
                        selenium.waitForPageToLoad("120000");
                        String urlcookie = selenium.getCookie();
                        System.out.println("URL :" + str);
                        System.out.println("Cookie :" + urlcookie);
                        cookiefile.write(urlcookie);
                        cookiefile.newLine();

                        selenium.open(cookiestr);
                        selenium.waitForPageToLoad("120000");
                        String bodytext = selenium.getBodyText();
                        System.out.println("Body Text :" + bodytext);
                        filename = "C:\\cookies\\" + i + ".txt";
                        outputfile = new BufferedWriter(new FileWriter(filename));
                        outputfile.write(bodytext);
                        outputfile.newLine();
                        i++;
                    }
                    inputfile.close();
                    outputfile.close();
                    cookiefile.close();
                    selenium.stop();
                } catch (IOException e) {
                }
            }
        }

    What basically I am trying to do here is: open the first URL from a text file (which lists all the URLs I wish to open), capture and store the cookie information, then open another window to output all the cookie information for that server to my browser window. This works fine when I do it outside of Selenium code, but when I do it within the above code, it opens a "Save As..." popup and my tests stop. :-( I wish to save the contents of that second call to a new file, but have not been able to do the same. Also, if I have to send multiple such requests to the server, how would that be possible in Java using a Selenium framework? Currently, I am opening multiple instances of the framework and running them with different parameters :-(
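    For the multi-threading part, one sketch is to give each worker its own Selenium session (RC sessions are not thread-safe, so never share one across threads) and fan the URLs out over a fixed pool; the session settings below mirror the code above:

        // Sketch: 5 concurrent Selenium RC sessions, one per worker task.
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        import com.thoughtworks.selenium.DefaultSelenium;
        import com.thoughtworks.selenium.Selenium;

        public class ParallelReadURL {
            public static void fetchAll(List<String> urls) {
                ExecutorService pool = Executors.newFixedThreadPool(5);
                for (final String url : urls) {
                    pool.submit(new Runnable() {
                        public void run() {
                            // each task owns its session from start() to stop()
                            Selenium selenium = new DefaultSelenium(
                                    "localhost", 4444, "*chrome", "http://myserver");
                            selenium.start();
                            try {
                                selenium.open(url);
                                selenium.waitForPageToLoad("120000");
                                // ...collect cookies / body text as in the loop above,
                                // writing to a per-task output file...
                            } finally {
                                selenium.stop();
                            }
                        }
                    });
                }
                pool.shutdown();
            }
        }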

    Read the article

  • Dual AJAX Requests at different times

    - by Nik
    Alright, I'm trying to make an AJAX chat system that polls the chat database every 400ms. That part is working; the part that isn't is the Active User List. When I try to combine the two requests, the first two requests are made, then the whole thing snowballs: the usually timed (12 second) Active User List request starts updating every 1ms and the first request NEVER happens again. Displayed is the entire AJAX code for both requests:

        var waittime = 400;
        chatmsg = document.getElementById("chatmsg");
        room = document.getElementById("roomid").value;
        chatmsg.focus();
        document.getElementById("chatwindow").innerHTML = "loading...";
        document.getElementById("userwindow").innerHTML = "Loading User List...";
        var xmlhttp = false;
        var xmlhttp2 = false;
        var xmlhttp3 = false;

        function ajax_read(url) {
            if (window.XMLHttpRequest) {
                xmlhttp = new XMLHttpRequest();
                if (xmlhttp.overrideMimeType) {
                    xmlhttp.overrideMimeType('text/xml');
                }
            } else if (window.ActiveXObject) {
                try {
                    xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
                } catch (e) {
                    try {
                        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
                    } catch (e) {}
                }
            }
            if (!xmlhttp) {
                alert('Giving up :( Cannot create an XMLHTTP instance');
                return false;
            }
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4) {
                    document.getElementById("chatwindow").innerHTML = xmlhttp.responseText;
                    setTimeout("ajax_read('methods.php?method=r&room=" + room + "')", waittime);
                }
            }
            xmlhttp.open('GET', url, true);
            xmlhttp.send(null);
        }

        function user_read(url) {
            if (window.XMLHttpRequest) {
                xmlhttp3 = new XMLHttpRequest();
                if (xmlhttp3.overrideMimeType) {
                    xmlhttp3.overrideMimeType('text/xml');
                }
            } else if (window.ActiveXObject) {
                try {
                    xmlhttp3 = new ActiveXObject("Msxml2.XMLHTTP");
                } catch (e) {
                    try {
                        xmlhttp3 = new ActiveXObject("Microsoft.XMLHTTP");
                    } catch (e) {}
                }
            }
            if (!xmlhttp3) {
                alert('Giving up :( Cannot create an XMLHTTP instance');
                return false;
            }
            xmlhttp3.onreadystatechange = function() {
                if (xmlhttp3.readyState == 4) {
                    document.getElementById("userwindow").innerHTML = xmlhttp3.responseText;
                    setTimeout("ajax_read('methods.php?method=u&room=" + room + "')", 12000);
                }
            }
            xmlhttp3.open('GET', url, true);
            xmlhttp3.send(null);
        }

        function ajax_write(url) {
            if (window.XMLHttpRequest) {
                xmlhttp2 = new XMLHttpRequest();
                if (xmlhttp2.overrideMimeType) {
                    xmlhttp2.overrideMimeType('text/xml');
                }
            } else if (window.ActiveXObject) {
                try {
                    xmlhttp2 = new ActiveXObject("Msxml2.XMLHTTP");
                } catch (e) {
                    try {
                        xmlhttp2 = new ActiveXObject("Microsoft.XMLHTTP");
                    } catch (e) {}
                }
            }
            if (!xmlhttp2) {
                alert('Giving up :( Cannot create an XMLHTTP instance');
                return false;
            }
            xmlhttp2.open('GET', url, true);
            xmlhttp2.send(null);
        }

        function submit_msg() {
            nick = document.getElementById("chatnick").value;
            msg = document.getElementById("chatmsg").value;
            document.getElementById("chatmsg").value = "";
            ajax_write("methods.php?method=w&m=" + msg + "&n=" + nick + "&room=" + room + "");
        }

        function keyup(arg1) {
            if (arg1 == 13) submit_msg();
        }

        var intUpdate = setTimeout("ajax_read('methods.php')", waittime);
        var intUpdate = setTimeout("user_read('methods.php')", waittime);
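    Reading the handlers closely, the snowball has a visible cause: user_read's callback re-schedules ajax_read (with the user-list URL) instead of user_read, so after the first 12-second tick the user-list URL is pulled into the 400ms chat loop, and each extra ajax_read call clobbers the shared global xmlhttp while earlier callbacks are still pending. A sketch of the fix, with each poller re-arming only itself:

        // Sketch: each poller re-schedules itself; neither touches the other's timer.
        xmlhttp.onreadystatechange = function() {
            if (xmlhttp.readyState == 4) {
                document.getElementById("chatwindow").innerHTML = xmlhttp.responseText;
                setTimeout(function() {
                    ajax_read('methods.php?method=r&room=' + room);
                }, waittime);                 // 400ms chat poll
            }
        };

        xmlhttp3.onreadystatechange = function() {
            if (xmlhttp3.readyState == 4) {
                document.getElementById("userwindow").innerHTML = xmlhttp3.responseText;
                setTimeout(function() {
                    user_read('methods.php?method=u&room=' + room);   // was ajax_read
                }, 12000);                    // 12s user-list poll
            }
        };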

    Read the article

  • Caveats with the runAllManagedModulesForAllRequests in IIS 7/8

    - by Rick Strahl
    One of the nice enhancements in IIS 7 (and now 8) is the ability to intercept non-managed - i.e. non ASP.NET served - requests from within ASP.NET managed modules. This opened up a ton of new functionality that can be applied across non-managed content using .NET code. I thought I had a pretty good handle on how IIS 7's Integrated mode pipeline works, but when I put together some samples last night I realized that the way managed and unmanaged requests fire into the pipeline is downright confusing, especially when it comes to the runAllManagedModulesForAllRequests attribute. There are a number of settings that can affect whether a managed module receives non-ASP.NET content requests such as static files or requests from other frameworks like PHP or ASP classic, and that is the topic of this blog post.

    Native and Managed Modules

    The integrated mode IIS pipeline for IIS 7 and later - as the name suggests - allows for integration of ASP.NET pipeline events in the IIS request pipeline. Natively IIS runs unmanaged code, and there are a host of native modules that handle the core behavior of IIS. If you set up a new IIS site or application without managed code support, only the native modules are supported and fired, without any interaction between native and managed code. If you use the Integrated pipeline with managed code enabled, however, things get a little more confusing, as both native modules and .NET managed modules can fire against the same IIS request. If you open up the IIS Modules dialog you see both kinds: unmanaged modules point at physical files on disk, while managed modules point at .NET types and files referenced from the GAC or the current project's BIN folder. Both native and managed modules can co-exist and execute side by side on the same request.

    When running in IIS 7 the IIS pipeline actually instantiates the ASP.NET runtime (via the System.Web.PipelineRuntime class) which, unlike the core HttpRuntime classes in ASP.NET, receives notification callbacks when IIS integrated mode events fire. The IIS pipeline is smart enough to detect whether managed handlers are attached, and if there are none these notifications don't fire, improving performance.

    The good news about all of this for .NET devs is that ASP.NET style modules can be used for just about every kind of IIS request. All you need to do is create a new Web Application, enable ASP.NET on it, and then attach managed handlers. Handlers can look at ASP.NET content (i.e. ASPX pages, MVC, WebAPI etc. requests) as well as non-ASP.NET content, including static content like HTML files, images, JavaScript and CSS resources. It's very cool that this capability has been surfaced. However, with that functionality comes a lot of responsibility: because every request passes through the ASP.NET pipeline if managed modules (or handlers) are attached, there are possible performance implications. Running through the ASP.NET pipeline does add some overhead.

    ASP.NET and Your Own Modules

    When you create a new ASP.NET project, the Visual Studio templates typically create the modules section like this:

        <system.webServer>
          <validation validateIntegratedModeConfiguration="false" />
          <modules runAllManagedModulesForAllRequests="true">
          </modules>
        </system.webServer>

    Specifically, the interesting thing about this is the runAllManagedModulesForAllRequests="true" flag, which seems to indicate that it controls whether any registered modules always run. Realistically though, this flag does not control whether managed code is fired for all requests or not. Rather, it is an override for the preCondition flag on a particular module. With the flag set to the default true setting, you can assume that pretty much every IIS request you receive ends up firing through your ASP.NET module pipeline, and every module you have configured is accessed even by non-managed requests like static files. In other words, your module will have to handle all requests.

    Now so far, so obvious. What's not quite so obvious is what happens when you set runAllManagedModulesForAllRequests="false". You probably would expect that the non-ASP.NET requests immediately stop being funnelled through the ASP.NET module pipeline. But that's not what actually happens. For example, if I declare a module like this:

        <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" />

    by default it will fire against ALL requests regardless of the runAllManagedModulesForAllRequests flag. Even with runAllManagedModulesForAllRequests="false", the module is fired. Not quite expected.

    So what is runAllManagedModulesForAllRequests really good for? It's essentially an override for the managedHandler preCondition. If I declare my module in web.config like this:

        <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" preCondition="managedHandler" />

    and runAllManagedModulesForAllRequests="false", my module only fires against managed requests. If I switch the flag to true, my module ends up handling all IIS requests that are passed through from IIS. The moral of the story here is that if you intend to only look at ASP.NET content, you should always set the preCondition="managedHandler" attribute to ensure that only managed requests are fired on the module. But even if you do this, realize that runAllManagedModulesForAllRequests="true" can override this setting.

    runAllManagedModulesForAllRequests and Http Application Events

    Another place the runAllManagedModulesForAllRequests attribute matters is the global Http Application object (typically in global.asax) and the Application_XXXX events that you can hook up there. While those events are dynamically hooked up to the application class, they basically behave as if they were set with the preCondition="managedHandler" configuration switch. The end result is that if you have runAllManagedModulesForAllRequests="true" you'll see every Http request passed through the Application_XXXX events, and you only see ASP.NET requests with the flag set to "false".

    What's all that mean? Configuring an application to handle both ASP.NET and other content requests can be tricky, especially if you need to mix modules that might require both. A couple of things are important to remember. If your module doesn't need to look at every request, by all means set preCondition="managedHandler" on it. This will at least allow it to respond to the runAllManagedModulesForAllRequests="false" flag and then only process ASP.NET requests.

    Look really carefully to see whether you actually need runAllManagedModulesForAllRequests="true" in your applications as set by the default new project templates in Visual Studio. Part of the reason this is the default is that it was required for the initial versions of IIS 7 and ASP.NET 2 in order to handle MVC extensionless URLs. However, if you are running IIS 7 or later and .NET 4.0 you can use the ExtensionlessUrlHandler instead to get MVC functionality without requiring runAllManagedModulesForAllRequests="true":

        <handlers>
          <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
          <add name="ExtensionlessUrlHandler-Integrated-4.0"
               path="*."
               verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS"
               type="System.Web.Handlers.TransferRequestHandler"
               preCondition="integratedMode,runtimeVersionv4.0" />
        </handlers>

    Oddly this is the default for Visual Studio 2012 MVC template apps, so I'm not sure why the default template still adds runAllManagedModulesForAllRequests="true" - it should be enabled only if there's a specific need to access non-ASP.NET requests.

    As a side note, it's interesting that when you access a static HTML resource, you can actually write into the Response object and get the output to show, which is trippy. I haven't looked closely to see how this works - whether ASP.NET just fires directly into the native output stream or whether the static requests are re-routed directly through the ASP.NET pipeline once a managed code module is detected. This doesn't work for all non-ASP.NET resources - for example, I can't do the same with ASP classic requests - but it makes for an interesting demo when injecting HTML content into a static HTML page :-)

    Note that on the original Windows Server 2008 and Vista (IIS 7.0) you might need a hotfix in order for ExtensionlessUrlHandler to work properly for MVC projects. On my live server I needed it (about 6 months ago), but others have observed that the latest service updates have integrated this functionality and the hotfix is not required. On IIS 7.5 and later I've not needed any patches for things to just work.

    Plan for non-ASP.NET Requests

    It's important to remember that if you write a .NET module to run on IIS 7, there's no way for you to prevent non-ASP.NET requests from hitting your module. So make sure you plan to support requests to extensionless URLs and to static resources like files. Luckily ASP.NET creates a full Request and full Response object for you even for non-ASP.NET content. So even for static files, and even for ASP classic for example, you can look at Request.FilePath or Request.ContentType (in post-handler pipeline events) to determine what content you are dealing with. As always with module design, make sure you check for the conditions in your code that make the module applicable, and if a filter fails, exit immediately - minimize the code that runs if your module doesn't need to process the request.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in IIS7, ASP.NET.
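    For reference, a minimal sketch of the shape of the managed module being registered above (the HowAspNetWorks.SharewareMessageModule type is just the post's example name; the .htm filter and header are illustrative), following the post's advice to test applicability and bail out early:

        // Sketch: a managed module that sees every request when
        // runAllManagedModulesForAllRequests="true" or no managedHandler preCondition is set.
        using System;
        using System.Web;

        namespace HowAspNetWorks
        {
            public class SharewareMessageModule : IHttpModule
            {
                public void Init(HttpApplication app)
                {
                    app.BeginRequest += OnBeginRequest;
                }

                private void OnBeginRequest(object sender, EventArgs e)
                {
                    var app = (HttpApplication)sender;

                    // Applicability check first: exit cheaply for content this
                    // module doesn't care about, per the advice above.
                    if (!app.Request.FilePath.EndsWith(".htm", StringComparison.OrdinalIgnoreCase))
                        return;

                    app.Response.AppendHeader("X-Shareware-Notice", "evaluation copy");
                }

                public void Dispose() { }
            }
        }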

    Read the article

  • TMG Forefront Proxy blocking internal HTTP requests

    - by Pascal
    I have TMG Forefront with the proxy installed and configured. However, whenever I make internal HTTP requests to servers on the internal network using a fully qualified DNS name, the proxy denies the connection:

        Denied Connection FRW-02 18/03/2011 20:06:37
        Log type: Web Proxy (Forward)
        Status: 12202 Forefront TMG denied the specified Uniform Resource Locator (URL).
        Rule: Default rule
        Source: Internal (10.50.75.21:21492)
        Destination: Internal (10.50.75.10:8080)
        Request: GET http://app-01.mydomain.com.br:9871/internalwebserver_deploy/MyServiceService.svc?wsdl
        Filter information: Req ID: 0a157279; Compression: client=No, server=No, compress rate=0% decompress rate=0%
        Protocol: http
        User: anonymous

    How can I get around this block? This is an internal call, so it shouldn't be blocked. If I use only http://app-01:9871/internalwebserver_deploy/MyServiceService.svc?wsdl, without the domain after the server name, then it doesn't get blocked. 10.50.75.10 is the firewall's IP and the internal network's gateway.

    Read the article

  • Exchange migration to 2007 making Outlook 2003 unable to read meeting requests

    - by Kvad
    Hi, we are currently moving from Exchange 2003 to 2007 (8.2 build 176.2). We have encountered an issue with one user. In Outlook 2003, when getting a meeting request: "Can't open this item. Could not complete the operation. One or more parameter values are not valid." The item cannot be previewed in the reading pane either. The item can be viewed fine in OWA and on an iPhone. I've tried with cached mode off and on, and on different computers. Same issue. There are the following entries on the account:

        SMTP: [email protected], [email protected]
        X400: C=AU;A= ;P=Company Name;O=Exchange;S=LastName;G=FirstName;

    I'm loath to recreate the account; that would be an extreme last resort. Any ideas? Thanks in advance.

    Read the article

  • How do I make my internal dns forward requests to a given server

    - by ankimal
    We have a DNS server internally that looks up IP addresses for all internal hosts and goes to the root DNS servers for all other domains (the rest of the internet). Here is my config:

        options {
            listen-on port 53 { 127.0.0.1; any; };
            listen-on-v6 port 53 { ::1; };
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            allow-query { 192.168.1.0/24; 127.0.0.1; };
            recursion yes;
        };

        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };

        view "internal" {
            // What the home network will see
            match-clients { 127.0.0.1; any; };
            match-destinations { 127.0.0.1; any; };
            recursion yes;
            zone "." IN {
                type hint;
                file "named.ca";
            };
            include "internal_zones.conf";
        };

    We need to tweak this to go to our ISP's DNS, x.y.z.w, instead of the root DNS servers when the host cannot be resolved internally. Config: Fedora 10 / BIND 9.5.2
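    The usual way to do this in BIND is to declare the ISP resolver as a forwarder and keep the root hints only as a fallback. A sketch against the view above (x.y.z.w stays the placeholder from the question; internal zones still answer locally because they match before forwarding applies):

        // Sketch: forward unresolved queries to the ISP resolver first.
        view "internal" {
            match-clients { 127.0.0.1; any; };
            recursion yes;

            forwarders { x.y.z.w; };   // the ISP's DNS server
            forward first;             // try the forwarder, fall back to iteration

            zone "." IN {
                type hint;
                file "named.ca";       // only consulted if the forwarder fails
            };
            include "internal_zones.conf";
        };

    Using forward only instead of forward first would suppress the root-hint fallback entirely.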

    Read the article

  • Route specific HTTP requests through pfSense OpenVPN

    - by DennisQ
    Hi, to start, I have very little knowledge of routes, iptables, etc. That said, here's what I'm trying to accomplish and where I think I'm stumped.

    Problem: We have an external website which we recently firewalled so it only accepts traffic from our office IP addresses. This works well at the office, but doesn't work for remote access through VPN, as we don't route all traffic through OpenVPN. I would rather avoid forcing everyone to route all traffic through the VPN just to accommodate this one site.

    Environment: The main router box is running pfSense. em0 is the internal interface, em1 is external. The internal net is 10.23.x and the VPN is 10.0.8.0/24.

    I believe what I need to do is add a route to the VPN server config to send all traffic to that IP over the VPN tunnel. I think that part's working, but I don't get a response back, so I'm assuming that I need some NAT config on the VPN server to route the response back over the tunnel. What I've found so far is to try the following, but since this is a pfSense box on FreeBSD, I can't run iptables, etc.

    Make sure IP forwarding is enabled:

        echo 1 > /proc/sys/net/ipv4/ip_forward

    Set up NAT back out:

        iptables -t nat -A POSTROUTING -s 10.0.8.0/24 -o em0 -j MASQUERADE

    Am I on the right path, and if so, how do I accomplish this through the pfSense UI or the FreeBSD CLI? Thanks!
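    For what it's worth, the FreeBSD/pf counterparts of those two Linux commands look roughly like the sketch below. On pfSense the supported route is the web UI: Firewall > NAT > Outbound, switch to manual outbound NAT, and add a rule translating 10.0.8.0/24 out of the internal interface; the CLI form is shown only to map the concepts:

        # Sketch: FreeBSD equivalents of the Linux commands above.
        sysctl net.inet.ip.forwarding=1          # enable IP forwarding

        # pf.conf NAT rule: masquerade VPN clients out of the internal interface
        nat on em0 from 10.0.8.0/24 to any -> (em0)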

    Read the article

  • DHCP Requests Failing

    - by Jon Rauschenberger
    Clients on our network recently started receiving this error when attempting to acquire an IP address from our DHCP server: "the name specified in the network control block (ncb) is in use on a remote adapter". The DHCP server is a Windows Server 2008 R2 machine; most of the clients are Win 7. Can't find much on that error. Anyone have an idea what could cause it? Thanks, jon

    Read the article

  • Returning "200 OK" in Apache on HTTP OPTIONS requests

    - by i.
    I'm attempting to implement cross-domain HTTP access control without touching any code. I've got my Apache(2) server returning the correct access control headers with this block:

        Header set Access-Control-Allow-Origin "*"
        Header set Access-Control-Allow-Methods "POST, GET, OPTIONS"

    I now need to prevent Apache from executing my code when the browser sends an HTTP OPTIONS request (it's stored in the REQUEST_METHOD environment variable), returning 200 OK instead. How can I configure Apache to respond "200 OK" when the request method is OPTIONS? I've tried this mod_rewrite block, but the access control headers are lost:

        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} OPTIONS
        RewriteRule ^(.*)$ $1 [R=200,L]
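    One likely culprit: a plain Header set directive only operates on mod_headers' normal success table, and those headers are dropped from the short-circuited response that mod_rewrite generates for [R=200]. The always condition targets the table that survives locally generated responses; a sketch of the combined config:

        # Sketch: 'always' makes the headers stick on locally generated
        # responses such as the [R=200] produced by the rewrite rule below.
        Header always set Access-Control-Allow-Origin "*"
        Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS"

        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} OPTIONS
        RewriteRule ^(.*)$ $1 [R=200,L]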

    Read the article

  • Firefox requests the master password twice

    - by Mehper C. Palavuzlar
    I've set a master password for Firefox. When Firefox starts, it strangely opens two separate password request windows. When I type the master password into one and hit Enter, Firefox opens without problems, but the other password request window stays there. I simply close it, but it's annoying. Why are there 2 windows when it's enough to type the password once? I've upgraded Firefox from 3.5.5 to 3.5.6 but the problem remains. Any comments? PS: The latest news on this issue can be followed in the related Mozilla Support Forum thread.

    Read the article

  • How to protect my VPS from winlogon RDP spam requests

    - by Valentin Kuzub
    I've got some hackers constantly hitting my RDP and generating thousands of audit failures in the event log. The password is pretty elaborate, so I don't think brute-forcing will get them anywhere. I am using a VPS and I am pretty much a noob in Windows Server security (I'm a programmer myself and it's the web server for my site). What is a recommended approach to deal with this? I would rather block IPs after some number of failures, for example. Sorry if the question is not appropriate.
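    Short of installing a fail2ban-style tool that parses the audit log and adds block rules, the built-in Windows Firewall can at least shrink the attack surface by scoping RDP to known source addresses. A sketch with netsh (the rule name matches the built-in inbound RDP rule on Server 2008 R2, and 203.0.113.0/24 is a placeholder for your own management range):

        rem Sketch: restrict inbound RDP to a trusted address range.
        netsh advfirewall firewall set rule name="Remote Desktop (TCP-In)" new remoteip=203.0.113.0/24

    Moving RDP off port 3389 or reaching it only through a VPN are common complements to this.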

    Read the article

  • IIS - Forwarding requests to a folder to another port

    - by user1231958
    Context: I installed GlassFish 3 on a server that currently hosts ASP and PHP inside Internet Information Server 7, so we can start moving to a new system architecture (the information system is being remade). Obviously, GlassFish uses another port, and without much configuration (all I had to do was install it) it worked: if I browse to www.domain.com:8080, the person is taken to the GlassFish server.

    Issue: Obviously I don't want the person to have to write the port! I also believe it might raise some security issues.

    Requirement: I need the server to take an address of the form www.domain.com/gf or new.domain.com or something alike, and when it receives such a request, "redirect" (masking the URL) the user to the GlassFish website (www.domain.com:8080). Thank you beforehand!
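    That URL-masking behavior is a reverse proxy; on IIS 7 the usual combination is the Application Request Routing (ARR) and URL Rewrite modules, with ARR's proxy functionality enabled. A sketch of the rewrite rule for the www.domain.com/gf form (the rule name is arbitrary, and GlassFish must serve the application at a matching path or have its context root adjusted):

        <!-- Sketch: proxy /gf/* to the local GlassFish listener
             (requires ARR + URL Rewrite, with proxying enabled in ARR). -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="GlassFishProxy" stopProcessing="true">
                <match url="^gf/(.*)" />
                <action type="Rewrite" url="http://localhost:8080/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    For the new.domain.com form, the same action can instead live in a site bound to that host name, proxying everything.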

    Read the article
