Search Results

Search found 5793 results on 232 pages for 'requests'.


  • Tracking down origin of I/O in multi-process server

    - by Craig Ringer
    I'm currently trying to track down some phantom I/O in a PostgreSQL build I'm testing. It's a multi-process server and it isn't simple to associate disk I/O back to a particular back-end and query. I thought Linux's perf tool would be ideal for this, but I'm struggling to capture block I/O performance counter metrics and associate them with user-space activity. It's easy to record block I/O requests and completions with, eg:

        sudo perf record -g -T -u postgres -e 'block:block_rq_*'

    and the user-space pid is recorded, but there's no kernel or user-space stack captured, or ability to snapshot bits of the user-space process's heap (say, query text) etc. So while you have the pid, you don't know what the process was doing at that point. Just perf script output like:

        postgres 7462 [002] 301125.113632: block:block_rq_issue: 8,0 W 0 () 208078848 + 1024 [postgres]

    If I add the -g flag to perf record it'll take snapshots of the kernel stack, but doesn't capture user-space state for perf events captured in the kernel. The user-space stack only goes up to the entry-point from userspace, like LWLockRelease, LWLockAcquire, memcpy (mmap'd IO), __GI___libc_write, etc. So. Any tips? Being able to capture a snapshot of the user-space stack in response to kernel events would be ideal.

  • My MySQL & PHP while loop runs out of memory when there is only one row

    - by Sam
    Hey all, why is this code throwing an out of memory error when there is only 1 row in the database?

        $request_db = mysql_query("SELECT * FROM requests WHERE haveplayed='0'") or die(mysql_error());
        $request = mysql_fetch_array( $request_db );
        echo "<table border=\"1\" align=\"center\">";
        while ( $request['haveplayed'] == "0" ) {
          echo "<tr><td>";
          echo $request['SongName'];
          echo "</td><td>";
          echo "<tr><td>";
          echo $request['Artist'];
          echo "</td><td>";
          echo "<tr><td>";
          echo $request['DedicatedTo'];
          echo "</td><td>";
        }
        echo "</table>";

    Thanks.

  • Soap 1.2 Endpoint Performance

    - by mflair2000
    I have a client WCF host service SOAP 1.2 service setup and I'm having performance issues with the SOAP Java proxy. I have no control over how the Java service is set up, aside from the endpoint config. I have the client running asynchronously, and the host WCF service running in the async pattern, but I see the SOAP 1.2 proxy bottlenecking and handling the requests in a synchronous way. Can someone take a look at the auto-generated SOAP 1.2 configuration below, from the SOAP 1.2 service WSDL? Is there a way to configure this for async operation and improve performance? Could it be configured for SOAP 1.1?

        <?xml version="1.0"?>
        <configuration>
          <system.web>
            <compilation debug="true"/>
          </system.web>
          <system.serviceModel>
            <bindings>
              <customBinding>
                <binding name="WebserviceListenerSoap12Binding" closeTimeout="00:30:00" openTimeout="00:30:00"
                         receiveTimeout="00:30:00" sendTimeout="00:30:00">
                  <textMessageEncoding maxReadPoolSize="64" maxWritePoolSize="16" messageVersion="Soap12"
                                       writeEncoding="utf-8">
                    <readerQuotas maxDepth="32" maxStringContentLength="20000000" maxArrayLength="20000000"
                                  maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                  </textMessageEncoding>
                  <httpTransport manualAddressing="false" maxBufferPoolSize="20000000" maxReceivedMessageSize="20000000"
                                 allowCookies="false" authenticationScheme="Anonymous" bypassProxyOnLocal="false"
                                 decompressionEnabled="true" hostNameComparisonMode="StrongWildcard"
                                 keepAliveEnabled="true" maxBufferSize="20000000" proxyAuthenticationScheme="Anonymous"
                                 realm="" transferMode="Buffered" unsafeConnectionNtlmAuthentication="false"
                                 useDefaultWebProxy="true" />
                </binding>
              </customBinding>
            </bindings>
            <client>
              <endpoint address="http://10.18.2.117:8080/java/webservice/WebserviceListener.WebserviceListenerHttpSoap12Endpoint/"
                        binding="customBinding" bindingConfiguration="WebserviceListenerSoap12Binding"
                        contract="ResolveServiceReference.WebserviceListenerPortType"
                        name="WebserviceListenerHttpSoap12Endpoint" />
            </client>
          </system.serviceModel>
        </configuration>

  • Making firefox refresh images faster

    - by Earlz
    I have a thing I'm doing where I need a webpage to stream a series of images from the local client computer. I have a very simple run here: http://jsbin.com/idowi/34 The code is extremely simple:

        setTimeout("refreshImage()", 100);

        function refreshImage() {
          var date = new Date();
          var ticks = date.getTime();
          $('#image').attr('src', 'http://127.0.0.1:2723/signature?' + ticks.toString());
          setTimeout("refreshImage()", 100);
        }

    Basically I have a signature pad being used on the client machine. We want the signature to show up in the web page and for them to see themselves signing it within the web page (the pad does not have an LCD to show it to them right there). So I set up a simple local HTTP server which grabs an image of the current state of the signature pad and sends it to the browser. This has no problems in any browser (tested in IE7, 8, and Chrome) except Firefox, where it is extremely laggy and jumpy and doesn't keep up with the 10 FPS rate. Does anyone have any ideas on how to fix this? I've tried creating very simple double buffering in JavaScript but that just made things worse. Also, for a bit more information, it seems that Firefox is executing the JavaScript at the correct framerate, as the requests are coming in to the server at a constant speed; but the images are only refreshed inconsistently, ranging from 5 times per second all the way down to 0 times per second (taking 2 seconds to do a refresh). Also I have tried using different image formats, all with the same results. The formats I've tried include bitmaps, PNGs, and GIFs (GIFs caused a minor problem in Chrome with flicker, though). Could it be possible that Firefox is somehow caching my images, causing a slight lag? I send these headers though:

        Pragma-directive: no-cache
        Cache-directive: no-cache
        Cache-control: no-cache
        Pragma: no-cache
        Expires: 0

  • How to drop/restart a client connection to a flaky socket server

    - by Steve Prior
    I've got Java code which makes requests via sockets from a server written in C++. The communication is simply that the Java code writes a line for a request and the server then writes a line in response, but sometimes it seems that the server doesn't respond, so I've implemented an ExecutorService-based timeout which tries to drop and restart the connection. To work with the connection I keep:

        Socket server;
        PrintWriter out;
        BufferedReader in;

    so setting up the connection is a matter of:

        server = new Socket(hostname, port);
        out = new PrintWriter(new OutputStreamWriter(server.getOutputStream()));
        in = new BufferedReader(new InputStreamReader(server.getInputStream()));

    If it weren't for the timeout code, the request/receive steps are simply:

        String request = "some request";
        out.println(request);
        out.flush();
        String response = in.readLine();

    When I received a timeout I was trying to close the connection with:

        in.close();
        out.close();
        server.close();

    but one of these closes seems to block and never return. Can anyone give me a clue why one of these would block, and what would be the proper way to abandon the connection and not end up with resource leak issues?
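
    One way to keep readLine() from hanging forever on a flaky server is to put a read timeout on the socket and to close the Socket itself when the timeout fires. This is a minimal sketch, not the code from the question; the class and method names are illustrative assumptions:

        import java.io.*;
        import java.net.*;

        // Sketch: read with a timeout instead of blocking forever, then close the Socket itself.
        public class FlakyClientSketch {
            public static String requestWithTimeout(String hostname, int port,
                                                    String request, int timeoutMs) throws IOException {
                Socket server = new Socket(hostname, port);
                try {
                    server.setSoTimeout(timeoutMs); // readLine() now throws SocketTimeoutException if the server stalls
                    PrintWriter out = new PrintWriter(new OutputStreamWriter(server.getOutputStream()));
                    BufferedReader in = new BufferedReader(new InputStreamReader(server.getInputStream()));
                    out.println(request);
                    out.flush();
                    return in.readLine();           // null means the server closed the connection
                } catch (SocketTimeoutException e) {
                    return null;                    // caller can reconnect and retry
                } finally {
                    server.close();                 // closing the Socket also closes the streams obtained from it
                }
            }
        }

    Closing the Socket closes its input and output streams as well, so the three separate close() calls are not needed in this variant.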

  • Regex to GENERATE thumbnails!?!?! (but that's crazy!)

    - by CryptoMonkey
    Hello everyone! So here is my situation, and the solution that I've come up with to solve the problem. I have created an application that includes TinyMCE to allow users to create HTML content for publishing. The user can include images in their markup, and drag/resize those images, affecting the final Width/Height attributes in the IMG tag. This is all great; the users can include images and resize/relocate them to their desired appearance. But one big problem is that I am now sending a (possibly) much larger image to the client, only to have the browser resize the image into the requested Width/Height attributes. All that bandwidth and lost load time...

    So my solution is to pre-process my users' markup content, scanning all of the IMG tags and parsing out the Height/Width/Src attributes, then set each img's SRC to a phpThumb request with the parsed Height/Width passed into the thumbnail's URL. This will create my reduced-size image (optimising bandwidth at the expense of CPU and caching). What do you think about this solution? I've seen other posts where people were using mod_rewrite to do something similar, but I want to affect the content as the page is served and not manipulate the image requests as they're being received.

    Any thoughts about this design? I need some help with the fine details as my regex skills need some work, but I'm very short on time and promise to pay my technical knowledge debt soon. To make the regexes easier, I can be sure of some things: only img tags that need this processing will have existing width="" height="" attributes (with the double quotes and lower-cased text, but I suppose matching the text case-insensitively would be better in case TinyMCE changes). So a regex to match only the necessary img tags, and maybe another three regexes to extract the src, the width, and the height? Thanks everyone.

  • Threading in client-server socket program - proxy server

    - by crazyTechie
    I am trying to write a program that acts as a proxy server. The proxy server basically listens on a given port (7575) and sends the request on to the server. As of now, I have not implemented caching of the response. The code looks like:

        ServerSocket socket = new ServerSocket(7575);
        Socket clientSocket = socket.accept();
        clientRequestHandler(clientSocket);

    I changed the above code as below:

        // calling the same clientRequestHandler method from inside another method
        Socket clientSocket = socket.accept();
        Thread serverThread = new Thread(new ConnectionHandler(clientSocket));
        serverThread.start();

        class ConnectionHandler implements Runnable {
            Socket clientSocket = null;

            ConnectionHandler(Socket client) {
                this.clientSocket = client;
            }

            @Override
            public void run() {
                try {
                    PrxyServer.clientRequestHandler(clientSocket);
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        }

    Using this code, I am able to open a webpage like Google. However, if I open another web page, even after I completely receive the first response, I get a "connection reset by peer" exception.

      1. How can I handle this issue?
      2. Can I use threading to handle different requests?
      3. Can someone give a reference where I can look for example code that implements threading?

    Thanks.
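
    A minimal sketch of a thread-per-connection accept loop using an ExecutorService. It assumes a clientRequestHandler(Socket) method like the one above exists; class and pool-size choices are illustrative:

        import java.net.ServerSocket;
        import java.net.Socket;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class ProxyMain {
            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(20);   // bounded worker pool
                ServerSocket listener = new ServerSocket(7575);
                while (true) {                                             // keep accepting; don't accept just once
                    Socket clientSocket = listener.accept();
                    pool.submit(() -> {
                        try {
                            clientRequestHandler(clientSocket);            // assumed handler, as in the question
                        } catch (Exception ex) {
                            ex.printStackTrace();
                        } finally {
                            try { clientSocket.close(); } catch (Exception ignored) {}
                        }
                    });
                }
            }

            // placeholder for the existing handler from the question
            static void clientRequestHandler(Socket client) throws Exception { /* ... */ }
        }

    The key points are that accept() runs in a loop rather than once, and that each accepted socket is closed by the worker that served it.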

  • MDX: Filtering a member set by a measure's table values

    - by oyvinro
    I have some numbers in a fact table, and have generated a measure which uses the SUM aggregator to summarize the numbers. But the problem is that I only want to sum the numbers that are higher than, say, 10. I tried using a generic expression in the measure definition, and that works of course, but the problem is that I need to be able to set that value dynamically, because it's not always 10; users should be able to select it themselves. More specifically, my current MDX looks like this:

        WITH
          SET [Email Measures] AS '{[Measures].[Number Of Answered Cases],
                                    [Measures].[Max Expedition Time First In Case],
                                    [Measures].[Avg Expedition Times First In Case],
                                    [Measures].[Number Of Incoming Email Requests],
                                    [Measures].[Avg Number Of Emails In Cases],
                                    [Measures].[Avg Expedition Times Total],
                                    [Measures].[Number Of Answered Incoming Emails]}'
          SET [Organizations] AS '{[Organization.Id].[860]}'
          SET [Operators] AS '{[Operator.Id].[3379],[Operator.Id].[3181]}'
          SET [Email Accounts] AS '{[Email Account.Id].[6]}'
          MEMBER [Time.Date].[Date Period] AS Aggregate({[Time.Date].[2008].[11].[11] : [Time.Date].[2009].[1].[2]})
          MEMBER [Email.Type].[Email Types] AS Aggregate({[Email.Type].[0]})
        SELECT {[Email Measures]} ON columns,
               [Operators] ON rows
        FROM [Email_Fact]
        WHERE ( [Time.Date].[Date Period] )

    Now, the member in question is the calculated member [Avg Expedition Times Total]. This member takes in two measures, [Sum Expedition Times] and [Nr of Expedition Times], and divides one by the other to get the average; all of this presently works. However, I want [Sum Expedition Times] to only summarize values over or under a parameter of my/the user's choosing. How do I filter the numbers [Sum Expedition Times] iterates through, rather than filtering on the sum that the measure gives me in the end?

  • JS file called twice in Wordpress

    - by Dxr Tw
    I'm trying to reduce the number of requests by reducing the number of JS files on a WP site. I successfully combined around 7 JavaScript files into one (site.js). Now, I'm using a plugin which has its own JS file (pluginA.js), and I want to include pluginA.js in that site.js. However, if I simply copy the pluginA JS content into site.js and then change the location to /files/site.js, Firebug's NET tab shows that site.js is requested/called twice. I presume this is due to wp_enqueue_script. How can I make it not call site.js a second time but just look into the already loaded site.js? Maybe there's an alternative to wp_enqueue_script? The plugin's PHP file:

        add_action( 'wp_enqueue_scripts', 'testplugin_scripts');

        function testplugin_scripts() {
            /* global $testplugin_version; */
            $default_selector = 'li:has(ul) > a';
            $default_selector_leaf = 'li li li:not(:has(ul)) > a';

            wp_enqueue_scripts('test-plugin', site_url('/files/site.js', __FILE__), array('jquery'), $testplugin_version);

            $params = array(
                'selector' => apply_filters('testplugin_selector', $default_selector),
                'selector_leaf' => apply_filters('testplugin_selector_leaf', $default_selector_leaf)
            );
            wp_localize_script('test-plugin', 'testplugin_params', $params);
        }

  • jQuery.getJSON: how to avoid requesting the json-file on every refresh? (caching)

    - by Mr. Bombastic
    In this example you can see a generated HTML list. On every refresh the script requests the data file (ajax/test.json) and builds the list again. The generated file "ajax/test.json" is cached statically. But how can I avoid requesting this file on every refresh?

        // source: jquery.com
        $.getJSON('ajax/test.json', function(data) {
          var items = [];
          $.each(data, function(key, val) {
            items.push('<li id="' + key + '">' + val + '</li>');
          });
          $('<ul/>', {
            'class': 'my-new-list',
            html: items.join('')
          }).appendTo('body');
        });

    This doesn't work:

        list_data = $.cookie("list_data");
        if (list_data == undefined || list_data == "") {
          $.getJSON('ajax/test.json', function(data) {
            list_data = data;
          });
        }
        var items = [];
        $.each(data, function(key, val) {
          items.push('<li id="' + key + '">' + val + '</li>');
        });
        $('<ul/>', {
          'class': 'my-new-list',
          html: items.join('')
        }).appendTo('body');

    Thanks in advance!

  • Help with HTTP Intercepting Proxy in Ruby?

    - by Philip
    I have the beginnings of an HTTP Intercepting Proxy written in Ruby:

        require 'socket'                      # Get sockets from stdlib

        server = TCPServer.open(8080)         # Socket to listen on port 8080
        loop {                                # Servers run forever
          Thread.start(server.accept) do |client|
            puts "** Got connection!"
            @output = ""
            @host = ""
            @port = 80
            while line = client.gets
              line.chomp!
              if (line =~ /^(GET|CONNECT) .*(\.com|\.net):(.*) (HTTP\/1.1|HTTP\/1.0)$/)
                @port = $3
              elsif (line =~ /^Host: (.*)$/ && @host == "")
                @host = $1
              end
              print line + "\n"
              @output += line + "\n"
              # This *may* cause problems with not getting full requests,
              # but without this, the loop never returns.
              break if line == ""
            end
            if (@host != "")
              puts "** Got host! (#{@host}:#{@port})"
              out = TCPSocket.open(@host, @port)
              puts "** Got destination!"
              out.print(@output)
              while line = out.gets
                line.chomp!
                if (line =~ /^<proxyinfo>.*<\/proxyinfo>$/)
                  # Logic is done here.
                end
                print line + "\n"
                client.print(line + "\n")
              end
              out.close
            end
            client.close
          end
        }

    This simple proxy that I made parses the destination out of the HTTP request, then reads the HTTP response and performs logic based on special HTML tags. The proxy works for the most part, but seems to have trouble dealing with binary data and HTTPS connections. How can I fix these problems?

  • What's the point of indicating AllowMultiple=false on an abstract Attribute class?

    - by tvanfosson
    On a recent question about MVC attributes, someone asked whether using HttpPost and HttpDelete attributes on an action method would result in either request type being allowed or no requests being allowed (since it can't be both a Post and a Delete at the same time). I noticed that ActionMethodSelectorAttribute, from which HttpPostAttribute and HttpDeleteAttribute both derive, is decorated with

        [AttributeUsage(AttributeTargets.Method, AllowMultiple = false, Inherited = true)]

    I had expected it to not allow both HttpPost and HttpDelete on the same method because of this, but the compiler doesn't complain. My limited testing tells me that the attribute usage on the base class is simply ignored. AllowMultiple seemingly only disallows two of the same attribute from being applied to a method/class and doesn't seem to consider whether those attributes derive from the same class which is configured to not allow multiples. Moreover, the attribute usage on the base class doesn't even preclude you from changing the attribute usage on a derived class. That being the case, what's the point of even setting the values on the base attribute class? Is it just advisory, or am I missing something fundamental in how they work? FYI - it turns out that using both basically precludes that method from ever being considered. The attributes are evaluated independently and one of them will always indicate that the method is not valid for the request, since it can't simultaneously be both a Post and a Delete.

  • Web scraping from a Google Chrome extension

    - by limoragni
    I've started to develop a Chrome extension to navigate and perform actions on a website. So far the extension is able to receive a couple of parameters and check a set of radio-buttons, fill in a few inputs of a form and then submit it. What I want to do now is to repeat the process, but I'm stuck when the page is reloaded, and I don't know how to make the script react to the completion of the request. The workflow I want to achieve is the following (it is for automatically copying a certain object):

    Popup side:
      1. Enter the number of the Master object to copy
      2. Enter the base name of the copies (example Mod, so that I can iterate and add mod1, mod2, modn)
      3. Enter the number of copies

    Background side:
      1. Select master
      2. Select standard options
      3. Fill in inputs
      4. Submit form
      5. Wait for the page to complete the request and continue to the next copy (here I need help)

    The problem is on the repetition; the rest is taken care of. I assume that there must be a way of dealing with requests. Any ideas? By the way I'm doing it all with the extension and tabs methods of Google Chrome plus JavaScript and jQuery.

  • Context path routing in Tomcat ( service swapping )

    - by jojovilco
    Here is what I would like to achieve: I have a web service A which I want to be able to deploy side by side with other web services of type A, different version(s). For now I assume 2 instances side by side. I need this because the service has a warm-up stage, which takes some time to build up stuff from the DB, and only after it is ready can it start serving requests. I was thinking to deploy to Tomcat context paths like "/ServiceA-1.0", "/ServiceA-2.0" and then have a "virtual" context like "/ServiceA" which will point to the desired physical service, e.g. "/ServiceA-1.0". So the external world will know about ServiceA, but internally my ServiceA-related stack would know about the versioned ServiceA URL (there are more components involved, but only ServiceA serves the outer world). When a new service is ready, I would just reconfigure the "virtual" context to point to the new service. So far I have not been able to find out how to do this with Tomcat and am starting to think it is not possible. I found suggestions to place Apache HTTP Server in front of Tomcat and do the routing there, but I do not want to bring in another piece of software unless necessary. My questions are:

      - Is this kind of "virtual" context and routing possible to do with Tomcat?
      - Are there any other options, wisdom and lessons learned on how to achieve this kind of service-swapping scenario?

    Best, Jozef
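
    One approach worth sketching (an assumption-laden sketch, not a tested recipe: it relies on Tomcat's cross-context dispatching, so both versioned contexts would need crossContext="true" in their context config, and the paths are the illustrative ones from above) is a tiny "router" webapp deployed at /ServiceA that forwards every request to whichever versioned context is currently active:

        import java.io.IOException;
        import javax.servlet.RequestDispatcher;
        import javax.servlet.ServletContext;
        import javax.servlet.ServletException;
        import javax.servlet.http.*;

        // Deployed at /ServiceA; flipping the "active" path switches traffic to the new version.
        public class RouterServlet extends HttpServlet {
            private volatile String activeContextPath = "/ServiceA-1.0"; // could be read from config or JMX instead

            @Override
            protected void service(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                ServletContext target = getServletContext().getContext(activeContextPath);
                if (target == null) {                       // crossContext not enabled, or context not deployed
                    resp.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
                    return;
                }
                String path = req.getPathInfo() == null ? "/" : req.getPathInfo();
                RequestDispatcher dispatcher = target.getRequestDispatcher(path);
                dispatcher.forward(req, resp);              // hand the request to the versioned webapp
            }
        }

    Swapping versions then means changing activeContextPath (for example via a config reload or an admin call) rather than anything visible to the outside world.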

  • jquery ajax problem in chrome

    - by spaceman
    I have the following jQuery code running on my page just fine in FF and IE, but Chrome seems to be freaking out. In FF and IE the call is made and the result is appended to the div; in Chrome, it calls AjaxFailed on failure. The XMLHttpRequest passed to the AjaxFailed function has a status code of "200" and the statusText is "ok". The readyState is 4 and the responseText is set to the data I wish to append to the div. Basically, from what I can see it's calling the failure method but it isn't failing. I have tried with both GET and POST requests and it always breaks in Chrome.

        function getBranchDetails(contactID, branchID) {
          $.ajax({
            type: "GET",
            url: urlToRequestTo,
            data: "{}",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: branchDetailsSuccess,
            error: AjaxFailed
          });
        }

        function branchDetailsSuccess(result) {
          $("#divBranchControl").empty();
          $("#divBranchControl").append(" " + result);
          $("#branchDiv").tabs();
        }

        function AjaxFailed(result) {
          alert("FAILED : " + result.status + ' ' + result.statusText);
        }

  • Single-entry implementation gone wrong

    - by user745434
    I'm doing my first single-entry site and, based on the result, I can't see the benefit. I've implemented the following:

      1. .htaccess redirects all requests to index.php at the root
      2. The URL is parsed and each /segment/ is stored as an element in an array
      3. The first segment indicates which folder to include (e.g. "users" » "/pages/users/index.php")
      4. The index.php file of each folder parses the remaining elements in the segments array until the array is empty
      5. The content.php file of each folder is included if there are no more elements in the segments array, indicating that the destination file has been reached

    Sample file structure (folders in []):

        [root]
          index.php
          [pages]
            [users]
              index.php
              content.php
              [profile]
                index.php
                content.php
                [edit]
                  index.php
                  content.php
            [other-page]
              index.php
              content.php

    Request: http://mysite.com/users/profile/

      1. .htaccess redirects the request to http://mysite.com/index.php
      2. The URL is parsed and the segments array contains: [1] users, [2] profile
      3. index.php maps [1] to "pages/users/index.php", so includes that file
      4. pages/users/index.php maps [2] to pages/users/profile/index.php, so includes that file
      5. Since there are no other elements in the segments array, the content.php file in the current folder (pages/users/profile) is included

    I'm not really seeing the benefit of doing this over having functions that include components of the site (e.g. include_header(), include_footer(), etc.), so I conclude that I'm doing something terribly wrong. I'm just not sure what it is.

  • Effect.toggle and AJAX Calls

    - by kunalsawlani
    Hi, I am using the following code to capture all the AJAX calls being made from my app, in order to show a busy spinner until the call is completed. It works well when the requests are well spaced out. However, when the requests are made in quick succession, some of the AJAX calls do not get registered, or the onCreate and onComplete actions get kicked off at the same time, and more often than not the busy spinner continues to appear on the screen after all the calls have successfully completed. Is there any check I can perform at the end of the call to see whether the element is still visible, so that I can hide it?

        document.observe("dom:loaded", function() {
          $('loading').hide();
          Ajax.Responders.register({
            // When an Ajax call is made.
            onCreate: function() {
              new Effect.toggle('loading', 'appear');
              new Effect.Opacity('display-area', { from: 1.0, to: 0.3, duration: 0.7 });
            },
            onComplete: function() {
              new Effect.toggle('loading', 'appear');
              new Effect.Opacity('display-area', { from: 0.3, to: 1, duration: 0.7 });
            }
          });
        });

    Thanks!

  • Python : How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application; it works and responds pretty well under low load, but under high load, like 100 users/sec, it consumes 100% CPU and then slows down due to lack of CPU.

    Problem: Profiling the application gives me the time taken by functions. This time increases under high load. The time consumed may be due to complex calculation or to waiting for the CPU. So, how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption will improve the response time. I might have written extremely efficient code and need to add more CPU power, OR I might have some stupid code taking the CPU and causing the slowdown. Any help is appreciated!

    Update: I am using JMeter to profile my webapp; it gives me a throughput of 2 requests/sec [100 users]. I get an average time of 36 seconds over 100 requests vs 1.25 sec on 1 request.

    More info:
      - Configuration: Nginx + uWSGI with 4 workers
      - No database used; responses come from a REST API
      - On the 1st hit the REST API response gets cached, so it doesn't make a difference
      - Using ujson for JSON parsing

    Curious to know: Python/Django is used by so many orgs for so many big sites, so there must be some high-end debug / memory-CPU analysis tools. All those I found were casual snippets of code that perform profiling.

  • How do I choose what and when to cache data with ob_start rather than query the database?

    - by Tim Santeford
    I have a home page that has several independent dynamic parts. The parts consist of a list of recent news from the company, a site statistics panel, and the online status of certain employees. The recent news changes monthly, site statistics change daily, and online statuses change on a per-minute basis. I would like to cache these panels so that the DB is not hit on every page load. Is using ob_start() then ob_get_contents() to cache these parts to a file the correct way to do this, or is there a better method in PHP5 for doing this? In asking this question I'm trying to answer these additional questions:

      - How can I determine the correct approach for caching this data without doing extensive benchmarking?
      - Does it make sense to cache these parts in different files and then join them together per request, or should I re-query the data and cache once per minute?

    I'm looking for a rule of thumb for planning pages and for situations where doing testing is not cost effective (the client is not paying enough for it, I mean). Thanks!

  • (Python/Pyramid) Better ways to have standard list/form editors?

    - by badcat
    I'm working on a number of Pyramid (formerly Pylons) projects, and often I have the need to display a list of some content (let's say user accounts, log entries or simply some other data). A user should be able to paginate through the list, click on a row and get a form where he/she can edit the contents of that row. Right now I'm always re-inventing the wheel by having Mako templates which use webhelpers for the pagination and jQuery UI for providing a dialog, and I craft the editor form and AJAX requests on the client and server side by hand. As you may know, this eats up a painful amount of time. So what I'm wondering is: is there a better way of providing lists, editor dialogs and the server/client communication around them, without having to re-invent the wheel every time? I heard Django takes a big load of that off by providing user accounts and other stuff out of the box; but in my case it's not just about user accounts, it can be any kind of data that is stored server-side in a SQL database and should be editable by a user. Thanks in advance!

  • Website hosting from home - IIS6 [closed]

    - by Paul
    I'm wanting to host a few websites from home, primarily because I'm using some BETA Microsoft software (.NET 4 and EF) and don't want to install it on my production server, which is hosted at eukhost.com. Basically, I'm completely new to this sort of thing. So far, here is what I've done:

      1. Registered the domain name at namecheap.com (let's call it mydomain.com)
      2. Gone to "Nameserver Registration" in the panel and entered my IP address for the NS1 and NS2 records (let's say the IP is 0.0.0.0)
      3. Gone to "Domain Name Server Setup" and entered ns1.mydomain.com & ns2.mydomain.com
      4. Forwarded requests from port 80 to my internal IP (let's say 192.168.1.254)
      5. Created the website in IIS (I'm just testing with a single website so far, so have not created any host header values)

    Now, if I type in the IP address (http://0.0.0.0) I get the site as expected. However, if I enter http://www.mydomain.com I get an error saying "DNS Error - Cannot find server". I'm aware that there is a service from DynDNS that will automatically change the IP if I have a dynamic address, however my IP has remained static since I installed the ISP (since October) so I don't need this. Is there any way that I can get the DNS to work just by configuring IIS or something in Windows? I don't really want to have to pay for any 3rd party service. Thanks,

  • How to use the values from session variables in jsp pages that got saved using @Scope("session") in the mvc controllers

    - by droidsites
    I am building a web site using Spring MVC. I added a SignupController to handle all the signup-related requests. Once a user signs up I am adding that user to a session using @Scope("session"). Below is the SignupController code:

        @Controller
        @Scope("session")
        public class SignupController {

            @Autowired
            SignupServiceInter signUpService;

            private static final Logger logger = Logger.getLogger(SignupController.class);
            private String sessionUser;

            @RequestMapping("/SignupService")
            public ModelAndView signUp(@RequestParam("userid") String userId,
                                       @RequestParam("password") String password,
                                       @RequestParam("mailid") String emailId) {
                logger.debug(" userId:" + userId + "::Password::" + password + "::");
                String signupResult;
                try {
                    signUpService.registerUser(userId, password, emailId);
                    sessionUser = userId; // adding the signed-up user to the session
                    // Navigate to the user home page if everything goes right
                    return new ModelAndView("userHomePage", "loginResult", "Success");
                } catch (UserExistsException e) {
                    signupResult = e.toString();
                    // Navigate back to the signup page if the user already exists
                    return new ModelAndView("signUp", "loginResult", signupResult);
                }
            }
        }

    I am using the "sessionUser" variable to store the signed-up user id. My understanding is that when I use @Scope("session") for the controller, all the instance variables will be added to the HttpSession. With that understanding I tried to access "sessionUser" in userHomepage.jsp as:

        Welcome to <%=session.getAttribute("sessionUser")%>

    But it returns null. So my question is: how do I use, in JSP pages, the values from session variables that got saved using @Scope("session") in the MVC controllers?

    Note: My workaround is to pass the signed-up user id to the JSP page through ModelAndView, but passing values like this among pages feels like going back to managing state among pages using query strings.
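
    A session-scoped controller is stored in the session as a single bean under an internal key; its individual fields are not exposed as separate session attributes, which is why session.getAttribute("sessionUser") comes back null. A minimal sketch of the usual alternative, setting the attribute on the HttpSession explicitly (the attribute name "sessionUser" is just the illustrative one from the question, and the service call is elided):

        import javax.servlet.http.HttpSession;
        import org.springframework.stereotype.Controller;
        import org.springframework.web.bind.annotation.RequestMapping;
        import org.springframework.web.bind.annotation.RequestParam;
        import org.springframework.web.servlet.ModelAndView;

        @Controller
        public class SignupControllerSketch {

            // Spring MVC injects the current HttpSession when it is declared as a handler parameter.
            @RequestMapping("/SignupService")
            public ModelAndView signUp(@RequestParam("userid") String userId,
                                       @RequestParam("password") String password,
                                       @RequestParam("mailid") String emailId,
                                       HttpSession session) {
                // ... call the signup service here as in the original controller ...
                session.setAttribute("sessionUser", userId); // explicit session attribute, visible to JSPs
                return new ModelAndView("userHomePage", "loginResult", "Success");
            }
        }

    In the JSP, either <%=session.getAttribute("sessionUser")%> or ${sessionScope.sessionUser} should then resolve to the stored value.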

  • How do browser cookie domains work?

    - by Vilx-
    Due to weird domain/subdomain cookie issues that I'm getting, I'd like to know how browsers handle cookies. If they do it in different ways, it would also be nice to know the differences. In other words: when a browser receives a cookie, that cookie MAY have a domain and a path attached to it, or not, in which case the browser probably substitutes some defaults for them. Question 1: what are they? Later, when the browser is about to make a request, it checks its cookies and picks out the ones it should send for that request. It does so by matching them against the request's path and domain. Question 2: what are the matching rules?

    Added: The reason I'm asking this is because I'm interested in some edge cases, like:

      - Will a cookie for .example.com be available for www.example.com?
      - Will a cookie for .example.com be available for example.com?
      - Will a cookie for example.com be available for www.example.com?
      - Will a cookie for example.com be available for anotherexample.com?
      - Will www.example.com be able to set a cookie for example.com?
      - Will www.example.com be able to set a cookie for www2.example.com?
      - Will www.example.com be able to set a cookie for .com?
      - Etc.

    Added 2: Also, could someone suggest how I should set a cookie so that:

      - It can be set by either www.example.com or example.com;
      - It is accessible by both www.example.com and example.com.
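
    For the two "Added 2" requirements, the usual answer is to set the cookie with an explicit Domain of example.com: a domain cookie is sent to example.com and to its subdomains (current browsers ignore a leading dot), so either host can set it and both can read it. A small illustrative sketch using the Java servlet API (not part of the original question; the cookie name and value are made up):

        import java.io.IOException;
        import javax.servlet.http.Cookie;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class SharedCookieServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                Cookie shared = new Cookie("site_pref", "value");
                shared.setDomain("example.com"); // domain cookie: sent to example.com and *.example.com
                shared.setPath("/");             // visible to the whole site, not just the setting page's path
                shared.setMaxAge(60 * 60 * 24);  // persist for one day; omit for a session cookie
                resp.addCookie(shared);
            }
        }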

  • Redirecting http to https for a directory, via .htaccess, using mod_alias only

    - by Belinda
    I have the common problem of wanting to redirect requests for a certain restricted-access directory from http to https, so that users' login credentials are sent in a secure way. However, I do not have mod_rewrite enabled on my server. I only have mod_alias. This means I have to use the RedirectMatch command; I can't use the usual solutions that use RewriteCond and RewriteRule. (A note on the politics: I am a small-fry subsite maintainer in a very large organisation, so the server admins are unlikely to be willing to change the server config for me!)

    The following line works, but forms an infinite loop (because the rewritten URL is still caught by the initial regular expression):

        RedirectMatch permanent ^/intranet(.*)$ https://example.com/intranet$1

    One of my internal IT guys has suggested I avoid the infinite loop by moving the files to a new directory with a new name (e.g. /intranet2/). That seems pretty ugly to me, and people could still accidentally/deliberately revert to an insecure connection by visiting http://example.com/intranet2/ directly.

    Then I tried this, but it didn't work:

        RedirectMatch permanent ^http:(.*)/intranet(.*)$ https://example.com/intranet$1

    I suspect it didn't work because the first argument must be a file path from the root directory, so it can't start with "http:". So: any better ideas how to do this?

  • recursive wget with hotlinked requisites

    - by dongle
    I often use wget to mirror very large websites. Sites that contain hotlinked content (be it images, video, css, js) pose a problem, as I seem unable to specify that I would like wget to grab page requisites that are on other hosts, without having the crawl also follow hyperlinks to other hosts. For example, let's look at this page https://dl.dropbox.com/u/11471672/wget-all-the-things.html and pretend that this is a large site that I would like to completely mirror, including all page requisites, including those that are hotlinked.

        wget -e robots=off -r -l inf -pk

    ^^ gets everything but the hotlinked image

        wget -e robots=off -r -l inf -pk -H

    ^^ gets everything, including the hotlinked image, but goes wildly out of control, proceeding to download the entire web

        wget -e robots=off -r -l inf -pk -H --ignore-tags=a

    ^^ gets the first page, including both hotlinked and local images, does not follow the hyperlink to the site outside of scope, but obviously also does not follow the hyperlink to the next page of the site

    I know that there are various other tools and methods of accomplishing this (HTTrack and Heritrix allow for the user to make a distinction between hotlinked content on other hosts vs hyperlinks to other hosts) but I'd like to see if this is possible with wget. Ideally this would not be done in post-processing, as I would like the external content, requests, and headers to be included in the WARC file I'm outputting.
