Search Results

Search found 17054 results on 683 pages for 'jms request reply'.

  • DNS works, can ping, but cannot load web pages in browser

    - by user1224595
    Yesterday I changed routers, and my desktop computer started acting up. I could ping websites, and nslookup was able to resolve names to addresses, but neither Chrome, Firefox, nor IE could load any web pages. None of my other computers connected to the same wireless router have any problems. I connect my desktop to the router through a cheap wifi dongle. I did a Wireshark capture of the browser request, and I have uploaded the pcap here: https://drive.google.com/file/d/0B7AsPdhWc-SwbTV0bUJLQXo4UUE/edit?usp=sharing One strange thing I noticed was the spamming of SSDP packets. I am not super familiar with networking, but it does not seem to be a problem with the router, since DNS works and so does DHCP (the desktop is assigned an address correctly). Any help would be appreciated.
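
    A couple of quick checks, assuming a Windows desktop (the hostname below is just an example), can help separate name resolution from the actual TCP/HTTP traffic; if the full-size, don't-fragment ping fails while the ordinary one succeeds, an MTU problem on the cheap wifi dongle becomes a likely suspect:

        rem name resolution and ordinary pings are already known to work
        nslookup www.cnn.com
        ping www.cnn.com

        rem full-size ping with "don't fragment" set; errors mentioning
        rem "Packet needs to be fragmented" point toward an MTU problem on the wireless link
        ping -f -l 1472 www.cnn.com

        rem raw TCP to port 80 (the telnet client may need to be enabled first);
        rem if this hangs as well, the problem sits below the browsers
        telnet www.cnn.com 80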

  • Working with Google Webmaster Tools

    - by com
    My first question is about Crawl errors in Google Webmaster Tools. Crawl errors is divided into a few sections, one of which is HTTP. I assume that all the broken links under HTTP were found by the crawler and are not the links from the sitemap. If they were found by scanning all the sitemap pages for links, why doesn't it mention the source page, like the sitemap section does with its Linked From column? And what is the meaning of Linked From there? I thought that if the section is named after the sitemap, all of its URLs should be taken from the sitemap, so why is there a Linked From at all?

    The second question: what is the best way to treat searching on the site? How come the search result pages are getting indexed? Because all of the search result pages are getting indexed, I have too many pages in Linked From. What is the right practice?

    Question three: in order to improve response time in WMT, can I redirect all of the crawler's requests to a designated free web server? Is this good practice?

    Question four: how should I treat the Google Analytics code (with the PageView and PageLoadTime parameters) in the case where a user requests a non-existing page; should I render the Google code or not? Right now I use the Google Analytics code in the common page template, so every page, including a non-existing page with an error message, contains the Google Analytics code, and it seems to have an influence on WMT.
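
    On the second question, a common way to keep internal search result pages from being crawled and piling up in the index is to block the search path for crawlers, either with a noindex meta tag on those pages or with a robots.txt rule; a minimal sketch, assuming the result pages live under a /search path (adjust to the real URL pattern):

        User-agent: *
        Disallow: /search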

  • How to deal with bad developers who hold back the project

    - by ILovePaperTowels
    We're at the end of a project, but we continue to run into issues because of a single piece of it, handled by a specific developer. Finally, we grabbed the latest code and started reviewing it. It's just horrendous code! Trying to step through it was difficult, even though it's a relatively simple workflow. The point of this question is how to deal with this situation. The developer has a hard time accepting criticism (constructive or otherwise) and feels he is more knowledgeable than others on the team who are, well, highly decorated, experienced and accomplished developers. It's difficult to even get into a topic about development because it turns into an "I know what I'm talking about and you're just wrong!" type of conversation. A request has already been put in to replace this developer, but it is a hard sell since devs are in short supply where we are and this is a corporation with a LOT of political BS. Management has been notified a few times but nothing is happening.

  • Default DocumentRoot in Apache does not work

    - by James Wise
    I have Apache 2.2 and PHP 5.3.15 on a single server. I configured virtual hosting and a default vhost:

        0_default_.conf     - goes to /var/www/default
        sub.domain.com.conf - goes to /var/www/sub.domain.com

    My question is, how can I set the default DocumentRoot to sub.domain.com permanently? That is, all requests should be redirected to sub.domain.com. I tried removing 0_default_.conf, but then viewing the page displays the PHP source code of sub.domain.com. Here are my configurations: http://pastebin.com/4e3awUJ4 I could create an index.php in /var/www/default that permanently redirects to the sub.domain.com site, but that is not a viable solution for me: if the IP address of sub.domain.com were not pointed at the server, users could not view that subdomain. I would appreciate it if anyone could share their knowledge and wisdom. Thanks. JamesW
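
    With name-based virtual hosts in Apache 2.2, the first vhost loaded for an address acts as the default for any request that matches no other ServerName, so one option is to drop the separate default entirely and let sub.domain.com's vhost be the first one loaded; a minimal sketch, with paths taken from the question:

        NameVirtualHost *:80

        # loaded first, so it also answers requests for unknown hostnames or the bare IP
        <VirtualHost *:80>
            ServerName sub.domain.com
            DocumentRoot /var/www/sub.domain.com
        </VirtualHost>

    Seeing raw PHP source usually means the vhost that actually answered the request does not apply the PHP handler, so whichever vhost ends up as the catch-all needs the same PHP configuration as the main site.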

  • IIS + PHP + Page with lots of images = Intermittent 403 errors

    - by samJL
    I am using an up-to-date Server 2008 R2 Datacenter, running IIS 7.5 and PHP 5.3.6/FastCGI. On PHP pages with lots of images (60+), some of the images fail to load. It is not always the same images: on each page refresh, an image that worked previously may not load, while an image that did not now does. Looking at the Net tab in Firebug reveals that the failing image requests are 403 errors. All of the images are located on the server in question, and the images directory has the correct permissions. I believe this problem is the result of a limit on requests. All of my attempts at researching this problem point to the maxConnections setting in IIS, yet mine is set at the highest/default of 4294967295 (maxBandwidth too). I am also running a ColdFusion site on the same IIS installation, and it does not suffer from 403s on pages with lots of images. I am left thinking that there is another connection limit (in PHP or FastCGI?) overriding the IIS connection limit. I don't see anything that looks like a request limit in php.ini; what am I missing? Any help would be appreciated, thank you.
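
    One thing worth checking before hunting for more limits: IIS writes a substatus code next to each 403 in its W3C logs, and that substatus usually narrows the failure down to a specific feature (request filtering, IP restrictions, and so on). A quick way to pull the failing entries, assuming the default log location and site ID 1:

        rem the sc-status / sc-substatus columns sit near the end of each W3C log line
        findstr /C:" 403 " C:\inetpub\logs\LogFiles\W3SVC1\u_ex*.log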

  • Caching domain user credentials on a local PC

    - by user630320
    We have a fully working domain in the UK, and around the world we have users who connect to our domain over VPN (Check Point). One of the users, in the USA, has a laptop which he has never logged on to before (it does cache user login details). Does anyone know how to cache the user's login information on this laptop? I have tried netdom trust to add the user to the laptop, but I was not able to do it. At the moment the user logs in with a local administrator account and then uses the VPN to log on to our domain, but when it comes to accessing files on the domain he gets access denied. When he tries to log in with his domain account he gets "There are currently no logon servers available to service the logon request." Does anyone know how to add the user?
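
    For what it's worth, Windows only caches domain credentials after at least one successful interactive domain logon while a domain controller is reachable, so the usual approach is to have the user log on once while the laptop can see a DC: either on the office network, or with the VPN already established before logon (Check Point's Secure Domain Logon feature, if your client offers it, exists for exactly this). After that, the cached credentials allow offline logons. A quick check that caching has not been disabled by policy on that laptop (the default keeps the last 10 logons):

        rem 0 disables caching entirely; the default is 10
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v CachedLogonsCount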

  • Crontab + .sh + php

    - by Kristaps Karlsons
    Hi. I'm trying to call a shell script every 5 minutes, which executes a PHP file as root.

        # crontab -l
        */5 * * * * /home/regularuser/call.sh

    Permissions:

        -rw-rw-rw- 1 root root 162 Jun 6 23:40 call.php
        -rwxr-xr-x 1 root root  66 Jun 7 01:20 call.sh

    call.sh contents:

        #!/bin/bash
        php -q /home/regularuser/call.php
        echo "request processed"

    My problem is that my PHP file doesn't get executed via crontab. However, if I call call.sh by hand, everything works perfectly. I'm new to crontab and shell scripting, so any advice/resources are welcome.
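
    A common gotcha worth ruling out here: cron runs call.sh with a very minimal PATH, so a bare "php" may not be found even though it works from an interactive shell. A sketch of call.sh with the interpreter path spelled out (/usr/bin/php is an assumption; check it with "which php"):

        #!/bin/bash
        # cron's PATH is minimal, so call the php binary by its absolute path
        /usr/bin/php -q /home/regularuser/call.php
        echo "request processed"

    Redirecting the job's output also makes the real error visible instead of letting it vanish, e.g. */5 * * * * /home/regularuser/call.sh >>/tmp/call.log 2>&1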

  • Laptop power button not working

    - by DyP
    I have a (seemingly rather exotic) Dell Latitude XT2 notebook running Lubuntu 12.04, fresh install. I have tried to get the power button to work as expected (opening lubuntu-logout, just as I think it was meant to out of the box), but with no success: the power button does nothing except a forced power-off on a long press. Here is some more information; please ask if anything else could help.

    The power button is recognized as a device: xinput lists "Power Button" as a slave keyboard at /dev/input/event1, and I can run a tool that listens to that device and invokes some action (successfully). But using .config/openbox/lubuntu-rc.xml to assign an action to XF86ShutDown or XF86PowerOff does nothing, while the same works fine for any key on the keyboard.

    xinput --test reports a keycode of 124 (key press, key release) when pressing and releasing the power button. xmodmap -pk lists 124 mapped to XF86PowerOff, which seems correct to me (by the way: is this the same as XF86ShutDown?). When assigning another keysym name to keycode 124, e.g. 'w', still nothing happens when pressing the power button (no window seems to receive a 'w'), whereas it works fine for any key on the keyboard.

    The output of xev for a press/release of the power button puzzles me:

        MappingNotify event, serial 41, synthetic NO, window 0x0,
            request MappingKeyboard, first_keycode 8, count 248
        FocusOut event, serial 41, synthetic NO, window 0x2a00001,
            mode NotifyGrab, detail NotifyAncestor
        FocusIn event, serial 41, synthetic NO, window 0x2a00001,
            mode NotifyUngrab, detail NotifyAncestor
        KeymapNotify event, serial 41, synthetic NO, window 0x0,
            keys: 93 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

    (Instead of 93, I sometimes get different values.) I might have missed something very basic, since I'm not very familiar with either openbox/lubuntu or the X input system. By the way, the same goes for three other special buttons on my laptop.

  • Possible to redirect from HTTPS to HTTP behind load-balancer?

    - by Derek Hunziker
    I have a basic ASP.NET application that sits behind an F5 load balancer. Incoming SSL requests (over HTTPS) terminate at the load balancer, and all internal communication between the load balancer and my application servers is unsecured (over HTTP). When an unsecured request comes in, my app is able to use Response.Redirect("https://...") to redirect to a secure URL with no problems. However, the other direction appears to be impossible: I cannot redirect from HTTPS to HTTP using Response.Redirect() from my application. The URL remains HTTPS for the client and does not change. Could the F5 be preventing the redirect from ever reaching the client? Is there any special configuration necessary to let this happen?

  • WebsitePanel 2 totally NOT working on Windows Server 2012 on Azure

    - by Carmine Giangregorio
    I'm having a lot of trouble installing WebSitePanel on an Azure virtual machine running Windows Server 2012. I followed the steps in http://www.websitepanel.net/documentation/deployment-guide/server-configuration/preparing-windows-server-2008-r2-for-websitepanel-installation/ and installed everything I needed. Then I installed the WebSitePanel Standalone Server package with the installer. I opened the endpoint for port 9002 on Windows Azure and pointed my browser to myhostname.cloudapp.net (note: in Azure you don't have a static IP address; instead you have a hostname like [hostname].cloudapp.net). Loading myhostname.cloudapp.net:9002 fails, and every browser shows something like "Unable to load page". Notice: if I try to load the WebSitePanel portal directly on the server, I get an HTTP 400 Bad Request error. How come? IIS works perfectly on the server; in fact the default website runs without problems on port 80.

  • nginx: URL rewrites and performance

    - by j0nes
    I have a website where I need to change the URL structure. The old URLs look like /olddir/part1_de.htm; the new ones will look like /newdir/sub/category/anotherpage.htm. There are a lot of URL rewrites I need to do; I expect about 500 distinct rewrites in the end. As my website gets quite a lot of traffic, my main concern at the moment is performance. My questions are: I assume that for each request the rewrites block will be parsed and the regexes will be evaluated. Am I right? Will there be a performance penalty if I use these rewrites? Can nginx handle this? Are there any "best practices" to follow when doing a lot of rewrites?
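
    For what it's worth, a few hundred one-to-one rewrites are often expressed more cheaply as a map lookup (a hash lookup evaluated once per request) plus a single return, rather than a long chain of regex rewrite rules; a sketch, where only the first pair of paths comes from the question and the rest of the mapping is assumed:

        # in the http {} context
        map $uri $new_uri {
            default               "";
            /olddir/part1_de.htm  /newdir/sub/category/anotherpage.htm;
            # ...the remaining entries, or: include /etc/nginx/redirects.map;
        }

        server {
            # ...
            if ($new_uri) {
                return 301 $new_uri;
            }
        }

    That said, around 500 rewrite rules are still well within what nginx handles comfortably; the map form mainly buys maintainability and a predictable per-request cost.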

  • IOException opening RFCOMM on openSUSE

    - by Chief A-G
    I have a permissions problem on openSUSE with Bluetooth /dev/rfcomm0. I've written a small test application which opens /dev/rfcomm0, sends a request message, and retrieves a response message. At first my problem was a permission denied error on /dev/rfcomm0, until I added the user account to the dialup users group. Now I get a System.IO.IOException "Interrupted system call" error whenever I run the app. I can sudo my application and it runs fine. I'm not sure how and which permissions to set to get this to work under my normal user account.

  • mod_proxy failing as forward proxy in simple configuration

    - by Stabledog
    (On Mac OS X 10.6, Apache 2.2.11.) Following the oft-repeated googled advice, I've set up mod_proxy on my Mac to act as a forward proxy for HTTP requests. My httpd.conf contains this:

        <IfModule mod_proxy>
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Allow from all
            </Proxy>
        </IfModule>

    (Yes, I realize that's not ideal, but I'm behind a firewall trying to figure out why the thing doesn't work at all.) So, when I point my browser's proxy settings to the local server (ip_address:80), here's what happens: I browse to http://www.cnn.com. I see via a sniffer that the request is sent to Apache on the Mac. Apache responds with its default home page ("It works!" is all this page says). So... Apache is not doing as expected -- it is not forwarding my browser's request out onto the Internet to cnn. Nothing in the log file indicates an error or problem, and Apache returns a 200 header to the browser. Clearly there is some very basic configuration step I'm not understanding... but what?
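
    Two things worth verifying, as guesses, since the directives above look plausible on their own: first, that the plain-HTTP proxy submodule is actually loaded alongside mod_proxy (mod_proxy by itself cannot forward http:// URLs without mod_proxy_http); second, in the existing sniffer capture, that the browser really sends proxy-style requests with an absolute URL in the request line (GET http://www.cnn.com/ HTTP/1.1) rather than an ordinary GET / with a Host header, which Apache would happily answer with its default page.

        # list the loaded modules; both proxy_module and proxy_http_module should appear
        httpd -M | grep -i proxy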

  • Proxy - Some general questions

    - by user68802
    Is it possible to accomplish the following scenario with a proxy server? We have one internet-facing server that we want to put behind a proxy for various reasons, and we want everything to work as before: when clients make a request, all connections should be forwarded to the internal server, which sends the response back through the proxy. We want to be able to switch the proxy to show a maintenance page whenever we are doing maintenance, and switch it back to forwarding traffic when we are done. We would also like to keep forwarding traffic for users who are already using the sites, but show the maintenance page to all new users for a while before showing it to everyone, in order to give users some time to finish their work before kicking them out.

  • .htaccess redirect root directory and subpages with parameters

    - by wali
    I am having difficulty trying to redirect a root directory while at the same time redirecting pages in a subdirectory to a different URL. For example, http://test.example.com/olddir/sub/page.php?v=one should go to http://test.example.com/new/one, while any request to the root of the olddir folder should also be redirected. I have tried

        RewriteCond %{QUERY_STRING} v=one
        RewriteRule ^/olddir/sub/page.php /new/? [R=301]

    and

        RedirectMatch /oldir "test.example.com"
        RedirectMatch /olddir/sub/page.php?v=one "test.example.com/new/one"

    Any help at this point will be extremely appreciated... Thanks!
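
    A sketch of one way to write this, assuming the .htaccess lives in the document root. The usual catches are that per-directory rewrite patterns are matched without the leading slash, and that query strings are only visible to RewriteCond, never to the RewriteRule pattern:

        RewriteEngine On

        # /olddir/sub/page.php?v=one  ->  /new/one  (the trailing ? drops the old query string)
        RewriteCond %{QUERY_STRING} (^|&)v=one(&|$)
        RewriteRule ^olddir/sub/page\.php$ /new/one? [R=301,L]

        # any other request under /olddir -> site root (adjust the target as needed)
        RewriteRule ^olddir(/.*)?$ / [R=301,L]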

  • How should I group these variables?

    - by stariz77
    I have a shape that will be defined by:

        char s_type;
        char color;
        double height;
        double width;

    These variables are scanned in from a request string sent to my server and passed into my printing function, which then prints out the shape. Currently they are just local variables sitting in my main(). I was wondering whether there would be any advantage in creating a struct containing these variables and passing the struct to my printing function, or how else I might improve my program's structure/style. Would passing a struct by reference have any kind of performance benefit if there were many requests and therefore many printing function calls?

        printer(char st, char cr, double ht, double wd);

        int main() {
            // Other main functionality.
            char s_type;
            char color;
            double height;
            double width;

            sscanf(serv_req, "GET /%c/%c/%lf/%lf", &s_type, &color, &height, &width);
            printer(s_type, color, height, width);

            // Other main functionality.
            return 0;
        }

    It seemed "neater" if I had a struct or something that didn't leave me with declarations in the middle of everything else going on in main. I'm interested in structure/style as well as performance. EDIT: didn't mean to put printer declaration inside main.
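
    A minimal sketch of the struct variant (the field names come from the question; serv_req and the example request line are made up here so the snippet compiles on its own). Passing a pointer to the struct hands the function one address instead of four copied arguments; with four small fields the performance difference is negligible either way, so the real gain is tidier signatures and a single obvious place to add fields later:

        /* shape.c -- illustrative only */
        #include <stdio.h>

        typedef struct {
            char   s_type;
            char   color;
            double height;
            double width;
        } Shape;

        /* takes a const pointer: no copy is made, and the callee cannot modify the shape */
        static void printer(const Shape *s)
        {
            printf("shape %c, color %c, %.2f x %.2f\n",
                   s->s_type, s->color, s->width, s->height);
        }

        int main(void)
        {
            const char *serv_req = "GET /r/b/2.50/4.00";   /* example request line */
            Shape sh;

            if (sscanf(serv_req, "GET /%c/%c/%lf/%lf",
                       &sh.s_type, &sh.color, &sh.height, &sh.width) == 4)
                printer(&sh);

            return 0;
        }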

  • eCryptfs on Ubuntu Server: how to keep the home directory mounted without an ssh session?

    - by Bebeoix
    I have a daemon program that needs to read a file saved somewhere in my home folder. But every time I close my ssh connection, the daemon can't read the file, because it appears that eCryptfs unmounts the home directory. Maybe there is an option to force eCryptfs to keep the home mounted even without an ssh connection? I didn't find it. Thanks. PS: I know this thread, http://askubuntu.com/questions/165608/why-is-ecryptfs-only-mounting-private-home-directory-over-ssh, but it is not the proper way to deal with the request.

  • Any recommendations on a NAS for a home-super-user?

    - by marc_s
    Can anyone recommend a good NAS for use in a home-server environment? I would require at least 2, preferably 4 disks, and I am most interested in good to excellent throughput for file-server and backup purposes; I don't need any of the fancy media-streaming or -sharing features, as that's not of interest to me. For a 4-or-more-disk solution, support for the various RAID levels (0, 1, 1+0, 5) would be a plus, especially if supported in hardware (rather than just a software emulation). I just need a place to put my collection of data, ISO images, and so forth, and since several external disks (self-built and off-the-shelf) have failed so far, I'm looking into a more reliable solution. Marc

  • When using ssh with priv/pub keys, how to connect to the destination as a different user than on the origin machine?

    - by lpacheco
    I need to connect to hostB as user2 from hostA, where I'm connected as user1. I've run ssh-keygen -t rsa on hostA and copied the generated public key from ~/.ssh/id_rsa.pub to the ~/.ssh/authorized_keys of user2 on hostB. Then I tried to connect from hostA to hostB using the command:

        $user1@hostA> ssh user2@hostB

    I still get asked for a password:

        user2@hostB's password:

    If I try to connect using the same user on both hosts, it works correctly:

        $user1@hostA> ssh user1@hostB
        Enter passphrase for key '/home/user1/.ssh/id_rsa':

    What am I missing?
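
    Since the same key works for user1@hostB, the key itself is fine, and sshd on hostB is most likely rejecting it for user2; it does that silently when the target account's ~/.ssh or authorized_keys permissions are too loose, or when the key line got mangled while copying. A sketch of the usual checks (run on hostB as user2), plus a verbose client run to see how the key is being handled:

        # sshd ignores authorized_keys files that are group- or world-writable
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # from hostA: -v shows which keys are offered and what the server does with them;
        # the server-side reason usually shows up in hostB's auth.log or secure log
        ssh -v user2@hostB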

  • What can be the reasons for localhost responding super slowly the first time a page is requested?

    - by frequent
    Still learning my server ways with Apache 2.2/MySQL 5.2/ColdFusion 8 on localhost (running Windows XP). What I notice is that every time I request a page for the first time after firing up ColdFusion and Apache, localhost takes forever (1+ minute) to respond and send the initial page. After that, everything seems to load in normal time. I'm using require.js to pull in jQuery, jQuery Mobile and two other plugins on first page load, but loading the same page from a real server works normally, so I'm also ruling that out as a probable cause. Since it happens regardless of which page I load first, it should not be page-related, so I'm looking for other clues as to why this could happen. Thanks for some thoughts!

  • Some $_SERVER parameters missing when accessing PHP script via Cron

    - by Jakobud
    I have a script that I need to run with PHP via cron. The original author of the script made a lot of use of certain $_SERVER parameters (like REQUEST_URI). But it appears that certain variables don't exist when running PHP via the command line or via cron. For example, there is no request URI, so it makes sense that the REQUEST_URI parameter wouldn't be available. Is there any way around this, other than completely rewriting the script to avoid using special $_SERVER parameters that aren't universally available?
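
    Short of a rewrite, one workaround is a small shim at the top of the script (or in a CLI wrapper that includes it) which fills in the $_SERVER entries the CLI SAPI never sets; the values below are placeholders, so use whatever the script actually expects. Alternatively, having cron fetch the page through the web server with wget or curl sidesteps the problem entirely, since the request then has a real URI.

        <?php
        // populate the request-style values that are missing under the CLI SAPI
        // (placeholder values -- adjust them to whatever the script relies on)
        if (PHP_SAPI === 'cli') {
            if (!isset($_SERVER['REQUEST_URI'])) { $_SERVER['REQUEST_URI'] = '/cron/job'; }
            if (!isset($_SERVER['HTTP_HOST']))   { $_SERVER['HTTP_HOST']   = 'www.example.com'; }
            if (!isset($_SERVER['REMOTE_ADDR'])) { $_SERVER['REMOTE_ADDR'] = '127.0.0.1'; }
        }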

  • Sun Directory Server 5.2 performance

    - by tmow
    Hi all, I'm using logconv.pl (provided by Sun) to measure performance on my server. These two metrics are worrying me a bit:

        Binds:   192164
        Unbinds: 111569

    The difference between the two is quite big; how can I determine which connections never sent an unbind? As Lodovic stated, "Many applications just close the connections without sending an Unbind request," which by itself could explain the difference. But logconv.pl doesn't show details about the missing unbinds. Do you know any other tools, or can you suggest some queries or anything else that could help me find the root cause? And do you think performance would improve by fixing the issue anyway?

  • Random timeout now and then

    - by KenavR
    Maybe this is too generic a question, but since we have had this issue for quite a while now, I'll give it a shot. We have some applications which use HTTP for the connection between the client (website or fat client) and the server. The computer that runs these applications is in a network behind a firewall and a proxy; the server isn't inside the same network. The problem is that every now and then the HTTPS request times out and, depending on the client, the application "hangs" or does some other funky stuff. The problem is definitely inside our network, because if I try the applications outside our network they work fine. Can you give me a hint as to where I can most likely find the problem?

  • Updating, etc., automatically

    - by Steve D
    Is there a way to set up Ubuntu 12.04 (or earlier versions) so that all recommended updates are installed automatically, say once a week? When I say automatically, I mean no password entry or user intervention required. This sounds like a stupid request, so let me tell you why I'm asking. My grandfather knows nothing about computers; he uses his solely to read Yahoo! mail. I want to get rid of his clunky, spyware-ridden Windows XP and install Ubuntu. I want to set it up so that when he turns the computer on, after a couple of minutes, voila!, Yahoo! mail, already signed in, ready to go. The problem is I don't want to have to go over there every week or so to make sure everything is up to date, that he hasn't accidentally installed any spyware, etc. So can this be done? Is this the best way to set things up for my grandfather? Are there other things I should be worried about when it comes to keeping things hassle-free for him? Please don't post anything like "why not teach him how to... blah blah blah". My grandfather is 80 years old and has made it clear email is the only thing he will ever use a computer for! Thanks!
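
    This is roughly what the unattended-upgrades package is for: once enabled, apt checks for and installs updates on its own, with no passwords or interaction. A sketch of the setup on 12.04; by default it only pulls in security updates, and the "-updates" pocket can be switched on in /etc/apt/apt.conf.d/50unattended-upgrades if you want all recommended updates as well:

        sudo apt-get install unattended-upgrades
        sudo dpkg-reconfigure -plow unattended-upgrades

        # the reconfigure step writes /etc/apt/apt.conf.d/20auto-upgrades containing:
        #   APT::Periodic::Update-Package-Lists "1";
        #   APT::Periodic::Unattended-Upgrade "1";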

  • General purpose ticketing/tech support system [closed]

    - by crazybyte
    Possible Duplicate: What’s your favorite ticketing system?

    I was wondering if somebody could recommend a very user-friendly or simple general-purpose ticketing/tech support system. I need something that is web based, preferably open-source/free software, implemented in PHP, Ruby, Ruby on Rails, or Java (as the back end), with MySQL or PostgreSQL as the database engine. I need something that is not development-management or project-management oriented like Eventum or similar (random example): something a user can connect to, open a tech support request, and follow it until it is solved or dropped. I need it to be open source so that I can modify or extend it if the need arises. I tried a number of the available systems and found that osTicket or eTicket is close to what I need, but the code is somewhat flaky and some of the features work badly or behave strangely. Any thoughts/advice on where to find something similar? Thanks!
