Search Results

Search found 5793 results on 232 pages for 'requests'.


  • Cannot connect to IIS 7 from localhost

    - by Wout
    I cannot connect to the local IIS 7 using "http://localhost" in IE; "http://127.0.0.1" doesn't work either. The strange thing is that if I add a binding on e.g. port 81, I can reach "http://localhost:81". Turning off the firewall on the local machine doesn't help either. The site is reachable from the internet, but local requests don't seem to hit IIS at all (no entries in the IIS log files). IIS is hosted on Windows Server 2008 R2 behind a hardware firewall device. Note that I'm a programmer, not a network administrator, so I'm having a hard time troubleshooting this.
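
    A quick way to check whether http.sys is even listening on the loopback address (a diagnostic sketch; paths assume a default IIS 7 install):

        netsh http show iplisten
        netstat -ano | findstr ":80"
        %windir%\system32\inetsrv\appcmd.exe list sites

    If the IP listen list contains only the public address, requests to 127.0.0.1 never reach IIS at all, which would match the empty log files; netsh http add iplisten ipaddress=127.0.0.1 would then be worth trying.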

    Read the article

  • Web site saturates hard disk I/O; how do I prevent it?

    - by Taras Voynarovsky
    The situation: I have a server hosting 2-3 projects. Not long ago the server started hanging: we could not connect to it by ssh, and already-connected clients had to wait 20 minutes for top to give results. Earlier today I managed to execute gstat while it was in this state and saw that da0, da0s1 and da0s1f stay at 100% busy. I don't quite know what those IDs mean, but I understand that some process is killing the disk by bombarding it with requests. I'm asking for suggestions; I don't know how to find the culprit and can't prevent this. The server runs FreeBSD.
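
    A starting point for finding the culprit on FreeBSD (a sketch; both commands ship with the base system):

        # per-process disk I/O, sorted by total operations
        top -m io -o total
        # watch only the busy disk and its slices
        gstat -f '^da0'

    For reference, da0 is the first SCSI/SATA disk, da0s1 is slice 1 on it, and da0s1f is partition f inside that slice, so all three saturating together points at one very busy filesystem on that disk.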

    Read the article

  • Is SQL Azure useful without Windows Azure?

    - by KallDrexx
    I am currently doing some research to get preliminary IT cost projections for a project, and I was looking at Azure. Since this is a startup, I do not want to handle IT operations myself and instead am looking at having everything professionally hosted. I am looking at Azure because of the SLA assurances, the disaster recovery operations already in place, and the reliability. Playing with some numbers, I am wondering if hosting my database on SQL Azure is an option while hosting the actual web pages on another host, until I need the frontend scalability of Azure. Is this actually feasible, or will the latency of requests between the web host and Azure be too high, so that I would be better off hosting both on the same service?

    Read the article

  • Sonicwall Global VPN Client fails to connect, despite successful connections from other computers behind the same router

    - by JesperE
    I've recently been unable to connect to our Sonicwall VPN at work. The Sonicwall client is stuck on "connecting", and the log says "The peer is not responding to phase1 ISAKMP requests". The weird thing is that this is not an issue with my own PC, only my work laptop (Lenovo W530 running Windows 7 64-bit), and this has only appeared recently. This ought to rule out any problems with my ISP blocking VPN, or issues with the router itself. My company's IT department says that they cannot see anything in their logs when I'm trying to connect. My conclusion is that something is wrong on the laptop itself. Disabling the firewall does not help. Can the VPN connection be blocked in other ways? What should I be looking for? EDIT: This problem has "magically" disappeared, without any changes done in my network. I can only assume that this was caused by some network glitch with my ISP.

    Read the article

  • Create room mailbox in Exchange 2007 - cannot view calendar

    - by David Neale
    I'm an application developer and I'm trying to play around with Exchange in order to integrate a room booking system with it. I've created a room mailbox and set it to auto-accept appointment requests. When creating an appointment as a standard user I can add the room as a resource and its availability is displayed. However, I cannot add it as a shared calendar in Outlook 2003 ("Unable to display the folder. The Calendar folder could not be found"), nor can I retrieve the calendar folder using Exchange Web Services (again, the folder could not be found). I've also created an appointment via Exchange Web Services with a room as a resource. The resource was successfully booked (as confirmed by opening it as the room's delegate), but it does not appear in the meeting as viewed by any of the attendees. Is there anything further I need to do to share this calendar? How do most organisations set up Exchange with regard to rooms?

    Read the article

  • iptables REDIRECT + OpenVPN problem

    - by Emilio
    I want to redirect connections to port 22 to the port OpenVPN is bound to, 60001. OpenVPN is running on the server on port 60001:

        server:~$ sudo netstat -apn | grep openvpn
        udp   0   0 67.xx.xx.137:60001   0.0.0.0:*   4301/openvpn

    On the server I redirect port 22 to 60001:

        server:~$ sudo iptables -F -t nat
        server:~$ sudo iptables -A PREROUTING -t nat -p udp --dport 22 -j REDIRECT --to-ports 60001

    Then I start the OpenVPN client (openvpn.conf is correct; it works if the remote port 22 is replaced with 60001):

        client:~$ ./openvpn openvpn.conf
        Tue Apr 27 00:42:50 2010 OpenVPN 2.1.1 i686-pc-linux-gnu [SSL] [EPOLL] built on Mar 23 2010
        Tue Apr 27 00:42:50 2010 UDPv4 link local (bound): [undef]:1194
        Tue Apr 27 00:42:50 2010 UDPv4 link remote: 67.xx.xx.137:22
        Tue Apr 27 00:42:52 2010 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
        Tue Apr 27 00:42:55 2010 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
        ...

    It doesn't connect. iptables shows requests from the client to the server but no answers. What's wrong with it?
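
    One way to see where the packets stop (a diagnostic sketch; eth0 is an assumed interface name):

        # per-rule packet/byte counters: does the REDIRECT rule match at all?
        sudo iptables -t nat -L PREROUTING -n -v
        # watch both the original and the redirected port on the wire
        sudo tcpdump -ni eth0 'udp port 22 or udp port 60001'

    If the rule counters climb but tcpdump shows ICMP port-unreachable replies coming back from the server (which is what the client reports as ECONNREFUSED), the datagrams are being redirected to a closed socket rather than to OpenVPN, which narrows the problem to the redirect target.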

    Read the article

  • Has EC2 made self-hosting possible for 'amateur' sysadmins?

    - by Blankman
    I'm a developer, and it seems EC2 has made it possible for an amateur sysadmin like me to set up and maintain a fairly large set of servers. I don't mean to undermine real sysadmins, as I know their value, but what I am getting at is that someone like me can set up and maintain a cluster of servers (front-end web servers plus some db servers) using tools like EC2 and Capistrano with the help of Google. This isn't something I would do long term, but as a one-man startup operation I think I can pull it off until business takes off and I can hire this important role out. With EC2 I get my firewall, so I basically open up port 80 on my public-facing server, which runs HAProxy and routes requests to my cluster of servers. Of course I am simplifying the setup, but I just want a feel for what you think of my perception. My application is a web application running Ruby on Rails (Passenger) and talking to MySQL or PostgreSQL.
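
    For what it's worth, the HAProxy side of such a setup is small; a minimal sketch (the backend addresses are placeholders):

        frontend www
            bind *:80
            default_backend rails
        backend rails
            balance roundrobin
            option httpchk GET /
            server web1 10.0.0.11:80 check
            server web2 10.0.0.12:80 check

    The check/httpchk lines matter for a one-man operation: a backend that stops answering is pulled out of rotation automatically instead of paging you.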

    Read the article

  • Examples of applications where messages are pushed from a server and displayed on clients immediately

    - by James Hay
    I'm trying to find examples where data is sent from a server and pushed to one or more clients, which are updated immediately; i.e. the clients don't poll for updates. It doesn't matter whether we're talking mobile, desktop or whatever. An even better example would be one where there are multiple recipients for the same message. It doesn't matter what the data is or the context it's used in, only the immediacy of receiving it. I was thinking there might be examples in finance and the stock markets, but I haven't been able to find any through googling. IM clients are a great example of this and are on my list of one ;) If anyone works on applications of this nature or knows of particular implementations, can you give me a quick rundown of the use case and, if it's commercial software, the name of the software? This is all for research purposes, so it doesn't have to be particularly detailed. If anyone can help, thanks.

    Read the article

  • nginx + IIS + GET

    - by Eralde
    I have nginx on PC "A" and IIS with ASP.NET on PC "B". nginx is configured like this:

        location ~ ((Web|Script)Resource.*)$ {
            proxy_pass "B"/$1;
            proxy_redirect off;
            proxy_set_header REMOTE_ADDR $remote_addr;
            proxy_set_header REQUEST_URI $request_uri;
            proxy_set_header HTTP_REFERER $http_referer;
            proxy_set_header QUERY_STRING $query_string;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    But requests to "B"/WebScript?a=b&c=d don't deliver the GET data (a=b&c=d) to the IIS side. Could anyone help with this?
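
    A likely cause: when proxy_pass is given a URI built from variables (here $1), nginx sends that URI literally and does not append the query string by itself. A sketch of the fix (the upstream host stands in for "B"):

        location ~ ((Web|Script)Resource.*)$ {
            # append the query string explicitly when proxying via a captured variable
            proxy_pass http://B/$1$is_args$args;
        }

    $is_args expands to "?" when a query string is present and to nothing otherwise, so requests without parameters keep working.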

    Read the article

  • Problem running application on a Windows Server 2008 instance using Amazon EC2 and WAMP

    - by Siddharth
    I have a basic (small type) Windows Server 2008 instance running on Amazon EC2. I've installed WAMP server on it and loaded my application, using Remote Desktop Connection from my Windows machine. I'm able to run my application locally on the instance, but when I try to access it from my browser using the public DNS name Amazon assigned to it, I'm unable to do so. My instance has a security group configured to allow HTTP, HTTPS, RDP, SSH and SMTP requests on the appropriate ports; in fact I have the exact same security group as the one used in this blog: http://howto.opml.org/dave/ec2/ . I did almost everything the same as the blog, except for using a different Amazon Machine Image. This is my first time using Amazon EC2, and I can't figure out what I'm doing wrong here.
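
    Two things worth checking beyond the security group, assuming WAMP defaults (a sketch, not a definitive diagnosis): the instance's own Windows Firewall blocks port 80 independently of EC2, and WampServer ships with Apache restricted to local requests.

        REM open port 80 in the instance's Windows Firewall
        netsh advfirewall firewall add rule name="Apache HTTP" dir=in action=allow protocol=TCP localport=80

    In httpd.conf, the relevant <Directory> sections should end up with "Allow from all" rather than the local-only default (in WampServer this is the "Put Online" menu option).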

    Read the article

  • What are the best lighttpd settings for a server with 8 GB RAM?

    - by user39639
    I have a system with 8 GB RAM and 8 x Xeon 3361 CPUs. What are the best settings for handling many simultaneous connections, and what is the maximum? Are settings like these correct?

        server.max-keep-alive-requests = 0
        server.max-keep-alive-idle = 10
        server.max-read-idle = 60
        server.max-write-idle = 60
        server.event-handler = "linux-sysepoll"
        server.max-fds = 2048
        fastcgi.server = ( ".php" =>
            ( "localhost" =>
                ( "socket" => "/tmp/php-fastcgi.socket",
                  "bin-path" => "/usr/bin/php-cgi",
                  "max-procs" => 20,
                  "bin-environment" => (
                      "PHP_FCGI_CHILDREN" => "40",
                      "PHP_FCGI_MAX_REQUESTS" => "800"
                  ),
                  "broken-scriptfilename" => "enable"
                )
            )
        )

    Please help me!

    Read the article

  • Apache Balancing by source IP

    - by Daniel
    I am using Apache's proxy balancer to balance one subdomain (e.g. subdomain.domain.com) across an application located on 2 servers. Here is an extract from my Apache configuration file:

        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>

        <Proxy balancer://cluster1>
            BalancerMember http://server1:28081 route=w1
            BalancerMember http://server2:28082 route=w2
        </Proxy>

        ProxyPass /path balancer://cluster1/path
        ProxyPassReverse /path balancer://cluster1/path

    My question is whether it's possible to decide, based on the source IP address, which BalancerMember handles a request; e.g. requests from 1.2.3.4 always go to member 1?
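
    One hedged approach is to peel off the special client with mod_rewrite's proxy flag before the request reaches the balancer (the address and member here are examples):

        RewriteEngine On
        # clients coming from 1.2.3.4 always go to member 1
        RewriteCond %{REMOTE_ADDR} ^1\.2\.3\.4$
        RewriteRule ^/path/(.*)$ http://server1:28081/path/$1 [P,L]

    In practice, rewrite rules in the main server context are applied before ProxyPass for the same path, so the existing balancer lines can stay as they are and everyone else still goes through the balancer.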

    Read the article

  • Puppet: propagate a variable from node to ERB template?

    - by picca
    Is it possible to declare a variable in a node and then propagate it all the way down to an ERB template? Example:

        node basenode {
            $myvar = "bar" # default
            include myclass
        }

        node mynode extends basenode {
            $myvar = "foo"
        }

        class myclass {
            file { "/root/myfile":
                content => template("myclass/mytemplate.erb"),
                ensure  => present,
            }
        }

    Source of mytemplate.erb:

        myvar has value: <%= myvar %>

    I know my example might be convoluted, but I'm trying to distribute a file to (almost) all my nodes, and I want its content to vary depending on the node that requests the file. The $myvar = "bar" statement should be the default when a node does not override the value. Is there a solution to my problem? I'm using Puppet 0.24.5.

    Read the article

  • How to reach a Global Scope IPv6 host?

    - by Vaibhav Bajpai
    I have set up DNS64+NAT64 on a machine with 2 interfaces:

        eth0: public IPv4 address (connected to the outside world)
        eth1: global-scope IPv6 address (2001::/64)

    I can successfully ping6 google.com from this machine. Now I want to connect my MacBook to this machine as an IPv6-only client and perform some tests, but the MacBook has no IPv6 address assigned. How should I manually assign one so that all my IPv6 traffic (I will disable IPv4 on the MacBook) is routed to this machine, where DNS64+NAT64 will translate it into IPv4 requests for the outside world?
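
    Manually configuring the MacBook might look like this (a sketch; the addresses are examples from the 2001::/64 prefix, and en0 is assumed to be the active interface):

        # give the MacBook an address on the same /64 as eth1
        sudo ifconfig en0 inet6 2001::10 prefixlen 64
        # send all IPv6 traffic to the DNS64+NAT64 box
        sudo route -n add -inet6 default 2001::1

    The MacBook's DNS server must also point at the DNS64 resolver (System Preferences > Network, or /etc/resolv.conf); otherwise it will never receive the synthesized AAAA records that make the NAT64 path usable.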

    Read the article

  • Tuning MySQL to consume less memory

    - by Alex
    I have a VM with 2 GB RAM (full specs), and I am setting up a site with one table in particular containing over a million records. There is little or no usage of this particular database (perhaps once or twice a day), but simply running MySQL grinds the whole server to a halt. I've looked through the top results; nothing is really denting the CPU, but memory seems to be the issue. The site isn't even live or taking requests yet. The memory situation looks like this:

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:          2006       1880        126          0          3         53
        -/+ buffers/cache:       1823        183
        Swap:         2047        345       1702

    Are there any good pointers for tuning MySQL so it stops hogging the system memory? Thanks very much. EDIT (requested by 8bit): http://tny.cz/b41a0b12
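
    A hedged starting point for a 2 GB box with a mostly idle database (my.cnf values here are conservative guesses meant to cap memory, not tuned numbers):

        [mysqld]
        key_buffer_size         = 16M
        innodb_buffer_pool_size = 128M
        max_connections         = 30
        tmp_table_size          = 16M
        max_heap_table_size     = 16M

    max_connections matters more than it looks: several per-connection buffers are multiplied by it, so a small cap keeps the worst-case footprint bounded.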

    Read the article

  • Proxy service like Apache httpd

    - by Aptos
    Currently I'm trying to simulate my app as distributed servers, so I run them on localhost:9000 and localhost:9001. I tried using the Apache load balancer, but it is really hard to configure on a Mac. My idea is that the second server, localhost:9001, is kept idle, and requests are redirected to it only when the first server is down. Is there a good free program that can do that (other than Apache httpd)? An extra question: my application is written in Java and maintains an in-memory object. Is there any service that can synchronize that object between the 2 servers so each keeps up to date with the state of the other (the second one takes over the state of the first)? Thank you very much.
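
    nginx can do exactly this active/passive split with a backup upstream; a minimal sketch (port 8080 as the public listener is an assumption):

        upstream app {
            server 127.0.0.1:9000;
            server 127.0.0.1:9001 backup;  # only receives traffic when :9000 fails
        }
        server {
            listen 8080;
            location / {
                proxy_pass http://app;
            }
        }

    For keeping the in-memory Java object in sync between the two instances, a replicated cache such as Hazelcast or Terracotta is the usual route; the proxy itself only moves traffic.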

    Read the article

  • Squid logs flooded when interception and authentication are enabled together

    - by Horace
    I have done some hefty googling and I can't seem to find a solution to the issue I am currently experiencing. Here is a sample of my Squid configuration:

        #
        # DIGEST Auth
        #
        auth_param digest program /usr/sbin/digest_file_auth /etc/squid/digpass
        auth_param digest children 8
        auth_param digest realm LHPROJECTS.LAN Network Proxy
        auth_param digest nonce_garbage_interval 10 minutes
        auth_param digest nonce_max_duration 45 minutes
        auth_param digest nonce_max_count 100
        auth_param digest nonce_strictness on

        # Squid normally listens to port 3128
        http_port 192.168.10.2:3128 transparent
        https_port 192.168.10.2:3128 intercept
        http_port 192.168.10.2:3130

    As noted above, I have three ports defined: two of them transparent/intercept and one regular HTTP port (which I use for authentication). This works rather well, but my logs are flooded with the entry "authentication not applicable on intercepted requests" whenever a transparent connection is made. So far I can't find any documentation describing how to suppress these messages.

    Read the article

  • How to install an LDAP proxy

    - by Jean-Claude
    I have to install an LDAP proxy on a compute cluster front-end node. The idea is to keep the compute nodes from making too many requests directly against the campus LDAP server. How can I set this up to work with the school's LDAP? The front end runs RHEL 6.2. I found that I have to install the LDAP server and configure it as a proxy, but all I can find are examples of /etc/openldap/slapd.conf configuration, and after testing different configurations I got no results. Furthermore, according to the RHEL 6 Deployment Guide, this config file is obsolete: "OpenLDAP no longer reads its configuration from the /etc/openldap/slapd.conf file. Instead, it uses a configuration database located in the /etc/openldap/slapd.d/ directory." Any help is welcome. Thank you.
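
    For reference, the proxying itself is done by OpenLDAP's back-ldap backend; a minimal slapd.conf-style sketch (the suffix and URI are placeholders for the campus directory):

        database    ldap
        suffix      "dc=campus,dc=edu"
        uri         "ldap://ldap.campus.edu/"

    On RHEL 6 this can then be converted into the slapd.d configuration database with slaptest -f slapd.conf -F /etc/openldap/slapd.d/ so that slapd actually reads it.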

    Read the article

  • DHCP server change fails after successful import to new machine

    - by Tathagata
    I transferred the configuration of a DHCP server from one machine to another, both running Windows Server 2003 R2, following http://support.microsoft.com/kb/325473. The new server has a statically configured IP (outside the scope), like the old one. I stopped the service on the old server and started it on the new one (authorized, too), but when I run ipconfig /renew from a client, its network interface fails with all 0.0.0.0 (or 169...*) addresses. I read somewhere that I need to reconcile the scope to sync the new registry values (I'll try this tomorrow). What other troubleshooting steps can I take besides these (which didn't help)? Things work fine when the old server is brought back and the new one is taken down. The new server showed that no requests had come in for it to answer with offers.

    Read the article

  • Difference between key_buffer settings, and recommendations

    - by Typeoneerror
    I'm looking to give MySQL a bit more memory on a Linode VPS where I run a small Facebook canvas app written in PHP with MySQL. I'm not super familiar with MySQL optimization, so I'm hoping for a simple answer. I think I want to increase the key_buffer size (the default is 16M) to something like 32M to start, but I'm not sure if I need to tweak anything else as well. All I've done so far is increase query_cache_size from 16M to 32M. There's also key_buffer under [mysqld] and key_buffer under [isamchk]. What is the difference between those two? Given a Linode 2048 MB (http://www.linode.com) VPS, what would you recommend I set the buffers to? I don't expect this site to have tons of visitors, but I'd like it to be as optimized as possible. It is definitely far heavier on database access than on PHP, with very few HTTP requests.
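
    To the [mysqld] vs [isamchk] question: the two sections configure different programs, as this sketch illustrates (the sizes are illustrative only):

        [mysqld]
        # index-block cache used by the running MySQL server (MyISAM tables)
        key_buffer = 32M

        [isamchk]
        # used only by the offline isamchk/myisamchk table-repair utilities
        key_buffer = 128M

    So raising the [isamchk] value changes nothing about normal query performance; for a database-heavy site it is the [mysqld] key_buffer (plus the query cache already touched) that counts.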

    Read the article

  • What configuration changes are needed on tcServer to work with the Apache web server?

    - by aos37
    Hi, I have Apache web server 2.2.17 and tcServer 6.0.20, and I want to dispatch requests from Apache to tcServer. I am using mod_jk.so, and I have the following in httpd.conf:

        LoadModule jk_module modules/mod_jk.so
        <IfModule jk_module>
            JkWorkersFile /x/y/apache2/conf/workers.properties
            JkLogFile /x/y/apache2/logs/mod_jk.log
            JkLogLevel info
            JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
            JkMount /xyz/* ww
        </IfModule>

    My workers.properties file under /x/y/apache2/conf/workers.properties has:

        worker.list=ww
        worker.ww.type=ajp13
        worker.ww.port=12000
        worker.ww.host=www.abc.com

    I'm new to tcServer (and Tomcat), and I don't know what changes I have to make in server.xml on tcServer to get this to work with Apache. Any help would be appreciated.
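
    Assuming the stock tcServer/Tomcat layout, the piece that usually has to be added is an AJP connector in server.xml whose port matches worker.ww.port (a sketch, not a drop-in file):

        <!-- inside the <Service> element of tcServer's conf/server.xml -->
        <Connector port="12000" protocol="AJP/1.3" redirectPort="8443" />

    With that in place, mod_jk's ajp13 worker on port 12000 has something to talk to; worker.ww.host should then name the machine tcServer actually runs on.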

    Read the article

  • How to install Gitlab in a VM on a production server?

    - by Michaël Perrin
    I have a production server running Ubuntu 12.04, and I would like to install on it a VM running Gitlab (using Vagrant and VirtualBox). Let's say the address to access Gitlab is gitlab.mydomain.com; the DNS zone has been configured to point to the IP address of the server. I want users to be able to access Gitlab from the outside, both for pushing to a repository and for using the web interface. The VM has been configured with its own IP address, which means that when someone browses http://gitlab.mydomain.com, for instance, the request has to be forwarded to the VM on the server, i.e. to the VM's IP address. What are the ways to configure this? Can Apache be used as a proxy? In that case, I guess it only works for HTTP requests, not for pushing to a Git repository on the VM.
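
    For the HTTP part, a plain name-based reverse-proxy vhost is enough; a sketch (the VM address 192.168.56.10 is a placeholder):

        <VirtualHost *:80>
            ServerName gitlab.mydomain.com
            ProxyPreserveHost On
            ProxyPass        / http://192.168.56.10/
            ProxyPassReverse / http://192.168.56.10/
        </VirtualHost>

    Pushing over SSH does not pass through Apache; that needs its own forwarding, e.g. a Vagrant port forward such as config.vm.network :forwarded_port, guest: 22, host: 2222, with clients then using port 2222 in their Git SSH URLs.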

    Read the article

  • System Center 2012 Service Manager change request status stuck at new

    - by Chuck Herrington
    The guy who built and set up this system left rather abruptly and I've taken over. My current issues: I have several change requests that are stuck at New; they do not move to Pending or In Progress. The system is also not sending emails when incidents are assigned to people, which used to work on this system. I have done a lot of searching, and the usual solution of stopping and restarting the System Center services does not help. Can anyone give me any ideas of where else to look?

    Update: From all the searching I had done, it seemed I was at the point of reinstalling. My initial installation of SCSM 2012 was on a machine that was upgraded from SCSM 2010 and also hosted SCCM 2007 and WSUS. We decided to give it a fresh start by installing a second instance of the SCSM server on a brand new 2008 R2 server, then promoting the new server to workflow master using the procedures outlined in the article "Dealing with Multiple Management Servers". I've gotten to the point where both the old and the new server are up and the new server has been promoted. I had hoped to get spammed by emails all of a sudden as the workflows took off, but no such luck. Once all the clients are reconfigured to point to the new server we still plan to decommission the old one, but at this point it seems the problem is in the database. Short of other input from the community, my next plan is to install a 180-day trial on a test server, complete with a separate database, so that I can do a side-by-side comparison between a completely fresh install and what I have now and see if I can find any differences. While that install is running, I also plan to investigate the event logs for anything that can shed light on what is happening on the new server.

    Update 2: I now have a test SCSM server up with a completely fresh install, including the database, and it is able to transition change requests from New to In Progress. I'm attempting to find differences between the two. Stay tuned!

    Update 3: Looking through the event log on the new SCSM machine, I discovered:

        Log Name: Operations Manager
        Source: OpsMgr Root Connector
        Date: 10/9/2013 3:48:18 PM
        Event ID: 28000
        Task Category: None
        Level: Warning
        Keywords: Classic
        User: N/A
        Computer: scsm02
        Description: The Root connector received an exception from the SDK Service while
        submitting task status: Cannot set availability on a health service that doesn't exist.

    This led me to "Event ID 28000 logged after installing secondary server for System Center 2012 Service Manager SP1". I contacted MS to obtain the hotfix. BIG warning here: it turns out the hotfix is not so "hot". In order to apply it, you have to uninstall and then reinstall using the files they supply. :( This is where I am now...

    Update 4: Not much luck after the reinstall. The errors in the event log have gone away on the new server, but the workflows still aren't running, and neither the event log nor the workflow status screen indicates why. I've done a comparison of the Activity and Change Request event workflows, removed everything from the production system that is not in my fresh test system (which is everything), shut down the services, cleared out the cache folders and restarted the services. Still no joy.

    At the moment the only thing I can think to do is either (a) nuke the entire system, including the database, and start over, losing all of our data in the process, or (b) contact MS (which will probably cost us a boatload of money and time only to be advised to do the same thing). Maybe more ideas will come after coffee... No answers came after coffee. Attempting to contact MS: I managed to get through their first line of defense and gave them our SA number, and someone is supposed to call me back. I am trying to log into my incident on their site to update my ticket with the link to this thread, but when I click the link in the email they sent me it goes to a "Sorry, the page you requested is not available" page... Linux is looking better and better all the time.

    Read the article

  • Opening firewall to incoming port 443

    - by jrdioko
    I recently set up the ufw firewall on a Linux machine so that outgoing connections are allowed, incoming connections are denied, and denied connections are logged. This seems to work fine for most cases, but I see many denied connections that are incoming on port 443 (many with IPs associated with Facebook). I can open that port to incoming connections, but first wanted to ask what these could be. Shouldn't HTTPS requests be initiated by me and be treated as outbound, not inbound connections? Is it typical to open incoming port 443 on consumer firewalls?

    Read the article

  • Amazon EC2 Elastic Load Balancing - strategy for zero downtime server restart

    - by Yoga
    I have 5 web servers (Apache/mod_perl) behind Amazon EC2 Elastic Load Balancing. When I deploy code to the web servers, I currently do this for each machine:

        1. Shut down Apache
        2. Update the code
        3. Start the server again and proceed to the next server

    I think that while a server is shut down ELB will not distribute requests to it, but what about the requests it is still serving at that moment? I think a better approach is:

        1. Stop accepting new requests from ELB
        2. Sleep for a while; shut down the web server only once all outstanding requests have been answered
        3. Update the code
        4. Start the server again

    But how do I perform steps 1 and 2 from my local server? Do I need to use the AWS API, or is there an easier way to do it? Thanks.
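
    Steps 1 and 2 map fairly directly onto the ELB API plus Apache's graceful stop; a sketch using the ELB command-line tools (the load balancer name and instance ID are placeholders):

        # stop ELB from sending new requests to this instance
        elb-deregister-instances-from-lb my-lb --instances i-12345678
        # let Apache finish in-flight requests, then exit
        apachectl -k graceful-stop
        # ... deploy the new code ...
        apachectl -k start
        elb-register-instances-with-lb my-lb --instances i-12345678

    Deregistration in this generation of ELB is abrupt (no connection draining), so a short sleep between deregistering and stopping Apache is still a good idea.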

    Read the article
