Search Results

Search found 15035 results on 602 pages for 'request'.


  • Why is DAVExplorer not connecting?

    - by C.W.Holeman II
    DAVExplorer is not connecting. The documentation, "Connecting to a WebDAV Server", states: "Once you have entered a location URL, and (if necessary) your login name and password, DAV Explorer will connect to the remote WebDAV server, and request a listing of the resources there. A hierarchical view of the sub-collections will be displayed."

    Invoke Apache Jackrabbit:

        $ java -jar jackrabbit-standalone-2.0.0.jar --port 8200
        Welcome to Apache Jackrabbit!
        -------------------------------
        Using repository directory jackrabbit
        Writing log messages to jackrabbit/log
        Starting the server...
        Apache Jackrabbit is now running at http://localhost:8200/

    Use DAVExplorer:

        $ java -jar DAVExplorer.jar

    Then connect to localhost:8200/repository/default/, which pops up a login dialog:

        Login name: [admin]
        Password:   [admin]
        <OK>

    The pop-up closes, then nothing changes. Using cadaver confirms Jackrabbit is working:

        $ cadaver http://localhost:8200/repository/default/
        Authentication required for Jackrabbit Webdav Server on server `localhost':
        Username: admin
        Password:
        dav:/repository/default/> ls
        Listing collection `/repository/default/': succeeded.
        Coll: com   0 Mar 13 11:07
        Coll: it    0 Mar 13 11:07
        Coll: net   0 Mar 13 11:07
        Coll: org   0 Mar 13 11:07
        Coll: za    0 Mar 13 11:07
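
    A way to double-check the WebDAV endpoint outside of both clients is a raw PROPFIND request; this is only a diagnostic sketch, reusing the URL and credentials from the question:

        $ curl -u admin:admin -X PROPFIND -H "Depth: 1" http://localhost:8200/repository/default/

    If this returns a multistatus XML listing, the server side is fine and the problem is isolated to how DAVExplorer issues or authenticates its requests.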


  • Error installing Network driver

    - by Simon Carpentier
    I'm trying to install NetLimiter 3 (3.0.0.10, x64) on my Windows 7 x64 machine. At the driver installation step, I'm getting an error:

        Error 0x8004a029: Couldn't install the network component. (InstallNdisIMDriver)

    UPDATE 2010-09-20: I received a response to my support request:

        Hi Simon, It seems that problem is rather on your system than in NL3. Too many networking applications (VPN, Virtual computers) are installed on your machine and they are preventing NL3 from proper installation. In order to install NetLimiter 3, try to disable/temporarily remove these apps. Several users had the same problem and shuffling with adapter and networking software helped them. Please, let us know which action helps you (if any). Sincerely, Jan Bilek

    Any thoughts on this?


  • POST data not being received

    - by Alexander
    I've got an iPhone app that is supposed to send POST data to my server to register the device in a MySQL database so we can send notifications etc. to it. It sends its unique identifier, device name, token, and a few other small things like passwords and usernames as a POST request to our server. The problem is that sometimes the server doesn't receive the data. And by this I mean it's not just receiving blank values for the POST inputs; it's not receiving ANY POST data at all. I am logging all POST inputs to my server into some log files, and when the script that relies on the POST data from the device fails (detects no data), I notice that it's because NO POST data was sent. Is this a problem on the server, like refusing data or something, or does this have to be on the client's side? What could be causing this?
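
    One way to rule the server in or out is to replay the device's request by hand and watch the same logs; a minimal sketch (the endpoint path and field names below are made up for illustration):

        $ curl -v -d "udid=TEST123&device_name=iPhone&token=abc" http://example.com/register.php

    If hand-made requests always arrive intact while device requests intermittently arrive empty, the dropped POST bodies are more likely a client-side or network issue than a server refusal.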


  • How do the routers communicate with each other?

    - by Berkay
    Let's say that I want to make a request to a web page which is hosted in Europe (I live in the USA). My packets only contain the IP address of the web page: first the domain-name-to-IP-address translation is done, then my packets start their journey to Europe. I assume that MAC addresses are never used in this situation, are they? First, my packets deal with many routers on the way; how do these routers communicate with each other? Are router addresses added to my packet headers? Second, is there a specific path for router-to-router communication, and which conditions affect this route? Third, to cross the Atlantic Ocean, are cables used or... ?
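
    You can see the chain of routers a packet actually traverses with traceroute; a quick illustration (the hostname is just an example):

        $ traceroute www.example.com

    Each numbered line of output is one router hop. Router addresses are not accumulated in the packet's IP header; the source and destination IP stay the same end to end, while each hop rewrites only the layer-2 (MAC) framing for the next link, and the hop-by-hop path is chosen by routing protocols such as BGP.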


  • Are you a GPGPU developer? Participate in our UX study

    - by Daniel Moth
    You know that I work on the parallel debugger in Visual Studio, and I've talked about GPGPU before and have also mentioned UX. Below is a request from my UX colleagues that pulls all of it together. If you write and debug parallel code that uses GPUs for non-graphical, computationally intensive operations, keep reading. The Microsoft Visual Studio Parallel Computing team is seeking developers for a 90-minute research study. The study will take place via LiveMeeting or at a usability lab in Redmond, depending on your preference. We will walk you through an example of debugging GPGPU code in Visual Studio, with you giving us step-by-step feedback ("Is this what you would expect?", "Are we showing you the things that would help you?", "How would you improve this?"). The walkthrough utilizes a “paper” version of our current design. After the walkthrough, we would then show you some additional design ideas and seek your input on various design tradeoffs. Are you interested, or do you know someone who might be a good fit? Let us know at this address: [email protected]. Those who participate (and those who referred them) will receive a gratuity item from a list of current Microsoft products. Comments about this post welcome at the original blog.


  • Adding a virtual directory in IIS 7.5 on Windows 7 Ultimate x64

    - by Dave
    Trying to get my IIS 7.5 playing nice with VS 2008 on Windows 7 Ultimate 64-bit. I'm getting this error:

        System.Security.SecurityException: Request for the permission of type
        'System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral,
        PublicKeyToken=b77a5c561934e089' failed.

    This happens when accessing a virtual directory outside C:\inetpub\wwwroot. I'd like to be able to create virtual directories outside the root if I can. I've added NETWORK SERVICE to the folder hosting the virtual directory; still no luck. This folder is on my C: drive, not a share. TIA
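
    If the ASP.NET code access security trust level turns out to be the blocker, one commonly cited workaround was granting full trust to the content path with caspol. This is only a hedged sketch: the code-group number and path are assumptions, and on a local fixed drive the default policy usually already grants full trust, so this mainly applies when the content actually resolves to a share:

        caspol -m -ag 1 -url "file://C:/externalsites/*" FullTrust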


  • Windows clients not using NTP server provided via DHCP

    - by gencha
    I have a network consisting mostly of Windows Vista and 7 clients and an Ubuntu server. The server provides both the DHCP and NTP services through dhcp3-server and openntpd. In my dhcpd.conf, the subnet is declared as follows:

        subnet 10.10.10.0 netmask 255.255.255.0 {
          range 10.10.10.10 10.10.10.200;
          option broadcast-address 10.10.10.255;
          option routers 10.10.10.1;
          option ntp-servers 10.10.10.1;
        }

    The clients don't seem to be using the NTP server, though. When I capture the network traffic with Wireshark during the DHCP process, I also see no mention of the NTP option in the DHCP offer message. I am not quite sure whether the clients have to specifically request that option to receive it, or whether I have to make another configuration change to offer the option.
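
    For what it's worth, a DHCP server only sends options the client lists in its Parameter Request List, and the Windows DHCP client does not ask for the NTP option; the Windows Time service is configured separately. A hedged check-and-fix from an elevated prompt on one client, pointing it straight at the server from the question:

        w32tm /query /source
        w32tm /config /manualpeerlist:10.10.10.1 /syncfromflags:manual /update
        net stop w32time && net start w32time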


  • E-Business Suite Proactive Support - Workflow Analyzer

    - by Alejandro Sosa
    Overview

    The Workflow Analyzer is a standalone, easy-to-run tool created to read, validate and troubleshoot Workflow component configuration as well as runtime data. It identifies areas where potential problems may arise and, based on a set of best practices, suggests to the Workflow System Administrator what to do when such potential problems are found. This tool represents a proactive way to verify Workflow configuration and runtime data, to prevent issues before they have a more considerable impact on a production environment.

    Installation

    Since it is standalone, there are no prerequisites, and it runs on Oracle E-Business applications from 11.5.10 onwards. It is installed on the back-end server and can be run directly from SQL*Plus. The output of this tool is written to a friendly formatted HTML file covering the following, on both Workflow component configuration and Workflow runtime data:

    - Workflow-related database initialization parameters
    - Relevant Oracle E-Business profile option values
    - Workflow-owned concurrent programs schedule and Workflow components status
    - Workflow notification mailer configuration and throughput via related queues and table
    - Workflow-relevant recommended and critical one-off patches as well as current code level
    - Workflow database footprint, obtained by reading Workflow run-time tables to identify aged processes not being purged; it also checks for large open and closed processes or unhealthy looping conditions in a workflow process, among other checks

    See a sample of the Workflow Analyzer's output here. Besides performing the validations listed above, the Workflow Analyzer provides clarification on the issues it finds and refers the reader to specific Oracle MOS documents to address the findings, or explains the condition so the reader can take proper action.

    How to get it?

    The Workflow Analyzer can be obtained from Oracle MOS: Workflow Analyzer script for E-Business Suite Workflow Monitoring and Maintenance (Doc ID 1369938.1). The supplemental note How to run EBS Workflow Analyzer Tool as a Concurrent Request (Doc ID 1425053.1) explains how to register and run this tool as a concurrent program. This way the Workflow Analyzer report can be submitted from the application, and its output can be seen from the application as well.
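
    A hedged sketch of what a run from SQL*Plus on the applications tier looks like; the script file name below is an assumption, so take the actual name and invocation from Doc ID 1369938.1:

        $ sqlplus apps/<apps_password> @workflow_analyzer.sql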


  • Bizarre image loading problem from Apache2

    - by NateDSaint
    Users have complained a few times about seeing a bizarre set of pink or green stripes on our webpage. At first I thought there was a rash of video card outages, but then someone sent me a screenshot from their browser (IE8). I later saw the same thing, but with slightly different colors, in Chrome. Users have experienced this on their iPads and iPhones (iOS Safari) as well. Because I've optimized the site to cache images, the bad image stays around until you clear your cache, so once you do, it resolves itself. My assumption is that the transmission of the image is being cut off mid-stream and then staying that way, but I can't for the life of me figure out why. Here's what I've checked:

    The header length is being sent, and the transmission looks okay (wget sample below):

        $ wget http://www.superiorlivestock.com/templates/sla2/images/wallbg2.jpg
        --2012-04-05 08:46:00--  http://www.superiorlivestock.com/templates/sla2/images/wallbg2.jpg
        Resolving www.superiorlivestock.com (www.superiorlivestock.com)... [ip redacted]
        Connecting to www.superiorlivestock.com (www.superiorlivestock.com)|[ip redacted]|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 45926 (45K) [image/jpeg]
        Saving to: `wallbg2.jpg'

    Images are not being served gzipped (Apache conf below):

        SetOutputFilter DEFLATE
        SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary

    The site is www.superiorlivestock.com, and here's a sample of the bad page load: [screenshot]

    Is there something obvious I'm missing? Am I saving my images in the wrong format somehow?
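
    A quick way to test the truncation theory is to compare the advertised Content-Length with the bytes actually received, bypassing any local cache; a diagnostic sketch using the same image:

        $ curl -s -H "Cache-Control: no-cache" -o /tmp/wallbg2.jpg \
              -w "HTTP %{http_code}, %{size_download} bytes\n" \
              http://www.superiorlivestock.com/templates/sla2/images/wallbg2.jpg

    Run it in a loop; if the downloaded size ever falls short of the 45926 bytes the server advertises, something between Apache and the client is cutting the stream mid-transfer, and the browser is then caching the partial body.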


  • Ubuntu 13.10: nslookup not automatically appending DNS suffixes

    - by Alex
    When configuring an Ubuntu 13.10 server I ran into a problem. Usually (working on 12.10 machines) I add the following information to my /etc/resolv.conf file:

        nameserver 192.168.2.180
        domain our.domain.com

    Normally, when I then ping a given host, e.g.:

        ping host01

    it would resolve the FQDN to host01.our.domain.com. However, in Ubuntu 13.10 this doesn't seem to be working; it just returns the following:

        ~# nslookup host01
        Server:   192.168.2.180
        Address:  192.168.2.180#53

        ** server can't find host01: SERVFAIL

    which is normal, since the DNS server doesn't respond to a plain 'host01' request. However, if I do the same nslookup on an Ubuntu 12.10 machine, it automatically appends the 'our.domain.com' suffix to whatever I throw at it that doesn't already have the suffix. Is this a 13.10 bug, or am I doing something wrong?
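
    One likely culprit: on recent Ubuntu releases /etc/resolv.conf is generated by resolvconf, so hand edits get overwritten. A hedged way to persist the suffix through the stock resolvconf layout:

        $ echo "search our.domain.com" | sudo tee -a /etc/resolvconf/resolv.conf.d/base
        $ sudo resolvconf -u

    After the update, check that the search line actually appears in the generated /etc/resolv.conf.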


  • Move flag for follow-up of a specific color to a folder in Outlook 2003

    - by Campo
    I have a user request to be able to create a rule in Outlook 2003 that would move an email the user flagged for follow-up to a specific folder. That seemed simple enough, till he requested that, depending on the flag color, they be moved to specific folders. The issue is that in Outlook 2003 that's not an option when creating a rule. I know this is very straightforward in Outlook 2007 and 2010, where using the categories feature is very convenient, as it displays as a list when you right-click... though in 2003, categories are not so convenient. As an example, the user will flag for follow-up like so:

        Red flag   - sales
        Blue flag  - requests
        Green flag - personal

    They want a rule that will move all items with a red flag to the sales folder, all items with a blue flag to the requests folder, and so on. Thank you for your suggestions.


  • Install proprietary NVIDIA drivers on 14.04 (Steam segmentation issue)

    - by allthosemiles
    Recently, I finally got the official drivers for my NVIDIA 560 Ti card installed on Ubuntu 14.04 (hooray). However, when I started looking into installing Steam, I got segmentation errors when trying to run the software. I tried installing 32-bit libs, and it seemed like they weren't available or were already installed. Upon further investigation, I found a suggested solution: install the proprietary drivers, install Steam, then switch back to the other drivers. I'm not really sure what "proprietary drivers" are, in all honesty. Has anyone gone through this process who could provide some insight here? (For reference, I installed the official 64-bit driver from the NVIDIA site for my 560 Ti, and the installed Ubuntu version is 64-bit as well.)

    Update: This is the error text I get when trying to run Steam after installing it via the Ubuntu store:

        Running Steam on ubuntu 14.04 64-bit
        STEAM_RUNTIME is enabled automatically
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        /home/dbrewer/.steam/steam.sh: line 755:  3943 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$PLATFORM/$STEAMEXE" "$@"
        mv: cannot stat ‘/home/dbrewer/.steam/registry.vdf’: No such file or directory
        Installing bootstrap /home/dbrewer/.steam/bootstrap.tar.xz
        Reset complete!
        Restarting Steam by request...
        Running Steam on ubuntu 14.04 64-bit
        STEAM_RUNTIME has been set by the user to: /home/dbrewer/.steam/ubuntu12_32/steam-runtime
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        /home/dbrewer/.steam/steam.sh: line 755:  4066 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$PLATFORM/$STEAMEXE" "$@"

    What I get when I run "steam --reset":

        mv: cannot stat ‘/home/dbrewer/.steam/registry.vdf’: No such file or directory
        Installing bootstrap /home/dbrewer/.steam/bootstrap.tar.xz
        Reset complete!
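
    For context, "proprietary drivers" in Ubuntu parlance means NVIDIA's closed-source packages from the Ubuntu archive, as opposed to the open-source nouveau driver or a driver installed by hand from NVIDIA's .run installer. A hedged sketch of switching to the packaged driver on 14.04 (the exact package version, nvidia-331 here, is an assumption; check what the first command recommends):

        $ sudo ubuntu-drivers devices
        $ sudo apt-get install nvidia-331
        $ sudo reboot

    Using the packaged driver also lets apt pull in the matching 32-bit GL libraries that Steam's runtime needs.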


  • How to merge (and not replace) folders when copying on Mac?

    - by Cawas
    There's a similar question about Windows. This is the same, but for Mac. If I try to copy or move a folder to somewhere it already exists, it asks to replace it. That would result in deleting the target. Instead, I want to merge. There's already an aquataskforce request about this, and there has been a long-running discussion about whether it's even something that should exist on the Mac, due to its whole philosophy. Discussions at Apple are outdated and didn't help much either. As usual, there are professional solutions for doing this, such as Changes and Araxis. And there are rsync and other command-line alternatives. But I want a free and simple solution, something like how it is done in Windows or Linux. I won't be doing it much anyway. By the way, Path Finder doesn't have such an option either, and FolderMerge doesn't work on Snow Leopard, as far as my one test went.
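
    Since rsync is already mentioned as an alternative, here is what the merge looks like with it; the trailing slash on the source matters, and the paths are examples:

        $ rsync -av /path/to/source/ /path/to/target/

    Files present only in the target are left alone, so the two trees end up merged rather than one replacing the other; add -n first for a dry run.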


  • Multiple vulnerabilities in Oracle Java Web Console

    - by RitwikGhoshal
    Component: Apache Tomcat
    Product and Resolution: Solaris 10 (SPARC: 147673-04, X86: 147674-04)

        CVE ID         CVE Description                                                                          CVSSv2 Base Score
        CVE-2007-5333  Information Exposure vulnerability                                                       5.0
        CVE-2007-5342  Permissions, Privileges, and Access Controls vulnerability                               6.4
        CVE-2007-6286  Request handling vulnerability                                                           4.3
        CVE-2008-0002  Information disclosure vulnerability                                                     5.8
        CVE-2008-1232  Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')     4.3
        CVE-2008-1947  Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')     4.3
        CVE-2008-2370  Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')           5.0
        CVE-2008-2938  Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')           4.3
        CVE-2008-5515  Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')           5.0
        CVE-2009-0033  Improper Input Validation vulnerability                                                  5.0
        CVE-2009-0580  Information Exposure vulnerability                                                       4.3
        CVE-2009-0781  Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')     4.3
        CVE-2009-0783  Information Exposure vulnerability                                                       4.6
        CVE-2009-2693  Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')           5.8
        CVE-2009-2901  Permissions, Privileges, and Access Controls vulnerability                               4.3
        CVE-2009-2902  Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')           4.3
        CVE-2009-3548  Credentials Management vulnerability                                                     7.5
        CVE-2010-1157  Information Exposure vulnerability                                                       2.6
        CVE-2010-2227  Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability    6.4
        CVE-2010-3718  Directory traversal vulnerability                                                        1.2
        CVE-2010-4172  Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')     4.3
        CVE-2010-4312  Configuration vulnerability                                                              6.4
        CVE-2011-0013  Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')     4.3
        CVE-2011-0534  Resource Management Errors vulnerability                                                 5.0
        CVE-2011-1184  Permissions, Privileges, and Access Controls vulnerability                               5.0
        CVE-2011-2204  Information Exposure vulnerability                                                       1.9
        CVE-2011-2526  Improper Input Validation vulnerability                                                  4.4
        CVE-2011-3190  Permissions, Privileges, and Access Controls vulnerability                               7.5
        CVE-2011-4858  Resource Management Errors vulnerability                                                 5.0
        CVE-2011-5062  Permissions, Privileges, and Access Controls vulnerability                               5.0
        CVE-2011-5063  Improper Authentication vulnerability                                                    4.3
        CVE-2011-5064  Cryptographic Issues vulnerability                                                       4.3
        CVE-2012-0022  Numeric Errors vulnerability                                                             5.0

    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.


  • Apache2 BufferedLogs On - anybody using it?

    - by Qiqi
    Greetings. I am wondering whether anybody is using BufferedLogs On with Apache2 and has found any issues. The feature is marked as experimental, but has been for many years now, so I guess it's pretty stable. I am running some servers with constrained disk I/O capacity at the moment, so I turned it on, hoping that even a small benefit could help in the long run ;-) I do have several to several hundred requests per second, so to my mind there is really no need to write to the log after each request, because honestly I don't think that my filesystem is the best handler for many unnecessary writes. (OCFS2 shared among several DomUs in Xen.)


  • Using NPS to restrict access to WLAN

    - by eric.s
    We currently have one WLAN that only domain users can connect to. We will be adding a guest WLAN and would like all non-domain machines to use it, even if the user has a domain account. We have set up NPS and can log in against it, but we cannot restrict the connection to require a domain computer AND a domain account. Network policies are evaluated in order until one accepts the request or the list runs out. For connection request policies, Domain Computers is not an option, and that is where I thought I might be able to stop it. Has anyone been able to successfully restrict this without manually adding MACs to the WLAN controller?


  • How do I find out what's preventing DELETE requests from working in IIS 7.5 and IIS 8?

    - by Simon
    Our site has an MVC REST API. Recently, both the live servers and my development machine stopped accepting DELETE requests, instead returning a 501 Not Implemented response. On my development machine, which is Windows 7 running IIS 7.5, the solution was to add these lines to our Web.config, under system.webServer / handlers:

        <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
        ...
        <add name="ExtensionlessUrlHandler-Integrated-4.0"
             path="*."
             verb="GET,HEAD,POST,DEBUG,PUT,DELETE"
             type="System.Web.Handlers.TransferRequestHandler"
             resourceType="Unspecified"
             requireAccess="Script"
             preCondition="integratedMode,runtimeVersionv4.0" />

    However, this didn't work on any of our live servers: not on Server 2008 + IIS 7.5 and not on Server 2012 + IIS 8. There are no verbs set up in Request Filtering, and WebDAV is not installed on any of our live servers. The error page gives no further information, and nothing gets recorded in the logs. How do I find out what's preventing DELETE requests from working in IIS 7.5 and IIS 8?
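
    When the logs are silent, IIS's Failed Request Tracing is the usual tool for naming the module that short-circuits a request. Before that, though, it's worth confirming where the verb dies; a diagnostic sketch (URL is an example):

        curl -i -X DELETE http://yourserver/api/items/1

    The response headers are informative: if the 501 arrives with a Server: Microsoft-HTTPAPI/2.0 banner rather than the usual IIS one, the request is being rejected at the http.sys level, before your handler mappings ever run.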


  • Is this form of cloaking likely to be penalised?

    - by Flo
    I'm looking to create a website which is considerably JavaScript-heavy, built with backbone.js, with most content being passed as JSON and loaded via Backbone. I just need some advice or opinions on the likelihood of my website being penalised for the method of serving plain HTML (text, images, everything) to search engine bots and a JS front-end version to normal users.

    This is my basic plan for the site: the first request to any page returns HTML which gives only about 1/4 of the page, and thereafter the last 3/4 is loaded with Backbone. Therefore, non-JavaScript users get a 'bit' of the experience. Once a new user has visited and been detected to have JS, a cookie is saved on their machine, and requests from there on will be AJAX-only. Example:

        If (AJAX || HasJSCookie) {
            // Pass JSON
        }

    Search engine server content: the entire experience of loading via AJAX will be stripped if, for example, a Google bot is detected; the same content will be served, but all as HTML. I thought about just allowing search engines to index the first 1/4 of content, but as I'm concerned about inner links and picking up every bit of content, I thought it would be better to give search engines the entire content. I plan to do this by just checking a list of user agents and deciding whether it's a bot or not:

        If (Bot) {
            // serve plain html
        }

    In addition, I plan to make clean URLs for the entire website despite full AJAX, so providing AJAX content to www.example.com/#/page and normal HTML to www.example.com/page is kind of out of the question; I'd rather avoid the practice of using # when technology such as HTML5 pushState is around.

    So my question is really just asking the opinion of the masses on whether it's likely that my website will be penalised. And would you suggest an alternative which avoids the 'noscript' method?
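
    One way to sanity-check what each audience would actually see is to request the same URL with and without a crawler user agent; a quick sketch (the domain is the example one from the question):

        $ curl -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://www.example.com/page
        $ curl http://www.example.com/page

    Worth noting: serving different content keyed purely on the user-agent string is the textbook definition of cloaking, so the safer pattern is to serve the same HTML to everyone and let JS progressively enhance it.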


  • Displaying the field values submitted with AJAX [closed]

    - by work
    Here is the code. I want to post the field values entered in this form to the page ajaxpost.php using Ajax and then do some operations there. What would be the code required in ajaxpost.php?

        <html>
        <head>
        <script type="text/javascript">
        function loadXMLDoc()
        {
          var xmlhttp;
          if (window.XMLHttpRequest)
          { // code for IE7+, Firefox, Chrome, Opera, Safari
            xmlhttp = new XMLHttpRequest();
          }
          else
          { // code for IE6, IE5
            xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
          }
          xmlhttp.onreadystatechange = function()
          {
            if (xmlhttp.readyState == 4 && xmlhttp.status == 200)
            {
              document.getElementById("myDiv").innerHTML = xmlhttp.responseText;
            }
          }
          var zz = document.f1.dd.value; //alert(zz);
          var qq = document.f1.cc.value;
          xmlhttp.open("POST", "ajaxpost.php", true);
          xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
          // Send the variable VALUES, URL-encoded; the original sent the
          // literal string "dd=zz&cc=qq" instead of the form input values.
          xmlhttp.send("dd=" + encodeURIComponent(zz) + "&cc=" + encodeURIComponent(qq));
        }
        </script>
        </head>
        <body>
        <h2>AJAX</h2>
        <form name="f1">
          <input type="text" name="dd">
          <input type="text" name="cc">
          <button type="button" onclick="loadXMLDoc()">Request data</button>
          <div id="myDiv"></div>
        </form>
        </body>
        </html>
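
    On the PHP side, ajaxpost.php reads the two fields from $_POST ($_POST['dd'] and $_POST['cc']) and echoes whatever should land in myDiv. To test the endpoint independently of the browser, you can replay the same form-encoded POST by hand; a test sketch (host and path assumed local):

        $ curl -d "dd=hello&cc=world" http://localhost/ajaxpost.php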


  • Open source CMS for a university department

    - by Greg Kuperberg
    I realize that this type of question gets asked over and over again. Nonetheless, I want to ask a more specific version. I'm in a university math department. Long ago our sysadmins (or just one, at the time) switched to a web content management system. At the time, Zope looked like an informed choice. We have used Zope for years, but at least in my opinion it has always been a controversial decision. At the time I didn't understand why it was so important to have a web CMS. Now I see that it certainly is important, but I don't know that it should be Zope.

    The good (even necessary) features of Zope for us are:

    - It's free and Linux-based.
    - It is a true CMS and not something else (e.g. a wiki or blog).
    - It lets you write HTML and scripts.

    What I really don't like about Zope is that the outcome of using it is all-or-nothing in a lot of ways. At least in convenient use, it ends up dividing the enterprise into superusers who can do everything and lusers who can't do anything (except write their own home pages in plain HTML). It has a huge user manual, which end users won't have time to read. Somehow, with the access permissions, the simple thing to do is to let a few admins access all of the source and data, and that's it. Since this is a math department, the user base varies from real novices to people who understand computers reasonably well. But as it stands, any change that involves Zope has to go through the sysadmins. When the sysadmins are in a hurry, sometimes they will also just add plain HTML pages to the web site instead of using the Zope framework. It doesn't help matters that Zope is fairly disk-intensive and fairly hype-intensive.

    Not to dwell on Zope too much, but I am wondering what the right web CMS is for a mixed user base of terminal novices, quick studies, and experienced users. Some users might want intermediate permissions, e.g. read permission but not write permission, or permission to change some subset of the pages or see some subset of the database tables. Also, it should be Linux-based and open source and a little bit scalable, and of course widely used and well-supported is a good idea. I might guess that the answer is Drupal, just because that was the general answer before, but I don't know if it is the right type of CMS for this purpose. (Note that Python is a relatively popular language in a math department, among other reasons because Sage is based on Python.)

    I can see that I didn't completely define the question and that people are guessing what type of site it is. It is the UC Davis Math Department. The main structure of the site is not suitable for a wiki, and it is also not the same thing as a course environment like Moodle. Rather, the site is mostly structured as a generic medium-small enterprise. Some components of the site could be a wiki, Moodle, a LaTeX plugin, Request Tracker, etc. However, the main issue is not these components. The main issue is that it would be better to decentralize management of the site. Right now, everything that is in the Zope CMS has to go through the sysadmins; every other user in the department either has to put in a request to them or write their own web pages with no help from Zope. There are two main reasons for this: (1) other people in the department don't have time to read the Zope manual, and (2) it's a hassle to set up intermediate permissions in Zope. However, there are other people in the department who know how to write computer programs and use markup languages.

    I wouldn't want a solution that assumes that users either can't be trusted with much more than drag-and-drop, or that they are IT professionals who sleep with documentation manuals. I'm wondering if Plone/Zope still has this quality, since certainly Zope by itself does. But I also wonder sometimes if common-sense flexibility is unfashionable these days, and whether things in general have to be either mindlessly easy or incredibly powerful.


  • XSP2 hangs after some time

    - by Sebi
    I'm running a REST/JSON web service (SVC) using xsp2 (and xsp2 behind Apache/mod_mono). After some hours, the service just hangs and returns a timeout after about 5 minutes. This can only be solved by restarting xsp2; sometimes multiple restarts are needed to get it running again. I've made a small sample web service that simply returns a "pong". The example is available in the bug report: https://bugzilla.novell.com/show_bug.cgi?id=608158

    The bug can sometimes be reproduced by following these steps:

        1. Start xsp2 (webservice)
        2. Request http://[server]/WSPing.svc/ping

    Actual results: after some time, I always get timeouts. I'm running Gentoo Linux as a Xen DomU, and I'm using Visual Studio to build and publish the service. Any hints?
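
    A crude way to catch the hang early, and to gather timing data for the bug report, is to poll the ping endpoint until it stops answering; a monitoring sketch (keep [server] as your real host):

        $ while curl -fsS -m 10 http://[server]/WSPing.svc/ping > /dev/null; do
              sleep 60
          done; date; echo "service stopped responding"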


  • Apache vs Lighttpd: Weird behavior in reverse proxy mode.

    - by northox
    Context: I have an Apache server running in reverse proxy mode in front of a Tomcat Java server. It handles HTTP and HTTPS and forwards those requests to the Tomcat server on an internal HTTP port.

    Goal: I'm trying to replace the Apache reverse proxy with Lighttpd.

    Problem: when asking for the same HTTPS URL, the Tomcat server redirects (302) to an HTTPS page while Apache is the reverse proxy, but with Lighttpd it redirects to the same page over HTTP (not HTTPS).

    Question: what could Lighttpd be doing differently in order to get a different result from the backend server? In theory, using Apache or Lighttpd as a reverse proxy should not change anything... but it does. Any idea? I'll try to find something by sniffing the traffic on the backend Tomcat server.
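
    Sniffing the backend leg is the right instinct: compare the request headers the two proxies send, since a forwarding header that one adds and the other omits (X-Forwarded-Proto, for example) is the usual reason a backend generates http:// rather than https:// redirects. A capture sketch for the backend host (the internal port is an assumption; use your Tomcat connector port):

        $ sudo tcpdump -l -A -s0 -i any tcp port 8080 | grep -i "x-forwarded"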


  • Need advice on choosing AWS EC2

    - by Mayank
    I'm planning to host a website where, in the first phase, I would target 30,000 users. It is in PHP and runs on an Apache server. I'm assuming 8,000 users can be online in the worst-case scenario, and 1,000 of them will be uploading photographs. A photograph will be resized to around 1 MB on the client side, and one HTTP request uploads only one photograph. My plan:

    - 2 small EC2 instances to run Apache httpd
    - 2 small EC2 instances for the DB (PostgreSQL): one to write data, the other its read replica
    - EBS volumes for the DBs
    - Lastly, Amazon S3 for the uploaded photographs

    My questions: Is a small EC2 instance more than what I require; I mean, should I go for micro? Is 8,000 simultaneous users the right number (for deciding which EC2 instance to choose) for a new website? Or should I go for a small instance to make it capable of handling spikes?
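
    Instance sizing is easier to answer empirically than in the abstract: launch one candidate instance and load-test the upload path before settling on a fleet. A hedged sketch with ApacheBench (the URL and file are placeholders, and a real upload endpoint may expect a multipart body rather than a raw POST):

        $ ab -n 500 -c 50 -p photo.jpg -T image/jpeg http://test-instance.example.com/upload.php

    If a small instance sustains your worst-case concurrency with headroom, you have your answer; if not, scaling out behind a load balancer is usually cheaper than guessing a bigger size up front.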


  • Recovering a Word file ("Select the encoding that makes your document readable")

    - by HOY
    My girlfriend asked me to recover a Word file which is 2 months of her work :( and is her thesis for graduation. It shows the "Select the encoding that makes your document readable" screen when I try to open it. I tried 2 recovery tools, but they didn't work. The file can be downloaded from the link below:

        http://s3.dosya.tc/server3/bmu4bi/glava.doc.html

    I kindly request your help.

    The history of the issue: she said she was copy-pasting from other files while creating this file (she copy-pasted from a PDF too). 2 days ago she opened the file on a company PC and worked on it; she wrote 2 pages and saved. The next morning she could not open it. It is possible that an error occurred when saving. The computer she worked on freezes sometimes; while she was working there was a file on a USB stick which she plugged out and back in, then she continued to work and saved.
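
    Before trying more recovery tools, it may be worth checking whether the text itself survives inside the damaged container. Word's binary .doc format stores text as 16-bit little-endian characters, which GNU strings can extract; a salvage sketch (run it on a copy of the file):

        $ strings -e l glava.doc > recovered.txt

    Even if the formatting is gone, this often rescues the raw prose, which is the part that took 2 months to write.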


  • nginx + php-fpm: help optimizing configs

    - by Dmitro
    I have 3 servers. The first server (CPU model name: 06/17, 2.66 GHz, 4 cores, 8 GB RAM) runs nginx as a load balancer with the following config:

        upstream lb_mydomain {
           server mydomain.ru:81 weight=2;
           server 66.0.0.18 weight=6;
        }

        server {
           listen 80;
           server_name ~(?!mydomain.ru)(.*);
           client_max_body_size 20m;

           location / {
              proxy_pass http://lb_mydomain;
              proxy_redirect off;
              proxy_set_header Connection close;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_pass_header Set-Cookie;
              proxy_pass_header P3P;
              proxy_pass_header Content-Type;
              proxy_pass_header Content-Disposition;
              proxy_pass_header Content-Length;
           }
        }

    And from nginx.conf:

        user www-data;
        worker_processes 5;
        # worker_priority -1;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
           worker_connections 5024;
           # multi_accept on;
        }

        http {
           include /etc/nginx/mime.types;
           access_log /var/log/nginx/access.log;
           sendfile on;
           default_type application/octet-stream;
           #tcp_nopush on;
           keepalive_timeout 65;
           tcp_nodelay on;
           gzip on;
           gzip_disable "MSIE [1-6]\.(?!.*SV1)";

           # PHP-FPM (backend)
           upstream php-fpm {
              server 127.0.0.1:9000;
           }

           include /etc/nginx/conf.d/*.conf;
           include /etc/nginx/sites-enabled/*;
        }

    And the php-fpm config:

        listen = 127.0.0.1:9000
        ;listen.backlog = -1
        ;listen.allowed_clients = 127.0.0.1
        ;listen.owner = www-data
        ;listen.group = www-data
        ;listen.mode = 0666
        user = www-data
        group = www-data
        pm = dynamic
        pm.max_children = 80
        ;pm.start_servers = 20
        pm.min_spare_servers = 5
        pm.max_spare_servers = 35
        ;pm.max_requests = 500
        pm.status_path = /status
        ping.path = /ping
        ;ping.response = pong
        request_terminate_timeout = 30s
        request_slowlog_timeout = 10s
        slowlog = /var/log/php-fpm.log.slow
        ;rlimit_files = 1024
        ;rlimit_core = 0
        ;chroot =
        chdir = /var/www
        ;catch_workers_output = yes
        ;env[HOSTNAME] = $HOSTNAME
        ;env[PATH] = /usr/local/bin:/usr/bin:/bin
        ;env[TMP] = /tmp
        ;env[TMPDIR] = /tmp
        ;env[TEMP] = /tmp
        ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected]
        ;php_flag[display_errors] = off
        ;php_admin_value[error_log] = /var/log/fpm-php.www.log
        ;php_admin_flag[log_errors] = on
        ;php_admin_value[memory_limit] = 32M

    In top I see 20 php-fpm processes, each using from 1% to 15% CPU, and the load average is high:

        top - 15:36:22 up 34 days, 20:54,  1 user,  load average: 5.98, 7.75, 8.78
        Tasks: 218 total,   1 running, 217 sleeping,   0 stopped,   0 zombie
        Cpu(s): 34.1%us,  3.2%sy,  0.0%ni, 37.0%id, 24.8%wa,  0.0%hi,  0.9%si,  0.0%st
        Mem:   8183228k total,  7538584k used,   644644k free,   351136k buffers
        Swap:  9936892k total,    14636k used,  9922256k free,   990540k cached

    The second server (CPU model name: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz, 8 cores, 8 GB RAM) has an nginx.conf identical to the first server's, shown above. Its php-fpm config:

        listen = 127.0.0.1:9000
        ;listen.backlog = -1
        ;listen.allowed_clients = 127.0.0.1
        ;listen.owner = www-data
        ;listen.group = www-data
        ;listen.mode = 0666
        user = www-data
        group = www-data
        pm = dynamic
        pm.max_children = 50
        ;pm.start_servers = 20
        pm.min_spare_servers = 5
        pm.max_spare_servers = 35
        ;pm.max_requests = 500
        ;pm.status_path = /status
        ;ping.path = /ping
        ;ping.response = pong
        ;request_terminate_timeout = 0
        ;request_slowlog_timeout = 0
        ;slowlog = /var/log/php-fpm.log.slow
        ;rlimit_files = 1024
        ;rlimit_core = 0
        ;chroot =
        chdir = /var/www
        ;catch_workers_output = yes
        ;env[HOSTNAME] = $HOSTNAME
        ;env[PATH] = /usr/local/bin:/usr/bin:/bin
        ;env[TMP] = /tmp
        ;env[TMPDIR] = /tmp
        ;env[TEMP] = /tmp
        ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected]
        ;php_flag[display_errors] = off
        ;php_admin_value[error_log] = /var/log/fpm-php.www.log
        ;php_admin_flag[log_errors] = on
        ;php_admin_value[memory_limit] = 32M

    In top I see 50 php-fpm processes using from 10% to 25% CPU, and the load average is very high:

        top - 15:53:05 up 33 days,  1:15,  1 user,  load average: 41.35, 40.28, 39.61
        Tasks: 239 total,  40 running, 199 sleeping,   0 stopped,   0 zombie
        Cpu(s): 96.5%us,  3.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.4%si,  0.0%st
        Mem:   8185560k total,  7804224k used,   381336k free,   161648k buffers
        Swap: 19802108k total,       16k used, 19802092k free,  5068112k cached

    The third server is the server with the PostgreSQL database.

    I also tried:

        ab -n 50 -c 5 http://www.mydomain.ru/

    and got the following:

        Complete requests:      50
        Failed requests:        48
           (Connect: 0, Receive: 0, Length: 48, Exceptions: 0)
        Write errors:           0
        Total transferred:      9271367 bytes
        HTML transferred:       9247767 bytes
        Requests per second:    1.02 [#/sec] (mean)
        Time per request:       4882.427 [ms] (mean)
        Time per request:       976.486 [ms] (mean, across all concurrent requests)
        Transfer rate:          185.44 [Kbytes/sec] received

    Please advise: how can I lower the load average?
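
    One concrete starting point is to size pm.max_children from measured per-worker memory and CPU rather than guessing; a sketch that averages resident memory across the pool (the process name may be php5-fpm on some distributions):

        $ ps --no-headers -o rss -C php-fpm | awk '{sum+=$1; n++} END {printf "%.1f MB avg over %d workers\n", sum/n/1024, n}'

    Dividing the RAM you can spare by that average gives a defensible pm.max_children ceiling, and the second server's load average of 40+ with 40 workers running on 8 cores suggests far more concurrent PHP processes than the CPU can usefully serve; fewer workers with a queue often lowers the load average without reducing throughput.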

