Search Results

Search found 14226 results on 570 pages for 'feature requests'.


  • Plesk 9 VPS - Doesn't reply to NameServer requests (nslookup, etc)

    - by Ben
    Hi, I'm trying to troubleshoot a problem with a new VPS I'm setting up. The VPS is running Plesk 9 on a CentOS 5 system. Everything works fine except that it doesn't serve DNS requests. If I try something like nslookup [somedomain.com] the.ser.ver.ip to test a DNS query, I get the following error: ;; connection timed out; no servers could be reached. I can't telnet to it on port 42 either, so I'm guessing something is blocking the requests - a firewall, maybe? The Plesk firewall module is installed and the nameservers entry is green. Is there any other way I can check what's blocking it on the server? Any help/tips greatly appreciated. Note: HTTP works - I can telnet to the server on port 80 and I can also ping the server. Thanks
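
    One quick way to narrow a problem like this down is to check whether anything is listening on the DNS port at all - DNS answers on port 53 (TCP and UDP), not 42. A minimal Python sketch of the TCP-side check; the IP below is a placeholder for the VPS address:

        import socket

        def port_open(host, port, timeout=3):
            """Return True if a TCP connection to host:port succeeds."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        HOST = "192.0.2.10"             # replace with the VPS IP
        print(53, port_open(HOST, 53))  # DNS
        print(80, port_open(HOST, 80))  # HTTP, known to work, as a sanity check

    A UDP query still needs a real DNS client (dig/nslookup), but if 53/tcp is closed while 80/tcp is open, a firewall rule is the likely suspect.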


  • Local Apache on Windows XP not finishing page requests

    - by asgeo1
    I have Apache 2.2.11 installed locally on my Windows XP (SP3) dev machine, which I set up about 3 months ago. I have just started having a strange problem in the last few weeks. Apache is serving some basic PHP applications like phpMyAdmin. When I make a page request, Apache appears not to finish serving all resources for that page. Firefox shows the "Transferring data from servername..." message, and the page never completes. The same problem happens in Internet Explorer too. I can sometimes tell which resource it is waiting on, because most of the page will render except for some image or similar resource. (Not sure why Firebug doesn't show this.) It doesn't have the problem on every page request - for page requests where most of the resources are cached in my browser, the page request works with no problems. Pages that are very light also work with no problems. However, if I "hard" refresh the page, I will have this problem (probably because it is requesting all page resources). Does anyone know what this could be? It is so strange that it has only just started happening, and I did not make any changes to my system (that I am aware of). I tried playing with the Apache ThreadsPerChild setting, but it did not seem to make a difference.
    UPDATE: I have been doing some more tests. I have been serving the most basic of pages, just a plain HTML file:
    <html>
      <body>
        <h1>testing</h1>
      </body>
    </html>
    If I request this page multiple times in a row, AND each request occurs immediately after the previous one has completed, then 50% of the time the request will time out. However, if I put a 1-2 second gap between requests, there is no problem. This correlates with what I have observed when the browser requests a real application page: when the browser has nothing cached, all of the page resources are requested by the browser in a short amount of time, and this appears to trigger the problem.
    UPDATE 2: Nathan Long has helped me understand the issue a little better with the server-status page (see below). It is weird; it is like the server has a hiccup sending data to the client. The client sits there waiting forever for data that never arrives. Closing the client process does not terminate the connection on the server - the server still has active threads for each previously attempted connection, but they just sit there, not sending any data and never terminating (even though the client is now closed). Only a restart of the server seems to terminate them.
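
    For anyone reproducing this, the back-to-back versus spaced-out behaviour described above is easy to script outside a browser. A rough Python sketch, with the URL as a placeholder for the plain test page:

        import time
        import urllib.request

        URL = "http://localhost/test.html"  # placeholder for the plain HTML test page

        def fetch(url, timeout=10):
            """Fetch the URL once and return the elapsed time, or the exception."""
            start = time.time()
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    resp.read()
                return round(time.time() - start, 3)
            except Exception as exc:   # timeouts and connection errors end up here
                return exc

        # back-to-back requests (the failing case) versus requests spaced 2 s apart
        for delay in (0, 2):
            print("--- delay =", delay, "seconds ---")
            for i in range(10):
                print(i, fetch(URL))
                time.sleep(delay)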


  • Viewing file: protocol requests made from a swf

    - by Erik Vold
    I've got a SWF file in an index.html file; the SWF is basically loading images, but it requests an XML file which details where the image/asset files are located. Now, the index file (which is just HTML, JS, and CSS) works (i.e. the SWF file is working) when used at this URL: http://localhost:8500/core/index.html, which I'm able to do with the ColdFusion 8 single-server development environment. But when I access the file directly with this URL: file:///C:/ColdFusion8/wwwroot/core/index.html, the SWF file does not work. So I'm guessing that the SWF is having trouble locating the files that it needs. The problem is I have no idea what file URLs it is trying to access at the moment, and neither Firebug nor Fiddler is able to tell me what requests are made over the file: protocol. So is there another tool that I can use?


  • Vetting Github Pull requests with Hudson

    - by cdecker
    I've been using Gerrit and Hudson very successfully to test and automatically vote on new checkins in the past, and now I'm wondering whether it is possible to set up Hudson so that it checks GitHub at regular intervals and looks for new pull requests. If there are any, it should apply the patch and run the unit tests against it, adding a comment to the pull request if no failure is detected. It would certainly reduce the amount of work that goes into vetting patches/pull requests. Is that possible at all, or should I stick with my Gerrit setup?
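
    The polling half of what is described is easy to script against the GitHub API; the vetting and commenting half depends on the CI job itself. Below is a minimal Python sketch of the polling loop only - the repository name is a placeholder, and a real job would still have to fetch pr["patch_url"], apply it, run the tests, and post a comment back:

        import json
        import time
        import urllib.request

        REPO = "someuser/someproject"  # placeholder repository
        API = "https://api.github.com/repos/%s/pulls?state=open" % REPO

        seen = set()
        while True:
            # unauthenticated calls work but are subject to GitHub's rate limits
            with urllib.request.urlopen(API) as resp:
                pulls = json.load(resp)
            for pr in pulls:
                if pr["number"] not in seen:
                    seen.add(pr["number"])
                    print("new pull request #%d: %s" % (pr["number"], pr["title"]))
            time.sleep(300)  # poll every five minutes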


  • Apache CPU usage stays at 100% even when there are no requests

    - by Leirith
    Hi, I've been running the Apache HTTP server benchmarking tool (ab) on my new Apache server to test performance. I noticed that with a command like the following:
    ab -n 100000 -c 1000 http://www.mysite.com/
    the CPU is used 100% by the apache2 processes during the test. When the test concludes, usually with the following error just before the last requests are made:
    apr_poll: The timeout specified has expired (70007)
    Total of 99960 requests completed
    the CPU usage remains at 100%, and it is all being consumed by Apache. I am using the worker MPM and running PHP with mod_fcgid. Any advice as to why this happens, or what can be done to stop it, would be appreciated.


  • PHP Requests Being Blocked After Making About 25 in Ten Minutes

    - by Daniel Stern
    We have an administrative portal where we run PHP functions through a JavaScript front end using AJAX for administrative purposes. For example, we might have a function called updateAllDatabaseEntries(), which would call AJAX functions in rapid succession, with each of those functions executing numerous SQL queries. The problem is that after making several successive requests from the same computer (not an excessive amount, maybe 30 in ten minutes), the system stops responding to any PHP or HTTP requests, etc., but ONLY from my computer. From other computers in the office the panel can still be accessed, and access is restored to this computer after about 15 minutes. We believe this is not a glitch but some kind of security feature built into our server, possibly relating to Suhosin - likely well-intentioned, but currently preventing us from running our system administration.
    Server info: Linux 2.6.32-5-xen-amd64 #1 SMP Tue Mar 8 00:01:30 UTC 2011 x86_64 GNU/Linux
    Cheers - DS


  • Repeated requests on our server?

    - by pitty.platsch
    I encountered something strange in the access log of our Apache server which I cannot explain. Requests for webpages that I or my colleagues make from the office's Windows network get repeated by another IP (that we don't know) a couple of seconds later. The user agent repeating our requests is:
    Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2)
    Does anyone have an idea?
    Update: I've got some more information now. The referrer of the replicate is set to the URL I requested before, and it's not the exact same request, as the protocol version is changed from 'HTTP/1.1' to 'HTTP/1.0'. The IP is not just one; it's one of a subnet (80.40.134.*). It's only the first request to a resource that gets repeated, so it seems the "spy" is building up some kind of cache of visited places. The repeater is also picky. I tried random URLs with different HTTP status codes and different file patterns. 301s and 200s are redone, 404s are not. Image extensions seem to be ignored. While doing my tests I discovered that this behavior seems to be common, as I found other clients visiting just after the first requests:
    66.249.73.184 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 10952 "-" "Mediapartners-Google"
    50.17.125.180 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 41312 "-" "Mozilla/5.0 (compatible; proximic; +http://www.proximic.com/info/spider.php)"
    I wasn't aware of this practice, so I don't see it as much of a threat anymore. I still want to find out who this is, though, so any further help is appreciated. I'll check later whether this also happens when I query some other server where I have access to the access logs, and will update here then.
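
    For anyone digging through a similar access log, here is a rough Python sketch that groups requests by URL and prints the ones fetched by more than one distinct client IP; the log path and the combined log format are assumptions:

        import re
        from collections import defaultdict

        LOG = "/var/log/apache2/access.log"   # assumed path, combined log format
        pattern = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*"')

        hits_by_url = defaultdict(list)
        with open(LOG) as fh:
            for line in fh:
                m = pattern.match(line)
                if m:
                    ip, ts, method, url = m.groups()
                    hits_by_url[url].append((ip, line.rstrip()))

        # URLs requested by more than one distinct IP are candidates for the repeater
        for url, hits in hits_by_url.items():
            if len({ip for ip, _ in hits}) > 1:
                print(url)
                for ip, raw in hits:
                    print("   ", raw)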


  • 1K incoming http post requests per second, each with a 10-50K file

    - by Blankman
    I'm trying to figure out what kind of server setup I will need to support:
    - 1K HTTP POST requests per second
    - each POST will contain an XML file between 5-50 KB (average of 25 KB)
    Even if I get a 100 Mb/s connection with my dedicated box (they usually give 10 Mb/s, but you can upgrade), from my calculations that is about 12,000 KB/s, which means about 480 25 KB files per second. So this means I need around 3 servers, each with a 100 Mb/s connection. Would a single server running HAProxy be able to redirect the requests to the other servers, or does this mean I need to get something else that can handle more than 100 Mb/s to proxy things out to the other servers? If my math is off, I'd appreciate any corrections you may have.
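
    The arithmetic in the question checks out, give or take rounding. A quick sketch of the same calculation, ignoring HTTP and TCP overhead:

        # Back-of-the-envelope bandwidth check for the numbers in the question.
        link_mbps = 100      # uplink per dedicated box, megabits per second
        avg_file_kb = 25     # average XML payload, kilobytes
        target_rps = 1000    # desired POST requests per second

        link_kb_per_s = link_mbps * 1000 / 8        # ~12,500 KB/s
        files_per_s = link_kb_per_s / avg_file_kb   # ~500 requests/s per link
        servers_needed = target_rps / files_per_s   # ~2.0

        print(link_kb_per_s, files_per_s, servers_needed)
        # 12500.0 500.0 2.0 -> two saturated links, so 2-3 servers leaves headroom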


  • Apache redirect some requests to another server

    - by mucie
    We just bought a new server. We want our old server to answer the HTTPS connections (because of the SSL certificate) and the new server to answer the rest. The new server is ready, but I don't know how to redirect requests to the new one.
    mydomain.com => old machine
    ip 10.10.10.41 => new machine
    Requests will come in through mydomain.com. If a request is HTTPS, respond; otherwise, redirect it to 10.10.10.41. How should I configure Apache for this situation?


  • How to implement proper identification and session management on JSON POST requests?

    - by IBr
    I have a minor messaging connection to the server from a website via JSON requests. I have a single endpoint which distributes requests according to identification data. I am using an asynchronous server and handle data as it comes in. Now I am thinking about extending the requests with some kind of session. What is the best way to define a session? Get a cookie when registered and use the token with each request for as long as the session runs? Should I implement a timeout for the token? Are there alternative methods? Can I cache tokens for same-origin requests? What could I use on the client side (web browser)? How about safety? What techniques should I use to throw away requests with malformed data, or data that is too big, without choking the server? Should I worry?
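
    One common shape for this (not the only one) is a signed, expiring token handed out at login, stored in a cookie, and echoed back with every JSON request; the server then only needs the signing key, not per-session storage. A minimal Python sketch of that idea:

        import hashlib
        import hmac
        import secrets
        import time

        SECRET = secrets.token_bytes(32)   # in practice, load a fixed key from config
        TTL = 3600                         # token lifetime in seconds

        def issue_token(user_id: str) -> str:
            expires = str(int(time.time()) + TTL)
            payload = "%s:%s" % (user_id, expires)
            sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
            return "%s:%s" % (payload, sig)

        def verify_token(token: str):
            """Return the user id for a valid, unexpired token, else None."""
            try:
                user_id, expires, sig = token.rsplit(":", 2)
            except ValueError:
                return None
            payload = "%s:%s" % (user_id, expires)
            expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
            if not hmac.compare_digest(sig, expected):
                return None                # tampered
            if time.time() > int(expires):
                return None                # expired
            return user_id

    For the malformed/oversized-request worry, the usual approach is independent of the token: reject bodies over a fixed byte limit before parsing, and treat anything that fails to parse as JSON as a bad request.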


  • Kicking off the ODI12c Blog Series

    - by Madhu Nair
    It is always exciting to talk about a new release, especially one as significant as the newly released Oracle Data Integrator 12c (ODI12c). Why? Because it is packed with features that address many requirements for the user community. If you missed the sneak previews at this year's Oracle Open World sessions, do not despair, because over the coming weeks the ODI12c team of developers and consultants will be sharing their perspective on key features, experiences and best practices for ODI12c right here through a series of blogs. Before diving into feature details in subsequent blogs, it helps to understand the overall themes that went into developing ODI12c.
    Let the Productivity Flow: Let us face it - designing for developer user experience is always top of mind for any enterprise software. ODI12c addresses this through the introduction of declarative flow-based mappings (the topic of our next ODI blog, by the way!). Reusability has been addressed through the introduction of reusable mappings, cutting down development time for repeated logic. An enhanced debugger makes life easy for complex granular debugging scenarios. Unique repository IDs now allow you to manage multiple repositories.
    Performance is Paramount: Another major area of focus for ODI12c is performance. Increased parallelism (like the multiple target table load feature), reduced session overheads and the ability to customize load plans through physical views all empower the user to tune run times for extreme performance.
    [Image: mapping showing multiple target table load]
    [Image: physical representation allowing users to choose execution options]
    Integrating it all: This release is not just about ODI12c as a standalone product. Closer integration with Oracle GoldenGate now brings Change Data Capture (CDC) capabilities into ODI12c. Oracle Warehouse Builder (OWB) jobs can now be executed and monitored from within ODI12c. And ODI12c is fast becoming the de facto standard for Oracle Applications that need data integration in their solutions, the best example being the latest release of the Oracle BI Applications technology.
    Even as we bring you in-depth write-ups about the features, there are some great previews and resources already out there, like this super entry by beta partner Rittman Mead Consulting and this ODI12c Key Features White Paper. You can download ODI12c here (this post helps). The best, though, is the upcoming Executive Webcast featuring customers and executives who have seen and conceived the product. Don’t miss it!


  • ColdFusion Server crash after thousands of HTTP requests

    - by Jason Bristol
    We are running ColdFusion 8 on a Windows Server 2003 VPS, with an API that exposes student records to a partner API through a connector. Our API returns around 50k student records serialized in XML format pretty seamlessly. My question originates from something very frightening that happened today when we tested our connector against our partner's API: our entire website and web host went down. We assumed that our host was just having some issues, and after 4 hours with no resolution and no response from their customer service, we finally got a response from them claiming that they had an "unauthorized user" in their network. After our server was back up, we were unable to connect to our website, as if the web service or ColdFusion itself had frozen. This is really where my concern comes from, as I fear we may have overloaded the web service. As I mentioned before, we tried sending over 50k HTTP POST requests over to our partner's API; however, everything stopped after around 1.6k. Is this bad practice, or is there some sort of rate limiting I can relax somewhere in the server configuration? We managed to find a workaround, but it bypasses our connector, which is critical to our design. This would have been a one-time deal, as the purpose of so many requests was to populate our partner's website with current data; after that, hourly syncs will keep requests down to around 100 per hour.
    UPDATE
    Our partner API is owned and operated by Pardot. We are converting students to prospects by passing student data to their API, which unfortunately only seems to accept one student at a time. For that reason we have to do all 50k requests individually. Our server has 4GB of RAM and an Intel Core 2 Duo @ 2.8GHz, running Windows Server 2003 SP2. I monitored the server during a 100 student sync, a 400 student sync, and a 1.4k student sync with the following results:
    100 students - 2.25GB of memory, 30-40% CPU utilization, 0.2-0.3% network bandwidth
    400 students - 2.30GB of memory, 30-50% CPU utilization, 0.2-1.0% network bandwidth
    1.4k students - 2.30GB of memory, 30-70% CPU utilization, 0.2-1.0% network bandwidth
    I know this is a far cry from 50k students, but I don't want to risk taking down our CMS system again, assuming that was the cause.
    To give you a look at our code:
    <cfif (#getStudents.statusCode# eq "200 OK")>
      <cftry>
        <cfloop index="StudentXML" array="#XmlSearch(responseSTUD,'/students/student')#">
          <cfset StudentXML = XmlParse(StudentXML)>
          <cfhttp url="#PARDOT_CMS_UPSERT#" method="post" timeout="10000">
            <cfhttpparam type="url" name="user_key" value="#PARDOT_CMS_USERKEY#">
            <cfhttpparam type="url" name="api_key" value="#api_key#">
            <cfhttpparam type="url" name="email" value="#StudentXML.student.email.XmlText#">
            <cfhttpparam type="url" name="first_name" value="#StudentXML.student.first.XmlText#">
            <cfhttpparam type="url" name="last_name" value="#StudentXML.student.last.XmlText#">
            <cfhttpparam type="url" name="in_cms" value="#StudentXML.student.studentid.XmlText#">
            <cfhttpparam type="url" name="company" value="#StudentXML.student.agencyname.XmlText#">
            <cfhttpparam type="url" name="country" value="#StudentXML.student.countryname.XmlText#">
            <cfhttpparam type="url" name="address_one" value="#StudentXML.student.address.XmlText#">
            <cfhttpparam type="url" name="address_two" value="#StudentXML.student.address2.XmlText#">
            <cfhttpparam type="url" name="city" value="#StudentXML.student.city.XmlText#">
            <cfhttpparam type="url" name="state" value="#StudentXML.student.state_province.XmlText#">
            <cfhttpparam type="url" name="zip" value="#StudentXML.student.postalcode.XmlText#">
            <cfhttpparam type="url" name="phone" value="#StudentXML.student.phone.XmlText#">
            <cfhttpparam type="url" name="fax" value="#StudentXML.student.fax.XmlText#">
            <cfhttpparam type="url" name="output" value="simple">
          </cfhttp>
        </cfloop>
        <cfcatch type="any">
          <cfdump var="#cfcatch.Message#">
        </cfcatch>
      </cftry>
    </cfif>
    UPDATE 2
    I checked the CF logs and found a couple of these:
    "Error","jrpp-8","06/06/13","16:10:18","CMS-API","Java heap space The specific sequence of files included or processed is: D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm, line: 675 "
    java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Arrays.java:2882)
        at java.io.CharArrayWriter.write(CharArrayWriter.java:105)
        at coldfusion.runtime.CharBuffer.replace(CharBuffer.java:37)
        at coldfusion.runtime.CharBuffer.replace(CharBuffer.java:50)
        at coldfusion.runtime.NeoBodyContent.write(NeoBodyContent.java:254)
        at cfapi2ecfm292155732._factor30(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:675)
        at cfapi2ecfm292155732._factor31(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:662)
        at cfapi2ecfm292155732._factor36(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:659)
        at cfapi2ecfm292155732._factor42(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:657)
        at cfapi2ecfm292155732._factor37(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
        at cfapi2ecfm292155732._factor44(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:456)
        at cfapi2ecfm292155732._factor38(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
        at cfapi2ecfm292155732._factor46(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:455)
        at cfapi2ecfm292155732._factor39(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
        at cfapi2ecfm292155732._factor47(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:453)
        at cfapi2ecfm292155732.runPage(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:1)
        at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:192)
        at coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:366)
        at coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65)
        at coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:279)
        at coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:48)
        at coldfusion.filter.MonitoringFilter.invoke(MonitoringFilter.java:40)
        at coldfusion.filter.PathFilter.invoke(PathFilter.java:86)
        at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:70)
        at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
        at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
        at coldfusion.filter.NoCacheFilter.invoke(NoCacheFilter.java:46)
        at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
        at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
        at coldfusion.CfmServlet.service(CfmServlet.java:175)
        at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:89)
        at jrun.servlet.FilterChain.doFilter(FilterChain.java:86)
    Looks like I might have crashed the JVM in CF. Is there a better way to do this? We are thinking of just exporting all records initially as a CSV file and importing it into Pardot, seeing as we will never have to do a request this large again.


  • DHCPDISCOVER requests from an off-by-one MAC address

    - by Aleksandr Levchuk
    On a Linux DHCP server I'm getting a bunch of these log lines:
    dhcpd: DHCPDISCOVER from 00:30:48:fe:5c:9c via eth1: network 192.168.2.0/24: no free leases
    I don't have any machines with 00:30:48:fe:5c:9c, and I don't intend to give out an IP to 00:30:48:fe:5c:9c (whatever that could be). I tracked down the server that this is coming from and killed all the DHCP clients that were running, but the DHCPDISCOVER requests do not stop. I can prove that this is the sending server by pulling the Ethernet cable - the requests stop. The strange thing is that the sending server only has 2 interfaces, which are:
    00:30:48:fe:5c:9a
    00:30:48:fe:5c:9b
    What can be the cause of the off-by-one address? Who could be sending the requests?
    Details
    On the DHCP client:
    root@n34:~# ip link
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 100
        link/ether 00:30:48:fe:5c:9a brd ff:ff:ff:ff:ff:ff
    3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
        link/ether 00:30:48:fe:5c:9b brd ff:ff:ff:ff:ff:ff
    4: ib0: <BROADCAST,MULTICAST> mtu 2044 qdisc noop state DOWN qlen 256
        link/infiniband 80:00:00:48:fe:80:00:00:00:00:00:00:00:02:c9:03:00:08:81:9f brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
    5: ib1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2044 qdisc pfifo_fast state UP qlen 256
        link/infiniband 80:00:00:49:fe:80:00:00:00:00:00:00:00:02:c9:03:00:08:81:a0 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
    Same info:
    root@n34:~# ifconfig -a
    eth0      Link encap:Ethernet  HWaddr 00:30:48:fe:5c:9a
              inet addr:192.168.2.234  Bcast:192.168.2.255  Mask:255.255.255.0
              inet6 addr: fe80::230:48ff:fefe:5c9a/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:72544 errors:0 dropped:0 overruns:0 frame:0
              TX packets:152773 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:100
              RX bytes:4908592 (4.6 MiB)  TX bytes:89815782 (85.6 MiB)
              Memory:dfd60000-dfd80000
    eth1      Link encap:Ethernet  HWaddr 00:30:48:fe:5c:9b
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
              Memory:dfde0000-dfe00000
    ib0       Link encap:UNSPEC  HWaddr 80-00-00-48-FE-80-00-00-00-00-00-00-00-00-00-00
              BROADCAST MULTICAST  MTU:2044  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:256
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
    ib1       Link encap:UNSPEC  HWaddr 80-00-00-49-FE-80-00-00-00-00-00-00-00-00-00-00
              inet addr:192.168.3.234  Bcast:192.168.3.255  Mask:255.255.255.0
              inet6 addr: fe80::202:c903:8:81a0/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:2044  Metric:1
              RX packets:1330 errors:0 dropped:0 overruns:0 frame:0
              TX packets:255 errors:0 dropped:5 overruns:0 carrier:0
              collisions:0 txqueuelen:256
              RX bytes:716415 (699.6 KiB)  TX bytes:17584 (17.1 KiB)
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:8 errors:0 dropped:0 overruns:0 frame:0
              TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:560 (560.0 B)  TX bytes:560 (560.0 B)
    The nodes were imaged with Perseus, which uses kexec instead of rebooting.


  • Thunderbird "undo send" feature (like in Gmail)?

    - by josh
    I was wondering if there is a Thunderbird addon that gives functionality similar to Google's "undo send" feature? It basically just delays the sent message for a few seconds, allowing the user to cancel the send if desired. I find this very useful in Gmail and would love the option now that I have started using Thunderbird. I've seen the Send Later addon, but that isn't really what I want. I want something very similar to Google's feature, where it automatically delays each message before sending, with an easy-to-access option to cancel the send. Thanks!


  • Group policy software installation feature on server 2008

    - by Force Flow
    What is the proper procedure for distributing software updates using Group Policy's software installation feature? For example, if I want to install Java, and Java 1.7u6 is already added as a package, should I:
    A) Remove the Java 1.7u6 package (selecting the "allow users to continue to use" option), and add Java 1.7u7?
    B) Add Java 1.7u7 and specify that it is an update to the existing 1.7u6 package? (Will this install the oldest version first, then install each update one after the other, or will it just install the latest package?)
    Note that this question is geared toward the operation of the software installation feature of Group Policy, not specifically the behavior of the Java installers. This could easily apply to the installation of Adobe Flash, Adobe Reader, or any other common software application with frequent updates.


  • Disabling RAID feature on HP Smart Array P400

    - by Arie K
    I'm planning to use ZFS on my system (HP ML370 G5, Smart Array P400, 8 SAS disks). I want ZFS to manage all disks individually, so it can utilize better scheduling (i.e. I want to use the software RAID feature in ZFS). The problem is that I can't find a way to disable the RAID feature on the RAID controller. Right now, the controller aggregates all of the disks into one big RAID-5 volume, so ZFS can't see the individual disks. Is there any way to accomplish this setup?


  • Belkin router issue

    - by walr1
    Hi, my cousin and I bought a wireless Belkin router for testing purposes. Please keep in mind that for all of our tests there is no Ethernet cable plugged in, just the router's power cord. We have been trying to "flood" it with ping requests on its default address, 192.168.2.1, but it isn't doing a thing; it isn't even logging any attempts of too many requests. I've disabled the firewall, disabled the ping request block, etc. Any idea why this thing isn't being affected? We sent 4 million packets and it hasn't done a thing. Quite odd! Thanks.


  • Disable passwd history feature with remember=0

    - by user1915177
    PAM version - pam-0.79
    Is setting "remember" to 0 in the /etc/pam.d/common-passwd file allowed as a way to disable the password history feature? With "remember=0" in /etc/pam.d/common-passwd, I am observing a memory fault when running the passwd command as a user. Browsing the source, the _set_ctrl function in support.c of the pam_unix module handles wrong values of remember, but currently it is not robust enough to handle 0, which is a wrong value. So is the valid (and only) option to disable the history feature to not include the "remember" option in /etc/pam.d/common-passwd and not set up the /etc/security/opasswd file? The following link mentions that setting "remember" to 0 has no effect on the remember value in /etc/security/opasswd:
    https://lists.fedorahosted.org/pipermail/linux-pam-commits/2011-June/000060.html


  • Apachebench on node.js server returning "apr_poll: The timeout specified has expired (70007)" after ~30 requests

    - by Scott
    I just started working with node.js, and doing some experimental load testing with ab is returning an error at around 30 requests or so. I've found other pages showing much better concurrency numbers than I am getting, such as: http://zgadzaj.com/benchmarking-nodejs-basic-performance-tests-against-apache-php Are there some critical server configuration settings that need to be changed to achieve those numbers? I've watched memory in top and I still see a decent amount of free memory while running ab, and I watched mongostat as well and am not seeing anything that looks suspicious. The command I'm running, and the error, is:
    ab -k -n 100 -c 10 postrockandbeyond.com/
    This is ApacheBench, Version 2.0.41-dev <$Revision: 1.121.2.12 $> apache-2.0
    Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Copyright (c) 2006 The Apache Software Foundation, http://www.apache.org/
    Benchmarking postrockandbeyond.com (be patient)...apr_poll: The timeout specified has expired (70007)
    Total of 32 requests completed
    Does anyone have any suggestions on things I should look into that may be causing this? I'm running it on OS X Lion, but have also run the same command on the server with the same results.
    EDIT: I eventually solved this issue. I was using TTAPI, which was connecting to turntable.fm through websockets. On the homepage, I was connecting on every request, so after a certain number of connections everything would fall apart. If you're running into the same issue, check whether you are hitting external services on each request.


  • Munin Aggregated Graphs Configuration Error

    - by Sparsh Gupta
    I tried making some Munin aggregated graphs, but somehow I am unable to make the configuration work. I think I have followed the instructions, but since it's not working, I would love some assistance or guidance as to what I am doing wrong. I want to aggregate (sum) the total number of requests per second all my nginx servers are doing combined. The configuration looks like:
    [TRAFFIC.AGGREGATED]
        update no
        requests.graph_title nGinx requests
        requests.graph_vlabel nGinx requests per second
        requests.draw LINE2
        requests.graph_args --base 1000
        requests.graph_category nginx
        requests.label req/sec
        requests.type DERIVE
        requests.min 0
        requests.graph_order output
        requests.output.sum \
            lb1.visualwebsiteoptimizer.com:nginx_request_lb1.visualwebsiteoptimizer.com_request.request \
            lb3.visualwebsiteoptimizer.com:nginx_request_lb2.visualwebsiteoptimizer.com_request.request \
            lb3.visualwebsiteoptimizer.com:nginx_request_lb3.visualwebsiteoptimizer.com_request.request
    The Munin graph I want to aggregate is http://exchange.munin-monitoring.org/plugins/nginx_request/details
    Thanks
    Sparsh Gupta


  • "Go to file" feature in various editors

    - by hekevintran
    In TextMate there is a feature called "Go to file" that is used for file navigation. It is a box where you type the name of a file in your project, and it will use fuzzy matching to generate a list of candidate files from which you can select. Other editors have this feature, but they each give it a different name:
    Vim: fuzzyfinder
    Emacs: fuzzy-find-in-project
    TextMate: Go to file (fuzzy)
    Eclipse: OpenResource (not fuzzy)
    Eclipse: GotoFile (fuzzy)
    Komodo: Go to File (not fuzzy)
    Netbeans: Go to file (not fuzzy)
    Does jEdit, Geany, or Ultraedit have this feature?


  • Apache / PHP Begins to Deny SQL Requests after about 2000

    - by Daniel Stern
    We have a web page on our server that we use to run administrative scripts. For example, we might run the script "unenrolStudents()", which runs 5,000 SQL SET commands one after another and sets 5,000 student entries in an SQL database to unenrolled. However, we are finding that after running a few thousand queries (it is not totally consistent) we will be "locked out" by our server.
    Symptoms of the lockout:
    - unable to connect to the server with WinSCP
    - opening PuTTY with that connection shows a blank screen (no login/password prompt)
    - clearing cookies/cache in Chrome does NOT fix the lockout
    - other computers in the office ALSO become locked out
    - the lockout can be triggered by a high frequency of requests (10,000 in 1 second) or by fewer over time (10,000 in 500 seconds will still cause a lockout even though the frequency is much lower)
    We believe this is a security feature of our own Apache server. I know we are using Suhosin, but I didn't configure it, so I don't know. How can I disable this locking effect so that I can confidently run all my SQL requests and have them go through? Has anyone else dealt with this and found workarounds?
    Thanks
    DS


  • Nginx and Gunicorn hanging on GET requests

    - by whatWhat
    I'm using Nginx + Gunicorn, which is serving my Django project. All GET requests hang for ~1 min. The content seems to be available immediately, as I can see it in the browser inspector, but the browser itself looks like it's still waiting for more data. Here's my Nginx config:
    # allow for up to 3 connections per second.
    limit_req_zone $binary_remote_addr zone=one:10m rate=3r/s;
    server {
        listen 80;
        server_name example.com;
        root /var/www/example.com/example/;
        # serve directly - analogous for static/staticfiles
        location /media/ {
            # this changes depending on your python version
            root /home/example/;
        }
        location /static/ {
            # if asset versioning is used
            if ($query_string) {
                expires max;
            }
            root /var/www/example.com;
        }
        location / {
            # Allow for a burst of 50.
            limit_req zone=one burst=50 nodelay;
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_connect_timeout 10;
            proxy_read_timeout 10;
            proxy_pass http://localhost:8001/;
        }
        # what to serve if upstream is not available or crashes
        error_page 500 502 503 504 /media/50x.html;
    }
    My Gunicorn config:
    bind = "127.0.0.1:8001"
    workers = 3
    worker_class = "gevent"
    Is there anything obvious that would be causing the requests to stay open for so long?


  • TCP Server Memory management: #Connections Vs. #Requests

    - by Andrew
    Given that there is no theoretical limit to the number of concurrent TCP connections a Windows 2008 server can handle, the only real constraint is that each connection consumes server memory. Unfortunately, memory is not unlimited (and I want to utilize only physical memory). For example, let's say we have 2GB of server memory. There are two extreme cases:
    Case 1: If we allocate a 64KB buffer for each connection (only to receive the incoming request), then 32,768 connections can consume all 2GB of memory. That leaves no memory to queue or process incoming requests from those connections.
    Case 2: On the other hand, say a single connection (or very few) continuously keeps sending request buffers (for example, video streaming from one connection to another) and the server cannot process them in time. Those buffers will pile up in the server and eventually occupy most of the server's memory, leaving no memory for new connections thereafter.
    This is the real dilemma in server design that has been bugging me badly for the last many days. If I can decide on the max size of the request buffer per connection and the max number of requests to allow in the queue per connection, then, based on available server memory, that automatically sets a limit on the max number of concurrent connections. How do I decide on these limits to achieve the best performance and throughput? I am just looking for perfect utilization of server resources. Are there any standard guidelines or empirical data that someone can share with me, please?
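
    For the budgeting itself, the arithmetic is simple once the per-connection numbers are fixed. A small sketch using the figures from Case 1 plus an assumed per-connection request queue:

        # Rough connection budget from the numbers in the question.
        total_mem = 2 * 1024**3      # 2 GB of physical memory reserved for the server
        recv_buf = 64 * 1024         # receive buffer per connection (Case 1)
        queue_depth = 4              # assumed max queued requests per connection
        max_request = 64 * 1024      # assumed max size of a single queued request

        per_connection = recv_buf + queue_depth * max_request
        max_connections = total_mem // per_connection
        print(per_connection, max_connections)   # 327680 bytes -> 6553 connections

    One common way to keep Case 2 from blowing past that budget is to stop reading from a socket once its queue is full, so TCP flow control pushes back on the sender instead of the queue growing.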

