Search Results

Search found 21908 results on 877 pages for 'content catalog'.


  • Tomcat with virtual hosts - 404

    - by Thardas
    I have a CentOS 5.2 server set up with Apache 2.2.3 and Tomcat 5.5.27. The server hosts multiple virtual hosts connected to multiple Tomcats. For instance we have one Tomcat for development and testing and one Tomcat for production: project.demo.us.com points to the dev Tomcat and project.us.com points to the production Tomcat. Here's the virtual host's configuration:

        <VirtualHost *:80>
            ServerName project.demo.us.com
            CustomLog logs/project.demo.us.com/access_log combined env=!VLOG
            ErrorLog logs/project.demo.us.com/error_log
            DocumentRoot /var/www/vhosts/project.demo.us.com
            <Directory /var/www/vhosts/project.demo.us.com>
                Allow from all
                AllowOverride All
                Options -Indexes FollowSymLinks
            </Directory>
            ########## ########## ##########
            JkMount /project/* online
        </VirtualHost>

    The JkMount line defines that we use the online worker, and our workers.properties contains this:

        worker.list=..., online, ...
        worker.online.port=7703
        worker.online.host=localhost
        worker.online.type=ajp13
        worker.online.lbfactor=1

    And Tomcat's conf/server.xml contains:

        <Connector port="7703" enableLookups="false" redirectPort="8443"
                   protocol="AJP/1.3" URIEncoding="UTF-8"
                   maxThreads="80" minSpareThreads="10" maxSpareThreads="15"/>

    I'm not sure what redirectPort is, but I tried to telnet to that port and there's no one answering, so it shouldn't matter? Tomcat's webapps directory contains project.war and the server automatically deployed it under a project directory which contains index.jsp and hello.html. The latter is for static debugging purposes. Now when I try to access http://project.demo.us.com/project/index.jsp, I get Tomcat's "HTTP Status 404 - The requested resource () is not available." The same thing happens to hello.html, so it's not working with static content either. Apache's access_log contains:

        88.112.152.31 - - [10/Aug/2009:12:15:14 +0300] "GET /demo/index.jsp HTTP/1.1" 404 952 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2"

    I couldn't find any mention of the request in Tomcat's logs. If I shut down this specific Tomcat, I no longer get Tomcat's 404 but Apache's 503 Service Temporarily Unavailable, so I should be configuring the correct Tomcat. Is there something obvious that I'm missing? Is there any place where I could find out what path the Tomcat is using to look for requested files?
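    The question walks the JkMount → worker → AJP connector chain, so one quick check is to test each link while bypassing Apache. A diagnostic sketch only - the HTTP port 8080 and the Tomcat install path are assumptions, not values from the question:

        # Is the AJP connector really listening where workers.properties points?
        netstat -tlnp | grep 7703

        # Ask Tomcat directly (needs an HTTP connector; 8080 is assumed) whether
        # the "project" context is deployed and serving static files.
        curl -I http://localhost:8080/project/hello.html

        # The AJP request carries the virtual host name, so a <Host> mismatch in
        # server.xml can yield 404s even for a deployed webapp.
        grep -n "<Host" /path/to/tomcat/conf/server.xml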


  • How to unlock and remove a protected partition from Prestigio USB stick?

    - by mr.b
    Ok, so, I have one of those fancy schmancy devices, which is given to me by a frustrated friend of mine. Device is a Prestigio Leather 8GB, which identifies itself to Linux host as: Bus 001 Device 006: ID 1307:0165 Transcend Information, Inc. 2GB/4GB Flash Drive Kernel messages as USB device is plugged in: kernel: [ 2769.580042] usb 1-9: new high speed USB device using ehci_hcd and address 7 kernel: [ 2769.714782] scsi8 : usb-storage 1-9:1.0 kernel: [ 2770.713937] scsi 8:0:0:0: Direct-Access 8192MB flash drive 1.00 PQ: 0 ANSI: 2 kernel: [ 2770.714535] scsi 8:0:0:1: Direct-Access 8192MB flash drive 1.00 PQ: 0 ANSI: 2 kernel: [ 2770.715734] sd 8:0:0:0: Attached scsi generic sg3 type 0 kernel: [ 2770.716108] sd 8:0:0:1: Attached scsi generic sg4 type 0 kernel: [ 2770.722175] sd 8:0:0:0: [sdc] 962560 512-byte logical blocks: (492 MB/470 MiB) kernel: [ 2770.722657] sd 8:0:0:0: [sdc] Write Protect is on kernel: [ 2770.731078] sd 8:0:0:1: [sdd] 14012416 512-byte logical blocks: (7.17 GB/6.68 GiB) kernel: [ 2770.731215] sdc: kernel: [ 2770.738251] sd 8:0:0:1: [sdd] Write Protect is off kernel: [ 2770.880328] kernel: [ 2770.885876] sd 8:0:0:0: [sdc] Attached SCSI removable disk kernel: [ 2770.887442] sdd: unknown partition table kernel: [ 2771.049605] sd 8:0:0:1: [sdd] Attached SCSI removable disk So, symptoms are typical for U3-like devices: two separate devices inside of a single flash device. Windows sees it also as two identical usb devices, and mounts two separate drives to system, whereas first one presents itself as a CDROM device, holding a write-protected content, and second is a regular flash-disk partition, that "can" be written to. However, it seems like it's broken in some weird way, since it won't let me write anything to it, format it, nothing, but that's not the issue right now. Question: How can I unlock entire USB stick so it appears to system as a single, 8GB device which can be partitioned and used normally, without restrictions? Since it appeared to be an U3 device, I have tried standard utilities: both U3 Uninstaller by u3.com (found on SoftPedia), and opensource u3_tool from sourceforge (on both Windows and Linux). First utility failed to even detect USB stick as U3 device (simply stood idle while I re-plugged stick several times), while second tool failed with some obscure error about SCSI command unable to do something (I might be able to provide exact errors when I switch back to windows). u3_tool -i /dev/sg3 (Display device info) fails with u3_partition_info() failed: Device reported command failed: status 1 ...and every other option fails with same error, minus first part which states which command precisely has failed. So, apparently, this isn't a U3 device. Or, if it is, it doesn't behave like one. I read on a few occasions that this device protection is done by special command sent to device which tells it to lock itself, and so there should be an unlock command, that would set drive straight. Does anyone have any idea about what could I do to this device to fix it? P.S. I also mentioned a problem with being unable to use second "drive", but I'll tackle that problem when (and if) I manage to merge those two devices into one...
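    Before hunting for a vendor unlock tool, it can help to confirm what the bridge controller actually reports for each LUN. A diagnostic sketch using sg3_utils (the /dev/sg3 and /dev/sg4 names are taken from the kernel log above; the package name varies by distribution):

        # Vendor/product/revision strings of the CD-ROM-like, write-protected LUN
        sudo sg_inq /dev/sg3

        # Same for the writable 7.17 GB LUN
        sudo sg_inq /dev/sg4

        # Capacity as the controller reports it, to compare against the advertised 8 GB
        sudo sg_readcap /dev/sg4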


  • How do I get the latest FastCGI and PHP versions to peacefully coexist on IIS 6?

    - by BHelman
    I have been going round and round trying to get any sort of PHP running on IIS 6. I somehow managed to successfully get version 5.1.4 running using the php5isapi.dll file. However, I want to upgrade a website to begin using a Content Management System. I have never dug into CMS before so I'm open to programs that are easy to use. I am currently looking into TomatoCMS and ImpressCMS - but that's beside the point. I have never done an installation with PHP before and I think I'm getting familiar with how it works. However the current situation is this. Microsoft's Web Platform Installer 2.0 installed FastCGI for me. I need to upgrade to PHP 5.3.1 for a CMS system. So I downloaded the Windows installer and let it go at it. After consulting several other blog articles, I believe I know how it is supposed to work but I am currently not having luck.

    THE SETUP: *.php is a registered extension in IIS 6 for all websites (on Win 2k3). The application that it calls is C:\Windows\system32\inetsvr\fcgiext.dll, like it should. The fcgiext.ini config has the proper lines:

        [Types]
        php=PHP

        [PHP]
        ext=C:\program files\PHP\php-cgi.exe

    And the php.ini file also has the correct configs. All extensions are disabled and I changed the correct things for FastCGI. And everything is registered correctly with the PATH variable. Everything is exactly how it should be. BUT when I launch the "info.php" page () on another computer, I get the following error:

        FastCGI Error
        The FastCGI Handler was unable to process the request.
        Error Details:
        * Section [PHP] not found in config file.
        * Error Number: 1413 (0x80070585).
        * Error Description: Invalid index.
        HTTP Error 500 - Server Error.
        Internet Information Services (IIS)

    A quick Google search reveals that I have it all setup correctly as far as the INI's go and the mapping of the php extension. I am completely at a loss. Does anyone have any suggestions? Although the server is hosting three small websites, I don't really care what I have to do to it to get it to work.
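    For comparison, the FastCGI extension ships a configuration script that writes the extension mapping and the process-pool section together, and its documented key for the pool is ExePath rather than ext. A hedged sketch of re-registering PHP that way - the paths are the question's, and the -section/-extension/-path switches are from the extension's documentation as I recall them, so verify against your copy of fcgiconfig.js:

        REM Re-create the PHP mapping and the [PHP] section in fcgiext.ini in one step
        cscript %WINDIR%\system32\inetsrv\fcgiconfig.js -add -section:"PHP" ^
            -extension:php -path:"C:\Program Files\PHP\php-cgi.exe"

        REM The resulting fcgiext.ini should contain, at minimum:
        REM   [Types]
        REM   php=PHP
        REM   [PHP]
        REM   ExePath=C:\Program Files\PHP\php-cgi.exe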


  • CPU Lockup when loading folder of bookmarks in Firefox

    - by Gary M. Mugford
    I am running Firefox 3.6 on WinXPSP3 on a duo Core machine with 4G of memory. I am also running Avast! free anti-virus and ZoneAlarm free firewall, both latest versions. Within the last month, my service provider basically forced me to upgrade to a Docsis 3.0 compliant modem (offered me a deal I couldn't turn down). At that point, I also upgraded to FF3.6. Basically, I am not unhappy with many aspects of this switchover, EXCEPT, when I now load a folder of bookmarks (anywhere from 10 to 38) I get nothing like the load times I experienced a couple of months back. It's taking minutes rather than seconds. And the first bookmark in one of the folders, GMail, rarely loads before timing out. I have used the old trick of powering off the cable modem before my day's work. This used to fix 'load-lag' in the old days. I have switched from my ISP's DNS server to Google, OpenDNS and back. And nothing seems to work currently. It's not my DNS cache. That's been flushed and secondary computers also have the same issue when loading folders. I have watched the CPU usage and loading the folder will send VSMon (ZoneAlarm) usage over 40 percent, AVastSvc (Avast!) over 30 and Firefox will then push the needle to 100. There's a brief burst by SVCHost when the others falter in devouring cycles. Then everything subsides to single digits once the last tab is loaded. The only other nominal nastiness is VSMon ALWAYS hitting 50 percent when ANY program starts downloading content from the internet. If I shutdown ZoneAlarm (and VSMon with it), the same slow loading takes place, but this time System is running 50% plus, again driving the usage to 100 per cent. I have my doubts FF3.6 vs FF3.5 is an issue, since the other computers are still running 3.5 and suffer the same issue. Those computers are on, but inactive, most of the time, being backups. Obviously, when the CPU hits 100, I can't do much of anything in FireFox OR in other programs. Video play through WMPC or VLC is extremely choppy, although it doesn't seem to affect the audio. Any ideas what I can try next? Thanks, GM


  • Is Steam for Mac effectively running as superuser?

    - by godDLL
    When you download the client it does not weigh too much, and seems to do very little. Inside the app bundle there is a script that—upon inspecting the environment and deciding you're not running Linux—launches the client, which downloads the full support environment and resources. For this to happen (all of this is saved inside the bundle, the app bundle gets updated in this process) Steam wants Universal Access for Assistive Devices, and your password. Cacheable resources, preferences (like keyboard shortcuts), support files (like game hardware requirement lookup tables) live inside the bundle, not in ~/Library/{Application Support|Preferences|Cache}; games' data get dumped into ~/Documents/Steam Content. I'd describe myself as a bit OCD (which really says a lot), and I wouldn't care that much still. I'd go comb this hairy mess and find out where stuff is, when and if I need to, even if it's in an unfamiliar place; that does not actually tick me off. Well, a little bit. What makes me concerned is the way Steam needs both Access for Assistive Devices, and my password to run. The former gives it the ability to talk very intimately with running apps and the underlying system; while the latter (admin account) could very well give it and it's publishers unrestricted access to all my software, hardware and data. With publishers like Rockstar using scene NOCD cracks to publish their games on Steam, I'm not so sure I'm OK with this. I'd like more games made available for the MacOS X and all the pretty machines that run it, but this arrangement does not seem very Mac-like to me. It looks like Valve is going around system security measures and best practices, foregoing sandboxing, code signing, relatively sane structured organization; all the things that would appeal to someone who's no fun at parties at all, and will die alone, in his long dead mother's basement… wait. Right. Anyway. Can we get some input on Steam for Mac security at the end-user machine, from someone who understands how Accessibility API works, whether games distributed on Steam can read and write outside the user homefolder, collect data from other running apps, or similar?


  • IIS/ASP.NET performance incident - Perfmon Current Anonymous Users going through roof but Requests/sec low

    - by Laurence
    Setup: ASP.NET 4.0 website on IIS 6.0 on Windows 2003 64-bit, 8 CPUs, 16GB memory, separate SQL 2005 DB server.

    Had a serious slowdown today with an otherwise fairly well performing ASP.NET site. For a period of a couple of hours all page requests were taking a very long time to be served - e.g. 30-60s compared to the usual 2s. The w3wp.exe's CPU and memory usage on the webserver was not much higher than normal. The application pool was not in the middle of recycling (and it hadn't recycled for several hours). Bottlenecks in the database were ruled out - no blocks occurring and query results were being returned quickly. I couldn't make any sense of it and set up the following Perfmon counters:

    - Current Anonymous Users (for the site in question)
    - Get Requests/sec (ditto)
    - Requests/sec for the ASP.NET application running the site

    Get Requests/sec was averaging 100-150. Requests/sec for ASP.NET was averaging 5-10. However Current Anonymous Users was around 200. And then as I was watching, Current Anonymous Users began to climb steeply, going up to about 500 within a few minutes. All this time Get Requests/sec and Requests/sec for ASP.NET were, if anything, going down.

    I did a whole load of things (in a panic!) to try to get the site working, like shutting it down, recycling the app pool, and adding another worker process to the pool. I also extended the expiration time for content (in IIS under HTTP Headers) in an attempt to lower the number of requests for static files (there are a lot of images on the site). The site is now back to normal, and the counters are fairly steady and reading (added a Current Connections counter):

    - Current Anonymous Users: average 30
    - Get Requests/sec: average 100
    - Requests/sec for ASP.NET: 5
    - Current Connections: average 300

    I have also observed an inverse relationship between Get Requests/sec and Current Anonymous Users. Usually both are fairly steady, but there will be short periods when Get Requests/sec goes down dramatically and Current Anonymous Users goes up in a perfect mirror image. Then they flip back to their usual levels. So, my questions are:

    - Thinking of the original performance issue - if w3wp.exe CPU and memory usage were normal and there was no DB bottleneck, what could explain page requests taking 20 times longer to be served than usual?
    - What other counters should I be looking at if this happens again?
    - What explains the inverse relationship between Get requests/sec and Current Anonymous Users?
    - What could explain Current Anonymous Users going from 200 to 500 within a few minutes?

    Many thanks for any insight into this.
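    For the "what should I watch next time" part, the built-in typeperf tool can log the queue-related counters alongside the ones above. A sketch only - the counter paths assume English counter names and a single w3wp worker process, so adjust them to the server:

        typeperf "\ASP.NET\Requests Queued" ^
                 "\ASP.NET Applications(__Total__)\Requests In Application Queue" ^
                 "\Web Service(_Total_)\Current Connections" ^
                 "\Process(w3wp)\% Processor Time" ^
                 -si 5 -o aspnet-counters.csv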


  • IE8 Refuses to run Javascript from Local Hard Drive

    - by Josh Stodola
    I have a problem that just started at work recently, and the network manager is certain he did not change anything with the group policy. Anyways, here is a detailed description of the problem. My machine is Windows XP SP3, and I use IE8 to browse. We have McAfee anti-virus software that I am unable to configure. I use the following file to test...

        <!DOCTYPE html>
        <html>
        <head>
            <title>Javascript Test</title>
        </head>
        <body>
            <script type="text/javascript">
                document.write("<h1>PASS</h1>");
            </script>
            <noscript>
                <h1>FAIL</h1>
            </noscript>
        </body>
        </html>

    When I open this file from the C: drive, it fails every time. If I execute it anywhere else (a local/remote web server or a mapped network drive), it works just fine. When I am simply browsing the Internet, Javascript on web sites works just fine. It is only failing on files run from my C: drive. Additionally, I have had a couple of other programmers in the department try this file on their C: drive, and it works fine for them, so I don't believe it is a group policy thing. I need to fix this because I do extensive testing from my C: drive, and I am accustomed to doing so. I don't want to get into the habit of moving files to a different drive just to test. Things I have tried:

    - Enabled "Allow Active Content to Run Files on My Computer" in Options | Advanced | Security
    - Enabled "Allow Active Scripting" in Options | Security | Custom Level
    - Verified that "Script" was not checked as disabled in Developer Toolbar
    - Added localhost to Trusted Sites in Options
    - Disabled McAfee completely (momentarily, with help from the network admin)
    - Used an older DOCTYPE in my test HTML page
    - Re-installed IE8 completely
    - Ran regsvr32 on JScript.dll
    - Slammed keyboard

    I am sure that there is a setting somewhere that will fix this problem, possibly in the registry. I would not be surprised if it was related to the developer toolbar. At this point I do not know where else to look. Can anyone help me resolve this problem? EDIT: Regardless of the bounty, this issue is still ongoing.
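    One commonly cited workaround not in the list above is adding a "Mark of the Web" comment so IE treats the local file as belonging to the Internet zone rather than the locked-down Local Machine zone. A sketch of the same test file with that line added - the (0014) is the character count of the URL that follows, and placement at the top of the file is the convention IE itself uses when saving pages:

        <!-- saved from url=(0014)about:internet -->
        <!DOCTYPE html>
        <html>
        <head><title>Javascript Test</title></head>
        <body>
            <script type="text/javascript">
                document.write("<h1>PASS</h1>");
            </script>
            <noscript><h1>FAIL</h1></noscript>
        </body>
        </html>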


  • VPN PPTP connection unable to pass through Linux iptables

    - by user221844
    I have set up a windows VPN server behind Linux - Ubuntu box that is working as firewall and proxy server. Now I want people from outside to be able to connect to the VPN server, but the connection is not being established and I get on the client an error 619. I have checked the problem on the internet and it seems a firewall issue. what should I do to make the connection established through the firewall? here is below the information about my setup Firewall-External-IF-IP: 172.16.1.100 Firewall-LAN-IF-IP: 192.168.1.1 VPN-Server-IP: 192.168.1.10 and below is my iptables file content: #Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014 *filter :INPUT ACCEPT [162000:140437619] :FORWARD ACCEPT [23282:27196133] :OUTPUT ACCEPT [185778:143961739] :LOGGING - [0:0] -A INPUT -p gre -j ACCEPT -A INPUT -s 192.168.1.10/32 -p tcp -m tcp --sport 1723 -j ACCEPT -A INPUT -s 192.168.1.10/32 -p udp -m udp --sport 1723 -j ACCEPT -A FORWARD -s 192.168.1.0/24 -o EXT_IF -j ACCEPT -A FORWARD -s 192.168.1.0/24 -i EXT_IF -m state --state RELATED,ESTABLISHED -j ACCEPT -A FORWARD -d 192.168.1.10/32 -i EXT_IF -o INT_IF -p tcp -m tcp --dport 1723 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT -A FORWARD -s 192.168.1.10/32 -i INT_IF -o EXT_IF -p tcp -m tcp --sport 1723 -m state --state RELATED,ESTABLISHED -j ACCEPT -A FORWARD -d 192.168.1.10/32 -i EXT_IF -o INT_IF -p gre -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT -A FORWARD -s 192.168.1.10/32 -i INT_IF -o EXT_IF -p gre -m state --state RELATED,ESTABLISHED -j ACCEPT -A OUTPUT -p gre -j ACCEPT -A OUTPUT -d 192.168.1.10/32 -p tcp -m tcp --dport 1723 -j ACCEPT -A OUTPUT -d 192.168.1.10/32 -p udp -m udp --dport 1723 -j ACCEPT COMMIT # Completed on Thu May 29 12:40:18 2014 # Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014 *nat :PREROUTING ACCEPT [17865:1053739] :INPUT ACCEPT [5490:357281] :OUTPUT ACCEPT [3723:223677] :POSTROUTING ACCEPT [3726:223870] -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128 -A PREROUTING -d 172.16.1.100/32 -i EXT_IF -p tcp -m tcp --dport 1723 -j DNAT --to-destination 192.168.1.10 -A PREROUTING -d 172.16.1.100/32 -i EXT_IF -p gre -j DNAT --to-destination 192.168.1.10 -A PREROUTING -i -h -A POSTROUTING -s 192.168.1.0/24 -o EXT_IF -j MASQUERADE COMMIT # Completed on Thu May 29 12:40:18 2014 # Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014 *mangle :PREROUTING ACCEPT [22695965:17811993005] :INPUT ACCEPT [13818180:11522330171] :PREROUTING ACCEPT [17865:1053739] :INPUT ACCEPT [5490:357281] :OUTPUT ACCEPT [3723:223677] :POSTROUTING ACCEPT [3726:223870] -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128 -A PREROUTING -d 172.16.1.100/32 -i EXT_IF -p tcp -m tcp --dport 1723 -j DNAT --to-destination 192.168.1.10 -A PREROUTING -d 172.16.1.100/32 -i EXT_IF -p gre -j DNAT --to-destination 192.168.1.10 -A PREROUTING -i -h -A POSTROUTING -s 192.168.1.0/24 -o EXT_IF -j MASQUERADE COMMIT # Completed on Thu May 29 12:40:18 2014 # Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014 *mangle :PREROUTING ACCEPT [22695965:17811993005] :INPUT ACCEPT [13818180:11522330171] :FORWARD ACCEPT [8527694:6271564562] :OUTPUT ACCEPT [14748508:11899678536] :POSTROUTING ACCEPT [23271280:18170828012] COMMIT # Completed on Thu May 29 12:40:18 2014 hope that I find the solution here ....!! :(
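    Error 619 through a NAT box is very often a GRE-tracking problem rather than a port-forwarding one. A sketch of the minimal pieces, assuming eth0 is the real external interface (EXT_IF/INT_IF in the rules above look like placeholders) and a stock Ubuntu kernel for the helper-module names:

        # PPTP needs the GRE connection-tracking/NAT helpers for DNAT to work reliably
        modprobe nf_conntrack_pptp
        modprobe nf_nat_pptp

        # Forward the control channel (TCP/1723) and the GRE data channel to the VPN server
        iptables -t nat -A PREROUTING -i eth0 -d 172.16.1.100 -p tcp --dport 1723 -j DNAT --to-destination 192.168.1.10
        iptables -t nat -A PREROUTING -i eth0 -d 172.16.1.100 -p gre -j DNAT --to-destination 192.168.1.10
        iptables -A FORWARD -i eth0 -d 192.168.1.10 -p tcp --dport 1723 -j ACCEPT
        iptables -A FORWARD -i eth0 -d 192.168.1.10 -p gre -j ACCEPT
        iptables -A FORWARD -o eth0 -s 192.168.1.10 -m state --state RELATED,ESTABLISHED -j ACCEPT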


  • Increase Timeout for remote sessions in Debian 5 Lenny

    - by Ash
    I always get a remote connection time out when using PuTTy and also when i send emails with attachments from a mail sever installed on Debian. I always get this error. I'm not sure if this is firewall or the new Debian 5 installation which i made. Is there any settings i need to fix after fresh install. Any inputs are highly appreciated. This error is pulling my brains out. Thanks. Error: 2011-01-10 15:21:13,454 INFO [btpool0-23://69.19.19.89/service/upload?fmt=extended] [[email protected];mid=72;ip=10.10.01.78;ua=Mozilla/5.0 (Windows;; U;; Windows NT 5.2;; en-US;; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 (.NET CLR 3.5.30729);] FileUploadServlet - File upload failed org.apache.commons.fileupload.FileUploadBase$IOFileUploadException: Processing of multipart/form-data request failed. timeout at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:367) at org.apache.commons.fileupload.servlet.ServletFileUpload.parseRequest(ServletFileUpload.java:126) at com.zimbra.cs.service.FileUploadServlet.handleMultipartUpload(FileUploadServlet.java:430) at com.zimbra.cs.service.FileUploadServlet.doPost(FileUploadServlet.java:412) at javax.servlet.http.HttpServlet.service(HttpServlet.java:727) at com.zimbra.cs.servlet.ZimbraServlet.service(ZimbraServlet.java:181) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166) at com.zimbra.cs.servlet.SetHeaderFilter.doFilter(SetHeaderFilter.java:79) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at org.mortbay.servlet.UserAgentFilter.doFilter(UserAgentFilter.java:81) at org.mortbay.servlet.GzipFilter.doFilter(GzipFilter.java:132) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:218) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418) at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.handler.rewrite.RewriteHandler.handle(RewriteHandler.java:230) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.handler.DebugHandler.handle(DebugHandler.java:77) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:543) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:939) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:405) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:413) at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:451) Caused by: org.mortbay.jetty.EofException: timeout at 
org.mortbay.jetty.HttpParser$Input.blockForContent(HttpParser.java:1172) at org.mortbay.jetty.HttpParser$Input.read(HttpParser.java:1122) at org.apache.commons.fileupload.MultipartStream$ItemInputStream.makeAvailable(MultipartStream.java:977) at org.apache.commons.fileupload.MultipartStream$ItemInputStream.read(MultipartStream.java:887) at java.io.InputStream.read(InputStream.java:85) at org.apache.commons.fileupload.util.Streams.copy(Streams.java:94) at org.apache.commons.fileupload.util.Streams.copy(Streams.java:64) at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:362) ... 33 more


  • configuring slime in emacs

    - by CodeKingPlusPlus
    I am in the process of configuring SLIME for Emacs. So far I have read about basic functionality for Common Lisp, such as C-c C-q, which invokes the command slime-close-parens-at-point and places the proper number of parens at point (the cursor position). Another command that seemed cool was C-c C-c, which passes the code you are editing in a buffer to the REPL and "compiles" it. Why won't these commands work for me?

    Anyway, I downloaded SLIME via M-x list-packages and do not seem to have this functionality (C-h w followed by any of these commands tells me that the commands do not exist). I then saw a bunch of other SLIME extensions such as slime-repl, slime-fuzzy and hippie-expand-slime, so I again used M-x list-packages and downloaded them. Still I did not have these commands. Here is the content of my Emacs file relevant to SLIME:

        ;;; Common Lisp and Slime
        (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-20130626.1151")
        (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-repl-201000404")
        (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/hippie-expand-slime-20130226.1656")
        (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-fuzzy-20100404")
        (require 'slime)
        (setq slime-lisp-implementations
              `((sbcl ("/usr/bin/sbcl"))
                (ecl ("/usr/bin/ecl"))
                (clisp ("/usr/bin/clisp" "-q -I"))))
        (require 'slime-repl)
        (require 'slime-fuzzy)
        (require 'hippie-expand-slime)

    When I execute M-x slime I get the following message in the inferior-lisp buffer, where I can execute Common Lisp code (however, shouldn't this be the slime-repl, since I required it?):

        STYLE-WARNING: redefining EMACS-INSPECT (#<BUILT-IN-CLASS T>) in DEFMETHOD
        STYLE-WARNING: Implicitly creating new generic function STREAM-READ-CHAR-WILL-HANG-P.
        WARNING: These Swank interfaces are unimplemented:
        (DISASSEMBLE-FRAME SLDB-BREAK-AT-START SLDB-BREAK-ON-RETURN)
        ;; Swank started at port: 46533.

    Then a slime-error buffer is created with the contents:

        Invalid protocol message:
        Symbol "CREATE-REPL" not found in the SWANK package.
        Line: 1, Column: 28, File-Position: 28
        Stream: #<SB-IMPL::STRING-INPUT-STREAM {10056B9C33}>
        (:emacs-rex (swank:create-repl nil) "COMMON-LISP-USER" t 5)

    How should I modify my Emacs file to give me the functionality of those commands? Am I not loading the necessary files? Do I need to install an additional package? If you need more information let me know! All help is much appreciated!
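    The "CREATE-REPL not found in the SWANK package" error is the classic symptom of the Emacs-side SLIME and the Lisp-side Swank coming from different versions, or of the REPL contrib never being loaded. A sketch of a minimal init that lets slime-setup pull in the contribs - the paths and SBCL location are copied from the question, and whether the ELPA package bundles slime-autoloads is an assumption to verify:

        (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-20130626.1151")
        (require 'slime-autoloads)
        (setq inferior-lisp-program "/usr/bin/sbcl")
        ;; slime-fancy loads slime-repl, slime-fuzzy, slime-autodoc, etc. in one go,
        ;; and arranges for the matching Swank contribs to be loaded on the Lisp side.
        (slime-setup '(slime-fancy))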


  • ssh refuses to authenticate keys

    - by MixturaDementiae
    So I am setting up a connection between my machine [fedora 17] and a virtual machine running in Virtual Box in which is running CentOS 5. I have installed openssh from the repositories on CentOS, and I have configured everything as it follows: Protocol 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key SyslogFacility AUTHPRIV PermitRootLogin yes RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile /home/pigreco/.ssh/authorized_keys PasswordAuthentication no ChallengeResponseAuthentication yes GSSAPIAuthentication yes GSSAPICleanupCredentials yes UsePAM yes AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE AcceptEnv XMODIFIERS X11Forwarding yes Subsystem sftp /usr/libexec/openssh/sftp-server this is the configuration file sshd_config on the server i.e. on the CentOS. Moreover I have created a public/private key pair as usual on the .ssh/ folder in my home directory in my OS, i.e. Fedora, and then I've copied with scp the id_rsa.pub to the server and then I have appended its content to the file .ssh/authorized_keys on the server machine. The error that I get is the following: OpenSSH_5.9p1, OpenSSL 1.0.0j-fips 10 May 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 50: Applying options for * debug1: Connecting to 192.168.100.13 [192.168.100.13] port 22. debug1: Connection established. debug1: identity file /home/mayhem/.ssh/identity type -1 debug1: identity file /home/mayhem/.ssh/identity-cert type -1 debug1: identity file /home/mayhem/.ssh/id_rsa type 1 debug1: identity file /home/mayhem/.ssh/id_rsa-cert type -1 debug1: identity file /home/mayhem/.ssh/id_dsa type -1 debug1: identity file /home/mayhem/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA 16:e5:72:d1:37:94:1b:5e:3d:3a:e5:da:6f:df:0c:08 debug1: Host '192.168.100.13' is known and matches the RSA host key. debug1: Found key in /home/mayhem/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,keyboard-interactive debug1: Next authentication method: gssapi-keyex debug1: No valid Key exchange context debug1: Next authentication method: gssapi-with-mic debug1: Unspecified GSS failure. Minor code may provide more information Cannot determine realm for numeric host address debug1: Unspecified GSS failure. Minor code may provide more information Cannot determine realm for numeric host address debug1: Unspecified GSS failure. Minor code may provide more information debug1: Unspecified GSS failure. 
Minor code may provide more information Cannot determine realm for numeric host address debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/mayhem/.ssh/id_rsa debug1: Server accepts key: pkalg ssh-rsa blen 279 Agent admitted failure to sign using the key. debug1: Trying private key: /home/mayhem/.ssh/identity debug1: Trying private key: /home/mayhem/.ssh/id_dsa debug1: Next authentication method: keyboard-interactive Do you have some good suggestion of what I can do? thank you
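    The interesting line in that output is "Agent admitted failure to sign using the key": the server accepted the key, but the local ssh-agent (often GNOME Keyring on Fedora) could not produce a signature. A sketch of the two usual fixes - adding the identity to the agent on the client and tightening permissions on the server:

        # On the Fedora 17 client: hand the key to the running agent
        ssh-add ~/.ssh/id_rsa

        # On the CentOS 5 server: public-key auth silently fails if these are group/world-writable
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        chmod go-w ~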


  • How could I let Skydrive desktop sync to MicroSD in Windows 8 tablet?

    - by peSHIr
    I have a Samsung Slate 7 tablet with (now) Windows 8 on it. This machine has a 64 Gb SSD and I have a 64 Gb MicroSD card in it. I also have a Skydrive on my main Microsoft ID that contains about 45 Gb of content. With Windows and some development stuff installed, my Skydrive will not fit on the main drive of the tablet. (Besides, my idea was to keep data on the memory card anyway, to make it easier to repave the machine without data loss if need be.) My problem should now be clear: I want to install the Skydrive desktop app to sync my Skydrive to the MicroSD card. This is not possible, as Skydrive does not allow syncing files to removable drives. I have tried a number of things already, but none of them worked: Use the mklink command line tool to create a directory link/junction from a folder name on SSD to a folder on the MicroSD and then try to install Skydrive sync to the SSD link folder. Skydrive however still recognizes this as something it does not want to sync onto. The various different filter drivers mentioned on Agnipulse (including the Hitachi one) that should make windows see some or all of the removable drives in the system as fixed drives do not seem work on (64-bit) Windows 8: they either can't be installed, do nothing and/or cause Windows 8 to go into Automatic Repair mode when rebooting. The Lexar BootIt app seems to be meant to flip the relevant bit in the on-board drive controller of supported USB pen drives, but I tried it anyway. Of course it did nothing to how the MicroSD card was seen. I have now run out of ideas, it seems, and I was wondering if anyone here has a solution to let Windows 8 see the MicroSD memory card in my tablet as a fixed drive instead of removable drive, or some other way of getting the Skydrive desktop to sync my Skydrive data to that MicroSD card. And to be complete: this is not a duplicate question of this or this as those ask about getting USB drives multiple partitions to work on Windows XP. This question is specific about getting desktop Skydrive to sync to MicroSD card in Windows 8, which seems to be a question I have not seen on superuser so far.


  • Amazon Elastic Terms and Conditions

    - by PP
    WARNING: Have you really read Amazon's Terms and Conditions? Would anybody seriously agree to this term on Amazon's Elastic services sign up page? 6.2. Restrictions with Respect to Use of Marks. Your use of any trademarks, service marks, service or trade names, logos, and other designations of AWS and its affiliates or licensors, hereinafter "Marks", shall strictly comply with the following provisions. You may use the Marks in conjunction with the display of the AWS Content and for the purpose of indicating that your Application was created using the Services. You may use the Marks only in the form in which we make them available to you and not in any manner that disparages Amazon, its affiliates or its licensors, or that otherwise dilutes any Mark. Other than your limited right to use the Marks as provided in this Agreement, we and our licensors retain all right, title, and interest in and to the Marks. You will not at any time now or in the future challenge or assist others to challenge the validity of the Marks, or attempt to register confusingly similar trademarks, trade names, service marks or logos. You agree to follow our the Trademark Use Guidelines posted on the Amazon Web Services™ Trademark Guidelines page (the "Trademark Guidelines") as those guidelines may change from time to time. The Trademark Guidelines are incorporated herein by reference. You must immediately discontinue use of any Mark as specified by us at any time in writing. We may modify any Marks provided to you at any time, and upon notice, you will use only the modified Marks and not the old Marks. Other than as specified in this Agreement, you may not use any trademark, service mark, trade name or other business identifier of Amazon or its affiliates unless you obtain Amazon's or its affiliates' prior written consent. The foregoing prohibition includes the use of "amazon," any other trademark of AWS, Amazon or its affiliates, or variations or misspellings of any of them, in the name of an Application or in a URL to the left of the top-level domain name (e.g., ".com", ".net", "co.uk", etc.)-for example, a URL such as "amazon.mydomain.com", "amaozn.com" or "amazonauctions.net" are expressly prohibited. Any use you make of the Marks shall inure to our benefit and you hereby irrevocably assign to us all right, title and interest in the same. In addition, you agree not to misrepresent or embellish the relationship between us and you, for example by implying that we support, sponsor, endorse, or contribute money to you or your business endeavors. If you are a large company and you want to use Amazon's services you must agree that: you may not use the word "amazon" in any domain name you control (even if you are a forestry company) you may not use any word Amazon choose to trademark in any domain you control (regardless of whether the name has a different meaning/purpose in your industry) from now until forever you will never dispute any claim Amazon makes on any word you or anybody else uses Seriously, who would sign such a thing?


  • ruby on rails gitorious setup on ubuntu

    - by dogmatic69
    Ive been trying to install gitorious for a while now which required ruby and rails etc. Ive finally got rails pages serving but cant finish the installation of gitorious because the gem version is too new. the error logs say please run 'rake ultrasphinx:configure' and that gives rake ultrasphinx:configure (in /var/www/apps/gitorious) rake aborted! uninitialized constant ActiveSupport::Dependencies::Mutex /var/www/apps/gitorious/Rakefile:10:in `require' (See full trace by running task with --trace) From searching google this is beacuse of the wrong gem verison. Cant find a way to down grade it. apparently sudo gem update --system 1.4.2 should do the trick but ubuntu 10.10 does not like this. ERROR: While executing gem ... (RuntimeError) gem update --system is disabled on Debian, because it will overwrite the content of the rubygems Debian package, and might break your Debian system in subtle ways. The Debian-supported way to update rubygems is through apt-get, using Debian official repositories. If you really know what you are doing, you can still update rubygems by setting the REALLY_GEM_UPDATE_SYSTEM environment variable, but please remember that this is completely unsupported by Debian. So i added export REALLY_GEM_UPDATE_SYSTEM=1 to .bashrc and reloaded it with . ~/.bashrc and still the same. ive tried various forms of setting this environmental variable with no luck. Ive also been told on #gitorious irc channel to add the file config/initializers/rubygems.rb with the line require "thread" to it. This has done nothing. EDIT: Just found another way which was rvm install rubygems 1.4.2 and it gave: Removing old Rubygems files... Installing rubygems dedicated to ruby-1.8.7-p334... Retrieving rubygems-1.4.2 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 288k 100 288k 0 0 282k 0 0:00:01 0:00:01 --:--:-- 414k Extracting rubygems-1.4.2 ... Installing rubygems for /home/ubuntu/.rvm/rubies/ruby-1.8.7-p334/bin/ruby ERROR: Error running 'GEM_PATH="/home/ubuntu/.rvm/gems/ruby-1.8.7-p334:/home/ubuntu/.rvm/gems/ruby-1.8.7-p334@global:/home/ubuntu/.rvm/gems/ruby-1.8.7-p334@global" GEM_HOME="/home/ubuntu/.rvm/gems/ruby-1.8.7-p334" "/home/ubuntu/.rvm/rubies/ruby-1.8.7-p334/bin/ruby" "/home/ubuntu/.rvm/src/rubygems-1.4.2/setup.rb"', please read /home/ubuntu/.rvm/log/ruby-1.8.7-p334/rubygems.install.log WARN: Installation of rubygems did not complete successfully. TL;DR please tell me how to downgrade rubygems on ubuntu 10.10 or upgrade gitorious to work with 1.6.2 gems
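    One detail worth checking before giving up on the downgrade: an exported variable in ~/.bashrc never survives sudo's environment reset, so the override has to ride on the same command line. A sketch (the gem version to pin is taken from the question):

        # Pass the override through sudo explicitly instead of relying on ~/.bashrc
        sudo env REALLY_GEM_UPDATE_SYSTEM=1 gem update --system 1.4.2

        # Confirm which rubygems the rake task will now see
        gem --version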


  • eAccelerator settings for PHP/Centos/Apache

    - by bobbyh
    I have eAccelerator installed on a server running Wordpress using PHP/Apache on CentOS. I am occassionally getting persistent "white pages", which presumably are PHP Fatal Errors (although these errors don't appear in my error_log). These "white pages" are sprinkled here and there throughout the site. They persist until I go to my eAccelerator control.php page and clear/clean/purge my caches, which suggests to me that I've configured eAccelerator improperly. Here are my current /etc/php.ini settings: memory_limit = 128M; eaccelerator.shm_size="64", where shm.size is "the amount of shared memory eAccelerator should allocate to cache PHP scripts" (see http://eaccelerator.net/wiki/Settings) eaccelerator.shm_max="0", where shm_max is "the maximum size a user can put in shared memory with functions like eaccelerator_put ... The default value is "0" which disables the limit" eaccelerator.shm_ttl="0" - "When eAccelerator doesn't have enough free shared memory to cache a new script it will remove all scripts from shared memory cache that haven't been accessed in at least shm_ttl seconds. By default this value is set to "0" which means that eAccelerator won't try to remove any old scripts from shared memory." eaccelerator.shm_prune_period="0" - "When eAccelerator doesn't have enough free shared memory to cache a script it tries to remove old scripts if the previous try was made more then "shm_prune_period" seconds ago. Default value is "0" which means that eAccelerator won't try to remove any old script from shared memory." eaccelerator.keys = "shm_only" - "These settings control the places eAccelerator may cache user content. ... 'shm_only' cache[s] data in shared memory" On my phpinfo page, it says: memory_limit 128M Version 0.9.5.3 and Caching Enabled true On my eAccelerator control.php page, it says 64 MB of total RAM available Memory usage 77.70% (49.73MB/ 64.00MB) 27.6 MB is used by cached scripts in the PHP opcode cache (I added up the file sizes myself) 22.1 MB is used by the cache keys, which is populated by the Wordpress object cache. My questions are: Is it true that there is only 36.4 MB of room in the eAccelerator cache for total "cache keys" (64 MB of total RAM minus whatever is taken by cached scripts, which is 27.6 MB at the moment)? What happens if my app tries to write more than 22.1 MB of cache keys to the eAccelerator memory cache? Does this cause eAccelerator to go crazy, like I've seen? If I change eaccelerator.shm_max to be equal to (say) 32 MB, would that avoid this problem? Do I also need to change shm_ttl and shm_prune_period to make eAccelerator respect the MB limit set by shm_max? Thanks! :-)
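    To make the questions concrete, this is roughly the shape of the php.ini change being asked about. A sketch only - the right numbers depend on how much the Wordpress object cache writes, and the value formats should be checked against the eAccelerator settings documentation:

        ; Let eAccelerator evict stale entries instead of degrading when shm fills up
        eaccelerator.shm_size         = "64"
        eaccelerator.shm_max          = "1048576"   ; cap a single eaccelerator_put() entry
        eaccelerator.shm_ttl          = "900"       ; drop scripts unused for 15 minutes
        eaccelerator.shm_prune_period = "120"       ; retry cleanup at most every 2 minutes
        eaccelerator.keys             = "shm_only"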


  • DVD playback with Windows Media Player 11 works fine, but when copied to HDD and then played back, the audio stutters

    - by stakx
    I have several DVDs with short documentaries on it. Since the notebook I'm using (a Dell Latitude E6400) has only one DVD drive, and I might play back those short movies very often, I thought of copying them to the HDD and playing them back from there. However, I've run into a problem, namely stuttering audio. Problem description: When I play back these movies directly from DVD (with Windows Media Player 11 under Windows Vista), everything works fine. Smooth video, no significant audio problems (only the occasional click). But as soon as I copy any of these DVDs to the HDD and try to play them back from there (e.g. using the wmpdvd://drive/title/chapter?contentdir=path protocol, I get stuttering audio — audio playback sounds like a machine gun for a third of a second or so, approx. every 8 seconds. I have tried converting the VOB files from the DVD to another format (ie. ripping), but that resulted in a noticeable downgrade of picture quality. Therefore I thought it best to keep the files in their original format, if possible. Still, I suspect that the stuttering audio is due to some (de-)muxing problem, and that changing the file format might help. (After all, video playback is fine; therefore I don't think that the hardware is too slow for playback.) Only thing is, I don't know how to convert the VOB files to another Windows Media Player-compatible format without quality loss. I hope someone can help me, or give me further pointers on things I could try out to get HDD playback to work without the problem described. Some things I've tried so far, without any success: VOB2MPG, in order to convert the .vob file to a .mpg file. But that changes only the A/V container, not the content. No re-encoding takes place at all. Re-encoding with MPlayer/MEncoder. Lots of quality loss there, and I frankly haven't got the time to test all possible settings combinations available. Disabling all plug-ins, equalizers, etc. in Windows Media Player. Disabling all hardware acceleration on the audio playback device. Further info on the VOB files I'm trying to playback: The video format is MPEG ES, PAL 720x576 pixels @ 24/25 frames per second. The sound stream is uncompressed PCM, 16-bit stereo @ 48kHz. (Might it help if I somehow re-encoded the sound stream at a lower resolution, or as an MP3? If so, how would I do this without changing the video stream?) P.S.: I am limited to using Windows Media Player (11). (I previously tried MPlayer btw., but the video playback quality was surprisingly bad.)
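    On the "re-encode just the sound without changing the video stream" question: any remuxer that can stream-copy video can do it. A sketch with ffmpeg - the VOB filename is only an example, and MP2 audio is chosen because it is the usual companion of MPEG-2 program streams and far cheaper to demux than 48 kHz LPCM:

        ffmpeg -i VTS_01_1.VOB -c:v copy -c:a mp2 -b:a 384k movie.mpg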


  • What presentation software suits my needs?

    - by claws
    Background: I'm teaching biology to 12th grade students. The syllabus I'm teaching is huge. I mean literally, very huge. There is a lot for students to remember. There are no less than 1000 facts (weird names, dates etc) for students to remember. They'll have to remember all of them, they don't have a choice. The notes I compiled for their learning itself is upto 80 printed pages(Just the bullet outline & facts). That's just one chapter. We have 34 chapters. Also my students are very hardworking, they study upto 8-10hrs per day (Yeah! we are from India :). So, I want to ensure maximum retaining by the students at each and every stage (Teaching & Learning). I'm trying to as many memory training techniques as possible. I'm trying to incorporate, mnemonics, strong visual aids (pictures, 3D-animations, real videos etc.), spaced repetition etc. I think MS powerpoint is not suitable for my needs: There are about 200 slides per chapter. Its very easy for students to get lost while teaching. Because the problem with powerpoint is that it gives facts (as bullets) but it doesn't exploit the association & organization (Concept Map) of the content, which helps students learn quickly. I found an amazing software called XMind. You can see the screenshot here. Problem is that it is not as powerpoint in terms of powerpoint. This software can be used for just for concept maps. In the above screenshot, each topic occupies a single slide. I have an Image/picture(Detailed huge picture) and about 5-10 bullet points and probably a video or an animation of somethings. And this XMind is not good at presenting, in terms that it doesn't allow me to set what to present after what. I want to present a top down view, with a slide for each topic. PS: I Don't like prezi.com. I tried but it simply is too confusing for my students. It zooms here and there. I didn't tried it but I've seen few presentations.


  • How to start nginx on a different port (other than 80)

    - by Zhao Peng
    Hi I am a newbie on nginx, I tried to set it up on my server(running Ubuntu 4), which already has apache running. So after I apt-get install it, I tried to start nginx. Then I get the message like this: Starting nginx: the configuration file /etc/nginx/nginx.conf syntax is ok configuration file /etc/nginx/nginx.conf test is successful [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use) [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use) [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use) [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use) [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use) That makes sense as Apache is using port 80. Then I tried to modify nginx.conf, I reference some articles, so I changed it like so: server { listen 8080; location / { proxy_pass http://94.143.9.34:9500; proxy_set_header Host $host:8080; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Via "nginx"; } After saving this and try to start nginx again, I still get the same error as previously. I cannot really find a related post about this, could any good people shred some light? Thanks in advance :) ========================================================================= I should post all the content in conf here: user www-data; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; # multi_accept on; } http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; tcp_nodelay on; gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; server { listen 81; location / { proxy_pass http://94.143.9.34:9500; proxy_set_header Host $host:81; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Via "nginx"; } } } mail { See sample authentication script at: http://wiki.nginx.org/NginxImapAuthenticateWithApachePhpScript auth_http localhost/auth.php; pop3_capabilities "TOP" "USER"; imap_capabilities "IMAP4rev1" "UIDPLUS"; server { listen localhost:110; protocol pop3; proxy on; } server { listen localhost:143; protocol imap; proxy on; } } Basically, I changed nothing except adding the server part.
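    One thing the config above does not change is the packaged default site pulled in by the include /etc/nginx/sites-enabled/* line, and on Ubuntu that default server block is what binds port 80. A sketch of checking and disabling it, assuming the stock Debian/Ubuntu package layout:

        # Find every listen directive nginx will actually load
        grep -R "listen" /etc/nginx/nginx.conf /etc/nginx/sites-enabled/ /etc/nginx/conf.d/

        # Disable the default site (or edit its "listen 80;" line), then re-test and start
        sudo rm /etc/nginx/sites-enabled/default
        sudo nginx -t && sudo service nginx start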


  • Chrome Problems on Windows 8

    - by Akshat Mittal
    There are a lot of problems with Chrome (24.0.1312.14 beta || But all this happened before update also) on Windows 8. Problems and explanations are listed below: Google Chrome re-draw time: When I switch tabs, the window retains the content of the previous tab and displays that even if I move my mouse, if only refreshes (re-draws) when there is a change on the webpage (like on hover) or I do a select all (or scroll). One thing to note is that the hover and select happens on the real page and not the retained image-like thing of the older webpage. Chrome is slow and laggy: Websites such as Facebook and Twitter (and more) have gone extremely laggy on Chrome (Win 8). When I was using Windows 7, I never experienced a lag or something. Also when using HTML-5 Websites, the transition (the -webkit-transition in CSS) goes extremely slow at times. Plugins Crash: Plugins like Flash Player, Shockwave Player, and more that are in-built into Chrome Crashes a lot, even when doing simple tasks like playing YouTube Videos, displaying ads or something. Chrome Crashes: Chrome has crashed over 100 times in the past month. Google Chrome just crashes randomly or I don't know the reason. Random Page crashes: Chrome results chrome://crash/(Copy-Paste this in address bar) on random pages even when the page is just loaded, I understand that this can happen on heavy HTML5 or JS websites but what about HTML only websites! Most of the things above happens on Super User also, Super User never had any problem when using Chrome on Windows 7. UPDATE 1: @magicandre1981 Commented for trying to disable Hardware Acceleration. I tried it, it somewhat solved the problem but din't fix it. I am still experiencing all the above issues but less frequently (maybe because Chrome Restarted Completely) UPDATE 2: @avirk asked me to try a Stable Version of Chrome and Firefox, I din't experience any lag in Firefox, a little (negligible) lag in Chrome 22 (Maybe because its a new copy of Chrome, I haven't used it much). Is anybody else experiencing such issues? Does anybody has a solution to any of these? Any Help is appreciated! Thank You!


  • serving mp3s to mobile devices is flooding nginx with partial requests

    - by drumfire
    I am serving mp3s with a minimalistic nginx server. What I see in my log files is that there are a lot of requests, in particular from AppleCoreMedia and sometimes Android user agents, that flood the server with short requests. Sometimes they keep requesting to download the same partial content for a very long time; sometimes more than an hour. For example:

        "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
        "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
        "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
        [...]

    I also get a lot, though not as many, of these:

        "-" 400 0 "-"
        "-" 400 0 "-"

    The IP addresses are always from clients that start downloading shortly after that request, and they usually have roughly the same user agent as in the first example. I have enabled server throttling and connection limits in nginx to at least somewhat limit the huge amount of log entries from equivalent IPs. There was a performance issue when I saw the same behaviour on the previous server, which used Apache; I installed nginx on a better server and then moved the site. When Apache could no longer handle connections from the increasing number of clients effectively, that server was effectively DDoSed. There was no bandwidth issue with already-connected clients, and I don't know if the already-connected clients were using more than one connection at a time. Please tell me:

    - Are clients that appear to get stuck on a download a Bad Thing™? I heard people say their mobile bandwidth use was much higher than they could account for. I'm thinking this type of client behaviour can account for that - and it costs us more bandwidth too.
    - Which up-to-date alternatives exist out there that can handle serving this type of data better than plain HTTP?
    - Useful general insights for someone who just came into this field straight out of the late 90s. :-)
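    The 206 responses themselves are ordinary HTTP range requests - AppleCoreMedia deliberately streams in small chunks - so on their own they are not a sign of abuse. If the goal is simply to keep any one client from monopolizing the server, the newer limit_conn/limit_req syntax caps per-IP concurrency and rate. A sketch for a reasonably current nginx (directive names differ from the older limit_zone used in the config above, and the media path is an assumption):

        # http {} context
        limit_conn_zone $binary_remote_addr zone=peraddr:10m;
        limit_req_zone  $binary_remote_addr zone=mp3req:10m rate=5r/s;

        server {
            location ~ \.mp3$ {
                root /srv/media;           # path is an assumption
                limit_conn peraddr 4;      # at most 4 parallel downloads per IP
                limit_req  zone=mp3req burst=20;
            }
        }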


  • ISA 2004 - banned site rule causes slow Internet

    - by Holian
    Hi Gods, We have windows server 2003 with isa 2004. Our clients uses internet with proxy. We have two isa rule: order name action protocolls from/listener to condition 1. trafic ALLOW all outbound all networks all networks all users 2. FTP ALLOW FTP Server EXTERNAL/INTERNAL/Local host 10.1.1.1 So we have to "bann" a few webpage (like facebook, youtube...etc...), so we make a new rule 0. banned DENY HTTP internal denied pages all users In the denied pages we have the *.facebook.com domain set. After we enable this rule, the entire internet slows down. The banning rule works well, redirect to an internal site, but the other sites.... If i open a page..it normally takes 3-10 sec to load, but after this rule this time is: 2-4 minutes. In the monitor / logging menu we got a few FAILED CONNECTION ATTEMPT like: Log type: Web Proxy (Forward) Status: 304 Not Modified Rule: All local traffic Source: Internal ( 10.1.1.1:0 ) Destination: External ( 172.24.28.22:3128 ) Request: GET http://www.konyvelozona.hu/wp-content/uploads/nyugdijas-holgy-2.jpg Filter information: Req ID: 17270b72 Protocol: http User: anonymous Additional information Client agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.3072... Object source: Verified Cache Processing time: 9047 Cache info: 0x18801002 MIME type: - In the event log we got a few log: Description: The Web Proxy filter failed to bind its socket to 10.1.1.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d The Web Proxy filter failed to bind its socket to 127.0.0.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d If i tpye: netstat -o -n -a | findstr 0.0:80 then i got, tcp 0.0.0.0:80 0.0.0.0:0 LISTEN 4 udp 0.0.0.0:8031 *.* 2780 udp 0.0.0.0:8082 *.* 2780 Some month ago we installed XMAP, but now we only use mysql. Apache service stopped. In the Xamp port check menu i see: Service POrt Status Apache (http) 80 Process: System Maybee this is the problem? I dont know what should i do now... Thank you folks.


  • Force caching of handler output which actively resists caching

    - by deceze
    I'm trying to force caching of a very obnoxious piece of PHP script which actively tries to resist caching for no good reason by actively setting all the anti-cache headers: Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Content-Type: text/html; charset=UTF-8 Date: Thu, 22 May 2014 08:43:53 GMT Expires: Thu, 19 Nov 1981 08:52:00 GMT Last-Modified: Pragma: no-cache Set-Cookie: ECSESSID=...; path=/ Vary: User-Agent,Accept-Encoding Server: Apache/2.4.6 (Ubuntu) X-Powered-By: PHP/5.5.3-1ubuntu2.3 If at all avoidable I do not want to have to modify this 3rd party piece of code at all and instead just get Apache to cache the page for a while. I'm doing this very selectively to only very specific pages which have no real impact on session cookies or the like, i.e. which do not contain any personalised information. CacheDefaultExpire 600 CacheMinExpire 600 CacheMaxExpire 1800 CacheHeader On CacheDetailHeader On CacheIgnoreHeaders Set-Cookie CacheIgnoreCacheControl On CacheIgnoreNoLastMod On CacheStoreExpired On CacheStoreNoStore On CacheLock On CacheEnable disk /the/script.php Apache is caching the page alright: [cache:debug] AH00698: cache: Key for entity /the/script.php?(null) is http://example.com:80/the/script.php? [cache_disk:debug] AH00709: Recalled cached URL info header http://example.com:80/the/script.php? [cache_disk:debug] AH00720: Recalled headers for URL http://example.com:80/the/script.php? [cache:debug] AH00695: Cached response for /the/script.php isn't fresh. Adding conditional request headers. [cache:debug] AH00750: Adding CACHE_SAVE filter for /the/script.php [cache:debug] AH00751: Adding CACHE_REMOVE_URL filter for /the/script.php [cache:debug] AH00769: cache: Caching url: /the/script.php [cache:debug] AH00770: cache: Removing CACHE_REMOVE_URL filter. [cache_disk:debug] AH00737: commit_entity: Headers and body for URL http://example.com:80/the/script.php? cached. However, it is always insisting that the "cached response isn't fresh" and is never serving the cached version. I guess this has to do with the Expires header, which marks the document as expired (but I don't know whether that's the correct assumption). I've tried to overwrite and unset headers using mod_headers, but this doesn't help; whatever combination I try the cache is not impressed at all. I'm guessing that the order of operation is wrong, and headers are being rewritten after the cache sees them. early header processing doesn't help either. I've experimented with CacheQuickHandler Off and trying to set explicit filter chains, but nothing is helping. But I'm really mostly poking in the dark, as I do not have a lot of experience with configuring Apache filter chains. Is there a straight forward solution for how to cache this obnoxious piece of code?

    Read the article

  • Nginx performance degrades over time

    - by Rylea Stark
    Here's the situation: I run a small cluster with a dedicated box for MySQL and a dedicated PHP-FPM/NGINX box. Nginx talks to php-fpm via a socket. As far as I can tell the problem does not lie in php-fpm; it lies somewhere in my configuration. The site loads instantly for a few moments after starting, then load times slowly degrade to more than 2 seconds, eventually taking 12 seconds to complete a request. PHP-FPM is configured to close a child after 175 requests, spawn 20 workers at start, and allow a maximum of 60. I'm not really sure where the bottleneck is; most of my code is optimized and works flawlessly, but these issues with nginx will most likely force me to switch back to Apache, and I really don't want to do that. My nginx.conf is below.
      user www-data;
      worker_processes 4;
      worker_cpu_affinity 0001 0010 0100 1000;
      error_log /var/log/nginx/error.log;
      pid /var/run/nginx.pid;
      events {
          worker_connections 512;
          multi_accept on;
          use epoll;
      }
      http {
          include /etc/nginx/mime.types;
          access_log /var/log/nginx/access.log;
          resolver_timeout 5s;
          satisfy all;
          ## Size Limits
          limit_zone brainbug $binary_remote_addr 5m;
          client_body_buffer_size 8k;
          client_header_buffer_size 75M;
          client_max_body_size 1k;
          large_client_header_buffers 2 1k;
          ## Timeouts
          client_body_timeout 60;
          client_header_timeout 60;
          keepalive_timeout 60;
          send_timeout 60;
          ## General Options
          ignore_invalid_headers on;
          recursive_error_pages on;
          sendfile on;
          server_name_in_redirect off;
          server_tokens off;
          ## TCP options
          tcp_nodelay on;
          #tcp_nopush on;
          output_buffers 128 512k;
          gzip on;
          gzip_http_version 1.0;
          gzip_comp_level 7;
          gzip_proxied any;
          gzip_min_length 0;
          gzip_buffers 32 32k;
          gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/jpeg image/png image/gif;
          ## Disable GZIP for MSIE 1-6
          gzip_disable "MSIE [1-6].(?!.*SV1)";
          ## Set a vary header so downstream proxies don't send cached gzipped content to IE6
          gzip_vary on;
          include /etc/nginx/conf.d/*.conf;
          include /etc/nginx/sites-enabled/*;
      }
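
    A few of the buffer sizes in this config look inverted and could plausibly contribute to memory pressure building up over time: client_header_buffer_size 75M allocates an enormous header buffer per connection, client_max_body_size 1k rejects any request body over 1 KB, and large_client_header_buffers 2 1k is smaller than many real-world cookies. The sketch below shows more conventional values for that "Size Limits" block (assumed numbers, not tuned for this site; limit_zone is also deprecated in favour of limit_conn_zone in newer nginx releases). These directives belong inside the http block.
      # Hypothetical replacement for the "Size Limits" block - verify against real traffic
      client_header_buffer_size 1k;        # per-connection header buffer (was 75M)
      large_client_header_buffers 4 8k;    # fallback buffers for long cookies/URLs
      client_body_buffer_size 16k;
      client_max_body_size 8m;             # was 1k, which blocks most POSTs/uploads

      # keepalive connections that linger for 60s tie up the 512 connections per worker;
      # a shorter timeout frees slots faster under load
      keepalive_timeout 15;

      # connection limiting, newer syntax (nginx >= 1.1.8)
      limit_conn_zone $binary_remote_addr zone=brainbug:5m;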

    Read the article

  • Rebasing a branch which is public

    - by Dror
    I'm failing to understand how to use git-rebase, and I consider the following example. Let's start a repository in ~/tmp/repo:
      $ git init
    Then add a file foo:
      $ echo "hello world" > foo
    which is then added and committed:
      $ git add foo
      $ git commit -m "Added foo"
    Next, I started a remote repository. In ~/tmp/bare.git I ran:
      $ git init --bare
    In order to link repo to bare.git I ran:
      $ git remote add origin ../bare.git/
      $ git push --set-upstream origin master
    Next, let's branch, add a file and set an upstream for the new branch b1:
      $ git checkout -b b1
      $ echo "bar" > foo2
      $ git add foo2
      $ git commit -m "add foo2 in b1"
      $ git push --set-upstream origin b1
    Now it is time to switch back to master and change something there:
      $ echo "change foo" > foo
      $ git commit -a -m "changed foo in master"
      $ git push
    At this point, in master the file foo contains "change foo", while in b1 it is still "hello world". Finally, I want to sync b1 with the progress made in master:
      $ git checkout b1
      $ git fetch origin
      $ git rebase origin/master
    At this point git st returns:
      # On branch b1
      # Your branch and 'origin/b1' have diverged,
      # and have 2 and 1 different commit each, respectively.
      #   (use "git pull" to merge the remote branch into yours)
      # nothing to commit, working directory clean
    At this point the content of foo in the branch b1 is "change foo" as well. So what does this warning mean? I expected I would need to do a git push, but git suggests doing a git pull... According to this answer, this is more or less it, and in his comment @FrerichRaabe explicitly says that I don't need to do a pull. What's going on here? What is the danger, and how should one proceed? How should the history be kept consistent? What is the interplay between the case described above and the following citation, taken from the Pro Git book: "Do not rebase commits that you have pushed to a public repository." I guess it is somehow related, and if not I would love to know why. What's the relation between the above scenario and the procedure I described in this post?
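
    For context, a hedged sketch of what resolving this usually looks like: after the rebase, b1's commit has been rewritten on top of master, so the local branch and origin/b1 genuinely have diverged, and the only way to make the remote match is to overwrite it. That is exactly the situation the Pro Git warning is about, and it is only safe if nobody else has based work on the old origin/b1.
      # After the rebase, local b1 holds the rewritten commit,
      # while origin/b1 still holds the pre-rebase one.
      $ git log --oneline --graph b1 origin/b1

      # Pulling here would merge the old commit back in and duplicate history.
      # Instead, overwrite the remote branch - but only if no one else uses it:
      $ git push --force-with-lease origin b1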

    Read the article

  • Routing all data through a VPN tunnel with ppp

    - by Oliver
    I'm trying to create a VPN tunnel that forwards all traffic from the local machine to the VPN server. I'm using ppp-2.4.5 with the following configuration:
      pty "pptp <VPNServer> --nolaunchpppd"
      name <my login name>
      remotename PPTP
      usepeerdns
      require-mppe-128
      file /etc/ppp/options.pptp
      persist
      maxfail 0
      holdoff 5
    I have a script in if-up.d with the following content:
      route del default eth0
      route add default dev ppp0
    Before starting the VPN tunnel my routing table looks like:
      Kernel IP routing table
      Destination   Gateway      Genmask          Flags  Metric  Ref  Use  Iface
      0.0.0.0       192.168.0.1  0.0.0.0          UG     2       0    0    eth0
      127.0.0.0     127.0.0.1    255.0.0.0        UG     0       0    0    lo
      192.168.0.0   0.0.0.0      255.255.0.0      U      0       0    0    eth0
    After starting the tunnel (via pon) it looks like:
      Kernel IP routing table
      Destination   Gateway      Genmask          Flags  Metric  Ref  Use  Iface
      0.0.0.0       0.0.0.0      0.0.0.0          U      0       0    0    ppp0
      12.34.56.1    0.0.0.0      255.255.255.255  UH     0       0    0    ppp0
      127.0.0.0     127.0.0.1    255.0.0.0        UG     0       0    0    lo
      192.168.0.0   0.0.0.0      255.255.0.0      U      0       0    0    eth0
    Now the problem is that the VPN tunnel seems to be looped into itself. If I run ifconfig after a few seconds without any traffic:
      eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
            inet 192.168.0.10 netmask 255.255.0.0 broadcast 192.168.255.255
            ether 00:01:2e:2f:ff:35 txqueuelen 1000 (Ethernet)
            RX packets 39931 bytes 6784614 (6.4 MiB)
            RX errors 0 dropped 90 overruns 0 frame 0
            TX packets 34980 bytes 7633181 (7.2 MiB)
            TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
            device interrupt 20 memory 0xfbdc0000-fbde0000
      ppp0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1496
            inet 12.34.56.78 netmask 255.255.255.255 destination 12.34.56.1
            ppp txqueuelen 3 (Point-to-Point Protocol)
            RX packets 7 bytes 94 (94.0 B)
            RX errors 0 dropped 0 overruns 0 frame 0
            TX packets 782863 bytes 349257986 (333.0 MiB)
            TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
    It says over 300 MiB have already been sent, although ppp0 has only been up for a few seconds and the connection isn't working anyway. Can someone please help me fix the routing table, so that traffic from ppp0 is not sent through ppp0 again but instead goes to the remote server?
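
    This looks like the classic PPTP routing loop: once the default route points at ppp0, the encapsulated GRE/TCP-1723 packets addressed to the VPN server itself are also pushed into the tunnel, get re-encapsulated, and the TX counter explodes. A common remedy is to pin a host route to the VPN server via the original gateway before swapping the default route. The sketch below is untested and uses assumed addresses; <VPNServer_public_IP> stands for the server's real public address, which is not shown in the question.
      #!/bin/sh
      # Hypothetical if-up.d script - adjust addresses to your network
      VPN_SERVER=<VPNServer_public_IP>   # public IP of the PPTP server
      OLD_GW=192.168.0.1                 # the LAN default gateway

      # Keep reaching the PPTP server via the physical interface
      route add -host "$VPN_SERVER" gw "$OLD_GW" dev eth0

      # Only now move the default route into the tunnel
      route del default eth0
      route add default dev ppp0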

    Read the article
