Search Results

Search found 3004 results on 121 pages for 'plain'.

  • How do I use key combinations on an axis on a joystick in xorg?

    - by valadil
    I'm using xserver-xorg-input-joystick on Debian Stable so I can use a joystick in place of the mouse. I have mouse movement working correctly, but got stuck trying to add functions for some other keys. These work:

        #Left stick
        #Pointer
        Option "MapAxis1" "mode=relative axis=1.5x"
        Option "MapAxis2" "mode=relative axis=1.5y"
        #Right stick
        #Arrow keys
        Option "MapAxis4" "mode=relative keylow=Left keyhigh=Right"
        Option "MapAxis5" "mode=relative keylow=Up keyhigh=Down"

    But when I try to make key combos (so I can navigate windows and screens in xmonad) I have no luck:

        #dpad
        #xmonad focus
        #up/down toggle window. l/r choose screen.
        Option "MapAxis8" "mode=relative keylow=Super_L,k keyhigh=Super_L,j"
        Option "MapAxis7" "mode=relative keylow=Super_L,w keyhigh=Super_L,e"

    I've also tried Super_R, plain old Super, Meta, and mod4mask, and anything else I can think of. These buttons print the letter, but don't appear to hold down the modifying key. The exception to that is shift. If I specify Shift_L or Shift_R, I get a capital letter. xev indicates that modifier keys are being pressed. If I lower Axis8, I get press Super_L, press k, release k, release Super_L. That looks like it should be working. Maybe this is an xmonad problem and not a joystick driver one? I'm also having trouble with getting an axis to use other XF86 keys:

        # triggers
        # song selection
        Option "MapAxis3" "mode=relative keylow=none keyhigh=XF86AudioForward"
        Option "MapAxis6" "mode=relative keylow=none keyhigh=XF86AudioBack"

    That does nothing. Any idea why? If it turns out that this isn't something I can do on an axis, but would work with a button, is there a way to treat my joysticks as buttons? Also, if anyone has suggestions for the other 5 buttons I'll have left after mouse buttons are bound, I'm listening.
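
    A hedged diagnostic, assuming the xdotool utility is available (it is not part of the original setup): since xev already shows the press Super_L / press k sequence, sending the same chord synthetically and watching whether xmonad reacts would tell you whether the window manager or the joystick driver's event timing is at fault.

        # Hypothetical test -- xdotool is an assumption, not part of the question's setup.
        xdotool key super+k
        # Or hold the modifier explicitly around the key press:
        xdotool keydown Super_L; xdotool key k; xdotool keyup Super_L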

  • Installing Windows Management Framework 3.0 basically destroyed WMI, how can I fix it without reinstalling the O.S.?

    - by Massimo
    Related, of course, to this question. Before discovering it was somewhat... dangerous, I installed Windows Management Framework 3.0 on a number of Windows Server 2008 R2 SP1 servers, and WMI got completely trashed on all of them. On a normal server, the WMI namespace (as seen in Server Manager - Configuration - WMI Control) is fully populated; after installing WMF 3.0, everything except WMF 3.0's new features is gone. Needless to say, nothing seems to work anymore on those servers. And no, this is not due to some strange installation error: this happened on three servers which were working perfectly before installing WMF 3.0, and on all of them the installation completed successfully. Admittedly, one of them had a somewhat complex setup (various System Center products and SQL Server instances)... but two of them are just plain standard domain controllers which do nothing else at all. How can I fix this mess without having to reinstall the O.S. on these servers? And why did it happen in the first place?
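
    As a hedged starting point, not specific to the WMF 3.0 case: the generic WMI repository repair steps below sometimes bring back missing namespaces, though there is no guarantee they restore everything the installer removed. Run from an elevated command prompt.

        winmgmt /verifyrepository
        winmgmt /salvagerepository

        rem Re-register the MOF/MFL definitions shipped in the wbem directory:
        cd /d %windir%\system32\wbem
        for /f %s in ('dir /b *.mof *.mfl') do mofcomp %s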

  • CSS and JS files not being updated, supposedly because of Nginx Caching

    - by Alberto Elias
    I have my web app working with AppCache, and I would like that when I modify my HTML/CSS/JS files and then update my cache manifest, users get the updated version of those files the next time they access the app. If I change an HTML file it works perfectly, but when I change CSS and JS files the old version is still being used. I've been checking everything out and I think it's related to my nginx configuration. I have a cache.conf file that contains the following:

        gzip on;
        gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;

        location ~ \.(css|js|htc)$ {
            expires 31536000s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
        }

        location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
            expires 3600s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate";
        }

    And in default.conf I have my locations. I would like to have this caching working on all locations except one; how could I configure this? I've tried the following and it isn't working:

        location /dir1/dir2/ {
            root /var/www/dir1;
            add_header Pragma "no-cache";
            add_header Cache-Control "private";
            expires off;
        }

    Thanks
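
    A likely explanation, sketched under the assumption that cache.conf is included into the same server block: nginx gives regex locations such as location ~ \.(css|js|htc)$ priority over a plain prefix location, so the /dir1/dir2/ block never handles those requests. The ^~ modifier makes the prefix match final and skips the regex locations.

        location ^~ /dir1/dir2/ {
            root /var/www/dir1;
            # no-cache headers only for this tree; the regex cache locations are not consulted
            add_header Pragma "no-cache";
            add_header Cache-Control "private";
            expires off;
        }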

  • Is having a [high-end] video card important on a server?

    - by Patrick
    My application is quite an interactive application with lots of colors and drag-and-drop functionality, but no fancy 3D stuff, animations or video, so I only used plain GDI (no GDI Plus, no DirectX). In the past my applications ran on desktops or laptops, and I suggested that my customers invest in a decent video card with a minimum resolution of 1280x1024, a minimum color depth of 24 bits, and X megabytes of memory on the video card. Now my users are switching more and more to terminal servers, hence my question: What is the importance of a video card on a terminal server? Is a video card needed on the terminal server at all? If it is, is the resolution of the remote desktop client limited to the resolutions supported by the video card on the server? Can the choice of video card in the server influence the performance of the applications running on the terminal server (but shown on a desktop PC)? If I start to make use of graphical libraries (like Qt) or things like DirectX, will that influence the choice of video card on the terminal server? Are calculations in that case 'offloaded' to the video card, even on the terminal server? Thanks.

  • Replacing DropBox with: Amazon S3 + SSL + GPG/TrueCrypt + Mounting on OSX ??

    - by Matt Rogish
    So, right now we're using DropBox to share various data files around between approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest DropBox plan at $10/mo seems too expensive. So, I am contemplating something that would allow us to replace DropBox with our own home-grown solution. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox we're all comfortable with that. There are plenty of docs out there that have bits and pieces of what I want, but some of the tools don't seem to fit the requirements:

        1. Transport security via SSL to the bucket
        2. Encryption of bucket contents
        3. Bi-directional syncing

    Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3 - the docs don't say, but the protocol looks like plain old HTTP: http://www.nongnu.org/duplicity/duplicity.1.html#sect6 ). Many scripts use gpg to encrypt files. This seems like it could work, however I have to make sure that each OS X client is able to use the same key to encrypt and decrypt files (key management is left to me). Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store. As we'd be using Amazon S3 as the "repository", they fail this one. Whew. So, I'd love a single tool that does this, but after an exhaustive search I don't think one exists. I'd be happy just knowing which tools out there can fulfill my 3 requirements; after that I can stitch together the rest. Any thoughts? THANKS!
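
    One candidate worth evaluating, offered purely as a sketch: s3cmd can talk to the bucket over HTTPS and push files through GPG, which would cover requirements 1 and 2, but each run is one-way, so requirement 3 would still need scripting on top. The bucket name below is a placeholder, and the HTTPS/GPG settings are assumed to have been set up via s3cmd --configure in ~/.s3cfg.

        # push, encrypting with the GPG settings from ~/.s3cfg
        s3cmd put --recursive --encrypt ./shared/ s3://mybucket/shared/
        # pull; files encrypted by s3cmd are decrypted with the same settings
        s3cmd get --recursive s3://mybucket/shared/ ./shared-copy/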

  • Postfix character encoding?

    - by Camran
    I use Postfix as a mail server. I have Ubuntu OS. Then I use PHP to send emails. The problem is that none of my emails are encoded properly by the mail software which my VPS provider uses. According to them, the problem lies with me. It is only the name field which isn't encoded properly. For example "Björn" becomes "Björn" in my emails. However, when I echo the $name, it outputs "Björn", which is correct. Also, Gmail and Hotmail do show it correctly. The strange part is that the "text" (the message itself) is encoded properly. I use the following for sending mail:

        $headers="MIME-Version: 1.0"."\n";
        $headers.="Content-type: text/plain; charset=UTF-8"."\n";
        $headers.="From: $name <$email>"."\n";

        $name= iconv(mb_detect_encoding($name), "UTF-8//IGNORE//TRANSLIT", $name);
        //// I HAVE TRIED WITH AND WITHOUT THE LINE ABOVE, NO DIFFERENCE

        mail($to, '=?UTF-8?B?'.base64_encode($subject).'?=', $text, $headers, '[email protected]');

    I have tried with and without the iconv line also, no luck. The last thing I can think of is Postfix: could there be a setting for character encoding there? Does anybody know?
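
    A sketch of one thing to try, assuming the rest of the script stays as it is: Postfix passes headers through largely untouched, and a raw UTF-8 name in the From: header needs the same RFC 2047 encoding that the subject already gets. Encoding the display name keeps the header pure ASCII, so it should survive whatever mail software the VPS provider runs.

        // Encode the display name the same way the subject is already encoded.
        $encodedName = '=?UTF-8?B?' . base64_encode($name) . '?=';

        $headers  = "MIME-Version: 1.0" . "\n";
        $headers .= "Content-type: text/plain; charset=UTF-8" . "\n";
        $headers .= "From: $encodedName <$email>" . "\n";

        mail($to, '=?UTF-8?B?' . base64_encode($subject) . '?=', $text, $headers);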

  • Prevent Amazon EC2 Time zone from reverting back on yum update

    - by D.Tate
    I use an Amazon EC2 server instance that runs a distro called Amazon Linux AMI. (I've read that it is based on CentOS/Red Hat.) My specific version is the 2012.09 release. Anyway, I was able to change the time zone about a week ago from the default UTC to America/New_York (which is EST/EDT). The command I used to change it was:

        ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime

    ...thanks to this other Server Fault question. At that point, I was able to run date from the command line, and it correctly displayed the EDT time. And even after EDT "fell back" to EST this past Sunday, I was pleased to find that running date still produced the correct local time. So that was great. However, after running a yum update yesterday, it seems that my time zone got reverted back to plain ol' UTC. I even checked the last modified time of the /etc/localtime file, and indeed it confirmed that it had been modified around the same time I had updated. Is there any way to prevent this from happening again, or will I be stuck resetting the time zone every time I do a yum update?
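
    A sketch of the usual fix on Red Hat-derived systems, which is what Amazon Linux is: the glibc/tzdata update scripts regenerate /etc/localtime from the ZONE setting in /etc/sysconfig/clock, so a hand-made symlink gets clobbered on update. Recording the zone there should make the change stick.

        # point the update scripts at the right zone...
        sudo sed -i 's|^ZONE=.*|ZONE="America/New_York"|' /etc/sysconfig/clock
        # ...and recreate the symlink once more; later yum updates should now leave it alone
        sudo ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime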

  • What performance degradation to expect with Nginx over raw Gunicorn+Gevent?

    - by bouke
    I'm trying to get a very high-performing web server setup for handling long-polling, websockets, etc. I have a VM running (Rackspace) with 1GB RAM / 4 cores. I've set up a very simple gunicorn 'hello world' application with (async) gevent workers. In front of gunicorn, I put Nginx with a simple proxy to Gunicorn. Using ab, Gunicorn spits out 7700 requests/sec, where Nginx only does 5000 requests/sec. Is such a performance degradation expected?

    Hello world:

        #!/usr/bin/env python
        def application(environ, start_response):
            start_response("200 OK", [("Content-type", "text/plain")])
            return [ "Hello World!" ]

    Gunicorn:

        gunicorn -w8 -k gevent --keep-alive 60 application:application

    Nginx (stripped):

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;
        events {
            worker_connections 768;
        }
        http {
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            upstream app_server {
                server 127.0.0.1:8000 fail_timeout=0;
            }
            server {
                listen 8080 default;
                keepalive_timeout 5;
                root /home/app/app/static;
                location / {
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_pass http://app_server;
                }
            }
        }

    Benchmark (results: nginx TCP, nginx UNIX, gunicorn):

        ab -c 32 -n 12000 -k http://localhost:[8000|8080]/

    Running gunicorn over a unix socket gives somewhat higher throughput (5500 r/s), but it still doesn't match raw gunicorn's performance.
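
    Part of the gap is simply the extra hop, but one sketch worth trying on nginx 1.1.4 or newer: by default nginx proxies with HTTP/1.0 and opens a fresh connection to gunicorn for every request, so enabling upstream keep-alive removes that per-request cost. The socket path and connection count below are placeholders.

        upstream app_server {
            server unix:/tmp/gunicorn.sock fail_timeout=0;   # or 127.0.0.1:8000
            keepalive 64;                                    # pool of idle connections to gunicorn
        }

        location / {
            proxy_http_version 1.1;           # required for upstream keep-alive
            proxy_set_header Connection "";   # don't forward "Connection: close"
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
            proxy_pass http://app_server;
        }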

  • samba access from win98

    - by SimonSalman
    Hello, the admin installed a new file server in our institute: openSUSE 11.1 with Samba 3.2.7-11.3.2-2154-SUSE-CODE11. They copied the smb.conf from the old machine (running Samba 3.0.0) to the new one. Everything works as before, but one Windows 98 machine can see but not access the file server. It prompts for user authentication, but will not accept any user/password combination. There is a lot of discussion about this problem on the net, but none of it provides a clear answer. EDIT: 1. I changed the Win98 registry to enable plain-text passwords, and alternatively changed the server's smb.conf and /etc/smbpasswd to accept encrypted passwords. 2. I also provided a profile on the Win98 machine with a user/password combination matching one of the Samba users. 3. I changed smb.conf so that the Samba server is the Local Master Browser. None of these changes are necessary when using the older Samba server, so I conclude that a configuration problem on the server side is likely. If you need any further information, I will post it here. Best regards, Simon
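
    A hedged suggestion: Windows 9x clients can only authenticate with the old LanManager password hashes, and newer Samba releases default to refusing those, which would explain why the Samba 3.0.0 server works where 3.2.7 does not. Checking the effective defaults with testparm -v and explicitly enabling LanMan auth in smb.conf is worth a test.

        [global]
            encrypt passwords = yes
            lanman auth = yes
            client lanman auth = yes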

  • Small office network setups

    - by user39822
    I work at a small office and we're overhauling our network setup there. We're a web dev company and at the moment we have 50+ production sites running on the same machine that runs our internal email, which is just plain stupid. We're moving all our client hosting off-site and are now looking for something to run our internal office requirements. Below is a brain dump:

        - An equal mix of Macs and PCs, about 25 machines in total.
        - We need a central "server" to host files that should be accessible to everyone as a "network drive". If possible we'd like to use low-cost hardware for this (Mac or Windows based). Disk space should be upward of 1TB. Ideally we should also be able to run a small web server on this machine (LAMP stack) to run some planning and billing applications we wrote ourselves.
        - We need some sort of MS Exchange alternative for things like a shared calendar and especially being able to set Out of Office replies.
        - We have one printer that is connected to the network.
        - The setup should preferably be something that can be managed easily via a graphical interface and should NOT require command-line skills.
        - Users want to keep using Apple Mail or MS Outlook.

    After a quick Google search I came across the Zimbra collaboration suite; can anyone recommend this, or any other solution for our office?

  • Java: very slow tomcat and too big war file

    - by NaN
    I created some sort of RESTful API backend for a mobile app. It's written completely in Java using Jersey as the framework. At the moment no database is used, it's all in memory, but this is no problem so far (it's only for prototyping purposes). I ordered the smallest package from DigitalOcean and installed Tomcat 7. All in all Tomcat works, but I have three major problems: 1) It takes a long time for Tomcat to deploy the app: I deploy it via the Tomcat manager and it takes about 2 minutes until the site works (excluding WAR upload time). 2) The WAR files are quite big (16MB): I don't know why they are so big. There are no database dependencies and most logic is written in plain Java. Okay, we are using Jersey, but 16MB is a lot for the logic of a small web service. 3) I have to restart Tomcat every 3 days or so. It looks like a memory leak or something similar: if the app runs for a few days the response time gets quite high and the server seems to be frozen. It works again if I restart Tomcat via SSH. You can find my mvn pom file right here. Do you have any tips? Are there good Tomcat alternatives?
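
    On the slow-deploy part, a hedged guess that is cheap to test: on small virtual servers Tomcat often blocks for minutes seeding its session-ID SecureRandom from /dev/random. Pointing it at urandom in a setenv.sh, and pinning a modest heap so a leak shows up sooner, is a common sketch; the heap sizes below are placeholders.

        # $CATALINA_BASE/bin/setenv.sh
        export CATALINA_OPTS="-Djava.security.egd=file:/dev/./urandom -Xms128m -Xmx256m"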

  • 550 Requested action not taken: mailbox unavailable on OS X server 10.6

    - by Marc Graham
    I recently added a new domain to my mail server. I have one main server, mail.example.com, and several others that have their MX record pointing to mail.example.com. My two new domains have their MX records set correctly. The issue I am experiencing is the "550 Requested action not taken: mailbox unavailable" error, but only when I send emails to accounts on the new domains from an external email account such as Gmail. If I send an email to one of the newly made addresses on the new domain from an email account within the same server, it delivers normally. For example:

        sending [email protected] to [email protected] receives a 550 error
        sending [email protected] to [email protected] works normally

    Here is a report from wormly.com, with server and account names changed for obvious reasons:

        Resolving hostname...
        Connecting...
        SMTP -> FROM SERVER: 220 existingmailserver.com ESMTP Service ready
        SMTP -> FROM SERVER: 250-Requested mail action okay, completed
        250-SIZE 0
        250-AUTH LOGIN PLAIN CRAM-MD5
        250-ETRN
        250-8BITMIME
        250 OK
        MAIL FROM: [email protected]
        SMTP -> FROM SERVER: 250 Requested mail action okay, completed
        RCPT TO: [email protected]
        SMTP -> FROM SERVER: 550 Requested action not taken: mailbox unavailable
        SMTP -> ERROR: RCPT not accepted from server: 550 Requested action not taken: mailbox unavailable
        Message sending failed.
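
    A hedged first check: "internal delivery works, external RCPT TO is refused" usually means the front-end server does not consider itself a destination for the new domain. On the Postfix that OS X Server 10.6 ships, listing the relevant parameters shows whether the new domain appears anywhere mail is accepted for; the directive names below are stock Postfix ones, and Server Admin may be the place to actually change them.

        postconf mydestination virtual_alias_domains virtual_mailbox_domains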

  • Changing a set-cookie header using mod_rewrite/mod_proxy

    - by olrehm
    I have a bunch of CGI scripts, which are served using HTTPS. They can only be reached on the intranet, not from the outside. They set a cookie with the attribute 'Secure', so that it can only be sent via HTTPS. There is also a reverse proxy to one of these scripts, unfortunately using plain HTTP. When a response comes in from my CGI script with a secure cookie, it is not being passed on via HTTP (after all, that is what that attribute is for). However, I need an exception to this rule. Is it possible to use mod_rewrite/mod_proxy or something similar to change the Set-Cookie header in the response coming from my CGI script and remove the Secure attribute, such that the cookie can be passed back to the user over the unsafe HTTP connection? I understand that this defeats the purpose of Secure in the first place, but I need this as a temporary workaround. I have searched the web and found how to add a Set-Cookie header using mod_rewrite, and I have also found how to retrieve the value of a cookie coming from the client in a Cookie header. What I have not yet found is how to extract the Set-Cookie header received in the response of a script I am proxying for. Is that possible? How would I do that? Ole
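
    One hedged option, if the front end is Apache 2.2.4 or newer with mod_headers loaded: Header edit can rewrite response headers on their way back through mod_proxy, so the Secure attribute can be stripped in the reverse-proxy vhost. A sketch, with the usual caveat that this is exactly the temporary hack described above.

        # strip "; Secure" from cookies returned by the proxied script
        Header edit Set-Cookie "(?i);\s*secure" ""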

  • Converting PDF eBooks into a Kindle format

    - by Ender
    Over the past couple of years I've amassed quite a collection of guides, tutorials and ebooks in PDF format. A lot of these are quite useful for work, especially PDF documentation, and rather than have to be at a computer every time I want to read how to do something in Sitecore or to read through a software testing ebook, I'd like to do it on my brand-spanking-new Kindle. However, even though there is now a native PDF reader on the Kindle, the nature of PDFs makes them practically unreadable on it. The text doesn't wrap, because of how PDFs are sized, and so far, after a bunch of Google searches, I've yet to find a viable solution to get my PDFs converted into a readable Kindle format. Sometimes these books have code or pictures/tables in them, but most of the time they're text-heavy, and to be honest I'd be surprised if there wasn't a free tool to handle converting PDFs to one of the (seemingly many) Kindle formats. So, can anyone help me out with this? EDIT: I've tried Calibre, and have checked their forums to play with some of the advanced settings, yet the solutions available seem to be extremely poor, especially if the book you're attempting to read contains equations, code, or anything outside of plain text. I've also tried Amazon's conversion service, which wasn't much help with such documents. The best way I have found so far is to rebuild the entire thing in ePub or RTF format and convert to MOBI from there. This works for text-heavy books with tables, but anything technical still isn't covered. Can anyone help with this?
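
    Since Calibre's GUI has already been tried, a sketch that is sometimes worth one more pass: its command-line converter with heuristic processing enabled, which attempts to unwrap the hard line breaks that make converted PDFs unreadable. File names are placeholders, and results on code and equations will still vary.

        ebook-convert book.pdf book.mobi --enable-heuristics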

  • Nginx + Ubuntu 9.10, gzip not functioning

    - by Matt
    Hey there. So I installed and configured Nginx 0.7.62 on a new Slicehost Ubuntu 9.10 slice. All seems to work fine with the server, except that gzip isn't working for one reason or another. I made sure that its settings were correct in /etc/nginx/nginx.conf:

        user www-data;
        worker_processes 3;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
            # multi_accept on;
        }
        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 2;
            tcp_nodelay on;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript;
            gzip_disable "MSIE [1-6]\.";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    This normally wouldn't be a big deal, but gzip support could save considerable bandwidth for my site. Does anyone have any ideas of what to check, or has anyone else run into this problem?
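
    A quick way to rule out the test method, sketched with a placeholder URL: nginx only compresses when the client advertises gzip, and with the default gzip_http_version 1.1 it skips HTTP/1.0 requests, which some benchmarking tools send. If the header below comes back, the server side is fine.

        curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://localhost/style.css | grep -i content-encoding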

  • Process PHP files from a network share in a vmware virtual machine

    - by nhinkle
    As a testing environment, I have set up a vmware virtual machine running Windows Server 2008 R2. I have Apache and PHP installed (as part of the xampp package). I am doing the development outside of the VM, and so want Apache to serve PHP files from a VM shared folder (which appears as a network share in the VM). I have done this by creating an NTFS symbolic link in Apache's htdocs directory. I can access this directory from the browser, and plain-text files are readable. However, PHP fails to process files, instead returning the following error:

        Warning: Unknown: failed to open stream: No such file or directory in Unknown on line 0
        Fatal error: Unknown: Failed opening required 'C:/xampplite/htdocs/path/to/file.php' (include_path='.;C:\xampplite\php\PEAR') in Unknown on line 0

    It appears to be a permissions issue: PHP doesn't seem to be allowed to read the file to process it. However, Apache has no problem opening files in the directory. I cannot figure out how to give PHP the necessary permissions to process the file. Does anybody know of a way to make this work, or else another solution for getting the files into the VM automatically while I develop on the host machine?

  • ConfigMgr 2012 - How to automatically make updates available to computers without forcing them to be installed?

    - by Massimo
    I'm using System Center Configuration Manager 2012 with the Software Update Point feature; however, in this environment patching has to be strictly manual, because server reboots need to be approved and scheduled by different people; thus, I need to use ConfigMgr's SUP like I would use a plain WSUS server with auto-approval but with manual installation. I created some Automatic Deployment Rules to automatically download and deploy critical updates, and to have an installation deadline of "as soon as possible"; but then, I've also configured those rules to not do anything when the deadline is reached, and to not perform system restarts even if needed (see image). Also, I've configured the device collection those rules deploy updates to so that it has no valid maintenance window. However, I'm experiencing quite the opposite of what I was expecting: as soon as the new updates are processed by the ADRs, they get automatically installed on all systems by the Software Center, and the computers are subsequently restarted. Why is this happening? Am I getting something wrong, or is ConfigMgr 2012 just not behaving like it should?

  • how to diagnose a hard system seizure? Dell+Ubuntu

    - by rob
    I've got Ubuntu 9.10 on a Dell Vostro 420 desktop, a little over a year old, which I use for plain vanilla work stuff (email, web, terminal, text editor). Every now and then, at totally random times, it completely freezes on me. Hard. Mouse and keyboard stop working, cursor stops blinking, clock stops moving. All I can do is hold down the power button on the front of the box to shut it off. Sometimes it happens after several months of continuous uptime; sometimes it happens a few minutes after a reboot, while all I've done is open a terminal to look at log files, or maybe firefox to do a google search. Each time, there is nothing at all in /var/log/messages at the time of the crash. This makes it seem like a hardware problem, and indeed a few months ago I opened the box and wiggled everything and the problem went away for a while. But now it's back. I went in and checked everything, took out each RAM card and reseated. No luck. I ran all the system diagnostics (the long version) and everything passed with flying colors. Something is messed up in this box, but without any useful logs or failed tests, how in the world am I going to find it? And of course, Dell's not gonna help me cause I went and replaced Windows with Ubuntu. What steps would you take next to track down this problem?
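
    With clean logs, the usual next step is to exercise the hardware pieces one at a time; nothing below is specific to the Vostro, and the package names are the standard Ubuntu ones.

        # boot the memtest86+ entry from the GRUB menu and let it run overnight
        sudo smartctl -a /dev/sda            # disk health (package: smartmontools)
        sudo sensors                         # temperatures/voltages (package: lm-sensors; run sensors-detect once first)
        grep -i mce /var/log/kern.log*       # any machine-check exceptions logged before earlier freezes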

  • On Windows machines, what is the typical toolchain for remote maintenance?

    - by Hanno Fietz
    I need to deploy PHP and Python code and the appropriate environment (web server, db server) to remote Windows systems, and I don't know what toolchain would be the equivalent of ssh, scp, bash and the like. So, basically, what I need to be able to do is the following:

        - access remote Windows with the appropriate privileges in a secure manner, like I routinely do with ssh (I don't even know whether that would be a text or graphical interface on Windows)
        - remotely install software: Apache or IIS, MySQL or Postgres, Python or PHP
        - copy files from remote (the application we're deploying)
        - remotely configure the machine to run regular tasks (e.g. checking for updates to the application)
        - automate tasks like downloading files from a designated place

    The main question is probably how I get onto the machine securely in the first place; the rest is general Windows admin knowledge, which is probably too broad a scope to fit into one question. I have years of experience with maintaining Linux boxes and have used tools of varying sophistication on those, ranging from plain scp'ing of PHP files to deploying Java application containers and even full VMs with Vagrant. On Windows, I'm a complete noob, and I don't even know where to start. I have installed Apache, MySQL and PHP on a desktop machine maybe twice in my life, and that's about it. Bonus points for things that work from a Linux machine at my end, but I could run a VM and do everything from there.
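
    For the "ssh/scp equivalent" part, a hedged sketch: on recent Windows Server releases the native remote-administration channel is WinRM with PowerShell remoting, which gives an interactive remote shell, remote command execution and, via the administrative shares, file copy. Host names and paths below are placeholders; from a Linux workstation the same WinRM channel can be driven by tools such as Ansible's Windows modules.

        # on the target server, once, as administrator
        Enable-PSRemoting -Force

        # interactive remote shell, roughly "ssh host"
        Enter-PSSession -ComputerName winserver01 -Credential (Get-Credential)

        # one-off remote command, roughly "ssh host command"
        Invoke-Command -ComputerName winserver01 -ScriptBlock { Get-Service W3SVC }

        # file copy over the administrative share, roughly "scp"
        Copy-Item .\app\ -Destination \\winserver01\c$\inetpub\myapp\ -Recurse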

  • configuring apache with mod_mono for .net app

    - by Mystere Man
    I'm having a huge problem getting mod_mono and Apache configured to work correctly. I've had this working at one time, but I can't seem to figure out where I'm going wrong. I'm using mono-server4. I'm trying to use a separate port from the main website. So I have in /etc/apache2/sites-available (with a link from sites-enabled) a vhost configuration that looks like this:

        <VirtualHost *:9999>
            ServerName XXX
            ServerAdmin web-admin@XXX
            DocumentRoot /var/xxx
            MonoServerPath XXX "/usr/bin/mod-mono-server4"
            MonoDebug XXX true
            MonoSetEnv XXX MONO_IOMAP=all
            MonoApplications XXX "/:/var/xxx"
            <Location "/">
                Allow from all
                Order allow,deny
                MonoSetServerAlias XXX
                SetHandler mono
                SetOutputFilter DEFLATE
                SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
            </Location>
            <IfModule mod_deflate.c>
                AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
            </IfModule>
        </VirtualHost>

    I used mono-server4-admin to create the application:

        mono-server4-admin --path=/var/xxx --app=/XXX --port=9999

    When I start Apache, it gives the error: Syntax error on line 13 of /etc/apache2/sites-enabled/xxx: Server alias 'XXX, not found. This corresponds with the MonoSetServerAlias statement. So I commented it out, and when I do that Apache starts. However, when I try to access the site, I get a 500 error. The access log indicates that it's trying to access the app on port 80, rather than 9999. I'm not sure what the problem is here. Can anyone help me figure out where I went wrong? My mono-server4-hosts.conf contains this:

        # start /etc/mono-server4/conf.d/RMRSite/10_XXX
        Alias /XXX "/var/xxx"
        AddMonoApplications default "/XXX:/var/xxx"
        <Directory /var/xxx>
            SetHandler mono
            <IfModule mod_dir.c>
                DirectoryIndex index.aspx
            </IfModule>
        </Directory>
        # end /etc/mono-server4/conf.d/XXX/10_XXX

    Also, my /etc/mono-server4/conf.d/XXX/10_XXX (the configuration file for the XXX virtualhost) contains this:

        path = /var/xxx
        alias = /XXX
        vhost = localhost
        port = 9999
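
    A hedged first check before digging into mod_mono itself: requests showing up on port 80 suggest Apache may not be listening on 9999 at all, since a VirtualHost only answers on ports covered by a matching Listen directive. Worth verifying in ports.conf.

        # /etc/apache2/ports.conf
        Listen 9999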

  • Rewrite a url on Nginx

    - by Ido B
    I tried to use this:

        location / {
            root /path.to.app/;
            index index.php index.html;
            rewrite ^/(.*)$ /check_register.php?key=$1 break;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /path.to.app/$fastcgi_script_name;
            include fastcgi_params;
        }

    And it didn't work. This is my full config:

        user www-data www-data;
        worker_processes 4;
        events {
            worker_connections 3072;
        }
        http {
            include mime.types;
            default_type application/octet-stream;
            access_log off;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay off;
            keepalive_timeout 15;
            gzip on;
            gzip_comp_level 3;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            server {
                listen 80;
                server_name localhost;
                location / {
                    root html;
                    index index.html index.htm;
                }
                location / {
                    root /path.to.app/;
                    index index.php index.html;
                    rewrite ^/(.*)$ /check_register.php?key=$1 break;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME /path.to.app/$fastcgi_script_name;
                    include fastcgi_params;
                }
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root html;
                }
            }
            include /usr/local/nginx/sites-enabled/*;
        }

    How can I make it work?
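
    One likely problem: the server block above contains two location / entries, and nginx rejects duplicate locations, so the config with the rewrite may never load. A sketch of an alternative with a single location that sends everything that is not an existing file to check_register.php via try_files; the paths are the ones from the question.

        location / {
            root /path.to.app/;
            index index.php index.html;
            try_files $uri $uri/ /check_register.php?key=$uri;
        }

        location ~ \.php$ {
            root /path.to.app/;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /path.to.app/$fastcgi_script_name;
            include fastcgi_params;
        }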

  • Is reliability reputation of mechanical keyboards overblown?

    - by Rarst
    A while back I finally worked up to buying a mechanical keyboard (~$100 range, "black" switches) and was initially quite content with the purchase. However, just outside the first year (read: as soon as the warranty expired) it started to develop repeat issues (press once, get a chain of the letter repeated) on multiple keys. It doesn't react to generic cleaning (up to compressed air), and searching the Internet shows a noticeable number of people with similar-to-identical issues, spanning years. This makes me severely hesitant to buy another mechanical keyboard, considering: every other keyboard I ever owned, including ultra-cheap crap, managed to last longer than that; the typing experience is nice, but not life-changing-fan-forever nice for me; my choice of mechanical keyboards is severely limited (not many brands are represented in the local market, and primarily crazy-looking gamer models); needing a Russian (not to mention Russian and Ukrainian if possible) layout excludes international ordering; and the price tag for the meek year of use I got out of it is plain demoralizing. It is obvious mechanical keyboards have their fans, but shopping around for the "best fit" or getting into multiple-hundreds price tags is probably not something I am highly interested in. Considering my constraints and bad experience with reliability, is it practical for me to sink more money into buying mechanical keyboard(s) again? In other words: manufacturers are beaming about how crazy reliable mechanical keyboards are. Are active long-time users of such keyboards confidently of the same opinion?

  • Is it possible for the Subversion Apache module to serve html files with an html content-type without using the svn:mime-type property?

    - by Martin Pain
    I am aware that if you set the svn:mime-type Subversion property on a .html file to text/html then when viewing the file in a browser through the Subversion module in Apache httpd it will be served with a Content-Type: text/html header, enabling the browser to render it as HTML rather than plain text. However, I am looking for a way to do this without using the svn:mime-type property. I'm aware that you can configure your svn client to automatically add the property - this is not what I want, as I do not want to ensure all users have these settings. I'm also aware that I could create a pre-commit hook that rejects the commit if the properties are not set, in order to force users to set the property - I might fall back to that, but I'm looking for something less intrusive. I'm also aware that I could use a post-commit hook to add the properties automatically on the server-side. I'd rather not do that (as users then have to update immediately after their commit, and it's not trivial to write) - I'm looking for a better alternative. Perhaps something with rewrite rules in the Apache server?
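
    A hedged server-side option that avoids both the property and the hooks: mod_mime's ModMimeUsePathInfo directive lets Apache derive the Content-Type from the file extension in the request path rather than from the repository, and it is sometimes used for exactly this with mod_dav_svn. The repository paths below are placeholders, and it is worth verifying it does not interfere with normal svn client operations.

        <Location /svn/repo>
            DAV svn
            SVNPath /srv/svn/repo
            ModMimeUsePathInfo On
        </Location>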

  • Bad performance with Linux software RAID5 and LUKS encryption

    - by Philipp Wendler
    I have set up a Linux software RAID5 on three hard drives and want to encrypt it with cryptsetup/LUKS. My tests showed that the encryption leads to a massive performance decrease that I cannot explain. The RAID5 is able to write 187 MB/s [1] without encryption. With encryption on top of it, write speed is down to about 40 MB/s. The RAID has a chunk size of 512K and a write intent bitmap. I used -c aes-xts-plain -s 512 --align-payload=2048 as the parameters for cryptsetup luksFormat, so the payload should be aligned to 2048 blocks of 512 bytes (i.e., 1MB). cryptsetup luksDump shows a payload offset of 4096. So I think the alignment is correct and fits the RAID chunk size. The CPU is not the bottleneck, as it has hardware support for AES (aesni_intel). If I write on another drive (an SSD with LVM) that is also encrypted, I do get a write speed of 150 MB/s. top shows that the CPU usage is indeed very low; only the RAID5 xor takes 14%. I also tried putting a filesystem (ext4) directly on the unencrypted RAID to see if the layering is the problem. The filesystem decreases the performance a little bit as expected, but by far not that much (write speed varying, but around 100 MB/s). Summary:

        Disks + RAID5: good
        Disks + RAID5 + ext4: good
        Disks + RAID5 + encryption: bad
        SSD + encryption + LVM + ext4: good

    The read performance is not affected by the encryption: it is 207 MB/s without and 205 MB/s with encryption (also showing that CPU power is not the problem). What can I do to improve the write performance of the encrypted RAID? [1] All speed measurements were done with several runs of dd if=/dev/zero of=DEV bs=100M count=100 (i.e., writing 10G in blocks of 100M). Edit: If this helps: I'm using Ubuntu 11.04 64bit with Linux 2.6.38. Edit2: The performance stays approximately the same if I pass a block size of 4KB, 1MB or 10MB to dd.
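
    One knob that specifically targets RAID5 write throughput under dm-crypt, offered as a sketch with md0 assumed to be the array name: once encryption sits in the stack, the writes reaching md are less nicely aligned, read-modify-write cycles go up, and a larger stripe cache often helps noticeably. The value is in pages per device, so it costs RAM.

        cat /sys/block/md0/md/stripe_cache_size          # default is 256
        echo 8192 | sudo tee /sys/block/md0/md/stripe_cache_size
        # re-run the dd test; if it helps, persist the setting (e.g. from rc.local)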

  • Large, high performance object or key/value store for HTTP serving on Linux

    - by Tommy
    I have a service that serves images to end users at a very high rate using plain HTTP. The images vary between 4 and 64 kbytes, and there are 1,300,000,000 of them in total. The dataset is about 30TiB in size, and changes (new objects, updates, deletes) make up less than 1% of the requests. The number of requests per second varies from 240 to 9000 and is dispersed pretty much all over, with few objects being especially "hot". As of now, these images are files on an ext3 filesystem distributed read-only across a large number of mid-range servers. This poses several problems: using a filesystem is very inefficient, since the metadata size is large, the inode/dentry cache is volatile on Linux, and some daemons tend to stat()/readdir() their way through the directory structure, which in my case becomes very expensive. Updating the dataset is very time consuming and requires remounting between set A and B. The only reasonable handling is operating on the block device for backup, copying, etc. What I would like is a daemon that speaks HTTP (GET, PUT, DELETE and perhaps UPDATE) and stores data in an efficient structure. The index should remain in memory, and considering the number of objects, the overhead must be small. The software should be able to handle massive numbers of connections with slow (if any) time needed to ramp up. The index should be read into memory at startup. Statistics would be nice, but not mandatory. I have experimented a bit with Riak, Redis, MongoDB, Kyoto and Varnish with persistent storage, but I haven't had the chance to dig in really deep yet.
