Search Results

Search found 8687 results on 348 pages for 'per ersson'.

Page 79 of 348

  • Apache Bench reports different results for the same page

    - by Aspis
    I'm running into a little problem baselining an Apache2/fcgi/php-fpm server I am setting up. 1) If I run: ab -n 15000 http://mysite.com/index.php, Apache Bench returns Time per request: 41ms, but Document Length: 0 bytes and HTML transferred: 0 bytes, with a Transfer rate of 7.9Kb/s. 2) If I run: ab -n 15000 http://mysite.com/, Apache Bench returns Time per request: 83ms along with the accurate document length and HTML transferred totals. The APC cache status reports identical hit counts for both tests, and Apache Bench reports no errors in either case. Overall, there are no errors on any test sites and all logs are clean, etc. DocumentRoot is set to index.php, so I would expect both of these test runs to produce a similar result. My two questions are: 1) why the discrepancy? 2) which is the correct result? I've seen plenty of results like test 1 posted (without question), but frankly, from my own experience and that of others, accurate testing is hard to come by, even without goofy issues like this.
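
    A zero-byte Document Length usually means the first URL returned a redirect or an empty body that ab simply timed, so a quick way to see why the two runs differ is to inspect the raw responses first. A minimal sketch (the hostname is the placeholder from the question):

      # Compare status code and body size for both URLs
      curl -sS -o /dev/null -w "index.php: %{http_code} %{size_download} bytes\n" http://mysite.com/index.php
      curl -sS -o /dev/null -w "/:         %{http_code} %{size_download} bytes\n" http://mysite.com/
      # A 301/302 or empty 200 on the first line would explain the faster, 0-byte "result"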

    Read the article

  • Java: very slow tomcat and too big war file

    - by NaN
    I created some sort of RESTful API backend for a mobile app. It's written completely in Java using Jersey as the framework. At the moment no database is used, it's all in memory, but this is no problem so far (it's only for prototyping purposes). I ordered the smallest package from DigitalOcean and installed tomcat7. All in all Tomcat works, but I have three major problems: 1) It takes a long time until Tomcat deploys the app: I deploy it via the Tomcat manager and it takes about 2 minutes until the site works (excl. WAR upload time). 2) The WAR files are quite big (16MB): I don't know why they are so big. There are no database dependencies and most logic is written in plain Java. Okay, we are using Jersey, but 16MB is a lot for the logic of a small webservice. 3) I have to restart Tomcat every 3 days or so. It looks like a memory leak or something similar. If the app runs for a few days the response time is quite high and the server seems to be frozen. It works again if I restart Tomcat via SSH. You can find my mvn pom file right here. Do you have some tips? Are there good Tomcat alternatives?
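
    For the WAR-size question, a first step is simply to see which bundled jars dominate the archive; a sketch (the path target/app.war is a placeholder for the actual build output):

      # List the biggest entries under WEB-INF/lib, largest last (ignore the trailing "total" line)
      unzip -l target/app.war "WEB-INF/lib/*" | sort -n | tail -20
      # Jars that the container or another dependency already provides can often be
      # marked <scope>provided</scope> in the pom, which shrinks the WAR noticeably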

    Read the article

  • Having trouble keeping a 1GB RAM Centos server running

    - by Josh
    This is my first time configuring a VPS server and I'm having a few issues. We're running Wordpress on a 1GB Centos server configured per online research. No custom queries or anything crazy, but we're closing in on 8K posts. At arbitrary intervals, the server just goes down. From the client side, it just says "Loading..." and will spin more or less indefinitely. On the server side, the shell locks completely. We have to do a hard reboot from the control panel and then everything is fine. Watching top, I see it hovering between 35-55% memory usage generally, with occasional spikes up to around 80%. When I saw it go down, there were about 30-40 Apache processes showing, which pushed memory over the edge. The error_log tells me that MaxClients was reached right before each reboot. I've tried tinkering with that, but to no avail. I think we'll probably need to bump the server up to the next RAM level, but with ~120K pageviews per month it seems like that's a bit overkill, since it was running fairly well on a shared server before. Any ideas? httpd.conf and my.cnf values to add? I'll update this with the current ones if that helps. Thanks in advance! This has been a fun and important learning experience but, overall, quite frustrating! Edit: quick top snapshot:

      top - 15:18:15 up 2 days, 13:04,  1 user,  load average: 0.56, 0.44, 0.38
      Tasks:  85 total,   2 running,  83 sleeping,   0 stopped,   0 zombie
      Cpu(s):  6.7%us,  3.5%sy,  0.0%ni, 89.6%id,  0.0%wa,  0.0%hi,  0.1%si,  0.0%st
      Mem:   2051088k total,   736708k used,  1314380k free,   199576k buffers
      Swap:  4194300k total,        0k used,  4194300k free,   287688k cached
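
    Since the error_log points at MaxClients, a common sanity check is to size MaxClients from the average Apache resident size and the RAM you can spare for Apache. A rough sketch (the column number assumes the stock ps -yl layout; adjust if yours differs):

      # Average resident size (MB) of the Apache workers on CentOS
      ps -ylC httpd | awk 'NR>1 {sum+=$8; n++} END {printf "avg RSS: %.1f MB over %d procs\n", sum/n/1024, n}'
      # Rough MaxClients = RAM you can give Apache / average worker size,
      # e.g. 512 MB available / 25 MB per worker ≈ 20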

    Read the article

  • Multiple servers vs 1 big server performance

    - by pistacchio
    Hi to all! My team of developers has suggested a server structure for an upcoming project we are developing. Our structure is "logical", meaning that the various logical components of the application (it is a distributed one) rely on different servers. Some components are more critical than others and will be subjected to more load. Our proposal was to have 1 server per component, but the hardware guys suggested replacing the various machines with a single, bigger one running virtual servers. They're going to use blade servers. Now, I'm not an expert at all, but my question to the guys was: so if we need, for example, three 2GHz CPU / 2GB RAM machines and you give me one machine with three 2GHz CPUs and 6GB of RAM, is it the same? They told me it is. Is this accurate? What are the advantages or disadvantages of the two solutions? What are the generally accepted best practices? Could you point out some URL reference dealing with the problem? Thank you in advance! EDIT: Some more info. The (internet / intranet) application is already layered. We have some servers on the DMZ that will expose pages to the internet, and the databases are on their own machines. What we want to split (and they want to join) are some webservers that mainly expose webservices. One is a DAL that communicates with the database layer, one is our Single Sign On / User Profile application that gets called once per page, and one is a clone of what is seen on the Internet, to be used on our LAN.

    Read the article

  • Postfix additional transports - is it working?

    - by threecheeseopera
    I have enabled two additional transports in my Postfix config to deal with recipient domains that demand connection limiting, per the instructions here at serverfault. However, I have no idea if this is working or not; in fact, I think it is not working, due to the send speeds I am seeing in the logs. How might I determine if my additional transports are working? If they aren't, do you have any tips on figuring out why? And, do you have any comments on my particular configuration? (am I a bucket of fail?) I have enabled the additional transports in master.cf:

      smtp      inet  n  -  -  -  -   smtpd
      careful   unix  -  -  n  -  10  smtp
        -o smtp_connect_timeout=5 -o smtp_helo_timeout=5
      cautious  unix  -  -  n  -  -   smtp
        -o smtp_connect_timeout=5 -o smtp_helo_timeout=5

    I have set up the transport mapping file /etc/postfix/transport:

      hotmail.com    cautious:
      yahoo.com      careful:
      gmail.com      cautious:
      earthlink.net  cautious:
      msn.com        cautious:
      live.com       cautious:
      aol.com        careful:

    I have set up the transport mapping and some connection-limiting settings in main.cf:

      transport_maps = hash:/etc/postfix/transport
      careful_initial_destination_concurrency = 5
      careful_destination_concurrency_limit = 10
      cautious_destination_concurrency_limit = 50

    Finally, I have converted the transport file to a db per the Postfix docs:

      #> postmap /etc/postfix/transport

    and then restarted Postfix. I do see my transport_maps setting when I run postconf, but I do not see any of the transport-specific settings ('careful_xxx_yyy_zzz'). Also, the mail logs do not appear to be any different from what they were previously. Thanks!!!
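
    Two low-risk ways to confirm the transports are actually in use, sketched below (the log path is the Debian default; on RHEL-style systems it is usually /var/log/maillog): first query the map directly, then make each transport easy to spot in the logs by giving it its own syslog name.

      # 1) Does the transport map resolve the way you expect?
      postmap -q hotmail.com hash:/etc/postfix/transport    # expect: cautious:
      postmap -q aol.com     hash:/etc/postfix/transport    # expect: careful:

      # 2) Add "-o syslog_name=postfix-careful" (and -cautious) to the master.cf
      #    entries, run "postfix reload", then watch deliveries:
      tail -f /var/log/mail.log | grep postfix-careful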

    Read the article

  • Postfix + Exchange + ActiveDirectory; How to mix them

    - by itwb
    My client has got many sub-offices and one head office. The head office has a domain name: business.com. All users in the many sub-offices need to have a head-office email address: [email protected]. Anyone not in the head office will need the email forwarded to an external email address. All users in the head office will have their email delivered to Microsoft Exchange. Users are listed in Active Directory under two different OUs: HeadOffice and SubOffice. Is this something that can be configured? I've done some googling, but I can't find any examples or businesses set up this way. Edit: Postfix will accept all email and will need to determine whether to forward it to an external account or have it delivered to MS Exchange. I've read that in MS Exchange you can 'mail-enable' contacts for forwarding, but I don't know whether each AD account requires an Exchange CAL. The end goal is to forward email for sub-office users to external accounts, or accept email for head-office users. Maybe I don't need to worry about Postfix to perform this task..... http://www.windowsitpro.com/article/exchange-server-2010/exchange-server-licensing-some-of-your-questions-answered "What about client access licenses (CALs)? You need one CAL per user who will connect to Exchange. Although it might not be 100 percent precise, I prefer to think of it as one CAL per mailbox; there are exceptions for users outside your organization, automated tools that use mailboxes, and so on. Exchange doesn't enforce this limit, so it's on you to ensure that you have the correct number of CALs for the set of clients you support."
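
    For the Postfix side, one common shape for this split is: virtual aliases rewrite sub-office users to their external mailboxes (virtual alias rewriting happens before transport lookup), and whatever is left for the domain is relayed to Exchange. A minimal hand-written sketch; the addresses and the Exchange hostname are made up, and a real deployment would more likely feed these maps from Active Directory via Postfix's LDAP lookup tables:

      # /etc/postfix/virtual   -- sub-office users rewritten to their external mailbox
      #   jane.doe@business.com       jane.doe@external-provider.example
      # /etc/postfix/transport -- everything else for the domain relayed to Exchange
      #   business.com                smtp:[exchange.internal.example]
      postconf -e 'virtual_alias_maps = hash:/etc/postfix/virtual'
      postconf -e 'transport_maps = hash:/etc/postfix/transport'
      postmap /etc/postfix/virtual /etc/postfix/transport && postfix reload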

    Read the article

  • Isolating Apache virtualhosts from the rest of the system

    - by JesperB
    I am setting up a web server that will host a number of different web sites as Apache VirtualHosts, and each of these will have the possibility to run scripts (primarily PHP, possibly others). My question is how I can isolate each of these VirtualHosts from each other and from the rest of the system. I don't want e.g. website X to read the configuration of website Y or any of the server's "private" files. At the moment I have set up the VirtualHosts with FastCGI, PHP and SuExec as described here (http://x10hosting.com/forums/vps-tutorials/148894-debian-apache-2-2-fastcgi-php-5-suexec-easy-way.html), but SuExec only prevents users from editing/executing files other than their own - the users can still read sensitive information such as config files. I have thought about removing the UNIX global read permission for all files on the server, as this would fix the above problem, but I'm not sure if I can safely do this without disrupting server functions. I also looked into using chroot, but it seems that this can only be done on a per-server basis, and not on a per-virtual-host basis. I'm looking for any suggestions that will isolate my VirtualHosts from the rest of the system. P.S. I'm running Ubuntu 12.04 Server.
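
    A sketch of the permissions approach for the global-read concern (user, group and path names are examples): give each site its own unix user, keep the web server's group as the only other reader, and drop "other" permissions entirely, so PHP running as one site's user cannot read another site's tree.

      # One unix user per site; Apache itself stays in group www-data
      adduser --system --group sitex
      chown -R sitex:www-data /var/www/sitex
      # owner: full access, group (webserver): read/traverse only, others: nothing
      find /var/www/sitex -type d -exec chmod 750 {} \;
      find /var/www/sitex -type f -exec chmod 640 {} \;
      # With suexec/FastCGI the PHP processes for sitex run as user sitex, so they
      # cannot read /var/www/sitey (different owner, mode 750/640, no "other" bits)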

    Read the article

  • How to remove Media Center from Windows 8

    - by Arabella
    I bought 2 Windows 8 upgrades, 1 for my PC and 1 for my notebook. I added Windows Media Center to my notebook using the free offer in November (side note: the key was emailed to me within 5 minutes; I see many people have been complaining that it takes a few days). Today I decided to add WMC to my PC as well, so I went onto the Microsoft website, same as last time, and I received the email within a few minutes. Once I added WMC, entered the key, and the computer rebooted, my activation was broken: "This product key is already being used on another PC. Try a different key or buy a new one." After rereading the product key email, I realised that the WMC key was exactly the same as the one I had received in November for my notebook (I used the same email, i.e. my Microsoft account Outlook email, for both). I didn't think this would be a problem, as Microsoft's feature pack page states: "...is limited to five licenses per customer per promotion." So then I decided I'll just remove WMC from my PC and go back to Windows 8 Pro. I turned off the WMC feature, the PC restarted, and activation was still broken because my key had been replaced. I then tried to activate it with my original Pro key. The error it gave was that this key cannot be used with this version of Windows, as it is now Windows 8 Pro with Media Center and not Windows 8 Pro anymore. I've searched a bit and it seems the only way to remove it is to do a clean install. I tried the Windows 8 Downgrade Helper, which told me I was already running Win 8 Pro when I tried to downgrade, and that I was running Win 8 Pro with Media Center when I tried the other option. To sum up: how do I remove Windows Media Center from Windows 8 Pro without having to do a clean install?

    Read the article

  • Server slowdown

    - by Clinton Bosch
    I have a GWT application running on Tomcat on a cloud Linux (Ubuntu) server. Recently I released a new version of the application and suddenly my server response times have gone from a 500ms average to a 15s average. I have run every monitoring tool I know:
      - iostat says my disks are 0.03% utilised
      - mysqltuner.pl says I am OK (output below)
      - top says my processor is 99% idle, load average: 0.20, 0.31, 0.33
      - memory usage is 50% (-/+ buffers/cache: 3997 3974)

    mysqltuner output:

      [OK] Logged in using credentials from debian maintenance account.
      -------- General Statistics --------------------------------------------------
      [--] Skipped version check for MySQLTuner script
      [OK] Currently running supported MySQL version 5.1.63-0ubuntu0.10.04.1-log
      [OK] Operating on 64-bit architecture
      -------- Storage Engine Statistics -------------------------------------------
      [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
      [--] Data in MyISAM tables: 370M (Tables: 52)
      [--] Data in InnoDB tables: 697M (Tables: 1749)
      [!!] Total fragmented tables: 1754
      -------- Security Recommendations -------------------------------------------
      [OK] All database users have passwords assigned
      -------- Performance Metrics -------------------------------------------------
      [--] Up for: 19h 25m 41s (1M q [28.122 qps], 1K conn, TX: 2B, RX: 1B)
      [--] Reads / Writes: 98% / 2%
      [--] Total buffers: 1.0G global + 2.7M per thread (500 max threads)
      [OK] Maximum possible memory usage: 2.4G (30% of installed RAM)
      [OK] Slow queries: 0% (1/1M)
      [OK] Highest usage of available connections: 34% (173/500)
      [OK] Key buffer size / total MyISAM indexes: 16.0M/279.0K
      [OK] Key buffer hit rate: 99.9% (50K cached / 40 reads)
      [OK] Query cache efficiency: 61.4% (844K cached / 1M selects)
      [!!] Query cache prunes per day: 553779
      [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 34K sorts)
      [OK] Temporary tables created on disk: 4% (4K on disk / 102K total)
      [OK] Thread cache hit rate: 84% (185 created / 1K connections)
      [!!] Table cache hit rate: 0% (256 open / 27K opened)
      [OK] Open file limit used: 0% (20/2K)
      [OK] Table locks acquired immediately: 100% (692K immediate / 692K locks)
      [OK] InnoDB data size / buffer pool: 697.2M/1.0G
      -------- Recommendations -----------------------------------------------------
      General recommendations:
          Run OPTIMIZE TABLE to defragment tables for better performance
          MySQL started within last 24 hours - recommendations may be inaccurate
          Enable the slow query log to troubleshoot bad queries
          Increase table_cache gradually to avoid file descriptor limits
      Variables to adjust:
          query_cache_size (> 16M)
          table_cache (> 256)
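
    The two [!!] lines point at table-cache churn and query-cache pruning rather than disk or CPU; a quick way to watch those counters directly, as a sketch assuming a working mysql client login (e.g. via ~/.my.cnf):

      # Table cache pressure: Opened_tables climbing fast means the cache is too small
      mysql -e "SHOW GLOBAL STATUS LIKE 'Opened_tables'; SHOW VARIABLES LIKE 'table_cache';"
      # Query cache churn
      mysql -e "SHOW GLOBAL STATUS LIKE 'Qcache_lowmem_prunes';"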

    Read the article

  • Why database partitioning didn't work? Extract from thedailywtf.com

    - by questzen
    Original link: http://thedailywtf.com/Articles/The-Certified-DBA.aspx. Article summary: the DBA suggests an approach involving rigorous partitioning, 10 partitions per disk (3 actual disks and 3 RAID). The stats show that the performance is non-optimal. Then the DBA suggests an alternative of 1 partition per disk (with more disks added). This also fails. The sysadmin then sets up a single disk, single partition and saves the day. The size of the disks was not mentioned, but given today's typical disk sizes (on the order of 100 GB) the partitions would be huge, so it surprises me that a single disk with a single partition outperformed. Initially I suspected that the data was segregated and hence reads were faster. But how come the performance didn't degrade as time went by, with all the inserts and updates happening? I saw this on Reddit, but the explanation there was largely spindle/platter centered, and there was no mention of this in the article. Is there any other reason? I can only guess that the tables were using an incorrect hash distribution, causing non-uniform allocation across disks (wrong partitioning); this would increase fetch times. Any thoughts?

    Read the article

  • Faster, secure protocol/code required for long-distance transfer

    - by Chopper3
    I've run into a problem and I'm looking for a new secure protocol/client/server that's faster over a 1Gb/s fibre link - let me tell you the story... I have a pair of redundant, diversely-routed, 1Gb/s links over a distance of around 250 miles or so (not dark fibre but a dedicated point-to-point link, not a mesh). At the 'client' end I have an HP DL380 G5 (2 x dual-core 2.66GHz Xeons, 4GB, Windows 2003EE 32-bit); at the 'server' end I have an HP BL460c G6 (2 x quad-core 2.53GHz Xeons, 48GB, Oracle Linux 5.3 64-bit). I need to transfer around 500 x 2GB files per week from the client to the server machine - but the transfer NEEDS to be secure. Using either iPerf or regular FTP I can get ~80MB/s of transfer pretty consistently, which is great. Using WinSCP or Windows SFTP I can't seem to get more than ~3-4MB/s; at this point the server's CPU is 3% busy while CPU0 of the client goes to ~30% utilised. We've tried editing various TCP window sizes with little success. Both ends are connected to quite low-usage Cisco Cat6509s with Sup720s. I can replace the client machine with a newer machine and/or move it to Linux - but this will take time. Clearly these single-threaded secure Windows clients are introducing too much latency doing their encryption. So a few questions/thoughts: Are there any higher-performing secure protocols or client software for Windows that I could try? I'm pretty protocol-agnostic so long as it'll work between Windows and Linux. Should I be using hardware to do the encryption, either in the client or the network parts? If so, what would you recommend? I'm not convinced that just swapping the server would be that much faster - the CPU was only at 30%, but then again that's higher than I'd have expected given the load - moving to Linux at the client end may be a better idea but would be quite disruptive. Am I missing a trick? Thanks in advance.
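
    Since the bottleneck looks like single-threaded encryption on the client, one cheap experiment before buying hardware is to compare SSH ciphers, since they differ a lot in per-core cost. A rough sketch run from a Linux or Cygwin shell (file, host and cipher availability are assumptions; the cipher list depends on the OpenSSH build at both ends):

      # Time a single 2GB test file over each cipher; file and host are placeholders
      for c in arcfour aes128-cbc aes128-ctr; do
          echo "cipher: $c"
          time scp -c "$c" testfile.2g user@server:/dev/null
      done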

    Read the article

  • Anyone have real world experience with Rackspace Cloud Sites at high scale?

    - by Allara
    I have a pure web service application layer using .NET. I was originally planning to use Amazon EC2, but rolling my own autoscaling procedures is a bit intimidating, and the scaling isn't very granular from a cost perspective. If the app is successful, we could be looking at relatively high scale (millions of requests per month). The app uses Amazon SimpleDB as the database layer. As a test, I have the app running successfully in Rackspace Cloud Sites. Performance seems to be equal to (if not better than) a standard EC2 instance, even with the added latency of the SimpleDB requests travelling to the Rackspace network. However, testing at this stage is at a very low scale. My question is this: has anyone had real-world experience running a high scale application on Rackspace Cloud Sites? Moreover, once you pass the "included" 10,000 compute cycles per month, does the overall cost seem to be lower than rolling lots of EC2 instances? My assumption would be that with completely smooth scaling (i.e. only adding compute resources as needed), the cost could be lower on average. However, their stated goal of calibrating 10,000 CCs as a single 1.2 Ghz CPU seems on average to be much more expensive than EC2. I like the idea of no-touch scaling, but is it too good to be true?

    Read the article

  • Does SNI represent a privacy concern for my website visitors?

    - by pagliuca
    Firstly, I'm sorry for my bad English. I'm still learning it. Here it goes: When I host a single website per IP address, I can use "pure" SSL (without SNI), and the key exchange occurs before the user even tells me the hostname and path that he wants to retrieve. After the key exchange, all data can be securely exchanged. That said, if anybody happens to be sniffing the network, no confidential information is leaked* (see footnote). On the other hand, if I host multiple websites per IP address, I will probably use SNI, and therefore my website visitor needs to tell me the target hostname before I can provide him with the right certificate. In this case, someone sniffing his network can track all the website domains he is accessing. Are there any errors in my assumptions? If not, doesn't this represent a privacy concern, assuming the user is also using encrypted DNS? Footnote: I also realize that a sniffer could do a reverse lookup on the IP address and find out which websites were visited, but the hostname travelling in plaintext through the network cables seems to make keyword based domain blocking easier for censorship authorities.
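
    The assumption is easy to verify on the wire: the server name in the TLS ClientHello's SNI extension travels in cleartext, so a passive sniffer sees it even though everything after the handshake is encrypted. A quick demonstration (the IP, interface and hostname are examples):

      # On the client: force an SNI handshake to a name-based HTTPS host
      openssl s_client -connect 203.0.113.10:443 -servername www.example.org </dev/null >/dev/null
      # On a box that can see the traffic: the hostname shows up as plain ASCII in the ClientHello
      tcpdump -i eth0 -A 'tcp port 443' | grep --line-buffered 'www.example.org'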

    Read the article

  • Get the Windows Scheduler's Location Or Code?

    - by Ram
    Today I got a task to find a scheduled job that runs once a month; I have to change its schedule to once a week. I have searched for it in the Windows scheduled tasks, but I didn't find it. It sends a mail containing a link. Now I am confused about where else I can find it, as the place I know the scheduler could be has already been checked by me. Can someone suggest where else I can search for this scheduler? As per the previous developer's comment, "It is a normal Windows method of scheduling tasks". UPDATE: The task runs once a month, periodically. As far as I know it could be a Windows scheduled task or a Windows service created by the old developer. As the previous developer is not available and I do not have any documentation, I need to change the time period from monthly to weekly. I have checked the Task Scheduler on the server and the services running on the server, and was unable to find the scheduler. Now I have two questions: 1) is there any other approach I can use to schedule an automated email periodically? 2) any idea how to find this one?

    Read the article

  • Bonnie does not provide speed for Sequential Input / Block

    - by Lqp1
    I'm using Proxmox VE and I would like to run some benchmarks regarding the performance of this product. One of these benchmarks is bonnie++; it runs very well in a VM (qemu-kvm), but when I run it in a container (OpenVZ) it does not give me a reading speed (only writing). I don't understand why... Does anyone know what's happening? The VMs and containers are Debian 7.4. Here's the output of bonnie in the container:

      root@ct2:/# bonnie++ -u root
      Using uid:0, gid:0.
      Writing a byte at a time...done
      Writing intelligently...done
      Rewriting...done
      Reading a byte at a time...done
      Reading intelligently...done
      start 'em...done...done...done...done...done...
      Create files in sequential order...done.
      Stat files in sequential order...done.
      Delete files in sequential order...done.
      Create files in random order...done.
      Stat files in random order...done.
      Delete files in random order...done.
      Version 1.96        ------Sequential Output------ --Sequential Input- --Random-
      Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
      Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
      ct2              1G   843  99 59116   8 60351   4  4966  99 +++++ +++  2745   8
      Latency              9558us    3582ms     527ms    1672us     936us    5248us
      Version 1.96        ------Sequential Create------ --------Random Create--------
      ct2                 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                    files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                       16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
      Latency             19567us     358us     368us     107us      59us      25us
      1.96,1.96,ct2,1,1401810323,1G,,843,99,59116,8,60351,4,4966,99,+++++,+++,2745,8,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,9558us,3582ms,527ms,1672us,936us,5248us,19567us,358us,368us,107us,59us,25us

    The filesystem for / is of type "simfs", which is a pseudo-filesystem for OpenVZ. Maybe it's related to this issue, but I can't find anyone with the same issue with bonnie and OpenVZ... Thanks for your help. Regards, Thomas.
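
    The "+++++" fields are bonnie++'s way of saying a test finished too quickly to give a meaningful figure; inside the container the 1G test file can easily be served from the host's page cache, so reads never touch the disk. A common workaround is to force a working set well beyond RAM and tell bonnie++ how much RAM to assume - a sketch, with sizes that should be adjusted to the host's RAM rather than the container's:

      # -d: test directory, -s: total file size, -r: RAM size bonnie++ should assume (MB)
      # the file size should be at least ~2x the *host* RAM or the read phase runs from cache
      bonnie++ -u root -d /root -s 16g -r 8192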

    Read the article

  • IIS 7 much slower than IIS 6

    - by JoeJoe
    I have an ASP.NET 3.5 web application running fine on Windows 2003 / IIS 6. I published the same exact application to IIS 7.5 (Win2008R2) on a faster box (i5, 8GB RAM) and it is significantly slower: 5-6 sec per page vs. 1-2 sec per page. During that time the Task Manager CPU is always under 10%. Both attach to the same database on another box. The benchmark is consistent from any other client browser or machine. I have connection pooling on both, compression on both, same network subnet, Forms authentication (no SSL yet). Can you give me steps on how to troubleshoot where the delays are being introduced, or settings in IIS 7 that I may have overlooked? I'm just using defaults. There is only one web site on each box. I understand the role of an Application as defined in IIS has changed; there is no special Application defined in IIS.

    Read the article

  • What is the best hosting option for Flash web-widget?

    - by par
    Our Flash web widget has become highly popular. It is downloaded around 100,000 times per day. And that is the problem. Our server bandwidth is too narrow to deliver the widget to clients quickly; the widget loads very slowly, probably 20 times slower than before (at peak times). I have probably not chosen the right hoster for my task - delivering a 1 MB Flash widget to 100,000 users per day. What is the best hosting solution in my case? I'm not good at server administration, so forgive me if I sound naive. The details are the following. Our hoster options:
      - Dedicated server, Ubuntu
      - 10 Mbit connection
      - monthly bandwidth limit: 2000 GB
    Widget size is 1 MB. The widget consists of the main SWF and a number of loaded SWF and data files. This is a part of an Apache status report taken right now:

      Server uptime: 1 hour 2 minutes 38 seconds
      Total accesses: 74865 - Total Traffic: 5.8 GB
      CPU Usage: u28 s7.78 cu0 cs0 - .952% CPU load
      19.9 requests/sec - 1.6 MB/second - 81.1 kB/request
      200 requests currently being processed, 0 idle workers
      WWWWWWWWWWWWWWWWWWWWWWWWCWWWWWWWWWWWWWWWWWWWWWWWWWWCWWWWWWWCWWWW
      WWWWWCWWWWWWWWWWWWWWCWWWWWWWWWWWWCWWWWWWWWCWWCWWWWWWWWWWWWWWWWWW
      WWWWWWWWWWWWWWWWCWWWWWWWWWWWWWWWWWWWWWWWWWWWCWCWWWWWWWWWWWWWWWCW
      WWWWWWWW........................................................
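
    Back-of-the-envelope math with the numbers from the question (1 MB widget, 100,000 downloads/day) shows that both the 10 Mbit port and the 2000 GB monthly cap are exceeded, which is why a CDN or a plan with a fatter port is the usual fix:

      # average line rate: 100,000 MB/day x 8 bits / 86,400 s
      echo "100000 * 8 / 86400" | bc -l      # ≈ 9.3 Mbit/s average on a 10 Mbit port
      # monthly transfer for the widget alone: 100,000 MB/day x 30 days / 1024
      echo "100000 * 30 / 1024" | bc -l      # ≈ 2930 GB against a 2000 GB cap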

    Read the article

  • Which steps are required to avoid my server being considered as spam sender?

    - by Cyril N.
    I'm looking to set up a webmail server that will be used by a lot of users who will receive and send emails. They will also have the possibility to forward emails they receive. I'd like to know which steps are recommended/required to indicate to other mail services (Gmail, Outlook, etc.) that my server is not a spam sender (disclaimer: it's NOT! :p) but a legitimate one. I know I have to define an SPF TXT record, for example, but what other steps would you recommend? For example, is there a formula for having a proportional number of servers (and therefore different IP addresses) based on the amount of email sent (something like a maximum of 1M emails per IP per day)? Am I missing something else? I tried to search online, but I mostly found advice on how to avoid emails sent from scripts (like PHP) being put in the spam folder; I'm looking for the server/DNS configuration side. Thanks a lot for your help/tips, I appreciate it!
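
    Beyond SPF, the checks large providers weigh most heavily are a matching forward/reverse DNS pair for the sending IP and, ideally, DKIM signing plus a DMARC policy. The DNS part can be verified from the shell; example.com and 198.51.100.25 are placeholders:

      dig +short TXT example.com                   # SPF record, e.g. "v=spf1 ip4:198.51.100.25 -all"
      dig +short -x 198.51.100.25                  # PTR should name a host...
      dig +short $(dig +short -x 198.51.100.25)    # ...that resolves back to the same IP
      dig +short TXT _dmarc.example.com            # DMARC policy, if published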

    Read the article

  • NGINX: How do I calculate an optimal no. of worker processes and worker connections?

    - by bodacious
    Our web app is running on a Linode 2048 server at the moment (~2 GB of RAM). The MySQL database is on another Linode of its own, so this server is really only handling NGINX and the Rails application. The application itself uses about 185976 of memory per instance (RSS). Our traffic is < 1000 per day and the pages are mostly cached, so there are fewer hits to the Rails app itself. My question is: how can I calculate optimal NGINX config settings for my app? Below is the current config:

      worker_processes 1;
      # pid of nginx master process
      pid /var/run/nginx.pid;
      events {
          worker_connections 1024;
      }
      http {
          access_log /var/log/nginx/access.log;
          error_log /var/log/nginx/error.log;
          passenger_root /home/user/.rvm/gems/ree-1.8.7-2011.01@URTV/gems/passenger-3.0.3;
          passenger_ruby /home/user/.rvm/rubies/ree-1.8.7-2011.01/bin/ruby;
          include mime.types;
          default_type application/octet-stream;
          sendfile on;
          tcp_nopush on;
          tcp_nodelay on;
          # gzip settings
          gzip on;
          gzip_http_version 1.0;
          gzip_comp_level 2;
          gzip_vary on;
          gzip_proxied any;
          gzip_types text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
          # load extra modules from the vhosts directory
          include /opt/nginx/vhosts/*.conf;
      }

    Any advice would be appreciated! :)
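
    The usual rules of thumb are one worker process per CPU core and worker_connections no higher than the per-process open-file limit, with max clients ≈ worker_processes × worker_connections. The inputs can be read straight off the box - a sketch:

      grep -c ^processor /proc/cpuinfo   # cores -> worker_processes
      ulimit -n                          # open-file limit -> ceiling for worker_connections
      # e.g. 1 core and a 1024 fd limit gives "worker_processes 1; worker_connections 1024;",
      # which matches the config above; raising either only helps if the fd limit is raised too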

    Read the article

  • Changing the modified date of a message in Exchange 2010

    - by jgoldschrafe
    My organization is in the middle of a process to move their Exchange 2010 messaging system from one archiving platform to another. As part of this process, we need to restore all archived messages back into users' email accounts, and then let the new system import them again. The problem is that when the messages are dumped back, the modified date on the message is set to the date it was restored, which trips up message archiving and basically means nobody will have anything archived for six months. So you don't have to ask: no, our archiving platform only uses the modified timestamp on the message and cannot be altered to temporarily use the sent or received timestamp instead to determine whether to archive it. We and others have asked for the feature, but it doesn't exist right now. What we're looking for is a method to go through the user's mailbox and alter the modified timestamp of each message (or preferably received more than X months ago) to the received date of the message. We also don't want to spend more on this tool per user than we're spending on the archiving solution in the first place. We've run across a few tools that are something ridiculous like $25 per user. I don't think we're even paying close to that for Exchange and the archiving solution put together. Whatever we settle on should function on a live mailbox with no downtime. Playing around with PST imports and hacky little things like that isn't going to work. We're fine with programming/scripting, if anyone knows the best way through PowerShell, COM automation or some other way to best handle this.

    Read the article

  • MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy MySQL server, MySQLTuner 1.2.0 always recommends increasing query_cache_size, no matter how much I raise the value (I tried up to 512MB). On the other hand it warns that "Increasing the query_cache size over 128M may reduce performance". Here are the latest results:

      >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
      >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
      >> Run with '--help' for additional options and output filtering
      -------- General Statistics --------------------------------------------------
      [--] Skipped version check for MySQLTuner script
      [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log
      [OK] Operating on 64-bit architecture
      -------- Storage Engine Statistics -------------------------------------------
      [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
      [--] Data in InnoDB tables: 6G (Tables: 195)
      [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
      [!!] Total fragmented tables: 51
      -------- Security Recommendations -------------------------------------------
      [OK] All database users have passwords assigned
      -------- Performance Metrics -------------------------------------------------
      [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B)
      [--] Reads / Writes: 89% / 11%
      [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads)
      [!!] Maximum possible memory usage: 132.2G (139% of installed RAM)
      [OK] Slow queries: 0% (2K/254M)
      [OK] Highest usage of available connections: 32% (391/1200)
      [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K
      [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads)
      [OK] Query cache efficiency: 79.9% (181M cached / 226M selects)
      [!!] Query cache prunes per day: 1033203
      [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts)
      [OK] Temporary tables created on disk: 14% (760K on disk / 5M total)
      [OK] Thread cache hit rate: 99% (676 created / 5M connections)
      [OK] Table cache hit rate: 22% (1K open / 8K opened)
      [OK] Open file limit used: 0% (49/13K)
      [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks)
      [OK] InnoDB data size / buffer pool: 6.1G/19.5G
      -------- Recommendations -----------------------------------------------------
      General recommendations:
          Run OPTIMIZE TABLE to defragment tables for better performance
          Reduce your overall MySQL memory footprint for system stability
          Increasing the query_cache size over 128M may reduce performance
      Variables to adjust:
          *** MySQL's maximum memory usage is dangerously high ***
          *** Add RAM before increasing MySQL buffer variables ***
          query_cache_size (> 192M) [see warning above]

    The server has 76GB of RAM and dual E5-2650s. The load is usually below 2. I'd appreciate your hints on interpreting the recommendation and optimizing the database config.
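
    A way to judge the prune number without MySQLTuner is to look at the raw query-cache counters: high Qcache_lowmem_prunes with little free memory means the cache is genuinely too small for the workload, while high prunes alongside plenty of free memory but many free blocks points at fragmentation. A sketch, assuming a working mysql login:

      mysql -e "SHOW GLOBAL STATUS LIKE 'Qcache%';"
      # compare Qcache_lowmem_prunes against Qcache_free_memory and Qcache_free_blocks
      mysql -e "FLUSH QUERY CACHE;"   # defragments the cache without emptying it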

    Read the article

  • What kind of server configuration is best for a chatting app? [closed]

    - by mohabitar
    I'm just now starting to go deeper into the world of cloud hosting and databases, and am getting overwhelmed by how deep this information goes. It's all a little too much to consume in a short amount of time. I get a lot of pricing information, but I'm unable to determine what that means to me. I'm making what you might compare to an email app. Users can send messages to one another. I just don't understand, out of the several options, what would be ideal for an app like this, where users would be constantly sending and receiving text data. With Amazon DynamoDB, I have to specify a pre-defined throughput with number of reads and writes per second. Sure I can just type 50, but I'm not exactly sure what 50 writes per second represents. I'm trying to determine what would be the most cost efficient solution, and I want to know what a throughput of 50 reads/writes/second compares to. Is that a high number? What is a good throughput number for a message sending app with say 50,000 daily users? I'm just providing specific numbers so I can understand what these throughput numbers represent. 100 transactions/second to me seems like a small number since I'm not familiar with this stuff, so I'm just looking to bring everything in context. What would 100 read/write/second be useful for? Are there any average example values available? And I'm not sure what each service is good for. For a message sending app, is there any reason I'd want to choose say Amazon DynamoDB over Google App Engine? Any insight would be greatly appreciated.
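
    To put DynamoDB's provisioned numbers in context, it helps to turn daily users into writes per second. A rough worked example under explicitly made-up assumptions (50,000 daily users, 20 messages sent per user per day, messages under 1 KB, peaks around 5x the average):

      # average writes/second: users x messages per user / seconds per day
      echo "50000 * 20 / 86400" | bc -l        # ≈ 11.6 writes/s on average
      # provisioned write capacity with a 5x peak factor
      echo "50000 * 20 / 86400 * 5" | bc -l    # ≈ 58, so "50" is the right order of magnitude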

    Read the article

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    Hi. What I've tried: I have two email storage architectures, old and new.
    Old:
      - courier-imapd on several (18+) 1TB-storage servers
      - if one of them shows signs of running out of disk space, we migrate a few email accounts to another server
      - the servers don't have replicas; no backups either
    New:
      - dovecot2 on a single huge server with 16TB (SATA) storage and a few SSDs
      - we store fresh mails on the SSDs and run a doveadm purge to move mails older than a day to the SATA disks
      - there is an identical server which has a max-15-min-old rsync backup from the primary server
      - higher-ups/management wanted to pack in as much storage as possible per server in order to minimise the cost of SSDs per server
      - the rsync'ing is done because GlusterFS wasn't replicating well under that high small/random IO
      - scaling out was expected to be done by provisioning another pair of such huge servers
      - on facing disk-crunch issues like in the old architecture, accounts would be moved manually
    Concerns/doubts:
      - I'm not convinced the synchronously-replicated filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet, and I'm not sure if there's another filesystem out there for this use case.
      - The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access, and if one of the servers went down for whatever reason (planned/unplanned), we'd move the IP to the other server in the pair.
      - In filesystems like Lustre, I get the advantage of a single namespace, whereby I do not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data.
    Questions:
      - What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)?
      - Does traditional software that stores on a locally mounted filesystem pose a roadblock to scaling out with minimal "problems"?
      - Does one have to re-write (parts of) these to work with an object store of some sort, such as OpenStack object storage?

    Read the article

  • Rails application keeps timing out when attempting to connect to Postgresql DB

    - by Corillian
    I'm hosting a PostgreSQL database on a small Windows Azure Ubuntu 13.04 VM with a default postgresql.conf. I have a Rails application running on a medium Windows Azure Ubuntu 13.04 VM. When accessing the PostgreSQL database, the Rails application is constantly timing out. In its database.yml I have the connection pool size set to 120 and the timeout set to 15 seconds. Despite this, my Rails logs are full of the following error message:

      ActiveRecord::ConnectionTimeoutError: could not obtain a database connection within 5 seconds (waited 5.0023203 seconds). The max pool size is currently 120; consider increasing it.

    My postgresql.conf has a max connection limit of 120; making it any larger prevents the server from restarting successfully. I've also made sure that ssl was off in postgresql.conf per this article, but beyond that I have no idea what's going on. My PostgreSQL logs don't contain any info indicating something is going wrong. My website is getting ~1k hits per day, so perhaps a small VM instance just isn't powerful enough? I appreciate any assistance! [Edit 1] The PostgreSQL database is in a separate cloud service within the same affinity group. For example:

      db (small VM):      mydatabase.cloudapp.net (Affinity Group US East)
      forums (medium VM): myforums.cloudapp.net (Affinity Group US East)

    On the database server I have opened port 5432. The connection to the database server from the forums server uses its hostname. Is it possible that the DNS resolution is what's taking so long?
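
    Two quick measurements from the Rails VM separate "slow DNS" from "slow or exhausted connections"; a sketch using the hostname from the question and placeholder database credentials:

      # How long does the name lookup itself take?
      time getent hosts mydatabase.cloudapp.net
      # How long does a full connect plus a trivial query take?
      time psql "host=mydatabase.cloudapp.net dbname=mydb user=myuser" -c 'SELECT 1;'
      # If both are fast, the pool is more likely being drained by slow or leaked checkouts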

    Read the article

  • Linux CentOS 6 becomes unavailable from time to time - OS & network issue

    - by adoado0
    I am encountering the following problem. There is one server (DL160 G5) running CentOS 6.3 with the default kernel 2.6.32-220.2.1.el6.x86_64 - at this point I'd like to add that the issue also appeared on an older version (6.1, with an older kernel; I do not remember exactly which). cPanel is installed, and from time to time the server becomes unavailable (network connection). What I've checked (via KVMoIP):
      - load average is completely normal
      - it does not lack memory or disk space when the problem occurs
      - no console notifications
      - checked all access logs; there is no sign that it could be caused by a client script
      - cannot even access the local interface (127.0.0.1) or the main IP address
      - running tcpdump I can only see packets arriving at the server - no responses
      - all services seem to be running properly (mail, sql, http, ssh)
      - checked crontab and all clients' crontabs too
      - network port utilisation is low (up to several Mbits)
      - arriving packet rate is low - hundreds per second (according to tcpdump)
      - the console (via KVMoIP) works fine, no lags
      - there is no conntrack on this server
      - there is no IPv6 on this server
      - flushing iptables and unloading modules does not resolve the problem
      - restarting the network does not resolve the problem; no errors appear
      - it also occurs when two separate networks are configured (and multiple gateways), as well as with one IP, one default gateway and one network configured - so it seems independent of the network configuration
      - it seems to repeat randomly (independent of load, packet rate and bandwidth usage)
      - checked the server with different rootkit detection tools - it seems to be clean
      - the server has been rebooted; it did not change anything
      - there are no interface errors
      - it appears randomly - it can be once a week or several times per day, and it usually works fine again after 1-15 minutes
    What else can I check? It is definitely an OS issue - there is traffic on the interface in only one direction when the problem occurs, and I cannot even ping loopback. Any ideas? Recommended checks?
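
    Since the console stays usable while the network is dead, the most useful data is whatever can be captured from the console during an incident; a small checklist to run at that moment (eth0 is a placeholder for the real interface):

      dmesg | tail -50                      # NIC resets, link flaps, oom-killer, stack traces
      ss -s                                 # socket summary: huge orphan/timewait counts?
      cat /proc/net/softnet_stat            # a non-zero 2nd column means packets dropped in softirq
      ethtool -S eth0 | grep -iE 'drop|err' # driver/firmware-level drops and errors
      ip -s link show eth0                  # RX/TX counters: do they move at all?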

    Read the article
