Search Results

Search found 19055 results on 763 pages for 'high performance'.

Page 636/763 | < Previous Page | 632 633 634 635 636 637 638 639 640 641 642 643  | Next Page >

  • Apache using 100% CPU, once again

    - by CBenni
    Recently, apache2 started using 100% of CPU power (visible in top). From other, similar threads, I took the tip to use mod_status. Aside from HUGE amounts of NULL requests, it gives:

        CPU Usage: u2.16 s1.32 cu0 cs0 - .0835% CPU load
        1.2 requests/sec - 17.6 kB/second - 14.6 kB/request
        8 requests currently being processed, 42 idle workers

    The access and error logs do not show anything surprising or intriguing at all. Note the .08% CPU load. Another tip was to use strace:

        root@server:~# strace -p 1956
        Process 1956 attached - interrupt to quit
        restart_syscall(<... resuming interrupted call ...>

    It remains like this for at least half an hour without producing any additional output. Restarting apache fixed the problem, but for less than a second. The server runs a few custom Python scripts as well as a Django-powered website on apache2 (up to date), but even turning the scripts off (or not having them active in the first place) did not change anything. After I stopped apache and powered my server off, powered it on a few minutes afterwards and restarted all my services, the CPU usage remained low for several hours, just to pop up again, seemingly at random. The DigitalOcean CPU stats for my server show how the CPU usage was super high for almost half a day until I restarted the bot, only to remain stable for several hours and then pop up again. I am completely at a loss and don't know what I could do to find out which piece of my code is giving me these problems, or whether apache itself is the cause. Therefore I would greatly appreciate any hints on these questions: What else can I try? Which things might I not have checked? Is this definitely in my own code? How do you find what part of Python code hangs an app in an infinite loop or similar?
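
    One way to see what a spinning worker is actually doing is to trace all workers at once rather than a single PID. A diagnostic sketch, assuming a Debian-style apache2 and that strace is installed (the log paths are placeholders):

        # attach to every Apache worker, follow forks, timestamp and time each syscall
        for pid in $(pgrep apache2); do
            strace -f -tt -T -o "/tmp/apache-strace-$pid.log" -p "$pid" &
        done
        sleep 60                          # sample a minute of the busy period
        jobs -p | xargs kill              # detach the tracers
        wc -l /tmp/apache-strace-*.log    # the busiest log is the busiest worker

    A worker looping without syscalls (an empty log while CPU is pegged) points at userland code such as a stuck Python loop; a log full of syscalls points at whatever file or socket those calls name.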

    Read the article

  • Mysqld shutting down by itself

    - by AJ Naidas
    I'm running a WordPress blog that gets medium-high traffic. It is hosted on an Ubuntu server with 2 GB memory, a 2-core processor, a 40 GB SSD disk and 3 TB transfer. The problem is that MySQL shuts down by itself after an hour or two, and I have to restart it each time this happens. I checked the logs and this is what I found:

        140612  6:48:14 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
        140612  6:48:14 [Note] Plugin 'FEDERATED' is disabled.
        140612  6:48:14 InnoDB: The InnoDB memory heap is disabled
        140612  6:48:14 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        140612  6:48:14 InnoDB: Compressed tables use zlib 1.2.3.4
        140612  6:48:14 InnoDB: Initializing buffer pool, size = 1.4G
        InnoDB: mmap(1502412800 bytes) failed; errno 12
        140612  6:48:14 InnoDB: Completed initialization of buffer pool
        140612  6:48:14 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        140612  6:48:14 [ERROR] Plugin 'InnoDB' init function returned error.
        140612  6:48:14 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        140612  6:48:14 [ERROR] Unknown/unsupported storage engine: InnoDB
        140612  6:48:14 [ERROR] Aborting
        140612  6:48:14 [Note] /usr/sbin/mysqld: Shutdown complete

    Judging by this line:

        140612  6:48:14 InnoDB: Fatal error: cannot allocate memory for the buffer pool

    I suspect that this is a memory problem, but I would like to hear from the experts here before I conclude. Is this a lack-of-memory problem? Do you think the value of max_connections in my.cnf (currently 100) is a potential cause and needs increasing? TIA.
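
    For context, errno 12 is ENOMEM: the mmap() for a 1.4 GB buffer pool fails because a 2 GB box cannot supply that much memory once everything else is running. A hedged my.cnf sketch for a machine this size (the values are illustrative assumptions, not tuned recommendations):

        [mysqld]
        # leave headroom for the OS, web server and PHP on a 2 GB machine
        innodb_buffer_pool_size = 512M
        # each connection costs memory; raising max_connections makes the
        # pressure worse, not better
        max_connections = 100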

    Read the article

  • i7 x980 at 70% speed

    - by Buxley
    Hi, we bought a nice computer to solve optimization problems: an Intel i7 X980 @ 3.33 GHz with 12 GB of Team Group 1600 MHz DDR3 RAM. When we use Gurobi, the computer uses all 12 cores at maximum in the beginning of the solve. However, after a while (about 8 hrs) all cores jump between 65 and 85%. When I solve the same models on an i7 930, all cores are at a near-100% level even after longer solution times. We first thought that the hard disk was the bottleneck, since Gurobi writes out node files once the memory limit is reached. However, since the new computer has 12 GB of RAM, we set the memory limit to 7 GB so the solver only used the RAM, and still saw the same processor behaviour. Any ideas about the bottleneck? As I said earlier, it works at 100% for the first eight hours or so. Thanks very much for any answers! Our plan was to overclock it, but we can't even get it to work at normal speed yet!

    Read the article

  • What differences are there between "home" switches and "professional" switches?

    - by pjreddie
    Our radio station uses a PtP wireless system to stream our radio and TV signals from our studio up a hill to our transmitter. We have been having problems with warbly sound and dropouts that originate somewhere in this system. An engineer who occasionally visits the station thinks it could be the switches we use on each side of the PtP wireless link to connect the PtP devices to the encoders and decoders, and wants us to get two of these switches: http://www.amazon.com/Netgear-JGS516-ProSafe-16-Port-Ethernet/dp/B0002CWPOK/ref=dp_return_1 The encoder/decoder setup only streams 8 Mbps total, so it seems like the switches we have should not be stressed, unless they are adding enough latency to degrade the performance of the encoder/decoder. At each end of the connection we only have 4 connections; is there any reason we couldn't get a cheaper, "home" quality switch like this: http://www.amazon.com/D-Link-DGS-1005G-5-Port-Gigabit-Desktop/dp/tech-data/B003X7TRWE/ref=de_a_smtd Is there a significant difference that we would notice in terms of latency between these two switches? How much does the quality of the switch actually matter in this scenario? Any help is appreciated; feel free to ask questions if anything needs clarification. Thanks
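
    One way to take the guesswork out is to measure loss, jitter and latency across the exact path, switch included. A sketch using iperf3 and ping, assuming Linux (or similar) hosts at both ends; the address is a placeholder:

        # far end
        iperf3 -s
        # near end: push 8 Mbps of UDP for 60 s and watch the reported jitter/loss
        iperf3 -c 192.168.1.10 -u -b 8M -t 60
        # latency through the switch, 300 samples at 200 ms intervals
        ping -i 0.2 -c 300 192.168.1.10

    If loss and jitter stay clean at 8 Mbps through the existing switches, the problem is likely in the wireless link rather than the switching hardware.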

    Read the article

  • Use external display from boot on Samsung laptop

    - by OhMrBigshot
    I have a Samsung RV511 laptop, and recently my screen broke. I connected an external screen and it works fine, but only after Windows starts. I want to be able to use the external screen right from boot, in order to set the BIOS to boot from DVD, then install a different OS and format the hard drive. Right now I can only use the screen once Windows loads. What I've tried:

    - Opening up the laptop and disconnecting the internal display, hoping the VGA output would become the default; didn't work.
    - Using the Fn+key combo in the BIOS to switch to the external display; nothing.
    - Looking for ways to change the boot sequence without entering the BIOS, but it doesn't look like that's possible.

    Possible solutions?

    - A way to change the boot sequence without entering the BIOS?
    - Someone with the same brand/similar model to help me blindly keystroke the correct arrows/F5/F6 buttons while in BIOS mode to change the boot sequence?
    - A way to force the external display to work from boot, through modifying the internal connections (I have no problem taking the laptop apart if needed, please no soldering though), through the BIOS, or through a program?

    Also, if I change the boot sequence without access to the external screen, would the Ubuntu 12.1 installation sequence attempt to use the external screen, or would I only be able to use it after Linux is installed and running? I'd really appreciate help; I can't afford to fix the screen for a few months, and I'd really like to bring my computer back to decent performance! Thanks in advance!

    Read the article

  • Windows 7 boot animation slows down startup by default?

    - by kngofwrld
    I just upgraded my HDD to an SSD drive. I am running a completely fresh install and enjoy the short boot time. I tweaked the startup to be as fast as I could by removing unneeded apps and such, and I am not running a solid-color desktop background (a known cause of a 30-second startup delay). I have a 2.1 GHz 64-bit laptop with 4 GB of RAM, so it's not a liquid-cooled speed monster, but I checked some super-high-end PC boot videos on YouTube and noticed that they start up in almost the same time as my machine. I also noticed that the glowing Windows 7 animation plays all the way through no matter how fast the PC is. I turned off the animation, and the startup time is unchanged. I turned on verbose startup info and noticed that it runs until the very end, where it looks like it just sits there for a few seconds, waiting for something to happen. So now I think that the Windows 7 startup animation has a timer built into it that forces the computer to wait for no other reason than to play the full animation. Super-fast XP boot videos on YouTube seem to start much faster (and not just because they "have less to load"). Am I imagining things? My question is: how can I turn off not just the animation, but the timer for the animation? Here is a video that tipped me off; I have no relation to the poster. (Warning: the soundtrack might be loud.) http://www.youtube.com/watch?v=T5LkX3xejJ4
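
    For reference, the usual way to kill the animation outright is the "No GUI boot" switch, settable from an elevated command prompt; whether it also removes the end-of-boot wait described above is exactly what's in question here. A sketch:

        :: disable the boot animation (equivalent to msconfig > Boot > "No GUI boot")
        bcdedit /set {current} bootux disabled
        :: remove the override later to restore the default behaviour
        bcdedit /deletevalue {current} bootux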

    Read the article

  • Exchange 2003 Internet Mail Size Limits

    - by scampbell
    I have unsuccessfully tried to increase per-user incoming mail size limits by editing user account settings on our Exchange server, but large incoming mail from external domains is still blocked by the default global settings. After reading here: http://support.microsoft.com/default.aspx?scid=kb;en-us;322679 I see that:

        All Internet e-mail messages use the global setting for limits on sending and on receiving. The message categorizer evaluates the sender's sending limit and the recipient's receiving limit. In example 2 earlier, a user with a user mailbox limit of 3 MB could receive messages from another user with a 3-MB sending limit. Because Internet users use the global setting, they can send only a 2-MB message.

    Which to me is madness! Surely if I want to allow a user to receive mail up to a certain size, I should be able to set it as such? Is there a specific way of getting round this? Would setting the global defaults high and setting a lower, say 10 MB, limit on the SMTP connector do the trick? Thanks.

    Read the article

  • Can nginx be a mail proxy for a backend server that does not accept cleartext logins?

    - by 84104
    Can nginx be a mail proxy for a backend server that does not accept cleartext logins? Preferably I'd like to know what directive to include so that it will invoke STARTTLS/STLS, but communication via IMAPS or POP3S is sufficient. Relevant(?) section of nginx.conf:

        mail {
            auth_http localhost:80/mailproxy/auth.php;
            proxy on;

            ssl_prefer_server_ciphers on;
            ssl_protocols TLSv1 SSLv3;
            ssl_ciphers HIGH:!ADH:!MD5:@STRENGTH;
            ssl_session_cache shared:TLSSL:16m;
            ssl_session_timeout 10m;
            ssl_certificate /etc/ssl/private/hostname.crt;
            ssl_certificate_key /etc/ssl/private/hostname.key;

            imap_capabilities "IMAP4rev1" "UIDPLUS";
            server {
                protocol imap;
                listen 143;
                starttls on;
            }
            server {
                protocol imap;
                listen 993;
                ssl on;
            }

            pop3_capabilities "TOP" "USER";
            server {
                protocol pop3;
                listen 110;
                starttls on;
                pop3_auth plain;
            }
            server {
                protocol pop3;
                listen 995;
                ssl on;
                pop3_auth plain;
            }
        }
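
    As far as I know, nginx's mail proxy of this vintage speaks only plaintext to the backend, so the usual workaround is to put a TLS wrapper such as stunnel between nginx and the backend and point auth_http's returned server at it. A sketch (host names and ports are placeholders):

        # /etc/stunnel/stunnel.conf
        # nginx proxies to 127.0.0.1:10993; stunnel opens IMAPS to the real backend
        [imaps-backend]
        client = yes
        accept = 127.0.0.1:10993
        connect = backend.example.com:993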

    Read the article

  • Does having TRIM enabled affect other hard drives on a computer (and how do you know when Windows is using it)?

    - by Breakthrough
    I recently purchased a solid-state drive (an OCZ Vertex 2, 80 GB) to use as my primary operating system partition; I also have three other SATA hard drives of assorted sizes. I successfully installed Windows 7 Professional onto the SSD (works awesome, great response time and transfer rate) and used the other three HDDs for data storage. I was browsing through the Bible of OCZ SSDs and noticed the following in Section 60-76, Tweaks and TRIM:

        Q. How do I know if TRIM is enabled on my OCZ SSD?
        A. In Windows 7, go to Start, Run, cmd, and type the following:

            fsutil.exe behavior query DisableDeleteNotify

        It should respond back with "DisableDeleteNotify=0" if TRIM support is ready and active. If it's not, then type:

            fsutil.exe behavior set DisableDeleteNotify 0

    After a bit of searching on Google, I found similar results elsewhere: set DisableDeleteNotify to 0, which makes sense, since for TRIM to work the solid-state drive needs to be notified when deletes occur (for the garbage collector), unlike a normal hard drive. When I run the query on fsutil, I get the following result:

        DisableDeleteNotify = 48

    Following the instructions I found, I set this to 0 instead of 48. However, I am beginning to wonder. Is this all the proof I really need that the OS is using TRIM? Also, since this applies globally for the computer, is TRIM data being sent to the other hard drives connected to the computer? And if so, would this cause any degradation in disk performance?

    Read the article

  • Xen or KVM? Please help me decide and implement the one which is better

    - by JohnAdams
    I have been researching virtualization for a server that will run 3 guests: two Linux-based and one Windows. After trying my hand at XenServer, I am impressed with the architecture and wanted to use the open-source Xen, which is when I started hearing a lot more about KVM, about how good it is, how it's the future, etc. So, could anyone here please help me answer some of my queries about KVM versus Xen?

    - Based on my requirement of three VMs on one server, which is better for performance, KVM or Xen, considering one of the Linux VMs will work as a file server, one as a mail server, and the third as a Windows server?
    - Is KVM stable? What about upgrades?
    - What about Xen? I cannot find support for it on Ubuntu.
    - Are there any published benchmarks of both Xen and KVM? I cannot seem to find any.
    - If I go with Xen, will it be possible to move to KVM later, or vice versa?

    In summary, I am looking for real answers on which one I should use: Xen or KVM?
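
    Whichever way the comparison falls, KVM is only an option if the host CPU exposes hardware virtualization extensions. A quick check on Ubuntu (assumes the cpu-checker package is installed for kvm-ok):

        # a non-zero count means the CPU advertises VT-x (vmx) or AMD-V (svm)
        egrep -c '(vmx|svm)' /proc/cpuinfo
        # also verifies the extensions are actually enabled in the BIOS
        kvm-ok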

    Read the article

  • How to Monitor Network in Medium-Sized Company?

    - by Kyle Lowry
    I work at a medium-sized company (100+ employees). An issue that has been cropping up is network performance, internet access in particular. We have about 70 or more computers, a mix of Mac OS X and Windows XP & 7 machines. We have several servers (Exchange server, PC file servers, MS SQL, BlackBerry, FTP, Mac server, etc.). There are four main switches, a SonicWall firewall, and probably a couple of routers in the server room, with a dozen or so more scattered around the building. The network structure has grown organically over a number of years, and, as far as I know, there really isn't a monitoring solution in place. When we experience network issues (slow connections, dropped packets, and so on), our general solution is to power cycle some hardware or go around to each employee and ask them if they are uploading/downloading any large files. This is really inefficient and time-consuming, and it does not let us monitor the network and tackle potential problems proactively. I would like to find a solution that would allow me to monitor network usage company-wide in real time, ideally with detail going down to the individual computer. Given the hodgepodge of equipment and operating systems, what would be the best way to set up some kind of monitoring solution? Hardware, software, restructuring our network architecture?

    Read the article

  • Safe use of Update-FormatData?

    - by Steve B
    In a custom PowerShell module, I have at the top of my module definition this code:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    This works fine, and all .ps1xml files are loaded. However, the module is sometimes loaded using Import-Module MyModule -Force (actually, this happens in the install script of the module). In this case, the call to Update-FormatData fails with this error:

        Update-FormatData : There were errors in loading the format data file:
        Microsoft.PowerShell, c:\pathto\myfile.Types.ext.ps1xml :
        File skipped because it was already present from "Microsoft.PowerShell".
        At line:1 char:18
        + Update-FormatData <<<<  -AppendPath "c:\pathto\myfile.Types.ext.ps1xml"
            + CategoryInfo          : InvalidOperation: (:) [Update-FormatData], RuntimeException
            + FullyQualifiedErrorId : FormatXmlUpateException,Microsoft.PowerShell.Commands.UpdateFormatDataCommand

    Is there a way to safely call this command? I know I can call Update-FormatData with no parameters and it will update any known .ps1xml file, but that would work only if the file has already been loaded. Can I list somewhere the loaded format data files? Here is a bit of background. I'm building a custom module that is installed using a script. The install script looks like:

        [CmdletBinding(SupportsShouldProcess=$true, ConfirmImpact="High")]
        param()
        process {
            $target = Join-Path $PSHOME "Modules\MyModule"
            if ($pscmdlet.ShouldProcess("$target", "Deploying MyModule module")) {
                if (!(Test-Path $target)) {
                    New-Item -ItemType Directory -Path $target | Out-Null
                }
                Get-ChildItem -Path (Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path) |
                    Copy-Item -Destination $target -Force
                Write-Host -ForegroundColor White @"
        The module has been installed.
        You can import it using:
            Import-Module MyModule
        Or you can add it in your profile ($profile)
        "@
                Write-Warning "To refresh any open PowerShell session, you should run ""Import-Module MyModule -Force"" to reload the module"
                Import-Module MyModule -Force
                Write-Warning "This session has been refreshed."
            }
        }

    MyModule defines, as its first statement, this line:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    As I updated my $profile to always load this module, the Update-FormatData call had already run when I launched the session for the install script. The install script then force-imports the module, which fires the Update-FormatData call again, and that second call fails.
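
    Absent a documented way to enumerate already-loaded format files, one blunt workaround is simply to tolerate the duplicate-load error at the top of the module. A sketch, not an official pattern:

        # treat "file was already present" as non-fatal when re-importing with -Force
        try {
            Update-FormatData -AppendPath (Join-Path $PSScriptRoot "*.ps1xml") -ErrorAction Stop
        }
        catch {
            # this also hides genuine load failures, so keep the message reachable
            Write-Verbose "Update-FormatData skipped: $_"
        }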

    Read the article

  • Why does my simple RAID 1 backup storage sometimes perform really slowly?

    - by randomguy
    I bought 2x Samsung F3 EcoGreen 2 TB hard disks to build backup storage. I put them in RAID 1 (mirror) mode, made a single partition, and formatted it to NTFS, running Windows 7. For some reason, accessing the drive's contents (simply by navigating folders) is sometimes really slow. Opening D:/photos/ can sometimes take several seconds before it starts showing any of the folder's contents, and the same applies to other folders. What could be causing this, and what could I do to improve the performance? I remember that there was an option somewhere inside Windows to choose fast access but less reliable persistence operations (read/write); it was a tick box in some dialog. At the time, it felt like a good idea to take the tick away and get more reliable persistence but slower access, but now I'm regretting it. I'm unable to find this dialog; I've looked hard, and I don't know if it would make any difference anyway. Oh, and I've run disk scan and defrag on the drive. No errors, and the speed isn't improved.

    Read the article

  • Tuning up a MySQL server

    - by NinjaCat
    I inherited a MySQL server, so I've started by running the MySQLTuner.pl script. I am not a MySQL expert, but I can see that there is definitely a mess here. I'm not looking to go after every single thing that needs fixing and tuning, but I do want to grab the major, low-hanging fruit. Total memory on the system is 512 MB. Yes, I know it's low, but it's what we have for the time being. Here's what the script had to say:

        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            When making adjustments, make tmp_table_size/max_heap_table_size equal
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
            Your applications are not closing MySQL connections properly
        Variables to adjust:
            query_cache_limit (> 1M, or use smaller result sets)
            tmp_table_size (> 16M)
            max_heap_table_size (> 16M)
            table_cache (> 64)
            innodb_buffer_pool_size (>= 326M)

    For the variables that it recommends I adjust, I don't even see most of them in the my.cnf file:

        [client]
        port   = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice   = 0

        [mysqld]
        innodb_buffer_pool_size = 220M
        innodb_flush_log_at_trx_commit = 2
        innodb_file_per_table = 1
        innodb_thread_concurrency = 32
        skip-locking
        big-tables
        max_connections = 50
        innodb_lock_wait_timeout = 600
        slave_transaction_retries = 10
        innodb_table_locks = 0
        innodb_additional_mem_pool_size = 20M
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        bind-address = localhost
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 4
        myisam-recover = BACKUP
        query_cache_limit = 1M
        query_cache_size = 16M
        log_error = /var/log/mysql/error.log
        expire_logs_days = 10
        max_binlog_size = 100M
        skip-locking
        innodb_file_per_table = 1
        big-tables

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]

        [isamchk]
        key_buffer = 16M

        !includedir /etc/mysql/conf.d/
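
    Variables absent from the file are simply running at their defaults; adding them under [mysqld] is enough. A hedged sketch of the tuner's suggestions, sized with a 512 MB box in mind (the values are illustrative, and the script's innodb_buffer_pool_size >= 326M advice deserves skepticism when total RAM is 512 MB):

        [mysqld]
        tmp_table_size      = 32M
        max_heap_table_size = 32M   # keep equal to tmp_table_size, per the tuner
        table_cache         = 128
        query_cache_limit   = 2M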

    Read the article

  • Encrypted Windows 7 & Linux Advice Wanted

    - by Miles
    I would like to set up my laptop to dual-boot Arch Linux and Windows 7 with file sharing and encryption. I just wanted some advice on going about this, because I have not dealt with encryption or file sharing before. I have two 500 GB hard drives, and this is my plan:

    - Install Windows 7 across both hard drives
    - Use a live CD to wipe out the Windows boot loader and replace it with GRUB Legacy
    - Use the live CD to wipe the second hard drive and resize the Windows partition located on the first hard drive
    - Install Arch Linux alongside Windows 7 on the first hard drive, with all remaining space going to the home folder as ext2
    - Install TrueCrypt and Ext2Fsd

    Concerns:

    - Is this the most efficient way to share files between both OSes, or should I just be using NTFS to store all my data?
    - How would the file permissions work when sharing files between Windows and Linux?
    - Is there a high likelihood of corruption, and how easy is it to back up files from an encrypted disk?
    - Anything I should look out for, such as conflicts between GRUB and TrueCrypt?

    Thank you for any advice, and feel free to post any links you might find useful to me. I am trying to plan this out so I can minimize downtime, as I do not want to spend more than a night on this, nor do I want to run into a major problem some time in the future.

    Read the article

  • Hyper-V and attaching physical disks [migrated]

    - by Mike Christiansen
    So, I'm looking at rebuilding my home server. My current setup is the following:

    - Windows 7 Ultimate
    - 1 TB boot drive (my smallest drive)
    - Windows dynamic spanned volume containing 1x 1 TB drive and 2x 2 TB drives, totalling 5 TB

    I am upgrading to a hardware RAID controller, and I would like to run Hyper-V Server Core. However, I want to retain the ability to join my "file server" to a HomeGroup, so I must use Windows 7. I know VHDs can only be something like 127 GB, so I obviously need to directly connect disks to my Windows 7 machine. Here is my plan:

    - Server Core 2008 R2 (Hyper-V)
    - 1 TB boot drive (storing VHDs for the boot drives of the VMs), possibly in a RAID 1 with my other 1 TB drive
    - 5x 2 TB drives (1x 2 TB drive as hot spare), totalling 10 TB, directly attached to a Windows 7 VM so the array can be shared via HomeGroup

    In the past, I directly attached the Windows dynamic volume to a Windows 7 VM, and performance was abysmal. The question is: with hardware RAID, will it really make that much of a difference? Server specs:

    - Intel Core 2 Quad Q9550 2.83 GHz
    - Asus Maximus II Formula (PCI-E x16)
    - 8 GB DDR2 RAM PC2-6400 (yes, I know it's a bit out of date)
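
    For what it's worth, a physical disk can only be passed through to a Hyper-V VM once it is offline on the host; the diskpart side of that looks roughly like the sketch below (the disk number is a placeholder), after which the disk appears under the VM's controller settings as an attachable physical drive:

        DISKPART> list disk
        DISKPART> select disk 3
        DISKPART> offline disk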

    Read the article

  • Recommendations for good Unix MTA / groupware solutions? [closed]

    - by Jez
    Possible Duplicate: Exchange server replacement that runs on Linux

    I'm setting up a Debian server, and one of the things I need on it is an MTA. I don't want to use something like Exim or Postfix, because I want something that ties SMTP, POP3, and IMAP all in one (a la Microsoft Exchange). Most MTAs also seem to be hellishly difficult to configure; try to read the Exim documentation, you could do a university degree on it (I'm not kidding). When you can get an HTTP server like Cherokee, which is easy to configure and has a nice web interface, do MTAs or groupware solutions need to be that hard? I'm aware that some people think "the Unix way" is to have lots of different interacting pieces of software (like maybe an SMTP MTA, a POP3 service, a webmail service, and an overarching manager to tie them all together), but I think this is a situation where that just makes things a lot harder to deal with, and one large software suite fits in much more nicely. So, I'm looking for good open-source software suites that will run on Debian and that:

    - Combine (at least) SMTP, POP3, and IMAP
    - Are easy(ish) to configure
    - Have a nice configuration web interface or GUI
    - Are not defunct projects

    I don't mind if it's groupware and offers calendaring too, but I would only be using the e-mail functionality for now. Another nice-to-have would be built-in webmail (if we're combining a bunch of functionality, why not?). Note, however, that I do NOT need Outlook support; I am not really looking for a drop-in Exchange replacement. The suites I've found so far that seem to match the above criteria (and have appropriate licenses) are Citadel, Kolab, and Zimbra. I'd appreciate anyone with experience of any of these giving me their pros and cons, such as how easy they are to configure and what their performance is like. I'd also appreciate any other suggestions that fulfil my criteria that I may have missed.

    Read the article

  • CPU usage always below 10% in Windows Server 2008 R2 x64

    - by ???
    I am using a server with Windows Server 2008 R2 running on it to run my program. The CPU of the server is an Intel Xeon X5570 2.93 GHz with 2 processors, 8 cores per processor. However, I found that the CPU usage is almost always below 10%, even when I use 32 threads in my program. I also found through Task Manager that the CPU usage can sometimes reach as high as 93% when running my program; at those moments my program processes over 1000 files per second, while normally it only processes over 50 files per second. However, this does not happen often. I used tools downloaded from the internet to make sure no core sleeps while the server is on; nothing changed. I also edited the Windows registry to make sure that I, as an administrator, have no CPU usage limit, but that changed nothing either. Is there any way I can make full use of my CPU? That is to say, each core runs a thread of my program, and the total CPU usage reaches over 50% when I use a reasonable number of threads. Has this happened to any of you? Could you help me with this? Thank you!

    Read the article

  • What kind of server configuration is best for a chatting app? [closed]

    - by mohabitar
    I'm just now starting to go deeper into the world of cloud hosting and databases, and I'm getting overwhelmed by how deep this information goes. It's all a little too much to consume in a short amount of time. I get a lot of pricing information, but I'm unable to determine what it means to me. I'm making what you might compare to an email app: users can send messages to one another. I just don't understand, out of the several options, what would be ideal for an app like this, where users would be constantly sending and receiving text data. With Amazon DynamoDB, I have to specify a pre-defined throughput with a number of reads and writes per second. Sure, I can just type 50, but I'm not exactly sure what 50 writes per second represents. I'm trying to determine what would be the most cost-efficient solution, and I want to know what a throughput of 50 reads/writes per second compares to. Is that a high number? What is a good throughput number for a message-sending app with, say, 50,000 daily users? I'm just providing specific numbers so I can understand what these throughput figures represent. 100 transactions/second seems like a small number to me, but I'm not familiar with this stuff, so I'm just looking to bring everything into context. What would 100 reads/writes per second be useful for? Are there any average example values available? And I'm not sure what each service is good for: for a message-sending app, is there any reason I'd want to choose, say, Amazon DynamoDB over Google App Engine? Any insight would be greatly appreciated.
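
    A back-of-envelope way to put such numbers in context (the per-user message rate here is purely an assumption for illustration):

        50,000 daily users x 40 messages/user/day  = 2,000,000 writes/day
        2,000,000 writes / 86,400 seconds         ~= 23 writes/second on average
        peak traffic at 5-10x the average         ~= 115-230 writes/second

    By that rough arithmetic, a provisioned 50 writes/second would cover the average load but not the peaks for a user base of that size.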

    Read the article

  • RAM configuration: 4x4 or 2x8?

    - by Carl B
    I am looking to upgrade my RAM to 16 GB, and I am wondering if there is any distinct advantage in the way I do it: a 4x4 GB or a 2x8 GB setup. In all my searching there have been a number of pros for each profile, but I can find no benchmark results comparing the two setups. So, given two profiles of the same speed, voltage, timings, and CAS latency, which would perform better or have the better overall benefit? A few examples from my search:

    - A 4x4 set has the benefit that if one stick fails, you only lose 25% of your RAM, versus 50% in a 2x8 setup.
    - A 2x8 setup puts less strain on the memory controller and motherboard.
    - A 2x8 setup generates less heat.
    - A 2x8 setup is easier to overclock (not part of my need, but a lot of the comparisons circled around the ease of overclocking the two-stick setup).

    There is one outstanding benefit that I have found, at least at the vendor I have looked at, and that is price: the 2x8 is nearly half the cost. My motherboard supports a max of 16 GB and I have a 64-bit OS. Has anyone seen any performance comparisons, or is 16 GB just 16 GB no matter how you slice it? And is there any merit to the above pros?

    Edit: per the mobo specs:

        Main Memory • Supports four unbuffered DIMM of 1.5 Volt DDR3 800/1066/1333/1600*/1800*/2133* (OC) DRAM, 16GB Max

    Read the article

  • Find which files an apache process is writing to?

    - by Haluk
    We have an Apache process that becomes I/O-bound from time to time. Using atop, we can see it is a write operation; using lsof -p <PID>, we can see a list of files open by the httpd process. First we thought the log files must be the problem, so we turned them off just to test; the write operations still continued. We will continue testing a few other things. For instance, we use PHP session variables a lot, so maybe the PHP session files are getting all the writes. But is there a way to quickly identify the files being written to by the httpd process? That way we can focus our efforts on those files.

    UPDATE: We used the strace command as suggested. Here are two lines from the output:

        write(23, "\27\0\0\0\3SET CHARACTER SET utf8", 27) = 27
        write(23, "\17\0\0\0\3SET NAMES utf8", 19) = 19

    We do not have a MySQL process on this server. So is strace also showing what is being written to an Ethernet port?

    UPDATE 2: During high I/O load, the process that consumes most of the write resources gives the following output to strace -e trace=write -p <PID>:

        --- SIGCHLD (Child exited) @ 0 (0) ---
        write(9, "!", 1) = 1
        write(19, "OPTIONS * HTTP/1.0\r\nUser-Agent: Apache (internal dummy connection)\r\n\r\n", 70) = 70

    However, I cannot figure out where these are being written to.
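
    The first number in each write() call is a file descriptor, and a descriptor can be resolved to a concrete file or socket while the process is running, which answers both updates at once. A sketch (the PID and fd numbers are placeholders):

        # what is fd 23 of process 1234? a regular file shows a path,
        # a network connection shows socket:[inode]
        ls -l /proc/1234/fd/23
        # or list every open descriptor with its target; FD column entries
        # like "23w" are open for writing
        lsof -np 1234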

    Read the article

  • Windows 7 x64 support for Intel GMA 3650 (or GMA 3600)

    - by Loom
    I recently purchased an Intel D2700MUD motherboard, and I cannot find Windows 7 x64 drivers for the integrated graphics (Intel GMA 3650, aka PowerVR SGX545). The accompanying CD contains the Win7 x32 version only; when I run it, I get an error:

        This computer does not meet the minimum requirements for installing the software.

    I tried to use the online Intel Driver Update Utility for graphics, from Chrome, Firefox, and Internet Explorer, without success. First a UAC prompt appears, and then an endlessly spinning progress bar with the text "Analyzing computer...". The text in the UAC prompt is:

        Program file name: System Requirements Lab
        Verified publisher: Husdawg, LLC

    I downloaded this utility (intel_srldetect_4.5.5.0) and started it from my hard disk, and got an error:

        A network error occurred while attempting to read from the file: C:\Users\Loom\Downloads\SystemRequirementsLab_intel_4.5.5.0.msi

    The standard VGA driver works for this video card, but without hardware acceleration:

        Hardware acceleration is either disabled or not supported by your video card driver, which could slow game performance. Make sure you have the latest video card driver installed and that hardware acceleration is turned on.

    Where can I get an appropriate driver?

    Read the article

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    Hi,

    What I've tried: I have two email storage architectures, old and new.

    Old:

    - courier-imapd on several (18+) 1 TB-storage servers
    - if one of them shows signs of running out of disk space, we migrate a few email accounts to another server
    - the servers don't have replicas, and there are no backups either

    New:

    - dovecot2 on a single huge server with 16 TB (SATA) storage and a few SSDs
    - we store fresh mail on the SSDs and run a doveadm purge to move mail older than a day to the SATA disks
    - there is an identical server which holds a max-15-minute-old rsync backup from the primary server
    - higher-ups/management wanted to pack in as much storage as possible per server in order to minimise the cost of SSDs per server
    - the rsyncing is done because GlusterFS wasn't replicating well under that high small/random IO
    - scaling out was expected to be done by provisioning another pair of such huge servers
    - on facing disk-crunch issues like in the old architecture, accounts would again be moved manually

    Concerns/doubts:

    - I'm not convinced that the synchronously-replicated filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet, and I'm not sure if there's another filesystem out there for this use case.
    - The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access, and if one of the servers went down for whatever reason (planned/unplanned), we'd move the IP to the other server in the pair.
    - In filesystems like Lustre, I get the advantage of a single namespace, whereby I do not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data.

    Questions:

    - What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)?
    - Does traditional software that stores on a locally mounted filesystem pose a roadblock to scaling out with minimal "problems"?
    - Does one have to rewrite (parts of) these to work with an object storage of some sort, such as OpenStack object storage?
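
    On the SSD-to-SATA tiering above: in dovecot2 the relocation of older mail to slower storage is normally done with doveadm altmove against an alternative storage path, which may be what the "purge" step refers to. A sketch (assumes mdbox/sdbox with an :ALT= path configured in mail_location):

        # move mail saved more than a week ago to the ALT (SATA) storage, for all users
        doveadm altmove -A savedbefore 1w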

    Read the article

  • What are these weird IP address connections in Resource Monitor?

    - by bill
    I decided to check out Resource Monitor (via the 'Performance' tab in Task Manager, Windows 7), and I noticed in the "Network" section that the 'System' image name kept making a bunch (~5 at a time) of connections to random IP addresses, showing anywhere from 1-500 bytes/sec 'sent'. They would stay connected for 1-2 minutes. All web browsers were closed. So, the first thing I did was run a trace from network-tools.com on some of these IP addresses: 8 of the 10 were outside the US and did not resolve to any host name. Of the 10 IP addresses I traced, 2 were in the US, 4 showed origins in China, and one each in Algeria, Russia, Pakistan, and Korea. (!) The next thing I did was turn off my wireless card, watch the connections disappear, then turn the card back on; within 30 seconds more random connections were created by System, with different IP addresses from the first time. Then I opened Task Manager, chose Show Processes From All Users, and killed just about everything that wasn't (what appeared to be) a Windows process. I turned on wi-fi, and again within 30 seconds random IP addresses connected for about a minute at a time, new ones coming and going. I occasionally use BitTorrent on this machine, but there was definitely no process that seemed related to it running after I went through Task Manager, and it wasn't open to begin with. So, any ideas on what these connections might be for? I have been using Ad-Aware Free and AVG Free on this computer for a while now, always up to date.
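
    One way to pin a connection to its owning process or service, rather than the generic 'System' label, is from an elevated command prompt (a sketch; -b requires administrator rights, and the PID is a placeholder):

        :: every connection with its owning PID and, where possible, the executable name
        netstat -b -n -o
        :: map a svchost PID from that output to the services it hosts
        tasklist /svc /fi "PID eq 1234"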

    Read the article

  • How can I prevent a DDoS attack on Amazon EC2?

    - by cwd
    One of the servers I use is hosted on the Amazon EC2 cloud. Every few months we appear to suffer a DDoS attack on this server. It slows the server down incredibly; after around 30 minutes, and sometimes a reboot later, everything is back to normal. Amazon has security groups and a firewall, but what else should I have in place on an EC2 server to mitigate or prevent an attack? From similar questions I've learned:

    - Limit the rate of requests per minute (or second) from a particular IP address via something like iptables (or maybe UFW?), as in the sketch below
    - Have enough resources to survive such an attack - or - possibly build the web application so it is elastic / has an elastic load balancer and can quickly scale up to meet such high demand
    - If using MySQL, set up connections so that they run sequentially, so that slow queries won't bog down the system

    What else am I missing? I would love information about specific tools and configuration options (again, using Linux here), and/or anything that is specific to Amazon EC2. PS: Notes about monitoring for DDoS would also be welcomed - perhaps with Nagios? ;)
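
    A minimal per-IP rate-limit sketch of the first idea, using the iptables recent module (the thresholds are illustrative and need tuning to real traffic):

        # record each source IP that opens a new connection to port 80
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP
        # drop a source that opens more than 20 new connections within 10 seconds
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 10 --hitcount 20 --name HTTP -j DROP

    Note this helps against abusive single sources; a genuinely distributed attack spreads below any per-IP threshold, which is where the elastic-scaling ideas above come in.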

    Read the article
