Search Results

Search found 751 results on 31 pages for 'quad'.

Page 25/31 | < Previous Page | 21 22 23 24 25 26 27 28 29 30 31  | Next Page >

  • Ubuntu odd mouse and keyboard behavior when window gains inner focus

    - by Scott
    This morning on my Ubuntu 10.10 system, when I open a window - for example, System > Preferences > About Me - and click into a field such as "Work email", I can no longer close the window with the mouse! Clicking the X on the window does not work. I also lose the ability to click on anything else - clicking on the desktop, icons, menus, workspaces, etc. does nothing. Even the highlight effect when you hover over a folder on the desktop stops working until the window is closed. If I open the same screen but do not click into a field, everything is fine - I can close the window with the X and everything else works. The same thing happens with several other windows I tried. Even Calculator - I can open it, and everything is fine until I click a button in the calculator; then I can't click on anything else and have to Alt-F4 to close the window. The system is only about a week old from a fresh install (64-bit Ubuntu, quad core AMD machine). I uninstalled Wine, turned off remote desktop/disabled it in startup, and also disabled visual assistance, Bluetooth, Dropbox and Klipper in startup. Rebooted, no difference. The only other thing non-standard I see in startup is nvidia. Using a Logitech USB mouse and a Saitek USB keyboard. It was working fine yesterday, and I can't think of anything I did or installed yesterday. I switched themes, then went to Update Manager, saw two X server / X.Org related updates, installed them, rebooted, and NOW IT IS FINE! However, I then re-enabled Dropbox, Klipper and remote desktop, rebooted, and the problem was back. Again I disabled them and rebooted. The problem is still there!! So somehow I fixed it, at least for a few minutes, but now it is back and I am out of ideas.

    Read the article

  • Decrease in disk performance after partitioning and encryption, is this much of a drop normal?

    - by Biohazard
    I have a server that I only have remote access to. Earlier in the week I repartitioned the 2-disk RAID as follows:

        Filesystem              Size  Used Avail Use% Mounted on
        /dev/mapper/sda1_crypt  363G  1.8G  343G   1% /
        tmpfs                   2.0G     0  2.0G   0% /lib/init/rw
        udev                    2.0G  140K  2.0G   1% /dev
        tmpfs                   2.0G     0  2.0G   0% /dev/shm
        /dev/sda5               461M   26M  412M   6% /boot
        /dev/sda7               179G  8.6G  162G   6% /data

    The RAID consists of 2 x 300GB SAS 15k disks. Prior to the changes I made, it was a single unencrypted root partition, and hdparm -t /dev/sda was giving readings around 240MB/s, which I still get now:

        /dev/sda:
         Timing buffered disk reads: 730 MB in 3.00 seconds = 243.06 MB/sec

    Since the repartition and encryption, I get the following on the separate partitions. Unencrypted /dev/sda7:

        /dev/sda7:
         Timing buffered disk reads: 540 MB in 3.00 seconds = 179.78 MB/sec

    Unencrypted /dev/sda5:

        /dev/sda5:
         Timing buffered disk reads: 476 MB in 2.55 seconds = 186.86 MB/sec

    Encrypted /dev/mapper/sda1_crypt:

        /dev/mapper/sda1_crypt:
         Timing buffered disk reads: 150 MB in 3.03 seconds = 49.54 MB/sec

    I expected a drop in performance on the encrypted partition, but not that much, and I didn't expect any drop in performance on the other partitions at all. The other hardware in the server is 2 x quad-core Intel(R) Xeon(R) E5405 @ 2.00GHz and 4GB of RAM.

        $ cat /proc/scsi/scsi
        Attached devices:
        Host: scsi0 Channel: 00 Id: 32 Lun: 00
          Vendor: DP       Model: BACKPLANE        Rev: 1.05
          Type:   Enclosure                        ANSI SCSI revision: 05
        Host: scsi0 Channel: 02 Id: 00 Lun: 00
          Vendor: DELL     Model: PERC 6/i         Rev: 1.11
          Type:   Direct-Access                    ANSI SCSI revision: 05
        Host: scsi1 Channel: 00 Id: 00 Lun: 00
          Vendor: HL-DT-ST Model: CD-ROM GCR-8240N Rev: 1.10
          Type:   CD-ROM                           ANSI SCSI revision: 05

    I'm guessing this means the server has a PERC 6/i RAID controller? The encryption was done with default settings during the Debian 6 installation. I can't recall the exact specifics and am not sure how I go about finding them? Thanks
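
    On the last point - recovering the encryption specifics - a minimal sketch, assuming a standard LUKS setup as created by the Debian installer (mapping and partition names taken from the df output above):

        # show cipher, key size and underlying device for the open dm-crypt mapping
        sudo cryptsetup status sda1_crypt

        # dump the LUKS header (cipher spec, hash, key slots) from the partition itself
        sudo cryptsetup luksDump /dev/sda1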

    Read the article

  • Some emails incoming to Outlook 2007 are blank; the same emails work fine on webmail, iPhone, etc.

    - by Funran
    This is a pretty easy problem to describe. Basically, users who have just been upgraded to Outlook 2007 (yeah, I know 2010 is out) are not receiving SOME emails (from outside our domain, i.e. Hotmail, Yahoo). "Receiving" is not quite the right word: these emails come in, along with their attachments, subject, to/from line, etc., but the body is blank. If the same user goes into their webmail, iPhone or BlackBerry instead, they can read the message fine. It's clear to me that something in Outlook 2007 is not rendering the body correctly, so it just strips it. I just don't know WHY. Our mail server was recently upgraded to Exchange 2010; users on 2010 running Outlook 2003 are working fine. It's just random emails for users on 2007. I hope I made that clear enough, thank you for any future help guys.

    EDIT: I don't see RTF, but I swear I've seen it before. Here is the "view source" of a recent email:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
        <html><head>
        <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
        <meta name="GENERATOR" content="MSHTML 8.00.6001.19120">
        <DEFANGED_style_0 <="" style="">
        </head>
        <body bgcolor="#ffffff">
        <p><DEFANGED_DIV><font color="#0000ff" size="2" face="Calibri">MS,</font></p><DEFANGED_DIV>
        <p><DEFANGED_DIV><font color="#0000ff" size="2" face="Calibri">Could you tell me please what the legal descrip &amp; Topo Quad name is for this Monroe P.ID Site?</font></p><DEFANGED_DIV>
        <p><DEFANGED_DIV><em><font color="#0000ff" size="2" face="Calibri">Thanks, Henry Roye</font></em></p><DEFANGED_DIV></body></html>

    Read the article

  • Is there an objective way to measure the slowness of a Windows PC?

    - by ekms
    We have a lot of users who complain that their PC is "slow" (we use Windows XP). We usually check startup programs, viruses, fragmentation, disk health and the common causes of slowness (Symantec AV dropping the disk to 1MB/s, or a Seagate HD firmware error in certain models), but in those cases the slowness is pretty evident. On the other hand, the most common case is a user complaining about a PC that looks OK to us, even on 6-year-old desktops. People sometimes even complain about the speed of their new quad core desktops!!! So we are asking if there's a way to OBJECTIVELY check that a computer hasn't dropped in performance, compared with similar machines or with previous measurements, especially for office work (I don't think 3DMark or similar benchmarks would help). The only useful thing I've found is HD Tune, but it only checks hard disk performance. Basically, what we want is something that enables us to say to our users: "See? Your PC is as slow as it was three years ago! Stop complaining! It's all in your head!"

    Read the article

  • Advice for UPS/Surge Protector in home office

    - by Fred
    I'm just starting out as an independent developer, mostly Unix stuff with some Windows thrown in occasionally. I've been running two machines: a Linux and a Windows dev machine. Long story short, we had a bad storm come through last week and I unplugged one machine, forgot to unplug the other, and the PSU and mobo ended up dead. Luckily I back up to an external service religiously (rsync.net, for anyone interested), so there was no loss of data, but it did show me a glaring hole in my current setup, namely the lack of a UPS and surge protection (this has honestly never been an issue before). Can anyone recommend a UPS/surge protector for a home office? It only needs to support a single machine (I opted to use VMware instead of rebuilding the dead machine), but it's a quad core Phenom II with a 1kW PSU. This is outside my experience, so I thought I'd get some input from others. I'm looking for something that's reasonably priced and does the job reasonably well. I don't need absolute 100% uptime, just something to protect my PC better than it is now.

    Read the article

  • VMware vSphere 5: 4 pNICs for iSCSI vs. 2 pNICs

    - by gravyface
    New SAN for me, never used before: it's an IBM DS3512, dual controller with a quad 1GbE NIC per controller, that a client bought and needs help setting up. The hosts (x2) have 8 pNICs, and while I usually reserve 2 pNICs per host for iSCSI (and 2 for VM traffic, 2 for management, 2 for vMotion, staggered across adapters), these extra ports on the SAN have me wondering whether storage I/O would be significantly improved with 2 additional NICs per host, or whether the limitations of the vmkernel/initiator would prevent the additional multipaths from ever being realized. I'm not seeing a lot of 4-pNIC iSCSI implementations per host; 2 is the de facto standard from what I've read/seen online. I could and probably will do some I/O testing, but I'm wondering if there's a "wall" that someone else discovered long ago (i.e. before 10GbE) that makes a 4-NIC iSCSI setup per host somewhat pointless. Just to clarify: I'm not looking for a how-to, but an explanation (link to a paper, VMware recommendation, benchmark, etc.) as to why 2-NIC configurations are the norm vs. 4-NIC iSCSI configurations - i.e. storage vendor limitations, VMkernel/initiator limitations, etc.
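
    One thing worth checking before any I/O test - a hedged sketch, and the device ID below is hypothetical: extra paths only help if the path selection policy actually spreads I/O across them, so on ESXi 5 you can confirm what each LUN is using and, if appropriate, switch it to Round Robin:

        # list each device's paths and its current path selection policy
        esxcli storage nmp device list

        # switch a device (hypothetical naa ID) to Round Robin so all active paths carry I/O
        esxcli storage nmp device set --device naa.60080e500017b5a4000002f04f1a3b5c --psp VMW_PSP_RR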

    Read the article

  • Win7 Hangs During App Install/Upgrade/Uninstall

    - by JadeMason
    I have a custom-built PC that intermittently hangs when installing, uninstalling, or upgrading applications. Technical specs:

    ASUS P5E w/ WiFi motherboard
    Intel Core 2 Quad Q6600 processor
    4x 2GB G.Skill DDR2 800 SDRAM
    ASUS EAH2900XT / Radeon HD 2900XT 512MB video card

    Under normal operation the machine runs reliably, even under heavy load such as video transcoding; the temperature never gets anywhere near where I would worry about it. However, the machine regularly hangs (complete lockup, no response to keyboard or mouse, no activity on-screen) when installing a new application, uninstalling an existing application, or applying patches to existing applications or the OS. This is extremely frustrating, as this machine is primarily used as an HTPC. Several apps are configured for automatic updates, and these updates sometimes cause the machine to lock up while we are watching content on the PC. In previously investigating this issue, I found one likely culprit could be my Logitech webcam. The Logitech software has a bug that leaves an entry referencing the Temp directory in:

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\PendingFileRenameOperations

    My registry contained this error, so I uninstalled the webcam software and deleted this registry value. Unfortunately, the machine still intermittently hangs. I've noticed that the hangs always happen when an install/upgrade/uninstall requires elevated privileges (presumably to modify the registry). I can typically get at least one install/upgrade/uninstall to complete after a reboot, but after that it is a game of Russian roulette to see whether the operation will succeed or hang the machine. The event log is not helpful, as log messages end at the time of the hang with no record of a warning or error. My only recourse when the machine hangs in this way is a hard reset/power cycle. Any tips on how to further debug this issue are greatly appreciated.
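
    A quick way to re-check for a leftover entry after each hang, without opening regedit - a hedged sketch using the stock reg tool (the value simply won't exist if nothing is pending):

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager" /v PendingFileRenameOperations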

    Read the article

  • Windows 2008 R2 IIS worker process memory usage increase

    - by nLL
    I have a web site written in C# with around 400-500 users online at any time. It was on a Windows 2008 32-bit machine before and never locked up or slowed down due to increased memory consumption, up until I upgraded its server to Windows 2008 R2 64-bit. The old server had only 4GB of RAM and a quad core CPU at 2GHz, and the site worked just fine. Since I upgraded the server, I've noticed (twice within 10 days) that it starts to eat RAM; last night it went up to 4GB. As RAM usage increases, response slows down quite a lot. Recycling the app pool doesn't help - I have to restart its worker process to recover. I've noticed this usually happens if there are continuous errors. As I didn't change anything in the code, am I safe to assume it is not a memory leak in the code? Has anyone come across something like this? The same thing happens if I generate continuous errors with classic ASP. Thanks
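
    When the RAM starts climbing, it may help to see what the worker process is actually stuck on before restarting it - a hedged sketch using stock IIS7 appcmd (the 30-second threshold is arbitrary):

        %windir%\system32\inetsrv\appcmd list wps

        rem list requests that have been executing for more than 30 seconds
        %windir%\system32\inetsrv\appcmd list requests /elapsed:30000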

    Read the article

  • Hyper-V and attaching physical disks

    - by Mike Christiansen
    So, I'm looking at rebuilding my home server. My current setup is the following:

    Windows 7 Ultimate
    1TB boot drive (my smallest drive)
    Windows dynamic spanned volume containing 1x 1TB drive and 2x 2TB drives, totalling 5TB

    I am upgrading to a hardware RAID controller, and I would like to run Hyper-V Server Core. However, I want to retain the ability to join my "file server" to a HomeGroup, so I must use Windows 7. I know VHDs can only be like 127GB or something, so I obviously need to directly connect disks to my Windows 7 machine. Here is my plan:

    Server Core 2008 R2 (Hyper-V)
    1TB boot drive (storing VHDs for the VMs' boot drives) - possibly in a RAID 1 with my other 1TB drive
    5x 2TB drives (1x 2TB drive as hot spare), totalling 10TB, directly attached to a Windows 7 VM, so the HomeGroup can use this array

    In the past, I directly attached the Windows dynamic volume to a Windows 7 VM, and performance was abysmal. The question is: with hardware RAID, will it really make that much of a difference? Server specs: Intel Core 2 Quad Q9550 2.83GHz, Asus Maximus II Formula (PCI-E x16), 8GB DDR2 RAM PC2-6400 (yes, I know it's a bit out of date).
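
    For the directly attached disks, note that Hyper-V can only pass a physical disk through to a VM once the host has taken it offline - a hedged sketch with diskpart (the disk number is hypothetical; check the "list disk" output first):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 2
        DISKPART> offline disk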

    Read the article

  • Servers - Buying New vs Buying Second-hand

    - by Django Reinhardt
    We're currently in the process of adding additional servers to our website. We have a pretty simple topology planned: a firewall/router server in front of a web application server and a database server. Here's a simple (and technically incorrect) diagram that I used in a previous question to illustrate what I mean. We're now wondering about the specs of our two new machines (the web app and firewall servers) and whether we can get away with buying a couple of old servers. (Note: both machines will be running Windows Server 2008 R2.) We're not too concerned about our firewall/router server, as we're pretty sure it won't be taxed too heavily, but we are interested in our web app server. I realise that answering this type of question is really difficult without a ton of specifics on users, bandwidth, concurrent sessions, etc., so I just want to focus on the general wisdom of buying old versus new. I had originally specced a new Dell PowerEdge R300 (1U rack) for our company. In short, because we're going to be caching as much data as possible, I focussed on processor speed and memory: quad-core Intel Xeon X3323 2.5GHz (2x3M cache), 1333MHz FSB, 16GB DDR2 667MHz. But when I was looking for a cheap second-hand machine for our firewall/router, I came across several machines that made our engineer ask a very reasonable question: if we stuck a boatload of RAM in this thing, wouldn't it do for the web app server and save us a ton of money in the process? For example, what about a second-hand machine with the following specs: 2x dual-core AMD Opteron 2218 2.6GHz (2MB cache), 1000MHz HT, 16GB DDR2 667MHz. Would it really be comparable with the more expensive (new) server above? Our engineer postulated that the reason companies upgrade their servers to newer processors is often to reduce their power costs, and that a 2.6GHz processor is still a 2.6GHz processor, no matter when it was made. Benchmarks on various sites don't really support this theory, but I was wondering what server admins thought. Thanks for any advice.

    Read the article

  • Upgrade or replace?

    - by Felix
    My current PC is about four years old, although I have made upgrades to it throughout its existence. The current specs are:

    (old) Intel Pentium D 2.80GHz (32K L1 / 2M L2), Gigabyte 945GCMX-S2 motherboard
    (old) 2.5GB DDR2 (slot 0: 512MB @ 533MHz; slot 1: 2GB @ 667MHz)
    (new) HIS Radeon HD 4670 - I think this is limited by the motherboard not supporting PCIe 2.0 (?)
    (old) WD Caviar 160GB - pretty slow
    (new) WD Caviar Black 640GB

    (If any more specs are relevant, let me know and I'll add them.) Now, on to my question. I've been having performance issues lately, both in video games and in intensive applications. A couple of examples: Android application development (running Eclipse and the Android emulator) is painfully slow (on Linux). I only realized this when, at my new job as an Android dev, both tools turned out to be MUCH quicker (I'm not sure what CPU I have there). The guys at my new job also got me NFS Hot Pursuit, in which I barely get 5-10 FPS, even with the graphics options turned all the way down. My guess is that the bottleneck in my system is my CPU, so I'm thinking of upgrading to a quad core i5 + new motherboard + 4GB DDR3 (or more, 'cause I know you'll all jump in and say 8GB minimum). Now: Is that a good idea? Is my CPU really the bottleneck, or is the whole system too old and should I replace it? I run Windows 7 on the old 160GB HDD (which is on IDE, by the way). Could this slow down games as well? Should I get a new drive for Windows if I want to play new games? I know nothing about power supplies. Could that be a problem / will it be a problem if I upgrade to an i5? And how come DiRT 2 works on full graphics settings (pretty amazing graphics, by the way) while NFS Hot Pursuit pulls only 5-10 FPS?

    Read the article

  • Out of memory errors but not actually out of memory...

    - by commradepolski
    So, my fellow support techs and I have been fighting with this issue and we still don't know what the problem is. Let's start off with the system specs:

    Windows XP 32-bit Corporate (SP2 and SP3)
    Intel D975XBX2 mobo
    4GB of RAM
    Intel Core 2 Quad Q6600
    ATI Radeon HD 3600 - 512MB

    After a few hours of working on the machine, the end user will begin to see the following symptoms:

    Out of memory messages
    Title bars and menus don't draw in properly
    Problems accessing network resources
    Problems opening documents such as MS Word and MS PowerPoint files and text files
    Problems opening Explorer windows
    General instability

    We have looked at Task Manager while this issue was occurring, and all indicators (PF usage, threads, handles, etc.) are normal. We have been having trouble pinpointing the root cause of this issue. It is also not confined to one user; it affects 8-10. So far we have tried:

    Resetting CMOS (waiting to see results)
    Replacing the video card (didn't help)
    Windows updates (didn't help)
    Updating network drivers (didn't help)
    Switching the user from a 1Gbps to a 100Mbps network connection (awaiting results)
    Swapping the affected user's hardware (waiting for results)
    Increasing the desktop heap size (helped for a bit, but then the issue became more frequent)
    Applying the /3GB switch to XP (didn't help)
    Increasing, decreasing, and setting the page file to the system-managed state (didn't help)

    We did have a power outage at the office a couple of weeks ago, and all these issues became more frequent. Prior to the power outage it might take a week or so for users to experience the issues, but since the power outage it takes 3-4 hours or less. We haven't had reports of the above issues causing BSODs, although that would be easier to diagnose :). Any help is greatly appreciated.
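
    Since the symptoms (unpainted menus and title bars, "out of memory" with plenty of free RAM) match desktop heap exhaustion, it may be worth confirming what the heap is currently set to - a hedged sketch; the SharedSection values mentioned are the XP 32-bit defaults:

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems" /v Windows

        rem look for SharedSection=1024,3072,512 in the output;
        rem the second number is the interactive desktop heap in KB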

    Read the article

  • php-cgi.exe Taking out server, multiple running

    - by Alex
    I have been using Zend Server CE for over a year and have never had a problem. About a week or two ago I found my server acting up, even making RDP unconnectable. After some looking around, I had 20, 25, 30+ php-cgi.exe processes running. With my IIS7 service starting with Windows, all these php-cgi.exe processes would start as soon as the server booted (even though the limit is 10), and I could not even connect to it. After disabling the web server at startup, which stops php-cgi.exe from running, the server runs flawlessly, like it always has. As soon as I start the web server, all these odd issues return. I have a post over at Zend (http://forums.zend.com/viewtopic.php?f=44&t=41043&p=95133) where I was told to update my Zend install. After doing so, the issue has not gone away. Even with 1 php-cgi.exe running (somehow 2 start anyway), the server begins to go silly. The first issue I see when php-cgi.exe is running is that Windows services, whether stock or run via FireDaemon, begin to lag, start slowly, crash, etc. If anyone can help me with this I would GREATLY appreciate it. At this point I am forced to look for an alternative to running PHP via CGI, as it simply takes out the whole box. On another note, I run this same version of Zend on a similar server with no issues, so I'm starting to think it's an IIS issue. (UPDATE: Installed the newest version of PHP, separate from Zend - same issue.) Server specs: Intel Xeon quad w/ HT (Nehalem-based), 24GB DDR3 1333, 2x1TB RAID mirror (OS), 2x1TB RAID mirror (other), 4x2TB RAID 5 (storage), Server 2008 R2.
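
    If the pool really is spawning more processes than its limit, it may be worth checking where the instance cap is actually set - a hedged sketch using stock IIS7 appcmd (the php-cgi.exe path is hypothetical; match it to the fullPath shown by the list command):

        %windir%\system32\inetsrv\appcmd list config -section:system.webServer/fastCgi

        rem cap the number of php-cgi.exe processes for that application at 10
        %windir%\system32\inetsrv\appcmd set config -section:system.webServer/fastCgi /[fullPath='C:\php\php-cgi.exe'].maxInstances:10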

    Read the article

  • Low-traffic WordPress website on Apache keeps crashing server

    - by OC2PS
    I have recently moved my low-to-moderate traffic website (1,000 unique visitors and 5,000 pageviews on a busy day) from shared hosting to a CentOS 6 64-bit VPS with Apache and cPanel, running on four (likely oversold) processor cores and 3GB of memory (Xen). We've had problems from the beginning: the server keeps crashing. It seems PHP keeps expanding until it consumes all the memory and takes the server down. Some folks have suggested that I abandon Apache/cPanel/PHP/MySQL and go with nginx/Varnish/PHP-FPM/SQLite, but that's just not possible for me: I am not very tech savvy and need a simple GUI like cPanel to manage the mundane administration tasks (I can't afford to hire a system administrator or get fully managed hosting). I have come across several posts discussing optimization of Apache for WordPress, but all of them lead to pretty dated articles, such as this ~4-year-old one from January 2009: http://thethemefoundry.com/blog/optimize-apache-wordpress/ - it is pretty detailed and seems helpful, but I stumble at the very first step. My httpd.conf only has 2 LoadModule commands:

        LoadModule fastinclude_module modules/mod_fastinclude.so
        LoadModule bwlimited_module modules/mod_bwlimited.so

    So I go totally bust right there. Further, my httpd.conf says that direct modifications to the Apache configuration file may be lost upon subsequent regeneration of the configuration file, and that to have modifications retained, they must be checked into the configuration system by running:

        /usr/local/cpanel/bin/apache_conf_distiller

    I am having trouble finding where to change the modules in WHM. Please can someone help me with updated guidelines on how to optimize Apache for WordPress? Many thanks!

    P.S. The WordPress installation also has WP Super Cache installed.
    P.P.S. I also have phpBB, OpenCart, and Menalto Gallery installed.
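
    A common reason a 3GB Apache/PHP box falls over is MaxClients being set far higher than memory allows, so a quick sanity check is to measure the average Apache worker size and divide it into the RAM left over after MySQL - a hedged sketch (it assumes cPanel's Apache processes are named httpd; the numbers will differ per site):

        # average resident size (KB) of the running Apache workers
        ps -ylC httpd --sort=rss | awk '$8 ~ /^[0-9]+$/ {sum += $8; n++} END {if (n) print int(sum/n) " KB avg across " n " workers"}'

        # free memory overview; MaxClients ~= (RAM left for Apache) / (avg worker size)
        free -m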

    Read the article

  • Terminal Server CPU usage at 100%

    - by Light1c3
    I'm running a terminal server with around 50-60 users, and every so often the server goes from 40% CPU usage to 100%. I took a closer look, and it seems that every time this happens, a different user or two get caught in a loop and end up using close to 30% each, where the rest of the users only use a maximum of 5%. The company behind the software we use claims it's due to the server's inadequate hardware (it's a VM system running on a dual quad-core setup), which to me sounds like BS! I'm fairly new to this level of IT, so if I misspoke I apologize. I have no way to prove it, but I believe adding more raw hardware power won't do me any good, as this seems like a bug in their software that will suck up as much (or as little) CPU as it's given. The VM in question has 4 vCPU cores and 12GB of RAM available, and is running Windows Server 2008, 64-bit. Thanks in advance for your help! Note: I have the same question posted on SO, but was pointed in this direction, so just in case, here is a link to the post: http://stackoverflow.com/questions/17276602/termserver-cpu-at-100
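
    One way to put numbers behind (or against) the vendor's "inadequate hardware" claim is to capture which processes and which users' sessions are burning the CPU when it spikes - a hedged sketch with stock commands (the 30-minute CPU-time filter is arbitrary):

        query session

        rem verbose listing, including user name, of processes with over 30 minutes of accumulated CPU time
        tasklist /V /FI "CPUTIME gt 00:30:00"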

    Read the article

  • What are the best configuration settings for WordPress and MySQL on a Win2008 + IIS7 stack?

    - by holiveira
    I currently have four blogs that use WordPress, running at a shared hosting company. These blogs get a considerable number of visits, and I'm constantly receiving warnings from the hosting company saying that I'm consuming too much server CPU. Considering that I have a dedicated server at another company with plenty of idle resources (a quad core Xeon 2.5GHz and 8GB of RAM, running Win2008), I'm planning to move the blogs to that server in order to have some more freedom. I'm currently using it to host some web applications built on ASP.NET and SQL Express. I installed a blog to test and it worked fine, but some issues appeared and raised questions in my mind:

    How do I properly set the permissions on the folders used by WordPress plugins - that is, what permissions should I grant the IIS user on which folders so the plugins work correctly?
    What's the best caching plugin to use, considering this is a Windows server? At the previous hosting company I used WP Super Cache, but that was a Linux stack. Or should I skip the caching plugins and use the dynamic caching feature of IIS7?
    How can I optimize the MySQL server running on this machine (especially the settings regarding memory and caching)?
    How can I protect the admin folders against hacker attacks?

    I know some people will advise me not to run WordPress on a Windows stack, but that's my only choice. I don't even know where to start managing a LAMP stack, and I have neither the time to learn nor the money to rent another server.
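
    On the first question, a hedged sketch of the usual IIS7 arrangement (the path is hypothetical - adjust it to the actual site root): grant the built-in IIS_IUSRS group Modify rights only on the folders WordPress actually writes to, rather than on the whole site:

        icacls "C:\inetpub\wwwroot\myblog\wp-content" /grant "IIS_IUSRS:(OI)(CI)M" /T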

    Read the article

  • MySQL: table organisation for very large sets with high update frequency

    - by Remiz
    I'm facing a dilemma in the choice of the MySQL schema for my application. Before I start, here is an extremely simplified picture of my database - schema here: http://i43.tinypic.com/2wp5lxz.png - in one sentence: for each customer, the application harvests text data and attaches tags to each piece of data collected. As an approximation of the usage of each table, here is what I expect:

    customer: ~5,000, shouldn't grow fast
    data: 5 million per customer; could double or triple for big customers
    tag: ~1,000, fairly fixed size
    data_tag: easily hundreds of millions per customer; each piece of data can be tagged a lot

    The harvesting process is permanent, meaning that around every 15 minutes new data comes in and is tagged, which requires very constant index refreshing. A lot of my queries are a SELECT COUNT of DATA between specific DATES and tagged with a specific TAG for a specific CUSTOMER (very rarely will they involve several customers). Here is the situation: as you can imagine, with this volume of data I'm facing a challenge in terms of data organization and indexing. Again, this is a very minimalistic and simplified version of my structure. My question is, is it better:

    to stick with this model and manage crazy index optimization? (which involves potentially having billions of rows in the data_tag table)
    to change the schema and use one data table and one data_tag table per customer? (which involves having 5,000 tables in my database)

    I'm running all of this on a replicated MySQL 5.0 dedicated server (quad core, 8GB of RAM). I only use InnoDB; I also have another server running Sphinx. Knowing all of this, I can't wait to hear your opinion. Thanks.
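
    For the COUNT queries described above, a hedged sketch of the kind of composite index that lets InnoDB answer them from the index alone (the column names and database name are hypothetical, and it assumes the harvest date is denormalized onto data_tag so the count avoids joining the data table):

        mysql -e "
          ALTER TABLE data_tag
            ADD INDEX idx_customer_tag_date (customer_id, tag_id, tagged_at);

          EXPLAIN SELECT COUNT(*)
            FROM data_tag
           WHERE customer_id = 42
             AND tag_id = 7
             AND tagged_at BETWEEN '2010-01-01' AND '2010-02-01';
        " mydb

    If the index matches the query, the EXPLAIN output should show "Using index", i.e. an index range scan with no table rows touched.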

    Read the article

  • Application Screen Repainting Issues

    - by Jeff Sheldon
    I have this issue lately at work. It drives me nuts, and I finally stopped to ask this question. Quite often, an application I've been running randomly fails to repaint itself for a while, usually in the editor screen. I most often see this occurring with Expression Web, Visual Studio 2008/2010 and SQL Server Management Studio. These applications are what I work in the most, so I'm not surprised to mostly see it there, but I was curious whether anyone else has a solution. I've tried: reboots (the screenshot below is from about 10 minutes after a reboot); new video drivers (this machine runs an NVIDIA Quadro NVS 290 video card with the latest drivers); and closing other applications (this is the only thing running right now). As for hardware, this machine has dual quad-core Xeon 2.83GHz processors with 10 gigs of memory, running Windows XP SP3 64-bit. Any help would be great. JNK

    EDIT: Per comments from a deleted (wrong) answer: I'm running dual monitors. I set it to single display, and it still occurred. I rebooted, tried again, and it still occurred, so I switched back to dual screen. My resolution is only 1400x900 on each.

    Read the article

  • Strange boot problems on 6 month old setup

    - by Balefire
    I've already exhausted my knowledge on this one, so forgive me if this post is a bit long. I built a computer 6 months ago for my wife and it worked fine until last week, when it randomly shut down and would lock up on the boot screen while trying to boot. I cleared CMOS, and it then allowed me to do startup recovery, but that "failed to fix the issue", so I reinstalled Windows on the HD (moving the old install to windows_old). It worked, so I started installing drivers again, but when I restarted to finalize the installations it locked up again. This time, I took the hard drive, hooked it up to my computer, backed up all her files, and then formatted the hard drive before reinstalling it (again I had to clear CMOS to let me boot from disk). It installed Windows, I installed drivers, and it worked for a few hours but then died during startup again. So then I got a new HD, cleared CMOS, and installed clean again, with the same result as the time before: it worked for a few hours, installed Windows updates, then crashed on the 3rd or 4th boot. I decided next to reinstall and then go online to see if there were any updates for the BIOS or the motherboard drivers, but now I can't even get it to bring up the boot menu, so I'm left wondering: was it the motherboard, or is it the CPU or the RAM? The problem was strangely intermittent, so I thought it had to be a software issue, since a hardware issue would ALWAYS fail to boot, right? But now it seems to be a hardware issue, because it's not bringing up anything. Any suggestions?

    System:
    Windows 7 64-bit
    Gigabyte 970A-DS3 motherboard
    AMD Phenom II X4 955 Deneb 3.2GHz quad core processor
    GeForce GT 430 (Fermi) 1GB video card
    500W PSU
    2 x G.SKILL Ripjaws X Series 4GB 240-pin DDR3 1600 RAM

    Read the article

  • less maximum buffer size?

    - by Tyzoid
    I was messing around with my system and found a novel way to use up memory, but it seems that the less command only holds a limited amount of data before stopping/killing the command. To test, run (careful! this uses lots of system memory very fast!):

        $ cat /dev/zero | less

    From my testing, it looks like the command is killed after less reaches 2.5 gigabytes of memory, but I can't find anything in the man page that suggests it would limit itself in such a way. In addition, I couldn't find any documentation via Google on the subject. Any light shed on this quite surprising discovery would be great! System information: quad core Intel i7, 8GB of RAM.

        $ uname -a
        Linux Tyler-Work 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

        $ less --version
        less 458 (GNU regular expressions)
        Copyright (C) 1984-2012 Mark Nudelman

        less comes with NO WARRANTY, to the extent permitted by law.
        For information about the terms of redistribution,
        see the file named README in the less distribution.
        Homepage: http://www.greenwoodsoftware.com/less

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 14.04 LTS
        Release:        14.04
        Codename:       trusty
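
    For what it's worth, less's buffering for pipes can be bounded explicitly - a hedged sketch based on the -b/-B options in the less(1) man page (with -B, text that scrolls out of the ~64KB buffer can no longer be redisplayed):

        # keep only the default 64 KB of buffer for pipe input
        cat /dev/zero | less -B

        # or cap buffer space at an explicit amount, e.g. 10240 KB
        cat /dev/zero | less -b10240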

    Read the article

  • Internet compression proxy for low speed broadband?

    - by user23150
    I live in a rural location, using high-latency wireless off a local ISP's tower. My speed tests vary day to day, but I can get around 1Mb up/down. The problem is, I work with large files, uploading and downloading (HD videos, development software, etc.), and the waiting can be painful. Plus, I do some side contract game development, and it can be very difficult to playtest with other developers (200ms ping is a good day for me). Now, obviously it's not going to be easy to solve the latency problem without different wireless hardware. But speed-wise, I am wondering if I can use some kind of compression technology on a proxy. For instance, my work computer has full access to a 26Mb down, 10Mb up connection that is totally unused at night and on weekends. If I could run some kind of compression technology on our server and use it as a proxy to route traffic to my home computer, I could stand to gain some major speed. I realize that by bogging down a system with compression, I could potentially lose whatever speed gain I had, but the proxy server is a quad core Xeon, and the receiving computer is a pretty decent i7, so that shouldn't be a concern. I found http://toonel.net/ but it seems geared more toward very slow narrowband users, like dial-up. Plus, I would prefer to just point my browser at a proxy server rather than install software on my client machine.

    EDIT: I thought about my question a little more, and realized I am going to need to install software on my client in order to decompress, and possibly compress (for uploading). That's not a huge deal.
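
    One low-setup way to get "a proxy with compression" - a hedged sketch, assuming the work server runs an SSH daemon, that company policy allows this, and that the hostname below is hypothetical:

        # from home: open a compressed (-C) dynamic SOCKS proxy (-D) through the work server
        ssh -C -D 1080 user@work-server.example.com

        # then point the browser at SOCKS proxy localhost:1080

    Note that SSH's gzip-level compression only helps with compressible traffic (HTML, text, source trees); HD video and most installers are already compressed and won't shrink much.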

    Read the article

  • Why am I getting programs stuck in log_wait_commit under Linux?

    - by staticsan
    There is something subtly wrong with my Linux install that I just can't locate. It is Ubuntu Lucid Lynx (10.04) 64-bit. The hardware is a Dell Optiplex 960: Intel Core 2 Quad CPU, 8GB of RAM, 2x 300GB HDDs. /home is ext3 on one disk and everything else is on the other (/ is also ext3). I have VirtualBox running a 64-bit Vista image for Outlook calendaring, but the heavyweight apps are IntelliJ, NetBeans, MySQL and Opera. Opera also loads my mail (IMAP), of which there are over 10,000 messages. The problem is that Opera stalls for a few seconds from time to time. Watching the process list shows it's in log_wait_commit, which means (as far as I have figured out) the filesystem is holding things up. Sometimes I can make this happen by doing a Subversion update, but usually it happens for no reason I can see. It usually happens to Opera, but I've seen NetBeans go under too. It doesn't make the app crash - it's just completely unresponsive for a few seconds. Googling has not helped. The closest I got was to remove the sync attribute on the filesystem; this achieved nothing. On the advice of a Linux guru friend, I lowered /proc/sys/vm/dirty_writeback_centisecs to 300, but that didn't do anything either, and it was all he could think of. What is going on, and can I fix it? (And how?)
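
    Since log_wait_commit means a process is blocking on an ext3 journal commit, one low-risk experiment is to stretch the commit interval so applications block on the journal less often - a hedged sketch (commit= is a standard ext3 mount option; the trade-off is that up to 60 seconds of recent writes can be lost on a crash, instead of the default 5):

        # check current mount options first
        grep ext3 /proc/mounts

        # lengthen the journal commit interval on /home
        sudo mount -o remount,commit=60 /home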

    Read the article

  • Can a VM perform better when only two cores instead of four cores are presented to it?

    - by arcain
    We had a VMware VM at work with two cores allocated to it that ran a pretty heinous process in IIS. Under load, the process was maxing out the CPU usage on both cores, so we asked our system engineers to present the other two cores of the physical processor to the VM. The engineer immediately said that this would not improve performance at all, but would actually make the VM perform worse. That statement didn't make much sense to me, and I'm wondering how what the engineer said could be true. Are there actually cases where four cores presented to a VM would cause worse performance than two cores on the same physical hardware? Let's assume an ideal situation where there's only one VM on the host server, so nothing is being shared with other OS instances. I believe the physical server had a single quad core processor and was most likely hosting multiple VMs. I don't really know what version of ESX was running on the host, nor do I know with certainty what the physical processor config was, but from within the VM I had access to, I saw two 3.33 GHz AMD processors. In the end, I never got to test the engineer's assertion, because 1) while we were trying to get the VM upgraded, we were able to optimize the process and reduce its CPU consumption, and 2) we ended up migrating to a different VM on another ESX server, which had four cores presented to it.

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28 29 30 31  | Next Page >