Search Results

Search found 17945 results on 718 pages for 'last fm'.


  • What is the solution to enable Dymo Turbo 400 Label Printer to work on Win 7 / 64-bit?

    - by mdpc
    It's Christmas time and time for printing labels for all those Christmas cards. I've upgraded to Windows 7 64-bit from XP. I've been unsuccessfully attempting to get the connected Dymo 400 Turbo label USB printer to work again. The latest manufacturer drivers have been successfully loaded and installed, and the drivers are supposed to work on Windows 7 64-bit. The Win 7 system(s) in question are patched and up-to-date on that score. The Windows Update site responds with a driver when the USB cable is connected to this printer, and the printer queue seems to be established correctly. What happens is that I submit a job to the printer (either using the DYMO s/w or not); it delays for a period of time, and then I get the message 'printing error'. I can't seem to locate the appropriate error in the new and improved event log. Several combinations of rebooting, re-installation and power cycling components fail to make the printer work. Sometimes during some type of reset it spits out the last thing to be submitted, but that seems intermittent. I have tried different USB cables and different USB (2.X) ports as well. I have run the Windows 7 troubleshooter; it tries to fix the problem, but alas it doesn't. Interestingly, trying the USB printer (and its associated manufacturer drivers and s/w) on another Windows 7 64-bit system produces the identical failures seen on the original system. I did not find anything on the manufacturer's site concerning this problem. The printer has no hardware problems or issues.

  • How to "drag and drop" folders or multiple HTML files into a browser and have them open in multiple tabs

    - by PoorLuzer
    I save pages that I browse on the net and find interesting into a folder called C:\PageSaves. Later, during the commute, I open these pages to see what they are and move them into a neatly categorized folder tree. For example, Perl-related pages go to C:\Pages\Perl, MySQL-related pages go to C:\Pages\MySQL, and so on. I was wondering if there is any way I could open any number of HTML files on disc / inside a folder (C:\PageSaves in my case) in Mozilla/Firefox/K-Meleon etc. For example, I would like to just "drag and drop" the folder C:\PageSaves into Firefox and have it open every .html page in the folder in a separate tab. Right now, if I "drag and drop" multiple HTML files, it just opens the last file in the selection. I would also like a set of toolbar buttons, basically a plugin, that would allow me to nuke the page from disc (if I don't want to keep it anymore) or move the file (and its corresponding folder) into a predefined / new folder. I am familiar with coding full-blown Firefox plugins, so even if something very basic / almost similar exists, I can take it forward. Hints/clues/other methods of achieving the same result are all welcome!
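    As a stopgap, here is a hedged sketch (not a built-in Firefox feature): browsers generally open every file passed on the command line in its own tab, so a small loop over the folder gives the "open the whole folder" effect. The folder path is the one from the question; firefox.exe being on the PATH is an assumption. Run from a cmd prompt (double the % signs if you put it in a batch file):

        rem open every saved page in a tab of the running Firefox instance
        for %f in (C:\PageSaves\*.html) do start "" firefox "%f"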

  • Why is my server performance degrading to the point of stopping, periodically?

    - by Pascal Aschwanden
    So, once in a while, I see in Firebug that a request takes over 15 or even 60 seconds to respond, and sometimes never does. Here is what I've ruled out:

    - It's not the CPU, because every time I check the server load it's less than 6 for all 3 numbers.
    - It's not the memory, because that's fairly low too, less than 50%.
    - It's not the I/O anymore, because I've seen the graphs that Joyent sent back to me when I requested them, and they show less than 3MB of I/O (mostly all reads).
    - It's not the SQL performance - I've profiled every last SQL command that runs, and they're all (99.9% of them anyway) running in less than 30ms; most run in less than 5ms.

    Oh, and I've been profiling all the script execution times, and even when the problem occurs, the script always manages to finish in 50ms or less (that's 1/20th of a second). Now, I do run a lot of AJAX calls: 1 every 2 seconds per user, and I have 300 DAU+. But even if all 300 are playing simultaneously, that's still only 150 calls per second max. The only other thing I can think of is that one of my neighbors is funky. The problem is highly intermittent. 99% of the time it works perfectly and there's excellent performance, but 99%+ is not good enough. Eventually the performance gets so bad I have to restart the server, at which point everything is fine again. I've done this about 4 times now. Any ideas? Note: this is on Joyent, VPS, intro package, 256MB of RAM with bursting. Here is the MySQL dump info:

        Traffic                          ø per hour
        Received        18 MiB           29 MiB
        Sent            134 MiB          221 MiB
        Total           151 MiB          251 MiB

        Connections                      ø per hour    %
        max. concurrent connections  5   ---           ---
        Failed attempts              0   0.00          0.00%
        Aborted                      0   0.00          0.00%
        Total                    9,418   15.59 k       100.00%

  • ADSL2+ - High sync-rate, good line attenuation, but low noise margin and slow speeds

    - by Mark Pim
    I've been with my ISP (IdNet) for a few months and have been getting some good speeds, but in the last week the speed has dramatically decreased (from 15 Mbps+ to around 0.2 Mbps). This happens at all times of day, not just peak periods. Obviously I've done all I can to isolate problems at my end - only one PC is connected to the router (via ethernet cable), no other background programs are using the network, and so on. I've raised the issue with the ISP and they've suggested trying a new ADSL filter to see if that is causing the problem, but I thought it would also be good to get the opinion of superuser on possible causes or other troubleshooting I can do. Here are the juicy stats :)

    My router (Netgear DGN1000) reports:

                            Downstream    Upstream
        Connection Speed    17602 kbps    1062 kbps
        Line Attenuation    17.9 dB       8.6 dB
        Noise Margin        6.0 dB        6.1 dB

    I used RouterStats and it seems to show those figures stay fairly consistent all the time. I ran the BT speedtest and it reported:

    - download speed of 164 kbps, out of a max achievable of 21000 kbps
    - upload speed of 859 kbps, out of 1048 kbps
    - DSL connection rate of 17719 kbps down and 1048 kbps up
    - IP Profile of 15000 kbps

    Is there any more troubleshooting I can do? Does this look like a problem with my equipment / wiring or with BT's line? Any advice would be great :)

  • WD Caviar Green Extremely Slow

    - by Steven
    I am encountering a really weird problem with my WD Caviar Green HDD. First of all, I have 2 HDDs in my desktop: one 160GB Seagate holding my Win7 Ultimate x64, and the problematic one, a WD 1.5 Caviar Green used for storage. My problem is kinda weird: when I transfer files from my Seagate (C:) to my WD (D:), the speed is good (50-60MB/s). The problem arises when I transfer too "many" large files - the transfer speed goes straight down to kilobytes/s. After I cancel the transfer and access my D:, even entering a folder requires loading for around 10 seconds. The problem not only arises when I am transferring files to my D:; it seems like my WD can't handle much activity in general. For instance, last time I installed a game on D: and faced a lot of lag after playing for some time. When the same game is installed on C:, no problem arises. Does anyone know what the problem is? P.S.: There was one temporary workaround I used to try. After the "situation" occurs, I access as many folders on D: as I can and let them load; repeating such actions and giving it some time brings the D: back to speedy transfers. However, large transfers cause the situation to happen again. Does it have something to do with the cache?

  • WAN Optimization for Small Office/Home Office

    - by TiernanO
    I have been reading up on WAN optimization for the last while, mostly out of interest in speeding up my own internet connections, but also to speed up the office internet connection. At home, I have 2 cable modems plugged into a RouterBoard RB750, which load balances the connections. In the office, we have a single connection into a NetGear router. Most of the WAN optimization products I have seen seem to be prohibitively expensive, but also seem to be based on the idea of having multiple branches around the world. What I am looking for, ideally, is as follows:

    - Software install: I am "guessing" I need to install it in 2 places: one in the office or house, and one in "the cloud".
    - Any connections going to, say, the US (we are in Europe, but our backups live in the US currently, which would be something important to speed up) would be "tunnelled" through the optimizer.
    - If downloading or uploading large files, open multiple connections between "the cloud" and the optimizer... This is where a lot of speed could be gained.
    - Finally, items not already compressed would be compressed on the cloud side of things, and items that are already on the optimizer would not be sent again - kind of like rsync or proxy servers.

    So, is there something that can be done? Is it available using off-the-shelf components (some magic script with SSH, Squid, Linux and duct tape), or is it something that needs to be purchased? Or even an open source project that does 90% of what I am asking?
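    On the "duct tape" angle, a rough, hedged sketch of the DIY version (generic SSH/rsync usage, not a specific product; the host names are placeholders): SSH can compress a tunnel on the fly, and rsync only re-sends blocks the far side doesn't already have, which covers the compression and "don't send it twice" points for the US backup traffic.

        # compressed SOCKS tunnel through a cheap VPS near the backup site
        ssh -C -N -D 1080 user@cloud-vps.example.com

        # delta-transfer the backup set with compression and resumable partial files
        rsync -az --partial --progress /local/backups/ user@us-backup.example.com:/backups/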

  • Computer not finding hard drives on boot -sometimes-

    - by todd.pund
    Computer specs:

    - Mobo: Gigabyte Ultra Durable 3 - GA-970A-UD3
    - Processor: first-gen i7, 3.2GHz
    - RAM: 8GB Kingston DDR3 1066
    - Video card: EVGA NVIDIA GTX 460 1GB
    - Hard drive: 500MB 7200rpm x2 (can't remember the brand, sorry, I'm at work)

    Last week my developer preview of Windows 8 ran out, so I put my copy of Windows 7 back on the computer. The computer at that point started suffering from frequent freezing and crashing. When I rebooted the computer, sometimes it wouldn't find the system HD at all. When I looked at the POST screen it seemed to show that it wasn't finding either of the HDs. Then yesterday when turning on the computer I just got GRUB as a message (not a GRUB prompt, just GRUB). I haven't had a dual boot with Linux for at least a year. I loaded the Windows 7 recovery console from the disk and ran:

        bootrec /fixboot
        bootrec /fixmbr

    which did not help. At that point I just installed Ubuntu 13.04 over the Windows 7 install and still received the GRUB message at POST. I went into the BIOS and switched the hard drive priorities, and then it loaded into Ubuntu fine. For several days everything was just hunky dory until I installed the Ubuntu version of Steam, installed Portal and tried to run it. At that point the computer froze, and after hard rebooting it couldn't find the hard disks again. Then after restarting the system it loaded up fine again, and there have been no issues since (I have not tried to launch Portal again). My next thought is to remove the system hard drive and try to use the secondary as the master to see if the primary HD is bad. I'm sorry if this has been confusing; I'll answer any questions I can. Any thoughts?
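    Before pulling drives, a quick health check from the running Ubuntu install can hint at whether the primary disk is actually failing. A hedged sketch - smartmontools is assumed to be installed, and /dev/sda is assumed to be the system disk:

        # overall SMART health and error counters
        sudo smartctl -a /dev/sda
        # run the drive's built-in short self-test, then re-read the log a few minutes later
        sudo smartctl -t short /dev/sda
        sudo smartctl -l selftest /dev/sda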

  • Can a non-redundant RAID5 cause any serious problems (compared to RAID0)?

    - by leemes
    I used to have a three-disc RAID5 (mdadm) in my computer for personal media storage (music, videos, photos, programs, games, ...). It had three discs with 750 GB each, resulting in an array capacity of 1.5 TB. One day (one year ago), I needed one of those discs to install another operating system. I thought I didn't need the redundancy anymore, since I back up the most important stuff (personal photos, for example) on an external disc anyway. So I decided to remove one of the three discs without converting the RAID to RAID0 or even two separate discs, because I had no temporary storage (since one cannot simply convert a RAID5 to RAID0, AFAIK). So now, for about one year, I have had a non-redundant RAID5 with 2 of 3 discs running. Sometimes, one of the discs has a defective contact at the power cable or something similar, causing the drive to stop working temporarily (I don't know exactly what it is). Since it still works after rebooting the computer, and in most cases after calling some mdadm commands, it wasn't that problematic. Note that the data is not very critical, since I still have a backup of the most important stuff. But in the last few weeks, one of the drives has failed very frequently (every few hours), so it gets really annoying to manage this. My questions are:

    - Is there any disadvantage (apart from the annoying management) of a non-redundant RAID5 (with one drive less than typical) over a RAID0? If I understand it correctly, both have no redundancy and the same capacity.
    - On a temporary drive failure, I can restart the array in both cases, assuming that the drive itself still works after the failure. Can it happen that the drive contents alter on a drive failure, making the array inconsistent?
    - If so, can I tell mdadm to check the array for failures (without a file-system-level checking tool)?
    - Since the drive most probably only has a defective contact, causing it to fail for a second only, can I tell mdadm to automatically restart the array, so I will not even notice the failure if no application wanted to access the file system during the failure?
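    On the "can I tell mdadm to check the array" point, a hedged sketch of the usual md consistency-check and re-add commands (md0 and sdc1 are placeholder names for the array and the flaky member, not taken from the question):

        # kick off a consistency check and watch its progress
        echo check > /sys/block/md0/md/sync_action
        cat /proc/mdstat
        # number of mismatched sectors found by the last check
        cat /sys/block/md0/md/mismatch_cnt

        # after a transient failure, try re-adding the dropped member
        mdadm /dev/md0 --re-add /dev/sdc1
        mdadm --detail /dev/md0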

  • Cloning a git repository from a machine running OS X

    - by Mike
    Hi folks, I'm trying to host a git repository on my home OS X machine, and I'm stuck on the last step of cloning the repository from a remote system. Here's what I've done so far:

    1. On the OS X (10.6.6) machine (heretofore dubbed the "server") I created a new admin user
    2. Logged into the new user's account
    3. Installed git
    4. Created an empty git repository via "git init"
    5. Turned on remote login
    6. Set port mapping on my router (Airport Extreme) to send ssh traffic to the server
    7. Added a ".ssh" directory to the user's home directory
    8. From the remote machine (also an OS X 10.6.6 machine), I sent that machine's public key to the server using scp and the login credentials of the user created in step 1
    9. To test that the server would use the remote machine's public key, I ssh'd to the server using the username of the user created in step 1 and indeed was able to connect successfully without being asked for a password
    10. I installed git on the remote machine
    11. From the remote machine I attempted to "git clone ssh://[email protected]:myrepo" (where "user", "my.server.address", and "myrepo" are all replaced by the actual username, server address and repo folder name, respectively)

    However, every time I try the command in step 11, I get asked to confirm the server's RSA fingerprint, then I'm asked for a password, but the password for the user I set up for that machine never works. Any advice on how to make this work would be greatly appreciated!
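    Two details worth double-checking, offered as a hedged sketch rather than a diagnosis: git's two SSH URL syntaxes differ on where the colon goes, and a copied public key only takes effect once it is appended to authorized_keys on the server. The paths and file names below are illustrative, not taken from the question:

        # scp-style syntax - the path is relative to the remote user's home directory
        git clone [email protected]:myrepo

        # ssh:// syntax - a colon here would mean a port number, so spell out the path
        git clone ssh://[email protected]/Users/user/myrepo

        # on the server: the uploaded key has to end up inside authorized_keys
        cat ~/uploaded_key.pub >> ~/.ssh/authorized_keys
        chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys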

  • Tuning up a MySQL server

    - by NinjaCat
    I inherited a mysql server, and so I've started by running the MySQLTuner.pl script. I am not a MySQL expert but I can see that there is definitely a mess here. I'm not looking to go after every single thing that needs fixing and tuning, but I do want to grab the major, low hanging fruit. Total memory on the system is 512MB. Yes, I know it's low, but it's what we have for the time being. Here's what the script had to say:

        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            When making adjustments, make tmp_table_size/max_heap_table_size equal
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
            Your applications are not closing MySQL connections properly
        Variables to adjust:
            query_cache_limit (> 1M, or use smaller result sets)
            tmp_table_size (> 16M)
            max_heap_table_size (> 16M)
            table_cache (> 64)
            innodb_buffer_pool_size (>= 326M)

    For the variables that it recommends I adjust, I don't even see most of them in the mysql.cnf file:

        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        innodb_buffer_pool_size = 220M
        innodb_flush_log_at_trx_commit = 2
        innodb_file_per_table = 1
        innodb_thread_concurrency = 32
        skip-locking
        big-tables
        max_connections = 50
        innodb_lock_wait_timeout = 600
        slave_transaction_retries = 10
        innodb_table_locks = 0
        innodb_additional_mem_pool_size = 20M
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        bind-address = localhost
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 4
        myisam-recover = BACKUP
        query_cache_limit = 1M
        query_cache_size = 16M
        log_error = /var/log/mysql/error.log
        expire_logs_days = 10
        max_binlog_size = 100M
        skip-locking
        innodb_file_per_table = 1
        big-tables

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]

        [isamchk]
        key_buffer = 16M

        !includedir /etc/mysql/conf.d/
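    As a hedged starting point only (the variable names come straight from the tuner output above; the exact sizes are guesses to tune against, and the buffer pool suggestion clearly cannot fit in 512MB of RAM), the missing variables would simply be added under the [mysqld] section:

        [mysqld]
        # keep these two equal, per the tuner's advice
        tmp_table_size      = 32M
        max_heap_table_size = 32M
        table_cache         = 128
        query_cache_limit   = 2M
        # the tuner asks for innodb_buffer_pool_size >= 326M, which this 512MB host cannot spare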

  • All computers on network get stuck waiting for some sites indefinitely

    - by zacaj
    This happens across three computers, running Windows 7 and Ubuntu, with Firefox, Opera, and Chrome (all latest versions). I am connected to the internet through a Verizon wireless USB modem. When I try to open some web pages they never finish loading (and usually never even show anything). The status bar at the bottom of the browser displays "Waiting for X". The servers it gets stuck on include:

    - platform.twitter.com
    - s7.addthis.com
    - connect.facebook.net
    - ajax.googleapis.com
    - 2mdn.net

    I've been getting away with just blocking them in AdBlock up until now; however, the last two have been causing problems. There are some sites which require googleapis.com to load correctly, and some that won't ever load unless it's blocked. eBay requires access to 2mdn.net to load pictures. On top of this, it's getting really annoying having to update AdBlock across all these computers whenever a new site pops up. I'm hoping there's some easier way to fix this? The different sites causing the freeze indicate to me that it's either a problem on my end (somehow?) or some server-side software that got updated with a new bug?
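    One way to narrow down "my end vs. their end" - a hedged sketch of generic checks run from the Ubuntu machine, using ajax.googleapis.com as the example host:

        # does the name resolve, and to what?
        host ajax.googleapis.com
        # does the host answer an HTTP request outside the browser?
        curl -v http://ajax.googleapis.com/ -o /dev/null
        # where along the path do packets stop?
        traceroute ajax.googleapis.com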

  • Windows 2008 R2 IIS worker process memory usage increase

    - by nLL
    I have this web site written in C#, with around 400-500 users online at any time. It was on a Windows 2008 32-bit machine before and never locked up or slowed down due to increased memory consumption, up until I upgraded its server to Win 2008 R2 64-bit. The old server had only 4 gigs of RAM and a quad core CPU at 2GHz, and the site was working just fine. Since I upgraded the server I have noticed (2 times within 10 days) that it has started to eat RAM. Last night it went up to 4 GB of RAM, and with the RAM increase the response slows down quite a lot. Recycling the app pool doesn't help; I have to restart its worker process to recover. I've noticed this usually happens if there are continuous errors. As I didn't change anything in the code, am I safe to assume it is not related to a memory leak in the code? Did anyone come across something like that? The same thing happens if I create continuous errors with classic ASP. Thanks

  • PC in POST loop

    - by Antony Scott
    Hi, I have a custom-built PC using a Gigabyte GA-EP35-DS3P motherboard with a Q6600 CPU. For the last 2 days it has got itself stuck in a POST loop. Saying that, I don't think it actually got into the BIOS. It repeatedly lit up the LEDs and then not much more. Sometimes I could see the CPU fan twitch. Today I re-seated the DIMMs and it powered up straight away. Could this be a sign of an impending hardware failure? The PC is hooked up to a UPS, so I don't think it's a power spike or anything like that, as I have 2 other PCs on the same UPS and they're both fine. Yesterday, the first time this happened, I was getting a message which I think said "Scanning BIOS image on hard drive". I've been building and using PCs for well over 25 years and that's a new one on me! I don't think it's an overheating problem, as when the PC does finally boot up the CPU is running at 35-40C. Any help or suggestions would be greatly appreciated.

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3GB/min. These working-set jobs start out at perhaps 40MB/min and over the course of the backup job slowly drop down so low that the BE job rate display in "current jobs" goes blank. Since we usually are only doing changed files for one day, the job is usually small and finishes overnight, so we don't worry about the slowness, but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, sent them that log and a VXgather from the BE host, and they had no fix/workaround. To give an idea, the mentioned working-set job has been running for the last 3 1/2 hours and has backed up just under 10 megabytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless.

  • Fixing damaged partition table

    - by dr4cul4
    This is a continuation of Recover Extended Partition, but this time I have a different problem, related to the partition table itself. I managed to restore the partition that I needed and backed up the files that were crucial to me (at least those that I had space to store somewhere). OK, now to the problem. My partition table is corrupted. Booting RIP Linux I can mount it in TrueCrypt (and the other partitions that I recovered), but that's basically it. When I launch GParted I have an unallocated drive.

    GParted device information:

        Model: ATA ST2000DL003-9VT1
        Size: 1.82TiB
        Path: /dev/sda
        Partition table: unrecognized
        Heads: 255
        Sectors/track: 63
        Cylinders: 243201
        Total Sectors: 3907029168
        Sector size: 512

    When I check information on the unallocated space I get:

        File system: unallocated
        Size: 1.82TiB
        First sector: 0
        Last sector: 3907029167
        Total sectors: 3907029168
        Warning: Can't have a partition outside the disk!

    Now the output of TestDisk (Analyse):

        TestDisk 6.13, Data Recovery Utility, November 2011
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org

        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        Current partition structure:
             Partition                 Start        End      Size in sectors
        > 1 P Linux                13132 242 39  16353 233  8     51744768
          2 E extended LBA         16807 223  1  243201 254 63  3637021626
        No partition is bootable
          5 L Linux                16807 223 57  20430  39 25     58191872
          X extended               20430  70  1  243201  78 13  3578816632
        Invalid NTFS or EXFAT boot
          6 L HPFS - NTFS          20430  71 58  243201  78 13  3578816512

    Now fdisk:

        # fdisk -l /dev/sda

        Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00039cd0

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1       210980864   262725631    25872384   83  Linux
        /dev/sda2       270018504  3907040129  1818510813    f  W95 Ext'd (LBA)
        /dev/sda5       270018560   328210431    29095936   83  Linux
        /dev/sda6       328212480  3907028991  1789408256    7  HPFS/NTFS/exFAT

    Now I would like to fix this so the partitions are arranged correctly, but I have no idea which tool is capable of fixing it (I tried a few; some of them offered fixes, but it was too risky at the moment - I am still backing up data).
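    Before letting any tool write a new table, it is cheap to snapshot what is currently on disk so every attempt stays reversible. A hedged aside - these are generic commands run from the RIP Linux environment, and the sfdisk dump may come back mostly empty while the table is unrecognized, but it costs nothing:

        # save the first sector (MBR + current partition table) and a text dump of the layout
        dd if=/dev/sda of=/root/sda-mbr.bin bs=512 count=1
        sfdisk -d /dev/sda > /root/sda-layout.txt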

  • Can a website see/know my MAC address even if I use a VPN?

    - by ilhan
    I have searched other results and read many of them, but I could not get enough information. My question is: can a website see my MAC address, or can they gather any information showing that I'm the same person, under these conditions:

    - I am using a VPN and I use two IPs: the first one is my normal one, the second one is the VPN's IP.
    - I use two browsers to hide from browser fingerprinting.
    - I use both browsers in Incognito Mode.
    - I always use one for the normal IP, one for the VPN IP.
    - I do not know whether the website uses cookies or not.

    But can they collect enough information to prove that these two identities belong to the same person? Is there any other way for them to see that I am the same person? I use different IPs, different browsers, and I use both browsers in incognito mode. I even changed one of the browsers' languages to English only, so even if they collect my info from the browser, they will see two browsers using different languages. (Addition after edit:) So I have changed my IP and browser information, and the website cannot reach this information anymore to prove that I am the same person using two accounts. Then let's come to the title: can they see my MAC address? I think that it is the last way they could identify me, and it is my main question. I wrote the information above to mention that I changed IPs and I have some precautions against browser fingerprinting (btw my VPN provider already has a service for blocking it). I wrote it because I read similar advice in some related questions, but my question is: can they see my MAC address (or anything else that could get me detected) despite all these precautions? And lastly, is there an extra way to be anonymized that I can use? For example, can my system clock or anything else give away information? Thanks in advance.

  • Win7 constant BSOD 0x7B on boot, not producing any dump files - where to go from here?

    - by prayingpantis
    So one of my Win 7 PCs has been getting a BSOD on boot (roughly a second after the load screen starts) after a power failure. The complete stop code is 0x0000007B (0x80786B58, 0xC0000034, 0x00000000, 0x00000000). I've searched for quite a while now on the net, and it seems like most people gave up after getting 0x7B and no dump files. What I've tried so far:

    - Startup repair - reports it cannot repair the computer automatically. BadPatch is reported somewhere in a problem signature contained in the problem details.
    - Startup repair with a Win 7 CD - also fails. I can't recall what the error was, but it was not the same as the error produced with the startup tool shipped with the version of Win 7 installed on my machine (I think the text had something ACL-ish in it).
    - Used a boot disk (Hiren's boot ISO) - I used it to enable the CrashDump registry key, and then after the BSOD, read the HDD's dump locations, but they were empty. Note, I'm quite sure the registry keys I edited are the correct ones, since the reboot-on-BSOD option was enabled by default, and after I changed the reg key controlling this functionality to 0 the BSOD stayed on screen after I booted again.
    - Check disk - works and returns no problems; I'm also able to access all my files on the HDD.
    - Mem test - works and returns no errors.

    So I'm not sure what else I can do to figure out what the problem is here. I read somewhere that you can use WinDbg to remote debug another PC, but I'm not sure if this is possible since the OS isn't even loaded yet? Also, the last driver change I made on the system was installing a video driver, but I had no problems with it and was able to reboot several times, until the power outage happened and the BSOD appeared. Any help or guidance on a way to debug this problem would really be appreciated (I'm not really keen to try a whole bunch of random fixes; I'd rather try and narrow down the problem first).
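    For reference, the crash-dump behaviour mentioned above lives under a single key. A hedged sketch of the relevant values, written as reg.exe commands against a running system (from a boot disk the offline SYSTEM hive would have to be loaded first, so the key path would differ):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v AutoReboot /t REG_DWORD /d 0 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v CrashDumpEnabled /t REG_DWORD /d 2 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v DumpFile /t REG_EXPAND_SZ /d "%SystemRoot%\MEMORY.DMP" /f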

  • Subversion COPY/MOVE - File not found: transaction 'XXX-XX'

    - by theplatz
    I'm attempting to create a branch in one of my Subversion repositories and keep running into an error. No matter what I do, I keep getting the following:

        File not found: transaction '3062-2e6', path '/Software/XXXXXX/branches/testbranch'

    I've noticed that the first part of the '3063-3e6' in the above message is the last successfully committed revision in the repository. My Apache logs don't give much more information:

        [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Could not MOVE/COPY /svn/p070361/!svn/bc/3049/Software/SXXXXXX/trunk.  [404, #0]
        [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Unable to make a filesystem copy.  [404, #160013]
        [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] File not found: transaction '3059-2e2', path '/Software/XXXXXX/branches/testbranch'  [404, #160013]

    This is all happening on a server with an nginx frontend that proxies to Apache for the Subversion bits. Other repositories are able to branch fine, and I was able to create the branch using file:/// from the command line on the server where this is occurring. The permissions on this repository match every other repository, and disk space is not an issue.

  • MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy MySQL server, MySQLTuner 1.2.0 always recommends adding to query_cache_size, no matter how much I increase the value (I tried up to 512MB). On the other hand, it warns that "Increasing the query_cache size over 128M may reduce performance". Here are the latest results:

        >>  MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >>  Run with '--help' for additional options and output filtering

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 6G (Tables: 195)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 51

        -------- Security Recommendations --------------------------------------------
        [OK] All database users have passwords assigned

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B)
        [--] Reads / Writes: 89% / 11%
        [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads)
        [!!] Maximum possible memory usage: 132.2G (139% of installed RAM)
        [OK] Slow queries: 0% (2K/254M)
        [OK] Highest usage of available connections: 32% (391/1200)
        [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K
        [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads)
        [OK] Query cache efficiency: 79.9% (181M cached / 226M selects)
        [!!] Query cache prunes per day: 1033203
        [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts)
        [OK] Temporary tables created on disk: 14% (760K on disk / 5M total)
        [OK] Thread cache hit rate: 99% (676 created / 5M connections)
        [OK] Table cache hit rate: 22% (1K open / 8K opened)
        [OK] Open file limit used: 0% (49/13K)
        [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks)
        [OK] InnoDB data size / buffer pool: 6.1G/19.5G

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Reduce your overall MySQL memory footprint for system stability
            Increasing the query_cache size over 128M may reduce performance
        Variables to adjust:
          *** MySQL's maximum memory usage is dangerously high ***
          *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 192M) [see warning above]

    The server has 76GB of RAM and dual E5-2650s. The load is usually below 2. I would appreciate your hints on how to interpret the recommendation and optimize the database config.

  • Which default Database Systems come installed in Microsoft VS2010 Express?

    - by Tonygts
    I would appreciate all advice on the following questions:

    1. Which database systems (MS SQL 2008, MS SQL Compact, or others) come installed with the VS2010 Express edition?
    2. SQL Server 2008 R2 Express is free; can we install it and integrate it with VS2010 Express?
    3. How do I uninstall the databases that already come installed?

    I have installed VS2010 Express on Windows 7 - just the VS2010 components (VB, C#, C++ and Web Developer), without installing any other things like SQL Express. In the Control Panel's 'Programs & Features' window, the installed list is shown below:

        Microsoft SQL Server 2008 Setup Support File
        Microsoft SQL Server 2008 Browser
        Microsoft SQL Server VSS Writer
        Microsoft SQL Server Database Publishing Wizard 1.4
        Microsoft ASP.NET MVC2 - VWD Express 2010 Tools
        Microsoft SQL Server 2008 Management Objects
        Microsoft SQL Server Compact 3.5 SP2 ENU
        Microsoft SQL Server System CLR Types
        Microsoft Silverlight 3 SDK
        Microsoft ASP.NET MVC 2
        Microsoft Visual Studio 2010 ADO.NET Entity Framework Tools
        Visual Studio 2010 Tools for SQL Server Compact 3.5 SP2 ENU
        Web Deployment Tool
        Microsoft Visual Web Developer 2010 Express - ENU
        Microsoft Visual C++ 2010 Express - ENU
        Microsoft Visual C# 2010 Express - ENU
        Microsoft Visual Basic 2010 Express - ENU
        Microsoft SQL Server 2008

    As you can see, Microsoft SQL Server 2008 (last line) and, near the top, Microsoft SQL Server Compact 3.5 SP2 ENU and many of their related SQL components, such as Microsoft SQL Server 2008 R2 Management Objects, are also installed. These are actually installed by installing VS2010 Express, but I have no idea how to use them or verify their existence from VS2010. Also, do I have to uninstall them before I install SQL Server 2008 R2, which is the latest version I believe? And what tool is needed to manage and create data sources and tables?

  • nginx codeigniter rewrite: Controller name conflicts with directory

    - by palerdot
    I'm trying out nginx and porting my existing Apache configuration to nginx. I have managed to reroute the CodeIgniter URLs successfully, but I'm having a problem with one particular controller whose name coincides with a directory in the site root. My CodeIgniter URLs work as they did in Apache, except for one particular URL, say http://localhost/hello, which coincides with a hello directory in the site root. Apache had no problem with this, but nginx routes to this directory instead of the controller. My reroute structure is as follows:

        http://host_name/incoming_url  =>  http://host_name/index.php/incoming_url

    All the CodeIgniter files are in the site root. My nginx configuration (relevant parts):

        location / {
            # First attempt to serve request as file, then
            # as directory, then fall back to index.html
            index index.php index.html index.htm;
            try_files $uri $uri/ /index.php/$request_uri;
            # apache rewrite rule conversion
            if (!-e $request_filename){
                rewrite ^(.*)/?$ /index.php?/$1 last;
            }
            # Uncomment to enable naxsi on this location
            # include /etc/nginx/naxsi.rules
        }

        location ~ \.php.*$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            # With php5-cgi alone:
            fastcgi_pass 127.0.0.1:9000;
            # With php5-fpm:
            #fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
        }

    I'm new to nginx and I need help figuring out this directory conflict with the controller name. I pieced this configuration together from various sources on the web, and any better way of writing my configuration is greatly appreciated.
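    For what it's worth, the directory wins here because of the $uri/ element in try_files: an existing directory matches before the request ever falls through to index.php. A hedged sketch of one common CodeIgniter-on-nginx variant (untested against this site) simply drops that element so only real files are served directly:

        location / {
            index index.php index.html index.htm;
            # without the $uri/ element, an existing "hello" directory
            # no longer shadows the /hello controller
            try_files $uri /index.php/$request_uri;
        }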

  • Advice for UPS/Surge Protector in home office

    - by user37755
    I'm just starting out as an independent developer, mostly Unix stuff with some Windows thrown in occasionally. I've been running two machines, a Linux and a Windows dev machine. Long story short, we had a bad storm come through last week; I unplugged one machine, forgot to unplug the other, and its PSU and mobo ended up dead. Luckily I back up to an external service religiously (rsync.net, for anyone interested), so there was no loss of data, but it did show me a glaring hole in my current setup, namely the lack of a UPS and surge protection (this has honestly never been an issue before). Can anyone recommend a UPS/surge protector for a home office? It only needs to support a single machine (I opted to use VMware instead of rebuilding the dead machine), but it's a quad-core Phenom II with a 1kW PSU. This is outside my experience, so I thought I'd get some input from others. I'm looking for something that's reasonably priced and does the job reasonably well. I don't need absolute 100% uptime, just something to protect my PC better than it is now.

  • Excel table auto update

    - by Mike
    So I have a table in Excel with formulas. When I add a new row, the new row automatically fills in the formulas as well, which is great. My problem is that it also changes the formula in the row above the added row. Here's what happens specifically. My table's last row is row 24, and a formula I have in that row is the following:

        =COUNTIF(C$11:C24,"y")/(COUNTIF(C$11:C24,"Y")+COUNTIF(C$11:C24,"N"))

    When I add data in row 25, the formula is updated in row 25 as well, which is what I want, to the following:

        =COUNTIF(C$11:C25,"y")/(COUNTIF(C$11:C25,"Y")+COUNTIF(C$11:C25,"N"))

    My problem is that the row above also updates - my row 24 changes to the same as row 25 (the C24 becomes C25). Why is my row 24 formula changing when I add a row 25? Note that my formulas above row 24 stay the same when I add row 25 - only row 24 changes. Is there a way to not update the row above the row being added? This problem continues when additional rows are added: if I add a row 26, then the formulas in rows 24-26 all reference C26. Why are they all updating?

  • Is this a legitimate registry key? (windows 7)

    - by Keyes
    In HKEY_LOCAL_MACHINE\Software\Classes I found some registry keys named MSIME.Taiwan, MSIME.Japan and a couple of others with similar names, except with a number at the end, so there were 4 keys altogether. From what I know, they could be associated with the Windows feature that lets you write Japanese characters. I also found a McAfee page, which seemed dated, but it said the key is created by a virus named W32/Virut. Just wondering: is this a legit key? I found it on another PC as well, and on both PCs the keys, when exported to a .txt, show they were last written to in 2009. Here is the reg query for the 4 keys (I've added line breaks to differentiate them):

        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Japan
            (Default)    REG_SZ    Microsoft IME (Japanese)
        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Japan\CLSID
            (Default)    REG_SZ    {6A91029E-AA49-471B-AEE7-7D332785660D}
        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Japan\CurVer
            (Default)    REG_SZ    MSIME.Japan.11

        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Japan.11
            (Default)    REG_SZ    Microsoft IME (Japanese)
        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Japan.11\CLSID
            (Default)    REG_SZ    {6A91029E-AA49-471B-AEE7-7D332785660D}

        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Taiwan\CLSID
            (Default)    REG_SZ    {F407D01A-0BCB-4591-9BD6-EA4A71DF0799}
        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Taiwan\CurVer
            (Default)    REG_SZ    MSIME.Taiwan.8

        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Taiwan.8
            (Default)    REG_SZ    IMTCCORE
        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Taiwan.8\CLSID
            (Default)    REG_SZ    {F407D01A-0BCB-4591-9BD6-EA4A71DF0799}

  • 412 Precondition Failed error only occurs on certain networks

    - by Andy
    One of my favorite websites, http://jessiejofficial.com (yes, I'm a Jessie J fan :')), has recently started displaying the error message "412 Precondition Failed" whenever I visit it from my home network, even when I use the Tor Browser. At first I thought that this was an issue with the whole website; however, I have contacted the web developer and he has said that there have been plenty of hits within the last 48 hours. Plus, I discovered tonight that I can access the website from my phone through the mobile network. So it appears to just be my network, as all of the devices in my house connected to the WiFi display the same error when I try to visit any page of the site. However, there have been no changes to our network that we are aware of, or that are noticeable, since the website was last accessible, and I have just heard that another person in a different part of the country is experiencing the same difficulties. Any help/advice/suggestions would be greatly appreciated.

    Update: When trying to ping 'jessiejofficial.com' in the Windows command prompt, the request times out on all four attempts, on any computer connected to the wireless network. I can now also confirm that the same thing occurs on my MacBook Pro.
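    A hedged follow-up to the ping observation: a couple of standard Windows commands can show whether the name is resolving to something odd on this network (for example a stale or poisoned DNS entry), which would fit the "only on our WiFi" pattern:

        nslookup jessiejofficial.com
        ipconfig /flushdns
        tracert jessiejofficial.com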
