Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.

  • How much network latency is "typical" for east-west coast USA?

    - by Jeff Atwood
    At the moment we're trying to decide whether to move our datacenter from the west coast to the east coast. However, I am seeing some disturbing latency numbers from my west coast location to the east coast. Here's a sample result, retrieving a small .png logo file in Google Chrome and using the dev tools to see how long the request takes:
      West coast to east coast: 215 ms latency, 46 ms transfer time, 261 ms total
      West coast to west coast: 114 ms latency, 41 ms transfer time, 155 ms total
    It makes sense that Corvallis, OR is geographically closer to my location in Berkeley, CA, so I expect that connection to be a bit faster, but I'm seeing an increase in latency of +100 ms when I perform the same test against the NYC server. That seems... excessive to me. Particularly since the time spent transferring the actual data only increased 10%, yet the latency increased 100%! That feels... wrong... to me. I found a few links here that were helpful (through Google, no less!): "Does routing distance affect performance significantly?", "How does geography affect network latency?", "Latency in Internet connections from Europe to USA"... but nothing authoritative. So, is this normal? It doesn't feel normal. What is the "typical" latency I should expect when moving network packets from the east coast <--> west coast of the USA?
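
    For anyone wanting to reproduce the comparison outside the browser, here is a minimal sketch (Python standard library only; the hostnames are placeholders for the actual servers being compared) that times just the TCP handshake, which isolates round-trip latency from transfer time and server think time:

      import socket, time

      def connect_latency(host, port=80, samples=5):
          # Resolve once up front so DNS time doesn't pollute the measurement.
          addr = (socket.gethostbyname(host), port)
          total = 0.0
          for _ in range(samples):
              start = time.perf_counter()
              sock = socket.create_connection(addr, timeout=5)
              total += time.perf_counter() - start
              sock.close()
          return total / samples * 1000  # average TCP connect time in ms

      # Placeholder hostnames -- substitute your own east/west servers.
      for host in ("nyc-server.example.com", "corvallis-server.example.com"):
          print(f"{host}: {connect_latency(host):.0f} ms")

    For scale: the NYC-Berkeley distance alone costs roughly 40 ms round trip at the speed of light in fiber, so real-world coast-to-coast RTTs in the 70-90 ms range are common, and the browser's 215 ms figure likely includes server processing time on top of the network.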

    Read the article

  • Using bind (named) as a public proxy server

    - by TrentDavis
    We have a Python DNS server that does a bunch of stuff to figure out the values it should return for various DNS records. This works nicely; however, as it is Python, the performance under high load won't be great. What I would like to do is have a "proxy" bind server sit in front of it to return results to the public internet. This will cache the results (typically for 15 minutes; some records for a few seconds), so the load on the Python server will be greatly reduced, as it will only see one lookup per domain (only about 100 domains) every 15 minutes. The data in these domains changes a lot, so using a master won't work, as it would constantly be changing. I had something set up that looked like it would work great (using a forwarder for the zone) and tested it with dig etc., all going great. However, when we went to go live with it, things weren't working, and we figured out that named is not setting the "Authoritative" bit (fair enough, it is a forwarder). So my question is: can we tell bind to set the Authoritative bit for forwarded domains? I have looked at all the doco I can find, and can't find anything about doing things this way. Most of the doco about using it as a proxy is for a LAN to the internet. Ideally I would like to use bind, as it is there and installed (CentOS 5 servers). But at a pinch we could look at a different name server to do the work if it just can't be done with bind. Thanks.
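
    As a quick way to check whether any given server sets the AA flag, here is a sketch using the third-party dnspython package (an assumption on my part -- any DNS library that exposes response flags would do; the domain and forwarder address are placeholders):

      import dns.flags
      import dns.message
      import dns.query

      # Placeholder values -- substitute one of the ~100 domains and the
      # address of the bind forwarder under test.
      query = dns.message.make_query("example.com", "A")
      response = dns.query.udp(query, "192.0.2.1", timeout=5)

      # AA is set only when the server answers authoritatively;
      # a plain forwarder normally leaves it clear.
      print("authoritative:", bool(response.flags & dns.flags.AA))

    If bind can't be persuaded to set the bit for forwarded zones, a name server designed to front dynamic backends (PowerDNS with its pipe backend is one commonly cited option) may be the "different name server" worth looking at.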

    Read the article

  • Can't connect to local MySQL server through socket

    - by Martin
    I was trying to tune the performance of a running mysql-server by running this command:
      mysqld_safe --key_buffer_size=64M --table_cache=256 --sort_buffer_size=4M --read_buffer_size=1M &
    After this I'm unable to connect to MySQL from the server where MySQL is running. I get this error:
      ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)
    However, luckily, I can still connect to MySQL remotely. So all my webservers still have access to MySQL and are running without any problems. Because of this, though, I don't want to try to restart the MySQL server, since that will probably mess everything up. Now, I know that mysqld_safe starts the MySQL server, and since the MySQL server was already running, I guess it's some kind of problem with two MySQL servers running and listening on the same port. Is there some way to solve this problem without restarting the initial MySQL server?
    UPDATE: This is what ps xa | grep "mysql" says:
      11672 ?      S      0:00 /bin/sh /usr/bin/mysqld_safe
      11780 ?      Sl   175:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
      11781 ?      S      0:00 logger -t mysqld -p daemon.error
      12432 pts/0  R+     0:00 grep mysql
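
    One way to confirm that only the socket path is broken (and that the original server is still healthy on TCP) is a quick check like this -- a sketch using only the standard library; the socket path and port are taken from the ps output above:

      import os, socket

      SOCKET_PATH = "/var/run/mysqld/mysqld.sock"

      # 1. Does the socket file the local client expects actually exist?
      print("socket file exists:", os.path.exists(SOCKET_PATH))

      # 2. Is the original server still answering on TCP port 3306?
      try:
          s = socket.create_connection(("127.0.0.1", 3306), timeout=3)
          print("TCP 3306 reachable -- try: mysql -h 127.0.0.1 --protocol=TCP")
          s.close()
      except OSError as exc:
          print("TCP 3306 not reachable:", exc)

    If the TCP check passes, the local client can keep working over TCP (the mysql client's --protocol=TCP option forces it past the socket) until a restart is convenient.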

    Read the article

  • Intel Rapid Storage Technology (pre-OS) driver installation

    - by Nero theZero
    My desktop machine is built on a Gigabyte GA-Z87-UD3H, and Gigabyte provides the latest driver for Intel Rapid Storage Technology (IRST), which I installed after installing the OS. The same goes for my Lenovo ThinkPad T420. On both machines, checking the controller device under the IDE ATA/ATAPI Controllers section in Device Manager, I see the driver has been updated to the latest version. I set the SATA controller to AHCI in the BIOS. On the desktop machine I have one WD 2TB Black and one WD 3TB Green. I don't use RAID, and have no plans to in the near future, but according to Intel, IRST improves performance in single-disk scenarios too. Now I have the following questions:
    1. What is the actual purpose of the IRST (pre-OS install) driver that isn't served by the post-OS driver I installed? There must be some difference, otherwise there wouldn't be a pre-OS version of the driver. Right?
    2. In the pre-OS procedure (loading the driver at OS-installation time), after successfully completing the OS installation, do I still need the post-OS driver? After installing from that one I got a quick-launch icon that runs the IRST configuration application. Where do I get that after installing only the pre-OS driver?
    3. As it is "pre-OS", when I load it at OS-installation time, does it update anything at the BIOS level, or anywhere other than the HDD? I ask because I'm going to dual-boot Windows 7 with Windows 8.1; after installing Windows 7, when I install Windows 8.1 and load the IRST driver for it, is there any chance of "overwriting" or OS incompatibility? In short, is there anything specific to follow while installing the second OS?

    Read the article

  • Macbook Pro Triple Boot OS X Lion, Windows 7 and Windows 8

    - by Lloyd Sparkes
    MacBook Pro (Summer 2010, base model). I currently have OS X Lion and Windows 7 running side by side on my MacBook Pro. However, I need to get Windows 8 running as well in this mix (a virtual machine is not good enough; I need the performance). I have created a suitably sized partition (80GB) that is recognized in Boot Camp. However, every time I try to boot from the USB stick (the same one that worked to install Windows 8 on my PC) using the latest version of rEFIt, it just boots Windows 7 and not the Windows 8 installer. I cannot start the installation from within Windows 7, as it would just install over Windows 7. I'm guessing the Boot Camp emulation is doing something weird to stop the "Press any key to install Windows..." message from appearing (which should happen if the installer detects Windows is already installed, e.g. if you left your install disk in). Is there a way to get around this / force the installer to start? (Note that I cannot start the Windows 7 installer either, if I wanted to install a second copy of Windows 7 to upgrade to Windows 8.)

    Read the article

  • How do I diagnose a bottleneck in an Intel Atom based Ubuntu server?

    - by Jon Cage
    I have a small media server at home which has software RAID and a gigabit link to the rest of my network. For some reason, though, I only get ~10MB/s transfers when copying to/from the server. I use software RAID5 (mdadm) over four 1TB disks. On top of that I use LVM to give me a huge pool of disk space, which is then split up into multiple partitions that can be resized as and when they need it. I'm guessing this is most likely the cause, but I'd like to know for sure where the root cause is. So, how can I benchmark network throughput (Windows 7 desktop <- Ubuntu server) and hard disk performance to try and identify where my bottleneck might be?
    [Edit] If anyone's interested, the motherboard is an Intel Desktop Board D945GCLF2. So that's a 300-series Atom processor with the Intel 945GC Express chipset.
    [Edit2] I feel like such a fool! I just checked my desktop, and I had the slower of the two onboard NICs plugged in, so the server is probably not at fault here. Transferring a copy of Ubuntu off the server, I get ~35-40MB/s according to Windows 7. I'll do those HD tests when I get a chance though (just for completeness).
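
    For the network half, a crude throughput test needs nothing more than a socket on each end -- a minimal sketch (Python standard library only; run it with the argument "server" on the Ubuntu box and with the server's hostname on the desktop; the port and payload size are arbitrary choices):

      import socket, sys, time

      PORT = 5001
      CHUNK = b"\0" * 65536
      TOTAL = 512 * 1024 * 1024  # 512 MB test payload

      def server():
          with socket.create_server(("", PORT)) as srv:
              conn, _ = srv.accept()
              sent = 0
              while sent < TOTAL:
                  conn.sendall(CHUNK)
                  sent += len(CHUNK)
              conn.close()

      def client(host):
          start = time.perf_counter()
          received = 0
          with socket.create_connection((host, PORT)) as conn:
              while received < TOTAL:
                  data = conn.recv(65536)
                  if not data:
                      break
                  received += len(data)
          secs = time.perf_counter() - start
          print(f"{received / secs / 1e6:.1f} MB/s over {secs:.1f} s")

      if __name__ == "__main__":
          server() if sys.argv[1] == "server" else client(sys.argv[1])

    For the disk half, hdparm -tT against the md device (or the LVM volume) gives a quick sequential read figure, and dd with oflag=direct a rough write figure; if those come in far above 10MB/s, the disks were never the bottleneck.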

    Read the article

  • Simple end-to-end load and bottleneck monitoring for DB-based web sites

    - by T.J. Crowder
    What tools do you use / would you recommend for monitoring a Linux-based, DB-based website's servers for bottlenecks and load? The obvious goal being to know when growth has gotten to the point where it's necessary to scale up (or out) one or more of the bits and pieces, because the current system won't manage the load if an observed trend continues. I'm looking for general recommendations based on standard Linux load metrics, disk I/O metrics, network I/O metrics, etc., but if specifics are helpful: it'll be Tomcat6 using APR (possibly with a Varnish or similar caching and balancing front-end), MySQL, and either Ubuntu 8.04 LTS or 10.04 LTS, depending on timing. I know about top, vmstat, iostat, bwmon and the like that collect and parse info from the /proc file system (et al.); and obviously MySQL provides a lot of queryable performance information. I could use those directly, probably automating periodic monitoring logs with scripts and such, but I have a suspicion that I'd be reinventing a wheel... For example, Hyperic HQ seems to be along the lines of what I'm looking for. Others?
    Meta: I tend to think of "recommendation" questions as needing to be CW because there's no one right answer, but I see a lot of these here that aren't CWs, so I haven't marked it as one. I'll happily do so if enough people think I should.
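
    For what it's worth, the "automating periodic monitoring logs with scripts" route really is only a few lines before a framework becomes attractive -- a sketch (standard library only; the log path is a placeholder of my choosing) that appends load and free-memory samples once a minute:

      import time

      LOG = "/var/tmp/loadlog.csv"   # placeholder log location

      def one_line():
          with open("/proc/loadavg") as f:
              load1, load5, load15 = f.read().split()[:3]
          with open("/proc/meminfo") as f:
              fields = dict(line.split(":", 1) for line in f)
          mem_free_kb = fields["MemFree"].split()[0]
          return f"{time.strftime('%F %T')},{load1},{load5},{load15},{mem_free_kb}"

      with open(LOG, "a") as log:
          while True:
              print(one_line(), file=log, flush=True)
              time.sleep(60)

    The wheel-reinvention suspicion is right, though: the value of tools in the Hyperic HQ class is less the collection than the retention, graphing, and alert thresholds layered on top.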

    Read the article

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times, and I'm just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyperthreaded cores? Is there a big difference in time between i7 and Xeon? Etc., etc. I just need advice from people who've put together kick-a build clusters. We've got a majority of the normal things to speed up builds in place (precompiled headers, ccache, local gigabit connections between them, tons of RAM, etc.), so please just give advice on the best processor to use. Money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay
    EDIT: Although any advice IS welcome, please refrain from "do this first" posts, as we're not planning on skimping on things like SSDs, maxed-out RAM, etc. My personal system is an iMac quad-core i5 with 8GB of RAM. When I build our project locally, my processor floats around 99-100% the majority of the time, which makes me assume it is the bottleneck, even if you made everything else faster. My RAM, on the other hand, doesn't even get close to maxing out. It's also worth noting that I did research this; however, every discussion I could find was primarily about gaming machines, which is obviously a different beast in usage. These machines won't even have monitors or anything but integrated graphics, since they have one purpose: build freakin' fast (hopefully).

    Read the article

  • Intel Atom overheating in ASUS EEE Box 1501P

    - by Sergey L.
    I have had an ASUS EEE Box 1501P for just a little over a year. Of course it breaks 2 months after the warranty runs out. http://www.asus.com/Eee/EeeBox_PC/EeeBox_PC_EB1501P/ I have been using the box as a home media center, running mostly 24/7, often pausing a video overnight. Since last week the fan has been running extremely loud. After some digging I found that the Intel Atom CPU in it is overheating, and the built-in sensor is reporting temperatures way over 105°C. This got me worried, so I took the unit apart, completely vacuumed the heat sink, and oiled the fan, but the unit is still showing the same behaviour. After turning it on and just observing the hardware monitor in the BIOS, the temperature slowly rises from 40°C to over 95°C in approx. 5 min. I am running the newest BIOS and a lightweight Linux OS (OpenELEC with XBMC). Now I am wondering if it could be a faulty heat sensor in the Atom. The recommended running temperature is up to 85°C, but I have not detected any performance hits when running at the above-mentioned 105°C, and there seem to be no software faults. How can an Atom with an attached heat sink and a fan running at full capacity even get this hot in the first place at zero load? Aren't those things designed to generate virtually no heat? Could it be a faulty heat sensor? What shall I try to fix this? I would prefer not to damage the CPU, since it is hard-fused into the motherboard and cannot be replaced. I could remove the heat pipe/heat sink, but it is getting hot, so heat is properly transferring from the CPU to the heat pipe, the fan is running at full capacity and recently oiled, and warm air is making it out of the exhaust.
    Edit: One more note: the north bridge (or whatever it is called nowadays) is on the same heat pipe.
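
    One way to separate "faulty sensor" from "real heat" is to log every thermal zone the kernel exposes and watch how the readings move with load -- a sketch, assuming the sensor shows up under /sys/class/thermal (ACPI usually exposes it on this class of hardware; if nothing is there, the BIOS hardware monitor remains the only window):

      import glob, time

      zones = glob.glob("/sys/class/thermal/thermal_zone*/temp")
      if not zones:
          raise SystemExit("no thermal zones exposed by this kernel")

      while True:
          readings = []
          for path in zones:
              with open(path) as f:
                  # sysfs reports millidegrees Celsius
                  readings.append(f"{int(f.read()) / 1000:.1f}C")
          print(time.strftime("%T"), *readings, flush=True)
          time.sleep(5)

    A reading pinned at one value, or jumping instantly between extremes, points at the sensor; a smooth climb that tracks load points at a real thermal-path problem, such as dried-out paste or a cracked heat-pipe joint.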

    Read the article

  • Load average in htop: how do I decide if the server is still doing OK?

    - by Joe Huang
    I use htop to monitor my web server. It's recently been quite loaded, and the load average is showing something like this:
      Load average: 3.10 2.56 1.63
    I searched the web about these numbers and found an article about them: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages In the article, it says that if I have 2 CPUs, 2.0 means 100% CPU utilization. And my VPS has two CPUs, so what does 3.1 mean? How could it exceed 100% CPU utilization? And from these numbers, does it mean I should be wary about the load now? But the performance seems totally fine, and this is a managed VPS; the hosting company has not sent me any warning about it. During the daytime, the load average always shows these high numbers... here is another snapshot taken while writing:
      Load average: 3.03 2.77 1.97
      Load average: 0.41 1.29 1.60   <---- 5 minutes later
    So I am wondering how much room is left for this site to grow in the current configuration? What kind of proactive actions should I take in advance? I don't want to wait until the server bursts. Thanks.
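
    The linked article's rule generalizes to: divide the load average by the number of logical CPUs; values persistently above 1.0 per CPU mean runnable tasks are queueing. On a 2-CPU box, 3.10 therefore means roughly 1.55 runnable tasks per CPU -- about 55% more demand than the CPUs can serve at that moment, which shows up as latency rather than outright failure. A sketch of the check (standard library only):

      import os

      load1, load5, load15 = os.getloadavg()
      cpus = os.cpu_count()

      for label, value in (("1 min", load1), ("5 min", load5), ("15 min", load15)):
          per_cpu = value / cpus
          status = "over capacity" if per_cpu > 1.0 else "ok"
          print(f"{label}: {value:.2f} ({per_cpu:.2f} per CPU -- {status})")

    One caveat worth remembering: on Linux the load average also counts tasks stuck in uninterruptible disk wait, so a high number with responsive CPUs can point at I/O rather than CPU.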

    Read the article

  • Will Parallel-port dongle work on USB-to-Parallel Adapter?

    - by Gary M. Mugford
    We have a niche program running on a Win2K laptop that uses a security dongle connected to a parallel port for authentication. The laptop is getting creaky, and I spent a frustrating night last night shopping various websites for a new laptop that had a parallel port. Seems I'm about three years late [G]. The question I have is: if I buy a new(ish) laptop and use a USB-to-parallel-port adapter, will the security dongle work? I know I'm not being specific about the app, but it's one most people wouldn't have heard of anyway. I've been guessing the answer to my question is no, since the app won't know to send a request out to the non-existent port. But if the process actually is that the dongle sends a message INTO the computer every now and then, then it might work. And I'm not sure whether the dongle is only needed at program startup or randomly. The dongle is a 'permanent' addition to the old laptop. This is all about the money. We can have a newly updated version of the program (which won't add any features we need) for the princely sum of $2700. Or we can spend $500 on a refurbed laptop still running WinXP, add a $30 adapter, and keep the same solid, stolid performance we've come to appreciate. But it all comes down to the dongle behaviour. Oh, and a dock won't work: the whole laptop issue is about moving about the various nooks and crannies of the building with laptop in hand. Thanks for any suggestions/guidance. GM

    Read the article

  • How to get an ARM CPU clock speed in Linux?

    - by MiKy
    I have an ARM-based embedded machine built around an S3C2416 board. According to the specifications I have available, there should be a 533 MHz ARM9 (ARM926EJ-S according to /proc/cpuinfo); however, the software running on it "feels" slow compared to the same software on my Android phone with a 528MHz ARM CPU. /proc/cpuinfo tells me that BogoMIPS is 266.24. I know that I should not trust BogoMIPS regarding performance ("Bogo" = bogus); however, I would like to get a measurement of the actual CPU speed. On x86, I could use the rdtsc instruction to get the time stamp counter, wait a second (sleep(1)), then read the counter again to get an approximation of the CPU speed, and in my experience this value was close enough to the real CPU speed. How can I find the actual CPU speed of a given ARM processor?
    Update: I found a simple Pi calculator, which I compiled both for my Android phone and the ARM board. The results are as follows:
    S3C2416:
      # cat /proc/cpuinfo
      Processor : ARM926EJ-S rev 5 (v5l)
      BogoMIPS  : 266.24
      Features  : swp half fastmult edsp java
      ...
      # ./pi_arm 10000
      Calculation of PI using FFT and AGM, ver. LG1.1.2-MP1.5.2a.memsave
      ...
      8.50 sec. (real time)
    Android:
      # cat /proc/cpuinfo
      Processor : ARMv6-compatible processor rev 2 (v6l)
      BogoMIPS  : 527.56
      Features  : swp half thumb fastmult edsp java
      # ./pi_android 10000
      Calculation of PI using FFT and AGM, ver. LG1.1.2-MP1.5.2a.memsave
      ...
      5.95 sec. (real time)
    So it seems that the ARM926EJ-S is slower than my Android phone, but not twice as slow, as I would expect from the BogoMIPS figures. I am still unsure about the clock speed of the ARM9 CPU.
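
    On reasonably recent kernels the live clock speed is exported through cpufreq sysfs, which sidesteps BogoMIPS entirely -- a sketch, assuming the S3C2416 kernel has cpufreq support compiled in (embedded kernels often do not, in which case these files simply won't exist):

      BASE = "/sys/devices/system/cpu/cpu0/cpufreq"

      for name in ("cpuinfo_min_freq", "cpuinfo_max_freq", "scaling_cur_freq"):
          try:
              with open(f"{BASE}/{name}") as f:
                  print(f"{name}: {int(f.read()) / 1000:.0f} MHz")  # sysfs reports kHz
          except FileNotFoundError:
              print(f"{name}: not available (no cpufreq support?)")

    Note that the BogoMIPS-per-MHz ratio differs across ARM cores, so the 266 vs. 528 comparison between an ARM926 and an ARMv6 part never was an apples-to-apples clock comparison in the first place.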

    Read the article

  • Windows XP: preference menus hidden for many programs

    - by Jestep
    I don't know when this started happening, but it has been several months since I first noticed it. Basically, when I go to a preferences menu in some programs, the preferences window is completely hidden, but the program sees it as being open. This prevents me from interacting with the preferences and the actual program. So far I've noticed it in Adobe Illustrator (when I select Edit - Preferences - some option) and NetBeans (when I right-click on a site and select Properties). Here's a link to a screenshot taken after I click on the preferences menu (I don't have enough points to post an image yet): http://www.jestep.com/images/screen-2.jpg. Note that the main workspace is grayed out. I have to hit Escape to close the hidden preferences window. I've tried uninstalling, completely wiping the registry of any trace of the program, and reinstalling. I thought it may have been a multi-monitor issue when I switched from 2 monitors down to 1, but the menus were not on the other monitor when I plugged one back in. I've reset workspaces, Windows display and performance settings, changed resolution, tried safe mode, everything I can think of. I cannot figure out what would cause the same problem in completely unrelated software, and I cannot reset it by reinstalling. Any help would be greatly appreciated.

    Read the article

  • Group Policy installation failed error 1274

    - by David Thomas Garcia
    I'm trying to deploy an MSI via Group Policy in Active Directory. But these are the errors I'm getting in the System event log after logging in:
      The assignment of application XStandard from policy install failed. The error was : %%1274
      The removal of the assignment of application XStandard from policy install failed. The error was : %%2
      Failed to apply changes to software installation settings. The installation of software deployed through Group Policy for this user has been delayed until the next logon because the changes must be applied before the user logon. The error was : %%1274
      The Group Policy Client Side Extension Software Installation was unable to apply one or more settings because the changes must be processed before system startup or user logon. The system will wait for Group Policy processing to finish completely before the next startup or logon for this user, and this may result in slow startup and boot performance.
    When I reboot and log in again, I simply get the same messages about needing to perform the update before the next logon. I'm on a Windows Vista 32-bit laptop. I'm rather new to deploying via Group Policy, so what other information would be helpful in determining the issue? I tried a different MSI with the same results. I'm able to install the MSI using the command line and msiexec when logged into the computer, so I know the MSI is working OK at least.

    Read the article

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our choice of host has a 6-drive configuration available with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520 256GB) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6, or 10? This would be the first time the provider has filled all six slots of this rackmount with SSDs, so they're a bit hesitant. I'm wondering if somebody else with this card has done something similar in a production environment. We do about 43GB of writes per day and currently use about 300GB of storage. The server acts as webserver, database, and image store for approx. 1 million files. The plan is to underprovision the SSDs by approximately 10% to 20% to increase their overall lifespan and performance. The fallback option is 2x480GB SSDs in RAID 1 and another 2x1TB HDDs in RAID 1. The motivation behind this is that the server rental cost difference between 2 SSDs and 6 SSDs is minimal compared to the overall cost of the rental. We do not have any special high-IOPS requirements. However, if the configuration is known to work, I don't see a good reason not to use it, and not have to worry about having separate 'fast and small' and 'slow and large' disks.

    Read the article

  • Windows typeperf and pound signs.

    - by Weegee
    I'm currently trying to use typeperf to access some Windows performance counters. Unfortunately, a few of the instances I'm trying to check are of the format service#1. The command
      typeperf "\\server\Process(service#1)\Working Set Peak"
    is unfortunately returning the data for \\server\Process(service)\Working Set Peak rather than the data for the instance service#1. This holds true for any of the services that have pound signs in the counter string. Does anyone know of a method to get around this problem? Sample output:
      I:\>typeperf -s server "\Process(service#1)\Working Set"
      "(PDH-CSV 4.0)","\\server\Process(service)\Working Set"
      "10/08/2009 09:37:29.070","1643274240.000000"
      "10/08/2009 09:37:30.070","1643274240.000000"
      "10/08/2009 09:37:31.070","1643274240.000000"
      The command completed successfully.

      I:\>typeperf -s server "\Process(service#2)\Working Set"
      "(PDH-CSV 4.0)","\\server\Process(service)\Working Set"
      "10/08/2009 09:37:39.273","1643274240.000000"
      "10/08/2009 09:37:40.273","1643274240.000000"
      "10/08/2009 09:37:41.273","1643274240.000000"
      "10/08/2009 09:37:42.273","1643274240.000000"
      "10/08/2009 09:37:43.273","1643274240.000000"
      The command completed successfully.
    I can confirm in PerfMon that the Working Set value "1643274240.000000" is incorrect for both service#1 and service#2. I am running Windows XP Service Pack 2, but a co-worker who is running Windows Server 2003 was having the same troubles.
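
    A possible workaround, since the counter-path parser appears to eat the #N suffix: query all instances with a wildcard and filter the columns yourself. Wildcard instances, \Process(*), are standard PDH syntax that typeperf accepts. A sketch (Python; assumes typeperf is on PATH; the server name and the "service#" filter string are placeholders):

      import csv, subprocess

      # -sc 3: collect three samples; the wildcard pulls every Process
      # instance in one query, so no "#" ever appears in the counter path.
      out = subprocess.run(
          ["typeperf", "-s", "server", r"\Process(*)\Working Set", "-sc", "3"],
          capture_output=True, text=True, check=True,
      ).stdout

      rows = list(csv.reader(out.strip().splitlines()))
      header = rows[0]
      # Keep only the columns for the numbered instances of interest.
      wanted = [i for i, col in enumerate(header) if "service#" in col.lower()]
      for row in rows[1:]:
          if len(row) != len(header):
              continue  # skip status lines like "The command completed successfully."
          print(row[0], [f"{header[i]}={row[i]}" for i in wanted])

    Running typeperf -qx first lists the exact instance names the provider exposes, which confirms whether service#1 and service#2 exist under those names at all.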

    Read the article

  • Least CPU intensive way of streaming your screen on windows?

    - by sinni800
    Hello, sometimes I like capturing my screen for others to see. The only thing is: I am playing games while I do it. I have tried a few streaming solutions, where Windows Media Encoder coupled with my own Windows server appealed to me most, because I can change resolutions, etc. I also tried ustream coupled with the Flash applet and the Adobe Flash Encoder recording a Camtasia source. Camtasia has the disadvantage, though, that it shows the green-and-black alternating borders and cannot be targeted fullscreen. I like how Xfire does it, but it doesn't work with every game; many are simply not supported. A few thoughts about this:
    1. Is there a program which captures like Fraps or Xfire (based on Direct3D and OpenGL outputs) and exposes the output to a DirectShow source filter? Which brings me to:
    2. Is there hardware-accelerated capturing directly from the graphics card? Maybe including direct encoding with help from OpenCL? Modern graphics cards decode Blu-ray content directly, for example. I should have a modern enough graphics processor for this to be possible (see below).
    3. If using Windows Media Encoder: which are the least CPU-intensive settings? Which codec? Is there a newer codec than Windows Media 9? Is it less CPU-intensive? I only have 7, 8 and 9 in the Encoder.
    4. Could the performance be massively increased by having a quad-core CPU (see below)?
    Bandwidth is no problem up to 1000-1500 kbit/s (I have 2048). My computer specs:
      Intel Core 2 Duo E8400
      4 GB DDR2-800 RAM
      ATI Radeon HD5770
      Windows 7 Professional

    Read the article

  • Updated XAMPP with MySQL, all my tables are missing

    - by user371699
    I just updated XAMPP to a newer version, which included updating MySQL from 5.5 to 5.6. Using phpMyAdmin, however, all of my tables within my databases still appear in the left navigation panel, but the main window shows that all my databases are empty (except for information_schema and a couple of other default tables). Clicking on a table in the navigation panel gives me a "table doesn't exist" message. It looks like information_schema.tables doesn't have my tables, either. Can anyone assist me with this? I did make a complete backup of all my databases before the upgrade, but I first want to see if I can fix this the "normal" way. Furthermore, I'm not sure if the MySQL upgrade involved making changes to the information/performance databases, so I don't know if I can restore the old ones. Thank you.
    EDIT: Continuing my searching, I realized that only the InnoDB tables are missing. I've tried running the following, to no avail:
      /opt/lampp/bin $ sudo ./mysql_install_db --basedir=/opt/lampp
    and
      /opt/lampp/bin $ sudo ./mysql_install_db --basedir=/opt/lampp --datadir=/opt/lampp/var/mysql
    The my.cnf file in /opt/lampp/etc contains the following InnoDB settings:
      innodb_data_home_dir = /opt/lampp/var/mysql/
      innodb_data_file_path = ibdata1:10M:autoextend
      innodb_log_group_home_dir = /opt/lampp/var/mysql/
      # You can set .._buffer_pool_size up to 50 - 80 %
      # of RAM but beware of setting memory usage too high
      innodb_buffer_pool_size = 16M
      # Deprecated in 5.6
      #innodb_additional_mem_pool_size = 2M
      # Set .._log_file_size to 25 % of buffer pool size
      innodb_log_file_size = 5M
      innodb_log_buffer_size = 8M
      innodb_flush_log_at_trx_commit = 1
      innodb_lock_wait_timeout = 50
    What could possibly be wrong? Why is information_schema not updating correctly? It looks like /opt/lampp/var/mysql has all my tables in it within the database directories, but they're still not showing up in information_schema.
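
    To see exactly what physically survived the upgrade before attempting any restore, it helps to inventory the table files on disk -- a sketch (standard library only; DATADIR matches innodb_data_home_dir above; this only lists files, it repairs nothing):

      import os
      from collections import defaultdict

      DATADIR = "/opt/lampp/var/mysql"   # innodb_data_home_dir from my.cnf

      tables = defaultdict(set)
      for db in sorted(os.listdir(DATADIR)):
          dbpath = os.path.join(DATADIR, db)
          if not os.path.isdir(dbpath):
              continue
          for name in os.listdir(dbpath):
              base, ext = os.path.splitext(name)
              if ext in (".frm", ".ibd"):
                  tables[db].add(base)

      for db, names in tables.items():
          print(f"{db}: {len(names)} table files")

    If the .frm/.ibd files are all present, the usual suspicion with a 5.5-to-5.6 jump is a fresh ibdata1 that lost the InnoDB data dictionary; restoring the matching ibdata1 and ib_logfile* set from the pre-upgrade backup is then generally a safer route than mysql_install_db, which only initializes a new, empty dictionary.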

    Read the article

  • I need help choosing between two configurations of the Dell Studio 14

    - by Adnan
    There are two configurations of the Dell Studio 14 (1458) which I'm looking at:
    Config 1: Core i7-720QM @ 1.6 GHz; ATI Mobility Radeon HD 5450 1GB; 4GB DDR3 RAM @ 1066 MHz; 500GB SATA HDD @ 7200 RPM; price: $999
    Config 2: Core i5-430M; ATI Mobility Radeon HD 4530 512MB; 4GB DDR3 RAM @ 1066 MHz; 500GB SATA HDD @ 7200 RPM; price: $874
    What I want to know is: would Config 1 still be able to do decent gaming (maybe some StarCraft II), and is there a great performance difference between the i5 and i7 processors? Is the $130 extra worth it for the i7 and better graphics card? I do more than just basic computing. I plan on getting into web design (specifically using Photoshop and Dreamweaver), and I wish to do gaming. I know Config 1 is the better value, but I want to be sure that the $130 more is truly worth it. I don't have too much money and want to spend as wisely as possible, yet I am a computer geek and plan on doing a lot more than the average user.

    Read the article

  • 32 cores (all physical) at 2.2 GHz or 12 cores (6 physical) at 3.0 GHz?

    - by Tejaswi Rana
    I am working on a multithreaded application (a Forex trading app built in C#) and had the client upgrade from a 12-core 3.0 GHz machine (Intel) to a 32-core 2.2 GHz machine (AMD). The PassMark benchmark results were significantly higher when using multiple cores for integer, floating-point, and other calculations, while for single-core calculation it was a bit slower than the pack (others being compared with a similar config to the 12-core one). It also comes with 64 GB RAM (4 times as much as the other one) and a much faster SSD. So after configuring and running the application on that machine, not only did it not perform as well, it was significantly slower. We're talking about 30 seconds to 1 minute slower on an app that usually completes processing within 5-20 secs. The application uses MaxDegreeOfParallelism (TPL), which I've tried setting to the number of cores and also to half of that. I've also tried running single-threaded and without setting any limits on parallel threading. While it may be that the hardware has some issues, I am wondering if the CPU clock speed is the issue. I can overclock to 3.0 GHz. But is that even a good idea?
    Server info - AMD: http://www.passmark.com/forum/showthread.php?4013-AMD-Dual-6272-performance-is-60-lower-than-benchmarks (seems that benchmark was wrong to start with - officially). Intel: i7 3930k. OS (same on both): Windows 7 Professional 64-bit.
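
    The symptoms are consistent with a workload that has a large serial fraction: by Amdahl's law, 32 slower cores only win if the parallel fraction is high enough to outweigh the ~27% single-thread clock deficit. A back-of-the-envelope sketch (pure arithmetic; it deliberately ignores per-core IPC differences, which on that AMD generation reportedly also favor the Intel part):

      def runtime(parallel_fraction, cores, clock_ghz):
          # Relative wall time per Amdahl's law, scaled by clock speed.
          work = (1 - parallel_fraction) + parallel_fraction / cores
          return work / clock_ghz

      for p in (0.5, 0.8, 0.95):
          intel = runtime(p, 12, 3.0)   # 12 logical cores @ 3.0 GHz
          amd = runtime(p, 32, 2.2)     # 32 cores @ 2.2 GHz
          winner = "AMD" if amd < intel else "Intel"
          print(f"parallel fraction {p:.0%}: {winner} faster "
                f"(Intel {intel:.3f} vs AMD {amd:.3f})")

    Under these assumptions the crossover sits above a 90% parallel fraction, so if the app spends much of each run in serial phases, the 3.0 GHz machine wins regardless of how MaxDegreeOfParallelism is tuned.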

    Read the article

  • Windows 7 Loading Very Slow

    - by Adnan
    Hi guys, I've had a problem that only started to occur yesterday. When I boot into Windows 7 and log on to my user account, the computer gets very laggy and slow for at least 5 minutes. Icons take ages to load, and everything is rendered unclickable. This happens for about five minutes after which everything goes back to normal. I tried restarting a few times to see if this is a recurring problem, and it is. I ran a full system scan with Microsoft Security Essentials and found nothing wrong, and I also defragmented the disk to increase performance. However, the problem still exists. Edit: For the past day, I've been trying to install Ubuntu on the same laptop. When installing it on a partition didn't work, I decided to use Wubi. Could this somehow be the problem? Also, my hard drive gets hot a lot, so could the heat be affecting the hard drive and maybe making it defective? Any help on this issue would be greatly, greatly appreciated.

    Read the article

  • Streaming to PS3 with NAS and built-in dlna server?

    - by philt
    With consumer-grade hardware, is it possible to successfully stream 1080p mp4 videos to a PS3? I have a linksys router that can only do 10/100. The PS3 is wired to it with cat5e cable, and the PS3 itself supports gigabit ethernet. I would upgrade the router and get one that supports gigabit ethernet if it could handle streaming like this. It currently does work with minor jerkiness streaming from my mac to the PS3, but fast-forward/reverse and "goto" (equivalent of scene selection) take forever and/or fail completely. And streaming from my mac of course requires the mac to be on at all times. When I put the movies on an external USB drive and connect to the PS3 directly, it performs flawlessly. Fast forward and everything works great. So I was thinking about getting a NAS, but I don't know if any inexpensive NAS (i.e. Buffalo Linkstation Live, WD My Book World Edition, D-Link DNS-321, etc.) can actually deliver the performance necessary to do this, even with gigabit ethernet?

    Read the article

  • Why are my SOCKS proxies slow?

    - by vps_newcomer
    I have a Linux VPS, and I have tried a few SOCKS proxy setups to test their performance (all tests used speedtest.net):
    Standard SSH tunnel proxy: 0.8 Mbit/s download and 0.1-0.2 Mbit/s upload.
    dante-server proxy: 1.3 Mbit/s download and 0.4-0.5 Mbit/s upload.
    I am wondering why these speeds are so slow. Is anything shaping them? Is it just the nature of SOCKS proxies? I know that the SSH tunnel has to do encryption and whatnot, so that is why it's slow, but I was surprised to see that the second setup was also quite slow. On the VPS itself I have seen download speeds of 25 MB/s (that's about 200 Mbit/s) and upload speeds of at least 5 MB/s (I haven't got a good enough pipe to test anything faster). The other option I was going to try is to set up OpenVPN and see how that goes; however, I need to find a good tutorial, as it's fairly complicated to set up. So why is it so slow? How can I test to see where the bottleneck is? How can I make it faster? :D
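
    To separate "the proxy software is slow" from "the path through the proxy is slow", one approach is to time the same plain-HTTP download twice, once direct and once through the SOCKS server -- a sketch using the third-party PySocks package (an assumption; any SOCKS-capable client, e.g. curl --socks5, would serve the same purpose; host, proxy address, and file path are placeholders):

      import socket, time

      import socks  # PySocks -- third-party, an assumption

      TEST_HOST, TEST_PORT = "speedtest.example.com", 80  # placeholder target
      PROXY_HOST, PROXY_PORT = "203.0.113.10", 1080       # placeholder proxy
      REQUEST = b"GET /100MB.bin HTTP/1.0\r\nHost: speedtest.example.com\r\n\r\n"

      def timed_fetch(make_socket):
          s = make_socket()
          s.connect((TEST_HOST, TEST_PORT))
          s.sendall(REQUEST)
          start, received = time.perf_counter(), 0
          while chunk := s.recv(65536):
              received += len(chunk)
          s.close()
          return received / (time.perf_counter() - start) / 1e6  # MB/s

      def proxied_socket():
          s = socks.socksocket()
          s.set_proxy(socks.SOCKS5, PROXY_HOST, PROXY_PORT)
          return s

      print(f"direct : {timed_fetch(socket.socket):.2f} MB/s")
      print(f"proxied: {timed_fetch(proxied_socket):.2f} MB/s")

    Running the client from the VPS itself versus from home also splits the question: if the proxy is fast locally but slow from home, the bottleneck is the home link or the path to the VPS, not dante or sshd.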

    Read the article
