Search Results

Search found 19055 results on 763 pages for 'high performance'.


  • Why won't Windows use the other CPU cores?

    - by revloc02
    In Windows Task Manager the Performance tab shows the first CPU maxed out while the other 7 just idle along with the occasional spike. What gives? More info: I've got 8GB of RAM and only 4.5GB is in use. The Processes tab shows no sign of any process hogging processing power; in fact, System Idle Process is at 98-99. When I'm programming and have 8 to 12 applications open (several of them unrelated to programming, of course) my computer slows to a crawl. System info: Intel Core i7-2600K processor (quad-core with hyper-threading), 8GB RAM, Intel BOXDZ68BC LGA 1155 motherboard, 500GB HDD
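
    A quick way to confirm whether the load really is pinned to one core, and whether any process is running with a restricted CPU affinity mask, is a short diagnostic script. A minimal sketch, assuming the third-party psutil package is available; nothing here is specific to the machine above:

        import psutil

        # Per-core utilization, sampled over one second.
        for core, pct in enumerate(psutil.cpu_percent(interval=1, percpu=True)):
            print(f"core {core}: {pct:.0f}%")

        # A process whose affinity mask excludes cores is one way work
        # can pile up on CPU 0 while the other cores sit idle.
        all_cores = set(range(psutil.cpu_count()))
        for proc in psutil.process_iter(["pid", "name"]):
            try:
                affinity = set(proc.cpu_affinity())
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                continue
            if affinity != all_cores:
                print(proc.info["pid"], proc.info["name"], "restricted to", sorted(affinity))

    If everything shows the full mask, the cause is more likely a single-threaded bottleneck in one application than an affinity problem.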

    Read the article

  • mysql is not using multiple cpus

    - by mhost
    Our MySQL server has been using a lot of CPU lately (it's reached 100% several times and stays there for a while) and I noticed that the CPU load is all on one core of one CPU. I was hoping to spread that out to all 4 cores on my server. I have been tweaking the MySQL settings to use more RAM and less CPU, but it still occasionally reaches very high CPU usage. Everything on the topic seems to refer to thread_concurrency (which I've read is a Solaris-only setting). What can I do in Linux? Thanks.
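
    Worth noting: MySQL runs each connection on a single thread, so one expensive statement will peg exactly one core no matter how many are available; spreading load across cores requires multiple concurrent queries. A first step is to catch the statement that is burning the CPU. A minimal sketch, assuming the mysql command-line client is on the PATH and credentials come from elsewhere (e.g. ~/.my.cnf); the 5-second threshold is arbitrary:

        import subprocess
        import time

        QUERY = ("SELECT id, time, state, info FROM information_schema.processlist "
                 "WHERE command <> 'Sleep' AND time > 5")

        # Poll the server and print any statement that has been running
        # for more than a few seconds, the likely single-core hog.
        while True:
            out = subprocess.run(
                ["mysql", "--batch", "-e", QUERY],
                capture_output=True, text=True, check=True,
            ).stdout
            if out.strip():
                print(out)
            time.sleep(2)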

    Read the article

  • Any ramifications to disabling ClearPageFile

    - by user34710
    I am running a Dell XPS 15z with a quad-core i7, 8GB RAM, a 7200rpm HDD, and Windows 7 Ultimate. My laptop was taking around 3 minutes to shut down, which seemed far too long to me. I did a bit of investigation and found that the ClearPageFile registry entry was set to 1, meaning that upon shutdown my pagefile.sys was being wiped before the machine actually shut down. With this setting disabled, my laptop shuts down in about 20 seconds - clearly a big gain. My question is: are there any side effects of having this turned off (other than the security risk that, if the HDD is stolen, any important information that was in RAM and paged out is still up for grabs)?
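
    For reference, the setting lives in the registry and can be inspected or changed without hunting through Group Policy. A minimal sketch using Python's standard winreg module; the key path is the documented location of this value:

        import winreg

        KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, "ClearPageFileAtShutdown")
            print("ClearPageFileAtShutdown =", value)  # 1 = wipe page file at shutdown

        # To disable the wipe (run from an elevated prompt):
        # with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
        #                     winreg.KEY_SET_VALUE) as key:
        #     winreg.SetValueEx(key, "ClearPageFileAtShutdown", 0,
        #                       winreg.REG_DWORD, 0)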

    Read the article

  • Enterprise Level Monitoring Solution

    - by Garthmeister J.
    My company is currently looking to replace our current solution for monitoring our web-based enterprise products, covering both uptime and performance. Please note this is not intended to be a network-monitoring solution (internally we currently use Nagios). If anyone has a provider they have had a positive experience with, it would be much appreciated. Here is a list of our requirements:

      • Must have a large number of probes/agents around the globe, to be representative of our customer base
      • Must have a flexible scripting capability to automate multi-step user actions (a homegrown sketch of such a check appears below)
      • 24-hour-a-day monitoring
      • Flexible alerting system
      • Report generation capability
      • Mimic browser-specific monitoring (optional, not a must-have)
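
    The "scripted multi-step user actions" requirement is easy to prototype in-house while evaluating vendors. A minimal sketch using only the Python standard library; the URLs, step names, and latency budget are placeholders:

        import time
        import urllib.request

        # A scripted "user journey": each step is a name and a URL that
        # must return HTTP 200 within the latency budget.
        STEPS = [
            ("home", "https://example.com/"),
            ("login page", "https://example.com/login"),
            ("search", "https://example.com/search?q=test"),
        ]
        BUDGET_SECONDS = 2.0

        for name, url in STEPS:
            start = time.perf_counter()
            with urllib.request.urlopen(url, timeout=10) as resp:
                resp.read()
                elapsed = time.perf_counter() - start
                status = resp.status
            ok = status == 200 and elapsed <= BUDGET_SECONDS
            print(f"{name}: {status} in {elapsed:.2f}s {'OK' if ok else 'ALERT'}")

    Running this from a handful of geographically spread machines gives a rough preview of what a commercial probe network would report.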

    Read the article

  • Poor TCP loopback throughput on Windows

    - by Yodan Tauber
    I measured the throughput of a locally bound TCP socket connection on my computer (Intel Q9550, 64 GB RAM, Windows XP 64 bit) using iperf. I got dissatisfying results (around 1.6 Gbit/s) each time, no matter how I tweaked the TCP settings (buffer length, window size, max segment size, no delay). I got similar results when I tried netperf. Now, I understand (from sources like these) that the average throughput of a loopback connection should be around 5 Gbit/s. What could be the reasons for such poor performance?
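
    One way to take iperf and netperf out of the equation is a bare-bones loopback test: one thread blasting a fixed buffer at a receiver over 127.0.0.1. A minimal sketch; note that Python adds its own overhead, so treat the absolute number with caution and use it only to compare settings against each other:

        import socket
        import threading
        import time

        PORT = 50007
        CHUNK = b"\0" * (256 * 1024)
        DURATION = 5.0

        def receiver(srv):
            conn, _ = srv.accept()
            total = 0
            start = time.perf_counter()
            while True:
                data = conn.recv(1 << 20)
                if not data:
                    break
                total += len(data)
            elapsed = time.perf_counter() - start
            print(f"{total * 8 / elapsed / 1e9:.2f} Gbit/s")

        srv = socket.socket()
        srv.bind(("127.0.0.1", PORT))
        srv.listen(1)
        threading.Thread(target=receiver, args=(srv,), daemon=True).start()

        cli = socket.create_connection(("127.0.0.1", PORT))
        deadline = time.perf_counter() + DURATION
        while time.perf_counter() < deadline:
            cli.sendall(CHUNK)
        cli.close()
        time.sleep(1)  # let the receiver drain and print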

    Read the article

  • How do I stop my IIS App Pool making a request to wpad.mydomain.com?

    - by Programming Hero
    As part of some performance troubleshooting, I've monitored the slow startup of a "cold" App Pool (one without an active worker process) in IIS. When using a built-in account, the App Pool starts in sub-second time. When using a custom local account the App Pool takes 30+ seconds to start processing requests. The service appears to be making requests to wpad.mydomain.com, an address it does not have access to, which causes it to wait 30 seconds for a response before eventually timing out. As a workaround, I've added the hostname to the server's hosts file, to direct the traffic to the local machine, which returns much faster (1-2 seconds). What do I need to do to stop IIS making this request when this identity is used for the App Pool?
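
    The 30-second stall is consistent with proxy auto-detection: components running under the custom account attempt to resolve wpad.<domain> and block until the lookup or fetch times out, which is why the hosts-file entry helps. A quick check from the affected server is to time the resolution directly. A minimal sketch; the hostname is the one from the question:

        import socket
        import time

        host = "wpad.mydomain.com"
        start = time.perf_counter()
        try:
            addr = socket.gethostbyname(host)
            print(f"{host} -> {addr} in {time.perf_counter() - start:.2f}s")
        except socket.gaierror as exc:
            print(f"{host} failed after {time.perf_counter() - start:.2f}s: {exc}")

    If the lookup is fast but the pool still stalls, the next place to look is the custom account's proxy configuration ("Automatically detect settings") rather than DNS.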

    Read the article

  • Specific apache + mysql settings for a light-weight site

    - by Good Person
    I have a small website with a Joomla and a Moodle set up, and it seems that both of these are very slow. The server (CentOS release 5.5 (Final)) is a virtual dedicated server with about 2GB of RAM. I don't expect to ever get more than 10-15 people on at the same time (and even that is high). What settings could I change in Apache, MySQL, or even the OS to increase the performance of my site? I'm not concerned about running out of resources if I get too many visitors. If you need more specific data, leave a comment and I'll edit the question.
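
    A common first step on a 2GB box is capping Apache's MaxClients so the worst case (every slot busy) still fits in RAM after MySQL and the OS take their share; the usual rule of thumb is MaxClients ≈ usable RAM / average worker size. A minimal sketch that estimates both numbers on Linux; the 512MB reserve for MySQL and the OS is an assumption to adjust:

        import subprocess

        RESERVED_MB = 512  # assumed headroom for MySQL, the OS, and caches

        # Resident set sizes (KB) of all Apache workers.
        out = subprocess.run(
            ["ps", "-C", "httpd", "-o", "rss="],
            capture_output=True, text=True,
        ).stdout.split()
        sizes_kb = [int(s) for s in out]

        if sizes_kb:
            avg_mb = sum(sizes_kb) / len(sizes_kb) / 1024
            with open("/proc/meminfo") as f:
                total_kb = int(next(line for line in f
                                    if line.startswith("MemTotal")).split()[1])
            usable_mb = total_kb / 1024 - RESERVED_MB
            print(f"avg worker: {avg_mb:.0f} MB, "
                  f"suggested MaxClients ~ {int(usable_mb / avg_mb)}")
        else:
            print("no httpd processes found")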

    Read the article

  • Is virtual machine slower than the underlying physical machine?

    - by Michal Illich
    This question is quite general, but most specifically I'm interested in knowing whether a virtual machine running Ubuntu Enterprise Cloud will be any slower than the same physical machine without any virtualization. How much (1%, 5%, 10%)? Has anyone measured the performance difference of a web server or DB server (virtual vs. physical)? If it depends on configuration, let's imagine two quad-core processors, 12 GB of memory and a bunch of SSD disks, running 64-bit Ubuntu enterprise server, with just one virtual machine on top allowed to use all available resources.
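
    The honest answer is usually "measure your own workload": CPU-bound code typically loses only a few percent under hardware-assisted virtualization, while disk and network I/O can lose much more, so a single number is misleading. A rough sketch of a harness to run unchanged on both the physical host and the VM; the sizes are arbitrary and this is nowhere near a rigorous benchmark:

        import os
        import time

        def cpu_seconds(n=10_000_000):
            # Fixed amount of pure computation; lower is better.
            start = time.perf_counter()
            total = 0
            for i in range(n):
                total += i * i
            return time.perf_counter() - start

        def disk_mb_per_s(path="bench.tmp", size_mb=256):
            # Sequential write throughput with a real fsync at the end.
            buf = os.urandom(1024 * 1024)
            start = time.perf_counter()
            with open(path, "wb") as f:
                for _ in range(size_mb):
                    f.write(buf)
                f.flush()
                os.fsync(f.fileno())
            elapsed = time.perf_counter() - start
            os.remove(path)
            return size_mb / elapsed

        print(f"cpu: {cpu_seconds():.2f}s, disk: {disk_mb_per_s():.0f} MB/s")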

    Read the article

  • How to choose NoSQL database engine?

    - by Poma
    We have a database with the following specs:

      • 30k records, 7MB in size
      • 20 inserts/second
      • 1000 updates/second
      • 1000 range selects/second, by secondary index, approx 10 rows each
      • needs at least one secondary index
      • needs some mechanism to expire keys if they are not updated for 75 secs (can be done via a programmatic garbage collector, but that requires an additional 'last_update' index and adds some load)
      • consistency is not required
      • durability is not required
      • the DB should be stored in memory

    For now we use Redis, but it does not have secondary indexes, and scanning keys with index:foo:* is too slow. Membase also does not have secondary indexes (as far as I know). MongoDB and the MySQL MEMORY engine have table-level locks. What engine will fit our use case?
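
    For what it's worth, both requirements that rule Redis out here can be emulated with sorted sets: one ZSET serves as the secondary index, another records last-update times, and a periodic sweep deletes anything older than 75 seconds. A minimal sketch assuming the redis-py client; key names are illustrative:

        import time
        import redis

        r = redis.Redis()

        def upsert(record_id, indexed_value, payload):
            now = time.time()
            pipe = r.pipeline()
            pipe.hset(f"rec:{record_id}", mapping=payload)
            pipe.zadd("idx:value", {record_id: indexed_value})   # secondary index
            pipe.zadd("idx:last_update", {record_id: now})       # expiry bookkeeping
            pipe.execute()

        def range_select(lo, hi):
            # Range query on the secondary index, ~10 ids per call.
            return r.zrangebyscore("idx:value", lo, hi)

        def expire_stale(max_age=75.0):
            cutoff = time.time() - max_age
            for record_id in r.zrangebyscore("idx:last_update", 0, cutoff):
                pipe = r.pipeline()
                pipe.delete(b"rec:" + record_id)
                pipe.zrem("idx:value", record_id)
                pipe.zrem("idx:last_update", record_id)
                pipe.execute()

    Whether the extra round trips fit the 1000-updates/second budget is something to measure before committing.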

    Read the article

  • How fast can a Windows 2008 CIFS client write to SAMBA server using 10Gb NIC?

    - by one_bsd_guy
    We are experiencing a performance problem using the Windows 2008 CIFS client. We have a FreeNAS server that delivers 1.3GB/s on ZFS writes, and a 10Gb network connecting the NAS server and the CIFS clients. Using two Linux CIFS clients, we can get around 1.2GB/s, but Windows 2008 clients can only give us 400MB/s. Is that the best a Windows 2008 client can deliver, or do we have a poorly configured Windows client? Much appreciated.

    Read the article

  • Relating ping to perceived browser GUI response

    - by cvsdave
    We periodically get complaints of poor GUI (browser page) response that we need to explore. I am looking for a quick and cheap first check to see if the issue is network latency, or server performance. Has anyone encountered any discussion of ping time and perceived GUI response? I understand that GUI response is complicated, but it would be nice if we could find or develop a rule of thumb along the lines of "Hmmmm, ping is over 200, it might be network problems". Ideally, this lives in a script on the user's machine so that we can see the latency that they are seeing... (BASH, Linux). A reference to a good discussion page would be a fine answer, as would any recommendation of other source material.
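
    The question calls for Bash, but the probe itself is trivial in any scripting language: send a handful of pings, parse the average RTT from the summary line, and apply the cutoff. A minimal Python sketch (easy to transliterate to Bash with ping and awk); the 200 ms threshold is the rule-of-thumb placeholder from the question, and keep in mind ping measures only network round-trip, not rendering time:

        import re
        import subprocess

        THRESHOLD_MS = 200  # placeholder rule-of-thumb cutoff
        HOST = "appserver.example.com"

        # Parse the average RTT from ping's summary line, which looks like:
        #   rtt min/avg/max/mdev = 0.045/0.049/0.053/0.003 ms
        out = subprocess.run(
            ["ping", "-c", "5", "-q", HOST],
            capture_output=True, text=True,
        ).stdout
        match = re.search(r"= [\d.]+/([\d.]+)/", out)
        if match:
            avg_ms = float(match.group(1))
            verdict = "network suspect" if avg_ms > THRESHOLD_MS else "network looks fine"
            print(f"avg rtt {avg_ms:.1f} ms -> {verdict}")
        else:
            print("ping failed or produced no summary")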

    Read the article

  • Why are browsers so heavy?

    - by Kaivosukeltaja
    Back in 1998 I had a computer with a 233MHz Pentium MMX CPU and a GFX card with no 3D acceleration. It was able to run games like Quake II at a decent FPS rate. My current computer has tons more performance and a mid-class GPU, yet struggles to reach 20 FPS when rendering a single model inside a skybox with WebGL. Even regular pages with lots of 2D CSS animations bring many modern computers to their metaphorical knees. As a web developer I understand there's a lot going on in a web page, but not what makes it that heavy. Modern browsers compile JavaScript to native machine code before running it, and rendering into a canvas element shouldn't trigger DOM rebuilds, so theoretically it should be a lot faster than it is. What am I missing here, and is it possible to avoid or minimize whatever is making the browsers slow, in order to build more efficient websites?

    Read the article

  • Monitoring disk block access in Linux

    - by VoidPointer
    Is there a way to gather statistics about blocks being accessed on a disk? I have a scenario where a task is both memory and I/O intensive and I need to find a good balance as to how much of the available RAM I can assign to the process and how much I should leave for the system for building its I/O cache for the block device being used. I suspect that most of the I/O that is currently happening is accessing a rather small subset of the device and that performance could be optimized by increasing the RAM that is available for I/O buffering. Ideally, I would be able to create something like a "heat-map" that shows me which parts of the disk are accessed most of the time.
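
    On Linux, blktrace captures exactly this: every block-layer event with its sector number. A coarse histogram over sectors is then effectively the heat-map described. A minimal sketch that post-processes text output produced by, e.g., blktrace -d /dev/sdb -o trace followed by blkparse -i trace > trace.txt; the bucket size and scaling are arbitrary:

        from collections import Counter

        BUCKET_SECTORS = 2048 * 1024  # ~1 GiB buckets at 512-byte sectors

        heat = Counter()
        with open("trace.txt") as f:
            for line in f:
                # blkparse lines carry "<sector> + <nblocks>", e.g.:
                #   8,16  1  142  0.030415  697  C  W  223490 + 8 [0]
                fields = line.split()
                if "+" in fields:
                    i = fields.index("+")
                    try:
                        sector = int(fields[i - 1])
                    except (ValueError, IndexError):
                        continue
                    heat[sector // BUCKET_SECTORS] += 1

        for bucket, hits in sorted(heat.items()):
            bar = "#" * min(hits // 100 + 1, 60)
            print(f"{bucket * BUCKET_SECTORS:>15d}  {bar}")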

    Read the article

  • What are the ways to build a failover cluster?

    - by light
    I have a task where I need to build a failover cluster in two cases: first with servers on Red Hat Enterprise 5.1, and second with SUSE Linux Enterprise 11 SP1. Both cases have a SAN. I know there are many ways to build a failover cluster, but I can't find out more, so I need the following: What are the ways to build it? I know only virtualization. Any good book or resource to broaden my mind? I'll be glad to hear any suggestion. Thanks! EDIT #1: Failover of servers with a business application on them. EDIT #2: It would be great to hear a summary of solutions with SLES servers. EDIT #3: So if I understand correctly, in my cases the main ways are to use internal solutions or virtualization. So now I have additional questions: Do the manufacturers of blades, for example HP or IBM, provide some solution (without virtualization)? Do I need an additional server to control the "heartbeat" between the main and redundant servers? (Virtualization) For example, I have several physical servers with VMs. Do I need an additional server to monitor the availability of the VMs and to move VMs to another physical server if their physical server fails? Sorry for my poor English. EDIT #4: Failover of a VM, or of the OS on a physical server. In both cases a SAN will be used; it's not specified, but I think with a file system image on it. I have started to think that my question is poorly framed and I need to remake it.

    Read the article

  • Computers with Small Capacity SSD - For caching?

    - by RXC
    Recently, in newsletters from websites, I have been seeing computers for sale that include both an HDD and an SSD, but the SSD has a small capacity like 24GB. I don't know if this still holds true, but I learned that when building a computer you want to install your OS on your fastest drive. I do a lot of PC gaming, so I install my OS and games on my SSD, because games and many applications make lots of system calls to the OS, and performance can only be as fast as the slowest piece. Why do these computers come with small-capacity SSDs? Most OSes take up around 20 to 30GB of space, so what is the benefit of such a small SSD? Are these small SSDs meant for caching? And what exactly does caching mean here (what does it do and how does it help)?

    Read the article

  • HD video is slower than audio output

    - by Star
    I have an HD video file (1920x1080 H.264 DUAL AUDIO FLAC):

      • file type: MKV
      • file size: 1.25 GB
      • file length: 24 minutes

    The problem is that the video output is not synchronized with the audio output: sometimes the video lags too far behind, sometimes it runs too fast. I tried running it in Windows Media Player, Media Player Classic, and a few other players, but the result is the same. Additional info: the device is an LG S510 laptop.

    Read the article

  • How does Google manage to serve results so fast?

    - by Quintin Par
    I am building autocomplete functionality for my site, and the Google instant results are my benchmark. When I look at Google, the 50-60 ms response time baffles me. It looks insane. In comparison, here's how mine looks. To give you an idea, my results are cached on the load balancer and served from a machine that has httpd slowstart and initcwnd fixed. My site is also behind Cloudflare. From a server-side perspective I don't think I can do anything more. Can someone help me take this 500 ms response time down to 60 ms? What more should I be doing to achieve Google-level performance?
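
    Before chasing Google, it helps to see where the 500 ms actually goes: DNS, TCP connect, TLS, server think time, or transfer. curl exposes each phase directly. A minimal sketch shelling out to curl; the URL is a placeholder:

        import subprocess

        URL = "https://example.com/autocomplete?q=te"
        FMT = ("dns=%{time_namelookup} connect=%{time_connect} "
               "tls=%{time_appconnect} ttfb=%{time_starttransfer} "
               "total=%{time_total}\n")

        out = subprocess.run(
            ["curl", "-s", "-o", "/dev/null", "-w", FMT, URL],
            capture_output=True, text=True,
        ).stdout
        print(out)
        # A large gap between connect and ttfb points at server-side work;
        # a large dns or connect share points at the network path instead.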

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence), but this is not true if you have fewer than 6 updates to the temporary table. This can lead to poor performance of queries which are sensitive to the contents of temporary tables.

    I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, which was part of the temporary object caching feature introduced in SQL Server 2005 and also present in SQL Server 2008 and SQL Server 2012. In this customer's case we implemented a workaround to avoid the issue (see below for example workarounds).

    When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal.

    We can work around this issue by disabling temporary table caching by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index.

    I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution is to have more aggressive statistics updates when the temporary table has a small number of rows and temporary table caching is used. I will open a Connect item to report this issue.

    Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter: SQL Server: General Statistics -> Active Temp Tables.
    The script to understand the issue and the workaround is listed below:

        set nocount on
        set statistics time off
        set statistics io off
        drop table tab7
        go
        create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
        go
        create index test on tab7(c2, c1, c3)
        go
        begin tran
        declare @i int
        set @i = 1
        while @i <= 50000
        begin
        insert into tab7 values (@i, 1, 'a')
        set @i = @i + 1
        end
        commit tran
        go
        insert into tab7 values (50001, 1, 'a')
        go
        checkpoint
        go
        drop proc test_slow
        go
        create proc test_slow @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_slow 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        exec test_slow 2
        go
        drop proc test_with_recompile
        go
        create proc test_with_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        --low reads on 3rd execution as expected for parameter '2'
        exec test_with_recompile 2
        go
        drop proc test_with_alter_table_recompile
        go
        create proc test_with_alter_table_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create a constraint
        --but this might lead to duplicate constraint name error on concurrent usage
        alter table #temp1 add constraint test123 unique(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_alter_table_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_alter_table_recompile 2
        go
        drop proc test_with_index_recompile
        go
        create proc test_with_index_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create an index
        create index test on #temp1(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        set statistics time on
        set statistics io on
        dbcc dropcleanbuffers
        go
        --high reads as expected for parameter '1'
        exec test_with_index_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_index_recompile 2
        go

    Read the article

  • Why can Indian link builders or SEO companies make so many high-quality links at the same time? [closed]

    - by chiba
    There are a lot of Indian SEO companies and link builders that offer a lot of high-quality links. Some of them, for example, offer links just from "co.uk" domains or French sites with high PageRank. I have heard that even SEO companies from other countries outsource link building to India. Do they have special connections for building links? Or do they exchange information with other Indian companies and share a big database of sites where they can place links?

    Read the article

  • Windows system monitoring and profiling

    - by Aris
    I have several dozen 64-bit Windows 2003 servers in a high performance environment with very bursty system utilization. I am looking for a tool (or tools) to monitor and analyze system performance (eg, CPU utilization, bandwidth, etc). This tool can either query servers from a central location (SNMP) or require installation of a component on each server. It should poll on a 1-second interval. It should be able to generate pretty graphs which show trends over time. As a nice-to-have, it should be able to send out emails or IMs when certain thresholds are exceeded. The tools I have investigated so far, including Solarwinds and PRTG, are not designed to poll this frequently. They seem to be designed for a ~30-second interval. Solarwinds wouldn't go lower than 3-sec, and PRTG chokes at 1-sec. They both default to 1-min. All three tools seem more focused on outage monitoring and reporting than metric collection. Given the bursty nature of our applications, such infrequent polling would result in a very inaccurate picture of performance. I am considering rolling my own solution using perfmon. This would be a lot of work, and seems like reinventing the wheel. Are there any tools out there that meet my needs?
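
    For the roll-your-own route, the perfmon counters are already scriptable: typeperf ships with Windows and will sample at a 1-second interval straight to CSV, which sidesteps the polling floor of the commercial tools. A minimal sketch wrapping it; the counter list and output path are illustrative:

        import subprocess

        COUNTERS = [
            r"\Processor(_Total)\% Processor Time",
            r"\Memory\Available MBytes",
            r"\Network Interface(*)\Bytes Total/sec",
        ]

        # -si 1    -> 1-second sample interval
        # -sc 3600 -> collect one hour of samples, then exit
        subprocess.run(
            ["typeperf", *COUNTERS,
             "-si", "1", "-sc", "3600", "-f", "CSV", "-o", "metrics.csv"],
            check=True,
        )

    Shipping the CSVs from each server to a central box for graphing and threshold alerts is then a separate, simpler problem.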

    Read the article

  • How to modernize an enormous legacy database?

    - by smayers81
    I have a question, just looking for suggestions here. My project is 'modernizing' a desktop application by converting it to the web, with an ICEFaces UI and a server side written in Java. However, they are keeping the same Oracle database, which at current count has about 700-900 tables and probably a billion total records. Some individual tables have 250 million rows; many have over 25 million. Needless to say, the database is not scaling well. As a result, the performance of the application is looking to be abysmal. The architects / decision-makers-that-be have all either refused or are unwilling to restructure the persistence. So, basically we are putting a fresh coat of paint on a functional desktop application that currently serves most user needs, and does so with relative ease and quick performance. I am having trouble sleeping at night thinking of how poorly this application is going to perform and how difficult it is going to be for everyday users to do their jobs. So, my question is: what options do I have to mitigate this impending disaster? Is there some type of intermediate layer I can put between the database and the Java code to speed up performance while keeping the database structure intact? Caching is obviously an option, but I don't see that as a cure-all. Is it possible to layer a NoSQL DB in between, or something?
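
    The "intermediate layer" usually means cache-aside (read-through) caching: check a fast in-memory store first and fall back to Oracle on a miss. Sketched here in Python for brevity (the real thing would sit in the Java tier, e.g. behind the DAO layer, or use an off-the-shelf cache); every name is a placeholder:

        import time

        class CacheAside:
            """Read-through cache with a TTL, in front of a slow query function."""

            def __init__(self, fetch_fn, ttl_seconds=60):
                self.fetch_fn = fetch_fn          # hits the real database
                self.ttl = ttl_seconds
                self._store = {}                  # key -> (expires_at, value)

            def get(self, key):
                entry = self._store.get(key)
                if entry and entry[0] > time.time():
                    return entry[1]               # cache hit
                value = self.fetch_fn(key)        # cache miss: go to the database
                self._store[key] = (time.time() + self.ttl, value)
                return value

        # Usage: wrap the expensive lookup once, then read through the cache.
        # customers = CacheAside(lambda cid: run_oracle_query(cid), ttl_seconds=300)
        # row = customers.get(42)

    As the question suspects, this is no cure-all: it helps read-heavy screens with repeated lookups and does nothing for writes or rarely repeated queries.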

    Read the article

  • Preloading Winforms

    - by msarchet
    I am currently working on a project where we have a couple of very control-heavy user controls that are used inside an MDI controller. This is a line-of-business app and it is very data-driven. The problem we were facing was that the aforementioned controls would load very, very slowly. We dipped our toes into the waters of multi-threading for the control loading, but that was not a solution for a plethora of reasons. Our solution to increasing the performance of the controls ended up being to 'pre-load' the forms onto a hidden window, keep a stack of the existing forms, and pop one off the stack as the user requests a form. Now, the issue I can see arising as we push this 'fix' out to our testers, and ultimately our users, is this: currently the 'hidden' window that contains the preloaded forms is visible in Task Manager and can be shut down, causing all of the preloaded controls to be lost; you then have to create them on the fly, losing the performance increase. Secondly, when the user uses up the stack we also lose the performance increase (our current solution to this is discussed below). For the first problem, is there a way to hide this window from Task Manager, perhaps by creating a parent form that encapsulates both the main form of the program and the hidden form? Our current solution to the second problem is an inactivity timer that, when it fires, checks the stack and loads a new form onto it if it isn't full. However, this still has the potential of causing a hang in the UI while it creates the forms. A possible solution would be to put 'used' forms back onto the stack, but I feel like there may be a better way.

    Read the article

  • SQL Server Multi-statement UDF - way to store data temporarily required

    - by Kharlos Dominguez
    Hello, I have a relatively complex query, with several self-joins, which works on a rather large table. For that query to perform faster, I need to work with only a subset of the data. Said subset can range between 12,000 and 120,000 rows, depending on the parameters passed. More details can be found here: http://stackoverflow.com/questions/3054843/sql-server-cte-referred-in-self-joins-slow As you can see, I was using a CTE to return the data subset before, which caused performance problems: SQL Server re-ran the CTE's select statement for every join instead of running it once and reusing its result set. The alternative, temporary tables, worked much faster (while testing the query in a separate window outside the UDF body). However, when I tried to implement this in a multi-statement UDF, I was harshly reminded by SQL Server that multi-statement UDFs do not support temporary tables for some reason... UDFs do allow table variables, so I tried that, but the performance is absolutely horrible: it takes 1m40s for my query to complete, whereas the CTE version took only 40 seconds. I believe table variables are slow for the reasons listed in this thread: http://stackoverflow.com/questions/1643687/table-variable-poor-performance-on-insert-in-sql-server-stored-procedure The temporary table version takes around 1 second, but I can't make it into a function due to the SQL Server restrictions, and I have to return a table back to the caller. Considering that CTEs and table variables are both too slow, and that temporary tables are rejected in UDFs, what are my options for making my UDF perform quickly? Thanks a lot in advance.

    Read the article

  • How can I apply a PSSM efficiently?

    - by flies
    I am fitting position-specific scoring matrices (PSSMs, aka position-specific weight matrices). The fit I'm using is like simulated annealing: I perturb the PSSM, compare the prediction to experiment, and accept the change if it improves agreement. This means I apply the PSSM millions of times per fit; performance is critical. In my particular problem, I'm applying a PSSM for an object of length L (~8 bp) at every position of a DNA sequence of length M (~30 bp) (so there are M-L+1 valid positions). I need an efficient algorithm to apply a PSSM. Can anyone help improve performance? My best idea is to convert the DNA into some kind of matrix so that applying the PSSM is matrix multiplication. There are efficient linear algebra libraries out there (e.g. BLAS), but I'm not sure how best to turn an M-length DNA sequence into an M x 4 matrix and then apply the PSSM at each position. The solution needs to work for higher-order/dinucleotide terms in the PSSM - presumably this means representing the sequence matrix separately for mononucleotides and for dinucleotides. My current solution iterates over each position m, then over each letter in the word from m to m+L-1, adding the corresponding term in the matrix. I'm storing the matrix as a multi-dimensional STL vector, and profiling has revealed that a lot of the computation time goes to just accessing the elements of the PSSM (with similar performance bottlenecks accessing the DNA sequence). If someone has an idea besides matrix multiplication, I'm all ears.
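
    Since L is tiny and the PSSM is applied millions of times, the main wins come from encoding the sequence as integer indices once and keeping the PSSM in one contiguous array, so the hot loop becomes pure array arithmetic instead of nested container lookups. A minimal sketch of the matrix-style approach in Python/NumPy (the same flat layout transfers directly to C++ arrays); names are illustrative:

        import numpy as np

        BASE_INDEX = {"A": 0, "C": 1, "G": 2, "T": 3}

        def score_all_positions(pssm, seq):
            """Score a PSSM of shape (L, 4) at all M-L+1 offsets of seq.

            scores[m] = sum over l of pssm[l, base(seq[m + l])]
            """
            idx = np.fromiter((BASE_INDEX[b] for b in seq), dtype=np.intp,
                              count=len(seq))
            L = pssm.shape[0]
            n = len(seq) - L + 1
            scores = np.zeros(n)
            # L ~ 8, so loop over motif offsets and let vectorized fancy
            # indexing sweep the whole sequence at each offset.
            for l in range(L):
                scores += pssm[l, idx[l:l + n]]
            return scores

    Dinucleotide terms fit the same pattern: build pair indices as 4 * idx[:-1] + idx[1:] and accumulate from an (L-1) x 16 matrix in the same loop.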

    Read the article
