Search Results

Search found 19055 results on 763 pages for 'high performance'.

Page 25/763 | < Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >

  • Essbase Analytics Link (EAL) - The performance of some EAL operations can be improved by tuning the EAL Data Synchronization Server (DSS) parameters

    - by Ahmed Awan
    Generally, the performance of some EAL (Essbase Analytics Link) operations can be improved by tuning the EAL Data Synchronization Server (DSS) parameters. a. The DSS machine is expected to be a 64-bit machine with 4-8 cores and 5-8 GB of RAM dedicated to DSS. b. To change the DSS configuration, open the EAL Configuration Tool on the DSS machine -> Next: and define "Job Units" as <Number of Cores dedicated to DSS> * 1.5, and "Max Memory Size" (on a 64-bit machine) as roughly 1 GB for each Job Unit; if the DSS machine is 32-bit, the maximum memory size is 2600 MB. "Data Store Size" depends on the number of bridges and the volume of the HFM applications, but in most cases 50000 MB is enough; this volume should be available on the drive defined as the "Data Store Dir". Continue with the configuration and finish it. After that, DSS should be restarted for the new settings to take effect.
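
    The sizing arithmetic above is easy to script. Below is a minimal, illustrative sketch in C# (the class and method names are hypothetical, not part of the EAL tooling); it simply applies the cores * 1.5 rule for Job Units, about 1 GB per Job Unit on 64-bit, the 2600 MB cap on 32-bit, and the suggested 50000 MB data store size:

        // Hypothetical helper that prints the DSS values recommended above.
        using System;

        static class DssSizing
        {
            public static void PrintRecommendation(int dedicatedCores, bool is64Bit)
            {
                int jobUnits = (int)Math.Round(dedicatedCores * 1.5);   // cores * 1.5
                int maxMemoryMb = is64Bit ? jobUnits * 1024 : 2600;     // ~1 GB per Job Unit, 2600 MB cap on 32-bit
                const int dataStoreSizeMb = 50000;                      // usually enough, per the note above

                Console.WriteLine("Job Units:       " + jobUnits);
                Console.WriteLine("Max Memory Size: " + maxMemoryMb + " MB");
                Console.WriteLine("Data Store Size: " + dataStoreSizeMb + " MB");
            }
        }

    For example, PrintRecommendation(8, true) yields 12 Job Units and roughly 12288 MB of memory.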

    Read the article

  • Information about rendering, batches, the graphics card, performance, etc. + XNA?

    - by Aidiakapi
    I know the title is a bit vague and it's hard to describe what I'm really looking for, but here goes. When it comes to CPU rendering, performance is mostly easy to estimate and straightforward, but when it comes to the GPU, I'm clueless due to my lack of technical background information. I'm using XNA, so it'd be nice if the theory could be related to that. So what I actually want to know is: what happens, and where (CPU/GPU), when you perform specific draw actions? What is a batch? What influence do effects, projections, etc. have? Is data persisted on the graphics card, or is it transferred over on every step? When there's talk about bandwidth, is that the graphics card's internal bandwidth, or the pipeline from CPU to GPU? Note: I'm not actually looking for information on how the drawing process itself happens (that's the GPU's business); I'm interested in all the overhead that precedes it. I'd like to understand what's going on when I do action X, so I can adapt my architecture and practices to that. Any articles (possibly with code examples), information, links, or tutorials that give more insight into how to write better games are very much appreciated. Thanks :)

    Read the article

  • Higher Performance With Spritesheets Than With Rotating Using C# and XNA 4.0?

    - by Manuel Maier
    I would like to know what the performance difference is between using multiple sprites in one file (a sprite sheet) to draw a game character that can face in 4 directions, and using one sprite per file but rotating that character as needed. I am aware that the sprite sheet method restricts the character to predefined directions, whereas the rotation method gives the character the freedom of "looking everywhere". Here's an example of what I am doing:
    Single Sprite Method: assuming I have a 64x64 texture that points north, I do the following if I want it to point east: spriteBatch.Draw(_sampleTexture, new Rectangle(200, 100, 64, 64), null, Color.White, (float)(Math.PI / 2), Vector2.Zero, SpriteEffects.None, 0);
    Multiple Sprite Method: now I have a sprite sheet (128x128) where the top-left 64x64 section contains a sprite pointing north, the top-right 64x64 section points east, and so forth. To make it point east, I do the following: spriteBatch.Draw(_sampleSpritesheet, new Rectangle(400, 100, 64, 64), new Rectangle(64, 0, 64, 64), Color.White);
    So which of these methods uses less CPU time, and what are the pros and cons? Does .NET/XNA optimize this in any way (e.g. notice that the same call was made last frame and reuse an already rendered/rotated image that is still in memory)?
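
    As a minimal sketch of the sprite-sheet variant, assuming the 128x128 layout described above (north in the top-left 64x64 cell, east in the top-right, and the remaining directions in the bottom row), the source rectangle can be computed from a direction index. The Facing enum and helper names are illustrative, not XNA API:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        enum Facing { North = 0, East = 1, South = 2, West = 3 }

        static class SpriteSheetHelper
        {
            const int FrameSize = 64;   // each sprite cell is 64x64
            const int Columns = 2;      // 128 / 64

            // Maps a facing direction to its 64x64 cell in the sheet.
            public static Rectangle SourceFor(Facing facing)
            {
                int index = (int)facing;
                return new Rectangle((index % Columns) * FrameSize,
                                     (index / Columns) * FrameSize,
                                     FrameSize, FrameSize);
            }

            public static void DrawFacing(SpriteBatch spriteBatch, Texture2D sheet,
                                          Rectangle destination, Facing facing)
            {
                spriteBatch.Draw(sheet, destination, SourceFor(facing), Color.White);
            }
        }

    Either way the single Draw call is cheap; what usually matters more is how many batches the SpriteBatch has to flush, so profiling both variants is the safer comparison.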

    Read the article

  • High CPU Usage with WebGL?

    - by shoosh
    I'm checking out the nightly builds of Firefox and Chromium with support for WebGL, using a few demos and tutorials, and I can't help but wonder about the extremely high CPU load they cause. A simple demo like this one runs at a sustained 60% of my dual core. The large version of this one maxes out the CPU at 100% and has some visible frame loss. Chromium seems to be slightly better than Firefox, but not by much. I'm pretty sure that if these were desktop applications the CPU load would be negligible. So what's going on here? What is it doing? Running the simple scripts these use can't be that demanding. Is it the extra layer of security or something?

    Read the article

  • WCF high instance count: does anyone know of negative side effects?

    - by Alex
    Hi there! Has anyone experienced, or does anyone know of, negative side effects from having a high service instance count like 60k? Aside from the memory consumption, of course. I am planning to increase the threshold for the maximum allowed instance count in our production environments. I am basically sick of severe production incidents just because "something" forgot to close a proxy properly. I plan to go to something like 60k instances, which will allow the service to survive, using the default session timeouts, at the average call rate for our clients. Thanks, Alex
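
    For reference, a minimal sketch of raising that threshold in code through ServiceThrottlingBehavior (the same setting can also be applied in config under serviceBehaviors). The contract, service class, and address below are placeholders, and 60000 simply mirrors the figure discussed above:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Description;

        [ServiceContract]
        interface IPingService
        {
            [OperationContract]
            string Ping(string message);
        }

        class PingService : IPingService
        {
            public string Ping(string message) { return message; }
        }

        class Program
        {
            static void Main()
            {
                var host = new ServiceHost(typeof(PingService),
                                           new Uri("net.tcp://localhost:9000/ping"));

                // Find or add the throttling behavior and raise the instance ceiling.
                var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
                if (throttle == null)
                {
                    throttle = new ServiceThrottlingBehavior();
                    host.Description.Behaviors.Add(throttle);
                }
                throttle.MaxConcurrentInstances = 60000;

                host.Open();
                Console.WriteLine("Service running; press Enter to stop.");
                Console.ReadLine();
                host.Close();
            }
        }

    Memory aside, the instance count only stays bounded if the session timeouts mentioned above actually reap abandoned proxies, so it is worth monitoring that counter after raising the limit.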

    Read the article

  • Searching for patterns to create a TCP Connection Pool for high performance messaging

    - by JoeGeeky
    I'm creating a new Client / Server application in C# and expect to have a fairly high rate of connections. That made me think of database connection pools which help mitigate the expense of creating and disposing connections between the client and database. I would like to create a similar capability for my application and haven't been able to find any good examples of how to apply this pattern. Do I really need to spin up an instance of a TcpClient every time I want to send a message to the server and receive a receipt message? Each connection is expected to transport between 1-5KB with each receiving a 1KB response message. I realize this question is somewhat vague, but I am starting from scratch so I am open to suggestions. Even if that means my suppositions are all wrong.
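
    A minimal sketch of one way to apply the database-connection-pool idea to TcpClient, assuming a single server endpoint: idle clients are parked in a thread-safe bag and reused instead of being created per message. The class name, size limit, and liveness check are illustrative; production code would also want timeouts and a real health check, since TcpClient.Connected only reflects the last I/O operation:

        using System;
        using System.Collections.Concurrent;
        using System.Net.Sockets;

        // Illustrative pool: reuses connected TcpClient instances for one endpoint.
        class TcpClientPool : IDisposable
        {
            private readonly string _host;
            private readonly int _port;
            private readonly int _maxIdle;
            private readonly ConcurrentBag<TcpClient> _idle = new ConcurrentBag<TcpClient>();

            public TcpClientPool(string host, int port, int maxIdle = 50)
            {
                _host = host;
                _port = port;
                _maxIdle = maxIdle;
            }

            public TcpClient Rent()
            {
                TcpClient client;
                while (_idle.TryTake(out client))
                {
                    if (client.Connected) return client;   // reuse a live connection
                    client.Close();                        // discard a stale one
                }
                return new TcpClient(_host, _port);        // pool empty: open a new connection
            }

            public void Return(TcpClient client)
            {
                if (client.Connected && _idle.Count < _maxIdle)
                    _idle.Add(client);                     // park it for the next request
                else
                    client.Close();
            }

            public void Dispose()
            {
                TcpClient client;
                while (_idle.TryTake(out client)) client.Close();
            }
        }

    A caller would Rent() a client, write the 1-5 KB request, read the 1 KB receipt, and then Return() the client instead of disposing it, so the TCP handshake cost is paid once per connection rather than once per message.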

    Read the article

  • How to achieve high availability?

    - by tanyehzheng
    My boss wants to have a system that takes into account a continent-wide catastrophic event. He wants to have two servers in the US and two servers in Asia (1 login server and 1 worker server on each continent). In the event that an earthquake breaks the connection between the two continents, both should work alone. When the connection is restored, they should sync with each other back to normal. An external cloud system is not allowed, as he has no confidence in it. The system should take scalability into account, which means the addition of new servers should be easy to configure. The servers should be load balanced. The connection between the servers should be very secure (encrypted and sent through SSL, although SSL takes care of the encryption). The system should let one and only one user log in with one account (beware of the latency between continents: two users sharing an account may reach both login servers at the same time). Please help. I'm already at my wit's end. Thank you in advance.

    Read the article

  • Windows Services -- High availability scenarios and design approach

    - by Vadi
    Let's say I have a standalone Windows service running on a Windows Server machine. How do I make sure it is highly available? 1) What design-level guidelines can you propose? 2) How can I make it highly available, e.g. primary/secondary, like the clustering solutions currently available in the market? 3) How should I deal with cross-cutting concerns in case of any fail-over scenarios? If there is anything else you can think of, please add it here. Note: The question is only related to Windows and Windows services, please try to obey this rule :)

    Read the article

  • Very large database, with only a very small portion being retrieved in real time

    - by mingyeow
    Hi folks, I have an interesting database problem. I have a DB that is 150GB in size. My memory buffer is 8GB. Most of my data is rarely retrieved, or mainly retrieved by backend processes. I would very much prefer to keep it around because some features require it. Some of it (namely some tables, and some identifiable parts of certain tables) is used very often in a user-facing manner. How can I make sure that the latter is always kept in memory? (There is more than enough space for it.) More info: We are on Ruby on Rails. The database is MySQL, and our tables are stored using InnoDB. We are sharding the data across 2 partitions. Because we are sharding it, we store most of our data using JSON blobs, while indexing only the primary keys.

    Read the article

  • SQL SERVER – Maximize Database Performance with DB Optimizer – SQL in Sixty Seconds #054

    - by Pinal Dave
    Performance tuning is an interesting concept and everybody evaluates it differently. Every developer and DBA has a different opinion about how one should do performance tuning. I personally believe performance tuning is a three-step process: Understanding the Query, Identifying the Bottleneck, and Implementing the Fix. When we are working with a large database application and it suddenly starts to slow down, we are all under stress about how we can get the database back to normal speed. Most of the time we do not have enough time to do a deep analysis of what is going wrong, or of what will fix the problem; our primary goal at that moment is just to fix the database problem as fast as we can. However, one very important thing to keep in mind is that a quick fix should not create any further issues with other parts of the system. When time is of the essence and we still want a deep analysis of the system to give us the best solution, we often tend to make mistakes, sometimes simply because we do not have the time to analyze the entire system. Here is what I do when I face such a situation: I take the help of DB Optimizer. It is a fantastic tool and does superlative performance tuning of the system. Every time I talk about a performance tuning tool, the initial reaction of people is that they do not want to try it because they believe it requires a lot of learning before they can use it. That is absolutely not true in the case of DB Optimizer; it is a very easy to use and intuitive tool, and one can get going with the product in no time. Here is a quick video I have built where I demonstrate how we can identify which index is missing for a query and how we can quickly create that index. All three steps of query tuning are completed in less than 60 seconds. If you are into performance tuning and query optimization, you should download DB Optimizer and give it a go. Let us see the same concept in the following SQL in Sixty Seconds video: You can download DB Optimizer and reproduce the same Sixty Seconds experience. Related Tips in SQL in Sixty Seconds: Performance Tuning – Part 1 of 2 – Getting Started and Configuration; Performance Tuning – Part 2 of 2 – Analysis, Detection, Tuning and Optimizing. What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Interview Questions and Answers, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Identity

    Read the article

  • Exchange 2010 - resolving Calendar Attendant\Requests Failed

    - by marcwenger
    On my mailbox server, I am receiving the alert: MSExchange Calendar Attendant\Requests Failed. Or, in SolarWinds: Requests Failed (Calendar Attendant) for Exchange 2010 Mailbox Role Counters (Advanced) on *servername*. All I know is that this figure should be 0 at all times. Currently I am at 2, and this is the only alert on the Exchange servers. I cannot find anywhere how to resolve this. How can I fix it? Thank you.

    Read the article

  • Need help tuning MySQL and a Linux server

    - by Newtonx
    We have a multi-user application (like MailChimp or Constant Contact). Each of our customers has its own contact list (from 5 to 100,000 contacts). Everything is stored in one BIG database (currently 25G). Since we released our product we have accumulated 5 years of data history: users/customers (200+), contacts (40 million records), campaigns, campaign_deliveries (73,843,764 records), and campaign_queue (8 million currently). As we get more users and the table record counts increase, our system/web app is getting slower and slower. Some queries take too long to execute.
    SCHEMA
    Table contacts
    +--------------+------------------+------+-----+---------+----------------+
    | Field        | Type             | Null | Key | Default | Extra          |
    +--------------+------------------+------+-----+---------+----------------+
    | contact_id   | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
    | client_id    | int(10) unsigned | YES  |     | NULL    |                |
    | name         | varchar(60)      | YES  |     | NULL    |                |
    | mail         | varchar(60)      | YES  | MUL | NULL    |                |
    | verified     | int(1)           | YES  |     | 0       |                |
    | owner        | int(10) unsigned | NO   | MUL | 0       |                |
    | date_created | date             | YES  | MUL | NULL    |                |
    | geolocation  | varchar(100)     | YES  |     | NULL    |                |
    | ip           | varchar(20)      | YES  | MUL | NULL    |                |
    +--------------+------------------+------+-----+---------+----------------+
    Table campaign_deliveries
    +---------------+------------------+------+-----+---------+----------------+
    | Field         | Type             | Null | Key | Default | Extra          |
    +---------------+------------------+------+-----+---------+----------------+
    | id            | int(11)          | NO   | PRI | NULL    | auto_increment |
    | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
    | contact_id    | int(10) unsigned | NO   | MUL | 0       |                |
    | sent_date     | date             | YES  | MUL | NULL    |                |
    | sent_time     | time             | YES  | MUL | NULL    |                |
    | smtp_server   | varchar(20)      | YES  |     | NULL    |                |
    | owner         | int(5)           | YES  | MUL | NULL    |                |
    | ip            | varchar(20)      | YES  | MUL | NULL    |                |
    +---------------+------------------+------+-----+---------+----------------+
    Table campaign_queue
    +---------------+------------------+------+-----+---------+----------------+
    | Field         | Type             | Null | Key | Default | Extra          |
    +---------------+------------------+------+-----+---------+----------------+
    | queue_id      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
    | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
    | owner         | int(10) unsigned | NO   | MUL | 0       |                |
    | date_to_send  | date             | YES  |     | NULL    |                |
    | contact_id    | int(11)          | NO   | MUL | NULL    |                |
    | date_created  | date             | YES  |     | NULL    |                |
    +---------------+------------------+------+-----+---------+----------------+
    Slow queries LOG
    --------------------------------------------
    Query_time: 350  Lock_time: 1  Rows_sent: 1  Rows_examined: 971004
    SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 70 AND contacts.verified = 1);
    Query_time: 235  Lock_time: 1  Rows_sent: 1  Rows_examined: 4455209
    SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 2);
    How can we optimize it? Queries should take no more than 30 seconds to execute. Can we optimize it and keep all the data in one BIG database, or should we change the app's structure and set up a single database for each user? Thanks

    Read the article

  • SQLAuthority News – 2 Whitepapers Announced – AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution

    - by pinaldave
    Understanding the AlwaysOn architecture is extremely important when building a solution with failover clusters and availability groups. Microsoft has just released two very important white papers related to this subject. Both white papers are written by top experts in the industry and have been reviewed by an excellent panel of experts. Every time I talk with various organizations that are adopting SQL Server 2012, they are always excited about the concept of the new AlwaysOn feature. One of the requests I often hear is for detailed documentation that can help enterprises build a robust high availability and disaster recovery solution. I believe the following two white papers now satisfy that request. AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using AlwaysOn Availability Groups - SQL Server 2012 AlwaysOn Availability Groups provides a unified high availability and disaster recovery (HADR) solution. This paper details the key topology requirements of this specific design pattern, covering important concepts like quorum configuration considerations, the steps required to build the environment, and a workflow that shows how to handle a disaster recovery. AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using Failover Cluster Instances and Availability Groups - SQL Server 2012 AlwaysOn Failover Cluster Instances (FCI) and AlwaysOn Availability Groups provide a comprehensive high availability and disaster recovery solution. This paper details the key topology requirements of this specific design pattern, covering important concepts like asymmetric storage considerations, quorum model selection, quorum votes, the steps required to build the environment, and a workflow. Even if you are not going to implement the AlwaysOn feature, these two white papers are still great reference material to review, as they will give you a complete idea of what it takes to implement an AlwaysOn architecture and what kind of effort is needed. One should at least bookmark the above two white papers for future reference. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology Tagged: AlwaysOn

    Read the article

  • How do I objectively measure an application's load on a server

    - by Joe
    All, I'm not even sure where to begin looking for resources to answer my question, and I realize that speculation about this kind of thing is highly subjective. I need help determining what class of server I should purchase to host a MS Silverlight application with a MSSQL server back-end on a Windows Server 2008 platform. It's an interactive program, so I can't simply generate a list of URLs to test against and run it with 1000 simultaneous users. What tools are out there to help me determine what kind of load the application will put on a server at varying levels of concurrent users? Would you all suggest separating the SQL server from the web server, to better differentiate the generated load on the different parts of the stack?

    Read the article

  • Server configurations for hosting MySQL database

    - by shyam
    I have a web application which uses a MySQL database hosted on a virtual server. I've been using this server since I started the application, when the database was really small. Now it has grown and the server is not able to handle the db, causing frequent db errors. I'm planning to get a new server and I need suggestions for that. Like I said, the db is now 9 GB and is growing considerably fast. There are a number of tables with millions of rows, which are frequently updated and queried. The most frequent error the db shows is Lock wait timeout exceeded. Previously there used to be "The total number of locks exceeds the lock table size" errors too, but I could avoid them by increasing the InnoDB buffer pool size. Please suggest what configuration I should look for in the server I buy. I read somewhere that the db should ideally have a buffer pool size greater than the size of its data, so in my case I guess I'd need memory greater than 9 GB. What other things should I look for in the server? Just tell me if I should give you more info about the

    Read the article

  • Changing memory allocator to Jemalloc Centos 6

    - by Brian Lovett
    After reading this blog post about the impact of memory allocators like jemalloc on highly threaded applications, I wanted to test things on a larger scale on some of our cluster of servers. We run Sphinx and Apache using threads, on 24-core machines. Installing jemalloc was simple enough. We are running CentOS 6, so yum install jemalloc jemalloc-devel did the trick. My question is, how do we change everything on the system over to using jemalloc instead of the default malloc built into CentOS? Research pointed me at this as a potential option: LD_PRELOAD=$LD_PRELOAD:/usr/lib64/libjemalloc.so.1 Would this be sufficient to get everything using jemalloc?

    Read the article

  • Benchmark for website speed optimization

    - by gowri
    I am working on website speed optimization. I mostly use 3 tools for analyzing speed: the Google PageSpeed tool, the YSlow Firefox extension, and the Web Page Performance Test. I measure performance using the above tools and benchmark the results before and after, like this:
    Before optimization: Google PageSpeed Insights score: 53/100; Web Page Performance Test: 55/100 (First View: 10.710s, Repeat View: 6.387s); Yahoo overall performance score: 68.
    Stage 1, after optimization: Google PageSpeed Insights score: 88/100; Web Page Performance Test: 88/100 (First View: 6.733s, Repeat View: 1.908s); Yahoo overall performance score: 80.
    My questions are: Am I doing this the correct way? What is the best way to benchmark speed optimization? Is there a standard? Is there a much better tool for analyzing speed?

    Read the article

  • ATG Live Webcast March 21 Reminder: Network, WAN, and PC Performance Tuning (Performance Series Part 3 of 3)

    - by BillSawyer
    A quick reminder about tomorrow's webcast: Andy Tremayne, Senior Architect, Applications Performance, and co-author of Oracle Applications Performance Tuning Handbook from Oracle Press, and Uday Moogala, Senior Principal Engineer, Applications Performance, will discuss network performance for E-Business Suite. Andy and Uday will cover tuning the client and tuning the network. They will share real-life examples of network performance, and show you tools and techniques that you can use to estimate or simulate performance on your own network.
    The agenda for the Performance Tuning - Part 3 of 3 webcast includes the following topics: Tuning the Client; Tuning the Network.
    Date: Thursday, March 21, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenters: Andy Tremayne, Senior Architect, Applications Performance; Uday Moogala, Senior Principal Engineer, Applications Performance
    Webcast Registration Link (Preregistration is optional but encouraged)
    To hear the audio feed: Domestic Participant Dial-In Number: 877-697-8128; International Participant Dial-In Number: 706-634-9568; Additional International Dial-In Numbers Link; Dial-In Passcode: 99341
    To see the presentation: The Direct Access Web Conference details are: Website URL: https://ouweb.webex.com; Meeting Number: 591264961
    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay. And you can find our archive of our past webcasts and training here.

    Read the article

  • How to select a server that supports Windows scheduled file IO

    - by Kristof Verbiest
    Background: I am developing an application that needs to read data from disk with a fairly consistent throughput. It is important that this throughput is not influenced by other actions that happen on the disk (e.g. by other processes). For this purpose, I was hoping to use the 'Scheduled File I/O' feature in Windows (through the GetFileBandwidthReservation and SetFileBandwidthReservation functions). However, this StackOverflow question has taught me that this feature is only available if the device driver supports it. Currently I have no computer at my disposal that seems to support this feature (I have an HP ProLiant server and a Dell Precision workstation). Question: If I were to order a new server, how can I know beforehand whether this feature will be supported by the device driver? How 'upscale' does the server have to be? Has anybody used this feature with success and cares to share their experience?

    Read the article

  • Which will give more free RAM to Linux?

    - by Linda Thomas
    I'm trying to avoid some issues, so I've been trying to learn the vm.* settings in kernel tuning, but I'm still a little confused even after googling. The lower dirty_background_ratio is, the sooner the flushes happen? And the lower dirty_ratio is, the less dirty RAM is kept, right? Which of these will give more free RAM:
    vm.dirty_ratio = 20, vm.dirty_background_ratio = 1
    or vm.dirty_ratio = 60, vm.dirty_background_ratio = 20
    or vm.dirty_ratio = 20, vm.dirty_background_ratio = 10
    or vm.dirty_ratio = 20, vm.dirty_background_ratio = 5

    Read the article

  • Why did MySQL sit for 2 minutes doing nothing?

    - by Alex R
    This was a one-time thing, not reproducible... But I saved the show innodb status output. Can anybody tell what's going on here? The simple insert took almost 3 minutes to complete.
    =====================================
    110201 15:58:10 INNODB MONITOR OUTPUT
    =====================================
    Per second averages calculated from the last 34 seconds
    ----------
    SEMAPHORES
    ----------
    OS WAIT ARRAY INFO: reservation count 11963, signal count 11766
    --Thread 1824 has waited at .\btr\btr0cur.c line 443 for 118.00 seconds the semaphore:
    S-lock on RW-latch at 09D6453C created in file .\buf\buf0buf.c line 550
    a writer (thread id 1824) has reserved it in mode wait exclusive
    number of readers 1, waiters flag 1
    Last time read locked in file .\buf\buf0flu.c line 599
    Last time write locked in file .\btr\btr0cur.c line 443
    Mutex spin waits 0, rounds 527817, OS waits 7133
    RW-shared spins 2532, OS waits 1226; RW-excl spins 1652, OS waits 1118
    ------------
    TRANSACTIONS
    ------------
    Trx id counter 0 95830
    Purge done for trx's n:o < 0 95814 undo n:o < 0 0
    History list length 11
    LIST OF TRANSACTIONS FOR EACH SESSION:
    ---TRANSACTION 0 0, not started, OS thread id 3704
    MySQL thread id 551, query id 2702112 localhost 127.0.0.1 root
    show innodb status
    ---TRANSACTION 0 95829, not started, OS thread id 3132
    MySQL thread id 534, query id 2702020 localhost 127.0.0.1 root
    ---TRANSACTION 0 95828, not started, OS thread id 3152
    MySQL thread id 527, query id 2701973 localhost 127.0.0.1 root
    ---TRANSACTION 0 95827, ACTIVE 118 sec, OS thread id 1824 inserting, thread declared inside InnoDB 500
    mysql tables in use 1, locked 1
    1 lock struct(s), heap size 320, 0 row lock(s)
    MySQL thread id 526, query id 2701972 localhost 127.0.0.1 root update
    INSERT INTO log_searchcriteria (userid,search_criteria,date,search_type) VALUES ( NAME_CONST('userid',NULL), NAME_CONST('search_criteria',_latin1' SELECT SQL_CALC_FOUND_ROWS idx_search.CTCX_LATITUDE, idx_search.CTCX_LONGITUDE, idx_search.building_id, idx_search.LN_LIST_NUMBER, idx_search.LP_LIST_PRICE, idx_search.HSN_ADRESS_HOUSE_NUMBER, idx_search.STR_ADDRESS_STREET, idx_search.CP_ADDRESS_COMPASS_POINT, idx_search.UN_UNIT, idx_search.CIT_CITY, idx_search.ZP_ZIP_CODE, idx_search.AR_AREA_NAME, idx_search.BR_BEDROOMS, idx_search.BTH_BATHS, idx_search.ST_STATUS, idx_search.CTCX_STYLE_TYPE, idx_s
    --------
    FILE I/O
    --------
    I/O thread 0 state: wait Windows aio (insert buffer thread)
    I/O thread 1 state: wait Windows aio (log thread)
    I/O thread 2 state: wait Windows aio (read thread)
    I/O thread 3 state: wait Windows aio (write thread)
    Pending normal aio reads: 0, aio writes: 1, ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
    Pending flushes (fsync) log: 0; buffer pool: 0
    151006 OS file reads, 120758 OS file writes, 6844 OS fsyncs
    0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
    -------------------------------------
    INSERT BUFFER AND ADAPTIVE HASH INDEX
    -------------------------------------
    Ibuf: size 1, free list len 5, seg size 7, 24664 inserts, 24664 merged recs, 4612 merges
    Hash table size 553253, node heap has 629 buffer(s)
    0.00 hash searches/s, 0.00 non-hash searches/s
    ---
    LOG
    ---
    Log sequence number 5 2318193115
    Log flushed up to   5 2318193115
    Last checkpoint at  5 2318129891
    0 pending log writes, 0 pending chkp writes
    3036 log i/o's done, 0.00 log i/o's/second
    ----------------------
    BUFFER POOL AND MEMORY
    ----------------------
    Total memory allocated 213459462; in additional pool allocated 1720192
    Dictionary memory allocated 240416
    Buffer pool size   8192
    Free buffers       0
    Database pages     7563
    Modified db pages  18
    Pending reads 0
    Pending writes: LRU 0, flush list 18, single page 0
    Pages read 150973, created 28788, written 115137
    0.00 reads/s, 0.00 creates/s, 0.00 writes/s
    No buffer pool page gets since the last printout
    --------------
    ROW OPERATIONS
    --------------
    1 queries inside InnoDB, 0 queries in queue
    1 read views open inside InnoDB
    Main thread id 2992, state: flushing buffer pool pages
    Number of rows inserted 794294, updated 89203, deleted 13698, read 1453084305
    0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
    ----------------------------
    END OF INNODB MONITOR OUTPUT
    ============================
    Thanks

    Read the article

  • Best tool for monitoring backups, etc. and trending statistics from that data

    - by Randy Syring
    I have done some research on Nagios, OpenNMS, and Zenoss but am not confident that I have found what I am looking for. The main driving force for me right now is being able to monitor backups. This includes MySQL, MSSQL, and eventually some file system backups. We have a tool that wraps the backup process for these different systems and collects statistics: items like the number of databases backed up, the size of the db backup file, the size of the compressed db backup file, the time to make the backup, and the time to zip the file. I want to be able to A) have notifications if the jobs are not run according to schedule, B) set thresholds on the statistics which would trigger notifications, and C) trend and graph the statistics. I am planning on sending this information to the monitoring application through an HTTP POST. Or, the monitoring application could pull it from a log file as well. However, we will have other processes with other "arbitrary" (from the monitoring system's perspective) statistics that we will want to monitor and trend, so flexibility is very important. The tool or tools should also be able to do general monitoring and trending of network interfaces, server load, etc. Once we get the backup monitoring in place, we will want to include those items as well. Thanks.

    Read the article

  • How to check for bottlenecks in Windows Server 2008 R2

    - by Phil Koury
    I recently switched out a 10-year-old server for a brand new server in a small office and upgraded from Windows Server 2000 to Windows Server 2008 R2. After the switch was complete and some configurations were changed around, we are running into what appear to be some bottlenecks in network speed. Accessing programs on the server is slower (resulting in long loading times, slower report generation, etc.) than it was on the old server hardware. I am wondering what options or tools I have, if there are any at all, to find out exactly where these hang-ups might be coming from.

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >