Search Results

Search found 19555 results on 783 pages for 'job performance'.

Page 215/783

  • Do unused vertices in a 3D object affect performance?

    - by Gajet
    For my game I need to generate a mesh dynamically. Now I'm wondering: does it have a noticeable effect on FPS if I allocate more vertices than I'm actually using, and does it matter whether I'm using DirectX or OpenGL? Edit: The final output will be a w*h cell grid, but for technical reasons it's much easier for me to allocate (w+1)*(h+1) vertices. I'll only reference w*h vertices in the indexing, and I know there is some memory wasted there, but I want to know whether it also affects FPS. (Note that the mesh is only generated once each time you play the game.)
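
    A minimal C# sketch of the setup being described (grid sizes are illustrative): (w+1)*(h+1) vertices are allocated, but the index buffer only ever references the first w*h of them, so the last row and column of vertices take up memory without ever being drawn.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Numerics;

    // Hypothetical grid sizes; the vertex buffer is over-allocated on purpose.
    int w = 64, h = 64;
    var vertices = new Vector3[(w + 1) * (h + 1)];   // allocated once, partly unused
    var indices = new List<int>();

    int stride = w + 1;   // vertices per row in the buffer
    // Index only the first w x h vertices (a (w-1) x (h-1) patch of cells); the
    // last row and column of vertices exist in the buffer but are never referenced.
    for (int y = 0; y < h - 1; y++)
    {
        for (int x = 0; x < w - 1; x++)
        {
            int i0 = y * stride + x;
            int i1 = i0 + 1;
            int i2 = i0 + stride;
            int i3 = i2 + 1;
            // Two triangles per cell; winding order depends on the API's culling setup.
            indices.AddRange(new[] { i0, i2, i1, i1, i2, i3 });
        }
    }

    Console.WriteLine($"{vertices.Length} vertices allocated, {indices.Count} indices emitted");
    ```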

    Read the article

  • Which is better for performance: using a disposable local variable or reusing a global one?

    - by petervaz
    This is for an Android game. Suppose I have a function that is called several times per second and does some calculations involving an ArrayList (or any other complex object, for that matter). Which approach would be preferred? local: private void doStuff(){ ArrayList<Type> XList = new ArrayList<Type>(); // do stuff with list } global: private ArrayList<Type> XList = new ArrayList<Type>(); private void doStuff(){ XList.clear(); // do stuff with list }

    Read the article

  • When creating a GUI wizard, should all pages/tabs be of the same size? [closed]

    - by Job
    I understand that some libraries would force me to, but my question is general. If I have a set of buttons at the bottom (Back, Next, Cancel?, other?), then should their location ever change? If the answer is no, then what do I do about pages with little content? Do I stretch things? Place them alone in the upper left corner? According to Steve Krug, it does not make sense to add anything to a GUI that does not need to be there. I understand that there are different approaches to wizards - some have tabs, others do not. Some tabs are lined up horizontally at the top; others vertically on the left. Some do not show pages/tabs at all and are simply sequences of dialogs. This is probably a must when the wizard is "non-linear", e.g. some earlier choices can result in branching. Either way the problem is the same - sacrifice the consistency of the "big picture" (outline of the page/tab plus location of buttons), or the consistency of details (some tabs might be somewhat packed, others have very little content). A third choice, I suppose, is putting extra effort into organizing the content so that it is more or less evenly distributed from page to page. However, this can be difficult to do (say, when the very first tab contains only a choice of three things and then branches off from there; there are probably other examples), and it is hard to maintain this balance if any of the content changes later. Can you recommend a good approach? A link to a relevant blog post or a chapter of a book is also welcome. Let me know if you have questions.

    Read the article

  • My Oracle Support

    - by Dongwei Wang
    [The body of this post was written in Chinese and was lost to an encoding error; what remains recoverable is the list of My Oracle Support (MOS) notes it recommends for diagnosing database performance issues:]
    Note 62143.1 - Troubleshooting: Tuning the Shared Pool and Tuning Library Cache Latch Contention
    Note 376442.1 - * How To Collect 10046 Trace (SQL_TRACE) Diagnostics for Performance Issues
    Note 749227.1 - * How to Gather Optimizer Statistics on 11g
    Note 1359094.1 - FAQ: How to Use AWR reports to Diagnose Database Performance Issues
    Note 1320966.1 - Things to Consider Before Upgrading to 11.2.0.2 to Avoid Poor Performance or Wrong Results
    Note 1392633.1 - Things to Consider Before Upgrading to 11.2.0.3 to Avoid Poor Performance or Wrong Results
    [The post also mentions the PDF version of the notes and the "Rate this document" feature.]

    Read the article

  • Why am I getting such slow file transfer performance?

    - by kingdango
    I am copying 4GB from a USB flash drive to my Linux partition. The flash drive is NTFS formatted (I believe; it was formatted under Windows). The transfer is incredibly slow and frequently blocks the computer, causing lag and hanging applications. My transfer rate is 1.2 MB/sec, and that is the maximum it has hit, even when I keep the File Operations window in focus. Why is this so slow under Ubuntu and significantly faster in Windows 7?

    Read the article

  • How do you monitor and react when some scheduled job fails? - general question

    - by Dzida
    Hi, In many projects my team has faced problems with 'silent failures' of important components. There are a lot of tasks executed behind the scenes, and if something fails (either through errors in logic or hardware problems), in most cases the responsible person is not notified (or not notified instantly). I know about heavyweight monitoring tools that could solve some of these problems, but they are over-complicated and too expensive for our team. I am interested in what your solutions to such problems are.
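
    Not an answer so much as the lightest-weight pattern I've seen for this: wrap every background task in a guard that reports failures somewhere a human will actually look. A minimal C# sketch (the SMTP host and addresses are placeholders); the inverse failure mode - the job never starting at all - still needs a separate heartbeat check on the other end.

    ```csharp
    using System;
    using System.Net.Mail;

    static class JobGuard
    {
        // Runs a scheduled task and sends an alert if it throws, so failures
        // are never silent even when nobody is watching the logs.
        public static void Run(string jobName, Action job)
        {
            try
            {
                job();
            }
            catch (Exception ex)
            {
                using (var smtp = new SmtpClient("smtp.example.local"))   // placeholder host
                {
                    smtp.Send("jobs@example.local", "team@example.local",
                              "Scheduled job failed: " + jobName, ex.ToString());
                }
            }
        }
    }

    // Usage: JobGuard.Run("nightly-import", () => ImportNightlyFiles());
    ```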

    Read the article

  • What are all the things that do/can happen when a user enters a web page's URL?

    - by Job
    Sorry if this is a naive title. I am trying to break into web development, which is so far a foreign world to me. I took a basic intro to OS and an intro to networking as part of my bachelor's degree several years ago. I cannot say that we went into things deeply, beyond academic assignments. I would like to understand much better what happens for this specific task (e.g. what goes on at the OS level, at the networking level, what other computers are involved, where proxy servers and hackers come in). Again, at a very basic level this has been covered, but I would like to review this specific thing more thoroughly. Are there chapters of a specific book, or specific SHORT readings, that you recommend? I have nothing against your list of 20 favorite books, but I want to understand this one thing better in 2-3 weeks of reading after work / on the weekends (realistically 5-10 hrs per week, so I am looking for about 15-20 hrs of reading material, NO MORE). I am not a fast reader :) Thanks.

    Read the article

  • When choosing a domain, does including your brand affect SEO performance?

    - by bpeterson76
    I've been asked to build a "landing page" for a local branch of an international corporation. While the corporation has a well-established domain name, the local office wants to use a unique, separate URL that will be easy for them to relay to clients. However, the corporation is considered a category leader, so the local office is also concerned about the importance of carrying over the company's brand to the URL. Questions that have arisen: From an SEO perspective, is there a benefit to including the brand name in the URL? Would it be more beneficial to buy a domain that relates generically to the INDUSTRY as opposed to the specific brand name? Would the benefits of an easy-to-remember, short domain outweigh any SEO benefits that might be gained by a longer, brand-specific domain?

    Read the article

  • Slow in the Application, Fast in SSMS? Understanding Performance Mysteries

    When I read various forums about SQL Server, I frequently see questions from deeply mystified posters. They have identified a slow query or stored procedure in their application. They cull the SQL batch from the application and run it in SQL Server Management Studio (SSMS) to analyse it, only to find that the response is instantaneous. At this point they are inclined to think that SQL Server is all about magic. A similar mystery is when a developer has extracted a query in his stored procedure to run it stand-alone only to find that it runs much faster – or much slower – than inside the procedure.

    Read the article

  • Why is `cmd /C` living after it did its job?

    - by acidzombie24
    My question is: why does "cmd" remain (idle) in my process list after I update my exe? In my code I run this to update myself and relaunch: var args = string.Format(@"/C ping 1.1.1.1 -n 1 -w 3000 & move /Y ""{0}"" ""{1}"" & ""{1}"" {2}", updateFn, fn, exeargs); new Process() { StartInfo = new ProcessStartInfo(@"cmd", args) { CreateNoWindow = true, UseShellExecute = false } }.Start(); Environment.Exit(0); The idea is that I exit right away and have ping stall for 3 seconds before trying to replace my current exe with my updated exe. Then I launch with the necessary args. The full arg for cmd looks like this: /C ping 1.1.1.1 -n 1 -w 3000 & move /Y "c:\path\update" "c:\path\my.exe" & "c:\path\my.exe" exeargs Everything works fine, however I see cmd in Task Manager (it looks to be idle) after my process is launched and working correctly. Why?
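
    One hedged observation: when the last command in the /C chain launches the new exe directly, cmd waits for that process to exit before terminating, so the idle cmd is likely hanging around for the lifetime of the updated application. If the goal is for cmd to disappear as soon as the move is done, prefixing the relaunch with start (with an empty quoted title) detaches it - a sketch reusing the variables from the snippet above:

    ```csharp
    // Same launcher string, but the relaunch goes through `start ""` so cmd /C
    // can exit immediately after the move instead of waiting on the new exe.
    // (The empty "" is the window-title argument that start expects when the
    // program path itself is quoted.)
    var args = string.Format(
        @"/C ping 1.1.1.1 -n 1 -w 3000 & move /Y ""{0}"" ""{1}"" & start """" ""{1}"" {2}",
        updateFn, fn, exeargs);
    ```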

    Read the article

  • Do there exist programming languages where a variable can truly know its own name?

    - by Job
    In PHP and Python one can iterate over the local variables and, if there is only one choice where the value matches, you could say that you know what the variable's name is, but this does not always work. Machine code does not have variable names. C compiles to assembly and does not have any native reflection capabilities, so a variable would not know its name. (Edit: per Anton's answer, the pre-processor can know the variable's name.) Do there exist programming languages where a variable would know its name? It gets tricky if you do something like b = a and b does not become a copy of a but a reference to the same place. EDIT: Why in the world would you want this? I can think of one example: error checking that can survive automatic refactoring. Consider this C# snippet: private void CheckEnumStr(string paramName, string paramValue) { if (paramValue != "pony" && paramValue != "horse") { string exceptionMessage = String.Format( "Unexpected value '{0}' of the parameter named '{1}'.", paramValue, paramName); throw new ArgumentException(exceptionMessage); } } ... CheckEnumStr("a", a); // Var 'a' does not know its name - this will not survive naive auto-refactoring There are other libraries provided by Microsoft and others that allow checking for errors (sorry, the names have escaped me). I have seen one library which, with the help of closures/lambdas, can accomplish error checking that survives refactoring, but it does not feel idiomatic. This would be one reason why I might want a language where a variable knows its name.
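
    For what it's worth, later versions of C# (6.0 and up) added the nameof operator, which covers exactly the refactoring-survival case in the snippet above: the compiler substitutes the identifier's name as a string literal, so a rename refactoring updates the call site automatically.

    ```csharp
    // The call site from the snippet above, rewritten with nameof: renaming the
    // variable 'a' through the IDE's refactoring tools also rewrites nameof(a),
    // so the name string can never drift out of sync with the identifier.
    CheckEnumStr(nameof(a), a);
    ```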

    Read the article

  • How can I create a cron job that runs a task every three weeks?

    - by itj
    I have a task that needs to be performed on my project's schedule (every 3 weeks). I'm able to set up cron to do this every week, or (for example) on the 3rd week of every month - but I can't find a way to do it every three weeks. I could hack the script to create temporary files (or similar) so it could work out that this was the third time it had been run - but this solution smells. Can it be done in a clean way?
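
    One common cron-only approach, shown here as a sketch (the path is a placeholder, and % must be backslash-escaped inside a crontab): run weekly and let the command itself decide whether the current week number, counted from the Unix epoch, is a multiple of three.

    ```
    # Runs every Monday at 03:00, but only does the work every third week.
    # (Which of the three weeks it fires on depends on the epoch offset; change
    # the comparison value from 0 to 1 or 2 to shift it.)
    0 3 * * 1  test $(( $(date +\%s) / 604800 \% 3 )) -eq 0 && /path/to/task
    ```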

    Read the article

  • Why doesn't my cron.d per minute job run?

    - by Travis Griggs
    I have thrown a bunch of darts trying to get a Python script of mine to execute every minute. So I thought I'd simplify it to just do the "simplest thing that could possibly work" once per minute (I'm running Debian/testing). I created a single-line file in /etc/cron.d/perminute: * * * * * /bin/touch /home/me/ding_dong It's owned by root, and executable (not sure if either of those matter). And then I did: sudo service cron reload And then sat back and started running ls -ltr again and again in my home directory (/home/me). But my ding_dong file never shows up. I know that if I do a sudo /bin/touch /home/me/ding_dong, it shows up right away. Obviously I'm missing something stupid here.
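
    A side note that may be the whole story here: files under /etc/cron.d use the system crontab format, which has an extra user field between the schedule and the command (unlike a per-user crontab); without it, cron skips the line. A sketch of the same entry with the user field added (paths as in the question):

    ```
    # /etc/cron.d/perminute - system crontab format: minute hour dom month dow USER command
    * * * * * root /bin/touch /home/me/ding_dong
    ```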

    Read the article

  • If the bug is 5+ years old, then is it a feature?

    - by Job
    Allow me to add details: I work at an institutional place with many coders, testers, QA analysts, product owners, etc., and here is something that bugs me: we have been able to sell crappy (albeit pretty functional) software for over a decade. It has many features and the product is competitive, but there are some serious bugs out there, as well as thousands of "paper cuts" - little annoyances that clients need to get used to. It pains me to look at some of these things, because I firmly believe that if computers do not help to make our lives easier, then we should not use them. I have confidence in my colleagues - they are smart, able, and can improve things when the focus is on doing that. But it can be difficult to file bugs against some old functionality without seeing them closed or forgotten. "It has worked like that for eons" is a typical answer. Also, when QA does regression testing, they tend to look for anything that is different as much as for anything that does not seem right. So a fix to an old problem can be written up as a bug, because "it has been like that since before even my time". The young coder in me thinks: rewrite this freaking thing! As someone who has had the opportunity to be close to sales and clients, I want to give this approach the benefit of the doubt. I am interested in your opinion/experience as well. Please try to consider risk, cost-to-benefit, and other non-technical factors.

    Read the article

  • Bridge or Factory and How

    - by Chris
    I'm trying to learn patterns and I've got a job that is screaming for a pattern, I just know it but I can't figure it out. I know the filter type is something that can be abstracted and possibly bridged. I'M NOT LOOKING FOR A CODE REWRITE JUST SUGGESTIONS. I'm not looking for someone to do my job. I would like to know how patterns could be applied to this example. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Data; using System.IO; using System.Xml; using System.Text.RegularExpressions; namespace CopyTool { class CopyJob { public enum FilterType { TextFilter, RegExFilter, NoFilter } public FilterType JobFilterType { get; set; } private string _jobName; public string JobName { get { return _jobName; } set { _jobName = value; } } private int currentIndex; public int CurrentIndex { get { return currentIndex; } } private DataSet ds; public int MaxJobs { get { return ds.Tables["Job"].Rows.Count; } } private string _filter; public string Filter { get { return _filter; } set { _filter = value; } } private string _fromFolder; public string FromFolder { get { return _fromFolder; } set { if (Directory.Exists(value)) { _fromFolder = value; } else { throw new DirectoryNotFoundException(String.Format("Folder not found: {0}", value)); } } } private List<string> _toFolders; public List<string> ToFolders { get { return _toFolders; } } public CopyJob() { Initialize(); } private void Initialize() { if (ds == null) { ds = new DataSet(); } ds.ReadXml(Properties.Settings.Default.ConfigLocation); LoadValues(0); } public void Execute() { ExecuteJob(FromFolder, _toFolders, Filter, JobFilterType); } public void ExecuteAll() { string OrigPath; List<string> DestPaths; string FilterText; FilterType FilterWay; foreach (DataRow rw in ds.Tables["Job"].Rows) { OrigPath = rw["FromFolder"].ToString(); FilterText = rw["FilterText"].ToString(); switch (rw["FilterType"].ToString()) { case "TextFilter": FilterWay = FilterType.TextFilter; break; case "RegExFilter": FilterWay = FilterType.RegExFilter; break; default: FilterWay = FilterType.NoFilter; break; } DestPaths = new List<string>(); foreach (DataRow crw in rw.GetChildRows("Job_ToFolder")) { DestPaths.Add(crw["FolderPath"].ToString()); } ExecuteJob(OrigPath, DestPaths, FilterText, FilterWay); } } private void ExecuteJob(string OrigPath, List<string> DestPaths, string FilterText, FilterType FilterWay) { FileInfo[] files; switch (FilterWay) { case FilterType.RegExFilter: files = GetFilesByRegEx(new Regex(FilterText), OrigPath); break; case FilterType.TextFilter: files = GetFilesByFilter(FilterText, OrigPath); break; default: files = new DirectoryInfo(OrigPath).GetFiles(); break; } foreach (string fld in DestPaths) { CopyFiles(files, fld); } } public void MoveToJob(int RecordNumber) { Save(); LoadValues(RecordNumber - 1); } public void AddToFolder(string folderPath) { if (Directory.Exists(folderPath)) { _toFolders.Add(folderPath); } else { throw new DirectoryNotFoundException(String.Format("Folder not found: {0}", folderPath)); } } public void DeleteToFolder(int index) { _toFolders.RemoveAt(index); } public void Save() { DataRow rw = ds.Tables["Job"].Rows[currentIndex]; rw["JobName"] = _jobName; rw["FromFolder"] = _fromFolder; rw["FilterText"] = _filter; switch (JobFilterType) { case FilterType.RegExFilter: rw["FilterType"] = "RegExFilter"; break; case FilterType.TextFilter: rw["FilterType"] = "TextFilter"; break; default: rw["FilterType"] = "NoFilter"; break; } DataRow[] ToFolderRows = 
ds.Tables["Job"].Rows[currentIndex].GetChildRows("Job_ToFolder"); for (int i = 0; i <= ToFolderRows.GetUpperBound(0); i++) { ToFolderRows[i].Delete(); } foreach (string fld in _toFolders) { DataRow ToFolderRow = ds.Tables["ToFolder"].NewRow(); ToFolderRow["JobId"] = ds.Tables["Job"].Rows[currentIndex]["JobId"]; ToFolderRow["Job_Id"] = ds.Tables["Job"].Rows[currentIndex]["Job_Id"]; ToFolderRow["FolderPath"] = fld; ds.Tables["ToFolder"].Rows.Add(ToFolderRow); } } public void Delete() { ds.Tables["Job"].Rows.RemoveAt(currentIndex); LoadValues(currentIndex++); } public void MoveNext() { Save(); currentIndex++; LoadValues(currentIndex); } public void MovePrevious() { Save(); currentIndex--; LoadValues(currentIndex); } public void MoveFirst() { Save(); LoadValues(0); } public void MoveLast() { Save(); LoadValues(ds.Tables["Job"].Rows.Count - 1); } public void CreateNew() { Save(); int MaxJobId = 0; Int32.TryParse(ds.Tables["Job"].Compute("Max(JobId)", "").ToString(), out MaxJobId); DataRow rw = ds.Tables["Job"].NewRow(); rw["JobId"] = MaxJobId + 1; ds.Tables["Job"].Rows.Add(rw); LoadValues(ds.Tables["Job"].Rows.IndexOf(rw)); } public void Commit() { Save(); ds.WriteXml(Properties.Settings.Default.ConfigLocation); } private void LoadValues(int index) { if (index > ds.Tables["Job"].Rows.Count - 1) { currentIndex = ds.Tables["Job"].Rows.Count - 1; } else if (index < 0) { currentIndex = 0; } else { currentIndex = index; } DataRow rw = ds.Tables["Job"].Rows[currentIndex]; _jobName = rw["JobName"].ToString(); _fromFolder = rw["FromFolder"].ToString(); _filter = rw["FilterText"].ToString(); switch (rw["FilterType"].ToString()) { case "TextFilter": JobFilterType = FilterType.TextFilter; break; case "RegExFilter": JobFilterType = FilterType.RegExFilter; break; default: JobFilterType = FilterType.NoFilter; break; } if (_toFolders == null) _toFolders = new List<string>(); _toFolders.Clear(); foreach (DataRow crw in rw.GetChildRows("Job_ToFolder")) { AddToFolder(crw["FolderPath"].ToString()); } } private static FileInfo[] GetFilesByRegEx(Regex rgx, string locPath) { DirectoryInfo d = new DirectoryInfo(locPath); FileInfo[] fullFileList = d.GetFiles(); List<FileInfo> filteredList = new List<FileInfo>(); foreach (FileInfo fi in fullFileList) { if (rgx.IsMatch(fi.Name)) { filteredList.Add(fi); } } return filteredList.ToArray(); } private static FileInfo[] GetFilesByFilter(string filter, string locPath) { DirectoryInfo d = new DirectoryInfo(locPath); FileInfo[] fi = d.GetFiles(filter); return fi; } private void CopyFiles(FileInfo[] files, string destPath) { foreach (FileInfo fi in files) { bool success = false; int i = 0; string copyToName = fi.Name; string copyToExt = fi.Extension; string copyToNameWithoutExt = Path.GetFileNameWithoutExtension(fi.FullName); while (!success && i < 100) { i++; try { if (File.Exists(Path.Combine(destPath, copyToName))) throw new CopyFileExistsException(); File.Copy(fi.FullName, Path.Combine(destPath, copyToName)); success = true; } catch (CopyFileExistsException ex) { copyToName = String.Format("{0} ({1}){2}", copyToNameWithoutExt, i, copyToExt); } } } } } public class CopyFileExistsException : Exception { public string Message; } }

    Read the article

  • Is there a standard pattern for scanning a job table and executing some actions?

    - by Howiecamp
    (I realize that my title is poor. If after reading the question you have an improvement in mind, please either edit it or tell me and I'll change it.) I have the relatively common scenario of a job table which has one row for each thing that needs to be done. For example, it could be a list of emails to be sent. The table looks something like this:

    ID   Completed   TimeCompleted   anything else...
    ---- ---------   -------------   ----------------
    1    No                          blabla
    2    No                          blabla
    3    Yes         01:04:22        ...

    I'm looking for a standard practice/pattern (or code - C#/SQL Server preferred) for periodically "scanning" (I use the term "scanning" very loosely) this table, finding the not-completed items, doing the action and then marking them completed once done successfully. In addition to the basic process for accomplishing the above, I'm considering the following requirements: I'd like some means of "scaling linearly", e.g. running multiple "worker processes" simultaneously or threading or whatever. (Just a specific technical thought - I'm assuming that as a result of this requirement, I need some method of marking an item as "in progress" to avoid attempting the action multiple times.) Each item in the table should only be executed once. Some other thoughts: I'm not particularly concerned with whether the implementation is done in the database (e.g. in T-SQL or PL/SQL code) or in some external program code (e.g. a standalone executable or some action triggered by a web page) that is executed against the database. Whether the "doing the action" part is done synchronously or asynchronously is not something I'm considering as part of this question.
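
    A hedged sketch of the "mark in progress" idea in C# against SQL Server (the table and column names - JobQueue, Status, Payload, ClaimedAt - are invented for illustration): a single UPDATE ... OUTPUT statement lets each worker atomically flip one pending row to in-progress and read it back, so several workers can poll the same table without ever claiming the same item twice.

    ```csharp
    using System;
    using System.Data.SqlClient;

    public static class JobWorker
    {
        // Claims one pending row and returns its data in the same statement,
        // so no two workers can pick up the same job.
        private const string ClaimSql = @"
            UPDATE TOP (1) JobQueue
            SET    Status = 'InProgress', ClaimedAt = SYSUTCDATETIME()
            OUTPUT inserted.ID, inserted.Payload
            WHERE  Status = 'Pending';";

        public static void Poll(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                while (true)
                {
                    int id;
                    string payload;
                    using (var cmd = new SqlCommand(ClaimSql, conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        if (!reader.Read()) return;   // nothing pending right now
                        id = reader.GetInt32(0);
                        payload = reader.GetString(1);
                    }

                    // Do the actual work here (send the email, etc.).
                    Console.WriteLine("Processing job {0}: {1}", id, payload);

                    using (var done = new SqlCommand(
                        "UPDATE JobQueue SET Status = 'Completed', TimeCompleted = SYSUTCDATETIME() WHERE ID = @id", conn))
                    {
                        done.Parameters.AddWithValue("@id", id);
                        done.ExecuteNonQuery();
                    }
                }
            }
        }
    }
    ```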

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached a competitive feature in more scenarios. Notably, they are: 1) support for multiple table mapping; 2) a background thread to auto-commit long-running transactions; 3) enhancements in binlog performance. Let's go over each of these features one by one. In the last section, we will go over a couple of internally performed performance tests.

    Support for multiple table mapping

    In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached feature on different tables. Thus being able to access different tables at run time, and having different mappings for different connections, becomes a very desirable feature. In this GA release, we allow users to do both. We will discuss the key concepts and key steps in using this feature.

    1) The "mapping name" in the "get" and "set" commands

    In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such tables when they are referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, the user includes the "registration name" in their get or set commands, in the form "get @@new_mapping_name.key"; the prefix "@@" is required to signal a mapped table change. The key and the "mapping name" are separated by a configurable delimiter; by default it is ".". So the syntax is:

    get [@@mapping_name.]key_name
    set [@@mapping_name.]key_name

    or

    get @@mapping_name
    set @@mapping_name

    Here is an example. Let's set up three tables in the "containers" table. The first is a map to the InnoDB table "test/demo_test" with mapping name "setup_1":

    INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");

    Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3":

    INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
    INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");

    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user will do:

    get @@setup_3.key_a (this will also output the value corresponding to key "key_a")

    or simply:

    get @@setup_3

    Once this is done, the connection stays switched to the "my_database/my_demo" table until another table mapping switch is requested, so it can continue to issue regular commands like:

    get key_b
    set key_c 0 0 7

    These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

    2) Delimiter

    For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table with the name "table_map_delimiter":

    INSERT INTO config_options VALUES("table_map_delimiter", ".");

    So if the user wants a different delimiter, they can change it in the config_options table.

    3) Default mapping

    Once we have multiple table mappings, there should always be a "default" mapping. For this, we decided that if there exists a mapping name of "default", it will be chosen as the default mapping; otherwise, the first row of the containers table will be chosen as the default. Please note that user tables can be repeated in the "containers" table (for example, if the user wants to access different columns of the table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) The bind command

    In addition, we also extended the protocol and added a bind command; its usage is fairly straightforward. To switch to the "setup_3" mapping above, you simply issue:

    bind setup_3

    This will switch this connection's InnoDB table to "my_database/my_demo". In summary, with this feature you can now access different tables from different sessions, and even within a single connection you can query different tables.

    Background thread to auto-commit long-running transactions

    This feature relates to the "batch" concept we discussed in earlier blogs. The "batch" feature allows us to batch read and write operations and commit them only after a certain number of calls. The batch size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance. However, it also comes with some disadvantages: for example, you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to read_uncommitted, and in addition, certain row locks will be held for extended periods of time, which might reduce concurrency. To deal with this, we introduced a background thread that "auto-commits" transactions if they are idle for a certain amount of time (the default is 5 seconds). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If such a transaction has been idle longer than the limit and is not being used, it commits it. This limit is configurable by changing "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds. With the help of this background thread, you do not need to worry about long-running uncommitted transactions when setting daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to a large number. This also reduces the number of locks that could be held due to long-running transactions, and thus further increases concurrency.

    Enhancements in binlog performance

    As you may know, binlog operations are not done by the InnoDB storage engine; rather, they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. This required us to always set the "batch" size to 1 when the binlog is on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" are configured to. This put a big restriction on our ability to scale, and there is also quite a bit of overhead in creating and destroying such constructs that bogs performance down. With this release, we made the necessary changes to keep the MySQL constructs for as long as they are valid for a particular connection, so there are no repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (with innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale the write/read performance. Although there is still overhead that keeps InnoDB Memcached from performing as fast as when the binlog is turned off, it is much better compared to the previous release, and we are continuing to optimize this area to improve performance as much as possible.

    Performance Study

    Amerandra of our System QA team has conducted some performance studies on queries through our InnoDB Memcached connection and through the plain SQL end, and they show some interesting results. The tests were conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB of memory, Intel Xeon 2.0 GHz CPUs (x86_64, 2 CPUs, 4 cores each), and 2 RAID disks (1027 GB, 733.9 GB). Results are described in the following tables.

    Table 1: Performance comparison on Set operations

    Connections   5.6.7-RC Memcached plugin, Set (QPS), memcached-threads=8***   5.6.7-RC*, Set (QPS)**   X faster
    8             30,000                                                          5,600                    5.36
    32            59,000                                                          13,000                   4.54
    128           68,000                                                          8,000                    8.50
    512           63,000                                                          6,800                    9.23

    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation, when implemented in InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted, and if it does exist, the "value" field of the matching row (by key) is updated. So when used in the above comparison, it is a precompiled stored procedure, and the query just executes that procedure.
    *** added "-daemon_memcached_option=-t8" (the default is 4 threads)

    So we can see that with this "set" query, InnoDB Memcached can run 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on Get operations

    Connections   5.6.7-RC Memcached plugin, Get (QPS), memcached-threads=8   5.6.7-RC*, Get (QPS)   X faster
    8             42,000                                                       27,000                 1.56
    32            101,000                                                      55,000                 1.83
    128           117,000                                                      52,000                 2.25
    512           109,000                                                      52,000                 2.10

    With the "get" query (or the select query), Memcached performs 1.5 to 2 times faster than normal SQL.

    Summary

    In summary, we added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We also now provide a background commit thread to commit long-running idle transactions, thus allowing users to configure large batch writes/reads without worrying about a large number of row locks being held or not being able to see (uncommitted) data. We also greatly enhanced performance when the binlog is enabled. We will continue making efforts in both performance and functionality to make InnoDB Memcached a good demo case for our InnoDB APIs. Jimmy Yang, September 29, 2012

    Read the article

  • What tools are people using to measure SQL Server database performance?

    - by Paul McLoughlin
    I've experimented with a number of techniques for monitoring the health of our SQL Servers, ranging from the Management Data Warehouse functionality built into SQL Server 2008, through commercial products such as Confio Ignite 8, to, of course, rolling my own solution using perfmon, performance counters and collecting various information from the dynamic management views and functions. What I am finding is that whilst each of these approaches has its own strengths, they all have weaknesses too. I feel that to actually get people within the organisation to take the monitoring of SQL Server performance seriously, whatever solution we roll out has to be very simple and quick to use, must provide some form of a dashboard, and the act of monitoring must have minimal impact on the production databases (and perhaps even more importantly, it must be possible to prove that this is the case). So I'm interested to hear what others are using for this task? Any recommendations?

    Read the article

  • Can I expect a performance gain from removing this JOIN?

    - by makeee
    I have a "items" table with 1 million rows and a "users" table with 20,000 rows. When I select from the "items" table I do a join on the "users" table (items.user_id = user.id), so that I can grab the "username" from the users table. I'm considering adding a username column to the items table and removing the join. Can I expect a decent performance increase from this? It's already quite fast, but it would be nice to decrease my load (which is pretty high). The downside is that if the user changes their username, items will still reflect their old username, but this is okay with me if I can expect a decent performance increase. I'm asking stackoverflow because benchmarks aren't telling me too much. Both queries finish very quickly. Regardless, I'm wondering if removing the join would lighten load on the database to any significant degree.

    Read the article

  • How to improve the performance of opening Microsoft Word when automated from C#?

    - by Abdullah BaMusa
    I have a Microsoft Word template whose fields I fill automatically from my application, and when the user requests a print I open this template. But creating a Word application instance every time the user requests a print, after filling the fields, is very expensive and leads to some delay while opening the template, so I chose to cache the reference to Word and then just open the newly filled template. That solved the performance issue, as opening a file is less expensive than recreating Word each time, but it only works while the user closes just the document and not the entire Word application; when Word itself is closed, my reference becomes invalid and the next request to open the template fails with an exception saying: "The RPC server is unavailable". I tried to subscribe to the BeforeClosing event, but it triggers for quitting Word as well as for closing documents. My question is how to know whether Word is closing a document or quitting the entire application, so I can take the proper action - or any hint on another way of improving the performance of opening the Word template.
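
    A hedged sketch of one way to tell the two cases apart (the event names come from the Word interop assemblies and are worth verifying against the version in use): DocumentBeforeClose fires per document, while the application-level Quit event fires when Word itself goes away; the cast is needed because Application exposes both a Quit() method and a Quit event with the same name.

    ```csharp
    using Word = Microsoft.Office.Interop.Word;

    public class WordWatcher
    {
        private Word.Application _word;

        public WordWatcher(Word.Application word)
        {
            _word = word;
            _word.DocumentBeforeClose += OnDocumentBeforeClose;
            ((Word.ApplicationEvents4_Event)_word).Quit += OnWordQuit;
        }

        private void OnDocumentBeforeClose(Word.Document doc, ref bool cancel)
        {
            // Only a document is closing; the cached Application reference is still usable.
        }

        private void OnWordQuit()
        {
            // The whole Word instance is going away: drop the cached reference and
            // recreate Word lazily on the next print request.
            _word = null;
        }
    }
    ```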

    Read the article
