Search Results

Search found 20275 results on 811 pages for 'general performance'.


  • Should checkins be small steps or complete features?

    - by Caspin
    Two of version control's uses seem to dictate different checkin styles.

    Distribution centric: changesets generally reflect a complete feature. These checkins will tend to be larger, and this style is more user/maintainer friendly.

    Rollback centric: changesets are individual small steps, so the history can function like an incredibly powerful undo. These checkins will tend to be smaller, and this style is more developer friendly.

    I like to use my version control as a really powerful undo while I'm banging away at some stubborn code/bug. That way I'm not afraid to make drastic changes just to try out a possible solution. However, this seems to give me a fragmented file history with lots of "well that didn't work" checkins. If instead I try to have each changeset reflect a complete feature, I lose the use of my version control software for experimentation. However, it is much easier for users/maintainers to figure out how the code is evolving, which has great advantages for code reviews, managing multiple branches, etc. So what's a developer to do: checkin small steps or complete features?

    Read the article

  • My list item's child label elements disappear in IE on accordion menu opening

    - by Scott B
    I've got an app that's working pretty flawlessly in Chrome and FF; however, when I view it in IE, all is well until I click on a header element to activate it (jQuery accordion). What happens then is that I see a brief flash where the content is there, then suddenly the entire left column disappears. This column is generated by a floated label element with a class of ".left", as seen below:

        <ul class="menu collapsible">
          <li class='expand sectionTitle'><a href='#'>General Settings</a>
            <ul class='acitem'>
              <li class="section">
                <label class="left">This item is floated left with a defined width of 190px via CSS. This is the item that's disappearing after a brief display</label>
                <input class="input" value="input element here" />
                <label class="description">This element has margin-left:212px; set via CSS in order to be positioned to the right of the label element, as if in an adjacent table cell. When I add a max-width property to this element, it disappears in IE too!</label>
              </li>
            </ul>
          </li>
        </ul>

    As you can see from the comments in the code above (for the two label elements), the description label disappears once I set a max-width on it (I don't have a max-width on the left label element, but it disappears nonetheless). The initial view of this UL menu is fine (note the expand class declaration, which makes this part of the accordion open at startup). It's not until I click "General Settings" to toggle it closed, then back open, that the .left elements disappear (and only in IE).
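    This looks like IE's classic "peekaboo"/disappearing-float bug rather than anything in the accordion logic. A workaround worth trying - my suggestion, not a confirmed fix for this markup - is to force hasLayout on the affected elements:

        .acitem .left,
        .acitem .description {
            zoom: 1; /* proprietary IE property; triggers hasLayout, which often cures disappearing floated content */
        }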

    Read the article

  • Looking for the Go equivalent of scanf

    - by Stephen Hsu
    I'm looking for the Go equivalent of scanf(). I tried the following code:

        package main

        import (
            "scanner"
            "os"
            "fmt"
        )

        func main() {
            var s scanner.Scanner
            s.Init(os.Stdin)
            s.Mode = scanner.ScanInts
            tok := s.Scan()
            for tok != scanner.EOF {
                fmt.Printf("%d ", tok)
                tok = s.Scan()
            }
            fmt.Println()
        }

    I run it with input from a text file containing a line of integers, but it always outputs -3 -3 ... And how do I scan a line composed of a string and some integers? By changing the mode whenever I encounter a new data type? The package documentation says: "Package scanner: a general-purpose scanner for UTF-8 encoded text." But it seems that the scanner is not for general use. Updated code:

        func main() {
            n := scanf()
            fmt.Println(n)
            fmt.Println(len(n))
        }

        func scanf() []int {
            nums := new(vector.IntVector)
            reader := bufio.NewReader(os.Stdin)
            str, err := reader.ReadString('\n')
            for err != os.EOF {
                fields := strings.Fields(str)
                for _, f := range fields {
                    i, _ := strconv.Atoi(f)
                    nums.Push(i)
                }
                str, err = reader.ReadString('\n')
            }
            r := make([]int, nums.Len())
            for i := 0; i < nums.Len(); i++ {
                r[i] = nums.At(i)
            }
            return r
        }
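    For what it's worth, the fmt package provides scanf-style input helpers. A minimal sketch using fmt.Fscan (assuming a current Go toolchain rather than the pre-1.0 release this question targets, so the old scanner/vector packages are not needed):

        package main

        import (
            "fmt"
            "io"
            "os"
        )

        func main() {
            var nums []int
            for {
                var n int
                // Fscan skips whitespace, including newlines, and parses the next token.
                _, err := fmt.Fscan(os.Stdin, &n)
                if err != nil {
                    if err != io.EOF {
                        fmt.Fprintln(os.Stderr, "scan error:", err)
                    }
                    break
                }
                nums = append(nums, n)
            }
            fmt.Println(nums, len(nums))
        }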

    Read the article

  • How to keep windows from paging block of memory

    - by photo_tom
    We are working on a Vista/Windows 7 application that will be running in 64-bit mode, using VS2008/C++. For performance reasons, we will need to cache hundreds of 2-3 MB blobs of data in RAM, up to some memory limit. Our usage profile is such that we cannot read the data in fast enough if it is all on disk. Cached memory usage will be larger than 1 GB. For this to work well, we need to ensure that Windows does not page this memory out, as that would defeat the purpose of what we are doing. I've done a fair amount of research and cannot find documentation that states exactly how to do this. I've seen several references that imply memory-mapped files work this way. Is there an expert who can clarify this for me? I'm aware there are other programs that we could adapt to do this, for example, splitting the blobs and loading them into memcache or in-memory databases, but they all have too many problems with performance or code complexity. Suggestions?
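    One Win32 route worth evaluating - my suggestion, not something from the thread - is VirtualLock, which pins committed pages into the process working set so the memory manager will not page them out. It usually has to be paired with SetProcessWorkingSetSize, because the default working-set quota is far below 1 GB. A minimal sketch:

        #include <windows.h>
        #include <iostream>

        int main() {
            const SIZE_T cacheBytes = 64 * 1024 * 1024; // 64 MB demo block; the real cache would be ~1 GB+

            // Raise the working-set quota first; VirtualLock fails beyond the current maximum.
            if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                          cacheBytes + (16 << 20),    // new minimum
                                          cacheBytes + (64 << 20))) { // new maximum
                std::cerr << "SetProcessWorkingSetSize failed: " << GetLastError() << "\n";
                return 1;
            }

            void* block = VirtualAlloc(NULL, cacheBytes, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
            if (!block) return 1;

            // Pin the pages so they stay resident.
            if (!VirtualLock(block, cacheBytes)) {
                std::cerr << "VirtualLock failed: " << GetLastError() << "\n";
                return 1;
            }

            // ... fill the block with blob data and serve reads from it ...

            VirtualUnlock(block, cacheBytes);
            VirtualFree(block, 0, MEM_RELEASE);
            return 0;
        }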

    Read the article

  • Iteration over a LINQ to SQL query is very slow

    - by devzero
    I have a view, AdvertView, in my database; this view is a simple join between some tables (advert, customer, properties). I have a simple LINQ query to fetch all adverts for a customer:

        public IEnumerable<AdvertView> GetAdvertForCustomerID(int customerID)
        {
            var advertList = from advert in _dbContext.AdvertViews
                             where advert.Customer_ID.Equals(customerID)
                             select advert;
            return advertList;
        }

    I then wish to map this to model items for my MVC application:

        public List<AdvertModelItem> GetAdvertsByCustomer(int customerId)
        {
            List<AdvertModelItem> lstAdverts = new List<AdvertModelItem>();
            List<AdvertView> adViews = _dataHandler.GetAdvertForCustomerID(customerId).ToList();
            foreach (AdvertView adView in adViews)
            {
                lstAdverts.Add(_advertMapper.MapToModelClass(adView));
            }
            return lstAdverts;
        }

    I was expecting to have some performance issues with the SQL, but the problem seems to be with the ToList() call. I'm using the ANTS performance profiler, and it reports that the total runtime of the function is 1,400 ms, 850 of which are spent in ToList(). So my question is: why does the ToList() call take such a long time here?
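    A likely explanation - a general property of LINQ to SQL, not something confirmed by the profile above - is that the query is deferred: ToList() is where the SQL actually executes and every AdvertView row is materialized, so the 850 ms is mostly the database round-trip plus materialization, not list construction. A sketch that maps inside the enumeration so the rows are only walked once (assumes a using System.Linq directive and that _advertMapper can be applied per element):

        public List<AdvertModelItem> GetAdvertsByCustomer(int customerId)
        {
            // The database query runs when ToList() enumerates; each materialized
            // AdvertView is mapped to its model item in the same single pass.
            return _dataHandler.GetAdvertForCustomerID(customerId)
                               .Select(adView => _advertMapper.MapToModelClass(adView))
                               .ToList();
        }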

    Read the article

  • Forming triangles from points and relations

    - by SiN
    Hello, I want to generate triangles from points and optional relations between them. Not all points form triangles, but many of them do. In the initial structure, I've got a database with the following tables:

        Nodes(id, value)
        Relations(id, nodeA, nodeB, value)
        Triangles(id, relation1_id, relation2_id, relation3_id)

    In order to generate triangles from the nodes and relations tables, I've used the following query:

        INSERT INTO Triangles
        SELECT t1.id, t2.id, t3.id
        FROM Relations t1, Relations t2, Relations t3
        WHERE t1.id < t2.id AND t3.id > t1.id
          AND (
                t1.nodeA = t2.nodeA
                AND (   t3.nodeA = t1.nodeB AND t3.nodeB = t2.nodeB
                     OR t3.nodeA = t2.nodeB AND t3.nodeB = t1.nodeB)
             OR t1.nodeA = t2.nodeB
                AND (   t3.nodeA = t1.nodeB AND t3.nodeB = t2.nodeA
                     OR t3.nodeA = t2.nodeA AND t3.nodeB = t1.nodeB)
              )

    It works perfectly on small data sets (roughly fewer than 50 points). In some cases, however, I've got around 100 points all related to each other, which leads to thousands of relations. So when the expected number of triangles is in the hundreds of thousands, or even in the millions, the query can take several hours. My main problem is not the SELECT itself; when I watch it execute in Management Studio, it is the returned results that are slow - around 2,000 rows per minute, which is not acceptable for my case. As a matter of fact, the number of operations grows combinatorially, and that is terribly affecting performance. I've tried doing it as LINQ to Objects from my code, but the performance was even worse. I've also tried using SqlBulkCopy on a reader over the result from C#, also with no luck. So the question is... any ideas or workarounds?
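    One workaround worth trying - my suggestion, not from the thread - is to give the triple self-join covering indexes, so each pair lookup becomes an index seek instead of a scan. A sketch, assuming SQL Server (Management Studio is mentioned above):

        CREATE INDEX IX_Relations_NodeA_NodeB ON Relations (nodeA, nodeB) INCLUDE (id);
        CREATE INDEX IX_Relations_NodeB_NodeA ON Relations (nodeB, nodeA) INCLUDE (id);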

    Read the article

  • Is it better to query the database or grab from a file? PHP & MySQL

    - by pfunc
    I am keeping a large number of words in a database that I want to match articles against. I was thinking that it would be better to keep these words in an array and grab that array whenever needed, instead of querying the database every time (since the words won't be changing that much). Is there much performance difference in doing this? And if I were to do this, how do I write a script that writes the array to a new PHP file? I tried writing the array like so:

        while ($row = mysql_fetch_assoc($query)) {
            $newArray[] = $row;
        }
        $fp = fopen('noWordsArr.php', 'w');
        fwrite($fp, $newArray);
        fclose($fp);

    But all I get in the other file is "Array". I figured I could write this and then have a cron job hit the file every few days or so in case things have changed. But I guess if there is no performance advantage then it probably won't be necessary, and I can just query the database every time I need to access the words.
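    The "Array" output is simply how PHP stringifies an array - fwrite() expects a string. A minimal sketch of one common approach, writing the array out as PHP source with var_export() and including it later (file and variable names are just illustrative):

        <?php
        // Build the array from the result set as before.
        $newArray = array();
        while ($row = mysql_fetch_assoc($query)) {
            $newArray[] = $row;
        }

        // var_export(..., true) returns valid PHP source for the array.
        $code = "<?php\nreturn " . var_export($newArray, true) . ";\n";
        file_put_contents('noWordsArr.php', $code);

        // Elsewhere, load the cached words without touching the database:
        $words = include 'noWordsArr.php';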

    Read the article

  • Multi-client C# ODBC (Sybase/Oracle/MSSQL) table access question.

    - by Hamish Grubijan
    I am working on a feature that would allow clients to pick a unique identifier (ci_name). The code below is a generic version that gets expanded to the right SQL depending on the vendor. Hopefully it makes sense.

        #include "sql.h"

        create table client_identification
        (
            ci_id   TYPE_ID IDENTITY,
            ci_name varchar(64) not null,
            constraint ci_pk primary key (ci_name)
        );
        go

        CREATE_SEQUENCE(ci_id)

    There will be simple stored procedures for adding, retrieving, and deleting these user records. This will be used by several admins. Changes will not happen very frequently, but there is still a possibility that something will have been added or deleted since the list was initially retrieved. I have not yet decided if I need to detect the case of a double delete, but the same user name cannot be created twice - the primary key constraint will object. I want to be able to detect this particular case and display something like: "you snooze, you lose." :) I would like to leverage the PK constraint instead of doing some extra SQL gymnastics. So, how can I detect this case cleanly, so that it works in MS SQL 2008, Sybase, and Oracle? I hope to do better than catch a general ODBC exception and parse out the text, looking for whatever Sybase, Oracle, and MSSQL would give me back. Oracle is a little different: we actually prepend these variables to the Oracle version of stored procedures because they are not available otherwise:

        Vret_val out number,
        Vtran_count in out number,
        Vmessage_count in out number,

    Thanks. General helpful tips and comments are welcome, except for naming convention ones (I do not have a choice here, plus I mangled the actual names a bit).
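    One portable angle - my suggestion, to be verified against each driver - is that ODBC reports integrity-constraint violations under SQLSTATE class 23 (e.g. 23000), so you can inspect the SQLSTATE instead of parsing vendor message text. A sketch in C# using System.Data.Odbc:

        using System.Data.Odbc;

        static bool IsConstraintViolation(OdbcException ex)
        {
            // SQLSTATE class "23" covers integrity-constraint violations; the
            // MSSQL, Sybase, and Oracle ODBC drivers are all expected to use it.
            foreach (OdbcError err in ex.Errors)
            {
                if (err.SQLState != null && err.SQLState.StartsWith("23"))
                    return true;
            }
            return false;
        }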

    Read the article

  • What are CDN Best Practices?

    - by Wild Thing
    Hi, I have recently started using the Rackspace Cloud Files CDN (Limelight), and I have some questions:

    I am using jQuery, jQuery UI, and jQuery Tools in addition to custom JS code. Also, my site is written in ASP.NET, which means there is some ASP.NET-generated JS code. Right now, I have combined all of the JS (including the jQuery code), except the ASP.NET-generated JS, into one file, which I am hosting on the Rackspace CDN. I am wondering if it would make more sense to get the jQuery and jQuery UI files from the Google-hosted CDN (which I suspect would serve these files very well, since they will already be in many users' caches)? This would mean one extra HTTP request, so I'm not sure it'll help.

    Right now I have multiple containers for my assets. For example, in Rackspace I have three containers: JS, CSS, and Images. The URL subdomain for all three is different. Will that lead to a performance penalty? Should I just use one container (and thus one domain for the CDN)?

    Is there a way of having the ASP.NET-generated JS loaded from the Microsoft CDN? Would this have a performance hit as per the above question?

    Thanks in advance, WT

    Read the article

  • Does running IIS7 in classic mode affect MVC output caching?

    - by Bob
    I have a need to run an application in classic mode for backwards compatibility with a specific application, and am trying to understand what kind of impact that will have on the performance of an MVC application running on the site. If we put a few static file mappings (for .js, .css, .png, etc.) above the ASP.NET wildcard map to reduce the amount of processing by the ASP.NET handler, will we be approaching integrated mode in terms of performance? The thing I'm primarily concerned with is any effect this might have on output caching. I understand that integrated mode might (?) allow the output cache to handle non-ASP.NET content, but that isn't really a concern. We're more interested in ensuring that the MVC application has full use of the output cache. Empirically, I've found that the two configurations perform on par when things go well, but if the page references resources that are not available, integrated mode tends to fail much more quickly than classic mode (e.g. 500 ms vs 10 seconds), reducing 'hang time' on the page load. Thanks for any feedback.

    Read the article

  • Multiple Solution Layout for ASP.NET Web Portal?

    - by Jared S
    At work, we've developed a custom ASP.NET web portal (that's very similar to iGoogle). We have "Apps" (self-contained, large web forms) and "Modules" (similar to Google Gadgets). Currently, we use a single-solution model. Right now, we have:

    - 3 core projects
    - 60 application projects
    - 80 module projects

    To reduce copy and pasting between projects, we're going to factor out common functionality (data access, business logic) into separate projects. I'd also like to introduce unit tests, which is going to increase the number of projects even more. We've already reached the point where Visual Studio is choking on the number of projects. We generally only load the 3 core projects and then whatever app's/module's project we're working on. Would a different solution structure help us out? Our number of projects is only going to increase. In general, an app or module only references the 3 core projects. Soon, apps/modules may start referencing the Data Access/Business Logic projects, but in general, apps and modules do not reference one another. So to recap: what is the best practice for solution structure when there are MANY projects that use a small number of core projects?

    Read the article

  • Dictionaries with more than one key per value in Python

    - by nickname
    I am attempting to create a nice interface to access a data set where each value has several possible keys. For example, suppose that I have both a number and a name for each value in the data set. I want to be able to access each value using either the number OR the name. I have considered several possible implementations:

    1. Using two separate dictionaries, one for the data values organized by number, and one for the data values organized by name.
    2. Simply assigning two keys to the same value in a dictionary.
    3. Creating dictionaries mapping each name to the corresponding number, and vice versa.
    4. Attempting to create a hash function that maps each name to a number, etc. (related to the above).
    5. Creating an object to encapsulate all three pieces of data, then using one key to map dictionary keys to the objects, and simply searching the dictionary to map the other key to the object.

    None of these seem ideal. The first seems ugly and unmaintainable. The second also seems fragile. The third and fourth seem plausible, but seem to require either much manual specification or an overly complex implementation. Finally, the fifth loses constant-time performance for one of the lookups. In C/C++, I believe I would use pointers to reference the same piece of data from different keys. I know the problem is rather similar to a database lookup by a non-key column; however, I would like (if possible) to maintain the approximate O(1) performance of Python dictionaries. What is the most Pythonic way to achieve this data structure?
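    For what it's worth, a minimal sketch of the shared-object variant (a combination of options 1 and 5; all names are mine): two dictionaries hold references to the same record objects, which is the Python analogue of the C/C++ pointer approach and keeps both lookups O(1).

        class Record:
            """One value together with both of its keys; the dicts below share the instance."""
            def __init__(self, number, name, value):
                self.number = number
                self.name = name
                self.value = value

        class MultiKeyStore:
            def __init__(self):
                self.by_number = {}
                self.by_name = {}

            def add(self, number, name, value):
                record = Record(number, name, value)
                # Both dicts reference the same Record object (like two pointers),
                # so lookups by either key are O(1) and updates are seen by both.
                self.by_number[number] = record
                self.by_name[name] = record

        store = MultiKeyStore()
        store.add(42, "answer", "some data")
        assert store.by_number[42] is store.by_name["answer"]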

    Read the article

  • Flash Player: Any remedy for the stale video image data problem (in a reused NetStream object)?

    - by amn
    Has anyone experienced stale stills of a previous playback in a reused NetStream object? If so, what are the workarounds, other than re-creating the object (which costs performance and time)? It is hard to reuse NetStream objects because of what is, in my opinion, a fundamental issue with them: when you 'close' a playing stream and at a later point issue a 'play' call on it again with a different name, the stream still appears to contain a stale image lingering from the previous playback, and this is of course displayed in the Video object for a moment - the moment, I assume, it takes for new stream data to become available from the server. Because of this behavior, to improve my users' visual experience, I simply discard a NetStream object after a playback session, assign a new NetStream object to the same variable, set it up, and play something else. It appears to work - no stale image - but what bugs me is that it's a workaround and costs performance (constructing and setting up the object again - event listeners, 'client' delegates, and more memory usage - NetStream objects are not garbage collected immediately; it takes some time). It would be really nice to REALLY be able to reuse a stream. I am thinking of something akin to the Video.clear method, but for the NetStream class. Am I missing something?

    Read the article

  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that I was having problems with my project as a result of not including the right dependencies for Spring, etc. There were .jars missing from my WEB-INF/lib folder - yes, silly me. After a while, I managed to get all the .jar files where they belong, and they come to a whopping 12.5 MB, with more than 30 of them! This concerns me, though it probably, and hopefully, shouldn't. How does Java operate in terms of these JAR files? They take up quite a bit of hard drive space for compressed, compiled code, so it seems they could quickly consume a lot of RAM. My questions are:

    - Does Java load an entire .jar file into memory when, say, a class in that .jar is instantiated? What about stuff in the .jar that never gets used?
    - Do .jars get cached somehow, for optimized application performance?
    - When a single .jar is loaded, I understand that it sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the server instance), unlike PHP, where objects are created on the fly with each request. Is this assumption correct?
    - When using Spring, I had to include all those fiddly .jars; wouldn't I just be better off using plain Java, with at least an ORM solution like Hibernate?

    So far, Spring has taken extra time to configure, plus extra hard drive space, memory, and CPU consumption, so I'm concerned that the framework is going to cost too much application performance just to get, for example, IoC implemented with my BlazeDS server. There's still an ORM, a unit testing framework, and bits and pieces here and there to come. It's just so easy to bloat up a project quickly and irresponsibly. Where do I draw the line?

    Read the article

  • Python vs all the major professional languages [closed]

    - by Matt
    I've been reading up a lot lately on comparisons between Python and a bunch of the more traditional professional languages - C, C++, Java, etc. - mainly trying to find out if it's as good as those would be for my own purposes. I can't get the thought out of my head that it isn't good for 'real' programming tasks beyond automation and macros. Anyway, the general idea I got from a couple of hundred forum threads and blog posts is that for general, non-professional-level programs, scripts, and apps, as long as it's a single programmer (you) writing it, a given program can be written more quickly and efficiently with Python than with pretty much any other language. But once it's big enough to require multiple programmers, or more complex than a regular (read: non-professional) person would have any business making, it pretty much becomes instantly inferior to a million other languages. Is this idea more or less accurate? (I'm learning Python as my first language and want to be able to make any small app that I want, but I plan on learning C eventually too, because I want to get into driver writing. So I've been trying to research each one's strengths and weaknesses as much as I can.) Anyway, thanks for any input.

    Read the article

  • Basic HTML Internal Link not working in IE8

    - by Eamonn
    This one has me scratching my head. I have a page with several internal links bringing the user down the page to further content. Beneath various sections there are further internal links bringing the user back to the top. This works fine in all browsers apart from IE8 (actual IE8, not IE9 in IE8 mode), where the links going down the page work, but the ones coming back up do not! Examples:

        <a href="page.php?v=35&amp;u=admin-videos#a">A &ndash; General</a>
        .... //travelling DOWN the page
        <h3><strong>A &ndash; General<a name="a"></a></strong></h3>

        <a name="top"></a>
        .... //travelling back up
        <a href="page.php?v=35&amp;u=admin-videos#top">Back to top.</a>

    I've tried filling the 'top' anchor with &nbsp;, but that hasn't changed anything. Any ideas? Actual use case: http://databizsolutions.ie/contents/page.php?v=35&u=admin-videos
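    One thing worth trying - a general HTML practice, not a confirmed fix for this page - is to target id attributes on existing, non-empty elements instead of empty <a name> anchors, which IE8 is known to be fussy about:

        <!-- give the top-of-page element an id instead of an empty named anchor -->
        <div id="top"> ...page header... </div>

        <a href="page.php?v=35&amp;u=admin-videos#a">A &ndash; General</a>

        <h3 id="a"><strong>A &ndash; General</strong></h3>

        <a href="page.php?v=35&amp;u=admin-videos#top">Back to top.</a>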

    Read the article

  • How can I optimize this loop?

    - by Moshe
    I've got a piece of code that returns a super-long string representing "search results". Each result is separated by a double HTML break symbol. For example:

        Result1<br><br>Result2<br><br>Result3

    I've got the following loop that takes each result and puts it into an array, stripping out the break indicator kBreakIndicator (<br><br>). The problem is that this loop takes way too long to execute. With a few results it's fine, but once you hit a hundred results, it's about 20-30 seconds slower. That's unacceptable performance. What can I do to improve it? Here's my code (content is the original NSString):

        NSMutableArray *results = [[NSMutableArray alloc] init];
        // Loop through the string of results and take each result and put it into an array
        while (![content isEqualToString:@""]) {
            NSRange rangeOfResult = [content rangeOfString:kBreakIndicator];
            NSString *temp = (rangeOfResult.location != NSNotFound) ? [content substringToIndex:rangeOfResult.location] : nil;
            if (temp) {
                [results addObject:temp];
                content = [[[content stringByReplacingOccurrencesOfString:[NSString stringWithFormat:@"%@%@", temp, kBreakIndicator] withString:@""] mutableCopy] autorelease];
            } else {
                [results addObject:[content description]];
                content = [[@"" mutableCopy] autorelease];
            }
        }
        // Do something with the results array.
        [results release];
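    The loop above is quadratic: every iteration rescans the remaining string and rewrites it with stringByReplacingOccurrencesOfString:. NSString can split the whole string in one linear pass; a minimal sketch of that replacement (keeping the manual memory management style of the original):

        // componentsSeparatedByString: splits the entire string in one pass,
        // with no repeated rescan-and-rewrite of the shrinking remainder.
        NSArray *parts = [content componentsSeparatedByString:kBreakIndicator];
        NSMutableArray *results = [parts mutableCopy];
        // Do something with the results array.
        [results release];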

    Read the article

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and deals with reading a large number of disk files. The total load can be more than 3 GB. There is a custom memory manager that uses memory-mapped files to achieve reading of such a huge amount of data. The files are mapped into the process memory space only when needed, so the process memory stays well under control. But what we observe is that, with memory mapping, the system cache keeps growing until it occupies all the available physical memory. This slows down the entire system. My question is: how do I prevent the system cache from hogging the physical memory? I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING), but then the read operations take a considerable amount of time and slow down application performance. How can I achieve scalability without sacrificing much performance? What are the common techniques used in such cases? I don't have a good understanding of the WinXP OS caching behavior. Any good links explaining it would also be helpful.

    Read the article

  • Is it possible to create a null function that will not produce warnings?

    - by bbazso
    I have a logger in a C++ application that uses defines as follows:

        #define FINEST(...) Logger::Log(FINEST, __FILE__, __LINE__, __func__, __VA_ARGS__)

    However, I would like to be able to switch off these logs, since they have a serious performance impact on my system. And it's not sufficient to simply have my Logger not write to the system log; I really need to get rid of the code produced by the logs. In order to do this, I changed the define to:

        #define FINEST(...)

    This works, but it produces a whole bunch of warnings in my code, since variables are now unused. So what I would like to have is a sort of NULL FUNCTION that would not actually exist, but would not produce warnings for the unused variables. Said another way, I would like it to compile with no warnings (i.e. the compiler thinks the variables are used by a function) but with the function not actually existing in the application (i.e. no performance hit). Is this possible? Thanks!
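    A common pattern for this - a sketch of the general technique, with printf standing in for Logger::Log - is to expand the macro to a call inside a provably dead branch. The arguments still count as "used", so the unused-variable warnings go away, and any optimizing build removes the branch entirely. One caveat: side effects in the arguments are silently skipped too.

        #include <cstdio>

        // Disabled-logging variant: arguments are type-checked and "used",
        // but the branch is unreachable, so no call is emitted in optimized builds.
        #define FINEST(...) do { if (false) std::printf(__VA_ARGS__); } while (0)

        int main() {
            int attempts = 3;   // would warn as unused with an empty FINEST(...)
            FINEST("tried %d times\n", attempts);
            return 0;
        }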

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on it. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16 GB RAM, 10 TB of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server. We just upgraded from a machine with only 4 GB RAM but similar hard drives, RAID, etc. It also ran Apache, so it was both our DB server and our application server. It was getting a little slow, so we moved the DB onto this new machine and kept the application server on the first one. We also distributed the application load amongst a few of our slave servers, which also run the application.

    The problem is that the new DB server has mysqld.exe consuming 95-100% of CPU almost all the time, and it is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting config files for small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup. Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actually executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements if you
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a disk
        # based table. This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE, ALTER TABLE statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition for physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.
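    Since the box is MyISAM-only, one quick sanity check worth running before tuning further - my suggestion, not from the thread - is to see whether the key cache and query cache are actually effective; a high ratio of Key_reads to Key_read_requests means even the 3 GB key buffer is thrashing:

        SHOW GLOBAL STATUS LIKE 'Key_read%';
        SHOW GLOBAL STATUS LIKE 'Qcache%';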

    Read the article

  • When to use reinterpret_cast?

    - by HeretoLearn
    I am a little confused about the applicability of reinterpret_cast vs static_cast. From what I have read, the general rule is to use static_cast when the types can be interpreted at compile time, hence the word "static". This is also the cast the C++ compiler uses internally for implicit casts. reinterpret_cast is applicable in two scenarios: converting integer types to pointer types and vice versa, or converting one pointer type to another. The general idea I get is that this is unportable and should be avoided. Where I am a little confused is one usage which I need: I am calling C++ from C, and the C code needs to hold on to the C++ object, so basically it holds a void*. What cast should be used to convert between the void* and the class type? I have seen usage of both static_cast and reinterpret_cast. From what I have been reading, it appears static_cast is better, as the cast can happen at compile time - though it also says to use reinterpret_cast to convert from one pointer type to another?
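    For the C-holds-a-C++-object case specifically, the round trip through void* is well defined with static_cast: converting a T* to void* and back to the same T* is guaranteed to yield the original pointer, so reinterpret_cast is not needed. A minimal sketch (names are illustrative):

        #include <iostream>

        class Widget {
        public:
            void Poke() { std::cout << "poked\n"; }
        };

        // Handed to C code as an opaque handle.
        extern "C" void* widget_create() {
            return static_cast<void*>(new Widget); // T* -> void* (also happens implicitly)
        }

        extern "C" void widget_poke(void* handle) {
            // Casting back to the *same* type the void* came from round-trips exactly.
            static_cast<Widget*>(handle)->Poke();
        }

        extern "C" void widget_destroy(void* handle) {
            delete static_cast<Widget*>(handle);
        }

        int main() {
            void* h = widget_create();
            widget_poke(h);
            widget_destroy(h);
        }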

    Read the article

  • Why doesn't String's hashCode() cache 0?

    - by polygenelubricants
    I noticed in the Java 6 source code for String that hashCode only caches values other than 0. The difference in performance is exhibited by the following snippet:

        public class Main {
            static void test(String s) {
                long start = System.currentTimeMillis();
                for (int i = 0; i < 10000000; i++) {
                    s.hashCode();
                }
                System.out.format("Took %d ms.%n", System.currentTimeMillis() - start);
            }

            public static void main(String[] args) {
                String z = "Allocator redistricts; strict allocator redistricts strictly.";
                test(z);
                test(z.toUpperCase());
            }
        }

    Running this on ideone.com gives the following output:

        Took 1470 ms.
        Took 58 ms.

    So my questions are:

    1. Why doesn't String's hashCode() cache 0?
    2. What is the probability that a Java string hashes to 0?
    3. What's the best way to avoid the performance penalty of recomputing the hash value every time for strings that hash to 0?
    4. Is this the best-practice way of caching values? (i.e. cache all except one?)

    For your amusement, each line here is a string that hashes to 0:

        pollinating sandboxes
        amusement & hemophilias
        schoolworks = perversive
        electrolysissweeteners.net
        constitutionalunstableness.net
        grinnerslaphappier.org
        BLEACHINGFEMININELY.NET
        WWW.BUMRACEGOERS.ORG
        WWW.RACCOONPRUDENTIALS.NET
        Microcomputers: the unredeemed lollipop...
        Incentively, my dear, I don't tessellate a derangement.
        A person who never yodelled an apology, never preened vocalizing transsexuals.
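    On question 4, one way to cache every value including 0 - a sketch of the general technique, not what the JDK does - is to track validity in a separate flag instead of using 0 as the "not yet computed" sentinel; the trade-off is an extra field per instance, which is presumably why String doesn't:

        final class HashedKey {
            private final String value;
            private int hash;            // cached hash; meaningful only when hashCached is true
            private boolean hashCached;  // separate flag, so 0 is a legitimate cached value

            HashedKey(String value) {
                this.value = value;
            }

            @Override
            public int hashCode() {
                if (!hashCached) {
                    hash = value.hashCode();
                    hashCached = true;   // benign race, same idea as String's own caching
                }
                return hash;
            }
        }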

    Read the article

  • own drawImage / drawLine in OpenGL

    - by Chrise
    I'm implementing some native 2D draw functions in my graphics engine for Android, but now another question comes up as I observe the performance of my program. At the moment I'm implementing drawLine/drawImage functions. In summary, the following values differ for each line/image drawn:

    - the color
    - the alpha value
    - the width of the line
    - rotation (only for images)
    - size/scale (also for images)
    - blending method (subtract, add, normal-alpha)

    Now, when an image line is drawn, I put the CPU-calculated vertex positions and UV values for 6 vertices (2 triangles) into a FloatBuffer and draw it immediately with drawArrays, after passing the drawing information (color, alpha, etc.) to the shader via uniforms. When I draw an image, the pre-built VBO is drawn directly after passing that information. The first thing I noticed is, of course, that drawing images is much faster than image lines (because of VBOs), but also: I cannot pre-load vertex data for image lines into a VBO, because image lines have no static shape like normal images do (the line length and width vary, and the vertex positions x1,y1 and x2,y2 change too often). That's why I use a normal FloatBuffer instead of a VBO. So my question is: what's the best way of managing images and other 2D graphics functions? It's important to me that a user of the engine be able to draw as many images/2D graphics as possible without losing too much performance. You can find the functions for drawing images, image lines, rects, quads, etc. here: https://github.com/Chrise55/LLama3D/blob/master/Llama3DLibrary/src/com/llama3d/object/graphics/image/ImageBase.java Here is an example of how it looks with many images (testing artificial neural networks); it works fine, but is already a little slow with that many images... :(

    Read the article

  • Multithreading for loop while maintaining order

    - by David
    I started messing around with multithreading for a CPU-intensive batch process I'm running. Essentially I'm trying to condense multiple single-page TIFFs into single PDF documents. This works fine with a foreach loop or standard iteration, but can be very slow for documents of several hundred pages. I tried the following, based on some examples I found, to use multithreading, and it shows significant performance improvements; however, it obliterates the page order: instead of 1,2,3,4 it will be 1,3,4,2,6,5, in whatever order the threads complete. My question is: how would I utilize this technique while maintaining the page order, and if I can, will that negate the performance benefit of the multithreading? Thank you in advance.

        PdfDocument doc = new PdfDocument();
        string mail = textBox1.Text;
        string[] split = mail.Split(new string[] { Environment.NewLine }, StringSplitOptions.None);
        int counter = split.Count();

        // Source must be array or IList.
        var source = Enumerable.Range(0, 100000).ToArray();

        // Partition the entire source array.
        var rangePartitioner = Partitioner.Create(0, counter);
        double[] results = new double[counter];

        // Loop over the partitions in parallel.
        Parallel.ForEach(rangePartitioner, (range, loopState) =>
        {
            // Loop over each range element without a delegate invocation.
            for (int i = range.Item1; i < range.Item2; i++)
            {
                f_prime = split[i].Replace(" ", "");
                PdfPage page = doc.AddPage();
                XGraphics gfx = XGraphics.FromPdfPage(page);
                XImage image = XImage.FromFile(f_prime);
                double x = 0;
                gfx.DrawImage(image, x, 0);
            }
        });
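    One way to keep the order - my sketch, reusing the variables from the snippet above - is to do only the per-page CPU work in parallel, writing each result into its own slot of an indexed array, and then add the pages on a single thread; the PdfDocument is assumed not to be thread-safe, and the AddPage() call order is what determines the page order. Whether the speedup survives depends on how much of the CPU time is in the per-page work rather than in AddPage/DrawImage.

        // Phase 1: per-page work in parallel; slot i preserves the page order.
        string[] files = new string[counter];
        Parallel.For(0, counter, i =>
        {
            files[i] = split[i].Replace(" ", "");
            // ...any other CPU-heavy, document-independent work per page...
        });

        // Phase 2: sequential, ordered assembly of the document.
        for (int i = 0; i < counter; i++)
        {
            PdfPage page = doc.AddPage();
            using (XGraphics gfx = XGraphics.FromPdfPage(page))
            {
                XImage image = XImage.FromFile(files[i]);
                gfx.DrawImage(image, 0, 0);
            }
        }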

    Read the article

  • C#: Any faster way of copying arrays?

    - by Yang
    I have three arrays that need to be combined into one two-dimensional array (one row per element, three columns). The following code shows slow performance in Performance Explorer. Is there a faster solution?

        for (int i = 0; i < sortedIndex.Length; i++) {
            if (i < num_in_left) {
                // add instance to the left child
                leftnode[i, 0] = sortedIndex[i];
                leftnode[i, 1] = sortedInstances[i];
                leftnode[i, 2] = sortedLabels[i];
            } else {
                // add instance to the right child
                rightnode[i - num_in_left, 0] = sortedIndex[i];
                rightnode[i - num_in_left, 1] = sortedInstances[i];
                rightnode[i - num_in_left, 2] = sortedLabels[i];
            }
        }

    Update: I'm actually trying to do the following:

        // given three 1d arrays
        double[] sortedIndex, sortedInstances, sortedLabels;

        // copy them over to a 2d array (forget about the rightnode for now)
        double[,] leftnode = new double[sortedIndex.Length, 3];

        // some magic happens here so that
        // leftnode = {sortedIndex, sortedInstances, sortedLabels};
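    If the layout can be transposed to [3, n] instead of [n, 3] - an assumption; the consumer would then have to index leftnode[column, i] - each source array maps onto one contiguous block of the 2-D array, and Buffer.BlockCopy can move it at memcpy speed:

        int n = sortedIndex.Length;
        double[,] leftnode = new double[3, n];     // row 0 = indices, 1 = instances, 2 = labels
        int bytes = n * sizeof(double);
        Buffer.BlockCopy(sortedIndex,     0, leftnode, 0 * bytes, bytes);
        Buffer.BlockCopy(sortedInstances, 0, leftnode, 1 * bytes, bytes);
        Buffer.BlockCopy(sortedLabels,    0, leftnode, 2 * bytes, bytes);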

    Read the article
