Search Results

Search found 3828 results on 154 pages for 'mathematical optimization'.

Page 141/154 | < Previous Page | 137 138 139 140 141 142 143 144 145 146 147 148  | Next Page >

  • Is this a good KVO-compliant way to model a mutable to-many relationship?

    - by andyvn22
    Say I'd like a mutable, unordered to-many relationship. For internal optimization reasons, it'd be best to store this in an NSMutableDictionary rather than an NSMutableSet. But I'd like to keep that implementation detail private. I'd also like to provide some KVO-compliant accessors, so:

        - (NSSet*)things;
        - (NSUInteger)countOfThings;
        - (void)addThings:(NSSet*)someThings;
        - (void)removeThings:(NSSet*)someThings;

    Now, it'd be convenient and less evil to provide accessors (private ones, of course, in my implementation file) for the dictionary as well, so:

        @interface MYClassWithThings ()
        @property (retain) NSMutableDictionary* keyedThings;
        @end

    This seems good to me! I can use accessors to mess with my keyedThings within the class, but other objects think they're dealing with a mutable, unordered (, unkeyed!) to-many relationship. I'm concerned that several things I'm doing may be "evil" though, according to good style and Apple approval and whatnot. Have I done anything evil here? (For example, is it wrong not to provide setThings, since the things property is supposedly mutable?)

    Read the article

  • How to use the Request URL/URL Rewriting For Localization in ASP.NET - Using an HTTP Module or Global.asax

    - by LocalizedUrlDMan
    I wanted to see if there is a way to use the request URL/URL rewriting to set the language a page is rendered in by examining a portion of the URL in ASP.NET. We have a site that already works with ASP.NET's resource localization, and users can change the language that they see pages/resources on the site in; however, the current mechanism is not very search engine friendly, since the variations for each language all appear as one page. It would be much better if we could have pages like www.site.com/en-mx/realfolder/realpage.aspx that allow linking to culture-specific versions of a page. I know lots of people have likely done localization through URL structures before, and I wanted to know if one of you could share how to do this in the Global.asax file or with an HTTP Module (pointers to blog postings would be great too). We have a restriction that the site is based on ASP.NET 2.0 (so we can't use the 3.5+ features yet).

    Here is the example scenario: A real page exists at www.site.com/realfolder/realpage.aspx. The page has a mechanism for the user to change the language it is displayed in via a dropdown. There are search engine optimization and link-sharing benefits to doing this, since people can link directly to a page whose content is applicable to a certain language (this could also include right-to-left layouts for languages such as Arabic). I would like to use an HTTP module to see if the first part of the URL after www.site.com, site.com, subdomain.site.com, etc. contains a valid culture code (e.g. en-us, es-mx), and then use that value to set the localization culture of the page/resources based on that URL. So if the user accesses the URL www.site.com/en-MX/realfolder/realpage.aspx, the page will render in Mexico's variant of Spanish. If the user goes to www.site.com/realfolder/realpage.aspx directly, the page would just use their browser's language settings.
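
    One way to wire this up in ASP.NET 2.0 is an IHttpModule that inspects the first URL segment, remembers the culture, and rewrites the path so the rest of the site never sees the prefix. The sketch below only illustrates the idea (the class name and the "UrlCulture" item key are made up; actually applying the culture is left to Page.InitializeCulture or a later pipeline event):

        using System;
        using System.Globalization;
        using System.Web;

        public class CultureUrlModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += new EventHandler(OnBeginRequest);
            }

            private static void OnBeginRequest(object sender, EventArgs e)
            {
                HttpContext ctx = ((HttpApplication)sender).Context;
                // e.g. /en-MX/realfolder/realpage.aspx
                string[] segments = ctx.Request.Path.TrimStart('/').Split('/');
                if (segments.Length > 1 && IsCultureCode(segments[0]))
                {
                    // Stash the culture for Page.InitializeCulture (or PreRequestHandlerExecute)
                    // and hide the prefix from the rest of the pipeline.
                    ctx.Items["UrlCulture"] = segments[0];
                    ctx.RewritePath("/" + string.Join("/", segments, 1, segments.Length - 1));
                }
            }

            private static bool IsCultureCode(string candidate)
            {
                try { CultureInfo.GetCultureInfo(candidate); return true; }
                catch (ArgumentException) { return false; }
            }

            public void Dispose() { }
        }

    The module would still need to be registered under <httpModules> in web.config, and the page (or a PreRequestHandlerExecute handler) would set Thread.CurrentThread.CurrentUICulture/CurrentCulture from the stored value, falling back to the browser's Accept-Language header when no prefix is present.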

    Read the article

  • Is it possible to implement an infinite IEnumerable without using yield with only C# code?

    - by sinelaw
    Edit: Apparently off topic... moving to Programmers.StackExchange.com. This isn't a practical problem, it's more of a riddle.

    Problem: I'm curious to know if there's a way to implement something equivalent to the following, but without using yield:

        IEnumerable<T> Infinite<T>()
        {
            while (true) { yield return default(T); }
        }

    Rules:
    - You can't use the yield keyword.
    - Use only C# itself directly - no IL code, no constructing dynamic assemblies, etc.
    - You can only use the basic .NET lib (only mscorlib.dll, System.Core.dll? not sure what else to include). However, if you find a solution with some of the other .NET assemblies (WPF?!), I'm also interested.
    - Don't implement IEnumerable or IEnumerator.

    Notes: The closest I've come yet:

        IEnumerable<int> infinite = null;
        infinite = new int[1].SelectMany(x => new int[1].Concat(infinite));

    This is "correct" but hits a StackOverflowException after 14399 iterations through the enumerable (not quite infinite). I'm thinking there might be no way to do this due to the CLR's lack of tail recursion optimization. A proof would be nice :)

    Read the article

  • Most efficient way to check for DBNull and then assign to a variable?

    - by ilitirit
    This question comes up occasionally but I haven't seen a satisfactory answer. A typical pattern is (row is a DataRow):

        if (row["value"] != DBNull.Value)
        {
            someObject.Member = row["value"];
        }

    My first question is which is more efficient (I've flipped the condition):

        row["value"] == DBNull.Value;               // Or
        row["value"] is DBNull;                     // Or
        row["value"].GetType() == typeof(DBNull)    // Or... any suggestions?

    This indicates that .GetType() should be faster, but maybe the compiler knows a few tricks I don't? Second question: is it worth caching the value of row["value"], or does the compiler optimize the indexer away anyway? E.g.:

        object valueHolder;
        if (DBNull.Value == (valueHolder = row["value"])) {}

    Disclaimers: row["value"] exists; I don't know the column index of the column (hence the column name lookup); and I'm asking specifically about checking for DBNull and then assignment (not about premature optimization, etc).

    Edit: I benchmarked a few scenarios (time in seconds, 10000000 trials):

        row["value"] == DBNull.Value:             00:00:01.5478995
        row["value"] is DBNull:                   00:00:01.6306578
        row["value"].GetType() == typeof(DBNull): 00:00:02.0138757

    Object.ReferenceEquals has the same performance as "==". The most interesting result? If you mismatch the name of the column by case (e.g. "Value" instead of "value"), it takes roughly ten times longer (for a string):

        row["Value"] == DBNull.Value:             00:00:12.2792374

    The moral of the story seems to be that if you can't look up a column by its index, then ensure that the column name you feed to the indexer matches the DataColumn's name exactly. Caching the value also appears to be nearly twice as fast:

        No Caching:   00:00:03.0996622
        With Caching: 00:00:01.5659920

    So the most efficient method seems to be:

        object temp;
        string variable;
        if (DBNull.Value != (temp = row["value"]))
        {
            variable = temp.ToString();
        }

    This was a good learning experience.
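
    For readers who just want the pattern rather than the benchmark, a small helper along these lines (the class name and the string-only focus are illustrative, not from the original post) keeps the cached lookup and the DBNull check in one place:

        using System;
        using System.Data;

        static class DataRowHelper
        {
            // Reads the column once (a single indexer lookup, cached in a local),
            // then maps DBNull to null instead of letting the cast throw.
            public static string GetStringOrNull(DataRow row, string columnName)
            {
                object value = row[columnName];
                return value == DBNull.Value ? null : (string)value;
            }
        }

    Called as string variable = DataRowHelper.GetStringOrNull(row, "value"); it also keeps the exact-case column name in a single spot, which matters given the case-mismatch timings above.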

    Read the article

  • Load Balancing of PHP/MYSQL script without big code changes

    - by DR.GEWA
    Sorry for my dummy question, but... I am making a script on PHP/MySQL (CodeIgniter) and I am extremely interested in knowing whether there is a way, without big architectural changes to the script, to do load balancing. For example, right now I will rent a medium dedicated server with 2GB of RAM, 200GB of storage and a good processor, and this will be enough for, let's say, half a year of incoming users. But as they become more and more numerous - it's a social network, and at night the server is expected to have 500-1500 or 5000-8000 users online - I wonder if there is a way to just add a second server with some configuration that will bear the next wave of pressure, then another one after that, and so on...

        <?
        if ($answer == YES) {
            how(??);
        } else {
            whatToDo(??);
        }
        ?>

    If there is no way, then maybe you could point me to the easiest load balancing solution... I will be extremely thankful if you can tell me whether, for such purposes, I should move to, let's say, PostgreSQL or Firebird. Which of them will be easier to handle in the future? On the mysite.com/users/show/$userId page I am currently running something like 60 queries for all the data... maybe too much, but anyway... after some optimization it can be 20-30.

    Read the article

  • Design Decision - Scaling out web based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with and is expected to grow to 50M users within a couple of months (not concurrent users, though). I would like an architecture that can be scaled out easily without much effort. To explain, let me use a trivial scenario: user entities and services such as CreateUser, AuthenticateUser, etc. start out as simple method calls from the page controllers. Once traffic increases, authenticating users (and similar services related to user entities) has to be moved out to a different internal server to spread the load. At the same time, using RPC calls over the network when the user count is only 40K would be overkill. My proposal is to use IPC initially and, when we need to scale out, switch internally to TCP-based RPC calls. For example, I am thinking of System.IO.Pipes.NamedPipeServerStream to start with, moving to a TcpListener later on. If we have a design that properly encapsulates this approach, it would be easy for us to scale services out onto multiple network servers while avoiding network calls when the user count is small, as sketched below. Is this the best approach? Any suggestions would be great. Note: database scaling is definitely a second-phase optimization; we already have an architectural design in place to easily partition data when traffic increases. The primary bottleneck over time will be the application servers.
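
    A minimal sketch of the encapsulation being proposed (all names here are hypothetical, and the remote transport is left unimplemented): controllers program against an interface, and the in-process implementation can later be swapped for a pipe- or TCP-backed proxy without touching callers.

        public interface IUserService
        {
            void CreateUser(string userName, string password);
            bool AuthenticateUser(string userName, string password);
        }

        // Phase 1: plain in-process calls while the user count is small.
        public class InProcessUserService : IUserService
        {
            public void CreateUser(string userName, string password) { /* write to the local store */ }
            public bool AuthenticateUser(string userName, string password) { /* check the local store */ return false; }
        }

        // Phase 2: same contract, but each call is serialized and sent to an internal server
        // over NamedPipeClientStream/NamedPipeServerStream or TcpClient/TcpListener.
        public class RemoteUserService : IUserService
        {
            private readonly string endpoint;
            public RemoteUserService(string endpoint) { this.endpoint = endpoint; }

            public void CreateUser(string userName, string password) { /* marshal the call to endpoint */ }
            public bool AuthenticateUser(string userName, string password) { /* marshal the call to endpoint */ return false; }
        }

        // Page controllers only ever see the interface; swapping transports is a one-line change here.
        public static class Services
        {
            public static readonly IUserService Users = new InProcessUserService();
        }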

    Read the article

  • pinpointing "conditional jump or move depends on uninitialized value(s)" valgrind message

    - by kamziro
    So I've been getting some mysterious uninitialised-value messages from valgrind, and it's been quite the mystery where the bad value originated from. It seems that valgrind shows the place where the uninitialised value ends up being used, but not the origin of the uninitialised value.

        ==11366== Conditional jump or move depends on uninitialised value(s)
        ==11366== at 0x43CAE4F: __printf_fp (in /lib/tls/i686/cmov/libc-2.7.so)
        ==11366== by 0x43C6563: vfprintf (in /lib/tls/i686/cmov/libc-2.7.so)
        ==11366== by 0x43EAC03: vsnprintf (in /lib/tls/i686/cmov/libc-2.7.so)
        ==11366== by 0x42D475B: (within /usr/lib/libstdc++.so.6.0.9)
        ==11366== by 0x42E2C9B: std::ostreambuf_iterator<char, std::char_traits<char> > std::num_put<char, std::ostreambuf_iterator<char, std::char_traits<char> > >::_M_insert_float<double>(std::ostreambuf_iterator<char, std::char_traits<char> >, std::ios_base&, char, char, double) const (in /usr/lib/libstdc++.so.6.0.9)
        ==11366== by 0x42E31B4: std::num_put<char, std::ostreambuf_iterator<char, std::char_traits<char> > >::do_put(std::ostreambuf_iterator<char, std::char_traits<char> >, std::ios_base&, char, double) const (in /usr/lib/libstdc++.so.6.0.9)
        ==11366== by 0x42EE56F: std::ostream& std::ostream::_M_insert<double>(double) (in /usr/lib/libstdc++.so.6.0.9)
        ==11366== by 0x81109ED: Snake::SnakeBody::syncBodyPos() (ostream:221)
        ==11366== by 0x810B9F1: Snake::Snake::update() (snake.cpp:257)
        ==11366== by 0x81113C1: SnakeApp::updateState() (snakeapp.cpp:224)
        ==11366== by 0x8120351: RoenGL::updateState() (roengl.cpp:1180)
        ==11366== by 0x81E87D9: Roensachs::update() (rs.cpp:321)

    As can be seen, it gets quite cryptic, especially because when it says "by Class::MethodX", it sometimes points straight to ostream, etc. Perhaps this is due to optimization?

        ==11366== by 0x81109ED: Snake::SnakeBody::syncBodyPos() (ostream:221)

    Just like that. Is there something I'm missing? What is the best way to catch bad values without having to resort to super-long printf detective work?

    Read the article

  • MySQL: order by and limit gives wrong result

    - by Larry K
    MySQL ver 5.1.26

    I'm getting the wrong result with a select that has where, order by and limit clauses. It's only a problem when the order by uses the id column. I saw the MySQL manual page on LIMIT optimization. My guess from reading the manual is that there is some problem with the index on the primary key, id. But I don't know where I should go from here... Question: what should I do to best solve the problem?

    Works correctly:

        mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC;
        +------+---------------------+
        | id   | created_at          |
        +------+---------------------+
        | 1336 | 2010-05-14 08:05:25 |
        | 1334 | 2010-05-06 08:05:25 |
        | 1331 | 2010-05-05 23:18:11 |
        +------+---------------------+
        3 rows in set (0.00 sec)

    WRONG result when limit added! Should be the first row, id 1336:

        mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC limit 1;
        +------+---------------------+
        | id   | created_at          |
        +------+---------------------+
        | 1331 | 2010-05-05 23:18:11 |
        +------+---------------------+
        1 row in set (0.00 sec)

    Works correctly:

        mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY created_at DESC;
        +------+---------------------+
        | id   | created_at          |
        +------+---------------------+
        | 1336 | 2010-05-14 08:05:25 |
        | 1334 | 2010-05-06 08:05:25 |
        | 1331 | 2010-05-05 23:18:11 |
        +------+---------------------+
        3 rows in set (0.01 sec)

    Works correctly with limit:

        mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY created_at DESC limit 1;
        +------+---------------------+
        | id   | created_at          |
        +------+---------------------+
        | 1336 | 2010-05-14 08:05:25 |
        +------+---------------------+
        1 row in set (0.01 sec)

    Additional info:

        mysql> explain SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC limit 1;
        +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+
        | id | select_type | table            | type  | possible_keys                        | key                                  | key_len | ref  | rows | Extra       |
        +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+
        |  1 | SIMPLE      | billing_invoices | range | index_billing_invoices_on_account_id | index_billing_invoices_on_account_id | 4       | NULL |    3 | Using where |
        +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+

    Read the article

  • Project Euler problem 214: How can I make it more efficient?

    - by Once
    I am becoming more and more addicted to the Project Euler problems. However, for a week now I have been stuck on #214. Here is a short version of the problem: PHI() is Euler's totient function, i.e. for any given integer n, PHI(n) = the number of k <= n for which gcd(k, n) = 1. We can iterate PHI() to create a chain. For example, starting from 18: PHI(18) = 6, PHI(6) = 2, PHI(2) = 1. So starting from 18 we get a chain of length 4: (18, 6, 2, 1). The problem is to calculate the sum of all primes less than 40e6 which generate a chain of length 25.

    I built a function that calculates the chain length of any number and tested it for small values: it works well and fast.

        sum of all primes <= 20 which generate a chain of length 4: 12
        sum of all primes <= 1000 which generate a chain of length 10: 39383

    Unfortunately my algorithm doesn't scale well. When I apply it to the problem, it takes several hours to calculate... so I stop it, because Project Euler problems are meant to be solvable in less than one minute. I thought that my prime detection function might be slow, so I fed the program a list of primes < 40e6 to avoid the primality test... The code now runs a little bit faster, but there is still no way to get a solution in less than a few hours (and I don't want that). So is there any "magic trick" that I am missing here? I really don't understand how to be more efficient on this one... I am not asking for the solution, because fighting with optimization is all the fun of Project Euler. However, any small hint that could put me on the right track would be welcome. Thanks!
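
    Not the poster's code, but for readers curious about the usual shape of a fast approach: sieve every totient below the limit in one pass, then compute chain lengths bottom-up (PHI(n) < n for n > 1, so the value needed is always already known), and finally sum the primes whose chain length is 25. A rough C# sketch, assuming roughly 200 MB of memory is acceptable:

        using System;

        class TotientChains
        {
            static void Main()
            {
                const int Limit = 40000000;

                // Sieve Euler's totient for every n < Limit.
                int[] phi = new int[Limit];
                for (int i = 0; i < Limit; i++) phi[i] = i;
                for (int p = 2; p < Limit; p++)
                {
                    if (phi[p] == p)                      // phi[p] untouched, so p is prime
                    {
                        for (int k = p; k < Limit; k += p)
                            phi[k] -= phi[k] / p;         // multiply by (1 - 1/p)
                    }
                }

                // Chain lengths bottom-up, plus the running sum of qualifying primes.
                byte[] chainLength = new byte[Limit];
                chainLength[1] = 1;                       // chain (1)
                long sum = 0;
                for (int n = 2; n < Limit; n++)
                {
                    chainLength[n] = (byte)(chainLength[phi[n]] + 1);
                    if (phi[n] == n - 1 && chainLength[n] == 25)   // phi[n] == n-1 exactly when n is prime
                        sum += n;
                }

                Console.WriteLine(sum);
            }
        }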

    Read the article

  • Is this a good job description? What title would you give this position?

    - by Zack Peterson
    Department: Information Technology
    Reports To: Chief Information Officer

    Purpose: Company's ________________ is specifically engaged in the development of World Wide Web applications and distributed network applications. This person is concerned with all facets of the software development process and specializes in software product management. He or she contributes to projects in an application architect role and also performs individual programming tasks.

    Essential Duties & Responsibilities: This person is involved in all aspects of the software development process, such as:
    - Participation in software product definitions, including requirements analysis and specification
    - Development and refinement of simulations or prototypes to confirm requirements
    - Feasibility and cost-benefit analysis, including the choice of architecture and framework
    - Application and database design
    - Implementation (e.g. installation, configuration, customization, integration, data migration)
    - Authoring of documentation needed by users and partners
    - Testing, including defining/supporting acceptance testing and gathering feedback from pre-release testers
    - Participation in software release and post-release activities, including support for product launch evangelism (e.g. developing demonstrations and/or samples) and subsequent product build/release cycles
    - Maintenance

    Qualifications:
    - Bachelor's degree in computer science or software engineering
    - Several years of professional programming experience
    - Proficiency in the general technology of the World Wide Web: Hypertext Transfer Protocol (HTTP), Hypertext Markup Language (HTML), JavaScript, Cascading Style Sheets (CSS)
    - Proficiency in the following principles, practices, and techniques: accessibility, interoperability, usability, security (especially prevention of SQL injection and cross-site scripting (XSS) attacks), object-oriented programming (e.g. encapsulation, inheritance, modularity, polymorphism), relational database design (e.g. normalization, orthogonality), search engine optimization (SEO), Asynchronous JavaScript and XML (AJAX)
    - Proficiency in the following specific technologies utilized by Company: C# or Visual Basic .NET, ADO.NET (including ADO.NET Entity Framework), ASP.NET (including ASP.NET MVC Framework), Windows Presentation Foundation (WPF), Language Integrated Query (LINQ), Extensible Application Markup Language (XAML), jQuery, Transact-SQL (T-SQL), Microsoft Visual Studio, Microsoft Internet Information Services (IIS), Microsoft SQL Server, Adobe Photoshop

    Read the article

  • Save memory in Python. How to iterate over the lines and save them efficiently with a 2 million line file?

    - by skyl
    I have a tab-separated data file with a little over 2 million lines and 19 columns. You can find it, in US.zip: http://download.geonames.org/export/dump/. I started to run the following but with for l in f.readlines(). I understand that just iterating over the file is supposed to be more efficient so I'm posting that below. Still, with this small optimization, I'm using 10% of my memory on the process and have only done about 3% of the records. It looks like, at this pace, it will run out of memory like it did before. Also, the function I have is very slow. Is there anything obvious I can do to speed it up? Would it help to del the objects with each pass of the for loop?

        def run():
            from geonames.models import POI
            f = file('data/US.txt')
            for l in f:
                li = l.split('\t')
                try:
                    p = POI()
                    p.geonameid = li[0]
                    p.name = li[1]
                    p.asciiname = li[2]
                    p.alternatenames = li[3]
                    p.point = "POINT(%s %s)" % (li[5], li[4])
                    p.feature_class = li[6]
                    p.feature_code = li[7]
                    p.country_code = li[8]
                    p.ccs2 = li[9]
                    p.admin1_code = li[10]
                    p.admin2_code = li[11]
                    p.admin3_code = li[12]
                    p.admin4_code = li[13]
                    p.population = li[14]
                    p.elevation = li[15]
                    p.gtopo30 = li[16]
                    p.timezone = li[17]
                    p.modification_date = li[18]
                    p.save()
                except IndexError:
                    pass

        if __name__ == "__main__":
            run()

    Read the article

  • Divide and conquer of large objects for GC performance

    - by Aperion
    At my work we're discussing different approaches to cleaning up a large amount (~50-100MB) of managed memory. There are two approaches on the table (read: two senior devs can't agree), and not having the experience, the rest of the team is unsure which approach is more desirable for performance or maintainability. The data being collected is many small items, ~30000, which in turn contain other items; all objects are managed. There are a lot of references between these objects, including event handlers, but not to outside objects. We'll call this large group of objects and references a single entity: a blob.

    Approach #1: Make sure all references to objects in the blob are severed and let the GC handle the blob and all the connections.

    Approach #2: Implement IDisposable on these objects, then call Dispose on them, set references to Nothing, and remove handlers (a rough sketch follows below).

    The theory behind the second approach is that large, longer-lived objects take longer for the GC to clean up, so by cutting the blob into smaller bite-size morsels the garbage collector will process them faster - thus a performance gain. So I think the basic question is this: does breaking apart large groups of interconnected objects optimize data for garbage collection, or is it better to keep them together and rely on the garbage collection algorithms to process the data for you? I feel this is a case of premature optimization, but I do not know enough about the GC to know what helps or hinders it.
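
    For concreteness, a rough C# sketch of what Approach #2 amounts to per item (the type and member names are invented for illustration); Approach #1, by contrast, is simply dropping the root reference to the whole graph and letting the collector trace and reclaim it:

        using System;
        using System.Collections.Generic;

        public class BlobItem : IDisposable
        {
            public event EventHandler Changed;
            private List<BlobItem> related = new List<BlobItem>();

            public void Dispose()
            {
                Changed = null;          // drop outgoing event-handler references
                if (related != null)
                {
                    related.Clear();     // sever the links to the other items in the blob
                    related = null;
                }
            }
        }

        // Approach #1, by contrast:
        // blobRoot = null;   // one severed reference; the GC reclaims the whole graph when it runs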

    Read the article

  • De-normalization alternative to specific MYSQL problem?

    - by Booker
    I am facing quite a specific optimization problem. I currently have 4 normalized tables of data. Every second, possibly thousands of users will pull down up-to-date info from these tables using AJAX. The thing is that I can predict relatively easily which subset of data they need... The most recent 100 or so entries in those 4 normalized tables. I have been researching de-normalization... but feel that perhaps there is an easier solution. I was thinking that I could somehow every second run one sql query to condense the needed info, store it in a temp cached table and then have all of the user queries just draw from this. This will allow the complex join of 4 tables to only be run once, and then from there the users just need to do a simple lookup from the cached table. I really don't know if this is feasible. Comments on this or any other suggestions would be much appreciated. Thanks!

    Read the article

  • Is there a significant mechanical difference between these faux simulations of default parameters?

    - by ccomet
    C# 4.0 introduced a very fancy and useful thing by allowing default parameters in methods. But C# 3.0 doesn't. So if I want to simulate "default parameters", I have to create two of that method, one with those arguments and one without those arguments. There are two ways I could do this.

    Version A - Call the other method:

        public string CutBetween(string str, string left, string right, bool inclusive)
        {
            return str.CutAfter(left, inclusive).CutBefore(right, inclusive);
        }

        public string CutBetween(string str, string left, string right)
        {
            return CutBetween(str, left, right, false);
        }

    Version B - Copy the method body:

        public string CutBetween(string str, string left, string right, bool inclusive)
        {
            return str.CutAfter(left, inclusive).CutBefore(right, inclusive);
        }

        public string CutBetween(string str, string left, string right)
        {
            return str.CutAfter(left, false).CutBefore(right, false);
        }

    Is there any real difference between these? This isn't a question about optimization or resource usage or anything (though part of it is my general goal of remaining consistent); I don't even think there is any significant effect in picking one method or the other, but I find it wiser to ask about these things than perchance faultily assume.
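
    For reference, the single method both versions are emulating would look like this under C# 4.0 (CutAfter and CutBefore are the poster's own helpers, assumed to exist elsewhere):

        public string CutBetween(string str, string left, string right, bool inclusive = false)
        {
            return str.CutAfter(left, inclusive).CutBefore(right, inclusive);
        }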

    Read the article

  • Constructor or Assignment Operator

    - by ju
    Can you help me: is there a definition in the C++ standard that describes which one will be called, the constructor or the assignment operator, in this case?

        #include <iostream>
        using namespace std;

        class CTest
        {
        public:
            CTest() : m_nTest(0)
            {
                cout << "Default constructor" << endl;
            }

            CTest(int a) : m_nTest(a)
            {
                cout << "Int constructor" << endl;
            }

            CTest(const CTest& obj)
            {
                m_nTest = obj.m_nTest;
                cout << "Copy constructor" << endl;
            }

            CTest& operator=(int rhs)
            {
                m_nTest = rhs;
                cout << "Assignment" << endl;
                return *this;
            }

        protected:
            int m_nTest;
        };

        int _tmain(int argc, _TCHAR* argv[])
        {
            CTest b = 5;
            return 0;
        }

    Or is it just a matter of compiler optimization?

    Read the article

  • How to transform vertical table into horizontal table?

    - by avivo
    Hello, I have one table, Person:

        Id  Name
        1   Person1
        2   Person2
        3   Person3

    And I have its child table, Profile:

        Id  PersonId  FieldName  Value
        1   1         Firstname  Alex
        2   1         Lastname   Balmer
        3   1         Email      [email protected]
        4   1         Phone      +1 2 30004000

    And I want to get the data from these two tables in one row, like this:

        Id  Name     Firstname  Lastname  Email              Phone
        1   Person1  Alex       Balmer    [email protected]  +1 2 30004000

    What is the most optimized query to get these vertical (key, value) values in one row like this? At the moment I have done four joins of the child table to the parent table, because I need to get these four fields, and some optimization is surely possible. I would like to be able to modify this query easily when I add a new field (key, value). What is the best way to do this? To create some stored procedure? I would like to have strong types in my DB layer (C#) and to use LINQ when programming, so when I add a new key/value pair to the Profile table I would like to make minimal modifications in the DB and C# if possible. Actually I am trying to learn some best practices for this case.
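
    Since LINQ is mentioned, here is a rough LINQ-to-Objects illustration of the pivot (people and profiles stand in for the two tables and System.Linq is assumed; the shape is similar in LINQ to SQL, and the field list still has to grow whenever a new key is added):

        var flat =
            from person in people
            join field in profiles on person.Id equals field.PersonId into fields
            let map = fields.ToDictionary(f => f.FieldName, f => f.Value)
            select new
            {
                person.Id,
                person.Name,
                Firstname = map.ContainsKey("Firstname") ? map["Firstname"] : null,
                Lastname  = map.ContainsKey("Lastname")  ? map["Lastname"]  : null,
                Email     = map.ContainsKey("Email")     ? map["Email"]     : null,
                Phone     = map.ContainsKey("Phone")     ? map["Phone"]     : null
            };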

    Read the article

  • Browser performance when combining image resizing with animated movement

    - by Steve Reichgut
    I have been tasked with the job of converting a Flash animation into HTML. The animation is rather complex and requires the need to move multiple images (9) from location (x,y) to location (x2,y2) while simultaneously increasing the image size from 215px wide to 930px wide. While doing some initial testing of this animation with just 1-2 images, I noticed a lot of choppiness in FF handling of this animation. To try and isolate the problem, I removed the dynamic resizing of the animation and just moved it from point A to point B. What was interesting was that I saw the same behavior when simply moving a 930px image that was resized down to 215px (via the CSS width or inline width properties). When I try the same animation with a different image that is actually 215px wide, it performed smoothly. I then tried the same animation with the original 930px wide image (with no resizing) and it performed well also. This makes me wonder if the browser is having to "resize" the image down to 215px each time it is moved which is causing the choppiness. Is this a correct assumption? If so, is there any other way to optimize the animation to allow for simultaneous image resizing and image movement? Notes: 1) One optimization I have done is to position the images absolutely in order to minimize the reflow process. 2) I have tested the animation using both jQuery and the fX animation framework.

    Read the article

  • Why are references compacted inside Perl lists?

    - by parkan
    Putting a precompiled regex inside two different hashes referenced in a list:

        my @list = ();
        my $regex = qr/ABC/;
        push @list, { 'one' => $regex };
        push @list, { 'two' => $regex };
        use Data::Dumper;
        print Dumper(\@list);

    I'd expect:

        $VAR1 = [
                  { 'one' => qr/(?-xism:ABC)/ },
                  { 'two' => qr/(?-xism:ABC)/ }
                ];

    But instead we get a circular reference:

        $VAR1 = [
                  { 'one' => qr/(?-xism:ABC)/ },
                  { 'two' => $VAR1->[0]{'one'} }
                ];

    This will happen with indefinitely nested hash references and shallowly copied $regex. I'm assuming the basic reason is that precompiled regexes are actually references, and references inside the same list structure are compacted as an optimization (\$scalar behaves the same way). I don't entirely see the utility of doing this (presumably a reference to a reference has the same memory footprint), but maybe there's a reason based on the internal representation. Is this the correct behavior? Can I stop it from happening? Aside from probably making GC more difficult, these circular structures create pretty serious headaches. For example, iterating over a list of queries that may sometimes contain the same regular expression will crash the MongoDB driver with a nasty segfault (see https://rt.cpan.org/Public/Bug/Display.html?id=58500).

    Read the article

  • What happens when value types are created?

    - by Bob
    I'm developing a game using XNA and C# and was attempting to avoid calling new struct() type code each frame, as I thought it would freak the GC out. "But wait," I said to myself, "struct is a value type. The GC shouldn't get called then, right?" Well, that's why I'm asking here. I only have a very vague idea of what happens to value types. If I create a new struct within a function call, is the struct being created on the stack? Will it simply get pushed and popped, and performance not take a hit? Further, would there be some memory limit or performance implications if, say, I need to create many instances in a single call? Take, for instance, this code:

        spriteBatch.Draw(tex, new Rectangle(x, y, width, height), Color.White);

    Rectangle in this case is a struct. What happens when that new Rectangle is created? What are the implications of having to repeat that line many times (say, thousands of times)? Is this Rectangle created, a copy sent to the Draw method, and then discarded (meaning no memory getting eaten up the more Draw is called in that manner in the same function)? P.S. I know this may be premature optimization, but I'm mostly curious and wish to have a better understanding of what is happening.
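
    To make the scenario concrete (sprites, Sprite and its fields are hypothetical, only Draw and Rectangle come from XNA), the pattern in question is just a value-typed argument built in a loop; constructing the Rectangle does not allocate on the managed heap unless it gets boxed or stored in a heap object:

        // Hypothetical per-frame draw loop in the style of the question.
        for (int i = 0; i < sprites.Count; i++)
        {
            Sprite s = sprites[i];
            // The Rectangle is a value type: it typically lives on the stack (or in registers),
            // is copied into Draw's parameter, and is never seen by the garbage collector.
            spriteBatch.Draw(s.Texture, new Rectangle(s.X, s.Y, s.Width, s.Height), Color.White);
        }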

    Read the article

  • Strange build issue using Flex Mojo. Looking for troubleshooting suggestions.

    - by WeeJavaDude
    I have run into a strange issue and I was hoping for some suggestions on how to attack the problem. Here is the environment:

    1) We develop locally using Flex Builder.
    2) We use QuickBuild with FlexMojo 3.4.2 for test builds and production.
    3) In both cases we don't believe optimization is enabled.

    What we are seeing is some strange behavior relating to the Ctrl-Enter key when testing on IE, only in our test environment and not locally. By copying some files over locally I have narrowed the issue down to differences in the SWF files. We do see a difference in the size of the SWF files in our test environment vs. our local environments. A couple of things that would help me in troubleshooting:

    1) Is there a way to know what exactly is in the SWF file, i.e. which SWCs are included?
    2) How does one compare compile settings between a Maven mojo configuration and the Flex IDE environment?

    Any thoughts or opinions would be very helpful.

    Read the article

  • Beginner question: when extracting a large subset of a table from MySQL, how do indexing and the order of tables in a join matter?

    - by chongman
    Sorry if this is too simple, but thanks in advance for helping. This is for MySQL but might be relevant for other RDBMSs.

    tblA has five columns: colA, colB, colC, mydata, A_id. It has about 10^9 records, with 10^3 distinct values for colA, colB, colC. tblB has three columns: colA, colB, B_id. It has about 10^4 records. I want all the records from tblA (except the A_id) that have a match in tblB. In other words, I want to use tblB to describe the subset that I want to extract and then extract those records from tblA. Namely:

        SELECT a.colA, a.colB, a.colC, a.mydata
        FROM tblA AS a
        INNER JOIN tblB AS b
            ON a.colA = b.colA
            AND a.colB = b.colB;

    It's taking a really long time (more than an hour) on a newish computer (4GB, Core2Quad, Ubuntu), and I just want to check my understanding of the following optimization steps. (Suppose this is the only query I will ever run on these tables, so ignore the need to run other queries.) Now my questions:

    1) What indexes should I create to optimize this query? I think I just need a multiple-column index on (colA, colB) for both tables; I don't think I need separate indexes for colA and colB. Another Stack Overflow article (that I can't find) mentioned that when adding new indexes, it is slower when there are existing indexes, so that might be a reason to use the multiple-column index.
    2) Is INNER JOIN correct? I just want results where a match is found.
    3) Is it faster if I join tblA to tblB or the other way around (tblB to tblA)? This previous answer says that the optimizer should take care of that.
    4) Does the order of the part after ON matter? This previous answer says that the optimizer also takes care of the execution order.

    Read the article

  • C++ - passing references to boost::shared_ptr

    - by abigagli
    If I have a function that needs to work with a shared_ptr, wouldn't it be more efficient to pass it a reference to it (to avoid copying the shared_ptr object)? What are the possible bad side effects? I envision two possible cases:

    1) Inside the function a copy is made of the argument, as in:

        ClassA::take_copy_of_sp(boost::shared_ptr<foo> &sp)
        {
            ...
            m_sp_member = sp; // This will copy the object, incrementing refcount
            ...
        }

    2) Inside the function the argument is only used, as in:

        Class::only_work_with_sp(boost::shared_ptr<foo> &sp) // Again, no copy here
        {
            ...
            sp->do_something();
            ...
        }

    In neither case can I see a good reason to pass the boost::shared_ptr by value instead of by reference. Passing by value would only "temporarily" increment the reference count due to the copying, and then decrement it when exiting the function scope. Am I overlooking something?

    Andrea.

    EDIT: Just to clarify, after reading several answers: I perfectly agree on the premature-optimization concerns, and I always try to first profile, then work on the hotspots. My question was more from a purely technical code point of view, if you know what I mean.

    Read the article

  • How to benchmark on multi-core processors

    - by Pascal Cuoq
    I am looking for ways to perform micro-benchmarks on multi-core processors.

    Context: At about the same time desktop processors introduced out-of-order execution that made performance hard to predict, they, perhaps not coincidentally, also introduced special instructions to get very precise timings. Examples of these instructions are rdtsc on x86 and mftb on PowerPC. These instructions gave timings that were more precise than a system call could ever allow and let programmers micro-benchmark their hearts out, for better or for worse. On a yet more modern processor with several cores, some of which sleep some of the time, the counters are not synchronized between cores. We are told that rdtsc is no longer safe to use for benchmarking, but I must have been dozing off when the alternative solutions were explained.

    Question: Some systems may save and restore the performance counter and provide an API call to read the proper sum. If you know what this call is for any operating system, please let us know in an answer. Some systems may allow turning off cores, leaving only one running. I know Mac OS X Leopard does when the right preference pane is installed from the Developer Tools. Do you think that this makes rdtsc safe to use again?

    More context: Please assume I know what I am doing when trying to do a micro-benchmark. If you are of the opinion that an optimization whose gains cannot be measured by timing the whole application is not worth making, I agree with you, but I cannot time the whole application until the alternative data structure is finished, which will take a long time. In fact, if the micro-benchmark were not promising, I could decide to give up on the implementation now; I need figures to provide in a publication whose deadline I have no control over.

    Read the article

  • Utility of List<T>.Sort() versus List<T>.OrderBy() for a member of a custom container class

    - by ccomet
    I've found myself running back through some old 3.5-framework legacy code, and found some points where there are a whole bunch of lists and dictionaries that must be updated in a synchronized fashion. I've determined that I can make this process infinitely easier to both utilize and understand by converging these into custom container classes of new custom classes. There are some points, however, where I have concerns about organizing the contents of these new container classes by a specific inner property - for example, sorting by the ID number property of one class. As the container classes are primarily based around a generic List object, my first instinct was to write the inner classes with IComparable, and write the CompareTo method that compares the properties. This way, I can just call items.Sort() when I want to invoke the sorting. However, I've been thinking about using items = items.OrderBy(Func) instead. This way it is more flexible if I need to sort by any other property. Readability is better as well, since the property used for sorting will be listed in-line with the sort call rather than having to look up the IComparable code. The overall implementation feels cleaner as a result. I don't care for premature or micro optimization, but I like consistency. I find it best to stick with one kind of implementation for as many cases as it is appropriate, and use different implementations where it is necessary. Is it worth it to convert my code to use the LINQ OrderBy instead of using List.Sort? Is it a better practice to stick with the IComparable implementation for these custom containers? Are there any significant mechanical advantages offered by either path that I should be weighing the decision on? Or is their end-functionality equivalent to the point that it just becomes coder's preference?
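
    To make the two candidates concrete (Item, its Id property, and LoadItems are stand-ins for the poster's inner classes), the choice looks roughly like this:

        public class Item : IComparable<Item>
        {
            public int Id { get; set; }

            // Option 1: bake the ordering into the type and sort in place.
            public int CompareTo(Item other) { return Id.CompareTo(other.Id); }
        }

        // ...
        List<Item> items = LoadItems();                     // hypothetical source

        items.Sort();                                       // in place, no new list, uses CompareTo
        items = items.OrderBy(i => i.Id).ToList();          // LINQ: builds a new, stably ordered list

    Mechanically, Sort() reorders the existing list in place with an unstable sort, while OrderBy produces a new sequence with a stable sort and the ToList() allocation; beyond that, the difference is largely the consistency question raised above.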

    Read the article

  • Ad distribution problem: an optimal solution?

    - by Mokuchan
    I'm asked to find a 2-approximate solution to this problem: You're consulting for an e-commerce site that receives a large number of visitors each day. For each visitor i, where i ∈ {1, 2, ..., n}, the site has assigned a value v[i], representing the expected revenue that can be obtained from this customer. Each visitor i is shown one of m possible ads A1, A2, ..., Am as they enter the site. The site wants a selection of one ad for each customer so that each ad is seen, overall, by a set of customers of reasonably large total weight. Thus, given a selection of one ad for each customer, we define the spread of this selection to be the minimum, over j = 1, 2, ..., m, of the total weight of all customers who were shown ad Aj.

    Example: Suppose there are six customers with values 3, 4, 12, 2, 4, 6, and there are m = 3 ads. Then, in this instance, one could achieve a spread of 9 by showing ad A1 to customers 1, 2, 4, ad A2 to customer 3, and ad A3 to customers 5 and 6.

    The ultimate goal is to find a selection of an ad for each customer that maximizes the spread. Unfortunately, this optimization problem is NP-hard (you don't have to prove this). So instead, give a polynomial-time algorithm that approximates the maximum spread within a factor of 2.

    The solution I found is the following (sketched in code below):
    - Order the visitor values in descending order.
    - Add the next visitor value (i.e. assign the visitor) to the ad with the current lowest total value.
    - Repeat.

    This solution actually seems to always find the optimal solution, or I simply can't find a counterexample. Can you find one? Is this a non-polynomial solution and I just can't see it?
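
    For reference, the greedy heuristic described above is straightforward to state in code (a rough C# sketch; it runs in O(n log n + n*m), so it is polynomial either way):

        using System;

        static class AdSpread
        {
            // Assigns each visitor (greedily, largest value first) to the ad with the
            // smallest running total, and returns the resulting spread.
            public static long GreedySpread(int[] values, int adCount)
            {
                int[] sorted = (int[])values.Clone();
                Array.Sort(sorted);
                Array.Reverse(sorted);                    // descending order

                long[] totals = new long[adCount];
                foreach (int v in sorted)
                {
                    int smallest = 0;
                    for (int j = 1; j < adCount; j++)     // ad with the current lowest total
                        if (totals[j] < totals[smallest]) smallest = j;
                    totals[smallest] += v;
                }

                long spread = totals[0];
                for (int j = 1; j < adCount; j++)
                    if (totals[j] < spread) spread = totals[j];
                return spread;
            }
        }

    On the example above, GreedySpread(new[] { 3, 4, 12, 2, 4, 6 }, 3) assigns 12, 6, 4, 4, 3, 2 in turn and also reaches a spread of 9.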

    Read the article
