Search Results

Search found 27144 results on 1086 pages for 'tail call optimization'.

Page 563/1086 | < Previous Page | 559 560 561 562 563 564 565 566 567 568 569 570  | Next Page >

  • cl.exe Difference in object files when /E output is the same and flags are the same

    - by madiyaan damha
    Hello: I am using Visual Studio 2005's cl.exe compiler. I call it with a bunch of /I and /D options and some compilation/optimization flags (example: /EHsc). I have two compilation scripts, and both differ only in the /I flags (the include directories are different). All other flags are the same. These scripts produce different object files (and not just a timestamp difference, as noted below). The strange thing is that the /E output of both scripts is the same. That means the include files are not causing the difference in object files, but then again, where is the difference coming from? Can anyone explain how I am seeing two different object files in my situation? If the include files are causing the difference, how come I see identical /E output? PS. The object files differ not only in the timestamp, but in the code sections also. In fact the behavior of my final executable is different in both cases. Edit: PPS: I even looked at the /includeFiles output of cl.exe and that output is identical. The object files, however, differ in more than just the timestamp (in fact, one is 1KB bigger than the other!)

    Read the article

  • Strange error when filling a data adapter.

    - by Tim C
    I am receiving the following error in my code (C#, .NET 3.5, VS2008) when I try to connect to an Excel sheet and fill an OleDbDataAdapter with the results of a query. First the error: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. And here is the code, which is honestly pretty simple: var excelFileName = string.Format("c:/Metadata_Tool.xlsm"); var connectionString = string.Format("Provider=Microsoft.ACE.OLEDB.12.0; Data Source={0}; Extended Properties=Excel 12.0;HDR=YES;", excelFileName); var adapter = new OleDbDataAdapter("Select * FROM [Video Tagging XML]", connectionString); var ds = new DataSet(); adapter.Fill(ds, "VTX"); DataTable data = ds.Tables["VTX"]; foreach (DataRow myRow in data.Rows) { foreach (DataColumn myColumn in data.Columns) { Console.Write("\t{0}", myRow[myColumn]); } Console.WriteLine(); } Console.ReadLine(); I get the error on the line adapter.Fill(ds,"VTX");. I did find a Microsoft forum post saying to turn on JIT optimization in VS2008 from the Tools/Options/Debug/General menu, but this did not seem to help. Any help would be greatly appreciated, thanks!

    Read the article

  • Possible to Inspect Innards of Core C# Functionality

    - by Nick Babcock
    I was struck today with the inclination to compare the innards of Buffer.BlockCopy and Array.CopyTo. I am curious to see if Array.CopyTo calls Buffer.BlockCopy behind the scenes. There is no practical purpose behind this, I just want to further my understanding of the C# language and how it is implemented. Don't jump the gun and accuse me of micro-optimization, but you can accuse me of being curious! When I ran ILDasm on mscorlib.dll I received this for Array.CopyTo .method public hidebysig newslot virtual final instance void CopyTo(class System.Array 'array', int32 index) cil managed { // Code size 0 (0x0) } // end of method Array::CopyTo and this for Buffer.BlockCopy .method public hidebysig static void BlockCopy(class System.Array src, int32 srcOffset, class System.Array dst, int32 dstOffset, int32 count) cil managed internalcall { .custom instance void System.Security.SecuritySafeCriticalAttribute::.ctor() = ( 01 00 00 00 ) } // end of method Buffer::BlockCopy Which, frankly, baffles me. I've never run ILDasm on a dll/exe I didn't create. Does this mean that I won't be able to see how these functions are implemented? Searching around only revealed a Stack Overflow question, in which Marc Gravell said [Buffer.BlockCopy] is basically a wrapper over a raw mem-copy. While insightful, it doesn't answer my question of whether Array.CopyTo calls Buffer.BlockCopy. I'm specifically interested in whether I'm able to see how these two functions are implemented, and, if I have future questions about the internals of C#, whether it is possible for me to investigate them. Or am I out of luck?

    Read the article

  • how to push different local git branches to heroku/master

    - by lsiden
    Heroku has a policy of ignoring all branches but 'master'. While I'm sure Heroku's designers have excellent reasons for this policy (I'm guessing for storage and performance optimization), the consequence to me as a developer is that whatever local topic branch I may be working on, I would like an easy way to switch Heroku's master to that local topic branch and do a "git push heroku -f" to overwrite master on Heroku. What I got from reading the "Pushing Refspecs" section of http://progit.org/book/ch9-5.html is git push -f heroku local-topic-branch:refs/heads/master What I'd really like is a way to set this up in the config file so that "git push heroku" always does the above, replacing local-topic-branch with the name of whatever my current branch happens to be. If anyone knows how to accomplish that, please let me know! The caveat for this, of course, is that this is only sensible if I am the only one who can push to that Heroku app/repository. A test or QA team might manage such a repository to try out different candidate branches, but they would have to coordinate so that they all agree on what branch they are pushing to it on any given day. Needless to say, it would also be a very good idea to have a separate remote repository (like GitHub) without this restriction for backing everything up to. I'd call that one "origin" and use "heroku" for Heroku, so that "git push" always backs up everything to origin, and "git push heroku" pushes whatever branch I'm currently on to Heroku's master branch, overwriting it if necessary. Can anybody tell me if this would work? [remote "heroku"] url = git@heroku.com:my-app.git push = +refs/heads/*:refs/heads/master I'd like to hear from someone more experienced before I begin to experiment, although I suppose I could create a dummy app on Heroku and experiment with that. As for fetching, I don't really care if the Heroku repository is write-only. I still have a separate repository, like GitHub, for backup and cloning of all my work. Footnote: This question is similar to, but not quite the same as http://stackoverflow.com/questions/1489393/good-git-deployment-using-branches-strategy-with-heroku

    Read the article

  • Is this a good KVO-compliant way to model a mutable to-many relationship?

    - by andyvn22
    Say I'd like a mutable, unordered to-many relationship. For internal optimization reasons, it'd be best to store this in an NSMutableDictionary rather than an NSMutableSet. But I'd like to keep that implementation detail private. I'd also like to provide some KVO-compliant accessors, so: - (NSSet*)things; - (NSUInteger)countOfThings; - (void)addThings:(NSSet*)someThings; - (void)removeThings:(NSSet*)someThings; Now, it'd be convenient and less evil to provide accessors (private ones, of course, in my implementation file) for the dictionary as well, so: @interface MYClassWithThings () @property (retain) NSMutableDictionary* keyedThings; @end This seems good to me! I can use accessors to mess with my keyedThings within the class, but other objects think they're dealing with a mutable, unordered (, unkeyed!) to-many relationship. I'm concerned that several things I'm doing may be "evil" though, according to good style and Apple approval and whatnot. Have I done anything evil here? (For example, is it wrong not to provide setThings, since the things property is supposedly mutable?)

    Read the article

  • How to use the Request URL/URL Rewriting For Localization in ASP.NET - Using an HTTP Module or Global.asax

    - by LocalizedUrlDMan
    I wanted to see if there is a way to use the request URL/URL rewriting to set the language a page is rendered in by examining a portion of the URL in ASP.NET. We have a site that already works with ASP.NET’s resource localization, and users can change the language that they see pages/resources on the site in; however, the current mechanism is not very search engine friendly, since the language variations for each language all appear as one page. It would be much better if we could have pages like www.site.com/en-mx/realfolder/realpage.aspx that allow linking to culture-specific versions of a page. I know lots of people have likely done localization through URL structures before, and I wanted to know if one of you could share how to do this in the Global.asax file or with an HTTP Module (pointers to blog postings would be great too). We have a restriction that the site is based on ASP.NET 2.0 (so we can't use the 3.5+ features yet). Here is the example scenario: A real page exists at: www.site.com/realfolder/realpage.aspx The page has a mechanism for the user to change the language it is displayed in via a dropdown. There are search engine optimization and user link-sharing benefits to doing this, since people can link directly to a page that has content that is applicable to a certain language (this could also include right-to-left layouts for languages like Japanese). I would like to use an HTTP module to see if the first part of the URL after www.site.com, site.com, subdomain.site.com, etc. contains a valid culture code (e.g. en-us, es-mx) and then use that value to set the localization culture of the page/resources based on that URL. So if the user accesses the URL www.site.com/en-MX/realfolder/realpage.aspx then the page will render in Mexico’s variant of Spanish. If the user goes to www.site.com/realfolder/realpage.aspx directly, the page would just use their browser’s language settings.
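    The kind of module described above might look roughly like the following sketch. It is only illustrative: the class name, the culture-segment check, and the rewrite step are assumptions on my part rather than code from the question, and it sticks to the ASP.NET 2.0 constraint mentioned above.

      // Hypothetical IHttpModule sketch (ASP.NET 2.0 compatible): if the first URL segment
      // looks like a culture code, apply it to the current thread and rewrite the path so
      // the real page still handles the request. Names and checks here are illustrative.
      using System;
      using System.Globalization;
      using System.Threading;
      using System.Web;

      public class CultureUrlModule : IHttpModule
      {
          public void Init(HttpApplication context)
          {
              context.BeginRequest += new EventHandler(OnBeginRequest);
          }

          private static void OnBeginRequest(object sender, EventArgs e)
          {
              HttpApplication app = (HttpApplication)sender;
              string[] segments = app.Request.Path.Trim('/').Split('/');  // e.g. "en-MX", "realfolder", "realpage.aspx"
              if (segments.Length < 2) return;

              CultureInfo culture = TryParseCulture(segments[0]);
              if (culture == null) return;  // no culture prefix: fall back to the browser's language settings

              Thread.CurrentThread.CurrentCulture = culture;
              Thread.CurrentThread.CurrentUICulture = culture;

              // Strip the culture segment so the request maps to the real page.
              app.Context.RewritePath("/" + string.Join("/", segments, 1, segments.Length - 1));
          }

          private static CultureInfo TryParseCulture(string candidate)
          {
              // Only treat xx-yy style segments as cultures; everything else is a real folder.
              if (candidate.Length < 4 || candidate.IndexOf('-') < 0) return null;
              try { return CultureInfo.GetCultureInfo(candidate); }
              catch (ArgumentException) { return null; }
          }

          public void Dispose() { }
      }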

    Read the article

  • OpenGL depth buffer on Android

    - by kayahr
    I'm currently learning OpenGL ES programming on Android (2.1). I started with the obligatory rotating cube. It's rotating fine, but I can't get the depth buffer to work. The polygons are always displayed in the order the GL commands render them. I do this during initialization of GL: gl.glClearColor(.5f, .5f, .5f, 1); gl.glShadeModel(GL10.GL_SMOOTH); gl.glClearDepthf(1f); gl.glEnable(GL10.GL_DEPTH_TEST); gl.glDepthFunc(GL10.GL_LEQUAL); gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); On surface-change I do this: gl.glViewport(0, 0, width, height); gl.glMatrixMode(GL10.GL_PROJECTION); gl.glLoadIdentity(); GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100f); When I enable backface culling, everything looks correct. But backface culling is only a speed optimization, so shouldn't it also work with just the depth buffer? So what is missing here?

    Read the article

  • Most efficient way to check for DBNull and then assign to a variable?

    - by ilitirit
    This question comes up occasionally but I haven't seen a satisfactory answer. A typical pattern is (row is a DataRow): if (row["value"] != DBNull.Value) { someObject.Member = row["value"]; } My first question is which is more efficient (I've flipped the condition): row["value"] == DBNull.Value; // Or row["value"] is DBNull; // Or row["value"].GetType() == typeof(DBNull) // Or... any suggestions? This indicates that .GetType() should be faster, but maybe the compiler knows a few tricks I don't? Second question, is it worth caching the value of row["value"] or does the compiler optimize the indexer away anyway? e.g. object valueHolder; if (DBNull.Value == (valueHolder = row["value"])) {} Disclaimers: row["value"] exists. I don't know the column index of the column (hence the column name lookup). I'm asking specifically about checking for DBNull and then assignment (not about premature optimization etc). Edit: I benchmarked a few scenarios (time in seconds, 10000000 trials): row["value"] == DBNull.Value: 00:00:01.5478995 row["value"] is DBNull: 00:00:01.6306578 row["value"].GetType() == typeof(DBNull): 00:00:02.0138757 Object.ReferenceEquals has the same performance as "==" The most interesting result? If you mismatch the name of the column by case (e.g. "Value" instead of "value"), it takes roughly ten times longer (for a string): row["Value"] == DBNull.Value: 00:00:12.2792374 The moral of the story seems to be that if you can't look up a column by its index, then ensure that the column name you feed to the indexer matches the DataColumn's name exactly. Caching the value also appears to be nearly twice as fast: No Caching: 00:00:03.0996622 With Caching: 00:00:01.5659920 So the most efficient method seems to be: object temp; string variable; if (DBNull.Value != (temp = row["value"])) { variable = temp.ToString(); } This was a good learning experience.
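    For readability, the pattern the post settles on could be wrapped in a small helper; the sketch below is my own illustration (the class and method names are invented, not from the post), but it keeps the single cached indexer lookup and the reference comparison against DBNull.Value.

      // Minimal helper sketch: one indexer lookup, one DBNull.Value comparison, then assign.
      using System;
      using System.Data;

      public static class DataRowValueHelper
      {
          public static string GetStringOrDefault(DataRow row, string columnName, string fallback)
          {
              object temp = row[columnName];  // cache the indexer result in a local
              return temp == DBNull.Value ? fallback : temp.ToString();
          }
      }

      // Usage: someObject.Member = DataRowValueHelper.GetStringOrDefault(row, "value", string.Empty);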

    Read the article

  • Dealing with Time Zones

    - by RavIncredible
    Hi All, I need some guidance with one of my project requirements: I am developing an application which has to deal with various time zones. Scenario: User 1 is from India, so his time zone would be GMT+05:30. User 2 is from the UK, so his time zone would be GMT+01:00. If User 1 sends a message to User 2, I want to show the message sent/received time as per each user’s time zone. For example, User 1 sends a message at 6:30 Indian time; when User 2 views the message it would show as 2:00 UK time. Here is my question: whenever I save the message, should I convert it to GMT+00, so all my base timestamps are the same, and then later when I display the message, convert it back to the user-specific time zone? Would this be complex? Is this the best way of doing it? I'd like to get views on both saving and displaying, and also on when I should do the time conversion from an optimization point of view. I would need to deal with any/all time zones. I am developing this application with PHP and MySQL and I am aware of the time zone conversion methods that come with both PHP and MySQL; I am just trying to figure out the best way of doing this. I look forward to all valuable suggestions. Note: As of now I am not much worried about daylight saving. Thanks Ravi
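    The question is about PHP and MySQL, but the "store UTC, convert on display" idea is language-neutral; the sketch below illustrates it in C# purely to show the two conversion points (at save time and at display time), with a made-up example and the Windows time zone ID for UK time.

      // Illustrative only: capture timestamps in UTC when saving, convert to the viewer's
      // time zone when displaying.
      using System;

      class MessageTimeExample
      {
          static void Main()
          {
              DateTime sentUtc = DateTime.UtcNow;  // on save: store this value

              // on display: convert the stored UTC value to the viewing user's zone
              TimeZoneInfo ukZone = TimeZoneInfo.FindSystemTimeZoneById("GMT Standard Time");
              DateTime sentUkLocal = TimeZoneInfo.ConvertTimeFromUtc(sentUtc, ukZone);

              Console.WriteLine("Stored (UTC):    {0:u}", sentUtc);
              Console.WriteLine("Shown to viewer: {0}", sentUkLocal);
          }
      }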

    Read the article

  • Design Decision - Scaling out web based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with and is expected to grow to 50M users in a couple of months (not concurrent users, though). I would like to have an architecture that can be scaled out easily without much effort. In order to explain, I would like to use a trivial scenario. Let's say User entities and services such as CreateUser, AuthenticateUser, etc. are simple method calls for the page controllers. But once the traffic increases, for example, authenticating users (or similar services related to user entities) has to be moved out to a different internal server to spread the load. At the same time, using RPC calls over the network when the user count is 40K would be overkill. My proposal was to use IPC initially and, when we need to scale out, internally switch to TCP-based RPC calls. For example, I am referring to System.IO.Pipes.NamedPipeServerStream to start with and moving on to a TcpListener later on. If we have a proper design that encapsulates the above approach, it would be easy for us to scale services out onto multiple network servers while avoiding network calls when the user count is small. Is this the best approach? Any suggestions would be great. Note: Database scaling is definitely a second-phase optimization, so we already have an architectural design in place to easily partition data when traffic increases. The primary bottleneck would be the application servers over that time period.
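    One way to picture the encapsulation being proposed is an interface that the page controllers depend on, with the transport hidden behind it. The sketch below is my own illustration (the interface, class names, and placeholder bodies are invented); it only shows where the in-process and remote implementations would diverge.

      // Phase 1: controllers call an in-process implementation; Phase 2: the same interface
      // is satisfied by a proxy that forwards calls over named pipes or TCP to another server.
      public interface IUserService
      {
          bool AuthenticateUser(string userName, string password);
      }

      public class LocalUserService : IUserService
      {
          public bool AuthenticateUser(string userName, string password)
          {
              // ... hit the local data layer directly (no IPC/RPC overhead at 40K users) ...
              return true;  // placeholder
          }
      }

      public class RemoteUserServiceProxy : IUserService
      {
          public bool AuthenticateUser(string userName, string password)
          {
              // ... serialize the request and send it via NamedPipeClientStream or TcpClient ...
              return true;  // placeholder
          }
      }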

    Read the article

  • Load Balancing of PHP/MYSQL script without big code changes

    - by DR.GEWA
    Sorry for my dummy question, but... I am making a script in PHP/MySQL (CodeIgniter) and I am extremely interested in knowing whether there is a way, without big architectural changes to the script, to add load balancing. I mean that, for example, now I will rent a medium dedicated server with 2GB RAM, 200GB of storage and a good processor, and this will be enough for, let's say, half a year for the users who will come. But when they become more and more, and as it's a social net and at nights the server is expected to have 500-1500 or 5000-8000 users online, I wonder if there is a way to, let's say, just add a second server with some config that will bear the extra pressure. And after that another one, and so on... ???? <? if($answer=YES) { how(??); } else { whatToDo(??); } ?> If there is no way, then maybe you could point me to the easiest load balancing solution.... I will be extremely thankful if you can tell me, for such purposes, should I move to, let's say, PostgreSQL or Firebird? Which of them will be easier to handle in the future? On the mysite.com/users/show/$userId page I am getting something like 60 queries for all the data... maybe too much, but anyway.... after some optimization it can be 20-30....

    Read the article

  • pinpointing "conditional jump or move depends on uninitialized value(s)" valgrind message

    - by kamziro
    So I've been getting some mysterious uninitialized value messages from valgrind, and it's been quite the mystery as to where the bad value originated from. It seems that valgrind shows the place where the uninitialised value ends up being used, but not the origin of the uninitialised value. ==11366== Conditional jump or move depends on uninitialised value(s) ==11366== at 0x43CAE4F: __printf_fp (in /lib/tls/i686/cmov/libc-2.7.so) ==11366== by 0x43C6563: vfprintf (in /lib/tls/i686/cmov/libc-2.7.so) ==11366== by 0x43EAC03: vsnprintf (in /lib/tls/i686/cmov/libc-2.7.so) ==11366== by 0x42D475B: (within /usr/lib/libstdc++.so.6.0.9) ==11366== by 0x42E2C9B: std::ostreambuf_iterator<char, std::char_traits<char> > std::num_put<char, std::ostreambuf_iterator<char, std::char_traits<char> > >::_M_insert_float<double>(std::ostreambuf_iterator<char, std::char_traits<char> >, std::ios_base&, char, char, double) const (in /usr/lib/libstdc++.so.6.0.9) ==11366== by 0x42E31B4: std::num_put<char, std::ostreambuf_iterator<char, std::char_traits<char> > >::do_put(std::ostreambuf_iterator<char, std::char_traits<char> >, std::ios_base&, char, double) const (in /usr/lib/libstdc++.so.6.0.9) ==11366== by 0x42EE56F: std::ostream& std::ostream::_M_insert<double>(double) (in /usr/lib/libstdc++.so.6.0.9) ==11366== by 0x81109ED: Snake::SnakeBody::syncBodyPos() (ostream:221) ==11366== by 0x810B9F1: Snake::Snake::update() (snake.cpp:257) ==11366== by 0x81113C1: SnakeApp::updateState() (snakeapp.cpp:224) ==11366== by 0x8120351: RoenGL::updateState() (roengl.cpp:1180) ==11366== by 0x81E87D9: Roensachs::update() (rs.cpp:321) As can be seen, it gets quite cryptic, especially because when it says "by Class::MethodX" it sometimes points straight to ostream etc. Perhaps this is due to optimization? ==11366== by 0x81109ED: Snake::SnakeBody::syncBodyPos() (ostream:221) Just like that. Is there something I'm missing? What is the best way to catch bad values without having to resort to super-long printf detective work?

    Read the article

  • MySQL: order by and limit gives wrong result

    - by Larry K
    MySQL ver 5.1.26 I'm getting the wrong result with a select that has where, order by and limit clauses. It's only a problem when the order by uses the id column. I saw the MySQL manual for LIMIT Optimization My guess from reading the manual is that there is some problem with the index on the primary key, id. But I don't know where I should go from here... Question: what should I do to best solve the problem? Works correctly: mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC ; +------+---------------------+ | id | created_at | +------+---------------------+ | 1336 | 2010-05-14 08:05:25 | | 1334 | 2010-05-06 08:05:25 | | 1331 | 2010-05-05 23:18:11 | +------+---------------------+ 3 rows in set (0.00 sec) WRONG result when limit added! Should be the first row, id - 1336 mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC limit 1; +------+---------------------+ | id | created_at | +------+---------------------+ | 1331 | 2010-05-05 23:18:11 | +------+---------------------+ 1 row in set (0.00 sec) Works correctly: mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY created_at DESC ; +------+---------------------+ | id | created_at | +------+---------------------+ | 1336 | 2010-05-14 08:05:25 | | 1334 | 2010-05-06 08:05:25 | | 1331 | 2010-05-05 23:18:11 | +------+---------------------+ 3 rows in set (0.01 sec) Works correctly with limit: mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY created_at DESC limit 1; +------+---------------------+ | id | created_at | +------+---------------------+ | 1336 | 2010-05-14 08:05:25 | +------+---------------------+ 1 row in set (0.01 sec) Additional info: explain SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC limit 1; +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+ | 1 | SIMPLE | billing_invoices | range | index_billing_invoices_on_account_id | index_billing_invoices_on_account_id | 4 | NULL | 3 | Using where | +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+

    Read the article

  • Project Euler problem 214: How can I make it more efficient?

    - by Once
    I am becoming more and more addicted to the Project Euler problems. However, for the past week I have been stuck on #214. Here is a short version of the problem: PHI() is Euler's totient function, i.e. for any given integer n, PHI(n) = the number of k<=n for which gcd(k,n)=1. We can iterate PHI() to create a chain. For example, starting from 18: PHI(18)=6, PHI(6)=2, PHI(2)=1. So starting from 18 we get a chain of length 4 (18, 6, 2, 1). The problem is to calculate the sum of all primes less than 40e6 which generate a chain of length 25. I built a function that calculates the chain length of any number and I tested it for small values: it works well and fast. Sum of all primes <= 20 which generate a chain of length 4: 12. Sum of all primes <= 1000 which generate a chain of length 10: 39383. Unfortunately my algorithm doesn't scale well. When I apply it to the problem, it takes several hours to calculate... so I stopped it, because Project Euler problems are meant to be solvable in less than one minute. I thought that my prime detection function might be slow, so I fed the program a list of primes < 40e6 to avoid the primality test... The code now runs a little bit faster, but there is still no way to get a solution in less than a few hours (and I don't want that). So is there any "magic trick" that I am missing here? I really don't understand how to be more efficient on this one... I am not asking for the solution, because fighting with optimization is all the fun of Project Euler. However, any small hint that could put me on the right track would be welcome. Thanks!
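    For reference, a minimal sketch of the chain-length computation the post describes (the totient by trial factorisation, iterated until reaching 1) might look like the following. This is only the straightforward per-number version the post says works for small inputs, not the fast solution; the language and the names are my own choice.

      // Euler's totient via trial factorisation, then the PHI chain length including n itself.
      using System;

      class PhiChain
      {
          static long Phi(long n)
          {
              long result = n;
              for (long p = 2; p * p <= n; p++)
              {
                  if (n % p == 0)
                  {
                      while (n % p == 0) n /= p;
                      result -= result / p;     // multiply result by (1 - 1/p)
                  }
              }
              if (n > 1) result -= result / n;  // one remaining prime factor
              return result;
          }

          static int ChainLength(long n)
          {
              int length = 1;                   // the chain includes n itself
              while (n != 1)
              {
                  n = Phi(n);
                  length++;
              }
              return length;
          }

          static void Main()
          {
              Console.WriteLine(ChainLength(18)); // 4, matching the (18, 6, 2, 1) example
          }
      }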

    Read the article

  • MKL Accelerated Math Libraries for Java...

    - by Kaopua
    I've looked at the related threads on Stack Overflow and Googled without much luck. I'm also very new to Java (I'm coming from a C# and .NET background), so please bear with me. There is so much available in the Java world that it's pretty overwhelming. I'm starting on a new Java-on-Linux project that requires some heavy and highly repetitious numerical calculations (i.e. statistics, FFT, linear algebra, matrices, etc.). So maximizing the performance of the mathematical operations is a requirement, as is ensuring the math is correct. Hence I have an interest in finding a Java library that perhaps leverages native acceleration such as MKL, and is proven (so commercial options are definitely a possibility here). In the .NET space there are highly optimized, MKL-accelerated commercial mathematical libraries such as Centerspace NMath and Extreme Optimization. Is there anything comparable in Java? Most of the math libraries I have found for Java either do not seem to be actively maintained (such as Colt) or do not appear to leverage MKL or other native acceleration (such as Apache Commons Math). I have considered trying to leverage MKL directly from Java myself (e.g. via JNI), but being new to Java (let alone to interoperating between Java and native libraries), it seemed smarter to find a Java library that has already done this correctly and efficiently, and is proven. Again, I apologize if I am mistaken or misguided (even regarding any libraries I've mentioned) and for my ignorance of the Java offerings. It's a whole new world for me coming from the heavily commercialized Microsoft stack, so I could easily be mistaken about where to look and regarding the Java libraries I've mentioned. I would greatly appreciate any help or advice.

    Read the article

  • Is this a good job description? What title would you give this position?

    - by Zack Peterson
    Department: Information Technology Reports To: Chief Information Officer Purpose: Company's ________________ is specifically engaged in the development of World Wide Web applications and distributed network applications. This person is concerned with all facets of the software development process and specializes in software product management. He or she contributes to projects in an application architect role and also performs individual programming tasks. Essential Duties & Responsibilities: This person is involved in all aspects of the software development process such as: Participation in software product definitions, including requirements analysis and specification Development and refinement of simulations or prototypes to confirm requirements Feasibility and cost-benefit analysis, including the choice of architecture and framework Application and database design Implementation (e.g. installation, configuration, customization, integration, data migration) Authoring of documentation needed by users and partners Testing, including defining/supporting acceptance testing and gathering feedback from pre-release testers Participation in software release and post-release activities, including support for product launch evangelism (e.g. developing demonstrations and/or samples) and subsequent product build/release cycles Maintenance Qualifications: Bachelor's degree in computer science or software engineering Several years of professional programming experience Proficiency in the general technology of the World Wide Web: Hypertext Transfer Protocol (HTTP) Hypertext Markup Language (HTML) JavaScript Cascading Style Sheets (CSS) Proficiency in the following principles, practices, and techniques: Accessibility Interoperability Usability Security (especially prevention of SQL injection and cross-site scripting (XSS) attacks) Object-oriented programming (e.g. encapsulation, inheritance, modularity, polymorphism, etc.) Relational database design (e.g. normalization, orthogonality) Search engine optimization (SEO) Asynchronous JavaScript and XML (AJAX) Proficiency in the following specific technologies utilized by Company: C# or Visual Basic .NET ADO.NET (including ADO.NET Entity Framework) ASP.NET (including ASP.NET MVC Framework) Windows Presentation Foundation (WPF) Language Integrated Query (LINQ) Extensible Application Markup Language (XAML) jQuery Transact-SQL (T-SQL) Microsoft Visual Studio Microsoft Internet Information Services (IIS) Microsoft SQL Server Adobe Photoshop

    Read the article

  • Save memory in Python. How to iterate over the lines and save them efficiently with a 2 million line file

    - by skyl
    I have a tab-separated data file with a little over 2 million lines and 19 columns. You can find it, in US.zip: http://download.geonames.org/export/dump/. I started to run the following but with for l in f.readlines(). I understand that just iterating over the file is supposed to be more efficient so I'm posting that below. Still, with this small optimization, I'm using 10% of my memory on the process and have only done about 3% of the records. It looks like, at this pace, it will run out of memory like it did before. Also, the function I have is very slow. Is there anything obvious I can do to speed it up? Would it help to del the objects with each pass of the for loop? def run(): from geonames.models import POI f = file('data/US.txt') for l in f: li = l.split('\t') try: p = POI() p.geonameid = li[0] p.name = li[1] p.asciiname = li[2] p.alternatenames = li[3] p.point = "POINT(%s %s)" % (li[5], li[4]) p.feature_class = li[6] p.feature_code = li[7] p.country_code = li[8] p.ccs2 = li[9] p.admin1_code = li[10] p.admin2_code = li[11] p.admin3_code = li[12] p.admin4_code = li[13] p.population = li[14] p.elevation = li[15] p.gtopo30 = li[16] p.timezone = li[17] p.modification_date = li[18] p.save() except IndexError: pass if __name__ == "__main__": run()

    Read the article

  • De-normalization alternative to specific MYSQL problem?

    - by Booker
    I am facing quite a specific optimization problem. I currently have 4 normalized tables of data. Every second, possibly thousands of users will pull down up-to-date info from these tables using AJAX. The thing is that I can predict relatively easily which subset of data they need... the most recent 100 or so entries in those 4 normalized tables. I have been researching de-normalization... but feel that perhaps there is an easier solution. I was thinking that I could somehow run one SQL query every second to condense the needed info, store it in a temporary cached table, and then have all of the user queries just draw from this. This would allow the complex join of the 4 tables to be run only once, and from there the users just need to do a simple lookup against the cached table. I really don't know if this is feasible. Comments on this or any other suggestions would be much appreciated. Thanks!

    Read the article

  • Is there a significant mechanical difference between these faux simulations of default parameters?

    - by ccomet
    C# 4.0 introduced a very fancy and useful thing by allowing default parameters in methods. But C# 3.0 doesn't. So if I want to simulate "default parameters", I have to create two versions of the method, one with those arguments and one without. There are two ways I could do this. Version A - Call the other method public string CutBetween(string str, string left, string right, bool inclusive) { return str.CutAfter(left, inclusive).CutBefore(right, inclusive); } public string CutBetween(string str, string left, string right) { return CutBetween(str, left, right, false); } Version B - Copy the method body public string CutBetween(string str, string left, string right, bool inclusive) { return str.CutAfter(left, inclusive).CutBefore(right, inclusive); } public string CutBetween(string str, string left, string right) { return str.CutAfter(left, false).CutBefore(right, false); } Is there any real difference between these? This isn't a question about optimization or resource usage or anything (though part of it is my general goal of remaining consistent). I don't even think there is any significant effect in picking one method or the other, but I find it wiser to ask about these things than to assume wrongly.
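    For comparison, this is roughly what the C# 4.0 optional-parameter form of the same method looks like (a sketch only; CutAfter and CutBefore are the post's own extension methods, assumed to exist elsewhere). One mechanical difference worth noting is that the compiler bakes the default value into each call site, rather than into a separate overload as in Version A.

      // C# 4.0 optional parameter: callers may omit 'inclusive' and the compiler supplies false.
      public string CutBetween(string str, string left, string right, bool inclusive = false)
      {
          return str.CutAfter(left, inclusive).CutBefore(right, inclusive);
      }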

    Read the article

  • Constructor or Assignment Operator

    - by ju
    Can you help me: is there a definition in the C++ standard that describes which one will be called, the constructor or the assignment operator, in this case: #include <iostream> using namespace std; class CTest { public: CTest() : m_nTest(0) { cout << "Default constructor" << endl; } CTest(int a) : m_nTest(a) { cout << "Int constructor" << endl; } CTest(const CTest& obj) { m_nTest = obj.m_nTest; cout << "Copy constructor" << endl; } CTest& operator=(int rhs) { m_nTest = rhs; cout << "Assignment" << endl; return *this; } protected: int m_nTest; }; int _tmain(int argc, _TCHAR* argv[]) { CTest b = 5; return 0; } Or is it just a matter of compiler optimization?

    Read the article

  • How to transform vertical table into horizontal table?

    - by avivo
    Hello, I have one table, Person: Id Name 1 Person1 2 Person2 3 Person3 And I have its child table, Profile: Id PersonId FieldName Value 1 1 Firstname Alex 2 1 Lastname Balmer 3 1 Email [email protected] 4 1 Phone +1 2 30004000 And I want to get the data from these two tables in one row, like this: Id Name Firstname Lastname Email Phone 1 Person1 Alex Balmer [email protected] +1 2 30004000 What is the most optimized query to get these vertical (key, value) values in one row like this? Right now I have done four joins of the child table to the parent table because I need to get these four fields. Some optimization is surely possible. I would like to be able to modify this query in an easy way when I add a new field (key, value). What is the best way to do this? To create some stored procedure? I would like to have strong typing in my DB layer (C#) and to use LINQ when programming, so when I add some new key/value pair in the Profile table I would like to make minimal modifications in the DB and C# if possible. Actually I am trying to learn some best practices for this case.
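    Since the post mentions a strongly typed C# layer with LINQ, one option is to do the pivot in code rather than with four joins; the sketch below is only an illustration (the Profile/result shapes, property names, and helper are invented to match the sample fields).

      // Group the (FieldName, Value) rows per person and project them onto a flat object.
      using System.Collections.Generic;
      using System.Linq;

      class ProfileRow
      {
          public int PersonId;
          public string FieldName;
          public string Value;
      }

      class FlattenedPerson
      {
          public int PersonId;
          public string Firstname, Lastname, Email, Phone;
      }

      static class ProfilePivot
      {
          static string Get(IDictionary<string, string> fields, string key)
          {
              string value;
              return fields.TryGetValue(key, out value) ? value : null;  // missing field -> null
          }

          public static IEnumerable<FlattenedPerson> Pivot(IEnumerable<ProfileRow> rows)
          {
              return rows
                  .GroupBy(r => r.PersonId)
                  .Select(g =>
                  {
                      // Turn one person's key/value pairs into a lookup, then flatten.
                      var fields = g.ToDictionary(r => r.FieldName, r => r.Value);
                      return new FlattenedPerson
                      {
                          PersonId = g.Key,
                          Firstname = Get(fields, "Firstname"),
                          Lastname = Get(fields, "Lastname"),
                          Email = Get(fields, "Email"),
                          Phone = Get(fields, "Phone"),
                      };
                  });
          }
      }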

    Read the article

  • Browser performance when combining image resizing with animated movement

    - by Steve Reichgut
    I have been tasked with the job of converting a Flash animation into HTML. The animation is rather complex and requires the need to move multiple images (9) from location (x,y) to location (x2,y2) while simultaneously increasing the image size from 215px wide to 930px wide. While doing some initial testing of this animation with just 1-2 images, I noticed a lot of choppiness in FF handling of this animation. To try and isolate the problem, I removed the dynamic resizing of the animation and just moved it from point A to point B. What was interesting was that I saw the same behavior when simply moving a 930px image that was resized down to 215px (via the CSS width or inline width properties). When I try the same animation with a different image that is actually 215px wide, it performed smoothly. I then tried the same animation with the original 930px wide image (with no resizing) and it performed well also. This makes me wonder if the browser is having to "resize" the image down to 215px each time it is moved which is causing the choppiness. Is this a correct assumption? If so, is there any other way to optimize the animation to allow for simultaneous image resizing and image movement? Notes: 1) One optimization I have done is to position the images absolutely in order to minimize the reflow process. 2) I have tested the animation using both jQuery and the fX animation framework.

    Read the article

  • Why are references compacted inside Perl lists?

    - by parkan
    Putting a precompiled regex inside two different hashes referenced in a list: my @list = (); my $regex = qr/ABC/; push @list, { 'one' => $regex }; push @list, { 'two' => $regex }; use Data::Dumper; print Dumper(\@list); I'd expect: $VAR1 = [ { 'one' => qr/(?-xism:ABC)/ }, { 'two' => qr/(?-xism:ABC)/ } ]; But instead we get a circular reference: $VAR1 = [ { 'one' => qr/(?-xism:ABC)/ }, { 'two' => $VAR1->[0]{'one'} } ]; This will happen with indefinitely nested hash references and shallowly copied $regex. I'm assuming the basic reason is that precompiled regexes are actually references, and references inside the same list structure are compacted as an optimization (\$scalar behaves the same way). I don't entirely see the utility of doing this (presumably a reference to a reference has the same memory footprint), but maybe there's a reason based on the internal representation Is this the correct behavior? Can I stop it from happening? Aside from probably making GC more difficult, these circular structures create pretty serious headaches. For example, iterating over a list of queries that may sometimes contain the same regular expression will crash the MongoDB driver with a nasty segfault (see https://rt.cpan.org/Public/Bug/Display.html?id=58500)

    Read the article

  • Strange build issue using Flex Mojo. Looking for troubleshooting suggestions.

    - by WeeJavaDude
    I have run into a strange issue and I was hoping for some suggestions on how to attack the problem. Here is the environment: 1) We develop locally using Flex Builder. 2) We use QuickBuild with FlexMojo 3.4.2 for test builds and production. 3) In both cases we don't believe optimization is enabled. What we are seeing is some strange behavior relating to the Ctrl-Enter key when testing on IE, but only in our test environment and not locally. By copying some files over locally I have narrowed the issue down to differences in the SWF files. We do see a difference in the size of the SWF files in our test environment vs our local environments. A couple of things that would help me in troubleshooting: 1) Is there a way to know exactly what is in the SWF file, and which SWCs are included? 2) How does one compare compile settings between a Maven mojo configuration and the Flex IDE environment? Any thoughts or opinions would be very helpful.

    Read the article

  • C++ - passing references to boost::shared_ptr

    - by abigagli
    If I have a function that needs to work with a shared_ptr, wouldn't it be more efficient to pass it a reference to it (so as to avoid copying the shared_ptr object)? What are the possible bad side effects? I envision two possible cases: 1) inside the function a copy is made of the argument, like in ClassA::take_copy_of_sp(boost::shared_ptr<foo> &sp) { ... m_sp_member=sp; //This will copy the object, incrementing refcount ... } 2) inside the function the argument is only used, like in Class::only_work_with_sp(boost::shared_ptr<foo> &sp) //Again, no copy here { ... sp->do_something(); ... } In both cases I can't see a good reason to pass the boost::shared_ptr by value instead of by reference. Passing by value would only "temporarily" increment the reference count due to the copying, and then decrement it when exiting the function scope. Am I overlooking something? Andrea. EDIT: Just to clarify, after reading several answers: I perfectly agree on the premature-optimization concerns, and I always try to first profile and then work on the hotspots. My question was more from a purely technical code point of view, if you know what I mean.

    Read the article
