Search Results

Search found 4549 results on 182 pages for 'quickly'.


  • Can the JVM GC demote an object to eden space?

    - by Kevin
    I'm guessing this isn't possible... but here goes. My understanding is that eden space is cheaper to collect than old-gen space, especially when you start getting into very large heaps. Large heaps tend to come up in long-running applications (server apps), and server apps often want to use some kind of cache. Caches with some kind of eviction (LRU) tend to defeat the assumptions that GC makes (that temporary objects die quickly), so cache evictions end up filling old gen faster than you'd like, and you end up with a more costly old-gen collection. Now, it seems like this sort of thing could be avoided if Java provided a way to mark a reference as about to die (a delete keyword of sorts). The difference from C++ would be that its use is optional, and calling delete would not actually delete the object; rather, it would be a hint to the GC that it should demote the object back to eden space (where it will be more easily collected). I'm guessing this feature doesn't exist, but why not? Is there a reason it's a bad idea?

    Read the article

  • Is it possible to filter data used by pivot table based on filtering the rows in a source table in Excel?

    - by Geoffrey Stoel
    I have developed a dashboard in Excel 2007 that uses one source table in a sheet (filled by a query on our data warehouse) and multiple pivot tables making different cross-sections of this data. I use GETPIVOTDATA in almost a hundred formulas to give me the right value for a specific indicator in my dashboard. This all works fine. However, I have now been asked to make the dashboard for 5 different segments. As you can imagine, I don't want to create 5 different workbooks for this and have to maintain the dashboard logic in all of them. So my question is the following: is it possible to automatically (through VBA or any other means) filter the results in my source table, which is the source for my pivot tables and thus for my dashboard values? So schematically: DATABASE_VIEW -- SOURCE_TABLE -- 12 pivot tables -- 100 GETPIVOTDATA functions. Preferably I would like to load all the segments into the source table (one view on my database) and then filter the data in the source table, which results in filtered source data for my pivots. This way I can (without requerying the db) quickly switch between segments in the dashboards (refreshing pivots only). The source table has a CUSTOMER_SEGMENT column available to filter on. Any help is appreciated. Geoffrey
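
    To make the question concrete, this is roughly the kind of VBA I'm imagining (sheet name, column position, and Sub name are all made up):

        Sub ShowSegment(segment As String)
            Dim ws As Worksheet, sh As Worksheet, pt As PivotTable
            Set ws = ThisWorkbook.Worksheets("Source")
            ' Filter the source table on CUSTOMER_SEGMENT (assumed to be column 4).
            ' Note: I'm not sure hiding rows actually excludes them from the
            ' pivot caches - that is part of my question.
            ws.Range("A1").AutoFilter Field:=4, Criteria1:=segment
            ' Refresh every pivot in the workbook so the dashboard picks it up
            For Each sh In ThisWorkbook.Worksheets
                For Each pt In sh.PivotTables
                    pt.RefreshTable
                Next pt
            Next sh
        End Sub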

    Read the article

  • What Getters and Setters should and shouldn't do.

    - by cyclotis04
    I've run into a lot of differing opinions on getters and setters lately, so I figured I should make it into its own question. A previous question of mine received an immediate comment (later deleted) stating that setters shouldn't have any side effects, and that a SetProperty method would be a better choice. Indeed, this seems to be Microsoft's opinion as well. However, their properties often raise events, such as Resized when a form's Width or Height property is set. OwenP also states "you shouldn't let a property throw exceptions, properties shouldn't have side effects, order shouldn't matter, and properties should return relatively quickly." Yet Michael Stum states that exceptions should be thrown while validating data within a setter. If your setter doesn't throw an exception, how could you effectively validate data, as so many of the answers to this question suggest? What about when you need to raise an event, like nearly all of Microsoft's Controls do? Aren't you then at the mercy of whoever subscribed to your event? If their handler does a massive amount of work, or throws an error itself, what happens to your setter? Finally, what about lazy loading within the getter? This too could violate the previous guidelines. What is acceptable to place in a getter or setter, and what should be kept only in accessor methods?
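
    For concreteness, here is the sort of setter I'm asking about: it validates (throws), raises an event, and so does several of the things the advice above forbids. A sketch with invented names:

        private int width;
        public event EventHandler Resized;

        public int Width
        {
            get { return width; }
            set
            {
                if (value < 0)
                    throw new ArgumentOutOfRangeException("value"); // validation in a setter
                if (value == width) return;
                width = value;
                var handler = Resized;            // side effect: notify subscribers,
                if (handler != null)              // whose handlers are out of my control
                    handler(this, EventArgs.Empty);
            }
        }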

    Read the article

  • Developing on a Windows machine that interacts with a Linux system

    - by Jamie
    Sorry for the bad title (couldn't think of a better way to describe it). I have a Windows machine which I do development on. However, I have a new project which needs to interact with a Linux system (executing Linux commands etc.), so obviously I can't do development on my Windows machine alone, and I don't wish to code on the dev machine, svn commit, and then svn update on the Linux machine. Is there a way for any changes I make on my dev machine to be quickly mirrored to the Linux machine? SVN is not a very quick alternative, and of course some changes will be very minor. Any ideas? A network share, I guess... but that's not very pretty (a bit slow, too). As fellow developers, I would like to know if you've been in a similar situation and how you've resolved it. On a further note, I can't just install Ubuntu as my development machine and mirror the commands, applications etc. from the Linux machine, because it's a cluster 'master' machine and therefore has quite a special configuration. Thanks guys! EDIT: I've also thought about having web services on the Linux machine and then just calling them from code, thus separating the platform development dependency. What do you think about that too? Thanks

    Read the article

  • Queries within queries: Is there a better way?

    - by mririgo
    As I build bigger, more advanced web applications, I'm finding myself writing extremely long and complex queries. I tend to write queries within queries a lot because I feel making one call to the database from PHP is better than making several and correlating the data. However, anyone who knows anything about SQL knows about JOINs. Personally, I've used a JOIN or two before, but quickly stopped when I discovered using subqueries because it felt easier and quicker for me to write and maintain. Commonly, I'll do subqueries that may contain one or more subqueries from relative tables. Consider this example:

        SELECT (SELECT username
                FROM users
                WHERE records.user_id = user_id) AS username,
               (SELECT last_name || ', ' || first_name
                FROM users
                WHERE records.user_id = user_id) AS name,
               in_timestamp,
               out_timestamp
        FROM records
        ORDER BY in_timestamp

    Rarely, I'll do subqueries after the WHERE clause. Consider this example:

        SELECT user_id,
               (SELECT name
                FROM organizations
                WHERE (SELECT organization
                       FROM locations
                       WHERE records.location = location_id) = organization_id) AS organization_name
        FROM records
        ORDER BY in_timestamp

    In these two cases, would I see any sort of improvement if I decided to rewrite the queries using a JOIN? As more of a blanket question, what are the advantages/disadvantages of using subqueries or a JOIN? Is one way more correct or accepted than the other?
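
    For reference, here is my understanding of how the first query would look rewritten with a JOIN (assuming users.user_id is the key being matched):

        -- LEFT JOIN keeps records with no matching user, like the scalar
        -- subqueries do (they'd return NULL for username and name).
        SELECT u.username,
               u.last_name || ', ' || u.first_name AS name,
               r.in_timestamp,
               r.out_timestamp
        FROM records r
        LEFT JOIN users u ON u.user_id = r.user_id
        ORDER BY r.in_timestamp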

    Read the article

  • How to use MySQL geospatial extensions with spherical geometries

    - by Joshua
    Hi everyone, I would like to store thousands of latitude/longitude points in a MySQL db. I was successful at setting up the tables and adding the data using the geospatial extensions, where the column 'coord' is a Point(lat, lng). Problem: I want to quickly find the N closest entries to latitude X degrees and longitude Y degrees. Since the Distance() function has not yet been implemented, I used the GLength() function to calculate the distance between (X,Y) and each of the entries, sorting by ascending distance and limiting to N results. The problem is that this does not calculate shortest distance with spherical geometry. This means that if Y = 179.9 degrees, the list of closest entries will only include longitudes starting at 179.9 and decreasing, even though closer entries exist with longitudes increasing from -179.9. How does one typically handle the discontinuity in longitude when working with spherical geometries in databases? There has to be an easy solution to this, but I must just be searching for the wrong thing, because I have not found anything helpful. Should I just forget the GLength() function and create my own function for calculating angular separation? If I do this, will it still be fast and take advantage of the geospatial extensions? Thanks! josh UPDATE: This is exactly what I am describing above; however, it is only for SQL Server. Apparently SQL Server has Geometry and Geography datatypes, and the Geography type does exactly what I need. Is there something similar in MySQL?
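
    In case it helps frame the question, this is the sort of great-circle calculation I'd roll myself if I abandon GLength(). A sketch assuming a table points(id, lat, lng) in degrees, with 37.5/-122.3 standing in for X/Y and 10 for N:

        -- Spherical law of cosines; 6371 km is the Earth's mean radius.
        -- (As far as I can tell this won't use the spatial index, which is
        -- exactly the part of my question about staying fast.)
        SELECT id,
               6371 * ACOS(  COS(RADIANS(37.5)) * COS(RADIANS(lat))
                           * COS(RADIANS(lng) - RADIANS(-122.3))
                           + SIN(RADIANS(37.5)) * SIN(RADIANS(lat)) ) AS dist_km
        FROM points
        ORDER BY dist_km
        LIMIT 10;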

    Read the article

  • Interesting LinqToSql behaviour

    - by Ben Robinson
    We have a database table that stores the location of some wave files plus related metadata. There is a foreign key (employeeid) on the table that links to an employee table. However, not all wav files relate to an employee; for these records employeeid is null. We are using LinqToSql to access the database. The query to pull out all wav file records not related to an employee is as follows:

        var results = from Wavs in db.WaveFiles
                      where Wavs.employeeid == null
                      select Wavs;

    Except this returns no records, despite the fact that there are records where employeeid is null. On profiling SQL Server, I discovered the reason no records are returned is that LinqToSql turns it into SQL that looks very much like:

        SELECT Field1, Field2 -- etc.
        FROM WaveFiles
        WHERE 1 = 0

    Obviously this returns no rows. However, if I go into the DBML designer, remove the association and save, all of a sudden the exact same LINQ query turns into:

        SELECT Field1, Field2 -- etc.
        FROM WaveFiles
        WHERE EmployeeID IS NULL

    I.e. if there is an association, then LinqToSql assumes that all records have a value for the foreign key (even though it is nullable and the property appears as a nullable int on the WaveFile entity) and as such deliberately constructs a WHERE clause that will return no records. Does anyone know if there is a way to keep the association in LinqToSql but stop this behaviour? A workaround I can think of quickly is to have a calculated field called IsSystemFile and set it to 1 if employeeid is null and 0 otherwise. However, this seems like a bit of a hack to work around strange behaviour of LinqToSql, and I would rather do something in the DBML file or define something on the foreign key constraint that will prevent this behaviour.
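
    One thing I haven't tried yet (so this is pure speculation): comparing the association property rather than the FK column, which I've read LinqToSql translates differently:

        // Untested; 'Employee' is my guess at the association property name
        // generated for the employeeid foreign key.
        var results = from w in db.WaveFiles
                      where w.Employee == null
                      select w;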

    Read the article

  • Any useful suggestions to figure out where memory is being freed in a Win32 process?

    - by LeopardSkinPillBoxHat
    An application I am working with is exhibiting the following behaviour: during a particular high-memory operation, the memory usage of the process under Task Manager (Mem Usage stat) reaches a peak of approximately 2.5 GB. (Note: a registry key has been set to allow this, as usually there is a maximum of 2 GB for a process under 32-bit Windows.) After the operation is complete, the process size slowly starts decreasing at a rate of 1 MB per second. I am trying to figure out the easiest way to quickly determine what is freeing this memory, and where it is being freed. I am having trouble attaching a memory profiler to my code, and I don't particularly want to override the new/delete operators to track the allocations/deallocations (IOW, I want to do this without re-compiling my code). Can anyone offer any useful suggestions on how I could do this via the Visual Studio debugger? Update: I should also mention that it's a multi-threaded application, so pausing the application and analysing the call stack through the debugger is not the most desirable option. I considered freezing different threads one at a time to see if the memory stops reducing, but I'm fairly certain this would cause the application to crash.

    Read the article

  • How do I serialise a graph in Java without getting StackOverflowError?

    - by Tim Cooper
    I have a graph structure in Java ("graph" as in "edges and nodes") and I'm attempting to serialise it. However, I get StackOverflowError, despite significantly increasing the JVM stack size. I did some googling, and apparently this is a well-known limitation of Java serialisation: it doesn't work for deeply nested object graphs such as long linked lists. It uses a stack record for each link in the chain, and it doesn't do anything clever such as a breadth-first traversal, so you very quickly get a stack overflow. The recommended solution is to customise the serialisation code by overriding readObject() and writeObject(), but this seems a little complex to me. (It may or may not be relevant, but I'm storing a bunch of fields on each edge in the graph, so I have a class JuNode which contains a member ArrayList<JuEdge> links; i.e. there are 2 classes involved, rather than plain object references from one node to another. It shouldn't matter for the purposes of the question.) My question is threefold: (a) why don't the implementors of Java rectify this limitation, or are they already working on it? (I can't believe I'm the first person to ever want to serialise a graph in Java.) (b) Is there a better way? Is there some drop-in alternative to the default serialisation classes that does it in a cleverer way? (c) If my best option is to get my hands dirty with low-level code, does someone have an example of graph serialisation Java source code that I can use to learn how to do it?
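
    To make (c) concrete, this is the direction I'd expect such code to take: a sketch (my own simplified class, not JuNode) that flattens the graph into a node list plus an edge list of integer indices, so recursion depth no longer depends on path length:

        import java.io.*;
        import java.util.*;

        class Node implements Serializable {
            String label;
            transient List<Node> links = new ArrayList<Node>(); // not serialised directly
            Node(String label) { this.label = label; }
        }

        class GraphIO {
            // Assumes every linked node is itself present in 'nodes'.
            static void write(List<Node> nodes, ObjectOutputStream out) throws IOException {
                Map<Node, Integer> index = new IdentityHashMap<Node, Integer>();
                for (int i = 0; i < nodes.size(); i++) index.put(nodes.get(i), i);
                out.writeObject(new ArrayList<Node>(nodes)); // shallow: labels only
                int[][] edges = new int[nodes.size()][];     // links as integer indices
                for (int i = 0; i < nodes.size(); i++) {
                    List<Node> ls = nodes.get(i).links;
                    edges[i] = new int[ls.size()];
                    for (int j = 0; j < ls.size(); j++) edges[i][j] = index.get(ls.get(j));
                }
                out.writeObject(edges);
            }

            @SuppressWarnings("unchecked")
            static List<Node> read(ObjectInputStream in) throws IOException, ClassNotFoundException {
                List<Node> nodes = (List<Node>) in.readObject();
                int[][] edges = (int[][]) in.readObject();
                for (int i = 0; i < nodes.size(); i++) {
                    nodes.get(i).links = new ArrayList<Node>(); // rebuild transient links
                    for (int j : edges[i]) nodes.get(i).links.add(nodes.get(j));
                }
                return nodes;
            }
        }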

    Read the article

  • Deserialize XML which uses attribute name/value pairs

    - by Bodyloss
    My application receives a constant stream of XML files which are more or less a direct copy of the database record:

        <record type="update">
            <field name="id">987654321</field>
            <field name="user_id">4321</field>
            <field name="updated">2011-11-24 13:43:23</field>
        </record>

    And I need to deserialize this into a class which provides nullable properties for all columns:

        class Record
        {
            public long? Id { get; set; }
            public long? UserId { get; set; }
            public DateTime? Updated { get; set; }
        }

    I just can't seem to work out a method of doing this without having to parse the XML file manually and switch on the field's name attribute to store the values. Is there a way this can be achieved quickly using an XmlSerializer? And if not, is there a more efficient way of parsing it manually? My main problem is that each field's name attribute holds what is really a property name, while the property's value is the content of the <field>...</field> element. Regards and thanks
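
    This is the manual parsing I'm doing now (LINQ to XML), which works but feels like it defeats the point of having a serializer:

        // requires: using System; using System.Linq; using System.Xml.Linq;
        var doc = XDocument.Parse(xml);
        var fields = doc.Root.Elements("field")
                        .ToDictionary(f => (string)f.Attribute("name"),
                                      f => f.Value);
        var record = new Record
        {
            Id      = fields.ContainsKey("id")      ? (long?)long.Parse(fields["id"]) : null,
            UserId  = fields.ContainsKey("user_id") ? (long?)long.Parse(fields["user_id"]) : null,
            Updated = fields.ContainsKey("updated") ? (DateTime?)DateTime.Parse(fields["updated"]) : null
        };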

    Read the article

  • What good open source programs exist for fuzzing popular image file types?

    - by JohnnySoftware
    I am looking for a free, open-source, portable fuzzing tool for popular image file types, written in Java, Python, or Jython. Ideally, it would accept specifications for the fuzzable fields using some kind of declarative constraints; a non-procedural grammar for specifying constraints is greatly preferred. Otherwise, I might as well write them all in Python or whatever, just specifying ranges of valid values or expressions for them. Ideally, it would support some kind of generative programming to export the fuzzer into various programming languages, to suit cases where more customization is required. If it supported a direct-manipulation GUI for controlling parameter values and ranges, that would be nice too. The file formats that should be supported are: GIF, JPEG, PNG. So basically, it should be sort of a toolkit consisting of a ready-to-run utility and a framework or library, capable of generating the fuzzed files directly as well as from programs it generates. It needs to be simple, so that test images can be created quickly. It should have a batch capability for creating a series of images; creating just one at a time would be too painful. I do not want a hacking tool, just a QA tool. Basically, I just want to address concerns that it is taking too long to get commonplace image rendering/parsing libraries stable and trustworthy.
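
    Failing a ready-made tool, this is the scale of thing I'd write myself: a minimal batch byte-flipping fuzzer in Python, with no grammar support (which is exactly what I'm hoping an existing tool provides):

        import random

        def fuzz_file(src, dst, flips=16, seed=None):
            """Copy src to dst, flipping a few random bytes past the signature."""
            rng = random.Random(seed)
            with open(src, 'rb') as f:
                data = bytearray(f.read())
            for _ in range(flips):
                i = rng.randrange(8, len(data))   # keep the first 8 bytes (magic number)
                data[i] = rng.randrange(256)
            with open(dst, 'wb') as f:
                f.write(bytes(data))

        # batch mode: 100 deterministic variants of one seed image
        for n in range(100):
            fuzz_file('input.png', 'fuzzed_%03d.png' % n, seed=n)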

    Read the article

  • SQL indexes for "not equal" searches

    - by bortzmeyer
    A SQL index lets me quickly find strings that match my query. Now I have to search a big table for the strings which do not match. Of course, the normal index does not help, and I have to do a slow sequential scan:

        essais=> \d phone_idx
        Index "public.phone_idx"
         Column | Type
        --------+------
         phone  | text
        btree, for table "public.phonespersons"

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone = '+33 1234567';
                                          QUERY PLAN
        -------------------------------------------------------------------------------
         Index Scan using phone_idx on phonespersons  (cost=0.00..8.41 rows=1 width=4)
           Index Cond: (phone = '+33 1234567'::text)
        (2 rows)

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone != '+33 1234567';
                                       QUERY PLAN
        ----------------------------------------------------------------------
         Seq Scan on phonespersons  (cost=0.00..18621.00 rows=999999 width=4)
           Filter: (phone <> '+33 1234567'::text)
        (2 rows)

    I understand (see Mark Byers' very good explanations) that PostgreSQL can decide not to use an index when it sees that a sequential scan would be faster (for instance, if almost all the tuples match). But here, "not equal" searches really are slower. Is there any way to make these "is not equal to" searches faster? Here is another example, to address Mark Byers' excellent remarks. The index is used for the '=' query (which returns the vast majority of tuples) but not for the '!=' query:

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) = 'fr';
                                                                  QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------------
         Index Scan using tld_idx on emailspersons  (cost=0.25..4010.79 rows=97033 width=4) (actual time=0.137..261.123 rows=97110 loops=1)
           Index Cond: (tld(email) = 'fr'::text)
         Total runtime: 444.800 ms
        (3 rows)

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) != 'fr';
                                                               QUERY PLAN
        --------------------------------------------------------------------------------------------------------------------
         Seq Scan on emailspersons  (cost=0.00..27129.00 rows=2967 width=4) (actual time=1.004..1031.224 rows=2890 loops=1)
           Filter: (tld(email) <> 'fr'::text)
         Total runtime: 1037.278 ms
        (3 rows)

    The DBMS is PostgreSQL 8.3 (but I can upgrade to 8.4).
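
    One rewrite I'm experimenting with (a sketch on my own tables): splitting the '<>' into two range conditions that the btree index can use:

        -- Equivalent to phone <> '+33 1234567' for non-NULL phones, but
        -- expressed as two ranges the btree can scan. The planner may still
        -- prefer a seq scan when most rows match, of course.
        SELECT person FROM PhonesPersons WHERE phone < '+33 1234567'
        UNION ALL
        SELECT person FROM PhonesPersons WHERE phone > '+33 1234567';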

    Read the article

  • [C#] System.Drawing graphing is slow or flickery: solutions or other options?

    - by Luke Mcneice
    Hi all, I am doing a little graphing via System.Drawing and I'm having a few problems. I'm holding data in a Queue, and I'm drawing (graphing) that data onto three picture boxes. This method fills the picture box and then scrolls the graph across. So as not to draw on top of the previous drawings (and gradually look messier), I found 2 solutions for drawing the graph:

    1. Call plot.Clear(BACKGOUNDCOLOR) before the draw loop [block commented below], although this causes a flicker, from the time it takes to do the actual drawing loop.
    2. Call plot.DrawLine(channelPen[5], j, 140, j, 0) just before each DrawLine [commented below], although this causes the drawing to start fine and then slow down very quickly to a crawl, as if a wait command had been placed before the draw command.

    Here is the code for reference:

        /*
        plotx.Clear(BACKGOUNDCOLOR);
        ploty.Clear(BACKGOUNDCOLOR);
        plotz.Clear(BACKGOUNDCOLOR);
        */
        for (int j = 1; j < 599; j++)
        {
            if (j > RealTimeBuffer.Count - 1) break;
            QueueEntity past = RealTimeBuffer.ElementAt(j - 1);
            QueueEntity current = RealTimeBuffer.ElementAt(j);
            if (j == 1)
            {
                //plotx.DrawLine(channelPen[5], 0, 140, 0, 0);
                //ploty.DrawLine(channelPen[5], 0, 140, 0, 0);
                //plotz.DrawLine(channelPen[5], 0, 140, 0, 0);
            }
            //plotx.DrawLine(channelPen[5], j, 140, j, 0);
            plotx.DrawLine(channelPen[0], j - 1, (((past.accdata.X - 0x7FFF) / 256) + 64),
                           j, (((current.accdata.X - 0x7FFF) / 256) + 64));
            //ploty.DrawLine(channelPen[5], j, 140, j, 0);
            ploty.DrawLine(channelPen[1], j - 1, (((past.accdata.Y - 0x7FFF) / 256) + 64),
                           j, (((current.accdata.Y - 0x7FFF) / 256) + 64));
            //plotz.DrawLine(markerPen, j, 140, j, 0);
            plotz.DrawLine(channelPen[2], j - 1, (((past.accdata.Z - 0x7FFF) / 256) + 94),
                           j, (((current.accdata.Z - 0x7FFF) / 256) + 94));
        }

    Are there any tricks to avoid these overheads? If not, would there be any other/better solutions?
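
    For reference, this is the double-buffering variant I'm about to try (my own sketch; pictureBoxX and BackColor are placeholders): draw the whole frame off-screen, then blit it once, so the clear never reaches the screen:

        using (var frame = new Bitmap(pictureBoxX.Width, pictureBoxX.Height))
        using (var g = Graphics.FromImage(frame))
        {
            g.Clear(BackColor);                        // clearing happens off-screen
            // ... the whole DrawLine loop goes here, drawing on g ...
            using (var screen = pictureBoxX.CreateGraphics())
                screen.DrawImageUnscaled(frame, 0, 0); // one blit per frame: no flicker
        }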

    Read the article

  • Using the <h2> content as the page title after it's sent?

    - by Delan Azabani
    Currently, I have a semi-dynamic system for my website's pages: head.php has all the tags before the content body, foot.php the tags after. Any page using the main theme includes head.php, then writes the content, then includes foot.php. Currently, to be able to set the title, I quickly set a variable $title before inclusion:

        <?php
        $title = 'Untitled document';
        include_once '../head.php';
        ?>
        <h2>Untitled document</h2>
        Content here...
        <?php include_once '../foot.php'; ?>

    So that in head.php:

        <title><?php echo $title; ?> | Delan Azabani</title>

    However, this seems kludgy, as the title is, most of the time, the same as the content of the h2 tag. Is there a way I can get PHP to read the content of the h2, track back and insert it, then send the whole thing at the end?
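
    One idea I've had since posting (a sketch; %TITLE% is a placeholder token I'd invent): buffer the whole page, then pull the title out of the first h2 before sending:

        <?php
        // head.php would print: <title>%TITLE% | Delan Azabani</title>
        ob_start();
        include_once '../head.php';
        ?>
        <h2>Untitled document</h2>
        Content here...
        <?php
        include_once '../foot.php';
        $page = ob_get_clean();
        preg_match('~<h2>(.*?)</h2>~', $page, $m);
        $title = isset($m[1]) ? $m[1] : 'Untitled document';
        echo str_replace('%TITLE%', $title, $page);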

    Read the article

  • Delivering activity feed items in a moderately scalable way

    - by sotangochips
    The application I'm working on has an activity feed where each user can see their friends' activity (much like Facebook). I'm looking for a moderately scalable way to show a given user's activity stream on the fly. I say 'moderately' because I'm looking to do this with just a database (PostgreSQL) and maybe memcached. For instance, I want this solution to scale to 200k users, each with 100 friends. Currently, there is a master activity table that stores the rendered HTML for a given activity (Jim added a friend, George installed an application, etc.), along with the source user and a timestamp. Then there's a separate ('join') table that simply keeps a pointer to each person who should see the activity in their friend feed, and a pointer to the object in the main activity table. So if I have 100 friends and I do 3 activities, the join table grows to 300 items. Clearly this table will grow very quickly. It has the nice property, though, that fetching activity to show to a user takes a single (relatively) inexpensive query. The other option is to keep only the main activity table and query it by saying something like:

        SELECT * FROM activity
        WHERE source_user IN (1, 2, 44, 2423, ... my friend list)

    This has the disadvantage that you're querying for users who may never be active, and as your friend list grows, this query gets slower and slower. I see the pros and the cons of both sides, but I'm wondering if some SO folks might help me weigh the options and suggest one way or the other. I'm also open to other solutions, though I'd like to keep it simple and not install something like CouchDB, etc. Many thanks!
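
    For symmetry with the query above, here is roughly what the read against the fan-out ('join') table looks like (table and column names are illustrative):

        SELECT a.html, a.created_at
        FROM feed_items f
        JOIN activity a ON a.id = f.activity_id
        WHERE f.recipient_id = 42
        ORDER BY a.created_at DESC
        LIMIT 50;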

    Read the article

  • Database schema for Product Properties

    - by Chemosh
    Like so many people, I'm looking for a products / product properties database schema. I'm using Ruby on Rails and (Thinking) Sphinx for faceted searches. Requirements: adding new product types and their options should not require a change to the database schema, and faceted searches using Sphinx must be supported. Solutions I've come across (see Bill Karwin's answer):
    Option 1: Single Table Inheritance. Not an option, really; the table would contain way too many columns.
    Option 2: Class Table Inheritance. Ruby on Rails caches the database schema on start-up, which means a restart whenever a new type of product is introduced. If you have a sizeable product catalog, this could mean hundreds of tables.
    Option 3: Serialized LOB. Kills being able to do faceted searches without heavy application logic.
    Option 4: Entity-Attribute-Value. For testing purposes, EAV worked fine (see the sketch below). However, it could quickly become a mess and a maintenance hell as you add more and more options (e.g. when an option increases the price or delivery time).
    Which option should I go with? What other solutions are out there? Is there a silver bullet (ha) I overlooked?
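
    For clarity, option 4 as I prototyped it looks something like this (Postgres, my own names):

        CREATE TABLE products (
            id   serial PRIMARY KEY,
            name text NOT NULL
        );

        CREATE TABLE product_properties (
            product_id integer NOT NULL REFERENCES products,
            name       text NOT NULL,   -- e.g. 'color'
            value      text NOT NULL,   -- e.g. 'red'; everything is a string
            PRIMARY KEY (product_id, name)
        );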

    Read the article

  • Rewriting jQuery in plain old JavaScript - are the performance gains worth it?

    - by Swader
    Since jQuery is an incredibly easy library, I've developed a rather complex project fairly quickly with it. The entire interface is jQuery-based, and memory is cleaned regularly to maintain optimum performance. Everything works very well in Firefox, and exceptionally so in Chrome (other browsers are of no concern for me, as this is not a commercial or publicly available product). What I'm wondering now is: since pure plain old JavaScript is really not a complicated language to master, would rewriting the whole thing in plain old JS enhance performance, and if so, how much of a boost would you expect to get from it? If the answers prove positive enough, I'll go ahead and do it, run a benchmark, and report back with the precise findings. Cheers. Edit: Thanks guys, valuable insight. The purpose was not to "re-invent the wheel"; it was just for experience and personal improvement. Just because something exists doesn't mean you shouldn't explore it in greater detail, know how it works, or try to recreate it. This is the same reason I seldom use frameworks: I would much rather use my own code, iron it out, and gain massive experience doing it, than start off by using someone else's code, regardless of how ironed out it is. Anyway, won't be doing it, thanks for saving me the effort :)

    Read the article

  • How do I stop a bouncy jQuery animation?

    - by Miguel
    In a webapp I'm working on, I want to create slider divs that move up and down on mouseover and mouseout (respectively). I currently have this implemented with jQuery's hover() function, using animate() and reducing/increasing the element's top CSS value as needed. This actually works fairly well. The problem is that it tends to get stuck: if you move the mouse over it (especially near the bottom) and quickly remove it, it will slide up and down continuously and won't stop until it's completed 3-5 cycles. To me, it seems the issue might be one animation starting before another is done (e.g. the two are trying to run at once, so the element slides back and forth). Okay, now for the code. Here's the basic jQuery that I'm using:

        $('.slider').hover(
            /* mouseover */
            function () {
                $(this).animate({ top: '-=120' }, 300);
            },
            /* mouseout */
            function () {
                $(this).animate({ top: '+=120' }, 300);
            }
        );

    I've also recreated the behavior in a JSFiddle. Any ideas on what's going on? :) ==EDIT== UPDATED JSFiddle
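
    Since posting I've found jQuery's .stop(), which I think is aimed at exactly this; here's the variant I'm about to test (unverified):

        $('.slider').hover(
            function () {
                // clearQueue + jumpToEnd, so the relative +=/-=120 stays consistent
                $(this).stop(true, true).animate({ top: '-=120' }, 300);
            },
            function () {
                $(this).stop(true, true).animate({ top: '+=120' }, 300);
            }
        );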

    Read the article

  • Setting up Qt and PyQt on Mac OS X so my app is also deployable on Windows

    - by hk_programmer
    Hi, I've been coding with Python and C++ and now need to build a GUI for data visualization purposes. I work on Mac OS X Snow Leopard (Intel), Python 3.1, using gcc 4.2.1 (from Xcode 3.1). I wanted to first install Qt and then PyQt. My goals are to be able to:
    - quickly prototype the GUI, and the logic that drives it, using PyQt and Python
    - if I decide I need the speed, or if it's fairly easy to translate my GUI into C++ using the Qt tools, have the option to translate my app into C++
    - deploy my application onto Windows (both the Python and C++ versions of my app)
    Given the goals above, what are the correct steps I should take, and what issues should I be aware of when setting up Qt and PyQt? Which other deployment tools do I need? From my readings so far, here's what I have:
    - download the Qt source for Mac and configure it with -platform macx-g++42 -arch x86_64 -no-framework (I've read somewhere that building as a framework causes some trouble in deployment and/or debugging; can't find the article anymore)
    - download the latest SIP source and build
    - download the latest PyQt and build from source (any special options I should pay attention to?)
    For deployment, I've read that I would need to use py2exe/cx_freeze for Windows and py2app for Mac: http://arstechnica.com/open-source/guides/2009/03/how-to-deploying-pyqt-applications-on-windows-and-mac-os-x.ars But it seems what that article describes is deploying an app you build on Windows on the Windows platform, and vice versa. How do you deploy to Windows (is it even possible?) if you are writing your Qt app on a Mac? Really appreciate the help

    Read the article

  • Is Assert.Fail() considered bad practice?

    - by Mendelt
    I use Assert.Fail a lot when doing TDD. I'm usually working on one test at a time, but when I get ideas for things I want to implement later, I quickly write an empty test where the name of the test method indicates what I want to implement, as a sort of todo list. To make sure I don't forget, I put an Assert.Fail() in the body. When trying out xUnit.Net, I found they hadn't implemented Assert.Fail. Of course you can always Assert.IsTrue(false), but this doesn't communicate my intention as well. I got the impression Assert.Fail wasn't implemented on purpose. Is this considered bad practice? If so, why?
    @Martin Meredith: That's not exactly what I do. I do write a test first and then implement code to make it work. Usually I think of several tests at once, or I think of a test to write when I'm working on something else. That's when I write an empty failing test to remember. By the time I get to writing the test, I neatly work test-first.
    @Jimmeh: That looks like a good idea. Ignored tests don't fail, but they still show up in a separate list. Have to try that out.
    @Matt Howells: Great idea. NotImplementedException communicates intention better than Assert.Fail() in this case.
    @Mitch Wheat: That's what I was looking for. It seems it was left out to prevent it being abused in the way I abuse it.
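
    Combining the suggestions, this is what my todo-list tests might look like in xUnit.Net (a sketch; test names are examples):

        [Fact(Skip = "todo: implement unicode handling")]
        public void Parser_handles_unicode() { }        // listed as skipped, not failed

        [Fact]
        public void Parser_handles_empty_input()
        {
            // placeholder that still fails loudly, without Assert.Fail()
            throw new NotImplementedException();
        }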

    Read the article

  • Efficiency of manually written loops vs operator overloads (C++)

    - by Sagekilla
    Hi all, in the program I'm working on I have 3-element arrays, which I use as mathematical vectors for all intents and purposes. Through the course of writing my code, I was tempted to just roll my own Vector class with simple +, -, *, / etc. overloads, so I can simplify statements like:

        for (int i = 0; i < 3; i++)
            r[i] = r1[i] - r2[i];

        // becomes:
        r = r1 - r2;

    These should be more or less identical in generated code. But when it comes to more complicated things, could this really impact my performance heavily? One example from my code is this. Manually written version:

        for (int j = 0; j < 3; j++)
        {
            p.vel[j] = p.oldVel[j] + (p.oldAcc[j] + p.acc[j]) * dt2
                     + (p.oldJerk[j] - p.jerk[j]) * dt12;
            p.pos[j] = p.oldPos[j] + (p.oldVel[j] + p.vel[j]) * dt2
                     + (p.oldAcc[j] - p.acc[j]) * dt12;
        }

    Using a Vector class with operator overloads:

        p.vel = p.oldVel + (p.oldAcc + p.acc) * dt2 + (p.oldJerk - p.jerk) * dt12;
        p.pos = p.oldPos + (p.oldVel + p.vel) * dt2 + (p.oldAcc - p.acc) * dt12;

    I am compiling my code for maximum possible speed, as it's extremely important that this code runs quickly and calculates accurately. So will relying on my Vector class for these sorts of things really affect me? For those curious, this is part of some numerical integration code, which is not trivial to run in my program. Any insight would be appreciated, as would any idioms or tricks I'm unaware of.
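
    For reference, the Vector I have in mind is nothing more than this (a sketch); every operator is inline, and the open question is whether the compiler elides the temporaries each operator returns:

        struct Vec3 {
            double v[3];
            double&       operator[](int i)       { return v[i]; }
            const double& operator[](int i) const { return v[i]; }
        };

        inline Vec3 operator+(const Vec3& a, const Vec3& b) {
            Vec3 r; for (int i = 0; i < 3; ++i) r.v[i] = a.v[i] + b.v[i]; return r;
        }
        inline Vec3 operator-(const Vec3& a, const Vec3& b) {
            Vec3 r; for (int i = 0; i < 3; ++i) r.v[i] = a.v[i] - b.v[i]; return r;
        }
        inline Vec3 operator*(const Vec3& a, double s) {
            Vec3 r; for (int i = 0; i < 3; ++i) r.v[i] = a.v[i] * s; return r;
        }
        inline Vec3 operator/(const Vec3& a, double s) {
            Vec3 r; for (int i = 0; i < 3; ++i) r.v[i] = a.v[i] / s; return r;
        }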

    Read the article

  • On-disk substring index

    - by emeryc
    I have a file (a fasta file, to be specific) that I would like to index, so that I can quickly locate any substring within the file and then find the location within the original fasta file. This would be easy to do in many cases, using a trie or suffix array; unfortunately, the strings I need to index total 800+ MB, which means building those structures in memory is unacceptable, so I'm looking for a reasonable way to create this index on disk, with minimal memory usage. (Edit for clarification.) I am only interested in the headers of proteins, so for the largest database I'm interested in, this is about 800 MB of text. I would like to be able to find an exact substring within O(N) time, based on the input string. This must be usable on 32-bit machines, as it will be shipped to random people who are not expected to have 64-bit machines. I want to be able to index against any word break within a line, to the end of the line (though lines can be several MB long). Hopefully this clarifies what is needed and why the current solutions given are not illuminating. I should also add that this needs to be done from within Java, and must be done on client computers on various operating systems, so I can't use any OS-specific solution, and it must be a programmatic solution.
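
    The direction I'm leaning (a sketch, with index construction omitted and a file layout I've made up): a suffix array of 4-byte offsets stored on disk, binary-searched with two RandomAccessFiles, so memory stays O(1):

        import java.io.*;

        class DiskSuffixIndex {
            private final RandomAccessFile text;     // the 800 MB header text
            private final RandomAccessFile suffixes; // sorted int offsets into text

            DiskSuffixIndex(File textFile, File indexFile) throws IOException {
                text = new RandomAccessFile(textFile, "r");
                suffixes = new RandomAccessFile(indexFile, "r");
            }

            /** Offset of one occurrence of q, or -1. O(|q| log n), 2 seeks per step. */
            long find(String q) throws IOException {
                byte[] qb = q.getBytes("US-ASCII");
                byte[] probe = new byte[qb.length];
                long lo = 0, hi = suffixes.length() / 4 - 1;
                while (lo <= hi) {
                    long mid = (lo + hi) >>> 1;
                    suffixes.seek(mid * 4);
                    int off = suffixes.readInt();   // 800 MB < 2^31, so int offsets fit
                    text.seek(off);
                    int n = Math.max(text.read(probe), 0); // may read short near EOF
                    int cmp = compare(probe, n, qb);
                    if (cmp == 0) return off;       // query is a prefix of this suffix
                    if (cmp < 0) lo = mid + 1; else hi = mid - 1;
                }
                return -1;
            }

            private static int compare(byte[] a, int alen, byte[] b) {
                for (int i = 0; i < Math.min(alen, b.length); i++)
                    if (a[i] != b[i]) return (a[i] & 0xFF) - (b[i] & 0xFF);
                return alen - b.length;
            }
        }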

    Read the article

  • Question about MySQL transactions and triggers

    - by WilliamLou
    I quickly browsed the MySQL manual but didn't find the exact information about my question. Here it is: suppose I have an InnoDB table A with two triggers, fired by 'AFTER INSERT ON A' and 'AFTER UPDATE ON A'. More specifically, for example, one trigger is defined as:

        CREATE TRIGGER test_trigger AFTER INSERT ON A
        FOR EACH ROW
        BEGIN
            INSERT INTO B SELECT * FROM A WHERE A.col1 = NEW.col1;
        END;

    You can ignore the query between BEGIN and END; basically, this trigger inserts several rows into table B, which is also an InnoDB table. Now, if I start a transaction and then insert many rows, say 10K, into table A: if there were no triggers associated with table A, all these inserts would be atomic, that's for sure. But if table A is associated with several insert/update triggers which insert/update many rows into table B and/or table C, etc., will all these inserts and/or updates still be atomic as a whole? I think they are, but it's kind of difficult to test, and I can't find an explanation in the manual. Can anyone confirm this? Thanks a lot!
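
    This is the experiment I'd run to convince myself, in case anyone can confirm the expected outcome:

        START TRANSACTION;
        INSERT INTO A (col1) VALUES (1);   -- fires test_trigger, which writes to B
        INSERT INTO A (col1) VALUES (2);
        ROLLBACK;
        -- My expectation: the trigger's inserts roll back along with mine,
        -- so this should return 0 if everything is indeed atomic.
        SELECT COUNT(*) FROM B;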

    Read the article

  • Any utility to test-expand C/C++ #define macros?

    - by Randy
    It seems I often spend way too much time trying to get a #define macro to do exactly what I want. I'll post my current dilemma below, and any help is appreciated. But really, the bigger question is whether there is any utility someone could recommend to quickly display what a macro is actually doing. It seems like even the slow trial-and-error process would go much faster if I could see what is wrong. Currently, I'm dynamically loading a long list of functions from a DLL I made. The way I've set things up, the function pointers have the same names as the exported functions, and the typedefs used to prototype them have the same names, but with a prepended underscore. So I want to use a define to simplify the assignment of a long list of function pointers. For example, in the code statement below, 'hexdump' is the name of a typedef'd function pointer, and is also the name of the function, while _hexdump is the name of the typedef. If GetProcAddress() fails, a failure counter is incremented:

        if (!(hexdump = (_hexdump)GetProcAddress(h, "hexdump"))) --iFail;

    So let's say I'd like to replace each line like the above with a macro, like this:

        GETADDR_FOR(hexdump)

    Well, this is the best I've come up with so far. It doesn't work (my // comment is just to prevent text formatting in the message)...

        // #define GETADDR_FOR(a) if (!(a = (#_#a)GetProcAddress(h, "/""#a"/""))) --iFail;

    And again, while I'd APPRECIATE an insight into what silly mistake I've made, it would make my day to have a utility that would show me the error of my ways by simply plugging in my macro.
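
    For what it's worth, this is what I think the corrected macro should look like (## to paste the underscore, # to stringize), though it's unconfirmed, which is exactly why I want a tool to check. The closest thing I know of is gcc -E, which runs just the preprocessor and prints the expanded source:

        /* a = function name; _##a pastes the typedef name, #a stringizes it */
        #define GETADDR_FOR(a) \
            if (!((a) = (_##a)GetProcAddress(h, #a))) --iFail;

        GETADDR_FOR(hexdump)   /* should expand to the hand-written line above */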

    Read the article

  • Do I need to write a trigger for such a simple constraint?

    - by Paul Hanbury
    I really had a hard time knowing what words to put into the title of my question, as I am not especially sure if there is a database pattern related to my problem. I will try to simplify matters as much as possible to get directly to the heart of the issue. Suppose I have some tables. The first one is a list of widget types:

        create table widget_types (
            widget_type_id number(7,0) primary key,
            description varchar2(50)
        );

    The next one contains icons:

        create table icons (
            icon_id number(7,0) primary key,
            picture blob
        );

    Even though the users get to select their preferred widget, there is a predefined subset of widgets that they can choose from for each widget type:

        create table icon_associations (
            widget_type_id number(7,0) references widget_types,
            icon_id number(7,0) references icons,
            primary key (widget_type_id, icon_id)
        );

        create table icon_prefs (
            user_id number(7,0) references users,
            widget_type_id number(7,0),
            icon_id number(7,0),
            primary key (user_id, widget_type_id),
            foreign key (widget_type_id, icon_id) references icon_associations
        );

    Pretty simple so far. Let us now assume that if we are displaying an icon to a user who has not set up his preferences, we choose one of the appropriate images associated with the current widget. I'd like to specify the preferred icon to display in such a case, and here's where I run into my problem:

        alter table icon_associations add (
            is_preferred char(1) check( is_preferred in ('y','n') )
        );

    I do not see how I can enforce that for each widget_type there is one, and only one, row having is_preferred set to 'y'. I know that in MySQL, I am able to write a subquery in my check constraint to quickly resolve this issue. This is not possible with Oracle. Is my mistake that this column has no business being in the icon_associations table? If not, where should it go? Is this a case where, in Oracle, the constraint can only be handled with a trigger? I ask only because I'd like to go the constraint route if at all possible. Thanks so much for your help, Paul
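
    One constraint-only idea I want to sanity-check (a sketch, untested): a unique function-based index that only indexes the 'y' rows, enforcing at most one preferred icon per widget type:

        -- Rows where is_preferred = 'n' map to NULL and stay out of the index,
        -- so uniqueness only applies to the 'y' rows. This gives "at most one",
        -- though not "exactly one".
        CREATE UNIQUE INDEX one_preferred_per_widget ON icon_associations (
            CASE WHEN is_preferred = 'y' THEN widget_type_id END
        );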

    Read the article
