Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.

  • Why are my basic Heroku Apps Taking 2 seconds to load?

    - by viatropos
    I have created two very simple Heroku apps to test out the service, but it often takes several seconds to load the page when I first visit them:

    Cropify - Basic Sinatra App (on GitHub)
    Textile2HTML - Even more basic Sinatra App (on GitHub)

    All I did was create a simple Sinatra app and deploy it. I haven't done anything to mess with or test the Heroku servers. What can I do to improve response time? It's very slow right now and I'm not sure where to start. The code for both projects is on GitHub if that helps. Thanks so much.

  • Which is faster in Python: x**.5 or math.sqrt(x)?

    - by Casey
    I've been wondering this for some time. As the title says, which is faster: the actual function, or simply raising to the half power?

    UPDATE: This is not a matter of premature optimization. This is simply a question of how the underlying code actually works. What is the theory of how Python code works? I sent Guido van Rossum an email because I really wanted to know the differences between these methods.

    My email:

        There are at least 3 ways to do a square root in Python: math.sqrt, the '**' operator and pow(x,.5). I'm just curious as to the differences in the implementation of each of these. When it comes to efficiency which is better?

    His response:

        pow and ** are equivalent; math.sqrt doesn't work for complex numbers, and links to the C sqrt() function. As to which one is faster, I have no idea...
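
    A minimal way to settle the speed question empirically is the standard library's timeit module; the numbers will vary by Python version and hardware, so treat this as a sketch rather than a verdict:

        import timeit

        # Compare the spellings; binding sqrt locally avoids the
        # module-attribute lookup that penalizes "math.sqrt" in tight loops.
        print(timeit.timeit("x ** .5",      setup="x = 123.456"))
        print(timeit.timeit("pow(x, .5)",   setup="x = 123.456"))
        print(timeit.timeit("math.sqrt(x)", setup="import math; x = 123.456"))
        print(timeit.timeit("sqrt(x)",      setup="from math import sqrt; x = 123.456"))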

  • Benchmarking a particular method in Objective-C

    - by Jasconius
    I have a critical method in an Objective-C application that I need to optimize as much as possible. I first need to take some easy benchmarks on this one single method so I can compare my progress as I optimize. What is the easiest way to track the execution time of a given method in, say, milliseconds, and print that to the console?
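
    A minimal sketch of one common approach, using Core Foundation's wall-clock timer (the method name below is a placeholder for the method under test):

        #import <Foundation/Foundation.h>

        CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
        [self criticalMethod];   // hypothetical method under test
        CFAbsoluteTime elapsed = CFAbsoluteTimeGetCurrent() - start;
        NSLog(@"criticalMethod took %.3f ms", elapsed * 1000.0);

    For finer resolution, mach_absolute_time (converted via mach_timebase_info) is the usual alternative.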

  • C# TabPage.Controls.Add too long

    - by Toto
    Hi, at run time I add one control to a TabPage and I notice that it takes 0.5 seconds. That's rather long and I would like to reduce this time. I tried SuspendLayout/ResumeLayout, but for a single action it isn't relevant and didn't improve anything. Any ideas? Thanks.

  • Trying to speed up a SQLITE UNION QUERY

    - by user142683
    I have the below SQLite code:

        SELECT x.t,
               CASE WHEN S.Status='A' AND M.Nomorebets=0
                    THEN S.PriceText ELSE '-' END AS Show_Price
        FROM sb_Market M
        LEFT OUTER JOIN (SELECT 2010 t UNION SELECT 2020 t UNION SELECT 2030 t
                         UNION SELECT 2040 t UNION SELECT 2050 t UNION SELECT 2060 t
                         UNION SELECT 2070 t) AS x
        LEFT OUTER JOIN sb_Selection S
            ON S.MeetingId=M.MeetingId AND S.EventId=M.EventId
           AND S.MarketId=M.MarketId AND x.t=S.team
        WHERE M.meetingid=8051 AND M.eventid=3 AND M.Name='Correct Score'

    With the current interface restrictions, I have to use the above code to ensure that a '-' appears if a selection is missing. A typical feed looks like the following:

        SelectionId  Name    Team  Status  PriceText
        -----------  ------  ----  ------  ---------
        1            Barney  2010  A       10
        2            Jim     2020  A       5
        3            Matt    2030  A       6
        4            John    2040  A       8
        5            Paul    2050  A       15/2
        6            Frank   2060  S       10/11
        7            Tom     2070  A       15

    Is the above SQL the quickest and most efficient approach? Please advise of anything that could help. Messages with updates would be preferable.

  • Is SQL Server DRI (ON DELETE CASCADE) slow?

    - by Aaronaught
    I've been analyzing a recurring "bug report" (perf issue) in one of our systems related to a particularly slow delete operation. Long story short: it seems that the CASCADE DELETE keys were largely responsible, and I'd like to know (a) if this makes sense, and (b) why it's the case.

    We have a schema of, let's say, widgets, those being at the root of a large graph of related tables and related-to-related tables and so on. To be perfectly clear, deleting from this table is actively discouraged; it is the "nuclear option" and users are under no illusions to the contrary. Nevertheless, it sometimes just has to be done. The schema looks something like this:

        Widgets
        |
        +--- Anvils (1:1)
        |    |
        |    +--- AnvilTestData (1:N)
        |
        +--- WidgetHistory (1:N)
             |
             +--- WidgetHistoryDetails (1:N)

    Nothing too scary, really. A Widget can be different types, an Anvil is a special type, so that relationship is 1:1 (or more accurately 1:0..1). Then there's a large amount of data - perhaps thousands of rows of AnvilTestData per Anvil collected over time, dealing with hardness, corrosion, exact weight, hammer compatibility, usability issues, and impact tests with cartoon heads.

    Then every Widget has a long, boring history of various types of transactions - production, inventory moves, sales, defect investigations, RMAs, repairs, customer complaints, etc. There might be 10-20k details for a single widget, or none at all, depending on its age.

    So, unsurprisingly, there's a CASCADE DELETE relationship at every level here. If a Widget needs to be deleted, it means something's gone terribly wrong and we need to erase any records of that widget ever existing, including its history, test data, etc. Again, nuclear option.

    Relations are all indexed, statistics are up to date. Normal queries are fast. The system tends to hum along pretty smoothly for everything except deletes.

    Getting to the point here, finally: for various reasons we only allow deleting one widget at a time, so a delete statement would look like this:

        DELETE FROM Widgets
        WHERE WidgetID = @WidgetID

    Pretty simple, innocuous looking delete... that takes over 2 minutes to run, for a widget with no data!

    After slogging through execution plans I was finally able to pick out the AnvilTestData and WidgetHistoryDetails deletes as the sub-operations with the highest cost. So I experimented with turning off the CASCADE (but keeping the actual FK, just setting it to NO ACTION) and rewriting the script as something very much like the following:

        DECLARE @AnvilID int
        SELECT @AnvilID = AnvilID
        FROM Anvils
        WHERE WidgetID = @WidgetID

        DELETE FROM AnvilTestData
        WHERE AnvilID = @AnvilID

        DELETE FROM WidgetHistory
        WHERE HistoryID IN (
            SELECT HistoryID
            FROM WidgetHistory
            WHERE WidgetID = @WidgetID)

        DELETE FROM Widgets
        WHERE WidgetID = @WidgetID

    Both of these "optimizations" resulted in significant speedups, each one shaving nearly a full minute off the execution time, so that the original 2-minute deletion now takes about 5-10 seconds - at least for new widgets, without much history or test data.

    Just to be absolutely clear, there is still a CASCADE from WidgetHistory to WidgetHistoryDetails, where the fanout is highest; I only removed the one originating from Widgets. Further "flattening" of the cascade relationships resulted in progressively less dramatic but still noticeable speedups, to the point where deleting a new widget was almost instantaneous once all of the cascade deletes to larger tables were removed and replaced with explicit deletes.
    I'm using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test. I've disabled all triggers that might be causing further slowdowns (although those would show up in the execution plan anyway). And I'm testing against older widgets, too, and noticing a significant speedup there as well; deletes that used to take 5 minutes now take 20-40 seconds.

    Now I'm an ardent supporter of the "SELECT ain't broken" philosophy, but there just doesn't seem to be any logical explanation for this behaviour other than crushing, mind-boggling inefficiency of the CASCADE DELETE relationships. So, my questions are:

    1. Is this a known issue with DRI in SQL Server? (I couldn't seem to find any references to this sort of thing on Google or here in SO; I suspect the answer is no.)
    2. If not, is there another explanation for the behaviour I'm seeing?
    3. If it is a known issue, why is it an issue, and are there better workarounds I could be using?

  • HttpWebRequest is extremely slow!

    - by Earlz
    Hello, I am using an open source library to connect to my webserver. I was concerned that the webserver was going extremely slow, and then I tried doing a simple test in Ruby and I got these results:

        Ruby program:  2.11 seconds for 100 HTTP GETs
        C# library:   20.81 seconds for 100 HTTP GETs

    I have profiled and found the problem to be this function:

        private HttpWebResponse GetRawResponse(HttpWebRequest request)
        {
            HttpWebResponse raw = null;
            try
            {
                raw = (HttpWebResponse)request.GetResponse();  // This line!
            }
            catch (WebException ex)
            {
                if (ex.Response is HttpWebResponse)
                {
                    raw = ex.Response as HttpWebResponse;
                }
            }
            return raw;
        }

    The marked line takes over 1 second to complete by itself, while the Ruby program making one request takes 0.3 seconds. I am also doing all of these tests on 127.0.0.1, so network bandwidth is not an issue. What could be causing this huge slowdown?
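
    For what it's worth, two well-known sources of roughly a second of per-request overhead in HttpWebRequest are automatic proxy detection and the Expect: 100-continue handshake. A hedged sketch of settings to try before the call, offered as usual suspects rather than a confirmed diagnosis for this case:

        // Assumptions to rule out, not a verified fix:
        request.Proxy = null;                                         // skip automatic proxy discovery
        System.Net.ServicePointManager.Expect100Continue = false;     // drop the extra round-trip on POSTs
        System.Net.ServicePointManager.DefaultConnectionLimit = 100;  // default is only 2 connections per host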

  • Fast modulo 3 or division algorithm?

    - by aaa
    Hello, is there a fast algorithm for n % 3, similar to the bit tricks available for powers of 2? Perhaps something that uses the fact that a number is divisible by three if the sum of its digits is divisible by three.

    This leads to a next question: what is a fast way to add up the digits of a number? E.g. 37 -> 3 + 7 -> 10. I am looking for something that does not have conditionals, as those tend to inhibit vectorization. Thanks.
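
    One branchless possibility, offered as a sketch: the multiply-and-shift trick compilers use for division by a constant. 0xAAAAAAAB is ceil(2^33 / 3), so the top bits of the 64-bit product give n / 3 without a divide, and the remainder follows with a multiply and subtract:

        #include <stdint.h>
        #include <stdio.h>

        /* floor(n / 3) without a divide; valid for all 32-bit unsigned n */
        static inline uint32_t div3(uint32_t n) {
            return (uint32_t)(((uint64_t)n * 0xAAAAAAABull) >> 33);
        }

        /* n % 3, branch-free */
        static inline uint32_t mod3(uint32_t n) {
            return n - 3u * div3(n);
        }

        int main(void) {
            for (uint32_t n = 0; n < 9; n++)
                printf("%u %% 3 = %u\n", n, mod3(n));
            return 0;
        }

    Optimizing compilers typically generate exactly this for an unsigned n % 3 already, so it is worth measuring before hand-optimizing.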

  • stopwatch accuracy

    - by oo
    How accurate is System.Diagnostics.Stopwatch? I am trying to gather some metrics for different code paths and I need them to be exact. Should I be using Stopwatch, or is there another solution that is more accurate? I have been told that Stopwatch sometimes gives incorrect information.
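
    As a quick sketch, Stopwatch reports its own resolution, so you can check what the timer on a given machine actually provides:

        using System;
        using System.Diagnostics;

        class StopwatchInfo
        {
            static void Main()
            {
                // True when backed by the high-resolution performance counter
                Console.WriteLine(Stopwatch.IsHighResolution);
                Console.WriteLine(Stopwatch.Frequency + " ticks per second");
                Console.WriteLine((1e9 / Stopwatch.Frequency) + " ns per tick");
            }
        }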

  • Is regex too slow? Real life examples where simple non-regex alternative is better

    - by polygenelubricants
    I've seen people here make comments like "regex is too slow!", or "why would you do something so simple using regex!" (and then present a 10+ line alternative instead), etc. I haven't really used regex in an industrial setting, so I'm curious whether there are applications where regex is demonstrably just too slow, AND where a simple non-regex alternative exists that performs significantly (maybe even asymptotically!) better. Obviously many highly-specialized string manipulations with sophisticated string algorithms will outperform regex easily, but I'm talking about cases where a simple solution exists and significantly outperforms regex. What counts as simple is subjective, of course, but I think a reasonable standard is that if it uses only String, StringBuilder, etc., then it's probably simple.
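
    A small illustration of the most common real-world case, hedged in that it shows an API-misuse trap more than an inherent regex limit: String.matches() compiles a fresh Pattern on every call, so a plain substring test wins easily:

        public class RegexVsContains {
            public static void main(String[] args) {
                String s = "performance dashboard";
                // Compiles a new java.util.regex.Pattern on every call
                boolean viaRegex = s.matches(".*dashboard.*");
                // Plain substring scan, no compilation step
                boolean viaSimple = s.contains("dashboard");
                System.out.println(viaRegex + " " + viaSimple);
            }
        }

    Precompiling with Pattern.compile narrows the gap considerably, which is why benchmarks of this kind need care.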

  • how to profile silverlight mvvm application with a lot of custom controls

    - by tomo
    There is a quite big LOB Silverlight application, and we wrote a lot of custom controls which are rather heavy in drawing. All data is loaded by an RIA service, processed, and bound (using the INotifyPropertyChanged interface) to the view. The problem is that the first drawing takes a long time; subsequent calls to the service (server) and redrawing are quite fast. I used the EQATEC profiler to track the problem. I saw that processing takes only a couple of milliseconds, so my idea is that the drawing by the SL engine is slow. I'm wondering if it is possible to somehow profile the processes inside Silverlight to check which drawing operations are taking too much time. Are there any guidelines on how to implement faster drawing of complex custom controls?

  • Why does appending "" to a String save memory?

    - by hsmit
    I used a variable with a lot of data in it, say String data. I wanted to use a small part of this string in the following way:

        this.smallpart = data.substring(12,18);

    After some hours of debugging (with a memory visualizer) I found out that the object's field smallpart kept all the data from data alive, although it only contained the substring. When I changed the code to:

        this.smallpart = data.substring(12,18)+"";

    ...the problem was solved! Now my application uses very little memory! How is that possible? Can anyone explain this? I think this.smallpart kept referencing data, but why?

    UPDATE: How can I clear the big String then? Will data = new String(data.substring(0,100)) do the thing?
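
    For context, offered as a hedged explanation: in JVMs of that era (before Java 7u6), String.substring returned a view that shared the parent string's backing char[], so the small string pinned the whole array. The usual fix is the copying constructor:

        // new String(...) copies only the needed characters, so the large
        // backing array of `data` becomes eligible for garbage collection.
        this.smallpart = new String(data.substring(12, 18));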

  • Is it possible to disable user sessions for guest in Joomla 1.5

    - by WebolizeR
    Hi, is it possible to disable session handling in Joomla 1.5 for guests? I do not use the user system in the frontend, so I assumed there is no need to run queries like the ones below. The site will use APC or Memcache as its caching system under heavy load, so this is important to me. Thanks for your comments.

        # DELETE FROM jos_session WHERE ( time < '1274709357' )
        # SELECT * FROM jos_session WHERE session_id = '70c247cde8dcc4dad1ce111991772475'
        # UPDATE jos_session SET time='1274710257',userid='0',usertype='',username='',gid='0',guest='1',client_id='0',data='__default|a:8:{s:15:\"session.counter\";i:5;s:19:\"session.timer.start\";i:1274709740;s:18:\"session.timer.last\";i:1274709749;s:17:\"session.timer.now\";i:1274709754;s:22:\"session.client.browser\";s:88:\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3\";s:8:\"registry\";O:9:\"JRegistry\":3:{s:17:\"_defaultNameSpace\";s:7:\"session\";s:9:\"_registry\";a:1:{s:7:\"session\";a:1:{s:4:\"data\";O:8:\"stdClass\":0:{}}}s:7:\"_errors\";a:0:{}}s:4:\"user\";O:5:\"JUser\":19:{s:2:\"id\";i:0;s:4:\"name\";N;s:8:\"username\";N;s:5:\"email\";N;s:8:\"password\";N;s:14:\"password_clear\";s:0:\"\";s:8:\"usertype\";N;s:5:\"block\";N;s:9:\"sendEmail\";i:0;s:3:\"gid\";i:0;s:12:\"registerDate\";N;s:13:\"lastvisitDate\";N;s:10:\"activation\";N;s:6:\"params\";N;s:3:\"aid\";i:0;s:5:\"guest\";i:1;s:7:\"_params\";O:10:\"JParameter\":7:{s:4:\"_raw\";s:0:\"\";s:4:\"_xml\";N;s:9:\"_elements\";a:0:{}s:12:\"_elementPath\";a:1:{i:0;s:74:\"C:\xampp\htdocs\sites\iv.mynet.com\libraries\joomla\html\parameter\element\";}s:17:\"_defaultNameSpace\";s:8:\"_default\";s:9:\"_registry\";a:1:{s:8:\"_default\";a:1:{s:4:\"data\";O:8:\"stdClass\":0:{}}}s:7:\"_errors\";a:0:{}}s:9:\"_errorMsg\";N;s:7:\"_errors\";a:0:{}}s:13:\"session.token\";s:32:\"a2b19c7baf223ad5fd2d5503e18ed323\";}' WHERE session_id='70c247cde8dcc4dad1ce111991772475'

  • How do I make "simple" throughput servlet-filter?

    - by Tommy
    I'm looking to create a filter that can give me two things: number of requests per minute, and average response time per minute. I already have the individual readings; I'm just not sure how to add them up. My filter captures every request and records the time each request takes:

        public void doFilter(ServletRequest request, ServletResponse response,
                             FilterChain chain) throws IOException, ServletException {
            long start = System.currentTimeMillis();
            chain.doFilter(request, response);
            long stop = System.currentTimeMillis();
            String time = Util.getTimeDifferenceInSec(start, stop);
        }

    This information will be used to create some pretty Google Chart charts. I don't want to store the data in any database; I just want a way to get the current numbers out when requested. As this is a high-volume application, low overhead is essential. I'm assuming my application server doesn't provide this information.
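
    One low-overhead possibility, sketched with hypothetical names: bucket readings by epoch minute in atomic counters and expose the totals on demand. The reset race can drop a sample at a minute boundary, which is usually acceptable for dashboard-style charts:

        import java.util.concurrent.atomic.AtomicLong;

        class MinuteStats {
            private volatile long minute = System.currentTimeMillis() / 60000;
            private final AtomicLong count = new AtomicLong();
            private final AtomicLong totalMillis = new AtomicLong();

            void record(long elapsedMillis) {
                long now = System.currentTimeMillis() / 60000;
                if (now != minute) {        // new minute: start fresh buckets
                    minute = now;
                    count.set(0);
                    totalMillis.set(0);
                }
                count.incrementAndGet();
                totalMillis.addAndGet(elapsedMillis);
            }

            long requestsThisMinute() { return count.get(); }

            double avgResponseMillis() {
                long c = count.get();
                return c == 0 ? 0.0 : (double) totalMillis.get() / c;
            }
        }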

  • Linux 2.6.31 Scheduler and Multithreaded Jobs

    - by dsimcha
    I run massively parallel scientific computing jobs on a shared Linux computer with 24 cores. Most of the time my jobs are capable of scaling to 24 cores when nothing else is running on this computer. However, it seems like when even one single-threaded job that isn't mine is running, my 24-thread jobs (which I set for high nice values) only manage to get ~1800% CPU (using Linux notation). Meanwhile, about 500% of the CPU cycles (again, using Linux notation) are idle. Can anyone explain this behavior and what I can do about it to get all of the 23 cores that aren't being used by someone else? Notes: In case it's relevant, I have observed this on slightly different kernel versions, though I can't remember which off the top of my head. The CPU architecture is x64. Is it at all possible that the fact that my 24-core jobs are 32-bit and the other jobs I'm competing w/ are 64-bit is relevant? Edit: One thing I just noticed is that going up to 30 threads seems to alleviate the problem to some degree. It gets me up to ~2100% CPU.

  • Retrieving many huge sized EPS files and converting them to JPEG in ASP.NET application

    - by Ashish Gupta
    I have many (600) EPS files (300 KB - 1 MB each) in a database. In my ASP.NET application (using ASP.NET 4.0) I need to retrieve them one by one and call a web service which converts the content to JPEG and updates the database (the JPEGContent column with the JPEG content). However, retrieving the content for all 600 takes too long even from SQL Management Studio itself (5 minutes for 10 EPS contents). So I have two issues:

    1) How to get the EPS content (unfortunately, selecting only a certain number of rows is not an option):

    Approach 1:

        foreach (var row in dataTable.Rows)
        {
            // get the Id and byte[] of the EPS
            // call the web method to convert the EPS content to JPEG,
            // which also updates the database
        }

    or:

        foreach (var row in dataTable.Rows)
        {
            // get only the Id of the EPS
            // hit the database to get the content of the EPS
            // call the web method to convert the EPS content to JPEG,
            // which also updates the database
        }

    or some other approach?

    2) Converting EPS to JPEG using a web method for 600 contents. Of course, each call would be a long-running operation. Would the Task Parallel Library (TPL) be a better way to achieve this? Also, is doing the entire thing in a SQL CLR function a good idea?
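
    If the TPL route is taken, a hedged sketch (the data-access and service calls below are placeholders) is a throttled Parallel.ForEach, so that 600 long-running conversions don't flood the web service or the connection pool:

        using System.Threading.Tasks;

        var options = new ParallelOptions { MaxDegreeOfParallelism = 8 };
        Parallel.ForEach(epsIds, options, id =>
        {
            byte[] eps = LoadEpsContent(id);   // hypothetical data-access call
            ConvertAndStoreJpeg(id, eps);      // hypothetical web-service call
        });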

  • When should I open and close a website's cached WCF proxy?

    - by Brandon Linton
    I've browsed around the other articles on StackOverflow that relate to caching WCF proxies for reuse, and I've read this article explaining why I should explicitly open the proxy before calling anything on it. I'm still a little hazy on the best implementation details. My question is: when should I open and close proxies for service calls on a website, and what should their lifetime be (per call, per request, or per web app)? We aren't planning on leveraging cached security contexts at the moment (but it's not unforeseeable). Thanks!
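
    As a reference point, the canonical per-call pattern looks like the sketch below (ServiceClient is a placeholder for the generated proxy); whatever caching lifetime is chosen, a faulted channel must be aborted rather than closed:

        // requires System.ServiceModel
        var client = new ServiceClient();      // hypothetical generated proxy
        try
        {
            client.Open();                     // explicit open, per the article's advice
            client.DoWork();                   // hypothetical service operation
            client.Close();
        }
        catch (CommunicationException)
        {
            client.Abort();                    // never Close() a faulted channel
        }
        catch (TimeoutException)
        {
            client.Abort();
        }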

  • How to speed up marching cubes?

    - by Dan Vinton
    I'm using this marching cubes algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeometry3Ds, but otherwise the same). The resulting surfaces look great, but are taking a long time to calculate. Are there any ways to speed up marching cubes? The most obvious one is to simply reduce the spatial sampling rate, but this reduces the quality of the resulting mesh. I'd like to avoid this. I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls? Edit: the code has been profiled, and the bulk of CPU time is split between the marching cubes routine itself and the field strength calculation for each grid cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option... I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
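
    A hedged sketch of the coarse first pass (the field delegate, block layout, and margin are assumptions, not a definitive implementation): sample each block's corners, track the min/max, and only march blocks whose sampled range straddles the isolevel. The margin guards against features thinner than a block, which is the main pitfall of this pruning:

        using System;

        static bool BlockMightContainSurface(
            Func<double, double, double, double> field,   // hypothetical field sampler
            double x, double y, double z,                 // block origin
            double size, double iso, double margin)
        {
            double min = double.MaxValue, max = double.MinValue;
            for (int i = 0; i < 8; i++)                   // the block's 8 corners
            {
                double v = field(x + (i & 1) * size,
                                 y + ((i >> 1) & 1) * size,
                                 z + ((i >> 2) & 1) * size);
                if (v < min) min = v;
                if (v > max) max = v;
            }
            return min - margin <= iso && iso <= max + margin;
        }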

  • Python: Why is IDLE so slow?

    - by Adam Matan
    Hi, IDLE is my favorite Python editor. It offers a very nice and intuitive Python shell, which is extremely useful for unit testing and debugging, and a neat debugger. However, code executed under IDLE is insanely slow. By insanely I mean 3 orders of magnitude slow. bash:

        time echo "for i in range(10000): print 'x'," | python

    takes 0.052s. IDLE:

        import datetime
        start = datetime.datetime.now()
        for i in range(10000): print 'x',
        end = datetime.datetime.now()
        print end - start

    takes:

        >>> 0:01:44.853951

    which is roughly 2,000 times slower. Any thoughts or ideas how to improve this? I guess it has something to do with the debugger in the background, but I'm not really sure. Adam

  • How to efficiently save changes made in UI/main thread with Core Data?

    - by Jaanus
    So, there have been several posts here about importing and saving data from an external data source into Core Data. Apple documents a reasonable pattern for this: import and save on a background thread, merge saved objects to the main thread. All fine and good.

    I have a related but different problem: the user is modifying data in the UI and main thread, and thus modifies the state of some objects in the managed object context (MOC). I would like to save these changes from time to time. What is a good way to do that?

    Now, you could say that I could do the same: create a background thread with its own MOC and pass the changed objectIDs there. The catch-22 for me is that an object's ID changes when it is saved, and I cannot guarantee the order of things happening. I may end up passing a different objectID into the background thread for the same object, based on whether the object has been previously saved or not, and I don't know if Core Data can resolve this and see that the different objectIDs point to the same object rather than creating duplicates for me. (I could test this, but I'm lazywebbing with this question first.)

    One thought I had: I could always do MOC saves on a background thread, queued up with an operation queue so that there is only ever one save in progress. I would not create a new MOC; I would just use the same MOC as in the main thread. Now, this is not thread-safe, and if someone modifies the MOC in the main thread while it is being saved in the background thread, the results will probably be catastrophic. But, minus the thread safety, you can see what kind of solution I'd wish for.

    To be clear, the problem I need to fix is that if I just do the save in the main thread, it blocks the UI for an unacceptably long period of time, so I want to move the save to a background thread. So, questions: what about the reasoning of an object ID changing during saving, and Core Data being able to resolve the IDs to the same object? Would this be the right way of addressing the problem? Any other good ways of doing this?
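
    One detail that may defuse the objectID concern, offered as an assumption to verify rather than a full answer: NSManagedObjectContext can be asked for permanent IDs before any save, so the IDs handed to a background thread never change afterwards:

        // Hedged sketch: convert temporary IDs to permanent ones up front.
        NSError *error = nil;
        NSArray *inserted = [[moc insertedObjects] allObjects];
        if (![moc obtainPermanentIDsForObjects:inserted error:&error]) {
            NSLog(@"Could not obtain permanent IDs: %@", error);
        }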

  • database design to speed up hibernate querying of large dataset

    - by paddydub
    I currently have the below tables representing a bus network, mapped in Hibernate and accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster; I load all of the tables below into Lists to perform the route planner logic. I would appreciate any ideas on how to improve performance, or suggestions for another way to approach the problem of handling a large set of data.

    Coordinate Connections Table (INT, INT, INT), containing 50,000 coordinate connections:

        ID  FROMCOORDID  TOCOORDID
        1   1            2
        2   1            17
        3   1            63
        4   1            64
        5   1            65
        6   1            95

    Coordinate Table (INT, DECIMAL, DECIMAL), containing 4,700 coordinates:

        ID  LAT        LNG
        0   59.352669  -7.264341
        1   59.352669  -7.264341
        2   59.350012  -7.260653
        3   59.337585  -7.189798
        4   59.339221  -7.193582
        5   59.341408  -7.205888

    Bus Stop Table (INT, INT, INT), containing 15,000 stops:

        StopID      RouteID  COORDINATEID
        1000100001  100      17
        1000100002  100      18
        1000100003  100      19
        1000100004  100      20
        1000100005  100      21
        1000100006  100      22
        1000100007  100      23

    This is how long it takes to load all the data from each table:

        stop.findAll = 148ms, stops.size: 15670
        Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
        coord.findAll = 51ms, coordinates.size: 4704
        Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
        coordinateConnectionDao.findAll = 238ms; coordConnections.size: 48132

    Hibernate annotations:

        @Entity
        @Table(name = "STOPS")
        public class Stop implements Serializable {
            @Id
            @GeneratedValue
            @Column(name = "STOPID")
            private Integer stopID;

            @Column(name = "ROUTEID")
            private int routeID;

            @Column(name = "COORDINATEID")
            private int coordinateID;
        }

        @Entity
        @Table(name = "COORDINATES")
        public class Coordinate {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private Integer coordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordConnection {
            @Id
            @GeneratedValue
            @Column(name = "COORDCONNECTIONID")
            private Integer coordConnectionID;

            /** From Coordinate_id value */
            @Column(name = "FROMCOORDID", nullable = false)
            private int fromCoordID;

            /** To Coordinate_id value */
            @Column(name = "TOCOORDID", nullable = false)
            private int toCoordID;

            // private Coordinate toCoordID;
        }

  • Eclipse JUnit Plugin Test very slow to re-execute Test Suite on Windows

    - by soundasleepful
    I'm having an odd and stressful problem with running a large JUnit plugin test suite in Eclipse. When I try to re-run a JUnit plugin suite that has just been executed, Eclipse hangs for quite some time before it eventually wakes up and launches. It can sometimes take up to 5 minutes, and the delay increases with the size of the suite. Visually, it appears to be a GC cleanup, except that I have plenty of GC space available (400 MB freely allocated). The size of the workspace it has to delete is well under 1 GB, and there are not too many files - definitely fewer than 20,000. While I was waiting for a new run to start, I decided to manually kill explorer.exe to see if it had any effect. Surprisingly, Eclipse instantly fell out of its freeze and ran as normal. This makes me think that Windows is somehow interfering with the deletion of these workspace files. They're not being put into the Recycle Bin, though. The workspace is in C:, which I think is out of the range of any workspace/domain stuff. Any ideas?

  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem, how I tried to solve it, and my question on how to improve it at the end.

    I have a 100,000-line CSV file from an offline batch job and I needed to insert it into the database as its proper models. Ordinarily, if this is a fairly straightforward load, it can be trivially loaded by just munging the CSV file to fit a schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want.

    The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous model. For example:

        Model C --> Foreign Key --> Model B --> Foreign Key --> Model A

    So the models must be inserted in the order A, B, and C. I came up with a producer/consumer approach:

    - instantiate a multiprocessing.Process which contains a threadpool of 50 persister threads that have a threadlocal connection to a database
    - read a line from the file using the csv DictReader
    - enqueue the dictionary to the process, where each thread creates the appropriate models by querying the right values and each thread persists the models in the appropriate order

    This was faster than a non-threaded read/persist, but it is way slower than bulk-loading a file into the database. The job finished persisting after about 45 minutes. For fun, I decided to write it in SQL statements; that took 5 minutes. Writing the SQL statements took me a couple of hours, though.

    So my question is: could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. This leads to my second question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just use a bulk load into the database? I know about str(model_object), but it does not show the interpolated values. I would appreciate any guidance on how to do this faster. Thanks!
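
    On the second question, a hedged sketch of one approach (the table and columns here are invented for illustration, and this relies on newer SQLAlchemy versions): compile a Core insert statement with literal_binds, which renders the values inline so the output can be written to a file for a bulk load. It only handles simple datatypes, so treat it as a starting point:

        from sqlalchemy import Column, Integer, MetaData, String, Table
        from sqlalchemy.dialects import sqlite

        metadata = MetaData()
        model_a = Table("model_a", metadata,              # hypothetical table
                        Column("id", Integer, primary_key=True),
                        Column("name", String(50)))

        stmt = model_a.insert().values(id=1, name="example")
        # Render the statement with values inlined instead of bound parameters
        print(stmt.compile(dialect=sqlite.dialect(),
                           compile_kwargs={"literal_binds": True}))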

  • fast algorithm implementation to sort very small set

    - by aaa
    Hello. This is a problem I ran into a long time ago, and I thought I would ask for your ideas. Assume I have a very small set of numbers (integers), 4 or 8 elements, that need to be sorted, fast. What would be the best approach/algorithm? My approach was to use the max/min functions. I guess my question pertains more to implementation rather than the type of algorithm. At this point it becomes somewhat hardware dependent, so let us assume an Intel 64-bit processor with SSE3. Thanks.
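
    Building on the min/max idea, a hedged sketch of the standard answer for fixed tiny inputs: a sorting network, whose compare-exchange steps are branch-free and map well onto conditional moves or SSE min/max instructions. This is the optimal 5-comparator network for 4 elements:

        #include <stdio.h>

        static inline int min_i(int a, int b) { return a < b ? a : b; }
        static inline int max_i(int a, int b) { return a > b ? a : b; }

        /* branch-free compare-exchange built from min/max */
        #define CSWAP(x, y) do { int lo = min_i(x, y), hi = max_i(x, y); \
                                 (x) = lo; (y) = hi; } while (0)

        static void sort4(int v[4]) {
            CSWAP(v[0], v[1]);
            CSWAP(v[2], v[3]);
            CSWAP(v[0], v[2]);
            CSWAP(v[1], v[3]);
            CSWAP(v[1], v[2]);
        }

        int main(void) {
            int v[4] = {42, 7, 19, 3};
            sort4(v);
            printf("%d %d %d %d\n", v[0], v[1], v[2], v[3]);
            return 0;
        }

    An 8-element network works the same way with 19 compare-exchange steps, and SIMD versions replace min_i/max_i with packed min/max intrinsics.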
