Search Results

Search found 6361 results on 255 pages for 'speed up'.

  • difference equations in MATLAB - why the need to switch signs?

    - by jefflovejapan
    Perhaps this is more of a math question than a MATLAB one, not really sure. I'm using MATLAB to compute an economic model - the New Hybrid ISLM model - and there's a confusing step where the author switches the sign of the solution. First, the author declares symbolic variables and sets up a system of difference equations. Note that the suffixes "a" and "2t" both mean "time t+1", "2a" means "time t+2" and "t" means "time t":

        %% --------------------------[2] MODEL proc-----------------------------%%
        % Define endogenous vars ('a' denotes t+1 values)
        syms y2a pi2a ya pia va y2t pi2t yt pit vt ;

        % Monetary policy rule
        ia = q1*ya+q2*pia;
        % ia = q1*(ya-yt)+q2*pia;   %% option: speed limit policy

        % Model equations
        IS   = rho*y2a+(1-rho)*yt-sigma*(ia-pi2a)-ya;
        AS   = beta*pi2a+(1-beta)*pit+alpha*ya-pia+va;
        dum1 = ya-y2t;
        dum2 = pia-pi2t;
        MPs  = phi*vt-va;
        optcon = [IS ; AS ; dum1 ; dum2 ; MPs];

    He then computes the matrix A:

        %% ------------------ [3] Linearization proc ------------------------%%
        % Differentiation
        xx   = [y2a pi2a ya pia va y2t pi2t yt pit vt] ;   % define vars
        jopt = jacobian(optcon,xx);

        % Define linear coefficients
        coef = eval(jopt);
        B = [ -coef(:,1:5) ] ;
        C = [ coef(:,6:10) ] ;
        % B[c(t+1) l(t+1) k(t+1) z(t+1)] = C[c(t) l(t) k(t) z(t)]
        A = inv(C)*B ;   % (linearized reduced form)

    As far as I understand, this A is the solution to the system. It's the matrix that turns time t+1 and t+2 variables into t and t+1 variables (it's a forward-looking model). My question is essentially: why is it necessary to reverse the signs of all the partial derivatives in B in order to get this solution? I'm talking about this step:

        B = [ -coef(:,1:5) ] ;

    Reversing the sign here obviously reverses the sign of every component of A, but I don't have a clear understanding of why it's necessary. My apologies if the question is unclear or if this isn't the best place to ask.
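
    A way to see why the minus appears (a sketch of the algebra, not the author's own derivation): jacobian() linearizes optcon around the point where every equation equals zero, so the stacked system reads J1*x(t+1) + J2*x(t) = 0 with J1 = coef(:,1:5) and J2 = coef(:,6:10). Writing that as B*x(t+1) = C*x(t) means moving one block across the equals sign, and that move is the sign switch: B = -J1, C = J2. A one-equation MATLAB analogue:

        % Minimal sketch; variable names here are illustrative, not the model's.
        % The stacked model is:  J1*x(t+1) + J2*x(t) = 0
        % which is equivalent to: (-J1)*x(t+1) = J2*x(t), i.e. B*x(t+1) = C*x(t)
        syms a b x1 x0
        eqn = a*x1 + b*x0;              % one-equation analogue of optcon
        J   = jacobian(eqn, [x1 x0]);   % J = [a b]
        B   = -J(1);                    % = -a  -- the sign switch
        C   =  J(2);                    % =  b
        % B*x1 == C*x0  <=>  a*x1 + b*x0 == 0, so without the minus,
        % A = inv(C)*B would solve the wrong-signed system.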

  • Read variable-length records from a buffer - weird memory issues

    - by bsg
    Hi, I'm trying to implement an I/O-intensive quicksort (C++ qsort) on a very large dataset. In the interests of speed, I'd like to read in a chunk of data at a time into a buffer and then use qsort to sort it inside the buffer. (I am currently working with text files but would like to move to binary soon.) However, my data is composed of variable-length records, and qsort needs to be told the length of the record in order to sort. Is there any way to standardize this?

    The only thing I could think of was rather convoluted: my program currently reads from the buffer until it hits a linefeed character ('10' in ASCII), transferring each character over to another array. When it finds a linefeed (the delimiter in the input file), it fills the number of spaces remaining in the buffer for that record (record size is set to 30) with null characters. This way, I should end up with a buffer full of fixed-size records to give qsort.

    I know there are several problems with my approach, one being that it's just clumsy, another that the record size might conceivably be larger than 30, but is generally much less. Is there a better way of doing this?

    As well, my current code doesn't even work. When I debug it, it seems to be transferring characters from one buffer to the other, but when I try to print out the buffer, it contains only the first record. Here is my code:

        FILE *fp;
        unsigned char *buff;
        unsigned char *realbuff;
        FILE *inputFiles[NUM_INPUT_FILES];

        buff = (unsigned char *) malloc(2048);
        realbuff = (unsigned char *) malloc(NUM_RECORDS * RECORD_SIZE);

        fp = fopen("postings0.txt", "r");
        if(fp)
        {
            fread(buff, 1, 2048, fp);
            /* for(int i=0; i<30; i++)
                   cout << buff[i] << endl; */
            int y = 0;
            int recordcounter = 0;
            // cout << buff;
            for(int i=0; i<100; i++)
            {
                if(buff[i] != char(10))
                {
                    realbuff[y] = buff[i];
                    y++;
                    recordcounter++;
                }
                else
                {
                    if(recordcounter < RECORD_SIZE)
                        for(int j=recordcounter; j < RECORD_SIZE; j++)
                        {
                            realbuff[y] = char(0);
                            y++;
                        }
                    recordcounter = 0;
                }
            }
            cout << realbuff << endl;
            cout << buff;
        }
        else
            cout << "sorry";

    Thank you very much, bsg
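
    The printing symptom, at least, has a simple explanation: streaming a char* with operator<< stops at the first '\0', so once the null padding follows record one, "cout << realbuff" prints only that record even though the rest of the buffer is filled. Below is a minimal sketch of the pad-to-fixed-width idea done with fgets/strncpy (which null-pads automatically), plus the qsort call; it is my own code, not the poster's, with the filename and sizes taken from the question and error handling omitted:

        // Sketch: pad each line to RECORD_SIZE bytes, then qsort fixed-size records
        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        const int RECORD_SIZE = 30;
        const int NUM_RECORDS = 1000;

        int compareRecords(const void *a, const void *b)
        {
            // Records are null-padded, so a bounded compare is safe
            return strncmp((const char *)a, (const char *)b, RECORD_SIZE);
        }

        int main()
        {
            char *records = (char *) calloc(NUM_RECORDS, RECORD_SIZE);
            char line[256];
            int n = 0;

            FILE *fp = fopen("postings0.txt", "r");
            while (n < NUM_RECORDS && fgets(line, sizeof line, fp))
            {
                line[strcspn(line, "\n")] = '\0';    // strip the linefeed
                // strncpy null-pads when the source is shorter than RECORD_SIZE
                strncpy(records + n * RECORD_SIZE, line, RECORD_SIZE);
                n++;
            }
            fclose(fp);

            qsort(records, n, RECORD_SIZE, compareRecords);

            for (int i = 0; i < n; i++)
                printf("%.*s\n", RECORD_SIZE, records + i * RECORD_SIZE);

            free(records);
            return 0;
        }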

  • Do you feel underappreciated or resent the geek/nerd stigma?

    - by dotnetdev
    At work we have a piece of A4 paper with the number of everyone in the office. The structure of this document is laid out in rectangles, by department. I work for the department that does all the technical stuff. That includes support (bear in mind that the support staff isn't educated in IT but just has experience in PC maintenance and providing support to a system we resell but don't have source code access to), a project manager, a team leader, a network administrator, a product manager, and me, a programmer.

    Anyway, on this paper, we are labelled as nerds and geeks. I did take a little offence to this, as much as it is light-hearted (but annoying and old) humour. I have a vivid image that a geek is someone who doesn't go out but codes all day. I code all day at home and at work (when I have something to code...), but I keep balance by going out. I don't know why it is only people who work with computers that get such a stigma. No other profession really gets the same stigma - skilled, technical, or whatever. An account manager (and this is hardly a skilled job) says, "Perhaps [MY NAME HERE] could write some geeky code tomorrow to add this functionality to the website."

    It is funny how I get such an unfair stigma, yet I am so pivotal. In fact, if it wasn't for me, the company would have nothing to sell, so the account managers would be redundant! I make systems, they get sold, and this is what pays the wages. It's funny how the account managers get a commission for how many systems they sell, or manage to make clients resubscribe to. Yet I built the thing in the first place!

    On top of that, my brother says all I do is type stuff on a keyboard all day. Surely if I did, I'd be typing at my normal typing speed of 100wpm+ as if I am writing a blog entry. Instead, I plan as I code along on the fly if commercial pressures and time prohibit proper planning. I never type as if I'm writing normal English. There is more to our jobs than just typing code. And my brother is a pipe fitter with no formal qualifications to his name. I could easily, and perhaps more justifiably, say he just manipulates a spanner or something.

    Do you feel underappreciated, or that the geek/nerd stigma is undeserved or unfair?

  • Twisted: why is it that passing a deferred callback to a deferred thread makes the thread blocking?

    - by surtyaarthoughts
    I unsuccessfully tried using txredis (the non-blocking Twisted API for Redis) for a persistent message queue I'm trying to set up with a scrapy project I am working on. I found that although the client was not blocking, it became much slower than it could have been, because what should have been one event in the reactor loop was split up into thousands of steps.

    So instead, I tried making use of redis-py (the regular blocking Redis API) and wrapping the call in a deferred thread. It works great; however, I want to perform an inner deferred when I make a call to Redis, as I would like to set up connection pooling in an attempt to speed things up further.

    Below is my interpretation of some sample code taken from the Twisted docs for a deferred thread, to illustrate my use case:

        #!/usr/bin/env python
        from twisted.internet import reactor, threads
        from twisted.internet.task import LoopingCall
        import time

        def main_loop():
            print 'doing stuff in main loop.. do not block me!'

        def aBlockingRedisCall():
            print 'doing lookup... this may take a while'
            time.sleep(10)
            return 'results from redis'

        def result(res):
            print res

        def main():
            lc = LoopingCall(main_loop)
            lc.start(2)
            d = threads.deferToThread(aBlockingRedisCall)
            d.addCallback(result)
            reactor.run()

        if __name__=='__main__':
            main()

    And here is my alteration for connection pooling, which makes the code in the deferred thread blocking:

        #!/usr/bin/env python
        from twisted.internet import reactor, defer
        from twisted.internet.task import LoopingCall
        import time

        def main_loop():
            print 'doing stuff in main loop.. do not block me!'

        def aBlockingRedisCall(x):
            if x < 5:  # all connections are busy, try later
                print '%s is less than 5, get a redis client later' % x
                x += 1
                d = defer.Deferred()
                d.addCallback(aBlockingRedisCall)
                reactor.callLater(1.0, d.callback, x)
                return d
            else:
                print 'got a redis client; doing lookup.. this may take a while'
                time.sleep(10)  # this is now blocking.. any ideas?
                d = defer.Deferred()
                d.addCallback(gotFinalResult)
                d.callback(x)
                return d

        def gotFinalResult(x):
            return 'final result is %s' % x

        def result(res):
            print res

        def aBlockingMethod():
            print 'going to sleep...'
            time.sleep(10)
            print 'woke up'

        def main():
            lc = LoopingCall(main_loop)
            lc.start(2)
            d = defer.Deferred()
            d.addCallback(aBlockingRedisCall)
            d.addCallback(result)
            reactor.callInThread(d.callback, 1)
            reactor.run()

        if __name__=='__main__':
            main()

    So my question is: does anyone know why my alteration causes the deferred thread to be blocking, and/or can anyone suggest a better solution?
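
    The likely cause: reactor.callLater fires d.callback in the reactor thread, so once the retry path runs, the rest of the callback chain - including the time.sleep(10) - executes on the reactor, not in the thread pool the original deferToThread used. A hedged sketch of one way around it (function names are mine): keep the retry scheduling on the reactor, but push each actual blocking lookup back out with deferToThread. Returning a Deferred from a callback chains it in, so the caller's addCallback(result) still fires with the final value.

        # Sketch only: retry with callLater on the reactor thread, but run the
        # blocking work itself in the thread pool on every attempt.
        from twisted.internet import reactor, threads, defer
        import time

        def blockingLookup(x):              # runs in a worker thread
            time.sleep(10)                  # stands in for the real redis call
            return 'final result is %s' % x

        def getClientAndLookup(x):
            if x < 5:                       # all connections busy: retry later
                d = defer.Deferred()
                d.addCallback(getClientAndLookup)
                reactor.callLater(1.0, d.callback, x + 1)
                return d
            # got a client; do the slow part off the reactor thread
            return threads.deferToThread(blockingLookup, x)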

  • Using multiple aggregate functions in an algebraic expression in (ANSI) SQL statement

    - by morpheous
    I have the following aggregate functions (AGG FUNCs): foo(), foobar(), fredstats(), barneystats(). I want to know if I can use multiple AGG FUNCs in an algebraic expression. This may seem a strange/simplistic question for seasoned SQL developers; however, the reason I ask is that so far, all AGG FUNC examples I have seen are of the simplistic variety, e.g. max(salary) < 100, rather than expressions involving multiple AGG FUNCs (like agg_func1() < agg_func2()). The information below should help clarify further.

    Given tables with the following schemas:

        CREATE TABLE item (id int, length float, weight float);
        CREATE TABLE item_info (item_id int, name varchar(32));

    Is it legal (ANSI) SQL to write queries of this format?

        SELECT id, name, foo, foobar, fredstats
        FROM A, B
             (SELECT id, foo(123) as foo, foobar('red') as foobar, fredstats('weight') as fredstats
              FROM item
              GROUP BY id
              HAVING [ALGEBRAIC EXPRESSION]
              ORDER BY id AS A),
             item_info AS B
        WHERE item.id = B.id

    Where ALGEBRAIC EXPRESSION is the type of expression that can be used in a WHERE clause - for example:

        ((foo(x) < foobar(y)) AND foobar(y) IN (1,2,3)) OR (fredstats(x) <> 0)

    I am using PostgreSQL as the db, but I would prefer to use ANSI SQL wherever possible. Assuming it is legal to include AGG FUNCs in the way I have done above, I'd like to know:

    1. Is there a more efficient way to write the above query?
    2. Is there any way I can speed up the query in terms of a judicious choice of indexes on the tables item and item_info?
    3. Is there a performance hit to using AGG FUNCs in an algebraic expression like I am (i.e. an expression involving the output of aggregate functions rather than constants)?
    4. Can the expression also include 'scaled' AGG FUNCs (for example: 2*foo(123) < -3*foobar(456)) - will scaling, i.e. multiplying an AGG FUNC by a number, have an effect on performance?
    5. How can I write the query above using INNER JOINs instead?
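
    For what it's worth, here is a sketch of the JOIN form with an aggregate expression in HAVING. Standard aggregates stand in for the custom foo()/foobar()/fredstats(), and the predicate values are made up; combining and scaling aggregates inside HAVING is legal in ANSI SQL and in PostgreSQL:

        -- Sketch: aggregates combined algebraically, with an INNER JOIN
        SELECT i.id, inf.name,
               MIN(i.length) AS foo,
               MAX(i.weight) AS foobar,
               AVG(i.weight) AS fredstats
        FROM item i
        INNER JOIN item_info inf ON inf.item_id = i.id
        GROUP BY i.id, inf.name
        HAVING (MIN(i.length) < MAX(i.weight) AND MAX(i.weight) IN (1,2,3))
            OR (2 * AVG(i.weight) <> 0)      -- scaling an aggregate is fine
        ORDER BY i.id;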

  • drag to pan on a UserControl

    - by Matías
    Hello, I'm trying to build my own "PictureBox like" control, adding some functionality. For example, I want to be able to pan over a big image by simply clicking and dragging with the mouse. The problem seems to be in my OnMouseMove method. If I use the following code I get the drag speed and precision I want, but of course, when I release the mouse button and try to drag again, the image is restored to its original position.

        using System.Drawing;
        using System.Windows.Forms;

        namespace Testing
        {
            public partial class ScrollablePictureBox : UserControl
            {
                private Image image;
                private bool centerImage;

                public Image Image
                {
                    get { return image; }
                    set { image = value; Invalidate(); }
                }

                public bool CenterImage
                {
                    get { return centerImage; }
                    set { centerImage = value; Invalidate(); }
                }

                public ScrollablePictureBox()
                {
                    InitializeComponent();
                    SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.OptimizedDoubleBuffer, true);
                    Image = null;
                    AutoScroll = true;
                    AutoScrollMinSize = new Size(0, 0);
                }

                private Point clickPosition;
                private Point scrollPosition;

                protected override void OnMouseDown(MouseEventArgs e)
                {
                    base.OnMouseDown(e);
                    clickPosition.X = e.X;
                    clickPosition.Y = e.Y;
                }

                protected override void OnMouseMove(MouseEventArgs e)
                {
                    base.OnMouseMove(e);
                    if (e.Button == MouseButtons.Left)
                    {
                        scrollPosition.X = clickPosition.X - e.X;
                        scrollPosition.Y = clickPosition.Y - e.Y;
                        AutoScrollPosition = scrollPosition;
                    }
                }

                protected override void OnPaint(PaintEventArgs e)
                {
                    base.OnPaint(e);
                    e.Graphics.FillRectangle(new Pen(BackColor).Brush, 0, 0, e.ClipRectangle.Width, e.ClipRectangle.Height);
                    if (Image == null) return;

                    int centeredX = AutoScrollPosition.X;
                    int centeredY = AutoScrollPosition.Y;
                    if (CenterImage)
                    {
                        // Something not relevant
                    }
                    AutoScrollMinSize = new Size(Image.Width, Image.Height);
                    e.Graphics.DrawImage(Image, new RectangleF(centeredX, centeredY, Image.Width, Image.Height));
                }
            }
        }

    But if I modify my OnMouseMove method to look like this:

        protected override void OnMouseMove(MouseEventArgs e)
        {
            base.OnMouseMove(e);
            if (e.Button == MouseButtons.Left)
            {
                scrollPosition.X += clickPosition.X - e.X;
                scrollPosition.Y += clickPosition.Y - e.Y;
                AutoScrollPosition = scrollPosition;
            }
        }

    ... you will see that the dragging is not as smooth as before, and sometimes behaves weirdly (like with lag or something). What am I doing wrong? I've also tried removing all "base" calls in a desperate attempt to solve this issue, haha, but again, it didn't work. Thanks for your time.
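
    A hedged sketch of the usual fix: remember where the scroll was when the drag started, then pan relative to that point. The field names are mine; note the WinForms sign quirk that the AutoScrollPosition getter returns negative offsets while the setter expects positive ones.

        private Point dragStart;        // mouse position at mouse-down
        private Point scrollStart;      // scroll offset at mouse-down

        protected override void OnMouseDown(MouseEventArgs e)
        {
            base.OnMouseDown(e);
            dragStart = e.Location;
            // getter is negative, so negate to get the true offset
            scrollStart = new Point(-AutoScrollPosition.X, -AutoScrollPosition.Y);
        }

        protected override void OnMouseMove(MouseEventArgs e)
        {
            base.OnMouseMove(e);
            if (e.Button == MouseButtons.Left)
            {
                AutoScrollPosition = new Point(
                    scrollStart.X + (dragStart.X - e.X),
                    scrollStart.Y + (dragStart.Y - e.Y));
            }
        }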

  • Should we denormalize database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there are 8 vector values per measurement, and we may consider this number to be constant for the needs of our prototype project. We are using NHibernate. Tests are done in SQLite (disk file db, not in-memory), but production will probably be MsSQL.

    Our Measurement entity class is the one that holds a single measurement, and looks like this:

        public class Measurement
        {
            public virtual Guid Id { get; private set; }
            public virtual Device Device { get; private set; }
            public virtual Timestamp Timestamp { get; private set; }
            public virtual IList<VectorValue> Vectors { get; private set; }
        }

    Vector values are stored in a separate table, so that each of them references its parent measurement through a foreign key. We have done a couple of things to ensure that the generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, and the ADO.Net batch size is set to 100 (I think SQLite does not support batch updates? But it might be useful later).

    The problem: right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected):

        1 timestamp
        1 measurement
        8 vector values

    which means that we are actually doing 10x more single-table inserts: 1500-2000 per second. If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times. Switching to SQL Server will improve performance, but we would like to know if there might be a way to avoid the unnecessary performance costs related to the way the database is organized right now.

    [Edit] With in-memory SQLite I get around 350 items/sec (3500 single-table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL Server and stop assuming things, right? I will update my post as soon as I test it.
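
    For concreteness, a sketch of the denormalized layout being weighed (column names are assumptions, not the project's actual schema): one row per measurement, with the timestamp and the eight vector values inlined as the nine dedicated columns mentioned above.

        -- Sketch only: flattened measurement table
        CREATE TABLE Measurement (
            Id        UNIQUEIDENTIFIER PRIMARY KEY,  -- Guid.Comb-friendly
            DeviceId  INT NOT NULL,
            Timestamp DATETIME NOT NULL,
            V1 FLOAT, V2 FLOAT, V3 FLOAT, V4 FLOAT,
            V5 FLOAT, V6 FLOAT, V7 FLOAT, V8 FLOAT
        );
        -- 500 measurements/sec then means 500 row inserts/sec instead of ~5000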

  • JavaScript Resource Management Design Pattern

    - by Adam
    As a web developer, a common problem I find myself tackling is waiting for something to load before doing something else. In particular, I often hide (using either display: none; or visibility: hidden;, depending on the situation) elements while waiting for a background image or a CSS file to load.

    Consider this example from Last.FM. They overlay a semi-transparent PNG over each album art image so that it looks like it's inside a jewel-case. They let it load when it loads, so depending on your internet speed, you may see the art image by itself (without the overlay) temporarily. In this case, the album art looks fine without the jewel-case effect. But in similar situations, I have found that I don't want the user to see the site's design mangled as resources incrementally load. So, in rare cases I have hidden everything from the user until the whole kit and caboodle has loaded. But this is often a pain to write out, and may force the user to wait a pretty long time to see anything (besides "loading..." text).

    I can think of (and have used on occasion) some obvious solutions/compromises:

    - Use some inline CSS so that as certain parts of the DOM load and render, they will immediately have the correct size/position/etc.
    - Immediately render the navigation part of the site, so that if the user wanted to use the current page purely to get somewhere else, they don't have to wait for the rest to load.
    - Load pixelated images first as placeholders for layout while lazy-loading higher quality images as replacements.
    - Something quirky like using a cute animated gif to distract the user during a "loading..." phase.
    - Show useful information as a reference while loading the full UI. (Something akin to Gmail Inbox Preview, etc.)

    (Sorry if my question was basically just asked and answered...) Despite all of these ideas, I still find myself hoping there are better ways of doing some of these things. So I guess what I'm looking for is some inspiration and/or any creative ways of dealing with this problem that you guys may have seen out in the wild.
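
    A small sketch of the hide-until-loaded pattern described above (the element id and image URL are made up for illustration): keep the styled block hidden, preload the heavy asset, and reveal only when it has arrived - failing open so a slow or missing image never blanks the page.

        // Sketch: hide-until-loaded for a single decorative overlay
        var artwork = document.getElementById('album-art');  // assumed element
        artwork.style.visibility = 'hidden';

        var overlay = new Image();
        overlay.onload = function () {
            artwork.style.backgroundImage = 'url(' + overlay.src + ')';
            artwork.style.visibility = 'visible';   // design is intact now
        };
        overlay.onerror = function () {
            artwork.style.visibility = 'visible';   // fail open
        };
        overlay.src = '/images/jewel-case-overlay.png';      // hypothetical URL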

  • EF4 Import/Lookup thousands of records - my performance stinks!

    - by Dennis Ward
    I'm trying to set something up for a movie store website (using ASP.NET, EF4, SQL Server 2008). In my scenario, I want to allow a "Member" store to import their catalog of movies stored in a text file containing ActorName, MovieTitle, and CatalogNumber as follows:

        Actor, Movie, CatalogNumber
        John Wayne, True Grit, 4577-12
        (repeated for each record)

    This data will be used to look up an actor and movie, and create a "MemberMovie" record. My import speed is terrible if I import more than 100 or so records using these tables:

        Actor Table:       Fields = {ID, Name, etc.}
        Movie Table:       Fields = {ID, Title, ActorID, etc.}
        MemberMovie Table: Fields = {ID, CatalogNumber, MovieID, etc.}

    My methodology for importing data into the MemberMovie table from a text file is as follows (after the file has been uploaded successfully):

    1. Create a context.
    2. For each line in the file, look up the actor in the Actor table.
    3. For each Movie of that actor, look up the matching title.
    4. If a matching Movie is found, add a new MemberMovie record to the context and call ctx.SaveChanges().

    The performance of my implementation is terrible. My expectation is that this can be done with thousands of records in a few seconds (after the file has been uploaded), and I've got something that times out the browser.

    My question is this: What is the best approach for performing bulk lookups/inserts like this? Should I call SaveChanges only once rather than for each newly created MemberMovie? Would it be better to implement this using something like a stored procedure?

    A snippet of my loop is roughly this (edited for brevity):

        while ((fline = file.ReadLine()) != null)
        {
            string[] token = fline.Split(separator);
            string actor = token[0];
            string title = token[1];
            string catNumber = token[2];

            Actor found_actor = ctx.Actors.Where(a => a.Name.Equals(actor)).FirstOrDefault();
            if (found_actor == null)
                continue;

            Movie found_movie = found_actor.Movies.Where(
                s => s.Title.Equals(title, StringComparison.CurrentCultureIgnoreCase)).FirstOrDefault();
            if (found_movie == null)
                continue;

            ctx.MemberMovies.AddObject(new MemberMovie()
            {
                MemberProfileID = profile_id,
                CatalogNumber = catNumber,
                Movie = found_movie
            });

            try
            {
                ctx.SaveChanges();
            }
            catch { }
        }

    Any help is appreciated! Thanks, Dennis
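
    A hedged sketch of one common fix (not the only approach): materialize the lookups once, queue all the new rows, and call SaveChanges a single time, so the per-row cost is a dictionary probe instead of a database round-trip plus a flush.

        // Sketch: preload lookups, defer SaveChanges to the end
        var actorsByName = ctx.Actors.Include("Movies")   // avoids per-actor lazy loads
                              .ToDictionary(a => a.Name); // assumes unique actor names

        string fline;
        while ((fline = file.ReadLine()) != null)
        {
            string[] token = fline.Split(separator);

            Actor foundActor;
            if (!actorsByName.TryGetValue(token[0], out foundActor))
                continue;

            Movie foundMovie = foundActor.Movies.FirstOrDefault(
                m => m.Title.Equals(token[1], StringComparison.CurrentCultureIgnoreCase));
            if (foundMovie == null)
                continue;

            ctx.MemberMovies.AddObject(new MemberMovie()
            {
                MemberProfileID = profile_id,
                CatalogNumber = token[2],
                Movie = foundMovie
            });
        }
        ctx.SaveChanges();   // one change-tracking pass, one transaction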

  • What is your most productive shortcut with Vim?

    - by Olivier Pons
    I've heard a lot about Vim, both pros and cons. It really seems you should be (as a developer) faster with Vim than with any other editor. I'm using Vim to do some basic stuff and I'm at best 10 times less productive with Vim.

    The only two things you should care about when you talk about speed (you may not care enough about them, but you should) are:

    - Using the left and right hands alternately is the fastest way to use the keyboard.
    - Never touching the mouse is the second way to be as fast as possible. It takes ages for you to move your hand, grab the mouse, move it, and bring it back to the keyboard (and you often have to look at the keyboard to be sure you returned your hand properly to the right place).

    Here are two examples demonstrating why I'm far less productive with Vim.

    Copy/cut & paste. I do it all the time. With all the classical editors you press Shift with the left hand, and you move the cursor with your right hand to select text. Then Ctrl+C copies, you move the cursor and Ctrl+V pastes. With Vim it's horrible: yy to copy one line (you almost never want the whole line!); [number xx]yy to copy xx lines into the buffer - but you never know exactly if you've selected what you wanted, so I often have to do [number xx]dd then u to undo!

    Another example? Search & replace. In PSPad: Ctrl+F, then type what you want to search for, then press Enter. In Vim: /, then type what you want to search for, then if there are some special characters put \ before each special character, then press Enter.

    And everything with Vim is like that: it seems I don't know how to handle it the right way. NB: I've already read the Vim cheat sheet :)

    My question is: What is the way you use Vim that makes you more productive than with a classical editor?

  • Need Help with Consolidating RoR Google Map Results

    - by Kevin
    I have a project that returns geocoded results within 20 miles of the user. I want these results grouped on the map by zip code, then within the info window show the individual results. The code posted below works, but for some reason it only displays 1.png rather than looking at the results and using the correct .png icon associated with the number. When I look at the info windows, it displays the correct png name like "/images/2.png" or "/images/5.png", but the actual image is always 1.

        @ziptickets = Ticket.find(:all, :origin => coords,
                                  :select => 'DISTINCT zip, lat, lng',
                                  :within => @user.distance_to_travel,
                                  :conditions => "status_id = 1")

        for t in @ziptickets
          zips = Ticket.find(:all, :conditions => ["zip = ?", t.zip])
          currentzip = t.zip.to_s
          tixinzip = zips.size.to_s
          imagelocation = "/images/" + tixinzip + ".png"
          shadowlocation = "/images/" + tixinzip + "s.png"
          @map.icon_global_init(GIcon.new(:image => imagelocation,
                                          :shadow => shadowlocation,
                                          :shadow_size => GSize.new(60,40),
                                          :icon_anchor => GPoint.new(20,20),
                                          :info_window_anchor => GPoint.new(9,2)), "test")
          newicon = Variable.new("test")
          new_marker = GMarker.new([t.lat, t.lng], :icon => newicon,
                                   :title => imagelocation,
                                   :info_window => currentzip)
          @map.overlay_init(new_marker)
        end

    I tried changing the last part of the map icon from:

        :info_window_anchor => GPoint.new(9,2)), "test")
        newicon = Variable.new("test")

    to:

        :info_window_anchor => GPoint.new(9,2)), currentzip)
        newicon = Variable.new(currentzip)

    but the strangest thing is that any string that has numbers in it causes the map to fail to render in the view and just show a blank screen... same if I replace it with:

        :info_window_anchor => GPoint.new(9,2)), "123")
        newicon = Variable.new("123")

    Any advice would be helpful... also, it runs a bit slower than my previous code, which just set up 4 standard icons and used them outside of the loop, so any hints to speed up execution would be appreciated greatly. Thanks!

  • How to remove music/videos DRM protection and convert to Mobile Devices such as iPod, iPhone, PSP, Zune

    - by tonywesley
    The music/video files you purchased from online music stores like iTunes, Yahoo Music or Wal-Mart are under DRM protection, so you can't convert them to the formats supported by your own mobile devices such as a Nokia phone, Creative Zen player, iPod, PSP, Walkman, Zune... You also can't share your purchased music/videos with your friends. The following step-by-step tutorial is dedicated to instructing music lovers in how to convert DRM-protected music/videos for mobile devices.

    Method 1: If you only want to remove DRM protection from your protected music, this method will not cost you money.

        Step 1: Burn your protected music files to a CD-R/RW disc to make an audio CD.
        Step 2: Use a free CD ripper to convert the audio CD tracks back to MP3, WAV, WMA, M4A, AAC, RA...

    Method 2: This guide will show you how to remove DRM from protected wmv, wma, m4p, m4v, m4a, aac files and convert to unprotected WMV, MP4, MP3, WMA or any video and audio formats you like, such as AVI, MP4, FLV, MPEG, MOV, 3GP, m4a, aac, wmv, ogg, wav...

    I have been using Media Converter software; it is the quickest and easiest solution to remove DRM from WMV, M4V, M4P, WMA, M4A, AAC, M4B, AA files by quick recording. It gets the audio and video stream at the bottom of the operating system, so the output quality is lossless and the conversion speed is fast. The process is as follows:

        Step 1: Download and install the software.
        Step 2: Run the software and click the "Add..." button to load WMA or M4A, M4B, AAC, WMV, M4P, M4V, ASF files.
        Step 3: Choose output formats. If you want to convert protected audio files, select from the "Convert audio to" list; if you want to convert protected video files, select from the "Convert video to" list.
        Step 4: You can click the "Settings" button to customize preferences for output files - the "Settings" button below the "Convert audio to" list for protected audio files, and the one below the "Convert video to" list for protected video files.
        Step 5: Start removing DRM and converting your DRM-protected music and videos by clicking the "Start" button.

    What is DRM? DRM, which is most commonly found in movies and music files, doesn't mean just basic copy protection of video, audio and ebooks; it basically means full protection for digital content, ranging from delivery to the end user's ways of using the content. We can remove the DRM from video and audio files legally by quick recording.

  • Which programming language to choose? (for a specific problem/domain, details inside)

    - by Bijan
    I am building a trading portfolio management system that is responsible for production, optimization, and simulation of non-high-frequency trading portfolios (dealing with 1-min or 3-min bars of data, not tick data). I plan on employing Amazon Web Services to take on the entire load of the application. I have four choices that I am considering as the language:

    a) Java
    b) C++
    c) C#
    d) Python

    Here is the extreme end of the project's scope. This isn't how it will be, maybe ever, but it's within the scope of the requirements:

    - Weekly simulation of 10,000,000 trading systems. (Each trading system is expected to have its own data mining methods, including feature selection algorithms which are extremely computationally expensive. Imagine 500-5000 features using wrappers. These are not run often by any means, but it's still a consideration.)
    - Real-time production of a portfolio with 100,000 trading strategies.
    - Taking in 1-min or 3-min data from every stock/futures market around the globe (approx 100,000).
    - Portfolio optimization of portfolios with up to 100,000 strategies (a rather intensive algorithm).

    Speed is a concern, but I believe that Java can handle the load. I just want to make sure that Java CAN handle the above requirements comfortably. I don't want to do the project in C++, but I will if it's required. The reason C# is on there is because I thought it was a good alternative to Java, even though I don't like Windows at all and would prefer Java if all things are the same. Python - I've read some things on PyPy and Psyco that claim Python can be optimized with JIT compiling to run at near C-like speeds... That's pretty much the only reason it is on this list, besides the fact that Python is a great language and would probably be the most enjoyable language to code in, which is not a factor at all for this project, but a perk.

    To sum up:

    - real-time production
    - weekly simulations of a large number of systems
    - weekly/monthly optimizations of portfolios
    - large numbers of connections to collect data from

    There is no dealing with millisecond- or even second-based trades. The only consideration is whether Java can deal with this kind of load when spread out over the necessary number of EC2 servers. Thank you guys so much for your wisdom.

  • jquery xml slideshow using ajax

    - by Codemaster Snake
    Hi all, I am trying to create a jQuery-based slider, using ajax to load image URLs from an XML file and then creating an HTML li list dynamically. So far I am able to append and create the DOM structure using jQuery, but I am not able to access the dynamically created list. I have also tried custom events using bind, but was not able to implement them successfully. Following is my jQuery plugin code:

        (function($){
          $.fn.genie = function(options) {
            var genie_dom = "<div class='genie_wrapper'><ul class='genie'></ul></div>";
            var o, base;
            var genie_styles = "<style>.genie_wrapper{overflow:hidden}.genie{position: relative;margin:0;padding:0}.genie li {position: absolute;margin:0;padding:0}</style>";
            var defaults = {
              width: '960px',
              height: '300px',
              background_color: '#000000',
              xml: 'genie.xml',
              speed: 1000,
              pause: 1000
            };
            base = $(this);
            o = $.extend(defaults, options);

            return this.each(function() {
              create_elements();
            });

            function create_elements() {
              $(base).html(genie_dom);
              $('head').append(genie_styles);
              $('.genie_wrapper').css({'background-color': o.background_color, width: o.width, height: '300px', overflow: ''});
              $.ajax({
                type: "GET",
                url: o.xml,
                dataType: "xml",
                success: function(xml) {
                  var slides = $(xml).find('slide');
                  var count = 0;
                  $(slides).each(function(){
                    $('.genie').append('<li class="slide" id="slide' + count + '"><img src="' + $(this).text() + '" /></li>');
                    $('.genie li:last').css({'z-index': count});
                    count++;
                  });
                }
              });
            }
          }
        })(jQuery);

    In my HTML file there is only one empty div, on which I am calling my plugin like this:

        $(document).ready(function() {
            $('.slideshow').genie();
        });

    I have also tried using the following everywhere in the JS:

        $(".slide").bind("start_animation", function(e){
            $(this).fadeOut(1000);
            alert($(this).html());
        });

        $(".slide").trigger("start_animation");

    I want to animate the li list using the animate function. Can anyone please tell me how to implement it? It would be of great help... Regards, Neeraj Kumar

    EDIT: Can anyone help me out, please?
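
    The likely catch: $(".slide").bind(...) runs before the $.ajax success handler has appended any li elements, so there is nothing to bind to yet. A hedged sketch of one fix - do the binding inside the success callback, once the elements exist (o.speed reuses the plugin's own option):

        // Sketch: bind after the slides are in the DOM
        success: function(xml) {
            $(xml).find('slide').each(function(i){
                $('.genie').append('<li class="slide" id="slide' + i + '"><img src="' + $(this).text() + '" /></li>');
                $('.genie li:last').css('z-index', i);
            });
            // the elements exist now, so this binding sticks:
            $('.slide').bind('start_animation', function(){
                $(this).fadeOut(o.speed);
            });
            $('.slide:first').trigger('start_animation');
        }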

  • JSON or YAML encoding in GWT/Java on both client and server

    - by KennethJ
    I'm looking for a super simple JSON or YAML library (not particularly bothered which one) written in Java that can be used both in GWT on the client and in its original Java form on the server.

    What I'm trying to do is this: I have my models, which are shared between the client and the server, and these are the primary source of data interchange. I want to design the web service in between to be as simple as possible, and decided to take the RESTful approach. My problem is that I know our application will grow substantially in the future, and writing all the getters, setters, serialization, factories, etc. by hand fills me with absolute dread. So in order to avoid it, I decided to implement annotations to keep track of attributes on the models.

    The reason I can't just serialize everything directly - using GWT's own serialization, or one that works through reflection - is because we need a certain amount of logic going on in the serialization process, i.e. whether references to other models get serialized during the serialization of the original model, or whether just an ID is passed, and generally simple things like that. I've written an annotation processor to preprocess my shared models and generate an implementing class with all the getters, setters, serialization, lazy-loading, etc.

    To make a long story short, I need some type of simple YAML or JSON library which allows me to encode and decode manually, so I can generate this code through my annotation processor. I have had a look around the interwebs, but every single one I ran into supported some reflection which, while all fine and dandy, makes it pretty much useless for GWT. And in the case of GWT's own JSON library, it uses JSNI for speed purposes, making it useless server side. One solution I did think about involved writing two sets of serialization methods on the models, one for the client and one for the server, but I'd rather not do that.

    Also, I'm pretty new to GWT, and even though I have done a lot of Java, it was back in the 1.2 days, so it's a bit rusty. So if you think I'm going about this problem completely the wrong way, I'm open to suggestions.

  • Moving a unit precisely along a path in x,y coordinates

    - by Adam Eberbach
    I am playing around with a strategy game where squads move around a map. Each turn a certain amount of movement is allocated to a squad, and if the squad has a destination, the points are applied each turn until the destination is reached. Actual distance is used, so if a squad moves one position in the x or y direction it uses one point, but moving diagonally takes ~1.4 points. The squad maintains its actual position as floats, which are then rounded to allow drawing the position on the map.

    The path is described by touching the squad and dragging to the end position, then lifting the pen or finger. (I'm doing this on an iPhone now, but Android/Qt/Windows Mobile would work the same.) As the pointer moves, x, y points are recorded, so that the squad gains a list of intermediate destinations on the way to the final destination. I'm finding that the destinations are not evenly spaced but can be further apart depending on the speed of the pointer movement. Following the path is important because obstacles or terrain matter in this game. I'm not trying to remake Flight Control, but that's a similar mechanic.

    Here's what I've been doing, but it just seems too complicated (pseudocode):

        getDestination() {
            - self.nextDestination = remove_from_array(destinations)
            - self.gradient = delta y to destination / delta x to destination
            - self.angle = atan(self.gradient)
            - self.cosAngle = cos(self.angle)
            - self.sinAngle = sin(self.angle)
        }

        move() {
            - get movement allocation for this turn
            - if self.nextDestination not valid
            - - getNextDestination()
            - while (nextDestination valid) && (movement allocation remains) {
            - - find xStep and yStep using movement allocation and the
                sinAngle/cosAngle calculated for the current self.nextDestination
            - - if current position + xStep crosses the destination
            - - - find x movement remaining after self.nextDestination reached
            - - - calculate remaining direct path movement allocation (xStep remaining / cosAngle)
            - - - make self.position equal to self.nextDestination
            - - else
            - - - apply xStep and yStep to current position
            - }
            - round squad's float coordinates to integer screen coordinates
            - draw squad image on map
        }

    That's simplified, of course; stuff like sign needs to be tweaked to ensure movement is in the right direction. If trig is the best way to do it, then lookup tables can be used, or maybe it doesn't matter on modern devices like it used to. Suggestions for a better way to do it?

    An update: the iPhone has zero issues with trig and with tracking tens of positions and tracks implemented as described above, and it draws in floats anyway. The Bresenham method is more efficient, trig is more precise. If I were to use integer Bresenham, I would want to multiply by ten or so to maintain a little more positional accuracy, to benefit collision/terrain detection.
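
    A third option between trig and Bresenham (a sketch under my own names, not the original code): skip the angle entirely and normalize the direction vector, which needs a single sqrt per step, handles the sign automatically, and keeps float precision for collision/terrain checks.

        /* Sketch in C: advance pos toward dest by at most 'budget' movement
         * points; returns the unspent budget (positive when the waypoint is
         * reached, so the caller can continue to the next destination). */
        #include <math.h>

        typedef struct { float x, y; } Vec2;

        float move_toward(Vec2 *pos, Vec2 dest, float budget)
        {
            float dx = dest.x - pos->x;
            float dy = dest.y - pos->y;
            float dist = sqrtf(dx * dx + dy * dy);   /* no atan/sin/cos */

            if (dist <= budget) {        /* waypoint reached this turn */
                *pos = dest;
                return budget - dist;
            }
            pos->x += budget * dx / dist;            /* unit vector * budget */
            pos->y += budget * dy / dist;
            return 0.0f;
        }

        /* Usage: while (budget > 0 && destinations remain)
         *            budget = move_toward(&pos, next_destination, budget); */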

  • Is "Systems Designer" the job title that best describes what I do? [closed]

    - by ivo-rossi
    After having worked as a Java developer for almost 3 years at the company I currently work for, I moved to a new position associated with the development of the same application. I've been in this new position for more than a year now. My official job title is Systems Designer, but I'm not sure this is a title that expresses well what I do. So my question here is: what would be the most appropriate job title for me? I see this question as important for my career development. After all, I should be able to explain in one word what I do. And it's no longer "Java Developer".

    Well, in more than one word, this is what I do: the business analysts gather requirements / business problems to be solved with the clients and then discuss these requirements with me. Given the requirements, I design the high-level solutions to be implemented in our system (e.g. a new screen on the client application, modifications to existing reports, extensions to the XML export format of some objects, etc.). I base my decisions on the current capabilities of the system, the overall impact that the solutions would have on the system, and the estimated effort to implement them (as I was a developer of this same application for almost 3 years before I moved to this position, I'm confident in my estimates). The solutions are discussed iteratively with the business analysts until we agree that they are good.

    The outcome of this analysis is what we call the "requirements design" document, which is written by me, shared with clients for approval, and then also with the team that is going to implement the solutions and test them. Note that there are a few problems I need to find solutions for that are non-functional. If the users are unhappy with the performance of a certain tool, I will investigate what can be done to speed it up. I will do some research - often based in the Java code itself - to identify possibilities for optimization. But in this new position I no longer code; the main outcome of my work is really the "requirements design".

    Is "Systems Designer" really the most appropriate job title?

  • A better way to delete a list of elements from multiple tables

    - by manyxcxi
    I know this looks like a 'please write the code' request, but some basic pointers/principles for doing this the right way should be enough to get me going. I have the following stored procedure:

        CREATE PROCEDURE `TAA`.`runClean` (IN idlist varchar(1000))
        BEGIN
            DECLARE EXIT HANDLER FOR NOT FOUND ROLLBACK;
            DECLARE EXIT HANDLER FOR SQLEXCEPTION ROLLBACK;
            DECLARE EXIT HANDLER FOR SQLWARNING ROLLBACK;

            START TRANSACTION;
            DELETE FROM RunningReports WHERE run_id IN (idlist);
            DELETE FROM TMD_INDATA_INVOICE WHERE run_id IN (idlist);
            DELETE FROM TMD_INDATA_LINE WHERE run_id IN (idlist);
            DELETE FROM TMD_OUTDATA_INVOICE WHERE run_id IN (idlist);
            DELETE FROM TMD_OUTDATA_LINE WHERE run_id IN (idlist);
            DELETE FROM TMD_TEST WHERE run_id IN (idlist);
            DELETE FROM RunHistory WHERE id IN (idlist);
            COMMIT;
        END $$

    It is called by a PHP script to clean out old run history. It is not particularly efficient, as you can see, and I would like to speed it up. The PHP script gathers the IDs to remove from the tables with the following query:

        $query = "SELECT id, stop_time FROM RunHistory
                  WHERE config_id = $configId AND save = 0
                  AND NOT(stop_time IS NULL) ORDER BY stop_time";

    It keeps the last five run entries and deletes all the rest. So, using this query to bring back all the IDs, it determines which ones to delete, keeps the 'newest' five, and after gathering the IDs sends them to the stored procedure to remove them from the associated tables.

    I'm not very good with SQL, but I ASSUME that using an IN statement and not joining these tables together is probably the least efficient way I can do this - but I don't know enough to ask anything but "how do I do this better?" If possible, I would like to do this all in my stored procedure, using a query to gather all the IDs except for the five 'newest', then delete them. Another twist: run entries can be marked saved (save = 1) and should not be deleted. The RunHistory table looks like this:

        CREATE TABLE `TAA`.`RunHistory` (
            `id` int(11) NOT NULL auto_increment,
            `start_time` datetime default NULL,
            `stop_time` datetime default NULL,
            `config_id` int(11) NOT NULL,
            [...]
            `save` tinyint(1) NOT NULL default '0',
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
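
    One caution worth knowing: in MySQL, `WHERE run_id IN (idlist)` with a varchar parameter compares against a single string ('1,2,3'), not against a list of values, so the procedure as written effectively matches only the first ID. A hedged sketch of a join-based, all-in-the-procedure approach (MySQL syntax; `_config_id` would become a procedure parameter):

        -- Sketch: collect the doomed ids server-side, then delete via joins
        CREATE TEMPORARY TABLE doomed AS
            SELECT id FROM RunHistory
            WHERE config_id = _config_id
              AND save = 0
              AND stop_time IS NOT NULL
            ORDER BY stop_time DESC
            LIMIT 5, 18446744073709551615;   -- skip the newest five, take the rest

        DELETE rr FROM RunningReports rr JOIN doomed d ON rr.run_id = d.id;
        DELETE t  FROM TMD_TEST t        JOIN doomed d ON t.run_id  = d.id;
        -- ...repeat the same pattern for the other TMD_* tables...
        DELETE rh FROM RunHistory rh     JOIN doomed d ON rh.id     = d.id;

        DROP TEMPORARY TABLE doomed;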

  • SQL/Schema comparison and upgrade

    - by Workshop Alex
    I have a simple situation. A large organisation is using several different versions of some (desktop) application, and each version has its own database structure. There are about 200 offices, and each office will have its own version, which can be one of 7 different ones. The company wants to upgrade all applications to the latest version, which will be version 8.

    The problem is that they don't have a separate database for each version, nor do they have a separate database for each office. They have one single database which is handled by a dedicated server, thus keeping things like management and backups easier. Every office has its own database schema, and within the schema there's the whole database structure for their specific application version. As a result, I'm dealing with 200 different schemas which need to be upgraded, each with 7 possible versions. Fortunately, every schema knows the proper version, so checking the version isn't difficult. But my problem is that I need to create upgrade scripts which can upgrade from version 1 to version 2 to version 3 to etc... Basically, all schemas need to be bumped up one version until they're all version 8. Writing the code that will do this is no problem. The challenge is: how do I create the upgrade script from one version to the other, preferably with some automated tool?

    I've examined RedGate's SQL Compare and Altova's DatabaseSpy, but they're not practical. Altova is way too slow. RedGate requires too much processing afterwards, since the generated SQL script still has a few errors and it refers to the schema name. Furthermore, the code needs to become part of a stored procedure, and the code generated by RedGate doesn't really fit inside a single procedure. (Plus, it's doing too much transaction handling, while I need everything within a single transaction.)

    I have been considering another SQL comparison tool, but it seems to me that my case is just too different from what standard tools can deliver. So I'm going to write my own comparison tool. To do this, I'll be using ADOX with Delphi to read the catalogues for every schema version in the database, then use this to write the SQL statements that will upgrade these schemas to their next version. (Comparing 1 with 2, 2 with 3, 3 with 4, etc.) I'm not unfamiliar with writing SQL script generators, so I don't expect too many problems. And I'll only be upgrading the table structures, not any of the other database objects.

    So, does anyone have some good tips and tricks to apply when doing this kind of comparison? Things to be aware of? Practical tips to increase speed?
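
    If the server is SQL Server (RedGate's tooling suggests it, but that is an assumption), the catalog views can drive the comparison directly, without going through ADOX. A sketch of the core query: columns present in a version-N reference schema but missing from one office's schema; each row it returns becomes an ALTER TABLE ... ADD in the generated upgrade script.

        -- Sketch: schema names are hypothetical placeholders
        SELECT ref.TABLE_NAME, ref.COLUMN_NAME, ref.DATA_TYPE
        FROM INFORMATION_SCHEMA.COLUMNS ref
        LEFT JOIN INFORMATION_SCHEMA.COLUMNS ofc
               ON  ofc.TABLE_SCHEMA = 'office_042'
               AND ofc.TABLE_NAME   = ref.TABLE_NAME
               AND ofc.COLUMN_NAME  = ref.COLUMN_NAME
        WHERE ref.TABLE_SCHEMA = 'reference_v8'
          AND ofc.COLUMN_NAME IS NULL;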

  • web service filling gridview awfully slow, as is paging/sorting

    - by nat
    Hi, I am making a page which calls a web service to fill a gridview. This is returning a lot of data and is horribly slow. I ran svcutil.exe on the WSDL page and it generated the class and config, so I have a load of strongly typed objects coming back from each request to the many service functions. I am then using LINQ to loop around the objects, grabbing the necessary information as I go, but for each row in the grid I need to loop around an object and grab another list of objects (from the same request) and loop around each of them - one-to-many, parent object to child. All of this then gets dropped into a custom DataTable a row at a time. Hope that makes sense...

    I'm not sure there is any way to speed up the initial load, but surely I should be able to page/sort a lot faster than it is doing. At the moment, it appears to be taking as long to page/sort as it does to load initially. I thought if, when I first loaded, I put the data source of the grid in the session, I could whip it out of the session to deal with paging/sorting and the like. Basically it is doing the below:

        protected void Page_Load(object sender, EventArgs e)
        {
            // init the datatable
            // grab the filter vars (if there are any)

            WebServiceObj WS = WSClient.Method(args);

            // fill the datatable (around and around we go)
            foreach (ParentObject po in WS.ReturnedObj)
            {
                var COs = from ChildObject c in WS.AnotherReturnedObj
                          where c.whatever.equals(...)
                          ... etc

                foreach (ChildObject c in COs)
                {
                    myDataTable.Rows.Add(tlo.this, tlo.that, c.thisthing, c.thatthing, etc......);
                }
            }

            grdListing.DataSource = myDataTable;
            Session["dt"] = myDataTable;
            grdListing.DataBind();
        }

        protected void Listing_PageIndexChanging(object sender, GridViewPageEventArgs e)
        {
            grdListing.PageIndex = e.NewPageIndex;
            grdListing.DataSource = Session["dt"] as DataTable;
            grdListing.DataBind();
        }

        protected void Listing_Sorting(object sender, GridViewSortEventArgs e)
        {
            DataTable dt = Session["dt"] as DataTable;
            DataView dv = new DataView(dt);
            string sortDirection = " ASC";
            if (e.SortDirection == SortDirection.Descending)
                sortDirection = " DESC";
            dv.Sort = e.SortExpression + sortDirection;
            grdListing.DataSource = dv.ToTable();
            grdListing.DataBind();
        }

    Am I doing this totally wrongly? Or is the slowness just coming from the amount of data being bound in / returned from the web service? There are maybe 15 columns(ish) and a whole load of rows, with more being added to the data the web service queries from all the time. Any suggestions / tips happily received. Thanks
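
    One thing that stands out (a hedged diagnosis, since the full page isn't shown): Page_Load has no IsPostBack guard, so the web service call and the whole fill loop run again on every paging and sorting postback, even though those handlers only read Session["dt"]. A sketch of that fix, plus a one-pass lookup to replace the per-parent LINQ scan (the key property names are assumptions about the generated proxy types):

        protected void Page_Load(object sender, EventArgs e)
        {
            if (IsPostBack) return;   // paging/sorting reuse Session["dt"]

            WebServiceObj WS = WSClient.Method(args);

            // group the children once instead of scanning them per parent
            var childrenByParent = WS.AnotherReturnedObj.ToLookup(c => c.ParentId);

            foreach (ParentObject po in WS.ReturnedObj)
                foreach (ChildObject c in childrenByParent[po.Id])
                    myDataTable.Rows.Add(po.This, po.That, c.ThisThing, c.ThatThing);

            grdListing.DataSource = myDataTable;
            Session["dt"] = myDataTable;
            grdListing.DataBind();
        }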

  • Improving performance on data pasting 2000 rows with validations

    - by Lohit
    I have N rows (which could be nothing less than 1000) on an Excel spreadsheet, and in this sheet our project has 150 columns. Our application needs data to be copied (using normal Ctrl+C) and pasted (using Ctrl+V) from the Excel file sheet onto our GUI sheet.

    Copy-pasting 1000 records takes around 5-6 seconds, which is okay for our requirement; the problem is when we need to make sure the data entered is valid. We have to validate the data in each row, generate appropriate error messages, and format the data as per the requirements, so we need to parse and evaluate the data in each row at runtime. All the formatting of data and validations come from the back-end database, and we have them in a data table (dtValidateAndFormatConditions). The conditions number around 50. So you can see how slow this whole process becomes, since N x 150 x 50 operations are required to complete it.

    Initially it took approximately 2-3 minutes, but now I have reduced it to 20-30 seconds. However, I achieved that by writing an expression parser of my own, not through any algorithmic improvement. Is there any other way I can improve performance, by using divide and conquer or some other mechanism? Currently I am not really sure how to go about this. Here is what part of my code looks like:

        public virtual void ValidateAndFormatOnCopyPaste(DataTable DtCopied, int CurRow)
        {
            foreach (DataRow dRow in dtValidateAndFormatConditions.Rows)
            {
                string Condition = (string) dRow["Condition"];
                string FormatValue = (string) dRow["Value"];

                GetValidatedFormattedData(DtCopied, ref Condition, ref FormatValue, iRowIndex);

                Condition = Parse(Condition);
                dRow["Condition"] = Condition;

                FormatValue = Parse(FormatValue);
                dRow["Value"] = FormatValue;
            }
        }

    The above code gets called row-wise like this:

        public override void ValidateAndFormat(DataTable dtChangedRecords, CellRange cr)
        {
            int iRowStart = cr.Row, iRowEnd = cr.Row + cr.RowCount;
            for (int iRow = iRowStart; iRow < iRowEnd; iRow++)
            {
                ValidateAndFormatOnCopyPaste(dtChangedRecords, iRow);
            }
        }

    Please note my question needs a more algorithmic solution than code optimization; however, any answers containing code-related optimizations will be appreciated as well. (Tagged LINQ because, although not seen here, I have been using LINQ in some parts of my code.)
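
    One algorithmic angle (a sketch under my own names - ParseToDelegate and MarkInvalid are hypothetical stand-ins for the question's parser and error handling): since the ~50 conditions are fixed for the whole paste, parse each one exactly once into a reusable delegate, then evaluate the delegates per row. That turns N x 150 x 50 parses into 50 parses plus N x 50 cheap evaluations.

        // Sketch: parse once, evaluate many
        private readonly Dictionary<string, Func<DataRow, bool>> compiled =
            new Dictionary<string, Func<DataRow, bool>>();

        private Func<DataRow, bool> GetEvaluator(string condition)
        {
            Func<DataRow, bool> eval;
            if (!compiled.TryGetValue(condition, out eval))
            {
                eval = ParseToDelegate(condition);   // hypothetical: parse a single time
                compiled[condition] = eval;
            }
            return eval;
        }

        public void ValidateOnCopyPaste(DataTable copied)
        {
            foreach (DataRow row in copied.Rows)
                foreach (DataRow cond in dtValidateAndFormatConditions.Rows)
                    if (!GetEvaluator((string)cond["Condition"])(row))
                        MarkInvalid(row);            // hypothetical error reporting
        }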

  • Migrating from hand-written persistence layer to ORM

    - by Sergey Mikhanov
    Hi community, we are currently evaluating options for migrating from a hand-written persistence layer to an ORM. We have a bunch of legacy persistent objects (~200) that implement a simple interface like this:

        interface JDBC {
            public long getId();
            public void setId(long id);
            public void retrieve();
            public void setDataSource(DataSource ds);
        }

    When retrieve() is called, the object populates itself by issuing hand-written SQL queries to the connection provided, using the ID it received in the setter (this is usually the only parameter to the query). It manages its statements, result sets, etc. itself. Some of the objects have special flavors of retrieve(), like retrieveByName(); in that case a different SQL query is issued. Queries can be quite complex: we often join several tables to populate the sets representing relations to other objects, and sometimes join queries are issued on demand in a specific getter (lazy loading). So basically, we have implemented most of an ORM's functionality manually.

    The reason for that was performance. We have very strong requirements for speed, and back in 2005 (when this code was written) performance tests showed that none of the mainstream ORMs were as fast as hand-written SQL.

    The problems we are facing now that make us think of an ORM are:

    - Most of the paths in this code are well tested and stable. However, some rarely used code is prone to result set and connection leaks that are very hard to detect.
    - We are currently squeezing out some additional performance by adding caching to our persistence layer, and it's a huge pain to maintain the cached objects manually in this setup.
    - Supporting this code when the DB schema changes is a big problem.

    I am looking for advice on what could be the best alternative for us. As far as I know, ORMs have advanced in the last 5 years, so it might be that now there's one that offers acceptable performance. As I see this issue, we need to address these points:

    - Find some way to reuse at least some of the written SQL to express mappings.
    - Have the possibility to issue native SQL queries without the necessity of manually decomposing their results (i.e. avoid manual rs.getInt(42), as such calls are very sensitive to schema changes).
    - Add a non-intrusive caching layer.
    - Keep the performance figures.

    Is there any ORM framework you could recommend with regard to that?

  • Just a small problem regarding a JavaScript BOM question

    - by caramel1991
    The question is this: Create a page with a number of links. Then write code that fires on the window onload event, displaying the href of each of the links on the page. And this is my solution:

        <html>
          <body language="Javascript" onload="displayLink()">
            <a href="http://www.google.com/">First link</a>
            <a href="http://www.yahoo.com/">Second link</a>
            <a href="http://www.msn.com/">Third link</a>
            <script type="text/javascript" language="Javascript">
              function displayLink()
              {
                for(var i = 0; document.links[i]; i++)
                {
                  alert(document.links[i].href);
                }
              }
            </script>
          </body>
        </html>

    This is the answer provided by the book:

        <html>
          <head>
            <script language="JavaScript" type="text/javascript">
              function displayLinks()
              {
                var linksCounter;
                for (linksCounter = 0; linksCounter < document.links.length; linksCounter++)
                {
                  alert(document.links[linksCounter].href);
                }
              }
            </script>
          </head>
          <body onload="displayLinks()">
            <A href="link0.htm">Link 0</A>
            <A href="link1.htm">Link 2</A>
            <A href="link2.htm">Link 2</A>
          </body>
        </html>

    Before I got to the JavaScript tutorial on how to check the user's browser version or model, I was using the same method as the example, accessing the length property of the links array in the loop condition. But after reading through the tutorial, I found out I can also use an alternative: a loop condition that evaluates to true only while document.links[i] returns a valid value. So is my code written using a valid method? If it's not, any comments on how to write better code? Correct me if I'm wrong, but I've heard some people say "good code is not evaluated solely on whether it works or not, but in terms of speed, the ability to comprehend the code, and possibly letting others understand the code easily." Is that true?

  • which way is correct to retrieve data from Oracle?

    - by rima
    Before answering, please think about the future of this kind of program. I want to get some data from an Oracle server, like:

    1. Get all the functions, packages, procedures, etc., in order to show them, drop them, and so on.
    2. Compile my *.sql files and get the results if they have problems, etc.

    Because I was a beginner in Oracle, to solve the second problem I first tried connecting to SQL*Plus by running sqlplus and tracing the output (I mean, I redirected the output stream of the shell, traced what happened, and handled the resulting messages for the customer). THIS PART SUCCEEDED, although I still have a small problem getting the full results, because the output is asynchronous. In this approach I log in to the Oracle server by passing arguments to sqlplus from a process created in C#.

    After that, I tried to get all the function, package and procedure names, but I had problems with speed, so I tried using Oracle.DataAccess.dll to connect to the database instead. Now I am confused about which is the correct way to build a program that works like Oracle's own developer tools - I have no experience of how such programs work.

    If your answer is that I should use the second way (the data access library): I looked a little at Golden and PL/SQL Developer (Benthic Software), and I'd like to know how I should create the connection string. How can I find the host name or port number that Oracle works on? Do I need to read the tnsnames.ora file?

    If your answer is that I should use the first way (driving SQL*Plus): do you have any ideas on how to parse the output? The result of a table query, for example, is confusing to parse, and every statement type formats its output differently. I can program it, but I would really benefit from someone's experience, because the important thing to me is to learn how such software gives nice, quick responses.

    If you are not sure: can you recommend a book that would help me become expert in this area? For example, the C# books just cover how to connect to the DB, and the DB books cover how to use the database; I'm looking for a book that gives some idea of how to develop an interface between the two - not simply sending and receiving data, but, for example, how to write a compiler or IDE for them. The language of the book doesn't matter; I know C#, Java, VB, SQL and Oracle. Thanks.
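
    For the first item, a sketch of the standard approach (no SQL*Plus scraping needed): Oracle's data dictionary views list the stored objects directly, and the same query works over any connection, including Oracle.DataAccess. Host and port for the connection string can indeed be taken from the entries in tnsnames.ora.

        -- Sketch: list functions, packages and procedures in the current schema
        SELECT object_name, object_type, status
        FROM   user_objects      -- or all_objects/dba_objects for a wider scope
        WHERE  object_type IN ('FUNCTION', 'PACKAGE', 'PACKAGE BODY', 'PROCEDURE')
        ORDER  BY object_type, object_name;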

  • java web application best practices

    - by Bruce
    Hi all, I'm trying to figure out the optimum way to develop and release a fairly simple web application, and I'm running into several problems. I'll outline the decisions I've made, because somewhere I've clearly gone off the rails. Hugely grateful for any help!

    I have what I think is a fairly simple web application. It contains a couple of JSPs that reference a couple of Java beans, and the usual static HTML, JS, CSS and images.

    Decision 1) I wanted a clear and clean release procedure, such that I could develop on my local machine and then release reliably to a production machine. I therefore made the decision to package the application into a war file (including all the static resources), to minimize the separate bits and pieces I would need to release. So far so good?

    Decision 2) I wanted things on my local machine to be as similar as possible to the production environment. So in my HTML, for example, I may have a reference to a static file such as http://static.foo.com/file. To keep this code working seamlessly on dev and prod, I decided to put static.foo.com in my /etc/hosts when developing locally, so that all the URLs work correctly without changing anything.

    Decision 3) I decided to use Eclipse and Maven to give me a best-practice environment for administering and building my project.

    So I have a nice tight setup now, except that every time I want to change anything in development, like one line in an HTML file, I have to rebuild the entire project and then wait for Tomcat to load the war before I can see if it's what I wanted. So my questions are:

    1) Is there a way to connect up Eclipse and Tomcat so that I don't have to rebuild the war each time? i.e. Tomcat is looking straight at my actual workspace to serve up the static files?

    2) I think I'm maybe making things harder by using /etc/hosts to reflect production URLs - is there a better way that doesn't involve manually changing over URLs? (Relative URLs are fine, of course, but where you have many subdomains, say one for static files and one for dynamic, you have to write out the full path, surely?)

    3) Is this really best practice? How do people set things up so that they balance the requirement for an automated, all-encompassing build process on the one hand, and the speed and flexibility to develop javascript, html and css quickly on the other - as quickly as if one just pointed Apache at the directory and developed live? What do people find works?

    Many thanks!
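
    On question 1, a hedged pointer: Eclipse WTP can deploy a project to Tomcat as an exploded directory (see the "Serve modules without publishing" option in the server settings), and plain Tomcat can do the same with a context file whose docBase points at the workspace - static file edits then show up on refresh with no war rebuild. A sketch, with made-up paths:

        <!-- Sketch: conf/Catalina/localhost/myapp.xml (paths are examples) -->
        <Context docBase="/home/bruce/workspace/myapp/src/main/webapp"
                 reloadable="true" />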
