Search Results

Search found 8344 results on 334 pages for 'fast vector highlighter'.

Page 298/334 | < Previous Page | 294 295 296 297 298 299 300 301 302 303 304 305  | Next Page >

  • jquery animate boxshadow

    - by mstef
    http://jsfiddle.net/mstefanko/w5aAn/877/ Below, I'm achieving the effect I wanted, but with a pending issue. Since I'm using a separate span positioned absolutely over the top of a box with relative position, I cannot access the inputs until the animation is finished. I'm guessing the only way to alleviate this would be to do something similar by just animating the border of the outer box? But nothing I tried for animating box-shadow:inset was working. HTML <div id="wow"> <span id="pulse"></span> <input id="form-input"/> <input id="form-input"/> </div> CSS #wow { width: 500px; height: 200px; display: inline-block; position: relative; border: 1px solid black; } #pulse { width: 100%; height: 100%; box-shadow:inset 0 0 20px #6c95c3; -moz-box-shadow:inset 0 0 10px #6c95c3; position: absolute; z-index: 20000; } JS $('#pulse').stop().animate({"opacity": 0}, "fast"); $('#pulse').effect("pulsate", { times:4 }, 500, function() { $(this).remove(); });

    Read the article

  • Update table with index is too slow

    - by pauloya
    Hi, I was watching the Profiler on a live system of our application and saw that an update we run periodically (every second) was quite slow: it took around 400ms every time. The query includes this update (which is the slow part) UPDATE BufferTable SET LrbCount = LrbCount + 1, LrbUpdated = getdate() WHERE LrbId = @LrbId This is the table CREATE TABLE BufferTable( LrbId [bigint] IDENTITY(1,1) NOT NULL, ... LrbInserted [datetime] NOT NULL, LrbProcessed [bit] NOT NULL, LrbUpdated [datetime] NOT NULL, LrbCount [tinyint] NOT NULL, ) The table has 2 indexes (non-unique and non-clustered) with the fields in this order: * Index1 - (LrbProcessed, LrbCount) * Index2 - (LrbInserted, LrbCount, LrbProcessed) When I looked at this I thought the problem would come from Index1, since LrbCount changes a lot and that changes the order of the data in the index. But after deactivating Index1 I saw the query was taking the same time as initially. Then I rebuilt Index1 and deactivated Index2, and this time the query was very fast. It seems to me that Index2 should be faster to update; the order of the data shouldn't change since the LrbInserted time is not changed. Can someone explain why Index2 is much heavier to update than Index1? Thank you!

    Read the article

  • Speed up csv export when using php from mysql database query

    - by John
    OK, so I've got a web system (built on CodeIgniter and running on MySQL) that allows people to query a database of postal address data by making selections in a series of forms until they arrive at the selection they want, pretty standard stuff. They can then buy that information and download it via that system. The queries run very fast, but when it comes to applying that query to the database and exporting it to CSV, once the datasets get to around the 30,000 record mark (each row has around 40 columns, of which about 20 are populated with on average 20 chars of data per cell) it can take 5 or so minutes to export to CSV. So, my question is, what is the main cause of the slowness? Is the result set of data from the query so large that it is running into memory issues? Should I therefore allow much more memory to the process? Or is there a much more efficient way of exporting to CSV from a MySQL query that I'm not using? Should I save the contents of the query to a temp table and simply export the temp table to CSV? Or am I going about this all wrong? Also, is the fact that I'm using CodeIgniter's Active Record for this prohibitive due to the way that it stores the result set? Any advice is welcome! Thank you for reading!
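
    At 30,000 rows the usual culprit is building the entire result set, and then the entire CSV string, in memory before writing anything out. The change that generally helps, regardless of framework, is to stream: fetch row by row (an unbuffered query) and write each row to the output as you go. A minimal sketch of that idea, shown in Python with sqlite3 purely for illustration (the PHP equivalent would be an unbuffered query plus fputcsv):

        import csv
        import sqlite3

        # Illustration only: rows go straight from the cursor to the CSV writer,
        # so memory use stays flat no matter how many rows the query returns.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE addresses (id INTEGER PRIMARY KEY, line1 TEXT, city TEXT)")
        conn.executemany("INSERT INTO addresses (line1, city) VALUES (?, ?)",
                         [("1 High St", "Leeds"), ("2 Low Rd", "York")])

        with open("export.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["id", "line1", "city"])                # header row
            for row in conn.execute("SELECT id, line1, city FROM addresses"):
                writer.writerow(row)                                # one row at a time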

    Read the article

  • How to correctly pass a float from C# to C++ (dll)

    - by RavelT
    I'm getting huge differences when I pass a float from C# to C++. I'm passing a dynamic float which changes over time. With a debugger I get this: c++ lonVel -0.036019072 float c# lonVel -0.029392920 float I did set my MSVC++2010 floating point model to /fp:fast, which should be the standard in .NET if I'm not mistaken, but this didn't help. Now I can't give out the code but I can show a fraction of it. From the C# side it looks like this: namespace Example { public class Wheel { public bool loging = true; #region Members public IntPtr nativeWheelObject; #endregion Members public Wheel() { this.nativeWheelObject = Sim.Dll_Wheel_Add(); return; } #region Wrapper methods public void SetVelocity(float lonRoadVelocity,float latRoadVelocity){Sim.Dll_Wheel_SetVelocity(this.nativeWheelObject,lonRoadVelocity,latRoadVelocity);} #endregion Wrapper methods } internal class Sim { #region PInvokes [DllImport(pluginName, CallingConvention=CallingConvention.Cdecl)] public static extern void Dll_Wheel_SetVelocity(IntPtr wheel,float lonRoadVelocity,float latRoadVelocity); #endregion PInvokes } } And on the C++ side @ exportFunctions.cpp: EXPORT_API void Dll_Wheel_SetVelocity(CarWheel* wheel,float lonRoadVelocity,float latRoadVelocity){ wheel->SetVelocity(lonRoadVelocity,latRoadVelocity);} So, any suggestions on what I should do in order to get 1:1 results, or at least 99% correct results?

    Read the article

  • NHibernate: Using value tables for optimization AND dynamic join

    - by Kostya
    Hi all, my situation is as follows: there are two entities with a many-to-many relation, e.g. Products and Categories. Also, categories have a hierarchical structure, like a tree. I need to select all products that belong to a given category and all of its children (a branch). So I use the following SQL statement to do that: SELECT * FROM Products p WHERE p.ID IN ( SELECT DISTINCT pc.ProductID FROM ProductsCategories pc INNER JOIN Categories c ON c.ID = pc.CategoryID WHERE c.TLeft >= 1 AND c.TRight <= 33378 ) But with a big set of data this query takes very long to execute, and I found a way to optimize it, shown here: DECLARE @CatProducts TABLE ( ProductID int NOT NULL ) INSERT INTO @CatProducts SELECT DISTINCT pc.ProductID FROM ProductsCategories pc INNER JOIN Categories c ON c.ID = pc.CategoryID WHERE c.TLeft >= 1 AND c.TRight <= 33378 SELECT * FROM Products p INNER JOIN @CatProducts cp ON cp.ProductID = p.ID This query executes very fast, but I don't know how to do that with NHibernate. Note that I need to use only ICriteria because of dynamic filtering/ordering. If someone knows a solution for that, it would be fantastic, but I'd welcome any suggestions of course. Thanks in advance, Kostya

    Read the article

  • What db fits me?

    - by afvasd
    Dear everyone, I am currently using MySQL. I am finding that my schema is getting incredibly complicated, and I am looking for a new DB that will suit my needs. Let's assume I am building a news aggregator (which collects news from multiple websites). I then run algorithms to determine if two news items from different sites are actually referring to the same topic, and use them to cluster news together. The relationship is depicted below: cluster \--news1 \--word1 \--word2 \--news2 \--word3 \--news3 \--word1 \--word3 And then I will apply some magic and determine the importance of each word. Summing the importance of each word gives me the importance of a news article. Summing the importance of each news article gives me the importance of a cluster. Note that above clusters there are also subgroups (like splits by region, etc.) and categories (like sports, etc.) whose importance I also have to determine for a particular day. I have used views in the past to do so, but I realized that views are very slow, so I will normally insert into an actual table and index it for better performance. As you can see this leads to multiple derived tables like (cluster, importance), (news, importance), (words, importance), etc., which can get pretty messy. Also, the "importance" metric will change. It has become increasingly difficult to alter tables, update data (for which I am using TRUNCATE TABLE) and then re-insert from scratch. I am currently looking into something schemaless like MongoDB. I do not need distribution. I would very much like something that is reasonably fast (and can be indexed) and a lot more flexible than a traditional RDBMS. Also, I need something that has some kind of ORM because I personally like ORMs a lot. I am currently using SQLAlchemy. Please help!

    Read the article

  • How to store and synchronize a big list of strings

    - by Joel
    I have a large database table in SQLExpress on Windows, with a particular field of interest 'code'. I have an Apache web server with MySQL on Linux. The web application on the Linux box needs access to the list of all codes. The only thing it will use the list for is checking for the existence of a given code. Having the Linux server call out to the Windows server is impractical as the Windows server is behind a NAT'ed office internet connection, and it may not always be accessible. I have set it so the Windows server will push the list of codes to the web server by means of a simple HTTP POST request. However, at this point I have not implemented the storage of the codes on the Linux box. Should I store them in a MySQL table with a single field 'code'? Then I get fast indexed lookups O(1), however I think synchronization will be an issue - given an updated list of codes, pushed from the Windows box, how would I optimally synchronize the list with the database? TRUNCATE, followed by INSERT? Should I instead store them in a flat file? Then I have O(n) look up time rather than O(1). Additionally an extra constant-time overhead too, as I will be processing the file in Ruby. However, synchronization is easy - simply replace the file.
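
    For what it's worth, the O(n) worry about the flat file only applies if the file is re-scanned on every check; if the codes are read once into a hash set, membership tests are O(1) either way, and synchronization stays a simple atomic file replace. A rough sketch of that idea (in Python rather than Ruby, with an assumed one-code-per-line layout):

        import os
        import tempfile

        def write_codes(codes, path="codes.txt"):
            # Write to a temporary file first, then rename: readers never see a
            # half-written list, so the push from the Windows box stays atomic.
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
            with os.fdopen(fd, "w") as f:
                f.write("\n".join(codes))
            os.replace(tmp, path)

        def load_codes(path="codes.txt"):
            # Load once per request (or cache it) and check membership in O(1).
            with open(path) as f:
                return {line.strip() for line in f if line.strip()}

        write_codes(["ABC123", "DEF456"])
        codes = load_codes()
        print("DEF456" in codes)   # True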

    Read the article

  • PHP site scheduling Java execution?

    - by obfuscation
    I'm trying to get started on combining my (slightly limited) PHP experience with my (better) Java experience, in a project where I need to allow uploads of Java source files to the server, which the server then compiles with javac. Then, at a set time (e.g. specified on upload) I need to run that once on the server, which will generate some database info for the PHP site to display. To describe my current programming abilities: I have made many desktop Java programs and am confident in 'pure' Java, but so far have only undertaken a couple of PHP projects (including using the CodeIgniter framework). My motivation for using PHP as the frontend is that it is very fast and lightweight, and I will be able to display the results I need very easily with it (a simple DB readout). Ideally, the technology used should be something I can develop against on localhost (e.g. WAMP, Tomcat, etc.). Is there any advice you could give on what technology I should consider using to bridge this gap, and what resources could help in using that technology? I have looked at a few, but have struggled to find documentation that helps achieve what I need.

    Read the article

  • Replacing repetitively occurring loops with eval in JavaScript - good or bad?

    - by Herc
    Hello stackoverflow! I have a certain loop occurring several times in various functions in my code. To illustrate with an example, it's pretty much along the lines of the following: for (var i=0;i<= 5; i++) { function1(function2(arr[i],i),$('div'+i)); $('span'+i).value = function3(arr[i]); } Where i is the loop counter of course. For the sake of reducing my code size and avoid repeating the loop declaration, I thought I should replace it with the following: function loop(s) { for (var i=0;i<= 5; i++) { eval(s); } } [...] loop("function1(function2(arr[i],i),$('div'+i));$('span'+i).value = function3(arr[i]);"); Or should I? I've heard a lot about eval() slowing code execution and I'd like it to work as fast as a proper loop even in the Nintendo DSi browser, but I'd also like to cut down on code. What would you suggest? Thank you in advance!
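
    The usual way to keep the loop in one place without eval is to pass the body in as a function rather than as a string; the engine then compiles it once and there is nothing to re-parse on every iteration. The shape of the idea, sketched in Python (the JavaScript version is the same, with an anonymous function as the argument):

        def loop(body, n=6):
            # Run the supplied callback once per index; no string parsing involved.
            for i in range(n):
                body(i)

        # Hypothetical call site standing in for the function1/function2/function3 work:
        loop(lambda i: print("processing index", i))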

    Read the article

  • Processing JSON data with jQuery - strange results needing alert()

    - by James
    I have the code below. I stumbled across the fact that it will work if I have that alert message exactly where it is. If I take it out or move it to any other spot, the tabs will not appear. What exactly is that alert doing that allows the code to work, and how can I make it work without the alert? If I move the each loop into the success section it does not work, even with the alert. $.ajax({ type: "GET", url: "../ajax.php", data: "action=tabs", dataType: "json", success: function(data){ Projects = data; } }); alert("yes"); $.each(Projects, function(i){ /* Sequentially creating the tabs and assigning a color from the array: */ var tmp = $('<li><a href="#" class="tab green">'+Projects[i].name+'<span class="left" /><span class="right" /></a></li>'); /* Setting the page data for each hyperlink: */ tmp.find('a').data('page','../ajax.php?action=lists&projectID='+Projects[i].project_id); /* Adding the tab to the UL container: */ $('ul.tabContainer').append(tmp); }); The AJAX endpoint is returning JSON with this code $query = mysql_query("SELECT * FROM `projects` ORDER BY `position` ASC"); $projects = array(); // Filling the $projects array with new project objects: while($row = mysql_fetch_assoc($query)){ $projects[] = $row; } echo json_encode($projects); The returned data is very small and arrives very fast, so I don't think that is the problem.

    Read the article

  • Same function on multiple div classes doesn't work

    - by Sebass van Boxel
    I'm doing something terribly wrong and just can't find the solution. Situation: I've got a number of products with a number of quotes per product. Those quotes automatically scroll in a div. When the scroll reaches the last quote it scrolls back to the first one. What works: the function basically works when it's applied to 1 div, but when applied to multiple divs it doesn't scroll back to the first one, or keeps scrolling endlessly. This is the function I've written for this: function quoteSlide(divname){ $total = ($(divname+" > div").size()) $width = $total * 160; $(divname).css('width', ($width)); console.log ($totalleft *-1); if ($width - 160 > $totalleft *-1){ $currentleft = $(divname).css('left'); $step = -160; $totalleft = parseInt($currentleft)+$step; }else{ $totalleft = 0; } $(divname).animate( { left: $totalleft }, // what we are animating 'slow', // how fast we are animating 'swing', // the type of easing function() { // the callback }); } It's being executed by something like quoteSlide('#quotecontainer_1'); in combination with a setInterval so it keeps scrolling automatically. This is the jsFiddle where it goes wrong (applied to more than 1 div): http://jsfiddle.net/FsrbZ/. This is the jsFiddle where everything goes okay (applied to 1 div). When changing the following: quoteSlide('#quotecontainer_1'); quoteSlide('#quotecontainer_2'); setInterval(function() { quoteSlide('#quotecontainer_1'); quoteSlide('#quotecontainer_2'); }, 3400); to quoteSlide('#quotecontainer_1'); setInterval(function() { quoteSlide('#quotecontainer_1'); }, 3400); it does work... but only on 1 quotecontainer.

    Read the article

  • Getting a `free()` error when deallocating with `delete` in the backtrace

    - by wonko
    I got the following error from gdb: *** glibc detected *** /.root0/autohome/u132/hsreekum/ipopt/ipopt/debug/Ipopt/examples/ex3/ex3: free(): invalid next size (fast): 0x0000000120052b60 *** Here's the backtrace: #0 0x000000555626b264 in raise () from /lib/libc.so.6 #1 0x000000555626cc6c in abort () from /lib/libc.so.6 #2 0x00000055562a7b9c in __libc_message () from /lib/libc.so.6 #3 0x00000055562aeabc in malloc_printerr () from /lib/libc.so.6 #4 0x00000055562b036c in free () from /lib/libc.so.6 #5 0x000000555561ddd0 in Ipopt::TNLPAdapter::~TNLPAdapter () from /home/ba01/u132/hsreekum/ipopt/ipopt/build/lib/libipopt.so.1 #6 0x00000055556a9910 in Ipopt::GradientScaling::~GradientScaling () from /home/ba01/u132/hsreekum/ipopt/ipopt/build/lib/libipopt.so.1 #7 0x00000055557241b8 in Ipopt::OrigIpoptNLP::~OrigIpoptNLP () from /home/ba01/u132/hsreekum/ipopt/ipopt/build/lib/libipopt.so.1 #8 0x00000055556ae7f0 in Ipopt::IpoptAlgorithm::~IpoptAlgorithm () from /home/ba01/u132/hsreekum/ipopt/ipopt/build/lib/libipopt.so.1 #9 0x0000005555602278 in Ipopt::IpoptApplication::~IpoptApplication () from /home/ba01/u132/hsreekum/ipopt/ipopt/build/lib/libipopt.so.1 #10 0x0000005555614428 in FreeIpoptProblem () from /home/ba01/u132/hsreekum/ipopt/ipopt/build/lib/libipopt.so.1 #11 0x0000000120001610 in main () at ex3.c:169` And here's the code for Ipopt::TNLPAdapter::~TNLPAdapter () TNLPAdapter::~TNLPAdapter() { delete [] full_x_; delete [] full_lambda_; delete [] full_g_; delete [] jac_g_; delete [] c_rhs_; delete [] jac_idx_map_; delete [] h_idx_map_; delete [] x_fixed_map_; delete [] findiff_jac_ia_; delete [] findiff_jac_ja_; delete [] findiff_jac_postriplet_; delete [] findiff_x_l_; delete [] findiff_x_u_; } My question is : why does free() throw an error when ~TNLPAdapter() uses delete[]? Also, I would like to step through ~TNLPAdapter() so I can see which deallocation causes the error. I believe the error occurs in the external library (IPOPT) but I have compiled it with debug flags on ; is this sufficient?

    Read the article

  • Python2.7: How can I speed up this bit of code (loop/lists/tuple optimization)?

    - by user89
    I repeat the following idiom again and again. I read from a large file (sometimes, up to 1.2 million records!) and store the output into an SQLite database. Putting stuff into the SQLite DB seems to be fairly fast. def readerFunction(recordSize, recordFormat, connection, outputDirectory, outputFile, numObjects): insertString = "insert into NODE_DISP_INFO(node, analysis, timeStep, H1_translation, H2_translation, V_translation, H1_rotation, H2_rotation, V_rotation) values (?, ?, ?, ?, ?, ?, ?, ?, ?)" analysisNumber = int(outputPath[-3:]) outputFileObject = open(os.path.join(outputDirectory, outputFile), "rb") outputFileObject, numberOfRecordsInFileObject = determineNumberOfRecordsInFileObjectGivenRecordSize(recordSize, outputFileObject) numberOfRecordsPerObject = (numberOfRecordsInFileObject//numberOfObjects) loop1StartTime = time.time() for i in range(numberOfRecordsPerObject ): processedRecords = [] loop2StartTime = time.time() for j in range(numberOfObjects): fout = outputFileObject .read(recordSize) processedRecords.append(tuple([j+1, analysisNumber, i] + [x for x in list(struct.unpack(recordFormat, fout))])) loop2EndTime = time.time() print "Time taken to finish loop2: {}".format(loop2EndTime-loop2StartTime) dbInsertStartTime = time.time() connection.executemany(insertString, processedRecords) dbInsertEndTime = time.time() loop1EndTime = time.time() print "Time taken to finish loop1: {}".format(loop1EndTime-loop1StartTime) outputFileObject.close() print "Finished reading output file for analysis {}...".format(analysisNumber) When I run the code, it seems that "loop 2" and "inserting into the database" are where most of the execution time is spent. Average "loop 2" time is 0.003s, but it is run up to 50,000 times in some analyses. The time spent putting stuff into the database is about the same: 0.004s. Currently, I am inserting into the database every time after loop2 finishes so that I don't have to deal with running out of RAM. What could I do to speed up "loop 2"?
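
    A few micro-optimizations usually help with a loop shaped like "loop 2": compile the format once with struct.Struct, read all the records for one time step with a single read() call, and unpack in place with unpack_from instead of one read()/unpack() pair per record. A sketch of that reshaped inner loop, reusing the names from the post (and assuming recordSize equals the packed size of recordFormat):

        import struct

        record = struct.Struct(recordFormat)        # compiled once, reused every iteration

        for i in range(numberOfRecordsPerObject):
            # One read() per time step instead of one per record.
            chunk = outputFileObject.read(record.size * numberOfObjects)
            processedRecords = [
                (j + 1, analysisNumber, i) + record.unpack_from(chunk, j * record.size)
                for j in range(numberOfObjects)
            ]
            connection.executemany(insertString, processedRecords)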

    Read the article

  • seriously elusive for loop (racking my brains!)

    - by user1693359
    I've got a loop issue in Python 2.7.2 that's really frustrating me. Basically the loop is not iterating past the first index (j), and I've tried all sorts of ways to fix it with no luck. def learn(dataSet): for i in dataSet.getNext(): recall = raw_input("Enter all members of %s you are able to recall >>> (separated by commas) " % (i.getName())) missed = i.getMembers() missedString = [] for a in missed: missedString.append(a.getName()) Here is the loop I can't get to iterate. The first for loop only goes through the first iteration of 'j' in the split string list, then removes it from 'missedString'. I would like all members of the split string 'recall' to be removed from 'missedString'. for j in string.split(recall, ','): if j in missedString: missedString.remove(j) continue for b in missed: if b.getName() not in missedString: missed.remove(b) print 'You missed %d. ' % (len(missed)) if (len(missed)) > 0: print 'Maybe a hint or two will help...' for miss in missed: remind(miss.getSecs(), i.getName(), missed) I really have no clue; help would be appreciated!
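
    Two classic pitfalls are worth ruling out here: split(',') keeps the space after each comma (so ' bob' never matches 'bob'), and calling remove() on a list while iterating over it skips elements. A small sketch of the intended "drop everything that was recalled" step that avoids both, keeping the names from the post:

        # Normalize the user's input once: split on commas and strip stray whitespace.
        recalled = {name.strip() for name in recall.split(",")}

        # Rebuild the lists instead of removing from them while iterating.
        missed = [m for m in missed if m.getName() not in recalled]
        missedString = [m.getName() for m in missed]

        print("You missed %d." % len(missed))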

    Read the article

  • F#: why is my recursion faster than Seq.exists?

    - by user38397
    I am pretty new to F#. I'm trying to understand how to write fast code in F#. For this, I tried to write two methods (IsPrime1 and IsPrime2) for benchmarking. My code is: // Learn more about F# at http://fsharp.net open System open System.Diagnostics #light let isDivisible n d = n % d = 0 let IsPrime1 n = Array.init (n-2) ((+) 2) |> Array.exists (isDivisible n) |> not let rec hasDivisor n d = match d with | x when x < n -> (n % x = 0) || (hasDivisor n (d+1)) | _ -> false let IsPrime2 n = hasDivisor n 2 |> not let SumOfPrimes max = [|2..max|] |> Array.filter IsPrime1 |> Array.sum let maxVal = 20000 let s = new Stopwatch() s.Start() let valOfSum = SumOfPrimes maxVal s.Stop() Console.WriteLine valOfSum Console.WriteLine("IsPrime1: {0}", s.ElapsedMilliseconds) ////////////////////////////////// s.Reset() s.Start() let SumOfPrimes2 max = [|2..max|] |> Array.filter IsPrime2 |> Array.sum let valOfSum2 = SumOfPrimes2 maxVal s.Stop() Console.WriteLine valOfSum2 Console.WriteLine("IsPrime2: {0}", s.ElapsedMilliseconds) Console.ReadKey() IsPrime1 takes 760 ms while IsPrime2 takes 260 ms for the same result. What's going on here, and how can I make my code even faster?

    Read the article

  • Is it possible to submit data into a SQL database, wait for that to finish, and then return the ID generated?

    - by user322478
    I have an ASP form that needs to submit data to two different systems. First the data needs to go into an MS SQL database, which will get an ID. I then need to submit all that form data to an external system, along with that ID. Pretty much everything in the code works just fine, the data goes into the database, and the data will go to the external system. The problem is I am not getting my ID back from SQL when I execute that query. I am under the impression this is happening because of how fast everything occurs in the code. The database is adding it's row at the same time my post page runs it's query to get the ID back, I think. I need to know of a way to wait until SQL finished the insert or wait for a specific amount of time maybe. I already tried using the hacks to "sleep" with ASP, that did not help. I am sure I could accomplish this in .Net, my background is more .Net than ASP, but this is what I have to work with on my current project. Any ideas?

    Read the article

  • C non-trivial constants

    - by user525869
    I want to make several constants in C with #define to speed up computation. Two of them are not simply trivial numbers: one is a right shift, the other is a power. math.h in C gives the function pow() for doubles, whereas I need powers for integers, so I wrote my own function, ipow, so I wouldn't need to be casting every time. My question is this: one of the #define constants I want to make is a power, say ipow(M, T), where M and T are also #define constants. ipow is a function in the actual code, so this actually seems to slow things down when I run the code (is it running ipow every time the constant is mentioned?). However, when I use the built-in pow function and just do (int)pow(M,T), the code is sped up. I'm confused as to why this is, since the ipow and pow functions are just as fast. On a more general note, can I define constants using #define with functions from the actual code? The above example has me confused as to whether this speeds things up or actually slows things down.

    Read the article

  • Duplicate values multi array

    - by BETA911
    As the title states, I'm looking for a way to handle duplicate values in a multidimensional array. PHP is not my world, so I can't come up with a good and fast solution. I basically get this from the database: http://pastebin.com/vYhFCuYw . I want to check on the 'id' key, and if the array contains a duplicate 'id', then the 'aantal' values should be added together. So basically the output has to be this: http://pastebin.com/0TXRrwLs . Thanks in advance! EDIT As asked, attempt 1 out of many: function checkDuplicates($array) { $temp = array(); foreach($array as $k) { foreach ($array as $v) { $t_id = $k['id']; $t_naam = $k['naam']; $t_percentage = $k['percentage']; $t_aantal = $k['aantal']; if ($k['id'] == $v['id']) { $t_aantal += $k['aantal']; array_push($temp, array( 'id' => $t_id, 'naam' => $t_naam, 'percentage' => $t_percentage, 'aantal' => $t_aantal, ) ); } } } return $temp; }
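
    The merge being described, one output row per 'id' with the 'aantal' values summed across duplicates, is a single pass over the rows with a lookup table keyed on id. A sketch of the idea in Python with made-up sample data (the PHP version is the same shape, using an array indexed by $row['id']):

        rows = [
            {"id": 1, "naam": "A", "percentage": 10, "aantal": 2},
            {"id": 2, "naam": "B", "percentage": 20, "aantal": 1},
            {"id": 1, "naam": "A", "percentage": 10, "aantal": 3},
        ]

        merged = {}
        for row in rows:
            if row["id"] in merged:
                merged[row["id"]]["aantal"] += row["aantal"]   # duplicate id: add the counts
            else:
                merged[row["id"]] = dict(row)                  # first occurrence: copy the row

        print(list(merged.values()))   # id 1 comes out once, with aantal == 5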

    Read the article

  • Git development/production workflow – how to set up repo?

    - by Blixt
    I'm working on a relatively small, but fast-changing project (a web application) with a few other developers. We're using Git for source control. We started out creating a stable branch which is what is deployed to the live production web server. The master branch is what is deployed to a secondary "unstable" server for testing purposes. Whenever we felt that the master branch was ready to go live, we merged it into stable. However, we came to a point where we wanted one of the later master commits, but not some of the commits before it, so we used cherry-pick to pull that change into stable. This creates a new commit with the same change as the one in master, and it feels as if we're losing the nice history that Git otherwise provides. Are there better ways of handling this type of unstable/stable deployment model? One solution I thought of was using feature branches, and only ever merging a feature branch into master once we want it to go live. Then we'll tag every deployment instead of having a stable branch.

    Read the article

  • Openlayers - Redraw() layer. Add / Remove layer.

    - by Ozaki
    TLDR: I have an OpenLayers map with a layer called 'track'; I want to remove track and add track back in. I have an image 'imageFeature' on a layer that rotates on load to the direction being set. I want it to update this rotation that is set in 'styleMap' on a layer called 'tracking'. I set the var 'styleMap' to apply the external image & rotation. The 'imageFeature' is added to the layer at the coords specified. 'imageFeature' is removed. 'imageFeature' is added again in its new location. Rotation is not applied. As the 'styleMap' applies to the layer, I think I have to remove the layer and add it again rather than just the 'imageFeature'. Layer: var tracking = new OpenLayers.Layer.GML("Tracking", "coordinates.json", { format: OpenLayers.Format.GeoJSON, styleMap: styleMap }); styleMap: var styleMap = new OpenLayers.StyleMap({ fillOpacity: 1, pointRadius: 10, rotation: heading, }); Now wrapped in a timed function, the imageFeature: map.layers[3].addFeatures(new OpenLayers.Feature.Vector( new OpenLayers.Geometry.Point(longitude, latitude), {rotation: heading, type: parseInt(Math.random() * 3)} )); Type refers to a lookup of 1 of 3 images: styleMap.addUniqueValueRules("default", "type", lookup); var lookup = { 0: {externalGraphic: "Image1.png", rotation: heading}, 1: {externalGraphic: "Image2.png", rotation: heading}, 2: {externalGraphic: "Image3.png", rotation: heading} } I have tried the redraw() function, but it returns "tracking is undefined" or "map.layers[2] is undefined": tracking.redraw(true); map.layers[2].redraw(true); Heading is a variable from a JSON feed: var heading = 13.542; But so far I can't get anything to work; it will only rotate the image on load. The image will move in coordinates as it should, though. So what am I doing wrong with the redraw function, and how can I get this image to rotate live? Thanks in advance -Ozaki Add: I managed to get map.layers[2].redraw(true); to successfully redraw layer 2, but it still does not update the rotation. I am thinking it is because of how the styleMap updates. It runs through the styleMap every n sec, but there are no updates to the rotation, and the heading variable is updating correctly if I put a watch on it in Firebug.

    Read the article

  • Python Memory leak - Solved, but still puzzled

    - by disappearedng
    Dear everyone, I have successfully debugged my own memory leak problems. However, I have noticed a very strange occurrence. for fid, fv in freqDic.iteritems(): outf.write(fid+"\t") #ID for i, term in enumerate(domain): #Vector tfidf = self.tf(term, fv) * self.idf( term, docFreqDic) if i == len(domain) - 1: outf.write("%f\n" % tfidf) else: outf.write("%f\t" % tfidf) outf.flush() print "Memory increased by", int(self.memory_mon.usage()) - startMemory outf.close() def tf(self, term, freqVector): total = freqVector[TOTAL] if total == 0: return 0 if term not in freqVector: ## When you don't have these lines memory leaks occurs return 0 ## return float(freqVector[term]) / freqVector[TOTAL] def idf(self, term, docFrequencyPerTerm): if term not in docFrequencyPerTerm: return 0 return math.log( float(docFrequencyPerTerm[TOTAL])/docFrequencyPerTerm[term]) Basically let me describe my problem: 1) I am doing tfidf calculations 2) I traced that the source of the memory leaks is the defaultdict. 3) I am using the memory_mon from http://stackoverflow.com/questions/276052/how-to-get-current-cpu-and-ram-usage-in-python 4) The reason for my memory leaks is as follows: a) in self.tf, if the lines: if term not in freqVector: return 0 are not present, that causes the memory leak. (I verified this myself using memory_mon and noticed a sharp increase in memory that kept on increasing.) The solution to my problem was: 1) since fv is a defaultdict, any key referenced on it that is not found in fv will create an entry. Over a very large domain, this will cause memory leaks. I decided to use dict instead of defaultdict and the memory problem did go away. My only puzzle is: since fv is created in "for fid, fv in freqDic.iteritems():" shouldn't fv be destroyed at the end of every for loop? I tried putting gc.collect() at the end of the for loop but gc was not able to collect everything (returns 0). Yes, the hypothesis is right, but the memory should stay fairly consistent with every for loop if for loops do destroy all temp variables.
This is what it looks like with that two line in self.tf: Memory increased by 12 Memory increased by 948 Memory increased by 28 Memory increased by 36 Memory increased by 36 Memory increased by 32 Memory increased by 28 Memory increased by 32 Memory increased by 32 Memory increased by 32 Memory increased by 40 Memory increased by 32 Memory increased by 32 Memory increased by 28 and without the the two line: Memory increased by 1652 Memory increased by 3576 Memory increased by 4220 Memory increased by 5760 Memory increased by 7296 Memory increased by 8840 Memory increased by 10456 Memory increased by 12824 Memory increased by 13460 Memory increased by 15000 Memory increased by 17448 Memory increased by 18084 Memory increased by 19628 Memory increased by 22080 Memory increased by 22708 Memory increased by 24248 Memory increased by 26704 Memory increased by 27332 Memory increased by 28864 Memory increased by 30404 Memory increased by 32856 Memory increased by 33552 Memory increased by 35024 Memory increased by 36564 Memory increased by 39016 Memory increased by 39924 Memory increased by 42104 Memory increased by 42724 Memory increased by 44268 Memory increased by 46720 Memory increased by 47352 Memory increased by 48952 Memory increased by 50428 Memory increased by 51964 Memory increased by 53508 Memory increased by 55960 Memory increased by 56584 Memory increased by 58404 Memory increased by 59668 Memory increased by 61208 Memory increased by 62744 Memory increased by 64400 I look forward to your answer
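
    The remaining puzzle has a simple explanation: fv is not a temporary copy, it is a reference to the dict object stored inside freqDic, so anything added to it survives the loop iteration. And with a defaultdict, merely reading a missing key with [] (as float(freqVector[term]) does) inserts a default entry, so every frequency vector gradually grows toward the size of the whole domain. A minimal demonstration of that behaviour:

        from collections import defaultdict

        fv = defaultdict(int)
        fv["total"] = 3

        _ = fv["unseen-term"]       # __getitem__ on a missing key silently creates it
        print(len(fv))              # 2 -- the dict grew just from being read

        _ = "other-term" in fv      # a membership test does not insert anything
        print(len(fv))              # still 2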

    Read the article

  • Using JAXB to unmarshal/marshal a List<String> - Inheritance

    - by gerry
    I've built the following case. An interface for all JAXB lists: public interface JaxbList<T> { public abstract List<T> getList(); } And a base implementation: @XmlRootElement(name="list") public class JaxbBaseList<T> implements JaxbList<T>{ protected List<T> list; public JaxbBaseList(){} public JaxbBaseList(List<T> list){ this.list=list; } @XmlElement(name="item" ) public List<T> getList(){ return list; } } As well as an implementation for a list of URIs: @XmlRootElement(name="uris") public class JaxbUriList2 extends JaxbBaseList<String> { public JaxbUriList2() { super(); } public JaxbUriList2(List<String> list){ super(list); } @Override @XmlElement(name="uri") public List<String> getList() { return list; } } And I'm using the List in the following way: public JaxbList<String> init(@QueryParam("amount") int amount){ List<String> entityList = new Vector<String>(); ... enityList.add("http://uri"); ... return new JaxbUriList2(entityList); } I thought the output should be: <uris> <uri> http://uri </uri> ... </uris> But it is something like this: <uris> <item xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xs="http://www.w3.org/2001/XMLSchema" xsi:type="xs:string"> http://uri </item> ... <uri> http://uri </uri> ... </uris> I think it has something to do with the inheritance, but I don't get it... What's the problem, and how can I fix it? Thanks in advance!

    Read the article

  • 2d trajectory planning of a spaceship with physics.

    - by egarcia
    I'm implementing a 2D game with ships in space. In order to do it, I'm using LÖVE, which wraps Box2D with Lua. But I believe that my question can be answered by anyone with a greater understanding of physics than myself, so pseudocode is accepted as a response. My problem is that I don't know how to move my spaceships properly in a 2D physics-enabled world. More concretely: A ship of mass m is located at an initial position {x, y}. It has an initial velocity vector of {vx, vy} (which can be {0,0}). The objective is a point at {xo,yo}. The ship has to reach the objective with a velocity of {vxo, vyo} (or near it), following the shortest trajectory. There's a function called update(dt) that is called frequently (i.e. 30 times per second). In this function, the ship can modify its position and trajectory by applying "impulses" to itself. The magnitude of the impulses is binary: you either apply it in a given direction, or you don't apply it at all. In code, it looks like this: def Ship:update(dt) m = self:getMass() x,y = self:getPosition() vx,vy = self.getLinearVelocity() xo,yo = self:getTargetPosition() vxo,vyo = self:getTargetVelocity() thrust = self:getThrust() if(???) angle = ??? self:applyImpulse(math.sin(angle)*thrust, math.cos(angle)*thrust)) end end The first ??? is there to indicate that on some occasions (I guess) it would be better not to impulse and let the ship drift. The second ??? part is how to calculate the impulse angle on a given dt. We are in space, so we can ignore things like air friction. Although it would be very nice, I'm not looking for someone to code this for me; I put the code there so my problem is clearly understood. What I need is a strategy, a way of attacking this. I know some basic physics, but I'm no expert. For example, does this problem have a name? That sort of thing. Thanks a lot.

    Read the article

  • How to optimize my PageRank calculation?

    - by asmaier
    In the book Programming Collective Intelligence I found the following function to compute the PageRank: def calculatepagerank(self,iterations=20): # clear out the current PageRank tables self.con.execute("drop table if exists pagerank") self.con.execute("create table pagerank(urlid primary key,score)") self.con.execute("create index prankidx on pagerank(urlid)") # initialize every url with a PageRank of 1.0 self.con.execute("insert into pagerank select rowid,1.0 from urllist") self.dbcommit() for i in range(iterations): print "Iteration %d" % i for (urlid,) in self.con.execute("select rowid from urllist"): pr=0.15 # Loop through all the pages that link to this one for (linker,) in self.con.execute("select distinct fromid from link where toid=%d" % urlid): # Get the PageRank of the linker linkingpr=self.con.execute("select score from pagerank where urlid=%d" % linker).fetchone()[0] # Get the total number of links from the linker linkingcount=self.con.execute("select count(*) from link where fromid=%d" % linker).fetchone()[0] pr+=0.85*(linkingpr/linkingcount) self.con.execute("update pagerank set score=%f where urlid=%d" % (pr,urlid)) self.dbcommit() However, this function is very slow, because of all the SQL queries in every iteration >>> import cProfile >>> cProfile.run("crawler.calculatepagerank()") 2262510 function calls in 136.006 CPU seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 136.006 136.006 <string>:1(<module>) 1 20.826 20.826 136.006 136.006 searchengine.py:179(calculatepagerank) 21 0.000 0.000 0.528 0.025 searchengine.py:27(dbcommit) 21 0.528 0.025 0.528 0.025 {method 'commit' of 'sqlite3.Connecti 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler 1339864 112.602 0.000 112.602 0.000 {method 'execute' of 'sqlite3.Connec 922600 2.050 0.000 2.050 0.000 {method 'fetchone' of 'sqlite3.Cursor' 1 0.000 0.000 0.000 0.000 {range} So I optimized the function and came up with this: def calculatepagerank2(self,iterations=20): # clear out the current PageRank tables self.con.execute("drop table if exists pagerank") self.con.execute("create table pagerank(urlid primary key,score)") self.con.execute("create index prankidx on pagerank(urlid)") # initialize every url with a PageRank of 1.0 self.con.execute("insert into pagerank select rowid,1.0 from urllist") self.dbcommit() inlinks={} numoutlinks={} pagerank={} for (urlid,) in self.con.execute("select rowid from urllist"): inlinks[urlid]=[] numoutlinks[urlid]=0 # Initialize pagerank vector with 1.0 pagerank[urlid]=1.0 # Loop through all the pages that link to this one for (inlink,) in self.con.execute("select distinct fromid from link where toid=%d" % urlid): inlinks[urlid].append(inlink) # get number of outgoing links from a page numoutlinks[urlid]=self.con.execute("select count(*) from link where fromid=%d" % urlid).fetchone()[0] for i in range(iterations): print "Iteration %d" % i for urlid in pagerank: pr=0.15 for link in inlinks[urlid]: linkpr=pagerank[link] linkcount=numoutlinks[link] pr+=0.85*(linkpr/linkcount) pagerank[urlid]=pr for urlid in pagerank: self.con.execute("update pagerank set score=%f where urlid=%d" % (pagerank[urlid],urlid)) self.dbcommit() This function is 20 times faster (but uses a lot more memory for all the temporary dictionaries) because it avoids the unnecessary SQL queries in every iteration: >>> cProfile.run("crawler.calculatepagerank2()") 64802 function calls in 6.950 CPU seconds Ordered by: standard name ncalls tottime percall 
cumtime percall filename:lineno(function) 1 0.004 0.004 6.950 6.950 <string>:1(<module>) 1 1.004 1.004 6.946 6.946 searchengine.py:207(calculatepagerank2 2 0.000 0.000 0.104 0.052 searchengine.py:27(dbcommit) 23065 0.012 0.000 0.012 0.000 {meth 'append' of 'list' objects} 2 0.104 0.052 0.104 0.052 {meth 'commit' of 'sqlite3.Connection 1 0.000 0.000 0.000 0.000 {meth 'disable' of '_lsprof.Profiler' 31298 5.809 0.000 5.809 0.000 {meth 'execute' of 'sqlite3.Connectio 10431 0.018 0.000 0.018 0.000 {method 'fetchone' of 'sqlite3.Cursor' 1 0.000 0.000 0.000 0.000 {range} But is it possible to further reduce the number of SQL queries to speed up the function even more?
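
    One further step in the same direction, sketched with the table and column names from the post: pull the whole link table into memory with two queries (instead of two queries per URL while building inlinks and numoutlinks), and push the final scores back with a single executemany. After that, the iterations themselves touch no SQL at all.

        from collections import defaultdict

        urlids = [row[0] for row in self.con.execute("select rowid from urllist")]

        # Every distinct incoming edge, loaded once.
        inlinks = defaultdict(list)
        for fromid, toid in self.con.execute("select distinct fromid, toid from link"):
            inlinks[toid].append(fromid)

        # Outgoing-link counts, loaded once (same semantics as the per-url count(*)).
        numoutlinks = {}
        for fromid, n in self.con.execute("select fromid, count(*) from link group by fromid"):
            numoutlinks[fromid] = n

        pagerank = dict.fromkeys(urlids, 1.0)
        for _ in range(iterations):
            for urlid in urlids:
                pr = 0.15
                for linker in inlinks[urlid]:
                    pr += 0.85 * pagerank[linker] / numoutlinks[linker]
                pagerank[urlid] = pr

        # One batched write-back instead of one UPDATE per url.
        self.con.executemany("update pagerank set score=? where urlid=?",
                             [(score, urlid) for urlid, score in pagerank.items()])
        self.dbcommit()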

    Read the article

  • C# file Decryption - Bad Data

    - by Jon
    Hi all, I am in the process of rewriting an old application. The old app stored data in a scoreboard file that was encrypted with the following code: private const String SSecretKey = @"?B?n?Mj?"; public DataTable GetScoreboardFromFile() { FileInfo f = new FileInfo(scoreBoardLocation); if (!f.Exists) { return setupNewScoreBoard(); } DESCryptoServiceProvider DES = new DESCryptoServiceProvider(); //A 64 bit key and IV is required for this provider. //Set secret key For DES algorithm. DES.Key = ASCIIEncoding.ASCII.GetBytes(SSecretKey); //Set initialization vector. DES.IV = ASCIIEncoding.ASCII.GetBytes(SSecretKey); //Create a file stream to read the encrypted file back. FileStream fsread = new FileStream(scoreBoardLocation, FileMode.Open, FileAccess.Read); //Create a DES decryptor from the DES instance. ICryptoTransform desdecrypt = DES.CreateDecryptor(); //Create crypto stream set to read and do a //DES decryption transform on incoming bytes. CryptoStream cryptostreamDecr = new CryptoStream(fsread, desdecrypt, CryptoStreamMode.Read); DataTable dTable = new DataTable("scoreboard"); dTable.ReadXml(new StreamReader(cryptostreamDecr)); cryptostreamDecr.Close(); fsread.Close(); return dTable; } This works fine. I have copied the code into my new app so that I can create a legacy loader and convert the data into the new format. The problem is that I get a "Bad Data" error: System.Security.Cryptography.CryptographicException was unhandled Message="Bad Data.\r\n" Source="mscorlib" The error fires at this line: dTable.ReadXml(new StreamReader(cryptostreamDecr)); The encrypted file was created today on the same machine with the old code. I guess that maybe the encryption/decryption process uses the application name or file or something, and that is why I cannot open it. Can anyone A) explain why this isn't working, and B) offer a solution that would allow me to open files that were created with the legacy application and convert them? Thank you

    Read the article
