Search Results

Search found 17924 results on 717 pages for 'z order'.


  • QuadTrees - how to update when internal items are moving

    - by egarcia
    I've implemented a working QuadTree. It subdivides 2-d space in order to accommodate items, identified by their bounding box (x, y, width, height), on the smallest possible quad (down to a minimum area). My code is based on this implementation (mine is in Lua instead of C#): http://www.codeproject.com/KB/recipes/QuadTree.aspx

    I've been able to implement insertions and deletions successfully. I've now turned my attention to the update() function, since my items' positions and dimensions change over time. My first implementation works, but it is quite naïve:

        function QuadTree:update(item)
          self:remove(item)
          return self.root:insert(item)
        end

    Yup, I basically remove and reinsert every item every time it moves. This works, but I'd like to optimize it a bit more; after all, on most iterations a moving item still remains on the same quadtree node. Is there any standard way to deal with this kind of update? In case it helps, my code is here: http://github.com/kikito/passion/blob/master/ai/QuadTree.lua

    I'm not looking for someone to implement it for me; pointers to an existing working implementation (even in other languages) would suffice.
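
    A minimal sketch of the optimization in question, in TypeScript for illustration; the node back-reference and all names here are assumptions, not part of the code linked above:

        // Reinsert an item only when it no longer fits the node holding it;
        // otherwise update() is a cheap no-op.
        interface Box { x: number; y: number; width: number; height: number; }
        interface QuadNode { bounds: Box; }
        interface Item { box: Box; node: QuadNode; }  // assumes a back-reference to the holding node

        function fitsInside(outer: Box, inner: Box): boolean {
          return inner.x >= outer.x &&
                 inner.y >= outer.y &&
                 inner.x + inner.width  <= outer.x + outer.width &&
                 inner.y + inner.height <= outer.y + outer.height;
        }

        function update(tree: { remove(i: Item): void; insert(i: Item): void }, item: Item): void {
          if (fitsInside(item.node.bounds, item.box)) {
            return;  // still inside the same quad: nothing to do
          }
          tree.remove(item);  // fall back to the naive remove/reinsert only when needed
          tree.insert(item);
        }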


  • Submit form with JS

    - by Thomas
    I'm working with a shopping cart plugin that uses a basic form on the product page to submit values like price, product name, etc. The problem is that the plugin uses a standard submit button to send the values, and I would like to replace said button with a prettier custom jQuery rollover. In order to make this happen I used some JS and tossed a link around my custom submit button:

        <a href="#" onclick="document.forms[0].submit()" value="Add to Cart" onsubmit="return ReadForm(this, true);">
          <img class="fade" src="style/img/add_btn.jpg" style="background: url(style/img/add_ro.png);" alt=""/>
        </a>

    The form submits and the user is redirected to the homepage, but the data in the form doesn't seem to get submitted, and the product never gets added to the 'cart' page. I suspect that the form is getting submitted but I am failing to fire some additional JS function that submits the data. The plugin adds some JS to the top of the page:

        <!--
        function ReadForm (obj1, tst) {              // Read the user form
          var i, j, pos;
          val_total = ""; val_combo = "";
          for (i = 0; i < obj1.length; i++) {        // run entire form
            obj = obj1.elements[i];                  // a form element
            if (obj.type == "select-one") {          // just selects
              if (obj.name == "quantity" || obj.name == "amount")
                continue;
              pos = obj.selectedIndex;               // which option selected
              val = obj.options[pos].value;          // selected value
              val_combo = val_combo + "(" + val + ")";
            }
          }
          // Now summarize everything we have processed above
          val_total = obj1.product_tmp.value + val_combo;
          obj1.product.value = val_total;
        }
        //-->

    If you want to check out the site in question, take a look here and click the 'add to cart' button: http://hardtopdepot.com/?p=34 You can see the cart at the top: http://hardtopdepot.com/?page_id=50 Any help would be super appreciated; thanks!
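
    A hedged sketch of the likely fix, in TypeScript: an onsubmit attribute on an <a> element never fires, so ReadForm() has to be called explicitly before the form is submitted. The a.add-to-cart selector is an assumption for illustration:

        // Run the plugin's ReadForm first, then submit the form.
        declare function ReadForm(form: HTMLFormElement, tst: boolean): void;  // provided by the plugin

        document.querySelector<HTMLAnchorElement>("a.add-to-cart")!
          .addEventListener("click", (ev) => {
            ev.preventDefault();
            const form = document.forms[0];  // the plugin's cart form
            ReadForm(form, true);            // populates the hidden 'product' field
            form.submit();                   // now the cart receives the real values
          });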


  • MySQL Database Question about Large Columns

    - by murat
    Hi, I have a table that has 100,000 rows, and soon it will be doubled. The size of the database is currently 5 GB, and most of it goes to one particular column, which is a text column holding the text of PDF files. We expect to have a 20-30 GB, maybe 50 GB, database after a couple of months, and this system will be used frequently.

    I have a couple of questions regarding this setup:

    1) We are using InnoDB on every table, including the users table, etc. Is it better to use MyISAM on this table, where we store the text version of the PDF files (from a memory-usage/performance perspective)?

    2) We use Sphinx for searching; however, the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. These 10 rows may allocate 50 MB of memory, which is quite large. So I am planning to split these PDF files into chunks of 5 pages in the database, so these 100,000 rows will become around 3-4 million rows, and in a couple of months, instead of having 300,000-350,000 rows, we'll have 10 million rows storing the text version of these PDF files. However, we will retrieve fewer pages: instead of retrieving 400 pages to send to Sphinx for highlighting, we can retrieve 5 pages, and it will have a big impact on performance. Currently, when we search for a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; however, if we retrieve PDF files that have fewer than 5 pages, the execution time drops to 0.06 seconds, and it also uses less memory.

    Do you think this is a good trade-off? We will have millions of rows instead of 100k-200k rows, but it will save memory and improve performance. Is this a good approach to solving the problem, and do you have any ideas how to overcome it? The text version of the data is used only for indexing and highlighting, so we are very flexible.

    Thanks,
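
    To frame the trade-off, here is the retrieval side of the chunking idea sketched in TypeScript with the mysql2 client; the pdf_chunks table, its columns, and the connection settings are hypothetical:

        // Fetch only the matched 5-page chunks, never whole documents.
        import mysql from "mysql2/promise";

        async function fetchChunks(chunkIds: number[]): Promise<string[]> {
          const conn = await mysql.createConnection({ host: "localhost", user: "app", database: "docs" });
          const placeholders = chunkIds.map(() => "?").join(",");
          const [rows] = await conn.execute(
            `SELECT body FROM pdf_chunks WHERE id IN (${placeholders})`,
            chunkIds
          );
          await conn.end();
          return (rows as { body: string }[]).map((r) => r.body);
        }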


  • How do I write unescaped XML outside of a CDATA

    - by kazanaki
    Hello, I am trying to write XML data using StAX where the content itself is HTML. If I try

        xtw.writeStartElement("contents");
        xtw.writeCharacters("<b>here</b>");
        xtw.writeEndElement();

    I get this:

        <contents>&lt;b&gt;here&lt;/b&gt;</contents>

    Then I noticed the CDATA method and changed my code to:

        xtw.writeStartElement("contents");
        xtw.writeCData("<b>here</b>");
        xtw.writeEndElement();

    and this time the result is

        <contents><![CDATA[<b>here</b>]]></contents>

    which is still not good. What I really want is

        <contents><b>here</b></contents>

    So is there an XML API/library that allows me to write raw text without being in a CDATA section? So far I have looked at StAX and JDOM, and they do not seem to offer this. In the end I might resort to good old StringBuilder, but that would not be elegant.

    Update: I agree mostly with the answers so far. However, instead of <b>here</b> I could have a 1 MB HTML document that I want to embed in a bigger XML document. What you suggest means that I have to parse this HTML document in order to understand its structure. I would like to avoid that if possible.
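
    For what it's worth, one generic workaround sketched in TypeScript (this is not a StAX feature, and the placeholder convention is an assumption): serialize the outer document with a unique marker, then splice in the raw, already-serialized HTML afterwards, so the embedded document never has to be parsed:

        // Splice a pre-serialized fragment into an XML string at a placeholder.
        function embedRaw(outerXml: string, placeholder: string, rawFragment: string): string {
          // the placeholder must be unique and must itself need no escaping
          return outerXml.replace(placeholder, rawFragment);
        }

        const outer = "<doc><contents>@@RAW@@</contents></doc>";
        console.log(embedRaw(outer, "@@RAW@@", "<b>here</b>"));
        // -> <doc><contents><b>here</b></contents></doc>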


  • Switching textstorage of NSTextViews back and forth

    - by Jakob Dam Jensen
    I'm trying to make a feature in a product that gives the user the ability to split a text view into two. The way this is done is by removing the text view from its superview, making an NSSplitView, and adding the text view as well as a new NSTextView instance to this split view. Lastly, I make these two text views share the same text storage in order to make them share the same content. It works great.

    The problem is when I want to make one of the two text views change text storage. The replaceTextStorage: method on NSLayoutManager causes both NSTextViews to change text storage. The API documentation states:

        replaceTextStorage:
        All NSLayoutManager objects sharing the original NSTextStorage object then share the new one. This method makes all the adjustments necessary to keep these relationships intact, unlike setTextStorage:.

    So it makes sense that it would do this. But the question is: how do I make it possible to have two (or more) text views first share the same storage, and after that have them use their own? I've tried replacing the layout manager and even making new instances of NSTextView, but no luck... Any suggestions?


  • How to name multiple versioned ServiceContracts in the same WCF service?

    - by Tor Hovland
    When you have to introduce a breaking change in a ServiceContract, a best practice is to keep the old one, create a new one, and use some version identifier in the namespace. If I understand this correctly, I should be able to do the following:

        [ServiceContract(Namespace = "http://foo.com/2010/01/14")]
        public interface IVersionedService
        {
            [OperationContract]
            string WriteGreeting(Person person);
        }

        [ServiceContract(Name = "IVersionedService", Namespace = "http://foo.com/2010/02/21")]
        public interface IVersionedService2
        {
            [OperationContract(Name = "WriteGreeting")]
            Greeting WriteGreeting2(Person2 person);
        }

    With this I can create a service that supports both versions. This actually works, and it looks fine when testing from soapUI. However, when I create a client in Visual Studio using "Add Service Reference", VS disregards the namespaces and simply sees two interfaces with the same name. In order to differentiate them, VS adds "1" to the name of one of them. I end up with proxies called ServiceReference.VersionedServiceClient and ServiceReference.VersionedService1Client, and now it's not easy for anybody to see which is the newer version.

    Should I give the interfaces different names? E.g.:

        IVersionedService1
        IVersionedService2

    or

        IVersionedService/2010/01/14
        IVersionedService/2010/02/21

    Doesn't this defeat the purpose of the namespace? Should I put them in different service classes and get a unique URL for each version?


  • How to create refresh statements for TableAdapter objects in Visual Studio?

    - by Mark Wilkins
    I am working on developing an ADO.NET data provider and an associated DDEX provider. I am unable to convince the Visual Studio TableAdapter Configuration Wizard to generate SQL statements to refresh the data table after inserts and updates. It generates the insert and delete statements but will not produce the select statements to do the refresh.

    The functionality referred to can be accessed by dropping a table from the Server Explorer (inside Visual Studio) onto a DataSet (e.g., DataSet1.xsd). It creates a TableAdapter object and configures SELECT, UPDATE, DELETE, and INSERT statements. If you right-click on the TableAdapter object, the context menu has a "Configure" option that starts the TableAdapter Configuration Wizard. The first dialog of that wizard has an Advanced Options button, which leads to an option titled "Refresh the data table". When used with SQL Server tables, that option causes a statement of the form "SELECT field1, field2, ..." to be appended to the end of the commands for the TableAdapter's InsertCommand and UpdateCommand.

    Do you have any idea what type, property, or interface might need to be exposed from the DDEX provider (or maybe the ADO.NET data provider) in order to make Visual Studio add those refresh statements to the update/insert commands? The MSDN documentation for the Advanced SQL Generation Options dialog box has a note stating, "Refreshing the data table is only supported on databases that support batching of SQL statements." This seems to imply that a .NET data provider might need to expose some property indicating such behavior is supported, but I cannot find it. Any ideas?


  • Memory leak in PHP script

    - by Jasper De Bruijn
    Hi, I have a PHP script that runs a MySQL query, then loops over the result, and in that loop also runs several queries:

        $sqlstr = "SELECT * FROM user_pred WHERE uprType != 2 AND uprTurn=$turn ORDER BY uprUserTeamIdFK";
        $utmres = mysql_query($sqlstr) or trigger_error($termerror = __FILE__." - ".__LINE__.": ".mysql_error());
        while($utmrow = mysql_fetch_array($utmres, MYSQL_ASSOC)) {
            // some stuff happens here
            // echo memory_get_usage() . " - 1241<br/>\n";
            $sqlstr = "UPDATE user_roundscores SET ursUpdDate=NOW(),ursScore=$score WHERE ursUserTeamIdFK=$userteamid";
            if(!mysql_query($sqlstr)) {
                $err_crit++;
                $cLog->WriteLogFile("Failed to UPDATE user_roundscores record for user $userid - teamuserid: $userteamid\n");
                echo "Failed to UPDATE user_roundscores record for user $userid - teamuserid: $userteamid<br>\n";
                break;
            }
            unset($sqlstr);
            // echo memory_get_usage() . " - 1253<br/>\n";
            // some stuff happens here too
        }

    The UPDATE query never fails. For some reason, between the two calls to memory_get_usage(), some memory gets added. Because the big loop runs about 500,000 times or more, in the end it really adds up to a lot of memory. Is there anything I'm missing here? Could it perhaps be that the memory is not actually added between the two calls, but at another point in the script?

    Edit: some extra info. Before the loop it's at about 5 MB, after the loop about 440 MB, and every update query adds about 250 bytes (the rest of the memory gets added at other places in the loop). The reason I didn't post more of the "other stuff" is that it's about 300 lines of code. I posted this part because it looks to be where the most memory is added.


  • Using JavaCC to infer semantics from a Composite tree

    - by Skice
    Hi all, I am programming (in Java) a very limited symbolic calculus library that manages polynomials, exponentials and expolinomials (sums of elements like "x^n * e^(c x)"). I want the library to be extensible with new analytic forms (trigonometric, etc.) or new kinds of operations (logarithm, domain transformations, etc.), so a Composite pattern that represents the syntactic structure of an expression, together with a bunch of Visitors for the operations, does the job quite well.

    My problem arises when I try to implement operations that depend on the semantics more than on the syntax of the expression (like integrals, for instance: there are a lot of resolution methods for specific classes of functions, but these same classes can be represented with more than a single syntax). So I thought I need something to "parse" the Composite tree to infer its semantics, in order to invoke the right integration method (if any). Someone pointed me to JavaCC, but all the examples I've seen deal only with string parsing, so I don't know if I'm digging in the right direction. Any suggestions? (I hope to have been clear enough!)


  • Are MEF's ComposableParts contracts instance-based?

    - by Dave
    I didn't really know how to phrase the title of my question, so my apologies in advance. I read through parts of the MEF documentation to try to find the answer to my question, but couldn't find it. I'm using ImportMany to allow MEF to create multiple instances of a specific plugin. That plugin Imports several parts, and within calls to a specific instance, it wants these Imports to be singletons. However, what I don't want is for all instances of this plugin to use the same singleton.

    For example, let's say my application ImportManys Blender appliances. Every time I ask for one, I want a different Blender. However, each Blender Imports a ControlPanel, and I want each Blender to have its own ControlPanel. To make things a little more interesting, each Blender can load BlendPrograms, which are also contained within their own assemblies, and MEF takes care of this loading. A BlendProgram might need to access the ControlPanel to get the speed, but I want to ensure that it is accessing the correct ControlPanel (i.e. the one that is associated with the Blender that is associated with the program!). This diagram might clear things up a little bit: (diagram not included in this excerpt)

    As the note in the diagram shows, I believe the confusion could come from an inherently poor design. The BlendProgram shouldn't touch the ControlPanel directly; instead, perhaps the BlendProgram should get the speed via the Blender, which will then delegate the request to its ControlPanel. If this is the case, then I assume the BlendProgram needs a reference to a specific Blender. In order to do this, is the right way to leverage MEF to use an ImportingConstructor for BlendProgram, i.e.

        public class BlendProgram : IBlendProgram
        {
            [ImportingConstructor]
            public BlendProgram(Blender blender) {}
        }

    And if this is the case, how do I know that MEF will use the intended Blender plugin?


  • .NET: Is it possible to get HttpWebRequest to automatically decompress gzip'd responses?

    - by Cheeso
    In this answer, I described how I resorted to wrapping a GZipStream around the response stream of an HttpWebResponse, in order to decompress it. The relevant code looks like this:

        HttpWebRequest hwr = (HttpWebRequest) WebRequest.Create(url);
        hwr.CookieContainer = PersistentCookies.GetCookieContainerForUrl(url);
        hwr.Accept = "text/xml, */*";
        hwr.Headers.Add(HttpRequestHeader.AcceptEncoding, "gzip, deflate");
        hwr.Headers.Add(HttpRequestHeader.AcceptLanguage, "en-us");
        hwr.UserAgent = "My special app";
        hwr.KeepAlive = true;

        var resp = (HttpWebResponse) hwr.GetResponse();
        using(Stream s = resp.GetResponseStream())
        {
            Stream s2 = s;
            if (resp.ContentEncoding.ToLower().Contains("gzip"))
                s2 = new GZipStream(s2, CompressionMode.Decompress);
            else if (resp.ContentEncoding.ToLower().Contains("deflate"))
                s2 = new DeflateStream(s2, CompressionMode.Decompress);
            ... use s2 ...
        }

    Is there a way to get HttpWebResponse to provide a decompressing stream automatically? In other words, a way for me to eliminate the following from the above code:

        Stream s2 = s;
        if (resp.ContentEncoding.ToLower().Contains("gzip"))
            s2 = new GZipStream(s2, CompressionMode.Decompress);
        else if (resp.ContentEncoding.ToLower().Contains("deflate"))
            s2 = new DeflateStream(s2, CompressionMode.Decompress);

    Thanks.
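
    For comparison, the same dispatch-on-Content-Encoding pattern sketched in TypeScript on Node.js, whose standard zlib module plays the role that GZipStream/DeflateStream play above (this illustrates the pattern; it is not an HttpWebResponse feature):

        import * as http from "http";
        import * as zlib from "zlib";
        import { Readable } from "stream";

        http.get("http://example.com/", (res) => {
          let body: Readable = res;
          const enc = String(res.headers["content-encoding"] ?? "").toLowerCase();
          if (enc.includes("gzip")) {
            body = res.pipe(zlib.createGunzip());   // gzip-wrapped payload
          } else if (enc.includes("deflate")) {
            body = res.pipe(zlib.createInflate());  // zlib-wrapped payload
          }
          body.on("data", (chunk) => process.stdout.write(chunk));
        });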


  • JSON VIEW using GROUP_CONCAT question

    - by Dan Beam
    Hey DBAs and overall smart dudes, I have a question for you. We use MySQL VIEWs to format our data as JSON when it's returned (as a BLOB), which is convenient (though not particularly nice on performance, but we already know this). But I can't seem to get a particular query working right now: each row contains NULL in cool_json when it should contain a JSON object built from the values of multiple JOINs. Here's the general idea:

        SELECT CONCAT(
            "{",
                "\"some_list\":[", GROUP_CONCAT( DISTINCT t1.id ), "],",
                "\"other_list\":[", GROUP_CONCAT( DISTINCT t2.id ), "],",
            "}"
        ) cool_json
        FROM table_name tn
        INNER JOIN ( some_table st )
            ON st.some_id = tn.id
        LEFT JOIN ( another_table at, another_one ao, used_multiple_times t1 )
            ON st.id = at.some_id
            AND at.different_id = ao.different_id
            AND ao.different_id = t1.id
        LEFT JOIN ( another_table2 at2, another_one2 ao2, used_multiple_times t2 )
            ON st.id = at2.some_id
            AND at2.different_id = ao2.different_id
            AND ao2.different_id = t2.id
        GROUP BY tn.id
        ORDER BY tn.name

    Anybody know the problem here? Am I missing something I should be grouping by? It was working when I was doing only one LEFT JOIN and one GROUP_CONCAT, but with multiple JOINs / GROUP_CONCATs it's messing up. When I move the GROUP_CONCATs out of the "cool_json" field they work as expected, but I'd like my data formatted as JSON so I can decode it server-side or client-side in one step.
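
    A hedged guess at the culprit, with a TypeScript sketch using the mysql2 client: in MySQL, CONCAT() returns NULL if any argument is NULL, and GROUP_CONCAT() over an empty LEFT JOIN result is NULL, so a single empty list nulls the entire string; wrapping each GROUP_CONCAT in IFNULL(..., '') avoids that. (Separately, the "],", "}" ending leaves a trailing comma, which strict JSON decoders reject.)

        import mysql from "mysql2/promise";

        async function demo() {
          const conn = await mysql.createConnection({ host: "localhost", user: "app", database: "test" });
          // zero input rows: GROUP_CONCAT is NULL, so the bare CONCAT is NULL
          const [bad] = await conn.query(
            "SELECT CONCAT('[', GROUP_CONCAT(x), ']') AS j FROM (SELECT 1 AS x FROM DUAL WHERE FALSE) t");
          // the guarded version still produces valid JSON: '[]'
          const [ok] = await conn.query(
            "SELECT CONCAT('[', IFNULL(GROUP_CONCAT(x), ''), ']') AS j FROM (SELECT 1 AS x FROM DUAL WHERE FALSE) t");
          console.log(bad, ok);
          await conn.end();
        }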


  • PayPal Encrypted Website Payments

    - by John Isaacks
    I am trying to integrate a PayPal Website Payments Standard cart-upload payment type into my shopping cart. I integrated Google Checkout a while back, and I did not find it as confusing as I find PayPal. I am getting info on how to encrypt it from here: https://cms.paypal.com/us/cgi-bin/?&cmd=_render-content&content_ID=developer/e_howto_html_encryptedwebpayments#id08A3I0P017Q

    PayPal says I need to generate a private key and a public certificate using OpenSSL. I went to OpenSSL and downloaded the latest release, which is just a folder containing various files, but I see no application I can use; I'm not sure what to do here. Even if I were to get OpenSSL to generate a private key and public cert, the next step is to download either an MS or Java command-line tool to create the encrypted cart ahead of time with the cart total, tax, etc., which sounds crazy to me, like I am supposed to do this manually prior to every order?? Obviously I do not know beforehand which items the customer is going to buy, so I need this to be done on the fly on my website using PHP. But I am completely lost. There has to be a way to set up dynamic secure cart uploads to PayPal. Can someone please point me in the right direction?


  • How to create a fully lazy singleton for generics

    - by Brendan Vogt
    I have the following code implementation of my generic singleton provider:

        public sealed class Singleton<T> where T : class, new()
        {
            Singleton() { }

            public static T Instance
            {
                get { return SingletonCreator.instance; }
            }

            class SingletonCreator
            {
                static SingletonCreator() { }
                internal static readonly T instance = new T();
            }
        }

    This sample was taken from two articles whose code I merged to get what I wanted: http://www.yoda.arachsys.com/csharp/singleton.html and http://www.codeproject.com/Articles/11111/Generic-Singleton-Provider. This is how I tried to use the code above:

        public class MyClass
        {
            public static IMyInterface Initialize()
            {
                if (Singleton<IMyInterface>.Instance == null)                  // Error 1
                {
                    Singleton<IMyInterface>.Instance = CreateEngineInstance(); // Error 2
                    Singleton<IMyInterface>.Instance.Initialize();
                }
                return Singleton<IMyInterface>.Instance;
            }
        }

    And the interface:

        public interface IMyInterface { }

    The error at Error 1 is:

        'MyProject.IMyInterface' must be a non-abstract type with a public parameterless constructor in order to use it as parameter 'T' in the generic type or method 'MyProject.Singleton<T>'

    The error at Error 2 is:

        Property or indexer 'MyProject.Singleton<MyProject.IMyInterface>.Instance' cannot be assigned to -- it is read only

    How can I fix this so that it is in line with the two articles mentioned above? Any other ideas or suggestions are appreciated.
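
    For comparison, a sketch in TypeScript of one way around both errors: an interface has no parameterless constructor, so register a factory for it once and let the provider create the instance lazily on first use. All names here are hypothetical:

        // Lazy singleton registry keyed by a string token.
        class Singletons {
          private static factories = new Map<string, () => unknown>();
          private static instances = new Map<string, unknown>();

          static register<T>(key: string, factory: () => T): void {
            Singletons.factories.set(key, factory);
          }

          static instance<T>(key: string): T {
            if (!Singletons.instances.has(key)) {
              const factory = Singletons.factories.get(key);
              if (!factory) throw new Error(`no factory registered for ${key}`);
              Singletons.instances.set(key, factory());  // created on first use only
            }
            return Singletons.instances.get(key) as T;
          }
        }

        interface MyInterface { initialize(): void; }
        Singletons.register<MyInterface>("MyInterface", () => ({ initialize() { /* ... */ } }));
        Singletons.instance<MyInterface>("MyInterface").initialize();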


  • Why does stored procedure invalidate SQL Cache Dependency?

    - by Fabio Milheiro
    After many hours, I finally realized that I am working correctly with the Cache object in my ASP.NET application, but my stored procedure stops it from working correctly. This stored procedure works correctly:

        CREATE PROCEDURE [dbo].[ListLanguages]
            @Page INT = 1,
            @ItemsPerPage INT = 10,
            @OrderBy NVARCHAR (100) = 'ID',
            @OrderDirection NVARCHAR(4) = 'DESC'
        AS
        BEGIN
            SELECT ID, [Name], Flag, IsDefault FROM dbo.Languages
        END

    But this one (the one I wanted) doesn't:

        CREATE PROCEDURE [dbo].[ListLanguages]
            @Page INT = 1,
            @ItemsPerPage INT = 10,
            @OrderBy NVARCHAR (100) = 'ID',
            @OrderDirection NVARCHAR(4) = 'DESC',
            @TotalRecords INT OUTPUT
        AS
        BEGIN
            SET @TotalRecords = 10

            EXEC('SELECT ID, Name, Flag, IsDefault FROM
                    (SELECT ROW_NUMBER() OVER (ORDER BY ' + @OrderBy + ' ' + @OrderDirection + ') as Row,
                            ID, Name, Flag, IsDefault
                     FROM dbo.Languages) results
                  WHERE Row BETWEEN ((' + @Page + '-1)*' + @ItemsPerPage + '+1) AND (' + @Page + '*' + @ItemsPerPage + ')')
        END

    I gave the @TotalRecords parameter the value 10 so you can be sure that the problem is not from the COUNT(*) function, which I know is not supported well. Also, when I run it from SQL Server Management Studio, it does exactly what it should do. In the ASP.NET application the results are retrieved correctly; only the cache is somehow unable to work!

    Can you please help? Maybe a hint: I believe that the reason relates to the fact that the column Row generated from ROW_NUMBER is only temporary, and therefore SQL Server is not able to say whether the results have changed or not. That's why HasChanged is always set to true. Does anyone know how to paginate results from SQL Server without using the COUNT or ROW_NUMBER functions?


  • Ruby on Rails: Problem with the select form helper

    - by winter sun
    Hello, I have a form in which users can add their working hours, view them, and edit them (all in one page). When adding working hours, the user must select a project from a dropdown list. In case the action is adding a new hour record, the dropdown field should remain empty (not selected); in case the action is edit, the dropdown field should be selected with the appropriate value. In order to overcome this challenge I wrote the following code:

        <% if params[:id].blank? %>
          <select name="hour[project_id]" id="hour_project_id">
            <option value="nil">Select Project</option>
            <% @projects.each do |project| %>
              <option value="<%= project.id %>"><%= project.name %></option>
            <% end %>
          </select>
        <% else %>
          <%= select('hour', 'project_id', @projects.collect{ |project| [project.name, project.id] }, { :prompt => 'Select Project' }) %>
        <% end %>

    So in the save case I built the dropdown only with HTML, and in the edit case I built it with the collect method. It worked fine until I tried to code the validations. The problem is that when I use the validation

        validates_presence_of :project_id

    it doesn't recognize the hand-written HTML dropdown and doesn't display the error message (it works only for the dropdown built with the collect method). I will deeply appreciate your instructions and help in this matter.


  • How to get the equivalent of the accuracy in the Google Maps Geocoder V3

    - by Scorpi0
    Hi, I want to get geocodes from Google, and I used to do it with V2 of the API. In its JSON, Google sent a pretty good piece of information, the accuracy; reference here: http://code.google.com/intl/fr-FR/apis/maps/documentation/javascript/v2/reference.html#GGeoAddressAccuracy

    In V3, Google doesn't seem to send exactly the same information. There is the "address_components" array, which tends to be bigger when the accuracy is better, but not reliably so. For example, for one request accurate to the street number, the array is of size 8. Another query is accurate only to the route, so less accurate, but the array is still of size 8, because there is a 'sublocality' entry that did not appear in the first case.

    OK: each result has a 'types' field, which holds the 'best' accuracy. These types are listed here: http://code.google.com/intl/fr-FR/apis/maps/documentation/geocoding/#Types But there is no real ordering among them, and if I want only results better than postal_code, I have no clue how to get that. So, how can I get the equivalent of the V2 accuracy, without some dumb and horrible code?
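
    One stand-in for the V2 accuracy, sketched in TypeScript: V3 results also carry geometry.location_type (ROOFTOP, RANGE_INTERPOLATED, GEOMETRIC_CENTER, APPROXIMATE), which is coarser than the V2 scale but does order results by precision. The numeric ranks below are my own, not Google's:

        // Map V3's location_type onto a comparable precision rank.
        const LOCATION_TYPE_RANK: Record<string, number> = {
          ROOFTOP: 4,             // most precise
          RANGE_INTERPOLATED: 3,
          GEOMETRIC_CENTER: 2,
          APPROXIMATE: 1,         // least precise
        };

        function accuracyOf(result: { geometry: { location_type: string } }): number {
          return LOCATION_TYPE_RANK[result.geometry.location_type] ?? 0;
        }

        // e.g. keep only results more precise than an approximate match:
        // results.filter(r => accuracyOf(r) > 1)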


  • ActionScript Tweening Matrix Transform (big problem)

    - by TheDarkIn1978
    I'm attempting to tween the position and angle of a sprite. If I call the functions without tweening, so the sprite appears in one step, it's properly set at the correct coordinates and angle. However, tweening makes it all go crazy. I'm using a rotateAroundInternalPoint matrix, and I assume that tweening it along with the coordinate positions is messing up the results.

    Works fine (without tweening):

        public function curl():void
        {
            imageWidth = 400;
            imageHeight = 600;
            parameters.distance = 0.5;
            parameters.angle = 45;

            backCanvas.x = imageWidth - imageHeight * parameters.distance;
            backCanvas.y = imageHeight - imageHeight * parameters.distance;

            var internalPointMatrix:Matrix = backCanvas.transform.matrix;
            MatrixTransformer.rotateAroundInternalPoint(internalPointMatrix, backCanvas.width * parameters.distance, 0, parameters.angle);
            backCanvas.transform.matrix = internalPointMatrix;
        }

    Doesn't work properly (with tweening):

        public function curlUp():void
        {
            imageWidth = 400;
            imageHeight = 600;
            parameters.distance = 0.5;
            parameters.angle = 45;

            distanceTween = new Tween(parameters, "distance", None.easeNone, 0, distance, 1, true);
            angleTween = new Tween(parameters, "angle", None.easeNone, 0, angle, 1, true);
            angleTween.addEventListener(TweenEvent.MOTION_CHANGE, animateCurl);
        }

        private function animateCurl(evt:TweenEvent):void
        {
            backCanvas.x = imageWidth - imageHeight * parameters.distance;
            backCanvas.y = imageHeight - imageHeight * parameters.distance;

            var internalPointMatrix:Matrix = backCanvas.transform.matrix;
            MatrixTransformer.rotateAroundInternalPoint(internalPointMatrix, backCanvas.width * parameters.distance, 0, parameters.angle - previousAngle);
            backCanvas.transform.matrix = internalPointMatrix;

            previousAngle = parameters.angle;
        }

    In order for the angle to tween properly, I had to add a variable that tracks the last angle setting and subtract it from the new one. However, I still cannot get this tween to end at the same position and angle as the version without tweening. I've been stuck on this problem for a day now, so any help would be greatly appreciated.
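
    A language-neutral sketch of one fix, in TypeScript using the browser's DOMMatrix: instead of applying incremental rotations each tick (which drift as x/y also move), store the sprite's matrix once before the tween starts and recompose the whole transform from the absolute tweened values every frame:

        // Sprite's matrix captured once, before the tween begins.
        const base = new DOMMatrix();

        function composeFrame(angleDeg: number, pivotX: number, pivotY: number): DOMMatrix {
          // rotate around an internal point = translate pivot to origin,
          // rotate, translate back; always built on top of the stored base
          return base
            .translate(pivotX, pivotY)
            .rotate(angleDeg)
            .translate(-pivotX, -pivotY);
        }

        // each tween tick, with absolute tweened values (no previousAngle bookkeeping):
        // const m = composeFrame(parameters.angle, width * parameters.distance, 0);
        // then assign m, plus the tweened x/y, to the sprite's transform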


  • Using Partitions for a large MySQL table

    - by user293594
    An update on my attempts to implement a 505,000,000-row table in MySQL on my MacBook Pro. Following the advice given, I have partitioned my table, tr:

        i INT UNSIGNED NOT NULL,
        j INT UNSIGNED NOT NULL,
        A FLOAT(12,8) NOT NULL,
        nu BIGINT NOT NULL,
        KEY (nu),
        KEY (A)

    with a range on nu. nu ought to be a real number, but I only have 6-d.p. accuracy and the maximum value of nu is 30,000, so I multiplied it by 10^8 and made it a BIGINT; I gather one can't use FLOAT or DOUBLE values to PARTITION a MySQL table. Anyway, I have 15 partitions (p0: nu < 25,000,000,000; p1: nu < 50,000,000,000; etc.).

    I was thinking that this should speed a typical SELECT such as

        SELECT * FROM tr WHERE nu > 95000000000 AND nu < 100000000000 AND A > 1

    up to something of the order of the same query on a table consisting of only the data in the relevant partition (under 30 seconds). But it's taking 30+ minutes to return rows for queries within a partition, and double that if the query spans two (contiguous) partitions. I realize I could just have 15 different tables and query them separately, but is there a way to do this 'automatically' with partitions? Has anyone got any suggestions?


  • How to make sure the non-disclosure agreement is read

    - by Xin Qian Ch'ang
    Every time a user registers on our site we need to show a non-disclosure agreement. In order to continue, the user has to accept it. My issue is that I have the NDA all on one page, and the user does not really read it before accepting (like we all do). What I want is to make sure the user actually reads the NDA before accepting it. What I have now is a simple jQuery check that the user ticks a box and clicks Accept; then it goes to the next page. Here's what I have:

        <script>
        $(document).ready(function() {
            $('#go').click(function() {
                // check if checkbox is checked and go to next page
            });
        });
        </script>

        <div>
            full nda
            <hr>
            <input type=checkbox>
            <input type=button value=go id=go>
        </div>

    Thanks
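
    A common pattern, sketched in TypeScript with plain DOM calls: keep the accept button disabled until the NDA's scroll container has been scrolled to the bottom. It proves exposure, not comprehension; the #nda id is an assumption, since the markup above has no id on the NDA container:

        const nda = document.querySelector<HTMLDivElement>("#nda")!;  // scrollable NDA text
        const go  = document.querySelector<HTMLInputElement>("#go")!;
        go.disabled = true;

        nda.addEventListener("scroll", () => {
          const atBottom = nda.scrollTop + nda.clientHeight >= nda.scrollHeight - 2;
          if (atBottom) go.disabled = false;  // user has at least seen the last line
        });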


  • Method for defining simultaneous has-many and has-one associations between two models in CakePHP?

    - by Hobonium
    One thing with which I have long had problems, within the CakePHP framework, is defining simultaneous hasOne and hasMany relationships between two models. For example:

        BlogEntry hasMany Comment
        BlogEntry hasOne MostRecentComment

    (where MostRecentComment is the Comment with the most recent created field)

    Defining these relationships in the BlogEntry model properties is problematic. CakePHP's ORM implements a has-one relationship as an INNER JOIN, so as soon as there is more than one Comment, BlogEntry::find('all') calls return multiple results per BlogEntry. I've worked around these situations in the past in a few ways.

    Using a model callback (or, sometimes, even in the controller or view!), I've simulated a MostRecentComment with:

        $this->data['MostRecentComment'] = $this->data['Comment'][0];

    This gets ugly fast if, say, I need to order the Comments any way other than by Comment.created. It also doesn't allow Cake's built-in pagination features to sort by MostRecentComment fields (e.g. sort BlogEntry results reverse-chronologically by MostRecentComment.created).

    Alternatively, maintaining an additional foreign key, BlogEntry.most_recent_comment_id. This is annoying to maintain and breaks Cake's ORM: the implication is BlogEntry belongsTo MostRecentComment. It works, but just looks... wrong.

    These solutions left much to be desired, so I sat down with this problem the other day and worked on a better solution. I've posted my eventual solution below, but I'd be thrilled (and maybe just a little mortified) to discover there is some mind-blowingly simple solution that has escaped me this whole time. Or any other solution that meets my criteria:

    - it must be able to sort by MostRecentComment fields at the Model::find level (i.e. not just a massage of the results);
    - it shouldn't require additional fields in the comments or blog_entries tables;
    - it should respect the 'spirit' of the CakePHP ORM.

    (I'm also not sure the title of this question is as concise/informative as it could be.)


  • Algorithm for scoring user activity

    - by ManBugra
    I have an application where users can:

    - write reviews about products
    - add comments to products
    - up/down vote reviews
    - up/down vote comments

    Every up/down vote is recorded in a db table. What I want to do now is create a ranking of the most active users in the last 4 weeks. Of course good reviews should be weighted more than good comments, but also, e.g., 10 good comments should be weighted more than just one good review. Example:

        // reviews created in recent 4 weeks
        // format: [ upVoteCount, downVoteCount ]
        var reviews = [ [120,23], [32,12], [12,0], [23,45] ];

        // comments created in recent 4 weeks
        // format: [ upVoteCount, downVoteCount ]
        var comments = [ [1,2], [322,1], [0,0], [0,45] ];

        // create weight vector
        // format: [ reviewWeight, commentsWeight ]
        var weight = [0.60, 0.40];

        // signature: activities..., activityWeight
        var userActivityScore = score(reviews, comments, weight);

        ... update user table ...

        List<Users> users = "from users u order by u.userActivityScore desc";

    How would a fair scoring function look? How could an implementation of the score() function look? How do I add a weight g to the function so that reviews are weighted more heavily? And how would such a function look if, for example, votes for pictures were added?
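
    One straightforward take, sketched in TypeScript; the formula (net votes with a +1 floor per item, combined via the weight vector) is a design choice of this sketch, not a standard:

        type Votes = [up: number, down: number];

        function listScore(items: Votes[]): number {
          // each item contributes its net votes, but at least 1 for existing,
          // so ten modest comments can outrank a single good review
          return items.reduce((sum, [up, down]) => sum + Math.max(1, up - down), 0);
        }

        function score(reviews: Votes[], comments: Votes[], weight: [number, number]): number {
          return weight[0] * listScore(reviews) + weight[1] * listScore(comments);
        }

        const reviews: Votes[] = [[120, 23], [32, 12], [12, 0], [23, 45]];
        const comments: Votes[] = [[1, 2], [322, 1], [0, 0], [0, 45]];
        console.log(score(reviews, comments, [0.6, 0.4]));
        // adding pictures later = one more list argument plus one more weight entry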


  • Optimal two variable linear regression SQL statement

    - by Dave Jarvis
    Problem

    I am looking to apply the y = mx + b equation (where m is SLOPE, b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are:

        SLOPE = 0.0276653965651912
        INTERCEPT = -57.2338357550468

    SQL Code

        SELECT
          ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
          ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
        FROM (
          SELECT D.AMOUNT, Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE
            -- For a specific city ...
            C.ID = 8590 AND
            -- Find all the stations within a 5 unit radius ...
            SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND
            -- Gather all known years for that station ...
            S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
            -- The data before 1900 is shaky; and insufficient after 2009.
            Y.YEAR BETWEEN 1900 AND 2009 AND
            -- Filtered by all known months ...
            M.YEAR_REF_ID = Y.ID AND
            -- Whittled down by category ...
            M.CATEGORY_ID = '001' AND
            -- Into the valid daily climate data.
            M.ID = D.MONTH_REF_ID AND
            D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR
          ORDER BY Y.YEAR
        ) t

    Data

    The data is visualized here: (chart not included in this excerpt)

    Questions

    1. How do I return the y value against all rows without repeating the same query to collect and collate the data? That is, how do I "reuse" the list of t values?
    2. How would you change the query to eliminate outliers (at an 85% confidence interval)?
    3. The following results (to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)?

        (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228
        (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406

    I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65). Thank you!
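
    A quick cross-check of those numbers, sketched in TypeScript: the standard least-squares formulas, applied to the same (year, amount) pairs the subquery returns, should reproduce SLOPE and INTERCEPT. If they do, the line itself is right and the surprise comes from the data (e.g. outliers):

        // Standard least-squares fit: m = (nΣxy - ΣxΣy) / (nΣx² - (Σx)²), b = (Σy - mΣx) / n
        function linearFit(points: [number, number][]): { m: number; b: number } {
          const n = points.length;
          let sx = 0, sy = 0, sxy = 0, sxx = 0;
          for (const [x, y] of points) {
            sx += x; sy += y; sxy += x * y; sxx += x * x;
          }
          const m = (n * sxy - sx * sy) / (n * sxx - sx * sx);  // slope
          const b = (sy - m * sx) / n;                          // intercept
          return { m, b };
        }

        // y for any year, once m and b are known:
        const predict = (year: number, m: number, b: number): number => m * year + b;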


  • Aspect-Oriented Programming in the OOP world: breaking the rules?

    - by Maksim Kondratyuk
    Hi to all! When I worked on an ASP.NET MVC web site project, I investigated different approaches to validation. Some of them were DataAnnotation validation and the Validation Application Block. They use attributes for setting up validation rules, like this:

        [Required]
        public string Name { get; set; }

    I was confused about how this approach squares with the SRP (single responsibility principle) from the OOP world. Also, I don't like any business logic in business objects; I prefer a "poor business objects" model. But when I decorate my business objects with validation attributes for real requirements, they become ugly (lots of attributes, localization logic, and so on). The idea behind attributes is really simple, but in my opinion the validation decoration should be separated from the object. Perhaps separating validation rules into XML files or into other objects is a solution.

    Another bad side of AOP: problems with unit testing such code. When I decorated some controller actions with custom attributes, for example to import/export TempData between actions or to initialize some required services, I couldn't write proper unit tests for those actions. Do you think that attributes don't break SRP? Or do you just disregard this and consider it the simplest, and not the worst, way?

    P.S. I read some similar articles and discussions, and I just want to put things in proper order.

    P.P.S. Sorry for my "fluent" English :=)


  • Can I copy an entire Magento application to a new server?

    - by Gapton
    I am quite new to Magento, and I have a case where a client has a running ecommerce website built with Magento and running on an Amazon AWS EC2 server. If I can locate the web root (/var/www/) and download everything from it (say via FTP), I should be able to boot up my virtual machine with LAMP installed, put all the files in there, and it should run, right?

    So I did just that, and Magento is giving me an error. I am guessing there are configs somewhere that need to be changed in order to make it work, and perhaps a lot of paths need to be changed, etc. The databases also need to be replicated. What are the typical things I should get right? The database is one; and the EC2 instance uses nginx while I use plain Apache, which I guess won't work straight out of the box, so I probably will have to install nginx and a lot of other stuff. But basically, if I set up the environment right, it should run. Is my assumption correct? Thanks for answering!

