Search Results

Search found 21702 results on 869 pages for 'large objects'.

Page 5/869 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Doing large updates against indexed view

    - by user217136
    We have an indexed view that runs across three large tables. Two of these tables (A & B) are constantly updated with user transactions, and the other table (C) contains product info that needs to be updated once a week. This product table contains over 6 million records. We need this view across these three tables for our core business process and unfortunately we cannot change this aspect. We even had a SQL Server MVP come in to help test under load to make sure we have the most efficient configuration. There is one column in the product table that is used in the view and has to be updated each week. The problem we are now encountering is that as the volume of transactions against tables A & B increases, the weekly update to table C is causing deadlocks. I have tried several different methods to no avail:
    1) I was hoping we could change the view so that table C could be a dirty read "WITH (NOLOCK)", but apparently that functionality is not available with indexed views.
    2) I thought about updating a new column in table C and then just renaming it when the process is done, but you cannot do that because of the dependency in the view.
    3) I also entertained the idea of writing this value to a temporary product table and then running an ALTER statement against the view to have it point to my new table. However, when I did that the indexes on my view were dropped and it took quite a bit of time to recreate them.
    4) We tried to do the weekly update in small chunks (as small as 100 records at a time), but we still run into deadlocks.
    Questions: a) We are using SQL Server 2005. Does SQL Server 2008 have new functionality for indexed views that would help us? Is there now a way to do dirty reads with an indexed view? b) Is there a better approach to altering an existing view to point to a new table? Thanks!
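    A rough sketch of the chunked-update idea from attempt 4, keeping each batch in its own short transaction so locks on the view's index are held only briefly (the table and column names are placeholders, not the poster's schema):
      DECLARE @rows INT;
      SET @rows = 1;
      WHILE @rows > 0
      BEGIN
          BEGIN TRAN;
          UPDATE TOP (500) c
          SET    c.WeeklyValue = s.StagingValue
          FROM   dbo.ProductC AS c
          JOIN   dbo.ProductStaging AS s ON s.ProductId = c.ProductId
          WHERE  c.WeeklyValue <> s.StagingValue;   -- skip rows that are already current
          SET @rows = @@ROWCOUNT;
          COMMIT TRAN;
          WAITFOR DELAY '00:00:00.100';             -- give the A/B transactions room to run
      END
    The WHERE filter keeps batches from rewriting rows whose value has not actually changed, which is often most of them in a weekly refresh.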

    Read the article

  • Managing large binary files with git

    - by pi
    Hi there. I am looking for opinions on how to handle large binary files on which my source code (a web application) depends. We are currently discussing several alternatives:
    1) Copy the binary files by hand. Pro: not sure. Contra: I am strongly against this, as it increases the likelihood of errors when setting up a new site or migrating the old one, and it builds up another hurdle to take.
    2) Manage them all with git. Pro: removes the possibility of 'forgetting' to copy an important file. Contra: bloats the repository, decreases flexibility in managing the code base, and checkouts/clones/etc. will take quite a while.
    3) Separate repositories. Pro: checking out/cloning the source code is as fast as ever, and the images are properly archived in their own repository. Contra: removes the simplicity of having one and only one git repository for the project, and surely introduces some other things I haven't thought about.
    What are your experiences/thoughts regarding this? Also: does anybody have experience with multiple git repositories and managing them in one project? Update: The files are images for a program which generates PDFs with those files in it. The files will not change very often (as in, years) but are very relevant to the program; the program will not work without them. Update 2: I found a really nice screencast on using git-submodule at GitCasts.
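    For reference, the submodule route mentioned in Update 2 boils down to a few commands (repository URLs and paths below are placeholders):
      git submodule add git://example.com/binary-assets.git assets
      git commit -m "Track binary assets as a submodule"
      # on a fresh clone of the main repository:
      git clone git://example.com/webapp.git
      cd webapp
      git submodule update --init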

    Read the article

  • How can I quickly parse large (>10GB) files?

    - by Andrew
    Hi - I have to process text files 10-20 GB in size with the format: field1 field2 field3 field4 field5. I would like to parse the data from field2 of each line into one of several files; which file each line gets pushed into is determined line by line by the value in field4. There are 25 different possible values in field4, and hence 25 different files the data can get parsed into. I have tried using Perl (slow) and awk (faster but still slow) - does anyone have any suggestions or pointers toward alternative approaches? FYI, here is the awk code I was trying to use; note I had to revert to going through the large file 25 times because I wasn't able to keep 25 output files open at once in awk:
      chromosomes=(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25)
      for chr in ${chromosomes[@]}
      do
        awk < my_in_file_here -v pat="$chr" '{if ($4 == pat) for (i = $2; i <= $2+52; i++) print i}' >> my_out_file_"$chr".query
      done
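    One direction worth testing (a sketch, not verified against the poster's data): make a single pass and let awk route each line to its output file itself. GNU awk has no trouble keeping 25 output files open, and close() is available if an open-file limit is ever hit:
      gawk '{ out = "my_out_file_" $4 ".query"
              for (i = $2; i <= $2 + 52; i++) print i >> out
            }' my_in_file_here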

    Read the article

  • Accessing Members of Containing Objects from Contained Objects.

    - by Bunkai.Satori
    If I have several levels of object containment (one object defines and instantiates another object, which defines and instantiates another object...), is it possible to get access to the variables and functions of the upper, containing objects? Example:
      class CObjectOne
      {
      public:
          CObjectOne() { Create(); }
          void Create();
          std::vector<CObjectTwo> vObjectsTwo;
          int nVariableOne;
      };
      void CObjectOne::Create()
      {
          CObjectTwo ObjectTwo(this);
          vObjectsTwo.push_back(ObjectTwo);
      }
      class CObjectTwo
      {
      public:
          CObjectTwo(CObjectOne* pObject) { pObjectOne = pObject; Create(); }
          void Create();
          CObjectOne* GetObjectOne() { return pObjectOne; }
          std::vector<CObjectThree> vObjectsThree;
          CObjectOne* pObjectOne;
          int nVariableTwo;
      };
      void CObjectTwo::Create()
      {
          CObjectThree ObjectThree(this);
          vObjectsThree.push_back(ObjectThree);
      }
      class CObjectThree
      {
      public:
          CObjectThree(CObjectTwo* pObject) { pObjectTwo = pObject; Create(); }
          void Create();
          CObjectTwo* GetObjectTwo() { return pObjectTwo; }
          std::vector<CObjectFour> vObjectsFour;
          CObjectTwo* pObjectTwo;
          int nVariableThree;
      };
      void CObjectThree::Create()
      {
          CObjectFour ObjectFour(this);
          vObjectsFour.push_back(ObjectFour);
      }
      int main()
      {
          CObjectOne myObject1;
      }
    Say that from within CObjectThree I need to access nVariableOne in CObjectOne. I would like to do it as follows:
      int nValue = vObjectsThree[index].GetObjectTwo()->GetObjectOne()->nVariableOne;
    However, after compiling and running my application, I get a memory access violation error. What is wrong with the code above (it is an example and might contain spelling mistakes)? Do I have to create the objects dynamically instead of statically? Is there any other way to access variables stored in containing objects from within contained objects?
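    A guess at the cause, with a minimal sketch of one workaround (the Parent/Child names below are invented, not the poster's classes): each Create builds a local child on the stack, lets the grandchild remember that local's address via this, and then copies the child into the vector; the copy that survives still points at stack objects that have been destroyed, so GetObjectTwo() returns a dangling pointer. Setting parent pointers only on the element that actually lives in the container avoids that:
      #include <vector>

      struct Parent;                             // a forward declaration is enough for a pointer
      struct Child {
          Parent* parent = nullptr;
          int readParentValue() const;           // defined once Parent is complete
      };
      struct Parent {
          std::vector<Child> children;
          int value = 42;
          void create() {
              children.push_back(Child());       // the copy inside the vector is the one that survives
              children.back().parent = this;     // so wire the pointer up after insertion
          }
      };
      int Child::readParentValue() const { return parent->value; }

      int main() {
          Parent p;
          p.create();
          return p.children[0].readParentValue();   // 42, no dangling pointer
      }
    Note that pointers into a vector (e.g. a grandchild pointing at a Child element) are invalidated when the vector reallocates; reserving capacity up front or heap-allocating the children are the usual ways around that.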

    Read the article

  • Debugging XSLT with extension objects in Visual Studio 2010

    - by Alex Ciminian
    I'm currently working on a project that involves a lot of XSLT transformations and I really need a debugger (I have XSLTs that are 1000+ lines long and I didn't write them :-). The project is written in C# and makes use of extension objects: xslArg.AddExtensionObject("urn:<obj>", new <Obj>()); From my knowledge, in this situation Visual Studio is the only tool that can help me debug the transformations step by step. The static debugger is no use because of the extension objects (it throws an error when it reaches elements that reference their namespace). Fortunately, I've found this thread, which gave me a starting point (at least I know it can be done). After searching MSDN, I found the criteria that make stepping into the transform possible. They are listed here. In short:
    - the XML and the XSLT must be loaded via a class that implements the IXmlLineInfo interface (XmlReader & co.);
    - the XML resolver used with XslCompiledTransform is file-based (XmlUriResolver should work);
    - the stylesheet should be on the local machine or on the intranet (?).
    From what I can tell, I fit all these criteria, but it still doesn't work. The relevant code samples are posted below:
      // [...]
      xslTransform = new XslCompiledTransform(true);
      xslTransform.Load(XmlReader.Create(new StringReader(contents)), null, new BaseUriXmlResolver(xslLocalPath));
      // [...]
      // I already had the XML loaded in an XmlDocument,
      // so I have to convert it to an XmlReader
      XmlTextReader r = new XmlTextReader(new StringReader(xmlDoc.OuterXml));
      XsltArgumentList xslArg = new XsltArgumentList();
      xslArg.AddExtensionObject("urn:[...]", new [...]());
      xslTransform.Transform(r, xslArg, context.Response.Output);
    I really don't get what I'm doing wrong. I've checked the interfaces on both XmlReader objects and they implement the required one. Also, BaseUriXmlResolver inherits from XmlUriResolver and the stylesheet is stored locally. The screenshots (not reproduced in this excerpt) show what happens when stepping into the Transform function: first the stylesheet code is visible; then, after stepping through the parameters (on template-match), the second screenshot shows the problem. If anyone has any idea why it doesn't work, or has an alternative way of getting it to work, I'd be much obliged :). Thanks, Alex
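    One thing that may be worth trying (a sketch; the paths and MyExtension are placeholders): load both documents straight from files, so each reader carries both line information and a real base URI - an XmlTextReader over a StringReader implements IXmlLineInfo but has an empty BaseURI:
      var xslt = new XslCompiledTransform(true);               // true = enable debugging
      using (XmlReader style = XmlReader.Create(@"C:\work\transform.xslt"))
          xslt.Load(style, XsltSettings.TrustedXslt, new XmlUrlResolver());

      var args = new XsltArgumentList();
      args.AddExtensionObject("urn:myext", new MyExtension()); // hypothetical extension object

      using (XmlReader input = XmlReader.Create(@"C:\work\input.xml"))
          xslt.Transform(input, args, context.Response.Output);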

    Read the article

  • Converting formCollection array to objects in the controller

    - by bergin
    In my view I have several [n].propertyName array fields. I want to turn the FormCollection fields into objects, myobject[n].propertyName, when they get to the controller. So, for example, the context: View:
      foreach (var item in Model.SSSubjobs.AsEnumerable())
      <%: Html.Hidden("["+c+"].sssj_id", item.sssj_id) %>
      <%: Html.Hidden("["+c+"].order_id", item.order_id) %>
      <%: Html.TextBox("["+c+"].farm", item.farm) %>
      <%: Html.TextBox("["+c+"].field", item.field) %>
      c++;
    Controller: I want to take the above [0].sssj_id and turn it into sssj[0].sssj_id, or a list of sssj objects. My first idea was to look in the FormCollection for things starting with "[", but I have a feeling this isn't right... This is as far as I got:
      public IList<SoilSamplingSubJob> extractSSSJ(FormCollection c)
      {
          IList<SoilSamplingSubJob> sssj_list = null;
          SoilSamplingSubJob sssj;
          var n = 0;
          foreach (var key in c.AllKeys) // iterate through the FormCollection
          {
              var value = c[key];
              if (key.StartsWith("[")) // i.e. turn [0].gps_pk_chx into sssj.gps_pk_chx ???
              {
              }
          }
          return sssj_list;
      }
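    A sketch of another route (assuming the default model binder is acceptable here): name the fields with the parameter prefix, e.g. Html.Hidden("sssj[" + c + "].sssj_id", item.sssj_id), and take a typed list directly in the action, so no FormCollection parsing is needed:
      [HttpPost]
      public ActionResult Save(IList<SoilSamplingSubJob> sssj)
      {
          // sssj[0].sssj_id, sssj[0].farm, ... arrive already bound,
          // provided the indexes are sequential and start at 0
          return RedirectToAction("Index");   // placeholder result
      }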

    Read the article

  • c# Most efficient way to combine two objects

    - by Dested
    I have two objects that can each be represented as an int, float, bool, or string. I need to perform an addition on these two objects, with the result being the same thing C# would produce. For instance, 1+"Foo" would equal the string "1Foo", 2+2.5 would equal the float 4.5, and 3+3 would equal the int 6. Currently I am using the code below, but it seems like incredible overkill. Can anyone simplify it or point me to some way to do this efficiently?
      private object Combine(object o, object o1)
      {
          float left = 0;
          float right = 0;
          bool isInt = false;
          string l = null;
          string r = null;
          if (o is int) { left = (int)o; isInt = true; }
          else if (o is float) { left = (float)o; }
          else if (o is bool) { l = o.ToString(); }
          else { l = (string)o; }
          if (o1 is int) { right = (int)o1; }
          else if (o1 is float) { right = (float)o1; isInt = false; }
          else if (o1 is bool) { r = o1.ToString(); isInt = false; }
          else { r = (string)o1; isInt = false; }
          object rr;
          if (l == null)
          {
              if (r == null) { rr = left + right; }
              else { rr = left + r; }
          }
          else
          {
              if (r == null) { rr = l + right; }
              else { rr = l + r; }
          }
          if (isInt) { return Convert.ToInt32(rr); }
          return rr;
      }
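    A sketch of a shorter version, assuming the operands really are limited to the four types named above:
      private static object Combine(object a, object b)
      {
          if (a is int && b is int)
              return (int)a + (int)b;                        // int + int stays an int
          if (a is string || b is string || a is bool || b is bool)
              return a.ToString() + b.ToString();            // anything involving text becomes a string
          return Convert.ToSingle(a) + Convert.ToSingle(b);  // the remaining numeric mixes become a float
      }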

    Read the article

  • Optional Member Objects

    - by David Relihan
    Okay, so you have a load of methods sprinkled around your system's main class. So you do the right thing and refactor: you create a new class and move the method(s) into it. The new class has a single responsibility and all is right with the world again:
      class Feature
      {
      public:
          Feature() {}
          void doSomething();
          void doSomething1();
          void doSomething2();
      };
    So now your original class has a member variable of type Feature: Feature _feature; which you will call from the main class. Now if you do this many times, you will have many member objects in your main class. These features may or may not be required based on configuration, so in a way it's costly to have all these objects that may or may not be needed. Can anyone suggest a way of improving this? At the moment I plan to test in the newly created class whether the feature is enabled - so when a call is made to a method, it simply returns if the feature is not enabled. I could have a pointer to the object and then only call new if the feature is enabled - but this means I would have to test the pointer before I call a method on it, which would be potentially dangerous and not very readable. Would having an auto_ptr to the object improve things: auto_ptr<Feature> feature; or am I still paying the cost of object invocation even though the object may or may not be required? BTW - I don't think this is premature optimisation - I just want to consider the possibilities.
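    A minimal sketch of the pointer idea, with invented names rather than the original ones: construct the feature only when configuration enables it, and keep the null check in one wrapper method so callers never see it:
      #include <memory>

      class Feature {
      public:
          void doSomething() { /* ... */ }
      };

      class MainSystem {
      public:
          explicit MainSystem(bool featureEnabled)
              : feature_(featureEnabled ? new Feature() : 0) {}   // built only when configured on
          void run() {
              if (feature_.get())            // the single null check, hidden from callers
                  feature_->doSomething();
          }
      private:
          std::auto_ptr<Feature> feature_;   // the era's smart pointer; std::unique_ptr today
      };
    A disabled feature then costs only a null pointer, and the readability concern is contained in one place.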

    Read the article

  • Moving objects colliding when using unalligned collision avoidance (steering)

    - by James Bedford
    I'm having trouble with unaligned collision avoidance for what I think is a rare case. I have set two objects to move towards each other but with a slight offset, so one of the objects is moving slightly upwards and the other is moving slightly downwards. In my unaligned collision avoidance steering algorithm I'm finding the points on one object's forward line and the other object's forward line where these two lines are the closest. If these closest points are within the collision avoidance distance, and if the distance between them is smaller than the sum of the two objects' bounding-sphere radii, then the objects should steer away in the appropriate direction. The problem is that for my case, the closest points on the lines are calculated to be really far away from the actual collision point. This is because the two objects' forward lines diverge from each other as the objects pass. Because of this, no steering takes place, and the two objects partially collide. Does anyone have any suggestions as to how I can correctly calculate the point of collision? Perhaps by somehow taking into account the size of the two objects?
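    One way to take the sizes into account (a sketch; the vector type and names are invented): predict the time of closest approach from the objects' relative motion instead of intersecting the two forward lines, then compare the separation at that time against the sum of the radii:
      struct Vec2 { float x, y; };

      // Time at which two objects moving at constant velocity are closest.
      float timeOfClosestApproach(Vec2 pA, Vec2 vA, Vec2 pB, Vec2 vB)
      {
          Vec2 dp = { pB.x - pA.x, pB.y - pA.y };        // relative position
          Vec2 dv = { vB.x - vA.x, vB.y - vA.y };        // relative velocity
          float dv2 = dv.x * dv.x + dv.y * dv.y;
          if (dv2 < 1e-6f) return 0.0f;                  // effectively parallel, same speed
          float t = -(dp.x * dv.x + dp.y * dv.y) / dv2;  // minimises |dp + dv * t|
          return t > 0.0f ? t : 0.0f;                    // only future approaches matter
      }
      // At that t, if the distance between the two predicted positions is less than
      // radiusA + radiusB, the spheres will overlap and the pair should steer away.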

    Read the article

  • Core Data Model Design Question - Changing "Live" Objects also Changes Saved Objects

    - by mwt
    I'm working on my first Core Data project (on iPhone) and am really liking it. Core Data is cool stuff. I am, however, running into a design difficulty that I'm not sure how to solve, although I imagine it's a fairly common situation. It concerns the data model. For the sake of clarity, I'll use an imaginary football game app as an example to illustrate my question. Say that there are NSManagedObjects called Downs and Plays. Plays function like templates to be used by Downs. The user creates Plays (for example, Bootleg, Button Hook, Slant Route, Sweep, etc.) and fills in the various properties. Plays have a to-many relationship with Downs. For each Down, the user decides which Play to use. When the Down is executed, it uses the Play as its template. After each down is run, it is stored in history. The program remembers all the Downs ever played. So far, so good. This is all working fine. The question I have concerns what happens when the user wants to change the details of a Play. Let's say it originally involved a pass to the left, but the user now wants it to be a pass to the right. Making that change, however, not only affects all the future executions of that Play, but also changes the details of the Plays stored in history. The record of Downs gets "polluted," in effect, because the Play template has been changed. I have been rolling around several possible fixes to this situation, but I imagine the geniuses of SO know much more about how to handle this than I do. Still, the potential fixes I've come up with are:
    1) "Versioning" of Plays. Each change to a Play template actually creates a new, separate Play object with the same name (as far as the user can tell). Underneath the hood, however, it is actually a different Play. This would work, AFAICT, but seems like it could potentially lead to a wild proliferation of Play objects, especially if the user keeps switching back and forth between several versions of the same Play (creating object after object each time the user switches). Yes, the app could check for pre-existing, identical Plays, but... it just seems like a mess.
    2) Have Downs, upon saving, record the details of the Play they used, but not as a Play object. This just seems ridiculous, given that the Play object is there to hold just those details.
    3) Recognize that Play objects are actually fulfilling two functions: one to be a template for a Down, and the other to record what template was used. These two functions have a different relationship with a Down. The first (template) has a to-many relationship, but the second (record) has a one-to-one relationship. This would mean creating a second entity, something like "Play-Template", which would retain the to-many relationship with Downs. Play objects would get reconfigured to have a one-to-one relationship with Downs. A Down would use a Play-Template object for execution, but use the new kind of Play object to store what template was used. It is this change from a to-many relationship to a one-to-one relationship that represents the crux of the problem.
    Even writing this question out has helped me get clearer. I think something like solution 3 is the answer. However, if anyone has a better idea or even just a confirmation that I'm on the right track, that would be helpful. (Remember, I'm not really making a football game, it's just faster/easier to use a metaphor everyone understands.) Thanks.

    Read the article

  • XML: Process large data

    - by Atmocreations
    Hello. What XML parser do you recommend for the following purpose? The XML file (formatted, containing whitespace) is around 800 MB. It mostly contains three types of tag (let's call them n, w and r). They have an attribute called id which I'd have to search for, as fast as possible. Removing attributes I don't need could save around 30%, maybe a bit more. First part, for optimizing the second part: is there any good tool (command line, Linux and Windows if possible) to easily remove unused attributes in certain tags? I know that XSLT could be used. Or are there any easy alternatives? Also, I could split it into three files, one for each tag, to gain speed for later parsing... Speed is not too important for this preparation of the data; of course it would be nice if it took minutes rather than hours. Second part: once I have the data prepared, be it shortened or not, I should be able to search for the id attribute I mentioned; this part is time-critical. Estimations using wc -l tell me that there are around 3M n-tags and around 418K w-tags. The latter can contain up to approximately 20 subtags each. W-tags also contain some, but those would be stripped away. "All I have to do" is navigate between tags containing certain id attributes. Some tags have references to other ids, therefore giving me a tree, maybe even a graph. The original data is big (as mentioned), but the result set shouldn't be too big, as I only have to pick out certain elements. Now the questions: what XML parsing library should I use for this kind of processing? I would use Java 6 in a first instance, with porting it to BlackBerry in mind. Might it be useful to just create a flat file indexing the ids and pointing to an offset in the file? Is it even necessary to do the optimizations mentioned in the first part? Or are there parsers known to be just as fast on the original data? A little note: to test, I took the id on the very last line of the file and searched for it using grep. This took around a minute on a Core 2 Duo. What happens if the file grows even bigger, let's say 5 GB? I appreciate any advice or recommendation. Thank you all very much in advance and regards
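    For the time-critical part, a streaming pull parser is the usual fit at this size. A minimal sketch using StAX (javax.xml.stream, included in Java 6), scanning for a matching id attribute; the tag names are the n/w/r placeholders from the question:
      import javax.xml.stream.XMLInputFactory;
      import javax.xml.stream.XMLStreamConstants;
      import javax.xml.stream.XMLStreamReader;
      import java.io.FileInputStream;

      public class IdScan {
          public static void main(String[] args) throws Exception {
              String wantedId = args[1];
              XMLStreamReader r = XMLInputFactory.newInstance()
                      .createXMLStreamReader(new FileInputStream(args[0]));
              while (r.hasNext()) {
                  if (r.next() == XMLStreamConstants.START_ELEMENT) {
                      String name = r.getLocalName();                 // "n", "w" or "r"
                      String id = r.getAttributeValue(null, "id");
                      if (wantedId.equals(id)) {
                          System.out.println("found <" + name + "> with id " + id);
                      }
                  }
              }
              r.close();
          }
      }
    Because nothing is kept in memory beyond the current event, the same approach could also feed a one-off pass that writes the flat id-to-offset index mentioned above.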

    Read the article

  • Large file upload into WSS v3

    - by Rubens Farias
    I built a WSSv3 application which uploads files in small chunks; as each piece of data arrives, I temporarily keep it in a SQL 2005 image data type field, for performance reasons**. The problem comes when the upload ends: I need to move the data from SQL Server to a SharePoint document library through the WSSv3 object model. Right now, I can think of two approaches:
      SPFileCollection.Add(string, (byte[])reader[0]); // OutOfMemoryException
    and
      SPFile file = folder.Files.Add("filename", new byte[]{ });
      using (Stream stream = file.OpenBinaryStream())
      {
          // ... init vars and stuff ...
          while ((bytes = reader.GetBytes(0, offset, buffer, 0, BUFFER_SIZE)) > 0)
          {
              stream.Write(buffer, 0, (int)bytes); // Timeout issues
          }
          file.SaveBinary(stream);
      }
    Is there any other way to complete this task successfully? ** Performance reasons: if you try to write every chunk directly to SharePoint, you'll notice a performance degradation as the file grows (100 MB+).

    Read the article

  • C++ Pointers, objects, etc

    - by Zeee
    It may be a bit confusing, but... Let's say I have a vector in a class to store objects, something like vector<Operator>, and I have methods on my class that will later return Operators from this vector. Now, if any of my methods receives an Operator, will I have any trouble inserting it directly into the vector? Or should I use the copy constructor to create a new Operator and put that new one in the vector?
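    A small sketch of the point in question (Operator's contents are invented here): push_back stores its own copy of the argument, so nothing extra is needed as long as Operator copies correctly:
      #include <string>
      #include <vector>

      struct Operator {
          std::string name;   // hypothetical member
      };

      class Container {
      public:
          void add(const Operator& op) { ops_.push_back(op); }    // the vector keeps its own copy of op
          Operator get(std::size_t i) const { return ops_[i]; }   // returned by value: another copy
      private:
          std::vector<Operator> ops_;
      };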

    Read the article

  • Python: slicing a very large binary file

    - by Duncan Tait
    Say I have a binary file of 12GB and I want to slice 8GB out of the middle of it. I know the position indices I want to cut between. How do I do this? Obviously 12GB won't fit into memory, that's fine, but 8GB won't either... which I thought was fine, but it appears binary data doesn't like being handled in chunks! I was appending 10MB at a time to a new binary file and there are discontinuities at the edges of each 10MB chunk in the new file. Is there a Pythonic way of doing this easily?
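    A sketch of the chunked copy (file names and offsets are placeholders); one thing worth double-checking in the failing version is that both files are opened with the 'b' flag, since text mode on some platforms alters byte sequences and produces exactly this kind of boundary corruption:
      # Copy the byte range in binary mode, one chunk at a time,
      # so nothing close to 8 GB is ever held in memory.
      CHUNK = 10 * 1024 * 1024
      start, end = 2 * 10**9, 10 * 10**9      # hypothetical cut points

      with open("big.bin", "rb") as src, open("slice.bin", "wb") as dst:
          src.seek(start)
          remaining = end - start
          while remaining > 0:
              data = src.read(min(CHUNK, remaining))
              if not data:                    # reached EOF early
                  break
              dst.write(data)
              remaining -= len(data)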

    Read the article

  • Use php to zip large files

    - by Joseph
    Hi, I have a PHP form with a bunch of checkboxes that each correspond to a file. Once a user selects which checkboxes (files) they want, the script zips up the files and forces a download. I got a simple PHP zip-and-force-download to work, but when one of the files is huge, or if someone, let's say, selects the whole list to zip up and download, my server errors out. I understand that I can raise the server's limits, but are there any other ways? Thanks!
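    Since the usual culprits at these sizes are PHP's memory and execution-time limits, one direction (a sketch; $selectedFiles and the file paths are placeholders and assumed to be validated) is to build the archive in a temporary file with ZipArchive and stream it out in small reads:
      $zip = new ZipArchive();
      $zipPath = tempnam(sys_get_temp_dir(), 'dl');
      $zip->open($zipPath, ZipArchive::CREATE | ZipArchive::OVERWRITE);
      foreach ($selectedFiles as $path) {
          $zip->addFile($path, basename($path));   // referenced now, read when close() writes the archive
      }
      $zip->close();

      header('Content-Type: application/zip');
      header('Content-Length: ' . filesize($zipPath));
      header('Content-Disposition: attachment; filename="files.zip"');
      $fh = fopen($zipPath, 'rb');
      while (!feof($fh)) {                         // stream the archive in small pieces
          echo fread($fh, 8192);
          flush();
      }
      fclose($fh);
      unlink($zipPath);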

    Read the article

  • Why do transfer objects need to implement Serializable?

    - by smaye81
    I realized today that I have blindly just followed this requirement for years without ever really asking why. Today, I ran across a NotSerializableException with a model object I created from scratch and I realized enough is enough. I was told this was because of session replication between load-balanced servers, but I know I've seen other objects at session scope that do not implement Serializable. Is this the real reason?

    Read the article

  • Reading a large file into Perl array of arrays and manipulating the output for different purposes

    - by Brian D.
    Hello, I am relatively new to Perl and have only used it for converting small files into different formats and feeding data between programs. Now, I need to step it up a little. I have a file of DNA data that is 5,905 lines long, with 32 fields per line. The fields are not delimited by anything and vary in length within the line, but each field is the same size on all 5,905 lines. I need each line fed into a separate array from the file, and each field within the line stored as its own variable. I am having no problems storing one line, but I am having difficulties storing each line successively through the entire file. This is how I separate the first line of the full array into individual variables:
      my $SampleID = substr("@HorseArray", 0, 7);
      my $PopulationID = substr("@HorseArray", 9, 4);
      my $Allele1A = substr("@HorseArray", 14, 3);
      my $Allele1B = substr("@HorseArray", 17, 3);
      my $Allele2A = substr("@HorseArray", 21, 3);
      my $Allele2B = substr("@HorseArray", 24, 3);
      ...etc.
    My issues are: 1) I need to store each of the 5,905 lines as a separate array. 2) I need to be able to reference each line based on the sample ID, or a group of lines based on population ID, and sort them. I can sort and manipulate the data fine once it is defined in variables; I am just having trouble constructing a multidimensional array with each of these fields so I can reference each line at will. Any help or direction is much appreciated. I've pored over the Q&A sections on here, but have not found the answer to my questions yet. Thanks!! -Brian
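    A sketch of one common shape for this (the file name is a placeholder, and only the first few field offsets are shown): build a hash of records keyed by sample ID and, in parallel, a hash of arrays keyed by population ID, so lines can be looked up either way:
      my (%by_sample, %by_population);
      open my $fh, '<', 'horse_data.txt' or die "open: $!";
      while (my $line = <$fh>) {
          chomp $line;
          my %rec = (
              sample_id     => substr($line, 0, 7),
              population_id => substr($line, 9, 4),
              allele_1a     => substr($line, 14, 3),
              allele_1b     => substr($line, 17, 3),
              # ... the remaining fields follow the same pattern ...
          );
          $by_sample{ $rec{sample_id} } = \%rec;                   # look up a line by sample ID
          push @{ $by_population{ $rec{population_id} } }, \%rec;  # group lines by population ID
      }
      close $fh;
    Sorting a population's lines is then a sort over @{ $by_population{$pop} } on whichever field is needed.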

    Read the article

  • Auto Generate Objects in DBIx::Class ORM in Perl

    - by Sam
    Hello, I started learning DBIx::Class and I have reached the point where you have to create the classes that represent tables. Should these classes be created manually (hard-coding all the fields and relationships...), or is there a way to generate them automatically from the database schema? I read something about loaders, but I do not know how they are really used. Please advise. Thanks
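    The loaders mentioned refer to DBIx::Class::Schema::Loader, which introspects an existing database and writes the Result classes for you. A sketch of both ways to invoke it (schema name and connection details are placeholders):
      dbicdump -o dump_directory=./lib \
               MyApp::Schema \
               'dbi:mysql:database=mydb' username password
    or, from Perl:
      use DBIx::Class::Schema::Loader qw(make_schema_at);
      make_schema_at(
          'MyApp::Schema',
          { dump_directory => './lib' },
          [ 'dbi:mysql:database=mydb', 'username', 'password' ],
      );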

    Read the article

  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have an auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors and string constants that shall be pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling this problem, I found that using the command-line switches --param ggc-min-expand=0 --param ggc-min-heapsize=4096 may solve the problem. They, however, only seem to work when optimization is turned on. 1) Is this really the solution that I am looking for? 2) Or is there a faster, better (compiling takes ages with these options activated) way to do this? Best wishes, Alexander
    Update: Thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file that I was trying to compile was so big, it still crashed, only later. In a way, this behaviour is really interesting, as there is not much to optimize in such a setting -- what does GCC do behind the scenes that costs so much memory? (I compiled with all optimizations deactivated as well and got the same results.) The solution that I switched to now is reading in the original data from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than having to do this in C++. However, getting this running under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and it seems that some of the problems I had disappeared when I manually set the output format to pe-i386. The symbols in the object file are by default named after the file name, e.g. converting the file inbuilt_training_data.bin would result in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web which claim that these symbols should be declared as extern char _binary_inbuilt_training_data_bin_start;, but this does not seem to be right -- only extern char binary_inbuilt_training_data_bin_start; worked for me.
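    For reference, the objcopy route described in the update comes down to something like this (a sketch; the exact -O/-B flags may need adjusting for the toolchain in use):
      objcopy -I binary -O pe-i386 -B i386 inbuilt_training_data.bin inbuilt_training_data.o
    and on the C++ side, using the symbol names that worked for the poster:
      extern char binary_inbuilt_training_data_bin_start;
      extern char binary_inbuilt_training_data_bin_end;

      const char* data = &binary_inbuilt_training_data_bin_start;
      long size = &binary_inbuilt_training_data_bin_end
                - &binary_inbuilt_training_data_bin_start;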

    Read the article

  • ado.net slow updating large tables

    - by brett
    The problem: 100,000+ name and address records in an Access table (2003). I need to iterate through the table and update the detail with the output from a third-party DLL. I currently use ADO, and it works at an acceptable speed (less than 5 minutes on a network share). We will soon need to update to Access 2007 and its non-Jet accdb format to maintain compatibility with clients. I've tried using ADO.NET DataSets, but updating the records takes hours! We process 5-10 of these tables per day, so this cannot be the solution. Any ideas on the fastest way to update individual records using ADO.NET? Surely we didn't take such a huge backward step with ADO.NET? Any help would be appreciated.
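    One pattern that often closes the gap (a sketch; the connection string, table, columns and the third-party call are placeholders): skip the DataSet/DataAdapter round trip and reuse a single parameterised UPDATE inside one transaction:
      using (var conn = new OleDbConnection(connectionString))
      {
          conn.Open();
          using (OleDbTransaction tx = conn.BeginTransaction())
          using (var cmd = new OleDbCommand(
              "UPDATE Contacts SET Detail = ? WHERE ID = ?", conn, tx))
          {
              OleDbParameter pDetail = cmd.Parameters.Add("Detail", OleDbType.VarWChar, 255);
              OleDbParameter pId     = cmd.Parameters.Add("ID", OleDbType.Integer);
              foreach (var row in rowsToUpdate)                    // rowsToUpdate is hypothetical
              {
                  pDetail.Value = ThirdParty.Process(row.Name, row.Address);  // placeholder DLL call
                  pId.Value     = row.Id;
                  cmd.ExecuteNonQuery();                           // one prepared command, reused per row
              }
              tx.Commit();                                         // a single transaction, not one per row
          }
      }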

    Read the article

  • Are large include files like iostream efficient? (C++)

    - by Keand64
    iostream, together with all of the files it includes, the files those include, and so on, adds up to about 3000 lines. Consider the hello world program, which needs no more functionality than to print something to the screen:
      #include <iostream> // +3000 lines right there
      int main()
      {
          std::cout << "Hello, World!";
          return 0;
      }
    This should be a very simple piece of code, but iostream adds 3000+ lines to a marginal piece of code. So, are these 3000+ lines of code really needed simply to display a single line on the screen, and if not, do they create a less efficient program than if I simply copied the relevant lines into the code?

    Read the article

  • Using Partitions for a large MySQL table

    - by user293594
    An update on my attempts to implement a 505,000,000-row table in MySQL on my MacBook Pro: following the advice given, I have partitioned my table, tr:
      i  INT UNSIGNED NOT NULL,
      j  INT UNSIGNED NOT NULL,
      A  FLOAT(12,8) NOT NULL,
      nu BIGINT NOT NULL,
      KEY (nu),
      KEY (A)
    with a range on nu. nu ought to be a real number, but because I only have 6-d.p. accuracy and the maximum value of nu is 30,000, I multiplied it by 10^8 and made it a BIGINT - I gather one can't use FLOAT or DOUBLE values to PARTITION a MySQL table. Anyway, I have 15 partitions (p0: nu<25,000,000,000, p1: nu<50,000,000,000, etc.). I was thinking that this should speed up a typical SELECT:
      SELECT * FROM tr WHERE nu > 95000000000 AND nu < 100000000000 AND A > 1
    to something of the order of the same query on a table consisting of only the data in the relevant partition (<30 secs). But it's taking 30 minutes or more to return rows for queries within a partition, and double that if the query is for rows spanning two (contiguous) partitions. I realise I could just have 15 different tables and query them separately, but is there a way to do this 'automatically' with partitions? Has anyone got any suggestions?
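    One quick check (a sketch in MySQL 5.1-era syntax) is whether the optimizer really prunes the range query down to a single partition; if it doesn't, every partition gets scanned and timings like those above would follow:
      EXPLAIN PARTITIONS
      SELECT * FROM tr
      WHERE nu > 95000000000 AND nu < 100000000000 AND A > 1;
    The partitions column of the output should list only the one partition covering that nu range.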

    Read the article

  • Managing many objects at once.

    - by Jeff
    Hi, I want to find a way to efficiently keep track of a lot of objects at once. One practical example I can think of would be a particle system. How are hundreds of particles kept track of? I think I'm on the right track, I found the term 'instancing' and I also learned about flyweights. Hopefully somebody can shed some light on this and share some techniques with me. Thanks.

    Read the article
