Search Results

Search found 36925 results on 1477 pages for 'large xml document'.


  • Linking to a large-address-aware DLL

    - by Canopus
    Suppose I have a DLL which is built with the LARGEADDRESSAWARE linker flag set. Now I have an application dynamically linking to this DLL. Does this make my application LARGEADDRESSAWARE? If not, does it make sense to set this flag on a DLL at all?


  • Generate Spring bean definition from a Java object

    - by joeslice
    Suppose I have a bean defined in Spring:

        <bean id="neatBean" class="com..." abstract="true">...</bean>

    Then we have many clients, each of which has a slightly different configuration for their 'neatBean'. The old way we did it was to have a new file for each client (e.g., clientX_NeatFeature.xml) that contained a bunch of beans for that client (these are hand-edited and part of the code base):

        <bean id="clientXNeatBean" parent="neatBean">
            <property id="whatever" value="something"/>
        </bean>

    Now, I want to have a UI where we can edit and redefine a client's neatBean on the fly. My question is: given a neatBean, and a UI that can 'override' properties of this bean, what would be a straightforward way to serialize this to an XML file as we do [manually] today? For example, if the user set property whatever to "17" for client Y, I'd want to generate:

        <bean id="clientYNeatBean" parent="neatBean">
            <property id="whatever" value="17"/>
        </bean>

    Note that moving this configuration to a different format (e.g., database, other-schema'd XML) is an option, but not really an answer to the question at hand.
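
    Since the target format is just a small parent/child bean definition, one option is to emit it with the JDK's built-in DOM and Transformer APIs. The following is only a rough sketch of that idea: the BeanOverrideWriter class, the Map of overrides, and the use of name= on <property> (Spring's actual attribute, whereas the snippets above use id=) are illustrative assumptions, not a drop-in implementation.

        import java.io.StringWriter;
        import java.util.LinkedHashMap;
        import java.util.Map;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.OutputKeys;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;

        public class BeanOverrideWriter {
            // Builds a <bean id="..." parent="..."> definition with one <property> per overridden value.
            public static String toBeanXml(String beanId, String parentId,
                                           Map<String, String> overrides) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().newDocument();
                Element bean = doc.createElement("bean");
                bean.setAttribute("id", beanId);
                bean.setAttribute("parent", parentId);
                doc.appendChild(bean);
                for (Map.Entry<String, String> e : overrides.entrySet()) {
                    Element prop = doc.createElement("property");
                    prop.setAttribute("name", e.getKey());
                    prop.setAttribute("value", e.getValue());
                    bean.appendChild(prop);
                }
                Transformer t = TransformerFactory.newInstance().newTransformer();
                t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
                t.setOutputProperty(OutputKeys.INDENT, "yes");
                StringWriter out = new StringWriter();
                t.transform(new DOMSource(doc), new StreamResult(out));
                return out.toString();
            }

            public static void main(String[] args) throws Exception {
                Map<String, String> overrides = new LinkedHashMap<>();
                overrides.put("whatever", "17");
                System.out.println(toBeanXml("clientYNeatBean", "neatBean", overrides));
            }
        }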


  • Question creating PDF document in Zend Framework

    - by deaddancer
    I need to take a ZF rendered view and create a PDF that should look pretty much exactly the same, and email it. The major issue I have right now is getting the HTML created by the view into a string that I can then process with the Zend_PDF::parse method. The view I need to turn into a PDF is the result of a posted form. I've tried grabbing the contents of ob_get_contents into a string after a successful post, but for some reason it's not in there. Should I press on with this angle? Any help would be greatly appreciated!


  • Declaring two large 2D arrays gives a segmentation fault

    - by pfdevil
    Hello, I'm trying to declare and allocate memory for two 2D arrays. However, when trying to assign values to itemFeatureQ[39][16816] I get a segmentation fault. I can't understand it, since I have 2 GB of RAM and am only using about 19 MB on the heap. Here is the code:

        double** reserveMemory(int rows, int columns)
        {
            double **array;
            int i;

            array = (double**) malloc(rows * sizeof(double *));
            if(array == NULL) {
                fprintf(stderr, "out of memory\n");
                return NULL;
            }
            for(i = 0; i < rows; i++) {
                array[i] = (double*) malloc(columns * sizeof(double *));
                if(array == NULL) {
                    fprintf(stderr, "out of memory\n");
                    return NULL;
                }
            }
            return array;
        }

        void populateUserFeatureP(double **userFeatureP)
        {
            int x, y;
            for(x = 0; x < CUSTOMERS; x++) {
                for(y = 0; y < FEATURES; y++) {
                    userFeatureP[x][y] = 0;
                }
            }
        }

        void populateItemFeatureQ(double **itemFeatureQ)
        {
            int x, y;
            for(x = 0; x < FEATURES; x++) {
                for(y = 0; y < MOVIES; y++) {
                    printf("(%d,%d)\n", x, y);
                    itemFeatureQ[x][y] = 0;
                }
            }
        }

        int main(int argc, char *argv[])
        {
            double **userFeatureP = reserveMemory(480189, 40);
            double **itemFeatureQ = reserveMemory(40, 17770);
            populateItemFeatureQ(itemFeatureQ);
            populateUserFeatureP(userFeatureP);
            return 0;
        }


  • Prevent RegEx Hang on Large Matches...

    - by developerjay
    This is a great regular expression for dates... however, it hangs indefinitely on one page I tried. I wanted to try this page ( http://pleac.sourceforge.net/pleac%5Fpython/datesandtimes.html ) because it has lots of dates on it and I want to grab all of them. I don't understand why it hangs there when it doesn't on other pages. Why is my regexp hanging, and/or how could I clean it up to make it more efficient?

    Python code:

        monthnames = "(?:Jan\w*|Feb\w*|Mar\w*|Apr\w*|May|Jun\w?|Jul\w?|Aug\w*|Sep\w*|Oct\w*|Nov(?:ember)?|Dec\w*)"
        pattern1 = re.compile(r"(\d{1,4}[\/\\\-]+\d{1,2}[\/\\\-]+\d{2,4})")
        pattern4 = re.compile(r"(?:[\d]*[\,\.\ \-]+)*%s(?:[\,\.\ \-]+[\d]+[stndrh]*)+[:\d]*[\ ]?(PM)?(AM)?([\ \-\+\d]{4,7}|[UTCESTGMT\ ]{2,4})*" % monthnames, re.I)
        patterns = [pattern4, pattern1]
        for pattern in patterns:
            print re.findall(pattern, s)

    By the way, when I say I'm trying it against this site, I mean I'm running it against the page's HTML source.


  • How can I execute large MySQL queries fast?

    - by testkhan
    I have 4 MySQL tables and a single query that JOINs across them, which I request via jQuery AJAX. It currently takes far too long, about 1-3 minutes, while I want it to execute in about 2-5 seconds on average. Is there any special way to execute such queries fast?


  • Permissions for Large Variables to Be Sent Via Stored Procedures (SQL Server)

    - by Joe Majewski
    I can't figure out a way to allow more than 4000 bytes to be received at once via a call to a stored procedure. I am storing images in the table that are around 15 - 20 kilobytes each, but upon getting them and displaying them to the page, they are always exactly 3.91 KB in size (or 4000 bytes). Do stored procedures have a limit on how much data can be sent at once? I double-checked my data, and I am indeed only receiving the first 4000 characters from the varbinary(MAX) field. Is there a permission setting to allow more than 4k bytes at once?


  • SQL select from a large number of IDs

    - by Claudiu
    I have a table, Foo. I run a query on Foo to get the ids from a subset of Foo. I then want to run a more complicated set of queries, but only on those IDs. Is there an efficient way to do this? The best I can think of is creating a query such as:

        SELECT ...          -- complicated stuff
        WHERE ...           -- more stuff
          AND id IN (1, 2, 3, 9, 413, 4324, ..., 939393)

    That is, I construct a huge "IN" clause. Is this efficient? Is there a more efficient way of doing this, or is the only way to JOIN with the initial query that gets the IDs? If it helps, I'm using SQLObject to connect to a PostgreSQL database, and I have access to the cursor that executed the query to get all the IDs.


  • Empty R environment becomes large file when saved

    - by user1052019
    I'm getting behaviour I don't understand when saving environments. The code below demonstrates the problem. I would have expected the two files (far-too-big.RData and right-size.RData) to be the same size, and also very small, because the environments they contain are empty. In fact, far-too-big.RData ends up the same size as bigfile.RData. I get the same results using 2.14.1 and 2.15.2, both on WinXP 5.1 SP3. Can anyone explain why this is happening? Thanks.

        a <- matrix(runif(1000000, 0, 1), ncol=1000)
        save(a, file="c:/temp/bigfile.RData")

        test <- function() {
            load("c:/temp/bigfile.RData")
            test <- new.env()
            save(test, file="c:/temp/far-too-big.RData")
            test1 <- new.env(parent=globalenv())
            save(test1, file="c:/temp/right-size.RData")
        }
        test()


  • Intersection() and Except() is too slow with large collections of custom objects

    - by Theo
    I am importing data from another database. My process imports data from a remote DB into a List<DataModel> named remoteData and also imports data from the local DB into a List<DataModel> named localData. I am then using LINQ to create a list of records that are different, so that I can update the local DB to match the data pulled from the remote DB, like this:

        var outdatedData = this.localData.Intersect(this.remoteData, new OutdatedDataComparer()).ToList();

    I am then using LINQ to create a list of records that no longer exist in remoteData but do exist in localData, so that I can delete them from the local database, like this:

        var oldData = this.localData.Except(this.remoteData, new MatchingDataComparer()).ToList();

    I am then using LINQ to do the opposite of the above to add the new data to the local database, like this:

        var newData = this.remoteData.Except(this.localData, new MatchingDataComparer()).ToList();

    Each collection imports about 70k records, and each of the 3 LINQ operations takes between 5 and 10 minutes to complete. How can I make this faster? Here is the object the collections are using:

        internal class DataModel
        {
            public string Key1 { get; set; }
            public string Key2 { get; set; }
            public string Value1 { get; set; }
            public string Value2 { get; set; }
            public byte? Value3 { get; set; }
        }

    The comparer used to check for outdated records:

        class OutdatedDataComparer : IEqualityComparer<DataModel>
        {
            public bool Equals(DataModel x, DataModel y)
            {
                var e = string.Equals(x.Key1, y.Key1)
                    && string.Equals(x.Key2, y.Key2)
                    && (
                        !string.Equals(x.Value1, y.Value1)
                        || !string.Equals(x.Value2, y.Value2)
                        || x.Value3 != y.Value3
                    );
                return e;
            }

            public int GetHashCode(DataModel obj) { return 0; }
        }

    The comparer used to find old and new records:

        internal class MatchingDataComparer : IEqualityComparer<DataModel>
        {
            public bool Equals(DataModel x, DataModel y)
            {
                return string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2);
            }

            public int GetHashCode(DataModel obj) { return 0; }
        }


  • Best way to support large dropdowns

    - by JustAProgrammer
    Say I have a report that can be restricted by specifying some value in a dropdown. This dropdown list references a table with 30,000 records, which I don't think is feasible to populate a dropdown with! So, what is the best way to give the user the ability to select a value in this situation? These values do not really have categories, and even if I subdivided them (with some kind of nested dropdown) by the first letter of the value, that might still leave a few thousand entries. What's the best way to deal with this?


  • Read a large result set in chunks from mysql

    - by ripper234
    I am trying to read a huge result set from MySQL. Reading it in a straightforward manner didn't work, as MySQL tries to return all the results together, which times out. I found the following piece of code, which tells MySQL to stream the results back one row at a time:

        stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                                    java.sql.ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(Integer.MIN_VALUE);

    Can I read a chunk at a time instead of one row at a time? I've tried setting the fetch size to a different value, but it doesn't work.
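
    If chunking on the SQL side is acceptable instead of tuning the JDBC fetch size, one common pattern is keyset pagination: repeatedly select the next batch of rows after the last id seen. A rough sketch of that approach follows; the table name big_table, the id and payload columns, the chunk size, and the connection details are all illustrative assumptions.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class ChunkedReader {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details.
                Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost/mydb", "user", "password");
                final int chunkSize = 10_000;
                long lastId = 0;
                boolean more = true;
                // Keyset pagination: each query fetches the next chunk after the last seen id,
                // so the server never has to materialize the whole result set at once.
                PreparedStatement ps = conn.prepareStatement(
                        "SELECT id, payload FROM big_table WHERE id > ? ORDER BY id LIMIT ?");
                while (more) {
                    ps.setLong(1, lastId);
                    ps.setInt(2, chunkSize);
                    more = false;
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            lastId = rs.getLong("id");
                            more = true;
                            // process rs.getString("payload") here ...
                        }
                    }
                }
                ps.close();
                conn.close();
            }
        }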


  • Ignore document style rules in one element.

    - by panzi
    I'm writing a Greasemonkey script that adds sticky notes to websites. Because some websites use pretty strange style rules, the sticky notes sometimes end up messed up (or at least not looking like I want them to look). Is there a way to say "under this element, do not apply any generic style rules"? So that rules associated with tag names are not applied, but rules associated with certain classes and IDs still are. Or does anyone have a better idea on how to ensure that only my styles are applied to the sticky notes?


  • Best way to get distinct values from large table

    - by derivation
    I have a db table with about 10 or so columns, two of which are month and year. The table has about 250k rows now, and we expect it to grow by about 100-150k records a month. A lot of queries involve the month and year columns (e.g., all records from March 2010), so we frequently need to get the available month and year combinations (i.e., do we have records for April 2010?). A coworker thinks that we should have a separate table from our main one that only contains the months and years we have data for. We only add records to our main table once a month, so it would just be a small update at the end of our scripts to add the new entry to this second table. This second table would be queried whenever we need to find the available month/year entries in the first table. This solution feels kludgy to me and a violation of DRY. What do you think is the correct way of solving this problem? Is there a better way than having two tables?


  • Deserializing child elements as attributes of parent

    - by LloydPickering
    I have XML files which I need to deserialize. I used the XSD tool from Visual Studio to create C# object files. The generated classes do deserialize the files, just not in the way I need. I would appreciate any help figuring out how to solve this problem. The child elements named 'data' should be attributes of the parent element 'task'. A shortened example of the XML is below:

        <task type="Nothing" id="2" taskOnFail="false" >
          <data value="" name="prerequisiteTasks" />
          <data value="" name="exclusionTasks" />
          <data value="" name="allowRepeats" />
          <task type="Wait for Tasks" id="10" taskOnFail="false" >
            <data value="" name="prerequisiteTasks" />
            <data value="" name="exclusionTasks" />
            <data value="" name="allowRepeats" />
          </task>
          <task type="Wait for Tasks" id="10" taskOnFail="false" >
            <data value="" name="prerequisiteTasks" />
            <data value="" name="exclusionTasks" />
            <data value="" name="allowRepeats" />
          </task>
        </task>

    The class definition I am trying to deserialize to is in the form:

        public class task
        {
            public string prerequisiteTasks { get; set; }
            public string exclusionTasks { get; set; }
            public string allowRepeats { get; set; }

            [System.Xml.Serialization.XmlElementAttribute("task")]
            public List<task> ChildTasks { get; set; }
        }

    The child 'task's are fine, but the generated classes put the 'data' elements into an array of data[] rather than into named members of the task class as I need.


  • VS2008 is very slow on a specific large C++ solution

    - by VioletRose
    I have a solution with 21 C++ projects and 1 VB.NET project. The IDE responds very slowly when I simply move the caret in a file or try to open the menu. The process seems to take 50% of the CPU for each movement. It only happens with this solution and only on my machine. The solution has a total of 2380 source and header files, of which 1280 are header files. I tried to remove all connections to source control (Perforce), but it didn't help. Also, I have Visual Assist installed, but even after removing it (uninstalling), the same behavior continued. Any ideas?


  • Querying Postgresql with a very large result set

    - by sanity
    In an application I need to query a Postgres DB where I expect tens or even hundreds of millions of rows in the result set. I might do this query once a day, or even more frequently. The query itself is relatively simple, although may involve a few JOINs. My question is: How smart is Postgres with respect to avoiding having to seek around the disk for each row of the result set? Given the time required for a hard disk seek, this could be extremely expensive. If this isn't an issue, how does Postgres avoid it? How does it know how to lay out data on the disk such that it can be streamed out in an efficient manner in response to this query?


  • Return only one document for each filter defined in the query

    - by Garytxo
    Hi all, in one of my latest projects I use Solr 1.4 for searching products. However, I have run into a slight problem, which I am not sure is possible to solve using Solr. All products are indexed by "country" and "category", and the "id", "class" and "description" are stored values. I have now been asked to extract a sample list of the products we have for a given "category", returning ONLY ONE product for each country where the product is available. In my current implementation, I have a dismax query to get a list of all the countries that correspond to the category, then I call Solr again to extract all products for each country, limiting the number of rows to the number of countries found in the previous query. The problem with this implementation is that I cannot be certain I get one product for each country in the list. So, would anyone know if it is possible to tell Solr that you want only one product per country provided in the query? Any guidance would be useful.


  • Why is my SAX handler returning an object with no values? I am setting it just fine

    - by Blankman
    I'm writing a SAX parser for an XML document, and the object it returns doesn't have the values that I set in the event handlers. My class structure is like this:

        public class ProductSAXHandler extends DefaultHandler
        {
            private Product product;

            public ProductSAXHandler()
            {
                product = new Product();
            }

            public Product ParseXmlFile(String xml)
            {
                SAXParserFactory spf = new ...
                XMLReader parser = ....
                parser.parse(xml);
                return product;
            }

            public void StartElement(....)
            {
                for(int ...)  // looping through attributes
                {
                    if(qName == "description" && name == "sku")
                    {
                        product.setSKU(value);
                    }
                }
            }
        }

    When I am in debug mode, the value of product does get set, and I can see that the product's sku field has the correct value. But for some reason the product object returned is just a new Product object with no values set during the parsing. What am I doing wrong here? It must be me not understanding how these events are fired.
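
    For comparison, here is a minimal, self-contained handler sketch using the standard startElement(String, String, String, Attributes) callback; note that the SAX callback name starts with a lowercase s, and that string contents in Java are compared with equals() rather than ==. The element name "description", the attribute "sku", and the SkuHandler/parse names are illustrative stand-ins, not the original Product code.

        import java.io.StringReader;
        import javax.xml.parsers.SAXParser;
        import javax.xml.parsers.SAXParserFactory;
        import org.xml.sax.Attributes;
        import org.xml.sax.InputSource;
        import org.xml.sax.helpers.DefaultHandler;

        public class SkuHandler extends DefaultHandler {
            private String sku;  // value collected while parsing

            // SAX calls this for every opening tag; the signature must match exactly for it to fire.
            @Override
            public void startElement(String uri, String localName, String qName, Attributes attrs) {
                if ("description".equals(qName)) {          // compare string contents with equals(), not ==
                    String value = attrs.getValue("sku");   // attribute lookup by name
                    if (value != null) {
                        sku = value;
                    }
                }
            }

            public String getSku() {
                return sku;
            }

            public static String parse(String xml) throws Exception {
                SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
                SkuHandler handler = new SkuHandler();
                parser.parse(new InputSource(new StringReader(xml)), handler);
                return handler.getSku();
            }
        }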


  • Rendering CALayer in context uses large amounts of memory

    - by Otium
    I am taking a snapshot of a UIWebView layer, but when I render the webview's layer in the current context my app uses 10 MB more memory, and I don't think that should be right. Here is my current code:

        CGSize imageSize = self.bounds.size;
        UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0);
        CGContextRef context = UIGraphicsGetCurrentContext();
        [self.layer renderInContext:context];
        _snapshot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();


  • Handling large numbers of sockets with .NET

    - by Dreaddan
    I'm looking at writing an application that needs to handle in the region of 200 connections/sec, and I was wondering whether C# and .NET will handle this or whether I really need to be looking at C++ to do it. It looks like SocketAsyncEventArgs may be the way to go, but I thought I'd check before I plough into it. Each transaction should last less than a second, but could take up to 15 seconds, if that makes any difference.


  • Handling large numbers

    - by klw
    Hello, this is actually a problem from the Project Euler site: http://projecteuler.net/index.php?section=problems&id=3. Anyway, I'm not after the solution, and you can probably guess what my approach is. My question is: how do I handle numbers that exceed the range of an unsigned int? Is there a mathematical approach for this, and if so, where can I read about it?
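
    For what it's worth, the usual tool for values beyond a native integer type is an arbitrary-precision integer (Java's BigInteger, Python's built-in int, or GMP in C/C++). A small sketch in Java follows; the input value and the naive trial-division loop are purely illustrative, not the intended approach to the Euler problem.

        import java.math.BigInteger;

        public class BigNumberDemo {
            // Returns the largest prime factor of n using simple trial division.
            // BigInteger has no fixed size, so it handles values far beyond unsigned int or long.
            static BigInteger largestPrimeFactor(BigInteger n) {
                BigInteger factor = BigInteger.ONE;
                BigInteger d = BigInteger.valueOf(2);
                while (d.multiply(d).compareTo(n) <= 0) {
                    if (n.mod(d).signum() == 0) {
                        factor = d;          // d divides n: record it and keep dividing
                        n = n.divide(d);
                    } else {
                        d = d.add(BigInteger.ONE);
                    }
                }
                // Whatever remains above 1 is itself prime and is the largest factor.
                return n.compareTo(BigInteger.ONE) > 0 ? n : factor;
            }

            public static void main(String[] args) {
                // Illustrative value only; a 64-bit long would also hold this particular magnitude,
                // but BigInteger removes the limit entirely.
                BigInteger n = new BigInteger("600851475143");
                System.out.println(largestPrimeFactor(n));
            }
        }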

