Search Results

Search found 14679 results on 588 pages for 'index merge'.


  • Disable Plone Archetypes index/convert doc/pdf files

    - by hosting-schuppen
    Hi Stackoverflowers, if I rebuild my catalog in Plone I get many messages like this:

        2010-02-18T11:26:09 INFO Archetypes Error while trying to convert file contents to 'text/plain' in .getIndexable() of : Unable to find binary "wvHtml" in /sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/usr/games:/usr/lib/jvm/jre/bin

    This happens for doc and pdf files. I don't want to convert docs or pdfs. How can I disable it completely? Thanks for the help!

    Read the article

  • How Do You Get the bufspec While Using Vimdiff Through Git

    - by Elizabeth Buckwalter
    I've read Vimdiff and Viewing differences with Vimdiff, plus done various Google searches using things like "vimdiff multiple", "vimdiff git", "vimdiff commands", etc. When using do or diffg I get the error "More than two buffers in diff mode, don't know which one to use". When using diffg v:fname_in I get "No matching buffer for v:fname_in". From the vimdiff documentation:

        :[range]diffg[et] [bufspec]
            Modify the current buffer to undo difference with another buffer.
            If [bufspec] is given, that buffer is used. If [bufspec] refers to
            the current buffer then nothing happens. Otherwise this only works
            if there is one other buffer in diff mode.

    and more:

        When 'diffexpr' is not empty, Vim evaluates it to obtain a diff file in
        the format mentioned. These variables are set to the file names used:
            v:fname_in    original file
            v:fname_new   new version of the same file
            v:fname_out   resulting diff file

    So, I need to get the name of the bufspec, but the default variables (fname_in, fname_new, and fname_out) aren't set. I ran the command git mergetool on a Linux box through a terminal. [Edit] A partial solution that bred more questions: I used the "filename" at the bottom of the buffer. It's only a half answer, because occasionally I get a "file does not exist" error. I believe it's consistently the remote version of the file that "does not exist". I suspect this has something to do with git and indexing. How do you get the bufspec value consistently while using vimdiff through git-mergetool?

    Read the article

  • Synchronising tables across remote Access databases

    - by Paul H
    Hi folks, I'm helping out a business by providing an Access DB to manage requests of various types. As they are a construction company, they have one machine in an 'office' on the building site, plus 3 based in their main office. The machine on site has no internet connectivity. Is there any (reasonably simple) way to synchronise these databases every so often? I realise the tables could be merged, but each has an autoincrement field which must be synced between instances (i.e. when merging two tables the autoincrement should be reassigned based on the combination of records). Cheers in advance, Paul

    Read the article

  • text indexes vs integer indexes in mysql

    - by imanc
    Hey, I have always tried to have an integer primary key on a table no matter what. But now I am questioning whether this is always necessary. Let's say I have a product table and each product has a globally unique SKU number - that would be a string of, say, 8-16 characters. Why not make this the PK? Typically I would make this field a unique index but then have an auto-incrementing int field as the PK, as I assumed it would be faster, easier to maintain, and would allow me to do things like get the last 5 records added with ease. But in terms of optimisation, assuming I'd only ever be matching on the full text field and not doing partial text-matching queries (e.g. LIKE '%...%'), can you think of any reasons not to use a text-based primary key, most likely of type varchar()? Cheers, imanc

    Read the article

  • Optimize SQL databases by adding index columns

    - by Viktor Sehr
    This might be implementation specific, so the question regards how SQL databases are generally implemented. Say I have a database looking like this:

        Product with columns [ProductName] [Price] [Misc] [Etc]
        Order   with columns [OrderID] [ProductName] [Quantity] [Misc] [Etc]

    ProductName is the primary key of Product, of some string type and unique. OrderID is the primary key, of some integer type, with ProductName being a foreign key. Say I change the primary key of Product to a new column of integer type, i.e. [ProductID]. Would this reduce the database size and optimize lookups joining these two tables (and likewise operations), or are these optimizations performed automatically by (most/general/main) SQL database implementations?

    Read the article

  • SVN problems after migration with CVS2SVN

    - by Bjorn C
    We've migrated from CVS on AIX to SVN on Linux via CVS2SVN. The migration seems to have gone well, but when working in SVN we get a lot of Tree Conflicts that don't seem to be conflicts at all. Looking at the revision graphs, one can see that the graph for e.g. trunk and a branch isn't the same, i.e. they contain different sets of revisions of the file. Each of the 3 ways to resolve this conflict when merging in TortoiseSVN leaves the revision graphs separate; they cannot be "melted" together. Could it be that CVS2SVN didn't understand that a file in different branches is the same even if the file system path is the same? Anyone who has experienced this? Thanks, Bjorn

    Read the article

  • How to "interleave" two DataTables.

    - by Brent
    Take these two lists:

        List 1: Red, Green, Blue
        List 2: Brown, Red, Blue, Purple, Orange

    I'm looking for a way to combine these lists together to produce:

        List 3: Brown, Red, Green, Blue, Purple, Orange

    I think the basic rules are these: 1) insert on top of the list any row falling before the first common row (e.g., Brown comes before the first common row, Red); 2) insert items between rows if both lists share the surrounding rows (e.g., List 1 inserts Green between Red and Blue); and 3) insert rows on the bottom if there's no "between-ness" found in 2 (e.g., List 2 inserts Orange at the bottom). The lists are stored in a DataTable. I'm guessing I'll have to switch between them while iterating, but I'm having a hard time figuring out a method of combining the rows. Thanks for any help. --Brent
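
    A minimal sketch of one way to implement those rules on plain lists (the same walk would apply to values pulled from the two DataTables): treat the second list as the backbone, and before emitting each row that both lists share, splice in the first list's rows that precede it and haven't been emitted yet.

        using System;
        using System.Collections.Generic;

        class Interleaver
        {
            static List<string> Merge(List<string> list1, List<string> list2)
            {
                var result = new List<string>();
                int i = 0;  // position in list1

                foreach (string row in list2)
                {
                    int match = list1.IndexOf(row, i);
                    if (match >= 0)
                    {
                        // Emit the list1 rows that come before this shared row.
                        for (; i < match; i++)
                            result.Add(list1[i]);
                        i = match + 1;  // skip the shared row itself; list2 supplies it
                    }
                    result.Add(row);
                }

                // Anything left over in list1 goes on the bottom.
                for (; i < list1.Count; i++)
                    result.Add(list1[i]);

                return result;
            }

            static void Main()
            {
                var list1 = new List<string> { "Red", "Green", "Blue" };
                var list2 = new List<string> { "Brown", "Red", "Blue", "Purple", "Orange" };
                Console.WriteLine(string.Join(", ", Merge(list1, list2)));
                // Brown, Red, Green, Blue, Purple, Orange
            }
        }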

    Read the article

  • C# Setting Properties using Index

    - by Guazz
    I have a business class that contains many properties for various stock-exchange price types. This is a sample of the class:

        public class Prices
        {
            public decimal Today { get; set; }
            public decimal OneDay { get; set; }
            public decimal SixDay { get; set; }
            public decimal TenDay { get; set; }
            public decimal TwelveDay { get; set; }
            public decimal OneDayAdjusted { get; set; }
            public decimal SixDayAdjusted { get; set; }
            public decimal TenDayAdjusted { get; set; }
            public decimal OneHundredDayAdjusted { get; set; }
        }

    I have a legacy system that supplies the prices using string ids to identify the price type, e.g.:

        Today  = "0D"
        OneDay = "1D"
        SixDay = "6D"
        // ..., etc.

    Firstly, I load all the values into an IDictionary() collection so we have:

        [KEY] VALUE
        [0D] = 1.23456
        [1D] = 1.23456
        [6D] = 1.23456
        // ..., etc.

    Secondly, I set the properties of the Prices class using a method that takes the above collection as a parameter, like so:

        void SetPricesValues(IDictionary<string, decimal> pricesDictionary)
        {
            // TODAY'S PRICE
            string TODAY = "D0";
            if (true == pricesDictionary.ContainsKey(TODAY))
            {
                this.Today = pricesDictionary[TODAY];
            }

            // OneDay PRICE
            string ONE_DAY = "D1";
            if (true == pricesDictionary.ContainsKey(ONE_DAY))
            {
                this.OneDay = pricesDictionary[ONE_DAY];
            }

            // ..., ..., etc., for each other property
        }

    Is there a more elegant technique to set a large number of properties? Thanks, j
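
    A minimal sketch of one possible reflection-based alternative: keep a single map from legacy code to property name, then loop over it instead of writing one if-block per price. The code-to-name pairs shown are only an illustrative subset.

        using System;
        using System.Collections.Generic;
        using System.Reflection;

        public class Prices
        {
            public decimal Today { get; set; }
            public decimal OneDay { get; set; }
            public decimal SixDay { get; set; }
            // ... remaining price properties as in the class above

            // Legacy price-type id -> property name (illustrative subset).
            private static readonly Dictionary<string, string> CodeToProperty =
                new Dictionary<string, string>
                {
                    { "D0", "Today" },
                    { "D1", "OneDay" },
                    { "D6", "SixDay" },
                };

            public void SetPricesValues(IDictionary<string, decimal> pricesDictionary)
            {
                foreach (var entry in CodeToProperty)
                {
                    decimal value;
                    if (pricesDictionary.TryGetValue(entry.Key, out value))
                    {
                        // Look the property up by name and assign the decoded value.
                        PropertyInfo property = typeof(Prices).GetProperty(entry.Value);
                        property.SetValue(this, value, null);
                    }
                }
            }
        }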

    Read the article

  • Doxygen groups and modules index

    - by cppdev
    Hi, I am creating a doxygen document for my project. Recently, I have grouped related classes using the \addtogroup tag. After this, I get a Modules tab in my documentation which shows all modules. I want to add some description right below the module name on the same page. How can I do it using doxygen? Here's my tag:

        /*! \addtogroup test test
         *  Test Testing a group in doxygen
         *  @{
         */

    Read the article

  • 2 large databases - worth merging into 1?

    - by Ardman
    I have 2 large databases that were sharded before. I have now removed the sharding and have created a new database with all of the data except for the tables that were originally sharded. Is it worth importing this data into the new database, or keeping them as separate entities that I can just scan through? We are talking around 60 million records in each sharded table, of which there are 2 tables. Also, whilst I have an empty table, should I be adding indexes which weren't thought of when the database was originally constructed, and which the tables are now too large to have added?

    Read the article

  • Merging two XML files into one XML file using Java

    - by dmurali
    I am stuck with how to proceed with combining two different XML files (which have the same structure). When I was doing some research on it, people say that XML parsers like DOM or StAX will have to be used. But can't I do it with regular IO streams? I am currently trying to do it with IO streams, but this is not solving my purpose; it's getting more complex. For example, what I have tried is:

        public class GUI {
            public static void main(String[] args) throws Exception {
                // Creates file to write to
                Writer output = new BufferedWriter(new FileWriter("C:\\merged.xml"));
                String newline = System.getProperty("line.separator");
                output.write("");

                // Read in xml file 1
                FileInputStream in1 = new FileInputStream("C:\\1.xml");
                BufferedReader br1 = new BufferedReader(new InputStreamReader(in1));
                String strLine1;
                while ((strLine1 = br1.readLine()) != null) {
                    if (strLine1.contains("<MemoryDump>")) {
                        strLine1 = strLine1.replace("<MemoryDump>", "xmlns:xsi");
                    }
                    if (strLine1.contains("</MemoryDump>")) {
                        strLine1 = strLine1.replace("</MemoryDump>", "xmlns:xsd");
                    }
                    output.write(newline);
                    output.write(strLine1);
                    System.out.println(strLine1);
                }

                // Read in xml file 2
                FileInputStream in2 = new FileInputStream("C:\\2.xml");
                BufferedReader br2 = new BufferedReader(new InputStreamReader(in2));
                String strLine2;
                while ((strLine2 = br2.readLine()) != null) {
                    if (strLine2.contains("<MemoryDump>")) {
                        strLine2 = strLine2.replace("<MemoryDump>", "");
                    }
                    if (strLine2.contains("</MemoryDump>")) {
                        strLine2 = strLine2.replace("</MemoryDump>", "");
                    }
                    output.write(newline);
                    output.write(strLine2);
                    System.out.println(strLine2);
                }
            }
        }

    I request you to kindly let me know how I should proceed with merging two XML files, adding additional content as well. It would be great if you could provide me some example links as well! Thank you in advance!

    Read the article

  • Atlas style map index for static google map

    - by Ben Holland
    Hello, I'm using a static Google map, but really this problem could apply to any maps project. I want to divide a map into multiple quadrants (of say 50x50 pixels) and label the columns as A, B, C... and the rows as 1, 2, 3... Next I plan to do something like: 1) find the markers which are the farthest north, east, south, and west; 2) use that info to define the bounding boxes of each row and column; 3) classify each marker by its row and column (example: Marker 1 = [A,2]). A few requirements: I don't know the zoom level, because I let Google set the zoom level appropriately for me, and I would rather not use an algorithm that is dependent on a zoom level. I do however know the locations of all of the markers that are shown on the map. Here is an example of a map that I would like to classify the markers for: static map example link. I found these, which look like a good start: Resource 1, Resource 2. But I think I'm still in need of some help getting started. Can anyone help write out some pseudo code or post a few more resources? I'm kind of in a rut at the moment. Thanks! Any help is much appreciated!
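
    A minimal sketch of the bounding-box idea described above, ignoring the Mercator pixel distortion of a real Google map and assuming the markers span a non-zero area. The Marker type and the grid dimensions are placeholders; the grid is simply split into a fixed number of columns and rows rather than 50-pixel quadrants.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Marker
        {
            public string Name;
            public double Lat;   // latitude
            public double Lng;   // longitude
        }

        class GridClassifier
        {
            // Returns e.g. "B3" for each marker, based on its position inside the
            // bounding box of all markers, divided into cols x rows cells.
            public static Dictionary<string, string> Classify(List<Marker> markers, int cols, int rows)
            {
                double minLat = markers.Min(m => m.Lat), maxLat = markers.Max(m => m.Lat);
                double minLng = markers.Min(m => m.Lng), maxLng = markers.Max(m => m.Lng);

                var result = new Dictionary<string, string>();
                foreach (var m in markers)
                {
                    // Normalise into the bounding box, then scale to the grid;
                    // rows are counted from the northern (top) edge.
                    int col = (int)Math.Min(cols - 1, (m.Lng - minLng) / (maxLng - minLng) * cols);
                    int row = (int)Math.Min(rows - 1, (maxLat - m.Lat) / (maxLat - minLat) * rows);
                    result[m.Name] = (char)('A' + col) + (row + 1).ToString();
                }
                return result;
            }
        }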

    Read the article

  • (C#) Get index of current foreach iteration

    - by Graphain
    Hi, Is there some rare language construct I haven't encountered (like the few I've learned recently, some on Stack Overflow) in C# to get a value representing the current iteration of a foreach loop? For instance, I currently do something like this depending on the circumstances:

        int i = 0;
        foreach (Object o in collection)
        {
            // ...
            i++;
        }

    Answers:

    @bryansh: I am setting the class of an element in a view page based on the position in the list. I guess I could add a method that gets the CSSClass for the objects I am iterating through, but that almost feels like a violation of the interface of that class.

    @Brad Wilson: I really like that - I've often thought about something like that when using the ternary operator but never really given it enough thought. As a bit of food for thought, it would be nice if you could somehow add (generically, to all IEnumerable objects) a handle on the enumerator to increment the value that an extension method returns, i.e. inject a method into the IEnumerable interface that returns an iteration index. Of course this would be blatant hacks and witchcraft... Cool though...

    @crucible: Awesome, I totally forgot to check the LINQ methods. Hmm, appears to be a terrible library implementation though. I don't see why people are downvoting you though. You'd expect the method to either use some sort of HashTable of indices or even another SQL call, not an O(N) iteration... (@Jonathan Holland: yes, you are right, expecting SQL was wrong)

    @Joseph Daigle: The difficulty is that I assume the foreach casting/retrieval is optimised more than my own code would be.

    @Jonathan Holland: Ah, cheers for explaining how it works, and ha at firing someone for using it.
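
    A minimal sketch of the LINQ route touched on above: Enumerable.Select has an overload that hands you the element's index alongside the element itself.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                var collection = new List<string> { "a", "b", "c" };

                // Select exposes the position of each element as it is enumerated.
                foreach (var pair in collection.Select((item, index) => new { item, index }))
                {
                    Console.WriteLine("{0}: {1}", pair.index, pair.item);
                }
            }
        }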

    Read the article

  • Good hash function for a 2d index

    - by rlbond
    I have a struct called Point. Point is pretty simple:

        struct Point
        {
            Row row;
            Column column;
            // some other code for addition and subtraction of points is there too
        };

    Row and Column are basically glorified ints, but I got sick of accidentally transposing the input arguments to functions and gave them each a wrapper class. Right now I use a set of points, but repeated lookups are really slowing things down. I want to switch to an unordered_set. So, I want to have an unordered_set of Points. Typically this set might contain, for example, every point on an 80x24 terminal = 1920 points. I need a good hash function. I just came up with the following:

        struct PointHash : public std::unary_function<Point, std::size_t>
        {
            result_type operator()(const argument_type& val) const
            {
                return val.row.value() * 1000 + val.col.value();
            }
        };

    However, I'm not sure that this is really a good hash function. I wanted something fast, since I need to do many lookups very quickly. Is there a better hash function I can use, or is this OK?
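
    As an aside on the multiply-and-add idea above: a fixed base of 1000 starts colliding once column values reach 1000, so a prime multiplier combined with XOR generally spreads the bits better. A minimal sketch of that combine, written in C# for brevity; the same expression drops straight into the C++ functor above.

        struct Point
        {
            public int Row;
            public int Column;

            public override int GetHashCode()
            {
                unchecked
                {
                    // Prime multiplier plus XOR: a common way to combine two fields.
                    return (Row * 397) ^ Column;
                }
            }

            public override bool Equals(object obj)
            {
                return obj is Point other && Row == other.Row && Column == other.Column;
            }
        }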

    Read the article

  • Deleting index from Solr using solrj as a client

    - by Azhar
    Hello, I am using solrj as a client for indexing documents on the Solr server. I am having a problem while deleting indexes by 'id' from the Solr server. I am using the following code to delete the indexes:

        server.deleteById("id:20");
        server.commit(true, true);

    After this, when I search for the documents again, the search result still contains the above document. I don't know what is going wrong with this code. Please help me out with this issue. Thanks!

    Read the article

  • Fast search in XML files in .NET (or How to index XML files)

    - by codymanix
    I have to implement a search feature which is able to quickly perform arbitrarily complex queries on XML data. If the user makes a query, all XML files must be searched to find possible matches. The users will have lots of XML files (a few tens of thousands or more) which are typically a few kilobytes in size. All the XML files have almost the same structure. I already benchmarked XPath; it is too slow for my needs. How can it be done most efficiently? Is it possible to create indexes for the contents of the XML files (preserving content semantics, not just plain fulltext search)? Would it be useful to put the XML data into an (embedded) SQL database and do the queries with SQL? What other possibilities do I have?
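
    A minimal sketch of one possible indexing approach, not tied to any particular schema: pre-extract the searchable fields from each file once with XmlReader and keep them in an in-memory dictionary, so queries never have to re-parse the XML. The element name "Title" is a hypothetical placeholder for whatever fields the queries actually filter on.

        using System;
        using System.Collections.Generic;
        using System.Xml;

        class XmlFieldIndex
        {
            // Maps a field value to the files that contain it.
            private readonly Dictionary<string, List<string>> index =
                new Dictionary<string, List<string>>(StringComparer.OrdinalIgnoreCase);

            public void AddFile(string path)
            {
                using (XmlReader reader = XmlReader.Create(path))
                {
                    while (reader.Read())
                    {
                        // "Title" is a hypothetical element name; index whichever
                        // elements your queries actually need.
                        if (reader.NodeType == XmlNodeType.Element && reader.Name == "Title")
                        {
                            string value = reader.ReadElementContentAsString();
                            if (!index.TryGetValue(value, out List<string> files))
                            {
                                files = new List<string>();
                                index[value] = files;
                            }
                            files.Add(path);
                        }
                    }
                }
            }

            public IEnumerable<string> FindFiles(string value)
            {
                return index.TryGetValue(value, out List<string> files) ? files : new List<string>();
            }
        }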

    Read the article

  • Explaining verity index and document search limits

    - by Ahmad
    At present, we have a CF8 Standard Edition server which has some limitations around Verity indexing. According to Adobe, Verity Server has the following document search limits (limits are for all collections registered to Verity Server):

        - 10,000 documents for ColdFusion Developer Edition
        - 125,000 documents for ColdFusion Standard Edition
        - 250,000 documents for ColdFusion Enterprise Edition

    We have now reached a stage where the server-wide number of documents indexed exceeds 125k. However, the largest Verity collection consists of about 25k documents (and this is expected to grow). Only one collection is ever searched at a time. In my understanding, this means that I can still search an entire collection with no restrictions. Is this correct? Or does it mean that only documents that were indexed across all collections prior to reaching the limit are actually searchable? We are considering moving to CF9 Standard as a solution to this, and using the Solr solution, which has no restrictions. The coldfusionjedi highlights some differences between Verity and Solr. However, I am trying to gain a clearer understanding of this before we commit to an upgrade. Can someone provide me a clear explanation as to what this means and how it actually affects Verity searching and indexing?

    Read the article

  • file reading in python

    - by Jagdev
    So my whole problem is that I have two files, one with the following format (for Python 2.6):

        #comments
        config = {
        #comments
        'name': 'hello',
        'see?': 'world':'ABC',CLASS=3
        }

    This file has a number of sections like this. The second file has the format:

        [23]
        [config]
        'name'='abc'
        'see?'=
        [23]

    Now the requirement is that I need to compare both files and generate a file like:

        #comments
        config = {
        #comments
        'name': 'abc',
        'see?': 'world':'ABC',CLASS=3
        }

    So the result file will contain the values from the first file, unless the value for the same attribute is present in the second file, in which case that value overwrites it. Now my problem is how to manipulate these files using Python. Thanks in advance, and for your previous quick answers. I need to use Python 2.6.

    Read the article

  • DeltaXML Diff like library for .Net?

    - by Jörg Battermann
    We are currently using DeltaXML in our .NET application to analyse two versions of .xml files regarding their differences, but since DeltaXML is a Java application/library, we're looking for a more homogeneous way to accomplish that task. Does anyone know a .NET diff library similar to DeltaXML?

    Read the article

  • Branching and Merging Strategies

    - by benPearce
    I have been tasked with coming up with a strategy for branching, merging and releasing over the next 6 months. The complication comes from the fact that we will be running multiple projects, all with different code changes and different release dates but approximately the same development start dates. At present we are using VSS for code management, but we are aware that it will probably cause some issues and will be migrating to TFS before new development starts. What strategies should I be employing and what things should I be considering before setting a plan down? Sorry if this is vague; feel free to ask questions and I will update with more information if required.

    Read the article

  • Memory Efficient file append

    - by lboregard
    I have several files whose contents need to be merged into a single file. I have the following code that does this, but it seems rather inefficient in terms of memory usage. Would you suggest a better way to do it? The Util.MoveFile function simply accounts for moving files across volumes.

        private void Compose(string[] files)
        {
            string outFile = @"c:\final.txt";

            using (FileStream fsOut = new FileStream(outFile + ".tmp", FileMode.Create))
            {
                foreach (string inFile in files)
                {
                    if (!File.Exists(inFile))
                    {
                        continue;
                    }

                    byte[] bytes;
                    using (FileStream fsIn = new FileStream(inFile, FileMode.Open))
                    {
                        bytes = new byte[fsIn.Length];
                        fsIn.Read(bytes, 0, bytes.Length);
                    }

                    //using (StreamReader sr = new StreamReader(inFile))
                    //{
                    //    text = sr.ReadToEnd();
                    //}

                    // write the segment to the final file
                    fsOut.Write(bytes, 0, bytes.Length);
                    File.Delete(inFile);
                }
            }

            Util.MoveFile(outFile + ".tmp", outFile);
        }
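
    A minimal sketch of one way to keep memory usage flat regardless of file size: stream each input through a small fixed buffer instead of loading whole files into byte arrays. File.Move stands in here for the question's Util.MoveFile helper so the sketch is self-contained.

        using System.IO;

        class Composer
        {
            public void Compose(string[] files)
            {
                string outFile = @"c:\final.txt";
                byte[] buffer = new byte[64 * 1024]; // fixed 64 KB buffer, independent of input size

                using (FileStream fsOut = new FileStream(outFile + ".tmp", FileMode.Create))
                {
                    foreach (string inFile in files)
                    {
                        if (!File.Exists(inFile))
                            continue;

                        using (FileStream fsIn = new FileStream(inFile, FileMode.Open))
                        {
                            int read;
                            while ((read = fsIn.Read(buffer, 0, buffer.Length)) > 0)
                            {
                                fsOut.Write(buffer, 0, read);
                            }
                        }

                        File.Delete(inFile);
                    }
                }

                // File.Move stands in for the question's Util.MoveFile helper.
                File.Move(outFile + ".tmp", outFile);
            }
        }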

    Read the article

  • How to create nonclustered index in Create Table.

    - by isthatacode
        Create table FavoriteDish
        (
            FavID int identity (1,1) primary key not null,
            DishID int references Dishes(DishID) not null,
            CelebrityName nvarchar(100) nonclustered not null
        )

    This results in: Incorrect syntax near the keyword 'nonclustered'. I referred to the MSDN help for the CREATE TABLE syntax. I am not sure what's wrong here. Thanks for reading.

    Read the article
