Search Results

Search found 69200 results on 2768 pages for 'file indexing'.


  • How to automatically split git commits to separate changes to a single file

    - by Hercynium
    I'm just plain stuck as to how to accomplish this, or if it's even possible. Even if it can be done, I wonder if it could be setting us up for a messed-up, unmanageable repository.

    I have set up two branches of the code-base. One is "master" and the other is "prod". The HEAD of prod is always the latest code in production, and master is the main development branch. Here's the problem, though: we're converting from CVS here at $work and most of the developers are still getting used to git. Their CVS workflow involved tagging versions of individual files for production, then updating the servers using the tag. Unfortunately, this has led to sloppy practices like committing unrelated changes together and then tagging the files after-the-fact... and the devs want to know how they can do the following:

    In their local repos, they hack and commit to their hearts' delight, then at the end of the day run a command that takes a list of files and merges that day's commits to those files - and only those files - into their local prod, even if those commits combine changes to other files.

    I know how to split commits with git rebase --interactive, but I have no clue how I would automate splitting commits at all, never mind the way I want to. I do realize the simplest thing would be to just tell them to switch to their prod branch, check out the files from their master branch into the working tree, and then commit to prod. My problem with that is losing the history of their commits over the day.
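    For what it's worth, that "simplest thing" fallback is easy to script. Below is a minimal, hypothetical Python sketch (the branch names, script name, and invocation are assumptions, not from the post) that promotes the current master content of a list of files onto prod as a single commit - which works, but as noted it collapses the day's per-commit history.

        #!/usr/bin/env python3
        """Hypothetical helper: promote selected files from master to prod as one commit."""
        import subprocess
        import sys

        def git(*args):
            # Run a git command in the current repo and fail loudly on error.
            subprocess.run(["git", *args], check=True)

        def promote(files):
            git("checkout", "prod")
            # Take the current content of just these files from master...
            git("checkout", "master", "--", *files)
            # ...and record it as a single commit on prod.
            git("commit", "-m", "Promote daily changes: " + ", ".join(files))

        if __name__ == "__main__":
            promote(sys.argv[1:])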

    Read the article

  • JavaScript file changes deployment

    - by Balaji
    Hi, we have an MVC web application and some of the code is in JavaScript. When we deploy changes to the JavaScript, the changes are not reflected on the client side; we have to ask clients to press CTRL+F5 to get them. Is there a standard way of pushing JavaScript changes to the client side?
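    A common remedy (framework-agnostic, so treat this as a sketch rather than the MVC-specific answer) is cache busting: serve script URLs that change whenever the file content changes, so browsers fetch the new version instead of reusing the cached one. A minimal Python illustration with a hypothetical script path:

        import hashlib
        from pathlib import Path

        def versioned_url(path, url_prefix="/Scripts/"):
            """Return a script URL whose query string changes with the file content."""
            digest = hashlib.md5(Path(path).read_bytes()).hexdigest()[:8]
            return "{}{}?v={}".format(url_prefix, Path(path).name, digest)

        # Rendered into the page as e.g. <script src="/Scripts/site.js?v=3f2a9c1b"></script>
        print(versioned_url("Scripts/site.js"))

    Asset bundling, or a build step that fingerprints file names, achieves the same effect.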

    Read the article

  • How to use a variable to specify filegroup in MSSQL

    - by gt
    I want to alter a table to add a constraint during an upgrade of an MSSQL database. This table is normally indexed on a filegroup called 'MY_INDEX' - but it may also be on a database without this filegroup, in which case I want the indexing to be done on the 'PRIMARY' filegroup. I tried the following code to achieve this:

        DECLARE @fgName AS VARCHAR(10)

        SET @fgName = CASE
                          WHEN EXISTS (SELECT groupname FROM sysfilegroups WHERE groupname = 'MY_INDEX')
                              THEN QUOTENAME('MY_INDEX')
                          ELSE QUOTENAME('PRIMARY')
                      END

        ALTER TABLE [dbo].[mytable]
        ADD CONSTRAINT [PK_mytable] PRIMARY KEY ( [myGuid] ASC )
        ON @fgName -- fails: 'incorrect syntax'

    However, the last line fails, as it appears a filegroup cannot be specified by variable. Is this possible?
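    The ON clause does not accept a variable, so the usual workaround is dynamic SQL: build the whole ALTER TABLE statement as a string containing the chosen filegroup name and execute it (in pure T-SQL, via EXEC or sp_executesql). A rough sketch of the same decision driven from Python with pyodbc; the connection string is an assumption, and the filegroup name is spliced into the statement text because it cannot be passed as a parameter:

        import pyodbc  # any DB-API driver works the same way

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=mydb;Trusted_Connection=yes"
        )
        cur = conn.cursor()

        cur.execute("SELECT 1 FROM sysfilegroups WHERE groupname = 'MY_INDEX'")
        fg = "[MY_INDEX]" if cur.fetchone() else "[PRIMARY]"

        # The filegroup has to be part of the statement text itself.
        cur.execute(
            "ALTER TABLE [dbo].[mytable] "
            "ADD CONSTRAINT [PK_mytable] PRIMARY KEY ([myGuid] ASC) ON " + fg
        )
        conn.commit()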

    Read the article

  • How to manage transaction for database and file system in jee environment?

    - by Michael Lu
    I store a file's attributes (size, update time, ...) in the database, so the problem is how to manage a transaction that spans both the database and the file system. In a JEE environment, JTA is only able to manage the database transaction. If updating the database succeeds but the file operation fails, should I write a file-rollback method for this? Moreover, file operations in an EJB container violate the EJB spec. What's your opinion? Thanks!
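    Since a plain file system is not a transactional resource that JTA can enlist, a common workaround is manual compensation: stage the file somewhere temporary, commit the database, and only then publish the file (or delete the staged copy on failure). A language-agnostic sketch of that ordering, written here in Python with a placeholder DB callback and invented paths:

        import os
        import tempfile

        def save_with_metadata(data, final_path, update_db):
            """Write file content plus DB metadata so neither is left half-done.

            update_db is a callable that commits the metadata (size, mtime, ...)
            inside the real database transaction and raises on failure.
            """
            fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(final_path) or ".")
            try:
                with os.fdopen(fd, "wb") as f:
                    f.write(data)
                update_db(len(data))               # DB commit happens (or fails) here
                os.replace(tmp_path, final_path)   # publish the file only after the DB commit
            except Exception:
                if os.path.exists(tmp_path):
                    os.remove(tmp_path)            # compensation: throw away the staged file
                raise

    There is still a small window (DB committed, rename fails) that has to be cleaned up out of band; true atomicity would need a transactional file resource, which plain file I/O does not provide.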

    Read the article

  • Indexing SET field

    - by Dienow
    I have two entities, A and B, related by a many-to-many relation. An A entity can be related to up to 100 B entities; a B entity can be related to up to 10,000 A entities. I need a quick way to select, for example, 30 A entities that have a relation with specified B entities, filtered and sorted by different attributes. Here is how I saw the ideal solution: put all the information I know about an A entity, including its relations with B entities, into a single row (a special table with a SET field), then add all the necessary indexes. The problem is that you can't use an index while querying by a SET field. What should I do? I can replace the database with something different if that will help.
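    The usual relational answer is to store the relation in a separate junction (link) table instead of a SET column; the junction table can be indexed and joined, and the filtering and sorting happen on A's own columns. A small self-contained sketch (SQLite, with an invented "score" attribute standing in for the real filter/sort columns):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE a   (id INTEGER PRIMARY KEY, score INTEGER);           -- entity A
            CREATE TABLE b   (id INTEGER PRIMARY KEY);                          -- entity B
            CREATE TABLE a_b (a_id INTEGER NOT NULL, b_id INTEGER NOT NULL,
                              PRIMARY KEY (b_id, a_id));                        -- indexed lookup by B
        """)

        # Invented data: A #1 and #2 both relate to B #7, A #3 does not.
        conn.executemany("INSERT INTO a VALUES (?, ?)", [(1, 50), (2, 90), (3, 10)])
        conn.execute("INSERT INTO b VALUES (7)")
        conn.executemany("INSERT INTO a_b VALUES (?, ?)", [(1, 7), (2, 7)])

        # "30 A entities related to the specified B entities, filtered and sorted":
        rows = conn.execute("""
            SELECT DISTINCT a.*
            FROM a JOIN a_b ON a_b.a_id = a.id
            WHERE a_b.b_id IN (7) AND a.score > 20
            ORDER BY a.score DESC
            LIMIT 30
        """).fetchall()
        print(rows)   # [(2, 90), (1, 50)]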

    Read the article

  • Use of bit-torrent for large file download

    - by questzen
    The company I work for procures large volumes of data by subscribing to FTP feeds. I was wondering whether it is possible to download the same data using a tracker; the major challenge is authentication of the users, IMO. Most FTP servers we subscribe to restrict the number of FTP connection attempts. Does anyone here have any experience with this? Any advice is welcome.

    Read the article

  • Indexing large DB's with Lucene/PHP

    - by thebluefox
    Afternoon chaps. I'm trying to index a 1.7-million-row table with the Zend port of Lucene. On small tests of a few thousand rows it worked perfectly, but as soon as I try to increase the rows to a few tens of thousands, it times out. Obviously I could increase the time PHP allows the script to run, but seeing as 360 seconds gets me ~10,000 rows, I'd hate to think how many seconds it would take to do 1.7 million. I've also tried making the script index a few thousand rows, refresh, and then index the next few thousand, but doing this clears the index each time. Any ideas, guys? Thanks :)
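    The "clears the index each time" symptom usually means each run re-creates the index instead of opening the existing one (Zend_Search_Lucene has both create() and open()). Once the index is opened rather than re-created, the work can be done in resumable batches. The batching pattern itself, independent of Lucene, looks roughly like the Python sketch below - the source table, checkpoint file, and the placeholder index_row() are all assumptions:

        import os
        import sqlite3

        BATCH = 5000
        STATE_FILE = "index_checkpoint.txt"      # hypothetical checkpoint location

        def index_row(row):
            # Placeholder: add the row to the (already opened, not re-created) search index here.
            pass

        def next_offset():
            return int(open(STATE_FILE).read()) if os.path.exists(STATE_FILE) else 0

        conn = sqlite3.connect("content.db")     # hypothetical copy of the source table
        offset = next_offset()
        rows = conn.execute("SELECT * FROM articles LIMIT ? OFFSET ?", (BATCH, offset)).fetchall()
        for row in rows:
            index_row(row)

        # Remember how far we got so the next invocation continues instead of restarting.
        with open(STATE_FILE, "w") as f:
            f.write(str(offset + len(rows)))

    Running the indexer from the command line rather than through the web server also sidesteps the request time limit entirely.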

    Read the article

  • Indexing and Searching Over Word Level Annotation Layers in Lucene

    - by dmcer
    I have a data set with multiple layers of annotation over the underlying text, such as part-of-speech tags, chunks from a shallow parser, named entities, and others from various natural language processing (NLP) tools. For a sentence like "The man went to the store", the annotations might look like:

        Word    POS   Chunk   NER
        ====    ===   =====   ========
        The     DT    NP      Person
        man     NN    NP      Person
        went    VBD   VP      -
        to      TO    PP      -
        the     DT    NP      Location
        store   NN    NP      Location

    I'd like to index a bunch of documents with annotations like these using Lucene and then perform searches across the different layers. An example of a simple query would be to retrieve all documents where Washington is tagged as a person. While I'm not absolutely committed to the notation, syntactically end-users might enter the query as follows:

        Query: Word=Washington,NER=Person

    I'd also like to do more complex queries involving the sequential order of annotations across different layers, e.g. find all the documents where there's a word tagged person followed by the words "arrived at" followed by a word tagged location. Such a query might look like:

        Query: "NER=Person Word=arrived Word=at NER=Location"

    What's a good way to go about approaching this with Lucene? Is there any way to index and search over document fields that contain structured tokens?
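    Not a Lucene recipe, but the data model such a scheme has to mimic is small enough to show directly: index each (layer, value) pair at its token position, answer same-position conjunctions by intersection, and answer sequential queries by shifting the position lists so a consecutive match lines up on its start position. A toy in-memory Python sketch with invented annotations:

        from collections import defaultdict

        # Inverted index: (layer, value) -> set of (doc_id, token_position)
        index = defaultdict(set)

        def add_doc(doc_id, tokens):
            """tokens: one dict per word position, e.g. {"Word": "arrived", "NER": "-"}."""
            for pos, layers in enumerate(tokens):
                for layer, value in layers.items():
                    index[(layer, value.lower())].add((doc_id, pos))

        def find(*positions):
            """Each argument is one token position: comma-joined layer=value constraints.
            Consecutive arguments must match consecutive token positions."""
            hits = None
            for i, constraints in enumerate(positions):
                for term in constraints.split(","):
                    layer, value = term.split("=", 1)
                    # Shift postings back by i so a matching sequence aligns on its start position.
                    postings = {(doc, pos - i) for doc, pos in index[(layer, value.lower())]}
                    hits = postings if hits is None else hits & postings
            return {doc for doc, _ in hits}

        add_doc("doc1", [
            {"Word": "Washington", "NER": "Person"},
            {"Word": "arrived",    "NER": "-"},
            {"Word": "at",         "NER": "-"},
            {"Word": "Boston",     "NER": "Location"},
        ])

        print(find("Word=Washington,NER=Person"))                               # {'doc1'}
        print(find("NER=Person", "Word=arrived", "Word=at", "NER=Location"))    # {'doc1'}

    In Lucene terms this corresponds roughly to emitting one token per annotation layer at the same position (position increment 0) and running phrase or span queries over them.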

    Read the article

  • reading excel file in vb.net

    - by Mark
    Can anyone help me with how to detect the EOF of an Excel sheet using VB.NET? I have this code, but it crashes when I delete the trailing rows (from row 6 downward). My problem is that my code still reads the null values of the rows I deleted in Excel. This is my code:

        Dim xlsConn As New OleDbConnection
        Dim xlsAdapter As New OleDbDataAdapter
        Dim xlsDataSet As New DataSet

        xlsConn = New OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & pathName & " ; Extended Properties=Excel 8.0")
        strSQL = "SELECT * FROM [Sheet1$]"
        xlsAdapter.SelectCommand = New OleDbCommand(strSQL, xlsConn)
        xlsDataSet.Clear()
        xlsAdapter.Fill(xlsDataSet)
        ListView1.Items.Clear()

        Dim listItem As ListViewItem
        For ctr As Integer = 0 To xlsDataSet.Tables(0).Rows.Count - 1
            listItem = ListView1.Items.Add(xlsDataSet.Tables(0).Rows(ctr).Item("EmpNo").ToString)
            listItem.SubItems.Add(xlsDataSet.Tables(0).Rows(ctr).Item("EmpName").ToString)
        Next

    Can anyone help me fix this bug?
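    One common way to deal with trailing rows that come back as nulls is to stop at (or skip) the first row whose key column is empty. For comparison only - a different tool, not a fix to the OleDb code above - here is that idea with Python and openpyxl, assuming the workbook is saved as .xlsx with headers in row 1 and data from row 2:

        from openpyxl import load_workbook

        wb = load_workbook("employees.xlsx", read_only=True)   # hypothetical file name
        ws = wb["Sheet1"]

        rows = []
        for emp_no, emp_name in ws.iter_rows(min_row=2, max_col=2, values_only=True):
            if emp_no is None:          # deleted/blank rows come back as None -> treat as EOF
                break
            rows.append((str(emp_no), str(emp_name)))

        print(rows)

    The same check ported back to the VB.NET loop (skip the row, or Exit For, when the EmpNo cell is DBNull or empty) avoids reading the phantom rows.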

    Read the article

  • Compact matlab matrix indexing notation

    - by AnnaR
    I've got an n-by-k matrix containing k numbers per row. I want to use these k numbers as indexes into a k-dimensional matrix. Is there any compact way of doing so in MATLAB, or must I use a for-loop? This is what I want to do (in MATLAB pseudo-code), but in a more MATLAB-ish way:

        for row = 1:1:n
            finalTable(row) = kDimensionalMatrix(indexmatrix(row, 1), ...
                indexmatrix(row, 2), ..., indexmatrix(row, k))
        end
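    For comparison (not a MATLAB answer): in NumPy this row-wise lookup is exactly what indexing with a tuple of index arrays does, so the loop collapses to one line - bearing in mind the 0-based indices:

        import numpy as np

        k_dim = np.arange(2 * 3 * 4).reshape(2, 3, 4)        # a small 3-D example "matrix"
        index_matrix = np.array([[0, 1, 2],                   # each row is one k-tuple of indices
                                 [1, 2, 3]])

        final_table = k_dim[tuple(index_matrix.T)]            # one looked-up value per row
        print(final_table)                                     # [ 6 23]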

    Read the article

  • file output in python giving me garbage

    - by Richard
    When I write the following code I get garbage for an output. It is just a simple program to find prime numbers. It works when the first for loop's range only goes up to 1000, but once the range becomes large the program fails to output meaningful data.

        output = open("output.dat", 'w')
        for i in range(2, 10000):
            prime = 1
            for j in range(2, i-1):
                if i % j == 0:
                    prime = 0
                    j = i-1
            if prime == 1:
                output.write(str(i) + " ")
        output.close()
        print "writing finished"
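    Two side notes, neither from the original post: assigning j = i-1 does not exit a Python for loop (that needs break), so the inner loop always runs to completion and the program simply becomes very slow as the range grows; and a sieve avoids the quadratic work entirely. A sketch:

        def primes_below(limit):
            """Sieve of Eratosthenes - much faster than per-number trial division."""
            is_prime = [True] * limit
            is_prime[0:2] = [False, False]
            for i in range(2, int(limit ** 0.5) + 1):
                if is_prime[i]:
                    for j in range(i * i, limit, i):
                        is_prime[j] = False
            return [n for n in range(limit) if is_prime[n]]

        with open("output.dat", "w") as output:
            output.write(" ".join(str(p) for p in primes_below(10000)))
        print("writing finished")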

    Read the article

  • How to append text into text file dynamically

    - by niraj deshmukh
    I have a file with numbered sections like this:

        [12]
        key1=val1
        key2=val2
        key3=val3
        key4=val4
        key5=val5

        [13]
        key1=val1
        key2=val2
        key3=val3
        key4=val4
        key5=xyz

        [14]
        key1=val1
        key2=val2
        key3=val3
        key4=val4
        key5=val5

    I want to update key5 to val5 in section [13]. This is what I tried:

        try {
            br = new BufferedReader(new FileReader(oldFileName));
            bw = new BufferedWriter(new FileWriter(tmpFileName));
            String line;
            while ((line = br.readLine()) != null) {
                System.out.println(line);
                if (line.contains("[13]")) {
                    while (line.contains("key5")) {
                        if (line.contains("key5")) {
                            line = line.replace("key5", "key5= Val5");
                            bw.write(line + "\n");
                        }
                    }
                }
            }
        } catch (Exception e) {
            return;
        } finally {
            try {
                if (br != null) br.close();
            } catch (IOException e) {
                //
            }
            try {
                if (bw != null) bw.close();
            } catch (IOException e) {
                //
            }
        }
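    As posted, the inner while loop never reads a new line: the "[13]" line itself doesn't contain "key5", so nothing is ever written (and if a line did contain both, the loop would never end); lines that don't match are also never copied to the temp file. The intended read-rewrite-replace pattern, sketched in Python for brevity (the file name is an assumption):

        import os

        def set_key(path, section, key, value):
            """Rewrite `key=value` inside the given [section], copying every other line unchanged."""
            tmp_path = path + ".tmp"
            current = None
            with open(path) as src, open(tmp_path, "w") as dst:
                for line in src:
                    stripped = line.strip()
                    if stripped.startswith("[") and stripped.endswith("]"):
                        current = stripped                      # remember which section we're in
                    elif current == section and stripped.startswith(key + "="):
                        line = "{}={}\n".format(key, value)     # replace just this key's line
                    dst.write(line)
            os.replace(tmp_path, path)                          # swap the temp file over the original

        set_key("settings.ini", "[13]", "key5", "val5")         # hypothetical file name

    In Java the structure is the same: track the current section while copying every line to the BufferedWriter, rewrite the matching key5 line when inside [13], then rename the temp file over the original.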

    Read the article

  • Windows batch file timing bug

    - by elbillaf
    I've used %time% for timing previously - at least I think I have. I have this weird case:

        IF NOT "%1" == "" (
            echo Testing: %1
            echo Start: %time%
            sqlcmd -S MYSERVER -i test_import_%1.sql -o "test_%1%.log"
            sleep 3
            echo End: %time%
        )

    I run this, and it prints:

        Testing: pm
        Start: 13:29:45.30
        End: 13:29:45.30

    In this case my SQL code is failing (for a different reason), but I figure the sleep 3 should make the time increment by at least 3 seconds. Any ideas? tx, tff

    Read the article

  • Explain please download file iphone

    - by Floo
    Hi all! I want to download a plist from my server using HTTP. I have searched the web and everybody says use this class or that one, but there's no example of how to do it. Maybe it is really simple, but I'm a noob, so if someone can explain or give a link, he'll be my hero. Thanks to all!

    Read the article

  • copy file into another file in prolog

    - by smile
    Good morning/evening. How can I write something to a file and then copy its content into the current file? For example, I consult file1.pro, then I have a rule that writes something to file2.pro; after this rule finishes its job I want to append the content of file2.pro into file1.pro. When I tried to append into file1.pro directly, the data appeared as undefined symbols, and I don't know why. Please help me, thank you.

    Read the article

  • What data is actually stored in a B-tree database in CouchDB?

    - by Andrey Vlasovskikh
    I'm wondering what is actually stored in a CouchDB database B-tree. CouchDB: The Definitive Guide says that a database B-tree is used for append-only operations and that a database is stored in a single B-tree (besides the per-view B-trees). So I guess the data items that are appended to the database file are revisions of documents, not the whole documents:

        +---------|### ... |
        |
        +------|###|------+ --- ... ---+
        |        |        |            |
        +------+ +------+ +------+     +------+
        | doc1 | | doc2 | | doc1 | ... | doc1 |
        | rev1 | | rev1 | | rev2 |     | rev7 |
        +------+ +------+ +------+     +------+

    Is this true? If it is, then how is the current revision of a document determined from such a B-tree? Doesn't that mean CouchDB needs a separate "view" database for indexing the current revisions of documents to preserve O(log n) access? Wouldn't that lead to race conditions while building such an index? (As far as I know, CouchDB uses no write locks.)

    Read the article

  • Bash: how to process variables from an input file?

    - by gilgongo
    I've got a bash script that reads input from a file like this:

        while IFS="|" read -r a b
        do
            echo "$a something $b somethingelse"
        done < "$FILE"

    The file it reads looketh like this:

        http://someurl1.com|label1
        http://someurl2.com|label2

    However, I'd like to be able to insert the names of variables into that file when it suits me, and have the script process them when it sees them, so the file might look like this:

        http://someurl1.com?$VAR|label1
        http://someurl2.com|label2

    So $VAR could be, for example, today's date, producing an output like this:

        http://someurl1.com something label1 somethingelse
        http://someurl2.com?20100320 something label2 somethingelse
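    The usual shell-side answers are eval (which executes whatever is in the file, so only safe with trusted input) or envsubst from GNU gettext. For what it's worth, the same expansion can also be done outside the shell; a small Python sketch using string.Template, with the variable name and value invented for the example:

        from datetime import date
        from string import Template

        line = "http://someurl1.com?$VAR|label1"        # a line as it appears in the file
        expanded = Template(line).safe_substitute(VAR=date.today().strftime("%Y%m%d"))

        url, label = expanded.split("|", 1)
        print(url + " something " + label + " somethingelse")
        # -> http://someurl1.com?20100320 something label1 somethingelse (date will vary)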

    Read the article
