Search Results

Search found 3771 results on 151 pages for 'doc brown'.

Page 19/151

  • How to set document.domain for a dynamically generated IFRAME?

    - by Paras Chopra
    I am implementing CodeMirror (http://marijn.haverbeke.nl/codemirror/) on a page where document.domain needs to be declared (because of other IFRAMEs on the page). CodeMirror generates a dynamic IFRAME to provide syntax-highlighted code editing. The problem is that IE throws 'Access Denied' (other browsers are fine) at the following piece of CodeMirror code:

        this.win = frame.contentWindow;
        ...
        var doc = this.win.document;   // <-- ERROR
        doc.open();
        doc.write(html.join(""));
        doc.close();

    It turns out IE doesn't inherit document.domain from the parent page. I can set document.domain in the IFRAME contents, but IE throws the error before I can even set the contents. Any ideas how to tackle this problem?
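
    A hedged sketch of the classic IE workaround (an assumption on my part, not CodeMirror's documented fix): give the frame a javascript: URL that sets document.domain inside the frame's own document before any outside script touches contentWindow.

        // hypothetical patch point: run before anything accesses frame.contentWindow
        frame.src = "javascript:(function(){" +
                    "document.open();" +
                    "document.domain='" + document.domain + "';" +
                    "document.close();" +
                    "})()";
        // once the frame's document shares the domain, this.win.document is accessible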

    Read the article

  • Apache 2.2.14: SSLCARevocation location

    - by Doc
    I am installing a .crl in my Apache config. It looks like this:

        <VirtualHost _default_>
            DocumentRoot "web"
            ServerName example.com
            SSLEngine on
            SSLCertificateFile "cert.crt"
            SSLCertificateKeyFile "key.key"
            SSLCertificateChainFile "cert.ca-bundle"
            SSLProtocol -all +SSLv3
            SSLCipherSuite SSLv3:+HIGH:+MEDIUM
            <Directory>
                Order deny,allow
                Allow from all
                SSLCACertificateFile "ClientRootCert.crt"
                SSLVerifyClient require
                SSLVerifyDepth 3
                SSLCARevocationFile "CRLList.crl"
            </Directory>
        </VirtualHost>

    When Apache is started, I get the error "SSLCARevocationFile not allowed here". When I place SSLCARevocationFile above the Directory tag, Apache starts, but all client certs are rejected with the message ssl_error_expired_cert_alert (both revoked and active certs). How do I solve this?
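
    A hedged reading of both symptoms: in Apache 2.2, SSLCARevocationFile is only valid in server or virtual-host context, never inside <Directory>, which explains the first error; and once the directive is accepted, mod_ssl treats a CRL whose nextUpdate date has passed as expired and then fails verification for every client certificate, which the browser reports as ssl_error_expired_cert_alert. Keeping the directive at vhost level and reissuing a fresh CRL would look roughly like this (a sketch, not a drop-in config):

        <VirtualHost _default_>
            ...
            SSLCACertificateFile "ClientRootCert.crt"
            SSLCARevocationFile  "CRLList.crl"    # vhost level, not inside <Directory>
            <Directory>
                SSLVerifyClient require
                SSLVerifyDepth 3
            </Directory>
        </VirtualHost>

        # check the CRL's validity window; reissue it if nextUpdate is in the past
        # openssl crl -in CRLList.crl -noout -lastupdate -nextupdate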

    Read the article

  • Does the Lucene search function work on large documents?

    - by shaon-fan
    I have a problem when searching with Lucene. First, the indexing function works well on huge documents, such as .pst files (the Outlook mail storage): it can build an index that includes all the information of the .pst. The only problem is that the index is sometimes too large and contains very many words. When I search with Lucene, it only processes the front part of the indexed document; if a word appears in the back part, it cannot be found and there are no hits in the result. But when I split the document into several parts (in a crude way, while debugging) and search each part, it works well. So I want to know how to split the index, and what size the limit for searching should be. Cheers, and waiting for a reply.

    Update: following what Coady said, I set the length to the maximum of 2^31-1, but the search results still don't include what I want. Simply put, I convert the document to a string array to analyze; one document has 79,680 words, including spaces and symbols. When I search for a certain word, it returns a count of only 300, when there are actually more than 300 occurrences. For the same reason, a word in the back part of the document still can't be found.

        // set the length
        indexwriter.SetMaxFieldLength(2147483647);

        // search
        IndexSearcher searcher = new IndexSearcher(Program.Parameters["INDEX_LOCATION"].ToString());
        Hits hits = searcher.Search(query);

    This is my code, the same as everyone else's. I found this problem when I needed to count every word's hits in a document, and that is also when I found that words in the back part of the document couldn't be found. Please help me figure this out: is there a length setting on the searcher somewhere? Has anyone else run into this problem?
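
    A hedged sketch of the likely cause: Lucene's IndexWriter truncates every field to 10,000 terms by default, and the truncation happens at indexing time, so raising a limit on the searcher (or after the fact) changes nothing until the documents are re-indexed with the higher limit in place.

        // set the limit on the writer *before* adding documents, then re-index
        IndexWriter writer = new IndexWriter(indexDir, new StandardAnalyzer(), true);
        writer.SetMaxFieldLength(int.MaxValue);   // default is 10,000 terms per field
        writer.AddDocument(doc);
        writer.Close();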

    Read the article

  • Injecting an XML fragment into the current document from an external file

    - by makenai
    I'm currently parsing an XML file using REXML and trying to come up with a way of inserting an XML fragment from an external file. Currently, I'm using logic like the following:

        doc.elements.each('//include') do |element|
          handleInclude( element )
        end

        def handleInclude( element )
          if filename = element.attributes['file']
            data = File.open( filename ).read
            doc = REXML::Document.new( data )
            element.parent.replace_child( element, doc )
          end
        end

    where my XML looks like the following:

        <include file="test.xml" />

    But this seems a little clunky, and I'm worried that REXML might not always parse XML fragments correctly, due to the absence of a proper root node in some cases. Is there a better way of doing this? Concern #2: REXML seems not to pick up my changes after I replace elements. For example, after making a change,

        doc.elements.each('rootNode/*') do |element|
        end

    picks up neither the original element I replaced nor the one I replaced it with. Is there some trick to getting REXML to rescan its tree?
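
    A hedged sketch of one way around both concerns: parse the fragment under a synthetic wrapper root, so REXML always sees a well-formed document, and splice the wrapper's child elements into the tree instead of inserting a whole Document node (replace_child putting a Document object inside the tree is a plausible reason later traversals miss it).

        def handleInclude( element )
          if filename = element.attributes['file']
            # the wrapper guarantees a single root even for multi-element fragments
            frag = REXML::Document.new( "<wrapper>#{File.read(filename)}</wrapper>" )
            frag.root.elements.to_a.each do |child|
              element.parent.insert_before( element, child )
            end
            element.parent.delete_element( element )
          end
        end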

    Read the article

  • NHibernate exception - Return types of SQL query were not specified

    - by Muhammad Akhtar
    I am executing SQL in NHibernate and getting the exception "Return types of SQL query were not specified":

        public ArrayList get(string Release, int DocId)
        {
            string query = string.Format(
                "select ti.Id, (' Defect ' + cast(ti.onTimeId as varchar) + ' - ' + ti.Name) as Name " +
                "from TrackingItems ti " +
                "inner join DocumentTrackingItems dti on ti.Id = dti.ItemStepId " +
                "inner join Documents doc on dti.DocumentId = doc.Id " +
                "where ti.ReleaseId = '{0}' AND doc.TypeId = {1} and doc.Name is null AND ti.Type = 'Defect'",
                Release, DocId);
            ISession session = NHibernateHelper.GetCurrentSession();
            ArrayList arList = (ArrayList)session.CreateSQLQuery(query).List();
            return arList;
        }

    When I run this query directly in SQL, it works fine. Any idea what the issue could be? Thanks.
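
    A hedged sketch of the usual fix: a native SQL query has to declare the type of each projected column before NHibernate can materialise the results, which is exactly what the exception is complaining about; AddScalar does that.

        // each row then comes back as an object[] { Id, Name }
        IList rows = session.CreateSQLQuery(query)
            .AddScalar("Id", NHibernateUtil.Int32)
            .AddScalar("Name", NHibernateUtil.String)
            .List();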

    Read the article

  • Entity Framework - Insert/Update new entity with child-entities

    - by Christina Mayers
    I have found many questions here on SO and articles all over the internet, but none really tackled my problem. My model looks like this (I stripped all non-essential properties). Every day or so, "Play" gets updated (via an XML file containing the information):

        internal Play ParsePlayInfo(XDocument doc)
        {
            Play play = (from p in doc.Descendants("Play")
                         select new Play
                         {
                             Theatre = new Theatre()
                             {
                                 // Properties
                             },
                             // Properties
                             LastUpdate = DateTime.Now
                         }).SingleOrDefault();

            var actors = (from a in doc.XPathSelectElement(".//Play//Actors").Nodes()
                          select new Lecturer()
                          {
                              // Properties
                          });

            var parts = (from p in doc.XPathSelectElement(".//Play//Parts").Nodes()
                         select new Part()
                         {
                             // Properties
                         }).ToList();

            foreach (var item in parts)
            {
                play.Parts.Add(item);
            }

            var reviews = (from r in doc.XPathSelectElement(".//Play//Reviews").Nodes()
                           select new Review
                           {
                               // Properties
                           }).ToList();

            for (int i = 0; i < reviews.Count(); i++)
            {
                PlayReviews pR = new PlayReviews()
                {
                    Review = reviews[i],
                    Play = play,
                    // Properties
                };
                play.PlayReviews.Add(pR);
            }

            return play;
        }

    If I add this "play" via Add(), every child object of Play will be inserted, regardless of whether some of them already exist. Since I need to update existing entries, I have to do something about that. As far as I can tell, I have the following options:

    1. Add/update the child entities in my PlayRepository's Add method.
    2. Restructure and rewrite ParsePlayInfo() so that I get all the child entities first, add or update them, and then create a new Play. The only problem here is that I wanted ParsePlayInfo() to be persistence-ignorant. I could work around this by creating multiple parse methods (e.g. ParseActors()) and assigning their results to the play in my controller (I'm using ASP.NET MVC) after everything was parsed and added.

    Currently I am implementing option 1, but it feels wrong. I'd appreciate it if someone could guide me in the right direction on this one.
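
    A hedged sketch of a common pattern for option 1 (assuming EF 4.1's DbContext; "db", the Theatres/Plays set names, and the Name/Title keys are all illustrative, not from the original post): look each child up first and reuse the attached instance, so Add() only inserts what is genuinely new.

        // hypothetical repository-side upsert
        var existingTheatre = db.Theatres.SingleOrDefault(t => t.Name == play.Theatre.Name);
        if (existingTheatre != null)
            play.Theatre = existingTheatre;    // attached entity -> no duplicate INSERT

        var existingPlay = db.Plays.SingleOrDefault(p => p.Title == play.Title);
        if (existingPlay == null)
            db.Plays.Add(play);                                    // genuinely new
        else
            db.Entry(existingPlay).CurrentValues.SetValues(play);  // update scalar properties

        db.SaveChanges();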

    Read the article

  • How to use MultiFieldQueryParser and filters in Lucene.Net

    - by Khotu Nam
    I want to perform a multi-field search on a Lucene.Net index, but filter the results based on one of the fields. Here's what I'm currently doing. The field definitions at indexing time are:

        doc.Add(new Field("id", id.ToString(), Field.Store.YES, Field.Index.UN_TOKENIZED));
        doc.Add(new Field("title", title, Field.Store.NO, Field.Index.TOKENIZED));
        doc.Add(new Field("summary", summary, Field.Store.NO, Field.Index.TOKENIZED, Field.TermVector.YES));
        doc.Add(new Field("description", description, Field.Store.NO, Field.Index.TOKENIZED, Field.TermVector.YES));
        doc.Add(new Field("distribution", distribution, Field.Store.NO, Field.Index.UN_TOKENIZED));

    When I perform the search, I do the following:

        MultiFieldQueryParser parser = new MultiFieldQueryParser(
            new string[] { "title", "summary", "description" }, analyzer);
        parser.SetDefaultOperator(QueryParser.Operator.AND);
        Query query = parser.Parse(text);

        BooleanQuery bq = new BooleanQuery();
        TermQuery tq = new TermQuery(new Term("distribution", distribution));
        bq.Add(tq, BooleanClause.Occur.MUST);
        Filter filter = new QueryFilter(bq);

        Hits hits = searcher.Search(query, filter);

    However, the result is always 0 hits. What am I doing wrong?
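
    A hedged guess at the usual culprit: a TermQuery bypasses the analyzer, so the filter only matches when the queried value is byte-for-byte identical to the indexed one; any difference in case or stray whitespace gives exactly this "always 0 hits" behaviour. Normalising both sides rules that out.

        // at indexing time (illustrative; pick one canonical form)
        doc.Add(new Field("distribution", distribution.ToLowerInvariant(),
                          Field.Store.NO, Field.Index.UN_TOKENIZED));
        // ...and at search time
        TermQuery tq = new TermQuery(new Term("distribution", distribution.ToLowerInvariant()));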

    Read the article

  • Symbolic link and FileZilla over SFTP

    - by Doc
    I'm pretty new to Debian, and I'm trying to set up a server. I created a user which can only access /home/username (and its subdirectories). Now I want to use that user for the webserver I set up, and I gave him access to /var/www, but I can't see /var/www through SFTP, so I made a symbolic link like this:

        root@server:/home/username# ln -s /var/www www
        root@server:/home/username# cd www
        root@server:/home/username/www# chown username:username *

    Now, with FileZilla, I can see the www folder, but when I try to open it I get an error (the original post showed both in screenshots). Where am I going wrong? Sorry for my awful English; I hope you can understand my problem.
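
    A hedged sketch of the usual fix: if the account is chrooted to /home/username (e.g. via OpenSSH's ChrootDirectory), a symlink to /var/www points outside the jail, so the SFTP server cannot follow it. A bind mount exposes the directory inside the chroot instead.

        # as root; assumes the chroot is /home/username
        rm /home/username/www                  # drop the symlink
        mkdir -p /home/username/www
        mount --bind /var/www /home/username/www
        # to survive reboots, add to /etc/fstab:
        # /var/www  /home/username/www  none  bind  0  0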

    Read the article

  • SQL Server Update with left join and group by having

    - by Marty Trenouth
    I'm making an update to our database and would like to update rows that do not have existing items in another table. I can join the tables together, but am having trouble grouping the table to get a count of the number of rows:

        UPDATE dpt
        SET dpt.active = 0
        FROM DEPARTMENT dpt
        LEFT JOIN DOCUMENTS doc ON dpt.ID = doc.DepartmentID
        GROUP BY dpt.ID
        HAVING COUNT(doc.ID) = 0

    What should I be doing?
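
    A hedged sketch: T-SQL's UPDATE does not allow GROUP BY/HAVING at all, but the aggregate isn't needed; the "department with no documents" rows are exactly the LEFT JOIN rows where the right side is NULL.

        UPDATE dpt
        SET dpt.active = 0
        FROM DEPARTMENT dpt
        LEFT JOIN DOCUMENTS doc ON dpt.ID = doc.DepartmentID
        WHERE doc.ID IS NULL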

    Read the article

  • In terminal, merging multiple folders into one.

    - by Josh Pinter
    I have a backup directory created by WDBackup (the Western Digital external HD backup utility) that contains a directory for each day that it backed up, holding the incremental contents of just what was backed up that day. So the hierarchy looks like this:

        20100101
            My Documents
                Letter1.doc
            My Music
                Best Songs Every
                    First Songs.mp3
                My song.mp3           # modified 20100101
        20100102
            My Documents
                Important Docs
                    Taxes.doc
            My Music
                My Song.mp3           # modified 20100102
        ...etc...

    Only what has changed is backed up, and the first backup that was ever made contains all the files selected for backup. What I'm trying to do now is incrementally copy each of these dated folders, from oldest to newest, into a 'merged' folder, keeping the folder structure, so that newer content overrides older content. As an example, using just these two folders, the final merged folder would look like this:

        Merged
            My Documents
                Important Docs
                    Taxes.doc
                Letter1.doc
            My Music
                Best Songs Every
                    First Songs.mp3
                My Song.mp3           # modified 20100102

    Hope that makes sense. Thanks, Josh
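
    A hedged sketch of one way to do it in the shell, assuming the dated folders sit in the current directory and rsync is available (cp -R would also work): YYYYMMDD names sort chronologically, so the glob applies snapshots oldest-first and later copies overwrite earlier ones.

        mkdir -p Merged
        # the glob expands in lexicographic (= chronological) order
        for d in 2*/; do
            rsync -a "$d" Merged/
        done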

    Read the article

  • Can Git, Mercurial, SVN, or other version control tools work well when the project tree has binary files?

    - by Jian Lin
    Sometimes our project tree has binary files, such as .jpg, .png, .doc, .xls, or .pdf. Can Git, Mercurial, SVN, or other tools do a good job when only part of a binary file is changed? For example, if the spec is written in a 4 MB .doc that is part of the repository, and it is edited (just 1 or 2 lines each time) and checked in 100 times during the year, does that make 400 MB? And with 100 different .doc and .xls files, 40 GB -- not a size that is easy to manage? I have tried Git and Mercurial and see that they both seem to add a large amount of data even when one line is changed in a .doc or .pdf. Is there another way, inside Git, Mercurial, or SVN, that can do the job?
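
    A hedged note on the Git side: a fresh commit stores the file as a whole (compressed) loose object, which is why the repository seems to grow by the full file size on every check-in; packing then computes deltas, including between binary blobs, and usually shrinks this a lot. Comparing sizes before and after a repack shows the real long-term cost:

        git count-objects -v   # size before
        git gc                 # repack; pack files store deltas between versions
        git count-objects -v   # size after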

    Read the article

  • How can I delete a file created and opened with Perl's IO::File and XML::Writer?

    - by Sho Minamimoto
    So I'm running through a list of things, and have code that creates an .xml file with IO::File, called $doc; then I make a new writer with XML::Writer(OUTPUT => $doc). More code runs, and I build a big XML file with XML::Writer. Then, near the end of the file, I find out whether I need this file at all. If I do need it, I just:

        $writer->end();
        $doc->close();

    but if I don't need it, what should I call to just delete all the data I've stored/saved and move on to the next file? I tried unlink($docpath) (before and after $doc->close()); the file was not deleted.
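
    A hedged guess plus a sketch: on Windows, unlink generally fails while a filehandle is still open on the file, and an unchecked unlink fails silently; closing first and checking the return value shows what is actually happening. (If $docpath differs at all from the path the IO::File was opened with, that would also explain it.)

        $writer->end();    # flush XML::Writer's buffered output
        $doc->close();     # release the handle before unlinking
        unlink($docpath) or warn "could not delete $docpath: $!";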

    Read the article

  • My Lucene queries only ever find one hit

    - by Bob
    I'm getting started with Lucene.Net (stuck on version 2.3.1). I add sample documents with this:

        Dim indexWriter = New IndexWriter(indexDir, New Standard.StandardAnalyzer(), True)
        Dim doc = New Document()
        doc.Add(New Field("Title", "foo", Field.Store.YES, Field.Index.TOKENIZED, Field.TermVector.NO))
        doc.Add(New Field("Date", DateTime.UtcNow.ToString, Field.Store.YES, Field.Index.TOKENIZED, Field.TermVector.NO))
        indexWriter.AddDocument(doc)
        indexWriter.Close()

    I search for documents matching "foo" with this:

        Dim searcher = New IndexSearcher(indexDir)
        Dim parser = New QueryParser("Title", New StandardAnalyzer())
        Dim Query = parser.Parse("foo")
        Dim hits = searcher.Search(Query)
        Console.WriteLine("Number of hits = " + hits.Length.ToString)

    No matter how many times I run this, I only ever get one result. Any ideas?
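
    A hedged guess: the third constructor argument True tells IndexWriter to create the index from scratch, wiping what previous runs added, so every run leaves exactly one document behind. Creating only when no index exists yet is one way to accumulate documents (a sketch):

        ' append to an existing index instead of recreating it each run
        Dim create As Boolean = Not IndexReader.IndexExists(indexDir)
        Dim indexWriter = New IndexWriter(indexDir, New Standard.StandardAnalyzer(), create)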

    Read the article

  • Lucene DuplicateFilter question

    - by chardex
    Why doesn't DuplicateFilter work together with other filters? For example, in this slight remake of DuplicateFilterTest, it looks as if the duplicate filter is applied independently of the other filters and trims the results first:

        public void testKeepsLastFilter() throws Throwable {
            DuplicateFilter df = new DuplicateFilter(KEY_FIELD);
            df.setKeepMode(DuplicateFilter.KM_USE_LAST_OCCURRENCE);
            Query q = new ConstantScoreQuery(new ChainedFilter(new Filter[]{
                new QueryWrapperFilter(tq),
                // new QueryWrapperFilter(new TermQuery(new Term("text", "out"))), // works right: it is the last document
                new QueryWrapperFilter(new TermQuery(new Term("text", "now")))     // why doesn't this work? It is the third document
            }, ChainedFilter.AND));
            ScoreDoc[] hits = searcher.search(q, df, 1000).scoreDocs;
            assertTrue("Filtered searching should have found some matches", hits.length > 0);
            for (int i = 0; i < hits.length; i++) {
                Document d = searcher.doc(hits[i].doc);
                String url = d.get(KEY_FIELD);
                TermDocs td = reader.termDocs(new Term(KEY_FIELD, url));
                int lastDoc = 0;
                while (td.next()) {
                    lastDoc = td.doc();
                }
                assertEquals("Duplicate urls should return last doc", lastDoc, hits[i].doc);
            }
        }

    Read the article

  • Domain migration - 301 redirect of all contents of a directory

    - by Trufa
    I would like to know if the following is possible, considering that I would like to migrate domains. I have, let's say:

        one.com/files/one.html
        one.com/files/two.php
        one.com/other/three.html
        one.com/other/four.doc
        one.com/other/subdirectory/five.doc

    I am migrating to two.com, so I would like to set up the respective 301 redirects to the following:

        two.com/old/files/one.html
        two.com/old/files/two.php
        two.com/old/other/three.html
        two.com/old/other/four.doc
        two.com/old/other/subdirectory/five.doc

    I've tried with cPanel, and although I come "close" with the redirects option, I can't seem to make it happen. The folders are not many (10-12), but the files are a lot, and obviously impossible to redirect manually. How would you proceed? Can/should this be done with a regex in the .htaccess? Can you redirect all the elements of a subdirectory in the manner expressed above? I hope the question is clear enough; if not, please ask for any clarification needed! Thanks in advance!
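
    A hedged sketch of the standard approach, placed in one.com's document-root .htaccess (assumes mod_alias or mod_rewrite is enabled; one rule covers every file because $1 carries the full sub-path):

        # mod_alias version
        RedirectMatch 301 ^/(.*)$ http://two.com/old/$1

        # or the mod_rewrite equivalent
        RewriteEngine On
        RewriteRule ^(.*)$ http://two.com/old/$1 [R=301,L]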

    Read the article

  • Absolute XPath to get a list of child nodes?

    - by Googler
    Hi, this is my XML file:

        <?xml version="1.0"?>
        <worldpatentdata xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <meta name="elapsed-time" value="329" xmlns="http://ops.epo.org"/>
          <exchange-documents xmlns="http://www.epo.org/exchange">
            <exchange-document country="AT" doc-number="380509" family-id="38826527" kind="T" system="ops.epo.org">
              <bibliographic-data>
                <publication-reference data-format="docdb">
                  <document-id>
                    <country>AT</country>
                    <doc-number>380509</doc-number>
                    <kind>T</kind>
                    <date>20071215</date>
                  </document-id>
                </publication-reference>
                <parties>
                  <applicants>
                  </applicants>
                  <inventors>
                  </inventors>
                </parties>
              </bibliographic-data>
            </exchange-document>
          </exchange-documents>
        </worldpatentdata>

    For the above XML file, I need the XPath that returns the child nodes below <exchange-documents>; the output I need is the <exchange-document> element and everything inside it. I am using LINQ to XML to get the data. This is my XPath and code:

        var list = doc1.XPathSelectElement("exchange-document");

    I couldn't retrieve the needed output; it returns null for the above code. Can anyone please help by providing the correct XPath to retrieve the child nodes? Or is there any other way to retrieve them?
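
    A hedged diagnosis with a sketch: the elements live in the default namespace http://www.epo.org/exchange, and an XPath name without a prefix never matches a namespaced element, so the query returns null no matter how the path is written. With LINQ to XML you can avoid XPath namespace plumbing entirely:

        // XNamespace-qualified names match the namespaced elements directly
        XNamespace ex = "http://www.epo.org/exchange";
        XElement exchangeDoc = doc1.Descendants(ex + "exchange-document").FirstOrDefault();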

    Read the article

  • xml error: Object reference not set to an instance of an object after SelectSingleNode

    - by every_answer_gets_a_point
    Here's my code:

        XmlDocument doc = new XmlDocument();
        foreach (string c in colorList)
        {
            doc.Load(@"http://whoisxmlapi.com/whoisserver/WhoisService?domainName=" + c + @"&username=user&password=pass");
            textBox1.Text += doc.SelectSingleNode("WhoisRecord/registrant/email").InnerText + ",";
        }

    The second line inside the loop (textBox1...) is generating the "Object reference not set to an instance of an object" error. What am I doing wrong?
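
    A hedged sketch of the usual cause: SelectSingleNode returns null whenever the XPath matches nothing (the path is case-sensitive, and some whois records simply contain no registrant email), and calling .InnerText on that null is what throws. Guarding the lookup makes the failure visible instead:

        XmlNode email = doc.SelectSingleNode("WhoisRecord/registrant/email");
        if (email != null)
            textBox1.Text += email.InnerText + ",";
        // else: inspect doc.OuterXml to see what the service actually returned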

    Read the article

  • How to check whether a given file is in proper Word file format?

    - by shekhar
    I am developing an application in C# for processing MS Word files. My application hangs when I pass an invalid .doc file as input -- for example, a foo.pdf file whose extension I've changed to foo.doc. Is it possible to check whether a file is a valid .doc file before trying to open it? Please enlighten me! Thanks in advance.
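
    A hedged sketch: legacy .doc files are OLE compound documents, which begin with the 8-byte signature D0 CF 11 E0 A1 B1 1A E1, so checking the magic bytes catches a renamed PDF (which starts with "%PDF") before Word ever opens it. Note the same signature also covers .xls and .ppt, so this validates the container, not that the content is specifically Word.

        static bool LooksLikeOleDocument(string path)
        {
            byte[] magic = { 0xD0, 0xCF, 0x11, 0xE0, 0xA1, 0xB1, 0x1A, 0xE1 };
            using (var fs = System.IO.File.OpenRead(path))
            {
                var header = new byte[8];
                if (fs.Read(header, 0, 8) != 8) return false;   // too short to be a .doc
                for (int i = 0; i < 8; i++)
                    if (header[i] != magic[i]) return false;
                return true;
            }
        }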

    Read the article

  • Different analyzers for each field

    - by user72185
    How can I enable a different analyzer for each field in a document I'm indexing with Lucene? Example:

        RAMDirectory dir = new RAMDirectory();
        IndexWriter iw = new IndexWriter(dir, new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_CURRENT),
                                         true, IndexWriter.MaxFieldLength.UNLIMITED);
        Document doc = new Document();
        Field field1 = new Field("field1", someText1, Field.Store.YES, Field.Index.ANALYZED,
                                 Field.TermVector.WITH_POSITIONS_OFFSETS);
        Field field2 = new Field("field2", someText2, Field.Store.YES, Field.Index.ANALYZED,
                                 Field.TermVector.WITH_POSITIONS_OFFSETS);
        doc.Add(field1);
        doc.Add(field2);
        iw.AddDocument(doc);
        iw.Commit();

    The analyzer is an argument to the IndexWriter, but I want to use StandardAnalyzer for field1 and SimpleAnalyzer for field2. How can I do that? The same applies when searching, of course: the correct analyzer must be applied for each field.
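
    A hedged sketch: PerFieldAnalyzerWrapper is Lucene's standard answer -- it delegates each named field to its own analyzer and falls back to a default for everything else, and the same wrapper can be handed to the query parser so searching stays consistent.

        var analyzer = new PerFieldAnalyzerWrapper(
            new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_CURRENT));
        analyzer.AddAnalyzer("field2", new SimpleAnalyzer());

        IndexWriter iw = new IndexWriter(dir, analyzer, true,
            IndexWriter.MaxFieldLength.UNLIMITED);
        // at search time, pass the same wrapper, e.g.:
        // new QueryParser(Lucene.Net.Util.Version.LUCENE_CURRENT, "field1", analyzer)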

    Read the article

  • Prevent ASP.NET from encoding strings on output

    - by Darkwater23
    How can I stop ASP.NET from encoding anchor tags in list items when the page renders? I have a collection of objects; each object has a link property. I did a foreach and tried to output the links in a BulletedList, but ASP.NET encoded all the links. Any idea? Thanks! Here's the offending snippet of code. When the user picks a specialty, I use the SelectedIndexChanged event to clear and add links to the BulletedList:

        if (SpecialtyList.SelectedIndex > 0)
        {
            PhysicianLinks.Items.Clear();
            foreach (Physician doc in docs)
            {
                if (doc.Specialties.Contains(SpecialtyList.SelectedValue))
                {
                    PhysicianLinks.Items.Add(new ListItem("<a href=\"" + doc.Link + "\">" + doc.FullName + "</a>"));
                }
            }
        }
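
    A hedged sketch: BulletedList always HTML-encodes ListItem.Text, but setting DisplayMode to HyperLink makes the control render the anchors itself, using Text as the link text and Value as the href:

        PhysicianLinks.DisplayMode = BulletedListDisplayMode.HyperLink;
        foreach (Physician doc in docs)
        {
            if (doc.Specialties.Contains(SpecialtyList.SelectedValue))
            {
                // ListItem(text, value): value becomes the link target
                PhysicianLinks.Items.Add(new ListItem(doc.FullName, doc.Link));
            }
        }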

    Read the article

  • Using a variable for the tag in getElementsByTagName() with PHP and XML?

    - by Jared
    See my PHP:

        $file = "routingConfig.xml";
        global $doc;
        $doc = new DOMDocument();
        $doc->load( $file );
        $ElTag = "Route";
        $tag = $doc->getElementsByTagName($ElTag);

    My XML is:

        <Routes>
          <Route></Route>
          <Route></Route>
        </Routes>

    The error returned is: Fatal error: Call to a member function getElementsByTagName() on a non-object. I'm not sure how to do this?
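
    A hedged guess with a sketch: that fatal error means $doc is not a DOMDocument at the point of the call, which typically happens when getElementsByTagName() runs inside a function that never declared global $doc (a top-level global statement doesn't reach into functions). Keeping everything in one scope, and checking that load() succeeded, rules both out:

        $doc = new DOMDocument();
        if (!$doc->load("routingConfig.xml")) {   // false if the file failed to load/parse
            die("could not load routingConfig.xml");
        }
        $tag = $doc->getElementsByTagName("Route");
        echo $tag->length;   // should print 2 for the XML above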

    Read the article

  • Python logic for searching a string

    - by Mahmoud A. Raouf
        filtered = []
        text = "any.pdf"
        if "doc" and "pdf" and "xls" and "jpg" not in text:
            filtered.append(text)
        print(filtered)

    This is my first post on Stack Overflow, so excuse me if there's something annoying in the question. The code is supposed to append text if text doesn't include any of these words: doc, pdf, xls, jpg. It works fine if it's written like:

        if "doc" in text:
            pass
        elif "jpg" in text:
            pass
        elif "pdf" in text:
            pass
        elif "xls" in text:
            pass
        else:
            filtered.append(text)
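
    A hedged note on why the first version misbehaves, with the idiomatic fix: "not in" binds tighter than "and", so the condition parses as "doc" and "pdf" and "xls" and ("jpg" not in text); the string literals are all truthy, so only the jpg test ever matters. any() expresses the intended check directly:

        filtered = []
        text = "any.pdf"
        # append only when none of the extensions appear in the text
        if not any(ext in text for ext in ("doc", "pdf", "xls", "jpg")):
            filtered.append(text)
        print(filtered)   # -> [] for "any.pdf"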

    Read the article
