Search Results

Search found 143 results on 6 pages for 'indexer'.

Page 2 of 6

  • How do I automatically rebuild the Sphinx index under django-sphinx?

    - by Apreche
    I just set up django-sphinx, and it is working beautifully. I am now able to search my model and get amazing results. The one problem is that I have to build the index by hand using the indexer command. That means every time I add new content, I have to manually hit the command line to rebuild the search index. That is just not acceptable. I could make a cron job that automatically runs the indexer command every so often, but that's far from optimal. New data won't be indexed until the cron job runs again. In addition, the indexer will run unnecessarily most of the time, as my site doesn't have data being added very often. How do I set it up so that the Sphinx index will automatically rebuild itself whenever data is added to or modified in a searchable Django model?

    Read the article

  • How to Use Windows’ Advanced Search Features: Everything You Need to Know

    - by Chris Hoffman
    You should never have to hunt down a lost file on modern versions of Windows — just perform a quick search. You don’t even have to wait for a cartoon dog to find your files, like on Windows XP. The Windows search indexer is constantly running in the background to make quick local searches possible. This enables the kind of powerful search features you’d use on Google or Bing — but for your local files.

    Controlling the Indexer
    By default, the Windows search indexer watches everything under your user folder — that’s C:\Users\NAME. It reads all these files, creating an index of their names, contents, and other metadata. Whenever they change, it notices and updates its index. The index allows you to quickly find a file based on the data in the index. For example, if you want to find files that contain the word “beluga,” you can perform a search for “beluga” and you’ll get a very quick response as Windows looks up the word in its search index. If Windows didn’t use an index, you’d have to sit and wait as Windows opened every file on your hard drive, looked to see if the file contained the word “beluga,” and moved on. Most people shouldn’t have to modify this indexing behavior. However, if you store your important files in other folders — maybe you store your important data on a separate partition or drive, such as at D:\Data — you may want to add these folders to your index. You can also choose which types of files you want to index, force Windows to rebuild the index entirely, pause the indexing process so it won’t use any system resources, or move the index to another location to save space on your system drive. To open the Indexing Options window, tap the Windows key on your keyboard, type “index”, and click the Indexing Options shortcut that appears. Use the Modify button to control the folders that Windows indexes or the Advanced button to control other options. To prevent Windows from indexing entirely, click the Modify button and uncheck all the included locations. You could also disable the search indexer entirely from the Programs and Features window.

    Searching for Files
    You can search for files right from your Start menu on Windows 7 or Start screen on Windows 8. Just tap the Windows key and perform a search. If you wanted to find files related to Windows, you could perform a search for “Windows.” Windows would show you files that are named Windows or contain the word Windows. From here, you can just click a file to open it. On Windows 7, files are mixed with other types of search results. On Windows 8 or 8.1, you can choose to search only for files. If you want to perform a search without leaving the desktop in Windows 8.1, press Windows Key + S to open a search sidebar. You can also initiate searches directly from Windows Explorer — that’s File Explorer on Windows 8. Just use the search box at the top-right of the window. Windows will search the location you’ve browsed to. For example, if you’re looking for a file related to Windows and know it’s somewhere in your Documents library, open the Documents library and search for Windows.

    Using Advanced Search Operators
    On Windows 7, you’ll notice that you can add “search filters” from the search box, allowing you to search by size, date modified, file type, authors, and other metadata. On Windows 8, these options are available from the Search Tools tab on the ribbon. These filters allow you to narrow your search results.
    If you’re a geek, you can use Windows’ Advanced Query Syntax to perform advanced searches from anywhere, including the Start menu or Start screen. Want to search for “windows,” but only bring up documents that don’t mention Microsoft? Search for “windows -microsoft”. Want to search for all pictures of penguins on your computer, whether they’re PNGs, JPEGs, or any other type of picture file? Search for “penguin kind:picture”. We’ve looked at Windows’ advanced search operators before, so check out our in-depth guide for more information. The Advanced Query Syntax gives you access to options that aren’t available in the graphical interface.

    Creating Saved Searches
    Windows allows you to take searches you’ve made and save them as a file. You can then quickly perform the search later by double-clicking the file. The file functions almost like a virtual folder that contains the files you specify. For example, let’s say you wanted to create a saved search that shows you all the new files created in your indexed folders within the last week. You could perform a search for “datecreated:this week”, then click the Save search button on the toolbar or ribbon. You’d have a new virtual folder you could quickly check to see your recent files.

    One of the best things about Windows search is that it’s available entirely from the keyboard. Just press the Windows key, start typing the name of the file or program you want to open, and press Enter to quickly open it. Windows 8 made this much more obnoxious with its non-unified search, but unified search is finally returning with Windows 8.1.

    Read the article

  • how to uninstall sphinx server

    - by misterjinx
    I installed Sphinx server yesterday (from source) and didn't think about configuring it properly, so I left the default installation directory (/usr/local). Today I realized that it created all of its directories inside that directory, instead of creating its own (as I wrongly thought). So today I started the installation again and explicitly specified the location to install to. After that I removed the previous directories from /usr/local that were created by the first installation (bin, etc, var). Then I thought I'd give it a test. I created the .conf file and wanted to run the indexer. But now the indexer always tries to look for the .conf file in the old location (and obviously does not find anything), and when I specify where to find the .conf file it gives me errors for each of the configuration settings present. I know I did something wrong when I deleted those files manually, and that's why I'm asking you if you have any ideas how to correctly uninstall Sphinx. Thank you.

    Read the article

  • Hosting a magnet link site which could possibly infringe copyrighted material?

    - by Griff
    For the last 3 months I have been building a crawler, an indexer and a lot of other things for what started out as a home project for indexing magnet links on the internet. As my project grew, I have thought about releasing my collected data (which at the minute is on a public domain but with no access) to the public. Whatever the crawler sucks in goes in, and whatever the indexer decides to index gets indexed, as it is a fully automated process. My question is as follows: considering that most of the data that is collected by what I have built points to illegal copyrighted material (as most magnet links do), where would it be best to host such a site? I notice that all of the already public torrent sites are hosted in India; is this because their laws are less strict on copyright infringement? Have any of you hosted such a site, and if so, what problems have you run into? And as always, any advice on being a webmaster for this type of website?

    Read the article

  • Exception when indexing text documents with Lucene, using SnowballAnalyzer for cleaning up

    - by Julia
    Hello!!! I am indexing the documents with Lucene and am trying to apply the SnowballAnalyzer for punctuation and stopword removal from the text. I keep getting the following error :(

    IllegalAccessError: tried to access method org.apache.lucene.analysis.Tokenizer.(Ljava/io/Reader;)V from class org.apache.lucene.analysis.snowball.SnowballAnalyzer

    Here is the code, I would very much appreciate help!!!! I am new with this..

    public class Indexer {
        private Indexer(){};
        private String[] stopWords = {....};
        private String indexName;
        private IndexWriter iWriter;
        private static String FILES_TO_INDEX = "/Users/ssi/forindexing";

        public static void main(String[] args) throws Exception {
            Indexer m = new Indexer();
            m.index("./newindex");
        }

        public void index(String indexName) throws Exception {
            this.indexName = indexName;
            final File docDir = new File(FILES_TO_INDEX);
            if(!docDir.exists() || !docDir.canRead()){
                System.err.println("Something wrong... " + docDir.getPath());
                System.exit(1);
            }
            Date start = new Date();
            PerFieldAnalyzerWrapper analyzers = new PerFieldAnalyzerWrapper(new SimpleAnalyzer());
            analyzers.addAnalyzer("text", new SnowballAnalyzer("English", stopWords));
            Directory directory = FSDirectory.open(new File(this.indexName));
            IndexWriter.MaxFieldLength maxLength = IndexWriter.MaxFieldLength.UNLIMITED;
            iWriter = new IndexWriter(directory, analyzers, true, maxLength);
            System.out.println("Indexing to dir..........." + indexName);
            if(docDir.isDirectory()){
                File[] files = docDir.listFiles();
                if(files != null){
                    for (int i = 0; i < files.length; i++) {
                        try {
                            indexDocument(files[i]);
                        }catch (FileNotFoundException fnfe){
                            fnfe.printStackTrace();
                        }
                    }
                }
            }
            System.out.println("Optimizing...... ");
            iWriter.optimize();
            iWriter.close();
            Date end = new Date();
            System.out.println("Time to index was" + (end.getTime()-start.getTime()) + "miliseconds");
        }

        private void indexDocument(File someDoc) throws IOException {
            Document doc = new Document();
            Field name = new Field("name", someDoc.getName(), Field.Store.YES, Field.Index.ANALYZED);
            Field text = new Field("text", new FileReader(someDoc), Field.TermVector.WITH_POSITIONS_OFFSETS);
            doc.add(name);
            doc.add(text);
            iWriter.addDocument(doc);
        }
    }

    Read the article

  • Django: DatabaseLockError exception with Djapian

    - by jul
    Hi, I've got the exception shown below when executing indexer.update(). I have no idea what to do: it used to work, and now the index database seems "locked". Can anybody help? Thanks.

    Environment:
    Request Method: POST
    Request URL: http://piem.org:8000/restaurant/add/
    Django Version: 1.1.1
    Python Version: 2.5.2
    Installed Applications: ['django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.comments', 'django.contrib.sites', 'django.contrib.admin', 'registration', 'djapian', 'resto', 'multilingual']
    Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.middleware.locale.LocaleMiddleware', 'multilingual.middleware.DefaultLanguageMiddleware')

    Traceback:
    File "/var/lib/python-support/python2.5/django/core/handlers/base.py" in get_response
      92. response = callback(request, *callback_args, **callback_kwargs)
    File "/home/jul/atable/../atable/resto/views.py" in addRestaurant
      639. Restaurant.indexer.update()
    File "/home/jul/python-modules/Djapian-2.3.1-py2.5.egg/djapian/indexer.py" in update
      181. database = self._db.open(write=True)
    File "/home/jul/python-modules/Djapian-2.3.1-py2.5.egg/djapian/database.py" in open
      20. xapian.DB_CREATE_OR_OPEN,
    File "/usr/lib/python2.5/site-packages/xapian.py" in __init__
      2804. _xapian.WritableDatabase_swiginit(self,_xapian.new_WritableDatabase(*args))

    Exception Type: DatabaseLockError at /restaurant/add/
    Exception Value: Unable to acquire database write lock on /home/jul/atable/djapian_spaces/resto/restaurant/resto.index.restaurantindexer: already locked

    Read the article

  • Windows Search Indexing SSD

    - by user654628
    I just bought a new SSD (a Vertex4 from OCZ, 256 GB) and installed Windows 8 with it on my laptop. I am not using an external hard drive to keep extra data (paging, temp files, etc.) because I am using a laptop and do not want to carry one around with me. My question is: if I disable the Windows indexer (the Windows Search service), does Windows still search files from the search UI (the Windows 7 Start menu search / the Metro UI search in Windows 8)? Since the indexer was meant to search for files by indexing them, does this mean that Windows will not search new files? Thanks in advance.

    Read the article

  • .Net search engine architecture and technology choice

    - by shrivb
    I am in the process of designing a search engine for an ASP.NET site. The site currently uses Microsoft Indexing Server to index and search content which ranges from simple text files to MS documents to PDFs. MIS is also used to crawl file servers. MIS in tandem with Index Server Companion crawls for content from external sites. I intend to replace MIS with the indexer/crawler I am trying to build. Since my platform is completely on the Microsoft stack, I can't afford to have a Java application server. Thus Solr, and effectively SolrNet, is ruled out. With this being the context, I have a couple of questions.
    1. Technology choice: I had done my initial investigation and looked at Lucene.Net. There seemed to be 2 issues in using Lucene.Net. First, it can't crawl external content. There doesn't seem to be a direct port of Nutch in .Net. Second, since it is just an indexer, it can't parse various document types. The parsing is left to the developer. So, what would be the best technology choice on the .Net platform to achieve indexing & crawling? Are there any .Net open source libraries available for document parsing?
    2. Architectural pattern: Is there any general architectural pattern or best practice that needs to be followed in designing such a search engine? Thanks in advance.
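
    For reference, a minimal sketch of how the indexing half of such a pipeline might look with Lucene.Net (a Lucene.Net 2.9-era API is assumed; the crawler and the document parser are out of scope and only hinted at here, and ExtractPlainText is a hypothetical placeholder, not a real library call):

    // Minimal Lucene.Net indexing sketch: crawl output goes through a parser,
    // then into an index as url + content fields.
    using System;
    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Documents;
    using Lucene.Net.Index;
    using Lucene.Net.Store;

    class CrawlIndexer
    {
        static void Main()
        {
            var dir = FSDirectory.Open(new System.IO.DirectoryInfo(@"C:\SearchIndex"));
            var analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29);
            var writer = new IndexWriter(dir, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED);

            // In a real system this loop would be fed by whatever fetches pages and files.
            foreach (var source in new[] { "http://example.com/page1", @"\\fileserver\docs\a.pdf" })
            {
                string text = ExtractPlainText(source); // hypothetical parsing step
                var doc = new Document();
                doc.Add(new Field("url", source, Field.Store.YES, Field.Index.NOT_ANALYZED));
                doc.Add(new Field("content", text, Field.Store.NO, Field.Index.ANALYZED));
                writer.AddDocument(doc);
            }

            writer.Optimize();
            writer.Close();
        }

        static string ExtractPlainText(string source)
        {
            // Placeholder: plug in an IFilter wrapper, iTextSharp, the OpenXML SDK, etc. here.
            return "extracted text for " + source;
        }
    }

    The design point this illustrates is that Lucene.Net only wants plain text and fields; the crawl and parse stages stay as separate, swappable components in front of it.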

    Read the article

  • C#: Accessing a Dictionary.Keys Key through a numeric index

    - by Michael Stum
    I'm using a Dictionary<string, int> where the int is a count of the key. Now, I need to access the last-inserted Key inside the Dictionary, but I do not know its name. The obvious attempt: int LastCount = mydict[mydict.keys[mydict.keys.Count]]; does not work, because Dictionary.Keys does not implement a []-indexer. I just wonder if there is any similar class? I thought about using a Stack, but that only stores a string. I could now create my own struct and then use a Stack<MyStruct>, but I wonder if there is another alternative, essentially a Dictionary that implements a []-indexer on the Keys?
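
    One possible workaround, sketched below (an illustration, not the only option): keep the counts in a Dictionary for O(1) lookup and track insertion order in a parallel List, so the last-inserted key is always available by position. The class and member names are made up for the example.

    using System;
    using System.Collections.Generic;

    class CountingDictionary
    {
        private readonly Dictionary<string, int> _counts = new Dictionary<string, int>();
        private readonly List<string> _insertionOrder = new List<string>();

        public int this[string key]
        {
            get { return _counts[key]; }
            set
            {
                if (!_counts.ContainsKey(key))
                    _insertionOrder.Add(key);   // record the order only for new keys
                _counts[key] = value;
            }
        }

        public string LastInsertedKey
        {
            get { return _insertionOrder[_insertionOrder.Count - 1]; }
        }

        public int LastCount
        {
            get { return _counts[LastInsertedKey]; }
        }
    }

    class Demo
    {
        static void Main()
        {
            var d = new CountingDictionary();
            d["apple"] = 1;
            d["banana"] = 5;
            Console.WriteLine("{0} = {1}", d.LastInsertedKey, d.LastCount); // banana = 5
        }
    }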

    Read the article

  • Problem running Thinking Sphinx with Rails 2.3.5

    - by benoror
    Hi, I just installed Sphinx (distro: Arch Linux) by downloading the source. Then I installed the "Thinking Sphinx" plugin for Rails. I followed the official setup page and this screencast from Ryan Bates, but when I try to index the models it gives me this error:

    $ rake thinking_sphinx:index
    (in /home/benoror/Dropbox/Proyectos/cotizahoy)
    Sphinx cannot be found on your system. You may need to configure the following settings in your config/sphinx.yml file:
      * bin_path
      * searchd_binary_name
      * indexer_binary_name
    For more information, read the documentation: http://freelancing-god.github.com/ts/en/advanced_config.html
    Generating Configuration to /home/benoror/Dropbox/Proyectos/cotizahoy/config/development.sphinx.conf
    sh: indexer: command not found

    I tried starting the daemon manually (/usr/bin/sphinx-searchd) and changing the config/sphinx.yml file:

    devlopment:
      searchd_binary_name: sphinx-searchd
      indexer_binary_name: sphinx-indexer

    But it shows the same error. Any ideas?

    Read the article

  • Data denormalization and C# objects DB serialization

    - by Robert Koritnik
    I'm using a DB table with various different entities. This means that I can't have an arbitrary number of fields in it to save all kinds of different entities. Instead I want to save just the most important fields (dates, reference IDs - kind of a foreign key to various other tables, the most important text fields, etc.) and an additional text field where I'd like to store the more complete object data. The most obvious solution would be to use XML strings and store those. The second most obvious choice would be JSON, which is usually shorter and probably also faster to serialize/deserialize... But is it really? My objects also wouldn't need to be strictly serializable, because a JsonSerializer is usually able to serialize anything, even anonymous objects, which may as well be used here. What would be the most optimal solution to this problem?
    Additional info: My DB is highly normalised and I'm using Entity Framework, but for the purpose of having external super-fast fulltext search functionality I'm accepting a bit of DB denormalisation. Just for the info, I'm using SphinxSE on top of MySQL. Sphinx would return row IDs that I would use to quickly query my index-optimised conglomerate table, getting the most important data from it much faster than querying multiple tables all over my DB. My table would have columns like:
    RowID (auto increment)
    EntityID (of the actual entity - but not directly related, because this would have to point to different tables)
    EntityType (so I would be able to get the actual entity if needed)
    DateAdded (record timestamp when it's been added into this table)
    Title
    Metadata (serialized data related to the particular entity type)
    This table would be indexed with the Sphinx indexer. When I search for data using this indexer I would provide a series of EntityIDs and a limit date. The indexer would have to return a very limited, paged amount of RowIDs ordered by DateAdded (descending). I would then just join these RowIDs to my table and get the relevant results. So this won't actually be full text search but a filtering search. Getting RowIDs would be very fast this way, and getting results back from the table would be much faster than EntityID and DateAdded comparisons, even though they would be properly indexed.
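
    A rough sketch of what the denormalised row plus the JSON metadata column could look like (assuming Json.NET as the serializer; the class and method names here are illustrative, not taken from the question):

    using System;
    using Newtonsoft.Json; // assumption: Json.NET is the serializer of choice

    // Only the fields needed for filtering/ordering are real columns; everything
    // entity-specific is flattened into the Metadata text column as JSON.
    public class SearchRow
    {
        public long RowId { get; set; }          // auto increment
        public int EntityId { get; set; }        // id in the entity's own table
        public string EntityType { get; set; }   // e.g. "Article", "Comment"
        public DateTime DateAdded { get; set; }
        public string Title { get; set; }
        public string Metadata { get; set; }     // serialized entity-specific data
    }

    public static class SearchRowFactory
    {
        public static SearchRow From(int entityId, string entityType, string title, object details)
        {
            return new SearchRow
            {
                EntityId = entityId,
                EntityType = entityType,
                DateAdded = DateTime.UtcNow,
                Title = title,
                // Anonymous objects serialize fine, so the caller can hand in
                // whatever shape suits the entity type.
                Metadata = JsonConvert.SerializeObject(details)
            };
        }
    }

    // Usage sketch:
    // var row = SearchRowFactory.From(42, "Article", "My title",
    //     new { Author = "Jane", Tags = new[] { "db", "sphinx" } });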

    Read the article

  • Bind ISet in ASP.NET MVC2

    - by Dmitriy Nagirnyak
    Hi, I am trying to find out what would be the best way to bind the first element of an ISet (Iesi.Collections). So basically I only have to use some kind of collection that has an indexer (and ISet doesn't); then I can write code like this (which works perfectly well): <%: Html.EditorFor(x => x.Company.PrimaryUsers[0].Email) %> But as ISet has no indexer I cannot use it. So how can I then bind the first element of ISet in MVC2? Thanks, Dmitriy.
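
    One workaround sketch (assuming the generic Iesi.Collections.Generic.ISet<T> and a User type as implied by the question; adjust if the non-generic ISet is in play): expose a list-shaped, indexable view over the set from the model, since EditorFor needs an expression with an indexer.

    using System.Collections.Generic;
    using System.Linq;
    using Iesi.Collections.Generic; // assumption: generic ISet<T> from Iesi.Collections

    public class User
    {
        public string Email { get; set; }
    }

    public class Company
    {
        public ISet<User> PrimaryUsers { get; set; }

        // Indexable wrapper for the editor/binder. Note: a set has no guaranteed
        // ordering, so "first" means "first as enumerated"; and binding back on
        // POST still needs a settable collection or a custom model binder.
        public IList<User> PrimaryUsersList
        {
            get { return PrimaryUsers.ToList(); }
        }
    }

    // In the view (sketch):
    // <%: Html.EditorFor(x => x.Company.PrimaryUsersList[0].Email) %>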

    Read the article

  • App only spawns one thread

    - by tipu
    I have what I thought was a thread-friendly app, and after adding some output I've concluded that of the 15 threads I am attempting to run, only one actually runs. I have

    if __name__ == "__main__":
        fhf = FileHandlerFactory()
        tweet_manager = TweetManager("C:/Documents and Settings/Administrator/My Documents/My Dropbox/workspace/trie/Tweet Search Engine/data/partitioned_raw_tweets/raw_tweets.txt.001")
        start = time.time()
        for i in range(15):
            Indexer(tweet_manager, fhf).start()

    Then in my thread entry point, I do

    def run(self):
        print(threading.current_thread())
        self.index()

    That results in this:

    <Indexer(Thread-3, started 1168)>

    So of the 15 threads that I thought were running, I'm only running one. Any idea as to why? Edit: code

    Read the article

  • Problem with dependencies in m2eclipse

    - by MH
    Hi, I have a problem with m2eclipse and Nexus. Normally, when you create a new Maven Project in Eclipse, you can select an Archetype like maven-archetype-quickstart from the Nexus Indexer. Unfortunately, the Nexus Indexer doesn't show anything at all. But the worst part about all this is that the "Add Dependency" menu doesn't work. (For a better understanding: by clicking on the "Dependencies" tab, you can usually click on a button in order to enter a groupId or artifactId. That's the menu I am talking about. If I enter, for example, "junit", it shows no search results.) Does anybody know this issue? How could I fix it? Thanks a million in advance for any help.

    Read the article

  • How should I ReaderWriterLockSlim and Dictionary<MyKeyClass,MyValueClass>?

    - by DayOne
    So the question is: when should I use EnterReadLock() and EnterWriteLock() when accessing the Dictionary?
    1. TryGetValue. I think a ReadLock should be ok here.
    2. Updating the Dictionary using the Indexer, e.g. _dic[existingKey] = NewValue. I think a ReadLock should be OK here.
    3. Adding a new item using the Indexer, e.g. _dic[newKey] = NewValue. I think I need a WriteLock here.
    Thanks in advance!
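
    A sketch of one common way to wrap a Dictionary with ReaderWriterLockSlim (the wrapper class name is made up for the example). Note that in this pattern anything that mutates the dictionary, including assignment through the indexer for an existing key, takes the write lock, because Dictionary is not safe for a write that runs concurrently with readers; only pure reads such as TryGetValue take the read lock.

    using System.Collections.Generic;
    using System.Threading;

    public class LockedCache<TKey, TValue>
    {
        private readonly Dictionary<TKey, TValue> _dic = new Dictionary<TKey, TValue>();
        private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

        public bool TryGetValue(TKey key, out TValue value)
        {
            _lock.EnterReadLock();
            try { return _dic.TryGetValue(key, out value); }
            finally { _lock.ExitReadLock(); }
        }

        public TValue this[TKey key]
        {
            get
            {
                _lock.EnterReadLock();
                try { return _dic[key]; }
                finally { _lock.ExitReadLock(); }
            }
            set
            {
                _lock.EnterWriteLock();
                try { _dic[key] = value; }   // covers both "update existing" and "add new"
                finally { _lock.ExitWriteLock(); }
            }
        }
    }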

    Read the article

  • What is a good stopword in full text indexation?

    - by Benoit
    When you go to Appendix D in the Oracle Text Reference, they provide lists of stopwords used by Oracle Text when indexing table contents. When I see the English list, nothing puzzles me. But the reason why the French list includes moyennant (French for "in view of which"), for example, is unclear. Oracle has probably thought it through more than once before including it. How would you constitute a list of appropriate stopwords if you were to design an indexer?
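
    One purely statistical way to propose stopword candidates (a sketch to illustrate the idea, not how Oracle Text actually built its lists): treat any token whose document frequency exceeds some threshold as carrying little discriminating value, then review the candidates by hand.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class StopwordCandidates
    {
        static void Main()
        {
            string[] docs =
            {
                "the cat sat on the mat",
                "the dog chased the cat",
                "a bird in the hand"
            };

            int total = docs.Length;
            var documentFrequency = new Dictionary<string, int>();

            foreach (var doc in docs)
            {
                // Count each word at most once per document.
                foreach (var word in doc.Split(' ').Distinct())
                {
                    int n;
                    documentFrequency.TryGetValue(word, out n);
                    documentFrequency[word] = n + 1;
                }
            }

            // Words appearing in more than 60% of documents are stopword candidates.
            var candidates = documentFrequency
                .Where(kv => (double)kv.Value / total > 0.6)
                .Select(kv => kv.Key);

            Console.WriteLine(string.Join(", ", candidates)); // e.g. "the, cat"
        }
    }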

    Read the article

  • Passing a parameter so that it cannot be changed – C#

    - by nmarun
    I read this requirement of not allowing a user to change the value of a property passed as a parameter to a method. In C++, as far as I could recall (it’s been over 10 yrs, so I had to refresh my memory), you can pass ‘const’ to a function parameter and this ensures that the parameter cannot be changed inside the scope of the function. There’s no such direct way of doing this in C#, but that does not mean it cannot be done!! Ok, so this ‘not-so-direct’ technique depends on the type of the parameter – a simple property or a collection.

    Parameter as a simple property: This is quite easy (and you might have guessed it already). Bulent Ozkir clearly explains how this can be done here.

    Parameter as a collection property: Obviously the above does not work if the parameter is a collection of some type. Let’s dig in. Suppose I need to create a collection of type KeyTitle as defined below.

    public class KeyTitle
    {
        public int Key { get; set; }
        public string Title { get; set; }
    }

    My class is declared as below:

    public class Class1
    {
        public Class1()
        {
            MyKeyTitleList = new List<KeyTitle>();
        }

        public List<KeyTitle> MyKeyTitleList { get; set; }

        public ReadOnlyCollection<KeyTitle> ReadonlyKeyTitleCollection
        {
            // .AsReadOnly creates a ReadOnlyCollection<> type
            get { return MyKeyTitleList.AsReadOnly(); }
        }
    }

    See the .AsReadOnly() method used in the second property? As MSDN says it: “Returns a read-only IList<T> wrapper for the current collection.” Knowing this, I can implement my code as:

    public static void Main()
    {
        Class1 class1 = new Class1();
        class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" });
        class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" });
        class1.MyKeyTitleList.Add(new KeyTitle { Key = 3, Title = "ghi" });
        class1.MyKeyTitleList.Add(new KeyTitle { Key = 4, Title = "jkl" });

        TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly());

        Console.ReadLine();
    }

    private static void TryToModifyCollection(ReadOnlyCollection<KeyTitle> readOnlyCollection)
    {
        // can only read
        for (int i = 0; i < readOnlyCollection.Count; i++)
        {
            Console.WriteLine("{0} - {1}", readOnlyCollection[i].Key, readOnlyCollection[i].Title);
        }
        // Add() - not allowed
        // even the indexer does not have a setter
    }

    The output is as expected. The below image shows two things. In the first line, I’ve tried to access an element in my read-only collection through an indexer. It shows that the ReadOnlyCollection<> does not have a setter on the indexer. The second line tells that there’s no ‘Add()’ method for this type of collection. The capture below shows there’s no ‘Remove()’ method either, thereby eliminating all ways of modifying a collection. Mission accomplished… right?

    Now, even if you have a collection of a different type, all you need to do is to somehow cast (used loosely) it to a List<> and then do a .AsReadOnly() to get a ReadOnlyCollection of your custom collection type. As an example, if you have an IDictionary<int, string>, you can create a List<T> of this type with a wrapper class (KeyTitle in our case).

    public IDictionary<int, string> MyDictionary { get; set; }

    public ReadOnlyCollection<KeyTitle> ReadonlyDictionary
    {
        get
        {
            return (from item in MyDictionary
                    select new KeyTitle
                    {
                        Key = item.Key,
                        Title = item.Value,
                    }).ToList().AsReadOnly();
        }
    }

    Cool huh?
    Just one thing you need to know about the .AsReadOnly() method: the only way to modify your ReadOnlyCollection<> is to modify the original collection. So doing:

    public static void Main()
    {
        Class1 class1 = new Class1();
        class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" });
        class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" });
        class1.MyKeyTitleList.Add(new KeyTitle { Key = 3, Title = "ghi" });
        class1.MyKeyTitleList.Add(new KeyTitle { Key = 4, Title = "jkl" });
        TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly());

        Console.WriteLine();

        class1.MyKeyTitleList.Add(new KeyTitle { Key = 5, Title = "mno" });
        class1.MyKeyTitleList[2] = new KeyTitle { Key = 3, Title = "GHI" };
        TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly());

        Console.ReadLine();
    }

    gives me the output where the third element’s Title is changed to upper-case and the fifth element also gets displayed, even though we’re still looping through the same ReadOnlyCollection<KeyTitle>.

    Verdict: Now you know of a way to implement ‘Method(const param1)’ in your code!

    Read the article

  • sphinx xmlpipe2 cassandra and ruby 1.9

    - by user369083
    Hi, I've started using Cassandra and I want to index my DB with Sphinx. I wrote a Ruby script which is used as an xmlpipe, and I configured Sphinx to use it:

    source xmlsrc
    {
        type            = xmlpipe2
        xmlpipe_command = /usr/local/bin/ruby /home/httpd/html/app/script/sphinxpipe.rb
    }

    When I run the script from the console the output looks fine, but when I run the indexer Sphinx returns an error:

    $ indexer test_index
    Sphinx 0.9.9-release (r2117)
    Copyright (c) 2001-2009, Andrew Aksyonoff
    using config file '/usr/local/etc/sphinx.conf'...
    indexing index 'test_index'...
    ERROR: index 'test_index': source 'xmlsrc': attribute 'id' required in <sphinx:document> (line=10, pos=0, docid=0).
    total 0 docs, 0 bytes
    total 0.000 sec, 0 bytes/sec, 0.00 docs/sec
    total 0 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
    total 0 writes, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg

    My script is very simple:

    $stdout.sync = true
    puts %{<?xml version="1.0" encoding="utf-8"?>}
    puts %{<sphinx:docset>}
    puts %{<sphinx:schema>}
    puts %{<sphinx:field name="body"/>}
    puts %{</sphinx:schema>}
    puts %{<sphinx:document id="ba32c02e-79e2-11df-9815-af1b5f766459">}
    puts %{<body><![CDATA[aaa]]></body>}
    puts %{</sphinx:document>}
    puts %{</sphinx:docset>}

    I use ruby 1.9.2-head, Ubuntu 10.04, Sphinx 0.9.9. How can I get this to work?

    Read the article

  • Subsonic Access To App.Config Connection Strings From Referenced DLL in Powershell Script

    - by J Wynia
    I've got a DLL that contains Subsonic-generated and augmented code to access a data model. Actually, it is a merge of that original assembly, Subsonic itself and a few other referenced DLLs into a single assembly called "PowershellDataAccess.dll". It should be noted, though, that I've also tried this with each assembly referenced individually in the script, and that doesn't work either. I am then attempting to use the objects and methods in that assembly. In this case, I'm accessing a class that uses Subsonic to load a bunch of records and creates a Lucene index from those records. The problem I'm running into is that the call into the Subsonic method that retrieves data from the database says it can't find the connection string. I'm pointing the AppDomain at the appropriate config file, which does contain that connection string, by name. Here's the script:

    $ScriptDir = Get-Location
    [System.IO.Directory]::SetCurrentDirectory($ScriptDir)
    [Reflection.Assembly]::LoadFrom("PowershellDataAccess.dll")
    [System.AppDomain]::CurrentDomain.SetData("APP_CONFIG_FILE", "$ScriptDir\App.config")
    $indexer = New-Object LuceneIndexingEngine.LuceneIndexGenerator
    $indexer.GeneratePageTemplateIndex("PageTemplateIndex");

    I went digging into Subsonic itself, and the following line in Subsonic is what's looking for the connection string and throwing the exception:

    ConfigurationManager.ConnectionStrings[connectionStringName]

    So, out of curiosity, I created an assembly with a single class that has a single property that just runs that one line to retrieve the connection string name. I created a ps1 that called that assembly and hit that property. That prototype can find the connection string just fine. Anyone have any idea why Subsonic's portion can't seem to see the connection strings?
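
    As a diagnostic aid, here is a small C# sketch that loads a specific config file explicitly and checks whether a named connection string resolves from it. The path and connection string name are illustrative. The idea is that the APP_CONFIG_FILE AppDomain data only takes effect if it is set before the configuration system is first touched in that AppDomain; reading via a mapped configuration sidesteps that ordering question.

    using System;
    using System.Configuration;

    class ConfigCheck
    {
        static void Main(string[] args)
        {
            string configPath = args.Length > 0 ? args[0] : @"C:\scripts\App.config"; // illustrative path
            string name = args.Length > 1 ? args[1] : "MyConnectionString";           // illustrative name

            // Open the named config file directly instead of relying on the
            // AppDomain's default configuration.
            var map = new ExeConfigurationFileMap { ExeConfigFilename = configPath };
            Configuration config =
                ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);

            ConnectionStringSettings cs = config.ConnectionStrings.ConnectionStrings[name];
            Console.WriteLine(cs == null
                ? "Connection string '" + name + "' NOT found in " + configPath
                : "Found: " + cs.ConnectionString);
        }
    }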

    Read the article

  • Most efficient way to check for DBNull and then assign to a variable?

    - by ilitirit
    This question comes up occasionally but I haven't seen a satisfactory answer. A typical pattern is (row is a DataRow): if (row["value"] != DBNull.Value) { someObject.Member = row["value"]; } My first question is which is more efficient (I've flipped the condition): row["value"] == DBNull.Value; // Or row["value"] is DBNull; // Or row["value"].GetType() == typeof(DBNull) // Or... any suggestions? This indicates that .GetType() should be faster, but maybe the compiler knows a few tricks I don't? Second question, is it worth caching the value of row["value"] or does the compiler optimize the indexer away anyway? eg. object valueHolder; if (DBNull.Value == (valueHolder = row["value"])) {} Disclaimers: row["value"] exists. I don't know the column index of the column (hence the column name lookup) I'm asking specifically about checking for DBNull and then assignment (not about premature optimization etc). Edit: I benchmarked a few scenarios (time in seconds, 10000000 trials): row["value"] == DBNull.Value: 00:00:01.5478995 row["value"] is DBNull: 00:00:01.6306578 row["value"].GetType() == typeof(DBNull): 00:00:02.0138757 Object.ReferenceEquals has the same performance as "==" The most interesting result? If you mismatch the name of the column by case (eg. "Value" instead of "value", it takes roughly ten times longer (for a string): row["Value"] == DBNull.Value: 00:00:12.2792374 The moral of the story seems to be that if you can't look up a column by it's index, then ensure that the column name you feed to the indexer matches the DataColumn's name exactly. Caching the value also appears to be nearly twice as fast: No Caching: 00:00:03.0996622 With Caching: 00:00:01.5659920 So the most efficient method seems to be: object temp; string variable; if (DBNull.Value != (temp = row["value"]) { variable = temp.ToString(); } This was a good learning experience.

    Read the article

  • Django sphinx works only after app restart.

    - by Lhiash
    Hi, I've set up django-sphinx in my project, and it works perfectly only for some time. Later it always returns an empty result set. Surprisingly, restarting the Django app fixes it, and search works again, but again only for a short time (or a very limited number of queries). Here's my sphinx.conf:

    source src_questions
    {
        # data source
        type     = mysql
        sql_host = xxxxxx
        sql_user = xxxxxx #replace with your db username
        sql_pass = xxxxxx #replace with your db password
        sql_db   = xxxxxx #replace with your db name
        # these two are optional
        sql_port = xxxxxx
        #sql_sock = /var/lib/mysql/mysql.sock

        # pre-query, executed before the main fetch query
        sql_query_pre = SET NAMES utf8

        # main document fetch query
        sql_query = SELECT q.id AS id, q.title AS title, q.tagnames AS tags, q.html AS text, q.level AS level \
            FROM question AS q \
            WHERE q.deleted=0 \

        # optional - used by command-line search utility to display document information
        sql_query_info = SELECT title, id, level FROM question WHERE id=$id

        sql_attr_uint = level
    }

    index questions
    {
        # which document source to index
        source = src_questions

        # this is path and index file name without extension
        # you may need to change this path or create this folder
        path = /home/rafal/core_index/index_questions

        # docinfo (ie. per-document attribute values) storage strategy
        docinfo = extern

        # morphology
        morphology = stem_en

        # stopwords file
        #stopwords = /var/data/sphinx/stopwords.txt

        # minimum word length
        min_word_len = 3

        # uncomment next 2 lines to allow wildcard (*) searches
        min_infix_len = 1
        enable_star = 1

        # charset encoding type
        charset_type = utf-8
    }

    # indexer settings
    indexer
    {
        # memory limit (default is 32M)
        mem_limit = 64M
    }

    # searchd settings
    searchd
    {
        # IP address on which search daemon will bind and accept
        # optional, default is to listen on all addresses,
        # ie. address = 0.0.0.0
        address = 127.0.0.1

        # port on which search daemon will listen
        port = 3312

        # searchd run info is logged here - create or change the folder
        log = ../log/sphinx.log

        # all the search queries are logged here
        query_log = ../log/query.log

        # client read timeout, seconds
        read_timeout = 5

        # maximum amount of children to fork
        max_children = 30

        # a file which will contain searchd process ID
        pid_file = searchd.pid

        # maximum amount of matches this daemon would ever retrieve
        # from each index and serve to client
        max_matches = 1000
    }

    And here's the search part from views.py:

    content = Question.search.query(keywords)
    if level:
        content = content.filter(level=level)  # level is an array of integers

    There are no errors in any logs, it just isn't returning any results. All help would be most appreciated.

    Read the article

  • How do I overload the square-bracket operator in C#?

    - by Coderer
    DataGridView, for example, lets you do this: DataGridView dgv = ...; DataGridViewCell cell = dgv[1,5]; but for the life of me I can't find the documentation on the index/square-bracket operator. What do they call it? Where is it implemented? Can it throw? How can I do the same thing in my own classes? ETA: Thanks for all the quick answers. Briefly: the relevant documentation is under the "Item" property; the way to overload is by declaring a property like public object this[int x, int y]{ get{...}; set{...} }; the indexer for DataGridView does not throw, at least according to the documentation. It doesn't mention what happens if you supply invalid coordinates. ETA Again: OK, even though the documentation makes no mention of it (naughty Microsoft!), it turns out that the indexer for DataGridView will in fact throw an ArgumentOutOfRangeException if you supply it with invalid coordinates. Fair warning.
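
    To make the "Item property" answer above concrete, here is a sketch of a small 2-D grid class exposing a this[int, int] indexer and throwing ArgumentOutOfRangeException for bad coordinates (mirroring what DataGridView turns out to do); the Grid name and layout are illustrative.

    using System;

    public class Grid<T>
    {
        private readonly T[,] _cells;

        public Grid(int columns, int rows)
        {
            _cells = new T[columns, rows];
        }

        // The indexer: documented on MSDN as the "Item" property.
        public T this[int column, int row]
        {
            get
            {
                Validate(column, row);
                return _cells[column, row];
            }
            set
            {
                Validate(column, row);
                _cells[column, row] = value;
            }
        }

        private void Validate(int column, int row)
        {
            if (column < 0 || column >= _cells.GetLength(0))
                throw new ArgumentOutOfRangeException("column");
            if (row < 0 || row >= _cells.GetLength(1))
                throw new ArgumentOutOfRangeException("row");
        }
    }

    // Usage sketch:
    //   var grid = new Grid<string>(10, 20);
    //   grid[1, 5] = "hello";
    //   string s = grid[1, 5];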

    Read the article

  • Outlook 2007 won't close

    - by Scott Weinstein
    I use Outlook 2007 at home as an IMAP client and RSS feed reader. I have a problem: when I close Outlook, the window exits, but the process remains running. This prevents me from opening Outlook again, and on Win7 it prevents rapid shutdown of my computer. How can I make Outlook 2007 exit for real? Edit: Here's what the add-ins dialog reports. Active: none. Inactive: MS Outlook Mobile Service, MS VBA for Outlook, OneNote Notes for Outlook Items, Outlook Change Notifier, Windows Search Email Indexer.

    Read the article

  • How can I move this file away?

    - by Tim
    I have identified the file that prevents me from further shrinking the C: partition for the Windows 7 OS. By running a shrink query and then checking Event Viewer, I found the file. The last unmovable file appears to be: \ProgramData\Microsoft\Search\Data\Applications\Windows\Projects\SystemIndex\Indexer\CiFiles\00010015.wid::$DATA. I was wondering how to move this file away? I guess I have to move the whole \ProgramData away, not just that file?

    Read the article
