Search Results

Search found 1124 results on 45 pages for 'indexing'.

  • Excel Interop: Range.FormatConditions.Add throws MissingMethodException

    - by Zach Johnson
    I am writing an application which uses the Microsoft.Office.Interop.Excel assembly to export/import data from Excel spreadsheets. Everything was going fine (except for 1-based indexing and all those optional parameters!) until I tried to use conditional formatting. When I call Range.FormatConditions.Add I get a MissingMethodException telling me that no such method exists. This happens in both Vista and XP. Here's an example of the code that generates the exception:

        //1. Add a reference to Microsoft.Office.Interop.Excel (version 11.0.0.0)
        //2. Compile and run the following code:
        using Microsoft.Office.Interop.Excel;

        class Program
        {
            static void Main(string[] args)
            {
                Application app = new Application();
                Workbook workbook = app.Workbooks[1];
                Worksheet worksheet = (Worksheet)workbook.Worksheets[1];
                Range range = worksheet.get_Range("A1", "A5");
                FormatCondition condition = range.FormatConditions.Add(
                    XlFormatConditionType.xlCellValue,
                    XlFormatConditionOperator.xlBetween,
                    100,
                    200);
            }
        }
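
    A hedged guess, not a confirmed diagnosis: with Office PIAs, a MissingMethodException at a call site often means the installed Excel version exposes a different signature for that member than the version-11 assembly the code was compiled against. Below is a minimal late-binding sketch that bypasses the compile-time signature; the workbook-creation line is also adjusted, since a fresh Application has an empty Workbooks collection:

        using System.Reflection;
        using Microsoft.Office.Interop.Excel;

        class Program
        {
            static void Main(string[] args)
            {
                Application app = new Application();
                Workbook workbook = app.Workbooks.Add(Missing.Value); // Workbooks[1] throws if nothing is open
                Worksheet worksheet = (Worksheet)workbook.Worksheets[1];
                Range range = worksheet.get_Range("A1", "A5");

                // Late-bound call: resolved against the running Excel at
                // run time rather than the interop assembly's compiled signature.
                object conditions = range.FormatConditions;
                object condition = conditions.GetType().InvokeMember(
                    "Add", BindingFlags.InvokeMethod, null, conditions,
                    new object[] {
                        XlFormatConditionType.xlCellValue,
                        XlFormatConditionOperator.xlBetween,
                        "100", "200"
                    });
            }
        }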

  • How can I use Lucene for personal name (first name, last name) search?

    - by os111
    I'm writing a search feature for a database of NFL players. The user enters a search string like "Jason Campbell" or "Campbell" or "Jason". I'm having trouble getting the appropriate results. Which Analyzer should I use when indexing? Which Query when querying? Should I distinguish between first name and last name, or just index the full name string? I'd like the following behavior:

        Query: "Jason Campbell" - Result: exact match for 1 player, Jason Campbell
        Query: "Campbell" - Result: all players with Campbell in their name
        Query: "Jason" - Result: all players with Jason in their name
        Query: "Cambel" [misspelled] - Result: all players with Campbell in their name
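
    One hedged sketch, written in Lucene.Net to match the C# entries on this page (the Java API is analogous; the version and field names are assumptions): index the full name as a single analyzed field so each token is searchable on its own, and add a FuzzyQuery for misspellings:

        using Lucene.Net.Analysis.Standard;
        using Lucene.Net.Documents;
        using Lucene.Net.Index;
        using Lucene.Net.Search;
        using Lucene.Net.Store;
        using Version = Lucene.Net.Util.Version;

        class NameSearchSketch
        {
            static void Main()
            {
                var dir = new RAMDirectory();
                var analyzer = new StandardAnalyzer(Version.LUCENE_30);

                // StandardAnalyzer lowercases and splits on whitespace, so
                // "Jason Campbell" indexes as the terms "jason" and "campbell".
                using (var writer = new IndexWriter(dir, analyzer, IndexWriter.MaxFieldLength.UNLIMITED))
                {
                    var doc = new Document();
                    doc.Add(new Field("name", "Jason Campbell", Field.Store.YES, Field.Index.ANALYZED));
                    writer.AddDocument(doc);
                }

                var searcher = new IndexSearcher(dir, true);

                // "Campbell" or "Jason": one TermQuery per lowercased token.
                var exact = new TermQuery(new Term("name", "campbell"));

                // "Cambel": FuzzyQuery tolerates a small edit distance.
                var fuzzy = new FuzzyQuery(new Term("name", "cambel"), 0.6f);

                var query = new BooleanQuery();
                query.Add(exact, Occur.SHOULD);
                query.Add(fuzzy, Occur.SHOULD);

                TopDocs hits = searcher.Search(query, 10);
            }
        }

    For the exact two-word case, a PhraseQuery over both tokens (or ranking documents that match all tokens first) would put the full "Jason Campbell" hit at the top.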

  • DataReader - hardcode ordinals?

    - by David Neale
    When returning data from a DataReader I would typically use the ordinal reference on the DataReader to grab the relevant column:

        if (dr.HasRows)
            Console.WriteLine(dr[0].ToString());
        // or: dr.GetString(0); or: (string)dr[0];

    I have always done this because I was advised at an early stage that using dr["ColumnName"], or a more elegant way of indexing, causes a performance hit. However, whilst everything is becoming increasingly strongly typed, I feel more uncomfortable with this. I'm also aware that the above does not check for DBNull. How should data be returned from a DataReader?
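
    One common middle ground, sketched here with an illustrative column name: resolve the name to an ordinal once per reader with GetOrdinal, keep the cheap ordinal access inside the row loop, and guard DBNull explicitly:

        using System;
        using System.Data;

        static class ReaderHelper
        {
            public static void PrintNames(IDataReader dr)
            {
                // One name lookup per reader, not one per row.
                int nameOrdinal = dr.GetOrdinal("ColumnName");
                while (dr.Read())
                {
                    string name = dr.IsDBNull(nameOrdinal) ? null : dr.GetString(nameOrdinal);
                    Console.WriteLine(name);
                }
            }
        }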

  • Configuring OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code (java -Djavax.net.debug=ssl LDAPConnector) I get the following exception trace (Java version 1.6.0_17):

        trigger seeding of SecureRandom
        done seeding SecureRandom
        %% No cached client session
        *** ClientHello, TLSv1
        RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 }
        Session ID: {}
        Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SHA, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA]
        Compression Methods: { 0 }
        ***
        Thread-0, WRITE: TLSv1 Handshake, length = 73
        Thread-0, WRITE: SSLv2 client hello message, length = 98
        Thread-0, received EOFException: error
        Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
        Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure
        Thread-0, WRITE: TLSv1 Alert, length = 2
        Thread-0, called closeSocket()
        main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
        javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake]
            at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source)
            at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source)
            at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source)
            at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source)
            at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source)
            at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source)
            at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source)
            at javax.naming.spi.NamingManager.getInitialContext(Unknown Source)
            at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source)
            at javax.naming.InitialContext.init(Unknown Source)
            at javax.naming.InitialContext.<init>(Unknown Source)
            at javax.naming.directory.InitialDirContext.<init>(Unknown Source)
            at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43)
            at LDAPConnector.main(LDAPConnector.java:237)
        Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source)
            at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source)
            at java.io.BufferedInputStream.fill(Unknown Source)
            at java.io.BufferedInputStream.read1(Unknown Source)
            at java.io.BufferedInputStream.read(Unknown Source)
            at com.sun.jndi.ldap.Connection.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Caused by: java.io.EOFException: SSL peer shut down incorrectly
            at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source)
            ... 9 more

    I am able to connect to the same secure LDAP server, however, if I use another version of Java (1.6.0_14). I have created and installed the server certificates in the cacerts of both JREs, as mentioned in this guide -- OpenLDAP with SSL.

    When I run ldapsearch -x on the server I get:

        # extended LDIF
        #
        # LDAPv3
        # base <dc=localdomain> (default) with scope subtree
        # filter: (objectclass=*)
        # requesting: ALL
        #

        # localdomain
        dn: dc=localdomain
        objectClass: top
        objectClass: dcObject
        objectClass: organization
        o: localdomain
        dc: localdomain

        # admin, localdomain
        dn: cn=admin,dc=localdomain
        objectClass: simpleSecurityObject
        objectClass: organizationalRole
        cn: admin
        description: LDAP administrator

        # search result
        search: 2
        result: 0 Success

        # numResponses: 3
        # numEntries: 2

    On running openssl s_client -connect ldap.natraj.com:636 -showcerts, I obtain the self-signed certificate.

    My slapd.conf file is as follows:

        #######################################################################
        # Global Directives:

        # Features to permit
        #allow bind_v2

        # Schema and objectClass definitions
        include         /etc/ldap/schema/core.schema
        include         /etc/ldap/schema/cosine.schema
        include         /etc/ldap/schema/nis.schema
        include         /etc/ldap/schema/inetorgperson.schema

        # Where the pid file is put. The init.d script
        # will not stop the server if you change this.
        pidfile         /var/run/slapd/slapd.pid

        # List of arguments that were passed to the server
        argsfile        /var/run/slapd/slapd.args

        # Read slapd.conf(5) for possible values
        loglevel        none

        # Where the dynamically loaded modules are stored
        modulepath      /usr/lib/ldap
        moduleload      back_hdb

        # The maximum number of entries that is returned for a search operation
        sizelimit 500

        # The tool-threads parameter sets the actual amount of cpu's that is used
        # for indexing.
        tool-threads 1

        #######################################################################
        # Specific Backend Directives for hdb:
        # Backend specific directives apply to this backend until another
        # 'backend' directive occurs
        backend         hdb

        #######################################################################
        # Specific Backend Directives for 'other':
        # Backend specific directives apply to this backend until another
        # 'backend' directive occurs
        #backend        <other>

        #######################################################################
        # Specific Directives for database #1, of type hdb:
        # Database specific directives apply to this databasse until another
        # 'database' directive occurs
        database        hdb

        # The base of your directory in database #1
        suffix          "dc=localdomain"

        # rootdn directive for specifying a superuser on the database. This is needed
        # for syncrepl.
        rootdn          "cn=admin,dc=localdomain"

        # Where the database file are physically stored for database #1
        directory       "/var/lib/ldap"

        # The dbconfig settings are used to generate a DB_CONFIG file the first
        # time slapd starts. They do NOT override existing an existing DB_CONFIG
        # file. You should therefore change these settings in DB_CONFIG directly
        # or remove DB_CONFIG and restart slapd for changes to take effect.

        # For the Debian package we use 2MB as default but be sure to update this
        # value if you have plenty of RAM
        dbconfig set_cachesize 0 2097152 0

        # Sven Hartge reported that he had to set this value incredibly high
        # to get slapd running at all. See http://bugs.debian.org/303057 for more
        # information.

        # Number of objects that can be locked at the same time.
        dbconfig set_lk_max_objects 1500
        # Number of locks (both requested and granted)
        dbconfig set_lk_max_locks 1500
        # Number of lockers
        dbconfig set_lk_max_lockers 1500

        # Indexing options for database #1
        index objectClass eq

        # Save the time that the entry gets modified, for database #1
        lastmod on

        # Checkpoint the BerkeleyDB database periodically in case of system
        # failure and to speed slapd shutdown.
        checkpoint      512 30

        # Where to store the replica logs for database #1
        # replogfile    /var/lib/ldap/replog

        # The userPassword by default can be changed
        # by the entry owning it if they are authenticated.
        # Others should not be able to see it, except the
        # admin entry below
        # These access lines apply to database #1 only
        access to attrs=userPassword,shadowLastChange
                by dn="cn=admin,dc=localdomain" write
                by anonymous auth
                by self write
                by * none

        # Ensure read access to the base for things like
        # supportedSASLMechanisms. Without this you may
        # have problems with SASL not knowing what
        # mechanisms are available and the like.
        # Note that this is covered by the 'access to *'
        # ACL below too but if you change that as people
        # are wont to do you'll still need this if you
        # want SASL (and possible other things) to work
        # happily.
        access to dn.base="" by * read

        # The admin dn has full write access, everyone else
        # can read everything.
        access to *
                by dn="cn=admin,dc=localdomain" write
                by * read

        # For Netscape Roaming support, each user gets a roaming
        # profile for which they have write access to
        #access to dn=".*,ou=Roaming,o=morsnet"
        #        by dn="cn=admin,dc=localdomain" write
        #        by dnattr=owner write

        #######################################################################
        # Specific Directives for database #2, of type 'other' (can be hdb too):
        # Database specific directives apply to this databasse until another
        # 'database' directive occurs
        #database       <other>

        # The base of your directory for database #2
        #suffix         "dc=debian,dc=org"

        #######################################################################
        # SSL:
        # Uncomment the following lines to enable SSL and use the default
        # snakeoil certificates.
        #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
        #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
        TLSCipherSuite TLS_RSA_AES_256_CBC_SHA
        TLSCACertificateFile /etc/ldap/ssl/server.pem
        TLSCertificateFile /etc/ldap/ssl/server.pem
        TLSCertificateKeyFile /etc/ldap/ssl/server.pem

    My ldap.conf file is:

        #
        # LDAP Defaults
        #
        # See ldap.conf(5) for details
        # This file should be world readable but not world writable.

        HOST    ldap.natraj.com
        PORT    636
        BASE    dc=localdomain
        URI     ldaps://ldap.natraj.com

        TLS_CACERT /etc/ldap/ssl/server.pem
        TLS_REQCERT allow

        #SIZELIMIT      12
        #TIMELIMIT      15
        #DEREF          never

    Why is it that I can connect to the same server using one version of the JRE, while I cannot with another?

  • Excel VBA: moving a non-contiguous range selection to an array

    - by Russ Urquhart
    In the situation where the user selects two non-contiguous column ranges, I wrote the following:

        Dim count As Long
        Dim points As Variant
        Dim i As Long

        Set user_range = ActiveWindow.RangeSelection
        count = user_range.Count / 2
        ReDim points(1 To count, 1 To 2)
        For i = 1 To count
            MsgBox "value is " & user_range.Areas.Item(1).Value(i, 1)
            points(i, 1) = user_range.Areas.Item(1).Value(i, 1)
            points(i, 2) = user_range.Areas.Item(2).Value(i, 1)
        Next i

    But I get an object error when I try this. Am I indexing Value wrong? This should work, right? Is there an easier way to do this? Any help is greatly appreciated! Thanks, Russ

  • SQL Server 2000 tables

    - by klork
    We currently have an SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field. The table has a clustered index on memberid. The table is now about 200 million rows. Indexing and maintenance are becoming issues. We are debating splitting the table into a one-table-per-user model. This would imply that we would end up with a very large number of tables, potentially up to 2,147,483,647, considering just positive values. My questions:

        1. Does anyone have any experience with a SQL Server (2000/2005) installation with millions of tables?
        2. What are the implications of this architecture with regards to maintenance and access using Query Analyzer, Enterprise Manager, etc.?
        3. What are the implications of having such a large number of indexes in a database instance?

    All comments are appreciated. Thanks

  • (Oracle Performance) Will a query based on a view limit the view using the where clause?

    - by BestPractices
    In Oracle (10g), when I use a View (not a Materialized View), does Oracle take into account the where clause when it executes the view? Let's say I have:

        MY_VIEW = SELECT * FROM PERSON P, ORDERS O WHERE P.P_ID = O.P_ID

    And I then execute the following:

        SELECT * FROM MY_VIEW WHERE MY_VIEW.P_ID = '1234'

    When this executes, does Oracle first execute the query for the view and THEN filter it based on my where clause (WHERE MY_VIEW.P_ID = '1234'), or does it do this filtering as part of the execution of the view? If it does not do the latter, and P_ID had an index, would I also lose out on the indexing capability, since Oracle would be executing my query against the view (which doesn't have the index) rather than the base table (which has the index)?

  • Is Lucene.Net a good choice for website search of a 1M-item product database? (giving up on SQL Server)

    - by Pete Alvin
    We currently have SQL Server 2005 in production and we use its full-text search for an eCommerce site search of a million-product database. I've optimized it as much as possible (I think) and we're still seeing search times of five seconds. (We don't need site crawl or PDF (etc.) document indexing features... JUST "Google" speed for site search.) I was going to buy dtSearch but now I realize I can just use Lucene.Net and save the $2,500 for a two-server license. I read in a post that Lucene.Net is not good for website searches. Has anyone else used Lucene.Net from ASP.NET? Does it take a lot of memory? Any problems? Any comments?
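
    On the speed and memory questions, one detail worth checking (a sketch under the assumption of a local on-disk index; the path and class name are illustrative): Lucene.Net is typically fast from ASP.NET only when a single IndexSearcher is opened once and shared across requests, because opening the index is the expensive part and concurrent searches on a shared searcher are safe:

        using System.IO;
        using Lucene.Net.Search;
        using Lucene.Net.Store;

        public static class SearchHolder
        {
            // Opened once per application domain; reused by all request threads.
            public static readonly IndexSearcher Searcher =
                new IndexSearcher(FSDirectory.Open(new DirectoryInfo(@"C:\ProductIndex")), true);
        }

    A shared searcher holds the term index and norms in memory, typically a small fraction of the on-disk index size.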

  • Slow insert speed in Postgresql memory tablespace

    - by Prashant
    Hi, I have a requirement where I need to store records at a rate of 10,000 records/sec into a database (with indexing on a few fields). The number of columns in one record is 25. I am doing a batch insert of 100,000 records in one transaction block. To improve the insertion rate, I changed the tablespace from disk to RAM. With that I am able to achieve only 5,000 inserts per second. I have also done the following tuning in the postgres config:

        Indexes : no
        fsync : false
        logging : disabled

    Other information:

        Tablespace : RAM
        Number of columns in one row : 25 (mostly integers)
        CPU : 4 core, 2.5 GHz
        RAM : 48 GB

    I am wondering why a single insert query is taking around 0.2 msec on average when the database is not writing anything to disk (as I am using a RAM-based tablespace). Is there something I am doing wrong? Help appreciated. Prashant

  • Lucene.Net PrefixQuery

    - by Sole
    Hi, I'm developing a suggest box for my site's search service. It has to search fields like these:

        Visual Basic Enterprise Edition
        Visual C++
        Visual J++

    My code is:

        Directory dir = Lucene.Net.Store.FSDirectory.GetDirectory("Index", false);
        IndexSearcher searcher = new Lucene.Net.Search.IndexSearcher(dir, true);
        Term term = new Term("nombreAnalizado", _que);
        PrefixQuery query = new PrefixQuery(term);
        TopDocs topDocs = searcher.Search(query, 10000);

    This code works well in this case: "Enterprise" will match "Visual Basic Enterprise Edition". But "Enterprise E" doesn't match anything. I removed white spaces at indexing time and when the user is searching. Thanks.
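
    A possible explanation, inferred from the snippet rather than confirmed: PrefixQuery matches the prefix of a single indexed term, and terms never contain spaces, so any query string with a space in it ("Enterprise E") can match nothing. A hedged sketch of one workaround is to index each name as a single untokenized, lowercased term (field names here are illustrative):

        using Lucene.Net.Analysis;
        using Lucene.Net.Documents;
        using Lucene.Net.Index;
        using Lucene.Net.Search;
        using Lucene.Net.Store;

        class SuggestSketch
        {
            static void Main()
            {
                var dir = new RAMDirectory();

                // KeywordAnalyzer keeps each field value as one term.
                using (var writer = new IndexWriter(dir, new KeywordAnalyzer(), IndexWriter.MaxFieldLength.UNLIMITED))
                {
                    foreach (var nombre in new[] { "Visual Basic Enterprise Edition", "Visual C++", "Visual J++" })
                    {
                        var doc = new Document();
                        doc.Add(new Field("nombreCompleto", nombre.ToLowerInvariant(),
                                          Field.Store.YES, Field.Index.NOT_ANALYZED));
                        writer.AddDocument(doc);
                    }
                }

                var searcher = new IndexSearcher(dir, true);

                // "visual basic e" now matches "Visual Basic Enterprise Edition".
                // A prefix still has to start at the first word, though, so
                // "enterprise e" would need edge n-grams or per-word variants.
                var query = new PrefixQuery(new Term("nombreCompleto", "visual basic e"));
                TopDocs topDocs = searcher.Search(query, 10000);
            }
        }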

  • Creating a short unique string for each unique long string

    - by king.net
    I'm trying to create a URL shortener system in C# and ASP.NET MVC. I know about hashtables and I know how to create a redirect system, etc. The problem is indexing long URLs in the database. Some URLs may be up to 4,000 characters long, and it seems like a bad idea to index that kind of string. The question is: how can I create a unique short string for each URL? For example, can MD5 help me? Is MD5 really unique for each string? NOTE: I see that Gravatar uses MD5 for emails, so if each email address is unique, then its MD5-hashed value is unique. Is that right? Can I use the same solution for URLs?
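
    For illustration, a minimal sketch of computing a fixed-width digest column in C#. One caveat: MD5 is not unique in principle (distinct strings can collide), but an accidental collision between two URLs is astronomically unlikely, which is why schemes like Gravatar's can key on it in practice:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class UrlHasher
        {
            public static string Md5Hex(string url)
            {
                using (MD5 md5 = MD5.Create())
                {
                    byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(url));
                    var sb = new StringBuilder(hash.Length * 2);
                    foreach (byte b in hash)
                        sb.Append(b.ToString("x2")); // 32 lowercase hex chars, fixed width
                    return sb.ToString();
                }
            }
        }

    Indexing this 32-character column (with a uniqueness check on insert) avoids indexing the 4,000-character URL itself.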

  • Deleting index from Solr using solrj as a client

    - by Azhar
    Hello, I am using solrj as a client for indexing documents on the Solr server. I am having a problem while deleting the indexes by 'id' from the Solr server. I am using the following code to delete the indexes:

        server.deleteById("id:20");
        server.commit(true, true);

    After this, when I search again for the documents, the search result still contains the above document. I don't know what is going wrong with this code. Please help me out with this issue. Thanks!

  • Which full-text search package should I use for SQLite3?

    - by Benjamin Pollack
    SQLite3 appears to come with three different full-text search engines, called FTS1, FTS2, and FTS3. The documentation available on the website mentions that FTS1 is stable, FTS2 is in development, and that you should use FTS2. Examples I find online use FTS3, which is in CVS and not documented, versus FTS2. None of the full-text search engines come with the amalgamated source, as near as I can tell. So, my question: which of these three engines, if any, should I use for full-text indexing in SQLite? Or should I simply use a third-party tool like Sphinx, or a custom solution in Lucene, instead?

  • Forcing a page to load itself in another page's iframe

    - by bufferz
    I'm a fairly inexperienced web designer learning CSS/HTML on the fly for a company's web site. I want to keep menus, banners, etc. in one document so I don't have to repeat updates across many documents. My solution was to make an index.aspx file with the menus and headers, then have a simple iframe for content. This works quite well and updates easily. The format is:

        www.site.com/index.aspx?page=page1.htm
        www.site.com/index.aspx?page=page2.htm

    The problem: Google is indexing my iframe'd pages and linking to them directly:

        www.site.com/page1.htm
        www.site.com/page2.htm

    How can I set up redirects so that page1.htm forwards to index.aspx?page=page1.htm without creating a "hall of mirrors" effect inside the original iframe?

  • Understanding memory and CPU speed

    - by tipu
    Firstly, I am working on a Windows XP 64-bit machine with 4 GB RAM and 2.29 GHz x4. I am indexing 220,000 lines of text that are more or less the same length. These are divided into 15 equally sized files. File 1/15 takes 1 minute to index, but as the script indexes more files it seems to take much longer, with file 15/15 taking 40 minutes. My understanding is that the more I put in memory, the faster the script is. The dictionary is indexed in a hash, so fetch operations should be O(1). I am not sure where the script would be hanging the CPU. I have the script here.

  • Is it ever a bad idea to publish a sitemap for a blog?

    - by mipadi
    I have a blog, and I have been considering publishing a sitemap for it, which would include the index page, archives page, and an entry for each individual blog post. Is this ever a bad idea? Is it a good (or useful) idea? I'm particularly interested in the <changefreq> element: I edit posts from time to time, and while that's not a common occurrence, I don't want to set a particularly infrequent change frequency that prevents search engines like Google from indexing the edits. (The sitemaps protocol says that search engines may still crawl the pages more frequently, but has no further details on the matter.)
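
    For reference, a single entry in a sitemap looks like this (the URL and values are hypothetical). The protocol defines <changefreq> as a hint rather than a command, so a monthly value would not stop a crawler from picking up occasional edits, especially if <lastmod> is kept current:

        <url>
          <loc>http://example.com/blog/a-post/</loc>
          <lastmod>2010-05-01</lastmod>
          <changefreq>monthly</changefreq>
        </url>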

  • Which metadata should I save when downloading web pages?

    - by Vojtech R.
    Hi, I'm going to download (for future purposes of language processing) some thousands of webpages. Now I'm wondering which metadata I should save. I've explored this, but I don't want to neglect something important.

        <title>
        <link>
        <publish_date>
        <date_downloaded>
        <source>           // of this page
        <keyword>          // for Solr indexing
        <text>             // cleaned body of page

    Is there something important that I could miss in the future?

  • Thinking Sphinx - sorting by a string attribute gets out of sync when changes are made

    - by Scott Brown
    I have a "restaurants" table with a "name" column. I've defined the following index:

        indexes "REPLACE(UPPER(restaurants.name), 'THE ', '')",
                :as => :restaurant_name, :sortable => true

    ...because I want to sort the restaurant names without respect to the prefix "The ". My problem is that whenever one of these records is updated (in any way), the new record jumps to the top of the sort order. If another record is updated, it also jumps ahead of the rest. I end up with two lists: a list of restaurants that have been updated since the last re-indexing, and a list of those that haven't. Each respective list is in alphabetical order, but I don't understand why the overall list is getting segregated this way. I do have a delayed delta index set up, and I assume the issue is related to this.

  • Profiling statements inside a User-Defined Function

    - by Craig Walker
    I'm trying to use SQL Server Profiler (2005) to track down some application performance problems. One of the calls being made is to a table-valued user-defined function. This function wraps a select that joins several tables together. In SQL Server Profiler, the call to the UDF is logged. However, the select that underlies the UDF isn't being logged at all. Because of this, I'm not getting useful data on which tables & indexes are being hit. I'd like to feed this info into the Database Tuning Advisor for some indexing advice. Is there any way (short of unwrapping the queries themselves) to log the tables called by UDFs in Profiler?

  • haystack's RealTimeSearchIndex causes django to hang on data entry

    - by lsc
    I'm using django-haystack and a Xapian backend with real-time indexing (haystack.indexes.RealTimeSearchIndex) of model data, and it works fine on my Ubuntu server. However, it causes Django to hang upon data entry when I deploy the app on a RHEL5 server. Everything is hunky-dory if I switch to a standard SearchIndex. Running ./manage.py rebuild_index manually works fine too. The major differences between the two setups would be the versions of Python (2.4.3 vs 2.6.4) and of Xapian (1.0.4-1 vs 1.0.15). Any suggestions on what may be the problem? Nothing interesting appears in the logs, and I've tried different databases (mysql, sqlite3) and deployment methods (mod_python, wsgi) with no luck yet. I have noted the warning in the haystack docs stating that RealTimeSearchIndex is only handled gracefully with a Solr backend; however, I'm running a site with only occasional writes, so I'm fine with some CPU overhead on writes.

  • Custom Lucene Sharding with Hibernate Search

    - by Timo Westkämper
    Has anyone experience with custom Lucene sharding/partitioning using Hibernate Search? The documentation of Hibernate Search says the following about Lucene sharding:

        In some cases, it is necessary to split (shard) the indexing data of a given
        entity type into several Lucene indexes. This solution is not recommended
        unless there is a pressing need because by default, searches will be slower
        as all shards have to be opened for a single search.

    In other words, don't do it until you have problems :) Has anyone implemented sharding for Hibernate Search in such a way that queries, too, can be targeted at one of the shards? In our case we have Lucene queries that should target only one shard per query.

  • C# ArrayList calling on a constructor class

    - by EvanRyan
    I'm aware that an ArrayList is probably not the way to go with this particular situation, but humor me and help me lose this headache. I have a constructor class like follows:

        class Peoples
        {
            public string LastName;
            public string FirstName;

            public Peoples(string lastName, string firstName)
            {
                LastName = lastName;
                FirstName = firstName;
            }
        }

    And I'm trying to build an ArrayList as a collection by calling on this constructor. However, I can't seem to find a way to build the ArrayList properly when I use this constructor. I have figured it out with an Array, but not an ArrayList. I have been messing with this to try to build my ArrayList:

        ArrayList people = new ArrayList();
        people[0] = new Peoples("Bar", "Foo");
        people[1] = new Peoples("Quirk", "Baz");
        people[2] = new Peoples("Get", "Gad");

    My indexing is apparently out of range, according to the exception I get.
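
    For what it's worth, the symptom matches how ArrayList behaves: the indexer assigns to an existing slot, and a freshly constructed ArrayList has Count == 0, so people[0] = ... is out of range. A sketch of the usual pattern, reusing the Peoples class above:

        using System.Collections;

        class Demo
        {
            static void Main()
            {
                ArrayList people = new ArrayList();

                // Add appends and grows the list; the indexer can only
                // replace elements that already exist.
                people.Add(new Peoples("Bar", "Foo"));
                people.Add(new Peoples("Quirk", "Baz"));
                people.Add(new Peoples("Get", "Gad"));
            }
        }

    (A generic List<Peoples> would give the same Add pattern with type safety.)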

  • Non-Relational DBMS Design Resources

    - by Matt Luongo
    Hey guys, As a personal project, I'm looking to build a rudimentary DBMS. I've read the relevant sections in Elmasri & Navathe (5th ed.), but could use a more focused text. The rub is that I want to play with novel non-relational data models. While a lot of E&N was great (the indexing implementation details in particular), its more advanced DBMS implementation material targets only the relational model. I could also use something a bit more practical and detail-oriented, with real-world recommendations. I'd like to defer staring at DBMS source for a while if I can. Any ideas?

  • MySQL - how long to create an index?

    - by user293594
    Can anyone tell me how adding a key scales in MySQL? I have 500,000,000 rows in a table, trans, with columns i (INT UNSIGNED), j (INT UNSIGNED), nu (DOUBLE), and A (DOUBLE). I try to index a column, e.g.:

        ALTER TABLE trans ADD KEY idx_A (A);

    and I wait. For a table of 14,000,000 rows it took about 2 minutes to execute on my MacBook Pro, but for the whole half a billion it's taking 15 hrs and counting. Am I doing something wrong, or am I just being naive about how indexing a database scales with the number of rows?

  • SharePoint thinks new Server is offline?

    - by emalamisura
    I added an index server to the topology and tried to switch it from the front end to the index server, but for some reason I get an exception every time in Central Admin saying the server is offline. I checked everything:

        IIS is running and is serving out correctly
        The App Pool identity is valid
        I restarted IIS
        I restarted the Indexing server

    I had a weird problem when installing SP2 for SharePoint (WSS SP2 installed fine), where it got stuck at "Upgrading SharePoint Products and Technologies" and I had to run the following command to get it to continue:

        psconfig -cmd upgrade -inplace b2b -wait -force

    After that it said it succeeded with no errors... The only thing I haven't done is reboot the primary server, because I have to do that in off hours, but I want to make sure I have covered all my other bases first... Any ideas?
