Search Results

Search found 21427 results on 858 pages for 'enterprise search'.


  • How to Look Up Email by Full Name in Active Directory?

    - by Danny
    I want to look up a user's email address in Active Directory. What I have available is the user's full name (e.g. "John Doe" for the user whose email is "[email protected]"). From what I've searched, this comes close to what I'm looking to do -- except that the Filter is set to "SAMAccountName", which is not what I have. Unless I'm misunderstanding, I just need to pick the right attribute and toss the full name in the same way. Unfortunately, I don't know what that attribute is, apparently no one has had to ask about searching for information in this manner, and the list of attributes (msdn.microsoft.com/en-us/library/ms675090(v=VS.85).aspx) is pretty big. Does anyone know how to obtain a user's email address through an Active Directory lookup using the user's full name?
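
    For what it's worth, the full name usually lives in the displayName attribute and the email in mail, so a minimal sketch looks like this. The LDAP path is a placeholder for your own domain, and the attribute choice is an assumption worth verifying against your directory:

        using System.DirectoryServices;

        // Sketch: look up a user's email by display name.
        // "LDAP://DC=example,DC=com" is a placeholder root.
        static string FindEmailByFullName(string fullName)
        {
            using (var root = new DirectoryEntry("LDAP://DC=example,DC=com"))
            using (var searcher = new DirectorySearcher(root))
            {
                // displayName normally holds "John Doe"; cn often does too
                searcher.Filter = "(&(objectCategory=person)(displayName=" + fullName + "))";
                searcher.PropertiesToLoad.Add("mail");
                SearchResult result = searcher.FindOne();
                if (result != null && result.Properties["mail"].Count > 0)
                    return (string)result.Properties["mail"][0];
                return null;
            }
        }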

    Read the article

  • SEO with image link alt text vs standard text-based link

    - by Infiniti Fizz
    Hi, I'm currently developing a website and the main navigation is made up of image links, because the font used for them isn't standard. My client's only worry is whether this will mess up search engine optimization. Can I just add alt text to the images like "link 1", or use the name attribute of the anchor tag? Or would it be better to have the navigation as anchor tags with the names of the links in them, like <a href="...">link 1</a>? I'm new to SEO so I really don't know which to suggest to him. Thanks for your time, InfinitiFizz
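
    If it helps, the usual compromise is a descriptive alt on each image link (descriptive of the destination, not "link 1"); a tiny sketch with made-up file names:

        <!-- descriptive alt text stands in for the link text crawlers would otherwise see -->
        <a href="/services"><img src="nav-services.png" alt="Our Services"></a>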

    Read the article

  • Searching with Linq

    - by Phil
    I have a collection of objects, each with an int Frame property. Given an int, I want to find the object in the collection that has the closest Frame. Here is what I'm doing so far:

        public static void Search(int frameNumber)
        {
            var differences = (from rec in _records
                               select new
                               {
                                   FrameDiff = Math.Abs(rec.Frame - frameNumber),
                                   Record = rec
                               }).OrderBy(x => x.FrameDiff);
            var closestRecord = differences.FirstOrDefault().Record;
            // continue work...
        }

    This is great and everything, except there are 200,000 items in my collection and I call this method very frequently. Is there a relatively easy, more efficient way to do this?
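
    One lower-effort alternative, sketched below: sort the records by Frame once, then binary-search per lookup, which is O(log n) per call instead of sorting 200,000 differences every time. Record stands in for the actual type, and this assumes _records doesn't change between calls:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // class Record { public int Frame; }   // stub for the asker's type

        static Record[] _sorted;   // records ordered by Frame, built once
        static int[] _frames;      // parallel array of frame numbers

        static void BuildIndex(IEnumerable<Record> records)
        {
            _sorted = records.OrderBy(r => r.Frame).ToArray();
            _frames = _sorted.Select(r => r.Frame).ToArray();
        }

        static Record FindClosest(int frameNumber)
        {
            int i = Array.BinarySearch(_frames, frameNumber);
            if (i >= 0) return _sorted[i];                     // exact match
            i = ~i;                                            // first frame greater than the target
            if (i == 0) return _sorted[0];
            if (i == _frames.Length) return _sorted[_frames.Length - 1];
            // pick whichever neighbour is nearer
            return frameNumber - _frames[i - 1] <= _frames[i] - frameNumber
                ? _sorted[i - 1]
                : _sorted[i];
        }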

    Read the article

  • CONTAINSTABLE with wildcard works differently in SQL Server 2005 and SQL Server 2008?

    - by musuk
    I have two identical databases, one on SQL Server 2005 and one on SQL Server 2008. Both have the same SQL_Latin1_General_CP1_CI_AS collation, and the full-text search catalogs have the same settings. The two databases contain a table with the same data, an NTEXT string: "...kræve en forklaring fra miljøminister Connie Hedegaard..." My problem is: CONTAINSTABLE on SQL Server 2008 finds nothing for this query:

        select * from ContainsTable(SearchIndex_7, Content, '"miljø*"') ct

    but SQL Server 2005 works perfectly and finds the record. SQL Server 2008 finds the record with either of these:

        select * from ContainsTable(SearchIndex_7, Content, '"milj*"') ct
        select * from ContainsTable(SearchIndex_7, Content, '"miljøminister"') ct

    What can be the reason for such strange behavior?
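
    One likely suspect is that SQL Server 2008 shipped new word breakers, so the Danish tokenization of "miljøminister" may simply differ between the two versions. On the 2008 box (this DMV does not exist on 2005) you can see exactly what the Danish word breaker (LCID 1030) produces for the term:

        SELECT * FROM sys.dm_fts_parser(N'"miljøminister"', 1030, NULL, 0);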

    Read the article

  • Do I assign different or the same class id to 32-bit and 64-bit versions of the same IFilter?

    - by sharptooth
    I've implemented my own Microsoft Search IFilter. I need two versions of it - 32-bit and 64-bit - for deploying on the corresponding systems. For any given file extension I can only register one IFilter class id, which means I can only use one version on any system. So having two class ids seems useless - it only makes the automatic installer more complex. Do I reuse the same COM class id for both, or do I use different class ids?
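
    If it helps, the registry side is why one CLSID can serve both builds: on x64 Windows the 32-bit registration is redirected into its own registry view. A rough sketch of what ends up in the registry (the GUID and paths are placeholders, not real values):

        ; 64-bit view
        [HKEY_CLASSES_ROOT\CLSID\{00000000-0000-0000-0000-000000000000}\InProcServer32]
        @="C:\\Program Files\\MyFilter\\MyFilter64.dll"

        ; the same CLSID as seen by 32-bit processes (written automatically
        ; when the 32-bit DLL self-registers under WOW64 redirection)
        [HKEY_CLASSES_ROOT\WOW6432Node\CLSID\{00000000-0000-0000-0000-000000000000}\InProcServer32]
        @="C:\\Program Files (x86)\\MyFilter\\MyFilter32.dll"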

    Read the article

  • Drupal bootstrap script: how to get a list of all nodes of type x?

    - by groovehunter
    Hi. I'm creating a custom import and export, at the moment as an external script (via bootstrap); I plan to turn it into a module in a more generic fashion later on. I am building a frontend for Nagios and for our host management and Nagios configuration, by the way. Maybe it will become useful for other environments (network management). Now I need to know: how do I get a list of all nodes of type x? I want to avoid direct SQL. A suggestion I got was to build an RSS feed and parse it, but I access the Drupal DB a dozen times to extract various nodes, so it feels strange to do a web request for this one thing. So what I am looking for, as a newbie Drupal dev, is just a pointer to the basic API for this task. TIA, florian
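
    If this is Drupal 7, EntityFieldQuery keeps you off raw SQL; a sketch, assuming a full bootstrap has already run ('host' is a placeholder content type). On Drupal 6 the usual fallback is a db_query() on the {node} table:

        <?php
        $query = new EntityFieldQuery();
        $result = $query
          ->entityCondition('entity_type', 'node')
          ->entityCondition('bundle', 'host')      // placeholder content type
          ->propertyCondition('status', 1)         // published nodes only
          ->execute();

        $nodes = array();
        if (!empty($result['node'])) {
          $nodes = node_load_multiple(array_keys($result['node']));
        }
        ?>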

    Read the article

  • How to index a string like "aaa.bbb.ddd-fff" in Lucene?

    - by user46703
    Hi, I have to index a lot of documents that contain reference numbers like "aaa.bbb.ddd-fff". The structure can change, but it's always some arbitrary numbers or characters combined with "/", "-", "_" or some other delimiter. The users want to be able to search for any of the substrings like "aaa" or "ddd" and also for combinations like "aaa.bbb" or "ddd-fff". The best I have been able to come up with is to create my own token filter, modeled after the synonym filter in "Lucene in Action", which spits out multiple terms for each input. In my case I return "aaa.bbb", "bbb.ddd", "bbb.ddd-fff" and all other combinations of the substrings. This works pretty well, but when I index large documents (100MB) that contain lots of such strings, I tend to get out-of-memory exceptions because my filter returns multiple terms for each input string. Is there a better way to index these strings?
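
    A different angle worth sketching: instead of emitting every substring combination, tokenize on the delimiters, so "aaa.bbb.ddd-fff" indexes as the four terms aaa, bbb, ddd, fff at consecutive positions. A search for "aaa.bbb" then becomes the phrase query "aaa bbb", and memory use stays flat regardless of reference length. Assuming a Lucene 3.x-era API:

        import java.io.Reader;
        import org.apache.lucene.analysis.CharTokenizer;
        import org.apache.lucene.util.Version;

        public class ReferenceNumberTokenizer extends CharTokenizer {
            public ReferenceNumberTokenizer(Version matchVersion, Reader in) {
                super(matchVersion, in);
            }

            @Override
            protected boolean isTokenChar(int c) {
                // keep letters and digits; '.', '-', '_', '/' all break tokens
                return Character.isLetterOrDigit(c);
            }
        }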

    Read the article

  • Reading directly from the Doctrine Searchable index table

    - by phidah
    I've got a Doctrine table with the Searchable behavior enabled. Whenever a record is created, an index entry is made in another table. I have a model called Entry, and the behavior automatically created the table entry_index. My question now is: how can I - without using the search(...) methods of my model - use the data from this table? I want to create a tag cloud of the words most used, and the data in the index table is exactly what I need.
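
    The Searchable behavior generates a real model class (typically EntryIndex here, with keyword, field, position and id columns - worth verifying against your generated schema), so you can query it like any other table. A sketch of the tag-cloud counts in Doctrine 1.x:

        <?php
        // top 50 most-used words and their occurrence counts
        $wordCounts = Doctrine_Query::create()
            ->select('i.keyword, COUNT(i.keyword) AS cnt')
            ->from('EntryIndex i')
            ->where('i.keyword IS NOT NULL')
            ->groupBy('i.keyword')
            ->orderBy('cnt DESC')
            ->limit(50)
            ->execute(array(), Doctrine_Core::HYDRATE_ARRAY);
        ?>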

    Read the article

  • Determine the folder of a SAS source file

    - by exhuma
    When I open a SAS file in Enterprise Guide and run it, it is executed on the server. The source file itself is located either on the production site or the development site; in both cases, however, it is executed on the same server. I want to be able to tell my script to store results in a relative folder. But if I write something like

        libname lib_out xport "..\tmp\foobar.xpt";

    I get an error, because the working folder of the SAS Enterprise Guide process is not the location of my source file but a folder on the server. And the folder ..\tmp does not exist there. Even if it did, the server process does not have write permission in that folder. I would like to determine from which folder the .sas file was loaded and set the working folder accordingly. In one case it's S:\Development\myproject\sas\foobar.sas and in the other case it's S:\Production\myproject\sas\foobar.sas. Is this possible at all? Or how would you do this?
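
    One avenue, assuming a reasonably recent Enterprise Guide: EG publishes the full path of the open (saved) program in the &_SASPROGRAMFILE macro variable - worth verifying it is populated on your setup. A sketch that peels the folder off and builds the relative libname from it:

        /* &_SASPROGRAMFILE may come through quoted, hence the dequote;
           the regex strips the trailing \foobar.sas to leave the folder */
        %let progpath = %sysfunc(dequote(&_SASPROGRAMFILE));
        %let progdir  = %sysfunc(prxchange(s/[\\\/][^\\\/]*$//, 1, &progpath));
        libname lib_out xport "&progdir.\..\tmp\foobar.xpt";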

    Read the article

  • Clouds, Clouds, Clouds Everywhere, Not a Drop of Rain!

    - by sxkumar
    At the recently concluded Oracle OpenWorld 2012, the center of discussion was clearly Cloud. Over the five action-packed days, I got to meet a large number of customers, and most of them had serious interest in all things cloud. Public Cloud - particularly the Oracle Cloud - clearly got a lot of attention and interest. I think the use cases and the value proposition for public cloud are pretty straightforward. However, when it comes to private cloud, there were some interesting revelations. Well, I shouldn't really call them revelations, since they are pretty consistent with what I have heard from customers at other conferences as well as during 1:1 interactions.

    While the interest in enterprise private cloud remains very high, only a handful of enterprises have truly embarked on a journey to create what the purists would call a true private cloud - with capabilities such as self-service and chargeback/showback. For a large majority, today's reality is simply consolidation and virtualization - and they are quite far from creating an agile, self-service and transparent IT infrastructure, which is what the enterprise cloud is all about. Even the handful who have actually implemented a close-to-real enterprise private cloud have taken an infrastructure-centric approach and are seeing only limited business upside. Quite a few were frank enough to admit that chargeback and self-service isn't something they see an immediate need for. This stands in contrast to the picture painted by all those surveys out there that show a large number of enterprises having already implemented an enterprise private cloud. On the face of it, this seems quite contrary to the observations outlined above. So what exactly is the reality?

    Well, the reality is that there is undoubtedly a huge amount of interest among enterprises in transforming their legacy IT environment - which is often seen as too rigid, too fragmented, and ultimately too expensive - into something more agile, transparent and business-focused. At the same time, however, there is a great deal of confusion among CIOs and architects about how to get there. This isn't very surprising given all the buzz and hype surrounding cloud computing. Every IT vendor claims to have the most unique solution, and there isn't a single IT product out there that does not have a cloud angle to it. Add to this the chatter on the blogosphere, and it will set even a sane mind spinning. Consequently, most enterprises are still struggling to fully understand the concept and value of enterprise private cloud. Even among those who have chosen to move forward relatively early, quite a few have made their decisions based more on vendor influence/preferences than on what their businesses actually need. Clearly, there is a disconnect between the promise of the enterprise private cloud and the current adoption trends. So what is the way forward? I certainly do not claim to have all the answers. But here is a perspective that many cloud practitioners have found useful and thus worth sharing.

    To take a step back, the fundamental premise of the enterprise private cloud is IT transformation. It is the quest to create a more agile, transparent and efficient IT infrastructure that is driven more by business needs than constrained by operational and procedural inefficiencies. It is the new way of delivering and consuming IT services - where the IT organizations operate more like enablers of strategic services rather than just being the gatekeepers of IT resources. In an enterprise private cloud environment, IT organizations are expected to empower the end users via self-service access/control and provide the business stakeholders a transparent view of how the resources are being used, what the cost of delivering a given service is, how well the customers are being served, and so on. But the most important thing to note here is that the enterprise private cloud is not just an IT project; rather, it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment. Surprised? You shouldn't be. Just remember how business users have been at the forefront of public cloud adoption within enterprises - and private cloud is no exception.

    Such a broad-based transformation makes cloud more than a technology initiative. It requires people (organizational) and process changes as well, and these changes are as critical as the choice of the right tools and technology. In my next blog, I will share how essential it is for enterprise cloud technology to go hand in hand with process re-engineering and organizational changes to unlock the true value of enterprise cloud. I am sharing a short video from my session "Managing Your Private Cloud" at Oracle OpenWorld 2012. More videos from this session will be posted at the recently introduced Zero to Cloud resource page. Many other experts on the Oracle enterprise private cloud solution will join me on the "Zero to Cloud" blog and share best practices, deployment tips and information on how to plan, build, deploy, monitor, manage, meter and optimize the enterprise private cloud. We look forward to your feedback and suggestions, and to having an engaging conversation with you on this blog.

    Read the article

  • Searching algorithmics: Parsing and processing a request

    - by James P.
    Say you were to create a search engine that accepts a query statement in the form of a String. The statement can be used to retrieve different types of objects with a given set of characteristics, possibly linked to other objects. In plain English or pseudo-code using an OOP approach, how would you go about parsing and processing statements such as the following to get the series of desired objects?

        get fruit with colour green
        get variety of apples, pears from Andy
        get strawberry with colour "deep red" and origin not Spain
        get total of sales of melons between 2010-10-10 and 2010-12-30
        get last deliverydate of bananas from "Pete" and state not sold

    Hope the question is clear. If not, I'll be more than happy to reformulate. P.S: This isn't homework ;)
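
    One way to get traction, sketched as an invented EBNF-ish grammar that covers the five examples above: pin down the grammar first, tokenize, then map each parsed clause onto a filter/criteria object that the object store can evaluate:

        query      := "get" [aggregate "of"] subject clause*
        aggregate  := "total" | "last" | "variety"
        subject    := noun ("," noun)*
        clause     := "with" property ["not"] value
                    | "from" value
                    | "between" value "and" value
                    | "and" clause
        value      := word | quoted-string | date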

    Read the article

  • Searching for empty methods

    - by Brian McCord
    I am currently working on a security audit/code review of our system. This requires me to check all pages in the system and make sure that the code-behind contains two methods that are used to check security. Sometimes the code in these methods gets commented out to make testing easier. So, my question is: does anyone know an easy way to search the code, make sure the methods are present, and determine which ones have no code or have all their code commented out? It would make my job much easier if I could get a list instead of having to look at every file... I'm sure I could write this myself, but I thought someone may know of something that already exists. Thanks!
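
    In case nothing off the shelf turns up, here is a rough sketch of the roll-your-own version: a regex-based scan (a heuristic, not a C# parser). CheckAuthorization and CheckPermissions are placeholders for the real method names:

        using System;
        using System.IO;
        using System.Text.RegularExpressions;

        class EmptyMethodAudit
        {
            // [^{}]* only matches bodies without nested braces, which is
            // exactly the empty / comment-only case we are hunting for
            static readonly Regex Method = new Regex(
                @"void\s+(CheckAuthorization|CheckPermissions)\s*\([^)]*\)\s*\{(?<body>[^{}]*)\}");

            static void Main(string[] args)
            {
                foreach (var file in Directory.GetFiles(args[0], "*.cs", SearchOption.AllDirectories))
                {
                    string text = File.ReadAllText(file);
                    foreach (Match m in Method.Matches(text))
                    {
                        // strip // and /* */ comments, then see what's left
                        string body = Regex.Replace(m.Groups["body"].Value,
                            @"//[^\n]*|/\*.*?\*/", "", RegexOptions.Singleline);
                        if (body.Trim().Length == 0)
                            Console.WriteLine("{0}: {1} looks empty", file, m.Groups[1].Value);
                    }
                }
            }
        }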

    Read the article

  • Solr Vs. Sphinx in a Ruby project

    - by Robert Ross
    I have a project that is being written on top of the Grape API framework in Ruby (https://github.com/intridea/grape). The problem I'm having is that Thinking Sphinx and Sunspot (the gems used to interface with each search index) have wildly different benchmarks. View the Benchmark Here. We're trying to develop something that is quick and easy to deploy (Solr needs Java). The issue we see right now is mainly that Solr is slower through the Sunspot gem and Sphinx is faster through Thinking Sphinx, because Solr means HTTP REST calls where Sphinx means sockets. Anyone have any experience in either and can explain pitfalls/bonuses? Note: Needs to be deployable to Rails AND non-Rails apps (hence Sunspot). Thanks!

    Read the article

  • Lucene: Fastest way to return the document occurrence of a phrase?

    - by dont say the kid's name
    Hi guys, I am trying to use Lucene (actually PyLucene!) to find out how many documents contain my exact phrase. My code currently looks like this... but it runs rather slow. Does anyone know a faster way to return document counts?

        phraseList = ["some phrase 1", "some phrase 2"]  # etc, a list of phrases...
        countsearcher = IndexSearcher(SimpleFSDirectory(File(STORE_DIR)), True)
        analyzer = StandardAnalyzer(Version.LUCENE_CURRENT)

        for phrase in phraseList:
            query = QueryParser(Version.LUCENE_CURRENT, "contents", analyzer).parse("\"" + phrase + "\"")
            scoreDocs = countsearcher.search(query, 200).scoreDocs
            print "count is: " + str(len(scoreDocs))
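
    One thing that should help straight away, sketched against the snippet above: the TopDocs object returned by search() already carries a totalHits field, so the count never requires materializing scoreDocs (and isn't capped at 200):

        for phrase in phraseList:
            query = QueryParser(Version.LUCENE_CURRENT, "contents", analyzer).parse("\"" + phrase + "\"")
            topDocs = countsearcher.search(query, 1)  # n=1: we only want the count
            print "count is: " + str(topDocs.totalHits)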

    Read the article

  • Are there good reasons not to use an ORM?

    - by hangy
    During my apprenticeship, I have used NHibernate for some smaller projects which I mostly coded and designed on my own. Now, before starting some bigger project, the discussion arose how to design data access and whether or not to use an ORM layer. As I am still in my apprenticeship and still consider myself a beginner in enterprise programming, I did not really try to push my opinion, which is that using an object relational mapper to the database can ease development quite a lot. The other coders in the development team are much more experienced than me, so I think I will just do what they say. :-)

    However, I do not completely understand two of the main reasons for not using NHibernate or a similar project:

    1. One can just build one's own data access objects with SQL queries and copy those queries out of Microsoft SQL Server Management Studio.
    2. Debugging an ORM can be hard.

    So, of course I could just build my data access layer with a lot of SELECTs etc., but here I miss the advantage of automatic joins, lazy-loading proxy classes and a lower maintenance effort if a table gets a new column or a column gets renamed. (Updating numerous SELECT, INSERT and UPDATE queries vs. updating the mapping config and possibly refactoring the business classes and DTOs.) Also, using NHibernate you can run into unforeseen problems if you do not know the framework very well. That could be, for example, trusting the Table.hbm.xml where you set a string's length to be automatically validated. However, I can also imagine similar bugs in a "simple" SqlConnection query based data access layer. Finally, are those arguments mentioned above really a good reason not to utilise an ORM for a non-trivial database based enterprise application? Are there probably other arguments they/I might have missed? (I should probably add that I think this is like the first "big" .NET/C# based application which will require teamwork. Good practices, which are seen as pretty normal on Stack Overflow, such as unit testing or continuous integration, are non-existent here up to now.)

    Read the article

  • Fulltext for InnoDB? Or a good solution for a PHP app

    - by Joshua
    I have a table I want to run a fulltext search on, but it is currently InnoDB and uses a lot of foreign keys for other kinds of queries. Should I make a 1:1 "metadata" table that is MyISAM for fulltext? Also, I am reading some things that say that fulltext corrupts MySQL tables pretty randomly? I dunno, the articles are a couple of years old; maybe they've fixed that in 5+? If not, what's a good solution for searching? Zend_Lucene seems cool but slow, even with caching, for the client's large tables and autocomplete functionality et al.
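
    For what it's worth, the 1:1 MyISAM shadow table you describe is the usual workaround on this era of MySQL (InnoDB itself only gained FULLTEXT later, in 5.6); you keep it in sync from application code or triggers. A sketch with placeholder table and column names:

        CREATE TABLE article_search (
            article_id INT UNSIGNED NOT NULL PRIMARY KEY,
            title      VARCHAR(255) NOT NULL,
            body       TEXT NOT NULL,
            FULLTEXT KEY ft_title_body (title, body)
        ) ENGINE=MyISAM;

        SELECT article_id
        FROM article_search
        WHERE MATCH(title, body) AGAINST('some search words');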

    Read the article

  • .NET Performance: Deep Recursion vs Queue

    - by JeffN825
    I'm writing a component that needs to walk large object graphs, sometimes 20-30 levels deep. What is the most performant way of walking the graph? A. Enqueueing "steps" so as to avoid deep recursion, or B. a DFS (depth-first search), which may step many levels deep and have a "deep" stack trace at times. I guess the question I'm asking is: is there a performance hit in .NET for doing a DFS that causes a "deep" stack trace? If so, what is the hit? And would I be better off with some BFS by means of queueing up steps that would have been handled recursively in a DFS? Sorry if I'm being unclear. Thanks.
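
    For what it's worth, 20-30 frames is far below the default 1 MB thread stack, so recursion depth itself is unlikely to be the bottleneck. But if you'd rather not rely on that, an explicit stack gives DFS order without growing the call stack at all; a sketch where Node is a placeholder type assumed to expose a Children collection:

        using System;
        using System.Collections.Generic;

        // class Node { public List<Node> Children = new List<Node>(); }  // stub

        static void Walk(Node root, Action<Node> visit)
        {
            var stack = new Stack<Node>();
            var seen = new HashSet<Node>();   // guards against cycles in the graph
            stack.Push(root);
            while (stack.Count > 0)
            {
                Node node = stack.Pop();
                if (!seen.Add(node)) continue;
                visit(node);
                foreach (Node child in node.Children)
                    stack.Push(child);
            }
        }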

    Read the article

  • How to find thousands of company names?

    - by schefdev
    How can I find or generate thousands of company names for testing and demo purposes? (Address, phone number, and related information would be nice too.) I've got a system I'm building which includes business contact information. Pretty common, no doubt. My test/demo database currently has randomly generated individuals' names loaded (thanks to a handy IRS spreadsheet I found). This has worked great for internal testing and review purposes, but it looks really odd when shown to prospective customers. I've tried various online public information sources (e.g. EDGAR, and county-based property records searches), but these all require me to manually stitch together the results in blocks of 50 names or so at a time. I could do this, but was really hoping for a search service or data store out there that had this type of information readily searchable and retrievable in very large batches.

    Read the article

  • What's a good algorithm for searching arrays N and M, in order to find elements in N that also exist in M?

    - by GenTiradentes
    I have two arrays, N and M. They are both arbitrarily sized, though N is usually smaller than M. I want to find out what elements in N also exist in M, in the fastest way possible. To give you an example of one possible instance of the program, N is an array 12 units in size, and M is an array 1,000 units in size. I want to find which elements in N also exist in M. (There may not be any matches.) The more parallel the solution, the better. I used to use a hash map for this, but it's not quite as efficient as I'd like it to be. Typing this out, I just thought of running a binary search of M on sizeof(N) independent threads. (Using CUDA) I'll see how this works, though other suggestions are welcome.
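
    A sketch of that exact idea in plain C: sort M once, then binary-search it for every element of N. Each lookup is independent, so the loop parallelizes cleanly (OpenMP here; the same shape maps onto a CUDA kernel, one thread per element of N):

        #include <stdio.h>
        #include <stdlib.h>

        static int cmp_int(const void *a, const void *b)
        {
            int x = *(const int *)a, y = *(const int *)b;
            return (x > y) - (x < y);
        }

        void find_common(const int *n, long n_len, int *m, long m_len)
        {
            qsort(m, m_len, sizeof *m, cmp_int);    /* O(M log M), done once */

            #pragma omp parallel for
            for (long i = 0; i < n_len; i++) {
                if (bsearch(&n[i], m, m_len, sizeof *m, cmp_int))
                    printf("%d found in both\n", n[i]);
            }
        }

        int main(void)
        {
            int n[] = {3, 9, 12};
            int m[] = {1, 3, 5, 7, 9, 11};
            find_common(n, 3, m, 6);
            return 0;
        }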

    Read the article

  • PHP: Find element with certain property value in array

    - by Svish
    I'm sure there is an easy way to do this, but I can't think of it right now. Is there an array function or something that lets me search through an array and find the item that has a certain property value? For example:

        $people = array(
            array('name' => 'Alice', 'age' => 25),
            array('name' => 'Waldo', 'age' => 89),
            array('name' => 'Bob',   'age' => 27),
        );

    How can I find and get Waldo?
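
    Nothing built-in does this in a single call on this era of PHP, so a plain loop is the usual answer; a sketch:

        <?php
        // return the first element whose 'name' matches, or null
        function find_by_name(array $people, $name) {
            foreach ($people as $person) {
                if ($person['name'] === $name) {
                    return $person;
                }
            }
            return null;
        }

        $waldo = find_by_name($people, 'Waldo');
        ?>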

    Read the article

  • Bash: any command to replace strings in text files?

    - by mikez302
    I have a hierarchy of directories containing many text files. I would like to search for a particular text string every time it comes up in one of the files, and replace it with another string. For example, I may want to replace every occurrence of the string "Coke" with "Pepsi". Does anyone know how to do this? I am wondering if there is some sort of Bash command that can do this without having to load all these files in an editor, or come up with a more complex script to do it. I found this page explaining a trick using sed, but it doesn't seem to work in files in subdirectories.
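
    A sketch that does recurse into subdirectories, assuming GNU sed (the -i flag edits in place; use -i.bak instead to keep backups of every changed file):

        find . -type f -name '*.txt' -exec sed -i 's/Coke/Pepsi/g' {} +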

    Read the article

  • Find files in a folder using Java

    - by Sii
    Hello, I'm new here, so be kind to my stupidity :) What I need to do is search a folder, say C:\example, then go through each file and check whether it matches a few starting characters - so files starting temp***.txt, e.g. tempONE.txt and tempTWO.txt. For each file that starts with temp and has the extension .txt, I would like to put that file name into a File file = new File("C:/example/temp***.txt"); so I can then read the file in; the loop then needs to move on to the next file and check it in the same way. I hope this makes sense; thanks for taking the time to view this, I do appreciate it :)
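
    A minimal sketch with java.io.File and a FilenameFilter (class name is made up):

        import java.io.File;
        import java.io.FilenameFilter;

        public class TempFileFinder {
            public static void main(String[] args) {
                File dir = new File("C:/example");
                // keep only names matching temp*.txt
                File[] matches = dir.listFiles(new FilenameFilter() {
                    public boolean accept(File d, String name) {
                        return name.startsWith("temp") && name.endsWith(".txt");
                    }
                });
                if (matches != null) {
                    for (File f : matches) {
                        System.out.println(f.getAbsolutePath());
                        // read the file here, then the loop moves to the next one
                    }
                }
            }
        }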

    Read the article

  • Adding more OR searches with CONTAINS brings query to a crawl

    - by scolja
    I have a simple query that relies on two full-text indexed tables, but it runs extremely slowly when the CONTAINS is combined with any additional OR search. As seen in the execution plan, the two full-text searches crush the performance. If I query with just one of the CONTAINS clauses, or neither, the query is sub-second, but the moment you add OR into the mix the query becomes ill-fated. The two tables are nothing special; they're not overly wide (42 cols in one, 21 in the other; maybe 10 cols are FT-indexed in each) and don't contain very many records (36k in the bigger of the two). I was able to solve the performance by splitting the two CONTAINS searches into their own SELECT queries and then UNIONing the three together. Is this UNION workaround my only hope? Thanks.

        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE a.CollrTeam_Text LIKE '%fa%'
           OR CONTAINS(a.*, '"*fa*"')
           OR CONTAINS(b.*, '"*fa*"')

    Execution plan omitted (I need more reputation before I can post the image).
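
    For reference, the UNION rewrite described above sketched out - each branch gets its own plan, which is what restores the full-text performance (note UNION also de-duplicates CollectionIDs, where the original join could repeat them):

        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE a.CollrTeam_Text LIKE '%fa%'
        UNION
        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE CONTAINS(a.*, '"*fa*"')
        UNION
        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE CONTAINS(b.*, '"*fa*"');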

    Read the article

  • In SQL Server 2008, when would I use a full text index that covered several tables?

    - by Suddy
    I wanted to do a full-text search across several related tables in SQL Server 2008. From browsing this site I've realised the best option is via a view, but initially I thought I was meant to add several tables to the same full-text index via Management Studio. I started to do this and realised the index would have no idea how they were related, so my question is: when would I want to have a full-text index covering several tables in this way? Apologies for the vagueness; I am just trying to satisfy my curiosity after Google let me down.
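
    For the multi-table case, the view route means an indexed view: schema-bound, with a unique clustered index that the full-text index keys off. A sketch with placeholder table and column names, assuming a default full-text catalog already exists:

        CREATE VIEW dbo.OrderSearch WITH SCHEMABINDING AS
        SELECT o.OrderID, o.Notes, c.CompanyName
        FROM dbo.Orders o
        JOIN dbo.Customers c ON c.CustomerID = o.CustomerID;
        GO
        CREATE UNIQUE CLUSTERED INDEX IX_OrderSearch ON dbo.OrderSearch (OrderID);
        GO
        CREATE FULLTEXT INDEX ON dbo.OrderSearch (Notes, CompanyName)
            KEY INDEX IX_OrderSearch;
        GO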

    Read the article
