Search Results

Search found 20088 results on 804 pages for 'binary search trees'.


  • Recommendations for a C++ polymorphic, seekable, binary I/O interface

    - by Trevor Robinson
    I've been using std::istream and ostream as a polymorphic interface for random-access binary I/O in C++, but it seems suboptimal in numerous ways:

    - 64-bit seeks are non-portable and error-prone due to streampos/streamoff limitations; I'm currently using boost/iostreams/positioning.hpp as a workaround, but it requires vigilance.
    - Missing operations such as truncating or extending a file (a la POSIX ftruncate).
    - Inconsistency between concrete implementations; e.g. stringstream has independent get/put positions whereas filestream does not.
    - Inconsistency between platform implementations; e.g. the behavior of seeking past the end of a file, or the usage of failbit/badbit on errors.
    - I don't need all the formatting facilities of stream, or possibly even the buffering of streambuf.
    - streambuf error reporting (i.e. exceptions vs. returning an error indicator) is supposedly implementation-dependent in practice.

    I like the simplified interface provided by the Boost.Iostreams Device concept, but it's provided as function templates rather than a polymorphic class. (There is a device class, but it's not polymorphic; it's just an implementation helper class that is not necessarily used by the supplied device implementations.) I'm primarily using large disk files, but I really want polymorphism so I can easily substitute alternate implementations (e.g. use stringstream instead of fstream for unit tests) without all the complexity and compile-time coupling of deep template instantiation. Does anyone have any recommendations of a standard approach to this? It seems like a common situation, so I don't want to invent my own interfaces unnecessarily. As an example, something like java.nio.FileChannel seems ideal. My best solution so far is to put a thin polymorphic layer on top of Boost.Iostreams devices. For example:

        class my_istream {
        public:
            virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way) = 0;
            virtual std::streamsize read(char* s, std::streamsize n) = 0;
            virtual void close() = 0;
        };

        template <class T>
        class boost_istream : public my_istream {
        public:
            boost_istream(const T& device) : m_device(device) { }
            virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way) {
                return boost::iostreams::seek(m_device, off, way);
            }
            virtual std::streamsize read(char* s, std::streamsize n) {
                return boost::iostreams::read(m_device, s, n);
            }
            virtual void close() {
                boost::iostreams::close(m_device);
            }
        private:
            T m_device;
        };
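
    A minimal usage sketch of the wrapper above, assuming the my_istream/boost_istream definitions from the question plus a Boost.Iostreams file_source device; the file name "data.bin" is a hypothetical placeholder:

        #include <iostream>
        #include <boost/iostreams/device/file.hpp>
        #include <boost/iostreams/operations.hpp>   // boost::iostreams::read/seek/close
        #include <boost/iostreams/positioning.hpp>  // boost::iostreams::stream_offset

        using boost::iostreams::stream_offset;

        // ... my_istream and boost_istream as defined above ...

        int main() {
            // "data.bin" is a placeholder path for illustration
            boost_istream<boost::iostreams::file_source> in(
                boost::iostreams::file_source("data.bin", std::ios::in | std::ios::binary));
            my_istream& s = in;                  // callers only see the polymorphic base
            s.seek(16, std::ios_base::beg);      // stream_offset is 64-bit safe
            char buf[64];
            std::streamsize got = s.read(buf, sizeof buf);
            std::cout << "read " << got << " bytes\n";
            s.close();
            return 0;
        }

    For unit tests, the same my_istream& could instead be bound to a boost_istream over an in-memory device such as boost::iostreams::array_source.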

    Read the article

  • Download binary file from SQL Server 2000

    - by kareemsaad
    I inserted binary files (images, PDFs, videos, ...) and I want to retrieve a file so it can be downloaded. I used a generic handler page like this:

        public void ProcessRequest(HttpContext context)
        {
            using (System.Data.SqlClient.SqlConnection con = Connection.GetConnection())
            {
                String Sql = "Select BinaryData From ProductsDownload Where Product_Id = @Product_Id";
                SqlCommand com = new SqlCommand(Sql, con);
                com.CommandType = System.Data.CommandType.Text;
                com.Parameters.Add(Parameter.NewInt("@Product_Id", context.Request.QueryString["Product_Id"].ToString()));
                SqlDataReader dr = com.ExecuteReader();
                if (dr != null && dr.Read())   // check the reader before reading from it
                {
                    Byte[] bytes = (Byte[])dr["BinaryData"];
                    context.Response.BinaryWrite(bytes);
                    dr.Close();
                }
            }
        }

    And this is my table:

        CREATE TABLE [ProductsDownload] (
            [ID] [bigint] IDENTITY (1, 1) NOT NULL,
            [Product_Id] [int] NULL,
            [Type_Id] [int] NULL,
            [Name] [nvarchar] (200) COLLATE Arabic_CI_AS NULL,
            [MIME] [varchar] (50) COLLATE Arabic_CI_AS NULL,
            [BinaryData] [varbinary] (4000) NULL,
            [Description] [nvarchar] (500) COLLATE Arabic_CI_AS NULL,
            [Add_Date] [datetime] NULL,
            CONSTRAINT [PK_ProductsDownload] PRIMARY KEY CLUSTERED ([ID]) ON [PRIMARY],
            CONSTRAINT [FK_ProductsDownload_DownloadTypes] FOREIGN KEY ([Type_Id])
                REFERENCES [DownloadTypes] ([ID]) ON DELETE CASCADE ON UPDATE CASCADE,
            CONSTRAINT [FK_ProductsDownload_Product] FOREIGN KEY ([Product_Id])
                REFERENCES [Product] ([Product_Id]) ON DELETE CASCADE ON UPDATE CASCADE
        ) ON [PRIMARY]
        GO

    And I use a DataList that has a label for the file name and a button to download the file:

        <asp:DataList ID="DataList5" runat="server"
            DataSource='<%# GetData(Convert.ToString(Eval("Product_Id"))) %>'
            RepeatColumns="1" RepeatLayout="Flow">
            <ItemTemplate>
                <table width="100%" border="0" cellspacing="0" cellpadding="0">
                    <tr>
                        <td class="spc_tab_hed_bg spc_hed_txt lm5 tm2 bm3">
                            <asp:Label ID="LblType" runat="server" Text='<%# Eval("TypeName", "{0}") %>'></asp:Label>
                        </td>
                        <td width="380" class="spc_tab_hed_bg">&nbsp;</td>
                    </tr>
                    <tr>
                        <td align="left" class="lm5 tm2 bm3">
                            <asp:Label ID="LblData" runat="server" Text='<%# Eval("Name", "{0}") %>'></asp:Label>
                        </td>
                        <td align="center" class="tm2 bm3">
                            <a href='<%# "DownloadFile.aspx?Product_Id=" + DataBinder.Eval(Container.DataItem, "Product_Id") %>'>
                                <img src="images/downloads_ht.jpg" width="11" height="11" border="0" />
                            </a>
                            <%--<asp:ImageButton ID="ImageButton1" ImageUrl="images/downloads_ht.jpg" runat="server" OnClick="ImageButton1_Click1" />--%>
                        </td>
                    </tr>
                </table>
            </ItemTemplate>
        </asp:DataList>

    I have tried hard to solve this problem but cannot. If anyone has a solution, please send it to me. Thank you, Kareem Saad (programmer, MCTS, MCPD, Toshiba Company Egypt).

    Read the article

  • Fulltext search on many tables

    - by Rob
    I have three tables, all of which have a column with a fulltext index. The user will enter search terms into a single text box, and then all three tables will be searched. This is better explained with an example:

        documents
            doc_id
            name          FULLTEXT

        table2
            id
            doc_id
            a_field       FULLTEXT

        table3
            id
            doc_id
            another_field FULLTEXT

    (I realise this looks stupid, but that's because I've removed all the other fields and tables to simplify it.) So basically I want to do a fulltext search on name, a_field and another_field, and then show the results as a list of documents, preferably with what caused that document to be found, e.g. if another_field matched, I would display what another_field is. I began working on a system whereby three fulltext search queries are performed and the results inserted into a table with a structure like:

        search_results
            table_name
            row_id
            score

    (This could later be made to cache results for a few days, with e.g. a hash of the search terms.) This idea has two problems. The first is that the same document can be in the search results up to three times with different scores. Instead of that, if the search term is matched in two tables, it should produce one result, but with a higher score. The second is that parsing the results is difficult: I want to display a list of documents, but I don't immediately know the doc_id without a join of some kind; however, the table to join to depends on the table_name column, and I'm not sure how to accomplish that. Wanting to search multiple related tables like this must be a common thing, so I guess what I'm asking is: am I approaching this in the right way? Can someone tell me the best way of doing it, please?

    Read the article

  • Integrating Search Server 2008 Express with WSS 3.0

    - by Jason Kemp
    I'm setting up the environment for an intranet using WSS (Windows SharePoint Services) 3.0. The catch is getting the environment configured to work with MS Search Server 2008 Express. Here's the environment I'd like to set up:

        A: Web server      - Win Server 2003 SP2; WSS 3.0 SP2; IIS 6.0; .NET 3.5 SP1
        B: Search server   - Win Server 2003 SP2; WSS 3.0 SP2; IIS 6.0; .NET 3.5 SP1; Search Server 2008 Express
        C: Database server - Win Server 2003 SP2; SQL Server 2000 SP3 (Admin db, Content db, Config db, Search db)

    The question is whether 3 servers can be used in the above configuration, or whether the Search Server (B) has to be combined with (A) since we're using the free Express version of the Search Server. The documentation from MS doesn't make it clear either way. I could attack this problem with trial and error, but would rather not. The bigger question is: what is the best practice for a WSS / Search Server installation?

    Read the article

  • How do I grab and use variables coming back through ValueList from an AJAX call?

    - by Mel
    I'm trying the following code to execute a search and it's not working. On the search.cfm page, the only value coming back is the value I typed into the search field (even if I click an autosuggest value, it's not coming back; only the letters I actually type myself come back).

        <cfform class="titleSearchForm" id="searchForm" action="search.cfm?GameID=#cfautosuggestvalue.GameID#" method="post">
            <fieldset>
                <cfinput type="text" class="titleSearchField" name="TitleName" onChange="form.submit()"
                    autosuggest="cfc:gz.cfcomp.search.AutoSuggestSearch({cfautosuggestvalue})">
                <input type="button" class="titleSearchButton" value=" " />
            </fieldset>
        </cfform>

    Query in the CFC:

        <cfquery name="SearchResult" datasource="myDSN">
            SELECT CONCAT(titles.TitleName, ' on ', platforms.PlatformAbbreviation) AS sResult,
                   games.GameID
            FROM games
            INNER JOIN platforms ON games.PlatformID = platforms.PlatformID
            INNER JOIN titles ON titles.TitleID = games.TitleID
            WHERE UCase(titleName) LIKE UCase('#ARGUMENTS.SearchString#%')
            ORDER BY titleName ASC;
        </cfquery>

    Two things. First, I would like to get the GameID back to the page making the AJAX request; I know why it is not coming back: because I'm only returning the sResult var, which does not include the GameID. Is there a way to return the GameID value without displaying it? Second, how do I grab a value from the auto-suggest once it is returned? Say I want to grab the GameID, or if I can't do that, the TitleName, to use in my query? I tried passing it to the form this way: action="search.cfm?GameID=#cfautosuggestvalue.GameID#" - but that does not work. How do I reference the cfautosuggestvalue variables for use? Thanks

    Read the article

  • Serializable object in intent returning as String

    - by B_
    In my application, I am trying to pass a serializable object through an intent to another activity. The intent is not entirely created by me; it is created and passed through a search suggestion. In the content provider for the search suggestion, the object is created and placed in the SUGGEST_COLUMN_INTENT_EXTRA_DATA column of the MatrixCursor. However, when in the receiving activity I call getIntent().getSerializableExtra(SearchManager.EXTRA_DATA_KEY), the returned object is of type String and I cannot cast it into the original object class. I tried making a parcelable wrapper for my object that calls out.writeSerializable(...) and used that instead, but the same thing happened. The string that is returned is like a generic Object toString(), i.e. com.foo.yak.MyAwesomeClass@4350058, so I'm assuming that toString() is being called somewhere where I have no control. Hopefully I'm just missing something simple. Thanks for the help!

    Edit: some of my code. This is in the content provider that acts as the search authority:

        //These are the search suggestion columns
        private static final String[] COLUMNS = {
            "_id", // mandatory column
            SearchManager.SUGGEST_COLUMN_TEXT_1,
            SearchManager.SUGGEST_COLUMN_INTENT_EXTRA_DATA
        };

        //This places the serializable or parcelable object (and other info) into the search suggestion
        private Cursor getSuggestions(String query, String[] projection) {
            List<Widget> widgets = WidgetLoader.getMatches(query);
            MatrixCursor cursor = new MatrixCursor(COLUMNS);
            for (Widget w : widgets) {
                cursor.addRow(new Object[] {
                    w.id,
                    w.name,
                    w.data //This is the MyAwesomeClass object I'm trying to pass
                });
            }
            return cursor;
        }

    This is in the activity that receives the search suggestion:

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            Object extra = getIntent().getSerializableExtra(SearchManager.EXTRA_DATA_KEY);
            //extra.getClass() returns String, when it should return MyAwesomeClass,
            //so this next line throws a ClassCastException and causes a crash
            MyAwesomeClass mac = (MyAwesomeClass)extra;
            ...
        }

    Read the article

  • Typical Search, Result and Detail Workflow Staying Within an Android Tab

    - by Justin
    So, I've been banging my head looking for a good solution for a few days and am stuck. I have a search screen (Activity) in a tab, and after the user enters a value and clicks "search" I would like the results to come back in that same tab; then, if an item from the results is selected, to show more detailed results, in that same tab. I have it all working now in separate activities, and even the first step working in a tab, but as soon as I call the activity to process the search results, i.e. startActivity(i); for the results Activity, the results displayed are not in the tab! I am having a very difficult time getting this flow to work all under a tab. Any thoughts on how to make this happen? I keep hearing that Android views should be used instead of activities, but am I then to assume that all the logic I have right now for 3 activities needs to go inside 1 activity, and that I need to handle setting the content and state for each of these cases? Plus, won't the history stack not work, since pressing the back button will take the user out of the application instead of taking them from, say, the search result to the search screen, or the details to the search results, etc.? This seems like a mess. Can anyone show a more complex example of tabs, or how one might have a simple search, result and detail workflow staying in a tab? I have seen a few questions on this concept of keeping activities "within a tab", but no good resolution. Please help.

    Read the article

  • Lucene Search Returning Extra, Undesired Records

    - by Brandon
    I have a Lucene index that contains a field called 'Name'. I escape all special characters before inserting a value into my index using QueryParser.Escape(value). In my example I have 2 documents with the following names respectively:

        Test
        Test (Test)

    They get inserted into my index as such (I can confirm this using Luke):

        [test]
        [test] [\(test\)]

    I insert these values as TOKENIZED and using the StandardAnalyzer. When I perform a search, I use QueryParser.Escape(searchString) on my search string input to escape special characters, and then use the QueryParser with my 'Name' field and the StandardAnalyzer to perform the search. When I perform a search for 'Test', I get back both documents in my index (as expected). However, when I perform a search for 'Test (Test)', I am still getting back both documents. I realize that in both examples it matches on the 'test' term in the index, but I am confused why, in my 2nd example, it would not just pull back the document with the value of 'Test (Test)', because my search should create two terms: [test] and [\(test\)]. I would imagine it would perform some sort of boolean operation where BOTH terms must match in that situation, so I would get back just one record. Is there something I am missing, or a trick to make the search behave as desired?

    Read the article

  • My search for what the Cloud will mean for my Work, part 2

    - by Kay Sellenrode
    My experience with the cloud, and why work will change rather than disappear. I have had multiple experiences with the cloud so far, mostly good. I have worked on several cloud solutions in the past, but let me describe them as 0.x versions. My first really serious cloud experience was a bit more than a year ago, when our company switched from an in-house server to Microsoft BPOS as a complete replacement. Since we are a small consultancy firm and don't have much else to do besides consulting, our IT requirements are quite simple: we need mail and storage space for our documents. With the in-house server we had multiple outages during a year, mostly through lack of administration. Being consultants in the field and hardly having time to maintain a server, BPOS was and still is the right solution for us. Since the migration we have had fewer outages and a much more robust solution. Have we run into issues with BPOS in our own environment? No, not that I'm aware of.

    Based on this experience I took a stance on the deployability of BPOS and cloud solutions: they are suitable for MKB (Dutch for medium and small businesses). Most small businesses don't have enough work to hire a full-time IT admin, and hiring a service provider to maintain their own server might be even more costly than hiring an admin. So, seeing the capabilities of BPOS and the needs of most businesses, I see it as a great solution that gives a business a complete server-replacement solution at a fixed price per user, resulting in a clear budget for IT spending, something most small businesses have been looking for, for a long time.

    Right now I'm deploying BPOS with a customer, and I am running into some of the Cloud 1.0 issues. In my opinion BPOS is a good working Cloud 1.0 solution. What do I mean by 1.0? Well, 1.0 is mostly a tested solution (unlike the 0.x versions), but it still has quite a few limitations caused by too little market experience. In my opinion this is also the reason we don't see that many BPOS customers yet, and why I think Office 365 will make a huge difference. What I have seen of 365 shows me it is a Cloud 2.0 version, meaning it has all the needed features and is much more flexible for the customer. This is also why I see changes happening in my field of work: changes, not unemployment, due to cloud solutions.

    Cloud 1.0 solutions gave me the idea that if every customer adopted them, I would be out of work. But in reality, Cloud 1.0 solutions are here just to establish what the market needs. Cloud 2.0 and higher versions will give the customer much more flexibility, but they also require a consultant. Where the 1.0 versions are simple to set up and maintain, a 2.0 solution needs more thought up front and afterwards. For example, BPOS in its 1.0 version brings you a very simplified Exchange 2007 solution, suitable for some customers; looking at Office 365, you receive an almost full-blown Exchange 2010 solution. I expect this to be even more customizable in the next version.

    In my search for the changes to my work, I try to regularly write a post with my thoughts about the cloud and its impact on my work as a consultant. I'm also planning to present on this topic, so if anyone is interested in seeing me present on it, you're more than welcome to contact me.

    Read the article

  • Improving the running time of Breadth First Search and Adjacency List creation

    - by user45957
    We are given an array of integers where all elements are between 0-9. We have to start from the 1st position and reach the end in the minimum number of moves, where from an index i we can move one position back or forward, i.e. to i-1 and i+1, and we can jump to any index having the same value as index i.

    Time limit: 1 second
    Max input size: 100000

    I have tried to solve this problem using a single-source shortest path approach with Breadth First Search, and though BFS itself is O(V+E) and runs in time, the adjacency list creation takes O(n^2) time, so the overall complexity becomes O(n^2). Is there any way I can decrease the time complexity of the adjacency list creation? Or is there a better and more efficient way of solving the problem?

        #include <bits/stdc++.h>
        using namespace std;

        int main(){
            vector<int> v;
            string str;
            vector<int> sets[10];
            cin >> str;
            int in;
            for(int i = 0; i < str.length(); i++){
                in = str[i] - '0';
                v.push_back(in);
                sets[in].push_back(i);
            }
            int n = v.size();
            if(n == 1){
                cout << "0\n";
                return 0;
            }
            if(v[0] == v[n-1]){
                cout << "1\n";
                return 0;
            }
            vector<int> adj[100001];
            for(int i = 0; i < 10; i++){
                for(int j = 0; j < sets[i].size(); j++){
                    if(sets[i][j] > 0)
                        adj[sets[i][j]].push_back(sets[i][j] - 1);
                    if(sets[i][j] < n - 1)
                        adj[sets[i][j]].push_back(sets[i][j] + 1);
                    for(int k = j + 1; k < sets[i].size(); k++){
                        if(abs(sets[i][j] - sets[i][k]) != 1){
                            adj[sets[i][j]].push_back(sets[i][k]);
                            adj[sets[i][k]].push_back(sets[i][j]);
                        }
                    }
                }
            }
            queue<int> q;
            q.push(0);
            int dist[100001];
            bool visited[100001] = {false};
            dist[0] = 0;
            visited[0] = true;
            int c = 0;
            while(!q.empty()){
                int dq = q.front();
                q.pop();
                c++;
                for(int i = 0; i < adj[dq].size(); i++){
                    if(visited[adj[dq][i]] == false){
                        dist[adj[dq][i]] = dist[dq] + 1;
                        visited[adj[dq][i]] = true;
                        q.push(adj[dq][i]);
                    }
                }
            }
            cout << dist[n-1] << "\n";
            return 0;
        }
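
    One way to make the graph linear, sketched below under the same input format: never materialize the same-digit edges at all. When an index with digit d is dequeued, enqueue every unvisited index in sets[d] and then clear the bucket, since any later path through the same digit cannot be shorter. Each bucket is scanned at most once, so the whole search becomes O(n). This is a sketch of the idea, not tested against the original judge:

        #include <bits/stdc++.h>
        using namespace std;

        int main(){
            string str;
            cin >> str;
            int n = str.size();
            vector<int> sets[10];
            for(int i = 0; i < n; i++)
                sets[str[i] - '0'].push_back(i);

            vector<int> dist(n, -1);
            queue<int> q;
            dist[0] = 0;
            q.push(0);
            while(!q.empty()){
                int u = q.front();
                q.pop();
                if(u == n - 1)
                    break;
                // step moves to i-1 and i+1
                if(u > 0 && dist[u-1] < 0){ dist[u-1] = dist[u] + 1; q.push(u-1); }
                if(u + 1 < n && dist[u+1] < 0){ dist[u+1] = dist[u] + 1; q.push(u+1); }
                // same-digit jumps: expand the bucket once, then throw it away
                vector<int>& bucket = sets[str[u] - '0'];
                for(int j = 0; j < (int)bucket.size(); j++){
                    int w = bucket[j];
                    if(dist[w] < 0){ dist[w] = dist[u] + 1; q.push(w); }
                }
                bucket.clear();
            }
            cout << dist[n-1] << "\n";
            return 0;
        }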

    Read the article

  • Large Queries in Google

    - by marienbad
    I have a large query I want to do in Google. It's just a string of ORs, for the purpose of determining which search terms are ranked the highest compared to the others. It's not ABSURDLY large; it's only 5,500 characters. But Google says:

        Request-URI Too Large
        The requested URL /search... is too large to process.

    Is there a way around this?

    Read the article

  • How to mount a USB thumb drive as "Fixed" instead of "Removable"

    - by AMissico
    I have over 8 GB in my "Code Library" that I maintain on a 64 GB SanDisk Ultra Backup USB device. Windows Search 4.0 (installed on Windows XP) can index removable drives, but Windows 7 (which also uses Windows Search 4.0) cannot, because the USB device identifies itself as a Removable drive and Windows 7 refuses to index removable drives. How can I mount the USB thumb drive as Fixed instead of Removable?

    Read the article

  • Enhanced Explorer for Windows XP? [closed]

    - by iceman
    Possible Duplicate: Replacement for Windows Explorer?

    Is there a better explorer for Windows XP with Konqueror-like features (such as dual panes), and especially an enhanced integrated search option like the one in Vista? There are different products that cumulatively offer something similar, such as xplorer2, Google Desktop Search or Windows Grep. But what about one integrated product?

    Read the article

  • Searching within Gmail's nested labels [closed]

    - by Penang
    Consider this setup: in my Gmail inbox I have 3 lists:

        mailing-lists/first-list
        mailing-lists/second-list
        mailing-lists/third-list

    If I want to search for all unread messages in any sublabel of mailing-lists, is there a better way to search than:

        is:unread label:mailing-list/first-list OR label:mailing-list/second-list OR label:mailing-list/third-list

    Something like "is:unread label:mailing-list/*" is what I'm looking for.

    Read the article

  • Debugging iFilter plug-in (PDF indexing)

    - by Trevor Sullivan
    I have the official Adobe x64 iFilter PDF plug-in and the FoxIt Software iFilter PDF plug-in installed, and neither one seems to be allowing me to index the contents of PDF files. So far, I've:

        - Added my data folder into the Indexing service configuration
        - Ensured that PDF files are configured to index "file properties and contents"
        - Rebuilt the index from scratch

    But when I search, I can only search for PDF file names, not the contents of them. Any ideas on how to debug this issue?

    Read the article

  • How to Mount a USB "Thumb" Drive as "Fixed" in Windows (For Indexing)

    - by AMissico
    I have over 8 GB in my "Code Library" that I maintain on a 64 GB SanDisk Ultra Backup USB device. Windows Search 4.0 (installed on Windows XP) can index removable drives, but Windows 7 (which also uses Windows Search 4.0) cannot, because the USB device identifies itself as a Removable drive and Windows 7 refuses to index removable drives. How can I mount the USB thumb drive as Fixed instead of Removable? All suggestions are welcome and greatly appreciated.

    Read the article

  • Windows 7 Start Menu showing incorrect data

    - by madmik3
    Hi, I've tried rebuilding my search index, but it does not seem to help. When I search for anything, even "command", I either get an empty list or a list of shortcuts with the names Programs, Documents, Files, ... They all have the default white-paper icon. If I click on them, I get an error message that says "Internet security settings prevented one or more files from being opened." Any ideas? Thanks.

    Read the article

  • Solr vs 'this' word

    - by s.arlashin
    There is a small problem with Solr. When I try to search text containing the word 'this' by issuing 'this' in the search console, Solr doesn't find anything. However, there are no problems with other words. Is it some sort of reserved word or something like that?

    Read the article

  • Is a red-black tree my ideal data structure?

    - by Hugo van der Sanden
    I have a collection of items (big rationals) that I'll be processing. In each case, processing will consist of removing the smallest item in the collection, doing some work, and then adding 0-2 new items (which will always be larger than the removed item). The collection will be initialised with one item, and work will continue until it is empty. I'm not sure what size the collection is likely to reach, but I'd expect it to be in the range of 1M-100M items. I will not need to locate any item other than the smallest. I'm currently planning to use a red-black tree, possibly tweaked to keep a pointer to the smallest item. However, I've never used one before, and I'm unsure whether my pattern of use fits its characteristics well.

    1) Is there a danger that the pattern of deletion from the left plus random insertion will affect performance, e.g. by requiring a significantly higher number of rotations than random deletion would? Or will delete and insert operations still be O(log n) with this pattern of use?

    2) Would some other data structure give me better performance, either because of the deletion pattern or by taking advantage of the fact that I only ever need to find the smallest item?

    Update: glad I asked; the binary heap is clearly a better solution for this case, and as promised it turned out to be very easy to implement. Hugo
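
    For reference, a minimal sketch of the heap-based workflow described in the update, using std::priority_queue as a min-heap. Here double stands in for the big-rational type, and the "work" step that pushes 0-2 larger items is a made-up placeholder:

        #include <functional>
        #include <iostream>
        #include <queue>
        #include <vector>

        int main() {
            // min-heap: top() is always the smallest item
            std::priority_queue<double, std::vector<double>, std::greater<double> > heap;
            heap.push(1.0);  // initialise with one item
            long processed = 0;
            while (!heap.empty()) {
                double smallest = heap.top();
                heap.pop();
                ++processed;
                // placeholder "work": add 0-2 items, always larger than the one removed
                if (smallest < 1000.0) {
                    heap.push(smallest * 2.0);
                    heap.push(smallest * 3.0);
                }
            }
            std::cout << "processed " << processed << " items\n";
            return 0;
        }

    Both push and pop are O(log n), top() is O(1), and unlike a red-black tree the heap needs no rebalancing logic of its own.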

    Read the article

  • Returning recursive ternary freaks out

    - by David Titarenco
    Hi, assume the following function:

        int binaryTree::findHeight(node *n) {
            if (n == NULL) {
                return 0;
            } else {
                return 1 + max(findHeight(n->left), findHeight(n->right));
            }
        }

    This is a pretty standard recursive tree-height function for a given binary search tree binaryTree. Now, I was helping a friend (he's taking an algorithms course), and I ran into a weird issue with this function that I couldn't 100% explain to him. With max defined as

        #define max(a,b) ((a)>(b)?(a):(b))

    (which happens to be the max definition in windef.h), the recursive function freaks out (it runs something like n^n times, where n is the tree height). This obviously makes checking the height of a tree with 3000 elements take very, very long. However, if max is defined via templating, like std does it, everything is okay. So using std::max fixed his problem. I just want to know why. Also, why does the countLeaves function work fine, using the same programmatic recursion?

        int binaryTree::countLeaves(node *n) {
            if (n == NULL) {
                return 0;
            } else if (n->left == NULL && n->right == NULL) {
                return 1;
            } else {
                return countLeaves(n->left) + countLeaves(n->right);
            }
        }

    Is it because, in the returned ternary expression, the values a => countLeaves(n->left) and b => countLeaves(n->right) were recursively double-called simply because they were the resultants? Thank you!
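
    A small self-contained sketch of what is going on: the macro substitutes its arguments textually, so the winning argument is evaluated twice, and each findHeight call spawns three recursive calls instead of two (roughly 3^h versus 2^h nodes visited for a tree of height h). countLeaves is unaffected because it never routes its recursive calls through the macro. The macro is renamed MAX here to avoid colliding with std::max, and the call-counting tree is made up for the measurement:

        #include <algorithm>
        #include <cstddef>
        #include <iostream>

        #define MAX(a,b) ((a)>(b)?(a):(b))  // same shape as the windef.h macro

        struct node { node *left; node *right; };

        long calls = 0;

        int heightMacro(node *n) {
            if (n == NULL) return 0;
            ++calls;
            // expands to: ((heightMacro(n->left)) > (heightMacro(n->right))
            //              ? (heightMacro(n->left)) : (heightMacro(n->right)))
            // i.e. three recursive calls per node, not two
            return 1 + MAX(heightMacro(n->left), heightMacro(n->right));
        }

        int heightStd(node *n) {
            if (n == NULL) return 0;
            ++calls;
            return 1 + std::max(heightStd(n->left), heightStd(n->right));
        }

        node *perfect(int h) {  // build a perfect tree of height h (leaked; fine for a demo)
            if (h == 0) return NULL;
            node *n = new node;
            n->left = perfect(h - 1);
            n->right = perfect(h - 1);
            return n;
        }

        int main() {
            node *root = perfect(12);
            calls = 0; heightMacro(root); std::cout << "macro:    " << calls << " calls\n";
            calls = 0; heightStd(root);   std::cout << "std::max: " << calls << " calls\n";
            return 0;
        }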

    Read the article

  • Log 2 N generic comparison tree

    - by Morano88
    Hey! I'm working on an algorithm for Redundant Binary Representation (RBR), where every two bits represent a digit. I designed a comparator that takes 4 bits and gives out 2 bits, and I want to make the comparison in log2 n stages. So if I have X and Y, I compare every 2 bits of X with every 2 bits of Y. This is smooth if the number of digits n is a power of two, i.e. n = 2, 4, 8, 16, 32, ... (figure omitted from this excerpt). However, if my input is, let us say, 6 or 10 digits wide, then it is no longer smooth, and I have to take into account some odd situations (second figure omitted). I have shallow experience with algorithms, so: is there a generic way to do this, so that at the end I get only 2 bits no matter what the input width is? I just need hints or pseudo-code. If my question is not appropriate here, feel free to flag it or tell me to remove it. I'm using VHDL, by the way!
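
    One generic pattern (a sketch in C++ rather than VHDL, just to show the structure): reduce the digits pairwise level by level, and whenever a level holds an odd number of digits, pass the leftover digit up to the next level unchanged. The reduction still completes in ceil(log2 n) levels for any n. The combine() cell below is a hypothetical placeholder for the 4-bit-in/2-bit-out comparator, with 0 standing for "equal":

        #include <cstddef>
        #include <iostream>
        #include <vector>

        // hypothetical comparator cell: two 2-bit digits in, one 2-bit digit out;
        // if the more significant result is "equal" (0), the less significant wins
        unsigned combine(unsigned hi, unsigned lo) {
            return hi != 0 ? hi : lo;
        }

        unsigned reduce(std::vector<unsigned> digits) {
            while (digits.size() > 1) {
                std::vector<unsigned> next;
                for (std::size_t i = 0; i + 1 < digits.size(); i += 2)
                    next.push_back(combine(digits[i], digits[i + 1]));
                if (digits.size() % 2 != 0)        // odd count: carry the last digit up
                    next.push_back(digits.back());
                digits.swap(next);
            }
            return digits[0];
        }

        int main() {
            unsigned d[] = {0, 0, 2, 0, 1, 0};     // 6 digits: not a power of two
            std::vector<unsigned> digits(d, d + 6);
            std::cout << reduce(digits) << "\n";   // a single 2-bit result
            return 0;
        }

    Because "first non-equal digit wins" is associative, carrying the odd digit through a level does not change the result, only the shape of the tree.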

    Read the article

  • In-order tree traversal

    - by Chris S
    I have the following text from an academic course I took a while ago, about in-order traversal (they also call it pancaking) of a binary tree (not a BST):

    In-order tree traversal: draw a line around the outside of the tree. Start to the left of the root, and go around the outside of the tree, to end up to the right of the root. Stay as close to the tree as possible, but do not cross the tree. (Think of the tree, its branches and nodes, as a solid barrier.) The order of the nodes is the order in which this line passes underneath them. If you are unsure as to when you go "underneath" a node, remember that a node "to the left" always comes first.

    Here's the example used (a slightly different tree from the one below; figure omitted from this excerpt). However, when I do a search on Google, I get a conflicting definition. For example, the Wikipedia example gives the in-order traversal sequence A, B, C, D, E, F, G, H, I (left child, root node, right child). But according to (my understanding of) definition #1, this should be A, B, D, C, E, F, G, I, H. Can anyone clarify which definition is correct? They might both be describing different traversal methods that happen to use the same name. I'm having trouble believing the peer-reviewed academic text is wrong, but I can't be certain.
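
    For comparison, a minimal sketch of the standard left-root-right definition that the Wikipedia sequence follows; the tiny three-node tree here is made up for illustration:

        #include <iostream>

        struct node {
            char label;
            node *left;
            node *right;
        };

        void inorder(const node *n) {
            if (n == nullptr) return;
            inorder(n->left);               // 1. traverse the left subtree
            std::cout << n->label << ' ';   // 2. visit the node itself
            inorder(n->right);              // 3. traverse the right subtree
        }

        int main() {
            // tiny example tree:   B
            //                     / \
            //                    A   C
            node a = {'A', nullptr, nullptr};
            node c = {'C', nullptr, nullptr};
            node b = {'B', &a, &c};
            inorder(&b);                    // prints: A B C
            std::cout << '\n';
            return 0;
        }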

    Read the article

  • Tree iterator, can you optimize this any further?

    - by Ron
    As a follow-up to my original question about a small piece of this code, I decided to ask another question to see if you can do better than what we came up with so far. The code below iterates over a binary tree (left/right = child/next). I do believe there is room for one less conditional in here (the down boolean). The fastest answer wins!

    - The cnt statement can be multiple statements, so let's make sure it appears only once.
    - The child() and next() member functions are about 30x as slow as the hasChild() and hasNext() operations.
    - Keep it iterative <-- dropped this requirement, as the recursive solution presented was faster.
    - This is C++ code.
    - The visit order of the nodes must stay as it is in the example below (hit parents first, then the children, then the 'next' nodes).
    - BaseNodePtr is a boost::shared_ptr, and thus assignments are slow; avoid any temporary BaseNodePtr variables.

    Currently this code takes 5897 ms to visit 62,200,000 nodes in a test tree, calling this function 200,000 times.

        void processTree(BaseNodePtr current, unsigned int & cnt)
        {
            bool down = true;
            while (true)
            {
                if (down)
                {
                    while (true)
                    {
                        cnt++; // this can/will be multiple statements
                        if (!current->hasChild())
                            break;
                        current = current->child();
                    }
                }
                if (current->hasNext())
                {
                    down = true;
                    current = current->next();
                }
                else
                {
                    down = false;
                    current = current->parent();
                    if (!current)
                        return; // done.
                }
            }
        }
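
    For reference, a sketch of the recursive formulation alluded to above (the "keep it iterative" requirement was dropped because recursion measured faster). It preserves the parent-child-next visit order, calls child() and next() exactly once per edge, never calls parent(), and takes the shared_ptr by const reference to avoid the slow copies; BaseNodePtr, hasChild() and friends are the question's own interface, and whether this beats the loop on the real tree is something only the original benchmark can confirm:

        void processTreeRecursive(const BaseNodePtr & current, unsigned int & cnt)
        {
            cnt++; // visit the node first (can/will be multiple statements)
            if (current->hasChild())
                processTreeRecursive(current->child(), cnt);  // then its children
            if (current->hasNext())
                processTreeRecursive(current->next(), cnt);   // then its 'next' nodes
        }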

    Read the article

  • warning: dict_ldap_lookup: Search error 1: Operations error

    - by drecute
    I need help with the LDAP search filter to use to retrieve a user's email information from LDAP. I'm running Postfix with LDAP on Ubuntu Server 12.04. Everything seems to work fine except getting the values returned from the search.

    Version 1:

        server_host = ldap://samba.example.com
        search_base = dc=company, dc=example, dc=com
        query_filter = mail=%s
        bind = no
        domain = example.com

    Version 2:

        server_host = ldap://samba.example.com
        search_base = dc=company, dc=example, dc=com
        query_filter = mail=%s
        bind_dn = cn=Users,dc=company,dc=example,dc=com
        domain = example.com

    Mail logs:

        Nov 26 11:13:26 mail postfix/smtpd[19662]: match_string: example.com ~? example.com
        Nov 26 11:13:26 mail postfix/smtpd[19662]: dict_ldap_lookup: No existing connection for LDAP source /etc/postfix/ldap-aliases.cf, reopening
        Nov 26 11:13:26 mail postfix/smtpd[19662]: dict_ldap_connect: Connecting to server ldap://samba.example.com
        Nov 26 11:13:26 mail postfix/smtpd[19662]: dict_ldap_connect: Actual Protocol version used is 3.
        Nov 26 11:13:26 mail postfix/smtpd[19662]: dict_ldap_connect: Binding to server ldap://samba.example.com with dn cn=Users,dc=company,dc=example,dc=com
        Nov 26 11:13:26 mail postfix/smtpd[19662]: warning: dict_ldap_connect: Unable to bind to server ldap://samba.example.com with dn cn=Users,dc=company,dc=example,dc=com: 49 (Invalid credentials)
        Nov 26 11:13:26 mail postfix/smtpd[19662]: warning: ldap:/etc/postfix/ldap-aliases.cf lookup error for "[email protected]"
        Nov 26 11:13:26 mail postfix/smtpd[19662]: maps_find: virtual_alias_maps: [email protected]: search aborted
        Nov 26 11:13:26 mail postfix/smtpd[19662]: mail_addr_find: [email protected] -> (try again)
        Nov 26 11:13:26 mail postfix/smtpd[19662]: NOQUEUE: reject: RCPT from col0-omc3-s2.col0.hotmail.com[65.55.34.140]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<col0-omc3-s2.col0.hotmail.com>
        Nov 26 11:13:26 mail postfix/smtpd[19662]: > col0-omc3-s2.col0.hotmail.com[65.55.34.140]: 451 4.3.0 <[email protected]>: Temporary lookup failure

    Here's another log with a successful search result, where the lookup still failed to get the values of the result:

        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: Using existing connection for LDAP source /etc/postfix/ldap-aliases.cf
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: /etc/postfix/ldap-aliases.cf: Searching with filter [email protected]
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_get_values[1]: Search found 1 match(es)
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_get_values[1]: Leaving dict_ldap_get_values
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: Search returned nothing
        Nov 26 12:04:56 mail postfix/smtpd[20463]: maps_find: virtual_alias_maps: [email protected]: not found
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: In dict_ldap_lookup
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: /etc/postfix/ldap-aliases.cf: Skipping lookup of key 'tola.akintola': domain mismatch
        Nov 26 12:04:56 mail postfix/smtpd[20463]: maps_find: virtual_alias_maps: tola.akintola: not found
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: In dict_ldap_lookup
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: /etc/postfix/ldap-aliases.cf: Skipping lookup of key '@example.com': domain mismatch
        Nov 26 12:04:56 mail postfix/smtpd[20463]: maps_find: virtual_alias_maps: @example.com: not found
        Nov 26 12:04:56 mail postfix/smtpd[20463]: mail_addr_find: [email protected] -> (not found)

    My refined ldap-aliases.cf looks like this:

        server_host = ldap://samba.example.com
        server_port = 3268
        search_base = dc=company, dc=example, dc=com
        query_filter = mail=%s
        result_attribute = uid
        bind_dn = cn=Administrator,cn=Users,dc=company,dc=example,dc=com
        bind_pw = pass
        domain = example.com

    So I'd like to know what LDAP filter is appropriate to get this to work. Thanks for helping out.

    Read the article

  • How to setup Lucene/Solr for a B2B web app?

    - by Bill Paetzke
    Given:

        - 1 database per client (business customer)
        - 5000 clients
        - Clients have between 2 and 2000 users (avg is ~100 users/client)
        - 100k to 10 million records per database
        - Users need to search those records often (it's the best way to navigate their data)

    Possibly relevant info:

        - Several new clients each week (any time during business hours)
        - Multiple web servers and database servers (users can login via any web server)
        - Let's stay agnostic of language or sql brand, since Lucene (and Solr) have a breadth of support

    For example: Joel Spolsky said in Podcast #11 that his hosted web app product, FogBugz On-Demand, uses Lucene. He has thousands of on-demand clients, and each client gets their own database. They use an index per client and store it in the client's database. I'm not sure on the details, and I'm not sure if this is a serious mod to Lucene.

    The question: How would you setup Lucene search so that each client can only search within its database?

        - How would you setup the index(es)?
        - Where do you store the index(es)?
        - Would you need to add a filter to all search queries?
        - If a client cancelled, how would you delete their (part of the) index? (this may be trivial--not sure yet)

    Possible solutions:

    1. Make an index for each client (database).
       Pro: Search is faster (than the one-index-for-all method). Indices are relative to the size of the client's data.
       Con: I'm not sure what this entails, nor do I know if this is beyond Lucene's scope.

    2. Have a single, gigantic index with a database_name field. Always include database_name as a filter.
       Pro: Not sure. Maybe good for the tech support or billing dept to search all databases for info.
       Con: Search is slower (than the index-per-client method). Flawed security if the query filter is removed.

    One last thing: I would also accept an answer that uses Solr (the extension of Lucene). Perhaps it's better suited for this problem. Not sure.

    Read the article
