Search Results

Search found 2185 results on 88 pages for 'n way merge'.

Page 83/88 | < Previous Page | 79 80 81 82 83 84 85 86 87 88  | Next Page >

  • Call asp.net web service from php

    - by SamB09
    Hi, I'm calling an asp.net web service from a PHP service. The service searches both databases with a search parameter. I'm not sure how to pass a search parameter to the asp.net service. Code is below. (There is no search parameter currently, but I'm just interested in how it would be passed to the asp.net service.) $link = mysql_connect($host, $user, $passwd); mysql_select_db($dbName); $query = 'SELECT firstname, surname, phone, location FROM staff ORDER BY surname'; $result = mysql_query($query,$link); // if there is a result if ( mysql_num_rows($result) > 0 ) { // set up a DOM object $xmlDom1 = new DOMDocument(); $xmlDom1->appendChild($xmlDom1->createElement('directory')); $xmlRoot = $xmlDom1->documentElement; // loop over the rows in the result while ( $row = mysql_fetch_row($result) ) { $xmlPerson = $xmlDom1->createElement('staff'); $xmlFname = $xmlDom1->createElement('fname'); $xmlText = $xmlDom1->createTextNode($row[0]); $xmlFname->appendChild($xmlText); $xmlPerson->appendChild($xmlFname); $xmlSname = $xmlDom1->createElement('sname'); $xmlText = $xmlDom1->createTextNode($row[1]); $xmlSname->appendChild($xmlText); $xmlPerson->appendChild($xmlSname); $xmlTel = $xmlDom1->createElement('phone'); $xmlText = $xmlDom1->createTextNode($row[2]); $xmlTel->appendChild($xmlText); $xmlPerson->appendChild($xmlTel); $xmlLoc = $xmlDom1->createElement('loc'); $xmlText = $xmlDom1->createTextNode($row[3]); $xmlLoc->appendChild($xmlText); $xmlPerson->appendChild($xmlLoc); $xmlRoot->appendChild($xmlPerson); } } // // instance a SOAP client to the dotnet web service and read it into a DOM object // (this really should have an exception handler) // $client = new SoapClient('http://stuiis.cms.gre.ac.uk/mk05/dotnet/dataBind01/phoneBook.asmx?WSDL'); $xmlString = $client->getDirectoryDom()->getDirectoryDomResult->any; $xmlDom2 = new DOMDocument(); $xmlDom2->loadXML($xmlString); // merge the second DOM object into the first foreach ( $xmlDom2->documentElement->childNodes as $staffNode ) { $xmlPerson = $xmlDom1->createElement($staffNode->nodeName); foreach ( $staffNode->childNodes as $xmlNode ) { $xmlElement = $xmlDom1->createElement($xmlNode->nodeName); $xmlText = $xmlDom1->createTextNode($xmlNode->nodeValue); $xmlElement->appendChild($xmlText); $xmlPerson->appendChild($xmlElement); } $xmlRoot->appendChild($xmlPerson); } // return result echo $xmlDom
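    A minimal sketch of how a search term could be passed from PHP's SoapClient to the .asmx call (the web method and parameter names below are assumptions, not taken from the WSDL; they must match whatever the service actually exposes):

        // hypothetical method/parameter names - check the WSDL for the real ones
        $client = new SoapClient('http://stuiis.cms.gre.ac.uk/mk05/dotnet/dataBind01/phoneBook.asmx?WSDL');
        $response  = $client->searchDirectory(array('searchTerm' => $searchTerm));
        $xmlString = $response->searchDirectoryResult->any;

    With a document/literal .asmx service, SoapClient takes a single associative array whose keys are the web method's parameter names, and the result comes back wrapped in a <MethodName>Result property, as in the getDirectoryDom call above.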

    Read the article

  • Unexpected output from Bubblesort program with MSVC vs TCC

    - by Sujith S Pillai
    One of my friends sent this code to me, saying it doesn't work as expected: #include<stdio.h> void main() { int a [10] ={23, 100, 20, 30, 25, 45, 40, 55, 43, 42}; int sizeOfInput = sizeof(a)/sizeof(int); int b, outer, inner, c; printf("Size is : %d \n", sizeOfInput); printf("Values before bubble sort are : \n"); for ( b = 0; b < sizeOfInput; b++) printf("%d\n", a[b]); printf("End of values before bubble sort... \n"); for ( outer = sizeOfInput; outer > 0; outer-- ) { for ( inner = 0 ; inner < outer ; inner++) { printf ( "Comparing positions: %d and %d\n",inner,inner+1); if ( a[inner] > a[inner + 1] ) { int tmp = a[inner]; a[inner] = a [inner+1]; a[inner+1] = tmp; } } printf ( "Bubble sort total array size after inner loop is %d :\n",sizeOfInput); printf ( "Bubble sort sizeOfInput after inner loop is %d :\n",sizeOfInput); } printf ( "Bubble sort total array size at the end is %d :\n",sizeOfInput); for ( c = 0 ; c < sizeOfInput; c++) printf("Element: %d\n", a[c]); } I am using the Microsoft Visual Studio Command Line Tool for compiling this on a Windows XP machine. cl /EHsc bubblesort01.c My friend gets the correct output on a dinosaur machine (code is compiled using TCC there). My output is unexpected. The array mysteriously grows in size, in between. If you change the code so that the variable sizeOfInput is changed to sizeOfInputt, it gives the expected results! A search done at Microsoft Visual C++ Developer Center doesn't give any results for "sizeOfInput". I am not a C/C++ expert, and am curious to find out why this happens - any C/C++ experts who can "shed some light" on this? Unrelated note: I seriously thought of rewriting the whole code to use quicksort or merge sort before posting it here. But, after all, it is not Stooge sort... Edit: I know the code is not correct (it reads beyond the last element), but I am curious why the variable name makes a difference.
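    The inner loop touches a[inner + 1] even when inner is the last valid index, so it reads and writes one element past the end of the array; that is undefined behaviour, and with MSVC's particular stack layout the overrun happens to land on sizeOfInput itself (renaming the variable changes where it sits on the stack, which is why the symptom moves). A bounded version of the loops, as a sketch:

        /* start outer at sizeOfInput - 1 so a[inner + 1] never goes past the end */
        for (outer = sizeOfInput - 1; outer > 0; outer--) {
            for (inner = 0; inner < outer; inner++) {
                if (a[inner] > a[inner + 1]) {
                    int tmp = a[inner];
                    a[inner] = a[inner + 1];
                    a[inner + 1] = tmp;
                }
            }
        }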

    Read the article

  • How do I update with a newly-created detached entity using NHibernate?

    - by Daniel T.
    Explanation: Let's say I have an object graph that's nested several levels deep and each entity has a bi-directional relationship with each other. A -> B -> C -> D -> E Or in other words, A has a collection of B and B has a reference back to A, and B has a collection of C and C has a reference back to B, etc... Now let's say I want to edit some data for an instance ofC. In Winforms, I would use something like this: var instanceOfC; using (var session = SessionFactory.OpenSession()) { // get the instance of C with Id = 3 instanceOfC = session.Linq<C>().Where(x => x.Id == 3); } SendToUIAndLetUserUpdateData(instanceOfC); using (var session = SessionFactory.OpenSession()) { // re-attach the detached entity and update it session.Update(instanceOfC); } In plain English, we grab a persistent instance out of the database, detach it, give it to the UI layer for editing, then re-attach it and save it back to the database. Problem: This works fine for Winform applications because we're using the same entity all throughout, the only difference being that it goes from persistent to detached to persistent again. The problem occurs when I'm using a web service and a browser, sending over JSON data. In this case, the data that comes back is no longer a detached entity, but rather a transient one that just happens to have the same ID as the persistent one. If I use this entity to update, it will wipe out the relationship to B and D unless I sent the entire object graph over to the UI and got it back in one piece. Question: My question is, how do I serialize detached entities over the web, receive them back, and save them, while preserving any relationships that I didn't explicitly change? I know about ISession.SaveOrUpdateCopy and ISession.Merge() (they seem to do the same thing?), but this will still wipe out the relationships if I don't explicitly set them. I could copy the fields from the transient entity to the persistent entity one by one, but this doesn't work too well when it comes to relationships and I'd have to handle version comparisons manually.
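    For reference, a minimal sketch of the reattach-by-merge variant discussed above (illustrative only; Merge returns the persistent copy rather than attaching the instance you pass in, and it still overwrites whatever state the incoming object carries, so unfetched relationships remain the sticking point):

        using (var session = SessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            // copies the detached/transient state onto the persistent instance
            // with the same id and returns that persistent instance
            var persistentC = (C)session.Merge(instanceOfC);
            tx.Commit();
        }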

    Read the article

  • how can we make class with Linked list recursion ...

    - by epsilon_G
    Hi, I'm a newbie in the C++ and OOP world, and I'd like to do some work with data structures. I want to merge two linked lists. I made a class, "List", which contains everything I need to work with a list: Add, Display, and so on. The problem is that I wrote the function "Merge2lists", which produces the third list. How can I display the third list in the main program after using "Merge2lists"? My problem is with the class and OOP syntax; could you show me the implementation in the main program? In other words, how can I call, from main, a function that takes node pointers but is declared inside the class? Thanks. class Liste { private: struct node { int elem ; node *next; }*p; public: LLC(); void Merge2lists (node* a, node * b,node *&result); ~LLC(); }; void List::Merge2lists (node* a, node * b,node *&result) { result = NULL; if (a==NULL) { result=b; return;} else if (b==NULL) { result=a; return;} if (a->elem <= b->elem) { result = a; Merge2lists(a->next, b,result->next); } else { result = b; Merge2lists(a, b->next,result->next); } return; } int main() { liste a ; a.Aff_Val(46); a.Aff_Val(54); a.add_as_first(2); a.add_as_first(1); a.Display(); /*This to displat the elemnts ... Don't care about it it's easy to make*/ list liste2; b.Add(2); b.Add(14); b.Add(16); list result; Merge2lists (a,b,result); /*The probleme is here , how can I use this in my program */
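    One way to make the call work is to keep the recursive helper private and expose a small public wrapper that deals in whole lists, so main() never touches node pointers. A rough sketch only (the wrapper and member names are illustrative, not the original class's API; note that this merge reuses the nodes of both input lists rather than copying them):

        class List {
            struct node { int elem; node *next; } *p;
            void Merge2lists(node* a, node* b, node*& result);  // recursive helper, as above
        public:
            void Add(int value);
            void Display();
            void MergeFrom(List& a, List& b) { Merge2lists(a.p, b.p, p); }
        };

        int main() {
            List a, b, result;
            a.Add(1); a.Add(2); a.Add(46); a.Add(54);
            b.Add(2); b.Add(14); b.Add(16);
            result.MergeFrom(a, b);   // result's head now points at the merged chain
            result.Display();
        }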

    Read the article

  • MSSQL 2005: Update rows in a specified order (like ORDER BY)?

    - by JMTyler
    I want to update rows of a table in a specific order, like one would expect if including an ORDER BY clause, but MS SQL does not support the ORDER BY clause in UPDATE queries. I have checked out this question which supplied a nice solution, but my query is a bit more complicated than the one specified there. UPDATE TableA AS Parent SET Parent.ColA = Parent.ColA + (SELECT TOP 1 Child.ColA FROM TableA AS Child WHERE Child.ParentColB = Parent.ColB ORDER BY Child.Priority) ORDER BY Parent.Depth DESC; So, what I'm hoping that you'll notice is that a single table (TableA) contains a hierarchy of rows, wherein one row can be the parent or child of any other row. The rows need to be updated in order from the deepest child up to the root parent. This is because TableA.ColA must contain an up-to-date concatenation of its own current value with the values of its children (I realize this query only concats with one child, but that is for the sake of simplicity - the purpose of the example in this question does not necessitate any more verbosity), therefore the query must update from the bottom up. The solution suggested in the question I noted above is as follows: UPDATE messages SET status=10 WHERE ID in (SELECT TOP (10) Id FROM Table WHERE status=0 ORDER BY priority DESC ); The reason that I don't think I can use this solution is because I am referencing column values from the parent table inside my subquery (see WHERE Child.ParentColB = Parent.ColB), and I don't think two sibling subqueries would have access to each others' data. So far I have only determined one way to merge that suggested solution with my current problem, and I don't think it works. UPDATE TableA AS Parent SET Parent.ColA = Parent.ColA + (SELECT TOP 1 Child.ColA FROM TableA AS Child WHERE Child.ParentColB = Parent.ColB ORDER BY Child.Priority) WHERE Parent.Id IN (SELECT Id FROM TableA ORDER BY Parent.Depth DESC); The WHERE..IN subquery will not actually return a subset of the rows, it will just return the full list of IDs in the order that I want. However (I don't know for sure - please tell me if I'm wrong) I think that the WHERE..IN clause will not care about the order of IDs within the parentheses - it will just check the ID of the row it currently wants to update to see if it's in that list (which, they all are) in whatever order it is already trying to update... Which would just be a total waste of cycles, because it wouldn't change anything. So, in conclusion, I have looked around and can't seem to figure out a way to update in a specified order (and included the reason I need to update in that order, because I am sure I would otherwise get the ever-so-useful "why?" answers) and I am now hitting up Stack Overflow to see if any of you gurus out there who know more about SQL than I do (which isn't saying much) know of an efficient way to do this. It's particularly important that I only use a single query to complete this action. A long question, but I wanted to cover my bases and give you guys as much info to feed off of as possible. :) Any thoughts?
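    It is not the single-statement answer being asked for, but a sketch of the usual workaround is to drive the ordering explicitly and update one depth level per pass, from the deepest level up to the root (column names follow the example above; treat this as illustrative):

        DECLARE @d int;
        SELECT @d = MAX(Depth) FROM TableA;
        WHILE @d >= 0
        BEGIN
            UPDATE Parent
            SET ColA = Parent.ColA + ISNULL((SELECT TOP 1 Child.ColA
                                             FROM TableA AS Child
                                             WHERE Child.ParentColB = Parent.ColB
                                             ORDER BY Child.Priority), '')
            FROM TableA AS Parent
            WHERE Parent.Depth = @d;
            SET @d = @d - 1;
        END

    Because every row at a given depth only reads from rows one level deeper, each pass sees fully-updated children, which is the ordering the question needs.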

    Read the article

  • Efficient algorithm to distribute work?

    - by Zwei Steinen
    It's a bit complicated to explain but here we go. We have problems like this (code is pseudo-code, and is only for illustrating the problem. Sorry it's in java. If you don't understand, I'd be glad to explain.). class Problem { final Set<Integer> allSectionIds = { 1,2,4,6,7,8,10 }; final Data data = //Some data } And a subproblem is: class SubProblem { final Set<Integer> targetedSectionIds; final Data data; SubProblem(Set<Integer> targetedSectionsIds, Data data){ this.targetedSectionIds = targetedSectionIds; this.data = data; } } Work will look like this, then. class Work implements Runnable { final Set<Section> subSections; final Data data; final Result result; Work(Set<Section> subSections, Data data) { this.sections = SubSections; this.data = data; } @Override public void run(){ for(Section section : subSections){ result.addUp(compute(data, section)); } } } Now we have instances of 'Worker', that have their own state sections I have. class Worker implements ExecutorService { final Map<Integer,Section> sectionsIHave; { sectionsIHave = {1:section1, 5:section5, 8:section8 }; } final ExecutorService executor = //some executor. @Override public void execute(SubProblem problem){ Set<Section> sectionsNeeded = fetchSections(problem.targetedSectionIds); super.execute(new Work(sectionsNeeded, problem.data); } } phew. So, we have a lot of Problems and Workers are constantly asking for more SubProblems. My task is to break up Problems into SubProblem and give it to them. The difficulty is however, that I have to later collect all the results for the SubProblems and merge (reduce) them into a Result for the whole Problem. This is however, costly, so I want to give the workers "chunks" that are as big as possible (has as many targetedSections as possible). It doesn't have to be perfect (mathematically as efficient as possible or something). I mean, I guess that it is impossible to have a perfect solution, because you can't predict how long each computation will take, etc.. But is there a good heuristic solution for this? Or maybe some resources I can read up before I go into designing? Any advice is highly appreciated!
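    As a starting heuristic, a greedy pass that hands each worker the largest chunk of still-unassigned sections it already holds locally is often good enough, with the leftovers sent to whoever is least loaded. A rough sketch in the same pseudo-Java style as above (the worker ordering and the load bookkeeping are assumptions, not part of the original code):

        Set<Integer> remaining = new HashSet<>(problem.allSectionIds);
        // visit workers that own the most sections first (ordering is illustrative)
        for (Worker w : workersSortedByLocalSectionCountDesc) {
            Set<Integer> chunk = new HashSet<>(remaining);
            chunk.retainAll(w.sectionsIHave.keySet());   // sections this worker already has
            if (!chunk.isEmpty()) {
                w.execute(new SubProblem(chunk, problem.data));
                remaining.removeAll(chunk);
            }
        }
        // whatever is left must be fetched remotely; give it to the least-loaded worker
        if (!remaining.isEmpty()) {
            leastLoadedWorker().execute(new SubProblem(remaining, problem.data));
        }

    Bigger chunks per worker mean fewer partial Results to reduce later, which is the stated goal, at the cost of less even load balancing.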

    Read the article

  • UCA + Natural Sorting

    - by Alix Axel
    I recently learnt that PHP already supports the Unicode Collation Algorithm via the intl extension: $array = array ( 'al', 'be', 'Alpha', 'Beta', 'Álpha', 'Àlpha', 'Älpha', '????', 'img10.png', 'img12.png', 'img1.png', 'img2.png', ); if (extension_loaded('intl') === true) { collator_asort(collator_create('root'), $array); } Array ( [0] => al [2] => Alpha [4] => Álpha [5] => Àlpha [6] => Älpha [1] => be [3] => Beta [11] => img1.png [9] => img10.png [8] => img12.png [10] => img2.png [7] => ???? ) As you can see this seems to work perfectly, even with mixed case strings! The only drawback I've encountered so far is that there is no support for natural sorting and I'm wondering what would be the best way to work around that, so that I can merge the best of the two worlds. I've tried to specify the Collator::SORT_NUMERIC sort flag but the result is way messier: collator_asort(collator_create('root'), $array, Collator::SORT_NUMERIC); Array ( [8] => img12.png [7] => ???? [9] => img10.png [10] => img2.png [11] => img1.png [6] => Älpha [5] => Àlpha [1] => be [2] => Alpha [3] => Beta [4] => Álpha [0] => al ) However, if I run the same test with only the img*.png values I get the ideal output: Array ( [3] => img1.png [2] => img2.png [1] => img10.png [0] => img12.png ) Can anyone think of a way to preserve the Unicode sorting while adding natural sorting capabilities?
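    One possibility worth trying (a sketch, untested here): the intl Collator can be told to compare runs of digits by their numeric value, which is exactly natural ordering, while leaving the rest of the UCA behaviour intact:

        $collator = collator_create('root');
        // compare digit sequences numerically instead of by code point order
        $collator->setAttribute(Collator::NUMERIC_COLLATION, Collator::ON);
        collator_asort($collator, $array);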

    Read the article

  • Finding contained bordered regions from Excel imports.

    - by dmaruca
    I am importing massive amounts of data from Excel that have various table layouts. I have good enough table detection routines and merge cell handling, but I am running into a problem when it comes to dealing with borders. Namely performance. The bordered regions in some of these files have meaning. Data Setup: I am importing directly from Office Open XML using VB6 and MSXML. The data is parsed from the XML into a dictionary of cell data. This wonks wonderfully and is just as fast as using docmd.transferspreadsheet in Access, but returns much better results. Each cell contains a pointer to a style element which contains a pointer to a border element that defines the visibility and weight of each border (this is how the data is structured inside OpenXML, also). Challenge: What I'm trying to do is find every region that is enclosed inside borders, and create a list of cells that are inside that region. What I have done: I initially created a BFS(breadth first search) fill routine to find these areas. This works wonderfully and fast for "normal" sized spreadsheets, but gets way too slow for imports into the thousands of rows. One problem is that a border in Excel could be stored in the cell you are checking or the opposing border in the adjacent cell. That's ok, I can consolidate that data on import to reduce the number of checks needed. One thing I thought about doing is to create a separate graph that outlines the cells using the borders as my edges and using a graph algorithm to find regions that way, but I'm having trouble figuring out how to implement the algorithm. I've used Dijkstra in the past and thought I could do similar with this. So I can span out using no endpoint to search the entire graph, and if I encounter a closed node I know that I just found an enclosed region, but how can I know if the route I've found is the optimal one? I guess I could flag that to run a separate check for the found closed node to the previous node ignoring that one edge. This could work, but wouldn't be much better performance wise on dense graphs. Can anyone else suggest a better method? Thanks for taking the time to read this.
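    One alternative to running a fill per region is a single union-find pass over the grid: union each cell with its right and bottom neighbours whenever no border separates them, and afterwards every set of cells sharing the same root is one enclosed region. A rough sketch only; HasBorderRight, HasBorderBottom, CellId and UnionCells are hypothetical helpers, not part of the import code:

        Dim r As Long, c As Long
        For r = 1 To rowCount
            For c = 1 To colCount
                If c < colCount Then
                    If Not HasBorderRight(r, c) Then UnionCells CellId(r, c), CellId(r, c + 1)
                End If
                If r < rowCount Then
                    If Not HasBorderBottom(r, c) Then UnionCells CellId(r, c), CellId(r + 1, c)
                End If
            Next c
        Next r
        ' Cells with the same Find() root now belong to the same enclosed region.

    This visits each cell once regardless of how many regions there are, which avoids re-scanning large sheets the way repeated fills do.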

    Read the article

  • Using Core Data Concurrently and Reliably

    - by John Topley
    I'm building my first iOS app, which in theory should be pretty straightforward but I'm having difficulty making it sufficiently bulletproof for me to feel confident submitting it to the App Store. Briefly, the main screen has a table view, upon selecting a row it segues to another table view that displays information relevant for the selected row in a master-detail fashion. The underlying data is retrieved as JSON data from a web service once a day and then cached in a Core Data store. The data previous to that day is deleted to stop the SQLite database file from growing indefinitely. All data persistence operations are performed using Core Data, with an NSFetchedResultsController underpinning the detail table view. The problem I am seeing is that if you switch quickly between the master and detail screens several times whilst fresh data is being retrieved, parsed and saved, the app freezes or crashes completely. There seems to be some sort of race condition, maybe due to Core Data importing data in the background whilst the main thread is trying to perform a fetch, but I'm speculating. I've had trouble capturing any meaningful crash information, usually it's a SIGSEGV deep in the Core Data stack. The table below shows the actual order of events that happen when the detail table view controller is loaded: Main Thread Background Thread viewDidLoad Get JSON data (using AFNetworking) Create child NSManagedObjectContext (MOC) Parse JSON data Insert managed objects in child MOC Save child MOC Post import completion notification Receive import completion notification Save parent MOC Perform fetch and reload table view Delete old managed objects in child MOC Save child MOC Post deletion completion notification Receive deletion completion notification Save parent MOC Once the AFNetworking completion block is triggered when the JSON data has arrived, a nested NSManagedObjectContext is created and passed to an "importer" object that parses the JSON data and saves the objects to the Core Data store. The importer executes using the new performBlock method introduced in iOS 5: NSManagedObjectContext *child = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType]; [child setParentContext:self.managedObjectContext]; [child performBlock:^{ // Create importer instance, passing it the child MOC... }]; The importer object observes its own MOC's NSManagedObjectContextDidSaveNotification and then posts its own notification which is observed by the detail table view controller. When this notification is posted the table view controller performs a save on its own (parent) MOC. I use the same basic pattern with a "deleter" object for deleting the old data after the new data for the day has been imported. This occurs asynchronously after the new data has been fetched by the fetched results controller and the detail table view has been reloaded. One thing I am not doing is observing any merge notifications or locking any of the managed object contexts or the persistent store coordinator. Is this something I should be doing? I'm a bit unsure how to architect this all correctly so would appreciate any advice.
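    For reference, a sketch of what observing and merging the importer's saves could look like (illustrative only, variable names are assumptions, and it presumes the parent context was created with a queue concurrency type; with the nested-context setup described above, saving the child already pushes its changes into the parent, so merge notifications matter mainly between sibling contexts sharing a persistent store coordinator):

        [[NSNotificationCenter defaultCenter] addObserverForName:NSManagedObjectContextDidSaveNotification
                                                          object:child
                                                           queue:nil
                                                      usingBlock:^(NSNotification *note) {
            [parentContext performBlock:^{
                // fold the child's saved changes into the parent on the parent's own queue
                [parentContext mergeChangesFromContextDidSaveNotification:note];
            }];
        }];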

    Read the article

  • ajaxSubmit and Other Code. Can someone help me determine what this code is doing?

    - by Matt Dawdy
    I've inherited some code that I need to debug. It isn't working at present. My task is to get it to work. No other requirements have been given to me. No, this isn't homework, this is a maintenance nightmare job. ASP.Net (framework 3.5), C#, jQury 1.4.2. This project makes heavy use of jQuery and AJAX. There is a drop down on a page that, when an item is chosen, is supposed to add that item (it's a user) to an object in the database. To accomplish this, the previous programmer first, on page load, dynamically loads the entire page through AJAX. To do this, he's got 5 div's, and each one is loaded from a jquery call to a different full page in the website. Somehow, the HTML and BODY and all the other stuff is stripped out and the contents of the div are loaded with the content of the aspx page. Which seems incredibly wrong to me since it relies on the browser to magically strip out html, head, body, form tags and merge with the existing html head body form tags. Also, as the "content" page is returned as a string, the previous programmer has this code running on it before it is appended to the div: function CleanupResponseText(responseText, uniqueName) { responseText = responseText.replace("theForm.submit();", "SubmitSubForm(theForm, $(theForm).parent());"); responseText = responseText.replace(new RegExp("theForm", "g"), uniqueName); responseText = responseText.replace(new RegExp("doPostBack", "g"), "doPostBack" + uniqueName); return responseText; } When the dropdown itself fires it's onchange event, here is the code that gets fired: function SubmitSubForm(form, container) { //ShowLoading(container); $(form).ajaxSubmit( { url: $(form).attr("action"), success: function(responseText) { $(container).html(CleanupResponseText(responseText, form.id)); $("form", container).css("margin-top", "0").css("padding-top", "0"); //HideLoading(container); } } ); } This blows up in IE, with the message that "Microsoft JScript runtime error: Object doesn't support this property or method" -- which, I think, has to be that $(form).ajaxSubmit method doesn't exist. What is this code really trying to do? I am so turned around right now that I think my only option is to scrap everything and start over. But I'd rather not do that unless necessary. Is this code good? Is it working against .Net, and is that why we are having issues?
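    One concrete thing to check first: ajaxSubmit() is not part of jQuery core, it comes from the jQuery Form plugin, so "Object doesn't support this property or method" on $(form).ajaxSubmit(...) is exactly what you get when that plugin script is never loaded on the page. A quick check (the script paths below are illustrative):

        <script src="/Scripts/jquery-1.4.2.min.js" type="text/javascript"></script>
        <script src="/Scripts/jquery.form.js" type="text/javascript"></script>
        <script type="text/javascript">
            // "undefined" here means the Form plugin is missing and ajaxSubmit will fail
            alert(typeof jQuery.fn.ajaxSubmit);
        </script>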

    Read the article

  • Search row with highest number of cells in a row with colspan

    - by user593029
    What is an efficient way to find the highest number of cells in any row of a big table that contains numerous colspans (merged cells)? The colspan should be ignored, so in the example below the highest number of cells is 4, in the first row. Is it js/jQuery with a regular expression, or just a loop with bubble sorting? I found one link, below, explaining the use of a regex - is that the ideal way? Can someone suggest pseudo code for this? High cpu consumption due to a jquery regex patch <table width="156" height="84" border="0" > <tbody> <tr style="height:10px"> <td width="10" style="width:10px; height:10px"/> <td width="10" style="width:10px; height:10px"/> <td width="10" style="width:10px; height:10px"/> <td width="10" style="width:10px; height:10px"/> </tr> <tr style="height:10px"> <td width="10" style="width:10px; height:10px" colspan="2"/> <td width="10" style="width:10px; height:10px"/> <td width="10" style="width:10px; height:10px"/> </tr> <tr style="height:10px"> <td width="10" style="width:10px; height:10px"/> <td width="10" style="width:10px; height:10px"/> <td width="10" style="width:10px; height:10px" colspan="2"/> </tr> <tr style="height:10px"> <td width="10" style="width:10px; height:10px" colspan="2"/> <td width="10" style="width:10px; height:10px"/> <td width="10" style="width:10px; height:10px"/> </tr> </tbody> </table>
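    No regex should be needed for this; a plain loop that counts cell elements per row (rather than summing their colspan attributes) is both simple and cheap. A jQuery sketch:

        var max = 0;
        $('table tr').each(function () {
            // children() avoids counting cells of any nested tables
            var cells = $(this).children('td, th').length;
            if (cells > max) { max = cells; }
        });
        // max is 4 for the example table above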

    Read the article

  • Best way to version control a WCF application with Git?

    - by Sam
    Suppose I have the following projects. The format is [ProjectName] : [ProjectDependency1, ProjectDependency2, etc.] // Service CoolLibrary WcfApp.Core WcfApp.Contracts WcfApp.Services : CoolLibrary, WcfApp.Core, WcfApp.Contracts // Clients CustomerX.App : WcfApp.Contracts CustomerY.App : WcfApp.Contracts CustomerZ.App : WcfApp.Contracts (On a side note, WcfApp.Contracts should not depend on WcfApp.Core, right? Else CustomerX.App would also depend on and thus be exposed to the service domain model?) (CoolLibrary is shared with other applications, so I can't just put it inside of WcfApp.Services.) All of this code is in-house. I was thinking of having 6 repositories for this. The format is [repository folder name] : [Projects included in repository.] 1. CoolLibrary.git : CoolLibrary 2. WcfApp.Contracts.git : WcfApp.Contracts 3. WcfApp.git : WcfApp.Core, WcfApp.Services 4. CustomerX.App.git : CustomerX.App 5. CustomerY.App.git : CustomerY.App 6. CustomerZ.App.git : CustomerZ.App How should I manage my project dependencies? I see three options: I could use binaries which I have to manually copy to each dependent repository. This would be easiest at the start, but my repositories would be a little bloated, and it'd become more tedious as I add more client apps for customers. I could import dependent code as submodules. This is what I will probably end up doing, although I keep reading on the web that submodules are a hassle. I also read that I can use something called the subtree merge strategy, but I am not sure how it is different from just cloning the repo into a subdirectory and adding the subdirectory to .gitignore. Is the difference that the subtree is recorded in the master repository, so (for example) cloning it from a different location will also pull the subtree? I know I asked a lot of questions in this post, but the most important two questions I have are: 1. Am I using the right number and layout of repositories? Should I use less or more? 2. Which of the three dependency management strategies would you recommend? Is there another strategy I haven't considered?
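    For what it's worth, a sketch of what the submodule route looks like in practice (repository URLs and paths below are illustrative):

        # inside CustomerX.App.git - track the shared contracts at a pinned commit
        git submodule add git@server:WcfApp.Contracts.git lib/WcfApp.Contracts
        git commit -m "Add WcfApp.Contracts as a submodule"

        # consumers clone the app together with its submodules
        git clone --recursive git@server:CustomerX.App.git
        git submodule update --init     # for clones that forgot --recursive

    On the subtree question: with the subtree merge strategy the library's content is actually committed into the containing repository, so a plain clone of that repository brings it along; a git-ignored nested clone is invisible to the parent repository and has to be re-created by hand.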

    Read the article

  • How come Java doesn't accept my LinkedList in a Generic, but accepts its own?

    - by master chief
    For a class assignment, we can't use any of the language's built-in types, so I'm stuck with my own list. Anyway, here's the situation: public class CrazyStructure <T extends Comparable<? super T>> { MyLinkedList<MyTree<T>> trees; //error: type parameter MyTree is not within its bound } However: public class CrazyStructure <T extends Comparable<? super T>> { LinkedList<MyTree<T>> trees; } Works. MyTree implements the Comparable interface, but MyLinkedList doesn't. However, Java's LinkedList doesn't implement it either, according to this. So what's the problem and how do I fix it? MyLinkedList: public class MyLinkedList<T extends Comparable<? super T>> { private class Node<T> { private Node<T> next; private T data; protected Node(); protected Node(final T value); } Node<T> firstNode; public MyLinkedList(); public MyLinkedList(T value); //calls node1.value.compareTo(node2.value) private int compareElements(final Node<T> node1, final Node<T> node2); public void insert(T value); public void remove(T value); } MyTree: public class LeftistTree<T extends Comparable<? super T>> implements Comparable { private class Node<T> { private Node<T> left, right; private T data; private int dist; protected Node(); protected Node(final T value); } private Node<T> root; public LeftistTree(); public LeftistTree(final T value); public Node getRoot(); //calls node1.value.compareTo(node2.value) private int compareElements(final Node node1, final Node node2); private Node<T> merge(Node node1, Node node2); public void insert(final T value); public T extractMin(); public int compareTo(final Object param); }
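    The bound fails because MyLinkedList<T extends Comparable<? super T>> requires its element type to be comparable to itself, and LeftistTree only declares the raw Comparable, which does not satisfy Comparable<? super LeftistTree<T>>. java.util.LinkedList<E> places no bound at all on E, which is why it accepts the same type argument without complaint. One way out is to implement the generic interface; a sketch (the accessor used inside compareTo is hypothetical):

        public class LeftistTree<T extends Comparable<? super T>>
                implements Comparable<LeftistTree<T>> {   // generic, not raw

            @Override
            public int compareTo(LeftistTree<T> other) {
                // compare the root elements; getMin() is an illustrative accessor
                return this.getMin().compareTo(other.getMin());
            }
        }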

    Read the article

  • C# Windows Service Multiple Config Files

    - by Goober
    Quick Question Is it possible to have more than 1 config file in a windows service? Or is there some way I can merge them at run time? Scenario Currently I have two config files containing the below contents. After I build and install my Windows Service, I can't get my custom XML Parser class to read the content because it keeps pointing to the wrong config file, even though I am using a few lines of code to ensure it gets the right name of the config file I need to access. ALSO When I navigate to the folder in which the service is installed, there is only sign of the normal APP.Config file, and no sign of the custom config file. (I have even set the normal ones properties to "Do Not Copy" and the custom ones properties to "Copy Always"). Config File Determination Code string settingsFile = String.Format("{0}.exe.config", System.AppDomain.CurrentDomain.BaseDirectory + Assembly.GetExecutingAssembly().GetName().Name); CUSTOM CONFIG File <?xml version="1.0" encoding="utf-8" ?> <configuration> <servers> <SV066930> <add name="Name" value = "SV066930" /> <processes> <SimonTest1> <add name="ProcessName" value="notepad.exe" /> <add name="CommandLine" value="C:\\WINDOWS\\system32\\notepad.exe C:\\WINDOWS\\Profiles\\TA2TOF1\\Desktop\\SimonTest1.txt" /> </SimonTest1> </processes> </SV066930> </servers> </configuration> NORMAL APP.CONFIG File <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="dataConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxx" /> </configSections> <connectionStrings> <add name="DB" connectionString="Data Source=etc......" /> </connectionStrings> </configuration>
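    One thing that bites Windows services specifically: the process working directory is %WinDir%\System32, not the install folder, so any relative path to a second config file misses. A sketch of loading the custom file from the service's own folder (the file name here is an assumption):

        // base directory is the folder the service executable was installed to
        string path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Servers.config");
        XDocument servers = XDocument.Load(path);   // hand this to the custom XML parser

    Also note that "Copy Always" only affects the build output folder; the custom file ends up next to the installed .exe only if the setup/installer project explicitly includes it.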

    Read the article

  • My winform application doesn't work on others' pc without vs 2010 installed

    - by wings
    Just like I said, my winform application works properly on computers with VS installed, but on other computers, it will crash due to a FileNotFound Exception. I used using Application = Microsoft.Office.Interop.Excel.Application; in my source code to generate a Excel file, and the problem occurs as soon as the Excel-related function is called. But I don't know what it refers to exactly. Do I have to get some .dll included along with the .exe file? And what DLL is that? Below are part of my codes: private void FileExport(object objTable) { StartWaiting(); string[,] table = null; try { table = (string[,])objTable; } catch (Exception ex) { ShowStatus(ex.Message, StatusType.Warning); } if (table == null) { return; } Application excelApp = new Application { DisplayAlerts = false }; Workbooks workbooks = excelApp.Workbooks; Workbook workbook = workbooks.Add(XlWBATemplate.xlWBATWorksheet); Worksheet worksheet = (Worksheet)workbook.Worksheets[1]; worksheet.Name = "TABLE"; for (int i = 0; i < table.GetLength(0); i++) { for (int j = 0; j < table.GetLength(1); j++) { worksheet.Cells[i + 1, j + 1] = table[i, j]; } } Range range = excelApp.Range["A1", "H1"]; range.Merge(); range.Font.Bold = true; range.Font.Size = 15; range.RowHeight = 50; range.EntireRow.AutoFit(); range = excelApp.Range["A2", "H8"]; range.Font.Size = 11; range = excelApp.Range["A1", "H8"]; range.NumberFormatLocal = "@"; range.RowHeight = 300; range.ColumnWidth = 50; range.HorizontalAlignment = XlHAlign.xlHAlignCenter; range.VerticalAlignment = XlVAlign.xlVAlignCenter; range.EntireRow.AutoFit(); range.EntireColumn.AutoFit(); worksheet.UsedRange.Borders.LineStyle = 1; Invoke(new MainThreadInvokerDelegate(SaveAs), new object[] { worksheet, workbook, excelApp } ); EndWaiting(); }
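    The usual cause is that the Office primary interop assemblies (and Excel itself) are simply not present on the other machines; Microsoft.Office.Interop.Excel is only a COM wrapper, so copying the .exe alone is not enough. With VS 2010 / .NET 4 one option is to embed the interop types so the PIA no longer has to be installed, though Excel itself is still required for the automation to run. A sketch of the project-file setting (illustrative):

        <Reference Include="Microsoft.Office.Interop.Excel">
          <!-- compiles the used interop types into the application assembly -->
          <EmbedInteropTypes>True</EmbedInteropTypes>
        </Reference>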

    Read the article

  • CLR: Multi Param Aggregate, Argument not in Final Output?

    - by OMG Ponies
    Why is my delimiter not appearing in the final output? It's initialized to be a comma, but I only get ~5 white spaces between each attribute using: SELECT [article_id] , dbo.GROUP_CONCAT(0, t.tag_name, ',') AS col FROM [AdventureWorks].[dbo].[ARTICLE_TAG_XREF] atx JOIN [AdventureWorks].[dbo].[TAGS] t ON t.tag_id = atx.tag_id GROUP BY article_id The bit for DISTINCT works fine, but it operates within the Accumulate scope... Output: article_id | col ------------------------------------------------- 1 | a a b c I only have rudimentary C# API knowledge... C# Code: using System; using System.Data; using System.Data.SqlClient; using System.Data.SqlTypes; using Microsoft.SqlServer.Server; using System.Xml.Serialization; using System.Xml; using System.IO; using System.Collections; using System.Text; [Serializable] [SqlUserDefinedAggregate(Format.UserDefined, MaxByteSize = 8000)] public struct GROUP_CONCAT : IBinarySerialize { ArrayList list; string delimiter; public void Init() { list = new ArrayList(); delimiter = ","; } public void Accumulate(SqlBoolean isDistinct, SqlString Value, SqlString separator) { delimiter = (separator.IsNull) ? "," : separator.Value ; if (!Value.IsNull) { if (isDistinct) { if (!list.Contains(Value.Value)) { list.Add(Value.Value); } } else { list.Add(Value.Value); } } } public void Merge(GROUP_CONCAT Group) { list.AddRange(Group.list); } public SqlString Terminate() { string[] strings = new string[list.Count]; for (int i = 0; i < list.Count; i++) { strings[i] = list[i].ToString(); } return new SqlString(string.Join(delimiter, strings)); } #region IBinarySerialize Members public void Read(BinaryReader r) { int itemCount = r.ReadInt32(); list = new ArrayList(itemCount); for (int i = 0; i < itemCount; i++) { this.list.Add(r.ReadString()); } } public void Write(BinaryWriter w) { w.Write(list.Count); foreach (string s in list) { w.Write(s); } } #endregion }
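    A likely explanation, worth verifying: SQL Server serializes the aggregate between Accumulate calls and before Terminate, but Read and Write never round-trip the delimiter field, so the separator picked up in Accumulate is lost on deserialization. A sketch of persisting it alongside the list:

        public void Read(BinaryReader r)
        {
            delimiter = r.ReadString();               // restore the separator first
            int itemCount = r.ReadInt32();
            list = new ArrayList(itemCount);
            for (int i = 0; i < itemCount; i++)
            {
                list.Add(r.ReadString());
            }
        }

        public void Write(BinaryWriter w)
        {
            w.Write(delimiter ?? ",");                // persist the separator first
            w.Write(list.Count);
            foreach (string s in list)
            {
                w.Write(s);
            }
        }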

    Read the article

  • Inbreeding-immune database structure

    - by Nick Savage
    I have an application that requires a "simple" family tree. I would like to be able to perform queries that will give me data for an entire family given one id from a member in the family. I say simple because it does not need to take into account adoption or any other obscurities. The requirements for the application are as follows: Any two people will not be able to breed if they're from the same genetic line Needs to allow for the addition of new family lines (new people with no previous family) Need to be able to pull siblings, parents separately through queries I'm having trouble coming up with the proper structure for the database. So far I've come up with two solutions but they're not very reliable and will probably get out of hand quite quickly. Solution 1 involves placing a family_ids field on the people table and storing a list of unique family ids. Each time two people breed the lists are checked against each other to make sure no ids match and if everything checks out will merge the two lists and set that as the child's family_ids field. Example: Father (family_ids: (null)) breeds with Mother (family_ids: (213, 519)) -> Child (family_ids: (213, 519)) breeds with Random Person (family_ids: (813, 712, 122, 767)) -> Grandchild (family_ids: (213, 519, 813, 712, 122, 767)) And so on and so forth... The problem I see with this is the lists becoming unreasonably large as time goes on. Solution 2 uses cakephp's associations to declare: public $belongsTo = array( 'Father' => array( 'className' => 'User', 'foreignKey' => 'father_id' ), 'Mother' => array( 'className' => 'User', 'foreignKey' => 'mother_id' ) ); Now setting recursive to 2 will fetch the results of the mother and father, along with their mother and father, and so on and so forth all the way down the line. The problem with this route is that the data is in nested arrays and I'm unsure of how to efficiently work through the code. If anyone would be able to steer me in the direction of the most efficient way to handle what I want to achieve that would be tremendously helpful. Any and all help is greatly appreciated and I'll gladly answer any questions anyone has. Thanks a lot.
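    One structure that keeps the inbreeding check cheap is a closure table of ancestor links alongside the parent columns: every person gets one row per ancestor, including a self row, and "same genetic line" collapses to a single join instead of a list that grows with every generation. A sketch (table and placeholder names are illustrative):

        CREATE TABLE ancestors (
            person_id   INT NOT NULL,
            ancestor_id INT NOT NULL,   -- includes a self row (person_id, person_id)
            PRIMARY KEY (person_id, ancestor_id)
        );

        -- two people may not breed if they share any ancestor
        SELECT EXISTS (
            SELECT 1
            FROM ancestors a1
            JOIN ancestors a2 ON a2.ancestor_id = a1.ancestor_id
            WHERE a1.person_id = :person1
              AND a2.person_id = :person2
        ) AS related;

    When a child is inserted, its ancestor rows are just the union of both parents' rows plus its own self row, which keeps writes simple and reads flat.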

    Read the article

  • 'Timeout Expired' error against local SQL Express on only 2 LINQ Methods...

    - by Refracted Paladin
    I am going to sum up my problem first and then offer massive details and what I have already tried. Summary: I have an internal winform app that uses Linq 2 Sql to connect to a local SQL Express database. Each user has their own DB and the DBs stay in sync through Merge Replication with a Central DB. All DB's are SQL 2005 (SP2 or SP3). We have been using this app for over 5 months now but recently our users are getting a "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." Detailed: The strange part is they get that in two different locations (2 different LINQ methods) and only the first time they fire in a given time period (~5 mins). One LINQ method is pulling all records that match a FK ID and then manipulating them to form a hierarchy view for a TreeView. The second is pulling all records that match a FK ID and dumping them into a DataGridView. The only things I can find in common with the 2 are that the first IS an IEnumerable and the second converts itself from IQueryable - IEnumerable - DataTable... I looked at the queries in Profiler and they 'seemed' normal. They are not very complicated queries. They are only pulling back 10 - 90 records, from one table. Any thoughts, suggestions, hints whatever would be greatly appreciated. I am at my wit's end on this.... public IList<CaseNoteTreeItem> GetTreeViewDataAsList(int personID) { var myContext = MatrixDataContext.Create(); var caseNotesTree = from cn in myContext.tblCaseNotes where cn.PersonID == personID orderby cn.ContactDate descending, cn.InsertDate descending select new CaseNoteTreeItem { CaseNoteID = cn.CaseNoteID, NoteContactDate = Convert.ToDateTime(cn.ContactDate). ToShortDateString(), ParentNoteID = cn.ParentNote, InsertUser = cn.InsertUser, ContactDetailsPreview = cn.ContactDetails.Substring(0, 75) }; return caseNotesTree.ToList<CaseNoteTreeItem>(); } AND THIS ONE public static DataTable GetAllCNotes(int personID) { using (var context = MatrixDataContext.Create()) { var caseNotes = from cn in context.tblCaseNotes where cn.PersonID == personID orderby cn.ContactDate select new { cn.ContactDate, cn.ContactDetails, cn.TimeSpentUnits, cn.IsCaseLog, cn.IsPreEnrollment, cn.PresentAtContact, cn.InsertDate, cn.InsertUser, cn.CaseNoteID, cn.ParentNote }; return caseNotes.ToList().CopyLinqToDataTable(); } }
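    Not a root-cause fix, but a cheap way to tell a genuinely slow first call (cold SQL Express instance, or the merge replication agent holding locks) from a hung one is to raise the LINQ to SQL command timeout, which defaults to 30 seconds, and see whether the call eventually completes (sketch):

        var myContext = MatrixDataContext.Create();
        myContext.CommandTimeout = 120;   // seconds; default is 30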

    Read the article

  • Spanning columns in HTML table.

    - by Tony
    I'm trying do to a very simple operation of merging two columns in a table. This seems easy with the colspan, but if I merge different columns without leaving at least one row without any merged columns, the sizing gets completely messed up. Please see the following example at http://www.allthingsdope.com/table.html or take a look at and try the following code: Good: <table width="700px"> <tr> <th width="100px">1: 100px</th> <td width="300px">2: 300px</td> <td width="200px">3: 200px</td> <td width="100px">4: 100px</td> </tr> <tr> <th width="100px">1: 100px</th> <td colspan=2 width="500px" >2 & 3: 500px</td> <td width="100px">4: 100px</td> </tr> <tr> <th width="100px">1: 100px</th> <td width="300px">2: 300px</td> <td colspan=2 width="300px">3 & 4: 300px</td> </tr> </table> Bad: <table width="700px"> <tr> <th width="100px">1: 100px</th> <td colspan=2 width="500px" >2 & 3: 500px</td> <td width="100px">4: 100px</td> </tr> <tr> <th width="100px">1: 100px</th> <td width="300px">2: 300px</td> <td colspan=2 width="300px">3 & 4: 300px</td> </tr> </table> This seems so simple but I can not figure it out!
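    One reliable workaround is to declare the column widths once, up front, and switch the table to fixed layout, so no individual row has to carry the widths and rows made entirely of spanned cells can no longer skew them. A sketch using the widths from the example:

        <table width="700px" style="table-layout: fixed">
            <colgroup>
                <col style="width: 100px" />
                <col style="width: 300px" />
                <col style="width: 200px" />
                <col style="width: 100px" />
            </colgroup>
            <!-- rows as before, with or without colspan, and no per-cell widths needed -->
        </table>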

    Read the article

  • Delete duplicate rows, do not preserve one row

    - by Radley
    I need a query that goes through each entry in a database, checks if a single value is duplicated elsewhere in the database, and if it is - deletes both entries (or all, if more than two). Problem is the entries are URLs, up to 255 characters, with no way of identifying the row. Some existing answers on Stackoverflow do not work for me due to performance limitations, or they use uniqueid which obviously won't work when dealing with a string. Long Version: I have two databases containing URLs (and only URLs). One database has around 3,000 urls and the other around 1,000. However, a large majority of the 1,000 urls were taken from the 3,000 url database. I need to merge the 1,000 into the 3,000 as new entries only. For this, I made a third database with combined URLs from both tables, about 4,000 entries. I need to find all duplicate entries in this database and delete them (Both of them, without leaving either). I have followed the query of a few examples on this site, but whenever I try to delete both entries it ends up deleting all the entries, or giving sql errors. Alternatively: I have two databases, each containing the separate database. I need to check each row from one database against the other to find any that aren't duplicates, and then add those to a third database. Edit: I've got my own PHP solution which is pretty hacky, but works. I cannot answer my own question for 8 hours because I'm new, so here it is for now: I went with a PHP script to accomplish this, as I'm more familiar with PHP than MySQL. This generates a simple list of urls that only exist in the target database, but not both. If you have more than 7,000 entries to parse this may take awhile, and you will need to copy/paste the results into a text file or expand the script to store them back into a database. I'm just doing it manually to save time. Note: Uses MeekroDB <pre> <?php require('meekrodb.2.1.class.php'); DB::$user = 'root'; DB::$password = ''; DB::$dbName = 'testdb'; $all = DB::query('SELECT * FROM old_urls LIMIT 7000'); foreach($all as $row) { $test = DB::query('SELECT url FROM new_urls WHERE url=%s', $row['url']); if (!is_array($test)) { echo $row['url'] . "\n"; }else{ if (count($test) == 0) { echo $row['url'] . "\n"; } } } ?> </pre>
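    A set-based version of that PHP loop, as a sketch using the same table names as the script: a single anti-join returns every URL in old_urls that has no match in new_urls, instead of issuing one query per row.

        SELECT o.url
        FROM old_urls AS o
        LEFT JOIN new_urls AS n ON n.url = o.url
        WHERE n.url IS NULL;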

    Read the article

  • Ubuntu 9.10 and Squid 2.7 Transparent Proxy TCP_DENIED

    - by user38400
    Hi, We've spent the last two days trying to get squid 2.7 to work with ubuntu 9.10. The computer running ubuntu has two network interfaces: eth0 and eth1 with dhcp running on eth1. Both interfaces have static ip's, eth0 is connected to the Internet and eth1 is connected to our LAN. We have followed literally dozens of different tutorials with no success. The tutorial here was the last one we did that actually got us some sort of results: http://www.basicconfig.com/linuxnetwork/setup_ubuntu_squid_proxy_server_beginner_guide. When we try to access a site like seriouswheels.com from the LAN we get the following message on the client machine: ERROR The requested URL could not be retrieved Invalid Request error was encountered while trying to process the request: GET / HTTP/1.1 Host: www.seriouswheels.com Connection: keep-alive User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.307.11 Safari/532.9 Cache-Control: max-age=0 Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,/;q=0.5 Accept-Encoding: gzip,deflate,sdch Cookie: __utmz=88947353.1269218405.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); __qca=P0-1052556952-1269218405250; __utma=88947353.1027590811.1269218405.1269218405.1269218405.1; __qseg=Q_D Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Some possible problems are: Missing or unknown request method. Missing URL. Missing HTTP Identifier (HTTP/1.0). Request is too large. Content-Length missing for POST or PUT requests. Illegal character in hostname; underscores are not allowed. Your cache administrator is webmaster. Below are all the configuration files: /etc/squid/squid.conf, /etc/network/if-up.d/00-firewall, /etc/network/interfaces, /var/log/squid/access.log. Something somewhere is wrong but we cannot figure out where. Our end goal for all of this is the superimpose content onto every page that a client requests on the LAN. We've been told that squid is the way to do this but at this point in the game we are just trying to get squid setup correctly as our proxy. Thanks in advance. squid.conf acl all src all acl manager proto cache_object acl localhost src 127.0.0.1/32 acl to_localhost dst 127.0.0.0/8 acl localnet src 192.168.0.0/24 acl SSL_ports port 443 # https acl SSL_ports port 563 # snews acl SSL_ports port 873 # rsync acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl Safe_ports port 631 # cups acl Safe_ports port 873 # rsync acl Safe_ports port 901 # SWAT acl purge method PURGE acl CONNECT method CONNECT http_access allow manager localhost http_access deny manager http_access allow purge localhost http_access deny purge http_access deny !Safe_ports http_access deny CONNECT !SSL_ports http_access allow localhost http_access allow localnet http_access deny all icp_access allow localnet icp_access deny all http_port 3128 hierarchy_stoplist cgi-bin ? cache_dir ufs /var/spool/squid/cache1 1000 16 256 access_log /var/log/squid/access.log squid refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880 refresh_pattern . 
0 20% 4320 acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9] upgrade_http0.9 deny shoutcast acl apache rep_header Server ^Apache broken_vary_encoding allow apache extension_methods REPORT MERGE MKACTIVITY CHECKOUT cache_mgr webmaster cache_effective_user proxy cache_effective_group proxy hosts_file /etc/hosts coredump_dir /var/spool/squid access.log 1269243042.740 0 192.168.1.11 TCP_DENIED/400 2576 GET NONE:// - NONE/- text/html 00-firewall iptables -F iptables -t nat -F iptables -t mangle -F iptables -X echo 1 | tee /proc/sys/net/ipv4/ip_forward iptables -t nat -A POSTROUTING -j MASQUERADE iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128 networking auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 142.104.109.179 netmask 255.255.224.0 gateway 142.104.127.254 auto eth1 iface eth1 inet static address 192.168.1.100 netmask 255.255.255.0
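    One squid.conf detail worth checking (an assumption about the cause, but it matches the symptom): when traffic arrives via an iptables REDIRECT, squid 2.7 has to be told that the port is an interception port, otherwise the reconstructed "GET / HTTP/1.1" request is rejected with TCP_DENIED/400 exactly as in the access.log line above. In squid.conf:

        # mark the listening port as transparent/intercepting
        http_port 3128 transparent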

    Read the article

  • AdPrep logs show an LDAP error

    - by Omar
    What I am trying to do is transition our domain from Server 2003 Enterprise x32 to Server 2008 R2 Enterprise x64. Here is what I have done thus far. The 2003 server is a physical machine, the 2008 server is a virtual machine Built a virtual machine that has Server 2008 R2 Enterprise x64 and joined it to the domain as a domain member On the 2003 DC, Raised Domain Functional Level and Forest Functional Level to Windows Server 2003 On the 2003 DC, went into the registry and navigated to HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters and verified that the Schema Version is 30 On the 2003 DC, inserted the Windows Server 2008 Enterprise x32 Edition to copy over the adprep folder. This version is the only one that seemed to work On the 2003 DC, opened command prompt and went to adprep directory and ran adprep /forestprep , adprep /domainprep , and adprep /domainprep /gpprep On the 2008 server, Installed the Active Directory Domain Services role from Server Manager On the 2003 DC, went into the registry and navigated to HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters and verified that the Schema Version is now 44 When I go to run dcpromo on the 2008 server, I get a message that says: "To install a domain controller into this Active Directory forest, you must first prepare using adprep /forestprep" I went back to the 2003 DC server and went through the adprep logs and I came across this: Adprep was unable to modify the security descriptor on object CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com. [Status/Consequence] ADPREP was unable to merge the existing security descriptor with the new access control entry (ACE). [User Action] Check the log file ADPrep.log in the C:\WINDOWS\debug\adprep\logs\20100327143517 directory for more information. Adprep encountered an LDAP error. *Error code: 0x20. Server extended error code: 0x208d, Server error message: 0000208D: NameErr: DSID-031001CD, problem 2001 (NO_OBJECT), data 0, best match of: 'CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com* In fact, I got three of these errors. The LDAP error is consistent with all three, but the top part where it says "Adprep was unable to modify the security descriptor on object" are different. They are the following: CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com. CN=DirectoryEmailReplication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com. CN=KerberosAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com. The credentials I am using on the 2008 server when running dcpromo is my domain account. My account is part of the domain and enterprise admin groups. I've tried various quick fixes that I've came across through Google searches that include: Disabling AntiVirus on current DCs Pointing DNS on PDC to point to itself Changing the Schema Update Allowed key to 1 and tried rerunning adprep - when rerunning adprep, told me that Forest-wide information has already been updated Disabled Windows Firewall on the Server 2008 box On the 2003 DC, went to Domain Controller Security Policy Local Policies User Rights Assignment and added Domain Admins to the Enable computer and user accounts to be trusted for delegation policy setting Both our PDC and BDC are Global Catalog Servers. 
Not sure if this matters or not I ran the command netdom query fsmo and verified that the FSMO role holder is the current 2003 PDC I ran dcdiag /v on the 2003 PDC and the only thing that failed was Services. Dnscache Service is stopped on the PDC I even went as far as deleting the virtual machine and recreating it from scratch - no avail... Help :(

    Read the article

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data Daily and/or Weekly, which we process & merge into their existing database. During business hours, load in the servers are pretty minimal as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube. I manage the IT Operations Team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers such that we're not trying to process daily clients back-to-back over night. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support over-head to basically "schedule" the ETL such that they don't over-lap, and move clients/schedules around if they out-grow the resources on a particular server or allocated time-slot. As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3,4,5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling, and sequential processing. One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/Disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows. Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if instead of running DTExec, SQL, and SSAS same machine (with every machine running that configuration), we run in pairs of three machines with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETL we were able to process on the machine independently. Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently chose a server based off how "busy" it already is in terms of CPU/Memory/Disk utilization, but that would be a herculean effort, and seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it). So in summary, are we missing an obvious solution for this, and does anyone know if any tools (for free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term). 
Ultimately VMWare's Distributed Resource Scheduler addresses this as you simply run a consistent number of clients per VM that you know will never conflict scheduleing-wise, then leave it up to VMWare to move the VMs around to balance out hardware usage. I'm definitely not against using VMWare to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer by checking on resource utilization at the OS, SQL, SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMWare is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great. Thanks guys, Jeff

    Read the article

< Previous Page | 79 80 81 82 83 84 85 86 87 88  | Next Page >