Search Results

Search found 4127 results on 166 pages for 'kd tree'.

  • How to get JTree expanded?

    - by Dzmitry Zhaleznichenka
    I have a wizard with several screens where the user has to fill in his or her details for further processing. On the second screen I have a radio group with three radio buttons that enable additional elements; to proceed, the user has to choose one of them. When the user selects the third button, a single-selection JTree filled with data is enabled and the user has to select an option from it, then press "Next" to get to the next screen. The selected option is stored as a TreePath. So far so good.

    My problem is the following. If the user comes back from the following screen to the screen with the JTree, I want to show the JTree expanded to the option that had been selected, and to highlight that option. However, whatever I try (any combination of expandPath, scrollPathToVisible, addSelectionPath, makeVisible) always gives me a collapsed tree. I have tried expanding both leaves and inner nodes. My code looks like this:

        rbProcessJTree.setSelected(isProcessJTree());
        if (null != getSelectedTablePath()) {
            trTables.addSelectionPath(getSelectedTablePath());
            trTables.expandPath(getSelectedTablePath());
            trTables.scrollPathToVisible(getSelectedTablePath());
        }

    When setSelected() is called, a state change listener is invoked that enables the JTree. The model is loaded during the form initialization. Each time I switch between screens, I save the input data from the previous screen and dispose of it. Then, when I need to open the previous screen again, I save the data from the following screen, dispose of it, load the data into this screen and show it. So each screen is generated from scratch every time.

    Could you please explain what sequence of operations has to be done to get the JTree expanded in a newly created form, with the data model loaded and a selection path provided?

  • How to marshal an object and its content (also objects)

    - by Waldo Spek
    I have a question for which I suspect the answer is a bit complex. At this moment I am programming a DLL (class library) in C#. This DLL uses a 3rd-party library and therefore deals with 3rd-party objects for which I do not have the source code. Now I am planning to create another DLL, which is going to be used at a later stage in my application. This second DLL should use the 3rd-party objects (with their corresponding object states) created by the first DLL.

    Luckily the 3rd-party objects extend the MarshalByRefObject class. I can marshal the objects using System.Runtime.Remoting.Marshal(...). I then serialize the objects using a BinaryFormatter and store them as a byte[] array. All goes well. I can deserialize and unmarshal in the opposite way and end up with my original 3rd-party objects... so it appears...

    Nevertheless, when calling methods on my deserialized 3rd-party objects I get object-internal exceptions. Normally these methods return other 3rd-party objects, but (obviously, I guess) those objects are now missing because they weren't serialized.

    Now my global question: how would I go about marshalling/serializing all the objects which my 3rd-party objects reference, cascading down the "reference tree" to obtain a full and complete serialized object? Right now my guess is to preprocess: obtain all the objects, build my own custom object and serialize it. But I'm hoping there is some other way...

  • How does Lucene indexing work?

    - by user312140
    Hello. I have read some documentation about Lucene, including the talk at this link (http://lucene.sourceforge.net/talks/pisa), but I don't really understand how Lucene indexes documents or which algorithm it uses for indexing. The talk above says Lucene uses this incremental algorithm:

        - maintain a stack of segment indices
        - create an index for each incoming document
        - push new indexes onto the stack
        - let b=10 be the merge factor; M=8

        for (size = 1; size < M; size *= b) {
            if (there are b indexes with size docs on top of the stack) {
                pop them off the stack;
                merge them into a single index;
                push the merged index onto the stack;
            } else {
                break;
            }
        }

    How does this algorithm give us optimized indexing? Does Lucene use a B-tree algorithm, or some other algorithm like that, for indexing, or does it have a particular algorithm of its own? Thank you for reading my post.
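
    A quick way to see what that incremental algorithm is doing is to simulate the segment stack. The following Python sketch is illustrative only (it is not Lucene's code, and "merging" here just adds segment sizes): each new document is pushed as a size-1 segment, and whenever b segments of equal size sit on top of the stack they are merged into one, which keeps the number of segments roughly logarithmic in the number of documents.

        def add_document(stack, b=10):
            # Push a one-document segment, then merge whenever the top b
            # segments on the stack all have the same size (toy model only).
            stack.append(1)
            while len(stack) >= b and len(set(stack[-b:])) == 1:
                merged = sum(stack[-b:])   # "merging" here just adds the sizes
                del stack[-b:]
                stack.append(merged)

        stack = []
        for _ in range(1000):
            add_document(stack)
        print(stack)   # [1000]: ten 1s merge into a 10, ten 10s into a 100, and so on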

  • Rationale in selecting Hash Key type

    - by Amrish
    Guys, I have a data structure which has 25 distinct integer keys and a value. I have a list of these objects (say 50,000) and I intend to use a hash table to store/retrieve them. I am planning to take one of these approaches:

    1. Create an integer hash from the 25 integer keys and store it in a hash table. (Yes, I have some means to handle collisions.)
    2. Concatenate the individual keys into a string and use that as the hash key. For example, if the key values are 1, 2, 4, 6, 7 then the hash key would be "12467".

    Assuming that I have a total of 50,000 records, each with 25 distinct keys and a value, will my second approach be overkill when it comes to the cost of the string comparisons needed to retrieve and insert a record?

    Some more information: each bucket in the hash table is a balanced binary tree, and I am using the boost library's hash_combine method to create the hash from the 25 keys.
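
    For reference, boost's hash_combine folds each value into a running seed with a cheap mixing step, which is essentially what approach 1 amounts to. A Python rendition of the idea (illustrative only; this is not the asker's C++/boost code, and the 32-bit mask is an assumption made for the sketch):

        def hash_combine(seed, value):
            # Mirrors the mixing step used by boost::hash_combine (illustrative).
            return (seed ^ (hash(value) + 0x9e3779b9 + (seed << 6) + (seed >> 2))) & 0xFFFFFFFF

        def combined_key(keys):
            seed = 0
            for k in keys:            # 25 integer keys folded into one integer hash
                seed = hash_combine(seed, k)
            return seed

        print(combined_key([1, 2, 4, 6, 7]))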

  • Legacy application creates dialogs on a non-UI thread

    - by Frater
    I've been working in support for a while on a legacy application, and I've noticed a bit of a problem. The system is an incredibly complex client/server application with both standard and custom frameworks. One of the custom frameworks built into the application validates workflow actions: it finds potential problems, separates them into warnings and errors, and passes the results back to the client. The main difference between warnings and errors is that warnings ask the user whether they wish to ignore the problem.

    The issue I have is that the dialog for this prompt is created on a non-UI thread, and thus we get cross-threading issues when the dialog is shown. I have attempted to invoke the showing of the dialog, but this fails because the window handle has not been created. (InvokeRequired returns false, which I assume in this case means it cannot find a usable handle in its parent tree, rather than that invoking isn't required.)

    Does anyone have any suggestions for how I can create this dialog and get the UI thread to set it up and show it?

  • Python: problem with tiny script to delete files

    - by Rosarch
    I have a project that used to be under SVN, but now I'm moving it to Mercurial, so I want to clear out all the .svn files. It's a little too big to do by hand, so I decided to write a Python script to do it. But it isn't working.

        import os

        def cleandir(dir_path):
            print "Cleaning %s\n" % dir_path
            toDelete = []
            files = os.listdir(dir_path)
            for filedir in files:
                print "considering %s" % filedir
                # continue
                if filedir == '.' or filedir == '..':
                    print "skipping %s" % filedir
                    continue
                path = dir_path + os.sep + filedir
                if os.path.isdir(path):
                    cleandir(path)
                else:
                    print "not dir: %s" % path
                if 'svn' in filedir:
                    toDelete.append(path)
            print "Files to be deleted:"
            for candidate in toDelete:
                print candidate
            print "Delete all? [y|n]"
            choice = raw_input()
            if choice == 'y':
                for filedir in toDelete:
                    if os.path.isdir(filedir):
                        os.rmdir(filedir)
                    else:
                        os.unlink(filedir)
            exit()

        if __name__ == "__main__":
            cleandir(dir)

    The print statements show that it is only "considering" the entries whose names start with ".". However, if I uncomment the continue statement, all the entries are "considered". Why is this? Or is there some other utility that already exists to recursively de-SVN-ify a directory tree?
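
    For comparison, the usual pattern for this kind of cleanup is os.walk plus shutil.rmtree, pruning each .svn directory as it is found. A minimal sketch, assuming you really do want to delete everything named .svn under the given root (the path at the bottom is a placeholder):

        import os
        import shutil

        def remove_svn_dirs(root):
            # Walk top-down so .svn can be pruned from dirnames before descending.
            for dirpath, dirnames, filenames in os.walk(root):
                if '.svn' in dirnames:
                    target = os.path.join(dirpath, '.svn')
                    print('removing %s' % target)
                    shutil.rmtree(target)       # deletes the directory and its contents
                    dirnames.remove('.svn')     # do not walk into the deleted tree

        # remove_svn_dirs('/path/to/project')   # hypothetical path

    os.walk already recurses for you, so the manual recursion in the script above is not needed.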

  • Easy way to "organize"/"render"/"generate" HTML?

    - by Rockmaninoff
    Hi all, I'm a relative newbie to web development. I know my HTML and CSS, and am getting involved with Ruby on Rails for some other projects, which has been daunting but very rewarding.

    Basically I'm wondering if there's a language/backbone/program/solution to eliminate the copypasta trivialities of HTML, with some caveats. Currently my website is hosted on a school server and unfortunately can't use Rails. Being a newbie I also don't really know what other technologies are available to me (or even what those technologies might be).

    I'm essentially looking for a way to auto-insert all of my header/sidebar/footer/menu information, and when those need to be updated, have the rest of the pages get updated. Right now, I have a sidebar that is a tree of all of the pages on my website. When I add a page, not only do I need to update the sidebar, I have to update it for every page in my domain. This is really inefficient and I'm wondering if there is a better way.

    I imagine this is a pretty widespread problem, but searching Google turns up too many irrelevant links (design template websites, tutorials, etc.). I'd appreciate any help. Oh, and I've heard of HAML as a way to render HTML; how would it be used in this situation?
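
    One low-tech option when the server can only host static files is to generate the pages offline and upload the result: keep a single copy of the header/sidebar/footer and stitch them into every page with a small script. A toy Python sketch of that idea (all file names and placeholder markers below are invented for illustration):

        import os

        INCLUDES = {"HEADER": "includes/header.html",
                    "SIDEBAR": "includes/sidebar.html",
                    "FOOTER": "includes/footer.html"}   # hypothetical include files

        def render(template_path, output_path):
            html = open(template_path).read()
            # Replace each placeholder comment with the shared fragment.
            for name, path in INCLUDES.items():
                html = html.replace("<!-- %s -->" % name, open(path).read())
            with open(output_path, "w") as out:
                out.write(html)

        for page in os.listdir("pages"):                 # hypothetical source folder
            if page.endswith(".html"):
                render(os.path.join("pages", page), os.path.join("site", page))

    Updating the sidebar then means editing one fragment and re-running the script before uploading.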

  • Dimension Reduction in Categorical Data with missing values

    - by user227290
    I have a regression model in which the dependent variable is continuous, but ninety percent of the independent variables are categorical (both ordered and unordered), and around thirty percent of the records have missing values (to make matters worse, they are missing randomly without any pattern; that is, more than forty-five percent of the data have at least one missing value). There is no a priori theory to choose the specification of the model, so one of the key tasks is dimension reduction before running the regression.

    While I am aware of several methods of dimension reduction for continuous variables, I am not aware of a similar statistical literature for categorical data (except, perhaps, as part of correspondence analysis, which is basically a variation of principal component analysis on a frequency table). Let me also add that the dataset is of moderate size: 500,000 observations with 200 variables. I have two questions:

    1. Is there a good statistical reference for dimension reduction of categorical data along with robust imputation? (I think the first issue is imputation and then dimension reduction.)
    2. This is linked to the implementation of the above. I have used R extensively and tend to rely heavily on the transcan and impute functions for continuous variables, and on a variation of the tree method to impute categorical values. I have a working knowledge of Python, so if something nice is out there for this purpose I will use it. Any implementation pointers in Python or R will be of great help.

    Thank you.
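
    On the implementation side, one rough Python pipeline for "impute, then reduce" on categorical predictors is sketched below. It assumes pandas and scikit-learn are acceptable, uses crude most-frequent imputation, and is meant as an illustration rather than a statistical recommendation:

        import pandas as pd
        from sklearn.decomposition import TruncatedSVD

        def impute_and_reduce(df, categorical_cols, n_components=20):
            # Crude imputation: fill each categorical column with its most frequent level.
            filled = df.copy()
            for col in categorical_cols:
                filled[col] = filled[col].fillna(filled[col].mode().iloc[0])
            # One-hot encode the categoricals, then reduce the indicator matrix with
            # truncated SVD (roughly in the spirit of correspondence analysis).
            # For 500,000 rows a sparse encoding would be advisable; kept dense here for brevity.
            indicators = pd.get_dummies(filled[categorical_cols].astype(str))
            svd = TruncatedSVD(n_components=n_components)
            return svd.fit_transform(indicators)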

  • Trie Backtracking in Recursion

    - by Darksky
    I am building a trie for a spell checker with suggestions. Each node contains a key (a letter) and a value (an array of letters down that path). So assume the following sub-trie in my big trie:

                        W
                       / \
                      a   e
                      |   |
                      k   k
                      |   |
          is word --> e   e
                          |
                         ...

    This is just a sub-path of a sub-trie. W is a node, and a and e are two nodes in its value array, etc. At each node, I check whether the next letter in the word is a value of the node. I am trying to support mistyped vowels for now, so 'weke' should yield 'wake' as a suggestion. Here's my searchWord function in my trie:

        def searchWord(self, word, path=""):
            if len(word) > 0:
                key = word[0]
                word = word[1:]
                if self.values.has_key(key):
                    path = path + key
                    nextNode = self.values[key]
                    return nextNode.searchWord(word, path)
                else:
                    # check here if key is a vowel. If it is, check for other vowel substitutes
            else:
                if self.isWord:
                    return path  # this is the word found
                else:
                    return None

    Given 'weke', at the end, when word is of length zero and path is 'weke', my code will hit the second big else block. 'weke' is not marked as a word, so it will return None, and this None will propagate out of searchWord. To avoid this, at each stack unwind (as the recursion backtracks) I need to check whether a letter is a vowel and, if it is, do the check again. I changed the if self.values.has_key(key) branch to the following:

        if self.values.has_key(key):
            path = path + key
            nextNode = self.values[key]
            ret = nextNode.searchWord(word, path)
            if ret == None:
                # check if key == vowel and replace path
                # return nextNode.searchWord(...
            return ret

    What am I doing wrong here? What can I do when backtracking to achieve what I'm trying to do?
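
    One way to structure the backtracking is to try the letter as typed first and, only if that entire branch fails, retry the same position with the other vowels. The sketch below is illustrative only: it assumes a node shaped like the asker's (a values dict of child nodes and an isWord flag) but is not their code.

        VOWELS = set('aeiou')

        def search_word(node, word, path=""):
            # Base case: the whole word is consumed; succeed only if this node ends a word.
            if not word:
                return path if node.isWord else None
            key, rest = word[0], word[1:]
            # 1. Try the letter as typed.
            if key in node.values:
                found = search_word(node.values[key], rest, path + key)
                if found is not None:
                    return found
            # 2. Backtrack: if it was a vowel, try the other vowels at this position.
            if key in VOWELS:
                for alt in VOWELS - {key}:
                    if alt in node.values:
                        found = search_word(node.values[alt], rest, path + alt)
                        if found is not None:
                            return found
            return None

    Because the literal branch is tried first, a correctly typed word is still found before any substitution is attempted.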

  • File IO with Streams - Best Memory Buffer Size

    - by AJ
    I am writing a small IO library to assist with a larger (hobby) project. Part of this library performs various functions on a file, which is read/written via a FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop varies, but is not too time consuming (it could just be a simple file copy, for example, or may involve encryption...).

    My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2 KB, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app.

    As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP/HTTP servers (over a local network / fast-ish DSL). What would be the best memory buffer size for those (again, a "best-case" trade-off between perceived responsiveness and performance)?
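
    Because the answer depends on the hardware and the workload, one practical route is simply to benchmark a few sizes. A small Python sketch of that idea (illustrative only; the asker's library is C#, the file names are placeholders, and the chunk loop is where a progress event would be raised):

        import time

        def copy_with_buffer(src, dst, buffer_size):
            # Copy src to dst in buffer_size chunks; returns elapsed seconds.
            start = time.time()
            with open(src, "rb") as fin, open(dst, "wb") as fout:
                while True:
                    chunk = fin.read(buffer_size)
                    if not chunk:
                        break
                    fout.write(chunk)
                    # a progress event would fire here, once per chunk
            return time.time() - start

        # for size in (2 * 1024, 64 * 1024, 1024 * 1024):   # 2 KB, 64 KB, 1 MiB
        #     print(size, copy_with_buffer("big.bin", "copy.bin", size))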

  • 3 questions about the JSON-P format

    - by Tristan
    Hi, I have to write a script which will be hosted on different domains. This script has to get information from my server, so stackoverflow's users told me that I have to use the JSON-P format, which, after research, is what I'm going to do. (The data provided in JSON-P is for displaying information hosted on my server on other websites.)

    1. How do I output JSON-P from my server? Is it the same as the json_encode function from PHP?
    2. How do I design the tree pattern for the output JSON-P (you know, like ({"name" : "foo", "id" : "xxxxx", "blog" : "http://xxxxxx.com"});)? Can I steal this from my XML output? (http://bit.ly/9kzBDP)
    3. Each time a visitor browses a website on which my widget is, it'll make a request to my server for the JSON-P data to display on the client side. That will increase the CPU load dramatically (1 visitor on a website that has the script = 1 SQL request on my server to output the data), so is there any way of caching the JSON-P output so it is refreshed only once or twice a day and stored in a file (with which extension?). BUT, on the other hand, I would say that requesting the JSON-P data directly (without caching it) is a plus, because websites which integrate the script only want to display THEIR information and not the whole data. So I'd make a request with something like:

        jQuery.getJSON("http://www.something.com/json-p/outpout?filter=5&callback=?", function(data) {
            ................
        });

    where filter= the information the website wants to display.

    What do you think? Thank you very much ;)

  • Simple HTML interface to XSD?

    - by Visage
    I'm writing an app that, at its heart, uses a hierarchical tree of nodes in XML. It looks like this:

        <node>
            <name>Node1</name>
            <Attribute1>Something</Attribute1>
            <Attribute2>SomethingElse</Attribute2>
            <child>Node2</child>
            <child>Node4</child>
            <child>Node7</child>
        </node>

    And so on (all child elements must refer to an existing node, though the node in question doesn't have to precede the first reference to it).

    For a simple structure like this, is there a simple tool to generate an HTML page that will allow a user to enter nodes and dynamically update a server-side XML file? I'm basically writing a tool that will use such a file, but the people whose job it is to create the file aren't especially techno-literate, so creating the XML by hand is a no-no. I could hand-crank one fairly quickly, but if I can get a tool to do it, even better (especially as the format may change in future)....

  • Can the Visual Studio (2010) Command Window handle "external tools" with project/solution relative paths?

    - by ee
    I have been playing with the Command Window in Visual Studio (View - Other Windows - Command Window). It is great for several mouse-free scenarios. (The autocompleting file "Open" command rocks in a non-trivial solution.) That success got me thinking and experimenting:

    - Possibility 1.1: you can use the Alias command to create custom commands.
    - Possibility 1.2: you can use the Shell command to run arbitrary executables and specify parameters (and pipe the result to the output or command windows).
    - Possibility 2: a previously set up external tool definition (with project-relative path variables) could be run from the command window.

    What I am stuck on is:

    - There doesn't appear to be a way to send parameters to an aliased command (and thus to the underlying Shell call).
    - There doesn't appear to be a way to use project/solution-relative paths ($SolutionDir/$ProjectDir) in a Shell call.
    - Using absolute paths in Shell works, but is fragile and high-maintenance (one alias for each needed use case). Typically you want the command to run against a file relative to your project/solution.
    - It seems you can't run the traditional external tools (Tools - External Tools...) in the command window.

    Ultimately I want the external tool functionality in the command window in some way. Can anyone see a way to do this? Or am I barking up the wrong tree? So my questions: can an "external tool" of some sort (using relative project/solution path parameters) be used in the Command Window? If yes, how? If no, what might be a suitable alternative?

  • Getting a memory leak at NSURLConnection in the iPhone SDK

    - by monish
    Hi guys, I'm getting a leak at the NSURLConnection in my libxml parser. Can anyone tell me how to resolve it? The code where the leak is reported is:

        - (BOOL)parseWithLibXML2Parser {
            BOOL success = NO;
            ZohoAppDelegate *appDelegate = (ZohoAppDelegate*) [[UIApplication sharedApplication] delegate];

            NSString* curl;
            if ([cName length] == 0) {
                curl = @"https://invoice.zoho.com/api/view/settings/currencies?ticket=";
                curl = [curl stringByAppendingString:appDelegate.ticket];
                curl = [curl stringByAppendingString:@"&apikey="];
                curl = [curl stringByAppendingString:appDelegate.apiKey];
                curl = [curl stringByReplacingOccurrencesOfString:@"\n" withString:@""];
            }

            NSURLRequest *theRequest = [NSURLRequest requestWithURL:[NSURL URLWithString:curl]];
            NSLog(@"the request parserWithLibXml2Parser %@", theRequest);

            NSURLConnection *con = [[[NSURLConnection alloc] initWithRequest:theRequest delegate:self] autorelease]; // Memory leak reported on this line.
            //self.connection = con;
            //[con release];

            // This creates a context for "push" parsing in which chunks of data that are
            // not "well balanced" can be passed to the context for streaming parsing. The
            // handler structure defined above will be used for all the parsing. The second
            // argument, self, will be passed as user data to each of the SAX handlers. The
            // last three arguments are left blank to avoid creating a tree in memory.
            _xmlParserContext = xmlCreatePushParserCtxt(&simpleSAXHandlerStruct, self, NULL, 0, NULL);

            if (con != nil) {
                do {
                    [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate distantFuture]];
                } while (!_done && !self.error);
            }

            if (self.error) {
                //NSLog(@"parsing error");
                [self.delegate parser:self encounteredError:nil];
            } else {
                success = YES;
            }
            return success;
        }

    Anyone's help will be much appreciated. Thank you, Monish.

  • Using an embedded DB (SQLite / SQL Compact) for Message Passing within an app?

    - by wk1989
    Hello, just out of curiosity: for applications that have a fairly complicated module tree, would something like SQLite / SQL Server Compact Edition work well for message passing? So if I have modules containing data such as:

        \SubsystemA\SubSubSysB\ModuleB\ModuleDataC
        \SubSystemB\SubSubSystemC\ModuleA\ModuleDataX

    then with traditional message passing/routing, you have to go through intermediate modules in order to pass a message to ModuleB to request, say, ModuleDataC. Instead of doing that, if we simply store "\SubsystemA\SubSubSysB\ModuleB\ModuleDataC" in a SQLite database, getting that data is as simple as an SQL query and needs no routing and passing stuff around.

    Has anyone done this before? Even if you haven't, do you foresee any issues or performance impact? The only concern I have right now is the passing of custom types: e.g. if ModuleDataC is a custom data structure or a pointer, I'll need some way of storing that data structure or pointer in the DB.

    Thanks, JW

    EDIT: One usage case I hadn't thought about is when you want to send a message from ModuleA to ModuleB to get ModuleB to do something, rather than just getting/setting data. Is it possible to do this using an embedded DB? I believe a callback from the DB would be needed; how feasible is this?
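
    As a rough illustration of the shared-blackboard idea, here is a minimal sketch using Python's built-in sqlite3 module (Python is used purely for illustration, and the database file name is made up). Module data is stored under its path, so any module can read it with one query and no routing; note that values have to be serialisable, which is exactly the custom-type/pointer concern raised above:

        import sqlite3
        import json

        conn = sqlite3.connect("blackboard.db")   # hypothetical shared database file
        conn.execute("CREATE TABLE IF NOT EXISTS module_data (path TEXT PRIMARY KEY, value TEXT)")

        def publish(path, value):
            # Values must be serialisable; raw pointers cannot be stored meaningfully.
            conn.execute("INSERT OR REPLACE INTO module_data VALUES (?, ?)", (path, json.dumps(value)))
            conn.commit()

        def lookup(path):
            row = conn.execute("SELECT value FROM module_data WHERE path = ?", (path,)).fetchone()
            return json.loads(row[0]) if row else None

        publish(r"\SubsystemA\SubSubSysB\ModuleB\ModuleDataC", {"temperature": 21.5})
        print(lookup(r"\SubsystemA\SubSubSysB\ModuleB\ModuleDataC"))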

  • jQuery: how to access an XML node by index?

    - by DS
    Hi, say I have XML returned from the server like this:

        <persons>
            <person>
                <firstname>Jon</firstname>
            </person>
            <person>
                <firstname>Jack</firstname>
            </person>
            <person>
                <firstname>James</firstname>
            </person>
        </persons>

    If I want to access the 3rd firstname node (the index is passed dynamically and stored in i, assumed to be 3 here), how do I do that? My weird attempt follows:

        var i = 3;
        $(xml).find('firstname').each(function(idx) {
            if (idx == i) alert($(this).text());
        });

    It does fetch me the right content... but it just feels wrong to me, especially the looping part. Basically I'm looping through the whole tree using .each()! Is there any better approach than this? Something that'd take me to the nth node directly, like:

        alert($(xml).find('firstname')[idx].text()); // where idx = n

    I'm new to jQuery, so please excuse my jQuery coding approach.

  • How to access frame (not iframe) contents from jQuery.

    - by kazanaki
    Hello. I have 2 frames in one page like this (home.html):

        <frameset rows="50%, 50%">
            <frame id="treeContent" src="treeContent.html" />
            <frame id="treeStatus" src="treeStatus.html" />
        </frameset>

    and then in one frame (treeStatus.html) I have something like:

        <body style="margin: 0px">
            <div id="statusText">Status bar for Tree</div>
        </body>

    I want, from the top window, to manipulate the div located in the child frame via jQuery (e.g. show and hide it). I have seen several questions like this and they suggest the following:

        $(document).ready(function() {
            $('#treeStatus').contents().find("statusText").hide();
        });

    I do not know if this works with iframes, but in my case, where I have plain frames, it does not seem to work. The code is placed inside home.html. Here is some output from the Firebug console:

        >>> $('#treeStatus')
        [frame#treeStatus]
        >>> $('#treeStatus').contents()
        []
        >>> $('#treeStatus').children()
        []

    So how do I access frame elements from the top frame? Am I missing something here?

  • How to cancel a deeply nested process

    - by Mystere Man
    I have a class that is a "manager" sort of class. One of its functions is to signal that the long-running process of the class should shut down. It does this by setting a boolean called isStopping on the class:

        public class Foo
        {
            bool isStopping;

            void DoWork()
            {
                while (!isStopping)
                {
                    // do work...
                }
            }
        }

    Now, DoWork() was a gigantic function, and I decided to refactor it and, as part of the process, broke some of it out into other classes. The problem is that some of these classes also have long-running functions that need to check whether isStopping is true:

        public class Foo
        {
            bool isStopping;

            void DoWork()
            {
                while (!isStopping)
                {
                    MoreWork mw = new MoreWork();
                    mw.DoMoreWork(); // possibly long running

                    // do work...
                }
            }
        }

    What are my options here? I have considered passing isStopping by reference, which I don't really like because it requires there to be an outside object. I would prefer to make the additional classes as stand-alone and dependency-free as possible. I have also considered making isStopping a property that raises an event the inner classes could subscribe to, but this seems overly complex. Another option was to create a "process cancellation token" class, similar to what .NET 4 Tasks use, and pass that token to those classes.

    How have you handled this situation?

    EDIT: Also consider that MoreWork might have an EvenMoreWork object that it instantiates and calls a potentially long-running method on... and so on. I guess what I'm looking for is a way to signal an arbitrary number of objects down a call tree to tell them to stop what they're doing, clean up and return.
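
    The cancellation-token option in the last paragraph generalises well: create one small shared object up front and hand it to every worker down the call tree, so no worker needs a reference back to the manager. As a language-neutral illustration (sketched here in Python with threading.Event rather than the asker's C#):

        import threading
        import time

        class Worker:
            """A nested worker that only knows about the shared stop signal."""
            def __init__(self, stop_event):
                self.stop_event = stop_event

            def do_more_work(self):
                for _ in range(1000):               # potentially long-running loop
                    if self.stop_event.is_set():    # cooperative cancellation check
                        return
                    time.sleep(0.01)                # stand-in for a chunk of real work

        class Manager:
            def __init__(self):
                self.stop_event = threading.Event()

            def do_work(self):
                worker = Worker(self.stop_event)    # hand the token down, not the manager
                while not self.stop_event.is_set():
                    worker.do_more_work()

            def stop(self):
                self.stop_event.set()               # every holder of the event sees this

    Deeper workers (an EvenMoreWork, and so on) just receive the same token in their constructors.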

  • Cross-platform build UNC share (Windows->Linux) - possible to be case-sensitive on CIFS share?

    - by holtavolt
    To optimize builds between Windows and Linux (Ubuntu 10.04), I've got a UNC share of the source tree that is shared between the systems, and all build output goes to local disk on each system. This mostly works great, as source updates and changes can quickly be tested on both systems, but there's one annoying limitation I can't find a way around: the Linux CIFS mount is case-insensitive. Consequently, a test compile of code that has an error like:

        #include "Foo.h"

    for a file foo.h will not be caught by a test build (until a local compile is done on the Linux box, e.g. in the nightly builds).

    Is it possible to get case sensitivity on the Windows UNC share from the Linux box? I've tried a variety of fstab and mount combinations with no success, as well as editing smb.conf to set "case sensitive = yes". Given what the Ubuntu man page says about this:

        nocase   Request case insensitive path name matching (case sensitive is the default if the server supports it).

    I suspect that this is a limitation on the Windows UNC side, and there's nothing to be done short of switching to some other mechanism (is NFS still viable anywhere?). If anyone has already solved this to support optimized cross-platform build environments, I'd appreciate hearing about it!

  • Measure text size in JavaScript

    - by kayahr
    I want to measure the size of a text in JavaScript. So far this isn't so difficult, because I can simply put a temporary invisible div into the DOM tree and check its offsetWidth and offsetHeight. The problem is, I want to do this BEFORE the DOM is ready. Here is a stub:

        <html>
          <head>
            <script type="text/javascript">
              var text = "Hello world";
              var fontFamily = "Arial";
              var fontSize = 12;

              var size = measureText(text, fontSize, fontFamily);

              function measureText(text, fontSize, fontFamily) {
                  // TODO Implement me!
              }
            </script>
          </head>
          <body>
          </body>
        </html>

    Again: I KNOW how to do it asynchronously when the DOM (or body) signals that it is ready. But I want to do it synchronously, right in the header, as shown in the stub above. Any ideas how I can accomplish this? My current opinion is that this is impossible, but maybe someone has a crazy idea.

  • [OpenCV] What do the "left" and "right" values mean in the haar cascade xml files?

    - by user117046
    In OpenCV's Haar cascade files, what are the "left" and "right" values, and how do they relate to the "threshold" value? Thanks! Just for reference, here's the structure of the files:

        <haarcascade_frontalface_alt type_id="opencv-haar-classifier">
          <size>20 20</size>
          <stages>
            <_>
              <!-- stage 0 -->
              <trees>
                <_>
                  <!-- tree 0 -->
                  <_>
                    <!-- root node -->
                    <feature>
                      <rects>
                        <_>3 7 14 4 -1.</_>
                        <_>3 9 14 2 2.</_>
                      </rects>
                      <tilted>0</tilted>
                    </feature>
                    <threshold>4.0141958743333817e-003</threshold>
                    <left_val>0.0337941907346249</left_val>
                    <right_val>0.8378106951713562</right_val>
                  </_>
                </_>
                <_>

  • How can I build the value for a "for-each" expression in XSLT with the help of a parameter?

    - by Artic
    I need to navigate through this XML tree:

        <publication>
            <corporate>
                <contentItem>
                    <metadata>meta</metadata>
                    <content>html</content>
                </contentItem>
                <contentItem>
                    <metadata>meta1</metadata>
                    <content>html1</content>
                </contentItem>
            </corporate>
            <eurasia-and-africa>
                ...
            </eurasia-and-africa>
            <europe>
                ...
            </europe>
        </publication>

    and convert it to HTML with this stylesheet:

        <ul>
            <xsl:variable name="itemsNode" select="concat('publicationManifest/', $group, '/contentItem')"></xsl:variable>
            <xsl:for-each select="$itemsNode">
                <li>
                    <xsl:value-of select="content"/>
                </li>
            </xsl:for-each>
        </ul>

    $group is a parameter holding the name of a group, for example "corporate". I get an error compiling this stylesheet:

        SystemID: D:\1\contentsTransform.xslt
        Engine name: Saxon 6.5.5
        Severity: error
        Description: The value is not a node-set
        Start location: 18:0

    What is the matter?

  • How to import a module from PyPI when I have another module with the same name

    - by kuzzooroo
    I'm trying to use the lockfile module from PyPI. I do my development within Spyder. After installing the module from PyPI, I can't import it by doing import lockfile; I end up importing anaconda/lib/python2.7/site-packages/spyderlib/utils/external/lockfile.py instead. Spyder seems to want the spyderlib/utils/external directory at the beginning of sys.path, or at least none of the polite ways I can find to add my other paths get me in front of spyderlib/utils/external. I'm using Python 2.7, but with from __future__ import absolute_import.

    Here's what I've already tried:

    1. Writing code that modifies sys.path before running import lockfile. This works, but it can't be the correct way of doing things.
    2. Circumventing the normal mechanics of importing in Python using the imp module (I haven't gotten this to work yet, but I'm guessing it could be made to work).
    3. Installing the package with something like pip install --install-option="--prefix=modules_with_name_collisions" package_name. I haven't gotten this to work yet either, but I'm guessing it could be made to work. It looks like this option is intended to create an entirely separate lib tree, which is more than I need. (Source)
    4. Using pip install --target=lockfile_from_pip. The files show up in the directory where I tell them to go, but import doesn't find them. And in fact pip uninstall can't find them either: I get Cannot uninstall requirement lockfile-from-pip, not installed, and I guess I will just delete the directories and hope that's clean. (Source)

    So what's the preferred way for me to get access to the PyPI lockfile module?
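
    One way to make option 2 concrete is to tell imp to search only the real site-packages directory, so Spyder's bundled copy never gets a chance to shadow the PyPI one. A minimal Python 2.7 sketch, where the site-packages path is a placeholder for wherever pip actually installed lockfile:

        import imp

        SITE_PACKAGES = "/path/to/anaconda/lib/python2.7/site-packages"   # hypothetical path

        # find_module is told to look only in this directory, bypassing sys.path entirely.
        fileobj, pathname, description = imp.find_module("lockfile", [SITE_PACKAGES])
        try:
            real_lockfile = imp.load_module("real_lockfile", fileobj, pathname, description)
        finally:
            if fileobj:
                fileobj.close()

        lock = real_lockfile.FileLock("mydata.txt")   # then use the PyPI module as usual

    Loading it under a different name ("real_lockfile") also keeps it from colliding with Spyder's copy in sys.modules.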

  • How to combine two separate unrelated Git repositories into one with single history timeline

    - by Antony
    I have two unrelated Git repositories (not sharing any ancestor check-in). One is a super-repository which contains a number of smaller projects (let's call it repository A). The other is just a makeshift local Git repository for a smaller project (let's call it repository B). Graphically, it looks like this:

        A0-B0-C0-D0-E0-F0-G0-HEAD   (repo A)
        A0-B0-C0-D0-E0-F0-G0-HEAD   (remote/master bare repo pulled & pushed from repo A)
        A1-B1-C1-D1-E1-HEAD         (repo B)

    Ideally, I would really like to merge repo B into repo A with a single history timeline, so it would appear that I originally started the project in repo A. Graphically, this would be the ideal end result:

        A0-A1-B1-B0-D1-C0-D0-E0-F0-G0-E1-H(from repo B)-HEAD   (new repo A)
        A0-A1-B1-B0-D1-C0-D0-E0-F0-G0-E1-H(from repo B)-HEAD   (remote/master bare repo pulled & pushed from repo A)

    I have been doing some reading on submodules and subtree (Pro Git is a pretty good book, by the way), but both of them seem to cater to maintaining two separate branches, with submodules being able to pull changes from upstream and subtree being slightly less of a headache. Both solutions require additional and specialized git commands to handle check-ins and to sync between the master and the submodule/subtree branch. Both solutions also result in multiple timelines (with --squash you even get three timelines with subtree).

    The closest solution from SO seems to talk about "grafts", but is that really it? The goal is to have a single unified repository where I can pull/push check-ins, so that there is no more repo B, just repo A in the end.

  • [javascript] Populating jsTree based on XML data uploaded to server folder

    - by PFM
    tl;dr: How can I populate jsTree based on a folder location instead of an exact XML URL?

    I'm looking for a little direction on this project. Currently I am trying to copy the file structures of hard drives as XML files and recreate them using jsTree on the web server, as a completely independent version of the file structure. I have some Python script that outputs XML files formed for jsTree and automatically uploads them to a folder on the server.

    The problem is that I am now a little lost, because I have to manually enter each XML file into the jsTree code for it to display, so I have multiple entries like this:

        $("#tree1")
            .jstree({
                "plugins" : [ "themes", "xml_data", "ui", "search", "types" ],
                "xml_data" : {
                    "ajax" : {
                        "url" : "./XML_DATA/DRIVE1.xml"
                    },
                    "xsl" : "nest"
                },

    I see in the documentation that, instead of being populated by a direct file, the folders are populated by "server.php", but nowhere in the PHP code does it point to any directories or files. After considering the problem I thought of a few solutions and could use some advice on them:

    - Should I be trying to write PHP code to automatically look through my XML_DATA folder and load each XML file?
    - Should I just upload all the XML to MySQL and populate my tree based on that?
    - Should the JavaScript be the code looking through the server's folder for XML files?

    All the XML is formed the same way, but the number of XML files on the server will increase, and they will have to be refreshed as well, since they will be overwritten with changes. Any direction would be appreciated, thanks.
