Search Results

Search found 616 results on 25 pages for 'recursively'.

Page 22 of 25

  • auto-document exceptions on methods in C#/.NET

    - by Sarah Vessels
    I would like some tool, preferably one that plugs into VS 2008/2010, that will go through my methods and add XML comments about the possible exceptions they can throw. I don't want the <summary> or other XML tags to be generated for me because I'll fill those out myself, but it would be nice if even on private/protected methods I could see which exceptions could be thrown. Otherwise I find myself going through the methods and hovering on all the method calls within them to see the list of exceptions, then updating that method's <exception list to include those. Maybe a VS macro could do this? From this: private static string getConfigFilePath() { return Path.Combine(Environment.CurrentDirectory, CONFIG_FILE); } To this: /// <exception cref="System.ArgumentException"/> /// <exception cref="System.ArgumentNullException"/> /// <exception cref="System.IO.IOException"/> /// <exception cref="System.IO.DirectoryNotFoundException"/> /// <exception cref="System.Security.SecurityException"/> private static string getConfigFilePath() { return Path.Combine(Environment.CurrentDirectory, CONFIG_FILE); } Update: it seems like the tool would have to go through the methods recursively, e.g., method1 calls method2 which calls method3 which is documented as throwing NullReferenceException, so both method2 and method1 are documented by the tool as also throwing NullReferenceException. The tool would also need to eliminate duplicates, like if two calls within a method are documented as throwing DirectoryNotFoundException, the method would only list <exception cref="System.IO.DirectoryNotFoundException"/> once.

    Read the article

  • How to get the Actual Link file location in VSS?

    - by Regi
    I use VSS and currently I am adding a link file using following code: int ShareFlags = (int)VSSFlags.VSSFLAG_RECURSNO; //Link in sourcesafe IVSSDatabase ssdb = GetVssDatabase(); Shared.Enums.SqlObjectSubType _sqlSubType = new Shared.Enums.SqlObjectSubType(); VSSItem SourceItem = ssdb.get_VSSItem(pSourceItemPath, false); //if source is a proj, recursively share the whole thing if (SourceItem.Type == (int)VSSItemType.VSSITEM_PROJECT) ShareFlags = (int)VSSFlags.VSSFLAG_RECURSYES; VSSItem DestItem = ssdb..get_VSSItem(pDestItemPath, false); //share the item DestItem.Share(SourceItem, pComment, ShareFlags); if (SourceItem.Type == (int)VSSItemType.VSSITEM_FILE) { bResult = true; } return bResult; This will works fine. My issue is that I need to find the actual link location. For example I have a Project named as Link and it contains 2 files say file1 and file2. I added a Link to my Working project (say CurrentProject). This current project have 2 files say f1 and f2. After sharing the Link project then we get the item in Current project as: $/CurrentProject/File1 $/CurrentProject/File2 $/CurrentProject/F1 $/CurrentProject/F2 Here File1 and File2 are link files. I need to get its parent (Actual) location i.e. $/Link/file1 and $/Link/File2 Is there any way to find Link files location using SourceSafeTypeLib?

    Read the article

  • Quickly accessing files in a 'project'

    - by bbbscarter
    Hi all. I'm looking for a way to quickly open files in my project's source tree. What I've been doing so far is adding files to the file-name-cache like so: (file-cache-add-directory-recursively (concat project-root "some/sub/folder") ".*\\.\\(py\\)$") after which I can use anything-for-files to access any file in the source tree with about 4 keystrokes. Unfortunately, this solution started falling over today. I've added another folder to the cache and emacs has started running out of memory. What's weird is that this folder contains less than 25% of files I'm adding, and yet emacs memory use goes up from 20mb to 400mb on adding just this folder. The total number of files is around 2000, so this memory use seems very high. Presumably I'm abusing the file cache. Anyway, what do other people do for this? I like this solution for its simplicity and speed; I've looked at some of the many, many project management packages for emacs and none of them really grabbed me... Thanks in advance! Simon

    Read the article

  • In Javascript, what's better than try/catch for exiting an outer scope?

    - by gruseom
    In Javascript, I sometimes want to return a value from a scope that isn't the current function. It might be a block of code within the function, or it might be an enclosing function as in the following example, which uses a local function to recursively search for something. As soon as it finds a solution, the search is done and the whole thing should just return. Unfortunately, I can't think of a simpler way to do this than by hacking try/catch for the purpose: function solve(searchSpace) { var search = function (stuff) { solution = isItSolved(stuff); if (solution) { throw solution; } else { search(narrowThisWay(stuff)); search(narrowThatWay(stuff)); }; }; try { return search(searchSpace); } catch (solution) { return solution; }; }; I realize one could assign the solution to a local variable and then check it before making another recursive call, but my question is specifically about transfer of control. Is there a better way than the above? Perhaps involving label/break?
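
    One alternative to hacking try/catch is to let each recursive call hand the solution back through its return value, so the first branch that succeeds short-circuits the rest. A minimal toy sketch of that control flow, in Python rather than JavaScript (the range-splitting search is an invented stand-in for isItSolved/narrowThisWay/narrowThatWay):

    ```python
    def solve(lo, hi, target):
        """Recursively narrow [lo, hi) until the target is found, else return None."""
        def search(lo, hi):
            if hi - lo == 1:
                # "isItSolved" for this toy case
                return lo if lo == target else None
            mid = (lo + hi) // 2
            # the first branch that returns a truthy solution short-circuits the second
            # (a falsy solution such as 0 would need an explicit sentinel instead of `or`)
            return search(lo, mid) or search(mid, hi)
        return search(lo, hi)

    print(solve(0, 16, target=11))   # -> 11
    ```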

    Read the article

  • python recursive iteration exceeding limit for tree implementation

    - by user3698027
    I'm implementing a tree dynamically in python. I have defined a class like this... class nodeobject(): def __init__(self,presentnode=None,parent=None): self.currentNode = presentnode self.parentNode = parent self.childs = [] I have a function which gets possible childs for every node from a pool def findchildren(node, childs): # No need to write the whole function on how it gets childs Now I have a recursive function that starts with the head node (no parent) and moves down the chain recursively for every node (base case being the last node having no children) def tree(dad,children): for child in children: childobject = nodeobject(child,dad) dad.childs.append(childobject) newchilds = findchildren(child, children) if len(newchilds) == 0: lastchild = nodeobject(newchilds,childobject) childobject.childs.append(lastchild) loopchild = copy.deepcopy(lastchild) while loopchild.parentNode != None: print "last child" else: tree(childobject,newchilds) The tree formation works for certain number of inputs only. Once the pool gets bigger, it results into "MAXIMUM RECURSION DEPTH EXCEEDED" I have tried setting the recursion limit with set.recursionlimit() and it doesn't work. THe program crashes. I want to implement a stack for recursion, can someone please help, I have gone no where even after trying for a long time ?? Also, is there any other way to fix this other than stack ?
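
    One way around the depth limit is to replace the call stack with an explicit work list, which is what the asker is after. A rough sketch, assuming a simplified stand-in for the nodeobject class and a find_children callable in place of the findchildren described above (both are guesses at the real shapes):

    ```python
    class NodeObject(object):
        def __init__(self, value, parent=None):
            self.currentNode = value
            self.parentNode = parent
            self.childs = []

    def build_tree(root_value, find_children):
        """Iterative replacement for the recursive tree(): a plain list used as a
        stack stands in for the call stack, so tree depth is no longer limited
        by the interpreter's recursion limit."""
        root = NodeObject(root_value)
        stack = [root]
        while stack:
            parent = stack.pop()
            for value in find_children(parent.currentNode):
                child = NodeObject(value, parent)
                parent.childs.append(child)
                stack.append(child)   # this child's own children are handled later
        return root

    # toy usage: children of n are 2n+1 and 2n+2, stopping once n reaches 15
    tree = build_tree(0, lambda n: [2 * n + 1, 2 * n + 2] if n < 15 else [])
    ```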

    Read the article

  • How to parse a string (by a "new" markup) with R ?

    - by Tal Galili
    Hi all, I want to use R to do string parsing that (I think) is like a simplistic HTML parsing. For example, let's say we have the following two variables: Seq <- "GCCTCGATAGCTCAGTTGGGAGAGCGTACGACTGAAGATCGTAAGGtCACCAGTTCGATCCTGGTTCGGGGCA" Str <- ">>>>>>>..>>>>........<<<<.>>>>>.......<<<<<.....>>>>>.......<<<<<<<<<<<<." Say that I want to parse "Seq" According to "Str", by using the legend here Seq: GCCTCGATAGCTCAGTTGGGAGAGCGTACGACTGAAGATCGTAAGGtCACCAGTTCGATCCTGGTTCGGGGCA Str: >>>>>>>..>>>>........<<<<.>>>>>.......<<<<<.....>>>>>.......<<<<<<<<<<<<. | | | | | | | || | +-----+ +--------------+ +---------------+ +---------------++-----+ | Stem 1 Stem 2 Stem 3 | | | +----------------------------------------------------------------+ Stem 0 Assume that we always have 4 stems (0 to 3), but that the length of letters before and after each of them can very. The output should be something like the following list structure: list( "Stem 0 opening" = "GCCTCGA", "before Stem 1" = "TA", "Stem 1" = list(opening = "GCTC", inside = "AGTTGGGA", closing = "GAGC" ), "between Stem 1 and 2" = "G", "Stem 2" = list(opening = "TACGA", inside = "CTGAAGA", closing = "TCGTA" ), "between Stem 2 and 3" = "AGGtC", "Stem 3" = list(opening = "ACCAG", inside = "TTCGATC", closing = "CTGGT" ), "After Stem 3" = "", "Stem 0 closing" = "TCGGGGC" ) I don't have any experience with programming a parser, and would like advices as to what strategy to use when programming something like this (and any recommended R commands to use). What I was thinking of is to first get rid of the "Stem 0", then go through the inner string with a recursive function (let's call it "seperate.stem") that each time will split the string into: 1. before stem 2. opening stem 3. inside stem 4. closing stem 5. after stem Where the "after stem" will then be recursively entered into the same function ("seperate.stem") The thing is that I am not sure how to try and do this coding without using a loop. Any advices will be most welcomed.
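
    As an aside, the structure string is essentially dot-bracket notation, so the pairing between '>' and '<' can be recovered with a simple stack before deciding where each stem begins and ends. A rough sketch of that first step, in Python rather than R, purely to illustrate the idea:

    ```python
    Str = ">>>>>>>..>>>>........<<<<.>>>>>.......<<<<<.....>>>>>.......<<<<<<<<<<<<."

    def pair_brackets(structure):
        """Map each '<' position to the '>' position it closes (innermost first)."""
        stack, pairs = [], {}
        for i, ch in enumerate(structure):
            if ch == '>':
                stack.append(i)
            elif ch == '<':
                pairs[i] = stack.pop()
        return pairs

    pairs = pair_brackets(Str)
    # Runs of positions whose partners are also consecutive belong to the same stem;
    # the unpaired '.' positions are the before/inside/between stretches, and the
    # corresponding letters of Seq can be read off with the same indices.
    print(sorted(pairs.items())[:5])
    ```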

    Read the article

  • CakePHP - recursive on specific fields in model?

    - by Paul
    Hi, I'm pretty new to CakePHP but I think I'm starting to get the hang of it. I'm trying to pull related table information recursively, but I want to specify which related models to recurse on. Let me give you an example to demonstrate my goal: I have a model "Customer", which has info like Company name, website, etc. "Customer" hasMany "Addresses", which contain info for individual contacts like Contact Name, Street, City, State, Country, etc. "Customer" also belongsTo "CustomerType", which is just has descriptive category info - a name and description, like "Distributor" or "Manufacturer". When I do a find on "Customer" I want to get associated "CustomerType" and "Address" info as sub-arrays, and this works fine just by setting up the hasMany and belongsTo associations properly. But now, here's my issue: I want to get associated State/Country info. So, instead of each "Address" array row just having "state_id", I want it to have "state" = array("id" = 20, "name" = "New York",...) etc. If I set $recursive to a higher value (e.g., 2) in the Partner model, I get what I want for the State/Country info in each "Address". BUT it also recurses on "CustomerType", and that results in the "CustomerType" field of my "Partner" object having a huge array of all Customer objects that match that type, which could be thousands long. So the point is, I DON'T want to recurse on "CustomerType", only on "Address". Is there a way I can set this up? Sorry for the long-winded question, and thanks in advance!

    Read the article

  • How to parse multiple dates from a block of text in Python (or another language)

    - by mlissner
    I have a string that has several date values in it, and I want to parse them all out. The string is natural language, so the best thing I've found so far is dateutil. Unfortunately, if a string has multiple date values in it, dateutil throws an error: >>> s = "I like peas on 2011-04-23, and I also like them on easter and my birthday, the 29th of July, 1928" >>> parse(s, fuzzy=True) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.7/dateutil/parser.py", line 697, in parse return DEFAULTPARSER.parse(timestr, **kwargs) File "/usr/lib/pymodules/python2.7/dateutil/parser.py", line 303, in parse raise ValueError, "unknown string format" ValueError: unknown string format Any thoughts on how to parse all dates from a long string? Ideally, a list would be created, but I can handle that myself if I need to. I'm using Python, but at this point, other languages are probably OK, if they get the job done. PS - I guess I could recursively split the input file in the middle and try, try again until it works, but it's a hell of a hack.
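
    One crude workaround is to try dateutil on smaller chunks of the text and keep whichever chunks parse; the comma-based chunking below is an illustrative assumption, not part of dateutil, and it visibly fumbles dates that span a comma such as "the 29th of July, 1928":

    ```python
    from dateutil.parser import parse

    def find_dates(text):
        """Very rough: attempt to parse each comma-separated chunk of the text."""
        found = []
        for chunk in text.split(","):
            try:
                found.append(parse(chunk, fuzzy=True))
            except (ValueError, OverflowError):
                pass   # this chunk contained nothing recognisable as a date
        return found

    s = ("I like peas on 2011-04-23, and I also like them on easter "
         "and my birthday, the 29th of July, 1928")
    print(find_dates(s))
    ```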

    Read the article

  • XPath Query - Select relative top level nodes

    - by John
    I'm trying to iterate over some elements in an XML document that may be nested, but as I iterate over them I may be removing some from the tree. I'm thinking the best way is to do this recursively, so I'm trying to come up with an XPath Query that will select all the top-level nodes relative to the current node. //foo[not(ancestor::foo)] works great at the document level, but I'm trying to figure out how to do this from a relative query. <foo id="1"> <foo id="2" /> <foo id="3"> <foo id="4"> <bar> <foo id="5"> <foo id="6" /> </foo> <foo id="7" /> </bar> </foo> </foo> </foo> If the current node is foo#3, I only want to select foo#4. When the current node is foo#4, I only want to select foo#5 and foo#7. I think I'm trying to select any descendant foo nodes of the current node, but without any ancestor foo nodes between the current node and the node I'm selecting. My conundrum is if we're already inside a foo node, not(ancestor::foo) doesn't help.
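
    The selection rule being described ("descendant foo elements whose nearest foo ancestor is the context node") is easy to check procedurally even when the pure-XPath expression is elusive. A small illustration against the sample document, using Python and lxml rather than raw XPath:

    ```python
    from lxml import etree

    xml = ("<foo id='1'><foo id='2'/><foo id='3'><foo id='4'><bar>"
           "<foo id='5'><foo id='6'/></foo><foo id='7'/></bar></foo></foo></foo>")
    root = etree.fromstring(xml)

    def top_level_foos(node):
        """Descendant <foo> elements whose nearest <foo> ancestor is `node`."""
        return [el for el in node.iterdescendants("foo")
                if next(el.iterancestors("foo"), None) is node]

    foo3 = root.xpath('//foo[@id="3"]')[0]
    foo4 = root.xpath('//foo[@id="4"]')[0]
    print([f.get("id") for f in top_level_foos(foo3)])   # ['4']
    print([f.get("id") for f in top_level_foos(foo4)])   # ['5', '7']
    ```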

    Read the article

  • MySql stored procedure not found in PHP

    - by kaupov
    Hello, I have a trouble with MySql stored procedure that calls itself recursively using PHP (CakePHP). Calling it I receive following error: SQL Error: 1305: FUNCTION dbname.GetAdvertCounts does not exist The procedure itself is following: delimiter // DROP PROCEDURE IF EXISTS GetAdvertCounts// CREATE PROCEDURE GetAdvertCounts(IN category_id INT) BEGIN DECLARE no_more_sub_categories, advert_count INT DEFAULT 0; DECLARE sub_cat_id INT; DECLARE curr_sub_category CURSOR FOR SELECT id FROM categories WHERE parent_id = category_id; DECLARE CONTINUE HANDLER FOR NOT FOUND SET no_more_sub_categories = 1; SELECT COUNT(*) INTO advert_count FROM adverts WHERE category_id = category_id; OPEN curr_sub_category; FETCH curr_sub_category INTO sub_cat_id; REPEAT SELECT advert_count + GetAdvertCounts(sub_cat_id) INTO advert_count; FETCH curr_sub_category INTO sub_cat_id; UNTIL no_more_sub_categories = 1 END REPEAT; CLOSE curr_sub_category; SELECT advert_count; END // delimiter ; If I remove or comment out the recursive call, the procedure is working. Any idea what I'm missing here? The categories are 2 level deep.

    Read the article

  • How does git fetches commits associated to a file ?

    - by liadan
    I'm writing a simple parser of .git/* files. I covered almost everything, like objects, refs, pack files etc. But I have a problem. Let's say I have a big 300M repository (in a pack file) and I want to find out all the commits which changed /some/deep/inside/file file. What I'm doing now is: fetching last commit finding a file in it by: fetching parent tree finding out a tree inside recursively repeat until I get into the file additionally I'm checking hashes of each subfolders on my way to file. If one of them is the same as in commit before, I assume that file was not changed (because it's parent dir didn't change) then I store the hash of a file and fetch parent commit finding file again and check if hash change occurs if yes then original commit (i.e. one before parent) was changing a file And I repeat it over and over until I reach very first commit. This solution works, but it sucks. In worse case scenario, first search can take even 3 minutes (for 300M pack). Is there any way to speed it up ? I tried to avoid putting so large objects in memory, but right now I don't see any other way. And even that, initial memory load will take forever :( Greets and thanks for any help!
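
    For comparison, the pruning idea (a commit touched the file only if the file's object id differs between that commit and its parent) can be prototyped by shelling out to the git binary. That sidesteps the asker's goal of parsing .git directly, but it makes the algorithm concrete; a rough sketch, assuming git is on the PATH and the script runs inside the repository:

    ```python
    import subprocess

    def object_id(commit, path):
        """Blob id of `path` at `commit`, or None if the path doesn't exist there."""
        try:
            return subprocess.check_output(
                ["git", "rev-parse", f"{commit}:{path}"],
                stderr=subprocess.DEVNULL, text=True).strip()
        except subprocess.CalledProcessError:
            return None

    def commits_touching(path):
        # --first-parent yields the linear chain HEAD, HEAD^, HEAD^^, ...
        chain = subprocess.check_output(
            ["git", "rev-list", "--first-parent", "HEAD"], text=True).split()
        touched = []
        for child, parent in zip(chain, chain[1:] + [None]):
            before = object_id(parent, path) if parent else None
            after = object_id(child, path)
            if after != before:
                touched.append(child)   # the file's blob changed in this commit
        return touched

    print(commits_touching("some/deep/inside/file"))
    ```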

    Read the article

  • Python list recursion type error

    - by Jacob J Callahan
    I can't seem to figure out why the following code is giving me a TypeError: 'type' object is not iterable pastebin: http://pastebin.com/VFZYY4v0 def genList(self): #recursively generates a sorted list of child node values numList = [] if self.leftChild != 'none': numList.extend(self.leftChild.genList()) #error numList.extend(list((self.Value,))) if self.rightChild != 'none': numList.extend(self.rightChild.genList()) #error return numList code that adds child nodes (works correctly) def addChild(self, child): #add a child node. working if child.Value < self.Value: if self.leftChild == 'none': self.leftChild = child child.parent = self else: self.leftChild.addChild(child) elif child.Value > self.Value: if self.rightChild == 'none': self.rightChild = child child.parent = self else: self.rightChild.addChild(child) Any help would be appreciated. Full interpreter session: >>> import BinTreeNode as BTN >>> node1 = BTN.BinaryTreeNode(5) >>> node2 = BTN.BinaryTreeNode(2) >>> node3 = BTN.BinaryTreeNode(12) >>> node3 = BTN.BinaryTreeNode(16) >>> node4 = BTN.BinaryTreeNode(4) >>> node5 = BTN.BinaryTreeNode(13) >>> node1.addChild(node2) >>> node1.addChild(node3) >>> node1.addChild(node4) >>> node1.addChild(node5) >>> node4.genList() <class 'list'> >>> node1.genList() Traceback (most recent call last): File "<interactive input>", line 1, in <module> File "C:...\python\BinTreeNode.py", line 47, in genList numList.extend(self.leftChild.genList()) #error File "C:...\python\BinTreeNode.py", line 52, in genList TypeError: 'type' object is not iterable
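
    The snippet as pasted looks consistent, so one guess is that somewhere in the real BinTreeNode.py a bare list (the type object) is returned or extended in place of numList; that would explain both the <class 'list'> printed for node4 and the "'type' object is not iterable" error. For reference, a cleaned-up sketch of the same idea using None instead of the 'none' string sentinel (simplified, not the asker's exact class):

    ```python
    class BinaryTreeNode(object):
        def __init__(self, value):
            self.Value = value
            self.leftChild = None      # None instead of the string 'none'
            self.rightChild = None

        def addChild(self, child):
            side = 'leftChild' if child.Value < self.Value else 'rightChild'
            existing = getattr(self, side)
            if existing is None:
                setattr(self, side, child)
            else:
                existing.addChild(child)

        def genList(self):
            """Recursively build a sorted list of values; always return numList."""
            numList = []
            if self.leftChild is not None:
                numList.extend(self.leftChild.genList())
            numList.append(self.Value)
            if self.rightChild is not None:
                numList.extend(self.rightChild.genList())
            return numList

    root = BinaryTreeNode(5)
    for v in (2, 16, 4, 13):
        root.addChild(BinaryTreeNode(v))
    print(root.genList())   # [2, 4, 5, 13, 16]
    ```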

    Read the article

  • All possibilities in 2d array

    - by valli-R
    I have this array: $array = array ( array('1', '2', '3'), array('!', '@'), array('a', 'b', 'c', 'd'), ); and I want every combination of characters across the sub-arrays, for example: 1!a 1!b 1!c 1!d 1@a 1@b 1@c 1@d 2!a 2!b 2!c 2!d 2@a 2@b ... Currently I have this code: for($i = 0; $i < count($array[0]); $i++) { for($j = 0; $j < count($array[1]); $j++) { for($k = 0; $k < count($array[2]); $k++) { echo $array[0][$i].$array[1][$j].$array[2][$k].'<br/>'; } } } It works, but I think it is ugly, and when I add more arrays I have to add more nested for loops. I am pretty sure there is a way to do this recursively, but I don't know how to start. A little help would be nice! Thank you!
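
    The recursive shape (peel off the first sub-array, then combine each of its elements with every combination of the rest) is the same in any language; a sketch in Python rather than PHP, where the standard library's itertools.product also produces the result directly:

    ```python
    from itertools import product

    arrays = [['1', '2', '3'], ['!', '@'], ['a', 'b', 'c', 'd']]

    def combos(arrays):
        """Recursive cartesian product: first sub-array x combinations of the rest."""
        if not arrays:
            return ['']
        rest = combos(arrays[1:])
        return [item + tail for item in arrays[0] for tail in rest]

    print(combos(arrays))                            # 1!a, 1!b, ... 3@d
    print([''.join(p) for p in product(*arrays)])    # same result via itertools
    ```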

    Read the article

  • Anatomy of a .NET Assembly - Signature encodings

    - by Simon Cooper
    If you've just joined this series, I highly recommend you read the previous posts in this series, starting here, or at least these posts, covering the CLR metadata tables. Before we look at custom attribute encoding, we first need to have a brief look at how signatures are encoded in an assembly in general. Signature types There are several types of signatures in an assembly, all of which share a common base representation, and are all stored as binary blobs in the #Blob heap, referenced by an offset from various metadata tables. The types of signatures are: Method definition and method reference signatures. Field signatures Property signatures Method local variables. These are referenced from the StandAloneSig table, which is then referenced by method body headers. Generic type specifications. These represent a particular instantiation of a generic type. Generic method specifications. Similarly, these represent a particular instantiation of a generic method. All these signatures share the same underlying mechanism to represent a type Representing a type All metadata signatures are based around the ELEMENT_TYPE structure. This assigns a number to each 'built-in' type in the framework; for example, Uint16 is 0x07, String is 0x0e, and Object is 0x1c. Byte codes are also used to indicate SzArrays, multi-dimensional arrays, custom types, and generic type and method variables. However, these require some further information. Firstly, custom types (ie not one of the built-in types). These require you to specify the 4-byte TypeDefOrRef coded token after the CLASS (0x12) or VALUETYPE (0x11) element type. This 4-byte value is stored in a compressed format before being written out to disk (for more excruciating details, you can refer to the CLI specification). SzArrays simply have the array item type after the SZARRAY byte (0x1d). Multidimensional arrays follow the ARRAY element type with a series of compressed integers indicating the number of dimensions, and the size and lower bound of each dimension. Generic variables are simply followed by the index of the generic variable they refer to. There are other additions as well, for example, a specific byte value indicates a method parameter passed by reference (BYREF), and other values indicating custom modifiers. Some examples... To demonstrate, here's a few examples and what the resulting blobs in the #Blob heap will look like. Each name in capitals corresponds to a particular byte value in the ELEMENT_TYPE or CALLCONV structure, and coded tokens to custom types are represented by the type name in curly brackets. A simple field: int intField; FIELD I4 A field of an array of a generic type parameter (assuming T is the first generic parameter of the containing type): T[] genArrayField FIELD SZARRAY VAR 0 An instance method signature (note how the number of parameters does not include the return type): instance string MyMethod(MyType, int&, bool[][]); HASTHIS DEFAULT 3 STRING CLASS {MyType} BYREF I4 SZARRAY SZARRAY BOOLEAN A generic type instantiation: MyGenericType<MyType, MyStruct> GENERICINST CLASS {MyGenericType} 2 CLASS {MyType} VALUETYPE {MyStruct} For more complicated examples, in the following C# type declaration: GenericType<T> : GenericBaseType<object[], T, GenericType<T>> { ... 
} the Extends field of the TypeDef for GenericType will point to a TypeSpec with the following blob: GENERICINST CLASS {GenericBaseType} 3 SZARRAY OBJECT VAR 0 GENERICINST CLASS {GenericType} 1 VAR 0 And a static generic method signature (generic parameters on types are referenced using VAR, generic parameters on methods using MVAR): TResult[] GenericMethod<TInput, TResult>( TInput, System.Converter<TInput, TOutput>); GENERIC 2 2 SZARRAY MVAR 1 MVAR 0 GENERICINST CLASS {System.Converter} 2 MVAR 0 MVAR 1 As you can see, complicated signatures are recursively built up out of quite simple building blocks to represent all the possible variations in a .NET assembly. Now we've looked at the basics of normal method signatures, in my next post I'll look at custom attribute application signatures, and how they are different to normal signatures.
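
    The "compressed format" mentioned for the TypeDefOrRef token, like the other integers in signature blobs, is the ECMA-335 compressed unsigned integer: one, two, or four bytes depending on the leading bits of the first byte. A rough decoder sketch based on the CLI specification the post points to (worth double-checking against ECMA-335 II.23.2), in Python for brevity:

    ```python
    def read_compressed_uint(blob, offset=0):
        """Decode an ECMA-335 compressed unsigned integer from a signature blob.

        Returns (value, new_offset). One-byte values have a 0 top bit, two-byte
        values start with binary 10, and four-byte values start with binary 110.
        """
        b0 = blob[offset]
        if b0 & 0x80 == 0x00:                        # 0xxxxxxx -> 1 byte
            return b0, offset + 1
        if b0 & 0xC0 == 0x80:                        # 10xxxxxx -> 2 bytes
            return ((b0 & 0x3F) << 8) | blob[offset + 1], offset + 2
        if b0 & 0xE0 == 0xC0:                        # 110xxxxx -> 4 bytes
            value = (((b0 & 0x1F) << 24) | (blob[offset + 1] << 16)
                     | (blob[offset + 2] << 8) | blob[offset + 3])
            return value, offset + 4
        raise ValueError("not a valid compressed integer")

    print(read_compressed_uint(bytes([0x03])))                      # (3, 1)
    print(read_compressed_uint(bytes([0x80, 0x80])))                # (128, 2)
    print(read_compressed_uint(bytes([0xC0, 0x00, 0x40, 0x00])))    # (16384, 4)
    ```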

    Read the article

  • How to give my user permission to add/edit files on local apache server? [duplicate]

    - by Logan
    Possible Duplicate: How to make Apache run as current user I'm setting up my local test server again, and I seem to have forgotten how to successfully set up the LAMP server. I have installed LAMP server via tasksel command and I have configured the /var/www directory according to a guide I've found: After the lamp server installation you will need write permissions to the /var/www directory. Follow these steps to configure permissions. Add your user to the www-data group sudo usermod -a -G www-data <your user name> now add the /var/www folder to the www-data group sudo chgrp -R www-data /var/www now give write permissions to the www-data group sudo chmod -R g+w /var/www So logan user is now part of www-data group and the file/folder permissions look like the output below: logan@computer:/var/www$ ls -lart total 172 -rw-r--r-- 1 www-data www-data 1997 Oct 23 2010 wp-links-opml.php -rw-r--r-- 1 www-data www-data 3177 Nov 1 2010 wp-config-sample.php -rw-r--r-- 1 www-data www-data 3700 Jan 8 2012 wp-trackback.php -rw-r--r-- 1 www-data www-data 271 Jan 8 2012 wp-blog-header.php -rw-r--r-- 1 www-data www-data 395 Jan 8 2012 index.php -rw-r--r-- 1 www-data www-data 3522 Apr 10 2012 wp-comments-post.php -rw-r--r-- 1 www-data www-data 19929 May 6 2012 license.txt -rw-r--r-- 1 www-data www-data 18219 Sep 11 08:27 wp-signup.php -rw-r--r-- 1 www-data www-data 2719 Sep 11 16:11 xmlrpc.php -rw-r--r-- 1 www-data www-data 2718 Sep 23 12:57 wp-cron.php -rw-r--r-- 1 www-data www-data 7723 Sep 25 01:26 wp-mail.php -rw-r--r-- 1 www-data www-data 2408 Oct 26 15:40 wp-load.php -rw-r--r-- 1 www-data www-data 4663 Nov 17 10:11 wp-activate.php -rw-r--r-- 1 www-data www-data 9899 Nov 22 04:52 wp-settings.php -rw-r--r-- 1 www-data www-data 9175 Nov 29 19:57 readme.html -rw-r--r-- 1 www-data www-data 29310 Nov 30 08:40 wp-login.php drwxr-xr-x 14 root root 4096 Dec 24 17:41 .. drwx------ 9 www-data www-data 4096 Dec 26 16:11 wp-admin drwx------ 9 www-data www-data 4096 Dec 26 16:11 wp-includes -rw-rw-rw- 1 www-data www-data 3448 Dec 26 16:14 wp-config.php drwxrwxr-x 5 www-data www-data 4096 Dec 26 16:14 . drwx------ 6 www-data www-data 4096 Dec 26 16:19 wp-content Things work perfectly at http://localhost, I can view the website fine. The thing with this is that I will be working on a plugin for wordpress and I don't want to deal with separate owners under www directory to create or modify files/folders. When I give my user the ownership of /var/www recursively as logan:www-data I can create/modify files but cannot view the http://localhost. I get a Forbidden error. I'm assuming that this is because of the Apache's configuration? Which one is healthier or easier considering this is just a local test website, configuring apache to give user logan to view website and chmod /var/www logan:logan so that I can create files etc. without any sudo commands; or is it easier to configure user groups to get www-data user to act like my logan user? (Idk how that's possible, maybe putting www-data user under logan group?) Please shed some light to this subject. All I want is to be able to create/modifiy files under my user, and yet to be able to successfully view http://localhost I appreciate the help!

    Read the article

  • PowerShell Script To Find Where SharePoint 2010 Features Are Activated

    - by Brian Jackett
    The script on this post will find where features are activated within your SharePoint 2010 farm.   Problem    Over the past few months I’ve gotten literally dozens of emails, blog comments, or personal requests from people asking “how do I find where a SharePoint feature has been activated?”  I wrote a script to find which features are installed on your farm almost 3 years ago.  There is also the Get-SPFeature PowerShell commandlet in SharePoint 2010.  The problem is that these only tell you if a feature is installed not where they have been activated.  This is especially important to know if you have multiple web applications, site collections, and /or sites.   Solution    The default call (no parameters) for Get-SPFeature will return all features in the farm.  Many of the parameter sets accept filters for specific scopes such as web application, site collection, and site.  If those are supplied then only the enabled / activated features are returned for that filtered scope.  Taking the concept of recursively traversing a SharePoint farm and merging that with calls to Get-SPFeature at all levels of the farm you can find out what features are activated at that level.  Store the results into a variable and you end up with all features that are activated at every level.    Below is the script I came up with (slight edits for posting on blog).  With no parameters the function lists all features activated at all scopes.  If you provide an Identity parameter you will find where a specific feature is activated.  Note that the display name for a feature you see in the SharePoint UI rarely matches the “internal” display name.  I would recommend using the feature id instead.  You can download a full copy of the script by clicking on the link below.    Note: This script is not optimized for medium to large farms.  In my testing it took 1-3 minutes to recurse through my demo environment.  This script is provided as-is with no warranty.  Run this in a smaller dev / test environment first.   001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 function Get-SPFeatureActivated { # see full script for help info, removed for formatting [CmdletBinding()] param(   [Parameter(position = 1, valueFromPipeline=$true)]   [Microsoft.SharePoint.PowerShell.SPFeatureDefinitionPipeBind]   $Identity )#end param   Begin   {     # declare empty array to hold results. Will add custom member `     # for Url to show where activated at on objects returned from Get-SPFeature.     
$results = @()         $params = @{}   }   Process   {     if([string]::IsNullOrEmpty($Identity) -eq $false)     {       $params = @{Identity = $Identity             ErrorAction = "SilentlyContinue"       }     }       # check farm features     $results += (Get-SPFeature -Farm -Limit All @params |              % {Add-Member -InputObject $_ -MemberType noteproperty `                 -Name Url -Value ([string]::Empty) -PassThru} |              Select-Object -Property Scope, DisplayName, Id, Url)     # check web application features     foreach($webApp in (Get-SPWebApplication))     {       $results += (Get-SPFeature -WebApplication $webApp -Limit All @params |                % {Add-Member -InputObject $_ -MemberType noteproperty `                   -Name Url -Value $webApp.Url -PassThru} |                Select-Object -Property Scope, DisplayName, Id, Url)       # check site collection features in current web app       foreach($site in ($webApp.Sites))       {         $results += (Get-SPFeature -Site $site -Limit All @params |                  % {Add-Member -InputObject $_ -MemberType noteproperty `                     -Name Url -Value $site.Url -PassThru} |                  Select-Object -Property Scope, DisplayName, Id, Url)                          $site.Dispose()         # check site features in current site collection         foreach($web in ($site.AllWebs))         {           $results += (Get-SPFeature -Web $web -Limit All @params |                    % {Add-Member -InputObject $_ -MemberType noteproperty `                       -Name Url -Value $web.Url -PassThru} |                    Select-Object -Property Scope, DisplayName, Id, Url)           $web.Dispose()         }       }     }   }   End   {     $results   } } #end Get-SPFeatureActivated   Snippet of output from Get-SPFeatureActivated   Conclusion    This script has been requested for a long time and I’m glad to finally getting a working “clean” version.  If you find any bugs or issues with the script please let me know.  I’ll be posting this to the TechNet Script Center after some internal review.  Enjoy the script and I hope it helps with your admin / developer needs.         -Frog Out

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.

    Read the article

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable: # module xfactory.py import x class XFactory: _registry = {} def get_x(self, arg1, arg2, use_cache = True): if use_cache: hash_id = hash((arg1, arg2)) if hash_id in _registry: return _registry[hash_id] obj = x.X(arg1, arg2) _registry[hash_id] = obj return obj # module x.py class X: # ... Is it a good pattern? (I know it's not the actual Factory Pattern.) Is there anything I should change? Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in the _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving pickle files the filenames that contain hash_id). Except now the validity of the cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created these objects. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies, and reloads it while the program is running. So far I ignored this danger since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage. What can I do? I suppose I could make the hash_id more robust by calculating hash of a tuple that contains arguments arg1 and arg2, as well as the filename and last modified date for x.py and every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to the _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc.). In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation. Thus, I figured I might as well give up some safety, and only invalidate the cache only when there is an obvious mismatch. This means that class X would have a class-level cache validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry. Since developers may forget to update the validation identifier or not realize that they invalidated existing cache, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation will include the traits as well. I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching.
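
    A small aside on the snippet: inside get_x the bare name _registry would raise a NameError, because class attributes are not in scope inside methods; it has to be reached through the class or the instance. A sketch of the in-memory part with that fixed, using the key tuple itself rather than its hash (which also avoids aliasing two argument pairs whose hashes collide); the CODE_VERSION salt is an illustrative assumption, not a recommendation:

    ```python
    class X(object):
        """Stand-in for the real, expensive-to-build immutable class."""
        def __init__(self, arg1, arg2):
            self.arg1, self.arg2 = arg1, arg2

    class XFactory(object):
        CODE_VERSION = "2024-01-rev1"   # bumped by hand when X's logic changes (assumption)
        _registry = {}

        def get_x(self, arg1, arg2, use_cache=True):
            if not use_cache:
                return X(arg1, arg2)
            key = (self.CODE_VERSION, arg1, arg2)   # hashable arguments assumed
            registry = type(self)._registry         # class attribute, reached explicitly
            if key not in registry:
                registry[key] = X(arg1, arg2)
            return registry[key]

    f = XFactory()
    assert f.get_x(1, 2) is f.get_x(1, 2)   # cached: the same object comes back
    ```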

    Read the article

  • Investigating Strategies For Functional Decomposition

    - by Liam McLennan
    Introducing Functional Decomposition Before I begin I must apologise. I think I am using the term ‘functional decomposition’ loosely, and probably incorrectly. For the purpose of this article I use functional decomposition to mean the recursive splitting of a large problem into increasingly smaller ones, so that the one large problem may be solved by solving a set of smaller problems. The justification for functional decomposition is that the decomposed problem is more easily solved. As software developers we recognise that the smaller pieces are more easily tested, since they do less and are more cohesive. Functional decomposition is important to all scientific pursuits. Once we understand natural selection we can start to look for humanities ancestral species, once we understand the big bang we can trace our expanding universe back to its origin. Isaac Newton acknowledged the compositional nature of his scientific achievements: If I have seen further than others, it is by standing upon the shoulders of giants   The Two Strategies For Functional Decomposition of Computer Programs Private Methods When I was working on my undergraduate degree I was taught to functionally decompose problems by using private methods. Consider the problem of painting a house. The obvious solution is to solve the problem as a single unit: public void PaintAHouse() { // all the things required to paint a house ... } We decompose the problem by breaking it into parts: public void PaintAHouse() { PaintUndercoat(); PaintTopcoat(); } private void PaintUndercoat() { // everything required to paint the undercoat } private void PaintTopcoat() { // everything required to paint the topcoat } The problem can be recursively decomposed until a sufficiently granular level of detail is reached: public void PaintAHouse() { PaintUndercoat(); PaintTopcoat(); } private void PaintUndercoat() { prepareSurface(); fetchUndercoat(); paintUndercoat(); } private void PaintTopcoat() { fetchPaint(); paintTopcoat(); } According to Wikipedia, at least one computer programmer has referred to this process as “the art of subroutining”. The practical issues that I have encountered when using private methods for decomposition are: To preserve the top level API all of the steps must be private. This means that they can’t easily be tested. The private methods often have little cohesion except that they form part of the same solution. Decomposing to Classes The alternative is to decompose large problems into multiple classes, effectively using a class instead of each private method. The API delegates to related classes, so the API is not polluted by the sub-steps of the problem, and the steps can be easily tested because they are each in their own highly cohesive class. Additionally, I think that this technique facilitates better adherence to the Single Responsibility Principle, since each class can be decomposed until it has precisely one responsibility. Revisiting my previous example using class composition: public class HousePainter { private undercoatPainter = new UndercoatPainter(); private topcoatPainter = new TopcoatPainter(); public void PaintAHouse() { undercoatPainter.Paint(); topcoatPainter.Paint(); } } Summary When decomposing a problem there is more than one way to represent the sub-problems. 
Using private methods keeps the logic in one place and prevents a proliferation of classes (thereby following the four rules of simple design), but decomposing to classes is more easily testable and more compatible with the Single Responsibility Principle.

    Read the article

  • Dynamically Changing the Display Names of Menus and Popups

    - by Geertjan
    Very interesting thing and handy to know when needed is the fact that "menuText" and "popupText" (from org.openide.awt.ActionRegistration) can be changed dynamically, via "putValue" as shown below for "popupText". The Action class, in this case, needs to be eager, hence you won't receive the object of interest via the constructor, but you can easily use the global Lookup for that purpose instead, as also shown below. import java.awt.event.ActionEvent; import java.text.DateFormat; import java.text.SimpleDateFormat; import javax.swing.AbstractAction; import org.netbeans.api.project.Project; import org.netbeans.api.project.ProjectInformation; import org.netbeans.api.project.ProjectUtils; import org.openide.awt.ActionID; import org.openide.awt.ActionReference; import org.openide.awt.ActionRegistration; import org.openide.util.Utilities; @ActionID( category = "Project", id = "org.ptt.DemoProjectAction") @ActionRegistration( lazy = false, displayName = "NOT-USED") @ActionReference(path = "Projects/Actions", position = 0) public final class DemoProjectAction extends AbstractAction{ private final ProjectInformation context; public DemoProjectAction() { putValue("popupText", "Select Me To See Current Time!"); context = ProjectUtils.getInformation( Utilities.actionsGlobalContext().lookup(Project.class)); } @Override public void actionPerformed(ActionEvent e) { refresh(); } protected void refresh() { DateFormat formatter = new SimpleDateFormat("HH:mm:ss"); String formatted = formatter.format(System.currentTimeMillis()); putValue("popupText", "Time: " + formatted + " (" + context.getDisplayName() +")"); } } Now, let's do something semi useful and display, in the popup, which is available when you right-click a project, the time since the last change was made anywhere in the project, i.e., we can listen recursively to any changes done within a project and then update the popup with the newly acquired information, dynamically: import java.awt.event.ActionEvent; import java.text.DateFormat; import java.text.SimpleDateFormat; import javax.swing.AbstractAction; import org.netbeans.api.project.Project; import org.netbeans.api.project.ProjectUtils; import org.openide.awt.ActionID; import org.openide.awt.ActionReference; import org.openide.awt.ActionRegistration; import org.openide.filesystems.FileAttributeEvent; import org.openide.filesystems.FileChangeListener; import org.openide.filesystems.FileEvent; import org.openide.filesystems.FileRenameEvent; import org.openide.util.Utilities; @ActionID( category = "Project", id = "org.ptt.TrackProjectTimerAction") @ActionRegistration( lazy = false, displayName = "NOT-USED") @ActionReference( path = "Projects/Actions", position = 0) public final class TrackProjectTimerAction extends AbstractAction implements FileChangeListener { private final Project context; private Long startTime; private Long changedTime; private DateFormat formatter; public TrackProjectTimerAction() { putValue("popupText", "Enable project time tracker"); this.formatter = new SimpleDateFormat("HH:mm:ss"); context = Utilities.actionsGlobalContext().lookup(Project.class); context.getProjectDirectory().addRecursiveListener(this); } @Override public void actionPerformed(ActionEvent e) { startTimer(); } protected void startTimer() { startTime = System.currentTimeMillis(); String formattedStartTime = formatter.format(startTime); putValue("popupText", "Timer started: " + formattedStartTime + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")"); } @Override public void 
fileChanged(FileEvent fe) { changedTime = System.currentTimeMillis(); formatter = new SimpleDateFormat("mm:ss"); String formattedLapse = formatter.format(changedTime - startTime); putValue("popupText", "Time since last change: " + formattedLapse + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")"); startTime = changedTime; } @Override public void fileFolderCreated(FileEvent fe) {} @Override public void fileDataCreated(FileEvent fe) {} @Override public void fileDeleted(FileEvent fe) {} @Override public void fileRenamed(FileRenameEvent fre) {} @Override public void fileAttributeChanged(FileAttributeEvent fae) {} }

    Read the article

  • Project Time Tracker

    - by Geertjan
    Based on yesterday's blog entry, let's do something semi useful and display, in the project popup, which is available when you right-click a project in the Projects window, the time since the last change was made anywhere in the project, i.e., we can listen recursively to any changes done within a project and then update the popup with the newly acquired information, dynamically: import java.awt.event.ActionEvent; import java.text.DateFormat; import java.text.SimpleDateFormat; import java.util.ArrayList; import java.util.Collection; import java.util.List; import javax.swing.AbstractAction; import org.netbeans.api.project.Project; import org.netbeans.api.project.ProjectUtils; import org.openide.awt.ActionID; import org.openide.awt.ActionReference; import org.openide.awt.ActionRegistration; import org.openide.awt.StatusDisplayer; import org.openide.filesystems.FileAttributeEvent; import org.openide.filesystems.FileChangeListener; import org.openide.filesystems.FileEvent; import org.openide.filesystems.FileRenameEvent; import org.openide.util.Lookup; import org.openide.util.LookupEvent; import org.openide.util.LookupListener; import org.openide.util.Utilities; import org.openide.util.WeakListeners; @ActionID( category = "Demo", id = "org.ptt.TrackProjectSelectionAction") @ActionRegistration( lazy = false, displayName = "NOT-USED") @ActionReference( path = "Projects/Actions", position = 0) public final class TrackProjectSelectionAction extends AbstractAction implements LookupListener, FileChangeListener { private Lookup.Result<Project> projects; private Project context; private Long startTime; private Long changedTime; private DateFormat formatter; private List<Project> timedProjects; public TrackProjectSelectionAction() { putValue("popupText", "Timer"); formatter = new SimpleDateFormat("HH:mm:ss"); timedProjects = new ArrayList<Project>(); projects = Utilities.actionsGlobalContext().lookupResult(Project.class); projects.addLookupListener( WeakListeners.create(LookupListener.class, this, projects)); resultChanged(new LookupEvent(projects)); } @Override public void resultChanged(LookupEvent le) { Collection<? 
extends Project> allProjects = projects.allInstances(); if (allProjects.size() == 1) { Project currentProject = allProjects.iterator().next(); if (!timedProjects.contains(currentProject)) { String currentProjectName = ProjectUtils.getInformation(currentProject).getDisplayName(); putValue("popupText", "Start Timer for Project: " + currentProjectName); StatusDisplayer.getDefault().setStatusText( "Current Project: " + currentProjectName); timedProjects.add(currentProject); context = currentProject; } } } @Override public void actionPerformed(ActionEvent e) { refresh(); } protected void refresh() { startTime = System.currentTimeMillis(); String formattedStartTime = formatter.format(startTime); putValue("popupText", "Timer started: " + formattedStartTime + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")"); } @Override public void fileChanged(FileEvent fe) { changedTime = System.currentTimeMillis(); formatter = new SimpleDateFormat("mm:ss"); String formattedLapse = formatter.format(changedTime - startTime); putValue("popupText", "Time since last change: " + formattedLapse + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")"); startTime = changedTime; } @Override public void fileFolderCreated(FileEvent fe) {} @Override public void fileDataCreated(FileEvent fe) {} @Override public void fileDeleted(FileEvent fe) {} @Override public void fileRenamed(FileRenameEvent fre) {} @Override public void fileAttributeChanged(FileAttributeEvent fae) {} } Some more work needs to be done to complete the above, i.e., for each project you somehow need to maintain the start time and last change and redisplay that whenever the user right-clicks the project.

    Read the article

  • Data loss through permissions change?

    - by charliehorse55
    I seem to have deleted some files on my media drive, simply by changing the permissions. The Story I have many operating systems installed on my computer, and constantly switch between them. I bought a 1TB HD and formatted it as HFS+ (not journaled). It worked well between OSX and all of my linux installations while having much better metadata support than NTFS. I never synced the UIDs for my operating systems so the permissions were always doing funny things. Yesterday I tried to fix the permissions by first changing the UIDs of the other operating systems to match OSX, and then changing the file ownership of all files on the drive to match OSX. About 50% of the files on the drive were originally owned by OSX, the other half were owned by the various linux installations. I started to try and change the file permissions for the folders, and that's when it went south. The Commands These commands were run recursively on the one section of the drive. sudo chflags nouchg sudo chflags -N sudo chown myusername sudo chmod 666 sudo chgrp staff The Bad Sometime during the execution of these commands, all of the files belonging to OSX were deleted. If a folder had linux based files it would remain intact but any folder containing exclusively OSX files was erased. If a folder containing linux files also contained a subfolder with only OSX files, the sub folder would remain but is inaccesible and displays a file size of 0 bytes. Luckily these commands were only run on the videos folder, I also have a music folder with the same issue but I did not execute any of these commands on it. Effectively I have examples of the file permissions for all 3 states - the linux files before and after, and the OSX files before. OSX File Before -rw-r--r--@ 1 charliehorse 1000 3634241 15 Nov 2008 /path/to/file com.apple.FinderInfo 32 Linux File before: -rw-r--r--@ 1 charliehorse 1000 5321776 20 Sep 2002 /path/to/file/ com.apple.FinderInfo 32 Linux File After (Read only): (Different file, but I believe the same permissions originally) -rw-rw-rw-@ 1 charliehorse staff 366982610 17 Jun 2008 /path/to/file com.apple.FinderInfo 32 These files still exist so if there are any other commands to run on them to determine what has happened here, I can do that. EDIT Running ls on one of the "empty" deleted OSX folders yields this: ls: .: Permission denied ls: ..: Permission denied ls: subdirA: Permission denied ls: subdirB: Permission denied ls: subdirC: Permission denied ls: subdirD: Permission denied I believe my files might still be there, but the permissions are screwed.

    Read the article

  • Apache Simple Configuration Issue: Setting up per-user directory permission denied problem

    - by Huckphin
    Hello. I am just getting Apache 2.2 running on Fedora 13 Beta 64-bit. I am running into issues setting my per-user directory. The goal is to make localhost/~user map to /home/~user/public_html. I think that I have the permissions right because I have 755 to /home/~user, and I have 755 to /home/~user/public_html/ and I have 777 for all contents inside of /home/~user/public_html/ recursively set. My mod_userdir configuration looks like this: <IfModule mod_userdir.c> # # UserDir is disabled by default since it can confirm the presence # of a username on the system (depending on home directory # permissions). # UserDir disabled root UserDir enabled huckphin # # To enable requests to /~user/ to serve the user's public_html # directory, remove the "UserDir disabled" line above, and uncomment # the following line instead: # UserDir public_html The error that I am seeing in the error log is this: [Sat May 15 09:54:29 2010] [error] [client 127.0.0.1] (13)Permission denied: access to /~huckphin/index.html denied When I login as the apache user, I know that /~huckphin does not exist, and this is not what I want. I want it to be accessing ~huckphin, not /~huckphin. What do I need to change on my configuration for this to work? [Added after comments] Hi Andol, thank you for your suggestions. So, first off, you said that you assume that I have the userdir module enabled, but I am not sure what that means exactly. That could be part of the problem. I do have the Module loaded, using the LoadModule directive. I have this: LoadModule userdir_module modules/mod_userdir.so I also did a find on where mod_userdir is, and I found it located here: [huckphin@crhyner-workbox]/% find / . -name '*mod_userdir.so*' 2> /dev/null /usr/lib64/lighttpd/mod_userdir.so /usr/lib64/httpd/modules/mod_userdir.so Is there something else I need to enable? Also, my directory configuration was mentioned. I have uncommented the default configuration given. I have not looked into what all of the configurations mean, and this could probably explain the issue. Here is the Directory that I have for the user directories: <Directory "/home/*/public_html"> AllowOverride FileInfo AuthConfig Limit Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec <Limit GET POST OPTIONS> Order allow,deny Allow from all </Limit> <LimitExcept GET POST OPTIONS> Order deny,allow Deny from all </LimitExcept> </Directory>

    Read the article

  • PowerShell remove force

    - by mausch
    Trying to delete a directory recursively with rm -Force -Recurse somedirectory, I get several "The directory is not empty" errors. If I retry the same command, it succeeds. Example: PS I:\Documents and Settings\m\My Documents\prg\net> rm -Force -Recurse .\FileHelpers Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\FileHelpers.Tests\Data\RunTime\_svn: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (_svn:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\FileHelpers.Tests\Data\RunTime: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (RunTime:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\FileHelpers.Tests\Data: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (Data:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\FileHelpers.Tests: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (FileHelpers.Tests:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\Libs\nunit\_svn: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (_svn:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\Libs\nunit: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (nunit:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\Libs: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (Libs:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (I:\Documents an...net\FileHelpers:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand PS I:\Documents and Settings\m\My Documents\prg\net> rm -Force -Recurse .\FileHelpers PS I:\Documents and Settings\m\My Documents\prg\net> Of course, this doesn't happen always. 
Also, it doesn't happen only with _svn directories, and I don't have TortoiseSVN cache or anything like that so nothing is blocking the directory. Any ideas?

    Read the article

  • chrooting user causes "connection closed" message when using sftp

    - by George Reith
    First off I am a linux newbie so please don't assume much knowledge. I am using CentOS 5.8 (final) and using OpenSSH version 5.8p1. I have made a user playwithbits and I am attempting to chroot them to the directory home/nginx/domains/playwithbits/public I am using the following match statement in my sshd_config file: Match group web-root-locked ChrootDirectory /home/nginx/domains/%u/public X11Forwarding no AllowTcpForwarding no ForceCommand /usr/libexec/openssh/sftp-server # id playwithbits returns: uid=504(playwithbits) gid=504(playwithbits) groups=504(playwithbits),507(web-root-locked) I have changed the user's home directory to: home/nginx/domains/playwithbits/public Now when I attempt to sftp in with this user I instantly get the message: connection closed Does anyone know what I am doing wrong? Edit: Following advice from @Dennis Williamson I have connected in debug mode (I think... correct me if I'm wrong). I have made a bit of progress by using chmod to set permissions recursively of all files in the directly to 700. Now I get the following messages when I attempt to log on (still connection refused): Connection from [My ip address] port 38737 debug1: Client protocol version 2.0; client software version OpenSSH_5.6 debug1: match: OpenSSH_5.6 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8 debug1: permanently_set_uid: 74/74 debug1: list_hostkey_types: ssh-rsa,ssh-dss debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: client->server aes128-ctr hmac-md5 none debug1: kex: server->client aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST received debug1: SSH2_MSG_KEX_DH_GEX_GROUP sent debug1: expecting SSH2_MSG_KEX_DH_GEX_INIT debug1: SSH2_MSG_KEX_DH_GEX_REPLY sent debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: KEX done debug1: userauth-request for user playwithbits service ssh-connection method none debug1: attempt 0 failures 0 debug1: user playwithbits matched group list web-root-locked at line 91 debug1: PAM: initializing for "playwithbits" debug1: PAM: setting PAM_RHOST to [My host info] debug1: PAM: setting PAM_TTY to "ssh" debug1: userauth-request for user playwithbits service ssh-connection method password debug1: attempt 1 failures 0 debug1: PAM: password authentication accepted for playwithbits debug1: do_pam_account: called Accepted password for playwithbits from [My ip address] port 38737 ssh2 debug1: monitor_child_preauth: playwithbits has been authenticated by privileged process debug1: SELinux support disabled debug1: PAM: establishing credentials User child is on pid 3942 debug1: PAM: establishing credentials Changed root directory to "/home/nginx/domains/playwithbits/public" debug1: permanently_set_uid: 504/504 debug1: Entering interactive session for SSH2. 
debug1: server_init_dispatch_20 debug1: server_input_channel_open: ctype session rchan 0 win 2097152 max 32768 debug1: input_session_request debug1: channel 0: new [server-session] debug1: session_new: session 0 debug1: session_open: channel 0 debug1: session_open: session 0: link with channel 0 debug1: server_input_channel_open: confirm session debug1: server_input_global_request: rtype [email protected] want_reply 0 debug1: server_input_channel_req: channel 0 request env reply 0 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req env debug1: server_input_channel_req: channel 0 request subsystem reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req subsystem subsystem request for sftp by user playwithbits debug1: subsystem: cannot stat /usr/libexec/openssh/sftp-server: Permission denied debug1: subsystem: exec() /usr/libexec/openssh/sftp-server debug1: Forced command (config) '/usr/libexec/openssh/sftp-server' debug1: session_new: session 0 debug1: Received SIGCHLD. debug1: session_by_pid: pid 3943 debug1: session_exit_message: session 0 channel 0 pid 3943 debug1: session_exit_message: release channel 0 debug1: session_by_channel: session 0 channel 0 debug1: session_close_by_channel: channel 0 child 0 debug1: session_close: session 0 pid 0 debug1: channel 0: free: server-session, nchannels 1 Received disconnect from [My ip address]: 11: disconnected by user debug1: do_cleanup debug1: do_cleanup debug1: PAM: cleanup debug1: PAM: closing session debug1: PAM: deleting credentials

    Read the article
