Search Results

Search found 1759 results on 71 pages for 'naming conventions'.

  • Is it legal to take sealed .NET framework class source and extend it?

    - by Giedrius
    To be short, I'm giving a very specific example, but I'm interested in the general situation. There is an FtpWebRequest class in the .NET framework and it is missing some of the newer FTP operations, like MFCT. That is OK in the sense that this operation is still in draft mode, but it is not OK in the sense that FtpWebRequest is sealed and there's no other way (at least I don't see one) to extend it with this new operation. The easiest way would be to take the FtpWebRequest class source from the .NET reference sources and extend it; that way all the consistency in naming, implementation, etc. would be kept. The question is: how legal is that? I won't sell this class as a product, and I can publish my changes on the web - nothing to hide here. If it is not legal, can I take this class's source from Mono and include it in a native .NET project? Have you had a similar case, and how did you solve it?

    Update: since extension methods keep being offered as an answer, I'm pasting the source from the .NET framework which should show that extension methods are not the solution. There is a property Method, where you can pass an FTP command:

        public override string Method {
            get {
                return m_MethodInfo.Method;
            }
            set {
                if (String.IsNullOrEmpty(value)) {
                    throw new ArgumentException(SR.GetString(SR.net_ftp_invalid_method_name), "value");
                }
                if (InUse) {
                    throw new InvalidOperationException(SR.GetString(SR.net_reqsubmitted));
                }
                try {
                    m_MethodInfo = FtpMethodInfo.GetMethodInfo(value);
                }
                catch (ArgumentException) {
                    throw new ArgumentException(SR.GetString(SR.net_ftp_unsupported_method), "value");
                }
            }
        }

    As you can see, there is an FtpMethodInfo.GetMethodInfo(value) call in the setter, which basically validates the value against an internal static array of enum values.

    Update 2: I checked the Mono implementation; it is not an exact replica of the native code, and it does not implement some of the things.

  • Setting UITabBarItem title from UINavigationController?

    - by fuzzygoat
    I have set up a UITabBarController with two tabs: one is a simple UIViewController, and the other is a UINavigationController using a second UIViewController as its rootController to set up a UITableView. My question is with regard to naming the tabs (i.e. the UITabBarItem). For the first tab (the simple UIViewController) I have added the following to the controller's -init method:

        - (id)init {
            self = [super init];
            if(self) {
                UITabBarItem *tabBarItem = [self tabBarItem];
                [tabBarItem setTitle:@"ONE"];
            }
            return self;
        }

    For the other tab I have added this to the second controller's init (the rootController):

        - (id)init {
            self = [super init];
            if(self) {
                UITabBarItem *tabBarItem = [[self navigationController] tabBarItem];
                [tabBarItem setTitle:@"TWO"];
            }
            return self;
        }

    Am I setting the second tabBarItem title in the right place? Currently it is not showing.

    EDIT: I can correctly set the UITabBarItem from within the AppDelegate when I first create the controllers, ready for adding to the UITabBarController. But I really wanted to do this in the individual controller -init methods for neatness.

        // UITabBarController
        UITabBarController *tempRoot = [[UITabBarController alloc] init];
        [self setRootController:tempRoot];
        [tempRoot release];
        NSMutableArray *tabBarControllers = [[NSMutableArray alloc] init];

        // UIViewController ONE
        MapController *mapController = [[MapController alloc] init];
        [tabBarControllers addObject:mapController];
        [mapController release];

        // UITableView TWO
        TableController *rootTableController = [[TableController alloc] init];
        UINavigationController *tempNavController = [[UINavigationController alloc] initWithRootViewController:rootTableController];
        [rootTableController release];
        [tabBarControllers addObject:tempNavController];
        [tempNavController release];

        [rootController setViewControllers:tabBarControllers];
        [tabBarControllers release];
        [window addSubview:[rootController view]];
        [window makeKeyAndVisible];

  • Pulling $_GET data and creating multidimensional array using loop

    - by Chris J
    I'm creating a checkout for customers, and the data about what's in their cart is being sent to a page (for just now) via $_GET. I want to extract that data and then populate a multidimensional array with it using a loop. Here's how I'm naming the data:

        $itemCount = $_GET['itemCount'];
        $i = 1;
        while ($i <= $itemCount) {
            ${'item_name_'.$i} = $_GET["item_name_{$i}"];
            ${'item_quantity_'.$i} = $_GET["item_quantity_{$i}"];
            ${'item_price_'.$i} = $_GET["item_price_{$i}"];
            //echo "<br />Name: " .${'item_name_'.$i}. " - Quantity: " .${'item_quantity_'.$i}. " - Price: ".${'item_price_'.$i};
            $i++;
        }

    From here I'd like to create a multidimensional array like this:

        Array
        (
            [Item_1] => Array
                (
                    [item_name] => Shoe
                    [item_quantity] => 2
                    [item_price] => 40.00
                )
            [Item_2] => Array
                (
                    [item_name] => Bag
                    [item_quantity] => 1
                    [item_price] => 60.00
                )
            [Item_3] => Array
                (
                    [item_name] => Parrot
                    [item_quantity] => 4
                    [item_price] => 90.00
                )
            ...
        )

    What I'd like to know is whether there is a way I can create this array in the existing while loop. I'm aware of being able to add data to an array like $data = [] after declaring an empty array, but the actual syntax eludes me. Maybe I'm completely off the right track and there is a better way of doing it? Thanks
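
    Since this digest mixes many languages, here is a quick sketch in Python (not the poster's PHP) of the same restructuring: flat numbered keys into nested per-item records. The params dict stands in for PHP's $_GET, and all names in it are hypothetical. The PHP equivalent would simply append an array to a result array inside the existing while loop instead of creating the numbered scalar variables.

        # Rebuild nested per-item records from flat numbered form parameters.
        params = {
            "itemCount": "3",
            "item_name_1": "Shoe",   "item_quantity_1": "2", "item_price_1": "40.00",
            "item_name_2": "Bag",    "item_quantity_2": "1", "item_price_2": "60.00",
            "item_name_3": "Parrot", "item_quantity_3": "4", "item_price_3": "90.00",
        }

        items = {}
        for i in range(1, int(params["itemCount"]) + 1):
            items[f"Item_{i}"] = {
                "item_name": params[f"item_name_{i}"],
                "item_quantity": int(params[f"item_quantity_{i}"]),
                "item_price": float(params[f"item_price_{i}"]),
            }

        print(items["Item_2"])  # {'item_name': 'Bag', 'item_quantity': 1, 'item_price': 60.0}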

  • Best solution for __autoload

    - by tpk
    As our PHP5 OO application grew (in both size and traffic), we decided to revisit the __autoload() strategy. We always name the file by the class definition it contains, so class Customer would be contained within Customer.php. We used to list the directories in which a file could potentially exist, until the right .php file was found. This is quite inefficient, because you're potentially going through a number of directories which you don't need to, and doing so on every request (thus making loads of stat() calls). Solutions that come to mind:

    - Use a naming convention that dictates the directory name (similar to PEAR). Disadvantage: doesn't scale too well, resulting in horrible class names.
    - Come up with some kind of pre-built array of the locations (Propel does this for its __autoload). Disadvantage: requires a rebuild before any deploy of new code.
    - Build the array "on the fly" and cache it (a sketch of this follows below). This seems to be the best solution, as it allows for any class names and directory structure you want, and is fully flexible in that new files just get added to the list. The concerns are: where to store it, and what about deleted/moved files. For storage we chose APC, as it doesn't have the disk I/O overhead. With regard to file deletes, it doesn't matter, as you probably don't want to require them anywhere anyway. As to moves... that's unresolved (we ignore it, as historically it didn't happen very often for us).

    Any other solutions?
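
    For the third option, here is a minimal sketch (in Python rather than PHP, as examples throughout this page are sketched in Python) of building the class-to-file map on the fly and caching it. A JSON file stands in for APC, and every name here is illustrative:

        import json
        import os

        CACHE = "/tmp/class_map.json"  # stand-in for the APC cache

        def build_class_map(root):
            """Scan for .php files and map class name (= file basename) to its path."""
            class_map = {}
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    if name.endswith(".php"):
                        class_map[name[:-4]] = os.path.join(dirpath, name)
            return class_map

        def locate(class_name, root="src"):
            if os.path.exists(CACHE):
                with open(CACHE) as f:
                    class_map = json.load(f)
            else:
                class_map = build_class_map(root)
                with open(CACHE, "w") as f:
                    json.dump(class_map, f)
            path = class_map.get(class_name)
            if path is None:  # a new file was added since the cache was built: rebuild once
                class_map = build_class_map(root)
                with open(CACHE, "w") as f:
                    json.dump(class_map, f)
                path = class_map.get(class_name)
            return path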

  • How do I pull `static final` constants from a Java class into a Clojure namespace?

    - by Joe Holloway
    I am trying to wrap a Java library with a Clojure binding. One particular class in the Java library defines a bunch of static final constants, for example:

        class Foo {
            public static final int BAR = 0;
            public static final int SOME_CONSTANT = 1;
            ...
        }

    I had a thought that I might be able to inspect the class and pull these constants into my Clojure namespace without explicitly def-ing each one. For example, instead of explicitly wiring it up like this:

        (def *foo-bar* Foo/BAR)
        (def *foo-some-constant* Foo/SOME_CONSTANT)

    I'd be able to inspect the Foo class and dynamically wire up *foo-bar* and *foo-some-constant* in my Clojure namespace when the module is loaded. I see two reasons for doing this:

    A) Automatically pull in new constants as they are added to the Foo class. In other words, I wouldn't have to modify my Clojure wrapper in the case that the Java interface added a new constant.
    B) I can guarantee the constants follow a more Clojure-esque naming convention.

    I'm not really sold on doing this, but it seems like a good question to ask to expand my knowledge of Clojure/Java interop. Thanks
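
    The question is about Clojure/Java interop, but the reflection idea itself is easy to sketch. Here is a rough Python analogue (all names hypothetical): enumerate a class's ALL_CAPS attributes and re-expose them under a friendlier naming convention, which is essentially what a Clojure intern-based loop would do over Foo's fields via java.lang.reflect:

        class Foo:
            BAR = 0
            SOME_CONSTANT = 1

        def pull_constants(cls, prefix):
            # Collect ALL_CAPS class attributes and rename them Clojure-style.
            out = {}
            for name in dir(cls):
                if name.isupper():
                    lispy = name.lower().replace("_", "-")
                    out[f"*{prefix}-{lispy}*"] = getattr(cls, name)
            return out

        print(pull_constants(Foo, "foo"))
        # {'*foo-bar*': 0, '*foo-some-constant*': 1}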

  • Saving an ActiveRecord non-transactionally.

    - by theFunkyEngineer
    My application accepts file uploads, with some metadata being stored in the DB, and the file itself on the file system. I am trying to make the metadata visible in the application before the file upload and post-processing are finished, but because saves are transactional, I have had no success. I have tried the callbacks and calling create_or_update() instead of save(), all to no avail. Is there a way to do this without re-writing the guts of ActiveRecord::Base? I've even attempted naming the method make() instead of save(), but perplexingly that had no effect. The code below "works" fine, but the database is not modified until everything else is finished.

        def save(upload)
          uploadFile = upload['datafile']
          originalName = uploadFile.original_filename
          self.fileType = File.extname(originalName)
          create_or_update()

          # write the file
          File.open(self.filePath, "wb") { |f| f.write(uploadFile.read) }

          begin
            musicFile = TagLib::File.new(self.filePath())
            self.id3Title = musicFile.title
            self.id3Artist = musicFile.artist
            self.id3Length = musicFile.length
          rescue TagLib::BadFile => exc
            logger.error("Failed to id track: \n #{exc}")
          end

          if(self.fileType == '.mp3')
            convertToOGG();
          end

          create_or_update()
        end

    Any ideas would be quite welcome, thanks.

  • REST design: what verb and resource name to use for a filtering service

    - by kabaros
    I am developing a cleanup/filtering service that has a method that receives a list of objects serialized in XML and applies some filtering rules to return a subset of those objects. In a RESTful service, what verb should I use for such a method? I thought GET was a natural choice, but I have to put the serialized XML in the body of the request, which works but feels incorrect. The other verbs don't seem to fit semantically. What is a good way to define that service interface? Naming the resource /Cleanup or /Filter seems weird, mainly because in the examples I see online it is always a noun rather than a verb being used for a resource name.

    Am I right to feel that REST services are better suited for CRUD operations, and that you start bending the rules in situations like this service? If yes, am I then making a wrong architectural choice? I've pushed to develop this service in a RESTful style (as opposed to SOAP) for simplicity, but such awkward cases happen a lot and make me feel like I am missing something: either I'm choosing REST where it shouldn't be used, or maybe I'm over-thinking some stuff that doesn't really matter. In that case, what really matters?
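
    One common way out is to treat the filtering operation itself as a resource and POST to it; POST fits because the payload doesn't belong in a URL and GET request bodies are ill-defined. A minimal sketch (Python/Flask, JSON instead of XML; the endpoint name and the filter rule are invented for illustration):

        from flask import Flask, jsonify, request

        app = Flask(__name__)

        @app.route("/filterings", methods=["POST"])
        def create_filtering():
            # The request body carries the candidate objects;
            # the response is the filtered subset.
            objects = request.get_json()
            kept = [o for o in objects if o.get("active")]  # stand-in filter rule
            return jsonify(kept), 200

        if __name__ == "__main__":
            app.run()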

  • Is there a way to create sub-databases as a kind of subfolder in SQL Server?

    - by user193655
    I am creating an application where there is a main DB and where other data is stored in secondary databases. The secondary databases follow a "plugin" approach. I use SQL Server. A simple installation of the application will just have the main DB, while as an option one can activate more "plugins", and for every plugin there will be a new database. Now, why I made this choice: I have to work with an existing legacy system, and this is the smartest thing I could figure out to implement the plugin system.

    The main DB and plugin DBs have exactly the same schema (basically the plugin DBs have some "special content": some important data that one can use as a kind of template - think of a letter template, for example - in the application). Plugin DBs are thus used in read-only mode; they are "repositories of content". The "smart" thing is that the main application can also be used by "plugin writers": they just write a DB by inserting content, and by making a backup of the database they have created a potential plugin (this is why all DBs have the same schema). These plugin DBs are downloaded from the internet whenever a content upgrade is available; every time, the full plugin DB is destroyed and a new one with the same name is created. This is for simplicity, and also because the size of these DBs is generally small.

    Now, this works; anyway, I would prefer to organize the DBs in a kind of tree structure, so that I can force the plugin DBs to be "sub-DBs" of the main application DB. As a workaround I am thinking of using naming rules, like:

        ApplicationDB (for the main application DB)
        ApplicationDB_PlugIn_N (for the N-th plugin DB)

    When I search for plugin 1 I try to connect to ApplicationDB_PlugIn_1; if I don't find the DB I raise an error. This situation can happen, for example, if some DBA renamed ApplicationDB_PlugIn_1. So, since those plugin DBs are really dependent on ApplicationDB only, I was trying to "do the subfolder trick". Can anyone suggest a way to do this? Can you comment on this self-made plugin approach I described above?
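
    The lookup-by-convention itself is trivial; here is a tiny sketch (Python, hypothetical names) of deriving the plugin DB name and failing loudly when a DBA has renamed it. The list of existing database names would come from the server, e.g. by querying sys.databases:

        def plugin_db_name(n):
            return f"ApplicationDB_PlugIn_{n}"

        def resolve_plugin_db(n, existing_dbs):
            # existing_dbs: database names as reported by the SQL Server instance.
            name = plugin_db_name(n)
            if name not in existing_dbs:
                raise LookupError(f"plugin database {name!r} not found - was it renamed?")
            return name

        resolve_plugin_db(1, ["ApplicationDB", "ApplicationDB_PlugIn_1"])  # ok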

  • Need help/guidance about creating a desktop application with gui

    - by Somebody still uses you MS-DOS
    I'm planning to do a desktop application using Python, to learn some desktop concepts. I'm going to use GTK or Qt; I still haven't decided which one. The fact is: I would like to create an application that can be called from the command line AND used through a GUI, so it would be useful for command-line fans and GUI users as well. It would be interesting to create a web interface too in the future, so it could run on a server somewhere using an HTML interface created with a template language. I'm thinking about two approaches:

    - creating a "model" with a simple interface which is called from a desktop/web implementation (sketched below);
    - creating a "model" with an HTML interface, and embedding a browser component so I could reuse all the code in both desktop/web scenarios.

    My question is: exactly which concepts are involved in this project? What advantages/disadvantages does each approach have? Are they possible? By "interface", I'm planning to just do some interfaces.py files with def calls - is this a bad approach? I would like some book recommendations, or resources for both options - or source code from projects which share the same GUI/cmd/web goals I'm after. Thanks in advance!
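
    A minimal sketch of the first approach (all names invented here): the "model" lives in plain functions, and each front end - the argparse CLI below, a GTK/Qt window or a web view later - is a thin caller of the same functions:

        import argparse

        # --- core.py: the actual application logic, UI-agnostic ---
        def process(path, verbose=False):
            result = f"processed {path}"
            if verbose:
                print(result)
            return result

        # --- cli.py: command-line front end ---
        def main():
            parser = argparse.ArgumentParser(description="demo front end")
            parser.add_argument("path")
            parser.add_argument("-v", "--verbose", action="store_true")
            args = parser.parse_args()
            process(args.path, verbose=args.verbose)

        # a GTK/Qt handler or a web controller would import and call
        # process() in exactly the same way
        if __name__ == "__main__":
            main()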

  • Sys. engineer has decided to dynamically transform all XSLs into DLLs on website build process. DLL

    - by John Sullivan
    Hello. OS: Win XP. Here is my situation. I have a browser-based application. It is "wrapped" in a Visual Basic application. Our "Systems Engineer Senior" has decided to spawn DLL files from all of our XSL pages (many of which have duplicate names) upon building a new instance of the website, and to have the active server pages (ASPX) use the DLLs instead. This has created a "known issue" in which ~200 DLL naming conflicts occur and, thus, half of our application is broken.

    I think a solution to this problem is that, thankfully, we're generating the names of the DLLs and linking them up with our application dynamically. Therefore we can do something kludgy like generate a hash and append it to the end of the DLL file name when we build our website, then always reference the DLL that had that hash appended to its name.

    Aside from outright renaming the DLLs, is there another way to have multiple DLLs with the same name register for one application? I think the answer is "No, only between different applications using a special technique." Please confirm.

    Another question I have on my mind is whether this whole idea is a good practice: converting our XSL pages (which we use en masse, every time a response from our web app occurs) into DLL functions that call a "function" to do what the XSL page did via an active server page (ASPX), when before we were just sending an XML response to an XSL page via ASPX.
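
    A sketch of the proposed kludge (Python, illustrative names): derive a short content hash at build time and splice it into each generated DLL name so duplicate base names can no longer collide:

        import hashlib

        def unique_dll_name(base_name, xsl_source):
            # Hash the XSL source so the suffix is stable across identical builds.
            digest = hashlib.sha1(xsl_source).hexdigest()[:8]
            return f"{base_name}.{digest}.dll"

        print(unique_dll_name("Invoice", b"<xsl:stylesheet ...>"))
        # Invoice.<8 hex chars>.dll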

  • Highlighting a custom UIButton

    - by Dan Ray
    The app I'm building has LOTS of custom UIButtons laying over top of fairly precisely laid out images. Buttonish, controllish images and labels and what have you, but with a clear custom-style UIButton sitting over top of it to handle the tap. My client yesterday says, "I want that to highlight when you tap it". Never mind that it immediately pushes on a new uinavigationcontroller view... it didn't blink, and so he's confused. Oy.

    Here's what I've done to address it. I don't like it, but it's what I've done: I subclassed UIButton (naming it FlashingUIButton). For some reason I couldn't just configure it with a background image for the highlighted control state; it never seemed to hit the state "highlighted". Don't know why that is. So instead I wrote:

        -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [self setBackgroundImage:[UIImage imageNamed:@"grey_screen"] forState:UIControlStateNormal];
            [self performSelector:@selector(resetImage) withObject:nil afterDelay:0.2];
            [super touchesBegan:touches withEvent:event];
        }

        -(void)resetImage
        {
            [self setBackgroundImage:nil forState:UIControlStateNormal];
        }

    This happily lays my grey_screen.png (a 30% opaque black box) over the button when it's tapped and replaces it with happy emptiness 0.2 of a second later. This is fine, but it means I have to go through all my many nibs and change all my buttons from UIButtons to FlashingUIButtons. Which isn't the end of the world, but I'd really hoped to address this as a UIButton category, and hit all birds with one stone. Any suggestions for a better approach than this one?

  • Prevent coercion to a single type in unlist() or c(); passing arguments to wrapper functions

    - by Leo Alekseyev
    Is there a simple way to flatten a list while retaining the original types of the list constituents?.. Is there a way to programmatically construct a heterogeneous list?.. For instance, I want to create a simple wrapper for functions like png(filename,width,height) that would take a device name, a file name, and a list of options. The naive approach would be something like

        my.wrapper <- function(dev,name,opts) {
          do.call(dev,c(filename=name,opts))
        }

    or similar code with unlist(list(...)). This doesn't work because opts gets coerced to character, and the resulting call is e.g. png(filename,width="500",height="500"). If there's no straightforward way to create heterogeneous lists like that, is there a standard idiomatic way to splice arguments into functions without naming them explicitly (e.g. do.call(dev,list(filename=name,width=opts["width"]))?

    -- Edit --

    Gavin Simpson answered both questions below in his discussion about constructing wrapper functions. Let me give a summary of the answer to the title question: it is possible to construct a heterogeneous list with c() provided the arguments to c() are lists. To wit:

        > foo <- c("a","b"); bar <- 1:3
        > c(foo,bar)
        [1] "a" "b" "1" "2" "3"
        > c(list(foo),list(bar))
        [[1]]
        [1] "a" "b"

        [[2]]
        [1] 1 2 3

        > c(as.list(foo),as.list(bar))  ## this creates a flattened heterogeneous list
        [[1]]
        [1] "a"

        [[2]]
        [1] "b"

        [[3]]
        [1] 1

        [[4]]
        [1] 2

        [[5]]
        [1] 3

  • Framework/service for hosting and managing files

    - by Peteris Caune
    Hi, in a webapp I'm building there is a planned side feature of supporting product illustrations and manuals (so pictures and PDFs), possibly arranged in galleries. As I'd rather not implement from scratch all of the uploading, managing and serving of this content, I'm looking for existing solutions which I could integrate.

    For example, I'm considering Flickr: users of the webapp specify their Flickr username and then use some naming convention to link objects in my webapp with pictures uploaded to their Flickr account. Very little code to write on my side, maybe just some API calls that proxy the Flickr APIs, since Flickr would handle picture uploading, organizing pictures in sets, storing them in the cloud, serving them in various sizes, etc. One drawback here is that either all of the pictures are public or I have to deal with interactive Flickr authorization. Also, I'm not sure Flickr would be happy being used in such a manner.

    What other online services or libraries/frameworks should I look at? My webapp is written in Python/Pylons, so Python libraries would be preferred. I'm already using some of Amazon's infrastructure, so frontends to Amazon S3 would be cool. For online services, a RESTful API would be nice.
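
    A sketch of the naming-convention link (Python; the Flickr fetch itself is omitted, and every name here is hypothetical): derive the photo title expected for a product id, then match it against photo metadata already pulled from the Flickr API:

        def expected_title(product_id, kind="illustration"):
            # The convention: photos are titled "product-<id>-<kind>" on Flickr.
            return f"product-{product_id}-{kind}"

        def find_photo(product_id, photos):
            """photos: list of dicts like {'title': ..., 'url': ...} fetched from Flickr."""
            want = expected_title(product_id)
            return next((p for p in photos if p["title"] == want), None)

        photos = [{"title": "product-42-illustration", "url": "http://example.com/42.jpg"}]
        print(find_photo(42, photos))  # {'title': 'product-42-illustration', ...}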

  • First Time Working With Others?

    - by cam
    I've been at my very first programming job for about 8 months now and I've learned incredible amounts so far. Unfortunately, I'm the sole developer for a small startup company for internal applications. For the first time ever, though, I'll be handing off some of my projects to someone else when I leave this job. I've documented all my projects thoroughly (at least I think so), but I still feel nervous about someone else reading my code. For example, I've always done this sort of thing:

        for (int i = 0; i < blah.length; i++)
        {
            //Do stuff
        }

    Should I name 'i' something descriptive? It's only a temporary variable, and will only exist within that loop, and it seems pretty obvious what the loop does with 'i'. This is just one example. Another one is that I name variables differently... I don't really conform to a naming standard, besides starting all private members with an underscore. Are there any resources that could show me how to make it easier for the next developer? Are there standards for this type of thing?

  • C#/DataSet: Create and bind a custom column/property in a DataSet's DataTable in the XXXDataSet.cs

    - by msfanboy
    Hello, I have a DataSet with a DataTable having the columns Number and Description. I do not want to bind both properties to a BindingSource bound again to 2 controls. What I want is a 3rd column in the DataTable called NumberDescription, which is a composition of Number and Description. This property is bound to only 1 control/BindingSource property, not 2.

    There is the partial XXXDataSet.Designer.cs file and the partial XXXDataSet.cs file. Of course I have defined the property in the XXXDataSet.cs file, roughly:

        public string NumberDescription
        {
            get { /* doing some checks here */ }
            set { /* doing some checks here too */ }
        }

    But all this does not bind my new property/column to the BindingSource the DataTable is bound to, because the DataSet does not know the new column/property. To make this new property/column known, I could add a new column to the DataTable in the DataSet designer view, naming it NumberDescription. At least then I see the new property in the listing of the BindingSource, so I can choose it. But all that did not help. So how do I do that stuff properly? Should I call the NumberDescription property in the Number AND Description properties?

  • How would I sort files to directories based on filenames?

    - by gnomed
    I have a huge number of files to sort, all named in some terrible convention. Here are some examples:

        (4)_mr__mcloughlin____.txt
        12__sir_john_farr____.txt
        (b)mr__chope____.txt
        dame_elaine_kellett-bowman____.txt
        dr__blackburn______.txt

    These names are each supposed to be a different person (speaker). Someone in another IT department produced these from a ton of XML files using some script, but the naming is unfathomably stupid, as you can see. I need to sort literally tens of thousands of these files, with multiple files of text for each person, each with something stupid making the filename different, be it more underscores or some random number. They need to be sorted by speaker. This would be easier with a script to do most of the work; then I could just go back and merge folders that should be under the same name, or whatever.

    There are a number of ways I was thinking about doing this:

    1. Parse the names from each file and sort them into folders for each unique name.
    2. Get a list of all the unique names from the filenames, then look through this simplified list for similar ones, asking me whether they are the same, and once that has been determined, sort them all accordingly.

    I plan on using Perl, but I can try a new language if it's worth it. I'm not sure how to go about reading in each filename in a directory, one at a time, into a string for parsing into an actual name. I'm not completely sure how to parse with regex in Perl either, but that might be googleable. For the sorting, I was just gonna use the shell command:

        cp filename.txt /example/destination/filename.txt

    but just because that's all I know, so it's easiest. I don't even have a pseudocode idea of what I'm going to do, so if someone knows the best sequence of actions, I'm all ears. I guess I am looking for a lot of help; I am open to any suggestions. Many many many thanks to anyone who can help. B.
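
    As a starting point for option 1, here is a sketch in Python rather than the Perl the poster mentions. The cleanup regex is a guess from the sample names above and will need tuning against the real data:

        import os
        import re
        import shutil

        def speaker_key(filename):
            stem = os.path.splitext(filename)[0]
            stem = re.sub(r'^\(\w+\)|^\d+', '', stem)   # strip leading "(4)", "(b)", "12"
            stem = re.sub(r'_+', '_', stem).strip('_')  # collapse runs of underscores
            return stem or "unknown"

        def sort_files(src, dest):
            # Copy each .txt into a folder named after its cleaned-up speaker name.
            for name in os.listdir(src):
                if not name.endswith(".txt"):
                    continue
                folder = os.path.join(dest, speaker_key(name))
                os.makedirs(folder, exist_ok=True)
                shutil.copy(os.path.join(src, name), os.path.join(folder, name))

        sort_files("raw_speeches", "by_speaker")  # hypothetical directory names

    Merging near-duplicate folders afterwards (option 2's interactive step) can then work on the much shorter list of folder names instead of tens of thousands of files.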

  • Batch file recursively find files and rar them

    - by b1gf00t
    Hi there, I have a parent directory which hosts many subdirectories, and in every subdirectory there are .mpg movies. Some of the directories might contain one or more .mpg movies. I would like to automate the process below, which I have been doing manually.

    Step One: if a directory has more than 1 .mpg file, I create separate directories for each and move each file into its own directory, naming the directory after the file.

    Step Two: I rar each video file in its directory as per one of my profiles; that splits the movie into 50MB parts, tests the archive, deletes the source, and instructs WinRAR to wait if another rar is executing. I am doing this so I can queue jobs manually.

    Step Three: after having all the rars in the subdirectories, I create a checksum for every directory, leaving a checksum.sfv in each.

    Step Four: I copy the parent folder and its subdirectories to my external drives.

    I was hoping that someone could assist me in creating a script. I was able to automate the process of creating directories named after the files and moving the files, but I never succeeded in automating step two. I am using the software below:

        WinRAR from rarlabs
        exf from exactfile

    Appreciate your assistance.
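
    A sketch covering steps one and two (Python; the rar switches are my best reading of the rarlabs docs - -v50m for 50MB volumes, -t to test after archiving, -df to delete the source - so verify them against your WinRAR version). Running the jobs one at a time through subprocess.run() already serializes the queue:

        import os
        import shutil
        import subprocess

        PARENT = "movies"  # hypothetical parent directory

        # Step one (simplified: gives every movie its own folder, even where
        # a folder holds just one): name each folder after its file.
        for dirpath, _dirs, files in os.walk(PARENT):
            for name in [f for f in files if f.endswith(".mpg")]:
                stem = os.path.splitext(name)[0]
                if os.path.basename(dirpath) != stem:
                    target = os.path.join(dirpath, stem)
                    os.makedirs(target, exist_ok=True)
                    shutil.move(os.path.join(dirpath, name), os.path.join(target, name))

        # Step two: archive each movie in place, one job at a time.
        for dirpath, _dirs, files in os.walk(PARENT):
            for name in [f for f in files if f.endswith(".mpg")]:
                stem = os.path.splitext(name)[0]
                subprocess.run(["rar", "a", "-v50m", "-t", "-df",
                                os.path.join(dirpath, stem + ".rar"),
                                os.path.join(dirpath, name)], check=True)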

  • Mock implementations in C++

    - by forneo
    Hi guys, I need a mock implementation of a class - for testing purposes - and I'm wondering how I should best go about doing that. I can think of two general ways:

    1. Create an interface that contains all public functions of the class as pure virtual functions, then create a mock class by deriving from it.
    2. Mark all functions (well, at least all that are to be mocked) as virtual.

    I'm used to doing it the first way in Java, and it's quite common too (probably since Java has a dedicated interface type). But I've hardly ever seen such interface-heavy designs in C++, thus I'm wondering. The second way will probably work, but I can't help but think of it as kind of ugly. Is anybody doing that?

    If I follow the first way, I need some naming assistance. I have an audio system that is responsible for loading sound files and playing the loaded tracks. I'm using OpenAL for that, thus I've called the interface "Audio" and the implementation "OpenALAudio". However, this implies that all OpenAL-specific code has to go into that class, which feels kind of limiting. An alternative would be to leave the class's name "Audio" and find a different one for the interface, e.g. "AudioInterface" or "IAudio". Which would you suggest, and why?

  • Ruby On Rails -

    - by Adam S
    I am trying to create a collection_select that is dependent on another collection_select, following Railscasts episode #88. The database includes a schedule of all cruise ships' arrival dates. See the attached schema diagram (the arrow side of the lines indicates has_many; the non-pointy side indicates belongs_to).

    In the "booking/new" view I have a collection_select for choosing a cruise line, then another collection_select will appear for selecting a ship, then another for selecting the date that ship is in. If I ONLY put a collection_select for shipschedule, it works fine (because of the direct association with the bookings model). However, if I try to add a collection_select for cruiselines... it breaks (I'm assuming because there isn't a direct association). The collection_select for cruiselines returns an undefined method error for "cruiseline_id"... if I simply use "id", the collection_select works, but of course isn't fully functional due to the incorrect naming of the form field.

    I have tried has_many :shipschedules, :through => :cruiseships in the cruiseline model and has_many :bookings, :through => :shipschedules in the cruiseships model. As you can see in the diagram, I need to access the cruiseships and cruiselines models through the bookings model. I have the models set up so a cruiseline has_many cruiseships, a cruiseship has_many shipschedules, and a shipschedule has_many bookings. But the bookings model can't directly access the cruiseline model. How do I accomplish this? THANKS!!!

        http://www.adamstockland.com/common/images/RoRf/PastedGraphic-2.png
        http://www.adamstockland.com/common/images/RoRf/PastedGraphic-1.png

  • Passive FTP on Windows Server 2008 R2 using the IIS7 FTP-Server

    - by ntor
    Hello ServerFault community! During the last few days I have been setting up a Windows Server 2008 R2 in VMware. I installed the standard FTP server on it by adding the Web Server (IIS) role. Everything works fine when accessing my FTP site with ftp://localhost in Firefox. I can also get access to it via the local IP of my server. Actually, everything works fine in my LAN.

    But here's my problem: I want to get access "from outside", using the external IP address or a DynDNS URL. I have a LinkSys router in front of my server, therefore I'm forwarding all the important ports. If you're now thinking "this idiot has probably forgotten some ports", I must disappoint you: it even works getting access to my server's website and messing around in some web interfaces. The problem is passive FTP (active works for me). I always get a timeout when, e.g., FileZilla waits for a response to the LIST command.

    The one big thing I don't get is why my server sends a response to the PASV command naming a port like 40918, even though I have restricted the data port range for passive FTP (in the IIS Manager) to e.g. [5000-5009]. I simply don't want to open and forward all possible data ports! And another thing: I can't specify a static external IP address for my server, since I don't own any.

    I hope I have explained my problem in a comprehensible way. If not, simply ask by posting a comment! LG ntor

    PS: I have already tried, mainly, these articles:

        Out Of Band FTP 7 shows "Operation timed out"
        How to Configure Windows Firewall for a Passive Mode FTP Server
        ServerFault --- Passive ftp on Server 2008

    --- EDIT: ---

    There is one idea rising up in my mind: when I use FileZilla to connect in passive mode, I always get something like this:

        227 Entering Passive Mode (192,168,1,102,160,86)

    According to a Rhinosoft article, FileZilla tries to connect on port 160*256+86 = 41046, although I have restricted the data ports (as mentioned above). Could this be caused by the router, which doesn't forward out-ports directly but uses different ones? (The IP address given is the local one, since I'm not able to define a static external one in the IIS Manager.)
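
    As a quick sanity check of that arithmetic: the last two numbers of the 227 reply encode the data port as high*256 + low. A few lines of Python confirm the 41046 figure:

        def pasv_port(reply):
            # Pull the six comma-separated numbers out of "227 Entering Passive Mode (...)".
            nums = reply[reply.index("(") + 1 : reply.index(")")].split(",")
            high, low = int(nums[4]), int(nums[5])
            return high * 256 + low

        print(pasv_port("227 Entering Passive Mode (192,168,1,102,160,86)"))  # 41046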

  • libpam-ldapd not looking for secondary groups

    - by Jorge Suárez de Lis
    I'm migrating from libpam-ldap to libpam-ldapd. I'm having some trouble gathering the secondary groups from LDAP. With libpam-ldap, I had this in the /etc/ldap.conf file:

        nss_schema rfc2307bis
        nss_base_passwd ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es
        nss_base_shadow ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es
        nss_base_group ou=Groups,ou=CITIUS,dc=inv,dc=usc,dc=es
        nss_map_attribute uniqueMember member

    The mapping is there because I'm using the groupOfNames LDAP class for groups instead of groupOfUniqueNames, so the attribute naming the members is member instead of uniqueMember. Now I want to do the same using libpam-ldapd, but I can't get it to work. Here's the relevant part of my /etc/nslcd.conf:

        base passwd ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es
        base shadow ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es
        base group ou=Groups,ou=CITIUS,dc=inv,dc=usc,dc=es
        map group uniqueMember member

    And this is the debug output from nslcd when a user is authenticated:

        nslcd: [8b4567] DEBUG: connection from pid=12090 uid=0 gid=0
        nslcd: [8b4567] DEBUG: nslcd_passwd_byuid(4004)
        nslcd: [8b4567] DEBUG: myldap_search(base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es", filter="(&(objectClass=posixAccount)(uidNumber=4004))")
        nslcd: [8b4567] DEBUG: ldap_initialize(ldap://172.16.54.31/)
        nslcd: [8b4567] DEBUG: ldap_set_rebind_proc()
        nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_PROTOCOL_VERSION,3)
        nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_DEREF,0)
        nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_TIMELIMIT,10)
        nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_TIMEOUT,10)
        nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_NETWORK_TIMEOUT,10)
        nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_REFERRALS,LDAP_OPT_ON)
        nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_RESTART,LDAP_OPT_ON)
        nslcd: [8b4567] DEBUG: ldap_simple_bind_s("uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es","*****") (uri="ldap://172.16.54.31/")
        nslcd: [8b4567] connected to LDAP server ldap://172.16.54.31/
        nslcd: [8b4567] DEBUG: ldap_result(): end of results
        nslcd: [7b23c6] DEBUG: connection from pid=15906 uid=0 gid=2000
        nslcd: [7b23c6] DEBUG: nslcd_pam_authc("jorge.suarez","","su","***")
        nslcd: [7b23c6] DEBUG: myldap_search(base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es", filter="(&(objectClass=posixAccount)(uid=jorge.suarez))")
        nslcd: [7b23c6] DEBUG: ldap_initialize(ldap://172.16.54.31/)
        nslcd: [7b23c6] DEBUG: ldap_set_rebind_proc()
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_PROTOCOL_VERSION,3)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_DEREF,0)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_TIMELIMIT,10)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_TIMEOUT,10)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_NETWORK_TIMEOUT,10)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_REFERRALS,LDAP_OPT_ON)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_RESTART,LDAP_OPT_ON)
        nslcd: [7b23c6] DEBUG: ldap_simple_bind_s("uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es","*****") (uri="ldap://172.16.54.31/")
        nslcd: [7b23c6] connected to LDAP server ldap://172.16.54.31/
        nslcd: [7b23c6] DEBUG: ldap_initialize(ldap://172.16.54.31/)
        nslcd: [7b23c6] DEBUG: ldap_set_rebind_proc()
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_PROTOCOL_VERSION,3)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_DEREF,0)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_TIMELIMIT,10)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_TIMEOUT,10)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_NETWORK_TIMEOUT,10)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_REFERRALS,LDAP_OPT_ON)
        nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_RESTART,LDAP_OPT_ON)
        nslcd: [7b23c6] DEBUG: ldap_simple_bind_s("uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es","*****") (uri="ldap://172.16.54.31/")
        nslcd: [7b23c6] connected to LDAP server ldap://172.16.54.31/
        nslcd: [7b23c6] DEBUG: myldap_search(base="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es", filter="(objectClass=posixAccount)")
        nslcd: [7b23c6] DEBUG: ldap_unbind()
        nslcd: [3c9869] DEBUG: connection from pid=15906 uid=0 gid=2000
        nslcd: [3c9869] DEBUG: nslcd_pam_sess_o("jorge.suarez","uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es","su","/dev/pts/7","","jorge.suarez")

    It seems to me that it won't even try to look for groups. What am I doing wrong? I can't see anything relevant to my problem in the docs. I'm probably not understanding how the map option works.

  • Intermittent Disconnection of Client Computers from Domain Server

    - by dilip nagle
    The background: I have a Windows 2008 Server Enterprise Edition with 25 user CALs. It has a domain, with all users and a network-shared HP printer in it. The server has two network cards, and both these cards as well as all client machines are on the IP addressing scheme 192.168.1.* with subnet mask 255.255.255.0. Of the two network cards, viz. 192.168.1.231 and 192.168.1.233, only 192.168.1.231 is registered with DNS. 192.168.1.233 (i.e. the 2nd network card) has 192.168.1.231 as its default gateway and DNS address. The server has three hard disks with capacities of 500GB, 500GB and 1TB, partitioned as (C,D,E), (F,G) and (K), with partition K holding all user data in various shared folders. Each of these folders (on partition K) is mapped onto each user's computer as per the access rights given to them.

    The problem: the server was installed about 6 months ago, and to date it has not hung or given any problem even once. All the client computers are able to run the web-based software from their computers via IP address, e.g. http://192.168.1.231/webERP/default.aspx. However, occasionally, when any client computer tries to browse network mappings, it hangs. Again, there is no fixed pattern; this may happen after running smoothly for, say, 3 days.

    On each client's machine, the network settings are as follows:

        IP address: 192.168.1.* (where * is 1, 2, 3, ...)
        Subnet mask: 255.255.255.0
        Default gateway: 192.168.1.231 (which is a server card and the DNS address)
        Preferred DNS server: 192.168.1.231

    In the Advanced tab under WINS, "LMHOSTS lookup" is unticked and "default" is radio-buttoned. Ideally, I would have loved to disable NetBIOS over TCP/IP, but some network printers do not get accessed if that option is enabled (i.e. radio-buttoned). Disabling NetBIOS would drastically reduce the traffic of NetBIOS broadcasts to all the computers on the net for name resolution.

    On the server, I have WINS running; I have scavenged records, verified database integrity, removed tombstoned records, etc. The critical errors, shown only once a day when the server is started, are 4224 (WINS) and 12923 (Server Licensing failed to update DNS record). I fail to understand why client machines HANG when they try to browse mapped network shared folders on the K drive. Kindly advise.

  • Expectations for NTFS file recovery

    - by Fred Hamilton
    Yesterday I booted my XP system, and as I looked up a minute later I saw the light blue screen and tail end of that pre-boot disk check Windows sometimes does if it finds an error (or was previously told to run a disk check during the next boot). I didn't worry about it at the moment... But then I looked at my "scratch" disk, which was a 70% full 750GB hard disk... and it now looks like it has been freshly formatted. It doesn't have a single file on it, just the hidden "System Volume Information" file and 750GB of freedom from data.

    I looked at some of the recovery tools from the "Free NTFS partition recovery" question and decided to try PC INSPECTOR™ File Recovery 4.x initially. It ran overnight and afterwards returned a list of thousands of files it could recover. The odd thing was that the filenames were lost but the file extensions were not (WTF?), and all of the files were exactly 1,472kB in size. I recovered a dozen PDFs as a test, and 80% of them displayed OK despite being padded out to 1.5MB (though I assume any files over 1,472kB are hosed).

    My primary question is: is this the best I can expect from any file recovery software when trying to recover NTFS files? Or is there perhaps something better out there? I assume this is as good as it gets, but wanted to check in with the experts first.

    Bonus questions:

    - What might have happened to my drive? I didn't intentionally format it. I've never seen a disk error cause the drive to suddenly become a clean, reformatted drive. Could some malicious/confused software have told my PC to format my disk on reboot? Is that even a function Windows XP has?
    - Why can the file extensions be recovered but not the filenames? Does NTFS really treat them as separate entities? I thought I had 8.3 naming turned off, but maybe that had something to do with it. Or maybe it looks at the data in the file and guesses the extension?

  • debian packages version convention

    - by JackWu
    I'm using Debian/Ubuntu, and I get confused about the versions of packages. When using the dpkg -l command, I get:

        ii  vim               2:7.3.429-2ubuntu2.1      Vi IMproved - enhanced vi editor
        ii  vim-common        2:7.3.429-2ubuntu2.1      Vi IMproved - Common files
        ii  vim-runtime       2:7.3.429-2ubuntu2.1      Vi IMproved - Runtime files
        ii  vim-tiny          2:7.3.429-2ubuntu2.1      Vi IMproved - enhanced vi editor - compact version
        ii  virt-what         1.11-1                    detect if we are running in a virtual machine
        ii  w3m               0.5.3-5ubuntu1            WWW browsable pager with excellent tables/frames support
        ii  watershed         6                         reduce superfluous executions of idempotent command
        ii  wget              1.13.4-2ubuntu1           retrieves files from the web
        ii  whiptail          0.52.11-2ubuntu10         Displays user-friendly dialog boxes from shell scripts
        ii  whoopsie          0.1.33                    Ubuntu crash database submission daemon
        ii  wimlib9           1.5.0-1~webupd8~precise   Library to extract, create, modify, and mount WIM files
        ii  wimtools          1.5.0-1~webupd8~precise   Tools to extract, create, modify, and mount WIM files
        ii  wireless-tools    30~pre9-5ubuntu2          Tools for manipulating Linux Wireless Extensions
        ii  wpasupplicant     0.7.3-6ubuntu2.1          client support for WPA and WPA2 (IEEE 802.11i)
        ii  x11-common        1:7.6+12ubuntu2           X Window System (X.Org) infrastructure
        ii  x11-utils         7.6+4ubuntu0.1            X11 utilities
        ii  xauth             1:1.0.6-1                 X authentication utility
        ii  xbitmaps          1.1.1-1                   Base X bitmaps
        ii  xclip             0.12-1                    command line interface to X selections
        ii  xfonts-encodings  1:1.0.4-1ubuntu1          Encodings for X.Org fonts
        ii  xfonts-utils      1:7.6+1                   X Window System font utility programs
        ii  xkb-data          2.5-1ubuntu1.3            X Keyboard Extension (XKB) configuration data
        ii  xml-core          0.13                      XML infrastructure and XML catalog file support
        rc  xpdf              3.02-21build1             Portable Document Format (PDF) reader
        ii  xterm             271-1ubuntu2.1            X terminal emulator
        ii  xz-lzma           5.1.1alpha+20110809-3     XZ-format compression utilities - compatibility commands
        ii  xz-utils          5.1.1alpha+20110809-3     XZ-format compression utilities
        ii  zabbix-agent      1:1.8.11-1                network monitoring solution - agent
        ii  zlib1g            1:1.2.3.4.dfsg-3ubuntu4   compression library - runtime
        ii  zlib1g-dev        1:1.2.3.4.dfsg-3ubuntu4   compression library - development
        ii  zsh               4.3.17-1ubuntu1           shell with lots of features

    The third column is the version, but it all looks messed up in a way I can't understand; different packages use totally different naming conventions. Here are the major questions:

    1. Why do some of them have "ubuntu" in them, and some not?
    2. What do all the special characters -, ~ and + mean?
    3. alpha, build, dfsg - what are they? Can I just use them casually?
    4. vim and other packages have "2:" - what does that mean?
    5. How does version comparison work, since the formats can be so different?

    Can anyone please explain this to me? Or where can I find an official document? Thanks in advance.
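
    A sketch of how one of these strings breaks apart, per the format [epoch:]upstream_version[-debian_revision] described in deb-version(5). The real comparison rules are richer (see dpkg --compare-versions); this only splits the pieces:

        def split_deb_version(v):
            # [epoch:]upstream_version[-debian_revision]; epoch defaults to 0,
            # and the revision is everything after the LAST hyphen.
            epoch, sep, rest = v.partition(":")
            if not sep:
                epoch, rest = "0", v
            upstream, sep, revision = rest.rpartition("-")
            if not sep:
                upstream, revision = rest, ""
            return {"epoch": epoch, "upstream": upstream, "revision": revision}

        print(split_deb_version("2:7.3.429-2ubuntu2.1"))
        # {'epoch': '2', 'upstream': '7.3.429', 'revision': '2ubuntu2.1'}
        print(split_deb_version("1:1.2.3.4.dfsg-3ubuntu4"))
        # {'epoch': '1', 'upstream': '1.2.3.4.dfsg', 'revision': '3ubuntu4'}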

  • ntpdate works, but ntpd can't synchronize

    - by dafydd
    This is in RHEL 5.5. First, ntpdate to the remote host works:

        $ ntpdate XXX.YYY.4.21
        24 Oct 16:01:17 ntpdate[5276]: adjust time server XXX.YYY.4.21 offset 0.027291 sec

    Second, here are the server lines in my /etc/ntp.conf. All restrict lines have been commented out for troubleshooting.

        server 127.127.1.0
        server XXX.YYY.4.21

    I execute service ntpd start and check with ntpq:

        $ ntpq
        ntpq> peer
             remote           refid      st t when poll reach   delay   offset  jitter
        ==============================================================================
        *LOCAL(0)        .LOCL.           5 l   36   64  377    0.000    0.000   0.001
         timeserver.doma .LOCL.           1 u   39  128  377    0.489   51.261  58.975
        ntpq> opeer
             remote           local      st t when poll reach   delay   offset    disp
        ==============================================================================
        *LOCAL(0)        127.0.0.1        5 l   40   64  377    0.000    0.000   0.001
         timeserver.doma XXX.YYY.22.169   1 u   43  128  377    0.489   51.261  58.975

    XXX.YYY.22.169 is the address of the host I'm working on. A reverse lookup on the IP address in my ntp.conf file validates that the ntpq output is correctly naming the remote server. However, as you can see, it appears to just roll over to my .LOCL. time server. Also, ntptrace just returns the local time server, and ntptrace XXX.YYY.4.21 times out:

        $ ntptrace
        localhost.localdomain: stratum 6, offset 0.000000, synch distance 0.948181
        $ ntptrace XXX.YYY.4.21
        XXX.YYY.4.21: timed out, nothing received
        ***Request timed out

    This looks like my NTP daemon is just querying itself. I am thinking about the possibility that the router-I-don't-control between my test network timeserver and the corporate network timeserver is blocking on source port. (I think ntpdate sends on port 123, which gets it around that filter and is why I can't use it while ntpd is running.) I have email in to the network folks to check that. Finally, telnet XXX.YYY.4.21 123 never times out or completes a connection.

    The questions:

    1. What am I missing here? What else can I check to try to figure out where this connection is failing?
    2. Would strace ntptrace XXX.YYY.4.21 show me the source port ntptrace is sending from? I can deconstruct most strace calls, but I can't figure out the location of that datum.
    3. If I can't directly examine the gateway router between my test network and the timeserver, how might I build evidence that it's responsible for these disconnections? Alternately, how might I rule it out?
