Search Results

Search found 15139 results on 606 pages for 'scripting interface'.


  • MediaWiki: how to hide users from the user list?

    - by Dave Everitt
    I've set up MediaWiki 1.15.1 for a client who has added two users by mistake. They now want to hide these users from the user list. It seems this is done via the $wgGroupPermissions array with $wgGroupPermissions['suppress']['hideuser'] = true;, but it isn't at all clear what entry this needs for the hiding to work, or whether a new group ('hidden' or whatever) has to be created first with $wgAddGroups['bureaucrat'] = true;. For now, I've added the two users to be hidden to the 'Oversight' group, which is described as 'Block a username, hiding it from the public (hideuser)', but they still appear on the Special:ListUsers page. At a loss as to how the MediaWiki arrays alter options displayed in the interface, so far I've added this to LocalSettings.php: $wgGroupPermissions['suppress']['hideuser'] = true; $wgAddGroups['supress'] = true; Or - since they haven't actually added anything to the wiki - could they simply be removed from the MySQL users table, although MediaWiki warns against this? Has anyone else done this successfully? Update - this is a hole in MediaWiki admin (although there are workarounds). See this thread on MediaWiki Users and the note to the reply below.
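
    For what it's worth, a hedged sketch of the LocalSettings.php approach (exact keys not verified against the 1.15 documentation; note that the 'supress' spelling in the snippet above would silently do nothing). Membership in a group with the hideuser right is not enough on its own - an admin holding that right must then block each account with the "hide username" option before it disappears from Special:ListUsers.

        # LocalSettings.php (sketch)
        # Give the built-in 'suppress' group the right to hide usernames:
        $wgGroupPermissions['suppress']['hideuser'] = true;
        # Let bureaucrats add and remove members of that group via Special:UserRights:
        $wgAddGroups['bureaucrat'][] = 'suppress';
        $wgRemoveGroups['bureaucrat'][] = 'suppress';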

    Read the article

  • How to implement the API/SPI Pattern in Java?

    - by Adam Tannon
    I am creating a framework that exposes an API for developers to use: public interface MyAPI { public void doSomeStuff(); public int getWidgets(boolean hasRun); } All the developers should have to do is code their projects against these API methods. I also want them to be able to place different "drivers"/"API bindings" on the runtime classpath (the same way JDBC or SLF4J work) and have the API method calls (doSomeStuff(), etc.) operate on different 3rd party resources (files, servers, whatever). Thus the same code and API calls will map to operations on different resources depending on what driver/binding the runtime classpath sees (i.e. myapi-ftp, myapi-ssh, myapi-teleportation). How do I write (and package) an SPI that allows for such runtime binding, and then maps MyAPI calls to the correct (concrete) implementation? In other words, if myapi-ftp allows you to getWidgets(boolean) from an FTP server, how would I code this up (to make use of both the API and SPI)? Bonus points for a concrete, working Java code example! Thanks in advance!
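
    The stock JDK mechanism for exactly this kind of classpath-driven binding is java.util.ServiceLoader (JDBC 4 drivers and similar plug-in schemes build on the same provider-configuration-file idea). A minimal sketch - the class and package names beyond MyAPI are invented for illustration:

        // --- FtpMyAPI.java, shipped inside the myapi-ftp jar (name illustrative) ---
        public class FtpMyAPI implements MyAPI {
            public void doSomeStuff() { /* talk to the FTP server */ }
            public int getWidgets(boolean hasRun) { return 0; }
        }

        // The myapi-ftp jar also contains a provider-configuration file named
        //   META-INF/services/<fully qualified name of MyAPI>
        // whose single line is the fully qualified name of FtpMyAPI.

        // --- MyAPILocator.java, in the framework itself ---
        import java.util.Iterator;
        import java.util.ServiceLoader;

        public final class MyAPILocator {
            private MyAPILocator() {}

            // Binds to whichever implementation the runtime classpath provides.
            public static MyAPI locate() {
                Iterator<MyAPI> impls = ServiceLoader.load(MyAPI.class).iterator();
                if (!impls.hasNext()) {
                    throw new IllegalStateException("No MyAPI binding found on the classpath");
                }
                return impls.next();   // first binding wins; real code might warn if several are present
            }
        }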

    Read the article

  • How do I add a where filter using the original Linq-to-SQL object in the following scenario

    - by GenericTypeTea
    I am performing a select query using the following Linq expression: Table<Tbl_Movement> movements = context.Tbl_Movement; var query = from m in movements select new MovementSummary { Id = m.DocketId, Created = m.DateTimeStamp, CreatedBy = m.Tbl_User.FullName, DocketNumber = m.DocketNumber, DocketTypeDescription = m.Ref_DocketType.DocketType, DocketTypeId = m.DocketTypeId, Site = new Site() { Id = m.Tbl_Site.SiteId, FirstLine = m.Tbl_Site.FirstLine, Postcode = m.Tbl_Site.Postcode, SiteName = m.Tbl_Site.SiteName, TownCity = m.Tbl_Site.TownCity, Brewery = new Brewery() { Id = m.Tbl_Site.Ref_Brewery.BreweryId, BreweryName = m.Tbl_Site.Ref_Brewery.BreweryName }, Region = new Region() { Description = m.Tbl_Site.Ref_Region.Description, Id = m.Tbl_Site.Ref_Region.RegionId } } }; I am also passing in an IFilter class into the method where this select is performed. public interface IJobFilter { int? PersonId { get; set; } int? RegionId { get; set; } int? SiteId { get; set; } int? AssetId { get; set; } } How do I add these where parameters into my SQL expression? Preferably I'd like this done in another method as the filtering will be re-used across multiple repositories. Unfortunately when I do query.Where it has become an IQueryable<MovementSummary>. I'm assuming it has become this as I'm returning an IEnumerable<MovementSummary>. I've only just started learning LINQ, so be gentle.
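
    One reusable shape for this is an extension method that conditionally appends Where clauses to the IQueryable<Tbl_Movement> before the projection, so the filtering still composes into the generated SQL and the repositories can share it. A sketch - the column names used in the comparisons (SiteId, RegionId, CreatedById) are guesses, not taken from the real model:

        using System.Linq;

        public static class JobFilterExtensions
        {
            // Appends one Where clause per filter value that is actually set.
            public static IQueryable<Tbl_Movement> ApplyFilter(
                this IQueryable<Tbl_Movement> source, IJobFilter filter)
            {
                if (filter.SiteId.HasValue)
                    source = source.Where(m => m.SiteId == filter.SiteId.Value);
                if (filter.RegionId.HasValue)
                    source = source.Where(m => m.Tbl_Site.RegionId == filter.RegionId.Value);
                if (filter.PersonId.HasValue)
                    source = source.Where(m => m.CreatedById == filter.PersonId.Value);
                // AssetId and any further criteria follow the same pattern.
                return source;
            }
        }

        // Usage: filter while still on the table type, then project to the summary:
        // var query = context.Tbl_Movement
        //                    .ApplyFilter(jobFilter)
        //                    .Select(m => new MovementSummary { ... });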

    Read the article

  • Generating a Python wrapper for a 3rd-party C++ DLL using SWIG

    - by MuraliK
    I am new to SWIG. I have a third-party C++ DLL with the functions exported below, and I want to call these DLL functions from Python, so I thought of using SWIG to generate the wrapper. I am not sure what sort of wrapper I need to generate (do I need to generate a .lib or a .dll to use it from Python?). If I need to generate a .dll, how do I do that using Visual Studio 2010? There are also callback functions like SetNotifyHandler(void (__stdcall * nf)(int wp, void *lp)) in the list below - how do I define such a function in the interface file? Can someone please help me? #ifndef DLL_H #define DLL_H #ifdef DLL_BUILD #define DLLFUNC __declspec(dllexport) #else #define DLLFUNC __declspec(dllimport) #endif #pragma pack(push) #pragma pack(1) #pragma pack(pop) extern "C" { DLLFUNC int __stdcall StartServer(void); DLLFUNC int __stdcall GetConnectionInfo(int connIndex, Info *buf); DLLFUNC void __stdcall SetNotifyWindow(HWND nw); DLLFUNC void __stdcall SetNotifyHandler(void (__stdcall * nf)(int wp, void *lp)); DLLFUNC int __stdcall SendCommand(int connIndex, Command *cmd); };

    Read the article

  • crash when using stl vector at instead of operator[]

    - by Jamie Cook
    I have a method as follows (from a class that implements the TBB task interface - not currently multithreading though) My problem is that two ways of accessing a vector are causing quite different behaviour - one works and the other causes the entire program to bomb out quite spectacularly (this is a plugin and normally a crash will be caught by the host - but this one takes out the host program as well! As I said quite spectacular) void PtBranchAndBoundIterationOriginRunner::runOrigin(int origin, int time) const // NOTE: const method { BOOST_FOREACH(int accessMode, m_props->GetAccessModes()) { // get a const reference to appropriate vector from member variable // map<int, vector<double>> m_rowTotalsByAccessMode; const vector<double>& rowTotalsForAccessMode = m_rowTotalsByAccessMode.find(accessMode)->second; if (origin != 129) continue; // Additional debug constraint: I know that the vector only has one non-zero element at index 129 m_job->Write("size: " + ToString(rowTotalsForAccessMode.size())); try { // check for early return... i.e. nothing to do for this origin if (!rowTotalsForAccessMode[origin]) continue; // <- this works if (!rowTotalsForAccessMode.at(origin)) continue; // <- this crashes } catch (...) { m_job->Write("Caught an exception"); // but it's not an exception } // do some other stuff } } I hate not putting in well-defined questions but at the moment my best phrasing is: "WTF?" I'm compiling this with Intel C++ 11.0.074 [IA-32] using Microsoft (R) Visual Studio Version 9.0.21022.8 and my implementation of vector has const_reference operator[](size_type _Pos) const { // subscript nonmutable sequence #if _HAS_ITERATOR_DEBUGGING if (size() <= _Pos) { _DEBUG_ERROR("vector subscript out of range"); _SCL_SECURE_OUT_OF_RANGE; } #endif /* _HAS_ITERATOR_DEBUGGING */ _SCL_SECURE_VALIDATE_RANGE(_Pos < size()); return (*(_Myfirst + _Pos)); } (Iterator debugging is off - I'm pretty sure) and const_reference at(size_type _Pos) const { // subscript nonmutable sequence with checking if (size() <= _Pos) _Xran(); return (*(begin() + _Pos)); } So the only difference I can see is that at calls begin instead of simply using _Myfirst - but how could that possibly be causing such a huge difference in behaviour?
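
    For reference, the only documented difference between the two accessors is the bounds check: at() throws std::out_of_range when the index is past the end, while operator[] out of range is simply undefined behaviour (it may return garbage, crash, or appear to work). A tiny standalone test, independent of the TBB/plugin setup above:

        #include <iostream>
        #include <stdexcept>
        #include <vector>

        int main() {
            std::vector<double> v(10, 0.0);          // valid indices: 0..9

            // v[129]: no bounds check, undefined behaviour (left commented out on purpose)
            // double garbage = v[129];

            try {
                double d = v.at(129);                // bounds-checked access
                std::cout << d << '\n';
            } catch (const std::out_of_range& e) {
                std::cout << "out_of_range: " << e.what() << '\n';
            }
            return 0;
        }

    If at(origin) brings the host down despite the try/catch, two things worth ruling out (guesses, not a diagnosis) are that rowTotalsForAccessMode really has fewer than 130 elements, and that the plugin and host were built with mismatched runtime or iterator-debugging settings, so the exception cannot unwind cleanly across the module boundary.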

    Read the article

  • Dynamic Data Extract Tools

    - by Kevin McGovern
    I've been searching around for a few weeks now for either a fully built tool, or a direction for something I could build, for dynamically extracting data via a web interface. Basically, what I'm looking for is a way to give users a list of all available data objects from our database and then let them pick the ones from the list they'd like to view, set parameters, then export the results to an Excel file. Right now we're doing it purely with SQL statements, but we have hundreds of objects, so as you might imagine those statements are really complex and prone to errors. It would be great if there was a tool available to do this or if someone had an idea of an easy way to organize this. Any help would be greatly appreciated. We've looked at BI tools like QlikView and Tableau but that is probably overkill for what we're trying to do. The open-source BI tools we've looked at seemed really primitive in their functionality. The other thing we looked at was MSAS (our DB is SQL Server) but I'd prefer something that was more database-agnostic and lived on a web server instead of on the database.

    Read the article

  • How can I run some common code from both (a) scheduled via Windows Task & (b) manually from within a WinForms app?

    - by Greg
    Hi, QUESTION - How can I run some common code from both (a) scheduled via Windows Task & (b) manually from within WinForms app? BACKGROUND: This follows on from the http://stackoverflow.com/questions/2489999/how-can-i-schedule-tasks-in-a-winforms-app thread REQUIREMENTS C# .NETv3.5 project using VS2008 There is an existing function which I want to run both (a) manually from within the WinForms application, and (b) scheduled via Windows Task. APPROACHES So what I'm trying to understand is what options are there to make this work eg Is it possible for a windows task to trigger a function to run within a running/existing WinForms application? (doesn't sound solid I guess) Split code out into two projects and duplicate for both console application that the task manager would run AND code that the winforms app would run Create a common library and re-use this for both the above-mentioned projects in the bullet above Create a service with an interface that both the task manager can access plus the winforms app can manage Actually each of these approaches sounds quite messy/complex - would be really nice to drop back to have the code only once within the one project in VS2008, the only reason I ask about this is I need to have a scheduling function and the suggestion has been to use http://taskscheduler.codeplex.com/ as the means to do this, which takes the scheduling out of my VS2008 project... thanks

    Read the article

  • Coldfusion 8 and HTTP PUT - is there a way to PUT an object?

    - by ciaranarcher
    Hi all We are using EHCache with CF 8 to cache stuff on a central server using a RESTful interface over HTTP. I am trying to cache a cfquery object to the cache server. I can get this to work if I call EHCache direct (i.e. store it in a local cache) but if I try to cache on a remote server over HTTP I am running into problems. The code I am using is as follows: <cfhttp url="http://localhost:8080/myCache/myKey" method="put" result="r" timeout="2" throwonerror="true" > <cfhttpparam type="body" value="#ARGUMENTS.item#" /> </cfhttp> CF doesn't like this reference to #ARGUMENTS.item# and it complains Complex object types cannot be converted to simple values. Can anyone give me an example of how to put an object over http using CF? If this is not possible with CF then a Java example would be the next best thing. Many thanks in advance! PS: I do not want to use serialization to text/JSON etc. as this approach has issues with data integrity and most importantly it's not fast enough.

    Read the article

  • If I cast an IQueryable as an IEnumerable then call a Linq extension method, which implementation gets called?

    - by James Morcom
    Considering the following code: IQueryable<T> queryable; // something to instantiate queryable var enumerable = (IEnumerable<T>) queryable; var filtered = enumerable.Where(i => i > 3); In the final line, which extension method gets called? Is it IEnumerable<T>.Where(...)? Or will IQueryable<T>.Where(...) be called because the actual implementation is still obviously a queryable? Presumably the ideal would be for the IQueryable version to be called, in the same way that normal polymorphism will always use the more specific override. In Visual Studio though when I right-click on the Where method and "Go to Definition" I'm taken to the IEnumerable version, which kind of makes sense from a visual point-of-view. My main concern is that if somewhere in my app I use Linq to NHibernate to get a Queryable, but I pass it around using an interface that uses the more general IEnumerable signature, I'll lose the wonders of deferred database execution!
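
    For the record, extension methods bind statically on the compile-time type, so the Where in the snippet above resolves to Enumerable.Where (delegate, in-memory) rather than Queryable.Where (expression tree) - there is no runtime dispatch back to the queryable version. A short illustration:

        using System.Collections.Generic;
        using System.Linq;

        class Demo
        {
            static void Main()
            {
                IQueryable<int> queryable = new[] { 1, 2, 3, 4, 5 }.AsQueryable();

                var q1 = queryable.Where(i => i > 3);    // Queryable.Where: builds an expression tree

                IEnumerable<int> enumerable = queryable;
                var q2 = enumerable.Where(i => i > 3);   // Enumerable.Where: compiled delegate, runs in memory

                // AsQueryable() recovers the queryable behaviour when the underlying object
                // really is an IQueryable<T> (in that case it just casts rather than wrapping).
                var q3 = enumerable.AsQueryable().Where(i => i > 3);
            }
        }

    So for the Linq to NHibernate concern: filtering through an IEnumerable<T>-typed reference does run client-side; keeping the variable typed as IQueryable<T> (or calling AsQueryable() before the Where) preserves the deferred, database-side execution.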

    Read the article

  • C++: Constructor/destructor unresolved when not inline?

    - by Anamon
    In a plugin-based C++ project, I have a TmpClass that is used to exchange data between the main application and the plugins. Therefore the respective TmpClass.h is included in the abstract plugin interface class that is included by the main application project, and implemented by each plugin. As the plugins work on STL vectors of TmpClass instances, there needs to be a default constructor and destructor for the TmpClass. I had declared these in TmpClass.h: class TmpClass { TmpClass(); ~TmpClass(); }; and implemented them in TmpClass.cpp. TmpClass::~TmpClass() {} TmpClass::TmpClass() {} However, when compiling plugins this leads to the linker complaining about two unresolved externals - the default constructor and destructor of TmpClass as required by the std::vector<TmpClass> template instantiation - even though all other functions I declare in TmpClass.h and implement in TmpClass.cpp work. As soon as I remove the (empty) default constructor and destructor from the .cpp file and inline them into the class declaration in the .h file, the plugins compile and work. Why is it that the default constructor and destructor have to be inline for this code to compile? Why does it even matter? (I'm using MSVC++8).
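
    The usual explanation (offered as a guess, since it depends on how the projects are wired) is that the plugin projects compile TmpClass.h but never link the object code produced from TmpClass.cpp - that .obj only exists inside the main application project - so any member defined solely there is an unresolved external from the plugin's point of view. std::vector<TmpClass> forces each plugin to instantiate code that calls the constructor and destructor, which is why exactly those two symbols show up. Defining them in the header makes them implicitly inline, giving every translation unit its own copy:

        // TmpClass.h -- header-only sketch: no TmpClass.cpp needs to be linked by the plugins.
        #ifndef TMPCLASS_H
        #define TMPCLASS_H

        class TmpClass {
        public:
            TmpClass() {}      // defined in-class, therefore implicitly inline
            ~TmpClass() {}
            // ... data members shared between host and plugins ...
        };

        #endif

    The non-inline alternatives would be to add TmpClass.cpp to every plugin project, or to move the class into a library (static, or a DLL with __declspec(dllexport)/dllimport) that both the host and the plugins link against.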

    Read the article

  • How to Sort a TreeList in Sitecore 6 in the Source

    - by Scott
    My team uses Sitecore 6 as a content management system and .NET to interface with the Sitecore API. In many of our templates we make use of a Treelist. When adding a new item to the selected items of a Treelist, it automatically puts the item at the bottom of the list, and in some cases these lists get very large. In most cases end users would like to see these lists sorted descending by a Date field that is part of the templates that can be added as selected to the Treelist. Programmatically on the .NET side it's very easy to handle this using Linq OrderByDescending, and everything displays great in the site to visitors. What I am trying to figure out is how to get it to display the same in the Sitecore Content Editor. I've not found anything from a Google search other than there seems to be a SortBy you can specify in the source, but I tried this and can't get it to have any effect. Has anyone dealt with this before? Again, the main goal is to sort items in a Treelist in the Sitecore Content Editor itself. Thanks for any input anyone has.

    Read the article

  • Need advice on C++ coding pattern

    - by Kotti
    Hi! I have a working prototype of a game engine and right now I'm doing some refactoring. What I'm asking for is your opinion on the usage of the following C++ coding patterns. I have implemented some trivial algorithms for collision detection and they are implemented the following way: Not shown here - class constructor is made private and using the algorithms looks like Algorithm::HandleInnerCollision(...) struct Algorithm { // Private routines static bool is_inside(Point& p, Object& object) { // (...) } public: /** * Handle collision where the moving object should be always * located inside the static object * * @param MovingObject & mobject * @param const StaticObject & sobject * @return void * @see */ static void HandleInnerCollision(MovingObject& mobject, const StaticObject& sobject) { // (...) } So, my question is - somebody advised me to do it "the C++" way - so that all functions are wrapped in a namespace, but not in a class. Is there some good way to keep those helpers private if I wrap them into a namespace as advised? What I want to have is a simple interface and the ability to call functions as Algorithm::HandleInnerCollision(...) while not polluting the namespace with other functions such as is_inside(...) Or, if you can suggest any alternative design pattern for this kind of logic, I would really appreciate that...
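
    One conventional answer: declare only the public entry points in the header under a namespace, and give helpers such as is_inside internal linkage by putting them in an unnamed namespace (or marking them static) inside the .cpp - they then never appear in the public interface, yet the call site still reads Algorithm::HandleInnerCollision(...). A structural sketch reusing the types from the snippet above:

        // algorithm.h -- public interface only
        namespace Algorithm {
            void HandleInnerCollision(MovingObject& mobject, const StaticObject& sobject);
        }

        // algorithm.cpp -- helpers hidden from every other translation unit
        #include "algorithm.h"

        namespace {
            // internal linkage: invisible outside this .cpp
            bool is_inside(const Point& p, const Object& object) {
                // (...)
                return true;
            }
        }

        namespace Algorithm {
            void HandleInnerCollision(MovingObject& mobject, const StaticObject& sobject) {
                // (...) free to call is_inside here
            }
        }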

    Read the article

  • NSMutableArray for Object which has NSString property causes memory leak

    - by user262325
    Hello everyone. I want to add objects to an NSMutableArray "myArray". The NSMutableArray holds FileObj objects, each of which has an NSString property "fileName": #import <UIKit/UIKit.h> @interface FileObj : NSObject { NSString *fileName; } -(void) setfileName:(NSString *)s ; -(NSString *) getfileName ; @end // // File.m// #import "File.h" @implementation FileObj -(void) setfileName:(NSString *)s ; { fileName=s; } -(NSString *) getfileName ; { return fileName; } @end I initialize myArray here: NSMutableArray *temarray; temarray=[[NSMutableArray alloc] init]; self.myArray=temarray; [temarray release]; The code to add an object to myArray: FileObj *newobj=[[FileObj alloc]init ]; NSString *fieldValue2 = [[NSString alloc] initWithUTF8String:@"aaaa"]; [newobj setfileName:fieldValue2]; [myArray addObject:newobj]; [fieldValue2 release]; //**if I enable this line, it causes a crash** //**if I disable it, it causes a memory leak** [newobj release]; Any comments are welcome. Thanks, interdev
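
    A hedged reading of the crash/leak trade-off: setfileName: only stores the pointer, so releasing fieldValue2 leaves the stored string dangling (crash on later use), and not releasing it leaks. The usual manual-reference-counting fix is to copy (or retain) in the setter and release in dealloc - also note that initWithUTF8String: expects a C string such as "aaaa", not an @"aaaa" literal. A sketch:

        // FileObj.m (manual reference counting)
        - (void)setfileName:(NSString *)s {
            if (s != fileName) {
                [fileName release];     // drop the previous value
                fileName = [s copy];    // take ownership of the new one
            }
        }

        - (NSString *)getfileName {
            return fileName;
        }

        - (void)dealloc {
            [fileName release];         // balance the copy when the object goes away
            [super dealloc];
        }

        // Caller: the temporary can (and should) be released once it has been handed over.
        // NSString *fieldValue2 = [[NSString alloc] initWithUTF8String:"aaaa"];
        // [newobj setfileName:fieldValue2];
        // [fieldValue2 release];
        // [myArray addObject:newobj];  // the array retains newobj
        // [newobj release];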

    Read the article

  • Can I control object creation using MEF?

    - by Akash
    I need to add some extension points to our existing code, and I've been looking at MEF as a possible solution. We have an IRandomNumberGenerator interface, with a default implementation (ConcreteRNG) that we would like to be swappable. This sounds like an ideal scenario for MEF, but I've been having problems with the way we instantiate the random number generators. Our current code looks like: public class Consumer { private List<IRandomNumberGenerator> generators; private List<double> seeds; public Consumer() { generators = new List<IRandomNumberGenerator>(); seeds = new List<double>(new[] {1.0, 2.0, 3.0}); foreach(var seed in seeds) { generators.Add(new ConcreteRNG(seed); } } } In other words, the consumer is responsible for instantiating the RNGs it needs, including providing the seed that each instance requires. What I'd like to do is to have the concrete RNG implementation discovered and instantiated by MEF (using the DirectoryCatalog). I'm not sure how to achieve this. I could expose a Generators property and mark it as an [Import], but how do I provide the required seeds? Is there some other approach I am missing?
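
    One way to keep the seeds in the consumer's hands is to let MEF discover a factory rather than the generator itself: the DirectoryCatalog finds the exported factory at runtime, and the consumer calls it once per seed. A sketch using the attributed model - the factory interface, class names and "extensions" folder are inventions for illustration, not part of the existing code:

        using System.Collections.Generic;
        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;

        // Hypothetical extension point added alongside IRandomNumberGenerator.
        public interface IRandomNumberGeneratorFactory
        {
            IRandomNumberGenerator Create(double seed);
        }

        // Lives in the swappable assembly dropped into the extensions folder.
        [Export(typeof(IRandomNumberGeneratorFactory))]
        public class ConcreteRngFactory : IRandomNumberGeneratorFactory
        {
            public IRandomNumberGenerator Create(double seed) { return new ConcreteRNG(seed); }
        }

        public class Consumer
        {
            [Import]
            private IRandomNumberGeneratorFactory factory = null;

            private readonly List<IRandomNumberGenerator> generators = new List<IRandomNumberGenerator>();

            public Consumer()
            {
                var catalog = new DirectoryCatalog("extensions");   // folder name is an assumption
                var container = new CompositionContainer(catalog);  // keep alive as long as parts are used
                container.ComposeParts(this);                       // satisfies the [Import] above

                foreach (var seed in new[] { 1.0, 2.0, 3.0 })
                    generators.Add(factory.Create(seed));
            }
        }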

    Read the article

  • Problem calling a function from a different class using protocols (iPhone)

    - by Muniraj
    I use cocos2d for my game, in which I play a movie and have a separate overlay view for controls. The touches are detected in the overlay view. Now when the touches are detected, the function in the game code has to be invoked, but the function is not called and there is no error. I don't know what has gone wrong. Someone please help me. The code is as follows. The protocol part is @protocol Protocol @required -(void) transition1:(id) sender; @end The function which is to be invoked in the game code is -(void) transition1:(id) sender { [[Director sharedDirector] replaceScene: [ [Scene node] addChild: [Layer4 node] z:0] ]; } The code in the overlay view in MovieOverlayViewController.h #import "Protocol.h" @interface MovieOverlayViewController : UIViewController { UIImageView *overlay; NSObject <Protocol> *transfer; } @end The code in the overlay view in MovieOverlayViewController.m @implementation MovieOverlayViewController -(id)init { if ((self = [super init])) self.view = [[[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]] autorelease]; return self; } -(void) viewWillAppear:(BOOL)animated { overlay = [[[UIImageView alloc] initWithImage:[UIImage imageNamed:@"overlay.png"]] autorelease]; [self.view addSubview:overlay]; } -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; CGPoint point = [touch locationInView:self.view]; NSLog(@"pointx: %f pointy:%f", point.x, point.y); if (CGRectContainsPoint(CGRectMake(1, 440, 106, 40), point)) { // the function is called here [transfer transition1: nil]; } else if (CGRectContainsPoint(CGRectMake(107, 440, 106, 40), point)) NSLog(@"tab 2 touched"); } -(void)dealloc { [overlay release]; [super dealloc]; } @end
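
    One thing worth checking (a guess from the code shown): transfer is declared but never assigned, and in Objective-C a message sent to nil is silently ignored - which would produce exactly the "nothing happens, no error" symptom. Wiring the delegate up when the overlay is created would look roughly like this (the property and the creation site are assumptions):

        // MovieOverlayViewController.h -- expose the delegate as a property
        @property (nonatomic, assign) NSObject<Protocol> *transfer;   // delegates are usually assign, not retain
        // (plus @synthesize transfer; in MovieOverlayViewController.m)

        // Wherever the overlay is created, inside the class that implements -transition1:
        MovieOverlayViewController *overlayVC = [[MovieOverlayViewController alloc] init];
        overlayVC.transfer = self;   // 'self' must adopt <Protocol> and implement transition1: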

    Read the article

  • Database (MySQL) structuring: pros and cons of multiple tables

    - by Gideon
    I am collecting data and storing it in MySQL, for 75 variables and 55 countries, each year. At this stage, since I am still building this tool, I have created a single table of variables / countries (storing one year's worth of data). Next year (and for several years after that) a new set of data will be input for each country. There are therefore three variables controlling the data returned to a user reviewing all collected data. The general form of any query would be: Show me these specific variables, for these specific countries, for these specific years. (Show me average age and weight, for USA and Canada, for 2012 and 2009, for example.) My question is, it seems that I have two options for arranging this data: multiple tables, where I create a table of country / variable for each year data is collected, or a single table, simply adding a column (field) for the year that the data relates to. As far as I can tell I could make these database calls with either structure, but is one more powerful / efficient / quicker, and why? Thanks for your consideration. It's a PDO / PHP interface if that is relevant.
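
    A sketch of the single-table option (table and column names are illustrative): keying on country and year means the schema never changes when a new year's data arrives, and the three filters in the general query map directly onto the WHERE clause and the primary-key index - usually simpler and at least as fast as one table per year, since MySQL never has to scan or UNION multiple tables.

        CREATE TABLE country_data (
            country_code CHAR(3)  NOT NULL,
            year         SMALLINT NOT NULL,
            avg_age      DECIMAL(10,2),
            avg_weight   DECIMAL(10,2),
            -- ... the remaining ~73 variable columns ...
            PRIMARY KEY (country_code, year)
        );

        -- "Show me average age and weight, for USA and Canada, for 2012 and 2009":
        SELECT country_code, year, avg_age, avg_weight
        FROM   country_data
        WHERE  country_code IN ('USA', 'CAN')
          AND  year IN (2009, 2012);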

    Read the article

  • Should I *always* import my file references into the database in drupal?

    - by sprugman
    I have a cck type with an image field, and a unique_id text field. The file name of the image is based on the unique_id. All of the content, including the image itself, is being generated automatically via another process, and I'm parsing what that generates into nodes. Rather than creating separate fields for the id and the image, and doing an official import of the image into the files table, I'm tempted to only create the id field and create the file reference in the theme layer. I can think of pros and cons: 1) Theme layer approach - Pros: makes the import process much less complex; don't have to worry about syncing the db with the file system as things change; more flexible -- I can move my images around more easily if I want. Cons: maybe not as much The Drupal Way™; not as pure -- I'll wind up with more logic on the theme side. 2) Import approach - Pros: the import method is required if we ever wanted to make the files private (we won't); safer? Maybe I'll know if there's a problem with the image at import time, rather than view time, and since I'll be bulk importing, that might make a difference; if I delete a node through the admin interface, Drupal might be able to delete the file for me as well. Con: more complex import and maintenance. All else being equal, simpler is always better, so I'm leaning toward #1. Are there any other issues I'm missing? (Since this is an open ended question, I guess I'll make it a community wiki, whatever that means.)

    Read the article

  • File IO with Streams - Best Memory Buffer Size

    - by AJ
    I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read / written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: What is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512 byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise with today's PC's, I could go for a more memory hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP / HTTP servers (over a local network / fastish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness vs. performance).
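
    As a rough rule (not a benchmark), FileStream-to-FileStream copies stop gaining much once the buffer reaches the tens of kilobytes - something like 64-80 KB is a common sweet spot - and UI responsiveness can be decoupled from the buffer size by raising the progress event on a byte threshold instead of on every Read. A sketch:

        using System;
        using System.IO;

        public static class CopyHelper
        {
            // Copies source to destination in 64 KB chunks, reporting progress
            // roughly once per megabyte rather than once per buffer.
            public static void Copy(Stream source, Stream destination, Action<long> progress)
            {
                const int BufferSize = 64 * 1024;
                const long ReportEvery = 1024 * 1024;

                var buffer = new byte[BufferSize];
                long total = 0, lastReport = 0;
                int read;
                while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                {
                    destination.Write(buffer, 0, read);
                    total += read;
                    if (total - lastReport >= ReportEvery)
                    {
                        progress(total);
                        lastReport = total;
                    }
                }
                progress(total);   // final report
            }
        }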

    Read the article

  • Cocoa button won't display image

    - by A Mann
    Just started exploring Cocoa so pretty much a total noob. I've written a very simple game. Set it up in Interface Builder and got it working fine. It contains a number of buttons and I'm now trying to get the buttons to display images. To start with I'm trying to get an image displayed on just one of the buttons which is called tile0 . The image file (it's nothing but a green square at the moment, but I'm just trying to get that working before I attempt anything more exotic) is sitting in the same directory as the class file which controls the game. I have the following code sitting in my wakeFromNib method: NSString *myImageFileName = [[NSString alloc] init]; myImageFileName = @"greenImage.jpg"; NSImage *myImage = [[NSImage alloc] initByReferencingFile:myImageFileName]; [tile0 setImage: myImage]; Trouble is, the game runs fine, but the image isn't appearing on my button. Is there someone who could kindly tell me if I'm doing something obviously wrong? Many Thanks.
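
    Two likely culprits, offered as guesses: the method has to be named awakeFromNib (a method called wakeFromNib is never invoked by Cocoa), and the image file has to be added to the Xcode target so it is copied into the app bundle's Resources - sitting next to the class source file is not enough, and initByReferencingFile: wants a full path rather than a bare file name. A sketch of the two usual loading styles:

        - (void)awakeFromNib
        {
            // Simplest: look the image up by name in the app bundle's resources.
            NSImage *myImage = [NSImage imageNamed:@"greenImage"];

            // Or build the full path explicitly:
            // NSString *path = [[NSBundle mainBundle] pathForResource:@"greenImage" ofType:@"jpg"];
            // NSImage *myImage = [[[NSImage alloc] initWithContentsOfFile:path] autorelease];

            [tile0 setImage:myImage];
        }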

    Read the article

  • Error with custom Class definition in protocol

    - by Greg
    I'm trying to set up a custom delegate protocol and am getting a strange error that I don't understand. I wonder if someone could point out what I'm doing wrong here (I'm still new to Ob-C and protocol use)... The situation is that I've built my own URLLoader class to manage loading and parsing data from the internet. I'm now trying to set up a protocol for delegates to implement that will respond to the URLLoader's events. So, below is my protocol... #import <UIKit/UIKit.h> #import "URLLoader.h" /** * Protocol for delegates that will respond to a load. */ @protocol URLLoadResponder <NSObject> - (void)loadDidComplete:(URLLoader *)loader; - (void)loadDidFail:(URLLoader *)loader withError:(NSString *)error; @end However, I'm getting the following error for both method signatures: Expected ')' before 'URLLoader' I feel like I must be overlooking something small and silly. Any help folks could offer would be greatly appreciated! Whoops ... it was pointed out that I should include URLLoader.h. Here it is: #import <Foundation/Foundation.h> #import "URLLoadResponder.h" /** * URLLoader inferface. */ @interface URLLoader : NSObject { NSString *name; NSString *loadedData; NSMutableData *responseData; NSObject *delegate; BOOL _isLoaded; } @property (nonatomic, retain) NSString *name; @property (nonatomic, retain) NSString *loadedData; @property (nonatomic, retain) NSObject *delegate; - (void)loadFromURL:(NSString *)url; - (void)addCompleteListener:(id)observer selector:(SEL)sel; - (void)removeCompleteListener:(id)observer; - (void)parseLoadedData:(NSString *)data; - (void)complete; - (void)close; - (BOOL)isLoaded; + (NSURL *)makeUrlWithString:(NSString *)url; + (URLLoader *)initWithName:(NSString *)name; @end
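
    The error reads like the classic circular-import problem: URLLoadResponder.h imports URLLoader.h while URLLoader.h imports URLLoadResponder.h, so whichever header is compiled first refers to a type that has not been declared yet. The usual fix is to forward-declare across the headers and keep the real #import in the .m files. A sketch of just the changed lines:

        // URLLoadResponder.h
        #import <Foundation/Foundation.h>

        @class URLLoader;        // forward declaration instead of #import "URLLoader.h"

        @protocol URLLoadResponder <NSObject>
        - (void)loadDidComplete:(URLLoader *)loader;
        - (void)loadDidFail:(URLLoader *)loader withError:(NSString *)error;
        @end

        // URLLoader.h no longer needs #import "URLLoadResponder.h" at all (nothing in it
        // uses the protocol type as written); URLLoader.m can #import both headers where
        // they are actually used.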

    Read the article

  • How to query a .NET assembly's required framework (not CLR) version?

    - by Bonfire Burns
    Hi, we are using some kind of plug-in architecture in one of our products (based on .NET). We have to consider our customers or even 3rd party devs writing plug-ins for the product. The plug-ins will be .NET assemblies that are loaded by our product at run-time. We have no control about the quality or capabilities of the external plug-ins (apart from checking whether they implement the correct interfaces). So we need to implement some kind of safety check while loading the plug-ins to make sure that our product (and the hosting environment) can actually host the plug-in or deliver a meaningful error message ("The plug-in your are loading needs .NET version 42.42 - the hosting system is only on version 33.33."). Ideally the plug-ins would do this check internally, but our experience regarding their competence is so-so and in any case our product will get the blame, so we want to make sure that this "just works". Requiring the plug-in developers to provide the info in the metadata or to explicitly provide the information in the interface is considered "too complicated". I know about the Assembly.ImageRuntimeVersion property. But to my knowledge this tells me only the needed CLR version, not the framework version. And I don't want to check all of the assembly's dependencies and match them against a table of "framework version vs. available assemblies". Do you have any ideas how to solve this in a simple and maintainable fashion? Thanks & regards, Bon

    Read the article

  • REST design: what verb and resource name to use for a filtering service

    - by kabaros
    I am developing a cleanup/filtering service that has a method that receives a list of objects serialized in xml, and apply some filtering rules to return a subset of those objects. In a REST-ful service, what verb shall I use for such a method? I thought that GET is a natural choice, but I have to put the serialized XML in the body of the request which works but feels incorrect. The other verbs don't seem to fit semantically. What is a good way to define that Service interface? Naming the resource /Cleanup or /Filter seems weird mainly because in the examples I see online, it is always a name rather than a verb being used for resource name. Am I right to feel that REST services are better suited for CRUD operations and you start bending the rules in situations like this service? If yes, am I then making a wrong architectural choice. I've pushed to develop this service in REST-ful style (as opposed to SOAP) for simplicity, but such awkward cases happen a lot and make me feel like I am missing something. Either choosing REST where it shouldn't be used or may be over-thinking some stuff that doesn't really matter? In that case, what really matters?

    Read the article

  • memory leak when removing objects in NSMutableArray

    - by user262325
    Hello everyone. I want to store MYFileObj objects in an NSMutableArray (fileArray) and display the data in a UITableView (atableview). //----------------------------------MYFileObj #import <UIKit/UIKit.h> @interface MYFileObj : NSObject { NSString *fileName; } -(void) setFileName:(NSString *)s ; -(NSString *) fileName ; @end The array I want to store the data in: NSMutableArray *fileArray; I create a new object and add it to fileArray: MYFileObj *newobj=[[MYFileObj alloc] init ]; NSString *ss=[[NSString alloc] initWithFormat:@"%@",path] ; [newobj setFileName:ss]; [ss release]; [fileArray addObject:newobj]; [newobj release]; [atableview reloadData]; After the first reloadData and doing something, I want to reload fileArray and redraw atableview. //code to remove all objects from atableview if([fileArray count]>0) { [fileArray removeAllObjects]; [atableview reloadData]; } I notice that there is a memory leak. I would like to know whether the method "removeAllObjects" releases only the MYFileObj objects themselves or also each MYFileObj's "fileName" member. Thanks interdev

    Read the article

  • How is covariance cooler than polymorphism...and not redundant?

    - by P.Brian.Mackey
    .NET 4 introduces covariance. I guess it is useful. After all, MS went through all the trouble of adding it to the C# language. But, why is Covariance more useful than good old polymorphism? I wrote this example to understand why I should implement Covariance, but I still don't get it. Please enlighten me. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace Sample { class Demo { public delegate void ContraAction<in T>(T a); public interface IContainer<out T> { T GetItem(); void Do(ContraAction<T> action); } public class Container<T> : IContainer<T> { private T item; public Container(T item) { this.item = item; } public T GetItem() { return item; } public void Do(ContraAction<T> action) { action(item); } } public class Shape { public void Draw() { Console.WriteLine("Shape Drawn"); } } public class Circle:Shape { public void DrawCircle() { Console.WriteLine("Circle Drawn"); } } public static void Main() { Circle circle = new Circle(); IContainer<Shape> container = new Container<Circle>(circle); container.Do(s => s.Draw());//calls shape //Old school polymorphism...how is this not the same thing? Shape shape = new Circle(); shape.Draw(); } } }
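
    The difference only shows up one level out: ordinary polymorphism lets a Circle stand in for a Shape, while covariance lets a whole IContainer<Circle> or IEnumerable<Circle> stand in for the Shape-typed version of that interface - a conversion the compiler rejects without the out annotation. A short sketch of the case polymorphism alone cannot express:

        using System.Collections.Generic;

        class Shape { }
        class Circle : Shape { }

        class CovarianceDemo
        {
            static void Main()
            {
                // Classic polymorphism: an element can be treated as its base type.
                Shape s = new Circle();

                // Covariance: the generic wrapper itself can be treated as a wrapper of the base type.
                IEnumerable<Circle> circles = new List<Circle> { new Circle() };
                IEnumerable<Shape> shapes = circles;          // legal: IEnumerable<out T> (C# 4)

                // Without variance this conversion does not exist:
                // List<Shape> bad = new List<Circle>();      // compile error -- List<T> is invariant

                // Practical payoff: one method accepts any sequence of any Shape subtype.
                DrawAll(circles);
            }

            static void DrawAll(IEnumerable<Shape> shapes) { /* draw each shape */ }
        }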

    Read the article

  • Large Switch statements: Bad OOP?

    - by Mystere Man
    I've always been of the opinion that large switch statements are a symptom of bad OOP design. In the past, I've read articles that discuss this topic and they have provided alternative OOP-based approaches, typically based on polymorphism to instantiate the right object to handle the case. I'm now in a situation that has a monstrous switch statement based on a stream of data from a TCP socket in which the protocol consists basically of a newline-terminated command, followed by lines of data, followed by an end marker. The command can be one of 100 different commands, so I'd like to find a way to reduce this monster switch statement to something more manageable. I've done some googling to find the solutions I recall, but sadly, Google has become a wasteland of irrelevant results for many kinds of queries these days. Are there any patterns for this sort of problem? Any suggestions on possible implementations? One thought I had was to use a dictionary lookup, matching the command text to the object type to instantiate. This has the nice advantage of merely creating a new object and inserting a new command/type in the table for any new commands. However, this also has the problem of type explosion. I now need 100 new classes, plus I have to find a way to interface them cleanly to the data model. Is the "one true switch statement" really the way to go? I'd appreciate your thoughts, opinions, or comments.
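
    A middle ground that avoids both the monster switch and one-class-per-command is a dictionary from command text to a handler delegate: registration stays in one place, adding a command is one entry plus one method, and no type explosion is needed unless some commands genuinely carry their own state. A sketch (command names and payload shape are invented):

        using System;
        using System.Collections.Generic;

        class CommandDispatcher
        {
            // One entry per protocol command; handlers receive the data lines that
            // arrived between the command line and the end marker.
            private readonly Dictionary<string, Action<IList<string>>> handlers =
                new Dictionary<string, Action<IList<string>>>(StringComparer.OrdinalIgnoreCase);

            public CommandDispatcher()
            {
                handlers["LOGIN"]  = HandleLogin;
                handlers["STATUS"] = HandleStatus;
                // ... the remaining commands register here (or via reflection/attributes)
            }

            public void Dispatch(string command, IList<string> dataLines)
            {
                Action<IList<string>> handler;
                if (handlers.TryGetValue(command, out handler))
                    handler(dataLines);
                else
                    HandleUnknown(command);
            }

            private void HandleLogin(IList<string> data)  { /* ... */ }
            private void HandleStatus(IList<string> data) { /* ... */ }
            private void HandleUnknown(string command)    { /* log / protocol error */ }
        }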

    Read the article
