Search Results

Search found 3300 results on 132 pages for 'dependency walker'.


  • Seeking suggestions on redesigning the interface

    - by ratkok
    As part of maintaining a large piece of legacy code, we need to change part of the design, mainly to make it more testable (unit testing). One of the issues we need to resolve is the existing interface between components: it is a class that contains static methods only. Simplified example:

        class ABInterface {
            static void methodA();
            static void methodB();
            ...
            static void methodZ();
        };

    The interface is used by component A so that its methods can call ABInterface::methodA() to prepare some input data and then invoke the appropriate functions within component B. We are now trying to redesign this interface for several reasons:

    - Extending our unit test coverage - we need to break this dependency between the components so that stubs/mocks can be introduced.
    - The interface between these components has diverged from the original design (i.e. a lot of newer functions used for the inter-component interface were created outside this interface class).
    - The code is old, has changed a lot over time, and needs to be refactored.

    The change should not be disruptive for the rest of the system, and we try to limit leaving test-required artifacts in the production code. Performance is very important, so there should be no (or very minimal) degradation after the redesign. The code is OO C++. I am looking for some ideas on what approach to take. Any suggestions on how to do this efficiently?
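
    One approach worth considering (not from the original post) is to keep ABInterface as-is but introduce an abstract seam in front of it: component A talks to a small virtual interface, the production implementation simply forwards to the existing static methods, and unit tests substitute a fake. A minimal sketch, with ABInterface stubbed out and all names purely illustrative:

        #include <iostream>

        // Stand-in for the existing all-static interface class.
        struct ABInterface {
            static void methodA() { std::cout << "real methodA\n"; }
            static void methodB() { std::cout << "real methodB\n"; }
        };

        // New seam: component A depends only on this abstract interface.
        class IComponentB {
        public:
            virtual ~IComponentB() = default;
            virtual void methodA() = 0;
            virtual void methodB() = 0;
        };

        // Production adapter: forwards to the old statics, so existing behaviour
        // (and performance) is unchanged apart from one virtual call per invocation.
        class ComponentBAdapter : public IComponentB {
        public:
            void methodA() override { ABInterface::methodA(); }
            void methodB() override { ABInterface::methodB(); }
        };

        // Test double used only in unit tests.
        class FakeComponentB : public IComponentB {
        public:
            void methodA() override { ++callsToA; }
            void methodB() override { ++callsToB; }
            int callsToA = 0;
            int callsToB = 0;
        };

        int main() {
            ComponentBAdapter production;
            production.methodA();              // goes through to ABInterface

            FakeComponentB fake;
            fake.methodA();                    // recorded, nothing real happens
            std::cout << fake.callsToA << "\n";
        }

    Component A would receive an IComponentB& (or pointer) through its constructor, so the only test artifact left in production code is the small adapter class.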

    Read the article

  • Developing on a Windows machine that interacts with a Linux system

    - by Jamie
    Sorry for the bad title (I couldn't think of a better way to describe it). I have a Windows machine which I do development on. However, I have a new project which needs to interact with a Linux system (executing Linux commands etc.), so obviously I can't do all the development on my Windows machine - and I don't want to code on the dev machine, svn commit, and then svn update on the Linux machine. Is there a way for any changes I make on my dev machine to be quickly mirrored to the Linux machine? SVN is not a very quick alternative, and of course some changes will be very minor. Any ideas? A network share, I guess, but that's not very pretty (and a bit slow too). As fellow developers, I would like to know if you've been in a similar situation and how you resolved it.

    On a further note, I can't just install Ubuntu as my development machine and mirror the commands, applications etc. from the Linux machine, because it's a cluster 'master' machine and therefore has quite a special configuration. Thanks!

    EDIT: I've also thought about having web services on the Linux machine and just calling them from code, thus separating the platform dependency from development. What do you think about that too? Thanks.

    Read the article

  • What is a reasonable OSGi development workflow?

    - by levand
    I'm using OSGi for my latest project at work, and it's pretty beautiful as far as modularity and functionality. But I'm not happy with the development workflow. Eventually, I plan to have 30-50 separate bundles, arranged in a dependency graph - supposedly, this is what OSGi is designed for. But I can't figure out a clean way to manage dependencies at compile time. Example: you have bundles A and B. B depends on packages defined in A. Each bundle is developed as a separate Java project. In order to compile B, A has to be on the javac classpath. Do you:

    1. Reference the file system location of project A in B's build script?
    2. Build A and throw the jar into B's lib directory?
    3. Rely on Eclipse's "referenced projects" feature and always use Eclipse's classpath to build (ugh)?
    4. Use a common "lib" directory for all projects and dump the bundle jars there after compilation?
    5. Set up a bundle repository, parse the manifest from the build script and pull down the required bundles from the repository?

    No. 5 sounds the cleanest, but also like a lot of overhead.
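
    For what it's worth (not from the original post), option 5 is roughly what a Maven-based OSGi build gives out of the box: each bundle is a Maven module, inter-bundle dependencies are ordinary Maven dependencies resolved from a repository, and the Apache Felix maven-bundle-plugin generates the OSGi manifest at build time. A minimal sketch of bundle B's POM, with the group/artifact ids made up for illustration:

        <project>
          <modelVersion>4.0.0</modelVersion>
          <groupId>com.example</groupId>
          <artifactId>bundle-b</artifactId>
          <version>1.0.0</version>
          <packaging>bundle</packaging>

          <dependencies>
            <!-- Compile against bundle A pulled from the repository -->
            <dependency>
              <groupId>com.example</groupId>
              <artifactId>bundle-a</artifactId>
              <version>1.0.0</version>
            </dependency>
          </dependencies>

          <build>
            <plugins>
              <!-- Generates the OSGi manifest (Import-Package etc.) at build time -->
              <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-bundle-plugin</artifactId>
                <extensions>true</extensions>
              </plugin>
            </plugins>
          </build>
        </project>

    The repository then doubles as the shared "lib" directory, without anything being copied around by hand.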

    Read the article

  • Castle Windsor: Reuse resolved component in OnCreate, UsingFactoryMethod or DynamicParameters

    - by shovavnik
    I'm trying to execute an action on a resolved component before it is returned as a dependency to the application. For example, with this graph:

        public class Foo : IFoo { }

        public class Bar
        {
            IFoo _foo;
            IBaz _baz;

            public Bar(IFoo foo, IBaz baz)
            {
                _foo = foo;
                _baz = baz;
            }
        }

    When I create an instance of IFoo, I want the container to instantiate Bar and pass the already-resolved IFoo to it, along with any other dependencies it requires. So when I call:

        var foo = container.Resolve<IFoo>();

    the container should automatically call:

        container.Resolve<Bar>(); // should pass foo and instantiate IBaz

    I've tried using OnCreate, DynamicParameters and UsingFactoryMethod, but the problem they all share is that they don't hold an explicit reference to the component:

    - DynamicParameters is called before IFoo is instantiated.
    - OnCreate is called after, but the delegate doesn't pass the instance.
    - UsingFactoryMethod doesn't help because I need to register these components with TService and TComponent.

    Ideally, I'd like a registration to look something like this:

        container.Register<IFoo, Foo>((kernel, foo) => kernel.Resolve<Bar>(new { foo }));

    Note that IFoo and Bar are registered with the transient lifestyle, which means that the already-resolved instance has to be passed to Bar - it can't be "re-resolved". Is this possible? Am I missing something?

    Read the article

  • Named type not used for constructor injection

    - by nmarun
    Hi, I have a simple console application with the following setup:

        public interface ILogger
        {
            void Log(string message);
        }

        class NullLogger : ILogger
        {
            private readonly string version;

            public NullLogger()
            {
                version = "1.0";
            }

            public NullLogger(string v)
            {
                version = v;
            }

            public void Log(string message)
            {
                Console.WriteLine("NULL " + version + " : " + message);
            }
        }

    The configuration details are below:

        <type type="UnityConsole.ILogger, UnityConsole" mapTo="UnityConsole.NullLogger, UnityConsole" />

    My calling code looks like this:

        IUnityContainer container = new UnityContainer();
        UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
        section.Containers.Default.Configure(container);
        ILogger nullLogger = container.Resolve<ILogger>();
        nullLogger.Log("hello");

    This works fine, but once I give this type a name, something like:

        <type type="UnityConsole.ILogger, UnityConsole" mapTo="UnityConsole.NullLogger, UnityConsole" name="NullLogger" />

    the above calling code does not work, even if I explicitly register the type using container.RegisterType<ILogger, NullLogger>(). I get the error:

        {"Resolution of the dependency failed, type = \"UnityConsole.ILogger\", name = \"\".
        Exception message is: The current build operation (build key Build Key[UnityConsole.NullLogger, null])
        failed: The parameter v could not be resolved when attempting to call constructor
        UnityConsole.NullLogger(System.String v). (Strategy type BuildPlanStrategy, index 3)"}

    Why doesn't Unity look into named instances? To get it to work, I have to do:

        ILogger nullLogger = container.Resolve<ILogger>("NullLogger");

    Where is this behavior documented? Arun
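
    A note that may help (my reading of Unity's behaviour, not from the original post): Unity treats the default (unnamed) mapping and named mappings as completely separate registrations, so Resolve<ILogger>() never falls back to a named one. On top of that, calling RegisterType<ILogger, NullLogger>() with no injection members makes Unity pick the greediest constructor, NullLogger(string v), and it then fails to resolve the string parameter - which is exactly the error shown. A sketch of a programmatic setup (assuming Unity's InjectionConstructor is available, as in Unity 1.2+) that keeps both registrations working:

        IUnityContainer container = new UnityContainer();

        // Default (unnamed) mapping: force the parameterless constructor so
        // Unity does not try to resolve the 'v' string parameter.
        container.RegisterType<ILogger, NullLogger>(new InjectionConstructor());

        // Named mapping, resolvable only by name.
        container.RegisterType<ILogger, NullLogger>("NullLogger", new InjectionConstructor("2.0"));

        ILogger byDefault = container.Resolve<ILogger>();              // unnamed registration
        ILogger byName    = container.Resolve<ILogger>("NullLogger");  // named registration

        byDefault.Log("hello");   // NULL 1.0 : hello
        byName.Log("hello");      // NULL 2.0 : hello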

    Read the article

  • Powershell finding services using a cmdlet dll

    - by bartonm
    I need to upgrade some DLL assemblies, written in C#, in our installation. Before I replace a DLL file, I want to check whether the file is locked and, if so, display a message. How do I implement this in PowerShell? I was thinking of iterating through Get-Process and checking dependencies.

    Solved: I iterated through the list looking for a file path match.

        function IsCaradigmPowershellDLLFree()
        {
            # The list of DLLs to check for locks by running processes.
            $DllsToCheckForLocks = "$env:ProgramFiles\Caradigm Platform\System 3.0\Platform\PowerShell\Caradigm.Platform.Powershell.dll",
                                   "$env:ProgramFiles\Caradigm Platform\System 3.0\Platform\PowerShell\Caradigm.Platform.Powershell.InternalPlatformSetup.dll";

            # Assume true, then check all process dependencies.
            $result = $true;

            # Iterate through each process and check module dependencies.
            foreach ($p in Get-Process)
            {
                # Iterate through each dll used in a given process.
                foreach ($m in Get-Process -Name $p.ProcessName -Module -ErrorAction SilentlyContinue)
                {
                    # Check whether the dll dependency matches any DLL in the list.
                    foreach ($dll in $DllsToCheckForLocks)
                    {
                        # Compare the fully-qualified file paths;
                        # if there's a match then a lock exists.
                        if ( ($m.FileName.CompareTo($dll) -eq 0) )
                        {
                            $pName = $p.ProcessName.ToString()
                            Write-Error "$dll is locked by $pName. This DLL must have zero locks prior to upgrade. Stop this service to release the lock."
                            $result = $false;
                        }
                    }
                }
            }

            return $result;
        }

    Read the article

  • What is the difference between calling Stream.Write and using a StreamWriter?

    - by John Nelson
    What is the difference between instantiating a Stream object, such as MemoryStream, and calling the memoryStream.Write() method to write to the stream, versus instantiating a StreamWriter object with the stream and calling streamWriter.Write()? Consider the following scenario: you have a method that takes a Stream, writes a value, and returns it. The stream is read from later on, so the position must be reset. There are two possible ways of doing it (both seem to work).

        // Instantiate a MemoryStream somewhere
        // - Passed to the following two methods
        MemoryStream memoryStream = new MemoryStream();

        // Not using a StreamWriter
        private static Stream WriteToStream(Stream stream, string value)
        {
            stream.Write(Encoding.Default.GetBytes(value), 0, value.Length);
            stream.Flush();
            stream.Position = 0;
            return stream;
        }

        // Using a StreamWriter
        private static Stream WriteToStreamWithWriter(Stream stream, string value)
        {
            StreamWriter sw = new StreamWriter(stream);
            sw.Write(value, 0, value.Length);
            sw.Flush();
            stream.Position = 0;
            return stream;
        }

    This is partially a scope problem, as I don't want to close the stream after writing to it, since it will be read from later. I also certainly don't want to dispose it, because that would close my stream. The difference seems to be that not using a StreamWriter introduces a direct dependency on Encoding.Default, but I'm not sure that's a very big deal. What's the difference, if any?
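
    One concrete difference worth noting (my observation, not from the original post): the raw Stream.Write call above passes value.Length - the number of characters - as the byte count, which only happens to work while every character encodes to a single byte. A StreamWriter (UTF-8 by default) handles the char-to-byte conversion for you. A small sketch of the explicit version:

        // Write a string to a stream without a StreamWriter, being explicit about
        // the encoding and using the byte count rather than the character count.
        private static Stream WriteToStreamExplicit(Stream stream, string value)
        {
            byte[] bytes = Encoding.UTF8.GetBytes(value); // pick the encoding deliberately
            stream.Write(bytes, 0, bytes.Length);         // bytes.Length, not value.Length
            stream.Flush();
            stream.Position = 0;
            return stream;
        }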

    Read the article

  • GNU C++ how to check when -std=c++0x is in effect?

    - by TerryP
    My system compiler (gcc42) works fine with the TR1 features that I want, but when trying to support newer compiler versions alongside the system one, accessing the TR1 headers triggers an #error demanding the -std=c++0x option, because of how the compiler interfaces with the library:

        /usr/local/lib/gcc45/include/c++/bits/c++0x_warning.h:31:2: error: #error This file requires
        compiler and library support for the upcoming ISO C++ standard, C++0x. This support is
        currently experimental, and must be enabled with the -std=c++0x or -std=gnu++0x compiler options.

    Having to supply an extra switch is no problem for supporting GCC 4.4 and 4.5 under this system (FreeBSD), but obviously it changes the picture! Using my system compiler (g++ 4.2, default dialect):

        #include <tr1/foo>
        using std::tr1::foo;

    Using newer (4.5) versions of the compiler with -std=c++0x:

        #include <foo>
        using std::foo;

    Is there any way, using the preprocessor, to tell whether g++ is running with C++0x features enabled? Something like this is what I'm looking for:

        #ifdef __CXX0X_MODE__
        #endif

    but I have not found anything in the manual or off the web. At this rate, I'm starting to think that life would just be easier to use Boost as a dependency, and not worry about a new language standard arriving before TR4... hehe.
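
    For what it's worth (not part of the original question): GCC defines the macro __GXX_EXPERIMENTAL_CXX0X__ when it is invoked with -std=c++0x or -std=gnu++0x, and conforming compilers later signal the new standard through the value of __cplusplus, so a guard along these lines should work. The <memory>/shared_ptr pair is just an illustrative stand-in for whichever TR1 header is needed:

        // Sketch of a preprocessor check for C++0x mode; the fallback branch
        // assumes the <tr1/...> headers are available, as in g++ 4.2.
        #if defined(__GXX_EXPERIMENTAL_CXX0X__) || __cplusplus >= 201103L
        #include <memory>
        using std::shared_ptr;
        #else
        #include <tr1/memory>
        using std::tr1::shared_ptr;
        #endif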

    Read the article

  • SQL Server service broker reporting as off when I have written a query to turn it on

    - by dotnetdev
    I have made a small ASP.NET website that uses SqlCacheDependency, and I get this error:

        The SQL Server Service Broker for the current database is not enabled, and as a result query
        notifications are not supported. Please enable the Service Broker for this database if you
        wish to use notifications.

        Description: An unhandled exception occurred during the execution of the current web request.
        Please review the stack trace for more information about the error and where it originated
        in the code.

        Exception Details: System.InvalidOperationException: The SQL Server Service Broker for the
        current database is not enabled, and as a result query notifications are not supported.
        Please enable the Service Broker for this database if you wish to use notifications.

        Source Error:
        Line 12: System.Data.SqlClient.SqlDependency.Start(connString);

    That is the erroneous line in my Global.asax. However, in SQL Server 2005 I enabled the Service Broker like so (I connect to and run the SQL Server service when I debug my site):

        ALTER DATABASE mynewdatabase SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE

    and this was successful. What am I missing? I am trying to use SQL cache dependency and have followed all the procedures. Thanks
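
    One thing worth checking (a guess, not from the original post): the ALTER DATABASE may have been run against a different database than the one the connection string points to (for example a copy of the .mdf attached under a different name). The broker state can be verified per database with:

        -- Check which databases actually have Service Broker enabled (1 = enabled).
        SELECT name, is_broker_enabled
        FROM sys.databases;

    If the database named in connString shows 0 there, enabling the broker on that specific database (or pointing the connection string at the one that was altered) should clear the error.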

    Read the article

  • Improve this generic abstract class

    - by Keivan
    I have the following abstract class design. I was wondering if anyone can suggest any improvements, in terms of stronger enforcement of our requirements or simplifying the implementation of ControllerBase.

        // Dependency provider base
        public abstract class ControllerBase<TContract, TType>
            where TType : class, TContract
        {
            public static TContract Instance
            {
                get { return ComponentFactory.GetComponent<TContract, TType>(); }
            }

            public TContract GetComponent<TContract, TType>()
                where TType : class, TContract
            {
                component = (TType)Activator.CreateInstance(typeof(TType), true);
                RegisterComponentInstance<TContract>(component);
            }
        }

        // Contract
        public interface IController
        {
            void DoThing();
        }

        // Actual class logic
        public class Controller : ControllerBase<IController, Controller>
        {
            public void DoThing();

            // internal constructor
            internal Controller() { }
        }

        // Usage
        public static void Main()
        {
            Controller.Instance.DoThing();
        }

    The following facts should always be true:

    - TType should always implement TContract (enforced using a generic constraint).
    - TContract must be an interface (can't find a way to enforce it).
    - TType shouldn't have a public constructor, just an internal one - is there any way to enforce that using ControllerBase?
    - TType must be a concrete class (didn't include new() as a generic constraint since the constructors should be marked as internal).
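
    A partial answer to the second, third and fourth bullets (my suggestion, not from the original post): those constraints can't be expressed in C# generics, but they can be checked once per closed type in a static constructor on ControllerBase, so violations fail fast on first use rather than at compile time. A minimal sketch of such checks added to the base class:

        public abstract class ControllerBase<TContract, TType>
            where TType : class, TContract
        {
            static ControllerBase()
            {
                // TContract must be an interface.
                if (!typeof(TContract).IsInterface)
                    throw new InvalidOperationException(
                        typeof(TContract).Name + " must be an interface.");

                // TType must be a concrete class.
                if (typeof(TType).IsAbstract)
                    throw new InvalidOperationException(
                        typeof(TType).Name + " must be a concrete class.");

                // No public constructors allowed - only internal/private ones.
                if (typeof(TType).GetConstructors().Length > 0)
                    throw new InvalidOperationException(
                        typeof(TType).Name + " must not expose public constructors.");
            }
        }

    GetConstructors() with no arguments only returns public instance constructors, which is exactly the set that should be empty here.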

    Read the article

  • Selenium Webdriver Java - looking for alternatives for Actions and Robot when performing drag-and-drop

    - by Ja-ke Alconcel
    I first tried the Actions class, and the drag-and-drop does work on different elements; however, it was unable to locate a specific draggable element at its exact screen/webpage position. Here's the code I used:

        Point loc = driver.findElement(By.id("thiselement")).getLocation();
        System.out.println(loc);
        WebElement drag = driver.findElement(By.id("thiselement"));
        Actions test = new Actions(driver);
        test.dragAndDropBy(drag, 0, 60).build().perform();

    I checked the element's pixel location and it prints (837, -52), which is somewhere above the top of the webpage and pixels away from the actual element.

    Then I tried using the Robot class, which works perfectly fine in my script, but it only gives consistently successful runs on a single test machine; running it on a different machine with a different screen resolution and screen size makes the script fail, because Robot depends on the pixel location of the element. The sample Robot script I'm using:

        Robot dragAndDrop = new Robot();
        dragAndDrop.mouseMove(945, 166); // actual pixel location of the draggable element
        dragAndDrop.mousePress(InputEvent.BUTTON1_MASK);
        sleep(3000);
        dragAndDrop.mouseMove(945, 226);
        dragAndDrop.mouseRelease(InputEvent.BUTTON1_MASK);
        sleep(3000);

    Is there any alternative to Actions and Robot for automating drag-and-drop? Or maybe some help getting the script to work with Actions, as I really can't use Robot. Thanks in advance.
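
    One variation that sometimes behaves better than dragAndDropBy (a suggestion, not from the original post) is to build the drag out of explicit low-level Actions steps, scrolling the element into view first so that its reported coordinates match what is actually on screen:

        // Sketch of an explicit drag using Actions; assumes 'driver' is an
        // initialised WebDriver and the element id is the same as above.
        WebElement drag = driver.findElement(By.id("thiselement"));
        ((JavascriptExecutor) driver).executeScript("arguments[0].scrollIntoView(true);", drag);

        new Actions(driver)
            .clickAndHold(drag)      // press on the source element
            .moveByOffset(0, 60)     // drag 60 pixels down
            .release()               // drop
            .build()
            .perform();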

    Read the article

  • Installing a group of .deb files in Ubuntu

    - by p00ya
    Hi. I have a directory of .deb files which I copied from apt's cache folder. There are many applications and Ubuntu updates among them, but there's no dependency failure because they were all downloaded automatically by 'Add/Remove Applications' and 'Update Manager'. Now I have installed the same version of Ubuntu (9.04) and I want to install those apps and updates again (even though they are not new versions). In other words, I want to make this fresh Ubuntu install exactly like the old one, but without downloading anything and using only the .deb files that I copied. All I have is an archive folder containing the .deb files and a 'pkgcache.bin' file. I know I can double-click the .deb files and install them manually, but then I have to find out and follow the dependencies one by one from the installer errors. I have also tried adding an offline repository, but it didn't work - I think because all of my .debs are in one folder, and there are no separate 'main', 'restricted', ... folders. Is there a way to do all of this automatically? Thanks
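
    Two approaches commonly used for this situation (suggestions, not from the original post; the paths are illustrative). The simplest is to copy the files back into apt's cache so nothing gets re-downloaded, or to let dpkg install the whole folder and then have apt repair the dependency ordering:

        # Option 1: put the files back into apt's cache, then install packages normally
        sudo cp /path/to/debs/*.deb /var/cache/apt/archives/

        # Option 2: install everything in the folder, then fix up dependency ordering
        sudo dpkg -i /path/to/debs/*.deb
        sudo apt-get -f install

    A flat folder can also be turned into a local repository (no 'main'/'restricted' layout needed) with the dpkg-dev tools:

        cd /path/to/debs
        dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
        # then add this line to /etc/apt/sources.list:
        #   deb file:///path/to/debs ./
        sudo apt-get update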

    Read the article

  • ASP.NET Web Application: use 1 or multiple virtual directories

    - by tster
    I am working on a (largish) internal web application which has multiple modules (security, execution, features, reports, etc.). All the pages in the app share navigation, CSS, JS, controls, etc. I want to make a single "Web Application" project, which includes all the pages for the app and references various projects containing the database and business logic. However, some of the people on the project want separate projects for the pages of each module. To make this clearer, this is the project layout I'm advocating:

        /WebInterface*
        /SecurityLib
        /ExecutionLib
        etc...

    And here is what they are advocating:

        /SecurityInterface*
        /SecurityLib
        /ExecutionInterface*
        /ExecutionLib
        etc...

    (* project will be published to a virtual directory of IIS)

    Basically, what I'm looking for is the advantages of both approaches. Here is what I can think of so far.

    Single virtual directory pros:

    - Modules can share a single MasterPage.
    - Modules can share UserControls (this will be common).
    - Links to other modules are within the same virtual directory, and thus don't need to be fully qualified.
    - Less chance of having incompatible module versions together.

    Multiple virtual directories pros:

    - Can publish a new version of a single module without disrupting other modules.
    - Each module is more compartmentalized; it is less likely that changes will break other modules.

    I don't buy those arguments, though. First, using load-balanced servers (which we will have), we should be able to publish new versions of the project with zero downtime, assuming there are no breaking database changes. Second, if something "breaks" another module, then there is either an improper dependency, or the break will show up eventually in the other module anyway, when the developers copy over the latest version of the UserControl, MasterPage or DLL. As a point of reference, there are about 10 developers on the project for about 50% of their time. The initial development will be about 9 months.

    Read the article

  • Python: why does this code take forever (infinite loop?)

    - by Rosarch
    I'm developing an app in Google App Engine. One of my methods is never completing, which makes me think it's caught in an infinite loop. I've stared at it, but can't figure it out. Disclaimer: I'm using GAEUnit (http://code.google.com/p/gaeunit) to run my tests. Perhaps it's acting oddly? This is the problematic function:

        def _traverseForwards(course, c_levels):
            ''' Looks forwards in the dependency graph '''
            result = {'nodes': [], 'arcs': []}

            if c_levels == 0:
                return result

            model_arc_tails_with_course = set(_getListArcTailsWithCourse(course))

            q_arc_heads = DependencyArcHead.all()
            for model_arc_head in q_arc_heads:
                for model_arc_tail in model_arc_tails_with_course:
                    if model_arc_tail.key() in model_arc_head.tails:
                        result['nodes'].append(model_arc_head.sink)
                        result['arcs'].append(_makeArc(course, model_arc_head.sink))
                        # rec_result = _traverseForwards(model_arc_head.sink, c_levels - 1)
                        # _extendResult(result, rec_result)

            return result

    Originally, I thought it might be a recursion error, but I commented out the recursion and the problem persists. If this function is called with c_levels = 0, it runs fine. The models it references:

        class Course(db.Model):
            dept_code = db.StringProperty()
            number = db.IntegerProperty()
            title = db.StringProperty()
            raw_pre_reqs = db.StringProperty(multiline=True)
            original_description = db.StringProperty()

            def getPreReqs(self):
                return pickle.loads(str(self.raw_pre_reqs))

            def __repr__(self):
                return "%s %s: %s" % (self.dept_code, self.number, self.title)

        class DependencyArcTail(db.Model):
            ''' A list of courses that is a pre-req for something else '''
            courses = db.ListProperty(db.Key)

            def equals(self, arcTail):
                for this_course in self.courses:
                    if not (this_course in arcTail.courses):
                        return False

                for other_course in arcTail.courses:
                    if not (other_course in self.courses):
                        return False

                return True

        class DependencyArcHead(db.Model):
            ''' Maintains a course, and a list of tails with that course as their sink '''
            sink = db.ReferenceProperty()
            tails = db.ListProperty(db.Key)

    Utility functions it references:

        def _makeArc(source, sink):
            return {'source': source, 'sink': sink}

        def _getListArcTailsWithCourse(course):
            ''' returns a LIST, not SET
                there may be duplicate entries '''
            q_arc_heads = DependencyArcHead.all()

            result = []
            for arc_head in q_arc_heads:
                for key_arc_tail in arc_head.tails:
                    model_arc_tail = db.get(key_arc_tail)
                    if course.key() in model_arc_tail.courses:
                        result.append(model_arc_tail)

            return result

    Am I missing something pretty obvious here, or is GAEUnit acting up?

    Read the article

  • WPF binding to a boolean on a control

    - by Jose
    I'm wondering if someone has a simple, succinct solution to binding to a dependency property that needs to be the converse of the source property. Here's an example: I have a textbox that is disabled based on a property in the DataContext, e.g.:

        <TextBox IsEnabled="{Binding CanEdit}" Text="{Binding MyText}"/>

    The requirement changes and I want to make it read-only instead of disabled, so without changing my ViewModel I could do this - in the UserControl resources:

        <UserControl.Resources>
            <m:NotConverter x:Key="NotConverter"/>
        </UserControl.Resources>

    and then change the TextBox to:

        <TextBox IsReadOnly="{Binding CanEdit, Converter={StaticResource NotConverter}}" Text="{Binding MyText}"/>

    which I personally think is EXTREMELY verbose. I would love to be able to just do this (notice the !):

        <TextBox IsReadOnly="{Binding !CanEdit}" Text="{Binding MyText}"/>

    But alas, that is not an option that I know of. I can think of two options:

    1. Create an attached property IsNotReadOnly on FrameworkElement(?) and bind to that property.
    2. Change my ViewModel to have both a CanEdit and a CannotEdit property, which I would be kind of embarrassed by, because I believe it adds an irrelevant property to a class, which I don't think is good practice.

    The main reason for the question is that in my project the above isn't just for one control, so, trying to keep my project as DRY as possible and readable, I am throwing this out to anyone feeling my pain who has come up with a solution :)
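
    For reference, the NotConverter mentioned above is not shown in the question; a minimal sketch of how such a converter is typically written (a standard WPF IValueConverter, with the class name taken from the markup):

        using System;
        using System.Globalization;
        using System.Windows.Data;

        // Negates a boolean on the way into (and out of) the binding.
        public class NotConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                return !(bool)value;
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                return !(bool)value;
            }
        }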

    Read the article

  • How to insert a message into a View that depends on a session value (ASP.NET MVC best practice)

    - by Andrew Florko
    Users have to fill in a multistep questionnaire of web forms, and the step messages depend on the option the user chooses at the very beginning. The messages are stored in the web.config file. I'm using an ASP.NET MVC project with strongly typed views, and I keep business logic separated from the controllers in a static class. I don't want the business logic to depend on web.config. So: I have to insert a message into the view that depends on a session value. There are at least two options for implementing this:

    1. The view model has a property that is populated in the controller/business logic and rendered in the view like <%: Model.HelpMessage1 %>. But then I have to pass web.config values from the controller into the business logic, which makes the business logic method signatures too complex. I also don't want to make the configuration source abstract (so that the business logic could read configuration values directly from its own methods).
    2. Create a static helper class that is called from the view like <%: ViewHelper.HelpMessage1(Model.Option1) %>. But in this case the logic of what to show seems to be split across two classes: the business logic and the view helper.

    What would you suggest? Thank you in advance!

    Read the article

  • As an Agile Java developer, what should I be looking for when hiring a C++ developer?

    - by agoudzwaard
    I come from an effective team of Agile Java developers. We've had a lot of success in hiring more people like ourselves - people passionate about technology with experience primarily in the Agile Java/J2EE space. We're looking to hire our first C++ developer to serve as an on-shore resource for maintaining and adding to the C++ portion of our code base. Up until now the entirety of our C++ development has been done out of an off-shore location. We consider our interview process to be fairly thorough:

    - A phone screen centered on Object-Oriented Programming and Java
    - A non-trivial at-home code project using Java
    - An in-person interview covering technical and behavioral competency

    We look for a demonstration of Agile best practices (expressive code, test-driven development, continuous integration) throughout the entire process; however, there is a common conception that Agility is primarily practiced by Java developers. If we retrofit our interview process for C++, should we still expect Agile qualities when interviewing for a C++ role? I'm asking on behalf of a team that has worked with Java too long to know what a good C++ developer looks like. Specifically, we're looking to answer the following questions:

    - Can we expect a demonstrated understanding of OO design and Separation of Concerns?
    - In the code project we want the candidate to write unit tests. Would a good C++ developer be surprised by this expectation?
    - Are there any "extra" competencies we can look for? For example, with Java developers we always look for a familiarity with Dependency Injection.

    Read the article

  • Xcode Unit Testing - Accessing Resources from the application's bundle?

    - by Ben Scheirman
    I'm running into an issue and I wanted to confirm that I'm doing things the correct way. I can test simple things with my SenTestingKit tests, and that works okay. I've set up a unit test bundle and set it as a dependency of the main application target. It successfully runs all tests whenever I press cmd+B. Here's where I'm running into issues. I have some XML files that I need to load from the resources folder as part of the application. Being a good unit tester, I want to write unit tests around this to make sure they are loading properly. So I have some code that looks like this:

        NSString *filePath = [[NSBundle mainBundle] pathForResource:@"foo" ofType:@"xml"];

    This works when the application runs, but during a unit test mainBundle points to the wrong bundle, so this line of code returns nil. So I changed it to use a known class, like this:

        NSString *filePath = [[NSBundle bundleForClass:[Config class]] pathForResource:@"foo" ofType:@"xml"];

    This doesn't work either, because in order for the test to even compile code like this, Config needs to be part of the unit test target. If I add that, then the bundle for that class becomes the unit test bundle. (Ugh!) Am I approaching this the wrong way?
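
    One commonly used workaround (a suggestion, not from the original post) is to have the test look the resource up in its own bundle rather than asking the class under test to do it: add the XML files to the unit test target as resources as well, and resolve the path from inside the test class, e.g.:

        // In a test method; 'self' is the SenTestCase subclass, so
        // bundleForClass: returns the unit test bundle, where the XML
        // fixture has also been added as a resource.
        NSString *filePath = [[NSBundle bundleForClass:[self class]]
                                        pathForResource:@"foo" ofType:@"xml"];

    The production code can then take the path (or the parsed data) as a parameter instead of calling mainBundle itself, which keeps the bundle lookup out of the code under test.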

    Read the article

  • Pattern for version-specific implementations of a Java class

    - by Mike Monkiewicz
    So here's my conundrum. I am programming a tool that needs to work with old versions of our application. I have the code to the application, but cannot alter any of its classes. To pull information out of our database, I have a DTO of sorts that is populated by Hibernate. It consumes a data object for version 1.0 of our app, cleverly named DataObject. Below is the DTO class:

        public class MyDTO {
            private MyWrapperClass wrapper;

            public MyDTO(DataObject data) {
                wrapper = new MyWrapperClass(data);
            }
        }

    The DTO is instantiated through a Hibernate query as follows:

        select new com.foo.bar.MyDTO(t1.data) from mytable t1

    Now, a little logic is needed on top of the data object, so I made a wrapper class for it. Note that the DTO stores an instance of the wrapper class, not the original data object.

        public class MyWrapperClass {
            private DataObject data;

            public MyWrapperClass(DataObject data) {
                this.data = data;
            }

            public String doSomethingImportant() {
                ... version-specific logic ...
            }
        }

    This works well until I need to work with version 2.0 of our application. The DataObject classes in the two versions are very similar, but not the same. This resulted in different subclasses of MyWrapperClass, which implement their own version-specific doSomethingImportant(). Still doing okay. But how does MyDTO instantiate the appropriate version-specific MyWrapperClass? Hibernate is in turn instantiating MyDTO, so it's not like I can @Autowire a dependency in Spring. I would love to reuse MyDTO (and my dozens of other DTOs) for both versions of the tool without having to duplicate the class. Don't repeat yourself, and all that. I'm sure there's a very simple pattern I'm missing that would help this. Any suggestions?
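
    One simple pattern that fits this constraint (a sketch, not from the original post; the factory and subclass names are purely illustrative) is a small static factory that the DTO constructor delegates to, with the application version configured once at tool start-up:

        // Chooses the wrapper implementation for the configured app version.
        public final class WrapperFactory {
            private static volatile String appVersion = "1.0";

            private WrapperFactory() { }

            public static void setAppVersion(String version) {
                appVersion = version;                // set once when the tool starts
            }

            public static MyWrapperClass create(DataObject data) {
                if ("2.0".equals(appVersion)) {
                    return new MyWrapperClassV2(data);   // hypothetical 2.0 subclass
                }
                return new MyWrapperClassV1(data);       // hypothetical 1.0 subclass
            }
        }

    MyDTO's constructor then becomes wrapper = WrapperFactory.create(data); Hibernate can keep calling the DTO constructor directly, since the version decision no longer needs to be injected into each DTO.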

    Read the article

  • Is it getting to be time for C# to support compile-time macros?

    - by Robert Rossney
    Thus far, Microsoft's C# team has resisted adding formal compile-time macro capabilities to the language. There are aspects of programming with WPF that seem (to me, at least) to be creating some compelling use cases for macros. Dependency properties, for instance. It would be so nice to just be able to do something like this:

        [DependencyProperty]
        public string Foo { get; set; }

    and have the body of the Foo property and the static FooProperty property be generated automatically at compile time. Or, for another example, an attribute like this:

        [NotifyPropertyChanged]
        public string Foo { get; set; }

    that would make the currently-nonexistent preprocessor produce this:

        private string _Foo;

        public string Foo
        {
            get { return _Foo; }
            set
            {
                _Foo = value;
                OnPropertyChanged("Foo");
            }
        }

    You can implement change notification with PostSharp, and really, maybe PostSharp is a better answer to the question. I really don't know. Assuming that you've thought about this more than I have - which, if you've thought about it at all, you probably have - what do you think? (This is clearly a community wiki question and I've marked it accordingly.)

    Read the article

  • Using variables before include()ing them

    - by phenry
    I'm using a file, page.php, as an HTML container for several content files; i.e., page.php defines most of the common structure of the page, and the content files just contain the text that's unique to every page. What I would like to do is include some PHP code with each content file that defines metadata for the page, such as its title, a banner graphic to use, etc. For example, a content file might look like this (simplified):

        <?php $page_title = "My Title"; ?>
        <h1>Hello world!</h1>

    The name of the file would be passed as a URL parameter to page.php, which would look like this:

        <html>
        <head>
            <title><?php echo $page_title; ?></title>
        </head>
        <body>
            <?php include($_GET['page']); ?>
        </body>
        </html>

    The problem with this approach is that the variable gets defined after it is used, which of course won't work. Output buffering also doesn't seem to help. Is there an alternate approach I can use? I'd prefer not to define the text in the content file as a PHP heredoc block, because that smashes the HTML syntax highlighting in my text editor. I also don't want to use JavaScript to rewrite page elements after the fact, because many of these pages don't otherwise use JavaScript and I'd rather not introduce it as a dependency if I don't have to.
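
    One arrangement that often works (a sketch, not from the original question - the post mentions that output buffering didn't seem to help, but the key detail is to buffer the include itself before any of the layout is emitted, so that $page_title is already defined by the time the <head> is written):

        <?php
        // page.php - include the content first, emit the layout afterwards.
        // Note: $_GET['page'] should be validated against a whitelist of known
        // content files before being passed to include().
        ob_start();
        include($_GET['page']);        // runs the metadata block, defines $page_title
        $content = ob_get_clean();     // captured HTML of the content file
        ?>
        <html>
        <head>
            <title><?php echo $page_title; ?></title>
        </head>
        <body>
            <?php echo $content; ?>
        </body>
        </html>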

    Read the article

  • Why is there so much XML in Java these days?

    - by BD at Rivenhill
    This is really more of a philosophy/design issue. I did some work in Java back in the mid 90's and again in the early 2000's, and now I'm coming back to it after spending a lot of time in C/C++, and it seems like there was an explosion of XML dependency while I was gone. Major build tools like Ant and Maven depend on XML for their configuration, but I'm actually more concerned with all the frameworks, such as Spring, Hibernate, etc.

    My experience is that powerful supporting libraries like these are where a developer can really get leverage for building programs with lots of features without writing a lot of code, but it really seems like I'm getting one language for the price of two here. I write a bunch of Java classes, but then I also write a bunch of XML files to glue them together. The things that get done in the XML are things that I can see reasonable ways of doing in straight code without the middleman, and they don't really seem to be treated exactly like configuration files: they change rarely, and they end up getting committed to source code control like the Java code itself, but they are distributed with the resulting application and need to be unpacked and installed on the classpath in order to get the application to work.

    I'm working with server applications that are not web-based, so maybe the domain is a bit different from what most people are doing, but I just can't help feeling that I must be doing something wrong here. Can someone point me to a good source of information for why these design choices were made and what problems they are meant to solve, so that I can analyze my own experiences in this context?

    Read the article

  • Problems with Unity BuildUp Method

    - by Voice
    Hi everybody) I'm using the Unity Application Block for my project (version 1.2.0.0). I have a problem with the Unity container's BuildUp method, which I'm using for my ascx controls. Here is some code (it's pretty simple):

        public class BaseUserControl<T> : UserControl where T : class
        {
            protected override void OnInit(EventArgs e)
            {
                InjectDependencies();
                base.OnInit(e);
            }

            protected virtual void InjectDependencies()
            {
                var context = HttpContext.Current;
                if (context == null)
                {
                    return;
                }

                var accessor = context.ApplicationInstance as IContainerAccessor;
                if (accessor == null)
                {
                    return;
                }

                var container = accessor.Container;
                if (container == null)
                {
                    throw new InvalidOperationException("No Unity container found");
                }

                container.BuildUp<T>(this as T);
            }
        }

    This method is called in the base control for the ascx controls in my solution. And here is the property that should be injected in the child control:

        [Dependency]
        private IStock Stock { get; set; }

    After BuildUp, the Stock property is still empty. Resolve works fine for IStock with the same container and configuration. I've also tried BuildUp with a simple test class that has only one IStock property and got the same result. So what can be wrong with BuildUp?
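
    A likely culprit (my guess, not from the original post): Unity's BuildUp only performs property injection on public, settable properties, so a [Dependency] attribute on a private property is silently ignored. Making the property public should let BuildUp populate it:

        // Property injection target for container.BuildUp<T>(...);
        // it must be public and writable for Unity to set it.
        [Dependency]
        public IStock Stock { get; set; }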

    Read the article

  • How can I specify dependencies in the manifest file and then to include it into my .jar file?

    - by Roman
    I generated my .class files with the following command:

        javac -cp \directoryName\external.jar myPackageDirectory\First.java myPackageDirectory\Second.java

    I needed -cp during compilation, naming the .jar file of an "external" library (external.jar), to be able to use this library from my code. Using my .class files, I then generated my .jar file in the following way:

        jar cfm app.jar manifest.txt myPackageDirectory\*.class

    manifest.txt contains just one line:

        Main-Class: myPackageName.First

    My problem is that I am not sure I will be able to run my .jar file on other computers. I think so because during the compilation I specified the location of the external library's .jar file, so my .class files (included in my .jar file) will try to find that .jar file in a specific directory, and there is no guarantee that it will be in the same directory as on my computer. I heard that this problem can be solved by using a MANIFEST file, included in my own jar, which lists dependency locations, but I do not understand how it works. I do need to specify the location of external.jar at the compilation stage (otherwise the compiler complains).
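
    For reference (a sketch, not from the original question): the manifest attribute usually used for this is Class-Path, which lists dependency jars relative to the location of the application jar itself, so it is the runtime - not the compiled classes - that knows where to find external.jar. For example, if external.jar is shipped in a lib folder next to app.jar:

        Main-Class: myPackageName.First
        Class-Path: lib/external.jar

    With that manifest, the application can be started with java -jar app.jar on any machine that has the same relative layout, and the compile-time -cp option only matters on the build machine.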

    Read the article

  • Where are the frameworks for creating libraries?

    - by fayer
    Whenever I create a PHP library (not a framework), I tend to reinvent everything every time: "where to put configuration options", "which design pattern to use here", "how should all the classes extend each other", and so on. Then I think: isn't there a good library framework to use anywhere? It would be like a framework for a web application (Symfony, CakePHP...), but instead of helping you create a web application, this framework would help a coder create a library, providing all the standard structure and classes (observer pattern, dependency injection, etc.). I think that will be the next major thing, if it isn't available already. That way there would be a standard to follow when creating libraries; as it is, it's like a jungle where everyone creates their own structure, and a lot of coders just code without thinking about reusability etc. Isn't there any framework for creating libraries at the moment? If not, don't you agree with me that this is the way to do it, with a library framework? Because I am really throwing away a lot of time (weeks!) just thinking about how to organize things, both at the code and file level, when I should just start coding the logic. Share your thoughts!

    Read the article
