Search Results

Search found 7651 results on 307 pages for 'execution plan'.


  • Finding the width of a directed acyclic graph... with only the ability to find parents

    - by Platinum Azure
    Hi guys, I'm trying to find the width of a directed acyclic graph... as represented by an arbitrarily ordered list of nodes, without even an adjacency list.

    The graph/list is for a parallel GNU Make-like workflow manager that uses files as its criteria for execution order. Each node has a list of source files and target files. We have a hash table in place so that, given a file name, the node which produces it can be determined. In this way, we can figure out a node's parents by examining the nodes which generate each of its source files using this table. That is the ONLY ability I have at this point, without changing the code severely.

    The code has been in public use for a while, and the last thing we want to do is to change the structure significantly and have a bad release. And no, we don't have time to test rigorously (I am in an academic environment). Ideally we're hoping we can do this without doing anything more dangerous than adding fields to the node.

    I'll be posting a community-wiki answer outlining my current approach and its flaws. If anyone wants to edit that, or use it as a starting point, feel free. If there's anything I can do to clarify things, I can answer questions or post code if needed. Thanks!

    EDIT: For anyone who cares, this will be in C. Yes, I know my pseudocode is in some horribly botched Python look-alike. I'm sort of hoping the language doesn't really matter.
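
    One hedged sketch of a level-based reading of the problem - taking "width" as the largest number of nodes that share a depth level, a common proxy for available parallelism (the true maximum-antichain width of the partial order needs heavier machinery, e.g. Dilworth's theorem plus bipartite matching). It needs only the parent lookup described above plus one cached field per node; the node names and the PARENTS table are hypothetical stand-ins:

        from collections import Counter
        from functools import lru_cache

        # Hypothetical stand-in for the hash-table lookup described in the
        # question: maps a node to the nodes producing its source files.
        PARENTS = {"a": [], "b": [], "c": ["a"], "d": ["a", "b"], "e": ["c", "d"]}

        @lru_cache(maxsize=None)  # in C this cache would be a field on the node
        def depth(node):
            # Roots sit at level 0; everything else is one level below
            # its deepest parent.
            parents = PARENTS[node]
            return 0 if not parents else 1 + max(depth(p) for p in parents)

        def level_width(nodes):
            # Bucket nodes by level; the fullest bucket is the width.
            return max(Counter(depth(n) for n in nodes).values())

        print(level_width(PARENTS))  # -> 2 (levels {a,b}, {c,d}, {e})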

  • PHP/MySQL time zone migration

    - by El Yobo
    I have an application that currently stores timestamps in MySQL DATETIME and TIMESTAMP values. However, the application needs to be able to accept data from users in multiple time zones and show the timestamps in the time zone of other users. As such, this is how I plan to amend the application; I would appreciate any suggestions to improve the approach.

    Database modifications:
    - All TIMESTAMPs will be converted to DATETIME values; this is to ensure consistency in approach and to avoid having MySQL try to do clever things and convert time zones (I want to keep the conversion in PHP, as it involves less modification to the application, and will be more portable when I eventually manage to escape from MySQL).
    - All DATETIME values will be adjusted to convert them to UTC time (currently all in Australian EST).

    Query modifications:
    - All usage of NOW() to be replaced with UTC_TIMESTAMP() in queries, triggers, functions, etc.

    Application modifications:
    - The application must store each user's time zone and preferred date format (e.g. US vs. the rest of the world).
    - All timestamps will be converted according to the user settings before being displayed.
    - All input timestamps will be converted to UTC according to the user settings before being stored.

    Additional notes: converting formats will be done at the application level for several main reasons.
    - The approach to converting time zones varies from DB to DB, so handling it there would be non-portable (and I really hope to be migrating away from MySQL some time in the not-too-distant future).
    - MySQL TIMESTAMPs have a limited range of permitted dates (~1970 to ~2038).
    - MySQL TIMESTAMPs have other undesirable attributes, including bizarre auto-update behaviour (if not carefully disabled) and sensitivity to the server zone settings (and I suspect I might screw these up when I migrate to Amazon later in the year).

    Is there anything that I'm missing here, or does anyone have better suggestions for the approach?
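
    The convert-at-the-edges pattern this plan describes, sketched in Python for brevity (the application itself is PHP; the zone names, format string and function names here are illustrative assumptions, not part of the plan):

        from datetime import datetime, timezone
        from zoneinfo import ZoneInfo  # stdlib in Python 3.9+

        def to_utc_for_storage(local_str, user_zone):
            # Parse the user's wall-clock input in their own zone, then
            # normalise to UTC before it reaches the database.
            naive = datetime.strptime(local_str, "%Y-%m-%d %H:%M:%S")
            return naive.replace(tzinfo=ZoneInfo(user_zone)).astimezone(timezone.utc)

        def from_utc_for_display(stored_utc, user_zone, user_format):
            # Stored values are always UTC; convert only on the way out.
            return stored_utc.astimezone(ZoneInfo(user_zone)).strftime(user_format)

        utc = to_utc_for_storage("2010-06-01 09:30:00", "Australia/Sydney")
        print(from_utc_for_display(utc, "America/New_York", "%m/%d/%Y %I:%M %p"))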

  • multiple clients - one server connection with sockets tcp/ip c# .net

    - by jagse
    Hello guys, I need to develop a client server system where I can have multiple clients communicating with one server at the same time. I want to communicate xml serialized objects and also need to send and receive other commands to invoke methods.

    Now, I am just starting with socket programming in C# and .NET and found that asynchronous I/O is the way to go so that the methods don't block the execution of code. Also there are many examples of how to make a simple client server system, so I have a basic understanding of how that works. Anyway, what is still not clear to me is how I can set up a server which can manage connections to multiple clients. Can I just create a new socket per connection and then store those in some kind of list? Do I need some kind of multiplexing to achieve this? Do I have to listen at multiple ports? What's the best way here?

    The other thing is whether I need to develop my own protocol to differentiate between what I am actually sending over the network -- an xml serialized object, or a command which might be just a string encoded in ASCII or something. Or would I develop my own protocol just to send these commands? Any kind of help is appreciated! If someone knows a good book which covers this sort of stuff, let me know. Cheers

    I forgot to mention that some of my clients which are supposed to communicate with my server will be PDAs, and I therefore use the Compact Framework... so this might bring in some restrictions...
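
    On the "one port or many?" question: a server listens on a single port, and each accept hands back a brand-new socket for that client, which can be kept in a list and multiplexed over. A minimal sketch of that shape in Python rather than C# (in .NET the analogues would be TcpListener/Socket with asynchronous accepts; the port number is an arbitrary assumption):

        import selectors
        import socket

        sel = selectors.DefaultSelector()
        listener = socket.socket()
        listener.bind(("0.0.0.0", 9000))  # one listening port for every client
        listener.listen()
        listener.setblocking(False)
        sel.register(listener, selectors.EVENT_READ)

        clients = []  # one accepted socket per connected client

        while True:
            for key, _ in sel.select():
                if key.fileobj is listener:
                    conn, addr = listener.accept()  # new socket per connection
                    conn.setblocking(False)
                    sel.register(conn, selectors.EVENT_READ)
                    clients.append(conn)
                else:
                    data = key.fileobj.recv(4096)
                    if not data:  # client disconnected
                        sel.unregister(key.fileobj)
                        clients.remove(key.fileobj)
                        key.fileobj.close()

    As for the protocol question, a common design is to prefix every payload with a small fixed header (message type plus length) so the receiver can tell a serialized object from a plain command before parsing it.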

  • How can I build a generic dataset-handling Perl library?

    - by Pep.
    Hello, I want to build a generic Perl module for handling and analysing biomedical character-separated datasets, one which can, most certainly, be used on any kind of dataset that contains a mixture of categorical (A,B,C,..), continuous (1.2,3,881..) and identifier (XXX1,XXX2...) variables.

    The plan is to have people initialize the module and then use some arguments to point to the data file(s), the place where the analysis reports should be placed, and the structure of the data. By structure of the data I mean which variable is in which place, and its name/type. And this is where I need some enlightenment. I am baffled how to do this in a clean way. Obviously, having people create a simple schema file, be it XML or some other format, would be the cleanest, but maybe not all people enjoy doing something like this.

    The solutions I can think of are:
    - Create a configuration file in XML or similar with a prespecified format.
    - Pass the information during initialization of the module.
    - Use the first row of the data as headers and try to guess types (ouch).

    Surely there must be a "canonical" way of doing this that is also usable and efficient. Thanks, p.
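
    A rough sketch of the last two options side by side - an explicit column schema passed at initialization, with header-row type guessing as the fallback - written in Python rather than Perl, with every field name hypothetical:

        import csv

        # Option 2: the caller hands over the structure explicitly.
        SCHEMA = [
            {"name": "patient_id", "kind": "identifier"},
            {"name": "group",      "kind": "categorical"},
            {"name": "dosage",     "kind": "continuous"},
        ]

        def guess_kind(values):
            # Option 3: crude fallback guessing from the data itself.
            try:
                for v in values:
                    float(v)
                return "continuous"
            except ValueError:
                return "categorical"

        def load(path, schema=None):
            with open(path, newline="") as fh:
                rows = list(csv.reader(fh))
            header, data = rows[0], rows[1:]
            if schema is None:  # no schema supplied: header row + guessing
                schema = [{"name": h, "kind": guess_kind(col)}
                          for h, col in zip(header, zip(*data))]
            return schema, data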

  • Android Signal analysis + some filters.

    - by Profete162
    Hello, as the World Cup is the main sport event and the vuvuzelas are the most annoying sound in the world, I had an idea to remove them definitively after reading this news article (http://www.popsci.com/diy/article/2010-06/simple-software-can-filter-out-vuvuzela-whine), which tells us that the sound has a fundamental frequency at 233 Hz plus harmonics at 466, 932 and 1864 Hz.

    I have already made a lot of Android applications by myself but never touched the signal analysis and filtering part, so here are a few questions. I do not ask for precise answers, but maybe links and tutorials to find something to work on. I guess that a new Android phone has the CPU power to do real-time filtering.

    1) How can I intercept the sound coming from the jack microphone - Line-IN plug? (I plan to link my TV to my phone with a jack-to-jack cable.) My question is purely about software and coding; I have all the wires and adapters to plug a jack into my Android phone's Line IN.

    2) Are there some Fourier analysis libraries, or may I have a look at Java libraries on the web and import them into my Android project?

    I really apologize because my question seems imprecise, but I think that would be something great. Thank you for your answers.
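
    On the filtering side, the textbook tool is a cascade of narrow notch (band-stop) filters, one per listed harmonic. A sketch in Python/SciPy rather than Java, only to show the shape of the computation; the sample rate and Q factor are assumptions to be tuned:

        import numpy as np
        from scipy.signal import iirnotch, lfilter

        FS = 44100                        # assumed sample rate, Hz
        VUVUZELA = [233, 466, 932, 1864]  # fundamental + harmonics from the article

        def filter_vuvuzela(samples):
            out = samples
            for f0 in VUVUZELA:
                # Q=30 gives a narrow notch around each harmonic; tune by ear.
                b, a = iirnotch(f0, Q=30, fs=FS)
                out = lfilter(b, a, out)
            return out

        noise = np.random.randn(FS)  # one second of noise as stand-in audio
        print(filter_vuvuzela(noise)[:5])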

  • Is there an easier way to do Classic ASP "relative path"?

    - by Alex.Piechowski
    Right now, I'm having trouble. First of all I have a page, let's call it "http://blah.com/login". That obviously goes straight to "index.asp".

    A line of Main.asp:

        <!--#include file="resource/menu.asp"-->

    The page top includes all of what I need for my menu... so, part of resource/menu.htm:

        <div id="colortab" class="ddcolortabs">
          <ul>
            <li><a href="main.asp" title="Main" rel="dropmain"><span>Main</span></a></li>
            ...
          </ul>
        </div>
        <!--Main drop down menu -->
        <div id="dropmain" class="dropmenudiv_a">
          <a href="main/announcements.asp">Announcements</a>
          <a href="main/contacts.asp">Contact Information</a>
          <a href="main/MeetingPlans.asp">Meeting Plan</a>
          <a href="main/photos.asp">Photo Gallery</a>
          <a href="main/events.asp">Upcoming Events</a>
        </div>

    Let's say I click on the "announcements" (http://blah.com/login/main/announcements.asp) link... Now I'm at the announcements page! But wait, I include the same menu file. Guess what happens: I get sent to "http://blah.com/login/main/main/announcements.asp", which doesn't exist...

    My solution: make a menu_sub.asp include for any subpages. But wait a second... this WORKS, but it gets REALLY REALLY messy... What can I do to use just one main "menu.asp" instead of "menu_sub.asp"? Using "/main/announcements.asp" WON'T be an option, because this is a web application that will be in different directories per server. Any ideas? PLEASE

  • Use Ghostscript to convert PCL to PostScript

    - by Bryon
    So I want to use Ghostscript to convert files that are created in PCL format to PostScript. That's the gist of my problem. I am simply trying to run it on the command line, but in the final stage it will have to be run from an lp command, like "lp -d < gs something something". I will be running this on a Solaris 10 server, but I believe any Unix system should work similarly. This is with GPL Ghostscript 9.00 (2010-09-14):

        bash-3.00# /usr/local/bin/gs -sDEVICE=pswrite -dLanguageLevel=1 -dNOPAUSE -dBATCH -dSAFER -sOutputFile=output.ps cms-form.pcl
        GPL Ghostscript 9.00 (2010-09-14)
        Copyright (C) 2010 Artifex Software, Inc.  All rights reserved.
        This software comes with NO WARRANTY: see the file PUBLIC for details.
        Error: /undefined in &k2G-210z100u0l6d0e63fa0V
        Operand stack:
        Execution stack:
           %interp_exit  .runexec2  --nostringval--  --nostringval--  --nostringval--  2  %stopped_push
           --nostringval--  --nostringval--  --nostringval--  false  1  %stopped_push  1910  1  3
           %oparray_pop  1909  1  3  %oparray_pop  1893  1  3  %oparray_pop  1787  1  3  %oparray_pop
           --nostringval--  %errorexec_pop  .runexec2  --nostringval--  --nostringval--  --nostringval--
           2  %stopped_push  --nostringval--
        Dictionary stack:
           --dict:1154/1684(ro)(G)--  --dict:0/20(G)--  --dict:77/200(L)--
        Current allocation mode is local
        Current file position is 30
        GPL Ghostscript 9.00: Unrecoverable error, exit code 1

  • Which web framework or technologies would suit me?

    - by Suraj Chandran
    Hi, I have been working on desktop apps and the server side (non-web) for some time, and now I am diving into the web for the first time. I plan to write a scalable, enterprise-level app. I have worked with Java, JavaScript, jQuery etc., but I absolutely hate JSP. So is there any framework that focuses on developing enterprise-level web apps without JSP?

    I liked Wicket's approach, but I think it is a little lacking in support for dynamic HTML and jQuery (yes, I looked at wiQuery). Also I feel making Wicket apps scalable would take some sweat. Can Spring MVC, Struts2 etc. help me with this, using just, say, Java, JavaScript and jQuery? Or are there any other options for me like Wicket?

    Please do forgive me if anything above looks insane; I am still working on my understanding of enterprise web apps. NOTE: If you think that I should take a different direction or approach, please do suggest it!

  • Deploying DotNetNuke and separate ASP.NET Application together - Possible Issues?

    - by TheTXI
    I am asking this proactively, in an attempt to head off any potential problems before they arise. The situation is that we are developing an ASP.NET application for a client which will handle online ordering from their customers. This application is going to use the same database that their current WinForms application uses (no real issue here). At the same time we are developing a new front-end website for them using DotNetNuke. The DotNetNuke app will simply link to the ASP.NET application for the customers to submit their orders (no need for the two to communicate back and forth, etc.).

    The plan is to host both applications on the same box at the client location. What I am looking for are potential problems or setup tips which would prevent possible conflicts between the two apps (web.config conflicts, etc.). Is there a problem with having both hosted in the same location? How should IIS be set up? If there are any external resources available which address this, please feel free to link them as well.

  • Double hashing passwords - client & server

    - by J. Stoever
    Hey, first, let me say I'm not asking about things like md5(md5(...; there are already topics about that. My question is this: we allow our clients to store their passwords locally. Naturally, we don't want them stored in plain text, so we HMAC them locally before storing and/or sending.

    Now, this is fine, but if this is all we did, then the server would store that HMAC, and since the client only needs to send the HMAC, not the plain-text password, an attacker could use the stored hashes from the server to access anyone's account (in the catastrophic scenario where someone gets such access to the database, of course).

    So, our idea was to encode the password on the client once via HMAC, send it to the server, and there encode it a second time via HMAC and match it against the stored, twice-HMAC'ed password. This would ensure that:
    - The client can store the password locally without having to store it as plain text.
    - The client can send the password without having to worry (too much) about other network parties.
    - The server can store the password without having to worry about someone stealing it from the server and using it to log in.

    Naturally, all the other things (strong passwords, double salts, etc.) apply as well, but aren't really relevant to the question. The actual question is: does this sound like a solid security design? Did we overlook any flaws with doing things this way? Is there maybe a security pattern for something like this?
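
    The described flow, sketched with Python's standard hmac module just to pin down the mechanics (the key values are placeholders, and this is a sketch of the question's scheme rather than a recommendation - a deliberately slow password hash such as bcrypt or PBKDF2 on at least one side is the conventional hardening):

        import hmac
        import hashlib

        CLIENT_KEY = b"per-user client salt"  # placeholder values only
        SERVER_KEY = b"server-side secret"

        def client_token(password: str) -> bytes:
            # First HMAC: what the client stores locally and sends on the wire.
            return hmac.new(CLIENT_KEY, password.encode(), hashlib.sha256).digest()

        def server_record(token: bytes) -> bytes:
            # Second HMAC: what the server stores; a stolen record is not
            # itself a usable login token.
            return hmac.new(SERVER_KEY, token, hashlib.sha256).digest()

        stored = server_record(client_token("correct horse"))
        attempt = server_record(client_token("correct horse"))
        print(hmac.compare_digest(stored, attempt))  # True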

  • Preventing ActiveRecord save() on an instance

    - by Craig Walker
    I have an ActiveRecord model object Foo; it represents a standard database row. I want to be able to display modified versions of instances of this object. I'd like to reuse the class itself, as it already has all the hooks & aspects I'll need. (For example: I already have a view that displays the appropriate attributes.)

    Basically I want to clone the model instance, modify some of its properties, and feed it back to the caller (view, test, etc). I do not want these attribute modifications getting back into the database. However, I do want to include the id attribute in the cloned version, as it makes dealing with the route-helpers much easier. Thus, I plan on calling ActiveRecord::Base.clone(), manually setting the ID of the cloned instance, and then making the appropriate attribute changes to the new instance.

    This has me worried though; one save() on the modified instance and my original data will get clobbered. So, I'm looking to lock down the new instance so that it won't hurt anything else. I'm already planning on calling freeze() (on the understanding that this prevents further modification to the object, though the documentation isn't terribly clear). However, I don't see any obvious way to prevent a save(). What would be the best approach to achieving this?

  • Planning and coping with deadlines in SCRUM

    - by John
    From Wikipedia:

        During each "sprint", typically a two to four week period (with the length being decided by the team), the team creates a potentially shippable product increment (for example, working and tested software). The set of features that go into a sprint come from the product "backlog," which is a prioritized set of high level requirements of work to be done. Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting, the Product Owner informs the team of the items in the product backlog that he or she wants completed. The team then determines how much of this they can commit to complete during the next sprint. During a sprint, no one is allowed to change the sprint backlog, which means that the requirements are frozen for that sprint. After a sprint is completed, the team demonstrates the use of the software.

    I was reading this and two questions immediately popped into my head:

    1) If a sprint is only a couple of weeks, decided in a single meeting, how can you accurately plan what can be achieved? High-level tasks can't be estimated accurately in my experience, and can easily take double what seems reasonable. As a developer, I hate being pushed into committing to what I can deliver in the next month based on a set of customer requirements; this goes against everything I know about generating reliable estimates, as opposed to roughly estimating and then doubling it!

    2) Since the requirements are supposed to be locked and a deliverable product available at the end, what happens when something does take twice as long? What if a feature is only half done at the end of the sprint?

    The Wikipedia article goes on to talk about sprint planning, where things are broken down into much smaller tasks for estimation (<1 day), but this is after the sprint features are already planned and the release agreed, isn't it? Kind of like a salesman promising something without consulting the developers.

  • OpenCV: cvFnName() works, but cv::FnName() doesn't work

    - by Innuendo
    I'm using OpenCV to write a plugin for a simulator. I've made a standalone OpenCV project (not a plugin) and it works fine. When I added the OpenCV libs to the plugin project, I added all the libs required. Visual Studio 2010 doesn't highlight any code line with red; all looks fine and compiles fine. But at execution, the program halts with a Runtime Error on any cv::function, for example cv::imread or cv::imwrite. If I replace them with cvLoadImage() and cvSaveImage(), it works fine.

    Why does this happen? I don't want to rewrite the whole script in old-api-style (cvFnName). That would mean changing all Mat objects to IplImages, and so on.

    UPDATE:

        // preparing template
        ifstream ifile(tmplfilename);
        if ( !FILE_LOADED && ifile ) {
            // loading template file
            Mat tmpl = cv::imread(tmplfilename, 1); // << here occurs error
            FILE_LOADED = true;
        }
        Mat src;
        Bmp2Mat(hDC, hBitmap, src);
        TargetDetector detector(src, tmpl);
        detector.detectTarget();

    If I change it to:

        if ( !FILE_LOADED && ifile ) {
            IplImage* tmpl = 0;
            tmpl = cvLoadImage(tmplfilename, 1); // no error occurs
        }

    then no error occurs. Earlier it displayed some Runtime Error; now, when I try to copy the exact message, it just crashes the application (the simulator I am writing the plugin for). It displays an error window asking whether to kill the process or not. (I can't show the exact message, because I'm using a Russian Windows right now.)

  • Memory leaks after using typeinfo::name()

    - by icabod
    I have a program in which, partly for informational logging, I output the names of some classes as they are used (specifically, I add an entry to a log saying something along the lines of "Messages::CSomeClass transmitted to 127.0.0.1"). I do this with code similar to the following:

        std::string getMessageName(void) const {
            return std::string(typeid(*this).name());
        }

    And yes, before anyone points it out, I realise that the output of type_info::name is implementation-specific. According to MSDN:

        The type_info::name member function returns a const char* to a null-terminated string representing the human-readable name of the type. The memory pointed to is cached and should never be directly deallocated.

    However, when I exit my program in the debugger, any "new" use of type_info::name() shows up as a memory leak. If I output the information for 2 classes, I get 2 memory leaks, and so on. This hints that the cached data is never being freed.

    While this is not a major issue, it looks messy, and after a long debugging session it could easily hide genuine memory leaks. I have looked around and found some useful information (one SO answer gives some interesting information about how type_info may be implemented), but I'm wondering if this memory should normally be freed by the system, or if there is something I can do to "not notice" the leaks when debugging.

    I do have a back-up plan, which is to code the getMessageName method myself and not rely on type_info::name, but I'd like to know anyway if there's something I've missed.

  • Programming Environment for a Motorola 68000 in Linux

    - by Nick Presta
    Greetings all, I am taking a Structure and Application of Microcomputers course this semester and we're programming with the Motorola 68000 series CPU/board. The course syllabus suggests running something like Easy68K or the Teesside Motorola 68000 Assembler/Emulator at home to test our programs.

    I told my prof I run x64 Linux and asked what sort of environment I would need to complete my coursework. He said that the easiest environment to use is a Windows XP 32-bit VM with one of the two suggested applications installed; however, he doesn't really care what I use as long as I can test what I write at home.

    So I'm asking if there exists some sort of emulator or environment for Linux so I can test my code, and what sort of caveats I will run into by writing and testing my code in Linux. Also, I plan to do my editing in Vim, which probably isn't a problem, but I would like any insight into editors for 68000 assembly, if you have any. Thanks!

    EDIT: Just to clarify - I don't want to install Linux on the board at all - I want to program on my home machine, test the code locally, and then bring it onto the board for grading/running.

  • How to store an integer value of 4 bytes in a chunk of memory which is malloced as type char

    - by Adi
    Dear all, hello guys!! This is my first post in the forum. I am really looking forward to having good fun on this site. My question is:

        int mem_size = 10;
        char *start_ptr;
        if ((start_ptr = malloc(mem_size*1024*1024*sizeof(char))) == NULL)
            { return -1; }

    I have allocated a chunk of memory of type char, of size say 10 MB (i.e. mem_size = 10). Now I want to store the size information in the header of the memory chunk. To make myself more clear, let's say:

        start_ptr = 0xaf868004

    (This is the value I got from my execution; it changes every time.) Now I want to put the size information at the start of this pointer, i.e.

        *start_ptr = mem_size*1024*1024;

    But I am not able to put this information in start_ptr. I think the reason is that my pointer is of type char, which only takes one byte, but I am trying to store an int, which takes 4 bytes. I am not sure how to fix this problem. I would greatly appreciate your suggestions. Cheers!! Aditya
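
    In C the usual fixes are memcpy(start_ptr, &size, sizeof size) or a cast such as *(int *)start_ptr = size;. The underlying idea - packing a 4-byte value into the head of a byte buffer - is sketched below with Python's struct module, purely as a language-neutral illustration:

        import struct

        mem_size = 10
        buf = bytearray(mem_size * 1024 * 1024)  # stands in for the malloc'ed chunk

        # Write the 4-byte size into the header...
        struct.pack_into("<i", buf, 0, mem_size * 1024 * 1024)

        # ...and read it back from the first 4 bytes.
        (size,) = struct.unpack_from("<i", buf, 0)
        print(size)  # 10485760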

  • JavaScript window object element properties

    - by Timothy
    A coworker showed me the following code and asked me why it worked:

        <span id="myspan">Do you like my hat?</span>
        <script type="text/javascript">
          var spanElement = document.getElementById("myspan");
          alert("Here I am! " + spanElement.innerHTML + "\n" + myspan.innerHTML);
        </script>

    I explained that a property is attached to the window object with the name of the element's id when the browser parses the document, and that it contains a reference to the appropriate DOM node. It's sort of as if window.myspan = document.getElementById("myspan") were called behind the scenes as the page is being rendered.

    The ensuing discussion we had raised a few questions:
    - The window object and most of the DOM are not part of the official JavaScript/ECMA standards, but is the above behavior documented in any other official literature, perhaps browser-related?
    - The above works in a browser (at least the main contenders) because there is a window object, but fails in something like Rhino. Is writing code that relies on this considered bad practice, because it makes too many assumptions about the execution environment?
    - Are there any browsers in which the above would fail, or is this considered standard behavior across the board?

    Does anyone here know the answers to those questions and would be willing to enlighten me? I tried a quick internet search, but I admit I'm not sure how to even properly phrase the query. Pointers to references and documentation are welcome.

  • Unserializing an API return object (PHP/Ebay API)

    - by DavidYell
    I have been working with the eBay API for a project and have found it great. I have, however, found a problem now, more PHP-related. When I read my items from eBay, I store a bunch of details in the database. Currently, just for the sake of it really, I serialize the whole return object and store it in the database in a related table, the idea being that when I display my information, I have all the details to hand should I need them. The problem arises in that the pricing information is always in a sub-object:

        [ConvertedAdjustmentAmount] => __PHP_Incomplete_Class Object
            (
                [__PHP_Incomplete_Class_Name] => eBayAmountType
                [_] => 0
                [currencyID] => USD
            )

    As you can see, when I unserialize my object, my cunning plan falls foul of the incomplete-class problem. I have checked the following question, without success: http://stackoverflow.com/questions/965611/forcing-access-to-php-incomplete-class-object-properties

    The main issue lies, as far as I can see, in that the price class is defined by the eBay API, so how do I recreate it? I have been reading this page, http://uk3.php.net/manual/en/function.unserialize.php, and trying to figure out unserialize_callback_func, which I can't figure out either, so any help would be appreciated!

  • Should I pass a SqlDataReader by reference or not when passing it out to multiple threads?

    - by deroby
    Hi all, being new to C# I've run into this conundrum when passing around a SqlDataReader between different threads. Without going into too much detail, the idea is to have a main thread fetching data from the database (a large recordset) and then have a helper task run through it record by record, doing some stuff based upon the contents. There is no feedback to the recordset; it simply wades through until no records are left.

    This works fine, but given the nature of the job at hand it should be possible to spread the work over different threads (CPUs) to maximize throughput (the order of execution is of no significance). The question then becomes: when I pass this recordset in as a SqlDataReader, do I have to use ref or not? It kind of boils down to this: if I pass the object around without specifying ref, won't it create new copies in memory and have records processed n times? Or don't I risk having the record position moved forward while not all fields have been fully read yet? The latter seems more like a data-racing issue, and probably is covered by the lock()ing mechanism (or not?).

    My initial take on the problem was that it doesn't really hurt to pass the variable using ref, yet as a colleague put it: "you only need ref when you're doing something wrong" =) Additionally, using ref prevents me from applying a using() construction too, which isn't very nice either.

    I thus created a "basic" project that tackles the same approach, but without the ref notation. Tests so far show that it works flawlessly on a Core2Duo (2 CPUs) using any number of threads, yet I'm still a bit wary... What do you experts think about this? Use ref or not? You can find the test project here, as it seems I can't upload it to this question directly?!

    ps: it's just a test project and I'm new to C#, so please be gentle on me when breaking down the code =P
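
    A side note on the copies worry: in C#, as in most managed languages, a SqlDataReader variable already holds a reference, so passing it without ref shares the one underlying object - nothing is copied, and ref would only let the callee reassign the caller's variable. The shared-cursor consequence (every thread advances the same position, so reads need locking) can be illustrated with a hypothetical Python stand-in:

        import threading

        class FakeReader:
            # Hypothetical stand-in for a forward-only reader: one shared cursor.
            def __init__(self, n):
                self.rows = list(range(n))
                self.lock = threading.Lock()

            def read(self):
                # Take the next row atomically; None means exhausted.
                with self.lock:
                    return self.rows.pop(0) if self.rows else None

        def worker(reader, seen):
            while (row := reader.read()) is not None:
                seen.append(row)

        reader, seen = FakeReader(1000), []
        threads = [threading.Thread(target=worker, args=(reader, seen))
                   for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(len(seen))  # 1000 - every row handled exactly once, no copies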

  • MVVM: Do I need Inheritance with ViewModels A + B ?

    - by Lisa
    Hello guys, my first post on SO, because EE sucks in the meantime ;P I am using WPF and MVVM in my desktop application.

    Scenario: I have a calendar with week A and week B, which rotate every X weeks depending on the user settings. But the UserControl "week B" is only visible when the user sets the option "rotating weeks"... The UserControl for week A has a DataGrid, and for week B I want to use the same UserControl, of course.

    What I want to achieve is that all data entered/chosen by the user in week A is saved/backed by a ViewModel A and Model C. When the user wants a rotating weekly calendar plan, I also need a ViewModel B and, again, Model C. The reason why I need to know whether data entered by the user belongs to week A or week B is that I have to write the entered data into the database in a certain order: db.Write(weekA), db.Write(weekB), db.Write(weekA), etc...

    I am unsure what a solution could look like... What would you do to identify ViewModel A or B, so you know the order in which to write the data into the database? Any other suggestions are also welcome, of course; maybe I'm thinking in the wrong direction, it's late here :) I am new to MVVM, so please be patient.

  • Efficient way to maintain a sorted list of access counts in Python

    - by David
    Let's say I have a list of objects. (All together now: "I have a list of objects.") In the web application I'm writing, each time a request comes in, I pick out up to one of these objects according to unspecified criteria and use it to handle the request. Basically like this:

        def handle_request(req):
            for h in handlers:
                if h.handles(req):
                    return h
            return None

    Assuming the order of the objects in the list is unimportant, I can cut down on unnecessary iterations by keeping the list sorted such that the most frequently used (or perhaps most recently used) objects are at the front. I know this isn't something to be concerned about - it'll make only a minuscule, undetectable difference in the app's execution time - but debugging the rest of the code is driving me crazy and I need a distraction :) so I'm asking out of curiosity: what is the most efficient way to maintain the list in sorted order, descending, by the number of times each handler is chosen?

    The obvious solution is to make handlers a list of [count, handler] pairs, and each time a handler is chosen, increment the count and re-sort the list:

        def handle_request(req):
            for h in handlers[:]:
                if h[1].handles(req):
                    h[0] += 1
                    handlers.sort(reverse=True)
                    return h[1]
            return None

    But since there's only ever going to be at most one element out of order, and I know which one it is, it seems like some sort of optimization should be possible. Is there something in the standard library, perhaps, that is especially well-suited to this task? Or some other data structure? (Even if it's not implemented in Python.) Or should/could I be doing something completely different?
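
    Since at most one element is ever out of order - the one whose count was just incremented - a short leftward bubble restores sortedness in O(k) swaps instead of an O(n log n) re-sort. A sketch of that idea, assuming the handler objects and their handles() method from the question:

        def handle_request(req, handlers):
            # handlers: [count, handler] pairs kept in descending count order.
            for i, (count, h) in enumerate(handlers):
                if h.handles(req):
                    handlers[i][0] += 1
                    # Bubble the incremented entry left past any
                    # neighbours it now outranks.
                    while i > 0 and handlers[i - 1][0] < handlers[i][0]:
                        handlers[i - 1], handlers[i] = handlers[i], handlers[i - 1]
                        i -= 1
                    return h
            return None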

  • Zend Framework Application: module dependencies

    - by takeshin
    How do you handle dependencies between modules in Zend Framework to create reusable, drop-in modules?

    E.g. I have a Newsletter module, which allows users to subscribe by providing an e-mail address. Next, I plan to add a Blog module, which allows subscribing to posts by e-mail (it obviously duplicates some functionality of the newsletter, but the e-mail addresses are stored in the User model). Next is the Forum module, with the same subscribe-to-post functionality.

    But I want the ability to use these modules independently of each other, i.e. an application with the newsletter alone, the newsletter with the blog, or a combination of two or three modules at once. This is pretty common; e.g., the same story applies to a search feature: I want a search module with options to search in all data, blog data, or forum data, if available.

    Is there any design pattern for this? Do I have to add some moduleExists($moduleName) check, or provide some interface or abstract classes, some base controller pattern, similar for each module?

  • C# - Fluent interface implementation help

    - by nettguy
    I am implementing the following piece of code using the Fluent Interface design in C# 3.0. The code is working fine.

        public interface ITrainable {
            ITrainable AddSkill(string _skill);
        }

        public interface ISearchSkill {
            ISearchSkill SearchSkill(SoftwareEngineer emp, string[] _skills);
        }

        public abstract class Person {
            public Person() {}
            protected string Name { get; set; }
        }

        public class SoftwareEngineer : Person, ITrainable {
            protected internal List<string> skillSet { get; set; }

            public SoftwareEngineer() {}

            public SoftwareEngineer(string name) {
                Name = name;
                skillSet = new List<string>();
            }

            public ITrainable AddSkill(string _skill) {
                skillSet.Add(_skill);
                return this;
            }
        }

        public class HRExecutive : Person, ISearchSkill {
            SoftwareEngineer _employee;

            public HRExecutive() {
                _employee = new SoftwareEngineer();
            }

            public ISearchSkill SearchSkill(SoftwareEngineer _employee, string[] skills) {
                this._employee = _employee;
                foreach (string _skill in skills) {
                    if (_employee.skillSet.Contains(_skill)) {
                        Console.WriteLine(Name + " is trained on " + _skill);
                    } else {
                        Console.WriteLine(Name + " is not trained on " + _skill);
                    }
                }
                return this;
            }
        }

    Execution:

        SoftwareEngineer emp1 = new SoftwareEngineer("JonSkeet");
        emp1.AddSkill("java").AddSkill("C#").AddSkill("F#");

        HRExecutive hr = new HRExecutive();
        hr.SearchSkill(emp1, new string[] { "java", "C#" })
          .SearchSkill(emp1, new string[] { "Oracle", "F#" });

    Question: I don't want the skillSet of SoftwareEngineer to be accessible to just any class; it should be accessible only to a limited set of classes. But protected internal List<string> skillSet { get; set; } is the only option (I think) I can declare in order to access skillSet from HRExecutive, and if I do so, other classes can still access it. How can I rewrite the code to prevent this?

  • ASP.NET Web Application: use 1 or multiple virtual directories

    - by tster
    I am working on a (largish) internal web application which has multiple modules (security, execution, features, reports, etc.). All the pages in the app share navigation, CSS, JS, controls, etc.

    I want to make a single "Web Application" project which includes all the pages for the app and then references various projects containing the database and business logic. However, some of the people on the project want separate projects for the pages of each module. To make this more clear, this is what I'm advocating:

        /WebInterface*
        /SecurityLib
        /ExecutionLib
        etc...

    And here is what they are advocating:

        /SecurityInterface*
        /SecurityLib
        /ExecutionInterface*
        /ExecutionLib
        etc...

    (* project will be published to a virtual directory of IIS)

    Basically, what I'm looking for are the advantages of both approaches. Here is what I can think of so far:

    Single virtual directory pros:
    - Modules can share a single MasterPage
    - Modules can share UserControls (this will be common)
    - Links to other modules are within the same virtual directory, and thus don't need to be fully qualified
    - Less chance of having incompatible module versions deployed together

    Multiple virtual directories pros:
    - Can publish a new version of a single module without disrupting other modules
    - Each module is more compartmentalized; less likely that changes will break other modules

    I don't buy those arguments, though. First, using load-balanced servers (which we will have), we should be able to publish new versions of the project with zero downtime, assuming there are no breaking database changes. Second, if something "breaks" another module, then either there is an improper dependency, or the break will show up eventually in the other module anyway, when the developers copy over the latest version of the UserControl, MasterPage or DLL.

    As a point of reference, there are about 10 developers on the project for about 50% of their time. The initial development will be about 9 months.

  • Preprocessor "macro function" vs. function pointer - best practice?

    - by Dustin
    I recently started a small personal project (an RGB-value to BGR-value conversion program) in C, and I realised that a function that converts from RGB to BGR can perform not only the conversion but also the inversion. Obviously that means I don't really need two functions, rgb2bgr and bgr2rgb. However, does it matter whether I use a function pointer instead of a macro? For example:

        int rgb2bgr (const int rgb);

        /*
         * Should I do this because it allows the compiler to issue
         * appropriate error messages using the proper function name,
         * not to mention possible debugging benefits?
         */
        int (*bgr2rgb) (const int bgr) = rgb2bgr;

        /*
         * Or should I do this since it is merely a convenience
         * and they're really the same function anyway?
         */
        #define bgr2rgb(bgr) (rgb2bgr (bgr))

    I'm not necessarily looking for a change in execution efficiency, as it's more a subjective question out of curiosity. I am well aware of the fact that type safety is neither lost nor gained using either method. Would the function pointer merely be a convenience, or are there more practical benefits to be gained of which I am unaware?
