Search Results

Search found 13639 results on 546 pages for 'design principles'.

Page 490/546

  • Git as a Mercurial client? Why no git-hg?

    - by aapeli
    This is a question that's been bothering me for a while. I've done my homework and checked Stack Overflow, and found at least these two topics about my question: Git for Mercurial like git-svn, and Git interoperability with a Mercurial repository. I've done some serious googling to solve this issue, but so far with no luck. I've also read the Git Internals book and the "Behind the Scenes" chapter of Mercurial: The Definitive Guide to try to figure this out. I'm still a bit puzzled why I haven't been able to find any suitable git-hg type of tool. From my perspective, git-svn is one of the main reasons I've chosen to use git over Mercurial at work as well. It allows me to use a workflow I like, and nobody else needs to bother if they don't care. I just don't see the point in using an intermediate hg repo to convert back and forth, as suggested in one of the threads. So anyway, from what I've read, hg and git seem very similar in conceptual design. There are differences under the hood, but none of those should prevent creating a git client for hg. As it seems to me, remote tracking branches and octopus merges make git even more powerful than hg is. So, the real question: is there any real reason why git-hg does not exist (or at least is very hard to find)? Is there some animosity from git users (and developers) towards their hg counterparts that has caused the lack of a git-hg tool? Do any of you have plans to develop something like this and go public with it? I could volunteer (although with very feeble C skills) to participate in getting this done; I just don't possess the full knowledge to start it up myself. Could this be the tool to end all DVCS wars for good?

    Read the article

  • Wondering where to begin

    - by Cat
    Hello all. After being interested for years and years (and years), I have finally decided to start learning how to create software and web applications. Based on recommendations, I have started with learning the basics of web design first (which I am almost done with) and then will move on to the meat of my process: learning the languages. Problem is, I don't know where to start :/ PHP, Ruby, Perl... and where would SQL, JavaScript and .NET fit into the mix? I am assuming they build on each other/play off of each other somewhat, so following some sort of 'order' will make the process more logical and digestible. You're probably thinking, "Just go to school for computer engineering, duh!" But I already have a degree and don't plan on going back to school. I believe I have an adequate aptitude for this sort of thing, and although it will be challenging, with the support of the community I know I can do it on my own. Thanks in advance everyone, and I am very sorry for the length. I look forward to hearing what you all have to say. Warm Regards, Cat

    Read the article

  • Presenting an image cropping interface

    - by wkw
    I'm trying to engineer a UI for cropping images in iPhone OS and suspect I'm going about things the hard way. My goal is pretty much what the Tapbots duo have done with Pastebot. In that app, they dim the source image but provide a movable and resizable cropping view, and the image you're cropping is in a zoomable scroll view; when you resize or move the underlying image, the cropping view adjusts appropriately. I mocked up a composite image which will give a sense of the design I'm after, along with how I presently have my view hierarchy set up, viewable here. The approach I've started with is the following: a UIImageView with the image to crop is in a scroll view, and a plain UIView with a black fill and a suitable transparency/alpha setting is added in front of the image view. I then use a custom UIView which is a sibling of the scroll view at a higher level; it implements the drawRect: method and for the most part calls CGImageCreateWithImageInRect to get the portion of the image's bitmap that matches the position of the cropping view and draws that to the CGContext. In the view controller I'm using the UIScrollViewDelegate methods to track scrolling and passing those changes to the custom cropping UIView so it stays in sync with the scroll contentOffset. That much is finally working. But trying to keep in sync as the scroll view zoomScale changes is when I figured I should ask for help. Looking for suggestions or guidance. My initial approach just seems like more work than is required. Could this be done with a masking layer in the image view? And if so, how would I set up the tracking for moving and resizing the cropping rect? My experience working with layers is non-nil, but very limited thus far.

    Read the article

  • Visual Studio 2010 / ASP.NET MVC 2 / Publish Error

    - by SevenCentral
    I just did a clean install on Windows 7 x64 Professional with the final release of Visual Studio 2010 Premium. In order to duplicate what I'm experiencing, do the following:
      1. Create a new ASP.NET MVC 2 Web Application.
      2. Right-click the project and select Properties.
      3. On the Web tab, select "Use Local IIS Web Server".
      4. Click on Create Virtual Directory.
      5. Save all.
      6. Unload the project and edit the project file.
      7. Change MvcBuildViews to true.
      8. Save all and reload the project.
      9. Right-click the project and select Publish.
      10. Choose the file system publish method, enter a target location, and choose Delete all existing files.
      11. Select Publish.
      12. Right-click the project and select Publish again.
    Each time I do the above I get the following error: "It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level..." The error originates from obj\debug\package\packagetmp\web.config, relative to the project directory. I can repeat this all day long with any MVC 2 project I've built. In order to fix this problem, I need to set MvcBuildViews to false in the project file. That's not really an option. This wasn't a problem in Visual Studio 2008, and it seems to be an issue with the way the Publish command stages files beneath the project directory. Can anyone else duplicate this error? Is this a bug or by design? Is there a fix, workaround, etc.? Thanks.

    Read the article

  • How can I improve this search usability?

    - by Craig Whitley
    This is the first real programming attempt of mine, and there are some major flaws. It's a learning project, and I'm currently re-writing the entire thing as my PHP is really messy. I really want to get an idea of how I can improve the actual usability and accessibility of the site at the same time, though, so I know how to implement it correctly. The website is basically a comparison website for game server hosting. As I mentioned, it's a learning project and I don't actually expect any revenue from it. At the moment there's only test data in it, so in the game input box select either 'Battlefield Bad Company 2' or 'Call of Duty 4: Modern Warfare' and ignore the actual search results. http://www.laglessfrag.com I wasn't really sure how to work the search functionality. Basically, when you click a game in the drop-down box, it sends an ajax request and finds all the locations available for that specific game in the database. After selecting the country, there's another ajax request to find all the cities available for the game in that country - which gives me the two unique identifiers I need to create the search results. One major and fundamental flaw is that without JavaScript enabled, the site ceases to function. I'll overcome that in the next re-write, but without the ajax functionality stopping the user 'going wrong', how can I implement a search that requires two fields without creating extra steps in new pages after form submissions? I'm also no designer, so my whole layout and CSS is a bit rubbish, but this was mainly a learning project as I'm interested in applications/programming rather than design. It's also slow as it's on shared hosting, but if I can get it to work correctly then I'm not opposed to chucking a bit of money at it for faster hosting and maybe a bit of advertising and seeing where it goes (if anywhere!). Any info appreciated.

    Read the article

  • Architecture for new ASP.NET web application

    - by Anders Abel
    I'm maintaining an application which currently is just a web service (built with WCF) and a database backend. The web service is built in layers, with a linq-to-sql data access part and core functionality in its own assembly, and on top of that the web service assembly which contains the WCF code. The core assembly also handles all business logic rules (very few, actually). The customer now wants a web interface for the application instead of just accessing it through other applications which consume the web service. I'm quite lost on modern web application design, so I would like some advice on what architecture and frameworks to use for the web application. The web application will be using the same core assembly with business rules and the linq-to-sql data access layer as the web service. Some concepts I've thought about are:
      - ASP.NET MVC
      - WebForms
      - AJAX controls - possibly letting the AJAX controls access the existing web service through JSON.
    Are there any more concepts I should look into? Which one is the best for a fresh project? The development tools are Visual Studio 2008 Team Edition for Developers targeting .NET 3.5. An upgrade to Visual Studio 2010 Premium (or maybe even Ultimate) is possible if it gives any benefits.
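
    As a very rough sketch of the ASP.NET MVC option, the point is simply that a controller can reference the existing core assembly directly instead of going back through the WCF layer. FooService, GetAllFoos and the view wiring below are made-up placeholders for whatever the core assembly really exposes, not code from the question:

        using System.Web.Mvc;

        public class FooController : Controller
        {
            // Hypothetical service from the existing core assembly (business rules + linq-to-sql).
            private readonly FooService service = new FooService();

            public ActionResult Index()
            {
                var items = service.GetAllFoos();   // same core logic the WCF service uses
                return View(items);                 // rendered by an ASP.NET MVC view
            }
        }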

    Read the article

  • Starting a process in one HTTP call and getting results in another

    - by KillianDS
    Hi, I'm writing a very simple testing framework for my application, the design isn't perfect, but I don't have time to write something more complex. Essentially, I have a client and server-application, on my server I want a small python web server to start the server application with given test sequences on a GET or POST call. Also, the application prints some testdata to stderr which I'd like to catch and return in another HTTP call. At the moment I have this:

        from subprocess import Popen, PIPE
        from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

        p = None

        class MyHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                global p
                if self.path.endswith("start/"):
                    p = Popen(["./bin/Release/simplex264", "BBB-360", "127.0.0.1"], stderr=PIPE)
                    print 'started'
                    return
                elif self.path.endswith("getResults/"):
                    self.wfile.write(p.stderr.read())
                    return
                self.send_error(404, 'File Not Found: %s' % self.path)

        def main():
            try:
                server = HTTPServer(('localhost', 9876), MyHandler)
                print 'Started server...'
                server.serve_forever()
            except KeyboardInterrupt:
                print 'Shutting down...'
                server.socket.close()

        if __name__ == '__main__':
            main()

    Which 'works', except for one part: when I try to open http://localhost:9876/start/, it does not return before the process ended. However, the 'started' appears in my shell immediately (I added this because I thought the Popen call would only return after execution). I do not know the perfect inner workings of Popen and BaseHTTPRequestHandler however and do not really know where it goes wrong. Is there any way to make this work asynchronously?

    Read the article

  • Question on boost array initializer

    - by ArunSaha
    I am trying to understand boost::array. The code can be read easily from the author's site. In the design rationale, the author (Nicolai M. Josuttis) mentions that the following two types of initialization are possible:

        boost::array<int,4> a = { { 1, 2, 3 } };  // Line 1
        boost::array<int,4> a = { 1, 2, 3 };      // Line 2

    In my experiment with g++ (version 4.1.2), Line 1 is working but Line 2 is not. (Line 2 yields the following: warning: missing braces around initializer for 'int [4]'; warning: missing initializer for member 'boost::array<int, 4ul>::elems'.) Nevertheless, my main question is: how is Line 1 working? I tried to write a class similar to array.hpp and use a statement like Line 1, but that did not work :-(. Can somebody explain this to me? Is there something boost-specific happening in Line 1 that I need to be aware of? Thanks in advance. Regards,

    Read the article

  • .NET: How to know when serialization is completed?

    - by Ian Boyd
    When I construct my control (which inherits DataGrid), I add specific rows and columns. This works great at design time. Unfortunately, at runtime I add my rows and columns in the same constructor, but then the DataGrid is serialized (after the constructor runs), adding more rows and columns. After serialization is complete, I need to clear everything and re-initialize the rows and columns. Is there a protected method that I can override to know when the control is done serializing? Of course, I'd prefer not to have to do the work in the constructor, throw it away, and do it again after (potential) serialization. Is there a preferred event that is the equivalent of "set yourself up now", so that it is called once whether I'm serialized or not? The serialization I speak of comes from the InitializeComponent() method in the form's code-behind file:

        #region Windows Form Designer generated code
        /// <summary>
        /// Required method for Designer support - do not modify
        /// the contents of this method with the code editor.
        /// </summary>
        private void InitializeComponent()
        {
            ...
        }

    It would have been perfect if InitializeComponent were a virtual method defined by Control; then I could just override it and perform my processing after I call base:

        protected override void InitializeComponent()
        {
            base.InitializeComponent();
            InitializeMe();
        }

    But it's not an ancestor method; it's declared only in the code-behind file. I notice that InitializeComponent calls SuspendLayout and ResumeLayout on various Controls. I thought I could override ResumeLayout, and perform my initialization then:

        public override void ResumeLayout()
        {
            base.ResumeLayout();
            InitializeMe();
        }

    But ResumeLayout is not virtual, so that's out. Any more ideas? I can't be the first person to create a custom control.
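
    One possible "set yourself up now" hook, sketched below on the assumption that the window handle is normally created only after the designer-generated InitializeComponent has run: override OnHandleCreated on the derived grid and guard it with a flag. MyGrid and InitializeRowsAndColumns are made-up names standing in for the asker's control and setup code, so treat this as a sketch rather than a definitive answer:

        using System;
        using System.Windows.Forms;

        public class MyGrid : DataGrid
        {
            private bool initialized;

            // Called when the window handle is created, which in the common case
            // happens after the designer code has finished configuring the control.
            protected override void OnHandleCreated(EventArgs e)
            {
                base.OnHandleCreated(e);
                if (!initialized)
                {
                    initialized = true;          // guard: handles can be recreated later
                    InitializeRowsAndColumns();  // hypothetical: the row/column setup
                }
            }

            private void InitializeRowsAndColumns()
            {
                // Build the specific rows and columns here instead of in the constructor.
            }
        }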

    Read the article

  • Exclude a specific value from a Min/Max aggregate function using ICriteria

    - by sparks
    I have a schedule (Voyages) table like this:

        ID  Arrival      Departure    OrderIndex
        1   01/01/1753   02/10/2009   0
        1   02/11/2009   02/15/2009   1
        1   02/16/2009   02/19/2009   2
        1   02/21/2009   01/01/1753   3
        2   01/01/1753   03/01/2009   0
        2   03/04/2009   03/07/2009   1
        2   03/09/2009   01/01/1753   2

    By design I save '01/01/1753' as a default value if the user doesn't fill in the field on the capture screen, and for the very first Arrival and the very last Departure, which are never provided. I'm using NHibernate and Criteria, and I'm wondering what the best way is to query this data if I want to know the first departure and the last arrival for each voyage in the table. My first thought was a group by (ID) and then some Min and Max on the arrival and departure, but the '01/01/1753' value is getting in the way.

        ...
        .SetProjection(Projections.ProjectionList()
            .Add(Projections.GroupProperty("ID"), "ID")
            .Add(Projections.Min("DepartureDate"), "DepartureDate")
            .Add(Projections.Max("ArrivalDate"), "ArrivalDate")
        )
        ...

    So is there a way to skip this value in the Min function comparison (without losing the whole row of data)? Or is there a better way to do this, maybe utilizing the OrderIndex, which always indicates the correct order of the elements - maybe ordering ASC and taking the 1st, then ordering DESC and taking the 1st again? But I'm not quite sure how to do that with Criteria syntax.

    Read the article

  • C# - Naming a combined value "getter/setter" method (WebForms & Binding)

    - by tyndall
    Looking for some help with names for a project I'm currently working on. I don't have a compsci degree, so I don't know what to call this. I have a method called:

        TryToGetSetValue(Direction direction, object value, object valueOnFail)

    Then there would be a Direction enum:

        public enum Direction
        {
            ModelToForm,
            FormToModel
        }

    Background: this is a legacy ASP.NET application. The models, database, and mainframe are designed poorly. I can't put in MVP or MVC patterns yet (too much work). The ASP.NET code is a ridiculous mess (partial pages, single-page design, 5x the normal amount of jQuery, everything is a jQuery UI dialog). I'm just trying to put in a bridge so that I can do more refactoring over the next year. I have ~200 fields that need to be set on a GET and written back on a POST, and I'm trying not to double those 200 fields and end up with 400 lines of code to support. What would you call my method? And the enum? Is there some other form of binding that would be easy to use instead? I'm not a fan of the DetailsView or FormView built-ins of ASP.NET WebForms.
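
    For the name itself, something short like Sync or Bind tends to read better than TryToGetSetValue. A rough sketch of the shape is below; TextBox and the string parameters are illustrative stand-ins, not the asker's actual field types:

        using System.Web.UI.WebControls;

        public enum Direction
        {
            ModelToForm,
            FormToModel
        }

        public static class Binder
        {
            // One call per field; the direction decides which way the value flows.
            public static string Sync(Direction direction, TextBox control, string modelValue, string valueOnFail)
            {
                if (direction == Direction.ModelToForm)
                {
                    control.Text = modelValue ?? valueOnFail;
                    return control.Text;
                }
                // FormToModel: pull the posted value back, falling back when the field is empty.
                return string.IsNullOrEmpty(control.Text) ? valueOnFail : control.Text;
            }
        }

        // Usage (hypothetical): model.Name = Binder.Sync(Direction.FormToModel, txtName, model.Name, "");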

    Read the article

  • How to identify the type of socket data?

    - by Nitesh Panchal
    Hello, maybe I am not able to express my doubt properly in this question, but still I will try. Basically, I created a simple socket-based chat program and everything works fine, but I think I have made many patches in it from the design point of view. I have used ObjectInputStream and ObjectOutputStream in my program. The question I want to ask is: how do I identify the different types of data that I send across the network? Say if it is a simple String object, I directly add it to List<String> chatMessages. Now, if I want to ban certain users, I created another class:

        public class User {
            private String name;
            private String id;
            // getters and setters
        }

    This User class has no real importance to me so far; I only created it to properly identify the action. Thus if I receive an instanceof User, I can be sure that some user is to be banned. That way I don't have to hardcode strings. I mean, first I thought of sending something like "Banned User :" + userName and then checking whether the string startsWith "Banned User :", and then taking some action :p. I've created a User class, but it carries no meaning of its own in my program. I want to know whether directly sending strings is a good way, or whether creating a class for every action is better. If I am not clear please let me know. If I have hundreds of actions, do I have to create hundreds of classes so I can check via instanceof? Say now I plan to create a BUZZ-like facility like the one available in Yahoo Messenger - should I again create another class named BUZZ so it can be identified easily?

    Read the article

  • JavaScript snippet to read and output XML file on page load?

    - by Banderdash
    Hey guys, hoping I might get some help. I have an XML file here with a list of books, each with a unique id and a numeric value for whether they are checked out or not. I need a JavaScript snippet that requests the XML file after the page loads and displays the content of the XML file. The XML file looks like this:

        <?xml version="1.0" encoding="UTF-8" ?>
        <response>
          <library name="My Library">
            <book id="1" checked-out="1">
              <authors>
                <author>John Resig</author>
              </authors>
              <title>Pro JavaScript Techniques (Pro)</title>
              <isbn-10>1590597273</isbn-10>
            </book>
            <book id="2" checked-out="0">
              <authors>
                <author>Erich Gamma</author>
                <author>Richard Helm</author>
                <author>Ralph Johnson</author>
                <author>John M. Vlissides</author>
              </authors>
              <title>Design Patterns: Elements of Reusable Object-Oriented Software</title>
              <isbn-10>0201633612</isbn-10>
            </book>
            ...
          </library>
        </response>

    Would LOVE any and all help!

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation reprocessing we would like to do would transform table content based on a granularity of minutes, rather than the current production event ("Event Start Time" and "Event End Time") granularity. The resulting reprocessing of existing table rows would look like:

        +---------+---------+------------+-------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute | Output |
        +---------+---------+------------+-------------------+--------+
        | 080025  | ABC123  | M01        | 2010-01-24 16:19  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:20  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:21  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:22  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:23  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:24  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:25  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:26  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:27  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:28  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:29  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:30  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:31  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:32  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:33  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:34  | 133    |
        +---------+---------+------------+-------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and change the granularity to minutes, eliminating the redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides Output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of an INSERT ... INTO construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.

    Read the article

  • How to debug PHP?

    - by NutMotion
    Has anyone been trying their hand at object-oriented programming? Most probably every developer, I guess :D I, for one, have never studied OO design patterns thoroughly, and trying to put it all together now proves at times thrilling, and many times frustrating too. Even more so when trying to do it in: PHP! All in all, my boss asked me to add some database persistence functions to her server, but most of all, she asked me to translate her already-working procedural code into working object-oriented code. Here I am, still stuck on my PHP OO project. I'm (already) fed up with this "file logging only" PHP capability. I believe there must be some (free or not too expensive) PHP debugging utility? I've heard about Zend Studio and PHPEd so far, which didn't quite do the trick, for whatever reasons WIRCW ("Which I don't Remember Correctly Why", lol). So what say ye on debugging PHP? Is there a tool that provides a good debug mode? What's more, don't forget I'm not speaking about the classical web request/response model. I'm talking about a debugging facility which can enable you to trigger a web service (aka a client request) and go into debug mode on the SOAP web service side. Thanks for any input.

    Read the article

  • Data synchronization using XMPP

    - by Jason
    Hi: I'm looking for some insight/advice on synchronizing data over XMPP. I've never developed anything for XMPP before, so excuse me if some of my questions seem ridiculous. Basically, what I have is a decentralized social network. Each person has their own Web site (or server) with a unique URI (one domain could host many servers). Each of these servers can have many clients, e.g., a desktop application, mobile application, etc. What I would like to accomplish is near real-time synchronization/communication between client and server, e.g., I update something in my desktop application and I see it change on my Web site. My server and client code is Python, so I would like to make use of SleekXMPP if possible (its license seems to have changed to MIT). I was thinking, and here is where I need advice, that each server would register an account at a dedicated XMPP server, e.g., [email protected], and then I could use different resources for clients: [email protected]/client1, [email protected]/client2, etc. If anyone can register any username, then maybe I also need some intermediate service (since it's decentralized, I'm not sure how to control registrations). Another option, I guess, is that each server runs its own XMPP server. Assuming that was all worked out, if I want to broadcast messages to all my resources (except the sending one), how do I do that? Do I have to subscribe to myself? This also seems like a good candidate for publish-subscribe; let me know if you think that could work and what the design/flow of that process would be. Thanks :)

    Read the article

  • How to reuse results with a schema for end of day stock-data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis like top-volume gainers, top-price gainers, etc. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there, I have a schema modeled pretty much on it, thusly:

        Symbol - char 6       // primary
        Date   - date         // primary
        Open   - decimal 18, 4
        High   - decimal 18, 4
        Low    - decimal 18, 4
        Close  - decimal 18, 4
        Volume - int

    Right now this table containing End Of Day (EOD) data will be about 3 million rows for 3 years. Later, when I get/need more data, it could be 20 million rows. The front end will be making requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones, and as such is not too costly time-wise, I assume. But a request like "give me top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float which signifies how many times the volume has grown, etc. One option I have is adding a column for each such result. And if the user asks for volume gain in 10 days over 20 days, that would require another table. The total number of such tables could easily cross 100, especially if I start using other results as tables, like MACD-10 and MACD-100, each of which will require its own column. Is this a feasible solution? Another option is keeping the results in cached HTML files and presenting them to the user. I don't have much experience in web development, so to me it looks messy; but I could be wrong (of course!). Is that an option too? Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database being done using Perl. I would like to have a response time of 1-2 seconds.

    Read the article

  • Should a new language compiler target the JVM?

    - by Pindatjuh
    I'm developing a new language. My initial target was to compile to native x86 for the Windows platform, but now I am in doubt. I've seen some new languages target the JVM (most notably Scala and Clojure). Of course it's not possible to port every language easily to the JVM; doing so may lead to small changes to the language and its design. That's the reason behind this doubt, and thus this question: is targeting the JVM a good idea when creating a compiler for a new language, or should I stick with x86? I have experience in generating JVM bytecode. Are there any workarounds for the JVM's GC? The language has deterministic implicit memory management. How do I produce JIT-compatible bytecode, such that it will get the highest speedup? Is it similar to compiling for IA-32, such as the 4-1-1 muops pattern on the Pentium? I can imagine some advantages (please correct me if I'm wrong):
      - JVM bytecode is easier than x86.
      - Like x86 communicates with Windows, the JVM communicates with the Java Foundation Classes, to provide I/O, threading, GUI, etc.
      - Implementing "lightweight" threads. I've seen a very clever implementation of this at http://www.malhar.net/sriram/kilim/.
      - Most advantages of the Java Runtime (portability, etc.).
    The disadvantages, as I imagine them, are:
      - Less freedom? On x86 it'll be easier to create low-level constructs, while the JVM is a higher-level (more abstract) processor.
      - Most disadvantages of the Java Runtime (no native dynamic typing, etc.).

    Read the article

  • Are MEF's ComposableParts contracts instance-based?

    - by Dave
    I didn't really know how to phrase the title of my question, so my apologies in advance. I read through parts of the MEF documentation to try to find the answer to my question, but couldn't find it. I'm using ImportMany to allow MEF to create multiple instances of a specific plugin. That plugin Imports several parts, and within calls to a specific instance, it wants these Imports to be singletons. However, what I don't want is for all instances of this plugin to use the same singleton. For example, let's say my application ImportManys Blender appliances. Every time I ask for one, I want a different Blender. However, each Blender Imports a ControlPanel, and I want each Blender to have its own ControlPanel. To make things a little more interesting, each Blender can load BlendPrograms, which are also contained within their own assemblies, and MEF takes care of this loading. A BlendProgram might need to access the ControlPanel to get the speed, but I want to ensure that it is accessing the correct ControlPanel (i.e. the one that is associated with the Blender that is associated with the program!). This diagram might clear things up a little bit: as the note shows, I believe that the confusion could come from an inherently poor design. The BlendProgram shouldn't touch the ControlPanel directly; instead, perhaps the BlendProgram should get the speed via the Blender, which will then delegate the request to its ControlPanel. If this is the case, then I assume the BlendProgram needs to have a reference to a specific Blender. In order to do this, is the right way to leverage MEF to use an ImportingConstructor for BlendProgram, i.e.

        [ImportingConstructor]
        public class BlendProgram : IBlendProgram
        {
            public BlendProgram(Blender blender) {}
        }

    And if this is the case, how do I know that MEF will use the intended Blender plugin?
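
    One way to get a separate ControlPanel per Blender with the standard System.ComponentModel.Composition attributes is to mark the ControlPanel export as non-shared, so MEF builds a new instance for every importer. The class shapes below follow the question's names, but the Speed property and the exact contracts are illustrative assumptions rather than the asker's code:

        using System.ComponentModel.Composition;

        public interface IControlPanel
        {
            int Speed { get; }
        }

        // NonShared: each part that imports IControlPanel receives its own instance.
        [Export(typeof(IControlPanel))]
        [PartCreationPolicy(CreationPolicy.NonShared)]
        public class ControlPanel : IControlPanel
        {
            public int Speed { get; set; }
        }

        [Export(typeof(Blender))]
        [PartCreationPolicy(CreationPolicy.NonShared)]
        public class Blender
        {
            // This Blender's private panel; a BlendProgram would ask the Blender
            // for the speed rather than importing a panel of its own.
            [Import]
            public IControlPanel Panel { get; set; }

            public int CurrentSpeed
            {
                get { return Panel.Speed; }
            }
        }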

    Read the article

  • How to specify a parameter as part of every web service call?

    - by LES2
    Currently, each web service in our application has a user parameter that is added to every method. For example:

        @WebService
        public interface FooWebService {

            @WebMethod
            public Foo getFoo(@WebParam(name="alwaysHere", header=true, partName="alwaysHere") String user,
                              @WebParam(name="fooId") Long fooId);

            @WebMethod
            public Result deletetFoo(@WebParam(name="alwaysHere", header=true, partName="alwaysHere") String user,
                                     @WebParam(name="fooId") Long fooId);

            // ...
        }

    There could be twenty methods in a service, each with the first parameter as user. And there could be twenty web services. We don't actually use the 'user' argument in the implementations - in fact, I don't know why it's there - but I wasn't involved in the design, and the person who put it there had a reason (I hope). Anyway, I'm trying to straighten out this Big Ball of Mud. I have already come a long way by wrapping the web services in a Spring proxy, which allows me to do some before-and-after processing in an interceptor (before, there were at least 20 lines of copy-pasted boilerplate per method). I'm wondering if there's some kind of "message header" I can apply to the method or package that can be accessed by some type of handler or something outside of each web service method. Thanks in advance for the advice, LES

    Read the article

  • File Storage for Web Applications: Filesystem vs DB vs NoSQL engines

    - by El Yobo
    I have a web application that stores a lot of user-generated files. Currently these are all stored on the server filesystem, which has several downsides for me:
      - When we move "folders" (as defined by our application) we also have to move the files on disk (although this is more due to strange design decisions on the part of the original developers than a requirement of storing things on the filesystem).
      - It's hard to write tests for filesystem actions; I have a mock filesystem class that logs actions like move, delete etc. without performing them, which more or less does the job, but I don't have 100% confidence in the tests.
      - I will be adding some other jobs which need to access the files from other services to perform additional tasks (e.g. indexing in Solr, generating thumbnails, movie format conversion), so I need to get at the files remotely. Doing this over network shares seems dodgy...
      - Dealing with permissions on the filesystem has sometimes given us problems in the past, although now that we've moved to a pure Linux environment this should be less of an issue.
    What are the downsides of storing files as BLOBs in MySQL? I guess it would massively increase the database size and reduce the effectiveness of caches, but are there other problems? Do the same problems exist with NoSQL systems like Cassandra? Does anyone have any other suggestions that might be appropriate?

    Read the article

  • How to shrink the ASPX page

    - by salvationishere
    I am developing a C#/ASP.NET web application in VS 2008. Currently this page is too tall: the buttons appear on top and then there is a large gap between the buttons and the resultLabel text. The following code is from my ASPX file. I have tried switching to the Design tab of this file and manually moving the label, but there is still a large gap. I'm sure this is simple. How do I correct this?

        Text="Now select from the dropdownlists which table columns from my database you want to map these fields to"

        <table align="center"><tr>
            <td style="text-align: center;width: 300px;">
                <asp:Label ID="resultLabel" runat="server"
                    style="position:absolute; text-align:center; top:148px; left: 155px;"
                    Visible="False"></asp:Label>
            </td>
        </tr></table>
        <p>

    Read the article

  • Which version of Grady Booch's OOA/D book should I buy?

    - by jackj
    Grady Booch's "Object-Oriented Analysis and Design with Applications" is available brand new in both the 2nd edition (1993) and the 3rd edition (2007), while many used copies of both editions are available. Here are my concerns: 1) The 2nd edition uses C++: given that I just finished reading my first two C++ books (Accelerated C++ and C++ Primer) I guess practical tips can only help, so the 2nd edition is probably best (I think the 3rd edition has absolutely no code). On the other hand, the C++ books I read insist on the importance of using standard C++, whereas Booch's 2nd edition was published before the 1998 standard. 2) The 2nd edition is shorter (608 pages vs. 720) so, I guess, it will be slightly easier to get through. 3) The 3rd edition uses UML 2.0, whereas the 2nd edition is pre-UML. Some reviews say that the notation in the 2nd edition is close enough to UML, so it doesn't matter, but I don't know if I should be worrying about this or not. 4) The 2nd edition is available in good-shape used copies for considerably less than what the 3rd one goes for. Given all the above factors, do you think I should buy the 2nd or the 3rd edition? Recommendations on other books are also welcome but I would prefer it if whoever answers has read at least one of the versions of Booch's book (preferably both!). I have already bought but not read GoF and Riel's books. I also know that I should practice a lot with real-life code. Thanks.

    Read the article

  • Injecting Dependencies into Domain Model classes with Nhibernate (ASP.NET MVC + IOC)

    - by Sunday Ironfoot
    I'm building an ASP.NET MVC application that uses a DDD (Domain Driven Design) approach, with database access handled by NHibernate. I have a domain model class (Administrator) into which I want to inject a dependency via an IOC container such as Castle Windsor, something like this:

        public class Administrator
        {
            public virtual int Id { get; set; }
            //.. snip ..//
            public virtual string HashedPassword { get; protected set; }

            public void SetPassword(string plainTextPassword)
            {
                IHashingService hasher = IocContainer.Resolve<IHashingService>();
                this.HashedPassword = hasher.Hash(plainTextPassword);
            }
        }

    I basically want to inject IHashingService for the SetPassword method without calling the IOC container directly (because this is supposed to be an IOC anti-pattern), but I'm not sure how to go about doing it. My Administrator object either gets instantiated via new Administrator(); or gets loaded via NHibernate, so how would I inject the IHashingService into the Administrator class? On second thoughts, am I going about this the right way? I was hoping to avoid having my codebase littered with...

        currentAdmin.Password = HashUtils.Hash(password, Algorithm.Sha512);

    ...and instead get the domain model itself to take care of hashing and neatly encapsulate it away. I can envisage another developer accidentally choosing the wrong algorithm and having some passwords as Sha512, some as MD5, some with one salt, and some with a different salt, etc. Instead, if developers are writing...

        currentAdmin.SetPassword(password);

    ...then that would hide those details away and take care of the problems listed above, would it not?
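
    A common alternative that keeps the hashing encapsulated without the entity ever calling the container is to pass the service into the method that needs it (method injection / double dispatch). A minimal sketch, reusing the IHashingService from the question and assuming the calling code obtains the hasher from the container:

        public class Administrator
        {
            public virtual int Id { get; set; }
            public virtual string HashedPassword { get; protected set; }

            // The application/service layer resolves IHashingService and hands it in,
            // so the domain class never references the IOC container.
            public virtual void SetPassword(string plainTextPassword, IHashingService hasher)
            {
                this.HashedPassword = hasher.Hash(plainTextPassword);
            }
        }

        // Usage from a controller or application service:
        //   currentAdmin.SetPassword(password, hasher);   // hasher injected into the caller, not the entity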

    Read the article

  • What techniques can be used to detect so called "black holes" (a spider trap) when creating a web crawler?

    - by Tom
    When creating a web crawler, you have to design some kind of system that gathers links and adds them to a queue. Some, if not most, of these links will be dynamic: they appear to be different but do not add any value, as they are specifically created to fool crawlers. An example: we tell our crawler to crawl the domain evil.com by entering an initial lookup URL. Let's assume we let it crawl the front page initially, evil.com/index. The returned HTML will contain several "unique" links:
      - evil.com/somePageOne
      - evil.com/somePageTwo
      - evil.com/somePageThree
    The crawler will add these to the buffer of uncrawled URLs. When somePageOne is being crawled, the crawler receives more URLs:
      - evil.com/someSubPageOne
      - evil.com/someSubPageTwo
    These appear to be unique, and so they are. They are unique in the sense that the returned content is different from previous pages and that the URL is new to the crawler; however, it appears that this is only because the developer has made a "loop trap" or "black hole". The crawler will add this new sub-page, and that sub-page will have another sub-page, which will also be added. This process can go on infinitely. The content of each page is unique but totally useless (it is randomly generated text, or text pulled from a random source). Our crawler will keep finding new pages which we actually are not interested in. These loop traps are very difficult to find, and if your crawler does not have anything in place to prevent them, it will get stuck on a certain domain for infinity. My question is: what techniques can be used to detect so-called black holes? One of the most common answers I have heard is the introduction of a limit on the number of pages to be crawled. However, I cannot see how this can be a reliable technique when you do not know what kind of site is to be crawled. A legitimate site, like Wikipedia, can have hundreds of thousands of pages. Such a limit could return a false positive for these kinds of sites. Any feedback is appreciated. Thanks.
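
    No single check is reliable on its own, but here is a sketch of two of the heuristics touched on above: a per-domain page budget as the backstop, plus a fingerprint of normalised page content to catch traps that serve the same (or nearly the same) text under new URLs. The class and the threshold handling are illustrative assumptions, and the fingerprint will not catch truly random text - that is what the budget is for.

        using System;
        using System.Collections.Generic;
        using System.Security.Cryptography;
        using System.Text;

        public class TrapGuard
        {
            private readonly Dictionary<string, int> pagesPerDomain = new Dictionary<string, int>();
            private readonly HashSet<string> contentFingerprints = new HashSet<string>();
            private readonly int maxPagesPerDomain;

            public TrapGuard(int maxPagesPerDomain)
            {
                this.maxPagesPerDomain = maxPagesPerDomain;   // e.g. a generous per-domain cap
            }

            // Budget check: refuse to enqueue once a single host has used up its quota.
            public bool ShouldCrawl(Uri url)
            {
                int seen;
                pagesPerDomain.TryGetValue(url.Host, out seen);
                pagesPerDomain[url.Host] = seen + 1;
                return seen < maxPagesPerDomain;
            }

            // Duplicate check: hash the normalised body; returns true when the same
            // content has been seen before (only catches exact or near-exact repeats).
            public bool LooksLikeDuplicate(string html)
            {
                string normalized = html.ToLowerInvariant().Replace(" ", "").Replace("\n", "");
                using (SHA1 sha1 = SHA1.Create())
                {
                    string hash = Convert.ToBase64String(
                        sha1.ComputeHash(Encoding.UTF8.GetBytes(normalized)));
                    return !contentFingerprints.Add(hash);
                }
            }
        }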

    Read the article
