Search Results

Search found 8056 results on 323 pages for 'external'.


  • What is the best practice in regards to building composite dtos off of an aggregate root with domain

    - by Chance
    I'm trying to figure out the best approach/practice for assembling a composite data transfer object from an aggregate root, and would love to hear people's thoughts on this. For example, let's say I have a root that has a few domain objects as children. I want to assemble a specific view DTO, based on some business logic, that contains either attributes or full DTOs of its child objects. What I'm struggling with is where that assembly should happen. I can see it living on the aggregate root's domain object, as there is some business logic associated with it. The benefit of this approach, from what I've deduced thus far, is that it should keep the inevitable business logic from bleeding outside of the domain object. It also allows for private methods that take care of tasks that could become more complex from an external builder. The downsides are that the domain object becomes much more entrenched in the application's workflow and represents much more than just the domain object, and it could become very large in the scenario where you need multiple composite DTOs. Alternatively, I could also see it belonging to some form of transfer object assembler where there is a builder for each domain object. The domain objects would still be responsible for GetDto() and UpdateFromDto(dto); outside of that, the builder would handle the construction and deconstruction of composite DTOs. The downside, as mentioned above, is that I fear this will easily lead developers unfamiliar with DDD to bleed a ton of business logic into the assembler, which is what I desperately want to avoid. Any thoughts would be greatly appreciated.
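
    A minimal C# sketch of the external-assembler option discussed above; the Order/OrderLine/OrderSummaryDto names are hypothetical and only stand in for whatever the real aggregate looks like:

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical aggregate root with child entities.
    public class OrderLine { public decimal Price { get; set; } public int Quantity { get; set; } }
    public class Customer { public string Name { get; set; } }

    public class Order
    {
        private readonly List<OrderLine> lines = new List<OrderLine>();
        public int Id { get; set; }
        public Customer Customer { get; set; }
        public IEnumerable<OrderLine> Lines { get { return lines; } }
        public void AddLine(OrderLine line) { lines.Add(line); }
    }

    // Flat DTO shaped for one specific view.
    public class OrderSummaryDto
    {
        public int OrderId { get; set; }
        public string CustomerName { get; set; }
        public decimal Total { get; set; }
    }

    // The assembler owns only the mapping; any business rule it needs should be
    // a thin call back into the aggregate root rather than logic duplicated here.
    public class OrderDtoAssembler
    {
        public OrderSummaryDto ToSummary(Order order)
        {
            return new OrderSummaryDto
            {
                OrderId = order.Id,
                CustomerName = order.Customer.Name,
                Total = order.Lines.Sum(l => l.Price * l.Quantity)
            };
        }
    }
    ```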

    Read the article

  • Using JAXB to customise the generation of java enums

    - by belltower
    I'm using an external bindings file when using JAXB against an XML schema. I'm mostly using the bindings file to map from the XML schema primitives to my own types. This is a snippet of the bindings file <jxb:bindings version="1.0" xmlns:jxb="http://java.sun.com/xml/ns/jaxb" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:ai="http://java.sun.com/xml/ns/jaxb/xjc" extensionBindingPrefixes="ai"> <jxb:bindings schemaLocation="xsdurl" node="xs:schema"> <jxb:globalBindings> <jxb:javaType name="com.companyname.StringType" xmlType="xs:string" parseMethod="parse" printMethod="print" hasNsContext="true"> </jxb:javaType> </jxb:globalBindings> </jxb:bindings> </jxb:bindings> So whenever an xs:string is encountered, com.companyname.StringType is used and its parse / print methods are called for unmarshalling/marshalling, etc. Now if JAXB encounters an xs:enumeration it will generate a Java enum. For example: <xs:simpleType name="Address"> <xs:restriction base="xs:string"> <xs:enumeration value="ADDR"/> <xs:enumeration value="PBOX"/> </xs:restriction> </xs:simpleType> public enum Address { ADDR, PBOX; public String value() { return name(); } public static Address fromValue(String v) { return valueOf(v); } } Does anyone know if it is possible to customise the creation of an enum like it is for a primitive? I would like to be able to: add a standard member variable / other methods to every enum generated by JAXB, and specify the static method used to create the enum.

    Read the article

  • Creating an appropriate index for a frequently used query in SQL Server

    - by Slauma
    In my application I have two queries which will be used quite frequently. The WHERE clauses of these queries are the following: WHERE FieldA = @P1 AND (FieldB = @P2 OR FieldC = @P2) and WHERE FieldA = @P1 AND FieldB = @P2. P1 and P2 are parameters entered in the UI or coming from external data sources. FieldA is an int and highly non-unique, meaning only two, three, or four different values in a table with, say, 20000 rows. FieldB is a varchar(20) and is "almost" unique; there will be only very few rows where FieldB has the same value. FieldC is a varchar(15) and is also highly distinct, but not as much as FieldB. FieldA and FieldB together are unique (but do not form my primary key, which is a simple auto-incrementing identity column with a clustered index). I'm now wondering what's the best way to define an index to speed up specifically these two queries. Shall I define one index with FieldB (or better FieldC here?), FieldC (or better FieldB here?), FieldA, or better two indices: (FieldB, FieldA) and (FieldC, FieldA)? Or are there even other and better options? What's the best way and why? Thank you in advance for suggestions!

    Read the article

  • jQuery > Update inline script on form submission

    - by Andrew Kirk
    I am using the ChemDoodle Web Components to display molecules on a web page. Basically, I can just insert the following script in my page and it will create an HTML5 canvas element to display the molecule. <script> var transform1 = new TransformCanvas('transform1', 200, 200, true); transform1.specs.bonds_useJMOLColors = true; transform1.specs.bonds_width_2D = 3; transform1.specs.atoms_useJMOLColors = true; transform1.specs.atoms_circles_2D = true; transform1.specs.backgroundColor = 'black'; transform1.specs.bonds_clearOverlaps_2D = true; transform1.loadMolecule(readPDB(molecule)); </script> In this example, "molecule" is a variable that I have defined in an external script by using the jQuery.ajax() function to load a PDB file. This is all fine and good. Now, I would like to include a form on the page that will allow a user to paste in a PDB molecule definition. Upon submitting the form, I want to update the "molecule" variable with the form data so that the ChemDoodle Web Components script will work its magic and display the molecule defined by the PDB definition pasted into the form. I am using the following jQuery code to process the form submission. $(".button").click(function() { // validate and process form here //hide previous errors $('.error').hide(); //validate pdb textarea field var pdb = $("textarea#pdb").val(); if (pdb == "") { $("label#pdb_error").show(); $("textarea#pdb").focus(); return false; } molecule = pdb; }); This code is setting the "molecule" variable upon the form submission, but it is not being passed back to the inline script as I had hoped. I have tried a number of variations on this but can't seem to get it right. Any clues as to where I might be going wrong would be much appreciated.

    Read the article

  • No Debug information of an iPhone app

    - by Markus Pilman
    Hi all, I wrote an iPhone app which uses a third-party library. I cross-compiled this library successfully and everything works smoothly. But when I want to debug the application, it would make sense to also be able to debug the library. So I also compiled the external library with debugging information (using the gcc option -ggdb). But when I want to debug it, I get the correct symbol names, but the positions are always wrong/extremely weird (locale_facets.tcc:2505 or iostream:76). For example a stack trace could look like this: #0 0x000045e8 in zorba::serialization::SerializeBaseClass::SerializeBaseClass () at iostream:76 #1 0x0001d990 in zorba::RCObject::RCObject () at iostream:76 #2 0x00025187 in zorba::xqpStringStore::xqpStringStore () at iostream:76 #3 0x000719e4 in zorba::String::String () at locale_facets.tcc:2505 #4 0x00030513 in iphone::iLabelModule::getURI (this=0x533f710) at /Users/sausalito/eth/izorba/sandbox/ilabel.cpp:19 #5 0x00356766 in zorba::static_context::bind_external_module () at locale_facets.tcc:2505 #6 0x0006139d in zorba::StaticContextImpl::registerModule () at locale_facets.tcc:2505 #7 0x000333e5 in -[ZorbaCaller init] (self=0x53405c0, _cmd=0x95583398) at /Users/sausalito/eth/izorba/sandbox/ZorbaCaller.mm:61 #8 0x00033180 in +[ZorbaCaller instance] (self=0x11dc2bc, _cmd=0x93679591) at /Users/sausalito/eth/izorba/sandbox/ZorbaCaller.mm:37 #9 0x0003d998 in -[testOne execute:] (self=0x530d560, _cmd=0x9366b126, sender=0x5121da0) at /Users/sausalito/eth/izorba/sandbox/generator/testOne.mm:13 #10 0x01a21405 in -[UIApplication sendAction:to:from:forEvent:] () #11 0x01a84b4e in -[UIControl sendAction:to:forEvent:] () #12 0x01a86d6f in -[UIControl(Internal) _sendActionsForEvents:withEvent:] () #13 0x01a85abb in -[UIControl touchesEnded:withEvent:] () #14 0x01a3addf in -[UIWindow _sendTouchesForEvent:] () #15 0x01a247c8 in -[UIApplication sendEvent:] () #16 0x01a2b061 in _UIApplicationHandleEvent () #17 0x03b6fd59 in PurpleEventCallback () #18 0x034a8b80 in CFRunLoopRunSpecific () #19 0x034a7c48 in CFRunLoopRunInMode () #20 0x03b6e615 in GSEventRunModal () #21 0x03b6e6da in GSEventRun () #22 0x01a2bfaf in UIApplicationMain () #23 0x0002dd7e in main (argc=1, argv=0xbffff044) at /Users/sausalito/eth/izorba/sandbox/main.m:16 Does anybody have an idea where these wrong locations come from?

    Read the article

  • Unmanaged DLL (exporting Dialog) + Class Library (DLL) + no Windows Style/Themes

    - by Gohlool
    Hi, I have a managed application TestApplication.exe in C#, and Application.EnableVisualStyles() is already called. I have a class library MySharedCode.dll, also in C#, which uses [DLLImport()] to import some external dialogs out of an unmanaged DLL. Well, now I am referencing (add reference) MySharedCode.dll in my TestApplication.exe and calling a function MyTestConfigDlg() from it: TestClass.MyTestConfigDlg(); OK, everything works fine and I get my dialog, but the dialog has NO XP style/themes. I just wanted to see if it's a general problem with managed/unmanaged modules, so I used [DLLImport()] to call the same MyTestConfigDlg() dialog, but this time directly in my TestApplication.exe. WOW! It worked as I expected: the dialog was in XP style/themes! So, is there anybody here who can help me out? FYI: I also tried (just as a test) to call the MessageBoxA() API in my class library DLL, which is later called by my TestApplication.exe, and the MessageBoxA() dialog also had no style/themes! Thanks in advance!

    Read the article

  • Question about the evolution of the interaction paradigm between web server programs and content provider programs

    - by smwikipedia
    Hi experts, In my opinion, a web server is responsible for delivering content to the client. If it is static content like pictures and static HTML documents, the web server just delivers them directly as a bitstream. If it is some dynamic content that is generated while processing the client's request, the web server will not generate the content itself but will call some external program to generate it. AFAIK, these kinds of dynamic content generation technologies include the following: CGI, ISAPI, ... And from here, I noticed that: ...In IIS 7, modules replace ISAPI filters... Are there any others? Could anyone help me complete the above list and elaborate on, or show some links about, their evolution? I think it would be very helpful for understanding applications such as IIS, Tomcat, and Apache. I once wrote a small CGI program, and though it serves as a content generator, it is still nothing but a normal standalone program. I call it normal because the CGI program has a main() entry point. But with a more recent technology like ASP.NET, I am not writing a complete program, but only a class library. Why did such a radical change happen? Many thanks.
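
    To illustrate the contrast described above, here is a minimal sketch of the class-library style in ASP.NET: a handler type with no main() entry point, which the IIS/ASP.NET pipeline loads and calls per request (the class name and the path registration are made up for the example):

    ```csharp
    using System.Web;

    // Compiled into a class library; there is no main(). The hosting pipeline
    // (IIS/ASP.NET) creates the handler and calls ProcessRequest for each request
    // mapped to it, e.g. via an <add verb="*" path="hello.ashx" .../> entry.
    public class HelloHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write("Generated inside the host process, not by a standalone CGI program.");
        }
    }
    ```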

    Read the article

  • How to detect a gamepad button press on OSX 10.5 and higher?

    - by Steph Thirion
    How do I detect a button press on a USB gamepad on OSX 10.5 and higher? I can't wrap my head around the ridiculously complex HID Manager (even though it was apparently simplified in 10.5), and the code samples from Apple have thousands of lines of code that would take days to understand and isolate what I need, so I'd appreciate it if someone could post a simple, fully coded solution for this isolated problem. EDIT: so far all answers are links to source code or semi-obscure libraries for all kinds of HID devices, which will require more research time than I'd like to invest in this. I am starting a bounty to get an actual snippet of code that solves this simple problem (using an external library or not). EDIT POST BOUNTY: thanks to all for your help, but unfortunately the answer that was automatically selected by the system is not working for me, I can't figure out why, and the author has not yet replied to my comments. Any insight would be appreciated, but until a fix is found, anyone looking for resources on this topic should take this answer with a pinch of salt.

    Read the article

  • How do I run NUnit in debug mode from Visual Studio?

    - by Jon Cage
    I've recently been building a test framework for a bit of C# I've been working on. I have NUnit set up and a new project within my workspace to test the component. All works well if I load up my unit tests from NUnit (v2.4), but I've got to the point where it would be really useful to run in debug mode and set some breakpoints. I've tried the suggestions from several guides which all suggest changing the 'Debug' properties of the test project: Start external program: C:\Program Files\NUnit 2.4.8\bin\nunit-console.exe Command line arguments: /assembly: <full-path-to-solution>\TestDSP\bin\Debug\TestDSP.dll I'm using the console version there, but have tried calling the GUI as well. Both give me the same error when I try to start debugging: Cannot start test project 'TestDSP' because the project does not contain any tests. Is this because I normally load \DSP.nunit into the NUnit GUI and that's where the tests are held? I'm beginning to think the problem may be that VS wants to run its own test framework and that's why it's failing to find the NUnit tests? [Edit] To those asking about test fixtures, one of my .cs files in the TestDSP project looks roughly like this: namespace Some.TestNamespace { // Testing framework includes using NUnit.Framework; [TestFixture] public class FirFilterTest { /// <summary> /// Tests that a FirFilter can be created /// </summary> [Test] public void Test01_ConstructorTest() { ...some tests... } } } ...I'm pretty new to C# and the NUnit test framework so it's entirely possible I've missed some crucial bit of information ;-) [FINAL SOLUTION] The big problem was the project I'd used. If you pick: Other Languages->Visual C#->Test->Test Project ...when you're choosing the project type, Visual Studio will try to use its own testing framework as far as I can tell. You should pick a normal C# class library project instead, and then the instructions in my selected answer will work.

    Read the article

  • Where do I put the logic of my MFC program?

    - by Matthew
    I created an application core, in C++, that I've compiled into a static library in Visual Studio. I am now in the process of writing a GUI for it, using MFC. I figured out how to map button presses to execute certain methods of my application core's main class (i.e. buttons to have it start and stop). The core class, however, should always be sampling data from an external source every second or two, and the GUI should then populate some fields after each sample is taken. I can't seem to find a spot in my MFC objects (like CDialog) where I can constantly check whether my class has grabbed the data, and if it has, put that data into some of the text boxes. A friend suggested that I create a thread in the OnInit() routine that would take care of this, but that solution isn't really working for me. Is there no spot where I can put an if statement that keeps being called until the program quits? i.e. if( coreapp.dataSampleReady() ) { // put coreapp.dataItem1() in TextBox1 // set progress bar to coreapp.dataItem2() // etc. // reset dataSampleReady }

    Read the article

  • How should rules for Aggregate Roots be enforced?

    - by MylesRip
    While searching the web, I came across a list of rules from Eric Evans' book that should be enforced for aggregates:
    1. The root Entity has global identity and is ultimately responsible for checking invariants.
    2. Root Entities have global identity. Entities inside the boundary have local identity, unique only within the Aggregate.
    3. Nothing outside the Aggregate boundary can hold a reference to anything inside, except to the root Entity. The root Entity can hand references to the internal Entities to other objects, but they can only use them transiently (within a single method or block).
    4. Only Aggregate Roots can be obtained directly with database queries. Everything else must be done through traversal.
    5. Objects within the Aggregate can hold references to other Aggregate roots.
    6. A delete operation must remove everything within the Aggregate boundary all at once.
    7. When a change to any object within the Aggregate boundary is committed, all invariants of the whole Aggregate must be satisfied.
    This all seems fine in theory, but I don't see how these rules would be enforced in the real world. Take rule 3, for example. Once the root entity has given an external object a reference to an internal entity, what's to keep that external object from holding on to the reference beyond the single method or block? (If the enforcement of this is platform-specific, I would be interested in knowing how this would be enforced within a C#/.NET/NHibernate environment.)
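
    One way rule 3 can be approximated in C# is to have the root hand internal entities to callers only inside a delegate, so the reference is naturally scoped to a single call; the Order/OrderLine names below are purely illustrative, and nothing in the language strictly prevents a determined caller from smuggling the reference out:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical aggregate: Order is the root, OrderLine is internal to the boundary.
    public class OrderLine
    {
        public string Sku { get; private set; }
        public int Quantity { get; set; }
        public OrderLine(string sku, int quantity) { Sku = sku; Quantity = quantity; }
    }

    public class Order
    {
        private readonly List<OrderLine> lines = new List<OrderLine>();

        public void AddLine(string sku, int quantity) { lines.Add(new OrderLine(sku, quantity)); }

        // The internal entity is exposed only transiently, inside a single call,
        // so callers have no obvious way to keep a long-lived reference (rule 3/4).
        public void WithLine(string sku, Action<OrderLine> work)
        {
            var line = lines.Single(l => l.Sku == sku);
            work(line);
            // invariant checks for the whole aggregate could run here (rule 8)
        }
    }

    // Usage: the caller changes a line without ever storing it in a field.
    // order.WithLine("ABC-1", line => line.Quantity = 3);
    ```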

    Read the article

  • Best way to catch a WCF exception in Silverlight?

    - by wahrhaft
    I have a Silverlight 2 application that is consuming a WCF service. As such, it uses asynchronous callbacks for all the calls to the methods of the service. If the service is not running, or it crashes, or the network goes down, etc. before or during one of these calls, an exception is generated as you would expect. The problem is, I don't know how to catch this exception. Because it is an asynchronous call, I can't wrap my begin call with a try/catch block and have it pick up an exception that happens after the program has moved on from that point. Because the service proxy is automatically generated, I can't put a try/catch block on each and every generated function that calls EndInvoke (where the exception actually shows up). These generated functions are also surrounded by External Code in the call stack, so there's nowhere else in the stack to put a try/catch either. I can't put the try/catch in my callback functions, because the exception occurs before they would get called. There is an Application_UnhandledException function in my App.xaml.cs, which captures all unhandled exceptions. I could use this, but it seems like a messy way to do it. I'd rather reserve this function for the truly unexpected errors (aka bugs) and not end up with code in this function for every circumstance I'd like to deal with in a specific way. Am I missing an obvious solution? Or am I stuck using Application_UnhandledException? [Edit] As mentioned below, the Error property is exactly what I was looking for. What is throwing me for a loop is the fact that the exception is thrown and appears to be uncaught, yet execution is able to continue. It triggers the Application_UnhandledException event and causes VS2008 to break execution, but continuing in the debugger allows execution to continue. It's not really a problem, it just seems odd.
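
    For reference, a sketch of where that Error property surfaces on a Silverlight-generated proxy; MyServiceClient, GetDataAsync, HandleServiceError, and DisplayResult are placeholders, not names from the original question:

    ```csharp
    // Hypothetical generated proxy. Silverlight proxies raise a *Completed event
    // whose args derive from System.ComponentModel.AsyncCompletedEventArgs, so a
    // communication or service failure shows up as e.Error in the callback.
    var client = new MyServiceClient();
    client.GetDataCompleted += (sender, e) =>
    {
        if (e.Error != null)
        {
            // Handle the failure here instead of letting it fall through to
            // Application_UnhandledException.
            HandleServiceError(e.Error);   // placeholder for app-specific handling
            return;
        }
        DisplayResult(e.Result);           // placeholder
    };
    client.GetDataAsync();
    ```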

    Read the article

  • Where do I attach the StoreKit delegate and observer in a Cocos2d App?

    - by Jeff B
    I have figured out how all of the StoreKit stuff works and have actually tested working code... however, I have a problem. I made my "store" layer/scene the SKProductsRequestDelegate. Is this the correct thing to do? I get the initial product info like so: SKProductsRequest *productRequest = [[SKProductsRequest alloc] initWithProductIdentifiers: productIDs]; [productRequest setDelegate: self]; [productRequest start]; The problem is that if I transition to a new scene when a request is in progress, the current layer is retained by the productRequest. This means that touches on my new scene/layer are handled by both the new layer and the old layer. I could cancel the productRequest when leaving the scene, but: I do not know if it is in progress at that point. I cannot release it because it may or may not have been released by the request delegates. There has got to be a better way to do this. I could make the delegate a class external to the current layer, but then I do not know how to easily update the layer with the product information when the handler is called.

    Read the article

  • redirecting the root domain - SEO and other issues, need some guidance!

    - by Jim Sp
    I'm not familiar with some of these forwarding methods and I need help. My issue is this: I have a site hosted on discountasp.net. My domain was registered through 1&1 and I pointed the DNS to what discountasp.net wanted. So when a user types www.mydomain.com, he/she sees the ASP.NET site hosted on discountasp.net, which is all fine. My main page is Index.aspx; I really suck at HTML page design and I don't have the time or the talent to fiddle with it (or money to get it done by a pro). The rest of the pages are fine. I want to use a good theme from Tumblr or Blogger - one of the blog sites - and create a page that I want to use as the first page, directly on Blogger or Tumblr, say yyy.blogspot.com (I have many reasons, so for now please don't bash my decision - let's just say that's what I want). That means when a user types www.mydomain.com, it should redirect to the Blogger or Tumblr page. Everything else stays the same - the links on the blog page will say www.mydomain.com/xxxx and show what's on the hosted website. I have set up the IIS rewrite rules, etc., so that all works just fine. The bottom line is I want to show an external site's web page as my root page. I suppose I'm struggling to even explain what I want! I can of course do a Response.Redirect on the Index.aspx page, which is the simplest way to manage this, but the big question is: will this hurt SEO in some way? If not, that is what I will do, leaving the rest of the infrastructure intact (I have already done this as a test and it works fine). Thank you very much, j
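
    One thing worth noting: Response.Redirect issues a 302 by default, and search engines generally treat a 301 as a permanent move. If the redirect is meant to be permanent, a sketch like the following for the Index.aspx code-behind is one option (the blogspot URL is just the example from the question, and whether redirecting the root to an external blog helps or hurts ranking is a separate judgment call):

    ```csharp
    using System;
    using System.Web.UI;

    public partial class Index : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Issue an explicit 301 instead of the default 302 from Response.Redirect.
            Response.StatusCode = 301;
            Response.Status = "301 Moved Permanently";
            Response.AddHeader("Location", "http://yyy.blogspot.com/");
            Response.End();
        }
    }
    ```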

    Read the article

  • How does one implement storage/retrieval of smart-search/mailbox features?

    - by humble_coder
    Hi All, I have a question regarding the implementation of smart-search features. For example, consider something like "smart mailboxes" in various email applications. Let's assume you have your data (emails) stored in a database and, depending on the field for which the query will be created, you present different options to the end user. At the moment let's assume the Subject, Verb, Object approach… For instance, say you have the following: SUBJECTs: message, to_address, from_address, subject, date_received VERBs: contains, does_not_contain, is_equal_to, greater_than, less_than OBJECTs: ??????? Now, in case it isn't clear, I want a table structure (although I'm not opposed to an external XMLesque file of some sort) to store, and later retrieve/present, my criteria for smart searches/mailboxes. As an example, using SVO I could easily store then reconstruct a query for "date between two dates" -- simply use "date greater than" AND "date less than". However, what if, in the same smart search, I wanted a "between" OR'ed with another criterion? You can see that it might get out of hand -- not necessarily in the query creation (as that is rather simplistic), but in the option presentation and storage mechanism. Perhaps I need to think at a more granular level. Perhaps I need to simply allow the user to select AND or OR for each entry independently instead of making it an ALL OR NOTHING type smart search (i.e. instead of MATCH ALL or MATCH ANY, I need to simply allow them to select -- I just don't want it to turn into a Hydra). Any input would be most appreciated. My apologies if the question is a bit incoherent. It is late, and my brain is toast. Best.
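
    A minimal C# sketch of one way the storage model could look, with a per-row conjunction instead of a single match-all/match-any flag; all of the type and member names are invented for the example:

    ```csharp
    using System.Collections.Generic;

    public enum Conjunction { And, Or }
    public enum Verb { Contains, DoesNotContain, IsEqualTo, GreaterThan, LessThan }

    // One persisted row per criterion; Conjunction says how it joins the previous
    // row, so "date between two dates" is simply GreaterThan AND LessThan, and
    // that pair can still be OR'ed with a further criterion.
    public class Criterion
    {
        public Conjunction Conjunction { get; set; }
        public string Subject { get; set; }   // e.g. "date_received"
        public Verb Verb { get; set; }
        public string Value { get; set; }     // the user-supplied object
    }

    public class SmartMailbox
    {
        public string Name { get; set; }
        public List<Criterion> Criteria { get; set; }  // mapped to a table or serialized to XML
    }
    ```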

    Read the article

  • What the VC++ compiler/linker does when building a C++ project with Managed Extensions

    - by ???
    The initial problem is that I tried to rebuild a C++ project with debug symbols and copied it to a test machine. The output of the project is an external COM server (.exe file). When calling the COM interface function, there's an RPC call failure: COMException(0x800706BE): The remote procedure call failed. According to the COM HRESULT design, if the FACILITY code is 7 it's actually a WIN32 error, and the win32 error code is 0x6BE, which is the above-mentioned "remote procedure call failed". All I do is replace the COM server .exe file; the original file works well. When I looked into the project, I found it's a C++ project with Managed Extensions. When I checked the DLL with Reflector, it showed 2 additional .NET assembly references. Then I checked the project settings and found nothing about the extra 2 assembly references. I turned on the compiler's show-includes option and the linker's verbose library output, and tried to analyze whether the assemblies are indirectly referenced via a .h file. I've collected all the .h files and grepped all of them for '#using', '#import', and the assembly file itself. There really is a '#using ' in one of the .h files, but it is not relevant to the referenced assemblies. As for the linked .lib library files, only one of the .lib files is a side product of another managed-extension-enabled C++ project; all others are produced by pure, traditional C++ projects. For the managed-extension-enabled C++ project, I checked the output DLL assembly, and it does NOT reference the 2 assemblies. I even tried to capture access to the additional assembly files via Sysinternals' Filemon and Procmon, but the rebuild process does NOT access these files. I'm very confused about the compile and linking process model of a VC++/CLI project; where did the additional assembly references slip into the final assembly? Thanks in advance for any of your help.

    Read the article

  • MSBUILD batch targets

    - by Darthg8r
    I want to deploy an application to a list of servers. I have all of the build issues taken care of, but I'm having trouble publishing to a list of servers. I want to read the list of servers from an external file and call a target, passing in the name of each server. <ItemGroup> <File Include="$(SolutionFolder)CP\Build\DenormDevServers.txt" /> </ItemGroup> <Target Name="DeployToServer" Inputs="Servers" Outputs="Nothing"> <Message Text="Deployment to server done here. Deploying to server: @(Servers)" /> </Target> <Target Name="Test"> <ReadLinesFromFile File="@(File)"> <Output TaskParameter="Lines" ItemName="Servers" /> </ReadLinesFromFile> <CallTarget Targets="DeployToServer" ContinueOnError="true"></CallTarget> </Target> I can't seem to get it to "Deploy" to each server in the list. The output looks like this: Deployment to server done here. Deploying to server: Notice there is no server name, nor is it done more than once. There are 2 lines in DenormDevServers.txt.

    Read the article

  • Project Euler #14 and memoization in Clojure

    - by dbyrne
    As a neophyte clojurian, it was recommended to me that I go through the Project Euler problems as a way to learn the language. It's definitely a great way to improve your skills and gain confidence. I just finished up my answer to problem #14. It works fine, but to get it running efficiently I had to implement some memoization. I couldn't use the prepackaged memoize function because of the way my code was structured, and I think it was a good experience to roll my own anyways. My question is whether there is a good way to encapsulate my cache within the function itself, or if I have to define an external cache like I have done. Also, any tips to make my code more idiomatic would be appreciated. (use 'clojure.test) (def mem (atom {})) (with-test (defn chain-length ([x] (chain-length x x 0)) ([start-val x c] (if-let [e (last(find @mem x))] (let [ret (+ c e)] (swap! mem assoc start-val ret) ret) (if (<= x 1) (let [ret (+ c 1)] (swap! mem assoc start-val ret) ret) (if (even? x) (recur start-val (/ x 2) (+ c 1)) (recur start-val (+ 1 (* x 3)) (+ c 1))))))) (is (= 10 (chain-length 13)))) (with-test (defn longest-chain ([] (longest-chain 2 0 0)) ([c max start-num] (if (>= c 1000000) start-num (let [l (chain-length c)] (if (> l max) (recur (+ 1 c) l c) (recur (+ 1 c) max start-num)))))) (is (= 837799 (longest-chain))))

    Read the article

  • Design question for windows application, best approach?

    - by Jamie Keeling
    Hello, I am in the process of designing an application that will allow you to find pictures (screenshots) made by certain programs. I will provide the locations for a few of the programs in the application itself to get the user started. I was wondering how I should go about adding new locations as time goes on. My first thought was simply hard-coding them into the application, but this would mean the user has to reinstall it to make the changes take effect. My second idea was to use an XML file to contain all the locations as well as other data, such as the name of each application. This also means the user can add their own locations if they wish, as well as share them over the internet. The second option seemed the best approach, but then I had to think about how it would be managed on the user's computer. Ideally I'd like just a single .exe without reliance on any external files such as the XML, but this would bring me back to point one. Would it be best to simply use ClickOnce deployment to create an entry in the start menu and create a folder containing the .exe and the file names? Thanks for the feedback; I don't want to start implementing the application until the design is nailed down.
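
    A small sketch of the XML option, assuming a made-up file format and class names (XmlSerializer is one of several ways to read it):

    ```csharp
    using System.Collections.Generic;
    using System.IO;
    using System.Xml.Serialization;

    // Hypothetical file format:
    // <Sources><Source><Name>...</Name><Path>...</Path></Source></Sources>
    public class Source
    {
        public string Name { get; set; }
        public string Path { get; set; }
    }

    [XmlRoot("Sources")]
    public class SourceList
    {
        [XmlElement("Source")]
        public List<Source> Sources { get; set; }
    }

    public static class SourceStore
    {
        // Loads the list of screenshot locations; users can edit or share the file.
        public static SourceList Load(string file)
        {
            var serializer = new XmlSerializer(typeof(SourceList));
            using (var stream = File.OpenRead(file))
            {
                return (SourceList)serializer.Deserialize(stream);
            }
        }
    }
    ```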

    Read the article

  • Preventing a heavy process from sinking in the swap file

    - by eran
    Our service tends to fall asleep during the night on our client's server, and then has a hard time waking up. What seems to happen is that the process heap, which is sometimes several hundred MB, is moved to the swap file. This happens at night, when our service is not used and others are scheduled to run (DB backups, AV scans, etc.). When this happens, after a few hours of inactivity the first call to the service takes up to a few minutes (subsequent calls take seconds). I'm quite certain it's an issue of virtual memory management, and I really hate the idea of forcing the OS to keep our service in physical memory. I know doing that would hurt other processes on the server and decrease the overall server throughput. That said, our clients just want our app to be responsive; they don't care if nightly jobs take longer. I vaguely remember there's a way to force Windows to keep pages in physical memory, but I really hate that idea. I'm leaning more towards some internal or external watchdog that will trigger higher-level functionality (there is already an internal scheduler, but it does very little and makes no difference). A 3rd-party tool that provided that kind of service would be just as good. I'd love to hear any comments, recommendations, and common solutions to this kind of problem. The service is written in VC2005 and runs on Windows servers.

    Read the article

  • How should I configure grub for booting linux kernel from a USB hard drive?

    - by skolima
    I have a laptop hard drive in an external enclosure which I use as a large pendrive. For an added twist, I have installed Linux on it, so I can boot any machine with my distribution of choice (e.g. for data recovery, repairing a b0rked system, or just using a borrowed laptop without destroying the preinstalled Windows). The problem is that, depending on the hardware configuration, the USB hard drive may be visible under different paths. For grub configuration I just use (hd0,0), as it is relative to the device grub was launched from. I have UUID entries in /etc/fstab. I also specify rootwait in the kernel parameters so that it waits for the USB subsystem to settle down before trying to mount the device. What should I pass to the kernel as root=? Currently I boot from the pendrive once, check the debug messages to see what /dev/sdX device has been assigned to the USB drive by the kernel, then reboot and edit the grub configuration. I can't change anything on the PC besides enabling Boot from USB hard drive in BIOS and setting it to higher priority than internal hard drives. There are various initrd-generating scripts which include support for UUID in the root device path; unfortunately the Gentoo native one (genkernel) does not support rootwait, and I had no luck trying to use others. The boot process goes like this (it is quite similar in Windows):
    1. The BIOS chooses the boot device and loads whatever is in its MBR (which happens to be grub stage-1).
    2. Grub loads its configuration and stage-2 files from the device it has set as root, using (hd0) for the device it was loaded from by the BIOS.
    3. Grub loads and starts a kernel (still the same numbering, so I can use (hd0,0) again).
    4. The kernel initializes all built-in devices (rootwait does its magic now).
    5. The kernel mounts the partition it was passed as root (this is a kernel parameter, not a grub parameter).
    6. init.d starts the userland booting process, including mounting things from /etc/fstab.
    Part 5 is the one giving me problems.

    Read the article

  • Linking a template class using another template class (error LNK2001)

    - by Luís Guilherme
    I implemented the "Strategy" design pattern using an abstract template class and two subclasses. It goes like this: template <class T> class Neighbourhood { public: virtual void alter(std::vector<T>& array, int i1, int i2) = 0; }; and template <class T> class Swap : public Neighbourhood<T> { public: virtual void alter(std::vector<T>& array, int i1, int i2); }; There's another subclass, just like this one, and alter is implemented in the .cpp file. OK, fine! Now I declare another method, in another class (including the Neighbourhood header file, of course), like this: void lSearch(/*parameters*/, Neighbourhood<LotSolutionInformation> nhood); It compiles cleanly. When linking starts, I get the following error: 1>SolverFV.obj : error LNK2001: unresolved external symbol "public: virtual void __thiscall lsc::Neighbourhood<class LotSolutionInformation>::alter(class std::vector<class LotSolutionInformation,class std::allocator<class LotSolutionInformation> > &,int,int)" (?alter@?$Neighbourhood@VLotSolutionInformation@@@lsc@@UAEXAAV?$vector@VLotSolutionInformation@@V?$allocator@VLotSolutionInformation@@@std@@@std@@HH@Z)

    Read the article

  • PHP OOP: Avoid Singleton/Static Methods in Domain Model Pattern

    - by sunwukung
    I understand the importance of Dependency Injection and its role in Unit testing, which is why the following issue is giving me pause: One area where I struggle not to use the Singleton is the Identity Map/Unit of Work pattern (which keeps tabs on Domain Object state). //Not actual code, but it should demonstrate the point class Monitor{//singleton construction omitted for brevity static $members = array();//keeps record of all objects static $dirty = array();//keeps record of all modified objects static $clean = array();//keeps record of all clean objects } class Mapper{//queries database, maps values to object fields public function find($id){ if(isset(Monitor::members[$id]){ return Monitor::members[$id]; } $values = $this->selectStmt($id); //field mapping process omitted for brevity $Object = new Object($values); Monitor::new[$id]=$Object return $Object; } $User = $UserMapper->find(1);//domain object is registered in Id Map $User->changePropertyX();//object is marked "dirty" in UoW // at this point, I can save by passing the Domain Object back to the Mapper $UserMapper->save($User);//object is marked clean in UoW //but a nicer API would be something like this $User->save(); //but if I want to do this - it has to make a call to the mapper/db somehow $User->getBlogPosts(); //or else have to generate specific collection/object graphing methods in the mapper $UserPosts = $UserMapper->getBlogPosts(); $User->setPosts($UserPosts); Any advice on how you might handle this situation? I would be loath to pass/generate instances of the mapper/database access into the Domain Object itself to satisfy DI. At the same time, avoiding that results in lots of calls within the Domain Object to external static methods. Although I guess if I want "save" to be part of its behaviour then a facility to do so is required in its construction. Perhaps it's a problem of responsibility: the Domain Object shouldn't be burdened with saving. It's just quite a neat feature from the Active Record pattern - it would be nice to implement it in some way.

    Read the article

  • How do I use a jQuery not selector to select relative URLs?

    - by Matt
    I'm working on a little jQuery script to add Google Analytics pageTracker onclick data to all relative URLs on my forum, allowing me to track clicks to external sites. I don't want to add the onclick to internal links on forum.sitename or sitename, and I don't want to add them to any hrefs marked # or that start with /. My script below works nicely, except for one minor problem! All of the forum's URLs are relative and don't start with /. I appear to have no way to change that, so I need to modify the jQuery below to prevent it from adding the onclick to links like as it currently does. What I want to do is write a .not() function like .not("[href!^=http") to prevent jQuery from adding the onclick to any hrefs which do not start with http. However, .not() appears not to support this. I'm new to jQuery and can't figure this out. Any pointers would be massively appreciated. $(document).ready(function(){ // Get URL from a href var URL = $("a").attr('href'); // Add pageTracker data for GA tracking $("a") .not("[href^=#]") .not("[href^=http://forum.sitename]") .not("[href^=http://www.sitename]") .attr("onclick","pageTracker._trackEvent('Outgoing_Links', 'Forum', " + URL + ");") ; }); Thanks!

    Read the article

  • CSS Table Formatting to an HTML Table

    - by Rurigok
    I am attempting to apply CSS formatting to two HTML tables, but I cannot. I am setting up a webpage in HTML & CSS (with the CSS in an external sheet) and the layout of the website depends on the tables. There are 2 tables, one for the head and another for the body. They are set up so that content sits in one middle column of 60% width, with one column on each side of the center at 20% width each, along with other table formatting. My question is: how can I format the tables in CSS? I successfully formatted them in HTML, but this will not do. This is the CSS code for the tables - each table has the id layouttable: #layouttable{border:0px;width:100%;} #layouttable td{width:20%;vertical-align:top;} #layouttable td{width:60%;vertical-align:top;background-color:#E8E8E8;} #layouttable td{width:20%;vertical-align:top;} The tables in the HTML document each have, in order, these elements (with the content inside not shown): <table id="layouttable"><tr><td></td><td></td><td></td></tr></table> Does anyone have any idea why this CSS is not working, or can anyone write some code to fix it? If further explanation is needed, please ask.

    Read the article
