Search Results

Search found 6945 results on 278 pages for 'azure use cases'.

Page 84/278 | < Previous Page | 80 81 82 83 84 85 86 87 88 89 90 91  | Next Page >

  • C++ - passing references to boost::shared_ptr

    - by abigagli
    If I have a function that needs to work with a shared_ptr, wouldn't it be more efficient to pass it a reference to it (to avoid copying the shared_ptr object)? What are the possible bad side effects? I envision two possible cases:

    1) Inside the function a copy is made of the argument, as in:

        ClassA::take_copy_of_sp(boost::shared_ptr<foo> &sp)
        {
            ...
            m_sp_member = sp; // this will copy the object, incrementing the refcount
            ...
        }

    2) Inside the function the argument is only used, as in:

        Class::only_work_with_sp(boost::shared_ptr<foo> &sp) // again, no copy here
        {
            ...
            sp->do_something();
            ...
        }

    In neither case can I see a good reason to pass the boost::shared_ptr by value instead of by reference. Passing by value would only "temporarily" increment the reference count due to the copying, and then decrement it when exiting the function scope. Am I overlooking something? Andrea.

    EDIT: Just to clarify, after reading several answers: I perfectly agree with the premature-optimization concerns, and I always try to profile first and then work on the hotspots. My question was more from a purely technical code point of view, if you know what I mean.
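
    A minimal sketch of the two cases, using std::shared_ptr as a stand-in for boost::shared_ptr and passing by const reference (the usual recommendation when the callee may or may not take a copy); the foo/ClassA names follow the question:

        #include <iostream>
        #include <memory>

        struct foo {
            void do_something() const { std::cout << "working\n"; }
        };

        class ClassA {
            std::shared_ptr<foo> m_sp_member;
        public:
            // The copy is intended here, so it happens exactly once, inside the body.
            void take_copy_of_sp(const std::shared_ptr<foo>& sp) {
                m_sp_member = sp;                  // refcount incremented once
            }
            // No ownership is taken, so the refcount is never touched.
            void only_work_with_sp(const std::shared_ptr<foo>& sp) const {
                sp->do_something();
            }
        };

        int main() {
            auto p = std::make_shared<foo>();
            ClassA a;
            a.take_copy_of_sp(p);
            a.only_work_with_sp(p);
            std::cout << p.use_count() << "\n";    // 2: p itself plus the stored member
        }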

    Read the article

  • How to write this function as a PL/pgSQL function?

    - by morpheous
    I am trying to implement some business logic in a PL/pgSQL function. I have hacked together some pseudocode that explains the kind of logic I want to include. Note: the function returns a table, so I can use it in a query like:

        SELECT A.col1, B.col1
        FROM (SELECT * FROM some_table_returning_func(1, 1, 2, 3)) AS A, tbl2 AS B;

    The pseudocode of the PL/pgSQL function is below:

        CREATE FUNCTION some_table_returning_func(uid int, type_id int, filter_type_id int, filter_id int)
        RETURNS TABLE AS $$
        DECLARE
            where_clause text := 'tbl1.id = ' + uid;
            ret TABLE;
        BEGIN
            switch (filter_type_id) {
                case 1:
                    switch (filter_id) {
                        case 1:
                            where_clause += ' AND tbl1.item_id = tbl2.id AND tbl2.type_id = filter_id';
                            break;
                        // other cases follow ...
                    }
                    break;
                // other cases follow ...
            }
            // the WHERE clause has been built, now run a query based on the type
            ret = SELECT [COL1, ... COLN] WHERE where_clause;
            IF (type_id <> 1) THEN
                return ret;
            ELSE
                return select * from another_table_returning_func(ret, 123);
            ENDIF;
        END;
        $$ LANGUAGE plpgsql;

    I have the following questions:

    1. How can I write the function correctly, i.e. EXECUTE the query with the generated WHERE clause and return a table?
    2. How can I write a PL/pgSQL function that accepts a table and an integer and returns a table (another_table_returning_func)?
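
    A hedged sketch of the first part (the column names on tbl1/tbl2 are hypothetical): PL/pgSQL builds the dynamic WHERE clause with string concatenation (||, not +) and runs it with RETURN QUERY EXECUTE; in real code the values should go through format()/quote_literal() rather than plain concatenation:

        CREATE OR REPLACE FUNCTION some_table_returning_func(uid int, type_id int,
                                                             filter_type_id int, filter_id int)
        RETURNS TABLE(col1 int, col2 text) AS $$
        DECLARE
            where_clause text := 'tbl1.id = ' || uid;
        BEGIN
            IF filter_type_id = 1 AND filter_id = 1 THEN
                where_clause := where_clause
                    || ' AND tbl1.item_id = tbl2.id AND tbl2.type_id = ' || filter_id;
            END IF;

            RETURN QUERY EXECUTE
                'SELECT tbl1.col1, tbl1.col2 FROM tbl1, tbl2 WHERE ' || where_clause;
        END;
        $$ LANGUAGE plpgsql;

    For the second part, PL/pgSQL cannot take a table as an argument directly; the usual workarounds are passing a refcursor or having another_table_returning_func re-run the query itself.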

    Read the article

  • Is it possible to replace values in a queryset before sending it to your template?

    - by Issy
    Hi guys, I'm wondering if it's possible to change a value returned from a queryset before sending it off to the template. Say for example you have a bunch of records:

        Date       | Time  | Description
        10/05/2010 | 13:30 | Testing...

    etc... However, based on the day of the week, the time may change. This is static, though: for example, on a Monday the time is ALWAYS 15:00. Now you could add another table to configure special cases, but to me that seems overkill, as this is a rule. How would you replace that value before sending it to the template? I thought about using the new if tags (if day=1), but this is more business logic than presentation. I tested this in a custom template tag:

        def render(self, context):
            result = self.model._default_manager.filter(from_date__lte=self.now).filter(to_date__gte=self.now)
            if self.day == 4:
                result = result.exclude(type__exact=2).order_by('time')
            else:
                result = result.order_by('type')
            result[0].time = '23:23:23'
            context[self.varname] = result
            return ''

    However it still displays the results from the DB. Is this somehow related to 'lazy' evaluation of templates? Thanks!

    Update, responding to comments below: it's not stored wrongly in the DB, it's stored correctly. However there is a small side case where the value needs to change. For example, I have a From Date and a To Date, and my query checks whether today's date is between those. With that, they could set up a from date - to date range for an entire year, and the special cases (like Mondays, as an example) are taken care of. If you wanted to store this in the DB you would have to capture several more records to cater for the side case, i.e. you would be capturing the same information just to cater for that one day when the time changes. (And the time always changes on the same day, and is always the same.)
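
    A minimal sketch of one way to do this (names follow the tag above; the Monday time is assumed): materialise the queryset into a list first, since indexing a lazy queryset issues a fresh query each time and in-place edits on result[0] are lost:

        import datetime

        def render(self, context):
            qs = self.model._default_manager.filter(
                from_date__lte=self.now, to_date__gte=self.now)
            if self.day == 4:
                qs = qs.exclude(type__exact=2).order_by('time')
            else:
                qs = qs.order_by('type')

            records = list(qs)              # evaluate once; edits now stick
            if records and self.day == 0:   # e.g. Monday: time is always 15:00
                records[0].time = datetime.time(15, 0)

            context[self.varname] = records
            return ''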

    Read the article

  • How to change the date/time in Python for all modules?

    - by Felix Schwarz
    When I write business logic, my code often depends on the current time. For example, the algorithm that looks at each unfinished order and checks whether an invoice should be sent (which depends on the number of days since the job ended). In these cases, creating an invoice is not triggered by an explicit user action but by a background job. This creates a problem for me when it comes to testing: I can test invoice creation itself easily, but it is hard to create an order in a test and check that the background job identifies the correct orders at the correct time. So far I have found two solutions:

    1. In the test setup, calculate the job dates relative to the current date. Downside: the code becomes quite complicated as there are no explicit dates written anymore. Sometimes the business logic is pretty complex for edge cases, so it becomes hard to debug due to all these relative dates.
    2. I have my own date/time accessor functions which I use throughout my code. In the test I just set a current date and all modules get this date. So I can easily simulate an order created in February and check that the invoice is created in April. Downside: third-party modules do not use this mechanism, so it's really hard to integrate and test those.

    The second approach has been much more successful for me overall. Therefore I'm looking for a way to set the time that Python's datetime and time modules return. Setting the date is usually enough; I don't need to set the current hour or second (even though that would be nice). Is there such a utility? Is there an (internal) Python API that I can use?
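
    A minimal sketch of the accessor approach combined with unittest.mock for tests (the invoice rule here is hypothetical); the freezegun library's freeze_time() helper takes this further and also covers code that calls datetime directly:

        import datetime
        from unittest import mock

        def current_date():
            """Single point of access for 'today' used by all business logic."""
            return datetime.date.today()

        def invoice_due(order_end_date, grace_days=30):
            return current_date() >= order_end_date + datetime.timedelta(days=grace_days)

        # In a test: patch the accessor so "today" is a fixed date.
        with mock.patch(__name__ + '.current_date',
                        return_value=datetime.date(2010, 4, 1)):
            assert invoice_due(datetime.date(2010, 2, 1))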

    Read the article

  • Why do C# containers and GUI classes use int and not uint for size-related members?

    - by smerlin
    I usually program in C++, but for school I have to do a project in C#. So I went ahead and coded the way I was used to in C++, but was surprised when the compiler complained about code like the following:

        const uint size = 10;
        ArrayList myarray = new ArrayList(size); // Arg 1: cannot convert from 'uint' to 'int'

    OK, they expect int as the argument type, but why? I would feel much more comfortable with uint as the argument type, because uint fits much better in this case. Why do they use int as the argument type pretty much everywhere in the .NET library, even though for many cases negative numbers don't make any sense (since no container nor GUI element can have a negative size)? If the reason they used int is that they didn't expect the average user to care about signedness, why didn't they additionally add overloads for uint? Is this just MS not caring about sign correctness, or are there cases where negative values make some sense / carry some information (error codes?) for container/GUI widget sizes?
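
    A hedged sketch of the usual workaround (an explicit cast) and of one reason the BCL sticks to int: unsigned types are not CLS-compliant, so they are avoided in public signatures. The Demo class here is hypothetical:

        using System;
        using System.Collections;

        [assembly: CLSCompliant(true)]

        public class Demo
        {
            // Compiler warning CS3001: argument type 'uint' is not CLS-compliant.
            // Public unsigned parameters like this are what the framework avoids.
            public void Resize(uint size) { }

            public static void Main()
            {
                const uint size = 10;
                var list = new ArrayList((int)size); // explicit cast satisfies the int parameter
                Console.WriteLine(list.Capacity);    // 10
            }
        }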

    Read the article

  • Box2D in Flash runs quicker when drawing debug data than not

    - by bowdengm
    I've created a small game with Box2D for AS3 - I have sprites attached to the stage that take their position from the underlying Box2D world. These sprites are mostly PNGs. When the game runs with DrawDebugData() being called every update, it runs nice and smoothly. However, when I comment this out, it runs choppily. In both cases all my sprites are being rendered, so it seems that it's running faster when it's drawing the debug data additionally (i.e. my sprites are on the screen in both cases!). What's going on? Does drawing the debug data flick some sort of 'render quick' switch? If so, what's the switch?! I can't see it in the Box2D code.

        function Update(e) {
            m_world.Step(m_timeStep, m_velocityIterations, m_positionIterations);

            // draw debug?
            m_world.DrawDebugData();
            // with the above line in, I get 27fps; without it, I get 19fps.
            // that's the only change that's causing such a huge difference.

            doStuff();
        }

    Interestingly, if I set the debug draw scale to something different from my world scale, it slows down to 19fps. So there's something happening when it draws the boxes under my sprites that causes it to run quicker. Cheers, Guy

    Read the article

  • One bidirectional TCP socket or two unidirectional? (Linux, high volume, low latency)

    - by osgx
    Hello, I need to exchange a high volume of data periodically, with the lowest possible latency, between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Would it be faster to use 1 TCP socket (for both send and recv) or 2 unidirectional TCP sockets? The test for this task is very much like the NetPIPE network benchmark - measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received at least 3 times (in the real task the number of sends is greater; both processes will be sending and receiving, ping-pong style). The supposed benefit of 2 unidirectional connections comes from the Linux source: http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847

        3847 /*
        3848  * TCP receive function for the ESTABLISHED state.
        3849  *
        3850  * It is split into a fast path and a slow path. The fast path is
        3851  * disabled when:
             ...
        3859  * - Data is sent in both directions. Fast path only supports pure senders
        3860  *   or pure receivers (this means either the sequence number or the ack
        3861  *   value must stay constant)
             ...
        3863  *
        3864  * When these conditions are not satisfied it drops into a standard
        3865  * receive procedure patterned after RFC793 to handle all cases.
        3866  * The first three cases are guaranteed by proper pred_flags setting,
        3867  * the rest is checked inline. Fast processing is turned on in
        3868  * tcp_data_queue when everything is OK.

    All the other conditions for disabling the fast path are false; only a socket that is not unidirectional keeps the kernel off the fast path on receive.
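
    A minimal ping-pong sketch of the single bidirectional-socket variant, for comparison against a two-socket build (host, port and message size are hypothetical; the peer is assumed to echo everything back):

        import socket
        import time

        HOST, PORT, ROUNDS, SIZE = "192.168.0.2", 9000, 1000, 64

        s = socket.create_connection((HOST, PORT))
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # avoid Nagle delays

        payload = b"x" * SIZE
        start = time.perf_counter()
        for _ in range(ROUNDS):
            s.sendall(payload)
            received = b""
            while len(received) < SIZE:                # read the full echo back
                received += s.recv(SIZE - len(received))
        elapsed = time.perf_counter() - start
        print("mean round trip: %.1f us" % (elapsed / ROUNDS * 1e6))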

    Read the article

  • "variable tracking" is eating my compile time!

    - by wowus
    I have an auto-generated file which looks something like this:

        static void do_SomeFunc1(void* parameter)
        {
            // Do stuff.
        }

        // Continues on for another 4000 functions...

        void dispatch(int id, void* parameter)
        {
            switch (id)
            {
            case ::SomeClass1::id: return do_SomeFunc1(parameter);
            case ::SomeClass2::id: return do_SomeFunc2(parameter);
            // This continues for the next 4000 cases...
            }
        }

    When I build it like this, the build time is enormous. If I automagically inline all the functions into their respective cases using my script, the build time is cut in half. GCC 4.5.0 says ~50% of the build time is being taken up by "variable tracking" when I use -ftime-report. What does this mean, and how can I speed up compilation while still maintaining the superior cache locality of keeping the functions pulled out of the switch?

    EDIT: Interestingly enough, the build time has exploded only on debug builds, as per the following profiling information for the whole project (which isn't just the file in question, but still a good metric; the file in question takes the most time to build):

        Debug:   8 minutes 50 seconds
        Release: 4 minutes 25 seconds
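
    A hedged note: the "variable tracking" entry in -ftime-report is GCC's debug-info pass that maps variables to locations, which is why it only hits debug builds; the GCC manual documents -fno-var-tracking-assignments and -fno-var-tracking for switching it off, which may be worth trying on just this generated file, e.g.:

        # assumption: the generated file is called dispatch_gen.cpp and only debug builds are affected
        g++ -O0 -g -fno-var-tracking-assignments -c dispatch_gen.cpp -o dispatch_gen.o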

    Read the article

  • Whose fault is a NullReferenceException?

    - by stefan.at.wpf
    I'm currently working on a class which exposes an internal List through a property. The List shall and can be modified. The problem is that entries in the internal list could be set to null from outside the class. My code actually looks like this:

        class ClassWithList
        {
            List<object> _list = new List<object>();

            // get accessor, which however returns the reference to the list,
            // therefore the list can be modified (this is intended)
            public List<object> Data
            {
                get { return _list; }
            }

            private void doSomeWorkWithTheList()
            {
                foreach (object obj in _list)
                {
                    // do some work with the objects in the list without checking for null.
                }
            }
        }

    So in doSomeWorkWithTheList() I could either always check whether the current list entry is null, or I could just assume that the person using this class doesn't have the bright idea of setting entries to null. So the questions come down to: whose fault is a NullReferenceException in this case? Is it the fault of the class developer for not checking everything for null (which would make code generally - not only in this class - more complex), or is it the fault of the user of this class, since setting a list entry to null doesn't really make sense? I'd tend not to check values for null except in some really special cases. Is this bad style, or the de facto standard in practice? I know there's probably no ultimate answer to this; I'm just missing enough experience with such things and am therefore wondering what other developers think about such cases, and want to hear what's done in reality about checking for null (or not).
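
    A hedged sketch of one way to take the question off the table entirely (a design alternative, not the class above as-is): expose the list read-only and funnel mutations through a method that rejects null at the boundary.

        using System;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public class ClassWithList
        {
            private readonly List<object> _list = new List<object>();

            // Callers can read and enumerate, but cannot insert entries directly.
            public ReadOnlyCollection<object> Data
            {
                get { return _list.AsReadOnly(); }
            }

            public void Add(object item)
            {
                if (item == null)
                    throw new ArgumentNullException("item");
                _list.Add(item);
            }

            private void DoSomeWorkWithTheList()
            {
                foreach (object obj in _list)
                    Console.WriteLine(obj);   // entries are guaranteed non-null here
            }
        }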

    Read the article

  • Is it possible to navigate to the parent node of a matched node during XSLT processing?

    - by Darin
    I'm working with an OpenXML document, processing the main document part with some XSLT. I've selected a set of nodes via:

        <xsl:template match="w:sdt">
        </xsl:template>

    In most cases, I simply need to replace that matched node with something else, and that works fine. BUT, in some cases, I need to replace not the w:sdt node that matched, but the closest w:p ancestor node (i.e. the first paragraph node that contains the sdt node). The trick is that the condition used to decide one or the other is based on data derived from the attributes of the sdt node, so I can't use a typical XSLT XPath filter. I'm trying to do something like this:

        <xsl:template match="w:sdt">
          <xsl:choose>
            <xsl:when test={first condition}>
              {apply whatever templating is necessary}
            </xsl:when>
            <xsl:when test={exception condition}>
              <!-- select the parent of the ancestor w:p nodes and apply the appropriate templates -->
              <xsl:apply-templates select="(ancestor::w:p)/.." mode="backout" />
            </xsl:when>
          </xsl:choose>
        </xsl:template>

        <!-- by using "mode", only this template will be applied to those matching nodes from the apply-templates above -->
        <xsl:template match="node()" mode="backout">
          {CUSTOM FORMAT the node appropriately}
        </xsl:template>

    This whole concept works, BUT no matter what I've tried, it always applies the formatting from the CUSTOM FORMAT template to the w:p node, NOT its parent node. It's almost as if you can't reference a parent from a matched node. Maybe you can't, but I haven't found any docs that say so. Any ideas?

    Read the article

  • Synth LaF JLabel DISABLED color

    - by mmoris
    Hi all, using the Synth LaF, I am unable to set a JLabel's FOREGROUND color for the DISABLED state. Has anybody succeeded in doing this? Here is my label's style definition in my LaF.xml file:

        <style id="whiteLabelStyle">
            <opaque value="false"/>
            <font name="Bitstream Vera Sans" size="16" />
            <state>
                <color type="FOREGROUND" value="WHITE"/>
            </state>
            <state value="DISABLED">
                <color type="FOREGROUND" value="BLACK"/>
            </state>
        </style>
        <bind style="whiteLabelStyle" type="name" key="WhiteOrbitLabel"/>

    Please note that all the other styles defined in my LaF.xml file are rendered properly in my application, including my label's WHITE normal-state color (it just never goes to black when I do lbl.setEnabled(false)). Also, going through the Synth code, I found the following comment in SynthStyle.getColor:

        if ((context.getComponentState() & SynthConstants.DISABLED) != 0) {
            //This component is disabled, so return the disabled color.
            //In some cases this means ignoring the color specified by the
            //developer on the component. In other cases it means using a
            //specified disabledTextColor, such as on JTextComponents.
            //For example, JLabel doesn't specify a disabled color that the
            //developer can set, yet it should have a disabled color to the
            //text when the label is disabled. This code allows for that.
            if (c instanceof JTextComponent) {
                JTextComponent txt = (JTextComponent)c;
                Color disabledColor = txt.getDisabledTextColor();
                if (disabledColor == null || disabledColor instanceof UIResource) {
                    return getColorForState(context, type);
                }
            } else if (c instanceof JLabel &&
                       (type == ColorType.FOREGROUND || type == ColorType.TEXT_FOREGROUND)) {
                return getColorForState(context, type);
            }

    But I could not figure out how to set a disabled color for a JLabel. Thanks for your help!

    Read the article

  • DB Design to store custom fields for a table

    - by Fazal
    Hi all, this question came up based on the responses I got to the question http://stackoverflow.com/questions/2785033/getting-wierd-issue-with-to-number-function-in-oracle As everyone there suggested that storing numeric values in VARCHAR2 columns is not good practice (which I totally agree with), I am wondering about a basic design choice our team has made and whether there are better ways to design it.

    Problem statement: we have many tables where we want to offer a certain number of custom fields. The number of custom fields is known up front, but what kind of attribute is mapped to each column is up to the user. Here is a hypothetical scenario: say you have a laptop which stores 50 attribute values for every laptop record, and each laptop's attributes are defined by the admin who creates the laptop. One user creates a laptop product, say lap1, with attributes String, String, numeric, numeric, String. A second user creates laptop lap2 with attributes String, numeric, String, String, numeric. Currently the data in our design gets persisted as follows:

        Laptop table
        Id | Name | field1 | field2 | field3 | field4 | field5
        1  | lap1 | lappy  | lappy  | 12     | 13     | lappy
        2  | lap2 | lappy2 | 13     | lappy2 | lapp2  | 12

    This example roughly simulates our requirement and our design. Now, if somebody is looking up records for lap2 and doing a comparison on field2, we need to apply TO_NUMBER when querying:

        select * from laptop where name = 'lap2' and TO_NUMBER(field2) < 15

    TO_NUMBER fails in some cases when the query plan decides to apply TO_NUMBER before the other filter. Questions:

    1. Is this a valid design? What are the alternative ways to solve this problem?
    2. One of our team mates suggested creating tables on the fly for such cases. Is that a good idea?
    3. How do popular ORM tools handle custom fields or flex fields?

    I hope I was able to make sense of the question. Sorry for such a long text.
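
    A hedged sketch of one common alternative (table and column names are hypothetical): a typed attribute-value table with separate string and number columns, so numeric comparisons use a real NUMBER column and never go through TO_NUMBER:

        CREATE TABLE laptop (
            id    NUMBER PRIMARY KEY,
            name  VARCHAR2(100)
        );

        CREATE TABLE laptop_attribute (
            laptop_id   NUMBER REFERENCES laptop(id),
            attr_name   VARCHAR2(50),
            string_val  VARCHAR2(400),   -- exactly one of these two is populated,
            number_val  NUMBER,          -- depending on the attribute's declared type
            PRIMARY KEY (laptop_id, attr_name)
        );

        -- The field2 comparison then filters on the typed column directly:
        SELECT l.*
          FROM laptop l
          JOIN laptop_attribute a ON a.laptop_id = l.id
         WHERE l.name = 'lap2'
           AND a.attr_name = 'field2'
           AND a.number_val < 15;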

    Read the article

  • Optimizing this "Boundarize" method for Numerics in Ruby

    - by mstksg
    I'm extending Numeric with a method I call "boundarize" for lack of a better name; I'm sure there are real names for this. Its basic purpose is to reset a given point to be within a boundary - that is, "wrapping" a point around the boundary. If the area is between 0 and 100 and the point goes to -1, then -1.boundarize(0, 100) = 99 (going one too far to the negative "wraps" the point around to one below the max), and 102.boundarize(0, 100) = 2. It's a very simple function to implement: when the number is below the minimum, simply add (max - min) until it's in the boundary; if the number is above the maximum, simply subtract (max - min) until it's in the boundary. One thing I also need to account for is that there are cases where I don't want to include the minimum in the range, and cases where I don't want to include the maximum; this is specified as an argument. However, I fear that my current implementation is horribly, terribly, grossly inefficient, and because it has to re-run every time something moves on the screen, it is one of the bottlenecks of my application. Anyone have any ideas?

        module Boundarizer
          def boundarize min=0, max=1, allow_min=true, allow_max=false
            raise "Improper boundaries #{min}/#{max}" if min >= max
            new_num = self

            if allow_min
              while new_num < min
                new_num += (max-min)
              end
            else
              while new_num <= min
                new_num += (max-min)
              end
            end

            if allow_max
              while new_num > max
                new_num -= (max-min)
              end
            else
              while new_num >= max
                new_num -= (max-min)
              end
            end

            return new_num
          end
        end

        class Numeric
          include Boundarizer
        end
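
    A hedged sketch of a constant-time version using modulo arithmetic (same module and method names as above): Ruby's % is a floored modulo, so (self - min) % (max - min) already lands in [min, max) even for negative or far-away inputs, which matches the default allow_min=true / allow_max=false case; the excluded-minimum variant just shifts by one span when the result lands exactly on min:

        module Boundarizer
          def boundarize(min = 0, max = 1, allow_min = true, allow_max = false)
            raise "Improper boundaries #{min}/#{max}" if min >= max
            span   = max - min
            result = min + (self - min) % span      # already within [min, max)
            result += span if result == min && !allow_min
            result
          end
        end

        class Numeric
          include Boundarizer
        end

        p(-1.boundarize(0, 100))    # => 99
        p 102.boundarize(0, 100)    # => 2
        p 100.boundarize(0, 100)    # => 0 (max excluded by default)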

    Read the article

  • JavaScript: How is "function x() {}" different from "x = function() {}"?

    - by jleedev
    In the answers to this question, we read that function f() {} defines the name locally, while [var] f = function() {} defines it globally. That makes perfect sense to me, but there's some strange behavior that differs between the two declarations. I made an HTML page with the script:

        onload = function() { alert("hello"); }

    and it worked as expected. When I changed it to:

        function onload() { alert("hello"); }

    nothing happened. (Firefox still fired the event, but WebKit, Opera, and Internet Explorer didn't, although frankly I've no idea which is correct.) In both cases (in all browsers), I could verify that both window.onload and onload were set to the function. In both cases the global object this is set to the window, and no matter how I write the declaration, the window object receives the property just fine. What's going on here? Why does one declaration work differently from the other? Is this a quirk of the JavaScript language, of the DOM, or of the interaction between the two?

    Read the article

  • Testing When Correctness is Poorly Defined?

    - by dsimcha
    I generally try to use unit tests for any code that has easily defined correct behavior given some reasonably small, well-defined set of inputs. This works quite well for catching bugs, and I do it all the time in my personal library of generic functions. However, a lot of the code I write is data mining code that basically looks for significant patterns in large datasets. Correct behavior in this case is often not well defined and depends on a lot of different inputs in ways that are not easy for a human to predict (i.e. the math can't reasonably be done by hand, which is why I'm using a computer to solve the problem in the first place). These inputs can be very complex, to the point where coming up with a reasonable test case is near impossible. Identifying the edge cases that are worth testing is extremely difficult. Sometimes the algorithm isn't even deterministic. Usually, I do the best I can by using asserts for sanity checks and creating a small toy test case with a known pattern and informally seeing if the answer at least "looks reasonable", without it necessarily being objectively correct. Is there any better way to test these kinds of cases?
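
    A hedged sketch of one approach that sits between "exact expected output" and "looks reasonable": property-based testing with the hypothesis library, asserting invariants that must hold for any input (find_patterns here is a toy stand-in for the real mining code):

        from hypothesis import given, strategies as st

        def find_patterns(values, min_len=3):
            """Toy stand-in: return all runs of strictly increasing values."""
            runs, start = [], 0
            for i in range(1, len(values) + 1):
                if i == len(values) or values[i] <= values[i - 1]:
                    if i - start >= min_len:
                        runs.append(values[start:i])
                    start = i
            return runs

        @given(st.lists(st.integers(), max_size=200))
        def test_found_runs_satisfy_the_invariants(values):
            for run in find_patterns(values):
                assert len(run) >= 3                              # long enough
                assert all(b > a for a, b in zip(run, run[1:]))   # strictly increasing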

    Read the article

  • File.Replace throwing IOException

    - by WebDevHobo
    I have an app that can modify images. In some cases this makes the file size smaller, in some cases bigger. The program doesn't have an option to "not replace the file if the result has a bigger file size", so I wrote a little C# app to try to solve this. Instead of overwriting the files, I make the image app write the result to a folder named Test under the current one. The C# app I wrote grabs the contents of both folders and puts the full paths to the files in two List objects. I then compare and replace. The replacing isn't working, however; I get the following IOException:

        Unable to remove the file to be replaced

    The location is on an external hard drive, on which I have full rights. Now, I know I can just do File.Delete and File.Move in that order, but this exception has gotten me interested in why this particular setup won't work. Here's the source code: http://pastebin.com/4Vq82Umu And yes, the file specified as the last argument of the Replace function does exist.
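
    A hedged sketch of the fallback the question mentions, wrapped around File.Replace (paths are hypothetical; this doesn't explain the exception, it just keeps a backup while swapping the files manually when Replace refuses):

        using System;
        using System.IO;

        class ReplaceIfSmaller
        {
            static void ReplaceWithFallback(string source, string destination)
            {
                string backup = destination + ".bak";
                try
                {
                    // Throws IOException if the destination cannot be removed,
                    // e.g. when another process still has it open.
                    File.Replace(source, destination, backup);
                }
                catch (IOException)
                {
                    File.Copy(destination, backup, true);   // keep a backup anyway
                    File.Delete(destination);
                    File.Move(source, destination);
                }
            }

            static void Main(string[] args)
            {
                ReplaceWithFallback(args[0], args[1]);
            }
        }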

    Read the article

  • Where to start when doing a Domain Model?

    - by devoured elysium
    Let's say I've made a list of concepts I'll use to draw my Domain Model. Furthermore, I have a couple of Use Cases from which I produced a couple of System Sequence Diagrams. When drawing the Domain Model, I never know where to start:

    1. Designing the model as I believe the system to be. That is, if I am modelling the human body, I start by adding the class concepts of Heart, Brain, Bowels, Stomach, Eyes, Head, etc.
    2. Starting by designing what the Use Cases need to get done. That is, if I have a Use Case about making the human body swallow something, I'd first draw the class concepts for Mouth, Throat, Stomach, Bowels, etc.
    3. Or is the order in which I do things irrelevant?

    I'd say it would probably be best to design from the Use Case concepts, as they are generally what you want to work with - not other kinds of concepts that, although they help describe the whole system well, might not even be needed for the current project much of the time. Is there any other approach I'm not taking into consideration here? How do you usually approach this? Thanks

    Read the article

  • Is there any point in using a volatile long?

    - by Adamski
    I occasionally use a volatile instance variable in cases where I have two threads reading from / writing to it and don't want the overhead (or potential deadlock risk) of taking out a lock; for example, a timer thread periodically updating an int ID that is exposed via a getter on some class:

        public class MyClass {
            private volatile int id;

            public MyClass() {
                ScheduledExecutorService execService = Executors.newScheduledThreadPool(1);
                execService.scheduleAtFixedRate(new Runnable() {
                    public void run() {
                        ++id;
                    }
                }, 0L, 30L, TimeUnit.SECONDS);
            }

            public int getId() {
                return id;
            }
        }

    My question: given that the JLS only guarantees that 32-bit reads will be atomic, is there any point in ever using a volatile long (i.e. 64-bit)? Caveat: please do not reply saying that using volatile over synchronized is a case of premature optimisation; I am well aware of how and when to use synchronized, but there are cases where volatile is preferable. For example, when defining a Spring bean for use in a single-threaded application I tend to favour volatile instance variables, as there is no guarantee that the Spring context will initialise each bean's properties in the main thread.
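
    A hedged sketch of the usual alternatives (same shape as the class above): JLS §17.7 states that reads and writes of volatile long and double are always atomic, so a volatile long is fine for visibility, while AtomicLong is the common choice when the increment itself must also be atomic:

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicLong;

        public class MyClass {
            // ++id on a volatile field is still a read-modify-write race;
            // AtomicLong makes the update itself atomic as well.
            private final AtomicLong id = new AtomicLong();

            public MyClass() {
                ScheduledExecutorService execService = Executors.newScheduledThreadPool(1);
                execService.scheduleAtFixedRate(new Runnable() {
                    public void run() {
                        id.incrementAndGet();
                    }
                }, 0L, 30L, TimeUnit.SECONDS);
            }

            public long getId() {
                return id.get();
            }
        }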

    Read the article

  • How to structure this Symfony web project?

    - by James William
    I am new to Symfony and am not sure how best to structure my web project. The solution must accommodate 3 use cases:

    1. Public access to www.mydomain.com for general use
    2. Member-only access to member.mydomain.com
    3. Administrator access to admin.mydomain.com

    All three virtual hosts point to the Symfony /web directory. Questions: Is this 3 separate applications in my Symfony project (e.g. "frontend", "backend" and "admin", or "public", "member", "admin")? Is this a good approach if there is to be some duplicate code (e.g. generating a member list would be common across all 3 applications, but presented differently)? How would I route to the various applications based on the subdomain when a user accesses *.mydomain.com? Where in Symfony should this routing logic be placed? Or is this one application with modules for each of the above use cases? EDIT: I do not have access to httpd.conf in Apache to specify a default page for virtual hosts; I can only specify a directory for each subdomain using the hosting provider's cPanel.

    Read the article

  • How do you prove a function works?

    - by glenn I.
    I've recently gotten the testing religion and have started primarily with unit testing. I write unit tests which illustrate that a function works in certain cases, specifically using the exact inputs I'm using. I may write a number of unit tests to exercise the function. Still, I haven't actually proved anything other than that the function does what I expect it to do in the scenarios I've tested. There may be other inputs and scenarios I haven't thought of, and thinking of edge cases is expensive, particularly at the margins. This is all not very satisfying to me. When I start to think about having to come up with tests to satisfy branch and path coverage, and then integration testing, the prospective permutations can become a little maddening. So my question is: how can one prove (in the same vein as proving a theorem in mathematics) that a function works (and, in a perfect world, compose these 'proofs' into a proof that a system works)? Is there an area of testing that covers an approach where you seek to prove a system works by proving that all of its functions work? Does anybody outside of academia bother with an approach like this? Are there tools and techniques to help? I realize that my use of the word 'work' is not precise. I guess I mean that a function works when it does what some spec (written or implied) states it should do, and does nothing other than that. Note, I'm not a mathematician, just a programmer.

    Read the article

  • How do I compare two PropertyInfos or methods reliably?

    - by Rob Ashton
    The same goes for methods: I am given two instances of PropertyInfo or MethodInfo which have been extracted from the class they sit on via GetProperty or GetMember etc. (or from a MemberExpression, maybe). I want to determine whether they are in fact referring to the same property or the same method, so that (propertyOne == propertyTwo) or (methodOne == methodTwo). Clearly that isn't actually going to work: you might be looking at the same property, but it might have been extracted from different levels of the class hierarchy (in which case, generally, propertyOne != propertyTwo). Of course, I could look at DeclaringType and re-request the property, but this starts getting a bit confusing when you start thinking about:

    - Properties/methods declared on interfaces and implemented on classes
    - Properties/methods declared on a base class (virtually) and overridden in derived classes
    - Properties/methods declared on a base class and hidden with 'new' (in the IL world this is nothing special, iirc)

    At the end of the day, I just want to be able to do an intelligent equality check between two properties or two methods. I'm 80% sure that the above bullet points don't cover all of the edge cases, and while I could just sit down, write a bunch of tests and start playing about, I'm well aware that my low-level knowledge of how these concepts are actually implemented is not excellent, and I'm hoping this is an already-answered topic and I just suck at searching. The best answer would give me a couple of methods that achieve the above, explaining which edge cases have been taken care of and why :-)
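
    A hedged sketch of one common approach (it handles the virtual-override case; interface members and 'new'-hidden members are still treated as distinct, which may or may not be what you want): walk MethodInfo.GetBaseDefinition back to the original declaration and compare its metadata identity; properties are compared through an accessor since they have no GetBaseDefinition of their own:

        using System.Reflection;

        static class MemberComparer
        {
            public static bool SameMethod(MethodInfo a, MethodInfo b)
            {
                MethodInfo da = a.GetBaseDefinition();
                MethodInfo db = b.GetBaseDefinition();
                // MetadataToken plus Module identifies a member uniquely in its assembly.
                return da.MetadataToken == db.MetadataToken && da.Module.Equals(db.Module);
            }

            public static bool SameProperty(PropertyInfo a, PropertyInfo b)
            {
                MethodInfo ga = a.GetGetMethod(true) ?? a.GetSetMethod(true);
                MethodInfo gb = b.GetGetMethod(true) ?? b.GetSetMethod(true);
                return ga != null && gb != null && SameMethod(ga, gb);
            }
        }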

    Read the article

  • Data Warehouse: Modelling a future schedule

    - by Pat
    I'm creating a DW that will contain data on financial securities such as bonds and loans. These securities are associated with payment schedules. For example, a bond could pay quarterly, while a mortgage would usually pay monthly (sometimes biweekly). The payment schedule is created when the security is traded and, in the majority of cases, will remain unchanged. However, the design needs to accommodate those cases where it does change. I'm currently attempting to model this data and I'm having difficulty coming up with a workable design. One of the most commonly queried fields is "next payment date": users often want to know when a security will pay next, so I want to make it as easy as possible for them to get the next payment date and amount for each security. Users also often run historical queries, in which case they'd want the next payment date and amount as of a specific point in time. For example, they may want to look back at 1/31/09 and query the next payment dates (which would usually be in February 2009 for mortgages). It's also common for them to query a security's entire payment schedule, which might consist of 360 records (30-year mortgage x 12 payments/year). Since the next payment date and amount would be changing each month or even biweekly, these fields wouldn't seem to fit into a slowly changing dimension very well. It would probably make more sense to use a fact table, but I'm unsure of how to model it. Any ideas would be greatly appreciated.

    Read the article

  • StructureMap Configuration Per Thread/Request for the Full Dependency Chain

    - by Phil Sandler
    I've been using StructureMap for a few months now, and it has worked great on some smaller (greenfield) projects. Most of the configurations I have had to set up involve a single implementation per interface. In the cases where I needed to choose a specific implementation at runtime, I created a factory class that used ObjectFactory.GetNamedInstance<T>(). In the smaller projects, there were few enough of these cases that I was comfortable with the references to ObjectFactory. My understanding is that you want to limit these references as much as possible, and ideally reference the ObjectFactory only once. I am working to refactor a larger codebase to use IoC/StructureMap, and am finding that I may need many of these factory classes with ObjectFactory references to get what I need. Essentially, I am creating a "root service" with the ObjectFactory, so that everything in the dependency chain is managed by the container. The root service is created by name (e.g. "BuildCar", "BuildTruck"), and the services needed deeper in the dependency chain could also be constructed using the same name--so the "IAttachWheels" service could vary based on whether a car or a truck is being built. Since the class that depends on IAttachWheels is the same in both configurations, I don't think I can use ConstructedBy in the registry to choose the implementation. Also, to be clear, the IAttachWheels implementations need to be managed by the container as well, because the dependency chain runs fairly deep. I looked briefly at Profiles as an option, but read (here on StackOverflow) that changing profiles essentially changes implementations for all threads. Is there a feature similar to profiles that is thread/request specific? Is the approach of a factory class that references ObjectFactory the right way to go? Any thoughts would be appreciated.

    Read the article
