Search Results

Search found 244 results on 10 pages for 'disadvantage'.

Page 8/10

  • Creating a Synchronous BPEL composite using File Adapter

    - by [email protected]
    By default, the JDeveloper wizard generates asynchronous WSDLs when you use technology adapters. Typically, a user follows these steps when creating an adapter scenario in 11g:

    1) Create a SOA Application with either "Composite with BPEL" or an "Empty Composite". If the user chooses "Empty Composite", he or she must also drop a "BPEL Process" from the "Service Components" pane onto the SOA Composite Editor. Either way, the user arrives at the process-details screen; note that it is important to choose "Define Service Later" as the template.

    2) Create the inbound service and outbound references and wire them to the BPEL component.

    3) Finally, create the BPEL process with an initiating <receive> activity to retrieve the payload and an <invoke> activity to write the payload.

    This is how most BPEL processes that use adapters are modeled. If we scrutinize the generated WSDL, we can clearly see that it is one-way, and that makes the BPEL process asynchronous. In other words, the inbound FileAdapter polls for files in the directory and, for every file it finds, translates the content into XML and publishes it to BPEL. But since the BPEL process is asynchronous, the adapter returns immediately after the publish and performs the required post-processing, e.g. deletion or archival.

    The disadvantage of such asynchronous BPEL processes is that it becomes difficult to throttle the inbound adapter: the adapter keeps sending messages to BPEL without waiting for the downstream business processes to complete. This can lead to several issues, including higher memory and CPU usage. To alleviate these problems, we will manually tweak the WSDL and BPEL artifacts into synchronous processes. Once the BPEL process is synchronous, the inbound adapter automatically throttles itself, because it is forced to wait for the downstream process to complete with a <reply> before processing the next file or message. To do this, convert the one-way WSDL into a two-way (request-response) WSDL, then add a <reply> activity to the inbound adapter partnerlink at the end of your BPEL process. You are done.

    Please remember that such an exercise is NOT required for Mediator, since Mediator routing rules are sequential by default. In other words, the Mediator uses the caller thread (the inbound file adapter thread) to process the routing rules. This is the case even if the WSDL for the Mediator is one-way.
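    For illustration, here is roughly what the one-way to two-way conversion looks like in the WSDL. This is a minimal sketch; the message and operation names are assumptions, not the wizard's actual output:

        <wsdl:portType name="Read_ptt">
            <wsdl:operation name="Read">
                <wsdl:input message="tns:Read_msg"/>
                <!-- adding an output message turns the one-way operation into
                     request-response, which makes the BPEL process synchronous -->
                <wsdl:output message="tns:ReadResponse_msg"/>
            </wsdl:operation>
        </wsdl:portType>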

    Read the article

  • When to use RDLC over RDL reports?

    - by Daan
    I have been studying SSRS 2005/2008 over the past few weeks and have created some server-side reports. For one particular application, a colleague suggested that I look into RDLC. I am now trying to get my head around the main difference between RDL and RDLC, and searching for this information yields fragmented results at best. I have learned that:

    - RDLC reports do not store information about how to get data.
    - RDLC reports can be executed directly by the ReportViewer control.

    But I still don't fully understand the relation between the RDLC file and the other systems involved (the Reporting Server, the source database, the client). In order to get a good grasp on RDLC files, I would like to know how their use differs from RDL files and in what situations one would choose RDLC over RDL. Links to resources are also welcome.

    Update: A thread on the ASP.NET forums discusses this same issue, and from it I have gained a better understanding. A key feature of RDLC is that it can run completely client-side in the ReportViewer control. This removes the need for a Reporting Services instance, and even removes the need for any database connection whatsoever, but it adds the requirement that the data needed by the report be provided manually. Whether this is an advantage or a disadvantage depends on the particular application. In my application, an instance of Reporting Services is available anyway and the required data for the reports can easily be pulled from a database. Is there any reason left for me to consider RDLC, or should I simply stick with RDL?
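    To make the client-side mode concrete, supplying the data manually to an RDLC looks roughly like this with the WinForms ReportViewer. A sketch: the report path, dataset name, and LoadSalesTable() helper are hypothetical:

        using Microsoft.Reporting.WinForms;

        // Run the RDLC entirely client-side: no Reporting Services instance,
        // but every data source must be handed to the report explicitly.
        reportViewer.ProcessingMode = ProcessingMode.Local;
        reportViewer.LocalReport.ReportPath = "SalesReport.rdlc";
        reportViewer.LocalReport.DataSources.Add(
            new ReportDataSource("SalesDataSet", LoadSalesTable()));
        reportViewer.RefreshReport();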

    Read the article

  • Progress Bar design patterns?

    - by shoosh
    The application I'm writing performs a lengthy algorithm which usually takes a few minutes to finish. During this time I'd like to show the user a progress bar which indicates how much of the algorithm is done as precisely as possible. The algorithm is divided into several steps, each with its own typical timing. For instance:

    - initialization (500 milliseconds)
    - reading inputs (5 sec)
    - step 1 (30 sec)
    - step 2 (3 minutes)
    - writing outputs (7 sec)
    - shutting down (10 milliseconds)

    Each step can report its progress quite easily by setting the range it's working on, say [0 to 150], and then reporting the value it completed in its main loop. What I currently have set up is a scheme of nested progress monitors which form a sort of implicit tree of progress reporting. All progress monitors inherit from an interface IProgressMonitor:

        class IProgressMonitor
        {
        public:
            virtual void setRange(int from, int to) = 0;
            virtual void setValue(int v) = 0;
        };

    The root of the tree is the progress monitor which is connected to the actual GUI:

        class GUIBarProgressMonitor : public IProgressMonitor
        {
            GUIBarProgressMonitor(ProgressBarWidget *);
        };

    Any other node in the tree is a monitor which takes control of a piece of its parent's progress:

        class SubProgressMonitor : public IProgressMonitor
        {
            SubProgressMonitor(IProgressMonitor *parent, int parentFrom, int parentLength);
            ...
        };

    A SubProgressMonitor takes control of the range [parentFrom, parentFrom+parentLength] of its parent. With this scheme I am able to statically divide the top-level progress according to the expected relative portion of each step in the global timing. Each step can then be further subdivided into pieces, etc.

    The main disadvantage of this is that the division is static, and it gets painful to make changes according to variables which are discovered at run time. So the question: are there any known design patterns for progress monitoring which solve this issue?
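    A sketch of how the child-to-parent mapping might be implemented (the member names are my own assumptions, not from the original design):

        class SubProgressMonitor : public IProgressMonitor
        {
        public:
            SubProgressMonitor(IProgressMonitor *parent, int parentFrom, int parentLength)
                : m_parent(parent), m_parentFrom(parentFrom),
                  m_parentLength(parentLength), m_from(0), m_to(1) {}

            virtual void setRange(int from, int to) { m_from = from; m_to = to; }

            // Linearly map [m_from, m_to] onto the parent's slice
            // [m_parentFrom, m_parentFrom + m_parentLength].
            virtual void setValue(int v)
            {
                int span = m_to - m_from;
                if (span > 0)
                    m_parent->setValue(m_parentFrom + (v - m_from) * m_parentLength / span);
            }

        private:
            IProgressMonitor *m_parent;
            int m_parentFrom, m_parentLength;
            int m_from, m_to;
        };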

    Read the article

  • Sql Querying, group relationships

    - by Jordan
    Hi, suppose I have two tables:

        Group (
            id integer primary key,
            someData1 text,
            someData2 text
        )

        GroupMember (
            id integer primary key,
            group_id foreign key to Group.id,
            someData text
        )

    I'm aware that my SQL syntax is not correct :) Hopefully it is clear enough. My problem is this: I want to load a group record and all the GroupMember records associated with that group. As I see it, there are two options.

    A single query:

        SELECT Group.id, Group.someData1, Group.someData2,
               GroupMember.id, GroupMember.someData
        FROM Group INNER JOIN GroupMember ...
        WHERE Group.id = 4;

    Two queries:

        SELECT id, someData1, someData2 FROM Group WHERE id = 4;
        SELECT id, someData FROM GroupMember WHERE group_id = 4;

    The first solution has the advantage of only one database round trip, but the disadvantage of returning redundant data (all group data is duplicated for every group member). The second solution returns no duplicate data but involves two round trips to the database. What is preferable here? I suppose there's some threshold such that if the group size becomes sufficiently large, the cost of returning all the redundant data will be greater than the overhead of an additional database call. What other things should I be thinking about here?

    Thanks, Jordan
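    For reference, the single-query version with the join condition spelled out might look like this (a sketch; note that Group is a reserved word in many SQL dialects and may need quoting):

        SELECT g.id, g.someData1, g.someData2,
               gm.id AS member_id, gm.someData AS member_data
        FROM "Group" g
        INNER JOIN GroupMember gm ON gm.group_id = g.id
        WHERE g.id = 4;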

    Read the article

  • Violating 1st normal form, is it okay for my purpose?

    - by Nick
    So I'm making a running log, and I have the workouts stored as entries in a table. For each workout, the user can add intervals (which consist of a time and a distance), so I have an array like this:

        [workout] =>
            [description] =>
            [comments] =>
            ...
            [intervals] =>
                [0] =>
                    [distance] => 200m
                    [time] => 32
                [1] =>
                    [distance] => 400m
                    [time] => 65
                ...

    I'm really tempted to throw the "intervals" array into serialize() or json_encode() and put it in an "intervals" field in my table; however, this violates the principles of good database design (which, incidentally, I know hardly anything about). Is there any disadvantage to doing this? I never plan on querying my table based on the contents of "intervals". Creating a separate table just for intervals seems like a lot of unnecessary complexity, so if anyone with more experience has had a situation like this, what route did you take and how did it work out?
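    For comparison, the separate-table route is fairly small. A sketch, with assumed table and column names:

        CREATE TABLE interval (
            id INTEGER PRIMARY KEY,
            workout_id INTEGER NOT NULL REFERENCES workout(id),
            distance TEXT,       -- e.g. '200m'
            time_seconds INTEGER,
            position INTEGER     -- preserves interval order within a workout
        );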

    Read the article

  • Installing Mercurial on Mac OS X 10.6 Snow Leopard

    - by Matthew Rankin
    I installed Mercurial 1.3.1 on Mac OS X 10.6 Snow Leopard from source using the following:

        cd ~/src
        curl -O http://mercurial.selenic.com/release/mercurial-1.3.1.tar.gz
        tar xzvf mercurial-1.3.1.tar.gz
        cd mercurial-1.3.1
        make ALL
        sudo make install

    This installs the site-packages files for Mercurial in /usr/local/lib/python2.6/site-packages/. I know that installing Mercurial from the Mac disk image will install the files into /Library/Python/2.6/site-packages/, which is the site-packages directory for the default Mac OS X Python install. I have Python 2.6.2+ installed as a framework, with its site-packages directory in:

        /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages

    With Mercurial installed this way, I have to issue:

        PYTHONPATH=/usr/local/lib/python2.6/site-packages:"${PYTHONPATH}"

    in order to get Mercurial to work.

    Questions:

    1. How can I install Mercurial from source with the site-packages in a different directory?
    2. Is there an advantage or disadvantage to having the site-packages in the current location? Would it be better in one of the Python site-packages directories that already exist?
    3. Do I need to be concerned about virtualenv working correctly since I have modified PYTHONPATH (or about any other conflicts, for that matter)?

    Reasons for installing from source: Dan Benjamin of Hivelogic provides the benefits of and instructions for installing Mercurial from source in his article "Installing Mercurial on Snow Leopard".
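    One way to target a different site-packages directory is to bypass the Makefile and run Mercurial's setup.py with the interpreter whose site-packages you want. A sketch, assuming the framework Python lives at the path above:

        cd ~/src/mercurial-1.3.1
        sudo /Library/Frameworks/Python.framework/Versions/2.6/bin/python setup.py install

    Installing with the framework interpreter itself should place the packages in that interpreter's own site-packages, removing the need to modify PYTHONPATH.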

    Read the article

  • Implementing the ‘defer’ statement from Go in Objective-C?

    - by zoul
    Hello! Today I read about the defer statement in the Go language:

        A defer statement pushes a function call onto a list. The list of saved
        calls is executed after the surrounding function returns. Defer is
        commonly used to simplify functions that perform various clean-up actions.

    I thought it would be fun to implement something like this in Objective-C. Do you have some idea how to do it? I thought about dispatch finalizers, autoreleased objects and C++ destructors.

    Autoreleased objects:

        @interface Defer : NSObject {
            dispatch_block_t _block;
        }
        + (id) withCode: (dispatch_block_t) block;
        @end

        @implementation Defer
        + (id) withCode: (dispatch_block_t) block {
            Defer *defer = [[[Defer alloc] init] autorelease];
            defer->_block = [block copy]; // copy the block off the stack
            return defer;
        }
        - (void) dealloc {
            _block();
            [_block release];
            [super dealloc];
        }
        @end

        #define defer(__x) [Defer withCode:^{__x;}]

        - (void) function {
            defer(NSLog(@"Done"));
            …
        }

    Autoreleased objects seem like the only solution that would last at least to the end of the function, as the other solutions would trigger when the current scope ends. On the other hand, they could stay in memory much longer, which would be asking for trouble.

    Dispatch finalizers were my first thought, because blocks live on the stack and therefore I could easily make something execute when the stack unrolls. But after a peek at the documentation, it doesn't look like I can attach a simple "destructor" function to a block, can I?

    C++ destructors are about the same thing: I would create a stack-based object with a block to be executed when the destructor runs. This would have the ugly disadvantage of turning plain .m files into Objective-C++.

    I don't really think about using this stuff in production, I'm just interested in various solutions. Can you come up with something working, without obvious disadvantages? Both scope-based and function-based solutions would be interesting.

    Read the article

  • Adding a guideline to the editor in Visual Studio

    - by xsl
    Introduction

    I've always been searching for a way to make Visual Studio draw a line after a certain number of characters. Below is a guide to enabling these so-called guidelines in various versions of Visual Studio.

    Visual Studio 2010

    Install Paul Harrington's Editor Guidelines extension. Then open the registry at:

        HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0\Text Editor

    and add a new string called Guides with the value RGB(100,100,100), 80. The first part specifies the color, while the other (80) is the column in which the line will be displayed. Alternatively, install the Guidelines UI extension, which adds entries to the editor's context menu for adding/removing guidelines without editing the registry directly. The current disadvantage of that method is that you can't specify the column directly.

    Visual Studio 2008 and Other Versions

    If you are using Visual Studio 2008, open the registry at:

        HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor

    and add a new string called Guides with the value RGB(100,100,100), 80, as above. The vertical line will appear when you restart Visual Studio. This trick also works for various other versions of Visual Studio, as long as you use the correct path:

        2003:         HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\7.1\Text Editor
        2005:         HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\8.0\Text Editor
        2008:         HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor
        2008 Express: HKEY_CURRENT_USER\Software\Microsoft\VCExpress\9.0\Text Editor

    This also works in SQL Server 2005 and probably other versions.
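    The same setting expressed as a .reg file you can import (a sketch for VS 2008; adjust the hive path per the table above):

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor]
        "Guides"="RGB(100,100,100), 80"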

    Read the article

  • VS2010 and CSS: What is the best strategy to individually position form controls

    - by George
    OK, I have a ton of controls on my page that I need to individually place. I need to set a margin here, a padding there, etc. None of these particular styles will be applied to more than one control. What is the best practice for determining at which level the style is placed? My choices are:

    1) An external CSS file.

    1A) With ClientIDMode = AutoID (the default), I could assign a unique CssClass value to the ASP.NET control and, in the external CSS file, create a class selector that would only be applied to that one control.

    1B) With ClientIDMode = Predictable, I could determine what the ID will be for the controls of interest and create an ID selector (#ControlID { style }) in the external CSS file. However, I fear maintenance issues: including/removing parent containers would cause the ID to change.

    1C) With ClientIDMode = Static, I could choose static IDs for the controls such that I minimize the likelihood of a clash with auto-generated IDs (perhaps by prefixing the ID with "StaticID_") and use an external stylesheet with ID selectors.

    2) I could place the style right on the control. The only disadvantage here, as I see it, is that style info is brought down on each request instead of being cached, which is what I'd get with an external CSS file.

    If a style isn't reused, I personally don't see much benefit to placing it in an external file, though please explain why if you disagree. Is there more of a reason than "it's nice to have all the CSS in one place"?
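    A sketch of option 1C, with made-up control and property values:

        <%-- markup: pin the client id so the stylesheet can rely on it --%>
        <asp:TextBox ID="StaticID_FirstName" ClientIDMode="Static" runat="server" />

        /* external stylesheet: an ID selector that touches exactly one control */
        #StaticID_FirstName { margin-left: 12px; padding: 2px 4px; }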

    Read the article

  • Calling PHP functions within HEREDOC strings

    - by Doug Kavendek
    In PHP, HEREDOC string declarations are really useful for outputting a block of HTML. You can have it parse variables just by prefixing them with $, but for more complicated syntax (like $var[2][3]), you have to put your expression inside {} braces. In PHP 5, it is possible to actually make function calls within {} braces inside a HEREDOC string, but you have to go through a bit of work: the function name has to be stored in a variable, and you have to call it as if it were a dynamically-named function. For example:

        $fn = 'testfunction';
        function testfunction() { return 'ok'; }
        $string = <<<heredoc
        plain text and now a function: {$fn()}
        heredoc;

    As you can see, this is a bit messier than just:

        $string = <<<heredoc
        plain text and now a function: {testfunction()}
        heredoc;

    There are other ways besides the first code example, such as breaking out of the HEREDOC to call the function, or reversing the issue and doing something like:

        ?>
        <!-- directly outputting html and only breaking into php for the function -->
        plain text and now a function: <?php print testfunction(); ?>

    The latter has the disadvantage that the output goes directly into the output stream (unless I'm using output buffering), which might not be what I want. So, the essence of my question is: is there a more elegant way to approach this?

    Edit based on responses: It certainly does seem like some kind of template engine would make my life much easier, but it would require me to basically invert my usual PHP style. Not that that's a bad thing, but it explains my inertia. I'm up for figuring out ways to make life easier, though, so I'm looking into templates now.
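    For completeness, the output-buffering variant hinted at above, which captures the output instead of sending it to the stream (a sketch):

        ob_start();
        ?>
        plain text and now a function: <?php print testfunction(); ?>
        <?php
        $string = ob_get_clean(); // buffer contents, without echoing them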

    Read the article

  • Multiple consecutive alerts in Java ME

    - by Casebash
    According to the documentation, Display.setCurrent doesn't work if the current displayable is an alert. Does anyone know how to work around this so that we can go from one alert to another? I am using CLDC 1.0 and MIDP 2.0.

    My attempt: The spec does allow us to edit an alert while it is on screen, but some Nokia phones don't handle that well at all. So I am now trying to go from the alert to a canvas, then back to the alert. Of course I don't want the user to interact with the previous canvas, so it seems that I am forced to create a new blank canvas. As a side note, this has the slight disadvantage of looking worse on phones that keep the previous screen visible behind an alert. The bigger problem is how to transition from the blank canvas back to an alert once the canvas is loaded. Testing on the Motorola emulator revealed that showNotify is not called after returning from an alert to the previous screen. I guess I could create the next alert in the paint method, but this seems like an ugly hack.
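    A sketch of that last (admittedly hacky) idea; the display, firstAlert and secondAlert variables are assumed to be in scope:

        // Blank canvas shown between two alerts. Since showNotify() may not
        // fire after an alert is dismissed, the next alert is triggered from
        // paint() instead, guarded so it only happens once.
        Canvas blank = new Canvas() {
            private boolean alertShown = false;
            protected void paint(Graphics g) {
                g.setColor(0xFFFFFF);
                g.fillRect(0, 0, getWidth(), getHeight());
                if (!alertShown) {
                    alertShown = true;
                    display.setCurrent(secondAlert, this);
                }
            }
        };
        display.setCurrent(firstAlert, blank);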

    Read the article

  • How should I avoid memoization causing bugs in Ruby?

    - by Andrew Grimm
    Is there a consensus on how to avoid memoization causing bugs due to mutable state? In this example, a cached result had its state mutated, and therefore gave the wrong result the second time it was called.

        class Greeter
          def initialize
            @greeting_cache = {}
          end

          def expensive_greeting_calculation(formality)
            case formality
              when :casual then "Hi"
              when :formal then "Hello"
            end
          end

          def greeting(formality)
            unless @greeting_cache.has_key?(formality)
              @greeting_cache[formality] = expensive_greeting_calculation(formality)
            end
            @greeting_cache[formality]
          end
        end

        def memoization_mutator
          greeter = Greeter.new
          first_person = "Bob"
          # Mildly contrived in this case,
          # but you could encounter this in more complex scenarios
          puts(greeter.greeting(:casual) << " " << first_person)  # => Hi Bob
          second_person = "Sue"
          puts(greeter.greeting(:casual) << " " << second_person) # => Hi Bob Sue
        end

        memoization_mutator

    Approaches I can see to avoid this are:

    1. greeting could return a dup or clone of @greeting_cache[formality].
    2. greeting could freeze the result of @greeting_cache[formality]. That'd cause an exception to be raised when memoization_mutator appends strings to it.
    3. Check all code that uses the result of greeting to ensure none of it does any mutating of the string.

    Is there a consensus on the best approach? Is the only disadvantage of doing (1) or (2) decreased performance? (I also suspect freezing an object may not work fully if it has references to other objects.)

    Side note: this problem doesn't affect the main application of memoization: as Fixnums are immutable, calculating Fibonacci sequences doesn't have problems with mutable state. :)
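    For concreteness, options (1) and (2) sketched against the greeting method above:

        # Option 1: hand out a copy, so callers mutate the copy, not the cache
        def greeting(formality)
          @greeting_cache[formality] ||= expensive_greeting_calculation(formality)
          @greeting_cache[formality].dup
        end

        # Option 2: freeze the cached string, so appending to it raises an error
        def greeting(formality)
          @greeting_cache[formality] ||= expensive_greeting_calculation(formality).freeze
        end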

    Read the article

  • Automatic music rating based on listening habits

    - by marco92w
    I've created a Winamp-like music player in Delphi. Not very complex, of course; just a simple one. But now I would like to add a more complex feature: songs in the library should be automatically rated based on the user's listening habits. This means the application should "understand" whether the user likes a song or not, and not only whether he/she likes it, but also how much. My approach so far (data which could be used):

    - Simply measure how often a song is played per unit of time. Start counting when the song was added to the library, so that recently added songs aren't at a disadvantage.
    - Measure how long a song is played on average (minutes). Starting a song but switching directly to another one should count against the ranking, since the user didn't seem to like the song.
    - ...

    Could you please help me with this problem? I would just like to have some ideas; I don't need the implementation in Delphi.
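    One way those two signals could be combined into a score; purely an illustrative sketch in Delphi-style pseudocode, where TargetPlaysPerDay is a tuning constant and all variable names are made up:

        // how often it is played, scaled to [0..1]
        PlaysPerDay   := PlayCount / Max(1, DaysInLibrary);
        Frequency     := Min(1.0, PlaysPerDay / TargetPlaysPerDay);
        // how much of it is typically listened to, also in [0..1]
        AvgCompletion := TotalSecondsPlayed / Max(1, PlayCount * TrackLengthSeconds);
        Rating        := Frequency * AvgCompletion;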

    Read the article

  • Will Apple bundle the Mono Touch runtime with every iPhone?

    - by Zoran Simic
    It strikes me as a good idea for Apple to negotiate with Novell and bundle the Mono Touch runtime (only the runtime, of course) into every iPhone and iPod Touch. Perhaps even make it a "one time install" that automatically gets downloaded from the App Store the first time one downloads an app built with Mono Touch, making every subsequent Mono Touch app much lighter to download (without the runtime). Doing so would be similar in a way to adding Boot Camp to OS X: it would make it easier for C# developers to join the party, but that wouldn't mean these developers would all stick to C#. What convinced me to buy a Mac was Boot Camp: I figured I could always install Windows if I didn't like OS X (and I liked the hardware, so no problem there). Six months later, I'm using OS X full time.

    Would there be any technical issues in doing so? I see only advantages for all parties, and not one disadvantage for anyone (except maybe for the few unfortunate Apple employees who would have to test the crap out of the Mono Touch runtime before bundling it):

    - Novell wins because Mono Touch becomes much more viable (Mono Touch apps become much lighter all of a sudden).
    - Developers win because now there's one more tool in the tool belt, and many C# developers would be very interested.
    - Apple wins because it would bring even more attention to the platform, more revenue in developer fees, more potentially great apps, etc.
    - Users win because less space is used by copies of the same runtime accumulating on their devices.

    Would there be a major technical obstacle to bundling Mono Touch with iPhone OS?

    Edit: Changed the title from "Should" to "Will Apple bundle the runtime?"; I think the consensus on predicting that means a lot to those considering going with Mono Touch.

    Read the article

  • heterogeneous comparisons in python3

    - by Matt Anderson
    I'm 99+% still using Python 2.x, but I'm trying to think ahead to the day when I switch. So, I know that using comparison operators (less/greater than, or equal to) on heterogeneous types that don't have a natural ordering is no longer supported in Python 3.x: instead of some consistent (but arbitrary) result, we get a TypeError. I see the logic in that, and even mostly think it's a good thing. Consistency and refusing to guess is a virtue. But what if you essentially want the Python 2.x behavior? What's the best way to go about getting it?

    For fun (more or less) I was recently implementing a skip list, a data structure that keeps its elements sorted. I wanted to use heterogeneous types as keys in the data structure, and I've got to compare keys to one another as I walk the data structure. The Python 2.x way of comparing makes this really convenient: you get an understandable ordering amongst elements that have a natural ordering, and some ordering amongst those that don't. Consistently using a sort/comparison key like (type(obj).__name__, obj) has the disadvantage of not interleaving the objects that do have a natural ordering; you get all your floats clustered together before your ints, and your str-derived class separated from your strs. I came up with the following:

        import operator

        def hetero_sort_key(obj):
            cls = type(obj)
            return (cls.__name__ + '_' + cls.__module__, obj)

        def make_hetero_comparitor(fn):
            def comparator(a, b):
                try:
                    return fn(a, b)
                except TypeError:
                    return fn(hetero_sort_key(a), hetero_sort_key(b))
            return comparator

        hetero_lt = make_hetero_comparitor(operator.lt)
        hetero_gt = make_hetero_comparitor(operator.gt)
        hetero_le = make_hetero_comparitor(operator.le)
        hetero_ge = make_hetero_comparitor(operator.ge)

    Is there a better way? I suspect one could construct a corner case that this would screw up: a situation where you can compare type A to B and type A to C, but where B and C raise TypeError when compared, and you can end up with something illogical like a > b, a < c, and yet b > c (because of how their class names sorted). I don't know how likely it is that you'd run into this in practice.
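    A sketch of how the fallback idea plugs into sorting under Python 3, via a cmp-style function and functools.cmp_to_key (the example list is arbitrary):

        import functools

        def hetero_cmp(a, b):
            # cmp-style result: negative, zero, or positive
            try:
                return (a > b) - (a < b)
            except TypeError:
                ka, kb = hetero_sort_key(a), hetero_sort_key(b)
                return (ka > kb) - (ka < kb)

        mixed = [3, 'b', 2.5, 'a']
        print(sorted(mixed, key=functools.cmp_to_key(hetero_cmp)))
        # numbers interleave naturally; incomparable types fall back to the
        # type-tagged key, with the consistency caveats noted above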

    Read the article

  • Jrails and jquery-rails together?

    - by ecoologic
    I hope you forgive me if I'm a little confused about this. jRails rewrites all the helper methods for RJS that are otherwise implemented with Prototype. jquery-rails overrides rails.js (so I thought it covered the RJS helpers as well, but I see it doesn't) and replaces the rest of the libraries with jQuery instead of Prototype. Does this mean that with just jquery-rails I can't use RJS at all? Do these gems work well together and let me completely forget Prototype? Is there any disadvantage to using them together? I read some complaints about the fact that jRails uses jQuery 1.5 while there is now 1.8, whereas jquery-rails gives me the latest version (is there backwards compatibility?), but how will this affect my development when using both gems together? I mean, have you got some examples of jQuery UI plugins I can't use or could have problems with? Any help is appreciated. Don't get me wrong, I'm not against Prototype; it's just that I keep running into conflict after conflict when using both, and if I have to choose, I pick jQuery.

    Read the article

  • One big executable or many small DLL's?

    - by Patrick
    Over the years my application has grown from 1 MB to 25 MB, and I expect it to grow further to 40 or 50 MB. I don't use DLLs, but put everything in this one big executable. Having one big executable has certain advantages:

    - Installing my application at the customer is really just: copy and run.
    - Upgrades can easily be zipped and sent to the customer.
    - There is no risk of conflicting DLLs (where the customer has version X of the EXE but version Y of the DLL).

    The big disadvantage of the big EXE is that linking times seem to grow exponentially. An additional problem is that a part of the code (let's say about 40%) is shared with another application. Again, the advantages are:

    - There is no risk of having a mix of incorrect DLL versions.
    - Every developer can make changes to the common code, which speeds up development.

    But again, this has a serious impact on compilation times (everyone compiles the common code again on their own PC) and on linking times. The question http://stackoverflow.com/questions/2387908/grouping-dlls-for-use-in-executable mentions the possibility of mixing DLLs into one executable, but it looks like this still requires you to link all functions manually in your application (using LoadLibrary, GetProcAddress, ...). What is your opinion on executable sizes, the use of DLLs, and the best balance between easy deployment and easy/fast development?
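    For reference, the manual linking referred to above looks roughly like this (a sketch; the DLL name and export are made up):

        #include <windows.h>

        typedef int (*ComputeFn)(int);

        int callCompute(int input)
        {
            HMODULE lib = LoadLibrary(TEXT("common.dll"));  // hypothetical DLL
            if (!lib) return -1;
            ComputeFn compute =
                (ComputeFn)GetProcAddress(lib, "Compute");  // hypothetical export
            int result = compute ? compute(input) : -1;
            FreeLibrary(lib);
            return result;
        }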

    Read the article

  • How to approach parallel processing of messages?

    - by Dan
    I am redesigning the messaging system for my app to use Intel Threading Building Blocks and am stumped trying to decide between two possible approaches. Basically, I have a sequence of message objects and, for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message object's type. The sequential version would be something like this (pseudocode):

        for each message in message_sequence                      <- SEQUENTIAL
            for each handler in (handler_table for message.type)
                apply handler to message                          <- SEQUENTIAL

    The first approach I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently.

    Pros:
    - predictable ordering of messages (i.e., we are guaranteed a FIFO processing order)
    - (potentially) lower latency of processing each message

    Cons:
    - more processing resources available than handlers for a single message type (bad parallelization)
    - bad use of processor cache, since message objects need to be copied for each handler to use
    - large overhead for small handlers

    The pseudocode of this approach would be as follows:

        for each message in message_sequence                      <- SEQUENTIAL
            parallel_for each handler in (handler_table for message.type)
                apply handler to message                          <- PARALLEL

    The second approach is to process the messages in parallel and apply the handlers to each message sequentially.

    Pros:
    - better use of processor cache (keeps the message object local to all handlers which will use it)
    - small handlers don't impose as much overhead (as long as there are other handlers also to be run)
    - more messages are expected than there are handlers, so the potential for parallelism is greater

    Cons:
    - unpredictable ordering: if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic)

    The pseudocode is as follows:

        parallel_for each message in message_sequence             <- PARALLEL
            for each handler in (handler_table for message.type)
                apply handler to message                          <- SEQUENTIAL

    The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage. Which approach would you choose, and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)? Thanks!
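    A sketch of the second approach expressed with TBB's parallel_for (Message, HandlerList, and the containers are assumptions; tbb::parallel_for and tbb::blocked_range are the real TBB primitives):

        #include <tbb/parallel_for.h>
        #include <tbb/blocked_range.h>

        // Messages in parallel, handlers for each message in sequence.
        tbb::parallel_for(
            tbb::blocked_range<size_t>(0, messages.size()),
            [&](const tbb::blocked_range<size_t> &r) {
                for (size_t i = r.begin(); i != r.end(); ++i) {
                    Message &m = messages[i];
                    const HandlerList &handlers = handler_table[m.type];
                    for (size_t h = 0; h < handlers.size(); ++h)
                        handlers[h](m);   // sequential per message
                }
            });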

    Read the article

  • Delivering activity feed items in a moderately scalable way

    - by sotangochips
    The application I'm working on has an activity feed where each user can see their friends' activity (much like Facebook). I'm looking for a moderately scalable way to show a given user's activity stream on the fly. I say 'moderately' because I'm looking to do this with just a database (PostgreSQL) and maybe memcached. For instance, I want this solution to scale to 200k users, each with 100 friends.

    Currently, there is a master activity table that stores the rendered HTML for a given activity (Jim added a friend, George installed an application, etc.), along with the source user and a timestamp. Then there's a separate ('join') table that simply keeps a pointer to the person who should see this activity in their friend feed, and a pointer to the entry in the main activity table. So if I have 100 friends and I perform 3 activities, the join table grows by 300 rows. Clearly this table will grow very quickly, but it has the nice property that fetching the activity to show to a user takes a single (relatively) inexpensive query.

    The other option is to keep only the main activity table and query it with something like:

        select * from activity
        where source_user in (1, 2, 44, 2423, ... my friend list)

    This has the disadvantage that you're querying for users who may never be active, and the query can get slower and slower as your friend list grows.

    I see the pros and cons of both sides, but I'm wondering if some SO folks might help me weigh the options and suggest one way or the other. I'm also open to other solutions, though I'd like to keep it simple and not install something like CouchDB. Many thanks!
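    For concreteness, a sketch of the read path for the fan-out ('join' table) design, with assumed table and column names; with an index on recipient_id this stays one cheap indexed query per feed render:

        SELECT a.html, a.created_at
        FROM activity_recipient r
        JOIN activity a ON a.id = r.activity_id
        WHERE r.recipient_id = 42        -- the viewing user
        ORDER BY a.created_at DESC
        LIMIT 50;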

    Read the article

  • Best Practice: Protecting Personally Identifiable Data in a ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently redid an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data is needed only a couple of times a year, and then only by two employees.

    I've read up on SQL Server 2008's encryption functionality, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key, so it seems a SQL injection attack could still lead to a data leak. I realize permissions should prevent that, but then again, permissions should also have prevented the leak in the first place.

    It seems to me the better method would be to asymmetrically encrypt the data in the web application, store the private key offline, and provide a fat client that the two employees can run the few times a year they need the restricted data, so that decryption happens on the client. This way, if the server gets compromised, we don't leak old data (although depending on what the attackers do, we may leak future data). I think the big disadvantage is that this would require rewriting the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation.

    Do you have a better suggestion? Which method would you recommend? More importantly, why?
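    A sketch of the encrypt-with-public-key idea in .NET (key management elided; note that raw RSA only suits short fields, and longer data would need a hybrid scheme):

        using System.Security.Cryptography;
        using System.Text;

        static class PiiProtector
        {
            // The web server holds only the public key; the private key stays
            // offline with the fat client, so a compromised server cannot
            // decrypt previously stored data.
            public static byte[] EncryptField(string plaintext, RSAParameters publicKey)
            {
                using (var rsa = new RSACryptoServiceProvider())
                {
                    rsa.ImportParameters(publicKey);
                    return rsa.Encrypt(Encoding.UTF8.GetBytes(plaintext), true); // OAEP padding
                }
            }
        }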

    Read the article

  • Best solution for __autoload

    - by tpk
    As our PHP 5 OO application grew (in both size and traffic), we decided to revisit our __autoload() strategy. We always name a file after the class definition it contains, so class Customer would be contained within Customer.php. We used to walk a list of directories in which a file could potentially exist until the right .php file was found. This is quite inefficient, because you potentially go through a number of directories you don't need to, and you do so on every request (making loads of stat() calls).

    Solutions that come to mind:

    - Use a naming convention that dictates the directory name (similar to PEAR). Disadvantage: this doesn't scale too well, resulting in horrible class names.
    - Come up with some kind of pre-built array of file locations (Propel does this for its __autoload). Disadvantage: requires a rebuild before any deploy of new code.
    - Build the array "on the fly" and cache it. This seems to be the best solution, as it allows any class names and directory structure you want and is fully flexible: new files just get added to the list. The concerns are where to store the map and what to do about deleted/moved files. For storage we chose APC, as it doesn't have the disk I/O overhead. As for file deletes, it doesn't matter, since you probably don't want to require them anywhere anyway. As for moves... that's unresolved (we ignore it, as historically it hasn't happened very often for us).

    Any other solutions?
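    A sketch of the third option; the build_class_map() helper and the cache key are made up, while apc_fetch()/apc_store() are the real APC calls:

        function __autoload($class)
        {
            $map = apc_fetch('autoload_class_map');
            if ($map === false) {
                $map = build_class_map();  // hypothetical: scan the source tree once
                apc_store('autoload_class_map', $map);
            }
            if (isset($map[$class])) {
                require $map[$class];
            }
        }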

    Read the article

  • Organizing development teams

    - by Patrick
    A long time ago, when my company was much smaller, dividing the development work over teams was quite easy:

    - the 'application' team developed the application-specific logic, often requiring deep insight into specific industry problems
    - the 'generic' team developed the parts that were common/generic to all applications (user-interface-related stuff, database access, low-level Windows stuff, ...)

    Over the years the boundaries between the teams have become fuzzy. The 'application' teams often write application-specific functionality with a 'generic' part; instead of asking the 'generic' team to write that part for them, they write it themselves to speed up development, then donate it to the 'generic' team. Meanwhile, the 'generic' team's focus has become more 'maintenance oriented': all of the 'very generic' code has already been written, so no new development is needed, but they continuously have to support all the functionality donated by the application teams.

    All this seems to indicate that this split in teams is no longer a good idea. Maybe the 'generic' team should evolve into a 'software quality' team (defining and guarding the rules for writing good-quality software), or into a 'software deployment' team (defining how software should be deployed and installed).

    How do you split up the work in different teams if you have different applications?

    - everybody can write generic code and donates it to a central 'generic' team?
    - everybody can write generic code, but nobody 'manages' it (everybody is the owner)?
    - generic code is written only by a 'generic' team, and the applications have to wait until the 'generic' team delivers it (via a library or a DLL)?
    - there is no overlap in code between the different applications?
    - some other way?

    Notice that the advantage of the mixed approach (allowing everybody to write everywhere in the code) is that:

    - code is written in a more flexible way
    - it's easier to debug the code, since you can easily step into the 'generic' code in the debugger

    But the big (and maybe only) disadvantage is that this generic code may become nobody's responsibility if no clear team manages it anymore. What is your vision?

    Read the article

  • Efficient way to remove empty lists from lists without evaluating held expressions?

    - by Alexey Popkov
    In a previous thread, an efficient way to remove empty lists ({}) from lists was suggested:

        Replace[expr, x_List :> DeleteCases[x, {}], {0, Infinity}]

    Using the Trott-Strzebonski in-place evaluation technique, this method can be generalized to work also with held expressions:

        f1[expr_] :=
          Replace[expr,
            x_List :> With[{eval = DeleteCases[x, {}]}, eval /; True],
            {0, Infinity}]

    This solution is more efficient than the one based on ReplaceRepeated:

        f2[expr_] := expr //. {left___, {}, right___} :> {left, right}

    But it has one disadvantage: it evaluates held expressions if they are wrapped by List:

        In[20]:= f1[Hold[{{}, 1 + 1}]]
        Out[20]= Hold[{2}]

    So my question is: what is the most efficient way to remove all empty lists ({}) from lists without evaluating held expressions? The empty List[] object should be removed only if it is an element of another List itself. Here are some timings:

        In[76]:= expr = Tuples[Tuples[{{}, {}}, 3], 4];
                 First@Timing[#[expr]] & /@ {f1, f2, f3}
                 pl = Plot3D[Sin[x y], {x, 0, Pi}, {y, 0, Pi}];
                 First@Timing[#[pl]] & /@ {f1, f2, f3}

        Out[77]= {0.581, 0.901, 5.027}
        Out[78]= {0.12, 0.21, 0.18}

    Definitions:

        Clear[f1, f2, f3];
        f3[expr_] :=
          FixedPoint[
            Function[e, Replace[e, {a___, {}, b___} :> {a, b}, {0, Infinity}]],
            expr];
        f1[expr_] :=
          Replace[expr,
            x_List :> With[{eval = DeleteCases[x, {}]}, eval /; True],
            {0, Infinity}];
        f2[expr_] := expr //. {left___, {}, right___} :> {left, right};

    Read the article

  • Why should I prefer OSGi Services over exported packages?

    - by Jens
    Hi, I am trying to get my head around OSGi services. The main question I keep asking myself is: what's the benefit of using services instead of working with bundles and their exported packages?

    As far as I know, the concept of late binding has something to do with it. Bundle dependencies are wired together at bundle start, so they are pretty fixed, I guess. But with services it seems to be almost the same: a bundle starts and registers services or binds to services. Of course services can come and go whenever they want, and you have to keep track of these changes, but the core idea doesn't seem that different to me.

    Another aspect is that services are supposedly more flexible: there can be many implementations of one specific interface. On the other hand, there can be a lot of different implementations behind a specific exported package too.

    In another text I read that the disadvantage of using exported packages is that they make the application more fragile than services. The author wrote that if you remove one bundle from the dependency graph, other dependencies would no longer be met, possibly causing a domino effect on the whole graph. But couldn't the same happen if a service went offline? To me it looks like service dependencies are no better than bundle dependencies.

    So far I could not find a blog post, book or presentation that clearly describes why services are better than simply exposing functionality by exporting and importing packages. To sum up my questions: what are the key benefits of using OSGi services that make them superior to exporting and importing packages?
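    For reference, registering a service is only a few lines (a minimal sketch; GreetingService and its implementation are hypothetical):

        import org.osgi.framework.BundleActivator;
        import org.osgi.framework.BundleContext;
        import org.osgi.framework.ServiceRegistration;

        // Provider bundle: publish an implementation under the interface name.
        // Consumers bind to the interface, never to the providing bundle, so
        // implementations can be swapped at run time.
        public class Activator implements BundleActivator {
            private ServiceRegistration registration;

            public void start(BundleContext context) {
                registration = context.registerService(
                    GreetingService.class.getName(), new GreetingServiceImpl(), null);
            }

            public void stop(BundleContext context) {
                registration.unregister();
            }
        }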

    Read the article
