Search Results

Search found 9254 results on 371 pages for 'approach'.

Page 211 of 371

  • Hibernate: same generated value in two properties

    - by Markos Fragkakis
    Hi, I have an entity A with two fields: aId (the system id) and bId. I want the first to be generated:

        @Id
        @Column(name = "PRODUCT_ID", unique = true, nullable = false, precision = 12, scale = 0)
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "PROD_GEN")
        @BusinessKey
        public Long getAId() {
            return this.aId;
        }

    I want bId to initially be exactly the same as aId. One approach is to insert the entity, then read back the aId generated by the DB (a second query), and then update the entity, setting bId equal to aId (a third query). Is there a way to have bId receive the same generated value as aId? Note that afterwards I want to be able to update bId from my GUI. If the solution is pure JPA, even better.
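    One possible approach (a sketch, not part of the original question): with GenerationType.SEQUENCE, Hibernate normally assigns the id when persist() is called, before the INSERT is flushed, so the value can be copied onto bId inside the same transaction and written out with the initial INSERT. The setBId() setter and the caller-managed transaction are assumptions here.

        import javax.persistence.EntityManager;

        public class AService {
            private final EntityManager em;

            public AService(EntityManager em) {
                this.em = em;
            }

            // Sketch only: with the SEQUENCE strategy, aId is assigned during
            // persist(), so bId can be mirrored before the flush and goes out
            // in the same INSERT statement.
            public A createWithMirroredId() {
                A a = new A();          // the entity from the question (assumed to have setBId)
                em.persist(a);          // aId is assigned here
                a.setBId(a.getAId());   // mirror it onto bId
                return a;               // assumes an active transaction managed by the caller
            }
        }

    If the id were generated with IDENTITY instead, it would only be available after the flush, so this shortcut would not apply.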

    Read the article

  • Android - Looking for an AOP solution

    - by Serj Lotutovici
    I'm writing an application that, at the bottom layer, uses its internal API for some manipulations. The problem is that to call any method provided by that class, I (or anybody else using the API) first have to call #prepare() and afterwards #cleanup(). It all worked fine until the application and the API started to grow. The risk of not calling one of those methods before or after using the API is now too big to be ignored (which makes the application bug-prone). Searching for a solution I found this question. I use Google Guice in my app for other purposes, but Android doesn't support AOP, which is why I use only guice-no_aop-x.jar. So I end up with two questions: Is there an AOP solution for Android that implements the same approach shown in the link above? Or does someone have an idea that would suit my case? Thanks in advance!
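    Not AOP, but one plain-Java way to remove the risk (a sketch, assuming the internal API can be reached behind an interface, here called InternalApi): wrap every call in an "execute around" helper so the prepare()/cleanup() pair can never be forgotten.

        // Sketch only: InternalApi and its prepare()/cleanup() names are hypothetical.
        public final class SafeApi {
            public interface InternalApi {
                void prepare();
                void cleanup();
            }

            public interface ApiCall<T> {
                T run(InternalApi api) throws Exception;
            }

            private final InternalApi api;

            public SafeApi(InternalApi api) { this.api = api; }

            // prepare() and cleanup() are guaranteed around every call.
            public <T> T call(ApiCall<T> block) throws Exception {
                api.prepare();
                try {
                    return block.run(api);
                } finally {
                    api.cleanup();
                }
            }
        }

    Callers then only ever go through safeApi.call(...), so the pairing is enforced by construction rather than by discipline or by bytecode weaving.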

    Read the article

  • Find the 20th, 30th, nth prime number (I'm getting the 20th but not the 30th?) [Python]

    - by gsin
    The question is to find the 1000th prime number. I wrote the following Python code for it. The problem is, I get the right answer for the 10th and 20th primes, but after that each increment of 10 leaves me one off the mark. I can't catch the bug here :(

        count=1      #to keep count of prime numbers
        primes=()    #tuple to hold primes
        candidate=3  #variable to test for primes
        while count<20:
            for x in range(2,candidate):
                if candidate%x==0:
                    candidate=candidate+2
                else:
                    pass
            primes=primes+(candidate,)
            candidate=candidate+2
            count=count+1
        print primes
        print "20th prime is ", primes[-1]

    In case you're wondering, count is initialised to 1 because I am not testing 2 as a prime number (I'm starting from 3), and candidate is incremented by 2 because only odd numbers can be prime. I know there are other ways of solving this problem, such as using the prime number theorem, but I want to know what's wrong with this approach. Also, if there are any optimisations you have in mind, please suggest them. Thank you

    Read the article

  • Changing <img src="XXX" />, js event when new image has finished loading?

    - by carillonator
    I have a photo gallery web page where a single <img src="XXX" /> element's src is changed (on a click) with javascript to show the next image (a poor man's ajax, I guess). It works great on faster connections, where the new image appears almost immediately. Even if it takes a few seconds to load, every browser I've tested keeps the old image in place until the new one has completely loaded. Waiting those few seconds on a slow connection is a little confusing, though, and I'm wondering if there's some javascript event that fires when the new image has finished loading, which would let me put up a little "working..." animated gif or something in the meantime. I know I could use AJAX for real (I'm using jQuery already), but this is such a nice and simple solution. Besides this lag, is there any other reason I should stay away from this approach to changing images? Thanks.

    Read the article

  • Performance statistics hooks

    - by tinny
    Let's be honest, most software that developers produce has quite modest performance requirements, e.g. systems serving perhaps hundreds of requests per second, if that. But let's assume for a moment (or even dream) that you were involved in the "next big thing" (whatever that means) and you wanted to put some sort of performance statistics logging in place to help you out when all those users come flying in. Performance statistics logging: how would you approach this requirement? Perhaps you would use some sort of generic framework for this? Or roll your own solution? What would you log? How granular would it be? Or would you not even bother putting anything in place and rather deal with the issue when it actually becomes an issue? It would be really interesting to hear your thoughts on this topic.
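    If you do roll your own, a minimal sketch of what a timing hook can look like (plain Java, no framework assumed; all names here are illustrative): wrap the operation, record the elapsed time, and keep simple aggregate counters that a reporting thread or endpoint can read later.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.LongAdder;
        import java.util.function.Supplier;

        // Sketch only: per-operation call counts and total elapsed nanoseconds.
        public final class PerfStats {
            private static final ConcurrentHashMap<String, LongAdder> CALLS = new ConcurrentHashMap<>();
            private static final ConcurrentHashMap<String, LongAdder> NANOS = new ConcurrentHashMap<>();

            public static <T> T time(String name, Supplier<T> work) {
                long start = System.nanoTime();
                try {
                    return work.get();
                } finally {
                    long elapsed = System.nanoTime() - start;
                    CALLS.computeIfAbsent(name, k -> new LongAdder()).increment();
                    NANOS.computeIfAbsent(name, k -> new LongAdder()).add(elapsed);
                }
            }

            public static String report(String name) {
                long calls = CALLS.getOrDefault(name, new LongAdder()).sum();
                long nanos = NANOS.getOrDefault(name, new LongAdder()).sum();
                return name + ": " + calls + " calls, avg "
                        + (calls == 0 ? 0 : nanos / calls / 1_000) + " us";
            }
        }

    Usage would look like PerfStats.time("load-user", () -> loadUser(id)); granularity then becomes a question of where you choose to put the wrappers.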

    Read the article

  • Data-separation in a Symfony Multi-tenant app using Doctrine

    - by Prasad
    I am trying to implement a multi-tenant application, that is:
    - data of all clients in a single database
    - each shared table has a tenant_id field to separate the data
    I wish to achieve data separation by adding where('tenant_id = ', $user->getTenantID()) (pseudo-code) to all SELECT queries. I could not find any solution up front, but here are the possible approaches I am considering:
    1) Crude approach: customizing all fetchAll and fetchOne functions in every class (I will go mad!)
    2) Using listeners: possibly coding for the preDqlSelect event and adding the 'where' to all queries
    3) Overriding buildQuery(): could not find an example of this for the front end
    4) Implementing contentformfilter: again, I need a pointer
    I would appreciate it if someone could validate these and comment on their efficiency and suitability. Also, if anyone has achieved multi-tenancy using another strategy, please share. Thanks

    Read the article

  • Web design process with CSS - during or after?

    - by SyaZ
    Which is the better practice?
    1) Add CSS during page design: you can see the result (or something close to it) as early as possible and make the required changes. You also know how many divs or spans you might need (e.g. to make a curved, cross-browser hover background). But as you add more and more components to the page, things sometimes get hackish, as you need to patch here and there to get the exact design required.
    2) Add CSS after finishing the page design: you can see the page's overall structure as it is, without styles. You get to see how accessible your site is and can modify it right away if it's not good enough (unlike the former case, where you may break multiple CSS rules). Plus, after you finish, you mostly only need to alter the CSS file, which is good for keeping the momentum going.
    Granted, I have never tried the latter approach, but I am seriously considering it for my next project if I can see convincing reasons -- or find out it's no good at all. Thanks.

    Read the article

  • bash: how to know the NUM option for grep -A / -B "on the fly"?

    - by Michael Mao
    Hello everyone: I am trying to analyze my agent results from a collection of 20 txt files here. If you wonder about the background info, please go see my page; what I am doing here is just one step. Basically I would like to extract only my agent's results from the messy context, so I've got this command for a single file:

        cat run15.txt | grep -A 50 -E '^Agent Name: agent10479475' | grep -B 50 '^=='

    This means: after the regex match, continue forward by 50 lines, stop, then match a line separator that starts with "==" and go back by 50 lines, if possible (the very first line would certainly clash with this). This approach depends on the hard-coded line counter of 50 being just right to capture exactly one line separator. And it does not work if I do the following:

        cat run*.txt | grep -A 50 -E '^Agent Name: agent10479475' | grep -B 50 '^=='

    The output is a mess... My question is: how can I make sure grep knows exactly when to stop going forward and when to stop going backward? Any suggestion or hint is much appreciated.

    Read the article

  • LINQ to SQL: Issue with concurrency

    - by Gib
    I'm working on a sandwich-ordering app in ASP.NET MVC, C# and LINQ to SQL. The app revolves around the user creating multiple custom-made sandwiches from a selection of ingredients. When it comes to confirming the order, I need to know that there are enough portions of each ingredient to fulfil all the sandwiches in the user's order before I commit to the DB, since it is possible for an ingredient to go out of stock between adding it to the basket and confirming the order. A bit about the database:
    - Ingredient: stores ingredient details, including the number of portions
    - Order: header table for an order; simply stores the order time
    - OrderDetail: stores a record of each sandwich in an order
    - OrderDetailItem: stores each ingredient in each sandwich in an order
    So basically I'm wondering what the best approach is to ensure, before I add records to Order, OrderDetail and OrderDetailItem, that the order can be met.
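    One way to make that check safe under concurrency (a sketch in Java/JDBC rather than the question's C#/LINQ to SQL stack, purely to show the database-level pattern; the column names and the ingredient/portion map are assumptions): claim the portions with a conditional UPDATE inside a transaction and treat zero affected rows as out of stock.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.Map;

        // Sketch only: "Portions" and "IngredientId" are assumed column names.
        public final class StockGuard {
            public static boolean tryReserve(Connection con, Map<Long, Integer> portionsByIngredient)
                    throws SQLException {
                String sql = "UPDATE Ingredient SET Portions = Portions - ? "
                           + "WHERE IngredientId = ? AND Portions >= ?";
                con.setAutoCommit(false);
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    for (Map.Entry<Long, Integer> e : portionsByIngredient.entrySet()) {
                        ps.setInt(1, e.getValue());
                        ps.setLong(2, e.getKey());
                        ps.setInt(3, e.getValue());
                        if (ps.executeUpdate() == 0) {   // not enough portions left
                            con.rollback();
                            return false;                // order cannot be met
                        }
                    }
                    // Insert Order / OrderDetail / OrderDetailItem here, then:
                    con.commit();
                    return true;
                } catch (SQLException ex) {
                    con.rollback();
                    throw ex;
                }
            }
        }

    The "Portions >= ?" guard inside the UPDATE is what makes the check and the decrement atomic, so no separate read is needed and two concurrent orders cannot both take the last portion.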

    Read the article

  • Using thread inter-communication to increase my server app's IO throughput; not sure how

    - by Howard Guo
    My server application creates a new thread for each incoming connection. Incoming requests are serialized in a BlockingQueue. There is one worker thread taking items from the queue, producing a response and sending the response through the socket. I have noticed a throughput issue: currently the worker thread is responsible for sending the response message through the socket, which severely wastes processing power and throughput. I am considering: rather than having the worker send the response itself, why not tell the network IO threads to send the response? However, when I think about thread inter-communication, I cannot yet figure out how to approach it: the worker thread will produce a response, but how will it hand the response message to the IO thread? Is there a standard/best practice? Thank you.
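    One standard shape for this hand-off (a sketch, not from the original post): give each connection its own outbound BlockingQueue; the worker simply enqueues the finished response, and the connection's writer thread blocks on the queue and owns all socket writes.

        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.Socket;
        import java.nio.charset.StandardCharsets;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        // Sketch only: one writer thread per connection drains an outbound queue.
        public final class ConnectionWriter implements Runnable {
            private final Socket socket;
            private final BlockingQueue<String> outbound = new LinkedBlockingQueue<>();

            public ConnectionWriter(Socket socket) { this.socket = socket; }

            // Called by the worker thread: the hand-off is just a queue insert.
            public void send(String response) {
                outbound.add(response);
            }

            @Override
            public void run() {
                try (OutputStream out = socket.getOutputStream()) {
                    while (!Thread.currentThread().isInterrupted()) {
                        String response = outbound.take();   // blocks until work arrives
                        out.write(response.getBytes(StandardCharsets.UTF_8));
                        out.flush();
                    }
                } catch (IOException | InterruptedException e) {
                    Thread.currentThread().interrupt();      // drop the connection
                }
            }
        }

    The worker only ever calls send(), so it never touches the socket; the same idea carries over to a selector-based IO thread by replacing the blocking take() with a wake-up of the selector.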

    Read the article

  • Should I use DTOs as my data models in MVVM?

    - by JonC
    I'm currently working on what will be my first real foray into using MVVM and have been reading various articles on how best to implement it. My current thought is to use my data models effectively as data transfer objects, make them serializable, and have them exist on both the client and server sides. It seems like a logical step, given that both object types are really just collections of property getters and setters, and another layer in between seems like complete overkill. Obviously there would be issues with INotifyPropertyChanged not working correctly on the server side, as there is no ViewModel to communicate with, but as long as we are careful to construct proper domain model objects from the data models in the service layer and do not deal with the data models on the server side, I don't think it should be a big issue. I haven't found much about this approach in my reading, so I would like to know whether this is a pretty standard thing -- is it just assumed to be the de facto way of doing MVVM in a multi-tier environment? If I've got completely the wrong idea about things, then thoughts on other approaches would be appreciated too.

    Read the article

  • Semantic Fish-eye zoom on table cells in UIKit?

    - by niblha
    How would I go about implementing a table view that looks and works something like what is illustrated in the link below, using UIKit on the iPhone? http://img442.imageshack.us/img442/4177/uifisheyeview.png I was thinking of using UITableView, and was looking a bit at the UITableViewDelegate methods

        tableView:willDisplayCell:forRowAtIndexPath:
        tableView:heightForRowAtIndexPath:

    But it seems as though UITableView will modify the cell frames after these methods are called, just before the cells are drawn? Maybe skipping UITableView and going straight for a subclass of UIScrollView would be a better approach? So my question is basically that I would just like some overall thoughts on what might be the best way to use existing UIKit components to implement this type of table view.

    Read the article

  • Locking a database record for editing

    - by sd_dracula
    I have a SQL Server 2008 DB and an ASP.NET front end. I would like to implement a lock when a user is currently editing a record, but I'm unsure which is the best approach. My idea is to have an isLocked column for the records, which gets set to true when a user pulls up that record, meaning all other users get read-only access until the first user finishes editing. However, what if the session times out and he/she never saves/updates the record? The record will remain with isLocked = true, meaning others cannot edit it, right? How can I implement some sort of session timeout and have isLocked automatically set back to false when the session times out (or after a predefined period)? Should this be implemented on the ASP.NET side or the SQL side?
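    One common variation (sketched here in Java/JDBC rather than the original ASP.NET stack, and with assumed column names LockedBy and LockedUntil): instead of a boolean isLocked, store who holds the lock and when it expires; acquiring the lock is then a single conditional UPDATE, so an abandoned session simply ages out without anyone having to clean it up.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        // Sketch only: LockedBy / LockedUntil are assumed columns on the record table.
        public final class RecordLock {
            // Returns true if this user now holds the lock for the next 15 minutes.
            public static boolean tryLock(Connection con, long recordId, String user)
                    throws SQLException {
                String sql = "UPDATE Record "
                           + "SET LockedBy = ?, LockedUntil = DATEADD(MINUTE, 15, GETDATE()) "
                           + "WHERE RecordId = ? "
                           + "AND (LockedBy IS NULL OR LockedBy = ? OR LockedUntil < GETDATE())";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setString(1, user);
                    ps.setLong(2, recordId);
                    ps.setString(3, user);
                    return ps.executeUpdate() == 1;   // 0 rows: someone else holds a live lock
                }
            }
        }

    Because the expiry lives in the row itself, nothing has to run when a session dies; a stale lock is simply one whose LockedUntil has passed.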

    Read the article

  • Implement a Cellular Automaton "Rule 110"?

    - by ZaZu
    I was wondering how to implement Rule 110, with 55 lines and 14 cells. I then have to display the result on an LED matrix display. Anyway, my question is: how can I implement such an automaton? I don't really know where to start. Can someone please shed some light on how I can approach this problem? Is there a specific METHOD I must follow? Thanks
    (The program is written in C.)
    EDIT:

        char array[54][14];
        for (v = 0; v < 55; v++) {
            for (b = 0; b < 15; b++) {
                if (org[v][b-1] == 0 && org[v][b] == 0 && org[v][b+1] == 0) {
                    array[v][b] = 0;
                }
                array[v][b] = org[v][b];
            }
        }

    Does that make sense?? (org stands for "original".)
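    For the core update rule, one possible approach (sketched in Java here, though the indexing carries over to C unchanged) is to read each cell's neighbourhood as a 3-bit number and look the new value up in the bits of the rule number itself: 110 in binary is 01101110, and bit n of that value is the next state for neighbourhood pattern n.

        // Sketch only: 55 generations of a 14-cell row under Rule 110.
        public final class Rule110 {
            public static void main(String[] args) {
                final int rule = 110;          // 0b01101110
                final int cells = 14, steps = 55;

                int[] row = new int[cells];
                row[cells - 1] = 1;            // common seed: single live cell on the right

                for (int step = 0; step < steps; step++) {
                    StringBuilder line = new StringBuilder();
                    for (int c : row) line.append(c == 1 ? '#' : '.');
                    System.out.println(line);  // replace with LED-matrix output

                    int[] next = new int[cells];
                    for (int i = 0; i < cells; i++) {
                        int left   = (i == 0)         ? 0 : row[i - 1];  // treat edges as 0
                        int center = row[i];
                        int right  = (i == cells - 1) ? 0 : row[i + 1];
                        int pattern = (left << 2) | (center << 1) | right;   // 0..7
                        next[i] = (rule >> pattern) & 1;
                    }
                    row = next;
                }
            }
        }

    Treating out-of-range neighbours as 0 also avoids the org[v][b-1] out-of-bounds read at b == 0 in the edited C snippet above.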

    Read the article

  • What signing method to use for public open-source projects?

    - by Irchi
    I'm publishing an open-source library on CodePlex and want the DLL files to have strong names so that they can be added to the GAC. What's the best option for signing? Should I use an SNK file? If so, everyone has access to the key. I don't have a problem with everyone having access, but is it a good approach? Should I use a PFX file? If so, does it mean that other people downloading the source code will not be able to build the solution? What I would like is to be the only person with access to the key, so that the signed assemblies also carry a level of authenticity, while not preventing other developers from downloading, building, or changing the source code for themselves and posting changes back to the main project.

    Read the article

  • Caching asp.net viewdata

    - by Tomh
    Hey guys, I'm currently thinking about caching most of my view data, except user-specific data, after a user logs on. I thought the simplest way was to cache the ViewData object itself and add the user-specific data after it is loaded. Are there any downsides to this approach? Are there better ways?

        string cacheKey = "Nieuws/show/" + id;
        if (HttpRuntime.Cache[cacheKey] != null)
        {
            ViewData = HttpRuntime.Cache[cacheKey] as ViewDataDictionary;
        }
        else
        {
            // add stuff to view data
            HttpRuntime.Cache.Insert(cacheKey, ViewData, null,
                DateTime.Now.AddSeconds(180), Cache.NoSlidingExpiration,
                CacheItemPriority.NotRemovable, null);
        }

    Read the article

  • SOQL query to get all related contacts of an account in an opportunity

    - by Prady
    I have a SOQL query which fetches some records based on a where condition:

        select id, name, account.name ... <other fields>
        from opportunity
        where eventname__c = 'Test Event'

    I also need to get the related contact details for the account in each opportunity, i.e. I need to add the email IDs of the contacts that are part of the account on the opportunity. For each opportunity, I need to get the email IDs of all contacts associated with that opportunity's account. I can't really figure out how to approach this. Referring to the documentation, I can get the contact info for an account using this query:

        SELECT Name, (SELECT LastName FROM Contacts) FROM Account

    How can I use this along with the opportunity query? Thanks

    Read the article

  • Using sortableRows and knowing when rows have been moved

    - by DW
    I want to take advantage of the sortableRows property of jqGrid. How do I detect when a row has been moved? I have studied the documentation and looked for examples but haven't found much. I believe it is something like:

        jQuery("#grid").sortableRows({
            connectWith: '#gird',
            ondrop: function() { alert("row moved"); }
        });

    but that does not work. I can move the rows, but I don't seem to have trapped the event. Is there something wrong with my syntax, or with my approach in general? Basically, I need to know that the rows have been rearranged so I can be sure they get saved in their new order. Thanks

    Read the article

  • How to use Command Query Responsibility Segregation (CQRS) with ASP.NET MVC?

    - by Jeffrey
    I have been reading about Command Query Responsibility Segregation (CQRS), and I sort of wonder how it would work with ASP.NET MVC. I get the idea of CQRS conceptually; it sounds nice, but it sure does introduce some complexity (event and messaging patterns) compared to the "normal/common" approach. Also, the idea of CQRS is in some ways at odds with the use of an ORM. I am trying to think about how I could use this pattern in coming projects, so if anyone has experience combining CQRS with ASP.NET MVC and NHibernate, please give some concrete examples to help me better understand CQRS and its use with ASP.NET MVC. Thanks!

    Read the article

  • How to use QMetaMethod with QObject::connect

    - by VestniK
    I have two instances of QObject subclasses, and two QMetaMethod instances: a signal on one object and a slot on the other. I want to connect this signal and slot to each other. I've looked through the qobject.h file and found that the SIGNAL() and SLOT() macros just prepend a "1" or "2" character to the method signature, so it looks like it should be possible to prepend the same character to the string returned by QMetaMethod::signature(). But this approach depends on undocumented internals of the toolkit and could be broken at any time by a new version of Qt. Does anybody know a reliable way to connect signals and slots through their QMetaMethod reflection representation?

    Read the article

  • Performance of .NET ILMerged assemblies

    - by matt
    I have two .NET libraries: "Foo.Bar" and "Foo.Baz". "Foo.Bar" is self-contained, while "Foo.Baz" references "Foo.Bar". Assume I do the following:
    1) Use ILMerge to merge "Foo.Bar.dll" with "Foo.Baz.dll" into "Foo1.dll".
    2) Create a new solution containing the entirety of both "Foo.Bar" and "Foo.Baz" (since I have access to their source code), and compile this into "Foo2.dll".
    Will there be any differences in the performance of Foo1.dll and Foo2.dll when their functionality is used from an external project? If so, how significant is the performance difference, and is it a one-off cost (on load?) or an ongoing difference? Are there any other pros or cons to either approach?

    Read the article

  • Working with multiple input and output files in Python

    - by Morlock
    I need to open multiple files (2 input and 2 output files), do complex manipulations on the lines from the input files, and then append the results to the 2 output files. I am currently using the following approach:

        in_1 = open(input_1)
        in_2 = open(input_2)
        out_1 = open(output_1, "w")
        out_2 = open(output_2, "w")

        # Read one line from each 'in_' file
        # Do many operations on the DNA sequences included in the input files
        # Append one line to each 'out_' file

        in_1.close()
        in_2.close()
        out_1.close()
        out_2.close()

    The files are huge (each potentially approaching 1 GB), which is why I am reading through the input files one line at a time. I am guessing that this is not a very Pythonic way to do things. :) Would using the following form be good?

        with open("file1") as f1:
            with open("file2") as f2:
                # etc.

    If yes, could I do it while avoiding the highly indented code that would result? Thanks for the insights!

    Read the article

  • Understanding run time code interpretation and execution

    - by Bob
    I'm creating a game in XNA and was thinking of creating my own scripting language (extremely simple, mind you). I know there are better ways to go about this (and that I'm reinventing the wheel), but I want the learning experience more than to be productive and fast. When confronted with code at run time, from what I understand, the usual approach is to parse it into machine code or byte code or something else that is actually executable, and then execute that, right? But, for instance, when Chrome first came out, they said their JavaScript engine was fast because it compiles the JavaScript to machine code. This implies other engines weren't compiling to machine code. I'd prefer not to compile to a lower-level language, so are there any known modern techniques for parsing and executing code without compiling to a low level? Perhaps something like parsing the code into some sort of tree, branching through the tree, and comparing each symbol and calling some function that handles that symbol? (Wild guessing and stabbing in the dark.)
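    That guess is essentially a tree-walking interpreter: parse once into an abstract syntax tree, then evaluate by recursing over the nodes, with no machine code or byte code involved. A minimal sketch (in Java here rather than the C#/XNA of the question, using Java 16+ records for brevity; the shape translates directly) for arithmetic expressions:

        // Sketch only: each node type knows how to evaluate itself.
        public final class TreeWalk {
            interface Expr { double eval(); }

            record Num(double value) implements Expr {
                public double eval() { return value; }
            }

            record Add(Expr left, Expr right) implements Expr {
                public double eval() { return left.eval() + right.eval(); }
            }

            record Mul(Expr left, Expr right) implements Expr {
                public double eval() { return left.eval() * right.eval(); }
            }

            public static void main(String[] args) {
                // Tree for "2 + 3 * 4"; a real scripting language would build
                // this tree from source text in a separate parsing step.
                Expr program = new Add(new Num(2), new Mul(new Num(3), new Num(4)));
                System.out.println(program.eval());   // 14.0
            }
        }

    Statements, variables and function calls extend the same idea: more node types, plus an environment object passed into eval() to hold state.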

    Read the article

  • Perl, efficient parsing of csv file

    - by Mike
    I'm working on a project that involves parsing a large CSV-formatted file in Perl, and I'm looking to make things more efficient. My approach has been to split() the file by lines first, and then split() each line again by commas to get the fields. But this is suboptimal, since at least two passes over the data are required (once to split by lines, then once more for each line). This is a very large file, so cutting the processing in half would be a significant improvement to the entire application. My question is: what is the most time-efficient means of parsing a large CSV file using only built-in tools? Note: each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also, we can assume fields will contain only alphanumeric ASCII data (no special characters or other tricks). And I don't want to get into parallel processing, although that might work effectively.

    Read the article

  • abstract class MouseAdapter vs. interface

    - by Stefano Borini
    I noted this (it's a java.awt.event class):

        public abstract class MouseAdapter implements MouseListener, MouseWheelListener, MouseMotionListener {
            ....
        }

    Then you are clearly forced to extend from this adapter:

        public class MouseAdapterImpl extends MouseAdapter {}

    The class is abstract and implements no methods. Is this a strategy to combine different interfaces into something that is "basically an interface"? I assume in Java it's not possible to combine different interfaces into a single one without using this approach. In other words, it's not possible to do something like this in Java:

        public interface MouseAdapterIface extends MouseListener, MouseWheelListener, MouseMotionListener {
        }

    and then eventually:

        public class MouseAdapterImpl implements MouseAdapterIface {}

    Is my understanding of the point correct? What about C#?
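    For what it's worth, the combined-interface declaration in the question is legal Java: an interface may extend any number of interfaces, and the standard library already ships one such combination, javax.swing.event.MouseInputListener (MouseListener + MouseMotionListener). A minimal compiling sketch:

        import java.awt.event.MouseEvent;
        import java.awt.event.MouseListener;
        import java.awt.event.MouseMotionListener;
        import java.awt.event.MouseWheelEvent;
        import java.awt.event.MouseWheelListener;

        // Legal: an interface may extend several interfaces at once.
        interface AllMouseEvents extends MouseListener, MouseWheelListener, MouseMotionListener {
        }

        // An implementor must then provide every method from all three parents;
        // MouseAdapter exists precisely so you can override only the ones you need.
        class LoggingMouseHandler implements AllMouseEvents {
            public void mouseClicked(MouseEvent e)  { System.out.println("clicked " + e.getPoint()); }
            public void mousePressed(MouseEvent e)  { }
            public void mouseReleased(MouseEvent e) { }
            public void mouseEntered(MouseEvent e)  { }
            public void mouseExited(MouseEvent e)   { }
            public void mouseMoved(MouseEvent e)    { }
            public void mouseDragged(MouseEvent e)  { }
            public void mouseWheelMoved(MouseWheelEvent e) { }
        }

    So the adapter is not there because combining interfaces is impossible; it is there to spare implementors the empty method bodies shown above. C# reaches the same place: an interface there can also inherit from multiple interfaces.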

    Read the article
