Search Results

Search found 9254 results on 371 pages for 'approach'.

  • (x86) Assembler Optimization

    - by Pindatjuh
    I'm building a compiler/assembler/linker in Java for the x86-32 (IA32) processor, targeting Windows. High-level concepts of a "language" (in essence a Java API for creating executables) are translated into opcodes, which are then wrapped and output to a file. The translation process has several phases; one is the translation between languages: the highest-level code is translated into medium-level code, which is then translated into the lowest-level code (probably more than 3 levels).

    My problem is the following: if I have higher-level code (X and Y) translated to lower-level code (x, y, U and V), then an example of such a translation is, in pseudo-code:

        x + U(f)    // generated by X
        V(f) + y    // generated by Y

    (an easy example) where V is the opposite of U (compare with a stack push as U and a pop as V). This needs to be 'optimized' into:

        x + y

    essentially removing the "useless" code. My idea was to use regular expressions. For the above case, it would be a regular expression looking like this: x:(U(x)+V(x)):null, meaning for all x, find U(x) followed by V(x) and replace with null. Imagine more complex regular expressions for more complex optimizations. This should work on all levels.

    What do you suggest? What would be a good approach to optimize in these situations?
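
    As a point of reference, this kind of cancellation is usually done as a peephole pass over structured instructions rather than with textual regular expressions. A minimal sketch of the idea in Python, under the assumption that cancelling pairs (like push/pop of the same operand) can be tabulated; the instruction encoding here is invented for illustration:

        # Peephole sketch: cancel an operation immediately followed by its
        # inverse on the same operand. Instructions are (mnemonic, operand)
        # tuples; INVERSE maps an op to the op that cancels it.
        INVERSE = {"push": "pop"}  # assumed table of mutually cancelling ops

        def peephole(instructions):
            out = []
            for ins in instructions:
                if out:
                    prev = out[-1]
                    if INVERSE.get(prev[0]) == ins[0] and prev[1] == ins[1]:
                        out.pop()      # prev and ins cancel each other
                        continue
                out.append(ins)
            return out

        code = [("mov", "eax"), ("push", "f"), ("pop", "f"), ("add", "y")]
        print(peephole(code))  # [('mov', 'eax'), ('add', 'y')]

    Because each instruction is checked against the last surviving one, nested pairs such as push a, push b, pop b, pop a collapse in a single pass.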

  • Mysql Database Question about Large Columns

    - by murat
    Hi, I have a table that has 100,000 rows, and soon it will be doubled. The size of the database is currently 5 GB, and most of it goes to one particular column, which is a text column for PDF files. We expect to have a 20-30 GB, or maybe 50 GB, database after a couple of months, and this system will be used frequently. I have a couple of questions regarding this setup:

    1) We are using InnoDB on every table, including the users table. Is it better to use MyISAM on this table, where we store the text version of the PDF files (from a memory usage / performance perspective)?

    2) We use Sphinx for searching, however the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. These 10 rows may allocate 50 MB of memory, which is quite large. So I am planning to split these PDF files into chunks of 5 pages in the database, so these 100,000 rows will become around 3-4 million rows, and a couple of months later, instead of having 300,000-350,000 rows, we'll have 10 million rows to store the text version of these PDF files. However, we will retrieve fewer pages: instead of retrieving 400 pages to send to Sphinx for highlighting, we can retrieve 5 pages, and that has a big impact on performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; if we retrieve PDF files that have fewer than 5 pages, the execution time drops to 0.06 seconds, and it also uses less memory.

    Do you think this is a good trade-off? We will have millions of rows instead of 100k-200k, but it will save memory and improve performance. Is this a good approach to the problem, and do you have any ideas on how to overcome it otherwise? The text version of the data is used only for indexing and highlighting, so we are very flexible. Thanks,
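
    For what it's worth, the chunking step itself is straightforward; a sketch in Python under assumed table/column names (doc_id/chunk_no/body are illustrative, and this says nothing about the storage-engine question):

        # Split a document's per-page text into 5-page chunks before
        # insertion, so highlighting only ever loads one small row.
        PAGES_PER_CHUNK = 5

        def chunk_pages(pages, size=PAGES_PER_CHUNK):
            """pages: list of page-text strings -> list of chunk strings."""
            return ["\n".join(pages[i:i + size])
                    for i in range(0, len(pages), size)]

        pdf_pages = ["page %d text" % i for i in range(1, 13)]  # stand-in data
        doc_id = 42                                             # stand-in id
        rows = [(doc_id, n, text) for n, text in enumerate(chunk_pages(pdf_pages))]
        # e.g. INSERT INTO pdf_chunks (doc_id, chunk_no, body) VALUES (%s, %s, %s)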

  • binding nested json object value to a form field

    - by Jack
    I am building a dynamic form to edit data in a JSON object. First, if something like this already exists, let me know. I would rather not build it, but I have searched many times for a tool and have found only tree-like structures that require entering quotes. I would be happy to treat all values as strings. This edit functionality is for end users, so it needs to be easy and not intimidating.

    So far I have code that generates nested tables to represent a JSON object. For each value I display a form field. I would like to bind the form field to the associated nested JSON value. If I could store a reference to the JSON value, I would build an array of references to each value in a JSON object tree. I have not found a way to do that with JavaScript. My last-resort approach will be to traverse the table after edits are made. I would rather have dynamic updates, but a single submit would be better than nothing. Any ideas?

        // the JSON in the files nests only a few levels. Here is the format of a simple case:
        {
            "researcherid_id": {
                "id_key": "researcherid_id",
                "description": "Use to retrieve bibliometric data",
                "url_template": [
                    {
                        "name": "Author Detail",
                        "url": "http://www.researcherid.com/rid/${key}"
                    }
                ]
            }
        }

        $.get('file.json', make_json_form);

        function make_json_form(response) {
            dataset = $.secureEvalJSON(response);
            // iterate through the object and generate form fields for string values.
        }

        // After the form is edited I want to display the raw updated JSON
        // (then I want to save it, but that is for another thread).
        // For now I iterate through the form and construct the JSON object;
        // I would rather have the dataset object updated on focus-out after each edit.
        function show_json(form_id) {
            var r = {};
            var el = document.getElementById(form_id);
            table_to_json(r, el, null);
            $('body').html(formattedJSON(r));
        }

  • Display image background fully for last repeat in a div

    - by Stiggler
    I have a 700x300 background repeating seamlessly under the main content div. Now I'd like to attach a div at the bottom of the content div, containing a continuation-to-end of the background image, connecting seamlessly with the background above it. Due to the nature of the pattern, unless the full 300px height of the background image is visible in the last repeat of the content div's background, the background in the div below won't connect seamlessly. Basically, I need the content div's height to be a multiple of 300px under all circumstances.

    What's a good approach to this sort of problem? I've tried resizing the content div on page load, but this only works as long as the content div doesn't contain any resizing, dynamic content, which is not my case:

        function adjustContentHeight() {
            // Set the content div's height to the nearest upper multiple of the
            // column background's height, so it isn't cut off when repeated.
            var contentBgHeight = 300;
            var contentHeight = $("#content").height();
            var adjustedHeight = Math.ceil(contentHeight / contentBgHeight);
            $("#content").height(adjustedHeight * contentBgHeight);
        }

        $(document).ready(adjustContentHeight);

    What I'm looking for is a way to respond to a div resize event, but there doesn't seem to be such a thing. Also, please assume I have no access to the JS controlling the resizing of content in the content div, though this is potentially a way of solving the problem. Another potential solution I thought of was to offset the background image in the bottom div by a certain amount depending on the height of the content div. Again, the missing piece is the ability to respond to a resize event.

  • tipfy for Google App Engine: Is it stable? Can auth/session components of tipfy be used with webapp?

    - by cv12
    I am building a web application on Google App Engine that requires users to register with the application and subsequently authenticate with it and maintain sessions. I don't want to force users to have Google accounts. Also, the target audience for the application is the average non-geek, so I'm not very keen on using OpenID or OAuth. I need something simple like: user registers with an e-mail and password, and then can log back in with those credentials. I understand that this approach does not provide the security benefits of Google or OpenID authentication, but I am prepared to trade foolproof security for end-user convenience and a hassle-free experience.

    I explored Django, but decided that consecutive deprecations from appengine-helper to app-engine-patch to django-nonrel may signal that that path may be a bit risky in the long term. I'd like to use a code base that is likely to be maintained consistently. I also explored standalone session/auth packages like gaeutilities and suas. GAEUtilities looked a bit immature (e.g., the code wasn't pythonic in places, in my opinion), and SUAS did not give me a lot of comfort with the cookie-only sessions. I could be wrong in my assessment of these two, so I would appreciate input on those (or others that may serve my objective).

    Finally, I recently came across tipfy. It appears to be based on Werkzeug, and Alex Martelli spoke highly of it here on Stack Overflow. I have two primary questions related to tipfy:

    1) As a framework, is it as mature as webapp? Is it stable and likely to be maintained for some time?

    2) Since my primary interest is the auth/session components, can those components of the tipfy framework be used with webapp, independent of the broader tipfy framework? If yes, I would appreciate a few pointers on how I could go about doing that.

  • I want my logs sent to my mail with logrotate

    - by lericson
    Not strictly a question about programming as such, more of a log-handling question. Anyway: my company has multiple clients, and each of these clients has a set of logs that I'd very much like to have sent to me by e-mail. Now, another prerequisite is that they're highlighted with simple HTML. All that is very well; I've managed to make a highlighter for the given log types. So what I do is use logrotate's prerotate facility to send the logs as an e-mail message. Example:

        /var/log/a.log /var/log/b.log {
            daily
            missingok
            copytruncate
            prerotate
                /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
            endscript
        }

    The problem with this approach is basically that logrotate sucks: it'll run the command once for every log file specified in the stanza, and to my knowledge there's no way to know which of the log files is being handled (which wouldn't really help anyway). Short of repeating the exact same logrotate configuration up to 10 times on different machines, the only thing I can do is get bogged down with log spam every night. And I grew tired of it today, so I ask.
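
    Worth checking first: logrotate's sharedscripts directive makes prerotate/postrotate run once for all logs matched by the stanza, instead of once per file, which is exactly this complaint. Failing that, a cron-driven script can batch the highlighting and send a single message; a rough Python sketch (the paths, addresses, and inline highlighter are placeholder assumptions):

        #!/usr/bin/env python
        # Highlight several logs and mail them as one HTML message,
        # independent of logrotate's per-file script invocation.
        import glob
        import subprocess
        from email.mime.text import MIMEText

        def highlight(path):
            # stand-in for the real HTML highlighter
            with open(path) as f:
                return "<h2>%s</h2><pre>%s</pre>" % (path, f.read())

        body = "".join(highlight(p) for p in sorted(glob.glob("/var/log/[ab].log")))
        msg = MIMEText(body, "html")
        msg["From"] = "logmailer@example.com"   # placeholder address
        msg["To"] = "me@example.com"            # placeholder address
        msg["Subject"] = "Nightly log digest"

        p = subprocess.Popen(["/usr/sbin/sendmail", "-t"], stdin=subprocess.PIPE)
        p.communicate(msg.as_string().encode())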

  • VB.NET switching from ADO.NET to LINQ

    - by Cj Anderson
    I'm VERY new to LINQ. I have an application I wrote in VB.NET 2.0. It works great, but I'd like to switch this application to LINQ. I use ADO.NET to load XML into a DataTable. The XML file has about 90,000 records in it. I then use DataTable.Select to perform searches against that DataTable. The search control is a free-form textbox, so if the user types in terms it searches instantly, and any further terms typed in continue to restrict the results. So you can type in Bob, or Bob Barker, or Bob Barker Price is Right. The more criteria typed in, the narrower the result. I bind the results to a GridView.

    Moving forward, what all do I need to do? From a high level, I assume I need to:

    1) Go to Project Properties -- Advanced Compiler Settings and change the target framework from 2.0 to 3.5.

    2) Add the reference to System.Xml.Linq and add the Imports statements to the classes.

    I'm not sure what the best approach is going forward after that. I assume I use XDocument.Load, and then my search subroutine runs against the XDocument. Do I just do the standard LINQ query for this sort of repeated search? Like so:

        var people = from phonebook in doc.Root.Elements("phonebook")
                     where phonebook.Element("userid") = "whatever"
                     select phonebook;

    Any tips on how best to implement this?

  • Best way to randomly select columns from random rows of SQL results.

    - by LesterDove
    A search of SO yields many results describing how to select random rows of data from a database table. My requirement is a bit different, though, in that I'd like to select individual columns from across random rows in the most efficient/random/interesting way possible.

    To better illustrate: I have a large Customers table, and from that I'd like to generate a bunch of fictitious demo Customer records that aren't real people. I'm thinking of just querying randomly from the Customers table and then randomly pairing FirstNames with LastNames, Address, City, State, etc. So if this is my real Customer data (simplified):

        FirstName  LastName  State
        ===========================
        Sally      Simpson   SD
        Will       Warren    WI
        Mike       Malone    MN
        Kelly      Kline     KS

    then I'd generate several records that look like this:

        FirstName  LastName  State
        ===========================
        Sally      Warren    MN
        Kelly      Malone    SD

    Etc. My initial approach works, but it lacks the elegance that I'm hoping the final answer will provide. (I'm particularly unhappy with the repetitiveness of the subqueries, and the fact that this solution requires a known/fixed number of fields and therefore isn't reusable.)

        SELECT
            FirstName = (SELECT TOP 1 FirstName FROM Customer ORDER BY newid()),
            LastName  = (SELECT TOP 1 LastName  FROM Customer ORDER BY newid()),
            State     = (SELECT TOP 1 State     FROM Customer ORDER BY newid())

    Thanks!
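
    Conceptually, the scrambling step amounts to shuffling each column independently and recombining rows; a small Python sketch of that idea (the column choice is hard-coded for illustration, so it shares the fixed-field limitation of the SQL version):

        import random

        customers = [
            ("Sally", "Simpson", "SD"),
            ("Will",  "Warren",  "WI"),
            ("Mike",  "Malone",  "MN"),
            ("Kelly", "Kline",   "KS"),
        ]

        # Shuffle each column independently, then zip back into rows:
        # every value stays real, but the row associations are broken.
        columns = [list(col) for col in zip(*customers)]
        for col in columns:
            random.shuffle(col)
        fake_customers = list(zip(*columns))
        print(fake_customers)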

  • Unit tests - The benefit from unit tests with contract changes?

    - by Stefan Hendriks
    Recently I had an interesting discussion with a colleague about unit tests. We were discussing how maintaining unit tests becomes less productive once your contracts change. Perhaps someone can enlighten me on how to approach this problem. Let me elaborate:

    Let's say there is a class which does some nifty calculations. The contract says that it should calculate a number, or return -1 when it fails for some reason. I have contract tests that test this. And in all my other tests, I stub this nifty calculator thingy.

    Now I change the contract: whenever it cannot calculate, it will throw a CannotCalculateException. My contract tests will fail, and I will fix them accordingly. But all my mocked/stubbed objects will still use the old contract rules. These tests will succeed, while they should not!

    The question that arises is: with this faith in unit testing, how much faith can be placed in such changes? The unit tests succeed, but bugs will occur when testing the application. The tests using this calculator will need to be fixed, which costs time, and it may even be stubbed/mocked in many places...

    What do you think about this case? I had never thought about it thoroughly. In my opinion, these changes to unit tests would be acceptable: if I did not use unit tests, I would also see such bugs arise within the test phase (caught by testers). Yet I am not confident enough to point out which will cost more time (or less). Any thoughts?

  • Minimum cost strongly connected digraph

    - by Kazoom
    I have a digraph which is strongly connected (i.e. there is a path from i to j and from j to i for each pair of nodes (i, j) in the graph G). I wish to find a strongly connected subgraph such that the sum of all edge weights is the least. To put it differently, I need to get rid of edges in such a way that after removing them, the graph will still be strongly connected and the total edge weight is minimal. I think it's an NP-hard problem. I'm looking for an optimal solution, not an approximation, for a small data set like 20 nodes.

    Edit: A more general description: given a graph G(V,E), find a graph G'(V,E') such that if there exists a path from v1 to v2 in G, then there also exists a path between v1 and v2 in G', and the sum of all ei in E' is the least possible. So it's similar to finding a minimum equivalent graph, only here we want to minimize the sum of edge weights rather than the number of edges.

    Edit: My approach so far: I thought of solving it using TSP with multiple visits, but it is not correct. My goal here is to cover each city, but using minimum-cost paths. So it's more like the set cover problem, I guess, but I'm not exactly sure. I'm required to cover each and every city using paths whose total cost is minimum, so visiting already-visited paths multiple times does not add to the cost.
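
    For scale, a simple greedy baseline is easy to state and useful as an upper bound to compare an exact search against: repeatedly delete the heaviest edge whose removal keeps the graph strongly connected. It is not guaranteed optimal for this problem; a Python sketch:

        def strongly_connected(nodes, edges):
            """True if every node reaches every other (DFS in both directions)."""
            def reaches_all(adj):
                seen, stack = set(), [next(iter(nodes))]
                while stack:
                    v = stack.pop()
                    if v not in seen:
                        seen.add(v)
                        stack.extend(adj.get(v, ()))
                return seen == set(nodes)
            fwd, rev = {}, {}
            for u, v in edges:
                fwd.setdefault(u, []).append(v)
                rev.setdefault(v, []).append(u)
            return reaches_all(fwd) and reaches_all(rev)

        def greedy_prune(nodes, weighted_edges):
            """Drop heaviest edges first while strong connectivity survives."""
            kept = dict(weighted_edges)          # (u, v) -> weight
            for e in sorted(kept, key=kept.get, reverse=True):
                trial = [x for x in kept if x != e]
                if strongly_connected(nodes, trial):
                    del kept[e]
            return kept

        nodes = {1, 2, 3}
        edges = {(1, 2): 1, (2, 3): 1, (3, 1): 1, (1, 3): 5, (2, 1): 4}
        print(greedy_prune(nodes, edges))   # keeps only the cheap 3-cycle

    With only 20 nodes, the same connectivity check can also drive an exact branch-and-bound over the edge set, using the greedy result to prune.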

  • Changing the parameters of a Javascript function using GreaseMonkey

    - by Dave
    There are a lot of questions here about Greasemonkey, but I didn't see anything directly related to my question. I'm on a website that uses a text editor control, and it's really annoying that they don't allow horizontal resizing (only vertical). So if you are using a cheapo projector that only supports 1024x768, the text runs off of the screen and there's nothing you can do about it. I looked at the page source, and found the section of code that I want Greasemonkey to modify:

        <script type="text/javascript">
            // TinyMCE instance type: CD_TinyMCE
            tinyMCE.init({
                convert_urls : "",
                mode : "specific_textareas",
                editor_selector : "rte",
                theme : "advanced",
                theme_advanced_toolbar_location : "top",
                theme_advanced_toolbar_align : "left",
                theme_advanced_statusbar_location : "bottom",
                theme_advanced_resizing : true,
                theme_advanced_resize_horizontal : false,   // <-- the value I want to change
                ...

    I just want to change the marked value to true. I tried to approach this the brute-force way and replace the text this way:

        function allowHorizontalResize() {
            var search_string = "theme_advanced_resize_horizontal : false";
            var replace_string = "theme_advanced_resize_horizontal : true";
            var doc_text = document.body.innerHTML;
            document.body.innerHTML = doc_text.replace(search_string, replace_string);
        }

    ...but it doesn't work. However, I know the basic search-and-replace part works because I can use this script to replace text within the editor -- I just can't change the JavaScript code that initializes the text editor. Can someone explain what I'm doing wrong? Is this even possible?

  • Data Annotations on ViewModels or Domain Objects

    - by Ahmad
    Where would data annotations be more suitable: ViewModels, domain objects, or both? I am struggling to decide where these are better suited. I have not yet fully utilized them, but this question came to mind.

    From most of the examples I have seen, they are generally placed on models and simply use the required attributes for validation via ModelState.IsValid. I have also seen another question on SO where the use of data annotations alone is considered insufficient, advocating validation elsewhere as well.

    Option 1 - I will still need to validate again in my service layer. (I think that my service layer should be complete, and this includes validation, since it's planned to be used elsewhere.)

    Option 2 - How will I then get the benefits of the built-in validation, both client side and server side?

    Option 3 - There will be repetition of validation logic; however, I was wondering if one could use a MetaData class approach that can be used for both ViewModels and domain objects. (This is completely off the top of my head, so it may be nonsensical.)

    I wonder if this question even makes sense. If not, can someone please help me understand this better? Have I completely misunderstood the use of data annotations?

  • NSFetchedResultsController sections localized sorted

    - by Gerd
    How can I use the NSFetchedResultsController with a translated sort key and sectionKeyPath? The problem: the "type" property in the database holds IDs like typeA, typeB, typeC, ... rather than the value directly, because it has to be localized. In English, typeA=Bird, typeB=Cat, typeC=Dog; in German it would be Vogel, Katze, Hund.

    With an NSFetchedResultsController using sort key and sectionKeyPath on "type", I receive the order and sections

        - typeA
        - typeB
        - typeC

    Next I translate for display, and everything is fine in English:

        - Bird
        - Cat
        - Dog

    Now I switch to German and receive a wrong sort order, because it still sorts by typeA, typeB, typeC:

        - Vogel
        - Katze
        - Hund

    So I'm looking for a way to localize the sort for the NSFetchedResultsController. I tried the transient-property approach, but that doesn't work for the sort key, because the sort key needs to be in the entity. I have no other idea, but I can't believe it's not possible to use NSFetchedResultsController on a derived attribute required for localization. There are related discussions like http://stackoverflow.com/questions/1384345/using-custom-sections-with-nsfetchedresultscontroller but the difference is that there the custom section names and the sort key probably have the same order. Not in my case, and this is the main difference. In the end I would need a sort order for the necessary NSSortDescriptor on a derived attribute, I guess. This sort order also has to serve for the sectionKeyPath. Thanks for any hint.

  • Starting with NHibernate

    - by George
    I'm having major difficulties getting started with NHibernate. Main problems:

    Where should my hbm.xml files reside? I created a Mappings folder, but I received the error "Could not find xxx.hbm.xml file." I tried to load the specific class through cfg.AddClass(typeof(xxx)); but it still gives me the same error (the files are marked as embedded resources).

    I'm also having major problems connecting to the database. I stopped trying to use the cfg xml file and tried a more direct approach with a library I have here:

        Configuration cfg = new Configuration();
        cfg.AddClass(typeof(Tag));
        ISessionFactory sessions = cfg.BuildSessionFactory();
        AgnosticConnectionHandler agch = new AgnosticConnectionHandler(
            "xxx", "xxx", "geo_biblio", "localhost", 5432,
            DatabaseInstance.PostgreSQL);
        ISession sessao = sessions.OpenSession(agch.GetConnection);
        ITransaction tx = sessao.BeginTransaction();

        Tag tag1 = new Tag();
        tag1.NomeTag = "Teste Tag NHibernate!!!";
        sessao.Save(tag1);
        tx.Commit();
        sessao.Close();

    Any tips for me? I'm getting the exception on line 2 of this code and am still not sure what to do. Any help is appreciated. Thanks

  • Python and a "time value of money" problem.

    - by jamieb
    (I asked this question earlier today, but I did a poor job of explaining myself. Let me try again.)

    I have a client who is an industrial maintenance company. They sell service agreements that are prepaid 20-hour blocks of a technician's time. Some of their larger customers might burn through that agreement in two weeks, while customers with fewer problems might go eight months on that same contract. I would like to use Python to help model projected sales revenue and determine how many billable hours per month they'll be on the hook for.

    If each customer only ever bought a single service contract (never renewed), it would be easy to figure sales as:

        monthly_revenue = contract_value * qty_contracts_sold

    Billable hours would also be easy:

        billable_hrs = hrs_per_contract * qty_contracts_sold

    However, how do I account for renewals? Assuming that 90% (or some other arbitrary amount) of customers renew, their monthly revenue ought to grow geometrically. Another important variable is how long the average customer takes to burn through a contract. How do I determine what the revenue and billable hours will be 3, 6, or 12 months from now, based on various renewal and burn rates? I assume that I'd use some type of recursive function, but math was never one of my strong points. Any suggestions, please?

    Edit: I'm thinking that the best way to approach this is to treat it as a "time value of money" problem. I've retitled the question as such. The problem is probably a lot more common if you think of "monthly sales" as something similar to annuity payments.
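
    A month-by-month simulation is often easier to get right than a closed-form geometric series here. A minimal sketch under explicit simplifying assumptions (a fixed burn rate per contract per month, a flat renewal probability applied as an expected value at the moment a contract is exhausted, and an illustrative contract price):

        # Project revenue and billable hours, month by month.
        CONTRACT_HOURS = 20
        CONTRACT_VALUE = 2000.0   # illustrative price per 20-hour block
        RENEWAL_RATE = 0.9        # fraction of customers who renew
        BURN_RATE = 5             # hours consumed per contract per month

        def project(new_sales_per_month, months):
            active = []           # remaining hours per active contract
            for month in range(1, months + 1):
                active += [CONTRACT_HOURS] * new_sales_per_month
                burned, renewals, nxt = 0.0, 0.0, []
                for remaining in active:
                    use = min(BURN_RATE, remaining)
                    burned += use
                    if remaining - use > 0:
                        nxt.append(remaining - use)
                    else:
                        renewals += RENEWAL_RATE   # expected value, not a random draw
                revenue = (new_sales_per_month + renewals) * CONTRACT_VALUE
                active = nxt + [CONTRACT_HOURS] * int(round(renewals))
                print(month, round(revenue, 2), round(burned, 1))

        project(new_sales_per_month=10, months=12)

    Replacing the expected-value renewal with a random draw per contract turns the same loop into a Monte Carlo model, which handles per-customer burn rates just as easily.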

  • Replace low level web-service reference call transport with custom one

    - by hoodoos
    I'm not sure the title sounds right, actually, so I will give more explanation here, starting from the very beginning. :)

    I'm using C# and .NET for my development. I have an application that makes requests to a SOAP web service, and for each user request it produces 3 to 10 requests to the web service. They should all run asynchronously and finish together, so I use the Async method of the generated web-service reference and then wait for the result in a callback. But it seems to start a thread (or take one from the pool) for every async call I make, so if I have 10 clients I end up spawning 30 to 100 threads, which sounds terrible even for my 16-core server. :)

    So I wanted to replace the low-level transport implementation with my own, which uses non-blocking sockets and can handle at least 50 sockets running in parallel on one thread without much overhead. But I don't actually know where my override would best go. I analyzed the System.Web.Services.Protocols.SoapHttpClientProtocol class and saw that it has a GetWebRequest method which I could actually use. If only I could somehow intercept the object it creates and get an HTTP request with all headers and body from there, then send it with my own sockets...

    Any ideas on what approach to use? Or maybe there's something built into the framework I can use?

  • Implementing scroll view that is much larger than the screen view with random images

    - by pulegium
    What I'm trying to do is implement something like a fruit-machine scroll view. Basically I have a sequence of images (randomly generated on the fly) and I want to allow users to scroll through this line. About 10 images fit on the screen, but the line is virtually unlimited.

    My question is what would be the best approach to implementing this. So far I've thought of having a UIImageView going across the screen (with width equal to the sum of 10 images), whose associated image would be a combination of 12 or so images, with two images falling outside the visible area to allow for smooth scrolling. If the user scrolls further, I would reconstruct the image associated with the view so the new images are appended and the old ones are discarded. This image-reconstruction business sounds a bit complicated, so I was wondering if there's a more logical way to implement it.

    There's one more thing: I want to have two lines crossing each other with images, a bit like conveyor belts crossing. If that makes any sense... Bit like below:

              V1
              V2
        H1 H2 H3 H4 H5
              V4
              V5

    So if the vertical belt is moved, it'd be like:

              V2
              H3
        H1 H2 V4 H4 H5
              V5
              V6

    with H1-H5 and V1-V6 being automatically generated images. I'm not asking for the implementation code, just thoughts on the principles of how to implement this. Thanks!! :)

  • How to replace custom IDs in the order of their appearance with a shell script?

    - by Péter Török
    I have a pair of rather large log files with very similar content, except that some identifiers differ between the two. A couple of examples:

        UnifiedClassLoader3@19518cc | UnifiedClassLoader3@d0357a
        JBossRMIClassLoader@13c2d7f | JBossRMIClassLoader@191777e

    That is, wherever the first file contains UnifiedClassLoader3@19518cc, the second contains UnifiedClassLoader3@d0357a, and so on. I want to replace these with identical IDs so that I can spot the really important differences between the two files. I.e. I want to replace all occurrences of both UnifiedClassLoader3@19518cc in file1 and UnifiedClassLoader3@d0357a in file2 with UnifiedClassLoader3@1; all occurrences of both JBossRMIClassLoader@13c2d7f in file1 and JBossRMIClassLoader@191777e in file2 with JBossRMIClassLoader@2; etc.

    Using the Cygwin shell, so far I've managed to list all the distinct identifiers occurring in one of the files with:

        grep -o -e 'ClassLoader[0-9]*@[0-9a-f][0-9a-f]*' file1.log | sort | uniq

    However, now the original order is lost, so I don't know which ID pairs with which in the other file. With grep -n I can get the line number, so the sort would preserve the order of appearance, but then I can't weed out the duplicate occurrences. Unfortunately, grep cannot print only the first match of a pattern.

    I figured I could save the list of identifiers produced by the above command into a file, then iterate over the patterns in the file with grep -n | head -n 1, concatenate the results and sort them again. The result would be something like:

        2 ClassLoader3@19518cc
        137 ClassLoader@13c2d7f
        563 ClassLoader3@1267649
        ...

    Then I could (either manually or with sed itself) massage this into a sed command like:

        sed -e 's/ClassLoader3@19518cc/ClassLoader3@2/g' \
            -e 's/ClassLoader@13c2d7f/ClassLoader@137/g' \
            -e 's/ClassLoader3@1267649/ClassLoader3@563/g' \
            file1.log > file1_processed.log

    and similarly for file2. However, before I start, I would like to verify that my plan is the simplest possible working solution. Is there any flaw in this approach? Is there a simpler way?
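
    For comparison, the whole plan (first-appearance numbering plus global replacement) is only a few lines in a scripting language, avoiding the grep/sed round-trip. A Python sketch; the pattern is the grep one, broadened slightly to keep the class-name prefix (UnifiedClassLoader3, JBossRMIClassLoader, ...) in the match:

        import re

        PATTERN = re.compile(r'\w*ClassLoader\d*@[0-9a-f]+')

        def normalize(path, out_path):
            """Rewrite IDs as Name@1, Name@2, ... in order of first appearance."""
            ids = {}
            def repl(match):
                token = match.group(0)
                if token not in ids:
                    name = token.split('@')[0]
                    ids[token] = '%s@%d' % (name, len(ids) + 1)
                return ids[token]
            with open(path) as fin, open(out_path, 'w') as fout:
                for line in fin:
                    fout.write(PATTERN.sub(repl, line))

        normalize('file1.log', 'file1_processed.log')
        normalize('file2.log', 'file2_processed.log')

    This relies on the same assumption the manual plan does: corresponding IDs first appear in the same order in both files.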

  • Use continue or Checked Exceptions when checking and processing objects

    - by Johan Pelgrim
    I'm processing, let's say, a list of "Document" objects. Before I record the processing of a document as successful, I first want to check a couple of things. Let's say the file referring to the document should be present, and something in the document should be present. Just two simple checks for the example, but think of 8 more checks before I have successfully processed my document. What would be your preference?

        for (Document document : documents) {
            if (!fileIsPresent(document)) {
                doSomethingWithThisResult("File is not present");
                continue;
            }
            if (!isSomethingInTheDocumentPresent(document)) {
                doSomethingWithThisResult("Something is not in the document");
                continue;
            }
            doSomethingWithTheSuccess();
        }

    Or:

        for (Document document : documents) {
            try {
                fileIsPresent(document);
                isSomethingInTheDocumentPresent(document);
                doSomethingWithTheSuccess();
            } catch (ProcessingException e) {
                doSomethingWithTheExceptionalCase(e.getMessage());
            }
        }

        public boolean fileIsPresent(Document document) throws ProcessingException {
            ...
            throw new ProcessingException("File is not present");
        }

        public boolean isSomethingInTheDocumentPresent(Document document) throws ProcessingException {
            ...
            throw new ProcessingException("Something is not in the document");
        }

    Which is more readable? Which is best? Is there an even better approach (maybe using a design pattern of some sort)? As far as readability goes, my preference is currently the exception variant. What is yours?
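
    On the design-pattern question: one common alternative is a chain of small validators, run in order, with processing stopping at the first failure. That scales past ten checks without either nesting or exception plumbing. A sketch of the idea in Python (the rule names and the two print handlers are illustrative):

        # Each rule returns an error message, or None when the check passes.
        def file_is_present(doc):
            return None if doc.get("file") else "File is not present"

        def has_required_content(doc):
            return None if doc.get("content") else "Something is not in the document"

        RULES = [file_is_present, has_required_content]  # add the other checks here

        def first_error(doc):
            for rule in RULES:
                msg = rule(doc)
                if msg is not None:
                    return msg
            return None

        def process(documents):
            for doc in documents:
                error = first_error(doc)
                if error:
                    print("FAIL:", error)
                else:
                    print("OK:", doc)

        process([{"file": "a.pdf", "content": "x"}, {"file": None}])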

  • C++ : Avoiding lots of boolean variables for multiple verification conditions in a trading app

    - by Naveen
    Hi, I am a junior dev on a trading app... We have an order-refresh verification unit. It has to verify order confirmations from the exchange. We send a bunch of different requests in bulk (NEW, MODIFY, CANCEL) to the exchange. Verification has to happen at most N times, at intervals of T, for all orders. If verification succeeds for all the orders before N retries, fine; otherwise we need to flag verification as unsuccessful. I have done some basic coding, in a hurry, like below:

        for( N times )
        {
            for_each( sent_request_order )   // SENT
            {
                1) get all the refreshed orders from DB or shared mem, i.e. REFRESHED
                2) find the current sent order in REFRESHED

                if( not_found )   // not refreshed from exchange, continue to next order
                if( found )
                    case NEW    : // check for new status, mark verification done
                    case MODIFY : // check for modified status..
                                  // if not, mark pending, go to next order,
                                  // revisit the same after T time
                    case CANCEL : // check for cancelled status..
                                  // if not, mark pending, go to next order,
                                  // revisit the same after T time
            }
            if( all_verified ) exit from verification
            wait( T sec )
        }

    order_verification_pending, order_verification_done, order_visited, order_not_visited, all_verified, all_not_verified... lots of boolean flags used for indication. Is there any better approach for doing this, perhaps splitting responsibilities across classes? I know this is not a general question, but all these flags are getting tedious to handle...
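
    One way out of flag soup is to give each order a single explicit verification state instead of several parallel booleans. A sketch of the idea in Python (the states and the shape of fetch_refreshed are assumptions based on the description above):

        from enum import Enum

        class VerifyState(Enum):
            PENDING = 1    # not yet confirmed by the exchange
            VERIFIED = 2   # confirmation seen with the expected status
            FAILED = 3     # N retries exhausted

        class Order:
            def __init__(self, oid, kind):        # kind: NEW / MODIFY / CANCEL
                self.oid, self.kind = oid, kind
                self.state = VerifyState.PENDING

        def verify_all(orders, fetch_refreshed, n_retries, wait):
            for _ in range(n_retries):
                refreshed = fetch_refreshed()      # assumed: {oid: confirmed kind}
                for o in orders:
                    if o.state is VerifyState.PENDING and refreshed.get(o.oid) == o.kind:
                        o.state = VerifyState.VERIFIED
                if all(o.state is VerifyState.VERIFIED for o in orders):
                    return True
                wait()
            for o in orders:
                if o.state is VerifyState.PENDING:
                    o.state = VerifyState.FAILED
            return False

    All the bookkeeping (pending/done/visited/all_verified) collapses into one state field per order plus the return value.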

  • Python and Unicode: How everything should be Unicode

    - by A A
    Forgive me if this is a long question: I have been programming in Python for around six months. Self-taught, starting with the Python tutorial, then SO, and then just using Google for stuff. Here is the sad part: no one told me all strings should be Unicode. No, I am not lying or making this up, but where does the tutorial mention it? And most examples I see just make use of byte strings instead of Unicode strings. I was just browsing and came across this question on SO, which says how every string in Python should be Unicode. This pretty much made me cry!

    I read that every string in Python 3.0 is Unicode by default, so my questions are for 2.x:

    1) Should I do print u'Some text' or just print 'Text'?

    2) Everything should be Unicode: does this mean that if, say, I have a tuple t = ('First', 'Second'), it should be t = (u'First', u'Second')?

    3) I read that I can do from __future__ import unicode_literals and then every string will be a Unicode string, but should I do this inside a container also?

    4) When reading/writing to a file, should I use the codecs module? Or should I just use the standard way of reading/writing and encode or decode where required?

    5) If I get a string from, say, raw_input(), should I convert that to Unicode also?

    What is the common approach to handling all of the above issues in 2.x? The from __future__ import unicode_literals statement?

    Sorry for being such a noob, but this changes what I have been doing for a long time, so clearly I am confused.
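
    By way of illustration, here is how those pieces typically fit together in 2.x (a sketch; the file name and the UTF-8 terminal encoding are assumptions):

        # -*- coding: utf-8 -*-
        from __future__ import unicode_literals   # every literal below is unicode

        import codecs

        name = raw_input("Name: ").decode("utf-8")  # bytes from stdin -> unicode
        greeting = "Hällo, " + name                 # unicode thanks to the import

        # codecs handles encoding/decoding at the file boundary:
        with codecs.open("greeting.txt", "w", encoding="utf-8") as f:
            f.write(greeting)

        with codecs.open("greeting.txt", "r", encoding="utf-8") as f:
            print f.read()

    The rule of thumb this illustrates: decode bytes to unicode at the boundaries (input, files, network), work in unicode everywhere in between, and encode again only on the way out.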

  • nhibernate fluent repository pattern insert problem

    - by voam
    I am trying to use Fluent NHibernate and the repository pattern. I would like my business layer not to know about the data-persistence layer. Ideally, I would pass an initialized domain object to the insert method of the repository and all would be well. Where I run into problems is when the object being passed in has a child object. For example, say I want to insert a new order for a customer, and the customer is a property of the order object. I would like to do something like this:

        Customer c = new Customer();
        c.CustomerId = 1;
        Order o = new Order();
        o.Customer = c;
        repository.InsertOrder(o);

    The problem is that with NHibernate the CustomerId field is only privately settable, so I cannot set it directly like this. What I ended up doing is having my repository expose an interface of Order InsertOrder(int customerId), where all the foreign keys get passed in as parameters. Somehow this just doesn't seem right. The other approach was to use the NHibernate session variable to load a Customer object in my business model and then pass the order to the repository, but this defeats my persistence-ignorance ideal. Should I throw persistence ignorance out the window, or am I missing something here? Thanks

  • "Session is Closed!" - NHibernate

    - by Alexis Abril
    This is in a web application environment: an initial request is able to complete successfully, however any additional requests return a "Session is Closed" response from the NHibernate framework. I'm using an HttpModule approach with the following code:

        public class MyHttpModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                context.EndRequest += ApplicationEndRequest;
                context.BeginRequest += ApplicationBeginRequest;
            }

            public void ApplicationBeginRequest(object sender, EventArgs e)
            {
                CurrentSessionContext.Bind(SessionFactory.Instance.OpenSession());
            }

            public void ApplicationEndRequest(object sender, EventArgs e)
            {
                ISession currentSession = CurrentSessionContext.Unbind(
                    SessionFactory.Instance);
                currentSession.Dispose();
            }

            public void Dispose() { }
        }

    SessionFactory.Instance is my singleton implementation, using FluentNHibernate to return an ISessionFactory object. In my repository class, I attempt to use the following syntax:

        public class MyObjectRepository : IMyObjectRepository
        {
            public MyObject GetByID(int id)
            {
                using (ISession session = SessionFactory.Instance.GetCurrentSession())
                    return session.Get<MyObject>(id);
            }
        }

    This allows code in the application to be called as such:

        IMyObjectRepository repo = new MyObjectRepository();
        MyObject obj = repo.GetByID(1);

    I have a suspicion my repository code is to blame, but I'm not 100% sure of the actual implementation I should be using. I found a similar issue on SO here. I too am using WebSessionContext in my implementation; however, no solution was provided other than writing a custom SessionManager. For simple CRUD operations, is a custom session provider required apart from the built-in tools (i.e. WebSessionContext)?

  • SQLAlchemy - relationship limited on more than just the foreign key

    - by Marian
    I have a wiki DB layout with Page and Revisions. Each Revision has a page_id referencing the Page and a page relationship to the referenced page; each Page has an all_revisions relationship to all its revisions. So far so common. But I want to implement different epochs for the pages: if a page was deleted and is recreated, the new revisions get a new epoch. To help find the correct revisions, each page has a current_epoch field.

    Now I want to provide a revisions relation on the page that contains only its revisions, and only those where the epochs match. This is what I've tried:

        revisions = relationship('Revision',
            primaryjoin = and_(
                'Page.id == Revision.page_id',
                'Page.current_epoch == Revision.epoch',
            ),
            foreign_keys = ['Page.id', 'Page.current_epoch']
        )

    Full code (you may run that as it is). However, this always raises ArgumentError: Could not determine relationship direction for primaryjoin condition... I've tried everything that came to mind, and it didn't work. What am I doing wrong? Is this a bad approach for doing this, and how could it be done other than with a relationship?
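
    For reference, that ArgumentError usually means foreign_keys points at the wrong side: the "foreign" columns in this join live on Revision, not Page. A hedged sketch of how the mapping would typically be declared (viewonly because (page_id, epoch) is not a real database FK pair; the names follow the description above):

        from sqlalchemy import Column, ForeignKey, Integer
        from sqlalchemy.ext.declarative import declarative_base
        from sqlalchemy.orm import relationship

        Base = declarative_base()

        class Page(Base):
            __tablename__ = 'pages'
            id = Column(Integer, primary_key=True)
            current_epoch = Column(Integer, nullable=False, default=1)

            all_revisions = relationship('Revision', foreign_keys='Revision.page_id')
            # Only the revisions of the page's current epoch:
            revisions = relationship(
                'Revision',
                primaryjoin='and_(Page.id == Revision.page_id, '
                            'Page.current_epoch == Revision.epoch)',
                foreign_keys='[Revision.page_id, Revision.epoch]',
                viewonly=True,   # (page_id, epoch) is not a real FK pair
            )

        class Revision(Base):
            __tablename__ = 'revisions'
            id = Column(Integer, primary_key=True)
            page_id = Column(Integer, ForeignKey('pages.id'), nullable=False)
            epoch = Column(Integer, nullable=False)
            page = relationship('Page', foreign_keys=[page_id])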

  • Filling a byte array in Java

    - by Corleone
    Hey all! For part of a project I'm working on, I am implementing an RTPpacket where I have to fill the header array of bytes with the RTP header fields.

        // size of the RTP header:
        static int HEADER_SIZE = 12; // bytes

        // fields that compose the RTP header
        public int Version;        // 2 bits
        public int Padding;        // 1 bit
        public int Extension;      // 1 bit
        public int CC;             // 4 bits
        public int Marker;         // 1 bit
        public int PayloadType;    // 7 bits
        public int SequenceNumber; // 16 bits
        public int TimeStamp;      // 32 bits
        public int Ssrc;           // 32 bits

        // bitstream of the RTP header
        public byte[] header = new byte[HEADER_SIZE];

    This was my approach:

        /*
         * bits 0-1: Version
         * bit 2: Padding
         * bit 3: Extension
         * bits 4-7: CC
         */
        header[0] = new Integer((Version << 6) | (Padding << 5) | (Extension << 4) | CC).byteValue();

        /*
         * bit 0: Marker
         * bits 1-7: PayloadType
         */
        header[1] = new Integer((Marker << 7) | PayloadType).byteValue();

        /* SequenceNumber takes 2 bytes = 16 bits */
        header[2] = new Integer(SequenceNumber >> 8).byteValue();
        header[3] = new Integer(SequenceNumber).byteValue();

        /* TimeStamp takes 4 bytes = 32 bits */
        for (int i = 0; i < 4; i++)
            header[7 - i] = new Integer(TimeStamp >> (8 * i)).byteValue();

        /* Ssrc takes 4 bytes = 32 bits */
        for (int i = 0; i < 4; i++)
            header[11 - i] = new Integer(Ssrc >> (8 * i)).byteValue();

    Any other, maybe 'better' ways to do this?
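
    As a cross-check of the bit layout, the same 12-byte header can be packed in Python with the struct module (the field values are made up); the ! format gives network byte order, matching the manual shifting above:

        import struct

        version, padding, extension, cc = 2, 0, 0, 0
        marker, payload_type = 0, 26
        seq, timestamp, ssrc = 7, 123456, 0xDEADBEEF

        b0 = (version << 6) | (padding << 5) | (extension << 4) | cc
        b1 = (marker << 7) | payload_type
        header = struct.pack('!BBHII', b0, b1, seq, timestamp, ssrc)
        assert len(header) == 12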
