Search Results

Search found 1484 results on 60 pages for 'practical'.


  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming). Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement -- it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we cannot (easily) prove otherwise. So this leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?". It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window during which the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread may update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human or updating the entry based on its current state. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). In the latter case, the application must assume that the value might potentially be updated before it has a chance to update it. This is almost always the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry -- a pessimistic approach. The optimistic approach relies on what is sometimes called a "fuzzy read". In other words, the application assumes that the read should be correct, but it also acknowledges that it might not be.
    (I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction -- however, both definitions are roughly equivalent from an application design perspective.) If the read is not correct it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting one is that it will cause the encompassing transaction to roll back if it tries to update that value. The pessimistic approach relies on a "coherent read", a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system). In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads -- they can be chained and potentially cause cascading rollbacks). The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to state those requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if they are critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
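
    To make the optimistic pattern described above concrete, here is a minimal sketch of the read-compare-update loop. It is written in C++ as a language-neutral illustration, not as Coherence's actual Java API; the Entry, Cache and tryUpdate names are hypothetical. The point is simply that a stale read shows up as a failed update attempt, which the application retries:

        #include <mutex>
        #include <string>

        // Hypothetical versioned entry; real caches track versions internally.
        struct Entry {
            long version = 0;
            std::string value;
        };

        class Cache {
            std::mutex m_;
            Entry e_;
        public:
            // "Fuzzy" read: returns a snapshot that may already be stale.
            Entry read() { std::lock_guard<std::mutex> g(m_); return e_; }

            // Optimistic update: state the assumption (expectedVersion) explicitly.
            bool tryUpdate(long expectedVersion, const std::string& newValue) {
                std::lock_guard<std::mutex> g(m_);
                if (e_.version != expectedVersion) return false;  // stale read detected
                e_.value = newValue;
                ++e_.version;
                return true;
            }
        };

        void updateWithRetry(Cache& cache) {
            for (;;) {
                Entry snapshot = cache.read();            // optimistic read
                std::string next = snapshot.value + "!";  // compute from the old value
                if (cache.tryUpdate(snapshot.version, next)) break;
                // else: the read turned out to be stale; loop and retry
            }
        }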

    Read the article

  • The design of a generic data synchronizer, or, an [object] that does [actions] with the aid of [helpers]

    - by acheong87
    I'd like to create a generic data-source "synchronizer," where data-source "types" may include MySQL databases, Google Spreadsheets documents, CSV files, among others. I've been trying to figure out how to structure this in terms of classes and interfaces, keeping in mind (what I've read about) composition vs. inheritance and is-a vs. has-a, but each route I go down seems to violate some principle. For simplicity, assume that all data-sources have a header-row-plus-data-rows format. For example, assume that the first rows of Google Spreadsheets documents and CSV files will have column headers, a.k.a. "fields" (to parallel database fields). Also, eventually, I would like to implement this in PHP, but avoiding language-specific discussion would probably be more productive. Here's an overview of what I've tried. Part 1/4: ISyncable class CMySQL implements ISyncable GetFields() // sql query, pdo statement, whatever AddFields() RemFields() ... _dbh class CGoogleSpreadsheets implements ISyncable GetFields() // zend gdata api AddFields() RemFields() ... _spreadsheetKey _worksheetId class CCsvFile implements ISyncable GetFields() // read from buffer AddFields() RemFields() ... _buffer interface ISyncable GetFields() AddFields($field1, $field2, ...) RemFields($field1, $field2, ...) ... CanAddFields() // maybe the spreadsheet is locked for write, or CanRemFields() // maybe no permission to alter a database table ... AddRow() ModRow() RemRow() ... Open() Close() ... First Question: Does it make sense to use an interface, as above? Part 2/4: CSyncer Next, the thing that does the syncing. class CSyncer __construct(ISyncable $A, ISyncable $B) Push() // sync A to B Pull() // sync B to A Sync() // Push() and Pull() only differ in direction; factor. // Sync()'s job is to make sure that the fields on each side // match, to add fields where appropriate and possible, to // account for different column-orderings, etc., and of // course, to add and remove rows as necessary to sync. ... _A _B Second Question: Does it make sense to define such a class, or am I treading dangerously close to the "Kingdom of Nouns"? Part 3/4: CTranslator? ITranslator? Now, here's where I actually get lost, assuming the above is passable. Sometimes, two ISyncables speak different "dialects." For example, believe it or not, Google Spreadsheets (accessed through the Google Data API "list feed") returns column headers lower-cased and stripped of all spaces and symbols! That is, sys_TIMESTAMP is systimestamp, as far as my code can tell. (Yes, I am aware that the "cell feed" does not strip the name so; however cell-by-cell manipulation is too slow for what I'm doing.) One can imagine other hypothetical examples. Perhaps even the data itself can be in different "dialects." But let's take it as given for now, and not argue this if possible. Third Question: How would you implement "translation"? Note: Taking all this as an exercise, I'm more interested in the "idealized" design, rather than the practical one. (God knows that shipped sailed when I began this project.) Part 4/4: Further Thought Here's my train of thought to demonstrate I've thunk, albeit unfruitfully: First, I thought, primitively, "I'll just modify CMySQL::GetFields() to lower-case and strip field names so they're compatible with Google Spreadsheets." But of course, then my class should really be called, CMySQLForGoogleSpreadsheets, and that can't be right. So, the thing which translates must exist outside of an ISyncable implementor. 
And surely it can't be right to make each translation a method in CSyncer. If it exists outside of both ISyncable and CSyncer, then what is it? (Is it even an "it"?) Is it an abstract class, i.e. abstract CTranslator? Is it an interface, since a translator only does, not has, i.e. interface ITranslator? Does it even require instantiation? e.g. If it's an ITranslator, then should its translation methods be static? (I learned what "late static binding" meant, today.) And, dear God, whatever it is, how should a CSyncer use it? Does it "have" it? Is it, "it"? Who am I? ...am I, "I"? I've attempted to break up the question into sub-questions, but essentially my question is singular: How does one implement an object A that conceptually "links" (has) two objects b1 and b2 that share a common interface B, where certain pairs of b1 and b2 require a helper, e.g. a translator, to be handled by A? Something tells me that I've overcomplicated this design, or violated a principle much higher up. Thank you all very much for your time and any advice you can provide.

    Read the article

  • What are some concise and comprehensive introductory guides to unit testing for a self-taught programmer [closed]

    - by Superbest
    I don't have much formal training in programming and I have learned most things by looking up solutions on the internet to practical problems I have. There are some areas which I think would be valuable to learn, but which ended up both being difficult to learn and easy to avoid learning for a self-taught programmer. Unit testing is one of them. Specifically, I am interested in tests in and for C#/.NET applications using Microsoft.VisualStudio.TestTools in Visual Studio 2010 and/or 2012, but I really want a good introduction to the principles, so language and IDE shouldn't matter much. At this time I'm interested in relatively trivial tests for small or medium sized programs (development time of weeks or months and mostly just myself developing). I don't necessarily intend to do test-driven development (I am aware that some say unit testing alone is supposed to be for developing features in TDD, and not an assurance that there are no bugs in the software, but unit testing is often the only kind of testing for which I have resources). I have found this tutorial which I feel gave me a decent idea of what unit tests and TDD look like, but in trying to apply these ideas to my own projects, I often get confused by questions I can't answer and don't know how to answer, such as: What parts of my application and what sorts of things aren't necessarily worth testing? How fine grained should my tests be? Should they test every method and property separately, or work with a larger scope? What is a good naming convention for test methods? (since apparently the name of the method is the only way I will be able to tell from a glance at the test results table what works in my program and what doesn't) Is it bad to have many asserts in one test method? Since apparently VS2012 reports only that "an Assert.IsTrue failed within method MyTestMethod", and if MyTestMethod has 10 Assert.IsTrue statements, it will be irritating to figure out why a test is failing. If a lot of the functionality deals with writing and reading data to/from the disk in a not-exactly trivial fashion, how do I test that? If I provide a bunch of files as input by placing them in the program's directory, do I have to copy those files to the test project's bin/Debug folder now? If my program works with a large body of data and execution takes minutes or more, should my tests use all of the real data, a subset of it, or simulated data? If the latter, how do I decide on the subset or how to simulate? Closely related to the previous point, if a class is such that its main operation happens in a state that is arrived at by the program after some involved operations (say, a class makes calculations on data derived from a few thousand lines of code analyzing some raw data), how do I test just that class without inevitably ending up testing that class and all the other code that brings it to that state along with it? In general, what kind of approach should I use for test initialization? (hopefully that is the correct term, I mean preparing classes for testing by filling them in with appropriate data) How do I deal with private members? Do I just suck it up and assume that "not public = shouldn't be tested"? I have seen people suggest using private accessors and reflection, but these feel clumsy and unsuited for regular use. Are these even good ideas? Is there anything like design patterns concerning testing specifically?
I guess the main themes in what I'd like to learn more about are, (1) what are the overarching principles that should be followed (or at least considered) in every testing effort and (2) what are popular rules of thumb for writing tests. For example, at one point I recall hearing from someone that if a method is longer than 200 lines, it should be refactored - not a universally correct rule, but it has been quite helpful since I'd otherwise happily put hundreds of lines in single methods and then wonder why my code is so hard to read. Similarly I've found ReSharpers suggestions on member naming style and other things to be quite helpful in keeping my codebases sane. I see many resources both online and in print that talk about testing in the context of large applications (years of work, 10s of people or more). However, because I've never worked on such large projects, this context is very unfamiliar to me and makes the material difficult to follow and relate to my real world problems. Speaking of software development in general, advice given with the assumptions of large projects isn't always straightforward to apply to my own, smaller endeavors. Summary So my question is: What are some resources to learn about unit testing, for a hobbyist, self-taught programmer without much formal training? Ideally, I'm looking for a short and simple "bible of unit testing" which I can commit to memory, and then apply systematically by repeatedly asking myself "is this test following the bible of testing closely enough?" and then amending discrepancies if it doesn't.

    Read the article

  • Long-running ASP.NET tasks

    - by John Leidegren
    I know there's a bunch of APIs out there that do this, but I also know that the hosting environment (being ASP.NET) puts restrictions on what you can reliably do in a separate thread. I could be completely wrong, so please correct me if I am; this is however what I think I know. A request typically times out after 120 seconds (this is configurable), but eventually the ASP.NET runtime will kill a request that's taking too long to complete. The hosting environment, typically IIS, employs process recycling and can at any point decide to recycle your app. When this happens all threads are aborted and the app restarts. I'm however not sure how aggressive it is; it would be kind of stupid to assume that it would abort a normal ongoing HTTP request, but I would expect it to abort a thread because it doesn't know anything about the unit of work of a thread. If you had to create a programming model that easily and reliably ran a long-running task (one that would have to run for days), how would you accomplish this from within an ASP.NET application? The following are my thoughts on the issue: I've been thinking along the lines of hosting a WCF service in a Win32 service and talking to the service through WCF. This is however not very practical, because the only reason I would choose to do so is to send tasks (units of work) from several different web apps. I'd then eventually ask the service for status updates and act accordingly. My biggest concern with this is that it would NOT be a particularly great experience if I had to deploy every task to the service for it to be able to execute some instructions. There's also the issue of input: how would I feed this service with data if I had a large data set and needed to chew through it? What I typically do right now is this: SELECT TOP 10 * FROM WorkItem WITH (ROWLOCK, UPDLOCK, READPAST) WHERE WorkCompleted IS NULL. It allows me to use a SQL Server database as a work queue and periodically poll the database with this query for work. If the work item completed with success, I mark it as done and proceed until there's nothing more to do. What I don't like is that I could theoretically be interrupted at any point, and if I'm in-between success and marking it as done, I could end up processing the same work item twice. I might be a bit paranoid and this might be all fine, but as I understand it there's no guarantee that that won't happen... I know there have been similar questions on SO before, but none really ends with a definitive answer. This is a really common thing, yet the ASP.NET hosting environment is ill-equipped to handle long-running work. Please share your thoughts.

    Read the article

  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi real-time basis (some-one is bound to ask what semi real-time means and the answer is as frequently as I reasonably can but I will be pragmatic, as a benchmark lets say we are hoping for every 15min) and feed it into a data-warehouse. How much data? At peak times we are talking approx 80-100k rows per min hitting the OLTP side, off-peak this will drop significantly to 15-20k. The most frequently updated rows are ~64 bytes each but there are various tables etc so the data is quite diverse and can range up to 4000 bytes per row. The OLTP is active 24x5.5. Best Solution? From what I can piece together the most practical solution is as follows: Create a TRIGGER to write all DML activity to a rotating CSV log file Perform whatever transformations are required Use the native DW data pump tool to efficiently pump the transformed CSV into the DW Why this approach? TRIGGERS allow selective tables to be targeted rather than being system wide + output is configurable (i.e. into a CSV) and are relatively easy to write and deploy. SLONY uses similar approach and overhead is acceptable CSV easy and fast to transform Easy to pump CSV into the DW Alternatives considered .... Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). Problem with this is it looked very verbose relative to what I needed and was a little trickier to parse and transform. However it could be faster as I presume there is less overhead compared to a TRIGGER. Certainly it would make the admin easier as it is system wide but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log) Querying the data directly via an ETL tool such as Talend and pumping it into the DW ... problem is the OLTP schema would need tweaked to support this and that has many negative side-effects Using a tweaked/hacked SLONY - SLONY does a good job of logging and migrating changes to a slave so the conceptual framework is there but the proposed solution just seems easier and cleaner Using the WAL Has anyone done this before? Want to share your thoughts?

    Read the article

  • How to GET a read-only vs editable resource in REST style?

    - by Val
    I'm fairly familiar with REST principles, and have read the relevant dissertation, Wikipedia entry, a bunch of blog posts and StackOverflow questions on the subject, but still haven't found a straightforward answer to a common case: I need to request a resource to display. Depending on the resource's state, I need to render either a read-only or an editable representation. In both cases, I need to GET the resource. How do I construct a URL to get the read-only or editable version? If my user follows a link to GET /resource/<id>, that should suffice to indicate to me that s/he needs the read-only representation. But if I need to server up an editable form, what does that URL look like? GET /resource/<id>/edit is obvious, but it contains a verb in the URL. Changing that to GET /resource/<id>/editable solves that problem, but at a seemingly superficial level. Is that all there is to it -- change verbs to adjectives? If instead I use POST to retrieve the editable version, then how do I distinguish between the POST that initially retrieves it, vs the POST that saves it? My (weak) excuse for using POST would be that retrieving an editable version would cause a change of state on the server: locking the resource. But that only holds if my requirements are to implement such a lock, which is not always the case. PUT fails for the same reason, plus PUT is not enabled by default on the Web servers I'm running, so there are practical reasons not to use it (and DELETE). Note that even in the editable state, I haven't made any changes yet; presumably when I submit the resource to the Web server again, I'd POST it. But to get something that I can later POST, the server has to first serve up a particular representation. I guess another approach would be to have separate resources at the collection level: GET /read-only/resource/<id> and GET /editable/resource/<id> or GET /resource/read-only/<id> and GET /resource/editable/<id> ... but that looks pretty ugly to me. Thoughts?

    Read the article

  • Using regex to fix phone numbers in a CSV with PHP

    - by Hurpe
    My new phone does not recognize a phone number unless its area code matches the incoming call. Since I live in Idaho where an area code is not needed for in-state calls, many of my contacts were saved without an area code. Since I have thousands of contacts stored in my phone, it would not be practical to manually update them. I decided to write the following PHP script to handle the problem. It seems to work well, except that I'm finding duplicate area codes at the beginning of random contacts.

        <?php
        //the script can take a while to complete
        set_time_limit(200);

        function validate_area_code($number) {
            //digits are taken one by one out of $number and inserted into $numString
            $numString = "";
            for ($i = 0; $i < strlen($number); $i++) {
                $curr = substr($number,$i,1);
                //only copy from $number to $numString when the character is numeric
                if (is_numeric($curr)) {
                    $numString = $numString . $curr;
                }
            }
            //add area code "208" to the beginning of any phone number of length 7
            if (strlen($numString) == 7) {
                return "208" . $numString;
            //remove country code (none of the contacts are outside the U.S.)
            } else if (strlen($numString) == 11) {
                return preg_replace("/^1/","",$numString);
            } else {
                return $numString;
            }
        }

        //matches any phone number in the csv
        $pattern = "/((1? ?\(?[2-9]\d\d\)? *)? ?\d\d\d-?\d\d\d\d)/";
        $csv = file_get_contents("contacts2.CSV");
        preg_match_all($pattern,$csv,$matches);

        foreach ($matches[0] as $key1 => $value) {
            /*create a pattern that matches the specific phone number by adding
              slashes before possible special characters*/
            $pattern = preg_replace("/\(|\)|\-/","\\\\$0",$value);
            //create the replacement phone number
            $replacement = validate_area_code($value);
            //add delimeters
            $pattern = "/" . $pattern . "/";
            $csv = preg_replace($pattern,$replacement,$csv);
        }
        echo $csv;
        ?>

    Is there a better approach to modifying the csv? Also, is there a way to minimize the number of passes over the csv? In the script above, preg_replace is called thousands of times on a very large string.

    Read the article

  • .NET XML Serialization without <?xml> root node

    - by Graphain
    Hi, I'm trying to generate XML like this:

        <?xml version="1.0"?>
        <!DOCTYPE APIRequest SYSTEM "https://url">
        <APIRequest>
          <Head>
            <Key>123</Key>
          </Head>
          <ObjectClass>
            <Field>Value</Field>
          </ObjectClass>
        </APIRequest>

    I have a class (ObjectClass) decorated with XML serialization attributes like this:

        [XmlRoot("ObjectClass")]
        public class ObjectClass
        {
            [XmlElement("Field")]
            public string Field { get; set; }
        }

    And my really hacky intuitive thought to just get this working is to do this when I serialize:

        ObjectClass inst = new ObjectClass();
        XmlSerializer serializer = new XmlSerializer(inst.GetType(), "");
        StringWriter w = new StringWriter();
        w.WriteLine(@"<?xml version=""1.0""?>");
        w.WriteLine("<!DOCTYPE APIRequest SYSTEM");
        w.WriteLine(@"""https://url"">");
        w.WriteLine("<APIRequest>");
        w.WriteLine("<Head>");
        w.WriteLine(@"<Field>Value</Field>");
        w.WriteLine(@"</Head>");
        XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
        ns.Add("", "");
        serializer.Serialize(w, inst, ns);
        w.WriteLine("</APIRequest>");

    However, this generates XML like this:

        <?xml version="1.0"?>
        <!DOCTYPE APIRequest SYSTEM "https://url">
        <APIRequest>
          <Head>
            <Key>123</Key>
          </Head>
          <?xml version="1.0" encoding="utf-16"?>
          <ObjectClass>
            <Field>Value</Field>
          </ObjectClass>
        </APIRequest>

    i.e. the Serialize call is automatically adding an <?xml ?> declaration. I know I'm attacking this wrong, so can someone point me in the right direction? As a note, I don't think it will make practical sense to just make an APIRequest class with an ObjectClass in it (because there are, say, 20 different types of ObjectClass that each need this boilerplate around them), but correct me if I'm wrong.

    Read the article

  • Fixed point math in c#?

    - by x4000
    Hi there, I was wondering if anyone here knows of any good resources for fixed point math in c#? I've seen things like this (http://2ddev.72dpiarmy.com/viewtopic.php?id=156) and this (http://stackoverflow.com/questions/79677/whats-the-best-way-to-do-fixed-point-math), and a number of discussions about whether decimal is really fixed point or actually floating point (update: responders have confirmed that it's definitely floating point), but I haven't seen a solid C# library for things like calculating cosine and sine. My needs are simple -- I need the basic operators, plus cosine, sine, arctan2, PI... I think that's about it. Maybe sqrt. I'm programming a 2D RTS game, which I have largely working, but the unit movement when using floating-point math (doubles) has very small inaccuracies over time (10-30 minutes) across multiple machines, leading to desyncs. This is presently only between a 32 bit OS and a 64 bit OS, all the 32 bit machines seem to stay in sync without issue, which is what makes me think this is a floating point issue. I was aware from this as a possible issue from the outset, and so have limited my use of non-integer position math as much as possible, but for smooth diagonal movement at varying speeds I'm calculating the angle between points in radians, then getting the x and y components of movement with sin and cos. That's the main issue. I'm also doing some calculations for line segment intersections, line-circle intersections, circle-rect intersections, etc, that also probably need to move from floating-point to fixed-point to avoid cross-machine issues. If there's something open source in Java or VB or another comparable language, I could probably convert the code for my uses. The main priority for me is accuracy, although I'd like as little speed loss over present performance as possible. This whole fixed point math thing is very new to me, and I'm surprised by how little practical information on it there is on google -- most stuff seems to be either theory or dense C++ header files. Anything you could do to point me in the right direction is much appreciated; if I can get this working, I plan to open-source the math functions I put together so that there will be a resource for other C# programmers out there. UPDATE: I could definitely make a cosine/sine lookup table work for my purposes, but I don't think that would work for arctan2, since I'd need to generate a table with about 64,000x64,000 entries (yikes). If you know any programmatic explanations of efficient ways to calculate things like arctan2, that would be awesome. My math background is all right, but the advanced formulas and traditional math notation are very difficult for me to translate into code.
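
    As a rough illustration of the kind of library the poster is looking for, here is a minimal C++ sketch of a Q16.16 fixed-point type with a table-driven sine (the question is about C#, so treat this as the idea only, not a drop-in answer; the table size and rounding choices are arbitrary). For arctan2, the usual trick is octant reduction followed by a one-dimensional polynomial or table for atan(t) with t in [0, 1], so a 64,000 x 64,000 table is not needed.

        #include <cstdint>
        #include <cmath>

        // A minimal Q16.16 fixed-point sketch (illustrative only, not a full library).
        struct Fixed {
            int32_t raw;                                         // stored as value * 65536
            static Fixed fromInt(int v)       { return Fixed{v * 65536}; }
            static Fixed fromDouble(double v) { return Fixed{(int32_t)std::lround(v * 65536.0)}; }
            double toDouble() const           { return raw / 65536.0; }
        };

        inline Fixed operator+(Fixed a, Fixed b) { return Fixed{a.raw + b.raw}; }
        inline Fixed operator-(Fixed a, Fixed b) { return Fixed{a.raw - b.raw}; }
        inline Fixed operator*(Fixed a, Fixed b) {
            return Fixed{(int32_t)(((int64_t)a.raw * b.raw) >> 16)};  // widen to 64 bits first
        }
        inline Fixed operator/(Fixed a, Fixed b) {
            return Fixed{(int32_t)(((int64_t)a.raw << 16) / b.raw)};
        }

        // Table-driven sine over one full period. For cross-machine determinism the
        // table should really be baked in as integer constants, not built with std::sin.
        constexpr int kSinEntries = 1024;
        constexpr int32_t kTwoPiFixed = 411775;                  // ~2*pi in Q16.16
        static int32_t g_sinTable[kSinEntries];

        void initSinTable() {
            for (int i = 0; i < kSinEntries; ++i)
                g_sinTable[i] = (int32_t)std::lround(
                    std::sin(6.283185307179586 * i / kSinEntries) * 65536.0);
        }

        Fixed fixedSin(Fixed angle) {                            // angle in radians, Q16.16
            // Nearest-entry lookup; interpolating between entries improves accuracy.
            int64_t idx = ((int64_t)angle.raw * kSinEntries / kTwoPiFixed) % kSinEntries;
            if (idx < 0) idx += kSinEntries;
            return Fixed{g_sinTable[idx]};
        }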

    Read the article

  • How to setup a webserver in common lisp?

    - by Serpico
    Several months ago, I was inspired by the magnificent book ANSI Common Lisp written by Paul Graham, and by the statement, published by the same author on his blog, that Lisp could be used as a secret weapon in your web development. Wow, that is amazing. That is something I have been looking for for a long time. The author really did develop a successful web application and sell it to Yahoo. With those encouraging images, I determined to spend some time (1 year or 2 years, who knows) on learning Common Lisp. Maybe someday I will develop my own web application and turn into a great Lisp expert. In fact, this is the second time I have set out to study Lisp. The first time was a couple of years ago, when I was fascinated by the famous book SICP but later found Scheme to be unbelievably immature for real-life applications. After reading some chapters of ANSI Common Lisp, I was pretty sure it is a great book, full of detailed exploration of Common Lisp. Then I began to set up a web server in Common Lisp. After all, this should be the best way to learn something; demonstrations are always better than definitions. As suggested by the book Practical Common Lisp (by the way, this is also a great book), I chose to install AllegroServe on some Common Lisp implementation. Then, from somewhere else, I learned that Hunchentoot seems to be better than AllegroServe. (I don't remember where or from whom this word came, so don't argue with me.) Ironically, you know what, I never could install the two packages on any Common Lisp implementation. More annoyingly, I don't even know why. The machine always spat out a lot of jargon and led me into chaos. I've tried searching the internet and have not found anything. Could anybody who has successfully installed these packages on Linux tell me how you did it? Have you run into any trouble? How did you figure out what was wrong and fix it? The more detailed, the more helpful.

    Read the article

  • Volunteer for a potential employer?

    - by EoRaptor013
    I've been looking for work since March, and haven't had much luck. Recently, however, I interviewed with a small company near my home for a C#, .NET, SQL development position. I hit it off very well with the hiring manager during the phone screen, and even more so during the face to face. Unfortunately, I failed the practical test: wiring up a web form, creating a couple of SQL stored procedures, saving new data with validation, and creating a minimal search screen. I knew what I was doing, but I was too slow to meet their standards as all the work needed to be done within an hour. Nevertheless, I really liked the place, the environment, the people who I would have been working with, and the boss. (I gave the company an 11 on Joel's 12 point scale.) So, the obvious next step was to scrape the rust off. I've been trying to create little projects for myself, but I don't know that I've been effective in getting any faster. What with all that goes into creating a project, I'm not heads-down coding as much as I think I need. Now, with all that introduction, here's the question. I have been thinking about calling the hiring manager at that place, and asking him to let me volunteer for three or four weeks, with no strings attached. I think it would benefit me, and wouldn't cost him anything (as long as I didn't slow the existing people down!). At the end of that period, he might, or might not, be inclined to hire me, but even if not, I would have had as much as 160 hours of in the trenches development. Maybe not all shiny, but no more rust, I would think. Does this plan make any sense at all? I certainly don't want to sound desperate (although, I'm not far from being there), and I very much need the tuneup, lube, and change the oil. What's the downside, if any, to me doing this? Do any of you see red flags going up—either from the prerspective of the hiring manager, or from the perspective of a developer?

    Read the article

  • A simple Python deployment problem - a whole world of pain

    - by Evgeny
    We have several Python 2.6 applications running on Linux. Some of them are Pylons web applications, others are simply long-running processes that we run from the command line using nohup. We're also using virtualenv, both in development and in production. What is the best way to deploy these applications to a production server? In development we simply get the source tree into any directory, set up a virtualenv and run - easy enough. We could do the same in production and perhaps that really is the most practical solution, but it just feels a bit wrong to run svn update in production. We've also tried fab, but it just never works first time. For every application something else goes wrong. It strikes me that the whole process is just too hard, given that what we're trying to achieve is fundamentally very simple. Here's what we want from a deployment process. We should be able to run one simple command to deploy an updated version of an application. (If the initial deployment involves a bit of extra complexity that's fine.) When we run this command it should copy certain files, either out of a Subversion repository or out of a local working copy, to a specified "environment" on the server, which probably means a different virtualenv. We have both staging and production version of the applications on the same server, so they need to somehow be kept separate. If it installs into site-packages, that's fine too, as long as it works. We have some configuration files on the server that should be preserved (ie. not overwritten or deleted by the deployment process). Some of these applications import modules from other applications, so they need to be able to reference each other as packages somehow. This is the part we've had the most trouble with! I don't care whether it works via relative imports, site-packages or whatever, as long as it works reliably in both development and production. Ideally the deployment process should automatically install external packages that our applications depend on (eg. psycopg2). That's really it! How hard can it be?

    Read the article

  • Avoiding Redundancies in XML documents

    - by MarceloRamires
    I was working with a certain XML document in which there were no redundancies:

        <person>
          <eye>
            <eye_info>
              <eye_color> blue </eye_color>
            </eye_info>
          </eye>
          <hair>
            <hair_info>
              <hair_color> blue </hair_color>
            </hair_info>
          </hair>
        </person>

    As you can see, the sub-tag eye_color makes reference to eye in its name, so there was no need to avoid redundancies, and I could get the eye color in a single line after loading the XML into a dataset:

        dataset.ReadXml(path);
        value = dataset.Tables("eye_info").Rows(0)("eye_color");

    I do realise it's not the smartest way of doing so, and this situation I'm having now wasn't unforeseen. Now, let's say I have to read XMLs that are in this format:

        <person>
          <eye>
            <info>
              <color> blue </color>
            </info>
          </eye>
          <hair>
            <info>
              <color> blue </color>
            </info>
          </hair>
        </person>

    So if I try to call it like this:

        dataset.ReadXml(path);
        value = dataset.Tables("info").Rows(0)("color");

    there will be a redundancy, because with my previous method I could only go one level up to identify a single field in the XML, and the 'disambiguator' is three levels above. Is there a practical way to unambiguously reach a single field given all the above (or at least a few) fields?

    Read the article

  • Uses of a C++ Arithmetic Promotion Header

    - by OlduvaiHand
    I've been playing around with a set of templates for determining the correct promotion type given two primitive types in C++. The idea is that if you define a custom numeric template, you could use these to determine the return type of, say, the operator+ function based on the class passed to the templates. For example: // Custom numeric class template <class T> struct Complex { Complex(T real, T imag) : r(real), i(imag) {} T r, i; // Other implementation stuff }; // Generic arithmetic promotion template template <class T, class U> struct ArithmeticPromotion { typedef typename X type; // I realize this is incorrect, but the point is it would // figure out what X would be via trait testing, etc }; // Specialization of arithmetic promotion template template <> class ArithmeticPromotion<long long, unsigned long> { typedef typename unsigned long long type; } // Arithmetic promotion template actually being used template <class T, class U> Complex<typename ArithmeticPromotion<T, U>::type> operator+ (Complex<T>& lhs, Complex<U>& rhs) { return Complex<typename ArithmeticPromotion<T, U>::type>(lhs.r + rhs.r, lhs.i + rhs.i); } If you use these promotion templates, you can more or less treat your user defined types as if they're primitives with the same promotion rules being applied to them. So, I guess the question I have is would this be something that could be useful? And if so, what sorts of common tasks would you want templated out for ease of use? I'm working on the assumption that just having the promotion templates alone would be insufficient for practical adoption. Incidentally, Boost has something similar in its math/tools/promotion header, but it's really more for getting values ready to be passed to the standard C math functions (that expect either 2 ints or 2 doubles) and bypasses all of the integral types. Is something that simple preferable to having complete control over how your objects are being converted? TL;DR: What sorts of helper templates would you expect to find in an arithmetic promotion header beyond the machinery that does the promotion itself?
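
    For what it's worth, one way the generic ArithmeticPromotion template above could be filled in is to let the compiler compute the promoted type itself; the following is a minimal sketch using std::common_type (C++11 and later, so it may postdate the original question), with decltype(T{} + U{}) as an equivalent alternative:

        #include <type_traits>

        template <class T>
        struct Complex {
            Complex(T real, T imag) : r(real), i(imag) {}
            T r, i;
        };

        // One possible generic definition: let the compiler compute the promoted type.
        template <class T, class U>
        struct ArithmeticPromotion {
            using type = typename std::common_type<T, U>::type;  // or decltype(T{} + U{})
        };

        template <class T, class U>
        Complex<typename ArithmeticPromotion<T, U>::type>
        operator+(const Complex<T>& lhs, const Complex<U>& rhs) {
            using R = typename ArithmeticPromotion<T, U>::type;
            return Complex<R>(lhs.r + rhs.r, lhs.i + rhs.i);
        }

        int main() {
            Complex<int> a(1, 2);
            Complex<double> b(0.5, 0.5);
            auto c = a + b;   // Complex<double>, following the built-in promotion rules
            static_assert(std::is_same<decltype(c), Complex<double>>::value, "promoted");
        }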

    Read the article

  • Fairness: Where can it be better handled?

    - by Srinivas Nayak
    Hi, I would like to share one of my practical experiences with multiprogramming here. Yesterday I wrote a multiprogram. Modifications to sharable resources were put inside critical sections protected by P(mutex) and V(mutex), and that critical-section code was put in a common library. The library will be used by concurrent applications (of my own). I had three applications that use the common code from the library and do their stuff independently.

        my library
        ---------
        work_on_shared_resource {
            P(mutex)
            get_shared_resource
            work_with_it
            V(mutex)
        }
        ---------

        my application
        -----------
        application1 {
            *[ work_on_shared_resource
               do_something_else_non_critical ]
        }
        application2 {
            *[ work_on_shared_resource
               do_something_else_non_critical ]
        }
        application3 {
            *[ work_on_shared_resource ]
        }
        *[...] denotes a loop.
        ------------

    I had to run the applications on Linux. I had long assumed that the OS would schedule all the processes running under it with complete fairness; in other words, that it would give every process an equal share of resource usage. When the first two applications were put to work, they ran perfectly well without deadlock. But when the third application started running, the third one always got the resource: since it does nothing in its non-critical region, it grabs the shared resource more often while the other tasks are doing something else. So the other two applications were almost totally halted. When the third application was forcefully terminated, the previous two applications resumed their work as before. I think this is a case of starvation: the first two applications had to starve. Now how can we ensure fairness? I have started to believe that the OS scheduler is innocent and blind; it depends upon who wins the race, and the winner gets the largest share of CPU and resources. Shall we attempt to ensure fairness among resource users in the critical-section code in the library? Or shall we leave it up to the applications to ensure fairness by being liberal, not greedy? To my knowledge, adding code to the common library to ensure fairness would be an overwhelming task. On the other hand, relying on the applications will also never ensure 100% fairness. An application that does very little work after using the shared resource will win the race, whereas an application that does heavy processing after its work with the shared resource will always starve. What is the best practice in this case? Where do we ensure fairness, and how? Sincerely, Srinivas Nayak
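
    One common way to remove this kind of starvation is to make the lock itself FIFO-fair rather than relying on the scheduler. The sketch below is a C++ ticket lock for threads within one process (an illustration of the idea, not the poster's P/V library); for separate applications the same queueing idea would have to live in shared memory or in the library's semaphore usage:

        #include <atomic>
        #include <thread>

        // FIFO "ticket lock": each waiter takes a ticket and waits until it is served,
        // so a fast looper cannot jump the queue and starve the others.
        class TicketLock {
            std::atomic<unsigned> next_{0};     // ticket dispenser
            std::atomic<unsigned> serving_{0};  // ticket currently allowed in
        public:
            void lock() {
                unsigned ticket = next_.fetch_add(1, std::memory_order_relaxed);
                while (serving_.load(std::memory_order_acquire) != ticket)
                    std::this_thread::yield();                   // wait our turn
            }
            void unlock() {
                serving_.fetch_add(1, std::memory_order_release); // serve the next waiter
            }
        };

        // Usage sketch: the same shape as work_on_shared_resource above.
        TicketLock g_lock;
        void work_on_shared_resource() {
            g_lock.lock();
            // get_shared_resource(); work_with_it();
            g_lock.unlock();
        }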

    Read the article

  • Programming graphics and sound on PC - Total newbie questions, and lots of them!

    - by Russel
    Hello, This isn't exactly specifically a programming question (or is it?) but I was wondering: How are graphics and sound processed from code and output by the PC? My guess for graphics: There is some reserved memory space somewhere that holds exactly enough room for a frame of graphics output for your monitor, i.e. 800 x 600, 24 bit color mode == 800x600x3 = ~1.4MB of memory space. Between each refresh, the program writes video data to this space. This action is completed before the monitor refresh. Assume a simple 2D game: the graphics data is stored in machine code as many bytes representing color values. Depending on what the program(s) being run instruct the PC, the processor reads the appropriate data and writes it to the memory space. When it is time for the monitor to refresh, it reads from each memory space byte-for-byte and activates hardware depending on those values for each color element of each pixel. All of this of course happens crazy-fast, and repeats x times a second, x being the monitor's refresh rate. I've simplified my own likely-incorrect explanation by avoiding talk of double buffering, etc. Here are my questions: a) How close is the above guess (the three steps)? b) How could one incorporate graphics in pure C++ code? I assume the practical thing that everyone does is use a graphics library (SDL, OpenGL, etc), but, for example, how do these libraries accomplish what they do? Would manual inclusion of graphics in pure C++ code (say, a 2D sprite) involve creating a two-dimensional array of bit values (or three dimensional to include multiple RGB values per pixel)? Is this how it would be done waaay back in the day? c) Also, continuing from above, do libraries such as SDL etc. that use bitmaps actually just build the bitmap/etc. files into the machine code of the executable and use them as though they were built in the same manner mentioned in question b above? d) In my hypothetical step 3 above, are there any registers involved? Like, could you write some byte value to some register to output a single color of one byte on the screen? Or is it purely dedicated memory space (=RAM) + hardware interaction? e) Finally, how is all of this done for sound? (I have no idea :) )
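
    As a toy illustration of the "reserved memory space" guess in step 1 (a software framebuffer; this is not how SDL or a real driver is implemented internally), here is a C++ sketch that treats a byte array as an 800x600, 24-bit frame and writes one pixel at a time. A real system would then hand a completed buffer to the display hardware each refresh, typically with double buffering:

        #include <cstdint>
        #include <vector>

        // A software framebuffer: WIDTH * HEIGHT pixels, 3 bytes (R, G, B) per pixel.
        constexpr int WIDTH  = 800;
        constexpr int HEIGHT = 600;

        struct FrameBuffer {
            std::vector<uint8_t> pixels = std::vector<uint8_t>(WIDTH * HEIGHT * 3, 0);

            void setPixel(int x, int y, uint8_t r, uint8_t g, uint8_t b) {
                if (x < 0 || x >= WIDTH || y < 0 || y >= HEIGHT) return;
                size_t offset = (static_cast<size_t>(y) * WIDTH + x) * 3;  // row-major layout
                pixels[offset + 0] = r;
                pixels[offset + 1] = g;
                pixels[offset + 2] = b;
            }
        };

        int main() {
            FrameBuffer fb;
            // Draw a 2D "sprite": a filled 16x16 red square at (100, 50).
            for (int y = 50; y < 66; ++y)
                for (int x = 100; x < 116; ++x)
                    fb.setPixel(x, y, 255, 0, 0);
            // In a real program the finished buffer would now be presented to the
            // screen (by a library or driver) before the next refresh.
        }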

    Read the article

  • What's the standard algorithm for syncing two lists of objects?

    - by Oliver Giesen
    I'm pretty sure this must be in some kind of text book (or more likely in all of them) but I seem to be using the wrong keywords to search for it... :( A common task I'm facing while programming is that I am dealing with lists of objects from different sources which I need to keep in sync somehow. Typically there's some sort of "master list" e.g. returned by some external API and then a list of objects I create myself each of which corresponds to an object in the master list. Sometimes the nature of the external API will not allow me to do a live sync: For instance the external list might not implement notifications about items being added or removed or it might notify me but not give me a reference to the actual item that was added or removed. Furthermore, refreshing the external list might return a completely new set of instances even though they still represent the same information so simply storing references to the external objects might also not always be feasible. Another characteristic of the problem is that both lists cannot be sorted in any meaningful way. You should also assume that initializing new objects in the "slave list" is expensive, i.e. simply clearing and rebuilding it from scratch is not an option. So how would I typically tackle this? What's the name of the algorithm I should google for? In the past I have implemented this in various ways (see below for an example) but it always felt like there should be a cleaner and more efficient way. Here's an example approach: Iterate over the master list Look up each item in the "slave list" Add items that do not yet exist Somehow keep track of items that already exist in both lists (e.g. by tagging them or keeping yet another list) When done iterate once more over the slave list Remove all objects that have not been tagged (see 4.) Update Thanks for all your responses so far! I will need some time to look at the links. Maybe one more thing worthy of note: In many of the situations where I needed this the implementation of the "master list" is completely hidden from me. In the most extreme cases the only access I might have to the master list might be a COM-interface that exposes nothing but GetFirst-, GetNext-style methods. I'm mentioning this because of the suggestions to either sort the list or to subclass it both of which is unfortunately not practical in these cases unless I copy the elements into a list of my own and I don't think that would be very efficient. I also might not have made it clear enough that the elements in the two lists are of different types, i.e. not assignment-compatible: Especially, the elements in the master list might be available as interface references only.
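
    As a sketch of the mark-and-sweep approach the question describes, here is one possible C++ shape of it. It assumes each item exposes some stable key, which, as noted above, is not always available, so treat it as an illustration of the two-pass idea rather than the standard algorithm:

        #include <string>
        #include <unordered_map>
        #include <vector>

        struct MasterItem { std::string key; /* external data */ };
        struct SlaveItem  { std::string key; bool seen; /* expensive local state */ };

        // One pass over the master list, one pass over the slave map.
        void syncLists(const std::vector<MasterItem>& master,
                       std::unordered_map<std::string, SlaveItem>& slave) {
            // Reset marks.
            for (auto& kv : slave) kv.second.seen = false;

            // Steps 1-4: add items that do not yet exist, mark the ones that do.
            for (const auto& m : master) {
                auto it = slave.find(m.key);
                if (it == slave.end())
                    it = slave.emplace(m.key, SlaveItem{m.key, false}).first;  // expensive init happens here only
                it->second.seen = true;
                // ...update it->second from m if needed...
            }

            // Steps 5-6: sweep out items that no longer exist in the master list.
            for (auto it = slave.begin(); it != slave.end(); ) {
                if (it->second.seen) ++it;
                else it = slave.erase(it);
            }
        }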

    Read the article

  • How can I structure and recode messy categorical data in R?

    - by briandk
    I'm struggling with how to best structure categorical data that's messy, and comes from a dataset I'll need to clean. The Coding Scheme I'm analyzing data from a university science course exam. We're looking at patterns in student responses, and we developed a coding scheme to represent the kinds of things students are doing in their answers. A subset of the coding scheme is shown below. Note that within each major code (1, 2, 3) are nested non-unique sub-codes (a, b, ...). What the Raw Data Looks Like I've created an anonymized, raw subset of my actual data which you can view here. Part of my problem is that those who coded the data noticed that some students displayed multiple patterns. The coders' solution was to create enough columns (reason1, reason2, ...) to hold students with multiple patterns. That becomes important because the order (reason1, reason2) is arbitrary--two students (like student 41 and student 42 in my dataset) who correctly applied "dependency" should both register in an analysis, regardless of whether 3a appears in the reason column or the reason2 column. How Can I Best Structure Student Data? Part of my problem is that in the raw data, not all students display the same patterns, or the same number of them, in the same order. Some students may do just one thing, others may do several. So, an abstracted representation of example students might look like this: Note in the example above that student002 and student003 both are coded as "1b", although I've deliberately shown the order as different to reflect the reality of my data. My (Practical) Questions Should I concatenate reason1, reason2, ... into one column? How can I (re)code the reasons in R to reflect the multiplicity for some students? Thanks I realize this question is as much about good data conceptualization as it is about specific features of R, but I thought it would be appropriate to ask it here. If you feel it's inappropriate for me to ask the question, please let me know in the comments, and stackoverflow will automatically flood my inbox with sadface emoticons. If I haven't been specific enough, please let me know and I'll do my best to be clearer.

    Read the article

  • Can I call make runtime decided method calls in Java?

    - by Catalin Marin
    I know there is an invoke function that does the stuff; I am mostly interested in the "correctness" of using such behavior. My issue is this: I have a Service Object which contains methods that I consider services. What I want to do is alter the behavior of those services without later intrusion. For example: class MyService { public ServiceResponse ServeMeDonuts() { do stuff... return new ServiceResponse(); } after 2 months I find out that I need to offer the same service to a new client app, and I also need to do certain extra stuff like setting a flag, making or updating certain data, or encoding the response differently. What I can do is pop it up and throw down some IFs. In my opinion this is not good, as it means interaction with tested code and may result in unwanted behaviour for the previous service clients. So I come and add something to my registry telling the system that the "NewClient" has a different behavior. So I'll do something like this: public interface Behavior { public void preExecute(); public void postExecute(); } public class BehaviorOfMyService implements Behavior{ String method; String clientType; public void BehaviorOfMyService(String method,String clientType) { this.method = method; this.clientType = clientType; } public void preExecute() { Method preCall = this.getClass().getMethod("pre" + this.method + this.clientType); if(preCall != null) { return preCall.invoke(); } return false; } ...same for postExecute(); public void preServeMeDonutsNewClient() { do the stuff... } } Then the system will do something like this: if(registrySaysThereIs different behavior set for this ServiceObject) { Class toBeCalled = Class.forName("BehaviorOf" + usedServiceObjectName); Object instance = toBeCalled.getConstructor().newInstance(method,client); instance.preExecute(); ....call the service... instance.postExecute(); .... } I am not so much interested in the correctness of the code as in the correctness of the thinking and approach. Actually I have to do this in PHP, which I see as a kind of pop music of programming that I have to "sing" for commercial reasons; even though I play pop, I really want to sing by the book. So, putting aside my more or less inspired analogy, I really want to know your opinion on this matter, both on its practical necessity and on the technical approach. Thanks

    Read the article

  • Embedded "Smart" character LCD driver. Is this a good idea?

    - by chris12892
    I have an embedded project that I am working on, and I am currently coding the character LCD driver. At the moment, the LCD driver only supports "dumb" writing. For example, let's say line 1 has some text on it, and I make a call to the function that writes to the line. The function will simply seek to the beginning of the line and write the text (plus enough whitespace to erase whatever was last written). This is well and good, but I get the feeling it is horribly inefficient sometimes, since some lines are simply: "Some-reading: some-Value" Rather than "brute force" replacing the entire line, I wanted to develop some code that would figure out the best way to update the information on the LCD. (just as background, it takes 2 bytes to seek to any char position. I can then begin writing the string) My idea was to first have a loop. This loop would compare the input to the last write, and in doing so, it would cover two things: A: Collect all the differences between the last write and the input. For every contiguous segment (be it same or different) add two bytes to the byte count. This is referenced in B to determine if we are wasting serial bandwidth. B: The loop would determine if this is really a smart thing to do. If we end up using more bytes to update the line than to "brute force" the line, then we should just return and let the brute force method take over. We should exit the smart write function as soon as this condition is met to avoid wasting time. The next part of the function would take all the differences, seek to the required char on the LCD, and write them. Thus, if we have a string like this already on the LCD: "Current Temp: 80F" and we want to update it to "Current Temp: 79F" The function will go through and see that it would take less bandwidth to simply seek to the "8" and write "79". The "7" will cover the "8" and the "9" will cover the "0". That way, we don't waste time writing out the entire string. Does this seem like a practical idea?
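
    Here is a sketch of the comparison loop the post describes, written in C++ for illustration rather than as the actual embedded driver code; the 2-bytes-per-seek cost is taken from the post, and the function and type names are made up. It collects the differing runs, keeps a running cost, and bails out to the brute-force write as soon as the smart update stops being cheaper:

        #include <string>
        #include <vector>

        // Cost model from the post: 2 bytes to seek to a position, 1 byte per character written.
        constexpr int SEEK_COST = 2;

        struct Segment { int pos; std::string text; };   // one run of characters that changed

        // Plans a "smart" update of one LCD line. Returns the changed segments, or an
        // empty vector to mean "don't bother, brute-force the whole line instead"
        // (callers should first skip the write entirely when oldLine == newLine).
        std::vector<Segment> planSmartUpdate(const std::string& oldLine,
                                             const std::string& newLine) {
            if (oldLine.size() != newLine.size()) return {};   // length changed: rewrite line
            const int bruteCost = SEEK_COST + static_cast<int>(newLine.size());
            int smartCost = 0;
            std::vector<Segment> segs;
            size_t i = 0;
            while (i < newLine.size()) {
                if (oldLine[i] == newLine[i]) { ++i; continue; }
                size_t start = i;
                while (i < newLine.size() && oldLine[i] != newLine[i]) ++i;  // extend the run
                segs.push_back({static_cast<int>(start), newLine.substr(start, i - start)});
                smartCost += SEEK_COST + static_cast<int>(i - start);
                if (smartCost >= bruteCost) return {};         // diff costs more; give up early
            }
            // Refinement left out for brevity: merging runs separated by a single equal
            // character is cheaper than paying another 2-byte seek.
            return segs;
        }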

    Read the article

  • approximating log10[x^k0 + k1]

    - by Yale Zhang
    Greetings. I'm trying to approximate the function Log10[x^k0 + k1], where .21 < k0 < 21, 0 < k1 < ~2000, and x is integer < 2^14. k0 & k1 are constant. For practical purposes, you can assume k0 = 2.12, k1 = 2660. The desired accuracy is 5*10^-4 relative error. This function is virtually identical to Log[x], except near 0, where it differs a lot. I already have came up with a SIMD implementation that is ~1.15x faster than a simple lookup table, but would like to improve it if possible, which I think is very hard due to lack of efficient instructions. My SIMD implementation uses 16bit fixed point arithmetic to evaluate a 3rd degree polynomial (I use least squares fit). The polynomial uses different coefficients for different input ranges. There are 8 ranges, and range i spans (64)2^i to (64)2^(i + 1). The rational behind this is the derivatives of Log[x] drop rapidly with x, meaning a polynomial will fit it more accurately since polynomials are an exact fit for functions that have a derivative of 0 beyond a certain order. SIMD table lookups are done very efficiently with a single _mm_shuffle_epi8(). I use SSE's float to int conversion to get the exponent and significand used for the fixed point approximation. I also software pipelined the loop to get ~1.25x speedup, so further code optimizations are probably unlikely. What I'm asking is if there's a more efficient approximation at a higher level? For example: Can this function be decomposed into functions with a limited domain like log2((2^x) * significand) = x + log2(significand) hence eliminating the need to deal with different ranges (table lookups). The main problem I think is adding the k1 term kills all those nice log properties that we know and love, making it not possible. Or is it? Iterative method? don't think so because the Newton method for log[x] is already a complicated expression Exploiting locality of neighboring pixels? - if the range of the 8 inputs fall in the same approximation range, then I can look up a single coefficient, instead of looking up separate coefficients for each element. Thus, I can use this as a fast common case, and use a slower, general code path when it isn't. But for my data, the range needs to be ~2000 before this property hold 70% of the time, which doesn't seem to make this method competitive. Please, give me some opinion, especially if you're an applied mathematician, even if you say it can't be done. Thanks.
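
    To make the range-reduction idea concrete: the k1 term does prevent the usual log identities on x, but nothing stops splitting the already-formed sum y = x^k0 + k1 into significand and exponent, y = m * 2^e with m in [1, 2), so that log10(y) = e*log10(2) + log10(m) and only log10(m) over one fixed interval needs a fitted polynomial or table. Below is a scalar C++ sketch of that decomposition (std::log10(m) stands in for the fitted polynomial, and the SIMD/fixed-point details are left out):

        #include <cmath>
        #include <cstdio>

        // Scalar reference for log10(x^k0 + k1) via exponent/significand splitting:
        // y = m * 2^e with m in [1, 2)  =>  log10(y) = e*log10(2) + log10(m).
        // In a SIMD version, log10(m) over the single interval [1, 2) is where the
        // least-squares polynomial (or table lookup) would go.
        double approx_log10_xk0_k1(double x, double k0, double k1) {
            double y = std::pow(x, k0) + k1;  // the k1 term is what breaks the pure log identities
            int e;
            double m = std::frexp(y, &e);     // y = m * 2^e, with m in [0.5, 1)
            m *= 2.0; --e;                    // renormalize so m is in [1, 2)
            const double LOG10_2 = 0.30102999566398120;
            return e * LOG10_2 + std::log10(m);   // replace std::log10(m) with the fitted poly
        }

        int main() {
            const double k0 = 2.12, k1 = 2660.0;  // constants quoted in the post
            for (int x = 1; x <= 16384; x *= 4)
                std::printf("x=%5d  %.6f  %.6f\n", x,
                            approx_log10_xk0_k1(x, k0, k1),
                            std::log10(std::pow(x, k0) + k1));
        }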

    Read the article

  • Convert NSData into Hex NSString

    - by Dawson
    With reference to the following question: Convert NSData into HEX NSSString I have solved the problem using the solution provided by Erik Aigner which is: NSData *data = ...; NSUInteger capacity = [data length] * 2; NSMutableString *stringBuffer = [NSMutableString stringWithCapacity:capacity]; const unsigned char *dataBuffer = [data bytes]; NSInteger i; for (i=0; i<[data length]; ++i) { [stringBuffer appendFormat:@"%02X", (NSUInteger)dataBuffer[i]]; } However, there is one small problem in that if there are extra zeros at the back, the string value would be different. For eg. if the hexa data is of a string @"3700000000000000", when converted using a scanner to integer: unsigned result = 0; NSScanner *scanner = [NSScanner scannerWithString:stringBuffer]; [scanner scanHexInt:&result]; NSLog(@"INTEGER: %u",result); The result would be 4294967295, which is incorrect. Shouldn't it be 55 as only the hexa 37 is taken? So how do I get rid of the zeros? EDIT: (In response to CRD) Hi, thanks for clarifying my doubts. So what you're doing is to actually read the 64-bit integer directly from a byte pointer right? However I have another question. How do you actually cast NSData to a byte pointer? To make it easier for you to understand, I'll explain what I did originally. Firstly, what I did was to display the data of the file which I have (data is in hexadecimal) NSData *file = [NSData dataWithContentsOfFile:@"file path here"]; NSLog(@"Patch File: %@",file); Output: Next, what I did was to read and offset the first 8 bytes of the file and convert them into a string. // 0-8 bytes [file seekToFileOffset:0]; NSData *b = [file readDataOfLength:8]; NSUInteger capacity = [b length] * 2; NSMutableString *stringBuffer = [NSMutableString stringWithCapacity:capacity]; const unsigned char *dataBuffer = [b bytes]; NSInteger i; for (i=0; i<[b length]; ++i) { [stringBuffer appendFormat:@"%02X", (NSUInteger)dataBuffer[i]]; } NSLog(@"0-8 bytes HEXADECIMAL: %@",stringBuffer); As you can see, 0x3700000000000000 is the next 8 bytes. The only changes I would have to make to access the next 8 bytes would be to change the value of SeekFileToOffset to 8, so as to access the next 8 bytes of data. All in all, the solution you gave me is useful, however it would not be practical to enter the hexadecimal values manually. If formatting the bytes as a string and then parsing them is not the way to do it, then how do I access the first 8 bytes of the data directly and cast them into a byte pointer?
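
    On the follow-up question (reading the 8 bytes directly instead of formatting them as a hex string and re-parsing it), the underlying idea is language-independent: copy the raw bytes into a fixed-width integer and account for byte order. Below is a C++ sketch of that idea only; it is not the NSData API, where the raw pointer would come from [data bytes]:

        #include <cstdint>
        #include <cstring>
        #include <cstdio>

        // Interpret 8 raw bytes as an unsigned 64-bit integer.
        // 0x37 followed by seven zero bytes reads as 55 on a little-endian host,
        // which matches the "shouldn't it be 55?" expectation in the post.
        uint64_t readUInt64LE(const unsigned char* bytes) {
            uint64_t value = 0;
            std::memcpy(&value, bytes, sizeof(value));  // avoids alignment issues
            // On a big-endian host the bytes would need to be swapped here.
            return value;
        }

        int main() {
            const unsigned char buf[8] = {0x37, 0, 0, 0, 0, 0, 0, 0};
            std::printf("%llu\n", static_cast<unsigned long long>(readUInt64LE(buf)));  // 55
        }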

    Read the article

  • segmented reduction with scattered segments

    - by Christian Rau
    I have a pretty standard problem to solve on the GPU, but I'm quite new to practical GPGPU, so I'm looking for ideas on how to approach it. I have many points in 3-space which are assigned to a very small number of groups (each point belongs to one group), specifically 15 in this case (it doesn't ever change). Now I want to compute the mean and covariance matrix of each group. On the CPU it's roughly: for each point p { mean[p.group] += p.pos; covariance[p.group] += p.pos * p.pos; ++count[p.group]; } for each group g { mean[g] /= count[g]; covariance[g] = covariance[g]/count[g] - mean[g]*mean[g]; } Since the number of groups is extremely small, the last step can be done on the CPU (I need those values on the CPU anyway). The first step is actually just a segmented reduction, but with the segments scattered around. So the first idea I came up with was to sort the points by their groups first. I thought about a simple bucket sort using atomic_inc to compute bucket sizes and per-point relocation indices (got a better idea for sorting? atomics may not be the best idea). After that, the points are sorted by group and I could possibly come up with an adaptation of the segmented scan algorithms presented here. But in this special case I have a very large amount of data per point (9-10 floats, maybe even doubles if the need arises), so the standard algorithms using one shared-memory element per thread and one thread per point might cause problems with per-multiprocessor resources such as shared memory or registers (OK, much more so on compute capability 1.x than 2.x, but still). Due to the very small and constant number of groups, I thought there might be better approaches. Maybe there are already existing ideas suited to these specific properties of such a standard problem. Or maybe my general approach isn't that bad and you have ideas for improving the individual steps, like a good sorting algorithm suited to a very small number of keys, or a segmented reduction algorithm minimizing shared memory/register usage. I'm looking for general approaches and don't want to use external libraries. FWIW I'm using OpenCL, but it shouldn't really matter, as the general concepts of GPU computing don't really differ across the major frameworks.
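
    To make the sorting idea above concrete, here is a minimal single-threaded C++ sketch of the "bucket-sort by group, then reduce each contiguous segment" plan. The names, the flattened 3x3 covariance layout, and the fixed group count of 15 are illustrative assumptions; on the GPU the histogram/scatter passes would use atomics and each segment reduction would map to, e.g., one work-group.

        #include <array>
        #include <cstddef>
        #include <vector>

        constexpr int NUM_GROUPS = 15;   // fixed, very small number of groups

        struct Point {
            std::array<float, 3> pos;    // (x, y, z)
            int group;                   // 0 .. NUM_GROUPS-1
        };

        // Counting sort by group, then reduce each contiguous segment into a
        // per-group mean and (biased) covariance, mirroring the CPU loop above.
        void sort_and_reduce(const std::vector<Point>& points,
                             std::array<std::array<double, 3>, NUM_GROUPS>& mean,
                             std::array<std::array<double, 9>, NUM_GROUPS>& cov)
        {
            // Pass 1: histogram of group sizes (the GPU version would use atomics),
            // then a prefix sum to get each segment's start offset.
            std::array<std::size_t, NUM_GROUPS + 1> offset{};
            for (const Point& p : points) ++offset[p.group + 1];
            for (int g = 0; g < NUM_GROUPS; ++g) offset[g + 1] += offset[g];

            // Pass 2: scatter every point into its segment.
            std::vector<Point> sorted(points.size());
            std::array<std::size_t, NUM_GROUPS> cursor{};
            for (const Point& p : points)
                sorted[offset[p.group] + cursor[p.group]++] = p;

            // Pass 3: reduce each contiguous segment independently.
            for (int g = 0; g < NUM_GROUPS; ++g) {
                mean[g].fill(0.0);
                cov[g].fill(0.0);
                const std::size_t n = offset[g + 1] - offset[g];
                for (std::size_t i = offset[g]; i < offset[g + 1]; ++i)
                    for (int r = 0; r < 3; ++r) {
                        mean[g][r] += sorted[i].pos[r];
                        for (int c = 0; c < 3; ++c)
                            cov[g][3 * r + c] +=
                                static_cast<double>(sorted[i].pos[r]) * sorted[i].pos[c];
                    }
                if (n == 0) continue;
                for (int r = 0; r < 3; ++r) mean[g][r] /= static_cast<double>(n);
                for (int r = 0; r < 3; ++r)
                    for (int c = 0; c < 3; ++c)
                        cov[g][3 * r + c] = cov[g][3 * r + c] / static_cast<double>(n)
                                          - mean[g][r] * mean[g][c];
            }
        }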

    Read the article

  • gcc optimization? bug? and its practical implications for a project

    - by kumar_m_kiran
    Hi all, my questions are divided into three parts. Question 1: Consider the code below. #include <iostream> using namespace std; int main( int argc, char *argv[]) { const int v = 50; int i = 0X7FFFFFFF; cout<<(i + v)<<endl; if ( i + v < i ) { cout<<"Number is negative"<<endl; } else { cout<<"Number is positive"<<endl; } return 0; } No specific compiler optimisation options are used (no -O flags); the basic compilation command g++ -o test main.cpp is used to form the executable. This seemingly very simple code has odd behaviour on SUSE 64-bit, gcc version 4.1.2. The expected output is "Number is negative"; instead, only on SUSE 64-bit, the output is "Number is positive". After some analysis and a 'disass' of the code, I find that the compiler optimises as follows: since i is the same on both sides of the comparison and cannot change within the same expression, remove i from the equation. The comparison then becomes if ( v < 0 ), where v is a positive constant, so during compilation itself the address of the cout call in the else branch is placed in the register; no cmp/jmp instructions can be found. I see this behaviour only with gcc 4.1.2 on SUSE 10. When tried on AIX 5.1/5.3 and HP IA64, the result is as expected. Is the above optimisation valid? Or is using the overflow mechanism for int not a valid use case? Question 2: When I change the conditional statement from if (i + v < i) to if ( (i + v) < i ), the behaviour is still the same. With this, at least, I would personally disagree: since additional braces are provided, I expect the compiler to create a temporary of the built-in type and then compare, thus nullifying the optimisation. Question 3: Suppose I have a huge code base and I migrate to a new compiler version; such a bug/optimisation can cause havoc in my system's behaviour. Of course, from a business perspective it is very ineffective to test all lines of code again just because of a compiler upgrade. I think, for all practical purposes, these kinds of errors are very difficult to catch (during an upgrade) and will invariably leak into the production site. Can anyone suggest a possible way to ensure that this kind of bug/optimisation does not have any impact on my existing system/code base? PS: When the const on v is removed from the code, the optimisation is not performed by the compiler. I believe it is perfectly fine to use the overflow mechanism to find out whether the variable is within MAX - 50 (in my case).
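
    Not part of the original question, but useful context when reading it: signed integer overflow is undefined behaviour in C++, which is what permits the compiler to fold (i + v < i) to false. A minimal sketch of a check that never relies on wrap-around, and therefore survives such optimisations (assuming v > 0), is:

        #include <iostream>
        #include <limits>

        int main()
        {
            const int v = 50;
            int i = 0x7FFFFFFF;

            // Ask whether i + v *would* overflow before performing the addition;
            // this comparison is well-defined for any i when v > 0.
            if (i > std::numeric_limits<int>::max() - v) {
                std::cout << "i + v would overflow" << std::endl;
            } else {
                std::cout << "i + v is representable: " << (i + v) << std::endl;
            }
            return 0;
        }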

    Read the article

  • Implementing coroutines in Java

    - by JUST MY correct OPINION
    This question is related to my question on existing coroutine implementations in Java. If, as I suspect, it turns out that there is no full implementation of coroutines currently available in Java, what would be required to implement them? As I said in that question, I know about the following: You can implement "coroutines" as threads/thread pools behind the scenes. You can do tricksy things with JVM bytecode behind the scenes to make coroutines possible. The so-called "Da Vinci Machine" JVM implementation has primitives that make coroutines doable without bytecode manipulation. There are also various JNI-based approaches to coroutines. I'll address each one's deficiencies in turn. Thread-based coroutines This "solution" is pathological. The whole point of coroutines is to avoid the overhead of threading, locking, kernel scheduling, etc. Coroutines are supposed to be light and fast and to execute only in user space. Implementing them in terms of full-tilt threads with tight restrictions gets rid of all the advantages. JVM bytecode manipulation This solution is more practical, albeit a bit difficult to pull off. This is roughly the same as jumping down into assembly language for coroutine libraries in C (which is how many of them work), with the advantage that you have only one architecture to worry about and get right. It also ties you down to only running your code on fully-compliant JVM stacks (which means, for example, no Android) unless you can find a way to do the same thing on the non-compliant stack. If you do find a way to do this, however, you have now doubled your system complexity and testing needs. The Da Vinci Machine The Da Vinci Machine is cool for experimentation, but since it is not a standard JVM its features aren't going to be available everywhere. Indeed, I suspect most production environments would specifically forbid the use of the Da Vinci Machine. Thus I could use it for cool experiments, but not for any code I expect to release to the real world. This also has a problem similar to the JVM bytecode manipulation solution above: it won't work on alternative stacks (like Android's). JNI implementation This solution renders the point of doing this in Java at all moot. Each combination of CPU and operating system requires independent testing, and each is a point of potentially frustrating subtle failure. Alternatively, of course, I could tie myself down to one platform entirely, but this, too, makes the point of doing things in Java entirely moot. So... Is there any way to implement coroutines in Java without using one of these four techniques? Or will I be forced to use whichever of those four smells the least (JVM manipulation) instead?

    Read the article
