Search Results

Search found 2253 results on 91 pages for 'constant dean'.

Page 76 of 91

  • Deserialize xml which uses attribute name/value pairs

    - by Bodyloss
    My application receives a constant stream of XML files which are more or less a direct copy of the database record:

        <record type="update">
            <field name="id">987654321</field>
            <field name="user_id">4321</field>
            <field name="updated">2011-11-24 13:43:23</field>
        </record>

    I need to deserialize this into a class which provides nullable properties for all columns:

        class Record
        {
            public long? Id { get; set; }
            public long? UserId { get; set; }
            public DateTime? Updated { get; set; }
        }

    I just can't seem to work out a way of doing this without parsing the XML manually and switching on the field's name attribute to store the values. Is there a way this can be achieved quickly using an XmlSerializer? And if not, is there a more efficient way of parsing it manually? Regards and thanks. My main problem is that the name attribute's value needs to select the property, while the property's value comes from the contents of the <field>..</field> element.

    Read the article

  • Non-standard interaction between two tables to avoid a very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts). b uniquely determines ts.

        A = pd.DataFrame(
            [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
             ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
            columns=['a', 'b', 'ts']).set_index(['a', 'b'])
        AA = A.reset_index()

    Table B is another one-column (ts) table with a non-unique index (a). The ts's are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A.

        B = pd.DataFrame(
            dict(a=list('aaaaabbcccccc'),
                 ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

    The semantics of this is that B contains observations of occurrences of an event of the type indicated by the index. I would like to find in B the timestamp of the first occurrence of each event type after the timestamp specified in A for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So, my goal would be:

        C:
        ('a', 'x')    4
        ('a', 'y')    7
        ('a', 'z')    5
        ('b', 'x')    7
        ('b', 'z')    7
        ('c', 'y')    8

    I have some working code, but it is terribly slow.

        C = AA.apply(lambda row: (
                row[0], row[1],
                B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))),
            axis=1).set_index(['a', 'b'])

    Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that right now I have 1,000 a's, assume the average number of b's per a to be constant (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1,000 times more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?
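
    A sketch of one way to speed this up (not from the question): since B's timestamps are already sorted within each group and, as stated, always contain a value greater than or equal to the ones in A, you can do a single np.searchsorted per group of AA instead of one lookup per row. This is written against current pandas (the question's .ix/.irow belong to an older API) and reuses the AA and B frames defined above:

        import numpy as np
        import pandas as pd

        # One sorted array of observation times per value of 'a' in B.
        b_times = {a: g['ts'].values for a, g in B.groupby(level=0)}

        pieces = []
        for a, group in AA.groupby('a'):
            ts_sorted = b_times[a]                                # sorted within the group
            idx = np.searchsorted(ts_sorted, group['ts'].values)  # first index with value >= ts
            piece = group[['a', 'b']].copy()
            piece['ts'] = ts_sorted[idx]
            pieces.append(piece)

        C = pd.concat(pieces).set_index(['a', 'b'])

    This keeps memory proportional to A plus one group of B at a time, rather than materializing a full merge.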

    Read the article

  • Negative number representation across multiple architectures

    - by Donotalo
    I'm working with an OKI 431 microcontroller. It can communicate with a PC that has the appropriate software installed. An EEPROM is connected to the I2C bus of the micro and works as permanent memory. The PC software can read from and write to this EEPROM.

    Consider two numbers, B and C, each a two-byte integer. B is known to both the PC software and the micro and is a constant. C will be a number so close to B that B-C will fit in a signed 8-bit integer. After some testing, an appropriate value for C will be determined by the PC and stored in the EEPROM of the micro for later use. Now the micro can store C in two ways:

    1. The micro can store the whole two bytes representing C.
    2. The micro can store B-C as a one-byte signed integer, and later derive C from B and B-C.

    I think that two's complement representation of negative numbers is now universally accepted by hardware manufacturers. Still, I personally don't like negative numbers being stored in a storage medium that will be accessed by two different architectures, because negative numbers can be represented in different ways. For your information, the 431 also uses two's complement. Should I drop the worry that negative numbers can be represented in different ways and accept the one-byte solution, as my other team member suggested? Or should I stick with the two-byte solution because then I don't need to deal with negative numbers at all? Which one would you prefer, and why?
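
    As an illustration of the one-byte option (the numbers below are made up), here is a minimal Python sketch showing that as long as both sides agree the stored byte is a two's-complement signed value, C is recovered exactly from B and the stored difference:

        import struct

        B = 10000          # the constant known to both the PC and the micro
        C = 9975           # the calibrated value; B - C must fit in a signed byte

        delta = B - C                      # 25
        packed = struct.pack('b', delta)   # one signed two's-complement byte;
                                           # raises struct.error outside -128..127

        (recovered_delta,) = struct.unpack('b', packed)
        assert B - recovered_delta == C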

    Read the article

  • Random forests for short texts

    - by Jasie
    Hi all, I've been reading about Random Forests (1, 2) because I think it'd be really cool to be able to classify a set of 1,000 sentences into pre-defined categories. I'm wondering if someone can explain the algorithm to me better; I think the papers are a bit dense. Here's the gist from 1:

    Overview: We assume that the user knows about the construction of single classification trees. Random Forests grows many classification trees. To classify a new object from an input vector, put the input vector down each of the trees in the forest. Each tree gives a classification, and we say the tree "votes" for that class. The forest chooses the classification having the most votes (over all the trees in the forest). Each tree is grown as follows:

    1. If the number of cases in the training set is N, sample N cases at random, but with replacement, from the original data. This sample will be the training set for growing the tree.
    2. If there are M input variables, a number m « M is specified such that at each node, m variables are selected at random out of the M, and the best split on these m is used to split the node. The value of m is held constant during the forest growing.
    3. Each tree is grown to the largest extent possible. There is no pruning.

    So, does this look right? I'd have N = 1,000 training cases (sentences) and M = 100 variables (let's say there are only 100 unique words across all sentences), so the input vector is a bit vector of length 100, one bit per word. I randomly sample N = 1,000 cases (with replacement) to build each tree from. I pick some small number of input variables m « M, let's say 10, to build a tree from. Do I build tree nodes randomly, using all m input variables? How many classification trees do I build? Thanks for the help!
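
    One way to see how the quoted steps map onto parameters is scikit-learn's implementation (not mentioned in the question, used here purely for illustration; the sentences and labels are made up). n_estimators is the number of trees, each grown on a bootstrap sample of the N cases; max_features is m, the number of variables tried at each node, held constant while the forest grows; trees are grown without pruning.

        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_extraction.text import CountVectorizer

        # Hypothetical training data: N sentences with known categories.
        sentences = ["the dog barked all night", "stocks fell sharply today",
                     "the cat chased the dog", "the market rallied after the news"]
        labels = ["pets", "finance", "pets", "finance"]

        # M input variables: one binary feature per unique word across all sentences.
        vectorizer = CountVectorizer(binary=True)
        X = vectorizer.fit_transform(sentences)

        # 100 trees; at each node, 10 features are drawn at random and the best
        # split among those 10 is used (not the same 10 for the whole tree).
        forest = RandomForestClassifier(n_estimators=100, max_features=10)
        forest.fit(X, labels)

        print(forest.predict(vectorizer.transform(["my dog likes the park"])))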

    Read the article

  • [C#] How to do exception handling and tracing

    - by shrimpy
    Hi all, I am reading some C# books and got an exercise I don't know how to do, or rather I'm not sure what the question means.

    Problem: After working for a company for some time, your skills as a knowledgeable developer are recognized, and you are given the task of "policing" the implementation of exception handling and tracing in the source code (C#) for an enterprise application that is under constant incremental development. The two goals set by the product architect are:

    1. 100% of methods in the entire application must have at least a standard exception handler, using try/catch/finally blocks; more complex methods must also have additional exception handling for specific exceptions.
    2. All control-flow code can optionally write "tracing" information to assist in debugging and instrumentation of the application at run time, in situations where traditional debuggers are not available (e.g. on staging and production servers).

    (I don't quite understand these criteria. I come from the Java world; Java has two kinds of exceptions, checked and unchecked. Developers must handle checked exceptions and log them. For unchecked exceptions we may still log, but most of the time we just throw them. Coming to C#, what should I do?)

    Question for the problem: List the rules you would create for the development team to follow, and the ways in which you would enforce them, to achieve these goals. How would you go about ensuring that all existing code complies with the rules specified by the product architect; in particular, what considerations would impact your planning for the work to ensure all existing code complies?

    Read the article

  • How do I DRY up business logic between server-side Ruby and client-side JavaScript?

    - by James A. Rosen
    I have a Widget model with inheritance (I'm using Single-Table Inheritance, but it's equally valid for Class-per-Table). Some of the subclasses require a particular field; others do not.

        class Widget < ActiveRecord::Base
          ALL_WIDGET_TYPES = [FooWidget, BarWidget, BazWidget]
        end

        class FooWidget < Widget
          validates_presence_of :color
        end

        class BarWidget < Widget
          # no color field
        end

        class BazWidget < Widget
          validates_presence_of :color
        end

    I'm building a "New Widget" form (app/views/widgets/new.html.erb) and would like to dynamically show/hide the color field based on a <select> for widget_type.

        <% form_for @widget do |f| %>
          <%= f.select :type, Widget::ALL_WIDGET_TYPES %>
          <div class='hiddenUnlessWidgetTypeIsFooOrBaz'>
            <%= f.label :color %>
            <%= f.text_field :color %>
          </div>
        <% end %>

    I can easily write some jQuery to watch for onChange events on widget_type, but that would mean putting some sort of WidgetTypesThatRequireColor constant in my JavaScript. That's easy enough to do manually, but it is likely to get disconnected from the Widget model classes. I would prefer not to output JavaScript directly in my view, though I have considered using content_for(:js) and having a yield :js in my template footer. Any better ideas?

    Read the article

  • How big can I make an Android application's canvas in terms of pixels?

    - by user279112
    I've determined an estimate of the size of my Android emulator's screen in pixels, although I think its resolution can be changed to other values. Frankly, that doesn't eliminate the general problem: I don't know how many pixels on each axis I have to work with in my Android applications in general.

    The main problem I'm trying to solve is this: how do I make sure I don't use a faulty resolution in my Android applications if I want to keep the sizes of things constant (so that if the application screen shrinks, for instance, objects will still show up just as big; there just won't be as many of them shown), ideally with a single universal resolution for each program? Failing that, how do I make sure everything's all right if I try to do everything the same way with maybe a few different pre-set resolutions?

    It seems like a relevant question that must be answered before I can arrive at a complete answer to the general problem is how big I can always make my application in pixels, NOT counting the case where a user resizes the application's screen to something smaller than the maximum size permitted by the phone and its operating system. I really want to keep this simple. If I were doing this for a modern desktop, for instance, I know that if I design the application with an 800x600 canvas, the user can still shrink the application to the point where they're not doing themselves any favors, but at least I can basically count on it working right and not being too big for the monitor. Is there such a magic resolution for Android, assuming that I'm designing for API levels 3+ (Android 1.5+)? Thanks

    Read the article

  • Constraint on array dimensions in the C language

    - by Summer_More_More_Tea
        int KMP( const char *original, int o_len, const char *substring, int s_len ){
            if( o_len < s_len )
                return -1;

            int k = 0;
            int cur = 1;
            int fail[ s_len ];
            fail[ k ] = -1;

            while( cur < s_len ){
                k = cur - 1;
                do{
                    if( substring[ cur ] == substring[ k ] ){
                        fail[ cur ] = k;
                        break;
                    }else{
                        k = fail[ k ] + 1;
                    }
                }while( k );

                if( !k && ( substring[ cur ] != substring[ 0 ] ) ){
                    fail[ cur ] = -1;
                }else if( !k ){
                    fail[ cur ] = 0;
                }
                cur++;
            }

            k = 0;
            cur = 0;
            while( ( k < s_len ) && ( cur < o_len ) ){
                if( original[ cur ] == substring[ k ] ){
                    cur++;
                    k++;
                }else{
                    if( k == 0 ){
                        cur++;
                    }else{
                        k = fail[ k - 1 ] + 1;
                    }
                }
            }

            if( k == s_len )
                return cur - k;
            else
                return -1;
        }

    This is a KMP algorithm I once coded. When I reviewed it this morning, I found it strange that an integer array is defined as int fail[ s_len ]. Doesn't the specification require array dimensions to be compile-time constants? How can this code pass compilation? By the way, my gcc version is 4.4.1. Thanks in advance!

    Read the article

  • design an extendable database model

    - by wishi_
    Hi! Currently I'm doing a project whose specifications are unclear - well, whose aren't? I wonder what the best development strategy is for designing a DB that's going to be extended sooner or later with additional tables and relations. I want to include "changeability". My main concern is that I want to apply design patterns (it's a university project) and separate the constant factors from those that change by choosing appropriate design patterns - in my case MVC and a set of sub-patterns at the model level. When it comes to the DB, however, I may have to redesign the model in my MVC approach, because my domain model at a later stage may require a different set of classes representing the DB tables. I use Hibernate as an abstraction layer between the DB and the application. Would you start with a very minimal DB, just a few tables and relations? And what if I want an efficient DB, too? I wonder what strategies are applied in the real world. Stakeholder analysis, for example, isn't a sufficient planning solution when it comes to changing requirements. I think that, at the DB level, my design pattern ends. So there's a breach whose impact I'd like to minimize with a smart strategy.

    Read the article

  • Is there a "good" PRNG that generates values without hidden state?

    - by actual
    I need a good pseudo-random number generator that can be computed like a pure function of its previous output, without any hidden state. By "good" I mean:

    1. I must be able to parameterize the generator in such a way that running it for 2^n iterations with any parameters covers all or almost all values between 0 and 2^n - 1, where n is the number of bits in the output value.
    2. The combined generator output of n + p bits must cover all or almost all values between 0 and 2^(n + p) - 1 if I run it for 2^n iterations for every possible combination of its parameters, where p is the number of bits in the parameters.

    For example, an LCG can be computed like a pure function and it can meet the first condition, but it cannot meet the second one. Say we have a 32-bit generator, m = 2^32 and it is constant, and our p = 64 (two 32-bit parameters a and c), so n + p = 96 and we would have to take the output three ints at a time to meet the second condition. Unfortunately, that condition cannot be met because of the strictly alternating sequence of odd and even ints in the output. To overcome this, hidden state must be introduced, but that makes the function not pure and breaks the first condition (the period becomes much longer). Am I wanting too much?
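
    A minimal sketch of the LCG behaviour described above, written as a pure function of its previous output (the constants are the familiar Numerical Recipes pair, chosen only for illustration; with c odd and a ≡ 1 mod 4 the generator has full period modulo 2^32, and the low bit strictly alternates):

        M = 2 ** 32

        def lcg_next(x, a, c):
            # the next value depends only on (previous output, parameters): no hidden state
            return (a * x + c) % M

        a, c = 1664525, 1013904223
        x = 0
        for _ in range(8):
            x = lcg_next(x, a, c)
            print(x, "odd" if x % 2 else "even")
        # prints strictly alternating odd/even values, the structure the question
        # identifies as the obstacle to its second condition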

    Read the article

  • Get the equivalent time between "dynamic" time zones

    - by doctore
    I have a table providers that has three columns (it contains more columns, but they are not important in this case):

    - starttime: the time from which you can contact the provider.
    - endtime: the last hour at which you can contact the provider.
    - region_id: the region where the provider resides. In the USA: California, Texas, etc. In the UK: England, Scotland, etc.

    starttime and endtime are time without time zone columns but, "indirectly", their values are in the time zone of the region in which the provider resides. For example:

        starttime | endtime  | region_id (time zone of region) | "real" st | "real" et
        ----------|----------|---------------------------------|-----------|-----------
        03:00:00  | 17:00:00 | 1 (EGT => -1)                   | 02:00:00  | 16:00:00

    Often I need to get the list of providers whose time range includes the current server time (taking the time zone conversion into account). The problem is that the time zones aren't "constant", i.e., they may change during summer time. However, this change is very specific to the region and is not always carried out at the same moment: EGT <=> EGST, ART <=> ARST, etc. The questions are:

    1. Is it necessary to use a web service to update the regions' time zones every so often? Does anyone know of a web service that could serve?
    2. Is there a better approach to solving this problem?

    Thanks in advance.

    UPDATE: I will give an example to clarify what I'm trying to get. In the providers table I have these records:

        idproviders | starttime | endtime  | region_id
        ------------|-----------|----------|-----------
        1           | 03:00:00  | 17:00:00 | 23 (Texas)
        2           | 04:00:00  | 18:00:00 | 23 (Texas)

    If I execute the query in January, with this information:

        Server time (UTC offset)     = 0 hours
        Texas providers (UTC offset) = +1 hour
        Server time                  = 02:00:00

    I should get the following results: idproviders = 1.

    If I execute the query in June, with this information:

        Server time (UTC offset)     = 0 hours
        Texas providers (UTC offset) = +2 hours (their local time has not changed, but their time zone offset has)
        Server time                  = 02:00:00

    I should get the following results: idproviders = 1 and 2.
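
    Not the asker's schema, just a sketch of one common approach: store the providers' local window plus an IANA zone name for the region, and let a time-zone database apply the correct offset (including DST) at query time. A minimal Python example, assuming Python 3.9+ for zoneinfo and using America/Chicago as a stand-in for the Texas region; the provider values are made up:

        from datetime import datetime, time, timezone
        from zoneinfo import ZoneInfo

        provider = {"starttime": time(3, 0), "endtime": time(17, 0),
                    "zone": "America/Chicago"}

        def available(provider, now_utc):
            # Convert the UTC server time to the provider's local time and compare
            # it with the stored local window; the tz database picks the offset
            # that applies on that date, so DST needs no special handling.
            local = now_utc.astimezone(ZoneInfo(provider["zone"]))
            return provider["starttime"] <= local.time() <= provider["endtime"]

        # The same 22:30 UTC instant falls inside the window in January (16:30
        # local) but outside it in June (17:30 local, after the DST change).
        print(available(provider, datetime(2023, 1, 15, 22, 30, tzinfo=timezone.utc)))
        print(available(provider, datetime(2023, 6, 15, 22, 30, tzinfo=timezone.utc)))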

    Read the article

  • Why is my quick sort so slow?

    - by user513075
    Hello, I am practicing writing sorting algorithms as part of some interview preparation, and I am wondering if anybody can help me spot why this quick sort is not very fast. It appears to have the correct runtime complexity, but it is slower than my merge sort by a constant factor of about 2. I would also appreciate any comments that would improve my code even if they don't answer the question. Thanks a lot for your help! Please don't hesitate to let me know if I have made any etiquette mistakes; this is my first question here.

        private class QuickSort implements Sort {

            @Override
            public int[] sortItems(int[] ts) {
                List<Integer> toSort = new ArrayList<Integer>();
                for (int i : ts) {
                    toSort.add(i);
                }
                toSort = partition(toSort);
                int[] ret = new int[ts.length];
                for (int i = 0; i < toSort.size(); i++) {
                    ret[i] = toSort.get(i);
                }
                return ret;
            }

            private List<Integer> partition(List<Integer> toSort) {
                if (toSort.size() <= 1)
                    return toSort;
                int pivotIndex = myRandom.nextInt(toSort.size());
                Integer pivot = toSort.get(pivotIndex);
                toSort.remove(pivotIndex);
                List<Integer> left = new ArrayList<Integer>();
                List<Integer> right = new ArrayList<Integer>();
                for (int i : toSort) {
                    if (i > pivot)
                        right.add(i);
                    else
                        left.add(i);
                }
                left = partition(left);
                right = partition(right);
                left.add(pivot);
                left.addAll(right);
                return left;
            }
        }

    Read the article

  • Selenium Webdriver Java - looking for alternatives for Actions and Robot when performing drag-and-drop

    - by Ja-ke Alconcel
    I first tried the Actions class, and drag-and-drop does work on different elements; however, it was unable to locate a specific draggable element at its exact screen/webpage position. Here's the code I used:

        Point loc = driver.findElement(By.id("thiselement")).getLocation();
        System.out.println(loc);
        WebElement drag = driver.findElement(By.id("thiselement"));
        Actions test = new Actions(driver);
        test.dragAndDropBy(drag, 0, 60).build().perform();

    I checked the element's pixel location and it prints (837, -52), which is somewhere at the top of the webpage and pixels away from the actual element.

    Then I tried using the Robot class, which works perfectly fine in my script, but it only runs consistently on a single test machine: running it on a different machine with a different screen resolution and screen size makes the script fail, because Robot depends on the pixel location of the element. The sample Robot script I'm using:

        Robot dragAndDrop = new Robot();
        dragAndDrop.mouseMove(945, 166); // actual pixel location of the draggable element
        dragAndDrop.mousePress(InputEvent.BUTTON1_MASK);
        sleep(3000);
        dragAndDrop.mouseMove(945, 226);
        dragAndDrop.mouseRelease(InputEvent.BUTTON1_MASK);
        sleep(3000);

    Is there any alternative to Actions and Robot for automating drag-and-drop? Or some help getting the script to work with Actions, as I really can't use Robot? Thanks in advance.

    Read the article

  • How do you submit an authenticated HTML form using XUL (Firefox extension) Javascript?

    - by machineghost
    I am working on a Firefox extension, and in that extension I am trying to use AJAX to submit a form on a webpage. I am using:

        var request = Components.classes["@mozilla.org/xmlextras/xmlhttprequest;1"]
                                .createInstance(Components.interfaces.nsIXMLHttpRequest);
        request.onload = loadHandler;
        request.open("POST", url, true);
        request.send(values);

    to make the request, and it works ... mostly. The one problem is that the form has an authentication token on it, and I need to submit that token with my POST. I tried doing a GET separately to get this token, but by the time I made my second (POST) request my session had (evidently) changed, and the authenticity token was considered invalid. Does anyone know of a way to use XUL/chrome JavaScript to maintain a constant session across multiple requests (all "behind the scenes") for something like this? I'm still a XUL n00b, so there may be a totally obvious alternative that I'm missing (e.g. a hidden IFRAME; I tried that briefly but couldn't get it to work).

    Read the article

  • Apache console accesses network drives, service does not?

    - by danspants
    I have an Apache 2.2 server running Django. We have a network drive T: which we need constant access to within our Django app. When running Apache as a service, we cannot access this drive; as far as any Django code is concerned, the drive does not exist. If I add

        <Directory "t:/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

    to the httpd.conf file, the service no longer runs, but I can start Apache as a console application and it works fine: Django can find the network drive and all is well. Why is there a difference between the console and the service? Should there be a difference? I have the service running under my own logon, so in theory it should have the same access as I do. I'm keen to keep it running as a service, as it's far less obtrusive when I'm working on the server (unless there's a way to hide the console?). Any help would be most appreciated.

    Read the article

  • Algorithm to pick values from set to match target value?

    - by CSharperWithJava
    I have a fixed array of constant integer values about 300 items long (set A). The goal of the algorithm is to pick two numbers (X and Y) from this array that fit several criteria based on an input R.

    Formal requirement: pick values X and Y from set A such that the expression X*Y/(X+Y) is as close as possible to R. That's all there is to it. I need a simple algorithm that will do that.

    Additional info: set A can be ordered or stored in any way; it will be hard-coded eventually. Also, with a little bit of math, it can be shown that the best Y for a given X is the value in set A closest to the expression X*R/(X-R). Also, X and Y will always be greater than R. From this, I get a simple iterative algorithm that works OK:

        int minX = 100000000;
        int minY = 100000000;
        foreach X in A
            if( X <= R )
                continue;
            else
                Y = X*R/(X-R)
                Y = FindNearestIn(A, Y);  // search for the closest usable Y value in A
                if( X*Y/(X+Y) < minX*minY/(minX+minY) ) then
                    minX = X;
                    minY = Y;
                end
            end
        end

    I'm looking for a slightly more elegant approach than this brute-force method. Suggestions?
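
    Building on the hint in the question (the best Y for a given X is the element of A nearest to X*R/(X-R)), here is a Python sketch that sorts A once and finds each candidate Y by binary search instead of a linear scan, making the whole search O(n log n). The example values of A and R are made up, and whether X and Y may be the same element is left as in the question:

        import bisect

        def closest_pair(A, R):
            A = sorted(A)
            best = None                            # (error, X, Y)
            for X in A:
                if X <= R:
                    continue
                target = X * R / (X - R)           # ideal Y for this X
                i = bisect.bisect_left(A, target)
                for Y in A[max(i - 1, 0):i + 2]:   # nearest neighbours of the ideal Y
                    err = abs(X * Y / (X + Y) - R)
                    if best is None or err < best[0]:
                        best = (err, X, Y)
            return best[1], best[2]

        A = [150, 220, 330, 470, 680, 1000, 1500, 2200, 3300, 4700]
        R = 500
        print(closest_pair(A, R))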

    Read the article

  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use Memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if regions is None:
            # not found in cache
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000)  # 2,592,000 seconds = 30 days
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(sender, instance, **kwargs):
            cache_key = 'networks_for_%s' % (instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats

        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean?

    And a last question: I have templates that show many records from a few related tables. In my view I fetch records from one table, and in the templates I show them along with related info from the others. Generating the page takes a few seconds even for very small tables (<100 records). Is there an easy way to cache the queries triggered from templates? Or do I have to build some big structure in my view (with all the related tables), cache it, and send it to the template?
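
    For the last question, one approach (not from the question; the model and field names below are hypothetical) is to evaluate the queryset together with its related rows once, cache the resulting list, and hand that to the template, so rendering never touches the database:

        from django.core.cache import cache
        from myapp.models import SocialNetworkProfile   # hypothetical app path

        def get_profile_rows():
            cache_key = 'profile_rows'
            rows = cache.get(cache_key)
            if rows is None:
                rows = list(
                    SocialNetworkProfile.objects
                    .select_related('user')        # pull the related table in one JOIN
                )
                cache.set(cache_key, rows, 600)    # ten minutes
            return rows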

    Read the article

  • How can I set up a uniqueness constraint in MySQL for columns that can be NULL?

    - by user299689
    I know that in MySQL, UNIQUE constraints don't treat NULL values as equal. So if I have a unique constraint on ColumnX, two separate rows can both have NULL for ColumnX without violating the constraint. How can I work around this? I can't just set the value to an arbitrary constant that I can flag, because ColumnX in my case is actually a foreign key to another table. What are my options here?

    Please note that this table also has an "id" column that is its primary key. Since I'm using Ruby on Rails, it's important to keep this id column as the primary key.

    Note 2: In reality, my unique key encompasses many columns, and some of them have to be NULL because they are foreign keys, and only one of them should be non-NULL. What I'm actually trying to do is to "simulate" a polymorphic relationship in a way that keeps referential integrity in the db, using the technique outlined in the first part of the question asked here: http://stackoverflow.com/questions/922184/why-can-you-not-have-a-foreign-key-in-a-polymorphic-association

    Read the article

  • How do I implement a group of strings that starts randomly, but is bound by relevance upon selection?

    - by Freakishly
    Okay, the title may be a little confusing, but here's what I'm trying to do. :)

    I have a game in XNA where every tap from the user draws a moving circle on the screen. The circle has a tag, say 'dogs', displayed on it. So imagine multiple taps on the screen, and we have all these circles of various colors and sizes moving around the screen with different (but constant) velocities, each circle with a different tag: 'dogs', 'cats' and so on... Clicking on empty space generates a new circle at that point. Clicking on one of the circles "selects" it, turning it a shade of green and slowing its velocity down to a fraction of what it was. Clicking on it again "unselects" it and restores its original color and velocity (the trajectory does not change).

    Each circle comes with a tag, and as of now I'm populating these tags randomly from a string array (which means tags may repeat). I would like the tags for newly created circles to be relevant to previously "selected" tags. So when I click on 'dogs', I would like 'German Shepherd', but I would also like 'dog parks' and 'lemurs', assuming dogs get along well with lemurs. What would be the best way to approach this problem? In my head I have a massive many-to-many mapping, but I can't seem to translate it to code. Thanks for looking. FYI I'm using the sample project from here: http://mobile.tutsplus.com/tutorials/windows/introduction-to-xna-on-windows-phone-7/
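
    The game itself is C#/XNA, but as a language-neutral sketch of the data structure, the many-to-many mapping can be a weighted adjacency map keyed by tag; the tags and weights below are made up from the examples in the question:

        import random

        RELATED = {
            'dogs': [('German Shepherd', 5), ('dog parks', 3), ('lemurs', 1)],
            'cats': [('Siamese', 5), ('scratching posts', 3), ('dogs', 1)],
        }
        ALL_TAGS = list(RELATED)

        def next_tag(selected):
            # Random tag when nothing is selected, otherwise a tag drawn with
            # probability proportional to its relevance to the selected tag.
            if selected is None or selected not in RELATED:
                return random.choice(ALL_TAGS)
            tags, weights = zip(*RELATED[selected])
            return random.choices(tags, weights=weights, k=1)[0]

        print(next_tag(None))      # unbiased, e.g. 'cats'
        print(next_tag('dogs'))    # biased toward 'German Shepherd'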

    Read the article

  • Elo rating system: start value

    - by Marco W.
    I've implemented an Elo rating system in a game. There is no limit to the number of players, and players can join constantly, so the number of players probably rises gradually. How the Elo values are calculated exactly isn't important, because of this fact: if team A beats team B, then A's Elo win equals B's Elo loss. Hence I've got a problem concerning the starting values for my rating system:

    - Should I use the starting value "0" for every player? The sum of all Elo values would then be constant. But since the number of players is increasing, there would be some kind of Elo deflation, wouldn't there?
    - Should I use a starting value greater than 0? In that case, the sum of all Elo values would constantly increase, so there could be Elo inflation. The problem: Elo points lose value, but the starting value always stays the same.

    What should I do? Can you help me? Thanks in advance!
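
    A worked illustration of the zero-sum property mentioned above (made-up numbers, K = 32, starting value 1500): because the winner's gain equals the loser's loss, play never changes the total, so whatever starting value newcomers receive becomes the pool's permanent average; only the spread around it grows.

        K = 32

        def expected(r_a, r_b):
            return 1 / (1 + 10 ** ((r_b - r_a) / 400))

        def update(winner, loser):
            delta = K * (1 - expected(winner, loser))    # winner's gain == loser's loss
            return winner + delta, loser - delta

        START = 1500
        ratings = [START, START, START]
        ratings[0], ratings[1] = update(ratings[0], ratings[1])
        ratings.append(START)                            # a new player joins later
        print(ratings, sum(ratings) / len(ratings))      # the average is still 1500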

    Read the article

  • Efficient database access when dealing with multiple abstracted repositories

    - by Nathan Ridley
    I want to know how most people are dealing with the repository pattern when it involves hitting the same database multiple times (sometimes transactionally) and trying to do so efficiently while maintaining database agnosticism and using multiple repositories together. Let's say we have repositories for three different entities; Widget, Thing and Whatsit. Each repository is abstracted via a base interface as per normal decoupling design processes. The base interfaces would then be IWidgetRepository, IThingRepository and IWhatsitRepository. Now we have our business layer or equivalent (whatever you want to call it). In this layer we have classes that access the various repositories. Often the methods in these classes need to do batch/combined operations where multiple repositories are involved. Sometimes one method may make use of another method internally, while that method can still be called independently. What about, in this scenario, when the operation needs to be transactional? Example: class Bob { private IWidgetRepository _widgetRepo; private IThingRepository _thingRepo; private IWhatsitRepository _whatsitRepo; public Bob(IWidgetRepository widgetRepo, IThingRepository thingRepo, IWhatsitRepository whatsitRepo) { _widgetRepo = widgetRepo; _thingRepo= thingRepo; _whatsitRepo= whatsitRepo; } public void DoStuff() { _widgetRepo.StoreSomeStuff(); _thingRepo.ReadSomeStuff(); _whatsitRepo.SaveSomething(); } public void DoOtherThing() { _widgetRepo.UpdateSomething(); DoStuff(); } } How do I keep my access to that database efficient and not have a constant stream of open-close-open-close on connections and inadvertent invocation of MSDTS and whatnot? If my database is something like SQLite, standard mechanisms like creating nested transactions are going to inherently fail, yet the business layer should not have to be concerning itself with such things. How do you handle such issues? Does ADO.Net provide simple mechanisms to handle this or do most people end up wrapping their own custom bits of code around ADO.Net to solve these types of problems?

    Read the article

  • Which CMS or script for a social network wiki?

    - by Jason
    Hi, I am building a social network. I need a CMS that will allow users to contribute content. Each content item will need to have a Google map, a list of features, ratings, comments, etc., and the content must be editable by other users, with revision control. Also, each user should have a profile with their bookmarked content items, contributed items, comments, etc. It's very important that I can create a template for the wiki/content entries so that each item looks uniform (and, as a kick in the teeth, I would also like to be able to search for wiki items using a radial search or map). Joomla was my first choice, as I've used it for many projects, but the wiki functionality is not there. I was also setting up a grou.ps site, but the wiki is so-so: not feature-rich, and it really doesn't have the options I need. Additionally, I know someone out there will mention Drupal. I may consider it if I can see it put to use without an overabundance of custom programming (I don't mind initial coding, but Drupal requires constant coding and recoding, and with this site I don't have that time commitment). I thought about using MediaWiki with BuddyPress, but I'm not sure that's the way to go. Thoughts?

    Read the article

  • Java string to double conversion.

    - by wretrOvian
    Hi, I've been reading up on the net about the issues with handling float and double types in Java. Unfortunately, the picture is still not clear, hence I'm asking here directly. :(

    My MySQL table has various DECIMAL(m,d) columns. m may range from 5 to 30; d stays constant at 2.

    Question 1: What equivalent data type should I be using in Java to work with (i.e. store, retrieve, and process) values of this size from my table? (I've settled on double, hence this post.)

    Question 2: While trying to parse a double from a string, I'm getting errors:

        Double dpu = new Double(dpuField.getText());

    for example:

        "1"      -> java.lang.NumberFormatException: empty String
        "10"     -> 1.0
        "101"    -> 10.0
        "101."   -> 101.0
        "101.1"  -> 101.0
        "101.19" -> 101.1

    What am I doing wrong? What is the correct way to convert a string to a double value? And what measures should I take to perform operations on such values?

    Read the article

  • Argument type deduction, references and rvalues

    - by uj2
    Consider the situation where a function template needs to forward an argument while keeping its lvalue-ness in case it's a non-const lvalue, but is itself agnostic to what the argument actually is, as in:

        template <typename T>
        void target(T&) { cout << "non-const lvalue"; }

        template <typename T>
        void target(const T&) { cout << "const lvalue or rvalue"; }

        template <typename T>
        void forward(T& x) { target(x); }

    When x is an rvalue, instead of T being deduced as a const type, I get an error:

        int x = 0;
        const int y = 0;

        forward(x);             // T = int
        forward(y);             // T = const int
        forward(0);             // Hopefully T = const int, but actually an error
        forward<const int>(0);  // Works, T = const int

    It seems that for forward to handle rvalues (without requiring explicit template arguments) there needs to be a forward(const T&) overload, even though its body would be an exact duplicate. Is there any way to avoid this duplication?

    Read the article

  • Why it's important to specify the complete class name in your association when using namespaces

    - by Carmine Paolino
    In my Rails application there is a model that has some has_one associations (this is a fabricated example):

        class Person::Admin < ActiveRecord::Base
          has_one :person_monthly_revenue
          has_one :dude_monthly_niceness
          accepts_nested_attributes_for :person_monthly_revenue, :dude_monthly_niceness
        end

        class Person::MonthlyRevenue < ActiveRecord::Base
          belongs_to :person_admin
        end

        class Dude::MonthlyNiceness < ActiveRecord::Base
          belongs_to :person_admin
        end

    The application talks to a backend that computes some data and returns a piece of JSON like this:

        {
          "dude_monthly_niceness": {
            "february": 1.1153232569518972,
            "october": 1.1250217200558268,
            "march": 1.3965786869658541,
            "august": 1.6293418014601631,
            "september": 1.4062771500697835,
            "may": 1.7166279693955291,
            "january": 1.0086401628086725,
            "june": 1.5711510228365859,
            "april": 1.5614525597326563,
            "december": 0.99894169970474289,
            "july": 1.7263264324994585,
            "november": 0.95044938418509506
          },
          "person_monthly_revenue": {
            "february": 10.585596551505297,
            "october": 10.574823016656749,
            "march": 9.9125274764852787,
            "august": 9.2111604702328922,
            "september": 9.7905249446675153,
            "may": 9.1329712474607962,
            "january": 10.479614016604238,
            "june": 9.3710235926961936,
            "april": 9.5897372624830304,
            "december": 10.052587677671438,
            "july": 8.9508877843925561,
            "november": 10.925339756096172
          }
        }

    To deserialize it, I use ActiveRecord's from_json, but instead of a Person::Admin object with all the associations in place, I get this error:

        >> Person::Admin.new.from_json(json)
        NameError: uninitialized constant Person::Admin::DudeMonthlyNiceness

    Am I doing something wrong? Is there a better way to deserialize the data? (I can modify the backend easily.)

    UPDATE: the original title was "How to deserialize from json to ActiveRecord objects with associations?", but it ended up being my mistake in specifying the associations, so I changed the title.

    Read the article
