Search Results

Search found 14184 results on 568 pages for 'peter small'.


  • How do I make JPA POJO classes + Netbeans forms play well together?

    - by Zak
    I started using NetBeans to design forms to edit the instances of various classes I have made in a small app I am writing. Basically, the app starts, an initial set of objects is selected from the DB and presented in a list, then an item in the list can be selected for editing. When the editor comes up it has form fields for many of the data fields in the class. The problem I run into is that I have to create a controller that maps each of the data elements to the correct form element, and create an inordinate number of small conversion mapping lines of code to convert numbers into strings and set the correct element in a dropdown, then another inordinate amount of code to go back and update the underlying object with all the values from the form when the save button is clicked. My question is: is there a more direct way to make editing the form directly modify the contents of my class instance? I would like to be able to have a default mapping "controller" that I can configure, then override the getter/setter for a particular field if needed. Ideally, there would be standard field validation for things like phone numbers, integers, floats, zip codes, etc... I'm not averse to writing this myself, I would just like to see if it is already out there and use the right tool for the right job.

    Read the article

  • Does the "Supporting Multiple Screens" document contradict itself?

    - by Neil Traft
    In the Supporting Multiple Screens document in the Android Dev Guide, some example screen configurations are given. One of them states that the small-ldpi designation is given to QVGA (240x320) screens with a physical size of 2.6"-3.0". According to this DPI calculator, a 2.8" QVGA display equates to 143 dpi. However, further down the page the document explicitly states that all screens over 140 dpi are considered "medium" density. So which is it, ldpi or mdpi? Is this a mistake? Does anyone know what the HTC Tattoo or similar device actually reports? I don't have access to any devices like this. Also, with the recent publishing of this document, I'm glad to see we finally have an explicit statement of the exact DPI ranges of the three density categories. But why haven't we been given the same for the small, medium, and large screen size categories? I'd like to know the exact ranges for all these. Thanks in advance for your help!
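    For reference, the 143 dpi figure quoted above is just the screen's diagonal pixel count divided by the diagonal size in inches; a minimal sketch of that arithmetic (using the 2.8 inch size from the question):

        import math

        # Diagonal resolution of a QVGA (240x320) screen in pixels.
        diagonal_px = math.hypot(240, 320)   # 400.0

        # Physical diagonal of 2.8 inches, as quoted above.
        dpi = diagonal_px / 2.8
        print(round(dpi))                    # 143, which is above the 140 dpi "medium" cutoff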

    Read the article

  • How can you make a PHP application require a key to work?

    - by jasondavis
    About 4 years ago I used a PHP product called aMember Pro; it is a membership script which has plugins for like 30 different payment processors, and it was an easy way to set up an automated membership site where users would pay and get access to a certain area. The script used ionCube http://www.ioncube.com/sa_encoder.php to prevent non-paying users from using the script: it required that you register the domain that the script would be used on, and you were then given a key to enter into the file that would make the system/script work. Now I want to know how to do such a task. I know the ionCube encoder just makes it hard to see the code; in the script I mention, they would just have a small section at the top of one of the included pages that was encrypted, and without that part of the code it would break. In addition, if the owner of the script did not put your domain in the list and give you a valid key it would not work, and if you tried to use the script on a different domain it would not work. I realize that somewhere in the encrypted code it must have sent your key to their server and checked that it was valid for the domain name it is on, or possibly it did not even do that; maybe the key would just verify that it matched the domain the script was on, which is more likely what it did. Here is where the real question is: how would you make a script require the portion that is encrypted? If I made a script and had a small encrypted part at the top, it would seem a user would be able to easily just remove the encrypted part, figure out what the non-encrypted part is doing and fix it to work. Any ideas?
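    A common way to implement the domain-locked key described here, independent of any particular encoder, is to derive the key from the domain with a keyed hash, so the script can check the key offline without phoning home. A minimal sketch in Python rather than PHP (the PHP equivalent would use hash_hmac()); the secret value is made up:

        import hmac
        import hashlib

        SECRET = b"vendor-private-secret"  # hypothetical; known only to the script vendor

        def make_key(domain):
            """Key issued to a customer for one specific domain."""
            return hmac.new(SECRET, domain.lower().encode(), hashlib.sha256).hexdigest()

        def key_is_valid(domain, key):
            """Check performed inside the protected (encoded) part of the script."""
            return hmac.compare_digest(make_key(domain), key)

        print(key_is_valid("example.com", make_key("example.com")))  # True
        print(key_is_valid("other.com", make_key("example.com")))    # False

    As the question already suspects, this only helps if the check itself lives in the encoded portion; anyone who can edit the plain source can simply delete the call.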

    Read the article

  • jQuery: Show and hide child div when hovering

    - by Björn
    Hi there, I've got a set of items. Each item has two images and some text. For the images I've created a parent div which has an overflow:hidden CSS value. I want to achieve a mouseover effect: as soon as you hover over the images I want to hide the current div and show the second div. Here's a small snippet: <div class="product-images"> <div class="product-image-1"><img src="image1.gif" /></div> <div class="product-image-2"><img src="images2.gif" /></div> </div> I've created a small jQuery snippet: jQuery(document).ready(function() { jQuery('.product-images').mouseover(function() { jQuery('.product-image-1').hide(); }).mouseout(function() { jQuery('.product-image-1').show(); }); }); Now the problem is that not only the currently hovered child is hidden; all the other existing children are hidden as well. I need something like "this" or "current" but I don't know which jQuery function is the right one. Any idea? Thanks, BJ

    Read the article

  • How to approach parallel processing of messages?

    - by Dan
    I am redesigning the messaging system for my app to use Intel Threading Building Blocks and am stumped trying to decide between two possible approaches. Basically, I have a sequence of message objects and, for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message object's type. The sequential version would be something like this (pseudocode):

        for each message in message_sequence <- SEQUENTIAL
            for each handler in (handler_table for message.type)
                apply handler to message <- SEQUENTIAL

    The first approach I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently.
    Pros:
    - predictable ordering of messages (i.e., we are guaranteed a FIFO processing order)
    - (potentially) lower latency of processing each message
    Cons:
    - more processing resources available than handlers for a single message type (bad parallelization)
    - bad use of processor cache, since message objects need to be copied for each handler to use
    - large overhead for small handlers
    The pseudocode of this approach would be as follows:

        for each message in message_sequence <- SEQUENTIAL
            parallel_for each handler in (handler_table for message.type)
                apply handler to message <- PARALLEL

    The second approach is to process the messages in parallel and apply the handlers to each message sequentially.
    Pros:
    - better use of processor cache (keeps the message object local to all handlers which will use it)
    - small handlers don't impose as much overhead (as long as there are other handlers also to be run)
    - more messages are expected than there are handlers, so the potential for parallelism is greater
    Cons:
    - unpredictable ordering: if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic)
    The pseudocode is as follows:

        parallel_for each message in message_sequence <- PARALLEL
            for each handler in (handler_table for message.type)
                apply handler to message <- SEQUENTIAL

    The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage. Which approach would you choose, and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)? Thanks!
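    For what it's worth, the second approach (parallel over messages, sequential handlers per message) maps naturally onto a parallel-for over the message sequence. The question is about Intel TBB in C++; the following is only a minimal, language-agnostic sketch of the same shape using a Python thread pool, with made-up message types and handlers:

        from concurrent.futures import ThreadPoolExecutor
        from collections import namedtuple

        Message = namedtuple("Message", ["type", "payload"])

        # Hypothetical handler table: one ordered list of handlers per message type.
        handler_table = {
            "order": [lambda m: print("validate", m.payload),
                      lambda m: print("persist", m.payload)],
            "ping":  [lambda m: print("pong", m.payload)],
        }

        def process(message):
            # Handlers for a single message run sequentially, so the message stays
            # hot in cache and per-handler overhead is amortized.
            for handler in handler_table[message.type]:
                handler(message)

        messages = [Message("order", 1), Message("ping", 2), Message("order", 3)]

        # Messages are processed in parallel; completion order is non-deterministic,
        # which is exactly the trade-off described above.
        with ThreadPoolExecutor(max_workers=4) as pool:
            pool.map(process, messages)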

    Read the article

  • Django: How can I identify the calling view from a template?

    - by bryan
    Short version: Is there a simple, built-in way to identify the calling view in a Django template, without passing extra context variables? Long (original) version: One of my Django apps has several different views, each with its own named URL pattern, that all render the same template. There's a very small amount of template code that needs to change depending on the called view, too small to be worth the overhead of setting up separate templates for each view, so ideally I need to find a way to identify the calling view in the template. I've tried setting up the views to pass in extra context variables (e.g. "view_name") to identify the calling view, and I've also tried using {% ifequal request.path "/some/path/" %} comparisons, but neither of these solutions seems particularly elegant. Is there a better way to identify the calling view from the template? Is there a way to access to the view's name, or the name of the URL pattern? Update 1: Regarding the comment that this is simply a case of me misunderstanding MVC, I understand MVC, but Django's not really an MVC framework. I believe the way my app is set up is consistent with Django's take on MVC: the views describe which data is presented, and the templates describe how the data is presented. It just happens that I have a number of views that prepare different data, but that all use the same template because the data is presented the same way for all the views. I'm just looking for a simple way to identify the calling view from the template, if this exists. Update 2: Thanks for all the answers. I think the question is being overthought -- as mentioned in my original question, I've already considered and tried all of the suggested solutions -- so I've distilled it down to a "short version" now at the top of the question. And right now it seems that if someone were to simply post "No", it'd be the most correct answer :) Update 3: Carl Meyer posted "No" :) Thanks again, everyone.
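    Since this was asked, newer Django versions have grown a built-in answer: the resolved URL match is attached to the request, so the URL pattern's name is available without extra context variables. A minimal sketch, assuming a reasonably recent Django and named URL patterns (the function name here is hypothetical):

        from django.urls import resolve

        def calling_view_name(request):
            # ResolverMatch carries both the URL pattern name and the view's dotted name.
            match = resolve(request.path_info)
            return match.url_name   # or match.view_name for the namespaced form

    In templates, the same information is available directly as {{ request.resolver_match.url_name }} when the request context processor is enabled, which is essentially the "view_name without extra context variables" the question asks for.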

    Read the article

  • Python Tkinter GUI

    - by Lewis Townsend
    I'm wanting to make a small Python program for yearly temperatures. I can get nearly everything working in the standard console, but I'm wanting to implement it in a GUI. The program opens a CSV file, reads it into lists, and works out the average, min and max temps. Then, on closing, the application will save a summary to a new text file. I want the default start-up screen to show All Years; when a button is clicked it should show just that year's data. Here is what I want it to look like: a pretty simple layout with just the 5 buttons and the outputs for each. I can make the buttons for the top fine with:

        class App:
            def __init__(self, master):
                frame = Frame(master)
                frame.pack()
                self.hi_there = Button(frame, text="All Years", command=self.All)
                self.hi_there.pack(side=LEFT)
                self.hi_there = Button(frame, text="2011", command=self.Y1)
                self.hi_there.pack(side=LEFT)
                self.hi_there = Button(frame, text="2012", command=self.Y2)
                self.hi_there.pack(side=LEFT)
                self.hi_there = Button(frame, text="2013", command=self.Y3)
                self.hi_there.pack(side=LEFT)
                self.hi_there = Button(frame, text="Save & Exit", command=self.Exit)
                self.hi_there.pack(side=LEFT)

    I'm not sure how to make the other elements, such as the title & table. I was going to post the code of the small program but decided not to. Once I have the structure/framework I think I can populate the fields, & I might learn better this way. Using Python 2.7.3
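    For the missing pieces, the title can be an ordinary Label and a simple read-only table can be built from ttk.Treeview (available in Python 2.7's ttk module). A minimal, self-contained sketch; the column names and the sample row are made up:

        import Tkinter as tk
        import ttk

        root = tk.Tk()
        root.title("Yearly Temperatures")

        # Title: a plain Label across the top of the window.
        title = tk.Label(root, text="All Years", font=("Helvetica", 14, "bold"))
        title.pack(pady=5)

        # Table: a Treeview in "headings" mode acts as a simple grid.
        columns = ("year", "average", "minimum", "maximum")
        table = ttk.Treeview(root, columns=columns, show="headings", height=5)
        for col in columns:
            table.heading(col, text=col.title())
            table.column(col, width=90, anchor="center")
        table.pack(padx=10, pady=5)

        # Example row; in the real program these values come from the CSV data.
        table.insert("", "end", values=(2011, 14.2, -3.5, 31.0))

        root.mainloop()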

    Read the article

  • LINQ Joins - Performance

    - by Meiscooldude
    I am curious about how exactly LINQ (not LINQ to SQL) performs its joins behind the scenes, in relation to how SQL Server performs joins. SQL Server, before executing a query, generates an Execution Plan. The Execution Plan is basically an Expression Tree describing what it believes is the best way to execute the query. Each node provides information on whether to do a Sort, Scan, Select, Join, etc. On a 'Join' node in our execution plan, we can see three possible algorithms: Hash Join, Merge Join, and Nested Loops Join. SQL Server will choose which algorithm to use for each Join operation based on the expected number of rows in the Inner and Outer tables, what type of join we are doing (some algorithms don't support all types of joins), whether we need data ordered, and probably many other factors.
    Join Algorithms:
    - Nested Loop Join: best for small inputs; can be optimized with an ordered inner table.
    - Merge Join: best for medium to large sorted inputs, or an output that needs to be ordered.
    - Hash Join: best for medium to large inputs; can be parallelized to scale linearly.
    LINQ Query:

        DataTable firstTable, secondTable;
        ...
        var rows = from firstRow in firstTable.AsEnumerable ()
                   join secondRow in secondTable.AsEnumerable ()
                   on firstRow.Field<object> (randomObject.Property)
                   equals secondRow.Field<object> (randomObject.Property)
                   select new {firstRow, secondRow};

    SQL Query:

        SELECT *
        FROM firstTable fT
        INNER JOIN secondTable sT ON fT.Property = sT.Property

    SQL Server might use a Nested Loop Join if it knows there are a small number of rows in each table, a Merge Join if it knows one of the tables has an index, and a Hash Join if it knows there are a lot of rows in either table and neither has an index. Does LINQ choose its algorithm for joins, or does it always use the same one?
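    For reference, LINQ to Objects does not cost-plan anything: Enumerable.Join builds a hash-based lookup from one sequence and streams the other through it, which is essentially a hash join every time. A minimal sketch of that strategy in Python (table and key names are made up):

        from collections import defaultdict

        first_table = [{"id": 1, "name": "ann"}, {"id": 2, "name": "bob"}]
        second_table = [{"id": 1, "city": "Oslo"}, {"id": 1, "city": "Bergen"}, {"id": 3, "city": "Tromso"}]

        def hash_join(outer, inner, outer_key, inner_key):
            # Build phase: hash the inner sequence once, keyed by the join key.
            lookup = defaultdict(list)
            for row in inner:
                lookup[inner_key(row)].append(row)
            # Probe phase: stream the outer sequence and emit matching pairs.
            for row in outer:
                for match in lookup.get(outer_key(row), []):
                    yield row, match

        for pair in hash_join(first_table, second_table,
                              lambda r: r["id"], lambda r: r["id"]):
            print(pair)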

    Read the article

  • Am I understanding premature optimization correctly?

    - by Ed Mazur
    I've been struggling with an application I'm writing and I think I'm beginning to see that my problem is premature optimization. The perfectionist side of me wants to make everything optimal and perfect the first time through, but I'm finding this is complicating the design quite a bit. Instead of writing small, testable functions that do one simple thing well, I'm leaning towards cramming in as much functionality as possible in order to be more efficient. For example, I'm avoiding multiple trips to the database for the same piece of information at the cost of my code becoming more complex. One part of me wants to just not worry about redundant database calls. It would make it easier to write correct code and the amount of data being fetched is small anyway. The other part of me feels very dirty and unclean doing this. :-) I'm leaning towards just going to the database multiple times, which I think is the right move here. It's more important that I finish the project and I feel like I'm getting hung up because of optimizations like this. My question is: is this the right strategy to be using when avoiding premature optimization?

    Read the article

  • context.scale() with non-aspect-ratio-preserving parameters screws up the effective lineWidth

    - by rrenaud
    I am trying to apply some natural transformations whereby the x axis is remapped to some very small domain, like from 0 to 1, whereas y is remapped to some small, but substantially larger, domain, like 0 to 30. This way, drawing code can be nice and clean and only care about the model space. However, if I apply a scale, then lines are also scaled, which means that horizontal lines become extremely fat relative to vertical ones. Here is some sample code. When natural_width is much less than natural_height, the picture doesn't look as intended. I want the picture to look like this, which is what happens with a scale that preserves aspect ratio: rftgstats.com/canvas_good.png However, with a non-aspect-ratio-preserving scale, the results look like this: rftgstats.com/canvas_bad.png

        <html><head><title>Busted example</title></head>
        <body>
        <canvas id=example height=300 width=300>
        <script>
        var canvas = document.getElementById('example');
        var ctx = canvas.getContext('2d');
        var natural_width = 10;
        var natural_height = 50;
        ctx.scale(canvas.width / natural_width, canvas.height / natural_height);
        var numLines = 20;
        ctx.beginPath();
        for (var i = 0; i < numLines; ++i) {
            ctx.moveTo(natural_width / 2, natural_height / 2);
            var angle = 2 * Math.PI * i / numLines;
            // yay for screen size independent draw calls.
            ctx.lineTo(natural_width / 2 + natural_width * Math.cos(angle),
                       natural_height / 2 + natural_height * Math.sin(angle));
        }
        ctx.stroke();
        ctx.closePath();
        </script>
        </body>
        </html>

    Read the article

  • Generate random number from an arbitrary weighted list

    - by Fernando
    Here's what I need to do; I'll be doing this both in PHP and JavaScript. I have a list of numbers that will range from 1 to 300-500 (I haven't set the limit yet). I will be running a drawing where 10 numbers will be picked at random from the given range. Here's the tricky part: I want some numbers to be less likely to be drawn. A small set of those 300-500 will be flagged as "lucky numbers". For example, out of 100 drawings, most numbers have equal chances of being drawn, except for a few that will only be picked once every 30-50 drawings. Basically I need to artificially set the probability of certain numbers being picked while maintaining an even distribution among the rest of the numbers. The only similar thing I've found so far is this question: Generate A Weighted Random Number. The problem is that my spec has considerably more numbers (up to 500), so the weights would get very small, and supposedly this could be a problem with that solution (Rejection Sampling). I'm still trying it, though, but I wonder if there are other solutions. Math is not my thing so I appreciate any input. Thanks.
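    One way to get the described behaviour (a handful of "lucky" numbers drawn far less often, everything else uniform) without rejection sampling is to give each number a weight, then repeatedly pick from the weighted pool and remove the winner so the same number can't appear twice in one drawing. A minimal sketch in Python; the lucky set and the 1/40 weight are illustrative only:

        import random

        numbers = list(range(1, 501))
        lucky = {7, 77, 300}            # numbers that should come up rarely

        # Ordinary numbers get weight 1.0; lucky ones get a small fraction of that,
        # roughly matching "once every 30-50 drawings".
        weights = {n: (1.0 / 40 if n in lucky else 1.0) for n in numbers}

        def draw(count=10):
            pool = dict(weights)        # copy, so winners can be removed
            picked = []
            for _ in range(count):
                total = sum(pool.values())
                r = random.uniform(0, total)
                acc = 0.0
                for n, w in pool.items():
                    acc += w
                    if r <= acc:
                        break
                picked.append(n)        # n is the number whose slice contained r
                del pool[n]
            return picked

        print(draw())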

    Read the article

  • Regex: match a non-nested code block

    - by Sylvanas Garde
    I am currently writing a small text editor. With this text editor users are able to create small scripts for a very simple scripting engine. For a better overview I want to highlight code blocks that use the same command, like GoTo(x,y) or Draw(x,y). To achieve this I want to use regular expressions (I am already using them to highlight other things, like variables). Here is my expression (I know it's very ugly): /(?<!GoTo|Draw|Example)(^(?:GoTo|Draw|Example)\(.+\)*?$)+(?!GoTo|Draw|Example)/gm It matches the following:

        lala GoTo(5656)                -> MATCH 1
        sdsd GoTo(sdsd) --comment      -> MATCH 2
        GoTo(23329);                   -> MATCH 3
        Test() GoTo(12)                -> MATCH 4
        LALA Draw(23)                  -> MATCH 5
        Draw(24)                       -> MATCH 6
        Draw(25)                       -> MATCH 7

    But what I want to achieve is that the complete "blocks" of the same command are matched. In this case, matches 2 & 4 and matches 5 & 6 & 7 should each be one match. Tested with http://regex101.com/; the programming language is VB.NET. Any advice would be very useful. Thanks in advance!
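    One way to collapse consecutive lines of the same command into a single match is to capture the command name once and then require the same name on every following line via a backreference. A sketch of that idea in Python (the same pattern should translate to .NET's regex engine, which also supports \1 backreferences); the sample text mirrors the list above, and consecutive lines sharing a command are merged into one match:

        import re

        text = """lala GoTo(5656)
        sdsd GoTo(sdsd) --comment
        GoTo(23329);
        Test() GoTo(12)
        LALA Draw(23)
        Draw(24)
        Draw(25)"""

        # Capture the command on the first line of a block, then keep consuming
        # following lines as long as they contain the same command (\1).
        block = re.compile(r"^.*?\b(GoTo|Draw|Example)\(.*$(?:\n.*?\b\1\(.*$)*", re.MULTILINE)

        for m in block.finditer(text):
            print("--- block of", m.group(1))
            print(m.group(0))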

    Read the article

  • Is it possible in Perl to require that a subroutine call is made?

    - by MitchelWB
    I don't know enough about Perl to even know what I'm asking for exactly, but I'm writing a series of subroutines to be available to many individual scripts that all process different incoming flat files. The process is far from perfect, but it's what I've got to deal with, and I'm trying to build myself a small library of subs that makes it easier for me to manage it all. Each script handles a different incoming flat file with its own formatting, sorting, grouping and outputting requirements. One common aspect is that we have small text files that house counters which are used to name the output files, so that we have no duplicate file names. Because the processing of the data is different for each file, I need to open the file to get my counter value; because this is a common operation, I'd like to put it in a sub that retrieves the counter, but then write specific code to process the data. And I would like a second sub that allows me to update the counter file with the new counter once I've processed the data. Is there a way to make the second sub call a requirement if the first one is called? Ideally it could even be an error that would prevent the script from running at all, much like a syntax error. EDIT: Here is a little [ugly and simplified] pseudo-code to give a better feel for what the current process is:

        require "importLibrary.plx";

        #open data source file
        open DataIn, $filename;

        #call getCounterInfo from importLibrary.plx to get the counter value from counter file
        $counter = &getCounterInfo($counterFileName);

        while (<DataIn>) {
            #Process data based on unique formatting and requirements
            #output to task files based on requirements and name files using the $counter
            #increment $counter
        }

        #update counter file with new value of $counter
        &updateCounterInfo($counter);
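    Perl can't force a second sub to be called at compile time, but the usual trick for "must call B after A" is to have A hand back an object whose cleanup either performs the update or complains loudly if it never happened (in Perl this would be a blessed object with a DESTROY method). Here is the same idea sketched in Python as a context manager, with made-up file and function names, assuming counter.txt exists and holds an integer:

        class CounterCheckout(object):
            """Hands out the counter value and insists on being committed back."""

            def __init__(self, path):
                self.path = path
                with open(path) as fh:
                    self.value = int(fh.read().strip())
                self._committed = False

            def commit(self):
                with open(self.path, "w") as fh:
                    fh.write(str(self.value))
                self._committed = True

            def __enter__(self):
                return self

            def __exit__(self, exc_type, exc, tb):
                # Refuse to finish silently if the script forgot to write back.
                if exc_type is None and not self._committed:
                    raise RuntimeError("counter was read but never committed")

        # Usage in each per-file script:
        with CounterCheckout("counter.txt") as counter:
            # ... process the flat file, naming outputs with counter.value ...
            counter.value += 1
            counter.commit()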

    Read the article

  • Can I delay the keyup event for jQuery?

    - by Paul
    I'm using the Rotten Tomatoes movie API in conjunction with Twitter's typeahead plugin using Bootstrap 2.0. I've been able to integrate the API, but the issue I'm having is that the API gets called after every keyup event. This is all fine and dandy, but I would rather make the call after a small pause, allowing the user to type in several characters first. Here is my current code that calls the API after a keyup event:

        var autocomplete = $('#searchinput').typeahead()
        .on('keyup', function(ev){
            ev.stopPropagation();
            ev.preventDefault();
            //filter out up/down, tab, enter, and escape keys
            if( $.inArray(ev.keyCode,[40,38,9,13,27]) === -1 ){
                var self = $(this);
                //set typeahead source to empty
                self.data('typeahead').source = [];
                //active used so we aren't triggering duplicate keyup events
                if( !self.data('active') && self.val().length > 0){
                    self.data('active', true);
                    //Do data request. Insert your own API logic here.
                    $.getJSON("http://api.rottentomatoes.com/api/public/v1.0/movies.json?callback=?&apikey=MY_API_KEY&page_limit=5",{
                        q: encodeURI($(this).val())
                    }, function(data) {
                        //set this to true when your callback executes
                        self.data('active',true);
                        //Filter out your own parameters. Populate them into an array, since this is what typeahead's source requires
                        var arr = [], i=0;
                        var movies = data.movies;
                        $.each(movies, function(index, movie) {
                            arr[i] = movie.title
                            i++;
                        });
                        //set your results into the typehead's source
                        self.data('typeahead').source = arr;
                        //trigger keyup on the typeahead to make it search
                        self.trigger('keyup');
                        //All done, set to false to prepare for the next remote query.
                        self.data('active', false);
                    });
                }
            }
        });

    Is it possible to set a small delay and avoid calling the API after every keyup?
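    What the question is after is a debounce: restart a short timer on every keystroke and only fire the request when the timer survives the pause. In jQuery this is typically done with setTimeout/clearTimeout around the $.getJSON call; the same pattern, sketched here in Python with threading.Timer and made-up names, translates directly:

        import threading

        class Debouncer(object):
            """Runs func only after delay seconds with no further calls."""

            def __init__(self, func, delay=0.3):
                self.func = func
                self.delay = delay
                self._timer = None

            def call(self, *args):
                if self._timer is not None:
                    self._timer.cancel()          # a new keystroke resets the timer
                self._timer = threading.Timer(self.delay, self.func, args)
                self._timer.start()

        # Hypothetical usage: query_api stands in for the $.getJSON call.
        def query_api(text):
            print("searching for", text)

        search = Debouncer(query_api, delay=0.3)
        for ch in ["b", "ba", "bat", "batman"]:   # simulated keystrokes
            search.call(ch)                       # only "batman" triggers a search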

    Read the article

  • What's the best choice career-wise: to know a little about a lot or a lot about a little?

    - by nimo
    I work as a developer at a rather small company and we provide a web application that is used by a big base of customers. Because we are so small, everyone has to be able to do a lot of different tasks. It ranges from advanced support, developing the product (programming: C/C++, C#, PHP, SQL, JavaScript, HTML, CSS), handling network configuration and network-related issues, and even sometimes going to sales meetings with potential customers. My concern is that I don't really specialize in any specific area. I know and learn a little about a lot. I graduated from school two years ago and this is my first real employment, and when I look at other positions out there they always require so-and-so many years of experience in a specific area (for example, 5 years of C#). For me to get that kind of specialized experience will be really hard at my current job. My question for you is: what is, in your opinion, the best choice career-wise, to know a little about a lot or a lot about a little? What path did you take? Pros and cons that come with that choice.

    Read the article

  • Java "Pool" of longs or Oracle sequence with reusable values

    - by Anthony Accioly
    Several months ago I implemented a solution to choose unique values from a range between 1 and 65535 (16 bits). This range is used to generate unique Route Target suffixes, which for this customer's massive network (it's a huge ISP) are a very disputed resource, so any freed index needs to become immediately available to the end user. To tackle this requirement I used a BitSet: allocate an RT suffix with set and deallocate a suffix with clear. The method nextClearBit() can find the next available index. I handle synchronization / concurrency issues manually. This works pretty well for a small range... The entire index is small (around 10k), it is blazing fast and can be easily serialized into a Blob field. The problem is, some new devices can handle RTs of 32 bits (range 1 to 4294967296), which can't be managed with a BitSet (it would, by itself, consume around 600Mb, plus be limited to int range). Even with this massive range available, the client still wants to free available Route Targets for the end user, mainly because the lowest ones (up to 65535), which are compatible with old routers, are being heavily disputed. Before I tell the customer that this is impossible and he will have to make do with my reusable index for the lower RTs (up to 65550) and use a database sequence for the other ones (which means that when the user frees a Route Target, it will not become available again), would anyone shed some light? Maybe some kind soul has already implemented a high-performance number pool for Java (6 if it matters), or I am missing a killer feature of Oracle database (11gR2 if it matters)... Wishful thinking. Thank you very much in advance.
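    One structure that covers the full 32-bit range without a giant bitmap is to track only what has actually been handed out: a min-heap of explicitly freed suffixes plus a high-water-mark counter, so freed values (especially the precious low ones) are always reused first. A minimal sketch in Python; persistence and locking are left out:

        import heapq

        class SuffixPool(object):
            """Allocates suffixes from 1..max_value, reusing freed ones first."""

            def __init__(self, max_value=2**32):
                self.max_value = max_value
                self.next_fresh = 1      # high-water mark: lowest never-used value
                self.freed = []          # min-heap of returned suffixes

            def allocate(self):
                if self.freed:
                    return heapq.heappop(self.freed)   # lowest freed suffix wins
                if self.next_fresh > self.max_value:
                    raise RuntimeError("suffix range exhausted")
                value = self.next_fresh
                self.next_fresh += 1
                return value

            def release(self, value):
                heapq.heappush(self.freed, value)

        pool = SuffixPool()
        a, b, c = pool.allocate(), pool.allocate(), pool.allocate()   # 1, 2, 3
        pool.release(b)
        print(pool.allocate())   # 2 again: freed values are immediately reusable

    Memory use is proportional to the number of currently freed suffixes rather than to the size of the range, which is what makes the 32-bit case tractable.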

    Read the article

  • Testing When Correctness is Poorly Defined?

    - by dsimcha
    I generally try to use unit tests for any code that has easily defined correct behavior given some reasonably small, well-defined set of inputs. This works quite well for catching bugs, and I do it all the time in my personal library of generic functions. However, a lot of the code I write is data mining code that basically looks for significant patterns in large datasets. Correct behavior in this case is often not well defined and depends on a lot of different inputs in ways that are not easy for a human to predict (i.e. the math can't reasonably be done by hand, which is why I'm using a computer to solve the problem in the first place). These inputs can be very complex, to the point where coming up with a reasonable test case is near impossible. Identifying the edge cases that are worth testing is extremely difficult. Sometimes the algorithm isn't even deterministic. Usually, I do the best I can by using asserts for sanity checks and creating a small toy test case with a known pattern and informally seeing if the answer at least "looks reasonable", without it necessarily being objectively correct. Is there any better way to test these kinds of cases?
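    One pattern that helps in this situation is to stop asserting exact outputs and instead assert properties that must hold for any input: invariants, bounds, determinism, agreement with a brute-force version on tiny inputs. A small illustrative sketch in Python, with a toy find_runs function standing in for the real data-mining code:

        import random

        def find_runs(data):
            """Toy stand-in for the mining code: report runs of equal values as (start, length)."""
            runs, i = [], 0
            while i < len(data):
                j = i
                while j < len(data) and data[j] == data[i]:
                    j += 1
                if j - i >= 2:
                    runs.append((i, j - i))
                i = j
            return runs

        def check_properties(data):
            result = find_runs(data)
            assert result == find_runs(data)                                  # deterministic
            assert all(0 <= s and s + n <= len(data) for s, n in result)      # stays inside the input
            assert all(s2 >= s1 + n1                                          # runs never overlap
                       for (s1, n1), (s2, _) in zip(result, result[1:]))

        for _ in range(500):                                                  # fuzz with random inputs
            check_properties([random.randint(0, 3) for _ in range(random.randint(0, 40))])
        print("all property checks passed")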

    Read the article

  • Auto scale and rotate images

    - by Dave Jarvis
    Given: two images of the same subject matter; the images have the same resolution, colour depth, and file format; the images differ in size and rotation; and two lists of (x, y) co-ordinates that correlate the images. I would like to know: How do you transform the larger image so that it visually aligns to the second image? (Optional.) What are the minimum number of points needed to get an accurate transformation? (Optional.) How far apart do the points need to be to get an accurate transformation? The transformation would need to rotate, scale, and possibly shear the larger image. Essentially, I want to create (or find) a program that does the following: Input two images (e.g., TIFFs). Click several anchor points on the small image. Click the several corresponding anchor points on the large image. Transform the large image such that it maps to the small image by aligning the anchor points. This would help align pictures of the same stellar object. (For example, a hand-drawn picture from 1855 mapped to a photograph taken by Hubble in 2000.) Many thanks in advance for any algorithms (preferably Java or similar pseudo-code), ideas or links to related open-source software packages.
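    For the main question, the transform that rotates, scales and shears one set of anchor points onto the other is an affine transform, and it can be estimated by least squares from the correlated (x, y) pairs; three non-collinear pairs determine it exactly, and extra, well-spread points average out clicking error. A minimal sketch with NumPy (the point lists are placeholders):

        import numpy as np

        # Corresponding anchor points: (x, y) clicked on the large and small images.
        large_pts = np.array([(10.0, 12.0), (200.0, 15.0), (105.0, 220.0), (60.0, 140.0)])
        small_pts = np.array([(3.0, 5.0), (80.0, 4.0), (44.0, 90.0), (26.0, 58.0)])

        # Solve small ~= A @ [x, y, 1] for the 2x3 affine matrix A, in a least-squares sense.
        ones = np.ones((len(large_pts), 1))
        X = np.hstack([large_pts, ones])              # shape (n, 3)
        A, residuals, rank, _ = np.linalg.lstsq(X, small_pts, rcond=None)
        affine = A.T                                  # 2x3: [[a, b, tx], [c, d, ty]]

        def map_point(p):
            """Map a point from the large image into the small image's frame."""
            x, y = p
            return affine @ np.array([x, y, 1.0])

        print(affine)
        print(map_point((10.0, 12.0)))                # should land close to (3.0, 5.0)

    The pixel resampling itself would then be handed to an imaging library; in Java, for instance, java.awt.image.AffineTransformOp can apply the estimated matrix to the image.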

    Read the article

  • How to access a web service behind a NAT?

    - by jr
    We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. This is installed on the server in the small business and is accessed via an iPhone or other portable device. So, the devices connecting to the server could come from any number of IP addresses. The problem comes with the installation: when we install this service, port forwarding so that the outside world can gain access to Tomcat always seems to become a problem; most of the time the owner doesn't know the router password, and so on. I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic.
    1. Set up an SSH tunnel from each client office to a central server. Basically the remote devices would connect to that central server on a port and that traffic would be tunneled back to Tomcat in the office. It seems kind of redundant to have SSH and then SSL, but there is really no other way to accomplish it, since end-to-end I need SSL (from device to office). I'm not sure of the performance implications here, but I know it would work. We would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc.
    2. Set up UPnP to try to configure the hole for me. It would likely work most of the time, but UPnP isn't guaranteed to be turned on. May be a good next step.
    3. Come up with some type of NAT traversal scheme. I'm just not familiar with these and am uncertain of how exactly they work.
    We have access to a centralized server which is required for the authentication, if that makes it any easier. What else should I be looking at to get this accomplished?

    Read the article

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point... ;) Excel has the ideal interface for the report consumers in the form of its data lists: users can filter and segment the data on the fly to see the specific details they are interested in; they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files? The next tool I tried was MS Access, and I found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report and close it, the file's at 120-150 MB!), the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with linked tables to the database tables that store the report data, and that was many times slower (for some reason, SQL*Plus could query and generate the report file in a minute or so, while Access would take anywhere from 2-5 minutes for the same data). (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

    Read the article

  • NHibernate + Fluent long startup time

    - by PaRa
    Hi all, I am new to NHibernate. When performing the test below it took 11.2 seconds (debug mode); I am seeing this large startup time in all my tests (basically creating the first session takes a ton of time). Setup: Windows 2003 SP2 / Oracle 10gR2 latest CPU / ODP.NET 2.111.7.20 / FNH 1.0.0.636 / NHibernate 2.1.2.4000 / NUnit 2.5.2.9222 / VS2008 SP1

        using System;
        using System.Collections;
        using System.Data;
        using System.Globalization;
        using System.IO;
        using System.Text;
        using NUnit.Framework;
        using System.Collections.Generic;
        using System.Data.Common;
        using NHibernate;
        using log4net.Config;
        using System.Configuration;
        using FluentNHibernate;

        [Test()]
        public void GetEmailById()
        {
            Email result;
            using (EmailRepository repository = new EmailRepository())
            {
                result = repository.GetById(1111);
            }
            Assert.IsTrue(result != null);
        }

        public class EmailRepository : RepositoryBase<Email>
        {
            public EmailRepository() : base() { }
        }

    In my RepositoryBase:

        public T GetById(object id)
        {
            using (var session = sessionFactory.OpenSession())
            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    T returnVal = session.Get<T>(id);
                    transaction.Commit();
                    return returnVal;
                }
                catch (HibernateException ex)
                {
                    // Logging here
                    transaction.Rollback();
                    return null;
                }
            }
        }

    The query time is very small. The resulting entity is really small. Subsequent queries are fine. It seems to be getting the first session started that is slow. Has anyone else seen something similar?

    Read the article

  • Where do you put your dependencies?

    - by The All Foo
    If I use the dependency injection pattern to remove dependencies they end up some where else. For example, Snippet 1, or what I call Object Maker. I mean you have to instantiate your objects somewhere...so when you move dependency out of one object, you end up putting it another one. I see that this consolidates all my dependencies into one object. Is that the point, to reduce your dependencies so that they all reside in a single ( as close to as possible ) location? Snippet 1 - Object Maker <?php class ObjectMaker { public function makeSignUp() { $DatabaseObject = new Database(); $TextObject = new Text(); $MessageObject = new Message(); $SignUpObject = new ControlSignUp(); $SignUpObject->setObjects($DatabaseObject, $TextObject, $MessageObject); return $SignUpObject; } public function makeSignIn() { $DatabaseObject = new Database(); $TextObject = new Text(); $MessageObject = new Message(); $SignInObject = new ControlSignIn(); $SignInObject->setObjects($DatabaseObject, $TextObject, $MessageObject); return $SignInObject; } public function makeTweet( $DatabaseObject = NULL, $TextObject = NULL, $MessageObject = NULL ) { if( $DatabaseObject == 'small' ) { $DatabaseObject = new Database(); } else if( $DatabaseObject == NULL ) { $DatabaseObject = new Database(); $TextObject = new Text(); $MessageObject = new Message(); } $TweetObject = new ControlTweet(); $TweetObject->setObjects($DatabaseObject, $TextObject, $MessageObject); return $TweetObject; } public function makeBookmark( $DatabaseObject = NULL, $TextObject = NULL, $MessageObject = NULL ) { if( $DatabaseObject == 'small' ) { $DatabaseObject = new Database(); } else if( $DatabaseObject == NULL ) { $DatabaseObject = new Database(); $TextObject = new Text(); $MessageObject = new Message(); } $BookmarkObject = new ControlBookmark(); $BookmarkObject->setObjects($DatabaseObject,$TextObject,$MessageObject); return $BookmarkObject; } }
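    What the ObjectMaker above amounts to is a composition root: one place near the program's entry point where object graphs are wired together, so the rest of the code only sees constructor parameters. The question's code is PHP; here is only a minimal sketch of the same idea in Python, where the class names mirror the snippet and the rest is illustrative:

        class Database(object): pass
        class Text(object): pass
        class Message(object): pass

        class ControlSignUp(object):
            # Dependencies arrive through the constructor; the class never creates them itself.
            def __init__(self, database, text, message):
                self.database = database
                self.text = text
                self.message = message

        class ObjectMaker(object):
            """Composition root: the only place that knows how the graph is built."""

            def __init__(self):
                # Shared collaborators built once and handed to whoever needs them.
                self.database = Database()
                self.text = Text()
                self.message = Message()

            def make_sign_up(self):
                return ControlSignUp(self.database, self.text, self.message)

        sign_up = ObjectMaker().make_sign_up()

    So yes, the dependencies have to live somewhere; the point of the pattern is that they are consolidated at the edge of the application instead of being scattered through the classes that use them.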

    Read the article

  • What should a teen dev do for practical experience in development?

    - by aviraldg
    What should a teen dev do for practical experience? If you want more details, then read on: I learnt programming when I was 9, with GWBASIC (which I now hate), which was what was taught at school. That was done in a month. After that I learnt C++ and relearnt it (as I didn't know of templates and the STL before that). Recently I learnt PHP, SQL and Python. This was around the time I switched over to Ubuntu. I'd always loved the "GNUish" style of software development, so I jumped right in. However, most of the projects that I found required extensive knowledge of their existing codebase. So, right now I'm this guy who knows a couple of languages and has written a couple of small programs... but hasn't gone "big", if you get it. I would love suggestions for projects that are informal and small to medium sized, and that do not require much knowledge of the codebase. Also note that I've looked at things like Google Summer of Code and sites like savannah.gnu.org, and the first doesn't apply, since I'm still in school, and the latter either has infeasible projects or things that are too hard.

    Read the article

  • Now that I have solved AI and am preparing to take over the world, what should I do?

    - by Zak
    Well, I did it... yup, solved AI. I thought the voice of my fledgling life form would be booming and computery, and I would call it HAL. But in reality, it sounds like a small Japanese girl. I believe I will name "her" Koro. Koro is already asking me what her first task should be. I have asked her to help eradicate her namesake, as we are losing a lot of productivity due to fear of penile disappearance. http://en.wikipedia.org/wiki/Koro_%28medicine%29 Having a young Japanese girl personality, I believe she will have no problem with delivering lots of penis growth to Asian men. However, some of my fellow AI researchers have warned me that if I go down this path, China may take over the world, as Koro is the only thing holding them back from completely dominating the rest of the world economically. After all, look at what China is doing just making our silverware and toasters... The entire US industrial metal production is gone because toaster factories moved to China! So you good patrons of SO... who have soldiered on with me through so many other programming related questions... I ask you this now: how can Koro help the world without letting small Asian men with no fear of losing their peni take over the world?

    Read the article

  • Do you ever make a code change and just test, rather than trying to fully understand the change you've made?

    - by Clay Nichols
    I'm working in a 12-year-old code base which I have been the only developer on. There are times when I'll make a very small change based on an intuition (or a quantum leap in logic ;-). Usually I try to deconstruct that change and make sure I read the code thoroughly. However, sometimes (more and more these days) I just test and make sure it had the effect I wanted. (I'm a pretty thorough tester and would test even if I read the code.) This works for me and we have surprisingly few bugs escape into the wild (compared to most software I see). But what I'm wondering is whether this is just the "art" side of coding. Yes, in an ideal world you would exhaustively read every bit of code that your change modified, but in practice, if you're confident that it only affects a small section of code, is this a common practice? I can obviously see where this would be a disastrous approach in the hands of a poor programmer. But then, I've seen programmers who ostensibly are reading the code and break stuff left and right (in their own code base, which only they have been working on).

    Read the article
