Search Results

Search found 7722 results on 309 pages for 'pitfalls to avoid'.


  • Javascript problem with iframe that's hidden before loaded

    - by Aistina
    I have a page that contains an iframe that gets loaded using Javascript:

    index.html

        <iframe id="myFrame" width="800" height="600" style="display: none;"></iframe>
        <div id="loader"><!-- some loading indicator --></div>
        <script type="text/javascript">
        function someFunction() {
            var myFrame = document.getElementById('myFrame');
            var loader = document.getElementById('loader');
            loader.style.display = 'block';
            myFrame.src = 'myFrame.html';
            myFrame.onload = function() {
                myFrame.style.display = 'block';
                loader.style.display = 'none';
            };
        }
        </script>

    The page that gets loaded in the iframe contains some Javascript logic which calculates the sizes of certain elements for the purposes of adding a JS-driven scrollbar (jScrollPane + jQuery Dimensions).

    myFrame.html

        <div id="scrollingElement" style="overflow: auto;">
            <div id="several"></div>
            <div id="child"></div>
            <div id="elements"></div>
        </div>
        <script type="text/javascript">
        $(document).load(function() {
            $('#scrollingElement').jScrollPane();
        });
        </script>

    This works in Chrome (and probably other Webkit browsers), but fails in Firefox and IE because at the time jScrollPane gets called, all the elements are still invisible and jQuery Dimensions is unable to determine any element's dimensions. Is there a way to make sure my iframe is visible before $(document).ready(...) gets called? Other than using setTimeout to delay jScrollPane, which is something I definitely want to avoid.
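
    One common workaround for this class of problem (a sketch, not from the original post) is to keep the iframe renderable while it loads by parking it off-screen instead of using display: none; the browser then computes real dimensions, so jScrollPane can measure, and the frame is moved into place on load:

        function someFunction() {
            var myFrame = document.getElementById('myFrame');
            // Off-screen but rendered: layout runs, unlike display: none,
            // which gives every element inside the frame zero size.
            myFrame.style.position = 'absolute';
            myFrame.style.left = '-9999px';
            myFrame.src = 'myFrame.html';
            myFrame.onload = function() {
                // jScrollPane has already measured real sizes at this point.
                myFrame.style.position = 'static';
                myFrame.style.left = 'auto';
            };
        }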


  • Can't touch UITextField on UIScrollView

    - by Chris B
    Hey guys, I know this has been talked about a lot. I think I've gone through every question on this site, and still have not been able to get this working. I'm new to developing, but I have a good sense of what's going on with all of my code. I definitely don't have a lot of experience though; this is my first iPhone app.

    I'm making a data entry form that consists of multiple UITextFields within a UIScrollView. I'll avoid explaining the other details for now, as it seems it's a very basic problem. Without a scrollview, the textfields work perfectly. I can touch the textfield and the keyboard or picker view shows up appropriately. When I add the textfields to a scrollview, the scrollview works, but then the text fields don't receive my touches. Here's the key:

      • When 'User Interaction' is ENABLED, the scrollview works but the textfield touches are NOT registered.
      • When 'User Interaction' is DISABLED, the scrollview doesn't work, but the textfield touches ARE registered and the keyboard/picker pops up.

    From the other posts, I have seen people creating subclasses and overriding the touches in a separate implementation. I've seen people using custom content views (subviews?), and I've seen some solutions that are now obsolete since the APIs have changed in newer versions of the SDK - and I am just completely stuck. I will leave out my code for now, because maybe someone has a solution that doesn't require it. If someone needs to see my code, I will put it up. My app is being written against the 3.1.3 SDK. If anyone has ANY information that could help, it would be so greatly appreciated.


  • Moving Multiple Divs With Same Class Inside Parent

    - by superzilla
    So, my website is hosted on enjin.com, and when you give users a tag with an image it appears on the forums under their name; however, it's a vertical list with the tag names. I want to organize it so all of the tag images appear on one line and all of the tag names appear in a horizontal list below it. They use div tags somewhat like this:

        <div class="tag">
            <div class="tag-image"></div> <!-- If the tag lacks an image this is excluded -->
            <div class="tag-text">Tag Name</div> <!-- If you want the name to be hidden this is excluded -->
        </div>

    They then pretty much repeat this for every tag. Somewhat the same thing is used for forum posts too. So, I want to move all the tag-image divs together on a single line and all of the tag-text divs down, and give each its own line. I only want to move them within their parent element (to avoid images/tags from one forum user going to another). I know how to append things with jQuery and have googled how to move elements around, but I'm afraid that they're all going to end up together under another user's avatar.
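
    A sketch of the per-parent approach, assuming a hypothetical .tags-wrapper element around each user's block of .tag divs (the real container class on enjin.com may differ):

        $('.tags-wrapper').each(function() {
            var wrapper = $(this);
            // Scoped lookups, so one user's tags never mix with another's.
            var images = wrapper.find('.tag-image').detach();
            var texts  = wrapper.find('.tag-text').detach();
            wrapper.find('.tag').remove(); // drop the now-empty shells
            $('<div class="tag-images-row"/>').append(images).appendTo(wrapper);
            $('<div class="tag-texts-list"/>').append(texts).appendTo(wrapper);
        });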


  • PHP/MySQL time zone migration

    - by El Yobo
    I have an application that currently stores timestamps in MySQL DATETIME and TIMESTAMP values. However, the application needs to be able to accept data from users in multiple time zones and show the timestamps in the time zone of other users. As such, this is how I plan to amend the application; I would appreciate any suggestions to improve the approach.

    Database modifications:

      • All TIMESTAMPs will be converted to DATETIME values; this is to ensure consistency in approach and to avoid having MySQL try to do clever things and convert time zones (I want to keep the conversion in PHP, as it involves less modification to the application, and will be more portable when I eventually manage to escape from MySQL).
      • All DATETIME values will be adjusted to convert them to UTC time (currently all in Australian EST).

    Query modifications:

      • All usage of NOW() to be replaced with UTC_TIMESTAMP() in queries, triggers, functions, etc.

    Application modifications:

      • The application must store the time zone and preferred date format (e.g. US vs the rest of the world).
      • All timestamps will be converted according to the user settings before being displayed.
      • All input timestamps will be converted to UTC according to the user settings before being input.

    Additional notes: converting formats will be done at the application level for several main reasons.

      • The approach to converting time zones varies from DB to DB, so handling it there will be non-portable (and I really hope to be migrating away from MySQL some time in the not-too-distant future).
      • MySQL TIMESTAMPs have limited ranges for the permitted dates (~1970 to ~2038).
      • MySQL TIMESTAMPs have other undesirable attributes, including bizarre auto-update behaviour (if not carefully disabled) and sensitivity to the server zone settings (and I suspect I might screw these up when I migrate to Amazon later in the year).

    Is there anything that I'm missing here, or does anyone have better suggestions for the approach?
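
    The per-user conversion step in PHP is short with the built-in DateTime class; a minimal sketch, assuming the user's zone is stored as a zone identifier string:

        <?php
        // Convert a UTC DATETIME string from MySQL into the user's zone for display.
        function utcToUserZone($utcString, $userZone) {
            $dt = new DateTime($utcString, new DateTimeZone('UTC'));
            $dt->setTimezone(new DateTimeZone($userZone));
            return $dt->format('Y-m-d H:i:s');
        }

        echo utcToUserZone('2010-06-01 03:00:00', 'Australia/Sydney');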


  • Trouble creating a SQL query

    - by JoBu1324
    I've been thinking about how to compose this SQL query for a while now, but after thinking about it for a few hours I thought I'd ask the SO community to see if they have any ideas. Here is a mock-up of the relevant portion of the tables:

        contracts          payments
        ---------          --------
        id                 contract_id
        date               payment_date
        ar (yes/no)
        term

    The object of the query is to determine, per month, how many payments we expect vs how many payments we received.

    Conditions for expecting a payment:

      • Expected payments begin contracts.term months after contracts.date, if contracts.ar is "yes".
      • Payments continue to be expected until the month after the first missed payment.

    There is one other complication to this: payments might be late, but they need to show up as if they were paid on the date expected. The data is all there, but I've been having trouble wrapping my head around the SQL query. I am not an SQL guru - I merely have a decent amount of experience handling simpler queries. I'd like to avoid filtering the results in code, if possible - but without your help that may be what I have to do.

    Expected output:

        Month      Expected Payments    Received Payments
        January    500                  450
        February   498                  478
        March      234                  211
        April      987                  789
        ...

    I've created an SQL Fiddle: http://sqlfiddle.com/#!2/a2c3f/2
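
    The "received" half of the result is a plain aggregate; a sketch of that piece in MySQL (the "expected" side still needs the missed-payment cutoff rule applied, which is the genuinely hard part):

        SELECT DATE_FORMAT(p.payment_date, '%Y-%m') AS month,
               COUNT(*) AS received_payments
        FROM payments p
        JOIN contracts c ON c.id = p.contract_id
        WHERE c.ar = 'yes'
        GROUP BY DATE_FORMAT(p.payment_date, '%Y-%m')
        ORDER BY month;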


  • Amazon Product API ResponseGroups and Default results

    - by aboxy
    A. In our application, most of the data we work with is stored as free text, i.e. there is no categorization done as of now. We are using openNLP libraries to make sense of the data (extract keywords/classify) and do a query to Amazon web services to pull the results of the query. We use searchindex=All and keywords=. Results are not always returned, and we basically get 'AWS.ECommerceService.NoExactMatches'. How to avoid that?

      1) Is there a way to specify default results if no match is found? e.g. the Amazon carousel widget does that: if the search query did not return results, it basically shows some computer items.
      2) Should I always batch the request and add another search criterion to every request? If my first criterion does not pull any results, we can be sure that our 2nd query will always pull results (possibly caching?).

    Here is one search criterion: 'Open Circle Hoop Earrings Polished Stainless Steel Open Circle Hoop Earrings Polished Stainless Steel DiamondShark'. This returns no results via the API. On the Amazon site, I get alternative suggestions with some results which are pretty relevant. Is there a way to pull those results?

    B. We just need a thumbnail image, a title and a description for our app. Which responseGroup is appropriate? We are using Medium right now, but there is an awful lot of information even with that responseGroup.

    Any help is appreciated. Thanks


  • Ember Data Sync - LocalStorage+REST+RealTime+Online/Offline

    - by Miguel Madero
    We have a combination of requirements in terms of data access.

      • Pre-load some reference data. We need reference data to survive browser restarts instead of just living in memory, to avoid loading it all the time. I'm currently using the LocalStorageAdapter for that. Once we have it, we would like to sync changes (polling or using Socket.IO in the background and updating the LocalStorage could do the trick).
      • There are other models that are more transactional, where we would need to go directly to the server and get/save them. It would be nice to use something like the RESTAdapter for that.
      • Lastly, there are some operations that should work offline and changes should be synced later.

    To make it more concrete:

      • We pre-load vendor and "favorite products" into Local Storage. We work offline with those.
      • We need to sync server changes to vendor and product information.
      • If they search the full catalog, that requires them to be online.
      • When offline, we need to allow users to add something to their cart or even submit an order. We would like to queue this action and submit it when they have an Internet connection.

    So a few questions are derived from this:

      • Is there a way to use the RESTAdapter in combination with LocalStorage?
      • Is there some Socket.IO support? (Happy to do this part manually.)
      • Is there queueing support? Ideally at the Ember-Data level.

    I know we will have to do a lot of this manually and pull together the different lego pieces, but I wanted to ask for some perspective from experienced Ember devs.
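
    For the offline queueing piece, a browser-level sketch independent of Ember Data (which may or may not grow native support for this): buffer actions in localStorage and flush them when connectivity returns. submitToServer is a hypothetical application function.

        var QUEUE_KEY = 'pendingActions';

        function queueAction(action) {
            var queue = JSON.parse(localStorage.getItem(QUEUE_KEY) || '[]');
            queue.push(action);
            localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
        }

        window.addEventListener('online', function() {
            var queue = JSON.parse(localStorage.getItem(QUEUE_KEY) || '[]');
            localStorage.setItem(QUEUE_KEY, '[]');
            queue.forEach(function(action) {
                // e.g. an AJAX POST that re-queues the action on failure
                submitToServer(action);
            });
        });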


  • How to update a Widget dynamically (Not waiting 30 min for onUpdate to be called)?

    - by Donal Rafferty
    I am currently learning about widgets in Android. I want to create a WiFi widget that will display the SSID and the RSSI (signal) level. But I also want to be able to send it data from a service I am running that calculates the quality of sound over WiFi. Here is what I have after some reading and a quick tutorial:

        public class WlanWidget extends AppWidgetProvider {
            RemoteViews remoteViews;
            AppWidgetManager appWidgetManager;
            ComponentName thisWidget;
            WifiManager wifiManager;

            public void onUpdate(Context context, AppWidgetManager appWidgetManager,
                                 int[] appWidgetIds) {
                Timer timer = new Timer();
                timer.scheduleAtFixedRate(new WlanTimer(context, appWidgetManager),
                                          1, 10000);
            }

            private class WlanTimer extends TimerTask {
                RemoteViews remoteViews;
                AppWidgetManager appWidgetManager;
                ComponentName thisWidget;

                public WlanTimer(Context context, AppWidgetManager appWidgetManager) {
                    this.appWidgetManager = appWidgetManager;
                    remoteViews = new RemoteViews(context.getPackageName(),
                                                  R.layout.widget);
                    thisWidget = new ComponentName(context, WlanWidget.class);
                    wifiManager = (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
                }

                @Override
                public void run() {
                    remoteViews.setTextViewText(R.id.widget_textview,
                                                wifiManager.getConnectionInfo().getSSID());
                    appWidgetManager.updateAppWidget(thisWidget, remoteViews);
                }
            }
        }

    The above seems to work OK; it updates the SSID on the widget every 10 seconds. However, what is the most efficient way to get the information from my service, which will already be running, to update periodically on my widget? Also, is there a better approach to updating the widget than using a Timer and TimerTask? (Avoid polling.)

    UPDATE: As per Karan's suggestion, I have added the following code in my Service:

        RemoteViews remoteViews = new RemoteViews(context.getPackageName(),
                                                  R.layout.widget);
        ComponentName thisWidget = new ComponentName(context, WlanWidget.class);
        remoteViews.setTextViewText(R.id.widget_QCLevel, " " + qcPercentage);
        AppWidgetManager.getInstance(context).updateAppWidget(thisWidget, remoteViews);

    This gets run every time the RSSI level changes, but it still never updates the TextView on my widget. Any ideas why?
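
    One polling-free alternative (a sketch, not from the original post) is to let the system announce signal changes: a BroadcastReceiver registered for WifiManager.RSSI_CHANGED_ACTION pushes a widget update only when the value actually changes. Layout and view ids are reused from the code above.

        public class RssiReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                if (WifiManager.RSSI_CHANGED_ACTION.equals(intent.getAction())) {
                    int rssi = intent.getIntExtra(WifiManager.EXTRA_NEW_RSSI, 0);
                    RemoteViews views = new RemoteViews(context.getPackageName(),
                                                        R.layout.widget);
                    views.setTextViewText(R.id.widget_textview, "RSSI: " + rssi);
                    AppWidgetManager.getInstance(context).updateAppWidget(
                            new ComponentName(context, WlanWidget.class), views);
                }
            }
        }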


  • Code Golf: Counting XML elements in file on Android

    - by CSharperWithJava
    Take a simple XML file formatted like this:

        <Lists>
            <List>
                <Note/>
                ...
                <Note/>
            </List>
            <List>
                <Note/>
                ...
                <Note/>
            </List>
        </Lists>

    Each node has some attributes that actually hold the data of the file. I need a very quick way to count the number of each type of element (List and Note). Lists is simply the root and doesn't matter. I can do this with a simple string search or something similar, but I need to make this as fast as possible.

    Design parameters:

      • Must be in Java (Android application).
      • Must AVOID allocating memory as much as possible.
      • Must return the total number of Note elements and the number of List elements in the file, regardless of location in file.
      • The number of Lists will typically be small (1-4), and the number of Notes can potentially be very large (upwards of 1000, typically 100) per file.

    I look forward to your suggestions.
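
    A low-allocation sketch using Android's built-in XmlPullParser: it streams the document without building a DOM, so beyond the parser itself only two int counters are kept.

        import java.io.IOException;
        import java.io.Reader;
        import org.xmlpull.v1.XmlPullParser;
        import org.xmlpull.v1.XmlPullParserException;
        import android.util.Xml;

        public static int[] countElements(Reader in)
                throws XmlPullParserException, IOException {
            XmlPullParser parser = Xml.newPullParser();
            parser.setInput(in);
            int lists = 0, notes = 0;
            for (int event = parser.getEventType();
                 event != XmlPullParser.END_DOCUMENT;
                 event = parser.next()) {
                if (event == XmlPullParser.START_TAG) {
                    String name = parser.getName();
                    if ("List".equals(name)) lists++;
                    else if ("Note".equals(name)) notes++;
                }
            }
            return new int[] { lists, notes };
        }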


  • What if a large number of objects are passed to my SwingWorker.process() method?

    - by Trejkaz
    I just found an interesting situation. Suppose you have some SwingWorker (I've made this one vaguely reminiscent of my own):

        public class AddressTreeBuildingWorker extends SwingWorker<Void, NodePair> {
            private DefaultTreeModel model;

            public AddressTreeBuildingWorker(DefaultTreeModel model) {
                this.model = model;
            }

            @Override
            protected Void doInBackground() {
                // Omitted; performs variable processing to build a tree of
                // address nodes.
            }

            @Override
            protected void process(List<NodePair> chunks) {
                for (NodePair pair : chunks) {
                    // Actually the real thing inserts in order.
                    model.insertNodeInto(pair.child, pair.parent,
                                         pair.parent.getChildCount());
                }
            }

            private static class NodePair {
                private final DefaultMutableTreeNode parent;
                private final DefaultMutableTreeNode child;

                private NodePair(DefaultMutableTreeNode parent,
                                 DefaultMutableTreeNode child) {
                    this.parent = parent;
                    this.child = child;
                }
            }
        }

    If the work done in the background is significant then things work well - process() is called with relatively small lists of objects and everything is happy. The problem is, if the work done in the background is suddenly insignificant for whatever reason, process() receives a huge list of objects (I have seen 1,000,000, for instance), and by the time you process each object you have spent 20 seconds on the Event Dispatch Thread - exactly what SwingWorker was designed to avoid. In case it isn't clear, both of these occur on the same SwingWorker class for me - it depends on the input data and the type of processing the caller wanted. Is there a proper way to handle this? Obviously I can intentionally delay or yield the background processing thread so that a smaller number might arrive each time, but this doesn't feel like the right solution to me.
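
    One way to bound the EDT work (a sketch, not from the original question) is to decouple delivery from insertion: process() only enqueues, and a javax.swing.Timer drains a fixed number of pairs per tick, so each EDT slice stays short no matter how large the delivered chunk was. Both process() and Timer callbacks run on the EDT, so no locking is needed.

        private final java.util.ArrayDeque<NodePair> pending =
                new java.util.ArrayDeque<NodePair>();

        private final javax.swing.Timer drainTimer = new javax.swing.Timer(50,
            new java.awt.event.ActionListener() {
                public void actionPerformed(java.awt.event.ActionEvent e) {
                    // Insert at most 500 nodes per tick; the rest wait for
                    // the next tick, keeping the EDT responsive.
                    for (int i = 0; i < 500 && !pending.isEmpty(); i++) {
                        NodePair pair = pending.removeFirst();
                        model.insertNodeInto(pair.child, pair.parent,
                                             pair.parent.getChildCount());
                    }
                    if (pending.isEmpty()) drainTimer.stop();
                }
            });

        @Override
        protected void process(List<NodePair> chunks) {
            pending.addAll(chunks);
            if (!drainTimer.isRunning()) drainTimer.start();
        }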


  • Boost Property_Tree iterators, how to handle them?

    - by Andry
    Hello... I am sorry, but I asked a question about the same argument before; this problem concerns another aspect of the one described in that question (How to iterate a boost...). My problem is this - take a look at the following code:

        #include <iostream>
        #include <string>
        #include <boost/property_tree/ptree.hpp>
        #include <boost/property_tree/xml_parser.hpp>
        #include <boost/algorithm/string/trim.hpp>

        int main(int argc, char** argv)
        {
            using boost::property_tree::ptree;

            ptree pt;
            read_xml("try.xml", pt);

            ptree::const_iterator end = pt.end();
            for (ptree::const_iterator it = pt.begin(); it != end; it++)
                std::cout << "Here " << it->? << std::endl;
        }

    Well, as told me in the previous question, there is the possibility to use iterators on property_tree in Boost, but I do not know what type it is... and what methods or properties I can use... Well, I assume that it must be another ptree or something representing another xml hierarchy to be browsed again (if I want), but the documentation about this is very bad... I do not know why, but in the boost docs I cannot find anything good about this... just something about a macro to browse nodes, but that approach is one I would really like to avoid... Well, the question is this: once I get the iterator on a ptree, how can I access the node name, value and parameters (a node in an xml file)? Thank you
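
    For reference, a ptree iterator dereferences to a std::pair<const std::string, ptree>: ->first is the node name and ->second is the child subtree. A sketch of the loop body under that assumption:

        for (ptree::const_iterator it = pt.begin(); it != end; ++it) {
            // it->first:  element name (the XML tag)
            // it->second: the nested ptree for that element
            std::cout << "Here " << it->first << " = "
                      << it->second.get_value<std::string>() << std::endl;

            // Attributes, if present, live under the special "<xmlattr>" child:
            // it->second.get<std::string>("<xmlattr>.name")
        }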


  • How to efficiently store and update binary data in Mongodb?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes. I was looking for some $append_bytes and $replace_bytes type of modifiers, but it appears that the best I can do is $push for arrays. It seems like this would be doable by performing seek-write type operations if I somehow had access to the underlying bson on disk, but it does not appear to me that there is any way to do this in mongodb (and probably for good reason).

    If I were instead to just query this binary array, edit or add to it, and then update the document by rewriting the entire field, how costly will this be? Each binary array will be on the order of 1-2MB, and updates occur once every 5 minutes and across 1000s of documents. Worse yet, there is no easy way to spread these out (in time), and they will usually be happening close to one another on the 5 minute intervals. Does anyone have a good feel for how disastrous this will be? Seems like it would be problematic.

    An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files on disk, and reference the filename from my mongodb document. (I'm using python and pymongo, so I was looking at pytables.) I'd prefer to avoid this though, if possible.

    Is there any other alternative that I am overlooking here? Thanks in advance.
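
    For scale, the rewrite-the-field approach looks roughly like this in pymongo (a sketch; the collection and field names are made up, and Connection is the client class of this pymongo era):

        from bson.binary import Binary
        from pymongo import Connection

        def rewrite_payload(db, doc_id, new_bytes):
            doc = db.arrays.find_one({'_id': doc_id})
            data = bytearray(doc['payload'])   # copy the 1-2MB value into mutable form
            data[1024] = 0xFF                  # the "$replace_bytes" step
            data += new_bytes                  # the "$append_bytes" step
            # The whole value travels back over the wire and is rewritten;
            # if the document outgrows its allocation, mongod relocates it on disk.
            db.arrays.update({'_id': doc_id},
                             {'$set': {'payload': Binary(bytes(data))}})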


  • Can I delay the keyup event for jquery?

    - by Paul
    I'm using the rottentomatoes movie API in conjunction with twitter's typeahead plugin using bootstrap 2.0. I've been able to integrate the API, but the issue I'm having is that after every keyup event the API gets called. This is all fine and dandy, but I would rather make the call after a small pause, allowing the user to type in several characters first. Here is my current code that calls the API after a keyup event:

        var autocomplete = $('#searchinput').typeahead()
            .on('keyup', function(ev){
                ev.stopPropagation();
                ev.preventDefault();

                //filter out up/down, tab, enter, and escape keys
                if( $.inArray(ev.keyCode,[40,38,9,13,27]) === -1 ){
                    var self = $(this);

                    //set typeahead source to empty
                    self.data('typeahead').source = [];

                    //active used so we aren't triggering duplicate keyup events
                    if( !self.data('active') && self.val().length > 0){
                        self.data('active', true);

                        //Do data request. Insert your own API logic here.
                        $.getJSON("http://api.rottentomatoes.com/api/public/v1.0/movies.json?callback=?&apikey=MY_API_KEY&page_limit=5",{
                            q: encodeURI($(this).val())
                        }, function(data) {
                            //set this to true when your callback executes
                            self.data('active',true);

                            //Filter out your own parameters. Populate them into an
                            //array, since this is what typeahead's source requires.
                            var arr = [], i=0;
                            var movies = data.movies;
                            $.each(movies, function(index, movie) {
                                arr[i] = movie.title;
                                i++;
                            });

                            //set your results into the typeahead's source
                            self.data('typeahead').source = arr;

                            //trigger keyup on the typeahead to make it search
                            self.trigger('keyup');

                            //All done; set to false to prepare for the next remote query.
                            self.data('active', false);
                        });
                    }
                }
            });

    Is it possible to set a small delay and avoid calling the API after every keyup?
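
    The usual pattern for this (a sketch, independent of the typeahead plugin) is to debounce the handler: reset a timer on every keystroke and only fire the request once the user has paused. fetchMovies is a hypothetical wrapper around the $.getJSON call above.

        var debounceTimer = null;

        $('#searchinput').on('keyup', function(ev) {
            var self = $(this);
            clearTimeout(debounceTimer);          // cancel the pending call
            debounceTimer = setTimeout(function() {
                // Runs only after 300ms with no further keystrokes.
                fetchMovies(self.val());
            }, 300);
        });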


  • Can you catch exceeded allocated memory error before it kills the script?

    - by kristovaher
    The thing is that I want to catch memory problems before they happen. I have a system that gets rows from a database and casts the returned associative array to a variable, but I never know what the size of the database result is or how much memory it will take once the database request is made. This means that my software can fail simply because memory is exceeded, but I want to avoid that somehow.

    One of the ways is obviously to make database requests that are smaller, but what if this is not possible, or what if I do not know the size of the data that is returned from the database? Is it possible to 'catch' situations where memory use is exceeded in PHP? Something like this:

        $requestOk=memory_test(function(){
            return doSomething();
        });
        if($requestOk){
            // Memory seems fine
            // $requestOk now has the value from memory_test() function
        } else {
            // Function would have exceeded memory
        }

    I just find it problematic that my script can die at any moment because of memory issues. From what I know, try-catch cannot be used here because it is a fatal error. Any help would be appreciated!
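
    The fatal error itself can't be caught in PHP 5, but headroom can be checked before a risky allocation; a sketch using memory_get_usage() against the configured limit (it assumes memory_limit is set in plain megabytes, e.g. "128M" - a fuller parser would handle K/G suffixes too):

        <?php
        function hasMemoryHeadroom($neededBytes) {
            $limit = ini_get('memory_limit');
            if ($limit == -1) {
                return true; // no limit configured
            }
            $limitBytes = (int) $limit * 1024 * 1024; // assumes an "NNNM" value
            return memory_get_usage(true) + $neededBytes < $limitBytes;
        }

        if (hasMemoryHeadroom(50 * 1024 * 1024)) {
            $rows = doSomething(); // the risky database fetch
        } else {
            // Degrade gracefully: page the query, or bail out cleanly.
        }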


  • Sql Compact and __sysobjects

    - by Scott Wisniewski
    I have some SQL Compact queries that create tables inside of a transaction. This is mainly because I need to simulate temporary tables, which SQL Compact does not support. I do this by creating a real table, and then dropping it at the end of the transaction. This mostly works.

    Sometimes, however, when creating the tables SQL Compact will try to acquire PAGE level locks on the __sysobjects table. If there are several concurrent queries running that create "temp" tables, the attempt to acquire a page lock can result in a deadlock followed by a SqlLockTimeout exception. For normal tables I could fix this using a "with (rowlock)" hint. However, because I'm not writing the query that inserts into __sysobjects (SQL Server does that in response to "create table"), I can't do this.

    Does anyone know of a way I could get around this? I've thought about pulling the table creation out of the transaction, but that opens up the possibility of phantom temporary tables that I'd then need to clean up regularly. Ideally I'd like to avoid that if possible.


  • Database schema for a library

    - by ABach
    Hi all - I'm designing a library management system for a department at my university, and I wanted to enlist your eyes on my proposed schema. This post is primarily concerned with how we store multiple copies of each book; something about what I've designed rubs me the wrong way, and I'm hoping you all can point out better ways to tackle things.

    For dealing with users checking out books, I've devised three tables: book, customer, and book_copy. The relationships between these tables are as follows:

      • Every book has many book_copies (to avoid duplicating the book's information while storing the fact that we have multiple copies of that book).
      • Every customer has many book_copies (the other end of the relationship).

    The tables themselves are designed like this:

        book
        ----------------
        id
        title
        author
        isbn
        etc.

        customer
        ----------------
        id
        first_name
        last_name
        email
        address
        city
        state
        zip
        etc.

        book_copy
        ----------------
        id
        book_id (FK to book)
        customer_id (FK to customer)
        checked_out
        due_date
        etc.

    Something about this seems incorrect (or at least inefficient) to me - the perfectionist in me feels like I'm not normalizing this data correctly. What say ye? Is there a better, more effective way to design this schema? Thanks!


  • Good Replacement for User Control?

    - by David Lively
    I found user controls to be incredibly useful when working with ASP.NET webforms. By encapsulating the code required for displaying a control with the markup, creation of reusable components was very straightforward and very, very useful. While MVC provides convenient separation of concerns, this seems to break encapsulation (i.e., you can add a control without adding or using its supporting code, leading to runtime errors). Having to modify a controller every time I add a control to a view seems to me to integrate concerns, not separate them. I'd rather break the purist MVC ideology than give up the benefits of reusable, packaged controls.

    I need to be able to include components similar to webforms user controls throughout a site, but not for the entire site, and not at a level that belongs in a master page. These components should have their own code, not just markup (to interact with the business layer), and it would be great if the page controller didn't need to know about the control. Since MVC user controls don't have codebehind, I can't see a good way to do this. I've searched previous SO questions, and have yet to find a good answer.

    Options so far (in an attempt to avoid turning the comments section into a discussion):

      • RenderAction: This allows the view to call another controller, which will be responsible for interacting with the BLL and whatever data is necessary to its corresponding view. The calling view needs to be aware of the sub-controller. This seems to provide a nice way to encapsulate partial views and controls, without having to modify the calling controller.
      • RenderPartial: The calling controller is still responsible for executing whatever code is associated with the partial view, and making sure that the model passed to the partial view contains the data it expects. Effectively, modifying the partial view potentially means modifying the calling controller. Annoying, especially if this is used in multiple places.
      • Portable Areas: Place each control in its own project/area?
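
    For concreteness, the RenderAction option looks roughly like this in an ASPX view (a sketch; the controller and action names are made up, and RenderAction lives in MVC 2 or the MVC Futures assembly for MVC 1):

        <% Html.RenderAction("RecentPosts", "Sidebar"); %>

    The SidebarController's RecentPosts action fetches its own data from the BLL and returns a partial view, so the page's own controller never learns the control exists - which is exactly the encapsulation being asked for.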


  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (avg row length ~ 100 bytes) from an Oracle 10G database table into SQL Server (over WAN/VLAN with 6MBits/sec capacity) on a regular basis. So far, these are the options that I have tried, with a quick summary. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The time taken has been calculated using tests on smaller amounts of data and then extrapolating to estimate the time required.

      1. Using the data import wizard on the SQL Server, or SSIS packages, to import the data. It will take around 150 hours to complete the task.
      2. Using an Oracle batch job to spool data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file. The issue here is the size of the flat file, which is expected to run into GBs.
      3. Although this option is drastically different, I am even considering using a Linked Server to query the Oracle data directly at run-time to avoid bringing in data. Performance is a big problem, and I have limited control over the Oracle database in terms of creating table indexes.

    Regards, Uniball
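
    The Oracle side of option 2 is typically a SQL*Plus spool script; a minimal sketch (table and column names are placeholders):

        SET HEADING OFF
        SET FEEDBACK OFF
        SET PAGESIZE 0
        SET LINESIZE 200
        SPOOL /tmp/export.csv
        SELECT col1 || ',' || col2 || ',' || col3 FROM big_table;
        SPOOL OFF

    Compressing the file before the FTP step matters more than anything else here: delimited text of this shape commonly shrinks several-fold, which is significant over a 6 Mbit/s link.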


  • Mysql slow query: INNER JOIN + ORDER BY causes filesort

    - by Alexander
    Hello! I'm trying to optimize this query:

        SELECT `posts`.* FROM `posts`
        INNER JOIN `posts_tags` ON `posts`.id = `posts_tags`.post_id
        WHERE (((`posts_tags`.tag_id = 1)))
        ORDER BY posts.created_at DESC;

    The sizes of the tables are 38k and 31k rows, and MySQL uses "filesort", so it gets pretty slow. I tried different indexes, with no luck.

        CREATE TABLE `posts` (
          `id` int(11) NOT NULL auto_increment,
          `created_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_on_created_at` (`created_at`),
          KEY `for_tags` (`trashed`,`published`,`clan_private`,`created_at`)
        ) ENGINE=InnoDB AUTO_INCREMENT=44390 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

        CREATE TABLE `posts_tags` (
          `id` int(11) NOT NULL auto_increment,
          `post_id` int(11) default NULL,
          `tag_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_tags_on_post_id_and_tag_id` (`post_id`,`tag_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=63175 DEFAULT CHARSET=utf8

    EXPLAIN output:

        +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+
        | id | select_type | table      | type   | possible_keys            | key                      | key_len | ref                 | rows  | Extra                                                     |
        +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+
        |  1 | SIMPLE      | posts_tags | index  | index_post_id_and_tag_id | index_post_id_and_tag_id | 10      | NULL                | 24159 | Using where; Using index; Using temporary; Using filesort |
        |  1 | SIMPLE      | posts      | eq_ref | PRIMARY                  | PRIMARY                  | 4       | .posts_tags.post_id |     1 |                                                           |
        +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+
        2 rows in set (0.00 sec)

    What kind of index do I need to define to avoid MySQL using filesort? Is it possible when the order field is not in the WHERE clause?
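
    For reference, the usual first attempt here (a sketch, not a guaranteed fix) is a composite index that serves the WHERE side of the join, so MySQL can start from the tag filter:

        ALTER TABLE posts_tags
            ADD INDEX index_posts_tags_on_tag_id_and_post_id (tag_id, post_id);

    Even with that, the sort key (posts.created_at) lives in a different table than the filter (posts_tags.tag_id), and MySQL cannot use a single index to satisfy both across a join - which is why the filesort tends to persist for this query shape.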


  • Is unnecessary error handling recommended in business logic? (e.g. null checks, percentage limit checks, etc.)

    - by novice_at_work
    We usually put unnecessary checks in our business logic to avoid failures. E.g.:

    1. Null checks:

        public ObjectABC funcABC(){
            ObjectABC obj = new ObjectABC();
            ..........
            ..........
            //its never set to null here.
            ..........
            return obj;
        }

        ObjectABC o = funcABC();
        if(o!=null){
            //do something
        }

    Why do we need this null check if we are sure that it will never be null? Is it a good practice or not?

    2. Percentage limit checks:

        int pplReached = funA(..,..,..);
        int totalPpl = funB(..,..,..);

    funA() just puts a few more restrictions over the result of funB().

        Double percentage = (totalPpl==0||totalPpl<pplReached) ? 0.0 : pplReached/totalPpl;

    Do we again need this check?

    The question is: aren't we swallowing some fundamental issue by putting such checks? Issues which should ideally be surfaced are hidden by putting these checks. What is the recommended way?


  • Visitor Pattern can be replaced with Callback functions?

    - by getit
    Is there any significant benefit to using either technique? In case there are variations, the Visitor Pattern I mean is this: http://en.wikipedia.org/wiki/Visitor_pattern

    And below is an example of using a delegate to achieve the same effect (at least I think it is the same). Say there is a collection of nested elements: Schools contain Departments which contain Students. Instead of using the Visitor pattern to perform something on each collection item, why not use a simple callback (Action delegate in C#)? Say something like this:

        class Department
        {
            List<Student> Students;
        }

        class School
        {
            List<Department> Departments;

            void VisitStudents(Action<Student> actionDelegate)
            {
                foreach(var dep in this.Departments)
                {
                    foreach(var stu in dep.Students)
                    {
                        actionDelegate(stu);
                    }
                }
            }
        }

        School A = new School();
        ...//populate collections
        A.VisitStudents((student)=> { ...Do Something with student... });

    EDIT: Example with a delegate accepting multiple params. Say I wanted to pass both the student and the department; I could modify the definition like so:

        class School
        {
            List<Department> Departments;

            void VisitStudents(Action<Student, Department> actionDelegate, Action<Department> d2)
            {
                foreach(var dep in this.Departments)
                {
                    d2(dep); //This performs a different process.
                             //Using the Visitor pattern would avoid having to keep
                             //adding new delegates. This looks like the main benefit so far.
                    foreach(var stu in dep.Students)
                    {
                        actionDelegate(stu, dep);
                    }
                }
            }
        }


  • JSF 2 - clearing component attributes on page load?

    - by jamiebarrow
    Hi,

    The real question: is there a way to clear certain attributes for all components on an initial page load?

    Background info: in my application, I have a JSF 2.0 frontend layer that speaks to a service layer (the service layer is made up of Spring beans that get injected into the managed beans). The service layer does its own validation, and I do the same validation in the frontend layer using my own validator classes to try and avoid code duplication somehow. These validator classes aren't JSF validators; they're just POJOs. I'm only doing validation on an action, so in the action method I perform validation, and only if it's valid do I call through to the service layer.

    When I do my validation, I set the styleClass and title on the UIComponents using reflection (so if the UIComponent has the setStyleClass(:String) or setTitle(:String) methods, then I use them). This works nicely, and on a validation error I see a nicely styled text box with a popup containing the error message if I hover over it. However, since the component is bound to a Session Scoped Managed Bean, it seems that these attributes stick. So if I navigate away and come back to the same page, the styleClass and title are still in the error state.

    Is there a way to clear the styleClass and title attributes on each initial page load?

    Thanks,
    James

    P.S. I'm using the action method to validate because of some issues I had before with JSF 1.2 and its validation methods, but I can't remember why... so that's why I'm using the action method to validate.


  • Trying to make a plugin system in C++

    - by Pirate for Profit
    I'm making a task-based program that needs to have plugins. Tasks need to have properties which can be easily edited; I think this can be done with Qt's Meta-Object Compiler reflection capabilities (I could be wrong, but I should be able to stick this in a QtPropertyBrowser?). So here's the base:

        class Task : public QObject
        {
            Q_OBJECT
        public:
            explicit Task(QObject *parent = 0) : QObject(parent) {}
            virtual void run() = 0;
        signals:
            void taskFinished(bool success = true);
        };

    Then a plugin might have this task:

        class PrinterTask : public Task
        {
            Q_OBJECT
        public:
            explicit PrinterTask(QObject *parent = 0) : Task(parent) {}

            void run()
            {
                Printer::getInstance()->Print(this->getData()); // fictional
                emit taskFinished(true);
            }

            inline const QString &getData() const;
            inline void setData(QString data);

            Q_PROPERTY(QString data READ getData WRITE setData) // for reflection
        };

    In a nutshell, here's what I want to do:

        // load plugin
        // find all the Task interface implementations in it
        // have user able to choose a Task and edit its specific Q_PROPERTYs
        // run the Task

    It's important that one .dll has multiple tasks, because I want them to be associated by their module. For instance, "FileTasks.dll" could have tasks for deleting files, making files, etc. The only problem with Qt's plugin setup is that I want to store X amount of Tasks in one .dll module. As far as I can tell, you can only load one interface per plugin (I could be wrong?). If so, the only possible way to accomplish what I want is to create a FactoryInterface with string-based keys which return the objects (as in Qt's Plug & Paint example), which is a terrible boilerplate that I would like to avoid.

    Does anyone know a cleaner C++ plugin architecture than Qt's to do what I want? Also, am I safely assuming Qt's reflection capabilities will do what I want (i.e. being able to edit an unknown dynamically loaded task's properties with the QtPropertyBrowser before dispatching)?
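
    A sketch of the single-factory variant (one interface per .dll, returning any number of tasks), assuming Qt 4 plugin macros; the interface name and IID string are made up:

        #include <QList>
        #include <QtPlugin>

        class TaskFactoryInterface
        {
        public:
            virtual ~TaskFactoryInterface() {}
            // One .dll, many tasks: the module hands back every Task it offers.
            virtual QList<Task*> createTasks(QObject *parent) = 0;
        };

        Q_DECLARE_INTERFACE(TaskFactoryInterface,
                            "com.example.TaskFactoryInterface/1.0")

    A FileTasks plugin would implement createTasks() to return its DeleteFileTask, MakeFileTask, and so on, and the host enumerates them via QPluginLoader - no string keys needed, and each returned Task's Q_PROPERTYs remain visible to QtPropertyBrowser through the usual QMetaObject machinery.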


  • Why is using a common-lookup table to restrict the status of an entity wrong?

    - by FreshCode
    According to Five Simple Database Design Errors You Should Avoid by Anith Sen, using a common-lookup table to store the possible statuses for an entity is a common mistake. Why is this wrong?

    I disagree that it's wrong, citing the example of jobs at a repair service with many possible statuses that generally have a natural flow, e.g.:

      • Booked In
      • Assigned to Technician
      • Diagnosing Problem
      • Waiting for Client Confirmation
      • Repaired & Ready for Pickup
      • Repaired & Couriered
      • Irreparable & Ready for Pickup
      • Quote Rejected

    Arguably, some of these statuses can be normalised to tables like Couriered Items, Completed Jobs and Quotes (with Pending/Accepted/Rejected statuses), but that feels like unnecessary schema complication. Another common example would be order statuses that restrict the status of an order, e.g.:

      • Pending
      • Completed
      • Shipped
      • Cancelled
      • Refunded

    The status titles and descriptions are in one place for editing, and they are easy to scaffold as a drop-down with a foreign key for dynamic data applications. This has worked well for me in the past. If the business rules dictate the creation of a new order status, I can just add it to the OrderStatus table, without rebuilding my code.
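
    For reference, the pattern under discussion is just a lookup table plus a foreign key; a minimal sketch (names are illustrative):

        CREATE TABLE OrderStatus (
            id          INT PRIMARY KEY,
            title       VARCHAR(50) NOT NULL,
            description VARCHAR(255)
        );

        CREATE TABLE Orders (
            id        INT PRIMARY KEY,
            status_id INT NOT NULL REFERENCES OrderStatus(id)
        );

        -- A new business status is a row, not a schema change:
        INSERT INTO OrderStatus (id, title) VALUES (6, 'On Hold');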


  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl5.8 with $Test::More::VERSION being '0.80') which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan - it's generally a bad idea not to catch when your unit test dies prematurely.

    What is the most idiomatic way of running a configurable amount of tests, assuming no no_plan or done_testing() is used?

    Details: my unit tests usually take the form of:

        use Test::More;
        my @test_set = (
             [ "Test #1", $param1, $param2, ... ]
            ,[ "Test #2", $param1, $param2, ... ]
            # ,...
        );
        foreach my $test (@test_set) {
            run_test($test);
        }
        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

        use Test::More;
        BEGIN {
            our @test_set = (); # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }
        our @test_set; # Must do!!! Since first "our" was in BEGIN's scope :(
        foreach my $test (@test_set) {
            run_test($test);
        }
        sub run_test {} # Same as above

    I feel this can be done more idiomatically, but I'm not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN {} and after it.
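
    One simplification worth noting (a sketch, relying only on behaviour Test::More 0.80 already had): plan() does not have to run at compile time, only before the first assertion, so the BEGIN block and the duplicate our can go away entirely. count_tests and run_test are the same subs as in the question.

        use strict;
        use Test::More;     # no plan yet

        my @test_set = (
             [ "Test #1", "param1", "param2" ]
            ,[ "Test #2", "param1", "param2" ]
        );

        # Ordinary runtime code: count first, then plan, then test.
        my $expected_tests = 0;
        $expected_tests += count_tests($_) for @test_set;
        plan tests => $expected_tests;

        run_test($_) for @test_set;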

