Search Results

Search found 7625 results on 305 pages for 'duane fields'.


  • Getting UPK data into Excel

    - by maria.cozzolino(at)oracle.com
    Did you ever want someone to review your UPK outline outside of the Developer? You can send your outline to an Excel report, which can be distributed through email. Depending on how much additional data you want with your outline, there are two ways you can do this task.

    Basic data:
    • You can print a listing of all the items in the outline.
    • With your outline open, choose File/Print...
    • Choose the "Save document as" command on the right, and choose Excel (or xlsx).
    • HINT: If you have not expanded your entire outline, it's faster to use the commands in the Developer to expand the entire outline. However, you can expand specific sections by clicking on them in the print preview.
    • NOTE: If you have the Details view displayed rather than the Player view, you can print all the data that appears in that view.

    Advanced data: If you want a more detailed report, you can use the HP Quality Center publishing style, which also creates an Excel file. This style contains a default set of fields for use with Quality Center, but any of the metadata fields can be added to the report, and it can be used for more than just importing into HP Quality Center. To add additional columns to the HP Quality Center publishing style:
    1. Make a copy of the publishing style. This ensures that you have a good copy to revert to if something goes wrong with your customizations, and it also allows you to keep your modifications when the software is upgraded.
    2. Open the copy of the columnspec.xml file in your favorite XML editor - I use Notepad. (This file is located in a language-specific folder in the HP Quality Center publishing style.)
    3. Scroll down the columnspec file until you find the column you want to include. All the metadata fields that can be added to the report are listed in the columnspec file - you just need to tell the system to include the columns.
    4. You will see a series of sections, one for each available column.
    5. Change the value for "col export" to "yes". This will include the column in the Excel file.
    6. If desired, change the value for "Play_ModesColHeader" to whatever name you wish to appear in the Excel column heading.
    7. Save the columnspec file.
    8. Save the publishing style package.

    Now, when you publish for HP Quality Center, you will see your newly added columns. You can refer to the section on Customizing HP Quality Center Output in the Content Deployment Guide for additional customization details. Happy customization! I'd be interested in hearing what other uses you have for Excel reporting. Wishing you and yours a happy and healthy New Year! ~~Maria Cozzolino, Manager of Software Requirements and UI


  • MySQL for Excel 1.3.0 Beta has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.3.0. This is a beta release for 1.3.x.

    MySQL for Excel is an application plug-in enabling data analysts to very easily access and manipulate MySQL data within Microsoft Excel. It lets you work with a MySQL database directly from within Microsoft Excel, so you can easily do tasks such as:
    • Importing MySQL data into Excel
    • Exporting Excel data directly into MySQL, to a new or existing table
    • Editing MySQL data directly within Excel

    As this is a beta version, the MySQL for Excel product can be downloaded only by using the standalone installer at this link: http://dev.mysql.com/downloads/windows/excel/ Your feedback on this beta version is very much appreciated; you can raise bugs on the MySQL bugs page or give us your comments on the MySQL for Excel forum.

    Changes in MySQL for Excel 1.3.0 (2014-06-06, Beta)
    This section documents all changes and bug fixes applied to MySQL for Excel since the release of 1.2.1. Several new features were added; for more information see What Is New In MySQL for Excel (http://dev.mysql.com/doc/refman/5.6/en/mysql-for-excel-what-is-new.html).

    Known limitations:
    • Upgrading from MySQL for Excel 1.2.0 and lower is not possible due to a bug fixed in MySQL for Excel 1.2.1. In that scenario, the old version (MySQL for Excel 1.2.0 or lower) must be uninstalled first. Upgrading from version 1.2.1 works correctly.
    • <CTRL> + <A> cannot be used to select all database objects. Either <SHIFT> + <Arrow Key> or <CTRL> + click must be used instead.
    • PivotTables are normally placed to the right (skipping one column) of the imported data; they will not be created if there is another existing Excel object at that position.

    Functionality Added or Changed:
    • Imported data can now be refreshed by using the native Refresh feature. Fields in the imported data sheet are then updated against the live MySQL database using the saved connection ID.
    • Functionality was added to import data directly into PivotTables, which can be created from any Import operation.
    • Multiple objects (tables and views) can now be imported into Excel, where before only one object could be selected. Relational information is also utilized when importing multiple objects.
    • All options now have descriptive tooltips. Hovering over an option/preference displays helpful information about its use.
    • A new Export Data, Advanced Options option was added that shows all available data types in the Data Type combo box, instead of only showing a subset of the most popular data types.
    • The option dialogs now include a Refresh to Defaults button that resets the dialog's options to their default values. Each option dialog is set individually.
    • A new Add Summary Fields for Numeric Columns option was added to the Import Data dialog that automatically adds summary fields for numeric data after the last row of the imported data. The specific summary function is selectable from many options, such as "Total" and "Average."
    • A new collation option was added for the schema and table creation wizards. The default schema collation is "Server Default", and the default table collation is "Schema Default". Collation options may be selected from a drop-down list of all available collations.

    Quick links:
    • MySQL for Excel documentation: http://dev.mysql.com/doc/en/mysql-for-excel.html
    • MySQL on Windows blog: http://blogs.oracle.com/MySQLOnWindows
    • MySQL for Excel forum: http://forums.mysql.com/list.php?172
    • MySQL YouTube channel: http://www.youtube.com/user/MySQLChannel

    Enjoy and thanks for the support!


  • Moving the Oracle User Experience Forward with the New Release 7 Simplified UI for Oracle Sales Cloud

    - by mvaughan
    By Kathy Miedema, Oracle Applications User Experience

    In September 2013, Release 7 for Oracle Cloud Applications became generally available for Oracle Sales Cloud and HCM Cloud. This significant release allowed the Oracle Applications User Experience (UX) team to finally talk freely about Simplified UI, a user experience project in the works since Oracle OpenWorld 2012. Simplified UI represents the direction that the Oracle user experience – for all of its enterprise applications – is heading. Oracle's Apps UX team began by building a Simplified UI for sales representatives. You can find that today in Release 7, and it was demoed extensively during OpenWorld 2013 in San Francisco.

    [Screenshot: how Opportunities appear in the new Simplified UI for Oracle Sales Cloud, a user interface built for sales reps.]

    Analyst Rebecca Wettemann, vice president of Nucleus Research, saw Simplified UI at Oracle OpenWorld 2013 and talked about it with CRM Buyer in "Oracle Revs Its Cloud Engines for a Better Customer Experience." Wettemann said there are distinct themes to the latest release: "One is usability. Oracle Sales Cloud, for example, is designed to have zero training for onboarding sales reps, which it does," she explained. "It is quite impressive, actually -- the intuitive nature of the application and the design work they have done with this goal in mind." The software uses as few buttons and fields as possible, she pointed out. "The sales rep doesn't have to ask, 'what is the next step?' because she can see what it is." In fact, there are three themes driving the usability that Wettemann noted. They are simplicity, mobility, and extensibility, and we write more about them on the Usable Apps web site. These three themes embody the strategy for Oracle's cloud applications user experiences.

    Simplified UI for Oracle Sales Cloud

    In developing a Simplified UI for Oracle Sales Cloud, Oracle's UX team concentrated on the tasks that sales reps need to do most frequently and that are most important. "Knowing that the majority of their work lives are spent on the road and on the go, they need to be able to quickly get in and qualify and convert their leads, monitor and progress their opportunities, update their customer and contact information, and manage their schedule," Jeremy Ashley, Vice President of the Applications UX team, said.

    Ashley said the Apps UX team has a good reason for creating a Simplified UI that focuses on self-service. "Sales people spend the day selling stuff," he said. "The only reason they use software is because the company wants to track what they're doing." Traditional systems of tracking that information include filling in a spreadsheet of leads or sales. Oracle wants to automate this process for the salesperson, and enable that person to keep everyone who needs to know up-to-date easily and quickly. Simplified UI addresses that problem by providing light-touch input. "It has to be useful to the salesperson," Ashley said about the Sales Cloud user experience. Simplified UI can tell sales reps about key opportunities, or provide information about a contact in just a click or two.

    [Screenshot: Customer information is accessible quickly and easily with Simplified UI for Oracle Sales Cloud.]

    Simplified UI for Sales Cloud can also be extended easily, Ashley said. Users usually just need to add various business fields or create and modify analytical reports. The way that Simplified UI is constructed allows extensibility to happen by hiding or showing a few necessary fields. The Settings user interface, starting in Release 7, allows for the simple configuration of the most important visual elements. "With Sales Cloud, we identified a need to make the application useful and very simple," Ashley said. Simplified UI meets that need.

    Where can you find out more?

    To find out more about Simplified UI and Oracle's ongoing investment in applications user experience innovations, come to one of our sessions at a user group conference near you. Stay tuned to the Voice of User Experience (VoX) blog – the next post will be about Simplified UI and HCM Cloud.


  • Mass Metadata Updates with Folders

    - by Kyle Hatlestad
    With the release of WebCenter Content PS5, a new folder architecture called 'Framework Folders' was introduced. It is meant to replace the 'Folders_g' folder architecture. While the concepts of a folder structure and access to those folders through Desktop Integration Suite remain the same, the underlying architecture of the component has been completely rewritten. One of the main goals of the new folders is to scale better at large volumes and to remove the limitation of 1000 content items or sub-folders within a folder. Along with the new architecture, it has a new look, and a few additional features have been added. One of those features is Query Folders. These are folders that are populated simply by a query rather than by literally putting items within them. This is something that the Library has provided, but it always took an administrator to define them through the Web Layout Editor. Now users can quickly define query folders anywhere within the standard folder hierarchy.

    Within this new Framework Folders component is the very handy ability to do metadata updates. It's similar to the Propagate feature in Folders_g, but there are some key differences that make this very flexible and much more powerful:
    • It's used within regular folders and Query Folders. So the content you're updating doesn't all have to be in the same folder...or in a folder at all.
    • The user decides what metadata to propagate. In Folders_g, the system administrator controls which fields will be propagated using a single administration page. In Framework Folders, the user decides at that time which fields they want to update.
    • You set the value you want on the propagation screen. In Folders_g, it used the metadata defined on the parent folder to propagate. With Framework Folders, you supply the new metadata value when you select the fields you want to update. It does not have to be defined on the parent folder.

    Because of these differences, I think the new propagate method is much more useful. Instead of always having to rely on Archiver or a custom spreadsheet, you can quickly do mass metadata updates right within folders. Here are the basic steps to perform propagation:
    1. First create a folder for the propagation. You can use a regular folder, but a Query Folder will work as well.
    2. Go into the folder to get the results.
    3. In the Edit menu, select 'Propagate'.
    4. Select the check-box next to the field to update and enter the new value.
    5. Click the Propagate button.
    6. Once complete, a dialog will appear showing it is complete.

    What's also nice is that the process happens asynchronously in the background, which means you can browse to other pages and do other things while it is still working. You aren't stuck on the page waiting for it to complete. In addition, you can add a configuration flag to the server to turn on a status indicator icon. Set 'FldEnableInProcessIndicator=1' and it will show a working icon as it's doing the propagation.

    There is a caveat when using propagation on a Query Folder. While a propagation on a regular folder will update all of the items within that folder, a Query Folder propagation will only update the first 50 items. So you may need to run it multiple times depending on the size...and have the query exclude the items as they get updated.

    One extra note...Framework Folders is offered as the default folder architecture in the PS5 release of WebCenter Content. But if you are using WebCenter Content integrated with another product that makes use of folders (WebCenter Portal/Spaces, Fusion Applications, Primavera, etc.), you'll need to continue using Folders_g until they are updated to use the new folders.


  • What information must never appear in logs?

    - by MainMa
    I'm about to write the company guidelines about what must never appear in logs (the trace of an application). In fact, some developers try to include as much information as possible in the trace, making it risky to store those logs, and extremely dangerous to submit them, especially when the customer doesn't know this information is stored, because she never cared about this and never read the documentation and/or warning messages.

    For example, when dealing with files, some developers are tempted to trace the names of the files. If we trace everything on error before appending a file name to a directory, it will be easy to notice, for example, that the appended name is too long, and that the bug in the code was forgetting to check the length of the concatenated string. That is helpful, but it is sensitive data, and it must never appear in logs. In the same way, the following must never appear in the trace:
    • passwords,
    • IP addresses and network information (MAC address, host name, etc.)¹,
    • database accesses,
    • direct input from the user and stored business data.

    So what other types of information must be banished from the logs? Are there any guidelines already written which I can use?

    ¹ Obviously, I'm not talking about things such as IIS or Apache logs. What I'm talking about is the sort of information which is collected with the sole intent of debugging the application itself, not of tracing the activity of untrusted entities.

    Edit: Thank you for your answers and your comments. Since my question is not very precise, I'll try to answer the questions asked in the comments:

    What am I doing with the logs? The logs of the application may be stored locally: either in plain text on the hard disk of the localhost, in a database (again in plain text), or in the Windows event log. In every case, the concern is that those locations may not be safe enough. For example, when a customer runs an application and this application stores its logs in a plain text file in the temp directory, anybody who has physical access to the PC can read those logs. The logs of the application may also be sent over the internet. For example, if a customer has an issue with an application, we can ask her to run this application in full-trace mode and to send us the log file. Also, some applications may automatically send the crash report to us (and even if there are warnings about sensitive data, in most cases customers don't read them).

    Am I talking about specific fields? No. I'm working on general business applications only, so the only sensitive data is business data. There is nothing related to health or other fields covered by specific regulations. But thank you for bringing that up; I probably should take a look at those fields for some clues about what I can include in the guidelines.

    Isn't it easier to encrypt the data? No. It would make every application much more complicated, especially if we want to use C# diagnostics and TraceSource. It would also require managing authorizations, which is not the easiest thing to do. Finally, if we are talking about the logs submitted to us by a customer, we must be able to read the logs, but without having access to the sensitive data. So technically, it's easier to never include sensitive information in the logs at all, and to never have to care about how and where those logs are stored.
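    Since the question mentions C# diagnostics and TraceSource, here is a minimal sketch of the "log the symptom, not the data" idea discussed above. It is only an illustration: the SafeTrace class, the "MyApp" source name, and the Token helper are hypothetical, not part of any framework or of the original post. It traces what is needed to diagnose the file-name bug described earlier (the lengths) rather than the sensitive value itself.

        using System;
        using System.Diagnostics;

        static class SafeTrace
        {
            // Hypothetical trace source name, for illustration only.
            static readonly TraceSource Source = new TraceSource("MyApp");

            // Trace enough to diagnose the "concatenated path too long" bug
            // without writing the path or file name into the log.
            public static void PathTooLong(string fullPath, int maxAllowedLength)
            {
                Source.TraceEvent(TraceEventType.Error, 0,
                    "Concatenated file path too long: {0} characters (limit {1}).",
                    fullPath.Length, maxAllowedLength);
            }

            // Hypothetical helper: a length plus a non-cryptographic hash lets you
            // correlate log entries about a value without storing the value itself.
            public static string Token(string sensitive)
            {
                if (string.IsNullOrEmpty(sensitive)) return "<empty>";
                return "len=" + sensitive.Length +
                       ",hash=" + sensitive.GetHashCode().ToString("X8");
            }
        }

    (Note that GetHashCode is not stable across processes on newer runtimes; a real guideline would pick a deliberate, documented hashing scheme if correlation tokens are needed.)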


  • Octree subdivision problem

    - by ChaosDev
    I'm creating an octree manually and want a function to effectively divide all nodes and their subnodes - for example, I press a button and the nodes are divided; press again and all subnodes are divided again. It must go like 1 - 8 - 64. The problem is that I don't understand how to organize the recursive loops for that. The OctreeNode in my unoptimized implementation contains pointers to the subnodes (children), the parent, an extra vector (containing duplicates of the children), generation info, and lots of information for drawing.

        class gOctreeNode {
            //necessary fields
            gOctreeNode* FrontBottomLeftNode;
            gOctreeNode* FrontBottomRightNode;
            gOctreeNode* FrontTopLeftNode;
            gOctreeNode* FrontTopRightNode;
            gOctreeNode* BackBottomLeftNode;
            gOctreeNode* BackBottomRightNode;
            gOctreeNode* BackTopLeftNode;
            gOctreeNode* BackTopRightNode;
            gOctreeNode* mParentNode;
            std::vector<gOctreeNode*> m_ChildsVector;
            UINT mGeneration;
            bool mSplitted;
            bool isSplitted() { return mSplitted; }
            ....
            //unnecessary fields
        };

    DivideNode of the Octree class fills these fields, sets mSplitted to true, and prepares the node for correct drawing. The Octree contains basic nodes (m_nodes). A basic node can be divided, but now I want to recursively divide an already divided basic node with 8 subnodes. So I wrote this function:

        void DivideAllChildCells(int ix, int ih, int id)
        {
            std::vector<gOctreeNode*> nlist;
            std::vector<gOctreeNode*> dlist;
            int index = (ix * m_Height * m_Depth) + (ih * m_Depth) + (id * 1); // get index of specified node
            gOctreeNode* baseNode = m_nodes[index].get();
            nlist.push_back(baseNode->FrontTopLeftNode);
            nlist.push_back(baseNode->FrontTopRightNode);
            nlist.push_back(baseNode->FrontBottomLeftNode);
            nlist.push_back(baseNode->FrontBottomRightNode);
            nlist.push_back(baseNode->BackBottomLeftNode);
            nlist.push_back(baseNode->BackBottomRightNode);
            nlist.push_back(baseNode->BackTopLeftNode);
            nlist.push_back(baseNode->BackTopRightNode);
            bool cont = true;
            UINT d = 0; // additional recursive loop param (?)
            UINT g = 0; // additional recursive loop param (?)
            LoopNodes(d, g, nlist, dlist);
            // Divide resulting nodes
            for (UINT i = 0; i < dlist.size(); i++)
            {
                DivideNode(dlist[i]);
            }
        }

    And now, back to the main question: here is LoopNodes, which must do all the work of collecting the dlist nodes for splitting.

        void LoopNodes(UINT& od, UINT& og, std::vector<gOctreeNode*>& nlist, std::vector<gOctreeNode*>& dnodes)
        {
            //od++; // recursion depth
            bool f = false;
            // pass through childs
            for (UINT i = 0; i < 8; i++)
            {
                if (nlist[i]->isSplitted()) // if node splitted and have childs
                {
                    // pass forward through tree
                    for (UINT j = 0; j < 8; j++)
                    {
                        nlist[j] = nlist[j]->m_ChildsVector[j]; // set pointers to these childs
                    }
                    LoopNodes(od, og, nlist, dnodes);
                }
                else // if no childs
                {
                    // add to split vector
                    dnodes.push_back(nlist[i]);
                }
            }
        }

    This version of LoopNodes works correctly for two (or one?) generations; after that it will not divide neighbouring nodes, only some corners. I need the correct algorithm. (Screenshot) All I need is a correct version of LoopNodes that can add all nodes for DivideNode.


  • Call For Papers Tips and Tricks

    - by speakjava
    This year's JavaOne session review has just been completed and by now everyone who submitted papers should know whether they were successful or not.  I had the pleasure again this year of leading the review of the 'JavaFX and Rich User Experiences' track.  I thought it would be useful to write up a few comments to help people in future when submitting session proposals, not just for JavaOne, but for any of the many developer conferences that run around the world throughout the year.  This also draws on conversations I recently had with various Java User Group leaders at the Oracle User Group summit in Riga.  Many of these leaders run some of the biggest and most successful Java conferences in Europe. Try to think of a title which will sound interesting.  For example, "Experiences of performance tuning embedded Java for an ARM architecture based single board computer" probably isn't going to get as much attention as "Do you like coffee with your dessert? Java on the Raspberry Pi".  When thinking of the subject and title for your talk try to steer clear of sessions that might be too generic (and so get lost in a group of similar sessions).  Introductory talks are great when the audience is new to a subject, but beware of providing sessions that are too basic when the technology has been around for a while and there are lots of tutorials already available on the web. JavaOne, like many other conferences has a number of fields that need to be filled in when submitting a paper.  Many of these are selected from pull-down lists (like which track the session is applicable to).  Check these lists carefully.  A number of sessions we had needed to be shuffled between tracks when it was thought that the one selected was not appropriate.  We didn't count this against any sessions, but it's always a good idea to try and get the right one from the start, just in case. JavaOne, again like many other conferences, has two fields that describe the session being submitted: abstract and summary.  These are the most critical to a successful submission.  The two fields have different names and that is significant; a frequent mistake people make is to write an abstract for a session and then duplicate it for the summary.  The abstract (at least in the case of JavaOne) is what gets printed in the show guide and is typically what will be used by attendees when deciding what sessions to attend.  This is where you need to sell your session, not just to the reviewers, but also the people who you want in your audience.  Submitting a one line abstract (unless it's a really good one line) is not usually enough to decide whether this is worth investing an hour of conference time.  The abstract typically has a limit of a few hundred characters.  Try to use as many of them as possible to get as much information about your session across.  The summary should be different from the abstract (and don't leave it blank as some people do).  This field is where you can give the reviewers more detail about things like the structure of the talk, possible demonstrations and so on.  As a reviewer I look to this section to help me decide whether the hard-sell of the title and abstract will actually be reflected in the final content.  Try to make this comprehensive, but don't make it excessively long.  When you have to review possibly hundreds of sessions a certain level of conciseness can make life easier for reviewers and help the cause of your session. 
If you've not made many submissions for talks in the past, or if this is your first, try to give reviewers places to find background on you as a presenter.  Having an active blog and Twitter handle can also help reviewers if they're not sure what your level of expertise is.  Many call-for-papers have places for you to include this type of information.  It's always good to have new and original presenters and presentations for conferences.  Hopefully these tips will help you be successful when you answer the next call-for-papers.


  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, in your application you perform the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight in the .NET side of things, but not in the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database participate how much to the total time taken for task T, we need different tools. There are two kind of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by hibernating rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf) and work the same as LLBLGen Prof. For RDBMS profilers, you have to look whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server, for Oracle it's build into the RDBMS, however there are also 3rd party tools. Which tool you're using isn't really important, what's important is that you get insight in which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate as well do) is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight in what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid and which parts actually take place and if the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
don't contribute significantly to the total time taken) can be ignored from now on, we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task is depending on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spend inside the database and almost no time is spend in the .NET code, meaning the RDBMS part contributes the most to the total time taken, the rest is compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide that further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and in the area where the most time is contributed to the total time taken is room for improvement, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future when someone else looks at the application and starts asking questions you can answer them properly and new analysis is only necessary if situations changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromise memory consumption over performance) to avoid unnecessary re-querying data and re-consuming the results. After applying a change, it's key you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether it moved the problem to a different part of the application. Don't fall into the trap to do partly analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all. 
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS, these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a Field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be the number of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype- and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to process through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have (e.g. due to a join): the datareader will do DISTINCT filtering on the client. this is a little slower but it does perform paging functionality on the data-reader so it won't fetch all rows even if the query suggests it does. Fetch massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spend on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value. Using .ToList() constructs inside linq queries. It might be you use .ToList() somewhere in a Linq query which makes the query be run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do an in-memory filtering, as the linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run. Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kind of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first and then updating the entities in a loop, and afterwards saving them. It might however be a compromise you don't want to take as it is working around the idea of having an object graph in memory which is manipulated and instead makes the code fully aware there's a RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success as well as knowing what's going on inside the application you built. 
I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.  
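    To make the .ToList() pitfall described in the list above concrete, here is a minimal C# sketch. The Customer shape and the customers parameter are hypothetical stand-ins for the article's metaData.Customers; any O/R mapper or Linq provider that exposes IQueryable<T> behaves the same way.

        using System.Linq;

        public class Customer { public string Country { get; set; } }

        public static class ToListPitfall
        {
            public static void Compare(IQueryable<Customer> customers)
            {
                // In-memory filtering: ToList() pulls every customer across the wire
                // first, and the where-clause then runs on the client over an
                // IEnumerable<Customer>.
                var inMemory = from c in customers.ToList()
                               where c.Country == "Norway"
                               select c;

                // Provider-side filtering: the expression stays an IQueryable<Customer>,
                // so the filter can be translated into SQL (WHERE Country = 'Norway')
                // and only the matching rows are fetched.
                var onServer = from c in customers
                               where c.Country == "Norway"
                               select c;
            }
        }

    The profiling steps described above will show the difference immediately: the first form transports the whole Customers table, the second only the Norwegian rows.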


  • MVC - Cocoa interface - Cocoa Design pattern book

    - by Idan
    So I started reading this book: http://www.amazon.com/Cocoa-Design-Patterns-Erik-Buck/dp/0321535022 In chapter 2 it explains the MVC design pattern and gives an example on which I need some clarification. The simple example shows a view with the following fields: hourlyRate, WorkHours, Standarthours, salary. The example is divided into 3 parts:
    • View - contains some text fields and a table (the table contains a list of employees' data).
    • Controller - comprised of an NSArrayController class (contains an array of MyEmployee).
    • Model - the MyEmployee class, which describes an employee.
    The MyEmployee class has one method, which returns the salary according to the calculation logic, and attributes in accordance with the view's UI controls. MyEmployee inherits from NSManagedObject. A few things I'm not sure of:
    1. Inside the MyEmployee class implementation file, the calculation method gets the class attributes using a statement like "[[self valueForKey:@"hourlyRate"] floatValue];". However, inside the header there is no data member named hourlyRate or any of the view fields. I'm not quite sure how this works and how it gets the value from the right view field (does it have to have the same name as the field in the view?). Maybe the connection is made somehow using Interface Builder and was not shown in the book? And more important:
    2. How does it separate the view from the model? Let's say, as the book implies might happen, I decide one day to remove one of the fields in the view. As far as I understand, that means changing the way the salary method works in MyEmployee (because we have one field less), and removing one attribute from the same class. So how does that separate the View from the Model if changing one is reflected in the other? I guess I'm getting something wrong... Any comments? Thanks


  • LINQ - IEnumerable.Join on Anonymous Result Set in VB.NET

    - by user337501
    I've long since built a way around this, but it still keeps bugging me... it doesn't help that my grasp of dynamic LINQ queries is still shaky. For the example:
    • Parent has fields (ParentKey, ParentField)
    • Child has fields (ChildKey, ParentKey, ChildField)
    • Pet has fields (PetKey, ChildKey, PetField)
    Child has a foreign key reference to Parent on Child.ParentKey = Parent.ParentKey, and Pet has a foreign key reference to Child on Pet.ChildKey = Child.ChildKey. Simple enough, eh? Let's say I have LINQ like this...

        Dim Q = From p In DataContext.Parent _
                Join c In DataContext.Child On c.ParentKey Equals p.ParentKey

    Consider this a "base query" on which I will perform other filtering actions. Now I want to join the Pet table like this:

        Q = Q.Join(DataContext.Pet, _
                   Function(a) a.c.ChildKey, _
                   Function(p As Pet) p.ChildKey, _
                   Function(a, p As Pet) p.ChildKey = a.c.ChildKey)

    The above Join call doesn't work. I sort of understand why it doesn't work, but hopefully it shows you how I tried to accomplish this task. After all this was done I would have appended a Select to finish the job. Any ideas on a better way to do this? I tried it with the PredicateBuilder with little success. I might not know how to use it right, but it felt like it wasn't going to handle the joining.
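    For what it's worth, the usual stumbling block with this shape of query is that the last argument of Queryable.Join is a result projection, not a boolean join predicate, and the joined result has a new anonymous shape, so it cannot be assigned back to Q. Below is a hedged sketch of that idea, written in C# for brevity; the entity classes, parameters, and key names simply mirror the question and are not from any real schema.

        using System.Linq;

        // Hypothetical entity shapes mirroring the question's tables.
        public class Parent { public int ParentKey { get; set; } }
        public class Child  { public int ChildKey  { get; set; } public int ParentKey { get; set; } }
        public class Pet    { public int PetKey    { get; set; } public int ChildKey  { get; set; } }

        public static class JoinSketch
        {
            public static void Build(IQueryable<Parent> parents,
                                     IQueryable<Child> children,
                                     IQueryable<Pet> pets)
            {
                // The "base query": parent joined to child, kept as an anonymous pair.
                var baseQuery = from p in parents
                                join c in children on p.ParentKey equals c.ParentKey
                                select new { p, c };

                // Joining Pet: arguments 2 and 3 are key selectors, and argument 4 is
                // the shape of the joined row (a projection), not an equality test.
                var withPets = baseQuery.Join(
                    pets,
                    a   => a.c.ChildKey,                 // outer key
                    pet => pet.ChildKey,                 // inner key
                    (a, pet) => new { a.p, a.c, pet });  // result projection

                // withPets is an IQueryable of a new anonymous { p, c, pet } shape;
                // append the Where/Select calls to it (or project into a named type)
                // rather than assigning it back to the original query variable.
            }
        }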


  • How to prevent duplicate records being inserted with SqlBulkCopy when there is no primary key

    - by kscott
    I receive a daily XML file that contains thousands of records, each being a business transaction that I need to store in an internal database for use in reporting and billing. I was under the impression that each day's file contained only unique records, but have discovered that my definition of unique is not exactly the same as the provider's.

    The current application that imports this data is a C#.Net 3.5 console application; it does so using SqlBulkCopy into a MS SQL Server 2008 database table where the columns exactly match the structure of the XML records. Each record has just over 100 fields, and there is no natural key in the data - or rather, the fields I can come up with that make sense as a composite key end up also having to allow nulls. Currently the table has several indexes, but no primary key. Basically the entire row needs to be unique: if one field is different, it is valid enough to be inserted.

    I looked at creating an MD5 hash of the entire row, inserting that into the database, and using a constraint to prevent SqlBulkCopy from inserting the row, but I don't see how to get the MD5 hash into the bulk copy operation, and I'm not sure whether the whole operation would fail and roll back if any one record failed, or whether it would continue. The file contains a very large number of records, so going row by row through the XML, querying the database for a record that matches all fields, and then deciding whether to insert is really the only alternative I can see - and I was hoping not to have to rewrite the application entirely, since the bulk copy operation is so much faster.

    Does anyone know of a way to use SqlBulkCopy while preventing duplicate rows, without a primary key? Or any suggestion for a different way to do this?
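    One direction worth sketching - not a vetted answer, and the table names below are hypothetical - is to let SqlBulkCopy load everything into an empty staging table and have the database do the whole-row comparison, since SqlBulkCopy itself has no duplicate detection. EXCEPT compares complete rows (treating NULLs as equal), so no primary key is needed, though it will not work if any columns use the legacy text/ntext/image types.

        using System.Data;
        using System.Data.SqlClient;

        public static class DedupedBulkLoad
        {
            public static void Load(DataTable rows, string connectionString)
            {
                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();

                    // 1. Bulk copy the day's records into a staging table that
                    //    mirrors the real table (assumed to start out empty).
                    using (var bulk = new SqlBulkCopy(connection))
                    {
                        bulk.DestinationTableName = "dbo.TransactionsStaging";
                        bulk.WriteToServer(rows);
                    }

                    // 2. Insert only the rows not already present, comparing all columns.
                    const string moveNewRows = @"
                        INSERT INTO dbo.Transactions
                        SELECT * FROM dbo.TransactionsStaging
                        EXCEPT
                        SELECT * FROM dbo.Transactions;

                        TRUNCATE TABLE dbo.TransactionsStaging;";

                    using (var command = new SqlCommand(moveNewRows, connection))
                    {
                        command.ExecuteNonQuery();
                    }
                }
            }
        }

    Whether this beats the MD5 idea depends on row width and volume; the hash approach can also work if the hash is computed in C# while filling the DataTable and stored in an extra column with a unique index created WITH (IGNORE_DUP_KEY = ON), which should make SQL Server skip duplicate rows instead of failing the whole bulk copy.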


  • iPhone nextResponder switching keyboards

    - by Steve
    iPhone SDK 3.2 Beta 4 (I am downloading B5 to see if that clears this up). To me this seems like a bug, but maybe I am missing something. The linked test program illustrates the problem.

    I have several UITextFields that I am linking together using the Tag property. In textFieldShouldReturn I get the next UITextField and set it as the active responder. The problem arises with the Interface Builder settings for the keyboard type. In the test program I have the first two text fields set to the default keyboard, and the next three text fields set to numeric and symbols. As you move into the set (the bottom three) that uses the numeric/symbol keyboard, the keyboard will toggle between numeric/symbol and the default. What I mean is: the third field will display the numeric/symbol keyboard, the next one the default keyboard, and the one after that numeric/symbol again. If you add more text fields set up as numeric/symbol, this alternation continues.

    Now, if you tab into the fourth field (it should bring up the default) and then manually select the numeric/symbol keyboard, the problem will disappear. I have tried various things to track down why the problem disappears - no luck so far.

    Sample Project


  • Rails App hangs after few requests

    - by Paddy
    I have the Bitnami Rails stack installed on my Mac. To better explain my problem I created a simple scaffold-based Rails app with MySQL as the backend. I can perform simple POST and GET requests for a while, but after a few requests the app just hangs indefinitely. No exception is caught, and there is nothing worthwhile in the development log to explain this strange behavior. This is the last bit from the development log before the app froze:

        Processing WritedatasController#index (for 127.0.0.1 at 2010-03-30 20:38:51) [GET]
          Writedata Load (0.7ms)   SELECT * FROM `writedatas`
        Rendering template within layouts/application
        Rendering writedatas/index
          Writedata Columns (2.9ms)   SHOW FIELDS FROM `writedatas`
        Completed in 99ms (View: 88, DB: 4) | 200 OK [http://localhost/writedatas]
          SQL (0.2ms)   SET NAMES 'utf8'
          SQL (0.1ms)   SET SQL_AUTO_IS_NULL=0

        Processing WritedatasController#new (for 127.0.0.1 at 2010-03-30 20:38:52) [GET]
          Writedata Columns (2.0ms)   SHOW FIELDS FROM `writedatas`
        Rendering template within layouts/application
        Rendering writedatas/new
        Rendered writedatas/_form (5.9ms)
        Completed in 34ms (View: 25, DB: 2) | 200 OK [http://localhost/writedatas/new]
          SQL (0.4ms)   SET NAMES 'utf8'
          SQL (0.1ms)   SET SQL_AUTO_IS_NULL=0

        Processing WritedatasController#index (for 127.0.0.1 at 2010-03-30 20:39:17) [GET]
          Writedata Load (0.7ms)   SELECT * FROM `writedatas`
        Rendering template within layouts/application
        Rendering writedatas/index
          Writedata Columns (2.6ms)   SHOW FIELDS FROM `writedatas`
        Completed in 101ms (View: 90, DB: 4) | 200 OK [http://localhost/writedatas]

    It just hung at this point. And after this happens I have to restart the server, only for it to hang again after a few requests. This is the weirdest problem I have faced and I am truly stumped.

    Read the article

  • tabindex not working on rails form

    - by ash34
    Hi, I am specifying tab indexes on a Rails form. Below is example code for a select input and a text input.

        <div class="row">
          <%= f.label :hrs, "Enter number of hours" %>
          <%= f.select :hrs, VALID_HOURS::HOURS, {:selected => "8"},
                       {:tabindex => "3", :style => 'width:50px;', :class => "select"} %>
        </div>
        <div class="row">
          <%= f.label :notes, "notes" %>
          <%= f.text_field :notes, :maxlength => 30, :tabindex => "5", :style => 'width:200px;' %>
        </div>

    I have two select fields and some text input fields. They don't seem to get focused in the right order when tabbing, and the tab order also seems to skip over a few of the text fields entirely. Thanks much.

    Read the article

  • cakephp paginate using mysql SQL_CALC_FOUND_ROWS

    - by michael
    I'm trying to make CakePHP's paginate take advantage of MySQL's SQL_CALC_FOUND_ROWS feature so that a count of total rows is returned while using LIMIT. Hopefully this can eliminate the double query of paginateCount() followed by paginate(). http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_found-rows I've put the following in my app_model.php, and it basically works, but it could be done better. Can someone help me figure out how to override paginate()/paginateCount() so that only one SQL statement is executed? My open questions are marked with "Q:" in the comments.

        /**
         * Overridden paginateCount method, to use SQL_CALC_FOUND_ROWS
         */
        public function paginateCount($conditions = null, $recursive = 0, $extra = array()) {
            $options = array_merge(compact('conditions', 'recursive'), $extra);
            $options['fields'] = "SQL_CALC_FOUND_ROWS `{$this->alias}`.*";

            // Q: how do you get the SAME limit value used in paginate()?
            $options['limit'] = 1; // ideally, should return same rows as in paginate()

            // Q: can we somehow get the query results to paginate directly, without the extra query?
            $cache_results_from_paginateCount = $this->find('all', $options);

            /*
             * note: we must run the query to get the count, but it will be cached for
             * multiple paginates, so add a comment to the query
             */
            $found_rows = $this->query("/* do not cache for {$this->alias} */ SELECT FOUND_ROWS();");
            $count = array_shift($found_rows[0][0]);
            return $count;
        }

        /**
         * Overridden paginate method
         */
        public function paginate($conditions, $fields, $order, $limit, $page = 1, $recursive = null, $extra = array()) {
            $options = array_merge(compact('conditions', 'fields', 'order', 'limit', 'page', 'recursive'), $extra);

            // Q: can we somehow get $cache_results_from_paginateCount directly from paginateCount()?
            return $cache_results_from_paginateCount; // paginate in only 1 SQL stmt
            return $this->find('all', $options);      // ideally, we could skip this call entirely
        }

    Read the article

  • Cannot update a single field using Linq to Sql

    - by KallDrexx
    I am having a hard time updating a single field without having to retrieve the whole record first. For example, in my web application I have an in-place editor for the Name and Description fields of an object. Once you edit either field, it sends the new value (with the object's ID) to the web server. What I want is for the web server to take that value and ID and update only the one field. Google gives me only two ways to do this: 1) When I get the value I want to change, along with the ID, retrieve the record from the database, update the field on the C# object, and then send it back to the server. I don't like this method because it includes a completely unnecessary database read call (which touches two tables due to the way my schema is laid out). 2) Set UpdateCheck for all the fields (except the primary keys) to UpdateCheck.Never. This doesn't work for me (I think) due to my mapping layer between LINQ to SQL and my Entity/ViewModel layer: when I convert my entity into the LINQ to SQL db object, those fields seem to be updated regardless of the UpdateCheck setting. This might just be because of integers, since not setting an int means it is zero (and no, I can't use int? instead). Are there any other options that I have?
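    A rough sketch of two alternatives that avoid the read, with the caveat that everything here is illustrative: the Widget class, its Id and Name members, and the table name are invented stand-ins, not anything from the question.

        // Sketch only; "Widget", "Id" and "Name" are assumed names.
        using System.Data.Linq;
        using System.Data.Linq.Mapping;

        [Table(Name = "Widget")]
        public class Widget
        {
            [Column(IsPrimaryKey = true)]
            public int Id { get; set; }

            // UpdateCheck.Never keeps this column out of the concurrency WHERE clause.
            [Column(UpdateCheck = UpdateCheck.Never)]
            public string Name { get; set; }
        }

        public class WidgetRepository
        {
            private readonly string _connectionString;

            public WidgetRepository(string connectionString)
            {
                _connectionString = connectionString;
            }

            // Option 1: bypass the object model and issue a single parameterized UPDATE.
            public void RenameWithSql(int id, string newName)
            {
                using (var db = new DataContext(_connectionString))
                {
                    db.ExecuteCommand("UPDATE Widget SET Name = {0} WHERE Id = {1}", newName, id);
                }
            }

            // Option 2: attach a stub entity built from the ID, change the one property,
            // and submit; only the member modified after Attach ends up in the SET clause.
            public void RenameWithAttach(int id, string newName)
            {
                using (var db = new DataContext(_connectionString))
                {
                    var stub = new Widget { Id = id };
                    db.GetTable<Widget>().Attach(stub);
                    stub.Name = newName;
                    db.SubmitChanges();
                }
            }
        }

    The attach-based version still depends on how the mapping layer builds the stub: if it copies values into every property after Attach, LINQ to SQL will see those as changes too, which sounds like the behavior described in the question.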

    Read the article

  • How can I add a link with a UID from a User Reference from a table in Drupal Views

    - by pduncan
    I have the following in Drupal 6:

    * A master CCK type which contains a user reference field and other fields. There will only be one record per user here.
    * A view of this CCK type, shown as a table, with one of the fields being the user reference from the CCK type. This field is initially shown as a user name, linking to the user profile.
    * A second CCK type which can have several pieces of data about a particular user.
    * A view for this second CCK type, displaying information as a table. It takes a user id as an argument (an integer).

    I want to click on the user name in the master view and be directed to the detail view for that user. To do this, I tried selecting 'Output this field as a link' on the user field. The things available for me to replace are:

        Fields
          * [field_my_user_ref_uid_1] == Content: User (field_my_user_ref)
        Arguments
          * %1 == User: Uid

    However, the [field_my_user_ref_uid_1] token is replaced by the user name, and %1 seems to get replaced with an empty string. How can I put the user id in here?

    Read the article

  • Custom Keyboard (iPhone), UIKeyboardDidShowNotification and UITableViewController

    - by Pascal
    On an iPhone app, I've got a custom keyboard which works like the standard keyboard: it appears when a custom text field becomes first responder and hides when the field resigns first responder. I'm also posting the generic UIKeyboardWillShowNotification, UIKeyboardDidShowNotification and their hiding counterparts, as follows:

        NSMutableDictionary *userInfo = [NSMutableDictionary dictionaryWithCapacity:5];
        [userInfo setObject:[NSValue valueWithCGPoint:self.center] forKey:UIKeyboardCenterBeginUserInfoKey];
        [userInfo setObject:[NSValue valueWithCGPoint:shownCenter] forKey:UIKeyboardCenterEndUserInfoKey];
        [userInfo setObject:[NSValue valueWithCGRect:self.bounds] forKey:UIKeyboardBoundsUserInfoKey];
        [userInfo setObject:[NSNumber numberWithInt:UIViewAnimationCurveEaseOut] forKey:UIKeyboardAnimationCurveUserInfoKey];
        [userInfo setObject:[NSNumber numberWithDouble:thisAnimDuration] forKey:UIKeyboardAnimationDurationUserInfoKey];
        [[NSNotificationCenter defaultCenter] postNotificationName:UIKeyboardWillShowNotification object:nil userInfo:userInfo];

    This code works, and I use it in UIViewController subclasses. Since iPhone OS 3.0, UITableViewController automatically resizes its tableView when the system keyboard shows and hides. I'm only now compiling against 3.0, and I assumed the controller would also resize the table when my custom keyboard appears, since I'm posting the same notification. However, it doesn't. The table view controller is set as the delegate of the input fields. Does anyone have an idea why this might be the case? Has anyone implemented something similar successfully? I have standard input fields alongside the custom ones, so if the user changes fields the standard keyboard hides and the custom one shows. It would be beneficial if the tableView didn't resize to full height, so that I didn't have to resize it back with a custom method.

    Read the article

  • What's the proper way to setup different objects as delegates using Interface Builder?

    - by eagle
    Let's say I create a new project and add two text fields to the view controller in Interface Builder. I want to respond to the delegate events that the text fields generate, but I don't want the main view controller to act as the delegate for both text fields. Ideally I want a separate file for each text field that acts as its delegate, and each of these objects also needs to be able to interact with the main view controller. My question is how I would set this up and link everything correctly. I tried creating a new class that inherits from NSObject and implements UITextFieldDelegate. I then added an instance variable called "viewController" of the same type as my view controller and marked it with IBOutlet (this required me to add #import "myViewcontroller.h"). I then went to Interface Builder and opened my view controller, which contains the two text fields. I added an NSObject to the nib and changed its type to the new class I created. I set its viewController outlet to the File's Owner, and set one of the text fields' delegate outlets to point to this new object. Now when I run the program, it crashes with EXC_BAD_ACCESS when I touch the text box. I'm guessing I didn't link things correctly in IB. Some things I'm not sure about, which might be the problem: does IB automatically know to create an instance of the class just by placing the NSObject in the view controller's nib? Can it properly assign the viewController property to an instance of itself even though it is creating itself at the same time?

    Read the article

  • Correct OOP design without getters?

    - by kane77
    I recently read that getters/setters are evil, and I have to say it makes sense. Yet when I started learning OOP, one of the first things I learned was "encapsulate your fields", so I got used to creating a class, giving it some fields, creating getters and setters for them, and writing a constructor that initializes those fields. And every time some other class needs to manipulate (or, for instance, display) such an object, I pass it the object and it works on it through the getters and setters. I can see problems with this approach, but how do I do it right? For instance, take displaying/rendering an object that is a "data" class, let's say a Person with a name and a date of birth. Should the class have a method for displaying the object, where some Renderer is passed as an argument? Wouldn't that violate the principle that a class should have only one purpose (in this case, storing state), since it should not care about the presentation of the object? Can you suggest some good resources where best practices in OOP design are presented? I'm planning to start a project in my spare time and I want it to be my learning project in correct OOP design.
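    To make the trade-off concrete, here is a small, purely illustrative C# sketch of the "tell, don't ask" style the question is circling around; all of the type and member names are invented for the example.

        // Illustrative sketch: the Person exposes behaviour and pushes its own state
        // into a renderer, instead of exposing getters for other classes to pull from.
        using System;

        public interface IPersonRenderer
        {
            void Render(string name, DateTime dateOfBirth);
        }

        public class ConsolePersonRenderer : IPersonRenderer
        {
            public void Render(string name, DateTime dateOfBirth)
            {
                Console.WriteLine("{0}, born {1:yyyy-MM-dd}", name, dateOfBirth);
            }
        }

        public class Person
        {
            private readonly string _name;
            private readonly DateTime _dateOfBirth;

            public Person(string name, DateTime dateOfBirth)
            {
                _name = name;
                _dateOfBirth = dateOfBirth;
            }

            // Presentation details stay in the renderer; the Person only decides
            // which of its data the renderer gets to see.
            public void RenderTo(IPersonRenderer renderer)
            {
                renderer.Render(_name, _dateOfBirth);
            }

            public bool IsAdultOn(DateTime date)
            {
                return _dateOfBirth.AddYears(18) <= date;
            }
        }

    Usage would look like new Person("Ada", new DateTime(1990, 1, 1)).RenderTo(new ConsolePersonRenderer()); whether that indirection is worth it compared with plain getters is exactly the debate the question refers to.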

    Read the article

  • Switching between iScroll and standard WebView Scrolling functionality

    - by Jonathan
    I'm using iScroll 4 for my rather heavy iPhone web/PhoneGap app and trying to find a way to switch between iScroll's scrolling and the standard "native" webview scrolling, basically on click. Why I want this is described below. My app has several different subpages, all in one file. Some subpages have input fields, some don't. As we all know, iScroll + input fields = you're out of luck. I've wrapped iScroll's wrapper div (and all its functionality) around the one subpage where scrolling is crucial and where there are no input fields. The other sections I've simply placed outside this div, which gives them no scrolling functionality at all. I have of course tried wrapping all subpages inside the wrapper and enabling/disabling the scroller (shown below), but that didn't work for me at all:

        myScroll.disable()
        myScroll.enable()

    By placing some subpages outside the main scrolling area / iScroll div, I've lost both iScroll's scrolling and the standard webview scrolling (the latter, I guess, because of what iScroll does), which leaves me with only the most basic scrolling, hence basically no scrolling at all: you can move around vertically, but once you let go of the screen the scrolling stops. Quite natural, but alas so nasty. Therefore, I'm searching for a way to enable standard webview scrolling on the subpages placed outside of iScroll's wrapper div. I've tried different approaches, such as:

        document.removeEventListener('touchmove', preventDefault, false);
        document.removeEventListener('touchmove', preventDefault, true);

    But with no success. Sorry for not providing you guys with any hard code or demos to test out for yourselves; it's simply too much code, and it would be presented so far out of its context that nobody would be able to debug it. So, is there a way in JavaScript to do this, switching between iScroll's scrolling functionality and standard "native" webview scrolling? I would rather not rebuild the entire DOM framework, so a solution like the one described above would be preferable.

    Read the article

  • Odd difference between Python 2.5 and Python 2.6 on MacOS 10.6 using ctypes and libproc proc_pidinfo

    - by cemasoniv
    I'm trying to determine the current working directory of a process given its PID; the command-line utility lsof does something similar. Here's the source of the Python script:

        import ctypes
        from ctypes import util
        import sys

        PROC_PIDVNODEPATHINFO = 9

        proc = ctypes.cdll.LoadLibrary(util.find_library("libproc"))
        print(proc.proc_pidinfo)

        class vnode_info(ctypes.Structure):
            _fields_ = [('data', ctypes.c_ubyte * 152)]

        class vnode_info_path(ctypes.Structure):
            _fields_ = [('vip_vi', vnode_info), ('vip_path', ctypes.c_char * 1024)]

        class proc_vnodepathinfo(ctypes.Structure):
            _fields_ = [('pvi_cdir', vnode_info_path), ('pvi_rdir', vnode_info_path)]

        inst = proc_vnodepathinfo()
        pid = int(sys.argv[1])
        ret = proc.proc_pidinfo(
            pid, PROC_PIDVNODEPATHINFO, 0, ctypes.byref(inst), ctypes.sizeof(inst)
        )
        print(ret, inst.pvi_cdir.vip_path)

    However, even though this script behaves as expected on Python 2.6, it does not work on Python 2.5:

        host:dir user$ sudo /usr/bin/python2.6 script.py 2698
        <_FuncPtr object at 0x100419ae0>
        (2352, '/')
        host:dir user$ sudo /usr/bin/python2.5 script.py 2698
        <_FuncPtr object at 0x19fdc0>
        (0, '')

    (PID 2698 is "Activity Monitor.app".) Note the different return values. Since this program is based so heavily on ctypes, I can't imagine any difference in Python itself that would cause this. The same behavior (as Python 2.5) occurs with my self-built Python 3.2. I'm not sure what versioning information I can give to help track down the weirdness -- or even come up with a solution for 2.5 -- but here's some stuff:

        host:dir user$ otool -L /usr/bin/python2.6
        /usr/bin/python2.6:
            /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
        host:dir user$ otool -L /usr/bin/python2.5
        /usr/bin/python2.5 (architecture i386):
            /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
        /usr/bin/python2.5 (architecture ppc7400):
            /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
        host:dir user$ uname -a
        Darwin host.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386

    Thanks to anyone who has a clue about what's going on here :)

    Read the article

  • Modeling a Generic Relationship in a Database

    - by StevenH
    This is most likely one for all you sexy DBAs out there: how would I efficiently model a relational database where a field in an "Event" table defines a "SportType"? This "SportType" field should be able to link to different sport tables, e.g. "FootballEvent", "RugbyEvent", "CricketEvent" and "F1Event", and each of these sport tables has different fields specific to that sport. My goal is to be able to generically add sport types in the future as required, yet hold sport-specific event data (fields) as part of my Event entity. Is it possible to use an ORM such as NHibernate or Entity Framework to reflect such a relationship? I have thrown together a quick C# example to express my intent at a higher level:

        public class Event<T> where T : new()
        {
            public T Fields { get; set; }

            public Event()
            {
                Fields = new T();
            }
        }

        public class FootballEvent
        {
            public Team CompetitorA { get; set; }
            public Team CompetitorB { get; set; }
        }

        public class TennisEvent
        {
            public Player CompetitorA { get; set; }
            public Player CompetitorB { get; set; }
        }

        public class F1RacingEvent
        {
            public List<Player> Drivers { get; set; }
            public List<Team> Teams { get; set; }
        }

        public class Team
        {
            public IEnumerable<Player> Squad { get; set; }
        }

        public class Player
        {
            public string Name { get; set; }
            public DateTime DOB { get; set; }
        }
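    One common way to express this, offered here only as an illustrative sketch rather than the answer, is to give all events a shared base class and map the hierarchy with the ORM's inheritance support (joined-subclass in NHibernate, Table-per-Type in Entity Framework), so the common columns live in one Event table and the sport-specific columns live in one table per sport sharing the same key. The names below are invented; Team and Player are the classes from the question.

        // Hedged sketch: one base class for the shared columns, one subclass per sport.
        using System;
        using System.Collections.Generic;

        public abstract class SportEvent
        {
            public int Id { get; set; }
            public DateTime ScheduledOn { get; set; } // common, sport-agnostic fields
            public string Venue { get; set; }
        }

        public class FootballEvent : SportEvent
        {
            public Team HomeTeam { get; set; }
            public Team AwayTeam { get; set; }
        }

        public class TennisEvent : SportEvent
        {
            public Player PlayerA { get; set; }
            public Player PlayerB { get; set; }
        }

        public class F1RacingEvent : SportEvent
        {
            public List<Player> Drivers { get; set; }
            public List<Team> Teams { get; set; }
        }

    Adding a new sport then means adding a new subclass and a new joined table rather than altering the Event table, which is roughly the "generically add sport types" requirement.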

    Read the article

  • Database warehouse design: fact tables and dimension tables

    - by morpheous
    I am building a poor man's data warehouse using an RDBMS. I have identified the key 'attributes' to be recorded as:

    * sex (true/false)
    * demographic classification (A, B, C, etc.)
    * place of birth
    * date of birth
    * weight (recorded daily): the fact that is being recorded

    My requirements are to be able to run 'OLAP' queries that allow me to 'slice and dice', 'drill up/down' the data, and generally view the data from different perspectives. After reading up on this topic area, the general consensus seems to be that this is best implemented using dimension tables rather than normalized tables. Assuming that this assertion is true (i.e. the solution is best implemented using fact and dimension tables), I would like to seek some help in the design of these tables. 'Natural' (or obvious) dimensions are the date dimension and the geographical location, which have hierarchical attributes. However, I am struggling with how to model the following fields:

    * sex (true/false)
    * demographic classification (A, B, C, etc.)

    The reason I am struggling with these fields is that:

    * They have no obvious hierarchical attributes which will aid aggregation (AFAIA), which suggests they should be in a fact table.
    * They are mostly static or very rarely change, which suggests they should be in a dimension table.

    Maybe the heuristic I am using above is too crude? I will give some examples of the type of analysis I would like to carry out on the data warehouse; hopefully that will clarify things further. I would like to aggregate and analyze the data by sex and demographic classification, e.g. answer questions like:

    * How do male and female weights compare across different demographic classifications?
    * Which demographic classification (male AND female) shows the most increase in weight this quarter?

    Can anyone clarify whether sex and demographic classification are part of the fact table, or whether they are (as I suspect) dimension tables? Also, assuming they are dimension tables, could someone elaborate on the table structures (i.e. the fields)? The 'obvious' schema:

        CREATE TABLE sex_type (is_male int);
        CREATE TABLE demographic_category (id int, name varchar(4));

    may not be the correct one.

    Read the article

  • ASP.NET MVC: null reference exception using HtmlHelper.TextBox and custom model binder

    - by mr.nicksta
    I have written a class which implements IModelBinder (see below). It handles a form with three inputs, each representing part of a date value (day, month, year). I have also written a corresponding HtmlHelper extension method to print the three fields on the form. When the day, month and year inputs are given values which can be parsed but a separate value fails validation, all is fine: the fields are repopulated and the page is served to the user as expected. However, when invalid values are supplied and a DateTime cannot be parsed, I return an arbitrary DateTime so that the fields will be repopulated when returned to the user. I read up on similar problems, and they all seemed to come down to a missing call to SetModelValue(). I wasn't calling it, but even after adding the call the problem has not been resolved.

        public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
        {
            string modelName = bindingContext.ModelName;
            string monthKey = modelName + ".Month";
            string dayKey = modelName + ".Day";
            string yearKey = modelName + ".Year";

            // get values submitted on the form
            string year = bindingContext.ValueProvider[yearKey].AttemptedValue;
            string month = bindingContext.ValueProvider[monthKey].AttemptedValue;
            string day = bindingContext.ValueProvider[dayKey].AttemptedValue;

            DateTime parsedDate;
            if (DateTime.TryParse(string.Format(DateFormat, year, month, day), out parsedDate))
                return parsedDate;

            // could not parse the date; report an error and return the current date
            bindingContext.ModelState.AddModelError(yearKey, ValidationErrorMessages.DateInvalid);

            // added this after reading about similar problems; it does not fix the issue
            bindingContext.ModelState.SetModelValue(modelName, bindingContext.ValueProvider[modelName]);

            return DateTime.Today;
        }

    The null reference exception is thrown when I attempt to create a textbox for the Year property of the date, but strangely not for Day or Month. Can anyone offer an explanation as to why this is? Thanks :-)
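    One possible explanation, offered as a guess rather than a confirmed diagnosis: Html.TextBox repopulates from ModelState[name].Value.AttemptedValue, and a ModelState entry created by AddModelError alone has a null Value. Since the error above is added under the ".Year" key while SetModelValue is only called for the parent key, the Year entry would be the only one with a null Value, which matches the symptom. Below is a sketch of a helper that records the attempted value for every sub-key before the error is added; it assumes the dictionary-style ValueProvider implied by the indexer usage in the question, and the helper name is invented.

        // Hypothetical helper, not a confirmed fix: copy each sub-key's attempted
        // value into ModelState so that helpers reading ModelState[key].Value never
        // hit a null. Call it before AddModelError in the failure branch, e.g.
        // RecordAttemptedValues(bindingContext, dayKey, monthKey, yearKey);
        using System.Web.Mvc;

        internal static class ModelStateRepopulation
        {
            public static void RecordAttemptedValues(ModelBindingContext bindingContext, params string[] keys)
            {
                foreach (string key in keys)
                {
                    if (bindingContext.ValueProvider.ContainsKey(key))
                    {
                        bindingContext.ModelState.SetModelValue(key, bindingContext.ValueProvider[key]);
                    }
                }
            }
        }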

    Read the article

< Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >