Search Results

Search found 17727 results on 710 pages for 'large apps'.


  • JSP Include: one large bean or bean for each include

    - by shylynx
    I want to refactor a webapp that consists of very distorted JSPs and servlets. Because we can't easily switch to a web framework, we have to keep the JSPs and servlets, and now we are unsure how to include pages in one another and how to set up the jsp:useBean directives effectively. As a first step we want to decouple the code for the core actions and the bean creation into servlets. The servlets should forward to their corresponding pages, which then use the beans. The problem here is that each JSP consists of different sub- and sub-sub-JSPs that are included in one another. Here is a shortened extract (reality is more complex):

        head
        header
        top
        navigation
        actionspanel
        main
        header
        actionspanel
        foot
        footer

    Moreover, each JSP (including the header and footer) uses dynamic data. For example, the title and the actionspanel can change on each page reload, or have links and labels that depend on the processing done by the preceding servlet. I know that jsp:include directives should only be used for static content and should be avoided for dynamic content. But here we have very large pages that consist of many parts. Now the core questions: Should I use one big bean per page, so that each bean also holds the data for the header and footer beside its core data, and each included JSP uses the same bean directive? For example:

        DirectoryJSP <- DirectoryBean
        CompareJSP   <- CompareBean

    Or should I use one bean per JSP, so that each bean only holds the data for one JSP and its own purpose? For example:

        DirectoryJSP <- DirectoryBean
          HeaderJSP  <- HeaderBean
          FooterJSP  <- FooterBean
        CompareJSP   <- CompareBean
          HeaderJSP  <- HeaderBean
          FooterJSP  <- FooterBean

    In the second case: should the subordinate beans be members of the corresponding parent bean, so that only the parent bean is attached to the request as an attribute? Or should each bean be attached to the request separately?

    Read the article

  • Real world pitfalls of introducing F# into a large codebase and engineering team

    - by nganju
    I'm CTO of a software firm with a large existing codebase (all C#) and a sizable engineering team. I can see how certain parts of the code would be far easier to write in F#, resulting in faster development time, fewer bugs, easier parallel implementations, etc.; basically overall productivity gains for my team. However, I can also see several productivity pitfalls of introducing F#, namely:

    1. Everyone has to learn F#, and it's not as trivial as switching from, say, Java to C#. Team members who have not learned F# will be unable to work on the F# parts of the codebase.
    2. The pool of hireable F# programmers, as of now (Dec 2010), is practically non-existent. Search various software-engineer resume databases for "F#": far less than 1% of resumes contain the keyword.
    3. Community support as of now (Dec 2010) is much thinner. You can Google almost any problem in C# and find someone who has already dealt with it; not so with F#. Third-party tool support (NUnit, ReSharper, etc.) is also sketchy.

    I realize that this is a bit Catch-22, i.e. if people like me don't use F# then the community and tools will never materialize, etc. But I've got a company to run, and I can be cutting edge but not bleeding edge. Any other pitfalls I'm not considering? Or does anyone care to rebut the pitfalls I've mentioned? I think this is an important discussion and would love to hear your counter-arguments in this public forum; it may do a lot to increase F# adoption by industry. Thanks.

    Read the article

  • MyMessageBox for Phone and Store apps

    - by Daniel Moth
    I am sharing a class I use for both my Windows Phone 8 and Windows Store app projects.

    Background and my requirements

    For my Windows Phone 7 projects two years ago I wrote an improved custom MessageBox class that preserves the well-known MessageBox interface while offering several advantages. I documented those and shared it for Windows Phone 7 here: Guide.BeginShowMessageBox wrapper. Aside: with Windows Phone 8 we can now use the async/await feature out of the box, without taking a dependency on additional/separate pre-release software. As I try to share code between my existing Windows Phone 8 projects and my new Windows Store app projects, I wanted to preserve the calling code, so I decided to wrap the WinRT MessageDialog class in a custom class that presents the same MessageBox interface to my codebase. BUT. The MessageDialog class has to be called with the await keyword preceding it (which, as we know, is viral), which means all my calling code will also have to use await. Which in turn means that I have to change my MessageBox wrapper to present the same interface to the shared codebase and be callable with await… for both Windows Phone projects and Windows Store app projects.

    Solution

    The solution is what the requirements above outlined: a single code file with a MessageBox class that you can drop into your project, regardless of whether it targets Windows Phone 8, Windows 8 Store apps, or both. Just call any of its static Show functions using await and, depending on the overload, check the return type to see which button the user chose:

        // example from http://www.danielmoth.com/Blog/GuideBeginShowMessageBox-Wrapper.aspx
        if (await MyMessageBox.Show("my message", "my caption", "ok, got it", "that sucks") == MyMessageBoxResult.Button1)
        {
            // Do something
            Debug.WriteLine("OK");
        }

    The class can be downloaded from the bottom of my older blog post. Comments about this post by Daniel Moth are welcome at the original blog.
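    For orientation, the wrapping idea on the Windows Store side amounts to roughly the following. This is a minimal sketch only, not Daniel's actual downloadable class: the enum name and the two-button overload simply mirror the example above, and the body assumes the real WinRT MessageDialog API from Windows.UI.Popups.

        using System.Threading.Tasks;
        using Windows.UI.Popups;

        public enum MyMessageBoxResult { Button1, Button2 }

        public static class MyMessageBox
        {
            // Present the old MessageBox-style interface while delegating to WinRT's MessageDialog.
            public static async Task<MyMessageBoxResult> Show(
                string message, string caption, string button1, string button2)
            {
                var dialog = new MessageDialog(message, caption);
                dialog.Commands.Add(new UICommand(button1));
                dialog.Commands.Add(new UICommand(button2));

                IUICommand chosen = await dialog.ShowAsync();
                return chosen.Label == button1
                    ? MyMessageBoxResult.Button1
                    : MyMessageBoxResult.Button2;
            }
        }

    On the Windows Phone side the same interface would instead wrap Guide.BeginShowMessageBox, as described in the older post.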

    Read the article

  • ALMing in Hinglish 2 – Windows 8: Manual Testing Metro Style Apps using MTM11

    - by Tarun Arora
    What is ALMing in Hinglish => Introduction

    ALMing in Hinglish–Windows 8 Metro Style App manual testing using MTM11

    In this second in the series of videos I bring to you Shubhra Maji, a Program Manager on the Visual Studio dev tools team in Hyderabad, along with the very seasoned Aditya Agarwal & Srishti Sridhar, who have been working on the Visual Studio team for the past several releases. The team wonderfully walks us through manually testing Metro Style Apps in Windows 8 using Microsoft Test Manager 2011. A great thank you for watching; if you have any questions/feedback/suggestions please contact us. Stay tuned for more… Namaste!

    You might also like - ALMing in Hinglish 1: Exploratory Testing in VS11 with Nivedita Bawa

    Read the article

  • Mapping dynamic buffers in Direct3D11 in Windows Store apps

    - by Donnie
    I'm trying to make instanced geometry work in Direct3D11, and the ID3D11DeviceContext1->Map() call is failing with the very helpful error of "Invalid Parameter" when I attempt to update the instance buffer. The buffer is declared as a member variable:

        Microsoft::WRL::ComPtr<ID3D11Buffer> m_instanceBuffer;

    Then I create it (which succeeds):

        D3D11_BUFFER_DESC instanceDesc;
        ZeroMemory(&instanceDesc, sizeof(D3D11_BUFFER_DESC));
        instanceDesc.Usage = D3D11_USAGE_DYNAMIC;
        instanceDesc.ByteWidth = sizeof(InstanceData) * MAX_INSTANCE_COUNT;
        instanceDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        instanceDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
        instanceDesc.MiscFlags = 0;
        instanceDesc.StructureByteStride = 0;
        DX::ThrowIfFailed(d3dDevice->CreateBuffer(&instanceDesc, NULL, &m_instanceBuffer));

    However, when I try to map it:

        D3D11_MAPPED_SUBRESOURCE inst;
        DX::ThrowIfFailed(d3dContext->Map(m_instanceBuffer.Get(), 0, D3D11_MAP_WRITE, 0, &inst));

    the Map call fails with E_INVALIDARG. Nothing is NULL that shouldn't be, and this being one of my first D3D apps, I'm currently stumped on what to do next to track it down. I'm thinking I must be creating the buffer incorrectly, but I can't see how. Any input would be appreciated.

    Read the article

  • How to handle editing a large file for a non-technical user

    - by Luke
    I have a client who is given a tab-delimited .txt file containing hundreds of thousands of rows. I have a user story as follows: as a user, I want to take the text file and add a new value at the end of each line, containing the concatenated value of two of the columns. For example, if a line of the file reads

        text_one	text_two

    I need to output the following (preferably to a .txt file):

        text_one	text_two	text_onetext_two

    My first approach was to ask the vendor supplying the file to do the concatenation before providing the file; the easiest way to solve a problem is to eliminate it, right? However, they are very uncooperative and have point-blank refused. I've looked at building a simple JavaScript application that does this client side, so a non-technical user could select the file using a file selector. This approach has a few problems:

    - The file could be over a GB in size, so it can't be loaded straight into memory; I've tried, and the browser crashes.
    - There is no means to write a file in JavaScript, so I'd need to output the content to the screen and have the user save it (somehow).

    I was thinking that if I could get around the file-size limitations I could just output the edited content to the page and have the user save the page as a .txt file, but I think there is a better way than using JavaScript that will still accommodate the user's lack of technical know-how. Please consider this question to be stack agnostic, but bear in mind that a nice little shell script or Python script would be deemed unsuitable for a non-technical user unless there is a way of "packaging" it nicely for them.

    Updates: The file is too large to open in Excel. The process needs to be run weekly, but it doesn't require scheduling or automation... (yet)
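    For scale, the transformation itself is a one-pass streaming job in any language; the packaging for a non-technical user is the real question. A minimal C# sketch follows, which could be compiled once into a double-clickable console exe. The file names are placeholders, and it assumes every line has at least two tab-separated columns and that those are the ones to concatenate:

        using System.IO;

        class ConcatColumns
        {
            static void Main()
            {
                // Stream line by line so a multi-GB file is never loaded into memory.
                using (var reader = new StreamReader("input.txt"))
                using (var writer = new StreamWriter("output.txt"))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        var columns = line.Split('\t');
                        // Append the concatenation of the first two columns as a new last column.
                        writer.WriteLine(line + "\t" + columns[0] + columns[1]);
                    }
                }
            }
        }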

    Read the article

  • Data indexing frameworks fit for large E-Commerce applications

    - by Dabu
    We wrote and still maintain a large e-commerce application. Our feature list resembles what you would expect from most shops. We'd like to improve some of our features, and now the search/suggestion-list functionality (enter some letters and a JavaScript suggestion list appears) has caught our eye. Currently we use http://xapian.org/. It has some drawbacks. Firstly, it's not actually the right solution: it was created to index documents, not ever-changing data at the granularity an e-commerce application needs. Secondly, the load on the database is significant when we reindex all data every night. We'd like a framework that has been designed for indexing database data, which can add to the index easily and without much load, and which can push data changes made in the back office to the frontend quickly, without much load or delay. I'm aware of the fact that Xapian is open source and even free software, so we could adapt it to our needs if we decided to invest the time and manpower. But taking a quick look around for a better-suited solution seems fair, right? Oh, and commercial applications are fine too; FOSS is not required. Thanks a bunch.

    Read the article

  • How to Print From Metro Apps in Windows 8

    - by Taylor Gibb
    Printing has become an application-aware feature in Metro applications. This makes the outcome of a print job differ from application to application, but the question remains: how do you print?

    Using the Keyboard

    Not all apps support printing in Windows 8; a good example of one that does is Mail. So fire up the default Mail app and select an email you want to print. When you are ready, go ahead and press the Ctrl + P keyboard combination. This will bring up a list of available print devices on the right-hand side; you can use the up and down arrows to select a printer. You will get most of the options you are used to when printing, so once you have set up your preferences, go ahead and hit the print button.

    Using the Mouse

    If you would rather use your mouse, move it to the bottom right-hand corner of your screen to bring up the Charms bar, then click on the Devices charm. This will list your printers as well as other devices, so make sure you select a printer. That's all there is to it.

    Read the article

  • CA For A Large Intranet

    - by Tim Post
    I'm managing what has become a very large intranet (over 100 different hosts/services) and will be stepping down from my role in the near future. I want to make things easy for the next victim person who takes my place. All hosts are secured via SSL. This includes various portals, wikis, data-entry systems, HR systems and other sensitive things. We're using self-signed certificates, which worked OK in the past but are now problematic because:

    - Browsers make it harder for users to understand exactly what is going on when a self-signed certificate is encountered, much less accept one.
    - Putting up a new host means 100 phone calls asking what "Add an exception" means.

    What we were doing is just importing the self-signed certs when we set up a new workstation. This was fine when we only had a dozen to deal with, but now it's just overwhelming. Our IT department has classified this as ya all's problem; all we get from them is support for switch and router configurations. Beyond the user having connectivity, everything else is up to the intranet administrators. We have a mix of Ubuntu and Windows workstations. We'd like to set up our own self-signed CA root, which can sign certificates for each host that we deploy on the intranet. Client browsers would of course be told to trust our CA. My question is: would this be dangerous, and would we be better off going with intermediate certificates from someone like Verisign? Either way, I still have to import the root for the intermediate CA, so I really don't see what the difference is. Other than charging us money, what would Verisign be doing that we could not, beyond protecting the root CA cert so it can't be used to make forgeries?
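    For concreteness, the in-house setup described here can be bootstrapped with a couple of standard OpenSSL commands. This is a sketch only: the file names, subjects and lifetimes are placeholders, and a real deployment also needs key protection and a revocation story:

        # Generate the root CA key and a self-signed CA certificate (valid ~10 years).
        openssl genrsa -out intranet-ca.key 4096
        openssl req -x509 -new -key intranet-ca.key -sha256 -days 3650 \
            -subj "/O=Example Corp/CN=Example Intranet Root CA" -out intranet-ca.crt

        # For each host: key + CSR, then sign the CSR with the root CA.
        openssl genrsa -out host1.key 2048
        openssl req -new -key host1.key -subj "/CN=host1.intranet.example" -out host1.csr
        openssl x509 -req -in host1.csr -CA intranet-ca.crt -CAkey intranet-ca.key \
            -CAcreateserial -days 825 -out host1.crt

    Only intranet-ca.crt then needs to be imported into each workstation's trust store.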

    Read the article

  • What stops HTML5 and JS apps from performing as well as native apps?

    - by Amogh Talpallikar
    From what I understand, HTML is a markup language; so are XAML, XIB, whatever Android uses, and the formats of other native UI development frameworks. JavaScript is a programming language used along with it to handle client-side scripting, which includes things like event handling, client-side validation and everything else that C#, Java, Objective-C or C++ do in those frameworks. There are MVC/MVVM patterns available in frameworks like Sencha's, Angular, etc. We have localStorage in the form of both SQLite and a key-value store, as other frameworks have, and there is an API specification for almost everything that is missing. Whenever a native UI framework has to render UI, it has to parse similar markup and render the UI.

    Question break-down:

    - What stops HTML and JS from doing the same? Instead of having a web control or browser as a layer in between, why can't HTML (along with CSS) and JS be made to perform the same way?
    - Even if there is a layer, so what? The .NET runtime and the JVM are layers in the other cases where C++ and C are not being used.
    - So let's take the case of Android: like Dalvik, why can't Chromium be another option (alongside Dalvik and the NDK), where HTML does what Android's markup does and JavaScript is used to do what Java does?

    So the question is: even if current implementations aren't as good, is it theoretically possible to get HTML5-based applications to work like other native apps, especially on mobile?

    Read the article

  • Very large log files, what should I do?

    - by Masroor
    (This question deals with a similar issue, but it talks about a rotated log file.) Today I got a system message regarding very low /var space. As usual, I executed commands along the lines of sudo apt-get clean, which improved the situation only slightly. Then I deleted the rotated log files, which again provided very little improvement. Upon examination I found that some log files in /var/log have grown to be very large. To be specific, ls -lSh /var/log gives:

        total 28G
        -rw-r----- 1 syslog adm   14G Aug 23 21:56 kern.log
        -rw-r----- 1 syslog adm   14G Aug 23 21:56 syslog
        -rw-rw-r-- 1 root   utmp 390K Aug 23 21:47 wtmp
        -rw-r--r-- 1 root   root 287K Aug 23 21:42 dpkg.log
        -rw-rw-r-- 1 root   utmp 287K Aug 23 20:43 lastlog

    As we can see, the first two are the offending ones. I am mildly surprised that such large files have not been rotated. So, what should I do? Simply delete these files and then reboot? Or take some more prudent steps? I am using Ubuntu 14.04.

    Read the article

  • Configuration Tips for better Performance with ADF Mobile Apps

    - by SRINI INDLA
    Some tips to keep in mind to make sure an ADF Mobile application's performance is optimal:

    1. Select release mode in the deployment profile. This is perhaps the most important thing to remember to ensure the best performance for ADF Mobile apps. Selecting this option causes the deployer to package an optimized JVM and minified JS libraries with the mobile app, thereby significantly improving the overall performance of the application.

    2. For iOS you do not need to do anything else other than selecting release mode in the deploy profile. On Android, however, you also have to create a keystore and configure it in JDev > Tools > Preferences > ADF Mobile > Platforms: Android.

    3. Generate the keystore for Android using keytool (a sample invocation is sketched below, after this list).

    4. Logging-level settings in logging.properties: make sure the log level is set to SEVERE for both the framework logger and the application logger, as follows:

        oracle.adfmf.framework.level=SEVERE
        oracle.adfmf.application.level=SEVERE

    5. When using SOAP web services with a WebService Data Control, make sure you select the option to copy the WSDL. This causes JDev to download the WSDL, and all the XSDs referenced by the WSDL, from the server at design time and package them with the application during deployment. This way the application does not incur the cost of downloading these resources from the device at run time.
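    For reference, a typical keytool invocation for generating an Android release keystore looks roughly like this; the keystore name, alias and validity period below are placeholders:

        keytool -genkey -v -keystore release.keystore -alias mykey \
            -keyalg RSA -keysize 2048 -validity 10000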

    Read the article

  • How to Reap Anticipated ROI in Large-Scale Capital Projects

    - by Sylvie MacKenzie, PMP
    Only a small fraction of companies in asset-intensive industries reliably achieve expected ROI for major capital projects 90 percent of the time, according to a new industry study. In addition, 12 percent of companies see expected ROI in less than half of their capital projects. The problem: no matter how sophisticated and far-reaching the planning processes are, many organizations struggle to manage risks or reap the expected value from major capital investments. The data comes from a larger survey of companies in the oil and gas, mining and metals, chemicals, and utilities industries. The results appear in Prepare for the Unexpected: Investment Planning in Asset-Intensive Industries, a comprehensive new report sponsored by Oracle and developed by the Economist Intelligence Unit. Analysts say the shortcomings in large-scale, long-duration capital-investment projects often stem from immature capital-planning processes. The poor decisions that result can lead to significant financial losses and disappointing project benefits, which are particularly harmful to organizations during economic downturns. The report highlights three other important findings:

    - Teaming the right data and people doesn't guarantee that ROI goals will be achieved. Despite involving cross-functional teams and looking at all the pertinent data, executives are still failing to identify risks and deliver bottom-line results on capital projects.
    - Effective processes are the missing link. Project-planning processes are weakest when it comes to risk management and predicting costs and ROI. Organizations participating in the study said they fail to achieve expected ROI because they regularly experience unexpected events that derail schedules and inflate budgets. But executives believe that using more robust risk-management and project-planning strategies will help avoid delays, improve ROI, and more accurately predict the long-term cost of initiatives.
    - Planning for unexpected events is a key to success. External factors, such as changing market conditions and evolving government policies, are difficult to forecast precisely, so organizations need to build flexibility into project plans to make it easier to adapt to change.

    The report outlines a series of steps executives can take to address these shortcomings and improve their capital-planning processes. Read the full report or take the benchmarking survey and find out how your organization compares.

    Read the article

  • Enable Diagnostics in Oracle apps

    - by PRajkumar
    How do you enable Oracle Apps Diagnostics -> Examine for certain users?

    Step 1. Navigate to System Administrator responsibility > Profile > System.

    Step 2. Enter the profile name Utilities:Diagnostics and the application user for whom you want to enable Diagnostics -> Examine.

    Step 3. Set Yes at the User level and save the changes. Note: you can also set Yes at the Site level if you want to enable this option for all Oracle application users.

    Step 4. Again navigate to System Administrator responsibility > Profile > System. Enter the profile name Hide Diagnostics menu entry and the application user for whom you do not want to hide the Diagnostics menu entry.

    Step 5. Set No at the User level and save the changes. Note: you can also set No at the Site level if you do not want to hide the menu-entry option for any Oracle application user.

    Step 6. Congratulations, you have successfully enabled Diagnostics -> Examine. Log out of Oracle Applications and log in again; you can now see the Diagnostics -> Examine option.

    Read the article

  • Necessary Infrastructure for large project with many components communicating through IPCs

    - by jluzwick
    I have a fairly in-depth question which probably doesn't have an exact answer. As a software engineer, I am usually tasked with working on a program or project with minimal understanding of how the other components or programs in the project interact with each other. When one program fails in a sea of multiple components and processes, what infrastructure elements are necessary to ensure that the problem can be accurately tracked to the offending application? More specifically, which infrastructure elements should be mandatory for such a large project, and which are optional but very helpful? One example I can think of is some form of common logging infrastructure that allows a developer or tester to easily browse through a log containing messages from numerous components for anything that might point to the culprit program, along with a "trail" of what happened before the issue occurred. I'm thinking of something similar to Android's alogcat tool. These necessary infrastructure elements should be language-agnostic. While all engineers on the team in question should understand these elements, which of them should the technical systems engineers understand in great detail, and what should the individual software engineers be responsible for adding to their tools to allow such infrastructure to take hold? Please feel free to ask for clarification if something does not make sense, as I understand this question is very broad and needs some refinement. I will refine as necessary from the answers and comments I receive. Thanks for any help!

    Read the article

  • Semantic coupling vs. large class

    - by user106587
    I have hardware I communicate with via TCP. The hardware accepts ~40 different commands/requests with about 20 different responses. I've created a HardwareProxy class which has a TcpClient to send and receive data. I didn't like the idea of having 40 different methods to send the commands/requests, so I started down the path of having a single SendCommand method which takes an ICommand and returns an IResponse; this results in 40 different SpecificCommand classes. The problem is that this requires semantic coupling: the method that invokes SendCommand receives an IResponse which it has to downcast to a SpecificResponse. I use a future map which I believe ensures the appropriate SpecificResponse, but I get the impression this code smells. Besides the semantic coupling, ICommand and IResponse are essentially empty abstract classes (marker interfaces), and this seems suspicious to me. If I go with the 40 methods, I don't think I have broken the single-responsibility principle, as the responsibility of the HardwareProxy class is to act as the hardware, which has all of these commands. This route is just ugly; plus, I'd like to have asynchronous versions, so there'd be about 80 methods. Is it better to bite the bullet and have a large class, accept the coupling and marker interfaces for a smaller solution, or am I missing a better way? Thanks.
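    One well-known middle ground, sketched below in C#, is to type the expected response into the command so the single SendCommand survives but the downcast disappears. This is an illustration only; all the names are hypothetical, and the transport plumbing is elided:

        using System;

        public interface IResponse { }

        // The command carries its response type, so callers never see a bare IResponse.
        public interface ICommand<TResponse> where TResponse : IResponse { }

        public class TemperatureResponse : IResponse
        {
            public double Celsius { get; set; }
        }

        public class GetTemperatureCommand : ICommand<TemperatureResponse> { }

        public class HardwareProxy
        {
            // Still one entry point, but the compiler now matches each command to its response.
            public TResponse SendCommand<TResponse>(ICommand<TResponse> command)
                where TResponse : IResponse
            {
                // Serialize the command, send it over the TcpClient,
                // correlate the reply via the future map, deserialize as TResponse...
                throw new NotImplementedException();
            }
        }

        // Usage: var reply = proxy.SendCommand(new GetTemperatureCommand());
        // reply is statically typed as TemperatureResponse; no downcast needed.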

    Read the article

  • RetinaPad Enables Retina Display for iPhone Apps on the iPad

    - by Jason Fitzpatrick
    RetinaPad is an iPad application that activates the Retina Display resolution in iPhone applications to increase their clarity on the iPad. It's a feature that should be built in but is currently only available for jailbroken iPads. The premise is simple: currently, iPads lack support for the Retina Display-level resolution that iPhone apps are capable of displaying. RetinaPad lets you stop using the ugly, blocky simple doubling available on the iPad and start accessing the higher-resolution Retina Display mode for iPhone applications on the iPad. It's such a trivial thing that it's outright shameful Apple doesn't include it by default; you shouldn't have to jailbreak your device to unlock functionality that should be there right from the factory. Check out the demo video below to see it in action.

    Fire up your jailbroken iPad, launch Cydia and search for RetinaPad. RetinaPad is $2.99, iPad only.

    Read the article

  • Computer Says No: Mobile Apps Connectivity Messages

    - by ultan o'broin
    Sharing some insight into connectivity messages for mobile applications. Based on some recent ethnography done by myself, and prompted by a real business case, I would recommend a message that:

    - In plain language, briefly and directly tells the user what is wrong and why. Something like: "Cannot connect because of a network problem."
    - Affords the user a means to retry connecting (or retries automatically). The mobile context of use means users anticipate interruptibility and disruption of task, so they will try again as an effective course of action.
    - Tells the user when the connection is re-established, and off they go.
    - Saves any work already done, implicitly. (Bonus points on the ADF critical-task setting scale.)

    The following images, showing my experience reading an ADF-EMG Google Groups notification on my (Android ICS) Samsung Galaxy S2 during a loss of WiFi, give you a good idea of a suitable messaging user experience for mobile apps in this kind of scenario: an inline connection-lost message with a Retry button, followed by a connection re-established toaster message. The UX possible is dependent on device and platform features, sure, so remember to integrate with the device capability (see point 10 of this great article on mobile design by Brent White and Lynn Hnilo-Rampoldi). But taking these considerations into account is far superior to a context-free, dumbed-down common error message repurposed from the desktop mentality: "the connection to the server was lost", so just "Click OK" or "Contact your sysadmin".

    Read the article

  • Packaging MATLAB (or, more generally, a large binary, proprietary piece of software)

    - by nfirvine
    I'm trying to package MATLAB for internal distribution, but this could apply to any piece of software with the same architecture. In fact, I'm packaging multiple releases of MATLAB to be installed concurrently. Key things:

    - Very large installation size (~4 GB)
    - Composed of a core and several plugins (toolboxes)

    Initially, I created a single "source" package (matlab2011b) that builds several .debs (mainly matlab2011b-core and matlab2011b-toolbox-* for each toolbox). The rules file is just the standard "%: dh $@" catch-all; there is no real build step, only copying of files. I use a number of debian/*.install files to specify files to copy from a copy of an installation to /usr/lib/. The problem is, every time I build the thing (say, to make a correction to the core package), it re-copies every file listed in the *.install files to e.g. debian/$packagename/usr/ (the build phase), and then has to bundle that into a .deb file. It takes a long time, on the order of hours, and is doing a lot of extra work. So my questions are:

    - Can you make dh_install do a hardlink copy (like cp -l) to save time? (AFAICT from the man page, no.) Maybe I should just get it to do this in the Makefile? (That's gonna be a big Makefile.)
    - Can you make debuild rebuild only the .debs that need rebuilding, or specify which .debs to rebuild?
    - Is my approach completely stupid? Should I break each of the toolboxes into its own source package too? (I'll have to do some silly templating or something, because there are hundreds of them. :/)

    Read the article

  • Choosing the Database Solution for Large Data Application

    - by GµårÐïåñ
    I have been tasked to write an application in VB.NET that will be a combination of document and inventory management. It will be used to store document images in TIFF, PDF, XPS, TXT, DOC, PPT and so on as binary data that can be retrieved for viewing, printing and possibly OCR (to make them searchable), along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like:

        DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be stored)

    My concern is finding a database solution that will not become unstable due to size restrictions, record limitations or performance. Some of the options are MS SQL, SQL Express, SQLite, MySQL and Access. Now, I can pretty much eliminate Access right off the bat, as it is just too limiting and not scalable. I can further eliminate SQL Express because of its 2 GB limit, and again scalability. So that leaves me with MS SQL, SQLite and MySQL (although if anyone has other options they think would be good as well, please feel free to share them; by no means am I set on these only). So this brings me to what you think is the best option for what I have described. The goal is that the data is all in one place (a single file), which will make backup and portability easier. For small-volume usage pretty much any solution will hold for a while, but my goal is to think ahead and make sure it's able to withstand heavy large-volume usage as well. Another consideration is interoperability with .NET and the stability of such code, to avoid errors and memory leaks. Your feedback would be greatly appreciated.

    Read the article

  • AngularJS dealing with large data sets (Strategy)

    - by Brian
    I am working on developing a personal temperature-logging viewer based on my Raspberry Pi curl'ing data into my web server's API. Temperatures are taken every 2 seconds, and I can have several temperature sensors posting data. Needless to say, I will have a lot of data to handle even within the scope of an hour. I have implemented a very simple paging API on the server so the server doesn't time out; it currently returns data in units of 1000 per call, paging through the data. My idea was to initially show, say, the last 20 minutes of data from a sensor (or all sensors, depending on user choices), then allow the user to select other timeframes for which to show data. The issue comes in when you want to view all sensors or an extended time period (say 24 hours). Is there a best practice for handling this large amount of data? Would it be useful to load those first 20 minutes into the live view and then cache something like the last 24 hours into local storage? I haven't been able to find a decent example of this in use yet, even though there are a lot of ways to approach the problem. I am just looking for suggestions as to what might provide a good balance between good performance and not caching the entire data set on the client side (as beyond a week of data this might not be feasible).

    Read the article

  • Mobile enabled web apps with ASP.NET MVC 3 and jQuery Mobile

    - by shiju
    In my previous blog posts, I demonstrated a simple web app using ASP.NET MVC 3 and EF Code First. In this post, I will focus on making this application work for mobile devices. A single web site will be used for both mobile and desktop browsers: users accessing the web app from mobile browsers will be redirected to mobile-specific pages, and will get the normal pages when accessing from desktop browsers. In this demo app, the mobile-specific pages are maintained in an ASP.NET MVC Area named Mobile, and mobile users are redirected to the Mobile area. Let's add a new area named Mobile to the ASP.NET MVC app. To add an Area, right-click the ASP.NET MVC project and select Area from the Add option. Our mobile-specific pages, built with jQuery Mobile, will be maintained in the Mobile area.

    ASP.NET MVC global filter for redirecting mobile visitors to the Mobile area

    Let's add an ASP.NET MVC global filter that redirects mobile visitors to the Mobile area. The filter below is taken from the sample app http://aspnetmobilesamples.codeplex.com/ created by the ASP.NET team, and it redirects mobile visitors to the MVC Area Mobile:

        public class RedirectMobileDevicesToMobileAreaAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(System.Web.HttpContextBase httpContext)
            {
                // Only redirect on the first request in a session
                if (!httpContext.Session.IsNewSession)
                    return true;

                // Don't redirect non-mobile browsers
                if (!httpContext.Request.Browser.IsMobileDevice)
                    return true;

                // Don't redirect requests for the Mobile area
                if (Regex.IsMatch(httpContext.Request.Url.PathAndQuery, "/Mobile($|/)"))
                    return true;

                return false;
            }

            protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
            {
                var redirectionRouteValues = GetRedirectionRouteValues(filterContext.RequestContext);
                filterContext.Result = new RedirectToRouteResult(redirectionRouteValues);
            }

            // Override this method if you want to customize the controller/action/parameters to which
            // mobile users would be redirected. This lets you redirect users to the mobile equivalent
            // of whatever resource they originally requested.
            protected virtual RouteValueDictionary GetRedirectionRouteValues(RequestContext requestContext)
            {
                return new RouteValueDictionary(new { area = "Mobile", controller = "Home", action = "Index" });
            }
        }

    Let's add the global filter RedirectMobileDevicesToMobileAreaAttribute to the global filter collection in the Application_Start() of the Global.asax.cs file:

        GlobalFilters.Filters.Add(new RedirectMobileDevicesToMobileAreaAttribute(), 1);

    Now your mobile visitors will be redirected to the Mobile area. But the browser-detection logic in the RedirectMobileDevicesToMobileAreaAttribute filter will not work in some modern browsers and under some conditions. The good news is that ASP.NET's browser-detection feature is extensible and works well with the open-source framework 51Degrees.mobi. 51Degrees.mobi is a browser-capabilities provider that plugs into ASP.NET's Request.Browser and provides more accurate and detailed information. For more details, visit the documentation page at http://51degrees.codeplex.com/documentation.

    Let's add a reference to the 51Degrees.mobi library using NuGet. We can easily add 51Degrees.mobi from NuGet, and this will update web.config with the necessary configuration.

    Mobile web app using the jQuery Mobile framework

    The jQuery Mobile framework is built on top of jQuery and provides top-of-the-line JavaScript in a unified user interface that works across the most-used smartphone web browsers and tablet form factors. It provides an easy way to develop user interfaces for mobile web apps. The current version of the framework is jQuery Mobile Alpha 3. We need to include the following files to use jQuery Mobile:

    - The jQuery Mobile CSS file (jquery.mobile-1.0a3.min.css)
    - The jQuery library (jquery-1.5.min.js)
    - The jQuery Mobile library (jquery.mobile-1.0a3.min.js)

    Let's add the required jQuery files directly from the jQuery CDN; you can also download the files and host them on your own server.

    jQuery Mobile page structure

    The basic jQuery Mobile page structure is given below:

        <!DOCTYPE html>
        <html>
        <head>
            <title>Page Title</title>
            <link rel="stylesheet" href="http://code.jquery.com/mobile/1.0a3/jquery.mobile-1.0a3.min.css" />
            <script src="http://code.jquery.com/jquery-1.5.min.js"></script>
            <script src="http://code.jquery.com/mobile/1.0a3/jquery.mobile-1.0a3.min.js"></script>
        </head>
        <body>
        <div data-role="page">
            <div data-role="header">
                <h1>Page Title</h1>
            </div>
            <div data-role="content">
                <p>Page content goes here.</p>
            </div>
            <div data-role="footer">
                <h4>Page Footer</h4>
            </div>
        </div>
        </body>
        </html>

    The data- attributes are a new feature of HTML5, so jQuery Mobile works on browsers that support HTML5. You can get detailed browser-support information from http://jquerymobile.com/gbs/. In the head section we have included the core jQuery JavaScript file, the jQuery Mobile library and the core CSS library for UI element styling. These are minified versions and will improve page-load performance on mobile devices. jQuery Mobile pages are identified by an element with the data-role="page" attribute inside the <body> tag:

        <div data-role="page"></div>

    Within the "page" container, any valid HTML markup can be used, but for typical pages in jQuery Mobile, the immediate children of a "page" are div elements with data-roles of "header", "content", and "footer":

        <div data-role="page">
            <div data-role="header">...</div>
            <div data-role="content">...</div>
            <div data-role="footer">...</div>
        </div>

    The div with data-role="content" holds the main content of the HTML page and will be used for the user-interaction elements. The div with data-role="header" is the header part of the page, and the div with data-role="footer" is the footer part.

    Creating mobile-specific pages in the Mobile area

    Let's create a layout page for our Mobile area:

        <!DOCTYPE html>
        <html>
        <head>
            <title>@ViewBag.Title</title>
            <link rel="stylesheet" href="http://code.jquery.com/mobile/1.0a3/jquery.mobile-1.0a3.min.css" />
            <script src="http://code.jquery.com/jquery-1.5.min.js"></script>
            <script src="http://code.jquery.com/mobile/1.0a3/jquery.mobile-1.0a3.min.js"></script>
        </head>
        <body>
        @RenderBody()
        </body>
        </html>

    In the layout page, I have referenced the jQuery Mobile JavaScript files and the CSS file. Let's add an Index view page, Index.cshtml:

        @{
            ViewBag.Title = "Index";
        }
        <div data-role="page">
            <div data-role="header">
                <h1>Expense Tracker Mobile</h1>
            </div>
            <div data-role="content">
                <ul data-role="listview">
                    <li>@Html.Partial("_LogOnPartial")</li>
                    <li>@Html.ActionLink("Home", "Index", "Home")</li>
                    <li>@Html.ActionLink("Category", "Index", "Category")</li>
                    <li>@Html.ActionLink("Expense", "Index", "Expense")</li>
                </ul>
            </div>
            <div data-role="footer">
                Shiju Varghese | <a href="http://weblogs.asp.net/shijuvarghese">Blog</a> | <a href="http://twitter.com/shijucv">Twitter</a>
            </div>
        </div>

    In the Index page, we have used the data-role "listview" to show our content as a list view. Let's create a data-entry screen, Create.cshtml:

        @model MyFinance.Domain.Category
        @{
            ViewBag.Title = "Create Category";
        }
        <div data-role="page">
            <div data-role="header">
                <h1>Create Category</h1>
                @Html.ActionLink("Home", "Index", "Home", null, new { @class = "ui-btn-right" })
            </div>
            <div data-role="content">
                @using (Html.BeginForm("Create", "Category", FormMethod.Post))
                {
                    <div data-role="fieldcontain">
                        @Html.LabelFor(model => model.Name)
                        @Html.EditorFor(model => model.Name)
                        <div>
                            @Html.ValidationMessageFor(m => m.Name)
                        </div>
                    </div>
                    <div data-role="fieldcontain">
                        @Html.LabelFor(model => model.Description)
                        @Html.EditorFor(model => model.Description)
                    </div>
                    <div class="ui-body ui-body-b">
                        <button type="submit" data-role="button" data-theme="b">Save</button>
                    </div>
                }
            </div>
        </div>

    In jQuery Mobile, form elements should be placed inside a container with data-role="fieldcontain". The screenshots in the original post show the Index and Create pages rendered in a mobile browser.

    Source code

    You can download the source code from http://efmvc.codeplex.com

    Summary

    We have created a single web app for desktop and mobile browsers. If users access the site from desktop browsers, they get normal web pages; if they access it from mobile browsers, they get mobile-specific pages, because we redirect them to the ASP.NET MVC Area Mobile. For the redirection logic we used a global filter, together with the open-source framework 51Degrees.mobi for better mobile-browser detection. In the Mobile area, we created the pages using jQuery Mobile, so users get mobile-friendly web pages. We can create great mobile web apps using ASP.NET MVC and the jQuery Mobile framework.

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
    I've been working quite a bit with Windows Services in the recent months, and well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up,  debugged and updated is a major chore that has to be extensively documented and or automated specifically. On most projects when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance on the other hand are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web Tools like Visual Studio to facilitate the process. By comparison Windows Services or anything self-hosted for that matter seems convoluted.In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built in Web Server, or an IIS application running a Service application, neither of which follows the standard Service or Web App template.Personally I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code and the the ability to publish directly to IIS from within Visual Studio with ease.Because of these benefits we set out to move from the self hosted service into an ASP.NET Web app instead.The Missing Link for ASP.NET as a Service: Auto-LoadingI've had moments in the past where I wanted to run a 'service like' application in ASP.NET because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET and it worked great and it's still running doing its job today.The tricky part for running an app as a service inside of IIS then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.Luckily today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on Preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that effectively your ASP.NET app becomes active immediately, Application_Start is fired making sure your app stays up and running at all times. 
All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application.Getting started with Application InitializationAs of IIS 8 Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via Web Platform Installer. Using IIS 8 Application Initialization is an optional install component in Windows or the Windows Server Role Manager: This is an optional component so make sure you explicitly select it.IIS Configuration for Application InitializationInitialization needs to be applied on the Application Pool as well as the IIS Application level. As of IIS 8 these settings can be made through the IIS Administration console.Start with the Application Pool:Here you need to set both the Start Automatically which is always set, and the StartMode which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is set true by default and controls the starting of the application pool itself while Always Running flag is required in order to launch the application. Without the latter flag set the site settings have no effect.Now on the Site/Application level you can specify whether the site should pre load: Set the Preload Enabled flag to true.At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.If you want a little more control over the load process you can add a few more settings to your web.config file that allow you to show a static page while the App is starting up. This can be useful if startup is really slow, so rather than displaying blank screen while the user is fiddling their thumbs you can display a static HTML page instead: <system.webServer> <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true"> <add initializationPage="ping.ashx" /> </applicationInitialization> </system.webServer>This allows you to specify a page to execute in a dry run. IIS basically fakes request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page in this case ping.ashx which just returns a simple OK string - ie. a fast pipeline request. This request is run immediately after Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site you can optionally show some sort of static status page that says, "we'll be right back".  I'm not sure if that's such a brilliant idea since this can be pretty disruptive in some cases. Personally I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.Note that the web.config stuff is optional. If you don't provide it IIS hits the default site link (/) and even if there's no matching request at the end of that request it'll still fire the request through the IIS pipeline. 
Ideally though you want to make sure that an ASP.NET endpoint is hit either with your default page, or by specify the initializationPage to ensure ASP.NET actually gets hit since it's possible for IIS fire unmanaged requests only for static pages (depending how your pipeline is configured).What about AppDomain Restarts?In addition to full Worker Process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns which can occur for a variety of reasons:Files are updated in the BIN folderWeb Deploy to your siteweb.config is changedHard application crashThese operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background.In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:protected void Application_End() { var client = new WebClient(); var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx"; client.DownloadString(url); Trace.WriteLine("Application Shut Down Ping: " + url); }which fires any ASP.NET url to the current site at the very end of the pipeline shutdown which in turn ensures that the site immediately starts back up.Manual Configuration in ApplicationHost.configThe above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags so you'll have to manually edit them.When you install the Application Initialization component into IIS it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in his best form for me, the module registration did not occur and I had to manually add it.<globalModules> <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" /> </globalModules>Most likely you won't need ever need to add this, but if things are not working it's worth to check if the module is actually registered.Next you need to configure the ApplicationPool and the Web site. The following are the two relevant entries in ApplicationHost.config.<system.applicationHost> <applicationPools> <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning" managedRuntimeVersion="v4.0" managedPipelineMode="Integrated"> <processModel identityType="LocalSystem" setProfileEnvironment="true" /> </add> </applicationPools> <sites> <site name="Default Web Site" id="1"> <application path="/MPress.Workflow.WebQueueMessageManager" applicationPool="West Wind West Wind Web Connection" preloadEnabled="true"> <virtualDirectory path="/" physicalPath="C:\Clients\…" /> </application> </site> </sites> </system.applicationHost>On the Application Pool make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site make sure to set the preloadEnabled flag to true.And that's all you should need. You can still set the web.config settings described above as well.ASP.NET as a Service?In the particular application I'm working on currently, we have a queue manager that runs as standalone service that polls a database queue and picks out jobs and processes them on several threads. The service can spin up any number of threads and keep these threads alive in the background while IIS is running doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool. 
In order for this service to work all it needs is a long running reference that keeps it alive for the life time of the application.In this particular app there are two components that run in the background on their own threads: A scheduler that runs various scheduled tasks and handles things like picking up emails to send out outside of IIS's scope and the QueueManager. Here's what this looks like in global.asax:public class Global : System.Web.HttpApplication { private static ApplicationScheduler scheduler; private static ServiceLauncher launcher; protected void Application_Start(object sender, EventArgs e) { // Pings the service and ensures it stays alive scheduler = new ApplicationScheduler() { CheckFrequency = 600000 }; scheduler.Start(); launcher = new ServiceLauncher(); launcher.Start(); // register so shutdown is controlled HostingEnvironment.RegisterObject(launcher); }}By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code except that I could remove the various overrides required for the Windows Service interface (OnStart,OnStop,OnResume etc.). Otherwise the behavior and operation is very similar.In this application ASP.NET serves two purposes: It acts as the host for SignalR and provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests or (as we do now) through the SignalR service.Registering a Background Object with ASP.NETNotice also the use of the HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject::Stop() method looks like on the launcher:public void Stop(bool immediate = false) { LogManager.Current.LogInfo("QueueManager Controller Stopped."); Controller.StopProcessing(); Controller.Dispose(); Thread.Sleep(1500); // give background threads some time HostingEnvironment.UnregisterObject(this); }Implementing IRegisterObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter.RegisterObject() is not required but I would highly recommend implementing it on whatever object controls your background processing to all clean shutdowns when the AppDomain shuts down.Testing it outI'm still in the testing phase with this particular service to see if there are any side effects. But so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows service startup to Web start up - everything else just worked as is. An honorable mention goes to SignalR 2.0's oWin hosting, because with the new oWin based hosting no code changes at all were required, merely a couple of configuration file settings and an assembly directive needed, to point at the SignalR startup class. Sweet!It also seems like SignalR is noticeably faster running inside of IIS compared to self-host. 
Startup feels faster because of the preload.Starting and Stopping the 'Service'Because the application is running as a Web Server, it's easy to have a Web interface for starting and stopping the services running inside of the service. For our queue manager the SignalR service and front monitoring app has a play and stop button for toggling the queue.If you want more administrative control and have it work more like a Windows Service you can also stop the application pool explicitly from the command line which would be equivalent to stopping and restarting a service.To start and stop from the command line you can use the IIS appCmd tool. To stop:> %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"and to start> %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"Note that when you explicitly force the AppPool to stop running either in the UI (on the ApplicationPools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up.What's not to like?There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or if scalability requires be offloaded to its own Web server.Easier to work withBut the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing I can simply turn off the auto-launch features and launch the service on demand through IIS simply by hitting a page on the site. If I want to shut down an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want and this works like any other ASP.NET application. Yes you end up on a background thread for debugging but Visual Studio handles that just fine and if you stay on a single thread this is no different than debugging any other code.SummaryUsing ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service like features and with the auto-start functionality of the Application Initialization module, it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR it becomes very, very easy to create rich services that can also communicate their status easily to the outside world.Whether it's existing applications that need some background processing for scheduling related tasks, or whether you just create a separate site altogether just to host your service it's easy to do and you can leverage the same tool chain you're already using for other Web projects. 
If you have lots of service projects, it's worth considering… give it some thought…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  SignalR  IIS

    Read the article

  • Tom Cruise: Meet Fusion Apps UX and Feel the Speed

    - by ultan o'broin
    Unfortunately, I am old enough to remember, and now to admit that I really loved, the movie Top Gun. You know the one - Tom Cruise, US Navy F-14 ace pilot, Mr Maverick, crisis of confidence, meets woman, etc., etc. Anyway, one of the more memorable lines (there were a few) was: "I feel the need, the need for speed."

    I was reminded of Tom Cruise recently. Paraphrasing a certain Senior Vice President talking about Oracle Fusion Applications and user experience at an all-hands meeting, I heard that: Applications can never be too easy to use. Performance can never be too fast. Developers, assume that your code is always "on". Perfect. You cannot overstate the user experience importance of application speed to users, or at least their perception of speed. We all want that super speed of execution and performance, and increasingly so as enterprise users bring the expectations of consumer IT into the work environment.

    Sten Vesterli (@stenvesterli), an Oracle Fusion Applications User Experience Advocate, also addressed the speed point artfully at an Oracle Usability Advisory Board meeting in Geneva. Sten asked us, the next time we Google something, to think about the message telling us that Google has found hundreds of thousands or millions of results in a split second (for example, About 8,340,000 results (0.23 seconds)). Now, how many results can we see, and how many can we use immediately? Yet this simple message, communicating the total results available to us, works a special magic about speed, delight, and excitement that Google has made its own in the search space. And, guess what? The Oracle Application Development Framework table component relies on a similar "virtual performance boost", says Sten: it displays the first 50 records in a table and uses a scrollbar indicating the total size of the data record set. The user scrolls, and the application automatically retrieves more records as needed.

    Application speed, and its perception by users, is worth bearing in mind the next time you're at a customer site and the IT department demands that you retrieve every record from the database. Just think of... Dave Ensor: "I'll give you all the rows you ask for in one second. If you promise to use them." (Again, hat tip to Sten.) And then maybe think of... Tom Cruise. If you want to read about the speed of Oracle Fusion Applications, and what that really means in terms of user productivity for your entire business, then check out the Oracle Applications User Experience Oracle Fusion Applications white papers on the usable apps website.

    Read the article

  • introducing pointers to a large software project

    - by stefan
    I have a fairly large software project written in C++. In it, there is a class Foo which represents a structure (by which I don't mean the programmer's struct) in which Foo objects can be part of a Foo object. Here's class Foo in simplest form:

        class Foo
        {
        private:
            std::vector<unsigned int> indices;
        public:
            void addFooIndex(unsigned int);
            std::vector<unsigned int> getFooIndices();
        };

    Every Foo object is currently stored in an object of class Bar:

        class Bar
        {
        private:
            std::vector<Foo> foos;
        public:
            void addFoo(Foo);
            std::vector<Foo> getFoos();
        };

    So if a Foo object should represent a structure with an "inner" Foo object, I currently do:

        Foo foo;
        Foo innerFoo;
        bar.addFoo(innerFoo);
        foo.addFooIndex(bar.getFoos().size() - 1); // index of the just-added innerFoo

    And to get it back, I obviously use:

        Foo foo;
        for (unsigned int i = 0; i < foo.getFooIndices().size(); ++i)
        {
            Foo inner_foo;
            assert(foo.getFooIndices().at(i) < bar.getFoos().size());
            inner_foo = bar.getFoos().at(foo.getFooIndices().at(i));
        }

    So this is not a problem. It just works, but it's not the most elegant solution. I now want the inner Foos to be "more connected" with the Foo object. The obvious change to class Foo would be:

        class Foo
        {
        private:
            std::vector<Foo*> foo_pointers;
        public:
            void addFooPointer(Foo*);
            std::vector<Foo*> getFooPointers();
        };

    So now, for my question: How do I gently change this basic class without messing up the whole code? Is there a "clean way"?

    Read the article

< Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >