Search Results

Search found 3241 results on 130 pages for 'extract'.


  • Installing Eclipse for OSB Development

    - by James Taylor
    OSB provides two methods for OSB development: the OSB console and Eclipse. This post deals with a typical development environment where OSB is installed on a remote server and the developer needs an IDE on their PC for development. As of 11.1.1.4, Eclipse is the only IDE supported for OSB development; we are hoping OSB will support JDeveloper in the future. To get the Eclipse download, use the WebLogic Server download that includes the Oracle Enterprise Pack for Eclipse (OEPE), e.g. wls1034_oepe111161_win32.exe. To ensure the Eclipse version is compatible with your OSB version, I recommend using the Eclipse that comes with the supported WLS server, e.g. for OSB 11.1.1.4 you would install WLS 10.3.4 + OEPE. The install is a two-step process: install the base Eclipse, then install the OSB plugins. In this example I'm using the 11.1.1.4 install for Windows; your versions may differ. You need to download two programs: WebLogic Server with the OEPE plugin for your OS, and Oracle Service Bus, which is generally generic. Place these files in a directory of your choice.

    Installing WebLogic Server with OEPE: Start the executable. I create a new Oracle Home for this installation as I don't want to impact my JDeveloper install or any other Oracle products installed on my machine. Ignore the support / email notifications. Choose a custom install, as we only want to install the minimum for Eclipse; if you really want, you can do a typical install and install everything. Deselect all products, then select the Oracle Enterprise Pack for Eclipse; this will select the minimum prerequisites required for the install. As I'm only going to use this home for OSB development, I deselect the JRockit JVM. Accept the locations for the installs. If running in a Windows environment you will be asked to start a Node Manager service; this is optional and I have chosen to skip it. Select the user permissions you require; I have left the defaults. Do a last check to see that the values are correct and continue the install. The install should start and complete successfully. I chose not to run the Quick Start.

    Installing the OSB plugins: Extract the OSB download to a location of your choice and double-click setup.exe. You may be asked to supply a correct Java location; point this to the Java installed on your OS. I'm running Windows 7, so I used the 64-bit version. Skip the software updates. Set the OSB home to the location of the WLS home installed above. Choose a custom install, as all we want to install is the OSB Eclipse plugins, and select OSB IDE. For the rest of the install screens accept the defaults and start the install. There is no need to configure a WLS domain if you only intend to deploy to the remote server; if you need one, there are other sites that show how to configure it via the configuration wizard.

    Start Eclipse to make sure the OSB plugin has been installed; in the top-right drop-down you should see OSB as an option. To connect to the remote server, select the Servers tab at the bottom, right-click in that frame and select Server. Choose the remote server version and the hostname, provide a name for your server if necessary, and accept the defaults. Enter the connection details for the remote server. Click on the remote server and it should validate, stating its status. Now you're ready to develop. Happy developing!

    Read the article

  • Big Data – Basics of Big Data Architecture – Day 4 of 21

    - by Pinal Dave
    In yesterday's blog post we understood how the Big Data evolution happened. Today we will understand the basics of Big Data architecture.

    Big Data Cycle: Just like every other database-related application, a Big Data project has its own development cycle, and the three Vs (link) certainly play an important role in deciding the architecture of Big Data projects. Just like every other project, a Big Data project also goes through similar phases of capturing, transforming, integrating and analyzing the data, and building actionable reporting on top of it. While the process looks almost the same, due to the nature of the data the architecture is often totally different. Here are a few of the questions which everyone should ask before going ahead with a Big Data architecture.

    Questions to Ask: How big is your total database? What is your requirement for reporting in terms of time – real time, semi real time or at frequent intervals? How important is data availability and what is the plan for disaster recovery? What are the plans for network and physical security of the data? What platform will be the driving force behind the data and what are the different service level agreements for the infrastructure? These are just basic questions, but based on your application and business needs you should come up with a custom list of questions to ask. As I mentioned earlier, these questions may look quite simple but the answers will not be simple. When we are talking about a Big Data implementation there are many other important aspects which we have to consider when we decide on the architecture.

    Building Blocks of Big Data Architecture: It is absolutely impossible to discuss and nail down the most optimal architecture for any Big Data solution in a single blog post; however, we can discuss the basic building blocks of Big Data architecture. Here is the image which I have built to explain how the building blocks of the Big Data architecture work together. The image gives a good overview of how the various components in a Big Data architecture are associated with each other. In Big Data, many different data sources are part of the architecture, hence extract, transform and integration are among the most essential layers of the architecture. Most of the data is stored in relational as well as non-relational data marts and data warehousing solutions. As per the business need, the data is processed and converted into proper reports and visualizations for end users. Just like the software, the hardware is almost the most important part of the Big Data architecture: hardware infrastructure is extremely important, and failover instances as well as redundant physical infrastructure are usually implemented.

    NoSQL in Data Management: NoSQL is a very famous buzzword and it really means Not Relational SQL or Not Only SQL. This is because in a Big Data architecture the data can be in any format – unstructured, relational, or in any other format or from any other data source. To bring all the data together, relational technology is not enough, hence new tools, architectures and algorithms have been invented which take care of all kinds of data. This is collectively called NoSQL.

    Tomorrow: Over the next four days we will answer the buzzword – Hadoop.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Big Data – Basics of Big Data Analytics – Day 18 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the various components in the Big Data story. In this article we will understand the various analytics tasks we try to achieve with Big Data and list the important tools in the Big Data story.

    When you have plenty of data around you, what is the first thing that comes to your mind? "What does all this data mean?" Exactly – the same thought comes to my mind as well. I always wanted to know what all the data means and what meaningful information I can get out of it. Most Big Data projects are built to retrieve the various kinds of intelligence all this data contains within it. Let us take the example of Facebook. When I look at my friends list on Facebook, I always want to ask many questions such as: On which date do most of my friends have a birthday? What is the favorite film of most of my friends, so I can talk about it and engage them? What is the most liked place to travel to among my friends? Which is the most disliked cuisine among my friends in India and the USA, so that when they travel, I do not take them there? There are many more questions I can think of. This illustrates how important it is to analyze Big Data. Here are a few of the kinds of analysis you can use with Big Data.

    Slicing and Dicing: This means breaking down your data into smaller sets and understanding them one set at a time. This also helps to present the information in a variety of user-digestible ways. For example, if you have data related to movies, you can slice and dice the data in various dimensions like actors, movie length etc.

    Real Time Monitoring: This is very crucial in social media when there are events happening and you want to measure the impact while the event is happening. For example, if you are using Twitter when there is a football match, you can watch what fans are saying about the match on Twitter while the event is happening.

    Anomaly Prediction and Modeling: If the business is running normally it is all right, but if there are signs of trouble, everyone wants to know about them early on. Big Data analysis of various patterns can be very helpful in predicting the future. It may not always be accurate, but certain hints and signals can be very helpful. For example, lots of data can help conclude that if there is a lot of rain it can increase the sale of umbrellas.

    Text and Unstructured Data Analysis: Unstructured data is now becoming the norm in the new world and is a big part of the Big Data revolution. It is very important that we Extract, Transform and Load the unstructured data and make meaningful data out of it. For example, from the analysis of lots of images, one can predict that people like to use certain colors in their clothes in certain months.

    Big Data Analytics Solutions: There are many different Big Data analytics solutions out in the market. It is impossible to list all of them, so I will list a few of them here. Tableau – this has to be one of the most popular visualization tools out in the big data market. SAS – a high performance analytics and infrastructure company. IBM and Oracle – they have a range of tools for Big Data analysis.

    Tomorrow: In tomorrow's blog post we will discuss a very important component of the Big Data ecosystem – the data scientist.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • How do I install Revenge of the Titans?

    - by Akash
    I've downloaded the .deb file of Revenge of the Titans and installed it using Ubuntu Software Center. Now, when I try to launch it using the software launcher, nothing happens. Any ideas? The .deb file was downloaded from the Humble Indie Bundle. I am unable to launch it from the terminal (the command revenge-of-the-titans says command not found). I also tried the .tar.gz. When I extract it and run ./revenge.sh, nothing happens. No output on the terminal or anything at all. I have set chmod 777 revenge.sh as well. The command /opt/revengeofthetitans/revenge.sh does not give any output. If I run gedit /opt/revengeofthetitans/revenge.sh in the terminal, this is the script:

        #!/bin/bash
        #
        # revenge.sh
        #
        ###############################################################################

        SCRIPT="`basename $0`"
        GAMEDIR="${HOME}/.revenge_of_the_titans_1.80"
        LOGFILE="${GAMEDIR}/${SCRIPT}.log"
        INSTDIR="`dirname $0`" ; cd "${INSTDIR}" ; INSTDIR="`pwd`"

        [[ ! -d "${GAMEDIR}" ]] && mkdir -m 0755 "${GAMEDIR}"

        JARPATH="patch.jar:RevengeOfTheTitans.jar:data-hib.jar:gfx.jar:fonts.jar:images.jar:music.jar:fx-mono.jar:fx-stereo.jar:gamecommerce.jar:common.jar:spgl-lite.jar:lwjgl.jar:lwjgl_util.jar:jorbis.jar:jinput.jar"

        # XMODIFIERS is cleared here to prevent SCIM screwing up keyboard input
        XMODIFIERS= java \
            -noverify \
            -Djava.library.path="${INSTDIR}" \
            -Dorg.lwjgl.util.NoChecks=true \
            -Dorg.lwjgl.librarypath="${INSTDIR}" \
            -Dnet.puppygames.applet.Launcher.resources=/resources-hib.dat \
            -Dnet.puppygames.applet.Game.gameResource=game.hib \
            -XX:MaxGCPauseMillis=3 \
            -Xms64m \
            -Xmx375m \
            -Xincgc \
            -cp "${JARPATH}" \
            net.puppygames.applet.Launcher \
            "$@" \
            >"${LOGFILE}" 2>&1

        exit 0

        #
        # EOF
        #
        ###############################################################################

    Read the article

  • How to use a list of values in Excel as filter in a query

    - by Luca Zavarella
    It often happens that a customer provides us with a list of items for which to extract certain information. Imagine, for example, that our client wishes to have the header information of the sales orders only for certain orders. Most likely he will give us a list of items in a column in Excel, or, less probably, a simple text file with the identification codes.

    As long as the given values are at best a dozen, it costs us nothing to copy and paste those values into SSMS and place them in a WHERE clause, using the IN operator, making sure to include the quotes in the case of alphanumeric elements (the sample database is AdventureWorks2008R2):

        SELECT *
        FROM Sales.SalesOrderHeader AS SOH
        WHERE SOH.SalesOrderNumber IN (
            'SO43667'
            ,'SO43709'
            ,'SO43726'
            ,'SO43746'
            ,'SO43782'
            ,'SO43796')

    Clearly, the need to add commas and quotes becomes a hassle when dealing with hundreds of items (which of course has happened to us!). It would be convenient to do a simple copy and paste, leave the items just as they are pasted, and have the query still work fine. We can get this convenience via a User Defined Function that returns the items in a table; we simply provide the function with an input string parameter containing the pasted items. Here is the T-SQL code, with comments to clarify what was written:

        CREATE FUNCTION [dbo].[SplitCRLFList] (@List VARCHAR(MAX))
        RETURNS @ParsedList TABLE
        (
            --< Set the item length as your needs
            Item VARCHAR(255)
        )
        AS
        BEGIN
            DECLARE
                --< Set the item length as your needs
                @Item VARCHAR(255)
                ,@Pos BIGINT

            --< Trim TABs due to indentations
            SET @List = REPLACE(@List, CHAR(9), '')

            --< Trim leading and trailing spaces, then add a CR/LF at the end of the list
            SET @List = LTRIM(RTRIM(@List)) + CHAR(13) + CHAR(10)

            --< Set the position at the first CR/LF in the list
            SET @Pos = CHARINDEX(CHAR(13) + CHAR(10), @List, 1)

            --< If chars other than CR/LFs exist in the list then...
            IF REPLACE(@List, CHAR(13) + CHAR(10), '') <> ''
            BEGIN
                --< Loop until CR/LFs are over (not found = CHARINDEX returns 0)
                WHILE @Pos > 0
                BEGIN
                    --< Get the heading list chars from the first char to the first CR/LF and trim spaces
                    SET @Item = LTRIM(RTRIM(LEFT(@List, @Pos - 1)))

                    --< If the so calculated item is not empty...
                    IF @Item <> ''
                    BEGIN
                        --< ...insert it in the @ParsedList table
                        INSERT INTO @ParsedList (Item)
                        VALUES (@Item) --(CAST(@Item AS int)) --< Use the appropriate conversion if needed
                    END

                    --< Remove the first item from the list...
                    SET @List = RIGHT(@List, LEN(@List) - @Pos - 1)

                    --< ...and set the position to the next CR/LF
                    SET @Pos = CHARINDEX(CHAR(13) + CHAR(10), @List, 1)

                    --< Repeat this block while the above loop condition is verified
                END
            END

            RETURN
        END

    At this point, having created the UDF, our query transforms trivially into:

        SELECT *
        FROM Sales.SalesOrderHeader AS SOH
        WHERE SOH.SalesOrderNumber IN (
            SELECT Item
            FROM SplitCRLFList('SO43667
        SO43709
        SO43726
        SO43746
        SO43782
        SO43796') AS SCL)

    Convenient, isn't it? You can find the script DBA_SplitCRLFList.sql here. Bye!!

    Read the article

  • Change The Windows 7 Start Orb the Easy Way

    - by Matthew Guay
    Want to make your Windows 7 PC even more unique and personalized?  Then check out this easy guide on how to change your start orb in Windows 7. Getting Started First, download the free Windows 7 Start Button Changer (link below), and extract the contents of the folder.  It contains the app along with a selection of alternate start button orbs you can try out.   Before changing the start button, we advise creating a system restore point in case anything goes wrong.  Enter System Restore in your Start menu search, and select “Create a restore point”. Please note:  We tested this on both the 32 bit and 64 bit editions of Windows 7, and didn’t encounter any problems or stability issues.  That said, it is always prudent to make a restore point just in case a problem did happen. Click the Create button… Then enter a name for the restore point, and click Create. Changing the Start Orb. Once this is finished, run the Windows 7 Start Button Changer as administrator by right-clicking on it and selecting “Run as administrator”.  Accept the UAC prompt that will appear. If you don’t run it as an administrator, you may see the following warning.  Click Quit, and then run again as administrator. You should now see the Windows 7 Start Button Changer.  On the left it shows what your current (default) start orb looks like inactive, when hovered over, and when selected.  Click the orb on the right to select a new start button. Here we browsed to the sample orbs folder, and selected one of them.  Let’s give Windows the Media Center orb for a start orb.  Click the orb you want, and then select open. When you click Open, your screen will momentarily freeze and your taskbar will disappear.  When it reappears, your computer will have gone from having the old, default Start orb style… …to your new, exciting Start orb!  Here it is default, and glowing when hovered over. Now, the Windows 7 Start Orb Changer will change, and show your new Start orb on the left side.  If you would like to revert to the default orb, simply click the folder icon to restore it.  Or, if you would like to change the orb again, restore the original first and then select a new one. The orbs don’t have to be round; here’s a fancy Windows 7 logo as the start button. The start orb change will work in the Aero and Aero basic (which Windows 7 Start uses) themes, but will not show up in the classic, Windows 2000 style themes.  Here’s how the new start button looks with the Aero Classic theme: There are tons of orbs available, including this cute smiley, so choose one that you like to make your computer uniquely yours. Conclusion This is a cute way to make your desktop unique, and can be a great way to make a truly personalized theme.  Let us know your favorite Start orb! 
    Links: Download the Windows 7 Start Button Changer. Find more Start orbs at deviantART.

    Read the article

  • Enterprise Integration: Can Companies Afford It?

    - by Ralph Wheaton
    Each year, my company holds a global sales conference where employees and partners from around the world come together to collaborate, share knowledge and ideas, and learn about future plans. As members of the professional services division, several of us had been asked to make a presentation: an elevator pitch, in 3 minutes or less, that relates to a success we have worked on or directly relates to our tag (that is, our primary technology focus). Mine happens to be Enterprise Integration as it relates to Business Intelligence. I found it rather difficult to present that pitch in a short amount of time and had to pare it down. At any rate, in just a little over 3 minutes, this is the presentation I submitted. Here is a link to the full presentation video in WMV format.

    Many companies today subscribe to a buy-versus-build mentality in an attempt to drive down costs and improve time to implementation. Sometimes this makes sense, especially as it relates to specialized software or software that performs a small number of tasks extremely well. However, if not carefully considered or planned out, this oftentimes leads to multiple disparate systems with silos of data or multiple versions of the same data. For instance, client data (contact information, addresses, phone numbers, opportunities, sales) stored in your CRM system may not play well with Accounts Receivables. Employee data may be stored across multiple systems such as HR, Time Entry and Payroll. Other data (such as member data) may not originate internally, but be provided by multiple outside sources in multiple formats. And to top it all off, some data may have to be manually entered into multiple systems to keep it all synchronized. When left to grow out of control like this, overall performance is lacking, stability is questionable and maintenance is frequent and costly. Worse yet, in many cases, this topology, this hodgepodge of data, creates a reporting nightmare. Decision makers are forced to try to put together pieces of the puzzle attempting to find the information they need, wading through multiple systems to find what they think is the single version of the truth. More often than not, they find they are missing pieces, pieces that may be crucial to growing the business rather than closing the business.

    An enterprise integration solution addresses this by standardizing how data is defined, validated and moved across applications. Master data owners are defined to establish single sources of data (for example, the CRM system owns client data). Other systems subscribe to the master data and changes are replicated to subscribers as they are made. This can be one way (no changes are allowed on the subscriber systems) or bi-directional. But at all times, the master data owner is current or up to date. And all data, whether internal or external, use the same processes and methods to move data from one place to another, leveraging the same validations, lookups and transformations enterprise wide, eliminating inconsistencies and siloed data. Once implemented, an enterprise integration solution improves performance and stability by reducing the number of moving parts and eliminating inconsistent data. Overall maintenance costs are mitigated by reducing touch points, or the number of places that require modification when a business rule is changed or another data element is added. Most importantly, however, decision makers can now easily extract and piece together the information they need to grow their business, improve customer satisfaction and so on.
So, in implementing an enterprise integration solution, companies can position themselves for the future, allowing for easy transition to data marts, data warehousing and, ultimately, business intelligence. Along this path, companies can achieve growth in size, intelligence and complexity. Truly, the question is not can companies afford to implement an enterprise integration solution, but can they afford not to.   Ralph Wheaton Microsoft Certified Technology Specialist Microsoft Certified Professional Developer Microsoft VTS-P BizTalk, .Net

    Read the article

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that changes often get reverted and then re-added in the space of a few dozen to a few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python.

    My question is, what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template - which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down, or splitting what used to be one data field into two data fields.

    A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with a series of variants that matches that set of files. This is because, more often than not, most of the file will remain consistent over a long period, marred only by one or two errant sections; which section is inconsistent, however, varies within the period. For example, say a file has four sections with three data fields each, which is represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, sections 1-3 are consistent, section 4 is one row down, but a section five appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section five disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three fields, but section 3 is removed entirely. Files 7800-8000 have everything in order.

    The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section (a rough sketch of this idea is shown below). For example, a section_four_variants dictionary might have two members, one for the shifted-down version and one for the normal version; a section_two_variants dictionary might have three-field and five-field members, etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database.

    Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either to see what other solutions might be, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.
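    To make the variant-mapping idea concrete, here is a minimal Python sketch of what such a lookup could look like. Everything in it is illustrative: the range boundaries, section names, field names and the read_cell helper are hypothetical placeholders, not part of the actual spreadsheets or of any particular library.

        # Illustrative sketch only - ranges, section names and fields are made up.
        # Each variant records how a section is laid out (here, a row offset plus its fields).
        SECTION_FOUR_VARIANTS = {
            "normal":       {"row_offset": 0, "fields": ["temp", "voltage", "current"]},
            "shifted_down": {"row_offset": 1, "fields": ["temp", "voltage", "current"]},
        }
        SECTION_TWO_VARIANTS = {
            "three_fields": {"row_offset": 0, "fields": ["serial", "date", "operator"]},
            "five_fields":  {"row_offset": 0, "fields": ["serial", "date", "operator", "station", "shift"]},
        }

        # Inclusive file-number ranges mapped to the variant each section uses in that range.
        VARIANTS_BY_RANGE = [
            ((7000, 7250), {"section_four": "shifted_down", "section_two": "three_fields"}),
            ((7251, 7500), {"section_four": "shifted_down", "section_two": "three_fields"}),
            ((7501, 7635), {"section_four": "shifted_down", "section_two": "five_fields"}),
            ((7636, 8000), {"section_four": "normal",       "section_two": "three_fields"}),
        ]

        def variants_for_file(file_number):
            """Return the section -> variant mapping that applies to a given file number."""
            for (low, high), mapping in VARIANTS_BY_RANGE:
                if low <= file_number <= high:
                    return mapping
            # Anything outside the known ranges falls back to the latest template.
            return {"section_four": "normal", "section_two": "three_fields"}

        def extract_document(file_number, read_cell):
            """Build a MongoDB-style document; read_cell(section, field, row_offset) is a
            placeholder for however the sheet is actually read (openpyxl, xlrd, ...)."""
            mapping = variants_for_file(file_number)
            doc = {"file_number": file_number}
            for section, variants in (("section_four", SECTION_FOUR_VARIANTS),
                                      ("section_two", SECTION_TWO_VARIANTS)):
                layout = variants[mapping[section]]
                doc[section] = {f: read_cell(section, f, layout["row_offset"])
                                for f in layout["fields"]}
            return doc

    The point is simply that the per-variant layouts become data rather than code, so a newly discovered variant means adding one dictionary entry and one range row instead of another hand-written schema.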

    Read the article

  • Joy! | Important Information About Your iPad 3G

    - by Jeff Julian
    Looks like I was one of the lucky 114,000 whose email addresses AT&T lost to "hackers". Why is "hackers" in "double quotes"? I can just imagine some executive at AT&T in their "Oh No, We Messed Up" meeting asking what happened, and someone replying: well, we had a breach and "hackers" broke in (using the air-quotes gesture) and stole our iPad 3G customers' emails. Oh well, I am sure my email has been sold and sold again by many different vendors, so why not AT&T now. At least Dorothy Attwood could have given us her email to give to someone else instead of blinking it through a newsletter system.

    June 13, 2010

    Dear Valued AT&T Customer,

    Recently there was an issue that affected some of our customers with AT&T 3G service for iPad resulting in the release of their customer email addresses. I am writing to let you know that no other information was exposed and the matter has been resolved. We apologize for the incident and any inconvenience it may have caused. Rest assured, you can continue to use your AT&T 3G service on your iPad with confidence. Here's some additional detail:

    On June 7 we learned that unauthorized computer "hackers" maliciously exploited a function designed to make your iPad log-in process faster by pre-populating an AT&T authentication page with the email address you used to register your iPad for 3G service. The self-described hackers wrote software code to randomly generate numbers that mimicked serial numbers of the AT&T SIM card for iPad – called the integrated circuit card identification (ICC-ID) – and repeatedly queried an AT&T web address. When a number generated by the hackers matched an actual ICC-ID, the authentication page log-in screen was returned to the hackers with the email address associated with the ICC-ID already populated on the log-in screen. The hackers deliberately went to great efforts with a random program to extract possible ICC-IDs and capture customer email addresses. They then put together a list of these emails and distributed it for their own publicity.

    As soon as we became aware of this situation, we took swift action to prevent any further unauthorized exposure of customer email addresses. Within hours, AT&T disabled the mechanism that automatically populated the email address. Now, the authentication page log-in screen requires the user to enter both their email address and their password.

    I want to assure you that the email address and ICC-ID were the only information that was accessible. Your password, account information, the contents of your email, and any other personal information were never at risk. The hackers never had access to AT&T communications or data networks, or your iPad. AT&T 3G service for other mobile devices was not affected.

    While the attack was limited to email address and ICC-ID data, we encourage you to be alert to scams that could attempt to use this information to obtain other data or send you unwanted email. You can learn more about phishing by visiting the AT&T website.

    AT&T takes your privacy seriously and does not tolerate unauthorized access to its customers' information or company websites. We will cooperate with law enforcement in any investigation of unauthorized system access and to prosecute violators to the fullest extent of the law.

    AT&T acted quickly to protect your information – and we promise to keep working around the clock to keep your information safe. Thank you very much for your understanding, and for being an AT&T customer.
Sincerely, Dorothy Attwood Senior Vice President, Public Policy and Chief Privacy Officer for AT&T Technorati Tags: AT&T,iPad 3G,Email

    Read the article

  • Abstraction, Politics, and Software Architecture

    Abstraction can be defined as a general concept and/or idea that lacks any concrete details. Throughout history this type of thinking has led to an array of new ideas and innovations, as well as increased confusion and conspiracy. If one were to look back at our history, they would see that abstraction has been used in various forms throughout our past. When I was growing up, I do not know how many times I heard politicians say "Leave no child left behind" or "No child left behind" as a major part of their campaign rhetoric in regard to a stance on education. As you can see, their slogan is a perfect example of abstraction, because it only offers a very general concept about improving our education system and does not mention how they would like to do it. If it did, they would be adding concrete details to their abstraction, thus turning it into an actual working plan for how we as a society can help children succeed in school and in life, but then they would not be using abstraction.

    By now I am sure you are thinking: what does abstraction have to do with software architecture? It is a valid question, but abstraction is a wonderful tool used in information technology, especially in the world of software architecture. Abstraction is one method of extracting the concepts of an idea so that it can be understood and discussed by others of varying technical abilities and backgrounds. One way in which I tend to extract my architectural design thoughts is through the use of basic diagrams to convey an idea for a system or a new feature for an existing application. This allows me to generically model an architectural design through the use of views and the Unified Modeling Language (UML). UML is a standard notation for creating 4+1 Architectural View Models. The 4+1 Architectural View Model consists of four views, typically created with UML, as well as a general description of the concept that is being expressed by the model.

    The 4+1 Architectural View Model:
    Logical View: Models a system's end-user functionality.
    Development View: Models a system as a collection of components and connectors to illustrate how it is intended to be developed.
    Process View: Models the interaction between system components and connectors so as to indicate the activities of a system.
    Physical View: Models the placement of the collection of components and connectors of a system within a physical environment.

    Recently I had to use the concept of abstraction to express an idea for implementing a new security framework on an existing website. My concept would add session-based management in order to properly secure and allow page access based on valid user credentials and last user activity. I created a basic Process View using UML diagrams to communicate the basic process flow of my changes in the application, so that all of the project's stakeholders would be able to understand my idea. Additionally, I created a Logical View on a whiteboard while walking through the process workflow with a few stakeholders, to show how end-users would be affected by the new framework, and gained additional input about the design. After my Logical and Process Views were accepted, I started creating a more detailed Development View to map how the system would be built as components and connectors based on the previously defined interactions. I really did not need to create a Physical View for this idea because we were updating an existing system that was already deployed based on an existing Physical View.
What do you think about the use of abstraction in the development of software architecture? Please let me know.

    Read the article

  • EPM 11.1.2.2 Architecture: Financial Performance Management Applications

    - by Marc Schumacher
    Financial Management can be accessed either by a browser-based client or by SmartView. Starting from release 11.1.2.2, the Financial Management Windows client no longer accesses the Financial Management Consolidation server. All tasks that require an online connection (e.g. load and extract tasks) can only be done using the web interface. Any client connection initiated by a browser or SmartView is sent to the Oracle HTTP Server (OHS) first. Based on the path given in the URL (e.g. hfmadf, hfmofficeprovider), OHS decides to forward the request either to the new Financial Management web application based on the Oracle Application Development Framework (ADF) or to the .NET-based application serving SmartView retrievals running on Internet Information Server (IIS). Any requests sent to the ADF web interface that need to be processed by the Financial Management application server are sent to IIS using the HTTP protocol and are then forwarded to the Financial Management application server using DCOM. SmartView requests, which are processed by IIS in the first place, are forwarded to the Financial Management application server using DCOM as well. The Financial Management application server uses OLE DB database connections via native database clients to talk to the Financial Management database schema. Communication between the Financial Management DME Listener, which handles requests from EPMA, and the Financial Management application server is based on DCOM.

    Unlike most other components, Essbase Analytics Link (EAL) does not have an end-user interface. The only user interface is a plug-in for the Essbase Administration Services console, which is used for administration purposes only. End users interact with a Transparent or Replicated Partition that is created in Essbase and populated with data by EAL. The Analytics Link Server deployed on WebLogic communicates through the HTTP protocol with the Analytics Link Financial Management Connector that is deployed in IIS on the Financial Management web server. The Analytics Link Server interacts with the Data Synchronization server using the EAL API. The Data Synchronization server acts as a target of a Transparent or Replicated Partition in Essbase and uses a native database client to connect to the Financial Management database. The Analytics Link Server uses JDBC to connect to relational repository databases and the Essbase JAPI to connect to Essbase.

    As with most Oracle EPM System products, browser-based clients and SmartView can be used to access Planning. The Java-based Planning web application is deployed on WebLogic, which is configured behind an Oracle HTTP Server (OHS). Communication between Planning and the Planning RMI Registry Service is done using Java Remote Method Invocation (RMI). Planning uses JDBC to access relational repository databases and talks to Essbase using the CAPI. Be aware of the fact that besides the Planning System database, a dedicated database schema is needed for each application that is set up within Planning.

    Like Planning, Profitability and Cost Management (HPCM) has a pretty simple architecture. Besides the browser-based clients and SmartView, a web service consumer can be used as a client too. All clients access the Java-based web application deployed on WebLogic through Oracle HTTP Server (OHS). Communication between Profitability and Cost Management and the EPMA Web Server is done using the HTTP protocol. JDBC is used to access the relational repository databases as well as data sources.
Essbase JAPI is utilized to talk to Essbase.  For Strategic Finance, two clients exist, SmartView and a Windows client. While SmartView communicates through the web layer to the Strategic Finance Server, Strategic Finance Windows client makes a direct connection to the Strategic Finance Server using RPC calls. Connections from Strategic Finance Web as well as from Strategic Finance Web Services to the Strategic Finance Server are made using RPC calls too. The Strategic Finance Server uses its own file based data store. JDBC is used to connect to the EPM System Registry from web and application layer.  Disclosure Management has three kinds of clients. While the browser based client and SmartView interact with the Disclosure Management web application directly through Oracle HTTP Server (OHS), Taxonomy Designer does not connect to the Disclosure Management server. Communication to relational repository databases is done via JDBC, to connect to Essbase the Essbase JAPI is utilized.

    Read the article

  • Algorithm for tracking progress of controller method running in background

    - by SilentAssassin
    I am using Codeigniter framework for PHP on Windows platform. My problem is I am trying to track progress of a controller method running in background. The controller extracts data from the database(MySQL) then does some processing and then stores the results again in the database. The complete aforesaid process can be considered as a single task. A new task can be assigned while another task is running. The newly assigned task will be added in a queue. So if I can track progress of the controller, I can show status for each of these tasks. Like I can show "Pending" status for tasks in the queue, "In Progress" for tasks running and "Done" for tasks that are completed. Main Issue: Now first thing I need to find is an algorithm to track the progress of how much amount of execution the controller method has completed and that means tracking how much amount of method has completed execution. For instance, this PHP script tracks progress of array being counted. Here the current state and state after total execution are known so it is possible to track its progress. But I am not able to devise anything analogous to it in my case. Maybe what I am trying to achieve is programmtically not possible. If its not possible then suggest me a workaround or a completely new approach. If some details are pending you can mention them. Sorry for my ignorance this is my first post here. I welcome you to point out my mistakes. EDIT: Database outline: The URL(s) and keyword(s) are first entered by user which are stored in a database table called link_master and keyword_master respectively. Then keywords are extracted from all the links present in this table and compared with keywords entered by user and their frequency is calculated which is the final result. And the results are stored in another table called link_result. Now sub-links are extracted from the domain links and stored in a table called sub_link_master. Now again the keywords are extracted from these sub-links and the corresponding results are stored in a table called sub_link_result. The number of records cannot be defined beforehand as the number of links on any web page can be different. Only the cardinality of *link_result* table can be known which will be equal to multiplication of number of keyword(s) and URL(s) . I insert multiple records at a time using this resource. Controller outline: The controller extracts keywords from a web page and also extracts keywords from all the links present on that page. There is a method called crawlLink. I used Rolling Curl to extract keywords and web page content. It has callback function which I used for extracting keywords alongwith generating results and extracting valid sub-links. There is a insertResult method which stores results for links and sub-links in the respective tables. Yes, the processing depends on the number of records. The more the number of records, the more time it takes to execute: Consider this scenario: Number of Domain Links = 1 Number of Keywords = 3 Number of Domain Links Result generated = 3 (3 x 1 as described in the question) Number of Sub Links generated = 41 Number of Sub Links Result = 117 (41 x 3 = 123 but some links are not valid or searchable) Approximate time taken for above process to complete = 55 seconds. The above result is for a single link. I want to track the progress of the above results getting stored in database. When all results are stored, the task is complete. If results are getting stored, the task is In Progress. 
    I am not clear on how I can track this progress.
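    Not an authoritative answer, but one line of thinking that fits the tables described above: since the cardinality of link_result is known up front (keywords x URLs), and the sub-link counts become known as soon as the sub-links are extracted, progress can be approximated by comparing the rows already stored against those expected totals. The sketch below expresses only that arithmetic, in Python rather than CodeIgniter/PHP, so it is language-neutral logic and not framework code; the finished_flag parameter assumes the background job explicitly marks a task as complete when it ends, because the sub-link total is only an upper bound (some links are invalid or not searchable).

        # Rough sketch of the progress bookkeeping described above (illustrative only).
        # The caller is expected to fetch the counts with simple SELECT COUNT(*) queries
        # against link_result, sub_link_master and sub_link_result for the task in question.
        def task_progress(num_keywords, num_links, link_results_stored,
                          sub_links_found, sub_link_results_stored, finished_flag=False):
            expected_link_results = num_keywords * num_links       # known before the task starts
            expected_sub_results = sub_links_found * num_keywords  # upper bound, grows as sub-links appear

            done = link_results_stored + sub_link_results_stored
            expected = expected_link_results + expected_sub_results

            if finished_flag:
                return "Done", 1.0
            if done == 0:
                return "Pending", 0.0
            fraction = min(done / expected, 1.0) if expected else 0.0
            return "In Progress", fraction

        # Example with the numbers from the question: 3 keywords, 1 domain link,
        # 3 link results stored, 41 sub-links found, 117 sub-link results stored.
        print(task_progress(3, 1, 3, 41, 117))   # ('In Progress', ~0.95)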

    Read the article

  • How to Mentor a Junior Developer

    - by Josh Johnson
    This title is a little broad but I may need to give a little background before I can ask my question properly. I know that similar questions have been asked here already. But in my case I'm not asking if I should be mentoring someone or if the person is a good fit for being a software developer. That is not my place to judge. I have not been asked outright, but it is apparent that myself and other fellow senior developers are to mentor the new developers that start here. I have no problem with this whatsoever and, in many cases, it lends me a fresh perspective on things and I end up learning in the process. Also, I remember how beneficial it was in the beginning of my career when someone would take some time to teach me something. When I say "new developer" they could be anywhere from fresh out of college to having a year or two of experience. Recently and in the past we've had people start here who seem to have an attitude toward development/programming which is different from mine and hard for me to reconcile; they seem to extract just enough information to get the task done but not really learn from it. I find myself going over and over the same issues with them. I understand that part of this could be a personality thing, but I feel it's my job to do my best and sort of push them out of the nest while they're under my wing, so to speak. How can I impart just enough information so that they will learn but not give so much as to solve the problem for them? Or perhaps: What's the proper response to questions that are designed to take the path of least resistance and, in essence, force them to learn instead of take the easy way out? These questions are probably more general teaching questions and don't have that much to do specifically with software development. Note: I do not get a say in what tasks they are working on. Management doles the task out and it could be anything from a very simple bug fix to starting an entire application by themselves. While this is not ideal by any means and obviously presents its own gauntlet of challenges, I feel it's a topic best left for another question. So the best I can do is help them with the problem at hand and try to help them break it down into simpler problems and also check their commit logs and point out mistakes that they made. My main objectives are to: Help them out and give them the tools they need to start becoming more self-reliant. Steer them in the right direction and break bad development habits early on. Lessen the amount of time I spend with them (the personality type described above seems to need much more one-on-one time and does not do well over IM or email. While that's generally fine, I can't always stop what I'm working on, break my stride, and help them debug an error on a moments notice; I have my own projects that need to get done).

    Read the article

  • Information on upgrading Kinect Applications to MS SDK Beta 2.

    - by mbcrump
    Introduction Microsoft recently released the Kinect for Windows SDK Beta 2. It contains many enhancements and fixes that can be found here. The only problem with it is that a lot of current demo applications no longer function properly. Today, I’m going to walk you through a typical scenario of upgrading a Kinect application built with Beta 1 to Beta 2. Note: This tutorial covers WPF, but you can use the same techniques for WinForms. 1) Fix the references Let’s start with a fairly popular Kinect demo called Kinect User Interface Demo. This project uses the beta 1 version of Microsoft.Research.Kinect.dll and version 1.0.0.0 of Coding4Fun’s Kinect library. After you download the source code and extract the zip you will see the following references in Visual Studio 2010: Pay attention to the following references as these are the .dlls that you will have to update: Coding4Fun.Kinect.Wpf Microsoft.Research.Kinect If you click on Coding4Fun.Kinect.Wpf file you will see the following version information (v1.0.0.0): This needs to be upgraded to the Coding4Fun Kinect library built against Beta 2. So head over to http://c4fkinect.codeplex.com/ and hit download and you will have the following files. Go ahead and hit the delete key on your keyboard to remove the Coding4Fun.Kinect.Wpf.dll file from your project. Select “Add Reference” and navigate out to the folder where you extracted the files and select Coding4Fun.Kinect.Wpf.dll. If you click on the Coding4Fun.Kinect.Wpf.dll file and check properties it should be listed at 1.1.0.0: Fix Microsoft.Research.Kinect.dll The official SDK Beta 2 released a new .dll that you will need to reference in your application. Go ahead and select Microsoft.Research.Kinect.dll in your application and hit the Delete key on your keyboard. Go ahead and select Add Reference again and select Microsoft.Research.Kinect.dll from the .NET tab. Double check and make sure the version number is 1.0.0.45 as shown below. References fixed – Runtime needs to be updated. So we have fixed the references in a typical Kinect application that uses Microsoft’s SDK and C4F Kinect libraries. Now, we will need to update the runtime. All Beta 1 Kinect applications will instantiate the Runtime with the following code: Can you see that it is now marked with [Depreciated]? That means we need to update it before Microsoft decides to remove it from future versions of the SDK. We can fix this very easily by replacing this code: readonly Runtime _runtime = new Runtime(); with Microsoft.Research.Kinect.Nui.Runtime _nui; and adding similar code to our Loaded event as shown below public MainWindow() { InitializeComponent(); Loaded += new RoutedEventHandler(MainWindow_Loaded); } void MainWindow_Loaded(object sender, RoutedEventArgs e) { if (Runtime.Kinects.Count == 0) { txtInfo.Text = "Missing Kinect"; } else { _nui = Runtime.Kinects[0]; _nui.Initialize(RuntimeOptions.UseColor); // Video Frame Ready Event can happen now!!! //_nui.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(_nui_VideoFrameReady); _nui.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color); } } In this sample, I am testing to see if a Kinect is detected and if it is then I initialize the runtime with my first Kinect by using the Runtime.Kinects[0]. You can also specify other Kinect devices here. The rest of the code is standard code that you simply modify however you wish (ie Skeletal, Depth, etc) depending on what type of video feed you want. 
Conclusion As you can see it really wasn’t that painful to upgrade your project to Beta 2. I would recommend that you go ahead and upgrade to Beta 2 as future versions of the SDK will use these methods.  Thanks for reading. Subscribe to my feed

    Read the article

  • PowerShell: Read Excel to Create Inserts

    - by BuckWoody
    I'm writing a series of articles on how to migrate "departmental" data into SQL Server. I also hold workshops on the entire process – from discovering that the data exists to the modeling process and then how to design the Extract, Transform and Load (ETL) process. Finally I write about (and teach) a few methods on actually moving the data. One of those options is to use PowerShell. There are a lot of ways even with that choice, but the one I show is to read two columns from the spreadsheet and output statements that would insert the data using a stored procedure. Of course, you could re-write this as INSERT statements, out to a text file for bcp, or even use a database connection in the script to move the data directly from Excel into SQL Server.

    This snippet won't run on your system, of course – it assumes a Microsoft Office Excel 2007 spreadsheet located at c:\temp called VendorList.xlsx. It looks for a tab in that spreadsheet called Vendors. The statement that does the writing just uses one column: Vendor Code. Here's the breakdown of what I'm doing: In the first block, I connect to Microsoft Office Excel. That connection string is specific to Excel 2007, so if you need a different version you'll need to look that up. In the second block I set up a selection from the entire spreadsheet based on that tab. Note that if you're only after certain data you shouldn't get the whole spreadsheet – that's just good practice. In the next block I create the text I want, inserting the Vendor Code field as I go. Finally I close the connection. Enjoy!

        $ExcelConnection = New-Object -com "ADODB.Connection"
        $ExcelFile = "c:\temp\VendorList.xlsx"
        $ExcelConnection.Open("Provider=Microsoft.ACE.OLEDB.12.0;`
            Data Source=$ExcelFile;Extended Properties=Excel 12.0;")
        $strQuery = "Select * from [Vendors$]"
        $ExcelRecordSet = $ExcelConnection.Execute($strQuery)
        do {
            Write-Host "EXEC sp_InsertVendors '" $ExcelRecordSet.Fields.Item("Vendor Code").Value "'"
            $ExcelRecordSet.MoveNext()
        } Until ($ExcelRecordSet.EOF)
        $ExcelConnection.Close()

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • Conflict Minerals - Design to Compliance

    - by C. Chadwick
    Dr. Christina  Schröder - Principal PLM Consultant, Enterprise PLM Solutions EMEA What does the Conflict Minerals regulation mean? Conflict Minerals has recently become a new buzz word in the manufacturing industry, particularly in electronics and medical devices. Known as the "Dodd-Frank Section 1502", this regulation requires SEC listed companies to declare the origin of certain minerals by 2014. The intention is to reduce the use of tantalum, tungsten, tin, and gold which originate from mines in the Democratic Republic of Congo (DRC) and adjoining countries that are controlled by violent armed militia abusing human rights. Manufacturers now request information from their suppliers to see if their raw materials are sourced from this region and which smelters are used to extract the metals from the minerals. A standardized questionnaire has been developed for this purpose (download and further information). Soon, even companies which are not directly affected by the Conflict Minerals legislation will have to collect and maintain this information since their customers will request the data from their suppliers. Furthermore, it is expected that the public opinion and consumer interests will force manufacturers to avoid the use of metals with questionable origin. Impact for existing products Several departments are involved in the process of collecting data and providing conflict minerals compliance information. For already marketed products, purchasing typically requests Conflict Minerals declarations from the suppliers. In order to address requests from customers, technical operations or product management are usually responsible for keeping track of all parts, raw materials and their suppliers so that the required information can be provided. For complex BOMs, it is very tedious to maintain complete, accurate, up-to-date, and traceable data. Any product change or new supplier can, in addition to all other implications, have an effect on the Conflict Minerals compliance status. Influence on product development  It makes sense to consider compliance early in the planning and design of new products. Companies should evaluate which metals are needed or contained in supplier parts and if these could originate from problematic sources. The answer influences the cost and risk analysis during the development. If it is known early on that a part could be non-compliant with respect to Conflict Minerals, alternatives can be evaluated and thus costly changes at a later stage can be avoided. Integrated compliance management  Ideally, compliance data for Conflict Minerals, but also for other regulations like REACH and RoHS, should be managed in an integrated supply chain system. The compliance status is directly visible across the entire BOM at any part level and for the finished product. If data is missing, a request to the supplier can be triggered right away without having to switch to another system. The entire process, from identification of the relevant parts, requesting information, handling responses, data entry, to compliance calculation is fully covered end-to-end while being transparent for all stakeholders. Agile PLM Product Governance and Compliance (PG&C) The PG&C module extends Agile PLM with exactly this integrated functionality. As with the entire Agile product suite, PG&C can be configured according to customer requirements: data fields, attributes, workflows, routing, notifications, and permissions, etc… can be quickly and easily tailored to a customer’s needs. 
    Optionally, external databases can be interfaced to query commercially available sources of Conflict Minerals declarations, which in many cases obviates the need for a separate supplier request. Suppliers can access the system directly for data entry through a special portal. The responses to the standard EICC-GeSI questionnaire can be imported by the supplier or internally. Manual data entry is also supported. A set of compliance-specific dashboards and reports completes the functionality.

    Conclusion: The increasing number of product compliance regulations, of which Conflict Minerals is just one example, requires companies to implement efficient data and process management in this area. Consumer awareness of this matter is increasing as well, so an integrated system from development to production also provides a competitive advantage. Follow this link to learn more about Agile's PG&C solution.

    Read the article

  • Oracle GoldenGate 11g Release 2 Launch Webcast Replay Available

    - by Irem Radzik
    For those of you who missed the Oracle GoldenGate 11g Release 2 launch webcasts last week, the replay is now available from the following url:

    Harnessing the Power of the New Release of Oracle GoldenGate 11g

    I would highly recommend watching the webcast to get to know the many new features of the new release and to hear the product management team respond to the questions from the audience in a nice long Q&A section. In my blog last week I listed the media coverage for this new release. There is a new article published by ITJungle talking about Oracle GoldenGate's heterogeneity and support for DB2 for iSeries: Oracle Completes DB2/400 Support in Data Replication Tool. As mentioned in last week's blog, we received over 150 questions from the audience, and in this blog I'd like to continue to post some of the frequently asked questions and their answers:

    Question: What are the fundamental differences between classic data capture and integrated data capture? Do both use the redo logs in the source database?
    Answer: Yes, they both use redo logs. Classic capture parses the redo log data directly, whereas Integrated Capture lets the Oracle database parse the redo log record using an internal API.

    Question: Does the GoldenGate version need to match the Oracle Database version?
    Answer: No, they are not directly linked. Oracle GoldenGate 11g Release 2 supports Oracle Database version 10gR2 as well. For Oracle Database version 10gR1 and Oracle Database version 9i you will need GoldenGate 11g Release 1 or lower. And for Oracle Database 8i you need Oracle GoldenGate 10 or earlier versions.

    Question: If I already use Data Guard, do I need GoldenGate?
    Answer: Data Guard is designed as the best disaster recovery solution for Oracle Database. If you would like to implement a bidirectional Active-Active replication solution or need to move data between heterogeneous systems, you will need GoldenGate.

    Question: On compression and GoldenGate, if the source uses compression, is it required that the target also use compression?
    Answer: No, the source and target do not need to have the same compression settings.

    Question: Does GoldenGate support the Advanced Security Option on the source database?
    Answer: Yes, it does.

    Question: Can I use GoldenGate to upgrade the Oracle Database to 11g and do an OS migration at the same time?
    Answer: Yes, this is a very common project where GoldenGate can eliminate downtime, give flexibility to test the target as needed, and minimize risks with a fail-back option to the old environment. For more information on database upgrades please check out the following white papers: Best Practices for Migrating/Upgrading Oracle Database Using Oracle GoldenGate 11g and Zero-Downtime Database Upgrades Using Oracle GoldenGate.

    Question: Does GoldenGate create any trigger in the source database at table level or row level for real-time data integration?
    Answer: No, GoldenGate does not create triggers.

    Question: Can transformation be done after insert to the destination table, or does it need to be done before?
    Answer: It can happen in the Capture (Extract) process, in the Delivery (Replicat) process, or in the target database. For more resources on Oracle GoldenGate 11gR2 please check out our Oracle GoldenGate 11gR2 resource kit as well.
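    To make the last answer concrete: a transformation applied in the Delivery (Replicat) process is expressed in the Replicat parameter file with a MAP ... COLMAP clause. The snippet below is only an illustrative sketch and is not from the webcast - the group, schema, table, and column names are invented for the example - showing a target column populated by concatenating two source columns.

        REPLICAT rhr
        USERID ggadmin, PASSWORD ggadmin
        -- Example only: hr.employees and its columns are invented names.
        -- full_name on the target is derived from two source columns at delivery time.
        MAP hr.employees, TARGET hr.employees_rpt,
        COLMAP (USEDEFAULTS,
                full_name = @STRCAT(first_name, ' ', last_name));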

    Read the article

  • Deploying Oracle ADF Essentials Applications to Glassfish

    - by Shay Shmeltzer
    With the new Oracle ADF Essentials offering you can now deploy applications that leverage Oracle ADF on the open source GlassFish 3.1 server. Deployment is documented in the official JDeveloper and ADF documentation (here), but below is a summary of the steps and a video of the steps you'll need to take to get a basic Oracle ADF Essentials application to work on GlassFish. Note - to make starting/stopping GlassFish easier for my demo I used my GlassFish extension that you can get here.

    First we'll install some ADF Runtime libraries on GlassFish:
    1. Download and install GlassFish (Note - if you also have an Oracle DB on the same machine, you'll want to switch GlassFish's HTTP port to something else instead of 8080).
    2. Download the Oracle ADF Essentials packaging - this will get you an adf_essentials.zip file.
    3. Copy the adf_essentials.zip to the lib directory of your GlassFish domain - on a default Windows install this would be: C:\glassfish3\glassfish\domains\domain1\lib
    4. Go to the above lib directory and issue a unzip -j adf_essentials.zip. This will extract the ADF libraries to the directory.
    5. Now you can start the GlassFish server.

    Now let's configure GlassFish to handle applications of the ADF type:
    1. Invoke the admin console of GlassFish (http://localhost:4848) and log into your admin account.
    2. Go to Configurations->Server-config->JVM Settings and choose the JVM Options tab.
    3. Add the following entries: -XX:MaxPermSize=512m (note this entry should already exist, so just make sure it has a big enough value) and -Doracle.mds.cache=simple
    4. While we are in the admin console, we can also define JDBC connections that will be used by our application. Go into Resources->JDBC->JDBC Connection Pools and click to create a New one.
    5. Give it a name, choose the resource type to be javax.sql.XADataSource, and choose Oracle as the Database Driver vendor. Click Next.
    6. Scroll down to the Additional Properties section and start filling in the information for your database. The values for an Oracle XE will be (user=hr, databaseName=XE, Password=hr, ServerName=localhost, DriverType=thin, PortNumber=1521). Click Finish.
    7. Click Ping to check your connection works.
    8. Now define a new JDBC Resource that will use the pool you just defined. In my example I called the resource jdbc/HRDS. You will need this name to match the name in your Application Module connection configuration.
    9. Now you can re-start the GlassFish server for the changes to take effect.

    Get an ADF application going (you can use the regular Fusion Application template for this):
    1. Go into the project properties of your viewController project; under the deployment section click to edit the deployment profile that is defined there. Go to Platform and choose Glassfish 3.1 from the drop down list. Click OK to go back to your project.
    2. Go to Application -> Application Properties -> Deployment. Go to Platform and choose Glassfish 3.1 from the drop down list. Click OK to go back to your project. This step will make sure that JDeveloper will automatically add the necessary ADF libraries to the EAR file that is being generated for deployment on GlassFish.
    3. Go to your Application->Deploy and deploy either to an EAR file or directly to a GlassFish server connection that you created.

    Things should just work, but if they don't, look up the server.log in the log directory and check out what error is in there. A command-line sketch of the library install and server configuration follows after the video note below.
    Here is a video demo of the various steps: Note - right now the deployment of an ADF application takes about 2 minutes on my machine; we are hoping to be able to improve this timing in the future. People who are more familiar with GlassFish might want to explore using exploded directory deployment and see if they can get it to work.
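    For those who prefer scripting the setup, the same steps can be driven from a shell with GlassFish's asadmin tool. This is only a rough sketch under a few assumptions - the domain path, the pool name ADFPool, and the resource name jdbc/HRDS are examples (the resource name must match whatever your Application Module connection expects), and the exact asadmin quoting/escaping rules can differ between GlassFish versions - so treat it as a starting point rather than the exact procedure from this post.

        # Sketch: copy the ADF Essentials libraries into the domain and extract them flat
        DOMAIN_LIB=/opt/glassfish3/glassfish/domains/domain1/lib    # adjust to your install
        cp adf_essentials.zip "$DOMAIN_LIB"
        (cd "$DOMAIN_LIB" && unzip -j adf_essentials.zip)

        # JVM option from the admin-console step above
        # (-XX:MaxPermSize usually already exists; raise it in the console if needed)
        asadmin create-jvm-options "-Doracle.mds.cache=simple"

        # JDBC pool and resource for a local Oracle XE database (names here are examples)
        asadmin create-jdbc-connection-pool \
          --restype javax.sql.XADataSource \
          --datasourceclassname oracle.jdbc.xa.client.OracleXADataSource \
          --property user=hr:password=hr:databaseName=XE:serverName=localhost:portNumber=1521:driverType=thin \
          ADFPool
        asadmin create-jdbc-resource --connectionpoolid ADFPool jdbc/HRDS

        asadmin restart-domain domain1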

    Read the article

  • ORE graphics using Remote Desktop Protocol

    - by Sherry LaMonica
    Oracle R Enterprise graphics are returned as raster, or bitmap, graphics. Raster images consist of tiny squares of color information, referred to as pixels, that form points of color to create a complete image. Plots that contain raster images render quickly in R and create small, high-quality exported image files in a wide variety of formats. However, it is a known issue that the rendering of raster images can be problematic when creating graphics over a Remote Desktop connection. Raster images do not display in the windows device using Remote Desktop under the default settings. This happens because Remote Desktop restricts the number of colors when connecting to a Windows machine to 16 bits per pixel, and interpolating raster graphics requires many colors, at least 32 bits per pixel. For example, this simple embedded R image plot will be returned in a raster-based format using a standalone Windows machine:

    R> library(ORE)
    R> ore.connect(user="rquser", sid="orcl", host="localhost", password="rquser", all=TRUE)
    R> ore.doEval(function() image(volcano, col=terrain.colors(30)))

    Here, we first load the ORE packages and connect to the database instance using database login credentials. The ore.doEval function executes the R code within the database embedded R engine and returns the image back to the client R session. Over a Remote Desktop connection under the default settings, this graph will appear blank due to the restricted number of colors. Users who encounter this issue have two options to display ORE graphics over Remote Desktop: either raise Remote Desktop's Color Depth or direct the plot output to an alternate device.

    Option #1: Raise the Remote Desktop Color Depth setting
    In a Remote Desktop session, all environment variables, including the display variables determining Color Depth, are determined by the RDP-Tcp connection settings. For example, users can reduce the Color Depth when connecting over a slow connection. The different settings are 15, 16, 24, or 32 bits per pixel. To raise the Remote Desktop color depth:
    1. On the Windows server, launch Remote Desktop Session Host Configuration from the Accessories menu.
    2. Under Connections, right click on RDP-Tcp and select Properties.
    3. On the Client Settings tab, either uncheck Limit Maximum Color Depth or set it to 32 bits per pixel.
    4. Click Apply, then OK, log out of the remote session and reconnect.
    After reconnecting, the Color Depth on the Display tab will be set to 32 bits per pixel. Raster graphics will now display as expected. For ORE users, the increased color depth results in slightly reduced performance during plot creation, but the graph will be created instead of displaying an empty plot.

    Option #2: Direct plot output to an alternate device
    Plotting to a non-windows device is a good option if it's not possible to increase the Remote Desktop Color Depth, or if performance is degraded when creating the graph. Several device drivers are available for off-screen graphics in R, such as postscript, pdf, and png. On-screen devices include windows, X11 and Cairo. Here we output to the Cairo device to render an on-screen raster graphic. The grid.raster function in the grid package is analogous to other grid graphical primitives - it draws a raster image within the current plot's grid.
    R> options(device = "CairoWin")   # use Cairo device for plotting during the session
    R> library(Cairo)                 # load Cairo, grid and png libraries
    R> library(grid)
    R> library(png)
    R> res <- ore.doEval(function() image(volcano, col=terrain.colors(30)))   # create embedded R plot
    R> img <- ore.pull(res, graphics = TRUE)$img[[1]]                         # extract image
    R> grid.raster(as.raster(readPNG(img)), interpolate = FALSE)              # generate raster graph
    R> dev.off()                                                              # turn off first device

    By default, the interpolate argument to grid.raster is TRUE, which means that what is actually drawn by R is a linear interpolation of the pixels in the original image. Setting interpolate to FALSE uses a sample from the pixels in the original image. A list of graphics devices available in R can be found in the Devices help file from the grDevices package:

    R> help(Devices)

    Read the article

  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in the light pass (my projector doesn't affect albedo). I have this projector View and Projection matrix:

    Matrix projection = Matrix.CreateOrthographicOffCenter(-halfWidth * Scale, halfWidth * Scale, -halfHeight * Scale, halfHeight * Scale, 1, 100000);
    Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up);

    where halfWidth and halfHeight are half of the texture's width and height, Position is the projector's position and Target is the projector's target. This seems to be ok. I am drawing a full screen quad with this shader:

    float4x4 InvViewProjection;

    texture2D DepthTexture;
    texture2D NormalTexture;
    texture2D ProjectorTexture;

    float4x4 ProjectorViewProjection;

    sampler2D depthSampler = sampler_state
    {
        texture = <DepthTexture>;
        minfilter = point;
        magfilter = point;
        mipfilter = point;
    };

    sampler2D normalSampler = sampler_state
    {
        texture = <NormalTexture>;
        minfilter = point;
        magfilter = point;
        mipfilter = point;
    };

    sampler2D projectorSampler = sampler_state
    {
        texture = <ProjectorTexture>;
        AddressU = Clamp;
        AddressV = Clamp;
    };

    float viewportWidth;
    float viewportHeight;

    // Calculate the 2D screen position of a 3D position
    float2 postProjToScreen(float4 position)
    {
        float2 screenPos = position.xy / position.w;
        return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
    }

    // Calculate the size of one half of a pixel, to convert
    // between texels and pixels
    float2 halfPixel()
    {
        return 0.5f / float2(viewportWidth, viewportHeight);
    }

    struct VertexShaderInput
    {
        float4 Position : POSITION0;
    };

    struct VertexShaderOutput
    {
        float4 Position : POSITION0;
        float4 PositionCopy : TEXCOORD1;
    };

    VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
    {
        VertexShaderOutput output;
        output.Position = input.Position;
        output.PositionCopy = output.Position;
        return output;
    }

    float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
    {
        float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();

        // Extract the depth for this pixel from the depth map
        float4 depth = tex2D(depthSampler, texCoord);
        //return float4(depth.r,0,0,1);

        // Recreate the position with the UV coordinates and depth value
        float4 position;
        position.x = texCoord.x * 2 - 1;
        position.y = (1 - texCoord.y) * 2 - 1;
        position.z = depth.r;
        position.w = 1.0f;

        // Transform position from screen space to world space
        position = mul(position, InvViewProjection);
        position.xyz /= position.w;

        // compute projection
        float3 projection = tex2D(projectorSampler, postProjToScreen(mul(position, ProjectorViewProjection)) + halfPixel());
        return float4(projection, 1);
    }

    In the first part of the pixel shader, the position is recovered from the G-buffer (I am using this code in other shaders without any problem) and then transformed into the projector's view-projection space. The problem is that the projection doesn't appear. Here is an image of my situation: The green lines are the rendered projector frustum. Where is my mistake hidden? I am using XNA 4. Thanks for any advice, and sorry for my English.

    EDIT: The shader above is working, but the projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appears. But when the camera moves toward the projection, the projection expands, as can be seen in this YouTube video.

    Read the article

  • How to upgrade a remote server from 8.10 to newer version?

    - by DisgruntledGoat
    I have a remote server still running Ubuntu 8.10 (since upgraded to 9.04 - see the update below) that I can only access via SSH. If I run apt-get update I get a bunch of 404 errors on the packages. I've asked a few questions on Server Fault but got nowhere. Here's what I've done:

    1. Run apt-get update, which returns errors like:
       Err http://gb.archive.ubuntu.com intrepid/main Packages 404 Not Found
       [and same for many other packages]

    2. Run do-release-upgrade, which returns:
       Checking for a new ubuntu release
       Failed Upgrade tool signature
       Failed Upgrade tool
       Done downloading
       extracting 'jaunty.tar.gz'
       Failed to extract
       Extracting the upgrade failed. There may be a problem with the network or with the server.

    3. Edited /etc/update-manager/release-upgrades and changed from Prompt=normal to Prompt=lts (as suggested here).

    4. Running do-release-upgrade after this returns:
       Checking for a new ubuntu release
       current dist not found in meta-release file
       No new release found
       (Updated)

    I have followed the advice in this question and changed /etc/apt/sources.list to refer to jaunty instead of intrepid. However, that distro is not online anymore either. A comment there says I have to upgrade in chronological order... So basically, it seems like I cannot upgrade because my current distro is out of date and not supported. Is there a way to upgrade direct to 10.x or 11.x? Note, as this is a server I only have command-line access.

    UPDATE 24/11: I have managed to upgrade from 8.10 to 9.04. Ubuntu's EOL Upgrades page provides some alternate URLs for apt sources (a sketch of what those entries look like is shown below). I also needed to update /var/lib/update-manager/meta-release to point to the old-releases server too. However, now I cannot upgrade from 9.04 to 9.10. Running do-release-upgrade produces the same error as #2 above, except it "Failed to fetch" (the URLs in meta-release are valid). The Ubuntu Jaunty upgrade page says it's necessary to upgrade using a CD image. I followed the instructions here, but it didn't work:

       A fatal error occurred
       Please report this as a bug and include the files /var/log/dist-upgrade/main.log and /var/log/dist-upgrade/apt.log in your report. The upgrade is now aborted. Your original sources.list was saved in /etc/apt/sources.list.distUpgrade.
       Traceback (most recent call last):
         File "/tmp/tmp.JLhTwVUugb/karmic", line 7, in <module>
           sys.exit(main())
         File "/tmp/tmp.JLhTwVUugb/DistUpgradeMain.py", line 132, in main
           if app.run():
         File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 1590, in run
           return self.fullUpgrade()
         File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 1506, in fullUpgrade
           if not self.doPostInitialUpdate():
         File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 762, in doPostInitialUpdate
           self.quirks.run("PostInitialUpdate")
         File "/tmp/tmp.JLhTwVUugb/DistUpgradeQuirks.py", line 83, in run
           for plugin in self.plugin_manager.get_plugins(condition):
         File "/tmp/tmp.JLhTwVUugb/computerjanitor/plugin.py", line 167, in get_plugins
           filenames = self.get_plugin_files()
         File "/tmp/tmp.JLhTwVUugb/computerjanitor/plugin.py", line 120, in get_plugin_files
           basenames = [x for x in os.listdir(dirname)
       OSError: [Errno 2] No such file or directory: './plugins'

    It does say to report the bug, but since this is an old unsupported release I don't know if it's worth doing. However, is there a way round this, to upgrade from 9.04 to 9.10 (and then finally to 10.04 LTS)?
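    For reference, the EOL apt entries mentioned in the update point releases at the old-releases archive. The lines below are only a sketch for 9.04 (jaunty); the component list is an assumption and should be matched against your original sources.list, with the release name swapped (e.g. karmic for 9.10) as you step through each upgrade.

       # /etc/apt/sources.list entries for an end-of-life release (example: jaunty)
       deb http://old-releases.ubuntu.com/ubuntu/ jaunty main restricted universe multiverse
       deb http://old-releases.ubuntu.com/ubuntu/ jaunty-updates main restricted universe multiverse
       deb http://old-releases.ubuntu.com/ubuntu/ jaunty-security main restricted universe multiverse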

    Read the article

  • Using HTML5 Today part 3 – Using Polyfills

    - by Steve Albers
    Shims help when adding semantic tags to older IE browsers, but there is a huge range of other new HTML5 features with varying support across browsers. Polyfills are JavaScript code and/or browser plug-ins that can provide older or less featured browsers with API support. The best polyfills will detect whether the current browser has native support, and only add the functionality if necessary. The Douglas Crockford JSON2.js library is an example of this approach: if the browser already supports the JSON object, nothing changes. If JSON is not available, the library adds a JSON property in the global object. This approach provides some big benefits:
    - It lets you add great new HTML5 features to your web sites sooner.
    - It lets the developer focus on writing to the up-and-coming standard rather than proprietary APIs.
    - Where most one-off legacy code fixes tend to break down over time, well done polyfills will stop executing over time (as customer browsers natively support the feature), meaning polyfill code may not need to be tested against new browsers since they will execute the native methods instead.
    You should also remember that polyfills represent an entirely separate code path (and sometimes plug-in) that requires testing for support. Also, polyfills tend to run on older browsers, which often have slower JavaScript performance. As a result you might find that performance on older browsers is not comparable. When looking for polyfills you can start by checking the Modernizr GitHub wiki or the HTML5 Please site. For an example of a polyfill, consider a page that writes a few geometric shapes on a <canvas>:

    <script src="jquery-1.7.1.min.js"></script>
    <script>
        $(document).ready(function () {
            drawCanvas();
        });

        function drawCanvas() {
            var context = $("canvas")[0].getContext('2d');

            // background
            context.fillStyle = "#8B0000";
            context.fillRect(5, 5, 300, 100);

            // empty box
            context.strokeStyle = "#B0C4DE";
            context.lineWidth = 4;
            context.strokeRect(20, 15, 80, 80);

            // circle
            context.arc(160, 55, 40, 0, Math.PI * 2, false);
            context.fillStyle = "#4B0082";
            context.fill();
        }
    </script>

    The result is a simple static canvas with a box and a circle. To enable this functionality on a pre-canvas browser we can find a polyfill. A check on html5please.com references FlashCanvas. Pull down the zip and extract the files (flashcanvas.js, flash10canvas.swf, etc.) to a directory on your site. Then, based on the documentation, you need to add a single line to your original HTML file:

    <!--[if lt IE 9]><script src="flashcanvas.js"></script><![endif]-->

    ...and you have canvas functionality! The IE conditional comments ensure that the library is only loaded in browsers where it is useful, improving page load and processing time. Like all polyfills, you should test to verify the functionality matches your expectations across the browsers you need to support. For instance, the FlashCanvas home page advertises 70% support of HTML5 Canvas spec tests.
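    Since the article stresses that good polyfills detect native support before doing anything, here is a small illustrative sketch of that detection step. It is not from the original article: the dynamic script injection is only there to show the pattern, and FlashCanvas in particular documents the conditional-comment loading shown above rather than late loading, so treat this purely as an example of feature detection.

    <script>
        // Feature test: does this browser provide a native 2D canvas context?
        function supportsCanvas() {
            var el = document.createElement('canvas');
            return !!(el.getContext && el.getContext('2d'));
        }

        // Only pull in a polyfill when native support is missing.
        if (!supportsCanvas()) {
            var s = document.createElement('script');
            s.src = 'flashcanvas.js';   // example polyfill script; see the loading note above
            document.getElementsByTagName('head')[0].appendChild(s);
        }
    </script>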

    Read the article

  • django/python: is one view that handles two sibling models a good idea?

    - by clime
    I am using django multi-table inheritance: Video and Image are models derived from Media. I have implemented two views: video_list and image_list, which are just proxies to media_list. media_list returns images or videos (based on input parameter model) for a certain object, which can be of type Event, Member, or Crag. The view alters its behaviour based on input parameter action (better name would be mode), which can be of value "edit" or "view". The problem is that I need to ask whether the input parameter model contains Video or Image in media_list so that I can do the right thing. Similar condition is also in helper method media_edit_list that is called from the view. I don't particularly like it but the only alternative I can think of is to have separate (but almost the same) logic for video_list and image_list and then probably also separate helper methods for videos and images: video_edit_list, image_edit_list, video_view_list, image_view_list. So four functions instead of just two. That I like even less because the video functions would be very similar to the respective image functions. What do you recommend? Here is extract of relevant parts: http://pastebin.com/07t4bdza. I'll also paste the code here: #urls url(r'^media/images/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<action>(view|edit))/$', views.image_list, name='image-list') url(r'^media/videos/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<action>(view|edit))/$', views.video_list, name='video-list') #views def image_list(request, rel_model_tag, rel_object_id, mode): return media_list(request, Image, rel_model_tag, rel_object_id, mode) def video_list(request, rel_model_tag, rel_object_id, mode): return media_list(request, Video, rel_model_tag, rel_object_id, mode) def media_list(request, model, rel_model_tag, rel_object_id, mode): rel_model = tag_to_model(rel_model_tag) rel_object = get_object_or_404(rel_model, pk=rel_object_id) if model == Image: star_media = rel_object.star_image else: star_media = rel_object.star_video filter_params = {} if rel_model == Event: filter_params['event'] = rel_object_id elif rel_model == Member: filter_params['members'] = rel_object_id elif rel_model == Crag: filter_params['crag'] = rel_object_id media_list = model.objects.filter(~Q(id=star_media.id)).filter(**filter_params).order_by('date_added').all() context = { 'media_list': media_list, 'star_media': star_media, } if mode == 'edit': return media_edit_list(request, model, rel_model_tag, rel_object_id, context) return media_view_list(request, model, rel_model_tag, rel_object_id, context) def media_view_list(request, model, rel_model_tag, rel_object_id, context): if request.is_ajax(): context['base_template'] = 'boxes/base-lite.html' return render(request, 'media/list-items.html', context) def media_edit_list(request, model, rel_model_tag, rel_object_id, context): if model == Image: get_media_edit_record = get_image_edit_record else: get_media_edit_record = get_video_edit_record media_list = [get_media_edit_record(media, rel_model_tag, rel_object_id) for media in context['media_list']] if context['star_media']: star_media = get_media_edit_record(context['star_media'], rel_model_tag, rel_object_id) else: star_media = None json = simplejson.dumps({ 'star_media': star_media, 'media_list': media_list, }) return HttpResponse(json, content_type=json_response_mimetype(request)) def get_image_edit_record(image, rel_model_tag, rel_object_id): record = { 'url': image.image.url, 'name': image.title or 
image.filename, 'type': mimetypes.guess_type(image.image.path)[0] or 'image/png', 'thumbnailUrl': image.thumbnail_2.url, 'size': image.image.size, 'id': image.id, 'media_id': image.media_ptr.id, 'starUrl':reverse('image-star', kwargs={'image_id': image.id, 'rel_model_tag': rel_model_tag, 'rel_object_id': rel_object_id}), } return record def get_video_edit_record(video, rel_model_tag, rel_object_id): record = { 'url': video.embed_url, 'name': video.title or video.url, 'type': None, 'thumbnailUrl': video.thumbnail_2.url, 'size': None, 'id': video.id, 'media_id': video.media_ptr.id, 'starUrl': reverse('video-star', kwargs={'video_id': video.id, 'rel_model_tag': rel_model_tag, 'rel_object_id': rel_object_id}), } return record # models class Media(models.Model, WebModel): title = models.CharField('title', max_length=128, default='', db_index=True, blank=True) event = models.ForeignKey(Event, null=True, default=None, blank=True) crag = models.ForeignKey(Crag, null=True, default=None, blank=True) members = models.ManyToManyField(Member, blank=True) added_by = models.ForeignKey(Member, related_name='added_images') date_added = models.DateTimeField('date added', auto_now_add=True, null=True, default=None, editable=False) class Image(Media): image = ProcessedImageField(upload_to='uploads', processors=[ResizeToFit(width=1024, height=1024, upscale=False)], format='JPEG', options={'quality': 75}) thumbnail_1 = ImageSpecField(source='image', processors=[SmartResize(width=178, height=134)], format='JPEG', options={'quality': 75}) thumbnail_2 = ImageSpecField(source='image', #processors=[SmartResize(width=256, height=192)], processors=[ResizeToFit(height=164)], format='JPEG', options={'quality': 75}) class Video(Media): url = models.URLField('url', max_length=256, default='') embed_url = models.URLField('embed url', max_length=256, default='', blank=True) author = models.CharField('author', max_length=64, default='', blank=True) thumbnail = ProcessedImageField(upload_to='uploads', processors=[ResizeToFit(width=1024, height=1024, upscale=False)], format='JPEG', options={'quality': 75}, null=True, default=None, blank=True) thumbnail_1 = ImageSpecField(source='thumbnail', processors=[SmartResize(width=178, height=134)], format='JPEG', options={'quality': 75}) thumbnail_2 = ImageSpecField(source='thumbnail', #processors=[SmartResize(width=256, height=192)], processors=[ResizeToFit(height=164)], format='JPEG', options={'quality': 75}) class Crag(models.Model, WebModel): name = models.CharField('name', max_length=64, default='', db_index=True) normalized_name = models.CharField('normalized name', max_length=64, default='', editable=False) type = models.IntegerField('crag type', null=True, default=None, choices=crag_types) description = models.TextField('description', default='', blank=True) country = models.ForeignKey('country', null=True, default=None) #TODO: make this not null when db enables it latitude = models.FloatField('latitude', null=True, default=None) longitude = models.FloatField('longitude', null=True, default=None) location_index = FixedCharField('location index', length=24, default='', editable=False, db_index=True) # handled by db, used for marker clustering added_by = models.ForeignKey('member', null=True, default=None) #route_count = models.IntegerField('route count', null=True, default=None, editable=False) date_created = models.DateTimeField('date created', auto_now_add=True, null=True, default=None, editable=False) last_modified = models.DateTimeField('last modified', auto_now=True, 
null=True, default=None, editable=False) star_image = models.ForeignKey('Image', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL) star_video = models.ForeignKey('Video', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL)
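    One way to keep the single media_list view from the question while removing the repeated if model == Image checks is to move the model-specific pieces into a small dispatch table keyed by model class. The sketch below is not from the question: MEDIA_CONFIG and FILTER_FIELD are invented names, the helper signatures are adjusted slightly, and the builder functions are the get_image_edit_record / get_video_edit_record already shown above.

        # Sketch: one view, with the per-model differences collected in a dispatch table.
        MEDIA_CONFIG = {
            Image: {'star_attr': 'star_image', 'edit_record': get_image_edit_record},
            Video: {'star_attr': 'star_video', 'edit_record': get_video_edit_record},
        }
        FILTER_FIELD = {Event: 'event', Member: 'members', Crag: 'crag'}

        def media_list(request, model, rel_model_tag, rel_object_id, mode):
            config = MEDIA_CONFIG[model]
            rel_model = tag_to_model(rel_model_tag)
            rel_object = get_object_or_404(rel_model, pk=rel_object_id)
            star_media = getattr(rel_object, config['star_attr'])

            filter_params = {FILTER_FIELD[rel_model]: rel_object_id}
            media_qs = (model.objects
                        .filter(~Q(id=star_media.id))
                        .filter(**filter_params)
                        .order_by('date_added'))

            context = {'media_list': media_qs, 'star_media': star_media}
            if mode == 'edit':
                # media_edit_list would take the record builder instead of re-testing the model
                return media_edit_list(request, config['edit_record'], rel_model_tag, rel_object_id, context)
            return media_view_list(request, context)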

    Read the article

  • Application Logging needs work

    Application Logging
    Application logging is the act of logging events that occur within an application, much like a court reporter documents what happens in a court case. Application logs can be useful for several reasons, but the most common use for logs is to recreate the steps that led to an application error so its root cause can be found. Other uses can include the detection of fraud, verification of user activity, or providing audits of user/data interactions. "Logs can contain different kinds of data. The selection of the data used is normally affected by the motivation leading to the logging." (OWASP, 2009) OWASP also states that logging should include applicable debugging information like the event date and time, the responsible process, and a description of the event.

    "There are many reasons why a logging system is a necessary part of delivering a distributed application. One of the most important is the ability to track exactly how many users are using the application during different time periods." (Hatton, 2000) Hatton also states that application logging helps system designers determine whether parts of an application aren't being used as designed. He implies that low usage can be used to identify aspects of a system users do not like, which enables application designers to find out why users dislike those aspects so that changes can be made to increase the application's usefulness and effectiveness.

    "Logging memory usage can also assist you in tuning up the internals of your application. If you're experiencing a randomly occurring problem, being able to match activities performed with the memory status at the time may enable you to discover the cause of the problem. It also gives you a good indication of the health of the distributed server machine at the time any activity is performed." (Hatton, 2000)

    Commonly Logged Application Events (Defined by OWASP)
    - Access of Data
    - Creation of Data
    - Modification of Data in any form
    - Administrative Functions
    - Configuration Changes
    - Debugging Information (Application Events)
    - Authorization Attempts
    - Data Deletion
    - Network Communication
    - Authentication Events
    - Errors/Exceptions

    Application Error Logging
    The functionality associated with application error logging is really the combination of proper error handling and application logging. If we look back at Figure 4 and Figure 5, those code examples allow developers to handle the various types of errors that occur within the life cycle of an application's execution. Application logging can be applied within the Catch section of the Try/Catch statement, allowing the errors to be logged as they occur. By placing the logging within the Catch section, specific error details can be accessed that help identify the source of the error, the path to the error, what caused the error, and the definition of the error that occurred. This can then be logged and reviewed at a later date in order to recreate the error based on the data found in the application log. By allowing applications to log errors, developers and IT staff can use the logs to recreate errors that are encountered by end-users or other dependent systems. A minimal sketch of this pattern appears below.
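    The pattern described above - handle the failure, then log the contextual details from inside the catch block - looks roughly like the following. The article frames this in terms of a Try/Catch statement (as in .NET); the sketch uses Python's standard logging module purely as an illustration, and process_order and its arguments are invented for the example.

        import logging

        # Configure a log that captures the event date/time, level, responsible process and a description.
        logging.basicConfig(
            filename="application.log",
            level=logging.INFO,
            format="%(asctime)s %(levelname)s pid=%(process)d %(name)s %(message)s",
        )
        logger = logging.getLogger("orders")

        def process_order(order_id, user):
            logger.info("user=%s started processing order=%s", user, order_id)  # user activity / audit trail
            try:
                result = 100 / order_id   # placeholder for work that can fail (fails when order_id == 0)
                logger.info("order=%s processed successfully", order_id)
                return result
            except ZeroDivisionError:
                # Logging inside the catch block records what caused the error and where it occurred.
                logger.exception("order=%s failed for user=%s", order_id, user)
                raise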

    Read the article

  • Where are my date ranges in Analytics coming from?

    - by Jeffrey McDaniel
    In the P6 Reporting Database there are two main tables to consider when viewing time: W_DAY_D and W_Calendar_FS. W_DAY_D is populated internally during the ETL process and will provide a row for every day in the given time range. Each row will contain aspects of that day such as calendar year, month, week, quarter, etc. to allow it to be used in the time element when creating requests in Analytics to group data into these time granularities. W_Calendar_FS is used for calculations such as spreads, but is also based on the same date range. The min and max day_dt (W_DAY_D) and daydate (W_Calendar_FS) will be related to the date range defined, which is a start date plus a rolling interval (generally the start date plus three years). In P6 Reporting Database 2.0 this date range was defined in the Configuration utility. As of P6 Reporting Database 3.0, with the introduction of the Extended Schema, this date range is set in the P6 web application. The Extended Schema uses this date range to calculate the data for near real time reporting in P6. This same date range is validated and used for the P6 Reporting Database.

    The rolling date range means that if today is April 1, 2010 and the rolling interval is set to three years, the min date will be 1/1/2010 and the max date will be 4/1/2013. 1/1/2010 will be the min date because we always back fill to the beginning of the year. On April 2nd, the Extended Schema services are run and the date range is adjusted there to move the max date forward to 4/2/2013. When the ETL process is run, the Reporting Database will pick up this change and also adjust the max date on W_DAY_D and W_Calendar_FS. There are scenarios where date ranges affecting areas like resource limit may not be adjusted until a change occurs to cause a recalculation, but based on general system usage these dates in these tables will progress forward with the rolling intervals.

    Choosing a large date range can have an effect on the ETL process for the P6 Reporting Database. The extract portion of the process will pull spread data over into the STAR. The date range defines how long activity and resource assignment spread data is spread out in these tables. If an activity lasts 5 days it will have 5 days of spread data. If a project lasts 5 years and the date range is 3 years, the spread data after that 3 year date range will be bucketed into the last day in the date range. For the overall project, and even at the activity level, you will still see the correct total values; you just would not be able to see the daily spread 5 years from now. This is an important question when choosing your date range: do you really need to see spread data down to the day 5 years in the future? Generally this amount of granularity years in the future is not needed. Remember, all those values 5, 10, 15, 20 years in the future are still available to report on; they would just be in more of a summary format at the activity or project level. The data is always there; the level of granularity is the decision.
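    As a quick sanity check after the ETL runs, the materialized range can be read straight from the two tables named above. The queries below are only a sketch; the table and column names come from this post, but the owning schema and any required prefix depend on your STAR installation.

        -- Minimum and maximum day currently materialized in the STAR time tables
        SELECT MIN(day_dt)  AS min_day, MAX(day_dt)  AS max_day FROM w_day_d;
        SELECT MIN(daydate) AS min_day, MAX(daydate) AS max_day FROM w_calendar_fs;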

    Read the article

< Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >