Search Results

Search found 27098 results on 1084 pages for 'oracle it services industries financial services'.

Page 325/1084 | < Previous Page | 321 322 323 324 325 326 327 328 329 330 331 332  | Next Page >

  • Site Studio Mobile Example - WCM Reuse

    - by john.brunswick
    Mobile internet usage is growing by leaps and bounds, and it is expected that in the not-too-distant future it will eclipse traditional access via desktop browsers. Mary Meeker, a managing director at Morgan Stanley and head of their global technology research team, recently predicted in an Events@Google series presentation that mobile usage will eclipse desktop usage within the next 5 years. In order to reach their prospects, customers and business partners, organizations will need to make their content readily available on mobile devices. A few years ago it was fairly challenging to provide a special, separate site to cater to mobile users, using technologies like WML (Wireless Markup Language). Modern mobile browsers have made that approach unnecessary, and the focus has now moved toward providing a browsing experience that works well on small screen sizes and is highly performant.

    What does all of this mean for Oracle UCM? Taking site content from an existing Site Studio site and targeting it for consumption on mobile devices is a very straightforward process, aided by a number of native capabilities in the product. The example highlighted in this post takes advantage of the dynamic conversion capabilities in Oracle UCM to enable site content to be created and updated via MS Office documents. These documents are then converted to a simple, clean HTML format for consumption in both the desktop and mobile browsing experiences. To help better understand how this is possible, the example below shows a fictional .COM and its mobile site counterpart, both of which leverage the same underlying content. The scenario is not complete or production ready, but it highlights that a mobile experience may be best delivered by omitting portions of a site that would be present within the version served to desktop clients.

    If you have browsed CNet (news.com) on a mobile device, it quickly becomes apparent that they serve an optimized version for your mobile device. An iPhone-style version can be accessed at http://iphone.cnet.com/. To do that, they leveraged work done for the iPhone iUi project developed by Joe Hewitt, which gives mobile browsers an experience similar to what users find in a native iPhone application. Our example uses parts of this framework (the CSS), an approach that provides a page that degrades nicely over a wide range of mobile browsers, since it is comprised of lightweight HTML markup and CSS. The iPhone iUi framework also provides some nice JavaScript to enable animated transitions between pages, but for the widest range of mobile browser compatibility we will only incorporate the CSS and the HTML DIV / UL based page markup in our example.

    Read the article

  • No "Terminal Services" branch in "Group Policy Object Editor"

    - by ayavilevich
    Hi, We have several identical servers at a hosting company. They run Windows 2003 R2 Std SP2 64bit. The servers are not in a domain. We recently received a new server with the same configuration and hardware. However, the new server is different in some way: when we run "gpedit.msc /s" there are far fewer options in the tree than on the other servers. Specifically, we are missing the configuration of "Terminal Services". Many other items are missing under "Administrative templates" and "Windows components" as well. Screenshot of correct server: (can't post link due to SF policy) Screenshot of new server: http://img686.imageshack.us/img686/572/gpowindowscomponentstp5.png What should we try? Thanks, Arik.

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves - June 23-29, 2013

    - by Bob Rhubart
    2,947 people now follow OTN ArchBeat on Facebook. Here are the Top 10 items shared on that page for June 23-29, 2013.

    Podcast Show Notes: DevOps, Cloud, and Role Creep
    After some confusion (my bad) all three correct parts of this podcast are now available. The panelists for this discussion are all Oracle ACE Directors: Ron Batra, Basheer Khan, and Cary Millsap.

    SOA Suite 11g Developers Cookbook Published | Antony Reynolds
    "The book focuses on areas that we felt we had neglected in the Developers Guide," says co-author Antony Reynolds. "There is more about Java integration and OSB, both of which we see a lot of questions about when working with customers."

    Using Oracle TimesTen With Oracle BI Applications (Part 2) | Peter Scott
    Peter Scott follows up an earlier post with a look at some of the OBIA structures and a discussion of some of the features of TimesTen.

    Linux-Containers — Part 1: Overview | Lenz Grimmer
    OTN Garage blogger Lenz Grimmer kicks off a series and expands your mind with deep detail on Linux Containers.

    Slides from my ODTUG Kscope13 Presentation | Zeeshan Baig
    Oracle ACE Zeeshan Baig shares the slides from his KScope13 presentation, "Build Your Business Services Using ADF Task Flows."

    Fun with Enterprise Manager | Rene van Wijk
    Oracle ACE Rene van Wijk shares some background and some tuning and other tech tips for working with Oracle Enterprise Manager.

    Using VirtualBox to test drive Windows Blue | The Fat Bloke
    The Fat Bloke shares a tech tip for those interested in giving Windows Blue a try on VirtualBox.

    Podcast Show Notes: The Fusion Middleware A-Team and the Chronicles of Architecture
    In this three-part series, Oracle Fusion Middleware A-Team members Jennifer Briscoe, Clifford Musante, Mikael Ottosson, and Pardha Reddy talk about the origins and mission of the FMW A-Team and about the great technical content you'll find on the recently launched Oracle A-Team blog. Part one is now available.

    5 Best Practices - Laying the Foundation for WebCenter Projects | John Brunswick
    Oracle WebCenter expert John Brunswick shares best practices that "enable the creation of portal solutions with minimal resource overhead, while offering the greatest flexibility for progressive elaboration."

    Oracle Magazine - July/Aug 2013
    The digital edition of the July/August edition of Oracle Magazine is now available. This issue includes my architect community column, "The CX Factor," which features insight from community members on "why and how CX has become a significant factor in enterprise IT."

    Read the article

  • Incremental Statistics Maintenance – what statistics will be gathered after DML occurs on the table?

    - by Maria Colgan
    Incremental statistics maintenance was introduced in Oracle Database 11g to improve the performance of gathering statistics on large partitioned tables. When incremental statistics maintenance is enabled for a partitioned table, Oracle accurately generates global-level statistics by aggregating partition-level statistics. As more people begin to adopt this functionality, we have gotten more questions about how incremental statistics are expected to behave in a given scenario. For example, last week we got a question about which partitions should have statistics gathered on them after DML has occurred on the table. The person who asked the question assumed that statistics would only be gathered on partitions that had stale statistics (10% of the rows in the partition had changed). However, what they actually saw when they did a DBMS_STATS.GATHER_TABLE_STATS was that all of the partitions affected by the DML had statistics re-gathered on them.

    This is the expected behavior. Incremental statistics maintenance is supposed to yield the same statistics as gathering table statistics from scratch, just faster. This means incremental statistics maintenance needs to gather statistics on any partition that will change the global or table-level statistics. For instance, the min or max value for a column could change after just one row is inserted or updated in the table.

    It might be easier to demonstrate this using an example. Let's take the ORDERS2 table, which is partitioned by month on order_date. We will begin by enabling incremental statistics for the table and gathering statistics on the table. After the statistics gather, the last_analyzed date for the table and all of the partitions shows 13-Mar-12, and we now have a baseline set of column statistics for the ORDERS2 table. We can also confirm that we really did use incremental statistics by querying the dictionary table sys.HIST_HEAD$, which should have an entry for each column in the ORDERS2 table.

    So, now that we have established a good baseline, let's move on to the DML. Information is loaded into the latest partition of the ORDERS2 table once a month. Existing orders may also be updated to reflect changes in their status. Let's assume the following transactions take place on the ORDERS2 table this month. After these transactions have occurred, we need to re-gather statistics, since the partition ORDERS_MAR_2012 now has rows in it and the number of distinct values and the maximum value for the STATUS column have also changed. Now if we look at the last_analyzed date for the table and the partitions, we will see that the global statistics and the statistics on the partitions where rows have changed due to the update (ORDERS_FEB_2012) and the data load (ORDERS_MAR_2012) have been updated. The column statistics also reflect the changes, with the number of distinct values in the status column increasing to reflect the update.

    So, incremental statistics maintenance will gather statistics on any partition whose data has changed, where that change will impact the global-level statistics.
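    For reference, here is a minimal sketch of the setup described above. The DBMS_STATS calls are the standard ones; the SH schema name is an assumption for illustration only:

      -- Enable incremental statistics maintenance on the partitioned table
      EXEC DBMS_STATS.SET_TABLE_PREFS('SH', 'ORDERS2', 'INCREMENTAL', 'TRUE');

      -- Gather statistics; with INCREMENTAL=TRUE, global statistics are
      -- derived by aggregating partition-level synopses rather than by
      -- re-scanning the entire table
      EXEC DBMS_STATS.GATHER_TABLE_STATS('SH', 'ORDERS2');

      -- After DML, re-gather and check which partitions were re-analyzed
      EXEC DBMS_STATS.GATHER_TABLE_STATS('SH', 'ORDERS2');
      SELECT partition_name, last_analyzed
      FROM   dba_tab_partitions
      WHERE  table_owner = 'SH' AND table_name = 'ORDERS2';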

    Read the article

  • OS Analytics - Deep Dive Into Your OS

    - by Eran_Steiner
    Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes.

    We will have a call to discuss this blog - please join us!
    Date: Thursday, November 1, 2012
    Time: 11:00 am, Eastern Daylight Time (New York, GMT-04:00)
    1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833067&UID=1512092402&PW=NY2JhMmFjMmFh&RT=MiMxMQ%3D%3D
    2. If requested, enter your name and email address.
    3. If a password is required, enter the meeting password: oracle123
    4. Click "Join".
    To join the teleconference:
    Call-in toll-free number: 1-866-682-4770 (US/Canada)
    Other countries: https://oracle.intercallonline.com/portlets/scheduling/viewNumbers/viewNumber.do?ownerNumber=5931260&audioType=RP&viewGa=true&ga=ON
    Conference Code: 7629343#
    Security code: 7777#

    Here is a quick summary of what you can do with OS Analytics in Ops Center:
    - View historical charts and real time values of CPU, memory, network and disk utilization
    - Find the top CPU and memory processes in real time or on a certain historical day
    - Determine proper monitoring thresholds based on historical data
    - View Solaris services status details
    - Drill down into process details
    - View the busiest zones, if applicable

    Where to start
    To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, memory utilization and network utilization, along with the current real time top 5 processes in each category. You can click each of the top 5 processes to see a more detailed view of that process. One of the cool things is that you can see the process tree for the process along with its port bindings and open file descriptors.

    On Solaris machines with zones, you get an extra level of tabs, allowing you to get more information on the different zones. This is a good way to see the busiest zones. For example, one zone may not take a lot of CPU but it can consume a lot of memory, or perhaps network bandwidth. To see the detailed Analytics for each of the zones, simply click each of the zones in the tree and go to its Analytics tab.

    Next, click the "Processes" tab to see real time information on all the processes on the machine. An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center, this column will remain empty.

    Next, if you view a Solaris machine, you will have a "Services" tab. By default, all services are displayed, but you can choose to display only certain states, for example those in maintenance or the degraded ones. You can highlight a service and choose to view the details, where you can see the dependencies, dependents and also the location of the service log file.

    The "Threshold" tab is particularly helpful - you can view historical trends of different monitored values and, based on the graph, determine what the monitoring values should be. You can ask Ops Center to suggest monitoring levels based on the historical values, or you can set your own. The different colors in the graph represent the current set levels: red for critical, yellow for warning and blue for information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU usage, try shorter, more detailed time frames, such as one hour or one day.

    Applying new monitoring values
    When first applying new values to monitored attributes, a popup will come up asking if it's OK to take you out of the current Monitoring Policy. This is OK if you want either to have custom monitoring for a specific machine, or to use the current machine as a "gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies".

    Visiting the past
    Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to look at it in real time. Looking at yesterday's data on one of our machines, we can see an interesting CPU spike happening at around 3:30 am along with some memory use, and in the bottom table we can see the top 5 CPU and memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time.

    Under the hood
    The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/
    - An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp at which they were taken.
    - If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder for each zone, with its own "os.zip" in it.
    - If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller.
    The actual script collecting the data can be viewed for debugging purposes as well:
    - On Linux, the location is /opt/sun/xvmoc/private/os_analytics/collect
    - On Solaris, the location is /opt/SUNWxvmoc/private/os_analytics/collect
    If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it:
    # touch /tmp/.collect.stderr
    The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view them per agent in /opt/sun/n1gc/lib/XVM.properties. Find the section "Analytics configurable properties for OS and VSC" to view the Analytics-specific values.

    I hope you find this helpful! Please post questions in the comments below.
    Eran Steiner
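    As a quick illustration of the "Under the hood" layout described above, one might poke at the collected data like this. The paths are the ones listed in the post; the epoch value is a made-up example, and GNU date syntax is assumed (on Solaris, the perl line can stand in):

      # List the epoch-stamped sample files collected for the main OS
      cd /var/opt/sun/xvm/analytics/historical
      unzip -l os.zip | head

      # Translate an epoch-stamped file name into a readable date
      date -d @1351782000
      perl -e 'print scalar localtime(1351782000), "\n"'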

    Read the article

  • Media Temple-like hosting services?

    - by antonpug
    I have a couple of WordPress sites which do not get much traffic now, but I plan on expanding to something like 1,000-2,000 visits/day in a year or two. Media Temple has some really nice offerings, but their WordPress plan is $20/month, which is a little too much, seeing as at this point my site is more of a hobby than a money-making machine. I currently host with HostGator (just switched, after GoDaddy, iPage and Bluehost). All these cheaper/popular hosting services are okay, but it would be nice to find something a little bit more "premium", at a lower cost than MT. Anyone know anything worth looking at?

    Read the article

  • PeopleSoft HCM @ OHUG 11: Enter the Matrix

    - by Jay Zuckert
    The PeopleSoft HCM team is back from a very busy and exciting OHUG conference in Orlando. The packed, standing-room-only PeopleSoft HCM Roadmap keynote was the highlight of the conference for many attendees, and the reviews are in:
    "PeopleSoft rocked the house!"
    "Great demonstration of products in the keynote."
    "Best keynote in a long time, and fun."
    "Engaging and entertaining, great demonstration of capabilities."
    "Message received loud and clear, PeopleSoft applications are here to stay."
    "PeopleSoft has a real vision moving forward."
    "Real-time polls using mobile texting were cutting edge."

    Tracy Martin (as Trinity) and other members of the PeopleSoft HCM team presented a 'must-see' Matrix-themed session while dressed as movie characters. The keynote highlighted planned HCM capabilities for matrix administration and future organization visualization enhancements. The team also previewed the planned Manager Dashboard and Talent Summary.

    Following the keynote, some of the cast posed for photo opportunities at the OHUG booth in the exhibition hall. As you can imagine, they received some interesting looks walking by the other vendor booths. The PeopleSoft HCM team also presented numerous other OHUG sessions covering PeopleSoft Talent Management, Compensation, HR HelpDesk, Payroll, Global HCM Practices, Time & Labor, Absence Management, and Benefits. All of those presentations are available from the OHUG site at www.ohug.org. When not in one of the well-attended PeopleSoft HCM sessions, conference attendees filled the Oracle booth in the exhibition hall to see live product demonstrations. True to their PeopleSoft roots, some of the PeopleSoft HCM team played as hard as they worked in Orlando and enjoyed the OHUG Appreciation event along with customers at the Hard Rock. We are already busy planning for Oracle OpenWorld 2011 and prepping sessions our PeopleSoft HCM customers are sure to like. We hope to see you there in San Francisco from Oct. 2-6. To learn more about OpenWorld or to register, click here.

    Read the article

  • Maintaining Revision Levels

    - by kyle.hatlestad
    A question that came up on an earlier blog post was how to limit the number of revisions on a piece of content. UCM does not inherently enforce any sort of limit on how many revisions you can have; it's unlimited. In some cases, there may be content that goes through lots of changes, but there simply isn't a need to keep all of its revisions around. Deleting those revisions through the content information screen can be very cumbersome, and going through the Repository Manager applet can take time as well to filter and find the revisions to get rid of. But there is an easier way, through the Archiver.

    The Export Query criteria in Archiver include a very handy field called 'Revision Rank'. Revision labels typically go up as new revisions come in (e.g. 1, 2, 3, 4, etc...), but you can't use that field to tell it to keep the top 5 revisions, because those top 5 revision numbers are always going up. Revision rank goes the opposite direction: the very latest revision is always 0, the previous revision is 1, the one before that is 2, and so on and so forth. With revision rank, you can set your query to look for any Revision Rank greater than or equal to 5. As older revisions move down the line, their revision rank gets higher and higher until they reach that threshold. Then, when you run that archive export, you can choose to delete and remove those revisions.

    Running that export in Archiver is normally a manual process. But with Idc Command, you can script the process and have it run automatically from the server. Idc Command is a utility that allows you to run any of the content server services via the command line. You basically feed it a text file with the services and parameters defined, along with the user to run it as. The Idc Command executable is located within the \bin\ directory:

    $ ./IdcCommand -f DeleteOlderRevisions.txt -u sysadmin -l delete_revisions.log

    In this example, our IdcCommand file to run the export and do the deletions would look like:

    IdcService=EXPORT_ARCHIVE
    aArchiveName=DeleteOlderRevisions
    aDoDelete=1
    IDC_Name=idc
    dataSource=RevisionIDs
    <<EOD>>

    You can then use automated scheduling routines in the OS to run the command and command file at the frequency needed. Remember that you are deleting the revisions from within UCM, but they are still getting placed within the archive, so you will need to delete those batches to have them fully removed (or re-import them if you need to recover them). For more information about Idc Command, see the Idc Command Reference Guide.
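    As a sketch of that scheduling step on a Unix-like host, a cron entry along these lines would run the cleanup nightly. The install path and log location here are hypothetical; substitute your own content server's \bin\ directory and command file:

      # Hypothetical crontab entry: run the revision-cleanup export
      # every night at 2:00 am as the content server's OS user
      0 2 * * * /u01/ucm/server/bin/IdcCommand -f /u01/ucm/server/bin/DeleteOlderRevisions.txt -u sysadmin -l /var/log/delete_revisions.log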

    Read the article

  • Digital Darwinism: How Brands Can Survive the Rapid Evolution of Society and Technology

    - by Michael Hylton
    Do you want to learn how to thrive in an era of connected consumerism and digital disruptions? Come attend this free webinar on December 13th at 10:00 am PST / 1:00 pm EST as Brian Solis, Altimeter Group analyst, shares his thoughts on how our changing society and technology shifts are impacting brands today. Click here to register for this webcast, part of Oracle’s Social Business Thought Leaders Series.

    Read the article

  • Windows Deployment Services

    - by timbrigham
    I have a slightly advanced Windows Deployment Services setup. My router hands out DHCP addresses, including the following config:

    ip dhcp pool Servers_100
     network 192.168.100.0 255.255.255.0
     bootfile boot\x86\pxelinux.0
     next-server 192.168.100.50
     default-router 192.168.100.1
     dns-server 192.168.100.80 192.168.100.81

    This works perfectly for other subnets - I have a couple of screens in my pxelinux config that allow me to select my various Linux installers or enter the Windows preboot environment. For some reason, on this subnet I'm only receiving the default bootfile that opens the Windows preboot environment. Any idea why?

    Read the article

  • Samples for RESTful web services for WCF

    - by George2
    Hello everyone, I am new to RESTful web services in WCF, but not new to WCF. I want to develop some simple RESTful web services in WCF which can be manually accessed from a browser. Any good samples or documents to recommend? I am using C#. Thanks in advance, George
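    For what it's worth, a minimal self-hosted sketch of the kind of sample being asked about, using the .NET 3.5 web programming model (references to System.ServiceModel and System.ServiceModel.Web are required; the service name and URI below are illustrative only):

      using System;
      using System.ServiceModel;
      using System.ServiceModel.Web;

      [ServiceContract]
      public interface IHelloService
      {
          // GET http://localhost:8000/hello/{name} returns a JSON string
          [OperationContract]
          [WebGet(UriTemplate = "hello/{name}", ResponseFormat = WebMessageFormat.Json)]
          string Hello(string name);
      }

      public class HelloService : IHelloService
      {
          public string Hello(string name)
          {
              return "Hello, " + name;
          }
      }

      class Program
      {
          static void Main()
          {
              // WebServiceHost wires up the REST (webHttp) behavior automatically
              var host = new WebServiceHost(typeof(HelloService),
                                            new Uri("http://localhost:8000/"));
              host.Open();
              Console.WriteLine("Browse to http://localhost:8000/hello/George - press Enter to stop.");
              Console.ReadLine();
              host.Close();
          }
      }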

    Read the article

  • Enabling Multiple Monitor Support from Terminal Services/Remote Desktop over Citrix

    - by Nicolas Webb
    Our Remote Desktop/Terminal Services solution at work relies on Citrix for machines not connected via the VPN. We're using Citrix Xen server (I'm pretty sure), and I'm going to try to connect to a Windows 7 host (my work computer); I think the RDC client runs on a Win2003 host (exposed via Citrix). Is it possible to take advantage of Windows 7 multiple monitor support for RDC with this setup? Would I need to get my Citrix guys to use a different host machine for the RDC (Win2008, or Win7)? I'm probably going to connect using the OS X Citrix client, but I'd be willing to BootCamp/Fusion up a Windows instance to work remotely as well. I really want to be able to use multiple monitors remotely. It does "span" multiple monitors currently (I have a 3000x1024 desktop, for example), but I'd rather have "true" multiple monitor support instead of one giant desktop, if possible.

    Read the article

  • 11.2 Upgrade Companion has been updated!

    - by roy.swonger
    The long-awaited update of the 11.2 Upgrade Companion is now available in My Oracle Support, in the usual location (Note 785351.1). This comprehensive update incorporates lessons learned from adoption of the 11.2.0.2 patch set release. We have also included many more links for customers in RAC/ASM (Grid Infrastructure) environments, information about the GoldenGate 11g release, and more! As always, the Upgrade Companion is available in PDF and HTML formats in addition to the web-viewable, Java-based document.

    Read the article

  • Fusion Applications Enablement Toolkit: the Partner's single place of information for all OPN Fusion Apps resources

    - by Richard Lefebvre
    Take a look, and then come back regularly, at https://blogs.oracle.com/opnenablement/resource/fusion_applications.html ... a micro site designed to give our EMEA Fusion Partners all the critical Fusion enablement information (key links, events, materials, etc.) that they need to achieve specialization. This site will be updated on a regular basis, especially for OPN events and training sessions.

    Read the article

  • Terminal services and memory limits

    - by Mark Wassell
    Is there a way in Terminal Services to set limits on memory-related parameters for a process? For example, working set size and, possibly, if it makes sense, total virtual memory allocation for the session? To turn the question around: we have an application which cannot allocate as much virtual memory when running on a terminal server as it can when running on a desktop PC (both of which I would expect to have a limit of 2GB for user-mode address space), and I was wondering if there is another limit for processes or users on a terminal server. Perhaps it is even 2GB per user rather than per process.

    Read the article

  • 466 ADF sample applications and growing - ADF EMG Kaleidoscope announcement

    - by Chris Muir
    Interested in finding more ADF sample applications? How do 466 applications take your fancy? Today at ODTUG's Kaleidoscope conference in San Antonio, the ADF EMG announced the launch of a new ADF Samples website, an index of 466 ADF applications gathered from expert ADF bloggers, including customers and Oracle staff. For more details on this great ADF community resource, head over to the ADF EMG announcement.

    Read the article

  • Installing Forms and Reports on a development system

    - by Duncan Mills
    By popular demand I've resurrected / updated one of the old blog postings from Jan Carlin's Blog on GroundSide here. A recent (lengthy) post on the Forms forums chronicles the problems some of you have had installing F&R on a development machine. See the link in the headline of this post for the main one. When installing, here are some points to bear in mind:

    - Download and install WebLogic Server first: http://www.oracle.com/technology/software/products/middleware/index.html
    - Find the Forms and Reports (and Disco and Portal) zip files here. Download them to the desktop (or some other temporary directory of your choosing).
    - Unzip both of the zip files into the same new directory (maybe called 'stage') and check that you have 4 directories in the stage dir when you are finished unzipping: 'Disk1', 'Disk2', 'Disk3' and 'Disk4'. These folders are specified in the zip file structure and must be preserved for the setup executable to work. If you use WinZip and have a right-click menu option that says "Extract to here", use that by right-click-dragging the zip file onto the newly created directory. Don't use the "Extract to folder %HOME%\Desktop\ofm_pfrd...disk_1of2" option; that will get you into the trouble that was reported early in this thread.
    - Free up as much memory as you can. Stop services, background processes, virus scanners and databases (you don't need a DB to install Forms) and other things lurking about on your machine. You can restart them when the install is done. Around 1.5 GB of free real memory should do it; if it doesn't, free up more if you can. Don't change the swap space unless you know what you are doing - let Windows handle it. A 1 GB machine will likely not be enough; you will likely need at least 2 GB of RAM.
    - Start the install with setup.exe from the 'Disk1' directory.
    - Choose the Install and Configure option unless you have a good reason not to.
    - Choose a unique instance name even if you deinstalled and removed the last install. I suggest using 'asinst_20090722_1' (today's date in ISO format, with a running incremented number at the end if you install more than two times on a particular day).
    - Unselect Portal and Discoverer and select the Builders you want.
    - Unselect WebCache.
    - Unselect OHS.
    - Unselect the single sign-on option.
    - Check for any failures and choose the retry option if any occur. If that doesn't fix the problem, call Oracle Customer Support.

    Read the article

  • Discover How to Deliver Measurable Business Value from your HCM Strategy

    - by Jay Richey, HCM Product Marketing
    Join our live Webcast on Wednesday, July 13 to learn how to fine-tune your HCM strategy and better utilize your Oracle HCM investment. In this session you'll learn how to access, analyze and act on information from multiple sources to ensure that all workforce decisions are focused on meeting overall business objectives.
    Date: Wednesday, July 13, 2011
    Time: 10:00 a.m. PT / 1:00 p.m. ET
    Register now!

    Read the article

  • How to fill a DataGridView from an Oracle nested table

    - by arkadiusz85
    I want to create my own type:

    CREATE TYPE t_read AS OBJECT (
      id_worker NUMBER(20),
      how_much NUMBER(5,2),
      adddate_r DATE,
      date_from DATE,
      date_to DATE
    );

    Then I create a table type of my type:

    CREATE TYPE t_tab_read AS TABLE OF t_read;

    The next step is to create a table with that type:

    CREATE TABLE Reading (
      id_watermeter NUMBER(20) constraint Watermeter_fk1 references Watermeters(id_watermeter),
      read t_tab_read
    ) NESTED TABLE read STORE AS store_read;

    Microsoft Visual Studio cannot display this type in a DataGridView. I use ODP.NET (Oracle.DataAccess):

    using Oracle.DataAccess;
    using Oracle.DataAccess.Client;

    private void button1_Click(object sender, EventArgs e)
    {
        try
        {
            // my working class to connect to the database
            ConnectionClass.BeginConnection();
            OracleDataAdapter tmp = new OracleDataAdapter();
            tmp = ConnectionClass.ReadCommand(ReadClass.test());
            DataSet dataset4 = new DataSet();
            tmp.Fill(dataset4, "Read1");
            dataGridView4.DataSource = dataset4.Tables["Read1"];
        }
        catch (Exception o)
        {
            MessageBox.Show(o.Message);
        }
    }

    public class ReadClass
    {
        public static OracleCommand test()
        {
            string sql = "select c.id_watermeter, a.* from reading c, table(c.read) a where id_watermeter=1";
            ConnectionClass.Command1 = new OracleCommand(sql, ConnectionClass.Connection);
            ConnectionClass.Command1.CommandType = CommandType.Text;
            return ConnectionClass.Command1;
        }
    }

    I have tried the following queries:

    string sql = "select r.id_watermeter, o.id_worker, o.how_much, o.adddate_r, o.date_from, o.date_to from reading r, table(r.read) o where r.id_watermeter=1";
    string sql = "select a.* from reading c, table(c.read) a where id_watermeter=1";
    string sql = "select a.id_worker, a.how_much, a.adddate_r, a.date_from, a.date_to from reading c, table(c.read) a where id_watermeter=1";
    string sql = "select c.id_watermeter, a.* from reading c, table(c.read) a where id_watermeter=1";

    Error: Unsupported Oracle data type USERDEFINED encountered

    Can somebody help me fill a DataGridView using data from a nested table? I am using Oracle 10g XE.

    Read the article

  • Source code for built-in Linux services

    - by Sirish Kumar
    Hi, I am looking at Linux startup services, like cron, which runs at runlevel 5 and is started from a script located in init.d. In the startup script I can only see the script file itself and the location of the binary that is executed on startup. Where can I see the actual source code of these services?
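    The init.d scripts only launch the packaged binaries, so the source comes from the distribution's source packages. A quick sketch of one way to fetch it (package names vary by distribution, and deb-src repositories must be enabled for the Debian example):

      # Debian/Ubuntu: download and unpack the source of the cron package
      apt-get source cron

      # RPM-based distros: fetch the source RPM via yum-utils; the package
      # is often named "cronie" or "vixie-cron" depending on the distribution
      yumdownloader --source cronie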

    Read the article

  • Developing Schema Compare for Oracle (Part 3): Ghost Objects

    - by Simon Cooper
    In the previous blog post, I covered how we solved the problem of dependencies between objects and between schemas. However, that isn't the end of the issue. The dependencies algorithm I described works when you're querying live databases and you can get dependencies for a particular schema direct from the server, and that's all well and good. To throw a (rather large) spanner in the works, Schema Compare also has the concept of a snapshot, which is a read-only compressed XML representation of a selection of schemas that can be compared in the same way as a live database. This can be useful for keeping historical records or a baseline of a database schema, or comparing a schema on a computer that doesn't have direct access to the database.

    So, how do snapshots interact with dependencies? Inter-database dependencies don't pose an issue as we store the dependencies in the snapshot. However, comparing a snapshot to a live database with cross-schema dependencies does cause a problem; what if the live database has a dependency to an object that does not exist in the snapshot? Take a basic example schema, where you're only populating SchemaA:

    Source:
    CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(col1));
    CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);

    Target (using snapshot):
    CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100));
    CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100));

    In this case, we want to generate a sync script to synchronize SchemaA.Table1 on the database represented by the snapshot. When taking a snapshot, database dependencies are followed, but because you're not comparing it to anything at the time, the comparison dependencies algorithm described in my last post cannot be used. So, as you only take a snapshot of SchemaA on the target database, SchemaB.Table1 will not be in the snapshot. If this snapshot is then used to compare against the above source schema, SchemaB.Table1 will be included in the source, but the object will not be found in the target snapshot. This is the same problem that was solved with comparison dependencies, but here we cannot use the comparison dependencies algorithm as the snapshot has not got any information on SchemaB!

    We've now hit quite a big problem - we're trying to include SchemaB.Table1 in the target, but we simply do not know the status of this object on the database the snapshot was taken from; whether it exists in the database at all, whether it's the same as the target, whether it's different... What can we do about this sorry state of affairs? Well, not a lot, it would seem. We can't query the original database, as it may not be accessible, and we cannot assume any default state as it could be wrong and break the script (and we currently do not have a roll-back mechanism for failed synchronizes). The only way to fix this properly is for the user to go right back to the start and re-create the snapshot, explicitly including the schemas of these 'ghost' objects. So, the only thing we can do is flag up dependent ghost objects in the UI, and ask the user what we should do with it - assume it doesn't exist, assume it's the same as the target, or specify a definition for it. Unfortunately, such functionality didn't make the cut for v1 of Schema Compare (as this is very much an edge case for a non-critical piece of functionality), so we simply flag the ghost objects up in the sync wizard as unsyncable, and let the user sort out what's going on and edit the sync script as appropriate.
    There are some things that we do to somewhat alleviate this rather unhappy situation; if a user creates a snapshot from the source or target of a database comparison, we include all the objects registered from the database, not just the ones in the schemas originally selected for comparison. This includes any extra dependent objects registered through the comparison dependencies algorithm. If the user then compares the resulting snapshot against the same database they were comparing against when it was created, the extra dependencies will be included in the snapshot as required, and everything will be good. Fortunately, this problem will come up quite rarely, and only when the user uses snapshots and tries to sync objects with unknown cross-schema dependencies. However, the solution was not an easy one, and it led to some difficult architecture and design decisions within the product. And all this pain follows from the simple decision to allow schema pre-filtering! Next: why adding a column to a table isn't as easy as you would think...

    Read the article
