Search Results


  • Easy Profiling Point Insertion

    - by Geertjan
    One really excellent feature of NetBeans IDE is its Profiler. What's especially cool is that you can analyze code fragments: you can right-click in a Java file and then choose Profiling | Insert Profiling Point, which lets you measure a fragment from one statement to another, e.g., how long a particular piece of code takes to execute: https://netbeans.org/kb/docs/java/profiler-profilingpoints.html

    However, right-clicking a Java file and then going all the way down a longish list of menu items to find "Profiling", and then "Insert Profiling Point", is a lot less easy than right-clicking in the sidebar (known as the glyph gutter) and setting a profiling point in exactly the same way as a breakpoint. That is much easier and more intuitive, and makes it far more likely that I'll use the Profiler at all. Once profiling points have been set, as always, another menu item is added for managing them.

    To achieve this, I added the following to the "layer.xml" file:

    <folder name="Editors">
        <folder name="AnnotationTypes">
            <file name="profiler.xml" url="profiler.xml"/>
            <folder name="ProfilerActions">
                <file name="org-netbeans-modules-profiler-ppoints-ui-InsertProfilingPointAction.shadow">
                    <attr name="originalFile" stringvalue="Actions/Profile/org-netbeans-modules-profiler-ppoints-ui-InsertProfilingPointAction.instance"/>
                    <attr name="position" intvalue="300"/>
                </file>
            </folder>
        </folder>
    </folder>

    Notice that a "profiler.xml" file is referred to above, in the same location as the "layer.xml" file. Here is its content:

    <!DOCTYPE type PUBLIC '-//NetBeans//DTD annotation type 1.1//EN' 'http://www.netbeans.org/dtds/annotation-type-1_1.dtd'>
    <type name='editor-profiler'
          description_key='HINT_PROFILER'
          localizing_bundle='org.netbeans.eppi.Bundle'
          visible='true'
          type='line'
          actions='ProfilerActions'
          severity='ok'
          browseable='false'/>

    The only disadvantage is that this registers the profiling point insertion in the glyph gutter for all file types. But that is true for the debugger too: there is no MIME-type-specific glyph gutter; it is shared by all MIME types. It is a little confusing that the profiling point insertion can now, in theory, be set for all MIME types, but that is also true for the debugger, even though it does not apply to all MIME types. That probably explains why profiling point insertion can officially only be done from the right-click popup menu of Java files: the developers wanted to avoid confusion and make it available for Java files only. However, since I am already aware that I can't set the Java debugger in an HTML file, I am also aware that the Java profiler can't be set that way either.

    If you find this useful too, you can download and install the NBM from here: http://plugins.netbeans.org/plugin/55002

    Read the article

  • I see no LOBs!

    - by Paul White
    Is it possible to see LOB (large object) logical reads from STATISTICS IO output on a table with no LOB columns? I was asked this question today by someone who had spent a good fraction of their afternoon trying to work out why this was occurring – even going so far as to re-run DBCC CHECKDB to see if any corruption had taken place. The table in question wasn't particularly pretty – it had grown somewhat organically over time, with new columns being added every so often as the need arose. Nevertheless, it remained a simple structure with no LOB columns – no TEXT or IMAGE, no XML, no MAX types – nothing aside from ordinary INT, MONEY, VARCHAR, and DATETIME types. To add to the air of mystery, not every query that ran against the table would report LOB logical reads – just sometimes – but when it did, the query often took much longer to execute.

    Ok, enough of the preamble. I can't reproduce the exact structure here, but the following script creates a table that will serve to demonstrate the effect:

    IF OBJECT_ID(N'dbo.Test', N'U') IS NOT NULL
        DROP TABLE dbo.Test
    GO
    CREATE TABLE dbo.Test
    (
        row_id NUMERIC IDENTITY NOT NULL,
        col01 NVARCHAR(450) NOT NULL,
        col02 NVARCHAR(450) NOT NULL,
        col03 NVARCHAR(450) NOT NULL,
        col04 NVARCHAR(450) NOT NULL,
        col05 NVARCHAR(450) NOT NULL,
        col06 NVARCHAR(450) NOT NULL,
        col07 NVARCHAR(450) NOT NULL,
        col08 NVARCHAR(450) NOT NULL,
        col09 NVARCHAR(450) NOT NULL,
        col10 NVARCHAR(450) NOT NULL,
        CONSTRAINT [PK dbo.Test row_id]
            PRIMARY KEY CLUSTERED (row_id)
    );

    The next script loads the ten variable-length character columns with one-character strings in the first row, two-character strings in the second row, and so on down to the 450th row:

    WITH Numbers AS
    (
        -- Generates numbers 1 - 450 inclusive
        SELECT TOP (450)
            n = ROW_NUMBER() OVER (ORDER BY (SELECT 0))
        FROM master.sys.columns C1,
             master.sys.columns C2,
             master.sys.columns C3
        ORDER BY n ASC
    )
    INSERT dbo.Test WITH (TABLOCKX)
    SELECT
        REPLICATE(N'A', N.n), REPLICATE(N'B', N.n), REPLICATE(N'C', N.n),
        REPLICATE(N'D', N.n), REPLICATE(N'E', N.n), REPLICATE(N'F', N.n),
        REPLICATE(N'G', N.n), REPLICATE(N'H', N.n), REPLICATE(N'I', N.n),
        REPLICATE(N'J', N.n)
    FROM Numbers AS N
    ORDER BY N.n ASC;

    Once those two scripts have run, the table contains 450 rows and 10 columns of data. Most of the time, when we query data from this table, we don't see any LOB logical reads, for example:

    -- Find the maximum length of the data in
    -- column 5 for a range of rows
    SELECT result = MAX(DATALENGTH(T.col05))
    FROM dbo.Test AS T
    WHERE row_id BETWEEN 50 AND 100;

    But with a different query…

    -- Read all the data in column 1
    SELECT result = MAX(DATALENGTH(T.col01))
    FROM dbo.Test AS T;

    …suddenly we have 49 LOB logical reads, as well as the 'normal' logical reads we would expect.

    The Explanation

    If we had tried to create this table in SQL Server 2000, we would have received a warning message to say that future INSERT or UPDATE operations on the table might fail if the resulting row exceeded the in-row storage limit of 8060 bytes. If we needed to store more data than would fit in an 8060 byte row (including internal overhead), we had to use a LOB column – TEXT, NTEXT, or IMAGE. These special data types store the large data values in a separate structure, with just a small pointer left in the original row.

    Row Overflow

    SQL Server 2005 introduced a feature called row overflow, which allows one or more variable-length columns in a row to move to off-row storage if the data in a particular row would otherwise exceed 8060 bytes.
    You no longer receive a warning when creating (or altering) a table that might need more than 8060 bytes of in-row storage; if SQL Server finds that it can no longer fit a variable-length column in a particular row, it will silently move one or more of these columns off the row into a separate allocation unit. Only variable-length columns can be moved in this way (for example the (N)VARCHAR, VARBINARY, and SQL_VARIANT types). Fixed-length columns (like INTEGER and DATETIME, for example) never move into 'row overflow' storage. The decision to move a column off-row is made on a row-by-row basis – so data in a particular column might be stored in-row for some table records, and off-row for others. In general, if SQL Server finds that it needs to move a column into row-overflow storage, it moves the largest variable-length column in that row. Note that in the case of an UPDATE statement that results in the 8060 byte limit being exceeded, it might not be the column that grew that is moved!

    Sneaky LOBs

    Anyway, that's all very interesting, but I don't want to get too carried away with the intricacies of row-overflow storage internals. The point is that it is now possible to define a table with non-LOB columns that will silently exceed the old row-size limit and result in ordinary variable-length columns being moved to off-row storage. Adding new columns to a table, expanding an existing column definition, or simply storing more data in a column than you used to – all these things can result in one or more variable-length columns being moved off the row. Note that row-overflow storage is logically quite different from old-style LOB and new-style MAX data type storage – individual variable-length columns are still limited to 8000 bytes each – you can just have more of them now. Having said that, the physical mechanisms involved are very similar to full LOB storage – a column moved to row-overflow leaves a 24-byte pointer record in the row, and the 'separate storage' I have been talking about is structured very similarly to both old-style LOBs and new-style MAX types. The disadvantages are also the same: when SQL Server needs a row-overflow column value, it needs to follow the in-row pointer and navigate another chain of pages, just like retrieving a traditional LOB.

    And Finally…

    In the example script presented above, the rows with row_id values from 402 to 450 inclusive all exceed the total in-row storage limit of 8060 bytes. A SELECT that references a column in one of those rows that has moved to off-row storage will incur one or more LOB logical reads as the storage engine locates the data. The results on your system might vary slightly depending on your settings, of course; but in my tests only column 1 in rows 402-450 moved off-row. You might like to play around with the script – updating columns, changing data type lengths, and so on – to see the effect on LOB logical reads and which columns get moved when. You might even see row-overflow columns moving back in-row if they are updated to be smaller (hint: reduce the size of a column entry by at least 1000 bytes if you hope to see this). Be aware that SQL Server will not warn you when it moves 'ordinary' variable-length columns into overflow storage, and it can have dramatic effects on performance. It makes more sense than ever to choose column data types sensibly.
If you make every column a VARCHAR(8000) or NVARCHAR(4000), and someone stores data that results in a row needing more than 8060 bytes, SQL Server might turn some of your column data into pseudo-LOBs – all without saying a word. Finally, some people make a distinction between ordinary LOBs (those that can hold up to 2GB of data) and the LOB-like structures created by row-overflow (where columns are still limited to 8000 bytes) by referring to row-overflow LOBs as SLOBs.  I find that quite appealing, but the ‘S’ stands for ‘small’, which makes expanding the whole acronym a little daft-sounding…small large objects anyone? © Paul White 2011 email: [email protected] twitter: @SQL_Kiwi
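    A practical footnote for anyone running the demo: the LOB counts discussed above come from STATISTICS IO, which must be enabled for the session. A minimal harness around the two test queries, assuming the same dbo.Test table from the scripts above:

    SET STATISTICS IO ON;

    -- No LOB logical reads expected: these rows fit in-row
    SELECT result = MAX(DATALENGTH(T.col05))
    FROM dbo.Test AS T
    WHERE row_id BETWEEN 50 AND 100;

    -- LOB logical reads appear: some col01 values have moved off-row
    SELECT result = MAX(DATALENGTH(T.col01))
    FROM dbo.Test AS T;

    SET STATISTICS IO OFF;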

    Read the article

  • YAHOO and BING support for Index, Image and Mobile sitemaps

    - by kishore
    I know Google Webmaster Tools supports submitting image, mobile, video and other types of sitemaps. Yahoo also mentions mobile sitemaps here, but does it support image and video sitemaps? I could not find whether Bing supports any of these types other than XML sitemaps. Can someone please point me to any documentation on submitting index, image and mobile sitemaps? Also, do Yahoo and Bing support sitemap index files?

    Read the article

  • How To: Using spatial data with Entity Framework and Connector/Net

    - by GABMARTINEZ
    One of the new features introduced in Entity Framework 5.0 is the incorporation of some new types of data within an Entity Data Model: the spatial data types. These types allow us to perform operations on coordinate values in an easier way. There's no need to add stored routines or functions for every operation among these geometry types; the user now has the choice of keeping this logic in the application or in the database. The new 6.7.4 version of the Connector/Net library also incorporates this feature, so our users can start exploring it and provide us feedback or comments about this new functionality. Through this tutorial on creating a Code First entity model with a geometry column, we'll show an example of using geometry types and some common operations performed with them inside an application.

    Requirements:
    - Connector/Net 6.7.4
    - Entity Framework 5.0
    - .NET Framework 4.5
    - Basic understanding of Entity Framework and the C# language
    - An installed and running instance of MySQL Server 5.5.x or 5.6.10
    - Visual Studio 2012

    Step One: Create a new Console Application

    Inside Visual Studio, select the File->New Project menu option and select the Console Application template. Also make sure the .NET 4.5 version is selected so the new features for EF 5.0 will work with the application.

    Step Two: Add the Entity Framework Package

    For adding the Entity Framework package there is more than one option: the Package Manager Console or the Manage NuGet Packages dialog. If you want to open the Package Manager Console, go to the Tools menu -> Library Package Manager -> Package Manager Console. On the Package Manager Console, type:

    Install-Package EntityFramework

    This will add a reference to the latest non-alpha release of Entity Framework to the project.

    Step Three: Adding the Entity class and DbContext

    We'll add a simple class that represents a table entity to save some places and their locations, using a DbGeometry column that will be mapped to a Geometry type in MySQL. After that, some operations can be performed using this data.

    public class MyPlace
    {
        [Key]
        public int Id { get; set; }
        public string name { get; set; }
        public DbGeometry location { get; set; }
    }

    public class JourneyDb : DbContext
    {
        public DbSet<MyPlace> MyPlaces { get; set; }
    }

    Also make sure to add the connection string to the App.Config file as in the example:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <configSections>
        <!-- For more information on Entity Framework configuration, visit http://go.microsoft.com/fwlink/?LinkID=237468 -->
        <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
      </configSections>
      <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
      </startup>
      <connectionStrings>
        <add name="JourneyDb" connectionString="server=localhost;userid=root;pwd=;database=journeydb" providerName="MySql.Data.MySqlClient"/>
      </connectionStrings>
      <entityFramework>
      </entityFramework>
    </configuration>

    Note also that the <entityFramework> section is empty.

    Step Four: Adding some new records

    In the Program.cs file, add the following code for the Main method so the database gets created and some new data can be added to the new table. This code adds some records containing specific locations.
    After the records are added, a distance function is used to find how far each location is from the Queens Village Station in New York.

    static void Main(string[] args)
    {
        using (JourneyDb cxt = new JourneyDb())
        {
            cxt.Database.Delete();
            cxt.Database.Create();

            cxt.MyPlaces.Add(new MyPlace()
            {
                name = "JFK INTERNATIONAL AIRPORT OF NEW YORK",
                location = DbGeometry.FromText("POINT(40.644047 -73.782291)"),
            });

            cxt.MyPlaces.Add(new MyPlace()
            {
                name = "ALLEY POND PARK",
                location = DbGeometry.FromText("POINT(40.745696 -73.742638)"),
            });

            cxt.MyPlaces.Add(new MyPlace()
            {
                name = "CUNNINGHAM PARK",
                location = DbGeometry.FromText("POINT(40.735031 -73.768387)"),
            });

            cxt.MyPlaces.Add(new MyPlace()
            {
                name = "QUEENS VILLAGE STATION",
                location = DbGeometry.FromText("POINT(40.717957 -73.736501)"),
            });

            cxt.SaveChanges();

            var points = (from p in cxt.MyPlaces
                          select new { p.name, p.location });

            foreach (var item in points)
            {
                Console.WriteLine("Location " + item.name + " has a distance in Km from Queens Village Station " + DbGeometry.FromText("POINT(40.717957 -73.736501)").Distance(item.location) * 100);
            }
            Console.ReadKey();
        }
    }

    Output:

    Location JFK INTERNATIONAL AIRPORT OF NEW YORK has a distance from Queens Village Station 8.69448802402959 Km.
    Location ALLEY POND PARK has a distance from Queens Village Station 2.84097675104912 Km.
    Location CUNNINGHAM PARK has a distance from Queens Village Station 3.61695793727275 Km.
    Location QUEENS VILLAGE STATION has a distance from Queens Village Station 0 Km.

    Conclusion:

    Adding spatial data to a table is easier than before with Entity Framework 5.0. This new Entity Framework feature that handles spatial data columns within the data layer has a lot of integrated functions and methods to ease this type of task.

    Notes:

    This version of Connector/Net has been released as GA, so it is stable enough to be used in a production environment. Please send us your comments or questions using this blog or at the forums, where we keep answering any questions you have about Connector/Net and MySQL Server. A copy of this sample project can be downloaded here. This application does not include any libraries, so you will have to add them before running it.

    Happy MySQL/.NET coding.
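    As a hedged aside extending the sample (not part of the original walkthrough): because location is a DbGeometry property, the Distance call can in principle be placed inside the LINQ query itself so that the filtering happens in MySQL. Whether it is translated to SQL or must be evaluated client-side (as in the sample above) depends on the provider version, so treat this as a sketch:

    // Sketch: places within roughly 5 km of Queens Village Station,
    // reusing the degrees-to-km approximation (* 100) from the output above.
    var station = DbGeometry.FromText("POINT(40.717957 -73.736501)");

    using (var cxt = new JourneyDb())
    {
        var nearby = from p in cxt.MyPlaces
                     where p.location.Distance(station) * 100 < 5
                     orderby p.location.Distance(station)
                     select p.name;

        foreach (var name in nearby)
        {
            Console.WriteLine(name);
        }
    }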

    Read the article

  • OSB 11g & SAP – Single Channel/Program ID for Multiple IDOCs

    - by Shub Lahiri, A-Team
    Background

    This note is a supplement to the blog entry, SOA 11g & SAP – Single Channel/Program ID for Multiple IDOCs, by Greg Mally. Greg has shown how a single SOA Suite composite can be used with iWay Adapters to receive multiple IDOC types via a single channel in the adapter, corresponding to a single program ID on the SAP system. We will try to address the same requirements within the OSB framework here.

    Project Build - Design Time

    The basic build of an OSB project with the iWay SAP Adapter, as seen in another entry in this blog, consists of working in the OSB Design console and Application Explorer.

    OSB Design Time - Part 1

    We will first create a placeholder project in OSB with a proper directory structure, so that we can export the WSDL, XSD and the JCA binding information from Application Explorer directly into this project.

    Application Explorer - iWay Design Time Tool

    Receiving IDOCs is classified as an inbound event within Application Explorer. For setting up events, a channel is first defined (e.g. iDoc_Channel) using the same PROGRAMID (RFC destination) as defined within SAP for the OSB server. Next, the same channel is used to export the JCA Inbound Event artifacts for the candidate IDOC, e.g. DEBMAS06, directly to the pre-created OSB project. Note that the validation for schema has been turned off. As a result, this will allow the adapter, at runtime, to use a single channel to receive multiple IDOC types from SAP and pass them on to the OSB runtime engine without any validation. In other words, we do not have to repeat the above step for each IDOC type.

    OSB Design Time - Part 2

    Create 2 simple XML-based Business Services to write to a file, e.g. SAP_DEBMAS_File and SAP_MATMAS_File. Next, generate a Proxy Service using the JCA binding file exported from Application Explorer in the previous section. In the generated proxy service, edit the message flow and add a route node. Add a routing table in the route node with the following routing function:

    fn:local-name-from-QName(fn:node-name($body/*[1]))

    This function takes advantage of the fact that the XML payload at runtime, after translation by the adapter, has the IDOC type as the top element - for example, a DEBMAS06 IDOC arrives with DEBMAS06 as its top element, so the function evaluates to "DEBMAS06". With the routing function in place, build the routing table to add 2 branches to route the IDOCs to the appropriate Business Service for writing the XML payload to files in separate directories. This completes the build of the OSB project.

    Testing - Run-Time

    After deployment and activation, the SAP adapter will wait to receive multiple types of IDOCs sent from the SAP system using a single channel. Upon receipt of the IDOCs, the OSB project will route them appropriately to save the corresponding XML payloads for different IDOC types in different directories.

    Read the article

  • Has this server been compromised?

    - by Griffo
    A friend is running a VPS (CentOS). His business partner was the sysadmin but has left him high and dry to look after the system. So, I've been asked to help out in fixing an apparent spam problem. His IP address got blacklisted for unsolicited mail. I'm not sure where to look for a problem, but I started with netstat to see what open connections were running. It looks to me like he has remote hosts connected to his SMTP server. Here's the output:

    Active Internet connections (w/o servers)
    Proto Recv-Q Send-Q Local Address          Foreign Address              State
    tcp        0      0 78.153.208.195:imap    86-40-60-183-dynamic.:10029  ESTABLISHED
    tcp        0      0 78.153.208.195:imap    86-40-60-183-dynamic.:10010  ESTABLISHED
    tcp        0      1 78.153.208.195:35563   news.avanport.pt:smtp        SYN_SENT
    tcp        0      0 78.153.208.195:35559   vip-us-br-mx.terra.com:smtp  TIME_WAIT
    tcp        0      0 78.153.208.195:35560   vip-us-br-mx.terra.com:smtp  TIME_WAIT
    tcp        1      1 78.153.208.195:imaps   86-40-60-183-dynamic.:11647  CLOSING
    tcp        1      1 78.153.208.195:imaps   86-40-60-183-dynamic.:11645  CLOSING
    tcp        0      0 78.153.208.195:35562   mx.a.locaweb.com.br:smtp     TIME_WAIT
    tcp        0      0 78.153.208.195:35561   mx.a.locaweb.com.br:smtp     TIME_WAIT
    tcp        0      0 78.153.208.195:imap    86-41-8-64-dynamic.b-:49446  ESTABLISHED

    Does this indicate that his server may be acting as an open relay? Mail should only be outgoing from localhost. Apologies for my lack of knowledge, but I don't work on Linux in my day job.

    EDIT: Here's some output from /var/log/maillog which looks like it may be the result of spam. If it appears to be the case to others, where should I look next to investigate a root cause? I put the server IP through www.checkor.com and it came back clean.

    Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.721674 status: local 0/10 remote 9/20
    Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.886182 delivery 74116: deferral: 200.147.36.15_does_not_like_recipient./Remote_host_said:_450_4.7.1_Client_host_rejected:_cannot_find_your_hostname,_[78.153.208.195]/Giving_up_on_200.147.36.15./
    Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.886255 status: local 0/10 remote 8/20
    Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.898266 delivery 74115: deferral: 187.31.0.11_does_not_like_recipient./Remote_host_said:_450_4.7.1_Client_host_rejected:_cannot_find_your_hostname,_[78.153.208.195]/Giving_up_on_187.31.0.11./
    Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.898327 status: local 0/10 remote 7/20
    Jun 29 00:02:14 vps-1001108-595 qmail: 1309302134.137833 delivery 74111: deferral: Sorry,_I_wasn't_able_to_establish_an_SMTP_connection._(#4.4.1)/
    Jun 29 00:02:14 vps-1001108-595 qmail: 1309302134.137914 status: local 0/10 remote 6/20
    Jun 29 00:02:19 vps-1001108-595 qmail: 1309302139.903536 delivery 74000: failure: 209.85.143.27_failed_after_I_sent_the_message./Remote_host_said:_550-5.7.1_[78.153.208.195_______1]_Our_system_has_detected_an_unusual_rate_of/550-5.7.1_unsolicited_mail_originating_from_your_IP_address._To_protect_our/550-5.7.1_users_from_spam,_mail_sent_from_your_IP_address_has_been_blocked./550-5.7.1_Please_visit_http://www.google.com/mail/help/bulk_mail.html_to_review/550_5.7.1_our_Bulk_Email_Senders_Guidelines._e25si1385223wes.137/
    Jun 29 00:02:19 vps-1001108-595 qmail: 1309302139.903606 status: local 0/10 remote 5/20
    Jun 29 00:02:19 vps-1001108-595 qmail-queue-handlers[15501]: Handlers Filter before-queue for qmail started ...

    EDIT #2: Here's the output of netstat -p with the imap and imaps lines removed.
    I also removed my own ssh session.

    Active Internet connections (w/o servers)
    Proto Recv-Q Send-Q Local Address          Foreign Address             State       PID/Program name
    tcp        0      1 78.153.208.195:40076   any-in-2015.1e100.net:smtp  SYN_SENT    24096/qmail-remote.
    tcp        0      1 78.153.208.195:40077   any-in-2015.1e100.net:smtp  SYN_SENT    24097/qmail-remote.
    udp        0      0 78.153.208.195:48515   125.64.11.158:4225          ESTABLISHED 20435/httpd

    Read the article

  • Lookup for data sources in a query

    - by DAXShekhar
    // Builds a pick list of a query's data sources and returns the name selected by the user
    public static str lookupDatasourceOfQuery(Query _query)
    {
        Query                query = _query;
        QueryBuildDataSource qbds;
        int                  dsIterator;
        Map                  map = new Map(Types::String, Types::String);
        ;
        for (dsIterator = 1; dsIterator <= query.dataSourceCount(); dsIterator++)
        {
            qbds = query.dataSourceNo(dsIterator);
            map.insert(qbds.name(), qbds.name());
        }
        return pickList(map, "Data source", "Data sources");
    }
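    A quick usage sketch (the hosting class name QueryUtil is hypothetical - substitute whichever class the method above lives in):

    static void lookupDatasourceDemo(Args _args)
    {
        Query query = new Query();
        ;
        // Build a query with a couple of data sources to choose from
        query.addDataSource(tableNum(CustTable));
        query.addDataSource(tableNum(SalesTable));

        // Shows the pick list and prints the selected data source name
        info(QueryUtil::lookupDatasourceOfQuery(query));
    }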

    Read the article

  • T-SQL Tuesday #006 Round-up!

    - by Mike C
    T-SQL Tuesday this month was all about LOB (large object) data. Thanks to all the great bloggers out there who participated! The participants this month posted some very impressive articles with information running the gamut from Reporting Services to SQL Server spatial data types to BLOB-handling in SSIS. One thing I noticed immediately was a trend toward articles about spatial data (SQL Server 2008 Geography and Geometry data types, a very fun topic to explore if you haven’t played around with...(read more)

    Read the article

  • WebCenter Customer Spotlight: SICE

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary

    Sociedad Ibérica de Construcciones Eléctricas, S.A. (SICE) is a Spanish company that specializes in engineering and technology integration for intelligent transport systems and environmental control systems. They had a large quantity of engineering and environmental planning documents which they wanted to manage, classify and integrate with their existing enterprise resource planning (ERP) system. SICE adapted Oracle WebCenter Content to classify and manage more than 30 different document types, defined a security plan to ensure the integrity and recovery of the various document types, and integrated the document management solution with SICE's third-party enterprise resource planning (ERP) system. SICE accelerated time to market for all projects, minimized the time required to identify and recover documents, and achieved greater efficiency in all operations.

    Company Overview

    Created in 1921, Sociedad Ibérica de Construcciones Eléctricas, S.A. (SICE) currently specializes in engineering and technology integration for intelligent transport systems and environmental control systems. It has more than 2,500 employees, with operations in Spain and various locations in Latin America, the United States, Africa, and Australia.

    Business Challenges

    They had a large quantity of engineering and environmental planning documents generated in research and projects which they wanted to manage, classify and integrate with their existing enterprise resource planning (ERP) system.

    Solution Deployed

    SICE worked with the Oracle Partner ABAST Solutions to evaluate and choose the best document management system, ultimately selecting Oracle WebCenter Content over other options including Documentum, SharePoint, OpenText, and Alfresco. They adapted Oracle WebCenter Content to classify and manage more than 30 different document types, defined a security plan to ensure the integrity and recovery of the various document types, and integrated the document management solution with SICE's third-party enterprise resource planning (ERP) system to accelerate incorporation with the documentation system and ensure the integrity of ERP system data.

    Business Results

    SICE accelerated time to market for all projects by releasing reports and information that support and validate engineering projects, and stored all documents in a single repository with organization-wide accessibility, minimizing the time required to identify and recover documents needed for reports to initiate and execute engineering and building projects. Overall, they achieved greater efficiency in all operations, including technical and impact report development and construction documentation management.

    "The correct and efficient management of information is vital to our environmental management activity. Oracle WebCenter Content serves as a basis for knowledge management practices, with the objective of adding greater value to everything that we do." Manuel Delgado, IT Project Engineering, Sociedad Ibérica de Construcciones Eléctricas, S.A.

    Additional Information

    SICE Customer Snapshot
    Oracle WebCenter Content

    Read the article

  • Lookup table display methods

    - by DAXShekhar
    // Builds a pick list of a table's display methods and returns the name selected by the user
    public static client str lookupTableDisplayMethod(str _tableId)
    {
        SysDictTable   dictTable = new SysDictTable(str2int(_tableId));
        ListEnumerator enum;
        Map            map = new Map(Types::String, Types::String);
        ;
        if (dictTable &&
            dictTable.rights() > AccessType::NoAccess)
        {
            enum = dictTable.getListOfDisplayMethods().getEnumerator();
            while (enum.moveNext())
            {
                map.insert(enum.current(), enum.current());
            }
        }
        return pickList(map, "Display method", tableid2pname(_tableId));
    }

    Read the article

  • Animation in Silverlight

    In this chapter you will be learning the basic fundamental concepts of Animations in Silverlight Application, which includes Animation Types, namespace details, classes, objects used, implementation of different types of animations with XAML and with C# code and some more interesting samples for each animation.

    Read the article


  • How to Convert HTML to PDF Using PHP?

    - by Meng Longlong
    PDF, or Portable Document Format, is a popular file type that is often used for online documents. It's great for distributing downloadable written content, and is frequently used by governments and businesses alike. Because it's a format that's familiar to all, many applications allow the user to convert other document types to the PDF format. PHP is one programming language that can be used to generate PDF files, typically through a PDF library or extension. PHP scripts can be used to transform file types such as HTML into PDF files.

    Read the article

  • Shakespeare and storing Unicode characters

    - by John Paul Cook
    This post is about the political issues involved with using multiple languages in a global organization and how to troubleshoot the technical details. The CHAR and VARCHAR data types are NOT suitable for global data. Some people still cling to CHAR and VARCHAR justifying their use by truthfully saying that they only take up half the space of NCHAR and NVARCHAR data types. But you’ll never be able to store Chinese, Korean, Greek, Japanese, Arabic, or many other languages unless you use NCHAR and NVARCHAR...(read more)

    Read the article

  • Why is there no power operator in Java / C++?

    - by RanZilber
    While there is such an operator - ** - in Python, I was wondering why Java and C++ don't have one too. It is easy to make one for classes you define in C++ with operator overloading (and I believe such a thing is also possible in Java), but when talking about primitive types such as int, double and so on, you have to use a library function like Math.pow (and usually have to cast both arguments to double). So - why not define such an operator for primitive types?

    Read the article

  • Adding multiple data importers support to web applications

    - by DigiMortal
    I'm building a web application for a customer, and there is a requirement that users must be able to import data in different formats. Today we support XLSX and ODF as import formats, and some other formats are waiting. I wanted to be able to add new importers on the fly so I don't have to redeploy the web application when I add a new importer or change an existing one. In this posting I will show you how to build generic importer support into your web application.

    Importer interface

    All importers we use must have something in common so we can easily detect them. To keep things simple I will use an interface here.

    public interface IMyImporter
    {
        string[] SupportedFileExtensions { get; }
        ImportResult Import(Stream fileStream, string fileExtension);
    }

    Our interface has the following members:
    SupportedFileExtensions – string array of file extensions that the importer supports. This property helps us find out what import formats are available and which importer to use with a given format.
    Import – method that does the actual importing work. Besides the file, given as a stream, we also pass the file extension so the importer can decide how to handle the file.

    It is enough to get started. When building real importers I am sure you will switch over to an abstract base class.

    Importer class

    Here is a sample importer that imports data from Excel and Word documents. The importer class with no implementation details looks like this:

    public class MyOpenXmlImporter : IMyImporter
    {
        public string[] SupportedFileExtensions
        {
            get { return new[] { "xlsx", "docx" }; }
        }

        public ImportResult Import(Stream fileStream, string extension)
        {
            // ...
        }
    }

    Finding supported import formats in the web application

    Now we have importers created and it's time to add them to the web application. Usually we have one page or ASP.NET MVC controller where we need importers. To this page or controller we add the following method that uses reflection to find all classes that implement our IMyImporter interface.

    private static string[] GetImporterFileExtensions()
    {
        var types = from a in AppDomain.CurrentDomain.GetAssemblies()
                    from t in a.GetTypes()
                    where t.GetInterfaces().Contains(typeof(IMyImporter))
                    select t;

        var extensions = new Collection<string>();
        foreach (var type in types)
        {
            var instance = (IMyImporter)type.InvokeMember(null,
                           BindingFlags.CreateInstance, null, null, null);

            foreach (var extension in instance.SupportedFileExtensions)
            {
                if (extensions.Contains(extension))
                    continue;

                extensions.Add(extension);
            }
        }

        return extensions.ToArray();
    }

    This code doesn't look nice and is far from optimal, but it works for us now. It is possible to improve the performance of the web application if we cache the extensions and their corresponding types in some static dictionary. We have to fill it only once, because our application is restarted when something changes in the bin folder.

    Finding an importer by extension

    When a user uploads a file, we need to detect the extension of the file and find the importer that supports the given extension. We add another method to our page or controller that uses reflection to return an importer instance, or null if the extension is not supported.
    private static IMyImporter GetImporterForExtension(string extensionToFind)
    {
        var types = from a in AppDomain.CurrentDomain.GetAssemblies()
                    from t in a.GetTypes()
                    where t.GetInterfaces().Contains(typeof(IMyImporter))
                    select t;

        foreach (var type in types)
        {
            var instance = (IMyImporter)type.InvokeMember(null,
                           BindingFlags.CreateInstance, null, null, null);

            if (instance.SupportedFileExtensions.Contains(extensionToFind))
            {
                return instance;
            }
        }

        return null;
    }

    Here is an example ASP.NET MVC controller action that accepts an uploaded file, finds an importer that can handle the file, and imports the data. Again, this is sample code I kept minimal to better illustrate how things work.

    public ActionResult Import(MyImporterModel model)
    {
        var file = Request.Files[0];
        var extension = Path.GetExtension(file.FileName).ToLower();
        var importer = GetImporterForExtension(extension.Substring(1));
        var result = importer.Import(file.InputStream, extension);

        if (result.Errors.Count > 0)
        {
            foreach (var error in result.Errors)
                ModelState.AddModelError("file", error);

            return Import();
        }

        return RedirectToAction("Index");
    }

    Conclusion

    That's it. Using a couple of ugly methods and one simple interface, we were able to add importer support to our web application. The example code here is not perfect, but it works. It is possible to cache the mappings between file extensions and importer types in a static variable, because changing these mappings means that something has changed in the bin folder of the web application, and the web application is restarted in this case anyway.
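    The caching idea mentioned in the conclusion can be sketched roughly as follows (a minimal sketch, assuming the same IMyImporter interface and page/controller class as above; the member names are invented for illustration). The reflection scan runs once per application lifetime, which is safe here because the application restarts whenever the bin folder changes:

    private static readonly Dictionary<string, Type> ImporterTypesByExtension = BuildImporterMap();

    private static Dictionary<string, Type> BuildImporterMap()
    {
        // Scan loaded assemblies once and remember which importer type handles which extension
        var map = new Dictionary<string, Type>(StringComparer.OrdinalIgnoreCase);

        var types = from a in AppDomain.CurrentDomain.GetAssemblies()
                    from t in a.GetTypes()
                    where t.IsClass && !t.IsAbstract
                          && t.GetInterfaces().Contains(typeof(IMyImporter))
                    select t;

        foreach (var type in types)
        {
            var instance = (IMyImporter)Activator.CreateInstance(type);
            foreach (var extension in instance.SupportedFileExtensions)
                map[extension] = type; // last registration wins on duplicates
        }

        return map;
    }

    private static IMyImporter GetImporterForExtension(string extensionToFind)
    {
        // Dictionary lookup instead of a full reflection scan per request
        Type type;
        if (!ImporterTypesByExtension.TryGetValue(extensionToFind, out type))
            return null;

        return (IMyImporter)Activator.CreateInstance(type);
    }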

    Read the article

  • Comparison Of The Best Style Sheet Languages

    CSS, Cascading Style Sheets, is one of the most popular types of style sheet languages used by many web developers today. Part of what made it popular is its flexibility in almost all types of browse... [Author: Margarette Mcbride - Web Design and Development - May 05, 2010]

    Read the article

  • How should I refactor switch statements like this (Switching on type) to be more OO?

    - by Taytay
    I'm seeing some code like this in our code base, and want to refactor it (TypeScript pseudocode follows):

    class EntityManager {
        private findEntityForServerObject(entityType: string, serverObject: any): IEntity {
            var existingEntity: IEntity = null;
            switch (entityType) {
                case Types.UserSetting:
                    existingEntity = this.getUserSettingByUserIdAndSettingName(serverObject.user_id, serverObject.setting_name);
                    break;
                case Types.Bar:
                    existingEntity = this.getBarByUserIdAndId(serverObject.user_id, serverObject.id);
                    break;
                // Lots more case statements here...
            }
            return existingEntity;
        }
    }

    The downsides of switching on type are self-explanatory. Normally, when switching behavior based on type, I try to push the behavior into subclasses so that I can reduce this to a single method call, and let polymorphism take care of the rest. However, the following two things are giving me pause:

    1) I don't want to couple the serverObject with the class that is storing all of these objects. It doesn't know where to look for entities of a certain type. And unfortunately, the identity of a type of ServerObject varies with the type of ServerObject. (So sometimes it's just an ID, other times it's a combination of an id and a uniquely identifying string, etc.) And this behavior doesn't belong down there on those subclasses. It is the responsibility of the EntityManager and its delegates.

    2) In this case, I can't modify the ServerObject classes since they're plain old data objects.

    It should be mentioned that I've got other instances of the above method that take a parameter like "IEntity" and proceed to do almost the same thing (but slightly modify the name of the methods they're calling to get the identity of the entity). So, we might have:

    case Types.Bar:
        existingEntity = this.getBarByUserIdAndId(entity.getUserId(), entity.getId());
        break;

    So in that case, I can change the entity interface and subclasses, but this isn't behavior that belongs in that class. So, I think that points me to some sort of map. So eventually I will call:

    private findEntityForServerObject(entityType: string, serverObject: any): IEntity {
        return aMapOfSomeSort[entityType].findByServerObject(serverObject);
    }

    private findEntityForEntity(someEntity: IEntity): IEntity {
        return aMapOfSomeSort[someEntity.entityType].findByEntity(someEntity);
    }

    Which means I need to register some sort of strategy classes/functions at runtime with this map. And I darn well better remember to register one for each of my types, or I'll get a runtime exception. Is there a better way to refactor this? I feel like I'm missing something really obvious here.
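    One way to build that "aMapOfSomeSort" is a small registry keyed by entity type, populated once at startup. This is a minimal sketch under the question's own types; the EntityFinderRegistry name and the finder callbacks are invented for illustration:

    type ServerObjectFinder = (serverObject: any) => IEntity;

    class EntityFinderRegistry {
        private finders: { [entityType: string]: ServerObjectFinder } = {};

        register(entityType: string, finder: ServerObjectFinder): void {
            this.finders[entityType] = finder;
        }

        find(entityType: string, serverObject: any): IEntity {
            var finder = this.finders[entityType];
            if (!finder) {
                // Fail fast with a descriptive error instead of a silent null
                throw new Error("No finder registered for entity type: " + entityType);
            }
            return finder(serverObject);
        }
    }

    // Registration, e.g. in EntityManager's constructor:
    // this.registry.register(Types.UserSetting,
    //     so => this.getUserSettingByUserIdAndSettingName(so.user_id, so.setting_name));
    // this.registry.register(Types.Bar,
    //     so => this.getBarByUserIdAndId(so.user_id, so.id));
    //
    // findEntityForServerObject then collapses to:
    // return this.registry.find(entityType, serverObject);

    This doesn't remove the need to remember each registration, but it keeps the per-type knowledge in one place and turns a missing case into an immediate, descriptive error rather than a silent null.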

    Read the article

  • ASP.NET MVC OutputCache with POST Controller Actions

    - by Maxim Z.
    I'm fairly new to using the OutputCache attribute in ASP.NET MVC.

    Static Pages

    I've enabled it on static pages on my site with code such as the following:

    [OutputCache(Duration = 7200, VaryByParam = "None")]
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            //...

    If I understand correctly, I made the whole controller cache for 7200 seconds (2 hours).

    Dynamic Pages

    However, how does it work with dynamic pages? By dynamic, I mean where the user has to submit a form. As an example, I have a page with an email form. Here's what that code looks like:

    public class ContactController : Controller
    {
        //
        // GET: /Contact/
        public ActionResult Index()
        {
            return RedirectToAction("SubmitEmail");
        }

        public ActionResult SubmitEmail()
        {
            //In view for CAPTCHA: <%= Html.GenerateCaptcha() %>
            return View();
        }

        [CaptchaValidator]
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult SubmitEmail(FormCollection formValues, bool captchaValid)
        {
            //Validate form fields, send email if everything's good...
            if (isError)
            {
                return View();
            }
            else
            {
                return RedirectToAction("Index", "Home");
            }
        }

        public void SendEmail(string title, string name, string email, string message)
        {
            //Send an email...
        }
    }

    What would happen if I applied OutputCache to the whole controller here? Would the HTTP POST form submission work? Also, my form has a CAPTCHA; would that change anything in the equation? In other words, what's the best way to approach caching with dynamic pages? Thanks in advance.

    Read the article

  • iPhone and Core Data: how to retain user-entered data between updates?

    - by Shaggy Frog
    Consider an iPhone application that is a catalogue of animals. The application should allow the user to add custom information for each animal -- let's say a rating (on a scale of 1 to 5), as well as some notes they can enter in about the animal. However, the user won't be able to modify the animal data itself. Assume that when the application gets updated, it should be easy for the (static) catalogue part to change, but we'd like the (dynamic) custom user information part to be retained between updates, so the user doesn't lose any of their custom information. We'd probably want to use Core Data to build this app. Let's also say that we have a previous process already in place to read in animal data to pre-populate the backing (SQLite) store that Core Data uses. We can embed this database file into the application bundle itself, since it doesn't get modified. When a user downloads an update to the application, the new version will include the latest (static) animal catalogue database, so we don't ever have to worry about it being out of date. But, now the tricky part: how do we store the (dynamic) user custom data in a sound manner? My first thought is that the (dynamic) database should be stored in the Documents directory for the app, so application updates don't clobber the existing data. Am I correct? My second thought is that since the (dynamic) user custom data database is not in the same store as the (static) animal catalogue, we can't naively make a relationship between the Rating and the Notes entities (in one database) and the Animal entity (in the other database). In this case, I would imagine one solution would be to have an "animalName" string property in the Rating/Notes entity, and match it up at runtime. Is this the best way to do it, or is there a way to "sync" two different databases in Core Data?

    Read the article

  • Differences Between NHibernate and Entity Framework

    - by Ricardo Peres
    Introduction

    NHibernate and Entity Framework are two of the most popular O/RM frameworks in the .NET world. Although they share some functionality, there are some aspects on which they are quite different. This post will describe these differences and will hopefully help you get started with the one you know less. Mind you, this is a personal selection of features to compare; it is by no means an exhaustive list.

    History

    First, a bit of history. NHibernate is an open-source project that was first ported from Java's venerable Hibernate framework, one of the first O/RM frameworks, but nowadays it is not tied to it; for example, it has .NET-specific features and has evolved in different ways from those of its Java counterpart. The current version is 3.3, with 3.4 on the horizon. It currently targets .NET 3.5, but can be used as well in .NET 4; it just makes no use of any of its specific functionality. You can find its home page at NHForge. Entity Framework 1 came out with .NET 3.5 and is now on its second major version, despite being version 4. Code First sits on top of it but came separately and will also continue to be released out of line with major .NET distributions. It is currently on version 4.3.1, and version 5 will be released together with .NET Framework 4.5. All versions target the current version of .NET at the time of their release. Its home is located at MSDN.

    Architecture

    In NHibernate, there is a separation between the Unit of Work and the configuration and model instances. You start off by creating a Configuration object, where you specify all global NHibernate settings such as the database and dialect to use, the batch sizes, the mappings, etc., then you build an ISessionFactory from it. The ISessionFactory holds model and metadata that is tied to a particular database and to the settings that came from the Configuration object, and there will typically be only one instance of each in a process. Finally, you create instances of ISession from the ISessionFactory, which is the NHibernate representation of the Unit of Work and Identity Map. This is a lightweight object; it basically opens and closes a database connection as required and keeps track of the entities associated with it. ISession objects are cheap to create and dispose, because all of the model complexity is stored in the ISessionFactory and Configuration objects. As for Entity Framework, the ObjectContext/DbContext holds the configuration, model and acts as the Unit of Work, holding references to all of the known entity instances. This class is therefore not as lightweight as its NHibernate counterpart, and it is not uncommon to see examples where an instance is cached in a field.

    Mappings

    Both NHibernate and Entity Framework (Code First) support the use of POCOs to represent entities; no base classes are required (or even possible, in the case of NHibernate). As for mapping to and from the database, NHibernate supports three types of mappings: XML-based, which have the advantage of not tying the entity classes to a particular O/RM; the XML files can be deployed as files on the file system or as embedded resources in an assembly; Attribute-based, for keeping both the entities and database details in the same place, at the expense of polluting the entity classes with NHibernate-specific attributes; Strongly-typed code-based, which allows dynamic creation of the model and strongly types it, so that if, for example, a property name changes, the mapping will also be updated.
    Entity Framework can use: Attribute-based (although attributes cannot express all of the available possibilities - for example, cascading); Strongly-typed code mappings.

    Database Support

    With NHibernate you can use mostly any database you want, including: SQL Server; SQL Server Compact; SQL Server Azure; Oracle; DB2; PostgreSQL; MySQL; Sybase Adaptive Server/SQL Anywhere; Firebird; SQLite; Informix; any through OLE DB; any through ODBC. Out of the box, Entity Framework only supports SQL Server, but a number of providers exist, both free and commercial, for some of the most used databases, such as Oracle and MySQL. See a list here.

    Inheritance Strategies

    Both NHibernate and Entity Framework support the three canonical inheritance strategies: Table Per Type Hierarchy (Single Table Inheritance), Table Per Type (Class Table Inheritance) and Table Per Concrete Type (Concrete Table Inheritance).

    Associations

    Regarding associations, both support one to one, one to many and many to many. However, NHibernate offers far more collection types: Bags of entities or values: unordered, possibly with duplicates; Lists of entities or values: ordered, indexed by a number column; Maps of entities or values: indexed by either an entity or any value; Sets of entities or values: unordered, no duplicates; Arrays of entities or values: indexed, immutable.

    Querying

    NHibernate exposes several querying APIs: LINQ is probably the most used nowadays, and really does not need to be introduced; Hibernate Query Language (HQL) is a database-agnostic, object-oriented, SQL-like language that has existed since NHibernate's creation and still offers the most advanced querying possibilities; it is well suited for dynamic queries, even if using string concatenation; the Criteria API is an implementation of the Query Object pattern, where you create a semi-abstract conceptual representation of the query you wish to execute by means of a class model; it is also a good choice for dynamic querying; Query Over offers a similar API to Criteria, but using strongly-typed LINQ expressions instead of strings; for this reason, although more refactoring-friendly than Criteria, it is also less suited for dynamic queries; SQL, including stored procedures, can also be used; integration with the Lucene.NET indexer is available. As for Entity Framework: LINQ to Entities is fully supported, and its implementation is considered very complete; it is the API of choice for most developers; Entity SQL, HQL's counterpart, is also an object-oriented, database-independent querying language that can be used for dynamic queries; SQL, of course, is also supported.

    Caching

    Both NHibernate and Entity Framework, of course, feature a first-level cache. NHibernate also supports a second-level cache that can be used among multiple ISessionFactorys, even in different processes/machines: Hashtable (in-memory); SysCache (uses ASP.NET as the cache provider); SysCache2 (same as above but with support for SQL Server SQL Dependencies); Prevalence; SharedCache; Memcached; Redis; NCache; AppFabric Caching. Out of the box, Entity Framework does not have any second-level cache mechanism; however, there are some public samples that show how we can add this.
    ID Generators

    NHibernate supports different ID generation strategies, coming from the database and otherwise: Identity (for SQL Server, MySQL, and databases which support identity columns); Sequence (for Oracle, PostgreSQL, and others which support sequences); Trigger-based; HiLo; Sequence HiLo (for databases that support sequences); several GUID flavors, both in GUID as well as in string format; Increment (for single-user uses); Assigned (you must know what you're doing); Sequence-style (either uses an actual sequence or a single-column table); Table of ids; Pooled (similar to HiLo but stores high values in a table); Native (uses whatever mechanism the current database supports, identity or sequence). Entity Framework only supports: Identity generation; GUIDs; Assigned values.

    Properties

    NHibernate supports properties of entity types (one to one or many to one), collections (one to many or many to many), as well as scalars and enumerations. It offers a mechanism for having complex property types generated from the database, which even includes support for querying. It also supports properties originated from SQL formulas. Entity Framework only supports scalars, entity types and collections. Enumeration support will come in the next version.

    Events and Interception

    NHibernate has a very rich event model that exposes more than 20 events, either for synchronous pre-execution or asynchronous post-execution, including: Pre/Post-Load; Pre/Post-Delete; Pre/Post-Insert; Pre/Post-Update; Pre/Post-Flush. It also features interception of class instancing and SQL generation. As for Entity Framework, only two events exist: ObjectMaterialized (after loading an entity from the database); SavingChanges (before saving changes, which includes deleting, inserting and updating).

    Tracking Changes

    For NHibernate as well as Entity Framework, all changes are tracked by their respective Unit of Work implementation. Entities can be attached and detached to it; Entity Framework does, however, also support self-tracking entities.

    Optimistic Concurrency Control

    NHibernate supports all of the imaginable scenarios: SQL Server's ROWVERSION; Oracle's ORA_ROWSCN; a column containing a date and time; a column containing a version number; all/dirty columns comparison. Entity Framework is more focused on SQL Server, so it only supports: SQL Server's ROWVERSION; comparing all/some columns.

    Batching

    NHibernate has full support for insertion batching, but only if the ID generator in use is not database-based (for example, it cannot be used with Identity), whereas Entity Framework has no batching at all.

    Cascading

    Both support cascading for collections and associations: when an entity is deleted, its conceptual children are also deleted. NHibernate also offers the possibility to set the foreign key column on children to NULL instead of removing them.

    Flushing Changes

    NHibernate's ISession has a FlushMode property that can have the following values: Auto: changes are sent to the database when necessary, for example, if there are dirty instances of an entity type and a query is performed against this entity type, or if the ISession is being disposed; Commit: changes are sent when committing the current transaction; Never: changes are only sent when explicitly calling Flush(). As for Entity Framework, changes have to be explicitly sent through a call to AcceptAllChanges()/SaveChanges().
    Lazy Loading

    NHibernate supports lazy loading for: associated entities (one to one, many to one); collections (one to many, many to many); scalar properties (think of BLOBs or CLOBs). Entity Framework only supports lazy loading for: associated entities; collections.

    Generating and Updating the Database

    Both NHibernate and Entity Framework Code First (with the Migrations API) allow creating the database model from the mapping and updating it if the mapping changes.

    Extensibility

    As you can guess, NHibernate is far more extensible than Entity Framework. Basically, everything can be extended, from ID generation, to LINQ to SQL transformation, HQL native SQL support, custom column types, custom association collections, SQL generation, supported databases, etc. With Entity Framework your options are more limited, at least because practically no information exists as to what can be extended/changed. It features a provider model that can be extended to support any database.

    Integration With Other Microsoft APIs and Tools

    When it comes to integration with Microsoft technologies, it will come as no surprise that Entity Framework offers the best support. For example, the following technologies are fully supported: ASP.NET (through the EntityDataSource); ASP.NET Dynamic Data; WCF Data Services; WCF RIA Services; Visual Studio (through the integrated designer).

    Documentation

    This is another point where Entity Framework is superior: NHibernate lacks, for starters, an up-to-date API reference synchronized with its current version. It does have a community mailing list, blogs and wikis, although these are not much used. Entity Framework has a number of resources on MSDN and, of course, several forums and discussion groups exist.

    Conclusion

    Like I said, this is a personal list. It may come as a surprise to some that Entity Framework is so far behind NHibernate in so many aspects, but it is true that NHibernate is much older and, due to its open-source nature, is not tied to product-specific timeframes and can thus evolve much more rapidly. I do like both, and I choose whichever is best for the job at hand. I am looking forward to the changes in EF5, which will add significant value to an already interesting product. So, what do you think? Did I forget anything important, or is there anything else worth talking about? Looking forward to your comments!

    Read the article

  • Overview of SOA Diagnostics in 11.1.1.6

    - by ShawnBailey
    What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g, but it can be confusing as to which tool is appropriate for a particular situation and what their relationships are. This blog post will introduce the various tools and attempt to clarify what each is for and how they are related. Let's first list the tools we'll be addressing:

    RDA: Remote Diagnostic Agent
    DFW: Diagnostic Framework
    Selective Tracing
    DMS: Dynamic Monitoring Service
    ODL: Oracle Diagnostic Logging
    ADR: Automatic Diagnostics Repository
    ADRCI: Automatic Diagnostics Repository Command Interpreter
    WLDF: WebLogic Diagnostic Framework

    This overview is not meant to be a comprehensive guide on using all of these tools; however, extensive reference materials are included that will provide many more details on their execution. Another point to note is that all of these tools are applicable for Fusion Middleware as a whole, but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool or 'WLST' interface. WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post, but you can find general information here. There are more specific resources in the below sections. In this post, when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control.

    RDA (Remote Diagnostic Agent)

    RDA is a standalone tool that is used to collect both static configuration and dynamic runtime information from the SOA environment. RDA is generally run manually from the command line against a domain or single server. When opening a new Service Request, including an RDA collection can dramatically decrease the back and forth required to collect logs and configuration information for Support. After installing RDA, you configure it to use the SOA Suite module as described in the referenced resources. The SOA module includes the Oracle WebLogic Server (WLS) module by default in order to include all of the relevant information for the environment. In addition to this basic configuration, there is also an advanced mode where you can set the number of thread dumps for the collections, log files, Incidents, etc.

    When would you use it? When creating a Service Request or otherwise working with Oracle resources on an issue, capturing environment snapshots to baseline your configuration, or diagnosing an issue on your own.

    How is it related to the other tools? RDA is related to DFW in that it collects the last 10 Incidents from the server by default. In a similar manner, RDA is related to ODL through its collection of the diagnostic logs, and these may contain information from Selective Tracing sessions.
Examples of what it currently collects (for details please see the links in the Resources section):

Diagnostic Logs (ODL)
Diagnostic Framework Incidents (DFW)
SOA MDS Deployment Descriptors
SOA Repository Summary Statistics
Thread Dumps
Complete Domain Configuration

RDA Resources:

Webcast Recording: Using RDA with Oracle SOA Suite 11g
Blog Post: Diagnose SOA Suite 11g Issues Using RDA
Download RDA
How to Collect Analysis Information Using RDA for Oracle SOA Suite 11g Products [ID 1350313.1]
How to Collect Analysis Information Using RDA for Oracle SOA Suite and BPEL Process Manager 11g [ID 1352181.1]
Getting Started With Remote Diagnostic Agent: Case Study - Oracle WebLogic Server (Video) [ID 1262157.1]

top

DFW (Diagnostic Framework)

DFW provides the ability to collect specific information for a particular problem when that problem occurs. DFW is included with your SOA Suite installation and deployed to the domain. Let's define the components of DFW.

Diagnostic Dumps: Specific diagnostic collections that are defined at either the 'system' or product level. Examples would be diagnostic logs or thread dumps.
Incident: A collection of Diagnostic Dumps associated with a particular problem.
Log Conditions: An Oracle Diagnostic Logging event that DFW is configured to listen for. If the event is identified then an Incident will be created.
WLDF Watch: The WebLogic Diagnostic Framework or 'WLDF' is not a component of DFW; however, it can be a source of DFW Incident creation through the use of a 'Watch'.
WLDF Notification: A Notification is a component of WLDF and is the link between the Watch and DFW. You can configure multiple Notification types in WLDF and associate them with your Watches. 'FMWDFW-notification' is available to you out of the box to allow for DFW notification of Watch execution.
Rule: Defines a WLDF Watch or Log Condition for which we want to associate a set of Diagnostic Dumps. When triggered, the specified dumps will be collected and added to the Incident.
Rule Action: Defines the specific Diagnostic Dumps to collect for a particular rule.
ADR: Automatic Diagnostics Repository; defined for every server in a domain. This is where Incidents are stored.

Now let's walk through a simple flow:

1. Oracle Web Services error message OWS-04086 (SOAP Fault) is generated on managed server 1
2. The DFW Log Condition for OWS-04086 evaluates to TRUE
3. DFW creates a new Incident in the ADR for managed server 1
4. DFW executes the specified Diagnostic Dumps and adds the output to the Incident

In this case we'll grab the diagnostic log and thread dump. We might also want to collect the WSDL binding information and SOA audit trail.

When would you use it? When you want to automatically collect Diagnostic Dumps at a particular time using a trigger, or when you want to manually collect the information. In either case it can be readily uploaded to Oracle Support through the Service Request.

How is it related to the other tools? DFW generates Incidents, which are collections of Diagnostic Dumps. One of the system level Diagnostic Dumps collects the current server diagnostic log, which is generated by ODL and can contain information from Selective Tracing sessions. Incidents are included in RDA collections by default, and ADRCI is a tool that is used to package an Incident for upload to Oracle Support. In addition, both ODL and DMS can be used to trigger Incident creation through DFW. The conditions and rules for generating Incidents can become quite complicated; the below resources go into more detail.
A simpler approach to leveraging at least the Diagnostic Dumps is through WLST (WebLogic Scripting Tool), where there are commands to do the following:

Create an Incident
Execute a single Diagnostic Dump
Describe a Diagnostic Dump
List the available Diagnostic Dumps

The WLST option offers greater control in what is generated and when. It can be a great help when collecting information for Support. There are overlaps with RDA; however, DFW is geared towards collecting specific runtime information when an issue occurs, while existing Incidents are collected by RDA.

There are 3 WLDF Watches configured by default in a SOA Suite 11g domain: Stuck Threads, Unchecked Exception and Deadlock. These Watches are enabled by default and will generate Incidents in ADR. They are configured to reset automatically after 30 seconds, so they have the potential to create multiple Incidents if these conditions are consistent. The Incidents generated by these Watches will only contain System level Diagnostic Dumps. These same System level Diagnostic Dumps will be included in any application scoped Incident as well.

Starting in 11.1.1.6, SOA Suite includes its own set of application scoped Diagnostic Dumps that can be executed from WLST or through a WLDF Watch or Log Condition. These Diagnostic Dumps can be added to an Incident such as in the earlier example using the error code OWS-04086.

soa.config: MDS configuration files and deployed-composites.xml
soa.composite: All artifacts related to the deployed composite
soa.wsdl: Summary of endpoints configured for the composite
soa.edn: EDN configuration summary if applicable
soa.db: Summary DB information for the SOA repository
soa.env: Coherence cluster configuration summary
soa.composite.trail: Partial audit trail information for the running composite

The current release of RDA has the option to collect the soa.wsdl and soa.composite Diagnostic Dumps. More Diagnostic Dumps for SOA Suite products are planned for future releases along with enhancements to DFW itself. A short WLST sketch of the dump commands follows the resource list below.

DFW Resources:

Webcast Recording: SOA Diagnostics Sessions: Diagnostic Framework
Diagnostic Framework Documentation
DFW WLST Command Reference
Documentation for SOA Diagnostic Dumps in 11.1.1.6

top
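As a minimal illustration of those four commands, here is a hedged WLST (Jython) sketch. The connection URL, credentials, output path and incident description are hypothetical, and argument names can vary between Fusion Middleware versions, so treat the DFW WLST Command Reference above as authoritative:

connect('weblogic', 'welcome1', 't3://adminhost:7001')   # hypothetical credentials and URL

listDumps()                               # list every Diagnostic Dump registered on this server
describeDump(name='soa.config')           # show what the soa.config dump collects
executeDump(name='soa.config', outputFile='/tmp/soa_config_dump.txt')   # run one dump to a file

# Manually create an Incident, here tied to the SOAP fault from the earlier flow
createIncident(messageId='OWS-04086', appName='soa-infra',
               description='Manual incident for SOAP fault diagnosis')

disconnect()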
Selective Tracing

Selective Tracing is a facility available starting in version 11.1.1.4 that allows you to increase the logging level for specific loggers and for a specific context. What this means is that you have greater capability to collect needed diagnostic log information in a production environment with reduced overhead. For example, a Selective Tracing session can be executed that only increases the log level for one composite, only one logger, limited to one server in the cluster, and for a preset period of time. In an environment where dozens of composites are deployed this can dramatically reduce the volume and overhead of the logging without sacrificing relevance. Selective Tracing can be administered either from Enterprise Manager or through WLST; WLST provides a bit more flexibility in terms of exactly where the tracing is run (a sketch follows the resource list below).

When would you use it? When there is an issue in production or another environment that lends itself to filtering by an available context criterion, and increasing the log level globally would result in too much overhead or irrelevant information. The information is written to the server diagnostic log and is exportable from Enterprise Manager.

How is it related to the other tools? Selective Tracing output is written to the server diagnostic log. This log can be collected by a system level Diagnostic Dump using DFW or through a default RDA collection. Selective Tracing also heavily leverages ODL fields to determine what to trace and to tag information that is part of a particular tracing session.

Available Context Criteria:

Application Name
Client Address
Client Host
Composite Name
User Name
Web Service Name
Web Service Port

Selective Tracing Resources:

Webcast Recording: SOA Diagnostics Session: Using Selective Tracing to Diagnose SOA Suite Issues
How to Use Selective Tracing for SOA [ID 1367174.1]
Selective Tracing WLST Reference

top
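As with DFW, a tracing session can be driven from WLST. The following hedged sketch starts a session filtered by user name; the logger pattern, user name and level are illustrative placeholders, and the exact command arguments should be checked against the Selective Tracing WLST Reference above:

connect('weblogic', 'welcome1', 't3://adminhost:7001')   # hypothetical credentials and URL

listTracingLoggers(pattern='oracle.soa.*')               # which loggers support tracing
configureTracingLoggers(pattern='oracle.soa.bpel.*', action='enable')

# Trace only requests made by one user, at FINE level
startTracing(user='soatester', level='FINE', desc='BPEL fault diagnosis')
listActiveTraces()                                       # confirm the session is running

# ... reproduce the issue, then end the session
stopTracing(user='soatester')

disconnect()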
DMS (Dynamic Monitoring Service)

DMS exposes runtime information for monitoring. This information can be monitored in two ways:

Through the DMS servlet
As exposed MBeans

The servlet is deployed by default and can be accessed through http://<host>:<port>/dms/Spy (use administrative credentials to access). The landing page of the servlet shows identical columns of what are known as Noun Types. If you select a Noun Type you will see a table in the right frame that shows the attributes (Sensors) for the Noun Type and the available instances. SOA Suite has several exposed Noun Types that are available for viewing through the Spy servlet. Screenshots of the Spy servlet are available in the Knowledge Base article How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS).

Every Noun instance in the runtime is exposed as an MBean instance. As such they are generally available through an MBean browser and available for monitoring through WLDF. You can configure a WLDF Watch to monitor a particular attribute and fire a notification when the threshold is exceeded. A WLDF Watch can use the out of the box DFW notification type to notify DFW to create an Incident. The same metric tables can also be read from WLST, as sketched after the resource list below.

When would you use it? When you want to monitor a metric or set of metrics either manually or through an automated system, or when you want to trigger a WLDF Watch based on a metric exposed through DMS.

How is it related to the other tools? DMS metrics can be monitored with WLDF Watches, which can in turn notify DFW to create an Incident.

DMS Resources:

How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1]
How to Reset a SOA 11g DMS Metric
DMS Documentation

top
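For completeness, here is a hedged WLST sketch of browsing the same data the Spy servlet shows. The table name is only an example; SOA Suite exposes its own Noun Types, and command availability should be verified against the DMS documentation above:

connect('weblogic', 'welcome1', 't3://adminhost:7001')   # hypothetical credentials and URL

displayMetricTableNames()        # list every DMS metric table (Noun Type) on the server
displayMetricTables('JVM')       # print the Sensors and instances for one table
dumpMetrics()                    # raw dump of all metrics, useful when working with Support

disconnect()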
ODL (Oracle Diagnostic Logging)

ODL is the primary facility for most Fusion Middleware applications to log what they are doing. Whenever you change a logging level through Enterprise Manager it is ultimately exposed through ODL and written to the server diagnostic log. A notable exception to this is WebLogic Server, which uses its own log format / file. ODL logs entries in a consistent, structured way using predefined fields and name/value pairs. Here's an example of a SOA Suite entry:

[2012-04-25T12:49:28.083-06:00] [AdminServer] [ERROR] [] [oracle.soa.bpel.engine] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: ] [ecid: 0963fdde7e77631c:-31a6431d:136eaa46cda:-8000-00000000000000b4,0] [errid: 41] [WEBSERVICE_PORT.name: BPELProcess2_pt] [APP: soa-infra] [composite_name: TestProject2] [J2EE_MODULE.name: fabric] [WEBSERVICE.name: bpelprocess1_client_ep] [J2EE_APP.name: soa-infra] Error occured while handling a post operation[[

When would you use it? You'll use ODL almost every time you want to identify and diagnose a problem in the environment. The entries are written to the server diagnostic log.

How is it related to the other tools? The server diagnostic logs are collected by DFW and RDA. Selective Tracing writes its information to the diagnostic log as well. Additionally, DFW log conditions are triggered by ODL log events.

ODL Resources:

ODL Documentation

top

ADR (Automatic Diagnostics Repository)

ADR is not a tool in and of itself but is where DFW stores the Incidents it creates. Every server in the domain has an ADR location, which can be found under <SERVER_HOME>/adr. This is referred to as the ADR 'Base' location. ADR also has what are known as 'Home' locations.

Example: You have a domain called 'myDomain' and an associated managed server called 'myServer'. Your admin server is called 'AdminServer'. Your domain home directory is called 'myDomain' and it contains a 'servers' directory. The 'servers' directory contains a directory for the managed server called 'myServer', and here is where you'll find the 'adr' directory, which is the ADR 'Base' location for myServer. To get to the ADR 'Home' locations we drill through a few levels: diag/ofm/myDomain/

In an 11.1.1.6 SOA Suite domain you will see 2 directories here, 'myServer' and 'soa-infra'. These are the ADR 'Home' locations. 'myServer' is the 'system' ADR home and contains system level Incidents. 'soa-infra' is the name that SOA Suite used to register with DFW, and this ADR home contains SOA Suite related Incidents. Each ADR home location contains a series of directories, one of which is called 'incident'. This is where your Incidents are stored.

When would you use it? It's a good idea to check on these locations from time to time to see whether a lot of Incidents are being generated. They can be cleaned out by deleting the Incident directories or through the ADRCI tool. If you know that an Incident is of particular interest for an issue you're working on with Oracle, you can simply zip it up and provide it.

How does it relate to the other tools? ADR is obviously very important for DFW since it's where the Incidents are stored. Incidents contain Diagnostic Dumps that may relate to diagnostic logs (ODL) and DMS metrics. The most recent 10 Incident directories are collected by RDA by default, and ADRCI relies on the ADR locations to help manage the contents.

top

ADRCI (Automatic Diagnostics Repository Command Interpreter)

ADRCI is a command line tool for packaging and managing Incidents.

When would you use it? When purging Incidents from an ADR Home location, or when you want to package an Incident along with an offline RDA collection for upload to Oracle Support.

How does it relate to the other tools? ADRCI contains a tool called the Incident Packaging System or IPS. This is used to package an Incident for upload to Oracle Support through a Service Request. Starting in 11.1.1.6, IPS will attempt to collect an offline RDA collection and include it with the Incident package. This will only work if Perl is available on the path; otherwise it will give a warning and package only the Incident files.

ADRCI Resources:

How to Use the Incident Packaging System (IPS) in SOA 11g [ID 1381259.1]
ADRCI Documentation

top

WLDF (WebLogic Diagnostic Framework)

WLDF is functionality available in WebLogic Server since version 9. Starting with FMW 11g a link has been added between WLDF and the pre-existing DFW: the WLDF Watch Notification. Let's take a closer look at the flow:

1. There is a need to monitor the performance of your SOA Suite message processing.
2. A WLDF Watch is created in the WLS console that will trigger if the average message processing time exceeds 2 seconds. This metric is monitored through a DMS MBean instance.
3. The out of the box DFW Notification (the Notification is called FMWDFW-notification) is added to the Watch. Under the covers this notification is of type JMX.
4. The Watch is triggered when the threshold is exceeded and fires the Notification.
5. DFW has a listener that picks up the Notification and evaluates it according to its rules, etc.

When it comes to automatic Incident creation, WLDF is a key component with capabilities that will grow over time.

When would you use it? When you want to monitor the WLS server log or an MBean metric for some condition and fire a notification when the Watch is triggered.

How does it relate to the other tools? WLDF is used to automatically trigger Incident creation through DFW using the DFW Notification.

WLDF Resources:

How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1]
How To Script the Creation of a SOA WLDF Watch in 11g [ID 1377986.1]
WLDF Documentation

top
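To tie the flow together, here is a hedged WLST sketch of scripting a Watch like the one in step 2 above. The module name, managed server name, rule expression and thresholds are illustrative assumptions, not a supported configuration; the KB note above ([ID 1377986.1]) contains a supported script. The MBean paths follow the standard WLS edit tree layout:

connect('weblogic', 'welcome1', 't3://adminhost:7001')   # hypothetical credentials and URL
edit()
startEdit()

# Create a diagnostic module and target it at the SOA managed server
wldf = cmo.createWLDFSystemResource('SOA-Watch-Module')
wldf.addTarget(getMBean('/Servers/soa_server1'))

# Add a Harvester-type Watch over a DMS-backed MBean attribute
wn = getMBean('/WLDFSystemResources/SOA-Watch-Module/WLDFResource/SOA-Watch-Module/WatchNotification/SOA-Watch-Module')
watch = wn.createWatch('SlowMessageProcessing')
watch.setRuleType('Harvester')
# Illustrative expression only - point it at the DMS MBean attribute you want to monitor
watch.setRuleExpression('${mydomain:Type=SoaInfraMetric//AvgProcessingTime} > 2000')
watch.setAlarmType('AutomaticReset')
watch.setAlarmResetPeriod(30000)             # reset after 30 seconds, like the default Watches

# Route the trigger to DFW through a JMX notification named for the out of the box one
notif = wn.createJMXNotification('FMWDFW-notification')
watch.addNotification(notif)

save()
activate()
disconnect()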


  • 12c - flashforward, flashback or see it as of now...

    - by noreply(at)blogger.com (Thomas Kyte)
Oracle 9i exposed flashback query to developers for the first time. The ability to flashback query dates back to version 4, however (it just wasn't exposed). Every time you run a query in Oracle it is in fact a flashback query - it is what multi-versioning is all about.

However, there was never a flashforward query (well, ok, the workspace manager has this capability - but with lots of extra baggage). We've never been able to ask a table "what will you look like tomorrow" - but now we can.

The capability is called Temporal Validity. If you have a table with data that is effective dated - has a "start date" and "end date" column in it - we can now query it using flashback query like syntax. The twist is - the date we "flashback" to can be in the future. It works by rewriting the query to transparently add the necessary where clause and filter out the right rows for the right period of time - and since you can have records whose start date is in the future - you can query a table and see what it would look like at some future time.

Here is a quick example, we'll start with a table:

ops$tkyte%ORA12CR1> create table addresses
  2  ( empno       number,
  3    addr_data   varchar2(30),
  4    start_date  date,
  5    end_date    date,
  6    period for valid(start_date,end_date)
  7  )
  8  /

Table created.

The new bit is on line 6 (it can be altered into an existing table - so any table you have with a start/end date column will be a candidate). The keyword is PERIOD; valid is an identifier I chose - it could have been foobar, valid just sounds nice in the query later. You identify the columns in your table - or we can create them for you if they don't exist. Then you just create some data:

ops$tkyte%ORA12CR1> insert into addresses (empno, addr_data, start_date, end_date )
  2  values ( 1234, '123 Main Street', trunc(sysdate-5), trunc(sysdate-2) );

1 row created.

ops$tkyte%ORA12CR1> insert into addresses (empno, addr_data, start_date, end_date )
  2  values ( 1234, '456 Fleet Street', trunc(sysdate-1), trunc(sysdate+1) );

1 row created.

ops$tkyte%ORA12CR1> insert into addresses (empno, addr_data, start_date, end_date )
  2  values ( 1234, '789 1st Ave', trunc(sysdate+2), null );

1 row created.

and you can either see all of the data:

ops$tkyte%ORA12CR1> select * from addresses;

     EMPNO ADDR_DATA                      START_DAT END_DATE
---------- ------------------------------ --------- ---------
      1234 123 Main Street                27-JUN-13 30-JUN-13
      1234 456 Fleet Street               01-JUL-13 03-JUL-13
      1234 789 1st Ave                    04-JUL-13

or query "as of" some point in time - as you can see in the predicate section - it is just doing a query rewrite to automate the "where" filters:

ops$tkyte%ORA12CR1> select * from addresses as of period for valid sysdate-3;

     EMPNO ADDR_DATA                      START_DAT END_DATE
---------- ------------------------------ --------- ---------
      1234 123 Main Street                27-JUN-13 30-JUN-13

ops$tkyte%ORA12CR1> @plan
ops$tkyte%ORA12CR1> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
SQL_ID  cthtvvm0dxvva, child number 0
-------------------------------------
select * from addresses as of period for valid sysdate-3

Plan hash value: 3184888728

-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |       |       |     3 (100)|          |
|*  1 |  TABLE ACCESS FULL| ADDRESSES |     1 |    48 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter((("T"."START_DATE" IS NULL OR "T"."START_DATE"<=SYSDATE@!-3)
              AND ("T"."END_DATE" IS NULL OR "T"."END_DATE">SYSDATE@!-3)))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

24 rows selected.

ops$tkyte%ORA12CR1> select * from addresses as of period for valid sysdate;

     EMPNO ADDR_DATA                      START_DAT END_DATE
---------- ------------------------------ --------- ---------
      1234 456 Fleet Street               01-JUL-13 03-JUL-13

ops$tkyte%ORA12CR1> @plan
ops$tkyte%ORA12CR1> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
SQL_ID  26ubyhw9hgk7z, child number 0
-------------------------------------
select * from addresses as of period for valid sysdate

Plan hash value: 3184888728

-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |       |       |     3 (100)|          |
|*  1 |  TABLE ACCESS FULL| ADDRESSES |     1 |    48 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter((("T"."START_DATE" IS NULL OR "T"."START_DATE"<=SYSDATE@!)
              AND ("T"."END_DATE" IS NULL OR "T"."END_DATE">SYSDATE@!)))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

24 rows selected.

ops$tkyte%ORA12CR1> select * from addresses as of period for valid sysdate+3;

     EMPNO ADDR_DATA                      START_DAT END_DATE
---------- ------------------------------ --------- ---------
      1234 789 1st Ave                    04-JUL-13

ops$tkyte%ORA12CR1> @plan
ops$tkyte%ORA12CR1> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
SQL_ID  36bq7shnhc888, child number 0
-------------------------------------
select * from addresses as of period for valid sysdate+3

Plan hash value: 3184888728

-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |       |       |     3 (100)|          |
|*  1 |  TABLE ACCESS FULL| ADDRESSES |     1 |    48 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter((("T"."START_DATE" IS NULL OR "T"."START_DATE"<=SYSDATE@!+3)
              AND ("T"."END_DATE" IS NULL OR "T"."END_DATE">SYSDATE@!+3)))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

24 rows selected.

All in all a nice, easy way to query effective dated information as of a point in time without a complex where clause. You need to maintain the data - it isn't that a delete will turn into an update that end-dates a record or anything - but if you have tables with start/end dates, this will make it much easier to query them.

