
  • OWB 11gR2 – Degenerate Dimensions

    - by David Allan
    Ever wondered how to build degenerate dimensions in OWB and get the benefits of slowly changing dimensions and cube loading? It's now possible through some changes in 11gR2 that make dimension and cube loading much more flexible. This lets you get the benefit of OWB's surrogate key handling and slowly changing dimension references when loading the fact table, while still using degenerate dimensions (see Ralph Kimball's degenerate dimensions design tip). Here we will see how to use the cube operator to load slowly changing, regular and degenerate dimensions. The cube and cube operator can now work with dimensions that have no surrogate key as well as dimensions with surrogates, so you can get the benefit of cube loading and incorporate degenerate dimension loading.

    What you need to do is create a dimension in OWB that is used purely for ETL metadata. The dimension itself is never deployed (its table is, but it holds no data); it has no surrogate keys, and it has a single level with a business attribute holding the degenerate dimension data plus a dummy attribute (say, a description) just to pass OWB validation. When this degenerate dimension is added into a cube, you will need to configure the fact table created and set the 'Deployable' flag to FALSE for the foreign key generated to the degenerate dimension table. The degenerate dimension reference will then be in the cube operator and used when matching. The steps are:

    1. Create the degenerate dimension using the regular wizard.
    2. Delete the Surrogate ID attribute; it is not needed.
    3. Define a level name for the dimension member (any name).
    4. After the wizard has completed, in the editor delete the hierarchy STANDARD that was automatically generated. There is only a single level, so there is no need for a hierarchy and it shouldn't really be created.
    5. Deploy the implementing table DD_ORDERNUMBER_TAB. This needs to be deployed, but with no data (the mapping here will do a left outer join of the source data with the empty degenerate dimension table).
    6. Now go ahead and build your cube: use the regular TIMES dimension, for example, and your degenerate dimension DD_ORDERNUMBER; you can add in SCD dimensions and so on.
    7. Configure the fact table created and set Deployable to false, so the foreign key does not get generated.
    8. You can now use the cube in a mapping and load data into the fact table via the cube operator; this will look after surrogate lookups and slowly changing dimension references.

    If you generate the SQL you will see that the ON clause for matching includes the columns representing the degenerate dimension (a sketch of the general shape follows below). Here we have seen how this use case for loading fact tables using degenerate dimensions becomes a whole lot simpler using OWB 11gR2. I'm sure there are other use cases where this mix of dimensions with surrogate and regular identifiers is useful; fact tables partitioned by date columns are another classic example that this will greatly help with, making the cube operator much more useful. Good to hear any comments.
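
    To make the generated join concrete, here is a minimal hand-written sketch of the general shape, not OWB's actual generated code. The table and column names (SRC_ORDERS, SALES_FACT, TIMES_TAB, the ORDER_NUMBER column and so on) are assumptions for illustration only; the real statement OWB emits will differ.

        -- Hypothetical sketch of the shape of the SQL OWB generates for the fact load.
        -- TIMES_TAB carries a surrogate key; DD_ORDERNUMBER_TAB is deployed but empty,
        -- so the left outer join simply passes the business key through to the fact.
        INSERT INTO sales_fact (time_key, order_number, amount)
        SELECT t.dimension_key,   -- surrogate key resolved from the TIMES dimension
               s.order_number,    -- degenerate dimension: business key kept as-is
               s.amount
        FROM   src_orders s
        JOIN   times_tab t
          ON   t.day_code = s.order_date
        LEFT OUTER JOIN dd_ordernumber_tab dd
          -- the ON clause matches on the column representing the degenerate dimension
          ON   dd.order_number = s.order_number;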

  • Announcing Oracle Enterprise Content Management Suite 11g

    - by [email protected]
    Today Oracle announced Oracle Enterprise Content Management Suite 11g. This is a major release for us, and it reinforces our three key themes at Oracle:

    Complete. New in this release, Oracle ECM Suite 11g is built on a single, unified repository. Every piece of content - documents, HTML pages, digital assets, scanned images - is stored in and accessible directly from the repository, whether you are working on websites, creating brand logos, processing accounts payable invoices, or running records and retention functions. It makes complete, end-to-end management of content possible, from the point it enters the organization through its entire lifecycle. Also new in this release, the installation, access, monitoring and administration of Oracle ECM Suite 11g is centralized. As a complete system, organizations can lower the costs of training and usage by having a centralized source of information that is easily administered. As part of this new unified repository release, Oracle has released a benchmarking white paper that shows the extreme performance and scalability of Oracle ECM Suite: when tested on a two-node UCM server running on Sun Oracle Database Machine Half Rack hardware with an Exadata storage server, Oracle ECM Suite 11g was able to ingest over 178 million documents per day.

    Open. Oracle ECM Suite 11g is built on a service-oriented architecture; all functions are available through standards-based service calls in Web Services or Java. In this release Oracle unveils Open Web Content Management, a revolutionary approach to web content management that decouples the content management process from the process of creating web applications. One piece of this approach is one-click web content management: with one click, a web application builder can drag content services into their application, enabling their users to also edit content with just one click. Open Web Content Management is also open because it enables web developers to add web content management to new and existing JavaServer Pages (JSP), JavaServer Faces (JSF) and Oracle Application Development Framework (ADF) Faces applications. Finally, it offers open content distribution: Oracle ECM Suite 11g provides flexible deployment options with a built-in smart cache, so organizations can deliver websites or web applications without requiring Oracle ECM Suite as part of the delivery system.

    Integrated. Oracle ECM Suite 11g also offers a series of next-generation desktop integrations, such as:

    - New MS Office integration with menus to access managed content, insert managed links, and compare managed documents using standard MS Office reviewing tools
    - Automatic identity tagging of documents on download, to help users understand which versions they are viewing and to prevent duplicate content items in the content repository
    - New "smart productivity folders" that show a user's workflow inbox, saved searches and checked-out content directly from Windows Explorer
    - Drag-and-drop metadata pop-ups
    - Check-in and check-out for all file formats with any standard WebDAV server

    As part of Oracle's Enterprise Application Documents initiative, Oracle ECM Suite 11g also provides certified application integrations with solution templates.

    You can read the press release here. You can see more assets at the launch center here. You can sign up for the announcement webinar and hear more about the new features here. You can read the benchmarking study here.

  • Dependency injection with n-tier Entity Framework solution

    - by Matthew
    I am currently designing an n-tier solution which uses Entity Framework 5 (.NET 4) as its data access strategy, but I am concerned about how to incorporate dependency injection to make it testable and flexible. My current solution layout is as follows (my solution is called Alcatraz):

    - Alcatraz.WebUI: an ASP.NET WebForms project, the front-end user interface; references projects Alcatraz.Business and Alcatraz.Data.Models.
    - Alcatraz.Business: a class library project containing the business logic; references projects Alcatraz.Data.Access and Alcatraz.Data.Models.
    - Alcatraz.Data.Access: a class library project housing AlcatrazModel.edmx and the AlcatrazEntities DbContext; references project Alcatraz.Data.Models.
    - Alcatraz.Data.Models: a class library project containing POCOs for the Alcatraz model; no references.

    My vision for how this solution would work is that the web UI would instantiate a repository within the business library; this repository would have a dependency (through the constructor) on a connection string (not an AlcatrazEntities instance). The web UI would know the database connection string, but not that it was an Entity Framework connection string.

    In the Business project:

        public class InmateRepository : IInmateRepository
        {
            private string _connectionString;

            public InmateRepository(string connectionString)
            {
                if (connectionString == null)
                {
                    throw new ArgumentNullException("connectionString");
                }

                EntityConnectionStringBuilder connectionBuilder = new EntityConnectionStringBuilder();
                connectionBuilder.Metadata = "res://*/AlcatrazModel.csdl|res://*/AlcatrazModel.ssdl|res://*/AlcatrazModel.msl";
                connectionBuilder.Provider = "System.Data.SqlClient";
                connectionBuilder.ProviderConnectionString = connectionString;

                _connectionString = connectionBuilder.ToString();
            }

            public IQueryable<Inmate> GetAllInmates()
            {
                AlcatrazEntities ents = new AlcatrazEntities(_connectionString);
                return ents.Inmates;
            }
        }

    In the Web UI:

        IInmateRepository inmateRepo = new InmateRepository(@"data source=MATTHEW-PC\SQLEXPRESS;initial catalog=Alcatraz;integrated security=True;");
        List<Inmate> deathRowInmates = inmateRepo.GetAllInmates().Where(i => i.OnDeathRow).ToList();

    I have a few related questions about this design.

    1) Does this design even make sense in terms of Entity Framework's capabilities? I heard that Entity Framework uses the unit-of-work pattern already; am I just adding another layer of abstraction unnecessarily?

    2) I don't want my web UI to directly communicate with Entity Framework (or even reference it, for that matter). I want all database access to go through the business layer, as in the future I will have multiple projects using the same business layer (web service, Windows application, etc.) and I want it to be easy to maintain and update by having the business logic in one central area. Is this an appropriate way to achieve this?

    3) Should the business layer even contain repositories, or should they be contained within the Access layer? If where they are is alright, is passing a connection string a good dependency to assume?

    Thanks for taking the time to read!

  • Cloud Application Management for Platforms

    - by user756764
    Today Oracle, along with CloudBees, Cloudsoft, Huawei, Rackspace, Red Hat, and Software AG, published the Cloud Application Management for Platforms (CAMP) specification. This spec deals with application management in the context of PaaS. It defines a model (consisting of a set of resources and their relationships), a REST-based API for manipulating that model, and a packaging format for getting applications (and their attendant metadata) into and out of the platform. My colleague, Mark Carlson, has already provided an excellent writeup on the spec here. The following additional points bear emphasizing:

    - CAMP is language, framework and platform neutral; it should be equally applicable to the task of deploying and managing Ruby on Rails applications as Java/Spring applications (or Node.js applications, etc.).
    - CAMP only covers the interactions between a Cloud Consumer and a Cloud Provider (using the definitions of these terms provided in the NIST Cloud Computing Reference Architecture). The internal APIs used by the Cloud Provider to, for example, deploy additional platform services (e.g. a new message queuing service) are out of CAMP's scope.
    - CAMP supports the management of the entire lifecycle of the application (e.g. start/stop, suspend/resume, etc.), not just the deployment of the components that make up the application.
    - Complexity is the antithesis of interoperability. One of CAMP's goals is to be as broadly interoperable as possible. To this end, the authors of CAMP tried to "make things as simple as possible, but no simpler". For example, JSON is the only serialization format used in the spec (although Providers can extend this to support additional serialization formats such as XML). It remains to be seen whether we can preserve this simplicity as the spec is processed by OASIS. So far, those who have indicated an interest in collaborating on the spec seem to be of a like mind with regard to the need for simplicity.
    - The flip side of simplicity is the knowledge that you undoubtedly missed something that is important to someone. To make up for this, CAMP is designed to be extensible. The idea is to ship what we know will work, allow implementers to extend the spec, then re-factor the spec to incorporate the most popular extensions.

    Anyone interested in this effort, particularly those of you using PaaS-level services, is encouraged to join the forthcoming OASIS TC. As you may have noticed, CAMP is a bit of a departure from some of the more monolithic management standards that have preceded it. The idea is to develop simple, discrete standards targeted at specific interoperability and portability problems, and to tie these standards together with common patterns based on REST and HATEOAS. I'm excited to see how this idea plays out.

  • Build Mobile App for E-Business Suite Using SOA Suite and ADF Mobile

    - by Michelle Kimihira
    With the upcoming release of Oracle ADF Mobile, I caught up with Srikant Subramaniam, Senior Principal Product Manager, Oracle Fusion Middleware, after OpenWorld to learn about the cool hands-on lab there. For those of you who missed it, you will want to keep reading...

    Author: Srikant Subramaniam, Senior Principal Product Manager, Oracle Fusion Middleware

    Oracle ADF Mobile enables rapid and declarative development of native on-device mobile applications. These native applications provide a richer experience for smart-device users running Apple iOS or other mobile platforms. Oracle ADF Mobile protects Oracle customers from technology shifts by adopting a metadata-based development framework that enables developers to develop one app (using Oracle JDeveloper) and deploy it to multiple device platforms (starting with iOS and Android). Oracle ADF Mobile also enables IT organizations to leverage existing expertise in web-based and Java development by adopting a hybrid application architecture that brings together HTML5, Java, and a device-native container:

    - HTML5 allows developers to deliver device-native user experiences while maintaining portability across different platforms
    - Java allows developers to create modules to support business logic and data services
    - The native container provides integration into device services such as camera, contacts, etc.

    All these technologies are packaged into a development framework that supports declarative application development through Oracle JDeveloper. ADF Mobile also provides out-of-the-box integration with key Fusion Middleware components, such as SOA Suite and Business Process Management (BPM). Oracle Fusion Middleware provides the necessary infrastructure to extend business processes and services to the mobile device, enabling the mobile user to participate in human tasks without an additional "mobile middleware" layer. When coupled with Oracle SOA Suite, this combination can execute business transactions on Oracle E-Business Suite (or any Oracle Application).

    Demo Use Case: Mobile E-Business Suite (iExpense) Approvals

    Using an employee expense approval scenario, we illustrate how to use Oracle Fusion Middleware and Oracle ADF Mobile to build application extensions that integrate intelligently with Oracle Applications (for example, E-Business Suite). Building these extensions using Oracle Fusion Middleware and ADF makes modifications simple, quick to implement, and easy to maintain and upgrade. As described earlier, this approach also extends Fusion Middleware to mobile users without an additional "mobile middleware" layer. The approver is presented with a list of expense reports that have been submitted for approval. These expense reports are retrieved from the backend E-Business Suite and displayed on the mobile device. Approval (or rejection) of an expense report kicks off the workflow in E-Business Suite and takes it to completion. The demo also shows how to integrate with native device services such as email, contacts and BI dashboards, as well as a prebuilt PDF viewer (this is especially useful in the expense approval scenario, as there is often a need for the approver to access the submitted receipts).

    Summary

    Oracle recommends Fusion Middleware as the application integration platform to deliver critical enterprise data and processes to mobile applications. Pre-built connectors between Fusion Middleware and Applications greatly accelerate the integration process. Instead of building individual integration points between mobile applications and individual enterprise applications, Oracle Fusion Middleware enables IT organizations to leverage a common platform to support both desktop and mobile applications.

    Additional Information
    - Product Information on Oracle.com: Oracle Fusion Middleware
    - Follow us on Twitter and Facebook
    - Subscribe to our regular Fusion Middleware Newsletter

  • SQL SERVER – Sharing your ETL Resources Across Applications with Ease

    - by pinaldave
    Frequently an organization will find that the same resources are used in multiple ETL applications: for example, the same database, general-purpose processing logic, or file system locations. Creating an easy way to reuse these resources across multiple applications would increase efficiency and reduce errors. Moreover, not every ETL developer has the same skill set, and it is likely that one developer will be more adept at writing code while another is more comfortable configuring database connections. Real productivity gains will come when these developers are able to work independently while still making their work available to others assigned to the same project. These are the benefits of a centralized version control system.

    Of course, most version control systems could be used to store and serve files, but the real need is to store and serve entire ETL applications so that each developer's ongoing work can immediately benefit from another developer's completed work. In other words, the version control system needs to be tightly integrated with the tools used to develop the ETL application. The following screen shot shows such a tool.

    [Screen shot: desktop ETL tool that tightly integrates with a central version control system]

    Developers can check out or commit entire projects or just a single artifact. Each artifact may be managed independently, so if you need to go back to an earlier version of one artifact, changes you may have made to other artifacts are not lost. By being tightly integrated into the graphical environment used to create and edit the project artifacts, it is extremely easy and straightforward to move your files to and from the version control system, and there is no dependency on another vendor's version control system. The built-in version control system is optimized for managing the artifacts of ETL applications.

    It is equally important that the version control system supports all of the actions one typically performs, such as rollbacks, locking and unlocking of files, and the ability to resolve conflicts. Note that this particular ETL tool also has the capability to switch back and forth between multiple version control systems. It also needs to be easy to determine the status of an artifact: not just that it has been committed or modified, but when and by whom. Generally you must query the version control system for this information, but having it displayed within the development environment is more desirable.

    Whose ETL tool works in this fashion? Last month I mentioned the data integration solution offered by expressor software. The version control features I described in this post are all available in their just-released expressor 3.1 Standard Edition, through the integration of their expressor Studio development environment with a centralized metadata repository and version control system. You can download their Studio application, which is free, or evaluate the full Standard Edition on your own hardware. It may be worth your time.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • Generate DROP statements for all extended properties

    - by jamiet
    This evening I have been attempting to migrate an existing on-premise database to SQL Azure using the wizard that is built into SQL Server Management Studio (SSMS). When I did so I received the following error:

        The following objects are not supported = [MS_Description] = Extended Property

    Evidently databases containing extended properties cannot be migrated using this particular wizard, so I set about removing all of the extended properties. Unfortunately there were over a thousand of them, so I needed a better way than simply deleting each and every one of them manually. I found a couple of resources online that went some way toward this:

    - Drop all extended properties in a MSSQL database by Angelo Hongens
    - Modifying and deleting extended properties by Adam Aspin

    Unfortunately neither provided a script that exactly suited my needs. Angelo's covered extended properties on tables and columns, however I had other objects that had extended properties on them. Adam's looked more complete, but when I ran it I got an error:

        Msg 468, Level 16, State 9, Line 78
        Cannot resolve the collation conflict between "Latin1_General_100_CS_AS" and "Latin1_General_CI_AS" in the equal to operation.

    So, both great resources, but I wasn't able to use either on its own to get rid of all of my extended properties. Hence, I combined the excellent work that Angelo and Adam had provided in order to manufacture my own script, which did successfully manage to generate calls to sp_dropextendedproperty for all of my extended properties. If you think you might be able to make use of such a script then feel free to download it from https://skydrive.live.com/redir.aspx?cid=550f681dad532637&resid=550F681DAD532637!16707&parid=550F681DAD532637!16706&authkey=!APxPIQCatzC7BQ8.

    This script will remove extended properties on tables, columns, check constraints, default constraints, views, sprocs, foreign keys, primary keys, table triggers, UDF parameters, sproc parameters, databases, schemas, database files and filegroups. If you have any object types with extended properties on them that are not in that list then consult Adam's aforementioned article; it should prove very useful. I repeat here the message that I have placed at the top of the script:

        /*
        This script will generate calls to sp_dropextendedproperty for every extended property
        that exists in your database. Actually, a caveat: I don't promise that it will catch
        each and every extended property that exists, but I'm confident it will catch most of them!
        It is based on this:
        http://blog.hongens.nl/2010/02/25/drop-all-extended-properties-in-a-mssql-database/
        by Angelo Hongens.
        Also had lots of help from this:
        http://www.sqlservercentral.com/articles/Metadata/72609/
        by Adam Aspin
        Adam actually provides a script at that link to do something very similar but when I ran
        it I got an error:
            Msg 468, Level 16, State 9, Line 78
            Cannot resolve the collation conflict between "Latin1_General_100_CS_AS" and
            "Latin1_General_CI_AS" in the equal to operation.
        So I put together this version instead.
        Use at your own risk.
        Jamie Thomson
        2012-03-25
        */

    Hope this is useful to someone!

    @Jamiet
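
    To give a flavour of the approach (the full script linked above covers many more object types), here is a minimal hand-rolled sketch, not the actual downloadable script, that generates sp_dropextendedproperty calls for table-level and column-level extended properties only. The join logic and aliases are illustrative assumptions:

        -- Minimal sketch: generate DROP statements for table- and column-level
        -- extended properties. The full script covers many more object types
        -- (views, sprocs, constraints, schemas, files, filegroups, and so on).
        SELECT 'EXEC sp_dropextendedproperty @name = N''' + ep.name + ''''
             + ', @level0type = N''SCHEMA'', @level0name = N''' + s.name + ''''
             + ', @level1type = N''TABLE'', @level1name = N''' + t.name + ''''
             + CASE WHEN c.name IS NOT NULL
                    THEN ', @level2type = N''COLUMN'', @level2name = N''' + c.name + ''''
                    ELSE '' END
             + ';'
        FROM   sys.extended_properties ep
        JOIN   sys.tables  t ON t.object_id = ep.major_id
        JOIN   sys.schemas s ON s.schema_id = t.schema_id
        LEFT JOIN sys.columns c
               ON  c.object_id = ep.major_id
               AND c.column_id = ep.minor_id  -- minor_id = 0 means the property is on the table itself
        WHERE  ep.class = 1;                  -- class 1 = object/column extended properties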

  • Announcing MySQL Enterprise Backup 3.7.1

    - by Hema Sridharan
    The MySQL Enterprise Backup (MEB) Team is pleased to announce the release of MEB 3.7.1, a maintenance release that includes bug fixes and enhancements to some of the existing features.

    The most important feature introduced in this release is automatic incremental backup. New argument syntax for the --incremental-base option makes it simpler to perform automatic incremental backups. When the options --incremental and --incremental-base=history:last_backup are combined, the mysqlbackup command uses the metadata in the mysql.backup_history table to determine the LSN to use as the lower limit of the incremental backup. You no longer need to keep track of the actual LSN (as in the option --start-lsn=LSN) or even the location of the previous backup (as in the option --incremental-base=dir:directory_path).

    This release also includes various bug fixes related to some options used in MEB. The most important few are listed below:

    1. The option --force now allows overwriting InnoDB data and log files in combination with the apply-log and apply-incremental-backup options, and replacing the image file in combination with the backup-to-image and backup-dir-to-image options.
    2. Resolved a bug that prevented MEB from interfacing with third-party storage managers to execute backup and restore jobs in combination with the SBT interface and associated --sbt* options for mysqlbackup.
    3. When MEB is run with the copy-back option, it now displays warnings as existing files are overwritten.

    For more information about other bug fixes, please refer to the change log at http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/meb-news.html. The complete MEB documentation is located at http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/index.html.

    You will find the binaries for the new release in My Oracle Support, https://support.oracle.com. Choose the "Patches & Updates" tab, and then use the "Product or Family (Advanced Search)" feature. If you haven't looked at MEB 3.7.1 recently, please do so now and let us know how MEB works for you. Send your feedback to [email protected].
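
    Conceptually, the new history:last_backup syntax automates a lookup along the following lines. This is an illustrative sketch only; the column names in mysql.backup_history (end_lsn, end_time) are assumptions about the metadata MEB records, and mysqlbackup performs this lookup internally, so you never need to run it yourself:

        -- Illustrative only: an incremental backup starts from the LSN where the
        -- most recent backup ended. With --incremental-base=history:last_backup,
        -- mysqlbackup derives this from its own backup metadata rather than
        -- requiring you to pass --start-lsn explicitly.
        SELECT end_lsn
        FROM   mysql.backup_history
        ORDER  BY end_time DESC
        LIMIT  1;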

  • ie7 innerhtml strange display problem

    - by thoraniann
    Hello, I am having a strange problem with IE7 (IE8 in compatibility mode). I have div containers whose values I am updating using JavaScript innerHTML. This works fine in Firefox and IE8. In IE7 the values do not update, but if I click on the values and highlight them then they update; also, if I change the height of the browser, then on the next update the values get updated correctly. I have figured out that if I change the position property of the outer div container from relative to static then the updates work correctly. The page can be viewed here: http://islendingasogur.net/test/webmap_html_test.html. In Internet Explorer 8 with compatibility turned on you can see that the timestamp in the gray box only gets updated one time; after that you see no changes. The timestamp in the lower right corner gets updated every 10 seconds. But if you highlight the text in the gray box then the updated timestamp value appears! Here is the page:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
        <meta http-equiv="cache-control" content="no-cache"/>
        <meta http-equiv="pragma" content="no-cache"/>
        <meta http-equiv="expires" content="Mon, 22 Jul 2002 11:12:01 GMT"/>
        <title>innerhtml problem</title>
        <script type="text/javascript">
        <!--
        var alarm_off_color = '#00ff00';
        var alarm_low_color = '#ffff00';
        var alarm_lowlow_color = '#ff0000';
        var group_id_array = new Array();
        var var_alarm_array = new Array();
        var timestamp_color = '#F3F3F3';
        var timestamp_alarm_color = '#ff00ff';
        group_id_array[257] = 0;

        function updateParent(var_array, group_array) {
            //Update last update time
            var time_str = "Last Reload Time: ";
            var currentTime = new Date();
            var hours = currentTime.getHours();
            var minutes = currentTime.getMinutes();
            var seconds = currentTime.getSeconds();
            if (minutes < 10) { minutes = "0" + minutes; }
            if (seconds < 10) { seconds = "0" + seconds; }
            time_str += hours + ":" + minutes + ":" + seconds;
            document.getElementById('div_last_update_time').innerHTML = time_str;
            //alert(time_str);
            alarm_var = 0;
            //update group values
            for (i1 = 0; i1 < var_array.length; ++i1) {
                if (document.getElementById(var_array[i1][0])) {
                    document.getElementById(var_array[i1][0]).innerHTML = unescape(var_array[i1][1]);
                    if (var_array[i1][2] == 0) { document.getElementById(var_array[i1][0]).style.backgroundColor = alarm_off_color }
                    else if (var_array[i1][2] == 1) { document.getElementById(var_array[i1][0]).style.backgroundColor = alarm_low_color }
                    else if (var_array[i1][2] == 2) { document.getElementById(var_array[i1][0]).style.backgroundColor = alarm_lowlow_color }
                    //check if alarm is new
                    var_id = var_array[i1][3];
                    if (var_array[i1][2] == 1 && var_array[i1][4] == 0) {
                        alarm_var = 1;
                    } else if (var_array[i1][2] == 2 && var_array[i1][4] == 0) {
                        alarm_var = 1;
                    }
                }
            }
            //Update group timestamp and box alarm color
            for (i1 = 0; i1 < group_array.length; ++i1) {
                if (document.getElementById(group_array[i1][0])) {
                    //set timestamp for group
                    document.getElementById(group_array[i1][0]).innerHTML = group_array[i1][1];
                    if (group_array[i1][4] != -1) {
                        //set data update error status
                        current_timestamp_color = timestamp_color;
                        if (group_array[i1][4] == 1) { current_timestamp_color = timestamp_alarm_color; }
                        document.getElementById(group_array[i1][0]).style.backgroundColor = current_timestamp_color;
                    }
                }
            }
        }

        function update_map(map_id) {
            document.getElementById('webmap_update').src = 'webmap_html_test_sub.html?first_time=1&map_id=' + map_id;
        }
        -->
        </script>
        <style type="text/css">
        body { margin:0; border:0; padding:0px; background:#eaeaea; font-family:verdana, arial, sans-serif; text-align: center; }
        A:active { color: #000000; }
        A:link { color: #000000; }
        A:visited { color: #000000; }
        A:hover { color: #000000; }
        #div_header { /*position: absolute;*/ background: #ffffff; width: 884px; height: 60px; display: block; float: left; font-size: 14px; text-align: left; /*overflow: visible;*/ }
        #div_container { background: #ffffff; border-left:1px solid #000000; border-right:1px solid #000000; border-bottom:1px solid #000000; float: left; width: 884px; }
        #div_image_container { position: relative; width: 884px; height: 549px; background: #ffffff; font-family:arial, verdana, arial, sans-serif; /*display: block;*/ float:none!important; float/**/:left; border:1px solid #00ff00; padding: 0px; }
        .div_group_box { position: absolute; width: -2px; height: -2px; background: #FFFFFF; opacity: 1; filter: alpha(opacity=100); border:1px solid #000000; font-size: 2px; z-index: 0; padding: 0px; }
        .div_group_container { position: absolute; opacity: 1; filter: alpha(opacity=100); z-index: 5; /*display: block;*/ /*border:1px solid #000000;*/ }
        .div_group_container A:active { text-decoration: none; display: block; }
        .div_group_container A:link { color: #000000; text-decoration: none; display: block; }
        .div_group_container A:visited { color: #000000; text-decoration: none; display: block; }
        .div_group_container A:hover { color: #000000; text-decoration: none; display: block; }
        .div_group_header { background: #17B400; border:1px solid #000000; font-size: 12px; color: #FFFFFF; padding-top: 1px; padding-bottom: 1px; padding-left: 2px; padding-right: 2px; text-align: center; }
        .div_var_name_container { color: #000000; background: #FFFFFF; border-left:1px solid #000000; border-top:0px solid #000000; border-bottom:0px solid #000000; font-size: 12px; float: left; display: block; text-align: left; }
        .div_var_name { padding-top: 1px; padding-bottom: 1px; padding-left: 2px; padding-right: 2px; display: block; }
        .div_var_value_container { color: #000000; background: #FFFFFF; border-left:1px solid #000000; border-right:1px solid #000000; border-top:0px solid #000000; border-bottom:0px solid #000000; font-size: 12px; float: left; text-align: center; }
        .div_var_value { padding-top: 1px; padding-bottom: 1px; padding-left: 2px; padding-right: 2px; }
        .div_var_unit_container { color: #000000; background: #FFFFFF; border-right:1px solid #000000; border-top:0px solid #000000; border-bottom:0px solid #000000; font-size: 12px; float: left; text-align: left; }
        .div_var_unit { padding-top: 1px; padding-bottom: 1px; padding-left: 2px; padding-right: 2px; }
        .div_timestamp { float: none; color: #000000; background: #F3F3F3; border:1px solid #000000; font-size: 12px; padding-top: 1px; padding-bottom: 1px; padding-left: 2px; padding-right: 2px; text-align: center; clear: left; z-index: 100; position: relative; }
        #div_last_update_time { height: 14px; width: 210px; text-align: right; padding: 1px; font-size: 10px; float: right; }
        .copyright { height: 14px; width: 240px; text-align: left; color: #777; padding: 1px; font-size: 10px; float: left; }
        a img { border: 1px solid #000000; }
        .clearer { clear: both; display: block; height: 1px; margin-bottom: -1px; font-size: 1px; line-height: 1px; }
        </style>
        </head>
        <body onload="update_map(1)">
        <div id="div_container">
          <div id="div_header"></div>
          <div class="clearer"></div>
          <div id="div_image_container">
            <img id="map" src="Images/maps/0054_gardabaer.jpg" title="My map" alt="" align="left" border="0" usemap="#_area_links" style="padding: 0px; margin: 0px;" />
            <div id="group_container_257" class="div_group_container" style="visibility:visible; top:10px; left:260px; cursor: pointer;">
              <div class="div_group_header" style="clear:right">Site</div>
              <div class="div_var_name_container">
                <div id="group_name_257_var_8" class="div_var_name">variable 1</div>
                <div id="group_name_257_var_7" class="div_var_name" style="border-top:1px solid #000000;">variable 2</div>
                <div id="group_name_257_var_9" class="div_var_name" style="border-top:1px solid #000000;">variable 3</div>
              </div>
              <div class="div_var_value_container">
                <div id="group_value_257_var_8" class="div_var_value">0</div>
                <div id="group_value_257_var_7" class="div_var_value" style="border-top:1px solid #000000;">0</div>
                <div id="group_value_257_var_9" class="div_var_value" style="border-top:1px solid #000000;">0</div>
              </div>
              <div class="div_var_unit_container">
                <div id="group_unit_257_var_8" class="div_var_unit">N/A</div>
                <div id="group_unit_257_var_7" class="div_var_unit" style="border-top:1px solid #000000;">N/A</div>
                <div id="group_unit_257_var_9" class="div_var_unit" style="border-top:1px solid #000000;">N/A</div>
              </div>
              <div id="group_257_timestamp" class="div_timestamp" style="">-</div>
            </div>
          </div>
          <div class="clearer"></div>
          <div class="copyright">© Copyright</div>
          <div id="div_last_update_time">-</div>
        </div>
        <iframe id="webmap_update" style="display:none;" width="0" height="0"></iframe>
        </body>
        </html>

    The divs with classes div_var_value, div_timestamp and div_last_update_time all get updated by the JavaScript function. The div "div_image_container" seems to be the one causing this; at least, if I change its position property from relative to static the values get updated correctly. This is the page that updates the values:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <title>Loader</title>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
        <script type="text/javascript">
        <!--
        window.onload = doLoad;

        function refresh() {
            //window.location.reload( false );
            var _random_num = Math.floor(Math.random() * 1100);
            window.location.search = "?map_id=54&first_time=0&t=" + _random_num;
        }

        var var_array = new Array();
        var timestamp_array = new Array();
        var_array[0] = Array('group_value_257_var_9', '41.73', -1, 9, 0);
        var_array[1] = Array('group_value_257_var_7', '62.48', -1, 7, 0);
        var_array[2] = Array('group_value_257_var_8', '4.24', -1, 8, 0);

        var current_time = new Date();
        var current_time_str = current_time.getHours();
        current_time_str += ':' + current_time.getMinutes();
        current_time_str += ':' + current_time.getSeconds();
        timestamp_array[0] = Array('group_257_timestamp', current_time_str, 'box_group_container_206', -1, -1);
        //timestamp_array[0] = Array('group_257_timestamp', '11:33:16 23.Nov', 'box_group_container_257', -1, -1);

        window.parent.updateParent(var_array, timestamp_array);

        function doLoad() {
            setTimeout("refresh()", 10 * 1000);
        }
        //-->
        </script>
        </head>
        <body>
        </body>
        </html>

    I edited the post and added a link to the webpage in question. I have also tested the webpage in Internet Explorer 7 and this error does not appear there; I have only seen this error in IE8 with compatibility turned on. If anybody has seen this before and has a fix, I would be very grateful. Thanks.

  • Merge Join component sorted outputs [SSIS]

    - by jamiet
    One question that I have been asked a few times of late in regard to performance tuning SSIS data flows is this: why isn't the Merge Join output sorted (i.e. IsSorted=True)? This is a fair question. After all, both of the Merge Join inputs are sorted, so why wouldn't the output be sorted as well? Well, here's a little secret: the Merge Join output IS sorted! There's a caveat though: it is only under certain circumstances, and SSIS itself doesn't do a good job of informing you of it.

    Let's take a look at an example. Here we have a dataflow that consumes data from the [AdventureWorks2008].[Sales].[SalesOrderHeader] and [AdventureWorks2008].[Sales].[SalesOrderDetail] tables, then joins them using a Merge Join component (a sketch of the kind of sorted source queries feeding the two inputs appears below). Looking inside the editor of the Merge Join, we are joining on the [SalesOrderId] field (which is what the two inputs just happen to be sorted upon). We are also putting [SalesOrderHeader].[SalesOrderId] into the output. Believe it or not, the output from this Merge Join component is sorted (i.e. has IsSorted=True), but unfortunately the Merge Join component does not have an Advanced Editor, hence this is hidden away from us. There are a couple of ways to prove that this is the case: I could open up the package XML inside the .dtsx file and show you the metadata, but there is an easier way than that - I can attach a Sort component to the output. Notice that the Sort component is attempting to sort on the [SalesOrderId] column. This gives us the following warning:

        Validation warning. DFT Get raw data: {992B7C9A-35AD-47B9-A0B0-637F7DDF93EB}: The data is already sorted as specified so the transform can be removed.

    The warning proves that the output from the Merge Join is sorted! It must be noted that the Merge Join output will only have IsSorted=True if at least one of the join columns is included in the output. So there you go, the Merge Join component can indeed produce a sorted output, and that's very useful in order to avoid unnecessary, expensive Sort operations downstream. Hope this is useful to someone out there!

    @Jamiet

    P.S. Thank you to Bob Bojanic on the SSIS product team who pointed this out to me!
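
    For context, here is a minimal sketch of the kind of sorted source queries that would feed the two Merge Join inputs in this example. The queries and column lists are assumptions about the demo package, not taken from it; the point is simply that each input arrives ordered on [SalesOrderID], which is what allows IsSorted=True to be declared on each source's output.

        -- Hypothetical source queries for the two Merge Join inputs. Each OLE DB
        -- Source would run one of these, with IsSorted=True and SortKeyPosition=1
        -- on SalesOrderID set manually in the source's Advanced Editor.
        SELECT SalesOrderID, OrderDate, CustomerID
        FROM   Sales.SalesOrderHeader
        ORDER  BY SalesOrderID;

        SELECT SalesOrderID, SalesOrderDetailID, OrderQty, LineTotal
        FROM   Sales.SalesOrderDetail
        ORDER  BY SalesOrderID;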

  • Getting started with Document Set in SharePoint2010

    - by ybbest
    Folders are widely used in traditional file-based systems, and in the SharePoint world you can create folders in a document library as well. However, there is a new, improved feature in SharePoint called Document Sets; you can attach metadata to a document set. To get started with Document Sets, you can perform the following steps.

    1. Go to Site Settings >> Site collection features and activate the Document Sets feature.
    2. After the Document Sets feature is activated, you will get a new content type called Document Set.
    3. Next, create a custom content type called Loan Application Document Set that inherits from the Document Set content type.
    4. Then create a new column called Application Number.
    5. Add this field to the Loan Application content type.
    6. Create a new content type called Loan Contract Form that inherits from the Document content type.
    7. Add the Application Number column to the Loan Contract Form content type.
    8. Create a new content type called Loan Application Form that inherits from the Document content type and add Application Number to it (the same step as above).
    9. Go to the Loan Application Document Set content type and open the Document Set Settings.
    10. You can define which content types you would like this document set to contain, and you can also define the default document for each content type. When you create a new document set, those default documents will be created automatically in the document set. You can also define the shared fields that are shared across content types; in my case I define Application Number and Description as my shared fields. Finally, you can define the fields that you'd like to show on the document set welcome page.
    11. Now create a new document library, attach those content types to the document library and create a new loan application document set.
    12. You will see the default documents created in the document set. If you update Application Number on the document set, the field will be updated in the documents inside the document set as well.

  • Database Developer - October 2013 issue: Download Database 12c and related products

    - by Javier Puerta
    The October issue of the Database Application Developer newsletter is now available. The focus of this issue is on downloads of Database 12c and related products. (Full newsletter here)

    Get Ready to Download, Deploy and Develop for Oracle Database 12c

    This month we're focused on downloads. We've rounded up the top developer releases (both early adopter and BETA releases) and the articles that will help you do more with Oracle 12c. See the technical content that will help you get started. If you're ready... away we go!

    — Laura Ramsey, Database and Developer Community, Oracle Technology Network Team

    FEATURED DOWNLOADS

    Download: Oracle Database 12c. According to Tom Kyte, the Oracle 12c version has some of the biggest enhancements to the core database since version 6. Check it out for yourself.

    Download: Oracle SQL Developer 4.0 Early Adopter 2 is here. Oracle SQL Developer is a free IDE that simplifies the development and management of Oracle Database. It is a complete end-to-end development platform for your PL/SQL applications that features a worksheet for running queries and scripts, a DBA console for managing the database, a reports interface, a complete data modeling solution and a migration platform for moving your third-party databases to Oracle. If you are interested in checking out this new early adopter version, Oracle SQL Developer 4.0 EA is the place to go.

    Download: Oracle 12c Multitenant Self-Provisioning Application (BETA). The BETA is here. The Multitenant Self-Provisioning Application is an easy and productive way for DBAs and developers to get familiar with powerful PDB features including create, clone, plug and unplug. There is no better time to start playing with PDBs: Oracle 12c Multitenant Self-Provisioning Application.

    Download: New! Updates to the Oracle Data Integration portfolio. Oracle GoldenGate 12c and Oracle Data Integrator 12c are now available. From real-time data integration, transactional change data capture, data replication and transformations to high-volume, high-performance batch loads and event-driven, trickle-feed integration processes, it is now available. Go here for all the details and links to downloads... and congratulations, Data Integration Team!

    Download: Oracle VM Templates for Oracle 12c. Features support for Single Instance, Oracle Restart and Oracle RAC, and support for all current Oracle Database 11.2 versions as well as Oracle 12c on Oracle Linux 5 Update 9 and Oracle Linux 6 Update 4. The Oracle 12c templates allow end-to-end automation for Flex Cluster, Flex ASM and PDBs. See how the Deploycluster tool was updated to support Single Instance and the new Oracle 12c features: Oracle VM Templates for Oracle Database.

    Download: Oracle SQL Developer Data Modeler 4.0 EA 3. If you're looking for a data modeling and database design tool that provides an environment for capturing, modeling, managing and exploiting metadata, it's time to check out Oracle SQL Developer Data Modeler. Oracle SQL Developer Data Modeler 4.0 EA V3 is here.

  • Android app crashes on emulator - logCat shows no errors

    - by David Miler
    I have just added the ActionBarSherlock library to my Android project. After some small changes (FragmentActivity -> SherlockFragmentActivity, getActionBar() -> getSupportActionBar(), imports) it all compiled nicely. After I run the app, however, the debugger stops, as though it had encountered an exception. However, there are no errors shown in the LogCat output. I just can't wrap my head around what's going on. Here is the LogCat output after I terminate the app:

        10-02 14:11:19.227: I/SystemUpdateService(174): UpdateTask at time 1349187079227
        10-02 14:11:19.237: I/ActivityThread(328): Pub com.android.email.attachmentprovider: com.android.email.provider.AttachmentProvider
        10-02 14:11:19.687: I/dalvikvm(81): Jit: resizing JitTable from 512 to 1024
        10-02 14:11:19.809: D/MediaScannerService(150): start scanning volume internal: [/system/media]
        10-02 14:11:20.047: V/AlarmClock(239): AlarmInitReceiver finished
        10-02 14:11:20.087: I/ActivityManager(81): Start proc com.android.quicksearchbox for broadcast com.android.quicksearchbox/.SearchWidgetProvider: pid=346 uid=10012 gids={3003}
        10-02 14:11:20.127: D/ExchangeService(320): !!! EAS ExchangeService, onStartCommand, startingUp = false, running = false
        10-02 14:11:20.427: I/ActivityThread(346): Pub com.android.quicksearchbox.google: com.android.quicksearchbox.google.GoogleSuggestionProvider
        10-02 14:11:20.497: I/ActivityThread(346): Pub com.android.quicksearchbox.shortcuts: com.android.quicksearchbox.ShortcutsProvider
        10-02 14:11:20.657: I/ActivityManager(81): Start proc com.android.music for broadcast com.android.music/.MediaAppWidgetProvider: pid=358 uid=10028 gids={3003, 1015}
        10-02 14:11:20.927: D/ExchangeService(320): !!! EAS ExchangeService, onCreate
        10-02 14:11:20.967: D/dalvikvm(260): GC_CONCURRENT freed 213K, 6% free 6409K/6791K, paused 5ms+101ms
        10-02 14:11:21.077: D/ExchangeService(320): !!! EAS ExchangeService, onStartCommand, startingUp = true, running = false
        10-02 14:11:21.567: D/GTalkService(174): [ReonnectMgr] ### report Inet condition: status=false, networkType=0
        10-02 14:11:21.587: D/ConnectivityService(81): reportNetworkCondition(0, 0)
        10-02 14:11:21.597: D/ConnectivityService(81): Inet connectivity change, net=0, condition=0,mActiveDefaultNetwork=0
        10-02 14:11:21.597: D/ConnectivityService(81): starting a change hold
        10-02 14:11:21.697: D/GTalkService(174): [RawStanzaProvidersMgr] ##### searchProvidersFromIntent
        10-02 14:11:21.697: D/GTalkService(174): [RawStanzaProvidersMgr] no intent receivers found
        10-02 14:11:21.847: I/SystemUpdateService(174): cancelUpdate (empty URL)
        10-02 14:11:21.847: E/TelephonyManager(174): Hidden constructor called more than once per process!
        10-02 14:11:21.867: D/dalvikvm(174): GC_CONCURRENT freed 337K, 7% free 6561K/7047K, paused 5ms+4ms
        10-02 14:11:21.917: D/GTalkService(174): [ReonnectMgr] ### report Inet condition: status=false, networkType=0
        10-02 14:11:21.917: D/ConnectivityService(81): reportNetworkCondition(0, 0)
        10-02 14:11:21.917: D/ConnectivityService(81): Inet connectivity change, net=0, condition=0,mActiveDefaultNetwork=0
        10-02 14:11:21.917: D/ConnectivityService(81): currently in hold - not setting new end evt
        10-02 14:11:21.990: E/TelephonyManager(174): Original: com.google.android.location, new: com.google.android.gsf
        10-02 14:11:22.027: I/SystemUpdateService(174): removeAllDownloads (cancelUpdate)
        10-02 14:11:22.127: D/dalvikvm(328): GC_CONCURRENT freed 205K, 6% free 6506K/6855K, paused 660ms+3ms
        10-02 14:11:22.197: D/Eas Debug(320): Logging:
        10-02 14:11:22.319: D/dalvikvm(81): GREF has increased to 401
        10-02 14:11:22.947: D/ExchangeService(320): !!! EAS ExchangeService, onStartCommand, startingUp = true, running = false
        10-02 14:11:23.130: D/Eas Debug(320): Logging:
        10-02 14:11:23.307: I//system/bin/fsck_msdos(29): Attempting to allocate 2044 KB for FAT
        10-02 14:11:23.560: I/ActivityManager(81): Starting: Intent { flg=0x10000000 cmp=com.google.android.gsf/.update.SystemUpdateInstallDialog } from pid 174
        10-02 14:11:23.587: I/ActivityManager(81): Starting: Intent { flg=0x10000000 cmp=com.google.android.gsf/.update.SystemUpdateDownloadDialog } from pid 174
        10-02 14:11:24.087: W/ActivityManager(81): Activity pause timeout for ActivityRecord{407c7320 com.android.launcher/com.android.launcher2.Launcher}
        10-02 14:11:24.237: E/TelephonyManager(174): Hidden constructor called more than once per process!
        10-02 14:11:24.237: E/TelephonyManager(174): Original: com.google.android.location, new: com.google.android.gsf
        10-02 14:11:24.507: D/dalvikvm(174): GC_EXPLICIT freed 231K, 7% free 6596K/7047K, paused 4ms+6ms
        10-02 14:11:24.607: D/ConnectivityService(81): Inet hold end, net=0, condition =0, published condition =0
        10-02 14:11:24.607: D/ConnectivityService(81): no change in condition - aborting
        10-02 14:11:24.707: D/dalvikvm(174): GC_EXPLICIT freed 17K, 7% free 6579K/7047K, paused 4ms+4ms
        10-02 14:11:24.947: I//system/bin/fsck_msdos(29): ** Phase 2 - Check Cluster Chains
        10-02 14:11:25.117: I//system/bin/fsck_msdos(29): ** Phase 3 - Checking Directories
        10-02 14:11:25.128: I//system/bin/fsck_msdos(29): ** Phase 4 - Checking for Lost Files
        10-02 14:11:25.167: I//system/bin/fsck_msdos(29): 12 files, 1044448 free (522224 clusters)
        10-02 14:11:25.227: I/Vold(29): Filesystem check completed OK
        10-02 14:11:25.227: I/Vold(29): Device /dev/block/vold/179:0, target /mnt/sdcard mounted @ /mnt/secure/staging
        10-02 14:11:25.237: D/Vold(29): Volume sdcard state changing 3 (Checking) -> 4 (Mounted)
        10-02 14:11:25.257: I/PackageManager(81): Updating external media status from unmounted to mounted
        10-02 14:11:25.457: D/dalvikvm(303): GC_EXPLICIT freed 35K, 6% free 6242K/6595K, paused 3ms+312ms
        10-02 14:11:25.987: D/ExchangeService(320): !!! EAS ExchangeService, onStartCommand, startingUp = true, running = false
        10-02 14:11:26.157: D/MediaScanner(150): prescan time: 2905ms
        10-02 14:11:26.167: D/MediaScanner(150): scan time: 148ms
        10-02 14:11:26.167: D/MediaScanner(150): postscan time: 2ms
        10-02 14:11:26.167: D/MediaScanner(150): total time: 3055ms
        10-02 14:11:26.197: D/MediaScannerService(150): done scanning volume internal
        10-02 14:11:26.237: D/MediaScannerService(150): start scanning volume external: [/mnt/sdcard]
        10-02 14:11:26.497: D/dalvikvm(143): GC_EXPLICIT freed 234K, 8% free 7735K/8327K, paused 3ms+5ms
        10-02 14:11:27.180: D/dalvikvm(143): GC_CONCURRENT freed 150K, 4% free 8004K/8327K, paused 7ms+3ms
        10-02 14:11:27.397: D/dalvikvm(143): GC_FOR_ALLOC freed 96K, 6% free 8310K/8775K, paused 76ms
        10-02 14:11:27.580: D/dalvikvm(143): GC_FOR_ALLOC freed 515K, 11% free 8135K/9095K, paused 79ms
        10-02 14:11:27.829: D/dalvikvm(143): GC_CONCURRENT freed 3K, 5% free 8694K/9095K, paused 7ms+6ms
        10-02 14:11:28.137: V/TLINE(143): new: android.text.TextLine@4065b280
        10-02 14:11:28.527: D/dalvikvm(143): GC_CONCURRENT freed 729K, 10% free 8764K/9671K, paused 5ms+13ms
        10-02 14:11:28.677: D/dalvikvm(143): GC_FOR_ALLOC freed 152K, 11% free 8683K/9671K, paused 99ms
        10-02 14:11:28.717: I/dalvikvm-heap(143): Grow heap (frag case) to 11.434MB for 2975968-byte allocation
        10-02 14:11:28.807: D/dalvikvm(143): GC_FOR_ALLOC freed 0K, 9% free 11589K/12615K, paused 84ms
        10-02 14:11:29.159: D/dalvikvm(143): GC_CONCURRENT freed 197K, 7% free 12195K/12999K, paused 8ms+6ms
        10-02 14:11:29.647: D/dalvikvm(143): GC_EXPLICIT freed 351K, 6% free 12790K/13511K, paused 8ms+17ms
        10-02 14:11:29.717: I/SurfaceFlinger(32): Boot is finished (70768 ms)
        10-02 14:11:29.877: I/ARMAssembler(32): generated scanline__00000177:03010104_00000002_00000000 [ 44 ipp] (66 ins) at [0x407c7290:0x407c7398] in 990662 ns
        10-02 14:11:29.907: I/ARMAssembler(32): generated scanline__00000177:03515104_00000001_00000000 [ 73 ipp] (95 ins) at [0x407c73a0:0x407c751c] in 989381 ns
        10-02 14:11:30.287: D/dalvikvm(174): GC_EXPLICIT freed 25K, 8% free 6554K/7047K, paused 4ms+32ms
        10-02 14:11:30.380: D/dalvikvm(143): GC_EXPLICIT freed 349K, 6% free 13124K/13895K, paused 5ms+25ms
        10-02 14:11:30.957: D/dalvikvm(143): GC_FOR_ALLOC freed 1069K, 10% free 13860K/15239K, paused 81ms
        10-02 14:11:32.177: D/dalvikvm(150): GC_CONCURRENT freed 183K, 6% free 6438K/6791K, paused 5ms+4ms
        10-02 14:11:32.187: W/ActivityManager(81): No content provider found for:
        10-02 14:11:32.607: V/MediaScanner(150): pruneDeadThumbnailFiles... android.database.sqlite.SQLiteCursor@406724a8
        10-02 14:11:32.617: V/MediaScanner(150): /pruneDeadThumbnailFiles... android.database.sqlite.SQLiteCursor@406724a8
        10-02 14:11:32.640: W/ActivityManager(81): No content provider found for:
        10-02 14:11:32.640: D/VoldCmdListener(29): asec list
        10-02 14:11:32.647: I/PackageManager(81): No secure containers on sdcard
        10-02 14:11:32.667: D/MediaScanner(150): prescan time: 107ms
        10-02 14:11:32.667: D/MediaScanner(150): scan time: 89ms
        10-02 14:11:32.667: D/MediaScanner(150): postscan time: 61ms
        10-02 14:11:32.667: D/MediaScanner(150): total time: 257ms
        10-02 14:11:32.697: W/PackageManager(81): Unknown permission android.permission.ADD_SYSTEM_SERVICE in package com.android.phone
        10-02 14:11:32.707: W/PackageManager(81): Unknown permission com.android.smspush.WAPPUSH_MANAGER_BIND in package com.android.phone
        10-02 14:11:32.737: W/PackageManager(81): Not granting permission android.permission.SEND_DOWNLOAD_COMPLETED_INTENTS to package com.android.browser (protectionLevel=2 flags=0x9be45)
        10-02 14:11:32.737: W/PackageManager(81): Not granting permission android.permission.BIND_APPWIDGET to package com.android.widgetpreview (protectionLevel=3 flags=0x28be44)
        10-02 14:11:32.767: W/PackageManager(81): Unknown permission android.permission.READ_OWNER_DATA in package com.android.exchange
        10-02 14:11:32.778: W/PackageManager(81): Unknown permission android.permission.READ_OWNER_DATA in package com.android.email
        10-02 14:11:32.788: W/PackageManager(81): Unknown permission com.android.providers.im.permission.READ_ONLY in package com.google.android.apps.maps
        10-02 14:11:32.797: W/PackageManager(81): Not granting permission android.permission.DEVICE_POWER to package com.android.deskclock (protectionLevel=2 flags=0x8be45)
        10-02 14:11:33.137: D/MediaScannerService(150): done scanning volume external
        10-02 14:11:33.197: D/PackageParser(81): Scanning package: /data/app/vmdl257911298.tmp
        10-02 14:11:33.837: I/InputReader(81): Device reconfigured: id=0, name='qwerty2', surface size is now 1024x800
        10-02 14:11:34.097: D/dalvikvm(81): GC_CONCURRENT freed 12185K, 47% free 13966K/26311K, paused 8ms+23ms
        10-02 14:11:36.798: I/TabletStatusBar(124): DISABLE_CLOCK: no
        10-02 14:11:36.798: I/TabletStatusBar(124): DISABLE_NAVIGATION: no
        10-02 14:11:37.348: I/ARMAssembler(32): generated scanline__00000177:03515104_00001001_00000000 [ 91 ipp] (114 ins) at [0x407c7520:0x407c76e8] in 919320 ns
        10-02 14:11:37.598: I/TabletStatusBar(124): DISABLE_BACK: no
        10-02 14:11:37.710: I/ActivityManager(81): Displayed com.android.launcher/com.android.launcher2.Launcher: +46s212ms
        10-02 14:11:38.817: D/dalvikvm(143): GC_CONCURRENT freed 969K, 8% free 14867K/16007K, paused 4ms+10ms
        10-02 14:11:39.437: I/dalvikvm(81): Jit: resizing JitTable from 1024 to 2048
        10-02 14:11:40.267: D/dalvikvm(143): GC_FOR_ALLOC freed 2357K, 16% free 14395K/17031K, paused 80ms
        10-02 14:11:40.717: D/dalvikvm(143): GC_EXPLICIT freed 742K, 16% free 14358K/17031K, paused 8ms+4ms
        10-02 14:11:41.617: D/dalvikvm(81): GC_CONCURRENT freed 1955K, 48% free 13869K/26311K, paused 9ms+10ms
        10-02 14:11:42.559: D/dalvikvm(81): GC_CONCURRENT freed 1830K, 48% free 13881K/26311K, paused 9ms+9ms
        10-02 14:11:42.758: I/PackageManager(81): Removing non-system package:cz.trilimi.sfaui
        10-02 14:11:42.758: I/ActivityManager(81): Force stopping package cz.trilimi.sfaui uid=10036
        10-02 14:11:42.967: D/PackageManager(81): Scanning package cz.trilimi.sfaui
        10-02 14:11:42.967: I/PackageManager(81): Package cz.trilimi.sfaui codePath changed from /data/app/cz.trilimi.sfaui-1.apk to /data/app/cz.trilimi.sfaui-2.apk; Retaining data and using new
        10-02 14:11:42.967: I/PackageManager(81): Unpacking native libraries for /data/app/cz.trilimi.sfaui-2.apk
        10-02 14:11:43.097: D/installd(35): DexInv: --- BEGIN '/data/app/cz.trilimi.sfaui-2.apk' ---
        10-02 14:11:45.317: D/dalvikvm(391): DexOpt: load 434ms, verify+opt 1260ms
        10-02 14:11:45.407: D/installd(35): DexInv: --- END '/data/app/cz.trilimi.sfaui-2.apk' (success) ---
        10-02 14:11:45.407: W/PackageManager(81): Code path for pkg : cz.trilimi.sfaui changing from /data/app/cz.trilimi.sfaui-1.apk to /data/app/cz.trilimi.sfaui-2.apk
        10-02 14:11:45.407: W/PackageManager(81): Resource path for pkg : cz.trilimi.sfaui changing from /data/app/cz.trilimi.sfaui-1.apk to /data/app/cz.trilimi.sfaui-2.apk
        10-02 14:11:45.407: D/PackageManager(81): Activities: cz.trilimi.sfaui.ItemListActivity cz.trilimi.sfaui.ItemDetailActivity
        10-02 14:11:45.427: I/ActivityManager(81): Force stopping package cz.trilimi.sfaui uid=10036
        10-02 14:11:45.657: I/installd(35): move /data/dalvik-cache/data@[email protected]@classes.dex -> /data/dalvik-cache/data@[email protected]@classes.dex
        10-02 14:11:45.657: D/PackageManager(81): New package installed in /data/app/cz.trilimi.sfaui-2.apk
        10-02 14:11:45.997: I/ActivityManager(81): Force stopping package cz.trilimi.sfaui uid=10036
        10-02 14:11:46.147: D/dalvikvm(143): GC_EXPLICIT freed 3K, 16% free 14356K/17031K, paused 10ms+9ms
        10-02 14:11:46.237: D/PackageManager(81): generateServicesMap(android.accounts.AccountAuthenticator): 3 services unchanged
        10-02 14:11:46.277: D/PackageManager(81): generateServicesMap(android.content.SyncAdapter): 5 services unchanged
        10-02 14:11:46.337: D/PackageManager(81): generateServicesMap(android.accounts.AccountAuthenticator): 3 services unchanged
        10-02 14:11:46.347: D/PackageManager(81): generateServicesMap(android.content.SyncAdapter): 5 services unchanged
        10-02 14:11:46.437: D/dalvikvm(208): GC_EXPLICIT freed 258K, 7% free 6488K/6919K, paused 3ms+5ms
        10-02 14:11:46.477: W/RecognitionManagerService(81): no available voice recognition services found
        10-02 14:11:46.897: I/ActivityManager(81): Start proc com.svox.pico for broadcast com.svox.pico/.VoiceDataInstallerReceiver: pid=398 uid=10006 gids={}
        10-02 14:11:47.087: I/ActivityThread(398): Pub com.svox.pico.providers.SettingsProvider: com.svox.pico.providers.SettingsProvider
        10-02 14:11:47.138: D/GTalkService(174): [GTalkService.1] handlePackageInstalled: re-initialize providers
        10-02 14:11:47.147: D/GTalkService(174): [RawStanzaProvidersMgr] ##### searchProvidersFromIntent
        10-02 14:11:47.147: D/GTalkService(174): [RawStanzaProvidersMgr] no intent receivers found
        10-02 14:11:47.718: I/AccountTypeManager(208): Loaded meta-data for 1 account types, 0 accounts in 186ms
        10-02 14:11:48.377: D/dalvikvm(143): GC_CONCURRENT freed 1865K, 15% free 14513K/17031K, paused 7ms+4ms
        10-02 14:11:48.917: D/dalvikvm(208): GC_CONCURRENT freed 219K, 6% free 6788K/7175K, paused 7ms+73ms
        10-02 14:11:49.207: D/dalvikvm(143): GC_FOR_ALLOC freed 4558K, 31% free 11866K/17031K, paused 89ms
        10-02 14:11:49.587: D/dalvikvm(143): GC_CONCURRENT freed 713K, 24% free 13010K/17031K, paused 5ms+4ms
        10-02 14:11:49.967: D/dalvikvm(143): GC_CONCURRENT freed 1046K, 19% free 13922K/17031K, paused 5ms+4ms
        10-02 14:11:50.437: D/dalvikvm(81): GC_EXPLICIT freed 898K, 47% free 13955K/26311K, paused 6ms+39ms
        10-02 14:11:50.467: I/installd(35): unlink /data/dalvik-cache/data@[email protected]@classes.dex
        10-02 14:11:50.477: D/AndroidRuntime(227): Shutting down VM
        10-02 14:11:50.507: D/dalvikvm(227): GC_CONCURRENT freed 97K, 84% free 331K/2048K, paused 1ms+2ms
10-02 14:11:50.507: I/AndroidRuntime(227): NOTE: attach of thread 'Binder Thread #3' failed 10-02 14:11:50.517: D/jdwp(227): adbd disconnected 10-02 14:11:51.177: D/AndroidRuntime(410): >>>>>> AndroidRuntime START com.android.internal.os.RuntimeInit <<<<<< 10-02 14:11:51.177: D/AndroidRuntime(410): CheckJNI is ON 10-02 14:11:51.897: D/AndroidRuntime(410): Calling main entry com.android.commands.am.Am 10-02 14:11:51.937: I/ActivityManager(81): Force stopping package cz.trilimi.sfaui uid=10036 10-02 14:11:51.937: I/ActivityManager(81): Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=cz.trilimi.sfaui/.ItemListActivity } from pid 410 10-02 14:11:51.968: W/WindowManager(81): Failure taking screenshot for (230x179) to layer 21005 10-02 14:11:51.997: I/ActivityManager(81): Start proc cz.trilimi.sfaui for activity cz.trilimi.sfaui/.ItemListActivity: pid=418 uid=10036 gids={} 10-02 14:11:52.007: D/AndroidRuntime(410): Shutting down VM 10-02 14:11:52.057: I/AndroidRuntime(410): NOTE: attach of thread 'Binder Thread #3' failed 10-02 14:11:52.097: D/dalvikvm(410): GC_CONCURRENT freed 98K, 83% free 360K/2048K, paused 1ms+0ms 10-02 14:11:52.097: D/jdwp(410): adbd disconnected 10-02 14:11:53.147: W/ActivityThread(418): Application cz.trilimi.sfaui is waiting for the debugger on port 8100... 10-02 14:11:53.207: I/System.out(418): Sending WAIT chunk 10-02 14:11:53.217: I/dalvikvm(418): Debugger is active 10-02 14:11:53.447: I/System.out(418): Debugger has connected 10-02 14:11:53.457: I/System.out(418): waiting for debugger to settle... 10-02 14:11:53.637: I/ARMAssembler(32): generated scanline__00000177:03515104_00001002_00000000 [ 87 ipp] (110 ins) at [0x407c76f0:0x407c78a8] in 598498 ns 10-02 14:11:53.660: I/System.out(418): waiting for debugger to settle... 10-02 14:11:53.857: I/System.out(418): waiting for debugger to settle... 10-02 14:11:54.057: I/System.out(418): waiting for debugger to settle... 10-02 14:11:54.257: I/System.out(418): waiting for debugger to settle... 10-02 14:11:54.317: V/TLINE(81): new: android.text.TextLine@4155dde8 10-02 14:11:54.467: I/System.out(418): waiting for debugger to settle... 10-02 14:11:54.667: I/System.out(418): waiting for debugger to settle... 10-02 14:11:54.870: I/System.out(418): waiting for debugger to settle... 10-02 14:11:55.027: D/dalvikvm(143): GC_EXPLICIT freed 900K, 16% free 14420K/17031K, paused 7ms+4ms 10-02 14:11:55.067: I/System.out(418): waiting for debugger to settle... 10-02 14:11:55.292: I/System.out(418): debugger has settled (1315) 10-02 14:12:02.008: W/ActivityManager(81): Launch timeout has expired, giving up wake lock! 10-02 14:12:02.971: W/ActivityManager(81): Activity idle timeout for ActivityRecord{4078c6b0 cz.trilimi.sfaui/.ItemListActivity} 10-02 14:12:08.359: D/ExchangeService(320): Received deviceId from Email app: androidc259148960 10-02 14:12:08.507: D/ExchangeService(320): Reconciling accounts... 10-02 14:16:11.437: D/SntpClient(81): request time failed: java.net.SocketException: Address family not supported by protocol 10-02 14:17:21.573: W/jdwp(418): Debugger is telling the VM to exit with code=1 10-02 14:17:21.573: I/dalvikvm(418): GC lifetime allocation: 8642 bytes 10-02 14:17:21.637: D/Zygote(33): Process 418 exited cleanly (1) 10-02 14:17:21.651: I/ActivityManager(81): Process cz.trilimi.sfaui (pid 418) has died. 
10-02 14:17:21.847: D/dalvikvm(143): GC_EXPLICIT freed <1K, 16% free 14420K/17031K, paused 7ms+7ms 10-02 14:17:21.917: W/InputManagerService(81): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@40bfbf28

    Read the article

  • Sniffing out SQL Code Smells: Inconsistent use of Symbolic names and Datatypes

    - by Phil Factor
    It is an awkward feeling. You've just delivered a database application that seems to be working fine in production, and you just run a few checks on it. You discover that there is a potential bug that, out of sheer good chance, hasn't kicked in to produce an error; but it lurks, like a smoking bomb. Worse, maybe you find that the bug has started its evil work of corrupting the data, but in ways that nobody has, so far, detected. You investigate, and find the damage. You are somehow going to have to repair it. Yes, it still very occasionally happens to me. It is not a nice feeling, and I do anything I can to prevent it happening. That's why I'm interested in SQL code smells. SQL code smells aren't necessarily bad practices; they just show you where to focus your attention when checking an application. Sometimes with databases the bugs can be subtle.

    SQL is rather like HTML: the language does its best to try to carry out your wishes, rather than to be picky about your bugs. Most of the time, this is a great benefit, but not always. One particular place where this can be detrimental is where you have implicit conversion between different data types. Most of the time it is completely harmless, but we're concerned about the occasional time it isn't. Let's give an example: string truncation. Let's give another, even more frightening one: rounding errors on assignment to a number of different precision. Each requires a blog post to explain in detail and I'm not now going to try. Just remember that it is not always a good idea to assign data to variables, parameters or even columns when they aren't the same datatype, especially if you are relying on implicit conversion to work its magic. For details of the problem and the consequences, see here: SR0014: Data loss might occur when casting from {Type1} to {Type2}. For any experienced Database Developer, this is a more frightening read than a Vampire Story.
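    By way of a quick illustration, both failure modes fit in a few lines of T-SQL. This is a minimal sketch of my own (the variable names are invented for the demo), not one of the article's listings:

    -- String truncation: assigning to a too-short VARCHAR loses data silently
    DECLARE @LogEntry VARCHAR(10);
    SET @LogEntry = 'This entry is rather longer than ten characters';
    SELECT @LogEntry;  -- returns 'This entry', with no error or warning

    -- Rounding on assignment: the value quietly changes on its way into NUMERIC(5,2)
    DECLARE @Amount NUMERIC(5,2);
    SET @Amount = 123.456;
    SELECT @Amount;    -- returns 123.46, not the value you were given

    Neither statement raises so much as a warning, which is exactly why this class of bug gets to lurk.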
    This is why one of the SQL code smells that makes me edgy, in my own or other people's code, is to see parameters, variables and columns that have the same names and different datatypes. Whereas quite a lot of this is perfectly normal and natural, you need to check in case one of two things has gone wrong: either sloppy naming, or mixed datatypes. Sure, it is hard to remember whether you decided that the length of a log entry was 80 or 100 characters, or what the precision of a number was. That is why a little check like the one I'm going to show you is excellent for tidying up your code before you check it back into source control!

    1/ Checking Parameters only

    If you were just going to check parameters, you might just do this. It simply groups all the parameters, either input or output, of all the routines (e.g. stored procedures or functions) by their name and checks to see, in the HAVING clause, whether their data types are all the same. If not, it lists all the examples and their origin (the routine). Even this little check can occasionally be scarily revealing.

    ;WITH userParameter AS
     ( SELECT c.NAME AS ParameterName,
         OBJECT_SCHEMA_NAME(c.object_ID) + '.' + OBJECT_NAME(c.object_ID) AS ObjectName,
         t.name + ' '
           + CASE --we may have to put in the length
               WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')
                 THEN '('
                   + CASE WHEN c.max_length = -1 THEN 'MAX'
                      ELSE CONVERT(VARCHAR(4),
                             CASE WHEN t.name IN ('nchar', 'nvarchar')
                               THEN c.max_length / 2 ELSE c.max_length
                             END)
                     END + ')'
               WHEN t.name IN ('decimal', 'numeric')
                 THEN '(' + CONVERT(VARCHAR(4), c.precision)
                      + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'
               ELSE ''
             END --we've done with putting in the length
           + CASE WHEN XML_collection_ID <> 0
               THEN --deal with object schema names
                 '(' + CASE WHEN is_XML_Document = 1
                         THEN 'DOCUMENT '
                         ELSE 'CONTENT '
                       END
                 + COALESCE(
                     (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)
                      FROM sys.xml_schema_collections sc
                      INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID
                      WHERE sc.xml_collection_ID = c.XML_collection_ID), 'NULL') + ')'
               ELSE ''
             END AS [DataType]
       FROM sys.parameters c
       INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
       WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'
         AND parameter_id > 0)
    SELECT CONVERT(CHAR(80), objectName + '.' + ParameterName), DataType
    FROM UserParameter
    WHERE ParameterName IN
      (SELECT ParameterName FROM UserParameter
       GROUP BY ParameterName
       HAVING MIN(Datatype) <> MAX(DataType))
    ORDER BY ParameterName

    So, in a very small example here, we have a @ClosingDelimiter variable that is only CHAR(1) when, by the looks of it, it should be up to ten characters long; or, even worse, a function that should take a CHAR(1) and seems to let in a string of ten characters. Worth investigating. Then we have a @Comment variable that can't decide whether it is a VARCHAR(2000) or a VARCHAR(MAX).
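    If you want to see that last clash for yourself, a deliberately inconsistent pair of routines is enough to make the check light up. This is a hypothetical sketch of mine, not objects from any real database:

    -- Two routines whose @Comment parameters disagree, echoing the
    -- VARCHAR(2000) / VARCHAR(MAX) example above
    CREATE PROCEDURE dbo.LogError @Comment VARCHAR(2000)
    AS
    SET NOCOUNT ON;
    PRINT @Comment;
    GO
    CREATE PROCEDURE dbo.LogWarning @Comment VARCHAR(MAX)
    AS
    SET NOCOUNT ON;
    PRINT @Comment;
    GO

    Run the parameter listing above and both dbo.LogError.@Comment and dbo.LogWarning.@Comment appear in the results, each with its own version of the datatype.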
    2/ Columns and Parameters

    Actually, once we've cleared up the mess we've made of our parameter-naming in the database we're inspecting, we're going to be more interested in listing both columns and parameters. We can do this by modifying the routine to list columns as well as parameters. Because of the slight complexity of creating the string version of the datatypes, we will create a fake table of both columns and parameters so that they can both be processed the same way. After all, we want the datatypes to match. Unfortunately, parameters do not expose all the attributes we are interested in, such as whether they are nullable (oh yes, subtle bugs happen if this isn't consistent for a datatype), so we'll have to leave those out for this check. Voila! A slight modification of the first routine:

    ;WITH userObject AS
     ( SELECT Name AS DataName, --the actual name of the parameter or column ('@' removed)
         --and the qualified object name of the routine
         OBJECT_SCHEMA_NAME(ObjectID) + '.' + OBJECT_NAME(ObjectID) AS ObjectName,
         --now the harder bit: the definition of the datatype.
         TypeName + ' '
           + CASE --we may have to put in the length, e.g. CHAR (10)
               WHEN TypeName IN ('char', 'varchar', 'nchar', 'nvarchar')
                 THEN '('
                   + CASE WHEN MaxLength = -1 THEN 'MAX'
                      ELSE CONVERT(VARCHAR(4),
                             CASE WHEN TypeName IN ('nchar', 'nvarchar')
                               THEN MaxLength / 2 ELSE MaxLength
                             END)
                     END + ')'
               WHEN TypeName IN ('decimal', 'numeric') --a BCD number!
                 THEN '(' + CONVERT(VARCHAR(4), Precision)
                      + ',' + CONVERT(VARCHAR(4), Scale) + ')'
               ELSE ''
             END --we've done with putting in the length
           + CASE WHEN XML_collection_ID <> 0 --tush tush. XML
               THEN --deal with object schema names
                 '(' + CASE WHEN is_XML_Document = 1
                         THEN 'DOCUMENT '
                         ELSE 'CONTENT '
                       END
                 + COALESCE(
                     (SELECT TOP 1 QUOTENAME(ss.name) + '.' + QUOTENAME(sc.Name)
                      FROM sys.xml_schema_collections sc
                      INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID
                      WHERE sc.xml_collection_ID = XML_collection_ID), 'NULL') + ')'
               ELSE ''
             END AS [DataType],
         DataObjectType
       FROM
         (SELECT t.name AS TypeName, REPLACE(c.name, '@', '') AS Name,
            c.max_length AS MaxLength, c.precision AS [Precision],
            c.scale AS [Scale], c.[Object_id] AS ObjectID, XML_collection_ID,
            is_XML_Document, 'P' AS DataobjectType
          FROM sys.parameters c
          INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
            AND parameter_id > 0
          UNION ALL
          SELECT t.name AS TypeName, c.name AS Name, c.max_length AS MaxLength,
            c.precision AS [Precision], c.scale AS [Scale],
            c.[Object_id] AS ObjectID, XML_collection_ID, is_XML_Document,
            'C' AS DataobjectType
          FROM sys.columns c
          INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
          WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'
         ) f
     )
    SELECT CONVERT(CHAR(80), objectName + '.'
             + CASE WHEN DataobjectType = 'P' THEN '@' ELSE '' END + DataName), DataType
    FROM UserObject
    WHERE DataName IN
      (SELECT DataName FROM UserObject
       GROUP BY DataName
       HAVING MIN(Datatype) <> MAX(DataType))
    ORDER BY DataName

    Hmm. I can tell you I found quite a few minor issues with the various databases I tested this on, and found some potential bugs that really leap out at you from the results. Here is the start of the result for AdventureWorks. Yes, AccountNumber is, for some reason, a VARCHAR(10) in the Customer table. Hmm, odd. Why is a city fifty characters long in that view? The idea of the description of a colour being 256 characters long seems over-ambitious. Go down the list and you'll spot other mistakes. There are no bugs, but just mess.
    We started out with a listing to examine parameters, then we mixed parameters and columns. Our last listing is for a slightly more in-depth look at table columns. You'll notice that we've deliberately removed the indication of whether a column is persisted, or is an identity column, because that gives us false positives for our code smells. If you just want to browse your metadata for other reasons (and it can be quite a help in some circumstances), then uncomment them!

    ;WITH userColumns AS
     ( SELECT c.NAME AS columnName,
         OBJECT_SCHEMA_NAME(c.object_ID) + '.' + OBJECT_NAME(c.object_ID) AS ObjectName,
         REPLACE(t.name + ' '
           + CASE WHEN is_computed = 1 THEN ' AS ' + --do DDL for a computed column
                  (SELECT definition FROM sys.computed_columns cc
                   WHERE cc.object_id = c.object_id AND cc.column_ID = c.column_ID)
               --we may have to put in the length
               WHEN t.Name IN ('char', 'varchar', 'nchar', 'nvarchar')
                 THEN '(' + CASE WHEN c.Max_Length = -1 THEN 'MAX'
                        ELSE CONVERT(VARCHAR(4),
                               CASE WHEN t.Name IN ('nchar', 'nvarchar')
                                 THEN c.Max_Length / 2 ELSE c.Max_Length
                               END)
                        END + ')'
               WHEN t.name IN ('decimal', 'numeric')
                 THEN '(' + CONVERT(VARCHAR(4), c.precision) + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'
               ELSE ''
             END
           + CASE WHEN c.is_rowguidcol = 1
               THEN ' ROWGUIDCOL'
               ELSE ''
             END
           + CASE WHEN XML_collection_ID <> 0
               THEN --deal with object schema names
                 '(' + CASE WHEN is_XML_Document = 1
                         THEN 'DOCUMENT '
                         ELSE 'CONTENT '
                       END
                 + COALESCE((SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)
                             FROM sys.xml_schema_collections sc
                             INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID
                             WHERE sc.xml_collection_ID = c.XML_collection_ID),
                            'NULL') + ')'
               ELSE ''
             END
           + CASE WHEN is_identity = 1
               THEN CASE WHEN OBJECTPROPERTY(object_id, 'IsUserTable') = 1
                          AND COLUMNPROPERTY(object_id, c.name, 'IsIDNotForRepl') = 0
                          AND OBJECTPROPERTY(object_id, 'IsMSShipped') = 0
                      THEN ''
                      ELSE ' NOT FOR REPLICATION '
                    END
               ELSE ''
             END
           + CASE WHEN c.is_nullable = 0
               THEN ' NOT NULL'
               ELSE ' NULL'
             END
           + CASE WHEN c.default_object_id <> 0
               THEN ' DEFAULT ' + object_Definition(c.default_object_id)
               ELSE ''
             END
           + CASE WHEN c.collation_name IS NULL
               THEN ''
               WHEN c.collation_name <> (SELECT collation_name
                                         FROM sys.databases
                                         WHERE name = DB_NAME()) COLLATE Latin1_General_CI_AS
               THEN COALESCE(' COLLATE ' + c.collation_name, '')
               ELSE ''
             END, '  ', ' ') AS [DataType]
       FROM sys.columns c
       INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
       WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys')
    SELECT CONVERT(CHAR(80), objectName + '.' + columnName), DataType
    FROM UserColumns
    WHERE columnName IN
      (SELECT columnName FROM UserColumns
       GROUP BY columnName
       HAVING MIN(Datatype) <> MAX(DataType))
    ORDER BY columnName

    If you take a look down the results against AdventureWorks, you'll see once again that there are things to investigate: mostly, in the illustration, discrepancies between NULL and NOT NULL datatypes. So I hear you ask, what about temporary variables within routines? If ever there was a source of elusive bugs, you'll find it there. Sadly, these temporary variables are not stored in the metadata, so we'll have to find a more subtle way of flushing those out; and that will, I'm afraid, have to wait!

    Read the article

  • It's intellisense for SQL Server

    - by Nick Harrison
    It's intellisense for SQL Server. Anyone who has ever worked with me, heard me speak, or read any of my writings knows that I am a HUGE fan of Reflector. By extension, I am a big fan of Red Gate, and I have recently begun exploring some of their other offerings; that's how I came across this jewel. SQL Prompt is a plug-in for Visual Studio and SQL Server Management Studio. It provides several tools to make dealing with SQL a little easier for your friendly neighborhood developer. When you open a query window in a database, the plug-in kicks in and gathers the metadata for the database that you are in. As you type a query, you get handy feedback, like a list of tables after you type SELECT. You can select one of the tables, specify *, and then tab to expand the select clause to include all of the columns from the selected table. As you are building up the WHERE clause, you are prompted with the names of columns in the selected tables. If you spend any time writing ad hoc queries or building stored procedures by hand, this can save you substantial time. If you are learning a new data model, this can greatly cut down on your frustration level.

    The other really cool thing here is Format SQL. I have searched all over the place for a really good SQL formatter. Badly formatted SQL is so much harder to read than well-formatted SQL. Unfortunately, Management Studio offers no support for keeping your SQL well formatted. There are many tools available to format your SQL: some work better than others, and some don't work that well at all. Most will give you some measure of control over how the formatted SQL looks. SQL Prompt produces good results and is easy to configure.

    Sadly no tool is perfect, and what would we be without a wish list? There are some features that I would like to see: make it easier to paste SQL in and out of code (stripping off string builders, etc.); automate replacing hard-coded values with bind variables or parameters; and, in addition to reformatting SQL, which is a huge refactor, support for other SQL refactors would be nice (convert join to subquery and vice versa come to mind). Wish list aside, this is a wonderful tool that easily saves me an hour or more most weeks.
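    To make the Format SQL point concrete, here is a small before-and-after sketch of my own; it is not output captured from the tool, and the AdventureWorks-style table name is only an assumption for the demo:

    -- Before: the sort of one-liner that accumulates in ad hoc query windows
    select soh.CustomerID, sum(soh.TotalDue) from Sales.SalesOrderHeader soh where soh.OrderDate >= '20040101' group by soh.CustomerID having sum(soh.TotalDue) > 10000 order by 2 desc

    -- After: the same query once a formatter has been at it
    SELECT   soh.CustomerID,
             SUM(soh.TotalDue) AS TotalDue
    FROM     Sales.SalesOrderHeader AS soh
    WHERE    soh.OrderDate >= '20040101'
    GROUP BY soh.CustomerID
    HAVING   SUM(soh.TotalDue) > 10000
    ORDER BY TotalDue DESC;

    Exactly how the output is laid out is configurable; the point is that nobody should have to read the first version.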

    Read the article

  • Merge Join component sorted outputs [SSIS]

    - by jamiet
    One question that I have been asked a few times of late in regard to performance tuning SSIS data flows is this: why isn't the Merge Join output sorted (i.e. IsSorted=True)? This is a fair question. After all, both of the Merge Join inputs are sorted, so why wouldn't the output be sorted as well? Well, here's a little secret: the Merge Join output IS sorted! There's a caveat though – it is only under certain circumstances, and SSIS itself doesn't do a good job of informing you of it. Let's take a look at an example. Here we have a dataflow that consumes data from the [AdventureWorks2008].[Sales].[SalesOrderHeader] & [AdventureWorks2008].[Sales].[SalesOrderDetail] tables and then joins them using a Merge Join component. Let's take a look inside the editor of the Merge Join: we are joining on the [SalesOrderId] field (which is what the two inputs just happen to be sorted upon), and we are also putting [SalesOrderHeader].[SalesOrderId] into the output. Believe it or not, the output from this Merge Join component is sorted (i.e. has IsSorted=True), but unfortunately the Merge Join component does not have an Advanced Editor, hence it is hidden away from us. There are a couple of ways to prove to you that this is the case; I could open up the package XML inside the .dtsx file and show you the metadata, but there is an easier way than that – I can attach a Sort component to the output. Take a look: notice that the Sort component is attempting to sort on the [SalesOrderId] column. This gives us the following warning: Validation warning. DFT Get raw data: {992B7C9A-35AD-47B9-A0B0-637F7DDF93EB}: The data is already sorted as specified so the transform can be removed. The warning proves that the output from the Merge Join is sorted! It must be noted that the Merge Join output will only have IsSorted=True if at least one of the join columns is included in the output. So there you go, the Merge Join component can indeed produce a sorted output, and that's very useful in order to avoid unnecessary, expensive Sort operations downstream. Hope this is useful to someone out there! @Jamiet  P.S. Thank you to Bob Bojanic on the SSIS product team who pointed this out to me!
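    As a point of reference, each of a Merge Join's two inputs must already arrive sorted on the join key. A minimal sketch of the kind of source queries that could feed the example above (my illustration, with assumed column lists; in SSIS you would still have to set IsSorted=True and SortKeyPosition=1 on each source's output yourself):

    -- Left input: order headers, sorted on the join key
    SELECT SalesOrderID, OrderDate, CustomerID
    FROM   Sales.SalesOrderHeader
    ORDER  BY SalesOrderID;

    -- Right input: order details, sorted on the same key
    SELECT SalesOrderID, SalesOrderDetailID, ProductID, OrderQty
    FROM   Sales.SalesOrderDetail
    ORDER  BY SalesOrderID;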

    Read the article

  • Hype and LINQ

    - by Tony Davis
    "Tired of querying in antiquated SQL?" I blinked in astonishment when I saw this headline on the LinqPad site. Warming to its theme, the site suggests that what we need is to "kiss goodbye to SSMS", and instead use LINQ, a modern query language! Elsewhere, there is an article entitled "Why LINQ beats SQL". The designers of LINQ, along with many DBAs, would, I'm sure, cringe with embarrassment at the suggestion that LINQ and SQL are, in any sense, competitive ways of doing the same thing. In fact what LINQ really is, at last, is an efficient, declarative language for C# and VB programmers to access or manipulate data in objects, local data stores, ORMs, web services, data repositories, and, yes, even relational databases. The fact is that LINQ is essentially declarative programming in a .NET language, and so in many ways encourages developers into a "SQL-like" mindset, even though they are not directly writing SQL. In place of imperative logic and loops, it uses various expressions, operators and declarative logic to build up an "expression tree" describing only what data is required, not the operations to be performed to get it. This expression tree is then parsed by the language compiler, and the result, when used against a relational database, is a SQL string that, while perhaps not always perfect, is often correctly parameterized and certainly no less "optimal" than what is achieved when a developer applies blunt, imperative logic to the SQL language. From a developer standpoint, it is a mistake to consider LINQ simply as a substitute means of querying SQL Server. The strength of LINQ is that that can be used to access any data source, for which a LINQ provider exists. Microsoft supplies built-in providers to access not just SQL Server, but also XML documents, .NET objects, ADO.NET datasets, and Entity Framework elements. LINQ-to-Objects is particularly interesting in that it allows a declarative means to access and manipulate arrays, collections and so on. Furthermore, as Michael Sorens points out in his excellent article on LINQ, there a whole host of third-party LINQ providers, that offers a simple way to get at data in Excel, Google, Flickr and much more, without having to learn a new interface or language. Of course, the need to be generic enough to deal with a range of data sources, from something as mundane as a text file to as esoteric as a relational database, means that LINQ is a compromise and so has inherent limitations. However, it is a powerful and beautifully compact language and one that, at least in its "query syntax" guise, is accessible to developers and DBAs alike. Perhaps there is still hope that LINQ can fulfill Phil Factor's lobster-induced fantasy of a language that will allow us to "treat all data objects, whether Word files, Excel files, XML, relational databases, text files, HTML files, registry files, LDAPs, Outlook and so on, in the same logical way, as linked databases, and extract the metadata, create the entities and relationships in the same way, and use the same SQL syntax to interrogate, create, read, write and update them." Cheers, Tony.

    Read the article

  • Web 2.0 Solutions with Oracle WebCenter 11g – Book Review

    - by juan.ruiz
    Recently I obtained a copy of the book Web 2.0 Solutions with Oracle WebCenter 11g from Packt Publishing. Right away I noticed that one of the authors of this book is Plinio Arbizu, a good and long-time colleague of mine, whom I have joined for different developer events in Latin America in the past. In this entry you will find my review of the book.

    Chapter 1: What's Oracle WebCenter? Provides you with the basic knowledge to understand the pieces of WebCenter and the role that these pieces play in the overall Oracle Fusion Middleware strategy.

    Chapters 2 and 3: Guide you through the installation and set-up steps required to start developing Web 2.0 applications. The screenshots are very helpful.

    Chapter 4: Guides you through a series of steps for creating a basic HelloWorld application that uses the ADF/Web services/WebCenter framework, to understand the relevant pieces that are part of the architecture in large Web 2.0 solutions for WebCenter. One caveat on this chapter is that the use of HTML in combination with ADF Faces is not a recommended practice, because in some cases (not in this one) HTML code generated by the components can conflict with existing HTML code placed on the same page... so be careful.

    Chapter 5: Describes the basics of using ADF Faces Rich Client Components, with templates and ADF Business Components.

    Chapter 6: Explains how to encapsulate, deploy and consume ADF UIs as JSR 168 portlets in a declarative way.

    Chapter 7: Explains some of the WebCenter services and the different ways these services can be integrated within WebCenter applications.

    Chapter 8: Goes over how to include a series of out-of-the-box WebCenter services within applications. This chapter presents a simple and clear way of including RSS feeds, search capabilities, tagging and discussions, using practical samples that are easy to follow.

    Chapter 9: Presents an important component of Oracle WebCenter: the Composer. Through the Composer and Oracle Metadata Services, applications gain all the functionality needed to perform end-user personalizations, which is a very common use case when working with portals. The concept is self-explanatory when running through the practice developed in this chapter.

    Chapter 10: Provides an introduction to WebCenter Spaces, explaining common concepts about installation and administration (role creation, group creation, etc.), and through a sample the readers can put everything into practice in their own environments.

    Summary: This book provides the reader with a fast start for working with Oracle WebCenter 11g and its different components. In my opinion the book targets a developer audience rather than the portal or content-generator audience. To better understand the concepts discussed, I recommend that readers first understand the basics of Oracle Application Development Framework. Believe me, you can thank me later!

    Read the article

  • Version control and data provenance in charts, slides, and marketing materials that derive from code ouput

    - by EMS
    I develop as part of a small team that mostly does research and statistics stuff. But from the output of our code, other teams often create promotional materials, slides, presentations, etc. We run into a big problem because the marketing team (non-programmers) tend to use Excel, Adobe products, or other tools to carry out their work, and just want easy-to-use data formats from us. This leads to data provenance problems. We see email chains with attachments from 6 months ago and someone is saying "Hey, who generated this data. Can you generate more of it with the recent 6 months of results added in?" I want to help the other teams effectively use version control (my team uses it reasonably well for the code, but every other team classically comes up with many excuses to avoid it). For version controlling a software project where the participants are coders, I have some reasonable understanding of best practices and what to do. But for getting a team of marketing professionals to version control marketing materials and associate metadata about the software used to generate the data for the charts, I'm a bit at a loss. Some of the goals I'd like to achieve: Data that supported a material should never be associated with a person. As in, it should never be the case that someone says "Hey Person XYZ, I see you sent me this data as an attachment 6 months ago, can you update it for me?" Rather, data should be associated with the code and code-version of any code that was used to get it, and perhaps a team of many people who may maintain that code. Then references for data updates are about executing a specific piece of code, with a known version number. I'd like this to be a process that works easily with the tech that the marketing team already uses (e.g. Excel files, Adobe file, whatever). I don't want to burden them with needing to learn a bunch of new stuff just to use version control. They are capable folks, so learning something is fine. Ideally they could use our existing version control framework, but there are some issues around that. I think knowing some general best practices will be enough though, and I can handle patching that into the way our stuff works now. Are there any goals I am failing to think about? What are the time-tested ways to do something like this?

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-19

    - by Bob Rhubart
    BPM Process Accelerator Packs – Update | Pat Shepherd Architect Pat Shepherd shares several resources relevant to the new Oracle Process Accelerators for Oracle Business Process Management. Oracle BI EE Management Pack Now Available for Oracle Enterprise Manager 12cR2 | Mark Rittman A handy and informative overview from Oracle ACE Director Mark Rittman. WebSockets on WebLogic Server | Steve Button "As there's no standard WebSocket Java API at this point, we've chosen to model the API on the Grizzly WebSocket API with some minor changes where necessary," says James "Buttso" Buttons. "Once the results of JSR-356 (Java API for WebSocket) becomes public, we'll look to implement and support that." Oracle Reference Architecture: Software Engineering This document from the IT Strategies from Oracle library focuses on integrated asset management and the need for effective asset metadata management to ensure that assets are properly tracked and reused in a manner that provides a holistic functional view of the enterprise. The tipping point for cloud management is nigh | Cloud Computing - InfoWorld "Businesses typically don't think too much about managing IT resources until they become too numerous and cumbersome to deal with on an ad hoc basis—a point many companies will soon hit in their adoption of cloud computing." — David Linthicum DevOps Basics: Track Down High CPU Thread with ps, top and the new JDK7 jcmd Tool | Frank Munz "The approach is very generic and works for WebLogic, Glassfish or any other Java application," says Frank Munz. "UNIX commands in the example are run on CentOS, so they will work without changes for Oracle Enterprise Linux or RedHat. Creating the thread dump at the end of the video is done with the jcmd tool from JDK7." Frank has captured the process in the posted video. OIM 11g R2 UI customization | Daniel Gralewski "OIM user interface customizations are easier now, and they survive patch applications--there is no need to reapply them after patching," says Fusion Middleware A-Team member Daniel Gralewski. "Adding new artifacts, new skins, and plugging code directly into the user interface components became an easier task." Daniel shows just how easy in this post. Thought for the Day "I have yet to see any problem, however complicated, which, when looked at in the right way, did not become still more complicated." — Poul Anderson (November 25, 1926 – July 31, 2001) Source: SoftwareQuotes.com

    Read the article

  • Tackling Big Data Analytics with Oracle Data Integrator

    - by Irem Radzik
    By Mike Eisterer. The term big data draws a lot of attention, but behind the hype there's a simple story. For decades, companies have been making business decisions based on transactional data stored in relational databases. Beyond that critical data, however, is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and documents that can be mined for useful information. Companies are facing emerging technologies, increasing data volumes, numerous data varieties, and the processing power needed to efficiently analyze data which changes with high velocity. Oracle offers the broadest and most integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships.

    Oracle Data Integrator Enterprise Edition (ODI) is critical to any enterprise big data strategy. ODI and the Oracle Data Connectors provide native access to Hadoop, leveraging such technologies as MapReduce, HDFS and Hive. Along with ODI's metadata-driven approach to extracting, loading and transforming data, companies may now integrate their existing data with big data technologies and deliver timely and trusted data to their analytic and decision support platforms. In this session, you'll learn about ODI and Oracle Big Data Connectors and how, coupled together, they provide the critical integration with multiple big data platforms.

    Tackling Big Data Analytics with Oracle Data Integrator, October 1, 2012, 12:15 PM at MOSCONE WEST – 3005.

    For other data integration sessions at OpenWorld, please check our Focus-On document. If you are not able to attend OpenWorld, please check out our latest resources for Data Integration.

    Read the article

  • BI&EPM in Focus December 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    Share with your customers: October Edition of Business Analytics Customer Newsletter (link) Oracle OpenWorld Presentation pdf's available for download (link) OOW Mark Hurd Recap: Business Analytics at Oracle OpenWorld (video | blog) Register your customers for Oracle Days 2012 (link | video) BI & EPM Business Analytics Advisor Webcasts on My.Oracle.Support - Current Schedule and Archived (link) Customers Wüstenrot Efficiently Generates Reports and Analyzes Data with Enterprise Reporting Solution Empresas Públicas Medellin Gathers Data for Annual, Financial Projections 70% Faster ICON Improves Month-End Reporting Significantly Using a Single Source for Timely Consistent Business Intelligence, Reduces Reliance on Spreadsheets Gilead Sciences, a science-led company backed by business-led IT, uses Oracle solutions to simplify business processes and establish a foundation for continued growth Dell Enhances the Customer Experience with Oracle's RTD (video) Link to Complete Archive Enterprise Performance Management eBook: Transforming Enterprise Business Planning (link) Blog: Why CFO's should care about Big Data (link) Oracle Hyperion Project Financial Planning - New Projects Feature Release 11.1.2.2 Video Feature Overview. Now Available with many other Hyperion overviews on the YouTube Oracle EPM Channel (link) Available Patch Sets and Patch Set Updates for Oracle Hyperion Enterprise Performance Management Products on My.Oracle.Support (link) Hyperion Disclosure Management supplementary materials provides a set of guides for Disclosure Management and Taxonomy Designer users, including best practices guidelines, a full Disclosure Management sample report, a webinar series, and other guiding materials on My.Oracle.Support (link) See the selection of EPM Customer Videos at MediaNetwork (Hyperion) Business Intelligence Webcast Replay: Big Data, Bright Future, featuring Andrew McAfee (link) Webinar series and guides on Getting Started With Hyperion Interactive Reporting Translation Workbench, a tool that accelerates metadata conversion from IR to Oracle BI Enterprise Edition (OBIEE) on My.Oracle.Support (link) See the selection of BI Customer Videos at MediaNetwork (BI) and for (Exalytics) and (Endeca) ORACLE TEAM USA Analytics Dashboard demo - Now Available (link)

    Read the article

  • ArchBeat Link-o-Rama for 2012-04-05

    - by Bob Rhubart
    Webcast: Oracle Maximum Availability Architecture Best Practices event.on24.com Date: Thursday, April 12, 2012 Time: 10:00 AM PDT Oracle expert Tom Kyte discusses how Oracle's Maximum Availability Architecture can help to minimize the costs and risk of downtime. Oracle Enterprise Manager Ops Center 12c Launch - Interactive Webcast and Live Chat www.oracle.com Thursday, April 12, 2012. 9 a.m. PT / 12 p.m. ET / 4 p.m. GMT. Speakers: Steve Wilson (VP Systems Management, Oracle) John Fowler (Exec VP Systems, Oracle) Brad Cameron (VP Development, Oracle Fusion Middleware) Bill Nesheim (VP Oracle Solaris) Dennis Reno (VP Customer Portal Experience, Oracle) Mike Wookey (Chief Architect, Oracle Enterprise Manager Ops Center) Prasad Pai (Sr Director, Oracle Enterprise Manager Ops Center) 2012 Real World Performance Tour Dates | Performance Tuning | Performance Engineering www.ioug.org Coming to your town: a full day of real world database performance with Tom Kyte, Andrew Holdsworth, and Graham Wood. Rochester, NY - March 8 Los Angeles, CA - April 30 Orange County, CA - May 1 Redwood Shores, CA - May 3 Oracle Technology Network Developer Day: MySQL - New York www.oracle.com Wednesday, May 02, 2012 8:00 AM – 4:30 PM Grand Hyatt New York 109 East 42nd Street, Grand Central Terminal New York, NY 10017 Webcast Series: Data Warehousing Best Practices event.on24.com April 19, 2012 - Best Practices for Workload Management of a Data Warehouse on Oracle Exadata May 10, 2012 - Best Practices for Extreme Data Warehouse Performance on Oracle Exadata How to create a Global Rule that stores a document's folder path in a custom metadata field | Nicolas Montoya blogs.oracle.com An illustrated how-to from Oracle Fusion Middleware A-Team blogger Nicolas Montoya. Get Proactive with Fusion Middleware | Daniel Mortimer blogs.oracle.com Daniel Mortimer shows how to access "a one stop shop for navigating to proactive support material, tools, and communication channels related to Oracle Fusion Middleware." Build an enterprise on 'other peoples' work', via SOA and cloud | Joe McKendrick www.zdnet.com Are you down with OPW? Joe McKendrick's synopsis of a recent presentation by David Linthicum focuses on reuse. Oracle Fusion Middleware Security: Unsolicited login with OAM 11g | Chris Johnson fusionsecurity.blogspot.com Chris Johnson shows how to create a shopping cart login model using "plain old HTML." How to use the Human WorkFlow Web Services | Edwin Biemond biemond.blogspot.com Oracle ACE Edwin Biemond shows how to invoke two WorkFlow web services to query the Human task in Oracle SOA Suite with your own ordering and restrictions. Bad Practice Use Case for LOV Performance Implementation in ADF BC | Andrejus Baranovskis andrejusb.blogspot.com "If you want to learn something well, there is nothing better [than] to learn bad practices first," says Oracle ACE Director Andrejus Baranovskis. Thought for the Day "The best meetings get real work done. When your people learn that your meetings actually accomplish something, they will stop making excuses to be elsewhere." — Larry Constantine

    Read the article

  • mongoDB Management Studio

    - by Liam McLennan
    This weekend I have been in Sydney at the MS Web Camp, learning about web application development. At the end of the first day we came up with application ideas and pitched them. My idea was to build a web management application for MongoDB.

    mongoDB

    I pitched my idea, put down the microphone, and then someone asked, "what's mongo?". Good question. MongoDB is a document database that stores JSON-style documents. This is a JSON document for a tweet from twitter:

    db.tweets.find()[0]
    {
      "_id" : ObjectId("4bfe4946cfbfb01420000001"),
      "created_at" : "Thu, 27 May 2010 10:25:46 +0000",
      "profile_image_url" : "http://a3.twimg.com/profile_images/600304197/Snapshot_2009-07-26_13-12-43_normal.jpg",
      "from_user" : "drearyclocks",
      "text" : "Does anyone know who has better coverage, Optus or Vodafone? Telstra is still too expensive.",
      "to_user_id" : null,
      "metadata" : { "result_type" : "recent" },
      "id" : { "floatApprox" : 14825648892 },
      "geo" : null,
      "from_user_id" : 6825770,
      "search_term" : "telstra",
      "iso_language_code" : "en",
      "source" : "<a href=\"http://www.tweetdeck.com\" rel=\"nofollow\">TweetDeck</a>"
    }

    A MongoDB server can have many databases, each database has many collections (instead of tables), and a collection has many documents (instead of rows).

    Development

    Day 2 of the Sydney MS Web Camp was allocated to building our applications. First thing in the morning I identified the stories that I wanted to implement:

    Scenario: View databases
    Scenario: View Collections in a database
    Scenario: View Documents in a Collection
    Scenario: Delete a Collection
    Scenario: Delete a Database
    Scenario: Delete Documents

    Over the course of the day the team (3.5 developers) implemented all of the planned stories (except 'Delete a Database') and also implemented the following:

    Scenario: Create Database
    Scenario: Create Collection

    Lessons Learned

    I'm new to MongoDB and in the past I have only accessed it from Ruby (for my hare-brained scheme). When it came to implementing our MongoDB management studio, we discovered that there is no official MongoDB driver for .NET. We chose to use NoRM, honestly just because it was the only one I had heard of. NoRM was a challenge. I think it is a fine library, but it is focused on mapping strongly typed objects to MongoDB. For our application we had no prior knowledge of the types that would be in the MongoDB database, so NoRM was probably a poor choice. Here are some screens (click to enlarge):

    Read the article

< Previous Page | 218 219 220 221 222 223 224 225 226 227 228 229  | Next Page >