Search Results

Search found 3163 results on 127 pages for 'schema'.


  • Database continuous integration step by step

    - by David Atkinson
    This post will describe how to set up basic database continuous integration using TeamCity to initiate the build process, SQL Source Control to put your database under source control, and the SQL Compare command line to keep a test database up to date. In my example I will be using Subversion as my source control repository. If you wish to follow my steps verbatim, please make sure you have TortoiseSVN, SQL Compare and SQL Source Control installed. Downloading and Installing TeamCity TeamCity (http://www.jetbrains.com/teamcity/index.html) is free for up to three agents, so it a great no-risk tool you can use to experiment with. 1. Download the latest version from the JetBrains website. For some reason the TeamCity executable didn't download properly for me, stalling frustratingly at 99%, so I tried again with the zip file download option (see screenshot below), which worked flawlessly. 2. Run the installer using the defaults. This results in a set-up with the server component and agent installed on the same machine, which is ideal for getting started with ease. 3. Check that the build agent is pointing to the server correctly. This has caught me out a few times before. This setting is in C:\TeamCity\buildAgent\conf\buildAgent.properties and for my installation is serverUrl=http\://localhost\:80 . If you need to change this value, if for example you've had to install the Server console to a different port number, the TeamCity Build Agent Service will need to be restarted for the change to take effect. 4. Open the TeamCity admin console on http://localhost , and specify your own designated username and password at first startup. Putting your database in source control using SQL Source Control 5. Assuming you've got SQL Source Control installed, select a development database in the SQL Server Management Studio Object Explorer and select Link Database to Source Control. 6. For the Link step you can either create your own empty folder in source control, or you can select Just Evaluating, which just creates a local subversion repository for you behind the scenes. 7. Once linked, note that your database turns green in the Object Explorer. Visit the Commit tab to do an initial commit of your database objects by typing in an appropriate comment and clicking Commit. 8. There is a hidden feature in SQL Source Control that opens up TortoiseSVN (provided it is installed) pointing to the linked repository. Keep Shift depressed and right click on the text to the right of 'Linked to', in the example below, it's the red Evaluation Repository text. Select Open TortoiseSVN Repo Browser. This screen should give you an idea of how SQL Source Control manages the object files behind the scenes. Back in the TeamCity admin console, we'll now create a new project to monitor the above repository location and to trigger a 'build' each time the repository changes. 9. In TeamCity Adminstration, select Create Project and give it a name, such as "My first database CI", and click Create. 10. Click on Create Build Configuration, and name it something like "Integration build". 11. Click VCS settings and then Create And Attach new VCS root. This is where you will tell TeamCity about the repository it should monitor. 12. In my case since I'm using the Just Evaluating option in SQL Source Control, I should select Subversion. 13. In the URL field paste your repository location. 
In my case this is file:///C:/Users/David.Atkinson/AppData/Local/Red Gate/SQL Source Control 3/EvaluationRepositories/WidgetDevelopment/WidgetDevelopment 14. Click on Test Connection to ensure that you can communicate with your source control system. Click Save. 15. Click Add Build Step, and Runner Type: Command Line. Should you be familiar with the other runner types, such as NAnt, MSBuild or Powershell, you can opt for these, but for the same of keeping it simple I will pick the simplest option. 16. If you have installed SQL Compare in the default location, set the Command Executable field to: C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe 17. Flip back to SSMS briefly and add a new database to your server. This will be the database used for continuous integration testing. 18. Set the command parameters according to your server and the name of the database you have created. In my case I created database RedGateCI on server .\sql2008r2 /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose Note that if you pick a server instance that isn't on your local machine, you'll need the TCP/IP protocol enabled in SQL Server Configuration Manager otherwise the SQL Compare command line will not be able to connect. 19. Save and select Build Triggering / Add New Trigger / VCS Trigger. This is where you tell TeamCity when it should initiate a build. Click Save. 20. Now return to SQL Server Management Studio and make a schema change (eg add a new object) to your linked development database. A blue indicator will appear in the Object Explorer. Commit this change, typing in an appropriate check-in comment. All being good, within 60 seconds (a TeamCity default that can be changed) a build will be triggered. 21. Click on Projects in TeamCity to get back to the overview screen: The build log will show you the console output, which is useful for troubleshooting any issues: That's it! You now have continuous integration on your database. In future posts I'll cover how you can generate and test the database creation script, the database upgrade script, and run database unit tests as part of your continuous integration script. If you have any trouble getting this up and running please let me know, either by commenting on this post, or email me directly using the email address below. Technorati Tags: SQL Server
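If you prefer to script step 17 rather than create the test database through the SSMS UI, a minimal T-SQL sketch is shown below, reusing the RedGateCI name from the example above; the final query is simply a suggestion of mine for confirming that the first triggered build really synchronized some objects:

-- Step 17: create the empty database that the CI build will synchronize
CREATE DATABASE RedGateCI;
GO
-- After the first triggered build, check that schema objects have arrived
USE RedGateCI;
GO
SELECT name, type_desc, create_date
FROM sys.objects
WHERE is_ms_shipped = 0
ORDER BY create_date DESC;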

    Read the article

  • How to create a simple adf dashboard application with EJB 3.0

    - by Rodrigues, Raphael
    In this month's Oracle Magazine, Frank Nimphius wrote a very good article about an Oracle ADF Faces dashboard application to support persistent user personalization. You can read this entire article clicking here. The idea in this article is to extend the dashboard application. My idea here is to create a similar dashboard application, but instead ADF BC model layer, I'm intending to use EJB3.0. There are just a one small trick here and I'll show you. I'm using the HR usual oracle schema. The steps are: 1. Create a ADF Fusion Application with EJB as a layer model 2. Generate the entities from table (I'm using Department and Employees only) 3. Create a new Session Bean. I called it: HRSessionEJB 4. Create a new method like that: public List getAllDepartmentsHavingEmployees(){ JpaEntityManager jpaEntityManager = (JpaEntityManager)em.getDelegate(); Query query = jpaEntityManager.createNamedQuery("Departments.allDepartmentsHavingEmployees"); JavaBeanResult.setQueryResultClass(query, AggregatedDepartment.class); return query.getResultList(); } 5. In the Departments entity, create a new native query annotation: @Entity @NamedQueries( { @NamedQuery(name = "Departments.findAll", query = "select o from Departments o") }) @NamedNativeQueries({ @NamedNativeQuery(name="Departments.allDepartmentsHavingEmployees", query = "select e.department_id, d.department_name , sum(e.salary), avg(e.salary) , max(e.salary), min(e.salary) from departments d , employees e where d.department_id = e.department_id group by e.department_id, d.department_name")}) public class Departments implements Serializable {...} 6. Create a new POJO called AggregatedDepartment: package oramag.sample.dashboard.model; import java.io.Serializable; import java.math.BigDecimal; public class AggregatedDepartment implements Serializable{ @SuppressWarnings("compatibility:5167698678781240729") private static final long serialVersionUID = 1L; private BigDecimal departmentId; private String departmentName; private BigDecimal sum; private BigDecimal avg; private BigDecimal max; private BigDecimal min; public AggregatedDepartment() { super(); } public AggregatedDepartment(BigDecimal departmentId, String departmentName, BigDecimal sum, BigDecimal avg, BigDecimal max, BigDecimal min) { super(); this.departmentId = departmentId; this.departmentName = departmentName; this.sum = sum; this.avg = avg; this.max = max; this.min = min; } public void setDepartmentId(BigDecimal departmentId) { this.departmentId = departmentId; } public BigDecimal getDepartmentId() { return departmentId; } public void setDepartmentName(String departmentName) { this.departmentName = departmentName; } public String getDepartmentName() { return departmentName; } public void setSum(BigDecimal sum) { this.sum = sum; } public BigDecimal getSum() { return sum; } public void setAvg(BigDecimal avg) { this.avg = avg; } public BigDecimal getAvg() { return avg; } public void setMax(BigDecimal max) { this.max = max; } public BigDecimal getMax() { return max; } public void setMin(BigDecimal min) { this.min = min; } public BigDecimal getMin() { return min; } } 7. Create the util java class called JavaBeanResult. The function of this class is to configure a native SQL query to return POJOs in a single line of code using the utility class. Credits: http://onpersistence.blogspot.com.br/2010/07/eclipselink-jpa-native-constructor.html package oramag.sample.dashboard.model.util; /******************************************************************************* * Copyright (c) 2010 Oracle. 
All rights reserved. * This program and the accompanying materials are made available under the * terms of the Eclipse Public License v1.0 and Eclipse Distribution License v. 1.0 * which accompanies this distribution. * The Eclipse Public License is available at http://www.eclipse.org/legal/epl-v10.html * and the Eclipse Distribution License is available at * http://www.eclipse.org/org/documents/edl-v10.php. * * @author shsmith ******************************************************************************/ import java.lang.reflect.Constructor; import java.lang.reflect.InvocationTargetException; import java.util.ArrayList; import java.util.List; import javax.persistence.Query; import org.eclipse.persistence.exceptions.ConversionException; import org.eclipse.persistence.internal.helper.ConversionManager; import org.eclipse.persistence.internal.sessions.AbstractRecord; import org.eclipse.persistence.internal.sessions.AbstractSession; import org.eclipse.persistence.jpa.JpaHelper; import org.eclipse.persistence.queries.DatabaseQuery; import org.eclipse.persistence.queries.QueryRedirector; import org.eclipse.persistence.sessions.Record; import org.eclipse.persistence.sessions.Session; /*** * This class is a simple query redirector that intercepts the result of a * native query and builds an instance of the specified JavaBean class from each * result row. The order of the selected columns musts match the JavaBean class * constructor arguments order. * * To configure a JavaBeanResult on a native SQL query use: * JavaBeanResult.setQueryResultClass(query, SomeBeanClass.class); * where query is either a JPA SQL Query or native EclipseLink DatabaseQuery. * * @author shsmith * */ public final class JavaBeanResult implements QueryRedirector { private static final long serialVersionUID = 3025874987115503731L; protected Class resultClass; public static void setQueryResultClass(Query query, Class resultClass) { JavaBeanResult javaBeanResult = new JavaBeanResult(resultClass); DatabaseQuery databaseQuery = JpaHelper.getDatabaseQuery(query); databaseQuery.setRedirector(javaBeanResult); } public static void setQueryResultClass(DatabaseQuery query, Class resultClass) { JavaBeanResult javaBeanResult = new JavaBeanResult(resultClass); query.setRedirector(javaBeanResult); } protected JavaBeanResult(Class resultClass) { this.resultClass = resultClass; } @SuppressWarnings("unchecked") public Object invokeQuery(DatabaseQuery query, Record arguments, Session session) { List results = new ArrayList(); try { Constructor[] constructors = resultClass.getDeclaredConstructors(); Constructor javaBeanClassConstructor = null; // (Constructor) resultClass.getDeclaredConstructors()[0]; Class[] constructorParameterTypes = null; // javaBeanClassConstructor.getParameterTypes(); List rows = (List) query.execute( (AbstractSession) session, (AbstractRecord) arguments); for (Object[] columns : rows) { boolean found = false; for (Constructor constructor : constructors) { javaBeanClassConstructor = constructor; constructorParameterTypes = javaBeanClassConstructor.getParameterTypes(); if (columns.length == constructorParameterTypes.length) { found = true; break; } // if (columns.length != constructorParameterTypes.length) { // throw new ColumnParameterNumberMismatchException( // resultClass); // } } if (!found) throw new ColumnParameterNumberMismatchException( resultClass); Object[] constructorArgs = new Object[constructorParameterTypes.length]; for (int j = 0; j < columns.length; j++) { Object columnValue = columns[j]; Class 
parameterType = constructorParameterTypes[j]; // convert the column value to the correct type--if possible constructorArgs[j] = ConversionManager.getDefaultManager() .convertObject(columnValue, parameterType); } results.add(javaBeanClassConstructor.newInstance(constructorArgs)); } } catch (ConversionException e) { throw new ColumnParameterMismatchException(e); } catch (IllegalArgumentException e) { throw new ColumnParameterMismatchException(e); } catch (InstantiationException e) { throw new ColumnParameterMismatchException(e); } catch (IllegalAccessException e) { throw new ColumnParameterMismatchException(e); } catch (InvocationTargetException e) { throw new ColumnParameterMismatchException(e); } return results; } public final class ColumnParameterMismatchException extends RuntimeException { private static final long serialVersionUID = 4752000720859502868L; public ColumnParameterMismatchException(Throwable t) { super( "Exception while processing query results-ensure column order matches constructor parameter order", t); } } public final class ColumnParameterNumberMismatchException extends RuntimeException { private static final long serialVersionUID = 1776794744797667755L; public ColumnParameterNumberMismatchException(Class clazz) { super( "Number of selected columns does not match number of constructor arguments for: " + clazz.getName()); } } } 8. Create the DataControl and a jsf or jspx page 9. Drag allDepartmentsHavingEmployees from DataControl and drop in your page 10. Choose Graph > Type: Bar (Normal) > any layout 11. In the wizard screen, Bars label, adds: sum, avg, max, min. In the X Axis label, adds: departmentName, and click in OK button 12. Run the page, the result is showed below: You can download the workspace here . It was using the latest jdeveloper version 11.1.2.2.
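Before wiring the native query into the Departments entity, it can be worth running the aggregate SQL directly against the HR schema to confirm that the column order matches the AggregatedDepartment constructor (departmentId, departmentName, sum, avg, max, min); here is a sketch of such a check (the column aliases are mine):

-- Preview of Departments.allDepartmentsHavingEmployees against the HR schema
SELECT e.department_id,
       d.department_name,
       SUM(e.salary) AS salary_sum,
       AVG(e.salary) AS salary_avg,
       MAX(e.salary) AS salary_max,
       MIN(e.salary) AS salary_min
FROM   departments d, employees e
WHERE  d.department_id = e.department_id
GROUP BY e.department_id, d.department_name
ORDER BY d.department_name;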

    Read the article

  • SQL SERVER – SSMS: Disk Usage Report

    - by Pinal Dave
    Let us start with humor!  I think we the series on various reports, we come to a logical point. We covered all the reports at server level. This means the reports we saw were targeted towards activities that are related to instance level operations. These are mostly like how a doctor diagnoses a patient. At this point I am reminded of a dialog which I read somewhere: Patient: Doc, It hurts when I touch my head. Doc: Ok, go on. What else have you experienced? Patient: It hurts even when I touch my eye, it hurts when I touch my arms, it even hurts when I touch my feet, etc. Doc: Hmmm … Patient: I feel it hurts when I touch anywhere in my body. Doc: Ahh … now I get it. You need a plaster to your finger John. Sometimes the server level gives an indicator to what is happening in the system, but we need to get to the root cause for a specific database. So, this is the first blog in series where we would start discussing about database level reports. To launch database level reports, expand selected server in Object Explorer, expand the Databases folder, and then right-click any database for which we want to look at reports. From the menu, select Reports, then Standard Reports, and then any of database level reports. In this blog, we would talk about four “disk” reports because they are similar: Disk Usage Disk Usage by Top Tables Disk Usage by Table Disk Usage by Partition Disk Usage This report shows multiple information about the database. Let us discuss them one by one.  We have divided the output into 5 different sections. Section 1 shows the high level summary of the database. It shows the space used by database files (mdf and ldf). Under the hood, the report uses, various DMVs and DBCC Commands, it is using sys.data_spaces and DBCC SHOWFILESTATS. Section 2 and 3 are pie charts. One for data file allocation and another for the transaction log file. Pie chart for “Data Files Space Usage (%)” shows space consumed data, indexes, allocated to the SQL Server database, and unallocated space which is allocated to the SQL Server database but not yet filled with anything. “Transaction Log Space Usage (%)” used DBCC SQLPERF (LOGSPACE) and shows how much empty space we have in the physical transaction log file. Section 4 shows the data from Default Trace and looks at Event IDs 92, 93, 94, 95 which are for “Data File Auto Grow”, “Log File Auto Grow”, “Data File Auto Shrink” and “Log File Auto Shrink” respectively. Here is an expanded view for that section. If default trace is not enabled, then this section would be replaced by the message “Trace Log is disabled” as highlighted below. Section 5 of the report uses DBCC SHOWFILESTATS to get information. Here is the enhanced version of that section. This shows the physical layout of the file. In case you have In-Memory Objects in the database (from SQL Server 2014), then report would show information about those as well. Here is the screenshot taken for a different database, which has In-Memory table. I have highlighted new things which are only shown for in-memory database. The new sections which are highlighted above are using sys.dm_db_xtp_checkpoint_files, sys.database_files and sys.data_spaces. The new type for in-memory OLTP is ‘FX’ in sys.data_space. The next set of reports is targeted to get information about a table and its storage. These reports can answer questions like: Which is the biggest table in the database? How many rows we have in table? Is there any table which has a lot of reserved space but its unused? 
Which partition of the table is having more data? Disk Usage by Top Tables This report provides detailed data on the utilization of disk space by top 1000 tables within the Database. The report does not provide data for memory optimized tables. Disk Usage by Table This report is same as earlier report with few difference. First Report shows only 1000 rows First Report does order by values in DMV sys.dm_db_partition_stats whereas second one does it based on name of the table. Both of the reports have interactive sort facility. We can click on any column header and change the sorting order of data. Disk Usage by Partition This report shows the distribution of the data in table based on partition in the table. This is so similar to previous output with the partition details now. Here is the query taken from profiler. SELECT row_number() OVER (ORDER BY a1.used_page_count DESC, a1.index_id) AS row_number ,      (dense_rank() OVER (ORDER BY a5.name, a2.name))%2 AS l1 ,      a1.OBJECT_ID ,      a5.name AS [schema] ,       a2.name ,       a1.index_id ,       a3.name AS index_name ,       a3.type_desc ,       a1.partition_number ,       a1.used_page_count * 8 AS total_used_pages ,       a1.reserved_page_count * 8 AS total_reserved_pages ,       a1.row_count FROM sys.dm_db_partition_stats a1 INNER JOIN sys.all_objects a2  ON ( a1.OBJECT_ID = a2.OBJECT_ID) AND a1.OBJECT_ID NOT IN (SELECT OBJECT_ID FROM sys.tables WHERE is_memory_optimized = 1) INNER JOIN sys.schemas a5 ON (a5.schema_id = a2.schema_id) LEFT OUTER JOIN  sys.indexes a3  ON ( (a1.OBJECT_ID = a3.OBJECT_ID) AND (a1.index_id = a3.index_id) ) WHERE (SELECT MAX(DISTINCT partition_number) FROM sys.dm_db_partition_stats a4 WHERE (a4.OBJECT_ID = a1.OBJECT_ID)) >= 1 AND a2.TYPE <> N'S' AND  a2.TYPE <> N'IT' ORDER BY a5.name ASC, a2.name ASC, a1.index_id, a1.used_page_count DESC, a1.partition_number Using all of the above reports, you should be able to get the usage of database files and also space used by tables. I think this is too much disk information for a single blog and I hope you have used them in the past to get data. Do let me know if you found anything interesting using these reports in your environments. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
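If you want to approximate what these reports show without opening SSMS, a rough T-SQL sketch is given below; it is not the exact query the reports run (the profiler capture above shows that for the partition report), only the same DMVs and DBCC commands mentioned in the post:

-- Transaction log usage, as used by the "Transaction Log Space Usage (%)" chart
DBCC SQLPERF (LOGSPACE);

-- A rough equivalent of "Disk Usage by Top Tables": reserved/used space per table
SELECT TOP (20)
       s.name AS [schema],
       o.name AS [table],
       SUM(ps.reserved_page_count) * 8 AS reserved_kb,
       SUM(ps.used_page_count) * 8 AS used_kb,
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [rows]
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.objects o ON o.object_id = ps.object_id
INNER JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE o.is_ms_shipped = 0
GROUP BY s.name, o.name
ORDER BY reserved_kb DESC;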

    Read the article

  • New Feature in ODI 11.1.1.6: ODI for Big Data

    - by Julien Testut
By Ananth Tirupattur Starting with Oracle Data Integrator 11.1.1.6.0, ODI is offering a solution to process Big Data. This post provides an overview of this feature. With all the buzz around Big Data and before getting into the details of ODI for Big Data, I will provide a brief introduction to Big Data and the Oracle solution for Big Data. So, what is Big Data? Big Data includes: structured data (data from relational data stores, XML data stores), semi-structured data (data from weblogs), and unstructured data (data from text blobs, images). Traditionally, business decisions are based on the information gathered from transactional data. For example, transactional data from CRM applications is fed to a decision system for analysis and decision making. Products such as ODI play a key role in enabling decision systems. However, with the emergence of massive amounts of semi-structured and unstructured data, it is important for decision systems to include them in the analysis to achieve better decision-making capability. While there is an abundance of opportunities for businesses to gain competitive advantage, processing Big Data has its challenges. The challenges of processing Big Data include: volume of data, velocity of data (the high rate at which data is generated), and variety of data. In order to address these challenges and convert them into opportunities, we need an appropriate framework, platform and the right set of tools. Hadoop is an open source framework: a highly scalable, fault-tolerant system for storing and processing large amounts of data. Hadoop provides two key services: distributed and reliable storage, called the Hadoop Distributed File System (HDFS), and a framework for parallel data processing called MapReduce. Innovations in Hadoop and its related technologies continue to evolve rapidly, so it is highly recommended to follow information on the web to keep up with the latest developments. Oracle's vision is to provide a comprehensive solution to address the challenges posed by Big Data. Oracle provides the necessary hardware, software and tools for processing Big Data. The Oracle solution includes: Big Data Appliance, Oracle NoSQL Database, Cloudera distribution for Hadoop, Oracle R Enterprise (R is a statistical package which is very popular among data scientists), the ODI solution for Big Data, and Oracle Loader for Hadoop for loading data from Hadoop to Oracle. Further details can be found here: http://www.oracle.com/us/products/database/big-data-appliance/overview/index.html ODI Solution for Big Data: ODI’s goal is to minimize the need to understand the complexity of the Hadoop framework and to simplify the adoption of processing Big Data seamlessly in an enterprise. ODI provides the capabilities for an integrated architecture for processing Big Data. This includes the capability to load data into Hadoop, process data in Hadoop and load data from Hadoop into Oracle. ODI is expanding its support for Big Data by providing the following out-of-the-box Knowledge Modules (KMs). 
IKM File to Hive (LOAD DATA): loads unstructured data from a file (local file system or HDFS) into Hive; IKM Hive Control Append: transforms and validates structured data on Hive; IKM Hive Transform: transforms unstructured data on Hive; IKM File/Hive to Oracle (OLH): loads processed data in Hive to Oracle; RKM Hive: reverse engineers Hive tables to generate models. Using the Loading KM you can map files (local and HDFS files) to the corresponding Hive tables. For example, you can map weblog files categorized by date into a corresponding partitioned Hive table schema. Using the Hive Control Append KM you can validate and transform data in Hive. In the example below, two source Hive tables are joined and mapped to a target Hive table. The Hive Transform KM facilitates processing of semi-structured data in Hive. In the example below, the data from a weblog is processed using a Perl script and mapped to a target Hive table. Using the Oracle Loader for Hadoop (OLH) KM you can load data from a Hive table or HDFS to a corresponding table in Oracle. OLH is available as a standalone product. ODI greatly enhances OLH capability by generating the configuration and mapping files for OLH based on the configuration provided in the interface and KM options. ODI seamlessly invokes OLH when executing the scenario. In the example below, an HDFS file is mapped to a table in Oracle. Development and Deployment: The following diagram illustrates the development and deployment of the ODI solution for Big Data. Using the ODI Studio on your development machine, create and develop the ODI solution for processing Big Data by connecting to a MySQL DB or Oracle database on a BDA machine or Hadoop cluster. Schedule the ODI scenarios to be executed on the ODI agent deployed on the BDA machine or Hadoop cluster. The ODI Solution for Big Data provides several exciting new capabilities to facilitate the adoption of Big Data in an enterprise. You can find more information about the Oracle Big Data connectors on OTN. 
You can find an overview of all the new features introduced in ODI 11.1.1.6 in the following document: ODI 11.1.1.6 New Features Overview
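To make the weblog example above a little more concrete, here is a hedged sketch of the kind of partitioned Hive table the Loading KM could target; the table layout, file path and partition column are purely illustrative and not taken from the article:

-- Illustrative partitioned Hive table for daily weblog files
CREATE TABLE weblogs (
  host       STRING,
  request    STRING,
  status     INT,
  bytes_sent BIGINT
)
PARTITIONED BY (log_date STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

-- A LOAD DATA statement of the sort the IKM File to Hive (LOAD DATA) name refers to
LOAD DATA INPATH '/user/odi/weblogs/2012-06-01.log'
INTO TABLE weblogs PARTITION (log_date = '2012-06-01');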

    Read the article

  • SQL SERVER – Faster SQL Server Databases and Applications – Power and Control with SafePeak Caching Options

    - by Pinal Dave
    Update: This blog post is written based on the SafePeak, which is available for free download. Today, I’d like to examine more closely one of my preferred technologies for accelerating SQL Server databases, SafePeak. Safepeak’s software provides a variety of advanced data caching options, techniques and tools to accelerate the performance and scalability of SQL Server databases and applications. I’d like to look more closely at some of these options, as some of these capabilities could help you address lagging database and performance on your systems. To better understand the available options, it is best to start by understanding the difference between the usual “Basic Caching” vs. SafePeak’s “Dynamic Caching”. Basic Caching Basic Caching (or the stale and static cache) is an ability to put the results from a query into cache for a certain period of time. It is based on TTL, or Time-to-live, and is designed to stay in cache no matter what happens to the data. For example, although the actual data can be modified due to DML commands (update/insert/delete), the cache will still hold the same obsolete query data. Meaning that with the Basic Caching is really static / stale cache.  As you can tell, this approach has its limitations. Dynamic Caching Dynamic Caching (or the non-stale cache) is an ability to put the results from a query into cache while maintaining the cache transaction awareness looking for possible data modifications. The modifications can come as a result of: DML commands (update/insert/delete), indirect modifications due to triggers on other tables, executions of stored procedures with internal DML commands complex cases of stored procedures with multiple levels of internal stored procedures logic. When data modification commands arrive, the caching system identifies the related cache items and evicts them from cache immediately. In the dynamic caching option the TTL setting still exists, although its importance is reduced, since the main factor for cache invalidation (or cache eviction) become the actual data updates commands. Now that we have a basic understanding of the differences between “basic” and “dynamic” caching, let’s dive in deeper. SafePeak: A comprehensive and versatile caching platform SafePeak comes with a wide range of caching options. Some of SafePeak’s caching options are automated, while others require manual configuration. Together they provide a complete solution for IT and Data managers to reach excellent performance acceleration and application scalability for  a wide range of business cases and applications. Automated caching of SQL Queries: Fully/semi-automated caching of all “read” SQL queries, containing any types of data, including Blobs, XMLs, Texts as well as all other standard data types. SafePeak automatically analyzes the incoming queries, categorizes them into SQL Patterns, identifying directly and indirectly accessed tables, views, functions and stored procedures; Automated caching of Stored Procedures: Fully or semi-automated caching of all read” stored procedures, including procedures with complex sub-procedure logic as well as procedures with complex dynamic SQL code. 
All procedures are analyzed in advance by SafePeak’s  Metadata-Learning process, their SQL schemas are parsed – resulting with a full understanding of the underlying code, objects dependencies (tables, views, functions, sub-procedures) enabling automated or semi-automated (manually review and activate by a mouse-click) cache activation, with full understanding of the transaction logic for cache real-time invalidation; Transaction aware cache: Automated cache awareness for SQL transactions (SQL and in-procs); Dynamic SQL Caching: Procedures with dynamic SQL are pre-parsed, enabling easy cache configuration, eliminating SQL Server load for parsing time and delivering high response time value even in most complicated use-cases; Fully Automated Caching: SQL Patterns (including SQL queries and stored procedures) that are categorized by SafePeak as “read and deterministic” are automatically activated for caching; Semi-Automated Caching: SQL Patterns categorized as “Read and Non deterministic” are patterns of SQL queries and stored procedures that contain reference to non-deterministic functions, like getdate(). Such SQL Patterns are reviewed by the SafePeak administrator and in usually most of them are activated manually for caching (point and click activation); Fully Dynamic Caching: Automated detection of all dependent tables in each SQL Pattern, with automated real-time eviction of the relevant cache items in the event of “write” commands (a DML or a stored procedure) to one of relevant tables. A default setting; Semi Dynamic Caching: A manual cache configuration option enabling reducing the sensitivity of specific SQL Patterns to “write” commands to certain tables/views. An optimization technique relevant for cases when the query data is either known to be static (like archive order details), or when the application sensitivity to fresh data is not critical and can be stale for short period of time (gaining better performance and reduced load); Scheduled Cache Eviction: A manual cache configuration option enabling scheduling SQL Pattern cache eviction based on certain time(s) during a day. A very useful optimization technique when (for example) certain SQL Patterns can be cached but are time sensitive. Example: “select customers that today is their birthday”, an SQL with getdate() function, which can and should be cached, but the data stays relevant only until 00:00 (midnight); Parsing Exceptions Management: Stored procedures that were not fully parsed by SafePeak (due to too complex dynamic SQL or unfamiliar syntax), are signed as “Dynamic Objects” with highest transaction safety settings (such as: Full global cache eviction, DDL Check = lock cache and check for schema changes, and more). The SafePeak solution points the user to the Dynamic Objects that are important for cache effectiveness, provides easy configuration interface, allowing you to improve cache hits and reduce cache global evictions. Usually this is the first configuration in a deployment; Overriding Settings of Stored Procedures: Override the settings of stored procedures (or other object types) for cache optimization. For example, in case a stored procedure SP1 has an “insert” into table T1, it will not be allowed to be cached. However, it is possible that T1 is just a “logging or instrumentation” table left by developers. 
By overriding the settings a user can allow caching of the problematic stored procedure; Advanced Cache Warm-Up: Creating an XML-based list of queries and stored procedure (with lists of parameters) for periodically automated pre-fetching and caching. An advanced tool allowing you to handle more rare but very performance sensitive queries pre-fetch them into cache allowing high performance for users’ data access; Configuration Driven by Deep SQL Analytics: All SQL queries are continuously logged and analyzed, providing users with deep SQL Analytics and Performance Monitoring. Reduce troubleshooting from days to minutes with database objects and SQL Patterns heat-map. The performance driven configuration helps you to focus on the most important settings that bring you the highest performance gains. Use of SafePeak SQL Analytics allows continuous performance monitoring and analysis, easy identification of bottlenecks of both real-time and historical data; Cloud Ready: Available for instant deployment on Amazon Web Services (AWS). As you can see, there are many options to configure SafePeak’s SQL Server database and application acceleration caching technology to best fit a lot of situations. If you’re not familiar with their technology, they offer free-trial software you can download that comes with a free “help session” to help get you started. You can access the free trial here. Also, SafePeak is available to use on Amazon Cloud. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
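To picture the "Semi-Automated Caching" and "Scheduled Cache Eviction" cases, consider the birthday example mentioned above; a sketch of such a "read and non-deterministic" SQL pattern is below (the table and columns are hypothetical):

-- A "read and non-deterministic" pattern: cacheable, but only relevant until midnight
SELECT customer_id, first_name, last_name
FROM dbo.Customers                       -- hypothetical table
WHERE MONTH(birth_date) = MONTH(GETDATE())
  AND DAY(birth_date) = DAY(GETDATE());

Because of the GETDATE() call the pattern falls into the "Read and Non deterministic" category described above, so it would typically be activated for caching manually and could be given a scheduled eviction at 00:00 (midnight).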

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #052

    - by Pinal Dave
Let us continue with the final episode of the Memory Lane Series. Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2007 Set Server Level FILLFACTOR Using T-SQL Script Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page during index creation or alteration. fillfactor must be an integer value from 1 to 100. The default is 0. Limitation of Online Index Rebuild Operation Online operation means that while online operations are happening the database is in a normal operational condition, and the processes which are participating in the online operations do not require exclusive access to the database. Get Permissions of My Username / Userlogin on Server / Database A few days ago, I was invited to one of the largest database companies. I was asked to review the database schema and propose changes to it. A special username / user login was created for me, so I could review their database. I was very much interested to know what kind of permissions I was assigned at the server level and database level. I did not feel like asking the Sr. DBA the question about permissions. Simple Example of WHILE Loop With CONTINUE and BREAK Keywords This question is one of those questions which is very simple and most of the users get it correct; however, a few users find it confusing the first time. I have tried to explain the usage of a simple WHILE loop in the first example (a runnable sketch also follows below). The BREAK keyword stops the WHILE loop and control is moved to the next statement after the loop. The CONTINUE keyword skips all the statements after it and control is sent back to the first statement of the WHILE loop. Forced Parameterization and Simple Parameterization – T-SQL and SSMS When the PARAMETERIZATION option is set to FORCED, any literal value that appears in a SELECT, INSERT, UPDATE or DELETE statement is converted to a parameter during query compilation. When the PARAMETERIZATION database option is SET to SIMPLE, the SQL Server query optimizer may choose to parameterize the queries. 2008 Transaction and Local Variables – Swap Variables – Update All At Once Concept Summary: Transactions have no effect on memory variables. When an UPDATE statement is applied to any table (physical or memory), all the updates are applied together when the statement is committed. First of all I suggest that you read the article listed above about the effect of transactions on local variables. As seen there, local variables are independent of any transaction effect. Simulate INNER JOIN using LEFT JOIN statement – Performance Analysis Just a day ago, while I was working with JOINs, I found one interesting observation, which prompted me to create the following example. Before we continue further let me make it very clear that INNER JOIN should be used wherever it can be used, and simulating INNER JOIN using any other JOIN will degrade the performance. If there is scope to convert any OUTER JOIN to an INNER JOIN, it should be done with priority. 
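For the 2007 WHILE-loop item above, a minimal runnable sketch (my own, not the original article's script) that shows both keywords:

DECLARE @i INT = 1;
WHILE @i <= 5
BEGIN
    SET @i = @i + 1;
    IF @i = 3
        CONTINUE;  -- skip the PRINT below and jump back to the WHILE condition
    IF @i = 5
        BREAK;     -- leave the loop; control moves to the statement after it
    PRINT @i;
END
PRINT 'Done';      -- prints 2, 4, then Done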
2009 Introduction to Business Intelligence – Important Terms & Definitions Business intelligence (BI) is a broad category of application programs and technologies for gathering, storing, analyzing, and providing access to data from various data sources, thus providing enterprise users with reliable and timely information and analysis for improved decision making. Difference Between Candidate Keys and Primary Key Candidate Key – A Candidate Key can be any column or a combination of columns that can qualify as unique key in database. There can be multiple Candidate Keys in one table. Each Candidate Key can qualify as Primary Key. Primary Key – A Primary Key is a column or a combination of columns that uniquely identify a record. Only one Candidate Key can be Primary Key. 2010 Taking Multiple Backup of Database in Single Command – Mirrored Database Backup I recently had a very interesting experience. In one of my recent consultancy works, I was told by our client that they are going to take the backup of the database and will also a copy of it at the same time. I expressed that it was surely possible if they were going to use a mirror command. In addition, they told me that whenever they take two copies of the database, the size of the database, is always reduced. Now this was something not clear to me, I said it was not possible and so I asked them to show me the script. Corrupted Backup File and Unsuccessful Restore The CTO, who was also present at the location, got very upset with this situation. He then asked when the last successful restore test was done. As expected, the answer was NEVER.There were no successful restore tests done before. During that time, I was present and I could clearly see the stress, confusion, carelessness and anger around me. I did not appreciate the feeling and I was pretty sure that no one in there wanted the atmosphere like me. 2011 TRACEWRITE – Wait Type – Wait Related to Buffer and Resolution SQL Trace is a SQL Server database engine technology which monitors specific events generated when various actions occur in the database engine. When any event is fired it goes through various stages as well various routes. One of the routes is Trace I/O Provider, which sends data to its final destination either as a file or rowset. DATEDIFF – Accuracy of Various Dateparts If you want to have accuracy in seconds, you need to use a different approach. In the first example, the accurate method is to find the number of seconds first and then divide it by 60 to convert it in minutes. Dedicated Access Control for SQL Server Express Edition http://www.youtube.com/watch?v=1k00z82u4OI Book Signing at SQLPASS 2012 Who I Am And How I Got Here – True Story as Blog Post If there was a shortcut to success – I want to know. I learnt SQL Server hard way and I am still learning. There are so many things, I have to learn. There is not enough time to learn everything which we want to learn. I am constantly working on it every day. I welcome you to join my journey as well. Please join me in my journey to learn SQL Server – more the merrier. Vacation, Travel and Study – A New Concept Even those who have advanced degrees and went to college for years, or even decades, find studying hard.  There is a difference between studying for a career and studying for a certification.  At least to get a degree there is a variety of subjects, with labs, exams, and practice problems to make things more interesting. 
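The DATEDIFF accuracy item above is easy to see with a small example (the dates are chosen by me):

DECLARE @start DATETIME = '2012-01-01 10:00:00';
DECLARE @end   DATETIME = '2012-01-01 10:02:30';

SELECT DATEDIFF(MINUTE, @start, @end) AS whole_minutes;           -- 2: counts minute boundaries only
SELECT DATEDIFF(SECOND, @start, @end) / 60.0 AS accurate_minutes; -- 2.5: seconds first, then divide by 60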
Order By Numeric Values Formatted as String We have a table which has a column containing alphanumeric data. The data always has first as an integer and later part as a string. The business need is to order the data based on the first part of the alphanumeric data which is an integer. Now the problem is that no matter how we use ORDER BY the result is not produced as expected. Let us understand this with an example. Resolving SQL Server Connection Errors – SQL in Sixty Seconds #030 – Video One of the most famous errors related to SQL Server is about connecting to SQL Server itself. Here is how it goes, most of the time developers have worked with SQL Server and knows pretty much every error which they face during development language. However, hardly they install fresh SQL Server. As the installation of the SQL Server is a rare occasion unless you are a DBA who is responsible for such an instance – the error faced during installations are pretty rare as well. http://www.youtube.com/watch?v=1k00z82u4OI Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
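For the "Order By Numeric Values Formatted as String" item, one possible approach (my own sketch, assuming the integer part is separated from the string by a space) looks like this:

DECLARE @t TABLE (val VARCHAR(20));
INSERT INTO @t VALUES ('1 One'), ('11 Eleven'), ('2 Two'), ('21 TwentyOne');

-- Plain ORDER BY sorts as text: 1, 11, 2, 21
SELECT val FROM @t ORDER BY val;

-- Order by the leading integer instead: 1, 2, 11, 21
SELECT val
FROM @t
ORDER BY CAST(LEFT(val, CHARINDEX(' ', val) - 1) AS INT);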

    Read the article

  • MySQL Cluster 7.3 Labs Release – Foreign Keys Are In!

    - by Mat Keep
Summary (aka TL/DR): Support for Foreign Key constraints has been one of the most requested feature enhancements for MySQL Cluster. We are therefore extremely excited to announce that Foreign Keys are part of the first Labs Release of MySQL Cluster 7.3 – available for download, evaluation and feedback now! (Select the mysql-cluster-7.3-labs-June-2012 build) In this blog, I will attempt to discuss the design rationale, implementation, configuration and steps to get started in evaluating the first MySQL Cluster 7.3 Labs Release. Pace of Innovation It was only a couple of months ago that we announced the General Availability (GA) of MySQL Cluster 7.2, delivering 1 billion Queries per Minute, with 70x higher cross-shard JOIN performance, Memcached NoSQL key-value API and cross-data center replication. This release has been a huge hit, with downloads and deployments quickly reaching record levels. The announcement of the first MySQL Cluster 7.3 Early Access lab release at today's MySQL Innovation Day event demonstrates the continued pace in Cluster development, and provides an opportunity for the community to evaluate and feedback on new features they want to see. What’s the Plan for MySQL Cluster 7.3? Well, Foreign Keys, as you may have gathered by now (!), and this is the focus of this first Labs Release. As with MySQL Cluster 7.2, we plan to publish a series of preview releases for 7.3 that will incrementally add new candidate features for a final GA release (subject to usual safe harbor statement below*), including: - New NoSQL APIs; - Features to automate the configuration and provisioning of multi-node clusters, on premise or in the cloud; - Performance and scalability enhancements; - Taking advantage of features in the latest MySQL 5.x Server GA. Design Rationale MySQL Cluster is designed as a “Not-Only-SQL” database. It combines attributes that enable users to blend the best of both relational and NoSQL technologies into solutions that deliver web scalability with 99.999% availability and real-time performance, including: Concurrent NoSQL and SQL access to the database; Auto-sharding with simple scale-out across commodity hardware; Multi-master replication with failover and recovery both within and across data centers; Shared-nothing architecture with no single point of failure; Online scaling and schema changes; ACID compliance and support for complex queries, across shards. Native support for Foreign Key constraints enables users to extend the benefits of MySQL Cluster into a broader range of use-cases, including: - Packaged applications in areas such as eCommerce and Web Content Management that prescribe databases with Foreign Key support. - In-house developments benefiting from Foreign Key constraints to simplify data models and eliminate the additional application logic needed to maintain data consistency and integrity between tables. 
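Before getting into the implementation and configuration details that follow, here is a minimal sketch (table, column and constraint names are hypothetical) of what a Foreign Key on NDB tables looks like in standard MySQL DDL:

CREATE TABLE parent (
  id INT NOT NULL PRIMARY KEY
) ENGINE=NDB;

CREATE TABLE child (
  id        INT NOT NULL PRIMARY KEY,
  parent_id INT,
  KEY idx_parent_id (parent_id)
) ENGINE=NDB;

-- Foreign Keys can be added (and dropped) online; the referential action is one of
-- CASCADE, RESTRICT, NO ACTION or SET NULL
ALTER TABLE child
  ADD CONSTRAINT fk_child_parent
  FOREIGN KEY (parent_id) REFERENCES parent (id)
  ON DELETE CASCADE;

-- ALTER TABLE child DROP FOREIGN KEY fk_child_parent;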
Implementation The Foreign Key functionality is implemented directly within MySQL Cluster’s data nodes, allowing any client API accessing the cluster to benefit from them – whether using SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA or HTTP/REST.) The core referential actions defined in the SQL:2003 standard are implemented: CASCADE RESTRICT NO ACTION SET NULL In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the Cluster continues to serve both read and write requests during the operation. An important difference to note with the Foreign Key implementation in InnoDB is that MySQL Cluster does not support the updating of Primary Keys from within the Data Nodes themselves - instead the UPDATE is emulated with a DELETE followed by an INSERT operation. Therefore an UPDATE operation will return an error if the parent reference is using a Primary Key, unless using CASCADE action, in which case the delete operation will result in the corresponding rows in the child table being deleted. The Engineering team plans to change this behavior in a subsequent preview release. Also note that when using InnoDB "NO ACTION" is identical to "RESTRICT". In the case of MySQL Cluster “NO ACTION” means “deferred check”, i.e. the constraint is checked before commit, allowing user-defined triggers to automatically make changes in order to satisfy the Foreign Key constraints. Configuration There is nothing special you have to do here – Foreign Key constraint checking is enabled by default. If you intend to migrate existing tables from another database or storage engine, for example from InnoDB, there are a couple of best practices to observe: 1. Analyze the structure of the Foreign Key graph and run the ALTER TABLE ENGINE=NDB in the correct sequence to ensure constraints are enforced 2. Alternatively drop the Foreign Key constraints prior to the import process and then recreate when complete. Getting Started Read this blog for a demonstration of using Foreign Keys with MySQL Cluster.  You can download MySQL Cluster 7.3 Labs Release with Foreign Keys today - (select the mysql-cluster-7.3-labs-June-2012 build) If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a singe host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3) Post any questions to the MySQL Cluster forum where our Engineering team will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu) And if you have any feedback, please post them to the Comments section of this blog. Summary MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. This first Labs Release of MySQL Cluster 7.3 gives you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and other features that you want to see in future releases. * Safe Harbor Statement This information is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. 
The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.

    Read the article

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic. My Approach to Data Mining Projects It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer’s data in advance. I don’t know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest to start with a proof-of-concept (POC) project. A POC takes something between 5 and 10 working days, and involves personnel from the customer’s site – either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question as well as the meaning of data at hand, while the IT expert should be familiar with the structure of data, how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert the most time consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project. If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach as it offers the establishment of a common background for all people involved, the understanding of how the algorithms work and the understanding of how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more. Once the data has been extracted, the customer’s SME (i.e. the analyst), and the IT expert assigned to the project will learn how to prepare the data in an efficient manner. Together with me, knowledge and expertise allow us to focus immediately on the most interesting attributes and identify any additional, calculated, ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and the quality of input data. I favor the customer’s decision of assigning additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly on the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems. Simultaneously with the data preparation the data overview is performed. The logic behind this task is the same – again I show to the teams involved the expected results, how to achieve them and what they mean. This is also done in multiple cycles as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. A specific objective of the data overview is of principal importance – it is represented by a simple star schema and a simple OLAP cube that will first of all simplify data discovery and interpretation of the results, and will also prove useful in the following tasks. 
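As an illustration of the "simple star schema" used for the data overview, a minimal sketch is shown below; every table and column name here is hypothetical and only meant to convey the shape of such a schema, not any customer's actual model:

-- Purely illustrative star schema for the data-overview phase
CREATE TABLE dbo.DimCustomer (
  CustomerKey INT IDENTITY(1,1) PRIMARY KEY,
  CustomerId  NVARCHAR(20) NOT NULL,
  Region      NVARCHAR(50) NULL
);

CREATE TABLE dbo.DimDate (
  DateKey INT PRIMARY KEY,   -- e.g. 20140131
  [Date]  DATE NOT NULL
);

CREATE TABLE dbo.FactTransaction (
  TransactionKey BIGINT IDENTITY(1,1) PRIMARY KEY,
  DateKey        INT NOT NULL REFERENCES dbo.DimDate (DateKey),
  CustomerKey    INT NOT NULL REFERENCES dbo.DimCustomer (CustomerKey),
  Amount         DECIMAL(18, 2) NOT NULL,
  IsFlaggedFraud BIT NOT NULL DEFAULT (0)  -- known or suspected fraud label
);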
The presence of the customer’s SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of a project without the customer’s SME. After the data preparation and when the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models, and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much more appropriate alternative is possible if the initial training was also performed, because it allows the customer’s personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately by using several different techniques. One of the techniques includes evaluation over time, where we use an OLAP cube. After evaluating the models, we select the most appropriate model to be deployed for a production test; this allows the team to understand the deployment process. There are many possibilities of deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers. Even from a POC, the customer will receive lots of benefits, all at the sole risk of spending money and time for a single 5 to 10 day project: The customer learns the basic patterns of frauds and fraud detection The customer learns how to do the entire cycle with their own people, only relying on me for the most complex problems The customer’s analysts learn how to perform much more in-depth analyses than they ever thought possible The customer’s IT experts learn how to perform data extraction and preparation much more efficiently than they did before All of the attendees of this training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production The POC output for a smaller company or for a subsidiary of a larger company can actually be considered a finished, production-ready solution It is possible to utilize the results of the POC project at subsidiary level, as a finished POC project for the entire enterprise Typically, the project results in several important “side effects” Improved data quality Improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization Because eventually more minds get to be involved in the enterprise, the company should expect more and better fraud detection patterns After the POC project is completed as described above, the actual project would not need months of engagement from my side. This is possible due to our preference to transfer the knowledge onto the customer’s employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly. 
I usually expect to perform the following tasks:
- Establish the final infrastructure to measure the efficiency of the deployed models
- Deploy the models in additional scenarios:
  - through reports
  - by including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings (a minimal DMX sketch follows at the end of this post)
- Include data mining models as dimensions in OLAP cubes, if this was not already done during the POC project
- Create smart ETL applications that divert suspicious data for immediate or later inspection
- I would also offer to investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary while a central system is available as well
- Of course, for the actual project, I would repeat the data and model preparation as needed

It is virtually impossible to tell in advance how much time the deployment would take before we decide, together with the customer, what exactly the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still take only an additional 5 to 10 days.

The approximate timeline for the POC project is as follows:
- 1-2 days of training
- 2-3 days for data preparation and data overview
- 2 days for creating and evaluating the models
- 1 day for initial preparation of the continuous learning infrastructure
- 1 day for presentation of the results and discussion of further actions

Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we spent just one more hour on data preparation, or created just one more model, we could get better patterns and predictions. However, we simply must stop somewhere, and the best way to do this, in my experience, is to restrict the time spent on the project in advance, by agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently, improving the models and predictions over time without the need for constant engagement with me.
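As a hedged illustration of the DMX-based early-warning deployment mentioned in the task list above, a singleton prediction query issued from an OLTP application could look roughly like the following. The mining model name ([Fraud Detection Model]) and the input columns are invented for this sketch; the real names would come from the models built during the project, and the query would typically be sent to Analysis Services through an ADOMD.NET connection.

-- DMX sketch: score a single incoming transaction against a hypothetical mining model
SELECT
    PredictProbability([Is Fraud]) AS FraudProbability
FROM
    [Fraud Detection Model]
NATURAL PREDICTION JOIN
(
    SELECT 187.50   AS [Amount],
           'Online' AS [Channel],
           'POS-17' AS [Terminal]
) AS t

The application would compare the returned probability against a threshold agreed with the business and divert the transaction for inspection when it is exceeded.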

    Read the article

  • Full-text Indexing Books Online

    - by Most Valuable Yak (Rob Volk)
    While preparing for a recent SQL Saturday presentation, I was struck by a crazy idea (shocking, I know): Could someone import the content of SQL Server Books Online into a database and apply full-text indexing to it?  The answer is yes, and it's really quite easy to do.

The first step is finding the installed help files.  If you have SQL Server 2012, BOL is installed under the Microsoft Help Library.  You can find the install location by opening SQL Server Books Online and clicking the gear icon for the Help Library Manager.  When the new window pops up, click the Settings link and you'll get the following: You'll see the path under Library Location. Once you navigate to that path you'll have to drill down a little further, to C:\ProgramData\Microsoft\HelpLibrary\content\Microsoft\store.  This is where the help file content is kept if you downloaded it for offline use. Depending on which products you've downloaded help for, you may see a few hundred files.  Fortunately they're named well and you can easily find the "SQL_Server_Denali_Books_Online_" files.  We are interested in the .MSHC files only, and can skip the Installation and Developer Reference files.

Despite the .MSHC extension, these files are compressed with the standard Zip format, so your favorite archive utility (WinZip, 7Zip, WinRar, etc.) can open them.  When you do, you'll see a few thousand files in the archive.  We are only interested in the .htm files, but there's no harm in extracting all of them to a folder.  7zip provides a command-line utility and the following will extract to a D:\SQLHelp folder previously created:

7z e -oD:\SQLHelp "C:\ProgramData\Microsoft\HelpLibrary\content\Microsoft\store\SQL_Server_Denali_Books_Online_B780_SQL_110_en-us_1.2.mshc" *.htm

Well that's great Rob, but how do I put all those files into a full-text index? I'll tell you in a second, but first we have to set up a few things on the database side.  I'll be using a database named Explore (you can certainly change that) and the following setup is a fragment of the script I used in my presentation:

USE Explore;
GO
CREATE SCHEMA help AUTHORIZATION dbo;
GO
-- Create default fulltext catalog for later FT indexes
CREATE FULLTEXT CATALOG FTC AS DEFAULT;
GO
CREATE TABLE help.files(
    file_id int not null IDENTITY(1,1) CONSTRAINT PK_help_files PRIMARY KEY,
    path varchar(256) not null CONSTRAINT UNQ_help_files_path UNIQUE,
    doc_type varchar(6) DEFAULT('.xml'),
    content varbinary(max) not null);

CREATE FULLTEXT INDEX ON help.files(content TYPE COLUMN doc_type LANGUAGE 1033)
    KEY INDEX PK_help_files;

This will give you a table, default full-text catalog, and full-text index on that table for the content you're going to insert.  I'll be using the command line again for this; it's the easiest method I know:

for %a in (D:\SQLHelp\*.htm) do sqlcmd -S. -E -d Explore -Q"set nocount on;insert help.files(path,content) select '%a', cast(c as varbinary(max)) from openrowset(bulk '%a', SINGLE_CLOB) as c(c)"

You'll need to copy and run that as one line in a command prompt.  I'll explain what this does while you run it and watch several thousand files get imported: The "for" command allows you to loop over a collection of items.  In this case we want all the .htm files in the D:\SQLHelp folder.  For each file it finds, it will assign the full path and file name to the %a variable.  In the "do" clause, we'll specify another command to be run for each iteration of the loop.  I make a call to "sqlcmd" in order to run a SQL statement.  
I pass in the name of the server (-S.), where "." represents the local default instance. I specify -d Explore as the database, and -E for trusted connection.  I then use -Q to run a query that I enclose in double quotes. The query uses OPENROWSET(BULK…SINGLE_CLOB) to open the file as a data source, and to treat it as a single character large object.  In order for full-text indexing to work properly, I have to convert the text content to varbinary. I then INSERT these contents along with the full path of the file into the help.files table created earlier.  This process continues for each file in the folder, creating one new row in the table. And that's it! 5 SQL Statements and 2 command line statements to unzip and import SQL Server Books Online!  In case you're wondering why I didn't use FILESTREAM or FILETABLE, it's simply because I haven't learned them…yet. I may return to this blog after I figure that out and update it with the steps to do so.  I believe that will make it even easier. In the spirit of exploration, I'll leave you to work on some fulltext queries of this content.  I also recommend playing around with the sys.dm_fts_xxxx DMVs (I particularly like sys.dm_fts_index_keywords, it's pretty interesting).  There are additional example queries in the download material for my presentation linked above. Many thanks to Kevin Boles (t) for his advice on (re)checking the content of the help files.  Don't let that .htm extension fool you! The 2012 help files are actually XML, and you'd need to specify '.xml' in your document type column in order to extract the full-text keywords.  (You probably noticed this in the default definition for the doc_type column.)  You can query sys.fulltext_document_types to get a complete list of the types that can be full-text indexed. I also need to thank Hilary Cotter for giving me the original idea. I believe he used MSDN content in a full-text index for an article from waaaaaaaaaaay back, that I can't find now, and had forgotten about until just a few days ago.  He is also co-author of Pro Full-Text Search in SQL Server 2008, which I highly recommend.  He also has some FTS articles on Simple Talk: http://www.simple-talk.com/sql/learn-sql-server/sql-server-full-text-search-language-features/ http://www.simple-talk.com/sql/learn-sql-server/sql-server-full-text-search-language-features,-part-2/
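As a small starting point for the full-text exploration suggested above, the following is a minimal sketch of a CONTAINS query against the imported pages plus a peek at the extracted keywords. It assumes the help.files table and the Explore database created earlier (run it in that database), and the search terms are only examples.

-- Find imported Books Online pages that mention FILESTREAM or FILETABLE
SELECT f.file_id, f.path
FROM help.files AS f
WHERE CONTAINS(f.content, '"FILESTREAM" OR "FILETABLE"');

-- Peek at the most frequent keywords the full-text index extracted
SELECT TOP (50) display_term, document_count
FROM sys.dm_fts_index_keywords(DB_ID('Explore'), OBJECT_ID('help.files'))
ORDER BY document_count DESC;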

    Read the article

  • Refactoring a Single Rails Model with large methods & long join queries trying to do everything

    - by Kelseydh
    I have a working Ruby on Rails Model that I suspect is inefficient, hard to maintain, and full of unnecessary SQL join queries. I want to optimize and refactor this Model (Quiz.rb) to comply with Rails best practices, but I'm not sure how I should do it. The Rails app is a game that has Missions with many Stages. Users complete Stages by answering Questions that have correct or incorrect Answers. When a User tries to complete a stage by answering questions, the User gets a Quiz entry with many Attempts. Each Attempt records an Answer submitted for that Question within the Stage. A user completes a stage or mission by getting every Attempt correct, and their progress is tracked by adding a new entry to the UserMission & UserStage join tables. All of these features work, but unfortunately the Quiz.rb Model has been twisted to handle almost all of it exclusively. The callbacks began at 'Quiz.rb', and because I wasn't sure how to leave the Quiz Model during a multi-model update, I resorted to using Rails Console to have the @quiz instance variable via self.some_method do all the heavy lifting to retrieve every data value for the game's business logic; resulting in large extended join queries that "dance" all around the Database schema. The Quiz.rb Model that Smells: class Quiz < ActiveRecord::Base belongs_to :user has_many :attempts, dependent: :destroy before_save :check_answer before_save :update_user_mission_and_stage accepts_nested_attributes_for :attempts, :reject_if => lambda { |a| a[:answer_id].blank? }, :allow_destroy => true #Checks every answer within each quiz, adding +1 for each correct answer #within a stage quiz, and -1 for each incorrect answer def check_answer stage_score = 0 self.attempts.each do |attempt| if attempt.answer.correct? == true stage_score += 1 elsif attempt.answer.correct == false stage_score - 1 end end stage_score end def winner return true end def update_user_mission_and_stage ####### #Step 1: Checks if UserMission exists, finds or creates one. #if no UserMission for the current mission exists, creates a new UserMission if self.user_has_mission? == false @user_mission = UserMission.new(user_id: self.user.id, mission_id: self.current_stage.mission_id, available: true) @user_mission.save else @user_mission = self.find_user_mission end ####### #Step 2: Checks if current UserStage exists, stops if true to prevent duplicate entry if self.user_has_stage? @user_mission.save return true else ####### ##Step 3: if step 2 returns false: ##Initiates UserStage creation instructions #checks for winner (winner actions need to be defined) if they complete last stage of last mission for a given orientation if self.passed? && self.is_last_stage? && self.is_last_mission? create_user_stage_and_update_user_mission self.winner #NOTE: The rest are the same, but specify conditions that are available to add badges or other actions upon those conditions occurring: ##if user completes first stage of a mission elsif self.passed? && self.is_first_stage? && self.is_first_mission? create_user_stage_and_update_user_mission #creates user badge for finishing first stage of first mission self.user.add_badge(5) self.user.activity_logs.create(description: "granted first-stage badge", type_event: "badge", value: "first-stage") #If user completes last stage of a given mission, creates a new UserMission elsif self.passed? && self.is_last_stage? && self.is_first_mission? 
create_user_stage_and_update_user_mission #creates user badge for finishing first mission self.user.add_badge(6) self.user.activity_logs.create(description: "granted first-mission badge", type_event: "badge", value: "first-mission") elsif self.passed? create_user_stage_and_update_user_mission else self.passed? == false return true end end end #Creates a new UserStage record in the database for a successful Quiz question passing def create_user_stage_and_update_user_mission @nu_stage = @user_mission.user_stages.new(user_id: self.user.id, stage_id: self.current_stage.id) @nu_stage.save @user_mission.save self.user.add_points(50) end #Boolean that defines passing a stage as answering every question in that stage correct def passed? self.check_answer >= self.number_of_questions end #Returns the number of questions asked for that stage's quiz def number_of_questions self.attempts.first.answer.question.stage.questions.count end #Returns the current_stage for the Quiz, routing through 1st attempt in that Quiz def current_stage self.attempts.first.answer.question.stage end #Gives back the position of the stage relative to its mission. def stage_position self.attempts.first.answer.question.stage.position end #will find the user_mission for the current user and stage if it exists def find_user_mission self.user.user_missions.find_by_mission_id(self.current_stage.mission_id) end #Returns true if quiz was for the last stage within that mission #helpful for triggering actions related to a user completing a mission def is_last_stage? self.stage_position == self.current_stage.mission.stages.last.position end #Returns true if quiz was for the first stage within that mission #helpful for triggering actions related to a user completing a mission def is_first_stage? self.stage_position == self.current_stage.mission.stages_ordered.first.position end #Returns true if current user has a UserMission for the current stage def user_has_mission? self.user.missions.ids.include?(self.current_stage.mission.id) end #Returns true if current user has a UserStage for the current stage def user_has_stage? self.user.stages.include?(self.current_stage) end #Returns true if current user is on the last mission based on position within a given orientation def is_first_mission? self.user.missions.first.orientation.missions.by_position.first.position == self.current_stage.mission.position end #Returns true if current user is on the first stage & mission of a given orientation def is_last_mission? self.user.missions.first.orientation.missions.by_position.last.position == self.current_stage.mission.position end end My Question Currently my Rails server takes roughly 500ms to 1 sec to process single @quiz.save action. I am confident that the slowness here is due to sloppy code, not bad Database ERD design. What does a better solution look like? And specifically: Should I use join queries to retrieve values like I did here, or is it better to instantiate new objects within the model instead? Or am I missing a better solution? How should update_user_mission_and_stage be refactored to follow best practices? Relevant Code for Reference: quizzes_controller.rb w/ Controller Route Initiating Callback: class QuizzesController < ApplicationController before_action :find_stage_and_mission before_action :find_orientation before_action :find_question def show end def create @user = current_user @quiz = current_user.quizzes.new(quiz_params) if @quiz.save if @quiz.passed? if @mission.next_mission.nil? && @stage.next_stage.nil? 
redirect_to root_path, notice: "Congratulations, you have finished the last mission!" elsif @stage.next_stage.nil? redirect_to [@mission.next_mission, @mission.first_stage], notice: "Correct! Time for Mission #{@mission.next_mission.position}", info: "Starting next mission" else redirect_to [@mission, @stage.next_stage], notice: "Answer Correct! You passed the stage!" end else redirect_to [@mission, @stage], alert: "You didn't get every question right, please try again." end else redirect_to [@mission, @stage], alert: "Sorry. We were unable to save your answer. Please contact the admministrator." end @questions = @stage.questions.all end private def find_stage_and_mission @stage = Stage.find(params[:stage_id]) @mission = @stage.mission end def find_question @question = @stage.questions.find_by_id params[:id] end def quiz_params params.require(:quiz).permit(:user_id, :attempt_id, {attempts_attributes: [:id, :quiz_id, :answer_id]}) end def find_orientation @orientation = @mission.orientation @missions = @orientation.missions.by_position end end Overview of Relevant ERD Database Relationships: Mission - Stage - Question - Answer - Attempt <- Quiz <- User Mission - UserMission <- User Stage - UserStage <- User Other Models: Mission.rb class Mission < ActiveRecord::Base belongs_to :orientation has_many :stages has_many :user_missions, dependent: :destroy has_many :users, through: :user_missions #SCOPES scope :by_position, -> {order(position: :asc)} def stages_ordered stages.order(:position) end def next_mission self.orientation.missions.find_by_position(self.position.next) end def first_stage next_mission.stages_ordered.first end end Stage.rb: class Stage < ActiveRecord::Base belongs_to :mission has_many :questions, dependent: :destroy has_many :user_stages, dependent: :destroy has_many :users, through: :user_stages accepts_nested_attributes_for :questions, reject_if: :all_blank, allow_destroy: true def next_stage self.mission.stages.find_by_position(self.position.next) end end Question.rb class Question < ActiveRecord::Base belongs_to :stage has_many :answers, dependent: :destroy accepts_nested_attributes_for :answers, :reject_if => lambda { |a| a[:body].blank? }, :allow_destroy => true end Answer.rb: class Answer < ActiveRecord::Base belongs_to :question has_many :attempts, dependent: :destroy end Attempt.rb: class Attempt < ActiveRecord::Base belongs_to :answer belongs_to :quiz end User.rb: class User < ActiveRecord::Base belongs_to :school has_many :activity_logs has_many :user_missions, dependent: :destroy has_many :missions, through: :user_missions has_many :user_stages, dependent: :destroy has_many :stages, through: :user_stages has_many :orientations, through: :school has_many :quizzes, dependent: :destroy has_many :attempts, through: :quizzes def latest_stage_position self.user_missions.last.user_stages.last.stage.position end end UserMission.rb class UserMission < ActiveRecord::Base belongs_to :user belongs_to :mission has_many :user_stages, dependent: :destroy end UserStage.rb class UserStage < ActiveRecord::Base belongs_to :user belongs_to :stage belongs_to :user_mission end

    Read the article

  • ASP.NET WebAPI Security 3: Extensible Authentication Framework

    - by Your DisplayName here!
    In my last post, I described the identity architecture of ASP.NET Web API. The short version was, that Web API (beta 1) does not really have an authentication system on its own, but inherits the client security context from its host. This is fine in many situations (e.g. AJAX style callbacks with an already established logon session). But there are many cases where you don’t use the containing web application for authentication, but need to do it yourself. Examples of that would be token based authentication and clients that don’t run in the context of the web application (e.g. desktop clients / mobile). Since Web API provides a nice extensibility model, it is easy to implement whatever security framework you want on top of it. My design goals were: Easy to use. Extensible. Claims-based. ..and of course, this should always behave the same, regardless of the hosting environment. In the rest of the post I am outlining some of the bits and pieces, So you know what you are dealing with, in case you want to try the code. At the very heart… is a so called message handler. This is a Web API extensibility point that gets to see (and modify if needed) all incoming and outgoing requests. Handlers run after the conversion from host to Web API, which means that handler code deals with HttpRequestMessage and HttpResponseMessage. See Pedro’s post for more information on the processing pipeline. This handler requires a configuration object for initialization. Currently this is very simple, it contains: Settings for the various authentication and credential types Settings for claims transformation Ability to block identity inheritance from host The most important part here is the credential type support, but I will come back to that later. The logic of the message handler is simple: Look at the incoming request. If the request contains an authorization header, try to authenticate the client. If this is successful, create a claims principal and populate the usual places. If not, return a 401 status code and set the Www-Authenticate header. Look at outgoing response, if the status code is 401, set the Www-Authenticate header. Credential type support Under the covers I use the WIF security token handler infrastructure to validate credentials and to turn security tokens into claims. The idea is simple: an authorization header consists of two pieces: the schema and the actual “token”. My configuration object allows to associate a security token handler with a scheme. This way you only need to implement support for a specific credential type, and map that to the incoming scheme value. The current version supports HTTP Basic Authentication as well as SAML and SWT tokens. (I needed to do some surgery on the standard security token handlers, since WIF does not directly support string-ified tokens. The next version of .NET will fix that, and the code should become simpler then). You can e.g. use this code to hook up a username/password handler to the Basic scheme (the default scheme name for Basic Authentication). config.Handler.AddBasicAuthenticationHandler( (username, password) => username == password); You simply have to provide a password validation function which could of course point back to your existing password library or e.g. membership. The following code maps a token handler for Simple Web Tokens (SWT) to the Bearer scheme (the currently favoured scheme name for OAuth2). 
You simply have to specify the issuer name, realm and shared signature key:

config.Handler.AddSimpleWebTokenHandler(
    "Bearer",
    "http://identity.thinktecture.com/trust",
    Constants.Realm,
    "Dc9Mpi3jaaaUpBQpa/4R7XtUsa3D/ALSjTVvK8IUZbg=");

For certain integration scenarios it is very useful if your Web API can consume SAML tokens. This is also easily accomplished. The following code uses the standard WIF API to configure the usual SAMLisms like issuer, audience, service certificate and certificate validation. Both SAML 1.1 and 2.0 are supported.

var registry = new ConfigurationBasedIssuerNameRegistry();
registry.AddTrustedIssuer(
    "d1 c5 b1 25 97 d0 36 94 65 1c e2 64 fe 48 06 01 35 f7 bd db",
    "ADFS");

var adfsConfig = new SecurityTokenHandlerConfiguration();
adfsConfig.AudienceRestriction.AllowedAudienceUris.Add(
    new Uri(Constants.Realm));
adfsConfig.IssuerNameRegistry = registry;
adfsConfig.CertificateValidator = X509CertificateValidator.None;

// token decryption (read from configuration section)
adfsConfig.ServiceTokenResolver =
    FederatedAuthentication.ServiceConfiguration.CreateAggregateTokenResolver();

config.Handler.AddSaml11SecurityTokenHandler("SAML", adfsConfig);

Claims Transformation

After successful authentication, if configured, the standard WIF ClaimsAuthenticationManager is called to run claims transformation and validation logic. This stage is used to transform the "technical" claims from the security token into application claims. You can either have separate transformation logic, or share one, e.g. with the containing web application. That's just a matter of configuration.

Adding the authentication handler to a Web API application

In the spirit of Web API this is done in code, e.g. in global.asax for web hosting:

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    ConfigureApis(GlobalConfiguration.Configuration);
    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);
    BundleTable.Bundles.RegisterTemplateBundles();
}

private void ConfigureApis(HttpConfiguration configuration)
{
    configuration.MessageHandlers.Add(
        new AuthenticationHandler(ConfigureAuthentication()));
}

private AuthenticationConfiguration ConfigureAuthentication()
{
    var config = new AuthenticationConfiguration
    {
        // sample claims transformation for consultants sample, comment out to see raw claims
        ClaimsAuthenticationManager = new ApiClaimsTransformer(),

        // value of the www-authenticate header;
        // if not set, the first scheme added to the handler collection is used
        DefaultAuthenticationScheme = "Basic"
    };

    // add token handlers - see above
    return config;
}

You can find the full source code and some samples here. In the next post I will describe some of the samples in the download, and then move on to authorization. HTH

    Read the article

  • CodePlex Daily Summary for Sunday, September 02, 2012

    CodePlex Daily Summary for Sunday, September 02, 2012Popular ReleasesThisismyusername's codeplex page.: HTML5 Multitouch Example - Fruit Ninja in HTML5: This is an example of how you could create a game such as Fruit Ninja using HTML5's multitouch capabilities. This example isn't responsive enough, so I will be working on that, and it doesn't have great graphics, either. If I had my own webpage, I could store some graphics and upload the game there and it might look halfway decent, but here the fruits are just circles. I hope you enjoy reading the source code anyway.GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixRuminate XNA 4.0 GUI: Release 1.1.1: Fixed bugs with Slider and TextBox. Added ComboBox.Confuser: Confuser build 76542: This is a build of changeset 76542.SharePoint Column & View Permission: SharePoint Column and View Permission v1.2: Version 1.2 of this project. If you will find any bugs please let me know at enti@zoznam.sk or post your findings in Issue TrackerMihmojsos OS: Mihmojsos OS 3 (Smart Rabbit): !Mihmojsos OS 3 Smart Rabbit Mihmojsos Smart Rabbit is now availableDotNetNuke Translator: 01.00.00 Beta: First release of the project.YNA: YNA 0.2 alpha: Wath's new since 0.1 alpha ? A lot of changes but there are the most interresting : StateManager is now better and faster Mouse events for all YnObjects (Sprites, Images, texts) A really big improvement for YnGroup Gamepad support And the news : Tiled Map support (need refactoring) Isometric tiled map support (need refactoring) Transition effect like "FadeIn" and "FadeOut" (YnTransition) Timers (YnTimer) Path management (YnPath, need more refactoring) Downloads All downloads...Audio Pitch & Shift: Audio Pitch And Shift 5.1.0.2: fixed several issues with streaming modeUrlPager: UrlPager 1.2: Fixed bug in which url parameters will lost after paging; ????????url???bug;Sofire Suite: Sofire v1.5.0.0: Sofire v1.5.0.0 ?? ???????? ?????: 1、?? 2、????EntLib.com????????: EntLib.com???????? v3.0: EntLib eCommerce Solution ???Microsoft .Net Framework?????????????????????。Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.Math.NET Numerics: Math.NET Numerics v2.2.1: Major linear algebra rework since v2.1, now available on Codeplex as well (previous versions were only available via NuGet). Since v2.2.0: Student-T density more robust for very large degrees of freedom Sparse Kronecker product much more efficient (now leverages sparsity) Direct access to raw matrix storage implementations for advanced extensibility Now also separate package for signed core library with a strong name (we dropped strong names in v2.2.0) Also available as NuGet packages...Microsoft SQL Server Product Samples: Database: AdventureWorks Databases – 2012, 2008R2 and 2008: About this release This release consolidates AdventureWorks databases for SQL Server 2012, 2008R2 and 2008 versions to one page. Each zip file contains an mdf database file and ldf log file. This should make it easier to find and download AdventureWorks databases since all OLTP versions are on one page. There are no database schema changes. For each release of the product, there is a light-weight and full version of the AdventureWorks sample database. 
The light-weight version is denoted by ...Christoc's DotNetNuke Module Development Template: DotNetNuke Project Templates V1.1 for VS2012: This release is specifically for Visual Studio 2012 Support, distributed through the Visual Studio Extensions gallery at http://visualstudiogallery.msdn.microsoft.com/ After you build in Release mode the installable packages (source/install) can be found in the INSTALL folder now, within your module's folder, not the packages folder anymore Check out the blog post for all of the details about this release. http://www.dotnetnuke.com/Resources/Blogs/EntryId/3471/New-Visual-Studio-2012-Projec...Home Access Plus+: v8.0: v8.0.0901.1830 RELEASE CHANGED TO BETA Any issues, please log them on http://www.edugeek.net/forums/home-access-plus/ This is full release, NO upgrade ZIP will be provided as most files require replacing. To upgrade from a previous version, delete everything but your AppData folder, extract all but the AppData folder and run your HAP+ install Documentation is supplied in the Web Zip The Quota Services require executing a script to register the service, this can be found in there install ...Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3406 (September 2012): New features: Extended ReflectionClass libxml error handling, constants DateTime::modify(), DateTime::getOffset() TreatWarningsAsErrors MSBuild option OnlyPrecompiledCode configuration option; allows to use only compiled code Fixes: ArgsAware exception fix accessing .NET properties bug fix ASP.NET session handler fix for OutOfProc mode DateTime methods (WordPress posting fix) Phalanger Tools for Visual Studio: Visual Studio 2010 & 2012 New debugger engine, PHP-like debugging ...MabiCommerce: MabiCommerce 1.0.1: What's NewSetup now creates shortcuts Fix spelling errors Minor enhancement to the Map window.ScintillaNET: ScintillaNET 2.5.2: This release has been built from the 2.5 branch. Version 2.5.2 is functionally identical to the 2.5.1 release but also includes the XML documentation comments file generated by Visual Studio. It is not 100% comprehensive but it will give you Visual Studio IntelliSense for a large part of the API. Just make sure the ScintillaNET.xml file is in the same folder as the ScintillaNET.dll reference you're using in your projects. (The XML file does not need to be distributed with your application)....New ProjectsATSV: this is a student project for making a new silverlight UI Bookmark Collector: This project is a best practice example of how to use content items in DotNetNuke. It allows you to quickly and easily manage a listing of external links.BPVotingmachine: BP Vote SystemClean My Space: Sort your files in a fun and fast! With Clean My Space you can!CutePlatform: CutePlatform is a platform game based around the PlanetCute graphics pack from Daniel cook, make him a visit in www.lostgardem.comDancTeX: This project is targeting the integration of LaTeX into VisusalStudio. Epi Info™ Companion for Android: A mobile companion to the Epi Info™ 7 desktop tool for epidemiologic data collection and analysis.Flucene: Object Document Mapper for Lucene.Netfluentserializer: FluentSerializer is a library for .NET usable to create serialize/deserialize data through its fluent interface. 
The methods it creates are compiled.hongjiapp: hongjiappidealthings educational comprehensive administration system: ?????????????????????????????????????????????.Java Accounting Library: The project aims at providing a Financial Accounting Java Library which may be integrated to any other Java Application independent of its Backend Database.mycnblogs: mycnblogsNETPack: Lightweight and flexible packer for .NETRandom Useful Code: This project is where I will store various useful classes I have built over time. Only the code will be provided, no Library or the like.Suleymaniye Tavimi: Namaz vakitleri hesaplama uygulamasidir. Istenilen yer için hesaplama yapar.

    Read the article

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
    This is a post for a great idea called SQL University started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database related topics contributed by several smart people all over the world. So this week is mine and we'll be talking about database testing and refactoring. In 3 posts we'll cover:

SQLU part 1 - What and why of database testing
SQLU part 2 - What and why of database refactoring
SQLU part 3 - Tools of the trade

This is the second part of the series and in it we'll take a look at what database refactoring is and why we do it.

Why refactor a database

To know why we refactor, we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module's input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven't broken the input/output behavior while refactoring. If you haven't got any, go back and read my post on the matter. Then start writing them. The next thing you need is a code module: views, UDFs and stored procedures. With direct table access we can kiss fast and sweet refactoring goodbye. One more reason to have a database abstraction layer. And no, ORMs don't fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don't be one of them and resist the lure of the dark side. And it's a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus.

Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black box scenario to the outside world. If we upgrade the car engine it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white box tests will break when we refactor.

What to refactor in a database

Refactoring databases doesn't happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.

Adding or removing database schema objects

Adding, removing or changing table columns in any way, adding constraints, keys, etc… All of these can be counted as internal changes not visible to the data consumer. But each of these carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint shows duplicated data that shouldn't exist. Foreign keys break a truncate table command executed from an application that runs once a month. All these scenarios are very real and can happen. With a proper database abstraction layer fully covered with black box tests we can make sure something like that does not happen (hopefully at all).

Changing physical structures

Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database. But the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders of magnitude performance improvement. Won't that make users happy?
But what if that index causes our write operations to crawl to a stop? Again, we have to test this. There are a lot of things to think about and have tests for. Without tests we can't do successful refactoring!

Fixing bad code

We all have some bad code in our systems. We usually refer to such code as code smells, as they violate good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc… Each of those is a huge code smell and can result in major code changes. Take SELECT * for example. If we remove a column from a table, the client using that SELECT * statement won't have a clue about it until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Thomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail. Go read about it, it's informative. Refactoring this includes replacing the * with column names and most likely a change to the application using the database.

Breaking apart huge stored procedures

Have you ever seen a stored procedure that was 2000 lines long? I have. It's not pretty. It hurts the eyes and sucks the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I'm willing to bet that 100% of the time they don't have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database. That's the application's part. Refactoring such behemoths requires writing lots of edge case tests for the stored procedure's input/output behavior and only then starting to refactor. First we split the logic inside into smaller parts like new stored procedures and UDFs. Those then get called from the master stored procedure. Once we've successfully modularized the database code, it's best to transfer that logic into the applications consuming it. This only leaves the stored procedure with common data manipulation logic. Of course this isn't always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we've actually improved the codebase in some way.

Refactoring is not a popular chore amongst developers or managers. The former don't like fixing old code, the latter can't see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I've given you some ideas on how to get started. In the last post of the series we'll take a look at the tools to use and an example of testing and refactoring.
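To make the SELECT * smell and the view refresh problem above a little more concrete, here is a minimal sketch with invented table, view and column names; it is not code from the original post, just an illustration of the refactoring step and of sp_refreshview.

-- Hypothetical code smell: a view defined with SELECT *
CREATE VIEW dbo.vCustomerSmell AS
SELECT * FROM dbo.Customer;
GO

-- If a column is later added to dbo.Customer, the view's column metadata stays
-- stale until the view is explicitly refreshed:
EXEC sys.sp_refreshview N'dbo.vCustomerSmell';
GO

-- Refactored version: an explicit column list keeps the view's contract stable
ALTER VIEW dbo.vCustomerSmell AS
SELECT CustomerId, FirstName, LastName, Email
FROM dbo.Customer;
GO

-- And before adding a unique constraint (another refactoring mentioned earlier),
-- surface the duplicates that would make it fail:
SELECT Email, COUNT(*) AS DuplicateCount
FROM dbo.Customer
GROUP BY Email
HAVING COUNT(*) > 1;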

    Read the article

  • How-to delete a tree node using the context menu

    - by frank.nimphius
    Hierarchical trees in Oracle ADF make use of View Accessors, which means that only the top level node needs to be exposed as a View Object instance on the ADF Business Components Data Model. This also means that only the top level node has a representation in the PageDef file as a tree binding and iterator binding reference. Detail nodes are accessed through tree rule definitions that use the accessor mentioned above (or nested collections in the case of POJO or EJB business services). The tree component is configured for single node selection, which however can be declaratively changed for users to press the ctrl key and selecting multiple nodes. In the following, I explain how to create a context menu on the tree for users to delete the selected tree nodes. For this, the context menu item will access a managed bean, which then determines the selected node(s), the internal ADF node bindings and the rows they represent. As mentioned, the ADF Business Components Data Model only needs to expose the top level node data sources, which in this example is an instance of the Locations View Object. For the tree to work, you need to have associations defined between entities, which usually is done for you by Oracle JDeveloper if the database tables have foreign keys defined Note: As a general hint of best practices and to simplify your life: Make sure your database schema is well defined and designed before starting your development project. Don't treat the database as something organic that grows and changes with the requirements as you proceed in your project. Business service refactoring in response to database changes is possible, but should be treated as an exception, not the rule. Good database design is a necessity – even for application developers – and nothing evil. To create the tree component, expand the Data Controls panel and drag the View Object collection to the view. From the context menu, select the tree component entry and continue with defining the tree rules that make up the hierarchical structure. As you see, when pressing the green plus icon  in the Edit Tree Binding  dialog, the data structure, Locations -  Departments – Employees in my sample, shows without you having created a View Object instance for each of the nodes in the ADF Business Components Data Model. After you configured the tree structure in the Edit Tree Binding dialog, you press OK and the tree is created. Select the tree in the page editor and open the Structure Window (ctrl+shift+S). In the Structure window, expand the tree node to access the conextMenu facet. Use the right mouse button to insert a Popup  into the facet. Repeat the same steps to insert a Menu and a Menu Item into the Popup you created. The Menu item text should be changed to something meaningful like "Delete". Note that the custom menu item later is added to the context menu together with the default context menu options like expand and expand all. To define the action that is executed when the menu item is clicked on, you select the Action Listener property in the Property Inspector and click the arrow icon followed by the Edit menu option. Create or select a managed bean and define a method name for the action handler. Next, select the tree component and browse to its binding property in the Property Inspector. Again, use the arrow icon | Edit option to create a component binding in the same managed bean that has the action listener defined. 
The tree handle is used in the action listener code, which is shown below: public void onTreeNodeDelete(ActionEvent actionEvent) {   //access the tree from the JSF component reference created   //using the af:tree "binding" property. The "binding" property   //creates a pair of set/get methods to access the RichTree instance   RichTree tree = this.getTreeHandler();   //get the list of selected row keys   RowKeySet rks = tree.getSelectedRowKeys();   //access the iterator to loop over selected nodes   Iterator rksIterator = rks.iterator();          //The CollectionModel represents the tree model and is   //accessed from the tree "value" property   CollectionModel model = (CollectionModel) tree.getValue();   //The CollectionModel is a wrapper for the ADF tree binding   //class, which is JUCtrlHierBinding   JUCtrlHierBinding treeBinding =                  (JUCtrlHierBinding) model.getWrappedData();          //loop over the selected nodes and delete the rows they   //represent   while(rksIterator.hasNext()){     List nodeKey = (List) rksIterator.next();     //find the ADF node binding using the node key     JUCtrlHierNodeBinding node =                       treeBinding.findNodeByKeyPath(nodeKey);     //delete the row.     Row rw = node.getRow();       rw.remove();   }          //only refresh the tree if tree nodes have been selected   if(rks.size() > 0){     AdfFacesContext adfFacesContext =                          AdfFacesContext.getCurrentInstance();     adfFacesContext.addPartialTarget(tree);   } } Note: To enable multi node selection for a tree, select the tree and change the row selection setting from "single" to "multiple". Note: a fully pictured version of this post will become available at the end of the month in a PDF summary on ADF Code Corner : http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html 

    Read the article

  • BizTalk: Internals: the Partner Direct Ports and the Orchestration Chains

    - by Leonid Ganeline
    The Partner Direct Port is one of the BizTalk hidden gems. It opens up simple ways to implement several messaging patterns. This article is based on Kevin Lam's blog article. That article is pretty detailed, but it still leaves several unclear pieces, so I have created a sample and will show how it works from different perspectives.

Requirements

We should create an orchestration chain where the messages are routed from the first stage to the second stage. The messages should not be modified. All messages have the same message type.

Common artifacts

Source code can be downloaded here. It is interesting that all orchestrations use only one port type. It is possible because all ports are one-way ports and use only one operation. I have added a B orchestration. It helps to test the sample, showing all test messages in the channel. The Receive shape Filter is empty. A Receive Port (R_Shema1Direct) is a plain Direct Port. As you can see, the subscription expression of this direct port has only one part, the MessageType for our test schema: The Filter is empty but, as you know, a link from the Receive shape to the Port creates this MessageType expression. I use only one Physical Receive File port to send a message to all processes. Each orchestration outputs a Trace.WriteLine("<Orchestration Name>").

Forward Binding

This sample has three orchestrations: A_1, A_21 and A_22. A_1 is a sender, A_21 and A_22 are receivers. Here is the subscription of the A_1 orchestration. It has two parts:
- A MessageType. The same as for the B orchestration.
- A ReceivePortID. There was no such parameter for the B orchestration. It was created because I have bound the orchestration port to a Physical Receive File port. This binding means the PortID parameter is added to the subscription.

How to set up the ports?

All ports involved in the message exchange should be of the same port type. This forces us to use the same operation and the same message type for the bound ports. This step is absolutely counter-intuitive: we have to choose a Partner Orchestration parameter for the sending orchestration, A_1. The first strange thing is that it is not a partner orchestration we have to choose but an orchestration port. But the strangest thing is that we have to choose exactly this orchestration and exactly this port. It is not a port from the partner (receiving) orchestrations, A_21 or A_22, but the A_1 orchestration and its S_SentFromA_1 port. Now we have to choose a Partner Orchestration parameter for the receiving orchestrations, A_21 and A_22. Nothing strange here except the parameter name: we choose the port of the sender, the A_1 orchestration and its S_SentFromA_1 port. As you can see, the Partner Orchestration parameter for the sender and receiver orchestrations is the same.

Testing

I dropped a test file in a file folder. There we go:
- The dropped file was received by B and by A_1
- A_1 sent a message forward
- The message was received by B, A_21 and A_22

Let's look at the context of the message sent by A_1 in the second step:
- A MessageType part. It is quite expected.
- A PartnerService, a PartnerPort, an Operation. All those parameters were set up in the Partner Orchestration parameter on both bound ports.

Now let's see the subscription of the A_21 and A_22 orchestrations. Now it makes sense. That's why we have chosen such a strange value for the Partner Orchestration parameter of the sending orchestration.

Inverse Binding

This sample has three orchestrations: A_11, A_12 and A_2. A_11 and A_12 are senders, A_2 is the receiver. How to set up the ports?
All ports involved in the message exchange should be of the same port type. This forces us to use the same operation and the same message type for the bound ports. This step is absolutely counter-intuitive: we have to choose a Partner Orchestration parameter for the receiving orchestration, A_2. The first strange thing is that it is not a partner orchestration we have to choose but an orchestration port. But the strangest thing is that we have to choose exactly this orchestration and exactly this port. It is not a port from the partner (sending) orchestrations, A_11 or A_12, but the A_2 orchestration and its R_SentToA_2 port. Now we have to choose a Partner Orchestration parameter for the sending orchestrations, A_11 and A_12. Nothing strange here except the parameter name: we choose the port of the receiver, the A_2 orchestration and its R_SentToA_2 port.

Testing

I dropped a test file in a file folder. There we go:
- The dropped file was received by B, A_11 and A_12
- A_11 and A_12 sent two messages forward
- The messages were received by B and A_2

Let's see the context of a message sent by A_11 or A_12 in the second step:
- A MessageType part. It is quite expected.
- A PartnerService, a PartnerPort, an Operation. All those parameters were set up in the Partner Orchestration parameter on both bound ports.

Here is the subscription of the A_2 orchestration.

Models

I had a hard time trying to explain the Partner Direct Ports in simple terms. I have finished with this model:

Forward Binding
- Receivers know a Sender. Sender doesn't know Receivers.
- Publishers know a Subscriber. Subscriber doesn't know Publishers.
- 1 -> 1
- 1 -> M

Inverse Binding
- Senders know a Receiver. Receiver doesn't know Senders.
- Subscribers know a Publisher. Publisher doesn't know Subscribers.
- 1 -> 1
- M -> 1

Notes

Orchestration chain

It's worth noting that the Partner Direct Port Binding creates a chain open on one side and closed on the other.

The Forward Binding: a new Receiver can be added at run-time. The Sender cannot be changed without design-time changes in the Receivers.

The Inverse Binding: a new Sender can be added at run-time. The Receiver cannot be changed without design-time changes in the Senders.

    Read the article

  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
    The following is the 3rd in a series of posts on the value of leveraging social data across your enterprise by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional “public” data via a cloud-based Data-as-a-Service platform (or DaaS) to augment your Socially Enabled Big Data Analytics and CX Management. Let’s assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening and learning from your customers and key constituents in Social Media, you are identifying relevant posts and following up with direct engagement where warranted (both 1:1, 1:community, 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) Management systems as well as insights for analysis in your business intelligence application. What is the next step?
Augmenting Social Data with other Public Data for More Advanced Analytics
When we say advanced analytics, we are talking about understanding causality and correlation from a wide variety, volume and velocity of data to Key Performance Indicators (KPIs) to achieve and optimize business value, and in some cases to predict future performance so you can make appropriate course corrections and change the outcome to your advantage while you can. The data to acquire, process and analyze for this is very nuanced:
It can vary across structured, semi-structured, and unstructured data
It can span content, profile, and communities-of-profiles data
It is increasingly public, curated and user generated
The key is not just getting the data, but making it value-added data and using it to help discover the insights that connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to have the ability to ingest and use “quality” curated, social, transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system and providing the contextual integration of the underlying data, enriched with insights, to be exported into the enterprise’s business applications. The following diagram shows the requirements for this next-generation data and insights service (or DaaS):
Some quick points on these requirements:
Public Data, which in this context is about Common Business Entities, such as -
Customers, Suppliers, Partners, Competitors (all are organizations)
Contacts, Consumers, Employees (all are people)
Products, Brands
This data can be broadly categorized, incrementally, as -
Base Utility data (address, industry classification)
Public Master Reference data (trade style, hierarchy)
Social/Web data (News, Feeds, Graph)
Transactional Data generated by enterprise processes, workflows etc.
This data has traits of high volume, variety, velocity etc., and the technology needed to efficiently integrate it for your needs includes -
Change management of Public Reference Data across all categories
Applied Big Data to extract static as well as real-time insights
Knowledge Diagnostics and Data Mining
As you consider how to deploy this solution, note that many of our customers will be using an online “cloud” service that provides quality data and insights uniformly to all their necessary applications.
In addition, they are requesting a service that is:
Agile and Easy to Use: Applications integrated with the service can obtain data on-demand, quickly and simply
Cost-effective: Pre-integrated into applications so customers don’t have to build the integration themselves
High Data Quality: Single point of access to reference data for data quality and linkages to transactional, curated and social data
Supports Data Governance: Becomes more manageable and cost-effective since control of data privacy and compliance can be enforced in a centralized place
Data-as-a-Service (DaaS)
Just as the cloud has transformed and now offers a better path for how an enterprise manages its IT, from infrastructure and platform to software (IaaS, PaaS, and SaaS), the next step is data (DaaS). Over the last 3 years, we have seen the market begin to offer a cloud-based data service and gain initial traction. On one side of the DaaS continuum, we see an “appliance” type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum we see more of an online market “exchange” approach where ISVs and Data Publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to improve the ease of data integration. Why the difference? It depends on the provider’s philosophy on how fast the commoditization of certain data types will occur. How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harness the speed and breadth of public data to achieve value. The key tenet of the proposed approach is that an enterprise carefully federates common utility and master reference data end points, mobility considerations and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do Address Verification treatments for accounts, contacts etc. If you design and revise this service in such a way that it is also easily available to social analytic needs, you could extend it to launch geo-location-based social use cases (marketing, sales etc.). Our fundamental belief is that value-added data achieved through enrichment with specialized algorithms, as well as applying business “know-how” to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data, will be where real value is achieved. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combined capabilities to extract and integrate the right data from the right sources with the right factoring at the right time for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage. Let’s look at a quick example of creating a master reference relationship that could be used as an input for a variety of your already existing business applications. In phase 1, a simple master relationship is achieved between a company (e.g. General Motors) and a variety of car brands’ social insights.
The reference data allows for easy sorting, export and integration into a set of CRM use cases for analytics, sales and marketing. In phase 2, you create more data relationships (e.g. competitors, contacts, other brands) to build broader and deeper references (social profiles, social meta-data) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the number of master reference relationships is constrained only by your imagination and the availability of quality curated data you have to work with. DaaS is just now emerging onto the marketplace as the next step in cloud transformation. For some of you, this may be the first you have heard about it. Let us know if you have questions or perspectives. In the meantime, we will continue to share insights as we can.
Photo: Erik Araujo, stock.xchng

    Read the article

  • Create Auto Customization Criteria OAF Search Page

    - by PRajkumar
    1. Create a New Workspace and Project
Right-click Workspaces and click Create New OA Workspace, and name it PRajkumarCustSearch. A new OA Project will also be created automatically. Name the project CustSearchDemo and the package prajkumar.oracle.apps.fnd.custsearchdemo

2. Create a New Application Module (AM)
Right-click CustSearchDemo > New > ADF Business Components > Application Module
Name -- CustSearchAM
Package -- prajkumar.oracle.apps.fnd.custsearchdemo.server

3. Enable Passivation for the Root UI Application Module (AM)
Right-click CustSearchAM > Edit CustSearchAM > Custom Properties
Name – RETENTION_LEVEL
Value – MANAGE_STATE
Click Add > Apply > OK

4. Create a Test Table and insert some data into it (for testing purposes)

CREATE TABLE xx_custsearch_demo
(
  -- Data Columns
  column1              VARCHAR2(100),
  column2              VARCHAR2(100),
  column3              VARCHAR2(100),
  column4              VARCHAR2(100),
  -- Who Columns
  last_update_date     DATE      NOT NULL,
  last_updated_by      NUMBER    NOT NULL,
  creation_date        DATE      NOT NULL,
  created_by           NUMBER    NOT NULL,
  last_update_login    NUMBER
);

INSERT INTO xx_custsearch_demo VALUES('v1','v2','v3','v4',SYSDATE,0,SYSDATE,0,0);
INSERT INTO xx_custsearch_demo VALUES('v1','v3','v4','v5',SYSDATE,0,SYSDATE,0,0);
INSERT INTO xx_custsearch_demo VALUES('v2','v3','v4','v5',SYSDATE,0,SYSDATE,0,0);
INSERT INTO xx_custsearch_demo VALUES('v3','v4','v5','v6',SYSDATE,0,SYSDATE,0,0);

Now we have 4 records in our custom table.

5. Create a New Entity Object (EO)
Right-click CustSearchDemo > New > ADF Business Components > Entity Object
Name – CustSearchEO
Package -- prajkumar.oracle.apps.fnd.custsearchdemo.schema.server
Database Objects -- XX_CUSTSEARCH_DEMO
Note – By default, ROWID will be the primary key if we do not make any column the primary key.
Check the Accessors, Create Method, Validation Method and Remove Method

6. Create a New View Object (VO)
Right-click CustSearchDemo > New > ADF Business Components > View Object
Name -- CustSearchVO
Package -- prajkumar.oracle.apps.fnd.custsearchdemo.server
In Step 2 (Entity Page), select CustSearchEO and shuttle it to the Selected list
In Step 3 (Attributes), select the columns Column1, Column2, Column3 and Column4 and shuttle them to the Selected list
On the Java page, deselect Generate Java File for View Object Class: CustSearchVOImpl and select Generate Java File for View Row Class: CustSearchVORowImpl

7. Add Your View Object to the Root UI Application Module
Right-click CustSearchAM > Application Modules > Data Model
Select CustSearchVO and shuttle it to the Data Model list

8. Create a New Page
Right-click CustSearchDemo > New > Web Tier > OA Components > Page
Name -- CustSearchPG
Package -- prajkumar.oracle.apps.fnd.custsearchdemo.webui

9. Select CustSearchPG and go to the structure pane, where a default region has been created

10. Select region1 and set the following properties:
ID -- PageLayoutRN
Region Style -- PageLayout
AM Definition -- prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM
Window Title – AutoCustomize Search Page
Title – AutoCustomization Search Page
Auto Footer -- True

11. Add a Query Bean to Your Page
Right-click PageLayoutRN > New > Region
Select the new region region1 and set the following properties:
ID – QueryRN
Region Style – query
Construction Mode – autoCustomizationCriteria
Include Simple Panel – False
Include Views Panel – False
Include Advanced Panel – False

12. Create a New Region of style table
Right-click QueryRN > New > Region Using Wizard
Application Module – prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM
Available View Usages – CustSearchVO1
In Step 2 (Region Properties), set the following properties:
Region ID – CustSearchTable
Region Style – Table
In Step 3 (View Attributes), shuttle all the items (Column1, Column2, Column3, Column4) available in “Available View Attributes” to Selected View Attributes
In Step 4 (Region Items), set the style to “messageStyledText” for all items

13. Select CustSearchTable in the Structure panel and set the Width property to 100%

14. Include a Simple Search Panel
Right-click QueryRN > New > simpleSearchPanel
region2 (header region) and region1 (messageComponentLayout region) are created automatically
Set the following properties for region2:
Id – SimpleSearchHeader
Text -- Simple Search

15. Now right-click the messageComponentLayout region and create two messageTextInput beans, then set the properties below on each

MessageTextInputBean1
Id – SearchColumn1
Search Allowed – True
Data Type – VARCHAR2
Maximum Length –
CSS Class – OraFieldText
Prompt – Column1

MessageTextInputBean2
Id – SearchColumn2
Search Allowed -- True
Data Type – VARCHAR2
Maximum Length – 100
CSS Class – OraFieldText
Prompt – Column2

16. Now right-click the query component and create Simple Search Mappings. SimpleSearchMappings and QueryCriteriaMap1 are then created automatically

17. Now select QueryCriteriaMap1 and set the properties below:
Id – SearchColumn1Map
Search Item – SearchColumn1
Result Item – Column1

18. Now right-click simpleSearchMappings -> New -> queryCriteriaMap, and then set the properties below:
Id – SearchColumn2Map
Search Item – SearchColumn2
Result Item – Column2

19. Congratulations, you have successfully finished the Auto Customization Search page. Run your CustSearchPG page and test your work.
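As a quick sanity check on the finished page, you can confirm the test data and the LIKE-style filter that the SearchColumn1 criteria mapping drives. The JDBC sketch below runs outside OAF; the connection URL, user name and password are placeholders for your own environment, and it assumes the Oracle JDBC driver is on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CustSearchDataCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- point these at the database
        // where xx_custsearch_demo was created.
        String url = "jdbc:oracle:thin:@localhost:1521:XE";
        try (Connection conn = DriverManager.getConnection(url, "apps", "apps");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT column1, column2, column3, column4 " +
                 "FROM xx_custsearch_demo WHERE column1 LIKE ?")) {
            ps.setString(1, "%v1%"); // same wildcard matching the simple search performs
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s | %s | %s | %s%n",
                        rs.getString(1), rs.getString(2),
                        rs.getString(3), rs.getString(4));
                }
            }
        }
    }
}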

    Read the article

  • ODEE Green Field (Windows) Part 4 - Documaker

    - by AndyL-Oracle
    Welcome back! We're nearing completion of our installation of Oracle Documaker Enterprise Edition ("ODEE") in a green field. In my previous post, I covered the installation of SOA Suite for WebLogic. Before that, I covered the installation of WebLogic and the Oracle 11g database - all of which constitute the prerequisites for installing ODEE. Naturally, if your environment already has a WebLogic server and Oracle database, then you can skip all those components and go straight for the heart of the installation of ODEE. The ODEE installation comprises two procedures. The first covers the installation itself: running the installer and answering some questions. This will lay down the files necessary to install into the tiers (e.g. database schemas, WebLogic domains, etcetera). The second procedure is to deploy the configuration files into the various components (e.g. deploy the database schemas, WebLogic domains, SOA composites, etcetera). I will segment my posts accordingly! Let's get started, shall we? Unpack the installation files into a temporary directory location. This should extract a zip file. Extract that zip file into the temporary directory location. Navigate to and execute the installer in Disk1/setup.exe. You may have to allow the program to run if User Account Control is enabled. Once the dialog below is displayed, click Next. Select your ODEE Home - inside this directory is where all the files will be deployed. For ease of support, I recommend using the default; however, you can put this wherever you want. Click Next. Select the database type and database connection type – note that the database name should match the value used for the connection type (e.g. if using SID, then the name should be IDMAKER; if using ServiceName, the name should be “idmaker.us.oracle.com”). Verify whether or not you want to enable advanced compression. Note: if you are not licensed for the Oracle 11g Advanced Compression option, do not use this option! Terrible, terrible calamities will befall you if you do! Click Next. Enter the Documaker Admin user name (default "dmkr_admin" is recommended for support purposes) and set the password. Update the System name and ID (must be unique) if you want/need to - since this is a green field install you should be able to use the default System ID. The only time you'd change this is if you were, for some reason, installing a new ODEE system into an existing schema that already had a system. Click Next. Enter the Assembly Line user name (default "dmkr_asline" is recommended) and set the password. Update the Assembly Line name and ID (must be unique) if you want/need to - it's quite possible that at some point you will create another assembly line, in which case you have several methods of doing so. One is to re-run the installer, and in this case you would pick a different assembly line ID and name. Click Next. Note: you can set the DB folder if needed (typically you don’t – see the ODEE Installation Guide for specifics). Select the appropriate Application Server type - in this case, our green field install is going to use WebLogic - set the username to weblogic (this is required) and specify your chosen password. This credential will be used to access the application server console/control panel. Keep in mind that there are specific criteria on password choices that are required by WebLogic, but are not enforced by the installer (e.g. must contain a number, must be of a certain length, etcetera). Choose a strong password. 
Set the connection information for the JMS server. Note that for the 12.3.x version, the installer creates a separate JVM (WebLogic managed server) that hosts the JMS server, whereas prior editions place the JMS server on the AdminServer.  You may also specify a separate URL to the JMS server in case you intend to move the JMS resources to a separate/different server (e.g. back to AdminServer). You'll need to provide a login principal and credentials - for simplicity I usually make this the same as the WebLogic domain user, however this is not a secure practice! Make your JMS principal different from the WebLogic principal and choose a strong password, then click Next. Specify the Hot Folder(s) (comma-delimited if more than one) - this is the directory/directories that is/are monitored by ODEE for jobs to process. Click Next. If you will be setting up an SMTP server for ODEE to send emails, you may configure the connection details here. The details required are simple: hostname, port, user/password, and the sender's address (e.g. emails will appear to be sent by the address shown here so if the recipient clicks "reply", this is where it will go). Click Next. If you will be using Oracle WebCenter:Content (formerly known as Oracle UCM) you can enable this option and set the endpoints/credentials here. If you aren't sure, select False - you can always go back and enable this later. I'm almost 76% certain there will be a post sometime in the future that details how to configure ODEE + WCC:C! Click Next. If you will be using Oracle UMS for sending MMS/text messages, you can enable and set the endpoints/credentials here. As with UCM, if you're not sure, don't enable it - you can always set it later. Click Next. On this screen you can change the endpoints for the Documaker Web Service (DWS), and the endpoints for approval processing in Documaker Interactive. The deployment process for ODEE will create 3 managed WebLogic servers for hosting various Documaker components (JMS, Interactive, DWS, Dashboard, Documaker Administrator, etcetera) and it will set the ports used for each of these services. In this screen you can change these values if you know how you want to deploy these managed servers - but for now we'll just accept the defaults. Click Next. Verify the installation details and click Install. You can save the installation into a response file if you need to (which might be useful if you want to rerun this installation in an unattended fashion). Allow the installation to progress... Click Next. You can save the response file if needed (e.g. in case you forgot to save it earlier!) Click Finish. That's it, you're done with the initial installation. Have a look around the ODEE_HOME that you just installed (remember we selected c:\oracle\odee_1?) and look at the files that are laid down. Don't change anything just yet! Stay tuned for the next segment where we complete and verify the installation. 
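One detail from the database screen that trips people up later is the SID versus ServiceName choice, since it changes the shape of the JDBC URL the deployed components will use. Here is a minimal sketch, assuming a hypothetical host named dbhost and the Oracle JDBC driver (ojdbc6.jar or later) on the classpath, that shows both URL forms and verifies the dmkr_admin credentials before you move on to the deployment procedure:

import java.sql.Connection;
import java.sql.DriverManager;

public class OdeeDbConnectionCheck {
    public static void main(String[] args) throws Exception {
        // SID form uses a colon before the SID; ServiceName form uses // and a slash.
        String sidUrl     = "jdbc:oracle:thin:@dbhost:1521:IDMAKER";
        String serviceUrl = "jdbc:oracle:thin:@//dbhost:1521/idmaker.us.oracle.com";

        System.out.println("SID-style URL:         " + sidUrl);
        System.out.println("ServiceName-style URL: " + serviceUrl);

        // Pick whichever form matches what you told the installer; password is a placeholder.
        try (Connection conn = DriverManager.getConnection(serviceUrl, "dmkr_admin", "yourPassword")) {
            System.out.println("Connected to " + conn.getMetaData().getURL());
        }
    }
}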

    Read the article

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control?  Read on to find out. At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012.  This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features.  I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements.  One of my favorite changes is the way database elements are broken down.  Previously every little thing was in its own file.  For example, indexes were each in their own file.  I always hated that.  Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition. Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace.  Funny, it reminds me of SQL Prompt’s Smart Rename feature.  But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set.  Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use. First, the basics Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server.  Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control).  If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you.  Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you.  If you already have Visual Studio 2010 installed, then it will just add this as an available project type.  On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS.  Both tool sets store their database model in script files.  In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly. For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts.  How you value those two features will likely make your decision for you. Unified Check-In If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT.  Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified single check-in of all changes whether they are application or database changes.  
This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two.  You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build that is triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build.  Of course, the automated build that is triggered from the second check-in which contains the “other half” of your changes should pass and so the amount of time that the build was broken may be very, very short, but if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT. Refactoring and Migrations If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control.  As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss.  Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts.  There is no way to insert your own script in the middle to override the default behavior of the tool.  In version 3.0 of SQL Source Control (Early Access version now available) you have that ability to create your own custom migration script to take the place of the commands that the tool would have done, and ensure the preservation of your data.  Or, even if the default tool behavior would have worked, but you simply know a better way then you can take control and do things your way instead of theirs. You Decide In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night) and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked-in.  Therefore having a unified check-in, while handy, is not critical for us.  As for migration scripts, these are critically important to us.  We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database.  Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own.  Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us.  And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.

    Read the article

  • CodePlex Daily Summary for Monday, September 03, 2012

    CodePlex Daily Summary for Monday, September 03, 2012Popular ReleasesMetodología General Ajustada - MGA: 03.01.03: Cambios Aury: Ajuste del margen del reporte. Visualización de la columna de Supuestos en la parte del módulo de Decisión. Cambios John: Integración de código con cambios enviados por Aury Niño. Generación de instaladores. Soporte técnico por correo electrónico y telefónico.Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...Thisismyusername's codeplex page.: HTML5 Mulititouch Fruit Ninja Proof of Concept: This is an example of how you could create a game such as Fruit Ninja using HTML5's multitouch capabilities. Sorry this example doesn't have great graphics. If I had my own webpage, I could store some graphics and upload the game there and it might look halfway decent, but since I'm only using a Codeplex page and most mobile devices can't open .zip files, the fruits are just circles. I hope you enjoy reading the source code anyway.GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. Check Documentation section for detail.TSQL Code Smells Finder: POC 1.01: Proof of concept 1.01 TSQLDomTest.ps1 and Errors.Txt are requiredConfuser: Confuser build 76542: This is a build of changeset 76542.Reactive State Machine: ReactiveStateMachine-beta: TouchStateMachine now supports Microsoft Surface 2.0 SDK. The TouchStateMachine is an extension to the Reactive State Machine. Reactive State Machine uses NuGet for dependency managementSharePoint Column & View Permission: SharePoint Column and View Permission v1.2: Version 1.2 of this project. If you will find any bugs please let me know at enti@zoznam.sk or post your findings in Issue TrackerMihmojsos OS: Mihmojsos OS 3 (Smart Rabbit): !Mihmojsos OS 3 Smart Rabbit Mihmojsos Smart Rabbit is now availableDotNetNuke Translator: 01.00.00 Beta: First release of the project.YNA: YNA 0.2 alpha: Wath's new since 0.1 alpha ? A lot of changes but there are the most interresting : StateManager is now better and faster Mouse events for all YnObjects (Sprites, Images, texts) A really big improvement for YnGroup Gamepad support And the news : Tiled Map support (need refactoring) Isometric tiled map support (need refactoring) Transition effect like "FadeIn" and "FadeOut" (YnTransition) Timers (YnTimer) Path management (YnPath, need more refactoring) Downloads All downloads...Audio Pitch & Shift: Audio Pitch And Shift 5.1.0.2: fixed several issues with streaming modeUrlPager: UrlPager 1.2: Fixed bug in which url parameters will lost after paging; ????????url???bug;Sofire Suite: Sofire v1.5.0.0: Sofire v1.5.0.0 ?? ???????? ?????: 1、?? 2、????EntLib.com????????: EntLib.com???????? 
v3.0: EntLib eCommerce Solution ???Microsoft .Net Framework?????????????????????。Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.Math.NET Numerics: Math.NET Numerics v2.2.1: Major linear algebra rework since v2.1, now available on Codeplex as well (previous versions were only available via NuGet). Since v2.2.0: Student-T density more robust for very large degrees of freedom Sparse Kronecker product much more efficient (now leverages sparsity) Direct access to raw matrix storage implementations for advanced extensibility Now also separate package for signed core library with a strong name (we dropped strong names in v2.2.0) Also available as NuGet packages...Microsoft SQL Server Product Samples: Database: AdventureWorks Databases – 2012, 2008R2 and 2008: About this release This release consolidates AdventureWorks databases for SQL Server 2012, 2008R2 and 2008 versions to one page. Each zip file contains an mdf database file and ldf log file. This should make it easier to find and download AdventureWorks databases since all OLTP versions are on one page. There are no database schema changes. For each release of the product, there is a light-weight and full version of the AdventureWorks sample database. The light-weight version is denoted by ...Christoc's DotNetNuke Module Development Template: DotNetNuke Project Templates V1.1 for VS2012: This release is specifically for Visual Studio 2012 Support, distributed through the Visual Studio Extensions gallery at http://visualstudiogallery.msdn.microsoft.com/ After you build in Release mode the installable packages (source/install) can be found in the INSTALL folder now, within your module's folder, not the packages folder anymore Check out the blog post for all of the details about this release. http://www.dotnetnuke.com/Resources/Blogs/EntryId/3471/New-Visual-Studio-2012-Projec...New ProjectsBPVote4PPT: BPVote For PowerPointCosmo OS: La semplicità in un OSFinancial Analytic Tools: C#.Net Financial Analytic ToolsGeminiMVC: An Open Source CMS written in ASP.net MVC 4 with speed, extensibility, and ease-of-us in mind.JQuery SharePoint Autocomplete People Picker: This JQUery bundle provides an autocomplete people picker based on SharePoint profiles. It can be hosted on the SharePoint itself or on remote applications.Kerbal Space Program PartModule Library: This project is designed to add various functionalities to custom parts for the space program simulation game Kerbal Space Program.KeyboardRemapper: This tool to remaps keys in the keyboard. If you have more than one keyboard or an additional keypad, you can remap the keys of the each keyboard independentlyKHStudent: ??????Localized DataAnnotations with T4 templates: Simplified DataAnnotations localization using T4 templates.MfcLightToolkit: Supports development for small and simple MFC application. 
Provides asynchronous programming model like .NET, file download, easy control resizing, and so on.Müslüm ÖZTÜRK Code Lib: Test amaçli olusturulan projemdirPolska: Testproject in how a polish grammerprogram can look like.QueueLessApp: Here is the codeRusIS.CMS: aaaSGPS: Projeto de controle de produtos e serviçosStemmersNet: Stemmers pack for .Net FrameworkTrabajo Final de Ingenieria - Javier Vallejos: Tesis Final de la carrera de Ingenieria - Universidad Abierta Interamericana.TSQL Code Smells Finder: TSQL 'smells' findersXNA and Data Driven Design: This project includes links for XNA and Data Driven DesignXNA and System Testing: This project includes code for XNA and System TestingYUGI-AR Project: an open source project for yugioh based augmented reality???????? ? ?????????????: ???? ??????? ??????? ?????????????? ??????????? ?????????? ??? ? ????? ?????? ? ? ??? ??? ????? ? ??? ?????????? ????????????.

    Read the article

  • How To: Using SimpleMembershipProvider with MySql Connector/Net.

    - by Francisco Tirado
    Now, with Connector/Net 6.9, users have the ability to use the SimpleMembership provider with MVC 4 templates. The configuration is very simple and is also compatible with OAuth; in this post we'll explain step by step how to configure it in an MVC 4 Web Application. Requirements  The requirements to use SimpleMembership with Connector/Net are: Connector/Net 6.9 installed, or the No Install version downloaded. .NET Framework 4.0 or greater. MVC 4. Visual Studio 2012 or a newer version. Creating and configuring a new project In this example we'll use VS2012 to create the project based on the Internet Application template, using Entity Framework to manage the User model. Open VS 2012 and create a new project; we'll create a new MVC 4 Web Application and configure the project to use .NET Framework 4.5. Type a name for the project and then click “Ok”. In the next dialog we'll choose the “Internet Application” template and use Razor as the view engine, without creating a test project. Click “Ok” to continue. Now we have a new project with the templates necessary to run a Web Application with the default values. We'll use the current files to continue working. If you have installed Connector/Net you can skip this step; if you haven't installed it but are planning to do so, please install it and continue with the next step. If you're using the No Install version of Connector/Net we'll need to add the references to our project; the assemblies needed are: MySql.Data, MySql.Data.Entities and MySql.Web. Be sure that the assemblies chosen match the .NET Framework version used in our project and that MySql.Data.Entities is compatible with EF5 (EF5 is the default added by the project). Now open the “web.config” file, and under the <connectionStrings> node add a connection string that points to a MySql instance. We'll use the following connection configuration: <add name="MyConnection" connectionString="server=localhost;UserId=root;password=pass;database=MySqlSimpleMembership;" providerName="MySql.Data.MySqlClient"/> Under the node <system.web> we'll add the following configuration: <membership defaultProvider="MySqlSimpleMembershipProvider"><providers><clear/><add name="MySqlSimpleMembershipProvider" type="MySql.Web.Security.MySqlSimpleMembershipProvider,MySql.Web,Version=6.9.3.0,Culture=neutral,PublicKeyToken=c5687fc88969c44d" applicationName="MySqlSimpleMembershipTest" description="MySQLdefaultapplication" connectionStringName="MyConnection"  userTableName="UserProfile" userIdColumn="UserId" userNameColumn="UserName" autoGenerateTables="True"/></providers></membership> In the previous configuration the mandatory properties are: connectionStringName, userTableName, userIdColumn, userNameColumn and autoGenerateTables. If the other properties are not provided, default values are used, but if the mandatory properties are not set a ProviderException will be thrown. The valid properties for MySqlSimpleMembership are the same as those used for MySqlMembership, plus the mandatory fields. UserTableName: Name of the table where the user will be stored; this table is independent of the schema generated by the provider and can be edited later by the user. UserId: name of the column that will store the id for the records in the userTableName. UserName: name of the column that will store the name/user for the records in the userTableName. The connectionStringName property must match a connection string defined in the web.config file. 
Once the configuration is done in web.config, we need to be sure that our database context for the Users table points to the right connection string. In our case we just need to update the class UsersContext in the file AccountModels.cs in the Models folder. The file also contains the UserProfile class, which matches the configuration for our UserTable. Another class that needs to be updated is SimpleMembershipInitializer, in the file InitializeSimpleMembershipAttribute.cs in the Filters folder. In that class we'll see a call to the method “WebSecurity.InitializeDatabaseConnection”; that call is where we need to update the parameters to match our configuration. If the database that you configured in your connection string doesn't exist, you need to create it empty. Now we're ready to run our web application; press F5 or the Run button in the toolbar. You'll see the following screen: If you go to the database used by the application you'll see some tables created; now we are using SimpleMembership. Now create a user: click on “Register” at the top right of the web page. Type your user name and password, then click on “Register”. You'll be redirected to the home page and you'll see the name of your user at the top right of the page. If you take a look at the tables just created in your database you will find the data about the user you just registered. In our case the tables that contain the information are UserProfile and Webpages_Membership.  Configuring OAuth Another option for accessing your website is OAuth, so you can validate a user using an external account like Facebook, Twitter, Google, etc. In this post we'll enable authentication with a Google account in our application. Go to the class AuthConfig.cs in the App_Start folder. In the method “RegisterAuth”, uncomment the last line, which is the call to the method “OAuthWebSecurity.RegisterGoogleClient”. Run the application. Once the application is running click on “Login”. On the right side you will see the option to log in using a Google account; click on “Google”. You will be asked for your Google credentials. If your login is successful you'll see a message asking for your approval to give your site permission to access your information. Click on “Accept”. Now a page to register your user will be shown; click on “Register”. Your new user is now logged in to your application. You can take a look at the user information created in the tables UserProfile and Webpages_OAuthMembership. If you want to use another external option to authenticate users you must enable the client in the same class where we enabled the Google authentication, but for other providers it is mandatory to register your application on their site. Once you have registered your application they will give you a token/key and an id for your application, and you will use that information to register the client. Thanks for reading.
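If you prefer to confirm the generated schema from outside Visual Studio, any MySQL client will do. Purely as an illustration, the sketch below uses JDBC with MySQL Connector/J (a separate driver from Connector/Net, assumed to be on the classpath) and the same connection details as the web.config connection string, with the password as a placeholder:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SimpleMembershipTableList {
    public static void main(String[] args) throws Exception {
        // Mirrors the web.config connection string; replace the password with your own.
        String url = "jdbc:mysql://localhost:3306/MySqlSimpleMembership";
        try (Connection conn = DriverManager.getConnection(url, "root", "pass");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SHOW TABLES")) {
            // Expect UserProfile plus the webpages_* tables described above.
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}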

    Read the article

  • JPA 2?EJB 3.1?JSF 2????????! WebLogic Server 12c?????????Java EE 6??????|WebLogic Channel|??????

    - by ???02
    2012?2???????????????WebLogic Server 12c?????????Java EE 6?????????????????????????????????????????????????????????????Oracle Enterprise Pack for Eclipse 12c??WebLogic Server 12c(???)????Java EE 6??????3??????????????????????????????JPA 2.0??????????·?????????EJB 3.1???????·???????????????(???)???????O/R?????????????JPA 2.0 Java EE 6????????????????????Web?????????????3?????(3????)???????·????????????·????????????????????????????????JPA(Java Persistence API) 2.0???EJB(Enterprise JavaBeans) 3.1???JSF(JavaServer Faces) 2.0????3????????????????·???????????JPA??Java??????????????·?????????????O/R?????????????????????·???????????EJB?Session Bean??????????????????·??????????????????????JSF??????????????????????????????????????? ??????JPA????Oracle Database??EMPLOYEES?????Java??????????????????????Entity Bean??????XML?????????????????????????XML????????????????????????????????????????????????????·?????????????????????????????????????????????????????????????Java EE 6??????JPA 2.0??????????·???????????????????????????????????????????????????????????????????????????????????????? ?????????????????????????????????????????????????????????????Oracle Enterprise Pack for Eclipse(OEPE)??????File????????New?-?Other??????? ??????New??????????????????????????Web?-?Dynamic Web Project???????Next????????????????Dynamic Web Project?????????????Project name????OOW???????????Target Runtime????New Runtime????????? ???New Server Runtime Environment???????????????Oracle?-?Oracle WebLogic Server 12c(12.1.1)???????Next???????????????????????????WebLogic home????C:\Oracle\Middleware\wlserver_12.1???????Finish?????????????WebLogic Home????????????????????????Java home?????????????????????Finish??????????????????????Dynamic Web Project????????????????Finish??????????????????JPA 2.0??????????·?????? ???????????????JPA 2.0???????????????·??????????????????Eclipse??Project Explorer?(??????·???)?????????OOW?????????????????????????????·???????????????Properties?????????????????·???·????????????????????????????Project Facets?????????????JPA??????(?????????????Details?????JPA 2.0?????????????????????)???????????????????Further configuration available????????? ???Modify Faceted Project??????????????????????????????????Connection????????????????????????????Add Connection????????? ??????New Connection Profile????????????????Connection Profile Type????Oracle Database Connection??????Next???????????? ???Specify a Driver and Connection Details???????Drivers????Oracle Database 10g Driver Default???????????Properties?????????????????????SIDxeHostlocalhostPort number1521User nameHRPasswordhr ???????????Test Connection??????????????????Ping Succeeded!?????????????????????????????Finish???????????Modify Faceted Project????????OK????????????????Properties for OOW????????OK?????????????????? ?????????Eclipse????????????????OOW?????????????????·???????????????JPA Tools?-?Generate Entities from Tables...??????? ????Generate Custom Entities???????????????????????????????Schema????HR??????Tables????EMPLOYEES???????????Next???????????? ???????????Next???????????Customize Default Entity Generation??????Package????model???????Finish?????????????JPQL?????????? 
?????????Oracle Database??EMPLOYEES??????????????????·????model.Employee.java?????????????????????????????????·?????OOW????Java Resources?-?src?-?model???????Employee.java????????????????????????????????·???Employee????(Employee.java)?package model; import java.io.Serializable; import java.math.BigDecimal; import java.util.Date; import java.util.Set; import javax.persistence.Column;<...?...>/**  * The persistent class for the EMPLOYEES database table.  *  */ @Entity  // ?@Table(name="EMPLOYEES")  // ?// Apublic class Employee implements Serializable {        private static final long serialVersionUID = 1L;       @Id  // ?       @Column(name="EMPLOYEE_ID")        private long employeeId;        @Column(name="COMMISSION_PCT")        private BigDecimal commissionPct;        @Column(name="DEPARTMENT_ID")        private BigDecimal departmentId;        private String email;        @Column(name="FIRST_NAME")        private String firstName;       @Temporal( TemporalType.DATE)  //?       @Column(name="HIRE_DATE")        private Date hireDate;        @Column(name="JOB_ID")        private String jobId;        @Column(name="LAST_NAME")        private String lastName;        @Column(name="PHONE_NUMBER")        private String phoneNumber;        private BigDecimal salary;        //bi-directional many-to-one association to Employee<...?...>}  ???????????????·???????????????????????????????????????????@Table(name="")??????@Table??????????????????????????????????????? ?????????????????????????????????????·???????????????? ?????????????????????????????SQL?Data?????????? ???????????????A?????JPA?????????JPQL(Java Persistence Query Language)?????????????JPQL?????SQL???????????????????????????????????????????????????????????????????????????????????Employee.selectByNameEmployee??firstName????????????????????employeeId????????? ?????????????????????import java.util.Date;import java.util.Set;import javax.persistence.Column;<...?...>/**  * The persistent class for the EMPLOYEES database table.  *  */ @Entity  // ?@Table(name="EMPLOYEES")  // ?@NamedQueries({       @NamedQuery(name="Employee.selectByName" , query="select e from Employee e where e.firstName like :name order by e.employeeId")})<...?...> ?????????·??????OOW?-?JPA Content?-?persistent.xml??????Connection???????????????Database????JTA data source:???jdbc/test????????????????????????Java EE 6??????JPA 2.0???????????????????????????????????·??????????????????????????????????????SQL????????????????????????·????????????·??????????????XML??????????????????1??????????????????????????????????????????????????????????????????EJB 3.1????????·???????????EJB 3.1????????·?????????????????EJB 3.1?Stateless Session Bean?????·????????????????·???????????????????·??????????????????? EJB3.1?????JPA 2.0???????????·???????????????????????XML???????????????????????????????EJB 3.1?????????·????EJB?????????????????????????????????????????????????????????????? ????????EJB 3.1?Session Bean?????·????????????????????????????????????????????????????public List<Employee> getEmp(String keyword)firstName????????????Employee?????? ????????????????????·???????????OOW????????????·???????????????New?-?Other???????????????????????????????????EJB?-?Session Bean(EJB 3.x)??????NEXT????????????????????Create EJB 3.x Session Bean?????????????Java Package????ejb???class name????EmpLogic???????????State Type????Stateless?????????No-interface???????????????????????Finish???????????? 
?????????Stateless Session Bean??????·?????EmpLogic.java????????????????????EmpLogic????·????????EJB?????????????Stateless Session Bean?????????@Stateless?????????????????????????????????????EmpLogic????(EmpLogic.java)?package ejb;import javax.ejb.LocalBean;import javax.ejb.Stateless;<...?...>import model.Employee;@Stateless@LocalBeanpublic class EmpLogic {       public EmpLogic() {       }} ??????????????????????????????????????·???????????????????????import??????????????????EmpLogic??????????????????????????·???????????????????????import????????(EmpLogic.java)?package ejb;import javax.ejb.LocalBean;import javax.ejb.Stateless;import javax.persistence.EntityManager;  // ?import javax.persistence.PersistenceContext;  // ?<...?...>import model.Employee;@Stateless@LocalBeanpublic class EmpLogic {      @PersistenceContext(unitName = "OOW")  // ?      private EntityManager em;  // ?       public EmpLogic() {       }} ?????????·???????JPA???????????????????·????????????????????????????CRUD???????????????????·????????????EntityManager???????????????????????????1????????????????·???????????????????????@PersistenceContext?????unitName?????????????persistence.xml????persistence-unit???name?????????????? ???????EmpLogic?????·???????????????????????????????????????????????????????????????????????????????EmpLogic????????·???????(EmpLogic.java)?package ejb;import java.util.List;  // ? import javax.ejb.LocalBean;import javax.ejb.Stateless;import javax.persistence.EntityManager;  // ? import javax.persistence.PersistenceContext;  // ? <...?...>import model.Employee;@Stateless@LocalBeanpublic class EmpLogic {       @PersistenceContext(unitName = "OOW")  // ?        private EntityManager em;  // ?        public EmpLogic() {       }      @SuppressWarnings("unchecked")  // ?      public List<Employee> getEmp(String keyword) {  // ?             StringBuilder param = new StringBuilder();  // ?             param.append("%");  // ?             param.append(keyword);  // ?             param.append("%");  // ?             return em.createNamedQuery("Employee.selectByName")  // ?                    .setParameter("name", param.toString()).getResultList();  // ?      }} ???EJB 3.1???Stateless Session Bean?????????? ???JSF 2.0???????????????????????????????????????????????????JAX-RS????RESTful?Web??????????????????????
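For completeness, here is a minimal sketch of how the EmpLogic session bean defined above might be called from a JSF 2.0 backing bean. The class, package and page names are illustrative only and are not taken from the article:

package view;

import java.util.List;

import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.RequestScoped;

import ejb.EmpLogic;
import model.Employee;

@ManagedBean
@RequestScoped
public class EmployeeBean {

    // Container-injected reference to the stateless session bean.
    @EJB
    private EmpLogic empLogic;

    private String keyword = "";
    private List<Employee> employees;

    // Bound to a commandButton action; runs the Employee.selectByName named query via EmpLogic.
    public String search() {
        employees = empLogic.getEmp(keyword);
        return null; // stay on the current page
    }

    public String getKeyword() { return keyword; }
    public void setKeyword(String keyword) { this.keyword = keyword; }
    public List<Employee> getEmployees() { return employees; }
}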

    Read the article

  • LDAP query on linux against AD returns groups with no members

    - by SethG
    I am using LDAP+kerberos to authenticate against Active Directory on Windows 2003 R2. My krb5.conf and ldap.conf appear to be correct (according to pretty much every sample I found on the 'net). I can login to the host with both password and ssh keys. When I run getent passwd, all my ldap user accounts are listed with all the important attributes. When I run getent group, all the ldap groups and their gid's are listed, but no group members. If I run ldapsearch and filter on any group, the members are all listed with the "member" attribute. So the data is there for the taking, it's just not being parsed properly. It would appear that I simply am using an incorrect mapping in ldap.conf, but I can't see it. I've tried several variations and all give the same result. Here is my current ldap.conf: host <ad-host1-ip> <ad-host2-ip> base dc=my,dc=full,dc=dn uri ldap://<ad-host1> ldap://<ad-host2> ldap_version 3 binddn <mybinddn> bindpw <mybindpw> scope sub bind_policy hard nss_reconnect_tries 3 nss_reconnect_sleeptime 1 nss_reconnect_maxsleeptime 8 nss_reconnect_maxconntries 3 nss_map_objectclass posixAccount User nss_map_objectclass posixGroup Group nss_map_attribute uid sAMAccountName nss_map_attribute gidNumber msSFU30GidNumber nss_map_attribute uidNumber msSFU30UidNumber nss_map_attribute cn cn nss_map_attribute gecos displayName nss_map_attribute homeDirectory msSFU30HomeDirectory nss_map_attribute loginShell msSFU30LoginShell nss_map_attribute uniqueMember member pam_filter objectcategory=User pam_login_attribute sAMAccountName pam_member_attribute member pam_password ad Here's the kicker: this config works 100% fine on a different linux box with a different distro. It does not work on the distro I am planning on switching to. I have installed from source the versions of pam_ldap and nss_ldap on the new box to match the old box, which fixed another problem I was having with this setup. Other relevant info is the original AD box was Windows 2003. It's mirror died a horrible hardware death so I'm trying to add two more 2003-R2 servers to the mirror tree and ultimately drop the old 2003 box. The new R2 boxes appear to have joined the DC forest properly. What do I need to do to get groups working? I've exhausted all the resources I could find and need a different angle. Any input is appreciated. Status update, 7/31/09 I have managed to tweak my config file to get full info from the AD and performance is nice and snappy. I replaced the back-rev'd copies of pam_ldap and nss_ldap with the current ones for the distro I'm using, so it's back to a standard out-of-the-box install. 
Here's my current config: host <ad-host1-ip> <ad-host2-ip> base dc=my,dc=full,dc=dn uri ldap://<ad-host1> ldap://<ad-host2> ldap_version 3 binddn <mybinddn> bindpw <mybindpw> scope sub bind_policy soft nss_reconnect_tries 3 nss_reconnect_sleeptime 1 nss_reconnect_maxsleeptime 8 nss_reconnect_maxconntries 3 nss_connect_policy oneshot referrals no nss_map_objectclass posixAccount User nss_map_objectclass posixGroup Group nss_map_attribute uid sAMAccountName nss_map_attribute gidNumber msSFU30GidNumber nss_map_attribute uidNumber msSFU30UidNumber nss_map_attribute cn cn nss_map_attribute gecos displayName nss_map_attribute homeDirectory msSFU30HomeDirectory nss_map_attribute loginShell msSFU30LoginShell nss_map_attribute uniqueMember member pam_filter objectcategory=CN=Person,CN=Schema,CN=Configuration,DC=w2k,DC=cis,DC=ksu,DC=edu pam_login_attribute sAMAccountName pam_member_attribute member pam_password ad ssl off tls_checkpeer no sasl_secprops maxssf=0 The remaining problem now is when you run the groups command, not all subscribed groups are listed. Some are (one or two), but not all. Group memberships are still honored, such as file and printer access. getent group foo still shows that the user is a member of group foo. So it appears to be a presentation bug, and does not interfere with normal operation. It also appears that some (I have not determined exactly how many) group searches do not resolve correctly, even though the group is listed. eg, when you run "getent group bar", nothing is returned, but if you run "getent group|grep bar" or "getent group|grep <bar_gid>" you can see that it indeed listed and your group name and gid are correct. This still seems like an LDAP search or mapping error, but I can't figure out what it is. I'm a heckuva lot closer than earlier in the week, but I'd really like to get this last detail ironed out.
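One way to separate a directory problem from an nss_ldap mapping problem is to repeat the ldapsearch check from code. The JNDI sketch below does the same group lookup; the host, base DN, bind DN, password and group name are all placeholders to adapt. If the member attribute comes back populated here but getent group still shows no members, the issue really is in the client-side attribute mapping rather than in Active Directory:

import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class AdGroupMemberCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection and bind details -- substitute your own.
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ad-host1:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "CN=binduser,CN=Users,DC=example,DC=com");
        env.put(Context.SECURITY_CREDENTIALS, "bindpassword");

        DirContext ctx = new InitialDirContext(env);
        try {
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            controls.setReturningAttributes(new String[] { "member", "msSFU30GidNumber" });

            // Look up one group and print how many member values the directory returns.
            NamingEnumeration<SearchResult> results = ctx.search(
                "DC=example,DC=com", "(&(objectClass=group)(cn=somegroup))", controls);
            while (results.hasMore()) {
                SearchResult result = results.next();
                Attribute members = result.getAttributes().get("member");
                System.out.println(result.getNameInNamespace() + " -> "
                    + (members == null ? 0 : members.size()) + " member(s)");
            }
        } finally {
            ctx.close();
        }
    }
}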

    Read the article

  • Need help configuring my Tomcat server without any WAR files

    - by gablin
    I just reinstalled my entire server, and now I can't seem to get my JSP-based website to work on Tomcat anymore. I use the same server.xml file, which worked perfectly before the reinstallation, but no longer. Here's the content of the server.xml file which worked before: <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). 
Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <!-- <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> --> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> <!-- </Host> --> <Host name="www.rebootradio.nu"> <Alias>rebootradio.nu</Alias> <Context path="" docBase="D:/services/http/rebootradio.nu" debug="1" reloadable="true"/> </Host> </Engine> </Service> </Server> The JSP site doesn't use any WAR files or anything like that; there's just a default.jsp in the specified folder D:/services/http/rebootradio.nu which loads the site. As I said, this configuration worked before, but now with the latest verion of XAMPP and Tomcat it doesn't work anymore. All I get is a 404 message saying The requested resource () is not available.

    Read the article


< Previous Page | 115 116 117 118 119 120 121 122 123 124 125 126  | Next Page >