Search Results

Search found 42428 results on 1698 pages for 'database query'.


  • Nashorn in the Twitterverse, Continued

    - by jlaskey
    After doing the Twitter example, it seemed reasonable to try graphing the result with JavaFX.  At this time the Nashorn project doesn't have an JavaFX shell, so we have to go through some hoops to create an JavaFX application.  I thought showing you some of those hoops might give you some idea about what you can do mixing Nashorn and Java (we'll add a JavaFX shell to the todo list.) First, let's look at the meat of the application.  Here is the repackaged version of the original twitter example. var twitter4j      = Packages.twitter4j; var TwitterFactory = twitter4j.TwitterFactory; var Query          = twitter4j.Query; function getTrendingData() {     var twitter = new TwitterFactory().instance;     var query   = new Query("nashorn OR nashornjs");     query.since("2012-11-21");     query.count = 100;     var data = {};     do {         var result = twitter.search(query);         var tweets = result.tweets;         for each (tweet in tweets) {             var date = tweet.createdAt;             var key = (1900 + date.year) + "/" +                       (1 + date.month) + "/" +                       date.date;             data[key] = (data[key] || 0) + 1;         }     } while (query = result.nextQuery());     return data; } Instead of just printing out tweets, getTrendingData tallies "tweets per date" during the sample period (since "2012-11-21", the date "New Project: Nashorn" was posted.)   getTrendingData then returns the resulting tally object. Next, use JavaFX BarChart to display that data. var javafx         = Packages.javafx; var Stage          = javafx.stage.Stage var Scene          = javafx.scene.Scene; var Group          = javafx.scene.Group; var Chart          = javafx.scene.chart.Chart; var FXCollections  = javafx.collections.FXCollections; var ObservableList = javafx.collections.ObservableList; var CategoryAxis   = javafx.scene.chart.CategoryAxis; var NumberAxis     = javafx.scene.chart.NumberAxis; var BarChart       = javafx.scene.chart.BarChart; var XYChart        = javafx.scene.chart.XYChart; var Series         = XYChart.Series; var Data           = XYChart.Data; function graph(stage, data) {     var root = new Group();     stage.scene = new Scene(root);     var dates = Object.keys(data);     var xAxis = new CategoryAxis();     xAxis.categories = FXCollections.observableArrayList(dates);     var yAxis = new NumberAxis("Tweets", 0.0, 200.0, 50.0);     var series = FXCollections.observableArrayList();     for (var date in data) {         series.add(new Data(date, data[date]));     }     var tweets = new Series("Tweets", series);     var barChartData = FXCollections.observableArrayList(tweets);     var chart = new BarChart(xAxis, yAxis, barChartData, 25.0);     root.children.add(chart); } I should point out that there is a lot of subtlety going on in the background.  For example; stage.scene = new Scene(root) is equivalent to stage.setScene(new Scene(root)). If Nashorn can't find a property (scene), then it searches (via Dynalink) for the Java Beans equivalent (setScene.)  Also note, that Nashorn is magically handling the generic class FXCollections.  Finally,  with the call to observableArrayList(dates), Nashorn is automatically converting the JavaScript array dates to a Java collection.  It really is hard to identify which objects are JavaScript and which are Java.  Does it really matter? Okay, with the meat out of the way, let's talk about the hoops. When working with JavaFX, you start with a main subclass of javafx.application.Application.  
This class handles the initialization of the JavaFX libraries and the event processing.  This is what I used for this example; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import javafx.application.Application; import javafx.stage.Stage; import javax.script.ScriptEngine; import javax.script.ScriptEngineManager; import javax.script.ScriptException; public class TrendingMain extends Application { private static final ScriptEngineManager MANAGER = new ScriptEngineManager(); private final ScriptEngine engine = MANAGER.getEngineByName("nashorn"); private Trending trending; public static void main(String[] args) { launch(args); } @Override public void start(Stage stage) throws Exception { trending = (Trending) load("Trending.js"); trending.start(stage); } @Override public void stop() throws Exception { trending.stop(); } private Object load(String script) throws IOException, ScriptException { try (final InputStream is = TrendingMain.class.getResourceAsStream(script)) { return engine.eval(new InputStreamReader(is, "utf-8")); } } } To initialize Nashorn, we use JSR-223's javax.script.  private static final ScriptEngineManager MANAGER = new ScriptEngineManager(); private final ScriptEngine engine = MANAGER.getEngineByName("nashorn"); This code sets up an instance of the Nashorn engine for evaluating scripts. The  load method reads a script into memory and then gets engine to eval that script.  Note, that load also returns the result of the eval. Now for the fun part.  There are several different approaches we could use to communicate between the Java main and the script.  In this example we'll use a Java interface.  The JavaFX main needs to do at least start and stop, so the following will suffice as an interface; public interface Trending {     public void start(Stage stage) throws Exception;     public void stop() throws Exception; } At the end of the example's script we add; (function newTrending() {     return new Packages.Trending() {         start: function(stage) {             var data = getTrendingData();             graph(stage, data);             stage.show();         },         stop: function() {         }     } })(); which instantiates a new subclass instance of Trending and overrides the start and stop methods.  The result of this function call is what is returned to main via the eval. trending = (Trending) load("Trending.js"); To recap, the script Trending.js contains functions getTrendingData, graph and newTrending, plus the call at the end to newTrending.  Back in the Java code, we cast the result of the eval (call to newTrending) to Trending, thus, we end up with an object that we can then use to call back into the script.  trending.start(stage); Voila. ?

    Read the article

  • Accessing Oracle 6i and 9i/10g Databases using C#

    - by Mike M
    Hi all, I am making two build files using NAnt. The first aims to automatically compile Oracle 6i forms and reports and the second aims to compile Oracle 9i/10g forms and reports. Within the NAnt task is a C# script which prompts the developer for database credentials (username, password, database) in order to compile the forms and reports. I want to then run these credentials against the relevant database to ensure the credentials entered are correct and, if they are not, prompt the user to re-enter their credentials. My script currently looks as follows: class GetInput { public static void ScriptMain(Project project) { Console.Clear(); Console.WriteLine("==================================================================="); Console.WriteLine("Welcome to the Compile and Deploy Oracle Forms and Reports Facility"); Console.WriteLine("==================================================================="); Console.WriteLine(); Console.WriteLine("Please enter the acronym of the project to work on from the following list:"); Console.WriteLine(); Console.WriteLine("--------"); Console.WriteLine("- BCS"); Console.WriteLine("- COPEN"); Console.WriteLine("- FCDD"); Console.WriteLine("--------"); Console.WriteLine(); Console.Write("Selection: "); project.Properties["project.type"] = Console.ReadLine(); Console.WriteLine(); Console.Write("Please enter username: "); string username = Console.ReadLine(); project.Properties["username"] = username; string password = ReturnPassword(); project.Properties["password"] = password; Console.WriteLine(); Console.Write("Please enter database: "); string database = Console.ReadLine(); project.Properties["database"] = database; Console.WriteLine(); //Call method to verify user credentials Console.WriteLine(); Console.WriteLine("Compiling files..."); } public static string ReturnPassword() { Console.Write("Please enter password: "); string password = ""; ConsoleKeyInfo nextKey = Console.ReadKey(true); while (nextKey.Key != ConsoleKey.Enter) { if (nextKey.Key == ConsoleKey.Backspace) { if (password.Length > 0) { password = password.Substring(0, password.Length - 1); Console.Write(nextKey.KeyChar); Console.Write(" "); Console.Write(nextKey.KeyChar); } } else { password += nextKey.KeyChar; Console.Write("*"); } nextKey = Console.ReadKey(true); } return password; } } Having done a bit of research, I find that you can connect to Oracle databases using the System.Data.OracleClient namespace clicky. However, as mentioned in the link, Microsoft is discontinuing support for this so it is not a desirable solution. I have also found that Oracle provides its own classes for connecting to Oracle databases clicky. However, this only seems to support connecting to Oracle 9 or newer databases (clicky) so it is not a feasible solution as I also need to connect to Oracle 6i databases. I could achieve this by calling a bat script from within the C# script, but I would much prefer to have a single build file for simplicity. Ideally, I would like to run a series of commands such as is contained in the following .bat script: rem -- Set Database SID -- set ORACLE_SID=%DBSID% sqlplus -s %nameofuser%/%password%@%dbsid% set cmdsep on set cmdsep '"'; --" set term on set echo off set heading off select '========================================' || CHR(10) || 'Have checked and found both Password and ' || chr(10) || 'Database Identifier are valid, continuing ...' || CHR(10) || '========================================' from dual; exit; This requires me to set the environment variable ORACLE_SID and then run sqlplus in silent mode (-s) followed by a series of SQL SET commands (SET x), the actual SELECT statement and an EXIT command. Can I achieve this within a C# script without calling a bat script, or am I forced to call a bat script? Thanks in advance!

    Read the article

  • Data Profiling without SSIS

    Strangely enough for a predominantly SSIS blog, this post is all about how to perform data profiling without using SSIS. Whilst the Data Profiling Task is a worthy addition, there are a couple of limitations I’ve encountered of late. The first is that it requires SQL Server 2008, and not everyone is there yet. The second is that it can only target SQL Server 2005 and above. What about older systems, which are the ones that we probably need to investigate the most, or other vendor databases such as Oracle? With these limitations in mind I did some searching to find a quick and easy alternative to help me perform some data profiling for a project I was working on recently. I only had SQL Server 2005 available, and anyway most of my target source systems were Oracle, and of course I had short timescales. I looked at several options. Some never got beyond the download stage, they failed to install or just did not run, and others provided less than I could have produced myself by spending 2 minutes writing some basic SQL queries. In the end I settled on an open source product called DataCleaner. To quote from their website: DataCleaner is an Open Source application for profiling, validating and comparing data. These activities help you administer and monitor your data quality in order to ensure that your data is useful and applicable to your business situation. DataCleaner is the free alternative to software for master data management (MDM) methodologies, data warehousing (DW) projects, statistical research, preparation for extract-transform-load (ETL) activities and more. DataCleaner is developed in Java and licensed under LGPL. As quoted above it claims to support profiling, validating and comparing data, but I didn’t really get past the profiling functions, so won’t comment on the other two. The profiling whilst not prefect certainly saved some time compared to the limited alternatives. The ability to profile heterogeneous data sources is a big advantage over the SSIS option, and I found it overall quite easy to use and performance was good. I could see it struggling at times, but actually for what it does I was impressed. It had some data type niggles with Oracle, and some metrics seem a little strange, although thankfully they were easy to augment with some SQL queries to ensure a consistent picture. The report export options didn’t do it for me, but copy and paste with a bit of Excel magic was sufficient. One initial point for me personally is that I have had limited exposure to things of the Java persuasion and whilst I normally get by fine, sometimes the simplest things can throw me. For example installing a JDBC driver, why do I have to copy files to make it all work, has nobody ever heard of an MSI? In case there are other people out there like me who have become totally indoctrinated with the Microsoft software paradigm, I’ve written a quick start guide that details every step required. Steps 1- 5 are the key ones, the rest is really an excuse for some screenshots to show you the tool. Quick Start Guide Step 1  - Download Data Cleaner. The Microsoft Windows zipped exe option, and I chose the latest stable build, currently DataCleaner 1.5.3 (final). Extract the files to a suitable location. Step 2 - Download Java. If you try and run datacleaner.exe without Java it will warn you, and then open your default browser and take you to the Java download site. Follow the installation instructions from there, normally just click Download Java a couple of times and you’re done. 
Step 3 - Download Microsoft SQL Server JDBC Driver. You may have SQL Server installed, but you won’t have a JDBC driver. Version 3.0 is the latest as of April 2010. There is no real installer, we are in the Java world here, but run the exe you downloaded to extract the files. The default Unzip to folder is not much help, so try a fully qualified path such as C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\ to ensure you can find the files afterwards. Step 4 - If you wish to use Windows Authentication to connect to your SQL Server then first we need to copy a file so that Data Cleaner can find it. Browse to the JDBC extract location from Step 3 and drill down to the file sqljdbc_auth.dll. You will have to choose the correct directory for your processor architecture. e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\auth\x86\sqljdbc_auth.dll. Now copy this file to the Data Cleaner extract folder you chose in Step 1. An alternative method is to edit datacleaner.cmd in the data cleaner extract folder as detailed in this data cleaner wiki topic, but I find copying the file simpler. Step 5 – Now lets run Data Cleaner, just run datacleaner.exe from the extract folder you chose in Step 1. Step 6 – Complete or skip the registration screen, and ignore the task window for now. In the main window click settings. Step 7 – In the Settings dialog, select the Database drivers tab, then click Register database driver and select the Local JAR file option. Step 8 – Browse to the JDBC driver extract location from Step 3 and drill down to select sqljdbc4.jar. e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\sqljdbc4.jar Step 9 – Select the Database driver class as com.microsoft.sqlserver.jdbc.SQLServerDriver, and then click the Test and Save database driver button. Step 10 - You should be back at the Settings dialog with a the list of drivers that includes SQL Server. Just click Save Settings to persist all your hard work. Step 11 – Now we can start to profile some data. In the main Data Cleaner window click New Task, and then Profile from the task window. Step 12 – In the Profile window click Open Database Step 13 – Now choose the SQL Server connection string option. Selecting a connection string gives us a template like jdbc:sqlserver://<hostname>:1433;databaseName=<database>, but obviously it requires some details to be entered for example  jdbc:sqlserver://localhost:1433;databaseName=SQLBits. This will connect to the database called SQLBits on my local machine. The port may also have to be changed if using such as when you have a multiple instances of SQL Server running. If using SQL Server Authentication enter a username and password as required and then click Connect to database. You can use Window Authentication, just add integratedSecurity=true to the end of your connection string. e.g jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true.  If you didn’t complete Step 4 above you will need to do so now and restart Data Cleaner before it will work. Manually setting the connection string is fine, but creating a named connection makes more sense if you will be spending any length of time profiling a specific database. As highlighted in the left-hand screen-shot, at the bottom of the dialog it includes partial instructions on how to create named connections. In the folder shown C:\Users\<Username>\.datacleaner\1.5.3, open the datacleaner-config.xml file in your editor of choice add your own details. 
You’ll see a sample connection in the file already, just add yours following the same pattern. e.g. <!-- Darren's Named Connections --> <bean class="dk.eobjects.datacleaner.gui.model.NamedConnection"> <property name="name" value="SQLBits Local Connection" /> <property name="driverClass" value="com.microsoft.sqlserver.jdbc.SQLServerDriver" /> <property name="connectionString" value="jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true" /> <property name="tableTypes"> <list> <value>TABLE</value> <value>VIEW</value> </list> </property> </bean> Step 14 – Once back at the Profile window, you should now see your schemas, tables and/or views listed down the left hand side. Browse this tree and double-click a table to select it for profiling. You can then click Add profile, and choose some profiling options, before finally clicking Run profiling. You can see below a sample output for three of the most common profiles, click the image for full size.   I hope this has given you a taster for DataCleaner, and should help you get up and running pretty quickly.
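    As an aside, if you want to sanity-check the numbers DataCleaner gives you (or profile a single column with no tool at all), a couple of ad-hoc queries go a long way. The following is only a rough sketch against a hypothetical dbo.Customer table and Email column on SQL Server, so substitute your own object names; the same pattern works on Oracle with minor syntax changes (LENGTH instead of LEN, and so on).

    -- Basic column profile: row count, NULLs, blanks, distinct values and length range
    SELECT COUNT(*)                                       AS TotalRows,
           SUM(CASE WHEN Email IS NULL THEN 1 ELSE 0 END) AS NullValues,
           SUM(CASE WHEN Email = ''   THEN 1 ELSE 0 END)  AS BlankValues,
           COUNT(DISTINCT Email)                          AS DistinctValues,
           MIN(LEN(Email))                                AS MinLength,
           MAX(LEN(Email))                                AS MaxLength
    FROM   dbo.Customer;

    -- Value distribution: the most common values and how often they occur
    SELECT TOP (20) Email, COUNT(*) AS Occurrences
    FROM   dbo.Customer
    GROUP  BY Email
    ORDER  BY COUNT(*) DESC;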

    Read the article

  • Speed up SQL Server queries with PREFETCH

    - by Akshay Deep Lamba
    Problem The SAN data volume has a throughput capacity of 400MB/sec; however my query is still running slow and it is waiting on I/O (PAGEIOLATCH_SH). Windows Performance Monitor shows data volume speed of 4MB/sec. Where is the problem and how can I find the problem? Solution This is another summary of a great article published by R. Meyyappan at www.sqlworkshops.com.  In my opinion, this is the first article that highlights and explains with working examples how PREFETCH determines the performance of a Nested Loop join.  First of all, I just want to recall that Prefetch is a mechanism with which SQL Server can fire up many I/O requests in parallel for a Nested Loop join. When SQL Server executes a Nested Loop join, it may or may not enable Prefetch accordingly to the number of rows in the outer table. If the number of rows in the outer table is greater than 25 then SQL will enable and use Prefetch to speed up query performance, but it will not if it is less than 25 rows. In this section we are going to see different scenarios where prefetch is automatically enabled or disabled. These examples only use two tables RegionalOrder and Orders.  If you want to create the sample tables and sample data, please visit this site www.sqlworkshops.com. The breakdown of the data in the RegionalOrders table is shown below and the Orders table contains about 6 million rows. In this first example, I am creating a stored procedure against two tables and then execute the stored procedure.  Before running the stored proceudre, I am going to include the actual execution plan. --Example provided by www.sqlworkshops.com --Create procedure that pulls orders based on City --Do not forget to include the actual execution plan CREATE PROC RegionalOrdersProc @City CHAR(20) AS BEGIN DECLARE @OrderID INT, @OrderDetails CHAR(200) SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails       FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)       WHERE City = @City END GO SET STATISTICS time ON GO --Example provided by www.sqlworkshops.com --Execute the procedure with parameter SmallCity1 EXEC RegionalOrdersProc 'SmallCity1' GO After running the stored procedure, if we right click on the Clustered Index Scan and click Properties we can see the Estimated Numbers of Rows is 24.    If we right click on Nested Loops and click Properties we do not see Prefetch, because it is disabled. This behavior was expected, because the number of rows containing the value ‘SmallCity1’ in the outer table is less than 25.   Now, if I run the same procedure with parameter ‘BigCity’ will Prefetch be enabled? --Example provided by www.sqlworkshops.com --Execute the procedure with parameter BigCity --We are using cached plan EXEC RegionalOrdersProc 'BigCity' GO As we can see from the below screenshot, prefetch is not enabled and the query takes around 7 seconds to execute. This is because the query used the cached plan from ‘SmallCity1’ that had prefetch disabled. Please note that even if we have 999 rows for ‘BigCity’ the Estimated Numbers of Rows is still 24.   Finally, let’s clear the procedure cache to trigger a new optimization and execute the procedure again. DBCC freeproccache GO EXEC RegionalOrdersProc 'BigCity' GO This time, our procedure runs under a second, Prefetch is enabled and the Estimated Number of Rows is 999.   The RegionalOrdersProc can be optimized by using the below example where we are using an optimizer hint. I have also shown some other hints that could be used as well. 
    --Example provided by www.sqlworkshops.com --You can fix the issue by using any of the following --hints --Create procedure that pulls orders based on City DROP PROC RegionalOrdersProc GO CREATE PROC RegionalOrdersProc @City CHAR(20) AS BEGIN DECLARE @OrderID INT, @OrderDetails CHAR(200) SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails       FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)       WHERE City = @City       --Hinting optimizer to use SmallCity2 for estimation       OPTION (optimize FOR (@City = 'SmallCity2'))       --Hinting optimizer to estimate for the current parameters       --option (recompile)       --Hinting optimizer not to use the histogram but rather       --density for estimation (average of all 3 cities)       --option (optimize for (@City UNKNOWN))       --option (optimize for UNKNOWN) END GO In conclusion, this tip was mainly aimed at illustrating how Prefetch can speed up query execution and how the different number of rows can trigger this.
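    If you want to check whether a cached plan is actually using prefetch without clicking through the plan properties, one option is to search the plan cache XML for the prefetch attributes. This is only a sketch, based on my understanding that showplan XML exposes prefetch on the Nested Loops operator via the WithOrderedPrefetch/WithUnorderedPrefetch attributes; verify that on your build, and be aware that scanning the whole plan cache like this can be expensive, so try it on a test server first.

    -- Look for cached plans whose XML contains a prefetch attribute set to true
    -- (matches both WithOrderedPrefetch="true" and WithUnorderedPrefetch="true")
    SELECT cp.usecounts,
           cp.objtype,
           st.text       AS query_text,
           qp.query_plan
    FROM   sys.dm_exec_cached_plans AS cp
    CROSS  APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    CROSS  APPLY sys.dm_exec_sql_text(cp.plan_handle)   AS st
    WHERE  CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE '%Prefetch="true"%';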

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are use or unused, complete or missing some columns is irrelevant, this is simply the physical stats of all indexes; disabled indexes are ignored however. In it’s simplest form this dmv can be executed as:   The results from executing this contain a record for every index in every database but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLS, or more accurately, in order to choose when to have the NULLS you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, ‘SAMPLED’, ‘LIMITED’ or ‘DETAILED’. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step. DECLARE @Start DATETIME DECLARE @First DATETIME DECLARE @Second DATETIME DECLARE @Third DATETIME DECLARE @Finish DATETIME SET @Start = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips SET @First = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips SET @Second = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips SET @Third = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips SET @Finish = GETDATE() SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT] , DATEDIFF(ms, @First, @Second) AS [SAMPLED] , DATEDIFF(ms, @Second, @Third) AS [LIMITED] , DATEDIFF(ms, @Third, @Finish) AS [DETAILED] Running this code will give you 4 result sets; DEFAULT will have 12 columns full of data and then NULLS in the remainder. SAMPLED will have 21 columns full of data. LIMITED will have 12 columns of data and the NULLS in the remainder. DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: Running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in so be careful running this on big databases unless you have tried it on a test server first. Let’s look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID (‘AnyDatabaseName’). The first columns we get back are database_id and object_id. 
    These are pretty explanatory and we can wrap those in some code to make things a little easier to read: SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName] … FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips  gives us   SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName], [i].[name] AS [IndexName] , ….. FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id] AND [ddips].[object_id] = [i].[object_id]   These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g. SELECT * FROM [sys].[dm_db_index_physical_stats] (DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips   Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function call as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function: DECLARE @db_id SMALLINT; DECLARE @object_id INT; SET @db_id = DB_ID(N'AdventureWorks_2008'); SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address'); IF @db_id IS NULL BEGIN PRINT N'Invalid database'; END ELSE IF @object_id IS NULL BEGIN PRINT N'Invalid object'; END ELSE BEGIN SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED'); END; GO In cases where the results of querying this dmv don’t have any effect on other processes (i.e. simply viewing the results in the SSMS results area), it will simply be noticed if the results are not consistent with what was expected, and in the case of this blog that is the method I have used. So, now that we can relate the values in these columns to something that we recognise in the database, let’s see what those other values in the dmv are all about. The next columns are: We’ll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level as this is a quick look at the dmv and they are pretty self explanatory. The final columns revealed by querying this view in the DEFAULT mode are avg_fragmentation_in_percent. This is the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode. fragment_count. The number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode. avg_fragment_size_in_pages. The average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode. page_count. Total number of index or data pages in use. OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having it’s data in 3 pieces might not be too big an issue as each piece would have a significant piece of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases and that means the amount of work for the disks to do in order to retrieve the data to satisfy the query increases and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index taking into account the multiple files, type of allocation unit and the previously mentioned characteristics if index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep it in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to another. Keeping this in mind DBAs need to use an ‘intelligent’ process that assesses key characteristics of an index and decides on the best, if any, defragmentation method to apply should be used. There is a simple example of this in the sample code found in the Books OnLine content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let’s get back on track then. Querying the dmv with the fifth parameter value as ‘DETAILED’ takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look a we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (Page_Count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1*8) * 1024*(Col2/100))/Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as this indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be. A detail of the smallest and largest records in the index. 
Purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently then potentially the DBA should consider; the fill_factor of the index in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly. the columns used in the index should be analysed to avoid new records needing to be inserted in the middle of the index but rather always be added to the end. * – it’s approximate as there are many factors associated with things like the type of data and other database settings that affect this slightly.  Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided “as is” and no guarantee, warranty or accuracy is applicable or inferred, run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
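    To make that 'intelligent' process a little more concrete, here is a minimal sketch of the decision logic only, using the commonly quoted 5% and 30% thresholds and skipping small indexes where fragmentation rarely matters. The thresholds are rules of thumb rather than hard rules, and the Books OnLine sample and the Michelle Ufford / Ola Hallengren solutions mentioned above go much further, so treat this purely as an illustration.

    -- Suggest a maintenance action per index from the LIMITED physical stats
    SELECT DB_NAME(ddips.database_id)   AS DatabaseName,
           OBJECT_NAME(ddips.object_id) AS TableName,
           i.name                       AS IndexName,
           ddips.avg_fragmentation_in_percent,
           ddips.page_count,
           CASE
               WHEN ddips.page_count < 1000                   THEN 'IGNORE'  -- too small to worry about
               WHEN ddips.avg_fragmentation_in_percent < 5.0  THEN 'NONE'
               WHEN ddips.avg_fragmentation_in_percent < 30.0 THEN 'REORGANIZE'
               ELSE 'REBUILD'
           END                          AS SuggestedAction
    FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
           INNER JOIN sys.indexes AS i
               ON  i.object_id = ddips.object_id
               AND i.index_id  = ddips.index_id
    WHERE  ddips.index_id > 0  -- skip heaps
    ORDER  BY ddips.avg_fragmentation_in_percent DESC;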

    Read the article

  • New Big Data Appliance Security Features

    - by mgubar
    The Oracle Big Data Appliance (BDA) is an engineered system for big data processing.  It greatly simplifies the deployment of an optimized Hadoop Cluster – whether that cluster is used for batch or real-time processing.  The vast majority of BDA customers are integrating the appliance with their Oracle Databases and they have certain expectations – especially around security.  Oracle Database customers have benefited from a rich set of security features:  encryption, redaction, data masking, database firewall, label based access control – and much, much more.  They want similar capabilities with their Hadoop cluster.    Unfortunately, Hadoop wasn’t developed with security in mind.  By default, a Hadoop cluster is insecure – the antithesis of an Oracle Database.  Some critical security features have been implemented – but even those capabilities are arduous to setup and configure.  Oracle believes that a key element of an optimized appliance is that its data should be secure.  Therefore, by default the BDA delivers the “AAA of security”: authentication, authorization and auditing. Security Starts at Authentication A successful security strategy is predicated on strong authentication – for both users and software services.  Consider the default configuration for a newly installed Oracle Database; it’s been a long time since you had a legitimate chance at accessing the database using the credentials “system/manager” or “scott/tiger”.  The default Oracle Database policy is to lock accounts thereby restricting access; administrators must consciously grant access to users. Default Authentication in Hadoop By default, a Hadoop cluster fails the authentication test. For example, it is easy for a malicious user to masquerade as any other user on the system.  Consider the following scenario that illustrates how a user can access any data on a Hadoop cluster by masquerading as a more privileged user.  In our scenario, the Hadoop cluster contains sensitive salary information in the file /user/hrdata/salaries.txt.  When logged in as the hr user, you can see the following files.  Notice, we’re using the Hadoop command line utilities for accessing the data: $ hadoop fs -ls /user/hrdataFound 1 items-rw-r--r--   1 oracle supergroup         70 2013-10-31 10:38 /user/hrdata/salaries.txt$ hadoop fs -cat /user/hrdata/salaries.txtTom Brady,11000000Tom Hanks,5000000Bob Smith,250000Oprah,300000000 User DrEvil has access to the cluster – and can see that there is an interesting folder called “hrdata”.  $ hadoop fs -ls /user Found 1 items drwx------   - hr supergroup          0 2013-10-31 10:38 /user/hrdata However, DrEvil cannot view the contents of the folder due to lack of access privileges: $ hadoop fs -ls /user/hrdata ls: Permission denied: user=drevil, access=READ_EXECUTE, inode="/user/hrdata":oracle:supergroup:drwx------ Accessing this data will not be a problem for DrEvil. He knows that the hr user owns the data by looking at the folder’s ACLs. To overcome this challenge, he will simply masquerade as the hr user. On his local machine, he adds the hr user, assigns that user a password, and then accesses the data on the Hadoop cluster: $ sudo useradd hr $ sudo passwd $ su hr $ hadoop fs -cat /user/hrdata/salaries.txt Tom Brady,11000000 Tom Hanks,5000000 Bob Smith,250000 Oprah,300000000 Hadoop has not authenticated the user; it trusts that the identity that has been presented is indeed the hr user. Therefore, sensitive data has been easily compromised. 
Clearly, the default security policy is inappropriate and dangerous to many organizations storing critical data in HDFS. Big Data Appliance Provides Secure Authentication The BDA provides secure authentication to the Hadoop cluster by default – preventing the type of masquerading described above. It accomplishes this thru Kerberos integration. Figure 1: Kerberos Integration The Key Distribution Center (KDC) is a server that has two components: an authentication server and a ticket granting service. The authentication server validates the identity of the user and service. Once authenticated, a client must request a ticket from the ticket granting service – allowing it to access the BDA’s NameNode, JobTracker, etc. At installation, you simply point the BDA to an external KDC or automatically install a highly available KDC on the BDA itself. Kerberos will then provide strong authentication for not just the end user – but also for important Hadoop services running on the appliance. You can now guarantee that users are who they claim to be – and rogue services (like fake data nodes) are not added to the system. It is common for organizations to want to leverage existing LDAP servers for common user and group management. Kerberos integrates with LDAP servers – allowing the principals and encryption keys to be stored in the common repository. This simplifies the deployment and administration of the secure environment. Authorize Access to Sensitive Data Kerberos-based authentication ensures secure access to the system and the establishment of a trusted identity – a prerequisite for any authorization scheme. Once this identity is established, you need to authorize access to the data. HDFS will authorize access to files using ACLs with the authorization specification applied using classic Linux-style commands like chmod and chown (e.g. hadoop fs -chown oracle:oracle /user/hrdata changes the ownership of the /user/hrdata folder to oracle). Authorization is applied at the user or group level – utilizing group membership found in the Linux environment (i.e. /etc/group) or in the LDAP server. For SQL-based data stores – like Hive and Impala – finer grained access control is required. Access to databases, tables, columns, etc. must be controlled. And, you want to leverage roles to facilitate administration. Apache Sentry is a new project that delivers fine grained access control; both Cloudera and Oracle are the project’s founding members. Sentry satisfies the following three authorization requirements: Secure Authorization:  the ability to control access to data and/or privileges on data for authenticated users. Fine-Grained Authorization:  the ability to give users access to a subset of the data (e.g. column) in a database Role-Based Authorization:  the ability to create/apply template-based privileges based on functional roles. With Sentry, “all”, “select” or “insert” privileges are granted to an object. The descendants of that object automatically inherit that privilege. A collection of privileges across many objects may be aggregated into a role – and users/groups are then assigned that role. This leads to simplified administration of security across the system. Figure 2: Object Hierarchy – granting a privilege on the database object will be inherited by its tables and views. Sentry is currently used by both Hive and Impala – but it is a framework that other data sources can leverage when offering fine-grained authorization. 
For example, one can expect Sentry to deliver authorization capabilities to Cloudera Search in the near future. Audit Hadoop Cluster Activity Auditing is a critical component to a secure system and is oftentimes required for SOX, PCI and other regulations. The BDA integrates with Oracle Audit Vault and Database Firewall – tracking different types of activity taking place on the cluster: Figure 3: Monitored Hadoop services. At the lowest level, every operation that accesses data in HDFS is captured. The HDFS audit log identifies the user who accessed the file, the time that file was accessed, the type of access (read, write, delete, list, etc.) and whether or not that file access was successful. The other auditing features include: MapReduce:  correlate the MapReduce job that accessed the file Oozie:  describes who ran what as part of a workflow Hive:  captures changes were made to the Hive metadata The audit data is captured in the Audit Vault Server – which integrates audit activity from a variety of sources, adding databases (Oracle, DB2, SQL Server) and operating systems to activity from the BDA. Figure 4: Consolidated audit data across the enterprise.  Once the data is in the Audit Vault server, you can leverage a rich set of prebuilt and custom reports to monitor all the activity in the enterprise. In addition, alerts may be defined to trigger violations of audit policies. Conclusion Security cannot be considered an afterthought in big data deployments. Across most organizations, Hadoop is managing sensitive data that must be protected; it is not simply crunching publicly available information used for search applications. The BDA provides a strong security foundation – ensuring users are only allowed to view authorized data and that data access is audited in a consolidated framework.

    Read the article

  • Achieve Named Criteria with multiple tables in EJB Data control

    - by Deepak Siddappa
    In EJB create a named criteria using sparse xml and in named criteria wizard, only attributes related to the that particular entities will be displayed.  So here we can filter results only on particular entity bean. Take a scenario where we need to create Named Criteria based on multiple tables using EJB. In BC4J we can achieve this by creating view object based on multiple tables. So in this article, we will try to achieve named criteria based on multiple tables using EJB.Implementation StepsCreate Java EE Web Application with entity based on Departments and Employees, then create a session bean and data control for the session bean.Create a Java Bean, name as CustomBean and add below code to the file. Here in java bean from both Departments and Employees tables three fields are taken. public class CustomBean { private BigDecimal departmentId; private String departmentName; private BigDecimal locationId; private BigDecimal employeeId; private String firstName; private String lastName; public CustomBean() { super(); } public void setDepartmentId(BigDecimal departmentId) { this.departmentId = departmentId; } public BigDecimal getDepartmentId() { return departmentId; } public void setDepartmentName(String departmentName) { this.departmentName = departmentName; } public String getDepartmentName() { return departmentName; } public void setLocationId(BigDecimal locationId) { this.locationId = locationId; } public BigDecimal getLocationId() { return locationId; } public void setEmployeeId(BigDecimal employeeId) { this.employeeId = employeeId; } public BigDecimal getEmployeeId() { return employeeId; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getFirstName() { return firstName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getLastName() { return lastName; } } Open the sessionEJb file and add the below code to the session bean and expose the method in local/remote interface and generate a data control for that. Note:- Here in the below code "em" is a EntityManager. public List<CustomBean> getCustomBeanFindAll() { String queryString = "select d.department_id, d.department_name, d.location_id, e.employee_id, e.first_name, e.last_name from departments d, employees e\n" + "where e.department_id = d.department_id"; Query genericSearchQuery = em.createNativeQuery(queryString, "CustomQuery"); List resultList = genericSearchQuery.getResultList(); Iterator resultListIterator = resultList.iterator(); List<CustomBean> customList = new ArrayList(); while (resultListIterator.hasNext()) { Object col[] = (Object[])resultListIterator.next(); CustomBean custom = new CustomBean(); custom.setDepartmentId((BigDecimal)col[0]); custom.setDepartmentName((String)col[1]); custom.setLocationId((BigDecimal)col[2]); custom.setEmployeeId((BigDecimal)col[3]); custom.setFirstName((String)col[4]); custom.setLastName((String)col[5]); customList.add(custom); } return customList; } Open the DataControls.dcx file and create sparse xml for customBean. In sparse xml navigate to Named criteria tab -> Bind Variable section, create two binding variables deptId,fName. In sparse xml navigate to Named criteria tab ->Named criteria, create a named criteria and map the query attributes to the bind variables. In the ViewController create a file jspx page, from data control palette drop customBeanFindAll->Named Criteria->CustomBeanCriteria->Query as ADF Query Panel with Table. 
    Run the jspx page and enter values in the search form, with departmentId as 50 and firstName as "M". The named criteria will filter the query of the data source and display the results as shown below.

    Read the article

  • T-SQL in Chicago – the LobsterPot teams with DataEducation

    - by Rob Farley
    In May, I’ll be in the US. I have board meetings for PASS at the SQLRally event in Dallas, and then I’m going to be spending a bit of time in Chicago. The big news is that while I’m in Chicago (May 14-16), I’m going to teach my “Advanced T-SQL Querying and Reporting: Building Effectiveness” course. This is a course that I’ve been teaching since the 2005 days, and have modified over time for 2008 and 2012. It’s very much my most popular course, and I love teaching it. Let me tell you why. For years, I wrote queries and thought I was good at it. I was a developer. I’d written a lot of C (and other, more fun languages like Prolog and Lisp) at university, and then got into the ‘real world’ and coded in VB, PL/SQL, and so on through to C#, and saw SQL (whichever database system it was) as just a way of getting the data back. I could write a query to return just about whatever data I wanted, and that was good. I was better at it than the people around me, and that helped. (It didn’t help my progression into management, then it just became a frustration, but for the most part, it was good to know that I was good at this particular thing.) But then I discovered the other side of querying – the execution plan. I started to learn about the translation from what I’d written into the plan, and this impacted my query-writing significantly. I look back at the queries I wrote before I understood this, and shudder. I wrote queries that were correct, but often a long way from effective. I’d done query tuning, but had largely done it without considering the plan, just inferring what indexes would help. This is not a performance-tuning course. It’s focused on the T-SQL that you read and write. But performance is a significant and recurring theme. Effective T-SQL has to be about performance – it’s the biggest way that a query becomes effective. There are other aspects too though – such as using constructs better. For example – I can write code that modifies data nicely, but if I haven’t learned about the MERGE statement and the way that it can impact things, I’m missing a few tricks. If you’re going to do this course, a good place to be is the situation I was in a few years before I wrote this course. You’re probably comfortable with writing T-SQL queries. You know how to make a SELECT statement do what you need it to, but feel there has to be a better way. You can write JOINs easily, and understand how to use LEFT JOIN to make sure you don’t filter out rows from the first table, but you’re coding blind. The first module I cover is on Query Execution. Take a look at the Course Outline at Data Education’s website. The first part of the first module is on the components of a SELECT statement (where I make you think harder about GROUP BY than you probably have before), but then we jump straight into Execution Plans. Some stuff on indexes is in there too, as is simplification and SARGability. Some of this is stuff that you may have heard me present on at conferences, but here you have me for three days straight. I’m sure you can imagine that we revisit these topics throughout the rest of the course as well, and you’d be right. In the second and third modules we look at a bunch of other aspects, including some of the T-SQL constructs that lots of people don’t know, and various other things that can help your T-SQL be, well, more effective. I’ve had quite a lot of people do this course and be itching to get back to work even on the first day. 
That’s not a comment about the jokes I tell, but because people want to look at the queries they run. LobsterPot Solutions is thrilled to be partnering with Data Education to bring this training to Chicago. Visit their website to register for the course. @rob_farley
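    PS: for a concrete taste of the kind of construct I mean, the MERGE statement mentioned above collapses the usual check-then-update-or-insert pattern into a single statement. This is just a minimal sketch against hypothetical dbo.Customer and dbo.CustomerStaging tables, not course material; WHEN NOT MATCHED BY SOURCE is also available if you need deletes as part of the same pass.

    -- Upsert staged rows into the target table in one statement
    MERGE dbo.Customer AS tgt
    USING dbo.CustomerStaging AS src
        ON tgt.CustomerID = src.CustomerID
    WHEN MATCHED THEN
        UPDATE SET tgt.Name  = src.Name,
                   tgt.Email = src.Email
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerID, Name, Email)
        VALUES (src.CustomerID, src.Name, src.Email);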

    Read the article

  • Looking under the hood of SSRS

    - by Jim Giercyk
    SSRS is a powerful tool, but there is very little available to measure its performance or view the SSRS execution log or catalog in detail.  Here are a few simple queries that will give you insight into the system that you never had before.   ACTIVE REPORTS:  Have you ever seen your SQL Server performance take a nose dive due to a long-running report?  If the SPID is executing under a generic Report ID, or it is a scheduled job, you may have no way to tell which report is killing your server.  Running this query will show you which reports are executing at a given time, and WHO is executing them.   USE ReportServerNative SELECT runningjobs.computername, runningjobs.requestname, runningjobs.startdate, users.username, Datediff(s, runningjobs.startdate, Getdate()) / 60 AS 'Active Minutes' FROM runningjobs INNER JOIN users ON runningjobs.userid = users.userid ORDER BY runningjobs.startdate   SSRS CATALOG:  We have all asked “What was the last thing that changed”, or better yet, “Who in the world did that!”.  Here is a query that will show all of the reports in your SSRS catalog, when they were created and changed, and by whom.   USE ReportServerNative SELECT DISTINCT catalog.PATH, catalog.name, users.username AS [Created By], catalog.creationdate, users_1.username AS [Modified By], catalog.modifieddate FROM catalog INNER JOIN users ON catalog.createdbyid = users.userid INNER JOIN users AS users_1 ON catalog.modifiedbyid = users_1.userid INNER JOIN executionlogstorage ON catalog.itemid = executionlogstorage.reportid WHERE ( catalog.name <> '' )   SSRS EXECUTION LOG:  Sometimes we need to know what was happening on the SSRS report server at a given time in the past.  This query will help you do just that.  You will need to set the timestart and timeend in the WHERE clause to suit your needs.   USE ReportServerNative SELECT catalog.name AS report, e.username AS [User], e.timestart, e.timeend, Datediff(mi, e.timestart, e.timeend) AS 'Time In Minutes', catalog.modifieddate AS [Report Last Modified], users.username FROM catalog (nolock) INNER JOIN executionlogstorage e (nolock) ON catalog.itemid = e.reportid INNER JOIN users (nolock) ON catalog.modifiedbyid = users.userid WHERE e.timestart >= Dateadd(s, -1, '03/31/2012') AND e.timeend <= Dateadd(DAY, 1, '04/02/2012')   LONG RUNNING REPORTS:  This query will show the longest running reports over a given time period.  Note that the “>5” in the WHERE clause sets the report threshold at 5 minutes, so anything that ran less than 5 minutes will not appear in the result set.  Adjust the threshold and start/end times to your liking.  With this information in hand, you can better optimize your system by tweaking the longest running reports first.
    USE ReportServerNative SELECT e.instancename, catalog.PATH, catalog.name, e.username, e.timestart, e.timeend, Datediff(mi, e.timestart, e.timeend) AS [Minutes], e.timedataretrieval, e.timeprocessing, e.timerendering, e.[RowCount], users_1.username AS createdby, CONVERT(VARCHAR(10), catalog.creationdate, 101) AS 'Creation Date', users.username AS modifiedby, CONVERT(VARCHAR(10), catalog.modifieddate, 101) AS 'Modified Date' FROM executionlogstorage e INNER JOIN catalog ON e.reportid = catalog.itemid INNER JOIN users ON catalog.modifiedbyid = users.userid INNER JOIN users AS users_1 ON catalog.createdbyid = users_1.userid WHERE ( e.timestart > '03/31/2012' ) AND ( e.timestart <= '04/02/2012' ) AND Datediff(mi, e.timestart, e.timeend) > 5 AND catalog.name <> '' ORDER BY [Minutes] DESC   I have used these queries to build SSRS reports that I can refer to quickly, and export to Excel if I need to report or quantify my findings.  I encourage you to look at the data in the ReportServerNative database on your report server to understand the queries and create some of your own.  For instance, you may want a query to determine which reports are using which shared data sources.  Work smarter, not harder!
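    Picking up that last suggestion, a query along the following lines will map reports to their shared data sources. The ReportServer catalog is not really documented for direct querying, and the table and column names here (datasource.link pointing at the shared data source's catalog entry, catalog.type = 2 for reports) are from memory, so treat this as a starting point and verify it against your own ReportServerNative database.

    USE ReportServerNative
    SELECT reports.PATH AS ReportPath,
           reports.name AS ReportName,
           ds.name      AS DataSourceName,
           shared.PATH  AS SharedDataSourcePath
    FROM   catalog AS reports
           INNER JOIN datasource AS ds
               ON ds.itemid = reports.itemid
           LEFT JOIN catalog AS shared
               ON shared.itemid = ds.link   -- NULL for embedded (non-shared) data sources
    WHERE  reports.type = 2                 -- 2 = report
    ORDER  BY reports.PATH, ds.name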

    Read the article

  • SEO non-English domain name advice

    - by Dominykas Mostauskis
    I'm starting a website, that is meant for a non-English region, using an alphabet that is a bit different than that of English. Current plan is as follows. The website name, and the domain name, will be in the local language (not English); however, domain name will be spelled in the English alphabet, while the website's title will be the same word(s), but spelled properly with accents. E.g.: 'www.litterat.fr' and 'Littérat'. Does the difference between domain name and website name character use influence the site's SEO? Is it better, SEO-wise, to choose a name that can be spelled the same way in the English alphabet? From my experience, when searching online, invariably, the English alphabet is used, no matter the language, so people will still be searching 'litterat' (without accents and such). Edit: To sum up: Things have been said about IDN (Internationalized domain name). To make it simple, they are second-level domain names that contain language specific characters (LSP)(e.g. www.café.fr). Here you can check what top-level domains support what LSPs. Check initall's answer for more info on using LSPs in paths and queries. To answer my question about how and if search engines relate keywords spelled with and without language specific characters: Google can potentially tell that series and séries is the same keyword. However, (most relevant for words that are spelled differently across languages and have different meanings, like séries), for Google to make the connection (or lack thereof) between e and é, it has to deduce two things: Language that you are searching in. Language of your query. You can specify it manually through Advanced search or it guesses it, sometimes. I presume it can guess it wrong too. The more keywords specific to this language you use the higher Google's chance to guess the language. Language of the crawled document, against which the ASCII version of the word will be compared (in this example – series). Again, check initall's answer for how to help Google in understanding what language your document is in. Once it has that it can tell whether or not these two spellings should be treated as the same keyword. Google has to understand that even though you're not using french (in this example) specific characters, you're searching in French. The reason why I used the french word séries in this example, is that it demonstrates this very well. You have it in French and you have it in English without the accent. So if your search query is ambiguous like our series, unless Google has something more to go on, it will presume that there's no correlation between your search and séries in French documents. If you augment your query to series romantiques (try it), Google will understand that you're searching in French and among your results you'll see séries as well. But this does not always work, you should test it out with your keywords first. For example, if you search series francaises, it will associate francaises with françaises, but it will not associate series with séries. It depends on the words. Note: worth stressing that this problem is very relevant to words that, written in plain ASCII, might have some other meanings in other languages, it is less relevant to words that can be, by a distinct margin, just some one language. Tip: I've noticed that sometimes even if my non-accented search query doesn't get associated with the properly spelled word in a document (especially if it's the title or an important keyword in the doc), it still comes up. 
I followed the link, did a Ctrl-F search for my non-accented search query and found nothing, then checked the meta tags in the source: the page had its title in both accented and non-accented forms. So if you have meta tags that can be spelled both with and without language-specific characters, include both versions. Footnote: I hope this helps. If you have anything to add or correct, go ahead.

    Read the article

  • PASS Summit 2011 &ndash; Part II

    - by Tara Kizer
    I arrived in Seattle last Monday afternoon to attend PASS Summit 2011.  I had really wanted to attend Gail Shaw’s (blog|twitter) and Grant Fritchey’s (blog|twitter) pre-conference seminar “All About Execution Plans” on Monday, but that would have meant flying out on Sunday which I couldn’t do.  On Tuesday, I attended Allan Hirt’s (blog|twitter) pre-conference seminar entitled “A Deep Dive into AlwaysOn: Failover Clustering and Availability Groups”.  Allan is a great speaker, and his seminar was packed with demos and information about AlwaysOn in SQL Server 2012.  Unfortunately, I have lost my notes from this seminar and the presentation materials are only available on the pre-con DVD.  Hmpf!
On Wednesday, I attended Gail Shaw’s “Bad Plan! Sit!”, Andrew Kelly’s (blog|twitter) “SQL 2008 Query Statistics”, Dan Jones’ (blog|twitter) “Improving your PowerShell Productivity”, and Brent Ozar’s (blog|twitter) “BLITZ! The SQL – More One Hour SQL Server Takeovers”.  In Gail’s session, she went over how to fix bad plans and bad query patterns.  Update your stale statistics!
How to fix bad plans:
- Use local variables – optimizer can’t sniff it, so it’ll optimize for “average” value
- Use RECOMPILE (at the query or stored procedure level) – CPU hit
- OPTIMIZE FOR hint – most common value you’ll pass
How to fix bad query patterns:
- Don’t use them – ha!
- Catch-all queries: use dynamic SQL or OPTION (RECOMPILE)
- Multiple execution paths: split into multiple stored procedures or OPTION (RECOMPILE)
- Modifying parameter values: use local variables, split into outer and inner procedure, or OPTION (RECOMPILE)
She also went into “last resort” and “very last resort” options, but those are risky unless you know what you are doing.  For the average Joe, she wouldn’t recommend these.  Examples are query hints and plan guides.
While I enjoyed Andrew’s session, I didn’t take any notes as it was familiar material.  Andrew is a great speaker though, and I’d highly recommend attending his sessions in the future.
Next up was Dan’s PowerShell session.  I need to look into profiles, manifests, function modules, and function import scripts more as I just didn’t quite grasp these concepts.  I am attending a PowerShell training class at the end of November, so maybe that’ll help clear it up.  I really enjoyed the Excel integration demo.  It was very cool watching PowerShell build the spreadsheet in real-time.  I must look into this more!  On a side note, I am jealous of Dan’s hair.  Fabulous hair!
Brent’s session showed us how to quickly gather information about a server that you will be taking over database administration duties for.  He wrote a script to do a fast health check and then later wrapped it into a stored procedure, sp_Blitz.  I can’t wait to use this at my work even on systems where I’ve been the primary DBA for years, maybe there’s something I’ve overlooked.  We are using EPM to help standardize our environment and uncover problems, but sp_Blitz will definitely still help us out.  He even provides a cloud-based update feature, sp_BlitzUpdate, for sp_Blitz so you don’t have to constantly update it when he makes a change.  I think I’ll utilize his update code for some other challenges that we face at my work.

    Read the article

  • How views are changing in future versions of SQL

    - by Rob Farley
    April is here, and this weekend, SQL v11.0 (previously known as Denali, now known as SQL Server 2012) reaches general availability. And so I thought I’d share some news about what’s coming next. I didn’t hear this at the MVP Summit earlier this year (where there was lots of NDA information given, but I didn’t go), so I think I’m free to share it. I’ve written before about CTEs being query-scoped views. Well, the actual story goes a bit further, and will continue to develop in future versions. A CTE is like a “temporary temporary view”, scoped to a single query. Due to globally-scoped temporary objects using a two-hashes naming style, and session-scoped (or ‘local’) temporary objects a one-hash naming style, this query-scoped temporary object uses a cunning zero-hash naming style. We see this implied in Books Online in the CREATE TABLE page, but as we know, temporary views are not yet supported in SQL Server. However, in a breakaway from ANSI-SQL, Microsoft is moving towards consistency with their naming.
We know that a CTE is a “common table expression” – this is proving to be more strategic than you may have appreciated. Within the Microsoft product group, the term “Table Expression” is far more widely used than just CTEs. Anything that can be used in a FROM clause is referred to as a Table Expression, so long as it doesn’t actually store data (which would make it a Table, rather than a Table Expression). You can see this is not just restricted to the product group by doing an internet search for how the term is used without ‘common’. In the past, Books Online has referred to a view as a “virtual table” (but notice that there is no SQL 2012 version of this page). However, it was generally decided that “virtual table” was a poor name because it wasn’t completely accurate, and it’s typically accepted that virtualisation and SQL is frowned upon. That page I linked to says “or stored query”, which is slightly better, but when the SQL 2012 version of that page is actually published, the line will be changed to read: “A view is a stored table expression (STE)”.
This change will be the first of many. During the SQL 2012 R2 release, the keyword VIEW will become deprecated (this will be SQL v11 SP1.5). Three versions later, in SQL 14.5, you will need to be in compatibility mode 140 to allow “CREATE VIEW” to work. Also consistent with Microsoft’s deprecation policy, the execution of any query that refers to an object created as a view (rather than the new “CREATE STE”) will cause a Deprecation Event to fire. This will all be in preparation for the introduction of Single-Column Table Expressions (to be introduced in SQL 17.3 SP6), which will finally shut up those people waiting for a decent implementation of Inline Scalar Functions. And of course, CTEs are “Common” because the Table Expression definition needs to be repeated over and over throughout a stored procedure. ...or so I think I heard at some point. Oh, and congratulations to all the new MVPs on this April 1st. @rob_farley

    Read the article

  • JSON error Caused by: java.lang.NullPointerException

    - by user3821853
    im trying to make a register page on android using JSON. everytime i press register button on avd, i get an error "unfortunately database has stopped". i have a error on my logcat that i cannot understand. this my code. please someone help me. this my register.java import android.app.Activity; import android.app.ProgressDialog; import android.os.AsyncTask; import android.os.Bundle; import android.util.Log; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.widget.EditText; import android.widget.Toast; import org.apache.http.NameValuePair; import org.apache.http.message.BasicNameValuePair; import org.json.JSONException; import org.json.JSONObject; import java.util.ArrayList; import java.util.List; public class Register extends Activity implements OnClickListener{ private EditText user, pass; private Button mRegister; // Progress Dialog private ProgressDialog pDialog; // JSON parser class JSONParser jsonParser = new JSONParser(); //php register script //localhost : //testing on your device //put your local ip instead, on windows, run CMD > ipconfig //or in mac's terminal type ifconfig and look for the ip under en0 or en1 // private static final String REGISTER_URL = "http://xxx.xxx.x.x:1234/webservice/register.php"; //testing on Emulator: private static final String REGISTER_URL = "http://10.0.2.2:1234/webservice/register.php"; //testing from a real server: //private static final String REGISTER_URL = "http://www.mybringback.com/webservice/register.php"; //ids private static final String TAG_SUCCESS = "success"; private static final String TAG_MESSAGE = "message"; @Override protected void onCreate(Bundle savedInstanceState) { // TODO Auto-generated method stub super.onCreate(savedInstanceState); setContentView(R.layout.register); user = (EditText)findViewById(R.id.username); pass = (EditText)findViewById(R.id.password); mRegister = (Button)findViewById(R.id.register); mRegister.setOnClickListener(this); } @Override public void onClick(View v) { // TODO Auto-generated method stub new CreateUser().execute(); } class CreateUser extends AsyncTask<String, String, String> { @Override protected void onPreExecute() { super.onPreExecute(); pDialog = new ProgressDialog(Register.this); pDialog.setMessage("Creating User..."); pDialog.setIndeterminate(false); pDialog.setCancelable(true); pDialog.show(); } @Override protected String doInBackground(String... 
args) { // TODO Auto-generated method stub // Check for success tag int success; String username = user.getText().toString(); String password = pass.getText().toString(); try { // Building Parameters List<NameValuePair> params = new ArrayList<NameValuePair>(); params.add(new BasicNameValuePair("username", username)); params.add(new BasicNameValuePair("password", password)); Log.d("request!", "starting"); //Posting user data to script JSONObject json = jsonParser.makeHttpRequest( REGISTER_URL, "POST", params); // full json response Log.d("Registering attempt", json.toString()); // json success element success = json.getInt(TAG_SUCCESS); if (success == 1) { Log.d("User Created!", json.toString()); finish(); return json.getString(TAG_MESSAGE); }else{ Log.d("Registering Failure!", json.getString(TAG_MESSAGE)); return json.getString(TAG_MESSAGE); } } catch (JSONException e) { e.printStackTrace(); } return null; } protected void onPostExecute(String file_url) { // dismiss the dialog once product deleted pDialog.dismiss(); if (file_url != null){ Toast.makeText(Register.this, file_url, Toast.LENGTH_LONG).show(); } } } } this is JSONparser.java import android.util.Log; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.NameValuePair; import org.apache.http.client.ClientProtocolException; import org.apache.http.client.entity.UrlEncodedFormEntity; import org.apache.http.client.methods.HttpGet; import org.apache.http.client.methods.HttpPost; import org.apache.http.client.utils.URLEncodedUtils; import org.apache.http.impl.client.DefaultHttpClient; import org.json.JSONException; import org.json.JSONObject; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.UnsupportedEncodingException; import java.util.List; public class JSONParser { static InputStream is = null; static JSONObject jObj = null; static String json = ""; // constructor public JSONParser() { } public JSONObject getJSONFromUrl(final String url) { // Making HTTP request try { // Construct the client and the HTTP request. DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); // Execute the POST request and store the response locally. HttpResponse httpResponse = httpClient.execute(httpPost); // Extract data from the response. HttpEntity httpEntity = httpResponse.getEntity(); // Open an inputStream with the data content. is = httpEntity.getContent(); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { // Create a BufferedReader to parse through the inputStream. BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); // Declare a string builder to help with the parsing. StringBuilder sb = new StringBuilder(); // Declare a string to store the JSON object data in string form. String line = null; // Build the string until null. while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } // Close the input stream. is.close(); // Convert the string builder data to an actual string. json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // Try to parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // Return the JSON Object. 
return jObj; } // function get json from url // by making HTTP POST or GET mehtod public JSONObject makeHttpRequest(String url, String method, List<NameValuePair> params) { // Making HTTP request try { // check for request method if(method == "POST"){ // request method is POST // defaultHttpClient DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); httpPost.setEntity(new UrlEncodedFormEntity(params)); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); }else if(method == "GET"){ // request method is GET DefaultHttpClient httpClient = new DefaultHttpClient(); String paramString = URLEncodedUtils.format(params, "utf-8"); url += "?" + paramString; HttpGet httpGet = new HttpGet(url); HttpResponse httpResponse = httpClient.execute(httpGet); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); } } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } is.close(); json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // try parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // return JSON String return jObj; } } and this my error 08-18 23:40:02.381 2000-2018/com.example.blackcustomzier.database E/Buffer Error? Error converting result java.lang.NullPointerException: lock == null 08-18 23:40:02.381 2000-2018/com.example.blackcustomzier.database E/JSON Parser? Error parsing data org.json.JSONException: End of input at character 0 of 08-18 23:40:02.391 2000-2018/com.example.blackcustomzier.database W/dalvikvm? threadid=15: thread exiting with uncaught exception (group=0xb0f37648) 08-18 23:40:02.391 2000-2018/com.example.blackcustomzier.database E/AndroidRuntime? 
FATAL EXCEPTION: AsyncTask #4 java.lang.RuntimeException: An error occured while executing doInBackground() at android.os.AsyncTask$3.done(AsyncTask.java:299) at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:352) at java.util.concurrent.FutureTask.setException(FutureTask.java:219) at java.util.concurrent.FutureTask.run(FutureTask.java:239) at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573) at java.lang.Thread.run(Thread.java:841) Caused by: java.lang.NullPointerException at com.example.blackcustomzier.database.Register$CreateUser.doInBackground(Register.java:108) at com.example.blackcustomzier.database.Register$CreateUser.doInBackground(Register.java:74) at android.os.AsyncTask$2.call(AsyncTask.java:287) at java.util.concurrent.FutureTask.run(FutureTask.java:234)             at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230)             at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080)             at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573)             at java.lang.Thread.run(Thread.java:841) 08-18 23:40:02.501 2000-2000/com.example.blackcustomzier.database W/EGL_emulation? eglSurfaceAttrib not implemented 08-18 23:40:02.591 2000-2000/com.example.blackcustomzier.database W/EGL_emulation? eglSurfaceAttrib not implemented 08-18 23:40:02.981 2000-2000/com.example.blackcustomzier.database E/WindowManager? Activity com.example.blackcustomzier.database.Register has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView{b1294c60 V.E..... R......D 0,0-1026,288} that was originally added here android.view.WindowLeaked: Activity com.example.blackcustomzier.database.Register has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView{b1294c60 V.E..... 
R......D 0,0-1026,288} that was originally added here at android.view.ViewRootImpl.<init>(ViewRootImpl.java:345) at android.view.WindowManagerGlobal.addView(WindowManagerGlobal.java:239) at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:69) at android.app.Dialog.show(Dialog.java:281) at com.example.blackcustomzier.database.Register$CreateUser.onPreExecute(Register.java:85) at android.os.AsyncTask.executeOnExecutor(AsyncTask.java:586) at android.os.AsyncTask.execute(AsyncTask.java:534) at com.example.blackcustomzier.database.Register.onClick(Register.java:70) at android.view.View.performClick(View.java:4240) at android.view.View.onKeyUp(View.java:7928) at android.widget.TextView.onKeyUp(TextView.java:5606) at android.view.KeyEvent.dispatch(KeyEvent.java:2647) at android.view.View.dispatchKeyEvent(View.java:7343) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at com.android.internal.policy.impl.PhoneWindow$DecorView.superDispatchKeyEvent(PhoneWindow.java:1933) at com.android.internal.policy.impl.PhoneWindow.superDispatchKeyEvent(PhoneWindow.java:1408) at android.app.Activity.dispatchKeyEvent(Activity.java:2384) at com.android.internal.policy.impl.PhoneWindow$DecorView.dispatchKeyEvent(PhoneWindow.java:1860) at android.view.ViewRootImpl$ViewPostImeInputStage.processKeyEvent(ViewRootImpl.java:3791) at android.view.ViewRootImpl$ViewPostImeInputStage.onProcess(ViewRootImpl.java:3774) at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3379) at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3429) at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3398) at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:3483) at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:3406) at android.view.ViewRootImpl$AsyncInputStage.apply(ViewRootImpl.java:3540) at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3379) at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3429) at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3398) at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:3406) at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3379) at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3429) at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3398) at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:3516) at android.view.ViewRootImpl$ImeInputStage.onFinishedInputEvent(ViewRootImpl.java:3666) at android.view.inputmethod.InputMethodManager$PendingEvent.run(InputMethodManager.java:1982) at android.view.inputmethod.InputMethodManager.invokeFinishedInputEventCallback(InputMethodManager.java:1698) at android.view.inputmethod.InputMethodManager.finishedInputEvent(InputMethodManager.java:1689) at android.view.inputmethod.InputMethodManager$ImeInputEventSender.onInputEventFinished(InputMethodManager.java:1959) at android.view.InputEventSender.dispatchInputEventFinished(InputEventSender.java:141) at android.os.MessageQueue.nativePollOnce(Native Method) at android.os.MessageQueue.next(MessageQueue.java:132) at android.os.Looper.loop(Looper.java:124) at android.app.ActivityThread.main(ActivityThread.java:5103) at java.lang.reflect.Method.invokeNative(Native Method) at 
java.lang.reflect.Method.invoke(Method.java:525) at com.android.internal.os.ZygoteInit$MethodAndArgsCal Please help me solve this. Thanks.

    Read the article

  • cx_Oracle.DatabaseError: ORA-01036: illegal variable name/number

    - by Joao Figueiredo
    I have a cron-scheduled query which is failing with:
File "./run_ora_query.py", line 69, in db_lookup
    cursor.execute(query, dict(time_key=time_key))
cx_Oracle.DatabaseError: ORA-01036: illegal variable name/number
where
>>> dict(time_key=time_key)
{'time_key': '12/10/2012 19:12:00'}
I'm using a .yaml file to update the last time_key after each query runs, where the relevant parameters are:
{query: 'select session_mode, inst_id, user_name, schema_name, os_user, process_id, process_mb_use, process_name, to_char(datet,''dd-mm-yyyy hh24:mi'') as DATETIME from os_admin.mem_usage where data > TO_DATE(:time_key,''dd-mm-yyyy hh24:mi:ss'') order by datet, inst_id, os_user', time_key: '12/10/2012 19:12:00'}
What is the culprit for this error?

    Read the article

  • i get the exception org.hibernate.MappingException: No Dialect mapping for JDBC type: -9

    - by ramesh m
    I am using Hibernate and wrote a native SQL query; the query runs fine in the SQL Server command prompt.
try {
    session = HibernateUtil.getInstance().getSession();
    transaction = session.beginTransaction();
    SQLQuery query = session.createSQLQuery("SELECT AP.PROJECT_NAME, AP.SKILLSET, PA.START_DATE, PA.END_DATE, RS.EMPLOYEE_ID, RS.EMPLOYEE_NAME, RS.REPORTING_PM FROM RESOURCE_MASTER RS, SHARED_PROPOSAL S, ACTUAL_PROPOSAL AP, PROJECT_APPROVED PA, PROJECT_ALLOCATION PL WHERE RS.EMPLOYEE_ID = PL.EMPLOYEE_ID AND PA.PROJECT_ID = PL.PROJECT_ID AND PA.SHARED_PROPOSAL_ID = S.SHARED_PROPOSAL_ID AND S.ACTUAL_PROPOSAL_ID = AP.ACTUAL_PROPOSAL_ID");
    List<Object[]> obj = query.list();
    Object[] object = new Object[arrayList.size()];
    for (int i = 0; i < arrayList.size(); i++) {
        object[i] = arrayList.get(i);
        System.out.println(object[i]);
    }
    arrayList.get(0);
    String name = (String) arrayList.get(0);
    logger.info("In find All searchDeveloper");
} catch (Exception exception) {
    throw new PPAMException("Contact admin", "Problem retrieving resource master list", exception);
}
When I run this through Hibernate, I get the exception: org.hibernate.MappingException: No Dialect mapping for JDBC type: -9. I have mapped seven tables; if I remove the ACTUAL_PROPOSAL AP table, it executes correctly. Please help me.

    Read the article

  • ASP.NET MVC Html.BeginRouteForm render's action with problem

    - by Sadegh
    Hi, I defined this route:
context.MapRoute("SearchEngineWebSearch",
    "{culture}/{style}/search/web/{query}/{index}/{size}",
    new { controller = "search", action = "web", query = "", index = 0, size = 5 },
    new { index = new UInt32RouteConstraint(), size = new UInt32RouteConstraint() });
and a form to post parameters to it:
<% using (Html.BeginRouteForm("SearchEngineWebSearch", FormMethod.Post)) { %>
    <input name="query" type="text" value="<%: ViewData["Query"]%>" class="search-field" />
    <input type="submit" value="Search" class="search-button" />
<% } %>
but the form renders with a problem. Why? Thanks in advance ;)
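One thing worth checking for readers hitting the same issue: {culture} and {style} have no default values in the route above, so URL generation can fail unless those values come from the current request's route data or are supplied explicitly. Not a confirmed fix, just a minimal sketch of passing them by hand; the values "en" and "classic" are placeholders, not from the original post:
<% using (Html.BeginRouteForm("SearchEngineWebSearch",
       new { culture = "en", style = "classic" }, // placeholder route values; normally taken from the current request
       FormMethod.Post)) { %>
    <input name="query" type="text" value="<%: ViewData["Query"]%>" class="search-field" />
    <input type="submit" value="Search" class="search-button" />
<% } %>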

    Read the article

  • Parallelism in .NET – Part 7, Some Differences between PLINQ and LINQ to Objects

    - by Reed
    In my previous post on Declarative Data Parallelism, I mentioned that PLINQ extends LINQ to Objects to support parallel operations.  Although nearly all of the same operations are supported, there are some differences between PLINQ and LINQ to Objects.  By introducing Parallelism to our declarative model, we add some extra complexity.  This, in turn, adds some extra requirements that must be addressed. In order to illustrate the main differences, and why they exist, let’s begin by discussing some differences in how the two technologies operate, and look at the underlying types involved in LINQ to Objects and PLINQ . LINQ to Objects is mainly built upon a single class: Enumerable.  The Enumerable class is a static class that defines a large set of extension methods, nearly all of which work upon an IEnumerable<T>.  Many of these methods return a new IEnumerable<T>, allowing the methods to be chained together into a fluent style interface.  This is what allows us to write statements that chain together, and lead to the nice declarative programming model of LINQ: double min = collection .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } Other LINQ variants work in a similar fashion.  For example, most data-oriented LINQ providers are built upon an implementation of IQueryable<T>, which allows the database provider to turn a LINQ statement into an underlying SQL query, to be performed directly on the remote database. PLINQ is similar, but instead of being built upon the Enumerable class, most of PLINQ is built upon a new static class: ParallelEnumerable.  When using PLINQ, you typically begin with any collection which implements IEnumerable<T>, and convert it to a new type using an extension method defined on ParallelEnumerable: AsParallel().  This method takes any IEnumerable<T>, and converts it into a ParallelQuery<T>, the core class for PLINQ.  There is a similar ParallelQuery class for working with non-generic IEnumerable implementations. This brings us to our first subtle, but important difference between PLINQ and LINQ – PLINQ always works upon specific types, which must be explicitly created. Typically, the type you’ll use with PLINQ is ParallelQuery<T>, but it can sometimes be a ParallelQuery or an OrderedParallelQuery<T>.  Instead of dealing with an interface, implemented by an unknown class, we’re dealing with a specific class type.  This works seamlessly from a usage standpoint – ParallelQuery<T> implements IEnumerable<T>, so you can always “switch back” to an IEnumerable<T>.  The difference only arises at the beginning of our parallelization.  When we’re using LINQ, and we want to process a normal collection via PLINQ, we need to explicitly convert the collection into a ParallelQuery<T> by calling AsParallel().  
There is an important consideration here – AsParallel() does not need to be called on your specific collection, but rather any IEnumerable<T>.  This allows you to place it anywhere in the chain of methods involved in a LINQ statement, not just at the beginning.  This can be useful if you have an operation which will not parallelize well or is not thread safe.  For example, the following is perfectly valid, and similar to our previous examples: double min = collection .AsParallel() .Select(item => item.SomeOperation()) .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); However, if SomeOperation() is not thread safe, we could just as easily do: double min = collection .Select(item => item.SomeOperation()) .AsParallel() .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); In this case, we’re using standard LINQ to Objects for the Select(…) method, then converting the results of that map routine to a ParallelQuery<T>, and processing our filter (the Where method) and our aggregation (the Min method) in parallel. PLINQ also provides us with a way to convert a ParallelQuery<T> back into a standard IEnumerable<T>, forcing sequential processing via standard LINQ to Objects.  If SomeOperation() was thread-safe, but PerformComputation() was not thread-safe, we would need to handle this by using the AsEnumerable() method: double min = collection .AsParallel() .Select(item => item.SomeOperation()) .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .AsEnumerable() .Min(item => item.PerformComputation()); Here, we’re converting our collection into a ParallelQuery<T>, doing our map operation (the Select(…) method) and our filtering in parallel, then converting the collection back into a standard IEnumerable<T>, which causes our aggregation via Min() to be performed sequentially. This could also be written as two statements, as well, which would allow us to use the language integrated syntax for the first portion: var tempCollection = from item in collection.AsParallel() let e = item.SomeOperation() where (e.SomeProperty > 6 && e.SomeProperty < 24) select e; double min = tempCollection.AsEnumerable().Min(item => item.PerformComputation()); This allows us to use the standard LINQ style language integrated query syntax, but control whether it’s performed in parallel or serial by adding AsParallel() and AsEnumerable() appropriately. The second important difference between PLINQ and LINQ deals with order preservation.  PLINQ, by default, does not preserve the order of of source collection. This is by design.  In order to process a collection in parallel, the system needs to naturally deal with multiple elements at the same time.  Maintaining the original ordering of the sequence adds overhead, which is, in many cases, unnecessary.  Therefore, by default, the system is allowed to completely change the order of your sequence during processing.  If you are doing a standard query operation, this is usually not an issue.  However, there are times when keeping a specific ordering in place is important.  If this is required, you can explicitly request the ordering be preserved throughout all operations done on a ParallelQuery<T> by using the AsOrdered() extension method.  This will cause our sequence ordering to be preserved. For example, suppose we wanted to take a collection, perform an expensive operation which converts it to a new type, and display the first 100 elements.  
In LINQ to Objects, our code might look something like: // Using IEnumerable<SourceClass> collection IEnumerable<ResultClass> results = collection .Select(e => e.CreateResult()) .Take(100); If we just converted this to a parallel query naively, like so: IEnumerable<ResultClass> results = collection .AsParallel() .Select(e => e.CreateResult()) .Take(100); We could very easily get a very different, and non-reproducable, set of results, since the ordering of elements in the input collection is not preserved.  To get the same results as our original query, we need to use: IEnumerable<ResultClass> results = collection .AsParallel() .AsOrdered() .Select(e => e.CreateResult()) .Take(100); This requests that PLINQ process our sequence in a way that verifies that our resulting collection is ordered as if it were processed serially.  This will cause our query to run slower, since there is overhead involved in maintaining the ordering.  However, in this case, it is required, since the ordering is required for correctness. PLINQ is incredibly useful.  It allows us to easily take nearly any LINQ to Objects query and run it in parallel, using the same methods and syntax we’ve used previously.  There are some important differences in operation that must be considered, however – it is not a free pass to parallelize everything.  When using PLINQ in order to parallelize your routines declaratively, the same guideline I mentioned before still applies: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.
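As a recap of the behaviours described above, here is a small self-contained sketch (not from the original article; the data and the per-element work are stand-ins) contrasting an unordered aggregate with an order-preserving query:

using System;
using System.Linq;

class PlinqOrderingDemo
{
    static void Main()
    {
        var numbers = Enumerable.Range(1, 1000);

        // Unordered parallel query: fine for order-insensitive aggregates like Min.
        double min = numbers
            .AsParallel()
            .Where(n => n % 7 == 0)
            .Min(n => Math.Sqrt(n));

        // AsOrdered() preserves source order, so Take(10) returns the same
        // elements, in the same order, as the sequential LINQ to Objects query.
        var firstTen = numbers
            .AsParallel()
            .AsOrdered()
            .Select(n => n * n)
            .Take(10)
            .ToArray();

        Console.WriteLine(min);
        Console.WriteLine(string.Join(", ", firstTen));
    }
}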

    Read the article

  • apache tomcat sermyadmin deployment error

    - by lepricon123
    I am getting the following errro when i try to start the application Mar 23, 2010 7:51:09 PM org.apache.catalina.core.ApplicationContext log INFO: Set web app root system property: 'sermyadmin-production-2.0.1' = [/usr/local/tomcat6/webapps/sermyadmin/] Mar 23, 2010 7:51:09 PM org.apache.catalina.core.ApplicationContext log INFO: Initializing log4j from [/usr/local/tomcat6/webapps/sermyadmin/WEB-INF/classes/log4j.properties] Mar 23, 2010 7:51:09 PM org.apache.catalina.core.ApplicationContext log INFO: Initializing Spring root WebApplicationContext Mar 23, 2010 7:51:23 PM org.apache.catalina.core.StandardContext listenerStart SEVERE: Exception sending context initialized event to listener instance of class org.codehaus.groovy.grails.web.context.GrailsContextLoaderListener org.springframework.beans.factory.access.BootstrapException: Error executing bootstraps; nested exception is org.codehaus.groovy.runtime.InvokerInvocationException: org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute query; nested exception is org.hibernate.exception.SQLGrammarException: could not execute query at org.codehaus.groovy.grails.web.context.GrailsContextLoader.createWebApplicationContext(GrailsContextLoader.java:66) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3843) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4350) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:525) at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:829) at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:718) at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:490) at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1147) at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311) at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053) at org.apache.catalina.core.StandardHost.start(StandardHost.java:719) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045) at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443) at org.apache.catalina.core.StandardService.start(StandardService.java:516) at org.apache.catalina.core.StandardServer.start(StandardServer.java:710) at org.apache.catalina.startup.Catalina.start(Catalina.java:578) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:288) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413) Caused by: org.codehaus.groovy.runtime.InvokerInvocationException: org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute query; nested exception is org.hibernate.exception.SQLGrammarException: could not execute query at 
org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3843) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4350) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:525) at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:829) at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:718) at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:490) at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1147) at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311) at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053) at org.apache.catalina.core.StandardHost.start(StandardHost.java:719) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045) at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443) at org.apache.catalina.core.StandardService.start(StandardService.java:516) at org.apache.catalina.core.StandardServer.start(StandardServer.java:710) at org.apache.catalina.startup.Catalina.start(Catalina.java:578) ... 2 more Caused by: org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute query; nested exception is org.hibernate.exception.SQLGrammarException: could not execute query at BootStrap$_closure1.doCall(BootStrap.groovy:7) ... 20 more Caused by: org.hibernate.exception.SQLGrammarException: could not execute query ... 21 more Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'opensips.jsec_role' doesn't exist at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)

    Read the article

  • IQueryable<T> Extension Method not working

    - by Micah
    How can I make an extension method that will work like this?
public static class Extensions<T>
{
    public static IQueryable<T> Sort(this IQueryable<T> query, string sortField, SortDirection direction)
    {
        // System.Type dataSourceType = query.GetType();
        //System.Type dataItemType = typeof(object);
        //if (dataSourceType.HasElementType)
        //{
        //    dataItemType = dataSourceType.GetElementType();
        //}
        //else if (dataSourceType.IsGenericType)
        //{
        //    dataItemType = dataSourceType.GetGenericArguments()[0];
        //}
        //var fieldType = dataItemType.GetProperty(sortField);
        if (direction == SortDirection.Ascending)
            return query.OrderBy(s => s.GetType().GetProperty(sortField));
        return query.OrderByDescending(s => s.GetType().GetProperty(sortField));
    }
}
Currently that says "Extension methods must be defined in a non-generic static class". How do I do this?
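For comparison, a minimal sketch of one common way to make this compile and work against IQueryable providers: keep the class non-generic (only the method carries the type parameter) and build the key selector as an expression tree so the provider can translate it. The class name below is illustrative, not from the original post.

using System;
using System.Linq;
using System.Linq.Expressions;
using System.Web.UI.WebControls; // one possible source of SortDirection; the question's may differ

public static class QueryableSortExtensions // illustrative name
{
    public static IQueryable<T> Sort<T>(this IQueryable<T> query, string sortField, SortDirection direction)
    {
        // Build the lambda s => s.SortField dynamically from the property name.
        var parameter = Expression.Parameter(typeof(T), "s");
        var property = Expression.Property(parameter, sortField);
        var keySelector = Expression.Lambda(property, parameter);

        // Dispatch to Queryable.OrderBy / OrderByDescending with the property's runtime type.
        var methodName = direction == SortDirection.Ascending ? "OrderBy" : "OrderByDescending";
        var call = Expression.Call(
            typeof(Queryable),
            methodName,
            new[] { typeof(T), property.Type },
            query.Expression,
            Expression.Quote(keySelector));

        return query.Provider.CreateQuery<T>(call);
    }
}

The compiler error quoted above comes from the type parameter sitting on the class; extension methods require a non-generic static class, so the generic parameter moves onto the method itself.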

    Read the article

  • SL3/SL4 - Ado.Net Data Services Error during new DataServiceCollection<T>(queryResponse)

    - by Soulhuntre
    Hey all, I have two functions in a SL project (VS2010) that do almost exactly the same thing, yet one throws an error and the other does not. It seems to be related to the projections, but I am unsure about the best way to resolve. The function that works is... public void LoadAllChunksExpandAll(DataHelperReturnHandler handler, string orderby) { DataServiceCollection<CmsChunk> data = null; DataServiceQuery<CmsChunk> theQuery = _dataservice .CmsChunks .Expand("CmsItemState") .AddQueryOption("$orderby", orderby); theQuery.BeginExecute( delegate(IAsyncResult asyncResult) { _callback_dispatcher.BeginInvoke( () => { try { DataServiceQuery<CmsChunk> query = asyncResult.AsyncState as DataServiceQuery<CmsChunk>; if (query != null) { //create a tracked DataServiceCollection from the result of the asynchronous query. QueryOperationResponse<CmsChunk> queryResponse = query.EndExecute(asyncResult) as QueryOperationResponse<CmsChunk>; data = new DataServiceCollection<CmsChunk>(queryResponse); handler(data); } } catch { handler(data); } } ); }, theQuery ); } This compiles and runs as expected. A very, very similar function (shown below) fails... public void LoadAllPagesExpandAll(DataHelperReturnHandler handler, string orderby) { DataServiceCollection<CmsPage> data = null; DataServiceQuery<CmsPage> theQuery = _dataservice .CmsPages .Expand("CmsChildPages") .Expand("CmsParentPage") .Expand("CmsItemState") .AddQueryOption("$orderby", orderby); theQuery.BeginExecute( delegate(IAsyncResult asyncResult) { _callback_dispatcher.BeginInvoke( () => { try { DataServiceQuery<CmsPage> query = asyncResult.AsyncState as DataServiceQuery<CmsPage>; if (query != null) { //create a tracked DataServiceCollection from the result of the asynchronous query. QueryOperationResponse<CmsPage> queryResponse = query.EndExecute(asyncResult) as QueryOperationResponse<CmsPage>; data = new DataServiceCollection<CmsPage>(queryResponse); handler(data); } } catch { handler(data); } } ); }, theQuery ); } Clearly the issue is the Expand projections that involve a self referencing relationship (pages can contain other pages). This is under SL4 or SL3 using ADONETDataServices SL3 Update CTP3. I am open to any work around or pointers to goo information, a Google search for the error results in two hits, neither particularly helpful that I can decipher. The short error is "An item could not be added to the collection. When items in a DataServiceCollection are tracked by the DataServiceContext, new items cannot be added before items have been loaded into the collection." The full error is... System.Reflection.TargetInvocationException was caught Message=Exception has been thrown by the target of an invocation. 
StackTrace: at System.RuntimeMethodHandle.InvokeMethodFast(IRuntimeMethodInfo method, Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeType typeOwner) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture) at System.Reflection.MethodBase.Invoke(Object obj, Object[] parameters) at System.Data.Services.Client.ClientType.ClientProperty.SetValue(Object instance, Object value, String propertyName, Boolean allowAdd) at System.Data.Services.Client.AtomMaterializer.ApplyItemsToCollection(AtomEntry entry, ClientProperty property, IEnumerable items, Uri nextLink, ProjectionPlan continuationPlan) at System.Data.Services.Client.AtomMaterializer.ApplyFeedToCollection(AtomEntry entry, ClientProperty property, AtomFeed feed, Boolean includeLinks) at System.Data.Services.Client.AtomMaterializer.MaterializeResolvedEntry(AtomEntry entry, Boolean includeLinks) at System.Data.Services.Client.AtomMaterializer.Materialize(AtomEntry entry, Type expectedEntryType, Boolean includeLinks) at System.Data.Services.Client.AtomMaterializer.DirectMaterializePlan(AtomMaterializer materializer, AtomEntry entry, Type expectedEntryType) at System.Data.Services.Client.AtomMaterializerInvoker.DirectMaterializePlan(Object materializer, Object entry, Type expectedEntryType) at System.Data.Services.Client.ProjectionPlan.Run(AtomMaterializer materializer, AtomEntry entry, Type expectedType) at System.Data.Services.Client.AtomMaterializer.Read() at System.Data.Services.Client.MaterializeAtom.MoveNextInternal() at System.Data.Services.Client.MaterializeAtom.MoveNext() at System.Linq.Enumerable.d_b11.MoveNext() at System.Data.Services.Client.DataServiceCollection1.InternalLoadCollection(IEnumerable1 items) at System.Data.Services.Client.DataServiceCollection1.StartTracking(DataServiceContext context, IEnumerable1 items, String entitySet, Func2 entityChanged, Func2 collectionChanged) at System.Data.Services.Client.DataServiceCollection1..ctor(DataServiceContext context, IEnumerable1 items, TrackingMode trackingMode, String entitySetName, Func2 entityChangedCallback, Func2 collectionChangedCallback) at System.Data.Services.Client.DataServiceCollection1..ctor(IEnumerable1 items) at Phinli.Dashboard.Silverlight.Helpers.DataHelper.<>c__DisplayClass44.<>c__DisplayClass46.<LoadAllPagesExpandAll>b__43() InnerException: System.InvalidOperationException Message=An item could not be added to the collection. When items in a DataServiceCollection are tracked by the DataServiceContext, new items cannot be added before items have been loaded into the collection. StackTrace: at System.Data.Services.Client.DataServiceCollection1.InsertItem(Int32 index, T item) at System.Collections.ObjectModel.Collection`1.Add(T item) InnerException: Thanks for any help!
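Not a definitive fix, but a related detail that may help readers hitting the same exception: DataServiceCollection<T> also has a constructor that disables change tracking, which avoids the tracked-collection load path entirely when results are only being displayed. A minimal sketch under that assumption (the helper name is illustrative):

using System.Collections.Generic;
using System.Data.Services.Client;

static class DataServiceCollectionHelper // illustrative name
{
    // Builds a non-tracked collection from a completed query response,
    // e.g. the QueryOperationResponse<CmsPage> returned by EndExecute.
    public static DataServiceCollection<T> ToUntrackedCollection<T>(IEnumerable<T> results)
    {
        return new DataServiceCollection<T>(results, TrackingMode.None);
    }
}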

    Read the article

  • Issue 15: The Benefits of Oracle Exastack

    - by rituchhibber
         SOLUTIONS FOCUS The Benefits of Oracle Exastack Paul ThompsonDirector, Alliances and Solutions Partner ProgramsOracle EMEA Alliances & Channels RESOURCES -- Oracle PartnerNetwork (OPN) Oracle Exastack Program Oracle Exastack Ready Oracle Exastack Optimized Oracle Exastack Labs and Enablement Resources Oracle Exastack Labs Video Tour SUBSCRIBE FEEDBACK PREVIOUS ISSUES Exastack is a revolutionary programme supporting Oracle independent software vendor partners across the entire Oracle technology stack. Oracle's core strategy is to engineer software and hardware together, and our ISV strategy is the same. At Oracle we design engineered systems that are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. Oracle innovates and optimises performance at every layer of the stack to simplify business operations, drive down costs and accelerate business innovation. Our engineered systems are optimised to achieve enterprise performance levels that are unmatched in the industry. Faster time to production is achieved by implementing pre-engineered and pre-assembled hardware and software bundles. Our strategy of delivering a single-vendor stack simplifies and reduces costs associated with purchasing, deploying, and supporting IT environments for our customers and partners. In parallel to this core engineered systems strategy, the Oracle Exastack Program enables our Oracle ISV partners to leverage a scalable, integrated infrastructure that delivers their applications tuned, tested and optimised for high-performance. Specifically, the Oracle Exastack Program helps ISVs run their solutions on the Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4 - integrated systems products in which the software and hardware are engineered to work together. These products provide OPN members with a lower cost and high performance infrastructure for database and application workloads across on-premise and cloud based environments. Ready and Optimized Oracle Partners can now leverage our new Oracle Exastack Program to become Oracle Exastack Ready and Oracle Exastack Optimized. Partners can achieve Oracle Exastack Ready status through their support for Oracle Solaris, Oracle Linux, Oracle VM, Oracle Database, Oracle WebLogic Server, Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. By doing this, partners can demonstrate to their customers that their applications are available on the latest major releases of these products. The Oracle Exastack Ready programme helps customers readily differentiate Oracle partners from lesser software developers, and identify applications that support Oracle engineered systems. Achieving Oracle Exastack Optimized status demonstrates that an OPN member has proven itself against goals for performance and scalability on Oracle integrated systems. This status enables end customers to readily identify Oracle partners that have tested and tuned their solutions for optimum performance on an Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. These ISVs can display the Oracle Exadata Optimized, Oracle Exalogic Optimized or Oracle SPARC SuperCluster Optimized logos on websites and on all their collateral to show that they have tested and tuned their application for optimum performance. 
Deliver higher value to customers Oracle's investment in engineered systems enables ISV partners to deliver higher value to customer business processes. New innovations are enabled through extreme performance unachievable through traditional best-of-breed multi-vendor server/software approaches. Core product requirements can be launched faster, enabling ISVs to focus research and development investment on core competencies in order to bring value to market as quickly as possible. Through Exastack, partners no longer have to worry about the underlying product stack, which allows greater focus on the development of intellectual property above the stack. Partners are not burdened by platform issues and can concentrate simply on furthering their applications. The advantage to end customers is that partners can focus all efforts on business functionality, rather than bullet-proofing underlying technologies, and so will inevitably deliver application updates faster. Exastack provides ISVs with a number of flexible deployment options, such as on-premise or Cloud, while maintaining one single code base for applications regardless of customer deployment preference. Customers buying their solutions from Exastack ISVs can therefore be confident in deploying on their own networks, on private clouds or into a public cloud. The underlying platform will support all conceivable deployments, enabling a focus on the ISV's application itself that wouldn't be possible with other vendor partners. It stands to reason that Exastack accelerates time to value as well as lowering implementation costs all round. There is a big competitive advantage in partners being able to offer customers an optimised, pre-configured solution rather than an assortment of components and a suggested fit. Once a customer has decided to buy an Oracle Exastack Ready or Optimized partner solution, it will be up and running without any need for the customer to conduct testing of its own. Operational costs and complexity are also reduced, thanks to streamlined customer support through standardised configurations and pro-active monitoring. 'Engineered to Work Together' is a significant statement of Oracle strategy. It guarantees smoother deployment of a single vendor solution, clear ownership with no finger-pointing and the peace of mind of the Oracle Support Centre underpinning the entire product stack. Next steps Every OPN member with packaged applications must seriously consider taking steps to become Exastack Ready, or Exastack Optimized at the first opportunity. That first step down the track is to talk to an expert on the OPN Portal, at the Oracle Partner Business Center or to discuss the next steps with the closest Oracle account manager. Oracle Exastack lab environments and other technical enablement resources are available for OPN members wishing to further their knowledge of Oracle Exastack and qualify their applications for Oracle Exastack Optimized. New Boot Camps and Guided Learning Paths (GLPs), tailored specifically for ISVs, are available for Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle Linux, Oracle Solaris, Oracle Database, and Oracle WebLogic Server. More information about these GLPs and Boot Camps (including delivery dates and locations) are posted on the OPN Competency Center and corresponding OPN Knowledge Zones. Learn more about Oracle Exastack labs and ISV specific enablement resources. 
"Oracle Specialized partners are of course front-and-centre, with potential customers clearly directed to those partners and to Exadata Ready partners as a matter of priority." --More OpenWorld 2011 highlights for Oracle partners and customers Oracle Application Testing Suite 9.3 application testing solution for Web, SOA and Oracle Applications Oracle Application Express Release 4.1 improving the development of database-centric Web 2.0 applications and reports Oracle Unified Directory 11g helping customers manage the critical identity information that drives their business applications Oracle SOA Suite for healthcare integration Oracle Enterprise Pack for Eclipse 11g demonstrating continued commitment to the developer and open source communities Oracle Coherence 3.7.1, the latest release of the industry's leading distributed in-memory data grid Oracle Process Accelerators helping to simplify and accelerate time-to-value for customers' business process management initiatives Oracle's JD Edwards EnterpriseOne on the iPad meeting the increasingly mobile demands of today's workforces Oracle CRM On Demand Release 19 Innovation Pack introducing industry-leading hosted call centre and enterprise-marketing capabilities designed to drive further revenue and productivity while reducing costs and improving the customer experience Oracle's Primavera Portfolio Management 9 for businesses delivering on project portfolio goals with increased versatility, transparency and accuracy Oracle's PeopleSoft Human Capital Management (HCM) 9.1 On Demand Standard Edition helping customers manage their long-term investment in enterprise-wide business applications New versions of Oracle FLEXCUBE Universal Banking and Oracle FLEXCUBE Investor Servicing for Financial Institutions, as well as Oracle Financial Services Enterprise Case Management, Oracle Financial Services Pricing Management, Oracle Financial Management Analytics and Oracle Tax Analytics Oracle Utilities Network Management System 1.11 offering new modelling and analysis features to improve distribution-grid management for electric utilities Oracle Communications Network Charging and Control 4.4 helping communications service providers (CSPs) offer their customers more flexible charging options Plus many, many more technology announcements, enhancements, momentum news and community updates -- Oracle OpenWorld 2012 A date has already been set for Oracle OpenWorld 2012. Held once again in San Francisco, exhibitors, partners, customers and Oracle people will gather from 30 September until 4 November to meet, network and learn together with the rest of the global Oracle community. Register now for Oracle OpenWorld 2012 and save $$$! We'll reward your early planning for Oracle OpenWorld 2012 with reduced rates. Super Saver deals are now available! -- Back to the welcome page

    Read the article

  • SQL Server 2008 to SQL Server 2008

    - by Sakhawat Ali
    I have an MDF and LDF file from SQL Server 2005. I attached them to SQL Server 2008 and made some changes to the data. Now, when I try to attach them back to SQL Server 2005 Express Edition, it gives a version error: The database 'E:\DB\JOBPERS.MDF' cannot be opened because it is version 655. This server supports version 612 and earlier. A downgrade path is not supported. Could not open new database 'E:\DB\JOBPERS.MDF'. CREATE DATABASE is aborted. An attempt to attach an auto-named database for file E:\DB\Jobpers.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share.

    Read the article

  • Passing multiple simple POST Values to ASP.NET Web API

    - by Rick Strahl
    A few weeks backs I posted a blog post  about what does and doesn't work with ASP.NET Web API when it comes to POSTing data to a Web API controller. One of the features that doesn't work out of the box - somewhat unexpectedly -  is the ability to map POST form variables to simple parameters of a Web API method. For example imagine you have this form and you want to post this data to a Web API end point like this via AJAX: <form> Name: <input type="name" name="name" value="Rick" /> Value: <input type="value" name="value" value="12" /> Entered: <input type="entered" name="entered" value="12/01/2011" /> <input type="button" id="btnSend" value="Send" /> </form> <script type="text/javascript"> $("#btnSend").click( function() { $.post("samples/PostMultipleSimpleValues?action=kazam", $("form").serialize(), function (result) { alert(result); }); }); </script> or you might do this more explicitly by creating a simple client map and specifying the POST values directly by hand:$.post("samples/PostMultipleSimpleValues?action=kazam", { name: "Rick", value: 1, entered: "12/01/2012" }, $("form").serialize(), function (result) { alert(result); }); On the wire this generates a simple POST request with Url Encoded values in the content:POST /AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam HTTP/1.1 Host: localhost User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1 Accept: application/json Connection: keep-alive Content-Type: application/x-www-form-urlencoded; charset=UTF-8 X-Requested-With: XMLHttpRequest Referer: http://localhost/AspNetWebApi/FormPostTest.html Content-Length: 41 Pragma: no-cache Cache-Control: no-cachename=Rick&value=12&entered=12%2F10%2F2011 Seems simple enough, right? We are basically posting 3 form variables and 1 query string value to the server. Unfortunately Web API can't handle request out of the box. If I create a method like this:[HttpPost] public string PostMultipleSimpleValues(string name, int value, DateTime entered, string action = null) { return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}", name, value, entered, action); }You'll find that you get an HTTP 404 error and { "Message": "No HTTP resource was found that matches the request URI…"} Yes, it's possible to pass multiple POST parameters of course, but Web API expects you to use Model Binding for this - mapping the post parameters to a strongly typed .NET object, not to single parameters. Alternately you can also accept a FormDataCollection parameter on your API method to get a name value collection of all POSTed values. If you're using JSON only, using the dynamic JObject/JValue objects might also work. ModelBinding is fine in many use cases, but can quickly become overkill if you only need to pass a couple of simple parameters to many methods. Especially in applications with many, many AJAX callbacks the 'parameter mapping type' per method signature can lead to serious class pollution in a project very quickly. Simple POST variables are also commonly used in AJAX applications to pass data to the server, even in many complex public APIs. So this is not an uncommon use case, and - maybe more so a behavior that I would have expected Web API to support natively. The question "Why aren't my POST parameters mapping to Web API method parameters" is already a frequent one… So this is something that I think is fairly important, but unfortunately missing in the base Web API installation. 
    Creating a Custom Parameter Binder

    Luckily, Web API is greatly extensible and there's a way to create a custom parameter binding to provide this functionality. Although this solution took me a long while to find, and then only with the help of some folks at Microsoft (thanks Hong Mei!), it's not difficult to hook up in your own projects. It requires one small class and a GlobalConfiguration hookup. Web API parameter bindings allow you to intercept the processing of individual parameters - they deal with mapping parameters to the signature as well as converting the parameters to the actual values that are returned. Here's the implementation of the SimplePostVariableParameterBinding class:

    public class SimplePostVariableParameterBinding : HttpParameterBinding
    {
        private const string MultipleBodyParameters = "MultipleBodyParameters";

        public SimplePostVariableParameterBinding(HttpParameterDescriptor descriptor)
            : base(descriptor)
        {
        }

        /// <summary>
        /// Check for simple binding parameters in POST data. Bind POST
        /// data as well as query string data
        /// </summary>
        public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider,
                                                 HttpActionContext actionContext,
                                                 CancellationToken cancellationToken)
        {
            // Body can only be read once, so read and cache it
            NameValueCollection col = TryReadBody(actionContext.Request);

            string stringValue = null;
            if (col != null)
                stringValue = col[Descriptor.ParameterName];

            // try reading query string if we have no POST/PUT match
            if (stringValue == null)
            {
                var query = actionContext.Request.GetQueryNameValuePairs();
                if (query != null)
                {
                    var matches = query.Where(kv => kv.Key.ToLower() == Descriptor.ParameterName.ToLower());
                    if (matches.Count() > 0)
                        stringValue = matches.First().Value;
                }
            }

            object value = StringToType(stringValue);

            // Set the binding result here
            SetValue(actionContext, value);

            // now, we can return a completed task with no result
            TaskCompletionSource<AsyncVoid> tcs = new TaskCompletionSource<AsyncVoid>();
            tcs.SetResult(default(AsyncVoid));
            return tcs.Task;
        }

        private object StringToType(string stringValue)
        {
            object value = null;

            if (stringValue == null)
                value = null;
            else if (Descriptor.ParameterType == typeof(string))
                value = stringValue;
            else if (Descriptor.ParameterType == typeof(int))
                value = int.Parse(stringValue, CultureInfo.CurrentCulture);
            else if (Descriptor.ParameterType == typeof(Int32))
                value = Int32.Parse(stringValue, CultureInfo.CurrentCulture);
            else if (Descriptor.ParameterType == typeof(Int64))
                value = Int64.Parse(stringValue, CultureInfo.CurrentCulture);
            else if (Descriptor.ParameterType == typeof(decimal))
                value = decimal.Parse(stringValue, CultureInfo.CurrentCulture);
            else if (Descriptor.ParameterType == typeof(double))
                value = double.Parse(stringValue, CultureInfo.CurrentCulture);
            else if (Descriptor.ParameterType == typeof(DateTime))
                value = DateTime.Parse(stringValue, CultureInfo.CurrentCulture);
            else if (Descriptor.ParameterType == typeof(bool))
            {
                value = false;
                if (stringValue == "true" || stringValue == "on" || stringValue == "1")
                    value = true;
            }
            else
                value = stringValue;

            return value;
        }

        /// <summary>
        /// Read and cache the request body
        /// </summary>
        /// <param name="request"></param>
        /// <returns></returns>
        private NameValueCollection TryReadBody(HttpRequestMessage request)
        {
            object result = null;

            // try to read out of cache first
            if (!request.Properties.TryGetValue(MultipleBodyParameters, out result))
            {
                // parsing the string like firstname=Hongmei&lastname=Ge
                result = request.Content.ReadAsFormDataAsync().Result;
                request.Properties.Add(MultipleBodyParameters, result);
            }

            return result as NameValueCollection;
        }

        private struct AsyncVoid
        {
        }
    }

    The ExecuteBindingAsync method is fired for each parameter that is mapped and sent for conversion. This custom binding fires only if the incoming parameter is a simple type (defined later when I hook up the binding), so it never fires on complex types or if the first type is not a simple type. For the first parameter of a request, the binding reads the request body into a NameValueCollection and caches it in the request.Properties collection. The request body can only be read once, so the first parameter request reads and caches it, and subsequent parameters use the cached POST value collection. Once the form collection is available, the value of the parameter is read and translated into the target type requested by the Descriptor. SetValue writes out the value to be mapped.

    Once you have the parameter binding in place, the binding has to be assigned. This is done along with all other Web API configuration tasks at application startup in global.asax's Application_Start:

    GlobalConfiguration.Configuration.ParameterBindingRules
        .Insert(0, (HttpParameterDescriptor descriptor) =>
        {
            var supportedMethods = descriptor.ActionDescriptor.SupportedHttpMethods;

            // Only apply this binder on POST and PUT operations
            if (supportedMethods.Contains(HttpMethod.Post) ||
                supportedMethods.Contains(HttpMethod.Put))
            {
                var supportedTypes = new Type[] { typeof(string), typeof(int),
                                                  typeof(decimal), typeof(double),
                                                  typeof(bool), typeof(DateTime) };

                if (supportedTypes.Where(typ => typ == descriptor.ParameterType).Count() > 0)
                    return new SimplePostVariableParameterBinding(descriptor);
            }

            // let the default bindings do their work
            return null;
        });

    The ParameterBindingRules.Insert method takes a delegate that checks which type of requests it should handle. The logic here checks whether the request is a POST or PUT and whether the parameter type is a simple type that is supported. Web API calls this delegate once for each method signature it tries to map, and the delegate either returns null to indicate it isn't handling this parameter, or it returns a new parameter binding instance - in this case the SimplePostVariableParameterBinding. Once the parameter binding and this hookup code are in place, you can pass simple POST values to methods with simple parameters. The examples I showed above should now work in addition to the standard bindings.

    Summary

    Clearly this is not easy to discover. I spent quite a bit of time digging through the Web API source trying to figure this out on my own without much luck. It took Hong Mei at Microsoft to provide a base example as I asked around, so I can't take credit for this solution :-). But once you know where to look, Web API is brilliantly extensible, making it relatively easy to customize the parameter behavior. I'm very stoked that this got resolved - in the last two months I've had two customers with projects that decided not to use Web API in AJAX-heavy SPA applications because this POST variable mapping wasn't available. This might actually change their minds and get them to switch back and take advantage of the many great features in Web API. I too frequently use plain POST variables for communicating with server AJAX handlers, and while I could have worked around this (with an untyped JObject or the form collection, mostly), having proper POST-to-parameter mapping makes things much easier.
    I said this in my last post on POST data and will say it again here: I think POST-to-method-parameter mapping should have shipped in the box with Web API, because without knowing about this limitation the expectation is that simple POST variables map to parameters just like query string values do. I hope Microsoft considers including this type of functionality natively in the next version of Web API, or at least as a built-in HttpParameterBinding that can simply be added - especially since this binding doesn't affect existing bindings.

    Resources:
    SimplePostVariableParameterBinding source on GitHub
    Global.asax hookup source
    Mapping URL Encoded Post Values in ASP.NET Web API

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API, AJAX.

    Read the article

  • Problem with Informix JDBC, MONEY and decimal separator in string literals

    - by Michal Niklas
    I have a problem with a JDBC application that uses the MONEY data type. When I insert into a MONEY column:

    insert into _money_test (amt) values ('123.45')

    I get the exception:

    Character to numeric conversion error

    The same SQL works from a native Windows application using the ODBC driver. I live in Poland and have the Polish locale, and in my country a comma separates the decimal part of a number, so I tried:

    insert into _money_test (amt) values ('123,45')

    and it worked. I checked that in a PreparedStatement I must use the dot separator: 123.45. And of course I can use:

    insert into _money_test (amt) values (123.45)

    But some of the code is "general" - it imports data from a CSV file, and it seemed safe to put the number into a string literal. How can I force JDBC to use DBMONEY (or simply a dot) in literals? My workstation is WinXP. I have the ODBC and JDBC Informix clients in version 3.50 TC5/JC5, and I have set DBMONEY to just a dot: DBMONEY=.

    EDIT: Test code in Jython:

    import sys
    import traceback
    from java.sql import DriverManager
    from java.lang import Class

    Class.forName("com.informix.jdbc.IfxDriver")

    QUERY = "insert into _money_test (amt) values ('123.45')"

    def test_money(driver, db_url, usr, passwd):
        try:
            print("\n\n%s\n--------------" % (driver))
            db = DriverManager.getConnection(db_url, usr, passwd)
            c = db.createStatement()
            c.execute("delete from _money_test")
            c.execute(QUERY)
            rs = c.executeQuery("select amt from _money_test")
            while (rs.next()):
                print('[%s]' % (rs.getString(1)))
            rs.close()
            c.close()
            db.close()
        except:
            print("there were errors!")
            s = traceback.format_exc()
            sys.stderr.write("%s\n" % (s))

    print(QUERY)
    test_money("com.informix.jdbc.IfxDriver",
               'jdbc:informix-sqli://169.0.1.225:9088/test:informixserver=ol_225;DB_LOCALE=pl_PL.CP1250;CLIENT_LOCALE=pl_PL.CP1250;charSet=CP1250',
               'informix', 'passwd')
    test_money("sun.jdbc.odbc.JdbcOdbcDriver", 'jdbc:odbc:test', 'informix', 'passwd')

    Results when I run the money literal with a comma and then with a dot:

    C:\db_examples>jython ifx_jdbc_money.py
    insert into _money_test (amt) values ('123,45')

    com.informix.jdbc.IfxDriver
    --------------
    [123.45]

    sun.jdbc.odbc.JdbcOdbcDriver
    --------------
    there were errors!
    Traceback (most recent call last):
      File "ifx_jdbc_money.py", line 16, in test_money
        c.execute(QUERY)
    SQLException: java.sql.SQLException: [Informix][Informix ODBC Driver][Informix]Character to numeric conversion error

    C:\db_examples>jython ifx_jdbc_money.py
    insert into _money_test (amt) values ('123.45')

    com.informix.jdbc.IfxDriver
    --------------
    there were errors!
    Traceback (most recent call last):
      File "ifx_jdbc_money.py", line 16, in test_money
        c.execute(QUERY)
    SQLException: java.sql.SQLException: Character to numeric conversion error

    sun.jdbc.odbc.JdbcOdbcDriver
    --------------
    [123.45]
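    For reference, here is a minimal Java sketch (not part of the original question) of the parameterized-insert workaround the question alludes to: binding the amount through PreparedStatement.setBigDecimal instead of splicing the text into the SQL literal. The table, column, driver class, and connection URL are taken from the question; the CSV value and the comma-to-dot normalization are assumptions for illustration.

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class MoneyInsertSketch {
        public static void main(String[] args) throws Exception {
            // Driver registration, as in the question's Jython test script
            Class.forName("com.informix.jdbc.IfxDriver");

            String url = "jdbc:informix-sqli://169.0.1.225:9088/test:"
                       + "informixserver=ol_225;DB_LOCALE=pl_PL.CP1250;"
                       + "CLIENT_LOCALE=pl_PL.CP1250;charSet=CP1250";

            Connection con = DriverManager.getConnection(url, "informix", "passwd");
            try {
                PreparedStatement ps = con.prepareStatement(
                        "insert into _money_test (amt) values (?)");
                // Hypothetical value read from a CSV file as text; normalize the
                // decimal separator ourselves and bind the result as a BigDecimal.
                String raw = "123,45";
                BigDecimal amount = new BigDecimal(raw.replace(',', '.'));
                ps.setBigDecimal(1, amount);
                ps.executeUpdate();
                ps.close();
            } finally {
                con.close();
            }
        }
    }

    Since the value travels as a typed parameter rather than as text inside the statement, the DBMONEY/decimal-separator question should not arise for the insert itself.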

    Read the article

  • SQL Server Express 2008 R2 installation error on Windows 7

    - by Shai Sherman
    Hello, I created an install script that installs SQL Server 2008 R2 on Windows XP SP3, Windows Vista, and Windows 7. One of the commands in the installation performs a silent installation of SQL Server 2008 R2. When I install it on Windows XP everything works just fine, but when I try to install it on Windows 7 I get an error. What am I doing wrong?

    Here is the command line that I use:

    "Setup.exe /ConfigurationFile=Mysetup.ini"

    Mysetup.ini file:

    -------------------------------------Start of ini file ---------------------------------
    ;SQL SERVER 2008 R2 Configuration File
    ;Version 1.0, 5 May 2010
    ;
    [SQLSERVER2008]

    ; Specify the Instance ID for the SQL Server features you have specified. SQL Server directory structure, registry structure, and service names will reflect the instance ID of the SQL Server instance.
    INSTANCEID="MSSQLSERVER"

    ; Specifies a Setup work flow, like INSTALL, UNINSTALL, or UPGRADE. This is a required parameter.
    ACTION="Install"

    ; Specifies features to install, uninstall, or upgrade. The list of top-level features include SQL, AS, RS, IS, and Tools. The SQL feature will install the database engine, replication, and full-text. The Tools feature will install Management Tools, Books online, Business Intelligence Development Studio, and other shared components.
    FEATURES=SQLENGINE

    ; Displays the command line parameters usage
    HELP="False"

    ; Specifies that the detailed Setup log should be piped to the console.
    INDICATEPROGRESS="False"

    ; Setup will not display any user interface.
    QUIET="False"

    ; Setup will display progress only without any user interaction.
    QUIETSIMPLE="True"

    ; Specifies that Setup should install into WOW64. This command line argument is not supported on an IA64 or a 32-bit system.
    ;X86="False"

    ; Specifies the path to the installation media folder where setup.exe is located.
    ;MEDIASOURCE="z:\"

    ; Detailed help for command line argument ENU has not been defined yet.
    ENU="True"

    ; Parameter that controls the user interface behavior. Valid values are Normal for the full UI, and AutoAdvance for a simplied UI.
    ; UIMODE="Normal"

    ; Specify if errors can be reported to Microsoft to improve future SQL Server releases. Specify 1 or True to enable and 0 or False to disable this feature.
    ERRORREPORTING="False"

    ; Specify the root installation directory for native shared components.
    ;INSTALLSHAREDDIR="D:\Program Files\Microsoft SQL Server"

    ; Specify the root installation directory for the WOW64 shared components.
    ;INSTALLSHAREDWOWDIR="D:\Program Files (x86)\Microsoft SQL Server"

    ; Specify the installation directory.
    ;INSTANCEDIR="D:\Program Files\Microsoft SQL Server"

    ; Specify that SQL Server feature usage data can be collected and sent to Microsoft. Specify 1 or True to enable and 0 or False to disable this feature.
    SQMREPORTING="False"

    ; Specify a default or named instance. MSSQLSERVER is the default instance for non-Express editions and SQLExpress for Express editions. This parameter is required when installing the SQL Server Database Engine (SQL), Analysis Services (AS), or Reporting Services (RS).
    INSTANCENAME="SQLEXPRESS"
    SECURITYMODE=SQL
    SAPWD=SystemAdmin

    ; Agent account name
    AGTSVCACCOUNT="NT AUTHORITY\NETWORK SERVICE"

    ; Auto-start service after installation.
    AGTSVCSTARTUPTYPE="Manual"

    ; Startup type for Integration Services.
    ;ISSVCSTARTUPTYPE="Automatic"

    ; Account for Integration Services: Domain\User or system account.
    ;ISSVCACCOUNT="NT AUTHORITY\NetworkService"

    ; Controls the service startup type setting after the service has been created.
    ;ASSVCSTARTUPTYPE="Automatic"

    ; The collation to be used by Analysis Services.
    ;ASCOLLATION="Latin1_General_CI_AS"

    ; The location for the Analysis Services data files.
    ;ASDATADIR="Data"

    ; The location for the Analysis Services log files.
    ;ASLOGDIR="Log"

    ; The location for the Analysis Services backup files.
    ;ASBACKUPDIR="Backup"

    ; The location for the Analysis Services temporary files.
    ;ASTEMPDIR="Temp"

    ; The location for the Analysis Services configuration files.
    ;ASCONFIGDIR="Config"

    ; Specifies whether or not the MSOLAP provider is allowed to run in process.
    ;ASPROVIDERMSOLAP="1"

    ; A port number used to connect to the SharePoint Central Administration web application.
    ;FARMADMINPORT="0"

    ; Startup type for the SQL Server service.
    SQLSVCSTARTUPTYPE="Automatic"

    ; Level to enable FILESTREAM feature at (0, 1, 2 or 3).
    FILESTREAMLEVEL="0"

    ; Set to "1" to enable RANU for SQL Server Express.
    ENABLERANU="1"

    ; Specifies a Windows collation or an SQL collation to use for the Database Engine.
    SQLCOLLATION="SQL_Latin1_General_CP1_CI_AS"

    ; Account for SQL Server service: Domain\User or system account.
    SQLSVCACCOUNT="NT Authority\System"

    ; Default directory for the Database Engine user databases.
    ;SQLUSERDBDIR="K:\Microsoft SQL Server\MSSQL\Data"

    ; Default directory for the Database Engine user database logs.
    ;SQLUSERDBLOGDIR="L:\Microsoft SQL Server\MSSQL\Data\Logs"

    ; Directory for Database Engine TempDB files.
    ;SQLTEMPDBDIR="T:\Microsoft SQL Server\MSSQL\Data"

    ; Directory for the Database Engine TempDB log files.
    ;SQLTEMPDBLOGDIR="T:\Microsoft SQL Server\MSSQL\Data\Logs"

    ; Provision current user as a Database Engine system administrator for SQL Server 2008 R2 Express.
    ADDCURRENTUSERASSQLADMIN="True"

    ; Specify 0 to disable or 1 to enable the TCP/IP protocol.
    TCPENABLED="1"

    ; Specify 0 to disable or 1 to enable the Named Pipes protocol.
    NPENABLED="0"

    ; Startup type for Browser Service.
    BROWSERSVCSTARTUPTYPE="Automatic"

    ; Specifies how the startup mode of the report server NT service. When
    ; Manual - Service startup is manual mode (default).
    ; Automatic - Service startup is automatic mode.
    ; Disabled - Service is disabled
    ;RSSVCSTARTUPTYPE="Automatic"

    ; Specifies which mode report server is installed in.
    ; Default value: "FilesOnly"
    ;RSINSTALLMODE="FilesOnlyMode"

    ; Accept SQL Server 2008 R2 license terms
    IACCEPTSQLSERVERLICENSETERMS="TRUE"

    ;setup.exe /CONFIGURATIONFILE=Mysetup.ini /INDICATEPROGRESS
    --------------------------- End of ini file -------------------------------------

    And I get this error:

    2010-08-31 18:05:53 Slp: Error result: -2068119551
    2010-08-31 18:05:53 Slp: Result facility code: 1211
    2010-08-31 18:05:53 Slp: Result error code: 1
    2010-08-31 18:05:53 Slp: Sco: Attempting to create base registry key HKEY_LOCAL_MACHINE, machine
    2010-08-31 18:05:53 Slp: Sco: Attempting to open registry subkey
    2010-08-31 18:05:53 Slp: Sco: Attempting to open registry subkey Software\Microsoft\PCHealth\ErrorReporting\DW\Installed
    2010-08-31 18:05:53 Slp: Sco: Attempting to get registry value DW0200
    2010-08-31 18:05:53 Slp: Submitted 1 of 1 failures to the Watson data repository

    What is the meaning of this, and what do I need to do to fix the problem? Here is the Summary file:

    Overall summary:
      Final result: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup.
      Exit code (Decimal): -2068119551
      Exit facility code: 1211
      Exit error code: 1
      Exit message: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup.
      Start time: 2010-08-31 18:03:44
      End time: 2010-08-31 18:05:51
      Requested action: Install
      Log with failure: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\Detail.txt
      Exception help link: http%3a%2f%2fgo.microsoft.com%2ffwlink%3fLinkId%3d20476%26ProdName%3dMicrosoft%2bSQL%2bServer%26EvtSrc%3dsetup.rll%26EvtID%3d50000%26ProdVer%3d10.50.1600.1%26EvtType%3d0x6121810A%400xC24842DB

    Machine Properties:
      Machine name: NVR
      Machine processor count: 2
      OS version: Windows 7
      OS service pack:
      OS region: United States
      OS language: English (United States)
      OS architecture: x86
      Process architecture: 32 Bit
      OS clustered: No

    Product features discovered:
      Product  Instance  Instance ID  Feature  Language  Edition  Version  Clustered

    Package properties:
      Description: SQL Server Database Services 2008 R2
      ProductName: SQL Server 2008 R2
      Type: RTM
      Version: 10
      SPLevel: 0
      Installation location: C:\Disk1\setupsql\x86\setup\
      Installation edition: EXPRESS

    User Input Settings:
      ACTION: Install
      ADDCURRENTUSERASSQLADMIN: True
      AGTSVCACCOUNT: NT AUTHORITY\NETWORK SERVICE
      AGTSVCPASSWORD: *
      AGTSVCSTARTUPTYPE: Disabled
      ASBACKUPDIR: Backup
      ASCOLLATION: Latin1_General_CI_AS
      ASCONFIGDIR: Config
      ASDATADIR: Data
      ASDOMAINGROUP:
      ASLOGDIR: Log
      ASPROVIDERMSOLAP: 1
      ASSVCACCOUNT:
      ASSVCPASSWORD: *
      ASSVCSTARTUPTYPE: Automatic
      ASSYSADMINACCOUNTS:
      ASTEMPDIR: Temp
      BROWSERSVCSTARTUPTYPE: Automatic
      CONFIGURATIONFILE: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\ConfigurationFile.ini
      CUSOURCE:
      ENABLERANU: True
      ENU: True
      ERRORREPORTING: False
      FARMACCOUNT:
      FARMADMINPORT: 0
      FARMPASSWORD: *
      FEATURES: SQLENGINE
      FILESTREAMLEVEL: 0
      FILESTREAMSHARENAME:
      FTSVCACCOUNT:
      FTSVCPASSWORD: *
      HELP: False
      IACCEPTSQLSERVERLICENSETERMS: True
      INDICATEPROGRESS: False
      INSTALLSHAREDDIR: C:\Program Files\Microsoft SQL Server\
      INSTALLSHAREDWOWDIR: C:\Program Files\Microsoft SQL Server\
      INSTALLSQLDATADIR:
      INSTANCEDIR: C:\Program Files\Microsoft SQL Server\
      INSTANCEID: MSSQLSERVER
      INSTANCENAME: SQLEXPRESS
      ISSVCACCOUNT: NT AUTHORITY\NetworkService
      ISSVCPASSWORD: *
      ISSVCSTARTUPTYPE: Automatic
      NPENABLED: 0
      PASSPHRASE: *
      PCUSOURCE:
      PID: *
      QUIET: False
      QUIETSIMPLE: True
      ROLE: AllFeatures_WithDefaults
      RSINSTALLMODE: FilesOnlyMode
      RSSVCACCOUNT: NT AUTHORITY\NETWORK SERVICE
      RSSVCPASSWORD: *
      RSSVCSTARTUPTYPE: Automatic
      SAPWD: *
      SECURITYMODE: SQL
      SQLBACKUPDIR:
      SQLCOLLATION: SQL_Latin1_General_CP1_CI_AS
      SQLSVCACCOUNT: NT Authority\System
      SQLSVCPASSWORD: *
      SQLSVCSTARTUPTYPE: Automatic
      SQLSYSADMINACCOUNTS:
      SQLTEMPDBDIR:
      SQLTEMPDBLOGDIR:
      SQLUSERDBDIR:
      SQLUSERDBLOGDIR:
      SQMREPORTING: False
      TCPENABLED: 1
      UIMODE: AutoAdvance
      X86: False
      Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\ConfigurationFile.ini

    Detailed results:
      Feature: Database Engine Services
      Status: Failed: see logs for details
      MSI status: Passed
      Configuration status: Failed: see details below
      Configuration error code: 0x0A2FBD17@1211@1
      Configuration error description: The process cannot access the file because it is being used by another process.
      Configuration log: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\Detail.txt

    Rules with failures:
      Global rules:
      Scenario specific rules:
      Rules report file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\SystemConfigurationCheck_Report.htm

    What should I do and why does this problem occur?
    Thanks, Shai.

    Read the article
