Search Results

Search found 3163 results on 127 pages for 'schema'.


  • MySQL for Excel 1.1.3 has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.1.3, the latest addition to the MySQL Installer for Windows. MySQL for Excel is an application plug-in that enables data analysts to easily access and manipulate MySQL data within Microsoft Excel. It lets you work directly with a MySQL database from within Excel, so you can easily:
    - Import MySQL data into Excel
    - Export Excel data directly into MySQL, to a new or existing table
    - Edit MySQL data directly within Excel
    MySQL for Excel is installed using the MySQL Installer for Windows. The MySQL Installer comes in two versions:
    - Full (150 MB), which includes a complete set of MySQL products with their binaries included in the download
    - Web (1.5 MB, a network install), which pulls just MySQL for Excel over the web and installs it when run
    You can download MySQL Installer from our official Downloads page at http://dev.mysql.com/downloads/installer/. MySQL for Excel 1.1.3 introduces the following features:
    - Upon saving a Workbook containing Worksheets in Edit Mode, users are asked whether they want to exit Edit Mode on all Worksheets before the parent Workbook is saved, so the Worksheets are saved unprotected; otherwise the Worksheets remain protected, and users can unprotect them later by retrieving the passkeys from the application log after closing MySQL for Excel.
    - Added background coloring to the column names header row of an Import Data operation so it has the same look as the one in an Edit Data operation (i.e. a gray-ish background).
    - Connection passwords can be stored securely, just as MySQL Workbench does, and these secured passwords are shared with Workbench in the same way connections are.
    - Changed the way the MySQL for Excel ribbon toggle button works: instead of just showing or hiding the add-in, it actually opens and closes it.
    - Added a connection test before any operation against the database (schema creation, data import, append, export, or editing), so that if the connection fails the operation dialog is not shown and a friendlier error message appears instead.
    This release also contains the following bug fixes:
    - Added a check on every connection test for an expired password; if the password has expired, a dialog is now shown to the user to reset it. Bug #17354118 - DON'T HANDLE EXPIRED PASSWORDS
    - Added code to escape text values imported to an Excel worksheet that start with an equals sign, so Excel does not treat those values as formulas that will fail evaluation. This option is on by default and can be turned off by users who wish to import values to be treated as Excel formulas. Bug #17354102 - ERROR IMPORTING TEXT VALUES TO EXCEL STARTING WITH AN EQUALS SIGN
    - Added code to properly check the reason for a failing connection: if it is a bad password, the user gets a dialog to retry the connection with a different password until the connection succeeds, a connection error not related to the password is thrown, or the user cancels. If the failing connection is not password-related, an error message indicating the reason for the failure is shown. Bug #16239007 - CONNECTIONS TO MYSQL SERVICES NOT RUNNING DISPLAY A WRONG PASSWORD ERROR MESSAGE
    - Added a global options dialog, accessible from the Schema Selection and DB Object Selection panels, where the timeouts for the connection to the DB server and for query commands can be changed from their default values (15 seconds for the connection timeout and 30 seconds for the query timeout). MySQL Bug #68732, Bug #17191646 - QUERY TIMEOUT CANNOT BE ADJUSTED IN MYSQL FOR EXCEL
    - Changed the Varchar(65,535) data type shown in the Export Data data type combo box to Text, since the maximum row size is 65,535 bytes and any auto-detected column data type with a length greater than 4,000 must actually be set to Text for the table to be created successfully. MySQL Bug #69779, Bug #17191633 - EXPORT FAILS FOR EXCEL FILES CONTAINING > 4000 CHARACTERS OF TEXT PER CELL
    - Removed code that was replacing all spaces typed by the user in an overridden data type for a new column in an Export Data operation; also improved the data type detection code to flag as invalid any data type with empty parentheses, or where the contents inside the parentheses are not valid for that specific data type. Bug #17260260 - EXPORT DATA SET TYPE NOT WORKING WITH MEMBER VALUES CONTAINING SPACES
    - Added support for the Year data type with a length of 2 or 4, and a validation that valid values are integers between 1901-2155 (for 4-digit years) or between 0-99 (for 2-digit years). Bug #17259915 - EXPORT DATA YEAR DATA TYPE NOT RECOGNIZED IF DECLARED WITH A DISPLAY WIDTH
    - Fixed code for Export Data operations where users overrode the data type for columns by typing Text in the data type combo box, which is a valid data type but was not recognized as such. Bug #17259490 - EXPORT DATA TEXT DATA TYPE NOT RECOGNIZED AS A VALID DATA TYPE
    - Changed the registry location where the MySQL for Excel add-in is installed to HKEY_LOCAL_MACHINE instead of HKEY_CURRENT_USER, so the add-in is accessible to all users and not only the user who ran the installation. For this to work with Excel 2007, a hotfix may be required (see http://support.microsoft.com/kb/976477). MySQL Bug #68746, Bug #16675992 - EXCEL-ADD-IN IS ONLY INSTALLED FOR USER ACCOUNT THAT THE INSTALLATION RUNS UNDER
    - Added support for the Excel 2013 Single Document Interface: now that Excel 2013 creates one window per workbook, the Excel add-in maintains an independent custom task pane in each window. MySQL Bug #68792, Bug #17272087 - MYSQL FOR EXCEL SIDEBAR DOES NOT APPEAR IN EXCEL 2013 (WITH WORKAROUND)
    - Included the latest MySQL Utility with a code fix for the COM exception thrown when attempting to open Workbench from the Manage Connections window. Bug #17258966 - MYSQL WORKBENCH NOT OPENED BY CLICKING MANAGE CONNECTIONS HOTLABEL
    - Fixed code for Append Data operations that was not applying a calculated automatic mapping correctly when the source and target tables had a different number of columns, with some same-named columns lying at column indexes beyond the limit of the other source/target table. MySQL Bug #69220, Bug #17278349 - APPEND DOESN'T AUTOMATICALLY DETECT EXCEL COL HEADER WITH SAME NAME AS SQL FIELD
    - Fixed code for Edit Data operations that was escaping special characters twice (once during editing in Excel and again upon sending the query to the MySQL server). MySQL Bug #68669, Bug #17271693 - A BACKSLASH IS INSERTED BEFORE AN APOSTROPHE EDITING TABLE WITH MYSQL FOR EXCEL
    - Upgraded the MySQL Utility to the latest version, which encapsulates dialog base classes and introduces more classes to handle Workbench connections, and removed these from the Excel project. Bug #16500331 - CAN'T DELETE CONNECTIONS CREATED WITHIN ADDIN
    You can access the MySQL for Excel documentation at http://dev.mysql.com/doc/refman/5.6/en/mysql-for-excel.html. You can find our team's blog at http://blogs.oracle.com/MySQLOnWindows, and you can post questions on our MySQL for Excel forum at http://forums.mysql.com/. Enjoy and thanks for the support!
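    As a side note on the Year data type fix above, here is a minimal SQL sketch (the table name is hypothetical) of the two display widths and the value ranges MySQL accepts for them:

    CREATE TABLE year_demo (
      y4 YEAR(4),  -- 4-digit years: 1901 through 2155
      y2 YEAR(2)   -- 2-digit years: 00 through 99 (deprecated as of MySQL 5.6)
    );
    INSERT INTO year_demo (y4, y2) VALUES (2013, 13);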

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, "Five Tips to Get Your Organisation Releasing Software Frequently", I looked at how teams can automate processes to speed up release frequency. In this post, I'm looking specifically at automating deployments using the SQL Compare command line.

    SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required: source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy. This is where SQL Compare's command line comes into its own.

    I've put together a PowerShell script that loops through a Servers table and pulls out the server and database names; these are then passed to sqlcompare.exe as target parameters. In the example, the source database is a scripts folder: a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.

    -- Create a DeploymentTargets database and a Servers table
    CREATE DATABASE DeploymentTargets
    GO
    USE DeploymentTargets
    GO
    CREATE TABLE [dbo].[Servers](
        [id] [int] IDENTITY(1,1) NOT NULL,
        [serverName] [nvarchar](50) NULL,
        [environment] [nvarchar](50) NULL,
        [databaseName] [nvarchar](50) NULL,
        CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC)
    )
    GO
    -- Now insert your target server and database details
    INSERT INTO dbo.Servers (serverName, environment, databaseName)
    VALUES (N'myserverinstance', N'myenvironment1', N'mydb1')
    INSERT INTO dbo.Servers (serverName, environment, databaseName)
    VALUES (N'myserverinstance', N'myenvironment2', N'mydb2')

    Here's the PowerShell script you can adapt for yourself as well.

    # We're holding the server names and database names that we want to deploy to in a database table.
    # We need to connect to that server to read these details.
    $serverName = ""
    $databaseName = "DeploymentTargets"
    $authentication = "Integrated Security=SSPI"
    #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication.

    # Path to the scripts folder we want to deploy to the databases
    $scriptsPath = "SimpleTalk"

    # Path to SQLCompare.exe
    $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe"

    # Create SQL connection string, and connection
    $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication"
    $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString);

    # Create a Dataset to hold the DataTable
    $dataSet = new-object "System.Data.DataSet" "ServerList"

    # Create a query
    $query = "SET NOCOUNT ON;"
    $query += "SELECT serverName, environment, databaseName "
    $query += "FROM dbo.Servers; "

    # Create a DataAdapter to populate the DataSet with the results
    $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection)
    $dataAdapter.Fill($dataSet) | Out-Null

    # Close the connection
    $ServerConnection.Close()

    # Populate the DataTable
    $dataTable = new-object "System.Data.DataTable" "Servers"
    $dataTable = $dataSet.Tables[0]

    # For every row in the DataTable
    $dataTable | FOREACH-OBJECT {
        "Server Name: $($_.serverName)"
        "Database Name: $($_.databaseName)"
        "Environment: $($_.environment)"

        # Compare the scripts folder to the database and synchronize the database to match.
        # NB. Have set SQL Compare to abort on medium level warnings.
        $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync") # Commented out the 'sync' parameter for safety
        write-host $arguments
        & $SQLComparePath $arguments
        "Exit Code: $LASTEXITCODE"

        # Some interesting variations:

        # Check that every database matches a folder.
        # For example this might be a pre-deployment step to validate everything is at the same baseline state,
        # or a post-deployment script to validate the deployment worked.
        # An exit code of 0 means the databases are identical.
        #
        # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

        # Generate a report of the difference between the folder and each database, and a SQL update script for each database.
        # For example use this after the above to generate upgrade scripts for each database.
        # Examine the warnings and the HTML diff report to understand how the script will change objects.
        #
        # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html", "/reportType:Interactive", "/showWarnings", "/include:Identical")
    }

    It's worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run it en masse against multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script. You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings; see the second commented-out example in the PowerShell above.

    There is a drawback to running a pre-generated deployment script: it assumes that a given database target hasn't drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe exit code 79), a drift report is output instead of executing the deployment script; see the first commented-out example in the PowerShell above.

    Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment. You might think about triggering backups prior to deployment - even better, automate the verification of the backup too. You can use SQL Compare's command line interface along with PowerShell to automate the multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility - responsibility to ensure that the necessary checks are made so deployments remain trouble-free. (The code sample supplied in this post automates the simple dynamic deployment case - if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions.)
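    One small variation worth noting: because the Servers table above carries an environment column, you can stage rollouts one environment at a time simply by filtering the target list that the PowerShell script reads. A sketch, reusing the table defined above (the environment value is a placeholder):

    -- Deploy to a single environment at a time by filtering the target list
    SELECT serverName, environment, databaseName
    FROM dbo.Servers
    WHERE environment = N'myenvironment1';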

    Read the article

  • Securing an ADF Application using OES11g: Part 1

    - by user12587121
    Future releases of the Oracle stack should allow ADF applications to be secured natively with Oracle Entitlements Server (OES). In a sequence of postings here I explore one way to achieve this with the current technology, namely OES 11.1.1.5 and ADF 11.1.1.6.

    ADF Security Basics

    The Application Development Framework (ADF) is Oracle's preferred technology for developing GUI-based Java applications. It can be used to develop a UI for Swing applications or, more typically in the Oracle stack, for Web and J2EE applications. ADF is based on and extends the Java Server Faces (JSF) technology. To get an idea, Oracle provides an online demo to showcase ADF components.

    ADF can be used to develop just the UI part of an application, where, for example, the data access layer is implemented using some custom Java beans or EJBs. However, ADF also has its own data access layer, ADF Business Components (ADF BC), which allows rapid integration of data from databases and web service interfaces into the ADF UI components. In this way ADF helps implement the MVC approach to building applications with UI and data components.

    The canonical tutorial for ADF is to open JDeveloper, define a connection to a database, drag and drop a table from the database view to a UI page, build and deploy. One has an application up and running very quickly, with the ability to quickly integrate changes to, for example, the DB schema. ADF allows web pages to be created graphically, and components like tables, forms, text fields, graphs and so on to be easily added to a page. On top of JSF, Oracle has added drag-and-drop tooling with JDeveloper and declarative binding of the UI to the data layer, be it a database, web service or Java beans. An important addition is the bounded task flow, which is a reusable set of pages and transitions. ADF adds some steps to the page lifecycle defined in JSF and adds extra widgets, including powerful visualizations. It is worth pointing out that the Oracle WebCenter product (portal, content management and so on) is based on and extends ADF.

    ADF Security

    ADF comes with its own security mechanism that is exposed by JDeveloper at development time, and in the WLS Console and Enterprise Manager (EM) at run time. The security elements that need to be addressed in an ADF application are: authentication, and authorization of access to web pages, task flows, components within the pages, and data being returned from the model layer.

    One typically relies on WLS to handle authentication, and because of this, users and groups will also be handled by WLS. Typically, in a dev environment, users and groups are stored in the WLS embedded LDAP server. One has a choice when enabling ADF security (Application->Secure->Configure ADF Security) about whether to turn on ADF authorization checking or not. In the case where authorization is enabled for ADF, one defines a set of roles in which we place users, and then we grant access to these roles for the different ADF elements (pages, task flows, or elements in a page).

    An important notion here is the difference between Enterprise Roles and Application Roles. An enterprise role is defined in terms of users and LDAP groups from the WLS identity store: "enterprise" in the sense that these are things available for use to all applications that use that store. The other kind of role is an Application Role, where a given application makes use of enterprise roles and users to build up a set of roles for its own use. These application roles will be available only to that application. The general idea here is that enterprise roles are relatively static (for example, an Employees group in the LDAP directory) while application roles are more dynamic, possibly depending on time, location, accessed resource and so on. One of the things that OES adds is that we can define these dynamic membership conditions in Role Mapping Policies.

    To make this concrete: at design time in JDeveloper, one assigns these rights, which puts them into a file called jazn-data.xml. When the ADF app is deployed to a WLS, this JAZN security data is pushed to the system-jazn-data.xml file of the WLS deployment for the policies and application roles, and to the WLS backing LDAP for the users and enterprise roles. Note the difference here: after deploying the application we will see the users and enterprise roles show up in the WLS LDAP server, but the policies and application roles are defined in the system-jazn-data.xml file. Consult the embedded WLS LDAP server to manage users and enterprise roles by going to the domain console and then Security Realms->myrealm->Users and Groups.

    For production environments (or, in future, to share this data with OES) one would then perform the operation of "reassociating" this security policy and application role data to a DB schema (or an LDAP). This is done in the EM console by reassociating the Security Provider. This blog posting has more explanations and references on this reassociation process. If ADF authentication and authorization are enabled, then the security policies for a deployed application can be managed in EM. Our goal, however, is to be able to manage security policies for the application via OES and its console.

    Security Requirements for an ADF Application

    With this quick tour of ADF security we can see that to secure an ADF application with OES we would expect to be able to take care of at least the following items:
    - Authentication, including a user and user-group store
    - Authorization for page access
    - Authorization for bounded task flow access (a bounded task flow has only one point of entry, so if we protect that entry point by calling to OES then all the pages in the flow are protected)
    - Authorization for viewing data coming from the data access layer
    In the next posting we will describe a sample ADF application and the required security policies.
    References
    - ADF Dev Guide: Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework: Enabling ADF Security in a Fusion Web Application
    - Oracle tutorial on securing a sample ADF application (appears to require ADF 11.1.2)

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:
    · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)
    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel, rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:
    · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion
    · Crash Safe Slaves and Binlog
    · Optimized Row Based Replication
    · Replication Event Checksums
    · Time Delayed Replication
    These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article. Back to the benchmark - details are as follows.

    Environment

    The test environment consisted of two Linux servers: one running the replication master and one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration:
    - Hardware: Oracle Sun Fire X4170 M2 Server
    - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    - OS: 64-bit Oracle Enterprise Linux 6.1
    - Memory: 48 GB

    Test Procedure

    Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:

    CREATE TABLE `sbtest` (
      `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
      `k` int(10) unsigned NOT NULL DEFAULT '0',
      `c` char(120) NOT NULL DEFAULT '',
      `pad` char(60) NOT NULL DEFAULT '',
      PRIMARY KEY (`id`),
      KEY `k` (`k`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog was noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master.
    Each sysbench client executed 10,000 "update key" statements:

    UPDATE sbtest SET k=k+1 WHERE id = <random row>

    In total, this generated 100,000 update statements to later replicate during the test itself.

    Test Methodology: The number of slave workers to test with was configured using:

    SET GLOBAL slave_parallel_workers=<workers>

    Then the slave IO thread was started, and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started, and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS).

    Test Reset: The test-reset cycle was implemented as follows:
    · the slave was stopped
    · the slave data directory was replaced with the previous backup
    · the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog
    The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for 0-24 workers, and the QPS metric was calculated and averaged for each worker count.

    MySQL Configuration

    The relevant configuration settings used for MySQL are as follows:

    binlog-format=STATEMENT
    relay-log-info-repository=TABLE
    master-info-repository=TABLE

    As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is:
    0 worker threads:
    - current (i.e. single-threaded) sequential mode
    - 1 x IO thread and 1 x SQL thread
    - SQL thread both reads and executes the events
    1 worker thread:
    - sequential mode
    - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread
    - coordinator reads the event and hands it to the worker, who executes it
    2+ worker threads:
    - parallel execution
    - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads
    - coordinator reads events and hands them to the workers, who execute them

    Results

    Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 schemas. This result is compared to the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads).
    Figure 1: 5x Higher Performance with Multi-Threaded Slaves
    The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post.
    Figure 2: Detailed Results
    As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results:
    · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution.
    · As expected, having more workers than schemas adds no visible benefit.
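    As a rough sketch of one benchmark iteration described above (the binlog file name and position are placeholders), the slave-side commands look like this:

    -- Configure the number of worker threads for this iteration
    SET GLOBAL slave_parallel_workers = 10;

    -- Fill the relay log, then start the clock and apply the events
    START SLAVE IO_THREAD;
    START SLAVE SQL_THREAD;

    -- Block until the SQL thread has executed everything up to the noted
    -- master binlog position, then stop the clock
    SELECT MASTER_POS_WAIT('mysql-bin.000002', 107);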
    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance:

    relay-log-info-repository=TABLE
    master-info-repository=TABLE

    For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.

    Conclusion

    As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don't hesitate to comment on this or other replication blogs with feedback and questions.

    Appendix – Detailed Results

    Read the article

  • BI Applications overview

    - by sv744
    Welcome to the Oracle BI applications blog! This blog will talk about various features, the general roadmap, descriptions of functionality, and implementation steps related to Oracle BI applications. In this first post we start with an overview of the BI apps, and we will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback.

    The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, as well as front-line sales reps, each of whom have different needs. The applications integrate and transform data from a range of enterprise sources - including Siebel, Oracle, PeopleSoft, SAP, and others - into actionable intelligence for each business function and user role. This blog starts with the key benefits and characteristics of Oracle BI applications. In a series of subsequent blogs, each of these points will be explained in detail.

    Why BI apps?
    - Demonstrate the value of BI to a business user: show reports / dashboards / models that can answer their business questions as part of the sales cycle
    - Demonstrate the technical feasibility of a BI project, significantly lower risk, and improve success
    - Build vs. buy benefit: you don't have to start with a blank sheet of paper
    - Help consolidate disparate systems; data integration in M&A situations
    - Insulate BI consumers from changes in the OLTP
    - Present OLTP data and highlight issues of poor or missing data - and improve data quality and accuracy

    Prebuilt Integrations
    - BI apps support prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, SAP
    - Co-developed with inputs from functional experts in the BI and Applications teams
    - Out-of-the-box dimensional model to source model mappings
    - Multi-source and multi-instance support

    Rich Data Model
    BI apps have a very rich dimensional data model, built over 10 years, that incorporates best practices from a BI modeling perspective as well as reflecting source system complexities:
    - Conformed dimensional model across all business subject areas allows cross-functional reporting, e.g. customer / supplier 360
    - Over 360 fact tables across 7 product areas: CRM – 145, SCM – 47, Financials – 28, Procurement – 20, HCM – 27, Projects – 18, Campus Solutions – 21, PLM – 56
    - Supported by 300 physical dimensions
    - Support for extensive calendars: Gregorian, enterprise, and ledger based
    - Conformed data model and metrics for real-time vs. warehouse-based reporting
    - Multi-tenant enabled

    Extensive BI-related transformations
    BI apps ETL and data integration support the various transformations required for dimensional models and reporting requirements. All of these have been distilled into common patterns and abstracted logic which can be readily reused across different modules:
    - Slowly Changing Dimension support
    - Hierarchy flattening support: row / column hybrid hierarchy flattening, As Is vs. As Was hierarchy support
    - Currency conversion: support for 3 corporate currencies plus CRM, ledger and transaction currencies
    - UOM conversion
    - Internationalization / localization: dynamic data translations, code standardization (Domains)
    - Historical snapshots
    - Cycle and process lifecycle computations
    - Balance facts
    - Equalization of GL accounting chartfields/segments; standardized values for categorizing GL accounts
    - Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL
    - Materialization of data only available through costly and complex APIs, e.g. Fusion Payroll, EBS / Fusion Accruals
    - Complex event interpretation of source data, e.g.:
      o what constitutes a transfer
      o deriving supervisors via position hierarchy
      o deriving primary assignment in PSFT
      o categorizing and transposing payroll balances into specific metrics to support side-by-side comparison of measures such as Fixed Salary, Variable Salary, Tax, Bonus, and Overtime Payments
      o counting of events, e.g. converting events to fact counters so that the number of hires can easily be added up and compared alongside the total transfers and terminations
    - Multi-pass processing of multiple sources, e.g. headcount, salary, promotion, performance, to allow side-by-side comparison
    - Adding value to data to aid analysis through banding, additional domain classifications and groupings, to allow higher-level analytical reporting and data discovery
    - Calculation of complex measures, for example:
      o COGS, DSO, DPO, inventory turns, etc.
      o transfers within a hierarchy, or out of / into a hierarchy, relative to a viewpoint in the hierarchy

    Configurability and Extensibility support
    BI apps offer support for extensibility for various entities, either as automated extensibility or as part of the extension methodology:
    - Key Flexfield and Descriptive Flexfield support
    - Extensible attribute support (JDE)
    - Conformed Domains

    ETL Architecture
    BI apps offer a modular adapter architecture which allows support of multiple product lines in a single conformed model:
    - Multi-source, multi-technology
    - Orchestration: creates a load plan taking into account task dependencies and the customer's deployment, generating a plan from multiple complex ETL tasks
    - Plan optimization allowing parallel ETL tasks
    - Oracle: bitmap indexes and partition management
    - High availability support
    - Follow-the-sun support

    TCO
    BI apps support several utilities / capabilities that help with the overall total cost of ownership and ensure a rapid implementation:
    - Improved cost of ownership: lower cost to deploy
    - Ongoing support for new versions of the source application
    - Task-based setup flows
    - Data lineage
    - Functional setup performed in a Web UI by a functional person
    - Configuration
    - Test-to-Production support

    Security
    BI apps support both data and object security, enabling implementations to quickly configure the application per the reporting security needs:
    - Fine-grained object security at the report / dashboard and presentation catalog level
    - Data security integration with source systems
    - Extensible to support external data security rules

    Extensive Set of KPIs
    - Over 7000 base and derived metrics across all modules
    - Time series calculations (YoY, % growth, etc.)
    - Common currency and UOM reporting
    - Cross-subject-area KPIs (analyzing HR vs. GL data, drill from GL to AP/AR, etc.)

    Prebuilt reports and dashboards
    - 3000+ prebuilt reports supporting a large number of industries
    - Hundreds of role-based dashboards
    - Dynamic currency conversion at dashboard level

    Highly tuned Performance
    The BI apps have been tuned over the years for both very performant ETL and dashboard performance. The applications use best practices and advanced database features to enable the best possible performance:
    - Optimized data model for BI and analytic queries
    - Prebuilt aggregates, and the ability for customers to easily create their own aggregates on warehouse facts, allow for scalable end-user performance
    - Incremental extracts and loads
    - Incremental aggregate build
    - Automatic table index and statistics management
    - Parallel ETL loads
    - Source system deletes handling
    - Low-latency extract with GoldenGate
    - Micro ETL support
    - Bitmap indexes
    - Partitioning support
    - Modularized deployment: start small and add other subject areas seamlessly

    Source Specific Staging and Real Time Schema
    - Support for a source-specific operational reporting schema for EBS, PSFT, Siebel and JDE

    Application Integrations
    The BI apps also allow for integration with source systems, as well as with other applications that provide value-add through BI and enable BI consumption during operational decision making:
    - Embedded dashboards for Fusion, EBS and Siebel applications
    - Action Link support
    - Marketing Segmentation
    - Sales Predictor Dashboard
    - Territory Management

    External Integrations
    The BI apps data integration choices include support for loading external data:
    - External data enrichment choices: UNSPSC, Item class, etc.
    - Extensible spend classification

    Broad Deployment Choices
    - Exalytics support
    - Databases: Oracle, Exadata, Teradata, DB2, MSSQL
    - ETL tool of choice: ODI (coming), Informatica

    Extensible and Customizable
    - Extensible architecture and methodology to add custom and external content
    - Upgradable across releases

    Thanks for reading a long post, and be on the lookout for future posts. We look forward to your valuable feedback on these topics, as well as suggestions on what other topics you would like us to cover.

    Read the article

  • Ping bind errors in Operations Manager 2007

    - by Andrew Rice
    I am having an issue in SCOM 2007 R2. I am routinely getting the following errors:
    - Failed to ping or bind to the RID Master FSMO role holder. The default gateway is not pingable.
    - Failed to ping or bind to the Infrastructure Master FSMO role holder. The default gateway is not pingable.
    - Failed to ping or bind to the Domain Naming Master FSMO role holder. The default gateway is not pingable.
    - Failed to ping or bind to the Schema Master FSMO role holder. The default gateway is not pingable.
    The weird thing about these errors is that if I log in to the server in question, or if I log in to the SCOM server, I can ping everything just fine. To top it all off, the server in question is the role holder for 2 of the roles it is complaining about (RID and Infrastructure). Any thoughts as to what might be going on?

    Read the article

  • Severe errors in solr configuration: Error loading class 'solr.TrieDateField'

    - by James Lawruk
    Installed Solr on my Windows XP PC. Tomcat seems to be working fine, but I cannot get Solr to work. I noticed that TrieDateField is declared in a file called schema.xml in the Solr home directory. Any thoughts? The URL http://localhost:8080/solr/ returns:

    HTTP Status 500 - Severe errors in solr configuration.
    Check your log files for more detailed information on what may be wrong.
    If you want solr to continue after configuration errors, change:
    <abortOnConfigurationError>false</abortOnConfigurationError>
    in null
    -------------------------------------------------------------
    org.apache.solr.common.SolrException: Unknown fieldtype 'date' specified on field updated

    Here is an excerpt from the catalina log file:

    SEVERE: org.apache.solr.common.SolrException: Error loading class 'solr.TrieDateField'
      at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:273)
    Caused by: java.lang.ClassNotFoundException: solr.TrieDateField

    Read the article

  • how do I resolve "user isn't assigned to any management roles" error in Exchange 2010 EMC?

    - by TheoJones
    Newly installed Exchange 2010 box (technically, a partially installed box, as this error is preventing me from completing the install). When I launch EMC or the Management PowerShell, I get this error:

    VERBOSE: Connecting to myserver.mydomain.internal
    [myserver.mydomain.internal] Processing data from remote server failed with the following error message: The user "mydomain\administrator" isn't assigned to any management roles. For more information, see the about_Remote_Troubleshooting Help topic.
    Failed to connect to any Exchange Server in the current site.

    Thing is, the logged-in administrator account (confirmed using 'whoami') is a member of the following groups:
    - Administrators
    - Delegated Setup
    - Discovery Management
    - Domain Admins
    - Domain Users
    - Enterprise Admins
    - Exchange Organization Administrators
    - GPO Creator Owners
    - Organization Management
    - Schema Admins
    - Server Management
    Any ideas? How can I get past this?

    Read the article

  • Can't add client machine to windows server 2008 domain controller

    - by Patrick J Collins
    A bit of background before I dive into the gritty details: I have a single server running Windows Server 2003 where I host my ASP.net website and SQL Server + Reports. I've been creating ordinary Windows user accounts to authenticate my users, and I enabled integrated Windows authentication with impersonation. I've set up a bunch of user groups which correspond to certain roles (admin, power user, normal user, etc.) and I test membership to enable or disable certain features. Overall, I'm pretty happy with the solution: it was quick to set up and I don't have to worry about messing around storing passwords and whatnot.

    What I'm trying to do now is set up a new environment with 3 servers (Web, SQL, Reports) and I'd like these three servers to share common user accounts. I understand that I could add these three machines to a domain, which means installing Active Directory on one of the machines. Am I barking up the wrong tree here? Would you suggest an alternative configuration?

    Assuming that I stick with AD, I have a couple of questions regarding DNS. To be honest, I'd rather not fiddle around with the DNS settings because my ISP already has their own DNS server which works just fine. It would appear, however, that DNS and AD are intertwined. Firstly, if I am to create a new domain called mycompany.net, do I actually need to be the registered owner of that domain name and ensure the DNS entry points to the IP address of the machine hosting AD? Secondly, for the two other machines that I am trying to add to the domain, do I need to fiddle with their DNS settings? I've tried setting the preferred DNS server IP address to that of my newly installed AD, but no luck. At this point, I can't add the two other machines to the domain.

    Here are some diagnostics that I have run based on a few suggestions I read on forums (the output was in French; I've translated it to English below). I ran nltest, which seems to indicate that the client can discover the domain controller. When I run dcdiag, the call to DsGetDcName fails with error 1722, not really sure what that means. Any suggestions? Thanks!

    C:\Users\Administrator>nltest /dsgetdc:mycompany.net
    DC: \\REPORTS.mycompany.net
    Address: \\111.111.111.111
    Dom Guid: 3333a4ec-ca56-4f02-bb9e-76c29c6c3832
    Dom Name: mycompany.net
    Forest Name: mycompany.net
    Dc Site Name: Default-First-Site-Name
    Our Site Name: Default-First-Site-Name
    Flags: PDC GC DS LDAP KDC TIMESERV WRITABLE DNS_DC DNS_DOMAIN DNS_FOREST CLOSE_SITE FULL_SECRET
    The command completed successfully

    C:\Users\Administrator>dcdiag /s:mycompany.net /u:mycompany.net\pcollins /p:somepass
    Directory Server Diagnosis
    Performing initial setup:
    * Identified AD Forest. Done gathering initial info.
    Doing initial required tests
    Testing server: Default-First-Site-Name\REPORTS
      Starting test: Connectivity
        ......................... REPORTS passed test Connectivity
    Doing primary tests
    Testing server: Default-First-Site-Name\REPORTS
      Starting test: Advertising
        Fatal Error: DsGetDcName (REPORTS) call failed, error 1722
        The Locator could not find the server.
        ......................... REPORTS failed test Advertising
      Starting test: FrsEvent
        Could not query the File Replication Service event log on server REPORTS.mycompany.net. Error 0x6ba "The RPC server is unavailable."
        ......................... REPORTS failed test FrsEvent
      Starting test: DFSREvent
        Could not query the DFS Replication event log on server REPORTS.mycompany.net. Error 0x6ba "The RPC server is unavailable."
        ......................... REPORTS failed test DFSREvent
      Starting test: SysVolCheck
        [REPORTS] A net use or LsaPolicy operation failed with error 53, The network path was not found.
        ......................... REPORTS failed test SysVolCheck
      Starting test: KccEvent
        Could not query the Directory Service event log on server REPORTS.mycompany.net. Error 0x6ba "The RPC server is unavailable."
        ......................... REPORTS failed test KccEvent
      Starting test: KnowsOfRoleHolders
        ......................... REPORTS passed test KnowsOfRoleHolders
      Starting test: MachineAccount
        Could not open pipe with [REPORTS]: failed with error 53: The network path was not found.
        Could not get NetBIOS domain name
        Failed: could not test the HOST Service Principal Name (SPN)
        Failed: could not test the HOST Service Principal Name (SPN)
        ......................... REPORTS passed test MachineAccount
      Starting test: NCSecDesc
        ......................... REPORTS passed test NCSecDesc
      Starting test: NetLogons
        [REPORTS] A net use or LsaPolicy operation failed with error 53, The network path was not found.
        ......................... REPORTS failed test NetLogons
      Starting test: ObjectsReplicated
        ......................... REPORTS passed test ObjectsReplicated
      Starting test: Replications
        ......................... REPORTS passed test Replications
      Starting test: RidManager
        ......................... REPORTS passed test RidManager
      Starting test: Services
        Could not open Remote ipc to [REPORTS.mycompany.net]: error 0x35 "The network path was not found."
        ......................... REPORTS failed test Services
      Starting test: SystemLog
        Could not query the System event log on server REPORTS.mycompany.net. Error 0x6ba "The RPC server is unavailable."
        ......................... REPORTS failed test SystemLog
      Starting test: VerifyReferences
        ......................... REPORTS passed test VerifyReferences
    Running partition tests on ForestDnsZones
      Starting test: CheckSDRefDom
        ......................... ForestDnsZones passed test CheckSDRefDom
      Starting test: CrossRefValidation
        ......................... ForestDnsZones passed test CrossRefValidation
    Running partition tests on DomainDnsZones
      Starting test: CheckSDRefDom
        ......................... DomainDnsZones passed test CheckSDRefDom
      Starting test: CrossRefValidation
        ......................... DomainDnsZones passed test CrossRefValidation
    Running partition tests on Schema
      Starting test: CheckSDRefDom
        ......................... Schema passed test CheckSDRefDom
      Starting test: CrossRefValidation
        ......................... Schema passed test CrossRefValidation
    Running partition tests on Configuration
      Starting test: CheckSDRefDom
        ......................... Configuration passed test CheckSDRefDom
      Starting test: CrossRefValidation
        ......................... Configuration passed test CrossRefValidation
    Running partition tests on mycompany
      Starting test: CheckSDRefDom
        ......................... mycompany passed test CheckSDRefDom
      Starting test: CrossRefValidation
        ......................... mycompany passed test CrossRefValidation
    Running enterprise tests on mycompany.net
      Starting test: LocatorCheck
        Warning: DcGetDcName(GC_SERVER_REQUIRED) call failed, error 1722
        A Global Catalog Server could not be located - All GCs are down.
        Warning: DcGetDcName(PDC_REQUIRED) call failed, error 1722
        A Primary Domain Controller could not be located. The server holding the PDC role is down.
        Warning: DcGetDcName(TIME_SERVER) call failed, error 1722
        A Time Server could not be located. The server holding the PDC role is down.
        Warning: DcGetDcName(GOOD_TIME_SERVER_PREFERRED) call failed, error 1722
        A Good Time Server could not be located.
        Warning: DcGetDcName(KDC_REQUIRED) call failed, error 1722
        A KDC could not be located - All the KDCs are down.
        ......................... mycompany.net failed test LocatorCheck
      Starting test: Intersite
        ......................... mycompany.net passed test Intersite

    Update 1: I am under the distinct impression that the problem is caused by some security settings. I have read elsewhere that the client needs to be able to access the sysvol file share. I had to enable Client for Microsoft Windows and File and Printer Sharing, which were previously disabled. When I now run dcdiag, the Advertising test passes, which I suppose is forward progress. It currently chokes on the Services step (unable to open remote IPC):

      Starting test: Services
        Could not open Remote ipc to [REPORTS.locbus.net]: error 0x35 "The network path was not found."
        ......................... REPORTS failed test Services

    Update 2: I attach some more diagnostics.

    Netsetup.log (client):
    09/24/2009 13:27:09:773 -----------------------------------------------------------------
    09/24/2009 13:27:09:773 NetpValidateName: checking to see if 'WEB' is valid as type 1 name
    09/24/2009 13:27:12:773 NetpCheckNetBiosNameNotInUse for 'WEB' [MACHINE] returned 0x0
    09/24/2009 13:27:12:773 NetpValidateName: name 'WEB' is valid for type 1
    09/24/2009 13:27:12:805 -----------------------------------------------------------------
    09/24/2009 13:27:12:805 NetpValidateName: checking to see if 'WEB' is valid as type 5 name
    09/24/2009 13:27:12:805 NetpValidateName: name 'WEB' is valid for type 5
    09/24/2009 13:27:12:852 -----------------------------------------------------------------
    09/24/2009 13:27:12:852 NetpValidateName: checking to see if 'MYCOMPANY.NET' is valid as type 3 name
    09/24/2009 13:27:12:992 NetpCheckDomainNameIsValid [ Exists ] for 'MYCOMPANY.NET' returned 0x0
    09/24/2009 13:27:12:992 NetpValidateName: name 'MYCOMPANY.NET' is valid for type 3
    09/24/2009 13:27:21:320 -----------------------------------------------------------------
    09/24/2009 13:27:21:320 NetpDoDomainJoin
    09/24/2009 13:27:21:320 NetpMachineValidToJoin: 'WEB'
    09/24/2009 13:27:21:320 OS Version: 6.0
    09/24/2009 13:27:21:320 Build number: 6002
    09/24/2009 13:27:21:320 ServicePack: Service Pack 2
    09/24/2009 13:27:21:414 SKU: Windows Server® 2008 Standard
    09/24/2009 13:27:21:414 NetpDomainJoinLicensingCheck: ulLicenseValue=1, Status: 0x0
    09/24/2009 13:27:21:414 NetpGetLsaPrimaryDomain: status: 0x0
    09/24/2009 13:27:21:414 NetpMachineValidToJoin: status: 0x0
    09/24/2009 13:27:21:414 NetpJoinDomain
    09/24/2009 13:27:21:414 Machine: WEB
    09/24/2009 13:27:21:414 Domain: MYCOMPANY.NET
    09/24/2009 13:27:21:414 MachineAccountOU: (NULL)
    09/24/2009 13:27:21:414 Account: MYCOMPANY.NET\pcollins
    09/24/2009 13:27:21:414 Options: 0x25
    09/24/2009 13:27:21:414 NetpLoadParameters: loading registry parameters...
    09/24/2009 13:27:21:414 NetpLoadParameters: DNSNameResolutionRequired not found, defaulting to '1' 0x2
    09/24/2009 13:27:21:414 NetpLoadParameters: status: 0x2
    09/24/2009 13:27:21:414 NetpValidateName: checking to see if 'MYCOMPANY.NET' is valid as type 3 name
    09/24/2009 13:27:21:523 NetpCheckDomainNameIsValid [ Exists ] for 'MYCOMPANY.NET' returned 0x0
    09/24/2009 13:27:21:523 NetpValidateName: name 'MYCOMPANY.NET' is valid for type 3
    09/24/2009 13:27:21:523 NetpDsGetDcName: trying to find DC in domain 'MYCOMPANY.NET', flags: 0x40001010
    09/24/2009 13:27:22:039 NetpDsGetDcName: failed to find a DC having account 'WEB$': 0x525, last error is 0x79
    09/24/2009 13:27:22:039 NetpDsGetDcName: status of verifying DNS A record name resolution for 'KING.MYCOMPANY.NET': 0x0
    09/24/2009 13:27:22:039 NetpDsGetDcName: found DC '\\KING.MYCOMPANY.NET' in the specified domain
    09/24/2009 13:27:30:039 NetUseAdd to \\KING.MYCOMPANY.NET\IPC$ returned 53
    09/24/2009 13:27:30:039 NetpJoinDomain: status of connecting to dc '\\KING.MYCOMPANY.NET': 0x35
    09/24/2009 13:27:30:039 NetpDoDomainJoin: status: 0x35
    09/24/2009 13:27:30:148 -----------------------------------------------------------------

    ipconfig /all (on client):

    Windows IP Configuration
      Host Name . . . . . . . . . . . . : WEB
      Primary DNS Suffix. . . . . . . . :
      Node Type . . . . . . . . . . . . : Hybrid
      IP Routing Enabled. . . . . . . . : No
      WINS Proxy Enabled. . . . . . . . : No

    Ethernet adapter Local Area Connection:
      Connection-specific DNS Suffix. . :
      Description . . . . . . . . . . . : Intel 21140-Based PCI Fast Ethernet Adapter (Emulated)
      Physical Address. . . . . . . . . : **-15-5D-A1-17-**
      DHCP Enabled. . . . . . . . . . . : No
      Autoconfiguration Enabled . . . . : Yes
      IPv4 Address. . . . . . . . . . . : **.***.163.122 (Preferred)
      Subnet Mask . . . . . . . . . . . : 255.255.255.0
      Default Gateway . . . . . . . . . : **.***.163.2
      DNS Servers . . . . . . . . . . . : **.***.163.123
      NetBIOS over Tcpip. . . . . . . . : Enabled

    ipconfig /all (on server):

    Windows IP Configuration
      Host Name . . . . . . . . . . . . : KING
      Primary DNS Suffix. . . . . . . . : mycompany.net
      Node Type . . . . . . . . . . . . : Hybrid
      IP Routing Enabled. . . . . . . . : No
      WINS Proxy Enabled. . . . . . . . : No
      DNS Suffix Search List. . . . . . : locbus.net

    Ethernet adapter Local Area Connection:
      Connection-specific DNS Suffix. . :
      Description . . . . . . . . . . . : Intel 21140-Based PCI Fast Ethernet Adapter (Emulated)
      Physical Address. . . . . . . . . : **-15-5D-A1-1E-**
      DHCP Enabled. . . . . . . . . . . : No
      Autoconfiguration Enabled . . . . : Yes
      IPv4 Address. . . . . . . . . . . : **.***.163.123 (Preferred)
      Subnet Mask . . . . . . . . . . . : 255.255.255.0
      Default Gateway . . . . . . . . . : **.***.163.2
      DNS Servers . . . . . . . . . . . : 127.0.0.1
      NetBIOS over Tcpip. . . . . . . . : Enabled

    nslookup (on client):

    Server:  *******.***.com
    Address:  **.***.163.123

    Name:    mycompany.net
    Addresses:  ****:****:a37b::****:a37b
                **.****.163.123

    Read the article

  • Oracle 11gR2 exp does not export some tables

    - by Tilo Prütz
    I have an Oracle 11g (11.2.0.1) database running on Linux (x64). Within the database I have a schema with 33 tables. When I log in via sqlplus I can list all of them via SELECT OBJECT_NAME FROM USER_OBJECTS WHERE OBJECT_TYPE = 'TABLE'; but when I export the tablespace using exp ... BUFFER=65536 FULL=N COMPRESS=N CONSISTENT=Y TABLESPACES=... FILE=... it only exports 24 of the 33 tables. I have tried to export the missing tables via exp ... TABLES=<missing_table> ... but then I get an error: EXP-00011: NPSMIGRO2_CM.DEFAULT_USR_ATTR_VALUES does not exist. How can I find out what's wrong here? How can I export all the tables?
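    A likely suspect worth checking first (an inference on my part; nothing in the question confirms it): 11.2.0.1 introduced deferred segment creation, and the classic exp utility silently skips tables that have no segment yet, which typically means tables that are still empty. A minimal sketch, run as the schema owner, to confirm and work around it:

        -- Tables without a physical segment are invisible to exp
        SELECT table_name, tablespace_name, segment_created
          FROM user_tables
         ORDER BY table_name;

        -- Force segment creation for one affected table, then re-run exp
        ALTER TABLE default_usr_attr_values ALLOCATE EXTENT;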

    Read the article

  • What's the most efficient way to reclaim disk space after deleting lots of data from a database on Sybase ASE 15?

    - by Ernie Longmire
    As I understand it, based on some research but zero real-world experience with Sybase ASE, the only way to reclaim disk space once it's been allocated to a database is to export that database, create a new DB with the same schema, and reload all the exported data to the new database. Is this correct, or is there some other method? Then: assuming the above is correct and a full export-recreate-reload is required, what's the most efficient way to do that? Are there tools that will automate all or part of that process? I'm being told we would have to write separate bcp export and import commands for each and every object in the database, which if true sounds easily scriptable by someone who knows Sybase ASE well enough. (I don't.) This seems to me like a really basic housekeeping task, and it feels like I'm missing something obvious.
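    For what it's worth, the per-object bcp commands don't have to be written by hand; they can be generated from the catalog. A sketch under stated assumptions (database, server, and login names are placeholders; -c selects character mode), run through isql with its output fed to a shell:

        -- Emits one "bcp ... out" line per user table in the current database
        SELECT 'bcp mydb..' + name + ' out ' + name + '.bcp -c -Usa -SMYSERVER'
          FROM sysobjects
         WHERE type = 'U'
         ORDER BY name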

    Read the article

  • ODBC error state S1092: postgresql through ODBC

    - by mechcow
    While performing an upgrade, our in-house software started to report the following strange error. It is a C++ application talking to a remote PostgreSQL database, defined through ODBC: ODBC error state S1092, native error 0. [unixODBC][Driver Manager]Invalid attribute/option identifier. Both the client and the server are CentOS 5.4 Xen guests with the following RPMs installed: postgresql-libs-8.1.18-2.el5_4.1, postgresql-odbc-08.01.0200-3.1, postgresql-8.1.18-2.el5_4.1, postgresql-server-8.1.18-2.el5_4.1. It's possible the schema changed as part of the upgrade; could this explain the error message? What does this error message actually indicate, and do you know any likely causes of it?
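    One way to see which attribute/option the driver manager is actually rejecting (my suggestion, not something from the original report) is to enable unixODBC tracing and reproduce the failure; the trace logs every ODBC call with its arguments. In /etc/odbcinst.ini:

        [ODBC]
        Trace     = Yes
        TraceFile = /tmp/odbc.trace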

    Read the article

  • IIS 7.5 FTP IIS Manager Users Login Fail (530)

    - by Jim
    I'm trying to set up an FTP site on IIS 7.5 that allows IIS Manager Users to log in, following this guide: http://learn.iis.net/page.aspx/321/configure-ftp-with-iis-7-manager-authentication/. After setup, I cannot log in to the FTP site using an IIS Manager User account. The client error I got was 530 User cannot log in. Win32 error: Unspecified error. Error details: An error occured during the authentication process. I tried both with and without a virtual host. A Windows account logs in fine. The only strange thing I noticed was an access-denied error when setting up Read permission for Network Service on "%SystemDrive%\Windows\System32\inetsrv\config\schema". Any thoughts? Thanks!
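    That access-denied warning is a plausible root cause: IIS Manager authentication needs the FTP service's identity to read the configuration schema, so a grant that failed during setup would produce exactly this kind of 530. A hedged sketch of re-applying the grant from the guide (verify the account and path against your own setup):

        icacls "%SystemDrive%\Windows\System32\inetsrv\config" /grant "NT AUTHORITY\Network Service":(OI)(CI)R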

    Read the article

  • What proportion of DBA skills are platform-independent?

    - by Arkaaito
    Problem: we've grown to the point where we need a real DBA. Real DBAs are hard to find. Possible solution: I know someone with extensive experience (currently working on MSSQL) who is looking for a new DBA job. He might be willing to consider working for a MySQL shop (I haven't asked). Snag: I don't know to what extent MSSQL DBA skills map to MySQL DBA skills. I'm a developer, so I know enough about MySQL to develop apps which use it (including the basic performance-tuning, index selection, etc.), do schema design, and perform simple utility tasks (backup with mysqldump, scripting, etc.). I don't know anything about MSSQL. Nor do I actually know much about the boundaries of the DBA role. Can anyone with more experience - perhaps as a DBA - weigh in on this? Are there enough similarities between MSSQL and MySQL that it's worth asking a MSSQL DBA if he'd be interested in applying for a MySQL position?

    Read the article

  • Can't edit an account in MySQL Administrator on OS X

    - by Wavy Crab
    I'm running MySQL 5.5.20 on OS X 10.6.8. Using MySQL Administrator 1.2.12 (the latest) I am unable to edit any accounts. After successfully connecting to the database in MySQL Administrator, I go to the Account tab, expand a user, and it just says "Loading...", and stays on "Loading..." indefinitely. The symptoms are almost identical to another user's experience and to this MySQL bug from 2006, which has since been closed (fixed?). If I create a user and grant permissions to a schema, I get the error "Could not save changes to the use", which came up in 2005 as a bug (now closed). How do I edit MySQL accounts using MySQL Administrator?
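    While the GUI is misbehaving, accounts can be managed entirely from the mysql command-line client instead (a workaround suggestion on my part; user and schema names are placeholders):

        -- Inspect an existing account
        SHOW GRANTS FOR 'someuser'@'localhost';

        -- Grant privileges on one schema and reload the grant tables
        GRANT ALL PRIVILEGES ON somedb.* TO 'someuser'@'localhost';
        FLUSH PRIVILEGES;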

    Read the article

  • Transferring FSMO roles over vpn

    - by Tom Bowman
    I have a server located at one of our offices which is quite old and due to be upgraded soon; this server holds the FSMO roles. I have another server in another office. Both are DCs in the same domain, both replicate with each other, and both run Server 2003 Standard. I need to transfer the FSMO roles from the old server to the one in the other office before I upgrade. I am also looking at bringing in an Exchange 2010 server, but I can't install/configure that until I transfer the roles, as it needs to be at the same site as the schema master. My question really is: as both servers replicate over a VPN, how quickly will the roles transfer, and will there be downtime? I need to make sure that while the transfer is running, both servers will service logons and share files. Or would it be better to do it out of hours? Many thanks, and apologies if I've missed anything out. Regards, Tom
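    For reference, a minimal sketch of the transfer with ntdsutil, run on the DC that should receive the roles (the server name is a placeholder). The transfer itself is a small directory write, so over a healthy VPN replication link it normally completes in seconds, and both DCs keep servicing logons and file shares throughout; doing it out of hours is still a sensible precaution:

        ntdsutil
        roles
        connections
        connect to server NEWDC
        quit
        transfer schema master
        transfer domain naming master
        transfer PDC
        transfer RID master
        transfer infrastructure master
        quit
        quit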

    Read the article

  • Share a PC's WiFi internet connection to a router and spit it out through it

    - by Maken
    Basically I'm trying to share an internet connection that I receive over WiFi from my best friend's place and extend it through a router for the consoles and other computers in my household. I have one PC with WiFi/Ethernet and one router flashed with DD-WRT. The layout is like this:

        Internet ----- modem ----- (friend's) router ---WiFi--- my PC (ICS) ---LAN--- Router 2 ----- Computer 1 (over LAN or WiFi, etc.) and Computer 2 (over LAN or WiFi, etc.)

    I want Computer 1 and Computer 2 to get internet from Router 2. If that's not clear enough I'll try to give more details, but a little help would be greatly appreciated.

    Read the article

  • Severe errors in solr configuration: Error loading class

    - by James Lawruk
    Installed Solr on my Windows XP PC. Tomcat seems to be working fine, but I cannot get Solr to work. I noticed that TrieDateField is declared in a file called schema.xml in the SolrHome directory. Any thoughts? The URL http://localhost:8080/solr/ returns:

        HTTP Status 500 - Severe errors in solr configuration. Check your log files for more detailed information on what may be wrong. If you want solr to continue after configuration errors, change: <abortOnConfigurationError>false</abortOnConfigurationError> in null
        -------------------------------------------------------------
        org.apache.solr.common.SolrException: Unknown fieldtype 'date' specified on field updated

    Here is an excerpt from the catalina log file:

        SEVERE: org.apache.solr.common.SolrException: Error loading class 'solr.TrieDateField'
            at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:273)
        Caused by: java.lang.ClassNotFoundException: solr.TrieDateField
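    The two errors fit together: the updated field references a fieldtype named date, and that type's class, solr.TrieDateField, cannot be loaded; TrieDateField only exists from Solr 1.4 on, so a 1.4-era schema.xml running against older Solr jars in Tomcat would fail exactly like this. The declaration in question normally looks like the following (a sketch of the stock Solr 1.4 example schema, not necessarily the poster's exact file):

        <!-- schema.xml: the 'date' type that the 'updated' field depends on -->
        <fieldType name="date" class="solr.TrieDateField" omitNorms="true"
                   precisionStep="0" positionIncrementGap="0"/>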

    Read the article

  • DB2 UDF Permissions

    - by WernerCD
    I have a custom function that I'm working on... the problem I'm having is simple: permissions. Example function:

        DROP FUNCTION circle_area
        GO
        CREATE FUNCTION circle_area (radius FLOAT)
        RETURNS FLOAT
        LANGUAGE SQL
        BEGIN
            DECLARE pi FLOAT DEFAULT 3.14;
            DECLARE area FLOAT;
            SET area = pi * radius * radius;
            RETURN area;
        END
        GO

    If I then log out of my "admin" account and log into a test account, I get a "Not authorized" error when I try to run something like "SELECT circle_area(foo) FROM library.bar". I can log into iSeries Navigator, navigate to the schema's functions' permissions, and change the permission for PUBLIC from Exclude to All; bam, it works. How do I grant permission to all, either in the CREATE FUNCTION or afterwards?
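    CREATE FUNCTION itself has no grant clause, but the permission change made in iSeries Navigator corresponds to a plain SQL GRANT that can be scripted immediately after the CREATE. A sketch, assuming the function lives in a schema named MYLIB (qualify with the parameter signature if the name is overloaded):

        GRANT EXECUTE ON FUNCTION MYLIB.CIRCLE_AREA TO PUBLIC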

    Read the article

  • Can't uninstall SBS 2003 in sbs 2003 -> sbs 2008 migration

    - by ChrisMuench
    I'm trying to remove SBS from my SBS 2003 server, and I'm logged in as Administrator. However, when I start to go through the wizard it gives me the following error: "You must be a member of the Domain Admins, Schema Admins, and Enterprise Admins group." I then did some research and found this article (http://support.microsoft.com/kb/842694), and I came to the point where it says to delete the server from Sites and Services. However, when I clicked delete, it wanted me to dcpromo the box first. Yet I have read that you have to uninstall Exchange first and then dcpromo the box to remove AD. Any ideas?

    Read the article

  • On MySQL 5.1 for Windows, why can't I assign DBA role to the "root" user?

    - by djangofan
    On MySQL 5.1 for Windows, why can't I assign the DBA role to the "root" user? MySQL Workbench allows me to add all the other roles except DBA. Also, when I "alter schema" on any table while logged in as root, I don't see all the tabs that show the database properties; I only see the first tab, which lets me change the collation. What is wrong with this picture? How do I give root all privileges? I've tried a few variations of GRANT ALL PRIVILEGES etc. from the command line, but nothing works. My root account is unable to alter column names, indexes, or options of any table that I create. I can create tables and delete them, but I can't alter them.
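    For the command-line route, the full-grant statement looks like this in 5.1 (a sketch; the host part must match how root actually connects, which is a common reason GRANT ALL variations appear not to work):

        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost'
            IDENTIFIED BY 'your_password' WITH GRANT OPTION;
        FLUSH PRIVILEGES;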

    Read the article

  • Make server unavailable gracefully using Powershell in ARR

    - by Carl Bergquist
    We are using ARR as a reverse proxy, and I would like to be able to take a server out of rotation gracefully for various reasons. How can this be done using PowerShell? Edit 1: I found this tutorial for doing it with JScript: http://blogs.iis.net/anilr/archive/2009/11/09/using-arr-config-extensibility-to-gracefully-stop-server.aspx, but I'm not able to translate it to PowerShell. Edit 2: Using Set-WebConfigurationProperty in the WebAdministration module I'm able to change settings for a server, and I found SetState in %windir%\system32\inetsrv\config\schema\arr_schema.xml, but I don't know how to invoke that method.
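    A rough PowerShell translation of the JScript in that blog post might look like the following. This is an untested sketch: the farm name, server address, and the numeric state value are assumptions to verify against arr_schema.xml and your own applicationHost.config:

        Import-Module WebAdministration

        # Locate the farm server element in applicationHost.config
        $server = Get-WebConfiguration `
            -Filter "webFarms/webFarm[@name='MyFarm']/server[@address='web01']" `
            -PSPath 'MACHINE/WEBROOT/APPHOST'

        # SetState is defined on the ARR child element (see arr_schema.xml)
        $arr    = $server.GetChildElement('applicationRequestRouting')
        $method = $arr.Methods['SetState']
        $call   = $method.CreateInstance()
        $call.Input.Attributes[0].Value = 2   # assumed: the enum value for a graceful drain
        $call.Execute()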

    Read the article

  • ORA-12705: invalid or unknown NLS parameter value specified

    - by viky
    I have a J2EE application hosted on JBoss on a Linux platform. When I try to access the application, I see the following error in the server.log file: ORA-12705: invalid or unknown NLS parameter value specified. When I point the same JBoss instance to a different schema, the application works fine. I went through a few forums and found that the NLS parameter settings are fine. Can anyone help?

    JBoss version = 4.0.2
    DB version = Oracle 10.2

    Output of the locale command on Linux:

        $ locale
        LANG=en_US.UTF-8
        LC_CTYPE="en_US.UTF-8"
        LC_NUMERIC="en_US.UTF-8"
        LC_TIME="en_US.UTF-8"
        LC_COLLATE="en_US.UTF-8"
        LC_MONETARY="en_US.UTF-8"
        LC_MESSAGES="en_US.UTF-8"
        LC_PAPER="en_US.UTF-8"
        LC_NAME="en_US.UTF-8"
        LC_ADDRESS="en_US.UTF-8"
        LC_TELEPHONE="en_US.UTF-8"
        LC_MEASUREMENT="en_US.UTF-8"
        LC_IDENTIFICATION="en_US.UTF-8"
        LC_ALL=
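    One cause worth ruling out (my suggestion; nothing in the question confirms it): with the Oracle thin JDBC driver, ORA-12705 is frequently triggered by the JVM's default locale rather than the OS locale, since the driver derives its NLS language/territory from user.language and user.country. A hedged sketch of pinning them in JBoss's run.conf:

        # run.conf: force a locale that Oracle's NLS mapping recognizes
        JAVA_OPTS="$JAVA_OPTS -Duser.language=en -Duser.country=US"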

    Read the article

  • How to backup/restore excluding filestream varbinary in SQL Server 2008?

    - by fdierre
    There is an application used at a production site that uses SQL Server 2008 as its DBMS. The database schema uses FILESTREAM varbinary columns to store binary data on the filesystem instead of directly in the DB tables. The point is that now and then it would be useful to copy the production database onto development machines, mostly for troubleshooting. The database is too big to move around comfortably, but it would be fine if it could be moved leaving out the FILESTREAM varbinary data. In other words, I am trying to make an "imperfect" copy of the database: i.e., on the destination database it is OK to have NULL values instead of the varbinary. Is this possible? I looked for such a feature in SQL Server Management Studio and made a backup that excludes the filegroup containing the FILESTREAM varbinary columns, but I cannot restore it: SSMS complains that the restore cannot be done because the backup is incomplete (of course). Is there any way to achieve what I am trying to do?
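    What the poster attempted maps onto SQL Server's filegroup backup and piecemeal restore. The catch is that the restored copy comes up with the excluded filegroup offline rather than with NULLs in the varbinary column, so queries touching that column fail until the filegroup is restored; whether that is acceptable depends on the application. A syntax sketch with placeholder names:

        -- Back up only the primary filegroup (FILESTREAM filegroup omitted)
        BACKUP DATABASE ProdDb
            FILEGROUP = 'PRIMARY'
            TO DISK = 'D:\backup\ProdDb_partial.bak';

        -- Piecemeal restore on the dev box; the FILESTREAM filegroup stays offline
        RESTORE DATABASE ProdDb
            FILEGROUP = 'PRIMARY'
            FROM DISK = 'D:\backup\ProdDb_partial.bak'
            WITH PARTIAL, RECOVERY;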

    Read the article

  • Migrating data from SQL Server 2000 to SQL Server 2005

    - by Muhammad Kashif Nadeem
    I have to migrate existing data from SQL Server 2000 to SQL Server 2005. The schemas of the two databases are different; for example, the Locations table in SS2000 is split into two tables and has different columns. This is a one-time activity; after a successful migration I won't need the old DB anymore. What is the best way to transfer data from one SQL Server to another when the schemas differ? I can write stored procedures to fetch data from SQL Server 2000 and insert/update the tables in SQL Server 2005. What about SSIS? I have no experience with it; is it worth building an SSIS package when I don't need it again and would have to learn it first? Thanks.
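    For a one-off move with reshaping, a linked server plus hand-written INSERT ... SELECT statements is often the lowest-ceremony option: the transformation (such as splitting the old Locations table in two) stays in plain T-SQL, with no SSIS learning curve. A sketch run from the 2005 server; server, database, and column names are invented for illustration:

        -- One-time link back to the old server
        EXEC sp_addlinkedserver @server = N'OLD2000', @srvproduct = N'SQL Server';

        -- Reshape while copying: the old Locations table feeds two new tables
        INSERT INTO dbo.Locations (LocationId, Name)
        SELECT LocationId, Name
        FROM OLD2000.OldDb.dbo.Locations;

        INSERT INTO dbo.LocationAddresses (LocationId, Street, City)
        SELECT LocationId, Street, City
        FROM OLD2000.OldDb.dbo.Locations;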

    Read the article
