Search Results

Search found 31319 results on 1253 pages for 'alter database'.

Page 134/1253

  • Database Schema Versioning Strategies

    - by Jack Ryan
    I work on a project that uses a reasonably large database, the live version weighing in at somewhere around 60-80 GB. The live database is the only real definitive source of our schema, and because of its size, duplicating it is too slow to be done often. This means we have ended up developing our database schema in a fairly ad hoc way, using SQL Compare to migrate changes from dev DBs to the live system, and only wiping our dev DBs every month or two. I am hoping to get some pointers on how to improve our database development workflow so that we have a little more control. Some things to think about: Currently nobody is really in charge of the database schema; all developers can change it if they need to, though generally these decisions are talked about before they are made. There are stored procedures, functions, and views in the database; these should probably be dumped to files so they can be reloaded on every build. Schema changes should probably be checked in as scripts. We have started to do this recently; however, all our scripts must then be numbered (because there may be dependencies between them) and must be re-runnable (because our build script currently runs them all in order). This makes them hard to read because they are full of conditionals that check whether tables or columns already exist, and this is a step that is often forgotten by developers. Getting a new database should be quick and easy; this is currently a big problem, as it takes several hours to get a copy of last night's backup and restore it onto a dev machine. Some mechanism needs to be in place to allow developers to update static data; we have tables that contain data that is never updated through the application but does potentially need to be changed when we do a new release (often this drives dropdowns). The whole thing needs to be runnable as part of a build script. Are there any tools that can be used to help do this? Eventually I would like to be at a point where a new DB can be built from scratch without copying any data from the live system. I don't mind writing some scripts to glue all the steps together, but each part should be easily editable so that we continue to use it rather than make changes directly on DBs.
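
    As a rough illustration of the "re-runnable scripts" point, here is a minimal sketch of an idempotent migration step for SQL Server; the file name, table and column (Customer, LoyaltyTier) are hypothetical placeholders, not part of the original question.

        -- 0042_add_loyaltytier_to_customer.sql  (numbering scheme is hypothetical)
        -- Safe to run repeatedly: it only acts when the column is still missing.
        IF NOT EXISTS (
            SELECT 1
            FROM sys.columns
            WHERE object_id = OBJECT_ID(N'dbo.Customer')
              AND name = N'LoyaltyTier'
        )
        BEGIN
            ALTER TABLE dbo.Customer
                ADD LoyaltyTier INT NOT NULL
                    CONSTRAINT DF_Customer_LoyaltyTier DEFAULT (0);
        END
        GO

    A build script can then run every numbered file in order on every build, and already-applied steps simply do nothing.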

    Read the article

  • SQL Server: How to tell if a database is a system database?

    - by Vinko Vrsalovic
    I know that so far (until MSSQL 2005 at least), the system databases are master, model, msdb and tempdb. Thing is, as far as I can tell, this is not guaranteed to be preserved in the future. And neither the sys.databases view nor the sys.sysdatabases view tells me whether a database is considered a system database. Is there someplace where this information (whether a database is considered a system database or not) can be obtained?
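
    There is no documented "is system database" flag in sys.databases, but as a rough sketch (assuming SQL Server 2005 or later) you can combine the well-known names with the fixed database IDs 1-4; note this does not cover special cases such as a replication distribution database.

        SELECT name,
               database_id,
               CASE WHEN database_id <= 4
                      OR name IN (N'master', N'tempdb', N'model', N'msdb')
                    THEN 1 ELSE 0 END AS is_system_db   -- derived column, not a built-in flag
        FROM sys.databases;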

    Read the article

  • Opening the Table from the Database

    - by Gopal
    I am new to MySQL 5.1. Where in MySQL can I find the database and its tables? I attached a database in MySQL through ODBC connectivity. How can I get the table values from the attached database? I also want to create a table in the database.
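
    For reference, a minimal MySQL 5.1 session that lists the available databases, selects one, inspects its tables and creates a new one might look like this; mydb, mytable and new_table are placeholder names.

        SHOW DATABASES;                  -- list all databases on the server
        USE mydb;                        -- switch to the database you want to work with
        SHOW TABLES;                     -- list the tables it contains
        SELECT * FROM mytable LIMIT 10;  -- look at some of that table's values
        CREATE TABLE new_table (
            id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(100) NOT NULL
        );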

    Read the article

  • How to generate script/stored procedure to copy data from one database to another

    - by aein
    I had to restore my database to one day earlier, so my database is missing the data from that day, and this database has been used by users since the restoration. How do I generate a script/stored procedure to copy just the missing data from the backup database into my current database? There are PK and FK relationships that need to be considered. I'm using SQL Server 2005. Thanks for your help. Aein
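
    One hedged way to approach this in SQL Server 2005 is to restore the backup side by side as a separate database and insert only the rows the live copy is missing, parents before children so the FK constraints are satisfied. The database and table names below (MyDb, MyDb_Backup, Orders, OrderLines) are placeholders.

        -- Copy rows that exist in the restored backup but not in the live database.
        SET IDENTITY_INSERT MyDb.dbo.Orders ON;   -- only needed if the key is an IDENTITY column

        INSERT INTO MyDb.dbo.Orders (OrderID, CustomerID, OrderDate)
        SELECT b.OrderID, b.CustomerID, b.OrderDate
        FROM MyDb_Backup.dbo.Orders AS b
        WHERE NOT EXISTS (SELECT 1 FROM MyDb.dbo.Orders AS o WHERE o.OrderID = b.OrderID);

        SET IDENTITY_INSERT MyDb.dbo.Orders OFF;
        -- Repeat for child tables (e.g. OrderLines) after their parent rows exist,
        -- so the foreign key constraints are never violated.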

    Read the article

  • How to export Oracle statistics

    - by A_M
    Hi, I am writing some new SQL queries and want to check the query plans that the Oracle query optimiser would come up with in production. My development database doesn't have anything like the data volumes of the production database. How can I export database statistics from a production database and re-import them into a development database? I don't have access to the production database, so I can't simply generate explain plans on production without going through a third-party hosting organisation. This is painful. So I want a local database which is in some way representative of production on which I can try out different things. Also, this is for a legacy application. I'd like to "improve" the schema by adding appropriate indexes, constraints, etc. I need to do this in my development database first, before rolling out to test and production. If I add an index and re-generate statistics in development, then the statistics will be generated around the development data volumes, which makes it difficult to assess the impact of my changes on production. Does anyone have any tips on how to deal with this? Or is it just a case of fixing unexpected behaviour once we've discovered it on production? I do have a staging database with production volumes, but again I have to go through a third party to run queries against this, which is painful. So I'm looking for ways to cut out the middle man as much as possible. All this is using Oracle 9i. Thanks.
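
    If the hosting organisation can run a short job for you, Oracle's DBMS_STATS package (available in 9i) supports exactly this export/import round trip. A sketch, where the schema name SCOTT and the statistics table name STATS_TAB are placeholders:

        -- On production: copy the optimizer statistics into an ordinary table...
        EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'STATS_TAB');
        EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'STATS_TAB');
        -- ...then move STATS_TAB to development with exp/imp or a database link.

        -- On development: load the production statistics into the data dictionary,
        -- so the optimizer plans as if it had production data volumes.
        EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'STATS_TAB');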

    Read the article

  • SQL Server 2008: unique problem bringing a DB online...

    - by Nai
    This is the error I am facing: TITLE: Microsoft.SqlServer.Smo Set offline failed for Database 'Go3D_Retailer' ------------------------------ ADDITIONAL INFORMATION: An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo) Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\ftrow_Go3D_catalog.ndf". Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)". Database 'Go3D_Retailer' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details. ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5120) Background to this error: I've been trying to move my log-shipping destination database to another physical server for analysis purposes. Because I do not have Active Directory set up, I had to hack my process by using the same username/password for both the source and destination servers to get the process to work. Following that, I used this guy's solution to move the destination database to another server. However, this error occurs when I try to bring the database back online. I don't have an E: drive on my server and I have no idea why it's trying to open a file from the E: drive. I have over 100 GB left on my hard disk, so it's definitely not a space issue. This sounds like a bug... Any ideas? I'm running SQL Server 2008 Enterprise Edition on Windows Server 2008 R2 64-bit.
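
    The missing E:\ path usually means the file's recorded location from the old server no longer exists on the new one. While the database is still offline, one hedged approach is to repoint that full-text catalog file and then bring the database online; the logical file name and the target folder below are assumptions, so confirm them against sys.master_files first.

        -- Confirm the logical name and current physical path of the catalog file.
        SELECT name, physical_name
        FROM sys.master_files
        WHERE database_id = DB_ID(N'Go3D_Retailer');

        -- Repoint the file to a folder that exists on this server, then bring the DB online.
        ALTER DATABASE [Go3D_Retailer]
            MODIFY FILE (NAME = N'ftrow_Go3D_catalog',                      -- assumed logical name
                         FILENAME = N'D:\SQLData\ftrow_Go3D_catalog.ndf');  -- hypothetical path
        ALTER DATABASE [Go3D_Retailer] SET ONLINE;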

    Read the article

  • How to export and import a user profile from one Quassel core to another?

    - by Zertrin
    I have been using Quassel as my bouncer for IRC for quite a long time now. We (a group of administrators of a small network) have set up a shared Quassel core with many users on the same core. But now I would like to export everything related to my user account from the Quassel database on this core, in order to re-import it later into another Quassel core on my own server. Unfortunately, while a feature for adding users has been implemented in Quassel, nothing is so far provided for either exporting or deleting a user. (If a delete-user feature were available, I could have made a copy of the current database, deleted all the other users leaving only mine, and used the resulting database on my own server, while leaving the first one untouched on the shared server.) Despite extensive research on the Internet on this subject, I've found no solution so far. I should mention that the backend database for the core has been migrated from the default SQLite backend to a PostgreSQL backend as the database grew considerably (over 1.5 GB for now). However, I'd be glad to hear of any working solution (SQLite or PostgreSQL backend) describing a way to export the data related to a specific user profile and then re-import it into a new Quassel core database.

    Read the article

  • EF4 generates invalid script

    - by Jaxidian
    When I right-click in a .EDMX file and click Generate Database From Model, the resulting script is obviously wrong because of the table names. What it generates is the following script. Note the table names in the DROP TABLE part versus the CREATE TABLE part. Why is this inconsistent? This is obviously not a reusable script. What I created was an Entity named "Address" and an Entity named "Company", etc (all singular). The EntitySet names are pluralized. The "Pluralize New Objects" boolean does not change this either. So what's the deal? For what it's worth, I originally generated the EDMX by pointing it to a database that had tables with non-pluralized names and now that I've made some changes, I want to go back the other way. I'd like to have the option to go back and forth as neither the db-first nor the model-first model is ideal in all scenarios, and I have the control to ensure that there will be no merging issues from multiple people going both ways at the same time. -- -------------------------------------------------- -- Dropping existing FOREIGN KEY constraints -- NOTE: if the constraint does not exist, an ignorable error will be reported. -- -------------------------------------------------- ALTER TABLE [Address] DROP CONSTRAINT [FK_Address_StateID-State_ID]; GO ALTER TABLE [Company] DROP CONSTRAINT [FK_Company_AddressID-Address_ID]; GO ALTER TABLE [Employee] DROP CONSTRAINT [FK_Employee_BossEmployeeID-Employee_ID]; GO ALTER TABLE [Employee] DROP CONSTRAINT [FK_Employee_CompanyID-Company_ID]; GO ALTER TABLE [Employee] DROP CONSTRAINT [FK_Employee_PersonID-Person_ID]; GO ALTER TABLE [Person] DROP CONSTRAINT [FK_Person_AddressID-Address_ID]; GO -- -------------------------------------------------- -- Dropping existing tables -- NOTE: if the table does not exist, an ignorable error will be reported. 
-- -------------------------------------------------- DROP TABLE [Address]; GO DROP TABLE [Company]; GO DROP TABLE [Employee]; GO DROP TABLE [Person]; GO DROP TABLE [State]; GO -- -------------------------------------------------- -- Creating all tables -- -------------------------------------------------- -- Creating table 'Addresses' CREATE TABLE [Addresses] ( [ID] int IDENTITY(1,1) NOT NULL, [StreetAddress] nvarchar(100) NOT NULL, [City] nvarchar(100) NOT NULL, [StateID] int NOT NULL, [Zip] nvarchar(10) NOT NULL ); GO -- Creating table 'Companies' CREATE TABLE [Companies] ( [ID] int IDENTITY(1,1) NOT NULL, [Name] nvarchar(100) NOT NULL, [AddressID] int NOT NULL ); GO -- Creating table 'People' CREATE TABLE [People] ( [ID] int IDENTITY(1,1) NOT NULL, [FirstName] nvarchar(100) NOT NULL, [LastName] nvarchar(100) NOT NULL, [AddressID] int NOT NULL ); GO -- Creating table 'States' CREATE TABLE [States] ( [ID] int IDENTITY(1,1) NOT NULL, [Name] nvarchar(100) NOT NULL, [Abbreviation] nvarchar(2) NOT NULL ); GO -- Creating table 'Employees' CREATE TABLE [Employees] ( [ID] int IDENTITY(1,1) NOT NULL, [PersonID] int NOT NULL, [CompanyID] int NOT NULL, [Position] nvarchar(100) NOT NULL, [BossEmployeeID] int NULL ); GO -- -------------------------------------------------- -- Creating all PRIMARY KEY constraints -- -------------------------------------------------- -- Creating primary key on [ID] in table 'Addresses' ALTER TABLE [Addresses] ADD CONSTRAINT [PK_Addresses] PRIMARY KEY ([ID] ); GO -- Creating primary key on [ID] in table 'Companies' ALTER TABLE [Companies] ADD CONSTRAINT [PK_Companies] PRIMARY KEY ([ID] ); GO -- Creating primary key on [ID] in table 'People' ALTER TABLE [People] ADD CONSTRAINT [PK_People] PRIMARY KEY ([ID] ); GO -- Creating primary key on [ID] in table 'States' ALTER TABLE [States] ADD CONSTRAINT [PK_States] PRIMARY KEY ([ID] ); GO -- Creating primary key on [ID] in table 'Employees' ALTER TABLE [Employees] ADD CONSTRAINT [PK_Employees] PRIMARY KEY ([ID] ); GO -- -------------------------------------------------- -- Creating all FOREIGN KEY constraints -- -------------------------------------------------- -- Creating foreign key on [StateID] in table 'Addresses' ALTER TABLE [Addresses] ADD CONSTRAINT [FK_Address_StateID_State_ID] FOREIGN KEY ([StateID]) REFERENCES [States] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION; -- Creating non-clustered index for FOREIGN KEY 'FK_Address_StateID_State_ID' CREATE INDEX [IX_FK_Address_StateID_State_ID] ON [Addresses] ([StateID]); GO -- Creating foreign key on [AddressID] in table 'Companies' ALTER TABLE [Companies] ADD CONSTRAINT [FK_Company_AddressID_Address_ID] FOREIGN KEY ([AddressID]) REFERENCES [Addresses] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION; -- Creating non-clustered index for FOREIGN KEY 'FK_Company_AddressID_Address_ID' CREATE INDEX [IX_FK_Company_AddressID_Address_ID] ON [Companies] ([AddressID]); GO -- Creating foreign key on [AddressID] in table 'People' ALTER TABLE [People] ADD CONSTRAINT [FK_Person_AddressID_Address_ID] FOREIGN KEY ([AddressID]) REFERENCES [Addresses] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION; -- Creating non-clustered index for FOREIGN KEY 'FK_Person_AddressID_Address_ID' CREATE INDEX [IX_FK_Person_AddressID_Address_ID] ON [People] ([AddressID]); GO -- Creating foreign key on [CompanyID] in table 'Employees' ALTER TABLE [Employees] ADD CONSTRAINT [FK_Employee_CompanyID_Company_ID] FOREIGN KEY ([CompanyID]) REFERENCES [Companies] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION; -- 
Creating non-clustered index for FOREIGN KEY 'FK_Employee_CompanyID_Company_ID' CREATE INDEX [IX_FK_Employee_CompanyID_Company_ID] ON [Employees] ([CompanyID]); GO -- Creating foreign key on [BossEmployeeID] in table 'Employees' ALTER TABLE [Employees] ADD CONSTRAINT [FK_Employee_BossEmployeeID_Employee_ID] FOREIGN KEY ([BossEmployeeID]) REFERENCES [Employees] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION; -- Creating non-clustered index for FOREIGN KEY 'FK_Employee_BossEmployeeID_Employee_ID' CREATE INDEX [IX_FK_Employee_BossEmployeeID_Employee_ID] ON [Employees] ([BossEmployeeID]); GO -- Creating foreign key on [PersonID] in table 'Employees' ALTER TABLE [Employees] ADD CONSTRAINT [FK_Employee_PersonID_Person_ID] FOREIGN KEY ([PersonID]) REFERENCES [People] ([ID]) ON DELETE NO ACTION ON UPDATE NO ACTION; -- Creating non-clustered index for FOREIGN KEY 'FK_Employee_PersonID_Person_ID' CREATE INDEX [IX_FK_Employee_PersonID_Person_ID] ON [Employees] ([PersonID]); GO -- -------------------------------------------------- -- Script has ended -- --------------------------------------------------

    Read the article

  • Problem creating a database with PHP PDO

    - by Leandro Alonso
    Hello guys, I'm having a problem with a SQL query in my PHP Application. When the user access it for the first time, the app executes this query to create all the database: CREATE TABLE `databases` ( `id` bigint(20) NOT NULL auto_increment, `driver` varchar(45) NOT NULL, `server` text NOT NULL, `user` text NOT NULL, `password` text NOT NULL, `database` varchar(200) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ; -- -------------------------------------------------------- -- -- Table structure for table `modules` -- CREATE TABLE `modules` ( `id` bigint(20) unsigned NOT NULL auto_increment, `title` varchar(100) NOT NULL, `type` varchar(150) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=29 ; -- -------------------------------------------------------- -- -- Table structure for table `modules_data` -- CREATE TABLE `modules_data` ( `id` bigint(20) NOT NULL auto_increment, `module_id` bigint(20) unsigned NOT NULL, `key` varchar(150) NOT NULL, `value` tinytext, PRIMARY KEY (`id`), KEY `fk_modules_data_modules` (`module_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=184 ; -- -------------------------------------------------------- -- -- Table structure for table `modules_position` -- CREATE TABLE `modules_position` ( `user_id` bigint(20) unsigned NOT NULL, `tab_id` bigint(20) unsigned NOT NULL, `module_id` bigint(20) unsigned NOT NULL, `column` smallint(1) default NULL, `line` smallint(1) default NULL, PRIMARY KEY (`user_id`,`tab_id`,`module_id`), KEY `fk_modules_order_users` (`user_id`), KEY `fk_modules_order_tabs` (`tab_id`), KEY `fk_modules_order_modules` (`module_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -------------------------------------------------------- -- -- Table structure for table `tabs` -- CREATE TABLE `tabs` ( `id` bigint(20) unsigned NOT NULL auto_increment, `title` varchar(60) NOT NULL, `columns` smallint(1) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=12 ; -- -------------------------------------------------------- -- -- Table structure for table `tabs_has_modules` -- CREATE TABLE `tabs_has_modules` ( `tab_id` bigint(20) unsigned NOT NULL, `module_id` bigint(20) unsigned NOT NULL, PRIMARY KEY (`tab_id`,`module_id`), KEY `fk_tabs_has_modules_tabs` (`tab_id`), KEY `fk_tabs_has_modules_modules` (`module_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -------------------------------------------------------- -- -- Table structure for table `users` -- CREATE TABLE `users` ( `id` bigint(20) unsigned NOT NULL auto_increment, `login` varchar(60) NOT NULL, `password` varchar(64) NOT NULL, `email` varchar(100) NOT NULL, `name` varchar(250) default NULL, `user_level` bigint(20) unsigned NOT NULL, PRIMARY KEY (`id`), KEY `fk_users_user_levels` (`user_level`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ; -- -------------------------------------------------------- -- -- Table structure for table `users_has_tabs` -- CREATE TABLE `users_has_tabs` ( `user_id` bigint(20) unsigned NOT NULL, `tab_id` bigint(20) unsigned NOT NULL, `order` smallint(2) NOT NULL, `columns_width` varchar(255) default NULL, PRIMARY KEY (`user_id`,`tab_id`), KEY `fk_users_has_tabs_users` (`user_id`), KEY `fk_users_has_tabs_tabs` (`tab_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -------------------------------------------------------- -- -- Table structure for table `user_levels` -- CREATE TABLE `user_levels` ( `id` bigint(20) unsigned NOT NULL auto_increment, `level` 
smallint(2) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ; -- -------------------------------------------------------- -- -- Table structure for table `user_meta` -- CREATE TABLE `user_meta` ( `id` bigint(20) unsigned NOT NULL auto_increment, `user_id` bigint(20) unsigned default NULL, `key` varchar(150) NOT NULL, `value` longtext NOT NULL, PRIMARY KEY (`id`), KEY `fk_user_meta_users` (`user_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ; -- -- Constraints for dumped tables -- -- -- Constraints for table `modules_data` -- ALTER TABLE `modules_data` ADD CONSTRAINT `fk_modules_data_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION; -- -- Constraints for table `modules_position` -- ALTER TABLE `modules_position` ADD CONSTRAINT `fk_modules_order_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION, ADD CONSTRAINT `fk_modules_order_tabs` FOREIGN KEY (`tab_id`) REFERENCES `tabs` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION, ADD CONSTRAINT `fk_modules_order_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION; -- -- Constraints for table `users` -- ALTER TABLE `users` ADD CONSTRAINT `fk_users_user_levels` FOREIGN KEY (`user_level`) REFERENCES `user_levels` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION; -- -- Constraints for table `user_meta` -- ALTER TABLE `user_meta` ADD CONSTRAINT `fk_user_meta_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION; INSERT INTO `user_levels` VALUES(1, 10); INSERT INTO `user_levels` VALUES(2, 1); INSERT INTO `users` VALUES(1, 'admin', 'password', '[email protected]', NULL, 1); INSERT INTO `user_meta` VALUES (NULL, 1, 'last_tab', 1); In some environments i get this error: SQLSTATE[HY000]: General error: 1005 Can't create table 'dms.databases' (errno: 150) I tried everything that I could find on Google but nothing works. The strange part is that if I run this query in PhpMyAdmin he creates my database, without any error.
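
    Errno 150 is InnoDB's generic foreign-key failure. A hedged first step is to ask InnoDB what it actually complained about and, if the cause turns out to be statement ordering or how the driver splits the batch, to relax the checks for the duration of the install script:

        -- The output contains a "LATEST FOREIGN KEY ERROR" section naming the offending constraint.
        SHOW ENGINE INNODB STATUS;

        -- If the failure is only about creation order, wrap the install script like this:
        SET FOREIGN_KEY_CHECKS = 0;
        -- ... run the CREATE TABLE / ALTER TABLE statements from the script above ...
        SET FOREIGN_KEY_CHECKS = 1;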

    Read the article

  • Insert Error:CREATE DATABASE permission denied in database 'master'. cannot attach the file

    - by user1300580
    i have a register page on my website I am creating and it saves the data entered by the user into a database however when I click the register button i am coming across the following error: Insert Error:CREATE DATABASE permission denied in database 'master'. Cannot attach the file 'C:\Users\MyName\Documents\MyName\Docs\Project\SJ\App_Data\SJ-Database.mdf' as database 'SJ-Database'. These are my connection strings: <connectionStrings> <add name="ApplicationServices" connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\aspnetdb.mdf;User Instance=true" providerName="System.Data.SqlClient"/> <add name="MyConsString" connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|SJ-Database.mdf; Initial Catalog=SJ-Database; Integrated Security=SSPI;" providerName="System.Data.SqlClient" /> </connectionStrings> Register page code: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Data; using System.Data.SqlClient; public partial class About : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } public string GetConnectionString() { //sets the connection string from your web config file "ConnString" is the name of your Connection String return System.Configuration.ConfigurationManager.ConnectionStrings["MyConsString"].ConnectionString; } private void ExecuteInsert(string name, string gender, string age, string address, string email) { SqlConnection conn = new SqlConnection(GetConnectionString()); string sql = "INSERT INTO Register (Name, Gender, Age, Address, Email) VALUES " + " (@Name,@Gender,@Age,@Address,@Email)"; try { conn.Open(); SqlCommand cmd = new SqlCommand(sql, conn); SqlParameter[] param = new SqlParameter[6]; //param[0] = new SqlParameter("@id", SqlDbType.Int, 20); param[0] = new SqlParameter("@Name", SqlDbType.VarChar, 50); param[1] = new SqlParameter("@Gender", SqlDbType.Char, 10); param[2] = new SqlParameter("@Age", SqlDbType.Int, 100); param[3] = new SqlParameter("@Address", SqlDbType.VarChar, 50); param[4] = new SqlParameter("@Email", SqlDbType.VarChar, 50); param[0].Value = name; param[1].Value = gender; param[2].Value = age; param[3].Value = address; param[4].Value = email; for (int i = 0; i < param.Length; i++) { cmd.Parameters.Add(param[i]); } cmd.CommandType = CommandType.Text; cmd.ExecuteNonQuery(); } catch (System.Data.SqlClient.SqlException ex) { string msg = "Insert Error:"; msg += ex.Message; throw new Exception(msg); } finally { conn.Close(); } } protected void cmdRegister_Click(object sender, EventArgs e) { if (txtRegEmail.Text == txtRegEmailCon.Text) { //call the method to execute insert to the database ExecuteInsert(txtRegName.Text, txtRegAge.Text, ddlRegGender.SelectedItem.Text, txtRegAddress.Text, txtRegEmail.Text); Response.Write("Record was successfully added!"); ClearControls(Page); } else { Response.Write("Email did not match"); txtRegEmail.Focus(); } } public static void ClearControls(Control Parent) { if (Parent is TextBox) { (Parent as TextBox).Text = string.Empty; } else { foreach (Control c in Parent.Controls) ClearControls(c); } } }

    Read the article

  • Setup database for Unit tests with Spring, Hibernate and Spring Transaction Support

    - by Michael Bulla
    I want to test integration of dao-layer with my service-layer in a unit-test. So I need to setup some data in my database (hsql). For this setup I need an own transaction at the begining of my testcase to ensure that all my setup is really commited to database before starting my testcase. So here's what I want to achieve: // NotTranactional public void doTest { // transaction begins setup database // commit transaction service.doStuff() // doStuff is annotated @Transactional(propagation=Propagation.REQUIRED) } Here is my not working code: @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations={"/asynchUnit.xml"}) @DirtiesContext(classMode=ClassMode.AFTER_EACH_TEST_METHOD) public class ReceiptServiceTest implements ApplicationContextAware { @Autowired(required=true) private UserHome userHome; private ApplicationContext context; @Before @Transactional(propagation=Propagation.REQUIRED) public void init() throws Exception { User user = InitialCreator.createUser(); userHome.persist(user); } @Test public void testDoSomething() { ... } } Leading to this exception: org.hibernate.HibernateException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here at org.springframework.orm.hibernate3.SpringSessionContext.currentSession(SpringSessionContext.java:63) at org.hibernate.impl.SessionFactoryImpl.getCurrentSession(SessionFactoryImpl.java:687) at de.diandan.asynch.modell.GenericHome.getSession(GenericHome.java:40) at de.diandan.asynch.modell.GenericHome.persist(GenericHome.java:53) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:318) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:196) at $Proxy28.persist(Unknown Source) at de.diandan.asynch.service.ReceiptServiceTest.init(ReceiptServiceTest.java:63) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27) at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:74) at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:83) at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:72) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:231) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61) at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:71) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:174) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:49) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) I don't know what the right way is to get a transaction around the database setup. What I tried: @Before @Transactional(propagation=Propagation.REQUIRED) public void setup() { setup database } -- Spring seems not to start a transaction in @Before-annotated methods. Beyond that, that's not what I really want, because there are a lot of methods in my test class which need a slightly different setup, so I need several such init methods. @Transactional(propagation=Propagation.REQUIRED) public void setup() { setup database } public void doTest { init(); service.doStuff() // doStuff is annotated @Transactional(propagation=Propagation.REQUIRED) } -- init seems not to get started in a transaction. What I don't want to do: public void doTest { // doing my own transaction-handling setup database // doing my own transaction-handling service.doStuff() // doStuff is annotated @Transactional(propagation=Propagation.REQUIRED) } -- starting to mix Spring's transaction handling and my own seems to be a pain in the ass. @Transactional(propagation=Propagation.REQUIRED) public void doTest { setup database service.doStuff() } -- I want to test as realistic a situation as possible, so my service should start with a clean session and no open transaction. So what's the right way to set up the database for my test case?

    Read the article

  • June 27, 2012 (Wed), 19:00 - a 90-minute Oracle Database seminar

    - by Yuichi Hayashi
    Date and time: Wednesday, June 27, 2012, 19:00-20:30 (doors open 18:45), followed by a session from 20:30 to 21:30. Capacity: 30 people. Held in cooperation with COSOL (http://www.cosol.jp/); inquiries: 03-3264-8800 (9:00-18:00, weekdays) / [email protected]. Based on the well-received "Start! Oracle Database - Step by Step" hands-on material from Oracle DBA & Developer Days 2010 and 2011, the session walks through the basics of building and operating an Oracle Database, including working with Oracle Enterprise Manager Database Control, Oracle Net configuration (listener.ora, tnsnames.ora, NetCA, Net Manager), Oracle memory structures such as the SGA, redo log files, and undo management (including the UNDO_RETENTION parameter).

    Read the article

  • SQL SERVER – Finding Shortest Distance between Two Shapes using Spatial Data Classes – Ramsetu or Adam’s Bridge

    - by pinaldave
    Recently I was reading an excellent blog post by Lenni Lobel on spatial databases. He has written about the very interesting ShortestLineTo function in the spatial data classes. I really loved this new feature for finding the shortest distance between two shapes in SQL Server. Following is an example, the same one Lenni discusses in his blog article. DECLARE @Shape1 geometry = 'POLYGON ((-20 -30, -3 -26, 14 -28, 20 -40, -20 -30))' DECLARE @Shape2 geometry = 'POLYGON ((-18 -20, 0 -10, 4 -12, 10 -20, 2 -22, -18 -20))' SELECT @Shape1 UNION ALL SELECT @Shape2 UNION ALL SELECT @Shape1.ShortestLineTo(@Shape2).STBuffer(.25) GO When you run this script, SQL Server finds the shortest distance between the two shapes and draws the line. We are using STBuffer so we can see the connecting line clearly. Now let us modify one of the objects and see how the connecting shortest line works. DECLARE @Shape1 geometry = 'POLYGON ((-20 -30, -3 -30, 14 -28, 20 -40, -20 -30))' DECLARE @Shape2 geometry = 'POLYGON ((-18 -20, 0 -10, 4 -12, 10 -20, 2 -22, -18 -20))' SELECT @Shape1 UNION ALL SELECT @Shape2 UNION ALL SELECT @Shape1.ShortestLineTo(@Shape2).STBuffer(.25) GO Now once again let us modify one of the scripts and see how ShortestLineTo works. DECLARE @Shape1 geometry = 'POLYGON ((-20 -30, -3 -30, 14 -28, 20 -40, -20 -30))' DECLARE @Shape2 geometry = 'POLYGON ((-18 -20, 0 -10, 4 -12, 10 -20, 2 -18, -18 -20))' SELECT @Shape1 UNION ALL SELECT @Shape2 UNION ALL SELECT @Shape1.ShortestLineTo(@Shape2).STBuffer(.25) SELECT @Shape1.STDistance(@Shape2) GO You can see that as the objects change, the shortest lines move to the appropriate place. Even though this is a very small feature, I think it is really cool to know. While I was working on this example, I suddenly thought about the distance between Sri Lanka and India. The distance is very short; in fact it is less than 30 km by sea. I decided to map India and Sri Lanka using the spatial data classes. To my surprise, the plotted shortest line is the same as Adam's Bridge, or Ramsetu. Adam's Bridge starts as a chain of shoals from the Dhanushkodi tip of India's Pamban Island and ends at Sri Lanka's Mannar Island. Geological evidence suggests that this bridge is a former land connection between India and Sri Lanka. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Spatial Database, SQL Spatial

    Read the article

  • June Oracle Technology Network NEW Member Benefits - books books and more books!!!

    - by Cassandra Clark
    As we mentioned a few posts ago we are working to bring Oracle Technology Network members NEW benefits each month. Listed below are several discounts on technology books brought to you by Apress, Pearson, CRC Press and Packt Publishing. Happy reading!!! Apress Offers - Get 50% off the eBook below using promo code ORACLEJUNEJCCF. Pro ODP.NET for Oracle Database 11g By Edmund T. Zehoo This book is a comprehensive and easy-to-understand guide for using the Oracle Data Provider (ODP) version 11g on the .NET Framework. It also outlines the core GoF (Gang of Four) design patterns and coding techniques employed to build and deploy high-impact mission-critical applications using advanced Oracle database features through the ODP.NET provider. Pearson Offers - Get 35% off all titles listed below using code OTNMEMBER. SOA Design Patterns | Thomas Earl | ISBN: 0136135161 In cooperation with experts and practitioners throughout the SOA community, best-selling author Thomas Erl brings together the de facto catalog of design patterns for SOA and service-orientation. Oracle Performance Survival Guide | Guy Harrison | ISBN: 9780137011957 The fast, complete, start-to-finish guide to optimizing Oracle performance. Core JavaServer Faces, Third Edition | David Geary and Cay S. Horstmann | ISBN: 9780137012893 Provides everything you need to master the powerful and time-saving features of JSF 2.0? Solaris Security Essentials | ISBN: 9780137012336 A superb guide to deploying and managing secure computer environments.? Effective C#, Second Edition | Bill Wagner | ISBN: 9780321658708 Respected .NET expert Bill Wagner identifies fifty ways you can leverage the full power of the C# 4.0 language to express your designs concisely and clearly. CRC Press Offers - Use 813DA to get 20% off this the title below. Secure and Resilient Software Development This book illustrates all phases of the secure software development life cycle. It details quality software development strategies that stress resilience requirements with precise, actionable, and ground-level inputs. Packt Publishing Offers - Use the promo code "Java35June", to save 35% off of each eBook mentioned below. JSF 2.0 Cookbook By Anghel Leonard ISBN: 978-1-847199-52-2 Packed with fast, practical solutions and techniques for JavaServer Faces developers who want to push past the JSF basics. JavaFX 1.2 Application Development Cookbook By Vladimir Vivien ISBN: 978-1-847198-94-5 Fast, practical solutions and techniques for building powerful, responsive Rich Internet Applications in JavaFX.

    Read the article

  • Microsoft SQL Server High-Availability Videos and Q&A Log

    - by KKline
    You Want Videos? We Got Videos! I always enjoy getting the chance to catch up with author, consultant, and Microsoft Clustering MVP Allan Hirt . Allan and I recently presented two sessions covering an overview of high availability in Microsoft SQL Server and, the following week, a demo of how to implement several different kinds of high availability techniques including database mirroring, transactional replication, and Windows clustering services. You can see videos of these presentations at the...(read more)
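
    As a taste of one of the techniques covered, here is a heavily trimmed sketch of the T-SQL side of setting up database mirroring between two instances. The server names, port and database name are placeholders, the endpoint creation is shown only once (it is needed on both instances), and the mirror must already hold a restored copy of the database WITH NORECOVERY.

        -- On each instance: create a mirroring endpoint (repeat on the partner).
        CREATE ENDPOINT Mirroring
            STATE = STARTED
            AS TCP (LISTENER_PORT = 5022)
            FOR DATABASE_MIRRORING (ROLE = PARTNER);

        -- On the mirror server: point the restored database at the principal.
        ALTER DATABASE AdventureWorks SET PARTNER = N'TCP://principal.contoso.com:5022';

        -- On the principal server: complete the session by pointing at the mirror.
        ALTER DATABASE AdventureWorks SET PARTNER = N'TCP://mirror.contoso.com:5022';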

    Read the article

  • SQL SERVER – Rename Columnname or Tablename – SQL in Sixty Seconds #032 – Video

    - by pinaldave
    We all make mistakes at some point, and we all change our opinions. There are quite a lot of people in the world who have changed their name after they have grown up; some corrected their parents' mistake and some created a new one. Well, databases are not protected from such incidents. There are many reasons why developers may want to change the name of a column or table after it was initially created. The goal of this video is not to dwell on the reasons but to learn how we can rename a column or table. Earlier I wrote an article on this subject over here: SQL SERVER – How to Rename a Column Name or Table Name. I have revised the same article and created this video. There is one very important point to remember: by changing a column name or table name, one creates the possibility of errors in the applications where the columns and tables are used. When any column or table name is changed, the developer should go through every place in the code base, ad-hoc queries, stored procedures, views and any other place where there is a possibility of their usage, and change them to the new name. If this is not followed up religiously, there is a good chance that the application will stop working because of the name change. One also has to remember that changing a column name does not change the name of the indexes, constraints, etc., and they will continue to reference the old name. Though this will not stop the show, it will create visual discomfort as well as confusion in many cases. Here is my question back to you – have you ever changed a column name or table name in a production database (after the project went live)? If yes, what was the scenario and the need for doing it? After all, it is just a name. Let me know what you think of this video. Here is the updated script. USE tempdb GO CREATE TABLE TestTable (ID INT, OldName VARCHAR(20)) GO INSERT INTO TestTable VALUES (1, 'First') GO -- Check the Tabledata SELECT * FROM TestTable GO -- Rename the ColumnName sp_RENAME 'TestTable.OldName', 'NewName', 'Column' GO -- Check the Tabledata SELECT * FROM TestTable GO -- Rename the TableName sp_RENAME 'TestTable', 'NewTable' GO -- Check the Tabledata - Error SELECT * FROM TestTable GO -- Check the Tabledata - New SELECT * FROM NewTable GO -- Cleanup DROP TABLE NewTable GO Related Tips in SQL in Sixty Seconds: SQL SERVER – How to Rename a Column Name or Table Name What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel

    Read the article

  • My presentations: IQSYS BI Symposium, April 14, 2010

    - by Fekete Zoltán
    The next IQSYS BI Symposium takes place on April 14, 2010, with Oracle as a gold-level sponsor. During the day I will give two presentations: - 15:15 The Oracle Exadata/Database Machine system: business and technical benefits, Fekete Zoltán (Oracle) - 14:25 Overview of the Oracle business intelligence and EPM offering, Fekete Zoltán (Oracle)

    Read the article

  • ...Which database background processes are responsible for what?... how did that work again? And what was the name of that one important Data Dictionary view? ...

    - by britta.wolf
    ...Wasn't there once a good Oracle poster you could quickly consult to get a solid overview? Many database administrators have missed this poster, which describes the architecture and processes as well as the Data Dictionary structure of the Oracle Database! A handy little Flash application with extended content has therefore been developed - Oracle Database 11g: Interactive Quick Reference - which you can download here (just click the "Download now" button; size of the zip file: 4.6 MB). It's brilliant, you have to have it!!! :-)

    Read the article

  • Oracle Technology Forum event, Wednesday, May 5, 2010

    - by Fekete Zoltán
    Next Wednesday we are holding an Oracle Technology Forum day, where you can hear presentations and get answers to your questions in the database management and developer tracks. Register for the event. In the database track I will talk about the technical gems of the Sun Oracle Database Machine / Exadata solutions, from the transactional (OLTP) side as well as the data warehouse (DW) and database consolidation side. I will also highlight the new and interesting features of Oracle Data Mining and OLAP, and mention possible uses of Oracle's Data Warehouse Reference Architecture.

    Read the article

  • Survey: How much data do you work with?

    - by James Luetkehoelter
    Andy isn't the only one who can ask a survey question. This is something I'm really curious about, because many of the answers, recommendations or rants in blogs are not universally applicable to every database - small databases must sometimes be treated differently, and uber databases are just a pain (and fun at the same time). So, how would you classify most of the databases you work with: 1) Up to 50 GB 2) 50-500 GB 3) 500 GB - 2 TB 4) DEAR GOD THAT'S TOO MUCH INFORMATION!...(read more)
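
    If you want to check your own numbers before answering, a quick sketch for SQL Server totals the file sizes per database from sys.master_files (size is reported in 8 KB pages):

        SELECT DB_NAME(database_id) AS database_name,
               CAST(SUM(size) * 8.0 / 1024 / 1024 AS DECIMAL(10, 2)) AS size_gb  -- pages -> GB
        FROM sys.master_files
        GROUP BY database_id
        ORDER BY size_gb DESC;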

    Read the article

  • PASS Summit 2012 PreCon - DBA-298-P Automate and Manage SQL Server with PowerShell

    - by AllenMWhite
    On Tuesday I presented an all-day pre-conference session on using PowerShell to automate and manage SQL Server. It was a very full day and we had a lot of great questions. One discussion in Module 6 was around scripting all the objects in a database, and I'd mentioned the script I wrote for the book The Red Gate Guide to SQL Server Team-based Development . When putting together the demos for the attendees to download I realized I'd placed that script in the Module 6 folder, so you don't need to go...(read more)

    Read the article

  • What's the difference between MariaDB and MySQL?

    - by Chris J. Lee
    What's the difference between MariaDB and MySQL? I'm not very familiar with either; I'm primarily a front-end developer. Are they syntactically similar? Where do the two differ? Wikipedia only mentions the difference in licensing: MariaDB is a community-developed branch of the MySQL database, the impetus being the community maintenance of its free status under the GPL, as opposed to any uncertainty about MySQL's license status under its current ownership by Oracle.

    Read the article

  • SQL Server 2008 R2 Express Edition - a treat for small scale businesses

    - by ssqa.net
    SQL Server Express edition is a lightweight offering within the SQL Server arena; it is classed as a database platform that makes it easy to develop data-driven applications that are rich in capability, offer enhanced storage security, and are fast to deploy. SQL Server 2008 Express with Advanced Services is an edition of the same flock that includes a new graphical management tool, features for reporting, and advanced text-based search capabilities. You can add the GUI capabilities for management...(read more)
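
    If you are not sure which edition and feature set an instance is actually running, a quick hedged check is:

        SELECT SERVERPROPERTY('Edition')        AS edition,          -- e.g. "Express Edition with Advanced Services"
               SERVERPROPERTY('ProductVersion') AS product_version,
               SERVERPROPERTY('ProductLevel')   AS product_level;    -- RTM or service-pack level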

    Read the article
