Search Results

Search found 30899 results on 1236 pages for 'openworld database machin'.


  • How to keep historic details of modification in a database (Audit trail)?

    - by mada
    I'm a J2EE developer and we are using Hibernate mappings with a PostgreSQL database. We have to keep track of any change that occurs in the database; in other words, all previous and current values of any field should be saved. Each field can be of any type (bytea, int, char...). With a simple table this is easy, but with a graph of objects things are more difficult. So, speaking from a UML point of view, we have a graph of objects to store in the database along with every change and the user who made it. Any idea or pattern for how to do that?
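
    One common way to do this, sketched below in plain SQL rather than through Hibernate, is a per-table history table populated by a trigger. The customer table and its columns are hypothetical stand-ins for the real entities; for an object graph the same pattern would be repeated per mapped table (or handled by a tool such as Hibernate Envers).

        -- Hypothetical audited table
        CREATE TABLE customer (
            id    integer PRIMARY KEY,
            name  varchar(100)
        );

        -- History table: one row per change, storing the previous values
        CREATE TABLE customer_history (
            audit_id    bigserial PRIMARY KEY,
            changed_at  timestamp NOT NULL DEFAULT now(),
            changed_by  text NOT NULL DEFAULT current_user,
            operation   char(1) NOT NULL,   -- 'U' = update, 'D' = delete
            id          integer,
            name        varchar(100)
        );

        CREATE OR REPLACE FUNCTION customer_audit() RETURNS trigger AS $$
        BEGIN
            IF TG_OP = 'UPDATE' THEN
                -- keep the old version of the row before it is overwritten
                INSERT INTO customer_history (operation, id, name)
                VALUES ('U', OLD.id, OLD.name);
                RETURN NEW;
            ELSE
                -- keep the row that is about to be deleted
                INSERT INTO customer_history (operation, id, name)
                VALUES ('D', OLD.id, OLD.name);
                RETURN OLD;
            END IF;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER customer_audit_trg
            BEFORE UPDATE OR DELETE ON customer
            FOR EACH ROW EXECUTE PROCEDURE customer_audit();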

    Read the article

  • What is the preferred way to update database schemas in multiple production environments

    - by rmarimon
    I am about to install some 20 servers with the same web application in multiple locations, each connected to its own local database. I will be updating the web applications remotely (perhaps using Debian's package manager) and I'm sure I will eventually need to update the database schemas. Since each server could eventually be running a different release of the web application, I need a way to apply incremental changes to the servers. I'm thinking of something like this: let's start with database.schema.1 as the original release of the database and assume this number increases with each new version of the schema. I could eventually end up with database.schema.17 as the current release; for a new installation this would be the schema to install. It seems to me that I would need consecutive translations like database.translation.1.2, which would convert database.schema.1 into database.schema.2, database.translation.2.3 to convert from 2 to 3, and so on until 17. It seems that whenever I change a schema I need to alter the database, but perhaps I also need to run a script to update the data, which might be done with SQL but might require an external non-SQL script. What is the appropriate way to organize all these files? What is the automatic way to apply those upgrades to the schema? Where do I store the current version number of the schema?
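
    One way to wire this together, sketched below with a made-up orders table, is to have every database carry a small version table; each translation file is then a plain SQL script that applies the schema change plus any data fix-up and bumps the number, and a wrapper (a package postinst script, for instance) runs only the scripts whose starting version matches the value currently stored.

        -- Installed once, as part of database.schema.1
        CREATE TABLE schema_version (
            version     integer   NOT NULL,
            applied_at  timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
        );
        INSERT INTO schema_version (version) VALUES (1);

        -- database.translation.1.2: upgrades schema 1 to schema 2
        ALTER TABLE orders ADD COLUMN shipped_at timestamp;                    -- schema change
        UPDATE orders SET shipped_at = created_at WHERE status = 'SHIPPED';   -- data fix-up
        INSERT INTO schema_version (version) VALUES (2);                      -- record the new version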

    Read the article

  • Can I run multiple database transactions on a single connection?

    - by draezal
    I have created a HyperSQL Database. I was just wondering whether I could run multiple transactions on a single connection. I didn't want to spawn a new connection for each transaction due to the overhead associated with this. Looking at some similar questions the suggestion appeared to be to create a pool of database connections and then block waiting for one to become available. This is a workable, but not desirable solution. Background Info (if this is relevant to the answer). My application will create a new thread when some request comes in. This request will require a database transaction. Then some not insignificant time later this transaction will be committed. Any advice appreciated :)

    Read the article

  • How to import and export only the data of a whole database in Access 2007

    - by DiegoMaK
    Hi, I have two identical databases with the same structure: database A on computer A and database B on computer B. The data in database A (a.accdb) and database B (b.accdb) are different, so in database A I have, for example, IDs 1, 2, 3 and in database B I have IDs 4, 5, 6. I need to merge the data of these databases into a single database (A or B, it doesn't matter) so that the final database contains IDs 1, 2, 3, 4, 5, 6. I'm looking for an easy way to do this, because I have many tables and doing it with union queries is tedious. I'm looking, for example, for a backup option that covers only the data without the schema, as in PostgreSQL and many other RDBMSs, but I don't see such an option in Access 2007. PS: only some tables might contain duplicate values (I guess the primary key won't allow a duplicate value to be copied, and all other values will be copied fine). If I'm wrong please correct me. Thanks for your help.
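
    One low-tech approach that avoids hand-written union queries, sketched here with a hypothetical Orders table and file path, is to run an append query per table from inside b.accdb, using the Access SQL IN clause to read straight from the other file; rows whose primary keys already exist will be rejected rather than copied.

        INSERT INTO Orders
        SELECT *
        FROM Orders IN 'C:\data\a.accdb';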

    Read the article

  • How can I make a Recycle Bin for a database application?

    - by Wael Dalloul
    Hi, I have a database application and I want to allow the user to restore deleted records from the database. Just as Windows has a Recycle Bin for files, I want the same thing for database records. Assume that I have a lot of related tables with a lot of fields. Edit: let's say I have the following structures: a Reports table (RepName primary key, ReportData), a Users table (ID primary key, Name), and a UserReports table (RepName and UserID as a composite primary key, plus IsDeleted). Now, if I put an IsDeleted field in the UserReports table, the user can't add the same record again once it is marked as deleted, because the record already exists and inserting it again would create a duplicate key.
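
    One way around the duplicate-key problem, sketched below with invented values, is to keep deleted rows out of UserReports altogether and move them into a recycle-bin table whose key includes the deletion time; the live key then stays free for re-insertion, and several deletions of the same record can coexist in the bin.

        CREATE TABLE UserReports_RecycleBin (
            RepName    varchar(100) NOT NULL,
            UserID     integer      NOT NULL,
            DeletedAt  timestamp    NOT NULL DEFAULT CURRENT_TIMESTAMP,
            DeletedBy  varchar(50),
            PRIMARY KEY (RepName, UserID, DeletedAt)
        );

        -- "Delete": copy the row into the bin, then remove it from the live table
        INSERT INTO UserReports_RecycleBin (RepName, UserID, DeletedBy)
        SELECT RepName, UserID, 'current_user_name'
        FROM UserReports
        WHERE RepName = 'MonthlyReport' AND UserID = 42;

        DELETE FROM UserReports
        WHERE RepName = 'MonthlyReport' AND UserID = 42;

        -- "Restore": move it back (DISTINCT in case the record was deleted more than once)
        INSERT INTO UserReports (RepName, UserID)
        SELECT DISTINCT RepName, UserID
        FROM UserReports_RecycleBin
        WHERE RepName = 'MonthlyReport' AND UserID = 42;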

    Read the article

  • If you don't own the proprietary database engine, what is the best way to convert a database to MySQL?

    - by John Robertson
    I work for a very small company. I was recently faced with the question of whether there is a good way to convert a proprietary database to a MySQL database without owning the proprietary database engine, e.g. if one is given a large Oracle database file (or choose your favorite proprietary database engine format) but doesn't have a license for the Oracle database engine, is there a good, perfectly reliable way to convert it to a MySQL database format that can be read with the MySQL database engine? My question is very vague as to which proprietary format is the source, simply because there would be multiple sources and it looks like they would be "various and sundry". My suspicion is that there is no perfectly reliable way, especially for a wide variety of proprietary databases. If there are a few proprietary formats for which this is possible, I would still be interested in knowing, though "various and sundry" is probably the real issue. Minimizing cost and effort while getting a correct conversion is key, so I think this probably falls on the "not possible" list. -John

    Read the article

  • PHP database selection issue

    - by Citroenfris
    I'm in a bit of a pickle with freshening up my PHP a bit, it's been about 3 years since I last coded in PHP. Any insights are welcomed! I'll give you as much information as I possibly can to resolve this error so here goes! Files config.php database.php news.php BLnews.php index.php Includes config.php - news.php database.php - news.php news.php - BLnews.php BLnews.php - index.php Now the problem with my current code is that the database connection is being made but my database refuses to be selected. The query I have should work but due to my database not getting selected it's kind of annoying to get any data exchange going! database.php <?php class Database { //------------------------------------------- // Connects to the database //------------------------------------------- function connect() { if (isset($dbhost) && isset($dbuser) && isset($dbpass)) { $con = mysql_connect($dbhost, $dbuser, $dbpass) or die("Could not connect: " . mysql_error()); } }// end function connect function selectDB() { if (isset($dbname) && isset($con)) { $selected_db = mysql_select_db($dbname, $con) or die("Could not select test DB"); } } } // end class Database ?> News.php <?php // include the config file and database class include 'config.php'; include 'database.php'; ... ?> BLnews.php <?php // include the news class include 'news.php'; // create an instance of the Database class and call it $db $db = new Database; $db -> connect(); $db->selectDB(); class BLnews { function getNews() { $sql = "SELECT * FROM news"; if (isset($sql)) { $result = mysql_query($sql) or die("Could not execute query. Reason: " .mysql_error()); } return $result; } ?> index.php <?php ... include 'includes/BLnews.php'; $blNews = new BLnews(); $news = $blNews->getNews(); ?> ... <?php while($row = mysql_fetch_array($news)) { echo '<div class="post">'; echo '<h2><a href="#"> ' . $row["title"] .'</a></h2>'; echo '<p class="post-info">Posted by <a href="#"> </a> | <span class="date"> Posted on <a href="#">' . $row["date"] . '</a></span></p>'; echo $row["content"]; echo '</div>'; } ?> Well this is pretty much everything that should get the information going however due to the mysql_error in $result = mysql_query($sql) or die("Could not execute query. Reason: " .mysql_error()); I can see the error and it says: Could not execute query. Reason: No database selected I honestly have no idea why it would not work and I've been fiddling with it for quite some time now. Help is most welcomed and I thank you in advance! Greets Lemon

    Read the article

  • How to normalize a database where different user groups have different kinds of profiles?

    - by Stephen
    My application database has a Groups table that separates users into logical roles and defines access levels (admin, owner, salesperson, customer service, etc.). Groups has many Users. The Users table contains login details such as username and password. Now I wish to add user profiles to my database. The trouble I'm having (probably due to my relative unfamiliarity with proper database normalization) is that different user groups have different kinds of profiles. Ergo, a salesperson's profile will include his commission percentage, whereas an admin or customer service profile would not need this value. So, would the proper method be to create a unique profile table for each group (e.g. admin_profiles or salesperson_profiles)? Or is there a better way that combines common details in a generic profile while some users have extended info? And if so, what's a good example of how to do this with the commission example given?
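
    One common answer is a shared profile table plus an optional one-to-one extension table per group that actually needs extra fields; the sketch below uses invented column names, assumes the existing Users table's ID key, and keeps the commission only in the salesperson extension.

        -- Common profile data for every user, keyed by the existing Users table
        CREATE TABLE profiles (
            user_id    integer PRIMARY KEY REFERENCES users(id),
            full_name  varchar(100),
            phone      varchar(30)
        );

        -- Group-specific extension, present only for salespeople
        CREATE TABLE salesperson_profiles (
            user_id                integer PRIMARY KEY REFERENCES profiles(user_id),
            commission_percentage  decimal(5,2) NOT NULL
        );

        -- A salesperson's full profile, commission included
        SELECT p.full_name, p.phone, sp.commission_percentage
        FROM profiles p
        JOIN salesperson_profiles sp ON sp.user_id = p.user_id;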

    Read the article

  • TOTD #166: Using NoSQL database in your Java EE 6 Applications on GlassFish - MongoDB for now!

    - by arungupta
    The Java EE 6 platform includes Java Persistence API to work with RDBMS. The JPA specification defines a comprehensive API that includes, but not restricted to, how a database table can be mapped to a POJO and vice versa, provides mechanisms how a PersistenceContext can be injected in a @Stateless bean and then be used for performing different operations on the database table and write typesafe queries. There are several well known advantages of RDBMS but the NoSQL movement has gained traction over past couple of years. The NoSQL databases are not intended to be a replacement for the mainstream RDBMS. As Philosophy of NoSQL explains, NoSQL database was designed for casual use where all the features typically provided by an RDBMS are not required. The name "NoSQL" is more of a category of databases that is more known for what it is not rather than what it is. The basic principles of NoSQL database are: No need to have a pre-defined schema and that makes them a schema-less database. Addition of new properties to existing objects is easy and does not require ALTER TABLE. The unstructured data gives flexibility to change the format of data any time without downtime or reduced service levels. Also there are no joins happening on the server because there is no structure and thus no relation between them. Scalability and performance is more important than the entire set of functionality typically provided by an RDBMS. This set of databases provide eventual consistency and/or transactions restricted to single items but more focus on CRUD. Not be restricted to SQL to access the information stored in the backing database. Designed to scale-out (horizontal) instead of scale-up (vertical). This is important knowing that databases, and everything else as well, is moving into the cloud. RBDMS can scale-out using sharding but requires complex management and not for the faint of heart. Unlike RBDMS which require a separate caching tier, most of the NoSQL databases comes with integrated caching. Designed for less management and simpler data models lead to lower administration as well. There are primarily three types of NoSQL databases: Key-Value stores (e.g. Cassandra and Riak) Document databases (MongoDB or CouchDB) Graph databases (Neo4J) You may think NoSQL is panacea but as I mentioned above they are not meant to replace the mainstream databases and here is why: RDBMS have been around for many years, very stable, and functionally rich. This is something CIOs and CTOs can bet their money on without much worry. There is a reason 98% of Fortune 100 companies run Oracle :-) NoSQL is cutting edge, brings excitement to developers, but enterprises are cautious about them. Commercial databases like Oracle are well supported by the backing enterprises in terms of providing support resources on a global scale. There is a full ecosystem built around these commercial databases providing training, performance tuning, architecture guidance, and everything else. NoSQL is fairly new and typically backed by a single company not able to meet the scale of these big enterprises. NoSQL databases are good for CRUDing operations but business intelligence is extremely important for enterprises to stay competitive. RDBMS provide extensive tooling to generate this data but that was not the original intention of NoSQL databases and is lacking in that area. Generating any meaningful information other than CRUDing require extensive programming. 
Not suited for complex transactions such as banking systems or other highly transactional applications requiring 2-phase commit. SQL cannot be used with NoSQL databases and writing simple queries can be involving. Enough talking, lets take a look at some code. This blog has published multiple blogs on how to access a RDBMS using JPA in a Java EE 6 application. This Tip Of The Day (TOTD) will show you can use MongoDB (a document-oriented database) with a typical 3-tier Java EE 6 application. Lets get started! The complete source code of this project can be downloaded here. Download MongoDB for your platform from here (1.8.2 as of this writing) and start the server as: arun@ArunUbuntu:~/tools/mongodb-linux-x86_64-1.8.2/bin$./mongod./mongod --help for help and startup optionsSun Jun 26 20:41:11 [initandlisten] MongoDB starting : pid=11210port=27017 dbpath=/data/db/ 64-bit Sun Jun 26 20:41:11 [initandlisten] db version v1.8.2, pdfile version4.5Sun Jun 26 20:41:11 [initandlisten] git version:433bbaa14aaba6860da15bd4de8edf600f56501bSun Jun 26 20:41:11 [initandlisten] build sys info: Linuxbs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 2017:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41Sun Jun 26 20:41:11 [initandlisten] waiting for connections on port 27017Sun Jun 26 20:41:11 [websvr] web admin interface listening on port 28017 The default directory for the database is /data/db and needs to be created as: sudo mkdir -p /data/db/sudo chown `id -u` /data/db You can specify a different directory using "--dbpath" option. Refer to Quickstart for your specific platform. Using NetBeans, create a Java EE 6 project and make sure to enable CDI and add JavaServer Faces framework. Download MongoDB Java Driver (2.6.3 of this writing) and add it to the project library by selecting "Properties", "LIbraries", "Add Library...", creating a new library by specifying the location of the JAR file, and adding the library to the created project. Edit the generated "index.xhtml" such that it looks like: <h1>Add a new movie</h1><h:form> Name: <h:inputText value="#{movie.name}" size="20"/><br/> Year: <h:inputText value="#{movie.year}" size="6"/><br/> Language: <h:inputText value="#{movie.language}" size="20"/><br/> <h:commandButton actionListener="#{movieSessionBean.createMovie}" action="show" title="Add" value="submit"/></h:form> This page has a simple HTML form with three text boxes and a submit button. The text boxes take name, year, and language of a movie and the submit button invokes the "createMovie" method of "movieSessionBean" and then render "show.xhtml". Create "show.xhtml" ("New" -> "Other..." -> "Other" -> "XHTML File") such that it looks like: <head> <title><h1>List of movies</h1></title> </head> <body> <h:form> <h:dataTable value="#{movieSessionBean.movies}" var="m" > <h:column><f:facet name="header">Name</f:facet>#{m.name}</h:column> <h:column><f:facet name="header">Year</f:facet>#{m.year}</h:column> <h:column><f:facet name="header">Language</f:facet>#{m.language}</h:column> </h:dataTable> </h:form> This page shows the name, year, and language of all movies stored in the database so far. The list of movies is returned by "movieSessionBean.movies" property. 
Now create the "Movie" class such that it looks like: import com.mongodb.BasicDBObject;import com.mongodb.BasicDBObject;import com.mongodb.DBObject;import javax.enterprise.inject.Model;import javax.validation.constraints.Size;/** * @author arun */@Modelpublic class Movie { @Size(min=1, max=20) private String name; @Size(min=1, max=20) private String language; private int year; // getters and setters for "name", "year", "language" public BasicDBObject toDBObject() { BasicDBObject doc = new BasicDBObject(); doc.put("name", name); doc.put("year", year); doc.put("language", language); return doc; } public static Movie fromDBObject(DBObject doc) { Movie m = new Movie(); m.name = (String)doc.get("name"); m.year = (int)doc.get("year"); m.language = (String)doc.get("language"); return m; } @Override public String toString() { return name + ", " + year + ", " + language; }} Other than the usual boilerplate code, the key methods here are "toDBObject" and "fromDBObject". These methods provide a conversion from "Movie" -> "DBObject" and vice versa. The "DBObject" is a MongoDB class that comes as part of the mongo-2.6.3.jar file and which we added to our project earlier.  The complete javadoc for 2.6.3 can be seen here. Notice, this class also uses Bean Validation constraints and will be honored by the JSF layer. Finally, create "MovieSessionBean" stateless EJB with all the business logic such that it looks like: package org.glassfish.samples;import com.mongodb.BasicDBObject;import com.mongodb.DB;import com.mongodb.DBCollection;import com.mongodb.DBCursor;import com.mongodb.DBObject;import com.mongodb.Mongo;import java.net.UnknownHostException;import java.util.ArrayList;import java.util.List;import javax.annotation.PostConstruct;import javax.ejb.Stateless;import javax.inject.Inject;import javax.inject.Named;/** * @author arun */@Stateless@Namedpublic class MovieSessionBean { @Inject Movie movie; DBCollection movieColl; @PostConstruct private void initDB() throws UnknownHostException { Mongo m = new Mongo(); DB db = m.getDB("movieDB"); movieColl = db.getCollection("movies"); if (movieColl == null) { movieColl = db.createCollection("movies", null); } } public void createMovie() { BasicDBObject doc = movie.toDBObject(); movieColl.insert(doc); } public List<Movie> getMovies() { List<Movie> movies = new ArrayList(); DBCursor cur = movieColl.find(); System.out.println("getMovies: Found " + cur.size() + " movie(s)"); for (DBObject dbo : cur.toArray()) { movies.add(Movie.fromDBObject(dbo)); } return movies; }} The database is initialized in @PostConstruct. Instead of a working with a database table, NoSQL databases work with a schema-less document. The "Movie" class is the document in our case and stored in the collection "movies". The collection allows us to perform query functions on all movies. The "getMovies" method invokes "find" method on the collection which is equivalent to the SQL query "select * from movies" and then returns a List<Movie>. Also notice that there is no "persistence.xml" in the project. Right-click and run the project to see the output as: Enter some values in the text box and click on enter to see the result as: If you reached here then you've successfully used MongoDB in your Java EE 6 application, congratulations! Some food for thought and further play ... SQL to MongoDB mapping shows mapping between traditional SQL -> Mongo query language. Tutorial shows fun things you can do with MongoDB. 
    Try the interactive online shell. The cookbook provides common ways of using MongoDB. In terms of this project, here are some tasks that can be tried: Encapsulate database management in a JPA persistence provider. Is it even worth it, given that the capabilities are going to be very different? MongoDB uses the "BSONObject" class for JSON representation; add @XmlRootElement on a POJO and see how a compatible JSON representation can be generated. This would make the fromXXX and toXXX methods redundant.

    Read the article

  • SQL – Migrate Database from SQL Server to NuoDB – A Quick Tutorial

    - by Pinal Dave
    Data is growing exponentially and every organization with growing data is thinking of next big innovation in the world of Big Data. Big data is a indeed a future for every organization at one point of the time. Just like every other next big thing, big data has its own challenges and issues. The biggest challenge associated with the big data is to find the ideal platform which supports the scalability and growth of the data. If you are a regular reader of this blog, you must be familiar with NuoDB. I have been working with NuoDB for a while and their recent release is the best thus far. NuoDB is an elastically scalable SQL database that can run on local host, datacenter and cloud-based resources. A key feature of the product is that it does not require sharding (read more here). Last week, I was able to install NuoDB in less than 90 seconds and have explored their Explorer and Admin sections. You can read about my experiences in these posts: SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database Many SQL Authority readers have been following me in my journey to evaluate NuoDB. One of the frequently asked questions I’ve received from you is if there is any way to migrate data from SQL Server to NuoDB. The fact is that there is indeed a way to do so and NuoDB provides a fantastic tool which can help users to do it. NuoDB Migrator is a command line utility that supports the migration of Microsoft SQL Server, MySQL, Oracle, and PostgreSQL schemas and data to NuoDB. The migration to NuoDB is a three-step process: NuoDB Migrator generates a schema for a target NuoDB database It loads data into the target NuoDB database It dumps data from the source database Let’s see how we can migrate our data from SQL Server to NuoDB using a simple three-step approach. But before we do that we will create a sample database in MSSQL and later we will migrate the same database to NuoDB: Setup Step 1: Build a sample data CREATE DATABASE [Test]; CREATE TABLE [Department]( [DepartmentID] [smallint] NOT NULL, [Name] VARCHAR(100) NOT NULL, [GroupName] VARCHAR(100) NOT NULL, [ModifiedDate] [datetime] NOT NULL, CONSTRAINT [PK_Department_DepartmentID] PRIMARY KEY CLUSTERED ( [DepartmentID] ASC ) ) ON [PRIMARY]; INSERT INTO Department SELECT * FROM AdventureWorks2012.HumanResources.Department; Note that I am using the SQL Server AdventureWorks database to build this sample table but you can build this sample table any way you prefer. Setup Step 2: Install Java 64 bit Before you can begin the migration process to NuoDB, make sure you have 64-bit Java installed on your computer. This is due to the fact that the NuoDB Migrator tool is built in Java. You can download 64-bit Java for Windows, Mac OSX, or Linux from the following link: http://java.com/en/download/manual.jsp. One more thing to remember is that you make sure that the path in your environment settings is set to your JAVA_HOME directory or else the tool will not work. Here is how you can do it: Go to My Computer >> Right Click >> Select Properties >> Click on Advanced System Settings >> Click on Environment Variables >> Click on New and enter the following values. Variable Name: JAVA_HOME Variable Value: C:\Program Files\Java\jre7 Make sure you enter your Java installation directory in the Variable Value field. Setup Step 3: Install JDBC driver for SQL Server. 
There are two JDBC drivers available for SQL Server.  Select the one you prefer to use by following one of the two links below: Microsoft JDBC Driver jTDS JDBC Driver In this example we will be using jTDS JDBC driver. Once you download the driver, move the driver to your NuoDB installation folder. In my case, I have moved the JAR file of the driver into the C:\Program Files\NuoDB\tools\migrator\jar folder as this is my NuoDB installation directory. Now we are all set to start the three-step migration process from SQL Server to NuoDB: Migration Step 1: NuoDB Schema Generation Here is the command I use to generate a schema of my SQL Server Database in NuoDB. First I go to the folder C:\Program Files\NuoDB\tools\migrator\bin and execute the nuodb-migrator.bat file. Note that my database name is ‘test’. Additionally my username and password is also ‘test’. You can see that my SQL Server database is running on my localhost on port 1433. Additionally, the schema of the table is ‘dbo’. nuodb-migrator schema –source.driver=net.sourceforge.jtds.jdbc.Driver –source.url=jdbc:jtds:sqlserver://localhost:1433/ –source.username=test –source.password=test –source.catalog=test –source.schema=dbo –output.path=/tmp/schema.sql The above script will generate a schema of all my SQL Server tables and will put it in the folder C:\tmp\schema.sql . You can open the schema.sql file and execute this file directly in your NuoDB instance. You can follow the link here to see how you can execute the SQL script in NuoDB. Please note that if you have not yet created the schema in the NuoDB database, you should create it before executing this step. Step 2: Generate the Dump File of the Data Once you have recreated your schema in NuoDB from SQL Server, the next step is very easy. Here we create a CSV format dump file, which will contain all the data from all the tables from the SQL Server database. The command to do so is very similar to the above command. Be aware that this step may take a bit of time based on your database size. nuodb-migrator dump –source.driver=net.sourceforge.jtds.jdbc.Driver –source.url=jdbc:jtds:sqlserver://localhost:1433/ –source.username=test –source.password=test –source.catalog=test –source.schema=dbo –output.type=csv –output.path=/tmp/dump.cat Once the above command is successfully executed you can find your CSV file in the C:\tmp\ folder. However, you do not have to do anything manually. The third and final step will take care of completing the migration process. Migration Step 3: Load the Data into NuoDB After building schema and taking a dump of the data, the very next step is essential and crucial. It will take the CSV file and load it into the NuoDB database. nuodb-migrator load –target.url=jdbc:com.nuodb://localhost:48004/mytest –target.schema=dbo –target.username=test –target.password=test –input.path=/tmp/dump.cat Please note that in the above script we are now targeting the NuoDB database, which we have already created with the name of “MyTest”. If the database does not exist, create it manually before executing the above script. I have kept the username and password as “test”, but please make sure that you create a more secure password for your database for security reasons. Voila!  You’re Done That’s it. You are done. It took 3 setup and 3 migration steps to migrate your SQL Server database to NuoDB.  You can now start exploring the database and build excellent, scale-out applications. 
In this blog post, I have done my best to come up with simple and easy process, which you can follow to migrate your app from SQL Server to NuoDB. Download NuoDB I strongly encourage you to download NuoDB and go through my 3-step migration tutorial from SQL Server to NuoDB. Additionally here are two very important blog post from NuoDB CTO Seth Proctor. He has written excellent blog posts on the concept of the Administrative Domains. NuoDB has this concept of an Administrative Domain, which is a collection of hosts that can run one or multiple databases.  Each database has its own TEs and SMs, but all are managed within the Admin Console for that particular domain. http://www.nuodb.com/techblog/2013/03/11/getting-started-provisioning-a-domain/ http://www.nuodb.com/techblog/2013/03/14/getting-started-running-a-database/ Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Does it make sense to develop an open source Python library for database inspection?

    - by gruszczy
    Some time ago I came up with an idea for a library for database inspection. I started developing it and got some very basic functionality working, just to check whether it was possible. Recently, however, I am having second thoughts about whether such a project would really be useful. I am planning to develop the following software suite: a Python library that provides an easy interface for inspecting database structure, a desktop application in PyQt that uses the interface to provide graphical database inspection, and a web application in Django that uses the interface to provide database inspection through the browser. Do you think such a suite would be useful to other developers, database administrators, or analysts? I know that there is pgAdmin for PostgreSQL, some tools for sqlite3, and a Java tool called DBInspect. Usually I would be against creating a new tool and would rather join an existing project, but I am not a Java programmer (I would rather stick to Python or C, which I like), and none of these projects provides a library for database inspection. Anyway, I would like to hear some opinions from fellow developers on whether such a project makes sense or whether I should spend my free time developing something else.

    Read the article

  • SQL University: What and why of database testing

    - by Mladen Prajdic
    This is a post for a great idea called SQL University started by Jorge Segarra also famously known as SqlChicken on Twitter. It’s a collection of blog posts on different database related topics contributed by several smart people all over the world. So this week is mine and we’ll be talking about database testing and refactoring. In 3 posts we’ll cover: SQLU part 1 - What and why of database testing SQLU part 2 - What and why of database refactoring SQLU part 2 – Tools of the trade With that out of the way let us sharpen our pencils and get going. Why test a database The sad state of the industry today is that there is very little emphasis on testing in general. Test driven development is still a small niche of the programming world while refactoring is even smaller. The cause of this is the inability of developers to convince themselves and their managers that writing tests is beneficial. At the moment they are mostly viewed as waste of time. This is because the average person (let’s not fool ourselves, we’re all average) is unable to think about lower future costs in relation to little more current work. It’s orders of magnitude easier to know about the current costs in relation to current amount of work. That’s why programmers convince themselves testing is a waste of time. However we have to ask ourselves what tests are really about? Maybe finding bugs? No, not really. If we introduce bugs, we’re likely to write test around those bugs too. But yes we can find some bugs with tests. The main point of tests is to have reproducible repeatability in our systems. By having a code base largely covered by tests we can know with better certainty what a small code change can break in other parts of the system. By having repeatability we can make code changes with confidence, since we know we’ll see what breaks in other tests. And here comes the inability to estimate future costs. By spending just a few more hours writing those tests we’d know instantly what broke where. Imagine we fix a reported bug. We check-in the code, deploy it and the users are happy. Until we get a call 2 weeks later about a certain monthly process has stopped working. What we don’t know is that this process was developed by a long gone coworker and for some reason it relied on that same bug we’ve happily fixed. There’s no way we could’ve known that. We say OK and go in and fix the monthly process. But what we have no clue about is that there’s this ETL job that relied on data from that monthly process. Now that we’ve fixed the process it’s giving unexpected (yet correct since we fixed it) data to the ETL job. So we have to fix that too. But there’s this part of the app we coded that relies on data from that exact ETL job. And just like that we enter the “Loop of maintenance horror”. With the loop eventually comes blame. Here’s a nice tip for all developers and DBAs out there: If you make a mistake man up and admit to it. All of the above is valid for any kind of software development. Keeping this in mind the database is nothing other than just a part of the application. But a big part! One reason why testing a database is even more important than testing an application is that one database is usually accessed from multiple applications and processes. This makes it the central and vital part of the enterprise software infrastructure. Knowing all this can we really afford not to have tests? 
What to test in a database Now that we’ve decided we’ll dive into this testing thing we have to ask ourselves what needs to be tested? The short answer is: everything. The long answer is: read on! There are 2 main ways of doing tests: Black box and White box testing. Black box testing means we have no idea how the system internals are built and we only have access to it’s inputs and outputs. With it we test that the internal changes to the system haven’t caused the input/output behavior of the system to change. The most important thing to test here are the edge conditions. It’s where most programs break. Having good edge condition tests we can be more confident that the systems changes won’t break. White box testing has the full knowledge of the system internals. With it we test the internal system changes, different states of the application, etc… White and Black box tests should be complementary to each other as they are very much interconnected. Testing database routines includes testing stored procedures, views, user defined functions and anything you use to access the data with. Database routines are your input/output interface to the database system. They count as black box testing. We test then for 2 things: Data and schema. When testing schema we only care about the columns and the data types they’re returning. After all the schema is the contract to the out side systems. If it changes we usually have to change the applications accessing it. One helpful T-SQL command when doing schema tests is SET FMTONLY ON. It tells the SQL Server to return only empty results sets. This speeds up tests because it doesn’t return any data to the client. After we’ve validated the schema we have to test the returned data. There no other way to do this but to have expected data known before the tests executes and comparing that data to the database routine output. Testing Authentication and Authorization helps us validate who has access to the SQL Server box (Authentication) and who has access to certain database objects (Authorization). For desktop applications and windows authentication this works well. But the biggest problem here are web apps. They usually connect to the database as a single user. Please ensure that that user is not SA or an account with admin privileges. That is just bad. Load testing ensures us that our database can handle peak loads. One often overlooked tool for load testing is Microsoft’s OSTRESS tool. It’s part of RML utilities (x86, x64) for SQL Server and can help determine if our database server can handle loads like 100 simultaneous users each doing 10 requests per second. SQL Profiler can also help us here by looking at why certain queries are slow and what to do to fix them.   One particular problem to think about is how to begin testing existing databases. First thing we have to do is to get to know those databases. We can’t test something when we don’t know how it works. To do this we have to talk to the users of the applications accessing the database, run SQL Profiler to see what queries are being run, use existing documentation to decipher all the object relationships, etc… The way to approach this is to choose one part of the database (say a logical grouping of tables that go together) and filter our traces accordingly. Once we’ve done that we move on to the next grouping and so on until we’ve covered the whole database. Then we move on to the next one. 
    Database testing is a topic that we could spend many hours discussing, but let this be a nice intro to the world of database testing. See you in the next post.
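
    As a concrete illustration of the schema and data checks described above (the procedure and column names here are invented), a schema test can use SET FMTONLY ON to return only the result-set shape, and a data test can compare the routine's output against a known expected set with EXCEPT:

        -- Schema test: returns only metadata, no rows
        SET FMTONLY ON;
        EXEC dbo.GetCustomerOrders @CustomerId = 1;
        SET FMTONLY OFF;

        -- Data test: known input, expected output
        CREATE TABLE #expected (OrderId int, Total decimal(10,2));
        INSERT INTO #expected VALUES (1001, 250.00), (1002, 99.90);

        CREATE TABLE #actual (OrderId int, Total decimal(10,2));
        INSERT INTO #actual EXEC dbo.GetCustomerOrders @CustomerId = 1;

        IF EXISTS (SELECT * FROM #expected EXCEPT SELECT * FROM #actual)
           OR EXISTS (SELECT * FROM #actual EXCEPT SELECT * FROM #expected)
            RAISERROR('GetCustomerOrders returned unexpected data', 16, 1);

        DROP TABLE #expected, #actual;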

    Read the article

  • Data Guard - Snapshot Standby Database

    - by Jian Zhang-Oracle
    Overview
    --------
    A physical standby database normally runs in mount mode and continuously applies the redo shipped from the primary. It can be opened read-only (with the Active Data Guard option, redo apply even keeps running while it is open), but it cannot be opened read-write.
    Sometimes, however, you need to run tests such as Real Application Testing (RAT) against data that matches production. For that, the physical standby can be temporarily converted into a snapshot standby. A snapshot standby is opened read-write and can be updated freely; it keeps receiving redo from the primary but does not apply it. When testing is finished, the snapshot standby is converted back into a physical standby, the accumulated redo is applied, and the standby catches up with the primary again.
    Steps
    ---------
    1. Configure the fast recovery area on the standby:
    SQL> Alter system set db_recovery_file_dest_size=500M;
    System altered.
    SQL> Alter system set db_recovery_file_dest='/u01/app/oracle/snapshot_standby';
    System altered.
    2. Stop redo apply on the standby:
    SQL> alter database recover managed standby database cancel;
    Database altered.
    3. Convert the standby to a snapshot standby and open it:
    SQL> alter database convert to snapshot standby;
    Database altered.
    SQL> alter database open;
    Database altered.
    The database role is now SNAPSHOT STANDBY and the open mode is READ WRITE:
    SQL> select DATABASE_ROLE,name,OPEN_MODE from v$database;
    DATABASE_ROLE    NAME      OPEN_MODE
    ---------------- --------- --------------------
    SNAPSHOT STANDBY FSDB      READ WRITE
    4. Run your tests, for example Real Application Testing (RAT), against the snapshot standby.
    5. When testing is done, convert the snapshot standby back to a physical standby and restart redo apply:
    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup mount;
    ORACLE instance started.
    Database mounted.
    SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    Database altered.
    SQL> shutdown immediate;
    ORA-01507: database not mounted
    ORACLE instance shut down.
    SQL> startup mount;
    ORACLE instance started.
    Database mounted.
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    Database altered.
    6. The standby is back in the PHYSICAL STANDBY role with open mode MOUNTED:
    SQL> select DATABASE_ROLE,name,OPEN_MODE from v$database;
    DATABASE_ROLE    NAME      OPEN_MODE
    ---------------- --------- --------------------
    PHYSICAL STANDBY FSDB      MOUNTED
    7. Verify that redo is again being received and applied. On the primary:
    SQL> select ads.dest_id,max(sequence#) "Current Sequence",
           max(log_sequence) "Last Archived"
         from v$archived_log al, v$archive_dest ad, v$archive_dest_status ads
         where ad.dest_id=al.dest_id
         and al.dest_id=ads.dest_id
         and al.resetlogs_change#=(select max(resetlogs_change#) from v$archived_log )
         group by ads.dest_id;
       DEST_ID Current Sequence Last Archived
    ---------- ---------------- -------------
             1              361           361
             2              361           362
    On the standby:
    SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
         from (select thread# thrd, max(sequence#) almax
               from v$archived_log
               where resetlogs_change#=(select resetlogs_change# from v$database)
               group by thread#) al,
              (select thread# thrd, max(sequence#) lhmax
               from v$log_history
               where resetlogs_change#=(select resetlogs_change# from v$database)
               group by thread#) lh
         where al.thrd = lh.thrd;
        Thread Last Seq Received Last Seq Applied
    ---------- ----------------- ----------------
             1               361              361

    Read the article

  • What are some good tips for a developer trying to design a scalable MySQL database?

    - by CFL_Jeff
    As the question states, I am a developer, not a DBA. I have experience with designing good ER schemas and am fairly knowledgeable about normalization and good schema design. I have also worked with data warehouses that use dimensional modeling with fact tables and dim tables. However, all of the database-driven applications I've developed at previous jobs have been internal applications on the company's intranet, never receiving "real-world traffic". Furthermore, at previous jobs, I have always had a DBA or someone who knew much more than me about these things. At this new job I just started, I've been asked to develop a public-facing application with a MySQL backend and the data stored by this application is expected to grow very rapidly. Oh, and we don't have a DBA. Well, I guess I am the DBA. ;) As far as designing a database to be scalable, I don't even know where to start. Does anyone have any good tips or know of any good educational materials for a developer who has been sort of shoved into a DBA/database designer role and has been tasked with designing a scalable database to support an application like this? Have any other developers been through this sort of thing? What did you do to quickly become good at this role? I've found some good slides on the subject here but it's hard to glean details from slides. Wish I could've attended that guy's talk. I also found a good blog entry called 5 Ways to Boost MySQL Scalability which had some good information, though some of it was over my head. tl;dr I just want to make sure the database doesn't have to be completely redesigned when it scales up, and I'm looking for tips to get it right the first time. The answer I'm looking for is a "list of things every developer should know about making a scalable MySQL database so your application doesn't perform like crap when the data gets huge".

    Read the article

  • Oracle OpenWorld Update: Oracle GoldenGate Customer Panels

    - by Doug Reid
    We are two weeks out from the start of Oracle OpenWorld 2012. The Data Integration team has a solid line-up of product and customer sessions for you to attend this year, plus five hands-on labs, and numerous demonstration pods in Moscone South. On Monday we kick the track off with Brad Adelberg’s Future Strategy, Direction and Roadmap for Oracle’s Data Integration Platform at 10:45AM in Moscone West 3005. Over the rest of the week we have a number of deep dive sessions that build out the themes that Brad discusses in his keynote, but the two that I would like to highlight today are our Oracle GoldenGate customer panels.
    The first customer panel is on Zero Downtime Operations and is on Monday at 1:45 in Moscone West 3005. The theme of this session is how to reduce downtime for critical must-succeed systems. Here’s a rundown of the session: Bank of America, TALX, and St. Jude Medical all have user communities that expect systems to be available around the clock. In this customer panel session, Bank of America discusses how it will be leveraging Oracle GoldenGate. St. Jude Medical shares how it is using Oracle GoldenGate to achieve a zero-downtime migration for a 5 TB Oracle online transaction processing (OLTP) 24/7 mission-critical database. TALX discusses how Equifax Workforce Information Services used Oracle GoldenGate to move from processing online transactions in a single site to processing concurrently from two geographically disparate data centers, providing a highly available solution with significant burst capacity.
    On Tuesday at 11:45 in Moscone West 3005 we switch gears and host a customer panel on Operational Reporting. The theme of this customer panel is all around reporting and how Oracle GoldenGate raises the bar on reporting by enabling real-time access to real-time data. Here’s a rundown of the session: Turk Telekom and Comcast are half a world away from each other, but these two powerhouse companies have both drastically improved performance and access to real-time data by using Oracle GoldenGate. During this panel discussion, Turk Telekom will explain its evaluation and implementation of Oracle GoldenGate, how the business has experienced significant improvements in the core database and reporting platform, and how it plans to expand its usage into its SOA architecture and its architecture based on Oracle’s Siebel platform. Comcast will explain its implementation of Oracle GoldenGate and how it moves data in real time from its mission-critical HP NonStop database to a Teradata data warehouse.
    Join us at our sessions to learn what other customers are doing with our products or stop by our demo pods in Moscone South and meet the product management and development teams.

    Read the article

  • OPN Exchange @ OpenWorld – Don’t Forget…

    - by Kristin Rose
    Mark your calendar because we’re less than a week away from kicking off our first ever Oracle PartnerNetwork Exchange @ OpenWorld program, and do we have a lot in store for you! So don’t forget to attend these great partner events!
    Sunday, 9/30:
    The Global Partner Keynote with Judson Althoff and other senior executives @ 1:00pm
    OPN Exchange General Sessions to discuss the overview of each OPN Exchange track including Cloud, Engineered Systems, Industries, Technology and Applications @ 3:30pm
    The exclusive OPN Exchange AfterDark Reception complete with the smooth sounds of Macy Gray @ 7:30pm
    Don’t worry, there is plenty to come after Sunday! Be sure to take part in all the exciting activity taking place during the week, including:
    Over 40+ OPN Exchange Sessions taking place at the Marriott Marquis throughout the week
    “Test Fest” exams for OPN Specialist Certifications, taking place throughout the week
    The 5k Partner Fun Run - Meet at the W Hotel lobby on Monday 10/1 at 6 a.m. PT – No registration necessary! Led by Judson Althoff, SVP of WWA&C.
    Social Media Rally Station - Join us in the OPN Lounge on Monday to become social savvy and leverage social media tools for your business
    Ice Cream Social - Monday October 1st, from 3-5:30 p.m. in the OPN Lounge. Hosted by Oracle Advanced Customer Support Services.
    Endless Networking Opportunities at the OPN Lounge, the Howard Street Tent for lunch, the ‘It’s a Wrap Reception’, and much more!
    We can’t wait to see you there! The OPN Communications Team

    Read the article

  • Oracle AutoVue Key Highlights from Oracle OpenWorld 2012

    - by Celine Beck
    We closed another successful Oracle Open World for AutoVue. Thanks to everyone who joined us this year. As usual, from customer presentations to evening networking activities, there was enough to keep us busy during the entire event. Here is a summary of some of the key highlights of the conference: Sessions:We had two AutoVue-specific sessions during Oracle Open World this year. The first session was part of the Product Lifecycle Management track and covered how AutoVue can be used to help drive effective decision making and streamline design for manufacturing processes. Attendees had the opportunity to learn from customer speaker GLOBALFOUNDRIES how they have been leveraging Oracle AutoVue within Agile PLM to enable high degree of collaboration during the exceptionally creative phases of their product development processes, securely, without risking valuable intellectual property. If you are interested, you can actually download the presentation by visiting launch.oracle.com/?plmopenworld2012.AutoVue was also featured as part of the Utilities track. This session focused on how visualization solutions play a critical role in effective plant optimization and configuration strategies defined by owners and operators of power generation facilities. Attendees learnt how integrated with document management systems, and enterprise applications like Oracle Primavera and Asset Lifecycle Management, AutoVue improves change management processes; minimizes risks by providing access to accurate engineering drawings which capture and reflect the as-maintained status of assets; and allows customers to drive complex maintenance projects to successful completion.Augmented Business Visualization for Agile PLMDuring Oracle Open World, we also showcased an Augmented Business Visualization-based solution for Oracle Agile PLM. An Augmented Business Visualization (ABV) solution is one where your structured data (from Oracle Agile PLM for instance) and your unstructured data (documents, designs, 3D models, etc) come together to allow you to make better decisions (check out our blog posts on the topic: Augment the Value of Your Data (or Time to replace the “attach” button) and Context is Everything ). As part of the Agile PLM, the idea is to support more effective decision-making by turning 3D assemblies into color-coded reports, and streamlining business processes like Engineering Change Management by enabling the automatic creation of engineering change requests in Agile PLM directly from documents being viewed in AutoVue. More on this coming soon...probably during the Oracle Value Chain Summit to be held in San Francisco, from Feb. 4-6, 2013 in San Francisco! Mark your calendars and stay tuned for more information! And thanks again for joining us at Oracle OpenWorld!

    Read the article

  • Oracle Systems and Solutions at OpenWorld Tokyo 2012

    - by ferhat
    Oracle OpenWorld Tokyo and JavaOne Tokyo will start next week April 4th. We will cover Oracle systems and Oracle Optimized Solutions in several keynote talks and general sessions. Full schedule can be found here. Come by the DemoGrounds to learn more about mission critical integration and optimization of complete Oracle stack. Our Oracle Optimized Solutions experts will be at hand to discuss 1-1 several of Oracle's systems solutions and technologies. Oracle Optimized Solutions are proven blueprints that eliminate integration guesswork by combing best in class hardware and software components to deliver complete system architectures that are fully tested, and include documented best practices that reduce integration risks and deliver better application performance. And because they are highly flexible by design, Oracle Optimized Solutions can be implemented as an end-to-end solution or easily adapted into existing environments. Oracle Optimized Solutions, Servers,  Storage, and Oracle Solaris  Sessions, Keynotes, and General Session Talks DAY TIME TITLE Notes Session Wednesday  April 4 9:00 - 11:15 Keynote: ENGINEERED FOR INNOVATION - Engineered Systems Mark Hurd,  President, Oracle Takao Endo, President & CEO, Oracle Corporation Japan John Fowler, EVP of Systems, Oracle Ed Screven, Chief Corporate Architect, Oracle English Session K1-01 11:50 - 12:35 Simplifying IT: Transforming the Data Center with Oracle's Engineered Systems Robert Shimp, Group VP, Product Marketing, Oracle English Session S1-01 15:20 - 16:05 Introducing Tiered Storage Solution for low cost Big Data Archiving S1-33 16:30 - 17:15 Simplifying IT - IT System Consolidation that also Accelerates Business Agility S1-42 Thursday  April 5 9:30 - 11:15 Keynote: Extreme Innovation Larry Ellison, Chief Executive Officer, Oracle English Session K2-01 11:50 - 13:20 General Session: Server and Storage Systems Strategy John Fowler, EVP of Systems, Oracle English Session G2-01 16:30 - 17:15 Top 5 Reasons why ZFS Storage appliance is "The cloud storage" by SAKURA Internet Inc L2-04 16:30 - 17:15 The UNIX based Exa* Performance IT Integration Platform - SPARC SuperCluster S2-42 17:40 - 18:25 Full stack solutions of hardware and software with SPARC SuperCluster and Oracle E-Business Suite  to minimize the business cost while maximizing the agility, performance, and availability S2-53 Friday April 6 9:30 - 11:15 Keynote: Oracle Fusion Applications & Cloud Robert Shimp, Group VP, Product Marketing Anthony Lye, Senior VP English Session K3-01 11:50 - 12:35 IT at Oracle: The Art of IT Transformation to Enable Business Growth English Session S3-02 13:00-13:45 ZFS Storagge Appliance: Architecture of high efficient and high performance S3-13 14:10 - 14:55 Why "Niko Niko doga" chose ZFS Storage Appliance to support their growing requirements and storage infrastructure By DWANGO Co, Ltd. 
S3-21 15:20 - 16:05 Osaka University: Lower TCO and higher flexibility for student study by Virtual Desktop By Osaka University S3-33 Oracle Developer Sessions with Oracle Systems and Oracle Solaris DAY TIME TITLE Notes LOCATION Friday April 6 13:00 - 13:45 Oracle Solaris 11 Developers D3-03 13:00 - 14:30 Oracle Solaris Tuning Contest Hands-On Lab D3-04 14:00 - 14:35 How to build high performance and high security Oracle Database environment with Oracle SPARC/Solaris English Session D3-13 15:00 - 15:45 IT Assets preservation and constructive migration with Oracle Solaris virtualization D3-24 16:00 - 17:30 The best packaging system for cloud environment - Creating an IPS package D3-34 Follow Oracle Infrared at Twitter, Facebook, Google+, and LinkedIn  to catch the latest news, developments, announcements, and inside views from  Oracle Optimized Solutions.

    Read the article

  • Oracle Customer Experience Summit @ OpenWorld

    - by Christie Flanagan
    This first-ever Oracle Customer Experience Summit @ OpenWorld kicked off yesterday, bringing together established thought leaders and practitioners in customer experience. The first day saw noted marketing and customer experience thought leader, Seth Godin, take the stage to discuss how rapidly accelerating change and adoption are driving new behaviors and higher expectations in a massively disruptive transformation in which the customer now holds the power. His presentation gave us in-depth insight into this always-connected, always-sharing experience revolution we are witnessing.If you haven’t yet made it over to the Oracle Customer Experience Summit at The Westin St. Francis and the recently made over Oracle Square (aka Union Square), there’s still time today and tomorrow to network with industry peers and hear best practices from those who have steered their ventures through the disruptive trends of customer experience and have proven, successful strategies to share for driving strategic customer-centric initiatives. If you’re interested in learning how Oracle WebCenter helps businesses meet the demands of the customer experience revolution, be sure to check out these sessions at the Oracle Customer Experience Summit later today:Using the Online Customer Experience to Drive Engagement and Marketing Success Thursday, Oct 4, 4:15 PM - 5:15 PM - St. Francis - GeorgianMariam Tariq - Senior Director Product Management, Oracle Stephen Schleifer - Senior Principal Product Manager, Oracle Richard Backx - Business IT Architect/Consultant, KPN NL Netco CE Channels Online The online channel is a critical means of reaching and engaging customers. Online marketing efforts today must be targeted, interactive, and consistent to provide customers with a seamless experience. These efforts must include integrated management of Web, mobile, and social channels—supported by cross-channel customer data and campaigns—and integration with commerce to drive an engaging and differentiated online customer experience. Attend this session to learn how you can use the online channel to increase customer loyalty and drive the success of your marketing initiatives.Empowering Your Frontline Employees: Sales and Service Enterprise Collaboration Thursday, Oct 4, 5:30 PM - 6:30 PM - St. Francis - Elizabethan ABStephen Fioretti - VP, Product Management, Oracle Peter Doolan - Group Vice President, Sales Engineering, Oracle Andrew Kershaw - Sr Director Business Development, Oracle Marty Marcinczyk - VP Customer Experience Engineering, Comcast A focus on the employee experience is critical, because it can make or break your customers’ experiences, directly or indirectly. Engaged and empowered frontline employees become your best advocates and inspire your brand champions. This session explores proven approaches and tools, including social collaboration tools, that can help you empower and enable your frontline teams to improve customer and employee experiences.And before you go, you'll also want to explore the Innovation Tents in Oracle Square which feature leading-edge customer experience demonstrations; attend our customer journey mapping workshop; and learn at sessions focused on innovating differentiated experiences that drive cross-functional alignment.

    Read the article

  • OpenWorld in Small Bites

    - by Kathryn Perry
    Fifty thousand attendees -- that's bigger than the cities some of us live in. Monday morning it took 20 minutes to get from Hall D in Moscone North to a conference room in Moscone South -- the crowds were crushing! A great start to a great week!

    Larry is as big a name as ever on the program schedule and on the Moscone stage. People were packed into Hall D and clustered around every big-screen TV. He stayed on script as he laid out Oracle's SaaS, PaaS, and IaaS strategies.

    Every seat in Chris Leone's Fusion Apps Cloud Overview was filled on Monday morning. Oracle employees who wanted to get in were turned away. And the same thing happened in the repeat session on Wednesday. Our newest suite of apps is hot!

    Speaking of hot, the weather was made to order. Then it turned very San Francisco-like on Wednesday afternoon. Downright cold for those who trusted SF temps to hold in the 80s.

    Who did you follow on Twitter during the conference? So many voices, opinions, and convos! Great combo of social media and sharp minds. Be sure to follow @larryellison, @stevenrmiranda, and @Oracle for updates and MyPOVs.

    Keywords for the Apps customers at the conference were cloud, mobile, and social. Every day, every session, every speaker.

    Wednesday afternoon, 4 pm at the Four Seasons hotel. A large roomful of analysts and influencers firing questions at a panel of eight Fusion customers. Steve Miranda moderating. Good energy and a great exchange of information and confidence.

    Word on the street is that OpenWorld has outgrown San Francisco -- but moving it seems unthinkable. The city isn't just a backdrop for an industry conference - it's a headliner right up there with Larry Ellison and Pearl Jam.

    As you can imagine, electrical outlets were in high demand at every venue. The most popular hotels and bars near Moscone designed their interiors around accessible electrical power strips. People are plenty willing to buy a drink while they grab a charge.

    Treasure Island in the dark. Eddie Vedder has an amazing voice! And Kings of Leon overdelivered on people's expectations. It was cold. It was windy. It was very fun. One analyst said it's the best customer appreciation party in the industry.

    Read the article

  • Manic Monday - More OpenWorld Solaris Sessions: Developers, Cloud, Customer Insights, Hardware Optimization

    - by Larry Wake
    We're overflowing with Monday sessions; literally more than one person can take in. Learn more about what's new in Oracle Solaris Studio, hear about the latest x86 and SPARC hardware optimizations, get some insights on cloud deployment strategies, and find out from your peers what they're doing with Oracle Solaris. If you're an OpenWorld attendee, go to Schedule Builder to guarantee your space in any session or lab. See yesterday's blog post and the "Focus on Oracle Solaris" guide for even more sessions.

    Monday, October 1st:

    10:45 AM - Maximizing Your SPARC T4 Oracle Solaris Application Performance (CON6382, Marriott Marquis - Golden Gate C3)
    Hear how customers and commercial software partners have reached peak performance on SPARC T4 servers and engineered systems with Oracle Solaris Studio and its latest tools for analyzing, reporting, and improving runtime performance: autoparallelizing, high-performance compilers; Performance Analyzer (used to find performance hotspots); Thread Analyzer (to expose data races and deadlocks); and Code Analyzer (used to discover latent memory corruption issues).

    10:45 AM - Cloud Formation: Implementing IaaS in Practice with Oracle Solaris (CON8787, Moscone South 302)
    Decisions, decisions -- at the same time, we've got a session that covers why Oracle Solaris is the ideal OS for public or private clouds, IaaS or PaaS, with built-in features for elastic infrastructure, unrivaled security, superfast installation and deployment, nonstop availability, and crystal-clear observability. This session will include a customer study on how Oracle Solaris is used in the cloud today to implement the Oracle stack.

    12:15 PM - Customer Insight: Oracle Solaris on Oracle Exadata, Oracle Exalogic, and SPARC SuperCluster (CON8760, Moscone South 270)
    Hear from customers what benefits they have realized from using the Oracle stack on Oracle Exadata and Oracle's SPARC SuperCluster, and from using Oracle Solaris on those engineered systems, taking advantage of built-in lightweight OS virtualization (Zones), enterprise reliability and scale, and other key features.

    1:45 PM - Case Study: Mobile Tornado Uses Oracle Technology for Better RAS and TCO (CON4281, Moscone West 2005)
    Mobile Tornado develops and markets instant communication platforms, replacing traditional radio networks with cellular networks. Its critical concern is uptime. Find out how they've used Oracle Solaris, Netra SPARC T4, and Oracle Solaris Cluster, including Oracle Solaris ZFS and Zones, for their Oracle Database deployments to improve reliability and drive down cost.

    3:15 PM - Technical Panel: Developing High Performance Applications on Oracle Solaris (CON7196, Marriott Marquis - Golden Gate C2)
    Engineers from the Oracle Solaris, Oracle Database, and Oracle Tuxedo development teams, and Oracle ISV Engineering, discuss how they develop high-performance enterprise applications that take advantage of Oracle's SPARC and x86 servers, with Oracle Solaris Studio and new Oracle Solaris 11 features. Topics will include developer tools, parallel frameworks, best practices, and methodologies, as well as insights and case studies on parallelizing and optimizing application performance on Oracle Solaris. Bring your best questions!

    3:15 PM - x86 Power Management with Oracle Solaris: Current State, Opportunities, and Future (CON6271, Moscone West 2012)
    Another option for this time slot: learn about how Intel Xeon and Oracle Solaris work together to reduce server power consumption. This presentation addresses some of the recent power management improvements in Oracle Solaris, opportunities to further improve energy efficiency, and some future directions for Oracle Solaris power management.

    Read the article

  • Oracle OpenWorld / JavaOne Where I'll Be

    - by Shay Shmeltzer
    It's that time of the year again when San Francisco gets flooded with Oracle and Java geeks for the annual OpenWorld and JavaOne conferences. Here are some of the places where you'll be able to find me:

    Sunday has a bunch of great ADF content in the ADF Enterprise Methodology Group track - I'm not sure if I'll make it there, but I'm sure those who do will get some serious knowledge transfer.

    I'm starting Monday at the Keynote for Developers (10:45 in Salon 8 at the Marriott) - that's a great place for ADF developers to start the official week with an overview of what's new and upcoming in the world of development with ADF. While I'm not presenting this session - Chris Tonas, who leads the development tools org, will - a demo that I built will be shown. So I'll be sitting in the audience crossing my fingers and praying to the demo gods (and for the wifi connection to work). My presentation part of the week starts on Monday at 12:15 in Moscone South room 306, where I'll be presenting "CON3004 - Understanding Oracle ADF and Its Role in Oracle Fusion": a basic introduction to ADF, its architecture, the development experience, and how it integrates and works with the rest of the Fusion Middleware components. After the session, between 2 and 4, I'll be at the JDeveloper demo booth in Moscone South to answer any questions people might have. Then at 6:15, together with Grant, we'll host BOF4492 - How to Get Started with Oracle ADF, where we'll try to explain some of the learning paths and resources that are available for people who want to start learning ADF. This is a birds-of-a-feather, so we'd also love to hear ideas from the audience about what paths they took and what things work or need improvement.

    Tuesday is a relatively quiet day for me, with a shift at the Oracle ADF Essentials pod at JavaOne from 1:30-3:30. There are several very good ADF architecture and best practices sessions on that day - so I'll try to hit those.

    Wednesday starts with another shift at the JDeveloper booth at JavaOne. Then at 4:30, instead of doing what all the ADF developers should do and heading over to the ADF meetup at the OTN Lounge, I'll be heading over to JavaOne for my CON3770 - Oracle JDeveloper and Oracle ADF: What's New session. It's been a couple of years since the last time JDeveloper or ADF got any airtime at JavaOne - so it will be a great opportunity to show those in the Java community with open minds our approach to Java development. Now that ADF Essentials offers a free way to develop with ADF on GlassFish, I hope we'll be getting more people from the core Java camp interested in what we have to offer.

    Thursday is another relaxed day for me - who knows, maybe I'll even be able to catch a session or two on that day. If you want to learn more about the ADF-related sessions at OOW, check out our full list here.

    Read the article

  • split a database web application - good idea or bad idea?

    - by Khou
    Is it a bad idea to split up an application and its database?

    Application1 uses database1 on ServerX.
    Application2 uses database2 on ServerY.

    Both applications communicate over a web service API, and they are part of the same overall application: one manages the user's profile/personal data, while the other manages the user's financial data.

    Or should I just put them together and use one database on the same server?
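    For illustration only, here is a minimal sketch of what the cross-application call might look like if the two parts stay split: the financial application (Application2, backed by database2) asks the profile application (Application1, backed by database1) for profile data over HTTP instead of reading database1 directly. The host name, URL path, and JSON shape are assumptions for the sketch, not details from the question.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Hypothetical client inside Application2 (financial data). It never touches
    // database1; every cross-cutting lookup goes through Application1's web service API.
    public class ProfileClient {

        // Assumed endpoint exposed by Application1 on ServerX.
        private static final String PROFILE_API = "http://serverx.example.com/api/users/";

        public static String fetchProfile(String userId) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(PROFILE_API + userId))
                    .header("Accept", "application/json")
                    .GET()
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                // The profile service being unreachable is now a failure mode Application2 must handle.
                throw new IllegalStateException("Profile service returned " + response.statusCode());
            }
            return response.body(); // JSON profile document owned by database1
        }

        public static void main(String[] args) throws Exception {
            System.out.println(fetchProfile("42"));
        }
    }

    The trade-off the sketch makes visible: keeping the databases separate isolates the financial data and lets each half be scaled, secured, and deployed independently, but every lookup that spans both halves becomes a network call that has to deal with latency and with the other service being down, instead of a simple join in one database.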

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >