Search Results

Search found 31891 results on 1276 pages for 'database schema'.


  • Access 2010 datasheet view only/relationships unavailable

    - by Luis
    I'm relatively new to MS Access in general and have just started working with Access 2010. I've created a new web database with a few tables that I need to relate.

    First problem: for the life of me, I can't view anything in any view other than Datasheet view; everywhere I would expect to be able to change the view, only Datasheet view is available.

    Second problem: I can't change the primary key(s). Presumably I would be able to do this if I could get out of Datasheet view and into Design view.

    Third problem: the Relationships button is greyed out.

    I know these appear to be really simple things, but I've been looking for much more time than I'd like to admit, trying to figure out how to get unstuck.

    Update: it would appear that this is happening because it is a 'web database', as I've been able to do all of the above in a new regular database. With this in mind, let me ask a different question: am I able to add relationships and change primary keys in a web database? If so, how? More generally, what is the point of a web database?

    Read the article

  • Apache httpd + FreeTDS hangs until restarted

    - by Jordan Reiter
    Every so often, requests to a Linux server (say, linux.example.org) where the web app (Django) pulls in data from a SQL Server database via FreeTDS will hang. Requests on other servers pointing to the same database still work, as do requests on linux.example.org that use local MySQL databases. Only that server plus FreeTDS appears to be affected. Restarting httpd makes the database connections work correctly again. What could cause this problem?

    Using: CentOS 5.9, freetds 0.91, Apache httpd 2.2.3

    /etc/odbc.ini:

        [DSN]
        Description = SQL Server 2005
        Driver = FreeTDS
        ;Database = dbname
        Servername = SERVERNAME
        ;TDS_Version = 8.0

    /etc/freetds.conf:

        [SERVERNAME]
        driver = /usr/lib64/libtdsodbc.so
        host = db.example.org
        port = 1433
        tds version = 8.0
        client charset = UTF-8

    Read the article

  • After creating a MySQL user with all privileges, the user cannot create databases in phpMyAdmin and only sees the information_schema database

    - by GHarping
    This is a recurring problem for some reason... Using MySQL 5.5, I am simply trying to create a user that can connect to the database remotely, have access to all databases, and create databases. I created the user using:

        create user 'dev'@'%' identified by 'abcdefg';

    then granted all permissions using:

        GRANT ALL ON *.* to 'dev'@'192.168.%' IDENTIFIED BY 'abcdefg' WITH GRANT OPTION;

    The result is that the user cannot create databases, and for some reason can only see the information_schema database. phpMyAdmin shows:

        Databases
        Create database: No Privileges
        Database (Ascending)
          information_schema
        Total: 1

    Does anyone know why this might be happening?
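    A side note for anyone landing on the same symptom: in MySQL, 'dev'@'%' and 'dev'@'192.168.%' are two distinct accounts, so the GRANT above quietly creates a second account instead of adding privileges to the one created first. Which account a session matches depends on the client's host, and the unprivileged one only sees information_schema. A minimal sketch of a consistent sequence (host pattern and password are placeholders):

        -- Create and grant against the SAME user@host pair:
        CREATE USER 'dev'@'192.168.%' IDENTIFIED BY 'abcdefg';
        GRANT ALL PRIVILEGES ON *.* TO 'dev'@'192.168.%' WITH GRANT OPTION;
        FLUSH PRIVILEGES;

        -- Then verify which account rows actually exist:
        SELECT user, host FROM mysql.user WHERE user = 'dev';
        SHOW GRANTS FOR 'dev'@'192.168.%';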

    Read the article

  • How to set up federated SQL data to display limited information to a Web server in the DMZ?

    - by Pcav
    I have a SQL server behind a firewall. I need to push some limited SQL 2005 information to a Web server in the DMZ so that I do not have to let database queries come all the way into the database server on our internal network. I want to push a small amount of dynamic data to a Web server in the DMZ and lock it down so that our hosted website does not need to come into the internal network for information. The server in the DMZ will be the only machine allowed any sort of connection to the back-end SQL database, so the hosting provider just pulls the data from our server in the DMZ...
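    A sketch of one way this kind of setup is commonly wired up: give the DMZ server its own copy (or subset) of the data and expose only a narrow view to a dedicated read-only login. Rough T-SQL, with illustrative object names:

        -- On the DMZ-side SQL Server instance that holds the pushed subset of data:
        CREATE LOGIN WebSiteReader WITH PASSWORD = 'UseAStrongPasswordHere!1';
        CREATE USER WebSiteReader FOR LOGIN WebSiteReader;
        GO

        -- Publish only the columns the hosted website needs, through a view:
        CREATE VIEW dbo.PublicProductInfo AS
            SELECT ProductCode, ProductName, ListPrice
            FROM dbo.Products;
        GO

        GRANT SELECT ON dbo.PublicProductInfo TO WebSiteReader;
        -- No other permissions are granted, so the website can read this view and nothing else.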

    Read the article

  • Which open source/free CMSs allow for staging content changes before putting live?

    - by elliot100
    I'm not sure that I've phrased the question all that well. What I'm really looking for is a feature of CMSs where content changes are made on a restricted-access 'staging/preview' site before being published to the live external site. The open source/free CMSs I've looked at so far (Textpattern, WordPress, Movable Type) don't seem to allow this, as far as I can see. Although they allow new content to be saved as draft/pending, viewable by users with appropriate privileges, this doesn't work with changes to existing content -- a post/page can't be live and also have a new version pending. (Do correct me if I'm wrong.) I realise it should be possible to do this by making all changes on a staging site and then replicating the contents of that database to a separate live site manually, but I am looking for something a little more elegant.

    Edit: Just to clarify, both systems which involve synchronising a live database with a staging database, and systems which offer live/staging views of a single database, would be of interest. I am sure I have seen both approaches in commercial/proprietary CMSs.

    Read the article

  • Preventing users from deleting SQL data

    - by me2011
    We just purchased a program that requires its users to have an account on the MS SQL server, with read/write access to the program's database. My concern is that since these users will now have write access to the database, they could connect to the SQL server directly, outside of the program's client, and then mess with the data in the tables. Is there any way I can prevent that kind of direct access to the database while still allowing access via the client program?
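    For readers with the same problem: SQL Server's application roles are designed for exactly this case; the users' own logins get no table permissions, and only the client program (if it can be configured to activate the role) unlocks access after connecting. A sketch of the idea, with made-up names, assuming the vendor program supports application roles:

        -- Users get no direct permissions on the tables; the application role does:
        CREATE APPLICATION ROLE ProgramAppRole WITH PASSWORD = 'An0therStrongPassword!';
        GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO ProgramAppRole;

        -- The client program activates the role immediately after it connects:
        EXEC sp_setapprole @rolename = 'ProgramAppRole', @password = 'An0therStrongPassword!';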

    Read the article

  • Using LDAP/Active Directory with PIN-based authentication

    - by nishantjr
    We'd like to integrate our service with LDAP, but because of hardware constraints, we're only allowed 4-digit user IDs and passwords. What would be the best practice for performing such an authentication? We've considered adding User ID and PIN attributes to the LDAP user schema, but we're not sure how happy people would be about modifying their schema to interact with our service. The PIN attribute would have to have the same support that native user passwords have (hashing and salting, etc.).

    Update: Another consideration is how ldap_bind works with this scenario. How do we get it to use an alternate authentication method? Can this even be done without affecting other services that use the same LDAP server?

    Read the article

  • Removing extra commas in CSV without another data source

    - by fi-no
    We have a large database of customer addresses that was exported from an SQL database to CSV. In the event that a company has a comma in its name, it (predictably) throws the whole file out of whack. Unfortunately, there are so many instances of this (and of commas in the second address line) that the whole CSV (~100k rows) is a huge mess. The obvious fix is to export the data again in a different, non-comma-reliant format, but access to that SQL database is more or less impossible at the moment... I've tried a few tools and brainstormed about combining things to fix this, but I figured asking couldn't hurt. Thanks!
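    For what it's worth, if access to the source database ever comes back and it happens to support a quoted export (the sketch below is MySQL syntax; table and path are placeholders), re-exporting with an explicit quote character avoids the problem at the root:

        -- Quote every field so embedded commas can no longer break the columns:
        SELECT *
        INTO OUTFILE '/tmp/customers_fixed.csv'
            FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
            LINES TERMINATED BY '\n'
        FROM customers;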

    Read the article

  • Apache: handling multiple domains

    - by cache
    So I use the following scheme to handle multiple sites on my Apache:

        <VirtualHost 192.168.1.100:80>
            # get the server name from the Host: header
            UseCanonicalName Off
            VirtualDocumentRoot /var/www/%0/docs
            VirtualScriptAlias /var/www/%0/cgi-bin
        </VirtualHost>

    Therefore, if a client goes to www.example.com, it will actually point to /var/www/www.example.com/docs/, which is good. However, what if the client goes to example.com? It will point to /var/www/example.com/docs, which is not what we want. So my question is: is there any better scheme for that? Or what should I do to fix the issue? Thanks!

    Read the article

  • Class not found exception

    - by theJava
    org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mySessionFactory' defined in ServletContext resource [/WEB-INF/dispatcher-servlet.xml]: Initialization of bean failed; nested exception is org.springframework.beans.TypeMismatchException: Failed to convert property value of type 'java.util.ArrayList' to required type 'java.lang.Class[]' for property 'annotatedClasses'; nested exception is java.lang.IllegalArgumentException: Cannot find class [com.vinoth.domain.User]

        org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:527)
        org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
        org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291)
        org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
        org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288)
        org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190)
        org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:563)
        org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895)
        org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425)
        org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:442)
        org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:458)
        org.springframework.web.servlet.FrameworkServlet.initWebApplicationContext(FrameworkServlet.java:339)
        org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:306)
        org.springframework.web.servlet.HttpServletBean.init(HttpServletBean.java:127)
        javax.servlet.GenericServlet.init(GenericServlet.java:212)
        org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
        org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174)
        org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:879)
        org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
        org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
        org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
        org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
        java.lang.Thread.run(Unknown Source)

    I have that class in my source, and here is my bean config, yet when I run the application it throws me this error.

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:p="http://www.springframework.org/schema/p"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans.xsd">

          <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"
                p:prefix="/WEB-INF/jsp/" p:suffix=".jsp" />

          <bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
            <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
            <property name="url" value="jdbc:mysql://127.0.0.1:3306/spring"/>
            <property name="username" value="monty"/>
            <property name="password" value="indian"/>
          </bean>

          <bean id="mySessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
            <property name="dataSource" ref="myDataSource" />
            <property name="annotatedClasses">
              <list>
                <value>com.vinoth.domain.User</value>
              </list>
            </property>
            <property name="hibernateProperties">
              <props>
                <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
                <prop key="hibernate.show_sql">true</prop>
                <prop key="hibernate.hbm2ddl.auto">create</prop>
              </props>
            </property>
          </bean>

          <bean id="myUserDAO" class="com.vinoth.dao.UserDAOImpl">
            <property name="sessionFactory" ref="mySessionFactory"/>
          </bean>

          <bean name="/user/*.htm" class="com.vinoth.web.UserController">
            <property name="userDAO" ref="myUserDAO" />
          </bean>
        </beans>

    Read the article

  • NHibernate MySQL Composite-Key

    - by LnDCobra
    I am trying to create a composite key that mimics the set of primary keys in the built-in MySQL Db table. The Db primary key is as follows:

        Field | Type     | Null
        ------+----------+-----
        Host  | char(60) | No
        Db    | char(64) | No
        User  | char(16) | No

    This is my DataBasePrivilege.hbm.xml file:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="TGS.MySQL.DataBaseObjects"
                           namespace="TGS.MySQL.DataBaseObjects">
          <class name="TGS.MySQL.DataBaseObjects.DataBasePrivilege,TGS.MySQL.DataBaseObjects" table="db">
            <composite-id name="CompositeKey" class="TGS.MySQL.DataBaseObjects.DataBasePrivilegePrimaryKey, TGS.MySQL.DataBaseObjects">
              <key-property name="Host" column="Host" type="char" length="60" />
              <key-property name="DataBase" column="Db" type="char" length="64" />
              <key-property name="User" column="User" type="char" length="16" />
            </composite-id>
          </class>
        </hibernate-mapping>

    The following are my two classes for the composite key:

        namespace TGS.MySQL.DataBaseObjects
        {
            public class DataBasePrivilege
            {
                public virtual DataBasePrivilegePrimaryKey CompositeKey { get; set; }
            }

            public class DataBasePrivilegePrimaryKey
            {
                public string Host { get; set; }
                public string DataBase { get; set; }
                public string User { get; set; }

                public override bool Equals(object obj)
                {
                    if (ReferenceEquals(null, obj)) return false;
                    if (ReferenceEquals(this, obj)) return true;
                    if (obj.GetType() != typeof (DataBasePrivilegePrimaryKey)) return false;
                    return Equals((DataBasePrivilegePrimaryKey) obj);
                }

                public bool Equals(DataBasePrivilegePrimaryKey other)
                {
                    if (ReferenceEquals(null, other)) return false;
                    if (ReferenceEquals(this, other)) return true;
                    return Equals(other.Host, Host) && Equals(other.DataBase, DataBase) && Equals(other.User, User);
                }

                public override int GetHashCode()
                {
                    unchecked
                    {
                        int result = (Host != null ? Host.GetHashCode() : 0);
                        result = (result*397) ^ (DataBase != null ? DataBase.GetHashCode() : 0);
                        result = (result*397) ^ (User != null ? User.GetHashCode() : 0);
                        return result;
                    }
                }
            }
        }

    And the following is the exception I am getting:

        Execute System.InvalidCastException: Unable to cast object of type 'System.Object[]' to type 'TGS.MySQL.DataBaseObjects.DataBasePrivilegePrimaryKey'.
          at (Object , GetterCallback )
          at NHibernate.Bytecode.Lightweight.AccessOptimizer.GetPropertyValues(Object target)
          at NHibernate.Tuple.Component.PocoComponentTuplizer.GetPropertyValues(Object component)
          at NHibernate.Type.ComponentType.GetPropertyValues(Object component, EntityMode entityMode)
          at NHibernate.Type.ComponentType.GetHashCode(Object x, EntityMode entityMode)
          at NHibernate.Type.ComponentType.GetHashCode(Object x, EntityMode entityMode, ISessionFactoryImplementor factory)
          at NHibernate.Engine.EntityKey.GenerateHashCode()
          at NHibernate.Engine.EntityKey..ctor(Object identifier, String rootEntityName, String entityName, IType identifierType, Boolean batchLoadable, ISessionFactoryImplementor factory, EntityMode entityMode)
          at NHibernate.Engine.EntityKey..ctor(Object id, IEntityPersister persister, EntityMode entityMode)
          at NHibernate.Event.Default.DefaultLoadEventListener.OnLoad(LoadEvent event, LoadType loadType)
          at NHibernate.Impl.SessionImpl.FireLoad(LoadEvent event, LoadType loadType)
          at NHibernate.Impl.SessionImpl.Get(String entityName, Object id)
          at NHibernate.Impl.SessionImpl.Get(Type entityClass, Object id)
          at NHibernate.Impl.SessionImpl.Get[T](Object id)
          at TGS.MySQL.DataBase.DataProvider.GetDatabasePrivilegeByHostDbUser(String host, String db, String user) in C:\Documents and Settings\Michal\My Documents\Visual Studio 2008\Projects\TGS\TGS.MySQL.DataBase\DataProvider.cs:line 20
          at TGS.UserAccountControl.UserAccountManager.GetDatabasePrivilegeByHostDbUser(String host, String db, String user) in C:\Documents and Settings\Michal\My Documents\Visual Studio 2008\Projects\TGS\TGS.UserAccountControl\UserAccountManager.cs:line 10
          at TGS.UserAccountControlTest.UserAccountManagerTest.CanGetDataBasePrivilegeByHostDbUser() in C:\Documents and Settings\Michal\My Documents\Visual Studio 2008\Projects\TGS\TGS.UserAccountControlTest\UserAccountManagerTest.cs:line 12

    I am new to NHibernate and any help would be appreciated. I just can't see where it is getting the object[] from. Is the composite key supposed to be an object[]?

    Read the article

  • XML and XSD - use element name as a replacement for xsi:type for polymorphism

    - by disown
    Taking the W3C vehicle XSD as an example:

        <schema xmlns="http://www.w3.org/2001/XMLSchema"
                targetNamespace="http://cars.example.com/schema"
                xmlns:target="http://cars.example.com/schema">
          <complexType name="Vehicle" abstract="true"/>
          <complexType name="Car">
            <complexContent>
              <extension base="target:Vehicle"/>
              ...
            </complexContent>
          </complexType>
          <complexType name="Plane">
            <complexContent>
              <extension base="target:Vehicle"/>
              <sequence>
                <element name="wingspan" type="integer"/>
              </sequence>
            </complexContent>
          </complexType>
        </schema>

    ...and the following definition of 'meansOfTravel':

        <complexType name="MeansOfTravel">
          <complexContent>
            <sequence>
              <element name="transport" type="target:Vehicle"/>
            </sequence>
          </complexContent>
        </complexType>
        <element name="meansOfTravel" type="target:MeansOfTravel"/>

    With this definition you need to specify the type of your instance using xsi:type, like this:

        <meansOfTravel>
          <transport xsi:type="Plane">
            <wingspan>3</wingspan>
          </transport>
        </meansOfTravel>

    I would just like to achieve a 'name of type' to 'name of element' mapping so that this could be replaced with just:

        <meansOfTravel>
          <plane>
            <wingspan>3</wingspan>
          </plane>
        </meansOfTravel>

    The only way I have found to do this so far is by making it explicit:

        <complexType name="MeansOfTravel">
          <sequence>
            <choice>
              <element name="plane" type="target:Plane"/>
              <element name="car" type="target:Car"/>
            </choice>
          </sequence>
        </complexType>
        <element name="meansOfTravel" type="target:MeansOfTravel"/>

    But this means that I have to list all possible sub-types in the 'MeansOfTravel' complex type. Is there no way of making the XML parser assume that you mean a 'Plane' if you call the element 'plane'? Or do I have to make the choice explicit? I would just like to keep my design DRY; if you have any other suggestions (like groups or so) I would love to hear them.

    Read the article

  • Why do I get a blank page from my Perl CGI script?

    - by Jason
    The user enters a product code, price, and name using a form. The script then either adds the product to the database or deletes it from the database. If the user is trying to delete a product that is not in the database, they get an error message. Upon successfully adding or deleting they also get a message. However, when I test it I just get a blank page. Perl doesn't come up with any warnings, syntax errors or anything; it says everything is fine, but I still just get a blank page. The script:

        #!/usr/bin/perl
        #c09ex5.cgi - saves data to and removes data from a database

        print "Content-type: text/html\n\n";

        use CGI qw(:standard);
        use SDBM_File;
        use Fcntl;
        use strict;

        #declare variables
        my ($code, $name, $price, $button, $codes, $names, $prices);

        #assign values to variables
        $code = param('Code');
        $name = param('Name');
        $price = param('Price');
        $button = param('Button');

        ($code, $name, $price) = format_input();
        ($codes, $names, $prices) = ($code, $name, $price);

        if ($button eq "Save") {
            add();
        }
        elsif ($button eq "Delete") {
            remove();
        }
        exit;

        sub format_input {
            $codes =~ s/^ +//;
            $codes =~ s/ +$//;
            $codes =~ tr/a-z/A-Z/;
            $codes =~ tr/ //d;

            $names =~ s/^ +//;
            $names =~ s/ +$//;
            $names =~ tr/ //d;
            $names = uc($names);

            $prices =~ s/^ +//;
            $prices =~ s/ +$//;
            $prices =~ tr/ //d;
            $prices =~ tr/$//d;
        }

        sub add {
            #declare variable
            my %candles;

            #open database, format and add record, close database
            tie(%candles, "SDBM_File", "candlelist", O_CREAT|O_RDWR, 0666)
                or die "Error opening candlelist. $!, stopped";
            format_vars();
            $candles{$codes} = "$names,$prices";
            untie(%candles);

            #create web page
            print "<HTML>\n";
            print "<HEAD><TITLE>Candles Unlimited</TITLE></HEAD>\n";
            print "<BODY>\n";
            print "<FONT SIZE=4>Thank you, the following product has been added.<BR>\n";
            print "Candle: $codes $names $prices</FONT>\n";
            print "</BODY></HTML>\n";
        } #end add

        sub remove {
            #declare variables
            my (%candles, $msg);

            tie(%candles, "SDBM_File", "candlelist", O_RDWR, 0)
                or die "Error opening candlelist. $!, stopped";
            format_vars();

            #determine if the product is listed
            if (exists($candles{$codes})) {
                delete($candles{$codes});
                $msg = "The candle $codes $names $prices has been removed.";
            }
            else {
                $msg = "The product you entered is not in the database";
            }

            #close database
            untie(%candles);

            #create web page
            print "<HTML>\n";
            print "<HEAD><TITLE>Candles Unlimited</TITLE></HEAD>\n";
            print "<BODY>\n";
            print "<H1>Candles Unlimited</H1>\n";
            print "$msg\n";
            print "</BODY></HTML>\n";
        }

    Read the article

  • Can I create an XML that specifies elements from 2 nested XSDs without using prefixes?

    - by TweeZz
    I have 2 XSDs which are nested.

    DefaultSchema.xsd:

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema id="DefaultSchema"
                   targetNamespace="http://myNamespace.com/DefaultSchema.xsd"
                   elementFormDefault="qualified"
                   xmlns="http://myNamespace.com/DefaultSchema.xsd"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:complexType name="ZForm">
            <xs:sequence minOccurs="0" maxOccurs="unbounded">
              <xs:element name="Part" minOccurs="0" maxOccurs="unbounded" type="Part"/>
            </xs:sequence>
            <xs:attribute name="Title" use="required" type="xs:string"/>
            <xs:attribute name="Version" type="xs:int"/>
          </xs:complexType>
          <xs:complexType name="Part">
            <xs:sequence minOccurs="0" maxOccurs="unbounded">
              <xs:element name="Label" type="Label" minOccurs="0"></xs:element>
            </xs:sequence>
            <xs:attribute name="Title" use="required" type="xs:string"/>
          </xs:complexType>
          <xs:complexType name="Label">
            <xs:simpleContent>
              <xs:extension base="xs:string">
                <xs:attribute name="Title" type="xs:string"/>
              </xs:extension>
            </xs:simpleContent>
          </xs:complexType>
        </xs:schema>

    ExportSchema.xsd (this one kinda wraps one more element (ZForms) around the main element (ZForm) of the DefaultSchema):

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema id="ExportSchema"
                   targetNamespace="http://myNamespace.com/ExportSchema.xsd"
                   elementFormDefault="qualified"
                   xmlns="http://myNamespace.com/DefaultSchema.xsd"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   xmlns:es="http://myNamespace.com/ExportSchema.xsd">
          <xs:import namespace="http://myNamespace.com/DefaultSchema.xsd" schemaLocation="DefaultSchema.xsd"/>
          <xs:element name="ZForms" type="es:ZFormType"></xs:element>
          <xs:complexType name="ZFormType">
            <xs:sequence>
              <xs:element name="ZForm" type="ZForm" maxOccurs="unbounded" />
            </xs:sequence>
          </xs:complexType>
        </xs:schema>

    And then finally I have a generated XML:

        <?xml version="1.0" encoding="utf-8"?>
        <ZForms xmlns="http://myNamespace.com/ExportSchema.xsd">
          <ZForm Version="1" Title="FormTitle">
            <Part Title="PartTitle">
              <Label Title="LabelTitle" />
            </Part>
          </ZForm>
        </ZForms>

    Visual Studio complains that it doesn't know what 'Part' is. I was hoping I would not need to use XML namespace prefixes to make this XML validate, since ExportSchema.xsd has a reference to DefaultSchema.xsd. Is there any way to make that XML structure valid without explicitly specifying the DefaultSchema.xsd? Or is this a no-go?

    Read the article

  • How to select table column names in a view and pass them to the controller in Rails?

    - by zachd1_618
    So I am new to Rails, and OO programming in general. I have some grasp of the MVC architecture. My goal is to make a (nearly) completely dynamic plug-and-play plotting web server. I am fairly confused by params, forms, and select helpers. What I want to do is use Rails drop-downs to pass parameters as strings to my controller, which will use the params to select certain column data from my database and plot it dynamically. I have the latter part of the task working, but I can't seem to pass values from my view to my controller.

    For simplicity's sake, say my database schema looks like this:

        -------------- Plot ---------------
        |____x____|____y1____|____y2____|
        |    1    |    1     |    1     |
        |    2    |    2     |    4     |
        |    3    |    3     |    9     |
        |    4    |    4     |    16    |
        |    5    |    5     |    25    |

    ...and in my model I have dynamic selector scopes that let me select just certain columns of data, in Plot.rb:

        class Plot < ActiveRecord::Base
          scope :select_var, lambda { |varname| select(varname) }
          scope :between_x, lambda { |x1, x2| where("x BETWEEN ? and ?", "#{x1}", "#{x2}") }
        end

    So this way, I can call (in irb):

        @p1 = Plot.select_var(['x', 'y1']).between_x(1, 3)

    and get in return a class where @p1.x and @p1.y1 are my only attributes, only for values from x=1 to x=3, which I dynamically plot.

    I want to start off in a view (plot/index) where I can dynamically select which variable names (table column names) and which rows from the database to fetch and plot. The problem is that most select helpers don't seem to work with columns in the database, only rows. So to select columns, I first get an array of column names that exist in my database with a function I wrote, in the Plots controller:

        def index
          d = Plot.first
          @tags = d.list_vars
        end

    So @tags = ['x', 'y1', 'y2']. Then in plot/index.html.erb I try to use a drop-down to select which variables I send back to the controller:

        <%= select_tag( :variable, options_for_select(@plots.first.list_vars, :name, :multiple => :true) ) %>
        <%= button_to 'Plot now!', :controller => "plots/plot_vars", :variable => params[:variable] %>

    Finally, in the Plots controller again:

        def plot_vars
          @plot_data = Plot.select_vars([params[:variable]])
        end

    The problem is that every time I try this (or one of a hundred variations thereof), params[:variable] is nil. How can I use a drop-down to pass a parameter with string variable names to the controller? Sorry it's so long; I have been struggling with this for about a month now. :-(

    I think my biggest problem is that this setup doesn't really match the Rails architecture. I don't have "users" and "articles" as individual entities. I really have a data structure, not a data object. Trying to work with the structure in terms of data-object speak is not necessarily the easiest thing to do, I think.

    For background: my actual database has about 250 columns and a couple million rows, and they get changed and modified from time to time. I know I can make the database smarter, but it's not worth it on my end. I work at a scientific institute where there are a ton of projects with databases just like this. Each one has a web developer that spends months setting up a web interface and their own janky plotting setup. I want to make this completely dynamic, as a plug-and-play solution, so all you have to do is specify your database connection and this Rails setup will automatically show and plot the data you want. I am more of a sequential programmer and number cruncher, as are many people here. I think this project could be very helpful in the end, but it's difficult to figure out for me right now.

    Read the article

  • Entity Association Mapping with Code First Part 1: Mapping Complex Types

    - by mortezam
    Last week the CTP5 build of the new Entity Framework Code First was released by the data team at Microsoft. Entity Framework Code First provides a pretty powerful code-centric way to work with databases. When it comes to associations, it brings ultimate flexibility. I'm a big fan of the EF Code First approach and am planning to explain association mapping with Code First in a series of blog posts, and this one is dedicated to complex types. If you are new to the Code First approach, you can find a great walkthrough here. In order to build a solid foundation for our discussion, we will start by learning about some of the core concepts around relationship mapping.

    What is Mapping?
    Mapping is the act of determining how objects and their relationships are persisted in permanent data storage, in our case, relational databases.

    What is Relationship Mapping?
    A mapping that describes how to persist a relationship (association, aggregation, or composition) between two or more objects.

    Types of Relationships
    There are two categories of object relationships that we need to be concerned with when mapping associations. The first category is based on multiplicity and it includes three types:

      - One-to-one relationships: a relationship where the maximum of each of its multiplicities is one.
      - One-to-many relationships: also known as a many-to-one relationship, this occurs when the maximum of one multiplicity is one and the other is greater than one.
      - Many-to-many relationships: a relationship where the maximum of both multiplicities is greater than one.

    The second category is based on directionality and it contains two types:

      - Uni-directional relationships: when an object knows about the object(s) it is related to but the other object(s) do not know of the original object. In EF terminology, this is when a navigation property exists only on one of the association ends and not on both.
      - Bi-directional relationships: when the objects on both ends of the relationship know of each other (i.e. a navigation property is defined on both ends).

    How Object Relationships Are Implemented in POCO Domain Models
    When the multiplicity is one (e.g. 0..1 or 1) the relationship is implemented by defining a navigation property that references the other object (e.g. an Address property on the User class). When the multiplicity is many (e.g. 0..*, 1..*) the relationship is implemented via an ICollection of the type of the other object.

    How Relational Database Relationships Are Implemented
    Relationships in relational databases are maintained through the use of foreign keys. A foreign key is a data attribute(s) that appears in one table and must be the primary key or other candidate key in another table. With a one-to-one relationship the foreign key needs to be implemented by one of the tables. To implement a one-to-many relationship we implement a foreign key from the "one table" to the "many table". We could also choose to implement a one-to-many relationship via an associative table (aka join table), effectively making it a many-to-many relationship.

    Introducing the Model
    Now, let's review the model that we are going to use in order to implement a complex type with Code First. It's a simple object model consisting of two classes: User and Address. Each user can have one billing address. The address information of a User is modeled as a separate class, shown as a UML composition in the original post's diagram. In object-modeling terms, this association is a kind of aggregation, a part-of relationship. Aggregation is a strong form of association; it has some additional semantics with regard to the lifecycle of objects. In this case, we have an even stronger form, composition, where the lifecycle of the part is fully dependent upon the lifecycle of the whole.

    Fine-Grained Domain Models
    The motivation behind this design was to achieve fine-grained domain models. In crude terms, fine-grained means "more classes than tables". For example, a user may have both a billing address and a home address. In the database, you may have a single User table with the columns BillingStreet, BillingCity, and BillingPostalCode along with HomeStreet, HomeCity, and HomePostalCode. There are good reasons to use this somewhat denormalized relational model (performance, for one). In our object model, we can use the same approach, representing the two addresses as six string-valued properties of the User class. But it's much better to model this using an Address class, where User has the BillingAddress and HomeAddress properties. This object model achieves improved cohesion and greater code reuse, and is more understandable.

    Complex Types: Splitting a Table Across Multiple Types
    Back to our model: there is no difference between this composition and other, weaker styles of association when it comes to the actual C# implementation. But in the context of ORM, there is a big difference: a composed class is often a candidate complex type. C# has no concept of composition; a class or property can't be marked as a composition. The only difference is the object identifier: a complex type has no individual identity (i.e. no AddressId defined on the Address class), which makes sense because when it comes to the database everything is going to be saved into one single table.

    How to Implement a Complex Type with Code First
    Code First has a concept of complex type discovery that works based on a set of conventions. The convention is that if Code First discovers a class where a primary key cannot be inferred, and no primary key is registered through Data Annotations or the fluent API, then the type will be automatically registered as a complex type. Complex type detection also requires that the type does not have properties that reference entity types (i.e. all the properties must be scalar types) and is not referenced from a collection property on another type. Here is the implementation:

        public class User
        {
            public int UserId { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string Username { get; set; }
            public Address Address { get; set; }
        }

        public class Address
        {
            public string Street { get; set; }
            public string City { get; set; }
            public string PostalCode { get; set; }
        }

        public class EntityMappingContext : DbContext
        {
            public DbSet<User> Users { get; set; }
        }

    With Code First, this is all of the code we need to write to create a complex type; we do not need to configure any additional database schema mapping information through Data Annotations or the fluent API.

    Database Schema
    The mapping result for this object model is a single table containing both the User's own scalar properties and the columns for the Address complex type.

    Limitations of This Mapping
    There are two important limitations to classes mapped as complex types:

      - Shared references are not possible: the Address complex type doesn't have its own database identity (primary key) and so can't be referred to by any object other than the containing instance of User (e.g. a Shipping class that also needs to reference the same User Address).
      - No elegant way to represent a null reference: there is no elegant way to represent a null reference to an Address. When reading from the database, EF Code First always initializes the Address object even if values in all mapped columns of the complex type are null. This means that if you store a complex type object with all null property values, EF Code First returns an initialized complex type when the owning entity object is retrieved from the database.

    Summary
    In this post we learned about fine-grained domain models, of which complex type is just one example. Fine-grained modeling is fully supported by EF Code First and is known as the most important requirement for a rich domain model. A complex type is usually the simplest way to represent a one-to-one relationship, and because the lifecycle is almost always dependent in such a case, it's either an aggregation or a composition in UML. In the next posts we will revisit the same domain model and learn about other ways to map a one-to-one association that do not have the limitations of complex types.

    References
    ADO.NET team blog; Mapping Objects to Relational Databases; Java Persistence with Hibernate
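    Since the schema diagram from the original post doesn't survive here: by convention, Code First flattens the complex type into the owning table using ComplexTypeName_PropertyName column names, so the generated table looks roughly like the following sketch (the exact column types are illustrative, not guaranteed by CTP5):

        CREATE TABLE [Users] (
            [UserId]             int IDENTITY(1,1) NOT NULL PRIMARY KEY,
            [FirstName]          nvarchar(max) NULL,
            [LastName]           nvarchar(max) NULL,
            [Username]           nvarchar(max) NULL,
            [Address_Street]     nvarchar(max) NULL,
            [Address_City]       nvarchar(max) NULL,
            [Address_PostalCode] nvarchar(max) NULL
        );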

    Read the article

  • Upgrading log shipping from 2005 to 2008 or 2008R2

    - by DavidWimbush
    If you're using log shipping you need to be aware of some small print. The general idea is to upgrade the secondary server first and then the primary server, because you can continue to log ship from 2005 to 2008 R2. But this won't work if you're keeping your secondary databases in STANDBY mode rather than IN RECOVERY. If you're using native log shipping you'll have some work to do. If you've rolled your own log shipping (ahem) you can convert a STANDBY database to IN RECOVERY like this:

        RESTORE DATABASE [dw] WITH NORECOVERY;

    ...and then change your restore code to use WITH NORECOVERY instead of WITH STANDBY. (Finally all that aggravation pays off!) You can either upgrade the secondary server in place or rebuild it. A secondary database doesn't actually get upgraded until you recover it, so the log sequence chain is not broken and you can continue shipping from the primary. Just remember that it can take quite some time to upgrade a database, so you need to factor that into the expectations you give people about how long it will take to fail over. For more details, check this out: http://msdn.microsoft.com/en-us/library/cc645954(SQL.105).aspx
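    To make the required change concrete, here's a hedged before-and-after of the restore step itself (backup path and undo file name are placeholders):

        -- Before: the secondary is readable between restores (STANDBY), which blocks the upgrade path
        RESTORE LOG [dw]
            FROM DISK = N'\\backupserver\logs\dw_20100101.trn'
            WITH STANDBY = N'D:\SQLData\dw_undo.dat';

        -- After: the secondary stays IN RECOVERY, so 2005-to-2008/R2 shipping keeps working
        RESTORE LOG [dw]
            FROM DISK = N'\\backupserver\logs\dw_20100101.trn'
            WITH NORECOVERY;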

    Read the article

  • How to deal with transport level security policy with OSB

    - by Jian Liang
    Recently, we received a use case for Oracle Service Bus (OSB) 11g PS4 to consume a web service which is secured by an HTTP transport-level security policy. The WSDL of the remote web service looks like the following, where the wsp:Policy block shows the security policy:

        <?xml version='1.0' encoding='UTF-8'?>
        <definitions xmlns:wssutil="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
                     xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
                     xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
                     xmlns:tns="https://httpsbasicauth"
                     xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                     xmlns="http://schemas.xmlsoap.org/wsdl/"
                     targetNamespace="https://httpsbasicauth"
                     name="HttpsBasicAuthService">
          <wsp:UsingPolicy wssutil:Required="true"/>
          <wsp:Policy wssutil:Id="WSHttpBinding_IPartyServicePortType_policy">
            <wsp:ExactlyOne>
              <wsp:All>
                <ns1:TransportBinding xmlns:ns1="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy">
                  <wsp:Policy>
                    <ns1:TransportToken>
                      <wsp:Policy>
                        <ns1:HttpsToken RequireClientCertificate="false"/>
                      </wsp:Policy>
                    </ns1:TransportToken>
                    <ns1:AlgorithmSuite>
                      <wsp:Policy>
                        <ns1:Basic256/>
                      </wsp:Policy>
                    </ns1:AlgorithmSuite>
                    <ns1:Layout>
                      <wsp:Policy>
                        <ns1:Strict/>
                      </wsp:Policy>
                    </ns1:Layout>
                  </wsp:Policy>
                </ns1:TransportBinding>
                <ns2:UsingAddressing xmlns:ns2="http://www.w3.org/2006/05/addressing/wsdl"/>
              </wsp:All>
            </wsp:ExactlyOne>
          </wsp:Policy>
          <types>
            <xsd:schema>
              <xsd:import namespace="https://proxyhttpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=1"/>
            </xsd:schema>
            <xsd:schema>
              <xsd:import namespace="https://httpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=2"/>
            </xsd:schema>
          </types>
          <message name="echoString">
            <part name="parameters" element="tns:echoString"/>
          </message>
          <message name="echoStringResponse">
            <part name="parameters" element="tns:echoStringResponse"/>
          </message>
          <portType name="HttpsBasicAuth">
            <operation name="echoString">
              <input message="tns:echoString"/>
              <output message="tns:echoStringResponse"/>
            </operation>
          </portType>
          <binding name="HttpsBasicAuthSoapPortBinding" type="tns:HttpsBasicAuth">
            <wsp:PolicyReference URI="#WSHttpBinding_IPartyServicePortType_policy"/>
            <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/>
            <operation name="echoString">
              <soap:operation soapAction=""/>
              <input>
                <soap:body use="literal"/>
              </input>
              <output>
                <soap:body use="literal"/>
              </output>
            </operation>
          </binding>
          <service name="HttpsBasicAuthService">
            <port name="HttpsBasicAuthSoapPort" binding="tns:HttpsBasicAuthSoapPortBinding">
              <soap:address location="https://localhost:7002/WS/HttpsBasicAuthService"/>
            </port>
          </service>
        </definitions>

    The security assertion in the WSDL (the wsp:Policy block above) indicates that this is an HTTP transport-level security policy which requires one-way SSL with default authentication (aka basic authentication with username/password). Normally, there are two ways to handle web service security policies with OSB 11g: use WebLogic 9.x policies, or use OWSM. Since OSB doesn't support WebLogic 9.x WSSP transport-level assertions (except for the WS transport), when we tried to create the business service based on the imported WSDL, OSB complained with the following message:

        [OSB Kernel:398133] The service is based on WSDL with Web Services Security Policies that
        are not natively supported by Oracle Service Bus. Please select OWSM Policies - From OWSM
        Policy Store option and attach equivalent OWSM security policy. For the Business Service,
        either you can add the necessary client policies manually by clicking Add button or you
        can let Oracle Service Bus automatically pick and add compatible client policies by
        clicking Add Compatible button.

    Unfortunately, when we tried OWSM, we couldn't find http_token_policy in OWSM, since OSB PS4 doesn't support the OWSM http_token_policy. It seems that we ran into an unsupported situation where no appropriate policy can be used from either WebLogic or OWSM. As this security policy requires one-way SSL with basic authentication at the transport level, a possible workaround is to meet the remote service's requirement at the transport level without using a web service policy. We can simply use OSB to establish the SSL connection and provide the username/password for authentication at the transport level to the remote web service. In this case, the business service within OSB will be transparent to the web service policy. However, we still need to deal with the OSB console's complaint about the unsupported security policy, because the failure of WSDL validation prevents the OSB console from moving forward. With help from the OSB Product Management team, we finally came up with the following solutions.

    Solution 1: OSB PS5
    The good news is that the http_token_policy is made available in OSB PS5. With OSB PS5, you can simply add the OWSM oracle/wss_http_token_over_ssl_client_policy to the business service. The simplest solution is to upgrade to OSB PS5, where the OWSM solution is provided out of the box. But if you are not in a position where upgrading is an immediate option, you might want to consider the other two workaround solutions described below.

    Solution 2: Modifying the WSDL
    This solution addresses the OSB console's complaint by removing the security policy from the imported WSDL within OSB. Without the security policy, the OSB console allows the business service to be created based on the modified WSDL. Please bear in mind, modifying the WSDL is done only on the OSB side via the OSB console; no change is required on the remote web service. The main steps of this solution:

      1. Connect to the OSB console.
      2. Import the remote WSDL into OSB.
      3. Remove the security assertion (the wsp:Policy block shown above) from the imported WSDL.
      4. Create a service account. In our sample, we simply take the user weblogic.
      5. Create the business service, check "Basic" for Authentication, and select the created service account.
      6. Make sure that OSB consumes the web service via https.

    This solution requires modifying the WSDL. It is suitable for any OSB version (10g or 11g) prior to PS5 without OWSM. However, modifying the WSDL by hand is troublesome, as it requires the user to remember that the original WSDL was edited. It forces you to make the same edit each time you want to re-import the service WSDL when changes occur at the service level. This also prevents you from using UDDI to import the WSDL.

    Solution 3: Using the original WSDL
    This solution keeps the WSDL intact and ignores the embedded policy by using OWSM. By design, OWSM doesn't like a WSDL with an embedded security assertion. Since OWSM doesn't provide a feature to explicitly ignore the embedded policy of a remote WSDL, in this solution we use OWSM in a tricky way to ignore the embedded policy.

      1. Connect to the OSB console.
      2. Import the remote WSDL into OSB.
      3. Create a service account.
      4. Create the business service, check "Basic" for Authentication, and select the created service account.
      5. As the imported WSDL is intact, the OSB Kernel:398133 error is expected; ignore it for the moment and navigate to the Policies page of the business service.
      6. Select "From OWSM Policy Store" and click the "Add" button; the list of policies will pop up.
      7. Here is the tricky part: select an arbitrary policy, and click "Cancel".
      8. Update and save.

    By clicking the "Cancel" button, we didn't add any OWSM policy to the business service, but the embedded policy is now ignored. Yes, this is tricky. According to the Oracle OSB Product Manager, a future release of OWSM will add a "None" button which allows the embedded policy to be ignored explicitly. This solution keeps the imported WSDL intact, which is the big advantage over solution 2. It is suitable for an OSB 11g domain (prior to PS5) with OWSM configured.

    This blog addressed an unsupported transport-level web service security policy with OSB PS4. To summarize: if you are using OSB PS5 or are in a position to upgrade to PS5, the recommendation is to use the OWSM out-of-the-box transport-level security policy directly. With releases prior to 11g PS5, you can consider solution 2 or 3, depending on whether OWSM is configured.

    Read the article

  • Oracle Announces Oracle Big Data Appliance X3-2 and Enhanced Oracle Big Data Connectors

    - by jgelhaus
    Enables Customers to Easily Harness the Business Value of Big Data at Lower Cost

    Engineered System Simplifies Big Data for the Enterprise
    Oracle Big Data Appliance X3-2 hardware features the latest 8-core Intel® Xeon E5-2600 series of processors, and compared with the previous generation, the 18 compute and storage servers with 648 TB raw storage now offer:

      - 33 percent more processing power with 288 CPU cores;
      - 33 percent more memory per node with 1.1 TB of main memory; and
      - up to a 30 percent reduction in power and cooling.

    Oracle Big Data Appliance X3-2 further simplifies implementation and management of big data by integrating all the hardware and software required to acquire, organize and analyze big data. It includes:

      - Support for CDH4.1, including software upgrades developed collaboratively with Cloudera to simplify NameNode High Availability in Hadoop, eliminating the single point of failure in a Hadoop cluster;
      - Oracle NoSQL Database Community Edition 2.0, the latest version, which brings better Hadoop integration, elastic scaling and new APIs, including JSON and C support;
      - The Oracle Enterprise Manager plug-in for Big Data Appliance, which complements Cloudera Manager to enable users to more easily manage a Hadoop cluster;
      - Updated distributions of Oracle Linux and the Oracle Java Development Kit;
      - An updated distribution of open source R, optimized to work with high-performance multi-threaded math libraries.

    Read more:

      - Data sheet: Oracle Big Data Appliance X3-2
      - Oracle Big Data Appliance: Datacenter Network Integration
      - Big Data and Natural Language: Extracting Insight From Text
      - Thomson Reuters Discusses Oracle's Big Data Platform

    Connectors Integrate Hadoop with Oracle Big Data Ecosystem
    Oracle Big Data Connectors is a suite of software built by Oracle to integrate Apache Hadoop with Oracle Database, Oracle Data Integrator, and Oracle R Distribution. Enhancements to Oracle Big Data Connectors extend these data integration capabilities. With updates to every connector, this release includes:

      - Oracle SQL Connector for Hadoop Distributed File System, for high-performance SQL queries on Hadoop data from Oracle Database, enhanced with increased automation and querying of Hive tables, and now supported within the Oracle Data Integrator Application Adapter for Hadoop;
      - Transparent access to the Hive Query Language from R and the introduction of new analytic techniques executing natively in Hadoop, enabling R developers to be more productive by increasing access to Hadoop in the R environment.

    Read more:

      - Data sheet: Oracle Big Data Connectors
      - High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database

    Read the article

  • How many different servers are needed to keep a website running with no downtime? [closed]

    - by Mason Wheeler
    Machines go down. It's a fact of life. They may need to be rebooted for some reason, or they may have a hardware failure, or a power outage. So if I wanted to deploy a website with a server backed by a SQL database, putting the whole thing on one server wouldn't be good enough. It obviously needs at least two servers, so that if one goes down, the other can pick up the slack until the first comes back up. Of course, if I have the server software on two machines, either one of which could go down, I can't place the database on either of those two machines, because it could go down. So the database needs its own server. But that server can go down, so I need a backup database server and some sort of replication system to keep it in sync, so the main can fail over to it. So far, that's a bare minimum of 4 machines to keep one website running with a reasonable chance of no downtime (assuming no catastrophic events take place that take down both front-end servers at once or both DB servers at once, and no hacks, DDoS attacks, etc.). Am I missing any other factors, or should I consider 4 servers to be the minimum for running a website with a goal of continuing operation without downtime even when a server goes down?

    Read the article

  • SQL Sharding and SQL Azure…

    - by Dave Noderer
    Herve Roggero has just published a paper that outlines patterns for scaling using SQL Azure and the Blue Syntax (his and Scott Klein's company) sharding API. You can find the paper at http://www.bluesyntax.net/files/EnzoFramework.pdf. Herve and Scott have also just released an Apress book, Pro SQL Azure. The idea of being able to split (shard) database operations automatically and control them from a web-based management console is very appealing. These ideas have been talked about for a long time and implemented in thousands of very custom ways that have been costly, complicated and fragile. Now there is light at the end of the tunnel. Scaling database access will become easier and move into the mainstream of application development. The main cost is using an API whenever accessing the database. The API will direct the query to the correct database(s), which may be located locally or in the cloud. It is inevitable that the API will change in the future, perhaps to be incorporated into a Microsoft offering. Even if this is the case, your application will have been architected to utilize these patterns, and the details of the actual API will be less important. Herve does a great job of laying out the concepts, which every developer and architect should be familiar with!

    Read the article

  • BizTalk 2009 - Naming Guidelines

    - by StuartBrierley
    The following is effectively a repost of the BizTalk 2004 naming guidelines that I have previously detailed. I have posted them again for completeness under BizTalk 2009, and to allow an element of separation in case I find some reason to amend them for BizTalk 2009. These guidelines should be universal across any version of BizTalk you may wish to apply them to.

    General Rules
    All names should follow the Pascal casing convention.

    Project Namespaces
    For message schemas: [CompanyName].XML.Schemas.[FunctionalName]*
    Examples: ABC.XML.Schemas.Underwriting, DEF.XML.Schemas.MarshmellowTradingExchange
    * Denotes the potential for multiple levels of functional name, such as Underwriting.Dictionary.Valuation

    For web services: [CompanyName].Web.Services.[FunctionalName]
    Example: ABC.Web.Services.OrderJellyBeans

    For the main BizTalk projects: [CompanyName].BizTalk.[AssemblyType].[FunctionalName]*
    Examples: ABC.BizTalk.Mappings.Underwriting, ABC.BizTalk.Orchestrations.Underwriting
    * Denotes the potential for multiple levels of functional name, such as Mappings.Underwriting.Valuations

    Assemblies
    BizTalk assembly names should match the associated project namespace, such as ABC.BizTalk.Mappings.Underwriting. This pertains to the formal assembly name and the DLL name. The solution name should take the name of the main project within the solution, and therefore also the namespace for that project. Although long names such as this can be unwieldy to work with, the benefits of having the full scope available when the assemblies are installed on the target server are generally judged to outweigh this inconvenience.

    Messaging Artifacts

        Artifact | Standard | Notes | Example
        Schema | <DescriptiveName>.xsd | .NET type name should match, without the file extension. .NET namespace will likely match the assembly name. | PurchaseOrderAcknowledge_FF.xsd or FNMA100330_FF.xsd
        Property Schema | <DescriptiveName>.xsd | Should be named to reflect possible common usage across multiple schemas. | IspecMessagePropertySchema.xsd, UnderwritingOrchestrationKeys.xsd
        Map | <SourceSchema>2<DestinationSchema>.btm | Exceptions may be made where the source and destination schemas share the majority of the name, such as in mainframe web service maps. | InstructionResponse2CustomEmailRequest.btm (exception example), AccountCustomerAddressSummaryRequest2MainframeRequest.btm
        Orchestration | <DescriptiveName>.odx | - | GetValuationReports.odx, SendMTEDecisionResponse.odx
        Send/Receive Pipeline | <DescriptiveName>.btp | - | ValidatingXMLReceivePipeline.btp, FlatFileAssembler.btp
        Receive Port | A plainly worded phrase that clearly explains the function. | - | FraudPreventionServices, LetterProcessing
        Receive Location | A plainly worded phrase that clearly explains the function. | (Do we want to include the transport type here?) | Arrears Web Service
        Send Port Group | A plainly worded phrase that clearly explains the function. | - | Customer Updates
        Send Port | A plainly worded phrase that clearly explains the function. | - | ABCProductUpdater, LogLendingPolicyOutput
        Parties | A meaningful name for a trading partner. If dealing with multiple entities within a trading partner organization, the organization name could be used as a prefix. | - | -
        Roles | A meaningful name for the role that a trading partner plays. | - | -

    Orchestration Workflow Shapes

        Shape | Standard | Notes | Example
        Scope | <DescriptionOfContainedWork> or <DescOfContainedWork><TxType> | Including info about the transaction type may be appropriate in some situations where it adds significant documentation value to the diagram. | HandleReportResponse
        Receive | Receive<MessageName> | Typically, MessageName will be the same as the name of the message variable that is being received "into". | ReceiveReportResponse
        Send | Send<MessageName> | Typically, MessageName will be the same as the name of the message variable that is being sent. | SendValuationDetailsRequest
        Expression | <DescriptionOfEffect> | Expression shapes should be named to describe the net effect of the expression, similar to naming a method. The exception is the case where the expression interacts with an external .NET component to perform a function that overlaps with existing BizTalk functionality; use the closest BizTalk shape name for that case. | CreatePrintXML
        Decide | <DescriptionOfDecision> | A description of what will be decided in the "if" branch. | Report Type?, Perform MF Save?
        If-Branch | <DescriptionOfDecision> | A (potentially abbreviated) description of what is being decided. | Mortgage Valuation Yes
        Else-Branch | Else | Else-branch shapes should always be named "Else". | Else
        Construct Message (Assign) | Create<Message> (for the Construct); <ExpressionDescription> (for the expression) | If a Construct shape contains a message assignment, it should be prefixed with "Create" followed by an abbreviated name of the message being assigned. The contained message assignment shape should be named to describe the expression it contains. | CreateReportDataMV, which contains the expression ExtractReportData
        Construct Message (Transform) | Create<Message> (for the Construct); <SourceSchema>2<DestSchema> (for the transform) | If a Construct shape contains a message transform, it should be prefixed with "Create" followed by an abbreviated name of the message being assigned. The contained message transform shape should generally be named the same as the called map. | CreateReportDataMV, which contains the transform ReportDataMV2ReportDataMV
        Construct Message (containing multiple shapes) | - | If a Construct Message shape uses multiple assignments or transforms, the overall shape should be named to communicate the net effect, using no prefix. | -
        Call/Start Orchestration | Call<OrchestrationName>, Start<OrchestrationName> | - | -
        Throw | Throw<ExceptionType> | The corresponding variable name for the exception type should (often) be the same name as the exception type, only camel-cased. | ThrowRuleException, which references the "ruleException" variable
        Parallel | <DescriptionOfParallelWork> | Parallel shapes should be named by a description of what work will be done in parallel. | -
        Delay | <DescriptionOfWhatWaitingFor> | Delay shapes should be named by a description of what is being waited for. | POAcknowledgeTimeout
        Listen | <DescriptionOfOutcomes> | Listen shapes should be named by a description that captures (to the degree possible) all the branches of the Listen shape. | POAckOrTimeout, FirstShippingBid
        Loop | <DescriptionOfLoop> | A (potentially abbreviated) description of what the loop is. | ForEachValuationReport, WhileErrorFlagTrue
        Role Link | - | See "Roles" in the messaging naming conventions above. | -
        Suspend | <ReasonDescription> | Describe what action an administrator must take to resume the orchestration. More detail can be passed to the error property, and should include what should be done by the administrator before resuming the orchestration. | ReEstablishCreditLink
        Terminate | <ReasonDescription> | Describe why the orchestration terminated. More detail can be passed to the error property. | TimeoutsExpired
        Call Rules | Call<PolicyName> | The policy name may need to be abbreviated. | CallLendingPolicy
        Compensate | Compensate or Compensate<TxName> | If the shape compensates nested transactions, names should be suffixed with the name of the nested transaction; otherwise it should simply be Compensate. | CompensateTransferFunds

    Orchestration Types

        Type | Standard | Notes | Example
        Multi-Part Message Types | <LogicalDocumentType> | Multi-part types encapsulate multiple parts. The WSDL spec indicates "parts are a flexible mechanism for describing the logical abstract content of a message." The name of the multi-part type should correspond to the "logical" document type, i.e. what the sum of the parts describes. | InvoiceReceipt (which might encapsulate an invoice acknowledgement and a payment voucher)
        Multi-Part Message Part | <SchemaNameOfPart> | Should be named (most often) simply for the schema (or simple type) associated with the part. | InvoiceHeader
        Messages | <SchemaName> or <MultiPartMessageTypeName> | Should be named based on the corresponding schema type or multi-part message type. If there is more than one variable of a type, name for its use within the orchestration. | ReportDataMV, UpdatedReportDataMV
        Variables | <DescriptiveName> | - | TargetFilePath, StringProcessor
        Port Types | <FunctionDescription>PortType | Should be named to suggest the nature of an endpoint, with Pascal casing and suffixed with "PortType". If there will be more than one port for a port type, the port type should be named according to the abstract service supplied. The WSDL spec indicates port types are "a named set of abstract operations and the abstract messages involved", which also encapsulates the message pattern (i.e. one-way, request-response, solicit-response) that all operations on the port type adhere to. | ReceiveReportResponsePortType or CallEAEPortType (this is a two-way port, so Receive or Send alone would not be appropriate; it could have been ProcessEAERequestPortType, etc.)
        Ports | <FunctionDescription>Port | Should be named to suggest a grouping of functionality, with Pascal casing and suffixed with "Port". | ReceiveReportResponsePort, CallEAEPort
        Correlation Types | <DescriptiveName> | Should be named based on the logical name of what is being used to correlate. | PurchaseOrderNumber
        Correlation Sets | <DescriptiveName> | Should be named based on the corresponding correlation type. If there is more than one, it should be named to reflect its specific purpose within the orchestration. | PurchaseOrderNumber
        Orchestration Parameters | <DescriptiveName> | Should be named to match the caller's names for the corresponding variables where appropriate. | -

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on a Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data..), an English setter could be seen as the link to the right data. Data, data, data: we are living in a world where data technology, based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on, has a predominant role in our lives. More and more technologies are used to analyze and track our behavior, to detect patterns, and to propose "the best/right user experience", from the Google Ad services to telco companies and large consumer sites (like Amazon :) ). The more we use all these technologies, the more data we generate, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) to treat, analyze and understand the trends and to offer new services to users. Some of these "data feeds" are raw, unstructured data that cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, based on MapReduce and a distributed file system (HDFS) that runs on large clusters of rather inexpensive servers. Several Oracle products use this distribute/aggregate pattern for data calculation (Coherence, NoSQL Database, TimesTen), so once you are familiar with one of these technologies, let's say with Coherence aggregators, you will find the whole Hadoop MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running CDH or an equivalent Hadoop cluster. In this paper, a "lab like" implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL Connector for HDFS. The whole setup is fairly simple:

- Install Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 on a Linux x64 server (or a VirtualBox appliance).
- Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1.
- Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen.
- Check the Java version of your Linux server with the command:

    java -version
    java version "1.7.0_40"
    Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
    Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)

- Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1.
- Modify your .bash_profile (also see my sample .bash_profile):

    export HADOOP_HOME=/u01/hadoop-1.2.1
    export PATH=$PATH:$HADOOP_HOME/bin
    export HIVE_HOME=/u01/hive-0.11.0
    export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin

- Set up ssh trust for the Hadoop processes. This is a mandatory step; in our case we have to establish a "local trust", as we are using a single-node configuration: copy the new public key to the list of authorized keys, then connect and test the ssh setup against localhost, as sketched below.
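A minimal sketch of that local ssh trust, assuming the default ~/.ssh layout and an RSA key (adjust the user, paths and key type to your environment):

    ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa          # passphrase-less key pair, acceptable for a demo box
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # authorize the new public key for local logins
    chmod 600 ~/.ssh/authorized_keys                  # ssh refuses keys with loose permissions
    ssh localhost exit                                # should now connect without a password prompt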
We will run a "pseudo Hadoop cluster" in what is called "local standalone mode": all the Hadoop Java components run in one Java process, which is enough for our demo purposes. We then need to "fine tune" some Hadoop configuration files: go to $HADOOP_HOME/conf and modify the files core-site.xml, hdfs-site.xml and mapred-site.xml, and check that the hadoop binaries are referenced correctly from the command line by executing:

    hadoop version

As Hadoop manages our "clustered HDFS" file system, we have to create "the mount point" and format it. The mount point is declared in core-site.xml; the layout under /u01/hadoop-1.2.1/data will be created and used by the other Hadoop components (MapReduce uses the /mapred/... layout, HDFS uses the /dfs/... layout). Format the HDFS file system and start the Java components for the HDFS system. As an additional check, you can use the Hadoop browser GUIs to inspect the content of your HDFS configuration.
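As an illustration, the edits and initialization for this single-node setup could look like the sketch below. The property values are assumptions, inferred from the data directory above and from the hdfs://localhost:19000 URIs that appear later in this post, so check them against your own configuration files:

    # Point HDFS at localhost and keep all runtime data under /u01/hadoop-1.2.1/data
    cat > $HADOOP_HOME/conf/core-site.xml <<'EOF'
    <?xml version="1.0"?>
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:19000</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/u01/hadoop-1.2.1/data</value>
      </property>
    </configuration>
    EOF

    hadoop namenode -format   # format the HDFS file system (one-time step)
    start-dfs.sh              # start the NameNode/DataNode daemons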
Once our HDFS Hadoop setup is done, you can use the HDFS file system to store data (big data :) ) and move it back and forth to Oracle databases by means of the Big Data Connectors, which is the next configuration step. You could create and use a Hive database, but in our case we will make a simple integration of "raw data" through the creation of an external table in a local Oracle instance (on the same Linux box we run the one-node Hadoop HDFS cluster and one Oracle database). Download some public "big data"; I use the site http://france.meteofrance.com/france/observations, from which I can get *.csv files for my big data simulations :). My example file, obsFrance.txt, is a comma-separated set of weather observations with seven columns.

Download the Big Data Connector from OTN (oraosch-2.2.0.zip) and unzip it to your local file system. Modify your environment in order to access the connector libraries, and make the following test:

    [oracle@dg1 bin]$ ./hdfs_stream
    Usage: hdfs_stream locationFile
    [oracle@dg1 bin]$

Load the data into the Hadoop HDFS file system and check that it arrived:

    hadoop fs -mkdir bgtest_data
    hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt

    [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt
    Found 1 items
    -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt
    [oracle@dg1 bg-data-raw]$ hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt
    Found 1 items
    -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt

You can also check the content of the HDFS with the browser UI. Next, start the Oracle database and run the following script to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own database SID; replace it with yours):

    #!/bin/bash
    export ORAENV_ASK=NO
    export ORACLE_SID=dg1
    . oraenv
    sqlplus /nolog <<EOF
    CONNECT / AS sysdba;
    CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin';
    CREATE USER BGUSER IDENTIFIED BY oracle;
    GRANT CREATE SESSION, CREATE TABLE TO BGUSER;
    GRANT EXECUTE ON sys.utl_file TO BGUSER;
    GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER;
    CREATE OR REPLACE DIRECTORY BGT_LOG_DIR AS '/u01/BG_TEST/logs';
    GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR TO BGUSER;
    CREATE OR REPLACE DIRECTORY BGT_DATA_DIR AS '/u01/BG_TEST/data';
    GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR TO BGUSER;
    EOF

Put the following in a file named t3.sh and make it executable:

    hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
      oracle.hadoop.exttab.ExternalTable \
      -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \
      -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \
      -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \
      -D oracle.hadoop.exttab.columnCount=7 \
      -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \
      -D oracle.hadoop.connection.user=BGUSER \
      -D oracle.hadoop.exttab.printStackTrace=true \
      -createTable --noexecute

Then test the creation of the external table with it:

    [oracle@dg1 samples]$ ./t3.sh
    ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory
    Oracle SQL Connector for HDFS Release 2.2.0 - Production
    Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
    Enter Database Password:
    The create table command was not executed.
    The following table would be created.

    CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
    (
      "C1" VARCHAR2(4000),
      "C2" VARCHAR2(4000),
      "C3" VARCHAR2(4000),
      "C4" VARCHAR2(4000),
      "C5" VARCHAR2(4000),
      "C6" VARCHAR2(4000),
      "C7" VARCHAR2(4000)
    )
    ORGANIZATION EXTERNAL
    (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY "BGT_DATA_DIR"
      ACCESS PARAMETERS
      (
        RECORDS DELIMITED BY 0X'0A'
        CHARACTERSET AL32UTF8
        STRING SIZES ARE IN CHARACTERS
        PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
        FIELDS TERMINATED BY 0X'2C'
        MISSING FIELD VALUES ARE NULL
        (
          "C1" CHAR(4000),
          "C2" CHAR(4000),
          "C3" CHAR(4000),
          "C4" CHAR(4000),
          "C5" CHAR(4000),
          "C6" CHAR(4000),
          "C7" CHAR(4000)
        )
      )
      LOCATION
      (
        'osch-20131022081035-74-1'
      )
    ) PARALLEL REJECT LIMIT UNLIMITED;

    The following location files would be created.
    osch-20131022081035-74-1 contains 1 URI, 54103 bytes
        54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

Then remove the --noexecute flag and run t3.sh again to actually create the external Oracle table for the Hadoop data. This time the tool reports success, generating the same DDL with a new location file:

    The create table command succeeded.
    The following location files were created.
    osch-20131022081719-3239-1 contains 1 URI, 54103 bytes
        54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt
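With the external table in place, the HDFS data can be queried like any other Oracle table, or copied into a regular table. A quick sketch reusing the user and table created above (OBS_FRANCE_LOCAL is a hypothetical target name):

    sqlplus BGUSER/oracle <<EOF
    -- each query against the external table streams obsFrance.txt out of HDFS
    SELECT COUNT(*) FROM BGTEST_DP_XTAB;
    -- materialize the HDFS data into a plain Oracle table for repeated local access
    CREATE TABLE OBS_FRANCE_LOCAL AS SELECT * FROM BGTEST_DP_XTAB;
    EOF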
You can inspect the new table from SQL Developer, and finally check the number of lines in the Oracle table, imported from our Hadoop HDFS cluster:

    SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";

      COUNT(*)
    ----------
          1151

In a follow-up post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a first step to show you how this unstructured-data world can be integrated into an Oracle infrastructure. Hadoop, Big Data and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the related Oracle technologies:

NoSQL:
- Introduction to Oracle NoSQL Database
- Using Oracle NoSQL Database

Big Data:
- Introduction to Big Data
- Oracle Big Data Essentials
- Oracle Big Data Overview

Oracle Data Integrator:
- Oracle Data Integrator 12c: New Features
- Oracle Data Integrator 11g: Integration and Administration
- Oracle Data Integrator: Administration and Development
- Oracle Data Integrator 11g: Advanced Integration and Development

Oracle Coherence 12c:
- Oracle Coherence 12c: New Features
- Oracle Coherence 12c: Share and Manage Data in Clusters

Oracle GoldenGate 11g:
- Oracle GoldenGate 11g: Fundamentals for Oracle
- Oracle GoldenGate 11g: Fundamentals for SQL Server
- Oracle GoldenGate 11g: Fundamentals for DB2
- Oracle GoldenGate 11g: Fundamentals for Teradata
- Oracle GoldenGate 11g: Fundamentals for HP NonStop
- Oracle GoldenGate 11g Management Pack: Overview
- Oracle GoldenGate 11g: Troubleshooting and Tuning
- Oracle GoldenGate 11g: Advanced Configuration for Oracle

Other Resources:
- Apache Hadoop: http://hadoop.apache.org/ is the homepage for these technologies.
- "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for people who want to know more about Hadoop, and some active "googling" will also turn up further references.

About the author: Eugene Simos is based in France and joined Oracle through the BEA/WebLogic acquisition, where he worked in Professional Services, Support and Education for major accounts across the EMEA region. He worked in the banking sector and with AT&T and other telco companies, giving him extensive experience of production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses (WebLogic/WebCenter, Content, BPM/SOA, Identity & Security, GoldenGate, Virtualisation, Unified Communications Suite) throughout the EMEA region.

    Read the article

  • Installation procedure RAC One Node

    - by rene.kundersma
    Okay, in order to test RAC One Node on my Oracle VM laptop, I:

- installed Oracle VM 2.2
- created two OEL 5.3 images

The two images are fully prepared for Oracle 11gR2 Grid Infrastructure and 11gR2 RAC, including four shared disks for ASM and private NICs. After installing Oracle 11gR2 Grid Infrastructure and performing a "software only" installation of 11gR2 RAC, I installed patch 9004119, as you can see in the opatch lsinv output. This patch has the scripts required to administer RAC One Node; you will see them later. At the moment we have them available for Linux and Solaris. After installing the patch, I created a RAC database with an instance on one node. Please note that the "Global Database Name" has to be the same as the SID prefix and should be no more than 8 characters. When the database creation is done, I first create a service; this is because RAC One Node needs to be "initialized" each time you add a service. After creating the service, a script called raconeinit needs to be run from $RDBMS_HOME/bin. This script is supplied by the patch; I can imagine the next major 11gR2 patch set will have these scripts available by default. The script configures the database to run on other nodes; running raconeinit again after initialization simply shows that the configuration is already in place. So, now the configuration is ready and we are ready to run 'Omotion' and move the service around from one node to the other (yes, VM competitor: the service stays available during the migration, nice, right?). Omotion is started by running Omotion; with Omotion -v you get verbose output. During the migration you will see two instances active, and after the migration there is only one instance left, on the new node.
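Condensed into commands, the sequence described above is roughly the following. Both patch-supplied scripts are interactive, prompting for the database and candidate nodes, so take this as an outline rather than a copy/paste recipe:

    cd $RDBMS_HOME/bin   # the RAC One Node scripts installed by patch 9004119 live here
    ./raconeinit         # initialize RAC One Node after creating the service
    ./Omotion -v         # migrate the running instance to another node, with verbose output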

    Read the article

< Previous Page | 429 430 431 432 433 434 435 436 437 438 439 440  | Next Page >