Search Results

Search found 1110 results on 45 pages for 'constraint'.

Page 11/45 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Tomcat 403 error after LDAP authentication.

    - by user352636
    I'm currently trying to use an LDAP server to authenticate users who are trying to access our Tomcat setup. I believe I have managed to get the LDAP authentication working in the form of a JNDI realm call from Tomcat, but immediately after the user enters their password Tomcat starts throwing 403 (permission denied) errors for everything except the root page (http://localhost:1337/). I have no idea why this is happening. I am following the example at http://blog.mc-thias.org/?title=tomcat_ldap_authentication&more=1&c=1&tb=1&pb=1 . server.xml (the interesting/changed bits) <Realm className="org.apache.catalina.realm.JNDIRealm" debug="99" connectionURL="ldap://localhost:389" userPattern="uid={0},ou=People,o=test,dc=company,dc=uk" userSubTree="true" roleBase="ou=Roles,o=test,dc=company,dc=uk" roleName="cn" roleSearch="memberUid={1}" /> <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> web.xml (the interesting/changed bits) <security-constraint> <display-name>Security Constraint</display-name> <web-resource-collection> <web-resource-name>Protected Area</web-resource-name> <!-- Define the context-relative URL(s) to be protected --> <url-pattern>/*</url-pattern> <!-- If you list http methods, only those methods are protected --> </web-resource-collection> <auth-constraint> <!-- Anyone with one of the listed roles may access this area --> <role-name>admin</role-name> <role-name>regular</role-name> </auth-constraint> </security-constraint> <!-- Default login configuration uses form-based authentication --> <login-config> <auth-method>BASIC</auth-method> </login-config> <!-- Security roles referenced by this web application --> <security-role> <role-name>admin</role-name> <role-name>regular</role-name> </security-role> I cannot access my LDAP setup at the moment, but I believe it is alright as the login is accepted by the BASIC auth method; it's just Tomcat that is rejecting it. The roles should be as defined in web.xml - admin and regular. If there is any other information you require me to provide, please just ask! My thanks in advance to anyone who can help, and my apologies for any major mistakes I have made - yesterday was pretty much the first time I'd ever heard of LDAP =D. EDIT: Fixed the second xml segment. Apologies for the formatting fail.

    Read the article

  • Configuring a Context specific Tomcat Security Realm

    - by Andy Mc
    I am trying to get a context-specific security Realm in Tomcat 6.0, but when I start Tomcat I get the following error: 09-Dec-2010 16:12:40 org.apache.catalina.startup.ContextConfig validateSecurityRoles INFO: WARNING: Security role name myrole used in an <auth-constraint> without being defined in a <security-role> I have created the following context.xml file: <Context debug="0" reloadable="true"> <Resource name="MyUserDatabase" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/my-users.xml" /> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="MyUserDatabase"/> </Context> Created a file: my-users.xml which I have placed under WEB-INF/conf which contains the following: <tomcat-users> <role rolename="myrole"/> <user username="test" password="changeit" roles="myrole" /> </tomcat-users> Added the following lines to my web.xml file: <web-app ...> ... <security-constraint> <web-resource-collection> <web-resource-name>Entire Application</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>myrole</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>BASIC</auth-method> </login-config> ... </web-app> But I seem to get the error wherever I put conf/my-users.xml. Do I have to specify an explicit PATH in the pathname or is it relative to somewhere? Ideally I would like to have it packaged up as part of my WAR file. Any ideas?

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence); this is not true if you have fewer than 6 updates to the temporary table. This might lead to poor performance of queries which are sensitive to the content of temporary tables. I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, which was part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer case we implemented a workaround to avoid this issue (see below for example workarounds). When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal. We can work around this issue by disabling temporary table caching by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index. I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has only a few rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter: SQL Server: General Statistics->Active Temp Tables.
    The script to understand the issue and the workaround is listed below:

    set nocount on
    set statistics time off
    set statistics io off
    drop table tab7
    go
    create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
    go
    create index test on tab7(c2, c1, c3)
    go
    begin tran
    declare @i int
    set @i = 1
    while @i <= 50000
    begin
    insert into tab7 values (@i, 1, 'a')
    set @i = @i + 1
    end
    commit tran
    go
    insert into tab7 values (50001, 1, 'a')
    go
    checkpoint
    go
    drop proc test_slow
    go
    create proc test_slow @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_slow 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    exec test_slow 2
    go
    drop proc test_with_recompile
    go
    create proc test_with_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    --low reads on 3rd execution as expected for parameter '2'
    exec test_with_recompile 2
    go
    drop proc test_with_alter_table_recompile
    go
    create proc test_with_alter_table_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create a constraint
    --but this might lead to duplicate constraint name error on concurrent usage
    alter table #temp1 add constraint test123 unique(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_alter_table_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_alter_table_recompile 2
    go
    drop proc test_with_index_recompile
    go
    create proc test_with_index_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create an index
    create index test on #temp1(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    set statistics time on
    set statistics io on
    dbcc dropcleanbuffers
    go
    --high reads as expected for parameter '1'
    exec test_with_index_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_index_recompile 2
    go

    Read the article

  • Problem resolving a generic Repository with Entity Framework and Castle Windsor Container

    - by user368776
    Hi, I'm working on a generic repository implementation with EF v4; the repository must be resolved by the Windsor Container. First, the interface: public interface IRepository<T> { void Add(T entity); void Delete(T entity); T Find(int key); } Then a concrete class implements the interface: public class Repository<T> : IRepository<T> where T: class { private IObjectSet<T> _objectSet; } So I need _objectSet to do stuff like this in the previous class: public void Add(T entity) { _objectSet.AddObject(entity); } And now the problem: as you can see, I'm using an EF interface, IObjectSet, to do the work, but this type requires a constraint for the T generic type, "where T: class". That constraint is causing an exception when Windsor tries to resolve its concrete type. The Windsor configuration looks like this: <castle> <components> <component id="LVRepository" service="Repository.Infraestructure.IRepository`1, Repository" type="Repository.Infraestructure.Repository`1, Repository" lifestyle="transient"> </component> </components> The container resolve code: IRepository<Product> productsRep = _container.Resolve<IRepository<Product>>(); Now the exception I'm getting: System.ArgumentException: GenericArguments[0], 'T', on 'Repository.Infraestructure.Repository`1[T]' violates the constraint of type 'T'. ---> System.TypeLoadException: GenericArguments[0], 'T', on 'Repository.Infraestructure.Repository`1[T]' violates the constraint of type parameter 'T'. If I remove the constraint in the concrete class and the dependency on IObjectSet (if I don't, I get a compile error), everything works FINE, so I don't think it is a container issue, but IObjectSet is a MUST in the implementation. Some help with this, please.

    Read the article

  • Setup SSL (self signed cert) with tomcat

    - by Danny
    I am mostly following this page: http://tomcat.apache.org/tomcat-6.0-doc/ssl-howto.html I used this command to create the keystore: keytool -genkey -alias tomcat -keyalg RSA -keystore /etc/tomcat6/keystore and answered the prompts. Then I edited my server.xml file and uncommented/edited this line: <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" keystoreFile="/etc/tomcat6/keystore" keystorePass="tomcat" /> Then I go to the web.xml file for my project and add this into the file: <security-constraint> <web-resource-collection> <web-resource-name>Security</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> When I try to run my webapp I am met with this: Unable to connect Firefox can't establish a connection to the server at localhost:8443. * The site could be temporarily unavailable or too busy. Try again in a few moments. * If you are unable to load any pages, check your computer's network connection. If I comment out the lines I've added to my web.xml file, the webapp works fine. My log file in /var/lib/tomcat6/logs says nothing. I can't figure out if this is a problem with my keystore file, my server.xml file or my web.xml file.... Any assistance is appreciated. I am using Tomcat 6 on Ubuntu.

    Read the article

  • General many-to-many relationship problem ( Postgresql )

    - by David
    Hi, I have two tables: CREATE TABLE "public"."auctions" ( "id" VARCHAR(255) NOT NULL, "auction_value_key" VARCHAR(255) NOT NULL, "ctime" TIMESTAMP WITHOUT TIME ZONE NOT NULL, "mtime" TIMESTAMP WITHOUT TIME ZONE NOT NULL, CONSTRAINT "pk_XXXX2" PRIMARY KEY("id"), ); and CREATE TABLE "public"."auction_values" ( "id" NUMERIC DEFAULT nextval('default_seq'::regclass) NOT NULL, "fk_auction_value_key" VARCHAR(255) NOT NULL, "key" VARCHAR(255) NOT NULL, "value" TEXT, "ctime" TIMESTAMP WITHOUT TIME ZONE NOT NULL, "mtime" TIMESTAMP WITHOUT TIME ZONE NOT NULL, CONSTRAINT "pk_XXXX1" PRIMARY KEY("id"), ); If I want to create a many-to-many relationship on the auction_value_key like this: ALTER TABLE "public"."auction_values" ADD CONSTRAINT "auction_values_fk" FOREIGN KEY ("fk_auction_value_key") REFERENCES "public"."auctions"("auction_value_key") ON DELETE NO ACTION ON UPDATE NO ACTION NOT DEFERRABLE; I get this SQL error: ERROR: there is no unique constraint matching given keys for referenced table "auctions" Question: As you might see, I want "auction_values" to be "reused" by different auctions without duplicating them for every auction... So I don't want a key relation on the "id" field in the auctions table... Am I thinking wrong here, or what is the deal? ;) Thanks
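
    For reference, the usual way to let one auction_values row be shared by many auctions is a separate junction table, so that every foreign key points at a primary key. A minimal sketch under that assumption (the map table and column names below are made up, adapted from the question's schema):

        -- Hedged sketch: a junction table for a true many-to-many relationship.
        -- Each foreign key references a primary key, so no unique constraint is missing.
        CREATE TABLE auction_value_map (
            auction_id       VARCHAR(255) NOT NULL REFERENCES "public"."auctions" ("id"),
            auction_value_id NUMERIC      NOT NULL REFERENCES "public"."auction_values" ("id"),
            PRIMARY KEY (auction_id, auction_value_id)
        );

    With that shape the error cannot occur, because PostgreSQL only insists that the referenced columns carry a primary key or unique constraint, and here they always do.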

    Read the article

  • PostgreSQL - Error: SQL state: XX000.

    - by rob
    I have a table in Postgres that looks like this: CREATE TABLE "Population" ( "Id" bigint NOT NULL DEFAULT nextval('"population_Id_seq"'::regclass), "Name" character varying(255) NOT NULL, "Description" character varying(1024), "IsVisible" boolean NOT NULL CONSTRAINT "pk_Population" PRIMARY KEY ("Id") ) WITH ( OIDS=FALSE ); And a select function that looks like this: CREATE OR REPLACE FUNCTION "Population_SelectAll"() RETURNS SETOF "Population" AS $BODY$select "Id", "Name", "Description", "IsVisible" from "Population"; $BODY$ LANGUAGE 'sql' STABLE COST 100 Calling the select function returns all the rows in the table as expected. I have a need to add a couple of columns to the table (both of which are foreign keys to other tables in the database). This gives me a new table def as follows: CREATE TABLE "Population" ( "Id" bigint NOT NULL DEFAULT nextval('"population_Id_seq"'::regclass), "Name" character varying(255) NOT NULL, "Description" character varying(1024), "IsVisible" boolean NOT NULL, "DefaultSpeciesId" bigint NOT NULL, "DefaultEcotypeId" bigint NOT NULL, CONSTRAINT "pk_Population" PRIMARY KEY ("Id"), CONSTRAINT "fk_Population_DefaultEcotypeId" FOREIGN KEY ("DefaultEcotypeId") REFERENCES "Ecotype" ("Id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT "fk_Population_DefaultSpeciesId" FOREIGN KEY ("DefaultSpeciesId") REFERENCES "Species" ("Id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION ) WITH ( OIDS=FALSE ); and function: CREATE OR REPLACE FUNCTION "Population_SelectAll"() RETURNS SETOF "Population" AS $BODY$select "Id", "Name", "Description", "IsVisible", "DefaultSpeciesId", "DefaultEcotypeId" from "Population"; $BODY$ LANGUAGE 'sql' STABLE COST 100 ROWS 1000; Calling the function after these changes results in the following error message: ERROR: could not find attribute 11 in subquery targetlist SQL state: XX000 What is causing this error and how do I fix it? I have tried to drop and recreate the columns and function - but the same error occurs. Platform is PostgreSQL 8.4 running on Windows Server. Thanks.

    Read the article

  • Using Constraints on Hierarchical Data in a Self-Referential Table

    - by pbarney
    Suppose you have the following table, intended to represent hierarchical data: +--------+-------------+ | Field | Type | +--------+-------------+ | id | int(10) | | parent | int(10) | | name | varchar(45) | +--------+-------------+ The table is self-referential in that the parent_id refers to id. So you might have the following data: +----+--------+---------------+ | id | parent | name | +----+--------+---------------+ | 1 | 0 | fruit | | 2 | 0 | vegetable | | 3 | 1 | apple | | 4 | 1 | orange | | 5 | 3 | red delicious | | 6 | 3 | granny smith | | 7 | 3 | gala | +----+--------+---------------+ Using MySQL, I am trying to impose a (self-referential) foreign key constraint upon the data to update on cascades and prevent deletion of fruit if they have "children." So I used the following: CREATE TABLE `idtlp_main`.`fruit` ( `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, `parent` INT(10) UNSIGNED, `name` VARCHAR(45) NOT NULL, PRIMARY KEY (`id`), CONSTRAINT `fk_parent` FOREIGN KEY (`parent`) REFERENCES `fruit` (`id`) ON UPDATE CASCADE ON DELETE RESTRICT ) ENGINE = InnoDB; From what I understand, this should fit my requirements. (And parent must default to null to allow insertions, correct?) The problem is, if I change the id of a record, it will not cascade: Cannot delete or update a parent row: a foreign key constraint fails (`iddoc_main`.`fruit`, CONSTRAINT `fk_parent` FOREIGN KEY (`parent`) REFERENCES `fruit` (`id`) ON UPDATE CASCADE) What am I missing? Feel free to correct me if my terminology is screwed up... I'm new to constraints.
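
    For context, InnoDB treats a cascading update that comes back onto the same table as RESTRICT, so a self-referential ON UPDATE CASCADE will not renumber the children for you. One commonly used workaround, sketched below (not the only option, and the new id value 100 is made up), is to relax foreign key checks for the session and renumber parent and children together in one transaction:

        -- Hedged workaround sketch: temporarily disable FK checks in this session,
        -- renumber the parent row and its children, then re-enable checks.
        START TRANSACTION;
        SET FOREIGN_KEY_CHECKS = 0;
        UPDATE fruit SET id = 100 WHERE id = 3;
        UPDATE fruit SET parent = 100 WHERE parent = 3;
        SET FOREIGN_KEY_CHECKS = 1;
        COMMIT;

    Note that SET FOREIGN_KEY_CHECKS only affects the current session, and re-enabling it does not re-validate existing rows, so both UPDATE statements need to be kept consistent by hand.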

    Read the article

  • symfony doctrine build-sql error

    - by user313571
    I have some big problems with Symfony and Doctrine at the beginning of a new project. I have created a database diagram with MySQL Workbench, inserted the SQL into phpMyAdmin and then I've tried symfony doctrine:build-schema to generate the YAML schema. It generates a wrong schema (relations don't have on delete/on update) and after this I've tried symfony doctrine:build --sql and symfony doctrine:insert-sql. The insert-sql statement generates an error (can't create table ... failing query alter table add constraint ....), so I've decided to take a look at the generated SQL and I've found some differences between the SQL generated by MySQL Workbench (which works perfectly, including relations) and the SQL generated by Doctrine. I'll be short from now on: I have two tables, EVENT and FORM, and a 1-to-n relation (each event may have multiple forms), so the correct constraint (generated with Workbench) is ALTER TABLE `form` ADD CONSTRAINT `fk_form_event1` FOREIGN KEY (`event_id`) REFERENCES `event` (`id`) ON DELETE CASCADE ON UPDATE CASCADE; The Doctrine-generated statement is: ALTER TABLE event ADD CONSTRAINT event_id_form_event_id FOREIGN KEY (id) REFERENCES form(event_id); It's totally reversed and I am sure this is where the error is. What should I do? Is it also correct like this?

    Read the article

  • How to embed XBase expressions in an Xtext DSL

    - by Marcus Mathioudakis
    I am writing a simple little DSL for specifying constraints on messages, and have been trying without success for a while to embed XBase expressions into the language. The Grammar looks like this: grammar org.xtext.businessrules.BusinessRules with org.eclipse.xtext.xbase.Xbase //import "http://www.eclipse.org/xtext/xbase/Xbase" as xbase import "http://www.eclipse.org/xtext/common/JavaVMTypes" as jvmTypes generate businessRules "http://www.xtext.org/businessrules/BusinessRules" Start: rules+=Constraint*; Constraint: {Constraint} 'FOR' 'PAYLOAD' payload=PAYLOAD 'ELEMENT' element=ID 'CONSTRAINED BY' constraint=XExpression; PAYLOAD: "SimulationSessionEvents" |"stacons" |"any" ; Range: 'above' min=INT ('below' max=INT)? |'below' max=INT ('above' min=INT)? ; When trying to parse a file such as: FOR PAYLOAD SimulationSessionEvents ELEMENT matrix CONSTRAINED BY ... I can't get it to work for ... = any kind of arithmetic expression, although it works for ... = a loop or if expression, or even just a number. As soon as I do something like '-5' or '4-5' it says Couldn't resolve reference to JvmIdentifiableElement '-', even though the Xbase.xtext grammar looks like it allows these expressions. I don't think I'm missing any Jars, as it doesn't complain when I run the mwe workflow, but only when trying to parse the input file. Any help would be much appreciated.

    Read the article

  • Dealing with SQLException with spring,hibernate & Postgres

    - by mad
    Hi, I'm working on a project using HibernateDaoSupport for my DAOs with Spring & spring-ws & Hibernate & Postgres, which will be used in a national application (meaning a lot of users). Actually, every exception from Hibernate is automatically transformed into some specific Spring DataAccessException. I have a table with a keyword in the database & a unique constraint on the keywords: no duplicate keywords are allowed. I have found two ways to deal with that in the insert DAO: 1- Check for the duplicate manually (with a select) prior to doing your insert. This means that the Spring transaction will have a SERIALIZABLE isolation level. The obvious drawback is that we now have 2 queries for a simple insert. Advantage: independent of the database. 2- Let the insert go & catch the SQLException & convert it to a user-friendly message & error code for the final consumer of our web services. Solution 2: Spring has developed a way to translate specific exceptions into customized exceptions; see http://www.oracle.com/technology/pub/articles/marx_spring.html . In my case I would have a ConstraintViolationException. Ideally I would like to write a custom SQLExceptionTranslator to map the duplicate word constraint in the database to a DuplicateWordException. But I can have many unique constraints on the same table. So I have to get the message of the SQLExceptions in order to find the name of the constraint declared in the create table, "uq_duplicate-constraint" for example. Now I have a strong dependency on the database. Thanks in advance for your answers & excuse me for my poor English (it is not my mother tongue).

    Read the article

  • Is it Possible to Use Constraints on Hierarchical Data in a Self-Referential Table?

    - by pbarney
    Suppose you have the following table, intended to represent hierarchical data: +--------+-------------+ | Field | Type | +--------+-------------+ | id | int(10) | | parent | int(10) | | name | varchar(45) | +--------+-------------+ The table is self-referential in that the parent_id refers to id. So you might have the following data: +----+--------+---------------+ | id | parent | name | +----+--------+---------------+ | 1 | 0 | fruit | | 2 | 0 | vegetable | | 3 | 1 | apple | | 4 | 1 | orange | | 5 | 3 | red delicious | | 6 | 3 | granny smith | | 7 | 3 | gala | +----+--------+---------------+ Using MySQL, I am trying to impose a (self-referential) foreign key constraint upon the data to cascade on update and prevent deletion of a record if it has any "children." So I used the following: CREATE TABLE `test`.`fruit` ( `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, `parent` INT(10) UNSIGNED, `name` VARCHAR(45) NOT NULL, PRIMARY KEY (`id`), CONSTRAINT `fk_parent` FOREIGN KEY (`parent`) REFERENCES `fruit` (`id`) ON UPDATE CASCADE ON DELETE RESTRICT ) ENGINE = InnoDB; From what I understand, this should fit my requirements. (And parent must default to null to allow insertions, correct?) The problem is, if I change the id of a record, it will not cascade: Cannot delete or update a parent row: a foreign key constraint fails (`test`.`fruit`, CONSTRAINT `fk_parent` FOREIGN KEY (`parent`) REFERENCES `fruit` (`id`) ON UPDATE CASCADE) What am I missing? Feel free to correct me if my terminology is screwed up... I'm new to constraints.

    Read the article

  • Loop to check all 14 days in the pay period

    - by Rachel Ann Arndt
    Name: Calc_Anniversary Input: Pay_Date, Hire_Date, Termination_Date Output: "Y" if is the anniversary of the employee's Hire_Date, "N" if it is not, and "T" if he has been terminated before his anniversary. Description: Create local variables to hold the month and day of the employee's Date_of_Hire, Termination_Date, and of the processing date using the TO_CHAR function. First check to see if he was terminated before his anniversary. The anniversary could be on any day during the pay period, so there will be a loop to check all 14 days in the pay period to see if one was his anniversary. CREATE OR replace FUNCTION Calc_anniversary( incoming_anniversary_date IN VARCHAR2) RETURN BOOLEAN IS hiredate VARCHAR2(20); terminationdate VARCHAR(20); employeeid VARCHAR2(38); paydate NUMBER := 0; BEGIN SELECT Count(arndt_raw_time_sheet_data.pay_date) INTO paydate FROM arndt_raw_time_sheet_data WHERE paydate = incoming_anniversary_date; WHILE paydate <= 14 LOOP SELECT To_char(employee_id, '999'), To_char(hire_date, 'DD-MON'), To_char(termination_date, 'DD-MON') INTO employeeid, hiredate, terminationdate FROM employees, time_sheet WHERE employees.employee_id = time_sheet.employee_id AND paydate = pay_date; IF terminationdate > hiredate THEN RETURN 'T'; ELSE IF To_char(SYSDATE, 'DD-MON') = To_char(hiredate, 'DD-MON')THEN RETURN 'Y'; ELSE RETURN 'N'; END IF; END IF; paydate := paydate + 1; END LOOP; END; Tables I am using CREATE TABLE Employees ( EMPLOYEE_ID INTEGER, FIRST_NAME VARCHAR2(15), LAST_NAME VARCHAR2(25), ADDRESS_LINE_ONE VARCHAR2(35), ADDRESS_LINE_TWO VARCHAR2(35), CITY VARCHAR2(28), STATE CHAR(2), ZIP_CODE CHAR(10), COUNTY VARCHAR2(10), EMAIL VARCHAR2(16), PHONE_NUMBER VARCHAR2(12), SOCIAL_SECURITY_NUMBER VARCHAR2(11), HIRE_DATE DATE, TERMINATION_DATE DATE, DATE_OF_BIRTH DATE, SPOUSE_ID INTEGER, MARITAL_STATUS CHAR(1), ALLOWANCES INTEGER, PERSONAL_TIME_OFF FLOAT, CONSTRAINT pk_employee_id PRIMARY KEY (EMPLOYEE_ID), CONSTRAINT fk_spouse_id FOREIGN KEY (SPOUSE_ID) REFERENCES EMPLOYEES (EMPLOYEE_ID)) / CREATE TABLE Arndt_Raw_Time_Sheet_data ( EMPLOYEE_ID INTEGER, PAY_DATE DATE, HOURS_WORKED FLOAT, SALES_AMOUNT FLOAT, CONSTRAINT pk_employee_id_pay_date_time PRIMARY KEY (EMPLOYEE_ID, PAY_DATE), CONSTRAINT fk_employee_id_time FOREIGN KEY (EMPLOYEE_ID) REFERENCES EMPLOYEES (EMployee_ID)); error FUNCTION Calc_Anniversary compiled Warning: execution completed with warning

    Read the article

  • How to set up WebLogic 10.3.3. security for JAX_WS web services?

    - by Roman Kagan
    I have quite a simple task to accomplish - I have to set up security for web services (basic authentication with a user id and password hardcoded in WLES). I set up the web.xml (see code fragment below) but I am having a tough time configuring WebLogic. I added the IdentityAssertionAuthenticator Authentication Provider, set it as Required, modified DefaultAuthenticator as Optional and went to the deployed application's security and set the role to "thisIsUser", and at some point it worked, but not anymore (I redeployed the war file and set the web service security the same way, but to no avail). I'd greatly appreciate all your help. <security-constraint> <display-name>SecurityConstraint</display-name> <web-resource-collection> <web-resource-name>ABC</web-resource-name> <url-pattern>/ABC</url-pattern> </web-resource-collection> <auth-constraint> <role-name>thisIsUser</role-name> </auth-constraint> </security-constraint>

    Read the article

  • List all foreign key constraints that refer to a particular column in a specific table

    - by Sid
    I would like to see a list of all the tables and columns that refer (either directly or indirectly) to a specific column in the 'main' table via a foreign key constraint that has the ON DELETE=CASCADE setting missing. The tricky part is that there could be indirect relationships buried up to 5 levels deep. (example: ... great-grandchild- FK3 = grandchild = FK2 = child = FK1 = main table). We need to dig up the leaf tables/columns, not just the very 1st level. The 'good' part about this is that execution speed isn't a concern; it'll be run on a backup copy of the production db to fix any relational issues for the future. I did SELECT * FROM sys.foreign_keys but that gives me the name of the constraint - not the names of the child-parent tables and the columns in the relationship (the juicy bits). Plus the previous designer used short, non-descriptive/random names for the FK constraints, unlike our practice below. The way we're adding constraints into SQL Server: ALTER TABLE [dbo].[UserEmailPrefs] WITH CHECK ADD CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId] FOREIGN KEY([UserId]) REFERENCES [dbo].[UserMasterTable] ([UserId]) ON DELETE CASCADE GO ALTER TABLE [dbo].[UserEmailPrefs] CHECK CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId] GO The comments in this SO question inspire this question.
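
    As a starting point, the constraint names can be joined to their table and column names through sys.foreign_key_columns. A hedged sketch along those lines (direct, first-level relationships only - chasing the chain several levels deep would still need a recursive query over this result):

        -- Sketch: each FK, its child table/column, the referenced table/column,
        -- and its delete action; filter to constraints where CASCADE is missing.
        SELECT fk.name                                                   AS constraint_name,
               OBJECT_NAME(fkc.parent_object_id)                         AS child_table,
               COL_NAME(fkc.parent_object_id, fkc.parent_column_id)      AS child_column,
               OBJECT_NAME(fkc.referenced_object_id)                     AS parent_table,
               COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id) AS parent_column,
               fk.delete_referential_action_desc
        FROM sys.foreign_keys fk
        JOIN sys.foreign_key_columns fkc
          ON fkc.constraint_object_id = fk.object_id
        WHERE fk.delete_referential_action <> 1;  -- anything other than CASCADE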

    Read the article

  • in haskell, why do I need to specify type constraints, why can't the compiler figure them out?

    - by Steve
    Consider the function, add a b = a + b This works: *Main> add 1 2 3 However, if I add a type signature specifying that I want to add things of the same type: add :: a -> a -> a add a b = a + b I get an error: test.hs:3:10: Could not deduce (Num a) from the context () arising from a use of `+' at test.hs:3:10-14 Possible fix: add (Num a) to the context of the type signature for `add' In the expression: a + b In the definition of `add': add a b = a + b So GHC clearly can deduce that I need the Num type constraint, since it just told me: add :: Num a => a -> a -> a add a b = a + b Works. Why does GHC require me to add the type constraint? If I'm doing generic programming, why can't it just work for anything that knows how to use the + operator? In C++ template programming, you can do this easily: #include <string> #include <cstdio> using namespace std; template<typename T> T add(T a, T b) { return a + b; } int main() { printf("%d, %f, %s\n", add(1, 2), add(1.0, 3.4), add(string("foo"), string("bar")).c_str()); return 0; } The compiler figures out the types of the arguments to add and generates a version of the function for that type. There seems to be a fundamental difference in Haskell's approach, can you describe it, and discuss the trade-offs? It seems to me like it would be resolved if GHC simply filled in the type constraint for me, since it obviously decided it was needed. Still, why the type constraint at all? Why not just compile successfully as long as the function is only used in a valid context where the arguments are in Num? Thank you.

    Read the article

  • Is it possible to modify the value of a record's primary key in Oracle when child records exist?

    - by Chris Farmer
    I have some Oracle tables that represent a parent-child relationship. They look something like this: create table Parent ( parent_id varchar2(20) not null primary key ); create table Child ( child_id number not null primary key, parent_id varchar2(20) not null, constraint fk_parent_id foreign key (parent_id) references Parent (parent_id) ); This is a live database and its schema was designed long ago under the assumption that the parent_id field would be static and unchanging for a given record. Now the rules have changed and we really would like to change the value of parent_id for some records. For example, I have these records: Parent: parent_id --------- ABC123 Child: child_id parent_id -------- --------- 1 ABC123 2 ABC123 And I want to modify ABC123 in these records in both tables to something else. It's my understanding that one cannot write an Oracle update statement that will update both parent and child tables simultaneously, and given the FK constraint, I'm not sure how best to update my database. I am currently disabling the fk_parent_id constraint, updating each table independently, and then enabling the constraint. Is there a better, single-step way to update this content?
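
    One alternative worth considering, sketched below but not tested against this schema (the new key value 'XYZ789' is made up): recreate the foreign key as deferrable and defer it inside the transaction that rewrites both tables, so the intermediate state is never checked and the constraint never has to be disabled.

        -- Hedged sketch: make the child constraint deferrable, then update both
        -- tables in one transaction with checking postponed until COMMIT.
        ALTER TABLE Child DROP CONSTRAINT fk_parent_id;
        ALTER TABLE Child ADD CONSTRAINT fk_parent_id
            FOREIGN KEY (parent_id) REFERENCES Parent (parent_id)
            DEFERRABLE INITIALLY IMMEDIATE;

        SET CONSTRAINT fk_parent_id DEFERRED;
        UPDATE Parent SET parent_id = 'XYZ789' WHERE parent_id = 'ABC123';
        UPDATE Child  SET parent_id = 'XYZ789' WHERE parent_id = 'ABC123';
        COMMIT;

    This keeps the relationship enforced at commit time instead of leaving a window where the constraint is switched off entirely.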

    Read the article

  • How do I get the earliest DateTime of a set, where there are a few conditions

    - by radbyx
    Create script for Product SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO SET ANSI_PADDING ON GO CREATE TABLE [dbo].[Product]( [ProductID] [int] IDENTITY(1,1) NOT NULL, [ProductName] [varchar](50) NOT NULL, CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ( [ProductID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO SET ANSI_PADDING OFF GO Create script for StateLog SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[StateLog]( [StateLogID] [int] IDENTITY(1,1) NOT NULL, [ProductID] [int] NOT NULL, [Status] [bit] NOT NULL, [TimeStamp] [datetime] NOT NULL, CONSTRAINT [PK_Uptime] PRIMARY KEY CLUSTERED ( [StateLogID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO ALTER TABLE [dbo].[StateLog] WITH CHECK ADD CONSTRAINT [FK_Uptime_Products] FOREIGN KEY([ProductID]) REFERENCES [dbo].[Product] ([ProductID]) GO ALTER TABLE [dbo].[StateLog] CHECK CONSTRAINT [FK_Uptime_Products] GO I have this and it's not enough: select top 5 [ProductName], [TimeStamp] from [Product] inner join StateLog on [Product].ProductID = [StateLog].ProductID where [Status] = 0 order by TimeStamp desc; (My query gives the 5 latest TimeStamps where Status is 0 (false).) But I need one thing more: Where there is a set of latest TimeStamps for a product where Status is 0, I only want the earliest of them (not the latest). Example: Let's say for Product X I have: TimeStamp1(status = 0) TimeStamp2(status = 1) TimeStamp3(status = 0) TimeStamp4(status = 0) TimeStamp5(status = 1) TimeStamp6(status = 0) TimeStamp7(status = 0) TimeStamp8(status = 0) The correct answer would then be: TimeStamp6, because it's the first of the latest timestamps.
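
    One possible way to phrase that, offered as an untested sketch (the isnull fallback date is arbitrary): for each product take the latest Status = 1 timestamp, then the earliest Status = 0 timestamp after it, which is the start of the product's most recent run of zeros. In the example above that picks TimeStamp6.

        -- Hedged sketch: earliest TimeStamp of each product's latest run of Status = 0.
        select top 5 p.ProductName, min(s.[TimeStamp]) as DownSince
        from [Product] p
        inner join StateLog s on s.ProductID = p.ProductID
        where s.[Status] = 0
          and s.[TimeStamp] > isnull((select max(s2.[TimeStamp])
                                      from StateLog s2
                                      where s2.ProductID = p.ProductID
                                        and s2.[Status] = 1), '19000101')
        group by p.ProductName
        order by DownSince desc;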

    Read the article

  • Custom filtering in Android using ArrayAdapter

    - by Alxandr
    I'm trying to filter my ListView which is populated with this ArrayAdapter: package me.alxandr.android.mymir.adapters; import java.util.ArrayList; import java.util.Collection; import java.util.Collections; import java.util.HashMap; import java.util.Iterator; import java.util.Set; import me.alxandr.android.mymir.R; import me.alxandr.android.mymir.model.Manga; import android.content.Context; import android.util.Log; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.ArrayAdapter; import android.widget.Filter; import android.widget.SectionIndexer; import android.widget.TextView; public class MangaListAdapter extends ArrayAdapter<Manga> implements SectionIndexer { public ArrayList<Manga> items; public ArrayList<Manga> filtered; private Context context; private HashMap<String, Integer> alphaIndexer; private String[] sections = new String[0]; private Filter filter; private boolean enableSections; public MangaListAdapter(Context context, int textViewResourceId, ArrayList<Manga> items, boolean enableSections) { super(context, textViewResourceId, items); this.filtered = items; this.items = filtered; this.context = context; this.filter = new MangaNameFilter(); this.enableSections = enableSections; if(enableSections) { alphaIndexer = new HashMap<String, Integer>(); for(int i = items.size() - 1; i >= 0; i--) { Manga element = items.get(i); String firstChar = element.getName().substring(0, 1).toUpperCase(); if(firstChar.charAt(0) > 'Z' || firstChar.charAt(0) < 'A') firstChar = "@"; alphaIndexer.put(firstChar, i); } Set<String> keys = alphaIndexer.keySet(); Iterator<String> it = keys.iterator(); ArrayList<String> keyList = new ArrayList<String>(); while(it.hasNext()) keyList.add(it.next()); Collections.sort(keyList); sections = new String[keyList.size()]; keyList.toArray(sections); } } @Override public View getView(int position, View convertView, ViewGroup parent) { View v = convertView; if(v == null) { LayoutInflater vi = (LayoutInflater)context.getSystemService(Context.LAYOUT_INFLATER_SERVICE); v = vi.inflate(R.layout.mangarow, null); } Manga o = items.get(position); if(o != null) { TextView tt = (TextView) v.findViewById(R.id.MangaRow_MangaName); TextView bt = (TextView) v.findViewById(R.id.MangaRow_MangaExtra); if(tt != null) tt.setText(o.getName()); if(bt != null) bt.setText(o.getLastUpdated() + " - " + o.getLatestChapter()); if(enableSections && getSectionForPosition(position) != getSectionForPosition(position + 1)) { TextView h = (TextView) v.findViewById(R.id.MangaRow_Header); h.setText(sections[getSectionForPosition(position)]); h.setVisibility(View.VISIBLE); } else { TextView h = (TextView) v.findViewById(R.id.MangaRow_Header); h.setVisibility(View.GONE); } } return v; } @Override public void notifyDataSetInvalidated() { if(enableSections) { for (int i = items.size() - 1; i >= 0; i--) { Manga element = items.get(i); String firstChar = element.getName().substring(0, 1).toUpperCase(); if(firstChar.charAt(0) > 'Z' || firstChar.charAt(0) < 'A') firstChar = "@"; alphaIndexer.put(firstChar, i); } Set<String> keys = alphaIndexer.keySet(); Iterator<String> it = keys.iterator(); ArrayList<String> keyList = new ArrayList<String>(); while (it.hasNext()) { keyList.add(it.next()); } Collections.sort(keyList); sections = new String[keyList.size()]; keyList.toArray(sections); super.notifyDataSetInvalidated(); } } public int getPositionForSection(int section) { if(!enableSections) return 0; String letter = sections[section]; return 
alphaIndexer.get(letter); } public int getSectionForPosition(int position) { if(!enableSections) return 0; int prevIndex = 0; for(int i = 0; i < sections.length; i++) { if(getPositionForSection(i) > position && prevIndex <= position) { prevIndex = i; break; } prevIndex = i; } return prevIndex; } public Object[] getSections() { return sections; } @Override public Filter getFilter() { if(filter == null) filter = new MangaNameFilter(); return filter; } private class MangaNameFilter extends Filter { @Override protected FilterResults performFiltering(CharSequence constraint) { // NOTE: this function is *always* called from a background thread, and // not the UI thread. constraint = constraint.toString().toLowerCase(); FilterResults result = new FilterResults(); if(constraint != null && constraint.toString().length() > 0) { ArrayList<Manga> filt = new ArrayList<Manga>(); ArrayList<Manga> lItems = new ArrayList<Manga>(); synchronized (items) { Collections.copy(lItems, items); } for(int i = 0, l = lItems.size(); i < l; i++) { Manga m = lItems.get(i); if(m.getName().toLowerCase().contains(constraint)) filt.add(m); } result.count = filt.size(); result.values = filt; } else { synchronized(items) { result.values = items; result.count = items.size(); } } return result; } @SuppressWarnings("unchecked") @Override protected void publishResults(CharSequence constraint, FilterResults results) { // NOTE: this function is *always* called from the UI thread. filtered = (ArrayList<Manga>)results.values; notifyDataSetChanged(); } } } However, when I call filter('test') on the filter nothing happens at all (or the background-thread is run, but the list isn't filtered as far as the user conserns). How can I fix this?

    Read the article

  • Solr Multicore Admin Problem

    - by Daniel M
    I'm trying to add a URL-based security constraint to Solr deployed in WebSphere 6.1. If I specify the core name in the URL of the constraint, then the admin URL for that core gives a 404. Has anyone had any success with this, or any suggestions? Cheers. Cross-posted with stackoverflow.

    Read the article

  • Hibernate: @UniqueConstraint with @ManyToOne field???

    - by Misha Koshelev
    Dear All: Using the following code: @Entity @Table(uniqueConstraints=[@UniqueConstraint(columnNames=["account","name"])]) class Friend { @Id @GeneratedValue(strategy=GenerationType.AUTO) public Long id @ManyToOne public Account account public String href public String name } I get the following error: org.hibernate.AnnotationException: Unable to create unique key constraint (account, name) on table Friend: account not found It seems this has to do with the @ManyToOne constraint, which I imagine actually creates a separate UniqueConstraint??? In any case, if I take this out, there is no complaint about the UniqueConstraint, but there is another error which makes me believe it must be left in. org.hibernate.MappingException: Could not determine type for: com.mksoft.fbautomate.domain.Account, at table: Friend, for columns: [org.hibernate.mapping.Column(account)] Any hints how I can create such a desired constraint (i.e., that each combination of account and name occurs only once???) Thank you! Misha

    Read the article

  • Fixtures and inheritance in Symfony

    - by Tere
    Hi! I have a database schema in Symfony like this: Persona: actAs: { Timestampable: ~ } columns: primer_nombre: { type: string(255), notnull: true } segundo_nombre: { type: string(255) } apellido: { type: string(255), notnull: true } rut: { type: string(255) } email: { type: string(255) } email2: { type: string(255) } direccion: { type: string(400) } ciudad: { type: string(255) } region: { type: string(255) } pais: { type: string(255) } telefono: { type: string(255) } telefono2: { type: string(255) } fecha_nacimiento: { type: date } Alumno: inheritance: type: concrete extends: Persona columns: comentario: { type: string(255) } estado_pago: { type: string(255) } Alumno_Beca: columns: persona_id: { type: integer, primary: true } beca_id: { type: integer, primary: true } relations: Alumno: { onDelete: CASCADE, local: persona_id, foreign: id } Beca: { onDelete: CASCADE, local: beca_id, foreign: id } Beca: columns: nombre: { type: string(255) } monto: { type: double } porcentaje: { type: double } descripcion: { type: string(5000) } As you see, "alumno" has a concrete inheritance from "persona". Now I'm trying to create fixtures for this two tables, and I can't make Doctrine to load them. It gives me this error: SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails (eat/alumno__beca, CONSTRAINT alumno__beca_persona_id_alumno_id FOREIGN KEY (persona_id) REFERENCES alumno (id) ON DELETE CASCADE) Does someone know how to write a fixture for a table inherited from another? Thanks!

    Read the article

  • What is the proper way to create a recursive entity in the Entity Framework?

    - by Orion Adrian
    I'm currently using VS 2010 RC, and I'm trying to create a model that contains a recursive self-referencing entity. Currently when I import the entity from the model I get an error indicating that the parent property cannot be part of the association because it's set to 'Computed' or 'Identity', though I'm not sure why it does it that way. I've been hand-editing the file to get around that error, but then the model simply doesn't work. What is the proper way to get recursive entities to work in the Entity Framework. CREATE TABLE [dbo].[Appointments]( [AppointmentId] [int] IDENTITY(1,1) NOT NULL, [Description] [nvarchar](1024) NULL, [Start] [datetime] NOT NULL, [End] [datetime] NOT NULL, [Username] [varchar](50) NOT NULL, [RecurrenceRule] [nvarchar](1024) NULL, [RecurrenceState] [varchar](20) NULL, [RecurrenceParentId] [int] NULL, [Annotations] [nvarchar](50) NULL, [Application] [nvarchar](100) NOT NULL, CONSTRAINT [PK_Appointments] PRIMARY KEY CLUSTERED ( [AppointmentId] ASC ) ) GO ALTER TABLE [dbo].[Appointments] WITH CHECK ADD CONSTRAINT [FK_Appointments_ParentAppointments] FOREIGN KEY([RecurrenceParentId]) REFERENCES [dbo].[Appointments] ([AppointmentId]) GO ALTER TABLE [dbo].[Appointments] CHECK CONSTRAINT [FK_Appointments_ParentAppointments] GO

    Read the article

  • does the order a composite key is defined matter?

    I have a table with (col1, col2) as a composite primary key: create table twokeytable(col1 int,col2 int,constraint twokeytable_pk primary key (col1,col2)); and another table with col3, col4 columns with a composite foreign key (col3, col4) which references the (col1, col2) primary key. For some processing I need to drop the foreign key and primary key constraints. While restoring the constraints, does the order of the keys matter? Are these the same? create table fktwokeytable(col3 int,col4 int,constraint fkaddfaa_fk foreign key(col4,col3) references twokeytable(col1,col2)) and create table fktwokeytable(col3 int,col4 int,constraint fkaddfaa_fk foreign key(col3,col4) references twokeytable(col1,col2))
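
    For what it's worth, the column lists in a foreign key are matched by position, not by name, so the two variants pair the columns differently. A small illustration of that pairing (the second variant is the one that lines the names up):

        -- Positional pairing (hedged illustration, not schema-specific advice):
        --   foreign key (col4, col3) references twokeytable (col1, col2)
        --     maps col4 -> col1 and col3 -> col2
        --   foreign key (col3, col4) references twokeytable (col1, col2)
        --     maps col3 -> col1 and col4 -> col2
        -- The two definitions are therefore only equivalent if the swapped
        -- pairing still matches the intended types and meaning of the columns.
        create table fktwokeytable (
            col3 int,
            col4 int,
            constraint fkaddfaa_fk foreign key (col3, col4)
                references twokeytable (col1, col2)
        );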

    Read the article

  • Oracle Unique Indexes

    - by Melvin
    I was creating a new table today in 10g when I noticed an interesting behavior. Here is an example of what I did: CREATE TABLE test_table ( field_1 INTEGER PRIMARY KEY ); Oracle will, by default, create a non-null unique index for the primary key. I double-checked this: after a quick check, I find a unique index named SYS_C0065645. Everything is working as expected so far. Now I did this: CREATE TABLE test_table ( field_1 INTEGER, CONSTRAINT pk_test_table PRIMARY KEY (field_1) USING INDEX (CREATE INDEX idx_test_table_00 ON test_table (field_1))); After describing my newly created index idx_test_table_00, I see that it is non-unique. I tried to insert duplicate data into the table and was stopped by the primary key constraint, proving that the functionality has not been affected. It seems strange to me that Oracle would allow a non-unique index to be used for a primary key constraint. Why is this allowed?
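
    A side note that may help with the investigation: Oracle only needs an index to police the constraint, and a non-unique one is sufficient; in fact a non-unique index is what deferrable constraints require, and it also lets you disable or drop the constraint without losing the index. A sketch against the standard data dictionary view to see which index is enforcing the key (table name taken from the question):

        -- Hedged sketch: show the index Oracle associates with the primary key.
        SELECT constraint_name, constraint_type, index_name
        FROM   user_constraints
        WHERE  table_name = 'TEST_TABLE'
        AND    constraint_type = 'P';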

    Read the article
