Search Results

Search found 10060 results on 403 pages for 'column'.

Page 93 of 403

  • LOAD DATA LOCAL INFILE custom value

    - by NR03
    How do I add a custom value when using LOAD DATA LOCAL INFILE? The time_added column is the 7th column in the table, and the file contains only two values, for the first and second columns. For the 7th column, time_added, I want to store the Unix timestamp at load time. This code isn't working: $result = mysql_query("LOAD DATA LOCAL INFILE '{$myFile}' INTO TABLE {$table} FIELDS TERMINATED BY ':' LINES TERMINATED BY '\n' SET `time_added`=unix_timestamp()");
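    A sketch of a likely fix, assuming the two fields in the file should map to the first two table columns (the names col1 and col2 are placeholders, not from the question): list the input columns explicitly so the loader knows which fields the file supplies, and keep the SET clause for the computed value.

        LOAD DATA LOCAL INFILE '/path/to/file'
        INTO TABLE my_table
        FIELDS TERMINATED BY ':'
        LINES TERMINATED BY '\n'
        (col1, col2)                        -- the two fields actually present in the file
        SET time_added = UNIX_TIMESTAMP();  -- computed at load time, not read from the file

    Without a column list, MySQL expects one field per table column, so a two-field line against a seven-column table triggers warnings and default-filled columns; naming the input columns avoids the mismatch.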

    Read the article

  • Android-SQLite: How to Count specific value from Column?

    - by sanpatil
    I have two tables (TABLE_EXAM, TABLE_RESULT). Here are the contents of TABLE_RESULT:

        result_id  exam_id  question_id  correct_answer
        1          2        4            y
        2          2        5            y
        3          2        6            n
        4          2        7            y

    I need to count how many rows have correct_answer='y' where exam_id=2. I tried the following code, but it returns 0:

        public int calculateResult(int examId, String confirmAnswer) {
            int correctAnswer = 0;
            try {
                SQLiteDatabase db = this.getWritableDatabase();
                String selectQuery = ("select count(correctAnswer) from result where exam_id ='" + examId + "' and correctAnswer ='" + 'y' + "'");
                // String selectQuery = ("SELECT COUNT(*) FROM result WHERE exam_id ='" + examId + "' and correctAnswer ='" + confirmAnswer + "'");
                Cursor cursor = db.rawQuery(selectQuery, null);
                if (cursor.moveToLast()) {
                    correctAnswer = cursor.getInt(3);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
            return correctAnswer;
        }

    In the confirmAnswer variable I pass "y". Give me some hint or reference. Any help is appreciated. Thanks in advance.
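    A minimal corrected sketch, assuming the column is named correct_answer as in the sample data (the posted code queries correctAnswer) and that the method lives in a SQLiteOpenHelper subclass: a COUNT query returns one row with one column, so the value sits at index 0, and binding the arguments avoids the quoting mistakes.

        public int calculateResult(int examId, String confirmAnswer) {
            int count = 0;
            SQLiteDatabase db = this.getReadableDatabase();
            // COUNT(*) comes back as a single row with a single column, so read index 0
            Cursor cursor = db.rawQuery(
                    "SELECT COUNT(*) FROM result WHERE exam_id = ? AND correct_answer = ?",
                    new String[] { String.valueOf(examId), confirmAnswer });
            if (cursor.moveToFirst()) {
                count = cursor.getInt(0);
            }
            cursor.close();
            return count;
        }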

    Read the article

  • MySQL query to find the most popular value in a column joined by another value in a second table

    - by Budove
    I have two tables:

        users:    user_id, user_zip
        settings: user_id, pref_ex_loc

    I need to find the single most popular pref_ex_loc in the settings table for a particular user_zip, which will be supplied in the variable $userzip. Here is the query I have now, and obviously it doesn't work. $popularexloc = "SELECT pref_ex_loc, user_id COUNT(pref_ex_loc) AS countloc FROM settings FULL OUTER JOIN users ON settings.user_id = users.user_id WHERE users.user_zip='$userzip' GROUP BY settings.pref_ex_loc ORDER BY countloc LIMIT 1"; $popexloc = mysql_query($popularexloc) or die('SQL Error :: '.mysql_error()); $exlocrow = mysql_fetch_array($popexloc); $mostpopexloc=$exlocrow[0]; echo '<option value="'.$mostpopexloc.'">'.$mostpopexloc.'</option>'; What am I doing wrong here? I'm not getting any kind of error from this either.
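    A hedged rewrite of the query, assuming every settings row has a matching users row so a plain INNER JOIN is enough (MySQL has no FULL OUTER JOIN in any case). The visible problems are the malformed select list around user_id and COUNT(...), and the ascending sort; the most popular value needs ORDER BY ... DESC.

        SELECT s.pref_ex_loc, COUNT(*) AS countloc
        FROM settings s
        JOIN users u ON s.user_id = u.user_id
        WHERE u.user_zip = '$userzip'      -- interpolated by PHP as in the original; prefer a bound parameter
        GROUP BY s.pref_ex_loc
        ORDER BY countloc DESC             -- DESC so the most common location comes first
        LIMIT 1;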

    Read the article

  • Using Oracle Proxy Authentication with JPA (eclipselink-Style)

    - by olaf.heimburger
    Security is a very intriguing topic. You will find it everywhere and you need to implement it everywhere. Yes, you need. Unfortunately, one can easily forget it while implementing the last mile.

    The Last Mile
    In a multi-tier application it is a common practice to use connection pools between the business layer and the database layer. Connection pools are quite useful to speed up database connection creation and to split the load. Another very common practice is to use a specific, often called technical, user to connect to the database. This user has authentication and authorization rules that apply to all application users. Imagine you've put every effort into defining roles for the different types of users that use your application. These roles are necessary to differentiate between normal users, premium users, and administrators (I bet you will find or already have more roles in your application). While these user roles are put to good use within your application, once the flow of execution enters the database everything is gone: each and every user has just one role and is the same database user.

    Issues? What Issues?
    As long as things go well, this is not a real issue. However, things do not go well all the time. Once your application becomes famous, performance decreases in certain situations, or, more importantly, current and upcoming regulations and laws require that your application be able to apply different security measures on a per-user-role basis at every stage of your application. If you only have a bunch of users with the same name and role, you are not able to find the application usage profile that causes the performance issue, or to tell which user has accessed data that he/she is not allowed to. Another threat to your role concept is that databases tend to be used by different applications and tools. These tools can be developer tools like SQL*Plus, SQL Developer, etc. or end-user applications like BI Publisher, Oracle Forms and so on. These tools have no idea of your application's role concept and access the database the way they think is appropriate: a big blind spot for your perfect role model and a big nightmare for your Chief Security Officer. Speaking of the CSO brings up another issue: password management. Once your technical user account is compromised, every user is able to do things that the design of your application never intended.

    Counter Measures
    In the Oracle world a common counter measure is to use Virtual Private Database (VPD). This restricts the values a database user can see to the allowed minimum. However, it doesn't help in regard to a connection pool user, because this one is still not the real user.

    Oracle Proxy Authentication
    Another feature of the Oracle database is Proxy Authentication. First introduced with version 9i, it is a quite useful feature for nearly every situation. The main idea behind Proxy Authentication is to create a crippled database user who has only connect rights. Even if this user is compromised, the risks are well understood and fairly limited. This user can be used in every situation in which you need to connect to the database, no matter which tool or application (see above) you use. The proxy user is perfect for multi-tier connection pools.

        CREATE USER app_user IDENTIFIED BY abcd1234;
        GRANT CREATE SESSION TO app_user;

    But what if you need to access real data? Well, this is the primary use case, isn't it? Now is the time to bring the application's role concept into play.
You define database roles that define the grants for your identified user groups. Once you have these groups you grant access through the proxy user with the application role to the specific user. CREATE ROLE app_role_a; GRANT app_role_a TO scott; ALTER USER scott GRANT CONNECT THROUGH app_user WITH ROLE app_role_a; Now, hr has permission to connect to the database through the proxy user. Through the role you can restrict the hr's rights the are needed for the application only. If hr connects to the database directly all assigned role and permissions apply. Testing the Setup To test the setup you can use SQL*Plus and connect to your database: $ sqlplus app_user[hr]/abcd1234 Java Persistence API The Java Persistence API (JPA) is a fairly easy means to build applications that retrieve data from the database and put it into Java objects. You use plain old Java objects (POJOs) and mixin some Java annotations that define how the attributes of the object are used for storing data from the database into the Java object. Here is a sample for objects from the HR sample schema EMPLOYEES table. When using Java annotations you only specify what can not be deduced from the code. If your Java class name is Employee but the table name is EMPLOYEES, you need to specify the table name, otherwise it will fail. package demo.proxy.ejb; import java.io.Serializable; import java.sql.Timestamp; import java.util.List; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.Id; import javax.persistence.JoinColumn; import javax.persistence.ManyToOne; import javax.persistence.NamedQueries; import javax.persistence.NamedQuery; import javax.persistence.OneToMany; import javax.persistence.Table; @Entity @NamedQueries({ @NamedQuery(name = "Employee.findAll", query = "select o from Employee o") }) @Table(name = "EMPLOYEES") public class Employee implements Serializable { @Column(name="COMMISSION_PCT") private Double commissionPct; @Column(name="DEPARTMENT_ID") private Long departmentId; @Column(nullable = false, unique = true, length = 25) private String email; @Id @Column(name="EMPLOYEE_ID", nullable = false) private Long employeeId; @Column(name="FIRST_NAME", length = 20) private String firstName; @Column(name="HIRE_DATE", nullable = false) private Timestamp hireDate; @Column(name="JOB_ID", nullable = false, length = 10) private String jobId; @Column(name="LAST_NAME", nullable = false, length = 25) private String lastName; @Column(name="PHONE_NUMBER", length = 20) private String phoneNumber; private Double salary; @ManyToOne @JoinColumn(name = "MANAGER_ID") private Employee employee; @OneToMany(mappedBy = "employee") private List employeeList; public Employee() { } public Employee(Double commissionPct, Long departmentId, String email, Long employeeId, String firstName, Timestamp hireDate, String jobId, String lastName, Employee employee, String phoneNumber, Double salary) { this.commissionPct = commissionPct; this.departmentId = departmentId; this.email = email; this.employeeId = employeeId; this.firstName = firstName; this.hireDate = hireDate; this.jobId = jobId; this.lastName = lastName; this.employee = employee; this.phoneNumber = phoneNumber; this.salary = salary; } public Double getCommissionPct() { return commissionPct; } public void setCommissionPct(Double commissionPct) { this.commissionPct = commissionPct; } public Long getDepartmentId() { return departmentId; } public void setDepartmentId(Long departmentId) { this.departmentId = departmentId; } public String getEmail() { 
return email; } public void setEmail(String email) { this.email = email; } public Long getEmployeeId() { return employeeId; } public void setEmployeeId(Long employeeId) { this.employeeId = employeeId; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public Timestamp getHireDate() { return hireDate; } public void setHireDate(Timestamp hireDate) { this.hireDate = hireDate; } public String getJobId() { return jobId; } public void setJobId(String jobId) { this.jobId = jobId; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getPhoneNumber() { return phoneNumber; } public void setPhoneNumber(String phoneNumber) { this.phoneNumber = phoneNumber; } public Double getSalary() { return salary; } public void setSalary(Double salary) { this.salary = salary; } public Employee getEmployee() { return employee; } public void setEmployee(Employee employee) { this.employee = employee; } public List getEmployeeList() { return employeeList; } public void setEmployeeList(List employeeList) { this.employeeList = employeeList; } public Employee addEmployee(Employee employee) { getEmployeeList().add(employee); employee.setEmployee(this); return employee; } public Employee removeEmployee(Employee employee) { getEmployeeList().remove(employee); employee.setEmployee(null); return employee; } } JPA could be used in standalone applications and Java EE containers. In both worlds you normally create a Facade to retrieve or store the values of the Entities to or from the database. The Facade does this via an EntityManager which will be injected by the Java EE container. Here is sample Facade Session Bean for a Java EE container. 
package demo.proxy.ejb; import java.util.HashMap; import java.util.List; import javax.ejb.Local; import javax.ejb.Remote; import javax.ejb.Stateless; import javax.persistence.EntityManager; import javax.persistence.PersistenceContext; import javax.persistence.Query; import javax.interceptor.AroundInvoke; import javax.interceptor.InvocationContext; import oracle.jdbc.driver.OracleConnection; import org.eclipse.persistence.config.EntityManagerProperties; import org.eclipse.persistence.internal.jpa.EntityManagerImpl; @Stateless(name = "DataFacade", mappedName = "ProxyUser-TestEJB-DataFacade") @Remote @Local public class DataFacadeBean implements DataFacade, DataFacadeLocal { @PersistenceContext(unitName = "TestEJB") private EntityManager em; private String username; public Object queryByRange(String jpqlStmt, int firstResult, int maxResults) { // setSessionUser(); Query query = em.createQuery(jpqlStmt); if (firstResult 0) { query = query.setFirstResult(firstResult); } if (maxResults 0) { query = query.setMaxResults(maxResults); } return query.getResultList(); } public Employee persistEmployee(Employee employee) { // setSessionUser(); em.persist(employee); return employee; } public Employee mergeEmployee(Employee employee) { // setSessionUser(); return em.merge(employee); } public void removeEmployee(Employee employee) { // setSessionUser(); employee = em.find(Employee.class, employee.getEmployeeId()); em.remove(employee); } /** select o from Employee o */ public List getEmployeeFindAll() { Query q = em.createNamedQuery("Employee.findAll"); return q.getResultList(); } Putting Both Together To use Proxy Authentication with JPA and within a Java EE container you have to take care of the additional requirements: Use an OCI JDBC driver Provide the user name that connects through the proxy user Use an OCI JDBC driver To use the OCI JDBC driver you need to set up your JDBC data source file to use the correct JDBC URL. hr jdbc:oracle:oci8:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SID=XE))) oracle.jdbc.OracleDriver user app_user 62C32F70E98297522AD97E15439FAC0E SQL SELECT 1 FROM DUAL jdbc/hrDS Application Additionally you need to make sure that the version of the shared libraries of the OCI driver match the version of the JDBC driver in your Java EE container or Java application and are within your PATH (on Windows) or LD_LIBRARY_PATH (on most Unix-based systems). Installing the Oracle Database Instance Client software works perfectly. Provide the user name that connects through the proxy user This part needs some modification of your application software and session facade. Session Facade Changes In the Session Facade we must ensure that every call that goes through the EntityManager must be prepared correctly and uniquely assigned to this session. The second is really important, as the EntityManager works with a connection pool and can not guarantee that we set the proxy user on the connection that will be used for the database activities. To avoid changing every method call of the Session Facade we provide a method to set the username of the user that connects through the proxy user. This method needs to be called by the Facade client bfore doing anything else. public void setUsername(String name) { username = name; } Next we provide a means to instruct the TopLink EntityManager Delegate to use Oracle Proxy Authentication. (I love small helper methods to hide the nitty-gritty details and avoid repeating myself.) 
private void setSessionUser() { setSessionUser(username); } private void setSessionUser(String user) { if (user != null && !user.isEmpty()) { EntityManagerImpl emDelegate = ((EntityManagerImpl)em.getDelegate()); emDelegate.setProperty(EntityManagerProperties.ORACLE_PROXY_TYPE, OracleConnection.PROXYTYPE_USER_NAME); emDelegate.setProperty(OracleConnection.PROXY_USER_NAME, user); emDelegate.setProperty(EntityManagerProperties.EXCLUSIVE_CONNECTION_MODE, "Always"); } } The final step is use the EJB 3.0 AroundInvoke interceptor. This interceptor will be called around every method invocation. We therefore check whether the Facade methods will be called or not. If so, we set the user for proxy authentication and the normal method flow continues. @AroundInvoke public Object proxyInterceptor(InvocationContext invocationCtx) throws Exception { if (invocationCtx.getTarget() instanceof DataFacadeBean) { setSessionUser(); } return invocationCtx.proceed(); } Benefits Using Oracle Proxy Authentification has a number of additional benefits appart from implementing the role model of your application: Fine grained access control for temporary users of the account, without compromising the original password. Enabling database auditing and logging. Better identification of performance bottlenecks. References Effective Oracle Database 10g Security by Design, David Knox TopLink Developer's Guide, Chapter 98
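    Not shown in the article: a rough sketch of how a remote client might drive the facade so that proxy authentication kicks in. The JNDI name is hypothetical (derive it from the bean's mappedName in your container), and queryByRange returns Object in the article, hence the cast.

        import java.util.List;
        import javax.naming.InitialContext;

        public class FacadeClientSketch {
            public static void main(String[] args) throws Exception {
                // Hypothetical JNDI name; adjust to whatever your container binds for the mappedName
                DataFacade facade = (DataFacade) new InitialContext()
                        .lookup("ProxyUser-TestEJB-DataFacade#demo.proxy.ejb.DataFacade");
                facade.setUsername("hr");   // the real end user; app_user remains the pooled proxy user
                List<?> employees = (List<?>) facade.queryByRange("select o from Employee o", 0, 25);
                System.out.println(employees.size() + " employees visible to hr");
            }
        }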

    Read the article

  • In Wireshark's Protocol Hierarchy Statistics screen, is the total byte count of a capture the sum of the Bytes column or just the top line (Frame)?

    - by Howiecamp
    Part 1 - I'm looking at Wireshark's Protocol Hierarchy Statistics screen (sample below). Is the total byte count of the capture the sum of the Bytes column, or just the top line (Frame)? I'm 99% sure it's the latter because of protocol rollup, but I wanted to confirm. Part 2 - From the Wireshark documentation on this screen: "Protocol layers can consist of packets that won't contain any higher layer protocol, so the sum of all higher layer packets may not sum up to the protocols packet count. Example: In the screenshot TCP has 85,83% but the sum of the subprotocols (HTTP, ...) is much less. This may be caused by TCP protocol overhead, e.g. TCP ACK packets won't be counted as packets of the higher layer)." Can you explain this?

    Read the article

  • In XSLT, is it possible to use the value of an XPath expression in a call to a template using a param?

    - by Cell
    I am performing an xsl transform and in it I call a template with a param using the following code <xsl:call-template name="GenerateColumns"> <xsl:with-param name="curRow" select="$curRow"/> <xsl:with-param name="curCol" select="$curCol + 1"/> </xsl:call-template> This calls a template function which outputs part of a table element in HTML. The curRow and curCol are used to determine which row and column we are in the table. gbl_maxCols is set to the number of columns in an html table <xsl:template name="GenerateColumns"> <xsl:when test="$curCol &lt;= $gbl_maxCols"> <td> <xsl:attribute="colspan"> <xsl:value-of select="/page/column/@skipColumns"/> </xsl:attribute> </xsl:when> </xsl:template> The result of this function is a set of td elements, however some of these elements (those with a skipColumn attribute greater than 1 span more than 1 column, I need to skip this many columns with the next call to generateColumns. this works just like I would expect in the case where I simply increment the curCol param but I have a case where I need to use the value from the xml attribute skipColumns in the math to calculate the value for curCol. In the above case I iterate through all the columns and this works for the majority of my use cases. However in same cases I need to skip over some of the columns and need to pass in that value from the xml attribute to calculate how many columns I need to skip. My naive first attempt was something like this <xsl:call-template name="GenerateColumns"> <xsl:with-param name="curRow" select="$curRow"/> <xsl:with-param name="curCol" select="$curCol + /page/column/@skipColumns"/> </xsl:call-template> But unforutnately this does not seem to work. Is there any way to use an attribute from an xml page in the calculation for the value of a param in xsl. My xml page is something like this (edited heavily since the xml file is rather large) <page> <column name="blank" skipColumns="1"/> <column name="blank" skipColumns="1"/> <column name="test" skipColumns="3"/> <column name="blank" skipColumns="1"/> <column name="test2" skipColumns="6"/> </page> after all of this I would like to have a set of td elements like the following <td></td><td></td><td colSpan="3"></td><td></td><td colSpan="6"></td> if I just iterate through the columns I instead end up with something like this which gives me more td elements than I should have <td></td><td></td><td colSpan="3"></td><td></td><td colSpan="6"></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td> Edited to provide more information
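    An XPath expression is perfectly legal inside the select of xsl:with-param, so the attribute can go straight into the arithmetic. The sketch below makes one assumption not stated in the question: that the column to read skipColumns from is addressed by position using $curCol. A bare /page/column/@skipColumns always resolves to the first column element, which is a common reason this kind of expression appears not to work.

        <xsl:call-template name="GenerateColumns">
          <xsl:with-param name="curRow" select="$curRow"/>
          <!-- pull the current column's skipColumns attribute into the increment -->
          <xsl:with-param name="curCol"
                          select="$curCol + number(/page/column[position() = $curCol]/@skipColumns)"/>
        </xsl:call-template>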

    Read the article

  • How do I code this relationship in SQLAlchemy?

    - by Martin Del Vecchio
    I am new to SQLAlchemy (and SQL, for that matter). I can't figure out how to code the idea I have in my head. I am creating a database of performance-test results. A test run consists of a test type and a number (this is class TestRun below) A test suite consists the version string of the software being tested, and one or more TestRun objects (this is class TestSuite below). A test version consists of all test suites with the given version name. Here is my code, as simple as I can make it: from sqlalchemy import * from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import relationship, backref, sessionmaker Base = declarative_base() class TestVersion (Base): __tablename__ = 'versions' id = Column (Integer, primary_key=True) version_name = Column (String) def __init__ (self, version_name): self.version_name = version_name class TestRun (Base): __tablename__ = 'runs' id = Column (Integer, primary_key=True) suite_directory = Column (String, ForeignKey ('suites.directory')) suite = relationship ('TestSuite', backref=backref ('runs', order_by=id)) test_type = Column (String) rate = Column (Integer) def __init__ (self, test_type, rate): self.test_type = test_type self.rate = rate class TestSuite (Base): __tablename__ = 'suites' directory = Column (String, primary_key=True) version_id = Column (Integer, ForeignKey ('versions.id')) version_ref = relationship ('TestVersion', backref=backref ('suites', order_by=directory)) version_name = Column (String) def __init__ (self, directory, version_name): self.directory = directory self.version_name = version_name # Create a v1.0 suite suite1 = TestSuite ('dir1', 'v1.0') suite1.runs.append (TestRun ('test1', 100)) suite1.runs.append (TestRun ('test2', 200)) # Create a another v1.0 suite suite2 = TestSuite ('dir2', 'v1.0') suite2.runs.append (TestRun ('test1', 101)) suite2.runs.append (TestRun ('test2', 201)) # Create another suite suite3 = TestSuite ('dir3', 'v2.0') suite3.runs.append (TestRun ('test1', 102)) suite3.runs.append (TestRun ('test2', 202)) # Create the in-memory database engine = create_engine ('sqlite://') Session = sessionmaker (bind=engine) session = Session() Base.metadata.create_all (engine) # Add the suites in version1 = TestVersion (suite1.version_name) version1.suites.append (suite1) session.add (suite1) version2 = TestVersion (suite2.version_name) version2.suites.append (suite2) session.add (suite2) version3 = TestVersion (suite3.version_name) version3.suites.append (suite3) session.add (suite3) session.commit() # Query the suites for suite in session.query (TestSuite).order_by (TestSuite.directory): print "\nSuite directory %s, version %s has %d test runs:" % (suite.directory, suite.version_name, len (suite.runs)) for run in suite.runs: print " Test '%s', result %d" % (run.test_type, run.rate) # Query the versions for version in session.query (TestVersion).order_by (TestVersion.version_name): print "\nVersion %s has %d test suites:" % (version.version_name, len (version.suites)) for suite in version.suites: print " Suite directory %s, version %s has %d test runs:" % (suite.directory, suite.version_name, len (suite.runs)) for run in suite.runs: print " Test '%s', result %d" % (run.test_type, run.rate) The output of this program: Suite directory dir1, version v1.0 has 2 test runs: Test 'test1', result 100 Test 'test2', result 200 Suite directory dir2, version v1.0 has 2 test runs: Test 'test1', result 101 Test 'test2', result 201 Suite directory dir3, version v2.0 has 2 test runs: Test 'test1', result 102 
Test 'test2', result 202 Version v1.0 has 1 test suites: Suite directory dir1, version v1.0 has 2 test runs: Test 'test1', result 100 Test 'test2', result 200 Version v1.0 has 1 test suites: Suite directory dir2, version v1.0 has 2 test runs: Test 'test1', result 101 Test 'test2', result 201 Version v2.0 has 1 test suites: Suite directory dir3, version v2.0 has 2 test runs: Test 'test1', result 102 Test 'test2', result 202 This is not correct, since there are two TestVersion objects with the name 'v1.0'. I hacked my way around this by adding a private list of TestVersion objects, and a function to find a matching one: versions = [] def find_or_create_version (version_name): # Find existing for version in versions: if version.version_name == version_name: return (version) # Create new version = TestVersion (version_name) versions.append (version) return (version) Then I modified my code that adds the records to use it: # Add the suites in version1 = find_or_create_version (suite1.version_name) version1.suites.append (suite1) session.add (suite1) version2 = find_or_create_version (suite2.version_name) version2.suites.append (suite2) session.add (suite2) version3 = find_or_create_version (suite3.version_name) version3.suites.append (suite3) session.add (suite3) Now the output is what I want: Suite directory dir1, version v1.0 has 2 test runs: Test 'test1', result 100 Test 'test2', result 200 Suite directory dir2, version v1.0 has 2 test runs: Test 'test1', result 101 Test 'test2', result 201 Suite directory dir3, version v2.0 has 2 test runs: Test 'test1', result 102 Test 'test2', result 202 Version v1.0 has 2 test suites: Suite directory dir1, version v1.0 has 2 test runs: Test 'test1', result 100 Test 'test2', result 200 Suite directory dir2, version v1.0 has 2 test runs: Test 'test1', result 101 Test 'test2', result 201 Version v2.0 has 1 test suites: Suite directory dir3, version v2.0 has 2 test runs: Test 'test1', result 102 Test 'test2', result 202 This feels wrong to me; it doesn't feel right that I am manually keeping track of the unique version names, and manually adding the suites to the appropriate TestVersion objects. Is this code even close to being correct? And what happens when I'm not building the entire database from scratch, as in this example. If the database already exists, do I have to query the database's TestVersion table to discover the unique version names? Thanks in advance. I know this is a lot of code to wade through, and I appreciate the help.
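    A hedged sketch of the usual get-or-create idiom, letting the session do the lookup so the hand-maintained versions list goes away; it assumes version_name is meant to be unique (declaring the column with unique=True would let the database enforce that), and with the session's default autoflush a TestVersion added earlier in the same session is found as well.

        def find_or_create_version(session, version_name):
            # Query the database (and, via autoflush, anything pending in this session)
            version = session.query(TestVersion).filter_by(version_name=version_name).first()
            if version is None:
                version = TestVersion(version_name)
                session.add(version)
            return version

        # Usage: works the same whether the database is freshly created or pre-existing
        version1 = find_or_create_version(session, suite1.version_name)
        version1.suites.append(suite1)
        session.add(suite1)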

    Read the article

  • @OneToMany association joining on the wrong field

    - by april26
    I have 2 tables, devices which contains a list of devices and dev_tags, which contains a list of asset tags for these devices. The tables join on dev_serial_num, which is the primary key of neither table. The devices are unique on their ip_address field and they have a primary key identified by dev_id. The devices "age out" after 2 weeks. Therefore, the same piece of hardware can show up more than once in devices. I mention that to explain why there is a OneToMany relationship between dev_tags and devices where it seems that this should be a OneToOne relationship. So I have my 2 entities @Entity @Table(name = "dev_tags") public class DevTags implements Serializable { private Integer tagId; private String devTagId; private String devSerialNum; private List<Devices> devices; @Id @GeneratedValue @Column(name = "tag_id") public Integer getTagId() { return tagId; } public void setTagId(Integer tagId) { this.tagId = tagId; } @Column(name="dev_tag_id") public String getDevTagId() { return devTagId; } public void setDevTagId(String devTagId) { this.devTagId = devTagId; } @Column(name="dev_serial_num") public String getDevSerialNum() { return devSerialNum; } public void setDevSerialNum(String devSerialNum) { this.devSerialNum = devSerialNum; } @OneToMany(mappedBy="devSerialNum") public List<Devices> getDevices() { return devices; } public void setDevices(List<Devices> devices) { this.devices = devices; } } and this one public class Devices implements java.io.Serializable { private Integer devId; private Integer officeId; private String devSerialNum; private String devPlatform; private String devName; private OfficeView officeView; private DevTags devTag; public Devices() { } @Id @GeneratedValue(strategy = IDENTITY) @Column(name = "dev_id", unique = true, nullable = false) public Integer getDevId() { return this.devId; } public void setDevId(Integer devId) { this.devId = devId; } @Column(name = "office_id", nullable = false, insertable=false, updatable=false) public Integer getOfficeId() { return this.officeId; } public void setOfficeId(Integer officeId) { this.officeId = officeId; } @Column(name = "dev_serial_num", nullable = false, length = 64, insertable=false, updatable=false) @NotNull @Length(max = 64) public String getDevSerialNum() { return this.devSerialNum; } public void setDevSerialNum(String devSerialNum) { this.devSerialNum = devSerialNum; } @Column(name = "dev_platform", nullable = false, length = 64) @NotNull @Length(max = 64) public String getDevPlatform() { return this.devPlatform; } public void setDevPlatform(String devPlatform) { this.devPlatform = devPlatform; } @Column(name = "dev_name") public String getDevName() { return devName; } public void setDevName(String devName) { this.devName = devName; } @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "office_id") public OfficeView getOfficeView() { return officeView; } public void setOfficeView(OfficeView officeView) { this.officeView = officeView; } @ManyToOne() @JoinColumn(name="dev_serial_num") public DevTags getDevTag() { return devTag; } public void setDevTag(DevTags devTag) { this.devTag = devTag; } } I messed around a lot with @JoinColumn(name=) and the mappedBy attribute of @OneToMany and I just cannot get this right. I finally got the darn thing to compile, but the query is still trying to join devices.dev_serial_num to dev_tags.tag_id, the @Id for this entity. 
    Here is the transcript from the console:

        13:12:16,970 INFO [STDOUT] Hibernate: select devices0_.office_id as office5_2_, devices0_.dev_id as dev1_2_, devices0_.dev_id as dev1_156_1_, devices0_.dev_name as dev2_156_1_, devices0_.dev_platform as dev3_156_1_, devices0_.dev_serial_num as dev4_156_1_, devices0_.office_id as office5_156_1_, devtags1_.tag_id as tag1_157_0_, devtags1_.comment as comment157_0_, devtags1_.dev_serial_num as dev3_157_0_, devtags1_.dev_tag_id as dev4_157_0_ from ond.devices devices0_ left outer join ond.dev_tags devtags1_ on devices0_.dev_serial_num=devtags1_.tag_id where devices0_.office_id=?
        13:12:16,970 INFO [IntegerType] could not read column value from result set: dev4_156_1_; Invalid value for getInt() - 'FDO1129Y2U4'
        13:12:16,970 WARN [JDBCExceptionReporter] SQL Error: 0, SQLState: S1009
        13:12:16,970 ERROR [JDBCExceptionReporter] Invalid value for getInt() - 'FDO1129Y2U4'

    That value for getInt(), 'FDO1129Y2U4', is a serial number, definitely not an int! What am I missing/misunderstanding here? Can I join 2 tables on any fields I want, or does at least one have to be a primary key?
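    You can join on a column other than the primary key, but the mapping has to say which column on the far side to match; with no referencedColumnName, @JoinColumn targets the other entity's @Id (tag_id here), which is exactly the join the log shows. A hedged sketch of the two sides (Hibernate accepts a non-primary-key referencedColumnName, though strict JPA providers may not):

        // In Devices: join devices.dev_serial_num to dev_tags.dev_serial_num,
        // not to dev_tags.tag_id (the default that produced the getInt() error).
        @ManyToOne(fetch = FetchType.LAZY)
        @JoinColumn(name = "dev_serial_num", referencedColumnName = "dev_serial_num")
        private DevTags devTag;

        // In DevTags: mappedBy names the association property on Devices ("devTag"),
        // not the plain String column field.
        @OneToMany(mappedBy = "devTag")
        private List<Devices> devices;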

    Read the article

  • "FOR UPDATE" v/s "LOCK IN SHARE MODE" : Allow concurrent threads to read updated "state" value of locked row

    - by shadesco
    I have the following scenario: User X logs in to the application from location lc1: call it Ulc1 User X (has been hacked, or some friend of his knows his login credential, or he just logs in from a different browser on his machine,etc.. u got the point) logs in at the same time from location lc2: call it Ulc2 I am using a main servlet which : - gets a connection from database pooling - sets autocommit to false - executes a command that goes through app layers: if all successful, set autocommit to true in a "finally" statement, and closes connection. Else if an exception happens, rollback(). In my database (mysql/innoDb) i have a "history" table, with row columns: id(primary key) |username | date | topic | locked The column "locked" has by default value "false" and it serves as a flag that marks if a specific row is locked or not. Each row is specific to a user (as u can see from the username column) So back to the scenario: --Ulc1 sends the command to update his history from the db for date "D" and topic "T". --Ulc2 sends the same command to update history from the db for the same date "D" and same topic "T" at the exact same time. I want to implement an mysql/innoDB locking system that will enable whichever thread arriving to do the following check: Is column "locked" for this row true or not? if true, return a message to the user that " he is already updating the same data from another location" if not true (ie not locked) : flag it as locked and update then reset locked to false once finished. Which of these two mysql locking techniques, will actually allow the 2nd arriving thread from reading the "updated" value of the locked column to decide wt action to take?Should i use "FOR UPDATE" or "LOCK IN SHARE MODE"? This scenario explains what i want to accomplish: - Ulc1 thread arrives first: column "locked" is false, set it to true and continue updating process - Ulc2 thread arrives while Ulc1's transaction is still in process, and even though the row is locked through innoDb functionalities, it doesn't have to wait but in fact reads the "new" value of column locked which is "true", and so doesn't in fact have to wait till Ulc1 transaction commits to read the value of the "locked" column(anyway by that time the value of this column will already have been reset to false). I am not very experienced with the 2 types of locking mechanisms, what i understand so far is that LOCK IN SHARE MODE allow other transaction to read the locked row while FOR UPDATE doesn't even allow reading. But does this read gets on the updated value? or the 2nd arriving thread has to wait the first thread to commit to then read the value? Any recommendations about which locking mechanism to use for this scenario is appreciated. Also if there's a better way to "check" if the row has been locked (other than using a true/false column flag) please let me know about it. 
thank you SOLUTION (Jdbc pseudocode example based on @Darhazer's answer) Table : [ id(primary key) |username | date | topic | locked ] connection.setautocommit(false); //transaction-1 PreparedStatement ps1 = "Select locked from tableName for update where id="key" and locked=false); ps1.executeQuery(); //transaction 2 PreparedStatement ps2 = "Update tableName set locked=true where id="key"; ps2.executeUpdate(); connection.setautocommit(true);// here we allow other transactions threads to see the new value connection.setautocommit(false); //transaction 3 PreparedStatement ps3 = "Update tableName set aField="Sthg" where id="key" And date="D" and topic="T"; ps3.executeUpdate(); // reset locked to false PreparedStatement ps4 = "Update tableName set locked=false where id="key"; ps4.executeUpdate(); //commit connection.setautocommit(true);
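    The same idea sketched in plain JDBC (not taken from the question): the check-and-set collapses into one atomic UPDATE, so a second session immediately learns it lost the race without FOR UPDATE or LOCK IN SHARE MODE, as long as the flag update is committed (autocommit on, or an explicit commit) before the long-running work begins. Table and column names follow the question; the helper names are made up.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        final class RowLock {
            // Returns true if this session flipped the flag, false if another session holds the row.
            static boolean tryLock(Connection con, long id) throws SQLException {
                try (PreparedStatement ps = con.prepareStatement(
                        "UPDATE history SET locked = TRUE WHERE id = ? AND locked = FALSE")) {
                    ps.setLong(1, id);
                    return ps.executeUpdate() == 1;   // 0 rows touched => already locked elsewhere
                }
            }

            static void unlock(Connection con, long id) throws SQLException {
                try (PreparedStatement ps = con.prepareStatement(
                        "UPDATE history SET locked = FALSE WHERE id = ?")) {
                    ps.setLong(1, id);
                    ps.executeUpdate();
                }
            }
        }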

    Read the article

  • Do’s and Don’ts Building SharePoint Applications

    - by Bil Simser
    SharePoint is a great platform for building quick LOB applications. Simple things from employee time trackers to server and software inventory to full blown Help Desks can be crafted up using SharePoint from just customizing Lists. No programming necessary. However there are a few tricks I’ve painfully learned over the years that you can use for your own solutions. DO What’s In A Name? When you create a new list, column, or view you’ll commonly name it something like “Expense Reports”. However this has the ugly effect of creating a url to the list as “Expense%20Reports”. Or worse, an internal field name of “Expense_x0x0020_Reports” which is not only cryptic but hard to remember when you’re trying to find the column by internal name. While “Expense Reports 2011” is user friendly, “ExpenseReports2011” is not (unless you’re a programmer). So that’s not the solution. Well, not entirely. Instead when you create your column or list or view use the scrunched up name (I can’t think of the technical term for it right now) of “ExpenseReports2011”, “WomenAtTheOfficeThatAreMen” or “KoalaMeatIsGoodWhenBroiled”. After you’ve created it, go back and change the name to the more friendly “Silly Expense Reports That Nobody Reads”. The original internal name will be the url and code friendly one without spaces while the one used on data entry forms and view headers will be the human version. Smart Columns When building a view include columns that make sense. By default when you add a column the “Add to default view” is checked. Resist the urge to be lazy and leave it checked. Uncheck that puppy and decide consciously what columns should be included in the view. Pick columns that make sense to what the user is trying to do. This means you have to talk to the user. Yes, I know. That can be trying at times and even painful. Go ahead, talk to them. You might learn something. Find out what’s important to them and why. If they’re doing something repetitively as part of their job, try to make their life easier by including what’s most important to them. Do they really need to see the Created *and* Modified date of a document or do they just need the title and author? You’ll only find out after talking to them (or getting them drunk in a bar and leaving them in the back alley handcuffed to a garbage bin, don’t ask). Gotta Keep it Separated Hey, views are there for a reason. Use them. While “All Items” is a fine way to present a list of well, all items, it’s hardly sufficient to present a list of servers built before the Y2K bug hit. You’ll be scrolling the list for hours finally arriving at Page 387 of 12,591 and cursing that SharePoint guy for convincing you that putting your hardware into a list would be of any use to anyone. Next to collecting the data, presenting it is just as important. Views are often overlooked and many times ignored or misused. They’re the way you can slice and dice the data up so that you’re not trying to consume 3,000 years of human evolution on a single web page. Remember views can be filtered so feel free to create a view for each status or one for each operating system or one for each species of Information Worker you might be putting in that list or document library. Not only will it reduce the number of items someone sees at one time, it’ll also make the information that much more relevant. Also remember that each view is a separate page. Use it in navigation by creating a menu on the Quick Launch to each view. 
The discoverability of the Views menu isn’t overly obvious and if you violate the rule of columns (see Horizontally Scrolling below) the view menu doesn’t even show up until you shuffle the scroll bar to the left. Navigation links, big giant buttons, a screaming flashing “CLICK ME NOW” will help your users find their way. Sort It! Views are great so we’re building nice, rich views for the user. Awesomesauce. However sort is not very discoverable by the user. For example when you’re looking at a view how do you know if it’s ascending or descending and what is it sorted on. Maybe it’s sorted using two fields so what’s that all about? Help your users by letting them know the information they’re looking at is sorted. Maybe you name the view something appropriate like “Bogus Expense Claims Sorted By Deadbeats”. If you use the naming strategy just make sure you keep the name consistent with the description. In the previous example their better be a Deadbeat column so I can see the sort in action. Having a “Loser” column, while equally correct, is a little obtuse to the average Information Worker. Remember, they usually don’t use acronyms and even if they knew how to, it’s not immediately obvious to them that’s what you’re trying to convey. Another option is to simply drop a Content Editor Web Part above the list and explain exactly the view they’re looking at. Each view is it’s own page so one CEWP won’t be used across the board. Be descriptive in what the user is seeing but try to keep it brief. Dumping the first chapter of I, Claudius might be informative to the data but can gobble up screen real estate and miss the point of having the list. DO NOT Useless Attachments The attachments column is, in a word, useless. For the most part. Sure it indicates there’s an attachment on the list item but in the grand scheme of things that’s not overly informative. Maybe it is and by all means, if it makes sense to you include it. Colour it. Make it shine and stand like the Return of Clippy on every SharePoint list. Without it being functional it can be boring. EndUserSharePoint.com has an article to make the son of Clippy that much more useful so feel free to head over and check out this blog post by Paul Grenier on the task (Warning code ahead! Danger Will Robinson!) In any case, I would suggest you remove it from your views. Again if it’s important then include it but consider the jQuery solution above to make it functional. It’s added by default to views and one of things that people forget to clean up. Horizontal Scrolling Screen real estate is premium so building a list that contains 8,000 columns and stretches horizontally across 15 screens probably isn’t the most user friendly experience. Most users can’t figure out how to scroll vertically let alone horizontally so don’t make it even that more confusing for them. Take the Steve Krug approach in your view designs and try not to make the user think. Again views are your friend. Consider splitting up the data into views where one view contains 10 columns and other view contains the other 10. Okay, maybe your information doesn’t work that way but humans can only process 7 pieces of data at a time, 10 at most (then their heads explode and you don’t want to clean that mess up, especially on a Friday night before the big dance). It drives me batshit crazy when I see a view with 80 columns of data. I often ask the user “So what do you do with all this information”. 
The response is usually “With this data [the first 10 columns] I decide if I’m going to fire everyone, and with this data [the next 10 columns] I decide if I’m going to set the building on fire and collect the insurance”. It’s at that point I show them how to create two new views “People Who Are About To Get The Axe” and “Beach Time For The Executives”. Again, talk to your users and try to reason with them on cutting down the number of columns they see at once. Vertical Scrolling Another big faux pas I find is the use of multi-line comment fields in views. It’s not so bad when you have a statement like this in your view: “I really like, oh my god, thought I was going to scream when I saw this turtle then I decided what I was going to have for dinner and frankly I hate having to work late so when I was talking to the customer I thought, oh my god, what if the customer has turtles and then it appeared to me that I really was hungry so I'm going to have lunch now.” It’s fine if that’s the only column along with two or three others, but once you slap those 20 columns of data into the list, the comment field wraps and forms a new multi-page novel that takes up your entire screen. Do everyone a favour and just avoid adding the column to views. Train the user to just click through to the item if they need to see the contents. Duplicate Information Duplication is never good. Views and great as you can group data together. For example create a view of project status reports grouped by author. Then you can see what project manager is being a dip and not submitting their report. However if you group by author do you really need the Created By field as well in the view? Or if the view is grouped by Project then Author do you need both. Horizontal real estate is always at a premium so try not to clutter up the view with duplicate data like this. Oh  yeah, if you’re scratching your head saying “But Bil, if I don’t include the Project name in the view and I have a lot of items then how do I know which one I’m looking at”. That’s a hint that your grouping is too vague or you have too much data in the view based on that criteria. Filter it down a notch, create some views, and try to keep the group down to a single screen where you can see the group header at the top of the page. Again it’s just managing the information you have. Redundant, See Redundant This partially relates to duplicate information and smart columns but basically remember to not include the obvious in a view. Remember, don’t make me think. If you’ve gone to the trouble (and it was a lot of trouble wasn’t it?) to create separate views of your data by creating a “September Zombie Brain Sales”, “October Zombie Brain Sales”, etc. then please for the love of all that is holy do not include the Month and Product columns in your view. Similarly if you create a “My” view of anything (“My Favourite Brands of Spandex”, “My Co-Workers I Find The Urge To Disinfect”) then again, do not include the owner or author field (or whatever field you use to identify “My”). That’s just silly. Hope that helps! Happy customizing!

    Read the article

  • 12c - Invisible Columns...

    - by noreply(at)blogger.com (Thomas Kyte)
    Remember when 11g first came out and we had "invisible indexes"?  It seemed like a confusing feature - indexes that would be maintained by modifications (hence slowing them down), but would not be used by queries (hence never speeding them up).  But - after you looked at them a while, you could see how they can be useful.  For example - to add an index in a running production system, an index used by the next version of the code to be introduced later that week - but not tested against the queries in version one of the application in place now.  We all know that when you add an index - one of three things can happen - a given query will go much faster, it won't affect a given query at all, or... It will make some untested query go much much slower than it used to.  So - invisible indexes allowed us to modify the schema in a 'safe' manner - hiding the change until we were ready for it.Invisible columns accomplish the same thing - the ability to introduce a change while minimizing any negative side effects of that change.  Normally when you add a column to a table - any program with a SELECT * would start seeing that column, and programs with an INSERT INTO T VALUES (...) would pretty much immediately break (an INSERT without a list of columns in it).  Now we can add a column to a table in an invisible fashion, the column will not show up in a DESCRIBE command in SQL*Plus, it will not be returned with a SELECT *, it will not be considered in an INSERT INTO T VALUES statement.  It can be accessed by any query that asks for it, it can be populated by an INSERT statement that references it, but you won't see it otherwise.For example, let's start with a simple two column table:ops$tkyte%ORA12CR1> create table t  2  ( x int,  3    y int  4  )  5  /Table created.ops$tkyte%ORA12CR1> insert into t values ( 1, 2 );1 row created.Now, we will add an invisible column to it:ops$tkyte%ORA12CR1> alter table t add                     ( z int INVISIBLE );Table altered.Notice that a DESCRIBE will not show us this column:ops$tkyte%ORA12CR1> desc t Name              Null?    Type ----------------- -------- ------------ X                          NUMBER(38) Y                          NUMBER(38)and existing inserts are unaffected by it:ops$tkyte%ORA12CR1> insert into t values ( 3, 4 );1 row created.A SELECT * won't see it either:ops$tkyte%ORA12CR1> select * from t;         X          Y---------- ----------         1          2         3          4But we have full access to it (in well written programs! The ones that use a column list in the insert and select - never relying on "defaults":ops$tkyte%ORA12CR1> insert into t (x,y,z)                         values ( 5,6,7 );1 row created.ops$tkyte%ORA12CR1> select x, y, z from t;         X          Y          Z---------- ---------- ----------         1          2         3          4         5          6          7and when we are sure that we are ready to go with this column, we can just modify it:ops$tkyte%ORA12CR1> alter table t modify z visible;Table altered.ops$tkyte%ORA12CR1> select * from t;         X          Y          Z---------- ---------- ----------         1          2         3          4         5          6          7I will say that a better approach to this - one that is available in 11gR2 and above - would be to use editioning views (part of Edition Based Redefinition - EBR ).  
I would rather use EBR over this approach, but in an environment where EBR is not being used, or the editioning views are not in place, this will achieve much the same.

Read these for information on EBR:
http://www.oracle.com/technetwork/issue-archive/2010/10-jan/o10asktom-172777.html
http://www.oracle.com/technetwork/issue-archive/2010/10-mar/o20asktom-098897.html
http://www.oracle.com/technetwork/issue-archive/2010/10-may/o30asktom-082672.html
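For comparison, a minimal sketch of the editioning-view alternative the post prefers, under the assumption that the application reads and writes through a view layer rather than the table itself; the object names are illustrative, not from the post.

    -- One-time setup: allow editions for the schema and hide the table behind a view
    ALTER USER scott ENABLE EDITIONS;
    CREATE OR REPLACE EDITIONING VIEW t_v AS SELECT x, y FROM t;   -- the application uses T_V, not T

    -- Prepare the next release in its own edition
    CREATE EDITION release_2;
    ALTER SESSION SET EDITION = release_2;

    -- The new column exists in the table, but only the new edition's view exposes it
    ALTER TABLE t ADD (z INT);
    CREATE OR REPLACE EDITIONING VIEW t_v AS SELECT x, y, z FROM t;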

    Read the article

  • DBCC CHECKDB on VVLDB and latches (Or: My Pain is Your Gain)

    - by Argenis
      Does your CHECKDB hurt, Argenis? There is a classic blog series by Paul Randal [blog|twitter] called “CHECKDB From Every Angle” which is pretty much mandatory reading for anybody who’s even remotely considering going for the MCM certification, or its replacement (the Microsoft Certified Solutions Master: Data Platform – makes my fingers hurt just from typing it). Of particular interest is the post “Consistency Options for a VLDB” – on it, Paul provides solid, timeless advice (I use the word “timeless” because it was written in 2007, and it all applies today!) on how to perform checks on very large databases. Well, here I was trying to figure out how to make CHECKDB run faster on a restored copy of one of our databases, which happens to exceed 7TB in size. The whole thing was taking several days on multiple systems, regardless of the storage used – SAS, SATA or even SSD…and I actually didn’t pay much attention to how long it was taking, or even bothered to look at the reasons why - as long as it was finishing okay and found no consistency errors. Yes – I know. That was a huge mistake, as corruption found in a database several days after taking place could only allow for further spread of the corruption – and potentially large data loss. In the last two weeks I increased my attention towards this problem, as we noticed that CHECKDB was taking EVEN LONGER on brand new all-flash storage in the SAN! I couldn’t really explain it, and were almost ready to blame the storage vendor. The vendor told us that they could initially see the server driving decent I/O – around 450Mb/sec, and then it would settle at a very slow rate of 10Mb/sec or so. “Hum”, I thought – “CHECKDB is just not pushing the I/O subsystem hard enough”. Perfmon confirmed the vendor’s observations. Dreaded @BlobEater What was CHECKDB doing all the time while doing so little I/O? Eating Blobs. It turns out that CHECKDB was taking an extremely long time on one of our frankentables, which happens to be have 35 billion rows (yup, with a b) and sucks up several terabytes of space in the database. We do have a project ongoing to purge/split/partition this table, so it’s just a matter of time before we deal with it. But the reality today is that CHECKDB is coming to a screeching halt in performance when dealing with this particular table. Checking sys.dm_os_waiting_tasks and sys.dm_os_latch_stats showed that LATCH_EX (DBCC_OBJECT_METADATA) was by far the top wait type. I remembered hearing recently about that wait from another post that Paul Randal made, but that was related to computed-column indexes, and in fact, Paul himself reminded me of his article via twitter. But alas, our pathologic table had no non-clustered indexes on computed columns. I knew that latches are used by the database engine to do internal synchronization – but how could I help speed this up? After all, this is stuff that doesn’t have a lot of knobs to tweak. (There’s a fantastic level 500 talk by Bob Ward from Microsoft CSS [blog|twitter] called “Inside SQL Server Latches” given at PASS 2010 – and you can check it out here. DISCLAIMER: I assume no responsibility for any brain melting that might ensue from watching Bob’s talk!) 
Failed Hypotheses Earlier on this week I flew down to Palo Alto, CA, to visit our Headquarters – and after having a great time with my Monkey peers, I was relaxing on the plane back to Seattle watching a great talk by SQL Server MVP and fellow MCM Maciej Pilecki [twitter] called “Masterclass: A Day in the Life of a Database Transaction” where he discusses many different topics related to transaction management inside SQL Server. Very good stuff, and when I got home it was a little late – that slow DBCC CHECKDB that I had been dealing with was way in the back of my head. As I was looking at the problem at hand earlier on this week, I thought “How about I set the database to read-only?” I remembered one of the things Maciej had (jokingly) said in his talk: “if you don’t want locking and blocking, set the database to read-only” (or something to that effect, pardon my loose memory). I immediately killed the CHECKDB which had been running painfully for days, and set the database to read-only mode. Then I ran DBCC CHECKDB against it. It started going really fast (even a bit faster than before), and then throttled down again to around 10Mb/sec. All sorts of expletives went through my head at the time. Sure enough, the same latching scenario was present. Oh well. I even spent some time trying to figure out if NUMA was hurting performance. Folks on Twitter made suggestions in this regard (thanks, Lonny! [twitter]) …Eureka? This past Friday I was still scratching my head about the whole thing; I was ready to start profiling with XPERF to see if I could figure out which part of the engine was to blame and then get Microsoft to look at the evidence. After getting a bunch of good news I’ll blog about separately, I sat down for a figurative smack down with CHECKDB before the weekend. And then the light bulb went on. A sparse column. I thought that I couldn’t possibly be experiencing the same scenario that Paul blogged about back in March showing extreme latching with non-clustered indexes on computed columns. Did I even have a non-clustered index on my sparse column? As it turns out, I did. I had one filtered non-clustered index – with the sparse column as the index key (and only column). To prove that this was the problem, I went and setup a test. Yup, that'll do it The repro is very simple for this issue: I tested it on the latest public builds of SQL Server 2008 R2 SP2 (CU6) and SQL Server 2012 SP1 (CU4). First, create a test database and a test table, which only needs to contain a sparse column: CREATE DATABASE SparseColTest; GO USE SparseColTest; GO CREATE TABLE testTable (testCol smalldatetime SPARSE NULL); GO INSERT INTO testTable (testCol) VALUES (NULL); GO 1000000 That’s 1 million rows, and even though you’re inserting NULLs, that’s going to take a while. In my laptop, it took 3 minutes and 31 seconds. Next, we run DBCC CHECKDB against the database: DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS; This runs extremely fast, as least on my test rig – 198 milliseconds. Now let’s create a filtered non-clustered index on the sparse column: CREATE NONCLUSTERED INDEX [badBadIndex] ON testTable (testCol) WHERE testCol IS NOT NULL; With the index in place now, let’s run DBCC CHECKDB one more time: DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS; In my test system this statement completed in 11433 milliseconds. 11.43 full seconds. Quite the jump from 198 milliseconds. 
I went ahead and dropped the filtered non-clustered indexes on the restored copy of our production database, and ran CHECKDB against that. We went down from 7+ days to 19 hours and 20 minutes. Cue the “Argenis is not impressed” meme, please, Mr. LaRock. My pain is your gain, folks. Go check to see if you have any of such indexes – they’re likely causing your consistency checks to run very, very slow. Happy CHECKDBing, -Argenis ps: I plan to file a Connect item for this issue – I consider it a pretty serious bug in the engine. After all, filtered indexes were invented BECAUSE of the sparse column feature – and it makes a lot of sense to use them together. Watch this space and my twitter timeline for a link.
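To act on that closing advice, here is a hedged T-SQL sketch (not from the post) that lists filtered nonclustered indexes keyed on a sparse column, using the sys.indexes, sys.index_columns and sys.columns catalog views:

    SELECT OBJECT_NAME(i.object_id) AS table_name,
           i.name                   AS index_name,
           c.name                   AS sparse_key_column
    FROM sys.indexes i
    JOIN sys.index_columns ic
      ON ic.object_id = i.object_id AND ic.index_id = i.index_id
    JOIN sys.columns c
      ON c.object_id = ic.object_id AND c.column_id = ic.column_id
    WHERE i.has_filter = 1            -- filtered index
      AND c.is_sparse  = 1            -- sparse column
      AND ic.is_included_column = 0   -- key column, not an INCLUDE
    ORDER BY table_name, index_name;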

    Read the article

  • JPA 2, EJB 3.1, and JSF 2 in action! Developing Java EE 6 applications with WebLogic Server 12c | WebLogic Channel | Oracle

    - by ???02
    This is the third in a series of hands-on articles on Java EE 6 development with the application server WebLogic Server 12c and Oracle Enterprise Pack for Eclipse (OEPE) 12c. It covers two topics: building the presentation layer with JSF 2.0, and publishing a RESTful web service with JAX-RS.

    Building the presentation layer with JSF 2.0

    JSF (JavaServer Faces) 2.0 is the standard presentation-layer framework of Java EE 6; it uses XHTML (Facelets) views backed by managed beans rather than the JSP (JavaServer Pages) plus Struts style of earlier web applications. The sample application searches the EMPLOYEES table through the EmpLogic session bean built in the previous installments and shows the hits in a table on an XHTML page. In JSF 2.0 the backing bean only needs the @ManagedBean annotation.

    First enable the JSF facet: in OEPE, right-click the OOW project, choose Properties, open Project Facets, tick JavaServer Faces, click Apply and then OK.

    Next create the managed bean: right-click the OOW project, choose New - Class, enter "managed" as the Package and "IndexBean" as the Name, and click Finish. Implement the bean as follows.

    IndexBean.java:

        package managed;

        import java.util.ArrayList;
        import java.util.List;
        import javax.ejb.EJB;
        import javax.faces.bean.ManagedBean;
        import ejb.EmpLogic;
        import model.Employee;

        @ManagedBean
        public class IndexBean {
          @EJB
          private EmpLogic empLogic;
          private String keyword;
          private List<Employee> results = new ArrayList<Employee>();

          public String getKeyword() {
            return keyword;
          }
          public void setKeyword(String keyword) {
            this.keyword = keyword;
          }
          public List getResults() {
            return results;
          }
          public void actionSearch() {
            results.clear();
            results.addAll(empLogic.getEmp(keyword));
          }
        }

    The bean exposes two properties, keyword and results. The EmpLogic session bean is injected with the @EJB annotation, and actionSearch() clears the result list and refills it from empLogic.getEmp(keyword).

    Now create the view: under WebContent in the OOW project, add index.xhtml with the following content.

    index.xhtml:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:ui="http://java.sun.com/jsf/facelets"
          xmlns:h="http://java.sun.com/jsf/html"
          xmlns:f="http://java.sun.com/jsf/core">
        <h:head>
          <title>Employee Search</title>
        </h:head>
        <h:body>
          <h:form>
            <h:inputText value="#{indexBean.keyword}" />
            <h:commandButton action="#{indexBean.actionSearch}" value="Search" />
            <h:dataTable value="#{indexBean.results}" var="emp" border="1">
              <h:column>
                <f:facet name="header">
                  <h:outputText value="employeeId" />
                </f:facet>
                <h:outputText value="#{emp.employeeId}" />
              </h:column>
              <h:column>
                <f:facet name="header">
                  <h:outputText value="firstName" />
                </f:facet>
                <h:outputText value="#{emp.firstName}" />
              </h:column>
              <h:column>
                <f:facet name="header">
                  <h:outputText value="lastName" />
                </f:facet>
                <h:outputText value="#{emp.lastName}" />
              </h:column>
              <h:column>
                <f:facet name="header">
                  <h:outputText value="salary" />
                </f:facet>
                <h:outputText value="#{emp.salary}" />
              </h:column>
            </h:dataTable>
          </h:form>
        </h:body>
        </html>

    The page refers to the managed bean as indexBean; the h:commandButton invokes its actionSearch method, and the h:dataTable renders the results list.

    Then edit the deployment descriptor, WebContent/WEB-INF/web.xml of the OOW project, adding the servlet, servlet-mapping, and welcome-file-list elements inside web-app.

    web.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <web-app xmlns:javaee="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" version="3.0">
          <javaee:display-name>OOW</javaee:display-name>
          <servlet>
            <servlet-name>Faces Servlet</servlet-name>
            <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
          </servlet>
          <servlet-mapping>
            <servlet-name>Faces Servlet</servlet-name>
            <url-pattern>/faces/*</url-pattern>
          </servlet-mapping>
          <welcome-file-list>
            <welcome-file>/faces/index.xhtml</welcome-file>
          </welcome-file-list>
        </web-app>

    That completes the JSF part of the application, which now exercises JPA 2.0, EJB 3.1, and JSF 2.0 together. To run it, right-click the OOW project and choose Run As - Run on Server, select Oracle WebLogic Server 12c (12.1.1) and click Next, use Browse to set the Domain Directory to C:\Oracle\Middleware\user_projects\domains\base_domain, and click Finish. Next, in the OEPE Servers view, right-click Oracle WebLogic Server 12c, choose Properties, go to WebLogic - Publishing, select "Publish as an exploded archive", and click OK. Then right-click the OOW project once more, choose Run As - Run on Server, and click Finish. When the page opens in the browser, enter a search string; the employees whose firstName matches are listed in the table.

    Publishing a RESTful web service with JAX-RS

    Finally, publish part of the application as a RESTful web service with JAX-RS. Java EE 5 standardized JAX-WS for SOAP web services; Java EE 6 adds JAX-RS, so a session bean can be exposed as a RESTful web service simply by annotating it.

    To enable JAX-RS for the project, right-click the OOW project, choose Properties, open Project Facets, and tick JAX-RS (Rest Web Services); click the "Further configuration required" link to open the Modify Faceted Project dialog. For the JAX-RS Implementation Library, click Manage libraries, create a new user library named JAX-RS via New in the Preferences (Filtered) dialog, use Add JARs to add C:\Oracle\Middleware\modules\com.sun.jersey.core_1.1.0.0_1-9.jar, and click OK. Back in the Modify Faceted Project dialog, select the JAX-RS library, set the JAX-RS servlet class name to com.sun.jersey.spi.container.servlet.ServletContainer, and click OK, then OK again on the Project Facets page.

    Now annotate the EmpLogic session bean so it is published as a RESTful web service (the listing shows the REST-related additions to the EmpLogic bean created earlier).

    EmpLogic.java:

        package ejb;

        import java.util.List;
        import javax.ejb.LocalBean;
        import javax.ejb.Stateless;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;
        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;
        import model.Employee;

        @Stateless
        @LocalBean
        @Path("/emprest")
        public class EmpLogic {

            @PersistenceContext(unitName = "OOW")
            private EntityManager em;

            public EmpLogic() {
            }

            @GET
            @Path("/getname/{empno}")  // (1)
            @Produces("text/plain")    // (2)
            public String getEmpName(@PathParam("empno") long empno) {
                Employee e = em.find(Employee.class, empno);
                if (e == null) {
                    return "no data.";
                } else {
                    return e.getFirstName();
                }
            }
        }

    The class-level @Path("/emprest") makes the bean addressable as a RESTful resource over HTTP. The @Path at (1) binds the getname/{empno} URL template to the method, and @Produces at (2) declares the media type of the response; text/plain is used here, while application/xml or application/json would return the result as XML or JSON instead.

    Redeploy with Run As - Run on Server and Finish, then open http://localhost:7001/OOW/jaxrs/emprest/getname/186 in a browser: the firstName of the employee whose employeeId matches the last path segment (186) is returned.

    *    *    *

    Over these three installments we have walked through Java EE 6 application development with WebLogic Server 12c and OEPE; try the same steps in your own environment.

    Read the article

  • Primefaces: java.lang.ClassCastException: java.util.HashMap cannot be cast to ClassObject

    - by razegarra
    I have a problem with p:dataTable in Primefaces, I can not find the error. Class UsuarioAsig: public class UsuarioAsig { private BigDecimal codigopersona; private String nombre; private String paterno; private String materno; private String login; private String observacion; private String tipocontrol; private String externo; private String habilitado; private String nombreperfil; private BigDecimal codigousuario; ...get and set...} Class UsuarioAsigListaDataModel: public class UsuarioAsigListaDataModel extends ListDataModel<UsuarioAsig> implements SelectableDataModel<UsuarioAsig> { public UsuarioAsigListaDataModel(){} public UsuarioAsigListaDataModel(List<UsuarioAsig> data){super(data);} @Override public UsuarioAsig getRowData(String rowKey) { @SuppressWarnings("unchecked") List<UsuarioAsig> listaUsuarioAsigLectura = (List<UsuarioAsig>) getWrappedData(); for (UsuarioAsig usuarioAsig : listaUsuarioAsigLectura) { if (usuarioAsig.getCodigopersona().equals(rowKey)) { return usuarioAsig; } } return null; } @Override public Object getRowKey(UsuarioAsig usuarioAsig) { return usuarioAsig.getCodigopersona(); }} Controller UsuarioAsigController: @Controller("usuarioAsigController") @Scope(value = "session") public class UsuarioAsigController { private List<UsuarioAsig> listaUsuarioAsig; private HashMap<String, Object> selUsuarioAsig; private UsuarioAsigListaDataModel mediumUsuarioAsigModel; @Autowired UsuarioService usuarioService; ... public List<UsuarioAsig> getListaUsuarioAsig() { listaUsuarioAsig = usuarioService.selectAsig(); return listaUsuarioAsig; } public void setListaUsuarioAsig(List<UsuarioAsig> listaUsuarioAsig) { this.listaUsuarioAsig = listaUsuarioAsig; } public void setMediumUsuarioAsigModel(UsuarioAsigListaDataModel mediumUsuarioAsigModel) { this.mediumUsuarioAsigModel = mediumUsuarioAsigModel; } public UsuarioAsigListaDataModel getMediumUsuarioAsigModel() { listaUsuarioAsig = usuarioService.selectAsig(); mediumUsuarioAsigModel = new UsuarioAsigListaDataModel(listaUsuarioAsig); return mediumUsuarioAsigModel; } public void onRowSelect(SelectEvent event) { FacesMessage msg = new FacesMessage("Usuario seleccionado", ((UsuarioAsig) event.getObject()).getNombre()); FacesContext.getCurrentInstance().addMessage(null, msg); } } the error is generated when you click on one of the lines of datatable: asiginst.xhtml: <h:form id="form"> <p:growl id="msgs" showDetail="true" /> <p:dataTable id="usuarioAsigListaDataModel" var="usuarioAsig" value="#{usuarioAsigController.mediumUsuarioAsigModel}" rowKey="#{usuarioAsig.codigopersona}" selection="#{usuarioAsigController.selUsuarioAsig}" selectionMode="single" paginator="true" rows="10"> <p:ajax event="rowSelect" listener="#{usuarioAsigController.onRowSelect}" update=":form:msgs" /> <p:column headerText="Código" style="width:10%">#{usuarioAsig.codigopersona}</p:column> <p:column headerText="Nombre" style="width:32%">#{usuarioAsig.nombre}</p:column> <p:column headerText="Apellidos" style="width:32%">#{usuarioAsig.paterno} #{usuarioasig.materno}</p:column> <p:column headerText="Tipo Control" style="width:20%">#{usuarioAsig.tipocontrol}</p:column> <p:column headerText="Habilitado" style="width:6%">#{usuarioAsig.habilitado}</p:column> </p:dataTable> </h:form> THE ERROR IS GENERATED: WARNING: asiginst.xhtml @51,103 listener="#{usuarioAsigController.onRowSelect}": java.lang.ClassCastException: java.util.HashMap cannot be cast to com.datos.entidades.qry.UsuarioAsig javax.el.ELException: asiginst.xhtml @51,103 
listener="#{usuarioAsigController.onRowSelect}": java.lang.ClassCastException: java.util.HashMap cannot be cast to com.datos.entidades.qry.UsuarioAsig at com.sun.faces.facelets.el.TagMethodExpression.invoke(TagMethodExpression.java:111) at org.primefaces.behavior.ajax.AjaxBehaviorListenerImpl.processArgListener(AjaxBehaviorListenerImpl.java:69) at org.primefaces.behavior.ajax.AjaxBehaviorListenerImpl.processAjaxBehavior(AjaxBehaviorListenerImpl.java:56) at org.primefaces.event.SelectEvent.processListener(SelectEvent.java:40) at javax.faces.component.behavior.BehaviorBase.broadcast(BehaviorBase.java:102) at javax.faces.component.UIComponentBase.broadcast(UIComponentBase.java:760) at javax.faces.component.UIData.broadcast(UIData.java:1071) at javax.faces.component.UIData.broadcast(UIData.java:1093) at javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:794) at javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:1259) at com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.java:81) at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:409) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:999) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:565) at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:309) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) Caused by: java.lang.ClassCastException: java.util.HashMap cannot be cast to com.datos.entidades.qry.UsuarioAsig at com.controller.UsuarioAsigController.onRowSelect(UsuarioAsigController.java:217) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.el.parser.AstValue.invoke(AstValue.java:264) at org.apache.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:278) at com.sun.faces.facelets.el.TagMethodExpression.invoke(TagMethodExpression.java:105) ... 29 more

    Read the article

  • SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    Earlier I asked puzzle why statistics are not updated. Read the complete details over here: Statistics are not Updated but are Created Once In the question I have demonstrated even though statistics should have been updated after lots of insert in the table are not updated.(Read the details SQL SERVER – When are Statistics Updated – What triggers Statistics to Update) In this example I have created following situation: Create Table Insert 1000 Records Check the Statistics Now insert 10 times more 10,000 indexes Check the Statistics – it will be NOT updated Auto Update Statistics and Auto Create Statistics for database is TRUE Now I have requested two things in the example 1) Why this is happening? 2) How to fix this issue? I have many answers – here is the how I fixed it which has resolved the issue for me. NOTE: There are multiple answers to this problem and I will do my best to list all. Solution: Create nonclustered Index on column City Here is the working example for the same. Let us understand this script and there is added explanation at the end. -- Execution Plans Difference -- Estimated Execution Plan Vs Actual Execution Plan -- Create Sample Database CREATE DATABASE SampleDB GO USE SampleDB GO -- Create Table CREATE TABLE ExecTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO CREATE NONCLUSTERED INDEX IX_ExecTable1 ON ExecTable (City); GO -- Insert One Thousand Records -- INSERT 1 INSERT INTO ExecTable (ID,FirstName,LastName,City) SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Display statistics of the table sp_helpstats N'ExecTable', 'ALL' GO -- Select Statement SELECT FirstName, LastName, City FROM ExecTable WHERE City  = 'New York' GO -- Display statistics of the table sp_helpstats N'ExecTable', 'ALL' GO -- Replace your Statistics over here DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1); GO -------------------------------------------------------------- -- Round 2 -- Insert One Thousand Records -- INSERT 2 INSERT INTO ExecTable (ID,FirstName,LastName,City) SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Select Statement SELECT FirstName, LastName, City FROM ExecTable WHERE City  = 'New York' GO -- Display statistics of the table sp_helpstats N'ExecTable', 'ALL' GO -- Replace your Statistics over here DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1); GO -- Clean up Database DROP TABLE ExecTable GO 
When I created the nonclustered index on the City column, it also created statistics on the same column, with the same name as the index. When we populate data in the column, the index is updated, which invalidates the cached execution plan, and that in turn leads to the statistics being updated on the next execution of the SELECT. This behavior does not happen on a heap, or on a column whose statistics were auto-created. If you explicitly update the index, you can often see the statistics being updated as well. You can see that this is exactly what happens if you follow John Sansom's suggestion. John Sansom's suggestion: That was fun! Although the column statistics are invalidated by the time the second select statement is executed, the query is not compiled/recompiled but instead the existing query plan is reused. It is the “next” compiled query against the column statistics that will see that they are out of date and will then in turn instantiate the action of updating statistics. You can see this in action by forcing the second statement to recompile. SELECT FirstName, LastName, City FROM ExecTable WHERE City = ‘New York’ option(RECOMPILE) GO Kevin Cross also has another suggestion: I agree with John. It is reusing the Execution Plan. Aside from OPTION(RECOMPILE), clearing the Execution Plan Cache before the subsequent tests will also work, i.e., run this before round 2: -- Clear execution plan cache before next test DBCC FREEPROCCACHE WITH NO_INFOMSGS; Nice puzzle! Kevin As this was a puzzle, John and Kevin both gave the correct answer; there was no requirement that the answer also be a best practice. I know John and he is one of the finest DBAs around – his tremendous knowledge has always impressed me. John and Kevin will both agree that clearing the cache with DBCC FREEPROCCACHE, or recompiling each query every time, is for sure not good advice on a production server. It is a correct answer, but not a best practice. By the way, if you have a better solution or a better suggestion, please advise. I am open to changing my answer and publishing further improvements to this solution. On a very separate note, I like to have a clustered index on my Primary Key, which I have not mentioned here as it is out of the scope of this puzzle. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Index, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Statistics
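If you want to verify the timing yourself, the last-updated date of every statistics object on the table can be checked between the two rounds. A small sketch, assuming SQL Server 2008 or later:

    -- When was each statistics object on ExecTable last updated?
    SELECT  s.name AS stats_name,
            STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM    sys.stats AS s
    WHERE   s.object_id = OBJECT_ID('dbo.ExecTable');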

    Read the article

  • icefaces datatable component

    - by chetan
    I have two datatable in two different jspx page but when I call one then i try to call other old one still display what is the problem there. from the old page only datatable is display no other component are displayed. This is one jspx page -- -- <div style="margin-bottom: 20px;"> <div> <div class="page-navi"> <ice:dataPaginator id="dataScroll_3" for="companyDataTable1" paginator="true"> <f:facet name="first"> <ice:graphicImage url="/xmlhttp/css/xp/css-images/arrow-first.gif" title="First Page" /> </f:facet> <f:facet name="last"> <ice:graphicImage url="/xmlhttp/css/xp/css-images/arrow-last.gif" title="Last Page" /> </f:facet> <f:facet name="previous"> <ice:graphicImage url="/xmlhttp/css/xp/css-images/arrow-previous.gif" title="Previous Page" /> </f:facet> <f:facet name="next"> <ice:graphicImage url="/xmlhttp/css/xp/css-images/arrow-next.gif" title="Next Page" /> </f:facet> </ice:dataPaginator> </div> <ice:panelGroup> <ice:dataTable id="companyDataTable1" rendered="#{createLeaveBean.empRender}" binding="#{createLeaveBean.empTable}" value="#{createLeaveBean.lstEmployeeeInfo}" var="currentRow" width="80%" cellpadding="0" cellspacing="0" headerClass="std-table-header" styleClass="std-table" rows="10"> <ice:column style="width: 1%"> <f:facet name="header"> <ice:selectBooleanCheckbox id="selectallemp" partialSubmit="true" value="#{createLeaveBean.selectAll}" valueChangeListener="#{createLeaveBean.toggleSelectedFields}" onkeydown="moveFocus(event,'selectoneemp')" tabindex="8"></ice:selectBooleanCheckbox> </f:facet> <ice:selectBooleanCheckbox id="selectoneemp" value="#{currentRow.notify}" tabindex="9" ></ice:selectBooleanCheckbox> </ice:column> <ice:column style="width: 5%;"> <f:facet name="header"><ice:outputText value="Employee Id" /></f:facet> <ice:outputText value="#{currentRow.employeeInfoId}" /> </ice:column> <ice:column style="width: 34%;"> <f:facet name="header"><ice:outputText value="Employee Name" /></f:facet> <ice:outputText value="#{currentRow.firstName}" /> </ice:column> </ice:dataTable> </ice:panelGroup> </div> <ice:commandButton id="createleave" tabindex="12" value="Create" action="#{createLeaveBean.createLeavePolicyEmp}" styleClass="std-btn" style="margin-right: 10px;margin-left: 50px;margin-top: 15px"></ice:commandButton> <ice:commandButton id="cancelleave" tabindex="13" value="Cancel" action="#{createLeaveBean.cancelLeavePolicyEmp}" rendered="true" styleClass="std-btn" style="margin-top: 15px"></ice:commandButton> </div> This is second jspx page <div class="page-navi"> <ice:dataPaginator id="dataScroll_4" for="companyDataTable2" paginator="true"> <f:facet name="first"> <ice:graphicImage url="/xmlhttp/css/xp/css-images/arrow-first.gif" title="First Page" /> </f:facet> <f:facet name="last"> <ice:graphicImage url="/xmlhttp/css/xp/css-images/arrow-last.gif" title="Last Page" /> </f:facet> <f:facet name="previous"> <ice:graphicImage url="/xmlhttp/css/xp/css-images/arrow-previous.gif" title="Previous Page" /> </f:facet> <f:facet name="next"> <ice:graphicImage url="/xmlhttp/css/xp/css-images/arrow-next.gif" title="Next Page" /> </f:facet> </ice:dataPaginator> </div> <ice:panelGroup> <ice:dataTable id="companyDataTable2" rendered="#{createLeaveBean.deptRender}" binding="#{createLeaveBean.empTable}" value="#{createLeaveBean.lstEmployeeeInfo}" var="currentRowww" width="96%" cellpadding="0" cellspacing="0" headerClass="std-table-header" styleClass="std-table" rows="10"> <ice:column style="width: 5%;"> <f:facet name="header"><ice:outputText value="Employee Id" /></f:facet> 
<ice:outputText value="#{currentRowww.employeeInfoId}" /> </ice:column> <ice:column style="width: 34%;"> <f:facet name="header"><ice:outputText value="Employee Name" /></f:facet> <ice:outputText value="#{currentRowww.firstName}" /> </ice:column> </ice:dataTable> </ice:panelGroup> <ice:commandButton id="createLeave" value="Create" action="#{createLeaveBean.createLeavePolicyDept}" styleClass="std-btn" tabindex="8" style="margin-right: 10px;margin-left: 40px;margin-top: 15px"></ice:commandButton> <ice:commandButton id="cancelLeave" value="Cancel" action="#{createLeaveBean.cancelLeavePolicyDept}" rendered="true" styleClass="std-btn" tabindex="9" style="margin-top: 15px"></ice:commandButton>
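One thing that stands out, assuming createLeaveBean is session scoped, is that both tables point binding at the same property (createLeaveBean.empTable) and both read the same list (lstEmployeeeInfo). Sharing one binding between two components is a common source of a stale table still rendering after navigation, because the bean keeps holding the component tree of the first page. A sketch of one way around it is to drop the binding attributes entirely, or to give each table its own property (the names below are hypothetical):

    <!-- page 1 -->
    <ice:dataTable id="companyDataTable1" binding="#{createLeaveBean.empTableByEmployee}" ... />
    <!-- page 2 -->
    <ice:dataTable id="companyDataTable2" binding="#{createLeaveBean.empTableByDepartment}" ... />

    // in CreateLeaveBean, one field per table (plus getters/setters)
    private HtmlDataTable empTableByEmployee;
    private HtmlDataTable empTableByDepartment;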

    Read the article

  • Pylons, SQlite and autoincrementing fields

    - by Maxfrank
    Hey! Just started working with Pylons in conjunction with SQLAlchemy, and my model looks something like this: from sqlalchemy import Column from sqlalchemy.types import Integer, String from helloworld.model.meta import Base class Person(Base): __tablename__ = "person" id = Column(Integer, primary_key=True) name = Column(String(100)) email = Column(String(100)) def __init__(self, name='', email=''): self.name = name self.email = email def __repr__(self): return "<Person('%s')" % self.name To avoid sqlite reusing id's that might have been deleted, I want to add AUTOINCREMENT to the column "id". I've looked through the documentation for sqlalchemy and saw that the sqlite_autoincrement can be issued. An example where this attribute is given can be found here. sqlite_autoincrement seems though to be issued when creating the table itself, and I just wondered how it can be supplied when using a declarative style of the model such as mine.
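    In the declarative style, table-level options such as sqlite_autoincrement are passed through the __table_args__ attribute. A minimal sketch based on the model above:

        # Pass dialect-specific table options via __table_args__
        class Person(Base):
            __tablename__ = "person"
            __table_args__ = {'sqlite_autoincrement': True}  # emits AUTOINCREMENT on the PK

            id = Column(Integer, primary_key=True)
            name = Column(String(100))
            email = Column(String(100))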

    Read the article

  • Error in retrieving data from Excel File

    - by Sreejesh Kumar
    I have an Excel file and wanted to pull the data from it into a SQL Server table, and the data was transferred successfully. Then, in the Excel file, I removed a lengthy piece of text from one row of the column named "Risk". Now the package execution fails at the source, i.e. at the Excel file. The errors shown are "[Audit [1]] Error: There was an error with output column "Risk" (100) on output "Excel Source Output" (9). The column status returned was: "DBSTATUS_UNAVAILABLE"." and "[Audit [1]] Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "output column "Risk" (100)" failed because error code 0xC0209071 occurred, and the error row disposition on "output column "Risk" (100)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure." The error occurs only when I remove this particular text from this row.
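    A plausible explanation, assuming the Jet/ACE Excel driver is in use: the driver guesses each column's type by sampling the first few rows, and a cell longer than 255 characters makes the column DT_NTEXT; removing that lengthy value can flip the guessed type, so the source's external metadata no longer matches what the driver now returns. Two things worth trying are refreshing the Excel source metadata (or relaxing the error disposition on the Risk column), and forcing import mode with IMEX=1 in the connection manager's extended properties. A sketch with a hypothetical file path:

        Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data\Audit.xls;
        Extended Properties="Excel 8.0;HDR=YES;IMEX=1";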

    Read the article

  • Combobox in bound DataGridView

    - by Dan
    Hi. I've got a DataGridView control which is bound to a database table. I want one of the columns in the gridview to be of combobox type. The combobox should contain a list of hardcoded strings, which is the same for all rows in the datagridview. One of the fields in my database table is an index for this list of hardcoded strings. I've programmatically added a new column to the gridview of type "DataGridViewComboBoxColumn", which successfully creates the column with comboboxes in it. However, that's then not bound to the index field in my DB table. The index field in my DB table is actually automatically bound to a column via the DataAdapter::Fill method. I've set this column to hidden, so it's hidden from the user. Obviously, just before updating the dataadapter, I can programmatically fix up the hidden column in my datatable with the SelectedIndex of my combobox. Just wondering if there's a better way of doing this? Thank you for any help with this, Dan.
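    One way to avoid the manual fix-up is to data-bind the combo box column itself: point DataPropertyName at the index field and give the column a small lookup list. A sketch, assuming the hidden index field is called "StringIndex" (hypothetical name) and the usual System.Collections.Generic/System.Windows.Forms usings:

        // Lookup list mapping stored index values to display strings
        var lookup = new List<KeyValuePair<int, string>>
        {
            new KeyValuePair<int, string>(0, "First option"),
            new KeyValuePair<int, string>(1, "Second option"),
            new KeyValuePair<int, string>(2, "Third option"),
        };

        var comboColumn = new DataGridViewComboBoxColumn
        {
            HeaderText       = "Choice",
            DataPropertyName = "StringIndex", // the hidden index column in the bound DataTable
            DataSource       = lookup,
            ValueMember      = "Key",         // value written back to the index field
            DisplayMember    = "Value"        // string shown to the user
        };
        dataGridView1.Columns.Add(comboColumn);

    The cells then read and write the index field directly, so nothing needs patching before calling DataAdapter.Update.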

    Read the article

  • How to make an NSOutlineView indent multiple columns?

    - by Rinzwind
    What would be the easiest or recommended way for making an NSOutlineView indent multiple columns? By default, it only indents the outline column; and as far as I know there is no built-in support for making it indent other columns. I have an NSOutlineView which shows a comparison between two sets of hierarchical data. For visual appeal, if some item in the outline column is indented, I'd like to indent the item on the same row in another column by the same indentation amount. (There's also a third column which shows the result of comparing the two items, this column should never be indented.) Can this only be achieved by subclassing NSOutlineView? And what would need to be overridden in the subclass? Or is there an easier way to get it to indent multiple columns?
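    For a cell-based outline view, one approach is to subclass and shift the cell frames of the column you want indented by the row's outline level. An untested sketch, assuming column 0 is the outline column, column 1 is the data column to indent, and column 2 (the comparison result) stays put:

        import Cocoa

        class IndentingOutlineView: NSOutlineView {
            override func frameOfCell(atColumn column: Int, row: Int) -> NSRect {
                var frame = super.frameOfCell(atColumn: column, row: row)
                if column == 1 {
                    // Inset by the same amount the outline column uses for this row
                    let indent = CGFloat(level(forRow: row)) * indentationPerLevel
                    frame.origin.x   += indent
                    frame.size.width -= indent
                }
                return frame
            }
        }

    A view-based outline view would need a different hook, for example adjusting leading constraints in the row's cell views.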

    Read the article

  • How can I copy Data from one sheet to another Sheet in Excel 07 Through Macro

    - by Mwaseem Alvi
    Hello, I am using MS Office 2007. Please let me know how I can copy all the data from sheet one to sheet two. I want to copy everything from row 5 onward into sheet two. The whole scenario is given below in detail. Sheet One: copy the data from Column B, Row 3. Sheet Two: paste the copied data into Column B, Row 3. Sheet One: copy all the data from Column B to Column G, from Row 5 onward. Sheet Two: paste the copied data into sheet two starting after the last filled row. The data must not overwrite any existing row or column; every time the macro is run, the data from sheet one should be appended to sheet two. Thanks
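    A sketch of a macro that does this, assuming the source and destination sheets are named Sheet1 and Sheet2 (adjust the names to match the workbook):

        Sub CopyDataToSheetTwo()
            Dim src As Worksheet, dst As Worksheet
            Dim lastSrcRow As Long, nextDstRow As Long

            Set src = ThisWorkbook.Worksheets("Sheet1")
            Set dst = ThisWorkbook.Worksheets("Sheet2")

            ' Copy the single cell in Column B, Row 3
            dst.Range("B3").Value = src.Range("B3").Value

            ' Find the extent of the source data and the first free row on the destination
            lastSrcRow = src.Cells(src.Rows.Count, "B").End(xlUp).Row
            nextDstRow = dst.Cells(dst.Rows.Count, "B").End(xlUp).Row + 1
            If nextDstRow < 5 Then nextDstRow = 5   ' first run starts appending at row 5

            ' Append B5:G<last source row> below whatever is already on the destination sheet
            src.Range("B5:G" & lastSrcRow).Copy Destination:=dst.Cells(nextDstRow, "B")
        End Sub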

    Read the article

  • Problem with between join for sqlalchemy orm relation.

    - by Gary van der Merwe
    I'm trying to create a relation that has a between join. Here is a shortish example of what I'm trying to do: #!/usr/bin/env python import sqlalchemy as sa from sqlalchemy import orm from sqlalchemy.engine.base import Engine from sqlalchemy.ext.declarative import declarative_base metadata = sa.MetaData() Base = declarative_base(metadata=metadata) engine = sa.create_engine('sqlite:///:memory:') class Network(Base): __tablename__ = "network" id = sa.Column(sa.Integer, primary_key=True) ip_net_addr_db = sa.Column('ip_net_addr', sa.Integer, index=True) ip_broadcast_addr_db = sa.Column('ip_broadcast_addr', sa.Integer, index=True) # This can be determined from the net address and the net mask, but we store # it in the db so that we can join with the address table. ip_net_mask_len = sa.Column(sa.SmallInteger) class Address(Base): __tablename__ = "address" ip_addr_db = sa.Column('ip_addr', sa.Integer, primary_key=True, index=True, unique=True) Network.addresses = orm.relation(Address, primaryjoin=Address.ip_addr_db.between( Network.ip_net_addr_db, Network.ip_broadcast_addr_db), foreign_keys=[Address.ip_addr_db]) metadata.create_all(engine) Session = orm.sessionmaker(bind=engine) Network() I you run this, you get this error: ArgumentError: Could not determine relation direction for primaryjoin condition 'address.ip_addr BETWEEN network.ip_net_addr AND network.ip_broadcast_addr', on relation Network.addresses. Do the columns in 'foreign_keys' represent only the 'foreign' columns in this join condition ? The answer to that question is Yes, but I cant figure out how to tell it that
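    The error is SQLAlchemy saying it cannot work out which side of the BETWEEN is the "foreign" side from the expression alone. One way to tell it, sketched below and untested, assumes SQLAlchemy 0.8 or later: annotate the address column with orm.foreign() inside the primaryjoin and mark the relation viewonly, since nothing here is a real foreign key that should be written back on flush.

        # Sketch: annotate the foreign side explicitly and make the relation read-only
        Network.addresses = orm.relationship(
            Address,
            primaryjoin=sa.and_(
                orm.foreign(Address.ip_addr_db) >= Network.ip_net_addr_db,
                orm.foreign(Address.ip_addr_db) <= Network.ip_broadcast_addr_db),
            viewonly=True,
        )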

    Read the article

  • How should I model the database for this problem? And which ORM can handle it?

    - by Kristof Claes
    I need to build some sort of a custom CMS for a client of ours. These are some of the functional requirements: Must be able to manage the list of Pages in the site Each Page can contain a number of ColumnGroups A ColumnGroup is nothing more than a list of Columns in a certain ColumnGroupLayout. For example: "one column taking up the entire width of the page", "two columns each taking up half of the width", ... Each Column can contain a number ContentBlocks Examples of a ContentBlock are: TextBlock, NewsBlock, PictureBlock, ... ContentBlocks can be given a certain sorting within a Column A ContentBlock can be put in different Columns so that content can be reused without having to be duplicated. My first quick draft of how this could look like in C# code (we're using ASP.NET 4.0 to develop the CMS) can be found at the bottom of my question. One of the technical requirements is that it must be as easy as possible to add new types of ContentBlocks to the CMS. So I would like model everything as flexible as possible. Unfortunately, I'm already stuck at trying to figure out how the database should look like. One of the problems I'm having has to do with sorting different types of ContentBlocks in a Column. I guess each type of ContentBlock (like TextBlock, NewsBlock, PictureBlock, ...) should have it's own table in the database because each has it's own different fields. A TextBlock might only have a field called Text whereas a NewsBlock might have fields for the Text, the Summary, the PublicationDate, ... Since one Column can have ContentBlocks located in different tables, I guess I'll have to create a many-to-many association for each type of ContentBlock. For example: ColumnTextBlocks, ColumnNewsBlocks and ColumnPictureBlocks. The problem I have with this setup is the sorting of the different ContentBlocks in a column. This could be something like this: TextBlock NewsBlock TextBlock TextBlock PictureBlock Where do I store the sorting number? If I store them in the associaton tables, I'll have to update a lot of tables when changing the sorting order of ContentBlocks in a Column. Is this a good approach to the problem? Basically, my question is: What is the best way to model this keeping in mind that it should be easy to add new types of ContentBlocks? My next question is: What ORM can deal with that kind of modeling? To be honest, we are ORM-virgins at work. I have been reading a bit about Linq-to-SQL and NHibernate, but we have no experience with them. Because of the IList in the Column class (see code below) I think we can rule out Linq-to-SQL, right? Can NHibernate handle the mapping of data from many different tables to one IList? Also keep in mind that this is just a very small portion of the domain. Other parts are Users belonging to a certain UserGroup having certain Permissions on Pages, ColumnGroups, Columns and ContentBlocks. 
The code (just a quick first draft): public class Page { public int PageID { get; set; } public string Title { get; set; } public string Description { get; set; } public string Keywords { get; set; } public IList<ColumnGroup> ColumnGroups { get; set; } } public class ColumnGroup { public enum ColumnGroupLayout { OneColumn, HalfHalf, NarrowWide, WideNarrow } public int ColumnGroupID { get; set; } public ColumnGroupLayout Layout { get; set; } public IList<Column> Columns { get; set; } } public class Column { public int ColumnID { get; set; } public IList<IContentBlock> ContentBlocks { get; set; } } public interface IContentBlock { string GetSummary(); } public class TextBlock : IContentBlock { public string GetSummary() { return "I am a piece of text."; } } public class NewsBlock : IContentBlock { public string GetSummary() { return "I am a news item."; } }
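One common shape for this, sketched below with hypothetical table names: give every block a row in a shared ContentBlocks table with a type discriminator, keep the type-specific fields in per-type tables keyed by the same ID, and let a single association table carry the sort order. Reordering blocks in a column then touches only one table, no matter how many block types you add later, and NHibernate can map the base/derived tables with its subclass mappings and the ordered association as a list with an index column.

    -- Hypothetical schema sketch
    CREATE TABLE PageColumns (
        ColumnID       int IDENTITY PRIMARY KEY
        -- plus ColumnGroupID, etc.
    );

    CREATE TABLE ContentBlocks (
        ContentBlockID int IDENTITY PRIMARY KEY,
        BlockType      varchar(50) NOT NULL          -- 'Text', 'News', 'Picture', ...
    );

    CREATE TABLE TextBlocks (
        ContentBlockID int PRIMARY KEY REFERENCES ContentBlocks(ContentBlockID),
        BodyText       nvarchar(max) NOT NULL
    );

    -- One association table carries the ordering for every block type
    CREATE TABLE PageColumnContentBlocks (
        ColumnID       int NOT NULL REFERENCES PageColumns(ColumnID),
        ContentBlockID int NOT NULL REFERENCES ContentBlocks(ContentBlockID),
        SortOrder      int NOT NULL,
        PRIMARY KEY (ColumnID, ContentBlockID)
    );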

    Read the article

  • sql server 2005 indexes and low cardinality

    - by Peanut
    How does SQL Server determine whether a table column has low cardinality? The reason I ask is because query optimizer would most probably not use an index on a gender column (values 'm' and 'f'). However how would it determine the cardinality of the gender column to come to that decision? On top of this, if in the unlikely event that I had a million entries in my table and only one entry in the gender column was 'm', would SQL server be able to determine this and use the index to retrieve that single row? Or would it just know there are only 2 distinct values in the column and not use the index? I appreciate the above discusses some poor db design, but I'm just trying to understand how query optimizer comes to its decisions. Many thanks.
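    In short, the optimizer does not count distinct values at query time; it reads the statistics (histogram and density) that exist on the column or its index. So in the one-'m'-in-a-million case, if the histogram has a step for 'm' showing a single row, the optimizer can estimate one row and an index seek becomes attractive; it is not limited to knowing "two distinct values". You can inspect exactly what it knows with a sketch like this (table, index and column names are hypothetical):

        -- Inspect the histogram the optimizer uses for the gender column
        DBCC SHOW_STATISTICS ('dbo.People', IX_People_Gender) WITH HISTOGRAM;
        -- Or, if only auto-created column statistics exist, find their name first:
        EXEC sp_helpstats 'dbo.People', 'ALL';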

    Read the article

  • Checksum Transformation

    The Checksum Transformation computes a hash value, the checksum, across one or more columns, returning the result in the Checksum output column. The transformation provides functionality similar to the T-SQL CHECKSUM function, but is encapsulated within SQL Server Integration Services, for use within the pipeline without code or a SQL Server connection. As featured in The Microsoft Data Warehouse Toolkit by Joy Mundy and Warren Thornthwaite from the Kimbal Group. Have a look at the book samples especially Sample package for custom SCD handling. All input columns are passed through the transformation unaltered, those selected are used to generate the checksum which is passed out through a single output column, Checksum. This does not restrict the number of columns available downstream from the transformation, as columns will always flow through a transformation. The Checksum output column is in addition to all existing columns within the pipeline buffer. The Checksum Transformation uses an algorithm based on the .Net framework GetHashCode method, it is not consistent with the T-SQL CHECKSUM() or BINARY_CHECKSUM() functions. The transformation does not support the following Integration Services data types, DT_NTEXT, DT_IMAGE and DT_BYTES. ChecksumAlgorithm Property There ChecksumAlgorithm property is defined with an enumeration. It was first added in v1.3.0, when the FrameworkChecksum was added. All previous algorithms are still supported for backward compatibility as ChecksumAlgorithm.Original (0). Original - Orginal checksum function, with known issues around column separators and null columns. This was deprecated in the first SQL Server 2005 RTM release. FrameworkChecksum - The hash function is based on the .NET Framework GetHash method for object types. This is based on the .NET Object.GetHashCode() method, which unfortunately differs between x86 and x64 systems. For that reason we now default to the CRC32 option. CRC32 - Using a standard 32-bit cyclic redundancy check (CRC), this provides a more open implementation. The component is provided as an MSI file, however to complete the installation, you will have to add the transformation to the Visual Studio toolbox by hand. This process has been described in detail in the related FAQ entry for How do I install a task or transform component?, just select Checksum from the SSIS Data Flow Items list in the Choose Toolbox Items window. Downloads The Checksum Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed. Checksum Transformation for SQL Server 2005 Checksum Transformation for SQL Server 2008 Checksum Transformation for SQL Server 2012 Version History SQL Server 2012 Version 3.0.0.27 – SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2010) SQL Server 2008 Version 2.0.0.27 – Fix for CRC-32 algorithm that inadvertently made it sort dependent. Fix for race condition which sometimes lead to the error Item has already been added. Key in dictionary: '79764919' . Fix for upgrade mappings between 2005 and 2008. (19 Oct 2010) Version 2.0.0.24 - SQL Server 2008 release. Introduces the new CRC-32 algorithm, which is consistent across x86 and x64.. The default algorithm is now CRC32. (29 Oct 2008) Version 2.0.0.6 - SQL Server 2008 pre-release. 
This version was released by mistake as part of the site migration, and had known issues. (20 Oct 2008) SQL Server 2005 Version 1.5.0.43 – Fix for CRC-32 algorithm that inadvertently made it sort dependent. Fix for race condition which sometimes lead to the error Item has already been added. Key in dictionary: '79764919' . (19 Oct 2010) Version 1.5.0.16 - Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (20 Oct 2008) Version 1.4.0.0 - Installer refresh only. (22 Dec 2007) Version 1.4.0.0 - Refresh for minor UI enhancements. (5 Mar 2006) Version 1.3.0.0 - SQL Server 2005 RTM. The checksum algorithm has changed to improve cardinality when calculating multiple column checksums. The original algorithm is still available for backward compatibility. Fixed custom UI bug with Output column name not persisting. (10 Nov 2005) Version 1.2.0.1 - SQL Server 2005 IDW 15 June CTP. A user interface is provided, as well as the ability to change the checksum output column name. (29 Aug 2005) Version 1.0.0 - Public Release (Beta). (30 Oct 2004) Screenshot

    Read the article

< Previous Page | 89 90 91 92 93 94 95 96 97 98 99 100  | Next Page >