Search Results

Search found 32492 results on 1300 pages for 'reporting database'.

Page 318/1300

  • Tricky SQL query involving consecutive values

    - by Gabriel
    I need to perform a relatively easy to explain but (given my somewhat limited skills) hard to write SQL query. Assume we have a table similar to this one:

        exam_no | name | surname | result | date
        --------+------+---------+--------+------------
        1       | John | Doe     | PASS   | 2012-01-01
        1       | Ryan | Smith   | FAIL   | 2012-01-02  <--
        1       | Ann  | Evans   | PASS   | 2012-01-03
        1       | Mary | Lee     | FAIL   | 2012-01-04
        ...     | ...  | ...     | ...    | ...
        2       | John | Doe     | FAIL   | 2012-02-01  <--
        2       | Ryan | Smith   | FAIL   | 2012-02-02
        2       | Ann  | Evans   | FAIL   | 2012-02-03
        2       | Mary | Lee     | PASS   | 2012-02-04
        ...     | ...  | ...     | ...    | ...
        3       | John | Doe     | FAIL   | 2012-03-01
        3       | Ryan | Smith   | FAIL   | 2012-03-02
        3       | Ann  | Evans   | PASS   | 2012-03-03
        3       | Mary | Lee     | FAIL   | 2012-03-04  <--

    Note that exam_no and date aren't necessarily related as one might expect from the kind of example I chose. Now, the query that I need to do is as follows: From the latest exam (exam_no = 3) find all the students that have failed (John Doe, Ryan Smith and Mary Lee). For each of these students find the date of the first of the batch of consecutively failing exams. Another way to put it would be: for each of these students find the date of the first failing exam that comes after their last passing exam. (Look at the arrows in the table). The resulting table should be something like this:

        name | surname | date_since_failing
        -----+---------+--------------------
        John | Doe     | 2012-02-01
        Ryan | Smith   | 2012-01-02
        Mary | Lee     | 2012-01-04
        Ann  | Evans   | 2012-02-03

    How can I perform such a query? Thank you for your time.
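
    A minimal sketch of one way to express this, assuming the table is named exams and using only plain subqueries so it should run on most databases (the date column may need quoting in some dialects): for every student who failed the latest exam, take the earliest FAIL dated after that student's most recent PASS, falling back to their earliest FAIL if they never passed.

        SELECT e.name,
               e.surname,
               MIN(e.date) AS date_since_failing
        FROM exams e
        WHERE e.result = 'FAIL'
          AND e.date > COALESCE(
                (SELECT MAX(p.date)
                 FROM exams p
                 WHERE p.name = e.name
                   AND p.surname = e.surname
                   AND p.result = 'PASS'),
                '1900-01-01')
          AND EXISTS
                (SELECT 1
                 FROM exams f
                 WHERE f.exam_no = (SELECT MAX(exam_no) FROM exams)
                   AND f.result = 'FAIL'
                   AND f.name = e.name
                   AND f.surname = e.surname)
        GROUP BY e.name, e.surname;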

    Read the article

  • Modifying SQL Server Schema Collection

    - by Mevdiven
    SQL Server XML Schema Collection is an interesting concept and I find it very useful when designing dynamic data content. However, as I work my way through implementing schema collections, I find them very difficult to maintain. Schema collection DDL only allows CREATE, and ALTER/ADD of nodes to existing schemas:

        CREATE XML SCHEMA COLLECTION [ <relational_schema>. ]sql_identifier AS 'XSD Content'
        ALTER XML SCHEMA COLLECTION [ <relational_schema>. ]sql_identifier ADD 'Schema Component'

    When you want to remove any node from a schema, and that schema collection is assigned to a table column, you have to issue the following DDL:

        1. Alter the table to remove the schema collection association from that column.
        2. Drop the schema collection object.
        3. Re-create the schema collection.
        4. Alter the table column to re-associate the schema collection with that column.

    This is a pain when it comes to 100+ schemas in a collection. You also have to re-create any XML indexes all over again. Any solutions, suggestions, or tricks to make this schema collection editing process easier?
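
    For reference, a hedged sketch of that rebuild cycle, using hypothetical names (MyTable, XmlCol, MySchemaCollection) and a deliberately tiny XSD; a real collection would carry the full set of schemas:

        ALTER TABLE MyTable ALTER COLUMN XmlCol xml;        -- 1. drop the collection binding from the column
        DROP XML SCHEMA COLLECTION MySchemaCollection;      -- 2. drop the collection
        CREATE XML SCHEMA COLLECTION MySchemaCollection AS  -- 3. re-create it with the edited content
        N'<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
            <xsd:element name="root" type="xsd:string"/>
          </xsd:schema>';
        ALTER TABLE MyTable ALTER COLUMN XmlCol xml(MySchemaCollection);  -- 4. re-associate
        -- any XML indexes on XmlCol must then be re-created as well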

    Read the article

  • What are best practices for collecting, maintaining and ensuring accuracy of a huge data set?

    - by Kyle West
    I am posing this question looking for practical advice on how to design a system. Sites like amazon.com and Pandora have and maintain huge data sets to run their core business. For example, Amazon (and every other major e-commerce site) has millions of products for sale, images of those products, pricing, specifications, etc. Ignoring the data coming in from 3rd party sellers and the user-generated content, all that "stuff" had to come from somewhere and is maintained by someone. It's also incredibly detailed and accurate. How? How do they do it? Is there just an army of data-entry clerks, or have they devised systems to handle the grunt work? My company is in a similar situation. We maintain a huge (tens of millions of records) catalog of automotive parts and the cars they fit. We've been at it for a while now and have come up with a number of programs and processes to keep our catalog growing and accurate; however, it seems like to grow the catalog to x items we need to grow the team to y. I need to figure out some ways to increase the efficiency of the data team, and hopefully I can learn from the work of others. Any suggestions are appreciated; even better would be links to content I could spend some serious time reading. THANKS! Kyle

    Read the article

  • Replication: SQL Server 2008 Publisher with SQL Server Express 2005 Subscriber

    - by Jeremy
    Here is the setup: SQL Server 2008 Enterprise Server with a Merge Publication. SQL Server 2005 Express with pull subscription. There is no web or ftp setup. This is direct merge replication. Using the RMO objects from C#, I get a "class cannot be found" COM error when accessing the MergePullSubscription.SynchronizationAgent property. I've tried with both the 2008 RMO dll's (version 10 dll's) and the 2005 RMO dll's (version 9 dll's). When trying to use replmerge.exe, I get the following:

        2010-04-10 04:12:05.263 Microsoft SQL Server Merge Agent 9.00.1399.06
        2010-04-10 04:12:05.294 Copyright (c) 2000 Microsoft Corporation
        2010-04-10 04:12:05.294
        2010-04-10 04:12:05.294 The timestamps prepended to the output lines are expressed in terms of UTC time.
        2010-04-10 04:12:05.294 User-specified agent parameter values:
                                 -Publisher SUN -PublisherDB PRIMROSE -PublisherSecurityMode 1
                                 -Publication PRIMROSE -Distributor SUN -DistributorSecurityMode 1
                                 -Subscriber PVILLE\SQLEXPRESS -SubscriberSecurityMode 1
                                 -SubscriberDB PRIMROSE -SubscriptionType 1
                                 -DistributorLogin sa -DistributorPassword ********** -DistributorSecurityMode 0
                                 -PublisherLogin sa -PublisherPassword ********** -PublisherSecurityMode 0
                                 -SubscriberLogin sa -SubscriberPassword ********** -SubscriberSecurityMode 0
        2010-04-10 04:12:05.325 Connecting to Subscriber 'PVILLE\SQLEXPRESS'
        2010-04-10 04:12:05.481 Connecting to Distributor 'SUN'
        2010-04-10 04:12:05.513 The version of SQL Server running at the Distributor (10.0.2531.??????????????????) is not compatible with the version of SQL Server running at the Subscriber (9.00.1399.???????L?L?LHL?L?L?L?,?).
        2010-04-10 04:12:05.513 Category:NULL
                                Source: Merge Process
                                Number: -2147200979
                                Message: The version of SQL Server running at the Distributor (10.0.2531.??????????????????) is not compatible with the version of SQL Server running at the Subscriber (9.00.1399.???????L?L?LHL?L?L?L?,?).

    Any ideas?

    Read the article

  • apostrophe in mysql/php

    - by fusion
    I'm trying to learn PHP/MySQL. Inserting data into MySQL works fine, but inserting data containing an apostrophe generates an error. I tried using mysql_real_escape_string, yet this doesn't work. Would appreciate any help.

        <?php
        include 'config.php';
        echo "Connected <br />";

        $auth = $_POST['author'];
        $quo  = $_POST['quote'];

        $author = mysql_real_escape_string($auth);
        $quote  = mysql_real_escape_string($quo);

        //**************************
        // inserting data
        $sql = "INSERT INTO Quotes (vauthor, cquotes) VALUES ($author, $quote)";

        if (!mysql_query($sql, $conn)) {
            die('Error: ' . mysql_error());
        }
        echo "1 record added";
        ...

    What am I doing wrong?
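
    A side note on the error itself: mysql_real_escape_string escapes the quote characters inside the values, but the values still have to be wrapped in single quotes inside the SQL string (VALUES ('$author', '$quote')). A sketch of the statement the script should end up sending, with made-up literal values:

        INSERT INTO Quotes (vauthor, cquotes)
        VALUES ('Flannery O''Connor', 'It''s a quote containing apostrophes');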

    Read the article

  • Modify Wordpress SQL Query to pull from within a category

    - by Levi
    Hi, I am using a WordPress plugin called "kf most read" which stores a count of how many times a post was read, and lets you output a list of most read posts. This works well. The issue is, I am trying to pull the most read posts, but only the most read posts within the current category you are viewing. I am close to clueless when it comes to SQL. Here is what the plugin is currently using to pull the most read posts:

        $sql = "SELECT count(mr.post_ID) as totHits, p.ID, p.post_title
                from $wpdb->posts p
                JOIN {$wpdb->prefix}kf_most_read mr on mr.post_ID = p.ID
                where mr.hit_ts >= '".(time() - ( 86400 * $period))."'
                GROUP BY mr.post_ID
                order by totHits desc, ID ASC
                LIMIT $limit";

    How could I incorporate the query below, which pulls from a specific category, into the above?

        $sql .= "LEFT JOIN $wpdb->term_taxonomy ON($wpdb->term_relationships.term_taxonomy_id = $wpdb->term_taxonomy.term_taxonomy_id)";
        $sql .= "WHERE $wpdb->term_taxonomy.term_id IN ($currentcat)";
        $sql .= "AND $wpdb->term_taxonomy.taxonomy = 'category'";

    Any help on this would be much appreciated.
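
    One possible way to combine them, written out as plain SQL with the wp_ prefix and with placeholder values where the plugin would normally substitute $period, $currentcat and $limit (those names and values are purely illustrative):

        SELECT COUNT(mr.post_ID) AS totHits, p.ID, p.post_title
        FROM wp_posts p
        JOIN wp_kf_most_read mr       ON mr.post_ID = p.ID
        JOIN wp_term_relationships tr ON tr.object_id = p.ID
        JOIN wp_term_taxonomy tt      ON tt.term_taxonomy_id = tr.term_taxonomy_id
        WHERE mr.hit_ts >= 1275000000          -- time() - (86400 * $period)
          AND tt.term_id IN (7)                -- $currentcat
          AND tt.taxonomy = 'category'
        GROUP BY mr.post_ID
        ORDER BY totHits DESC, ID ASC
        LIMIT 10;                              -- $limit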

    Read the article

  • Insert into a star-schema

    - by shaun
    I've read a lot about star schemas: fact/dimension tables, SELECT statements to report data quickly. However, the matter of data entry into a star schema still eludes me. How does one "theoretically" enter data into a star-schema database while maintaining the fact table? Is a series of INSERT INTO statements within a giant stored procedure with 20 parameters my only option (and how do I populate the fact table)? Many thanks.
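
    To make the loading step concrete, here is a minimal sketch with hypothetical dim_customer, dim_date and fact_sales tables: the usual pattern is to insert (or look up) the dimension rows first, then insert the fact row using the surrogate keys just resolved. Dialect details (DATE literals, upsert syntax) vary by database.

        -- ensure the dimension row exists
        INSERT INTO dim_customer (customer_code, customer_name)
        SELECT 'C042', 'Acme Ltd'
        WHERE NOT EXISTS (SELECT 1 FROM dim_customer WHERE customer_code = 'C042');

        -- insert the fact row, resolving surrogate keys from the dimensions
        INSERT INTO fact_sales (customer_key, date_key, amount)
        SELECT c.customer_key, d.date_key, 199.00
        FROM dim_customer c
        JOIN dim_date d ON d.calendar_date = DATE '2012-01-01'
        WHERE c.customer_code = 'C042';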

    Read the article

  • Where can I download a free, text-rich dataset?

    - by blee
    I want to do a bit of lightweight testing and benchmarking for full-text search, so the dataset should have these qualities:

        - 10,000 - 100,000 records
        - good dispersion of English words
        - in CSV or Excel format -- i.e. I don't want to access it via an API

    Something like books or movies with title and description fields would be perfect. I browsed the UCI Machine Learning Repo, but it was too number-oriented. Thanks!

    Read the article

  • can't use db:seed in rails

    - by Stacia
    I updated my Rails gem to 2.3.5 but I keep getting this error when running db:seed:

        $ rake db:seed --trace
        (in c:/Documents and Settings/Owner/workspace/thepatstudio)
        rake aborted!
        Don't know how to build task 'db:seed'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1728:in `[]'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2050:in `invoke_task'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
        c:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31
        c:/Ruby/bin/rake:19:in `load'
        c:/Ruby/bin/rake:19

        ~/workspace/thepatstudio (master)
        $ rails --version
        Rails 2.3.5

    My environment.rb has the correct Rails version in it and I also ran rake rails:update. What can I do?

    Read the article

  • SimpleJdbcCall ignoring JdbcTemplate fetch size

    - by user289429
    We are calling a PL/SQL stored procedure through Spring's SimpleJdbcCall, and the fetch size set on the JdbcTemplate is being ignored by SimpleJdbcCall. The row mapper's result set fetch size is 10 even though we have set the JdbcTemplate fetch size to 200. Any idea why this happens and how to fix it? I have printed the fetch size of the result set in the row mapper in the snippets below - once it is 200 and the other time it is 10, even though I use the same JdbcTemplate on both occasions. This direct execution through JdbcTemplate returns a fetch size of 200 in the row mapper:

        jdbcTemplate = new JdbcTemplate(ds);
        jdbcTemplate.setResultsMapCaseInsensitive(true);
        jdbcTemplate.setFetchSize(200);
        List temp = jdbcTemplate.query("select 1 from dual", new ParameterizedRowMapper() {
            public Object mapRow(ResultSet resultSet, int i) throws SQLException {
                System.out.println("Direct template : " + resultSet.getFetchSize());
                return new String(resultSet.getString(1));
            }
        });

    This execution through SimpleJdbcCall always returns a fetch size of 10 in the row mapper:

        jdbcCall = new SimpleJdbcCall(jdbcTemplate).withSchemaName(schemaName)
                .withCatalogName(catalogName).withProcedureName(functionName);
        jdbcCall.returningResultSet((String) outItValues.next(),
                new ParameterizedRowMapper<Map<String, Object>>() {
            public Map<String, Object> mapRow(ResultSet rs, int row) throws SQLException {
                System.out.println("Through simplejdbccall " + rs.getFetchSize());
                return extractRS(rs, row);
            }
        });
        outputList = (List<Map<String, Object>>) jdbcCall.executeObject(List.class, inParam);

    Read the article

  • How to explain to users the advantages of dumb primary key?

    - by Hao
    Primary key attractiveness: I have a boss (and also users) who wants the primary key to be a sophisticated/smart/attractive control number (sort of like a Social Security number or credit card number format). I just padded the primary key (in views) with zeroes to appease their desire to make the control number sophisticated, smart and attractive. But they wanted it as: the first 2 digits as client code, then 4 digits as the year, then the last 4 digits as the transaction number for that client in a given year, with the transaction number resetting to 1 when the next year rolls over. Each client's transactions start with 1, e.g. WM20090001, WM20090002, BB20090001, WM20100001, BB20100001. But as I wanted to make things as simple as possible, I forgo embedding their suggested smartness in the primary key; I just keep the primary key auto-incrementing regardless of client and year. To make it not dull-looking (they really are adamant that the primary key be a smart control number), I made the primary key appear smart to them: in the view query, I put the client code and four-digit year code in front of the eight-zero-padded auto-increment key, i.e. WM200900000001. Sort of slug-like information on top of an auto-incremented primary key. By keeping the primary key auto-incrementing regardless of any other information, we avoid other potential side effects when they edit a record. For example, if they made the mistake of entering a transaction under WM and then edit the client code to BB, with a smart primary key the control numbers of the WM customer will have gaps. Or worse yet, instead of letting the control numbers have gaps/holes, the users will request that subsequent records shift up into the gap and have their primary keys re-adjusted (decremented). How do you deal with these user requests (reasonable or otherwise)? Do you yield to their request? Or do you continue using a dumb primary key, explain the repercussions of having a very smart/sophisticated primary key, and educate them on the significant advantages of a dumb primary key? P.S. quotable quote (http://articles.techrepublic.com.com/5100-10878_11-1044961.html): "If you hold your tongue the first time users ask what is for them a reasonable request, things will work a lot better in the end."
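
    For illustration, a sketch of the "smart-looking view over a dumb key" approach described above, in MySQL-flavored SQL with hypothetical table and column names:

        CREATE VIEW v_transactions AS
        SELECT t.*,
               CONCAT(t.client_code,
                      DATE_FORMAT(t.created_at, '%Y'),
                      LPAD(t.id, 8, '0')) AS control_number   -- e.g. WM200900000001
        FROM transactions t;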

    Read the article

  • Java Audit table logging, MySQL equivalent of CONTEXT_INFO.

    - by Julia
    Hi, I am looking for the MySQL equivalent of CONTEXT_INFO that is present in SQL Server, or any other session-variable-like mechanism I can use to pass the username to the trigger. I am currently working on logging table data for audit. I need to pass the username of the logged-in user to the delete trigger. Any ideas? We are deleting the rows from the table in a few cases and marking them as deleted in others. Any alternate solutions are welcome. I thought of using AOP, but it could prove problematic when the delete happens through a cascade. I want to look into Hibernate Interceptors; not sure at this point if that works. If I can find the MySQL equivalent of CONTEXT_INFO, my job is done and elegant as well. Thanks, Julia.
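
    A common MySQL workaround, sketched here with hypothetical orders/orders_audit tables: set a user-defined session variable from the application on the same connection right before the DML, and read it inside the trigger.

        -- issued by the application on this connection before deleting:
        SET @current_app_user = 'julia';

        CREATE TRIGGER trg_orders_before_delete
        BEFORE DELETE ON orders
        FOR EACH ROW
          INSERT INTO orders_audit (order_id, deleted_by, deleted_at)
          VALUES (OLD.id, @current_app_user, NOW());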

    Read the article

  • Query that ignores spaces

    - by xRobot
    What's the best way to run a query so that spaces in the fields are ignored? For example, the following queries...

        SELECT * FROM mytable WHERE username = "JohnBobJones"
        SELECT * FROM mytable WHERE username = "John Bob Jones"

    ...would find the following entries:

        John Bob Jones
        JohnBob Jones
        JohnBobJones

    I am using PHP or Python, but I think this doesn't matter.
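
    One straightforward option (at the cost of defeating any index on username) is to strip the spaces from both sides of the comparison; a sketch:

        SELECT *
        FROM mytable
        WHERE REPLACE(username, ' ', '') = REPLACE('John Bob Jones', ' ', '');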

    Read the article

  • Don't Understand Sql Server Error

    - by Jonathan Wood
    I have a table of users (User), and need to create a new table to track which users have referred other users. So, basically, I'm creating a many-to-many relation between rows in the same table. So I'm trying to create table UserReferrals with the columns UserId and UserReferredId. I made both columns a compound primary key. And both columns are foreign keys that link to User.UserID. Since deleting a user should also delete the relationship, I set both foreign keys to cascade deletes. When the user is deleted, any related rows in UserReferrals should also be deleted. But this gives me the message:

        'User' table saved successfully
        'UserReferrals' table
        Unable to create relationship 'FK_UserReferrals_User'.
        Introducing FOREIGN KEY constraint 'FK_UserReferrals_User' on table 'UserReferrals' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints.
        Could not create constraint. See previous errors.

    I don't get this error. A cascading delete only deletes the row with the foreign key, right? So how can it cause "cycles or multiple cascade paths"? Thanks for any tips.
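
    The multiple paths come from both foreign keys cascading back to the same User table: deleting one user could reach UserReferrals along two routes, which SQL Server disallows. A common workaround, sketched here with assumed column types, is to cascade on one key only and leave the other as NO ACTION (cleaning up the second path with a trigger or in application code):

        CREATE TABLE UserReferrals (
            UserId         INT NOT NULL,
            UserReferredId INT NOT NULL,
            CONSTRAINT PK_UserReferrals PRIMARY KEY (UserId, UserReferredId),
            CONSTRAINT FK_UserReferrals_User
                FOREIGN KEY (UserId) REFERENCES [User](UserID) ON DELETE CASCADE,
            CONSTRAINT FK_UserReferrals_UserReferred
                FOREIGN KEY (UserReferredId) REFERENCES [User](UserID) ON DELETE NO ACTION
        );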

    Read the article

  • How to implement Object Databases in Asp.net MVC

    - by amexn
    I started my project in ASP.NET MVC (C#) & SQL Server 2005. I want to implement object databases in my project. While searching Google I found "MongoDB" & db4o. I don't have enough knowledge of object databases & which one is best suited for use alongside SQL Server 2005. Please suggest a good example/reference regarding object database implementation in an ASP.NET MVC application.

    Read the article

  • Large scale storage for incrementally-appended documents?

    - by Ben Dilts
    I need to store hundreds of thousands (right now, potentially many millions) of documents that start out empty and are appended to frequently, but never updated otherwise or deleted. These documents are not interrelated in any way, and just need to be accessed by some unique ID. Read accesses are some subset of the document, which almost always starts midway through at some indexed location (e.g. "document #4324319, save #53 to the end"). These documents start very small, at several KB. They typically reach a final size around 500KB, but many reach 10MB or more. I'm currently using MySQL (InnoDB) to store these documents. Each of the incremental saves is just dumped into one big table with the document ID it belongs to, so reading part of a document looks like "select * from saves where document_id=14 and save_id > 53 order by save_id", then manually concatenating it all together in code. Ideally, I'd like the storage solution to be easily horizontally scalable, with redundancy across servers (e.g. each document stored on at least 3 nodes) with easy recovery of crashed servers. I've looked at CouchDB and MongoDB as possible replacements for MySQL, but I'm not sure that either of them make a whole lot of sense for this particular application, though I'm open to being convinced. Any input on a good storage solution?

    Read the article

  • Product table, many kinds of product, each product has many parameters

    - by StoneHeart
    Hi, I don't have much experience in table design. My goal is a product table (or tables) designed to meet the requirements below:

        - Support many kinds of products (TV, Phone, PC, ...).
        - Each kind of product has a different set of parameters, e.g. a Phone will have Color, Size, Weight, OS... while a PC will have CPU, HDD, RAM...
        - The set of parameters must be dynamic: you can add or edit any parameter you like.

    I don't want to make a table for each kind of product, so I need help finding a correct solution. Thanks.
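
    One common way to meet these requirements is an entity-attribute-value layout, sketched below with hypothetical names: products holds the shared columns, parameters defines the dynamic set per product type, and product_parameters stores one value per product/parameter pair. (The trade-off is that querying and type checking get harder than with fixed columns.)

        CREATE TABLE products (
            product_id   INT PRIMARY KEY,
            product_type VARCHAR(50),
            name         VARCHAR(200)
        );

        CREATE TABLE parameters (
            parameter_id INT PRIMARY KEY,
            product_type VARCHAR(50),
            name         VARCHAR(100)          -- e.g. 'Color', 'CPU', 'RAM'
        );

        CREATE TABLE product_parameters (
            product_id   INT REFERENCES products(product_id),
            parameter_id INT REFERENCES parameters(parameter_id),
            value        VARCHAR(255),
            PRIMARY KEY (product_id, parameter_id)
        );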

    Read the article

  • Should foreign keys become table primary key?

    - by Carvell Fenton
    Hello again, I have a table (session_comments) with the following field structure:

        student_id (foreign key to students table)
        session_id (foreign key to sessions table)
        session_subject_ID (foreign key to session_subjects table)
        user_id (foreign key to users table)
        comment_date_time
        comment

    Now, the combination of student_id, session_id, and session_subject_id will uniquely identify a comment about that student for that session subject. Given that combined they are unique, even though they are foreign keys, is there an advantage to making them the combined primary key for that table? Thanks again.
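
    For comparison, a sketch of the composite-primary-key option with assumed column types and assumed names for the referenced key columns (the main alternative being a surrogate comment_id plus a unique constraint on the same three columns):

        CREATE TABLE session_comments (
            student_id         INT NOT NULL REFERENCES students(student_id),
            session_id         INT NOT NULL REFERENCES sessions(session_id),
            session_subject_id INT NOT NULL REFERENCES session_subjects(session_subject_id),
            user_id            INT NOT NULL REFERENCES users(user_id),
            comment_date_time  DATETIME NOT NULL,
            comment            TEXT,
            PRIMARY KEY (student_id, session_id, session_subject_id)
        );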

    Read the article

  • Need help troubleshooting why Solr wont start (or why solr admin page wont show)

    - by Camran
    I can't get Solr working. I have Jetty, and my server OS is Ubuntu 9.10. It is a VPS server. So, when I execute java -jar start.jar everything seems fine. I even do a netstat to check whether there are any listeners on the port before and after the start, and it seems Solr is starting. However, I can't access the admin page. I have even turned off the firewall. Here is some info about my server:

        - I have changed DocumentRoot to var/www/SV/
        - I have Apache2, PHP5, MySQL installed
        - I have "disabled" the iptables firewall
        - I have removed the htaccess files (I used them to password-protect my site under development)
        - I have installed the JRE (NOT the JDK) on my server
        - I use the "example" setup which comes with Solr, so I use Jetty as the container on my server
        - My server has 768MB RAM

    Doing a java -version command shows this:

        java version "1.6.0_15"
        Java(TM) SE Runtime Environment (build 1.6.0_15-b03)
        Java HotSpot(TM) Client VM (build 14.1-b02, mixed mode)

    And in the terminal the last lines when executing start.jar are:

        May 29, 2010 1:30:03 PM org.apache.solr.core.SolrCore registerSearcher
        INFO: [] Registered new searcher Searcher@1dc64a5 main

    NOTE: Also before this last line, there is a line which makes me suspicious:

        Started SocketConnector @ 0.0.0.0:8983   // Should this be with leading zeros?

    Are there any ways you know to troubleshoot this? Memory issue maybe? Thanks

    Read the article

  • How to implement Object Databases in Asp.net MVC(c#)

    - by amexn
    I started my project in ASP.NET MVC (C#) & MSSQL 2005. I want to implement object databases in my project. While searching Google I found "MongoDB". I don't have enough knowledge of object databases & which one is best suited for use alongside MSSQL 2005. Please suggest a good example/reference regarding object database implementation in an ASP.NET MVC application.

    Read the article
