Search Results

Search found 31293 results on 1252 pages for 'database agnostic'.


  • How to handle expired items?

    - by Mark
    My site allows users to post items with an expiry date. Once an item has expired, it is no longer displayed in the listings. Posts can also be closed, canceled, or completed. It would be nicest to be able to check a single attribute or status ("is active") rather than having to check [is not expired, is not completed, is not closed, is not canceled]. Handling the rest is easy because I can have one "status" field, which is essentially an enum, but AFAIK it's impossible to set the status to "expired" at the exact moment that time occurs. How do people typically handle this?
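
    One common way to handle this is to never store "expired" at all and derive it at query time from a timestamp. A minimal sketch, assuming a hypothetical posts table with a status enum and an expires_at column:

        -- "Active" is computed on the fly, so nothing has to flip a flag
        -- at the moment of expiry.
        SELECT *
        FROM posts
        WHERE status = 'open'        -- not completed, closed, or canceled
          AND expires_at > NOW();    -- not yet expired

    Wrapping this in a view (e.g. active_posts) reduces every listing query to a single check.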

    Read the article

  • Connecting to a fresh SQL Server installation

    - by ripper234
    I know MySQL, and I'd like to learn SQL Server. I'm currently stuck on the most basic of basics: how to install and configure SQL Server, and how to connect to it. I installed SQL Server through Web Platform Installer, and have Visual Studio 2008 installed. Still, I can't figure out how to connect to my server: I can see that the SQL service itself (SQLEXPRESS) is running in both services.msc and SQL Server Configuration Manager, and I try to connect to it via Management Studio, but I don't understand what to do. Where do I begin?
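
    For a default Express install, the instance is usually named SQLEXPRESS, so in Management Studio's connection dialog the server name is ".\SQLEXPRESS" (or "localhost\SQLEXPRESS") with Windows Authentication. A quick sanity check from the command line, as a sketch:

        -- sqlcmd -S .\SQLEXPRESS -E    (-E uses Windows Authentication)
        SELECT @@VERSION;               -- confirms you are talking to the server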

    Read the article

  • Help me write a nicer SQL query in Rails

    - by Sainath Mallidi
    Hi, I am trying to write an SQL query to update some attributes that are regularly pulled from a source. The source output is a text file with the following fields: author, title, date, popularity. I have two tables to update: one holds the author information, the other is a popularity table, and the Author ActiveRecord object has one popularity. Currently I'm doing it like this:

        arr.each { |x|
          x = x.split(" ")
          results = Author.find_by_sql("SELECT authors.id FROM authors, popularity
                                        WHERE authors.id = popularity.authors_id
                                        AND authors.author = '#{x[0]}'")
          results[0].popularity.update_attribute("popularity", x[3])
        }

    I need two tables because the popularity keeps changing, and I need only the top 1000 popular ones, but I still need to keep the previously popular ones as well. Is there a nicer way to do this, instead of one query for every new object? Thanks.
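
    One way to collapse the per-author round trips, as a sketch: load the parsed file into a staging table (here called feed, a hypothetical name) and update everything with a single join. MySQL syntax, using the table and column names from the question:

        UPDATE popularity
        JOIN authors ON authors.id = popularity.authors_id
        JOIN feed    ON feed.author = authors.author
        SET popularity.popularity = feed.popularity;

    From Rails this can be issued once via ActiveRecord::Base.connection.execute instead of one find_by_sql per line.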

    Read the article

  • MySQL query problem

    - by Luke
    Ok, I have the following problem. I have two InnoDB tables, 'places' and 'events'. One place can have many events, but an event can be created without entering a place; in that case the event's foreign key is NULL. The simplified structure of the tables looks as follows:

        CREATE TABLE IF NOT EXISTS `events` (
          `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(255) COLLATE utf8_bin NOT NULL,
          `places_id` int(9) unsigned DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY `fk_events_places` (`places_id`)
        ) ENGINE=InnoDB;

        ALTER TABLE `events`
          ADD CONSTRAINT `events_ibfk_1` FOREIGN KEY (`places_id`)
          REFERENCES `places` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

        CREATE TABLE IF NOT EXISTS `places` (
          `id` int(9) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(255) COLLATE utf8_bin NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB;

    The question is, how do I construct a query that returns the name of the event and the name of the corresponding place (or no value, when there is no place assigned)? I am able to do it with two queries, but then I am visibly separating events that have a place assigned from the ones without one. Help really appreciated.
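
    This is exactly what a LEFT JOIN does; a sketch against the tables above:

        -- Events with no assigned place are kept, and place_name comes back NULL.
        SELECT e.name AS event_name, p.name AS place_name
        FROM events e
        LEFT JOIN places p ON p.id = e.places_id;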

    Read the article

  • SQL clone record with a unique index

    - by Milhous
    Is there a clean way of cloning a record in SQL when the table has an auto-increment index? I want to clone all the fields except the index. I currently have to enumerate every field and use that in an INSERT ... SELECT, and I would rather not explicitly list all of the fields, as they may change over time.
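
    One trick that avoids listing columns, sketched in MySQL syntax (table and key names are illustrative, and it relies on MySQL's default treatment of 0 as "assign a new auto-increment value"):

        CREATE TEMPORARY TABLE tmp_clone AS
          SELECT * FROM my_table WHERE id = 42;   -- the row to clone
        UPDATE tmp_clone SET id = 0;              -- clear the key
        INSERT INTO my_table SELECT * FROM tmp_clone;
        DROP TEMPORARY TABLE tmp_clone;

    SELECT * keeps working if the column list changes, though the id trick breaks under the NO_AUTO_VALUE_ON_ZERO SQL mode.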

    Read the article

  • Merging two tables while applying aggregates on the duplicates (max, min and sum)

    - by cloudraven
    I have a table (let's call it log) with a few million records. Among the fields I have Id, Count, FirstHit, LastHit:

    Id - the record id
    Count - number of times this Id has been reported
    FirstHit - earliest timestamp with which this Id was reported
    LastHit - latest timestamp with which this Id was reported

    This table has only one record for any given Id. Every day I get another table (let's call it feed) with around half a million records, with these fields among many others:

    Id
    Timestamp - entry date and time

    This table can have many records for the same Id. What I want to do is to update log in the following way:

    Count - the log Count value, plus the count() of records for that Id found in feed
    FirstHit - the earlier of the current value in log and the minimum value in feed for that Id
    LastHit - the later of the current value in log and the maximum value in feed for that Id

    It should be noted that many of the Ids in feed are already in log. The simple thing that worked is to create a temporary table and insert into it the union of both, as in:

        SELECT Id, MIN(Timestamp) AS FirstHit, MAX(Timestamp) AS LastHit, COUNT(*) AS Count
        FROM feed GROUP BY Id
        UNION ALL
        SELECT Id, FirstHit, LastHit, Count FROM log;

    From that temporary table I do a select that aggregates MIN(FirstHit), MAX(LastHit) and SUM(Count):

        SELECT Id, MIN(FirstHit), MAX(LastHit), SUM(Count)
        FROM @temp GROUP BY Id;

    and that gives me the end result. I could then delete everything from log and replace it with the contents of temp, or craft an update for the common records and insert the new ones. However, I think both are highly inefficient. Is there a more efficient way of doing this, perhaps doing the update in place in the log table?
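
    If this is SQL Server (the @temp syntax suggests it), a single MERGE can do the upsert in place; a sketch against the columns above:

        MERGE log AS t
        USING (SELECT Id, COUNT(*) AS Cnt,
                      MIN(Timestamp) AS FirstHit, MAX(Timestamp) AS LastHit
               FROM feed GROUP BY Id) AS s
        ON t.Id = s.Id
        WHEN MATCHED THEN UPDATE SET
            t.[Count]  = t.[Count] + s.Cnt,
            t.FirstHit = CASE WHEN s.FirstHit < t.FirstHit THEN s.FirstHit ELSE t.FirstHit END,
            t.LastHit  = CASE WHEN s.LastHit  > t.LastHit  THEN s.LastHit  ELSE t.LastHit  END
        WHEN NOT MATCHED THEN
            INSERT (Id, [Count], FirstHit, LastHit)
            VALUES (s.Id, s.Cnt, s.FirstHit, s.LastHit);

    Only the affected rows are touched, so log never has to be rebuilt or swapped out.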

    Read the article

  • How should I migrate DDL changes from one environment to the next?

    - by Rl
    I make DDL changes using SQL Developer's GUI. Problem is, I need to apply those same changes to the test environment. I'm wondering how others handle this issue. Currently I'm having to manually write ALTER statements to bring the test environment into alignment with the development environment, but this is prone to error (doing the same thing twice). In cases where there's no important data in the test environment I usually just blow everything away, export the DDL scripts from dev and run them from scratch in test. I know there are triggers that can store each DDL change, but this is a heavily shared environment and I would like to avoid that if possible. Maybe I should just write the DDL stuff manually rather than using the GUI?
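
    A lightweight alternative to DDL triggers is to make the scripts themselves the source of truth: every change is written as a numbered migration file in version control, and the same file is applied to each environment in order. A sketch with illustrative names (Oracle syntax, since SQL Developer implies Oracle):

        -- migrations/0042_add_author_email.sql
        ALTER TABLE authors ADD (email VARCHAR2(255));
        -- record the migration in a hypothetical bookkeeping table
        INSERT INTO schema_migrations (version) VALUES (42);
        COMMIT;

    The GUI stays useful for prototyping; the discipline is simply that nothing reaches test except through a script.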

    Read the article

  • Using a variable derived from a drop-down list as the column name in a select statement ... Access DB

    - by user1459698
    I'm working with the world's worst DB that was already here, so don't blame me for that. Here's what I have so far:

        module = txtModule.value
        presstype = txtPressType.value
        SQL_query = "SELECT * FROM tbl_spareparts WHERE '" & module & "' <> '""' AND '" & module & "' = '" & presstype & "' AND Manufacturer = '" & txtsrch.value & "' ORDER BY SAP_Part_No"
        Set rsData = conn.Execute(SQL_query)

    This produces the following SQL statement:

        SELECT * FROM tbl_spareparts WHERE 'Banyan_Module' <> '"' AND 'Banyan_Module' = 'PB' AND Manufacturer = 'Tester' ORDER BY SAP_Part_No

    Is there any way I can use the module variable as a column name? Obviously the quotes around the column name are causing an error. This is really bothering me. BTW, I'm writing this in VBScript inside an .HTA application page, as it has to run locally on tech PCs. Thanks. R.
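
    For what it's worth, the statement you need to generate uses the column name unquoted, or bracketed in Access; wrapping it in single quotes turns it into a string literal, which is why the comparison fails. The target SQL, sketched with the values from the question (assuming the <> check is meant as "not empty"):

        SELECT * FROM tbl_spareparts
        WHERE [Banyan_Module] <> '' AND [Banyan_Module] = 'PB'
          AND Manufacturer = 'Tester'
        ORDER BY SAP_Part_No;

    So in the VBScript, the concatenation should emit "[" & module & "]" rather than "'" & module & "'".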

    Read the article

  • Hibernate Auto-Increment not working

    - by dharga
    I have a column in my DB that is set with IDENTITY(1,1) and I can't get Hibernate annotations to work for it; I get errors when I try to create a new record. In my entity I have the following:

        @GeneratedValue(strategy=GenerationType.IDENTITY, generator="native")
        @Column(name="SeqNo", unique=true, nullable=false)
        BigDecimal seqNo;

    But when I try to add a new record I get the following error:

        Cannot insert explicit value for identity column in table 'MemberSelectedOptions' when IDENTITY_INSERT is set to OFF.

    I don't want to set IDENTITY_INSERT to ON, because I want the identity column in the DB to manage the values. The SQL that is run is the following, where you can clearly see that the insert includes SeqNo:

        insert into dbo.MemberSelectedOptions (OptionStatusCd, EffectiveDate, TermDate,
            SelectionStatusDate, SysLstUpdtUserId, SysLstTrxDtm, SourceApplication,
            GroupId, MemberId, OptionId, SeqNo)
        values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)

    What am I missing?

    Read the article

  • Fastest way to store/retrieve a dictionary - SQL, text file...?

    - by AP257
    Hi all, this is a really, really super dumb question, so I apologise, but I'd be grateful for some advice. I've got a text file of words and word frequencies. It's very large; theoretically we're talking millions of rows. I just want to retrieve values from the file, and do it as quickly and efficiently as possible (for a web app, in Django). My question is: what is the best way to store and retrieve the values? Should I import them into SQL? Or keep the file and use grep? Or put them into a JSON dictionary...? Or some other way? Sorry for the dumb question; I would be very grateful for advice!
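
    Since lookups are by exact word, an indexed table is the usual answer: grep costs a pass over the whole file per lookup, while a primary-key lookup is a B-tree seek. A sketch with illustrative names, in SQLite-style SQL (which maps directly onto a Django model):

        CREATE TABLE word_freq (
            word TEXT PRIMARY KEY,   -- the primary key index makes word lookups fast
            freq INTEGER NOT NULL
        );

        SELECT freq FROM word_freq WHERE word = 'example';

    Loading the file is a one-time bulk insert; after that, millions of rows pose no problem for per-word retrieval.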

    Read the article

  • How to find the entity with the greatest primary key?

    - by simpatico
    I have an entity LearningUnit that has an int primary key. Actually, it has nothing more. The entity Concept has the following relationship with it:

        @ManyToOne
        @Size(min=1,max=7)
        private LearningUnit learningUnit;

    In a constructor of Concept I need to retrieve the LearningUnit with the greatest primary key. If no LearningUnit exists yet, I instantiate one. I then set this.learningUnit to the retrieved/instantiated one. Finally, I call the empty constructor of Concept in a try-catch block, to have the entity manager do the cardinality check. I expect an exception to be thrown in the case that seven other Concepts already refer to the same LearningUnit; in that case, I can instantiate a new LearningUnit with a new, greater primary key. Please also point out clear pitfalls, if any, in the algorithm outlined above.
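
    Fetching the greatest key is a single query. A sketch in plain SQL (the JPQL equivalent would be something like "SELECT MAX(l.id) FROM LearningUnit l"; the table and column names are guesses at the generated schema):

        SELECT *
        FROM learning_unit
        ORDER BY id DESC
        LIMIT 1;   -- dialect-dependent; standard SQL uses FETCH FIRST 1 ROW ONLY

    One pitfall to flag: two concurrent transactions can both read the same maximum and create clashing "next" units, so a max-plus-one scheme races unless the read and the insert are serialized or backed by a unique constraint.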

    Read the article

  • Strange data swapping error occurs when I attempt to update rows in my table from another table in my database

    - by Wesley
    So I have a table of data that is 10,000 rows long. Several of the columns simply describe information about one of the columns; that is, only one column has the content, and the rest describe the location of that content (it's for a book). Right now, only the first 6,000 of the 10,000 rows have their content column filled in; rows 6,000-10,000's content column simply says null. I have another table in the DB that has the content for rows 6,000-10,000, with the correct corresponding primary keys, which would (seemingly) make it easy to update the 10,000-row table. I have been trying an update query such as the following, where big_table stands for the 10,000-row table and small_table for the one holding the content for rows 6,000-10,000:

        UPDATE big_table
        SET content_column = (SELECT content
                              FROM small_table
                              WHERE big_table.id = small_table.id)

    This kind of works: it pulls in the data from the second table just fine, but it replaces the existing content column with null. So rows 1-6,000's content column becomes null, and rows 6,000-10,000's content column has the correct values... pretty strange, I thought, anyway. Does anybody have any thoughts about where I am going wrong? If you could show me a better SQL query, I would appreciate it! Thanks
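
    Nothing is being swapped, actually: for rows with no match in small_table, the subquery returns NULL and the UPDATE dutifully writes it. Restricting the update to rows that have a match fixes it; a sketch with the placeholder names above:

        UPDATE big_table
        SET content_column = (SELECT content
                              FROM small_table
                              WHERE small_table.id = big_table.id)
        WHERE EXISTS (SELECT 1
                      FROM small_table
                      WHERE small_table.id = big_table.id);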

    Read the article

  • Virtual directories as DB queries

    I have a site, e.g. site.com. I would like users to be able to access it in their locale at site.com/somecity. This is similar to craigslist, but they do it with subdomains, e.g. sfbay.craigslist.org. I'm using the Apache HTTP server, with MySQL for the DB. If you can provide a brief explanation, and perhaps links to more thorough discussions, I would be quite interested in learning. I'm developing a web app and wonder if I should spend some time reading up on Apache, or focus more of my time on server-side programming. Thanks!

    Read the article

  • Walking through an SQLite Table

    - by galford13x
    I would like to implement or use functionality that allows stepping through a table in SQLite. If I have a table Products that has 100k rows, I would like to retrieve perhaps 10k rows at a time, something similar to how a webpage would list data and have a < Previous .. Next > link to walk through it. Are there SELECT statements that make this simple? I see, and have tried, using the ROWID in conjunction with LIMIT, which seems OK if not ordering the data:

        -- This seems to work if not ordering.
        SELECT * FROM Products WHERE ROWID BETWEEN x AND y;
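
    When ordering matters, the two standard options are OFFSET paging and keyset paging; sketches below, assuming a hypothetical Name column:

        -- Simple, but deep pages get slower because OFFSET rows are skipped one by one.
        SELECT * FROM Products ORDER BY Name LIMIT 10000 OFFSET 20000;

        -- Keyset paging: remember the last Name from the previous page; stays fast.
        SELECT * FROM Products
        WHERE Name > :last_seen_name
        ORDER BY Name
        LIMIT 10000;

    The keyset form needs an index on the ordering column to avoid a full scan.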

    Read the article

  • No operations allowed after statement closed issue

    - by Washu
    I have the following methods in my singleton to manage the JDBC connection:

        public void openDB() throws ClassNotFoundException, IllegalAccessException,
                InstantiationException, SQLException {
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            String url = "jdbc:mysql://localhost/mbpe_peru"; // my db
            conn = DriverManager.getConnection(url, "root", "admin");
            st = conn.createStatement();
        }

        public void sendQuery(String query) throws SQLException {
            st.executeUpdate(query);
        }

        public void closeDB() throws SQLException {
            st.close();
            conn.close();
        }

    And I'm having a problem in a handler where I have to call this twice:

        private void jButton1ActionPerformed(ActionEvent evt) {
            Main.getInstance().openDB();
            Main.getInstance().sendQuery("call insertEntry('" + EntryID() + "','" + SupplierID() + "');");
            Main.getInstance().closeDB();
            Main.getInstance().openDB();
            for (int i = 0; i < dataBox.length; i++) {
                Main.getInstance().sendQuery("call insertCount('" + EntryID() + "','" + SupplierID() + "','" + BoxID() + "');");
                Main.getInstance().closeDB();
            }
        }

    I have already tried keeping the connection open, sending the two queries, and closing afterwards, and it didn't work. The only way it worked was to not use the methods, and instead declare the connection commands inline with different variables for the connection and the statement. I thought that if I closed the Connection and the Statement I could use the variables once again, since openDB() recreates them, but I'm not able to. Is there any way to solve this using my methods for the JDBC connection?

    Read the article

  • MySQL - "FLUSH TABLES WITH READ LOCK" started automatically

    - by ming yeow
    I would like to understand how this happened. I was running a query that would take a long time, but should not lock up any table. However, my DBs were practically down; it seems they were being locked up by "FLUSH TABLES WITH READ LOCK":

        03:21:31  select type_id, count(*) from guid_target_infos group by type_id
        02:38:11  select type_id, count(*) from guid_infos group by type_id
        02:24:29  FLUSH TABLES WITH READ LOCK

    But I did not start this command. Can someone tell me why it was started automatically?
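
    FLUSH TABLES WITH READ LOCK is not something MySQL issues on its own; it is almost always sent by a backup job that connected around that time, e.g. mysqldump run with --lock-all-tables or --master-data, or a filesystem-snapshot script. A sketch of how to see who holds it:

        -- Lists every connection with its user, host, and current statement,
        -- so the locking session can be traced back to the client that sent it.
        SHOW FULL PROCESSLIST;

    Cross-checking the 02:24:29 timestamp against cron jobs on the DB host usually identifies the culprit.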

    Read the article

  • Multimap Space Issue: Guava

    - by Arpssss
    In my Java code, I am using Guava's Multimap (com.google.common.collect.Multimap), created like this:

        Multimap<Integer, Integer> index = HashMultimap.create();

    Here, the Multimap key is some portion of a URL and the value is another portion of the URL (converted into an integer). Now, I give my JVM 2560 MB (2.5 GB) of heap space (using -Xmx and -Xms), yet it can only store about 9 or 10 million such (key, value) pairs of integers. But theoretically (going by the memory an int occupies) it should store more. Can anybody help me:

    1) Why is this happening (i.e. why is the Multimap taking so much space)? I checked my code without inserting pairs into the Multimap, and it takes only 1/2 MB of space.

    2) Is there any other way, or a home-baked solution, to solve this space issue? More clearly, how do I solve this memory issue? Thanks in advance; any idea is perfectly OK for me.

    Read the article

  • Can't create a MySQL query that generates 4 rows for each row in the table it references.

    - by UkraineTrain
    I need to create a MySQL query that generates 4 rows for each row in the table it references. I need some of the information in those rows to repeat, and some to be different. In the table, each row stands for one day. I need to break the day up into 6-hour increments, hence the four rows for each entry. I need one column which, for each day, will have the values '12AM', '6AM', '12PM', and '6PM', and another column with the corresponding numeric values calculated for those entries. Thanks a lot in advance; I will really appreciate any help on this.
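
    The usual trick is a CROSS JOIN against a fixed four-row derived table; a sketch with illustrative table and column names:

        SELECT d.day_date,
               b.block_label,
               d.base_value * b.weight AS block_value   -- stand-in for the real calculation
        FROM days d
        CROSS JOIN (
            SELECT '12AM' AS block_label, 0.25 AS weight UNION ALL
            SELECT '6AM',  0.25 UNION ALL
            SELECT '12PM', 0.25 UNION ALL
            SELECT '6PM',  0.25
        ) AS b;

    Every row of days comes back four times, once per 6-hour block, with the per-block number computed in the select list.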

    Read the article

  • Efficient update of SQLite table with many records

    - by blackrim
    I am trying to use SQLite (sqlite3) for a project that stores hundreds of thousands of records (I'd like SQLite so users of the program don't have to run a [my]sql server). I sometimes have to update hundreds of thousands of records to enter left/right values (they are hierarchical), but have found the standard

        update table set left_value = 4, right_value = 5 where id = 12340;

    to be very slow. I have tried surrounding every thousand or so with

        begin;
        ...
        update table set left_value = 4, right_value = 5 where id = 12340;
        ...
        commit;

    but again, very slow. Odd, because when I populate it with a few hundred thousand rows (with inserts), it finishes in seconds. I am currently testing the speed in Python (the slowness shows both at the command line and in Python) before I move it to the C++ implementation, but right now this is way too slow and I need to find a new solution, unless I am doing something wrong. Thoughts? (I would also take an open-source alternative to SQLite that is portable.)
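
    Two things usually matter here. First, make sure id is an indexed key (ideally INTEGER PRIMARY KEY, which in SQLite is the rowid itself), so each UPDATE is a seek rather than a table scan; INSERTs look fast because they just append, while an UPDATE whose WHERE column is unindexed scans the table every time, which matches the symptom. Second, relax durability during the bulk load; a sketch, with the trade-offs noted in the comments:

        PRAGMA synchronous = OFF;      -- skip the per-commit fsync; a crash can lose the batch
        PRAGMA journal_mode = MEMORY;  -- keep the rollback journal off disk
        BEGIN;
        -- ... the whole batch of updates, not just a thousand at a time ...
        COMMIT;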

    Read the article

  • Fluent nHibernate - How to map a non-key column on a junction table?

    - by The Matt
    Taking an example provided on the Fluent NHibernate website, I need to extend it slightly: I need to add a 'Quantity' column to the StoreProduct table. How would I map this using NHibernate? An example mapping is provided for the given scenario above, but I'm not sure how I would get the Quantity column to map to a property on the Product class:

        public class StoreMap : ClassMap<Store>
        {
            public StoreMap()
            {
                Id(x => x.Id);
                Map(x => x.Name);
                HasMany(x => x.Employee)
                    .Inverse()
                    .Cascade.All();
                HasManyToMany(x => x.Products)
                    .Cascade.All()
                    .Table("StoreProduct");
            }
        }

    Read the article

  • Fastest way to do a weighted tag search in SQL Server

    - by Hasan Khan
    My table is as follows:

        ObjectID  bigint
        Tag       nvarchar(50)
        Weight    float
        Type      tinyint

    I want to search for all objects that have the tags 'big' or 'large', and I want the ObjectIDs in order of the sum of weights (so objects having both tags will be on top):

        select objectid, row_number() over (order by sum(weight) desc) as rowid
        from tags
        where tag in ('big', 'large') and type = 0
        group by objectid

    The reason for row_number() is that I want paging over the results. The query in its current form is very slow; it takes a minute to execute over 16 million tags. What should I do to make it faster? I have a non-clustered index on (objectid, tag, type). Any suggestions?
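
    The index order is the likely problem: with (objectid, tag, type), the tag IN (...) filter cannot seek, so the whole index gets scanned. An index led by the filtered columns and covering the rest lets each tag resolve to a range seek. A sketch in SQL Server syntax:

        CREATE NONCLUSTERED INDEX IX_tags_tag_type
            ON tags (tag, type)
            INCLUDE (objectid, weight);

    With that in place, the query reads only the 'big' and 'large' ranges instead of all 16 million rows.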

    Read the article
