Search Results

Search found 34962 results on 1399 pages for 'drop database'.

Page 332/1399 | < Previous Page | 328 329 330 331 332 333 334 335 336 337 338 339  | Next Page >

  • Non-Access or -base QBE tool?

    - by idbin.cath0br0
    Hello. I'm currently looking for a QBE tool that can execute queries against PostgreSQL or MySQL. The OS doesn't really matter. The reason is that we have to do QBE at school, but I don't want to use either Microsoft Access or OpenOffice.org Base (lack of features). Any help would be appreciated.

    Read the article

  • Cached database

    - by radi
    Hi, in my project I need two tables, each with about 2000 rows. I want the application to be fast, so the database should be loaded into memory (cached) when the app starts, and saved back to disk before it closes. I am using Java and I want to use SQL.
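
    One way to get this behaviour, assuming an embedded H2 database (the URL, database alias and file name below are illustrative, not from the question): H2 can keep the whole database in memory, populate it from an SQL script when the connection is opened, and dump it back to a script before the application closes.

        -- connect with a JDBC URL that loads the saved script at startup, e.g.
        --   jdbc:h2:mem:mydb;DB_CLOSE_DELAY=-1;INIT=RUNSCRIPT FROM 'data.sql'

        -- before the app shuts down, persist the in-memory database back to disk:
        SCRIPT TO 'data.sql';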

    Read the article

  • SQL Server 2008 Express connection problems

    - by user163457
    Hi, I've just installed a fresh copy of SQL Server 2008 Express. Before I did anything else I opened Management Studio and successfully connected using Windows Authentication. However, when I ran "telnet localhost 1433" on the command line I got the error "Could not open connection to the host, on port 1433: Connect failed". I checked netstat and there is nothing listening on port 1433. Before I go any further, is there a problem with the install? Thanks, Shane

    Read the article

  • Dynamic data validation in ASP.NET MVC

    - by user252160
    I've recently read about the model validation capabilities of ASP.NET MVC, which are all very cool up to a certain point. What happens if the application doesn't know the data it works with, because it is all stored in the DB and assembled at runtime? Just like in Drupal, I'd like to be able to define custom types at runtime, and assign runtime validation rules as well. Obviously, the idea of assigning attributes to well-established models is now gone. What else could be done? I am thinking in terms of rules being stored as JSON objects in DB fields, or something like that.
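
    A minimal sketch of what the rules-in-the-database idea could look like (the table and column names are hypothetical, and the dialect is SQL Server to match the ASP.NET context); the application would load the rows for a given runtime type and apply the corresponding validators instead of compile-time attributes:

        CREATE TABLE field_validation_rules (
            rule_id     INT IDENTITY PRIMARY KEY,
            entity_type VARCHAR(100)  NOT NULL,  -- the runtime-defined type, e.g. 'Article'
            field_name  VARCHAR(100)  NOT NULL,  -- the runtime-defined field
            rule_json   NVARCHAR(MAX) NOT NULL   -- e.g. {"rule":"range","min":1,"max":10}
        );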

    Read the article

  • How to show unread subforums?

    - by bilygates
    I have written a simple forum in PHP using PostgreSQL. The forum consists of a number of subforums (or categories, if you like) that contain topics. I have a table that stores when was the last time a user visited a topic. It's something like this: user_id, topic_id, timestamp. I can easily determine what topics should be marked as unread by comparing the timestamp of the last topic reply with the timestamp of the last user visit. My question is: how do I efficiently determine what subforums (categories) should be marked as unread? All I've come up with is this: every time a user visits a topic, update the visit timestamp and check if all the topics from the current subforum are read or unread. If they are all read, mark the subforum as read for the user. Else, mark it as unread. But I think there must be another way. Thank you in advance.
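
    One set-based way to answer "which subforums are unread?" directly, instead of maintaining a per-subforum flag on every visit (the topics table and its columns are assumed here, since the question only describes the visit table): a subforum is unread if it contains at least one topic whose last reply is newer than the user's recorded visit, or that the user has never visited.

        SELECT DISTINCT t.subforum_id
        FROM topics t
        LEFT JOIN topic_visits v
               ON v.topic_id = t.topic_id
              AND v.user_id  = :user_id
        WHERE v.topic_id IS NULL                 -- never visited by this user
           OR t.last_reply_at > v.visited_at;    -- replied to since the last visit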

    Read the article

  • PDO update query with conditional?

    - by dmontain
    I have a PDO MySQL query that updates 3 fields:

        $update = $mypdo->prepare("UPDATE tablename SET field1=:field1, field2=:field2, field3=:field3 WHERE key=:key");

    But I want field3 to be updated only when $update3 = true; (meaning that the update of field3 is controlled by a conditional statement). Is this possible to accomplish with a single query? I could do it with 2 queries, where I update field1 and field2, then check the boolean and update field3 if needed in a separate query:

        //run this query to update only fields 1 and 2
        $update_part1 = $mypdo->prepare("UPDATE tablename SET field1=:field1, field2=:field2 WHERE key=:key");

        //if field3 should be updated, run a separate query to update it separately
        if ($update3){
            $update_part2 = $mypdo->prepare("UPDATE tablename SET field3=:field3 WHERE key=:key");
        }

    But hopefully there is a way to accomplish this in 1 query?
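
    One way to keep it to a single statement, assuming MySQL, is to let the SQL itself decide whether field3 is overwritten or kept, by binding $update3 as 1 or 0 to an extra placeholder. A sketch using the column names from the question:

        UPDATE tablename
        SET field1 = :field1,
            field2 = :field2,
            field3 = CASE WHEN :update3 = 1 THEN :field3 ELSE field3 END
        WHERE `key` = :key;   -- backticks needed if the column really is named "key" (a reserved word)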

    Read the article

  • Is hierarchical data in MySQL fast to retrieve?

    - by ajsie
    I've got a list of all countries, states, and cities (and sub-cities/villages, etc.) in an XML file, and retrieving, for example, all of a state's cities is really quick with XML (using an XML parser). I wonder, if I put all this information into MySQL, is retrieving all of a state's cities as fast as with XML? Because XML is designed to store hierarchical data, while relational databases like MySQL are not. The list contains about 500,000 entities. So I wonder if it's as fast as XML using either of: the adjacency list model or the nested set model. And which one should I use? Because (theoretically) there could be unlimited levels under a state. And which is fastest for this huge dataset? Thanks!
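
    For a sense of what the nested set version of "all cities under a state" looks like (the table and column names below are assumed), the whole subtree comes back from a single range condition, so with an index on the lft/rgt pair it is typically very fast for read-heavy data of this size. The adjacency list model, by contrast, needs one self-join per level or a recursive query, which is what makes unbounded depth awkward.

        SELECT child.*
        FROM geo AS parent
        JOIN geo AS child
             ON child.lft BETWEEN parent.lft AND parent.rgt
        WHERE parent.id = 42        -- the state's id
          AND child.id <> parent.id;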

    Read the article

  • Calling data from different model in Rails

    - by Danny McClelland
    Hi everyone, I need to be able to call data from a different model - not just one field, but any of them. At the moment I have the following models: kase, person, company, party. I can call information from the company to the kase and from the person to the kase using this:

        <li>Client Company Address: <span class="address"><%=h @kase.company.companyaddress %></span></li>
        <li>Case Handler: <span><%=h @kase.person.personname %></span></li>

    Those two work, however if I add the following:

        <li>Client Company Fax: <span><%=h @kase.company.companyfax %></span></li>
        <li>Case Handler Tel: <span><%=h @kase.person.personmobile %></span></li>
        <li>Case Handler Email: <span><%=h @kase.person.personemail %></span></li>

    Any idea what's wrong? Thanks, Danny

    EDIT: I get the following error message:

        NoMethodError in Kases#show
        Showing app/views/kases/show.html.erb where line #37 raised:
        You have a nil object when you didn't expect it!
        The error occurred while evaluating nil.personname

    The lines that are noted are as follows:

        34: <div id="clientinfo_showhide" style="display:none">
        35: <li>Client Company Address: <span class="address"><%=h @kase.company.companyaddress %></span></li>
        36: <li>Client Company Fax: <span><%=h @kase.company.companyfax %></span></li>
        37: <li>Case Handler: <span><%=h @kase.person.personname %></span></li>
        38: <li>Case Handler Tel: <span><%=h @kase.person.personmobile %></span></li>
        39: <li>Case Handler Email: <span><%=h @kase.person.personemail %></span></li>
        40: </div>

    The model for the kase is as follows:

        class Kase
          belongs_to :company # foreign key: company_id
          belongs_to :person  # foreign key in join table

    The model for the person is as follows:

        class Person
          has_many :kases    # foreign key in join table
          belongs_to :company

    The model for the company is as follows:

        class Company
          has_many :kases
          has_many :people
          def to_s; companyname; end

    Hope this helps!

    Read the article

  • Yahoo Query Language Problem

    - by Damiano
    Hello everybody! Today I've started with the Yahoo Query Language. I want to use it to retrieve stock details, so I'm talking about Yahoo Finance. I think there is a bug in this language. This is my query:

        select * from yahoo.finance.quoteslist where symbol='@^GSPC'

    I ALWAYS get 51 results! That's impossible; take a look at http://it.finance.yahoo.com/q/cp?s=^GSPC - there are 500 results! I also tried some paging parameters:

        select * from yahoo.finance.quoteslist(50,30) where symbol='@^GSPC' (to get from 50 to 80)
        select * from yahoo.finance.quoteslist(100) where symbol='@^GSPC' (to get the first 100 results)
        select * from yahoo.finance.quoteslist where symbol='@^GSPC' limit 30 offset 50

    but ALWAYS the last stock is:

        <quote symbol="BBY">
          <Symbol>BBY</Symbol>
          <LastTradePriceOnly>41.03</LastTradePriceOnly>
          <LastTradeDate>5/7/2010</LastTradeDate>
          <LastTradeTime>4:00pm</LastTradeTime>
          <Change>-0.48</Change>
          <Open>41.35</Open>
          <DaysHigh>42.35</DaysHigh>
          <DaysLow>39.60</DaysLow>
          <Volume>14129531</Volume>
        </quote>

    Why do I have this kind of problem? Thank you so much for your support! (P.S. I've tested it in the Yahoo YQL console.)

    Read the article

  • MS Access 2003 - Failure to create MDE file: error VBA is corrupt?

    - by Justin
    OK, so this is a brand new snag I have run into. I am trying to build a new MDE from my source MDB file, and it is locking up Access. In the MDB, I first compact and repair, and then select "Create a new MDE" (just as I have done many times before). It looks like it starts the process, but it never gets to the point where it compacts at the end, and Access stops responding. After I force-close the app, I look in the folder where I am trying to create the MDE and I see there is a new db1 Access file there. If I try to open it, I get an error that says the file was not found, and then it says the Visual Basic for Applications project is corrupt. The thing is, I only made a very simple adjustment to the code since last building an MDE, and I have double- and triple-checked it: it's not the change, because it's just a simple "open this form and close that one" addition. I did, however, have my source MDB file on a disc that I copied to my laptop, and then tried to re-link the tables to the network drive (they had been linked to other tables on my local drive so that I could develop offline) - could that be related? PLEASE HELP!!!

    Read the article

  • Multiple rows update trigger

    - by sony
    If I have a statement that updates multiple rows, the trigger seems to fire only for the first or last row being updated (I'm not sure which one). I need to make a trigger that fires for ALL the records being updated in a particular table.
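
    Assuming SQL Server (where this symptom is common), the trigger actually fires once per statement, not per row; it only looks like "first or last row only" when the trigger body reads a single value from the inserted/deleted pseudo-tables into a variable. Writing the body as a set-based operation over the whole inserted table handles every affected row. A sketch with invented table and column names:

        CREATE TRIGGER trg_MyTable_AfterUpdate
        ON MyTable
        AFTER UPDATE
        AS
        BEGIN
            -- join against the inserted pseudo-table so every updated row is processed
            INSERT INTO MyTable_Audit (RecID, ChangedAt)
            SELECT i.RecID, GETDATE()
            FROM inserted AS i;
        END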

    Read the article

  • Fastest method for SQL Server inserts, updates, selects from C# ASP.Net 2.0+

    - by Ian
    Hi all, long-time listener, first-time caller. I use stored procedures, and this isn't an SP vs. code-behind "build your SQL command" question. I'm looking for a high-throughput method for a backend app that handles many small transactions. I use SqlDataReader for most of the returns, since forward-only works in most cases for me. I've seen it done many ways, and have used most of them myself:

        - Methods that define and accept the stored procedure parameters as parameters themselves and build using cmd.Parameters.Add (with or without specifying the DB value type and/or length)
        - Assembling your SP params and their values into an array or hashtable, then passing them to a more abstract method that parses the collection and then runs cmd.Parameters.Add
        - Classes that represent tables, initializing the class upon need, setting the public properties that represent the table fields, and calling methods like Save, Load, etc.

    I'm sure there are others I've seen but can't think of at the moment as well. I'm open to all suggestions.

    Read the article

  • SQL query for selecting the firsts in a series by column

    - by SP
    I'm having some trouble coming up with a query for what I am trying to do. I've got a table we'll call 'Movements' with the following columns: RecID (key), Element (foreign key), Time (datetime), Room (int). The table holds a history of movements for the Elements. One record contains the Element the record is for, the time of the recorded location, and the room it was in at that time. What I would like are all records that indicate that an Element entered a room - that is, the first record (by time) for an Element in a run of consecutive movements within the same room. The input is a room number and a time: I would like all of the records indicating that any Element entered room X after time Y. The closest I came was this:

        SELECT Element, MIN(Time)
        FROM Movements
        WHERE Time > Y AND Room = X
        GROUP BY Element

    This will only give me one room-entry record per Element, though (if the Element has entered room X twice since time Y, I'll only get the first one back). Any ideas? Let me know if I have not explained this clearly. I'm using MS SQL Server 2005.
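
    One way to express "entered the room" in SQL Server 2005 is to look up each record's immediately preceding movement for the same Element and keep the record only when that previous movement was in a different room (or doesn't exist). A sketch using the columns from the question, with @X and @Y standing in for the room and time parameters:

        SELECT m.RecID, m.Element, m.[Time], m.Room
        FROM Movements AS m
        OUTER APPLY (
            SELECT TOP 1 prev.Room
            FROM Movements AS prev
            WHERE prev.Element = m.Element
              AND prev.[Time] < m.[Time]
            ORDER BY prev.[Time] DESC
        ) AS p
        WHERE m.Room = @X
          AND m.[Time] > @Y
          AND (p.Room IS NULL OR p.Room <> m.Room);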

    Read the article

  • Any disadvantages to this tagging approach

    - by donpal
    I have a table of tags:

        ID   tag
        ---  ------
        1    tagt
        2    tagb
        3    tagz
        4    tagn

    In my items table, I'm using those tags in serialized, comma-delimited format:

        ID   field1   tags
        ---  ------   ---------
        1    value1   tagt,tagb
        2    value2   tagb
        3    value3   tagb,tagn
        4    value4

    When I need to find items that have a given tag, I plan to deserialize the tags. But I'm not actually sure how to do it, or whether it would be better to have a third table for the associations between the tags and the items instead.
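
    The main disadvantage of the serialized column is exactly that lookup: finding items by tag means string matching on the tags column instead of an indexed join, and the database can't enforce that every tag in the list actually exists. A sketch of the third-table (association) alternative, with the link table name invented:

        CREATE TABLE item_tags (
            item_id INT NOT NULL,
            tag_id  INT NOT NULL,
            PRIMARY KEY (item_id, tag_id)
        );

        -- all items carrying the tag 'tagb'
        SELECT i.*
        FROM items i
        JOIN item_tags it ON it.item_id = i.ID
        JOIN tags t       ON t.ID = it.tag_id
        WHERE t.tag = 'tagb';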

    Read the article

  • MySQL GROUP_CONCAT

    - by user301766
    I want to list all users with their corresponding user classes. Here are simplified versions of my tables:

        CREATE TABLE users (
            user_id INT NOT NULL AUTO_INCREMENT,
            user_class VARCHAR(100),
            PRIMARY KEY (user_id)
        );
        INSERT INTO users VALUES (1, '1'), (2, '2'), (3, '1,2');

        CREATE TABLE classes (
            class_id INT NOT NULL AUTO_INCREMENT,
            class_name VARCHAR(100),
            PRIMARY KEY (class_id)
        );
        INSERT INTO classes VALUES (1, 'Class 1'), (2, 'Class 2');

    And this is the query I am trying to use, but it only returns the first matching user class and not a concatenated list as hoped:

        SELECT user_id, GROUP_CONCAT(DISTINCT class_name SEPARATOR ",") AS class_name
        FROM users, classes
        WHERE user_class IN (class_id)
        GROUP BY user_id;

    Actual output:

        +---------+------------+
        | user_id | class_name |
        +---------+------------+
        |       1 | Class 1    |
        |       2 | Class 2    |
        |       3 | Class 1    |
        +---------+------------+

    Wanted output:

        +---------+------------------+
        | user_id | class_name       |
        +---------+------------------+
        |       1 | Class 1          |
        |       2 | Class 2          |
        |       3 | Class 1, Class 2 |
        +---------+------------------+

    Thanks in advance
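
    The comparison user_class IN (class_id) only appears to work for single values: MySQL casts the string '1,2' to the number 1, so only the first id in the list ever matches. One way to join on the comma-separated column directly is FIND_IN_SET (a sketch based on the tables above); normalizing the user/class relationship into its own link table would avoid the issue altogether.

        SELECT u.user_id,
               GROUP_CONCAT(c.class_name ORDER BY c.class_id SEPARATOR ', ') AS class_name
        FROM users u
        JOIN classes c
             ON FIND_IN_SET(c.class_id, u.user_class)
        GROUP BY u.user_id;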

    Read the article

  • Is hierarchical data in MySQL as fast to retrieve as XML?

    - by ajsie
    I've got a list of all countries, states, and cities (and sub-cities/villages, etc.) in an XML file, and retrieving, for example, all of a state's cities is really quick with XML (using an XML parser). I wonder, if I put all this information into MySQL, is retrieving all of a state's cities as fast as with XML? Because XML is designed to store hierarchical data, while relational databases like MySQL are not. The list contains about 500,000 entities. So I wonder if it's as fast as XML using either of: the adjacency list model or the nested set model. And which one should I use? Because (theoretically) there could be unlimited levels under a state (I heard that the adjacency list isn't good for unlimited child levels). And which is fastest for this huge dataset? Thanks!

    Read the article

  • What is the scope of a TRANSACTION in SQL Server?

    - by Shantanu Gupta
    I was creating a stored procedure, and my colleague and I got stuck on a difference in how we write them. I am using SQL Server 2005. I was writing the stored procedure like this:

        BEGIN TRAN
        BEGIN TRY
            INSERT INTO Tags.tblTopic (Topic, TopicCode, Description)
            VALUES(@Topic, @TopicCode, @Description)

            INSERT INTO Tags.tblSubjectTopic (SubjectId, TopicId)
            VALUES(@SubjectId, @@IDENTITY)

            COMMIT TRAN
        END TRY
        BEGIN CATCH
            DECLARE @Error VARCHAR(1000)
            SET @Error= 'ERROR NO : '+ERROR_NUMBER() + ', LINE NO : '+ ERROR_LINE() + ', ERROR MESSAGE : '+ERROR_MESSAGE()
            PRINT @Error
            ROLLBACK TRAN
        END CATCH

    And my colleague was writing it like this:

        BEGIN TRY
            BEGIN TRAN
            INSERT INTO Tags.tblTopic (Topic, TopicCode, Description)
            VALUES(@Topic, @TopicCode, @Description)

            INSERT INTO Tags.tblSubjectTopic (SubjectId, TopicId)
            VALUES(@SubjectId, @@IDENTITY)

            COMMIT TRAN
        END TRY
        BEGIN CATCH
            DECLARE @Error VARCHAR(1000)
            SET @Error= 'ERROR NO : '+ERROR_NUMBER() + ', LINE NO : '+ ERROR_LINE() + ', ERROR MESSAGE : '+ERROR_MESSAGE()
            PRINT @Error
            ROLLBACK TRAN
        END CATCH

    Here the only difference you will find is the position of BEGIN TRAN. According to me, my colleague's version should not work when an exception occurs, i.e. the ROLLBACK should not execute, because the TRAN doesn't have scope there. But when I tried to run both versions, both worked in the same way. I am confused about how TRANSACTION works. Is it scope-free, or what?
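
    The reason both versions behave the same is that a transaction is not scoped to the TRY block (or to any BEGIN/END block): it belongs to the connection, so a transaction opened on either side of BEGIN TRY is still open when control reaches CATCH, and ROLLBACK finds it there. A common defensive pattern is to check that a transaction is actually open before rolling back; a sketch along those lines:

        BEGIN TRY
            BEGIN TRAN
            -- ... the two INSERTs from above ...
            COMMIT TRAN
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0     -- only roll back if a transaction is still open
                ROLLBACK TRAN

            DECLARE @Error VARCHAR(1000)
            SET @Error = 'ERROR NO : ' + CAST(ERROR_NUMBER() AS VARCHAR(10))
                       + ', LINE NO : ' + CAST(ERROR_LINE() AS VARCHAR(10))
                       + ', ERROR MESSAGE : ' + ERROR_MESSAGE()
            PRINT @Error
        END CATCH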

    Read the article

  • PostgreSQL - best way to return an array of key-value pairs

    - by Matt W
    I'm trying to select a number of fields, one of which needs to be an array, with each element of the array containing two values. Each array item needs to contain a name (character varying) and an ID (numeric). I know how to return an array of single values (using the ARRAY keyword), but I'm unsure how to return an array of objects which themselves contain two values each. The query is something like:

        SELECT t.field1,
               t.field2,
               ARRAY(-- with each element containing two values, i.e. {'TheName', 1}
               )
        FROM MyTable t

    I read that one way to do this is by selecting the values into a type and then creating an array of that type. The problem is, the rest of the function already returns a type (which means I would then have nested types - is that OK? If so, how would you read this data back in application code, i.e. with a .NET data provider like Npgsql?) Any help is much appreciated.
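
    One way to do this without nesting the function's return type any further is to define a named composite type for the pair and build the array from a correlated subquery; the composite type, the detail table, and its join column below are invented for the sketch:

        CREATE TYPE name_id_pair AS (name character varying, id numeric);

        SELECT t.field1,
               t.field2,
               ARRAY(
                   SELECT ROW(d.name, d.id)::name_id_pair
                   FROM detail_table d
                   WHERE d.mytable_id = t.id
               ) AS name_id_pairs
        FROM MyTable t;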

    Read the article

  • How to track history of db tables that include many-to-many mapping tables?

    - by chacmool
    I have seen several questions here on tracking db history, but can't seem to find one that matches our situation. We need to track the history of several tables, some of which are many-to-many linking tables. E.g., say we have this schema:

        EntityA
            id
            name

        EntityB
            id
            name

        ABLink
            A_id
            B_id

    Tracking changes to EntityA or EntityB seems pretty straightforward: we can keep a log table with the same columns plus a date stamp and user. But what about the links? How do we maintain the set of links that are valid for a given version of the data? We need to be able to recreate a history of the data showing changes in chronological order, so if a link is added or deleted, we indicate that. Etc.
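
    One approach that keeps the linking table itself simple: log every link change in a history table of its own, then rebuild the set of links valid at any point in time by replaying that log. A sketch (the history table, its columns, and the @AsOf parameter are invented to match the example schema):

        CREATE TABLE ABLink_history (
            A_id       INT         NOT NULL,
            B_id       INT         NOT NULL,
            action     CHAR(1)     NOT NULL,   -- 'A' = link added, 'R' = link removed
            changed_at DATETIME    NOT NULL,
            changed_by VARCHAR(50) NOT NULL
        );

        -- links that existed at time @AsOf: more adds than removes up to that point
        SELECT h.A_id, h.B_id
        FROM ABLink_history h
        WHERE h.changed_at <= @AsOf
        GROUP BY h.A_id, h.B_id
        HAVING SUM(CASE WHEN h.action = 'A' THEN 1 ELSE -1 END) > 0;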

    Read the article

  • More efficient method for grabbing all child units

    - by Hazior
    I have a table in SQL that links to itself through parentID. I want to find the children, and their children, and so forth, until I have found all the descendant objects. I have a recursive function that does this, but it seems very inefficient. Is there a way to get SQL to find all the child objects? If so, how?
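
    If the database supports recursive common table expressions (SQL Server 2005 and later, PostgreSQL 8.4 and later with WITH RECURSIVE, etc.), the whole subtree can be fetched in one statement instead of one query per level. A sketch in SQL Server syntax, with the table name and the @rootId parameter assumed:

        WITH descendants AS (
            -- anchor: the unit we start from
            SELECT id, parentID
            FROM Units
            WHERE id = @rootId

            UNION ALL

            -- recursive step: anything whose parent is already in the result
            SELECT u.id, u.parentID
            FROM Units u
            JOIN descendants d ON u.parentID = d.id
        )
        SELECT id, parentID
        FROM descendants;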

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document db, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products. The product model would be pretty bad for an RDBMS as there'd be many different kinds of products with unique attributes. For example, there'd be books which have ISBN, authors, title, pages etc., as well as DVDs with play time, directors, artists etc., and quite a few more types. In the end, I'd have about 9 different products with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most. The three commonly used ways in an RDBMS would be:

        1. EAV model, which would have pretty bad performance characteristics and would make it either impractical or perform even worse if I'd like to display the author of a book in a list of different products (think start page, recommended products etc.).
        2. Ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row-wise), I don't have any experience with tables with more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications.
        3. Create a table for each product type: I personally don't like this approach as it complicates everything else.

    C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed in list views etc.). In the app I'll use BookBehavior / DvdBehaviour, which are wrappers around a product DTO but only expose the relevant properties. My questions now:

        1. Are my performance concerns with the many-columns approach valid?
        2. Did I overlook something, and is there a much better way to do it in an RDBMS?
        3. Is MongoDB on Windows stable enough?
        4. Does my approach with different behaviour wrappers make sense?

    Read the article

  • BI with Django?

    - by Helmut
    Is there a way to develop BI (Business Intelligence) solutions with Django? In particular, it should be possible to define models with more than one data source. Is there anybody out there who has experience with BI in Django? How could it work?

    Read the article
