Search Results

Search found 27905 results on 1117 pages for 'sql authority'.

Page 657/1117 | < Previous Page | 653 654 655 656 657 658 659 660 661 662 663 664  | Next Page >

  • Sequence of events in Access

    - by I__
    What is the proper way of doing the following in Access: get a date as user input, run a query against it, and generate a report that uses the query? This is the solution I was thinking of: have a form that takes the user input, run the query, then open the report. Is that the correct way of doing this?
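
    A minimal sketch of that pattern (the form, control, and table names here are hypothetical): a saved Access query can read its date criterion straight from an open form, and the report is then based on that query:

        SELECT *
        FROM Orders
        WHERE OrderDate = [Forms]![frmReportCriteria]![txtReportDate];

    The form's button handler then only needs to call DoCmd.OpenReport; Access resolves the form reference when the query runs.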

    Read the article

  • Multiple user database design

    - by dieguitoweb
    I have to develop a basic social network for an academic purpose, but I need some tips for the user management. The users are subdivided into 3 groups with different privileges: admins, analysts, and standard users. For every user the following information should be stored in the database: name, last name, e-mail, age, password. I'm not quite sure how I should design the database; I'm deciding between these two solutions: 1) one table called 'users' with a 'role' attribute that defines what a user can and cannot do, with permissions managed in PHP; 2) every application user is a database user created with 'CREATE ROLE' (it's a Postgres database), with permissions on tables granted via 'GRANT'. You should take into account that the project is for a database exam. Thanks.
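
    A minimal sketch of option 1 in Postgres (any column details beyond those listed in the question are assumptions):

        CREATE TABLE users (
            user_id   serial PRIMARY KEY,
            name      text NOT NULL,
            last_name text NOT NULL,
            email     text NOT NULL UNIQUE,
            age       integer,
            password  text NOT NULL,  -- store a password hash, never the plain password
            role      text NOT NULL CHECK (role IN ('admin', 'analyst', 'standard'))
        );

    The CHECK constraint keeps the role column to the three known groups; the PHP layer then decides what each role may do.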

    Read the article

  • How to automatically check out a database file in a source-controlled web application?

    - by TheRHCP
    Hello, I am working on an ASP.NET web application. We are a small team (4 students) and we do not have access to a dedicated server to host the database instance, so for this web application we decided to just put the database file in the App_Data folder. The problem is that our project is source-controlled on TFS, so every time we open the solution and try to launch the web application, we get an exception saying that the database is read-only. That is logical, because the database file is not automatically checked out. Is there a workaround to avoid a manual check-out of the database file every time we open the solution? Thanks.

    Read the article

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query:

        UPDATE Indexer.Pages SET LastError = NULL WHERE LastError IS NOT NULL;

    Right now, this query takes about 93 minutes to complete, and I'd like to find ways to make it a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) holds about 46 GB of data, but the majority of that is in a text field called html; I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work, since I don't have the disk space to create a copy of the table; I'd have to copy it over to another machine, drop the table, then copy the data back, which would probably take all evening. Ideas? I'm using Postgres 9.0.0.

    UPDATE: Here's the schema:

        CREATE TABLE indexer.pages
        (
            id uuid NOT NULL,
            url character varying(1024) NOT NULL,
            firstcrawled timestamp with time zone NOT NULL,
            lastcrawled timestamp with time zone NOT NULL,
            recipeid uuid,
            html text NOT NULL,
            lasterror character varying(1024),
            missingings smallint,
            CONSTRAINT pages_pkey PRIMARY KEY (id),
            CONSTRAINT indexer_pages_uniqueurl UNIQUE (url)
        );

    I also have two indexes:

        CREATE INDEX idx_indexer_pages_missingings
            ON indexer.pages USING btree (missingings)
            WHERE missingings > 0;

    and

        CREATE INDEX idx_indexer_pages_null
            ON indexer.pages USING btree (recipeid)
            WHERE NULL::boolean;

    There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.
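
    Since nearly every row is touched, one approach worth trying (a sketch, not tested against this schema; the batch size of 10,000 is arbitrary) is to update in batches so each transaction stays small:

        UPDATE indexer.pages
        SET lasterror = NULL
        WHERE id IN (
            SELECT id
            FROM indexer.pages
            WHERE lasterror IS NOT NULL
            LIMIT 10000
        );

    Repeating this until no rows remain keeps each transaction small and lets vacuum reclaim dead tuples between batches.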

    Read the article

  • What's wrong with this MSSQL query?

    - by ClixNCash
    What's wrong with this MSSQL query?

        Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button1.Click
            Dim SQLData As New System.Data.SqlClient.SqlConnection("Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Database.mdf;Integrated Security=True;User Instance=True")
            Dim cmdSelect As New System.Data.SqlClient.SqlCommand("SELECT COUNT(*) FROM Table1 WHERE Name ='" + TextBox1.Text + "'", SQLData)
            SQLData.Open()
            If cmdSelect.ExecuteScalar > 0 Then
                Label1.Text = "You have already voted this service"
                Return
            End If
            Dim con As New SqlConnection
            Dim cmd As New SqlCommand
            con.Open()
            cmd.Connection = con
            cmd.CommandText = "INSERT INTO Tabel1 (Name) VALUES('" & Trim(Label1.Text) & "')"
            cmd.ExecuteNonQuery()
            Label1.Text = "Thank You !"
            SQLData.Close()
        End Sub

    Read the article

  • Select count() / max() over a date range (MySQL/Oracle)

    - by DAVID
    Hi guys, I have a table with shift history along with employee ids. I'm using this code to retrieve a list of employees and their total shifts, specifying the range to count from:

        SELECT ope_id, count(ope_id)
        FROM operator_shift
        WHERE ope_shift_date >= to_date('01-MAR-10', 'dd-mon-yy')
          AND ope_shift_date <= to_date('31-MAR-10', 'dd-mon-yy')
        GROUP BY ope_id

    which gives:

        OPE_ID  COUNT(OPE_ID)
             1             14
             2              7
             3              6
             4              6
             5              2
             6              5
             7              2
             8              1
             9              2
            10              4

        10 rows selected.

    Now, how do I choose the employee with the highest number of shifts within the specified date range? Please, this is really important.
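
    One possible approach (a sketch in Oracle syntax, since the query above uses to_date): order the grouped counts descending and keep only the first row:

        SELECT ope_id, total_shifts
        FROM (
            SELECT ope_id, COUNT(*) AS total_shifts
            FROM operator_shift
            WHERE ope_shift_date >= to_date('01-MAR-10', 'dd-mon-yy')
              AND ope_shift_date <= to_date('31-MAR-10', 'dd-mon-yy')
            GROUP BY ope_id
            ORDER BY total_shifts DESC
        )
        WHERE ROWNUM = 1;

    Note that ties for the highest count are broken arbitrarily here.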

    Read the article

  • How do I get every nth row in a table, or how do I break a subset of a table into sets of rows of a given size?

    - by Jherico
    I have a table of heterogeneous pieces of data identified by a primary key (ID) and a type identifier (TYPE_ID). I would like to be able to perform a query that returns a set of ranges for a given type, broken into even page sizes. For instance, if there are 10,000 records of type '1' and I specify a page size of 1000, I want 10 pairs of numbers back representing values I can use in a BETWEEN clause in subsequent queries, so I can query the DB 1000 records at a time. My initial attempt was something like this:

        SELECT id, rownum
        FROM content_table
        WHERE type_id = ?
          AND mod(rownum, ?) = 0

    But this doesn't work.
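
    One likely fix (a sketch, assuming Oracle given the rownum reference): rownum is assigned before ordering, so number the rows with an analytic function in an inline view and filter in the outer query:

        SELECT id, rn
        FROM (
            SELECT id,
                   ROW_NUMBER() OVER (ORDER BY id) AS rn
            FROM content_table
            WHERE type_id = ?
        )
        WHERE MOD(rn, 1000) = 0;

    Each returned id marks a page boundary, so consecutive pairs can feed the BETWEEN clauses.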

    Read the article

  • Getting the last element of a Postgres array, declaratively

    - by Wojciech Kaczmarek
    How do I obtain the last element of an array in Postgres? I need to do it declaratively, as I want to use it as an ORDER BY criterion. I wouldn't want to create a special PL/pgSQL function for it; the fewer changes to the database the better in this case. In fact, what I want to do is sort by the last word of a specific column containing multiple words; changing the model is not an option here. In other words, I want to push Ruby's sort_by {|x| x.split[-1]} down to the database level. I can split a value into an array of words with Postgres' string_to_array or regexp_split_to_array functions, but then how do I get the last element?
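
    One declarative sketch (assuming space-separated words; the table and column names here are hypothetical): index the array by its own upper bound:

        SELECT *
        FROM some_table
        ORDER BY (string_to_array(words_col, ' '))[
            array_upper(string_to_array(words_col, ' '), 1)
        ];

    Postgres allows subscripting a parenthesized array expression directly, so no helper function is needed.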

    Read the article

  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company, and am after some database design advice from anyone with a bit of time to answer.

    I am inserting ~2 million rows each day into one table; these tables then get archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and this is increasing month on month. For every insert above, I combine the data from rows that belong together and write to another "correlated" table. This table is currently not being archived, as I need to make sure I never miss an update to it (hope that makes sense), although in general this information should remain fairly static after a couple of days of processing.

    All of the above is working perfectly. However, my company now wishes to perform some stats against this data, and these tables are getting too large to provide the results in what would be deemed a reasonable time, even with the appropriate indexes set. So I guess after all the above my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcache? I'm already using MySQL's query cache, but since I have limited control over how long the data is stored for, it's not working ideally.

    The main advantages I can see of using something like memcache:
    - No blocking on my correlated table after the query has been cached.
    - Greater flexibility in sharing the collected data between the backend collector and the frontend processor (i.e. custom reports could be written in the backend and their results stored in the cache under a key, which then gets shared with anyone who wants to see the data of this report).
    - Redundancy and scalability if we start sharing this data with a large number of customers.

    The main disadvantage I can see of using something like memcache:
    - Data is not persistent if the machine is rebooted or the cache is flushed.

    The main advantages of using MySQL:
    - Persistent data.
    - Fewer code changes (although adding something like memcache is trivial anyway).

    The main disadvantages of using MySQL:
    - Have to define table templates every time I want to store a new set of grouped data.
    - Have to write a program which loops through the correlated data and fills these new tables.
    - Will potentially still grow slower as the data continues to be filled.

    Apologies for quite a long question. It's helped me to write down these thoughts here anyway, and any advice/help/experience with dealing with this sort of problem would be greatly appreciated. Many thanks. Alan
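
    For the summary-table route, a minimal sketch of the pre-aggregation idea (the table and column names here are invented for illustration; the real grouping keys depend on the correlated data):

        CREATE TABLE correlated_daily_stats (
            stat_date DATE        NOT NULL,
            metric    VARCHAR(64) NOT NULL,
            row_count BIGINT      NOT NULL,
            PRIMARY KEY (stat_date, metric)
        );

        INSERT INTO correlated_daily_stats (stat_date, metric, row_count)
        SELECT DATE(created_at), metric, COUNT(*)
        FROM correlated
        GROUP BY DATE(created_at), metric;

    Reports then read the small stats table instead of scanning the 15M-row monthly tables, with a nightly job keeping it current.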

    Read the article

  • Which MySQL query is faster?

    - by Camran
    I have a classified_id variable which matches one document in a MySQL table. I am currently fetching the information about that one record like this:

        SELECT * FROM table WHERE table.classified_id = $classified_id

    I wonder if there is a faster approach, for example like this:

        SELECT 1 FROM table WHERE table.classified_id = $classified_id

    Won't the last one only select 1 record, which is exactly what I need, so that it doesn't have to scan the entire table but instead stops searching for records after 1 is found? Or am I dreaming this? Thanks
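
    For reference, a sketch of what actually bounds the search (the column list only changes what is returned per row, not how many rows are examined; an index plus LIMIT is what stops the scan early; backticks are used here because the question's placeholder name `table` is a reserved word):

        CREATE INDEX idx_classified_id ON `table` (classified_id);

        SELECT * FROM `table`
        WHERE classified_id = ?
        LIMIT 1;

    With the index in place, MySQL looks up the matching row directly instead of scanning the table at all.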

    Read the article

  • Writing an SQL query

    - by Praveen Prasad
    I have 2 tables.

    Table items (this table holds all items I have):

        itemId
        ------
        Item1
        Item2
        Item3
        Item4
        Item5

    Table 2, the users_item relation:

        UserId || ItemId
        1      || Item1
        1      || Item2

    User 1 has stored 2 items: Item1, Item2. Now I want to write a query on table 1 (the items table) that displays all items which user 1 has NOT chosen.
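
    A standard anti-join sketch (using the table and column names given in the question):

        SELECT i.itemId
        FROM items i
        LEFT JOIN users_item u
            ON u.ItemId = i.itemId
           AND u.UserId = 1
        WHERE u.ItemId IS NULL;  -- keep only items with no match for this user

    For user 1 this returns Item3, Item4, and Item5; a NOT IN (SELECT ItemId FROM users_item WHERE UserId = 1) subquery is an equivalent formulation.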

    Read the article

  • Update table with an IF statement in PL/SQL

    - by Matt
    I am trying to do something like this, but am having trouble putting it into Oracle code:

        BEGIN
            IF ((SELECT complete_date FROM task_table WHERE task_id = 1) IS NULL) THEN
                UPDATE task_table
                SET complete_date = //somedate
                WHERE task_id = 1;
            ELSE
                UPDATE task_table
                SET complete_date = NULL;
            END IF;
        END;

    But this does not work. I also tried

        IF EXISTS (SELECT complete_date FROM task_table WHERE task_id = 1)

    with no luck.
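
    One way that avoids PL/SQL entirely (a sketch; the date literal is a placeholder for "somedate"): fold the condition into a single UPDATE with CASE:

        UPDATE task_table
        SET complete_date = CASE
                                WHEN complete_date IS NULL THEN DATE '2010-01-01'  -- somedate
                                ELSE NULL
                            END
        WHERE task_id = 1;

    In PL/SQL proper, the value would first need a SELECT ... INTO a local variable; a bare SELECT cannot appear inside an IF condition, which is why the original block fails to compile.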

    Read the article

  • My output parameters are always null when I use BeginExecuteNonQuery

    - by CharlesO
    I have a stored procedure that returns a varchar(160) as an output parameter. Everything works fine when I use ExecuteNonQuery; I always get back the expected value. However, once I switch to BeginExecuteNonQuery, I get a null value for the output. I am using connString + "Asynchronous Processing=true;" in both cases. Sadly, BeginExecuteNonQuery is about 1.5 times faster in my case... but I really need the output parameter. Thanks!

    Read the article

  • Help me choose between XML and SQLite on Android

    - by Ngetha
    I have an Android app that periodically, say once a week, downloads content from a server as XML. The content is used by the app; different Activities use different parts of it. My question is a design one: should I save the data in SQLite, or just keep it as an XML file? Which one would be faster to read? The app can only use one content piece at a time, which means each subsequent XML download replaces the old one.

    Read the article

  • How do I integrate the aspnet_users table that comes with ASP.NET membership into my existing database?

    - by ooo
    I have a database that already has a users table:

        COLUMNS:
        userID    - int
        loginName - string
        First     - string
        Last      - string

    I just installed the ASP.NET membership tables. Right now all of my other tables are joined to my users table, foreign-keyed on the userID field. How do I integrate the aspnet_Users table into my schema? Here are the ideas I thought of:

    1. Add a membership_id field to my users table and, on new inserts, include that new field in my users table. This seems like the cleanest way, as I don't need to break any existing relationships.
    2. Break all existing relationships and move all of the fields in my users table into the aspnet_Users table. This seems like a pain, but ultimately will lead to the most simple, normalized solution.

    Any thoughts?
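
    Option 1 as T-SQL (a sketch; in the standard membership schema, aspnet_Users is keyed on a uniqueidentifier column called UserId):

        ALTER TABLE users ADD membership_id uniqueidentifier NULL;

        ALTER TABLE users
            ADD CONSTRAINT FK_users_aspnet_Users
            FOREIGN KEY (membership_id) REFERENCES aspnet_Users (UserId);

    Existing foreign keys on userID stay untouched; only new inserts need to populate membership_id.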

    Read the article

  • 2-column table with two foreign keys. Performance/design question.

    - by Emanuel
    Hello everyone! I recently ran into a quite complex problem, and after looking around a lot I couldn't find a solution to it. I've found answers to my questions on stackoverflow.com many times before, so I decided to post here.

    I'm making a user/group management system for a web-based project, and I'm storing all related data in a PostgreSQL database. This system relies on three tables: USERS, GROUPS, and GROUP_USERS. The first two tables simply define all the users and all the groups on the site, and the last table, GROUP_USERS, stores the groups every user is part of. It only has two columns: USER_ID and GROUP_ID.

    Since every user can be a member of several groups, I decided to make a separate table for this purpose rather than storing a comma-separated column in the USERS table. Now, both columns are foreign keys, and I want to make them both primary keys as well, since each combination of USER_ID and GROUP_ID has to be unique; if I just give them a UNIQUE constraint, pgAdmin tells me that each table should have at least one primary key.

    But now I am stuck with what seems to be a lot of indexes and relations on a very small table containing only numbers. In the end, I want this table to be as fast as possible, even if it contains tens of thousands of rows. Size on disk shouldn't be a problem since it's all numbers anyway, but it feels quite stupid to have a full-sized index referring to a smaller table. Should I stick with my current solution, store comma-separated values in a column in the USERS table, or is there some other solution I should be aware of?

    PS. I don't want to use an array column, even though they are supported by PostgreSQL. I want to be as generic as possible so I can switch databases later on, if necessary.

    EDIT: In other words, will using a compound primary key and two foreign keys in one table with only two columns have a negative impact on performance, rather than the opposite, due to the size of the generated index? Thank you!
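
    The design in question, as DDL (a sketch; the key column names of the parent tables are assumed):

        CREATE TABLE group_users (
            user_id  integer NOT NULL REFERENCES users (user_id),
            group_id integer NOT NULL REFERENCES groups (group_id),
            PRIMARY KEY (user_id, group_id)
        );

    This is the conventional junction-table shape: the compound primary key doubles as the unique constraint, so no separate index is needed for uniqueness.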

    Read the article

  • Count total number of callers?

    - by Kristopher Ives
    I'm currently doing this query to find the caller who makes the most calls:

        SELECT `commenter_name`, COUNT(*) AS `calls`
        FROM `comments`
        GROUP BY `commenter_name`
        ORDER BY `calls` DESC
        LIMIT 1

    What I want now is to be able to find out the total number of unique callers. I tried using DISTINCT, but I didn't get anywhere.
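
    A sketch of the unique-caller count, applying DISTINCT inside the aggregate rather than to the whole row:

        SELECT COUNT(DISTINCT `commenter_name`) AS `unique_callers`
        FROM `comments`;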

    Read the article

  • How do I select a random record efficiently in MySQL?

    - by user198729
    Consider:

        mysql> EXPLAIN SELECT * FROM urls ORDER BY RAND() LIMIT 1;
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | Extra                           |
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
        |  1 | SIMPLE      | urls  | ALL  | NULL          | NULL | NULL    | NULL | 62228 | Using temporary; Using filesort |
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+

    The above doesn't qualify as efficient. How should I do it properly?
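
    One common workaround (a sketch; it assumes an auto-increment id column, and rows that follow gaps in the id sequence are picked slightly more often):

        SELECT u.*
        FROM urls u
        JOIN (SELECT FLOOR(RAND() * (SELECT MAX(id) FROM urls)) AS rand_id) r
            ON u.id >= r.rand_id
        ORDER BY u.id
        LIMIT 1;

    Unlike ORDER BY RAND(), this can use the primary key index instead of a full scan followed by a filesort.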

    Read the article

  • Relation to multiple tables of different types for rating?

    - by Tronic
    I have a table structure like this: Products, Team, Images. I want to implement a rating/commenting feature where users can rate each entry of all tables. What's the best way to make a single rating table? E.g. a user votes on a product and a team entry, and it should be possible to get all these entries from a single table. What kind of table structure is best for this purpose? I hope my question is clear enough :/ Thanks in advance!
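
    One common shape is a polymorphic ratings table keyed by item type plus item id (a sketch in Postgres-flavored SQL; the names are invented):

        CREATE TABLE ratings (
            rating_id serial PRIMARY KEY,
            item_type varchar(16) NOT NULL,  -- 'product', 'team', or 'image'
            item_id   integer NOT NULL,
            user_id   integer NOT NULL,
            rating    smallint NOT NULL,
            UNIQUE (item_type, item_id, user_id)  -- one vote per user per item
        );

    The trade-off: (item_type, item_id) cannot be a real foreign key to three different tables, so referential integrity has to be enforced in the application or with triggers.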

    Read the article

  • Best .NET Solution for Frequently Changed Database

    - by sestocker
    I am currently architecting a small CRUD application. Their database is a huge mess and will be changing frequently over the course of the next 6 months to a year. What would you recommend for my data layer:

    1. ORM (if so, which one?)
    2. Linq2Sql
    3. Stored procedures
    4. Parameterized queries

    I really need a solution that will be dynamic enough (both fast and easy) that I can replace tables and add/delete columns frequently. Note: I do not have much experience with ORMs (only a little SubSonic) and generally tend to use stored procedures, so maybe that would be the way to go. I would love to learn Linq2Sql or NHibernate if either would allow for the situation I've described above.

    Read the article
