Search Results

Search found 35019 results on 1401 pages for 'sql documentation'.

  • Group by date range on weeks/months interval

    - by khelll
    I'm using MySQL and I have the following table:

        | clicks | int  |
        | day    | date |

    I want to be able to generate reports like this, where the periods are the last 4 weeks:

        | period      | clicks |
        | 1/7 - 7/5   | 1000   |
        | 25/6 - 31/7 | ....   |
        | 18/6 - 24/6 | ....   |
        | 12/6 - 18/6 | ....   |

    or the last 3 months:

        | period | clicks |
        | July   | ....   |
        | June   | ....   |
        | April  | ....   |

    Any ideas on how to write SELECT queries that can generate the equivalent date ranges and click counts?
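
    A sketch of one approach (MySQL): group on YEARWEEK for weekly buckets and on YEAR/MONTH for monthly ones, labelling each bucket from the rows it contains. The table name clicks_by_day and the 4-week/3-month windows are assumptions; the clicks and day columns come from the post.

        -- last 4 weeks, one row per ISO week
        SELECT CONCAT(DATE_FORMAT(MIN(`day`), '%e/%c'), ' - ',
                      DATE_FORMAT(MAX(`day`), '%e/%c')) AS period,
               SUM(clicks) AS clicks
        FROM clicks_by_day
        WHERE `day` >= CURDATE() - INTERVAL 4 WEEK
        GROUP BY YEARWEEK(`day`, 1)
        ORDER BY YEARWEEK(`day`, 1) DESC;

        -- last 3 months, one row per calendar month
        SELECT DATE_FORMAT(`day`, '%M') AS period,
               SUM(clicks) AS clicks
        FROM clicks_by_day
        WHERE `day` >= CURDATE() - INTERVAL 3 MONTH
        GROUP BY YEAR(`day`), MONTH(`day`)
        ORDER BY YEAR(`day`) DESC, MONTH(`day`) DESC;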

  • Linq to Sql: Update Entity through a new Object

    - by Dänu
    Hey guys, I'd like to update an entity via LINQ, but since I edit the entity in a view after serializing it, I don't have direct access to the entity inside the data context. I could do it like this:

        entity.property1 = obj.property1;
        entity.property2 = obj.property2;
        ...

    That's not cool... not cool at all. The next thing I tried was to do it via Attach(), like so:

        context.Table.Attach(entity, obj);

    but that doesn't work either. So is there another option short of reflection?
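
    A minimal sketch of the usual LINQ to SQL route, assuming the mapping has a timestamp/ROWVERSION column (or UpdateCheck.Never on the members), which attach-as-modified requires; MyDataContext and updatedEntity are placeholder names:

        using (var context = new MyDataContext())
        {
            // Attach the deserialized object as modified; LINQ to SQL
            // then generates an UPDATE for it on SubmitChanges().
            context.Table.Attach(updatedEntity, true);
            context.SubmitChanges();
        }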

  • Are there more secure alternatives to the .Net SQLConnection class?

    - by KeyboardMonkey
    Hi SO people, I'm very surprised this issue hasn't been discussed in depth: this article tells us how to use windbg to dump the strings in memory of a running .Net process. I spent much time researching the SecureString class, which uses unmanaged pinned memory blocks and keeps the data encrypted too. Great stuff. The problem comes in when you use its value and assign it to the SQLConnection.ConnectionString property, which is of the System.String type. What does this mean? Well...

      • It's stored in plain text
      • Garbage collection moves it around, leaving copies in memory
      • It can be read with windbg memory dumps

    That totally negates the SecureString functionality! On top of that, the SQLConnection class is sealed (non-inheritable), so I can't even roll my own with a SecureString property instead; yay for closed source. Yay. A new DAL layer is in progress, but that is for a new major version, and with so many users it will be at least 2 years before every user is upgraded; others might stay on the old version indefinitely, for whatever reason. Because of the frequency with which the connection is used, marshalling from a SecureString won't help, since the immutable old copies stick around in memory until GC comes along. Integrated Windows security isn't an option, since some clients don't work on domains, and others roam and connect over the net. How can I secure the connection string, in memory, so it can't be viewed with windbg?

  • Updating records in Postgres using FROM clause

    - by Summer
    Hi, I'm changing my db schema and moving column 'seat' from old_table to new_table. First I added a 'seat' column to new_table. Now I'm trying to populate the column with the values from old_table:

        UPDATE new_table
        SET seat = seat
        FROM old_table
        WHERE old_table.id = new_table.ot_id;

    This returns ERROR: column reference "seat" is ambiguous.

        UPDATE new_table nt
        SET nt.seat = ot.seat
        FROM old_table ot
        WHERE ot.id = nt.ot_id;

    Returns ERROR: column "nt" of relation "new_table" does not exist. Ideas?
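
    For reference, a sketch of the usual fix: PostgreSQL lets you alias the target table, but the column on the left of SET must be unqualified, and the right-hand side must name the source table:

        UPDATE new_table nt
        SET seat = ot.seat
        FROM old_table ot
        WHERE ot.id = nt.ot_id;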

  • How to use INSERT .. SELECT with conditional vars from the insert

    - by WmasterJ
    I have two separate tables, both with a user id column uid. I want to take a value from all users in one table and insert it into the correct row for the correct user in the other table.

        INSERT INTO users2 (picture)
        SELECT pv.value
        FROM profile_values AS pv, users2 AS u
        WHERE pv.uid = u.uid
          AND pv.fid = 31
          AND users2.uid = u.uid;

    But it's not working, because I don't seem to have access to users2.uid inside the SELECT statement. How would I accomplish this?
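
    Since the goal is to fill a column on existing rows, a multi-table UPDATE is likely a better fit than INSERT ... SELECT; a sketch, assuming MySQL syntax and the table/column names from the post:

        UPDATE users2 u
        JOIN profile_values pv ON pv.uid = u.uid
        SET u.picture = pv.value
        WHERE pv.fid = 31;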

  • Passing Results from SQL to Google Maps API in CodeIgniter

    - by Jason Shultz
    I'm hoping to use Google Maps on my site. My addresses are stored in a db. I'm pulling up a page where the information is all dynamic. For example: mysite.com/site/business/5 (where 5 is the id of the business). Let's say I do a query like this:

        function addressForMap($id)
        {
            $this->db->select('b.id, b.busaddress, b.buscity, b.buszip');
            $this->db->from('business as b');
            $this->db->where('b.id', $id);
        }

    How can I output the info to the Google Maps API correctly so that it displays the map appropriately? The API interface takes the results like this:

        $marker['address'] = 'Crescent Park, Palo Alto';
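
    A sketch of one way to bridge the two, assuming a CodeIgniter Google Maps library that accepts $marker['address'] as in the post; the column names come from the query above, and the add_marker() call is an assumption about the library:

        function addressForMap($id)
        {
            $this->db->select('b.id, b.busaddress, b.buscity, b.buszip');
            $this->db->from('business as b');
            $this->db->where('b.id', $id);
            $row = $this->db->get()->row();

            // Concatenate the parts into one string the geocoder can resolve.
            $marker = array();
            $marker['address'] = $row->busaddress . ', ' . $row->buscity . ' ' . $row->buszip;
            $this->googlemaps->add_marker($marker);   // library call is an assumption
        }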

  • Linq with a long where clause

    - by Jeremy Roberts
    Is there a better way to do this? I tried to loop over the partsToChange collection and build up the where clause, but it ANDs them together instead of ORing them. I also don't really want to explicitly do the equality on each item in the partsToChange list.

        var partsToChange = new Dictionary<string, string> {
            {"0039", "Vendor A"}, {"0051", "Vendor B"}, {"0061", "Vendor C"},
            {"0080", "Vendor D"}, {"0081", "Vendor D"}, {"0086", "Vendor D"},
            {"0089", "Vendor E"}, {"0091", "Vendor F"}, {"0163", "Vendor E"},
            {"0426", "Vendor B"}, {"1197", "Vendor B"}
        };

        var items = new List<MaterialVendor>();
        foreach (var x in partsToChange)
        {
            var newItems = (
                from m in MaterialVendor
                where m.Material.PartNumber == x.Key
                   && m.Manufacturer.Name.Contains(x.Value)
                select m
            ).ToList();
            items.AddRange(newItems);
        }

    Additional info: I am working in LINQPad.
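
    One way to get ORs without writing the equality by hand is LINQKit's PredicateBuilder (it ships with LINQPad); a sketch - note the loop variables are copied so each closure captures its own pair:

        var predicate = PredicateBuilder.False<MaterialVendor>();
        foreach (var x in partsToChange)
        {
            var key = x.Key;      // copy for the closure
            var value = x.Value;
            predicate = predicate.Or(m =>
                m.Material.PartNumber == key
                && m.Manufacturer.Name.Contains(value));
        }
        var items = MaterialVendor.Where(predicate).ToList();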

  • Using application roles with DataReader

    - by Shahar
    I have an application that should use an application role from the database. I'm trying to make this work with queries that are actually run using SubSonic (2). To do this, I created my own DataProvider, which inherits from SubSonic's SqlDataProvider. It overrides the CreateConnection function and calls sp_setapprole to set the application role after the connection is created. This part works fine, and I'm able to get data using the application role. The problem comes when I try to unset the application role. I couldn't find any place in the code where my provider is called after the query is done, so I tried to add my own hook by changing SubSonic's code. The problem is that SubSonic uses a data reader. It loads data from the data reader and then closes it. If I unset the application role before the data reader is closed, I get an error saying:

        There is already an open DataReader associated with this Command which must be closed first.

    If I unset the application role after the data reader is closed, I get an error saying:

        ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.

    I can't seem to find a way to close the data reader without closing the connection.
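
    A sketch of the cookie-based variant, assuming you can hook in both before the query and after the reader has been fully consumed and closed; sp_setapprole/sp_unsetapprole are the SQL Server procedures, and the role name and password are placeholders:

        var setCmd = new SqlCommand("sp_setapprole", conn) { CommandType = CommandType.StoredProcedure };
        setCmd.Parameters.AddWithValue("@rolename", "myAppRole");   // placeholder
        setCmd.Parameters.AddWithValue("@password", "myPassword");  // placeholder
        setCmd.Parameters.AddWithValue("@fCreateCookie", true);
        var cookie = setCmd.Parameters.Add("@cookie", SqlDbType.VarBinary, 8000);
        cookie.Direction = ParameterDirection.Output;
        setCmd.ExecuteNonQuery();

        // ... run the SubSonic query, consume and close its DataReader ...

        var unsetCmd = new SqlCommand("sp_unsetapprole", conn) { CommandType = CommandType.StoredProcedure };
        unsetCmd.Parameters.AddWithValue("@cookie", cookie.Value);
        unsetCmd.ExecuteNonQuery();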

  • Little Employee/Shift timetable HELP!!!

    - by DAVID
    Morning guys, I have the following tables:

        operator(ope_id, ope_name)
        ope_shift(ope_id, shift_id, shift_date)
        shift(shift_id, shift_start, shift_end)

    Here is a better view of the data: http://latinunit.net/emp_shift.txt and here is a screenshot of a select statement on the tables: http://img256.imageshack.us/img256/4013/opeshift.jpg I'm using this code:

        SELECT ope_id, COUNT(ope_id) AS total_shifts
        FROM operator_shift
        GROUP BY ope_id;

    to view the current total shifts per operator, and it works, BUT if there were 500 more rows it would count them all as well. THE QUESTION is: does anyone have a better way of making my database work, or how can I tell the system that those rows are a whole month? I remember a friend said something about counting and then dividing by 30, but I'm not sure. What if the month isn't finished and you want to show the employee with the most shifts to date?
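
    A sketch of constraining the count to a single month, so rows outside that month don't inflate the totals (table and column names are taken from the post; the date literals are placeholders for whichever month you care about):

        SELECT ope_id, COUNT(*) AS total_shifts
        FROM operator_shift
        WHERE shift_date >= DATE '2010-04-01'
          AND shift_date <  DATE '2010-05-01'
        GROUP BY ope_id
        ORDER BY total_shifts DESC;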

  • Linq ChangeConflictException (ASP.NET MVC) - occurs when submitting DataContext changes

    - by Alex
    System.Data.Linq.ChangeConflictException: 2 of X updates failed.

        at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode)
        at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
        at PROJECT.Controllers.HomeController.ClickProc(Int32 id, String code, String n)

    This is what I get very often. This action is performed thousands of times a day, and I get this exception about once every 5 seconds. From what I understand, it happens when something changes in the database in the period between creating the DataContext and submitting the changes. Am I right? What's the problem here, and how can I fix it?
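
    A sketch of the standard mitigation in LINQ to SQL - catch the exception, resolve the recorded conflicts, and resubmit (ChangeConflicts and RefreshMode are part of System.Data.Linq; which RefreshMode should win depends on the application):

        try
        {
            context.SubmitChanges(ConflictMode.ContinueOnConflict);
        }
        catch (ChangeConflictException)
        {
            foreach (ObjectChangeConflict conflict in context.ChangeConflicts)
            {
                // KeepChanges keeps your values and merges the others;
                // OverwriteCurrentValues / KeepCurrentValues are the alternatives.
                conflict.Resolve(RefreshMode.KeepChanges);
            }
            context.SubmitChanges();
        }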

  • Fact table with multiple facts

    - by Jeff Meatball Yang
    I have a dimension (SiteItem) that has two important facts:

        perUserClicks
        perBrowserClicks

    However, within this dimension, I have groups of dimensions based on an attribute column (let's call the groups AboveFoldItems, LeftNavItems, OnTheFlyItems, etc.), and each group has more facts that are specific to that group:

        AboveFoldItems: eyeTime, loadTime
        LeftNavItems:   mouseOverTime
        OnTheFlyItems:  doesn't have any extra, but may in the future

    Is the following fact table schema OK?

        DateKey
        SessionKey
        SiteItemKey
        perUserClicks
        perBrowserClicks
        eyeTime
        loadTime
        mouseOverTime

    It seems a little wasteful, since only some columns pertain to some dimension keys (the irrelevant facts are left NULL). But... this seems like it would be a common problem, so there should be a common solution for it, right?

  • Selecting all but one field?

    - by gsquare567
    Instead of SELECT * FROM mytable, I would like to select all fields EXCEPT one (namely, the 'serialized' field, which stores a serialized object). This is because I think that losing that field will speed up my query by a lot. However, I have so many fields, and I am quite the lazy guy. Is there a way to say...

        SELECT ALL_ROWS_EXCEPT(serialized) FROM mytable

    ? Thanks!
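
    There is no such SQL syntax, but a sketch of a common workaround (MySQL assumed): generate the column list from information_schema, then run the generated statement:

        SELECT CONCAT('SELECT ',
                      GROUP_CONCAT(column_name SEPARATOR ', '),
                      ' FROM mytable') AS stmt
        FROM information_schema.columns
        WHERE table_schema = DATABASE()
          AND table_name = 'mytable'
          AND column_name <> 'serialized';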

  • Complicated SQL query

    - by Yandawl
    Please bear with me, this is difficult to explain xD

    http://img90.imageshack.us/i/croppercapture1.png/

    This is based on an undergraduate degree course where a student takes many units (say 4 core units and 1 optional unit per year). tblAwardCoreUnits and tblAwardOptUnits store which units are optional and core for which award, hence the relationship to tblAward, and tblStudentCoreUnits and tblStudentOptUnits store the particular instances of those units which a particular student is taking. Secondly, a unit can have multiple events (say a lecture and a tutorial), and each of those events has sessions which a student can attend, hence tblEvents, tblSessions and tblAttendances. The query I am trying to produce is to get a list of all level-one students, grouped by their award, that lists the percentage of attendances in all the units in the current level. I've tried and tried with this, and the following is the best I've managed to come up with so far... I'd REALLY appreciate any help you can give with this!

        SELECT tblStudents.enrolmentNo, tblStudents.forename, tblStudents.surname, tblAwards.title,
               (SELECT COUNT((tblAttendances.attended + tblAttendances.authorisedAbsence)) AS SumOfAttendances
                FROM tblAttendances
                INNER JOIN (tblStudents ON tblStudents.enrolmentNo = tblAttendances.enrolmentNo)) /
        FROM tblUnits, tblAwards
        INNER JOIN ((tblStudents
            INNER JOIN tblStudentOptUnit ON tblStudents.studentID = tblStudentOptUnit.studentID)
            INNER JOIN tblStudentCoreUnit ON tblStudents.studentID = tblStudentCoreUnit.studentID)
            ON tblAwards.awardID = tblStudents.awardID
        WHERE (((tblStudents.level)="1") AND ((tblStudents.status)="enrolled"))
        GROUP BY tblAwards.title
        ORDER BY tblStudents.forname;
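
    A sketch of the shape such a query usually takes - compute the percentage with plain aggregates rather than a correlated subquery. Names are taken from the post where possible; the exact join path through tblSessions/tblEvents is abbreviated and would need the real schema:

        SELECT a.title, s.enrolmentNo, s.forename, s.surname,
               100.0 * SUM(att.attended + att.authorisedAbsence) / COUNT(*) AS pctAttendance
        FROM tblStudents s
        INNER JOIN tblAwards a ON a.awardID = s.awardID
        INNER JOIN tblAttendances att ON att.enrolmentNo = s.enrolmentNo
        WHERE s.level = '1' AND s.status = 'enrolled'
        GROUP BY a.title, s.enrolmentNo, s.forename, s.surname
        ORDER BY a.title, s.surname;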

  • Algorithm for finding similar users through a join table

    - by Gdeglin
    I have an application where users can select a variety of interests from around 300 possible interests. Each selected interest is stored in a join table containing the columns user_id and interest_id. Typical users select around 50 interests out of the 300. I would like to build a system where users can find the top 20 users that have the most interests in common with them. Right now I am able to accomplish this using the following query:

        SELECT i2.user_id, COUNT(i2.interest_id) AS count
        FROM interests_users AS i1, interests_users AS i2
        WHERE i1.interest_id = i2.interest_id AND i1.user_id = 35
        GROUP BY i2.user_id
        ORDER BY count DESC
        LIMIT 20;

    However, this query takes approximately 500 milliseconds to execute with 10,000 users and 500,000 rows in the join table. All indexes and database configuration settings have been tuned to the best of my ability. I have also tried avoiding the use of joins altogether using the following query:

        SELECT user_id, COUNT(interest_id) AS count
        FROM interests_users
        WHERE interest_id IN (13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,508)
        GROUP BY user_id
        ORDER BY count DESC
        LIMIT 20;

    But this one is even slower (~800 milliseconds). How could I best lower the time it takes to gather this kind of data to below 100 milliseconds? I have considered putting this data into a graph database like Neo4j, but I am not sure if that is the easiest solution or if it would even be faster than what I am currently doing.

  • How to return a record from a function executed by an INSERT/UPDATE rule?

    - by seas
    I have the following schema for my database:

        CREATE SEQUENCE data_sequence;

        CREATE TABLE data_table (
            id integer PRIMARY KEY,
            field varchar(100)
        );

        CREATE VIEW data_view AS
            SELECT id, field FROM data_table;

        CREATE FUNCTION data_insert(_new data_view) RETURNS data_view AS $$
        DECLARE
            _id integer;
            _result data_view%rowtype;
        BEGIN
            _id := nextval('data_sequence');
            INSERT INTO data_table(id, field) VALUES(_id, _new.field);
            SELECT * INTO _result FROM data_view WHERE id = _id;
            RETURN _result;
        END;
        $$ LANGUAGE plpgsql;

        CREATE RULE insert AS ON INSERT TO data_view DO INSTEAD SELECT data_insert(new);

    Then I type in psql:

        insert into data_view(field) values('abc');

    I would like to see something like:

         id | field
        ----+-------
          1 | abc

    Instead I see:

         data_insert
        -------------
         (1, "abc")

    Is it possible to fix this somehow? Thanks for any ideas. The ultimate idea is to use this in other functions, so that I could obtain the id of a just-inserted record without selecting for it from scratch. Something like:

        insert into data_view(field) values('abc') returning id into my_variable;

    would be nice, but doesn't work, with the error:

        ERROR: cannot perform INSERT RETURNING on relation "data_view"
        HINT: You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause.

    I don't really understand that HINT. I use PostgreSQL 8.4.
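
    A sketch of what that HINT is asking for: replace the function-based rule with a single unconditional DO INSTEAD INSERT that carries its own RETURNING clause, so INSERT ... RETURNING works against the view (the rule name is a placeholder):

        CREATE RULE data_view_insert AS ON INSERT TO data_view
        DO INSTEAD
            INSERT INTO data_table (id, field)
            VALUES (nextval('data_sequence'), NEW.field)
            RETURNING id, field;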

  • Array not showing in JList but filled in console

    - by OVERTONE
    Hey there. I've been a busy debugger today; I'll give the short version. I've made an array list that takes names from a database, and then I put the contents of the array list into an array of strings. Now I want to display the array's contents in a JList. The weird thing is that it was working earlier. I've got two methods: one is just a little practice to make sure I was adding to the JList correctly. So here is the key code. This is the layout of my code: variables, constructor, methods. In my variables I have these 3 defined:

        String[] contactListNames = new String[5];
        ArrayList<String> rowNames = new ArrayList<String>();
        JList contactList = new JList(contactListNames);

    Simple enough. In my constructor I have them again:

        contactListNames = new String[5];
        contactList = new JList(contactListNames);
        // I don't have the array list defined though.
        printSqlDetails(); // this was to make sure that the connection was alright, and it works fine
        fillContactList(); // this is the one that's causing me grief; it's where all the work happens
        // fillContactListTest(); // this was the tester that makes sure it's adding to the JList correctly

    Here is the code for fillContactListTest():

        public void fillContactListTest() {
            for (int i = 0; i < 3; i++) {
                try {
                    String contact;
                    System.out.println(" please fill the list at index " + i);
                    Scanner in = new Scanner(System.in);
                    contact = in.next();
                    contactListNames[i] = contact;
                    in.nextLine();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

    Here is the main one that is supposed to work:

        public void fillContactList() {
            int i = 0;
            createConnection();
            ArrayList<String> rowNames = new ArrayList<String>();
            try {
                Statement stmt = conn.createStatement();
                ResultSet namesList = stmt.executeQuery("SELECT name FROM Users");
                try {
                    while (namesList.next()) {
                        rowNames.add(namesList.getString(1));
                        contactListNames = (String[]) rowNames.toArray(new String[rowNames.size()]);
                        // this used to print out the contents of the array list
                        // System.out.println("" + rowNames);
                        while (i < contactListNames.length) {
                            System.out.println(" " + contactListNames[i]);
                            i++;
                        }
                    }
                } catch (SQLException q) {
                    q.printStackTrace();
                }
                conn.commit();
                stmt.close();
                conn.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }

    I really need help here; I'm at my wits' end. I just can't see why the first method would add to the JList no problem but the second one won't. Both the contactListNames array and the array list print fine and have the names in them, but I must be transferring them to the JList wrong. Please help. (P.S. I'm aware this is long, but trust me, it's the short version.)
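
    For what it's worth, a sketch of the likely culprit: the JList was constructed around the original 5-slot array, and reassigning the contactListNames field points it at a brand-new array without telling the JList. Handing the new data to the component explicitly (standard Swing API) should fix it:

        contactListNames = rowNames.toArray(new String[rowNames.size()]);
        contactList.setListData(contactListNames);
        // Or back the JList with a DefaultListModel and call model.addElement(...)
        // inside the loop, which is the idiomatic way to update a JList dynamically.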

  • When should I consider representing the primary-key ...?

    - by JMSA
    When should I consider representing the primary key as a class? Should we only represent primary keys as classes when a table uses a composite key? For example:

        public class PrimaryKey { ... }

    Then:

        private PrimaryKey _parentID;
        public PrimaryKey ParentID
        {
            get { return _parentID; }
            set { _parentID = value; }
        }

    And:

        public void Delete(PrimaryKey id) { ... }

    And when should I consider storing data as comma-separated values in a column in a DB table, rather than storing them in different columns?

  • Conditionally set a column to its default value in Postgres

    - by Evgeny
    I've got a PostgreSQL 8.4 table with an auto-incrementing, but nullable, integer column. I want to update some column values and, if this column is NULL, set it to its default value (which would be an integer auto-generated from a sequence), but I want to return its value in either case. So I want something like this:

        UPDATE mytable
        SET incident_id = COALESCE(incident_id, DEFAULT),
            other = 'somethingelse'
        WHERE ...
        RETURNING incident_id;

    Unfortunately, this doesn't work - it seems that DEFAULT is special and cannot be part of an expression. What's the best way to do this?
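
    A sketch of the usual workaround: call the sequence directly instead of DEFAULT, since nextval() is an ordinary expression (the sequence name below follows the standard serial naming and is an assumption):

        UPDATE mytable
        SET incident_id = COALESCE(incident_id, nextval('mytable_incident_id_seq')),
            other = 'somethingelse'
        WHERE ...
        RETURNING incident_id;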

  • Same LINQ for two tables

    - by Diana
    I need to do something like this. My two tables have the same signature but different classes, so it seems like it should work, but it doesn't:

        var myTable;
        if (booleanVariable == true) {
            myTable = table1;
        } else {
            myTable = table2;
        }

        var myLinq1 = from p in myTable
                      join r in myOtherTable
                      select p;

    In this case, I have to initialize myTable. I have also tried:

        var myTable = table2;
        if (booleanVariable == true) {
            myTable = table1;
        }

        var myLinq1 = from p in myTable
                      join r in myOtherTable
                      select p;

    but then var is of table2's type, so it can't be changed to table1's type. I need help; I don't want to copy and paste all the code. The LINQ query is huge, and it's nested with 5 or 6 queries. I'd also have to do this in 12 different methods. Thanks a lot for your help.
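
    A sketch of one way around it, assuming the two row types can be projected to one shared shape: project both tables to a common class up front, then write the big query once against that (RowDto and the column names are placeholders):

        class RowDto { public int Id { get; set; } public string Name { get; set; } }

        IQueryable<RowDto> myTable = booleanVariable
            ? table1.Select(t => new RowDto { Id = t.Id, Name = t.Name })
            : table2.Select(t => new RowDto { Id = t.Id, Name = t.Name });

        var myLinq1 = from p in myTable
                      join r in myOtherTable on p.Id equals r.Id
                      select p;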

  • Using Min/Max with conditional operator

    - by user638501
    Hello all, I am trying to run a query to find max and min values and then use a conditional operator. However, when I try to run the following query, it gives me the error "misuse of aggregate: min()". My query is:

        SELECT a.prim_id,
               MIN(b.new_len * 36) AS min_new_len,
               MAX(b.new_len * 36) AS max_new_len
        FROM tb_first a, tb_second b
        WHERE a.sec_id = b.sec_id
          AND min_new_len > 1900
          AND max_new_len < 75000
        GROUP BY a.prim_id
        ORDER BY AVG(b.new_len * 36);

    Any suggestions?
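
    A sketch of the fix: conditions on aggregates belong in a HAVING clause, not in WHERE (repeating the expressions instead of the aliases keeps it portable, since not every engine lets you reference output aliases there):

        SELECT a.prim_id,
               MIN(b.new_len * 36) AS min_new_len,
               MAX(b.new_len * 36) AS max_new_len
        FROM tb_first a
        JOIN tb_second b ON a.sec_id = b.sec_id
        GROUP BY a.prim_id
        HAVING MIN(b.new_len * 36) > 1900
           AND MAX(b.new_len * 36) < 75000
        ORDER BY AVG(b.new_len * 36);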

  • Survey Data Model - How to avoid EAV and excessive denormalization?

    - by AlexDPC
    Hi everyone, my database skills are mediocre at best, and I have to design a data model for survey data. I have spent some thought on this, and right now I feel that I am stuck between some kind of EAV model and a design involving hundreds of tables, each with hundreds of columns (and thousands of records). There must be a better way to do this, and I hope that the wise folks on this forum can help me. I have already searched various forums, but I couldn't really find a solution. If it has already been given elsewhere, please excuse me and provide me with a link so I can read up on it. Some assumptions about the data I have to deal with:

      • Each survey consists of 1 to n questionnaires
      • Each questionnaire consists of 100-2,000 questions (please ignore that 2,000 questions really sound like a lot to answer...)
      • Questions can be of various types: multiple-choice, free text, a number (like age, income, percentages, ...)
      • Each survey involves 10-200 countries (these are not the respondents; the respondents are actually people in the countries)
      • Depending on the type of questionnaire, each questionnaire is answered by 100-20,000 respondents per country
      • A country can adapt the questionnaires for a survey, i.e. add, remove or edit questions
      • The data for one country is gathered in a separate database in that country; there is no possibility of online integration from the start
      • The data for all countries has to be integrated later. This means, for example, that if a country has deleted a question, that data must somehow be derived from what they sent in order to achieve a uniform design across all countries
      • I will have to write the integration and cleaning software, which will need to work with every country's data
      • In the end the data needs to be exported to flat files, one rectangular grid per country and questionnaire

    I have already discussed this topic with people from various backgrounds and have not come to a good solution yet. I mainly got two kinds of opinions:

      • The domain experts, who are used to working with flat files (spreadsheet-style) for data processing and analysis, vote for a denormalized structure with loads of tables and columns, as I described above (1 table per country and questionnaire). This sounds terrible to me, because I learned that wide tables are to be avoided, it will be annoying to determine which columns are actually in a table when working with it, the database will become cluttered with hundreds of tables (or I would even need to set up multiple databases, each with a similar yet slightly different design), etc.
      • The O-O programmers vote for a strongly "normalized" design, which would effectively lead to a central table containing all the answers from all respondents to all questions. That table would either need a column of type sql_variant or multiple answer columns of different types to store answers of different types (multiple choice, free text, ...). The former would essentially be an EAV model. I tend to follow Joe Celko here, who strongly discourages its use (he calls it OTLT or "One True Lookup Table"). The latter would imply that each row contains null cells for the non-applicable types by design.

    Another alternative I could think of would be to create one table per answer type, i.e. one for multiple-choice questions, one for free-text questions, etc. That's not very generic, it would lead to a lot of union joins, I think, and I would have to add a table if a new answer type is invented. Sorry for boring you with all this text, and thank you for your input!

    Cheers, Alex

    PS: I asked the same question here: http://www.eggheadcafe.com/community/aspnet/13/10242616/survey-data-model--how-to-avoid-eav-and-excessive-denormalization.aspx

  • How do I Put Several Select Statements into Different Columns

    - by Russ Bradberry
    I basically have 7 select statements whose results I need to have output into separate columns. Normally I would use a crosstab for this, but I need a fast, efficient way to go about it, as there are over 7 billion rows in the table. I am using the Vertica database system. Below is an example of my statements:

        SELECT COUNT(user_id) AS '1' FROM event_log_facts WHERE date_dim_id=20100101;
        SELECT COUNT(user_id) AS '2' FROM event_log_facts WHERE date_dim_id=20100102;
        SELECT COUNT(user_id) AS '3' FROM event_log_facts WHERE date_dim_id=20100103;
        SELECT COUNT(user_id) AS '4' FROM event_log_facts WHERE date_dim_id=20100104;
        SELECT COUNT(user_id) AS '5' FROM event_log_facts WHERE date_dim_id=20100105;
        SELECT COUNT(user_id) AS '6' FROM event_log_facts WHERE date_dim_id=20100106;
        SELECT COUNT(user_id) AS '7' FROM event_log_facts WHERE date_dim_id=20100107;
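
    A sketch of a single-pass alternative: conditional aggregation folds all seven counts into one scan of the table, which matters at 7 billion rows (standard SQL; COUNT ignores the NULLs that CASE produces for non-matching rows):

        SELECT COUNT(CASE WHEN date_dim_id = 20100101 THEN user_id END) AS "1",
               COUNT(CASE WHEN date_dim_id = 20100102 THEN user_id END) AS "2",
               COUNT(CASE WHEN date_dim_id = 20100103 THEN user_id END) AS "3",
               COUNT(CASE WHEN date_dim_id = 20100104 THEN user_id END) AS "4",
               COUNT(CASE WHEN date_dim_id = 20100105 THEN user_id END) AS "5",
               COUNT(CASE WHEN date_dim_id = 20100106 THEN user_id END) AS "6",
               COUNT(CASE WHEN date_dim_id = 20100107 THEN user_id END) AS "7"
        FROM event_log_facts
        WHERE date_dim_id BETWEEN 20100101 AND 20100107;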
