Search Results

Search found 14693 results on 588 pages for 'azure storage tables'.

Page 445/588 | < Previous Page | 441 442 443 444 445 446 447 448 449 450 451 452  | Next Page >

  • SQL query question

    - by bu0489
    Hey guys, I'm having a bit of difficulty with a query. I'm trying to figure out how to show the most popular naturopath visited in a centre. My tables look as follows:

      Patient(patientId, name, gender, DoB, address, state, postcode, homePhone, businessPhone, maritalStatus, occupation, duration, unit, race, registrationDate, GPNo, NaturopathNo)
      Naturopath(NaturopathNo, name, contactNo, officeStartTime, officeEndTime, emailAddress)

    To query this I've come up with:

      SELECT count(*), naturopathno FROM dbf10.patient WHERE naturopathno != 'NULL' GROUP BY naturopathno;

    which results in:

      COUNT(*)  NATUROPATH
      2         NP5
      1         NP6
      3         NP2
      1         NP1
      2         NP3
      1         NP7
      2         NP8

    My question is, how would I go about selecting the highest count from this list and printing that value together with the naturopath's name? Any suggestions are very welcome (one possible approach is sketched after this entry). Brad

    Read the article
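
    A hedged sketch of one way to answer the question above (not from the original post; it assumes the table and column names given in it, and the inner join takes the place of the != 'NULL' filter):

      SELECT n.naturopathno,
             n.name,
             COUNT(*) AS visits
      FROM   dbf10.patient    p
      JOIN   dbf10.naturopath n ON n.naturopathno = p.naturopathno   -- inner join drops patients with no naturopath
      GROUP  BY n.naturopathno, n.name
      ORDER  BY visits DESC
      LIMIT  1;            -- MySQL-style; on Oracle, wrap the query and filter on ROWNUM = 1 instead

    The join is what lets the name appear next to the count; ordering by the aggregate and keeping the first row picks the most-visited naturopath.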

  • Virtual Function Implementation

    - by Gokul
    Hi, I keep hearing this statement: switch..case is evil for code maintenance but gives better performance (since the compiler can inline things), whereas virtual functions are very good for code maintenance but incur a penalty of two pointer indirections. Say I have a base class with two subclasses (X and Y) and one virtual function, so there will be two virtual tables. The object holds a pointer through which the right virtual table is chosen, which to my mind looks a lot like:

      switch( object's function ptr ) { case 0x....: X->call(); break; case 0x....: Y->call(); };

    So why should a virtual function cost more if it can be implemented this way, where the compiler could do the same inlining and other optimisations here too? Or can someone explain why virtual dispatch was not implemented this way? Thanks, Gokul.

    Read the article

  • MySQL indexes: how do they work?

    - by bob-the-destroyer
    I'm a complete newbie with MySQL indexes. I have several MyISAM tables on MySQL 5.0.x with utf8 charsets and collations, each with 100k+ records. The primary keys are generally integers, and many columns on each table may have duplicate values. I need to quickly count, sum, average, or otherwise perform custom calculations on any number of fields in each table, or joined to any number of others. I found this page giving an overview of MySQL index usage: http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html, but I'm still not sure I'm using indexes right. Just when I think I've made the perfect index out of a collection of fields I want to calculate against, I get the "index must be under 1000 bytes" error. Can anyone explain how to most efficiently create and use indexes to speed up queries? (One workaround for the key-length error is sketched after this entry.) Caveat: upgrading MySQL is not possible in this case. I'm using Navicat Light for DB administration, but this app isn't required.

    Read the article
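
    A hedged note on the key-length error above: under utf8, MySQL reserves up to three bytes per character when sizing MyISAM index keys, so a handful of wide VARCHAR columns blows past the 1000-byte limit quickly. One common workaround is a prefix index on the long string columns (the table and column names below are made up for illustration):

      -- MySQL 5.0 / MyISAM syntax
      ALTER TABLE orders
        ADD INDEX idx_orders_calc (customer_id, status, order_date, notes(20));
        -- notes(20) indexes only the first 20 characters (60 bytes in utf8),
        -- which keeps the whole composite key under the 1000-byte limit.

    Indexes mainly help WHERE, JOIN and GROUP BY columns; a SUM or AVG over most of a table will still scan rows, so indexing every field you aggregate rarely pays off.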

  • DataReader Behaviour With SQL Server Locking

    - by Graham
    We are having some issues with our data layer when large datasets are returned from a SQL Server query via a DataReader. As we use the DataReader to populate business objects and serialize them back to the client, the fetch can take several minutes (we are showing progress to the user :-)), but we've found that there is some pretty hard-core locking going on on the affected tables, which is causing other updates to be blocked. So I guess my slightly naive question is: at what point are the locks taken out as a result of executing the query actually relinquished? We seem to be finding that the locks remain until the last row of the DataReader has been processed and the DataReader is actually closed - does that seem correct? A quick 101 on how the DataReader works behind the scenes would be great, as I've struggled to find any decent information on it. I should say that I realise the locking issues are the main concern, but here I'm just asking about the behaviour of the DataReader. (One mitigation for the blocking is sketched after this entry.)

    Read the article
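
    Not an answer about DataReader internals, but one mitigation often suggested for exactly this read-blocks-write pattern (assuming SQL Server 2005 or later, and that the blocking comes from shared read locks) is row versioning. This is a sketch only - the database name is a placeholder and the switch needs exclusive access to the database, so test it first:

      ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;
      -- With this on, statements running at the default READ COMMITTED level read
      -- row versions instead of taking shared locks, so a long-running DataReader
      -- fetch no longer blocks writers on the same rows.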

  • MySQL - Help with a more complicated query

    - by Joe
    I have two tables: tbl_lists and tbl_houses. Inside tbl_lists I have a field called HousesList - it contains the IDs of several houses in the following format:

      1# 2# 4# 51# 3#

    I need to be able to select fields from tbl_houses WHERE ID is any of the IDs in that list. More specifically, I need to SELECT SUM(tbl_houses.HouseValue) WHERE tbl_houses.ID IN tbl_lists.HousesList - and I want that sum returned for several rows of tbl_lists at once. Can anyone help? I'm trying to do this in a SINGLE query, since I don't want to do any MySQL "loops" from PHP. (One way to match against the delimited list is sketched after this entry.)

    Read the article
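
    A hedged sketch of one way to do it in a single MySQL query. It assumes tbl_lists has a primary key column called ID (not mentioned in the post) and that HousesList always uses the '1# 2# 4#' format shown:

      SELECT l.ID              AS list_id,
             SUM(h.HouseValue) AS total_value
      FROM   tbl_lists  AS l
      JOIN   tbl_houses AS h
        ON   FIND_IN_SET(h.ID,
                         REPLACE(REPLACE(l.HousesList, '#', ''), ' ', ',')) > 0
      GROUP  BY l.ID;
      -- The two REPLACEs turn '1# 2# 4# 51# 3#' into '1,2,4,51,3' so that
      -- FIND_IN_SET matches whole IDs only (4 does not match 51).

    This works but cannot use an index on HousesList; the longer-term fix is a child table with one (list_id, house_id) row per house.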

  • Username and password check LINQ query in C#

    - by b0x0rz
    This LINQ query:

      var users = from u in context.Users
                  where u.UserEMailAdresses.Any(e1 => e1.EMailAddress == userEMail)
                     && u.UserPasswords.Any(e2 => e2.PasswordSaltedHash == passwordSaltedHash)
                  select u;
      return users.Count();

    returns 1 even when there is nothing in the password table. How come? What I am trying to do is get the values of email and passwordHash from two separate tables (UserEMailAddresses and UserPasswords), linked via foreign keys to a third table (Users). It should be simple - checking whether the email and password from the form match the database - but it is not working for me. I get 1 for the count even when there are NO entries in the UserPasswords table. Is the LINQ query above completely wrong, or...?

    Read the article

  • SQL Server 2008 automated database drop, create and fill

    - by lox
    For the database in my project I have a drop/create script for the database, a script for creating tables and SPs, and an Access 2003 .mdb file with some exported values. To set up the database from scratch I can use SQL Server Management Studio to first run one script, then the other, and lastly run the somewhat tedious import task manually. But I would like to do this as automated as possible - ideally something like putting the three files in a folder along with a fourth script that executes them, looking something like:

      run script "dropcreate.sql"
      run script "createtables.sql"
      import "values.mdb"

    How is this done? I would like to avoid using SSIS and the like. The tricky part is of course the import of the data, where I can't seem to find a simple way. It is also important that the files are left as they are and not embedded into anything. (A possible approach is sketched after this entry.)

    Read the article
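
    A hedged sketch of one way to script the import step. It assumes a 32-bit SQL Server instance with the Jet OLE DB provider available and ad hoc distributed queries allowed; every path, table and column name below is a placeholder:

      -- import_values.sql
      EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
      EXEC sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE;

      INSERT INTO dbo.MyTable (Col1, Col2)
      SELECT Col1, Col2
      FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                      'C:\data\values.mdb'; 'Admin'; '',
                      MyAccessTable);   -- reads straight out of the .mdb

    The three .sql files can then be run in order from a single batch file with sqlcmd (e.g. sqlcmd -S .\SQLEXPRESS -E -i dropcreate.sql), so nothing gets embedded anywhere.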

  • How to optimise MySQL query containing a subquery?

    - by aidan
    I have two tables, House and Person. For any row in House there can be 0, 1 or many corresponding rows in Person, but of those people a maximum of one will have a status of "ACTIVE"; the others will all have a status of "CANCELLED". E.g.

      SELECT * FROM House LEFT JOIN Person ON House.ID = Person.HouseID

      House.ID | Person.ID | Person.Status
      1        | 1         | CANCELLED
      1        | 2         | CANCELLED
      1        | 3         | ACTIVE
      2        | 1         | ACTIVE
      3        | NULL      | NULL
      4        | 4         | CANCELLED

    I want to filter out the cancelled rows and get something like this:

      House.ID | Person.ID | Person.Status
      1        | 3         | ACTIVE
      2        | 1         | ACTIVE
      3        | NULL      | NULL
      4        | NULL      | NULL

    I've achieved this with the following subselect:

      SELECT *
      FROM House
      LEFT JOIN ( SELECT * FROM Person WHERE Person.Status != "CANCELLED" ) Person
             ON House.ID = Person.HouseID

    ...which works, but breaks all the indexes. Is there a better solution that doesn't? I'm using MySQL and all relevant columns are indexed; EXPLAIN lists nothing in possible_keys. Thanks. (An alternative join is sketched after this entry.)

    Read the article
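
    One commonly suggested rewrite (a sketch, assuming ACTIVE and CANCELLED are the only statuses, as the question states): put the status test into the join condition instead of a derived table, so MySQL can still use the index on Person.HouseID:

      SELECT House.*, Person.ID, Person.Status
      FROM   House
      LEFT JOIN Person
             ON  Person.HouseID = House.ID
             AND Person.Status  = 'ACTIVE'
      -- Houses with no active person still appear with NULLs, exactly as in the
      -- derived-table version, but there is no materialised subquery to scan.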

  • SQL Complicated Group / Join by Category

    - by Mike Silvis
    I currently have a database structure with two important tables: 1) food types (Fruit, Vegetables, Meat) and 2) specific foods (Apple, Oranges, Carrots, Lettuce, Steak, Pork). I am trying to build a SQL statement that gives me the following:

      Fruit < Apple, Orange
      Vegetables < Carrots, Lettuce
      Meat < Steak, Pork

    I have tried a statement like:

      SELECT * FROM Food_Type
      JOIN (SELECT * FROM Foods) AS Foods ON Food_Type.Type_ID = Foods.Type_ID

    but this returns every specific food, while I only want the first 2 per category. So I basically need my subquery to have a limit that applies per category. However, if I simply do:

      SELECT * FROM Food_Type
      JOIN (SELECT * FROM Foods LIMIT 2) AS Foods ON Food_Type.Type_ID = Foods.Type_ID

    my statement only returns 2 results in total. (A top-N-per-group sketch follows after this entry.)

    Read the article
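
    A hedged sketch of the classic top-N-per-group pattern (it assumes Foods has a unique Food_ID column to order and break ties on - adjust to whatever key the table really has):

      SELECT ft.*, f.*
      FROM   Food_Type AS ft
      JOIN   Foods     AS f  ON f.Type_ID = ft.Type_ID
      WHERE  ( SELECT COUNT(*)
               FROM   Foods AS f2
               WHERE  f2.Type_ID = f.Type_ID
                 AND  f2.Food_ID < f.Food_ID ) < 2;
      -- For each food, count how many earlier foods share its category and keep
      -- it only if fewer than 2 do, i.e. the first two foods per category.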

  • GroupBy on a relation table

    - by Dofs
    I am creating some tag functionality for a forum using LINQ to SQL, and I have two tables:

      Tag               TagId, TagName
      ForumTagRelation  TagId, ForumId

    I would like to retrieve, like SO, the most popular tags. I have tried to do this with:

      List<Tag> popularTags = db.Tags.Select(x => x.ForumTagRelations.GroupBy(y => y.TagId)
                                                   .OrderByDescending(z => z.Count()))
                                     .Take(count).ToList();

    But this just returns the following error:

      Error 1 Cannot implicitly convert type 'System.Collections.Generic.List<System.Linq.IOrderedEnumerable<System.Linq.IGrouping<System.Guid?,SampleWebsite.ForumTagRelation>>>' to 'System.Collections.Generic.IEnumerable<SampleWebsite.Tag>'. An explicit conversion exists (are you missing a cast?)

    The question is how I can easily return the list of tags that have the most rows in the ForumTagRelation table. (The SQL shape of the wanted result is sketched after this entry.)

    Read the article
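
    Not LINQ, but as a sanity check this is roughly the query shape the LINQ ultimately has to produce (a sketch; @count stands for the number of tags wanted):

      SELECT   t.TagId, t.TagName, COUNT(r.TagId) AS Uses
      FROM     Tag AS t
      JOIN     ForumTagRelation AS r ON r.TagId = t.TagId
      GROUP    BY t.TagId, t.TagName
      ORDER    BY Uses DESC
      -- take the top @count rows (TOP in SQL Server, LIMIT in MySQL)

    Whatever LINQ expression is used needs to group the relations per tag and order by the group count before taking 'count' items, rather than grouping inside the Select as above.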

  • Complex Join - involving date ranges and sum...

    - by calumbrodie
    I have two tables that I need to join. I want to join table_a and table_b on 'id' - however, in table_b, id is not unique. I only want one value returned per id from table_b, and that value should be the SUM of a column called 'total_sold' within a specified date range (say, one month):

      SELECT ta.id, tb.total_sold AS total_sold_this_week
      FROM table_a AS ta
      LEFT JOIN table_b AS tb
             ON ta.id = tb.id
            AND tb.date_sold BETWEEN ADDDATE(NOW(), INTERVAL -3 WEEK) AND NOW()

    This works but does not SUM the rows - it only returns one row for each id. How do I get the sum from table_b instead of only one row? Please criticise if the format of the question could use more work - I can rewrite and provide sample data if required; this is a trivialised version of a much larger problem. Thanks. (A sketch follows after this entry.)

    Read the article
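
    A hedged sketch of one way to get the sums (it keeps the poster's table and column names and the three-week window; COALESCE is only there so ids with no sales show 0 rather than NULL):

      SELECT ta.id,
             COALESCE(SUM(tb.total_sold), 0) AS total_sold_last_3_weeks
      FROM   table_a AS ta
      LEFT JOIN table_b AS tb
             ON  tb.id = ta.id
             AND tb.date_sold BETWEEN ADDDATE(NOW(), INTERVAL -3 WEEK) AND NOW()
      GROUP  BY ta.id;
      -- GROUP BY collapses the joined rows back to one per id, and SUM adds up
      -- total_sold across just the rows that fell inside the date window.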

  • Hibernate - using table per subclass - how to link an existing superclass object to a subclass object

    - by Chandni
    Hi, I have a User Hibernate class, a Clerk class and a Consumer class. All of these map to their own tables in the database, and the User primary key also acts as the Clerk's and Consumer's primary key. My problem is that if a user is initially a Clerk, he has a record in the Users table and the Clerks table. If that user then wants to become a Consumer, I want to link the existing User record to the new Consumer record. But even if I pass the userId to the Consumer record, Hibernate treats it as a new User to be persisted and gives a duplicate_key exception. How do I tell Hibernate to link the same User object with this new Consumer object? Thanks in advance, Chandni

    Read the article

  • SQL Server 2005 remote connection problem - cannot solve it, help please

    - by user287745
    Note: if this question does not fit this site, please do not just close it but redirect it to the appropriate sister site, thank you. The steps taken and the error are below - please help, I am stuck here!

      1. Installed SQL Server 2005 Express on both computers.
      2. Installed SQL Server Management Studio Express on both computers.
      3. Ran each Management Studio and connected to the local instance using Windows
         authentication (on one computer the connection is, for example, "A-63A9D4D7E7834\SQLEXPRESS").
      4. Created a database named "test1", created a few tables with data, saved and exited.
      5. Followed everything in "How to configure SQL Server 2005 to allow remote connections"
         (http://support.microsoft.com/kb/914277/en-us), and also disabled the firewalls completely.

    Then, to test a remote connection, I started SQL Server Management Studio Express on computer A-63A9D4D7E7834, entered server name "ALL-E425BE6C41D\SQLEXPRESS" with authentication "Windows Authentication", clicked Connect, and I get the following error:

      Cannot connect to ALL-E425BE6C41D\SQLEXPRESS.
      ADDITIONAL INFORMATION: Login failed for user 'ALL-E425BE6C41D\Guest'. (Microsoft SQL Server, Error: 18456)
      For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=18456&LinkId=20476

    (One possible workaround is sketched after this entry.)

    Read the article
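
    The 'ALL-E425BE6C41D\Guest' in the error suggests the remote machine cannot map the local Windows account and falls back to Guest, which is typical when the two computers are not in a domain. One common workaround (a sketch; the login name, password and role are placeholders, and it assumes the instance is first switched to mixed-mode authentication) is to create a SQL Server login on the remote instance and connect with SQL Server Authentication instead:

      CREATE LOGIN remote_user WITH PASSWORD = 'Str0ng!Passw0rd';
      USE test1;
      CREATE USER remote_user FOR LOGIN remote_user;
      EXEC sp_addrolemember 'db_datareader', 'remote_user';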

  • Useful ways for multilanguage navigation and static content on your site?

    - by vitto
    Hi, I have a big site running under Apache and PHP, and in a few months I will need to add versions of it in different languages, but I'm not sure about the right way (or ways) to do this. My problem isn't the user data, because I can use DB tables for the different languages (en, de, it, etc.), so I want to concentrate on navigation and static content. For now I can't use gettext because I don't have a dedicated server and I can't restart it whenever I want, though it will surely be a future choice. So my main problems are these: the site has classical XHTML elements like menus, lists, divs and various static texts on various pages (these would be perfect for gettext, but I need an alternative), and the other part of the site has XHTML elements which are created dynamically via AJAX and jQuery, and here I have no idea what I can do. Are there examples or useful techniques I could look at to solve this?

    Read the article

  • In Elixir or SQLAlchemy, is there a way to also store a comment for a/each field in my entities?

    - by kchau
    Our project is basically a web interface to several systems of record. We have many tables mapped, and the names of the columns aren't as well named and intuitive as we'd like. The users would like to know what data fields are available (i.e. what has been mapped from the database), but it's pointless to just give them column names like USER_REF1, USER_REF2, etc. So I was wondering: is there a way to provide a comment in the declaration of a field? E.g.

      class SegregationCode(Entity):
          using_options(tablename="SEGREGATION_CODES")
          segCode = Field(String(20), colname="CODE", ... primary_key=True)  # have a comment attr too?

    If not, any suggestions?

    Read the article

  • How to read a txt file from the database (line by line)

    - by Ranjana
    I have stored a txt file in a SQL Server database, and I need to read the txt file line by line to get at its content. My code:

      DataTable dtDeleteFolderFile = new DataTable();
      dtDeleteFolderFile = objutility.GetData("GetTxtFileonFileName",
          new object[] { ddlSelectFile.SelectedItem.Text }).Tables[0];
      foreach (DataRow dr in dtDeleteFolderFile.Rows)
      {
          name = dr["FileName"].ToString();
          records = Convert.ToInt32(dr["NoOfRecords"].ToString());
          bytes = (Byte[])dr["Data"];
      }

      FileStream readfile = new FileStream(Server.MapPath("txtfiles/" + name), FileMode.Open);
      StreamReader streamreader = new StreamReader(readfile);
      string line = "";
      line = streamreader.ReadLine();

    Here I have used a FileStream to read from a particular path, but I have saved the txt file in byte format in my database. How can I read the txt file content using the byte[] value instead of the path?

    Read the article

  • Can this MySQL subquery be optimised?

    - by Dan
    I have two tables, news and news_views. Every time an article is viewed, the news id, IP address and date are recorded in news_views. I'm using a query with a subquery to fetch the most viewed titles from news, by getting the total count of views in the last 24 hours for each one. It works fine, except that it takes between 5 and 10 seconds to run, presumably because there are hundreds of thousands of rows in news_views and it has to go through the entire table before it can finish. The query is as follows; is there any way at all it can be improved?

      SELECT n.title, nv.views
      FROM news n
      LEFT JOIN (
          SELECT news_id, COUNT(DISTINCT ip) AS views
          FROM news_views
          WHERE datetime >= SUBDATE(NOW(), INTERVAL 24 HOUR)
          GROUP BY news_id
      ) AS nv ON nv.news_id = n.id
      ORDER BY views DESC
      LIMIT 15

    (An indexing suggestion is sketched after this entry.)

    Read the article
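
    A hedged first step for the query above: make sure the 24-hour slice can be read from an index instead of a full scan of news_views. A covering index over the filter, group and counted columns is one option (the index name is made up; check EXPLAIN before and after):

      ALTER TABLE news_views
        ADD INDEX idx_recent_views (`datetime`, news_id, ip);
      -- The range condition on `datetime` prunes to the last 24 hours inside the
      -- index, and news_id/ip are included so the GROUP BY and COUNT(DISTINCT ip)
      -- never need to touch the table rows.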

  • Clone DB table row through MVC in SQL Server

    - by sslepian
    Is there a simple solution for duplicating a table row in SQL Server, along with all rows in other tables whose foreign keys point to the cloned row? I've got a "master" table and a bunch of "child" tables which have a foreign key into the ID of the master table. I need to not only create a perfect copy of the master row, but also clone every child-table row referencing it. Is there a simpler way to do this than creating a new row in the master table, copying in the information from the row to be cloned, then going through each child table and doing the same with each row pointing to the cloned row? I'm using a SQL Server 2005 database accessed through C# ASP.NET MVC 1.0. (A set-based sketch follows after this entry.)

    Read the article
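
    There is no built-in "deep copy row" command that I know of, but the per-table work can at least be set-based rather than row-by-row. A sketch in SQL Server 2005 T-SQL - every table and column name is a placeholder, the master key is assumed to be an IDENTITY column, and @SourceId stands for the row being cloned (e.g. a stored procedure parameter):

      DECLARE @NewId INT;

      INSERT INTO Master (Name, SomeColumn)
      SELECT Name, SomeColumn
      FROM   Master
      WHERE  Id = @SourceId;

      SET @NewId = SCOPE_IDENTITY();        -- identity of the cloned master row

      INSERT INTO ChildA (MasterId, ColA)
      SELECT @NewId, ColA
      FROM   ChildA
      WHERE  MasterId = @SourceId;
      -- ...one INSERT ... SELECT per child table, all inside one transaction.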

  • Alternative design for a synonyms table?

    - by Majid
    I am working on an app which suggests alternative words/phrases for input text, and I have doubts about what a good design for the synonyms table would be. Design considerations:

      - The number of synonyms is variable: "football" has one synonym (soccer), but "in particular" has two (particularly, specifically).
      - If football is a synonym of soccer, the relation exists in the opposite direction as well.
      - Our goal is to query a word and find its synonyms.
      - We want to keep the table small and make adding new words easy.

    What comes to mind is a two-column design with column A = word and column B = a delimited list of synonyms. Is there any better alternative? What about using two tables, one for words and the other for relations? (A sketch of the two-table version follows after this entry.)

    Read the article
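
    A hedged sketch of the two-table idea mentioned above (names are illustrative). Storing each pair in both directions keeps the lookup a simple join and naturally handles a variable number of synonyms:

      CREATE TABLE word (
        word_id INT PRIMARY KEY,
        word    VARCHAR(100) NOT NULL UNIQUE
      );

      CREATE TABLE synonym (
        word_id    INT NOT NULL,   -- the word being looked up
        synonym_id INT NOT NULL,   -- one of its synonyms
        PRIMARY KEY (word_id, synonym_id)
      );

      -- Synonyms of 'football':
      SELECT w2.word
      FROM   word w1
      JOIN   synonym s ON s.word_id  = w1.word_id
      JOIN   word w2   ON w2.word_id = s.synonym_id
      WHERE  w1.word = 'football';

    Compared with a delimited-list column, this stays small per row, indexes cleanly, and adding a word is just inserts - no string surgery on an existing list.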

  • Duplicate partitioning key performance impact

    - by Anshul
    I've read in some posts that having a duplicate partitioning key can have a performance impact. I have two tables like:

      CREATE TABLE "Test1" (
        key text,
        column1 text,
        value text,
        PRIMARY KEY (key, column1)
      )

      CREATE TABLE "Test2" (
        key text,
        name text,
        age text,
        ...
        PRIMARY KEY (key, name, age)
      )

    In Test1, column1 will contain a column name and value will contain its corresponding value. The main advantage of Test1 is that I can add any number of column/value pairs without altering the table, just by providing the same partitioning key each time. Now my question is: how will each of these table schemas impact read/write performance if I have millions of rows and up to 50 columns in each row? How will it impact compaction/repair time if I'm writing duplicate entries frequently?

    Read the article

  • FTS: Searching across multiple fields 'intelligently'

    - by Wild Thing
    Hi, I have an SP using FTS (Full-Text Search). I want searches across multiple fields, 'intelligently' ranking results based on the weights I assign. Consider a search on a view fetching data from the tables Book, Author and Genre. I want the searcher to be able to type things like "Ludlum Fiction", "Robert Ludlum Bourne", "Bourne Ludlum", etc. Unfortunately, the only way I have been able to do that so far is this: http://pastebin.com/fdce11ff. This is pretty bad, because I am manually breaking up the search string. I know I am doing this completely the wrong way, but I can't figure out the right way to search across multiple fields in FTS. Can somebody help please? (One alternative is sketched after this entry.)

    Read the article
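
    A hedged sketch of one alternative (SQL Server full-text syntax; the view, key and column names are assumed, and @searchText stands for the user's raw search string). FREETEXTTABLE can search several columns at once and returns a RANK, so the string never has to be split by hand:

      SELECT TOP 20 v.*, ft.[RANK]
      FROM   FREETEXTTABLE(BookSearchView, (Title, AuthorName, GenreName), @searchText) AS ft
      JOIN   BookSearchView AS v ON v.BookId = ft.[KEY]
      ORDER  BY ft.[RANK] DESC;

    Per-column weighting is not expressible directly in a single FREETEXTTABLE call; one approach is separate CONTAINSTABLE/FREETEXTTABLE calls per column whose ranks are then combined with your own weights.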

  • How can I Export a Table in Access using VBA into a specific sheet in an Excel spreadsheet?

    - by Bryan
    I have some tables - call them Table1, Table2, ... - and I need to export them into specific worksheets of a macro-enabled Excel file (.xlsm) that already exists: Table1 into Sheet2, Table2 into Sheet3, and so on. I had been doing this manually through the export menu in Access, but it is getting monotonous, so I would like to automate the process with VBA. The Excel file already has code in each worksheet, and that code needs to stay intact.

    Read the article

  • New to programming

    - by Shaun
    I have a form (Quote) with an AutoNumber ID. On the form at the moment are two subforms that show different items (subform 1 shows partition modules, subform 2 shows partition abutments); both subforms are built from the same parts tables and are linked to the Quote form using the ID. All works well until the form is refreshed or reloaded: subform 1 then shows the module names and quantities but blank spaces for the abutment names while still showing the abutment quantities, and the reverse happens in the abutments subform 2. When the lists of the various types and the detailed parts lists are printed, they are correct, so this seems to be only a display problem. All of this is based on Access 2003.

    Subform 1:

      SELECT Quote_Modules.ModuleID, Quote_Modules.QuoteID, Quote_Modules.ModuleDescription,
             Quote_Modules.ModuleQty, Quote.Style, Quote.Trim
      FROM Quote INNER JOIN Quote_Modules ON Quote.QuoteID=Quote_Modules.QuoteID
      ORDER BY Quote_Modules.ModuleID;

    Subform 2:

      SELECT Quote_Modules.ModuleID, Quote_Modules.QuoteID, Quote_Modules.ModuleDescription,
             Quote_Modules.ModuleQty, Quote.Style, Quote.Trim
      FROM Quote INNER JOIN Quote_Modules ON Quote.QuoteID=Quote_Modules.QuoteID
      ORDER BY Quote_Modules.ModuleID;

    Read the article

  • Core Data => Adding a related object always nil

    - by mongeta
    Hello, I have two related entities: DataEntered and Model. A DataEntered can have only ONE Model, but a Model can belong to many DataEntered records. The relationship goes from DataEntered to Model (no to-many relationship) and has no inverse. Xcode generates these accessors for DataEntered's model:

      @property (nonatomic, retain) NSSet * current_model;
      - (void)addCurrent_modelObject:(CarModel *)value;
      - (void)addCurrent_model:(NSSet *)value;

    I have a table, and when I select a model I want to store it in DataEntered:

      Model *model = [fetchedResultsController objectAtIndexPath:indexPath];
      NSLog(@"Model %@", model.name);   // ==> gives me the correct model name
      [dataEntered addCurrent_modelObject:model];   // ==> always nil
      [dataEntered setCurrent_model:[fetchedResultsController objectAtIndexPath:indexPath]];   // the same, always nil

    What am I doing wrong? Thanks, r.

    Read the article

  • Recommendations for an in memory database vs thread safe data structures

    - by yx
    TL;DR: What are the pros/cons of using an in-memory database vs locks and concurrent data structures? I am currently working on an application that has many (possibly remote) displays that collect live data from multiple data sources and render it on screen in real time. One of the other developers has suggested using an in-memory database instead of doing it the way our other systems behave, which is to use concurrent hashmaps, queues, arrays and other objects to store the graphical objects, handling them safely with locks where necessary. His argument is that the DB will lessen the need to worry about concurrency, since it handles read/write locks automatically, and that it offers an easier way to structure the data into as many tables as we need, instead of creating hashmaps of hashmaps of lists and keeping track of it all. I do not have much DB experience myself, so I am asking fellow SO users what experiences they have had, and what the pros and cons of bringing a DB into the system are.

    Read the article
