Search Results

Search found 8523 results on 341 pages for 'bobby tables'.


  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    I am using MySQL 5.5.8 on an Ubuntu system, and every X amount of time it hits a huge lag that lasts a couple of seconds. Then everything goes back to normal until the next lag. The length of the period varies, but it looks like it happens periodically, like hiccups in MySQL. I am using InnoDB. What could be creating this sort of periodic problem? I do not have any cron jobs or processes that run whenever the X period happens. The X period could be anywhere between 30 minutes and 2 hours; for example, it could happen every 30 minutes for the next 12 hours, or every 2 hours for the next 8 hours. My configuration:

      key_buffer_size = 256M
      max_allowed_packet = 1M
      table_cache = 1024
      table_open_cache = 1024
      sort_buffer_size = 2M
      read_buffer_size = 2M
      read_rnd_buffer_size = 4M
      myisam_sort_buffer_size = 32M
      thread_cache_size = 128
      query_cache_size = 128M
      log-slow-queries = slow.log
      long_query_time = 5
      log-queries-not-using-indexes
      # Try number of CPU's*2 for thread_concurrency
      thread_concurrency = 4
      max_connections = 512
      #innodb_data_file_path = ibdata1:10M:autoextend
      #innodb_log_group_home_dir = /usr/local/mysql/data
      # You can set .._buffer_pool_size up to 50 - 80 %
      # of RAM but beware of setting memory usage too high
      innodb_buffer_pool_size = 1G
      #innodb_additional_mem_pool_size = 20M
      # Set .._log_file_size to 25 % of buffer pool size
      #innodb_log_file_size = 64M
      #innodb_log_buffer_size = 8M
      #innodb_flush_log_at_trx_commit = 0
      #innodb_lock_wait_timeout = 50

      [mysqldump]
      quick
      max_allowed_packet = 16M

      [myisamchk]
      key_buffer_size = 64M
      sort_buffer_size = 64M
      read_buffer = 2M
      write_buffer = 2M

    There are about 200+ tables divided across 3 databases. The most heavily written one uses InnoDB; the others are mostly read. Several of the tables in the InnoDB database have more than 2 million records. The other databases top out at about 400 thousand records and do not change very often. The PC is a Core 2 Duo 8400 with 4GB RAM, 32-bit Ubuntu.
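    For reference, a minimal way to see what the server is doing at the moment a hiccup hits is to run MySQL's own diagnostics during the lag (both are standard statements; they show current activity and InnoDB internals, they do not identify the cause by themselves):

      -- list every session and what it is executing right now
      SHOW FULL PROCESSLIST;
      -- dump InnoDB internals: pending I/O, row lock waits, buffer pool and log activity
      SHOW ENGINE INNODB STATUS;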

    Read the article

  • How to design data storage for partitioned tagging system?

    - by Morgan Cheng
    How should one design data storage for a huge tagging system (like Digg or Delicious)? There is already discussion about this, but it is about a centralized database. Since the data is supposed to grow, we'll need to partition it into multiple shards sooner or later. So, the question becomes: how do you design data storage for a partitioned tagging system? The tagging system basically has 3 tables:

      Item (item_id, item_content)
      Tag (tag_id, tag_title)
      TagMapping (map_id, tag_id, item_id)

    That works fine for finding all items for a given tag and finding all tags for a given item, as long as the tables are stored in one database instance. If we need to partition the data into multiple database instances, it is not that easy. For table Item, we can partition its content by its key item_id. For table Tag, we can partition its content by its key tag_id. For example, if we want to partition table Tag into K databases, we can simply store a given tag in database number (tag_id % K). But how do we partition table TagMapping? The TagMapping table represents a many-to-many relationship. The only solution I can imagine is duplication: the same TagMapping content has two copies, one partitioned by tag_id and the other partitioned by item_id. To find the tags for a given item, we use the copy partitioned by item_id; to find the items for a given tag, we use the copy partitioned by tag_id. As a result, there is data redundancy, and the application level has to keep all tables consistent. That looks hard. Is there any better solution to this many-to-many partition problem?
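    For reference, a minimal sketch of that duplicated-mapping idea (the table and column layout here is illustrative, not taken from the question); each copy is sharded on its leading key, so every lookup touches a single shard:

      -- copy 1, sharded by item_id: answers "all tags for an item"
      CREATE TABLE TagMappingByItem (item_id INT NOT NULL, tag_id INT NOT NULL, PRIMARY KEY (item_id, tag_id));
      -- copy 2, sharded by tag_id: answers "all items for a tag"
      CREATE TABLE TagMappingByTag (tag_id INT NOT NULL, item_id INT NOT NULL, PRIMARY KEY (tag_id, item_id));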

    Read the article

  • Can one connection get details of another? Or, how can I get the most detailed pending transaction

    - by bob-the-destroyer
    Is there a MySQL statement which provides full details of any other open connection or user? In this particular case, on MyISAM tables specifically. Looking at MySQL's SHOW TABLE STATUS documentation, it's missing some very important information for my purpose. For example: remote ODBC connection one is inserting several thousand records, which, due to a slow connection speed, can take up to an hour. TCP connection two, using PHP on the server's localhost, is running SELECT queries with aggregate functions on that data. Before allowing connection two to run those queries, I'd like it to first check that there are no pending inserts on any other connection against those specific tables, so it can wait until all the data is available. If the table is currently being written to, I'd like to report back to the user of connection two an approximation of how much longer to wait, based on the number of pending inserts. Ideally, per table, I'd like to get back via a query the timestamp when connection one began the write, the total inserts left to be done, and the total inserts already completed. Instead of insert counts, even knowing the number of bytes written and left to write would work just fine here. Obviously, since connection two is a TCP connection via a PHP script, all I can really use in that script is some sort of query. I suppose if I have to, since it is on localhost, I can exec() a mysql command-line option that outputs this info if that's the only way, but I'd rather not. I suppose I could simply update a custom-made transaction log before and after this massive insert task, which the PHP script can check, but hopefully there's already a built-in MySQL feature I can take advantage of.
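    For reference, a sketch of what is available out of the box: the process list shows what other sessions are currently executing, though not progress, start time of a batch, or bytes remaining (the table name in the filter is a placeholder):

      -- list every connection and its currently running statement (needs the PROCESS privilege)
      SHOW FULL PROCESSLIST;
      -- or, from PHP, as an ordinary query (MySQL 5.1+):
      SELECT ID, USER, TIME, STATE, INFO
      FROM information_schema.PROCESSLIST
      WHERE INFO LIKE 'INSERT INTO my_table%';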

    Read the article

  • In an Android application, should I have one content provider per table or only one for the entire a

    - by Andrew Dyer
    I have years of experience with Microsoft .NET development (primarily C#) and have been working to come up to speed on Android and Java. So far, I've built a small application with a couple screens and a working content provider. All of the examples I've seen for developing content providers typically work with a single table, so I got the impression that this was the convention. I built a couple more content providers for other tables and ran into the "Unknown URI" IllegalArgumentException when I tried to test them. The exception is being thrown by one of my content providers, but not the one I was intending to call. It appears that my application is using the first content provider in the AndroidManifest.xml file, which now has me wondering if I should only have a single content provider for the entire application. Are there any best practices and/or examples for working with multiple tables in an Android application? Should I have one content provider per table or only one for the entire application? If the former, how do I resolve URIs to the proper provider? If the latter, how do I keep my content provider code from being polluted with switch statements?

    Read the article

  • Error No 150 mySQL

    - by buddhamagnet
    Hi all. This is making me sweat - I am getting error 150 when I try to create a table in MySQL. I've scoured the forums to no avail. The statement uses foreign key constraints - both tables are InnoDB, all relevant columns have the same data type, and both tables have the same charset and collation. Here's the CREATE TABLE, along with the original CREATE TABLE statement for the table that's being referenced. Any ideas?

    New table:

      CREATE TABLE `approval` (
        `rev_id` int(10) UNSIGNED NOT NULL,
        `rev_page` int(10) UNSIGNED NOT NULL,
        `user_id` int(10) UNSIGNED NOT NULL,
        `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (`rev_id`,`rev_page`,`user_id`),
        KEY `FK_approval_user` (`user_id`),
        CONSTRAINT `FK_approval_revision` FOREIGN KEY (`rev_id`, `rev_page`) REFERENCES `revision` (`rev_id`, `rev_page`),
        CONSTRAINT `FK_approval_user` FOREIGN KEY (`user_id`) REFERENCES `user` (`user_id`)
      ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    Referenced table:

      CREATE TABLE `revision` (
        `rev_id` int(10) unsigned NOT NULL auto_increment,
        `rev_page` int(10) unsigned NOT NULL,
        `rev_text_id` int(10) unsigned NOT NULL,
        `rev_comment` tinyblob NOT NULL,
        `rev_user` int(10) unsigned NOT NULL default '0',
        `rev_user_text` varbinary(255) NOT NULL default '',
        `rev_timestamp` binary(14) NOT NULL default '\0\0\0\0\0\0\0\0\0\0\0\0\0\0',
        `rev_minor_edit` tinyint(3) unsigned NOT NULL default '0',
        `rev_deleted` tinyint(3) unsigned NOT NULL default '0',
        `rev_len` int(10) unsigned default NULL,
        `rev_parent_id` int(10) unsigned default NULL,
        PRIMARY KEY (`rev_id`),
        UNIQUE KEY `rev_page_id` (`rev_page`,`rev_id`),
        KEY `rev_timestamp` (`rev_timestamp`),
        KEY `page_timestamp` (`rev_page`,`rev_timestamp`),
        KEY `user_timestamp` (`rev_user`,`rev_timestamp`),
        KEY `usertext_timestamp` (`rev_user_text`,`rev_timestamp`)
      ) ENGINE=InnoDB AUTO_INCREMENT=4904 DEFAULT CHARSET=binary MAX_ROWS=10000000 AVG_ROW_LENGTH=1024;
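    For reference, one quick way to get the specific reason behind errno 150 is to ask InnoDB itself right after the failed statement; the output contains a "LATEST FOREIGN KEY ERROR" section that names the mismatched column or missing index:

      SHOW ENGINE INNODB STATUS;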

    Read the article

  • SQL Structure of DB table with different types of columns

    - by Dmitry Dvornikov
    I have a problem with optimizing the structure of my database; I'll try to explain it exactly. I am creating a project where we can add different values, but these values need different column types in the database (e.g. int, double, varchar). What is the best way to store values of different types in the database? In the project I'm using Propel 1.6. The point is to be able to add a value of type 'int', 'varchar' or another column type, while keeping searches on the table efficient. In total, I have two ideas. The first is to create a table "value" which will have the columns "id", "value_int", "value_double", "value_varchar", etc., with the corresponding column types. Depending on the type of the value, records will be saved with the value in the appropriate column (the rest will be NULL). The second solution is to create separate tables such as "value_int", "value_varchar", etc. Each would have the columns "id" and "value", where "value" has the relevant type (int, varchar, and so on). I must admit I am not convinced by either of these solutions; originally I was thinking about one table "value" where the value column would be of type "text", but that solution would probably be even worse. I would like to know your opinion on this topic; maybe something else would be better. Thanks in advance.

    EDIT: For example, we have three tables:

      USER (table of users):
        * id
        * name

      FIELD (table of profile fields, where the column 'type' is the type of the field, e.g. int or varchar):
        * id
        * type
        * name

      VALUE:
        * id
        * User_id (FK user.id)
        * Field_id (FK field.id)
        * value

    So each row in the USER table is a user, and the profile is stored in the VALUE table. But each profile field may have a different type (column 'type' in the FIELD table), and based on that I want the value to be stored in the appropriate column of the appropriate type.
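    For reference, a minimal sketch of the first idea described above (one row per value, one typed column filled, the rest NULL); the column names and index choices are illustrative, not a prescribed design:

      CREATE TABLE `value` (
        id INT AUTO_INCREMENT PRIMARY KEY,
        user_id INT NOT NULL,
        field_id INT NOT NULL,
        value_int INT NULL,
        value_double DOUBLE NULL,
        value_varchar VARCHAR(255) NULL,
        KEY idx_int (field_id, value_int),        -- one index per typed column keeps searches usable
        KEY idx_varchar (field_id, value_varchar)
      );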

    Read the article

  • Elegant code question: How to avoid creating unneeded object

    - by SeaDrive
    The root of my question is that the C# compiler is too smart. It detects a path via which an object could be undefined, so it demands that I fill it. In the code, I look at the tables in a DataSet to see if there is one that I want. If not, I create a new one. I know that dtOut will always be assigned a value, but the compiler is not happy unless it's assigned a value when declared. This is inelegant. How do I rewrite this in a more elegant way?

      System.Data.DataTable dtOut = new System.Data.DataTable();
      .
      .
      // find table with tablename = grp
      // if none, create new table
      bool bTableFound = false;
      foreach (System.Data.DataTable d1 in dsOut.Tables)
      {
          string d1_name = d1.TableName;
          if (d1_name.Equals(grp))
          {
              dtOut = d1;
              bTableFound = true;
              break;
          }
      }
      if (!bTableFound)
          dtOut = RptTable(grp);

    Read the article

  • "Thread was being aborted" 0n large dataset

    - by Donaldinio
    I am trying to process 114,000 rows in a dataset (populated from an Oracle database). I am hitting an error at around the 600 mark - "Thread was being aborted". All I am doing is reading the dataset, and I still hit the issue. Is this too much data for a dataset? It seems to load into the dataset ok though. I welcome any better ways to process this amount of data.

      rootTermsTable = entKw.GetRootKeywordsByCategory(catID);
      for (int k = 0; k < rootTermsTable.Rows.Count; k++)
      {
          string keywordID = rootTermsTable.Rows[k]["IK_DBKEY"].ToString();
          ...
      }

      public DataTable GetKeywordsByCategory(string categoryID)
      {
          DbProviderFactory provider = DbProviderFactories.GetFactory(connectionProvider);
          DbConnection con = provider.CreateConnection();
          con.ConnectionString = connectionString;
          DbCommand com = provider.CreateCommand();
          com.Connection = con;
          com.CommandText = string.Format("Select * From icm_keyword WHERE (IK_IC_DBKEY = {0})", categoryID);
          com.CommandType = CommandType.Text;
          DataSet ds = new DataSet();
          DbDataAdapter ad = provider.CreateDataAdapter();
          ad.SelectCommand = com;
          con.Open();
          ad.Fill(ds);
          con.Close();
          DataTable dt = new DataTable();
          dt = ds.Tables[0];
          return dt;
          //return ds.Tables[0].DefaultView;
      }

    Read the article

  • Table index design

    - by Swoosh
    I would like to add one or more indexes to my table, and I am looking for general ideas on how to choose them - other than the clustered PK, what should I look for when doing this? My example: this table (let's call it the TASK table) is going to be the biggest table of the whole application, expecting millions of records. IMPORTANT: a massive bulk-insert adds data to this table. The table has 27 columns (so far, and counting :D ):

      int      x 9 columns (IDs)
      varchar  x 10 columns
      bit      x 2 columns
      datetime x 5 columns

    INT COLUMNS: all of these are INT IDs, but from tables that are usually much smaller than the Task table (10-50 records max), for example a Status table (with values like "open", "closed") or a Priority table (with values like "important", "not so important", "normal"). There is also a "parent-ID" column (a self-reference to the table's own ID) used for joins. All the "small" tables have PKs the usual way - clustered.

    STRING COLUMNS: there is a Company column (a string!) that is always about 5 characters long, and every user will be restricted by it. If there are 15 different "companies" in Task, a logged-in user only sees one, so there is always a filter on this column. Might it be a good idea to add an index to it?

    DATE COLUMNS: I think you don't index these... right? Or can/should they be indexed?
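    For reference, a sketch of one obvious candidate given the always-on Company filter (assuming SQL Server from the clustered-PK wording; table and column names are illustrative): a nonclustered index on the filtered column, optionally covering a couple of the most-queried columns.

      -- nonclustered index on the always-filtered column, with two frequently read columns included
      CREATE NONCLUSTERED INDEX IX_Task_Company
          ON dbo.Task (Company)
          INCLUDE (StatusId, PriorityId);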

    Read the article

  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase the throughput to 100 times more than normal. For example, we are going to be featured on a television show, and I expect that in the hour after the show I'll get more than 100 times my normal traffic. My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places:

      RAM buffers
      commit log
      binary log
      actual tables
      all of the above places on my DB slave

    This is too much "durability" given that I'm on an EC2 node and most of the stuff goes across the same network pipe (file systems are network attached). Plus the drives are just slow. The data is not high value and I'd rather take a small chance of a few minutes of data loss than a high probability of an outage when the crowd arrives. During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commit log or binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables. In other words, I'd like MySQL/InnoDB to use a "write back" cache algorithm rather than a "write through" cache algorithm. Can I convince it to do that? If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users, I am creating accounts for all of them, so that's a write-heavy workload.
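    For reference, a sketch of the two settings most commonly relaxed for this kind of trade-off. They are not a full write-back cache, but they cut per-commit disk flushes sharply, at the cost of possibly losing roughly the last second of transactions on a crash:

      -- flush the InnoDB redo log to disk about once per second instead of at every commit
      SET GLOBAL innodb_flush_log_at_trx_commit = 2;
      -- let the operating system decide when to sync the binary log
      SET GLOBAL sync_binlog = 0;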

    Read the article

  • Why represent shopping carts and order invoices differently in a domain model?

    - by Todd
    I've built some shopping cart systems in the past, but I always designed them such that the final order invoice is just a shopping cart that has been marked as "purchased". All the logic for adding/removing/changing items in a cart is also the logic for the order, and all the data is stored in the same tables in the database. But it seems this is not the proper way to design an e-commerce site. Can someone explain the benefit of separating the shopping cart from invoices in the domain model? It seems to me this would lead to a lot of duplicated code and an extra set of tables in the database, and make it harder to maintain in the event the system needs to start accommodating more complicated orders (like specifying selected options for an item, which may or may not change the price/availability/shipping time of the order). I'm assuming I just haven't seen the light, as every book and other example I see seems to separate these two seemingly similar concerns -- but I can't find any explanation of the benefit of doing so! It's also the case in the systems that I design that changes are often made after the initial order is confirmed. It's not uncommon for items to be removed, replaced, or added afterwards (but prior to fulfillment).

    Read the article

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point... ;) Excel has the ideal interface for the report consumers in the form of its Data Lists: users can filter and segment the data on the fly to see the specific details they are interested in - they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files? The next tool I tried was MS Access, and I found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report and close it, the file's at 120-150 MB!), the import process is slow, and it is very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with linked tables to the database tables that store the report data, and that was many times slower (for some reason, sqlplus could query and generate the report file in a minute or so, while Access would take anywhere from 2-5 minutes for the same data). (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

    Read the article

  • Sql Server Replication: Snapshot vs Merge

    - by Zyphrax
    Background information: Let's say I have two database servers, both SQL Server 2008. One is on my LAN (ServerLocal), the other one is in a remote hosting environment (ServerRemote). I have created a database on ServerLocal and have an exact copy of that database on ServerRemote. The database on ServerRemote is part of a web application, and I would like to keep its data up to date with the data in the database on ServerLocal. ServerLocal is able to communicate with ServerRemote; this is one-way traffic. Communication from ServerRemote to ServerLocal isn't available.

    Current solution: I thought it would be a nice solution to use replication, so I've made ServerLocal a publisher and subscriptions are pushed to ServerRemote. This works fine: when a snapshot is transferred to ServerRemote, the existing data is purged and the ServerRemote database is once again an exact replica of the database on ServerLocal.

    The problem: Records that exist on ServerRemote but don't exist on ServerLocal are removed. This doesn't matter for most of my tables, but in some of my tables I'd like to keep the existing data (aspnet_users, for instance) and update the records if necessary. What kind of replication fits my problem?

    Read the article

  • Update Options on Existing jQuery Object

    - by Vince Kronlein
    I'm using Bootstrap and DataTables in my app, and I have a default initializer for tables based on class. I can just add the class data-table to the table and it gets instantiated with the default values I want. I'd like to know how to change or update specific options based on a specific table.

      if ($.fn.dataTable) {
          $('.data-table').dataTable({
              sDom: "R<'row'<'span6'l><'span6'f>r>t<'row'<'span6'i><'span6'p>>",
              sPaginationType: "bootstrap",
              oLanguage: {
                  "sLengthMenu": "_MENU_ &nbsp; records per page"
              },
              aoColumnDefs: [
                  { "bSortable": false, "aTargets": [ 0 ] }
              ]
          });
      }

    All my data tables have a checkbox in the first column, so the above removal of sorting works for all of them. But I'd like to be able to update the aoColumnDefs on a table-by-table basis so I can add other columns that I don't want sorted. So let's say I have a table $('#member-list'): how do I access this object and update its DataTables options in jQuery? I can't find any reference or help anywhere. Thanks a lot! -V

    Read the article

  • many to many relationship mysql select

    - by zeina
    Let's consider 2 tables, "schools" and "students". A student may belong to different schools during his life, and a school has many students, so this is a many-to-many example. A third table, "links", specifies the relation between student and school. To query this I do the following:

      Select
          sc.sid,    -- stands for school id
          st.uid,    -- stands for student id
          sc.sname,  -- stands for school name
          st.uname   -- stands for student name
          -- select more data about the student, joining other tables for that
      from students st
      left join links l on l.uid = st.uid     -- l.uid stands for the student id in the links table
      left join schools sc on sc.sid = l.sid  -- l.sid is the id of the school in the links table
      where st.uid = 3                        -- 3 is an example

    This query returns duplicate rows for the user id if he has more than one school, so to fix this I added GROUP BY st.uid, yet I also need the list of school names related to the same user. Is there a way to do it by fixing the query I wrote instead of running 2 queries? So, as an example, I want to get Luci with her schools (X, Y, Z, R, ...), etc.
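    For reference, a sketch of one common MySQL approach to exactly this shape of problem: GROUP_CONCAT collapses the school names into one column per student, so a single query returns one row per user.

      SELECT st.uid,
             st.uname,
             GROUP_CONCAT(sc.sname ORDER BY sc.sname SEPARATOR ', ') AS school_names
      FROM students st
      LEFT JOIN links l    ON l.uid = st.uid
      LEFT JOIN schools sc ON sc.sid = l.sid
      WHERE st.uid = 3
      GROUP BY st.uid, st.uname;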

    Read the article

  • C#: Using one Data Table in order to fill 2 different comboboxes?

    - by odiseh
    Hi all. I have 2 comboboxes on a form (form1) called combobox1 and combobox2. Each combobox should be filled with data stored in 2 different tables in SQL Server 2005, table1 and table2. I mean:

      combobox1 -- table1
      combobox2 -- table2

    I fill a data table with the proper data and then bind the comboboxes to it separately. My problem is: after filling the 2 combos, both of them end up with the same data, taken from table2. This is my code:

      DataTable tb1 = new DataTable();

      // Filling tb1 with data got from table1
      combobox1.Items.Clear();
      combobox1.DataSource = tb1;
      combobox1.DisplayMember = "Name";
      combobox1.ValueMember = "ID";
      combobox1.SelectedIndex = -1;

      // filling tb1 with data got from table2
      combobox2.Items.Clear();
      combobox2.DataSource = tb1;
      combobox2.DisplayMember = "Name";
      combobox2.ValueMember = "ID";
      combobox2.SelectedIndex = -1;

    What's wrong? It seems that if I use 2 different data tables (tb1 and tb2), everything will be all right. Any suggestions please. Thank you

    Read the article

  • How would you organize this in asp.net mvc?

    - by chobo
    I have an asp.net mvc 2.0 application that contains Areas/Modules like calendar, admin, etc. There may be cases where more than one area needs to access the same repository, so I am not sure where to put the Data Access Layers and Repositories. First option: Should I create Data Access Layer files (Linq to SQL in my case) with their accompanying Repositories for each area, so each area only contains the tables and repositories needed by that area? The benefit is that everything needed to run that module is in one place, so it is more encapsulated (in my mind anyway). The downside is that I may have duplicate queries, because other modules may use the same query. Second option: Or would it be better to place the DAL and Repositories outside the Areas and treat them as global? The advantage is I won't have any duplicate queries, but I may be loading a lot of unnecessary queries and DAL tables for certain modules. It is also more work to reuse or modify these modules for future projects (though the chance of reusing them is slim to none :)) Which option makes more sense? If someone has a better way I'd love to hear it. Thanks!

    Read the article

  • How can I manage a FIFO-queue in a database with SQL?

    - by Jonas
    I have two tables in my database, one for In and one for Out. They have two columns, Quantity and Price. How can I write a SQL query that selects the correct price? For example: if I have 3 items in for 75 and then 3 items in for 80, then I have two out for 75, and the third out should be for 75 (X) and the fourth out should be for 80 (Y). How can I write the price query for X and Y? They should use the price from the third and fourth rows. In other words, is there any way to SELECT the third row in the In table? I cannot use auto_increment as the identifier for, e.g., the "third" row, because the tables will contain posts for other items too. The rows will not be deleted; they will be kept for accountability reasons.

      SELECT Price FROM In WHERE ...?

    NEW database design:

      In
      +-----------+-------+
      | Supply_ID | Price |
      +-----------+-------+
      |     1     |  75   |
      |     1     |  75   |
      |     1     |  75   |
      |     2     |  80   |
      |     2     |  80   |
      +-----------+-------+

      Out
      +-------------+-------+
      | Delivery_ID | Price |
      +-------------+-------+
      |      1      |  75   |
      |      1      |  75   |
      |      2      |   X   | <- ?
      |      3      |   Y   | <- ?
      +-------------+-------+

    OLD database design:

      In
      +-----------+----------+-------+
      | Supply_ID | Quantity | Price |
      +-----------+----------+-------+
      |     1     |    3     |  75   |
      |     2     |    3     |  80   |
      +-----------+----------+-------+

      Out
      +-------------+----------+-------+
      | Delivery_ID | Quantity | Price |
      +-------------+----------+-------+
      |      1      |    2     |  75   |
      |      2      |    1     |   X   | <- ?
      |      3      |    1     |   Y   | <- ?
      +-------------+----------+-------+
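    For reference, a minimal sketch of the "select the Nth remaining In row" idea (MySQL syntax; item_id and in_id are hypothetical columns - an item filter plus a per-row insertion-order key - since a shared auto_increment alone would not work here):

      -- price for the 3rd unit of item 42 received = price for its 3rd unit delivered
      SELECT price
      FROM `In`
      WHERE item_id = 42      -- hypothetical item filter
      ORDER BY in_id          -- hypothetical column recording insertion order
      LIMIT 2, 1;             -- skip 2 rows, take the 3rd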

    Read the article

  • T4 trouble compiling transformation

    - by John Leidegren
    I can't figure this one out. Why doesn't T4 locate the IEnumerable type? I'm using Visual Studio 2010. And I just hope someone knows why?

      <#@ template debug="true" hostspecific="false" language="C#" #>
      <#@ assembly name="System.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" #>
      <#@ import namespace="System" #>
      <#@ import namespace="System.Data" #>
      <#@ import namespace="System.Data.SqlClient" #>
      <#@ output extension=".cs" #>
      public static class Tables
      {
      <#
          var q = @"
              SELECT tbl.name 'table', col.name 'column'
              FROM sys.tables tbl
              INNER JOIN sys.columns col ON col.object_id = tbl.object_id
              ";
          // var source = Execute(q);
      #>
      }
      <#+
          static IEnumerable Execute(string cmdText)
          {
              using (var conn = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=t4build;Integrated Security=True;"))
              {
                  conn.Open();
                  var cmd = new SqlCommand(cmdText, conn);
                  using (var reader = cmd.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                      }
                  }
              }
          }
      #>

      Error 2 Compiling transformation: The type or namespace name 'IEnumerable' could not be found (are you missing a using directive or an assembly reference?) c:\Projects\T4BuildApp\T4BuildApp\TextTemplate1.tt 26 9

    Read the article

  • SQL Server 2008 - Full Text Query

    - by user208662
    Hello, I have two tables in a SQL Server 2008 database at my company. The first table represents the products that my company sells; the second table contains the product manufacturers' details. These tables are defined as follows:

      Product
      -------
      ID
      Name
      ManufacturerID
      Description

      Manufacturer
      ------------
      ID
      Name

    As you can imagine, I want to make it as easy as possible for our customers to query this data. However, I'm having problems writing a forgiving yet powerful search query. For instance, I anticipate that people will search based on phonetic spellings, so the search terms may not exactly match the data in my database. In addition, I think some individuals will search by manufacturer's name first, but I want the matching product names to appear first. Based on these requirements, I'm currently working on the following query:

      select p.Name as 'ProductName',
             m.Name as 'Manufacturer',
             r.Rank as 'Rank'
      from Product p
      inner join Manufacturer m on p.ManufacturerID = m.ID
      inner join CONTAINSTABLE(Product, Name, @searchQuery) as r

    Oddly, this query is throwing an error, and I have no idea why. Squiggles appear to the right of the last parenthesis in Management Studio, and the tooltip says "An expression of non-boolean type specified in a context where a condition is expected". I understand what this statement means; however, I guess I do not know how ContainsTable works. What am I doing wrong? Thank you
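    For reference, a sketch of how CONTAINSTABLE is typically joined: it returns [KEY] and [RANK] columns, so the join needs an ON clause matching [KEY] to the full-text key of the indexed table (this sketch assumes Product.ID is that key column):

      SELECT p.Name AS ProductName,
             m.Name AS Manufacturer,
             r.[RANK] AS [Rank]
      FROM CONTAINSTABLE(Product, Name, @searchQuery) AS r
      INNER JOIN Product p ON p.ID = r.[KEY]
      INNER JOIN Manufacturer m ON m.ID = p.ManufacturerID
      ORDER BY r.[RANK] DESC;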

    Read the article

  • Select rows from table1 and all the children from table2 into an object

    - by Patrick
    I want to pull data from the table "Province_Notifiers" and also fetch all corresponding items from the table "Province_Notifier_Datas". The table "Province_Notifier" has a guid to identify it (PK); the table "Province_Notifier_Datas" has a column called BelongsToProvinceID which is a foreign key to the "Province_Notifier" table's guid. I tried something like this:

      var records = from data in ctx.Province_Notifiers
                    where DateTime.Now >= data.SendTime && data.Sent == false
                    join data2 in ctx.Province_Notifier_Datas on data.Province_ID equals data2.BelongsToProvince_ID
                    select new Province_Notifier
                    {
                        Email = data.Email,
                        Province_ID = data.Province_ID,
                        ProvinceName = data.ProvinceName,
                        Sent = data.Sent,
                        UserName = data.UserName,
                        User_ID = data.User_ID,
                        Province_Notifier_Datas = (new List<Province_Notifier_Data>().AddRange(data2))
                    };

    This line is not working, and I am trying to figure out how to pull the data from table2 into that Province_Notifier_Datas variable:

      Province_Notifier_Datas = (new List<Province_Notifier_Data>().AddRange(data2))

    I can add a record easily by adding the second table's row into Province_Notifier_Datas, but I can't fetch it back:

      Province_Notifier dbNotifier = new Province_Notifier();
      // set some values for dbNotifier
      dbNotifier.Province_Notifier_Datas.Add(
          new Province_Notifier_Data
          {
              BelongsToProvince_ID = userInput.Value.ProvinceId,
              EventText = GenerateNotificationDetail(notifierDetail)
          });

    This works and inserts the data correctly into both tables.

    Edit: These error messages are thrown:

      Cannot convert from 'Province_Notifier_Data' to 'System.Collections.Generic.IEnumerable'

    (If I look in Visual Studio, the variable "Province_Notifier_Datas" is of type System.Data.Linq.EntitySet.)

      The best overloaded method match for 'System.Collections.Generic.List.AddRange(System.Collections.Generic.IEnumerable)' has some invalid arguments

    Edit:

      var records = from data in ctx.Province_Notifiers
                    where DateTime.Now >= data.SendTime && data.Sent == false
                    join data2 in ctx.Province_Notifier_Datas on data.Province_ID equals data2.BelongsToProvince_ID into data2list
                    select new Province_Notifier
                    {
                        Email = data.Email,
                        Province_ID = data.Province_ID,
                        ProvinceName = data.ProvinceName,
                        Sent = data.Sent,
                        UserName = data.UserName,
                        User_ID = data.User_ID,
                        Province_Notifier_Datas = new EntitySet<Province_Notifier_Data>().AddRange(data2List)
                    };

      Error 3 The name 'data2List' does not exist in the current context.

    Read the article

  • The Impossible Layout?

    - by Kyle K
    I'm beginning to think this is impossible, but thought I'd ask you guys. Basically it's a 2-column layout, but the "business" wants the following:

      - Always take up the entire browser window
      - Accommodate resizing of the browser window
      - Left column will be fixed width, but that width should be flexible from page to page.
      - Left column has a region at the top with fixed height.
      - Left column has a bottom region. It should take up the remaining vertical space of the browser window. If content is very large, it will have a scroll bar just for that region.
      - Right column should take up the remaining horizontal space of the browser window.
      - Right column has a region at the top with fixed height.
      - Right column has a bottom region. It should take up the remaining vertical space of the browser window. If content is very large, it will have a scroll bar just for that region.

    I've tried everything... divs, floated, absolutely positioned, tables, divs in tables... Is this even possible? Here's an image of what it should look like: http://imgur.com/zk1jP.png

    Read the article

  • Use a foreign key mapping to get data from the other table using Python and SQLAlchemy.

    - by Az
    Hmm, the title was harder to formulate than I thought. Basically, I've got these simple classes mapped to tables, using SQLAlchemy. I know they're missing a few items, but those aren't essential for highlighting the problem.

      class Customer(object):
          def __init__(self, uid, name, email):
              self.uid = uid
              self.name = name
              self.email = email

          def __repr__(self):
              return str(self)

          def __str__(self):
              return "Cust: %s, Name: %s (Email: %s)" % (self.uid, self.name, self.email)

    The above is basically a simple customer with an id, name and an email address.

      class Order(object):
          def __init__(self, item_id, item_name, customer):
              self.item_id = item_id
              self.item_name = item_name
              self.customer = None

          def __repr__(self):
              return str(self)

          def __str__(self):
              return "Item ID %s: %s, has been ordered by customer no. %s" % (self.item_id, self.item_name, self.customer)

    This is the Order class that just holds the order information: an id, a name and a reference to a customer. It's initialised to None to indicate that this item doesn't have a customer yet; the code's job will be to assign the item a customer. The following code maps these classes to respective database tables.

      # SQLAlchemy database transmutation
      engine = create_engine('sqlite:///:memory:', echo=False)
      metadata = MetaData()

      customers_table = Table('customers', metadata,
          Column('uid', Integer, primary_key=True),
          Column('name', String),
          Column('email', String)
      )

      orders_table = Table('orders', metadata,
          Column('item_id', Integer, primary_key=True),
          Column('item_name', String),
          Column('customer', Integer, ForeignKey('customers.uid'))
      )

      metadata.create_all(engine)

      mapper(Customer, customers_table)
      mapper(Order, orders_table)

    Now if I do something like:

      for order in session.query(Order):
          print order

    I can get a list of orders in this form:

      Item ID 1001: MX4000 Laser Mouse, has been ordered by customer no. 12

    What I want to do is find out customer 12's name and email address (which is why I used the ForeignKey into the Customer table). How would I go about it?

    Read the article

  • Where should my "filtering" logic reside with Linq-2-SQL and ASP.NET-MVC in View or Controller?

    - by Nate Bross
    I have a main table, with several "child" tables: TableA, TableAChild1 and TableAChild2. I have a view which shows the information in TableA and then has two columns listing all the items in TableAChild1 and TableAChild2 respectively; they are rendered with partial views. Both child tables have a bit field VisibleToAll, and depending on the user role, I'd like to display either all related rows or only the related rows where VisibleToAll = true. This code feels like it should be in the controller, but I'm not sure how it would look, because as it stands the controller (limited version) looks like this:

      return View("TableADetailView", repos.GetTableA(id));

    Would something like this even work, and would it be bad - if my DataContext gets submitted, would that delete all the rows that have VisibleToAll == false?

      var tblA = repos.GetTableA(id);
      tblA.TableAChild1 = tblA.TableAChild1.Where(tmp => tmp.VisibleToAll == true);
      tblA.TableAChild2 = tblA.TableAChild2.Where(tmp => tmp.VisibleToAll == true);
      return View("TableADetailView", tblA);

    It would also be simple to add that logic to the RenderPartial call from the main view:

      <% Html.RenderPartial("TableAChild1", Model.TableAChild1.Where(tmp => tmp.VisibleToAll == true)); %>

    Read the article

  • Sub Query making Query slow.

    - by Muhammad Kashif Nadeem
    Please copy and paste the following script:

      DECLARE @MainTable TABLE(MainTablePkId int)
      INSERT INTO @MainTable SELECT 1
      INSERT INTO @MainTable SELECT 2

      DECLARE @SomeTable TABLE(SomeIdPk int, MainTablePkId int, ViewedTime1 datetime)
      INSERT INTO @SomeTable SELECT 1, 1, DATEADD(dd, -10, getdate())
      INSERT INTO @SomeTable SELECT 2, 1, DATEADD(dd, -9, getdate())
      INSERT INTO @SomeTable SELECT 3, 2, DATEADD(dd, -6, getdate())

      DECLARE @SomeTableDetail TABLE(DetailIdPk int, SomeIdPk int, Viewed INT, ViewedTimeDetail datetime)
      INSERT INTO @SomeTableDetail SELECT 1, 1, 1, DATEADD(dd, -7, getdate())
      INSERT INTO @SomeTableDetail SELECT 2, 2, NULL, DATEADD(dd, -6, getdate())
      INSERT INTO @SomeTableDetail SELECT 3, 2, 2, DATEADD(dd, -8, getdate())
      INSERT INTO @SomeTableDetail SELECT 4, 3, 1, DATEADD(dd, -6, getdate())

      SELECT m.MainTablePkId,
             (SELECT COUNT(Viewed) FROM @SomeTableDetail),
             (SELECT TOP 1 s2.ViewedTimeDetail
              FROM @SomeTableDetail s2
              INNER JOIN @SomeTable s1 ON s2.SomeIdPk = s1.SomeIdPk
              WHERE s1.MainTablePkId = m.MainTablePkId)
      FROM @MainTable m

    The script above is just a sample. I have a long list of columns in the SELECT and around 12+ columns in the subquery, and there are around 8 tables in my FROM clause. To fetch 2000 records, the full query takes 21 seconds; if I remove the subqueries, it takes just 4 seconds. I have tried to optimize the query using the Database Engine Tuning Advisor, but adding the advised indexes and statistics made the query time even worse. Note: as I mentioned, this is test data to explain my question; the real query has many more tables, joins and columns, but without the subqueries the performance is fine. Any help - thanks.
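    For reference, a sketch of one common rewrite for this pattern (SQL Server 2005+): pull the correlated lookup out of the select list into an OUTER APPLY so it reads as a join; whether it actually helps depends on the real schema, indexes and the resulting plan.

      SELECT m.MainTablePkId,
             (SELECT COUNT(Viewed) FROM @SomeTableDetail) AS TotalViewed,
             x.ViewedTimeDetail
      FROM @MainTable m
      OUTER APPLY (SELECT TOP 1 s2.ViewedTimeDetail
                   FROM @SomeTableDetail s2
                   INNER JOIN @SomeTable s1 ON s2.SomeIdPk = s1.SomeIdPk
                   WHERE s1.MainTablePkId = m.MainTablePkId) AS x;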

    Read the article
