Search Results

Search found 4685 results on 188 pages for 'queries'.


  • Does WPF break an Entity Framework ObjectContext?

    - by David Veeneman
    I am getting started with Entity Framework 4, and I am getting ready to write a WPF demo app to learn EF4 better. My LINQ queries return IQueryable<T>, and I know I can drop those into an ObservableCollection<T> with the following code:

        IQueryable<Foo> fooList = from f in Foo
                                  orderby f.Title
                                  select f;
        var observableFooList = new ObservableCollection<Foo>(fooList);

    At that point, I can set the appropriate property on my view model to the observable collection, and I will get WPF data binding between the view and the view model property.

    Here is my question: Do I break the ObjectContext when I move my foo list to the observable collection? Or, put another way, assuming I am otherwise handling my ObjectContext properly, will EF4 properly update the model (and the database)?

    The reason I ask is this: NHibernate tracks objects at the collection level. If I move an NHibernate IList<T> to an observable collection, it breaks NHibernate's change-tracking mechanism. That means I have to do some very complicated object wrapping to get NHibernate to work with WPF. I am looking at EF4 as a way to dispense with all that.

    So, to get EF4 working with WPF, is it as simple as dropping my IQueryable<T> results into an ObservableCollection<T>? Does that preserve change tracking on my EDM entity objects? Thanks for your help.


  • How to profile Doctrine in Zend Framework

    - by David Zapata
    Good day. I'm using Doctrine as the ORM for my Zend Framework project. This is the first time I've used it. I've followed the ZendCasts Doctrine chapters and everything works for me, but I need to perform some profiling. There is a Doctrine_Connection_Profiler class that should be used to profile the Doctrine model's internal queries, but I've tried to use it without success. I always get a "PDOException: You cannot serialize or unserialize PDOStatement instances" exception when I perform my unit tests. Here is an example:

        $conn = Doctrine_Manager::connection($doctrineConfig['dsn'], $dbconfname);
        ...
        if (APPLICATION_ENV != 'production') {
            $obj_doctrine_profiler = new Doctrine_Connection_Profiler();
            $conn->setListener($obj_doctrine_profiler);
        }

    All of my unit tests work if I comment out or delete the $conn->setListener($obj_doctrine_profiler); line. This code block is located in my Bootstrap.php class; the weird thing is, the web application works just fine even with the mentioned code line. Thank you so much for your help, and please excuse me if my English is not the best.


  • Does DefaultAppPool run with special elevated privileges on IIS?

    - by Leeks and Leaks
    I'm running a piece of code within a web page that queries the IIS metabase using ADSI. The code is as simple as this:

        DirectoryEntry iisNode = new DirectoryEntry("/LM/W3SVC/1/ROOT/MyAspWebsite-1-128886021498831845");
        foreach (DirectoryEntry de in iisNode.Parent.Children)
        {
            System.Console.WriteLine(de.Name);
        }

    This works fine when I run the page/site under the DefaultAppPool on IIS7/W2K8. However, when I create my own app pool and leave the properties the same as the default app pool, this code fails with the following error:

        Caught: System.Runtime.InteropServices.COMException
        Failed to parse virtual directory: /LM/W3SVC/1/ROOT/MyAspWebsite-1-128889542757187500
        System.Runtime.InteropServices.COMException (0x80070005): Access is denied.

    What special privileges does the DefaultAppPool have? I don't see any documented. I need this to work in non-default app pools, but without giving the entire worker process elevated privileges. I've also tried using the username and password parameters of the DirectoryEntry constructor with the Admin account on the machine that IIS7 is running on, but that didn't change anything. I'll also note that this works fine on IIS6 and W2K3. Any help is appreciated.


  • Accessing Unicode Telugu text from an MS-Access database in Java

    - by Ravi Chandra
    I have an MS-Access database (an English-Telugu dictionary database) which contains a table storing English words and Telugu meanings. I am writing a dictionary program in Java which queries the database for a keyword entered by the user and displays the Telugu meaning. My program works fine until I get the data from the database, but when I display it on a component like a JTextArea/JEditorPane/etc., the Telugu text is displayed as '????'. Why is this happening? I have seen the solution for "1467412/reading-unicode-data-from-an-access-database-using-jdbc", which provides a workaround for Hebrew, but it is not working for Telugu; i.e., I included setCharset("UTF8") before querying the database, but I am still getting all '?'s. As soon as I get the data from the ResultSet I check the individual characters, and all the Telugu characters are encoded as 63 (the code for '?'). This is my observation. I guess this must be some type of encoding problem. I would be very glad if somebody could provide a solution for this. Thanks in advance.


  • Use the repository pattern when using PLINQO generated data?

    - by Chad
    I'm "upgrading" an MVC app. Previously, the DAL was a part of the Model, as a series of repositories (based on the entity name) using standard LINQ to SQL queries. Now, it's a separate project and is generated using PLINQO. Since PLINQO generates query extensions based on the properties of the entity, I started using them directly in my controller... and eliminated the repositories all together. It's working fine, this is more a question to draw upon your experience, should I continue down this path or should I rebuild the repositories (using PLINQO as the DAL within the repository files)? One benefit of just using the PLINQO generated data context is that when I need DB access, I just make one reference to the the data context. Under the repository pattern, I had to reference each repository when I needed data access, sometimes needing to reference multiple repositories on a single controller. The big benefit I saw on the repositories, were aptly named query methods (i.e. FindAllProductsByCategoryId(int id), etc...). With the PLINQO code, it's _db.Product.ByCatId(int id) - which isn't too bad either. I like both, but where it gets "harrier" is when the query uses predicates. I can roll that up into the repository query method. But on the PLINQO code, it would be something like _db.Product.Where(x = x.CatId == 1 && x.OrderId == 1); I'm not so sure I like having code like that in my controllers. Whats your take on this?


  • Using LINQ to SQL and chained Replace

    - by White Dragon
    I have a need to replace multiple strings with others in a query:

        from p in dx.Table
        where p.Field.Replace("A", "a").Replace("B", "b").ToLower() == SomeVar
        select p

    This produces a nice single SQL statement with the relevant REPLACE() SQL commands. All good :) I need to do this in a few queries around the application, so I'm looking for some help in this regard; something that will work as above as a single SQL hit/command on the server. It seems from looking around that I can't use RegEx, as there is no SQL equivalent. Being a LINQ newbie, is there a nice way for me to do this? E.g., is it possible to get the query as an IQueryable "var result", say, pass that to a function to add the needed .Replace() calls, and pass it back? Can I get a quick example of how, if so?

    EDIT: This seems to work! Does it look like it would be a problem?

        var data = from p in dx.Videos
                   select p;
        data = AddReplacements(data, checkMediaItem);
        theitem = data.FirstOrDefault();
        ...

        public IQueryable<Video> AddReplacements(IQueryable<Video> DataSet, string checkMediaItem)
        {
            return DataSet.Where(p => p.Title.Replace(" ", "-").Replace("&", "-").Replace("?", "-") == checkMediaItem);
        }


  • Why do the page posts take so long?

    - by Olle
    Hi! I am having some problems with some page postbacks that take a loooong time to execute. If I do "appcmd list requests" I get something like this:

        REQUEST "79000001800004e3" (url:POST /dir/file.aspx, time:87219 msec, client:xxx.xxx.xxx.xxx, stage:ExecuteRequestHandler, module:ManagedPipelineHandler)
        REQUEST "8600000080002f82" (url:POST /dir/file.aspx, time:61391 msec, client:xxx.xxx.xxx.xxx, stage:AcquireRequestState, module:Session)
        REQUEST "5e00010280000420" (url:POST /dir/file.aspx, time:21047 msec, client:xxx.xxx.xxx.xxx, stage:AcquireRequestState, module:Session)

    It's one particular file that causes the problem (dir/file.aspx in this case), and the requests come from the same IP address. The first one is in the ExecuteRequestHandler stage of the ManagedPipelineHandler module, and the two after that are in the AcquireRequestState stage of the Session module. I do not have any details about the web browser, or anything more about the client for that matter. I have looked for SQL deadlocks and did not find any, and there are no long-running SQL queries at all. Do you have any idea of what the problem could be? Regards.


  • rapid application development tools for very basic GUI apps

    - by Jurij
    I know there are many RAD platforms out there. In fact, there are so many that I'm having a hard time finding out which one fits me best. What I want is a RAD tool that would allow me to define a database data model (make DB tables) and then create (view and edit) forms for the various tables. Data input, updating, and various queries should be easy, and the GUI should be generated automatically. I'd like to add some additional functionality by coding (such as various complex calculations on the data). I'm a programmer, so I'm willing to learn to use a more complete, full-blown RAD solution if you can point me to it (NetBeans and Ruby on Rails being two such frameworks that would probably be high on the list). I'm currently doing Windows Forms logistics apps in .NET. I've actually developed a very crude and basic version of what I need, but I just know that there are solutions out there that are much better, and I'd benefit from knowing how to use them. So in short, the basic requirements:

        * database-based data storage (SQLite if possible)
        * very automated GUI creation
        * desktop-based (as in: not a web app)
        * extendable by coding
        * used for creating simple data entry, view & query apps

    So basically something like Oracle Forms or DotNetMushroom Rapid Application Developer, but for .NET and SQLite if possible.


  • Grails - Recursive computation causes a StackOverflowError and doesn't update on show

    - by Kedare
    Hello, I have a little problem. I have made a Category domain in Grails, but I want to "cache" the string representation in the database by generating it when the representation field is empty or NULL (to avoid recursive queries on each toString). Here is the code:

        package devkedare

        class Category {
            String title;
            String description;
            String representation;
            Date dateCreated;
            Date lastUpdate;
            Category parent;

            static constraints = {
                title(blank:false, nullable:false, size:2..100);
                description(blank:true, nullable:false);
                dateCreated(nullable:true);
                lastUpdate(nullable:true);
                autoTimestamp: true;
            }

            static hasMany = [childs:Category]

            @Override
            String toString() {
                if((!this.representation) || (this.representation == "")) {
                    this.computeRepresentation(true)
                }
                return this.representation;
            }

            String computeRepresentation(boolean updateChilds) {
                if(updateChilds) {
                    for(child in this.childs) {
                        child.representation = child.computeRepresentation(true);
                    }
                }
                if(this.parent) {
                    this.representation = "${this.parent.computeRepresentation(false)}/${this.title}";
                } else {
                    this.representation = this.title
                }
                return this.representation;
            }

            void beforeUpdate() {
                this.computeRepresentation(true);
            }
        }

    There are 2 problems:

    1. It doesn't update the representation when the representation is NULL or empty and toString is called; it only updates it on save. How can I fix that?
    2. On some categories, updating throws a StackOverflowError. I uploaded my actual category table as CSV here (pasting CSV looks buggy): http://kedare.free.fr/Dist/01.csv. Updating the representation works everywhere except on "IT", which throws a StackOverflowError (this is the only category that has many childs).

    Any idea how I can fix those 2 problems? Thank you!


  • Query error handling in CodeIgniter

    - by Sajith S Narayanan
    Hi All, I am trying to execute a MySQL query using the CI Active Record methods. If the query is malformed, CI raises internal server error 500 and quits without processing the next steps. I need to roll back all the other queries processed before that error statement, and the rollback is also not happening. Can you help, please? The code snippet is below:

        function dbInsertInformationToDB($data_array)
        {
            $returnID = "";
            $uniqueDataArray = array(); // I prepare an array of values here
            $uniqueTableList = filter_unique_tables($data_array[2]);

            $this->db->trans_begin();

            // inserting is done here
            // when there is a query error in $this->db->insert(), it is not rolling
            // back the previous query executed
            foreach ($uniqueTableList as $table_name) {
                $uniqueDataArray = filterDataArray($data_array, $table_name, 2);
                $this->db->insert($table_name, $uniqueDataArray);
                if ($this->db->_error_message()) {
                    $error = "I am caught!!";
                }
                $returnID = $this->db->affected_rows();
            }

            if ($this->db->trans_status() === FALSE) {
                $this->db->trans_rollback();
            } else {
                $this->db->trans_commit();
            }
            return "ERROR";
        }


  • AppEngine JDO-QL : Problem with multiple AND and OR in query

    - by KlasE
    Hi, I'm working with App Engine (Java/JDO) and am trying to do some querying with lists. I have the following class:

        @PersistenceCapable(identityType = IdentityType.APPLICATION, detachable="true")
        public class MyEntity {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            Key key;

            @Persistent
            List<String> tags = new ArrayList<String>();

            @Persistent
            String otherVar;
        }

    The following JDO-QL works for me:

        ( tags.contains('a') || tags.contains('b') || tags.contains('c') || tags.contains('d') )
        && otherVar > 'a' && otherVar < 'z'

    This seems to return all results where tags has one or more strings with one or more of the given values, together with the inequality search on otherVar. But the following does not work:

        (tags.contains('a') || tags.contains('_a'))
        && (tags.contains('b') || tags.contains('_b'))
        && otherVar > 'a' && otherVar < 'z'

    In this case I want all hits where there is at least one a (either a or _a) and one b (either b or _b), together with the inequality search as before. But the problem is that I also get back the results where there is an a but no b, which is not what I want. Maybe I have missed something obvious, or made a coding error, or possibly there is a restriction on how you can write these queries in App Engine, so any tips or help would be very welcome. Regards, Klas


  • Python SQLite FTS3 alternatives?

    - by Mike Cialowicz
    Are there any good alternatives to SQLite + FTS3 for Python? I'm iterating over a series of text documents and would like to categorize them according to some text queries. For example, I might want to know if a document mentions the words "rating" or "upgraded" within three words of "buy." The FTS3 syntax for this query is the following:

        (rating OR upgraded) NEAR/3 buy

    That's all well and good, but if I use FTS3, this operation seems rather expensive. The process goes something like this:

        # create an SQLite3 db in memory
        conn = sqlite3.connect(':memory:')
        c = conn.cursor()
        c.execute('CREATE VIRTUAL TABLE fts USING FTS3(content TEXT)')
        conn.commit()

    Then, for each document, do something like this:

        # insert the document text into the fts table, so I can run a query
        c.execute('insert into fts(content) values (?)', (content,))
        conn.commit()

        # execute my FTS query here, look at the results, etc.

        # remove the document text from the fts table before working on the next document
        c.execute('delete from fts')
        conn.commit()

    This seems rather expensive to me. The other problem I have with SQLite FTS is that it doesn't appear to work with Python 2.5.4: the 'CREATE VIRTUAL TABLE' syntax is unrecognized. This means I'd have to upgrade to Python 2.6, which means re-testing numerous existing scripts and programs to make sure they work under 2.6. Is there a better way? Perhaps a different library? Something faster? Thank you.


  • Delphi Application using COMMIT and ROLLBACK for Multiple SQL Updates

    - by Matt
    Is it possible to use the SQL BEGIN TRANSACTION, COMMIT TRANSACTION, and ROLLBACK TRANSACTION statements when embedding SQL queries into an application that makes multiple calls to SQL for table updates? For example, I have the following code:

        Q.SQL.Add(<UPDATE A RECORD>);
        Q.ExecSQL;
        Q.Close;
        Q.SQL.Clear;

        Q.SQL.Add(<Select Some Data>);
        Q.Open;
        // set some variables
        Q.Close;
        Q.SQL.Clear;

        Q.SQL.Add(<UPDATE A RECORD>);
        Q.ExecSQL;

    What I would like to do is roll back the first update if the second update fails. Is it feasible to set a unique notation for the BEGIN, COMMIT, and ROLLBACK so as to specify what is being committed or rolled back? I.e., before the first update specify BEGIN TRANSACTION_A, then after the last update specify COMMIT TRANSACTION_A. I hope that makes sense. If I was doing this in a SQL stored procedure I would be able to specify this at the start and end of the procedure, but I have had to break the code down into manageable chunks due to process blocks and deadlocks on a heavily loaded SQL Server.
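    To illustrate, here is a rough sketch of the named-transaction idea in plain T-SQL (the table, column, and transaction names are invented for the example):

        DECLARE @err int
        SET @err = 0

        BEGIN TRANSACTION Transaction_A

        UPDATE Orders SET Status = 'Shipped' WHERE OrderID = 1   -- first update
        SET @err = @err + @@ERROR

        -- ... select some data, set some variables ...

        UPDATE Orders SET Status = 'Billed' WHERE OrderID = 2    -- second update
        SET @err = @err + @@ERROR

        IF @err <> 0
            ROLLBACK TRANSACTION Transaction_A   -- undoes both updates
        ELSE
            COMMIT TRANSACTION Transaction_A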


  • Database Design Question regarding duplicate information.

    - by galford13x
    I have a database that contains a history of product sales. For example, the following table:

        CREATE TABLE SalesHistoryTable
        (
            OrderID,   -- order number, unique across all orders
            ProductID, -- product ID; can be used as a key to look up product info in another table
            Price,     -- price of the product per unit at the time of the order
            Quantity,  -- quantity of the product for the order
            Total,     -- total cost of the order for the product (Price * Quantity)
            Date,      -- date of the order
            StoreID,   -- the store that created the order
            PRIMARY KEY(OrderID)
        );

    The table will eventually have millions of transactions. From this, profiles can be created for products in different geographical regions (based on the StoreID). Creating these profiles can be very time-consuming as a database query. For example:

        SELECT ProductID, StoreID,
               SUM(Total) AS Total,
               SUM(Quantity) AS QTY,
               SUM(Total)/SUM(Quantity) AS AvgPrice
        FROM SalesHistoryTable
        GROUP BY ProductID, StoreID;

    The above query could be used to get information on products for any particular store. You could then determine which store has sold the most, which has made the most money, and which on average sells for the most/least. This would be very costly to run as a normal query at any time. What are some design decisions that would allow these types of queries to run faster, assuming storage size isn't an issue? For example, I could create another table with duplicate information:

        StoreID (Key), ProductID, TotalCost, QTY, AvgPrice

    and provide a trigger so that when a new order is received, the entry for that store is updated in the new table. The cost of the update is almost nothing. What should be considered given the above scenario?
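    To make that concrete, a rough sketch of the summary table and trigger I have in mind (names and types are invented; the case where no summary row exists yet is left out):

        CREATE TABLE SalesSummaryTable
        (
            StoreID   int,
            ProductID int,
            TotalCost money,
            QTY       int,
            PRIMARY KEY (StoreID, ProductID)
        );

        CREATE TRIGGER trg_SalesHistory_Summary ON SalesHistoryTable
        AFTER INSERT
        AS
        BEGIN
            -- fold each new order into the running totals for its store/product
            UPDATE s
            SET s.TotalCost = s.TotalCost + i.Total,
                s.QTY       = s.QTY + i.Quantity
            FROM SalesSummaryTable s
            JOIN inserted i ON i.StoreID = s.StoreID AND i.ProductID = s.ProductID;
        END

        -- AvgPrice then falls out at read time as TotalCost / QTY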


  • Ruby on Rails ActiveRecord/Include/Associations: can't get my query to work

    - by Cypher
    I just started learning Rails and I'm trying to set up queries via associations. All the queries I write seem to do bizarre things and end up trying to query two table names joined together with an '_' as if they were one table. I have no clue why this would ever happen. My tables are as follows:

        schools:        id, name
        variables:      id, name, type
        var_entries:    id, variable_id, entry
        school_entries: id, school_id, var_entry_id

    My Rails model classes are:

        $local = { :adapter  => "mysql",
                   :host     => "localhost",
                   :port     => "3306".to_i,
                   :database => "spy_2",
                   :username => "root",
                   :password => "vertrigo" }

        class School < ActiveRecord::Base
          establish_connection $local
          has_many :school_entries
          has_many :var_entries, :through => school_entries
        end

        class Variable < ActiveRecord::Base
          establish_connection $local
          has_many :var_entries
          has_many :school_entries, :through => :var_entries
        end

        class VarEntry < ActiveRecord::Base
          establish_connection $local
          has_many_and_belongs_to :school_entries
          belongs_to :variables
        end

        class SchoolEntry < ActiveRecord::Base
          establish_connection $local
          belongs_to :school
          has_many :var_entries
        end

    I want to do this SQL query:

        SELECT school_id, variable_id, rank
        FROM school_entries, variables, var_entries, schools
        WHERE var_entries.variable_id = variables.id
          AND school_entries.var_entry_id = var_entries.id
          AND schools.id = school_entries.school_id
          AND variables.type = 'number';

    and put it into Rails notation. Here is one of my many failed attempts:

        schools = VarEntry.all(:include => [:school_entries, :variables],
                               :conditions => "variables.type = 'number'")

    The error:

        'const_missing': uninitialized constant VarEntry::Variables (NameError)

    If I remove variables:

        schools = VarEntry.all(:include => [:school_entries, :variables],
                               :conditions => "type = 'number'")

    the error is:

        Mysql::Error: Unknown column 'type' in 'where clause': SELECT * FROM 'var_entries' WHERE (type=number) (ActiveRecord::StatementInvalid)

    Can anyone tell me where I'm going horribly wrong?


  • Saving twice doesn't update my object in JDO

    - by Javi
    Hello, I have an object persisted in the GAE datastore using JDO. The object looks like this:

        public class MyObject implements Serializable, StoreCallback {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            @Extension(vendorName="datanucleus", key="gae.encoded-pk", value="true")
            private String id;

            @Persistent
            private String firstId;
            ...
        }

    As usual, when the object is stored for the first time, a new id value is generated for the identifier. I need that if I don't provide a value for firstId, it is set to the same value as id. I don't want to solve this with a special getter that checks for a null value in firstId and then returns the id value, because I want to run queries relying on firstId. I can do it by saving the object twice (probably there's a better way to do this, but I'll do it this way until I find a better one), but it is not working. When I debug it I can see that result.firstId is set with the id value and it seems to be persisted, but when I go into the datastore I see that firstId is null (as it was saved the first time). This save method is in my DAO and it is called from another save method in the service annotated with @Transactional. Does anyone have any idea why the second object is not persisted properly?

        @Override
        public MyObject save(MyObject obj) {
            PersistenceManager pm = JDOHelper
                .getPersistenceManagerFactory("transactions-optional")
                .getPersistenceManager();
            MyObject result = pm.makePersistent(obj);
            if (result.getFirstId() == null) {
                result.setFirstId(result.getId());
                result = pm.makePersistent(result);
            }
            return result;
        }

    Thanks.


  • Way to automate setting of MergeOptions

    - by Nix
    I am looking for an automated way to iterate over all ObjectQueries and set the merge option to NoTracking (a read-only context). Once I find out how to do it, I will be able to generate a default read-only context using a T4 template. Is this possible? For example, let's say I have these tables in my object context:

        SampleContext
            TableA
            TableB
            TableC

    I would have to go through and do the below:

        SampleContext sc = new SampleContext();
        sc.TableA.MergeOption = MergeOption.NoTracking;
        sc.TableB.MergeOption = MergeOption.NoTracking;
        sc.TableC.MergeOption = MergeOption.NoTracking;

    I am trying to find a way to generalize this using the object context. I want to get it down to something like:

        foreach (var objectQuery in sc) {
            objectQuery.MergeOption = MergeOption.NoTracking;
        }

    Preferably I would like to do it using the base class (ObjectContext):

        ObjectContext baseClass = sc as ObjectContext;
        var objectQueries = sc.MetadataWorkspace.GetItem("Magic Object Query Option");

    But I am not sure I can even get access to the queries. Any help would be appreciated.


  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            )
        )

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk IO is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

        * Drop the primary key while I am doing the inserting and recreate it later (see the sketch below)
        * Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small
        * Anything else?

    -- Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import?

    Chopeen: The data is being generated remotely on many other machines (my SQL server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process. I will try dropping the primary key and see if that helps.

    ~ Andrew
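    For reference, a sketch of the drop-and-recreate approach from the first bullet above (assumes the BulkData table as defined):

        -- drop the clustered PK before the bulk load
        ALTER TABLE [BulkData] DROP CONSTRAINT [PKBulkData];

        -- ... run the SqlBulkCopy inserts from the C# client here ...

        -- recreate the key once the data is in; the chunks are pre-sorted,
        -- so the rebuild reads the rows in key order
        ALTER TABLE [BulkData] ADD CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            ([ContainerId] ASC, [BinId] ASC, [Sequence] ASC);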


  • Why are changes to coffeescript files not being compiled when my Rails 3.2.0 app is in development mode?

    - by ben
    Normally, any changes I make to .js.coffee files in my Rails 3.2.0 app in development mode take effect when I refresh the page. All of a sudden, this is not happening. If I do rake assets:precompile, then the changes are shown, but if I then do rake assets:clean they go back to not being shown. What is causing this?

    Edit: Restarting the server makes the changes show. Why isn't this happening automatically, as before?

    Edit: Here is my development.rb:

        Myapp::Application.configure do
          # Settings specified here will take precedence over those in config/application.rb

          # In the development environment your application's code is reloaded on
          # every request. This slows down response time but is perfect for development
          # since you don't have to restart the web server when you make code changes.
          config.cache_classes = false

          # Log error messages when you accidentally call methods on nil.
          config.whiny_nils = true

          # Show full error reports and disable caching
          config.consider_all_requests_local = true
          config.action_controller.perform_caching = false

          # Don't care if the mailer can't send
          config.action_mailer.raise_delivery_errors = false

          # Print deprecation notices to the Rails logger
          config.active_support.deprecation = :log

          # Only use best-standards-support built into browsers
          config.action_dispatch.best_standards_support = :builtin

          # Raise exception on mass assignment protection for Active Record models
          config.active_record.mass_assignment_sanitizer = :strict

          # Log the query plan for queries taking more than this (works
          # with SQLite, MySQL, and PostgreSQL)
          config.active_record.auto_explain_threshold_in_seconds = 0.5

          # Do not compress assets
          config.assets.compress = false

          # Expands the lines which load the assets
          config.assets.debug = true

          config.action_mailer.default_url_options = { :host => 'localhost:3000' }

          config.log_level = :warn
        end


  • Help! The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space.

    - by michael.lukatchik
    We're running SQL Server 2000. In our database, we have an "Orders" table with approximately 750,000 rows. We can perform simple SELECT statements on this table. However, when we want to run a query like:

        SELECT TOP 100 * FROM Orders ORDER BY Date_Ordered DESC

    we receive the following message:

        Error: 9002, Severity: 17, State: 6
        The log file for database 'tempdb' is full.
        Back up the transaction log for the database to free up some log space.

    We have other tables in our database which are similar in size (i.e. around 700,000 records), and on those tables we can run any queries we'd like without ever receiving the 'tempdb is full' message. To resolve this, we've backed up our database, shrunk the actual database, and also shrunk the database and files in the tempdb system database, but this hasn't resolved the issue. The size of our log file is set to autogrow. We're not sure where to go next. Any ideas why we might still be receiving this message?
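    We are also wondering whether the ORDER BY sort is what spills into tempdb, and whether an index matching it would sidestep the problem - something like this (index name invented):

        -- lets the TOP 100 come straight off the index in date order,
        -- instead of sorting 750,000 rows through tempdb
        CREATE INDEX IX_Orders_DateOrdered
        ON Orders (Date_Ordered DESC);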


  • How to query two tables based on whether or not record exists in a third?

    - by Katherine
    I have three tables, the first two fairly standard: 1) PRODUCTS table: pid pname, etc 2) CART table: cart_id cart_pid cart_orderid etc The third is designed to let people save products they buy and keep notes on them. 3) MYPRODUCTS table: myprod_id myprod_pid PRODUCTS.prod_id = CART.cart_prodid = MYPRODUCTS.myprod_pid When a user orders, they are presented with a list of products on their order, and can optionally add that product to myproducts. I am getting the info necessary for them to do this, with a query something like this for each order: SELECT cart.pid, products.pname, products.pid FROM products, cart WHERE products.pid = cart_prodid AND cart_orderid=orderid This is fine the first time they order. However, if they subsequently reorder a product which they have already added to myproducts, then it should NOT be possible for them to add it to myproducts again - basically instead of 'Add to MyProducts' they need to see 'View in MyProducts'. I am thinking I can separate the products using two queries: Products never added to MyProducts By somehow identifying whether the user has the product in MyProducts already, and if so excluding it from the query above. Products already in MyProducts By reversing the process above. I need some pointers on how to do this though.


  • SQL GUID vs. Integer

    - by Dal
    Hi, I have recently started a new job and noticed that all the SQL tables use the GUID data type for the primary key. In my previous job we used integers (auto-increment) for the primary key, and it was a lot easier to work with, in my opinion. For example, say you had two related tables, Product and ProductType: I could easily cross-check the 'ProductTypeID' column of both tables for a particular row to quickly map the data in my head, because it's easy to hold a number (2, 4, 45, etc.) in your head, as opposed to (E75B92A3-3299-4407-A913-C5CA196B3CAB). The extra frustration comes from me wanting to understand how the tables are related, and sadly there is no database diagram :(

    A lot of people say that GUIDs are better because you can define the unique identifier in your C# code, for example using NewID(), without requiring SQL Server to do it - this also allows you to know provisionally what the ID will be... but I've seen that it is possible to retrieve the 'next auto-incremented integer' too. A DBA contractor reported that our queries could be up to 30% faster if we used the integer type instead of GUIDs... Why does the GUID data type exist, and what advantages does it really provide? Even if it's a choice by some professional, there must be some good reasons why it's implemented.
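    For comparison, the two styles side by side in T-SQL (a sketch; table and column names invented):

        -- auto-increment integer key: 4 bytes, generated by the server
        CREATE TABLE ProductTypeInt (
            ProductTypeID int IDENTITY(1,1) PRIMARY KEY,
            Name nvarchar(50)
        );

        -- GUID key: 16 bytes, can be generated by the server...
        CREATE TABLE ProductTypeGuid (
            ProductTypeID uniqueidentifier DEFAULT NEWID() PRIMARY KEY,
            Name nvarchar(50)
        );
        -- ...or by the client ahead of time (e.g. Guid.NewGuid() in C#),
        -- which is the "know the ID before the insert" advantage mentioned above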


  • AJAX CascadingDropDown ViewState Problem

    - by Steven
    Question: How do I maintain both the contents (from queries) and the selected value of both dropdowns after postback?

    Source Code: Download my source code from this link (link now works). Just add a reference to your AjaxControlToolkit.

    User Action: Select a value from each dropdown. Click Submit.

    After Postback: StatesDrop: (Selected value), CitiesDrop: "Select a City"

    Before and after: I believe that when the first dropdown gets its selected value, the second dropdown refreshes and therefore loses its selected value. C# answers also welcome.

    Default.aspx:

        Active States<br />
        <asp:DropDownList ID="StatesDrop" runat="server" /><br />
        Active Cities<br />
        <asp:DropDownList ID="CitiesDrop" runat="server" /><br />

        <ajax:CascadingDropDown ID="StatesCasc" TargetControlID="StatesDrop"
            ServicePath="WebService1.asmx" ServiceMethod="GetActiveStates"
            Category="States" runat="server" PromptText="Select a State" PromptValue="?" />

        <ajax:CascadingDropDown ID="CitiesCasc" TargetControlID="CitiesDrop"
            ServicePath="WebService1.asmx" ServiceMethod="GetActiveCities"
            Category="Cities" runat="server" ParentControlID="StatesDrop"
            PromptText="Select a City" PromptValue="?" />

    WebService1.asmx.vb:

        Imports System.Web.Services
        Imports System.Web.Services.Protocols
        Imports System.ComponentModel
        Imports System.Web.Script.Services
        Imports AjaxControlToolkit

        <System.Web.Script.Services.ScriptService()> _
        <System.Web.Services.WebService(Namespace:="http://tempuri.org/")> _
        <System.Web.Services.WebServiceBinding(ConformsTo:=WsiProfiles.BasicProfile1_1)> _
        <ToolboxItem(False)> _
        Public Class WebService1 : Inherits System.Web.Services.WebService

            <WebMethod()> _
            Public Function GetActiveStates(ByVal knownCategoryValues As String, _
                    ByVal category As String) As CascadingDropDownNameValue()
                Dim values As New List(Of CascadingDropDownNameValue)()
                ' Fill values array
                Return values.ToArray()
            End Function

            <WebMethod()> _
            Public Function GetActiveCities(ByVal knownCategoryValues As String, _
                    ByVal category As String) As CascadingDropDownNameValue()
                Dim values As New List(Of CascadingDropDownNameValue)()
                Dim kv As StringDictionary = _
                    CascadingDropDown.ParseKnownCategoryValuesString(knownCategoryValues)
                Dim SelState As String = ""
                If kv.ContainsKey("State") Then SelState = kv("State")
                ' Fill values array
                Return values.ToArray()
            End Function

        End Class

    Default.aspx.vb:

        Imports System.Web.Services
        Imports System.Web.Script.Services
        Imports AjaxControlToolkit

        Partial Public Class _Default
            Inherits System.Web.UI.Page

            Protected Sub Submit_Click(ByVal sender As Object, _
                    ByVal e As EventArgs) Handles SubmitBtn.Click
                ResultsGrid.DataBind()
            End Sub
        End Class


  • Graph-structured databases and PHP

    - by stagas
    I want to use a graph database from PHP. Can you point out some resources on where to get started? Is there any example code / tutorial out there? Or are there any other methods of storing data that relate to each other in totally random/abstract situations?

    A very abstract example of the relations needed: John relates to Mary, both relate to School, John is Tall, Mary is Short, John has Blue Eyes, Mary has Green Eyes. The query I want is: which people are related to 'Short people that have Green Eyes and go to School'? Answer: John.

    Another example of the relations:

        TrackA -> ArtistA, ArtistB
        AlbumA -> [ label A ], contains TrackA
        AlbumB -> [ label A ], contains TrackA
        [ Album C ] -> [ label B ], contains TrackA:Remix and TrackB
        TrackA:Remix -> Genre:House

    Example queries:

        * Which Genre is TrackB closer to? Answer: House - because it's related to Album C,
          which is related to TrackA, which is related to Genre:House.
        * Get all Genre:House-related albums of label A. Result: AlbumA, AlbumB - because
          they both have TrackA, which is related to Genre:House.

    It is possible in MySQL, but it would require a fixed set of attributes/columns for each item and a complex, non-flexible query. Instead, I need every attribute to be an item by itself and, instead of 'belonging' to something, to be 'related' to something.
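    One generic shape that fits this in plain MySQL is an edge list, where every attribute and relation is just a row (a sketch; table and column names invented):

        CREATE TABLE node (
            node_id INT AUTO_INCREMENT PRIMARY KEY,
            label   VARCHAR(100)      -- 'John', 'Green Eyes', 'School', 'TrackA', ...
        );

        CREATE TABLE edge (
            from_node INT NOT NULL,
            to_node   INT NOT NULL,
            relation  VARCHAR(50),    -- 'relates_to', 'has', 'goes_to', 'contains', ...
            PRIMARY KEY (from_node, to_node, relation)
        );

        -- "people related to short people with green eyes that go to school"
        -- then becomes one self-join on edge per condition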


  • Got a table of people, who I want to link to each other, many-to-many, with the links being bidirectional

    - by dflock
    Imagine you live in very simplified example land - and imagine that you've got a table of people in your MySQL database:

        create table person (
            person_id int,
            name text
        );

        select * from person;

        +-----------+-------+
        | person_id | name  |
        +-----------+-------+
        |         1 | Alice |
        |         2 | Bob   |
        |         3 | Carol |
        +-----------+-------+

    and these people need to collaborate/work together, so you've got a link table which links one person record to another:

        create table person__person (
            person__person_id int,
            person_id int,
            other_person_id int
        );

    This setup means that links between people are uni-directional - i.e. Alice can link to Bob without Bob linking to Alice and, even worse, Alice can link to Bob and Bob can link to Alice at the same time, in two separate link records. As these links represent working relationships, in the real world they're all two-way mutual relationships. The following are all possible in this setup:

        select * from person__person;

        +-------------------+-----------+-----------------+
        | person__person_id | person_id | other_person_id |
        +-------------------+-----------+-----------------+
        |                 1 |         1 |               2 |
        |                 2 |         2 |               1 |
        |                 3 |         2 |               2 |
        |                 4 |         3 |               1 |
        +-------------------+-----------+-----------------+

    For example, with person__person_id = 4 above, when you view Carol's (person_id = 3) profile you should see a relationship with Alice (person_id = 1), and when you view Alice's profile you should see a relationship with Carol, even though the link goes the other way. I realize that I can do union and distinct queries and whatnot to present the relationships as mutual in the UI, but is there a better way? I've got a feeling that there is a better way, one where this issue would neatly melt away by setting up the database properly, but I can't see it. Anyone got a better idea?
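    One shape that gets suggested for this (a sketch; the view name is invented): store each pair exactly once in canonical order - say, lower person_id first - and expand both directions with a view:

        -- each relationship is stored once, with person_id < other_person_id
        -- (older MySQL parses but does not enforce CHECK constraints,
        --  so the ordering may need to be enforced in application code)

        create view mutual_link as
            select person_id, other_person_id from person__person
            union all
            select other_person_id as person_id, person_id as other_person_id
            from person__person;

        -- "everyone linked to Carol", regardless of which way the row points:
        select p.name
        from mutual_link m
        join person p on p.person_id = m.other_person_id
        where m.person_id = 3;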

