Search Results

  • Rails - single ID for multiple models

    - by user352351
I'm building an app which will allow a user to scan the barcode on a 'shelf', 'box' or 'product', which will then bring up that particular item or all the associated items. As these are all separate models with their own IDs, I need a global ID table. I was thinking of a polymorphic table called 'barcodes':

        barcodes
            id
            barcode_number
            barcodable

    Is there an easy way to do this? Or is polymorphic the best way?
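
    For illustration, here is a minimal sketch of the polymorphic table the asker describes, in SQL terms (MySQL syntax; the column names beyond the asker's own are assumptions, following the Rails convention of pairing a *_id column with a *_type column):

        CREATE TABLE barcodes (
            id              INT AUTO_INCREMENT PRIMARY KEY,  -- the single global ID
            barcode_number  VARCHAR(64) NOT NULL,
            barcodable_id   INT         NOT NULL,            -- ID of the shelf, box or product
            barcodable_type VARCHAR(32) NOT NULL             -- 'Shelf', 'Box' or 'Product'
        );

        -- Looking up a scanned barcode yields the owning model's type and ID:
        SELECT barcodable_type, barcodable_id
        FROM barcodes
        WHERE barcode_number = '012345678905';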

  • Questions regarding MS Access databases [on hold]

    - by user2977751
Good day! I am using MS Access 2007. I was wondering: how can I link a specific item in one table to another item in another table? For example, if I search for a particular product, all the suppliers of that product should be displayed. I want to connect my supply field to the supplier fields in different tables, so that when I search for a particular supply, all the suppliers for that supply will be shown. Is it also possible to link the records into a pre-set template in Excel? Thanks!
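
    As a sketch of what such a link usually looks like in relational terms (the table and column names here are assumptions, not the asker's actual schema), a junction table joins supplies to their suppliers; Access SQL requires the parentheses around the first join:

        SELECT Suppliers.SupplierName
        FROM (Supplies
              INNER JOIN SupplierSupplies ON Supplies.SupplyID = SupplierSupplies.SupplyID)
              INNER JOIN Suppliers ON SupplierSupplies.SupplierID = Suppliers.SupplierID
        WHERE Supplies.SupplyName = 'Copper wire';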

  • php mysql_fetch_array() error

    - by user1877823
I am getting this error while trying to delete a record. The query is working, but this line remains on the page. I want the echo "Deleted" inside the while loop to show up, but the while loop is not executing. I have tried and searched a lot and nothing helps:

        mysql_fetch_array() expects parameter 1 to be resource, boolean given in delete.php on line 27

    delete.php:

        <html>
        <body>
        <form method="post">
        Id : <input type="text" name="id">
        Name : <input type="text" name="name">
        Description : <input type="text" name="des">
        <input type="submit" value="delete" name="delete">
        </form>
        <?php
        include("connect.php");
        $id = $_POST['id'];
        $name = $_POST['name'];
        $des = $_POST['des'];
        $result = mysql_query("DELETE FROM fact WHERE id='$id'") or die(mysql_error());
        while($row = mysql_fetch_array($result))
        {
            echo "Deleted";
        }
        mysql_close($con);
        ?>
        </body>
        </html>

    connect.php:

        <?php
        $con = mysql_connect("localhost","root","");
        if (!$con)
        {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("Dataentry", $con);
        ?>

    How should I make the while loop work?

  • "=null" and select statement!

    - by user329820
Hi, I have asked this question before in this forum and was told that it will return an empty result set. I want to know: if I set the column to null values, will it still return an empty result set? Also, ANSI_NULLS is OFF. Thanks.

        SELECT 'A' FROM T WHERE A = NULL;
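
    For reference, a minimal SQL Server sketch of the behavior in question (T and A are the asker's names; the table contents are assumed):

        -- With the default SET ANSI_NULLS ON, any comparison to NULL evaluates
        -- to UNKNOWN, so this returns an empty result set even if A holds NULLs:
        SELECT 'A' FROM T WHERE A = NULL;

        -- The standard way to match NULLs:
        SELECT 'A' FROM T WHERE A IS NULL;

        -- With SET ANSI_NULLS OFF (deprecated), = NULL behaves like IS NULL,
        -- and the first query returns one row per NULL in column A.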

  • Clustered Index

    - by Charu
Which type of index (clustered or nonclustered) should be used for INSERT/UPDATE/DELETE statements in SQL Server? I know it creates additional overhead, but is it better in performance than a nonclustered index? Also, which index should be used for SELECT statements in SQL Server?
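
    To illustrate the terminology, a minimal T-SQL sketch with a hypothetical Orders table: a table gets at most one clustered index, which defines the physical order of the rows themselves, plus any number of nonclustered indexes:

        CREATE TABLE Orders (
            OrderID    INT      NOT NULL,
            CustomerID INT      NOT NULL,
            OrderDate  DATETIME NOT NULL
        );

        -- One clustered index per table: the table's rows are stored in this order.
        CREATE CLUSTERED INDEX IX_Orders_OrderID ON Orders (OrderID);

        -- Nonclustered indexes are separate structures pointing back at the rows;
        -- each one adds maintenance cost to every INSERT/UPDATE/DELETE.
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON Orders (CustomerID);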

  • .NET DB Query Without Allocations?

    - by Michael Covelli
I have been given the task of rewriting some libraries written in C# so that there are no allocations once startup is completed. I just got to one project that does some DB queries over an OdbcConnection every 30 seconds. I've always just used .ExecuteReader(), which creates an OdbcDataReader. Is there any pattern (like the SocketAsyncEventArgs socket pattern) that lets you re-use your own OdbcDataReader? Or some other clever way to avoid allocations? I haven't bothered to learn LINQ since all the databases at work are Oracle-based and, the last I checked, there was no official LINQ to Oracle provider. But if there's a way to do this in LINQ, I could use one of the third-party providers.

  • What statistics app should I use for my website?

    - by Camran
I have my own server (with root access). I need statistics on users who visit my website, etc. I have looked at an app called Webalizer... is this a good choice? I run Apache 2 on an Ubuntu 9 system. If you know of any good statistics apps for servers, please let me know. And a follow-up question: all statistics are saved in log files, right? So how large would these log files become? The possibility to split them would be good; I don't know if that is possible with Webalizer, though.

  • Which of these queries is more efficient?

    - by Brian
Which of these queries is more efficient?

        select 1 as newAndClosed
        from sysibm.sysdummy1
        where exists (
                select 1
                from items
                where new = 1
            )
            and not exists (
                select 1
                from status
                where open = 1
            )

        select 1 as newAndClosed
        from items
        where new = 1
            and not exists (
                select 1
                from status
                where open = 1
            )

  • Check mysql_query results to see if a DELETE query worked?

    - by Camran
I have a DELETE query which deletes a record from a MySQL db. Is there any way to make sure the delete was actually performed? I mean, for a query to FIND stuff you do:

        $res = mysql_query($var);
        $nr = mysql_num_rows($res);

    and you get the number of rows returned. Is there any similar method for deletion of records? Thanks
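
    On the MySQL side this information is reported by ROW_COUNT(); old-style PHP exposes the same number through mysql_affected_rows(). A minimal sketch against a hypothetical ads table:

        DELETE FROM ads WHERE id = 42;

        -- Returns the number of rows affected by the preceding statement;
        -- 0 means nothing matched, i.e. nothing was deleted.
        SELECT ROW_COUNT();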

  • Solr authentication possible? (or apache port authentication would also work)

    - by Camran
Currently anybody can access the Solr admin page by going to my_ip:8983/solr. I can't have it like that, so how can I make it prompt for a password or something? I have set up my server's apache2.conf file to prompt for a password whenever my site is accessed via www.mydomain.com. But when using another port, the password prompt won't show up. Any ideas how to secure this? Don't point me to the SolrSecurity wiki, because it's simply too outdated; I have tried it without luck. Thanks

  • How to break up a table holding 100 million+ records?

    - by Chiao
We're currently storing answers to 52 predefined questions for our clients on our matchmaking site. We have over 30 million unique users, which adds up to a worst case of 52 × 30 million rows. Of these 52 questions, 11 are required and always answered. Our previous solution was to open a separate answer table for each question. This distributed our answer rows for faster insert/delete/update, but it also forced unconventional programming, such as dynamically creating a table each time a question is added or updated, or removing an answer table when it is destroyed permanently. We want to come up with a better solution for our third version but couldn't get very far yet. Any ideas on how to accomplish this in another, perhaps more conventional, way?
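
    One conventional alternative, sketched below under assumed names (MySQL syntax), is a single answers table keyed by user and question, so that adding question 53 is an INSERT into questions rather than a new table; at this scale the table would likely also be partitioned, e.g. by question_id:

        CREATE TABLE questions (
            question_id SMALLINT PRIMARY KEY,
            is_required TINYINT(1) NOT NULL DEFAULT 0
        );

        CREATE TABLE answers (
            user_id     INT          NOT NULL,
            question_id SMALLINT     NOT NULL,
            answer      VARCHAR(255) NOT NULL,
            PRIMARY KEY (user_id, question_id),   -- one answer per user per question
            FOREIGN KEY (question_id) REFERENCES questions (question_id)
        );

        -- All answers of one user: one indexed lookup instead of 52 tables.
        SELECT question_id, answer FROM answers WHERE user_id = 12345;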

  • many-to-many relationship in CI (not using ORM)

    - by Ross
I'm implementing a categories system in my CI app and trying to work out the best way of handling many-to-many relationships. I'm not using an ORM at this stage, but could use, say, Doctrine if necessary. Each entry may have multiple categories. I have three tables (simplified):

        Entries:        entryID, entryName
        Categories:     categoryID, categoryname
        Entry_Category: entryID, categoryID

    My CI code returns a record set like this: entryID, entryName, categoryID, categoryName. But, as expected with many-to-many relationships, each "entry" is repeated for each "category". What would be the best way to "group" the categories so that when I output the results, I am left with something like:

        Entry Name
        Appears in Category: Foo, Bar

    rather than:

        Entry Name
        Appears in Category: Foo
        Entry Name
        Appears in Category: Bar

    I believe the option is to track whether the post ID matches a previous entry, and if so, store the respective category and output it as one rather than several, but I am unsure of how to do this in CI. Thanks for any pointers (I appreciate this may be a vague/complex question without better knowledge of the system).
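
    If the database is MySQL, one common way to push the grouping into the query itself, assuming the asker's schema above, is GROUP_CONCAT, which returns each entry as one row with its categories already joined into a single string:

        SELECT e.entryID,
               e.entryName,
               GROUP_CONCAT(c.categoryname SEPARATOR ', ') AS categories
        FROM Entries e
        JOIN Entry_Category ec ON ec.entryID = e.entryID
        JOIN Categories c      ON c.categoryID = ec.categoryID
        GROUP BY e.entryID, e.entryName;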

  • stuck with creating rent table

    - by From.ME.to.YOU
I want to create a PHP and MySQL app to do the following: let's say I have a shop I want to rent out; rent will be weekly or monthly. I'm searching for the best way to create this table, so I can run easy queries to calculate free weeks or months. EDIT: let's say I have ID, START_DATE, RENTING_TYPE, CLIENT_ID, where START_DATE is the start date for renting and RENTING_TYPE is weekly or monthly. How should I run a query to find all the empty weeks or months so that new clients may reserve that week/month? For example, one client reserves the month of July and another client reserves the first week of June; if a new client logs in to my system and wants to check all the available weeks/months, how can I achieve that?
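
    One way to sidestep the weekly/monthly flag, sketched here with assumed names (MySQL syntax), is to store each rental as a date range; a requested period is then free exactly when no existing rental overlaps it:

        CREATE TABLE rentals (
            id         INT AUTO_INCREMENT PRIMARY KEY,
            client_id  INT  NOT NULL,
            start_date DATE NOT NULL,
            end_date   DATE NOT NULL     -- start + 1 week or + 1 month
        );

        -- A requested period is available if nothing overlaps it.
        -- Two ranges overlap when each one starts before the other ends:
        SET @start = '2010-06-01', @end = '2010-06-08';
        SELECT NOT EXISTS (
            SELECT 1 FROM rentals
            WHERE start_date < @end AND end_date > @start
        ) AS is_available;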

  • How should I structure my settings table with MySQL?

    - by incrediman
I want to use MySQL to store a bunch of admin settings. What's the best way to structure the table? Like this?

        Setting  | Value
        ---------|------
        setting1 | a
        setting2 | b
        setting3 | c
        setting4 | d
        setting5 | e

    Or like this?

                 | setting1 | setting2 | setting3 | setting4 | setting5
        Settings | a        | b        | c        | d        | e

    Or maybe some other way?
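
    For what it's worth, the first (key-value) layout is the one that needs no ALTER TABLE when a setting is added; a minimal sketch under assumed names:

        CREATE TABLE settings (
            name  VARCHAR(64)  PRIMARY KEY,   -- 'setting1', 'setting2', ...
            value VARCHAR(255) NOT NULL
        );

        -- Reading one setting:
        SELECT value FROM settings WHERE name = 'setting3';

        -- Adding a new setting is a row, not a schema change:
        INSERT INTO settings (name, value) VALUES ('setting6', 'f');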

  • DB strategy for inserting into a high-read table (SQL Server)

    - by Tom
Looking for strategies for a very large table whose data is maintained for reporting and historical purposes, while a very small subset of that data is used in daily operations.

    Background: We have Visitor and Visits tables which are continuously updated by our consumer-facing site. These tables contain information on every visit and visitor, including bots and crawlers, direct traffic that does not result in a conversion, etc. Our back-end site allows management of the visitors (leads) from the front-end site. Most of the management occurs on a small subset of our visitors (visitors that become leads); the vast majority of the data in our Visitor and Visits tables is maintained only for reporting-type functionality. This is NOT an indexing problem; we have done all we can with indexing and with keeping our indexes clean, small, and unfragmented. PS: we do not currently have the budget or expertise for a data warehouse.

    The problem: We would like the system to be more responsive to our end users when they are querying, for instance, the list of their assigned leads. Currently that query runs against a huge data set of mostly irrelevant rows.

    I am pondering a few ideas. One involves new tables and a fairly major re-architecture; I'm not asking for help on that. The other involves creating redundant data, for instance a Visitor_Archive and a Visitor_Small table, where the larger Visitor and Visits tables exist for inserts and history/reporting, and the smaller Visitor_Small table exists for managing leads (sending a lead an email, looking up a lead's phone number, listing my assigned leads, etc.). The reason I am reaching out is that I would love opinions on the best way to keep the Visitor_Archive and Visitor_Small tables in sync. Replication? Can I use replication to replicate only data with a certain column value (FooID = x)? Any other strategies?
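
    On the replication question: SQL Server transactional and merge replication can indeed filter an article with a row filter (a WHERE clause on a column value). A lower-tech alternative is a trigger that maintains the small table itself; a rough T-SQL sketch with assumed column names, using the asker's FooID = 1 to stand in for "is a lead":

        CREATE TRIGGER trg_Visitor_SyncSmall
        ON dbo.Visitor
        AFTER INSERT, UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- Copy only lead rows into the small working-set table.
            MERGE dbo.Visitor_Small AS t
            USING (SELECT VisitorID, Email, Phone
                   FROM inserted WHERE FooID = 1) AS s
                ON t.VisitorID = s.VisitorID
            WHEN MATCHED THEN
                UPDATE SET Email = s.Email, Phone = s.Phone
            WHEN NOT MATCHED THEN
                INSERT (VisitorID, Email, Phone)
                VALUES (s.VisitorID, s.Email, s.Phone);
        END;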

  • Why is it called NoSQL?

    - by beef jerky
I've recently worked with MongoDB and learned about its schemaless design. However, I'm confused by the term NoSQL. Why is it called that? Doesn't it use SQL or SQL-like queries? I've also read in an article that the main difference lies in how data is stored; in the case of MongoDB, it's stored as JSON-like documents. Is this true? Also, I'm confused about why I always see 'NoSQL vs relational databases'. Aren't NoSQL databases relational? I believe documents in MongoDB are still related/linked through some keys (please correct me if I'm wrong). So why are they labeled as non-relational? Thanks in advance!

  • Normalizing two types of customers into one table

    - by JDewzy
I am trying to model a sales situation where you can sell to a person or to a business with a contact person. I cannot figure out the proper way to do this. It seems like two tables would be incorrect, but how do I model a Customer table that can hold either a business or a person? Would I just have a boolean for "business" and an additional "business_name" field that defaults to NULL? But then I have to do an if/then on the columns, and that seems like poor design. Any advice, direction, or links are appreciated.
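
    One common shape for this is a supertype/subtype pair rather than a boolean plus a nullable column; a sketch under assumed names (MySQL syntax):

        CREATE TABLE customers (
            customer_id  INT AUTO_INCREMENT PRIMARY KEY,
            contact_name VARCHAR(100) NOT NULL      -- the person, or the business contact
        );

        -- Present only for business customers (one-to-one with customers):
        CREATE TABLE business_customers (
            customer_id   INT PRIMARY KEY,
            business_name VARCHAR(100) NOT NULL,
            FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
        );

        -- A customer is a business exactly when a matching subtype row exists:
        SELECT c.contact_name, b.business_name
        FROM customers c
        LEFT JOIN business_customers b ON b.customer_id = c.customer_id;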

  • Understanding LINQ to SQL (11) Performance

    - by Dixin
    [LINQ via C# series]

    LINQ to SQL has a lot of great features, like strong typing, query compilation, deferred execution, a declarative paradigm, etc., which are very productive. Of course, these cannot be free, and one price is performance.

    O/R mapping overhead

    Because LINQ to SQL is based on O/R mapping, one obvious overhead is that changing data usually requires retrieving data first:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                Product product = database.Products.Single(item => item.ProductID == id); // SELECT...
                product.UnitPrice = unitPrice; // UPDATE...
                database.SubmitChanges();
            }
        }

    Before updating an entity, that entity has to be retrieved by an extra SELECT query. This is slower than a direct data update via ADO.NET:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (SqlConnection connection = new SqlConnection(
                "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True"))
            using (SqlCommand command = new SqlCommand(
                @"UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID",
                connection))
            {
                command.Parameters.Add("@ProductID", SqlDbType.Int).Value = id;
                command.Parameters.Add("@UnitPrice", SqlDbType.Money).Value = unitPrice;

                connection.Open();
                command.Transaction = connection.BeginTransaction();
                command.ExecuteNonQuery(); // UPDATE...
                command.Transaction.Commit();
            }
        }

    The above imperative code specifies the "how to do" details and has better performance. For the same reason, some articles on the Internet insist that, when updating data via LINQ to SQL, the above declarative code should be replaced by:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.ExecuteCommand(
                    "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}",
                    unitPrice, id);
            }
        }

    Or just create a stored procedure:

        CREATE PROCEDURE [dbo].[UpdateProductUnitPrice]
        (
            @ProductID INT,
            @UnitPrice MONEY
        )
        AS
        BEGIN
            BEGIN TRANSACTION
            UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID
            COMMIT TRANSACTION
        END

    and map it as a method of NorthwindDataContext (explained in this post):

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.UpdateProductUnitPrice(id, unitPrice);
            }
        }

    As a normal trade-off for O/R mapping, a decision has to be made between performance overhead and programming productivity, according to the case. From a developer's perspective, if O/R mapping is chosen, I consistently choose the declarative LINQ code, unless this kind of overhead is unacceptable.

    Data retrieving overhead

    After the O/R-mapping-specific issue, now look into the LINQ to SQL specific issues, for example, performance of the data retrieving process. The previous post explained that the SQL translating and executing is complex. Actually, the LINQ to SQL pipeline is similar to a compiler pipeline. It consists of about 15 steps to translate a C# expression tree to a SQL statement, which can be categorized as:

        - Convert: invoke SqlProvider.BuildQuery() to convert the tree of Expression nodes into a tree of SqlNode nodes;
        - Bind: use the visitor pattern to figure out the meanings of names according to the mapping info, like a property for a column, etc.;
        - Flatten: figure out the hierarchy of the query;
        - Rewrite: for SQL Server 2000, if needed;
        - Reduce: remove the unnecessary information from the tree;
        - Format: generate the SQL statement string;
        - Parameterize: figure out the parameters; for example, a reference to a local variable should be a parameter in SQL;
        - Materialize: execute the reader and convert the results back into typed objects.

    So for each data retrieval, even one which looks simple, LINQ to SQL goes through the above steps to translate and execute the query:

        private static Product[] RetrieveProducts(int productId)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                return database.Products.Where(product => product.ProductID == productId)
                    .ToArray();
            }
        }

    Fortunately, there is a built-in way to cache the translated query.

    Compiled query

    When such a LINQ to SQL query is executed repeatedly, CompiledQuery can be used to translate the query once and execute it multiple times:

        internal static class CompiledQueries
        {
            private static readonly Func<NorthwindDataContext, int, Product[]> _retrieveProducts =
                CompiledQuery.Compile((NorthwindDataContext database, int productId) =>
                    database.Products.Where(product => product.ProductID == productId).ToArray());

            internal static Product[] RetrieveProducts(
                this NorthwindDataContext database, int productId)
            {
                return _retrieveProducts(database, productId);
            }
        }

    The new version of RetrieveProducts() gets better performance, because only when _retrieveProducts is invoked for the first time does it internally invoke SqlProvider.Compile() to translate the query expression. It also uses a lock to make sure the translation happens only once in multi-threading scenarios.

    Static SQL / stored procedures without translating

    Another way to avoid the translating overhead is to use static SQL or stored procedures, just as in the above examples. Because this is a functional programming series, this article does not dive into that. For the details, Scott Guthrie already has some excellent articles:

        LINQ to SQL (Part 6: Retrieving Data Using Stored Procedures)
        LINQ to SQL (Part 7: Updating our Database using Stored Procedures)
        LINQ to SQL (Part 8: Executing Custom SQL Expressions)

    Data changing overhead

    Looking into the data updating process, it also involves a lot of work:

        - Begins a transaction.
        - Processes the changes (ChangeProcessor):
            - walks through the objects to identify the changes;
            - determines the order of the changes;
            - executes the changes: LINQ queries may be needed to execute them (as in the first example in this article, where an object has to be retrieved before being changed, so the whole data retrieving process above is gone through again);
            - if there is user customization, it is executed (for example, a table's INSERT / UPDATE / DELETE can be customized in the O/R designer).

    It is important to keep this overhead in mind.

    Bulk deleting / updating

    Another thing to be aware of is bulk deleting:

        private static void DeleteProducts(int categoryId)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.Products.DeleteAllOnSubmit(
                    database.Products.Where(product => product.CategoryID == categoryId));
                database.SubmitChanges();
            }
        }

    The expected SQL would be something like:

        BEGIN TRANSACTION
        exec sp_executesql N'DELETE FROM [dbo].[Products] AS [t0]
        WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9
        COMMIT TRANSACTION

    However, as mentioned before, the actual SQL retrieves the entities and then deletes them one by one:

        -- Retrieves the entities to be deleted:
        exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID],
        [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock],
        [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
        FROM [dbo].[Products] AS [t0]
        WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9

        -- Deletes the retrieved entities one by one:
        BEGIN TRANSACTION
        exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1)
        AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL)
        AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5)
        AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',
        N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',
        @p0=78,@p1=N'Optimus Prime',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
        exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1)
        AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL)
        AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5)
        AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',
        N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',
        @p0=79,@p1=N'Bumble Bee',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
        -- ...
        COMMIT TRANSACTION

    The same goes for bulk updating. This is really not efficient, and something to be aware of. There are already some solutions on the Internet, like this one. The idea is to wrap the above SELECT statement in an INNER JOIN:

        exec sp_executesql N'DELETE [dbo].[Products] FROM [dbo].[Products] AS [j0]
        INNER JOIN (
            SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID],
                   [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder],
                   [t0].[ReorderLevel], [t0].[Discontinued]
            FROM [dbo].[Products] AS [t0]
            WHERE [t0].[CategoryID] = @p0) AS [j1]
        ON ([j0].[ProductID] = [j1].[ProductID])', -- The primary key
        N'@p0 int',@p0=9

    Query plan overhead

    The last thing is about the SQL Server query plan. Before .NET 4.0, LINQ to SQL had an issue (not sure if it is a bug): it internally uses ADO.NET, but it does not set SqlParameter.Size for variable-length arguments, like arguments of NVARCHAR type, etc. So consider two queries with the same SQL but different argument lengths:

        using (NorthwindDataContext database = new NorthwindDataContext())
        {
            database.Products.Where(product => product.ProductName == "A")
                .Select(product => product.ProductID).ToArray();

            // The same SQL and argument type, different argument length.
            database.Products.Where(product => product.ProductName == "AA")
                .Select(product => product.ProductID).ToArray();
        }

    Pay attention to the argument length in the translated SQL:

        exec sp_executesql N'SELECT [t0].[ProductID]
        FROM [dbo].[Products] AS [t0]
        WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(1)',@p0=N'A'

        exec sp_executesql N'SELECT [t0].[ProductID]
        FROM [dbo].[Products] AS [t0]
        WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(2)',@p0=N'AA'

    Here is the overhead: the first query's query plan cache is not reused by the second one:

        SELECT sys.syscacheobjects.cacheobjtype, sys.dm_exec_cached_plans.usecounts, sys.syscacheobjects.[sql]
        FROM sys.syscacheobjects
        INNER JOIN sys.dm_exec_cached_plans
        ON sys.syscacheobjects.bucketid = sys.dm_exec_cached_plans.bucketid;

    They actually use different query plans. Again, pay attention to the argument length in the [sql] column (@p0 nvarchar(2) / @p0 nvarchar(1)). Fortunately, in .NET 4.0 this is fixed:

        internal static class SqlTypeSystem
        {
            private abstract class ProviderBase : TypeSystemProvider
            {
                protected int? GetLargestDeclarableSize(SqlType declaredType)
                {
                    SqlDbType sqlDbType = declaredType.SqlDbType;
                    if (sqlDbType <= SqlDbType.Image)
                    {
                        switch (sqlDbType)
                        {
                            case SqlDbType.Binary:
                            case SqlDbType.Image:
                                return 8000;
                        }
                        return null;
                    }

                    if (sqlDbType == SqlDbType.NVarChar)
                    {
                        return 4000; // Max length for NVARCHAR.
                    }

                    if (sqlDbType != SqlDbType.VarChar)
                    {
                        return null;
                    }

                    return 8000;
                }
            }
        }

    In the above example, the translated SQL becomes:

        exec sp_executesql N'SELECT [t0].[ProductID]
        FROM [dbo].[Products] AS [t0]
        WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'A'

        exec sp_executesql N'SELECT [t0].[ProductID]
        FROM [dbo].[Products] AS [t0]
        WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'AA'

    So they reuse the same query plan cache: the [usecounts] column is now 2.

  • Oracle Database 11g in action: Real user experiences

    - by Lajos Sárecz
Last year Mike Dietrich also held an Upgrade Workshop in Hungary; in the video below he shares a few interesting customer stories about why it is worth switching to Oracle Database 11g, and what benefits customers who have already completed the upgrade have gained. If the international examples are not convincing enough, in three weeks the "Korszerű adatközpontok" ("Modern data centers") track of the HOUG Conference will also feature the 11g upgrade stories of Hungarian customers.
