Search Results

Search found 32538 results on 1302 pages for 'restore database'.


  • MySQL query question

    - by brux
    I have 2 tables:

    Customer: customerid (int, primary key, auto-increment), fname (varchar), sname (varchar), housenum (varchar), street (varchar)

    Items: itemid (int, primary key, auto-increment), type (varchar), collectiondate (date), releasedate (date), customerid (int)

    I need a query that gets me all items whose releasedate falls within the 3 days prior to (and including) the current date. The query should return customerid, fname, sname, street, housenum, type and releasedate for all such items. Thanks in advance.
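
    A minimal sketch of one way this could be written in MySQL, assuming the table and column names above and interpreting the window as today plus the three days before it:

    SELECT c.customerid, c.fname, c.sname, c.street, c.housenum,
           i.type, i.releasedate
    FROM Items i
    JOIN Customer c ON c.customerid = i.customerid
    -- releasedate between three days ago and today, inclusive
    WHERE i.releasedate BETWEEN DATE_SUB(CURDATE(), INTERVAL 3 DAY) AND CURDATE();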

    Read the article

  • PHP - How to retrieve a session in PHP

    - by Klaus Jasper
    I created a table that contains id, names, and jobs, and a page that shows only the names; beside each name there is a Job button, and a session that contains the id. This is my code:

    $query = mysql_query("SELECT * FROM table");
    while($fetch = mysql_fetch_array($query)){
        $name = $fetch['names'];
        $id = $fetch['id'];
        echo '<br/>';
        echo $name;
        $_SESSION['name'] = $id;
        echo "<button>Job</button>";
    }

    I want the user, when they click a Job button, to be redirected to a page that shows the job for that session. How can I do that?

    Read the article

  • Why should I abstract my data layer?

    - by Gazillion
    OOP principles were difficult for me to grasp because for some reason I could never apply them to web development. As I developed more and more projects I started to understand how certain design patterns could make parts of my code easier to read, reuse, and maintain, so I started using them more and more. The one thing I still can't quite comprehend is why I should abstract my data layer. Basically, if I need to print a list of items stored in my DB to the browser, I do something along the lines of:

    $sql = 'SELECT * FROM table WHERE type = "type1"';
    $result = mysql_query($sql);
    while($row = mysql_fetch_assoc($result)) {
        echo '<li>'.$row['name'].'</li>';
    }

    I'm reading all these how-tos and articles preaching the greatness of PDO, but I don't understand why. I don't seem to be saving any lines of code, and I don't see how it would be more reusable, because all the functions I call above just seem to be encapsulated in a class that does the exact same thing. The only advantage I see in PDO is prepared statements. I'm not saying data abstraction is a bad thing; I'm asking these questions because I'm trying to design my current classes correctly, and they need to connect to a DB, so I figured I'd do this the right way. Maybe I'm just reading bad articles on the subject :) I would really appreciate any advice, links, or concrete real-life examples on the subject!

    Read the article

  • With SQL can you use a sub-query in a WHERE LIKE clause?

    - by Jason
    I'm not even sure how to phrase this as it sounds weird conceptually, but I'll give it a try. Basically I'm looking for a way to create a query that is essentially a WHERE IN LIKE SELECT statement. As an example, if I wanted to find all user records with a hotmail.com email address, I could do something like:

    SELECT UserEmail FROM Users WHERE (UserEmail LIKE '%hotmail.com')

    But what if I wanted to use a subquery as the matching criteria? Something like this:

    SELECT UserEmail FROM Users WHERE (UserEmail LIKE (SELECT '%' + Domain FROM Domains))

    Is that even possible? If so, what's the right syntax?
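
    A minimal sketch of one common workaround, assuming SQL Server syntax and the Users and Domains tables above: rewrite the subquery as a join so each email is tested against every domain pattern.

    -- one row per user that matches at least one domain pattern
    SELECT DISTINCT u.UserEmail
    FROM Users u
    JOIN Domains d ON u.UserEmail LIKE '%' + d.Domain;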

    Read the article

  • What tool for managing Oracle DB do you suggest?

    - by Artic
    What tool for managing Oracle DB do you suggest? I need to execute scripts, manage data in tables, and develop some scripts and packages. I've tried SQL Developer and actually don't like it. I want more features for development (debugging, code assist, integrated help and so on).

    Read the article

  • merging two tables while applying aggregates on the duplicates (max, min and sum)

    - by cloudraven
    I have a table (let's call it log) with a few million records. Among the fields I have Id, Count, FirstHit and LastHit:

    Id - the record id
    Count - number of times this Id has been reported
    FirstHit - earliest timestamp with which this Id was reported
    LastHit - latest timestamp with which this Id was reported

    This table has only one record for any given Id. Every day I get another table (let's call it feed) with around half a million records, with these fields among many others:

    Id
    Timestamp - entry date and time

    This table can have many records for the same Id. What I want to do is update log in the following way:

    Count - the Count value in log, plus the count() of records for that Id found in feed
    FirstHit - the earlier of the current value in log and the minimum value in feed for that Id
    LastHit - the later of the current value in log and the maximum value in feed for that Id

    It should be noted that many of the Ids in feed are already in log. The simple thing that worked is to create a temporary table and insert into it the union of both, as in:

    SELECT Id, MIN(Timestamp) AS FirstHit, MAX(Timestamp) AS LastHit, COUNT(*) AS Count
    FROM feed GROUP BY Id
    UNION ALL
    SELECT Id, FirstHit, LastHit, Count FROM log;

    From that temporary table I do a select that aggregates MIN(FirstHit), MAX(LastHit) and SUM(Count):

    SELECT Id, MIN(FirstHit), MAX(LastHit), SUM(Count) FROM @temp GROUP BY Id;

    That gives me the end result. I could then delete everything from log and replace it with everything in temp, or craft an update for the common records and an insert for the new ones. However, I think both are highly inefficient. Is there a more efficient way of doing this, perhaps doing the update in place in the log table?
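
    A minimal sketch of an in-place alternative, assuming SQL Server (suggested by the @temp variable) and the column names above: aggregate feed once, then MERGE it into log so existing Ids are updated and new Ids are inserted in a single statement.

    ;WITH f AS (
        -- collapse feed to one row per Id first
        SELECT Id,
               MIN([Timestamp]) AS FirstHit,
               MAX([Timestamp]) AS LastHit,
               COUNT(*)         AS Cnt
        FROM feed
        GROUP BY Id
    )
    MERGE log AS l
    USING f ON l.Id = f.Id
    WHEN MATCHED THEN UPDATE SET
        l.[Count]  = l.[Count] + f.Cnt,
        l.FirstHit = CASE WHEN f.FirstHit < l.FirstHit THEN f.FirstHit ELSE l.FirstHit END,
        l.LastHit  = CASE WHEN f.LastHit  > l.LastHit  THEN f.LastHit  ELSE l.LastHit  END
    WHEN NOT MATCHED THEN
        INSERT (Id, [Count], FirstHit, LastHit)
        VALUES (f.Id, f.Cnt, f.FirstHit, f.LastHit);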

    Read the article

  • DB optimization to use it as a queue

    - by anony
    We have a table called worktable which has these columns: key (primary key), ptime, aname, status, content. A producer inserts rows into this table, and a consumer does an ORDER BY on the key column and fetches the first row whose status is 'pending'. The consumer then does some processing on this row:

    1. updates status to 'processing'
    2. does some processing using content
    3. deletes the row

    We are facing contention issues when we try to run multiple consumers (probably due to the ORDER BY, which does a full table scan). Using Advanced Queues would be our next step, but before we go there we want to check the maximum throughput we can achieve with multiple consumers and a producer on this table. What optimizations can we do to get the best numbers possible? Can we do in-memory processing where a consumer fetches 1000 rows at a time, processes them and deletes them? Will that improve things? What other possibilities are there: partitioning of the table, parallelization, index-organized tables?
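
    A minimal sketch of one way to reduce consumer contention, assuming Oracle (Advanced Queues and index-organized tables are mentioned) and the worktable columns above: each consumer claims a pending row with FOR UPDATE SKIP LOCKED (documented from Oracle 11g), so concurrent consumers skip rows that are already locked instead of blocking on them. An index on (status, key) would also let the scan avoid reading the whole table.

    DECLARE
        CURSOR c IS
            SELECT key, content
            FROM worktable
            WHERE status = 'pending'
            ORDER BY key
            FOR UPDATE SKIP LOCKED;
    BEGIN
        FOR r IN c LOOP
            UPDATE worktable SET status = 'processing' WHERE CURRENT OF c;
            -- ... do the real work with r.content here ...
            DELETE FROM worktable WHERE CURRENT OF c;
            EXIT;   -- handle a single row per invocation
        END LOOP;
        COMMIT;
    END;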

    Read the article

  • Logic: Best way to sample & count bytes of a 100MB+ file

    - by Jami
    Let's say I have a 170 MB file (roughly 180 million bytes). What I need to do is create a table that lists:

    1. all 4096-byte combinations found (column 'bytes'), and
    2. the number of times each byte combination appeared in the file (column 'occurrences')

    Assume two things: I can save data very fast, but I can update my saved data only very slowly. How should I sample the file and save the needed information? Here are some approaches that are (extremely) slow:

    1. Go through each 4096-byte combination in the file and save each one, but search the table first for an existing combination and update its value. This is unbelievably slow.
    2. Go through each 4096-byte combination in the file, saving up to 1 million rows of data in a temporary table. Go through that table and fix the entries (combining repeated byte combinations), then copy them to the big table. Repeat with the next 1 million rows. This is a bit faster, but still unbelievably slow.

    This is kind of like taking statistics of the file. NOTE: I know that sampling the file can generate tons of data (around 22 GB from experience), and I know that any solution posted would take a bit of time to finish. I need the most efficient saving process.
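
    A minimal sketch of the 'save fast, aggregate once' idea in SQL terms, with hypothetical table names, assuming the chunks can be bulk-loaded into an unindexed staging table first: the counting then happens in a single GROUP BY pass instead of a lookup per chunk.

    -- staging table: no keys or indexes, so bulk inserts are as fast as possible
    CREATE TABLE chunk_staging (
        bytes VARBINARY(4096) NOT NULL
    );

    -- final table: one row per distinct 4096-byte combination and its count
    CREATE TABLE chunk_counts (
        bytes       VARBINARY(4096) NOT NULL,
        occurrences BIGINT          NOT NULL
    );

    -- after the bulk load, aggregate everything in one pass
    INSERT INTO chunk_counts (bytes, occurrences)
    SELECT bytes, COUNT(*)
    FROM chunk_staging
    GROUP BY bytes;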

    Read the article

  • New replicaset resident memory is larger than the existing sets

    - by eded
    Following the MongoDB tutorial on how to resync a replica set member, I wiped all the files in /data/db and restarted the mongod process to resync the data. Everything looks OK: I get the same number of documents as the existing two members (the primary and one secondary). However, when I check the memory on MMS, it shows that my new resynced member/mongod process has different memory status values than the other two. For the existing two, db.serverStatus().mem shows something like:

    "mem" : {
        "bits" : 64,
        "resident" : 239,
        "virtual" : 66348,
        "supported" : true,
        "mapped" : 32865,
        "mappedWithJournal" : 65730
    }

    However, the new resynced member shows:

    "mem" : {
        "bits" : 64,
        "resident" : 1239,
        "virtual" : 52447,
        "supported" : true,
        "mapped" : 25700,
        "mappedWithJournal" : 51400
    }

    The resynced member's resident memory is about five times larger than the existing ones'. I wonder if this is normal because all the data came in suddenly during the resync? Even the virtual and mapped values are different. Can anyone explain? Thanks

    Read the article

  • Can I set ignore_dup_key on for a primary key?

    - by Mr. Flibble
    I have a two-column primary key on a table. I have attempted to alter it to set IGNORE_DUP_KEY to ON with this command:

    ALTER INDEX PK_mypk ON MyTable SET (IGNORE_DUP_KEY = ON);

    But I get this error:

    Cannot use index option ignore_dup_key to alter index 'PK_mypk' as it enforces a primary or unique constraint.

    How else should I set IGNORE_DUP_KEY to ON?
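
    A minimal sketch of the usual workaround, assuming SQL Server and hypothetical key columns Col1 and Col2: IGNORE_DUP_KEY cannot be changed through ALTER INDEX on an index that backs a constraint, so the constraint is dropped and recreated with the option set at creation time.

    -- drop the existing primary key constraint
    ALTER TABLE MyTable DROP CONSTRAINT PK_mypk;

    -- recreate it with IGNORE_DUP_KEY specified up front
    ALTER TABLE MyTable
        ADD CONSTRAINT PK_mypk PRIMARY KEY (Col1, Col2)
        WITH (IGNORE_DUP_KEY = ON);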

    Read the article

  • what does "out of range" mean?

    - by user329820
    Hi, I have checked these statements with MySQL and no error happens; the output is 0 rows. BUT my friend checked them and found an error on the SELECT because it is "out of range"! Is he correct? Thanks.

    CREATE TABLE T1(A INTEGER NULL);
    SELECT * FROM T1;

    Read the article

  • How do you write a recursive stored procedure

    - by Grayson Mitchell
    I simply want a stored procedure that calculates a unique id and inserts it. If it fails, it just calls itself to regenerate said id. I have been looking for an example but can't find one, and I am not sure how I should get the SP to call itself and set the appropriate output parameter. I would also appreciate someone pointing out how to test this SP.

    ALTER PROCEDURE [dbo].[DataContainer_Insert]
        @SomeData varchar(max),
        @DataContainerId int OUTPUT
    AS
    BEGIN
        -- SET NOCOUNT ON added to prevent extra result sets from
        -- interfering with SELECT statements.
        SET NOCOUNT ON;
        DECLARE @UserId int;
        BEGIN TRY
            SELECT @UserId = MAX(UserId) FROM DataContainer;
            INSERT INTO DataContainer (UserId, SomeData) VALUES (@UserId, @SomeData);
            SELECT @DataContainerId = SCOPE_IDENTITY();
        END TRY
        BEGIN CATCH
            -- try again
            EXEC [dbo].[DataContainer_Insert] @SomeData, @DataContainerId OUTPUT;
        END CATCH
    END
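
    A minimal sketch of how the procedure above could be exercised from a query window, assuming the DataContainer table exists:

    -- call the procedure and inspect the generated id
    DECLARE @NewId int;
    EXEC [dbo].[DataContainer_Insert] @SomeData = 'test row', @DataContainerId = @NewId OUTPUT;
    SELECT @NewId AS NewDataContainerId;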

    Read the article

  • Solr autocommit and autooptimize?

    - by Camran
    I will be uploading my website to a VPS soon. It is a classifieds website which uses Solr integrated with MySQL. Solr is updated whenever a classified is added or deleted. I need a way to make the commit() and optimize() calls automatic, for example once every 3 hours or so. How can I do this? (Details please.) When is it ideal to optimize? Thanks

    Read the article

  • How to create unique user key

    - by Grayson Mitchell
    Scenario: I have a fairly generic table (Data) that has an identity column. The data in this table is grouped (let's say by city). The users need an identifier for printing on paper forms, etc. Users can only access their own city's data, so if they use the identity column for this purpose they will see odd numbers (e.g. a 'New York' user might see 1, 37, 2028... as the listed keys). Ideally they would see 1, 2, 3... (or something similar). The problem of course is concurrency; this being a web application, you can't just have something like:

    UserId = SELECT COUNT(*)+1 FROM Data WHERE City='New York'

    Has anyone come up with any cunning ways around this problem?
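
    A minimal sketch of one display-time approach, assuming a database with window functions (e.g. SQL Server) and hypothetical Id/City column names: keep the identity column as the real key and derive a per-city sequence only when rows are listed.

    -- numbers rows within each city in identity order; note the numbers shift if rows are deleted
    SELECT Id,
           City,
           ROW_NUMBER() OVER (PARTITION BY City ORDER BY Id) AS CityRowNumber
    FROM Data;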

    Read the article

  • Many tables for many users?

    - by Seagull
    I am new to web programming, so excuse the ignorance... ;-) I have a web application that in many ways can be considered a multi-tenant environment. By this I mean that each user of the application gets their own 'custom' environment, with absolutely no interaction between those users. So far I have built the web application as a 'single user' environment. In other words, I haven't actually done anything to support multiple users, but have only worked on the functionality I want from the app. Here is my problem: what's the best way to build a multi-user environment?

    1. All users point to the same 'core' backend. In other words, I build the logic to separate users via appropriate SQL queries (e.g. select * from table where user='123' and attribute='456').
    2. Each user points to a unique tablespace, which is built separately as they join the system. In this case I would simply generate ALL the relevant SQL tables per user, with some sort of suffix for the user (e.g. now a query would look like 'select * from table_ where attribute ='456').

    In short, it's the difference between "select * from table where USER=" and "select * from table_USER".

    Read the article

  • When/Why to use Cascading in SQL Server?

    - by Joel Coehoorn
    When setting up foreign keys in SQL Server, under what circumstances should you have it cascade on delete or update, and what is the reasoning behind it? This probably applies to other databases as well. I'm looking most of all for concrete examples of each scenario, preferably from someone who has used them successfully.
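
    A minimal sketch of the syntax in question, with hypothetical Orders/OrderLines tables: cascading delete fits the classic parent-child case where the child rows are meaningless without the parent.

    CREATE TABLE Orders (
        OrderId int NOT NULL PRIMARY KEY
    );

    CREATE TABLE OrderLines (
        OrderLineId int NOT NULL PRIMARY KEY,
        OrderId     int NOT NULL,
        CONSTRAINT FK_OrderLines_Orders
            FOREIGN KEY (OrderId) REFERENCES Orders (OrderId)
            ON DELETE CASCADE   -- deleting an order removes its lines
            ON UPDATE CASCADE   -- renumbering an order propagates to its lines
    );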

    Read the article

  • using a dummy row with NOT NULL to solve DEFAULT NULL

    - by Tony38
    I know having DEFAULT NULL is not good practice, but I have many optional lookup values which are FKs in the system, so to solve this issue here is what I am doing:

    1. I use NOT NULL for every FK / lookup value field.
    2. The first row in every table, with PK id = 1, is a dummy row with just 'none' in all the columns.

    This way I can use NOT NULL in my schema and, if needed, reference the 'none' row for values that would otherwise be NULL. Is this a good design, or are there other workarounds?
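
    For comparison, a minimal sketch of the conventional alternative with hypothetical Color/Product tables: an optional FK column left nullable is simply not checked when it is NULL, while any non-NULL value is still enforced against the lookup table.

    CREATE TABLE Color (
        ColorId int NOT NULL PRIMARY KEY,
        Name    varchar(50) NOT NULL
    );

    CREATE TABLE Product (
        ProductId int NOT NULL PRIMARY KEY,
        ColorId   int NULL,   -- optional lookup: NULL means "no color chosen"
        CONSTRAINT FK_Product_Color
            FOREIGN KEY (ColorId) REFERENCES Color (ColorId)
    );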

    Read the article
