Search Results

Search found 32492 results on 1300 pages for 'reporting database'.


  • Any way to make this PostgreSQL count query any faster?

    - by Ben Dauphinee
    I'm running a case-insensitive search on a table with 7.2 million rows, and I was wondering if there is any way to make this query faster. Currently it takes approximately 11.6 seconds to execute with just one search parameter, and I'm worried that as soon as I add more than one, it will become massively slow.

        SELECT count(*) FROM "exif_parse" WHERE (description ~* 'canon')
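
    If the search is really a substring match rather than a full regex, a trigram index is the usual fix. A sketch, assuming PostgreSQL 9.1+ with the pg_trgm extension (GIN trigram indexes serve ILIKE from 9.1, and the ~* operator from 9.3):

        -- Build the index once; the planner can then avoid scanning all 7.2M rows.
        CREATE EXTENSION pg_trgm;
        CREATE INDEX exif_parse_description_trgm
            ON exif_parse USING gin (description gin_trgm_ops);

        SELECT count(*) FROM exif_parse WHERE description ILIKE '%canon%';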

    Read the article

  • Is this safe? Is this OK to do in MySQL?

    - by alex
    I have always done this:

        mysqldump -hlocalhost -uuser -ppass MYDATABASE > /home/f/db_backup/MYDATABASE.sql
        mysql -uuser -ppass MYDATABASE < MYDATABASE.sql

    But if I do this instead, is it safe? Is it identical to the above?

        mysqldump -hlocalhost -uuser -ppass MYDATABASE | gzip > /home/f/db_backup/MYDATABASE.sql.gz
        zcat MYDATABASE.sql.gz | mysql -uuser -ppass MYDATABASE
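
    Gzip is lossless, so the piped version decompresses to exactly the bytes the plain dump would have written. A quick way to convince yourself, on a quiet database (the only expected difference is the "Dump completed on" timestamp comment mysqldump appends; file names here are illustrative):

        # Take both dumps back to back, then compare the decompressed output.
        mysqldump -hlocalhost -uuser -ppass MYDATABASE > plain.sql
        mysqldump -hlocalhost -uuser -ppass MYDATABASE | gzip > piped.sql.gz
        zcat piped.sql.gz | diff - plain.sql && echo identical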

    Read the article

  • PhpMyAdmin; Should I disable root login?

    - by Camran
    I have this setup in phpMyAdmin:

        USER               HOST         PASSW   PRIVILEGES       GRANT
        debian-sys-maint   localhost    Yes     ALL PRIVILEGES   YES
        phpmyadmin         localhost    Yes     USAGE            NO
        root               127.0.0.1    Yes     ALL PRIVILEGES   YES
        root               localhost    Yes     ALL PRIVILEGES   YES
        root               my_hostname  Yes     ALL PRIVILEGES   YES
        username           localhost    Yes     ALL PRIVILEGES   YES

    Here "username" is my username and "my_hostname" is my hostname. I currently log in only as the last one (username@localhost), and my PHP code uses the same credentials. Should I disable the other accounts? And what other security measures should I take? BTW: my server is Linux and I have root access. Thanks
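
    If nothing needs to log in as root except local administration, the extra root accounts can simply be dropped. A sketch (keep debian-sys-maint, since Debian's maintenance scripts use it, and review SELECT user, host FROM mysql.user before deleting anything):

        DROP USER 'root'@'127.0.0.1';
        DROP USER 'root'@'my_hostname';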

    Read the article

  • References/walkthroughs for maintaining database schemas with Visual Studio 2010?

    - by user206356
    I have Visual Studio 2010 Beta 2 and SQL Server 2008 installed. I'm working with a populated database and want to modify various column types. SQL Server Management Studio requires me to drop tables to do this, and gets pretty finicky given my moderate level of knowledge of SQL Server. However, I heard the new database project type supports changing the database schema to the desired format, and it will handle creating and running all the scripts to implement the changes. I've created a VS2010 database project using the existing database as the source, but so far haven't had much luck figuring out the appropriate method to make the changes without getting an error. As a result, I'm looking for any reference info I can find on using VS2010's capabilities in this area. Any suggestions?

    Read the article

  • Difference between dates when grouping in SQL

    - by CeejeeB
    I have a table of purchases containing a user_id and a date_of_purchase. I need to select all users who have made 2 purchases within 12 months of each other. The dates can be any point in time as long as they are less than 12 months apart. e.g.

        user_id   date_of_purchase
        123       01/Jan/2010
        124       01/Aug/2010
        123       01/Feb/2010
        124       05/Aug/2008

    In this example I want user_id 123.
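
    A self-join handles this: pair each purchase with any later purchase by the same user, and keep users where the gap is under 12 months. A sketch (the table name purchases is assumed; DATEADD is SQL Server syntax, MySQL would use p1.date_of_purchase + INTERVAL 12 MONTH):

        SELECT DISTINCT p1.user_id
        FROM purchases p1
        JOIN purchases p2
          ON p2.user_id = p1.user_id
         AND p2.date_of_purchase > p1.date_of_purchase
         AND p2.date_of_purchase < DATEADD(month, 12, p1.date_of_purchase);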

    Read the article

  • What's the simplest way to retrieve all data from a table and save it back in .NET 3.5?

    - by zoman
    I have a number of tables containing some basic (business-related) mapping data. What's the simplest way to load the data from those tables and then save the modified values back? (All data should be replaced in the tables.) An ORM is out of the question, as I would like to avoid creating domain objects for each table. The actual editing of the data is not an issue. (It is exported into Excel, where the data is edited; then the file is uploaded with the modified data.) The technology is .NET 3.5 (ASP.NET MVC) and SQL Server 2005. Thanks.
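
    If the goal is just table-in, table-out with no domain objects, plain ADO.NET DataTables already do this. A minimal sketch (connection string and table names are whatever the app uses; table names must come from trusted code, since they are concatenated into SQL here):

        using System.Data;
        using System.Data.SqlClient;

        public static class TableRoundTrip
        {
            public static DataTable Load(string connString, string tableName)
            {
                var adapter = new SqlDataAdapter("SELECT * FROM " + tableName, connString);
                var table = new DataTable(tableName);
                adapter.Fill(table);   // pulls the whole table into memory
                return table;
            }

            public static void Save(string connString, DataTable table)
            {
                var adapter = new SqlDataAdapter("SELECT * FROM " + table.TableName, connString);
                // The builder attaches itself to the adapter and derives the
                // INSERT/UPDATE/DELETE commands from the SELECT.
                var builder = new SqlCommandBuilder(adapter);
                adapter.Update(table);
            }
        }

    Since the requirement is to replace all rows, another option is to DELETE the table's contents inside a transaction and re-insert with SqlBulkCopy; the adapter approach above just round-trips the rows as-is.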

    Read the article

  • About curse of dimensionality

    - by Dan
    My question is about a topic I've been reading about a bit. Basically, my understanding is that in higher dimensions all points end up being very close to each other. My doubt is whether this means that calculating distances the usual way (Euclidean, for instance) is valid or not. If it were still valid, it would mean that when comparing vectors in high dimensions, the two most similar wouldn't differ much from a third one, even when this third one could be completely unrelated. Is this correct? And in that case, how would you be able to tell whether you have a match or not?
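
    The effect is easy to see numerically: with random points, the gap between the nearest and farthest neighbour shrinks as dimensionality grows, so raw Euclidean distance loses contrast. A quick illustration (numpy assumed):

        import numpy as np

        rng = np.random.RandomState(0)
        for d in (2, 10, 100, 1000):
            points = rng.rand(500, d)   # 500 random points in d dimensions
            query = rng.rand(d)
            dists = np.sqrt(((points - query) ** 2).sum(axis=1))
            # The ratio approaches 1 as d grows: distances stop discriminating.
            print(d, dists.min() / dists.max())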

    Read the article

  • getting smallest of coordinates that differ by N or more in Python

    - by user248237
    Suppose I have a list of coordinates:

        data = [[(10, 20), (100, 120), (0, 5), (50, 60)],
                [(13, 20), (300, 400), (100, 120), (51, 62)]]

    I want to take all tuples that either appear in each list in data, or differ by 3 or less from some tuple in a list other than their own. How can I do this efficiently in Python? For the above example, the result should be:

        [(100, 120),           # occurs in both lists
         (10, 20), (13, 20),   # differ by only 3
         (50, 60), (51, 62)]   # differ by 3 or less

    (0, 5) and (300, 400) would not be included, since they don't appear in both lists and don't come within 3 of any tuple in a list other than their own. How can this be computed? Thanks.
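
    A sketch of one interpretation: keep a tuple if some tuple in a different list is within 3 in both coordinates (an exact duplicate trivially counts, so tuples appearing in both lists are kept automatically):

        def close(a, b, n=3):
            return abs(a[0] - b[0]) <= n and abs(a[1] - b[1]) <= n

        def matching_tuples(data, n=3):
            result = []
            for i, lst in enumerate(data):
                # All tuples that live in some *other* list.
                others = [t for j, other in enumerate(data) if j != i for t in other]
                result.extend(t for t in lst if any(close(t, o, n) for o in others))
            return result

        data = [[(10, 20), (100, 120), (0, 5), (50, 60)],
                [(13, 20), (300, 400), (100, 120), (51, 62)]]
        print(matching_tuples(data))
        # [(10, 20), (100, 120), (50, 60), (13, 20), (100, 120), (51, 62)]

    This is O(n^2) in the total number of tuples; for large inputs, bucketing tuples into a grid of cell size n would cut the candidate comparisons down.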

    Read the article

  • How to get a result from this data

    - by Shantanu Gupta
    I want to compute a result from this table: the difference between each row's quantity and the next row's quantity (quantity1 - quantity2), as another column in the table shown below. This table has more such records. I am trying to write the query but have not been able to get the result.

        select * from v order by is_active desc, transaction_id desc

        PK_GUEST_ITEM_ID  FK_GUEST_ID  QUANTITY  TRANSACTION_ID  IS_ACTIVE
        ----------------  -----------  --------  --------------  ---------
        12963             559          82000     795             1
        12988             559          79000     794             0
        12987             559          76000     793             0
        12986             559          73000     792             0
        12985             559          70000     791             0
        12984             559          67000     790             0
        12983             559          64000     789             0
        12982             559          61000     788             0
        12981             559          58000     787             0
        12980             559          55000     786             0
        12979             559          52000     785             0
        12978             559          49000     784             0
        12977             559          46000     783             0
        12976             559          43000     782             0

    I want another column that contains the subtraction of the two quantities. The desired result should be something like this:

        PK_GUEST_ITEM_ID  FK_GUEST_ID  QUANTITY  RESULT  TRANSACTION_ID  IS_ACTIVE
        ----------------  -----------  --------  ------  --------------  ---------
        12963             559          82000     3000    795             1
        12988             559          79000     3000    794             0
        12987             559          76000     3000    793             0
        12986             559          73000     3000    792             0
        12985             559          70000     3000    791             0
        12984             559          67000     3000    790             0
        12983             559          64000     3000    789             0
        12982             559          61000     3000    788             0
        12981             559          58000     3000    787             0
        12980             559          55000     3000    786             0
        12979             559          52000     3000    785             0
        12978             559          49000     3000    784             0
        12977             559          46000     3000    783             0
        12976             559          43000     NULL    782             0
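
    If the database supports window functions, LEAD reads the following row's quantity directly. A sketch (on engines without window functions, a self-join on the previous transaction_id gives the same result):

        SELECT pk_guest_item_id, fk_guest_id, quantity,
               quantity - LEAD(quantity) OVER (PARTITION BY fk_guest_id
                                               ORDER BY transaction_id DESC) AS result,
               transaction_id, is_active
        FROM v
        ORDER BY is_active DESC, transaction_id DESC;

    The last transaction per guest has no following row, so its result is NULL, matching the desired output.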

    Read the article

  • How to translate these 2 queries from MySQL to PostgreSQL?

    - by xRobot
    How can I translate these two queries to PostgreSQL?

        CREATE TABLE `example` (
          `id` int(10) unsigned NOT NULL auto_increment,
          `from` varchar(255) NOT NULL default '0',
          `message` text NOT NULL,
          `lastactivity` timestamp NULL default '0000-00-00 00:00:00',
          `read` int(10) unsigned NOT NULL,
          PRIMARY KEY (`id`),
          KEY `from` (`from`)
        ) DEFAULT CHARSET=utf8;

    Query:

        SELECT * FROM table_1
        LEFT OUTER JOIN table_2 ON (table_1.id = table_2.id)
        WHERE (table_1.lastactivity > NOW()-100);
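
    A sketch of the PostgreSQL equivalent: auto_increment becomes SERIAL, the zero date '0000-00-00 00:00:00' is not a valid timestamp in PostgreSQL (NULL is the usual substitute), from and read stay quoted because they collide with keywords, and the secondary KEY becomes a separate CREATE INDEX:

        CREATE TABLE example (
            id           serial PRIMARY KEY,
            "from"       varchar(255) NOT NULL DEFAULT '0',
            message      text NOT NULL,
            lastactivity timestamp NULL,
            "read"       integer NOT NULL
        );
        CREATE INDEX example_from_idx ON example ("from");

        -- NOW()-100 relies on MySQL's numeric datetime arithmetic; PostgreSQL
        -- needs an explicit interval (assuming 100 seconds was the intent):
        SELECT * FROM table_1
        LEFT OUTER JOIN table_2 ON (table_1.id = table_2.id)
        WHERE table_1.lastactivity > NOW() - INTERVAL '100 seconds';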

    Read the article

  • Table naming convention?

    - by MattSlay
    In our manufacturing shop, each Employee hits the time clock every time they change Jobs or Machines (work centers) during their work day. Each record created in the Time Clock app has foreign keys that link the record to: the Employee, the Job, and the Machine which they are about to operate. I’m trying to determine the best name for this table… If I were tempted to call it ClockRecords or TimeClockRecords, why wouldn’t I also consider naming it JobTimeRecords, or why not MachineTimeRecords. Any ideas on a good name?

    Read the article

  • How can an improvement to the query cache be tracked?

    - by Bill Paetzke
    I am parameterizing my web app's ad hoc sql. As a result, I expect the query plan cache to reduce in size and have a higher hit ratio. Perhaps even other important metrics will be improved. Could I use perfmon to track this? If so, what counters should I use? If not perfmon, how could I report on the impact of this change?
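
    Assuming SQL Server (which the plan cache and perfmon suggest): the SQLServer:Plan Cache perfmon object exposes Cache Hit Ratio and Cache Pages, which should move in the right direction once the SQL is parameterized. For a before/after snapshot from inside the server, the plan cache DMV works too (a sketch):

        -- Ad hoc plans should shrink in count and total size after parameterization.
        SELECT objtype,
               COUNT(*)                                         AS plan_count,
               SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb,
               SUM(usecounts)                                   AS total_use_counts
        FROM sys.dm_exec_cached_plans
        GROUP BY objtype
        ORDER BY size_mb DESC;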

    Read the article

  • Using SQL Server for web applications

    - by rem
    As far as I understand, due to license requirements, all web applications which use MS SQL Server use SQL Server Express (free) or SQL Server Web edition (processor license). Is that so? What other specific features of SQL Server usage matter for web apps?

    Read the article

  • 'Good' programming form in maintaining / updating / accessing files by entry

    - by zhermes
    Basic Question: If I'm storing/modifying data, should I access elements of a file by hard-coded index, i.e. targetFile.getElement(5); via a hardcoded identifier (internally translated into an index), i.e. target.getElementWithID("Desired Element"); or with some intermediate, DESIRED_ELEMENT = 5; ... target.getElement(DESIRED_ELEMENT), etc.?

    Background: My program (C++) stores data in lots of different 'dataFile's. I also keep a list of all of the data files in another file---a 'listFile'---which also stores some of each one's properties (see below; i.e. what its name is, how many lines of information it has, etc.). There is an object which manages the data files and the list file; call it a 'fileKeeper'. The entries of a listFile look something like:

        filename , contents name , number of lines , some more numbers ...

    It's definitely possible that I may add / remove fields from this list --- but in general, they'll stay static. Right now, I have a constant string array which holds the identification of each element in each entry, something like:

        const string fileKeeper::idKeys[] = { "FileName" , "Contents" , "NumLines" ... };
        const int fileKeeper::idKeysNum = 6; // 6 - for example

    I'm trying to manage this stuff in 'good' programmatic form. Thus, when I want to retrieve the number of lines in a file (for example), instead of having a method which just retrieves the 3rd element, I do something like:

        string desiredID = "NumLines";
        int desiredIndex = indexForID(desiredID);
        string desiredElement = elementForIndex(desiredIndex);

    where indexForID() goes through the entries of idKeys until it finds desiredID, then returns the index it corresponds to, and elementForIndex(index) actually goes into the listFile to retrieve the index'th element of the comma-delimited string.

    Problem: This still seems pretty ugly / poor form. Is there a way I should be doing this? If not, what are some general ways in which this is usually done? Thanks!
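
    One common alternative, sketched below: replace the string lookup with a plain enum whose enumerators mirror idKeys, so field access is checked at compile time and costs no runtime search (names are illustrative):

        #include <string>
        #include <vector>

        // One enumerator per idKeys entry; keep the two in sync.
        enum FieldID { FIELD_FILENAME, FIELD_CONTENTS, FIELD_NUM_LINES, FIELD_COUNT };

        class fileKeeper {
        public:
            fileKeeper(const std::vector<std::string>& entryFields)
                : fields(entryFields) {}

            // Typed accessor: the compiler rejects unknown field names.
            std::string elementFor(FieldID id) const {
                return fields.at(static_cast<int>(id));
            }
        private:
            std::vector<std::string> fields; // one parsed listFile entry
        };

        // Usage: fileKeeper k(parsedEntry); k.elementFor(FIELD_NUM_LINES);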

    Read the article

  • I built my Rails app with SQLite and without specifying any db field sizes. Is my app now foobared for production?

    - by Tim Santeford
    I've been following a lot of good tutorials on building Rails apps, but I seem to be missing the whole specifying-and-validating-db-field-sizes part. I love not having to think about it when roughing out an app (I would never have done this with a PHP or ASP.NET app). However, now that I'm ready to go to production, I think I might have done myself a disservice by not specifying field sizes as I went. My production db will be MySQL. What is the best practice here? Do I need to go through all of my migration files and specify sizes, update all the models with validation, and update all my form partial views with input max widths? Or am I missing a critical step in my development process?
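
    Note that without an explicit :limit, Rails creates string columns as VARCHAR(255) on MySQL, so the schema is not unbounded --- tightening is then a per-column decision. A sketch of one such migration plus the matching model validation (table and column names are hypothetical):

        class TightenFieldSizes < ActiveRecord::Migration
          def self.up
            change_column :users, :name, :string, :limit => 100
          end

          def self.down
            change_column :users, :name, :string
          end
        end

        class User < ActiveRecord::Base
          validates_length_of :name, :maximum => 100
        end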

    Read the article

  • How to handle ids and polymorphic associations in views if compound keys are not supported?

    - by duncan
    I have a movie plan table:

        movie_plans (id, description)

    Each plan has items, which describe a sequence of movies and the duration in minutes:

        movie_plan_items (id, movie_plan_id, movie_id, start_minutes, end_minutes)

    A specific instance of that plan happens in:

        movie_schedules (id, movie_plan_id, start_at)

    However, the schedule items can be calculated from the movie_plan_items and the schedule start time by adding the minutes:

        create view movie_schedule_items as
        select CONCAT(p.id, '-', s.id) as id,
               s.id as movie_schedule_id,
               p.id as movie_plan_item_id,
               p.movie_id,
               p.movie_plan_id,
               (s.start_at + INTERVAL p.start_minutes MINUTE) as start_at,
               (s.start_at + INTERVAL p.end_minutes MINUTE) as end_at
        from movie_plan_items p, movie_schedules s
        where s.movie_plan_id = p.movie_plan_id;

    I have a model over this view (read-only), and it works OK, except that the id is currently a string. I now want to add a polymorphic property (like comments) to several of the previous tables. Therefore, for movie_schedule_items I need a unique and persistent numeric id. I have the following dilemma: I could avoid the id and have movie_schedule_items just use movie_plan_item_id and movie_schedule_id as a compound key, as it should --- but Rails sucks in this regard. Or I could create an id using String#hash or an md5, making it slower or collision-prone (and IIRC String#hash is no longer persistent across processes in Ruby 1.9). Any ideas on how to handle this situation?
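
    One way to keep the view read-only but give Rails a stable integer key: pack the two ids into one number instead of a string. A sketch, assuming movie_plan_items ids stay below one million (pick the multiplier to fit your data):

        create view movie_schedule_items as
        select (s.id * 1000000 + p.id) as id,
               -- ...remaining columns as in the original view...
        from movie_plan_items p, movie_schedules s
        where s.movie_plan_id = p.movie_plan_id;

    The id is unique, persistent, and decodable (s.id = id / 1000000, p.id = id % 1000000), so polymorphic comments can reference it safely.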

    Read the article

  • MySQL query to get unique values from one column

    - by vesselyp
    I have a table named locations from which I want to select values in such a way that it selects only distinct values from one column but all values from another. Table name: locations. Column 1: country, with values America, India, India, India. Column 2: state/province, with values New York, Punjab, Karnataka, Kerala. When I select, I should get India only once, with all three states listed under it. Is there any way? Somebody please help.
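
    In plain SQL, the usual trick is to group by country and aggregate the states into one row per country. A sketch (the state column is assumed to be named state_province):

        SELECT country,
               GROUP_CONCAT(state_province ORDER BY state_province SEPARATOR ', ') AS states
        FROM locations
        GROUP BY country;

    This returns, e.g., India | Karnataka, Kerala, Punjab. If separate state rows are needed instead, select the rows as-is and collapse the repeated country in application code.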

    Read the article

  • Question about the BENCHMARK function in MySQL (incredible results)

    - by xRobot
    I have 2 tables: author, with 3 million rows, and book, with 20 thousand rows. So I benchmarked this query with a join:

        SELECT BENCHMARK(100000000, 'SELECT book.title, author.name
                                     FROM `book`, `author`
                                     WHERE book.id = author.book_id ')

    And this is the result:

        Query took 0.7438 sec

    Only 0.7438 seconds for 100 million queries with a join? Did I make a mistake, or is this the right result?
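
    The likely explanation, for context: BENCHMARK(count, expr) repeatedly evaluates a scalar expression, and a quoted query is just a string literal --- so the join above never actually runs; MySQL evaluated the literal 100 million times. A sketch of the distinction:

        -- Times an actual expression:
        SELECT BENCHMARK(1000000, MD5('test'));

        -- To time the real join, run the query itself (SQL_NO_CACHE keeps
        -- the query cache from skewing repeated runs):
        SELECT SQL_NO_CACHE book.title, author.name
        FROM `book`, `author`
        WHERE book.id = author.book_id;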

    Read the article

  • MSSQL: Compare columns in two tables

    - by maxt3r
    Hi, I've recently done a migration from a really old version of an application to the current version, and I faced some problems while migrating databases. I need a query that can help me compare the columns in two tables. I don't mean the data in the rows; I need to compare the columns themselves, to figure out what changes in table structure I've missed.
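
    A sketch using INFORMATION_SCHEMA, which SQL Server exposes for exactly this (old_table and new_table are placeholders): a full outer join on column name surfaces columns that exist on only one side or that changed type:

        SELECT COALESCE(a.COLUMN_NAME, b.COLUMN_NAME) AS column_name,
               a.DATA_TYPE AS old_type,
               b.DATA_TYPE AS new_type
        FROM (SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS
              WHERE TABLE_NAME = 'old_table') a
        FULL OUTER JOIN
             (SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS
              WHERE TABLE_NAME = 'new_table') b
          ON a.COLUMN_NAME = b.COLUMN_NAME
        WHERE a.COLUMN_NAME IS NULL
           OR b.COLUMN_NAME IS NULL
           OR a.DATA_TYPE <> b.DATA_TYPE;

    If the two tables live in different databases, use three-part names in the subqueries (db.INFORMATION_SCHEMA.COLUMNS); comparing CHARACTER_MAXIMUM_LENGTH and IS_NULLABLE the same way catches size and nullability drift too.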

    Read the article
