Search Results

Search found 61944 results on 2478 pages for 'text database'.

  • Question about the BENCHMARK function in MySQL (incredible results)

    - by xRobot
    I have 2 tables: author, with 3 million rows, and book, with 20 thousand rows. I have benchmarked this query with a join:

        SELECT BENCHMARK(100000000, 'SELECT book.title, author.name
            FROM `book`, `author` WHERE book.id = author.book_id')

    And this is the result: Query took 0.7438 sec. ONLY 0.7438 seconds for 100 million queries with a join??? Am I making some mistake, or is this the right result?
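
    A likely explanation, worth checking (an editorial sketch, not part of the original question): BENCHMARK(count, expr) evaluates a scalar expression count times; it does not execute a query passed as a string. The quoted SELECT is just a constant string literal, so MySQL "evaluates" that constant 100 million times, which is why it finishes in under a second. Comparing a constant against an expression that must actually be computed makes the difference visible:

        SELECT BENCHMARK(100000000, 'any string');  -- near-instant: a constant expression
        SELECT BENCHMARK(10000000, MD5('test'));    -- far slower: MD5() runs on every iteration

    To time the actual join, run the join itself (or a client-side loop of it) and measure that.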

  • Fastest way to do a weighted tag search in SQL Server

    - by Hasan Khan
    My table is as follows:

        ObjectID bigint
        Tag nvarchar(50)
        Weight float
        Type tinyint

    I want to search for all objects that have the tags 'big' or 'large', and I want the object IDs ordered by the sum of the weights (so objects having both tags will be on top):

        select objectid, row_number() over (order by sum(weight) desc) as rowid
        from tags
        where tag in ('big', 'large') and type = 0
        group by objectid

    The reason for row_number() is that I want paging over the results. The query in its current form is very slow: it takes a minute to execute over 16 million tags. What should I do to make it faster? I have a non-clustered index on (objectid, tag, type). Any suggestions?
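
    One avenue worth testing (a sketch, not a verified fix): because the existing index leads on objectid, a filter on tag cannot seek into it, so the whole index is scanned. An index that leads on the filter columns and covers the aggregated column lets the WHERE clause seek and avoids key lookups entirely:

        CREATE NONCLUSTERED INDEX IX_tags_tag_type
        ON tags (tag, type)
        INCLUDE (objectid, weight);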

  • Efficient update of SQLite table with many records

    - by blackrim
    I am trying to use SQLite (sqlite3) for a project to store hundreds of thousands of records (I would like SQLite so users of the program don't have to run a [my]SQL server). I sometimes have to update hundreds of thousands of records to enter left/right values (they are hierarchical), but I have found the standard

        update table set left_value = 4, right_value = 5 where id = 12340;

    to be very slow. I have tried surrounding every thousand or so updates with

        begin;
        ...
        update table set left_value = 4, right_value = 5 where id = 12340;
        ...
        commit;

    but again, very slow. Oddly, when I populate the table with a few hundred thousand inserts, it finishes in seconds. I am currently testing the speed in Python (the slowness shows up both at the command line and in Python) before I move to the C++ implementation, but right now this is way too slow and I need to find a new solution unless I am doing something wrong. Thoughts? (I would also take an open-source alternative to SQLite that is portable.)
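
    One likely culprit (an assumption, since the schema isn't shown): if id is neither the INTEGER PRIMARY KEY nor indexed, every UPDATE has to scan the whole table to find its row, while the INSERTs that populated the table never had to search at all. A sketch with a hypothetical table name:

        -- without this, each of the hundreds of thousands of updates is a full scan
        CREATE INDEX IF NOT EXISTS idx_tree_id ON tree(id);

        BEGIN;
        UPDATE tree SET left_value = 4, right_value = 5 WHERE id = 12340;
        -- ... remaining updates ...
        COMMIT;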

  • How do I avoid a race condition in my Rails app?

    - by Cathal
    Hi, I have a really simple Rails application that allows users to register their attendance on a set of courses. The ActiveRecord models are as follows:

        class Course < ActiveRecord::Base
          has_many :scheduled_runs
          ...
        end

        class ScheduledRun < ActiveRecord::Base
          belongs_to :course
          has_many :attendances
          has_many :attendees, :through => :attendances
          ...
        end

        class Attendance < ActiveRecord::Base
          belongs_to :user
          belongs_to :scheduled_run, :counter_cache => true
          ...
        end

        class User < ActiveRecord::Base
          has_many :attendances
          has_many :registered_courses, :through => :attendances, :source => :scheduled_run
        end

    A ScheduledRun instance has a finite number of places available, and once the limit is reached, no more attendances can be accepted:

        def full?
          attendances_count == capacity
        end

    attendances_count is a counter cache column holding the number of attendance associations created for a particular ScheduledRun record. My problem is that I don't fully know the correct way to ensure that a race condition doesn't occur when two or more people attempt to register for the last available place on a course at the same time. My AttendancesController looks like this:

        class AttendancesController < ApplicationController
          before_filter :load_scheduled_run
          before_filter :load_user, :only => :create

          def new
            @user = User.new
          end

          def create
            unless @user.valid?
              render :action => 'new'
            end
            @attendance = @user.attendances.build(:scheduled_run_id => params[:scheduled_run_id])
            if @attendance.save
              flash[:notice] = "Successfully created attendance."
              redirect_to root_url
            else
              render :action => 'new'
            end
          end

          protected

          def load_scheduled_run
            @run = ScheduledRun.find(params[:scheduled_run_id])
          end

          def load_user
            @user = User.create_new_or_load_existing(params[:user])
          end
        end

    As you can see, it doesn't take into account that the ScheduledRun instance may have already reached capacity. Any help on this would be greatly appreciated.
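
    A standard remedy (sketched here in SQL, since the guarantee ultimately has to come from the database; the table name and id values are illustrative) is pessimistic locking: take a row lock on the run before checking capacity, so concurrent registrations serialize on the check instead of both reading a stale attendances_count:

        BEGIN;
        -- concurrent registrations for run 123 block here until the first commits
        SELECT attendances_count, capacity
        FROM scheduled_runs
        WHERE id = 123
        FOR UPDATE;
        -- the application verifies attendances_count < capacity, then:
        INSERT INTO attendances (user_id, scheduled_run_id) VALUES (42, 123);
        COMMIT;

    In ActiveRecord terms this corresponds to loading the run with a pessimistic lock inside a transaction (e.g. ScheduledRun.find(params[:scheduled_run_id], :lock => true) within ScheduledRun.transaction { ... } in Rails of this vintage) and checking full? before saving the attendance.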

  • What's the best way to test SQL Server connection programmatically?

    - by backslash17
    I need to develop a single routine that will be fired every 5 minutes to check whether a list of SQL Servers (10 to 12) are up and running. I could run a simple query against each one of the servers, but this means I would have to create a table, view or stored procedure on every server; even if I used an already-made SP, I would need a registered user on each server too. The servers are not in the same physical location, so having those requirements would be a complex task. Is there a way to simply "ping" one SQL Server from C#? Thanks in advance!
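
    One low-footprint pattern (a sketch; the exact probe is an assumption): opening the connection already proves the instance is reachable, and any valid login can then run a trivial batch that touches no user objects, so nothing needs to be created on any of the servers:

        -- requires no tables, views or stored procedures on the target server
        SELECT 1;

    From C# this amounts to opening a SqlConnection with a short "Connect Timeout" in the connection string and executing the statement above; if Open() or the query throws, the server is treated as down. If even one login per server is unacceptable, a plain TCP connect to the instance's port (1433 for a default instance) at least verifies that the listener is up.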

  • How to use external triggers on Oracle 11g

    - by RBA
    Hi, I want to fire a trigger whenever an INSERT command is issued. The trigger will access a PL/SQL file which can change at any time, so the question is: if we design the trigger, how can we make sure this dynamic behaviour happens? It does not work as a stored procedure. I think it should work with 1) external procedures or 2) an EXECUTE statement. Please correct me if I am wrong. I was working on external procedures, but I am not able to find the way to execute the external procedure from here on:

        SQL> CREATE OR REPLACE FUNCTION Plstojavafac_func (N NUMBER) RETURN NUMBER AS
          2  LANGUAGE JAVA
          3  NAME 'Factorial.J_calcFactorial(int) return int';
          4  /

        SQL> CREATE OR REPLACE TRIGGER student_after_insert
          2  AFTER INSERT
          3  ON student
          4  FOR EACH ROW

    How do I call the function from here? And are my interpretations right? Please suggest. Thanks.
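
    For that last step, a sketch (the column name is hypothetical): a function published from Java this way is called like any other PL/SQL function, so the trigger body can invoke it directly:

        CREATE OR REPLACE TRIGGER student_after_insert
        AFTER INSERT ON student
        FOR EACH ROW
        DECLARE
          result NUMBER;
        BEGIN
          -- :NEW exposes the inserted row; marks is an assumed numeric column
          result := Plstojavafac_func(:NEW.marks);
        END;
        /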

  • Any tools or techniques for validating constraints programmatically between databases?

    - by Brandon
    If you had two databases with two tables between them that would normally implement a one-to-one (or many-to-many) constraint, but cannot since they are separate databases, how would you validate this relationship in an application or test? Is there a simple way to do this? For example, a tool or technique that, given a constraint type, tables and fields, does the validation. I imagine this isn't the first time this has come up, so I'm hoping people can share their solutions. Thanks.
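
    Short of a dedicated tool, one workable technique (a sketch with hypothetical database, table and column names) is a periodic orphan check: if one server can see both databases (same instance, or via a linked server), an anti-join finds rows that violate the would-be foreign key:

        -- rows in db_a with no matching counterpart in db_b
        SELECT a.id
        FROM db_a.dbo.parents AS a
        LEFT JOIN db_b.dbo.children AS b ON b.parent_id = a.id
        WHERE b.parent_id IS NULL;

    Running the check in both directions for each constraint, from a test suite or a scheduled job, gives the validation with no schema changes on either side: a non-empty result set is a violation.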

  • In MySQL, is it possible to SELECT from two tables and merge the columns?

    - by Travis
    If I have two tables in MySQL that have similar columns...

        TABLEA: id, name, somefield1
        TABLEB: id, name, somefield1, somefield2

    ...how do I structure a SELECT statement so that I can SELECT from both tables simultaneously and have the result sets merged for the columns that are the same? For example, I am hoping to do something like...

        SELECT name, somefield1 FROM TABLEA, TABLEB WHERE name="mooseburgers";

    ...and have the name and somefield1 columns from both tables merged together in the result set. Thank you for your help!
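
    What the example seems to be after is a UNION rather than a join (a sketch using the question's own names):

        SELECT name, somefield1 FROM TABLEA WHERE name = 'mooseburgers'
        UNION ALL
        SELECT name, somefield1 FROM TABLEB WHERE name = 'mooseburgers';

    Each SELECT must return the same number of columns; UNION ALL keeps duplicate rows, while plain UNION removes them.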

  • New replica set member's resident memory is larger than the existing members'

    - by eded
    Following the MongoDB tutorial on how to resync a replica set member, I wiped all the files in /data/db and restarted the mongod process to resync the data. Everything looks OK: I get the same number of documents as on the existing two members (the primary and one secondary). However, when I check the memory on MMS, the newly resynced member/mongod process shows different memory status values from the other two. The existing two report the following from db.serverStatus().mem:

        "mem" : {
            "bits" : 64,
            "resident" : 239,
            "virtual" : 66348,
            "supported" : true,
            "mapped" : 32865,
            "mappedWithJournal" : 65730
        }

    However, the newly resynced member shows:

        "mem" : {
            "bits" : 64,
            "resident" : 1239,
            "virtual" : 52447,
            "supported" : true,
            "mapped" : 25700,
            "mappedWithJournal" : 51400
        }

    The resynced member's resident memory is roughly five times that of the existing ones, and even the virtual and mapped values differ. I wonder if this is normal because all the data came in at once during the resync? Can anyone explain? Thanks.

  • How does SqlDataAdapter work internally?

    - by tigrou
    I wonder how SqlDataAdapter works internally, especially when using UpdateCommand to update a huge DataTable (since it's usually a lot faster than just sending SQL statements from a loop). Here are some ideas I have in mind:

    1. It creates a prepared SQL statement (using SqlCommand.Prepare()) with the CommandText filled in and SQL parameters initialized to the correct SQL types. Then it loops over the DataRows that need updating and, for each record, updates the parameter values and calls SqlCommand.ExecuteNonQuery().

    2. It creates a bunch of SqlCommand objects with everything filled in (CommandText and SQL parameters). Several SqlCommands at once are then batched to the server (depending on UpdateBatchSize).

    3. It uses some special, low-level or undocumented SQL driver instructions that allow an update to be performed on several rows in an efficient way (the rows to update would be provided in a special data format, and the same SQL query (the UpdateCommand here) would be executed against each of these rows).
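
    For intuition about the first two ideas, this is roughly the shape a parameterized UpdateCommand takes when it reaches SQL Server (an illustrative sketch with hypothetical table and parameter names; with batching enabled, several such invocations travel in one round trip):

        EXEC sp_executesql
            N'UPDATE dbo.customers SET name = @name WHERE id = @id',
            N'@name nvarchar(50), @id int',
            @name = N'Alice', @id = 42;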

  • Many-to-one relationship in SQLAlchemy

    - by Arrieta
    This is a beginner-level question. I have a catalog of mtypes:

        mtype_id  name
        1         'mtype1'
        2         'mtype2'
        [etc]

    and a catalog of objects, each of which must have an associated mtype:

        obj_id  mtype_id  name
        1       1         'obj1'
        2       1         'obj2'
        3       2         'obj3'
        [etc]

    I am trying to do this in SQLAlchemy by creating the following schemas:

        mtypes_table = Table('mtypes', metadata,
            Column('mtype_id', Integer, primary_key=True),
            Column('name', String(50), nullable=False, unique=True),
        )

        objs_table = Table('objects', metadata,
            Column('obj_id', Integer, primary_key=True),
            Column('mtype_id', None, ForeignKey('mtypes.mtype_id')),
            Column('name', String(50), nullable=False, unique=True),
        )

        mapper(MType, mtypes_table)
        mapper(MyObject, objs_table,
            properties={'mtype': Relationship(MType, backref='objs',
                                              cascade="all, delete-orphan")}
        )

    When I try to add a simple element like:

        mtype1 = MType('mtype1')
        obj1 = MyObject('obj1')
        obj1.mtype = mtype1
        session.add(obj1)

    I get the error:

        AttributeError: 'NoneType' object has no attribute 'cascade_iterator'

    Any ideas?

  • How to transform a local image into a web-accessible image

    - by hguser
    Hi: Generally, if we want to display an image in a web page, we give the URI of the image resource, like http://host:port/image/xxx.jpg. Now, there are some images in my file system, and I save their absolute paths in the DB, like this:

        id  name  address  image
        1   xxxx  xxxx     C:/images/xxx.jpg

    Now, when the entity is retrieved, its image should be displayed in the page. How do I make this happen? What I thought of was to copy the image under the web server directory and then build its URL so the page can render it. But I wonder if this is a good idea? Is there any other way?

  • How do indices cope with MVCC?

    - by geeko
    Greetings Overflowers, To my understanding (and I hope I'm not right), changes to indices cannot be MVCCed. I'm wondering if this is also true for big records, since copies can be costly. Given that records are usually accessed via indices, how can MVCC be effective? Do indices, for example, keep track of the different versions of MVCCed records? Any recent good reading on this subject? Really appreciated! Regards

  • Solr: what does this mean?

    - by Camran
    At the end of the README.txt file, which is located in the example directory under Solr, I find this note:

        NOTE: This Solr example server references SolrCell jars outside of the server
        directory with statements in the solrconfig.xml. If you make a copy of this
        example server and wish to use the ExtractingRequestHandler (SolrCell), you
        will need to copy the required jars into solr/lib or update the paths to
        the jars in your solrconfig.xml

    What does this mean? Do I have to make some adjustment before uploading Solr to my server? Also, if you know: how does solr-nightly differ from regular Solr? The tutorial says "solr-nightly.zip", but I can't find it in their download section.

  • Relating one rating table to multiple tables of different types?

    - by Tronic
    I have a table structure like this:

        Products
        Team
        Images

    and I want to implement a rating/commenting feature where users can rate each entry in all of these tables. What's the best way to make a single rating table? E.g. a user votes on a product and on a team entry, and it should be possible to get all of these ratings from a single table. What kind of table structure is best for this purpose? I hope my question is clear enough :/ Thanks in advance!
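
    A common pattern here (a MySQL-flavoured sketch; the names and types are assumptions) is a single polymorphic ratings table keyed by the target's type plus its id within that type's table:

        CREATE TABLE ratings (
            rating_id  INT AUTO_INCREMENT PRIMARY KEY,
            item_type  ENUM('product', 'team', 'image') NOT NULL,
            item_id    INT NOT NULL,  -- id within the table named by item_type
            user_id    INT NOT NULL,
            rating     TINYINT NOT NULL,
            UNIQUE KEY one_vote_per_item (item_type, item_id, user_id)
        );

    The trade-off is that (item_type, item_id) cannot be a real foreign key, so referential integrity for the rated rows has to be enforced in application code.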

  • Trying to create fields based on a CASE statement

    - by dido
    I'm having some trouble with the query below. I am trying to determine whether the "category" field is A, B or C, and then create a field based on the category; that field would sum up the payments field. But I'm running into an error saying "Incorrect syntax near the keyword 'As'". I am creating this in a SQL view, using SQL Server 2008.

        SELECT r.id, r.category
            CASE
                WHEN r.category = 'A' then SUM(r.payment) As A_payments
                WHEN r.category = 'B' then SUM(r.payment) As B_payments
                WHEN r.category = 'C' then SUM(r.payment) As C_payments
            END
        FROM r_invoiceTable As r
        GROUP BY r.id, r.category

    I have data where all of the above cases should be executed, because the data that I have contains A, B and C. Sample data (r_invoiceTable):

        Id   Category  Payment
        222  A         50
        444  A         30
        111  B         90
        777  C         20
        555  C         40

    Desired output: A_payments = 80, B_payments = 90, C_payments = 60
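
    The syntax error comes from placing the aliases inside the CASE arms: a CASE expression yields a single value, so the SUM and the alias belong outside it. A sketch that also matches the desired output, which aggregates across all ids:

        SELECT
            SUM(CASE WHEN category = 'A' THEN payment ELSE 0 END) AS A_payments,
            SUM(CASE WHEN category = 'B' THEN payment ELSE 0 END) AS B_payments,
            SUM(CASE WHEN category = 'C' THEN payment ELSE 0 END) AS C_payments
        FROM r_invoiceTable;

    Keeping GROUP BY id would instead return one row per invoice id rather than the single summary row shown.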
