Search Results

Search found 11052 results on 443 pages for 'linked tables'.

  • Complex Query on Cassandra

    - by Sadiqur Rahman
    I heard about the Cassandra database engine a few days ago and have been searching for good documentation on it. From what I have studied, Cassandra is more scalable than other data engines. I also looked at Amazon SimpleDB, but SimpleDB has a 10GB-per-table limitation and Google Datastore is slower than SimpleDB, so I would prefer not to use either of them. To make our site scale, especially for high write rates with massive data, I would like to use Cassandra as our data engine. But before starting with Cassandra, I am confused about how to handle complex data with it. The MySQL database structure is given below; please read it and give me a good suggestion.
    Users table: ID (primary), email (unique), FirstName, LastName
    Category table: ID (primary), Parent, Category
    Posts table: ID (primary), UID (index, foreign key to Users.ID), CID (index, foreign key to Category.ID), Title, Post (index), PunDate
    Comments table: ID (primary), UID (index, foreign key to Users.ID), PID (index, foreign key to Posts.ID), Comment
    User Group table: ID (primary), Name
    UserToGroup table (for the many-to-many relation only): UID (foreign key to Users.ID), GID (foreign key to Group.ID)
    Finally, for your information, I would like to use the SimpleCassie PHP class (http://code.google.com/p/simpletools-php/), so it would be very helpful if you could give me an example using SimpleCassie.

  • django-uni-form helpers and CSRF tags over POST

    - by linked
    Hi, I'm using django-uni-form to display my fields, with a rather rudimentary example straight out of their book. When I render the form fields using <form>{% csrf_token %} {{ form|as_uni_form }}</form>, everything works as expected. However, django-uni-form helpers allow you to generate the form tag (and other helper-related content) using the following syntax -- {% with form.helper as helper %}{% uni_form form helper %}{% endwith %} -- This creates the <form> tag for me, so there's nowhere to embed my own csrf_token. When I try to use this syntax, the form renders perfectly, but without a CSRF token, and so submitting the form fails every time. Does anyone have experience with this? Is there an established way to add the token? I much prefer the second syntax, for reuse reasons. Thanks!

  • C# thread functions not properly sharing a static data member

    - by Umer
    I have a class as following public class ScheduledUpdater { private static Queue<int> PendingIDs = new Queue<int>(); private static bool UpdateThreadRunning = false; private static bool IsGetAndSaveScheduledUpdateRunning = false; private static DataTable ScheduleConfiguration; private static Thread updateRefTableThread; private static Thread threadToGetAndSaveScheduledUpdate; public static void ProcessScheduledUpdates(int ID) { //do some stuff // if ( updateRefTableThread not already running) // execute updateRefTableThread = new Thread(new ThreadStart(UpdateSchedulingRefTableInThrear)); // execute updateRefTableThread.Start(); //do some stuff GetAndSaveScheduledUpdate(ID) } private static void UpdateSchedulingRefTableInThrear() { UpdateSchedulingRefTable(); } public static void UpdateSchedulingRefTable() { // read DB and update ScheduleConfiguration string query = " SELECT ID,TimeToSendEmail FROM TBLa WHERE MODE = 'WebServiceOrder' AND BDELETE = false "; clsCommandBuilder commandBuilder = new clsCommandBuilder(); DataSet ds = commandBuilder.GetDataSet(query); if (ds != null && ds.Tables.Count > 0 && ds.Tables[0].Rows.Count > 0) { List<string> lstIDs = new List<string>(); for (int i = 0; i < ds.Tables[0].Rows.Count; i++) { lstIDs.Add(ds.Tables[0].Rows[i]["ID"].ToString()); if (LastEmailSend.Contains(ds.Tables[0].Rows[i]["ID"].ToString())) LastEmailSend[ds.Tables[0].Rows[i]["ID"].ToString()] = ds.Tables[0].Rows[i]["TimeToSendEmail"].ToString(); else LastEmailSend.Add(ds.Tables[0].Rows[i]["ID"].ToString(), ds.Tables[0].Rows[i]["TimeToSendEmail"].ToString()); } if (lstIDs.Count > 0) { string Ids = string.Join(",", lstIDs.ToArray()).Trim(','); dhDBNames dbNames = new dhDBNames(); dbNames.Default_DB_Name = dbNames.ControlDB; dhGeneralPurpose dhGeneral = new dhGeneralPurpose(); dhGeneral.StringDH = Ids; DataSet result = commandBuilder.GetDataSet(dbNames, (object)dhGeneral, "xmlGetConfigurations"); if (result != null && result.Tables.Count > 0) { if (ScheduleConfiguration != null) ScheduleConfiguration.Clear(); ScheduleConfiguration = result.Tables[0]; } } } } public static void GetAndSaveScheduledUpdate(int ID) { //use ScheduleConfiguration if (ScheduleConfiguration == null)[1] UpdateSchedulingRefTable(); DataRow[] result = ScheduleConfiguration.Select("ID = "+ID); //then for each result row, i add this to a static Queue PendingIDs } } The function UpdateSchedulingRefTable can be called any time from outside world (for instance if someone updates the schedule configuration manually) ProcessScheduledUpdates is called from a windows service every other minute. Problem: Datatable ScheduleConfiguration is updated in the UpdateSchedulingRefTable (called from outside world - say manually) but when i try to use Datatable ScheduleConfiguration in GetAndSaveScheduledUpdate, i get the older version of values.... What am I missing in this stuff??? About EDIT: I thought the stuff i have not shown is quite obvious and possibly not desired, perhaps my structure is wrong :) and sorry for incorrect code previously, i made a simple function call as a thread initialization... sorry for my code indentation too because i don't know how to format whole block...

  • Why does one loop take longer to detect a shared memory update than another loop?

    - by Joseph Garvin
    I've written a 'server' program that writes to shared memory, and a client program that reads from the memory. The server has different 'channels' that it can be writing to, which are just different linked lists that it's appending items to. The client is interested in some of the linked lists, and wants to read every node that's added to those lists as it comes in, with the minimum latency possible. I have 2 approaches for the client:
    1. For each linked list, the client keeps a 'bookmark' pointer to keep its place within the linked list. It round-robins the linked lists, iterating through all of them over and over (it loops forever), moving each bookmark one node forward each time if it can. Whether it can is determined by the value of a 'next' member of the node. If it's non-null, then jumping to the next node is safe (the server switches it from null to non-null atomically). This approach works OK, but if there are a lot of lists to iterate over, and only a few of them are receiving updates, the latency gets bad.
    2. The server gives each list a unique ID. Each time the server appends an item to a list, it also appends the ID number of the list to a master 'update list'. The client only keeps one bookmark, a bookmark into the update list. It endlessly checks if the bookmark's next pointer is non-null ( while(node->next_ == NULL) {} ); if so, it moves ahead, reads the ID given, and then processes the new node on the linked list that has that ID. This, in theory, should handle large numbers of lists much better, because the client doesn't have to iterate over all of them each time.
    When I benchmarked the latency of both approaches (using gettimeofday), to my surprise #2 was terrible. The first approach, for a small number of linked lists, would often be under 20us of latency. The second approach would have short stretches of low latency but would often be between 4,000-7,000us! By inserting gettimeofday calls here and there, I've determined that all of the added latency in approach #2 is spent in the loop repeatedly checking if the next pointer is non-null. This is puzzling to me; it's as if the change in one process is taking longer to 'publish' to the second process with the second approach. I assume there's some sort of cache interaction going on that I don't understand. What's going on?

  • Optimistic non-locking copy of InnoDB .frm files

    - by jothir
    MySQL Enterprise Backup(MEB) does hot backup of innodb data and log files. Till MEB 3.6.1, the user backs up the only innodb tables in a 3 step process: STEP 1. Take backup using --only-innodb option STEP 2. Temporarily make the table read only by executing “FLUSH TABLES WITH READ LOCK” MEB 3.7.0 has an enhancement to innodb file copying. The .frm files gets copied along with the hot backup done for innodb files. I would like to make the blog a little interactive by explaining the feature as answers: 1. What are these .frm files? The files containing the metadata, such as the table definition, of a MySQL table. For backups, the full set of .frm files are always required along with the backup data, to be able to restore tables that are altered or dropped after the backup. 2. Can the .frm files not be copied by MEB itself? --only-innodb-with-frm is the new option introduced in MEB 3.7.1 to do a copy of .frm files without locking the tables during backup operation itself. This is to reduce the pain of manually copying the .frm files. The option is intended for backups where you can ensure that no ALTER TABLE, CREATE TABLE, DROP TABLE, or other DDL statements modify the .frm files for InnoDB tables during the backup operation. 3. How is data consistency ensured? MEB does validation of the .frm files after copying by comparing with the server directory to see if the timestamps of any of the .frm files is greater than the saved system time (check .frm time).  This change in timestamp of the .frm files will show if a table is altered during the process of backup. The total number of frm files in the server directory is also verified against the copied contents. If the number of .frm files is less compared to server directory, it shows that table/tables have been dropped during the process of backup. If the number of .frm files is more compared to server directory, it shows that new table/tables have been created during backup operation. 4. How does MEB handle data inconsistency? MEB copies the .frm files through several iterations,  does the validation and throws a WARNING if there is any inconsistency found in .frm files at the end of backup operation. This means the user is warned of some DDL operations that had occurred during backup operation, and has to manually copy the .frm files or do a backup again. 5. What is the option and explain its usage? The option introduced is --only-innodb-with-frm which does optimistic copy of .frm files without locking. This can be used when the user wants to backup only innodb tables along with .frm files. The option can take one of the 2 values: all | related. --only-innodb-with-frm=all does copy of all .frm files of all innodb tables. --only-innodb-with-frm=related works in conjunction with --include option.This is to allow partial backup of .frm files corresponding to the tables specified in --include. Let me show the usage with example output: ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll --only-innodb-with-frm=all backup MySQL Enterprise Backup version 3.7.1 [2012/06/05] Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved. INFO: Starting with following command line ... ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll        --only-innodb-with-frm=all backup INFO: Got some server configuration information from running server. IMPORTANT: Please check that mysqlbackup run completes successfully.            At the end of a successful 'backup' run mysqlbackup            prints "mysqlbackup completed OK!". 
--------------------------------------------------------------------                       Server Repository Options: --------------------------------------------------------------------  datadir                          =  /mysql/trydb/  innodb_data_home_dir             =    innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /mysql/trydb/  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 --------------------------------------------------------------------                       Backup Config Options: --------------------------------------------------------------------  datadir                          =  /logs/backupWithFrmAll/datadir  innodb_data_home_dir             =  /logs/backupWithFrmAll/datadir  innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /logs/backupWithFrmAll/datadir  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 mysqlbackup: INFO: Unique generated backup id for this is 13451979804504860 mysqlbackup: INFO: Uses posix_fadvise() for performance optimization. mysqlbackup: INFO: System tablespace file format is Antelope. mysqlbackup: INFO: Found checkpoint at lsn 1656792. mysqlbackup: INFO: Starting log scan from lsn 1656320. 120817 15:36:22 mysqlbackup: INFO: Copying log... 120817 15:36:22 mysqlbackup: INFO: Log copied, lsn 1656792.          We wait 1 second before starting copying the data files... 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/ibdata1 (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table2.ibd (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table3.ibd (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table1.ibd (Antelope file format). mysqlbackup: INFO: Opening backup source directory '/mysql/trydb/' 120817 15:36:23 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb/ mysqlbackup: INFO: Copying innodb data and logs during final stage ... mysqlbackup: INFO: A copied database page was modified at 1656792.          (This is the highest lsn found on page)          Scanned log up to lsn 1656792.          Was able to parse the log up to lsn 1656792.          Maximum page number for a log record 0 mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds 120817 15:36:25 mysqlbackup: INFO: Full backup completed! mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrmAll' -------------------------------------------------------------   Parameters Summary          -------------------------------------------------------------   Start LSN                  : 1656320   End LSN                    : 1656792 ------------------------------------------------------------- mysqlbackup completed OK! bash$ ls /logs/backupWithFrmAll/datadir/innodb1/ table1.frm  table1.ibd  table2.frm  table2.ibd  table3.frm  table3.ibd Here the backup directory contains all the .frm files of all the innodb tables. ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrm --include="innodb1.table3.*" --only-innodb-with-frm=related backup MySQL Enterprise Backup version 3.7.1 [2012/06/05] Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved. INFO: Starting with following command line ... 
./mysqlbackup -uroot --backup-dir=/logs/backup371frm        --include=innodb1.table3.* --only-innodb-with-frm=related backup INFO: Got some server configuration information from running server. IMPORTANT: Please check that mysqlbackup run completes successfully.            At the end of a successful 'backup' run mysqlbackup            prints "mysqlbackup completed OK!". --------------------------------------------------------------------                       Server Repository Options: --------------------------------------------------------------------  datadir                          = /mysql/trydb/  innodb_data_home_dir             =    innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /mysql/trydb  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 --------------------------------------------------------------------                       Backup Config Options: --------------------------------------------------------------------  datadir                          =  /logs/backupWithFrm/datadir  innodb_data_home_dir             =  /logs/backupWithFrm/datadir  innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /logs/backupWithFrm/datadir  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 mysqlbackup: INFO: Unique generated backup id for this is 13451973458118162 mysqlbackup: INFO: Uses posix_fadvise() for performance optimization. mysqlbackup: INFO: The --include option specified: innodb1.table3.* mysqlbackup: INFO: System tablespace file format is Antelope. mysqlbackup: INFO: Found checkpoint at lsn 1656792. mysqlbackup: INFO: Starting log scan from lsn 1656320. 120817 15:25:47 mysqlbackup: INFO: Copying log... 120817 15:25:47 mysqlbackup: INFO: Log copied, lsn 1656792.          We wait 1 second before starting copying the data files... 120817 15:25:48 mysqlbackup: INFO: Copying /mysql/trydbibdata1 (Antelope file format). 120817 15:25:49 mysqlbackup: INFO: Copying /mysql/trydbinnodb1/table3.ibd (Antelope file format). mysqlbackup: INFO: Opening backup source directory '/mysql/trydb' 120817 15:25:49 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb mysqlbackup: INFO: Copying innodb data and logs during final stage ... mysqlbackup: INFO: A copied database page was modified at 1656792.          (This is the highest lsn found on page)          Scanned log up to lsn 1656792.          Was able to parse the log up to lsn 1656792.          Maximum page number for a log record 0 mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds 120817 15:25:51 mysqlbackup: INFO: Full backup completed! mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrm' -------------------------------------------------------------   Parameters Summary          -------------------------------------------------------------   Start LSN                  : 1656320   End LSN                    : 1656792 ------------------------------------------------------------- mysqlbackup completed OK! bash$ ls /logs/backupWithFrm/datadir/innodb1/ table3.frm table3.ibd Thus the backup directory contains only the .frm file matching the innodb table name specified in --include option. In a nutshell, we present our great new option --only-innodb-with-frm which is a true hot InnoDB-only backup with .frm files, but with an additional check, if any DDL happened during the backup. 
If a DDL has happened, the DBA can decide whether to repeat the backup or to live with the potential inconsistency. This is the ideal solution for users that have all their "real" data in InnoDB and seldom change their schemas. You may also like: http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/backup-partial-options.html (STEP 3 of the old three-step process, for completeness: manually copy the .frm files of the InnoDB tables to the destination directory where the backup is stored.)

  • A keyboard shortcut from a sequence of keystrokes for Linux

    - by Little Bobby Tables
    I am looking for a way to define an Emacs-style key sequence as a keyboard shortcut in Linux - specifically in Gnome, but more general solutions are also acceptable. For example, I would like a sequence like "Alt-w t" (that is, first press Alt-w and then t) to open a terminal, "Alt-w c" to close a window, and so on. The rationale behind this question is twofold: to make more use of desktop-wide keyboard shortcuts, and to make an old keyboard that has no Win key usable with desktop-wide keyboard shortcuts, without causing too many collisions with applications - specifically with Emacs. Thanks!

  • Emacs and window manager keyboard shortcuts without Win key

    - by Little Bobby Tables
    I found a classic M-Series keyboard and I want to use it. However, it does not have the "Windows" key (a.k.a. "Super"), only the Shift, Control and Alt modifiers. My keyboard shortcuts are cluttered as-is, since I try to control both Emacs and the window manager (Gnome) only from the keyboard, and I rely on the "Super" key to identify the window manager shortcuts. What is the best practice for keyboard-centric work without the "Super" key?

  • Work Item Visualizer for TFS 2010 - New Extension

    - by MikeParks
    I released another new extension to the Visual Studio Gallery today called Work Item Visualizer for TFS 2010. I've only heard positive things about it so far, hopefully it stays that way :) Basically, it creates a diagram of all work items linked to a work item ID which the user specifies in a search box. This extension was coded using DGML (the same graph rendering language used for the Visual Studio 2010 Architecture Tools). It was pretty cool getting a chance to create something using some of the newest technology out there. Well, I just wanted to throw a blog up to get the word out on it a little more. If you're using Visual Studio 2010 with Team Foundation Server 2010, feel free to check it out! Thanks everyone. Download Link: http://visualstudiogallery.msdn.microsoft.com/en-us/a35b6010-750b-47f6-a7a5-41f0fa7294d2
    What it does:
    · Creates a DGML graph to visualize linked TFS Work Items by entering a Work Item ID in the toolbar search box
    How it benefits you:
    · Allows you to easily analyze the hierarchy of your TFS Work Items
    · Gain the ability to perform basic risk/impact analysis when creating or editing Work Items
    · Great for meetings in the case that you need to discuss the entire scope of linked Work Items
    · Easier project planning
    · Eliminates the need to create TFS queries or reports to view a tree of Work Items
    · Easily lets you see the entire tree of work items linked to the one you're working on
    Navigation Tips:
    · Use Ctrl + Mouse Wheel Scroll to zoom in and out
    · Use Ctrl + Left Mouse click (and hold) to move the document around
    · Right click on the DGML area for more options (like copying the image or viewing in groups)
    · Clicking on each node highlights that node and the links connected to it
    · Colors in the legend can be changed
    · When work item nodes are deleted, the view is automatically updated
    · Double clicking on a work item node will open up the Work Item's URL
    Try it out on work items that have several links and let us know what you think. A big thanks goes out to everyone working on the http://visualization.codeplex.com/ project for publishing the source code on CodePlex, which really helped me learn how DGML (Directed Graph Markup Language - new to the Visual Studio 2010 Architecture Tools) works!
    - Mike

  • General questions about link spam

    - by hen3ry
    Hello, A CMS-based site I manage is suffering from a small but ominously growing number of almost certainly bot-emplaced, invisible spam links placed in registered-user-only shoutboxes and user forums. "Link Spam", yes? Until recently, I've kept my eyes on narrow tech issues, and I'm having trouble understanding what's going on. I understand that we need to tighten up our registration procedures, but more generally... Do I understand correctly that our primary interest in combatting link spam on our site is that major search engines reduce or zero the search visibility of sites that contain link spam? Although we're non-commercial, we don't want to be at the bottom of the rankings, or eliminated altogether. Are the linked-to sites the direct beneficiaries of the spam links, or is there some kind of indirection? What is the likely relationship between the link-spammers and the owners of the (directly or indirectly) linked-to sites? Are the owners of the linked-to sites paying the link-spammers for higher visibility? Are the owners aware that this method is being used? It is my impression that major search engines are capable these days of detecting that given sites are being promoted by link spam, and that these sites may consequently be reduced in search rank or dropped altogether. Do these sanctions occur frequently? Is there any potential value in sending notifications to the owners of the linked-to sites that their visibility is at risk? TIA, hen3ry

  • Using Python to traverse a parent-child data set

    - by user132748
    I have a dataset of two columns in a CSV file. The purpose of this dataset is to provide a link between two different IDs if they belong to the same person, e.g. (2, 3, 5 belong to 1). For example, with columns COLA and COLB the rows are: 1 2 ; 1 3 ; 1 5 ; 2 6 ; 3 7 ; 9 10. In the above example, 1 is linked to 2, 3, 5, while 2 is linked to 6 and 3 is linked to 7. What I am trying to achieve is to identify all records which are linked to 1 directly (2, 3, 5) or indirectly (6, 7), and be able to say that these IDs in column B belong to the same person in column A, and then either dedupe or add a new column to the output file which will have 1 populated for all rows that link to 1. Example of expected output (colA colB GroupField): 1 2 1; 1 3 1; 1 5 1; 2 6 1; 3 7 1; 9 10 9; 10 11 9. I am a newbie and so am not sure how to approach this problem. Appreciate any input you can provide.
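    For what it's worth, if the pairs were loaded into a database table rather than processed in Python, the same transitive grouping can be expressed with a recursive CTE. A hedged sketch follows; the table name links and its columns are hypothetical, and it assumes an engine with recursive CTE support (MySQL 8+ / PostgreSQL syntax):

        -- hedged sketch, assuming the CSV pairs are loaded into a table links(cola, colb)
        WITH RECURSIVE chain AS (
            -- anchor: start from ids that never appear as a child, i.e. the group roots
            SELECT cola AS group_field, colb AS member
            FROM   links
            WHERE  cola NOT IN (SELECT colb FROM links)
            UNION ALL
            -- walk the indirect links: 1 -> 2 -> 6, 1 -> 3 -> 7, ...
            SELECT chain.group_field, links.colb
            FROM   chain
            JOIN   links ON links.cola = chain.member
        )
        SELECT member AS colb, group_field
        FROM   chain
        ORDER  BY group_field, member;

    The anchor rows pick out the group roots (IDs that never appear in COLB), and each recursive step follows one more indirect link; the roots themselves can be added back with a UNION if they are needed in the output.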

  • Working with Google Webmaster Tools

    - by com
    My first question is about Crawl errors in Google Webmaster Tools. Crawl errors is divided into a few sections, one of them being HTTP. I assume all broken links in HTTP were somehow found by the crawler and are not links from the sitemap. If they were found by scanning all sitemap pages for links, why doesn't it mention the source page, like in the sitemap section with its Linked From column? And what is the meaning of Linked From? I thought that if the section is named sitemap, all URLs should be taken from the sitemap, so why is there a Linked From? The second question: what is the best way to treat searching on the site? How come the search result pages are getting indexed? Because all the search result pages are getting indexed, I have too many pages in Linked From. What's the right practice? Question three: in order to improve response time in WMT, can I redirect all crawler requests to a designated free web server? Is this good practice? Question four: how should I treat the Google Analytics code (with parameters PageView, PageLoadTime) in the case where a user requests a non-existing page - should I render the Google code or not? Right now I use the Google Analytics code on the common template page, so that every page, including non-existing pages with an error message, contains the Google Analytics code, and it seems like it has an influence on WMT.

  • How can I copy a SQL record which has related records in other tables to the same database?

    - by DerekVS
    Hi. I created a function in C# which allows me to copy a record and its related children to a new record and new related children in the same database. (This is for an application that allows the use of previous work as a template for new work.) Anyway, it works great... Here's a description of how it accomplishes the copy: It populates a two-column memory-based look-up table with the current primary key of each record. Next, as it individually creates each new copy record, it updates the look-up table with the Identity PK of the new record [retrieved from SCOPE_IDENTITY()]. Now, when it copies over any related children, it can look up the new parent PK to set the FK on the new record. In testing, it only took a minute to copy a relational structure on a local instance of SQL Server 2005 Express Edition. Unfortunately it is proving to be horribly slow in production! My users are dealing with 60,000+ records per parent record over the LAN to our SQL Server! While my copy function still works, each of those records represents an individual SQL UPDATE command and it loads the SQL Server at about 17% CPU from its normal 2% idle. I just finished testing a 50,000 record copy and it took almost 20 minutes! Is there a way to duplicate this functionality in SQL queries or stored procedures to make the SQL Server do all of the copy work instead of blasting it over the LAN from each client? (We're running Microsoft SQL Server 2005 Standard Edition.) Thanks! -Derek
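    A set-based sketch of that idea is below, assuming hypothetical table names Parent(ParentID identity, Name) and Child(ChildID identity, ParentID, Detail) and a @SourceParentID parameter; the point is that the children are copied in a single server-side INSERT ... SELECT instead of one statement per row:

        -- @SourceParentID is the record being used as a template (e.g. a stored procedure parameter)
        DECLARE @NewParentID int;

        -- copy the parent row and capture its new identity key
        INSERT INTO Parent (Name)
        SELECT Name
        FROM   Parent
        WHERE  ParentID = @SourceParentID;

        SET @NewParentID = SCOPE_IDENTITY();

        -- copy all of its children in one set-based statement,
        -- re-pointing the foreign key at the new parent
        INSERT INTO Child (ParentID, Detail)
        SELECT @NewParentID, Detail
        FROM   Child
        WHERE  ParentID = @SourceParentID;

    If the hierarchy is more than one level deep, the same pattern applies per level, with a temporary mapping table of old-to-new keys taking the place of the in-memory look-up table.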

  • Replacement relevance sorting for MySQL fulltext in InnoDB tables?

    - by Giles Smith
    I am using MySQL InnoDB and want to do fulltext style searches on certain columns. What is the best way of sorting the results in order of relevance? I am trying to do something like: SELECT columns, (LENGTH(column_name) - LENGTH(REPLACE(column_name, '%searchterm%', ''))) AS score FROM table WHERE column_name LIKE '%searchterm%' ORDER BY score However this will become quite complicated once I start searching more than 1 column or using more than one keyword. I already have several joins happening so have simplified the query above. Can anyone suggest a better way? I don't want to use any 3rd party software like Sphinx etc
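    One way to keep the scoring manageable as columns and keywords multiply is to give each column/keyword match an explicit weight and sum them. A rough sketch, with hypothetical column names title and description and two search terms:

        -- hedged sketch: per-column weights, summed into a relevance score
        SELECT t.*,
               (CASE WHEN t.title       LIKE '%term1%' THEN 3 ELSE 0 END
              + CASE WHEN t.description LIKE '%term1%' THEN 1 ELSE 0 END
              + CASE WHEN t.title       LIKE '%term2%' THEN 3 ELSE 0 END
              + CASE WHEN t.description LIKE '%term2%' THEN 1 ELSE 0 END) AS score
        FROM   articles t
        WHERE  t.title LIKE '%term1%' OR t.description LIKE '%term1%'
           OR  t.title LIKE '%term2%' OR t.description LIKE '%term2%'
        ORDER  BY score DESC;

    LIKE '%...%' still cannot use ordinary B-tree indexes, so beyond modest table sizes an external engine such as Sphinx (or, on later MySQL versions, InnoDB's native fulltext) remains the more scalable route.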

  • Dumping views with mysqldump in the right order

    - by Bushibytes
    I have a script that backs up our database, which contains multiple tables and views constructed from those tables. The command used is: mysqldump -u UserName -ppassword -h hostname DatabaseName > dump.sql. I have noticed, however, that some view definitions are backed up before the definitions of the tables. This causes an issue when restoring using the classic mysql -u UserName -p < dump.sql, because when it tries to create the view, the table it needs does not exist yet. It is possible to edit the dump files before restoring, but I was wondering: is there a way to make sure that mysqldump backs up the tables and views in the right order? Or is there a way to restore from a dump that will find the right tables to create first (or create sane temporary tables)? Edit for version: mysqldump Ver 10.11 Distrib 5.0.51b, for redhat-linux-gnu (x86_64)
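    One hedged workaround is to dump the base tables and the views as two separate files and replay them in that order; the list of views can be pulled from information_schema so nothing has to be maintained by hand:

        -- list the views in the schema, e.g. to feed a second mysqldump
        -- invocation (or to recreate them after the tables are restored)
        SELECT TABLE_NAME, VIEW_DEFINITION
        FROM   information_schema.VIEWS
        WHERE  TABLE_SCHEMA = 'DatabaseName';

    If I recall correctly, later mysqldump versions emit a stand-in dummy table for each view and only create the real view at the end of the dump, which avoids the ordering problem entirely, so upgrading the client is also worth checking.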

  • Database per application VS One big database for all applications

    - by Jorge Vargas
    Hello, I'm designing a few applications that will share 2 or 3 database tables, and all of the other tables will be independent of each app. The shared tables contain mostly user information, and there may be cases where other tables need to be shared, but that's my instinct speaking. I'm leaning toward the one-database-for-all-applications solution because I want referential integrity and I won't have to keep the same information up to date in each of the databases, but I'll probably end up with a database of 100+ tables where only groups of ten tables hold related information. The database-per-application approach helps me keep everything more organized, but I don't know of a way to keep the related tables in all the databases up to date. So, the basic question is: which of the two approaches do you recommend? Thanks, Jorge Vargas.

  • Analyzing data from the same tables in different DB instances

    - by Oscar Reyes
    Short version: how can I map two columns from tables A and B if they both have a common identifier which in turn may have two values in column C? Let's say:
    A --- 1, 2
    B --- ?, 3
    C --- 45, 2
          45, 3
    Using table C I know that ids 2 and 3 belong to the same item (45), and thus "?" in table B should be 1. What query could do something like that?
    EDIT: Long version omitted. It was really boring/confusing.
    EDIT: I'm posting some output here. From this query: select distinct(rolein), activityin from taskperformance@dm_prod where activityin in ( select activityin from activities@dm_prod where activityid in ( select activityid from activities@dm_prod where activityin in ( select distinct(activityin) from taskperformance where rolein = 0 ) ) ) I have the following parts: select distinct(activityin) from taskperformance where rolein = 0 (output: http://question1337216.pastebin.com/f5039557), then select activityin from activities@dm_prod where activityid in ( select activityid from activities@dm_prod where activityin in ( select distinct(activityin) from taskperformance where rolein = 0 ) ) (output: http://question1337216.pastebin.com/f6cef9393), and finally the full query above (output: http://question1337216.pastebin.com/f346057bd). Take for instance activityin 335 from the first query (from taskperformance in B). It is present in activities in A, but is not in taskperformance in A (though the related activities 92, 208, 335, 595 are present in the result). The corresponding rolein is 1.
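    Assuming hypothetical tables a(val, id), b(val, id) and c(item, id), where c ties ids together via a shared item, a sketch of the kind of query the short version describes (the "?" comes from joining through c twice):

        -- hedged sketch: relate B's id to A's value through the common item in C
        SELECT a.val  AS a_value,   -- 1
               b.id   AS b_id       -- 3
        FROM   b
        JOIN   c cb ON cb.id   = b.id      -- 3  -> item 45
        JOIN   c ca ON ca.item = cb.item   -- 45 -> id 2
        JOIN   a    ON a.id    = ca.id;    -- 2  -> value 1

    If the same item can map to more than two ids, adding DISTINCT (or a GROUP BY) keeps the pairing from fanning out.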

  • [Repost-ish] Impossibly slow queries, Tables indexed, How can I speed it up?

    - by colorfulgrayscale
    Hi guys, I posted a little earlier on here at http://stackoverflow.com/questions/2656837/query-results-taking-too-long-on-200k-database-speed-up-tips asking about slow executing SQL queries. I was told to index the columns; I did. and its still slow (slow as in, i never see the results, both mysql and sqlite freeze up on query). Help would be greatly appreciated. Here is the SQL SELECT equipment.`unitID` AS `equipment_unitID`, equipment.`fleetCode` AS `equipment_fleetCode`, equipment.type AS equipment_type, equipment.tiremap AS equipment_tiremap, tiremap.`TireID` AS `tiremap_TireID`, tiremap.`WorkMap` AS `tiremap_WorkMap`, tiremap.`Position` AS `tiremap_Position`, tiremap.`DepthMap` AS `tiremap_DepthMap`, tiremap.timestamp AS tiremap_timestamp, workreference.`aMap` AS `workreference_aMap`, workreference.`bMap` AS `workreference_bMap`, tirework.`RO` AS `tirework_RO`, tirework.location AS tirework_location, tirework.mileage AS tirework_mileage, tirework.`mechanicCode` AS `tirework_mechanicCode`, tirework.`partNumber` AS `tirework_partNumber`, tirework.`historyID` AS `tirework_historyID`, tirework.workmap AS tirework_workmap, tirework.timestamp AS tirework_timestamp FROM equipment, tiremap, workreference, tirework WHERE equipment.tiremap = tiremap.`TireID` AND tiremap.`WorkMap` = workreference.`aMap` AND workreference.`bMap` = tirework.workmap LIMIT 5 and here is the EXPLAIN for it id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE equipment ALL tiremap 14079 1 SIMPLE tiremap ref PRIMARY,WorkMap,TireID,WorkMap_2 PRIMARY 52 tire.equipment.tiremap 3 1 SIMPLE workreference ref aMap,bMap aMap 52 tire.tiremap.WorkMap 1 1 SIMPLE tirework eq_ref NewIndex1 NewIndex1 52 tire.workreference.bMap 1
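    For reference, here is the same query written with explicit JOIN syntax (a hedged rewrite, no behavioral change). With the single-column indexes already in place, the usual culprit for a plan like this is a pair of join columns whose types or collations don't match exactly, which prevents MySQL from using the indexes it lists under possible_keys:

        -- explicit-join form of the original query (same columns, same result)
        SELECT e.unitID       AS equipment_unitID,
               e.fleetCode    AS equipment_fleetCode,
               e.type         AS equipment_type,
               e.tiremap      AS equipment_tiremap,
               t.TireID       AS tiremap_TireID,
               t.WorkMap      AS tiremap_WorkMap,
               t.Position     AS tiremap_Position,
               t.DepthMap     AS tiremap_DepthMap,
               t.timestamp    AS tiremap_timestamp,
               w.aMap         AS workreference_aMap,
               w.bMap         AS workreference_bMap,
               k.RO           AS tirework_RO,
               k.location     AS tirework_location,
               k.mileage      AS tirework_mileage,
               k.mechanicCode AS tirework_mechanicCode,
               k.partNumber   AS tirework_partNumber,
               k.historyID    AS tirework_historyID,
               k.workmap      AS tirework_workmap,
               k.timestamp    AS tirework_timestamp
        FROM   equipment     AS e
        JOIN   tiremap       AS t ON t.TireID  = e.tiremap
        JOIN   workreference AS w ON w.aMap    = t.WorkMap
        JOIN   tirework      AS k ON k.workmap = w.bMap
        LIMIT  5;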

  • Oracle vocabulary: what is the MySQL/SQL Server equivalent of a database?

    - by jeph perro
    Hi, I need some help with vocabulary, I don't use Oracle that often but I am familiar with MySQL and SQL Server. I have an application I need to upgrade and migrate, and part of the procedure to do that involves exporting to an XML file, allowing the installer to create new database tables, then import to the new database tables from the XML file. In Oracle, my database connection specifies a username, password, and an SID. Let's call the SID, "COMPANY-APPS". This SID contains tables belonging to several applications, each of which connects with a different user ( "WIKIUSER", "BUGUSER", "TIMETRACKERUSER" ). My question is: Can I re-use the same user to create the new tables ( with the same names ). In MySQL or SQL Server, I would create a new database and grant my user privileges to create tables in it. OR, do I need to create a new database user for my upgraded tables?
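    For what it's worth, in Oracle terms a schema is simply the set of objects owned by a user, and two tables with the same name cannot coexist in one schema, so the closest analogue of "create a new database and grant privileges on it" is creating a new user. A hedged sketch (the user name, password and tablespace are assumptions):

        -- create a fresh schema (user) to hold the upgraded tables
        CREATE USER wikiuser2 IDENTIFIED BY some_password
            DEFAULT TABLESPACE users
            QUOTA UNLIMITED ON users;

        GRANT CREATE SESSION, CREATE TABLE TO wikiuser2;

    Within the existing user/schema the new tables would have to be given different names, so a separate user is the usual way to get a clean namespace while keeping the same SID.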

  • Oracle vocabulary: what is the MySQL/MSSQL equivalent of a database?

    - by jeph perro
    Hi, I need some help with vocabulary, I don't use Oracle that often but I am familiar with MySQL and MSSQL. I have an application I need to upgrade and migrate, and part of the procedure to do that involves exporting to an XML, create new database tables, then populate the new database tables from the XML export. In Oracle, my database connection specifies a username, password, and an SID. This SID contains tables belonging to several applications. My question is: Can I re-use the same user to create the new tables ( with the same names ). In MySQL or MSSQL, I would create a new database and grant my user privileges to create tables in it. OR, do I need to create a new database user for my upgraded tables?

  • Storing session data in MySQL using PHP is not retrieving the data properly from the tables

    - by Ronedog
    I have a problem retrieving some data from the $_SESSION using php and mysql. I've commented out the line in php.ini that tells the server to use the "file" to store the session info so my database will be used. I have a class that I use to write the information to the database and its working fine. When the user passes their credentials the class gets instantiated and the $_SESSION vars get set, then the user gets redirected to the index page. The index.php page includes the file where the db session class is, which when instantiated calles session_start() and the session variables should be in $_SESSION, but when I do var_dump($_SESSION) there is nothing in the array. However, when I look at the data in mysql, all the session information is in there. Its acting like session_start() has not been called, but by instantiating the class it is. Any idea what could be wrong? Here's the HTML: <?php include_once "classes/phpsessions_db/class.dbsession.php"; //used for sessions var_dump($_SESSION); ?> <html> . . . </html> Here's the dbsession class: <?php error_reporting(E_ALL); class dbSession { function dbSession($gc_maxlifetime = "", $gc_probability = "", $gc_divisor = "") { // if $gc_maxlifetime is specified and is an integer number if ($gc_maxlifetime != "" && is_integer($gc_maxlifetime)) { // set the new value @ini_set('session.gc_maxlifetime', $gc_maxlifetime); } // if $gc_probability is specified and is an integer number if ($gc_probability != "" && is_integer($gc_probability)) { // set the new value @ini_set('session.gc_probability', $gc_probability); } // if $gc_divisor is specified and is an integer number if ($gc_divisor != "" && is_integer($gc_divisor)) { // set the new value @ini_set('session.gc_divisor', $gc_divisor); } // get session lifetime $this->sessionLifetime = ini_get("session.gc_maxlifetime"); //Added by AARON. cancel the session's auto start,important, without this the session var's don't show up on next pg. 
session_write_close(); // register the new handler session_set_save_handler( array(&$this, 'open'), array(&$this, 'close'), array(&$this, 'read'), array(&$this, 'write'), array(&$this, 'destroy'), array(&$this, 'gc') ); register_shutdown_function('session_write_close'); // start the session @session_start(); } function stop() { $new_sess_id = $this->regenerate_id(true); session_unset(); session_destroy(); return $new_sess_id; } function regenerate_id($return_val=false) { // saves the old session's id $oldSessionID = session_id(); // regenerates the id // this function will create a new session, with a new id and containing the data from the old session // but will not delete the old session session_regenerate_id(); // because the session_regenerate_id() function does not delete the old session, // we have to delete it manually //$this->destroy($oldSessionID); //ADDED by aaron // returns the new session id if($return_val) { return session_id(); } } function open($save_path, $session_name) { // global $gf; // $gf->debug_this($gf, "GF: Opening Session"); // change the next values to match the setting of your mySQL database $mySQLHost = "localhost"; $mySQLUsername = "user"; $mySQLPassword = "pass"; $mySQLDatabase = "sessions"; $link = mysql_connect($mySQLHost, $mySQLUsername, $mySQLPassword); if (!$link) { die ("Could not connect to database!"); } $dbc = mysql_select_db($mySQLDatabase, $link); if (!$dbc) { die ("Could not select database!"); } return true; } function close() { mysql_close(); return true; } function read($session_id) { $result = @mysql_query(" SELECT session_data FROM session_data WHERE session_id = '".$session_id."' AND http_user_agent = '".$_SERVER["HTTP_USER_AGENT"]."' AND session_expire > '".time()."' "); // if anything was found if (is_resource($result) && @mysql_num_rows($result) > 0) { // return found data $fields = @mysql_fetch_assoc($result); // don't bother with the unserialization - PHP handles this automatically return unserialize($fields["session_data"]); } // if there was an error return an empty string - this HAS to be an empty string return ""; } function write($session_id, $session_data) { // global $gf; // first checks if there is a session with this id $result = @mysql_query(" SELECT * FROM session_data WHERE session_id = '".$session_id."' "); // if there is if (@mysql_num_rows($result) > 0) { // update the existing session's data // and set new expiry time $result = @mysql_query(" UPDATE session_data SET session_data = '".serialize($session_data)."', session_expire = '".(time() + $this->sessionLifetime)."' WHERE session_id = '".$session_id."' "); // if anything happened if (@mysql_affected_rows()) { // return true return true; } } else // if this session id is not in the database { // $gf->debug_this($gf, "inside dbSession, trying to write to db because session id was NOT in db"); $sql = " INSERT INTO session_data ( session_id, http_user_agent, session_data, session_expire ) VALUES ( '".serialize($session_id)."', '".$_SERVER["HTTP_USER_AGENT"]."', '".$session_data."', '".(time() + $this->sessionLifetime)."' ) "; // insert a new record $result = @mysql_query($sql); // if anything happened if (@mysql_affected_rows()) { // return an empty string return ""; } } // if something went wrong, return false return false; } function destroy($session_id) { // deletes the current session id from the database $result = @mysql_query(" DELETE FROM session_data WHERE session_id = '".$session_id."' "); // if anything happened if (@mysql_affected_rows()) { // return true 
return true; } // if something went wrong, return false return false; } function gc($maxlifetime) { // it deletes expired sessions from database $result = @mysql_query(" DELETE FROM session_data WHERE session_expire < '".(time() - $maxlifetime)."' "); } } //End of Class $session = new dbsession(); ?>

  • Database with users design

    - by 1110
    I am in the database design phase. The application will work with a large number of users (LARGE :)). I have designed 80% of the database, but I have one Users table which is connected to everything else: Users {UserId, FirstName, LastName, Username, Password, PasswordQuestion, PasswordAnswer, Gender, RoleId, LastLoginDate, etc.}. I saw the asp.net membership database structure, where Users and Membership are two tables. My questions are: should I use one Users table with all user data in it, or more tables? If the answer is 'more tables', what tables should I use? Any advice on how to structure the relation between those tables? This is a sample relation that I have and am trying to improve; I don't understand why user and userChild are separate tables.
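    A hedged sketch of the membership-style split the question refers to (column names and types are assumptions, SQL Server syntax): profile data that every feature joins against stays in Users, while the authentication columns move to a 1:1 Membership table:

        CREATE TABLE Users (
            UserId     INT IDENTITY PRIMARY KEY,
            Username   NVARCHAR(64)  NOT NULL UNIQUE,
            FirstName  NVARCHAR(64),
            LastName   NVARCHAR(64),
            Gender     CHAR(1),
            RoleId     INT
        );

        CREATE TABLE Membership (
            UserId            INT PRIMARY KEY REFERENCES Users(UserId),  -- 1:1 with Users
            PasswordHash      NVARCHAR(128) NOT NULL,   -- store a salted hash, never plain text
            PasswordQuestion  NVARCHAR(256),
            PasswordAnswer    NVARCHAR(256),
            LastLoginDate     DATETIME
        );

    The split keeps the frequently joined profile row narrow and keeps credentials out of queries that never need them; whether it is worth the extra join depends mostly on how wide the combined row would otherwise get.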

  • Complex Rails queries across multiple tables, unions, and will_paginate. Solved.

    - by uberllama
    Hi folks. I've been working on a complex "user feed" type of functionality for a while now, and after experimenting with various union plugins, hacking named scopes, and brute force, have arrived at a solution I'm happy with. S.O. has been hugely helpful for me, so I thought I'd post it here in hopes that it might help others and also to get feedback -- it's very possible that I worked on this so long that I walked down an unnecessarily complicated road. For the sake of my example, I'll use users, groups, and articles. A user can follow other users to get a feed of their articles. They can also join groups and get a feed of articles that have been added to those groups. What I needed was a combined, pageable feed of distinct articles from a user's contacts and groups. Let's begin. user.rb has_many :articles has_many :contacts has_many :contacted_users, :through => :contacts has_many :memberships has_many :groups, :through => :memberships contact.rb belongs_to :user belongs_to :contacted_user, :class_name => "User", :foreign_key => "contacted_user_id" article.rb belongs_to :user has_many :submissions has_many :groups, :through => :submissions group.rb has_many :memberships has_many :users, :through => :memberships has_many :submissions has_many :articles, :through => :submissions Those are the basic models that define my relationships. Now, I add two named scopes to the Article model so that I can get separate feeds of both contact articles and group articles should I desire. article.rb # Get all articles by user's contacts named_scope :by_contacts, lambda {|user| {:joins => "inner join contacts on articles.user_id = contacts.contacted_user_id", :conditions => ["articles.published = 1 and contacts.user_id = ?", user.id]} } # Get all articles in user's groups. This does an additional query to get the user's group IDs, then uses those in an IN clause named_scope :by_groups, lambda {|user| {:select => "DISTINCT articles.*", :joins => :submissions, :conditions => {:submissions => {:group_id => user.group_ids}}} } Now I have to create a method that will provide a UNION of these two feeds into one. Since I'm using Rails 2.3.5, I have to use the construct_finder_sql method to render a scope into its base sql. In Rails 3.0, I could use the to_sql method. user.rb def feed "(#{Article.by_groups(self).send(:construct_finder_sql,{})}) UNION (#{Article.by_contacts(self).send(:construct_finder_sql,{})})" end And finally, I can now call this method and paginate it from my controller using will_paginate's paginate_by_sql method. HomeController.rb @articles = Article.paginate_by_sql(current_user.feed, :page => 1) And we're done! It may seem simple now, but it was a lot of work getting there. Feedback is always appreciated. In particular, it would be great to get away from some of the raw sql hacking. Cheers.

  • Free tools/libraries to compare tables with filtering for SQL Server 2008 and visualize/sync diffs t

    - by MicMit
    I am building a GUI in C# for a content manager and am looking for tools, code snippets or open libraries (code ideally in C#) which allow me the following: 1. For table A in database X (test) and table A in database Y (production), and for a simple filter (e.g. listname = "XYZ"), I need to show additions/deletions/updates in some way, which might be side-by-side or just an HTML report (e.g. 2 records added - HTML table with some fields; 2 records deleted - HTML table with some fields). Considering that this task is very common, I guess certain components should exist? Components would either return some collections from the given parameters for further visualizing, or just produce the reports mentioned above. 2. I need to push the changes for the filter I mentioned in 1 and update the table in the production database for this filter only (i.e. for the particular list approved by the content person). Again, there are probably SQL code generators for this, either components in addition to the diff tools or standalone. 3. The key thing: the tools/libraries should be suitable for integration with the existing application in C#.
