Search Results

Search found 8320 results on 333 pages for 'tables'.


  • How to map one class against multiple tables with SQLAlchemy?

    - by tote
    Let's say that I have a database structure with three tables that look like this:

        items           - item_id, item_handle
        attributes      - attribute_id, attribute_name
        item_attributes - item_attribute_id, item_id, attribute_id, attribute_value

    I would like to be able to do this in SQLAlchemy:

        item = Item('item1')
        item.foo = 'bar'
        session.add(item)
        session.commit()

        item1 = session.query(Item).filter_by(handle='item1').one()
        print item1.foo  # => 'bar'

    I'm new to SQLAlchemy and I found this in the documentation (http://www.sqlalchemy.org/docs/05/mappers.html#mapping-a-class-against-multiple-tables):

        j = join(items, item_attributes, items.c.item_id == item_attributes.c.item_id). \
            join(attributes, item_attributes.c.attribute_id == attributes.c.attribute_id)

        mapper(Item, j, properties={
            'item_id': [items.c.item_id, item_attributes.c.item_id],
            'attribute_id': [item_attributes.c.attribute_id, attributes.c.attribute_id],
        })

    It only adds item_id and attribute_id to Item, and it's not possible to add attributes to the Item object. Is what I'm trying to achieve possible with SQLAlchemy? Is there a better way to structure the database to get the same behaviour of "dynamic columns"?
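    One commonly suggested alternative to mapping Item against a three-way join is to expose the attribute rows as a dict through an association proxy. The following is a minimal sketch of that idea, assuming SQLAlchemy 1.4+ and a simplified layout in which the attribute name lives directly on the association row; the class and column names here are illustrative, not taken from the post.

        from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
        from sqlalchemy.ext.associationproxy import association_proxy
        from sqlalchemy.orm import Session, declarative_base, relationship
        from sqlalchemy.orm.collections import attribute_mapped_collection

        Base = declarative_base()

        class ItemAttribute(Base):
            # one row per (item, attribute name) pair; the value is stored as text
            __tablename__ = "item_attributes"
            item_id = Column(ForeignKey("items.item_id"), primary_key=True)
            name = Column(String(50), primary_key=True)
            value = Column(String(255))

        class Item(Base):
            __tablename__ = "items"
            item_id = Column(Integer, primary_key=True)
            handle = Column(String(50), unique=True)

            # dict of ItemAttribute rows keyed by their "name" column
            _attrs = relationship(
                ItemAttribute,
                collection_class=attribute_mapped_collection("name"),
                cascade="all, delete-orphan",
            )
            # item.attrs["foo"] = "bar" creates or updates an item_attributes row
            attrs = association_proxy(
                "_attrs", "value",
                creator=lambda name, value: ItemAttribute(name=name, value=value),
            )

        if __name__ == "__main__":
            engine = create_engine("sqlite://")
            Base.metadata.create_all(engine)
            with Session(engine) as session:
                item = Item(handle="item1")
                item.attrs["foo"] = "bar"
                session.add(item)
                session.commit()
                print(session.query(Item).filter_by(handle="item1").one().attrs["foo"])

    The access pattern becomes item.attrs["foo"] rather than item.foo, which keeps the "dynamic columns" behaviour without generating mapper properties per attribute.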

    Read the article

  • What to do if 2 (or more) relationship tables would have the same name?

    - by primehunter326
    So I know the convention for naming M-M relationship tables in SQL is something like this: for tables User and Data, the relationship table would be called UserData or User_Data, or something similar. What happens, then, if you need multiple relationships between User and Data, representing each in its own table? I have a site I'm working on where I have two primary items and multiple independent M-M relationships between them. I know I could just use a single relationship table and have a field which determines the relationship type, but I'm not sure whether this is a good solution. Assuming I don't go that route, what naming convention should I follow to work around my original problem?

    Read the article

  • How to crosscheck two tables and insert relevant data into a new table in MYSQL?

    - by JackDamery
    I'm trying to crosscheck rows that exist in two tables using a MySQL query in phpMyAdmin, and then, if a userID is found in both tables, insert that userID and user name into another table. Here's my code:

        INSERT INTO userswithoutmeetings
        SELECT user.userID IF('user.userID'='meeting.userID');

    I keep getting plagued by this error:

        1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'IF('user.userID'='meeting.userID')' at line 3

    Other statements I've tried have worked, but didn't deposit the values in the table.
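    For reference, here is a self-contained sketch of the kind of statement the post seems to be after: an INSERT ... SELECT driven by a JOIN on userID. It is written against SQLite purely so it can run standalone; the name column and the sample data are assumptions rather than details from the post, and the same SQL shape works in MySQL.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE user    (userID INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE meeting (meetingID INTEGER PRIMARY KEY, userID INTEGER);
            CREATE TABLE userswithoutmeetings (userID INTEGER, name TEXT);

            INSERT INTO user    VALUES (1, 'Alice'), (2, 'Bob');
            INSERT INTO meeting VALUES (10, 1);
        """)

        # Insert every user whose userID appears in both user and meeting.
        conn.execute("""
            INSERT INTO userswithoutmeetings (userID, name)
            SELECT DISTINCT u.userID, u.name
            FROM user u
            INNER JOIN meeting m ON m.userID = u.userID
        """)

        print(conn.execute("SELECT * FROM userswithoutmeetings").fetchall())  # [(1, 'Alice')]

    If the intent is actually the opposite (users with no meetings, as the destination table's name suggests), the JOIN would become a LEFT JOIN with a WHERE m.userID IS NULL filter.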

    Read the article

  • How to copy tables from one website to another with PHP?

    - by Lost_in_code
    I have 2 websites, let's say example.com and example1.com. example.com has a database fruits which has a table apple with 7000 records. I exported apple and tried to import it into example1.com, but I always get a "MySQL server has gone away" error. I suspect this is due to some server-side restriction. So, how can I copy the tables without having to contact the system admins? Is there a way to do this using PHP? I went through an example of copying tables, but that was inside the same database. Both example.com and example1.com are on the same server.
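    Since both sites apparently live on the same MySQL server, one low-tech option (sketched below, untested) is to skip the export/import round trip entirely and copy the table server-side with CREATE TABLE ... LIKE plus INSERT ... SELECT. The connection details and the second database's name are placeholders, and the MySQL account must have rights on both schemas.

        import mysql.connector  # pip install mysql-connector-python

        conn = mysql.connector.connect(
            host="localhost", user="dbuser", password="secret"  # placeholders
        )
        cur = conn.cursor()

        # Copy structure, then data, without pulling rows through PHP or any client.
        cur.execute("CREATE TABLE IF NOT EXISTS example1_db.apple LIKE fruits.apple")
        cur.execute("INSERT INTO example1_db.apple SELECT * FROM fruits.apple")

        conn.commit()
        cur.close()
        conn.close()

    Because the rows never leave the server, the max_allowed_packet limit that typically triggers "MySQL server has gone away" during large imports is not involved.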

    Read the article

  • MySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our MySQL configuration for our large Magento website. The reason I believe that MySQL needs to be configured further is that New Relic has shown that our SELECT queries are taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results... (Disclaimer: I restarted MySQL earlier after tweaking some settings, so the results here may not be 100% accurate):

        >> MySQLTuner 1.3.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        [OK] Currently running supported MySQL version 5.5.37-35.0
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
        [--] Data in MyISAM tables: 7G (Tables: 332)
        [--] Data in InnoDB tables: 213G (Tables: 8714)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [--] Data in MEMORY tables: 0B (Tables: 353)
        [!!] Total fragmented tables: 5492
        -------- Security Recommendations --------------------------------------------
        [!!] User '@host5.server1.autopartsnetwork.com' has no password set.
        [!!] User '@localhost' has no password set.
        [!!] User 'root@%' has no password set.
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B)
        [--] Reads / Writes: 95% / 5%
        [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads)
        [!!] Maximum possible memory usage: 220.0G (174% of installed RAM)
        [OK] Slow queries: 0% (6K/5M)
        [OK] Highest usage of available connections: 5% (61/1024)
        [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G
        [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads)
        [OK] Query cache efficiency: 66.9% (3M cached / 5M selects)
        [!!] Query cache prunes per day: 3486361
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts)
        [!!] Joins performed without indexes: 1328
        [OK] Temporary tables created on disk: 11% (126K on disk / 1M total)
        [OK] Thread cache hit rate: 99% (61 created / 42K connections)
        [!!] Table cache hit rate: 19% (9K open / 49K opened)
        [OK] Open file limit used: 2% (712/25K)
        [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks)
        [!!] InnoDB buffer pool / data size: 32.0G/213.4G
        [OK] InnoDB log waits: 0
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce your overall MySQL memory footprint for system stability
            Enable the slow query log to troubleshoot bad queries
            Increasing the query_cache size over 128M may reduce performance
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
            Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 512M) [see warning above]
            join_buffer_size (> 128.0M, or always use indexes with joins)
            table_cache (> 12288)
            innodb_buffer_pool_size (>= 213G)

    My my.cnf configuration is as follows...
        [client]
        port = 3306

        [mysqld_safe]
        nice = 0

        [mysqld]
        tmpdir = /var/lib/mysql/tmp
        user = mysql
        port = 3306
        skip-external-locking
        character-set-server = utf8
        collation-server = utf8_general_ci
        event_scheduler = 0
        key_buffer = 512M
        max_allowed_packet = 64M
        thread_stack = 512K
        thread_cache_size = 512
        sort_buffer_size = 24M
        read_buffer_size = 8M
        read_rnd_buffer_size = 24M
        join_buffer_size = 128M  # for some nightly processes client sessions set the join buffer to 8 GB
        auto-increment-increment = 1
        auto-increment-offset = 1
        myisam-recover = BACKUP
        max_connections = 1024
        # max connect errors artificially high to support behaviors of NetScaler monitors
        max_connect_errors = 999999
        concurrent_insert = 2
        connect_timeout = 5
        wait_timeout = 180
        net_read_timeout = 120
        net_write_timeout = 120
        back_log = 128
        # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384)
        table_open_cache = 12288
        tmp_table_size = 512M
        max_heap_table_size = 512M
        bulk_insert_buffer_size = 512M
        open-files-limit = 8192
        open-files = 1024
        query_cache_type = 1
        # large query limit supports SOAP and REST API integrations
        query_cache_limit = 4M
        # larger than 512 MB query cache size is problematic; this is typically ~60% full
        query_cache_size = 512M
        # set to true on read slaves
        read_only = false
        slow_query_log_file = /var/log/mysql/slow.log
        slow_query_log = 0
        long_query_time = 0.2
        expire_logs_days = 10
        max_binlog_size = 1024M
        binlog_cache_size = 32K
        sync_binlog = 0
        # SSD RAID10 technically has a write capacity of 10000 IOPS
        innodb_io_capacity = 400
        innodb_file_per_table
        innodb_table_locks = true
        innodb_lock_wait_timeout = 30
        # These servers have 80 CPU threads; match 1:1
        innodb_thread_concurrency = 48
        innodb_commit_concurrency = 2
        innodb_support_xa = true
        innodb_buffer_pool_size = 32G
        innodb_file_per_table
        innodb_flush_log_at_trx_commit = 1
        innodb_log_buffer_size = 2G
        skip-federated

        [mysqldump]
        quick
        quote-names
        single-transaction
        max_allowed_packet = 64M

    I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!

    Read the article

  • Tables in the SQL Server "master" database, will they cause problems?

    - by pepoluan
    Folks, please be kind to me... I'm just an 'accidental' DBA - our DBA resigned, so I'm a total newbie at this. You see, I have this application, "ESET Remote Administration Server" (ERAS), that stores its logs and analysis in (originally) a local Access database. The decision was made to migrate its database to a SQL Server 2008 R2 machine. ESET (the maker of the software) helpfully provided tools to perform such a migration; unfortunately, being the DBA neophyte that I am, I didn't realize that I first had to create my own database (on the SQL Server side) and assign that database as the 'default' database for ERAS' ODBC connection. Now, the migration tool has successfully created a whole bunch of tables inside the "master" database. My questions: Should I leave things as they are, or should I re-migrate the ERAS database to a different database? If you suggest a re-migration, my plan is to (1) create a new instance, (2) create a new database within the new instance, (3) create a new ODBC System DSN on the ERAS server pointing to the new DB in step 2, and (4) use ESET's migration tool to migrate from the current DSN to the new DSN. Did I miss a step there? Thanks beforehand for any guidance.

    Read the article

  • Sending changes from multiple tables in a disconnected dataset to SQL Server...

    - by Stecy
    We have a third-party application that accepts calls using an XML-RPC mechanism for calling stored procs. We send a ZIP-compressed dataset containing multiple tables with a bunch of update/delete/insert operations using this mechanism. On the other end, a CLR sproc decompresses the data and gets the dataset. Then, the following code gets executed:

        using (var conn = new SqlConnection("context connection=true"))
        {
            if (conn.State == ConnectionState.Closed)
                conn.Open();
            try
            {
                foreach (DataTable table in ds.Tables)
                {
                    string columnList = "";
                    for (int i = 0; i < table.Columns.Count; i++)
                    {
                        if (i == 0)
                            columnList = table.Columns[0].ColumnName;
                        else
                            columnList += "," + table.Columns[i].ColumnName;
                    }
                    var da = new SqlDataAdapter("SELECT " + columnList + " FROM " + table.TableName, conn);
                    var builder = new SqlCommandBuilder(da);
                    builder.ConflictOption = ConflictOption.OverwriteChanges;
                    da.RowUpdating += onUpdatingRow;
                    da.Update(ds, table.TableName);
                }
            }
            catch (....)
            {
                .....
            }
        }

    Here's the event handler for the RowUpdating event:

        public static void onUpdatingRow(object sender, SqlRowUpdatingEventArgs e)
        {
            if ((e.StatementType == StatementType.Update) && (e.Command == null))
            {
                e.Command = CreateUpdateCommand(e.Row, sender as SqlDataAdapter);
                e.Status = UpdateStatus.Continue;
            }
        }

    and the CreateUpdateCommand method:

        private static SqlCommand CreateUpdateCommand(DataRow row, SqlDataAdapter da)
        {
            string whereClause = "";
            string setClause = "";
            SqlConnection conn = da.SelectCommand.Connection;
            for (int i = 0; i < row.Table.Columns.Count; i++)
            {
                char quoted;
                if ((row.Table.Columns[i].DataType == Type.GetType("System.String")) ||
                    (row.Table.Columns[i].DataType == Type.GetType("System.DateTime")))
                    quoted = '\'';
                else
                    quoted = ' ';
                string val = row[i].ToString();
                if (row.Table.Columns[i].DataType == Type.GetType("System.Boolean"))
                    val = (bool)row[i] ? "1" : "0";
                bool isPrimaryKey = false;
                for (int j = 0; j < row.Table.PrimaryKey.Length; j++)
                {
                    if (row.Table.PrimaryKey[j].ColumnName == row.Table.Columns[i].ColumnName)
                    {
                        if (whereClause != "")
                            whereClause += " AND ";
                        if (row[i] == DBNull.Value)
                            whereClause += row.Table.Columns[i].ColumnName + "=NULL";
                        else
                            whereClause += row.Table.Columns[i].ColumnName + "=" + quoted + val + quoted;
                        isPrimaryKey = true;
                        break;
                    }
                }
                /* Only values for columns that are not part of the primary key can be modified */
                if (!isPrimaryKey)
                {
                    if (setClause != "")
                        setClause += ", ";
                    if (row[i] == DBNull.Value)
                        setClause += row.Table.Columns[i].ColumnName + "=NULL";
                    else
                        setClause += row.Table.Columns[i].ColumnName + "=" + quoted + val + quoted;
                }
            }
            return new SqlCommand("UPDATE " + row.Table.TableName + " SET " + setClause + " WHERE " + whereClause, conn);
        }

    However, this is really slow when we have a lot of records. Is there a way to optimize this, or an entirely different way to send lots of updates/deletes on several tables? I would really like to use T-SQL for this but can't figure out a way to send a dataset to a regular sproc. Additional notes: We cannot directly access the SQL Server database. We tried without compression and it was slower.

    Read the article

  • So Singletons are bad, then what?

    - by Bobby Tables
    There has been a lot of discussion lately about the problems with using (and overusing) Singletons. I've been one of those people earlier in my career too. I can see what the problem is now, and yet, there are still many cases where I can't see a nice alternative - and not many of the anti-Singleton discussions really provide one. Here is a real example from a major recent project I was involved in: The application was a thick client with many separate screens and components which uses huge amounts of data from a server state which isn't updated too often. This data was basically cached in a Singleton "manager" object - the dreaded "global state". The idea was to have this one place in the app which keeps the data stored and synced, and then any new screens that are opened can just query most of what they need from there, without making repetitive requests for various supporting data from the server. Constantly requesting to the server would take too much bandwidth - and I'm talking thousands of dollars extra Internet bills per week, so that was unacceptable. Is there any other approach that could be appropriate here than basically having this kind of global data manager cache object? This object doesn't officially have to be a "Singleton" of course, but it does conceptually make sense to be one. What is a nice clean alternative here?
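    One frequently suggested alternative is to keep a single cache instance but stop making it globally reachable: construct it once at startup and hand it to the screens that need it (dependency injection). A minimal sketch of that shape, with purely illustrative names:

        class ServerDataCache:
            """Caches rarely-changing server state so screens don't re-fetch it."""

            def __init__(self, server_client):
                self._client = server_client
                self._data = {}

            def get(self, key):
                if key not in self._data:           # fetch once, then reuse
                    self._data[key] = self._client.fetch(key)
                return self._data[key]

        class EventListScreen:
            # The cache is passed in, not reached through a global or Singleton,
            # so tests can substitute a fake cache.
            def __init__(self, cache):
                self._cache = cache

            def show(self):
                print(self._cache.get("events"))

        class FakeClient:
            def fetch(self, key):
                return ["demo event"]               # stand-in for the real server call

        if __name__ == "__main__":
            cache = ServerDataCache(FakeClient())   # composed once at application startup
            EventListScreen(cache).show()
            EventListScreen(cache).show()           # same cache instance, no second fetch

    There is still exactly one cache at runtime, but nothing in the design prevents creating a second one in a test, which is usually the real complaint about Singletons.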

    Read the article

  • Good resources for learning Rails?

    - by Bobby Tables
    I just finished working through Peter Cooper's "Beginning Ruby". So now I've got a reasonable grounding in the Ruby language and would like to move onto learning Rails. This question's answers give some good pointers, but I'd like to hear some specific reviews of books and online materials. I generally learn best by working through books with good practical/technical examples AND some passive reading content that breaks up the study between practical and reading sessions (this is what made "Beginning Ruby" great for me), but I'm worried that RoR is evolving fast and that any printed book I order might be obsolete by the time I get it and work through it. Is this a fair worry? Or can anyone recommend a good Rails 3 book that should be up to date at least for the next year or so? Also, I had a brief look at some of the online resources from the other questions, and Rails for Zombies seems to get a lot of praise. Has anyone here actually used it as their introductory guide to Rails? Basically I'd like to hear first-hand accounts of people who went through this "Ruby-to-Rails" learning phase recently and which materials were useful to you.

    Read the article

  • Ruby or Python?

    - by Bobby Tables
    Hi all, This question is extremely subjective and open-ended. It might even sound like something I should just research for myself and make my own decision. But I'd like to put it out there and get some thoughts from others. Long story short - I burned out with the rat race and am on a self-funded sabbatical this year. Much of it is to take a break from the corporate grind and travel around, but I also want to play around with new technologies and do some self-learning projects, to stay up to speed on programming, and well - I just love tinkering with programming, when there's no pressure! Here's the thing: I am a lifetime C/C++/Java programmer. I'm a bit of a squiggly bracket snob since I've been working with this family of languages for my entire programming career. So I'd like to learn a language which isn't so closely syntactically related to this group. What I'm basically looking for is a language which is relatively general purpose, fun to learn, has some new concepts that are different from C++/Java, and has a good community. A secondary consideration is that it has good web development frameworks. A tertiary consideration is that it's not totally academic (read: there are real world jobs out there using it). I've narrowed it down to Ruby or Python. My impression of Ruby is that it is extremely web oriented - that the only real application of it is as a server side scripting language for doing web stuff (mainly Ruby on Rails). For Python I'm not so sure. TL;DR and to put it as succinctly as possible: which of these would be better for a C++/Java guy to learn to get some new perspectives on programming? And which is more open and general purpose and applicable to a wider set of applications? I'm leaning towards Ruby at the moment, but I worry to an extent that it looks like it's used as nothing but a server side web language.

    Read the article

  • Tips On Using The Service Contracts Import Program

    - by LuciaC
    Prior to release 12.1 there was no supported way to import contracts into the EBS Service Contracts application - there were no public APIs nor contract load programs provided. From release 12.1 onwards the 'Service Contracts Import Program' is provided to load service contracts into the application. The Service Contracts Import functionality is explained in How to Use the Service Contracts Import Program - Scope and Limitations (Doc ID 1057242.1). This note includes an attached document which explains the program architecture, shows the Entity Relationship Diagram and details the interface table definitions.

    The Import program takes data from the interface tables listed below and populates the contracts schema tables:

        OKS_USAGE_COUNTERS_INTERFACE
        OKS_SALES_CREDITS_INTERFACE
        OKS_NOTES_INTERFACE
        OKS_LINES_INTERFACE
        OKS_HEADERS_INTERFACE
        OKS_COVERED_LEVELS_INTERFACE

    These interface tables must be loaded via a custom load program. The Service Contracts Import concurrent request is then submitted to create contracts from this legacy data. The parameters to run the Import program are:

        Mode              - Validate only, Import
        Batch Number      - Batch_Id (unique id populated into the OKS_HEADERS_INTERFACE table)
        Number of Workers - Number of workers required (these are spawned as separate sub-requests)
        Commit size       - Number of successfully processed contracts committed to the database

    The program spawns sub-requests for the import worker(s) and the 'Service Contracts Import Report'. The data is validated prior to import into the Contracts tables, and errors are reported in the Service Contracts Import Report program output file (Import Execution Report). Troubleshooting tips are provided in R12.1 - Common Service Contract Import Errors (Doc ID 762545.1); this document lists some, but not all, import errors and will be updated over time. Additional help is given in Debugging Tip for Service Contracts Import Errors (Doc ID 971426.1).

    After you successfully import contracts, you can purge the records from the interface tables by running the Service Contracts Import Purge concurrent program. Note that there is no supported way to mass delete data from the Contracts schema tables once they are populated, so data loaded by the Import program must be fully tested and verified before the program is run to load data into a Production system.

    A Service Contracts Import Test program has been provided which will take an existing contract in the application and load the interface tables using the data from that contract. This can be used as an example for guidance on how to load the interface tables. The Test program functionality is explained in How to Use the Service Contracts Test Import Program Provided in Release 12.1 (Doc ID 761209.1). Note that the Test program has some limitations which do not apply to the full Import program, and it is not a supported program; it is simply a testing tool.

    Read the article

  • Temp Table Recompiles

    - by Derek D.
    If you landed on this article, then you most likely know that temp tables can cause recompilation. This happens because temp tables are treated just like regular tables by the SQL Server engine. When the tables (on which the underlying queries rely) change significantly, SQL Server detects this change (using auto update statistics) [...]

    Read the article

  • How will Deja-Dup operate when backing up to an external USB drive?

    - by Little Bobby Tables
    I want to set up regular backups, and Deja-Dup seems like a nice tool. However, I want to put my backups on an external USB drive that I have, not on a remote network location. Naturally, this drive is not always connected. If I configure Deja-Dup to back up to a directory on this drive (e.g. /media/extention/backup), what will happen? Will it prompt me to connect the drive when it is missing (the desired behavior), or just fail silently? Is there some way to tweak it to do so? I can roll my own cron-based backup script that checks if this drive is mounted, but I would really prefer to use an existing, integrated tool.
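    As a fallback along the lines of the cron-based idea mentioned above, here is a small sketch that backs up only when the drive is actually mounted. The mount point matches the path quoted in the question, while the source directory and the use of rsync are assumptions.

        #!/usr/bin/env python3
        """Run from cron; skips the backup silently if the USB drive isn't mounted."""
        import os
        import subprocess
        import sys

        MOUNT_POINT = "/media/extention"                 # path from the question (assumed)
        BACKUP_DIR = os.path.join(MOUNT_POINT, "backup")
        SOURCE = os.path.expanduser("~/")                # what to back up (assumed)

        if not os.path.ismount(MOUNT_POINT):
            sys.exit("Backup drive not mounted; skipping this run.")

        os.makedirs(BACKUP_DIR, exist_ok=True)
        subprocess.run(["rsync", "-a", "--delete", SOURCE, BACKUP_DIR], check=True)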

    Read the article

  • Graduate expectations versus reality

    - by Bobby Tables
    When choosing what we want to study, and do with our careers and lives, we all have some expectations of what it is going to be like. Now that I've been in the industry for almost a decade, I've been reflecting a bit on what I thought (back when I was studying Computer Science) programming working life was going to be like, and how it's actually turning out to be. My two biggest shocks (or should I say, broken expectations) by far are the sheer amount of maintenance work involved in software, and the overall lack of professionalism: Maintenance: At uni, we were all told that the majority of software work is maintenance of existing systems. So I knew to expect this in the abstract. But I never imagined exactly how overwhelming this would turn out to be. Perhaps it's something I mentally glazed over, and hoped I'd be building cool new stuff from scratch a lot more. But it really is the case that most jobs are overwhelmingly maintenance, bug fixing, and support oriented. Lack of professionalism: At uni, I always had the impression that commercial software work is very process-oriented and stringently engineered. I had images of ISO processes, reams of technical documentation, every feature and bug being strictly documented, and a generally professional environment. It came as a huge shock to realise that most software companies operate no differently to a team of students working on a large semester-long project. And I've worked in both the small agile hack shop, and the medium sized corporate enterprise. While I wouldn't say that it's always been outright "unprofessional", it definitely feels like the software industry (on the whole) is far from the strong engineering discipline that I expected it to be. Has anyone else had similar experiences to this? What are the ways in which your expectations of what our profession would be like were different to the reality?

    Read the article

  • Is there such a thing as "closure" with software work?

    - by Bobby Tables
    I burned out last year (after a decade of full-time programming jobs) and am on a sabbatical now. With all the self-examination, I've started to figure out some of the root causes of my burnout, and one of the major ones is basically this: there was never any real closure in any of the work I've ever done. It was always a case of getting into an open-ended support/maintenance grind and going stale. When I first entered the industry, I had this image of programming work being very project-based. I expected projects to have a start, a middle, and an END, after which you move on and start on something totally new and fresh. Basically I never expected that a lot (most) of software work involves supporting and maintaining the same code base for open-ended long periods of time - years and even decades. That, combined with generally having itchy feet, makes me think that burnout is inevitable for me, after 2-3 years, in ANY full-time software job. All this sounds like I probably should have been a contractor instead of a full-timer. But when I discuss this with people, a lot of them say that even THEN you can't really escape having to go back and maintain/support the stuff you worked on, over and over (e.g. coming back on support contracts). The nature of software work is simply like that. There is no project closure, unlike in many other engineering fields. So my question is: is there ANY programming work out there which is based on short- to mid-term projects/stints and then moving on cleanly? And is there any particular industry domain or specialization where this kind of project work is typical?

    Read the article

  • Can't print, CUPS package corrupted and hangs on re-install

    - by Little Bobby Tables
    When I upgraded to Ubuntu 10.4 (Maverick), the upgrade process got stuck on the post-installation of the CUPS package. I had to kill processes and run several forced updates before I could finally get regular updates again. Ever since, I can't print: the printed file gets messed up and crashes the printer. I also can't re-install CUPS, as each time the installation hangs and I have to kill it before it completes. I tried to find a workaround for this problem, but in vain. Does anyone know how to bypass this? Or at least why the post-installation can hang, and how to re-install a problematic package? Some system specs and other hints: Dell D630 laptop running Ubuntu 10.4, GNOME desktop, standard LAN network, printing to an LPD server. Everything worked fine on 9.10. Also, the printed files themselves are not corrupted. The problem does not seem to be Evince-specific, but common to all printouts.

    Read the article

  • Looking for an old classic book about Unix command-line tools

    - by Little Bobby Tables
    I am looking for a book about the Unix command-line toolkit (sh, grep, sed, awk, cut, etc.) that I read some time ago. It was an excellent book, but I totally forgot its name. The great thing about this specific book was the running example. It showed how to implement a university bookkeeping system using only text-processing tools. You would find a student by name with grep, update grades with sed, calculate average grades with awk, attach grades to IDs with cut, and so on. If my memory serves, this book had a black cover and was published circa 1980. Does anyone remember this book? I would appreciate any help in finding it.

    Read the article

  • TechEd 2014 Day 3

    - by John Paul Cook
    There is some confusion about durability of data stored in SQL Server in-memory tables, so some review of the concepts is appropriate. The in-memory option is enabled at the database level. Enabling it at the database level only gives you the option to specify the in-memory feature on a table by table basis. No existing tables or new tables will by default become in-memory tables when you enable the feature at the database level. If you choose to make a table an in-memory table, by default it is...(read more)

    Read the article

  • Is it possible to join tables in Doctrine ORM without using relations?

    - by piemesons
    Suppose there are two tables:

        Table X - Columns: id, x_value
        Table Y - Columns: id, x_id, y_value

    Now I don't want to define relationships in the Doctrine classes, and I want to retrieve some records from these two tables using a query like this:

        SELECT x_value FROM x, y WHERE y.id = "variable_z" AND x.id = y.x_id;

    I'm not able to figure out how to write a query like this in Doctrine ORM.

    EDIT: Table structures:

    Table 1:

        CREATE TABLE IF NOT EXISTS `image` (
            `id` int(11) NOT NULL AUTO_INCREMENT,
            `random_name` varchar(255) NOT NULL,
            `user_id` int(11) NOT NULL,
            `community_id` int(11) NOT NULL,
            `published` varchar(1) NOT NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=259 ;

    Table 2:

        CREATE TABLE IF NOT EXISTS `users` (
            `id` int(11) NOT NULL AUTO_INCREMENT,
            `city` varchar(20) DEFAULT NULL,
            `state` varchar(20) DEFAULT NULL,
            `school` varchar(50) DEFAULT NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=20 ;

    Query I am using:

        $q = new Doctrine_RawSql();
        $q ->select('{u.*}, {img.*}')
           ->from('users u LEFT JOIN image img ON u.id = img.user_id')
           ->addComponent('u', 'Users u')
           ->addComponent('img', 'u.Image img')
           ->where("img.community_id='$community_id' AND img.published='y' AND u.state='$state' AND u.city='$city'")
           ->orderBy('img.id DESC')
           ->limit($count+12)
           ->execute();

    Error I am getting:

        Fatal error: Uncaught exception 'Doctrine_Exception' with message 'Couldn't find class u' in C:\xampp\htdocs\fanyer\doctrine\lib\Doctrine\Table.php:290
        Stack trace:
        #0 C:\xampp\htdocs\fanyer\doctrine\lib\Doctrine\Table.php(240): Doctrine_Table->initDefinition()
        #1 C:\xampp\htdocs\fanyer\doctrine\lib\Doctrine\Connection.php(1127): Doctrine_Table->__construct('u', Object(Doctrine_Connection_Mysql), true)
        #2 C:\xampp\htdocs\fanyer\doctrine\lib\Doctrine\RawSql.php(425): Doctrine_Connection->getTable('u')
        #3 C:\xampp\htdocs\fanyer\doctrine\models\Image.php(33): Doctrine_RawSql->addComponent('img', 'u.Image imga')
        #4 C:\xampp\htdocs\fanyer\community_images.php(31): Image->get_community_images_gallery_filter(4, 0, 'AL', 'ALBERTVILLE')
        #5 {main} thrown in C:\xampp\htdocs\fanyer\doctrine\lib\Doctrine\Table.php on line 290

    Read the article

  • .NET - Is there a way to programmatically fill all tables in a strongly-typed dataset?

    - by Mike Loux
    Hello, all! I have a SQL Server database for which I have created a strongly-typed DataSet (using the DataSet Designer in Visual Studio 2008), so all the adapters, select commands and whatnot were created for me by the wizard. It's a small database with largely static data, so I would like to pull the contents of this DB in its entirety into my application at startup, and then grab individual pieces of data as needed using LINQ. Rather than hard-code each adapter Fill call, I would like to see if there is a way to automate this (possibly via Reflection). So, instead of:

        Dim _ds As New dsTest
        dsTestTableAdapters.Table1TableAdapter.Fill(_ds.Table1)
        dsTestTableAdapters.Table2TableAdapter.Fill(_ds.Table2)
        <etc etc etc>

    I would prefer to do something like:

        Dim _ds As New dsTest
        For Each tableName As String In _ds.Tables
            Dim adapter As Object = <routine to grab adapter associated with the table>
            adapter.Fill(tableName)
        Next

    Is that even remotely doable? I have done a fair amount of searching, and I wouldn't think this would be an uncommon request, but I must be either asking the wrong question, or I'm just weird to want to do this. I will admit that I usually prefer to use unbound controls and not go with strongly-typed datasets (I prefer to write SQL directly), but my company wants to go this route, so I'm researching it. I think the idea is that as tables are added, we can just refresh the DataSet using the Designer in Visual Studio and not have to make too many underlying DB code changes. Any help at all would be most appreciated. Thanks in advance!

    Read the article

  • SQL Server full text query across multiple tables - why so slow?

    - by Mikey Cee
    Hi. I'm trying to understand the performance of a SQL Server 2008 full-text query I am constructing. The following query, using a full-text index, returns the correct results immediately:

        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        WHERE FREETEXT(O.Name, 'query')

    i.e. all EventOccurrences with the word 'query' in their name. And the following query, using a full-text index from a different table, also returns straight away:

        SELECT V.ID, V.Name
        FROM dbo.Venue V
        WHERE FREETEXT(V.Name, 'query')

    i.e. all Venues with the word 'query' in their name. But if I try to join the tables and do both full-text queries at once, it takes 12 seconds to return:

        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        INNER JOIN dbo.Event E ON O.EventID = E.ID
        INNER JOIN dbo.Venue V ON E.VenueID = V.ID
        WHERE FREETEXT(E.Name, 'search') OR FREETEXT(V.Name, 'search')

    Here is the execution plan: http://uploadpad.com/files/query.PNG

    From my reading, I didn't think it was even possible to make a free-text query across multiple tables in this way, so I'm not sure I am understanding this correctly. Note that if I remove the WHERE clause from this last query then it returns all results within a second, so it's definitely the full-text search that is causing the issue here. Can someone explain (i) why this is so slow and (ii) whether this is even supported / whether I am understanding this correctly? Thanks in advance for your help.
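    A rewrite that is often suggested for this situation is to give each FREETEXT predicate its own SELECT and combine them with UNION, so that each full-text index is probed on its own instead of being OR-ed together in one predicate. A hedged sketch using pyodbc; the connection string is a placeholder, and whether this actually helps depends on the real execution plan.

        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=Events;"
            "Trusted_Connection=yes"  # placeholder connection string
        )

        sql = """
        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        INNER JOIN dbo.Event E ON O.EventID = E.ID
        WHERE FREETEXT(E.Name, ?)
        UNION
        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        INNER JOIN dbo.Event E ON O.EventID = E.ID
        INNER JOIN dbo.Venue V ON E.VenueID = V.ID
        WHERE FREETEXT(V.Name, ?)
        """

        rows = conn.cursor().execute(sql, "search", "search").fetchall()
        for row in rows:
            print(row.ID, row.Name)

    UNION also removes the duplicates that an occurrence matching on both the event name and the venue name would otherwise produce.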

    Read the article

  • How to structure the tables of a very simple blog in MySQL?

    - by Programmer
    I want to add a very simple blog feature on one of my existing LAMP sites. It would be tied to a user's existing profile, and they would be able to simply input a title and a body for each post in their blog, and the date would be automatically set upon submission. They would be allowed to edit and delete any blog post and title at any time. The blog would be displayed from most recent to oldest, perhaps 20 posts to a page, with proper pagination above that. Other users would be able to leave comments on each post, which the blog owner would be allowed to delete, but not pre-moderate. That's basically it. Like I said, very simple. How should I structure the MySQL tables for this? I'm assuming that since there will be blog posts and comments, I would need a separate table for each, is that correct? But then what columns would I need in each table, what data structures should I use, and how should I link the two tables together (e.g. any foreign keys)? I could not find any tutorials for something like this, and what I'm looking to do is really offer my users the simplest version of a blog possible. No tags, no moderation, no images, no fancy formatting, etc. Just a simple diary-type, pure-text blog with commenting by other users.
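    A minimal sketch of one way the two tables could look, written against SQLite so it runs as-is; the column names, the table names, and the users table they reference are assumptions based on the description (in MySQL the equivalents would be INT AUTO_INCREMENT and DATETIME columns).

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            -- stand-in for the site's existing profile table
            CREATE TABLE users (
                id   INTEGER PRIMARY KEY AUTOINCREMENT,
                name TEXT NOT NULL
            );

            CREATE TABLE blog_posts (
                id         INTEGER PRIMARY KEY AUTOINCREMENT,
                user_id    INTEGER NOT NULL REFERENCES users(id),       -- the blog owner
                title      TEXT    NOT NULL,
                body       TEXT    NOT NULL,
                created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP -- set on submission
            );

            CREATE TABLE blog_comments (
                id         INTEGER PRIMARY KEY AUTOINCREMENT,
                post_id    INTEGER NOT NULL REFERENCES blog_posts(id),  -- which post
                user_id    INTEGER NOT NULL REFERENCES users(id),       -- the commenter
                body       TEXT    NOT NULL,
                created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
            );
        """)

    Listing a page of posts newest-first is then just ORDER BY created_at DESC LIMIT 20 OFFSET page*20, and deleting a post's comments follows the post_id foreign key.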

    Read the article

  • SQL statement question: need to query 3 tables in one go

    - by Stefan
    Hey there, I have an SQL database. In this database are 3 tables I need to query. The first table, item, has all the item info; the second, userComment, has the comment data; and the third, userItem, has the vote data. I currently have a function which uses this SQL query to get the latest most popular items (in terms of both votes and comments):

        $sql = "SELECT itemID, COUNT(*) AS cnt
                FROM (
                    SELECT `itemID` FROM `userItem`
                    WHERE FROM_UNIXTIME( `time` ) >= NOW() - INTERVAL 1 DAY
                    UNION ALL
                    SELECT `itemID` FROM `userComment`
                    WHERE FROM_UNIXTIME( `time` ) >= NOW() - INTERVAL 1 DAY AND `itemID` > 0
                ) q
                GROUP BY `itemID`
                ORDER BY cnt DESC";

    I know how to change this for votes alone or comments alone... HOWEVER, I need the query to return only the itemIDs of items that match specific conditions in the item table, namely:

        WHERE categoryID = 'xx' AND typeID = 'xx'

    If the SQL ninja could please help me on this one? Do I have to first return the results from the above query and then, for each itemID in the fetched array, check it against the item table to see if it fits the conditions and build a new array - or is that overkill? Thanks, Stefan
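    One way to fold the item filter in without a second round trip is to join the aggregated counts back to the item table. A sketch of the adjusted statement follows, shown here as a Python string in place of the PHP $sql variable; it assumes item's primary key column is named itemID, which the post doesn't state.

        # Same aggregation as before, but joined to `item` so only rows matching
        # the category/type filter survive. %s placeholders are for the DB driver.
        sql = """
            SELECT q.itemID, COUNT(*) AS cnt
            FROM (
                SELECT `itemID` FROM `userItem`
                WHERE FROM_UNIXTIME(`time`) >= NOW() - INTERVAL 1 DAY
                UNION ALL
                SELECT `itemID` FROM `userComment`
                WHERE FROM_UNIXTIME(`time`) >= NOW() - INTERVAL 1 DAY AND `itemID` > 0
            ) q
            INNER JOIN item i ON i.itemID = q.itemID
            WHERE i.categoryID = %s AND i.typeID = %s
            GROUP BY q.itemID
            ORDER BY cnt DESC
        """
        # e.g. with mysql.connector or PyMySQL: cursor.execute(sql, (category_id, type_id))

    This keeps everything in one statement, so there is no need to post-filter the fetched array in application code.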

    Read the article
