Search Results

Search found 16059 results on 643 pages for 'global temp tables'.


  • Need help with simple NHibernate mapping...

    - by mplarsen
    Need help with a simple NHibernate relationship. Tables/classes: Request (RequestId, Title, …) and Keywords (RequestID (key), Keyword (key)). Request mapping file:
    <?xml version="1.0" encoding="utf-8" ?>
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="CR.Model" assembly="CR">
      <class name="CR.Model.Request, CR" table="[dbo].[Request]" lazy="true">
        <id name="Id" column="[RequestID]">
          <generator class="native" />
        </id>
        <property name="RequestorID" column="[RequestorID]" />
        <property name="RequestorOther" column="[RequestorOther]" />
        … Keyword??
      </class>
    </hibernate-mapping>
    How do I simply map multiple keywords to a request? I don't need another mapping file for the keyword class, do I? It'd be great if I could not only get the associated keywords, but add them too...
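
    A minimal sketch of one way to map the keywords without a second mapping file: an element collection inside the Request mapping, which comes back as a collection of strings. Table and column names below are taken from the question; the exact collection element (set vs. bag) is an assumption.

        <set name="Keywords" table="[dbo].[Keywords]" lazy="true">
          <key column="[RequestID]" />
          <element column="[Keyword]" type="String" />
        </set>

    With something like this, Request exposes a string collection property (e.g. an ISet/IList of strings) that you can both read and add to, and NHibernate persists the added keywords when the Request is saved.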

    Read the article

  • Java: over-typed structures? To have many types in Object[]?

    - by HH
    By an "over-typed structure" I mean a data structure that accepts different types, whether primitive or user-defined. I think Ruby supports mixing types in structures such as tables. I tried a table with the types 'String', 'char' and 'File' in Java, but it errors. How can I have an over-typed structure in Java? How do I show the types in the declaration? What about in initialization? Suppose a structure:
    INDEX VAR FILETYPE
    //0 -> file FILE
    //1 -> lineMap SizeSequence
    //2 -> type char
    //3 -> binary boolean
    //4 -> name String
    //5 -> path String
    Code
    import java.io.*;
    import java.util.*;
    public class Object {
        public static void print(char a) { System.out.println(a); }
        public static void print(String s) { System.out.println(s); }
        public static void main(String[] args) {
            Object[] d = new Object[6];
            d[0] = new File(".");
            d[2] = 'T';
            d[4] = ".";
            print(d[2]);
            print(d[4]);
        }
    }
    Errors
    Object.java:18: incompatible types
    found   : java.io.File
    required: Object
            d[0] = new File(".");
                   ^
    Object.java:19: incompatible types
    found   : char
    required: Object
            d[2] = 'T';
                   ^
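
    A minimal sketch of one fix: the class in the question is itself named Object, so Object[] d is an array of that class rather than java.lang.Object, which is why File and char can't be assigned. Renaming the class lets a plain Object[] hold any reference type (primitives are autoboxed), at the cost of casts or instanceof checks when reading back. The class name below is illustrative.

        import java.io.File;

        // Renamed from "Object" so it no longer shadows java.lang.Object.
        public class MixedTable {
            public static void main(String[] args) {
                Object[] d = new Object[6];
                d[0] = new File(".");   // file
                d[2] = 'T';             // char, autoboxed to Character
                d[4] = ".";             // name

                // Reading back needs a cast or an instanceof check:
                if (d[0] instanceof File) {
                    System.out.println(((File) d[0]).getAbsolutePath());
                }
                System.out.println((Character) d[2]);
                System.out.println((String) d[4]);
            }
        }

    A small class with typed fields (file, lineMap, type, binary, name, path) is usually the more idiomatic Java answer than an Object[].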

    Read the article

  • Prototype or jQuery for DOM manipulation (client-side dynamic content)

    - by luiggitama
    I need to know which of these two JavaScript frameworks is better for client-side dynamic content modification for known DOM elements (by id), in terms of performance, memory usage, etc.: Prototype's $('id').update(content) jQuery's jQuery('#id').html(content) BTW, both libraries coexist with no conflict in my app, because I'm using RichFaces for JSF development, that's why I can use "jQuery" instead of "$". I have at least 20 updatable areas in my page, and for each one I prepare content (tables, option lists, etc.), based on some user-defined client-side criteria filtering or some AJAX event, etc., like this: var html = []; int idx = 0; ... html[idx++] = '<tr><td class="cell"><span class="link" title="View" onclick="myFunction('; html[idx++] = param; html[idx++] = ')"></span>'; html[idx++] = someText; html[idx++] = '</td></tr>'; ... So here comes the question, which is better to use: // Prototype's $('myId').update(html.join('')); // or jQuery's jQuery('#myId').html(html.join('')); Other needed functions are hide() and show(), which are present in both frameworks. Which is better? Also I'm needing to enable/disable form controls, and to read/set their values. Note that I know my updatable area's id (I don't need CSS selectors at this point). And I must tell that I'm saving these queried objects in some data structure for later use, so they are requested just once when the page is rendered, like this: MyData = {div1:jQuery('#id1'), div2:$('id2'), ...}; ... div1.update('content 1'); div2.html('content 2'); So, which is the best practice?

    Read the article

  • Multiple ID's in database

    - by eric
    I have a database that contains a few tables such as person, staff, member, and supporter. The person table contains information about every staff member, member, and supporter: name, address, email, and telephone. I also created an id that is the primary key. My issue is that I also have a primary key ID for staff, member, and supporter. For instance, in the person table is John with id 1. He is a supporter, so the supporter table has a pID (for person ID) to reference back to John and all his information, plus an ID (for supporter ID). pID references the person table, and every person has an ID starting at 1 and incremented by 1. Supporter ID is for every supporter and also starts at 1 and is incremented by 1. Is it possible to have pID = 1 and supporter ID = 1 in the supporter table? Another person may have pID = 26 and supporter ID = 5. Or will the supporter ID have to be different from the pID and be something like "sup", so you would have pID = 1 and supporter ID = sup1, or pID = 26 and supporter ID = sup5?
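
    For what it's worth, the two values can overlap: each table keeps its own auto-increment sequence, so supporter.ID = 1 and person.id = 1 are unrelated numbers. A small sketch (MySQL-style syntax; names follow the question, columns are trimmed for brevity):

        CREATE TABLE person (
            id    INT AUTO_INCREMENT PRIMARY KEY,
            name  VARCHAR(100),
            email VARCHAR(100)
        );

        CREATE TABLE supporter (
            ID  INT AUTO_INCREMENT PRIMARY KEY,   -- supporter ID, its own sequence starting at 1
            pID INT NOT NULL,                     -- points back at person.id
            FOREIGN KEY (pID) REFERENCES person (id)
        );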

    Read the article

  • How to align all fields as label width grows

    - by TheCloudlessSky
    I have a form where the labels are on the left and the fields on the right. This layout works great when the labels have small amounts of text: I can easily set a min-width on the labels to ensure they all have the same distance to the fields. In the first picture below, this works as expected. A problem arises when a label's text becomes too long: it either overflows to the next line or pushes the field on the same line over to the left (as seen in picture 2). This doesn't push the other labels, so the form is left with a "jagged" look. Ideally, I'd like to style it as in picture 3 with something like the following markup:
    <fieldset>
      <label>Name</label><input type="text" /><br />
      <label>Username</label><input type="text" />
    </fieldset>
    I created a jsFiddle to show the issue. Of course, the easy cross-browser way to solve this would be to use tables, but that just creates tag hell for something that should be so simple. Note: this does not need to support IE6.
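
    A minimal sketch of one common approach: since IE6 support isn't needed, the CSS table display modes can make the label column grow as one unit. It assumes each label/input pair is wrapped in a <div class="row"> and the <br /> tags are dropped:

        fieldset            { display: table; }
        fieldset .row       { display: table-row; }
        fieldset .row label,
        fieldset .row input { display: table-cell; }
        fieldset .row label { padding-right: 10px; white-space: nowrap; }

    Because all the rows belong to the same CSS table, the widest label sets the column width for every row, which is the picture-3 behaviour.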

    Read the article

  • Optimize SQL query (Facebook-like application)

    - by fabriciols
    My application is similar to Facebook, and I'm trying to optimize the query that get user records. The user records are that he as src ou dst. The src is in usermuralentry directly, the dst list are in usermuralentry_user. So, a entry can have one src and many dst. I have those tables: mysql> desc usermuralentry ; +-----------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-----------------+------------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | user_src_id | int(11) | NO | MUL | NULL | | | private | tinyint(1) | NO | | NULL | | | content | longtext | NO | | NULL | | | date | datetime | NO | | NULL | | | last_update | datetime | NO | | NULL | | +-----------------+------------------+------+-----+---------+----------------+ 10 rows in set (0.10 sec) mysql> desc usermuralentry_user ; +-------------------+---------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------------+---------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | usermuralentry_id | int(11) | NO | MUL | NULL | | | userinfo_id | int(11) | NO | MUL | NULL | | +-------------------+---------+------+-----+---------+----------------+ 3 rows in set (0.00 sec) And the following query to retrieve information from two users. mysql> explain SELECT * FROM usermuralentry AS a , usermuralentry_user AS b WHERE a.user_src_id IN ( 1, 2 ) OR ( a.id = b.usermuralentry_id AND b.userinfo_id IN ( 1, 2 ) ); +----+-------------+-------+------+-------------------------------------------------------------------------------------------+------+---------+------+---------+------------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+------+-------------------------------------------------------------------------------------------+------+---------+------+---------+------------------------------------------------+ | 1 | SIMPLE | b | ALL | usermuralentry_id,usermuralentry_user_bcd7114e,usermuralentry_user_6b192ca7 | NULL | NULL | NULL | 147188 | | | 1 | SIMPLE | a | ALL | PRIMARY | NULL | NULL | NULL | 1371289 | Range checked for each record (index map: 0x1) | +----+-------------+-------+------+-------------------------------------------------------------------------------------------+------+---------+------+---------+------------------------------------------------+ 2 rows in set (0.00 sec) but it is taking A LOT of time... Some tips to optimize? Can the table schema be better in my application?
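
    One thing worth trying (a sketch, not a guaranteed fix): the OR that mixes a condition on usermuralentry with a join condition on usermuralentry_user forces the "range checked for each record" plan; splitting the two cases into a UNION lets each half be driven by its own index (user_src_id on one table, userinfo_id plus usermuralentry_id on the other):

        SELECT e.*
        FROM   usermuralentry e
        WHERE  e.user_src_id IN (1, 2)

        UNION

        SELECT e.*
        FROM   usermuralentry e
        JOIN   usermuralentry_user u ON u.usermuralentry_id = e.id
        WHERE  u.userinfo_id IN (1, 2);

    Note this returns only the usermuralentry columns; the original cross join returns every b row alongside every matching a row, which is rarely what a feed query wants.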

    Read the article

  • CSS to Make 2 Column Content Areas

    - by Joe Majewski
    I'm trying to stray away from using tables to form the layout of my content, and I can think of two alternatives that I'd like to better learn: (1) styling list items to be side-by-side, and (2) using div blocks that float onto the same line. Both of these would have their own uses for what I'm working on. I'm already using div tags to form the entire layout of my three-column template, but what I need to do now is a bit different. In case it helps, my project can be found here. In short, here's my question; how would I style a div so that the width of it is 50% of the width of the area it occupies, rather than 50% of the width of the page? As for my other question, what would be the best approach to styling list items so that they are side-by-side? I'm working on a registration script now, and instead of using a table with "Username" on the left and the input text on the right, I can use two list items. It's late and I've been working on this project of mine for about 8 hours straight now, so I apologize if I'm asking anything confusing. Feel free to ask me any questions about what I'm trying to do. Thanks, friends. :)
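
    A couple of small sketches covering both alternatives (class names are illustrative). On the width question: a percentage width is resolved against the element's containing block, not the page, so a div that should take half of its parent just needs width: 50% inside that parent:

        /* Two floated columns, each half of .container (not half of the page). */
        .container .half {
            float: left;
            width: 50%;
            -moz-box-sizing: border-box;  /* keep padding/borders inside the 50% */
            box-sizing: border-box;
        }
        .container { overflow: hidden; }  /* contain the floats */

        /* Side-by-side list items for a label/field pair. */
        ul.form-row li {
            display: inline-block;
            vertical-align: top;
        }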

    Read the article

  • How important is it that models be consistent across project components?

    - by RonLugge
    I have a project with two components, a server-side component and a client-side component. For various reasons, the client-side device doesn't carry a full copy of the database around. How important is it that my models have a 1:1 correlation between the two sides? And, to extend the question to my bigger concern, are there any time bombs I'm going to run into down the line if they don't? I'm not talking about having different information on each side, but rather that the way the information is encapsulated will vary. (Obviously, storage mechanisms will also vary.) The server side will store each user, each review, and each 'item' in separate tables, and create links between them to gather data as necessary. The client side shouldn't have a complete user database, however, so rather than link against the user for gathering things like 'name', I'd store that on the review. In other words...
    --- Server Side ---
    Item:   +id                    //Store stuff about the item
    User:   +id +Name -Password
    Review: +id +itemId +rating +text +userId
    --- Device Side ---
    Item:   +id +AverageRating
    Review: +id +rating +text +userId +name
    User:   +id +Name              //Stuff
    The basic idea is that certain 'critical' information gets moved one level 'up'. A user gets the list of 'items' relevant to their query, with certain review-oriented data moved up (i.e. the average rating). If they want more info, they query the detail view for the item, and the actual reviews get queried and added to the dataset (and displayed). If they query the actual review, the review gets queried and they pick up some additional user info along the way (maybe; I'm not sure the user would have any use for the additional user information). My basic concern is that I don't want to glut the user's bandwidth or local storage with a huge variety of information that they just don't need, even if proper database normalization suggests that information REALLY should be stored at a 'lower' level. I've phrased this as a fairly low-level conceptual issue because that's the level I'm trying to think/worry over, but if it matters I'm creating a PHP/MySQL server that provides data for an iOS/CoreData client.

    Read the article

  • Use where clause with Like in codeigniter

    - by user2524013
    I am working on a project and am implementing the search functionality in my system. I have to show search records from two tables, based on the currently logged-in user. I have tried the following code:
    function searchActivity($limit,$offset,$keyword1,$keyword2,$recruiter_id)
    {
        $q=$this->db->select('*')->from('tbl_activity')->limit($limit,$offset);
        $this->db->join('tbl_job', 'tbl_job.job_id = tbl_activity.job_id_fk', 'left outer');
        $this->db->order_by("activity_id", "ASC");
        $this->db->like('job_title',$keyword1,'both');
        $this->db->or_like('job_title',$keyword2,'both');
        $this->db->or_like('activity_subject',$keyword1,'both');
        $this->db->or_like('activity_subject',$keyword2,'both');
        $this->db->or_like('activity_details',$keyword1,'both');
        $this->db->or_like('activity_details',$keyword2,'both');
        $this->db->where('tbl_activity.recruiter_id_fk',$recruiter_id);
        $ret['rows']=$q->get()->result();
        return $ret;
    }
    I want to show search results based on the current user id, which is currently stored in $recruiter_id. Thanks in advance.
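
    A sketch of one likely fix: the or_like() calls are not grouped, so the recruiter filter is effectively OR'd away whenever any keyword matches. On CodeIgniter 3+ the query builder can group them explicitly (on CI 2 the parenthesised LIKE block would have to be written as a raw where() string instead):

        $this->db->select('*')
                 ->from('tbl_activity')
                 ->join('tbl_job', 'tbl_job.job_id = tbl_activity.job_id_fk', 'left outer')
                 ->where('tbl_activity.recruiter_id_fk', $recruiter_id)
                 ->group_start()
                     ->like('job_title', $keyword1)
                     ->or_like('job_title', $keyword2)
                     ->or_like('activity_subject', $keyword1)
                     ->or_like('activity_subject', $keyword2)
                     ->or_like('activity_details', $keyword1)
                     ->or_like('activity_details', $keyword2)
                 ->group_end()
                 ->order_by('activity_id', 'ASC')
                 ->limit($limit, $offset);
        $ret['rows'] = $this->db->get()->result();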

    Read the article

  • "Add another item" form functionality

    - by GSTAR
    I have a form that lets a user enter their career history - it's a very simple form with only 3 fields - type (dropdown), details (textfield) and year (dropdown). Basically I want to include some dynamic functionality whereby the user can enter multiple items on the same page and then submit them all in one go. I had a search on Google and found some examples but they were all based on tables - my markup is based on DIV tags: <div class="form-fields"> <div class="row"> <label for="type">Type</label> <select id="type" name="type"> <option value="Work">Work</option> </select> </div> <div class="row"> <label for="details">Details</label> <input id="details" type="text" name="details" /> </div> <div class="row"> <label for="year">Year</label> <select id="year" name="year"> <option value="2010">2010</option> </select> </div> </div> So basically the 3 DIV tags with class "row" need to be duplicated, or to simplify things - the div "form-fields" could just be duplicated. I am also aware that the input names would have to converted to array format. Additionally each item will require a "remove" button. There will be a main submit button at the bottom which submits all the data. Anyone got an elegant solution for this?
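
    A rough sketch of the duplicate/remove behaviour using jQuery (assumed; the question doesn't name a library, and requires 1.7+ for .on()). The #add-item button, the #items wrapper and the Remove button are illustrative additions; input names are switched to array form so all rows arrive in one submit:

        $('#add-item').click(function () {
            var $row = $('.form-fields:first').clone();
            $row.find('input, select').each(function () {
                // type[] / details[] / year[] arrive server-side as arrays
                this.name = this.name.replace(/\[\]$/, '') + '[]';
                if (this.tagName === 'INPUT') { this.value = ''; }
            });
            $row.append('<button type="button" class="remove-item">Remove</button>');
            $('#items').append($row);
        });

        $(document).on('click', '.remove-item', function () {
            $(this).closest('.form-fields').remove();
        });

    Cloning also duplicates the id attributes, so a real version would renumber or drop them to keep the markup valid.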

    Read the article

  • Why would this query cause a Merge Cartesian Join in Oracle

    - by decompiled
    I have a query that was recently required to be modified. Here's the original SELECT RTRIM (position) AS "POSITION", . // Other fields . . FROM schema.table x WHERE hours > 0 AND pay = 'RGW' AND NOT EXISTS( SELECT position FROM schema.table2 y where y.position = x.position ) Here's the new version SELECT RTRIM (position) AS "POSITION", . // Other fields . . FROM schema.table x WHERE hours > 0 AND pay = 'RGW' AND NOT EXISTS( SELECT position FROM schema.table2 y where y.date = get_fiscal_year_start_date (SYSDATE) AND y.position = x.position ) The UDF get_fiscal_year_start_date() returns the fiscal year start date of the date parameter. The first query runs fine, but the second creates a merge Cartesian join. I looked at the indexes on the tables and found that position and date were both indexed. My question for you stackoverflow is why would the addition of 'y.date = get_fiscal_year_start_date (SYSDATE)' cause a merge cartesian join in Oracle 10g.
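
    A sketch of one thing to try: Oracle can't see through the PL/SQL call, so the per-row function call can push it off the indexed plan; evaluating the function once (a scalar subquery against DUAL, or a bind variable populated beforehand) hands the optimizer a single value to compare y.date against:

        SELECT RTRIM(x.position) AS "POSITION"
               -- other fields
        FROM   schema.table x
        WHERE  x.hours > 0
          AND  x.pay = 'RGW'
          AND  NOT EXISTS (
                 SELECT 1
                 FROM   schema.table2 y
                 WHERE  y.position = x.position
                   AND  y.date = (SELECT get_fiscal_year_start_date(SYSDATE) FROM dual)
               );

    Whether this removes the Merge Cartesian Join in your 10g plan is something only an EXPLAIN PLAN on your data can confirm; it is a common first step, not a guarantee.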

    Read the article

  • Mysql query, need suggestion or solution

    - by Xi Kam
    Can anyone help me, i have two tables and i need records from both the table //////////////////////////////++ Query 1 ++//////////////////////////////////// SELECT SUM(rec_issued) AS issed, regen_id, YEAR(issue_date) AS iYear, MONTH(issue_date) AS iMonth FROM `view_rec_issued` WHERE `regen_id` = 2 GROUP BY YEAR(issue_date) DESC, MONTH(issue_date) DESC ORDER BY issue_date ASC issed regen_id iYear iMonth 424 2 2011 3 4340 2 2011 4 4235 2 2011 5 10570 2 2012 2 4761 2 2012 3 5000 2 2012 4 3700 2 2012 5 3414 2 2012 6 3700 2 2012 7 2992 2 2012 8 995 2 2012 10 ![Result from Query 1][1] //////////////////////////////++ Query 2 ++//////////////////////////////////// SELECT SUM(total_redem) AS redemed, regen_id, YEAR(redemption_date) AS rYear, MONTH(redemption_date) AS rMonth FROM `recredem_month_wise` WHERE `regen_id` = 2 GROUP BY YEAR(redemption_date) DESC, MONTH(redemption_date) DESC order by redemption_date ASC redemed regen_id rYear rMonth 424 2 2011 3 260 2 2011 4 6523 2 2011 5 1070 2 2011 6 200 2 2011 10 500 2 2011 11 9750 2 2012 2 5000 2 2012 3 5500 2 2012 4 3803 2 2012 5 3700 2 2012 7 3000 2 2012 8 ![Result from Query 2][2] But i want it as - issed regen_id iYear iMonth redemed regen_id rYear rMonth 424 2 2011 3 424 2 2011 3 4340 2 2011 4 260 2 2011 4 4235 2 2011 5 6523 2 2011 5 NULL NULL NULL NULL 1070 2 2011 6 NULL NULL NULL NULL 200 2 2011 10 NULL NULL NULL NULL 500 2 2011 11 10570 2 2012 2 9750 2 2012 2 4761 2 2012 3 5000 2 2012 3 5000 2 2012 4 5500 2 2012 4 3700 2 2012 5 3803 2 2012 5 3414 2 2012 6 NULL NULL NULL NULL 3700 2 2012 7 3700 2 2012 7 2992 2 2012 8 3000 2 2012 8 995 2 2012 10 NULL NULL NULL NULL ![I want this output][3] In these table regen_id is unique and i need data as YEAR and MONTH, if in any table not have the records in perticular month and year it should retrieve zero or null. But in every record year and month should equal like this - iYear = rYear and iMonth = rMonth So we can merge both the fields - No need to show year and month twice iYear and rYear = year iMonth and rMonth = month Thank You Please look at this problem.
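
    What's being described is a FULL OUTER JOIN on (year, month), which MySQL doesn't support directly; the usual emulation is a LEFT JOIN in each direction glued together with UNION. A sketch built from the two aggregates in the question (regen_id is fixed at 2, so it's dropped from the output):

        SELECT i.issed, r.redemed,
               COALESCE(i.iYear,  r.rYear)  AS yr,
               COALESCE(i.iMonth, r.rMonth) AS mth
        FROM      (SELECT SUM(rec_issued) AS issed,
                          YEAR(issue_date) AS iYear, MONTH(issue_date) AS iMonth
                   FROM view_rec_issued WHERE regen_id = 2
                   GROUP BY YEAR(issue_date), MONTH(issue_date)) AS i
        LEFT JOIN (SELECT SUM(total_redem) AS redemed,
                          YEAR(redemption_date) AS rYear, MONTH(redemption_date) AS rMonth
                   FROM recredem_month_wise WHERE regen_id = 2
                   GROUP BY YEAR(redemption_date), MONTH(redemption_date)) AS r
               ON r.rYear = i.iYear AND r.rMonth = i.iMonth

        UNION

        SELECT i.issed, r.redemed,
               COALESCE(i.iYear,  r.rYear),
               COALESCE(i.iMonth, r.rMonth)
        FROM      (SELECT SUM(total_redem) AS redemed,
                          YEAR(redemption_date) AS rYear, MONTH(redemption_date) AS rMonth
                   FROM recredem_month_wise WHERE regen_id = 2
                   GROUP BY YEAR(redemption_date), MONTH(redemption_date)) AS r
        LEFT JOIN (SELECT SUM(rec_issued) AS issed,
                          YEAR(issue_date) AS iYear, MONTH(issue_date) AS iMonth
                   FROM view_rec_issued WHERE regen_id = 2
                   GROUP BY YEAR(issue_date), MONTH(issue_date)) AS i
               ON i.iYear = r.rYear AND i.iMonth = r.rMonth
        ORDER BY yr, mth;

    Months present in only one table come back with NULL on the other side, which matches the desired output.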

    Read the article

  • Database PK-FK design for future-effective-date entries?

    - by Scott Balmos
    Ultimately I'm going to convert this into a Hibernate/JPA design. But I wanted to start out from purely a database perspective. We have various tables containing data that is future-effective-dated. Take an employee table with the following pseudo-definition: employee id INT AUTO_INCREMENT ... data fields ... effectiveFrom DATE effectiveTo DATE employee_reviews id INT AUTO_INCREMENT employee_id INT FK employee.id Very simplistic. But let's say Employee A has id = 1, effectiveFrom = 1/1/2011, effectiveTo = 1/1/2099. That employee is going to be changing jobs in the future, which would in theory create a new row, id = 2 with effectiveFrom = 7/1/2011, effectiveTo = 1/1/2099, and id = 1's effectiveTo updated to 6/30/2011. But now, my program would have to go through any table that has a FK relationship to employee every night, and update those FK to reference the newly-effective employee entry. I have seen various postings in both pure SQL and Hibernate forums that I should have a separate employee_versions table, which is where I would have all effective-dated data stored, resulting in the updated pseudo-definition below: employee id INT AUTO_INCREMENT employee_versions id INT AUTO_INCREMENT employee_id INT FK employee.id ... data fields ... effectiveFrom DATE effectiveTo DATE employee_reviews id INT AUTO_INCREMENT employee_id INT FK employee.id Then to get any actual data, one would have to actually select from employee_versions with the proper employee_id and date range. This feels rather unnatural to have this secondary "versions" table for each versioned entity. Anyone have any opinions, suggestions from your own prior work, etc? Like I said, I'm taking this purely from a general SQL design standpoint first before layering in Hibernate on top. Thanks!
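
    For what it's worth, the main payoff of the versions table is that every foreign key keeps pointing at the stable employee.id, so nothing needs to be re-pointed nightly; only reads need a date filter. A small sketch of the "current row" lookup (MySQL-flavoured syntax, column names from the pseudo-definition above):

        SELECT e.id, v.*
        FROM   employee e
        JOIN   employee_versions v
               ON  v.employee_id = e.id
               AND CURDATE() BETWEEN v.effectiveFrom AND v.effectiveTo
        WHERE  e.id = 1;

    On the Hibernate/JPA side the same predicate typically ends up in the repository queries or a filter, rather than being baked into the mapping.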

    Read the article

  • How to efficiently use LOCK_ESCALATION mssql 2008

    - by Avias
    I'm currently having trouble with frequent deadlocks on a specific user table in MS SQL 2008. Here are some facts about this particular table:
    - It has a large number of rows (1 to 2 million).
    - All the indexes used on this table have only "use row lock" ticked in their options.
    - Rows are frequently updated by multiple transactions, but each update touches a unique row (probably a thousand or more update statements are executed against different rows every hour).
    - The table does not use partitions.
    Upon checking the table in sys.tables, I found that lock_escalation is set to TABLE. I'm very tempted to set lock_escalation for this table to DISABLE, but I'm not really sure what side effects this would incur. From what I understand, DISABLE will minimize escalating locks to the TABLE level, which, combined with the row lock settings of the indexes, should theoretically minimize the deadlocks I am encountering. From what I have read in "Determining threshold for lock escalation", it seems that locking automatically escalates when a single transaction fetches 5000 rows. What does a single transaction mean in this sense? A single session/connection getting 5000 rows through individual update/select statements, or a single SQL update/select statement that fetches 5000 or more rows? Any insight is appreciated. BTW, n00b DBA here. Thanks
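
    On the threshold question: the documented trigger is roughly 5,000 locks taken by a single statement on one table/index/partition reference (plus memory-pressure cases), i.e. the second of the two readings, not a running total per session. A sketch of changing and checking the setting (the table name is a placeholder):

        -- Stop escalating to the whole table (SQL Server 2008 syntax).
        ALTER TABLE dbo.UserTable SET (LOCK_ESCALATION = DISABLE);

        -- Verify the current setting.
        SELECT name, lock_escalation_desc
        FROM   sys.tables
        WHERE  name = 'UserTable';

    The trade-off is memory: with escalation disabled, a statement that touches millions of rows simply keeps holding millions of row/page locks.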

    Read the article

  • Saving Data to Relational Database (Entity Framework)

    - by sheefy
    I'm having a little bit of trouble saving data to a database. Basically, I have a main table that has associations to other tables (Example Below). Tbl_Listing ID UserID - Associated to ID in User Table CategoryID - Associated to ID in Category Table LevelID - Associated to ID in Level Table. Name Address Normally, it's easy for me to add data to the DB (using Entity Framework). However, I'm not sure how to add data to the fields with associations. The numerous ID fields just need to hold an int value that corresponds with the ID in the associated table. For example; when I try to access the column in the following manner I get a "Object reference not set to an instance of an object." error. Listing NewListing = new Listing(); NewListing.Tbl_User.ID = 1; NewListing.Tbl_Category.ID = 2; ... DBEntities.AddToListingSet(NewListing); DBEntities.SaveChanges(); I am using NewListing.Tbl_User.ID instead of NewListing.UserID because the UserID field is not available through intellisense. If I try and create an object for each related field I get a "The relationship between the two objects cannot be defined because they are attached to different ObjectContext objects." error. With this method, I am trying to add the object without the .ID shown above - example NewListing.User = UserObject. I know this should be simple as I just want to reference the ID from the associated table in the main Listing's table. Any help would be greatly appreciated. Thanks in advance, -S
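
    A sketch of two common ways to set those FKs with an ObjectContext-style model like the one in the question; the generated names (Tbl_UserReference, the DBEntities container, the Tbl_User entity set) are assumptions based on the code shown and may differ in your model:

        var newListing = new Listing();

        // Option 1: set the relationship by key, without loading the related rows.
        newListing.Tbl_UserReference.EntityKey =
            new EntityKey("DBEntities.Tbl_User", "ID", 1);
        newListing.Tbl_CategoryReference.EntityKey =
            new EntityKey("DBEntities.Tbl_Category", "ID", 2);

        // Option 2: load the related rows from the *same* context and assign them.
        // Mixing entities loaded from two different contexts is what produces the
        // "attached to different ObjectContext objects" error.
        // newListing.Tbl_User = DBEntities.Tbl_User.First(u => u.ID == 1);

        DBEntities.AddToListingSet(newListing);
        DBEntities.SaveChanges();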

    Read the article

  • In sync query calls, one query causing other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanations for it: I was involved in optimization of an application that performed a large number of sequential SELECT and INSERT statements on a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there should be some value mappings, which performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run. I used a profiler (JProfiler - the application is Java-based) to determine how much time does each part of the application take. It yields that 60% of the time was spent on INSERT method calls, and almost 20% on SELECT calls (the rest distributed in other parts). After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting for the total run time to be around 35 minutes, as I have removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else), but each SELECT took longer this time! Everything was running sync, there were no async calls. And there was only one single thread of execution. SELECT and INSERT queries are very simple, and don't have anything special, and they are on different tables, but on the same DB. I tested with both the DB on the application machine, and on a remote network machine. I can't think of any explanation for this, as the Profiler (Application profiler, not SQL Profiler) reported the changes in the method call times, and by removing INSERT statements SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (there can't be cache / query optimization stuff, because the queries were run in sync, and in a single thread, and it was far from affecting the cache this much) I should note that the bottleneck of the speed was in SQL server, using most of the CPU time.

    Read the article

  • localhost yes but phpmyadmin blank

    - by Giskin Leow
    WAMP people having problem with both localhost and phpmyadmin loads blank which usually the port problem. Mine is only phpmyadmin blank. sqlbuddy and phpinfo no problem. tried uninstall reinstalled wamp. tried xampp, same problem, all works well, not phpmyadmin. mysql log: 120905 8:03:08 [Note] Plugin 'FEDERATED' is disabled. 120905 8:03:08 InnoDB: The InnoDB memory heap is disabled 120905 8:03:08 InnoDB: Mutexes and rw_locks use Windows interlocked functions 120905 8:03:08 InnoDB: Compressed tables use zlib 1.2.3 120905 8:03:09 InnoDB: Initializing buffer pool, size = 128.0M 120905 8:03:09 InnoDB: Completed initialization of buffer pool 120905 8:03:09 InnoDB: highest supported file format is Barracuda. 120905 8:03:09 InnoDB: Waiting for the background threads to start 120905 8:03:10 InnoDB: 1.1.8 started; log sequence number 1595675 120905 8:03:11 [Note] Server hostname (bind-address): '(null)'; port: 3306 120905 8:03:11 [Note] - '(null)' resolves to '::'; 120905 8:03:11 [Note] - '(null)' resolves to '0.0.0.0'; 120905 8:03:11 [Note] Server socket created on IP: '0.0.0.0'. 120905 8:03:13 [Note] Event Scheduler: Loaded 0 events 120905 8:03:13 [Note] wampmysqld: ready for connections. apache log [Wed Sep 05 08:03:09 2012] [notice] Apache/2.2.22 (Win32) PHP/5.4.3 configured -- resuming normal operations [Wed Sep 05 08:03:09 2012] [notice] Server built: May 13 2012 13:32:42 [Wed Sep 05 08:03:09 2012] [notice] Parent: Created child process 3812 [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Child process is running [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Acquired the start mutex. [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting 64 worker threads. [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting thread to listen on port 80. [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting thread to listen on port 80. [Wed Sep 05 08:04:14 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/favicon.ico [Wed Sep 05 08:09:50 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/favicon.ico [Wed Sep 05 08:41:03 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/phpMyAdmin

    Read the article

  • Mysql dropping inserts with triggers

    - by user2891127
    Using MySQL 5.5. I have two tables; one holds a whitelist of hashes. When I insert a new row into the other table, I want to first compare the hash in the insert statement to the whitelist. If it's in the whitelist, I don't want to do the insert (less data to plow through later). The inserts are generated by another program and arrive as text files of SQL statements. I've been playing with triggers, and almost have it working:
    BEGIN
        IF (SELECT COUNT(md5hash) FROM whitelist WHERE md5hash = new.md5hash) > 0 THEN
            SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Already Whitelisted';
        END IF;
    END
    But there's a problem: the SIGNAL throwing the error stops the import. I want to skip that line, not stop the whole import, and some searching didn't turn up any way to silently skip the insert. My next idea was to create a duplicate table definition and redirect the insert to that dup table, but OLD and NEW don't seem to apply to table names. Other than adding an ignore column to my table and then doing a mass drop based on that column after the import, is there any way to achieve my goal?
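
    One sketch of a workaround, since a BEFORE INSERT trigger in MySQL 5.5 can only abort or alter a row, not silently drop it: filter at insert time instead, by rewriting the generated statements as INSERT ... SELECT guarded by NOT EXISTS (the target table and its columns are illustrative; only whitelist.md5hash comes from the question):

        INSERT INTO samples (md5hash, file_name)
        SELECT t.md5hash, t.file_name
        FROM (SELECT 'd41d8cd98f00b204e9800998ecf8427e' AS md5hash,
                     'empty.bin'                        AS file_name) AS t
        WHERE NOT EXISTS (SELECT 1 FROM whitelist w WHERE w.md5hash = t.md5hash);

    If the feed can't be rewritten, another common route is loading the text files into a staging table unchanged and doing a single INSERT ... SELECT ... WHERE NOT EXISTS from there.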

    Read the article

  • What is the best way to embed SQL in VB.NET.

    - by Amy P
    I am looking for information on the best practices or project layout for software that uses SQL embedded inside VB.NET or C#. The software will connect to a full SQL DB. The software was written in VB6 and ported to VB.NET, we want to update it to use .NET functionality but I am not sure where to start with my research. We are using Visual Studio 2005. All database manipulations are done from VB. Update: To clarify. We are currently using SqlConnection, SqlDataAdapter, SqlDataReader to connect to the database. What I mean by embed is that the SQL stored procedures are scripted inside our VB code and then run on the db. All of our tables, stored procs, views, etc are all manipulated in the VB code. The layout of our code is quite messy. I am looking for a better architecture or pattern that we can use to organize our code. Can you recommend any books, webpages, topics that I can google, etc to help me better understand the best way for us to do this.
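
    Not a full architecture, but as a starting point the idiomatic .NET 2.0 pattern is parameterised commands wrapped in Using blocks, usually hidden behind a small data-access class rather than scattered through the forms. A sketch (connection string, table and column names are placeholders):

        Imports System.Data.SqlClient

        Using conn As New SqlConnection(connectionString)
            Using cmd As New SqlCommand( _
                    "SELECT Name FROM Customers WHERE Id = @Id", conn)
                cmd.Parameters.AddWithValue("@Id", customerId)
                conn.Open()
                Using reader As SqlDataReader = cmd.ExecuteReader()
                    While reader.Read()
                        Console.WriteLine(reader.GetString(0))
                    End While
                End Using
            End Using
        End Using

    From there, moving the SQL text into stored procedures or a lightweight data layer (or an ORM) is mostly a question of how much of the existing VB6-era code you want to rework at once.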

    Read the article

  • Looping Through Database Query

    - by DrakeNET
    I am creating a very simple script. The purpose of the script is to pull a question from the database and display all answers associated with that particular question. We are dealing with two tables here and there is a foreign key from the question database to the answer database so answers are associated with questions. Hope that is enough explanation. Here is my code. I was wondering if this is the most efficient way to complete this or is there an easier way? <html> <head> <title>Advise Me</title> <head> <body> <h1>Today's Question</h1> <?php //Establish connection to database require_once('config.php'); require_once('open_connection.php'); //Pull the "active" question from the database $todays_question = mysql_query("SELECT name, question FROM approvedQuestions WHERE status = active") or die(mysql_error()); //Variable to hold $todays_question aQID $questionID = mysql_query("SELECT commentID FROM approvedQuestions WHERE status = active") or die(mysql_error()); //Print today's question echo $todays_question; //Print comments associated with today's question $sql = "SELECT commentID FROM approvedQuestions WHERE status = active"; $result_set = mysql_query($sql); $result_num = mysql_numrows($result_set); for ($a = 0; $a < $result_num; $a++) { echo $sql; } ?> </body> </html>
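
    A sketch of a tighter version using the same old mysql_* API as the question: fetch the active question once (note that 'active' has to be a quoted string, otherwise MySQL treats it as a column name), then fetch its answers with mysql_fetch_assoc(). The answers table and its columns are guesses, since they aren't shown:

        <?php
        require_once('config.php');
        require_once('open_connection.php');

        // The active question
        $q = mysql_query("SELECT commentID, name, question
                          FROM approvedQuestions
                          WHERE status = 'active'") or die(mysql_error());
        $question = mysql_fetch_assoc($q);
        echo '<h1>' . htmlspecialchars($question['question']) . '</h1>';

        // Every answer tied to that question
        $a = mysql_query("SELECT answer_text
                          FROM answers
                          WHERE question_id = " . (int) $question['commentID'])
             or die(mysql_error());
        while ($row = mysql_fetch_assoc($a)) {
            echo '<p>' . htmlspecialchars($row['answer_text']) . '</p>';
        }
        ?>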

    Read the article

  • Ordering by number of rows?

    - by Rob
    Alright, so I have a table outputting data from a MySQL table in a while loop. Well one of the columns it outputs isn't stored statically in the table, instead it's the sum of how many times it appears in a different MySQL table. Sorry I'm not sure this is easy to understand. Here's my code: $query="SELECT * FROM list WHERE added='$addedby' ORDER BY time DESC"; $result=mysql_query($query); while($row=mysql_fetch_array($result, MYSQL_ASSOC)){ $loghwid = $row['hwid']; $sql="SELECT * FROM logs WHERE hwid='$loghwid' AND time < now() + interval 1 hour"; $query = mysql_query($sql) OR DIE(mysql_error()); $boots = mysql_num_rows($query); //Display the table } The above is the code displaying the table. As you can see it's grabbing data from two different MySQL tables. However I want to be able to ORDER BY $boots DESC. But as its a counting of a completely different table, I have no idea of how to go about doing that. I would appreciate any help, thank you.
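
    A sketch of doing the counting and the ordering in one query instead of one COUNT query per row (the literal 'someuser' stands in for $addedby, and the one-hour window is kept as written in the question):

        SELECT l.*, COUNT(g.hwid) AS boots
        FROM   list l
        LEFT JOIN logs g
               ON  g.hwid = l.hwid
               AND g.time < NOW() + INTERVAL 1 HOUR
        WHERE  l.added = 'someuser'
        GROUP BY l.hwid
        ORDER BY boots DESC, l.time DESC;

    This assumes hwid identifies a single row in list; if it doesn't, group by the table's primary key instead.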

    Read the article

  • Is this the correct way to set up has many with multiple associations?

    - by user323763
    I'm trying to set up a new project for a music site. I'm learning ROR and am a bit confused about how to make join models/tables. Does this look right? I have users, playlists, songs, and comments. Users can have multiple playlists. Users can have multiple comments on their profile. Playlists can have multiple songs. Playlists can have comments. Songs can have comments. class CreateTables < ActiveRecord::Migration def self.up create_table :users do |t| t.string :login t.string :email t.string :firstname t.string :lastname t.timestamps end create_table :playlists do |t| t.string :title t.text :description t.timestamps end create_table :songs do |t| t.string :title t.string :artist t.string :album t.integer :duration t.string :image t.string :source t.timestamps end create_table :comments do |t| t.string :title t.text :body t.timestamps end create_table :users_playlists do |t| t.integer :user_id t.integer :playlist_id t.timestamps end create_table :playlists_songs do |t| t.integer :playlist_id t.integer :song_id t.integer :position t.timestamps end create_table :users_comments do |t| t.integer :user_id t.integer :comment_id t.timestamps end create_table :playlists_comments do |t| t.integer :playlist_id t.integer :comment_id t.timestamps end create_table :songs_comments do |t| t.integer :song_id t.integer :comment_id t.timestamps end end def self.down drop_table :playlists drop_table :comments drop_table :songs_comments drop_table :users_comments drop_table :users_playlists drop_table :users drop_table :playlists drop_table :songs drop_table :playlists end end
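
    The join tables look workable; the part that usually needs the most thought is the model layer. A sketch of one way to wire it (an alternative, not a literal translation of the migration): the playlist/song join becomes a has_many :through model so the position column stays usable, and the three comments join tables collapse into a single polymorphic Comment with commentable_id/commentable_type columns. Class names are illustrative, and non-default table names (users_playlists, playlists_songs) need the noted options:

        class User < ActiveRecord::Base
          has_and_belongs_to_many :playlists, :join_table => "users_playlists"
          has_many :comments, :as => :commentable
        end

        class Playlist < ActiveRecord::Base
          has_and_belongs_to_many :users, :join_table => "users_playlists"
          has_many :playlist_songs
          has_many :songs, :through => :playlist_songs
          has_many :comments, :as => :commentable
        end

        class PlaylistSong < ActiveRecord::Base
          # set_table_name "playlists_songs" if you keep that table name
          belongs_to :playlist
          belongs_to :song
        end

        class Song < ActiveRecord::Base
          has_many :playlist_songs
          has_many :playlists, :through => :playlist_songs
          has_many :comments, :as => :commentable
        end

        class Comment < ActiveRecord::Base
          belongs_to :commentable, :polymorphic => true
        end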

    Read the article

  • Separating code logic from the actual data structures. Best practices?

    - by Patrick
    I have an application that loads lots of data into memory (this is because it needs to perform some mathematical simulation on big data sets). This data comes from several database tables, that all refer to each other. The consistency rules on the data are rather complex, and looking up all the relevant data requires quite some hashes and other additional data structures on the data. Problem is that this data may also be changed interactively by the user in a dialog. When the user presses the OK button, I want to perform all the checks to see that he didn't introduce inconsistencies in the data. In practice all the data needs to be checked at once, so I cannot update my data set incrementally and perform the checks one by one. However, all the checking code work on the actual data set loaded in memory, and use the hashing and other data structures. This means I have to do the following: Take the user's changes from the dialog Apply them to the big data set Perform the checks on the big data set Undo all the changes if the checks fail I don't like this solution since other threads are also continuously using the data set, and I don't want to halt them while performing the checks. Also, the undo means that the old situation needs to be put aside, which is also not possible. An alternative is to separate the checking code from the data set (and let it work on explicitly given data, e.g. coming from the dialog) but this means that the checking code cannot use hashing and other additional data structures, because they only work on the big data set, making the checks much slower. What is a good practice to check user's changes on complex data before applying them to the 'application's' data set?

    Read the article

  • Select Query Joined on Two Fields?

    - by btollett
    I've got a few tables in an access database: ID | LocationName 1 | Location1 2 | Location2 ID | LocationID | Date | NumProductsDelivered 1 | 1 | 12/10 | 3 2 | 1 | 01/11 | 2 3 | 1 | 02/11 | 2 4 | 2 | 11/10 | 1 5 | 2 | 12/10 | 1 ID | LocationID | Date | NumEmployees | EmployeeType 1 | 1 | 12/10 | 10 | 1 (=Permanent) 2 | 1 | 12/10 | 3 | 2 (=Temporary) 3 | 1 | 12/10 | 1 | 3 (=Support) 4 | 2 | 10/10 | 1 | 1 5 | 2 | 11/10 | 2 | 1 6 | 2 | 11/10 | 1 | 2 7 | 2 | 11/10 | 1 | 3 8 | 2 | 12/10 | 2 | 1 9 | 2 | 12/10 | 1 | 3 What I want to do is pass in the LocationID as a parameter and get back something like the following table. So, if I pass in 2 as my LocationID, I should get: Date | NumProductsDelivered | NumPermanentEmployees | NumSupportEmployees 10/10 | | 1 | 11/10 | 1 | 2 | 1 12/10 | 1 | 2 | 1 It seems like this should be a pretty simple query. I really don't even need the first table except as a way to fill in the combo box on the form from which the user chooses which location they want a report for. Unfortunately, everything I've done has resulted in me getting a lot more data than I should be getting. My confusion is in how to set up the join (presumably that's what I'm looking for here) given that I want both the date and locationID to be the same for each row in the result set. Any help would be much appreciated. Thanks.
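
    A sketch in Access SQL: the join simply lists both conditions in the ON clause, and IIF() pivots the employee types into columns. Deliveries and Staffing are placeholder names for the second and third tables (the question doesn't name them), and this assumes one delivery row per location/date; dates that appear only in the staffing table (like 10/10 in the example) would additionally need a UNION of the two date sets, since Access has no FULL OUTER JOIN:

        PARAMETERS [WhichLocation] Long;
        SELECT d.[Date],
               d.NumProductsDelivered,
               SUM(IIF(s.EmployeeType = 1, s.NumEmployees, 0)) AS NumPermanentEmployees,
               SUM(IIF(s.EmployeeType = 3, s.NumEmployees, 0)) AS NumSupportEmployees
        FROM Deliveries AS d
        LEFT JOIN Staffing AS s
               ON (s.LocationID = d.LocationID) AND (s.[Date] = d.[Date])
        WHERE d.LocationID = [WhichLocation]
        GROUP BY d.[Date], d.NumProductsDelivered;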

    Read the article

  • MySQL Multiple Table Join

    - by hitman001
    I have a 3 tables that I'm trying to join and get distinct results. CREATE TABLE `car` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(255) NOT NULL DEFAULT '', PRIMARY KEY (`id`) ) ENGINE=InnoDB mysql> select * from car; +----+-------+ | id | name | +----+-------+ | 1 | acura | +----+-------+ CREATE TABLE `tires` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `tire_desc` varchar(255) DEFAULT NULL, `car_id` int(10) unsigned NOT NULL, PRIMARY KEY (`id`), KEY `new_fk_constraint` (`car_id`), CONSTRAINT `new_fk_constraint` FOREIGN KEY (`car_id`) REFERENCES `car` (`id`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=InnoDB mysql> select * from tires; +----+-------------+--------+ | id | tire_desc | car_id | +----+-------------+--------+ | 1 | front_right | 1 | | 2 | front_left | 1 | +----+-------------+--------+ CREATE TABLE `lights` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `lights_desc` varchar(255) NOT NULL, `car_id` int(10) unsigned NOT NULL, PRIMARY KEY (`id`), KEY `new1_fk_constraint` (`car_id`), CONSTRAINT `new1_fk_constraint` FOREIGN KEY (`car_id`) REFERENCES `car` (`id`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=InnoDB mysql> select * from lights; +----+-------------+--------+ | id | lights_desc | car_id | +----+-------------+--------+ | 1 | right_light | 1 | | 2 | left_light | 1 | +----+-------------+--------+ Here is my query. mysql> SELECT name, group_concat(tire_desc), group_concat(lights_desc) FROM car left join tires on car.id = tires.car_id left join lights on car.id = car_id group by car.id; +-------+-----------------------------------------------+-----------------------------------------------+ | name | group_concat(tire_desc) | group_concat(lights_desc) | +-------+-----------------------------------------------+-----------------------------------------------+ | acura | front_right,front_right,front_left,front_left | right_light,left_light,right_light,left_light | +-------+-----------------------------------------------+-----------------------------------------------+ I get duplicate entires and this is what I would like to get. +-------+-----------------------------------------------+--------------------------------+ | name | group_concat(tire_desc) | group_concat(lights_desc) | +-------+-----------------------------------------------+--------------------------------+ | acura | front_right,front_left | right_light,left_light | +-------+-----------------------------------------------+--------------------------------+ I cannot use distinct in group_concat because I might have legitimate duplicates which I would like to keep. Is there any way to do this query using joins and not using inner selects like the statement below? SELECT name, (select group_concat(tire_desc) from tires where car.id = tires.car_id), (select group_concat(lights_desc) from lights where car.id = lights.car_id) FROM car Also, if I will use inner selects, will there be any performance issues over joins?
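
    A sketch of a middle ground between the two approaches: keep joins, but aggregate each child table in its own derived table first, so tires and lights can no longer multiply each other's rows (this runs against the tables defined in the question):

        SELECT c.name,
               t.tire_list,
               l.light_list
        FROM car c
        LEFT JOIN (SELECT car_id, GROUP_CONCAT(tire_desc) AS tire_list
                   FROM tires
                   GROUP BY car_id) AS t ON t.car_id = c.id
        LEFT JOIN (SELECT car_id, GROUP_CONCAT(lights_desc) AS light_list
                   FROM lights
                   GROUP BY car_id) AS l ON l.car_id = c.id;

    Legitimate duplicates within one table are preserved, because each GROUP_CONCAT only ever sees its own table's rows. Compared with the correlated subselects, the derived tables are built once per query rather than once per car row; with an index on car_id either form is usually fine.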

    Read the article
