Search Results

Search found 61944 results on 2478 pages for 'text database'.


  • Performance of VIEW vs. SQL statement

    - by Matt W.
    I have a query that goes something like the following:

        select <field list>
        from <table list>
        where <join conditions> and <condition list>
          and PrimaryKey in (select PrimaryKey from <table list> where <join list> and <condition list>)
          and PrimaryKey not in (select PrimaryKey from <table list> where <join list> and <condition list>)

    The sub-select queries both have multiple sub-select queries of their own that I'm not showing, so as not to clutter the statement. One of the developers on my team thinks a view would be better. I disagree, because the SQL statement uses variables passed in by the program (based on the user's login ID). Are there any hard and fast rules on when a view should be used vs. a plain SQL statement? What kind of performance differences are there between running SQL statements directly against regular tables vs. against views? (Note that all the joins / where conditions are against indexed columns, so that shouldn't be an issue.)

    EDIT for clarification... Here's the query I'm working with:

        select obj_id
        from object
        where obj_id in (
                (select distinct(sec_id)
                 from security
                 where sec_type_id = 494
                   and (
                         (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230)
                         or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278)
                             and sec_usergroup_type_id = 231)
                       )
                   and sec_obj_id in (
                         select obj_id
                         from object
                         where obj_ot_id in (
                                 select of_ot_id
                                 from obj_form
                                 left outer join obj_type on ot_id = of_ot_id
                                 where ot_app_id = 87
                                   and of_id in (
                                         select sec_obj_id
                                         from security
                                         where sec_type_id = 493
                                           and (
                                                 (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230)
                                                 or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278)
                                                     and sec_usergroup_type_id = 231)
                                               )
                                       )
                                   and of_usage_type_id = 131
                               )
                       )
                )
              )
           or (obj_ot_id in (
                     select of_ot_id
                     from obj_form
                     left outer join obj_type on ot_id = of_ot_id
                     where ot_app_id = 87
                       and of_id in (
                             select sec_obj_id
                             from security
                             where sec_type_id = 493
                               and (
                                     (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230)
                                     or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278)
                                         and sec_usergroup_type_id = 231)
                                   )
                           )
                       and of_usage_type_id = 131
                   )
               and obj_id not in (select sec_obj_id from security where sec_type_id = 494)
              )
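
    As a side note on the general pattern (not the specific query above): when an IN / NOT IN pair like this becomes a bottleneck, one common rewrite is EXISTS / NOT EXISTS, which lets the optimizer correlate on the key instead of materializing the sub-selects. A minimal sketch with placeholder table and column names:

        -- Sketch only: table/column names are placeholders, not the query above.
        select t.col1, t.col2
        from main_table t
        where t.filter_col = @p1
          and exists (select 1
                      from include_table i
                      where i.primary_key = t.primary_key
                        and i.some_condition = @p2)
          and not exists (select 1
                          from exclude_table e
                          where e.primary_key = t.primary_key
                            and e.other_condition = @p2);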

    Read the article

  • PHP mySQL - replace some string inside string

    - by apis17
    I want to replace ALL commas "," with ", " (comma followed by a space) in the address column of my MySQL table. For example,

        +----------------+----------------+
        | Name           | Address        |
        +----------------+----------------+
        | Someone name   | A1,Street Name |
        +----------------+----------------+

    into

        +----------------+----------------+
        | Name           | Address        |
        +----------------+----------------+
        | Someone name   | A1, Street Name|
        +----------------+----------------+

    Thanks in advance.
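
    A minimal sketch of the usual approach, assuming the table is called customers and the column is Address (both names are placeholders): MySQL's REPLACE() string function rewrites the column in place.

        -- Sketch: add a space after every comma in the Address column.
        -- Table and column names are assumptions; adjust to your schema.
        -- Note: running it twice would double the spaces, so normalize first
        -- if the statement may be re-run.
        UPDATE customers
        SET    Address = REPLACE(REPLACE(Address, ', ', ','), ',', ', ')
        WHERE  Address LIKE '%,%';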

    Read the article

  • Detecting changes between rows with same ID

    - by Noah
    I have a table containing some names and their associated ID, along with a snapshot:

        snapshot, id, name

    I need to identify when a name has changed for an id between snapshots. For example, in the following data:

        1, 0, 'MOUSE_SPEED'
        1, 1, 'MOUSE_POS'
        1, 2, 'KEYBOARD_STATE'
        2, 0, 'MOUSE_BUTTONS'
        2, 1, 'MOUSE_POS'
        2, 2, 'KEYBOARD_STATE'

    ...the meaning of id 0 changed with snapshot 2, but the others remained the same. I'd like to construct a query that (ideally) returns:

        1, 0, 'MOUSE_SPEED'
        2, 0, 'MOUSE_BUTTONS'

    I am using PostgreSQL v8.4.2.
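
    One possible approach, sketched under the assumption that the table is named snapshots: PostgreSQL 8.4's window functions can compare each row's name with the previous and next name for the same id, so both the row before and the row after a change are returned.

        -- Sketch: rows whose name differs from a neighbouring snapshot for the same id.
        -- The table name "snapshots" is an assumption.
        WITH neighbours AS (
            SELECT snapshot, id, name,
                   lag(name)  OVER (PARTITION BY id ORDER BY snapshot) AS prev_name,
                   lead(name) OVER (PARTITION BY id ORDER BY snapshot) AS next_name
            FROM snapshots
        )
        SELECT snapshot, id, name
        FROM neighbours
        WHERE name <> prev_name   -- this row introduced a new name
           OR name <> next_name   -- this row's name changes in the next snapshot
        ORDER BY id, snapshot;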

    Read the article

  • How can I check a type's dependents order to drop them and replace/modify the initial type?

    - by pctroll
    I tried to modify a type using the following code and it gave me error ORA-02303. I don't know much about Oracle or PL/SQL, but I need to solve this, so I'd appreciate any help. Thanks in advance. The code is just an example; before I can replace the type I need to check its dependents first.

        create or replace type A as object (
            x_ number,
            y_ varchar2(10),
            member procedure to_upper
        );
        /
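
    For context, ORA-02303 means the type cannot be dropped or replaced because it has type or table dependents. A sketch of how those dependents can be listed from Oracle's data dictionary, so they can be handled in order before replacing the type (the type name 'A' comes from the example above):

        -- Sketch: list objects that depend on type A.
        SELECT name, type
        FROM   user_dependencies
        WHERE  referenced_name = 'A'
          AND  referenced_type = 'TYPE';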

    Read the article

  • Syntax for "RETURNING" clause in Mysql PDO

    - by dmontain
    I'm trying to add a record and, at the same time, return the id of the record added. I read that it's possible to do this with a RETURNING clause:

        $stmt->prepare("INSERT INTO tablename (field1, field2) VALUES (:value1, :value2) RETURNING id");

    but the insertion fails when I add RETURNING. There is an auto-incremented field called id in the table being inserted into. Can someone see anything wrong with my syntax? Or maybe PDO does not support RETURNING?
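
    For what it's worth, RETURNING is a PostgreSQL/Oracle construct; MySQL's INSERT syntax does not accept it, so the statement would fail regardless of PDO. A sketch of the usual MySQL pattern, using the placeholder table and columns from the question (PDO's lastInsertId() wraps the same mechanism):

        -- Sketch: MySQL has no RETURNING clause; fetch the generated key afterwards.
        INSERT INTO tablename (field1, field2) VALUES ('value1', 'value2');
        SELECT LAST_INSERT_ID();   -- id generated by the last INSERT on this connection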

    Read the article

  • MySQL: automatic rollback on transaction failure

    - by praksant
    Is there any way to set MySQL to roll back a transaction automatically on the first error/warning? Right now, if everything goes well it commits, but on failure it leaves the transaction open, and the next start of a transaction commits the incomplete changes from the failed transaction. (I'm executing the queries from PHP, but I don't want to check for failure in PHP, as that would mean more round trips between the MySQL server and the web server.) Thank you
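
    As far as I know there is no global server setting for this, but if the statements can be wrapped in a stored procedure, an exit handler can do the rollback on the server side without extra round trips. A minimal sketch (procedure name and body are placeholders):

        -- Sketch: roll back everything server-side if any statement fails.
        DELIMITER //
        CREATE PROCEDURE do_work()
        BEGIN
            DECLARE EXIT HANDLER FOR SQLEXCEPTION
            BEGIN
                ROLLBACK;   -- undo the partial work, then leave the procedure
            END;

            START TRANSACTION;
            -- ... the statements that must succeed or fail together ...
            COMMIT;
        END //
        DELIMITER ;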

    Read the article

  • Higher speed options for executing very large (20 GB) .sql file in MySQL

    - by Jonogan
    My firm was delivered a 20+ GB .sql file in response to a request for data from the gov't. I don't have many options for getting the data in a different format, so I need options for how to import it in a reasonable amount of time. I'm running it on a high-end server (Win 2008 64-bit, MySQL 5.1) using Navicat's batch execution tool. It's been running for 14 hours and shows no signs of being near completion. Does anyone know of any higher-speed options for such a transaction? Or is this what I should expect given the large file size? Thanks
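
    A common way to speed up this kind of bulk load, sketched below, is to run the file through the mysql command-line client instead of a GUI tool, with constraint and key checks deferred for the duration of the import. This assumes the delivered data is known-good, so skipping the checks during the load is safe; the file path is a placeholder.

        -- Sketch: run inside the mysql command-line client before sourcing the dump.
        SET autocommit = 0;
        SET unique_checks = 0;
        SET foreign_key_checks = 0;

        SOURCE C:/dumps/delivery.sql;   -- placeholder path to the 20 GB file

        COMMIT;
        SET foreign_key_checks = 1;
        SET unique_checks = 1;
        SET autocommit = 1;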

    Read the article

  • mysql db connection

    - by Dragster
    Hi there. I have been searching the web for a way to connect my Android emulator to a MySQL DB. I've found that you can't connect directly, only via a web server; the web server will handle the requests from my Android app. I found the following code on www.helloandroid.com, but I don't understand it. If I run this code in the emulator nothing happens; the screen stays black. Where does Log.i end up: on the Android screen, in the error log, or somewhere else? Can somebody help me with this code?

        package app.android.ticket;

        import java.io.BufferedReader;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.util.ArrayList;

        import org.apache.http.HttpEntity;
        import org.apache.http.HttpResponse;
        import org.apache.http.NameValuePair;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.entity.UrlEncodedFormEntity;
        import org.apache.http.client.methods.HttpPost;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.message.BasicNameValuePair;
        import org.json.JSONArray;
        import org.json.JSONException;
        import org.json.JSONObject;

        import android.app.Activity;
        import android.os.Bundle;
        import android.util.Log;

        public class fetchData extends Activity {

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                // call the method to run the data retrieval
                getServerData();
            }

            public static final String KEY_121 = "http://www.jorisdek.nl/android/getAllPeopleBornAfter.php";

            public fetchData() {
                Log.e("fetchData", "Initialized ServerLink ");
            }

            private void getServerData() {
                InputStream is = null;
                String result = "";

                // the year data to send
                ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>();
                nameValuePairs.add(new BasicNameValuePair("year", "1980"));

                // http post
                try {
                    HttpClient httpclient = new DefaultHttpClient();
                    HttpPost httppost = new HttpPost(KEY_121);
                    httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));
                    HttpResponse response = httpclient.execute(httppost);
                    HttpEntity entity = response.getEntity();
                    is = entity.getContent();
                } catch (Exception e) {
                    Log.e("log_tag", "Error in http connection " + e.toString());
                }

                // convert response to string
                try {
                    BufferedReader reader = new BufferedReader(new InputStreamReader(is, "iso-8859-1"), 8);
                    StringBuilder sb = new StringBuilder();
                    String line = null;
                    while ((line = reader.readLine()) != null) {
                        sb.append(line + "\n");
                    }
                    is.close();
                    result = sb.toString();
                } catch (Exception e) {
                    Log.e("log_tag", "Error converting result " + e.toString());
                }

                // parse json data
                try {
                    JSONArray jArray = new JSONArray(result);
                    for (int i = 0; i < jArray.length(); i++) {
                        JSONObject json_data = jArray.getJSONObject(i);
                        Log.i("log_tag", "id: " + json_data.getInt("id") +
                                ", name: " + json_data.getString("name") +
                                ", sex: " + json_data.getInt("sex") +
                                ", birthyear: " + json_data.getInt("birthyear"));
                    }
                } catch (JSONException e) {
                    Log.e("log_tag", "Error parsing data " + e.toString());
                }
            }
        }
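
    The PHP endpoint referenced above isn't shown; judging by the "year" parameter the code posts and the JSON fields it parses (id, name, sex, birthyear), the server-side script presumably runs a query along these lines. Table and column names here are guesses for illustration only.

        -- Hypothetical query behind getAllPeopleBornAfter.php.
        SELECT id, name, sex, birthyear
        FROM   people
        WHERE  birthyear > 1980;   -- 1980 stands in for the posted "year" parameter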

    Read the article

  • Creating an appropriate index for a frequently used query in SQL Server

    - by Slauma
    In my application I have two queries which will be used quite frequently. The WHERE clauses of these queries are the following:

        WHERE FieldA = @P1 AND (FieldB = @P2 OR FieldC = @P2)

    and

        WHERE FieldA = @P1 AND FieldB = @P2

    P1 and P2 are parameters entered in the UI or coming from external data sources.

        FieldA is an int and highly non-unique, meaning only two, three or four different values in a table with, say, 20000 rows
        FieldB is a varchar(20) and is "almost" unique; there will be only very few rows where FieldB has the same value
        FieldC is a varchar(15) and also highly distinct, but not as much as FieldB
        FieldA and FieldB together are unique (but do not form my primary key, which is a simple auto-incrementing identity column with a clustered index)

    I'm wondering now what's the best way to define an index to speed up specifically these two queries. Shall I define one index with...

        FieldB (or better FieldC here?)
        FieldC (or better FieldB here?)
        FieldA

    ...or better two indices:

        FieldB, FieldA

    and

        FieldC, FieldA

    Or are there even other and better options? What's the best way, and why? Thank you for suggestions in advance!
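
    For reference, the two-index option would look roughly like this in T-SQL (the table name is a placeholder). With FieldB and FieldC each highly selective, leading each index with them and adding FieldA as the trailing key column covers both WHERE clauses, and SQL Server can often combine the two indexes for the OR query.

        -- Sketch: one index per selective column, FieldA as the trailing key.
        -- "MyTable" is a placeholder name.
        CREATE NONCLUSTERED INDEX IX_MyTable_FieldB_FieldA ON dbo.MyTable (FieldB, FieldA);
        CREATE NONCLUSTERED INDEX IX_MyTable_FieldC_FieldA ON dbo.MyTable (FieldC, FieldA);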

    Read the article

  • WPF: How to bind and update display with DataContext

    - by Am
    I'm trying to do the following: I have a TabControl with several tabs. Each TabControlItem.Content points to a PersonDetails, which is a UserControl. Each BookDetails has a dependency property called IsEditMode. I want a control outside of the TabControl, named ToggleEditButton, to be updated whenever the selected tab changes. I thought I could do this by changing the ToggleEditButton data context, but it doesn't seem to work (I'm new to WPF, so I might be way off).

    The code changing the data context:

        private void tabControl1_SelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            if (e.Source is TabControl)
            {
                if (e.Source.Equals(tabControl1))
                {
                    if (tabControl1.SelectedItem is CloseableTabItem)
                    {
                        var tabItem = tabControl1.SelectedItem as CloseableTabItem;
                        RibbonBook.DataContext = tabItem.Content as BookDetails;
                        ribbonBar.SelectedTabItem = RibbonBook;
                    }
                }
            }
        }

    The DependencyProperty under BookDetails:

        public static readonly DependencyProperty IsEditModeProperty =
            DependencyProperty.Register("IsEditMode", typeof (bool), typeof (BookDetails),
                new PropertyMetadata(true));

        public bool IsEditMode
        {
            get { return (bool)GetValue(IsEditModeProperty); }
            set
            {
                SetValue(IsEditModeProperty, value);
                SetValue(IsViewModeProperty, !value);
            }
        }

    And the relevant XAML:

        <odc:RibbonTabItem Title="Book" Name="RibbonBook">
            <odc:RibbonGroup Title="Details" Image="img/books2.png" IsDialogLauncherVisible="False">
                <odc:RibbonToggleButton Content="Edit" Name="ToggleEditButton"
                                        odc:RibbonBar.MinSize="Medium"
                                        SmallImage="img/edit_16x16.png"
                                        LargeImage="img/edit_32x32.png"
                                        Click="Book_EditDetails"
                                        IsChecked="{Binding Path=IsEditMode, Mode=TwoWay}"/>
        ...

    There are two things I want to accomplish: having the button reflect IsEditMode for the visible tab, and having the button change the property value with no code-behind (if possible). Any help would be greatly appreciated.

    Read the article

  • Fact table with multiple facts

    - by Jeff Meatball Yang
    I have a dimension (SiteItem) that has two important facts:

        perUserClicks
        perBrowserClicks

    However, within this dimension I have groups of dimensions based on an attribute column (let's call the groups AboveFoldItems, LeftNavItems, OnTheFlyItems, etc.), and each group has more facts that are specific to that group:

        AboveFoldItems: eyeTime, loadTime
        LeftNavItems: mouseOverTime
        OnTheFlyItems: doesn't have any extra, but may in the future

    Is the following fact table schema OK?

        DateKey
        SessionKey
        SiteItemKey
        perUserClicks
        perBrowserClicks
        eyeTime
        loadTime
        mouseOverTime

    It seems a little wasteful, since only some columns pertain to some dimension keys (the irrelevant facts are left NULL). But... this seems like it would be a common problem, so there should be a common solution for this, right?
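
    For reference, a sketch of the proposed single fact table (column names are taken from the question; the table name and data types are assumptions). The group-specific measures are simply nullable:

        -- Sketch of the fact table as proposed; name and data types are assumed.
        CREATE TABLE FactSiteItemActivity (
            DateKey          int NOT NULL,
            SessionKey       int NOT NULL,
            SiteItemKey      int NOT NULL,
            perUserClicks    int NULL,
            perBrowserClicks int NULL,
            eyeTime          int NULL,   -- only for AboveFoldItems
            loadTime         int NULL,   -- only for AboveFoldItems
            mouseOverTime    int NULL    -- only for LeftNavItems
        );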

    Read the article

  • How to secure phpMyAdmin

    - by Andrei
    Hi, I have noticed that there are strange requests to my website trying to find phpMyAdmin, like /phpmyadmin/, /pma/, etc. Now I have installed PMA on Ubuntu via apt and would like to access it via a web address different from /phpmyadmin/. What can I do to change it? Thanks

    Read the article

  • How do you determine an acceptable response time for App Engine DB requests?

    - by qiq
    According to this discussion of Google App Engine on Hacker News:

        "A DB (read) request takes over 100ms on the datastore. That's insane and unusable for about 90% of applications."

    How do you determine what is an acceptable response time for a DB read request? I have been using App Engine without noticing any issues with DB responsiveness. But, on the other hand, I'm not sure I would even know what to look for in that regard :)

    Read the article

  • Unable to update value of Select from AngularJs

    - by Ahmad.Masood
    I am unable to update the value of a select from AngularJS. Here is my code:

        <select ng-model="family.grade" >
            <option ng-repeat="option in options" value='{{option.id}}'>{{option.text}}</option>
        </select>

    Here are the options which I am using to populate my select:

        var options = [{text:'Pre-K',id:'Pre-K'}, {text:'K',id:'K'}, {text:'1',id:'1'},
                       {text:'2',id:'2'}, {text:'3',id:'3'}, {text:'4',id:'4'},
                       {text:'5',id:'5'}, {text:'6',id:'6'}, {text:'7',id:'7'},
                       {text:'8',id:'8'}, {text:'+',id:'+'}];

    Here is my JS code:

        $scope.$watch("family_member.date_of_birth", function(newValue, oldValue){
            $scope.family.grade = "1";
        });

    Whenever the value of family_member.date_of_birth changes, it should set the value of the select to 1, but this change is not visible in the UI.

    Read the article

  • input in table > td, But yet extra bottom spacing between rows! Internet Explorer

    - by phpExe
    I'm using the Meyer CSS reset, but I have a problem with inputs in a table. There is extra space between rows:

        <table class="table" cellpadding="0" cellspacing="0" border="0">
          <tr>
            <td>&nbsp;</td>
            <td>1</td> <td>2</td> <td>3</td> <td>4</td> <td>5</td>
            <td>6</td> <td>7</td> <td>8</td> <td>9</td> <td>10</td>
          </tr>
          <tr>
            <td>1</td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text" class="black"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
          </tr>
          <tr>
            <td>2</td>
            <td><input type="text" /></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
            <td><input type="text" class="black"/></td>
            <td><input type="text"/></td>
            <td><input type="text"/></td>
          </tr>
        </table>

    and the CSS:

        .table {
            border-collapse: collapse;
            border-spacing: 0px;
        }
        .table tr {
            margin-bottom: 0;
            overflow: hidden;
            height: 25px;
            width: 100%;
            padding: 0;
        }
        .table input {
            width: 25px;
            height: 25px;
            border: 1px solid #000;
            text-align: center;
        }
        .black {
            background: #000;
        }

    Why is there extra bottom spacing in Internet Explorer (I hate IE :(()? Thanks a lot

    Read the article

  • Reading directly from the Doctrine Searchable index table

    - by phidah
    I've got a Doctrine table with the Searchable behavior enabled. Whenever a record is created, an index entry is made in another table. I have a model called Entry, and the behavior automatically created the table entry_index. My question now is: how can I, without using the search(...) methods of my model, use the data from this table? I want to create a tag cloud of the words most used, and the data in the index table is exactly what I need.
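
    Since the index table is an ordinary table, one option is simply to query it directly (or map it to its own Doctrine model). A sketch, assuming the default Searchable layout where each row is one keyword occurrence stored in a "keyword" column:

        -- Sketch: most frequently indexed words, e.g. for a tag cloud.
        SELECT keyword, COUNT(*) AS occurrences
        FROM entry_index
        GROUP BY keyword
        ORDER BY occurrences DESC
        LIMIT 50;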

    Read the article

  • In a star schema, are foreign key constraints between facts and dimensions neccessary?

    - by Garett
    I'm getting my first exposure to data warehousing, and I'm wondering whether it is necessary to have foreign key constraints between facts and dimensions. Are there any major downsides to not having them? I'm currently working with a relational star schema. In traditional applications I'm used to having them, but I started to wonder if they are needed in this case. I'm currently working in a SQL Server 2005 environment.
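
    For concreteness, the kind of constraint in question would look something like this (fact and dimension names are hypothetical); whether to declare it is exactly the trade-off being asked about, since it adds checking cost on load in exchange for the guarantee that every fact row points at an existing dimension member:

        -- Hypothetical fact-to-dimension foreign key in a star schema.
        ALTER TABLE dbo.FactSales
            ADD CONSTRAINT FK_FactSales_DimDate
            FOREIGN KEY (DateKey) REFERENCES dbo.DimDate (DateKey);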

    Read the article

  • MSSQL 2005: Rename DB Server Instance Name?

    - by Code Sherpa
    Hi, can somebody tell me how to rename the DB server instance name and a DB name in MSSQL 2005? Right now I have:

        SERVER/OLDNAME -- oldnameDB

    I want to change the server instance and also change the DB name. I have tried:

        EXEC sp_renamedb 'oldName', 'newName'

    and that has changed the DB name as it appears in the object tree. But when I do "select @@servername" it is still the old name. Also, the MDF and LDF files still have the old name. How do I change the instance and DB names as a clean sweep across the server? Thanks.
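
    One detail worth noting: the value returned by @@SERVERNAME is stored metadata and is not refreshed by renaming anything else; the documented way to update it is sp_dropserver / sp_addserver followed by a SQL Server service restart. A sketch with placeholder names (the physical MDF/LDF files are a separate step, e.g. detach/rename/attach or ALTER DATABASE ... MODIFY FILE):

        -- Sketch: update the name reported by @@SERVERNAME, then restart the service.
        EXEC sp_dropserver 'SERVER\OLDNAME';
        EXEC sp_addserver  'SERVER\NEWNAME', 'local';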

    Read the article

  • Using NULLs in matchup table

    - by TomWilsonFL
    I am working on the accounting portion of a reservation system (think limo company). In the system there are multiple objects that can either be paid or submit a payment. I am tracking all of these "transactions" in three tables called tx, tx_cc, and tx_ch. tx generates a new tx_id (transaction ID) and keeps the information about amount, validity, etc. tx_cc and tx_ch keep the information about the credit card or check used, respectively, which link to other tables (credit_card and bank_account among others). This seems fairly normalized to me, no?

    Now here is my problem: the payment transaction can take place for a myriad of reasons. Either a reservation is being paid for, a travel agent that booked a reservation is being paid, a driver is being paid, etc. This results in multiple tables, one for each of the entities: agent_tx, driver_tx, reservation_tx, etc. They look like this:

        CREATE TABLE IF NOT EXISTS `driver_tx` (
          `tx_id` int(10) unsigned zerofill NOT NULL,
          `driver_id` int(11) NOT NULL,
          `reservation_id` int(11) default NULL,
          `reservation_item_id` int(11) default NULL,
          PRIMARY KEY (`tx_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    Now, this transaction is for a driver, but could be applied to an individual item on the reservation or to the entire reservation overall. Therefore I require either reservation_id OR reservation_item_id to be NULL. In the future there may be other things a driver is paid for, which I would also add to this table, defaulting to NULL. What is the rule on this? Opinion? Obviously I could break this out into MANY three-column tables, but the amount of OUTER JOINing needed seems outrageous. Your input is appreciated. Peace, Tom
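
    If the "at most one of the two references is set" rule should live in the schema rather than in application code, a CHECK constraint expresses it directly. A sketch against the driver_tx table above; note that MySQL versions before 8.0.16 parse but silently ignore CHECK constraints, so on the 5.x-era engine shown it would be documentation only (a trigger would be needed for actual enforcement):

        -- Sketch: at most one of reservation_id / reservation_item_id may be set.
        ALTER TABLE driver_tx
            ADD CONSTRAINT chk_driver_tx_target
            CHECK (reservation_id IS NULL OR reservation_item_id IS NULL);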

    Read the article

  • Migrating from hand-written persistence layer to ORM

    - by Sergey Mikhanov
    Hi community,

    We are currently evaluating options for migrating from a hand-written persistence layer to an ORM. We have a bunch of legacy persistent objects (~200) that implement a simple interface like this:

        interface JDBC {
            public long getId();
            public void setId(long id);
            public void retrieve();
            public void setDataSource(DataSource ds);
        }

    When retrieve() is called, the object populates itself by issuing handwritten SQL queries against the connection provided, using the ID it received in the setter (this is usually the only parameter to the query). It manages its statements, result sets, etc. itself. Some of the objects have special flavors of the retrieve() method, like retrieveByName(); in that case a different SQL query is issued. Queries can be quite complex: we often join several tables to populate the sets representing relations to other objects, and sometimes join queries are issued on demand in a specific getter (lazy loading). So basically, we have implemented most of an ORM's functionality manually.

    The reason for that was performance. We have very strong requirements for speed, and back in 2005 (when this code was written) performance tests showed that none of the mainstream ORMs were as fast as hand-written SQL.

    The problems we are facing now that make us think of an ORM are:

    - Most of the paths in this code are well-tested and stable. However, some rarely-used code is prone to result set and connection leaks that are very hard to detect.
    - We are currently squeezing out some additional performance by adding caching to our persistence layer, and it's a huge pain to maintain the cached objects manually in this setup.
    - Supporting this code when the DB schema changes is a big problem.

    I am looking for advice on what the best alternative for us could be. As far as I know, ORMs have advanced in the last 5 years, so it might be that there is now one that offers acceptable performance. As I see this issue, we need to address these points:

    - Find some way to reuse at least some of the written SQL to express mappings.
    - Have the possibility to issue native SQL queries without the necessity to manually decompose their results (i.e., avoid manual rs.getInt(42), as such calls are very sensitive to schema changes).
    - Add a non-intrusive caching layer.
    - Keep the performance figures.

    Is there any ORM framework you could recommend with regard to that?

    Read the article
