Search Results

Search found 29093 results on 1164 pages for 'oracle mysql cluster'.

Page 572/1164 | < Previous Page | 568 569 570 571 572 573 574 575 576 577 578 579  | Next Page >

  • Ruby: would using Fibers increase my DB insert throughput?

    - by Zombies
    Currently I am using Ruby 1.9.1 and the 'ruby-mysql' gem, which unlike the 'mysql' gem is written in Ruby only. This is pretty slow actually, as it seems to insert at a rate of almost 1 per second (SLOOOOOWWWWWW). And I have a lot of inserts to make too; it's pretty much what this script does ultimately. I am using just 1 connection (since I am using just one thread). I am hoping to speed things up by creating a fiber that will: create a new DB connection, insert 1-3 records, then close the DB connection. I would imagine launching 20-50 of these would greatly increase DB throughput. Am I correct to go down this route? I feel that this is the best option, as opposed to refactoring all of my DB code :(
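
    Before reaching for Fibers, it is worth checking whether the per-row round trips are the real cost. A minimal SQL sketch of a batched insert (table and columns are hypothetical); one statement carrying many rows usually beats many single-row INSERTs over the same connection:

      -- Batched insert: one round trip covers all three rows.
      INSERT INTO events (user_id, action, created_at) VALUES
        (1, 'click', NOW()),
        (2, 'view',  NOW()),
        (3, 'click', NOW());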

    Read the article

  • [Reloaded] Error while sorting filtered data from a GridView

    - by Bogdan M
    Hello guys, I have an error I cannot solve on an ASP.NET website. One of its pages, Countries.aspx, has the following controls: a CheckBox called "CheckBoxNAME": <asp:CheckBox ID="CheckBoxNAME" runat="server" Text="Name" /> a TextBox called "TextBoxNAME": <asp:TextBox ID="TextBoxNAME" runat="server" Width="100%" Wrap="False"></asp:TextBox> an SqlDataSource called "SqlDataSourceCOUNTRIES" that selects all records from a table called COUNTRIES with 3 columns - ID (Number, PK), NAME (Varchar2(1000)), and POPULATION (Number): <asp:SqlDataSource ID="SqlDataSourceCOUNTRIES" runat="server" ConnectionString="<%$ ConnectionStrings:myDB %>" ProviderName="<%$ ConnectionStrings:myDB.ProviderName %>" SelectCommand="SELECT COUNTRIES.ID, COUNTRIES.NAME, COUNTRIES.POPULATION FROM COUNTRIES ORDER BY COUNTRIES.NAME, COUNTRIES.ID"></asp:SqlDataSource> a GridView called GridViewCOUNTRIES: <asp:GridView ID="GridViewCOUNTRIES" runat="server" AllowPaging="True" AllowSorting="True" AutoGenerateColumns="False" DataSourceID="SqlDataSourceCOUNTRIES" DataKeyNames="ID" DataMember="DefaultView"> <Columns> <asp:CommandField ShowSelectButton="True" /> <asp:BoundField DataField="ID" HeaderText="Id" SortExpression="ID" /> <asp:BoundField DataField="NAME" HeaderText="Name" SortExpression="NAME" /> <asp:BoundField DataField="POPULATION" HeaderText="Population" SortExpression="POPULATION" /> </Columns> </asp:GridView> a Button called ButtonFilter: <asp:Button ID="ButtonFilter" runat="server" Text="Filter" onclick="ButtonFilter_Click"/> This is the onclick event: protected void ButtonFilter_Click(object sender, EventArgs e) { Response.Redirect("Countries.aspx?" + (this.CheckBoxNAME.Checked ? string.Format("NAME={0}", this.TextBoxNAME.Text) : string.Empty)); } Also, this is the main onload event of the page: protected void Page_Load(object sender, EventArgs e) { if (Page.IsPostBack == false) { if (Request.QueryString.Count != 0) { Dictionary<string, string> parameters = new Dictionary<string, string>(); string commandTextFormat = string.Empty; if (Request.QueryString["NAME"] != null) { if (commandTextFormat != string.Empty && commandTextFormat.EndsWith("AND") == false) { commandTextFormat += "AND"; } commandTextFormat += " (UPPER(COUNTRIES.NAME) LIKE '%' || :NAME || '%') "; parameters.Add("NAME", Request.QueryString["NAME"].ToString()); } this.SqlDataSourceCOUNTRIES.SelectCommand = string.Format("SELECT COUNTRIES.ID, COUNTRIES.NAME, COUNTRIES.POPULATION FROM COUNTRIES WHERE {0} ORDER BY COUNTRIES.NAME, COUNTRIES.ID", commandTextFormat); foreach (KeyValuePair<string, string> parameter in parameters) { this.SqlDataSourceCOUNTRIES.SelectParameters.Add(parameter.Key, parameter.Value.ToUpper()); } } } } Basically, the page displays in GridViewCOUNTRIES all the records of the COUNTRIES table. The scenario is the following: - the user checks the CheckBox; - the user types a value in the TextBox (let's say "ch"); - the user presses the Button; - the page loads, displaying only the records that match the filter criteria (in this case, all the countries whose names contain "Ch"); - the user clicks on the header of the column called "Name" in order to sort the data in the GridView. Then I get the following error: ORA-01036: illegal variable name/number. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.Data.OracleClient.OracleException: ORA-01036: illegal variable name/number. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Any help is greatly appreciated, thanks. PS: I'm using ASP.NET 3.5, under Visual Studio 2008, with an Oracle XE database.

    Read the article

  • BizTalk - generating schema from Oracle stored proc with table variable argument

    - by Ron Savage
    I'm trying to set up a simple example project in BizTalk that gets changes made to a table in a SQL Server db and updates a copy of that table in an Oracle db. On the SQL Server side, I have a stored proc named GetItemChanges() that returns a variable number of records. On the Oracle side, I have a stored proc named Update_Item_Region_Table() designed to take a table of records as a parameter so that it can process all the records returned from GetItemChanges() in one call. It is defined like this: create or replace type itemrec is OBJECT ( UPC VARCHAR2(15), REGION VARCHAR2(5), LONG_DESCRIPTION VARCHAR2(50), POS_DESCRIPTION VARCHAR2(30), POS_DEPT VARCHAR2(5), ITEM_SIZE VARCHAR2(10), ITEM_UOM VARCHAR2(5), BRAND VARCHAR2(10), ITEM_STATUS VARCHAR2(5), TIME_STAMP VARCHAR2(20), COSTEDBYWEIGHT INTEGER ); create or replace type tbl_of_rec is table of itemrec; create or replace PROCEDURE Update_Item_Region_table ( Item_Data tbl_of_rec ) IS errcode integer; errmsg varchar2(4000); BEGIN for recIndex in 1 .. Item_Data.COUNT loop update FL_ITEM_REGION_TEST set Region = Item_Data(recIndex).Region, Long_description = Item_Data(recIndex).Long_description, Pos_Description = Item_Data(recIndex).Pos_description, Pos_Dept = Item_Data(recIndex).Pos_dept, Item_Size = Item_Data(recIndex).Item_Size, Item_Uom = Item_Data(recIndex).Item_Uom, Brand = Item_Data(recIndex).Brand, Item_Status = Item_Data(recIndex).Item_Status, Timestamp = to_date(Item_Data(recIndex).Time_stamp, 'yyyy-mm-dd HH24:mi:ss'), CostedByWeight = Item_Data(recIndex).CostedByWeight where UPC = Item_Data(recIndex).UPC; log_message(Item_Data(recIndex).Region, '', 'Updated item ' || Item_Data(recIndex).UPC || '.'); end loop; EXCEPTION WHEN OTHERS THEN errcode := SQLCODE(); errmsg := SQLERRM(); log_message('CE', '', 'Error in Update_Item_Region_table(): Code [' || errcode || '], Msg [' || errmsg || '] ...'); END; In my BizTalk project I generate the schemas and binding information for both stored procedures. For the Oracle procedure, I specified a path for the GeneratedUserTypesAssemblyFilePath parameter to generate a DLL to contain the definition of the data types. In the Send Port on the server, I put the path to that Types DLL in the UserAssembliesLoadPath parameter. I created a map to translate the GetItemChanges() schema to the Update_Item_Region_Table() schema. When I run it the data is extracted and transformed fine but causes an exception trying to pass the data to the Oracle proc: *The adapter failed to transmit message going to send port "WcfSendPort_OracleDBBinding_HOST_DATA_Procedure_Custom" with URL "oracledb://dvotst/". It will be retransmitted after the retry interval specified for this Send Port. Details:"System.InvalidOperationException: Custom type mapping for 'HOST_DATA.TBL_OF_REC' is not specified or is invalid.* So it is apparently not getting the information about the custom data type TBL_OF_REC into the Types DLL. Any tips on how to make this work?

    Read the article

  • MEMORY(HEAP) vs. InnoDB in a Read and Write Environment

    - by Johannes
    I want to program a real-time application using MySQL. It needs a small table (less than 10000 rows) that will be under heavy read (scan) and write (update and some insert/delete) load. I am really speaking of 10000 updates or selects per second. These statements will be executed on only a few (less than 10) open mysql connections. The table is small and does not contain any data that needs to be stored on disk. So I ask which is faster: InnoDB or MEMORY (HEAP)? My thoughts are: Both engines will probably serve SELECTs directly from memory, as even InnoDB will cache the whole table. What about the UPDATEs? (innodb_flush_log_at_trx_commit?) My main concern is the locking behavior: InnoDB row lock vs. MEMORY table lock. Will this present the bottleneck in the MEMORY implementation? Thanks for your thoughts!
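
    One knob worth knowing about for the UPDATE side (a sketch, not a blanket recommendation): innodb_flush_log_at_trx_commit controls how often InnoDB flushes its redo log to disk, and relaxing it trades a little durability for much cheaper writes:

      -- Check the current setting, then relax it:
      SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
      -- 2 = write the log at commit, but flush it to disk only about once per second
      SET GLOBAL innodb_flush_log_at_trx_commit = 2;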

    Read the article

  • Why does PHP show an error for my SQL query

    - by ZincX
    UPDATE: My mistake - I made a typo. Never mind this question. I'm using PHP to update a MySQL database. The resultant query I'm using, when I print it out on my webpage before executing, is as follows: INSERT INTO perch2_content_items (itemOrder, regionID, pageID, itemRev, itemID, itemJSON, itemSearch ) SELECT MAX(itemOrder)+1, 105, 81, 11, 118, 'json', 'search' FROM perch2_content_items WHERE regionID=105 When I copy and paste this query directly into the phpMyAdmin SQL interface, it works fine. The table gets updated. However, when I try to execute it using my PHP code as follows, it throws an error. $insertToPerch = "INSERT INTO perch2_content_items (itemOrder, regionID, pageID, itemRev, itemID, itemJSON, itemSearch ) SELECT MAX(itemOrder)+1, $regionID, $pageID, $regionRev, $newItemID, 'json', 'search' FROM perch2_content_items WHERE regionID=$regionID"; mysql_query(insertToPerch) or die(mysql_error()); The error I'm getting is: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'insertToPerch' at line 1 Can anybody help me figure out why it is failing?

    Read the article

  • Average Rating script

    - by MILESMIBALERR
    I have asked this once before but I didn't get a very clear answer. I need to know how to make a rating script for a site. I have a form that submits a rating out of ten to MySQL. How would you get the average rating to be displayed from the MySQL column using PHP? One person suggested having two tables; one for all the ratings, and one for the average rating of each page. Is there a simpler method than this?
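
    A minimal SQL sketch, assuming a single hypothetical ratings table with page_id and rating columns, of letting AVG() do the work instead of maintaining a second table:

      -- Average and vote count for one page, computed on the fly.
      SELECT AVG(rating) AS average_rating,
             COUNT(*)    AS votes
      FROM   ratings
      WHERE  page_id = 42;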

    Read the article

  • How can I fill a variable of my own created data type within Oracle PL/SQL?

    - by Frankie Simon
    In Oracle I've created a data type: TABLE of VARCHAR2(200) I want to have a variable of this type within a Stored Procedure (defined locally, not as an actual table in the DB) and fill it with data. Some online samples show how I'd use my type if it were already filled and passed as a parameter to the stored procedure: SELECT column_value currVal FROM table(pMyPassedParameter) However, what I want is to fill it within the PL/SQL code itself, with INSERT-like statements. Does anyone know the syntax for this?
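
    A minimal PL/SQL sketch of declaring and filling such a collection locally (the emp/ename query at the end is just a hypothetical example of bulk-filling from SQL):

      DECLARE
        TYPE t_names IS TABLE OF VARCHAR2(200);   -- locally declared nested table type
        v_names t_names := t_names();             -- must be initialised before use
      BEGIN
        -- element by element:
        v_names.EXTEND;
        v_names(v_names.LAST) := 'first value';
        v_names.EXTEND;
        v_names(v_names.LAST) := 'second value';

        -- or filled in one shot from a query:
        SELECT ename BULK COLLECT INTO v_names FROM emp;
      END;

    Note that using the collection with the TABLE(...) operator inside a SQL statement generally still requires the schema-level type rather than a locally declared one.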

    Read the article

  • Indexes and multi column primary keys

    - by David Jenings
    Went searching and didn't find the answer to this specific noob question. My apologies if I missed it. In a MySQL database I have a table with the following primary key PRIMARY KEY id (invoice, item) In my application I will also frequently be selecting on "item" by itself and less frequently on only "invoice". I'm assuming I would benefit from indexes on these columns. MySQL does not complain when I define the following: INDEX (invoice), INDEX (item), PRIMARY KEY id (invoice, item) But I don't see any evidence (using DESCRIBE -- the only way I know how to look) that separate indexes have been established for these two columns. So the question is, are the columns that make up a primary key automatically indexed individually? Also, is there a better way than DESCRIBE to explore the structure of my table?
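
    To the last part: the composite primary key gives you one index on (invoice, item), which also serves queries on invoice alone (leftmost prefix) but not on item alone, so the separate INDEX (item) is useful while INDEX (invoice) is largely redundant. SHOW INDEX gives a better view of this than DESCRIBE; a quick sketch (invoice_items is a hypothetical table name):

      -- Lists every index on the table with its column order.
      SHOW INDEX FROM invoice_items;

      -- Confirms which index the optimizer picks for an item-only lookup.
      EXPLAIN SELECT * FROM invoice_items WHERE item = 123;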

    Read the article

  • Speed-up of readonly MyISAM table

    - by Ozzy
    We have a large MyISAM table that is used to archive old data. This archiving is performed every month, and except on these occasions data is never written to the table. Is there any way to "tell" MySQL that this table is read-only, so that MySQL might optimize the performance of reads from this table? I've looked at the MEMORY storage engine, but the problem is that this table is so large that it would take a large portion of the server's memory, which I don't want. Hope my question is clear enough; I'm a novice when it comes to DB administration, so any input or suggestions are welcome.
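
    One hedged sketch of a middle ground, without moving to MEMORY: make sure the table's indexes fit in (and are pre-loaded into) the MyISAM key buffer, so reads only hit disk for data rows the OS cache doesn't already hold. The name and size below are placeholders:

      SET GLOBAL key_buffer_size = 268435456;   -- e.g. 256 MB; size it to the index footprint
      LOAD INDEX INTO CACHE archive_table;      -- pre-warm this table's indexes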

    Read the article

  • mysql_connect()

    - by Jacksta
    I am trying to connect to MySQL and am getting an error. I put my server's IP address in and used port 3306. Which port should be used? <?php $connection = mysql_connect("server.ip:port", "user", "pass") or die(mysql_error()); if ($connection) {$msg = "success";} ?> <html> <head> </head> <body> <? echo "$msg"; ?> </body> </html> Here is the error it's producing: Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'admin'@'server1.myserver.com' (using password: YES) in /home/admin/domains/mydomain.com.au/public_html/db_connect.php on line 3 Access denied for user 'admin'@'server1.myserver.com' (using password: YES)
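
    3306 is MySQL's default port, and the "Access denied" message means the connection actually reached the server: the problem is that user 'admin' is not allowed in from host 'server1.myserver.com' (or the password is wrong). A sketch of granting that access, run as a privileged MySQL user ('mydb' and the password are placeholders):

      CREATE USER 'admin'@'server1.myserver.com' IDENTIFIED BY 'pass';
      GRANT ALL PRIVILEGES ON mydb.* TO 'admin'@'server1.myserver.com';
      FLUSH PRIVILEGES;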

    Read the article

  • Oracle: Is there a way to get the column data types for a view?

    - by rally25rs
    For a table in Oracle, I can query "all_tab_columns" and get table column information, such as the data type, precision, and whether or not the column is nullable. In SQL Developer or TOAD, you can click on a view in the GUI and it will spit out a list of the columns that the view returns and the same set of data (data type, precision, nullable, etc). So my question is, is there a way to query this column definition for a view, the way you can for a table? How do the GUI tools do it?
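
    ALL_TAB_COLUMNS covers views as well as tables, which is essentially what the GUI tools read. A sketch (MY_VIEW is a placeholder name):

      SELECT column_name, data_type, data_length, data_precision, data_scale, nullable
      FROM   all_tab_columns
      WHERE  table_name = 'MY_VIEW'
      ORDER  BY column_id;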

    Read the article

  • LinkDemand error on web server when using TraceSource

    - by robertpnl
    Hi, on a web server (shared hosting provider) I published a website with an ADO.NET Framework model used with MySQL Connector 6.3.1. When I request a page, a SecurityException is thrown with this error message: "LinkDemand. The type of the first permission that failed was: System.Security.Permissions.SecurityPermission. The Zone of the assembly that failed was: MyComputer". The exception is raised when the code collects the listeners of a TraceSource: public class MySqlTrace { private static TraceSource source = new TraceSource("mysql"); static MySqlTrace() { foreach (TraceListener listener in source.Listeners) // <-- Exception thrown here { // ... } } } The web.config doesn't have any trace data or system.diagnostics section. My question is: why do I get a LinkDemand security exception while collecting the source listeners? What could be wrong here?

    Read the article

  • Oracle: What does `(+)` do in a WHERE clause?

    - by Jonathan Lonowski
    Found the following in an Oracle-based application that we're migrating (generalized): SELECT Table1.Category1, Table1.Category2, count(*) as Total, count(Table2.Stat) AS Stat FROM Table1, Table2 WHERE (Table1.PrimaryKey = Table2.ForeignKey(+)) GROUP BY Table1.Category1, Table1.Category2 What does (+) do in a WHERE clause? I've never seen it used like that before.
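
    The (+) marks the optional side of Oracle's old-style outer-join syntax; putting it on Table2's column makes this a left outer join from Table1, so Table1 rows with no Table2 match still appear (with Stat counted as 0 for that group). The ANSI equivalent would be roughly:

      SELECT Table1.Category1, Table1.Category2,
             COUNT(*)           AS Total,
             COUNT(Table2.Stat) AS Stat
      FROM   Table1
             LEFT OUTER JOIN Table2
               ON Table1.PrimaryKey = Table2.ForeignKey
      GROUP  BY Table1.Category1, Table1.Category2;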

    Read the article

  • Fulltext and composite indexes and how they affect the query

    - by Brett
    Just say I had a query as below.. SELECT name,category,address,city,state FROM table WHERE MATCH(name,subcategory,category,tag1) AGAINST('education') AND city='Oakland' AND state='CA' LIMIT 0, 10; ..and I had a fulltext index as name,subcategory,category,tag1 and a composite index as city,state; is this good enough for this query? Just wondering if something extra is needed when mixing additional AND's when making use of the fulltext index with the MATCH/AGAINST. Edit: What I am trying to understand is, what happens with the additional columns that are within the query but are not indexed in the chosen index (the fulltext index), the above example being city and state. How does MySQL now find the matching rows for these since it can't use two indexes (or can it?) - so, basically, I'm trying to understand how MySQL goes about finding the data optimally for the columns NOT in the chosen fulltext index and if there is anything I can or should do to optimize the query.
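
    MySQL will generally use only one index per table here; with MATCH ... AGAINST in the WHERE clause that is the fulltext index, and the city/state conditions are then applied as plain filters on the rows it returns, so the composite (city, state) index mostly helps queries that don't use MATCH. EXPLAIN is the quickest way to confirm what the optimizer actually chose (places is a placeholder table name):

      EXPLAIN SELECT name, category, address, city, state
      FROM    places
      WHERE   MATCH(name, subcategory, category, tag1) AGAINST ('education')
        AND   city = 'Oakland' AND state = 'CA';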

    Read the article

  • Does Python Django support custom SQL and denormalized databases with no Foreign Key relationships?

    - by Jay
    I've just started learning Python Django and have a lot of experience building high traffic websites using PHP and MySQL. What worries me so far is Python's overly optimistic approach that you will never need to write custom SQL and that it automatically creates all these Foreign Key relationships in your database. The one thing I've learned in the last few years of building Chess.com is that it's impossible to NOT write custom SQL when you're dealing with something like MySQL that frequently needs to be told what indexes it should use (or avoid), and that Foreign Keys are a death sentence. Percona's strongest recommendation was for us to remove all FKs for optimal performance. Is there a way in Django to do this in the models file, i.e. create relationships without creating actual DB FKs? Or is there a way to start at the database level, design/create my database, and then have Django reverse engineer the models file?

    Read the article

  • How to insert a custom date and time in Oracle using Java?

    - by shree
    Hi, I have a column (type DATE). I want to insert a custom date and time without using PreparedStatement. I have used String date = sf.format(Calendar.getInstance().getTime()); String query = "Insert into entryTbl(name, joinedDate, ..etc) values ("abc", to_date(date, 'yyyy/mm/dd HH:mm:ss'))"; statement.executeUpdate(query); but I am getting a "literal does not match" error. I even tried with "SYSDATE", but it inserts only the date, not the time. So how do I insert the date and time into Oracle using Java? Please, anyone, help.
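
    Two likely causes here: the Oracle format model mixes up mm (month) with minutes (Oracle uses mi for minutes and HH24 for a 24-hour value), and the formatted string itself, not the Java variable name, has to end up inside the statement. Also, SYSDATE does carry the time of day; it only looks date-only because of the session's default display format. A sketch of the statement the code needs to end up producing, with placeholder values:

      -- Placeholder values; note the quoted date string and the HH24/mi format model.
      INSERT INTO entryTbl (name, joinedDate)
      VALUES ('abc', TO_DATE('2010/05/14 13:45:00', 'yyyy/mm/dd HH24:mi:ss'));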

    Read the article

  • mysqli prepare statement error?

    - by user310850
    Hi all, $mysqli = new mysqli("localhost", "root", "", "test"); $mysqli->query('PREPARE mid FROM "SELECT name FROM test_user WHERE id = ?"'); //$mysqli->query('PREPARE mid FROM "SELECT name FROM test_user" '); $res = $mysqli->query( 'EXECUTE mid 1;') or die(mysqli_error($mysqli)); while($resu = $res->fetch_object()) { echo '<br>' .$resu->name; } Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '1' at line 1 My PHP version is 5.3.0 and the MySQL driver is mysqlnd 5.0.5-dev - 081106 - $Revision: 1.3.2.27 $.
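
    Server-side EXECUTE doesn't take literal arguments inline; parameters have to be passed through user variables with USING. A sketch of the SQL the page needs to send (as separate statements, since mysqli::query runs one at a time):

      PREPARE mid FROM 'SELECT name FROM test_user WHERE id = ?';
      SET @id = 1;               -- bind values go through user variables
      EXECUTE mid USING @id;
      DEALLOCATE PREPARE mid;

    From PHP, $mysqli->prepare() with bind_param() is usually the more natural route than SQL-level PREPARE.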

    Read the article
