Search Results

Search found 1369 results on 55 pages for 'clause'.

Page 32 of 55

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World
    In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading
    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or to load a table in Oracle using the same SQL access:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
        SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively:

        /*+ parallel(my_oracle_table,8) */   /*+ parallel(my_hdfs_external_table,8) */

    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses direct path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with the "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.
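    Tying the target-table attributes together, here is a minimal sketch of a load target declared with the PARALLEL and NOLOGGING attributes described above. The column list is hypothetical; the table name matches the INSERT shown earlier.

        -- Hypothetical columns; the point is the PARALLEL and NOLOGGING attributes.
        -- Loaded with the hinted, direct-path INSERT shown above.
        CREATE TABLE my_oracle_table (
          id       NUMBER,
          payload  VARCHAR2(4000)
        )
        PARALLEL 8
        NOLOGGING;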
    Determine Your DOP
    It might feel natural to build your datasets in Hadoop and only afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control.

    Oracle computes the maximum DOP that can be used by an Oracle user. The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system maximum DOP can consume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can't be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be when you are first seeding tables in a new Oracle database, or when normal activity in the production database can safely be taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files
    Let's assume that the DBA told you that your maximum DOP is 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember that location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do; the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
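    Before settling on a location file count, it can be worth confirming what ceiling Rule 1 actually gives you on the target instance. A minimal sketch of that kind of sanity check, assuming you have SELECT access to v$parameter (in practice the DBA will simply tell you the number):

        -- Inputs Oracle uses when sizing parallel execution on this instance
        SELECT name, value
        FROM   v$parameter
        WHERE  name IN ('cpu_count', 'parallel_threads_per_cpu', 'parallel_max_servers');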
    Determining the Number of HDFS Files
    Let's start with the next rule and then explain it:

    Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be roughly the same size.

    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few.

    For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10 GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file).

    As a rule, when the OSCH External Table tool has to deal with more and smaller files it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time.

    What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading.
    Next Steps
    So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story. They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling load operations on a Hadoop cluster and an Oracle database. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of the various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

    Read the article

  • Request all titles by actor using LINQ to Netflix OData

    - by Mark Heath
    I'm experimenting with LINQPad, using LINQ to query the Netflix OData service. I'm trying to search for all films featuring a particular actor. For example:

        from t in Titles from p in t.Cast where p.Name == "Morgan Freeman" select t.Name

    This results in: NotSupportedException: Can only project the last entity type in the query being translated. I also tried:

        from p in People from t in p.TitlesActedIn where p.Name == "Morgan Freeman" select t.Name

    which results in the following error: NotSupportedException: The method 'Select' is not supported. I've tried a few other approaches, such as using IDs in the where clause and selecting different things, but have got nowhere.

    Read the article

  • ASP.NET Membership - Retrieve Password and PasswordSalt from Membership Table - Hash UserID

    - by Steve
    Hello, I am so close to getting this project done. I need to retrieve the password and PasswordSalt from my Membership table to compare them to my 'OldPasswords' table. The problem is that the Membership provider does not let me use the GetPassword method because the password is hashed, and I cannot retrieve it with a normal SqlConnection because the UserID is hashed also. Does anyone know how to hash the UserID so I can put it in my where clause? Or maybe there is a different way to get to that data? Any help is appreciated. Thank you, Steve
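    A sketch of the kind of direct lookup being described, assuming the default SqlMembershipProvider schema (aspnet_Users / aspnet_Membership), where rows are usually located by user name rather than by the key:

        -- @UserName is supplied by the application
        SELECT m.Password, m.PasswordSalt, m.PasswordFormat
        FROM dbo.aspnet_Membership AS m
        JOIN dbo.aspnet_Users AS u ON u.UserId = m.UserId
        WHERE u.UserName = @UserName;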

    Read the article

  • Language-agnostic term for typed things that need memory

    - by FredOverflow
    Is there an accepted general term that subsumes the concepts of variables, class instances and arrays? Basically "any typed thing that needs memory". In C++, such a thing is called an object, but I'm looking for a more language-agnostic term. § 1.8 The C++ object model 1 The constructs in a C++ program create, destroy, refer to, access, and manipulate objects. An object is a region of storage. [...] An object can have a name (Clause 3). An object has a storage duration (3.7) which influences its lifetime (3.8). An object has a type (3.9).

    Read the article

  • Automapper use in a MVVM application

    - by Echiban
    I am building an MVVM application. The model / entity (I am using NHibernate) is already done, and I am thinking of using AutoMapper to map between the ViewModel and Model. However this clause scares the jebus out of me (from http://www.lostechies.com/blogs/jimmy_bogard/archive/2009/01/22/automapper-the-object-object-mapper.aspx): "AutoMapper enforces that for each type map (source/destination pair), all of the properties on the destination type are matched up with something on the source type." To me, the logical choice is to map from model to viewmodel (and I'll let the viewmodel manually assign to the model), but the quote basically kills the idea since the viewmodel will definitely have properties that don't exist on the model. How have you been using AutoMapper in an MVVM app? Please help!

    Read the article

  • Linq to Entities : using ToLower() on NText fields

    - by Julien N
    I'm using SQL Server 2005 with a case-sensitive database. In a search function, I need to create a LINQ to Entities (L2E) query with a "where" clause that compares several strings with the data in the database, using these rules: The comparison is a "Contains" comparison, not a strict equality: easy, as the string's Contains() method is allowed in L2E. The comparison must be case-insensitive: I use ToLower() on both elements to perform an insensitive comparison. All of this performs really well, but I ran into the following exception: "Argument data type ntext is invalid for argument 1 of lower function" on one of my fields. It seems that the field is an NText field and I can't perform a ToLower() on it. What could I do to be able to perform a case-insensitive Contains() on that NText field?
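    For reference, the error comes from the generated T-SQL: LOWER() does not accept ntext. A minimal sketch of the behaviour at the SQL level and of one workaround there, casting to nvarchar(max), which LOWER does accept (MyTable, MyNtextColumn and @keyword are hypothetical names; whether L2E can be coaxed into producing this shape is a separate question):

        -- Fails on SQL Server 2005: LOWER() cannot take an ntext argument
        -- SELECT * FROM MyTable WHERE LOWER(MyNtextColumn) LIKE '%' + LOWER(@keyword) + '%';

        -- Works once the ntext value is cast to nvarchar(max)
        SELECT *
        FROM MyTable
        WHERE LOWER(CAST(MyNtextColumn AS nvarchar(max))) LIKE '%' + LOWER(@keyword) + '%';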

    Read the article

  • Read from multiple tables in vb.net data reader

    - by user225269
    I'm trying to read from two tables in MySQL:

        Dim sqlcom As MySqlCommand = New MySqlCommand("Select * from mother, father where IDNO= '" & TextBox14.Text & "' ", sqlcon)

    But I get this error: Column 'IDNO' in where clause is ambiguous. Here is the whole code:

        Dim NoAcc As String
        Dim NoAccmod2 As String
        Dim NoPas As String
        Dim sqlcon As New MySqlConnection("Server=localhost; Database=school;Uid=root;Pwd=nitoryolai123$%^;")
        Dim sqlcom As MySqlCommand = New MySqlCommand("Select * from mother, father where IDNO= '" & TextBox14.Text & "' ", sqlcon)
        sqlcon.Open()
        Dim rdr As MySqlDataReader
        rdr = sqlcom.ExecuteReader
        If rdr.HasRows Then
            rdr.Read()
            NoAcc = rdr("IDNO")
            If (TextBox14.Text = NoAcc) Then TextBox7.Text = rdr("MOTHER")
            If (TextBox14.Text = NoAcc) Then TextBox8.Text = rdr("MOTHER_OCCUPATION")
            If (TextBox14.Text = NoAcc) Then TextBox10.Text = rdr("FATHER")
            If (TextBox14.Text = NoAcc) Then TextBox11.Text = rdr("FATHER_OCCUPATION")
        End If

    Any suggestions that could help solve this problem? Or even other techniques for achieving the goal of reading data from two tables using a data reader? This is a WinForm, not a web form.
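    The error means IDNO exists in both mother and father, so MySQL needs the column qualified. A sketch of a qualified, parameterised version of the query (assuming both tables carry IDNO; the parameter also avoids splicing TextBox14.Text straight into the SQL):

        -- @idno is bound from TextBox14.Text in the application
        SELECT m.IDNO, m.MOTHER, m.MOTHER_OCCUPATION, f.FATHER, f.FATHER_OCCUPATION
        FROM mother m
        JOIN father f ON f.IDNO = m.IDNO
        WHERE m.IDNO = @idno;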

    Read the article

  • How to properly clean up Excel interop objects in C#

    - by HAdes
    I'm using the Excel interop in C# (ApplicationClass) and have placed the following code in my finally clause:

        while (System.Runtime.InteropServices.Marshal.ReleaseComObject(excelSheet) != 0) { }
        excelSheet = null;
        GC.Collect();
        GC.WaitForPendingFinalizers();

    Although this kind of works, the Excel.exe process is still running in the background even after I close Excel. It is only released once my application is manually closed. Does anyone know what I am doing wrong, or have an alternative to ensure interop objects are properly disposed of? Thanks.

    Read the article

  • T-sql Common expression query as subquery

    - by ase69s
    I have the following query:

        WITH Orders(Id) AS ( SELECT DISTINCT anfrageid FROM MPHotlineAnfrageAnhang ) SELECT Id, ( SELECT CONVERT(VARCHAR(255),anfragetext) + ' | ' FROM MPHotlineAnfrageAnhang WHERE anfrageid = Id ORDER BY anfrageid, erstelltam FOR XML PATH('') ) AS Descriptions FROM Orders

    It concatenates varchar values from different rows, grouped by an ID. But now I want to include it as a subquery, and it gives some errors I can't solve. Simplified example of use:

        select descriptions from ( WITH Orders(Id) AS ( SELECT DISTINCT anfrageid FROM MPHotlineAnfrageAnhang ) SELECT Id, ( SELECT CONVERT(VARCHAR(255),anfragetext) + ' | ' FROM MPHotlineAnfrageAnhang WHERE anfrageid = Id ORDER BY anfrageid, erstelltam FOR XML PATH('') ) AS Descriptions FROM Orders ) as tx where id=100012

    Errors (approximate translation from Spanish):
    - Incorrect syntax near 'WITH'.
    - Incorrect syntax near 'WITH'. If the statement is a common table expression or an xmlnamespaces clause, the previous statement must be terminated with a semicolon.
    - Incorrect syntax near ')'.
    What am I doing wrong?
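    A sketch of one common rearrangement (not necessarily the only fix): a WITH clause has to start its statement, so rather than burying the CTE inside a derived table, keep it at the top level and apply the filter there:

        WITH Orders (Id) AS (
            SELECT DISTINCT anfrageid FROM MPHotlineAnfrageAnhang
        )
        SELECT Id,
               (SELECT CONVERT(VARCHAR(255), anfragetext) + ' | '
                FROM MPHotlineAnfrageAnhang
                WHERE anfrageid = Id
                ORDER BY anfrageid, erstelltam
                FOR XML PATH('')) AS Descriptions
        FROM Orders
        WHERE Id = 100012;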

    Read the article

  • Can't get SubSonic insert to work

    - by Darkwater23
    I'm trying to insert a record into a table without using the SubSonic object in a VB.Net Windows app. (It will take too long to explain why.)

        Dim q As New SubSonic.Query("tablename")
        q.QueryType = SubSonic.QueryType.Insert
        q.AddUpdateSetting("Description", txtDescription.Text)
        q.Execute()

    This just updates all the rows in the table. I read in one post that instead of AddUpdateSetting I should use AddWhere, but that didn't make any sense to me. I don't need a where clause at all. Searching for all:QueryType.Insert at subsonicproject.com didn't return anything (which I thought was weird). Can anyone tell me how to fix this query? Thanks!

    Read the article

  • SQL Where question

    - by needshelp
    Hi all, I have a question about case statements and nulls in a where clause. I want to do the following:

        Declare @someVar int = null
        select column1 from TestTable t where t = case when @someVar is not null then @someVar else t end

    Here is the problem: let's say @someVar is null. Let's also say that column1 from TestTable t has NULL column values. Then my condition t = t in the case statement will never be true for those rows (NULL is never equal to NULL). I basically just want to be able to conditionally filter the column based on the value of @someVar if it's provided. Any help?
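    A sketch of the usual pattern for an optional filter, which sidesteps the NULL = NULL problem entirely by short-circuiting on the parameter instead of comparing the column to itself (TestTable and column1 are the names from the question):

        DECLARE @someVar int = NULL;

        -- When @someVar is NULL the first branch is true for every row,
        -- so no filtering happens; otherwise only matching rows pass.
        SELECT t.column1
        FROM TestTable t
        WHERE (@someVar IS NULL OR t.column1 = @someVar);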

    Read the article

  • Proper way of deleting records with Codeigniter

    - by luckytaxi
    I came across another Stack Overflow post regarding GET vs POST and it made me think. With CI, my URL for deleting a record is http://domain.com/item/delete/100, which deletes record id 100 from my DB. The record_id is pulled via $this->uri->segment. In my model I do have a where clause that checks that the user is indeed the owner of that record. A user_id is stored in a session inside the DB. Is that good enough? My understanding is that POST should be used for one-time modifications of data and GET is for retrieving records (e.g. viewing an item or permalink).

    Read the article

  • DELETING doubled users (MySQL)

    - by vizzdoom
    Hi, I have two tables containing user information from two sites: p_users and p_users2. There are 3726 users in the first and 13717 in the second. Some users in p_users2 are also in p_users. I want to merge these two tables into one big table, but rows with the same usernames can't be duplicated. How can I do this? I tried something like this:

        DELETE FROM p_users2 WHERE user_id IN ( select p.user_id from p_users p join p_users2 p2 on p.username=p2.username )

    After that I should be left with a table of unique usernames, which I want to export and import into the first one. But when I execute my query I get an error: SQL Error (1093): You can't specify target table 'p_users2' for update in FROM clause. (MySQL)
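    Error 1093 is MySQL refusing to delete from a table that is also selected from directly in the same statement. A sketch of the usual workaround, wrapping the lookup in a derived table so MySQL materialises it first (it takes the id from p_users2 itself, so the match is purely by username):

        DELETE FROM p_users2
        WHERE user_id IN (
            SELECT user_id FROM (
                SELECT p2.user_id
                FROM p_users p
                JOIN p_users2 p2 ON p.username = p2.username
            ) AS dupes
        );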

    Read the article

  • Filtering DBNull With LINQ

    - by Steven
    Why does the following query raise the error below for a row with a NULL value for barrel when I explicitly filter out those rows in the Where clause?

        Dim query = From row As dbDataSet.conformalRow In dbDataSet.Tables("conformal") _
                    Where Not IsDBNull(row.Cal) AndAlso tiCal_drop.Text = row.Cal _
                    AndAlso Not IsDBNull(row.Tran) AndAlso tiTrans_drop.Text = row.Tran _
                    AndAlso Not IsDBNull(row.barrel) _
                    Select row.barrel
        If query.Count() > 0 Then tiBarrel_txt.Text = query(0)

    Run-time exception thrown: System.Data.StrongTypingException - The value for column 'barrel' in table 'conformal' is DBNull. How should my query / condition be rewritten to work as I intended?

    Read the article

  • Linq query with aggregate function OrderBy

    - by Billy Logan
    Hello everyone, I have the following LINQ to Entities query, but am unsure of where or how to place the orderby clause:

        var results = from d in db.TBLDESIGNER
                      join s in db.TBLDESIGN on d.ID equals s.TBLDESIGNER.ID
                      where s.COMPLETED && d.ACTIVE
                      let value = new { s, d }
                      let key = new { d.ID, d.FIRST_NAME, d.LAST_NAME }
                      group value by key into g
                      orderby g.Key.FIRST_NAME ascending, g.Key.LAST_NAME ascending
                      select new
                      {
                          ID = g.Key.ID,
                          FirstName = g.Key.FIRST_NAME,
                          LastName = g.Key.LAST_NAME,
                          Count = g.Count()
                      };

    This should be sorted by First_Name ascending and then Last_Name ascending. I have tried adding ordering, but it has had no effect on the result set. Could someone please provide an example of where the orderby would go, assuming the query above. Thanks, Billy

    Read the article

  • Efficient Search function with Linq to SQL

    - by Bayonian
    Hi, I'm using VB.NET and LINQ to SQL. I have a table with thousands of rows and growing. Right now I'm using .Contains() in the Where clause to perform the query. Below is my search function:

        Public Shared Function DemoSearchFunction(ByVal keyword As String) As DataTable
            Dim db As New BibleDataClassesDataContext()
            Dim query = From b In db.khmer_books _
                        From ch In db.khmer_chapters _
                        From v In db.testing_khmers _
                        Where v.t_v.Contains(keyword) And ch.kh_book_id = b.kh_b_id And v.t_chid = ch.kh_ch_id _
                        Select b.kh_b_id, b.kh_b_title, ch.kh_ch_id, ch.kh_ch_number, v.t_id, v.t_vn, v.t_v
            Dim dtDataTableOne = New DataTable("dtOne")
            dtDataTableOne.Columns.Add("bid", GetType(Integer))
            dtDataTableOne.Columns.Add("btitle", GetType(String))
            dtDataTableOne.Columns.Add("chid", GetType(Integer))
            dtDataTableOne.Columns.Add("chn", GetType(Integer))
            dtDataTableOne.Columns.Add("vid", GetType(Integer))
            dtDataTableOne.Columns.Add("vn", GetType(Integer))
            dtDataTableOne.Columns.Add("verse", GetType(String))
            For Each r In query
                dtDataTableOne.Rows.Add(New Object() {r.kh_b_id, r.kh_b_title, r.kh_ch_id, r.kh_ch_number, r.t_id, r.t_vn, r.t_v})
            Next
            Return dtDataTableOne
        End Function

    I would like to know other methods for doing an efficient search using LINQ to SQL. Thanks.

    Read the article

  • return from a linq where statement

    - by Vaccano
    I have the following LINQ call:

        MyLinqToSQLTable.Where(x => x.objectID == paramObjectID).ToList();

    Most of the time you can change a LINQ call to span several lines by adding curly brackets around the method body, like this:

        MyLinqToSQLTable.Where(x => { x.objectID == paramObjectID; }).ToList();

    The problem is that the implied return that was there when I just did a Boolean comparison is now gone. Return (x.objectID == paramObjectID); is not accepted either. How do I do this? Can I do this? NOTE: I know that I can add another where clause if needed. But I would still like to know the answer to this.

    Read the article

  • Objective C selector memory management (does this leak memory)?

    - by James Jones
        - (IBAction) someButtonCall {
            if (!someCondition) {
                someButtonCallBack = @selector(someButtonCall);
                [self presentModalViewController:someController animated:YES];
            }
            else ...
        }

        // Called from someController
        - (void) someControllerFinished:(BOOL) ok {
            [self dismissModalViewControllerAnimated:YES];
            if (ok)
                [self performSelector:someButtonCallBack];
            else ...
        }

    I'm wondering, if the user keeps getting into the !someCondition clause, whether the selector is leaked by assigning a new selector each time (the code above is hypothetical and not what I'm doing). Any help is appreciated. Thanks, James Jones

    Read the article

  • Optimizing Oracle query

    - by Omnipresent
        SELECT MAX(verification_id)
        FROM VERIFICATION_TABLE
        WHERE head = 687422
          AND mbr = 23102
          AND RTRIM(LTRIM(lname)) = '.iq bzw'
          AND TO_CHAR(dob,'MM/DD/YYYY') = '08/10/2004'
          AND system_code = 'M';

    This query is taking 153 seconds to run. There are millions of rows in VERIFICATION_TABLE. I think the query is taking this long because of the functions in the where clause. However, I need to do LTRIM/RTRIM on the columns, and the date has to be matched in MM/DD/YYYY format. How can I optimize this query?
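    A sketch of one commonly suggested direction, assuming dob is a DATE column and that adding an index is an option: compare dob as a date range instead of converting it to a string, and back the trimmed lname predicate with a function-based index that uses exactly the same expression:

        -- Range predicate on dob avoids wrapping the column in TO_CHAR
        SELECT MAX(verification_id)
        FROM verification_table
        WHERE head = 687422
          AND mbr = 23102
          AND RTRIM(LTRIM(lname)) = '.iq bzw'
          AND dob >= DATE '2004-08-10'
          AND dob <  DATE '2004-08-11'
          AND system_code = 'M';

        -- Function-based index matching the trimmed expression used in the query
        CREATE INDEX verification_lookup_ix
          ON verification_table (head, mbr, RTRIM(LTRIM(lname)), dob, system_code);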

    Read the article

  • SQL: Gather right hand values from a join

    - by Max Williams
    Let's say a question has many tags, via a join table called taggings. I do a join thus:

        SELECT DISTINCT `questions`.id
        FROM `questions`
        LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id
        LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id

    I want to order the results according to a particular tag name, e.g. 'piano', so that piano is at the top, then by all the other tags in alphabetical order. Currently I'm using this order clause:

        ORDER BY (tags.name = 'piano') desc, tags.name

    This is going completely wrong: the first results I get back aren't even tagged with 'piano' at all. I think my problem is that I need to group the tag names somehow and do my ordering test against that. I think that doing it against the straight tags.name isn't working due to the structure of the resultant join table (it does work if I just do a simple select on the tags table), but I can't get my head around how to fix it. Grateful for any advice, Max
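    A sketch along the lines of the poster's own hunch: collapse the joined rows to one row per question and order by aggregates, so a question tagged 'piano' sorts first regardless of its other tags (MIN(t.name) is one reading of the secondary alphabetical ordering):

        SELECT q.id
        FROM questions q
        LEFT OUTER JOIN taggings tg ON tg.taggable_id = q.id
        LEFT OUTER JOIN tags t ON t.id = tg.tag_id
        GROUP BY q.id
        ORDER BY MAX(CASE WHEN t.name = 'piano' THEN 1 ELSE 0 END) DESC,
                 MIN(t.name);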

    Read the article

  • Postgresql - one database for everyone, or one-database per customer

    - by user337876
    I'm working on a web-based business application where each customer will need to have their own data (think basecamphq.com type model). For scalability and ease of upgrades, I'd prefer to have a single database where each customer gets a filtered version of the data. The problem is how to guarantee that they stay sandboxed to their own data. Trying to enforce it in code seems like a disaster waiting to happen. I know Oracle has a way to append a where clause to every query based on a login id, but does PostgreSQL have anything similar? If not, is there a different design pattern I could use (like creating a view of each table for each customer that filters)? Worst-case scenario, what is the performance/memory overhead of having 1000 100MB databases vs having a single 1TB database? I will need to provide backup/restore functionality on a per-customer basis, which is dead simple on a single database but quite a bit trickier if they are sharing the database with other customers.
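    A sketch of the filtered-view pattern the question mentions, using a per-connection session setting as the filter key. All object names here are hypothetical, and it assumes a PostgreSQL version that accepts ad-hoc two-part setting names (older releases need custom_variable_classes configured first):

        -- Set once per connection, e.g. from the web app right after authentication
        SELECT set_config('app.customer_id', '42', false);

        -- Each customer-facing view filters its base table by the session setting
        CREATE VIEW customer_invoices AS
        SELECT *
        FROM invoices
        WHERE customer_id = current_setting('app.customer_id')::int;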

    Read the article

  • Does DataAdapter.Fill() close its connection when an Exception is thrown?

    - by motto
    Hi, I am using ADO.NET (.NET 1.1) in a legacy app. I know that DataAdapter.Fill() opens and closes connections if the connection hasn't been opened manually before it's given to the DataAdapter. My question: does it also close the connection if the .Fill() causes an exception (because SQL Server cannot be reached, or whatever)? Does it leak a connection, or does it have a built-in Finally clause to make sure the connection is closed? Code example:

        Dim cmd As New SqlCommand
        Dim da As New SqlDataAdapter
        Dim ds As New DataSet
        cmd.Connection = New SqlConnection(strConnection)
        cmd.CommandText = strSQL
        da.SelectCommand = cmd
        da.Fill(ds)

    Read the article

  • How to export more than 1MB in XML format using sqlcmd and without an input file?

    - by jon
    Hello, in SQL Server 2008 I want to export the result of a stored procedure to a file using the sqlcmd utility. The stored procedure ends with a select statement that has a "for xml path..." clause. I read in BOL that if I don't want my output truncated on reaching 1MB, I have to use the :XML ON command, and that it should be placed on its own line, before calling the stored procedure. Does any of you experts know if it is possible to do that without specifying an input file for sqlcmd? (I'm calling sqlcmd like this: exec master..xp_cmdshell 'sqlcmd -Q"exec storedProcedureName @param1=value1, @param2=value2" -o c:\exportResults.xml -h-1 -E', but "storedProcedureName" and its parameters can change, which would mean one input file per set of parameters passed to sqlcmd.) Also, it seems that I can't use bcp instead of sqlcmd because my stored procedure creates a temporary table and performs DML statements on it? Thanks a lot
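    For reference, a sketch of the input-script form that :XML ON expects, which is exactly what the question is trying to avoid generating per call (the procedure name and parameters are the placeholders from the question):

        -- export.sql, passed to sqlcmd with: sqlcmd -E -i export.sql -o c:\exportResults.xml
        :XML ON
        EXEC storedProcedureName @param1 = value1, @param2 = value2;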

    Read the article

  • Paging enormous tables on DB2

    - by grenade
    We have a view that, without constraints, will return 90 million rows, and a reporting application that needs to display paged datasets of that view. We're using NHibernate and recently noticed that its paging mechanism looks like this:

        select * from
            (select rownumber() over() as rownum,
                    this_.COL1 as COL1_20_0_,
                    this_.COL2 as COL2_20_0_
             FROM SomeSchema.SomeView this_
             WHERE this_.COL1 = 'SomeValue') as tempresult
        where rownum between 10 and 20

    The query brings the database server to its knees. I think what's happening is that the nested query is assigning a row number to every row satisfied by the where clause before selecting the subset (rows 10 - 20). Since the nested query will return a lot of rows, the mechanism is not very efficient. I've seen lots of tips and tricks for doing this efficiently on other SQL platforms, but I'm struggling to find a DB2 solution. In fact, an article on IBM's own site recommends the approach that NHibernate has taken. Is there a better way?
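    One alternative sometimes suggested for deep paging is keyset ("seek") paging: remember the last key of the previous page and seek past it, so the database never has to number every qualifying row. A sketch, assuming the view exposes a deterministic, unique sort key (the ID column here is hypothetical):

        SELECT this_.COL1 AS COL1_20_0_,
               this_.COL2 AS COL2_20_0_
        FROM SomeSchema.SomeView this_
        WHERE this_.COL1 = 'SomeValue'
          AND this_.ID > ?            -- last ID seen on the previous page
        ORDER BY this_.ID
        FETCH FIRST 10 ROWS ONLY;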

    Read the article

  • MS SQL Server: how to optimize "like" queries?

    - by duke84
    I have a query that searches for clients using "like" with wildcards. For example:

        SELECT TOP (10) [t0].[CLIENTNUMBER], [t0].[FIRSTNAME], [t0].[LASTNAME], [t0].[MI], [t0].[MDOCNUMBER]
        FROM [dbo].[CLIENT] AS [t0]
        WHERE (LTRIM(RTRIM([t0].[DOCREVNO])) = '0')
          AND ([t0].[FIRSTNAME] LIKE '%John%')
          AND ([t0].[LASTNAME] LIKE '%Smith%')
          AND ([t0].[SSN] LIKE '%123%')
          AND ([t0].[CLIENTNUMBER] LIKE '%123%')
          AND ([t0].[MDOCNUMBER] LIKE '%123%')
          AND ([t0].[CLIENTINDICATOR] = 'ON')

    It can also use fewer parameters in the "where" clause, for example:

        SELECT TOP (10) [t0].[CLIENTNUMBER], [t0].[FIRSTNAME], [t0].[LASTNAME], [t0].[MI], [t0].[MDOCNUMBER]
        FROM [dbo].[CLIENT] AS [t0]
        WHERE (LTRIM(RTRIM([t0].[DOCREVNO])) = '0')
          AND ([t0].[FIRSTNAME] LIKE '%John%')
          AND ([t0].[CLIENTINDICATOR] = 'ON')

    Can anybody tell me the best way to optimize the performance of such a query? Maybe I need to create an index? This table can have up to 1,000K records in production.
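    One approach often suggested for this shape of query is full-text indexing, since a leading wildcard (LIKE '%John%') prevents a regular index seek. A sketch, assuming a unique index named PK_CLIENT already exists on the table; note that full-text prefix terms match word prefixes rather than arbitrary substrings, which may or may not be acceptable here:

        CREATE FULLTEXT CATALOG ClientSearchCatalog;
        CREATE FULLTEXT INDEX ON dbo.CLIENT (FIRSTNAME, LASTNAME)
            KEY INDEX PK_CLIENT ON ClientSearchCatalog;

        -- Queries can then use CONTAINS instead of LIKE for the indexed name columns
        SELECT TOP (10) CLIENTNUMBER, FIRSTNAME, LASTNAME, MI, MDOCNUMBER
        FROM dbo.CLIENT
        WHERE CONTAINS(FIRSTNAME, '"John*"')
          AND CONTAINS(LASTNAME, '"Smith*"')
          AND CLIENTINDICATOR = 'ON';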

    Read the article
