Search Results

Search found 6988 results on 280 pages for 'if else statement'.


  • More Denali Execution Plan Warning Goodies

    - by Dave Ballantyne
    In my last blog, I showed how the execution plan in Denali has been enhanced by two new warnings, "conversion affecting cardinality" and "conversion affecting seek", which are shown when a data type conversion has happened either implicitly or explicitly. That is not all, though; there is more. Also added are two warnings for when performance has been affected by memory issues. Memory spills to tempdb are a costly operation and happen when SQL Server is under memory pressure and needs to free some up. For a long time you have been able to see these as warnings in a Profiler trace, as a sort or hash warning event, but now they are included right in the execution plan. Not only that, but you can also see which operator caused the spill, not just which statement. Pretty damn handy. Another cause of memory-related performance problems is memory grant waits. Here is an informative write-up on them, but simply speaking, SQL Server has to allocate a certain amount of memory for each statement; if it is unable to, you get a "memory grant wait". Once again there are other methods of analyzing these, but the plan now shows these too. Don't worry, that's not real production code. There is one other new warning that is of interest to me, "Unmatched Indexes". Once I find out the conditions under which that fires, I'll blog about it.

    Read the article

  • Using pkexec policy to run out of /opt/

    - by liberavia
    I'm still trying to make it possible to run my app with root privileges. To that end I created two policies to run the application via pkexec (one for /usr/bin and one for /opt/extras... ) and added them to setup.py:

        data_files=[('/usr/share/polkit-1/actions', ['data/com.ubuntu.pkexec.armorforge.policy']),
                    ('/usr/share/polkit-1/actions', ['data/com.ubuntu.extras.pkexec.armorforge.policy']),
                    ('/usr/bin/', ['data/armorforge-pkexec'])]
        )

    Additionally I added a start script which uses pkexec for starting the application. It distinguishes between the two install locations and is used in the Exec statement of the desktop file:

        #!/bin/sh
        if [ -f /opt/extras.ubuntu.com/armorforge/bin/armorforge ]; then
            pkexec "/opt/extras.ubuntu.com/armorforge/bin/armorforge" "$@"
        else
            pkexec `which armorforge` "$@"
        fi

    If I simply do a quickly package, everything works right. But if I package with the extras option (quickly package --extras), the Exec statement will be exchanged. Even if I try to simulate the pkexec call via armorforge-pkexec, it asks for a password and then returns this:

        andre@andre-desktop:~/Entwicklung/Ubuntu/armorforge$ armorforge-pkexec

        (armorforge:10108): GLib-GIO-ERROR **: Settings schema 'org.gnome.desktop.interface' is not installed
        Trace/breakpoint trap (core dumped)

    So ok, I could not trick the opt thing. How can I make sure that my application will run with root privileges out of /opt? I copied the way of using pkexec from synaptic. My application is for communicating with AppArmor, which currently has no dbus interface. Also, I need to write into the /etc/apparmor.d folder. How should I deal with the opt build, which, as far as I understand, is required to submit my application to the Ubuntu Software Center? Thanks for any hints and/or links :-)

    Read the article

  • Creating a Predicate Builder extension method

    - by Rippo
    I have a Kendo UI Grid that currently allows filtering on multiple columns. I am wondering if there is an alternative approach that removes the outer switch statement. Basically, I want to create an extension method so I can filter on an IQueryable<T>, and I want to drop the outer switch so I don't have to switch on column names.

        private static IQueryable<Contact> FilterContactList(FilterDescriptor filter, IQueryable<Contact> contactList)
        {
            switch (filter.Member)
            {
                case "Name":
                    switch (filter.Operator)
                    {
                        case FilterOperator.StartsWith:
                            contactList = contactList.Where(w => w.Firstname.StartsWith(filter.Value.ToString()) ||
                                                                 w.Lastname.StartsWith(filter.Value.ToString()) ||
                                                                 (w.Firstname + " " + w.Lastname).StartsWith(filter.Value.ToString()));
                            break;
                        case FilterOperator.Contains:
                            contactList = contactList.Where(w => w.Firstname.Contains(filter.Value.ToString()) ||
                                                                 w.Lastname.Contains(filter.Value.ToString()) ||
                                                                 (w.Firstname + " " + w.Lastname).Contains(filter.Value.ToString()));
                            break;
                        case FilterOperator.IsEqualTo:
                            contactList = contactList.Where(w => w.Firstname == filter.Value.ToString() ||
                                                                 w.Lastname == filter.Value.ToString() ||
                                                                 (w.Firstname + " " + w.Lastname) == filter.Value.ToString());
                            break;
                    }
                    break;
                case "Company":
                    switch (filter.Operator)
                    {
                        case FilterOperator.StartsWith:
                            contactList = contactList.Where(w => w.Company.StartsWith(filter.Value.ToString()));
                            break;
                        case FilterOperator.Contains:
                            contactList = contactList.Where(w => w.Company.Contains(filter.Value.ToString()));
                            break;
                        case FilterOperator.IsEqualTo:
                            contactList = contactList.Where(w => w.Company == filter.Value.ToString());
                            break;
                    }
                    break;
            }
            return contactList;
        }

    Some additional information: I am using NHibernate Linq. Another wrinkle is that the "Name" column on my grid is actually "Firstname" + " " + "Lastname" on my Contact entity. We can also assume that all filterable columns will be strings.
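
    One possible shape for such an extension method, sketched under some assumptions: map each grid column name to a string-valued expression over Contact, then build the operator call around whichever expression the map returns. MemberFor and FilterBy are hypothetical names for this sketch, not Kendo or NHibernate API, and the "Name" case is simplified to match only the composed full name rather than the three OR'd variants above; Contact and FilterOperator are the question's own types.

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public static class ContactFilterExtensions
        {
            // Map a grid column name to a string-valued expression over Contact.
            // "Name" composes first and last name, as the grid column does.
            static Expression<Func<Contact, string>> MemberFor(string member)
            {
                switch (member)
                {
                    case "Name":    return c => c.Firstname + " " + c.Lastname;
                    case "Company": return c => c.Company;
                    default: throw new NotSupportedException("Unknown column: " + member);
                }
            }

            public static IQueryable<Contact> FilterBy(this IQueryable<Contact> source,
                                                       string member,
                                                       FilterOperator op,
                                                       object value)
            {
                Expression<Func<Contact, string>> selector = MemberFor(member);
                Expression text = Expression.Constant(value.ToString());
                Expression body;
                switch (op)
                {
                    case FilterOperator.StartsWith:
                        body = Expression.Call(selector.Body, "StartsWith", null, text);
                        break;
                    case FilterOperator.Contains:
                        body = Expression.Call(selector.Body, "Contains", null, text);
                        break;
                    case FilterOperator.IsEqualTo:
                        body = Expression.Equal(selector.Body, text);
                        break;
                    default:
                        throw new NotSupportedException("Unknown operator: " + op);
                }
                // Reuse the selector's parameter so the composed lambda stays a
                // single-parameter predicate the LINQ provider can translate.
                var predicate = Expression.Lambda<Func<Contact, bool>>(body, selector.Parameters);
                return source.Where(predicate);
            }
        }

    With something like this, FilterContactList would collapse to contactList.FilterBy(filter.Member, filter.Operator, filter.Value). Whether NHibernate Linq translates the composed Name expression cleanly is worth verifying, though it already translates the same concatenation in the original code.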

    Read the article

  • Date and Time Support in SQL Server 2008

    - by Aamir Hasan
    Using the New Date and Time Data Types

    1. The new date and time data types in SQL Server 2008 offer increased range and precision and are ANSI SQL compatible.
    2. Separate date and time data types minimize storage space requirements for applications that need only date or time information. Moreover, the variable precision of the new time data type increases storage savings in exchange for reduced accuracy.
    3. The new data types are mostly compatible with the original date and time data types and use the same Transact-SQL functions.
    4. The datetimeoffset data type allows you to handle date and time information in global applications that use data that originates from different time zones.

    T-SQL
        SELECT c.name, p.*
        FROM politics p
        JOIN country c ON p.country = c.code
        WHERE YEAR(Independence) < 1753
        ORDER BY Independence
        GO

    8. Highlight the SELECT statement and click Execute to show the use of some of the date functions.

    T-SQL
        SELECT c.name AS [Country Name],
               CONVERT(VARCHAR(12), p.Independence, 107) AS [Independence Date],
               DATEDIFF(YEAR, p.Independence, GETDATE()) AS [Years Independent (approx)],
               p.Government
        FROM politics p
        JOIN country c ON p.country = c.code
        WHERE YEAR(Independence) < 1753
        ORDER BY Independence
        GO

    10. Select the SET DATEFORMAT statement and click Execute to change the DATEFORMAT to day-month-year.

    T-SQL
        SET DATEFORMAT dmy
        GO

    11. Select the DECLARE and SELECT statements and click Execute to show how the datetime and datetime2 data types interpret a date literal.

    T-SQL
        SET DATEFORMAT dmy
        DECLARE @dt datetime = '2008-12-05'
        DECLARE @dt2 datetime2 = '2008-12-05'
        SELECT MONTH(@dt) AS [Month-Datetime], DAY(@dt) AS [Day-Datetime]
        SELECT MONTH(@dt2) AS [Month-Datetime2], DAY(@dt2) AS [Day-Datetime2]
        GO

    12. Highlight the DECLARE and SELECT statements and click Execute to use integer arithmetic on a datetime variable.

    T-SQL
        DECLARE @dt datetime = '2008-12-05'
        SELECT @dt + 1
        GO

    13. Highlight the DECLARE and SELECT statements and click Execute to show how integer arithmetic is not allowed for datetime2 variables.

    T-SQL
        DECLARE @dt2 datetime2 = '2008-12-05'
        SELECT @dt2 + 1
        GO

    14. Highlight the DECLARE and SELECT statements and click Execute to show how to use DATE functions to do simple arithmetic on datetime2 variables.

    T-SQL
        DECLARE @dt2 datetime2(7) = '2008-12-05'
        SELECT DATEADD(d, 1, @dt2)
        GO

    15. Highlight the DECLARE and SELECT statements and click Execute to show how the GETDATE function can be used with both datetime and datetime2 data types.

    T-SQL
        DECLARE @dt datetime = GETDATE();
        DECLARE @dt2 datetime2(7) = GETDATE();
        SELECT @dt AS [GetDate-DateTime], @dt2 AS [GetDate-DateTime2]
        GO

    16. Draw attention to the values returned for both columns and how they are equal.

    17. Highlight the DECLARE and SELECT statements and click Execute to show how the SYSDATETIME function can be used with both datetime and datetime2 data types.

    T-SQL
        DECLARE @dt datetime = SYSDATETIME();
        DECLARE @dt2 datetime2(7) = SYSDATETIME();
        SELECT @dt AS [Sysdatetime-DateTime], @dt2 AS [Sysdatetime-DateTime2]
        GO

    18. Draw attention to the values returned for both columns and how they are different.

    Programming Global Applications with DateTimeOffset

    2. If you have not previously created the SQLTrainingKitDB database while completing another demo in this training kit, highlight the CREATE DATABASE statement and click Execute to do so now.

    T-SQL
        CREATE DATABASE SQLTrainingKitDB
        GO

    3. Select the USE and CREATE TABLE statements and click Execute to create table datetest in the SQLTrainingKitDB database.

    T-SQL
        USE SQLTrainingKitDB
        GO
        CREATE TABLE datetest (
          id integer IDENTITY PRIMARY KEY,
          datetimecol datetimeoffset,
          EnteredTZ varchar(40));

    Reference: http://www.microsoft.com/downloads/details.aspx?FamilyID=E9C68E1B-1E0E-4299-B498-6AB3CA72A6D7&displaylang=en

    Read the article

  • Telerik Releases a new Visual Entity Designer

    Love LINQ to SQL but are concerned that it is a second-class citizen? Need to connect to databases other than SQL Server? Think that the Entity Framework is too complex? Want a domain model designer for data access that is easy, yet powerful? Then the Telerik Visual Entity Designer is for you. Built on top of Telerik OpenAccess ORM, a very mature and robust product, Telerik's Visual Entity Designer is a new way to build your domain model that is very powerful and also really easy to use. How easy? I'll show you here.

    First Look: Using the Telerik Visual Entity Designer

    To get started, you need to install the Telerik OpenAccess ORM Q1 release for Visual Studio 2008 or 2010. You don't need to use any of the Telerik OpenAccess wizards, designers, or using statements. Just right-click on your project and select Add|New Item from the context menu. Choose Telerik OpenAccess Domain Model from the Visual Studio project templates. (Note to existing OpenAccess users: don't run the Enable ORM wizard or any other OpenAccess menu unless you are building OpenAccess Entities.) You will then have to specify the database backend (SQL Server, SQL Azure, Oracle, MySQL, etc.) and connection. After you establish your connection, select the database objects you want to add to your domain model. You can also name your model; by default it will be NameofyourdatabaseEntityDiagrams. You can click finish here if you are comfortable, or tweak some advanced settings. Many users of domain models like to add prefixes and suffixes to classes, fields, and properties, as well as handle pluralization. I personally accept the defaults; however, I hate how DBAs force underscores on me, so I click on the option to remove them. You can also tweak your namespace, mapping options, and define your own code generation template to gain further control over the generated code. This is a very powerful feature, but for now I will just accept the defaults.

    When we click finish, you can see your domain model as a file with the .rlinq extension in the Solution Explorer. You can also bring up the visual designer to view or further tweak your model by double-clicking on the model in the Solution Explorer. Time to use the model!

    Writing a LINQ Query

    Programming against the domain model is very simple using LINQ. Just set a reference to the model (line 12 of the code below) and write a standard LINQ statement (lines 14-16). (OpenAccess users: notice that you don't need any using statements for OpenAccess or an IObjectScope - just raw LINQ against your model.)
        1: using System;
        2: using System.Linq;
        3: // no need for an OpenAccess using statement
        4:
        5: namespace ConsoleApplication3
        6: {
        7:     class Program
        8:     {
        9:         static void Main(string[] args)
        10:        {
        11:            // a reference to the data context
        12:            NorthwindEntityDiagrams dat = new NorthwindEntityDiagrams();
        13:            // LINQ Statement
        14:            var result = from c in dat.Customers
        15:                         where c.Country == "Germany"
        16:                         select c;
        17:
        18:            // Print out the company name
        19:            foreach (var cust in result)
        20:            {
        21:                Console.WriteLine("CompanyName: " + cust.CompanyName);
        22:            }
        23:            // keep the console window open
        24:            Console.Read();
        25:        }
        26:    }
        27: }

    Lines 19-24 loop through the result of our LINQ query and display the results. That's it! All of the super powerful features of OpenAccess are available to you to further enhance your experience; however, in most cases this is all you need. In future posts I will show how to use the Visual Designer with some other scenarios. Stay tuned. Enjoy!

    Read the article

  • Composing Silverlight Applications With MEF

    - by PeterTweed
    Anyone who has written an application complex enough to warrant multiple controls on multiple pages/forms should understand the benefit of composite application development - that is, defining an application architecture that can be separated into distinct pieces, each with its own purpose, which are then "composed" together into the solution. Composition can be useful in any layer of the application, from the presentation layer to the business layer, common services, or data access. Historically, people have had different options for composing applications from distinct, well-known pieces: their own version of dependency injection, containers that aid composition like Unity, the Composite Application Guidance for WPF and Silverlight, and before that the Composite Application Block. Microsoft has been working on another mechanism to aid composition and extension of applications for some time now - the Managed Extensibility Framework, or MEF for short. With Silverlight 4 it is part of the Silverlight environment. MEF provides a much simpler mechanism for composition and extensibility than the earlier frameworks - whose complexity has always been the primary barrier to their adoption. This post will guide you through the simple use of MEF for composing an application - using exports, imports, and composition.

    Steps:

    1. Create a new Silverlight 4 application.

    2. Add references to the following assemblies:
       System.ComponentModel.Composition.dll
       System.ComponentModel.Composition.Initialization.dll

    3. Add a new user control called LeftControl.

    4. Replace the LayoutRoot Grid with the following xaml:

        <Grid x:Name="LayoutRoot" Background="Beige" Margin="40" >
            <Button Content="Left Content" Margin="30"></Button>
        </Grid>

    5. Add the following statement to the top of the LeftControl.xaml.cs file:

        using System.ComponentModel.Composition;

    6. Add the following attribute to the LeftControl class:

        [Export(typeof(LeftControl))]

       This attribute tells MEF that the type LeftControl will be exported - i.e. made available for other applications to import and compose into the application.

    7. Add a new user control called RightControl.

    8. Replace the LayoutRoot Grid with the following xaml:

        <Grid x:Name="LayoutRoot" Background="Green" Margin="40"  >
            <TextBlock Margin="40" Foreground="White" Text="Right Control" FontSize="16" VerticalAlignment="Center" HorizontalAlignment="Center" ></TextBlock>
        </Grid>

    9. Add the following statement to the top of the RightControl.xaml.cs file:

        using System.ComponentModel.Composition;

    10. Add the following attribute to the RightControl class:

        [Export(typeof(RightControl))]

    11. Add the following xaml to the LayoutRoot Grid in MainPage.xaml:

        <StackPanel Orientation="Horizontal" HorizontalAlignment="Center">
            <Border Name="LeftContent" Background="Red" BorderBrush="Gray" CornerRadius="20"></Border>
            <Border Name="RightContent" Background="Red" BorderBrush="Gray" CornerRadius="20"></Border>
        </StackPanel>

       The borders will hold the controls that will be imported and composed via MEF.

    12. Add the following statement to the top of the MainPage.xaml.cs file:

        using System.ComponentModel.Composition;

    13. Add the following properties to the MainPage class:
        [Import(typeof(LeftControl))]
        public LeftControl LeftUserControl { get; set; }

        [Import(typeof(RightControl))]
        public RightControl RightUserControl { get; set; }

       This defines properties accepting the LeftControl and RightControl types. The attributes tell MEF the discovered type that should be applied to each property when composition occurs.

    14. Replace the MainPage constructor with the following code:

        public MainPage()
        {
            InitializeComponent();
            CompositionInitializer.SatisfyImports(this);
            LeftContent.Child = LeftUserControl;
            RightContent.Child = RightUserControl;
        }

       The CompositionInitializer.SatisfyImports(this) call tells MEF to discover types related to the declared imports for this object (the MainPage object). At that point, types matching the import definitions are discovered in the executing assembly location of the application, instantiated, and assigned to the matching properties of the current object.

    15. Run the application and you will see the left control and right control types displayed in the MainPage.

    Congratulations! You have used MEF to dynamically compose user controls into a parent control in a composite application model. In the next post we will build on this topic and cover using MEF to compose Silverlight applications dynamically in download-on-demand scenarios - so .xap packages can be downloaded only when needed, avoiding a large initial download for the main application xap. Take the Slalom Challenge at www.slalomchallenge.com!

    Read the article

  • ROracle support for TimesTen In-Memory Database

    - by Sam Drake
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database.

    Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database providing very fast response times and high throughput through its memory-centric architecture. TimesTen is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database.

    ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC.

    The latest ROracle enhancements include:
    - Support for Oracle TimesTen In-Memory Database
    - Support for date-time using R's POSIXct/POSIXlt data types
    - RAW, BLOB and BFILE data type support
    - Option to specify the number of rows per fetch operation
    - Option to prefetch LOB data
    - Break support using Ctrl-C
    - Statement caching support

    TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries:
    - Analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE
    - Analytic clauses: OVER PARTITION BY and OVER ORDER BY
    - Multidimensional grouping operators:
      - Grouping clauses: GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS
      - Grouping functions: GROUP, GROUPING_ID, GROUP_ID
    - WITH clause, which allows repeated references to a named subquery block
    - Aggregate expressions over DISTINCT expressions
    - General expressions that return a character string in the source or a pattern within the LIKE predicate
    - Ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause)

    Note: Some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details.

    Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver.

        > install.packages("ROracle")
        > library(ROracle)
        Loading required package: DBI
        > drv <- dbDriver("Oracle")

    Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user.

        > conn <- dbConnect(drv, username ="", password="",
                    dbname = "localhost/SampleDb_1122:timesten_direct")

    You have the option to report the server type - Oracle or TimesTen?

        > print (paste ("Server type =", dbGetInfo (conn)$serverType))
        [1] "Server type = TimesTen IMDB"

    To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session.

        > dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE)
        [1] TRUE

    Verify that the newly created IRIS table is available in the database. To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively.

        > dbListTables (conn)
        [1] "IRIS"
        > dbListFields (conn, "IRIS")
        [1] "SEPAL.LENGTH" "SEPAL.WIDTH"  "PETAL.LENGTH" "PETAL.WIDTH"  "SPECIES"

    To retrieve a summary of the data from the database we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time.

        > iris.summary <- dbGetQuery(conn,
                            'SELECT SPECIES,
                                    AVG ("SEPAL.LENGTH") AS AVG_SLENGTH,
                                    AVG ("SEPAL.WIDTH")  AS AVG_SWIDTH,
                                    AVG ("PETAL.LENGTH") AS AVG_PLENGTH,
                                    AVG ("PETAL.WIDTH")  AS AVG_PWIDTH
                               FROM IRIS
                              GROUP BY ROLLUP (SPECIES)')
        > iris.summary
             SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH
        1     setosa    5.006000   3.428000       1.462   0.246000
        2 versicolor    5.936000   2.770000       4.260   1.326000
        3  virginica    6.588000   2.974000       5.552   2.026000
        4       <NA>    5.843333   3.057333       3.758   1.199333

    Finally, disconnect from the TimesTen database.

        > dbCommit (conn)
        [1] TRUE
        > dbDisconnect (conn)
        [1] TRUE

    We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.

    Read the article


  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.

    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would take upwards of 30 minutes on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

        -------------------------------------------------------------------------------
        | Id  | Operation                    | Name                   | Bytes | Cost |
        -------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT             |                        |  108K |  939 |
        |   1 |  SORT ORDER BY               |                        |  108K |  939 |
        |   2 |   NESTED LOOPS OUTER         |                        |  108K |  938 |
        |*  3 |    HASH JOIN RIGHT OUTER     |                        |  103K |  762 |
        |   4 |     VIEW                     | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
        |* 20 |     HASH JOIN RIGHT OUTER    |                        | 73472 |  759 |
        |  21 |      VIEW                    | ALL_EXTERNAL_TABLES    |  2097 |    3 |
        |* 34 |      HASH JOIN RIGHT OUTER   |                        | 39920 |  755 |
        |  35 |       VIEW                   | ALL_MVIEWS             |    51 |    7 |
        |  58 |       NESTED LOOPS OUTER     |                        | 39104 |  748 |
        |  59 |        VIEW                  | ALL_TABLES             |  6704 |  668 |
        |  89 |        VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2025 |    5 |
        | 106 |       VIEW                   | ALL_PART_TABLES        |   277 |   11 |
        -------------------------------------------------------------------------------

    And the same query on 9i:

        -------------------------------------------------------------------------------
        | Id  | Operation                  | Name                   | Bytes | Cost |
        -------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT           |                        |   16P |  55G |
        |   1 |  SORT ORDER BY             |                        |   16P |  55G |
        |   2 |   NESTED LOOPS OUTER       |                        |   16P | 862M |
        |   3 |    NESTED LOOPS OUTER      |                        | 5251G | 992K |
        |   4 |     NESTED LOOPS OUTER     |                        | 4243M | 2578 |
        |   5 |      NESTED LOOPS OUTER    |                        | 2669K | 1440 |
        |*  6 |       HASH JOIN OUTER      |                        |  398K |  302 |
        |   7 |        VIEW                | ALL_TABLES             |  342K |  276 |
        |  29 |        VIEW                | ALL_MVIEWS             |    51 |   20 |
        |* 50 |      VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2043 |      |
        |* 66 |     VIEW PUSHED PREDICATE  | ALL_EXTERNAL_TABLES    | 1777K |      |
        |* 80 |    VIEW PUSHED PREDICATE   | ALL_EXTERNAL_LOCATIONS | 1744K |      |
        |* 96 |   VIEW                     | ALL_PART_TABLES        |  852K |      |
        -------------------------------------------------------------------------------

    Have a look at the cost column. 10g's overall query cost is 939, and 9i's is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
    On 9i, no statistics are present on the system tables, so Oracle has to use the rule-based optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

        -------------------------------------------------------------------------------
        | Id  | Operation          | Name                   | Bytes | Cost |
        -------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT   |                        | 7587K | 3704 |
        |   1 |  SORT ORDER BY     |                        | 7587K | 3704 |
        |*  2 |   HASH JOIN OUTER  |                        | 7587K |  822 |
        |*  3 |    HASH JOIN OUTER |                        | 5262K |  616 |
        |*  4 |     HASH JOIN OUTER|                        | 2980K |  465 |
        |*  5 |      HASH JOIN OUTER|                       |  710K |  432 |
        |*  6 |       HASH JOIN OUTER|                      |  398K |  302 |
        |   7 |        VIEW        | ALL_TABLES             |  342K |  276 |
        |  29 |        VIEW        | ALL_MVIEWS             |    51 |   20 |
        |  50 |      VIEW          | ALL_PART_TABLES        |  852K |  104 |
        |  78 |     VIEW           | ALL_TAB_COMMENTS       |  2043 |   14 |
        |  93 |    VIEW            | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
        | 106 |   VIEW             | ALL_EXTERNAL_TABLES    | 1777K |   28 |
        -------------------------------------------------------------------------------

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that it will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input. To try and get some ideas, we asked some Oracle performance specialists whether they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing the queries to get the product working on their system before they could use it. Another approach was needed.

    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
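
    In outline, the client-side left join might look something like the sketch below. This is an illustration of the idea under assumed data structures (Row, KeyOf and ClientSideJoin are invented for this sketch), not the actual Schema Compare implementation; as described above, the real code also keeps row data out of the hash key on disk.

        using System;
        using System.Collections.Generic;

        // Row stands in for whatever structure holds one fetched dictionary row.
        sealed class Row
        {
            public string Owner;
            public string Name;
            public Dictionary<string, object> Columns = new Dictionary<string, object>();
        }

        static class ClientSideJoin
        {
            static string KeyOf(Row r) { return r.Owner + "|" + r.Name; }

            // Left-join baseRows (e.g. ALL_TABLES) to one subsidiary view's rows
            // (e.g. ALL_PART_TABLES) on (owner, name). The subsidiary side is
            // hashed once into a dictionary; the same keying can be reused for
            // each further subsidiary view, instead of re-hashing per join.
            public static IEnumerable<Row> LeftJoin(IEnumerable<Row> baseRows,
                                                    IEnumerable<Row> subsidiaryRows)
            {
                var lookup = new Dictionary<string, Row>();
                foreach (Row sub in subsidiaryRows)
                    lookup[KeyOf(sub)] = sub;

                foreach (Row row in baseRows)
                {
                    Row match;
                    if (lookup.TryGetValue(KeyOf(row), out match))
                    {
                        // Fill in the gaps from the subsidiary view.
                        foreach (KeyValuePair<string, object> kv in match.Columns)
                            row.Columns[kv.Key] = kv.Value;
                    }
                    yield return row; // unmatched base rows pass through: a left join
                }
            }
        }
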
    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables, that means an extra 1.4MB of memory used during population). Next: fun with the 9i dictionary views.

    Read the article

  • CBO cardinality estimation with histograms and col_usage$

    - by Liu Maclean
    A question on Itpub asked how the CBO arrives at its cardinality estimate for a simple equality predicate, so let's reproduce it:

        SQL> create table maclean1 as select * from dba_objects;
        Table created.

        SQL> update maclean1 set status='INVALID' where owner='MACLEAN';
        2 rows updated.

        SQL> commit;
        Commit complete.

        SQL> create index ind_maclean1 on maclean1(status);
        Index created.

        SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN1',cascade=>true);
        PL/SQL procedure successfully completed.

        SQL> explain plan for select * from maclean1 where status='INVALID';
        Explained.

        SQL> set linesize 140 pagesize 1400
        SQL> select * from table(dbms_xplan.display());

        PLAN_TABLE_OUTPUT
        ---------------------------------------------------------------------------
        Plan hash value: 987568083

        ------------------------------------------------------------------------------
        | Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
        ------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT  |          | 11320 |  1028K|    85   (0)| 00:00:02 |
        |*  1 |  TABLE ACCESS FULL| MACLEAN1 | 11320 |  1028K|    85   (0)| 00:00:02 |
        ------------------------------------------------------------------------------

        Predicate Information (identified by operation id):
        ---------------------------------------------------
           1 - filter("STATUS"='INVALID')

        13 rows selected.

    The 10053 trace shows how that estimate was produced:

        Access path analysis for MACLEAN1
        ***************************************
        SINGLE TABLE ACCESS PATH
          Single Table Cardinality Estimation for MACLEAN1[MACLEAN1]
          Column (#10): STATUS(
            AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.500000
          Table: MACLEAN1  Alias: MACLEAN1
            Card: Original: 22639.000000  Rounded: 11320  Computed: 11319.50  Non Adjusted: 11319.50
          Access Path: TableScan
            Cost:  85.33  Resp: 85.33  Degree: 0
              Cost_io: 85.00  Cost_cpu: 11935345
              Resp_io: 85.00  Resp_cpu: 11935345
          Access Path: index (AllEqRange)
            Index: IND_MACLEAN1
            resc_io: 185.00  resc_cpu: 8449916
            ix_sel: 0.500000  ix_sel_with_filters: 0.500000
            Cost: 185.24  Resp: 185.24  Degree: 1
          Best:: AccessPath: TableScan
                 Cost: 85.33  Degree: 1  Resp: 85.33  Card: 11319.50  Bytes: 0

    Note the Density = 0.500000 in the trace, i.e. 1/NDV. Only 2 rows actually satisfy "STATUS"='INVALID', but the column data is heavily skewed and dbms_stats did not gather a histogram on STATUS, so the CBO assumes the two distinct values are evenly distributed, estimates 11320 rows, and costs the index range scan on IND_MACLEAN1 at 185 against 85 for the full scan. It therefore picks the full table scan - the wrong choice for this predicate.

    If we let the optimizer know the real distribution of STATUS by gathering a histogram, the estimate changes completely. Repeat the test:

        [oracle@vrh4 ~]$ sqlplus / as sysdba

        SQL*Plus: Release 11.2.0.2.0 Production on Mon Oct 17 19:15:45 2011
        Copyright (c) 1982, 2010, Oracle. All rights reserved.

        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

        SQL> select * from v$version;

        BANNER
        --------------------------------------------------------------------------------
        Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
        PL/SQL Release 11.2.0.2.0 - Production
        CORE    11.2.0.2.0  Production
        TNS for Linux: Version 11.2.0.2.0 - Production
        NLSRTL Version 11.2.0.2.0 - Production

        SQL> show parameter optimizer_fea

        NAME                                 TYPE        VALUE
        ------------------------------------ ----------- ------------------------------
        optimizer_features_enable            string      11.2.0.2

        SQL> select * from global_name;

        GLOBAL_NAME
        --------------------------------------------------------------------------------
        www.oracledatabase12g.com & www.askmaclean.com

        SQL> drop table maclean;
        Table dropped.

        SQL> create table maclean as select * from dba_objects;
        Table created.

        SQL> update maclean set status='INVALID' where owner='MACLEAN';
        2 rows updated.

        SQL> commit;
        Commit complete.

        SQL> create index ind_maclean on maclean(status);
        Index created.

        SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN',cascade=>true, method_opt=>'FOR ALL COLUMNS SIZE 2');
        PL/SQL procedure successfully completed.

    This gathers a 2-bucket FREQUENCY histogram on the columns, including STATUS. Guy Harrison of Quest has a handy script for displaying the data distribution recorded by a FREQUENCY histogram:

        rem
        rem Generate a histogram of data distribution in a column as recorded
        rem in dba_tab_histograms
        rem
        rem Guy Harrison Jan 2010 : www.guyharrison.net
        rem
        rem hexstr function is from http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:707586567563

        set pagesize 10000
        set lines 120
        set verify off
        col char_value format a10 heading "Endpoint|value"
        col bucket_count format 99,999,999 heading "bucket|count"
        col pct format 999.99 heading "Pct"
        col pct_of_max format a62 heading "Pct of|Max value"
        rem col endpoint_value format 9999999999999 heading "endpoint|value"

        CREATE OR REPLACE FUNCTION hexstr (p_number IN NUMBER)
           RETURN VARCHAR2
        AS
           l_str      LONG := TO_CHAR (p_number, 'fm' || RPAD ('x', 50, 'x'));
           l_return   VARCHAR2 (4000);
        BEGIN
           WHILE (l_str IS NOT NULL)
           LOOP
              l_return := l_return || CHR (TO_NUMBER (SUBSTR (l_str, 1, 2), 'xx'));
              l_str := SUBSTR (l_str, 3);
           END LOOP;
           RETURN (SUBSTR (l_return, 1, 6));
        END;
        /

        WITH hist_data AS (
          SELECT endpoint_value, endpoint_actual_value,
                 NVL (LAG (endpoint_value) OVER (ORDER BY endpoint_value), ' ') prev_value,
                 endpoint_number,
                 endpoint_number
                 - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count
            FROM dba_tab_histograms
            JOIN dba_tab_col_statistics USING (owner, table_name, column_name)
           WHERE owner = '&owner'
             AND table_name = '&table'
             AND column_name = '&column'
             AND histogram = 'FREQUENCY')
        SELECT NVL (endpoint_actual_value, endpoint_value) endpoint_value,
               bucket_count,
               ROUND (bucket_count * 100 / SUM (bucket_count) OVER (), 2) pct,
               RPAD (' ', ROUND (bucket_count * 50 / MAX (bucket_count) OVER ()), '*') pct_of_max
          FROM hist_data;

        WITH hist_data AS (
          SELECT endpoint_value, endpoint_actual_value,
                 NVL (LAG (endpoint_value) OVER (ORDER BY endpoint_value), ' ') prev_value,
                 endpoint_number,
                 endpoint_number
                 - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count
            FROM dba_tab_histograms
            JOIN dba_tab_col_statistics USING (owner, table_name, column_name)
           WHERE owner = '&owner'
             AND table_name = '&table'
             AND column_name = '&column'
             AND histogram = 'FREQUENCY')
        SELECT hexstr (endpoint_value) char_value,
               bucket_count,
               ROUND (bucket_count * 100 / SUM (bucket_count) OVER (), 2) pct,
               RPAD (' ', ROUND (bucket_count * 50 / MAX (bucket_count) OVER ()), '*') pct_of_max
          FROM hist_data
         ORDER BY endpoint_value;

    Running the script shows that dbms_stats recorded STATUS='INVALID' with a bucket count of 9 and a percentage of 0.04. With the FREQUENCY histogram in place, the plan changes:

        SQL> explain plan for select * from maclean where status='INVALID';
        Explained.

        SQL> select * from table(dbms_xplan.display());

        PLAN_TABLE_OUTPUT
        -------------------------------------
        Plan hash value: 3087014066

        -------------------------------------------------------------------------------------------
        | Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
        -------------------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT            |             |     9 |   837 |     2   (0)| 00:00:01 |
        |   1 |  TABLE ACCESS BY INDEX ROWID| MACLEAN     |     9 |   837 |     2   (0)| 00:00:01 |
        |*  2 |   INDEX RANGE SCAN          | IND_MACLEAN |     9 |       |     1   (0)| 00:00:01 |
        -------------------------------------------------------------------------------------------

        Predicate Information (identified by operation id):
        ---------------------------------------------------
           2 - access("STATUS"='INVALID')

    This time the CBO estimates a cardinality of 9 for STATUS='INVALID' and chooses the index range scan over the full table scan. The 10053 trace confirms it:

        SQL> alter system flush shared_pool;
        System altered.

        SQL> oradebug setmypid;
        Statement processed.

        SQL> oradebug event 10053 trace name context forever ,level 1;
        Statement processed.

        SQL> explain plan for select * from maclean where status='INVALID';
        Explained.

        SINGLE TABLE ACCESS PATH
          Single Table Cardinality Estimation for MACLEAN[MACLEAN]
          Column (#10):
            NewDensity:0.000199, OldDensity:0.000022 BktCnt:22640, PopBktCnt:22640, PopValCnt:2, NDV:2

    (Here NewDensity = bucket_count / SUM(bucket_count) / 2.)

          Column (#10): STATUS(
            AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.000199
            Histogram: Freq  #Bkts: 2  UncompBkts: 22640  EndPtVals: 2
          Table: MACLEAN  Alias: MACLEAN
            Card: Original: 22640.000000  Rounded: 9  Computed: 9.00  Non Adjusted: 9.00
          Access Path: TableScan
            Cost:  85.30  Resp: 85.30  Degree: 0
              Cost_io: 85.00  Cost_cpu: 10804625
              Resp_io: 85.00  Resp_cpu: 10804625
          Access Path: index (AllEqRange)
            Index: IND_MACLEAN
            resc_io: 2.00  resc_cpu: 20763
            ix_sel: 0.000398  ix_sel_with_filters: 0.000398
            Cost: 2.00  Resp: 2.00  Degree: 1
          Best:: AccessPath: IndexRange
          Index: IND_MACLEAN
                 Cost: 2.00  Degree: 1  Resp: 2.00  Card: 9.00  Bytes: 0

    So a mere 2-bucket FREQUENCY histogram is all the CBO needs to cost this query correctly. That raises the question: why didn't the first dbms_stats call, which used the default method_opt (dbms_stats.DEFAULT_METHOD_OPT, i.e. FOR ALL COLUMNS SIZE AUTO), create the histogram? Because SIZE AUTO decides which columns deserve histograms from recorded column usage: sys.col_usage$ tracks which columns have appeared in predicates (see my earlier post on col_usage$ and SMON). For a freshly created table there is no recorded usage of STATUS, so no histogram is gathered:

        SQL> drop table maclean;
        Table dropped.

        SQL> create table maclean as select * from dba_objects;
        Table created.

        SQL> update maclean set status='INVALID' where owner='MACLEAN';
        2 rows updated.

        SQL> commit;
        Commit complete.

        SQL> create index ind_maclean on maclean(status);
        Index created.

    Gather statistics on MACLEAN with the default method_opt:

        SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN');
        PL/SQL procedure successfully completed.

        @histogram.sql
        Enter value for owner: SYS
        old  12:    WHERE owner = '&owner'
        new  12:    WHERE owner = 'SYS'
        Enter value for table: MACLEAN
        old  13:      AND table_name = '&table'
        new  13:      AND table_name = 'MACLEAN'
        Enter value for column: STATUS
        old  14:      AND column_name = '&column'
        new  14:      AND column_name = 'STATUS'

        no rows selected

    No histogram on STATUS, and no entry in col_usage$ yet. Now run an equality predicate on STATUS a few hundred times and flush the monitoring info so the usage is recorded:

        declare
        begin
          for i in 1..500 loop
            execute immediate ' alter system flush shared_pool';
            DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
            execute immediate 'select count(*) from maclean where status=''INVALID'' ';
          end loop;
        end;
        /

        PL/SQL procedure successfully completed.

        SQL> select obj# from obj$ where name='MACLEAN';

              OBJ#
        ----------
             97215

        SQL> select * from col_usage$ where OBJ#=97215;

              OBJ#    INTCOL# EQUALITY_PREDS EQUIJOIN_PREDS NONEQUIJOIN_PREDS RANGE_PREDS LIKE_PREDS NULL_PREDS TIMESTAMP
        ---------- ---------- -------------- -------------- ----------------- ----------- ---------- ---------- ---------
             97215          1              1              0                 0           0          0          0 17-OCT-11
             97215         10            499              0                 0           0          0          0 17-OCT-11

    With the equality-predicate usage recorded for column 10 (STATUS), the same default dbms_stats call now gathers the histogram:

        SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN');
        PL/SQL procedure successfully completed.

        @histogram.sql
        Enter value for owner: SYS
        Enter value for table: MACLEAN
        Enter value for column: STATUS

        Endpoint        bucket         Pct of
        value            count     Pct Max value
        ---------- ----------- ------- --------------------------------------------------------------
        INVALI               2     .04
        VALIC3           5,453   99.96  *************************************************

    Read the article

  • Auto_increment values in InnoDB?

    - by Timmy
    I've been using InnoDB for a project, and relying on auto_increment. This is not a problem for most of the tables, but for tables with deletion it might be an issue; see "AUTO_INCREMENT Handling in InnoDB", particularly this part (for an AUTO_INCREMENT column named ai_col):

        After a server startup, for the first insert into a table t, InnoDB executes the equivalent of this statement:

            SELECT MAX(ai_col) FROM t FOR UPDATE;

        InnoDB increments by one the value retrieved by the statement and assigns it to the column and to the auto-increment counter for the table.

    This is a problem because, while it ensures the key is unique within the table, there are foreign keys to this table for which those keys are no longer unique: if the rows with the highest ids are deleted before a restart, their auto_increment values can be handed out again to new, unrelated rows. The MySQL server does not (and should not) restart often, but this is breaking things. Are there any easy ways around this?

    Read the article

  • Emacs: selective c-auto-newline

    - by Yktula
    When c-auto-newline is set to non-nil, Emacs re-indents the current line, inserts a newline, and then indents the new line. However, I'm using 1TBS indent style, which means if/else statements are formatted like this:

        if (n == 1) {
            exit(EXIT_SUCCESS);
        } else {
            perror("n");
        }

    Also, I write do/while loops like this:

        do {
            printf("%d\n", n++);
        } while (n < 64);

    As such, while I do want a newline automatically inserted after every opening brace and semicolon, I don't want newlines automatically inserted after the closing brace that concludes an if statement or do loop. How can I have GNU Emacs (23.2.1, *nix) selectively insert newlines like that? Along the same lines, can I have Emacs insert an opening brace, a newline, and a closing brace on another newline - placing the cursor between the two braces - after the closing parentheses of an if statement, a function declaration, and the like?

    Read the article

  • Batch select with SQLAlchemy

    - by muckabout
    I have a large set of values V, some of which are likely to already exist in a table T. I would like to insert into the table those which are not yet inserted. So far I have this code:

        for value in values:
            s = self.conn.execute(mytable.__table__.select(mytable.value == value)).first()
            if not s:
                to_insert.append(value)

    I feel like this is running slower than it should. I have a few related questions:

    1. Is there a way to construct a select statement such that you provide a list (in this case, 'values') to which sqlalchemy responds with records which match that list?
    2. Is this code overly expensive in constructing select objects?
    3. Is there a way to construct a single select statement, then parameterize it at execution time?

    Read the article

  • From a language design perspective, if Javascript objects are simply associative arrays, then why have objects?

    - by Christopher Altman
    I was reading about objects in the O'Reilly JavaScript Pocket Reference, and the book makes the following statement: "An object is a compound data type that contains any number of properties. JavaScript objects are associative arrays: they associate arbitrary data values with arbitrary names." From a language design perspective, if objects are simply associative arrays, then why have objects? I appreciate the convenience of having objects in the language, but if convenience is the main purpose for adding a data type, then how do you decide what to add and what to leave out? A language can quickly become bloated and less valuable if it is weighed down by several overlapping methods and data types. (Is this a true statement, or am I missing something?)

    Read the article

  • OperationalError "unable to open database file" processing query results with SQLAlchemy and SQLite3

    - by Peter
    I'm running into this little problem that I hope is just a dumb user error. It looks like some sort of a size limit with a query to a SQLite database. I managed to reproduce the issue with an in-memory DB and the simple script shown below. I can make it work by either reducing the number of records in the DB, reducing the size of each record, or dropping the order_by() call. I am using Python 2.5.5 and SQLAlchemy 0.6.0 in a Cygwin environment. Thanks!

        #!/usr/bin/python
        from sqlalchemy.orm import sessionmaker
        import sqlalchemy
        import sqlalchemy.orm

        class Person(object):
            def __init__(self, name):
                self.name = name

        engine = sqlalchemy.create_engine('sqlite:///:memory:')
        Session = sessionmaker(bind=engine)
        metadata = sqlalchemy.schema.MetaData(bind=engine)
        person_table = sqlalchemy.Table('person', metadata,
            sqlalchemy.Column('id', sqlalchemy.types.Integer, primary_key=True),
            sqlalchemy.Column('name', sqlalchemy.types.String))
        metadata.create_all(engine)
        sqlalchemy.orm.mapper(Person, person_table)

        session = Session()
        session.add_all([Person("012345678901234567890123456789012") for i in range(5000)])
        session.commit()

        persons = session.query(Person).order_by(Person.name).all()
        print "count =", len(persons)
        session.close()

    The all() call on the query result fails with the OperationalError exception:

        Traceback (most recent call last):
          File "./stress.py", line 27, in <module>
            persons = session.query(Person).order_by(Person.name).all()
          File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1343, in all
            return list(self)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1451, in __iter__
            return self._execute_and_instances(context)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1456, in _execute_and_instances
            mapper=self._mapper_zero_or_none())
          File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/session.py", line 737, in execute
            clause, params or {})
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1109, in execute
            return Connection.executors[c](self, object, multiparams, params)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1186, in _execute_clauseelement
            return self.__execute_context(context)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1215, in __execute_context
            context.parameters[0], context=context)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1284, in _cursor_execute
            self._handle_dbapi_exception(e, statement, parameters, cursor, context)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1282, in _cursor_execute
            self.dialect.do_execute(cursor, statement, parameters, context=context)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/default.py", line 277, in do_execute
            cursor.execute(statement, parameters)
        sqlalchemy.exc.OperationalError: (OperationalError) unable to open database file
          u'SELECT person.id AS person_id, person.name AS person_name \nFROM person ORDER BY person.name' ()

    Read the article

  • How to execute "eval" without writing "eval" in JavaScript

    - by Infinity
    Here's the deal: we have a big JS library that we want to compress, but the YUI Compressor doesn't fully compress the code if it finds an "eval" statement, out of fear that it will break something else. That's great and all, but we know exactly what is getting eval'd, so we don't want it to get conservative just because there's an eval statement in MooTools' JSON.decode. So basically the question is: is there any alternative (maybe creative) way of writing an expression that returns the eval function? I tried a few, but no dice:

        window['eval'](stuff);
        window['e'+'val'](stuff); // stuff runs in the global scope, we need local scope
        this['eval'](stuff);      // this.eval is not a function
        (new Function( "with(this) { return " + '(' + stuff + ')' + "}"))() // global scope again

    Any ideas? Thx

    Read the article

  • SQL Oracle LEFT JOIN and SUBQUERY error: ORA-00905: missing keyword

    - by Curro
    Hello everyone. Asking for your help with this Oracle query, which is giving me the error "ORA-00905: missing keyword". It was working fine before I added the LEFT JOIN, but obviously it won't deliver the information as we need it without the LEFT JOIN. Please help me find which keyword is missing in this query. Thanks a lot!

    DB tables: DW.TICKETS, DW.TICKET_ACTLOG
    Subquery table: TABLE_RESOLVERS

        SELECT TO_CHAR(DW.TICKETS.RESOLVED_TIMESTAMP,'YYYY-MM-DD HH24:MI:SS') AS RESOLVED_DATE,
               DW.TICKETS.SUBJECT,
               DW.TICKETS.OWNER_CORE_ID,
               DW.TICKETS.TICKET_NUMBER,
               TABLE_RESOLVERS.SUBMITTER AS RESOLVER_CORE_ID
        FROM DW.TICKETS
        LEFT JOIN (SELECT TICKET_NUMBER, SUBMITTER
                     FROM DW.TICKET_ACTLOG
                    WHERE TYPE = 'Final Resolution'
                      AND (SUBMITTER = 'B02666' OR SUBMITTER = 'R66604')
                    ORDER BY CREATE_TIMESTAMP DESC
                  ) AS TABLE_RESOLVERS
               ON DW.TICKETS.TICKET_NUMBER = TABLE_RESOLVERS.TICKET_NUMBER
        WHERE DW.TICKETS.RESOLVED_TIMESTAMP >= to_date('05-03-2010','dd-mm-yyyy')
          AND DW.TICKETS.RESOLVED_TIMESTAMP <  to_date('8-03-2010','dd-mm-yyyy')
          AND DW.TICKETS.TICKET_NUMBER LIKE 'TCK%'
          AND DW.TICKETS.TICKET_NUMBER IN (SELECT TICKET_NUMBER
                                             FROM DW.TICKET_ACTLOG
                                            WHERE (SUBMITTER = 'B02666' OR SUBMITTER = 'R66604'))
        ORDER BY DW.TICKETS.CREATE_TIMESTAMP ASC

    Read the article

  • eventmachine on Debian fails to install via rubygems

    - by Max
    This has been killing me for the last 5 hours: I don't seem to be able to get eventmachine running on my Debian box. Here is the output:

    $ gem install thin
    Building native extensions. This could take a while...
    ERROR: Error installing thin:
    ERROR: Failed to build gem native extension.

    /home/eventhub/.rvm/rubies/ruby-1.9.3-p125/bin/ruby extconf.rb
    checking for rb_trap_immediate in ruby.h,rubysig.h... no
    checking for rb_thread_blocking_region()... yes
    checking for inotify_init() in sys/inotify.h... yes
    checking for writev() in sys/uio.h... yes
    checking for rb_wait_for_single_fd()... yes
    checking for rb_enable_interrupt()... yes
    checking for rb_time_new()... yes
    checking for sys/event.h... no
    checking for epoll_create() in sys/epoll.h... yes
    creating Makefile
    make
    compiling kb.cpp
    cc1plus: warning: command line option "-Wdeclaration-after-statement" is valid for C/ObjC but not for C++
    cc1plus: warning: command line option "-Wimplicit-function-declaration" is valid for C/ObjC but not for C++
    binder.h:35: warning: type qualifiers ignored on function return type
    em.h:84-125, eventmachine.h:46-108: warning: type qualifiers ignored on function return type (repeated)
    [the same two cc1plus option warnings and the same binder.h/em.h/eventmachine.h warning block repeat while compiling rubymain.cpp, ssl.cpp, cmain.cpp and em.cpp; cmain.cpp adds the same warning at its own lines 96-678]
    compiling em.cpp
    em.cpp:578, em.cpp:974: warning: 'int rb_thread_select(int, fd_set*, fd_set*, fd_set*, timeval*)' is deprecated (declared at /home/eventhub/.rvm/rubies/ruby-1.9.3-p125/include/ruby-1.9.1/ruby/intern.h:379)
    em.cpp:1057-2142: warning: type qualifiers ignored on function return type (repeated)
    em.cpp:2361: fatal error: error writing to /tmp/ccdlOK0T.s: No space left on device
    compilation terminated.
    make: *** [em.o] Error 1

    Gem files will remain installed in /home/eventhub/.rvm/gems/ruby-1.9.3-p125/gems/eventmachine-1.0.1 for inspection.
    Results logged to /home/eventhub/.rvm/gems/ruby-1.9.3-p125/gems/eventmachine-1.0.1/ext/gem_make.out

    Any thoughts? I read a lot of different ways to solve this issue, but none of them worked. Thanks
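    For what it's worth, the warnings look like noise here; the line that actually kills the build is the fatal error: "error writing to /tmp/ccdlOK0T.s: No space left on device". A sketch of the usual checks (paths are examples):

    $ df -h /tmp                       # is the filesystem holding /tmp full?
    $ du -sh /tmp/* | sort -h | tail   # what is taking up the space
    # gcc honours the TMPDIR environment variable, so pointing it at a
    # roomier filesystem for this one install can work around a full /tmp:
    $ mkdir -p ~/tmp && TMPDIR=~/tmp gem install thin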

    Read the article

  • Problem -- My Android "Hello World" App Won't Say "Hello"

    - by keith
    Hello, I hope that I have come to the right post for a beginner's question about Android programming. If not, please feel free to direct me to a better forum. I created a hello world application, and the system generated most of the Android code below. When running the app without the System.out statement, there is no "hello" in the emulator. Then, using the Eclipse tutorial, I read that I can add a System.out.println statement to main. Again the app runs, but there is no output. What am I not understanding here?

    <TextView
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:text="@string/hello"
        />

    System.out.println("Hello world!");

    Thank you, Keith
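    In case it helps, a sketch of what the generated activity normally contains (the class and layout names are assumptions, since the generated code isn't shown): the TextView is only rendered if setContentView() is called with its layout, and System.out.println() output goes to the logcat console in Eclipse, never to the emulator screen.

    import android.app.Activity;
    import android.os.Bundle;

    public class HelloActivity extends Activity {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // without this call the layout (and its "hello" TextView) never shows
            setContentView(R.layout.main);
            // prints to the logcat console, not to the emulator UI
            System.out.println("Hello world!");
        }
    }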

    Read the article

  • pdb shows different variable values than print statements

    - by martin
    Hi, everyone. I am debugging a Python module with homemade C extensions. The output seems correct when I print it with 'p' in pdb, but if I use a normal print statement or pickle it, the output is wrong. What could be causing pdb to show different values than normal execution? I can even step to the print statement in debug mode, and pdb will show the correct value, but the program will print the wrong one. The problem seems to happen only when I have called a certain C extension earlier. Glad to post code if that helps. Thank you.
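    One diagnostic sketch that might narrow it down (all names are hypothetical; Python 2 print statements as in the question): log id() and repr() of the suspect object right before and after the C-extension call, once in a normal run and once under pdb. If id() changes, the binding itself is being replaced; if only repr() changes, the extension is probably mutating or corrupting the object in place.

    val = produce_value()          # hypothetical producer of the suspect value
    print "before:", id(val), repr(val)
    my_extension.do_work()         # the homemade C extension call
    print "after: ", id(val), repr(val)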

    Read the article

  • Co-Authors Wordpress Plugin: coauthors_wp_list_authors function not working correctly

    - by rayne
    The Co-Authors Plus Plugin for WordPress has a very annoying bug. The custom function coauthors_wp_list_authors lists authors the same way the WordPress function wp_list_authors does, but it does not include authors in the list who don't have a post of their own - if they have only entries in which they are listed as co-author but not as author, they will not be included in the list. That is of course missing a very important point. I've looked at the faulty SQL statement, but unfortunately my knowledge of advanced SQL, especially when it comes to JOINs, as well as my knowledge of the wp database structure is too limited and I remain clueless. There is a topic in the WP support forum, but unfortunately the information there is very outdated and the fix is not applicable anymore. I couldn't find any other, more current solutions on the internet. I'd be glad if someone here could help fix the SQL statement so it also lists co-authors who don't have posts where they're the sole author, as well as display the correct post count for all authors. Here's the entire function for reference purposes, with a comment marking the SQL statement:

    function coauthors_wp_list_authors($args = '') {
        global $wpdb, $coauthors_plus;
        $defaults = array(
            'optioncount' => false, 'exclude_admin' => true,
            'show_fullname' => false, 'hide_empty' => true,
            'feed' => '', 'feed_image' => '', 'feed_type' => '', 'echo' => true,
            'style' => 'list', 'html' => true
        );
        $r = wp_parse_args( $args, $defaults );
        extract($r, EXTR_SKIP);
        $return = '';

        $authors = $wpdb->get_results("SELECT ID, user_nicename from $wpdb->users " . ($exclude_admin ? "WHERE user_login <> 'admin' " : '') . "ORDER BY display_name");

        $author_count = array();

        # this is the SQL statement which doesn't work correctly:
        $query  = "SELECT DISTINCT $wpdb->users.ID AS post_author, $wpdb->terms.name AS user_name, $wpdb->term_taxonomy.count AS count";
        $query .= " FROM $wpdb->posts";
        $query .= " INNER JOIN $wpdb->term_relationships ON ($wpdb->posts.ID = $wpdb->term_relationships.object_id)";
        $query .= " INNER JOIN $wpdb->term_taxonomy ON ($wpdb->term_relationships.term_taxonomy_id = $wpdb->term_taxonomy.term_taxonomy_id)";
        $query .= " INNER JOIN $wpdb->terms ON ($wpdb->term_taxonomy.term_id = $wpdb->terms.term_id)";
        $query .= " INNER JOIN $wpdb->users ON ($wpdb->terms.name = $wpdb->users.user_login)";
        $query .= " WHERE post_type = 'post' AND " . get_private_posts_cap_sql( 'post' );
        $query .= " AND $wpdb->term_taxonomy.taxonomy = '$coauthors_plus->coauthor_taxonomy'";
        $query .= " GROUP BY post_author";

        foreach ((array) $wpdb->get_results($query) as $row) {
            $author_count[$row->post_author] = $row->count;
        }

        foreach ( (array) $authors as $author ) {
            $link = '';
            $author = get_userdata( $author->ID );
            $posts = (isset($author_count[$author->ID])) ? $author_count[$author->ID] : 0;
            $name = $author->display_name;
            if ( $show_fullname && ($author->first_name != '' && $author->last_name != '') )
                $name = "$author->first_name $author->last_name";
            if( !$html ) {
                if ( $posts == 0 ) {
                    if ( ! $hide_empty )
                        $return .= $name . ', ';
                } else
                    $return .= $name . ', ';
                continue;
            }
            if ( !($posts == 0 && $hide_empty) && 'list' == $style )
                $return .= '<li>';
            if ( $posts == 0 ) {
                if ( ! $hide_empty )
                    $link = $name;
            } else {
                $link = '<a href="' . get_author_posts_url($author->ID, $author->user_nicename) . '" title="' . esc_attr( sprintf(__("Posts by %s", 'co-authors-plus'), $author->display_name) ) . '">' . $name . '</a>';
                if ( (! empty($feed_image)) || (! empty($feed)) ) {
                    $link .= ' ';
                    if (empty($feed_image))
                        $link .= '(';
                    $link .= '<a href="' . get_author_feed_link($author->ID) . '"';
                    if ( !empty($feed) ) {
                        $title = ' title="' . esc_attr($feed) . '"';
                        $alt = ' alt="' . esc_attr($feed) . '"';
                        $name = $feed;
                        $link .= $title;
                    }
                    $link .= '>';
                    if ( !empty($feed_image) )
                        $link .= "<img src=\"" . esc_url($feed_image) . "\" style=\"border: none;\"$alt$title" . ' />';
                    else
                        $link .= $name;
                    $link .= '</a>';
                    if ( empty($feed_image) )
                        $link .= ')';
                }
                if ( $optioncount )
                    $link .= ' ('. $posts . ')';
            }
            if ( !($posts == 0 && $hide_empty) && 'list' == $style )
                $return .= $link . '</li>';
            else if ( ! $hide_empty )
                $return .= $link . ', ';
        }
        $return = trim($return, ', ');
        if ( ! $echo )
            return $return;
        echo $return;
    }
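    A debugging sketch rather than a fix (the standard wp_ table prefix and the plugin's 'author' taxonomy are assumptions): running the author/term join for one affected co-author login directly against MySQL shows which INNER JOIN drops the row, which is usually the quickest way to see what the corrected query has to look like.

    SELECT u.ID, t.name, tt.count
    FROM wp_terms t
    JOIN wp_term_taxonomy tt ON tt.term_id = t.term_id
    JOIN wp_users u ON u.user_login = t.name
    WHERE tt.taxonomy = 'author'           -- assumed value of $coauthors_plus->coauthor_taxonomy
      AND u.user_login = 'some_coauthor';  -- a login missing from the output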

    Read the article

  • How to determine MS Access field size via OleDb

    - by Andy
    The existing application is in C#. During startup the application calls a virtual method to make changes to the database (for example a new revision may need to calculate a new field or something). An open OleDb connection is passed into the method. I need to change a field width. The ALTER TABLE statement is working fine. But I would like to avoid executing the ALTER TABLE statement if the field is already the appropriate size. Is there a way to determine the size of an MS Access field using the same OleDb connection?
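    A sketch of one route (table, column and target size below are placeholders): the OLE DB "Columns" schema rowset exposes CHARACTER_MAXIMUM_LENGTH, which should let the startup code skip the ALTER TABLE when the width already matches.

    // using System; using System.Data; using System.Data.OleDb;
    static bool TextColumnHasSize(OleDbConnection conn, string table, string column, int size)
    {
        // restrictions are TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
        DataTable schema = conn.GetOleDbSchemaTable(
            OleDbSchemaGuid.Columns, new object[] { null, null, table, column });
        if (schema.Rows.Count == 0)
            return false; // column not found; let the caller decide what to do
        object len = schema.Rows[0]["CHARACTER_MAXIMUM_LENGTH"];
        return len != DBNull.Value && Convert.ToInt32(len) == size;
    }

    // caller: only widen when needed
    // if (!TextColumnHasSize(conn, "MyTable", "MyColumn", 255))
    //     new OleDbCommand("ALTER TABLE MyTable ALTER COLUMN MyColumn TEXT(255)", conn).ExecuteNonQuery();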

    Read the article

  • ODBC: Mapping of literal type names in create table statements

    - by matthias-meyer
    I was wondering whether data types in a literal "create table" statement, executed over ODBC, are replaced with their database-specific counterparts (platform is Windows/.NET/C#). I cannot find this feature in the ODBC docs, and there seems to be no list of literal "ODBC data types". However, I know that this works for Oracle, SQL Server and Access; the following statement is executed correctly, although LONGVARBINARY is not a native type in all of these systems:

    CREATE TABLE (MYCOLUMN LONGVARBINARY)

    However, e.g. for Oracle, the mapped native type depends on the ODBC driver used. Is this an undocumented feature? Is there a list of supported type names anywhere? Thanks!
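    Not an answer to where the rewriting is documented, but a sketch of the documented way to ask a driver about its types: ODBC's SQLGetTypeInfo, surfaced in .NET as GetSchema("DataTypes"), lists the native name the driver offers for each ODBC SQL type (the connection string is a placeholder).

    // using System; using System.Data; using System.Data.Odbc;
    using (var conn = new OdbcConnection("DSN=mydsn"))  // placeholder DSN
    {
        conn.Open();
        DataTable types = conn.GetSchema("DataTypes");
        foreach (DataRow row in types.Rows)
            Console.WriteLine("{0} (ProviderDbType {1})",
                row["TypeName"], row["ProviderDbType"]);
    }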

    Read the article

  • WatiN: wait for JavaScript execution to complete before clicking a link (IE)

    - by Sendoh
    I have a page displaying a statement every 2 seconds (using JavaScript); at the end of it there is a link which the user can click. With IE, Link.Click() will click the link before the JavaScript-generated statements are complete (with Firefox it will throw an exception, which is fine with me). Is there a way to wait for the JavaScript execution to complete before the link gets clicked (IE)? TIA

    EDITED
    // the code, pretty simple: search for a link that starts with
    // "Continue searching" in the browser, then click it
    if (ln.Text.StartsWith("Continue searching", StringComparison.CurrentCultureIgnoreCase))
    {
        ln.Click();
    }

    Thx
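    A sketch of two WatiN-side options (the window.done flag is invented; the page would have to set something like it when the statements finish, and Eval requires a WatiN version that has it): either wait for the link to exist, if it is only added to the DOM at the end, or poll a page-side flag before clicking.

    // option 1: the link only appears once the statements are done
    ln.WaitUntilExists(30); // timeout in seconds
    ln.Click();

    // option 2: poll a flag the page sets when it finishes (hypothetical flag)
    while (browser.Eval("window.done ? 'yes' : 'no'") != "yes")
        System.Threading.Thread.Sleep(500);
    ln.Click();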

    Read the article

  • Behaviour of insertion trigger when defining autoincrement in Oracle

    - by Genba
    I have been looking for a way to define an autoincrement data type in Oracle and have found these questions on Stack Overflow: "Autoincrement in Oracle" and "Autoincrement Primary key in Oracle database". The way to use autoincrement types consists of defining a sequence and a trigger to make insertion transparent, where the insertion trigger looks like this:

    create trigger mytable_trg
    before insert on mytable
    for each row
    when (new.id is null)
    begin
      select myseq.nextval into :new.id from dual;
    end;

    I have some doubts about the behaviour of this trigger: What does this trigger do when the supplied value of "id" is different from NULL? What does the colon before "new" mean? I want the trigger to insert the new row with the next value of the sequence as ID, whatever the supplied value of "new.id" is. I imagine that the WHEN clause makes the trigger insert the new row only if the supplied ID is NULL (and it will not insert, or will fail, otherwise). Could I just remove the WHEN clause so that the trigger always inserts using the next value of the sequence?
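    Not verified against this schema, but a sketch of the always-override variant, under the working assumption that a BEFORE INSERT row trigger never decides whether the row is inserted; it only gets a chance to rewrite the :new values, so dropping the WHEN clause simply makes the assignment unconditional:

    create or replace trigger mytable_trg
    before insert on mytable
    for each row
    begin
      -- overwrite whatever id was supplied, NULL or not
      select myseq.nextval into :new.id from dual;
    end;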

    Read the article
