Search Results

Search found 28024 results on 1121 pages for 'sql 2014'.


  • PaaS, DBaaS and the Oracle Database Cloud Service

    - by yaldahhakim
    As with many widely hyped areas, there is much more variation within the broad spectrum of products referred to as "Cloud" than is immediately apparent. This variation is evident in one of the key misunderstandings about the Oracle Database Cloud Service. People could be forgiven for thinking that the Database Cloud Service is a Database-as-a-Service (DBaaS), but this is actually not true. The Database Cloud Service is a Platform-as-a-Service (PaaS), which presents a different user and developer interface and has a different set of qualities. A good way to think about the difference between these two varieties of Cloud offering is that you, the customer, have to deal with things at the level of the offering, but not with anything below it. In practice, this means that with DBaaS you do not have to deal with hardware or system software, including installation and maintenance; you also do not have much control over the configuration of these layers. With PaaS, you don't have to deal with hardware, system software, or database software – and also do not have control over these levels in the stack. So you cannot modify configuration parameters for the database with the Database Cloud Service – your interface is through SQL and PL/SQL, with Application Express (included in the Database Cloud Service), through JDBC for Java apps running in the Java Cloud Service, or through RESTful Web Services. You will notice what is not mentioned there – SQL*Net. You cannot access your Oracle Database Cloud Service by changing an entry in the TNSNAMES file and using SQL*Net, so the effort involved in migrating an existing Oracle Database in your data center to the Database Cloud Service may be prohibitive. The good news is that Application Express and the RESTful Web Services wizard in the Database Cloud Service allow you to develop new applications very quickly, and, of course, provisioning the entire Database Cloud Service takes only minutes.

    Read the article

  • Generate MERGE statements from a table

    - by Bill Graziano
    We have a requirement to build a test environment where certain tables get reset from production every night.  These are mainly lookup tables.  I played around with all kinds of fancy solutions and finally settled on a series of MERGE statements.  And being lazy I didn't want to write them myself.  The stored procedure below will generate a MERGE statement for the table you pass it.  If you have identity values it populates those properly.  You need to have primary keys on the table for the joins to be generated properly.  The only thing hard-coded is the source database; you'll need to update that for your environment.  We actually used a linked server in our situation.

        CREATE PROC dba_GenerateMergeStatement (@table NVARCHAR(128))
        AS
        SET NOCOUNT ON;
        DECLARE @return INT;
        PRINT '-- ' + @table + ' -------------------------------------------------------------'

        -- Set the identity insert ON for tables with identities
        SELECT @return = OBJECTPROPERTY(OBJECT_ID(@table), 'TableHasIdentity')
        IF @return = 1 PRINT 'SET IDENTITY_INSERT [dbo].[' + @table + '] ON; '

        DECLARE @list VARCHAR(MAX) = '';

        PRINT 'MERGE [dbo].[' + @table + '] AS t'
        PRINT 'USING (SELECT * FROM [source_database].[dbo].[' + @table + ']) as s'

        -- Get the join columns from the primary key
        SET @list = ''
        SELECT @list = @list + 't.[' + c.COLUMN_NAME + '] = s.[' + c.COLUMN_NAME + '] AND '
        FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
        WHERE pk.TABLE_NAME = @table
          AND CONSTRAINT_TYPE = 'PRIMARY KEY'
          AND c.TABLE_NAME = pk.TABLE_NAME
          AND c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME
        SELECT @list = LEFT(@list, LEN(@list) - 3)
        PRINT 'ON ( ' + @list + ')'

        -- WHEN MATCHED
        PRINT 'WHEN MATCHED THEN UPDATE SET'
        SELECT @list = '';
        SELECT @list = @list + ' [' + [name] + '] = s.[' + [name] + '],'
        FROM sys.columns
        WHERE object_id = OBJECT_ID(@table)
          -- don't update primary keys
          AND [name] NOT IN (SELECT [column_name]
              FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
              WHERE pk.TABLE_NAME = @table
                AND CONSTRAINT_TYPE = 'PRIMARY KEY'
                AND c.TABLE_NAME = pk.TABLE_NAME
                AND c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME)
          -- and don't update identity columns
          AND COLUMNPROPERTY(OBJECT_ID(@table), [name], 'IsIdentity') = 0
        PRINT LEFT(@list, LEN(@list) - 3)

        -- WHEN NOT MATCHED BY TARGET
        PRINT ' WHEN NOT MATCHED BY TARGET THEN';
        -- Get the insert list
        SET @list = ''
        SELECT @list = @list + '[' + [name] + '], '
        FROM sys.columns WHERE object_id = OBJECT_ID(@table)
        SELECT @list = LEFT(@list, LEN(@list) - 1)
        PRINT ' INSERT(' + @list + ')'
        -- Get the values list
        SET @list = ''
        SELECT @list = @list + 's.[' + [name] + '], '
        FROM sys.columns WHERE object_id = OBJECT_ID(@table)
        SELECT @list = LEFT(@list, LEN(@list) - 1)
        PRINT ' VALUES(' + @list + ')'

        -- WHEN NOT MATCHED BY SOURCE
        PRINT 'WHEN NOT MATCHED BY SOURCE THEN DELETE; '
        PRINT ''
        PRINT 'PRINT ''' + @table + ': '' + CAST(@@ROWCOUNT AS VARCHAR(100));';
        PRINT ''

        -- Set the identity insert OFF for tables with identities
        SELECT @return = OBJECTPROPERTY(OBJECT_ID(@table), 'TableHasIdentity')
        IF @return = 1 PRINT 'SET IDENTITY_INSERT [dbo].[' + @table + '] OFF; '
        PRINT ''
        PRINT 'GO'
        PRINT '';
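
    Calling it is a one-liner (the table name here is hypothetical); the procedure PRINTs the finished MERGE script to the Messages tab, ready to be copied into the nightly reset job:

        -- Hypothetical lookup table; output appears in the Messages tab.
        EXEC dba_GenerateMergeStatement @table = N'StateLookup';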

    Read the article

  • PASS Summit 2011: Save Money Now

    - by Bill Graziano
    Register by March 31st and save $200.  On April 1st we increase the price.  On July 1st we increase it again.  We have regular price bumps all the way through to the Summit.  You can save yourself $200 if you register by Thursday. In two years of marketing for PASS and a year of finance I've learned a fair bit about our pricing, why we do this and how you react to it.  Let me help you save some money! Price bumps drive registrations.  We see big spikes in the two weeks prior to a price increase.  Having a deadline with a cost attached is a great motivator to get people to take action. Registering early helps you and it helps PASS.  You get the exact same Summit at a cheaper rate.  PASS gets smoother cash flow and a better idea of how many people to expect.  We also get people who are already registered and will tell their friends about the conference. This tiered pricing lets us serve those who are very price-conscious.  They can register early and take advantage of these discounts.  I know there are people that pay for this conference out of their own pockets.  This is a great way for those people to reduce the cost of the conference.  (And remember for next year that our cheapest pricing starts right after the Summit and usually goes up around the first of the year.) We also get big price bumps after we announce the program and the pre-conference sessions.  If you wrote down the 50 or so best-known speakers in the SQL Server community, I'm guessing we'll have nearly all of them at the conference.  We did last year.  I expect we will this year too.  We're going to have good sessions.  Why wait?  Register today. If you want to attend a pre-conference session you can always add it to your registration later.  Pre-con prices don't change.  It's very easy to update your registration and add a pre-conference session later. I want as many people as possible to attend the Summit.  It's been a great experience for me and I hope it will be for you.  And if you are going to go, do yourself a favor and save some money.  Register today!

    Read the article

  • What Counts For a DBA – Depth

    - by Louis Davidson
    SQL Server offers very simple interfaces to many of its features. Most people could open up SSMS, connect to a server, write a simple query and see the results. Even several of the core DBA tasks are deceptively straightforward. It doesn't take a rocket scientist to perform a basic database backup or run a trace (even using the newfangled Extended Events!). However, appearances can be deceptive, and oftentimes it is really important that a DBA understands not just the basics of how to perform a task, but why we do a task, and how that task works. As an analogy, consider a child walking into a darkened room. Most would know that they need to turn on the light, and how to do it, so they flick the switch. But what happens if light fails to shine forth? Most would immediately tell you that you need to consider changing the light bulb. So you hop in the car and take them to the local home store and instruct them to buy a replacement. Confronted with a 40-foot display of light bulbs, how will they decide which of the hundreds of bulbs, of different types, fittings, shapes, colors, power and efficiency ratings, is the right choice? Obviously the main lesson the child is going to learn this day is how to use their cell phone as a flashlight so they don't have to ask for help the next time. Likewise, when the metaphorical toddlers who use your database server have issues, they will instinctively know something is wrong, and may even have some idea what caused it, but will have no depth of knowledge to figure out the right solution. That is where the DBA comes in and attempts to save the day. However, when one looks beneath the shiny UI, SQL Server has its own "40-foot display of light bulbs", in the form of the tremendous number of tools and the often-bewildering amount of information they can present to the DBA to help us find issues. Unfortunately, without that depth we end up resorting to guesswork, trying different "bulbs" over and over, hoping to stumble on the answer. This is where the right depth of knowledge goes a long way. If we need to write a SELECT statement, then knowing the syntax and where to find the data is not enough. Knowledge of indexes and query plans is essential. Without it, we might hit on a query that "works", but we are basically still a user, not a programmer, because we have no real control over our platform. Is that level of knowledge deep enough? Probably not, since knowledge of the underlying metadata and structures would be very useful in helping us make sense of any query plan. Understanding the structure of an index makes the "key lookup" operator not sound like what you do when someone tapes your car key to the ceiling. So is even this level of understanding deep enough? Do we need to understand the memory architecture used to process the query? It might be a comforting level of knowledge, and will doubtless come in handy at some point, but is not strictly necessary in most cases. Beyond that lies (more or less) full knowledge of the SQL language and the intricacies of every step the SQL Server engine takes to process our query. My personal theory is that, as professionals, our knowledge of a given task should extend, at a minimum, one level deeper than is strictly necessary to perform the task. Anything deeper can be left to the ridiculously smart, or obsessive, or both. As an example: tasked with storing an integer value between 0 and 99999999, it's essential that I know that choosing an integer over Decimal(8,0) will likely offer performance benefits.
    It is then useful that I also understand the value of adding a CHECK constraint, to make sure the values are restricted to the desired range; and comforting that I know a little about the underlying processors, registers and computer math. Anything further, I leave to the likes of Joe Chang, whose recent blog post on the topic offers depth by the bucketful!
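
    To make that last example concrete, here is a minimal sketch (table and column names invented) of the integer-plus-constraint approach:

        -- An INT column is cheaper to store and compare than DECIMAL(8,0),
        -- and the CHECK constraint enforces the intended 0-99999999 range.
        CREATE TABLE dbo.Measurement
        (
            MeasurementId INT IDENTITY PRIMARY KEY,
            Reading INT NOT NULL
                CONSTRAINT CHK_Measurement_Reading
                CHECK (Reading BETWEEN 0 AND 99999999)
        );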

    Read the article

  • The five steps of business intelligence adoption: where are you?

    - by Red Gate Software BI Tools Team
    When I was in Orlando and New York last month, I spoke to a lot of business intelligence users. What they told me suggested a path of BI adoption. The user's place on the path depends on the size and sophistication of their organisation.

    Step 1: A company with a database of customer transactions will often want to examine particular data, like revenue and unit sales over the last period for each product and territory. To do this, they probably use simple SQL queries or stored procedures to produce data on demand.

    Step 2: The results from step one are saved in an Excel document, so business users can analyse them with filters or pivot tables. Alternatively, SQL Server Reporting Services (SSRS) might be used to generate a report of the SQL query for display on an intranet page.

    Step 3: If these queries are run frequently, or business users want to explore data from multiple sources more freely, it may become necessary to create a new database structured for analysis rather than CRUD (create, retrieve, update, and delete). For example, data from more than one system — plus external information — may be incorporated into a data warehouse. This can become 'one source of truth' for the business's operational activities. The warehouse will probably have a simple 'star' schema, with fact tables representing the measures to be analysed (e.g. unit sales, revenue) and dimension tables defining how this data is aggregated (e.g. by time, region or product); a minimal SQL sketch of such a schema appears at the end of this post. Reports can be generated from the warehouse with Excel, SSRS or other tools.

    Step 4: Not too long ago, Microsoft introduced an Excel plug-in, PowerPivot, which allows users to bring larger volumes of data into Excel documents and create links between multiple tables. These BISM Tabular documents can be created by the database owners or other expert Excel users and viewed by anyone with Excel PowerPivot. Sometimes, business users may use PowerPivot to create reports directly from the primary database, bypassing the need for a data warehouse. This can introduce problems when there are misunderstandings of the database structure or no single 'source of truth' for key data.

    Step 5: Steps three and four are often enough to satisfy business intelligence needs, especially if users are sophisticated enough to work with the warehouse in Excel or SSRS. However, sometimes the relationships between the data are too complex, or the queries which aggregate across periods, regions etc. are too slow. In these cases, it can be necessary to formalise how the data is analysed and pre-build some of the aggregations. To do this, a business intelligence professional will typically use SQL Server Analysis Services (SSAS) to create a multidimensional model — or "cube" — that more simply represents key measures and aggregates them across specified dimensions.

    Step five is where our tool, SSAS Compare, becomes useful, as it helps review and deploy changes from development to production. For us at Red Gate, the primary value of SSAS Compare is to establish a dialog with BI users, so we can develop a portfolio of products that support creation and deployment across a range of report and model types. For example, PowerPivot and the new BISM Tabular model create a potential customer base for tools that extend beyond BI professionals. We're interested in learning where people are in this story, so we've created a six-question survey to find out. Whether you're at step one or step five, we'd love to know how you use BI so we can decide how to build tools that solve your problems.
    So if you have sixty seconds to spare, tell us on the survey!
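
    As promised in step three, a minimal star schema sketch (all names invented):

        -- One fact table of unit sales and revenue, with dimension tables
        -- defining how the measures are aggregated (time, region, product).
        CREATE TABLE dbo.DimDate    (DateKey INT PRIMARY KEY, CalendarDate DATE);
        CREATE TABLE dbo.DimRegion  (RegionKey INT PRIMARY KEY, RegionName NVARCHAR(50));
        CREATE TABLE dbo.DimProduct (ProductKey INT PRIMARY KEY, ProductName NVARCHAR(100));

        CREATE TABLE dbo.FactSales
        (
            DateKey    INT NOT NULL REFERENCES dbo.DimDate(DateKey),
            RegionKey  INT NOT NULL REFERENCES dbo.DimRegion(RegionKey),
            ProductKey INT NOT NULL REFERENCES dbo.DimProduct(ProductKey),
            UnitSales  INT NOT NULL,
            Revenue    DECIMAL(18,2) NOT NULL
        );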

    Read the article

  • Keeping your options open in a cloud solution

    - by BuckWoody
    In on-premises solutions we have the full range of options open for a given computing solution – but we don't always take advantage of them, for multiple reasons. Data goes in a Relational Database Management System, files go on a share, and e-mail goes to the Exchange server. Over time, vendors (including ourselves) add functionality to one product that allows non-standard use of the platform. For example, SQL Server (and Oracle, and others) allow large binary storage in or through the system – something an RDBMS was not originally intended to handle. There are certainly times when this makes sense, of course, but often these platform hammers turn every problem into a nail. It can make us "lazy" in our design – we sometimes don't take the time to learn another architecture because the one we've spent so much time with can handle what we want to do. But there's a distinct danger here. In nature, when a population shares too many of the same traits, it can suffer a complete collapse if a situation exploits a weakness shared by that population. The same is true of not using the right tool for the job in a computing environment. Your company or organization depends on your knowledge as a professional to select the best mix of supportable, flexible, cost-effective technologies to solve their problems, whether you're in an architect role or not.  So take some time today to learn something new. The way I do this is to select a given problem and try to solve it with a technology I'm not familiar with. For instance – create a Purchase Order system in Excel, then in Hadoop or MongoDB, or even in flat files using PowerShell as an interface. No, I'm not suggesting any of these architectures is the proper way to solve the PO problem, but taking something concrete that you know well and applying that meta-knowledge to another platform will assist you in exercising the "little grey cells" and help you and your organization understand what is open to you. And of course you can do all of this on-premises – but my recommendation is to check out a cloud platform (my suggestion would of course be Windows Azure :) ) and try it there. Most providers (including Microsoft) provide free time to do that.

    Read the article

  • Let's keep informed with "Data Explorer"

    - by Luca Zavarella
    At PASS Summit 2011 a new project was announced: a Microsoft SQL Azure Lab whose codename is Microsoft "Data Explorer". According to the official blog (http://blogs.msdn.com/b/dataexplorer/), this new tool provides an innovative way to acquire new knowledge from the data that interests you. In a nutshell, Data Explorer allows you to combine data from multiple sources and to publish and share the result. In addition, you can generate data streams in the RESTful open format (Open Data Protocol), and they can then be used by other applications. Nonetheless we can still use Excel or PowerPivot to analyze the results. Sources can be varied: Excel spreadsheets, text files, databases, Windows Azure Marketplace, etc. For those who are not familiar with this resource, I strongly suggest you keep an eye on the data services available in the Marketplace: https://datamarket.azure.com/browse/Data To tell the truth, as I read the above blog post, I was tempted to think of Data Explorer as an "SSIS on Azure" addressed to the Power User. In fact, reading the response from Tim Mallalieu (Group Program Manager of Data Explorer) to a comment made on his post, my first impression was confirmed: "…we were originally thinking of ourselves as Self-Service ETL. As we talked to more folks and started partnering with other teams we realized that would be an area that we can add value but that there were more opportunities emerging." The typical operations of the ETL phase (processing and organizing data in different formats) can be carried out with Data Explorer Mashup, and the flexibility in the manipulation of information is given by the Data Explorer Formula Language, a formula-based, Excel-style language. (The original post includes screenshots of the tool and of the formula language.) Anyone wishing to know more can check the project page in addition to the aforementioned blog: http://www.microsoft.com/en-us/sqlazurelabs/labs/dataexplorer.aspx In light of this new project, there is no doubt about Microsoft's intention to get closer and closer to the Power User, providing flexible and very easy-to-use tools for data analysis; the prime example of this is PowerPivot. The question that remains is always the same: having more Power Users in a company will implicitly mean having different data models representing the same reality, and this would inevitably lead to anarchical data management... What do you think about that?

    Read the article

  • Insert Or update (aka Replace or Upsert)

    - by Davide Mauri
    The topic is really not new, but since it's the second time in a few days that I've had to explain it to different customers, I think it's worth making a post out of it. Many times developers would like to insert a new row in a table or, if the row already exists, update it with new data. MySQL has a specific statement for this action, called REPLACE: http://dev.mysql.com/doc/refman/5.0/en/replace.html or the INSERT ... ON DUPLICATE KEY UPDATE option: http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html With SQL Server you can do the very same thing in a more standard way, using the MERGE statement with the support of row constructors. Let's say that you have this table:

        CREATE TABLE dbo.MyTargetTable
        (
            id INT NOT NULL PRIMARY KEY IDENTITY,
            alternate_key VARCHAR(50) UNIQUE,
            col_1 INT,
            col_2 INT,
            col_3 INT,
            col_4 INT,
            col_5 INT
        )
        GO

        INSERT [dbo].[MyTargetTable] VALUES
        ('GUQNH', 10, 100, 1000, 10000, 100000),
        ('UJAHL', 20, 200, 2000, 20000, 200000),
        ('YKXVW', 30, 300, 3000, 30000, 300000),
        ('SXMOJ', 40, 400, 4000, 40000, 400000),
        ('JTPGM', 50, 500, 5000, 50000, 500000),
        ('ZITKS', 60, 600, 6000, 60000, 600000),
        ('GGEYD', 70, 700, 7000, 70000, 700000),
        ('UFXMS', 80, 800, 8000, 80000, 800000),
        ('BNGGP', 90, 900, 9000, 90000, 900000),
        ('AMUKO', 100, 1000, 10000, 100000, 1000000)
        GO

    If you want to insert or update a row, you can just do this:

        MERGE INTO dbo.MyTargetTable T
        USING (
            SELECT * FROM (VALUES ('ZITKS', 61, 601, 6001, 60001, 600001))
            Dummy(alternate_key, col_1, col_2, col_3, col_4, col_5)
        ) S
        ON T.alternate_key = S.alternate_key
        WHEN NOT MATCHED THEN
            INSERT VALUES (alternate_key, col_1, col_2, col_3, col_4, col_5)
        WHEN MATCHED AND T.col_1 != S.col_1 THEN
            UPDATE SET
                T.col_1 = S.col_1,
                T.col_2 = S.col_2,
                T.col_3 = S.col_3,
                T.col_4 = S.col_4,
                T.col_5 = S.col_5
        ;

    If you want to insert/update more than one row at once, you can super-charge the idea using table-valued parameters, which you can send straight from your .NET application. Easy, powerful and effective.
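
    A rough sketch of that table-valued parameter idea (the type and procedure names are invented; the MERGE body follows the same pattern as above):

        -- Hypothetical table type matching MyTargetTable's columns.
        CREATE TYPE dbo.MyTargetTableType AS TABLE
        (
            alternate_key VARCHAR(50),
            col_1 INT, col_2 INT, col_3 INT, col_4 INT, col_5 INT
        );
        GO

        CREATE PROC dbo.UpsertMyTargetTable @rows dbo.MyTargetTableType READONLY
        AS
        MERGE INTO dbo.MyTargetTable T
        USING @rows S
        ON T.alternate_key = S.alternate_key
        WHEN NOT MATCHED THEN
            INSERT (alternate_key, col_1, col_2, col_3, col_4, col_5)
            VALUES (S.alternate_key, S.col_1, S.col_2, S.col_3, S.col_4, S.col_5)
        WHEN MATCHED THEN
            UPDATE SET T.col_1 = S.col_1, T.col_2 = S.col_2, T.col_3 = S.col_3,
                       T.col_4 = S.col_4, T.col_5 = S.col_5;
        GO

    From .NET, the @rows argument is passed as a structured parameter (SqlDbType.Structured), typically filled from a DataTable.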

    Read the article

  • 11gR2 RAC: how ASM finds its spfile

    - by Liu Maclean
    In 11gR2 RAC the OCR and voting disks are normally stored in ASM, unlike the 10g days when a two-node RAC cluster had to keep them on raw devices. On top of that, in 11gR2 the ASM spfile itself can be stored inside an ASM diskgroup. This looks like a chicken-and-egg problem: the ASM instance needs its parameter file before it can mount a diskgroup, yet the parameter file lives in a diskgroup. So how does ASM read its own spfile at startup? A reader on T.askmaclean.com asked exactly this question:

    "hello maclean, spget shows the spfile as ASMCMD> spget +CRSDG/rac/asmparameterfile/registry.253.787925627. ASM is itself an ORACLE instance, and it has to read its parameter file before it can mount any diskgroup, so how does it manage to locate and read an spfile that is stored inside a diskgroup? thanks!"

    Here is the explanation. In 11.2, Oracle Clusterware stores its voting disk files differently from 11.1 and 10.2: the voting disk file is kept together with the OCR, and in 11.2 both can be placed in ASM. The voting disk file is located through the CSS voting file discovery string in the GPnP profile; when that discovery string points at ASM, it matches the ASM discovery string. In this example the LUNs used by ASM are bound with udev, so the udev device names take the form /dev/rasm-disk*. Use gpnptool get to dump the GPnP profile:

        [grid@maclean1 trace]$ gpnptool get
        Warning: some command line parameters were defaulted. Resulting command line:
        /g01/grid/app/11.2.0/grid/bin/gpnptool.bin get -o-
        <?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="9" ClusterUId="452185be9cd14ff4ffdc7688ec5439bf" ClusterName="maclean-cluster" PALocation="">
        <gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" IP="192.168.1.0" Adapter="eth0" Use="public"/><gpnp:Network id="net2" IP="172.168.1.0" Adapter="eth1" Use="cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile>
        <orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/>
        <orcl:ASM-Profile id="asm" DiscoveryString="/dev/rasm*" SPFile="+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933"/>
        <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>L1SLg10AqGEauCQ4ne9quucITZA=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>rTyZm9vfcQCMuian6isnAThUmsV4xPoK2fteMc1l0GIvRvHncMwLQzPM/QrXCGGTCEvgvXzUPEKzmdX2oy5vLcztN60UHr6AJtA2JYYodmrsFwEyVBQ1D6wH+HQiOe2SG9UzdQnNtWSbjD4jfZkeQWyMPfWdKm071Ek0Rfb4nxE=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile>
        Success.

    Two entries in the profile are of interest:

        <orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/>
            ==> the CSS voting disks are found through +ASM
        <orcl:ASM-Profile id="asm" DiscoveryString="/dev/rasm*" SPFile="+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933"/>
            ==> ASM discovers its disks with DiscoveryString="/dev/rasm*", and the SPFile entry is an ASM alias for the ASM parameter file

    So the GPnP profile records an alias for the ASM parameter file, and through it the ASM instance can find the actual spfile without the diskgroup being mounted. How the alias resolves becomes clear once we look at the disks themselves.
    Let's check which disks and allocation units hold the spfile +SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933:

        [grid@maclean1 wallets]$ sqlplus / as sysasm

        SQL*Plus: Release 11.2.0.3.0 Production on Tue Jul 17 05:45:35 2012
        Copyright (c) 1982, 2011, Oracle. All rights reserved.
        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
        With the Real Application Clusters and Automatic Storage Management options

        SQL> set linesize 140 pagesize 1400
        SQL> col "FILE NAME" format a40
        SQL> set head on
        SQL> select NAME "FILE NAME", AU_KFFXP "AU NUMBER", NUMBER_KFFXP "FILE NUMBER", DISK_KFFXP "DISK NUMBER"
             from x$kffxp, v$asm_alias
             where GROUP_KFFXP = GROUP_NUMBER
               and NUMBER_KFFXP = FILE_NUMBER
               and name in ('REGISTRY.253.788682933')
             order by DISK_KFFXP, AU_KFFXP;

        FILE NAME                                 AU NUMBER FILE NUMBER DISK NUMBER
        ---------------------------------------- ---------- ----------- -----------
        REGISTRY.253.788682933                           39         253           1
        REGISTRY.253.788682933                           35         253           3
        REGISTRY.253.788682933                           35         253           4

        SQL> col path for a50
        SQL> select disk_number, path from v$asm_disk where disk_number in (1,3,4) and GROUP_NUMBER=3;

        DISK_NUMBER PATH
        ----------- --------------------------------------------------
                  3 /dev/rasm-diske
                  4 /dev/rasm-diskf
                  1 /dev/rasm-diskc

    The ASM spfile is stored with multiple copies (redundancy=high): AU 39 of /dev/rasm-diskc, AU 35 of /dev/rasm-diske and AU 35 of /dev/rasm-diskf. Now read the headers of those ASM disks with kfed:

        [grid@maclean1 wallets]$ kfed read /dev/rasm-diske | grep spfile
        kfdhdb.spfile: 35 ; 0x0f4: 0x00000023
        [grid@maclean1 wallets]$ kfed read /dev/rasm-diskc | grep spfile
        kfdhdb.spfile: 39 ; 0x0f4: 0x00000027
        [grid@maclean1 wallets]$ kfed read /dev/rasm-diskf | grep spfile
        kfdhdb.spfile: 35 ; 0x0f4: 0x00000023

    The kfdhdb.spfile field in each ASM disk header records the AU number of the spfile copy on that disk. At startup, the ASM instance scans the disks matched by the DiscoveryString in the GPnP profile, reads kfdhdb.spfile (and the related header fields) from each disk header, and can therefore read its spfile directly from those AUs and start up, all without mounting the diskgroup first. Mystery solved.

    Read the article

  • Extracting datafiles from an unmountable diskgroup with AMDU

    - by Liu Maclean
    AMDU is Oracle's ASM Metadata Dump Utility. It has three main uses:

    1. Dumping the metadata of ASM disks to files for analysis
    2. Extracting files stored in ASM onto the OS filesystem, whether or not the diskgroup can be mounted
    3. Printing the contents of metadata blocks, in C-struct style and in hexadecimal

    This post looks at the second use: extracting files from an ASM diskgroup. ASM lays files out quite differently from an ordinary filesystem, and when a diskgroup cannot be mounted the files inside it are normally out of reach — which is precisely when you most need to get them out. The key point is that AMDU can extract files from a diskgroup even when it cannot be mounted, and the RDBMS instance does not need to be available either. AMDU ships with 11g, but it can also be used against 10g ASM.

    Picture the scenario: the database's spfile, controlfile and datafiles all live in an ASM diskgroup, and an ASM ORA-600 error is preventing the diskgroup from mounting. AMDU lets you pull those files straight out of the ASM disks.

    Example 1: extracting the spfile, controlfile and datafiles.

    Assume we already have the spfile (or a pfile built from it); what we need from it is the control_files parameter:

        SQL> show parameter control_files

        NAME            TYPE    VALUE
        --------------- ------- ------------------------------
        control_files   string  +DATA/prodb/controlfile/current.260.794687955,
                                +FRA/prodb/controlfile/current.256.794687955

    control_files points into ASM; in +DATA/prodb/controlfile/current.260.794687955, 260 is the controlfile's file number within the +DATA diskgroup. We also need the ASM disk discovery path, which can be taken from the asm_diskstring parameter in the ASM spfile.

        [oracle@mlab2 oracle.SupportTools]$ unzip amdu_X86-64.zip
        Archive: amdu_X86-64.zip
        inflating: libskgxp11.so
        inflating: amdu
        inflating: libnnz11.so
        inflating: libclntsh.so.11.1

        [oracle@mlab2 oracle.SupportTools]$ export LD_LIBRARY_PATH=./
        [oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring '/dev/asm*' -extract data.260
        amdu_2009_10_10_20_19_17/
        AMDU-00204: Disk N0006 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0006: '/dev/asm-disk10'
        AMDU-00204: Disk N0003 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0003: '/dev/asm-disk5'
        AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0002: '/dev/asm-disk6'

        [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_19_17/
        [oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls
        DATA_260.f report.txt
        [oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls -l
        total 9548
        -rw-r--r-- 1 oracle oinstall 9748480 Oct 10 20:19 DATA_260.f
        -rw-r--r-- 1 oracle oinstall    9441 Oct 10 20:19 report.txt

    The extracted DATA_260.f is a usable copy of the controlfile. Point control_files at it and bring the RDBMS up in mount mode:

        SQL> alter system set control_files='/opt/oracle.SupportTools/amdu_2009_10_10_20_19_17/DATA_260.f' scope=spfile;
        System altered.

        SQL> startup force mount;
        ORACLE instance started.
        Total System Global Area 1870647296 bytes
        Fixed Size                  2229424 bytes
        Variable Size             452987728 bytes
        Database Buffers         1409286144 bytes
        Redo Buffers                6144000 bytes
        Database mounted.

        SQL> select name from v$datafile;

        NAME
        --------------------------------------------------------------------------------
        +DATA/prodb/datafile/system.256.794687873
        +DATA/prodb/datafile/sysaux.257.794687875
        +DATA/prodb/datafile/undotbs1.258.794687875
        +DATA/prodb/datafile/users.259.794687875
        +DATA/prodb/datafile/example.265.794687995
        +DATA/prodb/datafile/mactbs.267.794688457

        6 rows selected.

    Once the instance is mounted, v$datafile lists the datafiles, and each name carries the file number within the diskgroup, so ./amdu -diskstring '/dev/asm*' -extract can be used in the same way to pull each datafile out:

        [oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring '/dev/asm*' -extract data.256
        amdu_2009_10_10_20_22_21/
        AMDU-00204: Disk N0006 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0006: '/dev/asm-disk10'
        AMDU-00204: Disk N0003 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0003: '/dev/asm-disk5'
        AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0002: '/dev/asm-disk6'

        [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_22_21/
        [oracle@mlab2 amdu_2009_10_10_20_22_21]$ ls
        DATA_256.f report.txt
        [oracle@mlab2 amdu_2009_10_10_20_22_21]$ dbv file=DATA_256.f

        DBVERIFY: Release 11.2.0.3.0 - Production on Sat Oct 10 20:23:12 2009
        Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
        DBVERIFY - Verification starting : FILE = /opt/oracle.SupportTools/amdu_2009_10_10_20_22_21/DATA_256.f
        DBVERIFY - Verification complete

        Total Pages Examined          : 90880
        Total Pages Processed (Data)  : 59817
        Total Pages Failing (Data)    : 0
        Total Pages Processed (Index) : 12609
        Total Pages Failing (Index)   : 0
        Total Pages Processed (Other) : 3637
        Total Pages Processed (Seg)   : 1
        Total Pages Failing (Seg)     : 0
        Total Pages Empty             : 14817
        Total Pages Marked Corrupt    : 0
        Total Pages Influx            : 0
        Total Pages Encrypted         : 0
        Highest block SCN             : 1125305 (0.1125305)

    dbv confirms that the extracted datafile is clean.

    Read the article

  • Connecting to MySQL from Android with JDBC

    - by manuraphy
    Hi, I used the following code to connect to MySQL on the local host from Android. It only displays the actions given in the catch section, and I don't know whether it's a connection problem or not.

        package com.test1;

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import android.app.Activity;
        import android.os.Bundle;
        import android.widget.TextView;

        public class Test1Activity extends Activity {
            /** Called when the activity is first created. */
            String str = "new";
            static ResultSet rs;
            static PreparedStatement st;
            static Connection con;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                final TextView tv = (TextView) findViewById(R.id.user);
                try {
                    Class.forName("com.mysql.jdbc.Driver");
                    con = DriverManager.getConnection("jdbc:mysql://10.0.2.2:8080/example", "root", "");
                    st = con.prepareStatement("select * from country where id=1");
                    rs = st.executeQuery();
                    while (rs.next()) {
                        str = rs.getString(2);
                    }
                    tv.setText(str);
                    setContentView(tv);
                } catch (Exception e) {
                    tv.setText(str);
                }
            }
        }

    When it executes, it displays "new" in the AVD, and the log shows:

        java.lang.management.ManagementFactory.getThreadMXBean, referenced from method com.mysql.jdbc.MysqlIO.appendDeadlockStatusInformation
        Could not find class 'javax.naming.StringRefAddr', referenced from method com.mysql.jdbc.ConnectionPropertiesImpl$ConnectionProperty.storeTo
        Could not find method javax.naming.Reference.get, referenced from method com.mysql.jdbc.ConnectionPropertiesImpl$ConnectionProperty.initializeFrom

    Can anyone suggest a solution? Thanks in advance.
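
    One small diagnostic step, offered as a sketch rather than a fix: the catch block above swallows the exception, so nothing says whether the driver loaded or the connection failed. Logging the exception makes the real cause visible in LogCat (this assumes an added import android.util.Log):

        catch (Exception e) {
            // Log the full stack trace so the actual failure (driver class,
            // network, permissions) is visible instead of hidden behind "new".
            Log.e("Test1Activity", "Database access failed", e);
            tv.setText(str);
        }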

    Read the article

  • Problem with Informix JDBC, MONEY and decimal separator in string literals

    - by Michal Niklas
    I have a problem with a JDBC application that uses the MONEY data type. When I insert into a MONEY column:

        insert into _money_test (amt) values ('123.45')

    I get the exception:

        Character to numeric conversion error

    The same SQL works from a native Windows application using the ODBC driver. I live in Poland and have the Polish locale, and in my country a comma separates the decimal part of a number, so I tried:

        insert into _money_test (amt) values ('123,45')

    And it worked. I checked that in a PreparedStatement I must use the dot separator: 123.45. And of course I can use:

        insert into _money_test (amt) values (123.45)

    But some code is "general": it imports data from a CSV file, and it was safe to put the number into a string literal. How do I force JDBC to use DBMONEY (or simply the dot) in literals? My workstation is WinXP. I have the ODBC and JDBC Informix clients in version 3.50 TC5/JC5. I have set DBMONEY to just the dot: DBMONEY=.

    EDIT: Test code in Jython:

        import sys
        import traceback
        from java.sql import DriverManager
        from java.lang import Class

        Class.forName("com.informix.jdbc.IfxDriver")

        QUERY = "insert into _money_test (amt) values ('123.45')"

        def test_money(driver, db_url, usr, passwd):
            try:
                print("\n\n%s\n--------------" % (driver))
                db = DriverManager.getConnection(db_url, usr, passwd)
                c = db.createStatement()
                c.execute("delete from _money_test")
                c.execute(QUERY)
                rs = c.executeQuery("select amt from _money_test")
                while (rs.next()):
                    print('[%s]' % (rs.getString(1)))
                rs.close()
                c.close()
                db.close()
            except:
                print("there were errors!")
                s = traceback.format_exc()
                sys.stderr.write("%s\n" % (s))

        print(QUERY)
        test_money("com.informix.jdbc.IfxDriver",
                   'jdbc:informix-sqli://169.0.1.225:9088/test:informixserver=ol_225;DB_LOCALE=pl_PL.CP1250;CLIENT_LOCALE=pl_PL.CP1250;charSet=CP1250',
                   'informix', 'passwd')
        test_money("sun.jdbc.odbc.JdbcOdbcDriver", 'jdbc:odbc:test', 'informix', 'passwd')

    Results when I run the money literal with a dot and with a comma:

        C:\db_examples>jython ifx_jdbc_money.py
        insert into _money_test (amt) values ('123,45')

        com.informix.jdbc.IfxDriver
        --------------
        [123.45]

        sun.jdbc.odbc.JdbcOdbcDriver
        --------------
        there were errors!
        Traceback (most recent call last):
          File "ifx_jdbc_money.py", line 16, in test_money
            c.execute(QUERY)
        SQLException: java.sql.SQLException: [Informix][Informix ODBC Driver][Informix]Character to numeric conversion error

        C:\db_examples>jython ifx_jdbc_money.py
        insert into _money_test (amt) values ('123.45')

        com.informix.jdbc.IfxDriver
        --------------
        there were errors!
        Traceback (most recent call last):
          File "ifx_jdbc_money.py", line 16, in test_money
            c.execute(QUERY)
        SQLException: java.sql.SQLException: Character to numeric conversion error

        sun.jdbc.odbc.JdbcOdbcDriver
        --------------
        [123.45]
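
    One way to sidestep the literal-format question entirely, sketched in the same Jython style (assuming the table above), is to bind the amount as a parameter instead of embedding it in the SQL text:

        from java.math import BigDecimal

        def insert_money(db, amount_str):
            # Bind a BigDecimal: the driver sends a numeric value, so neither
            # DBMONEY nor the client locale affects how the literal is parsed.
            st = db.prepareStatement("insert into _money_test (amt) values (?)")
            st.setBigDecimal(1, BigDecimal(amount_str))
            st.executeUpdate()
            st.close()

        # usage: insert_money(db, '123.45')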

    Read the article

  • should I use Entity Framework instead of raw ADO.NET

    - by user110182
    I am new to CSLA and Entity Framework. I am creating a new CSLA / Silverlight application that will replace a 12-year-old Win32 C++ system. The old system uses a custom DCOM business object library and uses ODBC to get to SQL Server. The new system will not immediately replace the old system -- they must coexist against the same database for years to come. At first I thought EF was the way to go, since it is the latest and greatest. After making a small EF model and only 2 CSLA editable root objects (I will eventually have hundreds of objects, as my DB has 800+ tables) I am seriously questioning the use of EF. In the current system I often need to do fine-grained performance tuning of the queries, which I can do because I have 100% control of the generated SQL. But it seems in EF that so much happens behind the scenes that I lose that control. Articles like http://toomanylayers.blogspot.com/2009/01/entity-framework-and-linq-to-sql.html don't help my impression of EF. People seem to like EF because of LINQ to EF, but since my criteria are passed between client and server as criteria objects, it seems like I could build queries just as easily without LINQ. I understand that in WCF RIA there is query projection (or something like that) where I can do client-side LINQ which moves to the server before translation into actual SQL, so in that case I can see the benefit of EF, but not in CSLA. If I use raw ADO.NET, will I regret my decision 5 years from now? Has anyone else made this choice recently, and which way did you go?

    Read the article

  • Save a PDF created with the FPDF PHP library in a MySQL blob field

    - by Davide Gualano
    I need to create a PDF file with the FPDF library and save it in a blob field in my MySQL database. The problem is, when I try to retrieve the file from the blob field and send it to the browser for download, the downloaded file is corrupted and does not display correctly. The same PDF file is correctly displayed if I send it immediately to the browser without storing it in the db, so it seems some of the data gets corrupted when it is inserted in the db. My code is something like this:

        $pdf = new MyPDF(); //class that extends FPDF and creates the pdf file
        $content = $pdf->Output("", "S"); //return the pdf file content as a string
        $sql = "insert into mytable(myblobfield) values('".addslashes($content)."')";
        mysql_query($sql);

    to store the PDF, and like this:

        $sql = "select myblobfield from mytable where id = '1'";
        $result = mysql_query($sql);
        $rs = mysql_fetch_assoc($result);
        $content = stripslashes($rs['myblobfield']);
        header('Content-Type: application/pdf');
        header("Content-Length: ".strlen(content));
        header('Content-Disposition: attachment; filename=myfile.pdf');
        print $content;

    to send it to the browser for downloading. What am I doing wrong? If I change my code to:

        $pdf = new MyPDF();
        $pdf->Output(); //send the pdf to the browser

    the file is correctly displayed, so I assume it is correctly generated and the problem is in the storing in the db. Thanks in advance.
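
    A few things worth checking, offered as guesses rather than a confirmed diagnosis: the Content-Length header above calls strlen(content) without a $, so the browser is told the file is 7 bytes long; stripslashes on the way out removes backslashes that are genuinely part of the PDF bytes (MySQL already unescaped the literal on insert); and mysql_real_escape_string is the usual binary-safe choice for building the insert:

        <?php
        // Storing: escape for the SQL literal only; the database keeps raw bytes.
        $sql = "insert into mytable(myblobfield) values('"
             . mysql_real_escape_string($content) . "')";
        mysql_query($sql);

        // Retrieving: no stripslashes, and note the $ on $content.
        $content = $rs['myblobfield'];
        header('Content-Type: application/pdf');
        header('Content-Length: ' . strlen($content));
        header('Content-Disposition: attachment; filename=myfile.pdf');
        print $content;
        ?>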

    Read the article

  • Deleting a user > need to also delete their projects, and then activities for those projects? (PHP, MySQL)

    - by Jamie
    Hi guys, I'm really stuck with this... basically my system has 4 tables: users, projects, user_projects and activities. The users table has a usertype field which defines (by an integer) whether they are an admin or a user... An admin can create a project, create an activity for the project and assign a user (a limited-access user) an activity. Therefore, this setup means that an admin is never directly associated with an activity (instead, with a project). When my head admin user deletes an admin, I need all projects and activities (for their projects) to be deleted also. My delete script for a user is simple so far and works, but I'm having trouble obtaining the project IDs so I know which activities to remove (those associated with the projects which are about to be deleted):

        $userid = $_GET['userid'];

        $query = "DELETE FROM users WHERE userid=".$userid;
        $result = mysql_query($query, $connection) or die("Error: ".mysql_error());

        $query = "DELETE FROM projects WHERE userid=".$userid;
        $result = mysql_query($query, $connection) or die("Error: ".mysql_error());

        $query = "DELETE FROM userprojects WHERE userid=".$userid;
        $result = mysql_query($query, $connection) or die("Error: ".mysql_error());

        $query = "DELETE FROM activities WHERE projectid=".$projectid;
        $result = mysql_query($query, $connection) or die("Error: ".mysql_error());

    The first three queries execute fine, obviously because the userid is retrieved successfully. However, I know the fourth and final query is wrong, because there is no projectid to be gained from anywhere; I put it there to help show what I am trying to get!! :D I'm guessing that I need something like 'WHERE projectid=' plus something that gathers the removed projects from the userid, which can then be related to the activities for those projects!! It's a simple concept but I'm having trouble... please excuse any bad code as I am learning. Thanks for any help!
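
    Building on that guess, a sketch in plain SQL (123 is a placeholder for the real userid): delete the activities first, while the user's projects still exist, using a subquery to collect that user's project ids, and only then delete the rest:

        -- Collect the project ids via a subquery before the projects are gone.
        DELETE FROM activities
        WHERE projectid IN (SELECT projectid FROM projects WHERE userid = 123);

        DELETE FROM userprojects WHERE userid = 123;
        DELETE FROM projects     WHERE userid = 123;
        DELETE FROM users        WHERE userid = 123;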

    Read the article

  • iPhone SQLite database with German umlauts results in NULL value

    - by Daniel
    Hi guys, I'm quite new to iPhone development, and after searching for an answer for 3 hours now, I hope that you guys can give me a hand. My problem is that I have a SQLite database with German umlauts. Looking at it with a SQLite browser tool shows me that the data is stored correctly, with German umlauts. But selecting fields with German umlauts in them results in a NULL value. I'm already using "stringWithUTF8String", so I don't get where the problem lies. Here is my code:

        - (void)readSearchFromDatabase
        {
            searchFlag = YES;

            // Setup the database object
            sqlite3 *database;

            // Init the SCode array
            searchSCodes = [[NSMutableArray alloc] init];

            // Open the database from the user's filesystem
            if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK)
            {
                NSString *wildcard = @"%";

                // Setup the SQL statement and compile it
                NSString *sql = [NSString stringWithFormat:@"SELECT * FROM scode WHERE ta LIKE '%@%@%@' OR descriptionde LIKE '%@%@%@' OR descriptionen LIKE '%@%@%@'",
                    wildcard, searchBar.text, wildcard,
                    wildcard, searchBar.text, wildcard,
                    wildcard, searchBar.text, wildcard];

                // Creating a const char SQL statement especially for SQLite
                const char *sqlStatement = [sql UTF8String];
                sqlite3_stmt *compiledStatement;
                if (sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK)
                {
                    // Loop through the results and add them to the feeds array
                    while (sqlite3_step(compiledStatement) == SQLITE_ROW)
                    {
                        // Read the data from the result row
                        NSString *aTa = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 0)];
                        NSString *aReport = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 1)];
                        NSString *aDescriptionDE = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 2)];
                        NSString *aDescriptionEN = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 3)];

                        // Results in a NULL value
                        NSLog(@"Output: %@", aDescriptionDE);

                        // Create a new SCode object with the data from the database
                        SCode *searchSCode = [[SCode alloc] initWithTa:aTa report:aReport descriptionDE:aDescriptionDE descriptionEN:aDescriptionEN];

                        // Add the SCode object to the SCodes array
                        [searchSCodes addObject:searchSCode];
                        [searchSCode release];
                    }
                }
                // Release the compiled statement from memory
                sqlite3_finalize(compiledStatement);
            }
            sqlite3_close(database);
        }
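
    One defensive sketch worth trying while debugging (it does not explain the encoding itself): sqlite3_column_text can return NULL, and stringWithUTF8String: must never be given a NULL pointer, so guard each column read:

        // Guard against NULL column data before building NSStrings; passing
        // NULL to stringWithUTF8String: is invalid rather than returning nil.
        const char *raw = (const char *)sqlite3_column_text(compiledStatement, 2);
        NSString *aDescriptionDE = raw ? [NSString stringWithUTF8String:raw] : nil;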

    Read the article

  • Scope of Connection Object for a Website using Connection Pooling (Local or Instance)

    - by Danny
    For a web application with connection pooling enabled, is it better to work with a locally scoped connection object or an instance-scoped connection object? I know there is probably not a big performance difference between the two (because of the pooling), but would you say that one follows a better pattern than the other? Thanks ;)

        public class MyServlet extends HttpServlet {
            DataSource ds;

            public void init() throws ServletException {
                ds = (DataSource) getServletContext().getAttribute("DBCPool");
            }

            protected void doGet(HttpServletRequest arg0, HttpServletResponse arg1)
                    throws ServletException, IOException {
                SomeWork("SELECT * FROM A");
                SomeWork("SELECT * FROM B");
            }

            void SomeWork(String sql) {
                Connection conn = null;
                try {
                    conn = ds.getConnection();
                    // execute some sql .....
                } finally {
                    if (conn != null) {
                        conn.close(); // return to pool
                    }
                }
            }
        }

    Or

        public class MyServlet extends HttpServlet {
            DataSource ds;
            Connection conn;

            public void init() throws ServletException {
                ds = (DataSource) getServletContext().getAttribute("DBCPool");
            }

            protected void doGet(HttpServletRequest arg0, HttpServletResponse arg1)
                    throws ServletException, IOException {
                try {
                    conn = ds.getConnection();
                    SomeWork("SELECT * FROM A");
                    SomeWork("SELECT * FROM B");
                } finally {
                    if (conn != null) {
                        conn.close(); // return to pool
                    }
                }
            }

            void SomeWork(String sql) {
                // execute some sql .....
            }
        }

    Read the article

  • force delete row on django app after migration

    - by unsorted
    After a migration with South, I ended up deleting a column. Now the current data in one of my tables is screwed up and I want to delete it, but attempts to delete just result in an error:

        >>> d = Degree.objects.all()
        >>> d.delete()
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "C:\Python26\lib\site-packages\django\db\models\query.py", line 440, in delete
            for i, obj in izip(xrange(CHUNK_SIZE), del_itr):
          File "C:\Python26\lib\site-packages\django\db\models\query.py", line 106, in _result_iter
            self._fill_cache()
          File "C:\Python26\lib\site-packages\django\db\models\query.py", line 760, in _fill_cache
            self._result_cache.append(self._iter.next())
          File "C:\Python26\lib\site-packages\django\db\models\query.py", line 269, in iterator
            for row in compiler.results_iter():
          File "C:\Python26\lib\site-packages\django\db\models\sql\compiler.py", line 672, in results_iter
            for rows in self.execute_sql(MULTI):
          File "C:\Python26\lib\site-packages\django\db\models\sql\compiler.py", line 727, in execute_sql
            cursor.execute(sql, params)
          File "C:\Python26\lib\site-packages\django\db\backends\util.py", line 15, in execute
            return self.cursor.execute(sql, params)
          File "C:\Python26\lib\site-packages\django\db\backends\sqlite3\base.py", line 200, in execute
            return Database.Cursor.execute(self, query, params)
        DatabaseError: no such column: students_degree.abbrev

    Is there a simple way to just force a delete? Do I drop the table and then rerun manage.py schemamigration to recreate the table in South?
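
    If the goal is just to empty the table without the ORM selecting the dropped column, one hedged sketch is a raw cursor (the table name is taken from the traceback above):

        from django.db import connection

        # Bypass the ORM entirely so Django never references the missing column.
        cursor = connection.cursor()
        cursor.execute("DELETE FROM students_degree")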

    Read the article

  • Oracle JDBC intermittent Connection Issue

    - by Lipska
    I am experiencing a very strange problem. This is a very simple use of JDBC connecting to an Oracle database.

        OS: Ubuntu
        Java Version: 1.5.0_16-b02, 1.6.0_17-b04
        Database: Oracle 11g Release 11.1.0.6.0

    When I use the jar file ojdbc14.jar it connects to the database every time. When I use the jar file ojdbc5.jar it connects sometimes and other times it throws an error (shown below). If I recompile with Java 6 and use ojdbc6.jar I get the same results as with ojdbc5.jar. I need specific features in ojdbc5.jar that are not available in ojdbc14.jar. Any ideas?

    Error:

        Connecting to oracle
        java.sql.SQLException: Io exception: Connection reset
            at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:74)
            at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:110)
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:171)
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:227)
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:494)
            at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:411)
            at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:490)
            at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:202)
            at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:33)
            at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:474)
            at java.sql.DriverManager.getConnection(DriverManager.java:525)
            at java.sql.DriverManager.getConnection(DriverManager.java:171)
            at TestConnect.main(TestConnect.java:13)

    Code: below is the code I am using.

        import java.io.*;
        import java.sql.*;

        public class TestConnect {
            public static void main(String[] args) {
                try {
                    System.out.println("Connecting to oracle");
                    Connection con = null;
                    Class.forName("oracle.jdbc.driver.OracleDriver");
                    con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@172.16.48.100:1535:sample",
                        "JOHN", "90009000");
                    System.out.println("Connected to oracle");
                    con.close();
                    System.out.println("Goodbye");
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

    Read the article

  • OleDbException was unhandled in VB.NET

    - by ritch
    Syntax error (missing operator) in query expression '((ProductID = ?) AND ((? = 1 AND Product Name IS NULL) OR (Product Name = ?)) AND ((? = 1 AND Price IS NULL) OR (Price = ?)) AND ((? = 1 AND Quantity IS NULL) OR (Quantity = ?)))'.

    I need some help sorting this error out in Visual Basic .NET 2008. I am trying to update records in an MS Access database. I can update one table, but the other table is just not having it.

        Private Sub Admin_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
            'Reads Users into the program from the text file (located at Module.VB)
            ReadUsers()

            'Connect to Access 2007 database file
            con.ConnectionString = ("Provider=Microsoft.ACE.OLEDB.12.0;" & _
                "Data Source=E:\Computing\Projects\Login\Login\bds.accdb;")
            con.Open()

            'SQL connect 1
            sql = "Select * From Clients"
            da = New OleDb.OleDbDataAdapter(sql, con)
            da.Fill(ds, "Clients")
            MaxRows = ds.Tables("Clients").Rows.Count
            intCounter = -1

            'SQL connect 2
            sql2 = "Select * From Products"
            da2 = New OleDb.OleDbDataAdapter(sql2, con)
            da2.Fill(ds, "Products")
            MaxRows2 = ds.Tables("Products").Rows.Count
            intCounter2 = -1

            'Show clients from the database in a ComboBox
            ComboBoxClients.DisplayMember = "ClientName"
            ComboBoxClients.ValueMember = "ClientID"
            ComboBoxClients.DataSource = ds.Tables("Clients")
        End Sub

    This is the button; the error appears on da2.Update(ds, "Products"):

        Private Sub Button4_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button4.Click
            Dim cb2 As New OleDb.OleDbCommandBuilder(da2)
            ds.Tables("Products").Rows(intCounter2).Item("Price") = ProductPriceBox.Text
            da2.Update(ds, "Products")

            'Alerts the user that the database has been updated
            MsgBox("Database Updated")
        End Sub

    However, the same approach works when updating another table:

        Private Sub UpdateButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles UpdateButton.Click
            'Allows users to update records in the database
            Dim cb As New OleDb.OleDbCommandBuilder(da)

            'Changes the database contents with the content in the text fields
            ds.Tables("Clients").Rows(intCounter).Item("ClientName") = ClientNameBox.Text
            ds.Tables("Clients").Rows(intCounter).Item("ClientID") = ClientIDBox.Text
            ds.Tables("Clients").Rows(intCounter).Item("ClientAddress") = ClientAddressBox.Text
            ds.Tables("Clients").Rows(intCounter).Item("ClientTelephoneNumber") = ClientNumberBox.Text

            'Updates the table within the database
            da.Update(ds, "Clients")

            'Alerts the user that the database has been updated
            MsgBox("Database Updated")
        End Sub
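
    One hedged observation, since the expression in the error shows Product Name without brackets: OleDbCommandBuilder only delimits identifiers if it knows the quote characters, so it may be worth setting them before the update (a sketch):

        Dim cb2 As New OleDb.OleDbCommandBuilder(da2)
        ' Tell the builder how to delimit identifiers so column names with
        ' spaces (e.g. Product Name) come out as [Product Name] in the SQL.
        cb2.QuotePrefix = "["
        cb2.QuoteSuffix = "]"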

    Read the article

  • How do I pass a callback function to sqlite3_exec on iOS 5.1?

    - by John Doh
    I am new to Xcode, iOS and Objective-C, as well as SQLite. I am trying to teach myself the basics, and I would like to use the sqlite3 wrapper sqlite3_exec for a SELECT query. For some reason, I can't find a simple example anywhere of someone doing this. Basically, the method takes a callback function as its third parameter:

        int sqlite3_exec(
            sqlite3*,                                  /* An open database */
            const char *sql,                           /* SQL to be evaluated */
            int (*callback)(void*,int,char**,char**),  /* Callback function */
            void *,                                    /* 1st argument to callback */
            char **errmsg                              /* Error msg written here */
        );

    That's fine; I'm no stranger to callbacks. However, I just can't seem to get the syntax right. I took over one of the view controllers in my iPad (iOS 5.1) Xcode (4.3) project, and made the changes shown below:

        #import "SecondViewController.h"
        #import "sqlite3.h"
        #import "AppState.h"

        @interface SecondViewController ()
        @end

        @implementation SecondViewController

        - (int)myCallback:(void *)a_parm argc:(int)argc argv:(char **)argv column:(char **)column
        {
            return 0;
        }

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            // Do any additional setup after loading the view, typically from a nib.

            // grab questionnaire names
            char *sql = "select * from QST2Main order by [Name]";
            char *err = nil;
            sqlite3 *db = [[AppState sharedManager] getgCn];
            sqlite3_exec(db, sql, myCallback, nil, &err);
        }

    Essentially, I want to run a query when this view first loads, to store some data for later use. But Xcode doesn't like the myCallback usage at the bottom there. It says: use of undeclared identifier 'myCallback'. That method is declared in the header file, and I've even tried making it static. Nothing seems to make this error go away. I know I must be doing something fundamentally wrong here, but for the life of me I can't figure out what; I can't even find other code samples in this area that could help me figure out what I'm missing. Many thanks!
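
    For what it's worth, the callback parameter has to be a plain C function, not an Objective-C instance method; a minimal sketch that keeps the question's names:

        // Plain C function matching the sqlite3_exec callback signature.
        // Returning non-zero from here would abort the query.
        static int myCallback(void *param, int argc, char **argv, char **column)
        {
            for (int i = 0; i < argc; i++) {
                NSLog(@"%s = %s", column[i], argv[i] ? argv[i] : "NULL");
            }
            return 0;
        }

        // ...and then in viewDidLoad:
        // sqlite3_exec(db, sql, myCallback, NULL, &err);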

    Read the article

  • django: can't adapt error when importing data from postgres database

    - by Oleg Tarasenko
    Hi, I'm having a strange error when installing a fixture from dumped data. I am using psycopg2 and Django 1.1.1.

        silver:probsbox oleg$ python manage.py loaddata /Users/oleg/probs.json
        Installing json fixture '/Users/oleg/probs' from '/Users/oleg/probs'.
        Problem installing fixture '/Users/oleg/probs.json': Traceback (most recent call last):
          File "/opt/local/lib/python2.5/site-packages/django/core/management/commands/loaddata.py", line 153, in handle
            obj.save()
          File "/opt/local/lib/python2.5/site-packages/django/core/serializers/base.py", line 163, in save
            models.Model.save_base(self.object, raw=True)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/base.py", line 495, in save_base
            result = manager._insert(values, return_id=update_pk)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/manager.py", line 177, in _insert
            return insert_query(self.model, values, **kwargs)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/query.py", line 1087, in insert_query
            return query.execute_sql(return_id)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/sql/subqueries.py", line 320, in execute_sql
            cursor = super(InsertQuery, self).execute_sql(None)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/sql/query.py", line 2369, in execute_sql
            cursor.execute(sql, params)
          File "/opt/local/lib/python2.5/site-packages/django/db/backends/util.py", line 19, in execute
            return self.cursor.execute(sql, params)
        ProgrammingError: can't adapt

    First I checked similar issues on the internet. This one seemed very related, as my data has many non-ASCII symbols: http://code.djangoproject.com/ticket/5996. But I've checked my Django installation and it's OK there. Could you advise what is wrong?

    Read the article

  • SMO ManagedComputer.ServerInstances is empty

    - by Mark J Miller
    I am trying to use SMO (VS 2010, SQL Server 2008) to connect to SQL Server and view the server protocol configuration. I can connect and list the Services and ClientProtocols as well as the account MSSQLSERVER service is running under. However, the ServerInstances collection is empty. The only instance on the target server is the default (MSSQLSERVER), shouldn't that be in the collection? How can I get an instance of it so I can inspect the ServerProtocols collection? Here's the code I'm using: class Program { static void Main(string[] args) { //machine hosting installed sql server instance ManagedComputer host = new ManagedComputer("dev-it-db01.dev.interbankfx.lcl"); //ManagedComputer host = new ManagedComputer("MRW-IT-DTP69"); if (host.ServerInstances.Count != 0) { //why is this 0? Is it because only the DEFAULT instance exists? Console.WriteLine("/////////////// INSTANCES ////////////////"); foreach (ServerInstance inst in host.ServerInstances) { Console.WriteLine(inst.Name); } } Console.WriteLine("/////////////// SERVICES ////////////////"); // enumerate sql services (looking for MSSSQLSERVER) foreach (Service svc in host.Services) { Console.WriteLine(svc.Name); } Console.WriteLine("/////////////// DETAILS ////////////////"); // get name of MSSQLSERVER instance from user (pick from list above) Service mssqlserver = host.Services["MSSQLSERVER"]; // print service account: .\{account} == "local account", "LocalSystem", "NetworkService", {domain}\{account} == "domain account" Console.WriteLine("Service Account: {0}", mssqlserver.ServiceAccount); // get client protocols foreach (ClientProtocol cp in host.ClientProtocols) { Console.WriteLine("{0} {1} ({2})", cp.Order, cp.DisplayName, cp.IsEnabled ? "Enabled" : "Disabled"); } } } I've also tried: Urn u = new Urn("ManagedComputer[@Name=dev-it-db01.dev.interbankfx.lcl]/ServerInstance[@Name='MSSQLSERVER']/ServerProtocol[@Name='Tcp']"); ServerProtocol tcp = host.GetSmoObject(u) as ServerProtocol; if (tcp != null) { Console.WriteLine("{0}", tcp.DisplayName); } But I get an error message stating: "child expressions are not supported." Any ideas what's wrong?

    Read the article

  • CakePHP function in model not executing

    - by Rixius
    I have a function in my Comic model, as such:

        <?php
        class Comic extends AppModel
        {
            var $name = "Comic";

            // Methods for retrieving information.
            function testFunc() {
                $mr = $this->find('all');
                return $mr;
            }
        }
        ?>

    And I am calling it in my controller, as such:

        <?php
        class ComicController extends AppController
        {
            var $name = "Comic";
            var $uses = array('Comic');

            function index() {
            }

            function view($q) {
                $this->set('array', $this->Comic->testFunc());
            }
        }
        ?>

    When I try to load the page, I get the following error:

        Warning (512): SQL Error: 1064: You have an error in your SQL syntax; check the manual that
        corresponds to your MySQL server version for the right syntax to use near 'testFunc' at line 1
        [CORE/cake/libs/model/datasources/dbo_source.php, line 525]

        Query: testFunc

    And the SQL dump looks like this:

        (default) 2 queries took 1 ms
        Nr  Query            Error                                                         Affected  Num. rows  Took (ms)
        1   DESCRIBE comics                                                                10        10         1
        2   testFunc         1064: You have an error in your SQL syntax; check the manual
                             that corresponds to your MySQL server version for the right
                             syntax to use near 'testFunc' at line 1                       0

    So it looks like, instead of running the testFunc() function, it is trying to run a query of "testFunc" and failing...

    Read the article
