Search Results

Search found 42428 results on 1698 pages for 'database query'.

  • How to solve runtime error 'Failed to create writable database file with message "The operation couldn’t be completed. (Cocoa error 260.)"'?

    - by user1432045
    I am an iPhone beginner. I have created a database, but at runtime it fails with 'Failed to create writable database file with message "The operation couldn’t be completed"'. My code is:

    -(void)createdatabase {
        NSFileManager *fileManager = [NSFileManager defaultManager];
        NSError *error;
        NSString *dbPath = [self getDBPath];
        BOOL success = [fileManager fileExistsAtPath:dbPath];
        if (!success) {
            NSString *defaultDBPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"SQL.sqlite"];
            success = [fileManager copyItemAtPath:defaultDBPath toPath:dbPath error:&error];
            if (!success) {
                NSAssert1(0, @"Failed to create writable database file with message '%@'.", [error localizedDescription]);
            }
        }
    }

    Please give any suggestions, and source code that I can apply to my code.

  • How to load a binary file (.bin) of size 6 MB into a varbinary(MAX) column of a SQL Server 2005 database

    - by Feroz Khan
    How do I load a binary file (.bin) of size 6 MB into a varbinary(MAX) column of a SQL Server 2005 database using ADO in a VC++ application? This is the code I am using to load the file, which I previously used to load a .bmp file:

    BOOL CSaveView::PutECGInDB(CString strFilePath, FieldPtr pFileData)
    {
        // Open file
        CFile fileImage;
        CFileStatus fileStatus;
        fileImage.Open(strFilePath, CFile::modeRead);
        fileImage.GetStatus(fileStatus);

        // Allocate memory for data
        ULONG nBytes = (ULONG)fileStatus.m_size;
        HGLOBAL hGlobal = GlobalAlloc(GPTR, nBytes);
        LPVOID lpData = GlobalLock(hGlobal);

        // Read the file data into memory
        fileImage.Read(lpData, nBytes);

        HRESULT hr;
        _variant_t varChunk;
        long lngOffset = 0;
        UCHAR chData;
        SAFEARRAY FAR *psa = NULL;
        SAFEARRAYBOUND rgsabound[1];
        try
        {
            // Create a safe array to store the bytes
            rgsabound[0].lLbound = 0;
            rgsabound[0].cElements = nBytes;
            psa = SafeArrayCreate(VT_UI1, 1, rgsabound);
            while (lngOffset < (long)nBytes)
            {
                chData = ((UCHAR*)lpData)[lngOffset];
                hr = SafeArrayPutElement(psa, &lngOffset, &chData);
                if (hr != S_OK)
                {
                    return false;
                }
                lngOffset++;
            }
            lngOffset = 0;

            // Assign the safe array to a variant
            varChunk.vt = VT_ARRAY | VT_UI1;
            varChunk.parray = psa;
            hr = pFileData->AppendChunk(varChunk);
            if (hr != S_OK)
            {
                return false;
            }
        }
        catch (_com_error &e)
        {
            // Get info from _com_error
            _bstr_t bstrSource(e.Source());
            _bstr_t bstrDescription(e.Description());
            _bstr_t bstrErrorMessage(e.ErrorMessage());
            _bstr_t bstrErrorCode(e.Error());
            TRACE("Exception thrown for classes generated by #import");
            TRACE("\tCode = %08lx\n", (LPCSTR)bstrErrorCode);
            TRACE("\tCode Meaning = %s\n", (LPCSTR)bstrErrorMessage);
            TRACE("\tSource = %s\n", (LPCSTR)bstrSource);
            TRACE("\tDescription = %s\n", (LPCSTR)bstrDescription);
        }
        catch (...)
        {
            TRACE("Unhandled exception");
        }

        // Free memory
        GlobalUnlock(lpData);
        return true;
    }

    But when I read the same file back using the GetChunk function, it gives me all zeros, although the size of the file I get back is the same as the one uploaded. Your help will be highly appreciated. Thanks in advance.

  • [How-to] Make a working XML file in PHP (encoding properly) to export data from an online database to iPhone

    - by gotye
    Hey guys, I am trying to make an XML file to export my data from my database to my iPhone. Each time I create a new post, I need to execute a PHP file to create an XML file containing the latest posts. Okay so far? :D Here is my current PHP code, but my NSXMLParser gives me an error (code 33 - String is not started). I have no idea what kind of PHP functions I have to use...

    <?php
    // write the beginning of the XML file
    $xml .= '<?xml version=\"1.0\" encoding=\"UTF-8\"?>';
    $sml .= '<channel>';
    $xml .= '<title>Infonul</title>';
    $xml .= '<link>aaa</link>';
    $xml .= '<description>aaa</description>';

    // connect to the database (update the password)
    $connect = mysql_connect('...-12','...','...');

    /* select the mysql database */
    mysql_select_db('...');

    // select the 20 latest news items
    $res = mysql_query("SELECT u.display_name as author, p.post_date as date, p.comment_count as commentCount, p.post_content as content, p.post_title as title FROM wp_posts p, wp_users u WHERE p.post_status = 'publish' and p.post_type = 'post' and p.post_author = u.id ORDER BY p.id DESC LIMIT 0,20");

    // extract the information and append it to the content
    while ($tab = mysql_fetch_array($res)) {
        $title = $tab[title];
        $author = $tab[author];
        $content = $tab[content]; // html stuff
        $commentCount = $tab[commentCount];
        $date = $tab[date];
        $xml .= '<item>';
        $xml .= '<title>'.$title.'</title>';
        $xml .= '<content><![CDATA['.$content.']]></content>';
        $xml .= '<date>'.$date.'</date>';
        $xml .= '<author>'.$author.'</author>';
        $xml .= '<commentCount>'.$commentCount.'</commentCount>';
        $xml .= '</item>';
    }

    // write the end of the XML file
    $xml .= '</channel>';
    $xml = utf8_encode($xml);
    echo $xml;

    // write to the file
    if ($fp = fopen("20news.xml",'w')) {
        fputs($fp,$xml);
        fclose($fp);
    }
    //mysql_close();
    ?>

    I noticed a few things when I open 20news.xml in my browser: I got squares instead of single quotes... I can't see <[CDATA[ but ]] is clearly visible... why?!? Thanks for any input ;) Gotye.

  • How to store a date into a MySQL database with the Play Framework in Scala?

    - by Rahul Kulhari
    I am working with the Play Framework with Scala. What I have: a login page to log into the web app, and a sign-up page to register for the web app; after login I want to store all database values for the user. What I want to do: when a user registers for the web app, I want to store the user's values in the database with the current time and date, but my form is giving an error.

    Error: List(FormError(dates,error.required,List())),None)

    controllers/Application.scala

    object Application extends Controller {
      val ta: Form[Keyword] = Form(
        mapping(
          "id" -> ignored(NotAssigned: Pk[Long]),
          "word" -> nonEmptyText,
          "blog" -> nonEmptyText,
          "cat" -> nonEmptyText,
          "score" -> of[Long],
          "summaryId" -> nonEmptyText,
          "dates" -> date("yyyy-MM-dd HH:mm:ss")
        )(Keyword.apply)(Keyword.unapply)
      )

      def index = Action {
        Ok(html.index(ta))
      }

      def newTask = Action { implicit request =>
        ta.bindFromRequest.fold(
          errors => {
            println(errors)
            BadRequest(html.index(errors))
          },
          keywo => {
            Keyword.create(keywo)
            Ok(views.html.data(Keyword.all()))
          }
        )
      }
    }

    models/Keyword.scala

    case class Keyword(id: Pk[Long], word: String, blog: String, cat: String, score: Long, summaryId: String, dates: Date)

    object Keyword {
      val keyw = {
        get[Pk[Long]]("keyword.id") ~
        get[String]("keyword.word") ~
        get[String]("keyword.blog") ~
        get[String]("keyword.cat") ~
        get[Long]("keyword.score") ~
        get[String]("keyword.summaryId") ~
        get[Date]("keyword.dates") map {
          case id~blog~cat~word~score~summaryId~dates => Keyword(id, word, blog, cat, score, summaryId, dates)
        }
      }

      def all(): List[Keyword] = DB.withConnection { implicit c =>
        SQL("select * from keyword").as(Keyword.keyw *)
      }

      def create(key: Keyword) {
        DB.withConnection { implicit c =>
          SQL("insert into keyword values({word},{blog},{cat},{score},{summaryId},{dates})").on(
            'word -> key.word, 'blog -> key.blog, 'cat -> key.cat,
            'score -> key.score, 'summaryId -> key.summaryId,
            'dates -> new Date()
          ).executeUpdate
        }
      }
    }

    views/index.scala.html

    @(taskForm: Form[Keyword])
    @import helper._
    @main("Todo list") {
      @form(routes.Application.newTask) {
        @inputText(taskForm("word"))
        @inputText(taskForm("blog"))
        @inputText(taskForm("cat"))
        @inputText(taskForm("score"))
        @inputText(taskForm("summaryId"))
        <input type="submit"> <a href="">Go Back</a>
      }
    }

    Please give me some idea of how to store the date into MySQL; the date is not a field of the form.

  • How to insert into a database from a stored procedure in log4net?

    - by Samreen
    I have to log thread context properties like this:

    string logFilePath = AppDomain.CurrentDomain.BaseDirectory + "log4netconfig.xml";
    FileInfo finfo = new FileInfo(logFilePath);
    log4net.Config.XmlConfigurator.ConfigureAndWatch(finfo);
    ILog logger = LogManager.GetLogger("Exception.Logging");
    log4net.ThreadContext.Properties["MESSAGE"] = exception.Message;
    log4net.ThreadContext.Properties["MODULE"] = "module1";
    log4net.ThreadContext.Properties["COMPONENT"] = "component1";
    logger.Debug("test");

    and the configuration file is:

    <configuration>
      <configSections>
        <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,Log4net"/>
      </configSections>
      <log4net>
        <logger name="Exception.Logging" level="Debug">
          <appender-ref ref="AdoNetAppender"/>
        </logger>
        <appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
          <connectionString value="Data Source=xe;User ID=test;Password=test;" />
          <connectionType value="System.Data.OracleClient.OracleConnection, System.Data.OracleClient, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
          <bufferSize value="10000"/>
          <commandText value="Log_Exception_Pkg.Insert_Log" />
          <commandType value="StoredProcedure" />
          <parameter>
            <parameterName value="@p_Error_Message" />
            <dbType value="String" />
            <size value="255" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%property{MESSAGE}"/>
            </layout>
          </parameter>
          <parameter>
            <parameterName value="@p_Module" />
            <dbType value="String" />
            <size value="225" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%property{MODULE}"/>
            </layout>
          </parameter>
          <parameter>
            <parameterName value="@p_Component" />
            <dbType value="String" />
            <size value="225" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%property{COMPONENT}"/>
            </layout>
          </parameter>
        </appender>
      </log4net>
    </configuration>

    But it's not inserting them into the database. How can I get this to work?

  • How to upload a file into a database using a Servlet?

    - by user1765496
    Hi all, I am working on servlets, and I need to upload a file using a servlet. My code follows:

    package com.limrasoft.image.servlets;

    import java.io.*;
    import javax.servlet.*;
    import javax.servlet.http.*;
    import javax.servlet.annotation.*;
    import java.sql.*;

    @WebServlet(name="serv1",value="/s1")
    public class Account extends HttpServlet {
        public void doPost(HttpServletRequest req, HttpServletResponse res) throws ServletException, IOException {
            PrintWriter pw = res.getWriter();
            try {
                Class.forName("oracle.jdbc.driver.OracleDriver");
                Connection con = null;
                try {
                    con = DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:xe", "system", "sajid");
                    res.setContentType("text/html");
                    String s1 = req.getParameter("un");
                    String s2 = req.getParameter("pwd");
                    String s3 = req.getParameter("g");
                    String s4 = req.getParameter("uf");
                    PreparedStatement ps = con.prepareStatement("insert into account values(?,?,?,?)");
                    ps.setString(1, s1);
                    ps.setString(2, s2);
                    ps.setString(3, s3);
                    File file = new File(s4);
                    FileInputStream fis = new FileInputStream(file);
                    int len = (int)file.length();
                    ps.setBinaryStream(4, fis, len);
                    int c = ps.executeUpdate();
                    if (c == 0) { pw.println("<h1>Registration fail"); }
                    else { pw.println("<h1>Registration success"); }
                }
                finally { if (con != null) con.close(); }
            }
            catch (ClassNotFoundException ce) { pw.println("<h1>Registration Fail"); }
            catch (SQLException se) { pw.println("<h1>Registration Fail"); }
            pw.flush();
            pw.close();
        }
    }

    I have written the above code to upload a file into the database, but it is giving the error "HTTP Status 500 - Servlet3.java (The system cannot find the file specified)". Could you please help me with this code? Thanks in advance.
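
    A likely cause of "The system cannot find the file specified" is that new File(s4) names a path on the server, while the chosen file exists only on the client; a plain form field carries just the file name. A minimal sketch of the Servlet 3.0 multipart approach (the form would need enctype="multipart/form-data"; the field and table names are assumptions carried over from the question):

    import java.io.IOException;
    import java.io.InputStream;
    import java.sql.*;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.*;
    import javax.servlet.http.*;

    @WebServlet(name = "serv2", value = "/s2")
    @MultipartConfig  // enables req.getPart() for multipart/form-data requests
    public class AccountUpload extends HttpServlet {
        public void doPost(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            Part filePart = req.getPart("uf");  // the uploaded file itself, streamed by the container
            try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@localhost:1521:xe", "system", "sajid");
                 InputStream in = filePart.getInputStream();
                 PreparedStatement ps = con.prepareStatement(
                         "insert into account values(?,?,?,?)")) {
                ps.setString(1, req.getParameter("un"));
                ps.setString(2, req.getParameter("pwd"));
                ps.setString(3, req.getParameter("g"));
                // Read the bytes from the request body, not from a path on the server disk
                ps.setBinaryStream(4, in, (int) filePart.getSize());
                ps.executeUpdate();
            } catch (SQLException e) {
                throw new ServletException(e);
            }
        }
    }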

  • What is the best way to connect to an Oracle database to build my own IDE? [closed]

    - by rima
    Before answering, please think about the future of this kind of program. I want to get some data from an Oracle server, specifically: (1) get all the functions, packages, procedures, etc., so I can display them, drop them, and so on; (2) compile my *.sql files and get the results, etc. Because I was a beginner with Oracle, to solve the second problem I first tried connecting to SQL*Plus: I ran sqlplus as a child process from C#, redirected the shell's output stream, traced what happened, and handled the resulting messages for the user. THIS PART NOW SUCCEEDS, though I still have a little trouble collecting the complete results because the output is asynchronous. (In this approach, I log in to the Oracle server by passing arguments to sqlplus from the process I create in C#.) After that, I tried to fetch all the function, package, and procedure names, but I had problems with speed, so I tried connecting to the database with Oracle.DataAccess.dll instead.

    Now I am confused about which approach is the correct way to build a program that works like Oracle SQL Developer. I have no experience with how programs like this work.

    If your answer is the second way (the data access library): I looked a little at Golden and PL/edit (Benthic Software), and I am unsure how to create the connection string. How do I find the host name or port number that Oracle is running on? Do I need to read the tnsnames.ora file?

    If your answer is the first way (driving sqlplus): do you have any ideas on how to parse the output? The result of a table query, for example, is quite confusing, and every tool formats output differently. I can program it, but I would really benefit from someone's experience, because what matters to me is learning how such software responds so nicely and quickly.

    If you are not sure: can you suggest a book that would help me become an expert in this area? For example, the C# books only explain how to connect to the database, and the database books only explain how to use the database. I am looking for a book that gives some idea of how to develop an interface that mediates between the two, not just simple sending and receiving of data; for example, how to write a compiler for them. The language of the book does not matter; I know C#, Java, VB, SQL, and Oracle. Thanks.
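
    For the first requirement, the data dictionary can be queried directly instead of parsing SQL*Plus output; this is generally what GUI tools do. A minimal JDBC sketch (Java is used here since the poster lists it; the host, port, SID, and credentials are placeholders):

    import java.sql.*;

    public class ListPlsqlObjects {
        public static void main(String[] args) throws SQLException {
            // Placeholder connection details; in practice they come from the user or tnsnames.ora
            String url = "jdbc:oracle:thin:@localhost:1521:XE";
            try (Connection con = DriverManager.getConnection(url, "scott", "tiger");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT object_name, object_type, status " +
                     "FROM user_objects " +
                     "WHERE object_type IN ('FUNCTION','PROCEDURE','PACKAGE') " +
                     "ORDER BY object_type, object_name");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // STATUS shows whether each object compiled cleanly (VALID/INVALID)
                    System.out.printf("%-10s %-30s %s%n",
                            rs.getString("object_type"),
                            rs.getString("object_name"),
                            rs.getString("status"));
                }
            }
        }
    }

    This also bears on the tnsnames.ora question: the host and port in the connection string correspond to the HOST and PORT entries of the matching alias in tnsnames.ora, so reading that file is only needed to offer the user a list of known aliases.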

  • Leveraging Logical Standby Databases in Oracle 11g Data Guard

    Oracle Data Guard still offers support for the venerable logical standby database in Oracle Database 11g. This article investigates how data warehouse and data mart environments can effectively leverage logical standby database features while simultaneously providing a final destination when failover from a primary database is mandated during disaster recovery.

  • September Independent Oracle User Group (IOUG) Regional Events:

    - by Mandy Ho
    September 5, 2012 – Denver, CO: Oracle 11g Database Upgrade Seminar. Join Roy Swonger, Senior Director of Software Development at Oracle, to learn about upgrading to Oracle Database 11g. Topics include: all the required preparatory steps; database upgrade strategies; post-upgrade performance analysis; helpful tips and common pitfalls to watch out for. http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=152242&src=7598177&src=7598177&Act=4

    September 6, 2012 – Salt Lake City, UT: Fall Symposium 2012. Plan to join us for our annual fall event on September 6. The day will be filled with learning and networking, with tracks focused on Applications, APEX, BI, Development, and DBA topics. This event is free for UTOUG members to attend, but please register. http://www.utoug.org/apex/f?p=972:2:6686308836668467::::P2_EVENT_ID:121

    September 6, 2012 – Portland, OR: Oracle's hands-on workshop series focused on providing defense-in-depth solutions to secure data at the source, reduce risk, and simplify compliance. The Oracle Database Security Workshop is a one-day hands-on session for IT managers, IT security architects, and Oracle DBAs who are looking for solutions to address their information protection, privacy, and accountability challenges within their Oracle database environment. Most security programs offered today fail to adequately address database security. Customers continue to be challenged to secure information against loss and to protect the integrity of sensitive information like critical financial data, personally identifiable information (PII), and credit card data for PCI compliance. http://nwoug.org/content.aspx?page_id=87&club_id=165905&item_id=241082

    September 11, 2012 – Montreal, QC: APEXposed! For APEX aficionados: join ODTUG in Montreal, September 11-12, for APEXposed! Topics will include Dynamic Actions, Plug-ins, Tuning, and Building Mobile Apps. The cost is $399 US and early registration ends August 15th. For more information: http://www.odtugapextraining.com

    September 11, 2012 – Philadelphia, PA: Big Data, and What Are We Still Doing Wrong, with Tom Kyte. Tom Kyte is a Senior Technical Architect in Oracle's Server Technology Division. Tom is the Tom behind the AskTom column in Oracle Magazine and is also the author of Expert Oracle Database Architecture (Apress, 2005/2009), among other books. Abstract (Big Data): the term "big data" draws a lot of attention, but behind the hype there's a simple story. For decades, companies have been making business decisions based on transactional data stored in relational databases. However, beyond that critical data is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. This presentation will take a look at what Big Data is and means, and Oracle's strategy for handling it. Abstract (What are we still doing wrong?): I've given many best practices presentations in the last 10 years. I've given many worst practices presentations in the last 10 years. I've seen some things change over the last ten years and many other things stay exactly the same. In this talk we'll be taking a look at the good and the bad: what we do right, and what we continue to do wrong over and over again. We'll look at why "Why" is probably the right initial answer to most any question. We'll look at how we get to "know what we know", and why that can be both a help and a hindrance. We'll peek at "best practices" and tie them into what I term "worst practices". In short, a talk on the good and the bad. http://ioug.itconvergence.com/pls/apex/f?p=207:27:3669516430980563::NO

    September 12, 2012 – New York, NY: NYOUG Fall General Meeting. "Trends in Database Administration and Why the Future of Database Administration is the Vdba". http://www.nyoug.org/upcoming_events.htm#General_Meeting1

    September 21, 2012 – Cleveland, OH: Oracle Database 11g for Developers: What You Need to Know, or Oracle Database 11g New Features for Developers. Attendees are introduced to the new and improved features of Oracle 11g (both Oracle 11g R1 and Oracle 11g R2) that directly impact application development. Special emphasis is placed on features that reduce development time, make development simpler, improve performance, or speed deployment. Specific topics include: new SQL functions, virtual columns, result caching, XML improvements, pivot statements, JDBC improvements, and PL/SQL enhancements such as compound triggers. http://www.neooug.org/

    September 24, 2012 – Ottawa, ON: Introduction to Oracle Spatial. The free Oracle Locator functionality, and the Oracle Spatial option which dramatically extends Locator, are very useful but poorly understood capabilities of the database. In the afternoon we will extend into additional areas selected from: storage and performance; answering business problems with spatial queries; using Oracle Maps in OBIEE; an overview and capabilities of Oracle Topology; under the covers with geocoding. http://www.oug-ottawa.org/pls/htmldb/f?p=327:27:4209274028390246::NO

  • The APEX of Business Value...or...the Business Value of APEX? Oracle Cloud Takes Oracle APEX to New Heights!

    - by Gene Eun
    The attraction of Oracle Application Express (APEX) has increased tremendously with the recent launch of the Oracle Cloud. APEX already supported departmental development and deployment of business applications with minimal involvement from the IT department. Positioned as the ideal replacement for MS Access, APEX has probably managed better to capture the eye of developers, and was used for enterprise application development at least as much as for the kind of tactical applications that Oracle strategically positioned it for. With APEX as PaaS from the Oracle Cloud, a leap is made to a much higher level of business value. Now the IT department is not even needed to make infrastructure available with a database running on it. All the business needs is a credit card. And the business application that is developed, managed, and used from the cloud through a standard browser can now just as easily be accessed by users from around the world as by users from the business department itself. As a bonus, the development of the APEX application is also done in the cloud, with no special demands on the location or the enterprise access privileges of the developers.

    To sum it up, with APEX from the Oracle Cloud Database Service:

    - the development environment is up and running in minutes
    - no involvement from the internal IT department is required (not for infrastructure, platform, or development)
    - superior availability and scalability are offered by Oracle
    - users from anywhere in the world can be invited to access the application
    - developers from anywhere in the world can participate in creating and maintaining the application

    In addition: because the Oracle Cloud platform is the same as the on-premise platform, you can still decide to move the APEX application between the cloud and the local environment, and back again. The REST-ful services that are available through APEX allow programmatic interaction with the database under the APEX application. That means that this database can be synchronized with on-premise databases or data stores in (other) clouds. Through the Oracle Cloud Messaging Service, the APEX application can easily enter into asynchronous conversations with other APEX applications, Fusion Middleware applications (ADF, SOA, BPM), and any other type of REST-enabled application. In my opinion, now, for the first time perhaps, APEX offers the attraction to the business that has been suggested before: because of the cloud, all the business needs is a credit card (a budget of $175 per month), an internet connection, and a browser. Not like before, with a PC hidden under a desk or a database running somewhere in the data center. No matter how unattended: equipment is needed, power is consumed, the database needs to be kept running, and if Oracle Database XE does not suffice, software licenses are required as well. And this setup always has a security challenge associated with it. The cloud fee for the Oracle Cloud Database Service includes infrastructure, power, licenses, availability, platform upgrades, a collection of reusable application components, and the development and runtime environments containing the APEX platform. Of course this not only means that business departments can move quickly without having to convince their IT colleagues to move along; it also means that small organizations that do not even have IT colleagues can do the same.
    Getting tailored applications up and running to get in touch with users and customers all over the world is now within easy reach for small outfits, without any investment.

    My misunderstanding: for a long time, I was under the impression that the essence of APEX was that the business could create applications themselves, meaning that business 'people' would actually go into APEX to create the application. To me, APEX was too much of a developers' tool to see that happen, apart from the odd business analyst who missed his or her calling as an IT developer. Having looked at various other cloud-based development offerings (including Force.com, Mendix, WaveMaker, WorkXpress, OrangeScape, Caspio, and Cordys), I have come to realize my mistake. All these platforms are positioned for 'the business' but require a fair amount of coding and technical expertise. However, they make the business happy nevertheless, because they allow the business to completely circumvent the IT department. That is the essence. Not having to go through the red tape, not having to wait for IT staff who (justifiably) need weeks or months to provide an environment, not having to deal with administrators (again, justifiably) refusing to take on that 'strange environment'. Being able to think of an initiative and turn it into action right away. The business does not have to build the application itself; it can easily hire some external developers or even that nerdy boy next door. They can get started, get an application up and running, and invite users in, especially external users such as customers. They will worry later about upgrades, life cycle management, and integration. Getting applications up and running quickly and turning ideas into action and results right away: that is the key selling point for all these cloud offerings, including APEX from the Cloud. And it is a compelling story, for APEX probably even more so than for the others. While I consider APEX a somewhat proprietary framework compared with 'regular' Java/JEE web development (or even .NET and PHP development), it is still far more open than most cloud environments. APEX is SQL and PL/SQL based, nothing special about those languages, and can run just as easily on site as in the cloud. It has been around since 2004 (not counting several predecessors that fed straight into APEX), so it can be considered pretty mature. Oracle as a company seems pretty stable, so investments in its technology are bound to last for some time to come. By the way: neither APEX nor the other cloud development-as-a-service offerings are targeted at creating applications with enormous lifetimes. They fit into a trend of agile development and rapid life cycle management, with fairly lightweight user interfaces that quickly adapt to taste, technology trends, and functional requirements, and that are easily replaced.

    APEX and ADF, a match made in heaven?! (or at least in the sky): note that using APEX only for a cloud-based database with REST-ful services is also a perfectly viable scenario. Any UI, mobile or browser based, capable of consuming REST-ful services can be created against such a business tier. Creating an ADF Mobile application, for example, that runs against REST-ful services is a best practice for mobile development. Such REST-ful services can be consumed from any service provider, including the cloud-based, APEX-powered REST-ful services running against the Oracle Cloud Database Service!
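
    As a sketch of how thin such a REST consumer can be, the following Java snippet fetches JSON from a RESTful service exposed by an APEX application (the endpoint URL is a made-up placeholder; any APEX REST service URL would take its place):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ApexRestClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint of a RESTful service defined in an APEX workspace
            String endpoint = "https://example.db.cloud.oracle.com/apex/myworkspace/hr/employees/";

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint))
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            // The same call works from a mobile app, another APEX app, or any REST-enabled client
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }
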
    The ADF Mobile architecture overview can easily be morphed to fit the APEX services in, allowing for a cloud-based mobile app. Want to learn more about Oracle Database Cloud Service or Oracle Cloud? Just visit cloud.oracle.com or oracle.com/cloud. Repost of a blog entry by Rick Greenwald, Director of Product Management, Oracle Database Cloud Service.

  • Backup Azure Tables, schedule Azure scripts… and more

    - by Herve Roggero
    Well – months of effort are now officially over… or should I say it's just the beginning? Enzo Cloud Backup 2.0 (beta) is now officially out!!! This tool will let you do the following:

    - Backup SQL Database (and SQL Server, to a limited extent)
    - Backup Azure Tables
    - Restore SQL backups into another SQL environment
    - Restore Azure Tables in Azure Storage, or a SQL environment
    - Manage and schedule database maintenance scripts
    - Drop database schema containers (with preview) for SaaS environments
    - Receive alerts (SMTP) when operations complete or fail

    That's it at a high level… but you need to see the flexibility around these features. For example, you can select a specific backup strategy for Azure Tables, allowing faster backup operations when partition keys use GUIDs. You can also call custom stored procedures during the restore operation of Azure Tables, allowing you to transform the data along the way. You can also set a performance threshold during Azure Table backup operations to help you control possible throttling conditions in your Storage Account. Regarding database scripts, you can now define T-SQL scripts and schedule them for execution in a specific order. You can also tell Enzo to execute a pre and post script during Azure Table restore operations against a SQL environment. The backup operation now supports backing up to multiple devices at the same time, so you can execute a backup request to both a local file and a blob at the same time, guaranteeing that both will contain the exact same data. And due to the level of options that are available, you can save backup definitions for later reuse. The screenshot below backs up Azure Tables to two devices (a blob and a SQL Database).

    You can also manage your database schemas for SaaS environments that use schema containers to separate customer data. This new edition allows you to see how many objects you have in each schema, back up specific schemas, and even drop all objects in a given schema. For example, the screenshot below shows that the EnzoLog database has 4 user-defined schemas, and the AFA schema has 5 tables and 1 module (stored proc, function, view…). Selecting the AFA schema and trying to delete it will prompt another screen to show which objects will be deleted.

    As you can see, Enzo Cloud Backup provides amazing capabilities that can help you safeguard your data in SQL Database and Azure Tables, and gives you advanced management functions for your Azure environment. Download a free trial today at http://www.bluesyntax.net.

  • Presenting at SQLConnections!

    - by andyleonard
    Introduction

    This year I'm honored to present at SQLConnections in Orlando, 27-30 Mar 2011! My topics are Database Design for Developers, Build Your First SSIS Package, and Introduction to Incremental Loads.

    Database Design for Developers

    This interactive session is for software developers tasked with database development. Attend and learn about patterns and anti-patterns of database development, one method for building re-executable Transact-SQL deployment scripts, a method for using SqlCmd to deploy...(read more)

  • New Exadata public references

    - by Javier Puerta
    The following customers are now public references for Exadata. Show your customers how other companies in their industries are leveraging Exadata to achieve their business objectives.

    BRITISH TELECOM (Communications, United Kingdom): 2x Full Rack + 1x Quarter Rack Exadata Database Machine; Oracle University training courses. Success Story.

    DEUTSCHE BANK (Financial Services, Germany): 18x Full Rack Exadata Database Machine; warehouse for credit risk reporting running on Exadata. Success Story.

    OPENBAAR MINISTERIE (Public Sector, Netherlands): 1x Full Rack Exadata Database Machine; data warehouse usage. Success Story.

    ADRIATIC SLOVENICA (Insurance, Slovenia): 1x Quarter Rack Exadata Database Machine running on Linux; replacing Oracle DB and Oracle Application Server. Success Story.

    More customer success stories at Oracle.com References.

  • Oracle Fusion Middleware 12c Updates (2014/08/14)

    - by Hiro
    Oracle Fusion Middleware 12c Media Pack updates as of 2014/08/14 include:

    1. Oracle WebLogic Server on Oracle Database Appliance
       - Oracle WebLogic Server 12.1.2 on Oracle Database Appliance 2.9.0.0.0
       - Oracle WebLogic Server 12.1.1 on Oracle Database Appliance 2.9.0.0.0
       - Oracle WebLogic Server 10.3.6 on Oracle Database Appliance 2.9.0.0.0

  • Data Source Use of Oracle Edition Based Redefinition (EBR)

    - by Steve Felts
    Edition-based redefinition is a new feature in the 11gR2 release of the Oracle database. It enables you to upgrade the database component of an application while it is in use, thereby minimizing or eliminating downtime. It works by allowing a pre-upgrade and a post-upgrade view of the data to exist at the same time, providing a hot-upgrade capability. You can then specify which view you want for a particular session. See the Oracle Database Advanced Application Developer's Guide for further information. There is also a good white paper on Edition-Based Redefinition.

    Using this feature of the Oracle database does not require any new WebLogic Server functionality. It is set for each connection in the pool automatically by simply specifying SQL ALTER SESSION SET EDITION = edition_name in the Init SQL parameter in the data source configuration. This can be configured either via the console or via WLST (setInitSQL on the JDBCConnectionPoolParams). This SQL statement is executed for each newly created physical database connection. Note that we are assuming that a data source references only one edition of the database. To make use of this feature, you would have an earlier version of the application with a data source that references the earlier EDITION, and a later version of the application with a data source that references the later EDITION.

    Once you start talking about multiple versions of a WLS application, you should be using the WLS "side-by-side" or "versioned" deployment feature. See Developing Applications for Production Redeployment for more information. By combining Oracle database EBR and WLS versioned deployment, the application can be failed over with no downtime, making the combination of features more powerful than either independently. There is a catch: you need to be running with a versioned database and a versioned application initially, so that you can then switch versions. The recommended way to version a WLS application is to simply add the "Weblogic-Application-Version" property in the MANIFEST.MF file (you can also specify it at deployment time). The recommended way to configure the data source is to use a packaged data source descriptor that's stored in the ear or war so that everything is self-contained.

    There are some restrictions. You can't use a packaged data source with Logging Last Resource (LLR); you need to use a system resource. You can't use an application-scoped packaged data source with EmulateTwoPhaseCommit for the global-transactions-protocol with a versioned application; use a global scope. See Configuring JDBC Application Modules for Deployment for more details. There's one known problem: it doesn't work correctly with an XA data source (patch available with bug 14075837).
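
    For illustration, here is a minimal JDBC sketch of what that Init SQL setting does for each newly created physical connection, assuming a hypothetical edition named PATCH_V2 that has already been created and granted in the database:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class EditionDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:orcl", "appuser", "password")) {
                try (Statement stmt = con.createStatement()) {
                    // What WLS runs (via Init SQL) on each new physical connection:
                    stmt.execute("ALTER SESSION SET EDITION = PATCH_V2");
                }
                // Every statement on this connection now sees the PATCH_V2 edition
                // of editioned objects (views, synonyms, PL/SQL), i.e. the post-upgrade view.
            }
        }
    }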

  • Top 10 MySQL GUI Tools

    Database Journal: "Many third parties create rich applications to facilitate database management, database development and database administration. Here are ten outstanding graphical interfaces for MySQL."

  • TechEd 2014 Day 1

    - by John Paul Cook
    Today at TechEd 2014, many people had questions about the in-memory database features in SQL Server 2014. A common question is how an in-memory database differs from a database on a SQL Server with an amount of RAM far greater than the size of the database. In-memory, or memory-optimized, tables have different data structures and are accessed differently, using a latch-free and lock-free approach that greatly improves performance. This provides part of the performance improvement. The rest...(read more)
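
    The distinction shows up in the DDL rather than in the hardware: a memory-optimized table is declared as such and gets hash indexes instead of B-trees. A sketch of the SQL Server 2014 syntax, issued here through JDBC (the connection string is a placeholder, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MemoryOptimizedTableDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string for the Microsoft JDBC driver
            String url = "jdbc:sqlserver://localhost:1433;databaseName=Demo;user=sa;password=secret";
            try (Connection con = DriverManager.getConnection(url);
                 Statement stmt = con.createStatement()) {
                stmt.executeUpdate(
                    "CREATE TABLE dbo.SessionState (" +
                    "  SessionId INT NOT NULL " +
                    // latch-free hash index instead of a B-tree; sized via BUCKET_COUNT
                    "    PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000), " +
                    "  Payload NVARCHAR(1000) NULL" +
                    ") WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)");
            }
        }
    }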

  • Oracle Data Protection: How Do You Measure Up? - Part 1

    - by tichien
    This is the first installment in a blog series which examines the results of a recent database protection survey conducted by Database Trends and Applications (DBTA) Magazine. All Oracle IT professionals know that a sound, well-tested backup and recovery strategy plays a foundational role in protecting their Oracle database investments, which in many cases represent the lifeblood of business operations. But just how common are the data protection strategies used, and what challenges are faced, across various enterprises? In January 2014, Database Trends and Applications Magazine (DBTA), in partnership with Oracle, released the results of its "Oracle Database Management and Data Protection Survey". Two hundred Oracle IT professionals were interviewed on various aspects of their database backup and recovery strategies, in order to identify the top organizational and operational challenges for protecting Oracle assets.

    Here are some of the key findings from the survey:

    - The majority of respondents manage backups for tens to hundreds of databases, representing a total data volume of 5 to 50 TB (14% manage 50 to 200 TB, and some up to 5 PB or more).
    - About half of the respondents (48%) use HA technologies such as RAC, Data Guard, or storage mirroring; however, these technologies are deployed on only 25% of their databases (or less). This indicates that backups are still the predominant method for database protection among enterprises.
    - Weekly full and daily incremental backups to disk were the most popular strategy, used by 27% of respondents, followed by daily full backups, which are used by 17%.
    - Interestingly, over half of the respondents reported that 10% or less of their databases undergo regular backup testing.

    A few key backup and recovery challenges resonated across many of the respondents:

    - Poor performance and impact on productivity (see Figure 1): 38% of respondents indicated that backups are too slow, resulting in prolonged backup windows. In a similar vein, 23% complained that backups degrade the performance of production systems.
    - Lack of continuous protection (see Figure 2): 35% revealed that less than 5% of Oracle data is protected in real time.
    - Management complexity: 25% stated that recovery operations are too complex (see Figure 1); 31% reported that backups need constant management (see Figure 1); 45% changed their backup tools as a result of growing data volumes, while 29% changed tools due to the complexity of the tools themselves.

    Figure 1: Current Challenges with Database Backup and Recovery
    Figure 2: Percentage of Organization's Data Backed Up in Real-Time or Near Real-Time

    In future blogs, we will discuss each of these challenges in more detail and bring insight into how the backup technology industry has attempted to resolve them.

  • DB12c documentation

    - by katsumii
    The Oracle Database 12c documentation is now available on OTN. Among the highlights of the 12c release are Oracle Multitenant, enhancements for Oracle Database Application Development, and the Database Cloud.

  • Oracle Automatic Storage Management

    - by Yusuke.Yamamoto
    Published 2010/03/01. Automatic Storage Management (ASM), introduced with Oracle Database 10g, is Oracle Database's integrated volume manager and file system for database files. The material covers ASM storage management and the ASM Storage Reclamation Utility (ASRU), including what ASRU does and how to run it. Full material (PDF, in Japanese): http://www.oracle.com/technetwork/jp/database/1005200-oracle-asm-and-tr-321865-ja.pdf

  • IIS Strategies for Accessing Secured Network Resources

    - by ErikE
    Problem: A user connects to a service on a machine, such as an IIS web site or a SQL Server database. The site or the database needs to gain access to network resources such as file shares (the most common) or a database on a different server. Permission is denied. This is because the user the service is running under doesn't have network permissions in the first place, or if it does, it doesn't have rights to access the remote resource. I keep running into this problem over and over again and am tired of not having a really solid way of handling it. Here are some workarounds I'm aware of:

    1. Run IIS as a custom-created domain user who is granted high permissions. If permissions are granted one file share at a time, then every time I want to read from a new share, I would have to ask a network admin to add it for me. Eventually, with many web sites reading from many shares, it is going to get really complicated. If permissions are just opened up wide for the user to access any file shares in our domain, then this seems like an unnecessary security surface area to present. This also applies to all the sites running on IIS, rather than just the selected site or virtual directory that needs the access, a further surface-area problem.

    2. Still use the IUSR account, but give it network permissions and set up the same user name on the remote resource (not a domain user, a local user). This also has its problems. For example, there's a file share I am using that I have full rights to for sharing, but I can't log in to the machine. So I have to find the right admin and ask him to do it for me. Any time something has to change, it's another request to an admin.

    3. Allow IIS users to connect as anonymous, but set the account used for anonymous access to a high-privilege one. This is even worse than giving the IIS IUSR full privileges, because it means my web site can't use any kind of security in the first place.

    4. Connect using Kerberos, then delegate. This sounds good in principle but has all sorts of problems. First of all, if you're using virtual web sites where the domain name you connect to the site with is not the base machine name (as we do frequently), then you have to set up a Service Principal Name on the web server using Microsoft's SetSPN utility. It's complicated and apparently prone to errors. Also, you have to ask your network/domain admin to change security policy for both the web server and the domain account so they are "trusted for delegation." If you don't get everything perfectly right, suddenly your intended Kerberos authentication is NTLM instead, and you can only impersonate rather than delegate, and thus there is no reaching out over the network as the user. Also, this method can be problematic because sometimes you need the web site or database to have permissions that the connecting user doesn't have.

    5. Create a service or COM+ application that fetches the resource for the web site. Services and COM+ packages run with their own set of credentials. Running as a high-privilege user is okay, since they can do their own security and deny requests that are not legitimate, putting control in the hands of the application developer instead of the network admin. Problems: I am using a COM+ package that does exactly this on Windows Server 2000 to deliver highly sensitive images to a secured web application. I tried moving the web site to Windows Server 2003 and was suddenly denied permission to instantiate the COM+ object, very likely due to registry permissions. I trolled around quite a bit and did not solve the problem, partly because I was reluctant to give the IUSR account full registry permissions. That seems like the same bad practice as just running IIS as a high-privilege user. Note: this is actually really simple. In a programming language of your choice, you create a class with a function that returns an instance of the object you want (an ADODB.Connection, for example), and build a DLL, which you register as a COM+ object. In your web server-side code, you create an instance of the class and use the function, and since it is running under a different security context, calls to network resources work.

    6. Map drive letters to shares. This could theoretically work, but in my mind it's not really a good long-term strategy. Even though mappings can be created with specific credentials, and this can be done by people other than a network admin, this is also going to mean that there are either way too many shared drives (small granularity) or too much permission granted to entire file servers (large granularity). Also, I haven't figured out how to map a drive so that the IUSR gets the drives. Mapping a drive is for the current user; I don't know the IUSR account password, so I can't log in as it and create the mappings.

    7. Move the resources local to the web server/database. There are times when I've done this, especially with Access databases. Does the database have to live out on the file share? Sometimes it was just easiest to move the database to the web server or to the SQL database server (so the linked server to it would work). But I don't think this is a great all-around solution, either. And it won't work when the resource is a service rather than a file.

    8. Move the service to the final web server/database. I suppose I could run a web server on my SQL Server database server, so the web site can connect to it using impersonation and make me happy. But do we really want random extra web servers on our database servers just so this is possible? No.

    9. Virtual directories in IIS. I know that virtual directories can help make remote resources look as though they are local, and this supports using custom credentials for each virtual directory. I haven't been able to work out yet how this would solve the problem for system calls. Users could reach file shares directly, but this won't help, say, classic ASP code access resources. I could use a URL instead of a file path to read remote data files in a web page, but this isn't going to help me make a connection to an Access database, a SQL Server database, or any other resource that uses a connection library rather than being able to just read all the bytes and work with them.

    I wish there was some kind of "service tunnel" that I could create. Think about how a VPN makes remote resources look like they are local. With a richer aliasing mechanism, perhaps code-based, why couldn't even database connections occur under a defined security context? Why not a special Windows component that lets you specify, per user, what resources are available and what alternate credentials are used for the connection? File shares, databases, web sites, you name it. I guess I'm almost talking about a specialized local proxy server.

    Anyway, so there's my list. I may update it if I think of more. Does anyone have any ideas for me? My current problem today is, yet again, I need a web site to connect to an Access database on a file share. Here we go again...

  • Can I use a Mac Mini as a web server and database server? What are the pros and cons?

    - by Christopher Altman
    We are a bootstrapped web start-up. We have a LAMP web application for which we expect relatively low to mid traffic, because users need an account to log in. Our current approach is to colocate two servers: a web server and a MySQL database server. We are planning to use Ubuntu Server 9.04. We have shopped around for dedicated servers, but the prices range from $900 to $1500 per month, therefore we are exploring the colocation approach. We are considering purchasing two Mac Minis (2.0 GHz Intel Core 2 Duo, 2 GB RAM) because we are familiar with the machines and the prices are relatively inexpensive. What are the pros and cons of using these 'non-server'-grade machines? We would install Ubuntu Server and attach FireWire external hard drives. Any advice on how to set up 'good-and-economic' web/database servers is welcome.

  • Transform data to a new structure

    - by rAyt
    Hi, I've got an Access database from one of our clients and want to import this data into a new MSSQL Server 2008 database structure I designed. The new structure is similar to the Access database (including all the rows and so on), but I normalized the entire database. Is there any tool (Microsoft tools preferred) to map the old database to my new design? Thanks.
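
    SQL Server Integration Services (SSIS) and the SQL Server Import and Export Wizard are the usual Microsoft answers for pulling Access data into SQL Server, though the normalization mapping still has to be expressed somewhere. As a code-level alternative, a hedged Java sketch of the idea using the open-source UCanAccess JDBC driver (the jars must be on the classpath; paths, credentials, and all table and column names are invented for illustration):

    import java.sql.*;

    public class AccessToMssql {
        public static void main(String[] args) throws SQLException {
            // UCanAccess reads the Access file directly; SQL Server is the normalized target
            String accessUrl = "jdbc:ucanaccess://C:/data/client.accdb";
            String mssqlUrl  = "jdbc:sqlserver://localhost:1433;databaseName=NewDb;user=sa;password=secret";

            try (Connection src = DriverManager.getConnection(accessUrl);
                 Connection dst = DriverManager.getConnection(mssqlUrl);
                 Statement read = src.createStatement();
                 ResultSet rs = read.executeQuery("SELECT CustomerName, City FROM Customers");
                 PreparedStatement write = dst.prepareStatement(
                         "INSERT INTO dbo.Customer (name, city_id) VALUES (?, ?)")) {
                dst.setAutoCommit(false);
                while (rs.next()) {
                    write.setString(1, rs.getString("CustomerName"));
                    // The mapping step: resolve the denormalized value to a normalized key
                    write.setInt(2, cityIdFor(dst, rs.getString("City")));
                    write.addBatch();
                }
                write.executeBatch();
                dst.commit();
            }
        }

        // Placeholder for the normalization lookup (SELECT, or INSERT into dbo.City if absent)
        static int cityIdFor(Connection dst, String city) throws SQLException {
            return 1; // stub for illustration only
        }
    }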

  • How do I make these fields autopopulate from the database?

    - by dmanexe
    I have an array which holds values like this:

    $items_pool = Array (
        [0] => Array ( [id] => 1 [quantity] => 1 )
        [1] => Array ( [id] => 2 [quantity] => 1 )
        [2] => Array ( [id] => 72 [quantity] => 6 )
        [3] => Array ( [id] => 4 [quantity] => 1 )
        [4] => Array ( [id] => 5 [quantity] => 1 )
        [5] => Array ( [id] => 7 [quantity] => 1 )
        [6] => Array ( [id] => 8 [quantity] => 1 )
        [7] => Array ( [id] => 9 [quantity] => 1 )
        [8] => Array ( [id] => 19 [quantity] => 1 )
        [9] => Array ( [id] => 20 [quantity] => 1 )
        [10] => Array ( [id] => 22 [quantity] => 1 )
        [11] => Array ( [id] => 29 [quantity] => 0 )
    )

    Next, I have a form that I am trying to populate. It loops through the item database, prints out all the possible items, and checks the ones that are already present in $items_pool.

    <?php foreach ($items['items_poolpackage']->result() as $item): ?>
        <input type="checkbox" name="measure[<?=$item->id?>][checkmark]" value="<?=$item->id?>">
    <?php endforeach; ?>

    I know what I'm trying to accomplish logically here, but I can't figure out the programming. What I'm looking for, written loosely, is something like this (not real code):

    <input type="checkbox" name="measure[<?=$item->id?>][checkmark]" value="<?=$item->id?>"
        <?php if ($items_pool['$item->id']) { echo "SELECTED"; } else { } ?>>

    Specifically, this conditional loops through the array, through all the key values (the ID), and if there's a match, the checkbox is selected.

    <?php if ($items_pool['$item->id']) { echo "SELECTED"; } else { } ?>

    I understand that with a loop structured like this, it may mean a lot of 'extra' processing.

    TL;DR: I need to echo within a loop if the item going through the loop exists within another array.
