Search Results

Search found 33242 results on 1330 pages for 'database optimization'.


  • SQL Server Database Settings

    - by rbishop
    For those using Data Relationship Management on Oracle DB this does not apply, but for those using Microsoft SQL Server it is highly recommended that you run with Snapshot Isolation Mode. The Data Governance module will not function correctly without this mode enabled, and all new Data Relationship Management repositories are created with it enabled by default. This mode makes SQL Server (2005+) behave more like Oracle DB, where readers simply see older versions of rows while a write is in progress instead of being blocked by locks until the write completes, which eliminates many common sources of deadlocks. For example, if one user starts a 5-minute transaction updating half the rows in a table, without snapshot isolation everyone else reading the table will be blocked waiting; with snapshot isolation, they will see the rows as they were before the write transaction started. Conversely, if the readers had started first, the writer won't be stuck waiting for them to finish reading: the writes can begin immediately without affecting the current transactions. To make this change, make sure no one is using the target database (e.g. put it into single-user mode), then run these commands:

        ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON
        ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON

    Please coordinate with your DBA team to ensure tempdb is appropriately set up to support snapshot isolation mode, as the extra row versions are stored in tempdb until the transactions are committed. Let me also take this opportunity to very strongly recommend that you use solid-state storage for your databases, with appropriate iSCSI, Fibre Channel, or SAN bandwidth. The performance gains are significant, and there is no excuse for not using 100% solid-state storage in 2013. Indeed, unless you need to store petabytes of archival data, there is no excuse for using hard drives in any system - laptops, desktops, application servers, or database servers. The productivity benefits alone are tremendous, not to mention power consumption, heat, and so on.

    Read the article

  • Service Catalogs for Database as a Service

    - by B R Clouse
    At the end of last month, I had the opportunity to present a speaking session at Oracle OpenWorld: Database as a Service: Creating a Database Cloud Service Catalog. The session was well attended, which would have surprised me several months ago when I started researching this topic. At that time, I thought of service catalogs as something trivial that could be explained in a few simple slides. But while looking at all the different options and approaches available, I came to learn that designing a succinct and effective catalog is not a trivial task, and mistakes can lead to confusion and unintended side effects. When the room filled up, my new point of view was confirmed. In case you missed the session, or attended but would like more details, I've posted a white paper that covers the topics from the session, and more. We start with an overview of the components of a service catalog, then look at several customer case studies of service catalogs for DBaaS. Synthesizing those examples, we summarize the main options for defining the service categories and their levels, and end with a template for defining Bronze | Silver | Gold service tiers for Oracle Database services. The paper is now available here - watch for updates as we work to expand some sections and incorporate readers' feedback (hint - that includes your feedback). Visit our OTN page for additional Database Cloud collateral.

    Read the article

  • Does armcc optimize non-volatile variables with -O0?

    - by Dor
        int* Register = 0x00FF0000; // Address of micro-seconds timer
        while(*Register != 0);

    Should I declare *Register as volatile when using the armcc compiler with -O0 optimization? In other words: does -O0 optimization require qualifying this sort of variable as volatile (which is probably required with -O2 optimization)?
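    For reference, the conventional pattern - independent of optimization level - is to qualify the pointed-to type as volatile, so that every loop iteration performs a real read of the hardware register. A minimal sketch (the address and the 32-bit register width are assumptions carried over from the question, not verified against any particular hardware):

        #include <stdint.h>

        /* Memory-mapped micro-seconds timer; address taken from the question. */
        #define TIMER_ADDR 0x00FF0000u

        /* volatile on the pointed-to type forces the compiler to emit a load
           on every dereference, at any optimization level (-O0 or -O2). */
        static volatile uint32_t * const Register = (volatile uint32_t *)TIMER_ADDR;

        void wait_until_timer_clears(void)
        {
            while (*Register != 0)
            {
                /* busy-wait: each pass re-reads the hardware register */
            }
        }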

    Read the article

  • How do I encapsulate the application server from the web and database servers?

    - by SNyamathi
    So I've been doing some reading, and it seems like the best practice would be to have separate database, application, and web servers. There are a few things that I've failed to understand - please feel free to recommend any reading materials that would address these topics.

    Database (assume MySQL) <-> application server communication: does the database server do any sort of checks on the SQL commands sent/returned, or is it just a "dumb pipe" that responds to SQL commands by spitting back data?

    Application server (assume Tomcat) <-> web server: almost the reverse here - is it the web server that is more of a pipe to the internet, forwarding requests to the application server and spitting back responses? I'm not wording this well, but I'm trying to ask: is it the application server that is responsible for validating data received from requests? For example: parsing POSTs, validating user logins, encrypting/decrypting data.

    Furthermore, how do these two servers communicate? I'm trying to keep things as flexible as possible here, so while I could write a web server in Java and use Java to communicate between the web and app server, that doesn't sound very modular. What if I want to use Python or some other language to replace the web server later on? What if I want to make a non-web-facing application, used in house, written in C++ or something?

    Read the article

  • What is the typical maximum number of database connections for Oracle running on Windows Server?

    - by Sake
    We are maintaining a database server that serves a large number of clients, each typically running several client applications. The total number of connections to the database server (Oracle 9i) reaches 800 at peak load, and the Windows 2003 server is starting to run out of memory. We are now planning to move to 64-bit Windows in order to gain higher memory capability. As a developer, I suggest moving to a multi-tier architecture with connection pooling, which I believe is the natural solution to this problem. However, in order to support my idea, I want information on: what exactly is the typical number of connections allowed for an Oracle database? What is the problem when the number of connections is too high - too much memory consumption, too many sockets opened, or too much context switching between threads? To be a little more specific, how can an Oracle Forms application scale to thousands of users without facing this problem? Should Oracle RAC be applied in this case? I'm sure the answer depends on quite a number of factors, like the exact spec of the hardware being used; I'm expecting a rough estimate or some experience from the real world.

    Read the article

  • Timer A usage in MSP430 in high compiler optimization mode

    - by Vishal
    Hi, I have used Timer A in the MSP430 with high compiler optimization, but found that my timer code fails when high compiler optimization is used. When no optimization is used, the code works fine. This code is used to achieve a 1 ms timer tick; timeOutCNT is incremented in the interrupt. The code follows:

        // Disable interrupt and clear CCR0
        TIMER_A_TACTL = TIMER_A_TASSEL |      // set the clock source as SMCLK
                        TIMER_A_ID |          // set the divider to 8
                        TACLR |               // clear the timer
                        MC_1;                 // continuous mode
        TIMER_A_TACTL &= ~TIMER_A_TAIE;       // timer interrupt disabled
        TIMER_A_TACTL &= 0;                   // timer interrupt flag disabled
        CCTL0 = CCIE;                         // CCR0 interrupt enabled
        CCR0 = 500;
        TIMER_A_TACTL &= TIMER_A_TAIE;        // enable timer interrupt
        TIMER_A_TACTL &= TIMER_A_TAIFG;       // enable timer interrupt
        TACTL = TIMER_A_TASSEL + MC_1 + ID_3; // SMCLK, up mode

        timeOutCNT = 0;                       // timeOutCNT is increased in the timer interrupt
        while(timeOutCNT <= 1);               // delay of 1 millisecond

        TIMER_A_TACTL = TIMER_A_TASSEL |      // set the clock source as SMCLK
                        TIMER_A_ID |          // set the divider to 8
                        TACLR |               // clear the timer
                        MC_1;                 // continuous mode
        TIMER_A_TACTL &= ~TIMER_A_TAIE;       // timer interrupt disabled
        TIMER_A_TACTL &= 0x00;                // timer interrupt flag disabled

    Can anybody help me resolve this issue? Is there another way to use Timer A so that it works fine in optimization modes, or have I used it wrongly to achieve a 1 ms interrupt? Thanks in advance. Vishal N
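    For context, the usual culprit in this scenario is that the tick counter shared between the interrupt service routine and the mainline code is not declared volatile: at higher optimization levels the compiler caches it in a CPU register, and the while-loop never observes the ISR's update. A minimal sketch of that fix, assuming IAR-style ISR syntax and a CCR0 interrupt configured for a 1 ms tick (the vector name and counter type are illustrative assumptions, not taken from the question):

        volatile unsigned int timeOutCNT;     /* shared with the Timer_A ISR */

        #pragma vector = TIMERA0_VECTOR
        __interrupt void Timer_A0_ISR(void)
        {
            timeOutCNT++;                     /* one tick per CCR0 match */
        }

        void delay_1ms(void)
        {
            timeOutCNT = 0;
            while (timeOutCNT <= 1)
            {
                /* volatile forces a fresh read of timeOutCNT on every pass */
            }
        }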

    Read the article

  • What alternatives do I have if I want a distributed multi-master database?

    - by Jonas
    I will build a system where I want to reduce single points of failure, and I need a database. Are there any (free) relational database systems that handle multi-master setups well (i.e. where it is easy to add and remove nodes), or is it better to go with a NoSQL database? From what I have understood, a key-value store will handle this better. What database system do you recommend for a multi-master (cluster) setup?

    Read the article

  • What would be better, (1 database + 4 tables) or (2 databases + 2 tables each)?

    - by griseldas
    Hi there, I would like to be advised on what would be better (with regard to performance): A) 1 database with 4 tables, or B) 2 databases (on the same server), each with 2 tables. The tables' size and usage are more or less similar, so the 2 tables in database 1 would be similar in usage/size to the 2 tables in database 2. The tables could have 500,000+ records, and the 2 tables in each database are not related (no join queries etc. between them). Thanks in advance for your comments.

    Read the article

  • How can I create a CMS using PHP where the end user doesn't have to know anything about the database? Is that possible?

    - by cryptex_vinci
    In a general sense: I created a CMS using PHP, where I created and named the database on my localhost. Now if I or my client uploads the CMS to some other server, how will it work? There is no database on that server, yet my PHP code refers to the database I created. How can I create a CMS which automatically creates its database and tables, so the client doesn't have to know about anything besides uploading the CMS?

    Read the article

  • Webcast Replay: Upgrading to Oracle Database 11g with Tom Kyte

    - by john.brust
    On Friday, we caught up with Tom Kyte, Oracle database expert and host of AskTom.oracle.com. Join us to hear what Oracle Database 11g Release 2 means to Tom. Learn how you can make 'change' safe and benefit from new functionality to make better use of your standby databases, capture and replay workloads, tune SQL performance, use Flashback for rapid recovery from human error, and more. Click to view the webcast (and Q&A) replay. We also invite you to visit our Oracle Database 11g upgrade page on otn.oracle.com.

    Read the article

  • SQL SERVER – Powershell – Importing CSV File Into Database – Video

    - by pinaldave
    Laerte Junior is my very dear friend and a PowerShell expert. At my request he has agreed to share his PowerShell knowledge with us. Laerte Junior is a SQL Server MVP and, through his technology blog and Simple-Talk articles, an active member of the Microsoft community in Brazil. He is a skilled Principal Database Architect, Developer, and Administrator, specializing in SQL Server and PowerShell programming, with over 8 years of hands-on experience. He holds a degree in Computer Science, has been awarded a number of certifications (including MCDBA), and is an expert in SQL Server 2000 / SQL Server 2005 / SQL Server 2008 technologies. Let us read the blog post in his own words.

    I was reading an excellent post from my great friend Pinal about loading data from CSV files, SQL SERVER – Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video, into SQL Server, and was honored to write another guest post on SQL Authority about the magic of PowerShell. The biggest thing at TechEd NA this year was PowerShell. Fellows, if you still don't know about it, you'd better run. Remember that Core Servers for SQL Server are the future, and consequently so is the shell. You don't want to be left out of this, right? Let's see some PowerShell magic now.

    To start our tour, we first need to download two functions from PowerShell and SQL Server Master Jedi Chad Miller: Out-DataTable and Write-DataTable. Save them in a module and add it to your profile. In my case, the module is called functions.psm1.

    To have some data to play with, I created 10 CSV files with the same content. I just put the SQL Server errorlog into a CSV file and created 10 copies of it:

        # Just create a CSV with data to import, using the SQL error log
        [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
        $ServerInstance = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $Env:ComputerName
        $ServerInstance.ReadErrorLog() | Export-Csv -Path "c:\SQLAuthority\ErrorLog.csv" -NoTypeInformation
        for ($Count = 1; $Count -le 10; $Count++) {
            Copy-Item "c:\SQLAuthority\ErrorLog.csv" "c:\SQLAuthority\ErrorLog$($Count).csv"
        }

    Now I have 10 CSV files in my path c:\SQLAuthority. Next it is time to create a table. In my case, the SQL Server is called R2D2, the database is SQLServerRepository, and the table is CSV_SQLAuthority:

        CREATE TABLE [dbo].[CSV_SQLAuthority](
            [LogDate] [datetime] NULL,
            [Processinfo] [varchar](20) NULL,
            [Text] [varchar](MAX) NULL
        )

    Let's play a little bit. I want to import all the CSV files from the path into the table synchronously:

        # Importing synchronously
        $DataImport = Import-Csv -Path (Get-ChildItem "c:\SQLAuthority\*.csv")
        $DataTable = Out-DataTable -InputObject $DataImport
        Write-DataTable -ServerInstance R2D2 -Database SQLServerRepository -TableName CSV_SQLAuthority -Data $DataTable

    Very cool, right? Let's do it asynchronously and in the background using PowerShell jobs:

        # If you want to do it all asynchronously
        Start-Job -Name 'ImportingAsynchronously' `
            -InitializationScript { Ipmo Functions -Force -DisableNameChecking } `
            -ScriptBlock {
                $DataImport = Import-Csv -Path (Get-ChildItem "c:\SQLAuthority\*.csv")
                $DataTable = Out-DataTable -InputObject $DataImport
                Write-DataTable -ServerInstance "R2D2" `
                                -Database "SQLServerRepository" `
                                -TableName "CSV_SQLAuthority" `
                                -Data $DataTable
            }

    Oh, but what if I have CSV files that are large and I want to import each one asynchronously? In this case, this is what should be done:

        Get-ChildItem "c:\SQLAuthority\*.csv" | % {
            Start-Job -Name "$($_)" `
                -InitializationScript { Ipmo Functions -Force -DisableNameChecking } `
                -ScriptBlock {
                    $DataImport = Import-Csv -Path $args[0]
                    $DataTable = Out-DataTable -InputObject $DataImport
                    Write-DataTable -ServerInstance "R2D2" `
                                    -Database "SQLServerRepository" `
                                    -TableName "CSV_SQLAuthority" `
                                    -Data $DataTable
                } -ArgumentList $_.FullName
        }

    How cool is that? Now for the fun stuff: let's schedule it in a SQL Server Agent job. If you are using SQL Server 2012, you can use the PowerShell job step. Otherwise you need to use a CmdExec job step calling PowerShell.exe. We will use the second option. First, create a ps1 file called ImportCSV.ps1 with the script above and save it in a path - in my case, c:\temp\automation. Just add these lines at the end:

        Get-Job | Wait-Job | Out-Null
        Remove-Job -State Completed

    Why? See my post Dooh PowerShell Trick–Running Scripts That has Posh Jobs on a SQL Agent Job. Remember, this trick applies to ALL scripts that use PowerShell jobs with any kind of scheduling tool (SQL Server Agent, Windows Scheduler). Create a job called ImportCSV and a step called Step_ImportCSV, and choose CmdExec. Then you just need to schedule it or run it. I also did a short video (with matching good background music); it is embedded in the original post.

    That's it, guys. C'mon, join me in the #PowerShellLifeStyle. You will love it. If you want to check what we can do with PowerShell and SQL Server, don't miss Laerte Junior's Live Meeting on July 18. You can find more information in: LiveMeeting VC PowerShell PASS–Troubleshooting SQL Server With PowerShell–English

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology, Video Tagged: Powershell

    Read the article

  • A Database and LDAP Ice Breaker Video

    - by mark.wilcox
    I made another GoAnimate video - this time it's about using LDAP for database passwords. Since it's on the free site, I didn't want to violate any terms of agreement, so it doesn't mention Oracle explicitly. But if you wanted to actually do what the animation talks about with an Oracle database, you would need to configure the database to use Oracle Enterprise User Security. EUS requires OVD or OID and works with most popular LDAP servers, including Active Directory and, of course, the newest member of the Oracle Directory family - Directory Server Enterprise Edition (aka the former Sun directory). So, if you are looking for a simple way to explain why you might want to use LDAP passwords with your databases - or maybe just a slight chuckle on a Friday afternoon - have a look at the video. -- Posted via email from Virtual Identity Dialogue

    Read the article

  • HPCM 11.1.2.2.x - How to find data in an HPCM Standard Costing database

    - by Jane Story
    When working with a Hyperion Profitability and Cost Management (HPCM) Standard Costing application, there is often a requirement to check data or allocated results using reporting tools, e.g. Smartview. To do this, you retrieve data directly from the Essbase databases related to your HPCM model. Running reports is covered in Chapter 9 of the HPCM User documentation. The aim of this blog is to provide a quick guide to finding this data for reporting in the HPCM-generated Essbase database in v11.1.2.2.x of HPCM.

    In order to retrieve data from an HPCM-generated Essbase database, it is important to understand each of the following dimensions and where data is located within them:
    - Measures dimension - identifies Measures
    - AllocationType dimension - identifies Direct Allocation data or Genealogy Allocation data
    - Point of View (POV) dimensions - there must be at least one, and a maximum of four
    - Business dimensions:
      - Stage Business dimensions - identified by the Stage prefix
      - Intra-Stage dimensions - identified by the _Intra suffix

    Essbase outlines and reporting are explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s02.html
    For additional details on reporting measures, please review this section of the documentation: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/apas03.html
    Reporting requirements in HPCM quite often start with identifying non-balanced items in the Stage Balancing report. The following documentation link provides help with identifying some of the items within that report: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/generatestagebalancing.html

    The following are some types of data upon which you may want to report:
    - Stage data: Direct Input, Assigned Input data, Assigned Output data, Idle Cost/Revenue, Unassigned Cost/Revenue, OverDriven Cost/Revenue
    - Direct Allocation data
    - Genealogy Allocation data

    Stage Data
    Stage data consists of:
    - Direct Input, i.e. input data, the starting point of your allocation, e.g. in Stage 1
    - Assigned Input data, i.e. the cost/revenue received from a prior stage (i.e. stage 2 and higher)
    - Assigned Output data, i.e. for each stage, the data that will be assigned forward (assigned post-stage data)
    Reporting on this data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s03.html
    To report Stage data, select:
    - Measures: Direct Input: CostInput, RevenueInput. Assigned Input (from previous stages): CostReceivedPriorStage, RevenueReceivedPriorStage. Assigned Output (to subsequent stages): CostAssignedPostStage, RevenueAssignedPostStage
    - AllocationType: DirectAllocation
    - POV: one member from each POV dimension
    - Stage Business dimensions: any members of the stage business dimensions for the stage you wish to see Stage data for
    - All other dimensions: NoMember

    Idle/Unassigned/OverDriven
    To view Idle, Unassigned or OverDriven cost/revenue, first select the stage for which you want to view this data. If multiple stages have unassigned/idle data, resolve the earliest first and re-run the calculation, as differences in early stages will create unassigned/idle data in later stages. Select:
    - Measures: Idle: IdleCost, IdleRevenue. Unassigned: UnAssignedCost, UnAssignedRevenue. OverDriven: OverDrivenCost, OverDrivenRevenue
    - AllocationType: DirectAllocation
    - POV: one member from each POV dimension
    - Dimensions in the stage with Unassigned/Idle/OverDriven data: all the Stage Business dimensions in that stage; zoom in on each dimension to find which individual members carry the Unassigned/Idle/OverDriven data
    - All other dimensions: NoMember

    Direct Allocation Data
    Direct allocation data shows the data received by a destination intersection from a source intersection where a direct assignment exists. Reporting on direct allocation data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s04.html
    To report direct allocation data, select:
    - Measures: CostReceivedPriorStage
    - AllocationType: DirectAllocation
    - POV: one member from each POV dimension
    - Stage Business dimensions: any members for the SOURCE and DESTINATION stage business dimensions for the direct allocations of the stage you wish to report on
    - All other dimensions: NoMember

    Genealogy Allocation Data
    Genealogy allocation data shows the indirect data relationships between stages. Genealogy calculations run in the HPCM Reporting database only. Reporting on genealogy data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s05.html
    Select:
    - Measures: CostReceivedPriorStage
    - AllocationType: GenealogyAllocation (IndirectAllocation in 11.1.2.1 and prior versions)
    - POV: one member from each POV dimension
    - Stage Business dimensions: any stage business dimension members from the STARTING, INTERMEDIATE and ENDING stages in the genealogy
    - All other dimensions: NoMember

    Notes
    If you still don't see data after checking the above, confirm that the calculation has been run. A couple of indicators can help with that:
    - Note the size of the Essbase cube before and after calculations, to ensure a calculation was run against the database you are examining.
    - Export the Essbase data to a text file to confirm that some data exists.
    - Examine the date and time in the task area to see when, if ever, calculations were run and what choices were used (e.g. genealogy choices).
    If data does not exist where you expect it, it could be that:
    - no calculations/genealogy were run,
    - no calculations were successfully run, or
    - the model/data at the feeder location were absent or incompatible, resulting in no allocation, e.g. no driver data.

    Smartview Invocation from HPCM
    From version 11.1.2.2.350 of HPCM (this version will be GA shortly), it is possible to invoke Smartview directly from HPCM. There is guided navigation before the Smartview invocation, and it is then possible to see the selected value(s) in Smartview.

    Click to Download HPCM 11.1.2.2.x - How to find data in an HPCM Standard Costing database (right-click or Option-click the link and choose "Save As..." to download this PDF file)

    Read the article

  • Oracle European Launch Event: Oracle Database In-Memory

    - by A&C Redaktion
    On June 17, Andy Mendelsohn, Oracle Executive Vice President Database Server Technologies, will open the European launch of Oracle Database In-Memory as keynote speaker at the Radisson Blu Hotel in Frankfurt. This topic is of central importance to our partners and customers, and the agenda of this launch event is accordingly unique: besides talks by beta customers (Postbank and CERN), the IDC analyst who will also share the stage with Larry Ellison at the HQ launch a week earlier in Redwood Shores, and other Oracle experts, there will be live demos. A panel discussion rounds off the program, and press and analyst briefings will run in parallel with the event. With the new, groundbreaking Oracle Database In-Memory option, customers benefit from significantly accelerated database performance for analytics, data warehousing, reporting, and online transaction processing (OLTP). This is so revolutionary that we warmly invite all our partners and their end customers to this outstanding event. You can register yourself and your end customers for this exclusive live event here.

    Read the article

  • Correct DB details produce “Database server was not found” (Prestashop Installation)

    - by Steve
    At stage 3 of the PrestaShop installation, I enter the DB details, which I know to be correct, and I receive the error: "Database server was not found. Please verify the login, password, and database server name fields." The server is localhost, and I have verified the database name and username. Why can PrestaShop not find the server? This occurs whether I choose InnoDB or MyISAM, and if I change the server from localhost to the public hostname I receive the same error.

    Read the article

  • INFORMATION INDEPTH NEWSLETTER Database Insider June Edition

    - by jgelhaus
    Top news stories include:

    Oracle #1 in RDBMS Share - Gartner released its 2011 worldwide RDBMS market share research based on total software revenues, Market Share: All Software Markets, Worldwide 2011, and Oracle remained first in worldwide RDBMS share in 2011.

    KScope12: The Oracle Development Tools User Group Conference - The Oracle Development Tools User Group (ODTUG) will hold its annual conference, known as Kscope, in San Antonio, Texas, June 24-28. We asked ODTUG's Vice President Monty Latiolais for a sneak preview - and to share strategies for getting the most out of the event.

    New Independent Report Endorses Oracle Database Firewall - In a new KuppingerCole Product Research Note, Martin Kuppinger concludes that Oracle Database Firewall "should definitely be evaluated and is amongst the recommended products in the database security market segment."

    Check out the full edition today!

    Read the article

  • AWS EC2 Oracle RDS connection to Oracle Database Instance

    - by llaszews
    Provisioning my Oracle database instance on AWS RDS was easy - just a few clicks! However, getting a connection to my Oracle cloud database was not as easy. A couple of things are not obvious (using Oracle SQL Developer): 1. You need to set up a database security group. 2. You need to use the endpoint for the host name. This video is the best one on the internet for explaining both points: http://www.youtube.com/watch?v=ocFURuX0eEw

    Read the article

  • Webcast Replay: Lower Cost of SAP w/Oracle Database 11g

    - by john.brust
    If you missed our live webcast Lower the Cost of SAP Applications with Oracle Database 11g earlier this week, the replay is now available. Watch the free on-demand webcast in which Gerhard Kuppler, Oracle's Director of SAP Alliances, talks about the #1 database for SAP Applications. As of April, Oracle Database 11g Release 2 is available for SAP. By upgrading you can lower the cost of your SAP applications infrastructure and improve your quality of service, so we encourage you to consider the upgrade. Note: (1) Turn off pop-up blockers if the slides do not advance automatically. (2) Slides are available for download.

    Read the article

  • Best design for a memory resident tool

    - by Andrew S.
    I apologize if this tends more toward design than programming, but here goes. What design would you recommend for a database that:
    - is memory resident,
    - must run on Windows, Linux and (at a stretch) the Mac,
    - accepts multiple queries simultaneously, and
    - has minimum overhead, since a search is expected to take <0.25 s?
    This program implements a domain-specific search. Think of it as a database, but one that takes advantage of domain-specific information to outperform a conventional database search (for example, with custom Oracle indexing). We have a custom data structure for our data. Our prototype is a simple exe that constructs the database in memory each time it is run. We were thinking that perhaps this program would suffice, augmented with sockets so it can listen for queries. This database will be static; its contents will change infrequently. We expect queries, and the solution, to be delivered via a web service.
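    As a rough illustration of the "prototype exe plus sockets" direction, here is a bare-bones sketch of a loopback TCP listener that answers one query per connection. It uses POSIX sockets (Windows would need the Winsock equivalents), lookup() is a hypothetical stand-in for the domain-specific search, and error handling and concurrency (e.g. a thread per connection, to accept simultaneous queries) are omitted:

        #include <string.h>
        #include <unistd.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <sys/socket.h>

        /* Hypothetical stand-in for the custom in-memory domain search. */
        static const char *lookup(const char *query)
        {
            (void)query;
            return "no results\n";
        }

        int main(void)
        {
            int srv = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr = {0};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
            addr.sin_port = htons(5555);

            bind(srv, (struct sockaddr *)&addr, sizeof addr);
            listen(srv, 16);

            for (;;) {
                int c = accept(srv, NULL, NULL);   /* serial; thread off per connection for concurrency */
                char buf[1024];
                ssize_t n = read(c, buf, sizeof buf - 1);
                if (n > 0) {
                    buf[n] = '\0';                 /* treat the payload as the query */
                    const char *ans = lookup(buf);
                    write(c, ans, strlen(ans));
                }
                close(c);
            }
        }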

    Read the article

  • Healthcare Mobile Database Synchronization Demonstration

    - by Jim Connors
    Like many of you, I learn best by getting my hands dirty. When confronted with the task of understanding a new set of products and technologies and figuring out how they might apply to a vertical industry like healthcare, I set out to create a demonstration. The video that follows aims to show how the Oracle embedded software portfolio can be applied to a healthcare application. The demonstration utilizes, among others, Java SE Embedded, Berkeley DB, Apache Tomcat, Oracle 11gR2, and Oracle Database Mobile Server. Eric Jensen gives a great critique and description of the demo here. To sum it up, we aim to show how live medical data can be collected on a medical device, stored in a local database, synchronized to a master database, and then propagated to a mobile phone (Android) application. Come take a look!

    Read the article

  • Oracle's Director of NoSQL Database Product Management talks with ODBMS.ORG

    - by thegreeneman
    I was pinged by one of my favorite database technology sites today, ODBMS.ORG, informing me that Dave Segleau, the Director of Oracle NoSQL Database product management, spent some time talking with their editor Roberto Zicari about the product. It's a great interview and I highly recommend the read. I think it's important to understand the connectivity that Oracle NoSQL Database (ONDB) has with BerkeleyDB, as it says a lot about the maturity of ONDB as it relates to data integrity and reliability. BerkeleyDB has been living the NoSQL life since the beginning of this transition, embracing the right-tool-for-the-job approach to data management. Several of the biggest names in NoSQL (e.g. LinkedIn's Voldemort) built their NoSQL scale-out solutions leveraging the robust BerkeleyDB storage engine under their distribution architectures. Oracle commercializing the same via ONDB makes perfect sense, given the demonstrated need for this category of technology.

    Read the article

  • MySQL for Beginners course - first steps to lowering your database TCO

    - by Antoinette O'Sullivan
    Thinking about lowering your database TCO by using the MySQL Server? Don't miss the chance to get training from the source! With the newly released MySQL for Beginners class, learn how this powerful relational database management system can make your life easier and more fun! This course covers all the basics and will get you on your way with a solid foundation. This instructor-led, hands-on class covers the fundamentals of SQL and relational databases, using MySQL as a teaching tool. Send information about this course release to a friend who might be considering getting started on the world's most popular small-footprint database.

    Read the article

  • Point-in-time restore of database backup?

    - by TiborKaraszi
    SQL Server 2005 added the STOPAT option to the RESTORE DATABASE command. This sounds great - we can stop at some point in time while the database backup process was running! Or can we? No, we can't. Here follows some tech background on why not, and then what the option is really meant for: a database backup includes all used extents and also all log records that were produced while the backup process was running (possibly older ones as well, to handle open transactions). When you restore such a backup, SQL Server...(read more)

    Read the article
