Search Results

Search found 54131 results on 2166 pages for 'database project'.


  • Suggestions to start a cross-platform project

    - by Gabriele
    I have a big project in my head: it should be cross-platform (Windows, Mac and Linux), online (client-server) and with 3D graphics. I would like some suggestions for starting with the right things. Currently I'm a PHP/MySQL coder; I used to code in C and Pascal in the DOS ages (Borland times ;)), and my C knowledge needs a refresh, but it's OK. I guess C++ is the right language. What platform and tools should I use to code? I can choose from all three platforms. My idea was to use Visual Studio 2010 C++, but I'm not sure if it supports native code. What kind of libraries should I use? I guessed OpenSSL for the login and OpenGL for the graphics part. What about audio or the GUI? Any other suggestions are welcome. I know it's a "BIG DEAL", but I'm in no rush and it'll be a free-time project, only for my pleasure. Thank you in advance.


  • Register Your Interest In Taking The Oracle Database 10g Certified Master Exam

    - by Brandye Barrington
    Due to the increasing demand for the Oracle Database 11g Certified Master Exam, the 10g version of the exam is being scheduled less frequently worldwide, to reserve space for delivery of the Oracle Database 11g Certified Master Exam. Since we have received several recent requests about the Oracle Database 10g Certified Master Exam, we would like to remind you that if you would like to take this exam, please be sure to register your interest so that Oracle University can gauge demand in each region. Otherwise, we recommend preparing for the Oracle Database 11g Certified Master Exam. We recognize the effort it takes to reach this level of certification and applaud your commitment! Register your interest with Oracle University today so that you can get closer to completing your certification path.


  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage on my application server, characterized by the CPUs suddenly going from close to 90% busy to almost completely idle for a few seconds. Here is an example of a drop out, shown in a snippet of vmstat data taken while the application server was under a heavy workload.

     # vmstat 1
      kthr      memory            page            disk          faults      cpu
      r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id
      1 0 0 130160176 116381952 0 16 0 0 0 0  0  0  0  0  0 207377 117715 203884 70 21  9
     12 0 0 130160160 116381936 0 25 0 0 0 0  0  0  0  0  0 200413 117162 197250 70 20  9
     11 0 0 130160176 116381920 0 16 0 0 0 0  0  0  1  0  0 203150 119365 200249 72 21  7
      8 0 0 130160176 116377808 0 19 0 0 0 0  0  0  0  0  0 169826  96144 165194 56 17 27
      0 0 0 130160176 116377800 0 16 0 0 0 0  0  0  0  0  1  10245   9376   9164  2  1 97
      0 0 0 130160176 116377792 0 16 0 0 0 0  0  0  0  0  2  15742  12401  14784  4  1 95
      0 0 0 130160176 116377776 2 16 0 0 0 0  0  0  1  0  0  19972  17703  19612  6  2 92
     14 0 0 130160176 116377696 0 16 0 0 0 0  0  0  0  0  0 202794 116793 199807 71 21  8
      9 0 0 130160160 116373584 0 30 0 0 0 0  0  0 18  0  0 203123 117857 198825 69 20 11

    This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including:

    - Unexpected JMeter behavior
    - Network contention
    - Java garbage collection
    - Application server thread pool problems
    - Connection pool problems
    - Database transaction processing
    - Database I/O contention

    Graphing the CPU %idle led to a breakthrough: several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again, looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" column (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:

                        extended device statistics
        r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
        0.0 2053.6    0.0  8234.3  0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0
        0.0 2162.2    0.0  8652.8  0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0
        0.0 1102.5    0.0 10012.8  0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0
        0.0   74.0    0.0  7920.6  0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0
        0.0  568.7    0.0  6674.0  0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0
        0.0 1358.0    0.0  5456.0  0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0
        0.0 1314.3    0.0  5285.2  0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0

    Here is a little more information about my database configuration:

    - The database and application server were running on two different SPARC servers.
    - Storage for the database was on a storage array connected via 8-gigabit Fibre Channel.
    - Data storage and log files were on different physical disk drives.
    - Reliable low-latency I/O was provided by battery-backed NVRAM.
    - Highly available: two Fibre Channel links accessed via MPxIO, and two mirrored cache controllers.
    - The log file physical disks were mirrored in the storage device.
    - Database log files were on a ZFS filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming.

    Why would I be getting service time spikes from my high-end storage?

    First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did. At first, I guessed that the disk service time spikes might be related to flushing the write-through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device:

     # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0

    The ZFS log device is configured in a similar manner to the one described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs. Here is the zpool configuration:

     # zpool status ZFS-db-41
       pool: ZFS-db-41
      state: ONLINE
       scan: none requested
     config:
             NAME                   STATE
             ZFS-db-41              ONLINE
               c3t60080E5...F4F6d0  ONLINE
             logs
               c3t60080E5...F8A7d0  ONLINE

    Now the I/O spikes look like this:

                        extended device statistics
        r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
        0.0 1053.5    0.0  4234.1  0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0
                        extended device statistics
        r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
        0.0 1131.8    0.0  4555.3  0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0
                        extended device statistics
        r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
        0.0 1167.6    0.0  4682.2  0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0
        0.0  162.2    0.0 19153.9  0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0
                        extended device statistics
        r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
        0.0 1247.2    0.0  4992.6  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0
        0.0   41.0    0.0    70.0  0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0
                        extended device statistics
        r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
        0.0 1241.3    0.0  4989.3  0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0
                        extended device statistics
        r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
        0.0 1193.2    0.0  4772.9  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0

    We can see the steady flow of 4K writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group.

    Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches.

    Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide.

    References:

    - The ZFS Intent Log
    - Solaris ZFS, Synchronous Writes and the ZIL Explained
    - ZFS Evil Tuning Guide: Cache Flushes
    - ZFS Evil Tuning Guide: Tuning ZFS for Database Performance
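
    For anyone who wants to reproduce the change, here is a minimal sketch of the commands involved. The mirrored-log form and the device names are assumptions for illustration, not the exact commands from this system.

     # Add a mirrored log device so that synchronous database log writes
     # land on dedicated disks (hypothetical device names).
     zpool add ZFS-db-41 log mirror c3tDEVICEAd0 c3tDEVICEBd0

     # Confirm the log vdev is attached, then watch service times settle.
     zpool status ZFS-db-41
     iostat -xn 30

     # Saved for another day, per the ZFS Evil Tuning Guide: the synchronous
     # write threshold can be tuned in /etc/system, e.g.
     # set zfs:zfs_immediate_write_sz = 0x8000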


  • Best scripting language for project [on hold]

    - by Dave
    This is a subjective question, but I don't know where else to ask it. I'd appreciate it if someone could direct me to an appropriate scripting language for my project. I'm a little new at this, so I'd appreciate any help. The project is a website that will display a list of photo subject groups (such as "nature", "people", "sports", etc.) on the home page. The photos will all be in subdirectories of the main photo directory (photos), and each subject group will correspond to a subdirectory of photos. For example, in the directory photos there might be 3 subdirectories, "nature", "people" and "sports", and in each of those subdirectories there will be the actual photos. The idea is that when the website owner wants to update/add/delete a subject group, all he has to do is add, delete or update a subdirectory of the photos directory. This means, I think, that I need a scripting language that can read the directories and files in the website and then send a web page built from that information. What is the simplest and easiest scripting language to do this in? Any ideas? Thanks
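
    PHP is one reasonable answer here, since it can read the filesystem and emit HTML in a few lines. A minimal sketch (the photos/ layout is taken from the question; everything else is illustrative):

     <?php
     // Minimal sketch: each subdirectory of photos/ is one subject group.
     // Assumes this script sits next to the photos/ directory.
     foreach (scandir('photos') as $group) {
         if ($group === '.' || $group === '..' || !is_dir("photos/$group")) {
             continue;
         }
         echo '<h2>' . htmlspecialchars($group) . "</h2>\n";
         foreach (glob("photos/$group/*.jpg") as $photo) {
             echo '<img src="' . htmlspecialchars($photo) . '">' . "\n";
         }
     }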


  • Database recovery model change notification report for SQL Server

    The database recovery model plays a crucial role in the recovery of a database. With several DBAs having access to a SQL Server instance, there are bound to be changes that are not communicated. In this tip we cover a monitoring solution we deployed at our company to alert the DBAs if a database recovery model is different from what is expected.
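
    As a hedged illustration of the kind of check such a monitoring job can run (the expected model below is an assumption; adapt it per database), compare sys.databases against your expectations:

     -- Minimal sketch: flag databases whose recovery model differs from
     -- what we expect (here, FULL for everything except tempdb).
     SELECT name, recovery_model_desc
     FROM sys.databases
     WHERE name <> 'tempdb'
       AND recovery_model_desc <> 'FULL';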


  • Applying to a company while personally working on a comparable project

    - by Developer Art
    This is going to be an unusual question, but here goes. I'm entertaining the thought of sending my documents to a place that develops a large web project of a social type. Social meaning people, communities, interaction and all that usual stuff. The issue is that I myself am working on something that falls into the social category in my private time. Now the question: is it wise to apply there under these circumstances? I think there may be issues of intellectual ownership if I develop something similar to what exists or will exist in that company's work. On the other hand, the web is full of social places (even this site is one of them); many of them use the same ideas and move in the same direction, and it seems to work for everyone. It's hard to come up with something that hasn't been tried yet by somebody else, so it's all basically reuse of commonly available ideas and experience. What I'm working on is not a functional equivalent; it's rather largely different. There may be some intersections, but on a large scale it is not an equivalent. And whatever features might coincide, they already exist everywhere on the web anyway. Also, the technology stacks are entirely different, so the issue of directly copying out parts of the code is probably not applicable. I plan to say up front that I'm engaged in a personal project of mine and let them see if it represents a problem for them. What do you think? Am I making things up, or is there really an issue?


  • open source database project

    - by Jeff V
    What is the best way to build an open source database? I would like to build a database of all vehicles and the related maintenance information (i.e. oil weight, quantity, tire pressure, windshield wipers, etc.). Currently this information is fragmented, or just not put online in an open way. Once collection begins, I would want to import the data into a DB and then be able to distribute it freely. Is there a process (site or group) through which I can start gathering this information in a reliable and verifiable way? Are there any issues that I should watch out for?
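
    As a starting point, a minimal schema sketch (all table and column names are illustrative assumptions) might look like this:

     -- Hypothetical starting schema for vehicles and their maintenance specs.
     CREATE TABLE vehicle (
         vehicle_id  INTEGER PRIMARY KEY,
         make        VARCHAR(50) NOT NULL,
         model       VARCHAR(50) NOT NULL,
         model_year  INTEGER     NOT NULL
     );

     CREATE TABLE maintenance_spec (
         spec_id     INTEGER PRIMARY KEY,
         vehicle_id  INTEGER NOT NULL REFERENCES vehicle (vehicle_id),
         item        VARCHAR(100) NOT NULL, -- e.g. 'Oil weight', 'Tire pressure'
         spec_value  VARCHAR(100) NOT NULL, -- e.g. '5W-30', '32 psi'
         source_url  VARCHAR(500)           -- where the value can be verified
     );

    Recording a source URL per value is one way to keep the data verifiable, which speaks to the reliability concern in the question.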


  • Everything you wanted to know about private database clouds, but were afraid to ask

    - by B R Clouse
    Private Database Clouds have come into their own, and will be a prominent topic at Oracle OpenWorld this year. In fact, while most exhibits will be open from Monday through Wednesday, the Private Database Clouds demos will be available from Sunday afternoon all the way through Thursday evening. In addition to the demonstration choices, numerous speaking sessions address Private Database Clouds, including a general session on Monday. The demos and discussions will help you chart your path to cloud computing.


  • Develop an android and iPhone application with shared database

    - by Bongo
    I have a great idea for a smartphone application, and I want to develop an application suited for both Android and iPhone. In addition, I need to use a spatial database for geo indexing that will be shared by both applications. I am new to this app world and I have some questions. Is there a way to develop for both platforms? I know Java but not Objective-C. My guess is that I need to separate the database from the client code to support both applications. What are the best cloud computing providers with spatial database support that can host the server? Do I need two hosting servers, or is there one server that can support both of them? Which database provider supports geo indexing and this integration? I prefer providers with reasonable free quotas. Thanks.
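
    One common pattern for the shared database is a single backend that both apps call over HTTP. The question doesn't name a provider, so purely as an illustration, here is what a geo-indexed table could look like on PostgreSQL with PostGIS:

     -- Hypothetical points-of-interest table with a spatial index.
     CREATE TABLE poi (
         id   SERIAL PRIMARY KEY,
         name TEXT NOT NULL,
         geom GEOGRAPHY(Point, 4326) NOT NULL
     );
     CREATE INDEX poi_geom_idx ON poi USING GIST (geom);

     -- Example query: points of interest within 1 km of a location.
     SELECT name
     FROM poi
     WHERE ST_DWithin(geom, ST_SetSRID(ST_MakePoint(-73.99, 40.73), 4326)::geography, 1000);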


  • The Future of the Database Begins Soon: Oracle Database In-Memory Launch on June 10, 2014

    - by user645740
    A revolutionary new feature is coming in the history of the Oracle database engine. Larry Ellison first spoke publicly about Database In-Memory at OpenWorld. The launch webcast will be held on June 10, 2014, and you can register for it here: June 10: Oracle CEO Larry Ellison Live on the Future of Database Performance http://www.oracle.com/us/dm/sev100306382-ww-ww-lw-wi1-ev-2202435.html 10:00 a.m. PT - 11:30 a.m. PT, which is 19:00-20:30 CET for us. Oracle Database In-Memory executes lightning-fast queries in real time; it can speed up queries by orders of magnitude, and transactions become faster as well, all without any changes to the applications! Oracle Database In-Memory: Powering the Real-Time Enterprise. Watch the launch event!


  • Oracle Database 12c Is Available for Download

    - by user645740
    The NEW version of Oracle Database has been released: Oracle Database 12c, with numerous innovations and new features. One of the most important is the Multitenant option, built on the container database and pluggable database architecture, which primarily supports database consolidation and database cloud deployments. Automatic Data Optimization, with the help of the Heat Map, enables automatic compression and tiered placement of data. Beyond that, there are new features in security, availability, and many other areas. The new version can be downloaded for Linux x86-64, Solaris Sparc64, and Solaris (x86-64) from the Oracle Technology Network. You can register for the launch webcast here.
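
    As a hedged taste of the Multitenant architecture mentioned above (names and file paths are invented for illustration), creating and opening a new pluggable database looks like this:

     -- Minimal sketch: create a PDB from the seed in a 12c container database.
     CREATE PLUGGABLE DATABASE sales_pdb
       ADMIN USER pdb_admin IDENTIFIED BY ChangeMe123
       FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/',
                            '/u01/oradata/cdb1/sales_pdb/');

     ALTER PLUGGABLE DATABASE sales_pdb OPEN;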


  • Supporting and testing multiple versions of a software library in a Maven project

    - by Duncan Jones
    My company has several versions of its software in use by our customers at any one time. My job is to write bespoke Java software for the customers based on the version of software they happen to be running. I've created a Java library that performs many of the tasks I regularly require in a normal project. This is a Maven project that I deploy to our local Artifactory and pull down into other Maven projects when required. I can't decide the best way to support the range of software versions used by our customers. Typically, we have about three versions in use at any one time. They are normally backwards compatible with one another, but that cannot be guaranteed. I have considered the following options for managing this issue.

    1. Separate editions for each library version. I make a separate release of my library for each version of my company software. Using some Maven cunningness I could automatically produce a tested version linked to each of the then-current company software versions. This is feasible, but not without its technical challenges. The advantage is that this would be fairly automatic and my unit tests would definitely have executed against the correct software version. However, I would have to keep updating the versions supported and may end up maintaining a large collection of libraries.

    2. One supported version, but others tested. I support the oldest software version and make a release against that. I then perform tests with the newer software versions to ensure it still works. I could try to make this testing automatic by having some non-deployed Maven projects that import the software library and the associated test JAR and override the company software version used (see the sketch below). If those projects build, then the library is compatible. I could ensure these meta-projects are included in our CI server builds.

    I welcome comments on which approach is better, or a suggestion for a different approach entirely. I'm leaning towards the second option.
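
    A sketch of how the second option might look in the POM follows; all coordinates and version numbers are invented for illustration. The company software version becomes a property that per-version profiles override, so a CI job can run the same tests against each newer release (e.g. mvn test -Ptest-against-3.2).

     <!-- Hypothetical pom.xml fragments: default to the oldest supported
          version; profiles override it for compatibility test builds. -->
     <properties>
       <company.software.version>3.0.0</company.software.version>
     </properties>

     <dependencies>
       <dependency>
         <groupId>com.example</groupId>
         <artifactId>company-software-api</artifactId>
         <version>${company.software.version}</version>
       </dependency>
     </dependencies>

     <profiles>
       <profile>
         <id>test-against-3.1</id>
         <properties>
           <company.software.version>3.1.0</company.software.version>
         </properties>
       </profile>
       <profile>
         <id>test-against-3.2</id>
         <properties>
           <company.software.version>3.2.0</company.software.version>
         </properties>
       </profile>
     </profiles>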


  • Welcome to the Database Cloud CoverAge blog

    - by B R Clouse
    Welcome to the Database Cloud CoverAge blog, brought to you by Oracle's Database Cloud Architecture Team. We've spent the past few years developing best practices for database consolidation projects, for delivering Database as a Service, and for designing and driving corporate cloud initiatives. Many of our experiences and lessons learned are available in a growing collection of collateral that you can find on our OTN page. We decided to join the blogosphere to distill key concepts into short posts that you, our readers, can digest quickly. This medium also allows you to comment on our posts and collateral -- to share experiences, challenge our conclusions, critique our recipes, and help us choose topics to blog about. Watch for our next post, which will start a series on your journey into cloud computing.


  • User Already Exists in the Current Database - SQL Server

    - by bullpit
    I was moving a lot of databases from one SQL Server to another, and my applications were giving me errors saying "Login failed for <user>". The user was already in the database with appropriate rights to the allowed objects. I tried mapping the user to the database, and that's when I got this message: "User already exists in the current database". I googled and found a very useful post about orphaned users when moving databases. These are the steps you should take to fix this issue:

    First, make sure that this is the problem. This will list the orphaned users:

     EXEC sp_change_users_login 'Report'

    If you already have a login id and password for this user, fix it by doing:

     EXEC sp_change_users_login 'Auto_Fix', 'user'

    If you want to create a new login id and password for this user, fix it by doing:

     EXEC sp_change_users_login 'Auto_Fix', 'user', 'login', 'password'
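
    Note that on more recent versions of SQL Server, the same repair can be done with ALTER USER, which Microsoft recommends over the deprecated sp_change_users_login:

     -- Re-map an orphaned database user to the login of the same name.
     ALTER USER [user] WITH LOGIN = [user];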


  • Existing laravel 4 project gives 404 in browser

    - by Richard A
    I'm trying to set up a development environment on a virtual machine running Ubuntu 14.04 LTS using Nginx and HHVM. To do this, I followed the tutorial here. This goes well with a new installation of Laravel. But when I import an existing Laravel 4 project and try to open it on my actual machine (which will serve as the client, running Windows 7), I get a 404 File Not Found error on the screen while connecting to http://sav.savrichard.dev. I did add this to the hosts file with the correct IP address. The virtual machine receives the request and responds with a 404 error. How do I solve this error? I'm pretty new to Ubuntu, so I'm not exactly sure what's wrong. The project is located at /var/www/sav.savrichard.net. The server configuration is as follows:

     server {
         listen 80 default_server;

         root /var/www/sav.savrichard.net/public;
         index index.html index.htm index.php;

         server_name sav.savrichard.dev;

         access_log /var/log/nginx/localhost.sav.savrichard.dev-access.log;
         error_log /var/log/nginx/localhost.sav.savrichard.dev-error.log error;

         charset utf-8;

         location / {
             try_files $uri $uri/ /index.php?$query_string;
         }

         location = /favicon.ico { log_not_found off; access_log off; }
         location = /robots.txt  { log_not_found off; access_log off; }

         error_page 404 /index.php;

         include hhvm.conf;

         # Deny .htaccess file access
         location ~ /\.ht {
             deny all;
         }
     }

    And the hhvm.conf file is:

     location ~ \.(hh|php)$ {
         fastcgi_keep_conn on;
         fastcgi_pass 127.0.0.1:9000;
         fastcgi_index index.php;
         fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
         include fastcgi_params;
     }


  • Free eBook: 45 Database Performance Tips for Developers

    As a developer, if you need to go into the database and write queries, design tables, or determine the configuration of your SQL Server systems, these tips should help make sure you're not unnecessarily sacrificing database performance. This eBook has 45 easy tips to improve the performance of your indexes and T-SQL queries, and to hunt down problems within ORM tools and database design.


  • An Article on the Benefits of Oracle Database In-Memory

    - by user645740
    An article about the revolutionary new Oracle Database In-Memory feature has appeared on bitport.hu, titled "Ugorjunk szintet a döntéshozatal gyorsaságában!" ("Let's take decision-making speed to the next level!"). The most important benefits of Database In-Memory:

    - Applications stay unchanged; nothing needs to be modified in them.
    - Everything remains on disk just as before, and nothing changes in the backups either; life goes on as usual, the system "just" runs much faster!
    - It can be switched on and configured in moments, and it requires almost no consulting work: you only choose which objects it should apply to, what compression to use for them, and with what priority the data should be loaded into memory. Other vendors' partial solutions come with large implementation costs!
    - No new infrastructure element is needed, not even a new server.
    - It can be used with every Oracle Database based system: transactional systems, mixed workloads, and data warehouse, business analytics, and business intelligence systems alike.

    Oracle Database In-Memory
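
    To give a hedged sense of how little is involved (the table name is hypothetical), enabling the column store for a single table is one statement:

     -- Mark a table for population into the In-Memory column store,
     -- choosing compression and load priority.
     ALTER TABLE sales INMEMORY
       MEMCOMPRESS FOR QUERY HIGH
       PRIORITY HIGH;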


  • Automating the Backup of a SQL Server 2008 Express Database

    - by JaydPage
    Steps involved:
    1) Create a database backup script.
    2) Create a scheduled task to run the backup script.

    1. Create a database backup script.
    a) Download and install SQL Server Management Studio. This is a free tool available on the Microsoft website.
    b) Once Management Studio is installed, launch it and connect to the SQL Server instance that contains the database you want to back up.
    c) Right-click on the database, and in the menu choose Tasks -> Back Up...
    d) This opens a window where you can choose your backup options. Once you are happy with the options, click on the "Script" button near the top and select the "Script Action to File" option.
    e) Save the file.

    2. Create a scheduled task to run the backup script.
    a) Open up Windows Task Scheduler.
    b) Create a new task using the wizard. When asked to select a program, browse to C:\Program Files\Microsoft SQL Server\100\Tools\binn\SQLCMD.exe
    c) There are 2 arguments that need to be set: -S \SERVER_INSTANCE_NAME -i "PATH_OF_SQLBACKUP_SCRIPT", where SERVER_INSTANCE_NAME is the name of the instance of SQL Server that contains your database, e.g. (local), and PATH_OF_SQLBACKUP_SCRIPT is the path of your backup script, e.g. "C:\Program Files\Microsoft SQL Server\DatastoreBackup.sql"
    d) Adjust the task to run at the desired times and you are done.
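
    For reference, the script that Management Studio generates is essentially a single BACKUP DATABASE statement. A hedged sketch (database name and path invented) of what DatastoreBackup.sql might contain:

     -- Hypothetical full backup of one database to disk.
     BACKUP DATABASE [Datastore]
     TO DISK = N'C:\Backups\Datastore.bak'
     WITH NOFORMAT, NOINIT,
          NAME = N'Datastore-Full Database Backup',
          SKIP, STATS = 10;
     GO

    Running this through SQLCMD with the -S and -i arguments described in step 2c performs the backup unattended.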


  • Is it appropriate to run a complex enterprise-system configuration and migration project in a similar way to a Scrum development project?

    - by AndyM
    I'm just starting out on the implementation of a large enterprise-wide system, which has complex requirements and many stakeholders. The company has been through a high-level evaluation and tender process and decided to purchase a highly configurable "off-the-shelf" product rather than building an entirely bespoke system. The system will replace several existing systems and will require a significant amount of data migration. I'm thinking that the implementation of this system (which is expected to take over 2 years) could be run in a similar way to a Scrum software development project: the first sprints targeted at building the minimal possible functionality needed (across all functional areas), and then iteratively deepening the level of functionality according to stakeholder feedback. I think this would de-risk the project and help ensure a balance of stakeholder needs within the available time. The user stories are still the same; it's just that to implement them we have to work within the constraints of the pre-purchased system. When it comes to 'building stuff', instead of writing custom code the team will be configuring the off-the-shelf package, writing data conversion scripts and the like (and it should be a lot quicker!). Does this sound like a sensible approach? Does the Agile approach make sense here?


  • Getting Oracle VM VirtualBox Ready for an Oracle Database

    Everyone wants to go virtual, but getting started with Oracle's VM VirtualBox can be tricky. James Koopmann takes a quick look at installing Oracle VM VirtualBox 3.2.4, covering some of the features you as a database administrator or database developer might run across while trying to install an operating system or Oracle database.


  • Database Insider - June 2014 issue now available

    - by Javier Puerta
    The June issue of the Database Insider newsletter is now available. (Full newsletter here)

    NEWS

    June 10: Oracle CEO Larry Ellison Live on the Future of Database Performance
    At a live webcast on June 10 at Oracle's headquarters, Oracle CEO Larry Ellison is expected to announce the upcoming availability of Oracle Database In-Memory, which dramatically accelerates business decision-making by processing analytical queries in memory without requiring any changes to existing applications. Read More

    New Study Confirms Capital Expenditure Savings with Oracle Multitenant
    A new study finds that Oracle Multitenant, an option of Oracle Database 12c, drives significant savings in capital expenditures by enabling the consolidation of a large number of databases on the same number or fewer hardware resources. Read More

    Read the full newsletter here


  • Database Partitioning and Multiple Data Source Considerations

    - by Jeffrey McDaniel
    With the release of P6 Reporting Database 3.0, partitioning was added as a feature to help with performance and data management. Careful investigation of requirements should be conducted prior to installation to help improve overall performance throughout the lifecycle of the data warehouse and to prevent future maintenance that would result in data loss. Before installation, try to determine how many data sources and partitions will be required, along with the ranges. In P6 Reporting Database 3.0, any adjustments outside of the defaults must be made in the scripts, and changes will require new ETL runs for each data source.

    Considerations:

    1. Standard Edition or Enterprise Edition of Oracle Database. If you aren't using Oracle Enterprise Edition Database, the partitioning feature is not available. Multiple data sources are only supported on Enterprise Edition of Oracle Database.

    2. Number of data source ids for partitioning during configuration. This setting specifies how many partitions will be allocated for tables containing data source information. It requires some evaluation prior to installation, as there are repercussions if you don't estimate correctly. For example, suppose you configured the software for only 2 data sources and the partition setting was set to 2, but along came a 3rd data source. The necessary steps to accommodate this change are as follows:

    a) By default, 3 partitions are configured in the Reporting Database scripts. Edit the create_star_tables_part.sql script located in <installation directory>\star\scripts and search for "partition". You'll see P1, P2, P3. Add additional partitions and sub-partitions for P4 and so on. These will appear in several areas. (See the P6 Reporting Database 3.0 Installation and Configuration guide for more information on this and on how to adjust partition ranges; a hypothetical sketch of the resulting DDL shape appears below.)

    b) Run starETL -r. This will recreate each table with the new partition key. The effect of this step is that all table data will be lost except for history-related tables.

    c) Run starETL for each of the 3 data sources (with the data source number, e.g. starETL.bat "-s2", as defined in the P6 Reporting Database 3.0 Installation and Configuration guide).

    The best strategy for this setting is to overestimate based on possible growth. If during implementation it is deemed that there are at least 2 data sources with possibility for growth, it is a better idea to set this setting to 4 or 5, allowing room for the future and preventing a 'start over' scenario.

    3. The number of partitions and the number of months per partition are not specific to multi-data source. These settings work in accordance with a sub-partition of larger tables for time-related data, and they are dataset-specific optimizations. The number of months per partition is self-explanatory: optimally, the smaller the partition, the better the query performance, so if the dataset has an extremely large number of spread/history records, a lower number of months is optimal. Working in accordance with this setting is the number of partitions, which determines how many "buckets" will be created for the number-of-months setting.

    For example, if you kept the default of 3 partitions and selected 2 months for each partition, you would end up with:

    - 1st partition: 2 months
    - 2nd partition: 2 months
    - 3rd partition: all the remaining records

    Therefore, it is important to analyze your source database spread ranges and history settings when determining the proper number of months per partition and number of partitions to optimize performance. Also be aware that the DBA will need to monitor when these partition ranges will fill up and when additional partitions will need to be added. If you get to the final range partition and there are no additional range partitions, all data will be included in the last partition.
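
    To make the data source setting concrete, here is a hypothetical sketch of the kind of range-partitioned DDL involved. The real table, column, and partition names come from the create_star_tables_part.sql script; this invented example only shows the shape:

     -- Illustrative only: one range partition per data source id, with
     -- spare partitions so tables never need to be recreated.
     CREATE TABLE w_example_d (
         row_id        NUMBER,
         datasource_id NUMBER
     )
     PARTITION BY RANGE (datasource_id) (
         PARTITION p1 VALUES LESS THAN (2),
         PARTITION p2 VALUES LESS THAN (3),
         PARTITION p3 VALUES LESS THAN (4),
         PARTITION p4 VALUES LESS THAN (5),
         PARTITION p5 VALUES LESS THAN (MAXVALUE)
     );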


  • Copy Table to Another Database

    - by Derek Dieter
    There are a few methods of copying a table to another database, depending on your situation.

    Same SQL Server instance: if you are copying a table to a database that is on the same instance of SQL Server, the easiest solution is to use a SELECT INTO with fully qualified database names.

     SELECT * INTO Database2.dbo.TargetTable FROM Database1.dbo.SourceTable

    This will [...]


  • SQL Database Management Survey

    Win one of two $50 Amazon vouchers by entering our database management survey. We're finding out more about how SQL database professionals are doing backup and recovery, using cloud services, and more. Answer the short survey for a chance to win.

