Search Results

Search found 41025 results on 1641 pages for 'in memory database'.

  • How to overcome shortcomings in reporting from EAV database?

    - by David Archer
    The major shortcomings of Entity-Attribute-Value database designs in SQL all seem to be related to querying and reporting on the data efficiently and quickly. Most of the information I've read on the subject warns against implementing EAV because of these problems, and because almost every application needs querying/reporting. I am currently designing a system where almost all of the fields necessary for data storage are not known at design/compile time and are defined by the end user of the system. EAV seems like a good fit for this requirement, but because of the problems I've read about, I am hesitant to implement it, as this system also has some pretty heavy reporting requirements. I think I've come up with a way around this, but would like to pose the question to the SO community.

    Given that a typical normalized (OLTP) database still isn't always the best option for running reports, a good practice seems to be having a "reporting" (OLAP) database where the data from the normalized database is copied, indexed extensively, and possibly denormalized for easier querying. Could the same idea be used to work around the shortcomings of an EAV design? The main downside I see is the increased complexity of transferring the data from the EAV database to the reporting one, as you may end up having to alter the tables in the reporting database as new fields are defined in the EAV database. But that is hardly impossible, and it seems an acceptable tradeoff for the increased flexibility of the EAV design. This downside also exists if I use a non-SQL data store (i.e. CouchDB or similar) for the main data storage, since all the standard reporting tools expect a SQL backend to query against. Do the issues with EAV systems mostly go away if you have a separate reporting database for querying?

    EDIT: Thanks for the comments so far. One of the important things about the system I'm working on is that I'm really only talking about using EAV for one of the entities, not everything in the system. The whole gist of the system is to be able to pull data from multiple disparate sources that are not known ahead of time and crunch the data to come up with some "best known" data about a particular entity. So every "field" I'm dealing with is multi-valued, and I'm also required to track history for each. The normalized design for this ends up being one table per field, which makes querying it kind of painful anyway.
    Here are the table schemas and sample data I'm looking at (obviously changed from what I'm working on, but I think it illustrates the point well):

    EAV Tables

        Person
        -------------------
        - Id  - Name      -
        -------------------
        - 123 - Joe Smith -
        -------------------

        Person_Value
        -------------------------------------------------------------------
        - PersonId - Source - Field       - Value         - EffectiveDate -
        -------------------------------------------------------------------
        - 123      - CIA    - HomeAddress - 123 Cherry Ln - 2010-03-26    -
        - 123      - DMV    - HomeAddress - 561 Stoney Rd - 2010-02-15    -
        - 123      - FBI    - HomeAddress - 676 Lancas Dr - 2010-03-01    -
        -------------------------------------------------------------------

    Reporting Table

        Person_Denormalized
        ----------------------------------------------------------------------------------------
        - Id  - Name      - HomeAddress   - HomeAddress_Confidence - HomeAddress_EffectiveDate -
        ----------------------------------------------------------------------------------------
        - 123 - Joe Smith - 123 Cherry Ln - 0.713                  - 2010-03-26                -
        ----------------------------------------------------------------------------------------

    Normalized Design

        Person
        -------------------
        - Id  - Name      -
        -------------------
        - 123 - Joe Smith -
        -------------------

        Person_HomeAddress
        ------------------------------------------------------
        - PersonId - Source - Value         - Effective Date -
        ------------------------------------------------------
        - 123      - CIA    - 123 Cherry Ln - 2010-03-26     -
        - 123      - DMV    - 561 Stoney Rd - 2010-02-15     -
        - 123      - FBI    - 676 Lancas Dr - 2010-03-01     -
        ------------------------------------------------------

    The "Confidence" field here is generated using logic that cannot be expressed easily (if at all) in SQL, so my most common operation besides inserting new values will be pulling ALL data about a person for all fields so I can generate the record for the reporting table. This is actually easier in the EAV model, as I can do a single query. In the normalized design, I end up having to do one query per field to avoid a massive Cartesian product from joining them all together.
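    To make the single-query-versus-query-per-field point concrete, here is a minimal sketch against the tables above (Person_Phone is a hypothetical second field table for the normalized case):

        -- EAV: one query returns every field, every source, and the full history
        -- for a person; the Confidence scoring then runs in application code.
        SELECT Field, Source, Value, EffectiveDate
        FROM Person_Value
        WHERE PersonId = 123
        ORDER BY Field, EffectiveDate DESC;

        -- Normalized: one query per field table, because joining the multi-valued
        -- field tables together would produce a Cartesian product.
        SELECT Source, Value, EffectiveDate FROM Person_HomeAddress WHERE PersonId = 123;
        SELECT Source, Value, EffectiveDate FROM Person_Phone       WHERE PersonId = 123;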

  • Memory issue regarding UIImageView on iPhone 4.0 / iPad

    - by Sagar Mane
    Hello all, my application is crashing due to low memory [received memory warning levels 1 + 2]. To trace this I used Instruments and came up with the following points.

    Test environment: a single view controller added on the window. When I don't use a UIImageView, real memory used is 3.66 MB. When I use a UIImageView with an image of size 25 KB, real memory used is 4.24 MB, almost 560 KB more than without the UIImageView, and the extra keeps adding up as I add more UIImageViews to the view. Below is the sample code for adding the UIImageView that I am referring to:

        UIImageView* iSplashImage = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"Default-Landscape.png"]];
        iSplashImage.frame = CGRectMake(0, 0, 320, 480);
        [self.window addSubview:iSplashImage];

    and in dealloc:

        if (iSplashImage) {
            [iSplashImage release];
            iSplashImage = nil;
        }

    The issue is that this 560 KB is not getting released, and after some time the application receives low memory warnings. Can anyone point out if I am missing something or doing something else wrong? My application uses lots of images in one session. Thanks in advance, Sagar

  • JRuby 1.7.0 will not install bundler given plenty of memory

    - by user678615
    I installed JRuby with rvm install jruby-1.7.0 and it ran out of memory when it tried to create the gemsets, so I started by trying to install bundler with the new version. This is what I get:

        ~> gem install bundler
        Error: Your application used more stack memory than the safety cap of 2048K.
          Specify -J-Xss####k to increase it (#### = cap size in KB).
          Specify -w for full StackOverflowError stack trace

    So I raised the limit, and I still got nothing even with a huge chunk of memory:

        ~> JRUBY_OPTS=-J-Xss1024m gem install bundler
        Error: Your application used more stack memory than the safety cap of 1024M.
          Specify -J-Xss####k to increase it (#### = cap size in KB).
          Specify -w for full StackOverflowError stack trace

    How can that not be enough? I run applications on less than that.

  • resizing arrays when close to memory capacity

    - by user548928
    So I am implementing my own hashtable in Java, since the built-in Hashtable has ridiculous memory overhead per entry. I'm making an open-addressed table with a variant of quadratic hashing, backed internally by two arrays, one for keys and one for values. I don't have the ability to resize, though. The obvious way to do it is to create larger arrays and then hash all of the (key, value) pairs from the old arrays into the new ones. This falls apart, though, when my old arrays take up over 50% of my current memory, since I can't fit both the old and new arrays in memory at the same time. Is there any way to resize my hashtable in this situation?

    Edit: the info I got on current Hashtable memory overheads is from here: How much memory does a Hashtable use? Also, for my current application, my values are ints, so rather than store references to Integers, I have an array of ints as my values.

  • Recovering Pictures & Movies from Formatted Memory Card

    - by Donotalo
    I thought I had copied all of the pictures and videos I'd taken with my digital camera (a Canon Digital IXUS 860 IS) to my computer. Then I formatted the memory card. Then I found I hadn't copied all of the files! I don't have any means of connecting the memory card to the computer except via the camera. But the camera doesn't show up as a removable device in My Computer, so programs like Glary Utilities and PC Inspector didn't find the drive. I didn't take any pictures after I formatted it. Is there any free software that can help me get the pictures and videos back? My memory card is a 4 GB SDHC card. Thanks.

  • kvm memory changes via virsh not propagating to vm

    - by kevintmckay
    Hi, I just started using KVM on RHEL 6, and after creating a VM I tried to increase its memory. Why don't the changes I made in the XML file propagate to the VM, even after bouncing the VM and restarting libvirt?

        [root@kvm01 qemu]# virsh dominfo dev-kvm01
        Id:             2
        Name:           dev-kvm01
        UUID:           9b2bf581-2807-3116-b176-60e9c0559943
        OS Type:        hvm
        State:          running
        CPU(s):         2
        CPU time:       1975.3s
        Max memory:     7864320 kB
        Used memory:    7864320 kB
        Persistent:     yes
        Autostart:      disable
        Security model: selinux
        Security DOI:   0
        Security label: system_u:system_r:svirt_t:s0:c47,c760 (enforcing)

        [iknowmed@dev-kvm01 ~]$ free
                     total       used       free     shared    buffers     cached
        Mem:       3632284    3614508      17776          0       3980    3491676
        -/+ buffers/cache:     118852    3513432
        Swap:      5668856          0    5668856

  • Terminal services and memory limits

    - by Mark Wassell
    Is there a way in Terminal Services to set limits on memory-related parameters for a process, for example working set size and, possibly, if it makes sense, total virtual memory allocation for the session? To turn the question around: we have an application which cannot allocate as much virtual memory when running on a terminal server as it can when running on a desktop PC (both of which I would expect to have a 2GB limit on user-mode address space), and I was wondering if there is another limit for processes or users on a terminal server. Perhaps even 2GB per user rather than per process.

  • Perl script and out of memory errors

    - by Kevin
    We have a midsized server with 48GB of RAM and are attempting to import a list of around 100,000 opt-in email subscribers into a new list management system written in Perl. From my understanding, Perl doesn't have imposed memory limits like PHP, and yet we continuously get internal server errors when attempting the import. When investigating the error logs, we see that the script ran out of memory. Since Perl doesn't have a setting to limit memory usage (as far as I can tell), why are we getting these errors? I doubt a small import like this is consuming 48GB of RAM. We have compromised and split the list into chunks of 10,000, but would like to figure out the root cause for future fixes. This is a CentOS machine with LiteSpeed as the web server.

  • Classic ASP on large memory server

    - by Steve Evans
    I have a client with a large classic ASP app that apparently is fairly memory intensive. I'm helping them migrate to new hardware running Win2k8 R2. They have 4 physical servers with 32GB of RAM each. I'm making the assumption that ASP apps run as a 32-bit process. So I see that we have two options:

    1. Enable web gardens on the application pool.
    2. Use the physical servers as VM hosts and split each box into, say, 4 web servers.

    Any thoughts on which path will give us better performance? I'm just not really sure how ASP will handle a machine with lots of memory, and I'm worried it won't really be able to address the memory well. (You can ignore all the obvious stuff like the increased maintenance of 16 web servers vs 4, or the flexibility virtualization gets us over physical servers, etc.)

  • Plesk on Windows Server 2008 memory usage

    - by Thomas
    I have a server running IIS and Plesk. There is a problem every day now in which the memory usage of PleskControlPanel.exe and w3wp.exe slowly increases. And at some point all of the sites hosted on the server go down. Restarting Plesk through its panel will reset its memory usage for a while, and restarting IIS will make it last even longer, but there is still the constant memory climbing problem. I cannot see anything by looking through the event viewer. Has anyone ever seen anything like this with Plesk on Windows? Thanks for any help.

  • SQL Server not releasing Memory

    - by noob2487
    I am using SQL Server 2005. I am running a job which processes around 100K records. The job runs fine and takes around 45 mins to execute, which is good. But after the job has finished, I can see the SQL Server 2005 instance still sitting there with around 900 MB of memory. I waited around 2 hrs but that memory was not released. Is there any process which takes care of memory here, something like GC (unpredictable)? Or am I doing something wrong?
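    This is expected behaviour rather than a leak: SQL Server caches data pages in its buffer pool and, by default, only gives memory back when the operating system signals pressure. If the instance has to coexist with other services, the usual approach is to cap it; a minimal sketch (the 512 MB value is illustrative, not a recommendation):

        -- Cap the instance's buffer pool so it stops growing past the limit.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 512;
        RECONFIGURE;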

  • Modifying kernel shared memory settings on a lion install

    - by andrewjl
    What's the location of the sysctl.conf file on Lion? In Snow Leopard it was /etc/sysctl.conf, but now that folder doesn't contain it anymore. Searching for the file in Spotlight yields no results. Have the shared memory settings been moved to a different conf file? What is its name?

    EDIT: I am trying to modify the kernel shared memory settings of the machine. When I didn't find the sysctl.conf file in the expected place, I created my own with the recommended settings and put it in the /etc directory. However, running sysctl -a still shows me that the old memory settings are in place. How do I go about modifying these on a Lion install?

  • Media center consumes all available memory when attempting to play music off of a server

    - by RCIX
    I have Windows 7 Ultimate, and recently, when I try to play a song off of my Twonky Media Server/Windows Media Connect share (based on an HP WHS with an Atom), it plays choppily. When I open Resource Monitor, it shows that after I tell the music to play, memory usage rapidly spikes to consume most, if not all, of the available memory on my system (excluding a couple hundred megabytes in standby). Why does it do this, and is there anything I can do to stop it?

    Edit: it happens when I attempt to browse the server's music, not just when I play music.

    Edit 2: the "ehshell" process is what consumes the memory; it appears to be something specific to Media Center. Moreover, the ehshell process doesn't die in this case.

    Edit 3: It only happens when browsing my Twonky library, and not my Windows Media Connect one.

  • Maximum memory allocation for 32bit linux kernel

    - by LedZeppelin
    I was reading this article, which talks about how the maximum amount of RAM dedicated to kernel usage in 32-bit Windows is 2GB, even when the total amount of RAM is 4GB: http://www.brianmadden.com/blogs/brianmadden/archive/2004/02/19/the-4gb-windows-memory-limit-what-does-it-really-mean.aspx

    Is this the same for 32-bit Linux environments like 32-bit Ubuntu 10.04? That is, is the maximum kernel allocation 2GB of RAM even if the total main memory is 4GB? If you increase the total amount of memory to 64GB of RAM by recompiling the kernel with the PAE option enabled, what is the maximum amount of RAM you can dedicate to kernel usage? Is it still 2GB? Or can you increase it?

  • Tuning MySQL to consume less memory

    - by Alex
    I have a VM which has 2GB RAM (full specs), and I am setting up a site which has one table in particular with over a million records. There's little or no usage of this particular database (perhaps once or twice a day), but simply running MySQL grinds the whole server to a halt. I've looked through the top results, but nothing is really denting the CPU; the memory seems to be the issue. The site isn't even live or taking requests yet. The memory situation looks like this:

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:          2006       1880        126          0          3         53
        -/+ buffers/cache:       1823        183
        Swap:         2047        345       1702

    Are there any good pointers for tuning MySQL to stop it hogging the system memory? Thanks very much.

    EDIT (requested by 8bit): http://tny.cz/b41a0b12
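    As a starting point (a sketch for inspection, not a tuning prescription), most of MySQL's footprint comes from a handful of buffers, and their current sizes can be read straight from the server before shrinking them in my.cnf:

        -- The InnoDB buffer pool is usually the single largest allocation.
        SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
        -- The MyISAM key cache.
        SHOW VARIABLES LIKE 'key_buffer_size';
        -- Per-connection buffers get multiplied by the connection limit.
        SHOW VARIABLES LIKE 'max_connections';
        SHOW VARIABLES LIKE '%buffer_size%';

    Lowering innodb_buffer_pool_size and key_buffer_size in my.cnf (and restarting mysqld) is the usual first step on a small box.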

  • innodb memory usage mysql

    - by Tiddo
    I have a small VPS with only 256MB of RAM, with a maximum burst up to 512MB. When I configure my VPS without InnoDB, it only uses 130MB of RAM, so that is no problem for me. But when I turn on InnoDB, the memory usage grows to about 300-400MB. Is it possible to run InnoDB such that I won't exceed 256MB? Preferably I don't want to use more than 100MB for InnoDB. I've already come across some sites which said I could limit the memory usage, but if I limit it to only 100MB, will the db run well enough (compared to, for example, the MyISAM storage engine)? If 100MB is too little memory for InnoDB, can you recommend any other storage engine which supports transactions?
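    The dominant knob here is innodb_buffer_pool_size, set in my.cnf and applied at restart; a sketch for checking what InnoDB has actually allocated against whatever cap is chosen:

        -- The "BUFFER POOL AND MEMORY" section of this output reports
        -- the total memory allocated by InnoDB.
        SHOW ENGINE INNODB STATUS;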

  • Asus P6X58-E WS motherboard memory limits

    - by Arsen Zahray
    I've just ordered the Asus P6X58-E WS motherboard, and now I'm looking for memory for it. It has 6 slots, but the strange thing is that the specifications say it is limited to 24GB of memory. I'm planning on using 8GB Kingston KVR1333D3D4R9S/8G sticks with it; using those, I'd theoretically have 48GB. Does the 24GB limitation mean that even if I install all 6 sticks, I will still be limited to 24GB of usable memory? Sorry if the question seems dumb, but I've never faced such a limitation before.

  • Dropping Cached Memory on FreeBSD

    - by user1066698
    I use a FreeNAS server, which is built on FreeBSD 8.2-RELEASE-p6. I use the ZFS file system with 13TB of HDD on a box with 8GB of physical RAM installed. It uses almost all of the installed RAM while processing requests; however, it still uses the same amount of memory at idle times, so this is becoming a problem sometimes. On my CentOS web server I use the following command in a cronjob to drop cached memory:

        sync; echo 3 > /proc/sys/vm/drop_caches

    However, this command does not work on my FreeNAS server. How can I drop cached memory on my FreeNAS box, which is built on FreeBSD 8.2? Thank you.

  • debian out of memory error server crash

    - by user42700
    Hi, the server keeps crashing due to Apache. Is there any way I can stop this? The server has 2GB of swap space and 3GB of RAM.

        May 25 03:33:41 server kernel: [ 3513.200719] [<c015959c>] out_of_memory+0x14e/0x17f
        May 25 03:33:41 server kernel: [ 3513.211491] Out of memory: kill process 2936 (apache2) score 87364 or a child
        May 25 04:35:30 server kernel: [ 7239.936995] [<c015959c>] out_of_memory+0x14e/0x17f
        May 25 04:35:30 server kernel: [ 7239.948878] Out of memory: kill process 2936 (apache2) score 88236 or a child
        May 25 05:42:57 server kernel: [11210.572510] [<c015959c>] out_of_memory+0x14e/0x17f
        May 25 08:13:23 server kernel: [ 0.000000] PM: Registered nosave memory: 00000000000a0000 - 0000000000100000

  • SQL SERVER – Get All the Information of Database using sys.databases

    - by pinaldave
    Earlier I wrote the blog article SQL SERVER – Finding Last Backup Time for All Database. In response to that article I received a very interesting script from SQL Server expert Matteo as a comment on the blog. He wrote a script using sys.databases which provides plenty of information about your databases. I suggest you run this on your databases and get to know their unknowns as well.

        SELECT database_id,
            CONVERT(VARCHAR(25), DB.name) AS dbName,
            CONVERT(VARCHAR(10), DATABASEPROPERTYEX(name, 'status')) AS [Status],
            state_desc,
            (SELECT COUNT(1) FROM sys.master_files WHERE DB_NAME(database_id) = DB.name AND type_desc = 'rows') AS DataFiles,
            (SELECT SUM((size*8)/1024) FROM sys.master_files WHERE DB_NAME(database_id) = DB.name AND type_desc = 'rows') AS [Data MB],
            (SELECT COUNT(1) FROM sys.master_files WHERE DB_NAME(database_id) = DB.name AND type_desc = 'log') AS LogFiles,
            (SELECT SUM((size*8)/1024) FROM sys.master_files WHERE DB_NAME(database_id) = DB.name AND type_desc = 'log') AS [Log MB],
            user_access_desc AS [User access],
            recovery_model_desc AS [Recovery model],
            CASE compatibility_level
                WHEN 60 THEN '60 (SQL Server 6.0)'
                WHEN 65 THEN '65 (SQL Server 6.5)'
                WHEN 70 THEN '70 (SQL Server 7.0)'
                WHEN 80 THEN '80 (SQL Server 2000)'
                WHEN 90 THEN '90 (SQL Server 2005)'
                WHEN 100 THEN '100 (SQL Server 2008)'
            END AS [compatibility level],
            CONVERT(VARCHAR(20), create_date, 103) + ' ' + CONVERT(VARCHAR(20), create_date, 108) AS [Creation date],
            -- last backup
            ISNULL((SELECT TOP 1
                CASE TYPE WHEN 'D' THEN 'Full' WHEN 'I' THEN 'Differential' WHEN 'L' THEN 'Transaction log' END
                + ' – ' + LTRIM(ISNULL(STR(ABS(DATEDIFF(DAY, GETDATE(), Backup_finish_date))) + ' days ago', 'NEVER'))
                + ' – ' + CONVERT(VARCHAR(20), backup_start_date, 103) + ' ' + CONVERT(VARCHAR(20), backup_start_date, 108)
                + ' – ' + CONVERT(VARCHAR(20), backup_finish_date, 103) + ' ' + CONVERT(VARCHAR(20), backup_finish_date, 108)
                + ' (' + CAST(DATEDIFF(second, BK.backup_start_date, BK.backup_finish_date) AS VARCHAR(4)) + ' seconds)'
                FROM msdb..backupset BK
                WHERE BK.database_name = DB.name
                ORDER BY backup_set_id DESC), '-') AS [Last backup],
            CASE WHEN is_fulltext_enabled = 1 THEN 'Fulltext enabled' ELSE '' END AS [fulltext],
            CASE WHEN is_auto_close_on = 1 THEN 'autoclose' ELSE '' END AS [autoclose],
            page_verify_option_desc AS [page verify option],
            CASE WHEN is_read_only = 1 THEN 'read only' ELSE '' END AS [read only],
            CASE WHEN is_auto_shrink_on = 1 THEN 'autoshrink' ELSE '' END AS [autoshrink],
            CASE WHEN is_auto_create_stats_on = 1 THEN 'auto create statistics' ELSE '' END AS [auto create statistics],
            CASE WHEN is_auto_update_stats_on = 1 THEN 'auto update statistics' ELSE '' END AS [auto update statistics],
            CASE WHEN is_in_standby = 1 THEN 'standby' ELSE '' END AS [standby],
            CASE WHEN is_cleanly_shutdown = 1 THEN 'cleanly shutdown' ELSE '' END AS [cleanly shutdown]
        FROM sys.databases DB
        ORDER BY dbName, [Last backup] DESC, NAME

    Please let me know if you find this information useful.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

  • SQLAuthority News – Bookmark – Deprecated Database Engine Features in SQL Server 2008

    - by pinaldave
    Whenever anybody asks me whether a specific feature is available in SQL Server 2008, or whether a feature will be disabled in future versions of SQL Server, I always point them to the following lists, where all the deprecated database engine features are documented:

    Deprecated Database Engine Features in SQL Server 2008 R2
    Deprecated Database Engine Features in SQL Server 2008

    These lists are quite helpful and everybody should refer to them once, as they hold many important details. For example, they say that "80 compatibility level and upgrade from version 80" will not be supported in the next version of SQL Server. If you are (by any chance) still using SQL Server 2000 today, you will not be able to upgrade it directly to the next version of SQL Server. It is very important to note that if you are using any feature of SQL Server in compatibility mode and you find it in the lists above, you need to start working on the replacement suggested in the article.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
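    A quick way to spot databases still parked on the deprecated 80 level is to query sys.databases (a minimal sketch; compatibility_level is exposed there in SQL Server 2005 and later):

        -- Databases at compatibility level 80 (SQL Server 2000) or older
        -- cannot be upgraded directly past the next version.
        SELECT name, compatibility_level
        FROM sys.databases
        WHERE compatibility_level <= 80;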

  • Free NOSQL database for use with C# client [closed]

    - by Mitten
    I've never used NoSQL databases before, but so far they seem like the best data storage solution for my project. I am going to implement a datamining application. The data I would like to mine is thousands of documents which cannot be imported into datamining applications directly. To make the import easier and faster (than importing thousands of documents), I am planning to import the documents into a NoSQL database first, and then import the NoSQL database into the datamining software. At the very least, once I have all the data in a NoSQL database, I should be able to code the simplest datamining logic myself.

    Am I correct that NoSQL databases let you create records of data, but don't mandate that all the records adhere to the same schema (the same column names/types as in a classic table-oriented SQL database)? I think for each document I would create a row/entry/object (not sure what the correct term in the NoSQL world is) with a string id, a few columns of unstructured text data, and dozens of columns of mostly datetime and integer types. As its name implies, NoSQL does not support SQL query syntax, but it does support locating an object by its unique id. Does NoSQL support querying objects using property=value syntax?

    Unfortunately most free NoSQL dbs only support Java/C++ clients. Which free NoSQL db would you recommend for a C# programmer?

  • SQLAuthority News – Mark the Date: October 16, 2013 – Introducing NuoDB Blackbirds: THE Distributed Database

    - by Pinal Dave
    I am very excited to announce, first on this blog, the release of NuoDB Blackbirds (NuoDB Release 2.0). NuoDB is my favorite application to work with data nowadays. They are increasingly gaining market share as well as bringing out new features with every new release. I was very excited when I learned that NuoDB is releasing their flagship 2.0 release on October 16, 2013. Interestingly enough, I will be in the USA while this release happens, watching it live during my daytime. Even if I had to stay up the entire night just to watch this release, I would do it. Here are the details of the announcement:

    Introducing NuoDB Blackbirds: THE Distributed Database
    Date: October 16, 2013
    Time: 1:00 PM EDT
    Location: Online
    Registration Link

    What is the best DBMS architecture to handle today's and tomorrow's evolving needs? The days of shared disk are over. The times are "a-changin" and IT infrastructure has to change with them. Join NuoDB live for the introduction of our latest major product release, NuoDB Blackbirds, and take a look at why the NuoDB distributed database architecture is the only answer for customers like Fathom Voice, a leading provider of Voice over IP (VoIP). NuoDB CEO Barry Morris welcomes Cameron Weeks, CEO of Fathom Voice, to discuss how his company is using the DBMS to break away from the pack and become the hottest player in VoIP. The webcast will include demonstrations of a single, logical database running in multiple geographies and a live Q&A.

    If for any reason you cannot watch it live, do not worry at all; just register at the Registration Link, and after the event you will get a link to watch it on demand. You can watch the launch event at any time if you have registered for the launch.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

  • Materialized View does not import properly when importing on a second instance of a database

    - by marinus
    When I import a database with the materialized view mv_mt into just one (Oracle) database, everything is OK:

        create materialized view mv_mt
        refresh complete
        next trunc( sysdate ) + 1
        as SELECT sysdate, media_type.* from media_type;

    But when I try to import the same database into a copy in another schema, I get the following errors:

        IMP-00017: following statement failed with ORACLE error 1:
         "BEGIN DBMS_JOB.ISUBMIT(JOB=>438,WHAT=>'dbms_refresh.refresh(''"ALEXANDRA"."MV_MT"'');',NEXT_DATE=>TO_DATE('2012-07-02:14:22:36','YYYY-MM-DD:HH24:MI:SS'),INTERVAL=>'sysdate + 1 / 24 / 60 / 6 ',NO_PARSE=>TRUE); END;"
        IMP-00003: ORACLE error 1 encountered
        ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
        ORA-06512: at "SYS.DBMS_JOB", line 100
        ORA-06512: at line 1
        IMP-00017: following statement failed with ORACLE error 23421:
         "BEGIN dbms_refresh.make('"ALEXANDRA"."MV_MT"',list=>null,next_date=>null,interval=>null,implicit_destroy=>TRUE,lax=>FALSE,job=>438,rollback_seg=>NULL,push_deferred_rpc=>TRUE,refresh_after_errors=>FALSE,purge_option=>1,parallelism=>0,heap_size=>0); END;"
        IMP-00003: ORACLE error 23421 encountered
        ORA-23421: job number 438 is not a job in the job queue
        ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
        ORA-06512: at "SYS.DBMS_IJOB", line 793
        ORA-06512: at "SYS.DBMS_REFRESH", line 86
        ORA-06512: at "SYS.DBMS_REFRESH", line 62
        ORA-06512: at line 1
        IMP-00017: following statement failed with ORACLE error 23410:
         "BEGIN dbms_refresh.add(name=>'"ALEXANDRA"."MV_MT"',list=>'"ALEXANDRA"."MV_MT"',siteid=>0,export_db=>'ORCL01'); END;"
        IMP-00003: ORACLE error 23410 encountered
        ORA-23410: materialized view "ALEXANDRA"."MV_MT" is already in a refresh group
        ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
        ORA-06512: at "SYS.DBMS_IREFRESH", line 484
        ORA-06512: at "SYS.DBMS_REFRESH", line 140
        ORA-06512: at "SYS.DBMS_REFRESH", line 125
        ORA-06512: at line 1

    Anyone any ideas?

    Regards, Marinus
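    For what it's worth, the errors say that job 438 and a refresh group for "ALEXANDRA"."MV_MT" already exist in the target instance, so the import collides with leftovers from a previous run. A hedged sketch of one cleanup path, assuming the copy's refresh state can be discarded before re-importing:

        -- Run in the target schema; the names come from the error text.
        -- DBMS_REFRESH.DESTROY drops the leftover refresh group itself.
        BEGIN
          DBMS_REFRESH.DESTROY(name => '"ALEXANDRA"."MV_MT"');
        END;
        /
        DROP MATERIALIZED VIEW "ALEXANDRA"."MV_MT";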

  • Database Backup History From MSDB in a pivot table

    - by steveh99999
    I knocked up a nice little query to display the backup history for each database in a pivot table format. I wanted to display the most recent full, differential, and transaction log backup for each database. Here's the SQL:

        WITH backupCTE AS
        (
            SELECT name,
                   recovery_model_desc,
                   [D] AS 'Last Full Backup',
                   [I] AS 'Last Differential Backup',
                   [L] AS 'Last Tlog Backup'
            FROM
            (
                SELECT db.name, db.recovery_model_desc, type, backup_finish_date
                FROM master.sys.databases db
                LEFT OUTER JOIN msdb.dbo.backupset a ON a.database_name = db.name
                WHERE db.state_desc = 'ONLINE'
            ) AS Sourcetable
            PIVOT
            (
                MAX(backup_finish_date) FOR type IN ([D], [I], [L])
            ) AS MostRecentBackup
        )
        SELECT * FROM backupCTE

    This gives one row per database, with a column for the most recent backup of each type. With this query, I can then build up some straightforward checks to ensure backups are scheduled and running as expected. For example, the following logic can be used:

    - WHERE [Last Full Backup] IS NULL -- the database has never been backed up
    - WHERE [Last Tlog Backup] < DATEADD(mi, -60, GETDATE()) AND recovery_model_desc <> 'SIMPLE' -- transaction log not backed up in the last 60 minutes
    - WHERE [Last Full Backup] < DATEADD(dd, -1, GETDATE()) AND [Last Differential Backup] < [Last Full Backup] -- no backup in the last day
    - WHERE [Last Differential Backup] < DATEADD(dd, -1, GETDATE()) AND [Last Full Backup] < DATEADD(dd, -8, GETDATE()) -- no differential backup in the last day when the last full backup is over 8 days old
