Search Results

Search found 64086 results on 2564 pages for 'algebraic data types'.


  • Core Data grouping data in table

    - by OscarTheGrouch
    I am using Core Data to create a simple database app. I have an entity called "Game" which has a "creator". I have basically used the iPhone table view template and replaced the names, so the games are listed by creator. Currently the table view looks like this: Chris Ryder, Chris Ryder, Chris Ryder, Chris Ryder, Dan Grimaldi, Dan Grimaldi, Dan Grimaldi, Scott Ricardo, Tim Thermos, Tim Thermos. I am trying to group the table view so that each creator has only one cell and is listed once and only once, like this: Chris Ryder, Dan Grimaldi, Scott Ricardo, Tim Thermos. Any help or suggestions would be greatly appreciated.
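
    One way to get that grouping (a minimal sketch, assuming the "Game" entity has a string attribute named "creator"): fetch only the distinct creator values and drive the table view from the resulting array. Note that returnsDistinctResults only takes effect with a dictionary result type.

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Game"
                                                  inManagedObjectContext:context];
        [request setEntity:entity];
        [request setResultType:NSDictionaryResultType];   // required for distinct results
        [request setReturnsDistinctResults:YES];
        [request setPropertiesToFetch:[NSArray arrayWithObject:
            [[entity propertiesByName] objectForKey:@"creator"]]];

        NSError *error = nil;
        NSArray *creators = [context executeFetchRequest:request error:&error];
        // creators now holds one dictionary per distinct creator,
        // e.g. { creator = "Dan Grimaldi" }
        [request release];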

    Read the article

  • Insert new relationship data in core data

    - by michael
    Hoping someone can shed some light on what I might be doing wrong here. I am trying to add an "event" to a list of events, represented by a one-to-many (inverse) relationship: MyEvents <--- Event. This is the code: MyEvents *myEvents = (MyEvents *)[NSEntityDescription insertNewObjectForEntityForName:@"MyEvents" inManagedObjectContext:context]; NSLog(@"MYEVENTS: %@", myEvents); NSLog(@"EVENT: %@", event); [myEvents addEventObject:event]; My context is fine, and both myEvents and event print perfectly valid information. But when I try to add the event that I have (which is passed into this view controller, having been retrieved from Core Data previously) with [myEvents addEventObject:event]; it falls over with *** -[NSComparisonPredicate evaluateWithObject:]: message sent to deallocated instance. MyEvents and Event are just the default generated code; MyEvents contains only the relationship to Event. Thanks.

    Read the article

  • Maintaining integrity of Core Data Entities with many incoming one-to-many relationships

    - by Henry Cooke
    Hi all. I have a Core Data store which contains a number of MediaItem entities that describe, well, media items. I also have NewsItems, which have one-to-many relationships to a number of MediaItems. So far so good. However, I also have PlayerItems and GalleryItems which also have one-to-many relationships to MediaItems, so MediaItems are shared across entities. Given that many entities may have one-to-many relationships to MediaItems, how can I set up reciprocal relationships from a MediaItem back to all (one or more) of the entities that reference it, and furthermore, how can I implement rules to delete a MediaItem when the number of those reciprocal relationships drops to zero?
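
    One possible pattern (a sketch only; the inverse relationship names "newsItems", "playerItems" and "galleryItems" are illustrative, not from the post) is to give MediaItem one inverse to-many relationship per owning entity, and delete it once all of them are empty - for example by calling a helper like this from each owner's prepareForDeletion:

        @implementation MediaItem (OrphanCleanup)

        // Delete this MediaItem once no NewsItem, PlayerItem or GalleryItem references it.
        - (void)deleteIfOrphaned
        {
            BOOL orphaned = [[self valueForKey:@"newsItems"] count] == 0 &&
                            [[self valueForKey:@"playerItems"] count] == 0 &&
                            [[self valueForKey:@"galleryItems"] count] == 0;
            if (orphaned) {
                [[self managedObjectContext] deleteObject:self];
            }
        }

        @end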

    Read the article

  • Using Live Data in Database Development Work

    - by Phil Factor
    Guest Editorial for the Simple-Talk Newsletter, in which Phil Factor reacts with some exasperation to a report that a majority of companies are still using financial and personal data for both developing and testing database applications.

    If you routinely test your development work using real production data that contains personal or financial information, you are probably being irresponsible, and at worst risking a heavy financial penalty for your company. Surprisingly, over 80% of financial companies still do this. Plenty of data breaches and fraud have resulted from the use of real data for testing, and a data breach is a nightmare for any organisation that suffers one. The cost of each data breach averages out at around $7.2 million in the US and £1.9 million in the UK, in notification, escalation, credit monitoring, fines, litigation, legal costs, and lost business due to customer churn. 70% of data breaches are committed from within the organisation.

    Real data can be exploited in a number of ways for malicious or criminal purposes. It isn't just the obvious use of items such as name and address, date of birth, social security number, and credit card and bank account numbers: data can be exploited in many subtle ways, so there are excellent reasons to give a high priority to the detection and prevention of data breaches. You'll never successfully guess all the ways that real data can be exploited maliciously, or the ease with which it can be accessed.

    It would be silly to argue that developers never need access to a copy of the database containing live data. Developers sometimes need to track down a bug that can only be replicated on the data from the live database. However, it has to be done in a very restrictive harness. The law makes no distinction between development and production databases when a data breach occurs, so the data has to be held with all appropriate security measures in place. In Europe, the use of personal data for testing requires the explicit consent of the people whose data is being held. There are federal standards such as GLBA, PCI DSS and HIPAA, and most US states have privacy legislation. The task of ensuring compliance and tight security in such circumstances is an expensive and time-consuming overhead, and the developer is likely to face investigation if a data breach occurs, even if the company manages to stay in business.

    Ironically, the use of copies of live data isn't usually the most effective way to develop or test. Data is usually time-specific and is rarely still current by the time it is used for testing; existing data doesn't help much for new functionality; and every time the data is refreshed from production, any test data is likely to be overwritten. Nor is it always going to exercise all the 'edge' conditions that are likely to flush out bugs. You still have the task of simulating the dynamics of actual usage of the database, and here you have no alternative to creating 'spoofed' data.

    Because of the complexities of relational data, it used to be that there was no realistic alternative to developing and testing with live data. This is no longer the case: real data can be obfuscated, or test data can be created entirely from scratch, and the latter process, once impractical, is now well served by plenty of third-party tools. The process of obfuscation isn't risk-free, though: it must access the live data, and the success of the obfuscation has to be carefully monitored.

    Database data security isn't an exciting topic to you or me, but to a hacker it can be an all-consuming obsession, especially if there is financial or political gain involved. This is not the sort of adversary one would wish for, and it is far better to accept, and work with, the security restrictions that exist for using live data in database development work, especially when the tools exist to create large, realistic test databases that can be better for several aspects of testing.

    Read the article

  • Data Connection Library in SharePoint Server 2010

    - by Wayne
    What is a Data Connection Library in SharePoint Server 2010?

    A Data Connection Library in Microsoft SharePoint Server 2010 is a library that can contain two kinds of data connections: an Office Data Connection (ODC) file or a Universal Data Connection (UDC) file. Microsoft InfoPath 2010 uses data connections that comply with the Universal Data Connection (UDC) file schema and typically have either a *.udcx or *.xml file name extension. Data sources described by these data connections are stored on the server and can be used in standard form templates and browser-enabled form templates.

    How to create a SharePoint Server Data Connection Library?

    1. Browse to a SharePoint Server 2010 site on which you have at least Design permissions. If you are on the root site, create a new site before you continue with the next step.
    2. On the Site Actions menu, click More Options.
    3. On the Create page, click Library under Filter By, and then click Data Connection Library.
    4. On the right side of the Create page, type a name for the library, and then click the Create button.
    5. Copy the URL of the new data connection library.

    How to create a new data connection file in InfoPath?

    1. Open InfoPath Designer 2010, click Blank Form, and then click Design Form.
    2. On the Data tab, click Data Connections, and then click Add.
    3. In the Data Connection Wizard, click Create a new connection to, click Receive data, and then click Next.
    4. Click the kind of data source that you are connecting to, such as Database, Web service, or SharePoint library or list, and then click Next.
    5. Complete the remaining steps in the Data Connection Wizard to configure your data connection, and then click Finish to return to the Data Connections dialog box.
    6. In the Data Connections dialog box, click Convert to Connection File.
    7. In the Convert Data Connection dialog box, enter the URL of the data connection library that you previously copied (delete "Forms/AllItems.aspx" and anything following it from the URL), enter a name for the data connection file at the end of the URL, and then click OK. It will take a few moments to convert and save the data connection file to the library.
    8. Confirm that the data connection was converted successfully by examining the Details section of the Data Connections dialog box while the name of the converted data connection is selected.
    9. Browse to the SharePoint data connection library, click the drop-down next to the name of the data connection, click Approve/Reject, click Approved, and then click OK.

    Take advantage of the SharePoint Data Connection Library and other useful features of the SharePoint family of products, which includes SharePoint Foundation 2010, MOSS 2007 and free SharePoint templates or web parts.

    Read the article

  • Implementation of "Automatic Lightweight Migration" for Core Data (iPhone)

    - by RickiG
    Hi, I would like to make my app able to do an automatic lightweight migration when I add new attributes to my Core Data model. In the guide from Apple this is the only info on the subject I could find: Automatic Lightweight Migration To request automatic lightweight migration, you set appropriate flags in the options dictionary you pass in addPersistentStoreWithType:configuration:URL:options:error:. You need to set values corresponding to both the NSMigratePersistentStoresAutomaticallyOption and the NSInferMappingModelAutomaticallyOption keys to YES: NSError *error; NSURL *storeURL = <#The URL of a persistent store#>; NSPersistentStoreCoordinator *psc = <#The coordinator#>; NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption, [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil]; if (![psc addPersistentStoreWithType:<#Store type#> configuration:<#Configuration or nil#> URL:storeURL options:options error:&error]) { // Handle the error. } My NSPersistentStoreCoordinator is initialized in this way: - (NSPersistentStoreCoordinator *)persistentStoreCoordinator { if (persistentStoreCoordinator != nil) { return persistentStoreCoordinator; } NSURL *storeUrl = [NSURL fileURLWithPath: [[self applicationDocumentsDirectory] stringByAppendingPathComponent: @"FC.sqlite"]]; NSError *error = nil; persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]]; if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeUrl options:nil error:&error]) { NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } return persistentStoreCoordinator; } I am having trouble seeing where and how I should add the Apple code to get Automatic Lightweight Migration working.
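
    Combining the two pieces is mostly mechanical (a sketch using the same names as the code above): build the options dictionary first, then pass it instead of nil when the store is added.

        - (NSPersistentStoreCoordinator *)persistentStoreCoordinator {
            if (persistentStoreCoordinator != nil) {
                return persistentStoreCoordinator;
            }
            NSURL *storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory]
                                  stringByAppendingPathComponent:@"FC.sqlite"]];
            // The two flags from Apple's documentation, passed via 'options' below.
            NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
                [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption,
                nil];
            NSError *error = nil;
            persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc]
                initWithManagedObjectModel:[self managedObjectModel]];
            if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                                                          configuration:nil
                                                                    URL:storeUrl
                                                                options:options   // was nil
                                                                  error:&error]) {
                NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
                abort();
            }
            return persistentStoreCoordinator;
        }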

    Read the article

  • Migrating a Large amount of data from old publishing site to new site

    - by tommizzle
    Hi, I am currently in the process of creating a new news/publishing site on the Movable Type platform. There are around 20 or so sites with 20,000+ rows of data to be moved/aggregated to ~8 sites (we have a number of location-specific sites and are going to aggregate the content from these into one single site for each niche). We have discussed how to do this and came to the conclusion that it would probably be better to hire somebody to do it (I could probably do it, but I'm limited on time and am sure that a specialist would be more efficient). So my questions to you guys are: 1) What kind of skill set should we look for in an applicant? 2) There will be a large amount of input from our side... is getting somebody to work remotely out of the question? 3) How long would a task like this traditionally take (I know this question is very subjective, but an estimation would be awesome)? 4) Do you have any recommendations for firms who would be able to take on a large task like this? Thanks in advance, Tom

    Read the article

  • How does one implement storage/retrieval of smart-search/mailbox features?

    - by humble_coder
    Hi All, I have a question regarding implementation of smart-search features. For example, consider something like "smart mailboxes" in various email applications. Let's assume you have your data (emails) stored in a database and, depending on the field for which the query will be created, you present different options to the end user. At the moment let's assume the Subject, Verb, Object approach… For instance, say you have the following: SUBJECTs: message, to_address, from_address, subject, date_received VERBs: contains, does_not_contain, is_equal_to, greater_than, less_than OBJECTs: ??????? Now, in case it isn't clear, I want a table structure (although I'm not opposed to an external XMLesque file of some sort) to store (and later retrieve/present) my criteria for smart searches/mailboxes for later use. As an example, using SVO I could easily store then reconstruct a query for "date between two dates" -- simply use "date greater than" AND "date less than". However, what if, in the same smart search, I wanted a "between" OR'ed with another criterion? You can see that it might get out of hand -- not necessarily in the query creation (as that is rather simplistic), but in the option presentation and storage mechanism. Perhaps I need to think on a more granular level. Perhaps I need to simply allow the user to select AND or OR for each entry independently instead of making it an ALL-OR-NOTHING type smart search (i.e. instead of MATCH ALL or MATCH ANY, I need to simply allow them to select -- I just don't want it to turn into a Hydra). Any input would be most appreciated. My apologies if the question is a bit incoherent. It is late, and my brain is toast. Best.
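
    One way to keep mixed AND/OR criteria manageable (a hedged sketch; every table and column name here is illustrative, not from the post) is to store each smart search as a small tree: groups carry the conjunction (ALL = AND, ANY = OR) and may nest, and the Subject/Verb/Object rules hang off a group.

        CREATE TABLE search_group (
            id         INTEGER PRIMARY KEY,
            parent_id  INTEGER REFERENCES search_group(id),    -- NULL for the root group
            match_mode VARCHAR(3) NOT NULL
                       CHECK (match_mode IN ('ALL', 'ANY'))
        );

        CREATE TABLE search_rule (
            id        INTEGER PRIMARY KEY,
            group_id  INTEGER NOT NULL REFERENCES search_group(id),
            subject   VARCHAR(50)  NOT NULL,    -- e.g. 'date_received'
            verb      VARCHAR(50)  NOT NULL,    -- e.g. 'greater_than'
            object    VARCHAR(255) NOT NULL     -- the user-supplied value
        );

    "Date between two dates" then becomes one ALL group holding a greater_than rule and a less_than rule, and OR-ing that with another criterion just means wrapping both under a parent ANY group.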

    Read the article

  • Spring Data Neo4J @Indexed(unique = true) not working

    - by Markus Lamm
    I'm new to Neo4J and I probably have an easy question. There are NodeEntities in my application; a property (name) is annotated with @Indexed(unique = true) to achieve uniqueness, like I do in JPA with @Column(unique = true). My problem is that when I persist an entity with a name that already exists in my graph, it works fine anyway, but I expected some kind of exception here...?! Here's an overview of my basic code: @NodeEntity public abstract class BaseEntity implements Identifiable { @GraphId private Long entityId; ... } public class Role extends BaseEntity { @Indexed(unique = true) private String name; ... } public interface RoleRepository extends GraphRepository<Role> { Role findByName(String name); } @Service public class RoleServiceImpl extends BaseEntityServiceImpl<Role> implements RoleService { private RoleRepository repository; @Override @Transactional public T save(final T entity) { return getRepository().save(entity); } } And this is my test: @Test public void testNameUniqueIndex() { final List<Role> roles = Lists.newLinkedList(service.findAll()); final String existingName = roles.get(0).getName(); Role newRole = new Role.Builder(existingName).build(); newRole = service.save(newRole); } That's the point where I expect something to go wrong! How can I ensure the uniqueness of a property without checking it myself?? THANKS IN ADVANCE FOR ANY IDEAS!! P.S.: I'm using neo4j 1.8.M07, spring-data-neo4j 2.1.0.BUILD-SNAPSHOT and Spring 3.1.2.RELEASE.

    Read the article

  • C# 4: Real-World Example of Dynamic Types

    - by routeNpingme
    I think I have my brain halfway wrapped around the Dynamic Types concept in C# 4, but can't for the life of me figure out a scenario where I'd actually want to use it. I'm sure there are many, but I'm just having trouble making the connection as to how I could engineer a solution that is better solved with dynamics as opposed to interfaces, dependency injection, etc. So, what's a real-world application scenario where dynamic type usage is appropriate?
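
    A commonly cited scenario (a hedged illustration, not from the post) is consuming loosely-typed data whose shape is only known at runtime - COM/Office automation objects, scripting objects, or deserialized JSON - where declaring an interface for every possible shape is impractical:

        using System;
        using System.Dynamic;

        class Demo
        {
            static void Main()
            {
                dynamic order = new ExpandoObject();   // a property bag resolved at runtime
                order.Id = 42;
                order.Customer = "Contoso";

                Print(order);                          // no shared interface or base class needed
            }

            static void Print(dynamic item)
            {
                // Member lookup happens at runtime; a misspelled member would
                // throw a RuntimeBinderException instead of a compile error.
                Console.WriteLine("{0}: {1}", item.Id, item.Customer);
            }
        }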

    Read the article

  • Silverlight 4 Data Binding with anonymous types.

    - by Anthony
    Does anyone know if you can use data binding with anonymous types in Silverlight 4? I know you can't in previous versions of Silverlight; you can only data-bind to public class properties, and anonymous type properties are internal. Just wondering if anyone has tried it in Silverlight 4? Thanks in advance

    Read the article

  • Strange error inserting new relationship data in core-data

    - by michael
    My app will allow users to create a personalised list of events from a large list of events. I have a table view which simply displays these events; tapping on one of them takes the user to the event details view, which has a button "add to my events". In this detailed view I own the original event object, retrieved via an NSFetchedResultsController and passed to the detailed view (via a table cell, the same as the Core Data Recipes sample). I have no trouble retrieving/displaying information from this "event". I am then trying to add it to the list of MyEvents, represented by a one-to-many (inverse) relationship.

    This code:

    NSManagedObjectContext *context = [event managedObjectContext];
    MyEvents *myEvents = (MyEvents *)[NSEntityDescription insertNewObjectForEntityForName:@"MyEvents" inManagedObjectContext:context];
    [myEvents addEventObject:event]; // ERROR

    And this code (suggested below):

    // would this add to or overwrite the "list" I am attempting to maintain?
    NSManagedObjectContext *context = [event managedObjectContext];
    MyEvents *myEvents = (MyEvents *)[NSEntityDescription insertNewObjectForEntityForName:@"MyEvents" inManagedObjectContext:context];
    NSMutableSet *myEvent = [myEvents mutableSetValueForKey:@"event"];
    [myEvent addObject:event]; // ERROR

    Both produce (at the line indicated by // ERROR):

    *** -[NSComparisonPredicate evaluateWithObject:]: message sent to deallocated instance

    It seems I may have missed something fundamental. I can't glean any more information through the use of debugging tools, with my knowledge of them.

    1) Is this a valid way to compile and store an editable list like this?
    2) Is there a better way?
    3) What could the deallocated instance in the error be?

    I have now modified the Event entity to have a to-many relationship called "myEvents" which refers to itself. I can add Events to this fine, and logging the object shows the correct memory addresses appearing for the relationship after an [event addMyEventObject:event];. The same failure happens right after this, however. I am still at a loss to understand what is going wrong. This is the backtrace:

    #0 0x01f753a7 in ___forwarding___ ()
    #1 0x01f516c2 in __forwarding_prep_0___ ()
    #2 0x01c5aa8f in -[NSFetchedResultsController(PrivateMethods) _preprocessUpdatedObjects:insertsInfo:deletesInfo:updatesInfo:sectionsWithDeletes:newSectionNames:treatAsRefreshes:] ()
    #3 0x01c5d63b in -[NSFetchedResultsController(PrivateMethods) _managedObjectContextDidChange:] ()
    #4 0x0002e63a in _nsnote_callback ()
    #5 0x01f40005 in _CFXNotificationPostNotification ()
    #6 0x0002bef0 in -[NSNotificationCenter postNotificationName:object:userInfo:] ()
    #7 0x01bbe17d in -[NSManagedObjectContext(_NSInternalNotificationHandling) _postObjectsDidChangeNotificationWithUserInfo:] ()
    #8 0x01c1d763 in -[NSManagedObjectContext(_NSInternalChangeProcessing) _createAndPostChangeNotification:withDeletions:withUpdates:withRefreshes:] ()
    #9 0x01ba25ea in -[NSManagedObjectContext(_NSInternalChangeProcessing) _processRecentChanges:] ()
    #10 0x01bdfb3a in -[NSManagedObjectContext processPendingChanges] ()
    #11 0x01bd0957 in _performRunLoopAction ()
    #12 0x01f4d252 in __CFRunLoopDoObservers ()
    #13 0x01f4c65f in CFRunLoopRunSpecific ()
    #14 0x01f4bc48 in CFRunLoopRunInMode ()
    #15 0x0273878d in GSEventRunModal ()
    #16 0x02738852 in GSEventRun ()
    #17 0x002ba003 in UIApplicationMain ()

    Cheers.

    Read the article

  • Unpacking tuple types in Scala

    - by jpalecek
    I was just wondering, can I decompose a tuple type into its components' types in Scala? I mean, something like this trait Container { type Element } trait AssociativeContainer extends Container { type Element <: (Unit, Unit) def get(x : Element#First) : Element#Second }
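
    One workaround (a sketch, with an illustrative instantiation) is to give the trait abstract type members for the components and bound Element by the pair of them, so signatures can name First and Second directly instead of Element#First:

        trait Container {
          type Element
        }

        trait AssociativeContainer extends Container {
          type First
          type Second
          type Element <: (First, Second)

          def get(x: First): Second
        }

        // Illustrative instantiation, not from the post:
        object IntToString extends AssociativeContainer {
          type First   = Int
          type Second  = String
          type Element = (Int, String)

          private var items = Map.empty[Int, String]
          def get(x: Int): String = items(x)
        }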

    Read the article

  • Design Patterns : Question about "Types"

    - by contactmatt
    Would someone please explain to me what the below paragraph means? This is a snippet from "Design Patterns: Elements of Reusable OO software" Part of an object's interface may be characterized by one type, and other parts by other types. Two objects of the same type need only share parts of their interfaces. Interfaces can contain other interfaces as subsets. - Design Patterns - Elements of Reusable OO software, pg 13
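
    Loosely, "type" in that passage means "an interface the object honors", and one object can honor several at once. A hedged Java-flavoured illustration (all names invented for the example):

        interface Readable { String read(); }
        interface Writable { void write(String s); }

        // An interface can contain other interfaces as subsets:
        interface ReadWrite extends Readable, Writable {}

        // Buffer's full interface is characterized partly by Readable and partly by
        // Writable; a read-only object of another class shares only the Readable part.
        class Buffer implements ReadWrite {
            private final StringBuilder data = new StringBuilder();
            public String read()          { return data.toString(); }
            public void   write(String s) { data.append(s); }
        }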

    Read the article

  • Determining if types alias to the same underlying type in C++

    - by emchristiansen
    I'd like to write a templated function which changes its behavior depending on template class types passed in. To do this, I'd like to determine the type passed in. For example, something like this: template <class T> void foo() { if (T == int) { // Sadly, this sort of comparison doesn't work printf("Template parameter was int\n"); } else if (T == char) { printf("Template parameter was char\n"); } } Is this possible?
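
    It is possible; two standard approaches (one sketched below) are template specialization, or a compile-time type check such as std::is_same from <type_traits> (C++11; Boost offers an equivalent for older compilers):

        #include <cstdio>
        #include <type_traits>

        template <class T>
        void foo() {
            if (std::is_same<T, int>::value) {
                std::printf("Template parameter was int\n");
            } else if (std::is_same<T, char>::value) {
                std::printf("Template parameter was char\n");
            } else {
                std::printf("Template parameter was something else\n");
            }
        }

        int main() {
            foo<int>();    // Template parameter was int
            foo<char>();   // Template parameter was char
            foo<long>();   // Template parameter was something else
        }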

    Read the article

  • Describing Types question

    - by user288245
    I have a bunch of types (e.g. LargePlane, SmallPlane) that could be in this collection I've made. How do I print something like "LargePlane" for each one? I've tried typeOf() and similar but it doesn't work. Should it go in a toString()? So that when I output the collection it states what type each item is.
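
    Assuming this is Java (a guess based on the toString() mention), the runtime type name is available from getClass(); a minimal sketch:

        // In a common base class, or overridden in each subclass:
        @Override
        public String toString() {
            return getClass().getSimpleName();   // prints e.g. "LargePlane"
        }

    With that in place, printing the collection (or each element) shows the concrete type of every item.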

    Read the article

  • Sensitive Data Storage - Best Practices

    - by Kenneth
    I recently started working on a personal project where I was connecting to a database using Java. This got me thinking. I have to provide the login information for a database account on the DB server in order to access the database. But if I hard code it in then it would be possible for someone to decompile the program and extract that login info. If I store it in an external setup file then the same problem exists only it would be even easier for them to get it. I could encrypt the data before storing it in either place but it seems like that's not really a fail safe either and I'm no encryption expert by any means. So what are some best practices for storing sensitive setup data for a program?
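
    One widely used mitigation (a sketch, assuming Java as in the post; the variable names are illustrative): keep the credentials out of the compiled artifact entirely, read them at runtime from the environment or a permission-restricted config file, and give that database account only the minimum rights it needs.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        public class Db {
            // Decompiling the program now reveals nothing; the secrets live in
            // the runtime environment, which can be locked down separately.
            public static Connection open() throws SQLException {
                String url  = System.getenv("APP_DB_URL");    // e.g. "jdbc:mysql://host/db"
                String user = System.getenv("APP_DB_USER");
                String pass = System.getenv("APP_DB_PASS");
                return DriverManager.getConnection(url, user, pass);
            }
        }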

    Read the article

  • The Best Ways to Lock Down Your Multi-User Computer

    - by Lori Kaufman
    Whether you’re sharing a computer with other family members or friends at home, or securing computers in a corporate environment, there may be many reasons why you need to protect the programs, data, and settings on the computers. This article presents multiple ways of locking down a Windows 7 computer, depending on the type of usage being employed by the users. You may need to use a combination of several of the following methods to protect your programs, data, and settings.

    Read the article

  • WCF/ADO.NET Data Services - Could not load type 'System.Data.Services.Providers.IDataServiceUpdateProvider'

    - by Sahil Malik
    When you try accessing ListData.svc, do you get the following error? Could not load type 'System.Data.Services.Providers.IDataServiceUpdateProvider' from assembly 'System.Data.Services, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'. Well, if you followed the instructions in Chapter 1 of my book to build your VM, you wouldn’t run into the above issue. But if you do, you need to install:

    For Windows Vista and Windows 2008 - http://www.microsoft.com/downloads/details.aspx?familyid=4B710B89-8576-46CF-A4BF-331A9306D555&displaylang=en
    For Windows 7 and Windows 2008 R2 - http://www.microsoft.com/downloads/details.aspx?familyid=79d7f6f8-d6e9-4b8c-8640-17f89452148e&displaylang=en

    Remember to: a) Install the x64 version, and b) Do an IISReset before trying again.

    Read the article

  • Automatic Standby Recreation for Data Guard

    - by pablo.boixeda(at)oracle.com
    Hi,

    Unfortunately, sometimes a standby instance needs to be recreated. This can happen for many reasons, such as lost archive logs, lost standby data files, or a failover, among others. This is why we wanted to have one script to recreate standby instances in an easy way.

    The script recreates the standby considering some prerequisites:

    - Database version should be at least 11gR1
    - A dummy instance started on the standby node (we are seeking to improve this so it won't be needed)
    - The Broker configuration hasn't been removed
    - In our case we have two TNSNAMES files, one for the standby creation (using the SID) and the other one for production using service names (including the Broker service name)
    - Some environment variables set up by the environment db script (like ORACLE_HOME, PATH...)
    - The directory tree should not have been modified on the standby host

    We are currently using it on our 11gR2 Data Guard tests. Any improvements will be welcome!

    #!/bin/ksh
    ###    NAME / VERSION
    ###       recrea_dg.sh   v.1.00
    ###
    ###    DESCRIPTION
    ###       Recreation of the standby
    ###
    ###    RETURNS
    ###       0 Standby creation OK
    ###       1 Failure
    ###
    ###    NOTES
    ###       This shell script MUST NOT be modified.
    ###       All required variables and constants are taken from the environment.
    ###
    ###    MODIFIED BY:    FECHA:        COMMENTS:
    ###    ---------------    ----------    -------------------------------------
    ###      Oracle           15/02/2011    Creation.
    ###

    ###
    ### Load the environment
    ###
    V_ADMIN_DIR=`dirname $0`
    . ${V_ADMIN_DIR}/entorno_bd.sh 1>>/dev/null
    if [ $? -ne 0 ]
    then
      echo "Error Loading the environment."
      exit 1
    fi

    V_RET=0
    V_DATE=`/bin/date`
    V_DATE_F=`/bin/date +%Y%m%d_%H%M%S`
    V_LOGFILE=${V_TRAZAS}/recrea_dg_${V_DATE_F}.log
    exec 4>&1
    tee ${V_FICH_LOG} >&4 |&
    exec 1>&p 2>&1

    ###
    ### Variables to recreate the Data Guard
    ###
    V_DB_BR=`echo ${V_DB_NAME}|tr '[:lower:]' '[:upper:]'`
    if [ "${ORACLE_SID}" = "${V_DB_NAME}01" ]
    then
            V_LOCAL_BR=${V_DB_BR}'01'
            V_REMOTE_BR=${V_DB_BR}'02'
    else
            V_LOCAL_BR=${V_DB_BR}'02'
            V_REMOTE_BR=${V_DB_BR}'01'
    fi

    echo " Getting local instance ROLE ${ORACLE_SID} ..."
    sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
    whenever sqlerror exit 1
    connect / as sysdba
    variable salida number
    declare
      v_database_role v\$database.database_role%type;
    begin
      select database_role into v_database_role from v\$database;
      :salida := case v_database_role
           when 'PRIMARY' then 2
           when 'PHYSICAL STANDBY' then 3
           else 4
         end;
    end;
    /
    exit :salida
    !
    case $? in
    1) echo " ERROR: Cannot get instance ROLE ." | tee -a ${V_LOGFILE} 2>&1
       V_RET=1 ;;
    2) echo " Local Instance with PRIMARY role." | tee -a ${V_LOGFILE} 2>&1
       V_DB_ROLE_LCL=PRIMARY ;;
    3) echo " Local Instance with PHYSICAL STANDBY role." | tee -a ${V_LOGFILE} 2>&1
       V_DB_ROLE_LCL=STANDBY ;;
    *) echo " ERROR: UNKNOWN ROLE." | tee -a ${V_LOGFILE} 2>&1
       V_RET=1 ;;
    esac

    if [ "${V_DB_ROLE_LCL}" = "PRIMARY" ]
    then
            echo "####################################################################" | tee -a ${V_LOGFILE} 2>&1
            echo "${V_DATE} - Recreating STANDBY Instance." | tee -a ${V_LOGFILE} 2>&1
            echo "" | tee -a ${V_LOGFILE} 2>&1
            echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_REMOTE_BR} will be removed" | tee -a ${V_LOGFILE} 2>&1
            echo "" | tee -a ${V_LOGFILE} 2>&1
            V_PRIMARY=${V_LOCAL_BR}
            V_STANDBY=${V_REMOTE_BR}
    fi

    if [ "${V_DB_ROLE_LCL}" = "STANDBY" ]
    then
            echo "####################################################################" | tee -a ${V_LOGFILE} 2>&1
            echo "${V_DATE} - Recreating STANDBY Instance." | tee -a ${V_LOGFILE} 2>&1
            echo "" | tee -a ${V_LOGFILE} 2>&1
            echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_LOCAL_BR} will be removed" | tee -a ${V_LOGFILE} 2>&1
            echo "" | tee -a ${V_LOGFILE} 2>&1
            V_PRIMARY=${V_REMOTE_BR}
            V_STANDBY=${V_LOCAL_BR}
    fi

    # Load the host variables
    PRY_HOST=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
    connect sys/${V_DB_PWD}@${V_PRIMARY} as sysdba
    select 'KEEP',host_name from v\$instance;
    EOF`
    SBY_HOST=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
    connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
    select 'KEEP',host_name from v\$instance;
    EOF`

    echo "el HOST primary es: ${PRY_HOST}" | tee -a ${V_LOGFILE} 2>&1
    echo "el HOST standby es: ${SBY_HOST}" | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1

    ##
    ## Stop the STANDBY instance
    ##
    V_DATE=`/bin/date`
    echo "${V_DATE} - Shutting down Standby instance" | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1

    SBY_STATUS=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
    connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
    select 'KEEP',status from v\$instance;
    EOF`
    if [ ${SBY_STATUS} = 'STARTED' ] || [ ${SBY_STATUS} = 'MOUNTED' ] || [ ${SBY_STATUS} = 'OPEN' ]
    then
            echo "${V_DATE} - Standby instance shutdown in progress..." | tee -a ${V_LOGFILE} 2>&1
            echo "" | tee -a ${V_LOGFILE} 2>&1
            echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
            sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
            whenever sqlerror exit 1
            connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
            shutdown abort
            !
    fi

    V_DATE=`/bin/date`
    echo ""
    echo "${V_DATE} - Standby instance stopped" | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1

    ##
    ## Remove the database files
    ##
    V_SBY_SID=`echo ${V_STANDBY}|tr '[:upper:]' '[:lower:]'`
    V_PRY_SID=`echo ${V_PRIMARY}|tr '[:upper:]' '[:lower:]'`
    ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/data/*.dbf
    ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch/*.arc
    ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/*.ctl
    ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.ctl
    ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.rdo

    ##
    ## Startup nomount stby instance
    ##
    V_DATE=`/bin/date`
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "${V_DATE} - Starting DUMMY Standby Instance " | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
    ssh ${SBY_HOST} touch /home/oracle/init_dg.ora
    ssh ${SBY_HOST} 'echo "DB_NAME='${V_DB_NAME}'">>/home/oracle/init_dg.ora'
    ssh ${SBY_HOST} touch /home/oracle/start_dummy.sh
    ssh ${SBY_HOST} 'echo "ORACLE_HOME=/opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2 ">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "export ORACLE_HOME">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "PATH=\$ORACLE_HOME/bin:\$PATH">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "export PATH">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "ORACLE_SID='${V_SBY_SID}'">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "export ORACLE_SID">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "sqlplus -s /nolog <<-!" >>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "      whenever sqlerror exit 1 ">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "      connect / as sysdba ">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "      startup nomount pfile='\''/home/oracle/init_dg.ora'\''">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "! ">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'chmod 744 /home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'sh /home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'rm /home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'rm /home/oracle/init_dg.ora'

    ##
    ## TNSNAMES change, specific for RMAN duplicate
    ##
    V_DATE=`/bin/date`
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "${V_DATE} - Setting up TNSNAMES in PRIMARY host " | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
    ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.inst /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora'

    V_DATE=`/bin/date`
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "${V_DATE} - Starting STANDBY creation with RMAN.. " | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
    rman <<-! >>${V_LOGFILE}
    connect target sys/${V_DB_PWD}@${V_PRIMARY}
    connect auxiliary sys/${V_DB_PWD}@${V_STANDBY}
    run {
    allocate channel prmy1 type disk;
    allocate channel prmy2 type disk;
    allocate channel prmy3 type disk;
    allocate channel prmy4 type disk;
    allocate auxiliary channel stby type disk;
    duplicate target database for standby from active database
    dorecover
    spfile
    parameter_value_convert '${V_PRY_SID}','${V_SBY_SID}'
    set control_files='/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/control01.ctl','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/control02.ctl'
    set db_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/'
    set log_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/'
    set 'db_unique_name'='${V_SBY_SID}'
    set log_archive_config='DG_CONFIG=(${V_PRIMARY},${V_STANDBY})'
    set fal_client='${V_STANDBY}'
    set fal_server='${V_PRIMARY}'
    set log_archive_dest_1='LOCATION=/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch DB_UNIQUE_NAME=${V_SBY_SID} MANDATORY VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
    set log_archive_dest_2='SERVICE="${V_PRIMARY}"','SYNC AFFIRM DB_UNIQUE_NAME=${V_PRY_SID} DELAY=0 MAX_FAILURE=0 REOPEN=300 REGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
    nofilenamecheck
    ;
    }
    !
    V_RMAN_RET=$?
    V_DATE=`/bin/date`
    if [ ${V_RMAN_RET} -ne 0 ]
    then
            echo ""
            echo "${V_DATE} - Error creating STANDBY instance"
            echo ""
            echo "********************************************************************************"
    else
            echo ""
            echo "${V_DATE} - STANDBY instance created SUCCESSFULLY "
            echo ""
            echo "********************************************************************************"
    fi

    sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
            whenever sqlerror exit 1
            connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
            alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=${SBY_HOST})(PORT=1544))' scope=both;
            alter system set service_names='${V_DB_NAME}.eu.roca.net,${V_SBY_SID}.eu.roca.net,${V_SBY_SID}_DGMGRL.eu.roca.net' scope=both;
            alter database recover managed standby database using current logfile disconnect from session;
            alter system set dg_broker_start=true scope=both;
    !

    ##
    ## TNSNAMES change, back to Production Mode
    ##
    V_DATE=`/bin/date`
    echo " " | tee -a ${V_LOGFILE} 2>&1
    echo "${V_DATE} - Restoring TNSNAMES in PRIMARY " | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
    ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.prod /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora'

    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "${V_DATE} -  Waiting for media recovery before check the DATA GUARD Broker" | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
    sleep 200

    dgmgrl <<-! | grep SUCCESS 1>/dev/null 2>&1
        connect ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY}
        show configuration verbose;
    !
    if [ $? -ne 0 ] ; then
            echo "       ERROR: El status del Broker no es SUCCESS" | tee -a ${V_LOGFILE} 2>&1
            V_RET=1
    else
            echo "      DATA GUARD OK " | tee -a ${V_LOGFILE} 2>&1
            V_RET=0
    fi

    Hope it helps.

    Read the article

  • Passing data from one database to another database table (Access) (C#)

    - by SAMIR BHOGAYTA
    string conString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Backup.mdb;Jet OLEDB:Database Password=12345";
    OleDbConnection dbconn = new OleDbConnection(conString);   // connection string must be assigned before Open()
    OleDbCommand dbcommand = new OleDbCommand();
    try
    {
        if (dbconn.State == ConnectionState.Closed)
            dbconn.Open();
        string selQuery = "INSERT INTO [Master] SELECT * FROM [MS Access;DATABASE=" + "\\Data.mdb" + ";].[Master]";
        dbcommand.CommandText = selQuery;
        dbcommand.CommandType = CommandType.Text;
        dbcommand.Connection = dbconn;
        int result = dbcommand.ExecuteNonQuery();
    }
    catch (Exception ex)
    {
    }

    Read the article

  • SQL SERVER – Integration Services Balanced Data Distributor – SSIS Balanced Data Distributor

    - by pinaldave
    Microsoft SSIS Balanced Data Distributor (BDD) is a new SSIS transform. This transform takes a single input and distributes the incoming rows to one or more outputs uniformly via multithreading. The transform takes one pipeline buffer's worth of rows at a time and moves it to the next output in round-robin fashion. It is balanced and synchronous, so if one of the downstream transforms or destinations is slower than the others, the rest of the pipeline will stall; the transform therefore works best if all of the outputs have identical transforms and destinations. Download SQL Server Integration Services Balanced Data Distributor. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Yes, you can benefit from both data and backup compression

    - by AaronBertrand
    Earlier today, MSSQLTips posted a backup compression tip by Thomas LaRock ( blog | twitter ). In that article, Tom states: "If you are already compressing data then you will not see much benefit from backup compression." I don't want to argue with a rock star, and I will concede that he may be right in some scenarios. Nonetheless, I tweeted that "it depends;" Thomas then asked for "an example where you have data comp and you also see a large benefit from backup comp?" My initial reaction came about...(read more)

    Read the article

  • Update/Insert With ADF Web Service Data Control

    - by shay.shmeltzer
    The Web service data control (WSDC) in ADF is a powerful feature that allows you to easily build a UI on top of WS interfaces exposed by other systems. However, when you drag a WSDC to a page, you usually get a set of output components where the data is shown. So how would you actually do an update operation on those values? The answer is that you need a call to another method in your WSDC that does the update - but what if you want to pass it the actual values that you get from the get method you invoked before? Here is a demo showing how to do that. The two tricks shown here are: changing the properties of items in the DC to be updateable, which gives you inputText fields instead of outputText fields; and passing currentRow.dataProvider to the update method (and choosing the right iterator for this).

    Read the article
