Search Results

Search found 61393 results on 2456 pages for 'data integration'.

  • Maintaining integrity of Core Data Entities with many incoming one-to-many relationships

    - by Henry Cooke
    Hi all. I have a Core Data store which contains a number of MediaItem entities that describe, well, media items. I also have NewsItems, which have one-to-many relationships to a number of MediaItems. So far so good. However, I also have PlayerItems and GalleryItems which also have one-to-many relationships to MediaItems. So MediaItems are shared across entities. Given that many entities may have one-to-many relationships, how can I set up reciprocal relationships from a MediaItem to all (1 or more) of the entities which have relationships to it and, furthermore, how can I implement rules to delete MediaItems when the number of those reciprocal relationships drops to 0?

    Read the article

  • Using Live Data in Database Development Work

    - by Phil Factor
    Guest Editorial for Simple-Talk Newsletter... in which Phil Factor reacts with some exasperation when coming across a report that a majority of companies were still using financial and personal data for both developing and testing database applications. If you routinely test your development work using real production data that contains personal or financial information, you are probably being irresponsible, and at worst risking a heavy financial penalty for your company. Surprisingly, over 80% of financial companies still do this. Plenty of data breaches and fraud have resulted from the use of real data for testing, and a data breach is a nightmare for any organisation that suffers one. The cost of each data breach averages out at around $7.2 million in the US (£1.9 million in the UK) in notification, escalation, credit monitoring, fines, litigation, legal costs, and lost business due to customer churn. Around 70% of data breaches originate from within the organisation. Real data can be exploited in a number of ways for malicious or criminal purposes. It isn't just the obvious use of items such as name and address, date of birth, social security number, and credit card and bank account numbers: data can be exploited in many subtle ways, so there are excellent reasons to ensure that a high priority is given to the detection and prevention of any data breaches. You'll never successfully guess all the ways that real data can be exploited maliciously, or the ease with which it can be accessed. It would be silly to argue that developers never need access to a copy of the database containing live data. Developers sometimes need to track a bug that can only be replicated on the data from the live database. However, it has to be done in a very restrictive harness. The law makes no distinction between development and production databases when a data breach occurs, so the data has to be held with all appropriate security measures in place. In Europe, the use of personal data for testing requires the explicit consent of the people whose data is being held. In the US there are federal laws such as GLBA and HIPAA, industry standards such as PCI DSS, and most states have privacy legislation. The task of ensuring compliance and tight security in such circumstances is an expensive and time-consuming overhead. The developer is likely to face investigation if a data breach occurs, even if the company manages to stay in business. Ironically, the use of copies of live data isn't usually the most effective way to develop or test your database applications. Data is time-specific and usually out of date by the time it is used for testing; existing data doesn't help much for new functionality; and every time the data is refreshed from production, any test data is likely to be overwritten. It is also not always going to exercise all the 'edge' conditions that are likely to flush out bugs. You still have the task of simulating the dynamics of actual usage of the database, and here you have no alternative to creating 'spoofed' data. Because of the complexities of relational data, it used to be that there was no realistic alternative to developing and testing with live data. This is no longer the case: real data can be obfuscated, or test data can be created entirely from scratch. The latter used to be impractical, but there are now plenty of third-party tools to choose from. The process of obfuscation isn't risk-free: it must access the live data, and the success of the obfuscation has to be carefully monitored.
    Database security isn't an exciting topic to you or me, but to a hacker it can be an all-consuming obsession, especially if there is financial or political gain involved. This is not the sort of adversary one would wish for, and it is far better to accept, and work with, the security restrictions that exist for using live data in database development work, especially when tools exist to create large, realistic test data sets that are better for several aspects of testing.
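    To make the obfuscation idea concrete, here is a minimal Java sketch (the salt value, the fake-name pool and the column being masked are invented for illustration, not taken from the editorial): values are pseudonymised deterministically, so joins between masked tables still line up while the real values never leave production.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Minimal sketch: deterministic pseudonymisation of one field.
// A real masking job would also cover addresses, card numbers,
// free-text notes, and so on; names here are hypothetical.
public class Obfuscator {

    private static final String SALT = "per-project-secret";   // assumption: kept out of source control
    private static final String[] FAKE_NAMES = {
        "Alice Example", "Bob Sample", "Carol Test", "Dave Dummy"
    };

    // The same input always yields the same fake value, so foreign-key
    // joins between obfuscated tables still match up.
    public static String maskName(String realName) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest((SALT + realName).getBytes(StandardCharsets.UTF_8));
        int index = Math.floorMod(digest[0], FAKE_NAMES.length);
        return FAKE_NAMES[index];
    }

    public static void main(String[] args) throws Exception {
        System.out.println(maskName("Phil Factor"));   // prints the same fake name both times
        System.out.println(maskName("Phil Factor"));
    }
}
```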

    Read the article

  • Data Connection Library in SharePoint Server 2010

    - by Wayne
    What is a Data Connection Library in SharePoint Server 2010? A Data Connection Library in Microsoft SharePoint Server 2010 is a library that can contain two kinds of data connections: Office Data Connection (ODC) files and Universal Data Connection (UDC) files. Microsoft InfoPath 2010 uses data connections that comply with the Universal Data Connection (UDC) file schema and typically have either a *.udcx or *.xml file name extension. Data sources described by these data connections are stored on the server and can be used in standard form templates and browser-enabled form templates.
    How to create a SharePoint Server data connection library:
    1. Browse to a SharePoint Server 2010 site on which you have at least Design permissions. If you are on the root site, create a new site before you continue with the next step.
    2. On the Site Actions menu, click More Options.
    3. On the Create page, click Library under Filter By, and then click Data Connection Library.
    4. On the right side of the Create page, type a name for the library, and then click the Create button.
    5. Copy the URL of the new data connection library.
    How to create a new data connection file in InfoPath:
    1. Open InfoPath Designer 2010, click Blank Form, and then click Design Form.
    2. On the Data tab, click Data Connections, and then click Add.
    3. In the Data Connection Wizard, click Create a new connection to, click Receive data, and then click Next.
    4. Click the kind of data source that you are connecting to, such as Database, Web service, or SharePoint library or list, and then click Next.
    5. Complete the remaining steps in the Data Connection Wizard to configure your data connection, and then click Finish to return to the Data Connections dialog box.
    6. In the Data Connections dialog box, click Convert to Connection File.
    7. In the Convert Data Connection dialog box, enter the URL of the data connection library that you previously copied (delete "Forms/AllItems.aspx" and anything following it from the URL), enter a name for the data connection file at the end of the URL, and then click OK. It will take a few moments to convert and save the data connection file to the library.
    8. Confirm that the data connection was converted successfully by examining the Details section of the Data Connections dialog box while the name of the converted data connection is selected.
    9. Browse to the SharePoint data connection library, click the drop-down next to the name of the data connection, click Approve/Reject, click Approved, and then click OK.
    Take advantage of the Data Connection Library and the other useful features of the SharePoint family of products, which includes SharePoint Foundation 2010, MOSS 2007, and free SharePoint templates and web parts.

    Read the article

  • Implementation of "Automatic Lightweight Migration" for Core Data (iPhone)

    - by RickiG
    Hi, I would like to make my app able to do an automatic lightweight migration when I add new attributes to my Core Data model. In the guide from Apple this is the only info on the subject I could find: Automatic Lightweight Migration To request automatic lightweight migration, you set appropriate flags in the options dictionary you pass in addPersistentStoreWithType:configuration:URL:options:error:. You need to set values corresponding to both the NSMigratePersistentStoresAutomaticallyOption and the NSInferMappingModelAutomaticallyOption keys to YES: NSError *error; NSURL *storeURL = <#The URL of a persistent store#>; NSPersistentStoreCoordinator *psc = <#The coordinator#>; NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption, [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil]; if (![psc addPersistentStoreWithType:<#Store type#> configuration:<#Configuration or nil#> URL:storeURL options:options error:&error]) { // Handle the error. } My NSPersistentStoreCoordinator is initialized in this way: - (NSPersistentStoreCoordinator *)persistentStoreCoordinator { if (persistentStoreCoordinator != nil) { return persistentStoreCoordinator; } NSURL *storeUrl = [NSURL fileURLWithPath: [[self applicationDocumentsDirectory] stringByAppendingPathComponent: @"FC.sqlite"]]; NSError *error = nil; persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]]; if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeUrl options:nil error:&error]) { NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } return persistentStoreCoordinator; } Where and how should I add the Apple code so that automatic lightweight migration works?

    Read the article

  • Migrating a Large amount of data from old publishing site to new site

    - by tommizzle
    Hi, I am currently in the process of creating a new news/publishing site on the Movable Type platform. There are around 20 or so sites with 20,000+ rows of data to be moved/aggregated to ~8 sites (we have a number of location-specific sites and are going to aggregate the content from these into 1 single site for each niche). We have discussed how to do this and came to the conclusion that it would probably be better to hire somebody to do it (I could probably do it, but I'm limited on time and am sure that a specialist would be more efficient). So my questions to you guys are: 1) What kind of skill set should we look for in an applicant? 2) There will be a large amount of input from our side... is getting somebody to work remotely out of the question? 3) How long would a task like this traditionally take (I know this question is very subjective, but an estimate would be awesome)? 4) Do you have any recommendations for firms who would be able to take on a large task like this? Thanks in advance, Tom

    Read the article

  • How does one implement storage/retrieval of smart-search/mailbox features?

    - by humble_coder
    Hi All, I have a question regarding implementation of smart-search features. For example, consider something like "smart mailboxes" in various email applications. Let's assume you have your data (emails) stored in a database and, depending on the field for which the query will be created, you present different options to the end user. At the moment let's assume the Subject, Verb, Object approach… For instance, say you have the following: SUBJECTs: message, to_address, from_address, subject, date_received VERBs: contains, does_not_contain, is_equal_to, greater_than, less_than OBJECTs: ??????? Now, in case it isn't clear, I want a table structure (although I'm not opposed to an external XMLesque file of some sort) to store (and later retrieve/present) my criteria for smart searches/mailboxes for later use. As an example, using SVO I could easily store then reconstruct a query for "date between two dates" -- simply use "date greater than" AND "date less than". However, what if, in the same smart search, I wanted a "between" OR'ed with another criterion? You can see that it might get out of hand -- not necessarily in the query creation (as that is rather simplistic), but in the option presentation and storage mechanism. Perhaps I need to think at a more granular level. Perhaps I need to simply allow the user to select AND or OR for each entry independently instead of making it an ALL OR NOTHING type smart search (i.e. instead of MATCH ALL or MATCH ANY, I need to simply allow them to select -- I just don't want it to turn into a Hydra). Any input would be most appreciated. My apologies if the question is a bit incoherent. It is late, and my brain is toast. Best.
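    One way to keep the per-criterion AND/OR idea manageable is sketched below in plain Java (all class and field names are illustrative, not tied to any particular schema): store each criterion as its own row carrying its own connector, then fold the stored rows into a match at query-build time. "Date between two dates" then falls out naturally as greater_than AND less_than on the same subject.

```java
import java.util.List;
import java.util.Map;

// Rough in-memory sketch of per-criterion storage for a smart search.
// A real implementation would persist Criterion rows keyed by the
// smart search's id and translate them into SQL, but the shape is the same.
enum Connector { AND, OR }
enum Verb { CONTAINS, DOES_NOT_CONTAIN, IS_EQUAL_TO, GREATER_THAN, LESS_THAN }

record Criterion(String subject, Verb verb, String object, Connector connector) {}

class SmartSearch {

    // Evaluate one criterion against a row represented as field -> value.
    // String comparison is a stand-in; dates would need proper parsing.
    static boolean matches(Criterion c, Map<String, String> row) {
        String value = row.getOrDefault(c.subject(), "");
        return switch (c.verb()) {
            case CONTAINS         -> value.contains(c.object());
            case DOES_NOT_CONTAIN -> !value.contains(c.object());
            case IS_EQUAL_TO      -> value.equals(c.object());
            case GREATER_THAN     -> value.compareTo(c.object()) > 0;
            case LESS_THAN        -> value.compareTo(c.object()) < 0;
        };
    }

    // Fold the stored criteria left to right, honouring each criterion's
    // own connector (the first criterion's connector is ignored).
    static boolean matchesAll(List<Criterion> criteria, Map<String, String> row) {
        boolean result = true;
        boolean first = true;
        for (Criterion c : criteria) {
            boolean hit = matches(c, row);
            if (first) {
                result = hit;
                first = false;
            } else if (c.connector() == Connector.AND) {
                result = result && hit;
            } else {
                result = result || hit;
            }
        }
        return result;
    }
}
```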

    Read the article

  • Spring Data Neo4J @Indexed(unique = true) not working

    - by Markus Lamm
    I'm new to Neo4j and I have a probably easy question. There are NodeEntitys in my application; a property (name) is annotated with @Indexed(unique = true) to achieve uniqueness, like I do in JPA with @Column(unique = true). My problem is that when I persist an entity with a name that already exists in my graph, it works fine anyway. But I expected some kind of exception here...?! Here's an overview of my basic code: @NodeEntity public abstract class BaseEntity implements Identifiable { @GraphId private Long entityId; ... } public class Role extends BaseEntity { @Indexed(unique = true) private String name; ... } public interface RoleRepository extends GraphRepository<Role> { Role findByName(String name); } @Service public class RoleServiceImpl extends BaseEntityServiceImpl<Role> implements { private RoleRepository repository; @Override @Transactional public T save(final T entity) { return getRepository().save(entity); } } And this is my test: @Test public void testNameUniqueIndex() { final List<Role> roles = Lists.newLinkedList(service.findAll()); final String existingName = roles.get(0).getName(); Role newRole = new Role.Builder(existingName).build(); newRole = service.save(newRole); } That's the point where I expect something to go wrong! How can I ensure the uniqueness of a property without checking it myself? Thanks in advance for any ideas! P.S.: I'm using Neo4j 1.8.M07, spring-data-neo4j 2.1.0.BUILD-SNAPSHOT and Spring 3.1.2.RELEASE.
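    A note for anyone landing here from a search: in the Spring Data Neo4j 2.x line mentioned above, @Indexed(unique = true) appears to rely on the index's put-if-absent behaviour, so saving a second Role with an existing name can quietly resolve to the already-indexed node instead of throwing. One way to surface the violation explicitly is a guard in the service layer; below is a rough sketch, assuming the usual getters exist on Role and BaseEntity (the exception type is just an example).

```java
import org.springframework.dao.DataIntegrityViolationException;
import org.springframework.transaction.annotation.Transactional;

// Sketch of an explicit uniqueness guard in the service layer.
// RoleRepository#findByName comes from the question; getName()/getEntityId()
// are assumed accessors, and the exception choice is illustrative.
public class RoleService {

    private final RoleRepository repository;

    public RoleService(RoleRepository repository) {
        this.repository = repository;
    }

    @Transactional
    public Role save(Role role) {
        Role existing = repository.findByName(role.getName());
        // A brand-new entity has a null graph id, so any existing match is a conflict.
        if (existing != null && !existing.getEntityId().equals(role.getEntityId())) {
            throw new DataIntegrityViolationException(
                "Role name already taken: " + role.getName());
        }
        return repository.save(role);
    }
}
```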

    Read the article

  • Automated unit testing, integration testing or acceptance testing

    - by bjarkef
    TDD and unit testing seem to be all the rage at the moment. But are they really that useful compared to other forms of automated testing? Intuitively I would guess that automated integration testing is way more useful than unit testing. In my experience most bugs seem to be in the interaction between modules, and not so much in the actual (usually limited) logic of each unit. Also, regressions often happen because of changing interfaces between modules (and changed pre- and post-conditions). Am I misunderstanding something, or why is unit testing getting so much focus compared to integration testing? Is it simply because it is assumed that integration testing is something you already have, and unit testing is the next thing we need to learn to apply as developers? Or maybe unit testing simply yields the highest gain compared to the complexity of automating it? What is your experience with automated unit testing, automated integration testing, and automated acceptance testing, and which has yielded the highest ROI in your experience? And why? If you had to pick just one form of testing to be automated on your next project, which would it be? Thanks in advance.
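    To make the distinction concrete, here is a small JUnit 4 sketch (the types are invented purely for illustration): the first test isolates a single unit by faking its collaborator, while the second exercises the real interaction between the two modules, which is where, as the question notes, many regressions tend to hide.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Invented example types, only to contrast the two kinds of test.
interface RateSource { double rateFor(String currency); }

class FixedRateSource implements RateSource {
    public double rateFor(String currency) { return "EUR".equals(currency) ? 0.9 : 1.0; }
}

class PriceConverter {
    private final RateSource rates;
    PriceConverter(RateSource rates) { this.rates = rates; }
    double convert(double amount, String currency) { return amount * rates.rateFor(currency); }
}

public class ConversionTests {

    // Unit test: the collaborator is faked, so only PriceConverter's own logic runs.
    @Test
    public void unitTest_convertUsesSuppliedRate() {
        PriceConverter converter = new PriceConverter(currency -> 2.0);
        assertEquals(20.0, converter.convert(10.0, "EUR"), 0.0001);
    }

    // Integration test: the real RateSource implementation is wired in,
    // so the contract between the two modules is what gets checked.
    @Test
    public void integrationTest_convertAgainstRealRateSource() {
        PriceConverter converter = new PriceConverter(new FixedRateSource());
        assertEquals(9.0, converter.convert(10.0, "EUR"), 0.0001);
    }
}
```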

    Read the article

  • Oracle SOA Suite, the Most Capable Tool for Every Possible Integration Challenge

    - by Demed L'Her
    session ID: CON8601 - when: Monday, Oct. 1, 10:45am-11:45am - where: Moscone South 102 "Oracle SOA Suite, the Most Capable Tool for Every Possible Integration Challenge" is the name of the session I will be delivering at Oracle OpenWorld this year. I usually go for more subdued titles but decided to take the gloves off this year, at the risk of sounding arrogant! While we have a number of worthy competitors in various areas of integration, no one can really compete with the breadth and reliability of Oracle SOA Suite. This session is primarily intended for people who are not yet familiar with Oracle SOA Suite (i.e. if you are an existing customer your time might be better spent at some of the other sessions we have on the topic). I will provide an overview of Oracle SOA Suite, the customers using it and the types of challenges they are solving with it: from integrating Oracle Applications (E-Business Suite, Siebel, PeopleSoft, RightNow, Taleo etc.) to third-party applications (did you know that over a third of our customers actually use us to integrate SAP?), mainframes and a variety of technologies. We will talk about some emerging trends and problems that our users are solving with the product: cloud integration, B2B consolidation and mobile enablement. I will also briefly touch upon the exciting projects we are doing with Oracle Event Processing, in the domain of "Fast Data" and "Big Data". Last but not least, I will be joined on stage by Venktesh Maudgalya, Director at Electronic Arts. Venktesh will bring his customer perspective and explain how EA leveraged Oracle SOA Suite to implement iHub, the massive integration hub that interconnects all their applications (E-Business Suite, Hyperion, Demantra, PeopleSoft, Salesforce.com, Kronos, Teradata, GXS etc.) and carries 3/4 of their revenue flows. I just picked up my badge and will be kicking off the festivities tomorrow talking to partners in a pre-OOW briefing at the Oracle Headquarters - see you next week! PS: if you're going to tweet about Oracle SOA Suite next week please make sure to use the #oraclesoa and #oow hashtags so that we can track and amplify your tweets!

    Read the article

  • Out-of-the-Box Integration Links Primavera Solutions with PeopleSoft Projects Applications

    - by Sylvie MacKenzie, PMP
    In a move that brings best-in-class enterprise project portfolio management to Oracle’s PeopleSoft enterprise resource planning customers, Oracle announced the integration of Oracle’s PeopleSoft projects applications and Oracle’s Primavera P6 Enterprise Project Portfolio Management. The combination of PeopleSoft financial controls and Primavera portfolio management capabilities brings greater oversight of end-to-end processes to help organizations improve the planning and execution efforts needed to deliver projects on time and within budget. “As an organization with many high-value, project-driven initiatives, we are very pleased to see Oracle’s investment in this important integration,” says Janardhanan Sankar, senior vice president for technology and quality at ITC Infotech India Ltd. Oracle’s PeopleSoft projects applications enable project-centric organizations and departments to establish core operational processes for full project lifecycle management across operations and finance. The integration with Primavera P6 Enterprise Project Portfolio Management means organizations can eliminate costly and difficult-to-maintain proprietary integrations. Organizations can also standardize on the Oracle technologies to:
    - Align back-office budgets and costs with project operations to help ensure accurate forecasting of costs, resources, and schedules
    - Provide an accurate single source of truth to financial managers and analysts using Oracle’s PeopleSoft projects applications, and to project managers using Primavera P6 Enterprise Project Portfolio Management
    - Enhance project collaboration and execution by having all users utilizing common solutions to communicate, plan, and deliver projects
    “By bringing together Oracle’s PeopleSoft projects applications and Oracle’s Primavera P6 Enterprise Project Portfolio Management, we are able to provide customers with the infrastructure they need to achieve a single source of truth on the projects they are managing,” says Paco Aubrejuan, Oracle’s group vice president and general manager, PeopleSoft. “This real-time visibility drives profitability, increases productivity, and improves operations.” For more information, view the on-demand Webcast, “Bridging Business Processes for Optimal Portfolio Performance,” or read about the new integration.

    Read the article

  • Podcast Show Notes: DevOps and Continuous Integration

    - by Bob Rhubart
    The topic on the table for the latest OTN ArchBeat series of programs is DevOps. The panelists for this conversation are three gentlemen who, by no small coincidence, have some expertise on the topic. The Panel Tim Hall is the Senior Director of product management for Oracle Enterprise Repository and Oracle’s Application Integration Architecture. Robert Wunderlich is Principal Product Manager for Oracle’s Application Integration Architecture Foundation Pack. Peter Belknap is director of product management for SOA Integration at Oracle. The Conversation Listen to Part 1: DevOps and Continuous Integration: The Basics The panel discusses why DevOps matters and how it changes development methodologies and organizational structure. Listen to Part 2: DevOps and the Evolution of Enterprise IT (Dec 20) The panel discusses where DevOps fits with cloud computing and other evolutionary changes in enterprise IT. Listen to Part 3: Tech, Talk, and Territorialism (Dec 27) The panel closes out the discussion with a look at DevOps-friendly technologies and the impact of DevOps on communication and governance. Coming Soon Early in the new year OTN ArchBeat will bring you an update on the Oracle Cloud, with details on how you can test drive the Java and Database cloud services. Stay tuned: RSS

    Read the article

  • CruiseControl [.Net] vs TeamCity for continuous integration?

    - by zappan
    I would like to ask which automated build environment you consider better, based on practical experience. I'm planning to do some .NET and some Java development, so I would like a tool that supports both platforms. I've been reading around and found out about CruiseControl.NET, used in Stack Overflow's development, and TeamCity, with its support for build agents on different OS platforms and different programming languages. So, if you have practical experience with both of them, which one do you prefer, and why? Currently I'm mostly interested in the ease of use and management of the tool, and much less in the fact that CC is open source while TC is subject to licensing at some point when you have many projects to run (because I need it for a small number of projects). Also, if there is some other tool that meets the above and you believe is worth a recommendation, feel free to include it in the discussion.

    Read the article

  • Twitter integration in an iPhone application using MGTwitter

    - by Filthy Night
    Hi All, I want to integrate Twitter into my application using MGTwitter, but so far I can only log in and log out. I'm a newbie, so I don't know much about it yet. I searched Google, but all I came across was the login process. Can somebody share a worthwhile resource, or suggest what I should do next? Any help in this regard will be appreciated. Thanks

    Read the article

  • Grails and Flex build process integration

    - by Dan
    I plan to use Grails and Flex in my next project. I would like to use the Grails command line to build my project. This should include the Flex part as well: compiling the SWF, executing FlexUnit tests, etc. I would like to compile the SWF and add it to the WAR when I do “grails war”. How can I accomplish this?

    Read the article

  • Payment Gateway Integration asp.net c# 2.0

    - by mvkotekar
    I'm Mendy, and I am designing a web application. The business requires integrating a payment gateway. I searched the site regarding the flow, but I could not find much info on MSDN. I want to build the payment gateway integration using SSL and a third-party merchant account. How can I do it? Some information regarding payment gateways would guide me in starting to develop the component.

    Read the article

  • Jabber integration into site and multiple domains

    - by Jamie
    Hi all, I'm creating a centralised web product, i.e. a customer gets assigned a personal domain, for example company1.example.com or company2.example.com, and can then use our service. I'm planning to integrate a Jabber service into the website. I have already found a decent Jabber client library which I can use for the site. I know that jabberd2 uses MySQL, which is perfect because I want to use the web interface to add users, delete users, view message logs etc. However, my problem is when I have two companies or more. I would like a Jabber server which can host multiple domains, i.e. jabber.company1.example.com, jabber.company2.example.com. Do you have any experience of this? And do you know of a good Jabber server which will do this? Any help would be hugely appreciated!

    Read the article

  • Integration Test Example With Rhino Mocks

    - by guazz
    I have two classes. I would like to verify that the properties are set correctly on one of the classes. public class ClassA { public IBInterface Foo {get;set;} public void LoadData() { Foo.Save(1.23456m, 1.23456m); } } public class ClassB : IBInterface { public decimal ApplePrice {get; set;} public decimal OrangePrice {get; set;} public void Save(decimal param1, decimal param2) { this.ApplePrice = param1; this.OrangePrice = param2; } } I would like to use Rhino Mocks (AAA syntax) to verify that ApplePrice and OrangePrice were set correctly. I assume I should begin like so, but how do I verify that ApplePrice and OrangePrice have been set? var mockInterfaceB = mockery.DynamicMock<IBInterface>(); ClassA a = new ClassA(); a.Foo = mockInterfaceB; a.LoadData();

    Read the article

  • Continuous Integration or Publishing System

    - by Chris M
    I've been testing Hudson for a few days now, and for PHP projects it seems OK. What I need, though, is a system that will allow me to publish an SVN tag to an FTP folder without adding a pile of rubbish on the end; Hudson gets overexcited and adds a pile of folders to the export. Are there any other decent systems? It needs to be simple to use, as this will eventually fall under the control of 'admins' in order to meet SOX compliance (aka the man with the gun can't pull the trigger). Basic requirements:
    - Must be free and downloadable to a server (FTP is internal-only here)
    - Needs to be able to interact with SVN
    - Needs to be able to publish to FTP
    - Needs matrix login permissions (or AD if it's something that can go on IIS)
    - Needs to be auditable (logging)
    Tell me if I'm not being clear enough here; thanks. Chris
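    If nothing off-the-shelf fits, the publish step itself is small enough to script. Below is a rough Java sketch (repository URL, host, credentials and paths are placeholders) that exports the tag with the svn command-line client, which leaves no .svn metadata behind, and then mirrors the result to an FTP folder using Apache Commons Net, one option among several for the upload.

```java
import java.io.File;
import java.io.FileInputStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

// Rough sketch: export an SVN tag and upload the clean tree over FTP.
public class TagPublisher {

    public static void main(String[] args) throws Exception {
        String tagUrl = "http://svn.example.local/repo/tags/release-1.0";   // placeholder
        File exportDir = new File("build/export");

        // 'svn export' produces a clean tree with no .svn folders.
        Process svn = new ProcessBuilder("svn", "export", "--force", tagUrl,
                exportDir.getPath()).inheritIO().start();
        if (svn.waitFor() != 0) throw new IllegalStateException("svn export failed");

        FTPClient ftp = new FTPClient();
        ftp.connect("ftp.example.local");          // placeholder host
        ftp.login("deploy", "secret");             // placeholder credentials
        ftp.setFileType(FTP.BINARY_FILE_TYPE);
        try {
            uploadDir(ftp, exportDir, "/site");
        } finally {
            ftp.logout();
            ftp.disconnect();
        }
    }

    // Recursively mirror a local directory to the FTP server.
    private static void uploadDir(FTPClient ftp, File local, String remote) throws Exception {
        ftp.makeDirectory(remote);                 // "already exists" replies are ignored
        File[] children = local.listFiles();
        if (children == null) return;
        for (File f : children) {
            String target = remote + "/" + f.getName();
            if (f.isDirectory()) {
                uploadDir(ftp, f, target);
            } else {
                try (FileInputStream in = new FileInputStream(f)) {
                    ftp.storeFile(target, in);
                }
            }
        }
    }
}
```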

    Read the article

  • Strange error inserting new relationship data in core-data

    - by michael
    My app will allow users to create a personalised list of events from a large list of events. I have a table view which simply displays these events; tapping on one of them takes the user to the event details view, which has a button "add to my events". In this detailed view I own the original event object, retrieved via an NSFetchedResultsController and passed to the detailed view (via a table cell, the same as the Core Data Recipes sample). I have no trouble retrieving/displaying information from this "event". I am then trying to add it to the list of MyEvents represented by a one-to-many (inverse) relationship: This code: NSManagedObjectContext *context = [event managedObjectContext]; MyEvents *myEvents = (MyEvents *)[NSEntityDescription insertNewObjectForEntityForName:@"MyEvents" inManagedObjectContext:context]; [myEvents addEventObject:event];//ERROR And this code (suggested below): // would this add to or overwrite the "list" I am attempting to maintain? NSManagedObjectContext *context = [event managedObjectContext]; MyEvents *myEvents = (MyEvents *)[NSEntityDescription insertNewObjectForEntityForName:@"MyEvents" inManagedObjectContext:context]; NSMutableSet *myEvent = [myEvents mutableSetValueForKey:@"event"]; [myEvent addObject:event]; //ERROR Both produce (at the line indicated by //ERROR): *** -[NSComparisonPredicate evaluateWithObject:]: message sent to deallocated instance Seems I may have missed something fundamental. I can't glean any more information through the use of debugging tools, with my knowledge of them. 1) Is this a valid way to compile and store an editable list like this? 2) Is there a better way? 3) What could possibly be the deallocated instance in error? -- I have now modified the Event entity to have a to-many relationship called "myEvents" which refers to itself. I can add Events to this fine, and logging the object shows the correct memory addresses appearing for the relationship after a [event addMyEventObject:event];. The same failure happens right after this, however. I am still at a loss to understand what is going wrong. This is the backtrace: #0 0x01f753a7 in ___forwarding___ () #1 0x01f516c2 in __forwarding_prep_0___ () #2 0x01c5aa8f in -[NSFetchedResultsController(PrivateMethods) _preprocessUpdatedObjects:insertsInfo:deletesInfo:updatesInfo:sectionsWithDeletes:newSectionNames:treatAsRefreshes:] () #3 0x01c5d63b in -[NSFetchedResultsController(PrivateMethods) _managedObjectContextDidChange:] () #4 0x0002e63a in _nsnote_callback () #5 0x01f40005 in _CFXNotificationPostNotification () #6 0x0002bef0 in -[NSNotificationCenter postNotificationName:object:userInfo:] () #7 0x01bbe17d in -[NSManagedObjectContext(_NSInternalNotificationHandling) _postObjectsDidChangeNotificationWithUserInfo:] () #8 0x01c1d763 in -[NSManagedObjectContext(_NSInternalChangeProcessing) _createAndPostChangeNotification:withDeletions:withUpdates:withRefreshes:] () #9 0x01ba25ea in -[NSManagedObjectContext(_NSInternalChangeProcessing) _processRecentChanges:] () #10 0x01bdfb3a in -[NSManagedObjectContext processPendingChanges] () #11 0x01bd0957 in _performRunLoopAction () #12 0x01f4d252 in __CFRunLoopDoObservers () #13 0x01f4c65f in CFRunLoopRunSpecific () #14 0x01f4bc48 in CFRunLoopRunInMode () #15 0x0273878d in GSEventRunModal () #16 0x02738852 in GSEventRun () #17 0x002ba003 in UIApplicationMain () Cheers.

    Read the article

  • symfony/zend integration - blank screen

    - by user142176
    Hi, I need to use ZendAMF on a symfony project and I'm currently working on integrating the two. I have a frontend app with two modules, one of which is 'gateway' - the AMF gateway. In my frontend app config, I have the following in the configure function: // load symfony autoloading first parent::initialize(); // Integrate Zend Framework require_once('[MY PATH TO ZEND]\Loader.php'); spl_autoload_register(array('Zend_Loader', 'autoload')); The executeIndex function in my gateway actions.class.php looks like this: // No Layout $this->setLayout(false); // Set MIME Type $this->getResponse()->setContentType('application/x-amf; charset='.sfConfig::get('sf_charset')); // Disable cause this is a non-html page sfConfig::set('sf_web_debug', false); // Create AMF Server $server = new Zend_Amf_Server(); $server->setClass('MYCLASS'); echo $server->handle(); return sfView::NONE; Now when I try to visit the URL for the gateway module, or even the other module which was working perfectly fine until this attempt, I only see a blank screen, with not even the symfony dev bar loaded. Oddly enough, my symfony logs are not being updated either, which suggests that symfony is not even being 'reached'. So presumably the error has something to do with Zend, but I have no idea how to figure out what the error could be. One thing I do know for sure is that this is not a file path error, because if I change the path in the following line (a part of frontendConfiguration as shown above), I get a Zend_Amf_Server not found error. So the path must be correct. Also, if I comment out this very same line, the second module returns to normality, and my gateway broadcasts a blank x-amf stream. spl_autoload_register(array('Zend_Loader', 'autoload')); Does anyone have any tips on how I could attack this problem? Thanks P.S. I'm currently running an older version of Zend, which is why I am using Zend_Loader instead of Zend_autoLoader (I think). I've tried switching to the new lib, but the error still remains, so it's not a version problem either.

    Read the article

  • Build Pipelining and Continuous Integration with Maven and Hudson

    - by Brandon
    Currently my team is considering splitting our single CI build process into a more streamlined multi-stage process, to speed up basic build feedback and isolate different CI concerns. The idea we had was to have each stage exist in Hudson as a different build with the correct Maven goal or Maven plugin execution, then chain them together using the post-build hooks of Hudson. However, to my knowledge Maven mandates that invoking any lifecycle phase automatically runs every preceding lifecycle phase. This presents a number of problems, the most significant of which is that Maven recreates the build resources with each distinct call and does not reuse those of the previous stage. This not only breaks the consistency of the build lifecycle but adds much unnecessary processing overhead. Is there a way to accomplish pipelining with CI using Maven? Assuming there is, is there a way to let Hudson know to use the resources built from the previous stage in the next one?

    Read the article

  • Continuous Integration of Git on Windows

    - by Duncan
    Hey all, assuming I'm running a small shop (3 devs) and using a Windows 7 machine as a centralised Git and IIS server, what is the easiest way to get CI up and running? This must be locally hosted CI (no GitHub, no remote servers). I'm doing C# .NET development with Visual Studio 2008. Any help on getting this running with the minimum of effort and the nicest possible UI would be extremely helpful. Thanks!

    Read the article

  • Continuous integration with multiple branch development

    - by ryanprayogo
    In the project that I'm working on, we are using SVN with a 'stable trunk' strategy. What that means is that for each bug that is found, QA opens a bug ticket and assigns it to a developer. The developer then fixes that bug and checks the fix in on a branch off trunk (let's call this the bug branch); that branch will only contain fixes for that particular bug ticket. When we decide to do a release, for each bug fix that we want to release to the customer, a developer merges the fixes from the relevant bug branches to trunk and we proceed with the normal QA cycle. The problem is that we use trunk as the codebase for our CI job (Hudson, specifically), and therefore all commits to the bug branches miss the daily build until they get merged to trunk when we decide to release the new version of the software. Obviously, that defeats the purpose of having CI. What is the proper way to fix this issue?

    Read the article
