Search Results

Search found 1748 results on 70 pages for 'branch prediction'.

Page 30/70

  • Cutting Subscriber Churn with Media Intelligence

    - by Oracle M&E
    There's lots of talk in media and entertainment companies about using "big data". But it's often hard to see through the hype and understand how big data brings benefits in the real world. How about being able to predict with 92% accuracy which subscribers intend to cancel their subscription - and put in place a renewal strategy to dramatically reduce that churn? That's what Belgian media company De Persgroep has achieved with Oracle's Media Intelligence solution.

    "One of the areas in which we're able to achieve beautiful results using big data is the churn prediction," De Persgroep's CIO Luc Verbist explains in a new Oracle video. "Based on all the data that we collect on websites and all your behavior, payment behavior and so on, we're able to make a prediction model, which, with an accuracy of 92 percent, is able to predict that you probably won't renew your newspaper anymore. So our approach to renewal is completely different to the people in that segment than towards the other people. And this has brought us a lot of value and a lot of customers who didn't stop their newspaper where else they would have done so."

    De Persgroep is using Oracle's Big Data Appliance, along with software from Oracle partner NGDATA, to build up a detailed "DNA profile" of each individual customer, based on every interaction, in real time. This means that any change in behavior - a drop in content consumption, a late subscription payment, a negative social media comment - is captured. Applying advanced data modeling techniques automatically converts those raw interactions into data with real business meaning - like that customer's risk of churning. The very same data profile - comprising hundreds of individual dimensions - can simultaneously drive targeted marketing campaigns, informing audiences about the new content that's most relevant to them and encouraging them to subscribe. It can power content recommendations and personalization right in the content sites and apps. And it can link directly into digital advertising networks via platforms like Oracle's BlueKai data management platform (DMP) to drive increased advertising CPMs.

    Oracle's Media Intelligence solution enables this across De Persgroep's business - comprising eight newspapers and 25 magazines published in Belgium and The Netherlands, plus digital properties including websites with 6m daily unique visitors, along with TV and radio stations. "The company strategy is in fact a customer-centric strategy, so we want to get a 360-view about our customers, about our prospects. And the big data project helped us to achieve that goal," says Verbist.

    Using Oracle's Big Data Appliance to underpin the solution created huge savings. "The selection of the Big Data Appliance was quite easy. It was very quick to install, and very easy to install as well. And it was far cheaper than building our own Hadoop cluster. So it was in fact a no-brainer," Verbist explains.

    Applying the Media Intelligence approach has yielded incredible results for De Persgroep, including:

    - Improved products, with a new understanding of how readers are consuming print and digital content across the day
    - Improved customer segmentation, driving a 6X improvement in customer prospecting and acquisition when contacting a specific segment
    - Having the project up and running in three months

    And that has led to competitive benefits for De Persgroep, as Luc Verbist explains: "One of the results we saw since we started using big data is that we're able to increase the gap between we as the market leader, and the second [by] more than 20 percent."

    Read the article

  • Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

    - by Alex Kotopoulis
    Overview

    ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes:

    - Different record types in one list that use different parsing rules
    - Hierarchical lists, for example customers with nested orders
    - Parsing instructions in the file data, such as delimiter types, field lengths, type identifiers
    - Complex headers, such as multiple header lines or parseable information in the header
    - Skipping of lines
    - Conditional or choice fields

    Similar to the ODI File and XML File technologies, the complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) File technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver: table rows have additional PK-FK relationships to express hierarchy, as well as order values to maintain the file order in the resulting table.

    The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no size limitation on the native file based on memory size, and the XML data is never fully materialized. The internal XML is then converted to a relational schema using the same mapping rules as the ODI XML driver.

    How to Create an nXSD File

    Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes; an nXSD file is still a valid XML Schema, since the XSD standard allows extra attributes with their own namespaces.
    The following is a sample nXSD schema:

        <?xml version="1.0"?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
                    xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                    targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                    elementFormDefault="qualified" attributeFormDefault="unqualified"
                    nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD">
          <xsd:element name="Root">
            <xsd:complexType><xsd:sequence>
              <xsd:element name="Header">
                <xsd:complexType><xsd:sequence>
                  <xsd:element name="Branch" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="ListDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                </xsd:sequence></xsd:complexType>
              </xsd:element>
              <xsd:element name="Customer" maxOccurs="unbounded">
                <xsd:complexType><xsd:sequence>
                  <xsd:element name="Name" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="Street" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="City" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                </xsd:sequence></xsd:complexType>
              </xsd:element>
            </xsd:sequence></xsd:complexType>
          </xsd:element>
        </xsd:schema>

    The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator characters. There are various other constructs in nXSD to parse fixed-length fields, look ahead in the document for string occurrences, perform conditional logic, use variables to remember state, and more.

    nXSD files can either be written manually using an XML Schema editor or created using the Native Format Builder Wizard. Both the Native Format Builder Wizard and the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard, a new Schema for Native Format can be created. The Native Format Builder guides you through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to "approximate" it with a similar simple format and then add the complex components manually. The resulting *.xsd file can be copied and used as the format for ODI; other BPEL constructs, such as the file adapter definition, are not relevant for ODI. Using this technique it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA for small real-time messages and ODI for large batches.

    The nXSD schema in this example describes a file with a header row containing two fields, followed by customer rows of three string fields each, all delimited by commas. For example:

        Redwood City Downtown Branch, 06/01/2011
        Ebeneezer Scrooge, Sandy Lane, Atherton
        Tiny Tim, Winton Terrace, Menlo Park

    The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships.
    The tables for this example are (a query sketch against these tables follows at the end of this article):

        Table ROOT (1 row):
          ROOTPK         Primary key for the root element
          SNPSFILENAME   Name of the file
          SNPSFILEPATH   Path of the file
          SNPSLOADDATE   Date of load

        Table HEADER (1 row):
          ROOTFK         Foreign key to the ROOT record
          ROWORDER       Order of the row in the native document
          BRANCH         Data
          BRANCHORDER    Order of Branch within the row
          LISTDATE       Data
          LISTDATEORDER  Order of ListDate within the row

        Table ADDRESS (2 rows):
          ROOTFK         Foreign key to the ROOT record
          ROWORDER       Order of the row in the native document
          NAME           Data
          NAMEORDER      Order of Name within the row
          STREET         Data
          STREETORDER    Order of Street within the row
          CITY           Data
          CITYORDER      Order of City within the row

    Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial, since the HEADER and all CUSTOMER records point back to the PK of ROOT; deeper-nested documents require this to identify parent elements. All tables also have a ROWORDER field to define the order of rows, as well as order fields for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored.

    How to Create a Complex File Data Server in ODI

    After creating the nXSD file and a test data file, and storing them on a local file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology. This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings, such as the source native file, the nXSD schema file, the root element, as well as the external database, can be set in the JDBC URL. The use of an external database defined by dbprops is optional, but is strongly recommended for production use; ideally, the staging database should be used for this. Also, when using a complex file exclusively for read purposes, it is recommended to use the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file is always required to be present at the filename path during design time; without this file, operations like testing the connection, reading the model data, or reverse-engineering the model will fail. All properties of the Complex File JDBC driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator, Appendix C: Oracle Data Integrator Driver for Complex Files Reference. David Allan has created a great viewlet, Complex File Processing - 0 to 60, which shows the creation of a Complex File data server as well as a model based on this server.

    How to Create Models Based on a Complex File Schema

    Once the physical schema and logical schema have been created, the Complex File can be used to create a Model as if it were based on a database. When reverse-engineering the Model, data stores (tables) will be created for each XSD element of complex type. Use of complex files as sources is straightforward; when using them as targets, it has to be made sure that all dependent tables have matching PK-FK pairs. The same applies to the XML driver as well.

    Debugging and Error Handling

    There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn't created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD will be parsed and run against the existing test data file when testing a connection in the Data Server.
    If either the nXSD has an error or the data does not comply with the schema, an error will be displayed. Sample error message:

        Error while reading native data. [Line=1, Col=5] Not enough data available in the
        input, when trying to read data of length "19" for "element with name D1" from the
        specified position, using "style" as "fixedLength" and "length" as "". Ensure that
        there is enough data from the specified position in the input.

    Complex File FAQ

    Q: Is the size of the native file limited by available memory?
    A: No. Since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory.

    Q: Should I always use the Complex File driver instead of the File driver in ODI now?
    A: No. Use the File technology for all simple file parsing tasks, for example any fixed-length or delimited files that have just one row format and can be mapped into a simple table. Because of its narrow assumptions, the ODI File driver is easy to configure within ODI and can stream file data without writing it into a database. The Complex File driver should be used whenever the use case cannot be handled by the File driver.

    Q: Are we generating XML out of flat files before we write them into a database?
    A: We don't materialize any XML as part of parsing a flat file, either in memory or on disk. The data produced by the XML parser is streamed as Java objects that just use the XSD-derived nXSD schema as their type system. We use the nXSD schema because it is the standard for describing complex flat-file metadata in Oracle Fusion Middleware, and it enables users to share schemas across products.

    Q: Is the nXSD file interchangeable with SOA Suite?
    A: Yes. ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format.

    Q: Can I start the Native Format Builder from ODI Studio?
    A: No. The Native Format Builder has to be started from a JDeveloper instance with BPEL; you can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can develop nXSD files manually using XSD editors.

    Q: When is the database data written back to the native file?
    A: Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is used exclusively for reading, so that no unnecessary write-backs occur.

    Q: Is the nXSD metadata part of the ODI Master or Work Repository?
    A: No. The data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed during production, either by copying or by using a network file system.

    Q: Where can I find sample nXSD files?
    A: The Application Server Adapter Users Guide contains nXSD samples for various use cases.
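    As a rough illustration to close (not part of the original article), the relational view described earlier can be queried like any ordinary set of tables. This sketch assumes the ROOT, HEADER and ADDRESS table and column names listed above:

        -- header fields plus each customer row, in original file order
        SELECT h.BRANCH, h.LISTDATE, a.NAME, a.STREET, a.CITY
        FROM ROOT r
        JOIN HEADER  h ON h.ROOTFK = r.ROOTPK
        JOIN ADDRESS a ON a.ROOTFK = r.ROOTPK
        ORDER BY a.ROWORDER;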

    Read the article

  • CodePlex Daily Summary for Saturday, December 01, 2012

    CodePlex Daily Summary for Saturday, December 01, 2012

    Popular Releases

    - CleverBobCat 1.1.1: Fixes for 1.1.1.0 - energy usage on back drive fixed; breaking energy recharge fixed; not stopped when energy disabled fixed; area transfer range fixed.
    - DCView 1.4.2: (release notes are Korean text garbled in the original encoding; only the date 11/29 is legible)
    - DirectQ II 2012-11-29: A (slightly) modernized port of Quake II to D3D9. You need SM3 or better hardware to run this - if you don't have it, then don't even bother. It should work on Windows Vista, 7 or 8; it may also work on XP but I haven't tested. Known bugs include: some mods may not work. This is unfortunately due to the nature of Quake II's game DLLs; sometimes a recompile of the game DLL is all that's needed. In any event, ensure that the game DLL is compatible with the last release of Quake II first (...
    - Magelia WebStore (open-source ecommerce software) 2.2: New UI for the administration console; bug fixes and improvements; version 2.2.215.3.
    - JayData 1.2.5 (the cross-platform HTML5 data-management library for JavaScript): For detailed release notes check the release notes. Handlebars template engine support - implement data manager applications with JayData using Handlebars.js for templating; include JayDataModules/handlebars.js and begin typing the mustaches :) Blog posts: "Handlebars templates in JayData" and "Handlebars helpers and model driven commanding in JayData". Easy JayStorm cloud data management - manage cloud data using the same syntax and data management concept just like any other data ...
    - nopCommerce 2.70 (open source shopping cart, ASP.NET MVC): Highlight features and improvements: performance optimization; search engine optimization (ID-less URLs for products, categories, and manufacturers); added ACL support (access control lists) on products and categories; minified and bundled JavaScript files; a store owner can decide which billing/shipping address fields are enabled/disabled/required (as already done for the registration page); moved to MVC 4 (.NET 4.5 is required); now Visual Studio 2012 is required to work ...
    - SQL Server Partition Management 3.0: Adds support for SQL Server 2012 and is backward compatible with SQL Server 2008 and 2005. The release consists of a readme file, the executable, and the source code (Visual Studio project). Enhancements include: support for columnstore indexes in SQL Server 2012; the ability to create TSQL scripts for staging table and index creation operations; full, locale-independent support for global date and time formats; support for binary partitioning column types; fixes to is...
    - Command Line Parser Library 1.9.3.23 beta: Fixes an issue reported by GitHub user sbambrick about parsing negative numbers.
    - MCEBuddy 2.3.10: Critical update to 2.3.9. Changelog for 2.3.10 (32-bit and 64-bit): AsfBin executable missing from build; removed extra references from build to avoid conflict; ShowAnalyzer installation now checked on remote engine machine. Changelog for 2.3.9 (32-bit and 64-bit): added support for WTV output profile; added support for minimizing MCEBuddy to the system tray; added support for a custom archive folder; added support to disable subdirectory monitoring; added support for better TS fil...
    - DotNetNuke Community Edition CMS 07.00.00: Major highlights: fixed an issue that caused profiles of deleted users to be available; removed the postback after checkboxes are selected in Page Settings > Taxonomy; implemented the functionality required to edit security role names and social group names; fixed a JavaScript error when using a ";" semicolon as a profile property; fixed an issue when using DateTime properties in profiles; fixed a viewstate error when using Facebook authentication in conjunction with "require valid profile fo...
    - CODE Framework 4.0.21128.0: See change notes in the documentation section for details on what's new.
    - Microsoft Ajax Minifier 4.76: Fixed a typo in ObjectLiteralProperty.IsConstant that caused all object literals to be treated like they were constants, and possibly moved around in the code when they shouldn't be.
    - Kooboo CMS 3.3.0: New features: dropdown/radio/checkbox lists no longer reference the userkey and instead refer to the UUID field for the input value; you can now delete, export, and import content from the database in the site settings; labels can now be imported and exported; you can now set the required password strength and the maximum number of incorrect login attempts; child sites can inherit plugins from their parent sites; the view parameter can be changed through the page_context.current value; addition of c...
    - Distributed Publish/Subscribe (Pub/Sub) Event System 3.0: Important: Wsp 3.0 is NOT backward compatible with Wsp 2.1. Prerequisites: the Microsoft Visual C++ 2010 Redistributable Package (x64: http://www.microsoft.com/download/en/details.aspx?id=14632, x86: http://www.microsoft.com/download/en/details.aspx?id=5555). Wsp now uses Rx (Reactive Extensions) and .NET 4.0. 3.0 enhancements: the topology changed from a hierarchy to peer-to-peer groups, which should provide much greater scalability and more fault-resi...
    - datajs 1.1.0: datajs is a cross-browser and UI-agnostic JavaScript library that enables data-centric web applications with the following features: an OData client that enables CRUD operations, including batching and metadata support, using both ATOM and JSON payloads; a single-store abstraction that provides a common API on top of HTML5 local storage technologies; a data cache component that allows reading data ranges from a collection and storing them locally to reduce the number of network requests. Changes...
    - Team Foundation Server Administration Tool 2.2: Supports the Team Foundation Server 2012 object model. Visual Studio 2012 or Team Explorer 2012 must be installed before you can install this tool; you can download and install Team Explorer 2012 from http://aka.ms/TeamExplorer2012. There are no functional changes between the previous release (2.1) and this release.
    - Coding Guidelines for C# 3.0, C# 4.0 and C# 5.0: See the change history for a detailed list of modifications.
    - Math.NET Numerics v2.3.0: Portable library build adds support for WP8 (.NET 4.0 and higher, SL5, WP8 and .NET for Windows Store apps); new portable build also for F# extensions (.NET 4.5, SL5 and .NET for Windows Store apps); NuGet portable builds are now included in the main packages, so there is no more need for special portable packages. Linear algebra: continued major storage rework, in this release focusing on vectors (the previous release was on matrices); thin QR decomposition (in addition to the existing full QR); static Cr...
    - FineUI v3.2.1 (ExtJS-based ASP.NET 2.0 controls): (changelog dated 2012-11-25 is Chinese text garbled in the original encoding; legible fragments mention MenuCheckBox CheckedChanged, window.IDS, TabCollection/ControlBaseCollection, Grid SelectAllRows and PageItems, ExpandAllRowExpanders, and the sample pages grid/gridpageitems.aspx, grid/gridpageitemsrowexpander.aspx, grid/gridpageitems_pagesize.aspx and grid/gridrowexpanderexpandall2.aspx)
    - VidCoder 1.4.9 Beta: Updated HandBrake core to SVN 5079; fixed crashes when encoding DVDs with title gaps.

    New Projects

    - 4SQH: FourSQHackathon project repository
    - ASP4 Portscan: An online portscan deployed with ASP and Perlscript
    - AviSynth language service for Visual Studio: Provides a Visual Studio language extension for the AviSynth scripting language. AviSynth is a scripted frameserver for video post-production and can be found at http://avisynth.org
    - BuildSlip: GUI application for creating slipstream files (service pack and/or cumulative update) for SQL 2008 installations
    - C#apture the Flag AI: A project designed to enable C# support for the excellent AI competition being held at http://aisandbox.com
    - Channel Performing (Windows Azure Media Service example): Windows Azure Media Service solution example and portal, with tags, comments, search, membership, a progress list, and other features
    - Civilinsation: C# DLL, C++ wrapper, Swig - Civilization - INSA de RENNES, Département Informatique 4INFO
    - ClickOnceSxquer: awesome
    - Combinator CoffeeScript Preprocessor: CoffeeScript preprocessor for the Combinator Orchard module (https://combinator.codeplex.com/)
    - DineshTest: a
    - DirectoryEnumeratorAsync: Proof of concept for using .NET 4.5 async/await to improve the performance of enumerating file system entries
    - Grahpic_Salon: It's a test web project.
    - jQuery.gMenu: test project
    - LIve For Speed Statistics: Uskoro
    - MIDI.NET: MIDI.NET allows any .NET developer to access the power of MIDI without doing P/Invokes.
    - Olympia Open College: AACS4134
    - Peto: A college project for a claims-submission application
    - QueuedSmtpClient: A C# library that provides an SMTP client built on top of the standard SmtpClient class, able to serialize email messages when no network connection is available; serialized emails are sent later once a network connection is available
    - Simple Merge Sort: A simple implementation of merge sort, used to sort an array of integers
    - Simple.Json: Easy-to-use framework for serializing to and from JSON; supports typed DTO objects, dynamic DTO objects, and immutable DTO types like F# record types
    - Sosyal: Sosyal as.
    - teamgujju: This project is now in the planning phase and will be released shortly
    - TFS Branch Permission Removal Event Subscriber: When you create a branch in TFS, the explicit TFS source control permissions from the source branch are copied to the target branch. Let's change that...
    - Wild Template: Simple template engine for .NET
    - XNA in WPF: WPF Canvas component that allows hosting and presentation of an XNA window from within WPF applications
    - ZombieSurvival: Yo-ho-ho and the buttle og rum

    Read the article

  • Folding at home: how to check the actual bonus credited?

    - by netvope
    In the past I've completed many SMP Core A2 units, but I haven't been folding since A3 came out. Now I'm running SMP Core A3 on Linux. HFM.NET shows that I should be getting a certain amount of bonus credit, but somebody pointed out that it is only a prediction. How can I check the amount of bonus actually credited to my account for each completed unit?

    Read the article

  • Google Chrome: disable URL suggestions from history

    - by Tural Aliyev
    I've been searching for a way to disable URL auto-suggestions (from history) while I type a URL in the address bar, but I haven't found anything about it. I tried unchecking "Use a prediction service to help complete searches and URLs typed in the address bar" in Privacy settings, but it doesn't help. Is there any way to disable history, or to disable URL suggestions from history?

    Read the article

  • TF14087: Cannot undelete [file] because not all of the deletion is being undeleted

    - by Kaius
    We are getting this error when we try to merge from a development branch (Dev) back to its parent branch (Main). Main did have some changesets rolled back a few weeks ago, which included the deletion of some folders that exist in Dev; we believe this is the source of the problem. What is the best way to resolve this? Main should pretty much match up with Dev, but currently it is missing some subfolders. How can I get these folders/files to move from Dev to Main? We are trying to resolve the changes manually, but it is a bit of a mess. It's hard to believe that TFS makes things this hard to resolve.

    Read the article

  • Best practices for cross platform git config?

    - by Bas Bossink
    Context

    A number of my application user configuration files are kept in a git repository for easy sharing across multiple machines and multiple platforms. Amongst these configuration files is .gitconfig, which contains the following settings for handling the carriage-return/linefeed characters:

        [core]
            autocrlf = true
            safecrlf = false

    Problem

    These settings also get applied on a GNU/Linux platform, which causes obscure errors.

    Question

    What are some best practices for handling these platform-specific differences in configuration files?

    Proposed solution

    I realize this problem could be solved by having a branch for each platform, keeping the common stuff in master and merging with the platform branch when master moves forward. I'm wondering if there are any easier solutions to this problem.
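    One lighter-weight alternative (a sketch, not from the original question; it relies on Git's include mechanism, available since Git 1.7.10): keep the shared .gitconfig free of line-ending settings and pull in a small, untracked, per-machine file. The file name ~/.gitconfig.local is illustrative:

        # ~/.gitconfig (shared, kept in the repository)
        [include]
            path = ~/.gitconfig.local

        # ~/.gitconfig.local on the Windows machine (not tracked):
        #     [core]
        #         autocrlf = true
        #         safecrlf = false

        # ~/.gitconfig.local on GNU/Linux (not tracked):
        #     [core]
        #         autocrlf = input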

    Read the article

  • Programmatically find TFS changes since last good build

    - by abigblackman
    I have several branches in TFS (dev, test, stage), and when I merge changes into the test branch I want the automated build-and-deploy script to find all the updated SQL files and deploy them to the test database. I thought I could do this by finding all the changesets associated with the build since the last good build, finding all the SQL files in those changesets, and deploying them. However, the changesets don't seem to be associated with the build for some reason, so my question is twofold:

    1. How do I ensure that a changeset is associated with a particular build?
    2. How can I get a list of files that have changed in the branch since the last good build?

    I have the last successfully built build, but I'm unsure how to get the files without checking the changesets (which, as mentioned above, are not associated with the build!).
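    For the second part, one option that sidesteps build association entirely (a sketch; the server path and changeset numbers are placeholders) is to ask tf.exe for everything that changed between the last good build's changeset and the tip:

        rem history of the test branch from changeset 4501 (last good build) to latest (T)
        tf history $/MyProject/test /recursive /noprompt /version:C4501~T /format:detailed

    The detailed format lists the individual files in each changeset, which the deploy script can then filter for *.sql.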

    Read the article

  • Problem calling Linux C code from FIQ handler

    - by fastmonkeywheels
    I'm working on an ARMv6 core and have an FIQ handler that works great when I do all of my work in it. However, I need to branch to some additional code that's too large for the FIQ memory area. The handler between fiq_start and fiq_end gets copied to 0xFFFF001C when it is registered:

        static void test_fiq_handler(void)
        {
            asm volatile("\
            .global fiq_start\n\
            fiq_start:");

            // clear gpio irq
            asm("ldr r10, GPIO_BASE_ISR");
            asm("ldr r9, [r10]");
            asm("orr r9, #0x04");
            asm("str r9, [r10]");

            // clear force register
            asm("ldr r10, AVIC_BASE_INTFRCH");
            asm("ldr r9, [r10]");
            asm("mov r9, #0");
            asm("str r9, [r10]");

            // prepare branch register
            asm(" ldr r11, fiq_handler");

            // save all registers, build sp and branch to C
            asm(" adr r9, regpool");
            asm(" stmia r9, {r0 - r8, r14}");
            asm(" adr sp, fiq_sp");
            asm(" ldr sp, [sp]");
            asm(" add lr, pc,#4");
            asm(" mov pc, r11");

        #if 0
            asm("ldr r10, IOMUX_ADDR12");
            asm("ldr r9, [r10]");
            asm("orr r9, #0x08 @ top/vertex LED");
            asm("str r9,[r10] @turn on LED");
            asm("bic r9, #0x08 @ top/vertex LED");
            asm("str r9,[r10] @turn on LED");
        #endif

            asm(" adr r9, regpool");
            asm(" ldmia r9, {r0 - r8, r14}");

            // return
            asm("subs pc, r14, #4");

            asm("IOMUX_ADDR12: .word 0xFC2A4000");
            asm("AVIC_BASE_INTCNTL: .word 0xFC400000");
            asm("AVIC_BASE_INTENNUM: .word 0xFC400008");
            asm("AVIC_BASE_INTDISNUM: .word 0xFC40000C");
            asm("AVIC_BASE_FIVECSR: .word 0xFC400044");
            asm("AVIC_BASE_INTFRCH: .word 0xFC400050");
            asm("GPIO_BASE_ISR: .word 0xFC2CC018");

            asm(".globl fiq_handler");
            asm("fiq_sp: .long fiq_stack+120");
            asm("fiq_handler: .long 0");
            asm("regpool: .space 40");
            asm(".pool");
            asm(".align 5");
            asm("fiq_stack: .space 124");

            asm(".global fiq_end");
            asm("fiq_end:");
        }

    fiq_handler gets set to the following function:

        static void fiq_flip_pins(void)
        {
            asm("ldr r10, IOMUX_ADDR12_k");
            asm("ldr r9, [r10]");
            asm("orr r9, #0x08 @ top/vertex LED");
            asm("str r9,[r10] @turn on LED");
            asm("bic r9, #0x08 @ top/vertex LED");
            asm("str r9,[r10] @turn on LED");
            asm("IOMUX_ADDR12_k: .word 0xFC2A4000");
        }
        EXPORT_SYMBOL(fiq_flip_pins);

    I know that since the FIQ handler operates outside of any normal kernel APIs, and is a rather high-priority interrupt, I must ensure that whatever I call is already swapped into memory. I do this by having the fiq_flip_pins function defined in the monolithic kernel, not as a module, which would get vmalloc'd memory. If I don't branch to fiq_flip_pins, and instead do the work in test_fiq_handler, everything works as expected. It's the branching that's causing me problems at the moment: right after branching, I get a kernel panic about a paging request, and I don't understand why.

    fiq_flip_pins is in the kernel at:

        c00307ec t fiq_flip_pins

        Unable to handle kernel paging request at virtual address 736e6f63
        pgd = c3dd0000
        [736e6f63] *pgd=00000000
        Internal error: Oops: 5 [#1] PREEMPT
        Modules linked in: hello_1
        CPU: 0    Not tainted (2.6.31-207-g7286c01-svn4 #122)
        PC is at strnlen+0x10/0x28
        LR is at string+0x38/0xcc
        pc : [<c016b004>]    lr : [<c016c754>]    psr: a00001d3
        sp : c3817ea0  ip : 736e6f63  fp : 00000400
        r10: c03cab5c  r9 : c0339ae0  r8 : 736e6f63
        r7 : c03caf5c  r6 : c03cab6b  r5 : ffffffff  r4 : 00000000
        r3 : 00000004  r2 : 00000000  r1 : ffffffff  r0 : 736e6f63
        Flags: NzCv  IRQs off  FIQs off  Mode SVC_32  ISA ARM  Segment user
        Control: 00c5387d  Table: 83dd0008  DAC: 00000015
        Process sh (pid: 1663, stack limit = 0xc3816268)
        Stack: (0xc3817ea0 to 0xc3818000)

    Since there are no API calls in my code, I have to assume that something is going wrong in the C call and back. Any help solving this is appreciated.

    Read the article

  • Git checkout <SHA> and Heroku

    - by Bob
    I created a local Git repo on my laptop and then pushed the source to Heroku, creating a remote branch. After a few days of commits and pushes, I need to roll back to an earlier commit. Here's what I did:

        cd <app root>
        git checkout 35fbd894eef3e114c814cc3c7ac7bb50b28f6b73

    Someone told me that doing the checkout created a new working tree(?) and didn't move the branch itself, so when I pushed the rollback changes to Heroku, it said everything was up to date and nothing was pushed. How do I fix this situation? Thanks for your help in advance.
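    One common way out (a sketch, not from the original post; it assumes the remote is named heroku and the deployed branch is master): move the branch itself back to the old commit, then force-push it:

        git checkout master
        git reset --hard 35fbd894eef3e114c814cc3c7ac7bb50b28f6b73   # discards the later commits locally
        git push heroku master --force

        # or, leaving the local branch where it is:
        git push heroku 35fbd894eef3e114c814cc3c7ac7bb50b28f6b73:master --force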

    Read the article

  • How to get --detect-branches to work with git-p4?

    - by Michael Brennan
    My p4 repository has a structure similar to:

        //depot/project/branch1
        //depot/project/branch2
        //depot/project/branch3
        ... etc

    However, when I use git-p4 to clone "project", the three branches are not recognized as branches and all get cloned into a single master branch. This is how I'm invoking git-p4:

        git-p4 clone --detect-branches //depot/project

    I was expecting git-p4 to create a git repository for "project" with three branches, with the root of the project mapped to the portion of the path after the branch name (for example: if //depot/project/branch1 has a subdirectory called "lib" (//depot/project/branch1/lib), then my local file system should have something like /git_project/lib with three git branches). Is what I'm expecting wrong? Am I invoking git-p4 incorrectly?
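    One avenue worth checking (an assumption on my part, not from the original question): git-p4 detects branches from Perforce branch specs, so if branch1/branch2/branch3 were not created through branch specs, detection finds nothing. Newer git-p4 versions also let you declare the relationships by hand via git-p4.branchList before cloning; the parent/child relationships below are illustrative guesses:

        git init project && cd project
        # declare that branch2 and branch3 were branched from branch1
        git config       git-p4.branchList branch1:branch2
        git config --add git-p4.branchList branch1:branch3
        git p4 clone --detect-branches //depot/project@all .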

    Read the article

  • svn merge - moved repository to a different server, and now getting 'has different repository root'

    - by HorusKol
    This is kind of similar to http://stackoverflow.com/questions/1601021/subversion-merge-has-different-repository-root-than, but appears to have a very different cause (especially as the answer to that question didn't resolve my problem). A while back, we swapped out the server where our SVN repositories are located, but we've been using an alias so that the old server name points to the new server. I've been getting into the habit of using the new server name whenever I check out new working copies, but we haven't touched most of the current working copies, as they are live websites. Until now this hasn't been a problem - except that this morning I merged some changes from my development branch into a working copy of the release version, got the message "file has different repository root", and the merge stopped dead. I know this is because I'm using the new server name while the development branch was updated via the old server name - but is there a simple way to fix this? Or if not a simple way, a well-documented one?
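    If the only difference really is the server name, pointing the affected working copies at one consistent URL is usually enough, and Subversion has a built-in command for exactly this (a sketch - the hostnames are illustrative):

        # Subversion 1.6 and earlier:
        svn switch --relocate http://oldserver/svn/repo http://newserver/svn/repo

        # Subversion 1.7 and later:
        svn relocate http://newserver/svn/repo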

    Read the article

  • What does your ~/.gitconfig contain?

    - by Rajkumar S
    Hi, I am looking to pimp up my ~/.gitconfig to make it really beautiful and take maximum advantage of the capabilities git can offer. My current ~/.gitconfig is below - what more would you add? Have some nice ~/.gitconfig you want to share? Any recommendations for merge and diff tools on Linux? Post away and let's build a nice ~/.gitconfig.

        [user]
            name = Rajkumar
            email = [email protected]
        [color]
            diff = auto
            status = auto
            branch = auto
            interactive = auto
            ui = true
            pager = true
        [color "branch"]
            current = yellow reverse
            local = yellow
            remote = green
        [color "diff"]
            meta = yellow bold
            frag = magenta bold
            old = red bold
            new = green bold
        [color "status"]
            added = yellow
            changed = green
            untracked = cyan
        [core]
            pager = less -FRSX
            whitespace = fix,-indent-with-non-tab,trailing-space,cr-at-eol
        [alias]
            co = checkout

    Thanks! raj
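    In the spirit of the question, a few widely used [alias] additions (not from the original post - all plain git, nothing exotic):

        [alias]
            co = checkout
            br = branch
            ci = commit
            st = status
            unstage = reset HEAD --
            last = log -1 HEAD
            lg = log --graph --oneline --decorate --all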

    Read the article

  • jQuery treeview - add node(s) in the middle of a tree

    - by Chris
    Hi all, I'm just getting started with jQuery and the treeview plugin, so this should be a relatively easy question. The example code for adding branches to the tree:

        var newnodes = $("<li><span class='folder'>New Sublist</span><ul>" +
            "<li><span class='file'>Item1</span></li>" +
            "<li><span class='file'>Item2</span></li></ul></li>").appendTo("#browser");
        $("#browser").treeview({ add: newnodes });

    works fine for me, but adds the new branch at the end of the tree. What I want instead is to be able to select a specific node and add to that branch. I've managed to get the node added by using the id of the particular node instead of the whole treeview in appendTo("#nodeID"). However, I can't get the tree to render correctly, either with:

        $("#nodeID").treeview({ add: newnodes });

    or

        $("#browser").treeview({ add: newnodes });

    or calling it on both without arguments. Cheers in advance.
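    What tends to work with this plugin (a sketch; it assumes the target <li> has id "nodeID" and already contains a child <ul>): append the new nodes inside the chosen node's <ul>, then pass them to treeview() called on the tree root rather than on the node:

        var added = $("<li><span class='file'>New Item</span></li>")
            .appendTo("#nodeID > ul");          // insert under the chosen branch
        $("#browser").treeview({ add: added }); // let the plugin re-render that branch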

    Read the article

  • SQL query: how to check that all records from a three-table join share the same value

    - by Stefano
    Hello. Since I'm a novice SQL developer, I need help writing a query for the following scenario (just a simplified example of my situation). I've got three tables: say, an employee table, a department table, and a companybranch table. The dept column on the employee table is a FK to the department table; the branch column on the department table is a FK to the companybranch table. Finally, several employees are "marked" with the same value. Is there a way to select all employees with the same "mark" and, in the same query, check that they work in the same company branch? Thank you in advance, Stefano
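    A sketch of one way to express the check (the id/mark column names are assumptions, since they aren't given in the question):

        -- one row per mark: how many employees carry it,
        -- and whether they all sit in the same company branch
        SELECT e.mark,
               COUNT(*)                 AS employees,
               COUNT(DISTINCT d.branch) AS branches_used,
               CASE WHEN COUNT(DISTINCT d.branch) = 1
                    THEN 'same branch' ELSE 'different branches' END AS verdict
        FROM employee e
        JOIN department d ON d.id = e.dept
        GROUP BY e.mark;

    Note that the companybranch table itself isn't needed for the check: the branch FK value on department already identifies the branch.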

    Read the article

  • Subclipse > Accidental Merge Conflict Resolution

    - by DTS
    I'm trying to merge changes from one branch into another using Subclipse. On a particular file in a particular subdirectory, I had a file conflict and edited the conflicts via the context menu option for this. However, when I went to resolve the conflict I apparently chose the wrong option and was left with the original unmerged file in my branch. Since then, I can no longer get this file back into a conflicted state so I can resolve this issue properly. I've tried deleting the file and the directory that contains it, to no avail. Any ideas?

    Read the article

  • Generating a Change Log from Subversion Logs and integrating with Jira

    - by neves
    We use Subversion for version control and Jira for tickets, and all our commit messages include a Jira ticket id. The repository has a traditional organization, with a main trunk and a version branch. I'd like to answer this question: which closed ticket items went into this release? Note that there are some caveats, like when an item is committed both in a release branch and in the main trunk. Is there a tool that already does this for me? Or should I write my own Subversion log analyzer tool?
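    If nothing off-the-shelf fits, the extraction half is small enough to script (a sketch; the repository URL, revision range, and PROJ-style ticket id pattern are placeholders):

        # collect the Jira ids mentioned in commit messages on the release branch
        svn log -r 1500:HEAD https://svn.example.com/repo/branches/2.3 \
            | grep -oE '[A-Z]+-[0-9]+' | sort -u > release-tickets.txt

    The resulting list can then be cross-checked against Jira for tickets that are actually closed.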

    Read the article

  • Git: can't undo local changes (error: path ... is unmerged)

    - by mklhmnn
    I have the following working tree state:

        $ git status foo/bar.txt
        # On branch master
        # Unmerged paths:
        #   (use "git reset HEAD <file>..." to unstage)
        #   (use "git add/rm <file>..." as appropriate to mark resolution)
        #
        #       deleted by us:      foo/bar.txt
        #
        # no changes added to commit (use "git add" and/or "git commit -a")

    File foo/bar.txt is there, and I want to get it back to the "unchanged state" again (similar to 'svn revert'):

        $ git checkout HEAD foo/bar.txt
        error: path 'foo/bar.txt' is unmerged
        $ git reset HEAD foo/bar.txt
        Unstaged changes after reset:
        M       foo/bar.txt

    Now it is getting confusing:

        $ git status foo/bar.txt
        # On branch master
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #       new file:   foo/bar.txt
        #
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   foo/bar.txt
        #

    The same file in both sections, new and modified? What should I do? Thanks in advance.
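    What usually untangles this state (a sketch, not from the original thread): clear the unmerged index entry first, then restore the file from HEAD - the order matters, because checkout refuses to touch an unmerged path:

        git reset -- foo/bar.txt       # clears the unmerged/staged entry
        git checkout -- foo/bar.txt    # restores the working copy from HEAD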

    Read the article

  • Git for Websites / post-receive / Separation of Test and Production Sites

    - by Walt W
    Hi all, I'm using Git to manage my website's source code and deployment, and currently have the test and live sites running on the same box. Following http://toroid.org/ams/git-website-howto originally, I came up with the following post-receive hook script to differentiate between pushes to my live site and pushes to my test site:

        while read ref
        do
            #echo "Ref updated:"
            #echo $ref -- would print something like example at top of file
            result=`echo $ref | gawk -F' ' '{ print $3 }'`
            if [ $result != "" ]; then
                echo "Branch found: "
                echo $result
                case $result in
                    refs/heads/master )
                        git --work-tree=c:/temp/BLAH checkout -f master
                        echo "Updated master"
                        ;;
                    refs/heads/testbranch )
                        git --work-tree=c:/temp/BLAH2 checkout -f testbranch
                        echo "Updated testbranch"
                        ;;
                    * )
                        echo "No update known for $result"
                        ;;
                esac
            fi
        done
        echo "Post-receive updates complete"

    However, I have doubts that this is actually safe :) I'm by no means a Git expert, but I am guessing that Git probably keeps track of the currently checked-out branch head, and this approach probably has the potential to confuse it to no end. So a few questions:

    1. Is this safe?
    2. Would a better approach be to have my base repository be the test site repository (with a corresponding working directory), and then have that repository push changes to a new live site repository, which has a corresponding working directory for the live site? This would also allow me to move production to a different server and keep the deployment chain intact.
    3. Is there something I'm missing? Is there a different, clean way to differentiate between test and production deployments when using Git for managing websites?

    As an additional note, in light of Vi's answer: is there a good way to do this that would handle deletions without mucking with the file system much?

    Thank you, -Walt

    PS - The script I came up with for the multiple repos (and am using unless I hear better) is as follows:

        sitename=`basename \`pwd\``
        while read ref
        do
            #echo "Ref updated:"
            #echo $ref -- would print something like example at top of file
            result=`echo $ref | gawk -F' ' '{ print $3 }'`
            if [ $result != "" ]; then
                echo "Branch found: "
                echo $result
                case $result in
                    refs/heads/master )
                        git checkout -q -f master
                        if [ $? -eq 0 ]; then
                            echo "Test Site checked out properly"
                        else
                            echo "Failed to checkout test site!"
                        fi
                        ;;
                    refs/heads/live-site )
                        git push -q ../Live/$sitename live-site:master
                        if [ $? -eq 0 ]; then
                            echo "Live Site received updates properly"
                        else
                            echo "Failed to push updates to Live Site"
                        fi
                        ;;
                    * )
                        echo "No update known for $result"
                        ;;
                esac
            fi
        done
        echo "Post-receive updates complete"

    And then the repo in ../Live/$sitename (these are "bare" repos with working trees added after init) has the basic post-receive:

        git checkout -f
        if [ $? -eq 0 ]; then
            echo "Live site `basename \`pwd\`` checked out successfully"
        else
            echo "Live site failed to checkout"
        fi

    Read the article

  • Value comparison with a multivalued column in a SQL database table

    - by Rishabh Ohri
    Hi all, suppose there is a table A which has a multivalued column AccessRights containing comma-separated strings such as STOLI,HELP,BRANCH. A stored procedure written against this table fetches records based on an @AccessRights parameter sent to the SP. This parameter is also a comma-separated string and may have a value like STOLI,BRANCH,HELPLINE. I want to compare the individual values from the parameter @AccessRights with the column AccessRights. My current approach: I split the comma-separated string @AccessRights using a user-defined function Split, which returns the individual values in a table variable (containing the single column accessGroup), and I use the following code in the SP for the comparison:

        WHERE AccessRights LIKE '%' + accessGroup + '%'

    Now, if the user passes the parameter (HELP, OLI) instead of (HELP, STOLI), the SP will still return output. What should be done in the comparison so that the substring OLI does not match STOLI?
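    The standard fix (a sketch; it assumes the access rights themselves never contain embedded commas) is to compare whole tokens rather than raw substrings, by wrapping both sides in delimiters:

        WHERE ',' + AccessRights + ',' LIKE '%,' + LTRIM(RTRIM(accessGroup)) + ',%'

    With the delimiters in place, OLI only matches the token OLI, never STOLI; the LTRIM/RTRIM guards against stray spaces such as the one in "HELP, OLI".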

    Read the article

  • Cannot combine commits using TortoiseGit

    - by JC
    I have two branches with several commits each. On one branch, I can go to the log, select two commits, and TortoiseGit shows "combine to one commit" in the context menu. On the other branch this option does not appear in the context menu. Both sequences of commits are very similar - add a file, then modify it - so there is no real difference between the branches. What factors would cause "combine to one commit" to not be available? I'm wondering if I should just switch to the command line.
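    If you do drop to the command line, the equivalent of "combine to one commit" is an interactive rebase (a sketch; it assumes the two commits are the most recent on the branch and haven't been pushed anywhere shared):

        git rebase -i HEAD~2
        # in the editor that opens, leave the first commit as "pick" and
        # change the second from "pick" to "squash" (or "fixup"), then save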

    Read the article

  • Capistrano configuration

    - by Eli
    I'm having some issues with variable scope with the capistrano-ext gem's multistage module. I currently have this in config/deploy/staging.rb:

        set(:settings) { YAML.load_file("config/deploy.yml")['staging'] }
        set :repository, settings["repository"]
        set :deploy_to,  settings["deploy_to"]
        set :branch,     settings["branch"]
        set :domain,     settings["domain"]
        set :user,       settings["user"]

        role :app, domain
        role :web, domain
        role :db,  domain, :primary => true

    My config/deploy/production.rb file is similar. This doesn't seem very DRY. Ideally, I think I'd like everything to be in the deploy.rb file; if there were a variable set with the current stage, everything would be really clean.

    UPDATE: I found a solution. I defined this function in deploy.rb:

        def set_settings(params)
          params.each_pair do |k, v|
            set k.to_sym, v
          end
          if exists? :domain
            role :app, domain
            role :web, domain
            role :db,  domain, :primary => true
          end
        end

    Then my staging.rb file is just:

        set_settings(YAML.load_file("config/deploy.yml")['staging'])

    Read the article

  • How to turn off project mirroring from SourceForge to Launchpad?

    - by C.W.Holeman II
    I have project Emle in Launchpad, and I set it to import from emle.svn.sourceforge.net. My intention was to do a single import of the files from SourceForge; EmleBranches2.0 shows that what I actually did was set it to mirror the SourceForge project:

        Import details
        Import Status: Reviewed
        This branch is an import of the Subversion branch from
        https://emle.svn.sourceforge.net/svnroot/emle/trunk.
        The next import is scheduled to run in 35 minutes.
        Last successful import was 5 hours ago.
        Import started 5 hours ago on russkaya and finished 5 hours ago taking 30 seconds - see the log
        Import started 12 hours ago on neumayer and finished 12 hours ago taking 30 seconds - see the log
        Import started 20 hours ago on russkaya and finished 20 hours ago taking 30 seconds - see the log

    How can I turn off the mirroring?

    Read the article

  • SVN best practice - checking out root folder

    - by Stephen Dolier
    Hi all, a quick question about SVN checkout best practice. Once the structure of a repository is set up (i.e. trunk, branches, tags), is it normal to have the root checked out to our local machines? Or should you only check out trunk (if that's what you are working on) or a branch, if we choose to create one? The reason I ask is that every time someone creates a branch or tag, we all get a copy when we do an update. By the way, we recently migrated from VSS.
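    One way to keep a root checkout without receiving every branch and tag (a sketch using Subversion 1.5+ sparse directories; the URL is illustrative):

        svn checkout --depth immediates http://svn.example.com/repo myrepo
        cd myrepo
        svn update --set-depth infinity trunk   # fully populate only trunk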

    Read the article

  • Using git branches for variations of a project

    - by Trevor Hartman
    I'm using git's branching feature to manage five variations of a small website. All five versions will be live in different subdirectories on production. My approach to checking out the various branches to their respective folders was:

        mkdir foo && cd foo
        git init
        git remote add origin git@...:project.git
        git fetch origin foo:foo

    where "foo" is a given branch name. This was fine, except that it pulled my entire repo (designs, AS3 source, etc.) into those branch folders, instead of just the public www folder, which is the only thing I really want on production. Is there a cleaner way to handle this? Git can't clone subdirectories, right?
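    Correct - a fetch or clone always transfers the whole repository - but sparse checkout (available since Git 1.7.0) can at least keep everything except the public folder out of the working tree. A sketch, assuming the public folder is named www:

        git config core.sparseCheckout true
        echo "www/" >> .git/info/sparse-checkout
        git checkout foo        # only www/ is populated in the work tree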

    Read the article
