Search Results

Search found 17076 results on 684 pages for 'nightly build'.

Page 530 of 684

  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture. Although the concepts involved have been around for longer, it has only really come together as a concrete approach in the last 15 years or so. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies including Oracle and IBM. SCA, as used in SOA Suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles, like the separation of application and business logic, are maintained.

    Orchestration or Integration? A common source of confusion for people who are beginning either to build a new SOA-based infrastructure or to move an old system to be service oriented is the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however, it's important to remember that, although a hammer can be used to drive a screw into wood, that doesn't mean it's the best way to do it. Service integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation – a simple product ordering system, for example, might integrate a stock checking service and a payment processing service. Process orchestration, however, is generally a higher-level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might take the earlier example of a product ordering service and couple it with a business rules service and a human task to handle edge cases. A good (but not exhaustive) rule of thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention.

    BPEL vs BPMN: For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects it's certainly an important consideration for those using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA Suite has no BPMN engine, whereas BPM Suite has both a BPMN and a BPEL engine. SOA Suite has the ESB component "Mediator", whereas BPM Suite has none. Decisions must be made, therefore, on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line. Design for performance: Read the complete article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • SOA Summit - Oracle Session Replay

    - by Bruce Tierney
    If you think you missed the most recent Integration Developer News (IDN) "SOA Summit" 2013... good news, you didn't. At least not the replay of the Oracle session titled: Three Solutions for Simplifying Cloud/On-Premises Integration.

    As you will see in the replay below, this session introduces three common reasons for integration complexity: disparate toolkits, lack of API management, and rigid, brittle infrastructure. It then covers the three solutions to these challenges: unify cloud and on-premises integration, enable multi-channel development with API management, and plan for the unexpected - future readiness. The last solution, on future readiness, describes how you can transition from being reactive to new trends, such as the Internet of Things (IoT), by modifying your integration strategy to enable business agility, and how to recognize trends through Fast Data event processing ahead of your competition.

    Oracle SOA Suite customer SFpark's (San Francisco Metropolitan Transit Authority) implementation with API Management is covered, as shown in the screenshot to the right. This case study covers the core areas of API Management for partners to build their own applications by leveraging parking availability and real-time pricing, as well as mobile enablement of the data integrated by SOA Suite underneath. Download the free SFpark app from the Apple and Android app stores to check it out.

    When looking into the future, the discussion starts with a historical look to better prepare for what comes next. As shown in the image below, one of the next frontiers after mobile and cloud integration is a deeper level of direct "enterprise to customer" interaction. Much of this relates to the Internet of Things. Examples of IoT from the perspective of SOA and integration are also covered in the session. For example, early adopter Turkcell and their tracking of mobile phone users as they move from point A to B to C is shown in the image to the right. As you look into more "smart services" such as location-based services, how "future ready" is your application infrastructure?

    Check out the replay by clicking the video image below to learn about these three challenges and solutions, including how to "future ready" your application infrastructure:

    Read the article

  • Creating a shared library that might be used with desktop applications and web projects

    - by dreza
    I have been involved in a number of MVC.NET and C# desktop projects in our company over the last year or so, while also managing to keep my nose poked into other projects (in a read-only learning capacity, of course). From this I've noticed that across the various projects and teams there is a lot of functionality that has been well designed against good interfaces and abstractions. Because we tend to like our own work at times, I noticed a couple of projects had the exact same class and method copied into them, as it had obviously worked on one project and so was easily moved to a new one (probably by the same developer who originally wrote it).

    I mentioned this fact in one of the programmer meetings we have occasionally and suggested we pull some of this functionality into a core company library that we can build up over time and use across multiple projects. Everyone agreed and I started looking into this possibility. However, I've come across a stumbling block pretty early on. Our team primarily focuses on MVC at the moment and we have projects mainly in 2.0 but are starting to branch to 3.0. We also have a number of desktop applications that might benefit from some shared classes and basic helper methods.

    Initially when creating this DLL I included some shared classes that could be used across any project type (web, client, etc.), but then I started looking at adding some shared modules that would be useful in our MVC applications only. However, this meant I had to include a reference to some Microsoft web DLLs in order to leverage some of the classes I was creating (at this stage MVC 2.0). Now my issue is that we have a shared DLL with references to web-specific libraries that could also possibly be used in a client application. Not only that, our DLL initially referenced MVC 2.0 and we will eventually move onto MVC 3.0 for all projects, but a lot of the classes in this library I expect to still be relevant to MVC 3, etc. Our code within this DLL is separated into its own namespaces such as: CompanyDLL.Primitives, CompanyDLL.Web.Mvc, CompanyDLL.Helpers, etc.

    So, my questions are: Is it OK to do a shared library like this, or if we have web-specific features in it should we create a separate web DLL targeted only at a specific framework or MVC version? If it's OK, what kind of issues might we face when using the library that references MVC 2 in an MVC 3 project, for example? I would be thinking that we might run into some sort of compatibility issue, or even issues where the developers using the library don't realize they need the MVC 2.0 libraries; they might only want to use some of the generic classes.

    The concept seemed like a good idea at the time, but I'm starting to think maybe it's not really a practical solution. But the number of times I've seen classes and methods copied across projects because they are proven, tested code is a bit unnerving to be perfectly honest!
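    One way to picture the split being asked about: keep anything that touches System.Web.Mvc in its own assembly, and let the core assembly stay framework-agnostic. The sketch below is only an illustration of that layout (the type names and members are hypothetical, not from the poster's code):

        // CompanyDLL.Core.dll - no reference to System.Web.Mvc, usable from desktop and web projects alike
        namespace CompanyDLL.Primitives
        {
            public static class StringHelpers
            {
                // Plain framework code only, so any project type can consume it.
                public static string Truncate(string value, int maxLength)
                {
                    if (string.IsNullOrEmpty(value) || value.Length <= maxLength)
                    {
                        return value;
                    }
                    return value.Substring(0, maxLength);
                }
            }
        }

        // CompanyDLL.Web.Mvc.dll - references CompanyDLL.Core.dll plus a specific MVC version
        namespace CompanyDLL.Web.Mvc
        {
            using System.Web.Mvc;
            using CompanyDLL.Primitives;

            public static class HtmlHelperExtensions
            {
                // MVC-specific helpers live here, so desktop consumers never pull in the web assemblies.
                public static MvcHtmlString TruncatedText(this HtmlHelper html, string value, int maxLength)
                {
                    return MvcHtmlString.Create(StringHelpers.Truncate(value, maxLength));
                }
            }
        }

    With that arrangement, only the web assembly has to be rebuilt (or shipped per MVC version) when moving from MVC 2 to MVC 3, while the core assembly stays untouched.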

    Read the article

  • Keep Learning After Your Oracle Training Class is Over - Save 50%!

    - by KJones
    Written by Amit Kumar, Senior Director, Oracle University Digital Training. Every training class you take about the latest Oracle application or technology moves you closer to developing the skills you need to succeed. But after class is over, how do you keep up with today's accelerating pace of innovation? To keep up with the very latest technological advances, you need an ongoing and flexible training solution: one that lets you learn during your own downtime, knowledge that's easy to access, interactive lessons where you connect with experts, and a simple way to increase your knowledge, on your own time and at your own pace. The new Oracle Learning Streams is the flexible training solution you're looking for.

    Continuously Learn with Oracle Learning Streams: Over time, Oracle Learning Streams help you develop the depth and breadth of knowledge that will give you the tools to become an expert in your field. By taking advantage of comprehensive and frequently updated information, you can keep learning continuously, at your own pace, when it's convenient for you. Sign up today and get 12 months of unlimited access to:
    •    Hundreds of videos delivered by Oracle experts for fresh and continuous product learning
    •    Live connections with Oracle's top instructors
    •    Robust video search capability to find exactly what you're looking for
    •    Features that allow you to build your own custom learning queue and request new content

    Oracle Learning Streams are now available for Oracle Database and Oracle Middleware. Take a moment to preview the content now.

    For a Limited Time - Save 50%: For a limited time, save 50% when you order Oracle Learning Streams with any other Oracle Classroom, Live Virtual Class or Training On Demand course. Now there is no reason for learning to stop when class is over!

    Read the article

  • UK Oracle User Group Event: Trends in Identity Management

    - by B Shashikumar
    As threat levels rise and new technologies such as cloud and mobile computing gain widespread acceptance, security is occupying more and more mindshare among IT executives. To help prepare for the rapidly changing security landscape, the Oracle UK User Group community and our partners at Enline/SENA have put together a User Group event in London on Apr 19 where you can learn more from your industry peers about upcoming trends in identity management. Here are some of the key trends in identity management and security that we predicted at the beginning of last year, and a look at how they have turned out so far. You have to admit that we have a pretty good track record when it comes to forecasting trends in identity management and security.

    Threat levels will grow—and there will be more serious breaches: We have since witnessed breaches of high-value targets like RSA and Epsilon. Most organizations have not done enough to protect against insider threats. Organizations need to look for security solutions that stop user access to applications based on real-time patterns of fraud, and that handle situations in which employees change roles or employment status within a company.

    Cloud computing will continue to grow—and require new security solutions: Cloud computing has since exploded into a dominant secular trend in the industry. Cloud computing continues to present many opportunities, like low upfront costs and rapid deployment, but it also increases policy fragmentation and reduces visibility and control. So organizations require solutions that bridge the security gap between the enterprise and cloud applications to reduce fragmentation and increase control.

    Mobile devices will challenge traditional security solutions: Since that time, we have witnessed a proliferation of mobile devices. Combined with increasing numbers of employees bringing their own devices to work (BYOD), these trends continue to dissolve the traditional boundaries of the enterprise. This, in turn, requires a holistic approach within an organization that combines strong authentication and fraud protection, externalization of entitlements, and centralized management across multiple applications—and open standards to make all that possible.

    Security platforms will continue to converge: As organizations move increasingly toward vendor consolidation, security solutions are also evolving. Next-generation identity management platforms have best-of-breed features, and must also remain open and flexible to remain viable. As a result, developers need products such as the Oracle Access Management Suite in order to efficiently and reliably build identity and access management into applications—without requiring security experts.

    Organizations will increasingly pursue "business-centric compliance": Privacy and security regulations have continued to increase, so businesses are increasingly looking for solutions that combine strong security and compliance management tools with a business-ready experience for faster, lower-cost implementations.

    If you'd like to hear more about the top trends in identity management and learn how to empower yourself, then join us for the Oracle UK User Group event on Thu Apr 19 in London, where Oracle and Enline/SENA product experts will come together to share security trends, best practices, and solutions for your business. Register Here.

    Read the article

  • BIDS Helper 1.6 Beta Release (now with SQL 2012 support!)

    - by Darren Gosbell
    The beta for BIDS Helper 1.6 was just released. We have not updated the version notification just yet as we would like to get some feedback on people's experiences with the SQL 2012 version. So if you are using SQL 2012, go grab it and let us know how you go (you can post a comment on this blog post or on the BIDS Helper site itself). This is the first release that supports SQL 2012 and consequently also the first release that runs in Visual Studio 2010. A big thanks to Greg Galloway for doing the bulk of the work on this release.

    Please note that if you are doing an xcopy deploy you will need to unblock the files you download or you will get a cryptic error message. This appears to be caused by a security update to either Visual Studio or the .Net framework – the xcopy deploy instructions have been updated to show you how to do this. Below are the notes from the release page.

    ======

    This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new:
    - Analysis Services Tabular: Smart Diff
    - Tabular Actions Editor
    - Tabular HideMemberIf
    - Tabular Pre-Build

    Fixes and Updates:
    - The Unused Datasets feature for Reporting Services now accounts for new features in Reporting Services 2008 R2 like Lookups and new features in Reporting Services 2012.
    - SSIS: emit an informational message when a variable has an expression defined and EvaluateAsExpression = False
    - SSAS: roles report points to wrong server
    - SSIS: Variable Copy / Move broken in v1.5
    - "Unused DataSets Report" not showing up in Context menu on VS2005 if Solution Folders used
    - SSAS Tabular: Create a UI for managing actions
    - SSAS Tabular: Smart Diff improvements for new schema and Tabular models
    - SSIS: Copy/Move Variable erroring due to custom Control Flow item icon
    - SSIS Performance Visualization: index out of range
    - Fixed bugs in AggManager when aggregation design IDs don't match names

    The exe downloads are a self-extracting installer; the zip downloads allow for an xcopy deploy. Make sure to note the updated xcopy deploy instructions for SQL Server 2012.

    Read the article

  • Use Expressions with LINQ to Entities

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] Recently I've been putting together a generic approach for paging the response from a WCF service. Paging changes the service signature, so it's not as simple as adding a behavior to an existing service in config, but the complexity of the paging is isolated in a generic base class. We're using the Entity Framework talking to SQL Server, so when we ask for a page using LINQ's .Take() method we get a nice efficient SQL query for just the rows we want, with minimal impact on SQL Server and network traffic. We use the maximum ID of the record returned as a high-water mark (rather than using .Skip() to go to the next record), so the approach caters for records being deleted between page requests. In the paged response we include a HasMorePages indicator, computed by comparing the max ID in the page of results to the max ID for the whole resultset - if the latter is bigger, then there are more pages.

    In some quick performance testing, the paged version of the service performed much more slowly than the unpaged version, which was unexpected. We narrowed it down to the code which gets the max ID for the full resultset - instead of building an efficient MAX() SQL query, EF was returning the whole resultset and then computing the max ID in the service layer. It's easy to reproduce - take this AdventureWorks query:

        var context = new AdventureWorksEntities();
        var query = from od in context.SalesOrderDetail
                    where od.ModifiedDate >= modified
                       && od.SalesOrderDetailID.CompareTo(id) > 0
                    orderby od.SalesOrderDetailID
                    select od;

    We can find the maximum SalesOrderDetailID like this:

        var maxIdEfficiently = query.Max(od => od.SalesOrderDetailID);

    which produces our efficient MAX() SQL query. If we're doing this generically and we already have the ID function in a Func:

        Func<SalesOrderDetail, int> idFunc = od => od.SalesOrderDetailID;
        var maxIdInefficiently = query.Max(idFunc);

    This fetches all the results from the query and then runs the Max() function in code. If you look at the difference in Reflector, the first call passes an Expression to Max(), while the second call passes a Func. So it's an easy fix - wrap the Func in an Expression:

        Expression<Func<SalesOrderDetail, int>> idExpression = od => od.SalesOrderDetailID;
        var maxIdEfficientlyAgain = query.Max(idExpression);

    - and we're back to running an efficient MAX() statement. Evidently the EF provider can dissect an Expression and build its equivalent in SQL, but it can't do that with Funcs.
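    To show how that plays out in the kind of generic base class the post mentions, here is a minimal sketch (the type and member names are hypothetical, not the author's actual code) where the key selector is held as an Expression<Func<...>> so that both Max() calls stay translatable to SQL:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Linq.Expressions;

        public abstract class PagedQueryBase<TEntity, TKey>
        {
            // Declared as an Expression, not a Func, so Queryable.Max() can translate it to MAX() in SQL.
            protected abstract Expression<Func<TEntity, TKey>> IdSelector { get; }

            public bool HasMorePages(IQueryable<TEntity> fullQuery, IQueryable<TEntity> page)
            {
                TKey maxInPage = page.Max(IdSelector);       // a single MAX() query over the page
                TKey maxOverall = fullQuery.Max(IdSelector); // a single MAX() query over the full resultset
                return Comparer<TKey>.Default.Compare(maxOverall, maxInPage) > 0;
            }
        }

    Had IdSelector been a Func<TEntity, TKey>, the Max() calls would bind to Enumerable.Max instead of Queryable.Max and pull the whole resultset into memory, which is exactly the slowdown described above.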

    Read the article

  • Building a Solaris 11 repository without network connection

    - by user12611852
    Solaris 11 has been released and is a fantastic new iteration of Oracle's rock solid, enterprise operating system. One of the great new features is the repository-based Image Packaging System. IPS not only introduces new cloud-based package installation services, it is also integrated with our zones, boot environment and ZFS file systems to provide a safe, easy and fast way to perform system updates.

    My customers typically don't have network access and, in fact, can't connect to any network until they have "Authority to connect." It's useful, however, to build up a Solaris 11 system with additional software using the new Image Packaging System and a locally stored repository. The Solaris 11 documentation describes how to create a locally stored repository with full explanations of what the commands do. I'm simply providing the quick and dirty steps. The easiest way is to download the ISO image, burn it to a DVD and insert it into your DVD drive. Then as root:

        pkg set-publisher -G '*' -g file:///cdrom/sol11repo_full/repo solaris

    Now you can install software using the GUI package manager or the pkg commands. If you would like something more permanent (or don't have a DVD drive), however, it takes a little more work. After installing Solaris 11:

    - Download (on another system perhaps) the two files that make up the Solaris 11 repository from our download site.
    - Sneaker-net the files to your Solaris 11 system.
    - Unzip and cat the two files together to create one large ISO image. The file is about 6.9 GB in size.

        zfs create rpool/export/repoSolaris11
        zfs set atime=off rpool/export/repoSolaris11
        zfs set compression=on rpool/export/repoSolaris11   (save some space)
        lofiadm -a sol-11-1111-repo-full.iso /dev/lofi/1
        mount -F hsfs /dev/lofi/1 /mnt

    You could stop here and set the publisher to point to the /mnt/repo location; however, this mount will not be persistent across reboots. Copy the repository from the mounted ISO image to a permanent, on-disk location:

        rsync -aP /mnt/repo /export/repoSolaris11
        pkgrepo -s /export/repoSolaris11 refresh
        pkg set-publisher -G '*' -g /export/repoSolaris11/repo solaris

    You now have a locally installed repository for adding additional software packages for Solaris 11. The documentation also takes you through publishing your repository on the network so that others can access it.

    Read the article

  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2d game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action rpg using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon. At first I just converted the character's aim vector to radians and passed that into spritebatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axis are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into spritebatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal the perspective of the physics engine didn't match with the simplistic way I was converting the character's aim direction to a screen rotation.

    Ok, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix which I build from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45 degree rotation on the x axis). My question is, is there a way to get not just rotation from a series of matrix transformations, but to also get a Vector2 scale which would give the aimer the appearance of being a 3d object, being warped by perspective? Orthographic perspective is what I'm going for, I think. So, the aimer arrow would get longer when facing sideways, and shorter when facing north and south because of the perspective. At the same time, it would get wider when facing north and south, and less wide when facing right or left.

    I'd like to avoid actually drawing the aimer texture in 3d because I'm still using spritebatch's layerdepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3d object within the depth sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on stack exchange. Thanks a lot for reading!

    Note: (I think) I realize it can't be a technically correct 3D perspective, because the spritebatch's vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is, is there a good way to fake the effect, or should I just drop it and not scale at all?

    Edit to clarify without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane) which should be tilted at a 45 degree angle (around the X axis) from the viewing perspective. Alex
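    As a rough illustration of the projection and the "fake" scale described in the question (a sketch only - the helper names are hypothetical, and the 0.707 factor and axis convention are taken straight from the description above):

        using System;
        using Microsoft.Xna.Framework;

        static class AimerMath
        {
            // Y and Z are each foreshortened by ~0.707 and summed, per the description above.
            const float Perspective = 0.707f;

            public static Vector2 WorldToScreen(Vector3 world)
            {
                return new Vector2(world.X, world.Y * Perspective + world.Z * Perspective);
            }

            // One way to fake the orthographic foreshortening for the aimer sprite:
            // full length when aiming east/west, ~0.707 of it when aiming north/south,
            // and the opposite for the width (assumes rotation 0 means aiming along +X).
            public static Vector2 FakeAimerScale(float aimRotation)
            {
                float northSouth = Math.Abs((float)Math.Sin(aimRotation));
                float length = MathHelper.Lerp(1f, Perspective, northSouth);
                float width = MathHelper.Lerp(Perspective, 1f, northSouth);
                return new Vector2(length, width);
            }
        }

    Whether that looks convincing enough, compared with drawing the arrow as real geometry on the XZ plane, is exactly the trade-off the question is weighing.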

    Read the article

  • Per-pixel collision detection - why does XNA transform matrix return NaN when adding scaling?

    - by JasperS
    I looked at the TransformCollision sample on MSDN and added the Matrix.CreateTranslation part to a property in my collision detection code, but I wanted to add scaling. The code works fine when I leave scaling commented out, but when I add it and then do a Matrix.Invert() on the created translation matrix, the result is NaN ({NaN,NaN,NaN},{NaN,NaN,NaN},...). Can anyone tell me why this is happening please? Here's the code from the sample:

        // Build the block's transform
        Matrix blockTransform =
            Matrix.CreateTranslation(new Vector3(-blockOrigin, 0.0f)) *
            // Matrix.CreateScale(block.Scale) * would go here
            Matrix.CreateRotationZ(blocks[i].Rotation) *
            Matrix.CreateTranslation(new Vector3(blocks[i].Position, 0.0f));

        public static bool IntersectPixels(
            Matrix transformA, int widthA, int heightA, Color[] dataA,
            Matrix transformB, int widthB, int heightB, Color[] dataB)
        {
            // Calculate a matrix which transforms from A's local space into
            // world space and then into B's local space
            Matrix transformAToB = transformA * Matrix.Invert(transformB);

            // When a point moves in A's local space, it moves in B's local space with a
            // fixed direction and distance proportional to the movement in A.
            // This algorithm steps through A one pixel at a time along A's X and Y axes
            // Calculate the analogous steps in B:
            Vector2 stepX = Vector2.TransformNormal(Vector2.UnitX, transformAToB);
            Vector2 stepY = Vector2.TransformNormal(Vector2.UnitY, transformAToB);

            // Calculate the top left corner of A in B's local space
            // This variable will be reused to keep track of the start of each row
            Vector2 yPosInB = Vector2.Transform(Vector2.Zero, transformAToB);

            // For each row of pixels in A
            for (int yA = 0; yA < heightA; yA++)
            {
                // Start at the beginning of the row
                Vector2 posInB = yPosInB;

                // For each pixel in this row
                for (int xA = 0; xA < widthA; xA++)
                {
                    // Round to the nearest pixel
                    int xB = (int)Math.Round(posInB.X);
                    int yB = (int)Math.Round(posInB.Y);

                    // If the pixel lies within the bounds of B
                    if (0 <= xB && xB < widthB && 0 <= yB && yB < heightB)
                    {
                        // Get the colors of the overlapping pixels
                        Color colorA = dataA[xA + yA * widthA];
                        Color colorB = dataB[xB + yB * widthB];

                        // If both pixels are not completely transparent,
                        if (colorA.A != 0 && colorB.A != 0)
                        {
                            // then an intersection has been found
                            return true;
                        }
                    }

                    // Move to the next pixel in the row
                    posInB += stepX;
                }

                // Move to the next row
                yPosInB += stepY;
            }

            // No intersection found
            return false;
        }
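    A quick way to narrow down where the NaN comes from (a suggestion, not a confirmed diagnosis): a scale of zero on any axis makes the transform singular, and inverting a singular matrix yields NaN entries, so it is worth checking the determinant and the scale value before the Invert() call. Hypothetical sketch:

        // Build the transform with the scale included, as the question describes.
        Matrix blockTransform =
            Matrix.CreateTranslation(new Vector3(-blockOrigin, 0.0f)) *
            Matrix.CreateScale(block.Scale) *
            Matrix.CreateRotationZ(blocks[i].Rotation) *
            Matrix.CreateTranslation(new Vector3(blocks[i].Position, 0.0f));

        // A determinant of 0 (e.g. block.Scale == 0, or an uninitialized/NaN scale)
        // means the matrix cannot be inverted and Matrix.Invert will return NaN values.
        System.Diagnostics.Debug.Assert(blockTransform.Determinant() != 0f,
            "blockTransform is singular - check block.Scale before calling Matrix.Invert");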

    Read the article

  • Releasing the new Sample Browser Phone app

    - by Jialiang
    Originally posted on: http://geekswithblogs.net/Jialiang/archive/2014/06/05/releasing-the-new-sample-browser-phone-app.aspx

    Starting its journey in 2010, Sample Browser completes its tetralogy by releasing a Windows Phone version of Sample Browser today. The new Windows Phone app is the fourth milestone for Sample Browser since we released the desktop version and the Visual Studio version in 2012 and the Windows Store version in 2013. This time, by providing a sample browser designed for a "walking" platform in response to MVPs' suggestions during last year's MVP Global Summit, we are literally putting a world of code samples "at developers' fingertips". If you'd like to have a code gallery of over 7000 quality code samples in your pocket, then click here to download our Windows Phone Sample Browser and start a fantastic mobile experience.

    With the Windows Phone version of Sample Browser and an Internet connection, you can search for code samples on MSDN at any time and anywhere you want, 24/7, even in bed. You can also check code sample details and share them with your friends. Compared to the other three pieces of the tetralogy (the desktop version, the Visual Studio version, and the Windows Store version), the Windows Phone version sells itself on convenience and instant connectivity. For those who need to reach code samples on the move where no PC is available, the Windows Phone version of Sample Browser will definitely be the right service for you. Aside from sharing samples via email as the other three do, the Windows Phone version also allows you to share samples via SMS and Near Field Communication (NFC).

    What's Next: Currently, the Windows Phone Sample Browser only supports online MSDN code searching, but we already plan to upgrade Sample Browser to allow users to do "Bing code search", and to add and manage their private code snippets. We will also upgrade the app to a universal app. Universal App is a new concept introduced at the Microsoft Build Developer Conference 2014. It is a new development model that allows a single app to be deployed across multiple Windows devices such as Windows Phone, Windows 8.1, and Xbox. Therefore, once we finish upgrading Sample Browser to a universal app, you can synchronize your own code snippets across different devices; you can also mark a code sample as a favorite on your Windows Phone and continue to study the sample when you are on your desktop. By then, sharing data between platforms will be a piece of cake, and the user experience of Sample Browser on different platforms will be more consistent. The best is yet to come!

    We sincerely suggest you give Sample Browser a try (click here to download). If you love what you see in Sample Browser, please recommend it to your friends and colleagues. If you encounter any problems or have any suggestions for us, please contact us at [email protected]. Your precious opinions and comments are more than welcome.

    Read the article

  • Generate MERGE statements from a table

    - by Bill Graziano
    We have a requirement to build a test environment where certain tables get reset from production every night. These are mainly lookup tables. I played around with all kinds of fancy solutions and finally settled on a series of MERGE statements. And being lazy I didn't want to write them myself. The stored procedure below will generate a MERGE statement for the table you pass it. If you have identity values it populates those properly. You need to have primary keys on the table for the joins to be generated properly. The only thing hard coded is the source database. You'll need to update that for your environment. We actually used a linked server in our situation.

        CREATE PROC dba_GenerateMergeStatement (@table NVARCHAR(128))
        AS
        set nocount on;
        declare @return int;
        PRINT '-- ' + @table + ' -------------------------------------------------------------'
        --PRINT 'SET NOCOUNT ON;--'

        -- Set the identity insert on for tables with identities
        select @return = objectproperty(object_id(@table), 'TableHasIdentity')
        if @return = 1
            PRINT 'SET IDENTITY_INSERT [dbo].[' + @table + '] ON; '

        declare @sql varchar(max) = ''
        declare @list varchar(max) = '';

        SELECT @list = @list + [name] + ', '
        from sys.columns
        where object_id = object_id(@table)

        SELECT @list = @list + [name] + ', '
        from sys.columns
        where object_id = object_id(@table)

        SELECT @list = @list + 's.' + [name] + ', '
        from sys.columns
        where object_id = object_id(@table)

        -- --------------------------------------------------------------------------------
        PRINT 'MERGE [dbo].[' + @table + '] AS t'
        PRINT 'USING (SELECT * FROM [source_database].[dbo].[' + @table + ']) as s'

        -- Get the join columns ----------------------------------------------------------
        SET @list = ''
        select @list = @list + 't.[' + c.COLUMN_NAME + '] = s.[' + c.COLUMN_NAME + '] AND '
        from INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
        where pk.TABLE_NAME = @table
        and CONSTRAINT_TYPE = 'PRIMARY KEY'
        and c.TABLE_NAME = pk.TABLE_NAME
        and c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME

        SELECT @list = LEFT(@list, LEN(@list) - 3)
        PRINT 'ON ( ' + @list + ')'

        -- WHEN MATCHED ------------------------------------------------------------------
        PRINT 'WHEN MATCHED THEN UPDATE SET'

        SELECT @list = '';
        SELECT @list = @list + ' [' + [name] + '] = s.[' + [name] + '],'
        from sys.columns
        where object_id = object_id(@table)
        -- don't update primary keys
        and [name] NOT IN (SELECT [column_name]
            from INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
            where pk.TABLE_NAME = @table
            and CONSTRAINT_TYPE = 'PRIMARY KEY'
            and c.TABLE_NAME = pk.TABLE_NAME
            and c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME)
        -- and don't update identity columns
        and columnproperty(object_id(@table), [name], 'IsIdentity ') = 0

        --print @list
        PRINT left(@list, len(@list) - 3)

        -- WHEN NOT MATCHED BY TARGET ------------------------------------------------
        PRINT ' WHEN NOT MATCHED BY TARGET THEN';

        -- Get the insert list
        SET @list = ''
        SELECT @list = @list + '[' + [name] + '], '
        from sys.columns
        where object_id = object_id(@table)
        SELECT @list = LEFT(@list, LEN(@list) - 1)
        PRINT ' INSERT(' + @list + ')'

        -- get the values list
        SET @list = ''
        SELECT @list = @list + 's.[' + [name] + '], '
        from sys.columns
        where object_id = object_id(@table)
        SELECT @list = LEFT(@list, LEN(@list) - 1)
        PRINT ' VALUES(' + @list + ')'

        -- WHEN NOT MATCHED BY SOURCE
        print 'WHEN NOT MATCHED BY SOURCE THEN DELETE; '
        PRINT ''
        PRINT 'PRINT ''' + @table + ': '' + CAST(@@ROWCOUNT AS VARCHAR(100));';
        PRINT ''

        -- Set the identity insert OFF for tables with identities
        select @return = objectproperty(object_id(@table), 'TableHasIdentity')
        if @return = 1
            PRINT 'SET IDENTITY_INSERT [dbo].[' + @table + '] OFF; '
        PRINT ''
        PRINT 'GO'
        PRINT '';

    Read the article

  • Obtaining the correct Client IP address when a Physical Load Balancer and a Web Server Configured With Proxy Plug-in Are Between The Client And Weblogic

    - by adejuanc
    Some load balancers like Big-IP have built-in interoperability with a WebLogic cluster; this means they know that WebLogic understands a header named 'WL-Proxy-Client-IP' to identify the original client IP. The problem comes when you have a web server configured with the WebLogic plug-in between the load balancer and the back-end WebLogic servers - WL-Proxy-Client-IP is not designed to go to the web server proxy plug-in. The plug-in will not use a WL-Proxy-Client-IP header that came in from the previous hop (which in this case is the physical load balancer, but could be anything), in order to prevent IP spoofing, and therefore the plug-in won't pass on what the load balancer has set for it. So unfortunately, under this architecture the header will be useless. To get the client IP from WebLogic you need to configure the extended log format and create a custom field that gets the appropriate header containing the IP of the client.

    On WLS versions prior to 10.3.3 use these instructions: You can create user-defined fields for inclusion in an HTTP access log file that uses the extended log format. To create a custom field you identify the field in the ELF log file using the Fields directive and then you create a matching Java class that generates the desired output. You can create a separate Java class for each field, or the Java class can output multiple fields. For a sample of the Java source for such a class, see "Java Class for Creating a Custom ELF Field":

        import weblogic.servlet.logging.CustomELFLogger;
        import weblogic.servlet.logging.FormatStringBuffer;
        import weblogic.servlet.logging.HttpAccountingInfo;

        /* This example outputs the X-Forwarded-For field into a
           custom field called MyOriginalClientIPField */
        public class MyOriginalClientIPField implements CustomELFLogger
        {
            public void logField(HttpAccountingInfo metrics, FormatStringBuffer buff)
            {
                buff.appendValueOrDash(metrics.getHeader("X-Forwarded-For"));
            }
        }

    In this case we are using 'X-Forwarded-For', but it could be changed to whatever header contains the data you need to use. Compile the class, jar it, and prepend it to the classpath. In order to compile and package the class:

    1. Navigate to <WLS_HOME>/user_projects/domains/<SOME_DOMAIN>/bin
    2. Set up an environment by executing: $ . ./setDomainEnv.sh
       This will include weblogic.jar in the classpath, in order to use any of the libraries included under the weblogic.* package.
    3. Compile the class by copying the content of the code above and naming the file: MyOriginalClientIPField.java
    4. Run javac to compile the class: $ javac MyOriginalClientIPField.java
    5. Package the compiled class into a jar file by executing: $ jar cvf0 MyOriginalClientIPField.jar MyOriginalClientIPField.class
       Expected output is:
       added manifest
       adding: MyOriginalClientIPField.class(in = 711) (out= 711)(stored 0%)
    6. This will produce a file called: MyOriginalClientIPField.jar

    This way you will be able to get the real client IP when the request passes through a load balancer and a web server before reaching WLS. Since 10.3.3 it is possible to configure a specific header that WLS will check when getRemoteAddr is called. That can be set on the WebServer MBean. In this case, set that to be the X-Forwarded-For header coming from the load balancer as well.

    Read the article

  • Oracle ENDECA Discovery 3.1 Partner Training 3-Day Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    To find out more about the ENDECA training, and to register for it, click here. June 24-26, 2014: Oracle Reading, UK – free to partners in EMEA.

    FREE of charge to OPN member Partners, this Oracle Endeca Information Discovery (OEID) 3-day bootcamp is designed to give partners an understanding of OEID's features, and how it complements the existing Oracle Business Intelligence suite. This workshop will provide hands-on experience with Oracle Endeca Information Discovery. Topics covered will include Data Exploration with Endeca Information Discovery, Data Ingest, Project Lifecycle, Building an Endeca Server data model and advanced modeling techniques, and Working with Studio. You will also learn about working with ETL components for content acquisition and other aspects of the project such as security. After taking this course, you will be well prepared to architect, build, demo, and implement an end-to-end Endeca Information Discovery solution.

    If you are a Big Data Analytics Architect or Developer, BI or Data Warehouse Architect, developer or consultant, you don't want to miss this 3-day workshop. Click here to register.

    Read the article

  • C# Preprocessor Directives

    - by MarkPearl
    Going back to my old C++ days at university, where we had all our code littered with preprocessor directives - I thought it made the code ugly and could never understand why it was useful. Today, though, I found a use in my C# application. The scenario: I had made various security levels in my application and tied my XAML to the levels set by static accessors in code. An example of my XAML code for a ComboBox to be enabled would be as follows:

        <ComboBox IsEnabled="{x:Static security:Security.SecurityCanEditDebtor}" />

    And then I would have a static property like this:

        public static bool SecurityCanEditDebtorPostalAddress
        {
            get
            {
                if (Settings.GetInstance().CurrentUser.SecurityCanEditDebtorPostalAddress)
                {
                    return true;
                }
                else
                {
                    return false;
                }
            }
        }

    My only problem was that my XAML did not like the if statement - which meant that while my code worked during runtime, during design time in VS2010 it gave some horrible error like:

        NullReferenceException was thrown on "StaticExtension": Exception has been thrown by the target of an invocation...

    If however my C# property was changed to something like this:

        public static bool SecurityCanEditDebtorPostalAddress
        {
            get { return true; }
        }

    my XAML viewer would be happy. But of course this would bypass my security...

    <Drum Roll> Welcome preprocessor directives... What I wanted was to totally remove the "if" code during my design experience, so that my accessor would return true without any if statement, but when I release my project to the big open world, I want the code to have the if statement. With a bit of searching I found the relevant MSDN sample and my code now looks like this:

        public static bool SecurityCanEditDebtorPostalAddress
        {
            get
            {
        #if DEBUG
                return true;
        #else
                if (Settings.GetInstance().CurrentUser.SecurityCanEditDebtorPostalAddress)
                {
                    return true;
                }
                else
                {
                    return false;
                }
        #endif
            }
        }

    Not the prettiest beast, but it works. Basically what is being said here is that during a debug-mode compile, only the code between the #if ... #else block is compiled; in a release build, the real check is. What I can now do, if I want to universally switch everything to the "if else" statement, is go to my project properties -> Build and change the "Debug" flag, as illustrated in the picture below. Also note that you can define your own conditional compilation symbols, and if you even wanted to you could skip the whole properties page and define them in code using the #define & #undef directives. So while I don't like the way the code works and would like to look more into AOP and compare it to this method, it works for now.
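    As a side note on the custom symbols mentioned above, here is a minimal hypothetical sketch (the symbol and property names are made up, following the post's pattern rather than copied from it) of using your own conditional compilation symbol instead of DEBUG, so the design-time shortcut is independent of the build configuration:

        #define DESIGN_MODE   // or set DESIGN_MODE under Project Properties -> Build -> "Conditional compilation symbols"

        public static class Security
        {
            public static bool SecurityCanEditDebtor
            {
                get
                {
        #if DESIGN_MODE
                    // Always enabled while the symbol is defined, so the XAML designer never hits runtime-only state.
                    return true;
        #else
                    return Settings.GetInstance().CurrentUser.SecurityCanEditDebtor;
        #endif
                }
            }
        }

    Undefining the symbol (or removing it from the project properties) switches every such property back to the real security check in one place.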

    Read the article

  • SEO effects of intermix of WP blog, custom PHP site and FB app game

    - by melbournetechlover
    We're a melbourne tech company in the process of building a custom site in PHP. We plan to launch a "pre-launch" page which is also custom coded (CSS3 on twitter bootstrap framework + HTML5 front end and PHP back end). On that site will be a link to a blog - the idea behind this is to build up ranking for a variety of relevant keywords prior to the full site going live (given the majority of the site is a member only community anyway so the blog is really the main way we'll be able to execute on-site SEO. Ideally, we would like to install wordpress in a subdirectory on our servers and just customise the header to look the same as the landing page of the website. But some questions and concerns... Is there any detrimental effect on SEO efforts in having two separate systems (one custom PHP, the other an installation of wordpress) to manage the blog vs the rest of the site? Are there any benefits or detriments to installing on a sub domain such as blog.sitename.com vs. sitename.com/blog. My preference would be sitename.com/blog as it feels neater - but open to suggestions based on knowledge of Google preferences. Separately, we are building a Facebook app which is under another site name. Again because we are launching this app first, from an SEO perspective, would it actually be better to run it from a sub domain on the main site - e.g. gamename.mainsitename.com instead of on app.gamename.com? Currently we have it on app.gamename.com, but if there are SEO benefits to moving it to the other domain and server then we'll do it. Basically we don't want to have our SEO efforts divided - will Google algorithms prefer two sites heavily referring traffic, or is it better to focus our efforts on one. I guess that's the crux of the issue. But the other one is - does Google care about traffic accessing a page built for the Facebook app iFrame - does that count toward rankings? Sorry I hope these questions aren't too complex - but we're in the tech world every day and still can't seem to find a good answer to these ones...hence I'm taking to the forums!! Free beer for whoever can give me a solid answer!

    Read the article

  • SQL Saturday 194 - Exeter

    - by Dave Ballantyne
    Many kudos goes to Jonathan and Annette Allen and the others on the team for confirming SQL Saturday 194 in Exeter on the 8th and 9th of March. The event home page is here: http://www.sqlsaturday.com/194/eventhome.aspx, and I'm delighted that myself and Dave Morrison will be presenting a full-day pre-con on the 8th on favourite subjects "TSQL and Internals". Here is the full abstract:

    TSQL and internals - When faced with performance issues there are many lines of attack. Tuning the engine itself can get you so far; however, for maximum effect you need to understand how the engine translates SQL statements into performable actions. This is not a simple task: dealing with a multi-table join is a massive one, and the number of permutations can be immense. Backed by this knowledge, we can create better performing TSQL, understand the impact that it has upon the engine, and recognize the pitfalls and gotchas that exist in SQL Server. Ultimately, there is no "best way" to perform a single task, only many variations of "it depends", but now we can pick the most appropriate option for the required data load. Over the years, many myths and misconceptions have grown around the product; some have a basis in older versions and some are just wrong. Continuing to build on the knowledge given so far, these issues will be explored, broken down and proved or disproved. Finally we will look to the future and explore SQL Server 2012, the new functionality that it brings and some of the common uses that we will be able to address. After completion of this day's pre-con, attendees will have a more complete knowledge of execution plans, and how they relate to the physical and logical actions that SQL Server will be executing on their behalf. The attendees will also have a more rounded and fuller knowledge of TSQL and the implications of incorrectly defining a query.

    Dave is a fountain of knowledge on execution plans and optimizer internals and, though I may flatter myself, I'm no shrinking violet when it comes to TSQL and such matters. I hope that if you can't join us, then there are other pre-cons available from other experts in their fields that may "float your boat" too. The pre-con page is http://sqlsouthwest.co.uk/SQLSaturday_precon.htm. Also, excitingly, this pre-con day is sponsored by Fusion-IO, which is a great boon for the day. If you want more of this then I am offering a 2-day TSQL course starting on the 19th of March. More details on this are available here.

    Read the article

  • OTN Virtual Technology Summit - July 9 - Middleware Track

    - by OTN ArchBeat
    The Architecture of Analytics: Big Time Big Data and Business Intelligence This four-session track, part of the free OTN Virtual Technology Summit on July 9, will present a solution architect's perspective on how business intelligence products in Oracle's Fusion Middleware family and beyond fit into an effective big data architecture, offering insight and expertise from Oracle ACE Directors and product team experts specializing in business Intelligence to help you meet your big data business intelligence challenges. Register now! Sessions Oracle Big Data Appliance Case Study: Using Big Data to Analyze Cancer-Genome Relationships Tom Plunkett, Lead Author of the Oracle Big Data Handbook What does it take to build an award winning Big Data solution? This presentation takes a deep technical dive into the use of the Oracle Big Data Appliance in a project for the National Cancer Institute's Frederick National Laboratory for Cancer Research. The Frederick National Laboratory and the Oracle team won several awards for analyzing relationships between genomes and cancer subtypes with big data, including the 2012 Government Big Data Solutions Award, the 2013 Excellence.Gov Finalist for Innovation, and the 2013 ComputerWorld Honors Laureate for Innovation. [30 mins] Getting Value from Big Data Variety Richard Tomlinson, Director, Product Management, Oracle Big data variety implies big data complexity. Performing analytics on diverse data typically involves mashing up structured, semi-structured and unstructured content. So how can we do this effectively to get real value? How do we relate diverse content so we can start to analyze it? This session looks at how we approach this tricky problem using Endeca Information Discovery. [30 mins] How To Leverage Your Investment In Oracle Business Intelligence Enterprise Edition Within a Big Data Architecture Oracle ACE Director Kevin McGinley More and more organizations are realizing the value Big Data technologies contribute to the return on investment in Analytics. But as an increasing variety of data types reside in different data stores, organizations are finding that a unified Analytics layer can help bridge the divide in modern data architectures. This session will examine how you can enable Oracle Business Intelligence Enterprise Edition (OBIEE) to play a role in a unified Analytics layer and the benefits and use cases for doing so. [30 mins] Oracle Data Integrator 12c As Your Big Data Data Integration Hub Oracle ACE Director Mark Rittman Oracle Data Integrator 12c (ODI12c), as well as being able to integrate and transform data from application and database data sources, also has the ability to load, transform and orchestrate data loads to and from Big Data sources. In this session, we'll look at ODI12c's ability to load data from Hadoop, Hive, NoSQL and file sources, transform that data using Hive and MapReduce processing across the Hadoop cluster, and then bulk-load that data into an Oracle Data Warehouse using Oracle Big Data Connectors. We will also look at how ODI12c enables ETL-offloading to a Hadoop cluster, with some tips and techniques on real-time capture into a Hadoop data reservoir and techniques and limitations when performing ETL on big data sources. [90 mins] Register now!

    Read the article

  • Nvidia driver overscan issue second monitor via dvi-d cable

    - by benmichael
    Ok, I know that I have a bit of a bizarre setup, but here goes. I have an old laptop, an HP Pavilion 6000. The graphics card in there is a GeForce 7150M. The monitor connection is an old 18-pin. The external monitor I use is a Samsung SyncMaster 2333. Don't ask me why, but this monitor only has a DVI-D connection (yes, I have searched it). So I have the monitor plugged into the laptop. If I use any of the Nvidia proprietary drivers and try to set the resolution up to 1920x1080 (the monitor's native resolution), I get a massive overscan issue. Over the years I have tried to get this to work, tinkering with my xorg.conf to death. I have also tried this on every Ubuntu since 10.04, on all the corresponding Lubuntus, and on all the Linux Mints since Lisa. Exact same issue. I have even tried it in WinDoze and it works perfectly there (although I did get the error once, but was unable to reproduce it).

    Using the open source drivers it works perfectly if and only if I switch off the laptop monitor (this makes no difference with the Nvidia drivers). I would have happily gone on using the open source drivers, except that since upgrading to Lubuntu 12.10, the open source drivers make my monitor completely hazy and show the same overscan issue until I (through the haze, only because I know where things are) go to the monitor settings, activate the laptop's monitor, then deactivate it, and suddenly it comes right. I have to do this every time. So I have to find a way to fix one of them, so I may as well tackle the proprietary drivers, hence this overlong question.

    Amidst other things, I have tried nvidia-settings, but because the monitor is connected to an 18-pin port, it detects the monitor as a VGA monitor and does not give me overscan correction options. I have tried custom modelines (although there are always more of those to try), I have tried using xrandr, and I have tried all the FlatPanelOptions. What I have not tried is a Gentoo build, as I no longer have time for that installation, but up to about three years ago when I ran Gentoo exclusively I did not have this issue.

    Below is a link to an image with a red highlight around the portion of the screen visible to me; the numbers around it are the number of pixels which are cut off. This does seem to drift a few pixels every now and then. Thanks in advance. Nvidia driver issue image

    Read the article

  • Oracle Brings Java to iOS Devices (and Android too)

    - by Shay Shmeltzer
    Java developer, did you ever wish that you can take your Java skills and apply them to building applications for iOS mobile devices? Well, now you can! With the new Oracle ADF Mobile solution, Oracle has created a unique technology that allows developers to use the Java language and develop applications that install and run on both iOS and Android mobile devices. The solution is based on a thin native container that installs as part of your application. The container is able to run the same application you develop unchanged on both Android and iOS devices. One part of the container is a headless lightweight JVM based on the Java ME CDC technology. This allows the execution of Java code on your mobile device. Java is used for building business logic, accessing local SQLite encrypted database, and invoking and interacting with remote services. Java concept on the UI too To further help transition Java developers to mobile developers, ADF Mobile borrows familiar concepts from the world of JSF to make the UI development experience simpler. The user interface layer of Oracle ADF Mobile is rendered with HTML5 which delivers native user experience on the devices, including animations and gesture support. Using a set of rich components, developers can create mobile pages without needing to write low level HTML5 and JavaScript code. The components cover everything from simple controls such as text fields, date pickers, buttons and links, to advanced data visualization components such as graphs, gauges and maps, and including unique mobile UI patterns such as lists, and toggle selectors. Want to see the components in action? Access this demo instance from your mobile device. Need to further customize the look and feel? You can use CSS3 to achieve this. A controller layer - similar in functionality to the JSF controller - allows developer to simplify the way they build navigation between pages. The logic behind the pages is written in managed beans with various scopes – again similar to the JSF approach. Need to interact with device features like camera, SMS, Contacts etc? Oracle conveniently packaged access to these services in a set of services that you can just drag and drop into your pages as buttons and links, or code into your managed beans Java calls to activate. Underneath the covers this layer is implemented using the open source phonegap solution. With the new Oracle ADF Mobile solution, transferring your Java skills into the Mobile world has become much easier. Check out this development experience demo. And then go and download JDeveloper and the ADF Mobile extension and try it out on your own. For more on ADF Mobile, see the ADF Mobile OTN page.

    Read the article

  • An Unstoppable Force!

    - by TammyBednar
    Building a high-availability database platform presents unique challenges. Combining servers, storage, networking, OS, firmware, and database is complicated and raises important concerns: Will coordination between multiple SMEs delay deployment? Will it be reliable? Will it scale? Will routine maintenance consume precious IT-staff time? Ultimately, will it work?

    Enter the Oracle Database Appliance, a complete package of software, server, storage, and networking that's engineered for simplicity. It saves time and money by simplifying deployment, maintenance, and support of database workloads. Plus, it's based on Intel Xeon processors to ensure a high level of performance and scalability.

    Take a look at this video to compare Heather and Ted's approach to building a server for their Oracle database! http://www.youtube.com/watch?v=os4RDVclWS8

    If you missed the "Compare Database Platforms: Build vs. Buy" webcast or want to listen again, find out how Jeff Schulte, Vice President at Yodlee, uses the Oracle Database Appliance.

    Read the article

  • Oredev 2012: Summary and source code

    - by Laurent Bugnion
    This week, I had the pleasure of being invited to talk at Oredev, a really cool conference taking place in Malmo, Sweden. The whole event is awesome, including a very special dinner on Monday with a sauna and a swim in the 6-degree-cold Baltic Sea, and a reception with dinner at the town hall, attended by the mayor himself. Considering Malmo is a town of 300'000 inhabitants, it is a pretty nice occasion, and the historical building itself is really worth seeing. For those interested, I placed my pictures on my Flickr account. I had a workshop on Tuesday morning about Windows 8 development with XAML/C#, and then a session on Wednesday about MVVM in Windows Phone 8 and Windows 8, of course using MVVM Light. I was very nervous because I had reworked some of my demos as recently as that morning, in the wake of the Build conference last week and the release of both the Windows Phone SDK and MVVM Light V4.1. Everything went well, however, and judging by the people I talked to after the talk, and by Twitter, it was well received. Before my talk, I had the pleasure of seeing a talk by Iris Classon (@irisclasson) on the challenges of being a "n00b" and a woman in software development. I especially appreciated her research and conclusions on the lack of women in our industry, a topic that is dear to my heart (because I want the best possible future for my two daughters, and also because I really enjoy working with women on projects and getting a different insight on the art of software development). I really want to thank the excellent organization committee for their hard work and their fantastic welcome to Malmo. In particular, Emily Holweck did a wonderful job and was super helpful throughout the preparation and the conference itself. I took a few pictures during my stay, all with the new Nokia Lumia 920, and hope you will enjoy them too. The source code and the slides… The source code is available for download from SkyDrive. You will find the following:
    - Windows 8 workshop slides.
    - MVVM Applied slides.
    - Source code package with Win8Demo: the demo I built during the 4-hour workshop, with some light MVVM, web services (JSON), GridView, design-time data (Blend / Visual Studio designer), Bing Maps integration, the location sensor, and Search pane integration.
    - SemanticZoomSample: a sample I put together to demonstrate the SemanticZoom control, with two GridViews and of course full design-time data for Blend work. Due to time constraints, I was not able to show this demo during the workshop, but I am publishing it anyway, hoping it will be useful to someone.
    - PictureUploader: the demo I built during my 50-minute session about MVVM Applied in Windows Phone 8 and Windows 8. Code sharing, design-time data, and MVVM Light are used in the Windows Phone 8 and Windows 8 apps.
    And in video… You can also see the video of my MVVM talk thanks to the good services of the Oredev team! MVVM Applied in Windows Phone and Windows 8 from Øredev Conference on Vimeo.   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Exalytics OBI11g Partner Training 3-day hands-on Workshops

    - by Mike.Hallett(at)Oracle-BI&EPM
    These hands-on workshops, FREE to OPN Partners, highlight both the hardware and software components that are engineered to work together to deliver Oracle Exalytics - an optimized version of the industry-leading Oracle TimesTen In-Memory Database with analytic extensions, a highly scalable Oracle server designed specifically for in-memory business intelligence, and Oracle's proven Business Intelligence Foundation (OBI 11g v11.1.1.6 and Essbase) with enhanced visualization capabilities and performance optimizations. Priority will be given to Partner individuals who have passed, or are scheduled to take, the Oracle Business Intelligence Foundation Suite 11g Essentials (1Z1-591) exam, and to Partners who have purchased an Exalytics machine for their own data centres to demonstrate it to their clients. Topics covered will include: Exalytics Architectural Overview; Upgrade and Lifecycle Management; TimesTen for Exalytics; Summary Advisor Utility; Essbase and EPM System on Exalytics; Dashboard and Analysis Interactions; OBIEE 11.1.1.6 Features and Advanced Topics. After taking this course, you will be well prepared to architect, build, demo, and implement an end-to-end Exalytics solution. You will also be able to extend your current analytical and enterprise performance management application implementations with numerous Oracle technologies specifically enhanced to take advantage of the compute capacity and in-memory capabilities of Oracle Exalytics.
    Prerequisites:
    - Experience and understanding of OBIEE 11g is required.
    - Previous attendance of the Oracle Business Intelligence Foundation Suite Workshop or BIEE 11g Introduction Workshop is highly recommended; priority will be given to Partner individuals who have passed, or are scheduled to take, the Oracle Business Intelligence Foundation Suite 11g Essentials (1Z1-591) exam.
    - Good understanding of data warehousing and data modelling for reporting and analysis purposes.
    - Strong experience with database technologies preferred.
    Attendees must provide their own laptops, which must meet the following minimum hardware/software requirements.
    Hardware:
    - Minimum 8 GB RAM
    - 60 GB free disk space (includes staging)
    - USB 2.0 port (at least one available)
    - It is strongly recommended that you bring a mouse; you will be working in a development environment and using the mouse heavily.
    Software:
    - One of the following operating systems: 64-bit Windows host/laptop OS, or a 64-bit host/laptop OS with a Windows VM (XP, Server, Win 7, BIC2g, etc.)
    - Internet Explorer 7.x/8.x or Firefox 3.5.x
    - WinRAR or 7-Zip utility to unzip workshop files (downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/)
    - Oracle VirtualBox 4.0.2 or higher (downloadable from http://www.virtualbox.org/wiki/Downloads); CPU virtualization mode needs to be enabled - we will provide guidance on the day of the workshop.
    Attendees will be given a VirtualBox image containing a pre-installed Oracle Exalytics environment. Register here for the 3-day workshops: 11-Dec-12 Birmingham, UK | 29-Jan-13 Utrecht, NL | 12-Feb-13 Frankfurt, Germany | 12-Mar-13 Moscow, Russia

    Read the article

  • Taking a Chomp out of a (Social Network) Product Hype

    - by kellsey.ruppel
    Andrew Kershaw, Senior Director of Oracle Social Network Product Development, speaks about Oracle Social Network. One of our competitors is being very aggressive with its own internally developed social network add-on, but there should be no doubt that the Oracle social capabilities available with Fusion CRM stack up well against it. Within the Oracle Cloud, we have announced a product called Oracle Social Network. That technology is pre-integrated into Fusion Applications, enabling your customers to build a collaborative and social enterprise (without all the noise!). Oracle Social Network is designed together with our Fusion Applications. It is very conveniently pre-integrated with CRM, HCM, Financials, Projects, Supply Chain, and the Fusion family. But what's even better is that individual teams can take a considered approach to what they are trying to achieve within the collaboration process and the outcome they are trying to enable, and then utilize the network and collaboration tools to support that result. And there's more! The Fusion teams can design social interactions that bridge across and outside their individual product lines, because we have more than just a product line and they know they have the social network to connect them. I know we have a superior product, but it is our ability to understand and execute across the enterprise that will enable us to deliver a much more robust and capable platform in the short term than our competitor can. We have built a product specifically designed for enterprise social collaboration, which is not the case for the competition. We have delivered a much more effective solution - one in which individuals can easily collaborate to get results, while being confident that they know who has access to their information. Our platform has been pre-built to cross company boundaries and enable our customers to collaborate not just with their customers, but with their partners and suppliers as well. So Fusion addresses the combination of the enterprise application suite with enterprise collaboration and social networking. Oracle Social Network already has a feature-function advantage over our competitor's tool, providing real added value to employees. Plus, Oracle has the ability to execute in a broad enterprise and cross-enterprise way that our competitors cannot. We have the power of a tool that provides the core social fabric across all of the applications, as well as supporting enterprise collaboration. That allows us to provide intelligent business insight, connections, and recommendations that our competitor simply can't. From our competitors, customers get integration for Sales and integration for Service, but then they have to integrate every other enterprise asset they have by themselves. With Oracle, we are doing the integration. Fusion Applications will be pre-integrated, and over time, all of the applications in the business suite, including our Applications Unlimited and specialist industry applications, will connect to the Oracle Social Network. I'm confident these capabilities make Oracle Social Network the only collaboration platform on which to deliver the social enterprise.

    Read the article

  • Announcing the Next AppAdvantage IT Leader Documentary

    - by Tanu Sood
    Close on the heels of the very successful launch of Oracle BPM 12c, we are excited to announce the next video in the Oracle AppAdvantage IT Leaders Series. As you know, the Oracle AppAdvantage IT Leaders Series profiles a successful executive in an industry-leading organization that has embarked on a business transformation or platform modernization journey by taking full advantage of the Oracle AppAdvantage pace-layered architecture. Tune in on Tuesday, September 9th at 10 am Pacific/1 pm Eastern to catch our IT executive, Regis Louis, in an in-depth conversation with IT executives from Siram S.p.a., a major energy services management company in Italy. The documentary will explore how the company is doing process orchestration and business process management to build inter-department collaboration, maintain data integrity, and offer complete transparency throughout their Request for Proposal (RFP) process. Oracle's technology expert will then do a comprehensive walk-through of Siram's IT model, the various components, and how Oracle Fusion Middleware technologies are enabling their Oracle E-Business Suite processes. Experts will be on hand to answer your questions live. Check out this live documentary webcast and find out how organizations like yours are building business agility by leveraging existing investments in Oracle Applications. Register today. And if you are on Twitter, send your questions to @OracleMiddle with #ITLeader, #AppAdvantage. We look forward to connecting with you on Tue, Sep 9. Siram Achieves Commercial Efficiency with Improved Business Process Agility Date: Tuesday, September 9, 2014 Time: 10:00 AM PT / 1:00 PM ET

    Read the article
