Search Results

Search found 38931 results on 1558 pages for 'database testing'.

Page 572/1558 | < Previous Page | 568 569 570 571 572 573 574 575 576 577 578 579  | Next Page >

  • Why Register for Oracle PartnerNetwork Exchange @ OpenWorld?

    - by Richard Lefebvre
    Oracle PartnerNetwork Exchange @ OpenWorld premieres this year with a dedicated program of keynotes and sessions created to enhance the opportunities for partners to learn from and network with Oracle executives and experts. The new program also provides more informal opportunities than ever throughout the week to meet up with the people who are most important to your business: customers, prospects, and colleagues.
    Program Benefits:
    • Partner Keynote, hosted by Judson Althoff, SVP, WWA&C, Oracle (September 30)
    • 35+ partner-specific sessions
    • Free certification testing
    • Exclusive access to the OPN Lounge
    • All Oracle OpenWorld and JavaOne keynotes
    • Oracle OpenWorld and JavaOne Exhibition Halls
    • Executive Solution sessions
    • Scene and Be Heard Theater
    • Oracle OpenWorld Welcome Reception (September 30)
    • Lunch in the Howard Street Tent (October 1 through October 4)
    • It's A Wrap! closing event (October 4)
    • Oracle OpenWorld conference materials
    TOP 5 REASONS TO ATTEND:
    1. NETWORK WITH YOUR TOP PROSPECTS: access to the 40,000+ customers attending the OpenWorld and JavaOne conferences.
    2. HEAR FROM TOP ORACLE EXECUTIVES: partner keynote led by Judson Althoff, SVP, WWA&C, Oracle.
    3. GET THE TOOLS TO DIFFERENTIATE YOURSELVES FROM YOUR COMPETITORS: 35+ sessions tailored to partners, all held Monday through Thursday during main OpenWorld conference hours and led by key Oracle executives.
    4. FREE CERTIFICATION TESTING.
    5. THE OPENWORLD APPRECIATION PASS CAN BE ADDED ON TO THE OPN EXCHANGE PASS FOR ONLY $200!

    Read the article

  • Entity Framework and Distributed Systems

    - by Dirk Beckmann
    I need some help, or maybe only a hint in the right direction. I've got a system that is separated into two applications: an existing VB.NET desktop client using Entity Framework 5 with a code-first approach, and an ASP.NET Web API client in C# that is being refactored right now. It should be possible to deliver OData. The system and the data model are still evolving, so migrations will happen at undefined intervals. I'm now struggling with how to manage my database access on the Web API system. My favored approach would be to use Entity Framework on both systems, but I'm running into trouble when creating new migrations. Two solutions I've thought about:
    Shared data access DLL: The first idea was to separate the data access layer into its own project and reference it from each of the systems. The context would be the same as long as the DLL is up to date in each system, and this way both solutions would be able to make a migration. The main problem is that it is much more complicated to update a Web API system than it is the client with its ClickOnce update solution, and not every migration is important for the Web API. This would cause more update trouble and out-of-sync libraries.
    Database first on Web API: The second idea was just to use the database-first approach on the Web API side. But it seems that all annotations are lost on each model update.
    Other solutions involving stored procedures have been discarded because of missing OData support and maintainability. Has anyone run into the same conflicts, or does anyone have advice on how such a problem can be solved?
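
    For the first option, a minimal sketch of what the shared data access assembly could look like, assuming EF 5 code first; every name below is hypothetical:

        // Hypothetical sketch: one assembly owns the DbContext and the
        // migrations; both the desktop client and the Web API reference it,
        // so there is a single migration history to keep in sync.
        using System.Data.Entity;

        namespace Shared.DataAccess   // invented name
        {
            public class Customer
            {
                public int Id { get; set; }
                public string Name { get; set; }
            }

            public class AppDataContext : DbContext
            {
                public AppDataContext() : base("name=AppConnection") { }
                public DbSet<Customer> Customers { get; set; }
            }
        }

    Migrations would then be added and applied only from this one project (Add-Migration / Update-Database in the Package Manager Console), so the two consumers can never generate conflicting migration steps.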

    Read the article

  • Overwhelmed by complex C#/ASP.NET project in Visual Studio 2008

    - by Darren Cook
    I have been hired as a junior programmer to work on projects that extend existing functionality in a very large, complex solution. The code base consists of C#, ASP.NET, jQuery, JavaScript, HTML and XML. I have some knowledge of all of these, in addition to fair knowledge of object-oriented programming and its fundamental concepts of inheritance, abstraction, polymorphism and encapsulation. I can follow code up through its base classes, interfaces and abstract classes, and I understand a large part of the code that I read while doing this. However, this solution is so humongous and so many things get tied together whenever I navigate through the code that I feel absolutely overwhelmed. I often find myself unable to fully follow everything that is going on with objects being serialized, large amounts of C# and JavaScript operating on the same pages, and methods being called from template files that consist mainly of markup. I love learning about code, but trying to deal with this really stresses me out. Additionally, I know that a significant amount of unit testing has been done, but I know nothing about unit testing or how to utilize it. Any advice anyone could offer me regarding dealing with a large code base while using Visual Studio 2008 would be greatly appreciated. Are there tools that I can use to help get a handle on what is going on? Perhaps there are things even in Visual Studio that I am not aware of. How can I follow the code to low-level functionality in order to get a better grasp of what is going on at a high level?

    Read the article

  • Fixed Assets Recommended Patch Collections

    - by Cindy A B-Oracle
    After the introduction of the Recommended Patch Collections (RPCs) in late 2012, Fixed Assets development has released an RPC about every six months. You may recall that an RPC is a collection of recommended patches consolidated into a single, downloadable patch, ready to be applied. The RPCs are created with the following goals in mind:
    • Stability: Address issues that occur often and interfere with the normal completion of crucial business processes, such as period close, as observed by Oracle Development and Global Customer Support.
    • Root cause fixes: Deliver root cause fixes for data corruption issues that delay period close, normal transaction flow actions, performance, and other issues.
    • Compact: While bundling a large number of important corrections, the file footprint is kept as small as possible to facilitate uptake and minimize testing.
    • Reliable: Reliable code with multiple customer downloads and comprehensive testing by QA, Support and Proactive Support.
    There has been a revision to the RPC release process for spring 2014. Instead of releasing product-specific RPCs, development has released a 12.1.3 RPC that is EBS-wide. This EBS RPC includes all product-recommended patches along with their dependencies. To find out more about this EBS-wide RPC, please review Oracle E-Business Suite Release 12.1.3+ Recommended Patch Collection 1 (RPC1) (Doc ID 1638535.1).

    Read the article

  • Finishing an iteration early

    - by f1dave
    I'd like some input on this from those working with agile methodologies... A current project is finding that development on our planned user stories finishes some time before the end of the iteration, and that the testing effort and business acceptance are what actually drag us out longer towards the end. This means that the devs in question have spare time, and they're essentially going out to the iteration+1 backlog and starting work on cards there before our current iteration cards are 'done'. As iteration manager, I want to put a stop to this - I want a more team-oriented approach where the group takes ownership of getting all the cards done, as opposed to "Well, dev's done, so what do I dev next?" The problem I face is convincing the team of this. On one hand, I understand why the devs don't want to test the code they've written (there are unit tests they write, of course, but the manual testing to be done could be influenced by their bias). The team sees working ahead as making our next iterations easier, because a lot of the work is done before we start. I see this as screwing with the whole system of planning/actuals - but it's difficult to convince the team as to why this matters. What advice can you give? How do we stop devs reaching ahead? What should they be doing instead? How much of a problem is this in the scheme of things, if things are still getting done?

    Read the article

  • Happy Day! VS2010 SP1, Project Server Integration, Load Test Feature Pack

    - by Aaron Kowall
    Microsoft released a PILE of Visual Studio goodness today:
    • Visual Studio 2010 SP1 (including TFS SP1): Finally done with remembering which GDR packs, KB patches, etc. need to be installed with a new VS/TFS 2010 deployment. Just grab SP1. It's available today for MSDN subscribers and March 10th for public download.
    • TFS-Project Server Integration Feature Pack: MSDN subscribers got another little treat today with the TFS-Project Server integration feature pack. We can now get project rollups and portfolio-level management with Project Server yet still have the tight developer interaction with TFS. Finally we can make the PMO happy without duplicate entry or MS Project gymnastics.
    • Visual Studio Load Test Feature Pack: This is a new benefit for Visual Studio 2010 Ultimate subscribers. Previously there was a limit in Ultimate load testing of 250 virtual users. If you needed more, you had to buy virtual user license packs. No more. Now your Visual Studio Ultimate license allows you to simulate as many virtual users as you need!! This is HUGE in improving adoption of regular load testing for development projects.
    All the details are available from Soma's blog.
    Technorati Tags: VS2010, TFS, Load Test

    Read the article

  • MvcReportViewer v.0.4.0 is available!

    - by Ilya Verbitskiy
    Originally posted on: http://geekswithblogs.net/ilich/archive/2014/06/04/mvcreportviewer-v.0.4.0-is-available.aspx
    Today I released a new version of MvcReportViewer. This release contains mostly bug fixes reported by library users. I am glad to see that the open source model works and people try to contribute to the project! Thank you everybody for your bug reports and help with the project.
    Version 0.4.0:
    • Added support for ASP.NET MVC 5.
    • Removed the jQuery dependency. I have not tested it on IE8 or earlier versions. Any help with testing is welcome!
    • Fixed a problem with SSRS keep-alive cookies. Keep-alive cookies are issued every time a report is opened during a browser session. Many people don't restart their browsers, and in my case Chrome doesn't get rid of the cookie session data on close - I had to manually delete the cookies for the reports to start working again. I added a KeepSessionAlive control setting to manage SSRS keep-alive behavior. It is set to false by default to fix the Bad Request 400: Request Too Long issue. You can find a usage example in Fluent.cshtml.
    • Fixed the bug where ReportViewer control parameters were not parsed when the ShowParameterPrompts parameter had not been set.
    • Changed the public static MvcReportViewerIframe MvcReportViewer method to use IEnumerable<KeyValuePair<string, object>> reportParameters instead of a simple object. The reason is that users reported they mostly use multiple values per report parameter.
    • Added support for SSRS hosted on Windows Azure. Users should set the MvcReportViewer.IsAzureSSRS property to true in Web.config to use Windows Azure authentication. I do not have Windows Azure SSRS and built the code using the http://msdn.microsoft.com/en-us/library/gg552871.aspx#Authentication article. It would be nice if somebody from the community tested the change or provided me with a test report on Windows Azure for testing purposes.
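
    A minimal sketch of the new parameter shape, with hypothetical parameter names; the repeated key is exactly what the IEnumerable<KeyValuePair<string, object>> signature enables:

        using System.Collections.Generic;

        // Hypothetical parameter set; the repeated "Region" key carries
        // multiple values for a single report parameter.
        var reportParameters = new List<KeyValuePair<string, object>>
        {
            new KeyValuePair<string, object>("Year",   2014),
            new KeyValuePair<string, object>("Region", "EMEA"),
            new KeyValuePair<string, object>("Region", "APAC"),
        };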

    Read the article

  • Is there a better way to consume an ASP.NET Web API call in an MVC controller?

    - by davidisawesome
    In a new project I am creating for my work, I am creating a fairly large ASP.NET Web API. The API will be in a separate Visual Studio solution that also contains all of my business logic, database interactions, and Model classes. In the test application I am creating (which is ASP.NET MVC 4), I want to be able to hit an API URL I defined from the controller and cast the returned JSON to a Model class. The reason behind this is that I want to take advantage of strongly typing my views to a Model. This is all still in a proof-of-concept stage, so I have not done any performance testing on it, but I am curious if what I am doing is a good practice, or if I am crazy for even going down this route. Here is the code on the client controller:

        public class HomeController : Controller
        {
            protected string dashboardUrlBase = "http://localhost/webapi/api/StudentDashboard/";

            public ActionResult Index() // This view is strongly typed against User
            {
                // Testing against Joe Bob
                string adSAMName = "jBob";
                WebClient client = new WebClient();
                string url = dashboardUrlBase + "GetUserRecord?userName=" + adSAMName;
                // 'User' is a Model class that I have defined.
                User result = JsonConvert.DeserializeObject<User>(client.DownloadString(url));
                return View(result);
            }
            . . .
        }

    If I choose to go this route, another thing to note is that I am loading several partial views on this page (as I will also do on subsequent pages). The partial views are loaded via an $.ajax call that hits this controller and does basically the same thing as the code above:
    • Instantiate a new WebClient
    • Define the URL to hit
    • Deserialize the result and cast it to a Model class
    So it is possible (and likely) I could be performing the same actions 4-5 times for a single page. Is there a better method to do this that will:
    • Let me keep strongly typed views.
    • Do my work on the server rather than on the client (this is just a preference, since I can write C# faster than I can write JavaScript).
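
    One possible consolidation, sketched under the question's own assumptions (Json.NET available, synchronous calls acceptable); the ApiClient name is invented:

        using System.Net;
        using Newtonsoft.Json;

        // Hypothetical helper that wraps the WebClient/deserialize dance so
        // each partial view's data needs only one line in the controller.
        public static class ApiClient
        {
            public static T Get<T>(string url)
            {
                using (var client = new WebClient())   // also disposes the client, which the original code skips
                {
                    return JsonConvert.DeserializeObject<T>(client.DownloadString(url));
                }
            }
        }

        // Usage in the controller:
        // User result = ApiClient.Get<User>(dashboardUrlBase + "GetUserRecord?userName=" + adSAMName);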

    Read the article

  • HTML5 media loading sometimes suspends or aborts: misconfigured Apache?

    - by Joan Botella
    Recently, some code that had been working fine for months started to behave unexpectedly. That code is just a media-file-loading JavaScript function that uses jQuery. It's pretty long, but in essence it is like this:

        var $audio = $('<audio>');
        $audio.on('canplaythrough', function(e){
            $audio[0].play();
        });
        $audio.attr('src', 'song.ogg');

    Basically, the file only loads sometimes, and sometimes stops loading with a suspend or even an abort event. I have uploaded a little testing HTML page to http://www.joanbotella.com/tests/loading , where you can see what's happening. You can download the test files from http://www.joanbotella.com/tests/loading/loadingTest.zip for local testing. I have just checked that opening the test index.html file directly in Firefox, and not through my localhost Apache server, makes the audio files perfectly playable. So, I assume, my hosting provider and I have the Apache server misconfigured for serving media files. My software versions are: Apache 2.2.22-1ubuntu1.7, Mozilla Firefox 31.0, Chromium 36.0.1985.125 and jQuery 1.11.0. Can you help me? Thanks in advance!

    Read the article

  • Microsoft releases Visual Studio 2010 SP1

    - by brian_ritchie
    Microsoft has been beta testing SP1 since December of last year. Today, it was released to MSDN subscribers and will be available for public download on March 10, 2011. The service pack includes a slew of fixes, and a number of new features:
    • Silverlight 4 support
    • Basic unit testing support for the .NET Framework 3.5
    • Performance Wizard for Silverlight
    • IntelliTrace for 64-bit and SharePoint
    • IIS Express support
    • SQL CE 4 support
    • Razor support
    • HTML5 and CSS3 support (IntelliSense and validation)
    • WCF RIA Services V1 SP1 included
    • Visual Basic runtime embedding
    • ALM improvements
    Of all the improvements, IIS Express probably has the largest impact on web developer productivity. According to Scott Gu, it provides the following:
    • It's lightweight and easy to install (less than 10Mb download and a super quick install)
    • It does not require an administrator account to run/debug applications from Visual Studio
    • It enables a full web-server feature set, including SSL, URL Rewrite, Media Support, and all other IIS 7.x modules
    • It supports and enables the same extensibility model and web.config file settings that IIS 7.x supports
    • It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all)
    • It works on Windows XP and higher operating systems, giving you a full IIS 7.x developer feature-set on all OS platforms
    IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk. It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios.
    Good stuff indeed. This will make our lives much easier. Thanks Microsoft...we're feeling the love!

    Read the article

  • Client-Server MMOG & data structures sync when joining / playing

    - by plang
    After reading a few articles on MMOG architecture, there is still one point on which I cannot find much information: how you keep server data in sync on the client, both when you join and while you play. A pretty vague question, I agree. Let me refine it: let's say we have an MMOG virtual world subdivided into geographical cells. A player in a cell is mostly interested in what happens in the cell itself and all the surrounding cells, not more. When joining the game for the first time, the only thing we can do is send some sort of "database dump" of the interesting cells to the client. When playing, I guess it would be very inefficient to do the same thing regularly. I imagine the best thing to do is to send "deltas" to the client, which would allow keeping the local database in sync. Now let's say the player moves and arrives in another cell. The surrounding cells change, and for all the new cells the player subscribes to, the same technique as used when joining the game has to be used: some sort of "database dump". This mechanic of joining/moving in a cell-based MMOG virtual world interests me, and I was wondering if there are tried and tested techniques in this domain. Thanks!
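
    For illustration, a minimal sketch of that join/move flow, with invented names and a console write standing in for the network layer; a full snapshot goes out only for newly subscribed cells, while deltas cover everything already known:

        using System;
        using System.Collections.Generic;

        // Hypothetical server-side bookkeeping for cell subscriptions.
        class CellSyncServer
        {
            readonly Dictionary<(int x, int y), List<string>> cellState =
                new Dictionary<(int x, int y), List<string>>();

            // A player cares about its own cell plus the eight neighbours.
            static IEnumerable<(int x, int y)> Neighborhood((int x, int y) c)
            {
                for (int dx = -1; dx <= 1; dx++)
                    for (int dy = -1; dy <= 1; dy++)
                        yield return (c.x + dx, c.y + dy);
            }

            public void OnPlayerMoved(PlayerConn p, (int x, int y) newCell)
            {
                var wanted = new HashSet<(int x, int y)>(Neighborhood(newCell));

                // Full "database dump" only for cells the client has not seen yet.
                foreach (var cell in wanted)
                    if (p.Subscribed.Add(cell))
                        p.Send($"SNAPSHOT {cell}: {string.Join(",", StateOf(cell))}");

                // Unsubscribe cells that fell out of range.
                p.Subscribed.RemoveWhere(c => !wanted.Contains(c));
            }

            // While playing, only incremental deltas are pushed for subscribed cells.
            public void BroadcastDelta(IEnumerable<PlayerConn> players,
                                       (int x, int y) changedCell, string delta)
            {
                foreach (var p in players)
                    if (p.Subscribed.Contains(changedCell))
                        p.Send($"DELTA {changedCell}: {delta}");
            }

            List<string> StateOf((int x, int y) cell) =>
                cellState.TryGetValue(cell, out var s) ? s : new List<string>();
        }

        class PlayerConn
        {
            public HashSet<(int x, int y)> Subscribed { get; } = new HashSet<(int x, int y)>();
            public void Send(string message) => Console.WriteLine(message); // network stand-in
        }

    The expensive snapshot happens exactly once per cell subscription; everything afterwards rides on cheap deltas.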

    Read the article

  • PostgreSQL, Ubuntu, NetBeans IDE (Part 1)

    - by Geertjan
    While setting up PostgreSQL from scratch, with the aim of using it in NetBeans IDE, I found the following resources helpful: http://railskey.wordpress.com/2012/05/19/postgresql-installation-in-ubuntu-12-04/ http://ohdevon.wordpress.com/2011/09/17/postgresql-to-netbeans-1/ http://ohdevon.wordpress.com/2011/09/19/postgresql-to-netbeans-2/ For quite a while I had problems relating to "/var/run/postgresql/.s.PGSQL.5432", which had something to do with "postmaster.pid", which I somehow solved via a link I can't find anymore, and which may not have been a problem to begin with. A key moment was this one, which was useful for setting the password of a new user I'd created: http://stackoverflow.com/questions/7695962/postgresql-password-authentication-failed-for-user-postgres This was useful for setting up a table in my database, which I did by pasting the below into NetBeans after I made the connection there: http://use-the-index-luke.com/sql/example-schema/postgresql/where-clause Now I have a database set up with all permissions everywhere (which turned out to be the hard part) correct. The next step will be to create a NetBeans Platform application based on this database. I'm assuming it shouldn't be any different from what's described in the NetBeans Platform CRUD Tutorial.

    Read the article

  • Hidden Gems: Accelerating Oracle Data Integrator with SOA, Groovy, SDK, and XML

    - by Alex Kotopoulis
    On the last day of Oracle OpenWorld, we had a final advanced session on getting the most out of Oracle Data Integrator through the use of various advanced techniques. The primary way to improve your ODI processes is to choose the optimal knowledge modules for your load and take advantage of the optimized tools of your database, such as Oracle Data Pump and similar mechanisms in other databases. Knowledge modules also allow you to customize tasks, allowing you to codify best practices that are consistently applied by all integration developers. The ODI SDK is another very powerful means to automate and speed up your integration development process. It allows you to automate life cycle management, code comparison, repetitive code generation, and changes to your integration projects. The SDK is easily accessible through Java or scripting languages such as Groovy and Jython. Finally, all Oracle Data Integration products provide services that can be integrated into a larger Service Oriented Architecture. This moves data integration from an isolated environment into an agile part of a larger business process environment. All Oracle data integration products can play a part in this: Oracle GoldenGate can integrate into business event streams by processing JMS queues or publishing new events based on database transactions. Oracle Data Integrator allows full control of its runtime sessions through web services, so that integration jobs can become part of business processes. Oracle Data Service Integrator provides a data virtualization layer over your distributed sources, allowing unified reading and updating of heterogeneous data without replicating and moving it. Oracle Enterprise Data Quality provides data quality services to cleanse and deduplicate your records through web services.

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-12

    - by Bob Rhubart
    • 15 Lessons from 15 Years as a Software Architect | Ingo Rammer: In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity and technology that he learned doing software architecture for 15 years.
    • Adding a runtime picker to a taskflow parameter in WebCenter | Yannick Ongena: Oracle ACE Yannick Ongena shows how to create an Oracle WebCenter popup to allow users to "select items or do more complex things."
    • Oracle Identity Manager 11g R2 Catalog | Daniel Gralewski: Oracle Fusion Middleware A-Team blogger Daniel Gralewski shares a detailed overview of the new Catalog feature, one of the most talked-about features in the latest release of Oracle Identity Manager 11g.
    • Cloud API and service designers, stop thinking small | Cloud Computing - InfoWorld: "The focus must shift away from fine-grained APIs that provide some type of primitive service, such as pushing data to a block of storage or perhaps making a request to a cloud-rooted database," says InfoWorld's David Linthicum. "To go beyond primitives, you must understand how these services should be used in a much larger architectural context. In other words, you need to understand how businesses will employ these services to form real workplace solutions -- inside and outside the enterprise."
    • Oracle Solaris 8 P2V with Oracle database 10.2 and ASM | Orgad Kimchi: Orgad Kimchi's technical post illustrates the migration of "a Solaris 8 physical system, with Oracle database version 10.2.0.5 with ASM file-system located on a SAN storage, into a Solaris 8 branded zone inside a Solaris 10 guest domain on top of a Solaris 11 control domain."
    Thought for the Day: "The hardest single part of building a software system is deciding precisely what to build." — Fred Brooks
    Source: SoftwareQuotes.com

    Read the article

  • WCF - Automatically create ServiceHost for multiple services

    - by Rajesh Pillai
    Welcome back, readers! This blog post is about a small tip that may make working with a WCF ServiceHost a bit easier if you have lots of services and you need to quickly host them for testing. Recently I encountered a situation where we needed to create multiple service hosts quickly for testing. Here is the code snippet, which is pretty self-explanatory. You can put this code in your service host, which in this case is a console application:

        class Program
        {
            static void Main(string[] args)
            {
                // Stores all hosts
                List<ServiceHost> hosts = new List<ServiceHost>();
                try
                {
                    // Get the services element from the serviceModel element in the config file
                    var section = ConfigurationManager.GetSection("system.serviceModel/services") as ServicesSection;
                    if (section != null)
                    {
                        foreach (ServiceElement element in section.Services)
                        {
                            // NOTE: If the assembly is in another namespace, provide a fully
                            // qualified name here in the form <typename, namespace>,
                            // e.g. Business.Services.CustomerService, Business.Services
                            var serviceType = Type.GetType(element.Name); // Get the type name
                            var host = new ServiceHost(serviceType);
                            hosts.Add(host); // Add to the host collection
                            host.Open();     // Open the host
                        }
                    }
                    Console.ReadLine();
                }
                catch (Exception e)
                {
                    Console.WriteLine(e.Message);
                    Console.ReadLine();
                }
                finally
                {
                    foreach (ServiceHost host in hosts)
                    {
                        if (host.State == CommunicationState.Opened)
                        {
                            host.Close();
                        }
                        else
                        {
                            host.Abort();
                        }
                    }
                }
            }
        }

    I hope you find this useful. You can make this a Windows service if required.

    Read the article

  • Is a "model" branch a common practice?

    - by dukeofgaming
    I just thought it could be a good thing to have a dedicated version control branch for all database schema changes, and I wanted to know if anyone else is doing the same and what the results have been. Say that you are working with:
    1. Schema model/documentation (some file where you model the database visually to generate the schema source, say MySQL Workbench, with a .mwb file, which is binary)
    2. Schema source (a .sql file)
    3. Schema-based code generation
    The normal way we were working was with feature branches, so we would do changes to the model files (the database-specific ones), and then have to regenerate points 2 and 3, dealing with the possible conflicts (or even code rewriting). Now say that your workflow goes the same way as the previous item numbering. With a model branch you wouldn't have to reconcile the schema model with binaries in other feature branches, or have to regenerate the schema source and regenerate code (which might have human code on top of it). It makes so much sense to me that it feels weird not having seen this earlier as a common practice.
    Edit: I'm counting on branch merges to be the assertions for the model matching the code. I use a DVCS, so I don't fear long-lived branches or scary-looking merges. I'm also doing feature branching.

    Read the article

  • Update to SQL Server Configuration Scripting Utility

    - by Bill Graziano
    Last spring I released a utility to script SQL Server configuration information on CodePlex.  I’ve been making small changes in this application as my needs have changed.  The application is a .NET 2.0 console application.  This utility serves two needs for me.  First it helps with disaster recovery.  All server level objects (logins, jobs, linked servers, audits) are scripted to a single file per object type.  This enables the scripts to be easily run against a DR server.  If these are checked into source control you can view the history of the script and find out what changed and when. The second goal is to capture what changed inside a database.  Objects inside a database (tables, stored procedures, views, etc.) are each scripted to their own file.  This makes it easier to track the changes to an object over time.  This does include permissions and role membership so you can capture security changes.  My assumption is that a database backup is the primary method of disaster recovery for databases so this utility is designed to capture changes to objects.  You can find the full list of changes from the original on the Downloads page on CodePlex.

    Read the article

  • Boolean checks with a single quadtree, or multiple quadtrees?

    - by Djentleman
    I'm currently developing a 2D sidescrolling shooter game for PC (think metroidvania, but with a lot more happening at once), using XNA. I'm utilising quadtrees for my spatial partitioning system. All objects will be encompassed by standard bounding geometry (box or sphere), with possible pixel-perfect collision detection implemented after geometry collision (depends on how optimised I can get it). These are my collision scenarios, with <> representing object overlap (multiplayer co-op is the reason for the player<>player scenario). True means a collision occurs:
    Player <> Player = false
    Enemy <> Enemy = false
    Player <> Enemy = true
    PlayerBullet <> Enemy = true
    PlayerBullet <> Player = false
    PlayerBullet <> EnemyBullet = true
    PlayerBullet <> PlayerBullet = false
    EnemyBullet <> Player = true
    EnemyBullet <> Enemy = false
    EnemyBullet <> EnemyBullet = false
    Player <> Environment = true
    Enemy <> Environment = true
    PlayerBullet <> Environment = true
    EnemyBullet <> Environment = true
    Going off this information, and the fact that there will likely be several hundred objects rendering on-screen at any given time, my question is as follows. Which method is likely to be the most efficient/optimised, and why:
    1. Using a single quadtree with boolean checks for collision between the different types of objects.
    2. Using three quadtrees at once (player, enemy, environment), only testing the player and enemy trees against each other while testing both the player and enemy trees against the environment tree.
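
    Whichever tree layout wins, the boolean filter itself can stay cheap. A sketch, assuming a single quadtree, of encoding the scenario table above as a flags matrix consulted for every candidate pair the tree returns (all names invented):

        using System;
        using System.Collections.Generic;

        static class CollisionMatrix
        {
            [Flags]
            public enum Kind
            {
                Player = 1, Enemy = 2, PlayerBullet = 4, EnemyBullet = 8, Environment = 16
            }

            // One entry per row of the scenario table above.
            static readonly Dictionary<Kind, Kind> collidesWith = new Dictionary<Kind, Kind>
            {
                { Kind.Player,       Kind.Enemy | Kind.EnemyBullet | Kind.Environment },
                { Kind.Enemy,        Kind.Player | Kind.PlayerBullet | Kind.Environment },
                { Kind.PlayerBullet, Kind.Enemy | Kind.EnemyBullet | Kind.Environment },
                { Kind.EnemyBullet,  Kind.Player | Kind.PlayerBullet | Kind.Environment },
                { Kind.Environment,  Kind.Player | Kind.Enemy | Kind.PlayerBullet | Kind.EnemyBullet },
            };

            // Called for each candidate pair produced by the quadtree query.
            public static bool ShouldTest(Kind a, Kind b) => (collidesWith[a] & b) != 0;
        }

    With a filter this cheap, the single-tree option adds only a bitmask test per candidate pair.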

    Read the article

  • FILESTREAM in SQL Server 2008 R2

    - by CatherineRussell
    Much data is unstructured, such as text documents, images, and videos. This unstructured data is often stored outside the database, separate from its structured data. This separation can cause data management complexities. Or, if the data is associated with structured storage, the file streaming capabilities and performance can be limited. FILESTREAM integrates the SQL Server Database Engine with an NTFS file system by storing varbinary(max) binary large object (BLOB) data as files on the file system. Transact-SQL statements can insert, update, query, search, and back up FILESTREAM data. Win32 file system interfaces provide streaming access to the data. FILESTREAM uses the NT system cache for caching file data. This helps reduce any effect that FILESTREAM data might have on Database Engine performance. The SQL Server buffer pool is not used; therefore, this memory is available for query processing. FILESTREAM data is not encrypted even when transparent data encryption is enabled. To read more, go to: http://technet.microsoft.com/en-us/library/bb933993.aspx
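
    As a rough illustration of that Win32 streaming access, a hedged C# sketch using SqlFileStream; the Documents table, Content column, and connection string are hypothetical, and FILESTREAM access requires an open transaction as shown:

        using System;
        using System.Data.SqlClient;
        using System.Data.SqlTypes;
        using System.IO;

        class FileStreamReadDemo
        {
            static void Main()
            {
                // Hypothetical connection string and schema.
                using (var conn = new SqlConnection(@"Data Source=.;Initial Catalog=Docs;Integrated Security=True"))
                {
                    conn.Open();
                    // FILESTREAM access must happen inside a transaction.
                    using (var tx = conn.BeginTransaction())
                    {
                        var cmd = conn.CreateCommand();
                        cmd.Transaction = tx;
                        cmd.CommandText =
                            "SELECT Content.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
                            "FROM Documents WHERE Id = 1";

                        string path;
                        byte[] txContext;
                        using (var reader = cmd.ExecuteReader())
                        {
                            if (!reader.Read()) return;
                            path = reader.GetString(0);
                            txContext = reader.GetSqlBytes(1).Buffer;
                        }

                        // Stream the BLOB through the NTFS file system handle.
                        using (var blob = new SqlFileStream(path, txContext, FileAccess.Read))
                        using (var sr = new StreamReader(blob))
                        {
                            Console.WriteLine(sr.ReadToEnd());
                        }
                        tx.Commit();
                    }
                }
            }
        }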

    Read the article

  • Displaying a Paged Grid of Data in ASP.NET MVC

    This article demonstrates how to display a paged grid of data in an ASP.NET MVC application and builds upon the work done in two earlier articles: Displaying a Grid of Data in ASP.NET MVC and Sorting a Grid of Data in ASP.NET MVC. Displaying a Grid of Data in ASP.NET MVC started with creating a new ASP.NET MVC application in Visual Studio, then added the Northwind database to the project and showed how to use Microsoft's Linq-to-SQL tool to access data from the database. The article then looked at creating a Controller and View for displaying a list of product information (the Model). Sorting a Grid of Data in ASP.NET MVC enhanced the application by adding a view-specific Model (ProductGridModel) that provided the View with the sorted collection of products to display along with sort-related information, such as the name of the database column the products were sorted by and whether the products were sorted in ascending or descending order. The Sorting a Grid of Data in ASP.NET MVC article also walked through creating a partial view to render the grid's header row so that each column header was a link that, when clicked, sorted the grid by that column. In this article we enhance the view-specific Model (ProductGridModel) to include paging-related information: the current page being viewed, how many records to show per page, and how many total records are being paged through. Next, we create an action in the Controller that efficiently retrieves the appropriate subset of records to display, and then complete the exercise by building a View that displays the subset of records and includes a paging interface that allows the user to step to the next or previous page, or to jump to a particular page number; to render that numeric paging interface we create and use a partial view. Like its predecessors, this article offers step-by-step instructions and includes a complete, working demo available for download at the end of the article. Read on to learn more!
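
    A hedged sketch of the paging-related pieces described above; Product and NorthwindDataContext come from the article's Linq-to-SQL setup, while the action and property names here are assumptions:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;

        // Paging-related additions to the view-specific Model.
        public class ProductGridModel
        {
            public int CurrentPage { get; set; }
            public int PageSize { get; set; }
            public int TotalRecordCount { get; set; }
            public int PageCount
            {
                get { return (int)Math.Ceiling((double)TotalRecordCount / PageSize); }
            }
            public IEnumerable<Product> Products { get; set; }
        }

        public class ProductsController : Controller
        {
            public ActionResult Paged(int page = 1, int pageSize = 10)
            {
                var db = new NorthwindDataContext();
                var model = new ProductGridModel
                {
                    CurrentPage = page,
                    PageSize = pageSize,
                    TotalRecordCount = db.Products.Count(),
                    // Skip/Take so only the requested subset is pulled from the database.
                    Products = db.Products
                                 .OrderBy(p => p.ProductName)
                                 .Skip((page - 1) * pageSize)
                                 .Take(pageSize)
                                 .ToList()
                };
                return View(model);
            }
        }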

    Read the article

  • SOA Suite 11g Purging Guide

    - by ShawnBailey
    We now have a single source of truth concerning purging in My Oracle Support. The material is contained within the SOA 11g Infrastructure Database: Installation, Maintenance and Administration Guide under the 'Purging' tab. All of the previous purge-related content for 11g is now deprecated; many of the documents will redirect to this Guide, while others simply contain a disclaimer.
    What does the Guide contain?
    • Summary overview of purging: what it does and why it's important
    • Specific information on each release of 11g
    • Available patches for each release of 11g, and recommendations
    • How to run the different purge scripts
    • Tips on improving performance
    • How to begin troubleshooting problems with the process
    • How to identify orphaned records
    • Useful reference information
    To navigate there, open the Guide landing page and select the 'Purging' tab. The left menu contains the following options:
    • Alternative: Database Partitioning
    • What to do on 11gR1 GA (11.1.1.1)
    • What to do on PS1 (11.1.1.2)
    • What to do on PS2 (11.1.1.3)
    • What to do on PS3 (11.1.1.4)
    • What to do on PS4 (11.1.1.5)
    • Overview of PS5 (11.1.1.6)
    • Purging Step by Step
    • Performance Tips
    • Troubleshooting
    • Purge Orphaned Records
    • Reference
    This resource goes hand in hand with the excellent documents SOA 11g Database Growth Management Strategy and Start Small, Grow Fast, available on OTN. The latest product documentation can be found here.

    Read the article

  • Oracle as a Service in the Cloud

    - by Jason Williamson
    This should really be a Tweet, but I guess I'm writing a bit more. In the theme of data migration and legacy modernization, I am seeing more and more of a push to consolidate data sources, especially from non-Oracle to Oracle, in an effort to save dollars. From a modernization perspective, this fits in quite well. We are able to migrate things like Teradata, Sybase and DB2, put all of that into an Oracle database, and then provide it as an OaaS (Oracle as a Service) cloud offering. This seems to be a growing trend, not unlike the cool RDS Amazon database cloud being built on Oracle as well. We again find ourselves back in the familiar theme of migration, however. The target technology sounds like a winner (COBOL to Java/SOA... DB2/Datacom/Adabas to Oracle), but the age-old problem of how to get there still persists. It is not trivial to migrate large amounts of pre-relational or "devolved" relational data. To do this, we again must revert to a tight roadmap for migration and leverage the growing tools and services that we have. I'm working on a couple of new sections and chapters for a book that Tom and Prakash and I are writing around database migration and consolidation. I'll share some snippets shortly.

    Read the article

  • Should Developers Perform All Tasks or Should They Specialize?

    - by Bob Horn
    Disclaimer: The intent of this question isn't to discern what is better for the individual developer, but for the system as a whole. I've worked in environments where small teams managed certain areas. For example, there would be a small team for every one of these functions:
    • UI
    • Framework code
    • Business/application logic
    • Database
    I've also worked on teams where the developers were responsible for all of these areas and more (QA, analyst, etc.). My current environment promotes agile development (specifically scrum), and everyone has their hands in every area mentioned above. While there are pros and cons to each approach, I'd be curious to know if there are more pros and cons than I list below, and also what the general feeling is about which approach is better.
    Devs Do It All
    Pros:
    1. Developers may be more well-rounded
    2. Developers know more of the system
    Cons:
    1. Everyone has their hands in all areas, increasing the probability of creating less-than-optimal results in that area
    2. It can take longer to do something with which you are unfamiliar (jack of all trades, master of none)
    Devs Specialize
    Pros:
    1. Developers can create policies and procedures for their area of expertise and more easily enforce them
    2. Developers have more of a chance to become deeply knowledgeable about their specific area and make it the best it can be
    3. Other developers don't cross boundaries and degrade another area
    Cons:
    1. As one colleague put it: "Why would you want to pigeon-hole yourself like that?" (Meaning some developers won't get a chance to work in certain areas.)
    It's easy to say how wonderful agile is, and that we should do it all, but I'm somewhat of a fan of having areas of expertise. Without that expertise, I've seen code degrade, database schemas become difficult to manage, hacky UI code, etc. Let's face it, some people make careers out of doing just UI work, or just database work. It's not that easy to just fill in and do as good of a job as an expert in that area.

    Read the article

  • Maintaining Two Separate Software Versions From the Same Codebase in Version Control

    - by Joseph
    Let's say that I am writing two different versions of the same software/program/app/script and storing them under version control. The first version is a free "Basic" version, while the second is a paid "Premium" version that takes the codebase of the free version and expands upon it with a few extra value-added features. Any new patches, fixes, or features need to find their way into both versions. I am currently considering using master and develop branches for the main codebase (the free version) alongside master-premium and develop-premium branches for the paid version. When a change is made to the free version and merged to the master branch (after thorough testing on develop, of course), it gets copied over to the develop-premium branch via the cherry-pick command for more testing and is then merged into master-premium. Is this the best workflow to handle this situation? Are there any potential problems, caveats, or pitfalls to be aware of? Is there a better branching strategy than what I have already come up with? Your feedback is highly appreciated! P.S. This is for a PHP script stored in Git, but the answers should apply to any language or VCS.

    Read the article

  • Java Components Landing Page and Documentation Updates

    - by joni g.
    The new Java Components page provides access to the documentation for tools that are available for monitoring, managing, and testing Java applications. Documentation for the new versions of the following tools is available:
    • JavaTest Harness 4.6: The JavaTest harness is a general purpose, fully featured, flexible, and configurable test harness that is suited for most types of unit testing. See the JavaTest tab for documentation.
    • SigTest 3.1: SigTest is a collection of tools that can be used to compare APIs and to measure the test coverage of an API. See the SigTest tab for documentation.
    The following tools are part of Oracle Java SE Advanced and Oracle Java SE Suite:
    • Java Mission Control and Java Flight Recorder 5.4, supported in JDK 8u20: Java Flight Recorder and Java Mission Control together create a complete tool chain to continuously collect low-level and detailed runtime information, enabling after-the-fact incident analysis. See the JMC tab for documentation.
    • Advanced Management Console 1.0, a new tool that is now available: AMC can be used to view information about the Java applets and Java Web Start applications running in your enterprise, and to create deployment rules and rule sets to manage the execution of these applications. See the AMC tab for documentation.
    • Usage Tracker: Tracks how Java Runtime Environments (JREs) are being used in your systems. See the Usage Tracker tab for documentation.

    Read the article
