Search Results

Search found 2818 results on 113 pages for 'corporate procedures'.

Page 41/113 | < Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >

  • Identity R2 - Experts Podcast Series

    - by Tanu Sood
    To follow up on the Identity Management R2 launch, a series of podcasts was recorded with subject matter experts from customer organizations, our partners, and Oracle’s PM team to discuss key trends, R2 capabilities, implementation best practices and more. Below is a roll-up of the podcast series, which is available on Fusion Middleware radio. R2 Podcasts:

    · Designing the Next-Generation Identity Platform (Vadim Lander, Oracle). Highlights: common architecture model, integration, interoperability, and the driving factors behind R2 innovation. IT departments are shifting their Identity Management strategy to support mobile, cloud and social applications. Oracle has anticipated this shift and has built a product roadmap to take advantage of this focus. Join Vadim as he discusses the design strategy behind the latest 11gR2 release and talks about how IDM services have to evolve to meet this new challenge.

    · BETA Customer Perspective on R2 (Ravi Meduri, Kaiser Permanente). Highlights: R2 scalability and high availability. In this podcast Ravi discusses the new features in 11gR2 that he is most interested in, including High Availability options for Access Management, multi-datacenter architecture, and what it was like working with the Oracle product team during the BETA program.

    · Partner Perspective on R2 (Rex Thexton, PricewaterhouseCoopers). Highlights: usability enhancements for users and administrators. A lot of new usability features went into the 11gR2 release, making this the most business-friendly IDM release to date. In this podcast Rex Thexton, Managing Director from PwC, talks about some of the new UI changes for both end users and administrators, and also about the new connector creation framework.

    · Access Request Updates in R2 (Marc Boroditsky, Oracle). Highlights: access request user interface innovations. A lot of changes have been made to the Access Request user interface in the latest version of Oracle Identity Manager 11gR2. A real focus has been put on making the request process more business-user friendly, and a lot of new customization capability has been added for the IT administrators. Hear Marc discuss the updated UI and explain how administrators will be able to customize OIM to meet their company's requirements.

    · Oracle Optimized System for Oracle Unified Directory (OOS4OUD) (Nick Kloski, Oracle). Highlights: new Optimized System configuration for Unified Directory. One of the new features in 11gR2 is the availability of an Optimized System configuration for Oracle Unified Directory. Oracle engineers installed the OUD software onto off-the-shelf hardware and then created a performance-tuned configuration. Join us as we talk to Nick Kloski, Infrastructure Solutions Manager, all about the testing process and the resulting performance metrics.

    · Privileged Account Management (Mark Wilcox, Oracle). Highlights: Oracle Privileged Account Manager key capabilities, use cases. The new release of Oracle Identity Management 11g R2 includes the capability to manage privileged accounts. Privileged accounts, if compromised, create a risk for fraud in the enterprise, and as a result controlling access to privileged accounts is critical. Hear what Mark Wilcox, Principal Product Manager of Oracle Privileged Account Manager, has to say about the capabilities of the offering in this podcast.

    · Browser-based User Interface (UI) Customization (Clayton Donley, Oracle). Highlights: benefits of the Durable UI Configuration framework. Business users need user interfaces that are not only friendly but also easily customizable. However, the downside of any customization project is the cost and complexity involved in developing, testing, deploying and managing custom code. In this podcast, we examine how a new capability in Oracle Identity Management around browser-based UI customization can reduce the costs and complexity of customization while simplifying self-service integration with corporate portal strategies.

    · Simplifying Mobile and Social Sign-On (Dan Killmer, Oracle). Highlights: secure mobile sign-on and consumption of social identities with Oracle Access Management. The proliferation of mobile devices has spurred a new trend where employees tend to bring their own mobile devices to work and access corporate applications the same way they would from a desktop or laptop. In this podcast, we examine how Oracle's latest innovation in Identity Management around Mobile and Social Sign On can simplify the security and access management challenges posed by the widespread adoption of mobile devices in the enterprise.

    · Enabling Your Business with IDM R2 (Scott Bonnell, Oracle). Highlights: self service, mobile access, personalization. Gone are the days when Identity Management was just about stopping unauthorized users in their tracks. Identity Management, if done right, can also enable your business. Join Scott Bonnell as he discusses how the IDM 11gR2 release enables the enterprise by providing self service, personalization and mobile access to corporate resources.

    Read the article

  • Source Control and SQL Development – Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been specifically focusing on the latest version of SQL Source Control by Red Gate Software.  But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio.  “So, how does that work?” you may wonder.  Well, let me share some of the details of how we do it where I work…

    The key to this approach is that everything is done via Transact-SQL script files: either natively written T-SQL, or generated.  My preference is to write all my code by hand, which forces you to become better at your SQL syntax.  But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards, and store those in your source control system.  You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer and choosing Tasks, Generate Scripts.  You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production and you need to make just a simple change, such as adding a new column or index.  In this case, you can use the GUI to make the table changes, and then instead of clicking the Save button, click the Generate Change Script button.  Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change.  I believe that it is important to actually execute the script rather than just click the Save button, because this is your first test that your change script is working and you didn’t somehow lose a portion of the change.  As you can imagine, all this generating of scripts can get tedious, and it is tempting to skip it entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code; then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program.

    So, now that you have all of these script files, what do you do with them?  Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system.  ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution.  Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash or corrupt anything; but these scripts are really intended to be run only once in a database.  Once you have your initial set of scripts loaded into source control, making a change such as altering a stored procedure becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control.  Of course, this is where the lack of source control integration within SSMS becomes an irritation, because it means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in.
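
    For illustration, a change script with that kind of qualifying logic might look like the sketch below, so that accidentally re-running it is harmless; the table, column, and index names are invented and not taken from the article:

        IF NOT EXISTS (SELECT 1 FROM sys.columns
                       WHERE object_id = OBJECT_ID(N'dbo.Orders') AND name = N'ShippedDate')
        BEGIN
            -- One-time change: add the new column only if it is not already there
            ALTER TABLE dbo.Orders ADD ShippedDate DATETIME NULL;
        END
        GO
        IF NOT EXISTS (SELECT 1 FROM sys.indexes
                       WHERE object_id = OBJECT_ID(N'dbo.Orders') AND name = N'IX_Orders_ShippedDate')
        BEGIN
            -- Same idea for the supporting index
            CREATE INDEX IX_Orders_ShippedDate ON dbo.Orders (ShippedDate);
        END
        GO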
    And when you have 800+ procedures like we do, it can be quite tedious to locate the procedure I want to change in source control, check it out, locate the script file in my working folder, open it in SSMS, make the change, save it, and then go back to source control to check it in.  Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide.  It is worth the effort, and this is how I have been doing development for the last several years.  Remember that everything that SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing.  And now I have shown you how you can do it all without spending any extra money.  You already have source control, or can get free, open-source source control systems (which almost seems like an oxymoron, doesn’t it?), and of course Management Studio is free with your SQL Server database engine software.  So, whether you spend the money on tools to make it easier or not, you now have no excuse for not using source control with your SQL development.

    * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS…DROP… at the top, followed by the CREATE PROCEDURE… section, and that followed by a section that assigns permissions.  This allows me to run the same script regardless of whether the procedure previously existed in the database.  If the script were only an ALTER PROCEDURE, it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it in if it did not exist.  There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD.  If you do, then you have broken the integrity of your deployment process, because what you deployed to PROD was not exactly the same as what was tested in TEST, so you have effectively released untested code into PROD.
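
    To make the footnote concrete, here is a minimal sketch of that drop/create/grant layout; the procedure and role names are hypothetical, not from the article:

        IF EXISTS (SELECT 1 FROM sys.objects
                   WHERE object_id = OBJECT_ID(N'dbo.usp_GetCustomerOrders') AND type = N'P')
            DROP PROCEDURE dbo.usp_GetCustomerOrders;
        GO
        CREATE PROCEDURE dbo.usp_GetCustomerOrders
            @CustomerId INT
        AS
        BEGIN
            SELECT OrderId, OrderDate
            FROM   dbo.Orders
            WHERE  CustomerId = @CustomerId;
        END
        GO
        -- Permissions section, so the same script is repeatable in every environment
        GRANT EXECUTE ON dbo.usp_GetCustomerOrders TO AppRole;
        GO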

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #004

    - by pinaldave
    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes. Let me know which one of the following is your favorite article from memory lane.

    2006 – Auto Generate Script to Delete Deprecated Fields in Current Database. Early in my career, every time I had to drop a column I had a hard time doing it, because I was scared the column might still be needed somewhere in the code. Due to this fear I never dropped any column; I just renamed it. If the column I renamed was needed afterwards, it was very easy to rename it back again. However, it is not recommended to keep a deleted column renamed in the database. At regular intervals I used to drop the columns which were prefixed with a specific word. This script is 6 years old but still works. Give it a look; I am open to improvements.

    2007 – Shrinking Truncate Log File – Log Full – Part 2. Shrinking a database or MDF file is indeed a bad thing and it creates lots of problems. However, once in a while there is a legitimate requirement to shrink the log file – a very rare one. On that rare occasion, shrinking or truncating the log file may be the only solution. However, one should make sure to take a backup before and after the truncate or shrink, as in case of a disaster they can be very useful. Remember that truncating the log file will break the log chain, which can create major issues during a restore. Anyway, use this feature with caution.

    2008 – Simple Use of Cursor to Print All Stored Procedures of Database Including Schema. This is a very interesting requirement I used to face in my early career days: I needed to print all the stored procedures of my database. Interestingly enough, I had written a cursor to do so. Today when I look back at this stored procedure, I believe there would be a much cleaner way to do the same task; however, I still use this SP quite often when I have to document all the stored procedures of my database. Interesting Observation about Order of Resultset without ORDER BY. In industry, many developers avoid using the ORDER BY clause to display results in a particular order, thinking that an index enforces the order. In this interesting example, I demonstrate that without using ORDER BY, the same table and a similar query can return different results. The query optimizer returns results using whichever method is optimized for performance. The lesson is: there is no order unless ORDER BY is used.

    2009 – Size of Index Table – A Puzzle to Find Index Size for Each Index on Table. I asked this puzzle earlier, where I asked how to find the index size for each of the tables. The puzzle was very well received and lots of interesting answers were received. To answer this question I have written the following blog posts. I suggest this weekend you try to solve this problem and see if you can come up with a better solution. If not, well, here are the solutions. Solution 1 | Solution 2 | Solution 3. Understanding Table Hints with Examples. Hints are options and strong suggestions specified for enforcement by the SQL Server query processor on DML statements. The hints override any execution plan the query optimizer might select for a query. The SQL Server query optimizer is a very smart tool and it makes a better selection of execution plan.
    Suggesting hints to the query optimizer should be attempted only when absolutely necessary, and only by experienced developers who know exactly what they are doing (or in development, as a way to experiment and learn). Interesting Observation – TOP 100 PERCENT and ORDER BY. I have seen developers and DBAs using TOP very casually when they have to use the ORDER BY clause. Theoretically, there is no need for ORDER BY in a view at all. All the ordering should be done outside the view, and the view should just have the SELECT statement in it. It was quite common to save this extra typing by including the ordering inside the view. In several instances developers wanted a complete resultset, and for that they included TOP 100 PERCENT along with ORDER BY, assuming that this would simulate a SELECT statement with ORDER BY.

    2010 – SQLPASS Nov 8-11, 2010 – Seattle – An Alternative Look at Experience. In 2010 I attended the most prestigious SQL Server event, SQLPASS, between Nov 8-11, 2010 in Seattle. I have only one expression for the event: Best Summit Ever. Instead of writing about my usual routine or the event, I wrote about the interesting things I did and how I felt about them! When I go back and read it, I feel that this was the best event I attended in 2010. Change Database Access to Single User Mode Using SSMS. The image says it all.

    2011 – SQL Server 2012 has introduced new analytic functions. These functions were long awaited and I am glad that they are now here. Before, when any of these functions was needed, people used to write long T-SQL code to simulate them. But now there is no need to do so. Having native functions available also helps performance as well as readability. The functions (each linked in the original article to the corresponding SQLAuthority post and MSDN reference) are: CUME_DIST, FIRST_VALUE, LAST_VALUE, LEAD, LAG, PERCENTILE_CONT, PERCENTILE_DISC, and PERCENT_RANK.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
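
    As a quick illustration of those analytic functions, a minimal sketch follows; the dbo.QuarterlyRevenue table and its columns are invented for the example and are not from the original article (SQL Server 2012 or later required):

        SELECT  [Year],
                [Quarter],
                Revenue,
                LAG(Revenue)  OVER (ORDER BY [Year], [Quarter]) AS PreviousQuarterRevenue,
                LEAD(Revenue) OVER (ORDER BY [Year], [Quarter]) AS NextQuarterRevenue,
                CUME_DIST()   OVER (ORDER BY Revenue)           AS RevenueCumulativeDistribution
        FROM    dbo.QuarterlyRevenue;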

    Read the article

  • 7 Steps To Cut Recruiting Costs & Drive Exceptional Business Results

    - by Oracle Accelerate for Midsize Companies
    By Steve Viarengo, Vice President Product Management, Oracle Taleo Cloud Services

    In good times, trimming operational costs is an ongoing goal. In tough times, it’s a necessity. In both good times and bad, however, recruiting occurs. Growth increases headcount in good times, and opportunistic or replacement hiring occurs in slow business cycles. By employing creative recruiting strategies in tandem with the latest technology developments, you can reduce recruiting costs while driving exceptional business results. Here are some critical areas to focus on.

    1. Target Direct Cost Savings. Total recruiting process expenses are the sum of external costs plus internal labor costs. Most organizations can reduce recruiting expenses with direct cost savings. While additional savings on indirect costs can be realized from process improvement and efficiency gains, there are direct cost savings and benefits readily available in three broad areas: sourcing, assessments, and green recruiting.

    2. Sourcing: Reduce Agency Costs. Agency search firm fees can amount to 35 percent of a new employee’s annual base salary. Typically taken from the hiring department budget, these fees may not be visible to HR. By relying on internal mobility programs, referrals, candidate pipelines, and corporate career Websites, organizations can reduce or eliminate this agency spend. And when you do have to pay third-party agency fees, you can optimize the value you receive by collaborating with agencies to identify referred candidates, ensure access to candidate data and history, and receive automatic notifications and correspondence.

    3. Sourcing: Reduce Advertising Costs. You can realize significant cost reductions by placing all job positions on your corporate career Website. This will allow you to reap a substantial number of candidates at minimal cost compared to job boards and other sourcing options.

    4. Sourcing: Internal Talent Pool. Internal talent pools provide a way to reduce sourcing and advertising costs while delivering improved productivity and retention. Internal redeployment reduces costs and ramp-up time while increasing retention and employee satisfaction.

    5. Sourcing: External Talent Pool. Strategic recruiting requires identifying and matching people with a given set of skills to a particular job while efficiently allocating sourcing expenditures. By using an e-recruiting system (which drives external talent pool management) with a candidate relationship database, you can automate prescreening and candidate matching while communicating with targeted candidates. Candidate relationship management can lower sourcing costs by marketing new job opportunities to candidates sourced in the past. By mining the talent pool in this fashion, you eliminate the need to source a new pool of candidates for each new requisition. Managing and mining the corporate candidate database can reduce the sourcing cost per candidate by as much as 50 percent.

    6. Assessments: Reduce Turnover Costs. By taking advantage of assessments during the recruitment process, you can achieve a range of benefits, including better productivity, superior candidate performance, and lower turnover (providing considerable savings). Assessments also save recruiter and hiring manager time by focusing on a short list of qualified candidates. Hired for fit, such candidates tend to stay with the organization and produce quality work, ultimately driving revenue.

    7. Green Recruiting: Reduce Paper and Processing Costs. You can reduce recruiting costs by automating the process and making it green. A paperless process informs candidates that you’re dedicated to green recruiting. It also leads to direct cost savings. E-recruiting reduces energy use and pollution associated with manufacturing, transporting, and recycling paper products. And process automation saves energy in mailing, storage, handling, filing, and reporting tasks. Direct cost savings come from reduced paperwork related to résumés, advertising, and onboarding.

    Improving the recruiting process through sourcing, assessments, and green recruiting not only saves costs. It also positions the company to improve the talent base during the recession while retaining the ability to grow appropriately in recovery.

    Read the article

  • Building a Repository Pattern against an EF 5 EDMX Model - Part 1

    - by Juan
    I am part of a year-plus project that is re-writing an existing application for a client.  We have decided to develop the project using Visual Studio 2012 and .NET 4.5.  The project will be using a number of technologies and patterns, including Entity Framework 5, WCF services, and WPF for the client UI. This is my attempt at documenting some of the successes and failures that I will be coming across in the development of the application.

    In building the data access layer we have to access a database that has already been designed by a dedicated DBA. The DBA insists on using stored procedures, which has made the use of EF a little more difficult.  He will not allow direct table access, but we did manage to get him to allow us to use views.  Since EF 5 does not have good support for Code First with stored procedures, my option was to create a model (EDMX) against the existing database views.  I then had to select each entity and map the Insert/Update/Delete functions to their respective stored procedures.

    The next step, after I had completed mapping the stored procedures to the entities in the EDMX model, was to figure out how to build a generic repository that would work well with Entity Framework 5.  After reading the blog posts below, I adopted much of their code with some changes to allow for the use of Ninject for dependency injection.

    http://www.tcscblog.com/2012/06/22/entity-framework-generic-repository/
    http://www.tugberkugurlu.com/archive/generic-repository-pattern-entity-framework-asp-net-mvc-and-unit-testing-triangle

    IRepository.cs (requires using System, System.Linq, and System.Linq.Expressions):

    public interface IRepository<T> : IDisposable where T : class
    {
        void Add(T entity);
        void Update(T entity, int id);
        T GetById(object key);
        IQueryable<T> Query(Expression<Func<T, bool>> predicate);
        IQueryable<T> GetAll();
        int SaveChanges();
        int SaveChanges(bool validateEntities);
    }

    GenericRepository.cs

    public abstract class GenericRepository<T> : IRepository<T> where T : class
    {
        public abstract void Add(T entity);
        public abstract void Update(T entity, int id);
        public abstract T GetById(object key);
        public abstract IQueryable<T> Query(Expression<Func<T, bool>> predicate);
        public abstract IQueryable<T> GetAll();
        public int SaveChanges() { return SaveChanges(true); }
        public abstract int SaveChanges(bool validateEntities);
        public abstract void Dispose();
    }

    One of the issues I ran into was trying to do an update. I kept receiving errors, so I posted a question on Stack Overflow (http://stackoverflow.com/questions/12585664/an-object-with-the-same-key-already-exists-in-the-objectstatemanager-the-object) and came up with the following hack. If someone has a better way, please let me know.
    DbContextRepository.cs (requires using System, System.Data, System.Data.Entity, System.Linq, and System.Linq.Expressions):

    public class DbContextRepository<T> : GenericRepository<T> where T : class
    {
        protected DbContext Context;
        protected DbSet<T> DbSet;

        public DbContextRepository(DbContext context)
        {
            if (context == null) throw new ArgumentException("context");
            Context = context;
            DbSet = Context.Set<T>();
        }

        public override void Add(T entity)
        {
            if (entity == null) throw new ArgumentException("Cannot add a null entity.");
            DbSet.Add(entity);
        }

        public override void Update(T entity, int id)
        {
            if (entity == null) throw new ArgumentException("Cannot update a null entity.");
            var entry = Context.Entry(entity);
            if (entry.State == EntityState.Detached)
            {
                var attachedEntity = DbSet.Find(id); // Need to have access to key
                if (attachedEntity != null)
                {
                    var attachedEntry = Context.Entry(attachedEntity);
                    attachedEntry.CurrentValues.SetValues(entity);
                }
                else
                {
                    entry.State = EntityState.Modified; // This should attach entity
                }
            }
        }

        public override T GetById(object key)
        {
            return DbSet.Find(key);
        }

        public override IQueryable<T> Query(Expression<Func<T, bool>> predicate)
        {
            return DbSet.Where(predicate);
        }

        public override IQueryable<T> GetAll()
        {
            return Context.Set<T>();
        }

        public override int SaveChanges(bool validateEntities)
        {
            Context.Configuration.ValidateOnSaveEnabled = validateEntities;
            return Context.SaveChanges();
        }

        #region IDisposable implementation
        public override void Dispose()
        {
            if (Context != null)
            {
                Context.Dispose();
                GC.SuppressFinalize(this);
            }
        }
        #endregion IDisposable implementation
    }

    At this point I am able to start creating the individual repositories that are needed and add a Unit of Work.  Stay tuned for the next installment in my path to creating a Repository Pattern against EF5.

    Read the article

  • .DBML file and LINQ to SQL

    - by Rishabh Ohri
    In my DBML file I have mapped some tables and stored procedures, and the stored procedures' return type is ISingleResult<T>, where T is some mapped table. But I want to read the data into my own hand-created entities rather than the entities created by LINQ to SQL. The entities created by me mirror the mapped table entities; they are used when we send data across a web service. So, how can I create a wrapper around the DBML file so that I always get data in my own entities?

    Read the article

  • all rows plus min / max values using a single stored procedure

    - by Rajeev
    I have a custom data source which pulls out data from a flat file. The flat file contains a timestamp, source, and data. I can use sp_execute to execute a select query against the flat file. I'm currently using 2 stored procedures: one which runs a select * from flat_file into a temp table, and the other which does a select min/max from flat_file, grouping by source, into another temp table. I'm using the data retrieved by the stored procedures in an SSRS report. Is it possible, in a single stored procedure, to retrieve all the rows from the file within a date range and also identify the min/max values for each group retrieved?
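
    One possible approach is a single procedure built on window functions, so every row comes back along with its group's min and max; this is only a sketch, assuming flat_file can be queried like a table with the columns described and SQL Server 2005 or later:

        CREATE PROCEDURE dbo.GetRowsWithGroupMinMax
            @StartDate DATETIME,
            @EndDate   DATETIME
        AS
        BEGIN
            SELECT  [timestamp],
                    [source],
                    [data],
                    MIN([data]) OVER (PARTITION BY [source]) AS MinDataForSource,  -- per-group minimum over the filtered rows
                    MAX([data]) OVER (PARTITION BY [source]) AS MaxDataForSource   -- per-group maximum over the filtered rows
            FROM    flat_file
            WHERE   [timestamp] BETWEEN @StartDate AND @EndDate;
        END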

    Read the article

  • Database Version Control SQL Server 2008 Drop SP's and Functions

    - by Lieven Cardoen
    I'm working on versioning our database, and I'm now searching for a way to drop all stored procedures and functions from a C# console application. I'd rather not create a stored procedure that drops all stored procedures and functions. It has to be some SQL executed from C#. I tried to drop the stored procedure before creating it, but I get this message: System.Data.SqlClient.SqlException: 'CREATE/ALTER PROCEDURE' must be the first statement in a query batch. Script for one SP, for example:

    DROP PROCEDURE [dbo].[sp_Economatic_LoadJournalEntryFeedbackByData]
    SET ANSI_NULLS ON
    SET QUOTED_IDENTIFIER ON
    CREATE PROCEDURE [dbo].[sp_Economatic_LoadJournalEntryFeedbackByData]
        @Data VARCHAR(MAX)
    AS
    BEGIN
        ...
    END

    So I guess that before creating all SPs and functions, I'll first need to drop all SPs and functions with one SQL script.
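
    The error happens because the DROP and the CREATE end up in the same batch, and CREATE PROCEDURE must start its own batch. A sketch of two common workarounds follows (only the procedure name is taken from the question; the rest is illustrative): separate the statements with GO when a batch-aware tool runs the script, or start a new batch yourself with dynamic SQL when sending everything from C# as one command:

        IF OBJECT_ID(N'[dbo].[sp_Economatic_LoadJournalEntryFeedbackByData]', N'P') IS NOT NULL
            DROP PROCEDURE [dbo].[sp_Economatic_LoadJournalEntryFeedbackByData];
        GO  -- GO is a client-side batch separator, not T-SQL; from C#, split on it and issue each batch as its own command
        CREATE PROCEDURE [dbo].[sp_Economatic_LoadJournalEntryFeedbackByData]
            @Data VARCHAR(MAX)
        AS
        BEGIN
            SELECT @Data;  -- placeholder body
        END
        GO
        -- Alternative when everything must travel in one command: wrap the CREATE in EXEC so it runs as its own batch
        EXEC (N'CREATE PROCEDURE dbo.usp_Example AS BEGIN SELECT 1; END');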

    Read the article

  • MDX performance vs. T-SQL

    - by SubPortal
    I have a database containing tables with more than 600 million records and a set of stored procedures that perform complex search operations on the database. The performance of the stored procedures is very slow, even with suitable indexes on the tables. The design of the database is a normal relational design. I want to change the database design to be multidimensional and use MDX queries instead of the traditional T-SQL queries, but the question is: is an MDX query better than a traditional T-SQL query with regard to performance? And if yes, to what extent will it improve the performance of the queries? Thanks for any help.

    Read the article

  • T-SQL Query Results Not as Expected – Deduplication

    - by Yoda
    Hi guys, I am attempting to get all records where an Id field exists more than once; the trouble is my query is returning nothing and I have no idea why, and this is the only method I know. Here is my code:

    select [Customer Number], [Corporate Customer Number], [Order Date], [Order Number], [Order No], [Order Line Status], [Payment Method], [ProcessOrder], [Order Platform]
    from Temp_ICOSOrder
    group by [Customer Number], [Corporate Customer Number], [Order Date], [Order Number], [Order No], [Order Line Status], [Payment Method], [ProcessOrder], [Order Platform]
    having COUNT([Order Number]) > 1

    Any help is much appreciated!
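
    For what it's worth, grouping by every column means the COUNT only exceeds 1 when an entire row is duplicated. A sketch of the usual pattern, assuming [Order Number] is the Id being checked (swap in whichever column applies), groups by that column alone and joins back for the full rows:

        SELECT  t.*
        FROM    Temp_ICOSOrder AS t
        INNER JOIN
               (SELECT [Order Number]
                FROM   Temp_ICOSOrder
                GROUP BY [Order Number]
                HAVING COUNT(*) > 1) AS dupes        -- Ids that appear more than once
                ON dupes.[Order Number] = t.[Order Number];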

    Read the article

  • Best way to deploy VB.NET code / create an assembly on a SQL Server

    - by 1passenger
    I've created some functions and procedures with VB.NET and want to deploy them to a SQL Server. Within Visual Studio you can right-click and select "Deploy". An assembly is created on the server, along with a lot of functions and procedures. It's really easy. Now I want to script the whole deployment process. What is Visual Studio doing when I make a deployment? Can Visual Studio auto-script all the deployment steps for me? Can I save it to a file and execute it manually?
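
    What the Deploy command does, roughly, is register the assembly and create T-SQL wrappers that point into it, and SSMS can script an already-deployed assembly back out for you. A hand-written deployment script is a sketch like the one below; the assembly, class, and method names are hypothetical:

        CREATE ASSEMBLY [MyVbUdfs]
        AUTHORIZATION [dbo]
        FROM 'C:\deploy\MyVbUdfs.dll'   -- Visual Studio instead embeds the binary as FROM 0x4D5A...
        WITH PERMISSION_SET = SAFE;
        GO
        -- One wrapper per exposed method
        CREATE FUNCTION dbo.fn_FormatName (@first NVARCHAR(50), @last NVARCHAR(50))
        RETURNS NVARCHAR(101)
        AS EXTERNAL NAME [MyVbUdfs].[MyVbUdfs.UserDefinedFunctions].[FormatName];
        GO
        CREATE PROCEDURE dbo.usp_DoWork (@input INT)
        AS EXTERNAL NAME [MyVbUdfs].[MyVbUdfs.StoredProcedures].[DoWork];
        GO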

    Read the article

  • Generate Info (wrapper) Class from stored procedure

    - by Adem
    Hello everybody. I am in a crucial project and I am trying to speed up the development phase by using CodeSmith to generate the business classes, DAL, and info classes for the tables of my project. There are about 50 tables with parent-child and many-to-many relationships, and to retrieve data I have to code several inner joins in stored procedures. I have to combine fields from many tables, and this makes working with the info classes difficult. Is there any way to generate an info class from a stored procedure, or, to be more exact, is there a way to parse the result set of the stored procedure and generate an info class with a property for every column in that result set? Please, if anyone can give me some advice, tell me how to achieve this. Best regards
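
    One possible starting point, assuming SQL Server 2012 or later, is sp_describe_first_result_set, which returns one row of metadata per column of a procedure's first result set; a code-generator template could walk that output and emit one property per column (the procedure name below is hypothetical, and on older versions SET FMTONLY ON is the usual fallback):

        EXEC sp_describe_first_result_set
             @tsql = N'EXEC dbo.usp_GetOrderDetails @OrderId = 1';
        -- Returns name, system_type_name, is_nullable, etc. for every column in the result set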

    Read the article

  • Setting up a SQL Membership Provider and attaching the MDF file in Visual Studio 2008

    - by aubreyrhodes
    I'm trying to set up a SQL Membership Provider for an ASP.NET MVC 1.0 application, and I'm having problems setting up the tables and stored procedures in the database. I've tried attaching both the application's current database and a blank database to my local SQLEXPRESS instance (using SSEUtil) and then running the aspnet_regsql wizard against them. When I detach the MDF file and try to load it in Visual Studio 2008, the data connection in the Server Explorer shows that the database has no tables or stored procedures. Am I missing a step or something here? I've been having a heap of trouble with compatibility between Visual Studio and SQLEXPRESS.

    Read the article

  • HTML - Word Doc Images

    - by Michael
    Okay, I have roughly 150+ pages of procedures all written in MS Word. The person who wrote the procedures did an excellent job of recording the process of how to perform specific tasks. This individual went through and created screenshots in an MS Word doc. There are approximately 2 screenshots per page, so that is roughly 300 images, and I do not want to reinvent the wheel. Does anyone know a quick way of handling the images? The written portion is pretty straightforward, but the images are what I am struggling with. Regards, Mike

    Read the article

  • Shell script query

    - by WENzER
    Hi, I have a database, and I want to get the script of every stored procedure in it. I have used the dba_source table to get the script of a stored procedure. Now I want to write a shell script which will give me the scripts of the stored procedures in separate files. The approach I have applied: first I wrote a query which returns the names of all stored procedures, and I store these names in a separate file. Then I run a loop that gets the script of each stored procedure using the names stored in that file. I am able to do all of this separately, but not in a shell script, since I am new to it. Please help me. Thanks

    Read the article

  • Two different definitions of database schema

    - by AspOnMyNet
    a) I found two definitions of schema:

    FIRST – A set of information that describes a table is known as a schema, and schemas are used to describe specific tables within a database, as well as entire databases (and the relationships between tables in them, if any).

    SECOND – A database schema is a way to logically group objects such as tables, views, stored procedures etc. Think of a schema as a container of objects.

    I assume the two descriptions describe entirely different concepts, which just happen to use the same name?

    b) "A database schema is a way to logically group objects such as tables, views, stored procedures etc. Think of a schema as a container of objects." If I understand the above definition correctly, then a database schema is similar to a namespace, the only difference being that we can assign access permissions to a database schema, while the same can’t be done with namespaces? Thanks
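
    A small sketch of the second definition in SQL Server terms, with invented names: the schema groups objects the way a namespace groups types, and, unlike a namespace, it is a securable that permissions can be granted on:

        CREATE SCHEMA Sales AUTHORIZATION dbo;
        GO
        CREATE TABLE Sales.Orders (OrderId INT PRIMARY KEY, OrderDate DATETIME NOT NULL);
        GO
        CREATE VIEW Sales.RecentOrders AS
            SELECT OrderId, OrderDate FROM Sales.Orders WHERE OrderDate >= '20100101';
        GO
        -- One grant covers every object in the container
        GRANT SELECT ON SCHEMA::Sales TO ReportingRole;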

    Read the article

  • Reading from an Oracle temp table in a separate procedure from the one it was populated in

    - by Bob
    I have 2 stored procedures: the first creates and populates an Oracle temp table, and the second reads from it. The temp table only has scope for that session. I'm calling the procedures from .NET, and the second procedure never returns any results. However, if I use the same sprocs and parameters in SQL*Plus, it works fine. I've tried creating an Oracle transaction object and had hoped I'd be able to read the tables while still using the same transaction - trying to emulate the SQL*Plus type of single-connection environment. Any ideas what I'm doing wrong?

    Read the article

  • Executing a .NET Managed Assembly from SQL Server 2008 - Pro's, Con's & Recommendations

    - by RPM1984
    Hi guys, I'm looking for opinions/recommendations/links for the following scenario I'm currently facing.

    The Platform: .NET 4.0 web application, SQL Server 2008.

    The Task: Overhaul a component of the system that performs (fairly) complex mathematical operations based on a specific user activity, and updates numerous tables in the database. A common user activity might be "Bob" decides to post a forum topic. This results in (the end solution) needing to look at various factors (about the post he did), then, after doing some math based on lookup values/ratios as well as other data in the database, inserting some other data as a result of these operations.

    The Options: OK, so here's what I'm thinking. Although it would be much easier to do this in C# (LINQ to SQL), it doesn't make much sense, as the majority of the computations are based on values in the db, and it will get difficult to control/optimize/debug the LINQ over time. Hence, I'm leaning towards creating a managed assembly (C# class library) that contains the lookup values (constants) as well as leveraging the math classes in the existing .NET BCL. Basically I'd expose a few methods that can be called by the T-SQL stored procedures. This to me has the following advantages: Simplicity of math. Do complex math in .NET vs complex math in T-SQL. No brainer. =) Abstraction of computations, configurable "lookup" values and business logic from raw T-SQL. T-SQL only needs to care about the data, simplifying the stored procedures and making them easier to maintain. When it needs to do math it delegates off to the managed assembly.

    So, having said that - I've never done this before (calling a .NET assembly from T-SQL), and after some googling the best site I could come up with is here, which is useful but outdated. So, what am I asking? Well, firstly, I need some better references on how to actually do this. "This" being how to call a C# .NET 4 assembly from within T-SQL stored procedures in SQL Server 2008. Secondly, who out there has done this, and what problems (if any) did you face? I realize it may be difficult to provide a "correct answer", so I'll try to give it to whoever gives me the answer with a combination of good links and a list of pros/cons/problems with this implementation. Cheers!
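
    For reference, the T-SQL side of the wiring is fairly small; a hedged sketch follows, with hypothetical assembly, class, and method names (note that SQL Server 2008 hosts the CLR 2.0 runtime, so the class library generally needs to target .NET 3.5 or earlier rather than .NET 4):

        -- One-time setup: enable CLR integration on the instance
        EXEC sp_configure 'clr enabled', 1;
        RECONFIGURE;
        GO
        CREATE ASSEMBLY [ForumMath]
        FROM 'C:\deploy\ForumMath.dll'
        WITH PERMISSION_SET = SAFE;
        GO
        -- Expose a managed method as a scalar function
        CREATE FUNCTION dbo.fn_CalculatePostScore (@views INT, @replies INT, @weight DECIMAL(9,4))
        RETURNS DECIMAL(18,4)
        AS EXTERNAL NAME [ForumMath].[ForumMath.Calculations].[CalculatePostScore];
        GO
        -- Stored procedures can then treat it like any other function
        CREATE PROCEDURE dbo.usp_ScorePost (@postId INT)
        AS
        BEGIN
            UPDATE p
            SET    p.Score = dbo.fn_CalculatePostScore(p.ViewCount, p.ReplyCount, p.Weight)
            FROM   dbo.Posts AS p
            WHERE  p.PostId = @postId;
        END
        GO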

    Read the article

  • Besides EAR and EJB, what do I get from a J2EE app server that I don't get in a servlet container like Tomcat?

    - by dacracot
    We use Tomcat to host our WAR-based applications. They are servlet-container-compliant J2EE applications, with the exception of org.apache.catalina.authenticator.SingleSignOn. We are being asked to move to a commercial J2EE application server. The first downside to changing that I see is the cost. No matter what the charges for the application server, Tomcat is free. Second is the complexity. We use neither EJB nor EAR features (of course not, we can't), and have not missed them. What, then, are the benefits I'm not seeing? What are the drawbacks that I haven't mentioned? Mentioned so far were... JTA (Java Transaction API) - we control transactions via database stored procedures. JPA (Java Persistence API) - we use JDBC and, again, stored procedures to persist. JMS (Java Message Service) - we use XML over HTTP for messaging. This is good, please more!

    Read the article

  • Strange problem with VSS: VSS doesn't work after 5 p.m.

    - by afsharm
    I have a very strange problem with Visual SourceSafe. I use VSS as an add-on with VS 2008 on a corporate intranet with 5 other colleagues. My VS stops working after 5 p.m. on most days, complaining that it cannot connect to SourceSafe. I'm running VS 2008 and VSS 2005 on Vista, and no one else in the company has such a problem. What do you think the cause is? Is there any log for VSS?

    Read the article

  • Anyone have experience calling Rake from MSBuild for code gen and other benefits? How did it go? Wha

    - by Charlie Flowers
    While programming in C# using Visual Studio 2008, I often wish for "automatic" code generation. If possible, I'd like to achieve it by making my MSBuild solution file call out to Rake, which would call Ruby code for the code generation, having the resulting generated files automatically appear in my solution. Here's one business example (of many possible examples I could name) where this kind of automatic code generation would be helpful. In a recent project I had an interface with some properties that contained dollar amounts. I wanted a second interface and a third interface that had the same properties as the first interface, except they were "qualified" with a business unit name. Something like this:

    public interface IQuarterlyResults
    {
        double TotalRevenue { get; set; }
        double NetProfit { get; set; }
    }

    public interface IConsumerQuarterlyResults
    {
        double ConsumerTotalRevenue { get; set; }
        double ConsumerNetProfit { get; set; }
    }

    public interface ICorporateQuarterlyResults
    {
        double CorporateTotalRevenue { get; set; }
        double CorporateNetProfit { get; set; }
    }

    In this example, there is a "Consumer Business Unit" and a "Corporate Business Unit". Every property on IQuarterlyResults becomes a property called "Corporate" + [property name] on ICorporateQuarterlyResults, and likewise for IConsumerQuarterlyResults. Why make interfaces for these, rather than merely having an instance of IQuarterlyResults for Consumer and another instance for Corporate? Because, when working with the calculator object I was building, the user had to deal with hundreds of properties, and it is much less confusing if he can deal with "fully qualified" property names such as "ConsumerNetProfit". But let's not get bogged down in this example. It is only an example and not the main question. The main question is this: I love using Ruby and ERB for code generation, and I love using Rake to manage dependencies between tasks. To solve the problem above, what I'd like to do is have MSBuild call out to Rake, and have Rake / Ruby read the list of properties on the "core" interface and then generate the code to make all the dependent interfaces and their properties. This would get triggered every time I do a build, because I'd put it into the MSBuild file for the VS.NET solution. Has anyone tried anything like this? How did it work out for you? What insights can you share about pros, cons, tips for success, etc.? Thanks!

    Read the article

  • I need a program to store the database script for Oracle

    - by Hakan Kara
    We are developing a project that has 3 environments (development, test, production), so there are 3 databases (actually more than 3, because we have 5 customers, so we have more than 10 databases) and they must be synchronised. There are 30 coders working on this project. Everyone adds, deletes, and changes procedures, table columns, etc. We need a program to store our database scripts, like Visual Studio's Team Foundation Server, that can: show the change history of a script file; let everyone access it and check in their scripts; recover previous versions of a script file; execute these scripts against a selected database; and compare databases by procedures (not only by name, but by the content of the procedure), functions, table columns, packages, etc. I am searching for a program like that. Which one do you suggest?

    Read the article

  • how to debug MySql stored procs without breaking control flow from application

    - by M.Taha Masood
    Is there a way to do the following: I have a MySQL DB, and there are many stored procs written in it as well. I use the MySQL client library in C to connect to this DB and, amongst other things, call the stored procedures. Is there a way to set breakpoints in the stored procedures such that, when the call is made from the C program (using the MySQL client library) into the stored proc, control flow is halted in the C program and we can step into the called stored proc to whatever level of nesting, inspecting variables etc. (like any decent C debugger provides)? Is there ANY way to do the above? Through some third-party tool or the like, if not through plain MySQL. Help is appreciated. Thanks

    Read the article

  • Entity framework and database logic.

    - by Xavier Devian
    Hi all, I have a question that has been around for several years. As you all know, Entity Framework is an ORM tool that tries to map the database to an object-oriented access model. All the samples I've seen query directly against the database tables. So, what is the role of views in the database now? Views were used to model the database in a more friendly way, that is, several physical tables, one logical table. This was great, for example, for hiding the complex relational model inside stored procedures, as querying the views was much easier than reproducing the query joins over and over in each stored procedure. So the question is, why is Entity Framework so good if stored procedures cannot take advantage of it?
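
    As a small illustration of the point about views (the table and column names are invented): the joins live once in the view, and every stored procedure can query the one logical table:

        CREATE VIEW dbo.vw_OrderSummary
        AS
            SELECT  o.OrderId,
                    c.CustomerName,
                    a.City,
                    o.OrderDate
            FROM    dbo.Orders    AS o
            JOIN    dbo.Customers AS c ON c.CustomerId = o.CustomerId
            JOIN    dbo.Addresses AS a ON a.AddressId  = c.BillingAddressId;
        GO
        CREATE PROCEDURE dbo.usp_GetOrderSummary (@OrderId INT)
        AS
        BEGIN
            -- The procedure never repeats the three-table join
            SELECT OrderId, CustomerName, City, OrderDate
            FROM   dbo.vw_OrderSummary
            WHERE  OrderId = @OrderId;
        END
        GO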

    Read the article
