Search Results

  • SQLAuthority News – Reliving TechEd with Vinod Kumar at Bangalore User Groups

    - by pinaldave
    TechEd India 2012 was held in Bangalore from March 21 to 23, 2012. Just like every year, the event was bigger, grander and more inspiring. Here is my blog post reviewing the event: SQLAuthority News – #TechEdIn – TechEd India 2012 Memories and Photos.

    For me this is a family event – I get to meet friends who are as dear as my family. I like to call user groups family too. Family shares life's personal happiness and experiences – in the same way a user group shares professional experiences, and quite often UG members become just like family members. When I learned that the following user groups were building a unique event together, I was pretty excited to find out who the speaker would be:

    - BDotNet.in – Bangalore .NET Usergroup
    - BITPro.in – Bangalore ITPro Usergroup

    It was indeed a joy to learn that the presenter will be Vinod Kumar, who is an integral part of the user groups and a hardcore SQL Server enthusiast. Vinod is going to present the following two sessions, both focused on the internals of Windows and SQL Server:

    - Understanding Windows with SysInternals Tools – this session will cover various tools and topics: memory usage, x86 and x64 architecture, WOW mode, page faults, virtual memory mapping, OOM scenarios, the Perf tool, the PAL tool, Logman and more.
    - Peeling the Onion: SQL Server Internals Demystified – this session will cover advanced disk formats, SQL Server 2012 security changes, memory changes, indirect checkpoint and more.

    I am very excited, as this time I will get the opportunity to sit in the front rows (I will be arriving early to get the best possible position) and learn. I am looking forward to the event and I hope you will join us as well.

    Event Details:
    Date: Saturday, April 7, 2012 (10:30am until 1:30pm)
    Venue: Microsoft, Domlur, Bangalore.
    Event page: https://www.facebook.com/events/139444029517882/
    This session is FREE for all, and anybody can walk in.

    Community Blog Posts – here are a few of the blog posts written by the community on this subject:
    - Vinod Kumar on Reliving #TechEdIn at Blr UG
    - Manas Dash on Reliving TechEd India 2012 with Vinod Kumar
    - Sudeepta Ganguly on SysInternals n SQLInternals with Vinod Kumar
    - Lohith on Re Live TechEd India 2012 with Vinod Kumar

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    http://www.youtube.com/watch?v=oRw-p4mahLU
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology, Video

  • Week 21: FY10 in the Rear View Mirror

    - by sandra.haan
    FY10 is coming to a close, and before we dive into FY11 we thought we would take a walk down memory lane and reminisce about some of our favorite Oracle PartnerNetwork activities.

    June 2009 brought One Red Network to partners, offering access to the same virtual kickoff environment used by Oracle employees. It was a new way to deliver valuable content to key stakeholders (and without the 100+ degree temperatures). Speaking of hot, in June Oracle also announced new licensing options for our ISV partners. This model enables an even broader community of ISVs to build, deploy and manage SaaS applications on the same platform.

    While some people took the summer off, the OPN Program team was working away to deliver a brand new partner program - Oracle PartnerNetwork Specialized - at Oracle OpenWorld in October. Specialized. Recognized. Preferred. If you haven't gotten the message yet, we may need an emergency crew to pull you out from under that rock you've been hiding under. But seriously, the announcement at the OPN Forum drew a big crowd, and our FY11 event is shaping up to be just as exciting. OPN Specialized was announced in October, and we opened our doors for enrollment in December 2009. To mark our grand opening we held our first ever social webcast, allowing partners from around the world to interact with us live throughout the day. We had a lot of great conversations and really enjoyed the chance to speak with so many of you.

    After a short holiday break we were back at it - just a small announcement - Oracle's acquisition of Sun. In case you missed it, here is a short field report from Ted Bereswill, SVP North America Alliances & Channels, on the partner events to support the announcement. And while we're announcing things - did we mention that both Ted Bereswill and Judson Althoff were named Channel Chiefs by CRN? Not only do we have a couple of Channel Chiefs, but Oracle also won the 5 Star Partner Programs Award, took top honors at the CRN Channel Champion Awards for Financial Factors/Financial Performance in the category of Data and Information Management, and was recognized at the Xchange Solution Provider event in March 2010. We actually caught up with Judson at this event for a quick recap of our participation.

    But awards aside, let's not forget our main focus in FY10, and that is Specialization. In April we announced that we had over 35 Specializations available for partners and a plan to deliver even more in FY11. We are just days away from the end of FY10, but we hope you enjoyed our walk down memory lane. We are already planning lots of activity for our partners in FY11, starting with our Partner Kickoff event on June 29th. Join us to hear the vision and strategy for FY11 and interact with regional A&C leaders. We look forward to talking with you then.

    The OPN Communications Team

  • AutoVue at the Oracle Asset Lifecycle Management Summit

    - by celine.beck
    I recently had the opportunity to attend the Oracle ALM Summit, held in March at Redwood Shores on Oracle Headquarters grounds, and to present the integration between AutoVue and Primavera P6. The ALM Summit brought together over 300 Oracle maintenance practitioners, with roles in maintenance management and IT, who endured the foggy and rainy San Francisco weather to attend the 4th edition of this Oracle-driven conference.

    Following a general session, Ralph Rio from ARC Advisory Group provided a very interesting keynote discussing Asset Management directions, both in the short and long run. An interesting point that Ralph raised is that most organizations have done a good job of improving performance in the design/build, operate/maintain and portfolio management phases by leveraging solutions like Asset Lifecycle Management and Project & Portfolio Management; however, there seems to be room for improvement in between those phases - when information flows from one group to the other during the data handover phase, or when the time comes to update or modify drawings to reflect the reality of physical assets.

    This is where AutoVue comes into play. By integrating with enterprise applications like content management systems, asset lifecycle management applications and project management solutions, AutoVue can be a real process enabler, streamlining information flows from concept/design to decommissioning and ensuring that all project stakeholders have access to asset information and engineering data throughout the asset lifecycle. AutoVue's built-in digital annotation capabilities allow maintenance workers and technicians to report changes in configuration and visually capture the delta between as-built and as-maintained versions of asset documents. This information can then be easily handed over to engineers, who can identify the changes and incorporate the modifications into the drawings during the next round of document revisions. PPL Power Generation, an electric utility headquartered in Allentown, Pennsylvania, discussed this usage of AutoVue during an interesting webcast on AutoVue's role in the Utilities space.

    After the keynote sessions, participants broke off into product-centric tracks around Oracle's Asset Lifecycle Management solutions (E-Business Suite, PeopleSoft, and JD Edwards). The second day of the conference was the occasion for us to present the integration between AutoVue and Primavera P6 to the Maintenance Summit audience. The presentation was a great success and generated much discussion with partners and customers during the breaks. People seemed highly interested in learning more about our plans for integrating AutoVue and Primavera P6 with Oracle's ALM solutions... stay tuned for further information on the subject!

  • SQL SERVER – Online Index Rebuilding Index Improvement in SQL Server 2012

    - by pinaldave
    Have you ever faced a situation where you see something working and you feel it should not be working? Well, I had a similar moment a few days ago. I know that SQL Server 2008 supports online indexing. However, I also know that I cannot rebuild an index ONLINE if I have used VARCHAR(MAX), NVARCHAR(MAX) or a few other data types. While I held this belief very strongly, I came across a situation where I had to go online and do a little bit of reading in Books Online. Here is a similar example. First of all, run the following code in SQL Server 2008 or SQL Server 2008 R2:

        USE TempDB
        GO
        CREATE TABLE TestTable (ID INT, FirstCol NVARCHAR(10), SecondCol NVARCHAR(MAX))
        GO
        CREATE CLUSTERED INDEX [IX_TestTable] ON TestTable (ID)
        GO
        CREATE NONCLUSTERED INDEX [IX_TestTable_Cols] ON TestTable (FirstCol) INCLUDE (SecondCol)
        GO
        USE [tempdb]
        GO
        ALTER INDEX [IX_TestTable_Cols] ON [dbo].[TestTable] REBUILD WITH (ONLINE = ON)
        GO
        DROP TABLE TestTable
        GO

    Now run the same code in SQL Server 2012 and observe the difference between the two executions. In SQL Server 2008/R2 it will throw the following error:

        Msg 2725, Level 16, State 2, Line 1
        An online operation cannot be performed for index 'IX_TestTable_Cols' because the index contains column 'SecondCol' of data type text, ntext, image, varchar(max), nvarchar(max), varbinary(max), xml, or large CLR type. For a non-clustered index, the column could be an include column of the index. For a clustered index, the column could be any column of the table. If DROP_EXISTING is used, the column could be part of a new or old index. The operation must be performed offline.

    In SQL Server 2012 it will run successfully and will not throw any error:

        Command(s) completed successfully.

    I always thought it would throw an error if VARCHAR(MAX) or NVARCHAR(MAX) was used in the table schema definition. When I saw this result it was clear to me that this is for sure not a bug but an enhancement in SQL Server 2012. As a matter of fact, I always wanted this feature to be added to the SQL Server engine, as it enables ONLINE index rebuilding for mission-critical tables which need to be always online. I quickly searched online and landed on Jacob Sebastian's blog, where he has blogged about it as well. Well, is there any other new feature in SQL Server 2012 which gave you a good surprise?

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
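    As an aside, if you want to know in advance whether a given index would hit this restriction on SQL Server 2008/R2, you can look for LOB columns among the index's key and included columns. Here is a minimal sketch against the demo table above; the catalog views (sys.indexes, sys.index_columns, sys.columns, sys.types) are standard, and only the table name is tied to this example:

        -- List index columns that block ONLINE rebuilds before SQL Server 2012:
        -- MAX types report max_length = -1; text/ntext/image are named explicitly.
        SELECT i.name AS index_name, c.name AS column_name, t.name AS type_name
        FROM sys.indexes AS i
        JOIN sys.index_columns AS ic
            ON ic.[object_id] = i.[object_id] AND ic.index_id = i.index_id
        JOIN sys.columns AS c
            ON c.[object_id] = ic.[object_id] AND c.column_id = ic.column_id
        JOIN sys.types AS t
            ON t.user_type_id = c.user_type_id
        WHERE i.[object_id] = OBJECT_ID(N'dbo.TestTable')
          AND (c.max_length = -1 OR t.name IN (N'text', N'ntext', N'image', N'xml'));

    If the query returns rows for an index, a WITH (ONLINE = ON) rebuild of that index will fail on 2008/R2 with the Msg 2725 error shown above.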

  • Using Find/Replace with regular expressions inside a SSIS package

    - by jamiet
    Another one of those might-be-useful-again-one-day-so-I'll-share-it-in-a-blog-post blog posts.

    I am currently working on a SQL Server Integration Services (SSIS) 2012 implementation where each package contains a parameter called ETLIfcHist_ID. During normal execution this will get altered when the package is executed from the Execute Package Task; however, we want to make sure that at deployment time they all have a default value of -1. Of course, they tend to get changed during development, so I wanted a way of easily changing them all back to the default value. Opening up each package in turn and editing them was an option, but given that we have over 40 packages and we might want to carry out this reset fairly frequently, I needed a more automated method, so I turned to Visual Studio's Find/Replace... feature.

    Of course, we don't know what value will be in that parameter, so I can't simply search for a particular value; hence I opted to use a regular expression to identify the value to be changed. In the rest of this blog post I'll explain how to do that. For demonstration purposes I have taken the contents of a .dtsx file and stripped out everything except the element containing the parameters (<DTS:PackageParameters>). If you want to play along at home you can copy-paste the XML document below into a new XML file and open it up in Visual Studio:

        <?xml version="1.0"?>
        <DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts">
          <DTS:PackageParameters>
            <DTS:PackageParameter
              DTS:CreationName=""
              DTS:DataType="3"
              DTS:Description="InterfaceHistory_ID: used for Lineage"
              DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"
              DTS:ObjectName="ETLIfcHist_ID">
              <DTS:Property
                DTS:DataType="3"
                DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>
            </DTS:PackageParameter>
            <DTS:PackageParameter
              DTS:CreationName=""
              DTS:DataType="3"
              DTS:Description="Some other description"
              DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25845C7E}"
              DTS:ObjectName="SomeOtherObjectName">
              <DTS:Property
                DTS:DataType="3"
                DTS:Name="ParameterValue">SomeOtherValue</DTS:Property>
            </DTS:PackageParameter>
          </DTS:PackageParameters>
        </DTS:Executable>

    We are trying to identify the value of the parameter whose name is ETLIfcHist_ID - notice that in the XML document above that value is VALUE_TO_BE_CHANGED. The following regular expression will find the appropriate portion of the XML document:

        {\<DTS\:PackageParameter[\n ]*DTS\:CreationName="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Description="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DTSID="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:ObjectName="ETLIfcHist_ID"\>[\n ]*\<DTS\:Property[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Name="ParameterValue"\>}[A-Za-z0-9\:_\{\}- ]*{\<\/DTS\:Property\>}

    Note the name of the parameter that we're looking for, and the two portions delimited by pairs of curly braces "{...}". These are important because they pick out the two portions on either side of the value I want to replace - everything from the opening <DTS:PackageParameter up to and including the DTS:Name="ParameterValue"> attribute, and the closing </DTS:Property> tag.

    Those sections in the curly braces are termed tag expressions, and they can be referred to in the replace expression using a backslash and a number identifying which tag expression you mean, according to its ordinal position. Hence, our replace expression is simply:

        \1-1\2

    We're saying the portion of our file identified by the find expression should be replaced by the first curly brace section, then the literal -1, then the second curly brace section. Make sense? Give it a go yourself by plugging those two expressions into Visual Studio's Find and Replace dialog. If you set it to look in "All Open Documents" then you can open up the code-behind of all your packages and change all of them at once.

    That's it! I realise that not everyone will be looking to change the value of a parameter, but hopefully I have shown you a technique that you can modify to work for your own scenario. Given that this blog post is, y'know, on the web, I have no doubt that someone is going to find a fault with my find regex, and if that person is you... that's OK. Let me know about it in the comments below and perhaps we can work together to come up with something better! Note that some parameters may have a different set of properties (for example some, but not all, of my parameters have a DTS:Required attribute), so your find regular expression may have to change accordingly. When researching this I found the following article to be invaluable: Visual Studio Find/Replace Regular Expression Usage.

    @Jamiet
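    If you would rather not open Visual Studio at all, the same reset can be scripted outside the IDE. Here is a minimal sketch in Python (standard re syntax rather than Visual Studio's tag-expression syntax); the folder path is hypothetical, and you should test on copies of your packages first:

        import glob
        import re

        # Match the ParameterValue element belonging to the ETLIfcHist_ID
        # parameter, capturing the text before and after the value. DOTALL
        # lets the lazy .*? span the line breaks between the attributes.
        pattern = re.compile(
            r'(DTS:ObjectName="ETLIfcHist_ID">.*?DTS:Name="ParameterValue">)'
            r'[^<]*'
            r'(</DTS:Property>)',
            re.DOTALL)

        for path in glob.glob(r'C:\ssis\MyProject\*.dtsx'):  # hypothetical folder
            with open(path, encoding='utf-8') as f:
                xml = f.read()
            with open(path, 'w', encoding='utf-8') as f:
                # \g<1> and \g<2> re-emit the captured portions around the -1.
                f.write(pattern.sub(r'\g<1>-1\g<2>', xml))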

  • How to Run Apache Commands From Oracle HTTP Server 11g Home

    - by Daniel Mortimer
    Every now and then you come across a problem where there is nothing in the "troubleshooting manual" which can help you. Instead you need to think outside the box. This happened to me two or three years back. Oracle HTTP Server (OHS) 11g did not start. The error reported back by OPMN was generic and gave no clue; worse, the HTTP Server error log was empty, and remained so even after I had increased the OPMN and HTTP Server log levels. After checking configuration files, operating system resources, etc., I was still no nearer the solution. And then the light bulb moment: OHS is based on Apache - what happens if I attempt to start HTTP Server using the native apache command? Trouble was, the OHS 11g solution has its binaries and configuration files in separate "home" directories:

    - ORACLE_HOME contains the binaries
    - ORACLE_INSTANCE contains the configuration files

    How to set the environment so that native apache commands run without error? Eventually, with help from a colleague, the knowledge article How to Start Oracle HTTP Server 11g Without Using opmnctl [ID 946532.1] was born! To be honest, I cannot remember the exact cause of and solution to that OHS problem two or three years ago. But I do remember that an attempt to start HTTP Server using the native apache command threw back an error to the console which led me to discover that the culprit was some unusual filesystem fault.

    The other day, I was asked to review and publish a new knowledge article which described how to use the apache command to dump a list of static and shared loaded modules. This got me thinking that it was time [ID 946532.1] was given an update. The result: How To Run Native Apache Commands in an Oracle HTTP Server 11g Environment [ID 946532.1]. Highlights:

    - Title change
    - Improved environment setting scripts
    - Interactive - there should be no need to manually edit the scripts (although readers are welcome to do so)
    - Automatic dump of some diagnostic information
    - Inclusion of some links to other troubleshooting collateral

    To view the knowledge article you need a My Oracle Support login. For convenience, you can obtain the scripts via the links below.

    MS Windows:
    - Wrapper cmd script - calls the main cmd script [after download, remove the ".txt" file extension]
    - Main cmd script - sets the OHS 11g environment to run Apache commands [after download, remove the ".txt" file extension]

    Unix:
    - Shell script - sets the OHS 11g environment to run Apache commands on Unix

    Please note: I cannot guarantee that the scripts held in the blog repository will be maintained. Any enhancements or fixes will be applied to the scripts attached to the knowledge article.

    Lastly, to find out more about native apache commands, refer to the Apache documentation:
    - apachectl - Apache HTTP Server Control Interface [http://httpd.apache.org/docs/2.2/programs/apachectl.html]
    - httpd - Apache Hypertext Transfer Protocol Server [http://httpd.apache.org/docs/2.2/programs/httpd.html]
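    The gist of what such an environment-setting script has to do can be sketched in a few lines of shell. This is only an illustration, not the MOS scripts themselves: the install paths and the component name "ohs1" are assumptions you must adjust for your own install, and the real scripts set more than this:

        # Minimal sketch: point the shell at the OHS 11g binaries and config.
        ORACLE_HOME=/u01/app/oracle/middleware/ohs_home      # binaries (assumed path)
        ORACLE_INSTANCE=/u01/app/oracle/middleware/ohs_inst  # config files (assumed path)
        export ORACLE_HOME ORACLE_INSTANCE
        LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH    # so httpd can find Oracle libraries
        export LD_LIBRARY_PATH

        # Run a native command against the instance's configuration file,
        # e.g. a configuration syntax check:
        $ORACLE_HOME/ohs/bin/apachectl -t \
            -f $ORACLE_INSTANCE/config/OHS/ohs1/httpd.conf

    The key point is the one the article makes: because binaries and configuration live in different homes, every native command needs to be told explicitly, via -f, which httpd.conf to use.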

  • Recruitment Drive - Things Don't Always Go As Planned - Stay Flexible by Kalyan Neelagiri

    - by david.talamelli
    I am one of the recruiters for Oracle and work in our India recruitment team. When we are hiring for multiple positions we often hold recruitment events to interview a large number of people as effectively as possible. These events are often held on the weekend, as many people are not free to attend an all-day event during the working week.

    Just recently, during a recruitment campaign, I was tasked with setting up a recruitment event for some roles we were hiring for. I have set up and run weekend recruitment events in the past which have all run smoothly. This time, however, arranging the recruitment event was quite a challenge for me. The planned event was taking place on a Saturday. I had almost sent out the confirmed schedule of candidates to the respective hiring team on Friday and was on track for the event to take place, but unfortunately there was breaking news in the media that a strike had been called in the city because of political agitations and protests taking place on the event day.

    The hiring manager rushed to me asking for my thoughts and ideas. I was in two minds about what to do. On one hand, I was not ready to cancel the event because of all the work that so many people had put into preparing it, and I did not want to reschedule the event at the last minute if I did not need to. On the other hand, I understood it might be best to reschedule the event, as people might not be able to attend because of the political protests taking place on the day. In the end I decided to look at other options, because the situation might cause confusion and problems for the scheduled candidates driving to the venue. So we concluded we would reschedule our plans and moved the event to the next week.

    The good news is that we successfully executed the recruitment drive the following Saturday. We were glad that 100% of the candidates were able to make it to the new interview date, and despite all the agitations in the city we were successful in hiring people for all the roles we had open.

    Things do not always go as planned. The best laid plans can sometimes be for nought based on external factors outside of our control. What this experience has taught me is that rather than focus on the negatives when you are thrown a curveball, the best approach is to stay flexible and focus on finding ways to reach your outcome. Your plans may need to change, but you can still achieve the results you are after if you have the right mindset.

  • Oracle AutoVue Key Highlights from Oracle OpenWorld 2012

    - by Celine Beck
    We closed another successful Oracle OpenWorld for AutoVue. Thanks to everyone who joined us this year. As usual, from customer presentations to evening networking activities, there was enough to keep us busy during the entire event. Here is a summary of some of the key highlights of the conference.

    Sessions: We had two AutoVue-specific sessions during Oracle OpenWorld this year. The first session was part of the Product Lifecycle Management track and covered how AutoVue can be used to help drive effective decision making and streamline design-for-manufacturing processes. Attendees had the opportunity to learn from customer speaker GLOBALFOUNDRIES how they have been leveraging Oracle AutoVue within Agile PLM to enable a high degree of collaboration during the exceptionally creative phases of their product development processes - securely, without risking valuable intellectual property. If you are interested, you can download the presentation by visiting launch.oracle.com/?plmopenworld2012.

    AutoVue was also featured as part of the Utilities track. This session focused on how visualization solutions play a critical role in the effective plant optimization and configuration strategies defined by owners and operators of power generation facilities. Attendees learnt how, integrated with document management systems and enterprise applications like Oracle Primavera and Asset Lifecycle Management, AutoVue improves change management processes; minimizes risks by providing access to accurate engineering drawings which capture and reflect the as-maintained status of assets; and allows customers to drive complex maintenance projects to successful completion.

    Augmented Business Visualization for Agile PLM: During Oracle OpenWorld we also showcased an Augmented Business Visualization-based solution for Oracle Agile PLM. An Augmented Business Visualization (ABV) solution is one where your structured data (from Oracle Agile PLM, for instance) and your unstructured data (documents, designs, 3D models, etc.) come together to allow you to make better decisions (check out our blog posts on the topic: Augment the Value of Your Data (or Time to Replace the "Attach" Button) and Context is Everything). As part of Agile PLM, the idea is to support more effective decision-making by turning 3D assemblies into color-coded reports, and by streamlining business processes like engineering change management - for example, enabling the automatic creation of engineering change requests in Agile PLM directly from documents being viewed in AutoVue. More on this coming soon... probably during the Oracle Value Chain Summit, to be held in San Francisco from Feb. 4-6, 2013! Mark your calendars and stay tuned for more information!

    And thanks again for joining us at Oracle OpenWorld!

  • Replication Services as ETL extraction tool

    - by jorg
    In my last blog post I explained the principles of Replication Services and the possibilities it offers in a BI environment. One of the possibilities I described was the use of snapshot replication as an ETL extraction tool:

    “Snapshot Replication can also be useful in BI environments. If you don’t need a near real-time copy of the database, you can choose to use this form of replication. Next to being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records and it can even apply schema changes automatically, so I think it offers enough features here. I don’t know how the performance will be and if it really works as well for this purpose as I expect, but I want to try this out soon!”

    Well, I have tried it out and I must say it worked well. I was able to let Replication Services do the work in a fraction of the time it would have cost me to do the same in SSIS. What I did was the following:

    - Configure snapshot replication for some Adventure Works tables; this was quite simple and straightforward.
    - Create an SSIS package that executes the snapshot replication on demand and waits for its completion. This is something that you can't do with out-of-the-box functionality.

    While configuring the snapshot replication, two SQL Agent jobs are created: one for the creation of the snapshot and one for the distribution of the snapshot. Unfortunately these jobs are asynchronous, which means that if you execute them they immediately report back whether the job started successfully or not; they do not wait for completion and report the result afterwards. So I had to create an SSIS package that executes the jobs and waits for their completion before the rest of the ETL process continues. Fortunately I was able to create an SSIS package with the desired functionality. I have made a step-by-step guide that will help you configure the snapshot replication, and I have uploaded the SSIS package you need to execute it.

    Configure snapshot replication

    The first step is to create a publication on the database you want to replicate:

    - Connect in SQL Server Management Studio, right-click Replication and choose New.. Publication... The New Publication Wizard appears; click Next.
    - Choose your "source" database and click Next.
    - Choose Snapshot publication and click Next.
    - You can now select tables and other objects that you want to publish. Expand Tables and select the tables that are needed in your ETL process.
    - In the next screen you can add filters on the selected tables, which can be very useful - think about selecting only the last x days of data, for example. It is possible to filter out rows and/or columns. In this example I did not apply any filters.
    - Schedule the Snapshot Agent to run at a desired time. By doing this a SQL Agent job is created, which we need to execute from an SSIS package later on.
    - Next you need to set the security settings for the Snapshot Agent; click on the Security Settings button. In this example I ran the agent under the SQL Server Agent service account. This is not recommended as a security best practice. Fortunately there is an excellent article on TechNet which tells you exactly how to set up the security for Replication Services. Read it and make sure you follow the guidelines!
    - On the next screen, choose to create the publication at the end of the wizard. Give the publication a name (SnapshotTest) and complete the wizard. The publication is created and the articles (tables in this case) are added.

    Now the publication has been created successfully, it's time to create a new subscription for it:

    - Expand the Replication folder in SSMS, right-click Local Subscriptions and choose New Subscriptions. The New Subscription Wizard appears.
    - Select the publisher on which you just created your publication, and select the database and publication (SnapshotTest).
    - You can now choose where the Distribution Agent should run. If it runs at the distributor (push subscriptions) it causes extra processing overhead. If you use a separate server for your ETL process and databases, choose to run each agent at its subscriber (pull subscriptions) to reduce the processing overhead at the distributor.
    - Of course we need a database for the subscription, and fortunately the wizard can create it for you. Choose New database, give the database the desired name, set the desired options and click OK.
    - You can now add multiple SQL Server subscribers, which is not necessary in this case but can be very useful.
    - You now need to set the security settings for the Distribution Agent; click on the .... button. Again, in this example I ran the agent under the SQL Server Agent service account; read the security best practices first. Click Next.
    - Make sure you create a synchronization job schedule again. This job is also needed in the SSIS package later on. Initialize the subscription at first synchronization: select the first box to create the subscription when finishing this wizard, then complete the wizard by clicking Finish.

    The subscription will be created. In SSMS you will see that a new database, the subscriber, has been created. There are no tables or other objects available in the database yet because the replication jobs have not run yet. Now expand the SQL Server Agent, go to Jobs and search for the job that creates the snapshot; rename this job to "CreateSnapshot". Then search for the job that distributes the snapshot and rename it to "DistributeSnapshot".

    Create an SSIS package that executes the snapshot replication

    We now need an SSIS package that will take care of the execution of both jobs. The CreateSnapshot job needs to execute and finish before the DistributeSnapshot job runs. After the DistributeSnapshot job has started, the package needs to wait until it has finished before the package execution completes. The Execute SQL Server Agent Job Task is designed to execute SQL Agent jobs from SSIS. Unfortunately this SSIS task only executes the job and reports back whether the job started successfully or not; it does not report whether the job actually completed with success or failure, because these jobs are asynchronous. The SSIS package I've created does the following:

    - It runs the CreateSnapshot job.
    - It checks every 5 seconds, in a For Loop, whether the job has completed.
    - When the CreateSnapshot job is completed, it starts the DistributeSnapshot job.
    - Again, it waits until the snapshot is delivered before the package finishes successfully.

    Quite simple, and the package is ready to use as a standalone extract mechanism. After executing the package, the replicated tables are added to the subscriber database and are filled with data. Download the SSIS package here (SSIS 2008).

    Conclusion

    In this example I only replicated 5 tables; I could create an SSIS package that does the same in approximately the same amount of time. But if I replicated all the 70+ AdventureWorks tables I would save a lot of time and boring work! With Replication Services you also benefit from the fact that schema changes are applied automatically, which means your entire extract phase won't break. Because a snapshot is created using the bcp (bulk copy) utility, it is also quite fast, so the performance should be good. A disadvantage of using snapshot replication as an extraction tool is the limitation on source systems: you can only choose SQL Server or Oracle databases to act as a publisher. So if you plan to build an extract phase for your ETL process that involves a lot of tables, think about Replication Services; it could save you a lot of time, and thanks to the extract SSIS package I've created you can fit it perfectly into your usual SSIS ETL process.
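    The waiting logic inside that SSIS package can also be expressed in plain T-SQL, which may help clarify what the For Loop is doing. A minimal sketch, assuming the renamed job names above; the 5-second delay and the 60-iteration safety cap are arbitrary choices, and a production version should also check the job's outcome in msdb.dbo.sysjobhistory rather than just waiting:

        -- Start the snapshot job, then poll until no session of it is running.
        EXEC msdb.dbo.sp_start_job @job_name = N'CreateSnapshot';

        DECLARE @running INT = 1, @tries INT = 0;
        WHILE @running > 0 AND @tries < 60
        BEGIN
            WAITFOR DELAY '00:00:05';
            SELECT @running = COUNT(*)
            FROM msdb.dbo.sysjobactivity AS ja
            JOIN msdb.dbo.sysjobs AS j ON j.job_id = ja.job_id
            WHERE j.name = N'CreateSnapshot'
              AND ja.start_execution_date IS NOT NULL
              AND ja.stop_execution_date IS NULL;   -- still running
            SET @tries += 1;
        END

        -- Snapshot generated; now deliver it to the subscriber.
        EXEC msdb.dbo.sp_start_job @job_name = N'DistributeSnapshot';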

  • Oracle SOA Partner Community Forum Lisbon, Portugal - 21/Apr/10

    - by Claudia Costa
    We would like to invite you to attend our SOA Partner Community Forum, which will be held in Lisbon on April 21, 2010.

    The Oracle SOA Partner Community Forum is a wonderful opportunity to:
    - Meet with Oracle SOA and BPM Product Management
    - Exchange thoughts and knowledge with SOA and BPM experts
    - Learn from successful SOA implementations
    - Network within the Oracle SOA Partner Community

    During this highly informative event you can learn about partner success stories, participate in an array of breakout sessions, exchange information with other partners and enjoy a vibrant panel discussion. Places are limited, so register today. Registration only takes a few minutes and is free of charge. By registering you confirm that you will attend the event. The seminar is free; however, in the event that you cancel your registration after April 16th, 2010, Oracle may request that you pay a late cancellation fee of € 150. Please visit our website for further information. Alternatively, if you require assistance or have any queries please contact Jürgen Kress.

    Agenda
    10:00  Welcome & Introduction
    10:15  SOA Cloud presentation
    11:15  SOA Partner Sales Campaign
    12:30  Lunch break
    13:15  Partner Reference Case
    14:15  BPMN 2.0
    15:00  Cocktail reception

    Become a member of the SOA Partner Community - free of charge! Please first login at http://partner.oracle.com and then visit http://www.oracle.com/goto/emea/soa. The goal of the SOA Partner Community is to provide you with the latest information on Oracle's SOA and BPM offerings and to facilitate the exchange of experience around Oracle SOA between community members.

    The SOA Partner Community keeps you informed about:
    - News & Events
    - Product Information
    - Education & Certification
    - Analyst Reports
    - Marketing & Sales
    - Monthly SOA Newsletter
    - Monthly SOA Webcast
    - SOA Community Forum

    Location: Lagoas Park Hotel, 2740-245 Porto Salvo, Oeiras

  • How to Write an E-Book

    A few days ago my attention was drawn to a tweet spat between Karl Seguin and Scott Hanselman around the relaunch of ASP.NET and the title element in HTML. A tempest in a teapot, of course, but worthwhile as I did some googling on Karl and found his blog at codebetter.com. From there it was a short jump to his free e-book, The Foundations of Programming.

    This short book is distinguished by its orientation (opinionated), its tone (mentoring) and its honesty, which is refreshing. In Foundations, Karl covers what he considers the basics of programming and good design, including test driven development, dependency injection and domain driven design. Karl is opinionated, as the topics suggest, and doesn't bother to pretend that he doesn't think what he's suggesting is the better way, not just another way. He is aligned with ALT.NET, and gives an excellent overview of what that means - an overview more enlightening than the ALT.NET site. ALT.NET has its critics, but presenting a strong opinion grabbed my attention as a reader. It is a short walk from opinionated to hectoring, but Karl held my attention without insulting me. He takes the time to explain, with examples, from the ground up, the problems that test driven development and dependency injection solve. So for dependency injection he builds it up from no DI, to a hand-crafted approach, to a full-fledged DI framework. This approach is more persuasive than merely prescriptive, and it engaged me as a reader to follow along with his train of thought.

    Foundations is not as pedantic as I am making it sound. The final ingredient in Karl's mix is honesty. He acknowledges that sometimes unit testing does cost more up front and take more time. He admits that sometimes he designs something a certain way just to be testable. He also warns that focusing too much on DI and loose coupling can lead to the poor design you are trying to avoid. These points add depth to his argument; I could tell he's speaking from experience, with some hard-won lessons.

    I enjoyed The Foundations of Programming. When I was done with it, I was amazed how much I got out of its 80-some pages. It is a rarity these days to come across something worthwhile that is longer than a tweet but shorter than a tome. Well done, Karl.

    -- Relevant Links --
    The now titled and newly relaunched page in question: http://www.asp.net/
    The pleasantly confusing ALT.NET homepage: http://altdotnet.org/
    A longer review, with details, chapter listings and all that important stuff: http://accidentaltechnologist.com/book-reviews/book-review-foundations-of-programming-by-karl-seguin/

  • JRockit Virtual Edition Debug Key

    - by changjae.lee
    There are a few keys that can help with debugging of the JRockitVE environment in the console. You can type each key in the JRockitVE console to see what's happening under the hood.

    key '1' : System information
    key '5' : Enable shutdown
    key '7' : Start JRockit Management Server (port 7091)
    key '8' : Statistics counters
    key '9' : Full thread dump
    key '0' : Status of debug-keys

    Below is sample output from each key.

    Debug-key '1' pressed
        ============ JRockitVE System Information ============
        JRockitVE version  : 11.1.1.3.0-67-131044
        Kernel version     : 6.1.0.0-97-131024
        JVM version        : R27.6.6-28_o-125824-1.6.0_17-20091214-2104-linux-ia32
        Hypervisor version : Xen 3.4.0
        Boot state         : 0x007effff
        Uptime             : 0 days 02:04:31
        CPU                : uniprocessor @2327 Mhz
        CPU usage          : 0% ctx/s: 285 preempt/s: 0 migrations/s: 0
        Physical pages     : 82379/261121 (321/1020 MB)
        Network info       : 10.179.97.64 (10.179.97.64/255.255.254.0)
        GateWay            : 10.179.96.1
        MAC address        : 00:16:3e:7e:dc:78
        Boot options       :
          vfsCwd        : /application/user_projects/domains/wlsve_domain
          mainArgs      : java -javaagent:/jrockitve/services/sshd/sshd.jar -cp /jrockitve/jrockit/lib/tools.jar:/jrockitve/lib/common.jar:/application/patch_wls1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/application/wlserver_10.3/server/lib/weblogic.jar -Dweblogic.Name=WlsveAdmin -Dweblogic.Domain=wlsve_domain -Dweblogic.management.username=weblogic -Dweblogic.management.password=welcome1 -Dweblogic.management.GenerateDefaultConfig=true weblogic.Server
          consLog       : /jrockitve/log/jrockitve.log
          mounts        : ext2 / dev0;
          posixLocale   : en_US
          posixTimezone : Asia/Seoul
          posixEncoding : ISO-8859-1
        Local disk         : Size: 1024M, Used: 728M, Free: 295M
        ========================================================

    Debug-key '5' pressed
        Shutdown enabled.

    Debug-key '7' pressed
        [JRockit] Management server already started. Ignoring request.

    Debug-key '8' pressed
        Starting stat recording

    Debug-key '8' pressed
        ========= Statistics Counters for the last second =========
        dev.eth0_rx.cnt                : 22 packets
        dev.eth0_rx_bytes.cnt          : 2704 bytes
        dev.net_interrupts.cnt         : 22 interrupts
        evt.timer_ticks.cnt            : 123 ticks
        hyper.priv_entries.cnt         : 144 entries
        schedule.context_switches.cnt  : 271 switches
        schedule.idle_cpu_time.cnt     : 997318849 nanoseconds
        schedule.idle_cpu_time_0.cnt   : 997318849 nanoseconds
        schedule.total_cpu_time.cnt    : 1000031757 nanoseconds
        time.system_time.cnt           : 1000 ns
        time.timer_updates.cnt         : 123 updates
        time.wallclock_time.cnt        : 1000 ns
        =======================================

    Debug-key '9' pressed
        ===== FULL THREAD DUMP ===============
        Fri Jun 4 08:22:12 2010
        BEA JRockit(R) R27.6.6-28_o-125824-1.6.0_17-20091214-2104-linux-ia32

        "Main Thread" id=1 idx=0x4 tid=1 prio=5 alive, in native, waiting
        -- Waiting for notification on: weblogic/t3/srvr/T3Srvr@0x646ede8[fat lock]
        at jrockit/vm/Threads.waitForNotifySignal(JLjava/lang/Object;)Z(Native Method)
        at java/lang/Object.wait(J)V(Native Method)
        at java/lang/Object.wait(Object.java:485)
        at weblogic/t3/srvr/T3Srvr.waitForDeath(T3Srvr.java:919)
        ^-- Lock released while waiting: weblogic/t3/srvr/T3Srvr@0x646ede8[fat lock]
        at weblogic/t3/srvr/T3Srvr.run(T3Srvr.java:479)
        at weblogic/Server.main(Server.java:67)
        at jrockit/vm/RNI.c2java(IIIII)V(Native Method)
        -- end of trace

        "(Signal Handler)" id=2 idx=0x8 tid=2 prio=5 alive, in native, daemon

        Open lock chains
        ================
        Chain 1:
        "ExecuteThread: '0' for queue: 'weblogic.socket.Muxer'" id=23 idx=0x50 tid=20
            waiting for java/lang/String@0x630c588 held by:
        "ExecuteThread: '1' for queue: 'weblogic.socket.Muxer'" id=24 idx=0x54 tid=21 (active)
        ===== END OF THREAD DUMP ===============

    Debug-key '0' pressed
        Debug-keys enabled

    Happy Cloud Walking :)

  • Victor Grazi, Java Champion!

    - by Tori Wieldt
    Congratulations to Victor Grazi, who has been made a Java Champion! He was nominated by his peers and selected as a Java Champion for his experience as a developer and his work in the Java and open source communities.

    Grazi is a Java evangelist and serves on the Executive Committee of the Java Community Process, representing Credit Suisse - the first non-technology vendor on the JCP. He also arranges the NY Java SIG meetings at Credit Suisse's New York campus each month, which he says has been a valuable networking opportunity. He is also the spec lead for JSR 354, the Java Money and Currency API.

    Grazi has been building real-time financial systems in Java since JDK version 1.02! In 1996, when the internet was just starting to happen, Grazi started a dot com called Supermarkets to Go that provided an online shopping presence to supermarkets and grocers. Grazi wrote most of the code, which was a great opportunity for him to learn Java and UI development, as well as database management. Next, he went to work at Bank of NY building a trading system. He studied for Java certification, and he noted that getting his certification was a game changer because it helped him start to learn the nuances of the Java language. He has held other development positions too: "You may have noticed that you don't get as much junk mail from Citibank as you used to - that is thanks to one of my projects!" he told us. Grazi joined Credit Suisse in 2005 and is currently a Vice President on the central architecture team.

    Grazi is proud of his open source project, Java Concurrent Animated, a series of animations that visualize the functionality of the components in the java.util.concurrent library. "It has afforded me the opportunity to speak around the globe," he says, and because of it he has discovered that he really enjoys doing public presentations. He is a fine addition to the Java Champions program.

    The Java Champions are an exclusive group of passionate Java technology and community leaders who are community-nominated and selected under a project sponsored by Oracle. Nominees are named and selected through a peer review process. Java Champions get the opportunity to provide feedback, ideas, and direction that will help Oracle grow the Java Platform. This interchange may be in the form of technical discussions and/or community-building activities with Oracle's Java Development and Developer Program teams.

  • Packaging Swing apps with integrated JavaFX content

    - by igor
    JavaFX provides a lot of interesting capabilities for developing rich client applications in Java, but what if you are working on an existing Swing application and you want to take advantage of these new features? Maybe you want to use one or two controls like the LineChart or a MediaView. Maybe you want to embed a large scene graph as an initial step in porting your application to FX. A hybrid Swing/FX application might just be the answer.

    Developing a hybrid Swing + JavaFX application is not terribly difficult, but until recently the deployment of hybrid applications has not been as simple as that of a "pure" JavaFX application. The existing tools focused on packaging FX applications, or Swing applications - they did not account for hybrid applications. But with JavaFX 2.2 the tools include support for this hybrid application use case.

    Solution

    In JavaFX 2.2 we extended the packaging ant tasks to greatly simplify deploying hybrid applications. You now use the same deployment approach as you would for pure JavaFX applications: just bundle your main application jar with the fx:jar ant task and then generate html/jnlp files using fx:deploy. The only difference is setting the toolkit attribute on the fx:application tag, as shown below:

        <fx:application id="swingFXApp" mainClass="${main.class}" toolkit="swing"/>

    The value of ${main.class} in the example above is your application class, which has a main method. It does not need to extend the JavaFX Application class (a minimal sketch of such a class appears at the end of this post).

    The resulting package supports the same set of execution modes as a package for a JavaFX application, although the packages which are created are not identical to those created for a pure FX application. You will see two JNLP files generated in the case of a hybrid application: one for use from a Swing applet and another for the webstart launch. Note that these improvements do not alter the set of features available to Swing applications. The packaging tools just make it easier to use the advanced features of JavaFX in your Swing application. The same limits still apply; for example, a Swing application can not use JavaFX Preloaders, and code changes are necessary to support HTML splash screens.

    Why should I use the JavaFX ant tasks for packaging my Swing application?

    While using the FX packaging tools for a Swing application may seem like a mismatch at face value, there are some really good reasons to use this approach. The primary justification for our packaging tools is to simplify the creation of your application artifacts and to reduce manual errors. Plus, no one should have to write JNLP by hand. Some specific benefits include:

    - Your application jar will include a launcher program. This improves your standalone launch by checking for the JavaFX runtime, guiding the user through any necessary installations, and setting the system proxy for Java.
    - The ant tasks will generate JNLP and HTML files for your Swing app. This avoids learning unnecessary details about JNLP and eliminates the error-prone hand editing of JNLP files; it simplifies using advanced features like embedding JNLP and signing jars as BLOBs to improve launch performance (you can also embed the signing certificate details to improve the user's experience); and it allows the use of web page templates to inject the generated code directly into your actual web page instead of forcing you to copy/paste the generated code snippets.

    What about native packaging?

    Absolutely! The very same ant task can generate a native bundle for a Swing application with JavaFX content. Try running one of these sample native bundles for the "SwingInterop" FX example: exe and dmg. I also used another feature in these examples: a click-through license agreement for .exe installers and OS X DMG drag installers.

    Small Caveat

    This packaging procedure is optimized around using the JavaFX packaging tools for your entire Swing application. If you are trying to embed JavaFX content into an existing project (with an existing build/packaging process) then you may need to experiment in order to find the best way to integrate the JavaFX packaging steps into your existing build procedure. As long as you can use ant in your build process this should be a workable approach. In some cases the solution could be less than ideal. For example, you need to use fx:jar to package your main jar file in order to produce a double-clickable jar or a native bundle. The jar will be created from scratch, but you may already be creating the main jar file with a custom manifest; this may lead to some redundant steps in your build process. Hopefully the benefits will outweigh the problems. This is an area of ongoing development for the team, and we will continue to refine and improve both the tools and the process. Please share your experiences and suggestions with us. You can comment here on the blog or file issues to JIRA.

    Sample code

    Here is the full ant code used to package SwingInterop. You can grab the latest JavaFX samples and try it yourself:

        <target name="-post-jar">
            <taskdef resource="com/sun/javafx/tools/ant/antlib.xml"
                     uri="javafx:com.sun.javafx.tools.ant"
                     classpath="${javafx.tools.ant.jar}"/>

            <!-- Mark application as Swing-based -->
            <fx:application id="swingFXApp" mainClass="${main.class}" toolkit="swing"/>

            <!-- Create doubleclickable jar file with embedded launcher -->
            <fx:jar destfile="${dist.jar}">
                <fileset dir="${build.classes.dir}"/>
                <fx:application refid="swingFXApp" name="SwingInterop"/>
                <manifest>
                    <attribute name="Implementation-Vendor" value="${application.vendor}"/>
                    <attribute name="Implementation-Title" value="${application.title}"/>
                    <attribute name="Implementation-Version" value="1.0"/>
                </manifest>
            </fx:jar>

            <!-- Sign application jar. Use new self-signed certificate -->
            <delete file="${build.dir}/test.keystore"/>
            <genkey alias="TestAlias" storepass="xyz123"
                    keystore="${build.dir}/test.keystore"
                    dname="CN=Samples, OU=JavaFX Dev, O=Oracle, C=US"/>
            <fx:signjar keystore="${build.dir}/test.keystore" alias="TestAlias" storepass="xyz123">
                <fileset file="${dist.jar}"/>
            </fx:signjar>

            <!-- Generate JNLPs, HTML and native bundles -->
            <fx:deploy width="960" height="720" includeDT="true" nativeBundles="all"
                       outdir="${basedir}/${dist.dir}" embedJNLP="true"
                       outfile="${application.title}">
                <fx:application refId="swingFXApp"/>
                <fx:resources>
                    <fx:fileset dir="${basedir}/${dist.dir}" includes="SwingInterop.jar"/>
                </fx:resources>
                <fx:permissions/>
                <info title="Sample app: ${application.title}" vendor="${application.vendor}"/>
            </fx:deploy>
        </target>
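    For reference, here is a minimal sketch of the kind of hybrid main class ${main.class} might point at. This is not the SwingInterop source, just an illustration: a plain Swing class with a main method (no Application subclass), embedding FX content via JFXPanel. Written against Java 6/7-era syntax to match the JavaFX 2.2 timeframe:

        import javax.swing.JFrame;
        import javax.swing.SwingUtilities;
        import javafx.application.Platform;
        import javafx.embed.swing.JFXPanel;
        import javafx.scene.Scene;
        import javafx.scene.control.Label;

        public class SwingFXDemo {
            public static void main(String[] args) {
                // Swing components must be created on the Event Dispatch Thread.
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        JFrame frame = new JFrame("Hybrid Swing + JavaFX");
                        // Creating the first JFXPanel also bootstraps the FX runtime.
                        final JFXPanel fxPanel = new JFXPanel();
                        frame.add(fxPanel);
                        frame.setSize(300, 200);
                        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                        frame.setVisible(true);
                        // FX scene graph must be built on the FX application thread.
                        Platform.runLater(new Runnable() {
                            public void run() {
                                fxPanel.setScene(new Scene(new Label("JavaFX content")));
                            }
                        });
                    }
                });
            }
        }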

  • Indie Software Developers - How do I handle taxes?

    - by Connor
    I apologize if this is the wrong site to post on; perhaps someone could point me to the proper place if it is not.

    Hello, I am 17 years old and currently develop applications/games for Android and iPhone, develop websites, and code a variety of my own projects. I have been very fortunate and have made a large amount of money, and continue to make money online, to the point where I do not need a stable job, though I'd like to get one after college. I've never held a job anywhere, and have never had to pay taxes.

    I'm coming into a lot of issues and I am quite confused. I get money from MANY sources - 15 different advertisement networks(!), 4 different payment processors, 5 different affiliate networks and a variety of other sources. All of them pay to different places and at different times (checking account, PayPal, reloadable debit card, etc.). I essentially have a list in Notepad with names and login information for each source. I have also created a PHP script that uses cURL to grab all the revenue from each service, add it all up, then text me every few hours so I can keep track. It's a mess, but it's working OK, and I can create custom reports (for the IRS?).

    But enough of that; my questions are about taxes in the US, and how indie developers handle it all. I'm at slightly over $250k so far this year, with negligible earnings last year. I have it all stockpiled in a bank account and haven't touched it; I'm a bit scared to.

    - What do I file as? A sole proprietor, a business, just a regular person?
    - How can I handle all of the different revenue sources? (AdSense, CJ, LinkShare) So far none of them have sent me any paperwork on taxes, and I've read that I'm supposed to pay taxes quarterly. Do I need paperwork from EACH source to file? Or can I just say I got $x total and that would be it?
    - What percentage do you pay of total earnings, on average?
    - Should I create an LLC? A corporation? Or stay as an individual developer? What would be the cheapest option?
    - Could I go to jail? I haven't touched the money except a few dollars to help my parents pay the mortgage once.

    Any insight would be great. My parents have no idea what I should do; neither has any form of higher education or a high school diploma. They just live day by day with simple jobs. I appreciate any help or experience with this.

  • Improving Manageability of Virtual Environments

    - by Jeff Victor
Boot Environments for Solaris 10 Branded Zones

Until recently, Solaris 10 Branded Zones on Solaris 11 suffered one notable regression: Live Upgrade did not work. The individual packaging and patching tools worked correctly, but the ability to upgrade Solaris while the production workload continued running did not exist. A recent Solaris 11 SRU (Solaris 11.1 SRU 6.4) restored most of that functionality, although with a slightly different concept, different commands, and without all of the feature details. This new method lets you create and manage multiple boot environments (BEs) for a Solaris 10 Branded Zone, modify the active or any inactive BE, and do all of this while the production workload continues to run.

Background

In case you are new to Solaris: Solaris includes a set of features that enables you to create a bootable Solaris image, called a Boot Environment (BE). This newly created image can be modified while the original BE is still running your workload(s). There are many benefits, including improved uptime and the ability to reboot into (or downgrade to) an older BE if a newer one has a problem. In Solaris 10 this set of features was named Live Upgrade. Solaris 11 applies the same basic concepts to the new packaging system (IPS), but there isn't a specific name for the feature set; the features are simply part of IPS. Solaris 11 Boot Environments are not discussed in this blog entry.

Although a Solaris 10 system can have multiple BEs, until recently a Solaris 10 Branded Zone (BZ) in a Solaris 11 system did not have this ability. That limitation was addressed recently, and the enhancement is the subject of this blog entry.

The new implementation uses two concepts. The first is the use of a ZFS clone for each BE. This makes it very easy to create a BE, or many BEs. This is a distinct advantage over the Live Upgrade feature set in Solaris 10, which had a practical limit of two BEs on a system when using UFS. The second new concept is a very simple mechanism to indicate the BE that should be booted: a ZFS property. The new ZFS property is named com.oracle.zones.solaris10:activebe (isn't that creative?). It's important to note that the property is inherited from the original BE's file system by any BEs you create; in other words, all BEs in one zone have the same value for that property. When the (Solaris 11) global zone boots the Solaris 10 BZ, it boots the BE whose name is stored in the activebe property.

Here is a quick summary of the actions you can use to manage these BEs:

- To create a BE: create a ZFS clone of the zone's root dataset.
- To activate a BE: set the ZFS property of the root dataset to indicate the BE.
- To add a package or patch to an inactive BE: mount the inactive BE, add packages or patches to it, then unmount it.
- To list the available BEs: use the "zfs list" command.
- To destroy a BE: use the "zfs destroy" command.

Preparation

Before you can use the new features, you will need a Solaris 10 BZ on a Solaris 11 system. You can use these three steps - on a real Solaris 11.1 server or in a VirtualBox guest running Solaris 11.1 - to create a Solaris 10 BZ. The Solaris 11.1 environment must be at SRU 6.4 or newer.

1. Create a flash archive on the Solaris 10 system:

    s10# flarcreate -n s10-system /net/zones/archives/s10-system.flar

2. Configure the Solaris 10 BZ on the Solaris 11 system:

    s11# zonecfg -z s10z
    Use 'create' to begin configuring a new zone.
    zonecfg:s10z> create -t SYSsolaris10
    zonecfg:s10z> set zonepath=/zones/s10z
    zonecfg:s10z> exit
    s11# zoneadm list -cv
      ID NAME     STATUS      PATH         BRAND      IP
       0 global   running     /            solaris    shared
       - s10z     configured  /zones/s10z  solaris10  excl

3. Install the zone from the flash archive:

    s11# zoneadm -z s10z install -a /net/zones/archives/s10-system.flar -p

You can find more information about the migration of Solaris 10 environments to Solaris 10 Branded Zones in the documentation. The rest of this blog entry demonstrates the commands you can use to accomplish the aforementioned actions related to BEs.

New features in action

Note that the demonstration of the commands occurs in the Solaris 10 BZ, as indicated by the shell prompt "s10z#". Many of these commands can be performed in the global zone instead, if you prefer; if you perform them in the global zone, you must change the ZFS file system names.

Create

The only complicated action is the creation of a BE. In the Solaris 10 BZ, create a new "boot environment" - a ZFS clone. You can assign any name to the final portion of the clone's name, as long as it meets the requirements for a ZFS file system name.

    s10z# zfs snapshot rpool/ROOT/zbe-0@snap
    s10z# zfs clone -o mountpoint=/ -o canmount=noauto rpool/ROOT/zbe-0@snap rpool/ROOT/newBE
    cannot mount 'rpool/ROOT/newBE' on '/': directory is not empty
    filesystem successfully created, but not mounted

You can safely ignore that message: we already know that / is not empty! We have merely told ZFS that the default mountpoint for the clone is the root directory.

List the available BEs and active BE

Because each BE is represented by a clone of the rpool/ROOT dataset, listing the BEs is as simple as listing the clones.

    s10z# zfs list -r rpool/ROOT
    NAME               USED  AVAIL  REFER  MOUNTPOINT
    rpool/ROOT        3.55G  42.9G    31K  legacy
    rpool/ROOT/zbe-0     1K  42.9G  3.55G  /
    rpool/ROOT/newBE  3.55G  42.9G  3.55G  /

The output shows that two BEs exist. Their names are "zbe-0" and "newBE". You can tell Solaris that one particular BE should be used when the zone next boots by using a ZFS property. Its name is com.oracle.zones.solaris10:activebe, and its value is the name of the clone that contains the BE that should be booted.

    s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
    NAME        PROPERTY                             VALUE  SOURCE
    rpool/ROOT  com.oracle.zones.solaris10:activebe  zbe-0  local

Change the active BE

When you want to change the BE that will be booted next time, just change the activebe property on the rpool/ROOT dataset.

    s10z# zfs set com.oracle.zones.solaris10:activebe=newBE rpool/ROOT
    s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
    NAME        PROPERTY                             VALUE  SOURCE
    rpool/ROOT  com.oracle.zones.solaris10:activebe  newBE  local
    s10z# shutdown -y -g0 -i6

After the zone has rebooted:

    s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
    rpool/ROOT  com.oracle.zones.solaris10:activebe  newBE  local
    s10z# zfs mount
    rpool/ROOT/newBE   /
    rpool/export       /export
    rpool/export/home  /export/home
    rpool              /rpool

Mount the original BE to see that it's still there.

    s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0
    s10z# ls /mnt
    Desktop    Documents  S10Flar  TT_DB  bin  boot  cdrom  dev  etc
    export     export.backup.20130607T214951Z  home  kernel  lib
    lost+found mnt  net  opt  platform  proc  rpool  sbin  system
    tmp        usr  var

Patch an inactive BE

At this point, you can modify the original BE. If you would prefer to modify the new BE instead, restore the original value of the activebe property and reboot, then mount the new BE on /mnt (or another empty directory) and modify it.

Let's mount the original BE so we can modify it. (The first command is only needed if you haven't already mounted that BE.)

    s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0
    s10z# patchadd -R /mnt -M /var/sadm/spool 104945-02

Note that the typical usage will be:

1. Create a BE.
2. Mount the new (inactive) BE.
3. Use the package and patch tools to update the new BE.
4. Unmount the new BE.
5. Reboot.

Delete an inactive BE

ZFS clones are children of their parent file systems. In order to destroy the parent, you must first "promote" the child; this reverses the parent-child relationship. (For more information on this, see the documentation.) The original rpool/ROOT file system is the parent of the clones that you create as BEs, so in order to destroy an earlier BE that is the parent of other BEs, you must first promote one of the child BEs to be the ZFS parent. Only then can you destroy the original BE. Fortunately, this is easier to do than to explain:

    s10z# zfs promote rpool/ROOT/newBE
    s10z# zfs destroy rpool/ROOT/zbe-0
    s10z# zfs list -r rpool/ROOT
    NAME              USED  AVAIL  REFER  MOUNTPOINT
    rpool/ROOT       3.56G   269G    31K  legacy
    rpool/ROOT/newBE 3.56G   269G  3.55G  /

Documentation

This feature is so new that it is not yet described in the Solaris 11 documentation. However, MOS note 1558773.1 offers some details.

Conclusion

With this new feature, you can add packages and patches to the boot environments of a Solaris 10 Branded Zone. This ability improves the manageability of these zones and makes their use more practical. It also means that you can use the existing P2V tools with earlier Solaris 10 updates, and modify the environments after they become Solaris 10 Branded Zones.
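As an aside: because every step above is just a zfs invocation, the create-and-activate workflow is easy to script. The fragment below is a minimal, hypothetical sketch in Java that shells out to the same zfs commands shown above via ProcessBuilder. The dataset and BE names are the ones used in this example and are assumptions you would replace, and the program must run inside the zone (or be adjusted for the global zone's file system names) with enough privilege to manage the datasets.

    import java.io.IOException;

    // Minimal sketch, assuming dataset names rpool/ROOT/zbe-0 and rpool/ROOT/newBE.
    // It simply runs the zfs commands demonstrated above; it is not an official tool.
    public class CreateBootEnv {
        static int run(String... cmd) throws IOException, InterruptedException {
            // Inherit stdout/stderr so zfs messages remain visible.
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            return p.waitFor();
        }

        public static void main(String[] args) throws Exception {
            run("zfs", "snapshot", "rpool/ROOT/zbe-0@snap");
            // The clone may print "cannot mount ... directory is not empty";
            // as noted above, that message is harmless.
            run("zfs", "clone", "-o", "mountpoint=/", "-o", "canmount=noauto",
                "rpool/ROOT/zbe-0@snap", "rpool/ROOT/newBE");
            // Point the zone at the new BE for the next boot.
            run("zfs", "set", "com.oracle.zones.solaris10:activebe=newBE", "rpool/ROOT");
        }
    }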

    Read the article

  • Using Open MQ as an Oracle CEP Event Source

    - by seth.white
I helped an Oracle CEP customer recently who wanted to use Open MQ as an event source for their Oracle CEP application. In this case, the Oracle CEP application was being used to provide monitoring for an electronic commerce website; however, the steps for configuring Open MQ are entirely independent of the application logic. I thought I would list the configuration steps in a blog post in case they might help others in the future. Note that although the Oracle CEP documentation states that only WebLogic and Tibco JMS are "officially" supported, any JMS implementation that provides a Java client should work with Oracle CEP.

The first step is to add an adapter to the application's EPN. This can be done in the usual way, using the Eclipse IDE. The end result is something like the following bit of configuration in the application's Spring application context. Note that the provider attribute value of 'jms-inbound' specifies that the out-of-the-box JMS adapter is being used.

    <wlevs:adapter id="helloworldAdapter" provider="jms-inbound">
    </wlevs:adapter>

Next, configure the inbound adapter so that it can connect to Open MQ in the Oracle CEP configuration file (config.xml). The snippet below provides an example of what this configuration should look like. The exact values specified for the jndi-provider-url, jndi-factory, connection-jndi-name, and destination-jndi-name elements will depend on your Open MQ configuration. For example, if the name of your Open MQ topic destination is 'ElectronicCommerceTopic', then you would specify that as the destination-jndi-name. The name of your Open MQ connection factory goes in the connection-jndi-name element. In my simple example, I also specify an event-type element so that the out-of-the-box JMS adapter will attempt to automatically convert incoming messages to events of type HelloWorldEvent. In a more complex application, one would configure a custom converter on the JMS adapter to convert from messages to events. The Oracle CEP 11.1.3 documentation describes how to do this.

    <jms-adapter>
      <name>helloworldAdapter</name>
      <event-type>HelloWorldEvent</event-type>
      <jndi-provider-url>file:///C:/Temp</jndi-provider-url>
      <jndi-factory>com.sun.jndi.fscontext.RefFSContextFactory</jndi-factory>
      <connection-jndi-name>YourJMSConnectionFactoryName</connection-jndi-name>
      <destination-jndi-name>YourJMSDestinationName</destination-jndi-name>
    </jms-adapter>

Finally, one needs to package the client-side Open MQ jars so that the classes they contain are available to the Oracle CEP runtime. The recommended way of doing this in the Oracle CEP 11.1.3 release is to package the classes as a library module, or simply place them in the application bundle. The advantage of deploying the classes as a library module is that they are available to any application that wants to connect to Open MQ. In my case, I packaged the classes in my application bundle. A best practice when you want to include additional jars in your application bundle is to create a 'lib' directory in your Eclipse project and then copy the required jars into that directory. Then use the support that Eclipse provides to add the jars to the bundle classpath (which makes the classes part of your application in the same way that regular application classes are), and export all of the classes from your application bundle so that they are available to the Oracle CEP server runtime. The screenshot below illustrates how this is done in Eclipse.
The bundle classpath contains two Open MQ jars and all packages in the jars are exported. Finally, import the javax.jms and javax.naming packages into the application module, as these are needed by the Open MQ classes. The screenshot below shows the complete list of package imports for my sample application. Once you have completed these steps, you should be able to build and deploy your application and begin receiving inbound messages from Open MQ.
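When setting this up, it can also help to verify the Open MQ side independently before wiring everything into the EPN. The sketch below is a standalone Java publisher built on assumptions from the configuration above: it uses the same file-system JNDI store (file:///C:/Temp) and the placeholder YourJMSConnectionFactoryName and YourJMSDestinationName object names, which you would substitute with your own. It needs the Open MQ client jars (imq.jar and jms.jar) plus the file-system JNDI provider (fscontext.jar) on the classpath.

    import java.util.Hashtable;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Destination;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    // Minimal smoke test: publish one text message to the destination that the
    // CEP inbound JMS adapter listens on. JNDI names are placeholders.
    public class OpenMqSmokeTest {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "com.sun.jndi.fscontext.RefFSContextFactory");
            env.put(Context.PROVIDER_URL, "file:///C:/Temp");
            Context ctx = new InitialContext(env);

            ConnectionFactory cf =
                    (ConnectionFactory) ctx.lookup("YourJMSConnectionFactoryName");
            Destination dest = (Destination) ctx.lookup("YourJMSDestinationName");

            Connection conn = cf.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(dest);
                TextMessage msg = session.createTextMessage("hello, world");
                producer.send(msg);
                System.out.println("sent: " + msg.getText());
            } finally {
                conn.close();
            }
        }
    }

If the Oracle CEP application's adapter is running, the message should arrive as a HelloWorldEvent; if not, this at least confirms the JNDI store and broker are configured correctly.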

    Read the article

  • ETPM Environment Health Monitoring Tools

    - by Paula Speranza-Hadley
This post provides some useful information about the tools typically used by Oracle ETPM implementations for performance tuning and analysis. This includes tools to monitor and gather performance information and statistics on the database, application server, and client (browser).

Enterprise Monitoring Tools

Oracle Enterprise Manager - OEM Grid Control comes with a comprehensive set of performance and health metrics that allow monitoring of key components in your environment, such as applications, application servers and databases, as well as the back-end components on which they rely, such as hosts, operating systems and storage.

Tools for the Database

Oracle Diagnostics Pack:
- Automatic Workload Repository (AWR) - gathers statistics from memory about the Time Model (DB Time), wait events, Active Session History and high-load SQL queries.
- Automatic Database Diagnostic Monitor (ADDM) - this self-diagnostic software is built into the database. It examines and analyzes data captured in AWR to determine possible performance issues, locates the root cause of each issue, provides recommendations for correcting it and quantifies the expected benefit.

Oracle Database Tuning Pack:
- SQL Tuning Advisor - enables you to submit one or more SQL statements as input and receive output in the form of specific advice or recommendations on how to tune the statements. The recommendations relate to collection of statistics on objects, creation of new indexes and restructuring of SQL statements.
- SQL Access Advisor - enables you to optimize the data access paths of SQL queries by recommending a proper set of materialized views, indexes and partitions for a given SQL workload.

Tools for the Application Server

WebLogic Console - a web-based user interface used to configure and control a set of WebLogic servers or clusters (i.e. a "domain"). In any logical group of WebLogic servers there must exist one admin server, which hosts the WebLogic Admin Console application and manages the associated configuration files. WebLogic administrators use the Administration Console for a number of tasks, including:
- Starting and stopping WebLogic servers or entire clusters.
- Configuring server parameters, security, database connections and deployed applications.
- Viewing server status, health and metrics.

YourKit for profiling - helps analyze synchronization issues, including:
- Which threads were calling wait(), and for how long.
- Which threads were blocked attempting to acquire a monitor held by another thread (synchronized methods/blocks), and for how long.

Tools for the Client

Fiddler - allows you to inspect HTTP traffic logs, debug and set breakpoints.
Firebug - allows you to inspect and edit HTML, monitor network activity and debug JavaScript.

    Read the article

  • SQL Saturday #162 Cambridge

    - by Most Valuable Yak (Rob Volk)
Despite the efforts of American Airlines, this past weekend I attended the first SQL Saturday in the UK! Hosted by the SQLCambs Chapter of PASS and organized by Mark (b|t) & Lorraine Broadbent, ably assisted by John Martin (b|t), Mark Pryce-Maher (b|t) and other folks whose names I've unfortunately forgotten, it was held at the Crowne Plaza Hotel, which is completely surrounded by Cambridge University.

On Friday, they presented 3 pre-conference sessions given by the brilliant American cloud & DBA guru, Buck Woody (b|t), the brilliant Danish SQL Server internals guru, Mark Rasmussen (b|t), and the brilliant Scottish business intelligence guru and recent Outstanding PASS Volunteer, Jen Stirrup (b|t). While I would have loved to attend any of their pre-cons (having seen them present several times already), finances and American Airlines ultimately made that impossible. But not to worry: I caught up with them during the regular sessions and at the speaker dinner. And I got back the money they all owed me. (Actually, I owed Mark some money.)

The schedule was jam-packed: even with only 4 tracks, there were 8 regular slots, a lunch session for sponsor presentations, and a 15-minute keynote given by Buck Woody, who besides giving an excellent history of SQL Server at Microsoft (and before) also explained the source of the "unknown contact" image that appears in Outlook. Hint: it's not Buck himself.

Amazingly, and against all better judgment, I even got to present at SQL Saturday 162! I did a 5-minute Lightning Talk on regular expressions in SSMS, and then a regular 50-minute session on constraints. You can download the content for the regular session at that link, and for the regular expression presentation here.

I had a great time and a great audience for both of my sessions. You would never have guessed this was the first event for the organizers: everything went very smoothly, especially given the number of attendees and the relative smallness of the space. The event sponsors also deserve a lot of credit for making themselves fit in a small area and for staying through the entire event until the giveaways at the very end.

Overall this was one of the best SQL Saturdays I've ever attended, and I have to congratulate Mark B, Lorraine, John, Mark P-M, and all the volunteers and speakers for making this an astoundingly hard act to follow. Well done!

    Read the article

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
[Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. That understanding is correct, but the implementation is not as easy as it seems. Similarly, most people think of the lookup component as just a component that looks up additional information, and so they do not pay much attention to it - and eventually they get very bad performance. Linchpin People are database coaches and wellness experts for a data-driven world. In this 28th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) explains how to choose the right cache mode for the lookup component.

In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion. The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values. Properly configured, it is reliable and reasonably fast.

Among the many settings available on the lookup component, one of the most critical is the cache mode. This selection determines whether and how the distinct lookup values are cached during package execution. It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages and, in some cases, incorrect results.

Full Cache

The full cache mode setting is the default cache mode selection in the SSIS lookup transformation. As the name implies, full cache mode causes the lookup transformation to retrieve and store in SSIS cache the entire set of data from the specified lookup location. As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS.

The most commonly used cache mode is the full cache setting, and for good reason. The full cache setting has the most practical applications, and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well. Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution.

There are a few potential gotchas to be aware of when using full cache mode. First, you can see some performance issues - memory pressure in particular - when using full cache mode against large sets of reference data. If the table you use for the lookup is very large (either deep or wide, or perhaps both), there's going to be a performance cost associated with retrieving and caching all of that data. Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison, even if your database is set to a case-insensitive collation. This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may or may not be case-sensitive, depending on collation). There's a relatively easy workaround in which you can use the UPPER() or LOWER() function on both the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation; a short sketch of the idea follows below. Again, neither of these issues is a reason to avoid full cache mode, but both should factor into deciding whether full cache mode is right for a given situation.
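To make the case-sensitivity gotcha concrete: an in-memory cache does an exact match, the way hash maps in most languages do, rather than honoring the database collation. SSIS itself runs on .NET, but the behavior - and the normalize-the-case workaround - can be sketched in a few lines of Java (the key names here are invented purely for illustration):

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of why a full-cache lookup can "miss" rows that a case-insensitive
    // database join would match, and the usual workaround: normalize case on
    // both the pipeline side and the reference side before comparing.
    public class LookupCaseDemo {
        public static void main(String[] args) {
            Map<String, Integer> referenceCache = new HashMap<>();
            referenceCache.put("ACME Corp", 42);        // cached reference key

            // Exact, case-sensitive probe - how an in-memory cache behaves.
            System.out.println(referenceCache.get("acme corp"));  // null: no match

            // Workaround: upper-case the keys when loading the cache...
            Map<String, Integer> normalized = new HashMap<>();
            referenceCache.forEach((k, v) -> normalized.put(k.toUpperCase(), v));

            // ...and upper-case the pipeline value before probing.
            System.out.println(normalized.get("acme corp".toUpperCase())); // 42
        }
    }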
Full cache mode is ideally useful when one or more of the following conditions exist:

- The reference data set is small to moderately sized.
- The pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable.
- Each distinct key value(s) in the pipeline data set is expected to be found multiple times in that set of data.

Partial Cache

When using the partial cache setting, lookup values will still be cached, but only as each distinct value is encountered in the data flow. Initially, each distinct value will be retrieved individually from the specified source, and then cached. To be clear, this is a row-by-row lookup for each distinct key value(s).

This is a less frequently used cache setting because it addresses a narrower set of scenarios. Because each distinct key value(s) combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set. If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key column(s)). Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode.

Using partial cache mode is ideally suited for the conditions below:

- The size of the data in the pipeline (more specifically, the number of distinct key values) is relatively small.
- The lookup data is too large to effectively store in cache.
- The lookup source is well indexed to allow for fast retrieval of row-by-row values.

No Cache

As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS. As a result, every single row in the pipeline data set will require a query against the lookup source. Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused. In the real world, I don't see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful.

As such, it's critical to know your data before choosing this option. Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline.

I would recommend considering the no cache mode only when all of the below conditions are true:

- The reference data set is too large to reasonably be loaded into SSIS memory.
- The pipeline data set is small and is not expected to grow.
- There are expected to be very few or no duplicates of the key value(s) in the pipeline data set (i.e., there would be no benefit from caching these values).

Conclusion

The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow. Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution.
Know how this selection impacts your ETL loads, and you'll end up with more reliable, faster packages. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS

    Read the article

  • Keep taking the tablets

    - by Roger Hart
A guest editorial for the SimpleTalk newsletter.

So why would Red Gate build an iPad game? Is it just because tablet devices are exciting and cool? OK, maybe a little. Mostly, it was seeing that the best existing tablet and smartphone apps do simple, intuitive things, using simple, intuitive interfaces to solve single problems. That's pretty close to what we call our own "intuitively simple" approach to software. Tablets and mobile could be fantastic for us, if we can identify those problems that a tablet device can solve.

How do you create THE next tool for a completely new technology? We're glad we don't face that problem every day, but it's pretty exciting when we do. We figure we should learn by doing. We created "MobileFoo" (a Red Gate company), picked up some shiny Apple tech, and got to grips with Objective-C and life in the App Store ecosystem. The result so far is an iPad game: Stacks and Heaps. It's Rob and Marine's spin on Snakes and Ladders: instead of snakes we have unhandled exceptions, a blue screen of death, and other hazards. We wanted something compellingly geeky on mobile, and we're pretty sure we've got it. It's trudging through App Store approval as we speak, but if you want to get an idea of what it is like to switch from .NET to Objective-C, take a look at Rob's post.

Android and iOS are quite a culture change for Windows developers. So to give them a feel for the problems real users might have, we needed some real users - we offered our colleagues subsidised tablets. The only conditions were that they get used at work, and we get the feedback. Seeing tablets around the office is starting to give us some data points: Is typing the bottleneck? Will tablets ever cut it as text-entry devices, and could we fix it? Is mobile working held up by the pain of connecting to work LANs? How about security? Multi-tasking will let tablets do more. They're small, easy to use, almost instant to switch on, and connect by Wi-Fi. There's plenty on that list to make a sysadmin twitchy.

We'll find out as people spend more time working with these devices, and we'd love to hear what you think about tablet devices too. (Comments are filtered, what with the spam.)

    Read the article

  • links for 2011-02-16

    - by Bob Rhubart
On the Software Architect Trail - Software architect is the #1 job, according to a 2010 CNN-Money poll. In this article in Oracle Magazine, several members of the OTN architect community talk about the career paths that led them to this lucrative role. (tags: oracle oraclemagazine softwarearchitect)

Oracle Technology Network Architect Day: Denver - Registration opens soon for this event, to be held in Denver on March 23, 2011. (tags: oracle otn entarch)

How the Internet Gets Inside Us: The New Yorker - "It isn't just that we've lived one technological revolution among many; it's that our technological revolution is the big social revolution that we live with." - Adam Gopnik (tags: internet progress technology innovation)

The Insider Threat: Understand and Mitigate Your Risks: CSO - Webcast on February 23, 2011 at 1:00 PM EST / 10:00 AM PST. Speakers: Randy Trzeciak, lead for the CERT Insider Threat research team, and Roxana Bradescu, Director of Database Security at Oracle. (tags: oracle CERT security)

The Tom Kyte Blog: An Interesting Read... - Tom looks at "an internet security firm brought down by not following the most *basic* of security principals." (tags: security oracle)

Jason Williamson: Oracle as a Service in the Cloud - "It is not trivial to migrate large amounts of pre-relational or 'devolved' relational data. To do this, we again must revert back to a tight roadmap to migration and leverage the growing tools and services that we have." - Jason Williamson (tags: oracle cloud soa)

Edwin Biemond: Java / Oracle SOA blog: Building an asynchronous web service with JAX-WS - "Building an asynchronous web service can be complex, especially when you are used to synchronous web services where you can wait for the response in your favorite tool." - Oracle ACE Edwin Biemond (tags: oracle oracleace java soa)

Shared Database Servers (The SaaS Report) - "Outside the virtualization world, there are capabilities of Oracle Database which can be used to prevent resource contention and guarantee SLA." - Shivanshu Upadhyay (tags: oracle database cloud SaaS)

White Paper: Experiencing the New Social Enterprise - "Increasingly organizations recognize the mandate to create a modern user experience that transforms existing business processes and increases business efficiency and agility." (tags: e20 enterprise2.0 socialcomputing oracle)

Clusterware 11gR2 - Setting up an Active/Passive failover configuration - Gilles Haro illustrates the steps necessary to achieve "a fully operational 11gR2 database protected by automatic failover capabilities." (tags: oracle clusterware)

Oracle ERP: How to overcome local hurdles in a global implementation - "The corporate world becomes a global village as many companies expand their business and offices around different countries and even continents. And this number keeps increasing. This globalization raises interesting questions..." - Jan Verhallen (tags: oracle capgemini entarch erp)

Webcast: Successful Strategies for Optimizing Your Data Warehouse - Thursday, March 3, 2011, 10 a.m. PT / 1 p.m. ET. Speakers: Mala Narasimharajan (Senior Product Marketing Manager, Oracle Data Integration) and Denis Gray (Principal Product Manager, Oracle Data Integration). (tags: oracle dataintegration datawarehousing)

    Read the article

  • Oracle Developer Day, Poland, 2012

    - by Geertjan
Oracle Developer Day took place in Poland today. Oracle's Gregor Rayman did the keynote, where NetBeans was positioned, yet again, as Oracle's IDE for the Java Platform, via the JavaFX roadmap. Well, it's not so clear from my pic above, but NetBeans is closely tied to the JavaFX roadmap, as well as to the JDK roadmap.

Then the tracks started, one of which was the Java track (the other two tracks were on ADF/WebLogic and SOA/BPM/BAM), where among other things I demonstrated the Java EE 6 platform via tools in NetBeans IDE, at some length. The room could hardly have been fuller: chairs had to be brought in and people were standing along the walls. The pic above shows the session being set up, with the room full of developers ready to hear about Java EE 6. I also did a session on pluggable Java desktop development (i.e., the NetBeans Platform) and on "What's New in NetBeans IDE 7.1?", while Martin Grebac had a session on Java EE Web Services.

Some of the many questions asked during the day that I thought were interesting:

- Is there localization support for the @Pattern annotation? I.e., what if I want to display the error message in Polish - what do I do?
- Is there filtering/sorting support for the DataTable component in JSF?
- Why is there no visual editor for ejb-jar.xml, in the same way that there is for web.xml?
- "Would be handy if there were a JSR for IDE keyboard shortcuts." (Two different people asked this question, separately, without knowing about each other. The second didn't know about the Eclipse and IntelliJ keyboard shortcut support in NetBeans IDE and was happy when I told them about it.)
- Wouldn't it be cool if, on start up or during installation, there were a question: "Are you migrating from Eclipse/IntelliJ?" Then, if "yes", reset the keyboard shortcuts to match the IDE they're coming from.
- Is there a way in NetBeans to find the subclasses of a class?
- "Would be cool if HTML or JSF files could be visualized in the same way as JavaFX and Swing classes." I.e., a visual debugger for web developers.

I had a great day and am looking forward to the Oracle Developer Day that will be held in Cluj, Romania, on Friday.

    Read the article

  • TechEd 2010 Day Three: The Database Designer (Isn't)

    - by BuckWoody
Yesterday at TechEd 2010 here in New Orleans I worked the front booth, answering general SQL Server questions for the masses. I was actually a little surprised to find that most of the questions I got were from folks who wanted to know more about StreamInsight and Master Data Services. In past conferences I've been asked a lot of "free consulting" questions about problems folks have had with older products. I don't mind that a bit - in fact, I'm always happy to help in any way I can. But this time people are really interested in the new features in the product, and I like that they are thinking ahead, not just having to solve problems in production.

My presentation was on "Database Design in an Hour". We had the usual fun, and Sideshow Bob made an appearance - I kid you not. The guy in the back of the room looked just like Sideshow Bob, so I quickly held a "best hair" contest, and he won.

During the presentation, I explained the tools you can use to design databases. I also explained that the "Database Designer" tool in SQL Server Management Studio (SSMS) isn't truly a designer - it uses non-standard notation, doesn't have a metadata dictionary, and, worst of all, it works at the physical level. In other words, whatever you do in SSMS will automatically change the field/table/relationship structures in the database. We fixed this in SSMS 2008 and higher by adding an option to block that, but the tool is not a good design tool nonetheless. To be fair, no one I know of at Microsoft claims that it is - but I was shocked to hear so many developers in the room defending it as a good tool.

I think the main issue for someone who doesn't work with relational systems a great deal is that it can be difficult to figure out foreign keys. The syntax makes them look "backwards", so it's just easier to grab a field and place it on the table you want to point to.

There are options. You can download a couple of free tools (CA has a community edition of ERwin, Quest has one, and Embarcadero also has one), and if you design more than one or two databases a year, it may be worth buying a true design tool. For years I used Visio, but we changed it so that it doesn't forward-engineer (create the DDL) any more, so it isn't a true design tool either. So investigate those free and not-so-free tools. You'll find they help you in your job - but stay away from the Database Designer in SSMS. Or I'll send Sideshow Bob over there to straighten you out.

    Read the article

  • SSAS: Utility to check you have the correct data types and sizes in your cube definition

    - by DrJohn
This blog describes a tool I developed which allows you to compare the data types and data sizes found in the cube's data source view with the data types/sizes of the corresponding dimensional attributes. Why is this important? Well, when creating named queries in a cube's data source view, it is often necessary to use the SQL CAST or CONVERT operation to change the data type to something more appropriate for SSAS. This is particularly important when your cube is based on an Oracle data source or uses custom SQL queries rather than views in the relational database.

The problem with BIDS is that if you change the underlying SQL query, the size of the data type in the dimension does not update automatically. This then causes problems during deployment, whereby processing the dimension fails because the data in the relational database is wider than that allowed by the dimensional attribute.

In particular, if you use some of the string manipulation functions provided by SQL Server or Oracle in your queries, you may find that the 10-character string you expect suddenly turns into an 8,000-character monster. For example, the SQL Server function REPLACE returns a column with a width of 8,000 characters, so if you use this function in a named query in your DSV, you will get a column width of 8,000 characters. Although the Oracle REPLACE function is far more intelligent, the generated column size could still be way bigger than the maximum length of the data actually in the field.

Now this may not be a problem when prototyping, but in your production cubes you really should clean up this kind of thing, as these massive strings will add to processing times and storage space. Similarly, you do not want to forget to change the size of the dimension attribute if your database columns increase in size.

Introducing the CheckCubeDataTypes Utility

The CheckCubeDataTypes application extracts all the data types and data sizes for all attributes in the cube and compares them to the data types and data sizes in the cube's data source view. It then generates an Excel CSV file which contains all this metadata, along with a flag indicating whether there is a mismatch between the DSV and the dimensional attribute. Note that the app not only checks all the attribute keys but also the name and value columns for each attribute.

Another benefit of having the metadata held in a CSV text file is that you can place the file under source code control. This allows you to compare the metadata of the previous cube release with your new release, to highlight problems introduced by new development.

You can download the C# source code from here: CheckCubeDataTypes.zip

A typical example of the output Excel CSV file is shown below - note that the last column flags a data size mismatch with the value TRUE.
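One nice side effect of the CSV format is that the output is trivial to post-process in a build script. As a hedged illustration - the file name below is made up, and the assumption that the mismatch flag is the last comma-separated column should be checked against the utility's actual output - a few lines of Java can pull out just the flagged rows for a report:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    // Minimal sketch: print only the rows the utility flagged as mismatched.
    // Assumes a header row on line 1 and the TRUE/FALSE flag in the last column.
    public class MismatchReport {
        public static void main(String[] args) throws IOException {
            try (Stream<String> lines = Files.lines(Paths.get("CubeDataTypes.csv"))) {
                lines.skip(1)                                  // skip the header row
                     .filter(line -> line.trim().endsWith("TRUE"))
                     .forEach(System.out::println);
            }
        }
    }

Run against two checked-in versions of the file, the same approach makes it easy to spot mismatches introduced since the previous cube release.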

    Read the article

< Previous Page | 136 137 138 139 140 141 142 143 144 145 146 147  | Next Page >