Search Results

Search found 25418 results on 1017 pages for 'back ache'.


  • SQL SERVER – Recover the Accidentally Renamed Table

    - by pinaldave
    I have no answer to the following question. I saw a desperate email marked as urgent delivered to my mailbox. "I accidentally renamed a table in SSMS. I was scrolling very fast and I made a mistake. It was either because I double-clicked or pressed F2 (the shortcut key for renaming). However, I have made the mistake and now I have no idea how to fix it. I am in big trouble. Help me get my original table name back." I have seen many similar scenarios in my life and they give me a very good opportunity to preach wisdom, but when the house is burning we cannot talk about how we should have conserved water earlier. The goal at that point is to put out the fire as fast as we can. I decided to answer this email with my best knowledge. If you have renamed the table, you are pretty much out of luck. Here are a few things you can do which may give you an idea of what your table name was, if you are lucky.

    Method 1 (not recommended, but try your luck): Check the naming conventions used in your system. I have often seen organizations name their indexes as IX_TableName_Cols or their keys as FK_TableName1_TableName2_Cols. If your organization follows the same convention, you can get the name of your table by referring to its keys. Note that it is quite possible your table had already been renamed earlier and the keys were not updated, which can easily lead you to an incorrect name. Follow this only if you are confident, or move on to the next method.

    Method 2 (not recommended, but try your luck): This method is also based on your organization's naming convention. If the table name is used in any column name (some organizations use the table name in their incremental identity column), you can get the name from there.

    Method 3 (not recommended, but try your luck): If you know where the table was used in your stored procedures, you can script those stored procedures and find the name of the table.

    Method 4 (try your luck): The best organizations first create a data model of the schema, and there is a good chance this table is used there, so take your chances and refer to the original document. If your organization is good at managing documentation or source code, you will get the name of the table back for sure.

    Method 5 (it WORKS, but try it on a development server): There is no sure way to recover the name of a table you accidentally renamed; however, there is one approach that will work for certain. Take your latest full backup and restore it on your development server (not on production, and not where you renamed the table). Now restore the latest differential backup of that full backup. Then restore the log files one by one, making sure you stop before the point in time at which you renamed the table. Explore the restored database and it will give you the name of the table you renamed. If you are confident that the table existed with the same name when the last full backup was taken, you do not have to go through all the steps; you can get the name of the table directly from the restore of the last full backup. Read the article about the Backup Timeline.

    Wisdom: How can I miss preaching wisdom when I get the opportunity to do so? Here are a few points to remember. Use a different account to explore the production environment; do not use the account which has all the rights and permissions all the time. Use an account with read-only permissions when no modifications are required. Use Policy-Based Management to prevent accidental changes.
    If there were a policy enforcing valid names, an accidental change to the table would not have been possible unless it was an intentional, deliberate change. Have proper auditing of the system in place. You can use DDL triggers, but be careful with their usage (get them reviewed properly first). (Add your suggestion here.) I guess Method 5 will work every time (using a point-in-time restore). Everything else is a matter of luck, and if your luck is bad you will end up with a further incorrect name. Now go back and read the first line of this blog. Out of the five methods, four are just lucky guesses. Method 5 will work, but it is a lengthy process if the database is huge or if you do not have a full backup. Did I miss anything obvious? Please leave a comment and I will publish your answer with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
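    As a footnote to Method 5, here is a minimal T-SQL sketch of the point-in-time restore sequence on a development server. The database name, logical file names, backup file paths and the STOPAT time are all hypothetical placeholders; substitute your own values and restore only to a non-production server.

        -- Restore the last full backup under a new name, without recovering
        RESTORE DATABASE [MyDB_Copy]
            FROM DISK = N'D:\Backups\MyDB_Full.bak'
            WITH MOVE 'MyDB' TO N'D:\Data\MyDB_Copy.mdf',
                 MOVE 'MyDB_log' TO N'D:\Data\MyDB_Copy_log.ldf',
                 NORECOVERY;

        -- Apply the latest differential backup, still without recovering
        RESTORE DATABASE [MyDB_Copy]
            FROM DISK = N'D:\Backups\MyDB_Diff.bak'
            WITH NORECOVERY;

        -- Apply log backups in sequence (WITH NORECOVERY), stopping the final
        -- one just before the moment of the rename
        RESTORE LOG [MyDB_Copy]
            FROM DISK = N'D:\Backups\MyDB_Log_01.trn'
            WITH STOPAT = '2012-06-20T14:30:00', RECOVERY;

        -- The original table name is now visible in the restored copy
        SELECT name FROM [MyDB_Copy].sys.tables ORDER BY name;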

    Read the article

  • Common use cases and techniques when integrating a 3rd party application with Oracle Sales Cloud

    - by asantaga
    Over the last year or so I've seen a lot of partners migrating and integrating their applications with Oracle Sales Cloud. Interestingly, I'd say 60% of the partners use the same set of design patterns over and over again. Most of the time I see that they want to embed their application into Oracle Sales Cloud, usually within a tab, perhaps click on a link to their application (passing some piece of data plus credentials) and then, within their application, update Sales Cloud again using web services. Here are some examples of the different use cases I've seen and how partners are embedding their applications into Sales Cloud. NB: the following examples use the "Desktop" user interface rather than the newer "Simplified User Interface"; I'll update the sample application soon, but the integration patterns are precisely the same.

    Use Case 1: Navigator "link out" to a third-party application. This is an example of where the developer has added a link to the global navigator which links out to the third-party application. Typically one doesn't pass any contextual data, with the exception of perhaps user credentials or, better still, a JWT token. Techniques used: Adding a Link to a Menu Item; Using JWT Token in Sales Cloud.

    Use Case 2: Application embedded within the Sales Cloud dashboard. Within the Oracle Sales Cloud application there is a tab called "Sales"; within this tab it's possible to embed a subtab containing an iFrame pointing to your application. To do this the developer simply needs to edit the page in customization mode, add the tab and then add the iFrame. Simples! The developer can pass credentials/a JWT token and some other pieces of data, but not object data (i.e. the current OpportunityID etc.). Techniques used: Adding a page to the dashboard; Using JWT Token in Sales Cloud.

    Use Case 3: Embedding a tab and context-linking out from a Sales Cloud object to the third-party application. In this use case the developer embeds two components into Oracle Sales Cloud. The first is a subtab showing summary data to the user (a quote in our case) and the second is a hyperlink (although it could be a button) which, when clicked, navigates the user to the third-party application. In this case the developer almost always passes context-specific data (i.e. the opportunityId) and a security token (a username/password combination or a JWT token). The third-party application usually takes the data, perhaps queries more data using the Sales Cloud SOAP/web service interface, and then displays the resulting mashup to the user for further processing. When the user has finished their work in the third-party application they normally navigate back to Oracle Sales Cloud using what's called a "deep link", i.e. taking them back to the object (an opportunity in our case) they came from. This image visually shows a "happy path" a user may follow, and combines linking out to an application, web service calls and deep linking back to Sales Cloud. Techniques used: Extending a Sales Cloud application with a custom button; Using JWT Token in Sales Cloud; Extending Oracle Sales Cloud [Opportunity] with a custom tab exposing external content; Retrieving data from Oracle Sales Cloud using web services; Coding some Groovy script to generate the URLs required (Doc 1571200.1 on My Oracle Support); Deep linking to specific Oracle Sales Cloud pages (Doc 1516151.1 on My Oracle Support).

    Use Case 4: Server-side processing/synchronization. This use case focuses on server-side processing of data, in this case synchronizing data. Here the third-party application runs on a "timer", e.g. cron or similar, and when triggered it queries data from Oracle Sales Cloud, then queries data from the third-party application, determines the deltas and then inserts data where required. Specifically, here we call Oracle Sales Cloud using SOAP web services and communicate with the third-party application using its REST API; for Oracle Sales Cloud one would use standard JAX-WS web service calls, and for REST one would use the JAX-RS API and perhaps the Jackson API for managing JSON objects. This is a very common use case and one which specifically lends itself to using the Oracle Java Cloud Service as the ideal application server on which to host the mediator between the two applications. Techniques used: Using JWT Token in Sales Cloud; Integrating with the Oracle Java Cloud Service; Retrieving data from Oracle Sales Cloud using web services.

    General resources: the above is just a small set of the techniques and use cases in use today. There are plenty of other sources of documentation and resources available on the internet, but to get you started here are a few of my favourite places: the Sales Cloud general documentation (the Customize tab is useful for general customization of Sales Cloud, and the Integration tab focuses on the 3rd party integration techniques), the official Oracle Fusion Developer Relations blog and the official Oracle Fusion Developer Relations YouTube channel. Enjoy integrating!

    Read the article

  • Master Data Management Implementation Styles

    - by david.butler(at)oracle.com
    In any Master Data Management solution deployment, one of the key decisions to be made is the choice of the MDM architecture. Gartner and other analysts describe several different Hub deployment styles, which must be supported by a best-of-breed MDM solution in order to guarantee the success of the deployment project.

    Registry Style: In a Registry Style MDM Hub, the various source systems publish their data and a subscribing Hub stores only the source system IDs, the foreign keys (record IDs on source systems) and the key data values needed for matching. The Hub runs the cleansing and matching algorithms and assigns unique global identifiers to the matched records, but does not send any data back to the source systems. The Registry Style MDM Hub uses data federation capabilities to build the "virtual" golden view of the master entity from the connected systems.

    Consolidation Style: The Consolidation Style MDM Hub has a physically instantiated "golden" record stored in the central Hub. The authoring of the data remains distributed across the spoke systems and the master data can be updated based on events, but is not guaranteed to be up to date. The master data in this case is usually not used for transactions, but rather supports reporting; however, it can also be used for reference operationally.

    Coexistence Style: The Coexistence Style MDM Hub involves master data that's authored and stored in numerous spoke systems, but includes a physically instantiated golden record in the central Hub and harmonized master data across the application portfolio. The golden record is constructed in the same manner as in the Consolidation Style, and, in the operational world, Consolidation Style MDM Hubs often evolve into the Coexistence Style. The key difference is that in this architectural style the master data stored in the central MDM system is selectively published out to the subscribing spoke systems.

    Transaction Style: In this architecture, the Hub stores, enhances and maintains all the relevant (master) data attributes. It becomes the authoritative source of truth and publishes this valuable information back to the respective source systems. The Hub publishes and writes back the various data elements to the source systems after the linking, cleansing, matching and enriching algorithms have done their work. Upstream, transactional applications can read master data from the MDM Hub, and, potentially, all spoke systems subscribe to updates published from the central system in a form of harmonization. The Hub needs to support merging of master records. Security and visibility policies at the data attribute level need to be supported by the Transaction Style Hub as well.

    Adaptive Transaction Style: This is similar to the Transaction Style, but additionally provides the capability to respond to diverse information and process requests across the enterprise. This style emerged most recently to address the limitations of the above approaches. With the Adaptive Transaction Style, the Hub is built as a platform for consolidating data from disparate third-party and internal sources and for serving unified master entity views to operational applications, analytical systems or both. This approach delivers a real-time Hub that has a reliable, persistent foundation of master reference and relationship data, along with all the history and lineage of data changes needed for audit and compliance tracking. On top of this persistent master data foundation, the Hub can dynamically aggregate transaction data on demand from different source systems to deliver the unified golden view to downstream systems. Data can also be accessed through batch interfaces, published to a message bus or served through a real-time services layer. New data sources can be readily added in this approach by extending the data model and by configuring the new source mappings and the survivorship rules, meaning that all legacy data hubs can be leveraged to contribute their records and rules into the new transaction hub. Finally, through rich user interfaces for data stewardship, it allows exception handling by business analysts to keep the data current with business rules and practices while maintaining the reliability of best-of-breed master records.

    Confederation Style: In this architectural style, several Hubs are maintained at the departmental, agency and/or territorial level, and each of them is connected to the other Hubs either directly or via a central Super-Hub. Each domain-level Hub can be implemented using any of the previously described styles, but normally the central Super-Hub is a Registry Style one. This is particularly important for public sector organizations, where most of the time it is practically or legally impossible to store all the relevant constituent information from all departments in a single central hub.

    Oracle MDM Solutions can be deployed according to any of the above MDM architectural styles, and have been specifically designed to fully support the Transaction and Adaptive Transaction styles. Oracle MDM Solutions provide strong data federation and integration capabilities which are key to enabling the use of the Confederated Hub as a possible architectural style approach. Don't lock yourself into a solution that cannot evolve with your needs. With Oracle's support for any type of deployment architecture, its ability to leverage the outstanding capabilities of the Oracle technology stack, and its open interfaces for non-Oracle technology stacks, Oracle MDM Solutions provide a low TCO and a quick ROI by enabling a phased implementation strategy.
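    To make the registry style described above concrete, the hypothetical T-SQL sketch below shows the kind of cross-reference table such a hub keeps: only source system identifiers, source record keys, the attributes used for matching and the hub-assigned global identifier. The table and column names are invented for illustration and are not taken from any particular MDM product.

        -- Hypothetical registry-style cross-reference table
        CREATE TABLE dbo.CustomerRegistry
        (
            GlobalCustomerId  UNIQUEIDENTIFIER NOT NULL,  -- golden ID assigned after matching
            SourceSystemCode  VARCHAR(20)      NOT NULL,  -- e.g. 'CRM', 'ERP'
            SourceRecordId    VARCHAR(50)      NOT NULL,  -- record ID in the source system
            MatchName         NVARCHAR(200)    NULL,      -- key values kept only for matching
            MatchEmail        NVARCHAR(200)    NULL,
            MatchPostalCode   VARCHAR(20)      NULL,
            CONSTRAINT PK_CustomerRegistry PRIMARY KEY (SourceSystemCode, SourceRecordId)
        );

        -- The "virtual" golden view is then federated at query time from the
        -- connected source systems, grouped by GlobalCustomerId.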

    Read the article

  • Enable DreamScene in Any Version of Vista or Windows 7

    - by DigitalGeekery
    Windows DreamScene was a utility available for Vista Ultimate that allowed users to set video as desktop wallpaper. It was dropped in Windows 7, but we'll take a look at how to play DreamScenes in all versions of Windows 7 or Vista.

    Downloading DreamScenes: First, you'll need to find some DreamScenes to download. We've found some nice ones at both DreamScene.org and DeviantArt; you can find those download links at the end of the article. They'll come as compressed files, so you'll need to extract them after downloading.

    Windows 7 DreamScene Activator: If you are running Windows 7 you can use Windows 7 DreamScene Activator. This free portable utility enables DreamScene in both 32- and 64-bit versions of Windows 7. Users can then set either MPG or WMV files as desktop wallpaper. Download and extract the Windows 7 DreamScene Activator (link below). Once extracted, you'll need to run the application as administrator: right-click on the .exe and select Run as administrator. Click on Enable DreamScene. This will also restart Windows Explorer if it is open. To play your DreamScene, browse for the file in Windows Explorer, right-click the file and select Set as Desktop Background. Enjoy your new Windows 7 DreamScene. Although it says it is for Windows 7 only, we were able to get it to work with no problems on Vista Home Premium x32 as well. You can pause the DreamScene at any time by right-clicking on the desktop and selecting Pause DreamScene. When you are ready for a change, click Disable DreamScene and switch back to your previous wallpaper.

    Using VLC Media Player: Users of all versions of Windows 7 and Vista can enable a DreamScene using VLC. Recently, we showed you how to set a video as your desktop wallpaper in VLC. Since DreamScenes are in MPEG or WMV format, we will use the same tactic to display them as desktop wallpaper; we'll just need to make a few additional tweaks to the VLC settings. You'll need to download and install VLC media player if you don't already have it (you can find the download link below). Next, select Tools > Preferences from the menu. Select the Video button on the left and then choose DirectX video output from the Output dropdown list. Next, select All under Show Settings at the lower left, then select the Video button on the left pane. Uncheck Show media title on video; this will prevent VLC from constantly showing the title of the video on the screen each time the video loops. Click Save and then restart VLC. Now we will add the video to our playlist and set it to continuously loop. Select View > Playlist from the menu. Click the Add file button at the bottom of the Playlist window, browse for your file and click Open. Click the Loop button at the bottom so the video plays in a continuous loop. Now we're ready to play the video. After the video starts playing, select Video > DirectX Wallpaper from the menu, then minimize VLC. If you're using Aero themes, you may get a pop-up warning and Windows will switch automatically to a basic theme. If looping one video gets to be a little repetitive, you can add multiple videos to your playlist in VLC and loop the entire playlist; just make sure you toggle the Loop button on the playlist window to Loop All. Now you've got a nice DreamScene playing on your desktop. Another cool trick you can do with VLC is take snapshots of favorite movie scenes and set them as backgrounds.
    When you're ready to go back to your old wallpaper, maximize VLC, select Video and click DirectX Wallpaper again to turn off the video background. Occasionally we were left with a black screen and had to manually change our wallpaper back to normal even after turning off the DirectX Wallpaper. Note: keep in mind that using the VLC method takes up a lot of resources, so if you try to run it on older hardware, or say a netbook, you're not going to get good results. We also tried to use the VLC method in XP but couldn't get it to work; if you have, leave a comment and let us know. While the DreamScene feature never really caught on in Vista, we find DreamScenes to be a cool way to pump a little life into your desktop on any version of Vista or Windows 7.

    Downloads: DreamScenes from Dreamscene.org, DreamScenes from DeviantArt, Download VLC media player, Windows 7 DreamScene Activator.

    Read the article

  • Goals for 2010 Retrospective

    - by Brian Jackett
    As we approach the end of 2010 I’d like to take a  few minutes to reflect back on this past year and revisit the goals that I set for myself at the beginning of the year (click here to see those goals).  I feel it is important to track your goals not only to see if you accomplished them but also to see what new directions in life you pursued.  Once we enter into 2011 I’ll follow up with a new post on goals for the new year. Professional Blog – This year I intended to write at least 2 posts a month.  Looking back I far surpassed that goal by writing 47 posts (this one being my 48th).  As with many things in life, quantity does not mean quality.  A good example is a number of my posts announcing upcoming speaking engagements and providing links to presentation slides and scripts.  That aside, I like to at least keep content relatively fresh on this blog  which I was able to accomplish.  At the same time I’ve gotten much more comfortable in my blogging style and it has become much easier to write. Speaking – I didn’t define a clear goal for speaking engagements, but had a rough idea of wanting to speak at 2-3 events.  Once again I far exceeded that number by speaking at 10 separate events and delivering 12+ presentations.  I’m very thankful for all of the opportunities that I was given and all of the wonderful people I have met as a result. Volunteering – This year I intended to help out with the COSPUG (now Buckeye SPUG) steering committee and Stir Trek conference.  I fulfilled both goals and as well as taking on lead organizer duties for the first ever SharePoint Saturday Columbus.  Each of these events and groups turned out to be successful and I was glad to be a part of them all.  I look forward to continuing to volunteer with each next year in some capacity. Android Development – My goal for getting into Android development was a late addition, but one I didn’t necessarily fulfill.  I spent a couple nights downloading the tools, configuring my environment, and going through some “simple” tutorials.  I say “simple” because in my opinion the tutorials were not laid out very well, took a long time to get running properly, and confused me more than helped.  After about a week I was frustrated with the process and didn’t think it was a good use of my time.  On a side note, I’ve dabbled in Windows Phone 7 development over the past few months and have been very excited by how easy and intuitive it was to get started and develop some proof of concepts. Personal Getting in Shape – I had intended to play on recreational sports leagues and work out on a semi-regular basis.  For the most part I fulfilled this goal by playing on various softball and volleyball leagues as well as using the gym.  At the same time I had some major setbacks.  In the spring I badly sprained my ankle and got hit in the knee with a softball which kept me inactive for almost 2 months.  More recently I broke my knuckle (click here to read about it) which I am still recovering from. Volunteering – On the volunteering front I kept my commitments at my parish’s high school youth group.  As for other volunteering opportunities I got involved with a great organization called Columbus Gives Back (website).  I’ve volunteered with them a few times and really enjoy their goal to provide opportunities to people with busy schedules.  They  offer a variety of events typically after work hours and spread out around Columbus with no set commitments on time you need to put in.  If you have the time or motivation I highly recommend them. 
House/Condo – I had been thinking of buying a house or condo this past summer, but decided to extend my apartment lease for another year instead.  I have begun the search for a place in the past few weeks and am excited begin the process of owning a home. Conclusion     This year I was able to set and achieve many of my goals.  For next year I’ll try to put more specific numbers to all of my goals.  If any of you readers set goals for 2011 feel free to send me a link as I’d love to see what you are aiming to accomplish.  Have a great end of 2010 and best wishes for the start of 2011!       -Frog Out

    Read the article

  • Ok it has been pointed out to me

    - by Ratman21
    That it seems my blog is more of a "poor me" or "pity me" or "I deserve a job" blog. Hmmm, I won't say I have not whined here, as I have used this blog to vent my frustration about the whole out-of-work thing (lack of money, self-worth, family issues and the never-ending bills coming my way), but it was also me trying to reach out to others in the same boat, as well as advertising: hey, I am out here, employers. It was also said that I don't have anything listed here about me, like a cover letter or resume. Well, there is, but it was so many months and posts ago, and what I had posted is not current. So here is my most current cover letter and resume.

    Scott L Newman, 45219 Dutton Way, Callahan, Fl. 32011

    To Whom It May Concern: I am really interested in the IT vacancy that you have listed for your company. Maybe I don't have all the qualifications you want (hold on, don't hit delete yet) yet! But maybe I do, as I have over 20+ years of experience in "IT" RIGHT NOW. Read the rest of my cover letter and my resume. You will see what my "IT" skills are, and it will show that I can do this work! I can bring to your company, along with my can-do attitude, a broad range of skills, including:
    - Certified CompTIA A+, Security+ and Network+ Technician
    - 2.5 years (NOC) network experience on a large Cisco-based WAN – UK to Austria
    - 20 years experience in MIS/DP – yes, I can do IBM mainframes and Tandem NonStops too
    - 18 years experience as technical help desk support – panicking users, no problem
    - 18 years experience with PC/server based systems, intranet and internet systems
    - 10+ years experience with Microsoft Office, Windows XP and data network fundamentals (YES, I do Windows)
    - Strong troubleshooting skills for software, hardware and circuit issues (and I can tell you what kind of horrors I had to face on all of them)
    - Very experienced in working with customers on problems – again, panicking users, no problem
    - Working experience with remote access (VPN/SecurID) – I didn't just study them, I worked on/with them
    - Skilled in getting information for and creating documentation for operation procedures (I don't just wait for them to give it to me, I go out and get it. Waiting for info on working applications is, well, dumb)
    - Multiple software languages (hey, I have done some programming)
    And much more experience in "IT" (mortgage, stocks and financial information systems experience, and I have worked "IT" in a hospital). I can multitask, and I have the ability to adapt to change and learn quickly (I was once put in charge of a system that I had not worked with for over two years. Talk about having to relearn and adapt to changes, but I did it.) I would welcome the opportunity to further discuss this position with you. If you have questions or would like to schedule an interview, please contact me by phone at 904-879-4880, on my cell at 352-356-0945, by e-mail at [email protected], or leave a message on my web site (http://beingscottnewman.webs.com/). I have enclosed/attached my resume for your review and I look forward to hearing from you. Thank you for taking a moment to consider my cover letter and resume. I appreciate how busy you are. Sincerely, Scott L. Newman

    Scott L. Newman | 45219 Dutton Way, Callahan, FL 32011 | H (904) 879-4880 | C (352) 356-0945 | [email protected] | Web: http://beingscottnewman.webs.com/

    OBJECTIVE: To obtain a Network Operations or Helpdesk position.

    PROFILE: Information Technology Professional with 20+ years of experience. Volunteer website creator and back-up sound technician at True Faith Christian Fellowship. CompTIA A+, Network+ and Security+ Certified.

    TECHNICAL AND PROFESSIONAL SKILLS: Technical Support, Frame Relay, Microsoft Office Suite, Inventory Management, ISDN, Windows NT/98/XP, Client/Vendor Relations, CICS, Cisco Routers/Switches, Networking/Administration, RPG, Helpdesk, Website Design/Dev./Management, Assembler, Visio, Programming, COBOL IV.

    EDUCATION:
    - New Horizons Computer Learning Center, Jacksonville, Florida – CompTIA A+, Security+ and Network+ Certified. Currently working on CCNA certification.
    - Mott Community College, Flint, Michigan – Associate's Degree, Data Processing and General Education.
    - Currently studying Japanese.

    PROFESSIONAL:
    True Faith Christian Fellowship Church – Callahan, FL – October 2009 to Present: Web site tech
    - Web site creator/tech, back-up song leader and back-up sound technician. Note: the church web site is http://ambassadorsforjesuschrist.webs.com/
    U.S. Census (temp employee) – Feb. 23 to March 8, 2010
    - Enumerator for Nassau County
    Thomas Creek Baptist Church – Callahan, FL – June 2008 to September 2009: Church sound and video technician
    - Sound and video technician
    Fidelity National Information Services – Jacksonville, FL – February 01, 2005 to October 28, 2008: Client Server Dev/Analyst I
    - Monitored multiple debit card sites, check authorization customers and the card auth system (AuthNet) for problems with the sites, connections, servers (on our LAN) and/or applications
    - Night (NOC) network operator for a large Wide Area Network (WAN)
    - Monitored multiple check authorization customers for problems with circuits, routers and applications
    - Resolved circuit and/or router issues or assisted the circuit carrier in resolving the issue
    - Resolved application problems or assisted application support in resolution
    - Liaison between customer and application support
    - Maintained and updated the NetOps operation procedures guide
    - Kept the listing of equipment on the raised floor updated
    - Involved in the training of all night check and card server operation operators
    - FNIS acquired Certegy in 2005; was one of 3 kept on
    Certegy – St. Pete, FL – August 31, 2003 to February 1, 2005: Senior NetOps Operator (FNIS acquired Certegy in 2005; all of the above jobs/skills were the same as listed for FNIS)
    - Converted documentation to Adobe format
    - Sole trainer of day/night shift System Management Center (SMC) operators
    - Equifax spun off the Card/Check dept. as Certegy; Certegy terminated its contract with EDS. One of six in the whole IT dept. that was kept on
    EDS (Certegy account) – St. Pete, FL – July 1, 1999 to August 31, 2003: Senior NetOps Operator
    - Equifax outsourced the NetOps dept. to EDS in 1999
    - Same job skills as listed above for FNIS
    Equifax – St. Pete & Tampa, FL – January 1, 1991 to July 1, 1999: NetOps/Tandem Operator
    - All of the above for FNIS, except for circuit and router issues
    - Operated, monitored and troubleshot the Tandem mainframe and servers on the LAN
    - Supported the operation of the print, tape and microfiche rooms
    - Equifax acquired TelaCredit in 1991
    TelaCredit – Tampa, FL – June 28, 1989 to January 1, 1991: Tandem Operator
    - Operated and monitored Tandem NonStop systems for card and check auths
    - Operated multiple high-speed laser printers and microfiche printers
    - Mounted, filed and maintained 18 reel-to-reel mainframe tape drives, cartridge tape drives and the tape library.

    Read the article

  • Beginner Geek: How to Use Multiple Monitors to Be More Productive

    - by Chris Hoffman
    Many people swear by multiple monitors, whether they’re geeks or just people who need to be productive. Why use just one monitor when you can use two or more and see more at once? Additional monitors allow you to expand your desktop, getting more screen real estate for your open programs. Windows makes it very easy to set up additional monitors, and your computer probably has the necessary ports. Why Use Multiple Monitors? Multiple monitors give you more screen real estate. Hook up multiple monitors to a computer and you can move your mouse back and forth between them, dragging programs between monitors as if you had an extra-large desktop. People who swear by multiple monitors use them to display multiple things on-screen at a time. Rather than Alt+Tabbing and task switching to glance at another window, you can just look over with your eyes and then look back to the program you’re using. Some examples of use cases for multiple monitors include: Coders who want to view their code on one display with the other display reserved for documentation. They can just glance over at the documentation and look back at their primary workspace. Anyone who needs to view something while working. Viewing a web page while writing an email, viewing another document while writing an something, or working with two large spreadsheets and having both visible at once. People who need to keep an eye on information, whether it’s email or up-to-date statistics, while working. Gamers who want to see more of the game world, extending the game across multiple displays. Geeks who just want to watch a video on one screen while doing something else on the other screen. Hooking Up Multiple Monitors Hooking up an additional monitor to your computer should be very simple. Most new computers come with more than one port for a monitor — whether DVI, HDMI, the older VGA port, or a mix. Some computers may include splitter cables that allow you to connect multiple monitors to a single port. Most laptops also come with ports that allow you to hook up an external monitor. Plug a monitor into your laptop’s DVI or VGA port and Windows will allow you to use both your laptop’s integrated display and the external monitor at once. This all depends on the ports your computer has and how your monitor connects. If you have an old VGA monitor lying around and you have a modern laptop with only DVI or HDMI connectors, you may need an adapter that allows you to plug your monitor’s VGA cable into the new port. Be sure to take your computer’s ports into account before you get another monitor for it. Managing Multiple Monitors With Windows Windows makes using multiple monitors easy. Just plug the monitor into the appropriate port on your computer and Windows should automatically extend your desktop onto it. You can now just drag and drop windows between monitors. To control how this works, right-click your Windows desktop and select Screen resolution. Choose an option from the Multiple displays box. The Extend option extends your desktop onto an additional monitor, while the other options are mainly useful if you’re using an additional monitor for presentations — for example, you could mirror your laptop’s desktop onto a large monitor or blank your laptop’s screen while it’s connected to a larger display. Be sure to arrange your monitors properly so Windows understands how your monitors are physically positioned. Windows 8 allows you to extend your Windows taskbar across multiple monitors. 
You’ll find this option in the taskbar’s options window — right-click the taskbar and select Properties. You can also choose where you want Windows to display taskbar buttons for open programs — on any monitor’s taskbar or only on the taskbar on the associated monitor. Windows 7 doesn’t have these convenient features built-in — your second monitor won’t have a taskbar. To extend your taskbar onto an additional monitor, you’ll need a third-party utility like the free and open-source Dual Monitor Taskbar. If you just have a single monitor, you can also use the Aero Snap feature to quickly place multiple Windows applications side by side. On Windows 7 or 8, press Windows Key + Left or Windows Key + Right to make the current window take up the left or right half of your display. You could also drag any window’s title bar to the left or right edges of your screen and release the window. How useful this feature is depends on your monitor’s size and resolution. If you have a large, high-resolution monitor, it will allow you to see a lot. If you have a smaller laptop monitor with the seemingly standard 1366×768 resolution, you won’t be able to see much of each snapped window at once, so snapping windows may not be practical. Image Credit: Chance Reecher on Flickr, Camp Atterbury Joint Maneuver Training Center on Flickr, Xavier Caballe on Flickr     

    Read the article

  • Silverlight Cream Monday WP7 App Review # 1

    - by Dave Campbell
    I'm going to try something here... if it seems useful, I'll continue, if it doesn't, I'll stop... so give me feedback! There are *lots* of Apps in the WP7 Marketplace, and heaven help me, but the Marketplace sucks for finding stuff. I won't rehash what's already been said in the blogs, but I agree with one and all. I went out last Saturday to find 2 apps that I knew were released, and couldn't do so on my device. Even in the Zune app, it took quite a while to find them... ok, I'll back off a bit, because I just found out I can do 'Search' now if I know the name... I didn't think that was working before. So my thought is on Monday (like today), I will post a review of 5 apps/games I either use or have played with on my device. These are strictly my opinions, you understand, but hey... it's better than a poke in the eye with an iPhone! A few disclaimers:     Feel free to write me about your app and tell me about it. While it would be very cool to receive a whole bunch of xap files to review, at this point, for technical reasons, I'm unable to side-load my device. Since I plan on only doing this one day a week, and only 5, I may never get caught up, so if you send me some info, be patient. Re: games ... remember I'm old... I'm from the era of Colossal Cave and Zork. Duke-Nukem 2D and Captain Comic were awesome. I don't own an XBOX or any other game system, so take game reviews from my perspective -- who knows, it may be refreshing :) I won't pay for an app or game just to try it. If you expect me to test-drive your app, it's going to have to have a Free Trial. In this Issue:   Jingo! is the first app I bought, just to see what the experience was like. It's very much like a game we used to play in school in the Army in 1971 on paper we passed around. Sort of a cross between hangman and Mastermind, you try to figure out the hidden word in 5 tries. You get really good at 5-letter words after a while. I like this because you have to think, and you're not pressured by a clock Jingo! is by James Furdell and is $1.99 I reviewed René Schulte's Pictures Lab a while back, and have not changed my mind. This is an excellent app for playing with any photo on your device... one you've just taken, one you've synced from your PC, or one you've saved from email. I like this because you can get some cool effects for your photos, and it just works. Pictures Lab is by Schulte Software Development and is $1.99 Since I work as a consultant, and from home, I wanted something I could track my time with. I've test-driven all the contenders I could find so far on the phone, and so far I like ONTRACK! the best. If asked, I have some suggestions, but it's probably just the way I work or think. What I do like is I can tap a project to start/stop/restart a counter, and at the end of the day it shows me how much time I've been working. If there's a way to make an adjustment in case you forget to tap the counter, I don't know how to do it, and that's my biggest complaint. I like this because you can get a daily readout which you can also email as a spreadsheet. The daily results display is very good. ONTRACK! is by Qmino and is $2.99 Remember Item 4 above... I've been playing guitar for 48 years... obviously since before the invention of 'tuners', so I'm not as dependent upon these as some folks are. I've tried some in the past and have always felt I can do just as good by ear (I have perfect relative pitch). So, I gave this app by András Velvárt a dance just to see how it works, and it is surprisingly good. 
If you're used to one of the stage tuners this may take a little getting used to, but it does the job. The difference with this one is there is no real 'null' point inside which you can think your guitar is in tune. The soundwave stays visible on the device, and if it's moving to the right, your string is flat, if it's moving to the left, your string is sharp. Getting it exact might be tricky, but it is exact! If you need to rely on a tuner, this is a good choice in my opinion, exactly because of the sensitivity.. tune up with this and you're dead-on. Guitar Tuner is by Kinabalu Innovation Limited and is $0.99 Popper 2 is the WP7 version of a wildly popular game by Bill Reiss named Dr. Popper. You can get a trial, or you can now get a free lite version of the game. Popper 2 is a fast-paced bubble breaker game. I find it something fun to play when I just want to buzz out, but maybe the best review is that my daughter didn't want to give my phone back when I showed it to her, and always wants to grab my phone to play 'that game'. A fun distraction with great graphics and a great price Popper 2 is by Blue Rose Systems, LLC and is $1.29 Let me know what you think of the idea of doing reviews, or the layout/whatever, and Stay in the 'Light!   Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Coping with infrastructure upgrades

    - by Fatherjack
    A common topic for questions on SQL Server forums is how to plan and implement upgrades to SQL Server: moving from old to new hardware, or moving from one version of SQL Server to another. There are other circumstances where upgrades of other systems affect SQL Server DBAs. For example, where I work at the moment there is a Microsoft Exchange (email) server upgrade in progress. It is being handled by a different team, so I'm not wholly sure of the details, but we are in a situation where there are currently two Exchange email servers – the old one and the new one. Users' mailboxes are being transferred in a planned process, but as we approach the old server being turned off we also have to make sure that our SQL Servers get updated to use the new SMTP server for all of the SQL Agent notifications, SSIS packages etc. My servers have a number of profiles so that various jobs can send emails on behalf of various departments and different systems. This means there are lots of places where the old server name needs to be replaced by the new one. Anyone who has set up DBMail and enjoyed the click-tastic odyssey of screens to create Profiles and Accounts and so on and so forth ought to seek some professional help, in my opinion. It's a nightmare of back-and-forth settings changes and it stinks. I wasn't looking forward to heading into this mess of a UI and changing the old Exchange server name for the new one on all my SQL instances for all of the accounts I have set up. So I did what any Englishman with a shed would do: I decided to take it apart and see if I could fix it another way. I took a guess that we were going to be working in MSDB, and Books OnLine was remarkably helpful and, amongst a lot of information, told me about a couple of procedures that can be used to interrogate DBMail settings.

        USE [msdb] -- It's where all the good stuff is kept
        GO
        EXEC dbo.sysmail_help_profile_sp;
        EXEC dbo.sysmail_help_account_sp;

    Both of these procedures take optional parameters with the same names – ID and Name. If you provide an ID or a name then the results you get back are for that specific Profile or Account; otherwise you get details of all Profiles and Accounts on the server you are connected to. As you can see (click for a bigger image), the Account has the SMTP server information in the servername column. We want to change that value to NewSMTP.Contoso.com. Now, it appears that the procedure we are looking at gets its data from the sysmail_account and sysmail_server tables; you can get the results the stored procedure provides if you run the code below.

        SELECT  [account_id] ,
                [name] ,
                [description] ,
                [email_address] ,
                [display_name] ,
                [replyto_address] ,
                [last_mod_datetime] ,
                [last_mod_user]
        FROM    dbo.sysmail_account AS sa;

        SELECT  [account_id] ,
                [servertype] ,
                [servername] ,
                [port] ,
                [username] ,
                [credential_id] ,
                [use_default_credentials] ,
                [enable_ssl] ,
                [flags] ,
                [last_mod_datetime] ,
                [last_mod_user] ,
                [timeout]
        FROM    dbo.sysmail_server AS sms

    Now, we have no real idea how these tables are linked and whether making an update directly to one or other of them is going to do what we want, or whether it will entirely cripple our ability to send email from SQL Server, so we won't touch those tables with any UPDATE TSQL. So, back to Books OnLine, and we find sysmail_update_account_sp. It's exactly what we need. The examples in BOL take the form (as below) of having every parameter explicitly defined.
    Not wanting to totally obliterate the existing values by not passing values for all of the parameters, I set to writing some code to gather the existing data from the tables, rewrite the SMTP server name and then execute the resulting TSQL.

        IF OBJECT_ID('tempdb..#sysmailprofiles') IS NOT NULL
            DROP TABLE #sysmailprofiles
        GO
        CREATE TABLE #sysmailprofiles
            (
              account_id INT ,
              [name] VARCHAR(50) ,
              [description] VARCHAR(500) ,
              email_address VARCHAR(500) ,
              display_name VARCHAR(500) ,
              replyto_address VARCHAR(500) ,
              servertype VARCHAR(10) ,
              servername VARCHAR(100) ,
              port INT ,
              username VARCHAR(100) ,
              use_default_credentials VARCHAR(1) ,
              ENABLE_ssl VARCHAR(1)
            )

        INSERT  [#sysmailprofiles]
                ( [account_id] ,
                  [name] ,
                  [description] ,
                  [email_address] ,
                  [display_name] ,
                  [replyto_address] ,
                  [servertype] ,
                  [servername] ,
                  [port] ,
                  [username] ,
                  [use_default_credentials] ,
                  [ENABLE_ssl]
                )
                EXEC [dbo].[sysmail_help_account_sp]

        DECLARE @TSQL NVARCHAR(1000)

        SELECT TOP 1
                @TSQL = 'EXEC [dbo].[sysmail_update_account_sp] @account_id = '
                + CAST([s].[account_id] AS VARCHAR(20))
                + ', @account_name = ''' + [s].[name] + ''''
                + ', @email_address = N''' + [s].[email_address] + ''''
                + ', @display_name = N''' + [s].[display_name] + ''''
                + ', @replyto_address = N''' + s.replyto_address + ''''
                + ', @description = N''' + [s].[description] + ''''
                + ', @mailserver_name = ''NEWSMTP.contoso.com'''
                + ', @mailserver_type = ' + [s].[servertype]
                + ', @port = ' + CAST([s].[port] AS VARCHAR(20))
                + ', @username = ' + COALESCE([s].[username], '''''')
                + ', @use_default_credentials =' + CAST(s.[use_default_credentials] AS VARCHAR(1))
                + ', @enable_ssl =' + [s].[ENABLE_ssl]
        FROM    [#sysmailprofiles] AS s
        WHERE   [s].[servername] = 'SMTP.Contoso.com'

        SELECT @TSQL

        EXEC [sys].[sp_executesql] @TSQL

    This worked well for me, and testing the email function with EXEC dbo.sp_send_dbmail afterwards showed that the settings were indeed using our new Exchange server. It was only later, in writing this blog, that I tried running the sysmail_update_account_sp procedure with only the SMTP server name parameter value specified. Despite what Books OnLine might intimate, you can do this, and only the values for the parameters specified get changed; if a parameter is not specified in the execution of the procedure then its value remains unchanged. This renders most of the above script unnecessary, as I could have simply specified the account_id that I want to amend and the new value for the parameter I want to update.

        EXEC sysmail_update_account_sp
            @account_id = 1,
            @mailserver_name = 'NEWSMTP.Contoso.com'

    This wasn't going to be the main reason for this post; it was meant to describe how to capture values from a stored procedure and use them in dynamic TSQL, but instead we are here and (re)learning the fact that Books Online is a little flawed in places. It is a fantastic resource for anyone working with SQL Server, but the reader must adopt an enquiring frame of mind and use a little curiosity to try simple variations on the examples to fully understand the code they are working with. I think the author(s) of this part of Books OnLine missed an opportunity to include a third example with fewer than all parameters specified, to give a lead to this method existing.
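    As a footnote, once the account has been amended it is worth a quick verification pass. The sketch below assumes a hypothetical profile name and recipient address; it simply re-reads the account settings, sends a test message and checks the Database Mail log for errors.

        USE [msdb]
        GO
        -- Confirm the servername column now shows the new SMTP server
        EXEC dbo.sysmail_help_account_sp @account_id = 1;

        -- Send a test message (profile name and recipient are placeholders)
        EXEC dbo.sp_send_dbmail
            @profile_name = 'DBA Notifications',
            @recipients   = 'dba.team@contoso.com',
            @subject      = 'DBMail test after SMTP server change',
            @body         = 'Sent via NEWSMTP.Contoso.com';

        -- Any send failures are recorded here
        SELECT TOP (10) *
        FROM   dbo.sysmail_event_log
        ORDER BY log_date DESC;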

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #004

    - by pinaldave
    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2006
    Auto Generate Script to Delete Deprecated Fields in Current Database: Early in my career, every time I had to drop a column I had a hard time doing it, because I was scared that the column might be needed somewhere in the code. Due to this fear I never dropped any column; I just renamed it. If the column which I renamed was needed afterwards, it was very easy to rename it back again. However, it is not recommended to keep the deleted column renamed in the database. At regular intervals I used to drop the columns which were prefixed with a specific word. This script is 6 years old but still works. Give it a look; I am open to improvements.

    2007
    Shrinking Truncate Log File – Log Full – Part 2: Shrinking a database or MDF file is indeed a bad thing and it creates lots of problems. However, once in a while there is a legitimate requirement to shrink the log file – a very rare one. On that rare occasion, shrinking or truncating the log file may be the only solution. However, one should make sure to take a backup before and after the truncate or shrink, as in case of a disaster they can be very useful. Remember that truncating the log file will break the log chain, which can create a major issue during a restore. Anyway, use this feature with caution.

    2008
    Simple Use of Cursor to Print All Stored Procedures of Database Including Schema: This is a very interesting requirement I used to face in my early career days: I needed to print all the stored procedures of my database. Interestingly enough, I had written a cursor to do so. Today when I look back at this stored procedure, I believe there is probably a much cleaner way to do the same task; however, I still use this SP quite often when I have to document all the stored procedures of my database.
    Interesting Observation about Order of Resultset without ORDER BY: In industry, many developers avoid using the ORDER BY clause to display results in a particular order, thinking that the index enforces the order. In this interesting example, I demonstrate that without using ORDER BY, the same table and a similar query can return different results. The query optimizer always returns results using whichever method is optimized for performance. The lesson: there is no order unless ORDER BY is used.

    2009
    Size of Index Table – A Puzzle to Find Index Size for Each Index on Table: I asked this puzzle earlier, where I asked how to find the index size for each index on a table. The puzzle was very well received and lots of interesting answers came in. To answer this question I have written the following blog posts. I suggest that this weekend you try to solve this problem and see if you can come up with a better solution. If not, well, here are the solutions. Solution 1 | Solution 2 | Solution 3
    Understanding Table Hints with Examples: Hints are options and strong suggestions specified for enforcement by the SQL Server query processor on DML statements. The hints override any execution plan the query optimizer might select for a query. The SQL Server query optimizer is a very smart tool and it makes a better selection of execution plan. Suggesting hints to the query optimizer should be attempted only when absolutely necessary, and by experienced developers who know exactly what they are doing (or in development, as a way to experiment and learn).
    Interesting Observation – TOP 100 PERCENT and ORDER BY: I have seen developers and DBAs using TOP very casually when they have to use the ORDER BY clause. Theoretically, there is no need for ORDER BY in a view at all. All the ordering should be done outside the view, and the view should just have the SELECT statement in it. It was quite common to save this extra typing by including the ordering inside the view. In several instances developers want a complete resultset, and so they include TOP 100 PERCENT along with ORDER BY, assuming that this will simulate a SELECT statement with ORDER BY.

    2010
    SQLPASS Nov 8-11, 2010 – Seattle – An Alternative Look at Experience: In 2010 I attended the most prestigious SQL Server event, SQLPASS, between Nov 8-11, 2010 in Seattle. I have only one expression for the event: Best Summit Ever. Instead of writing about my usual routine or the event, I wrote about the interesting things I did and how I felt about them! When I go back and read it, I feel that this was the best event I attended in 2010.
    Change Database Access to Single User Mode Using SSMS: The image says it all.

    2011
    SQL Server 2012 has introduced new analytic functions. These functions were long awaited and I am glad that they are now here. Previously, when any of these functions was needed, people used to write long T-SQL code to simulate them. Now there is no need to do so. Having native functions available also helps performance as well as readability. Each function is covered both on SQLAuthority and on MSDN: CUME_DIST, FIRST_VALUE, LAST_VALUE, LEAD, LAG, PERCENTILE_CONT, PERCENTILE_DISC, PERCENT_RANK.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
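    To give a flavour of the new analytic functions mentioned above, here is a minimal sketch of LEAD and LAG against a small invented table variable; the table, column names and values are hypothetical and only illustrate the syntax introduced in SQL Server 2012.

        DECLARE @Sales TABLE (SaleDate DATE, Amount MONEY);
        INSERT INTO @Sales VALUES ('2012-01-01', 100), ('2012-01-02', 150), ('2012-01-03', 90);

        SELECT  SaleDate,
                Amount,
                LAG(Amount)  OVER (ORDER BY SaleDate) AS PreviousAmount,  -- prior row's value
                LEAD(Amount) OVER (ORDER BY SaleDate) AS NextAmount       -- next row's value
        FROM    @Sales
        ORDER BY SaleDate;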

    Read the article

  • AppHarbor - Azure Done Right AKA Heroku for .NET

    - by Robz / Fervent Coder
    Easy and Instant deployments and instant scale for .NET? Awhile back a few of us were looking at Ruby Gems as the answer to package management for .NET. The gems platform supported the concept of DLLs as packages although some changes would have needed to happen to have long term use for the entire community. From that we formed a partnership with some folks at Microsoft to make v2 into something that would meet wider adoption across the community, which people now call NuGet. So now we have the concept of package management. What comes next? Heroku Instant deployments and instant scaling. Stupid simple API. This is Heroku. It doesn’t sound like much, but when you think of how fast you can go from an idea to having someone else tinker with it, you can start to see its power. In literally seconds you can be looking at your rails application deployed and online. Then when you are ready to scale, you can do that. This is power. Some may call this “cloud-computing” or PaaS (Platform as a Service). I first ran into Heroku back in July when I met Nick of RubyGems.org. At the time there was no alternative in the .NET-o-sphere. I don’t count Windows Azure, mostly because it is not simple and I don’t believe there is a free version. Heroku itself would not lend itself well to .NET due to the nature of platforms and each language’s specific needs (solution stack).  So I tucked the idea in the back of my head and moved on. AppHarbor Enters The Scene I’m not sure when I first heard about AppHarbor as a possible .NET version of Heroku. It may have been in November, but I didn’t actually try it until January. I was instantly hooked. AppHarbor is awesome! It still has a ways to go to be considered Heroku for .NET, but it already has a growing community. I created a video series (at the bottom of this post) that really highlights how fast you can get a product onto the web and really shows the power and simplicity of AppHarbor. Deploying is as simple as a git/hg push to appharbor. From there they build your code, run any unit tests you have and deploy it if everything succeeds. The screen on the right shows a simple and elegant UI to getting things done. The folks at AppHarbor graciously gave me a limited number of invites to hand out. If you are itching to try AppHarbor then navigate to: https://appharbor.com/account/new?inviteCode=ferventcoder  After playing with it, send feedback if you want more features. Go vote up two features I want that will make it more like Heroku. Disclaimer: I am in no way affiliated with AppHarbor and have not received any funds or favors from anyone at AppHarbor. I just think it is awesome and I want others to know about it. From Zero To Deployed in 15 Minutes (Or Less) Now I have a challenge for you. I created a video series showing how fast I could go from nothing to a deployed application. It could have been from Zero to Deployed in Less than 5 minutes, but I wanted to show you the tools a little more and give you an opportunity to beat my time. And that’s the challenge. Beat my time and show it in a video response. The video series is below (at least one of the videos has to be watched on YouTube). The person with the best time by March 15th @ 11:59PM CST will receive a prize. 
Ground rules:
- .NET application with a valid database connection
- Start from zero
- Deployed with AppHarbor or an alternative
- A timer displayed in the video that runs during the entire process
- Video response published on YouTube or an acceptable alternative
- Video(s) must be published by March 15th at 11:59PM CST. Either post the link here as a comment or on YouTube as a response (also by 11:59PM CST March 15th)

The video series:
- From Zero To Deployed In 15 Minutes (Or Less) Part 1
- From Zero To Deployed In 15 Minutes (Or Less) Part 2
- From Zero To Deployed In 15 Minutes (Or Less) Part 3
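For readers who have never used a push-to-deploy workflow, the "git push to appharbor" step described above boils down to adding the hosted repository as a remote and pushing to it. The remote name and URL below are placeholders; AppHarbor shows the real repository URL on your application page:

```bash
# Add the AppHarbor repository as a remote (URL is a placeholder) and push to deploy
git remote add appharbor https://example.appharbor.com/yourapp.git
git push appharbor master
```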

    Read the article

  • .NET development on a Retina MacBook Pro with Windows 8

    - by Jeff
    I remember sitting in Building 5 at Microsoft with some of my coworkers, when one of them came in with a shiny new 11” MacBook Air. It was nearly two years ago, and we found it pretty odd that the OEM’s building Windows machines sucked at industrial design in a way that defied logic. While Dell and HP were in a race to the bottom building commodity crap, Apple was staying out of the low-end market completely, and focusing on better design. In the process, they managed to build machines people actually wanted, and maintain an insanely high margin in the process. I stopped buying the commodity crap and custom builds in 2006, when Apple went Intel. As a .NET guy, I was still in it for Microsoft’s stack of development tools, which I found awesome, but had back to back crappy laptops from HP and Dell. After that original 15” MacBook Pro, I also had a Mac Pro tower (that I sold after three years for $1,500!), a 27” iMac, and my favorite, a 17” MacBook Pro (the unibody style) with an SSD added from OWC. The 17” was a little much to carry around because it was heavy, but it sure was nice getting as much as eight hours of battery life, and the screen was amazing. When the rumors started about a 15” model with a “retina” screen inspired by the Air, I made up my mind I wanted one, and ordered it the day it came out. I sold my 17”, after three years, for $750 to a friend who is really enjoying it. I got the base model with the upgrade to 16 gigs of RAM. It feels solid for being so thin, and if you’ve used the third generation iPad or the newer iPhone, you’ll be just as thrilled with the screen resolution. I’m typically getting just over six hours of battery life while running a VM, but Parallels 8 allegedly makes some power improvements, so we’ll see what happens. (It was just released today.) The nice thing about VM’s are that you can run more than one at a time. Primarily I run the Windows 8 VM with four cores (the laptop is quad-core, but has 8 logical cores due to hyperthreading or whatever Intel calls it) and 8 gigs of RAM. I also have a Windows Server 2008 R2 VM I spin up when I need to test stuff in a “real” server environment, and I give it two cores and 4 gigs of RAM. The Windows 8 VM spins up in about 8 seconds. Visual Studio 2012 takes a few more seconds, but count part of that as the “ReSharper tax” as it does its startup magic. The real beauty, the thing I looked most forward to, is that beautifully crisp C# text. Consolas has never looked as good as it does at 10pt. as it does on this display. You know how it looks great at 80pt. when conference speakers demo stuff on a projector? Think that sharpness, only tiny. It’s just gorgeous. Beyond that, everything is just so responsive and fast. Builds of large projects happen in seconds, hundreds of unit tests run in seconds… you just don’t spend a lot of time waiting for stuff. It’s kind of painful to go back to my 27” iMac (which would be better if I put an SSD in it before its third birthday). Are there negatives? A few minor issues, yes. As is the case with OS X, not everything scales right. You’ll see some weirdness at times with splash screens and icons and such. Chrome’s text rendering (in Windows) is apparently not aware of how to deal with higher DPI’s, so text is fuzzy (the OS X version is super sharp, however). You’ll also have to do some fiddling with keyboard settings to use the Windows 8 keyboard shortcuts. Overall, it’s as close to a no-compromise development experience as I’ve ever had. 
I’m not even going to bother with Boot Camp because the VM route already exceeds my expectations. You definitely get what you pay for. If this one also lasts three years and I can turn around and sell it, it’s worth it for something I use every day.

    Read the article

  • Unexpected advantage of Engineered Systems

    - by user12244672
    It's not surprising that Engineered Systems accelerate the debugging and resolution of customer issues. But what has surprised me is just how much faster issue resolution is with Engineered Systems such as SPARC SuperCluster. These are powerful, complex, systems used by customers wanting extreme database performance, app performance, and cost saving server consolidation. A SPARC SuperCluster consists or 2 or 4 powerful T4-4 compute nodes, 3 or 6 extreme performance Exadata Storage Cells, a ZFS Storage Appliance 7320 for general purpose storage, and ultra fast Infiniband switches.  Each with its own firmware. It runs Solaris 11, Solaris 10, 11gR2, LDoms virtualization, and Zones virtualization on the T4-4 compute nodes, a modified version of Solaris 11 in the ZFS Storage Appliance, a modified and highly tuned version of Oracle Linux running Exadata software on the Storage Cells, another Linux derivative in the Infiniband switches, etc. It has an Infiniband data network between the components, a 10Gb data network to the outside world, and a 1Gb management network. And customers can run whatever middleware and apps they want on it, clustered in whatever way they want. In one word, powerful.  In another, complex. The system is highly Engineered.  But it's designed to run general purpose applications. That is, the physical components, configuration, cabling, virtualization technologies, switches, firmware, Operating System versions, network protocols, tunables, etc. are all preset for optimum performance and robustness. That improves the customer experience as what the customer runs leverages our technical know-how and best practices and is what we've tested intensely within Oracle. It should also make debugging easier by fixing a large number of variables which would otherwise be in play if a customer or Systems Integrator had assembled such a complex system themselves from the constituent components.  For example, there's myriad network protocols which could be used with Infiniband.  Myriad ways the components could be interconnected, myriad tunable settings, etc. But what has really surprised me - and I've been working in this area for 15 years now - is just how much easier and faster Engineered Systems have made debugging and issue resolution. All those error opportunities for sub-optimal cabling, unusual network protocols, sub-optimal deployment of virtualization technologies, issues with 3rd party storage, issues with 3rd party multi-pathing products, etc., are simply taken out of the equation. All those error opportunities for making an issue unique to a particular set-up, the "why aren't we seeing this on any other system ?" type questions, the doubts, just go away when we or a customer discover an issue on an Engineered System. It enables a really honed response, getting to the root cause much, much faster than would otherwise be the case. Here's a couple of examples from the last month, one found in-house by my team, one found by a customer: Example 1: We found a node eviction issue running 11gR2 with Solaris 11 SRU 12 under extreme load on what we call our ExaLego test system (mimics an Exadata / SuperCluster 11gR2 Exadata Storage Cell set-up).  We quickly established that an enhancement in SRU12 enabled an 11gR2 process to query Infiniband's Subnet Manager, replacing a fallback mechanism it had used previously.  Under abnormally heavy load, the query could return results which were misinterpreted resulting in node eviction.  
In several daily joint debugging sessions between the Solaris, Infiniband, and 11gR2 teams, the issue was fully root caused, evaluated, and a fix agreed upon.  That fix went back into all Solaris releases the following Monday.  From initial issue discovery to the fix being put back into all Solaris releases was just 10 days. Example 2: A customer reported sporadic performance degradation.  The reasons were unclear and the information sparse.  The SPARC SuperCluster Engineered Systems support teams which comprises both SPARC/Solaris and Database/Exadata experts worked to root cause the issue.  A number of contributing factors were discovered, including tunable parameters.  An intense collaborative investigation between the engineering teams identified the root cause to a CPU bound networking thread which was being starved of CPU cycles under extreme load.  Workarounds were identified.  Modifications have been put back into 11gR2 to alleviate the issue and a development project already underway within Solaris has been sped up to provide the final resolution on the Solaris side.  The fixed SPARC SuperCluster configuration greatly aided issue reproduction and dramatically sped up root cause analysis, allowing the correct workarounds and fixes to be identified, prioritized, and implemented.  The customer is now extremely happy with performance and robustness.  Since the configuration is common to other customers, the lessons learned are being proactively rolled out to other customers and incorporated into the installation procedures for future customers.  This effectively acts as a turbo-boost to performance and reliability for all SPARC SuperCluster customers.  If this had occurred in a "home grown" system of this complexity, I expect it would have taken at least 6 months to get to the bottom of the issue.  But because it was an Engineered System, known, understood, and qualified by both the Solaris and Database teams, we were able to collaborate closely to identify cause and effect and expedite a solution for the customer.  That is a key advantage of Engineered Systems which should not be underestimated.  Indeed, the initial issue mitigation on the Database side followed by final fix on the Solaris side, highlights the high degree of collaboration and excellent teamwork between the Oracle engineering teams.  It's a compelling advantage of the integrated Oracle Red Stack in general and Engineered Systems in particular.

    Read the article

  • Cookbook: SES and UCM setup

    - by George Maggessy
    The purpose of this post is to guide you through setting up the integration between UCM and SES. In my next post I'll show different approaches to integrate WebCenter Portal, UCM and SES based on some common scenarios. Let's get started.

WebCenter Content Configuration
WebCenter Content has a component that adds functionality to the content server to allow it to be searched via Oracle SES. To enable the component installation, go to Administration -> Admin Server and select SESCrawlerExport. Click the update button and restart the UCM_server1 managed server. Once the managed server is back, we'll configure the component. In the menu, under Administration, you should see SESCrawlerExport. Click on the link, then click on Configure SESCrawlerExport and configure the values below:
- Hostname: SES hostname.
- Feed Location: Directory where data feeds will be saved.
- Metadata List: List of metadata that will be searchable by SES.
After updating the values click on the Update button. Come back to the SESCrawlerExport Administration UI and click on the Take Snapshot button. It will create the data feeds in the specified Feed Location. To check that the configuration is correct, access the following URL: http://<ucm_server>:<port>/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONFIG&source=default. It should download a config file in the format below:

<?xml version="1.0" encoding="UTF-8"?>
<rsscrawler xmlns="http://xmlns.oracle.com/search/rsscrawlerconfig">
  <feedLocation><![CDATA[http://adc6160699.us.oracle.com:16200/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONTROL&source=default]]></feedLocation>
  <errorFileLocation><![CDATA[http://adc6160699.us.oracle.com:16200/cs/idcplg?IdcService=SES_CRAWLER_STATUS&IsJava=1&source=default&StatusFeed=]]></errorFileLocation>
  <feedType>controlFeed</feedType>
  <sourceName>default</sourceName>
  <securityType>attributeBased</securityType>
  <securityAttribute name="Account" grant="true"/>
  <securityAttribute name="DocSecurityGroup" grant="true"/>
</rsscrawler>

Make sure the Account and DocSecurityGroup values are true.

SES Configuration
Let's start by configuring the Identity Plug-ins in SES. Go to Global Settings -> System -> Identity Management Setup. Select Oracle Content Server and click the Activate button. We'll populate the following values:
- HTTP endpoint for authentication: URL to WebCenter Content. Notice that /cs/idcplg was added at the end of the URL.
- Admin User: UCM Admin user. This user must have access to all CPOE content.
- Password: Password of the Admin user.
- Authentication Type: NATIVE.
Go back to the Home tab and click on Sources on the top left. Select Oracle Content Server on the right and click the Create button.
- Configuration URL: URL that points to the configuration file. Example: http://<ucm_hostname>:<port>/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONFIG&source=default.
- User ID: UCM Admin user.
- Password: Password of the Admin user.
Click on the Authorization tab and add the appropriate values to the fields below. Make sure you see the ACCOUNT and DOCSECURITYGROUP security attributes at the end of the page.
- HTTP endpoint for authorization: http://<ucm_hostname>:<port>/cs/idcplg.
- Display URL prefix: http://<ucm_hostname>:<port>/cs.
- Administrator user: UCM Admin user.
- Administrator password.
On the Document Types tab, add the document types that should be indexed by SES. As our last step, we'll configure the Federation Trusted Entities under Global Settings.
- Entity Name: The user must be present in both the identity management server configured for your WebCenter application and the identity management server configured for Oracle SES. For instance, I used weblogic in my sample.
- Password: Entity user password.
Now you are ready to test the integration in the SES UI: http://<ses hostname>:<port>/search/query/.

    Read the article

  • T-SQL Tuesday #34: Help! I Need Somebody!

    - by Most Valuable Yak (Rob Volk)
    Welcome everyone to T-SQL Tuesday Episode 34!  When last we tuned in, Mike Fal (b|t) hosted Trick Shots.  These highlighted techniques or tricks that you figured out on your own which helped you understand SQL Server better. This month, I'm asking you to look back this past week, year, century, or hour...to a time when you COULDN'T figure it out.  When you were stuck on a SQL Server problem and you had to seek help. In the beginning... SQL Server has changed a lot since I started with it.  <Cranky Old Guy> Back in my day, Books Online was neither.  There were no blogs. Google was the third-place search site. There were perhaps two or three community forums where you could ask questions.  (Besides the Microsoft newsgroups...which you had to access with Usenet.  And endure the wrath of...Celko.)  Your "training" was reading a book, made from real dead trees, that you bought from your choice of brick-and-mortar bookstore. And except for your local user groups, there were no conferences, seminars, SQL Saturdays, or any online video hookups where you could interact with a person. You'd have to call Microsoft Support...on the phone...a LANDLINE phone.  And none of this "SQL Family" business!</Cranky Old Guy> Even now, with all these excellent resources available, it's still daunting for a beginner to seek help for SQL Server.  The product is roughly 1247.4523 times larger than it was 15 years ago, and it's simply impossible to know everything about it.*  So whether you are a beginner, or a seasoned pro of over a decade's experience, what do you do when you need help on SQL Server? That's so meta... In the spirit of offering help, here are some suggestions for your topic: Tell us about a person or SQL Server community who have been helpful to you.  It can be about a technical problem, or not, e.g. someone who volunteered for your local SQL Saturday.  Sing their praises!  Let the world know who they are! Do you have any tricks for using Books Online?  Do you use the locally installed product, or are you completely online with BOL/MSDN/Technet, and why? If you've been using SQL Server for over 10 years, how has your help-seeking changed? Are you using Twitter, StackOverflow, MSDN Forums, or another resource that didn't exist when you started? What made you switch? Do you spend more time helping others than seeking help? What motivates you to help, and how do you contribute? Structure your post along the lyrics to The Beatles song Help! Audio or video renditions are particularly welcome! Lyrics must include reference to SQL Server terminology or community, and performances must be in your voice or include you playing an instrument. These are just suggestions, you are free to write whatever you like.  Bonus points if you can incorporate ALL of these into a single post.  (Or you can do multiple posts, we're flexible like that.)  Help us help others by showing how others helped you! Legalese, Your Rights, Yada yada... If you would like to participate in T-SQL Tuesday please be sure to follow the rules below: Your blog post must be published between Tuesday, September 11, 2012 00:00:00 GMT and Wednesday, September 12, 2012 00:00:00 GMT. Include the T-SQL Tuesday logo (above) and hyperlink it back to this post. If you don’t see your post in trackbacks, add the link to the comments below. If you are on Twitter please tweet your blog using the #TSQL2sDay hashtag.  I can be contacted there as @sql_r, in case you have questions or problems with comments/trackback.  
I'll have a follow-up post listing all the contributions as soon as I can. Thank you all for participating, and special thanks to Adam Machanic (b|t) for all his help and for continuing this series!

    Read the article

  • Think before you animate

    - by David Paquette
    Animations are becoming more and more common in our applications.  With technologies like WPF, Silverlight and jQuery, animations are becoming easier for developers to use (and abuse).  When used properly, animation can augment the user experience.  When used improperly, animation can degrade the user experience.  Sometimes, the differences can be very subtle. I have recently made use of animations in a few projects and I very quickly realized how easy it is to abuse animation techniques.  Here are a few things I have learned along the way. 1) Don’t animate for the sake of animating We’ve all seen the PowerPoint slides with annoying slide transitions that animate 20 different ways.  It’s distracting and tacky.  The same holds true for your application.  While animations are fun and becoming easy to implement, resist the urge to use the technology just because you think the technology is amazing.   2) Animations should (and do) have meaning I recently built a simple Windows Phone 7 (WP7) application, Steeped (download it here).  The application has 2 pages.  The first page lists a number of tea types.  When the user taps on one of the tea types, the application navigates to the second page with information about that tea type and some options for the user to choose from.       One of the last things I did before submitting Steeped to the marketplace was add a page transition between the 2 pages.  I choose the Slide / Fade Out transition.  When the user selects a tea type, the main page slides to the left and fades out.  At the same time, the details page slides in from the right and fades in.  I tested it and thought it looked great so I submitted the app.  A few days later, I asked a friend to try the app.  He selected a tea type, and I was a little surprised by how he used the app.  When he wanted to navigate back to the main page, instead of pressing the back button on the phone, he tried to use a swiping gesture.  Of course, the swiping gesture did nothing because I had not implemented that feature.  After thinking about it for a while, I realized that the page transition I had chosen implied a particular behaviour.  As a user, if an action I perform causes an item (in this case the page) to move, then my expectation is that I should be able to move it back.  I have since added logic to handle the swipe gesture and I think the app flows much better now. When using animation, it pays to ask yourself:  What story does this animation tell my users?   3) Watch the replay Some animations might seem great initially but can get annoying over time.  When you use an animation in your application, make sure you try using it over and over again to make sure it doesn’t get annoying.  When I add an animation, I try watch it at least 25 times in a row.  After watching the animation repeatedly, I can make a more informed decision whether or not I should keep the animation.  Often, I end up shortening the length of the animations.   4) Don’t get in the users way An animation should never slow the user down.  When implemented properly, an animation can give a perceived bump in performance.  A good example of this is a the page transitions in most of the built in apps on WP7.  Obviously, these page animations don’t make the phone any faster, but they do provide a more responsive user experience.  Why?  Because most of the animations begin as soon as the user has performed some action.  
The destination page might not be fully loaded yet, but the system responded immediately to the user's action, giving the impression that the system is more responsive. If the user did not see anything happen until after the destination page was fully loaded, the application would feel clumsy and slow. Also, it is important to make sure the animation does not degrade the performance (or perceived performance) of the application. Just a few things to consider when using animations. As is the case with many technologies, we often learn how to misuse them before we learn how to use them effectively.

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of change against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things. One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging.   If you’re not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn’t normally get back without running another query (or with a trigger, I guess, but that’s not pretty). That inserted table I referenced – that’s part of the ‘behind-the-scenes’ work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn’t mean that an update is a delete followed by an inserted, it’s just the way it’s handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff. MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it’s new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don’t worry about the fact that I turned on IDENTITY_INSERT, that’s just so that I could insert the values) One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like “WHERE CURRENT OF …”, and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient. But as cool as $action is, that’s not the point of my post. If it were, I hope you’d all be disappointed, as you can’t really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a ‘src’ field, that wasn’t used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you’re needing to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn’t need to insert that into the actual table, just into a table for audit. 
This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you’re American) and using a regular INSERT statement. This is also doable if you’re using MERGE to just do INSERTs. In case you hadn’t realised, you can use MERGE in place of an INSERT statement. It’s just like the UPSERT-style statement we’ve just seen, except that we want nothing to match. That’s easy to do, we just use ON 1=2. This is obviously more convoluted than a straight INSERT. And it’s slightly more effort for the database engine too. But, if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table’s columns... Yes, of course. That’s what deleted and inserted give you.
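As a concrete illustration of hooking a source-only column into OUTPUT, here is a minimal sketch; the temporary tables, columns and data are invented for the example rather than taken from the post:

```sql
-- Invented tables purely for illustration
CREATE TABLE #Target (ID INT PRIMARY KEY, Name VARCHAR(50));
CREATE TABLE #Source (ID INT, Name VARCHAR(50), src VARCHAR(20));
INSERT INTO #Target VALUES (1, 'Old Name');
INSERT INTO #Source VALUES (1, 'New Name', 'FeedA'), (2, 'Brand New', 'FeedB');

-- UPSERT with OUTPUT: $action shows what happened to each row, and the
-- source-only column s.src comes back even though it is never stored in #Target
MERGE #Target AS t
USING #Source AS s
    ON t.ID = s.ID
WHEN MATCHED THEN
    UPDATE SET Name = s.Name
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, Name) VALUES (s.ID, s.Name)
OUTPUT $action, deleted.Name AS OldName, inserted.Name AS NewName, s.src;

DROP TABLE #Target, #Source;
```

Adding INTO after the OUTPUT clause, or wrapping the whole MERGE in an INSERT ... SELECT, would send those same rows to an audit table instead of returning them to the client.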

    Read the article

  • My Feelings About Microsoft Surface

    - by Valter Minute
    Advice: read the title carefully, I’m talking about “feelings” and not about advanced technical points proved in a scientific and objective way I still haven’t had a chance to play with a MS Surface tablet (I would love to, of course) and so my ideas just came from reading different articles on the net and MS official statements. Remember also that the MVP motto begins with “Independent” (“Independent Experts. Real World Answers.”) and this is just my humble opinion about a product and a technology. I know that, being an MS MVP you can be called an “MS-fanboy”, I don’t care, I hope that people can appreciate my opinion, even if it doesn’t match theirs. The “Surface” brand can be confusing for techies that knew the “original” surface concept but I think that will be a fresh new brand name for most of the people out there. But marketing department are here to confuse people… so I can understand this “recycle” of an existing name. So Microsoft is entering the hardware arena… for me this is good news. Microsoft developed some nice hardware in the past: the xbox, zune (even if the commercial success was quite limited) and, last but not least, the two arc mices (old and new model) that I use and appreciate. In the past Microsoft worked with OEMs and that model lead to good and bad things. Good thing (for microsoft, at least) is market domination by windows-based PCs that only in the last years has been reduced by the return of the Mac and tablets. Google is also moving in the hardware business with its acquisition of Motorola, and Apple leveraged his control of both the hardware and software sides to develop innovative products. Microsoft can scare OEMs and make them fly away from windows (but where?) or just lead the pack, showing how devices should be designed to compete in the market and bring back some of the innovation that disappeared from recent PC products (look at the shelves of your favorite electronics store and try to distinguish a laptop between the huge mass of anonymous PCs on displays… only Macs shine out there…). Having to compete with MS “official” hardware will force OEMs to develop better product and bring back some real competition in a market that was ruled only by prices (the lower the better even when that means low quality) and no innovative features at all (when it was the last time that a new PC surprised you?). Moving into a new market is a big and risky move, but with Windows 8 Microsoft is playing a crucial move for its future, trying to be back in the innovation run against apple and google. MS can’t afford to fail this time. I saw the new devices (the WinRT and Pro) and the specifications are scarce, misleading and confusing. The first impression is that the device looks like an iPad with a nice keyboard cover… Using “HD” and “full HD” to define display resolution instead of using the real figures and reviving the “ClearType” brand (now dead on Win8 as reported here and missed by people who hate to read text on displays, like myself) without providing clear figures (couldn’t you count those damned pixels?) seems to imply that MS was caught by surprise by apple recent “retina” displays that brought very high definition screens on tablets.Also there are no specifications about the processors used (even if some sources report NVidia Tegra for the ARM tablet and i5 for the x86 one) and expected battery life (a critical point for tablets and the point that killed Windows7 x86 based tablets). 
Also nothing about the price, and this will be another critical point because other platform out there already provide lots of applications and have a good user base, if MS want to enter this market tablets pricing must be competitive. There are some expansion ports (SD and USB), so no fixed storage model (even if the specs talks about 32-64GB for RT and 128-256GB for pro). I like this and don’t like the apple model where flash memory (that it’s dirt cheap used in thumdrives or SD cards) is as expensive as gold (or cocaine to have a more accurate per gram measurement) when mounted inside a tablet/phone. For big files you’ll be able to use external media and an SD card could be used to store files that don’t require super-fast SSD-like access times, I hope. To be honest I really don’t like the marketplace model and the limitation of Windows RT APIs (no local database? from a company that based a good share of its success on VB6+Access!) and lack of desktop support on the ARM (even if the support is here and has been used to port office). It’s a step toward the consumer market (where competitors are making big money), but may impact enterprise (and embedded) users that may not appreciate Windows 8 new UI or the limitations of the new app model (if you aren’t connected you are dead ). Not having compatibility with the desktop will require brand new applications and honestly made all the CPU cycles spent to convert .NET IL into real machine code in the past like a huge waste of time… as soon as a new processor architecture is supported by Windows you still have to rewrite part of your application (and MS is pushing HTML5+JS and native code more than .NET in my perception). On the other side I believe that the development experience provided by Visual Studio is still miles (or kilometres) ahead of the competition and even the all-uppercase menu of VS2012 hasn’t changed this situation. The new metro UI got mixed reviews. On my side I should say that is very pleasant to use on a touch screen, I like the minimalist design (even if sometimes is too minimal and hides stuff that, in my opinion, should be visible) but I should also say that using it with mouse and keyboard is like trying to pick your nose with boxing gloves… Metro is also very interesting for embedded devices where touch screen usage is quite common and where having an application taking all the screen is the norm. For devices like kiosks, vending machines etc. this kind of UI can be a great selling point. I don’t need a new tablet (to be honest I’m pretty happy with my wife’s iPad and with my PC), but I may change my opinion after having a chance to play a little bit with those new devices and understand what’s hidden under all this mysterious and generic announcements and specifications!

    Read the article

  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data. CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, to define one that has moved between parks as one, with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name. In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that, I mostly wasn't interested in ORM's. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of nHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic crud operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework. Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw some partial querying of certain tables (where you'll find image data), and you're splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there. The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff. Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver. Not everything is easy, though. 
When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL just in the interest of time. It's not that I couldn't do what I needed with EF, it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities because EF doesn't know what you're changing. Not really a big deal. There are a number of take-aways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORM's. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework into an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch. Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.
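To make the "code first against an existing schema" point concrete, here is a minimal C# sketch; the entity classes and table names are invented stand-ins rather than the actual CoasterBuzz schema:

```csharp
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration.Conventions;
using System.Linq;

// Invented entities, loosely modeled on the parks/coasters described in the post
public class Park
{
    public int ParkID { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Coaster> Coasters { get; set; }
}

public class Coaster
{
    public int CoasterID { get; set; }
    public string Name { get; set; }
    public int ParkID { get; set; }
    public virtual Park Park { get; set; }   // navigation property back to the park
}

public class CoasterContext : DbContext
{
    public DbSet<Park> Parks { get; set; }
    public DbSet<Coaster> Coasters { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // An existing schema rarely matches EF's conventions, so map names explicitly
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
        modelBuilder.Entity<Park>().ToTable("tbl_Park");       // hypothetical table name
        modelBuilder.Entity<Coaster>().ToTable("tbl_Coaster"); // hypothetical table name
    }
}

public class CoasterRepository
{
    // One line of repository code returns a coaster with its park already attached
    public Coaster GetCoaster(int id)
    {
        using (var db = new CoasterContext())
        {
            return db.Coasters.Include("Park").SingleOrDefault(c => c.CoasterID == id);
        }
    }
}
```

The Include call is what produces the rich object graph mentioned above from a single repository method.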

    Read the article

  • how to label a cuboid using open gl?

    - by usha
    hi this is how my 3dcuboid looks ,i have attached complete code , i want to label this cuboid using different name across sides how is it possible using opengl in android...plz help me out public class MyGLRenderer implements Renderer { Context context; Cuboid rect; private float mCubeRotation; // private static float angleCube = 0; // Rotational angle in degree for cube (NEW) // private static float speedCube = -1.5f; // Rotational speed for cube (NEW) public MyGLRenderer(Context context) { rect = new Cuboid(); this.context = context; } public void onDrawFrame(GL10 gl) { // TODO Auto-generated method stub gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); gl.glLoadIdentity(); // Reset the model-view matrix gl.glTranslatef(0.2f, 0.0f, -8.0f); // Translate right and into the screen gl.glScalef(0.8f, 0.8f, 0.8f); // Scale down (NEW) gl.glRotatef(mCubeRotation, 1.0f, 1.0f, 1.0f); // gl.glRotatef(angleCube, 1.0f, 1.0f, 1.0f); // rotate about the axis (1,1,1) (NEW) rect.draw(gl); mCubeRotation -= 0.15f; //angleCube += speedCube; } public void onSurfaceChanged(GL10 gl, int width, int height) { // TODO Auto-generated method stub if (height == 0) height = 1; // To prevent divide by zero float aspect = (float)width / height; // Set the viewport (display area) to cover the entire window gl.glViewport(0, 0, width, height); // Setup perspective projection, with aspect ratio matches viewport gl.glMatrixMode(GL10.GL_PROJECTION); // Select projection matrix gl.glLoadIdentity(); // Reset projection matrix // Use perspective projection GLU.gluPerspective(gl, 45, aspect, 0.1f, 100.f); gl.glMatrixMode(GL10.GL_MODELVIEW); // Select model-view matrix gl.glLoadIdentity(); // Reset } public void onSurfaceCreated(GL10 gl, EGLConfig config) { // TODO Auto-generated method stub gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Set color's clear-value to black gl.glClearDepthf(1.0f); // Set depth's clear-value to farthest gl.glEnable(GL10.GL_DEPTH_TEST); // Enables depth-buffer for hidden surface removal gl.glDepthFunc(GL10.GL_LEQUAL); // The type of depth testing to do gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); // nice perspective view gl.glShadeModel(GL10.GL_SMOOTH); // Enable smooth shading of color gl.glDisable(GL10.GL_DITHER); // Disable dithering for better performance }} public class Cuboid{ private FloatBuffer mVertexBuffer; private FloatBuffer mColorBuffer; private ByteBuffer mIndexBuffer; private float vertices[] = { //width,height,depth -2.5f, -1.0f, -1.0f, 1.0f, -1.0f, -1.0f, 1.0f, 1.0f, -1.0f, -2.5f, 1.0f, -1.0f, -2.5f, -1.0f, 1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -2.5f, 1.0f, 1.0f }; private float colors[] = { // R,G,B,A COLOR 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f }; private byte indices[] = { // VERTEX 0,1,2,3,4,5,6,7 REPRESENTATION FOR FACES 0, 4, 5, 0, 5, 1, 1, 5, 6, 1, 6, 2, 2, 6, 7, 2, 7, 3, 3, 7, 4, 3, 4, 0, 4, 7, 6, 4, 6, 5, 3, 0, 1, 3, 1, 2 }; public Cuboid() { ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4); byteBuf.order(ByteOrder.nativeOrder()); mVertexBuffer = byteBuf.asFloatBuffer(); mVertexBuffer.put(vertices); mVertexBuffer.position(0); byteBuf = ByteBuffer.allocateDirect(colors.length * 4); byteBuf.order(ByteOrder.nativeOrder()); mColorBuffer = byteBuf.asFloatBuffer(); mColorBuffer.put(colors); mColorBuffer.position(0); mIndexBuffer = ByteBuffer.allocateDirect(indices.length); 
mIndexBuffer.put(indices); mIndexBuffer.position(0); } public void draw(GL10 gl) { gl.glFrontFace(GL10.GL_CW); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer); gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_COLOR_ARRAY); gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_COLOR_ARRAY); } } public class Draw3drect extends Activity { private GLSurfaceView glView; // Use GLSurfaceView // Call back when the activity is started, to initialize the view @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); glView = new GLSurfaceView(this); // Allocate a GLSurfaceView glView.setRenderer(new MyGLRenderer(this)); // Use a custom renderer this.setContentView(glView); // This activity sets to GLSurfaceView } // Call back when the activity is going into the background @Override protected void onPause() { super.onPause(); glView.onPause(); } // Call back after onPause() @Override protected void onResume() { super.onResume(); glView.onResume(); } }

    Read the article

  • WMemoryProfiler is Released

    - by Alois Kraus
    What is it? WMemoryProfiler is a managed profiling Api to aid integration testing. This free library can get managed heap statistics and memory usage for your own process (remember testing) and other processes as well. The best thing is that it does work from .NET 2.0 up to .NET 4.5 in x86 and x64. To make it more interesting it can attach to any running .NET process. The reason why I do mention this is that commercial profilers do support this functionality only for their professional editions. An normally only since .NET 4.0 since the profiling API only since then does support attaching to a running process. This thing does differ in many aspects from “normal” profilers because while profiling yourself you can get all objects from all managed heaps back as an object array. If you ever wanted to change the state of an object which does only exist a method local in another thread you can get your hands on it now … Enough theory. Show me some code /// <summary> /// Show feature to not only get statisics out of a process but also the newly allocated /// instances since the last call to MarkCurrentObjects. /// GetNewObjects does return the newly allocated objects as object array /// </summary> static void InstanceTracking() { using (var dumper = new MemoryDumper()) // if you have problems use to see the debugger windows true,true)) { dumper.MarkCurrentObjects(); Allocate(); ILookup<Type, object> newObjects = dumper.GetNewObjects() .ToLookup( x => x.GetType() ); Console.WriteLine("New Strings:"); foreach (var newStr in newObjects[typeof(string)] ) { Console.WriteLine("Str: {0}", newStr); } } } … New Strings: Str: qqd Str: String data: Str: String data: 0 Str: String data: 1 … This is really hot stuff. Not only you can get heap statistics but you can directly examine the new objects and make queries upon them. When I do find more time I can reconstruct the object root graph from it from my own process. It this cool or what? You can also peek into the Finalization Queue to check if you did accidentally forget to dispose a whole bunch of objects … /// <summary> /// .NET 4.0 or above only. Get all finalizable objects which are ready for finalization and have no other object roots anymore. /// </summary> static void NotYetFinalizedObjects() { using (var dumper = new MemoryDumper()) { object[] finalizable = dumper.GetObjectsReadyForFinalization(); Console.WriteLine("Currently {0} objects of types {1} are ready for finalization. Consider disposing them before.", finalizable.Length, String.Join(",", finalizable.ToLookup( x=> x.GetType() ) .Select( x=> x.Key.Name)) ); } } How does it work? The W of WMemoryProfiler is a good hint. It does employ Windbg and SOS dll to do the heavy lifting and concentrates on an easy to use Api which does hide completely Windbg. If you do not want to see Windbg you will never see it. In my experience the most complex thing is actually to download Windbg from the Windows 8 Stanalone SDK. This is described in the Readme and the exception you are greeted with if it is missing in much greater detail. So I will not go into this here.   What Next? Depending on the feedback I do get I can imagine some features which might be useful as well Calculate first order GC Roots from the actual object graph Identify global statics in Types in object graph Support read out of finalization queue of .NET 2.0 as well. 
Support memory dump analysis (again a feature only supported by commercial profilers in their professional editions, if it is supported at all). Deserialize objects from a memory dump back into a live process (this would need some more investigation but it is doable). The last item needs some explanation. Why on earth would you want to do that? The basic idea is to store some logging/tracing data in your live process which can become quite big, but since it is never written out it is very fast to generate. When your process crashes with a memory dump you could transfer this data structure back into a live viewer, which can then nicely display your program state at the point it crashed. This is an advanced troubleshooting technique I have not seen anywhere yet, but it could be quite useful. You can have a look here at the current feature list of WMemoryProfiler with some examples.

How To Get Started?
First I would download the released source package (it is tiny) and compile the complete project. Then you can compile the Example project (it has this name) and uncomment the scenario you want to check out in the main method. If you are greeted with an exception, it is time to install the Windows 8 Standalone SDK, which is described in great detail in the exception text. That's it for the first round. I have seen something more limited in the Java world some years ago (I cannot find the link anymore), but anyway, now we have something much better.
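Building on the API calls shown above, here is a small sketch of how the allocation tracking could back an integration-test style check. It uses only the members mentioned in the post (MemoryDumper, MarkCurrentObjects, GetNewObjects) and assumes the WMemoryProfiler assembly is referenced; the namespace import is left as a comment because the post does not name it:

```csharp
using System;
using System.Linq;
// plus a using directive for the WMemoryProfiler namespace that declares MemoryDumper

class AllocationCheck
{
    static void Main()
    {
        using (var dumper = new MemoryDumper())
        {
            dumper.MarkCurrentObjects();

            // ... run the code under test here ...
            var data = Enumerable.Range(0, 100).Select(i => "str" + i).ToArray();

            // Group the newly allocated objects by type, largest offenders first
            var byType = dumper.GetNewObjects()
                               .GroupBy(o => o.GetType())
                               .OrderByDescending(g => g.Count());

            foreach (var group in byType)
                Console.WriteLine("{0,6} new {1}", group.Count(), group.Key.Name);

            GC.KeepAlive(data);
        }
    }
}
```

A test could assert on these per-type counts to catch unexpected allocation growth between builds.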

    Read the article

  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming). Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement -- it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we can not (easily) prove otherwise. So this leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?". It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated that there is a window where the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying for consumption by a human, or by updating the entry based on the current state of the entry. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). For the latter case, the application must assume that the value might potentially be updated before it has a chance to update it. This almost aways the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry, a pessimistic approach. The optimistic approach relies on what is sometimes called a "fuzzy read". In other words, the application assumes that the read should be correct, but it also acknowledges that it might not be. 
(I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction -- however, both definitions are roughly equivalent from an application design perspective). If the read is not correct it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting a stale read is that is will cause the encompassing transaction to roll back if it tries to update that value. The pessimistic approach relies on a "coherent read", a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system). In none of cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily update but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads -- they can be chained and potentially cause cascading rollbacks). The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to define requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if they are critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
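As a minimal sketch of the pessimistic ("coherent read") approach described above; the cache name, key and value type are made up for illustration, and only standard Coherence NamedCache calls are used:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

// Lock, read, update, unlock: no other member can mutate the entry in between.
public class CoherentReadExample
{
    public static void main(String[] args)
    {
        NamedCache cache = CacheFactory.getCache("accounts");
        String key = "account-42";

        // Holding the lock guarantees the value read below cannot change
        // underneath us before we write it back (a coherent read).
        if (cache.lock(key, -1))   // -1 waits indefinitely for the lock
        {
            try
            {
                Integer balance = (Integer) cache.get(key);
                int current = (balance == null) ? 0 : balance.intValue();
                cache.put(key, Integer.valueOf(current + 10));
            }
            finally
            {
                cache.unlock(key);
            }
        }
    }
}
```

With the optimistic pattern, the lock/unlock pair disappears and the update instead states its assumption about the old value (for example via a conditional entry processor), retrying or rolling back if that assumption turns out to be stale.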

    Read the article

  • 2012&ndash;The End Of The World Review

    - by Tim Murphy
    The end of the world must be coming. Not because the Mayan calendar says so, but because Microsoft is innovating more than Apple. It has been a crazy year, with pundits declaring not that the end of the world is coming, but that the end of Microsoft is coming. Let's take a look at what 2012 has brought us.

The beginning of the year is a blur. I managed to get to TechEd in June, which was the first time I got to take a deep dive into Windows 8 and many other things that had been announced in 2011. The promise I saw in these products was really encouraging. The thought of being able to run Windows 8 from a thumb drive or have Hyper-V native to the OS told me that, at least for developers, good things were coming.

I finally got my feet wet with Windows 8 with the developer preview just prior to the RTM. While the initial experience was a bit of a culture shock, I quickly grew to love it. The media still seems to hold little love for the "reimagined" platform, but I think that once people spend some time with it they will enjoy the experience, and what the FUD mongers say will fade into the background. With the launch of the OS we finally got a look at the Surface. I think this is a bold entry into the tablet market. While I wish it was a little more affordable, I am already starting to see them in the wild being used by non-techies.

I was waiting for Windows Phone 8 at least as much as Windows 8, probably more. The new hardware, better marketing, and new OS features are, I think, finally going to push us to the point of having a real presence in the smartphone market. I am seeing a number of iPhone users picking up a Nokia Lumia 920 and getting rid of their brand new iPhone 5. The only real debacle I saw around the launch was when they held back the SDK from general developers.

Shortly after the launch events came Build 2012. I was extremely disappointed that I didn't make it to this year's Build. Even if they weren't handing out Surface and Lumia devices, I think the atmosphere and content were something that really needed to be experienced in person. Hopefully there will be a Build next year, and its schedule will be announced soon. As you would expect, Windows 8 and Windows Phone 8 development were the mainstay of the conference, but improvements in Azure also played a key role. This movement of services to the cloud will continue, and we need to understand where it best fits into the solutions we build.

Lower on the radar this year were Office 2013, SQL Server 2012, and Windows Server 2012. Their glory stolen by the consumer OS and hardware announcements, these new releases are no less important. Companies will see significant improvements in performance and capabilities if they upgrade. At TechEd they had shown some of the new features of Windows Server 2012 around hardware integration and Hyper-V performance, which absolutely blew me away. It is our job to bring these important improvements to our company's attention so that they can be leveraged.

Personally, the consulting business in 2012 was the busiest it has been in a long time. More companies were ready to attack new projects after several years of putting them on the back burner. I also worked to bring back momentum to the Chicago Information Technology Architects Group. Both the community and clients are excited about the new technologies that have come out in 2012, and now it is time to deliver.

What does 2013 have in store? I don't see it being quite as exciting as 2012. Microsoft will be releasing the Surface Pro in January, and it seems that we will see more frequent OS updates for Windows. There are rumors that we may see a Surface phone in 2013. It has also been announced that there will finally be a rework of the Xbox next fall. The new year will also be a time for us in the development community to take advantage of these new tools and devices. After all, it is what we build on top of these platforms that will attract more consumers and corporations to using them.

Just as I am 99.999% sure that the world is not going to end this year, I am also sure that Microsoft will move on and that most of this negative backlash from the media is actually fear and jealousy. In the end I think we have a promising year ahead of us.

    Read the article

  • Install Control Center Agent on Oracle Application Server

    - by qianqian.wu
    Control Center Agent (CCA)

The Control Center Agent is the OWB component that runs the Template Mappings in the Oracle Containers for J2EE (OC4J) server, also referred to as the J2EE Runtime. The Control Center Agent provides a Java-based runtime environment that can be installed on Oracle and non-Oracle database hosts, and it provides the fundamental infrastructure for the heterogeneous, Code Template-based mapping support and the Web services-related features of OWB in this release.

In Oracle Warehouse Builder 11gR2 the Control Center Agent by default runs in the built-in OC4J that is bundled in the Oracle Home. Besides that, you also have the ability to install the Control Center Agent into an Oracle Application Server installation. In this article, you will find step-by-step instructions on how to install the Control Center Agent on an Oracle Application Server instance. The instructions cover the following tasks:

Task 1: Install and Configure the Application Server
Task 2: Deploy the Control Center Agent to the Application Server
Task 3: Optional Configuration Tasks

Task 1: Install and Configure the Application Server

Before configuring the Application Server, you need to install it from the Oracle Application Server CD-ROM, or by downloading the installation program from Oracle Technology Network (OTN). Once the installation is completed, you are ready to configure the Application Server. The purpose of the configuration task is to make sure the Control Center Agent ear file can be deployed to the Application Server and run successfully. The essential configuration tasks are outlined below:

· Modify the OC4J Startup Script
· Set up Control Center Agent Server Side Logging
· Set up Audit Table Data Source
· Copy ct_permissions.properties File
· Set up Security Roles for Control Center Agent
· Create JMS Queues
· Install JDBC Drivers to OC4J

Modify the OC4J Startup Script

The OC4J startup script "opmn.xml" is located in the Application Server configuration directory, $AS_HOME/opmn/conf, where $AS_HOME stands for the root home directory of the application server. Open the file opmn.xml in a text editor and alter its contents so that the following conditions hold (a rough sketch of the resulting fragment appears after this list):

· MaxPermSize is set to 128M. This ensures that you allocate enough PermGen space to OC4J to run the Control Center Agent and prevents a java.lang.OutOfMemoryError when running the agent.
· The python.path setting points to the Python library files used by the Control Center Agent: jython_lib.zip and jython_owblib.jar. These two files are in the $OWB_HOME/owb/lib/int directory, where $OWB_HOME is the directory where OWB is installed.
· The km_security_needed setting determines whether restrictions are applied to the kinds of operating system commands that the OWB Code Template scripts executed by the Control Center Agent are allowed to run. Setting km_security_needed to "true" enforces such restrictions, while setting it to "false" removes them.
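For orientation, here is a rough sketch of the kind of opmn.xml fragment involved. The element layout follows the standard OPMN/OC4J descriptor, and only the 128M PermGen size and the python.path and km_security_needed names come from the article; the OC4J instance id ("home"), the use of -D system properties to pass those two settings, and the literal install paths are assumptions used to illustrate the shape of the change, not definitive values.

    <!-- Sketch only: instance id "home", the -D property style, and /u01/owb       -->
    <!-- (standing in for $OWB_HOME) are assumptions; verify against your install. -->
    <process-type id="home" module-id="OC4J" status="enabled">
      <module-data>
        <category id="start-parameters">
          <data id="java-options"
                value="-server -XX:MaxPermSize=128M
                       -Dpython.path=/u01/owb/owb/lib/int/jython_lib.zip:/u01/owb/owb/lib/int/jython_owblib.jar
                       -Dkm_security_needed=true"/>
        </category>
      </module-data>
    </process-type>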
Set up Control Center Agent Server Side Logging

Ensure that you are in the Application Server configuration directory, $AS_HOME/j2ee/home/config. Open the file j2ee-logging.xml in a text editor and add a jrt-internal-log-handler entry to the log handler section; this is the handler used by the Control Center Agent runtime logger to create log files. Then add an entry to the loggers section to create the logger for Control Center Agent runtime auditing.

Set up Audit Table Data Source

To enable Audit Table logging, a managed data source and connection pool need to be set up before the Control Center Agent is deployed. Ensure that you are in the Application Server configuration directory, $AS_HOME/j2ee/home/config. Open the file data-sources.xml in a text editor and define the audit data source shown below in the file:

    <managed-data-source name="AuditDS"
        connection-pool-name="OWBSYS Audit Connection Pool"
        jndi-name="jdbc/AuditDS"/>
    <connection-pool name="OWBSYS Audit Connection Pool">
      <connection-factory factory-class="oracle.jdbc.pool.OracleDataSource"
          user="owbsys_audit" password="owbsys_audit"
          url="jdbc:oracle:thin:@//localhost:1521/ORCL"/>
    </connection-pool>

Copy ct_permissions.properties File

The ct_permissions.properties file can be obtained from the $OWB_HOME/owb/jrt/config/ directory. You need to copy it to the $AS_HOME/j2ee/home/config directory. This properties file takes effect when the setting km-security is set to true in the Control Center Agent. By default, ALLOWED_CMD is commented out in the ct_permissions.properties file, which prevents any system command from being invoked from scripts executed in the Control Center Agent (when km-security is set to true). To allow certain system commands to be invoked, uncomment ALLOWED_CMD and add the commands that are allowed to be invoked to it.
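Purely as an illustration, an uncommented ALLOWED_CMD entry might look something like the sketch below. The key name comes from the article; the value format shown here (comma-delimited, full command paths) is an assumption, so check the comments shipped inside the file itself before relying on it.

    # Illustrative sketch - ALLOWED_CMD is the key named in the article;
    # the comma-delimited, full-path value format is assumed, not confirmed.
    ALLOWED_CMD=/bin/sh,/bin/ls,/bin/cat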
Set up Security Roles for Control Center Agent

You can set up the Control Center Agent security roles through Oracle Enterprise Manager. In a web browser, navigate to the Enterprise Manager homepage (e.g. http://hostname:8889/em).

1. Log in using the oc4jadmin credentials. After the Cluster Topology page is loaded, click home (the OC4J instance). This takes you to the home page of the OC4J instance. On the OC4J home screen, click the Administration tab. On the Administration Tasks screen, expand Security and click the task icon next to Security Providers.
2. On the Security Providers page, click the "Instance Level Security" button. On the Instance Level Security page, go to the "Realms" tab. You will see a row for the default realm "jazn.com" in the results table; it has a "Roles" column and a "Users" column. Click the number in the "Roles" column. The "Roles" page displays all the roles available for the realm. Click the "Create" button to create a new role, "OWB_J2EE_EXECUTOR".
3. On the Add Role screen, enter the Name OWB_J2EE_EXECUTOR and click OK.
4. Follow the same steps as before to create a new role, "OWB_J2EE_OPERATOR".
5. Assign the roles "oc4j-administrators" and "OWB_J2EE_EXECUTOR" to the role "OWB_J2EE_OPERATOR" by moving these roles out of "Available Roles", and click "OK" to save.
6. Go back to the Instance Level Security page and create a new role, "OWB_J2EE_ADMINISTRATOR".
7. Assign the roles "OWB_J2EE_OPERATOR" and "OWB_J2EE_EXECUTOR" to the role "OWB_J2EE_ADMINISTRATOR" by moving these roles out of "Available Roles", and click "OK" to save.
8. Go back to the Instance Level Security page. This time, click the number in the "Users" column for the realm "jazn.com". The "Users" page shows all the users defined for this realm. Locate the user "oc4jadmin" in the results table and click on it.
9. Assign the roles "OWB_J2EE_ADMINISTRATOR" and "oc4j-app-administrators" to this user by moving the roles from the "Available Roles" selection box to the "Selected Roles" box, and click "Apply" to save.
10. Go back to the Instance Level Security page and create a new role, "OWB_INTERNAL_USERS", assigning no user or role to it. Simply click "OK" to create this role.

Now you have finished creating the security roles required for the Control Center Agent.

Create JMS Queues

You need to create two JMS queues for the Control Center Agent: owbQueue and abort_owbQueue.

1. Go to the OC4J home page. On the OC4J home screen, click the Administration tab. On the Administration Tasks screen, expand Services and then expand Enterprise Messaging Service. Click the task icon next to JMS Destinations.
2. On the JMS Destinations page, click the "Create New" button to create a new JMS queue. On the Add Destination page, choose "Queue" as the Destination Type, enter "owbQueue" as the Destination Name, select "In Memory Persistence Only" as the Persistence Type, enter "jms/owbQueue" as the JNDI Location, and click "OK" to finish.
3. Follow the same instructions as above to create the abort_owbQueue.

Now you have finished creating the JMS queues required for the Control Center Agent.

Install JDBC Drivers to OC4J

In order to execute Code Templates using commercial databases other than Oracle (e.g. DB2, SQL Server, etc.), the corresponding JDBC driver files need to be added to the $AS_HOME/j2ee/home/applib directory.

1. To install other JDBC drivers to OC4J, first obtain the .jar file containing the JDBC driver. All the external JDBC driver .jar files can be found in the directory $OWB_HOME/owb/lib/ext/. For DB2, the files needed are db2jcc.jar and db2jcc_license_cu.jar. For SQL Server the file is sqljdbc.jar. For the Sunopsis JDBC drivers, the file needed is snpsxmlo.jar.
2. Copy the required JDBC driver file into the directory $AS_HOME/j2ee/home/applib.

Now you have finished the Application Server configuration. To make the configuration take effect, restart the Application Server.

Task 2: Deploy the Control Center Agent to the Application Server

Now you can deploy the Control Center Agent to the Application Server. In a web browser, navigate to the Enterprise Manager homepage (e.g. http://hostname:8889/em).

1. Log in using the oc4jadmin credentials. After the Cluster Topology page is loaded, click home (the OC4J instance). This takes you to the home page of the OC4J instance. On the OC4J home screen, click the Applications tab, then click Deploy to begin deploying the Control Center Agent.
2. On the Deploy: Select Archive screen, under Archive, select "Archive is present on local host. Upload the archive to the server where Application Server Control is running." Click Browse and locate the jrt.ear file in the $OWB_HOME/owb/jrt/applications directory. Under Deployment Plan, select "Automatically create a new deployment plan." Click Next.
3. Wait for the ear file to be uploaded to the Application Server. On the Deploy: Application Attributes screen, enter the Application Name jrt and the Context Root jrt. Leave the other attributes at their default values and click Next.
4. On the Deploy: Deployment Settings screen, leave all attributes at their default values and click Deploy. This takes a minute or so, and when the application is deployed successfully a confirmation message is displayed.

Now the Control Center Agent is started automatically. Go back to the OC4J home page and click the Applications tab to make sure the deployed application jrt shows up in the applications list.

Task 3: Optional Configuration Tasks

The optional configuration tasks are:

· Secure Control Center Agent Web Service
· Setting the PATH Environment Variable

Secure Control Center Agent Web Service

If you want to use JRTWebService with a secure website, you need to do the following steps (a rough sketch of the secure-web-site.xml involved follows the list):

1. Create a file "secure-web-site.xml" in the $AS_HOME/j2ee/home/config directory. The file can be obtained from the $OWB_HOME/owb/jrt/config directory. Modify the "protocol" to "https", set "secure" to "true", and choose a port to use as the secure HTTP port. Also add the "ssl-config" entry to the file, remembering to use the absolute path for the key store file.
2. Modify the file "server.xml", located in the $AS_HOME/j2ee/home/config directory, and add the <web-site> element for the secure web site.
3. Create a key store file "serverkeystore.jks" in the $AS_HOME/j2ee/home/config directory. The file can be obtained from the $OWB_HOME/owb/jrt/config directory.

After the three files are altered, restart the application server. Now you can access the JRTWebService over SSL at https://hostname:4443/jrt/webservice.
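Here is the promised sketch of a secure-web-site.xml along the lines the steps above describe. The <web-site> and <ssl-config> element names follow the standard OC4J web-site descriptor, and port 4443 matches the URL above; the jrt web-app entry, the display name, the log path, and the keystore password are assumptions included only to show the shape of the file. Take the real starting point from $OWB_HOME/owb/jrt/config.

    <!-- Sketch only: the jrt web-app entry, display name, and keystore password are assumptions. -->
    <web-site port="4443" protocol="https" secure="true"
              display-name="OC4J Secure Web Site">
      <default-web-app application="default" name="defaultWebApp"/>
      <web-app application="jrt" name="jrt" root="/jrt" load-on-startup="true"/>
      <access-log path="../log/secure-web-site.log"/>
      <ssl-config keystore="/u01/oracleas/j2ee/home/config/serverkeystore.jks"
                  keystore-password="change_this_password"/>
    </web-site>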
Setting the PATH Environment Variable

Sometimes system commands such as the Linux ls, sh, and so on cannot be executed successfully during script execution because they are not found in the PATH. To ensure they work normally, you can set up the PATH environment variable. Navigate to the Enterprise Manager homepage.

1. Go to the OC4J home screen and click the Administration tab. Expand Administration Tasks, then expand Properties. Click the task icon next to Server Properties.
2. On the Server Properties screen, scroll down to the Environment Variables section. Under Environment Variables, click Add Another Row. Enter PATH as the Name, fill in the Value with the directories that contain the system commands, and click Apply.

After you work through this article, I believe you will have developed a deeper understanding of the Control Center Agent installation process, and you can apply this knowledge to other installation plans, such as installing the Control Center Agent on a standalone OC4J.

    Read the article

  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
    As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated; the technologies to do so have come a long way and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights as to tie together a lot of information in one place and make it easy to create a new site from scratch.

Architecture

One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, which makes for a somewhat cleaner and more flexible SOA implementation.

Consuming the service

Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much, but they are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS has also provided an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contain hardcoded URLs, and adds duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner: it outputs one C# file (containing several proxy classes) and a config file with settings, it's easier to use to regenerate the proxy classes when the service changes, and you can then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like:

    svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc

I took the generated MyService.cs file and dropped it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but it only needed to be done once, as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way. If you just copy and paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, and this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABCs (address, binding, and contract), you can get away with just this:

    <system.serviceModel>
      <client>
        <endpoint address="http://localhost:59999/MyService.svc"
                  binding="wsHttpBinding"
                  contract="MySite.Web.ServiceProxies.IMyService" />
      </client>
    </system.serviceModel>

By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient since the client and service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler.

From an MVC controller action method, I instantiated the client and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy (sketched below), making the fix more seamless from the controller's perspective and, theoretically, less code I have to change if and when Microsoft fixes this behavior.
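A minimal sketch of that partial-class workaround, assuming the SvcUtil-generated client is called MyServiceClient; the class and namespace names here just follow the svcutil switches shown above, and the only reason this works is that the generated client is already declared partial and derives from ClientBase<TChannel>:

    // Sketch only: MyServiceClient is a stand-in for whatever client class SvcUtil generated.
    // Re-implementing IDisposable here makes using(...) call this Dispose instead of
    // ClientBase<TChannel>'s, which blindly calls Close() and can throw on a faulted channel.
    using System;
    using System.ServiceModel;

    namespace MySite.Web.ServiceProxies
    {
        public partial class MyServiceClient : IDisposable
        {
            void IDisposable.Dispose()
            {
                if (State == CommunicationState.Faulted)
                {
                    Abort();               // Close() would throw here; Abort() tears the channel down cleanly.
                    return;
                }

                try
                {
                    Close();               // normal, graceful shutdown
                }
                catch (CommunicationException)
                {
                    Abort();
                }
                catch (TimeoutException)
                {
                    Abort();
                }
            }
        }
    }

From the controller's perspective nothing changes: using (var client = new MyServiceClient()) { ... } still reads the same, it just no longer risks masking the original exception with one thrown from Dispose.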
User interface

The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box.

The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax, corresponding to an action method that returns a JsonResult, accomplished by passing my model class to the Controller.Json() method, which jQuery helpfully translates from JSON to a JavaScript object.

$.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a JavaScript string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object with the parameters into a JSON string, which MVC then parses into strongly typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier. "Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or a 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section.
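Pulling the pieces of this section together, here is a rough sketch of such a call. The URL variable, element ids, and helper functions (saveWidgetUrl, showError, showSaved) are hypothetical names rather than anything from the original project, and the response check anticipates the ErrorModel convention discussed below:

    // Sketch only: saveWidgetUrl would be emitted server-side via Url.Action, e.g.
    //   var saveWidgetUrl = '@Url.Action("Save", "Widget")';
    $('#save-button').on('click', function () {
        $.ajax({
            url: saveWidgetUrl,
            type: 'POST',
            contentType: 'application/json; charset=utf-8',
            dataType: 'json',
            data: JSON.stringify({ name: $('#widget-name').val() }),
            success: function (result) {
                // Any 200 OK lands here, even if the server handled an error
                // and returned an ErrorModel instead of the requested data.
                if (!result || result.Error) {
                    showError(result);
                    return;
                }
                showSaved(result);
            },
            error: function (jqXHR) {
                // Transport/HTTP failures (404, 500, ...) land here.
                showError(jqXHR);
            }
        });
    });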
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but working with these EntLib modules instead reminds me more of JavaScript (before the "use strict" v5 days): it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying and correcting where each piece wires up to the next resolved my problem. Your C# code calls into the Exception Handling module, referencing the policy you pass to the HandleException method; that policy's configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration's categorySources section; that category references a listener; and that listener should be added in the loggingConfiguration's listeners section, which specifies the name of the log file.

One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other, very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown here will result in ASP.NET's Yellow Screen Of Death being sent back as a response, which is at best unnecessarily and uselessly verbose, and at worst a security risk, as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method, passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both conserve bandwidth and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests:

    if (filterContext.HttpContext.Request.IsAjaxRequest())
    {
        filterContext.Result = Json(new ErrorModel(...));
        filterContext.ExceptionHandled = true;
    }

My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check whether the response is an ErrorModel or a model containing what I requested. If it's an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, error code, string, etc. to differentiate, but again, sending as little information back as possible is ideal.

Summary

As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit-test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.

    Read the article

< Previous Page | 137 138 139 140 141 142 143 144 145 146 147 148  | Next Page >