Search Results

Search found 27852 results on 1115 pages for 'oracle openworld blog team'.

Page 568/1115

  • 5 Useful Wordpress Plugins For Google Adsense

    - by Jyoti
    Google AdSense has become the most popular online contextual advertising program, and proper custom integration with WordPress can help increase AdSense earnings. In this post we describe 5 useful WordPress plugins for Google AdSense. A few weeks ago we did a "10 Wordpress Plugins For Google Adsense" roundup. WordPress allows bloggers to easily integrate Google AdSense into their blogs using plugins. Adsense Integrator: The Adsense Integrator plugin supports many programs other than AdSense, such as AdBrite, AffiliateBOT, ShareASale, LinkShare, ClickBank, Oxado, Adpinion, AdGridWork, Adroll, Commission Junction, CrispAds, ShoppingAds and Yahoo! PN, so it can be used when you want AdSense alongside other alternatives. The rest of its features are the same as similar plugins: you enter your AdSense code into an options field and it gets inserted into blog posts. All In One Adsense And YPN: This is one of the most powerful AdSense plugins for WordPress. Just like the other plugins, you can use it to insert your ads into posts, but it also has some really good features, such as randomness, which shows the ad at a random location in each post and reduces ad blindness for viewers. You can also stop ads from being shown on certain pages using tags. Adsense Now: Besides the previous plugins, you can also give Adsense Now a try. I haven't used it (I have only used the first two), so it is difficult to comment on, but it looks to be a lightweight plugin that inserts AdSense ads between posts and within post bodies. Adsense Manager: Adsense Manager is one of the most popular and widely used plugins for managing AdSense in WordPress blogs. In fact, its newer version supports not only AdSense but also various other programs such as AdBrite, Commission Junction and YPN, which makes it a very powerful ad-management plugin. You can inject AdSense code anywhere in your blog posts as well as into different regions of your blog. Easy Adsense: Easy AdSense is one of the newer WordPress AdSense plugins and is therefore more feature-rich. Using it you can have different code for different themes, and it also supports link units. To see all of its features, check out the plugin page.

    Read the article

  • SSDT - What's in a name?

    - by jamiet
    SQL Server Data Tools (SSDT) was recently released as part of SQL Server 2012 and, depending on who you believe, it can be described as either: a suite of tools for building SQL Server database solutions, or a suite of tools for building SQL Server database, Integration Services, Analysis Services & Reporting Services solutions. Certainly the SQL Server 2012 installer seems to think it is the latter, because it describes SQL Server Data Tools as "the SQL server development environment, including the tool formerly named Business Intelligence Development Studio. Also installs the business intelligence tools and references to the web installers for database development tools" as you can see here: Strange then that, seemingly, there is no consensus within Microsoft about what SSDT actually is. On yesterday's blog post First Release of SSDT Power Tools, reader Simon Lampen asked the quite legitimate question: I understand (rightly or wrongly) that SSDT is the replacement for BIDS for SQL 2012 and have just installed this. If this is the case can you please point me to how I can edit rdl and rdlc files from within Visual Studio 2010 and import MS Access reports. To which came the following reply: SSDT doesn't include any BIDs (sic) components. Following up with the appropriate team (Analysis Services, Reporting Services, Integration Services) via their forum or msdn page would be the best way to answer you questions about these kinds of services. That's from a Microsoft employee, by the way. Simon is even more confused by this and replies with: I have done some more digging and am more confused than ever. This documentation (and many others): msdn.microsoft.com/.../ms156280.aspx expressly states that SSDT is where report editing tools are to be found. And on it goes.... You can see where Simon's confusion stems from. He has official documentation stating that SSDT includes all the stuff for building SSIS/SSAS/SSRS solutions (this is confirmed in the installer, remember), yet someone from Microsoft tells him "SSDT doesn't include any BIDs components". I have been close to this for a long time (all the way through the CTPs) so I can kind of understand where the confusion stems from. To my understanding SSDT was originally the name of the database dev stuff, but eventually that got expanded to include all of the dev tools – I guess not everyone in Microsoft got the memo. Does this sound familiar? Have we not been down this road before? The database dev tools have had umpteen names over the years (do any of datadude, TSData, VSTS for DB Pros, DBPro, or VS2010 Database Projects sound familiar?) and I was hoping that the SSDT moniker would put all confusion to bed – evidently it's as complicated now as it has ever been. Forgive me for whinging, but putting meaningful, descriptive, accurate, well-defined and easily-communicated names onto a product doesn't seem like a difficult thing to do. I guess I'm mistaken! Onwards and upwards... @Jamiet

    Read the article

  • Sharp Architecture 1.9.5 Released

    - by AlecWhittington
    The S#arp Architecture team is proud to announce the release of version 1.9.5. This version has the following changes: upgraded to MVC 3 RTM; solution upgraded to .NET 4; implementation of IDependencyResolver provided, but not implemented. This marks the last scheduled release of 1.X for S#arp Architecture. The team is working hard to get the 2.0 release out the door and we hope to have a preview of that coming soon. With regards to IDependencyResolver, we have provided an implementation, but have...(read more)

    Read the article

  • How do I (tactfully) tell my project manager or lead developer that the project's codebase needs serious work?

    - by Adam Maras
    I just joined a (relatively) small development team that's been working on a project for several months, if not a year. As with most developers joining a project, I spent my first couple of days reviewing the project's codebase. The project (a medium- to large-sized ASP.NET WebForms internal line-of-business application) is, for lack of a more descriptive term, a disaster. There are three immediately noticeable problems with the coding standards: The standard is very loose. It describes more of what not to do (don't use Hungarian notation, etc.) than what to do. The standard isn't always followed. There are inconsistencies with the code formatting everywhere. The standard doesn't follow Microsoft's style guidelines. In my opinion, there's no value in deviating from the guidelines that were set forth by the developer of the framework and the largest contributor to the language specification. As for point 3, perhaps it bothers me more because I've taken the time to get my MCPD with a focus on web applications (specifically, ASP.NET). I'm also the only Microsoft Certified Professional on the team. Because of what I learned in all of my schooling, self-teaching, and on-the-job learning (including my preparation for the certification exams), I've also spotted several instances in the project's code where things are simply not done in the best way. I've only been on this team for a week, but I see so many issues with their codebase that I imagine I'll be spending more time fighting with what's already written to do things in "their way" than I would if I were working on a project that, for example, followed more widely accepted coding standards, architecture patterns, and best practices. This brings me to my question: Should I (and if so, how do I) propose to my project manager and team lead that the project needs to be majorly renovated? I don't want to walk into their office waving my MCTS and MCPD certificates around, saying that their project's codebase is crap. But I also don't want to stay silent and have to write kludgey code atop their kludgey code, because I actually want to write quality software and I want the end product to be stable and easily maintainable.

    Read the article

  • Part 5, Moving Forum threads from CommunityServer to DotNetNuke

    - by Chris Hammond
    This is the fifth post in a series of blog posts about converting from CommunityServer to DotNetNuke. A brief background: I had a number of websites running on CommunityServer 2.1, I decided it was finally time to ditch CommunityServer due to the change in their licensing model and pricing that made it not good for the small guy. This series of blog posts is about how to convert your CommunityServer based sites to DotNetNuke . Previous Posts: Part 1: An Introduction Part 2: DotNetNuke Installation...(read more)

    Read the article

  • Engagement: Don’t Forget Your Employees!

    - by Kellsey Ruppel
    By Mark Brown, Sr. Director, Oracle WebCenter. This week we want to focus on Employee Engagement, and how it is critical to your business. Today we hear and read a great deal about "Customer Engagement", and rightly so: it is those customers, whether they are traditional paying customers, citizens, students, or club members, who are "paying the bills". A more engaged customer is more likely to make it easier to pay those bills by buying more, giving good reviews, or spreading the word of how wonderful their experience was. But what about those who are providing those services, those who design and make those goods; why is it that all too often they are left out of conversations concerning engagement? In fact, it is critical that we consider our employees as customers, since they use the internal systems that run your organization the same way customers use external systems. Studies have shown that organizations whose employees feel "engaged", are better able to make decisions and do their jobs, and are connected to their peers deliver better returns to their stakeholders (shareholders). On the surface this seems obvious: happy employees are more productive employees. But it leads to the question: how many of our existing policies, systems and processes are actually reducing that level of engagement? Let's look at a couple of examples. If posting new information that may be of great value to everyone in the larger organization is hard to do because we use an antiquated system, then we're making it hard to share and increasing the potential for duplicate work. If it is not trivially obvious how to create and publish this post, then chances are very high that I'll put it at the bottom of my queue. And finally, when critical information is spread across various systems, intranet sites, workgroups and people's inboxes, then it is very hard to learn and grow from that information. These may sound trivial, but how often do we push things off, not because they are intellectually challenging (we may have the answer at our fingertips), but because it is hard to make that information readily available? If an engaged employee is a productive employee, then what can we do to increase their level of engagement? We can start by looking for opportunities to provide self-documenting, self-service solutions. Our newer employees grew up using simplified web interfaces every day, and they loathe calling a help desk unless it is the last resort. Sadly, many of our enterprise applications have not kept pace, and we all still have processes that are based on sending an email – like discount approvals, vacation requests, or even offer-letter approvals. My suggestion is to pick one highly visible, high-impact process where employees are either reticent to execute the process or openly complain about how cumbersome it is, and look at the mechanism behind it. If there are better ways, streamlined steps, or better UIs that could be used, then you have a candidate to reconfigure that process and make it more engaging. Looking to better engage your employees? Start here!

    Read the article

  • Microsoft Seeks Feedback on SQL Server Denali

    Dan Jones, Principal Program Manager on Microsoft's SQL Server Manageability team, recently created a blog post asking for feedback on three topics concerning SQL Server Code Name Denali. The feedback is essential to Jones and the Microsoft team, as it helps them see how they can tweak the Denali adoption process to better suit user needs....

    Read the article

  • Google Compute Engine Office Hours: August 22, 2012

    Office hours with the Google Compute Engine team on August 22, 2012. The slides can be viewed here: goo.gl The tech talk portion of this session was about OAuth and Service Accounts, an area which the Google Compute Engine team has done a great job of simplifying. From: GoogleDevelopers

    Read the article

  • Generate DROP statements for all extended properties

    - by jamiet
    This evening I have been attempting to migrate an existing on-premise database to SQL Azure using the wizard that is built in to SQL Server Management Studio (SSMS). When I did so I received the following error: The following objects are not supported = [MS_Description] = Extended Property. Evidently databases containing extended properties cannot be migrated using this particular wizard, so I set about removing all of the extended properties – unfortunately there were over a thousand of them, so I needed a better way than simply deleting each and every one of them manually. I found a couple of resources online that went some way toward this: Drop all extended properties in a MSSQL database by Angelo Hongens, and Modifying and deleting extended properties by Adam Aspin. Unfortunately neither provided a script that exactly suited my needs. Angelo's covered extended properties on tables and columns; however, I had other objects that had extended properties on them. Adam's looked more complete, but when I ran it I got an error: Msg 468, Level 16, State 9, Line 78 Cannot resolve the collation conflict between "Latin1_General_100_CS_AS" and "Latin1_General_CI_AS" in the equal to operation. So, both great resources, but I wasn't able to use either on its own to get rid of all of my extended properties. Hence, I combined the excellent work that Angelo and Adam had provided to manufacture my own script, which successfully generated calls to sp_dropextendedproperty for all of my extended properties. If you think you might be able to make use of such a script then feel free to download it from https://skydrive.live.com/redir.aspx?cid=550f681dad532637&resid=550F681DAD532637!16707&parid=550F681DAD532637!16706&authkey=!APxPIQCatzC7BQ8. This script will remove extended properties on tables, columns, check constraints, default constraints, views, sprocs, foreign keys, primary keys, table triggers, UDF parameters, sproc parameters, databases, schemas, database files and filegroups. If you have any object types with extended properties on them that are not in that list then consult Adam's aforementioned article – it should prove very useful. I repeat here the message that I have placed at the top of the script: /* This script will generate calls to sp_dropextendedproperty for every extended property that exists in your database. Actually, a caveat: I don't promise that it will catch each and every extended property that exists, but I'm confident it will catch most of them! It is based on this: http://blog.hongens.nl/2010/02/25/drop-all-extended-properties-in-a-mssql-database/ by Angelo Hongens. Also had lots of help from this: http://www.sqlservercentral.com/articles/Metadata/72609/ by Adam Aspin Adam actually provides a script at that link to do something very similar but when I ran it I got an error: Msg 468, Level 16, State 9, Line 78 Cannot resolve the collation conflict between "Latin1_General_100_CS_AS" and "Latin1_General_CI_AS" in the equal to operation. So I put together this version instead. Use at your own risk. Jamie Thomson 2012-03-25 */ Hope this is useful to someone! @Jamiet
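    Jamie's downloadable script is the complete version; purely as an illustration of the general approach, here is a minimal, hypothetical T-SQL sketch that generates sp_dropextendedproperty calls for table-level and column-level extended properties only (the downloadable script linked above covers many more object types):

        -- Minimal sketch: generate DROP statements for table- and column-level
        -- extended properties only. The full script linked above covers many
        -- more object types.
        SELECT 'EXEC sp_dropextendedproperty @name = N''' + ep.name + ''''
             + ', @level0type = N''SCHEMA'', @level0name = N''' + s.name + ''''
             + ', @level1type = N''TABLE'', @level1name = N''' + t.name + ''''
             + CASE WHEN c.name IS NULL THEN ''
                    ELSE ', @level2type = N''COLUMN'', @level2name = N''' + c.name + ''''
               END + ';'
        FROM sys.extended_properties ep
        JOIN sys.tables t       ON ep.major_id = t.object_id
        JOIN sys.schemas s      ON t.schema_id = s.schema_id
        LEFT JOIN sys.columns c ON ep.major_id = c.object_id
                               AND ep.minor_id = c.column_id
        WHERE ep.class = 1;  -- class 1 = object- or column-level properties

    Review the generated statements, then execute them to remove the properties.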

    Read the article

  • A Technique for Performing Cross-host Upgrades to FMW 11gR1

    - by reza.shafii
    The main tool used for the upgrade of iAS 10g mid-tier (data not stored in 10g meta-data repository schemas) environments to Fusion Middleware (FMW) 11gR1 is the FMW Upgrade Assistant (UA). This tool performs what we call an out-of-place upgrade, which in a nutshell means the following: the upgrade is performed by pointing the UA to a 10g source topology as well as an 11g destination topology. The destination topology must be created, using the standard FMW 11g installation and configuration process, prior to the execution of the UA. The UA carries over all of the required changes from the source environment to the destination. This approach has a number of advantages rooted in the fact that the source environment - which is presumably working well and serving its needs - is not disturbed during the upgrade process, as the UA only performs read-only operations on it. The UA today can only perform such out-of-place upgrades when the source and destination topologies reside on the same machine. This can sometimes be an issue when the host on which the iAS 10g environment is installed is running at full capacity and installing new hardware for the purpose of the upgrade (in most cases what would be needed is extra memory) is completely infeasible. In such cases, upgrade across a different host is still possible by using the following technique: Back up your source environment and restore it onto a target machine. The backup and restore procedures for the iAS 10.1.2 components are described within this section of the release's Administration Guide. As described in the docs, the Oracle Application Server Backup and Recovery Tool provides capabilities for backing up the installation on one machine and restoring it on another, which is exactly what you want to do for the purpose of a cross-host upgrade. Ensure that the restored environment on your target host is fully functional. Go through the upgrade steps on the target machine to perform the out-of-place upgrade using the UA. Although this process does add another big step to the overall upgrade process, it does make it possible to perform a cross-host upgrade to 11gR1 when necessary. The easiest approach would of course be to find a way of ensuring that the required hardware capacity for the upgrade is available on the original 10g host. Using techniques such as scheduling the upgrade at low-traffic times and/or temporarily stopping other processes running on the machine to free up some memory might provide the memory needed to perform the out-of-place upgrade and save you the need for the backup/restore technique I have described in this post.

    Read the article

  • JCP EC Nominations and Meet the Candidates Call

    - by heathervc
    The Nominations period for the 2012 JCP EC Elections closes tomorrow, 11 October, at midnight Pacific time. Eligible JCP Members (all current JSPA 2 signers) may nominate themselves. You will need your Elections credentials to complete the nomination; these were sent to the primary contacts of all eligible JCP Members via email last week. This year all ratified candidates (there are 4 proposed) and elected candidates (there are 7 so far) will appear on one ballot; the top 2 candidates will win elected seats. This year, the selected EC Members will serve a single-year term. Following the 2012 Elections, there will be one merged EC (approved through JSR 355), and a new JCP version, JCP 2.9, will be in effect. In 2013, all EC members will stand for election to complete the merge process described in the JCP 2.9 process document. All of the candidates' nomination materials are now available. The ratified candidates are: Cinterion, Credit Suisse, Fujitsu and HP. The elected candidates are: Cisco Systems, CloudBees, Giuseppe Dell'Abate, London Java Community, MoroccoJUG, Software AG, and Zero Turnaround. Next week, 18 October, we will hold an open teleconference for the Java Community to meet the candidates and ask questions regarding their nominations. We hope you will be able to participate in the call. Should the time be inconvenient, a recording will be made available for download, and candidate questions may be posted on this blog entry or sent to [email protected]. Topic: Meet the EC Candidates Date: Thursday, October 18, 2012 Time: 9:30 am, Pacific Daylight Time (San Francisco, GMT-07:00) Meeting Number: 807 818 225 Meeting Password: MeetEC ------------------------------------------------------- To join the online meeting (Now from mobile devices) ------------------------------------------------------- 1. Go to https://jcp.webex.com/jcp/j.php?ED=186721592&UID=0&PW=NMmUzNjY5ZTMw&RT=MiM0 2. If requested, enter your name and email address. 3. If a password is required, enter the meeting password: MeetEC 4. Click "Join". To view in other time zones or languages, please click the link: https://jcp.webex.com/jcp/j.php?ED=186721592&UID=0&PW=NMmUzNjY5ZTMw&ORT=MiM0 ------------------------------------------------------- To join the audio conference only ------------------------------------------------------- +1 (866) 682-4770 Outside the US: global access numbers https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=6279803 or +1 (408) 774-4073 Conference code: 9454597 Security code: JCPEC (52732) ------------------------------------------------------- For assistance ------------------------------------------------------- 1. Go to https://jcp.webex.com/jcp/mc 2. On the left navigation bar, click "Support".

    Read the article

  • Executable Resumes

    - by Liam McLennan
    Over the past twelve months I have been thinking a lot about executable specifications. Long considered the holy grail of agile software development, executable specifications means expressing a program's functionality in a way that is both readable by the customer and computer-verifiable in an automatic, repeatable way. With the current generation of BDD and ATDD tools, executable specifications seem finally within the reach of a significant percentage of the development community. Lately, and partly as a result of my craftsmanship tour, I have decided that soon I am going to have to get a job (gasp!). As Dave Hoover describes in Apprenticeship Patterns, "you … have mentors and kindred spirits that you meet with periodically, [but] when it comes to developing software, you work alone." The time may have come where the only way for me to feel satisfied and enriched by my work is to seek out a work environment where I can work with people smarter and more knowledgeable than myself. Having been on both sides of the interview desk many times, I know how difficult and unreliable the process can be. Therefore, I am proposing the idea of executable resumes. As a journeyman programmer looking for a fruitful work environment, I plan to write an application that demonstrates my understanding of the state of the art. Potential employers can download, view and execute my executable resume and judge whether my aesthetic sensibility matches their own. The concept of the executable resume is based upon the following assertion: A line of code answers a thousand interview questions. Asking people about their experiences and skills is not a direct way of assessing their value to your organisation. Often it simply assesses their ability to mislead an interviewer. An executable resume demonstrates: the highest quality code that the person is able to produce; that the person is sufficiently motivated to produce something of value in their own time; that the person loves their craft. The idea of publishing a program to demonstrate a developer's skills comes from Rob Conery, who suggested that each developer should build their own blog engine since it is the public representation of their level of mastery. Rob said: Luke had to build his own lightsaber – geeks should have to build their own blogs. And that should be their resume. In honour of Rob's inspiration I plan to build a blog engine as my executable resume. While it is true that the world does not need another blog engine, it is as good a project as any, it is a well-understood domain, and I have not found an existing blog engine that I like. Executable resumes fit well with the software craftsmanship metaphor. It is not difficult to imagine that under the guild system master craftsmen may have accepted journeymen based on the quality of the work they had produced in the past. We now understand that when it comes to the functionality of an application, code is the final arbiter. Why not apply the same rule to hiring?

    Read the article

  • Own a KINECT for MS-XBOX before anyone does

    Following is an announcement by Richik Nandi from the Microsoft team. Dear Customer, We believe that our privileged customers shouldn't have to wait for good things. So, here's a special offer exclusively for you. Be one of the first in India to own and experience Kinect for XBOX 360, a few days before it is even launched in stores. Introducing the new Kinect for XBOX 360®. Kinect needs no controllers. You are the controller. Kinect brings games and entertainment to life in extraordinary new ways without using a controller. The sensor recognizes your face, eyes and body movements to deliver a superb gaming experience. Easy to use and great fun, Kinect gets everyone off the couch. See a ball? Kick it. Want to join a friend in the fun? Simply jump in. Imagine controlling movies and music with the wave of a hand or the sound of your voice! Kinect is all about fun for you and your family. And the best part is Kinect works with every Xbox 360®. There are two options you can choose from: • Kinect sensor + 4GB Xbox 360 bundle + Kinect Adventures game at Rs 22,990/- and get a Dance Central game worth Rs 1999 from Redington, a 20% discount voucher from Starwood on food and beverages, a T-shirt from PUMA and a Kinect Adventures live card absolutely free using your unique promo code: XbTXXZl2Sb • Kinect Sensor at Rs 9,500/- and get a 20% discount voucher from Starwood on food and beverages, a T-shirt from PUMA and a Kinect Adventures live card absolutely free using your unique promo code: lDg6o8SuYh We want you to own your Kinect before the official launch. The promotion closes by 10th November. To know more about Kinect click here. To book your Kinect PRE-ORDER now! Enter your details along with the above mentioned promo code to avail of the free gifts offer. We will have your Kinect delivered by 19th November 2010. Enjoy being the controller. Enjoy the Kinect.

    Read the article

  • Simple tips to design a Customer Journey Map

    - by Isabel F. Peñuelas
    “A model can abstract to a level that is comprehensible to humans, without getting lost in details.” -The Unified Modeling Language Reference Manual. Inception using Post-its, Storyboards, Lego or Mindmapping Techniques. The first step in a Customer Experience project is to describe customer interactions by creating a customer journey map. Modeling is never easy, so to succeed in this effort it is very convenient that your CX team has some "abstract thinking" skills. It is also very helpful to consult a Business Service Design offered by an Interactive Agency to lead your inception process. Initially, you may start with a free discussion using Post-it cards, storyboards, Lego, or any other brainstorming technique you like. This will help you to get your mind into the path followed by the customer to purchase your product or to consume any business service you actually offer to your customers, or plan to offer in the near future. (from www.servicedesigntools.org) Colorful mind maps are very useful for documenting and sharing meeting ideas. Some mind-mapping software providers, such as ThinkBuzzan, provide trial versions, and you will find more mindmapping options in this post by Mashable. Finally, to produce a quick one, I do recommend Wise, an entirely online mindmapping service. In my view the best results in terms of communication will always come from an artistic hand-made drawing. Customer Experience Mind Map Example. Making your first Customer Journey Map. To add some more formalization to your thoughts, there is a wide range of offerings for designing Customer Journey Maps. A customer map can be represented as a directed graph in which each step is followed by the next. The one below is the simplest Customer Journey you can draw: nothing more than a couple of pictures, numbers and lines to portray the sequence of customer steps in the purchase process. Very simple Customer Journey for Social Mobile Shopping. There are many more sophisticated Customer Journey templates available on the Web in a variety of styles, such as this one with a focus on underlining emotional experience, or this other worksheet template. Representing different interaction devices on the vertical axis, and touchpoints/requirements and existing gaps horizontally, is today's most common format for Customer Journeys. From Customer Journey Maps to CX Technology Adoption Plans. Once you have your map ready, you can start to identify the IT infrastructure requirements for your CX project. By analyzing customer problems and improvement opportunities with maps, you will then identify the technology gaps and the new investment requirements in your IT infrastructure. Going step by step from the more abstract to the more concrete is the best guarantee that you will take the right IT investment decisions. Remember to keep your initial customer journey map in your pocket in every one of your CX project meetings – it is your map to success!

    Read the article

  • SQLAuthority News – New Book Released – SQL Server Interview Questions And Answers

    - by pinaldave
    Two days ago, on the birthday of my blog, I asked a simple question: Guess! What is in this box? I received lots of interesting comments on the blog about what is in it. Many of you got it absolutely incorrect, and many got close to the right answer, but no one got it 100% correct. Well, no issue at all; I am going to give away the prize to whoever had the closest answer first, via personal email. Here is the answer to the question about what is in the box. Here it is – the box has my new book. In fact, I should say our new book, as I co-authored this book with my very good friend Vinod Kumar. We had a real blast writing this book together and had lots of interesting conversations while we were writing it. This book has one simple goal – "master the basics." This book is not only for people who are preparing for interviews. This book is for everyone who wants to revisit the basics and prepare themselves for the technology. One always needs practical knowledge to do their duty efficiently, and this book talks about more than the basics. There are multiple ways to present learning – either we can create a simple book or we can make it interesting. We decided the learning should be interactive and have opted for an Interview Questions and Answers format. Here is a quick interview which we did together. Details of the book are here. The core concept of this book will continue to evolve over time. I am sure many of you will come along with us on this journey and submit your suggestions to us to make this book a key reference for anybody who wants to start with SQL Server. Today we want to acknowledge the fact that you will help us keep this book alive forever with the latest updates. We want to thank everyone who participates in this journey with us. You can get the book from [Amazon] | [Flipkart]. Read Vinod's blog post. Do not forget to wish him happy birthday, as today is his birthday and also the book release day – two reasons to congratulate him. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Data Warehousing, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • AJI Report #12 | Tim Hibbard Talks .NET to iOS Development

    - by Jeff Julian
    In this AJI Report, Jeff and John talk with Tim Hibbard of Engraph Software about making the transition from .NET development to mobile applications on the iOS platform. Tim dives into what the experience was like, from getting into Xcode for the first time and using third-party tools to following Apple's design guidelines and provisioning an app to the App Store. Tim has been a .NET developer since the framework was released in 2001 and now has two mobile applications in production. Listen to the Show Site: http://engraph.com/ Blog: http://timhibbard.com/blog/ Twitter: @timhibbard

    Read the article

  • NetBeans Podcast 62

    - by TinuA
    Download mp3: 49 minutes – 39.5 MB Subscribe to the NetBeans Podcast on iTunes NetBeans Community News with Geertjan and Tinu What's NEW? Recap of a SUCCESSFUL NetBeans Community Day at JavaOne 2012! Want to know what you missed? Download slides for: NetBeans Community Keynote NetBeans and JavaFX panel NetBeans and Java EE panel NetBeans Platform panel Visit the JavaOne Content Catalog for slides, and audio and video recordings of all NetBeans sessions at JavaOne 2012. (Type in keyword "NetBeans".) NetBeans Governance Board elections are done. Congratulations to Anton Epple and Hermien Pellissier, the new members of the 20th Board! How would you grade the NetBeans team on NetBeans IDE 7.2? Take the NetBeans 7.2 Satisfaction Survey. NetBeans IDE 7.3 Beta 2 is available for download. The first beta debuted at JavaOne with support for HTML5. Watch videos of HTML5 support in NetBeans and visit Geertjan's blog for a beginner's guide to HTML5 development. It's a busy Fall on the NetBeans Calendar with stops at Devoxx 2012, JavaOne Latin America, Jay Day Munich, and Jay Days Sweden. JavaOne 2012 Reflections NetBeans had a fantastic showing at JavaOne 2012--from the full-day lineup of NetBeans Community Day to the numerous BOFs, Labs, and sessions at the main conference. But better to hear it in these short interviews with members of the community who attended JavaOne 2012. Veteran attendees and first-timers, panel participants and award winners, the interviewees share their experience of the conference, from highlights and insights, to new discoveries and inspiration. Listen in to why attending JavaOne is a tech pilgrimage every Java developer ought to make.   07:50   Anton Epple - Eppleton Consulting (Germany); Recipient of 2012 NetBeans Community Recognition Award 17:10   Henry Arousell and Thomas Boqvist - Bjorn Lunden Information (Sweden) 24:45   Glenn Holmer - Weyco Group, Inc. (USA); Recipient of 2012 NetBeans Community Recognition Award 33:09   Timon Veenstra - Agrosense (The Netherlands); 2012 Duke's Choice Award winner (Agrosense in the Nov/Dec '12 issue of Java Magazine.) 40:19   Rob Terplowski - Linden, Inc. (USA) More thoughts about NetBeans Day and JavaOne can also be found in two recent NetBeans Zone articles: "Reflections on JavaOne 2012 by the NetBeans Community: Part 1 and Part 2". *Have ideas for NetBeans Podcast topics? Send them to nbpodcast at netbeans dot org. *Subscribe to the official NetBeans page on Facebook! Check us out as well on Twitter, YouTube, and Google+.

    Read the article

  • Part 4, Getting the conversion tables ready for CS to DNN

    - by Chris Hammond
    This is the fourth post in a series of blog posts about converting from CommunityServer to DotNetNuke. A brief background: I had a number of websites running on CommunityServer 2.1, I decided it was finally time to ditch CommunityServer due to the change in their licensing model and pricing that made it not good for the small guy. This series of blog posts is about how to convert your CommunityServer based sites to DotNetNuke . Previous Posts: Part 1: An Introduction Part 2: DotNetNuke Installation...(read more)

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects:

    -z stub
    This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies.

    STUB_OBJECT Mapfile Directive
    In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible.

    ASSERT Mapfile Directive
    All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, ASSERT directives are a general purpose feature that can be used independently of stub objects. For instance, you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case.

    The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and of serving as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form.

    Stub Objects
    A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems:

    Speed
    Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built.
    Complexity/Correctness
    In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures.

    Dependency Cycles
    It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built.

    Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements: The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol. Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface.

        % cat idx5.c
        int _idx5[5] = { 0, 1, 2, 3, 4 };

        #pragma weak idx5 = _idx5

        int
        idx5_func(int index)
        {
                if ((index < 0) || (index > 4))
                        return (-1);
                return (_idx5[index]);
        }

    A mapfile is required to describe the interface provided by this shared object.

        % cat mapfile
        $mapfile_version 2
        STUB_OBJECT;
        SYMBOL_SCOPE {
                _idx5 {
                        ASSERT { TYPE=data; SIZE=4[5] };
                };
                idx5 {
                        ASSERT { BINDING=weak; ALIAS=_idx5 };
                };
                idx5_func;
            local:
                *;
        };
    The following main program is used to print all the index values available from the idx5 shared object.

        % cat main.c
        #include <stdio.h>

        extern int _idx5[5], idx5[5], idx5_func(int);

        int
        main(int argc, char **argv)
        {
                int i;

                for (i = 0; i < 5; i++)
                        printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i));
                return (0);
        }

    The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile.

        % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub
        % ln -s libidx5.so.1 stublib/libidx5.so
        % elfdump -d stublib/libidx5.so | grep STUB
             [11]  FLAGS_1          0x4000000         [ STUB ]

    The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute.

        % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc
        % ./a.out
        ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory
        Killed
        % LD_PRELOAD=stublib/libidx5.so.1 ./a.out
        ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime
        Killed

    We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file.

        % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1

    Once the real object has been built in the lib subdirectory, the program can be run.

        % ./a.out
        [0] 0 0 0
        [1] 1 1 1
        [2] 2 2 2
        [3] 3 3 3
        [4] 4 4 4

    Mapfile Changes
    The version 2 mapfile syntax was extended in a number of places to accommodate stub objects.

    Conditional Input
    The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need:

        Name            Meaning
        ...             ...
        _ET_DYN         shared object
        _ET_EXEC        executable object
        _ET_REL         relocatable object
        ...             ...

    STUB_OBJECT Directive
    The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object.

        STUB_OBJECT;

    A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
    When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BIND attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface.

    SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute
    The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows:

        ASSERT {
                ALIAS = symbol_name;
                BINDING = symbol_binding;
                TYPE = symbol_type;
                SH_ATTR = section_attributes;

                SIZE = size_value;
                SIZE = size_value[count];
        };

    The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link-editor evaluates the ASSERT attribute when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following:

    ALIAS
    Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol.

    BIND
    Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK).

    TYPE
    Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively.
    TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

    SH_ATTR
    Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table:

        Section Attribute       Meaning
        BITS                    Section is not of type SHT_NOBITS
        NOBITS                  Section is of type SHT_NOBITS

    SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

    SIZE
    Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below.

    SIZE
    The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows:

        $if _ELF32
        foo { ASSERT { TYPE=data; SIZE=4 } };
        bar { ASSERT { TYPE=data; SIZE=20 } };
        $elif _ELF64
        foo { ASSERT { TYPE=data; SIZE=8 } };
        bar { ASSERT { TYPE=data; SIZE=40 } };
        $else
        $error UNKNOWN ELFCLASS
        $endif

    I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as:

        foo { ASSERT { TYPE=data; SIZE=addrsize } };
        bar { ASSERT { TYPE=data; SIZE=addrsize[5] } };

    Exported Global Data Is Still A Bad Idea
    As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.
    However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday...

    Conclusions
    It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • SQL SERVER – Transaction Log Full – Transaction Log Larger than Data File – Notes from Fields #001

    - by Pinal Dave
    I am very excited to announce a new series on this blog – Notes from Fields. I have been blogging for almost 7 years on this blog and it has been a wonderful experience. Though I have extensive experience with SQL and databases, it is always a good idea to consult experts for their advice and opinions. Following the same thought process, I have started this new series of Notes from Fields. In this series we will have notes from various experts in the database world. My friends at Linchpin People have graciously decided to support me in my new initiative. Linchpin People are database coaches and wellness experts for a data-driven world. In this very first episode of the Notes from Fields series, database expert Tim Radney (partner at Linchpin People) explains a very common issue DBAs and developers face in their careers: when database logs fill up your hard drive, or when your database log is larger than your data file. Read about Tim's experience in his own words. As a consultant, I encounter a number of common issues with clients. One of the more common things I encounter is finding a user database in the FULL recovery model that does not take regular transaction log backups, or has never had a transaction log backup. When I find this, usually the transaction log is several times larger than the data file. Finding this issue is very significant to me in that it allows me to discuss service level agreements with the client. I get to ask questions such as: are nightly full backups sufficient, or do they need point-in-time recovery? This conversation gets the customer thinking about their disaster recovery and high availability solutions. This issue is also very prominent on SQL Server forums and usually has a title like "Help, my transaction log has filled up my disk" or "Help, my transaction log is many times the size of my database". In cases where the client only needs the previous night's full backup, I am able to change the recovery model to SIMPLE and shrink the transaction log using DBCC SHRINKFILE (2,1) or by specifying the transaction log file name by using DBCC SHRINKFILE (file_name, target_size). When the client needs point-in-time recovery, then in most cases I will still end up switching the client to the SIMPLE recovery model to truncate the transaction log, followed by a full backup. I will then schedule a SQL Agent job to make the regular transaction log backups with an interval determined by the client to meet their service level agreements. It should also be noted that typically when I find an overgrown transaction log, the virtual log file count is also out of control. My cleanup will always take that into account as well. That is a subject for a future blog post. If your SQL Server is facing any issue we can Fix Your SQL Server. Additional reading: Monitoring SQL Server Database Transaction Log Space Growth – DBCC SQLPERF(logspace) SQL SERVER – How to Stop Growing Log File Too Big Shrinking Truncate Log File – Log Full Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
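    To make the recipe above concrete, here is a minimal, hypothetical T-SQL sketch of the sequence Tim describes. The database name SalesDB, the log file's logical name SalesDB_log, and the backup paths are placeholder assumptions; use the SIMPLE-recovery shortcut only when point-in-time recovery is not required.

        -- Hypothetical names: SalesDB / SalesDB_log. Adjust names, paths and sizes.
        USE [master];
        ALTER DATABASE [SalesDB] SET RECOVERY SIMPLE;  -- allows the log to be truncated

        USE [SalesDB];
        DBCC SHRINKFILE (SalesDB_log, 1024);           -- shrink the log file to ~1 GB

        -- If the client needs point-in-time recovery: switch back to FULL, take a
        -- full backup to restart the log chain, then schedule regular log backups
        -- via a SQL Agent job.
        USE [master];
        ALTER DATABASE [SalesDB] SET RECOVERY FULL;
        BACKUP DATABASE [SalesDB] TO DISK = N'B:\Backup\SalesDB_full.bak';
        BACKUP LOG [SalesDB] TO DISK = N'B:\Backup\SalesDB_log.trn';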

    Read the article

  • Can you Trust Search?

    - by David Dorf
    An awful lot of referrals to e-commerce sites come from web searches. Retailers rely on search engine optimization (SEO) to correctly position their websites so they can be found. Search on "blue jeans" and the results are determined by a semi-secret algorithm -- in my case Levi.com, Banana Republic, and ShopStyle show up.

    The NY Times recently uncovered a situation where JCPenney, via third parties hired to help with SEO, was caught manipulating search results so it ranked erroneously higher. No doubt this helped drive additional sales this past Christmas. The article, The Dirty Little Secrets of Search, is well worth reading. My friend Ron Kleinman started an interesting discussion at the ARTS LinkedIn forum. He posed the question: the ability of a single company to "punish" any retailer (by significantly impacting their online sales volume) who does not play by its rules -- is this a good thing or a bad thing?

    Clearly JCP was in the wrong and needed to be punished, but should that decision lie with Google alone? Don't get me wrong -- I'm certainly not advocating we create a Department of Search where bureaucrats think of ways to spend money, but Google wields an awful lot of power in this situation, and that makes me uncomfortable.

    Now Google is incorporating more social aspects into its search results. For example, when Google knows it's me (i.e. I'm logged in when using Google), search results will be influenced by my Twitter network. In an effort to increase relevance, blogs and re-tweeted articles from my network will rank higher in the search results than they otherwise would. So in the case of product searches, things discussed in my network will rise to the top. Continuing my blue jean example, if someone in my network had been discussing Macy's, perhaps it would now be higher in the result set.

    soapbox: I already have lots of spammers posting bogus comments to this blog in an effort to create additional links to their sites and thus increase their search ranking. Should I expect a similar situation on Twitter and eventually Facebook? Now retailers need to expand their SEO efforts to incorporate social media as well, but do us all a favor and please don't cheat.

    Read the article

  • What Would a CyberWar Do To Your Business?

    - by Brian Dayton
    In mid-February the Bipartisan Policy Center in the United States hosted Cyber ShockWave, a simulation of how the country might respond to a catastrophic cyber event. An attack takes place, they can't isolate where it came from or who did it, there are simulated press reports and market impacts... and the participants in the exercise have to brief the President and advise him/her on what to do.

    Last week, former Department of Homeland Security Secretary Michael Chertoff, who participated in the exercise, summarized his findings in Federal Computer Week. The article, given FCW's readership and the topic, is obviously focused on the public sector and US federal policies. However, it touches on some broader issues that impact the private sector as well -- and are applicable to any government and country/region -- such as:

    - How would the US (or any) government collaborate to identify and defeat such an attack? Chertoff calls this out as a current gap. How do the public and private sectors collaborate today? How would the massive and disparate collection of agencies and companies act together in a crunch?

    - What would the impact on industries and global economies be? Chertoff, and a companion article in Government Computer News, only touch briefly on the subject, focusing on the impact on capital markets. "There's no question this has a disastrous impact on the economy," said Stephen Friedman, former director of the National Economic Council under President George W. Bush, who played the role of treasury secretary. "You have financial markets shut down at this point, ordinary transactions are dramatically depleted, there's no question that this has a major impact on consumer confidence."

    That got me thinking:

    - How would it impact Oracle's customers? I know they have business continuity plans -- is this one of their scenarios? What if it's not? How would it impact manufacturing lines, ATM networks, customer call centers...

    - How would it impact me and the companies I rely on? The supermarket down the street, my Internet Service Provider, the service station where I bought gas last night.

    I sure don't have any answers, and neither do Chertoff or the participants in the exercise. "I have to tell you that ... we are operating in a bit of unchartered territory," said Jamie Gorelick, a former deputy attorney general who played the role of attorney general in the exercise. But it is a good thing that governments and businesses are considering this scenario and doing what they can to prevent it from happening.

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to Basics of a Query Hint

    - by pinaldave
    This blog post is inspired by SQL Architecture Basics Joes 2 Pros: Core Architecture Concepts – SQL Exam Prep Series 70-433 – Volume 3. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza]

    This is a follow-up to my earlier blog post on the same subject – SQL SERVER – Introduction to Basics of a Query Hint – A Primer. In that article we discussed the basic terminology of query hints. The article further covers the following important concepts of query hints:

    - Expecting a Seek and getting a Scan
    - Creating an index for improved optimization
    - Implementing the query hint

    The above three are the most important concepts related to query hints and SQL Server. There are many more things one has to learn, but without the beginner's fundamentals one cannot learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right.

    Quiz

    1) You have the following query:

        DECLARE @UlaChoice TINYINT
        SET @UlaChoice = 1
        SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice

    You have a nonclustered index named IX_Legal_Ula on the UlaChoice field. The primary key is on the ID field and is called PK_Legal_ID. 99% of the time the value of @UlaChoice is set to 'YP101'. Which query will achieve the best optimization for this query?

        - SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice WITH (INDEX(IX_Legal_Ula))
        - SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice WITH (INDEX(PK_Legal_ID))
        - SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice OPTION (OPTIMIZE FOR (@UlaChoice = 'YP101'))

    2) You have the following query:

        SELECT * FROM CurrentProducts WHERE ShortName = 'Yoga Trip'

    You have a nonclustered index on the ShortName field and the query runs an efficient index seek. You change your query to use a variable for ShortName, and now you are getting a slow index scan. What query hint can you use to get the same execution time as before?

        - WITH LOCK
        - FAST
        - OPTIMIZE FOR
        - MAXDOP
        - READONLY

    Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change an answer, you still have a chance.

    Solution

    1) 3
    2) 4

    Now check the answers and compare your answers to the ones above. I am very confident you will get them correct.

    Available at – USA: Amazon | India: Flipkart | IndiaPlaza. Volumes: 1, 2, 3, 4, 5.

    Please leave your feedback in the comments area for the quiz and video. Did you know all the answers to the quiz?

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
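    For readers who want to experiment, here is a minimal T-SQL sketch of the OPTIMIZE FOR hint discussed in the quiz. The table and column come from question 2 above; the variable's type and size are assumptions for illustration:

        -- Compile the plan as if the variable held the common literal value,
        -- letting the optimizer pick the index seek the literal query produced.
        DECLARE @ShortName VARCHAR(50);
        SET @ShortName = 'Yoga Trip';
        SELECT *
        FROM CurrentProducts
        WHERE ShortName = @ShortName
        OPTION (OPTIMIZE FOR (@ShortName = 'Yoga Trip'));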

    Read the article

  • Frustrated where I am, but not sure where to go with my career [closed]

    - by Tom Pickles
    I have worked for three years as a lead developer on a team that develops internal tools and websites for a customer account within a large outsourcing company. I'm a self-taught programmer, and my previous incarnation was as a 3rd line support guy, so I have solid infrastructure knowledge. We use VB.Net/MSSQL/SSIS/SSRS and ASP.NET (n-tier) in house, and I have about 8 years of coding experience.

    Without going into too much detail, my boss is very ambitious and uses our team as his footing to get up the ladder. I've been on the team from the start, and the only new devs we have brought in have been people with a bit of VBA/VBScript experience, much to my chagrin, to bolster his empire. It's been a lot of hard work to bring them up to standard, and there's still a lot for them to learn. This makes my life stressful, as I always get the high-profile/complex project work because others simply cannot do it, or it would take them two or three times longer.

    My boss is always seeking things for us to build for people who haven't asked for them, and the work usually gets thrown to me as I have the most experience and can pick up new APIs (etc.) more quickly. He doesn't give us proper requirements, and we don't get time to design properly before we code; he wants us to throw something together ("quick and dirty", as he calls it) so we can get it out ASAP. I take pride in my work, so I like to do it properly and keep my code clean and maintainable, and I train the other guys on the team to do the same. But we always fall on our faces. The customers we deliver the apps to say they don't do what they need (due to the thin requirements), or my boss doesn't like the result or changes the spec, so we have to rework it; things get drawn out, and it makes us, and me, look and feel like fools. We then get accused by the boss of not being reactive enough to change.

    I've had enough. To fill my skills and knowledge gaps, I've been reading Code Complete 2nd Ed (McConnell) and Head First Design Patterns. I'm forcing myself to move from VB to C# at home to broaden my horizons. I'm not sure where to go from here. I don't want to code all my life, as I'd like to move into a higher-level design/architect role at some point (I'm 35). Where can I go from here?

    Read the article

  • Calling Web Services using ADF 11g

    - by James Taylor
    One of the benefits of ADF is the fact that it can use multiple data sources. With SOA playing a big part in today's IT landscape, applications need to be able to utilise this SOA framework to leverage functionality from multiple systems and provide a composite application. ADF provides functionality to expose web services via an ADF Business Component, so if you know how to use Business Components for a database, configuring ADF for web services is much the same. In this example I use an OSB web service that gets a customer.

    1. Create a new Fusion Web Application (ADF) application and click OK.
    2. Provide an application name, GetCustomerADF, and click Next.
    3. From the Project Technologies list, move Web Services into the Selected box. Accept the defaults and click Finish.
    4. Right-click the Model project and select New.
    5. In the Gallery select Web Services –> Web Service Data Control, then click OK.
    6. Provide a name, GetCustomerDC, and give the URL endpoint for the web service, then click Next.
    7. Select the web service operation you want to use for the ADF application. In my example the web service only has one operation. Click Finish.
    8. Save your work: File –> Save.

    The data control has now been created; the next steps create the UI components.

    9. In the application created in step 1, find the ViewController project, right-click it and choose New.
    10. In the Gallery select JSF –> JSF Page.
    11. Provide a name for the page, GetCustomer, and ensure that the 'Create as XML Document (*.jspx)' check box is checked.
    12. I have selected the page template Oracle Three Column Layout, but you can create a layout of your choice. I only want two columns, so I delete the last column by right-clicking the right-hand panel and selecting Delete.
    13. Drag the fields you require from the web service data control to the left panel. In my example I only require the Customer ID. When you drag to the panel, select Texts –> ADF Input Text w/Label.
    14. In this example I want to search for a customer based on the ID, so once I select the ID I want to execute the request. To do this I need a button. Drag the operation object under the fields created in the previous step and select Methods –> ADF Button.
    15. You now need to provide the mappings. Choose 'Show EL Expression Builder'.
    16. Navigate to the bindings: ADFBindings –> bindings –> parametersIterator –> currentRow, then click OK.
    17. Drag and drop the return information. I just want the results shown in a form, with all fields displayed.
    18. Now it is time to test. Right-click the .jspx page created above and select Run. A browser should start; enter valid values and test.
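    For reference, here is a rough sketch of the kind of page fragment the wizard wires up for the input field and button. The binding names (customerId, getCustomer) are hypothetical and will match whatever your data control actually exposes:

        <af:panelFormLayout>
          <!-- input text bound to the operation's parameter -->
          <af:inputText label="Customer ID"
                        value="#{bindings.customerId.inputValue}"/>
          <!-- button that executes the getCustomer operation -->
          <af:commandButton text="getCustomer"
                            actionListener="#{bindings.getCustomer.execute}"
                            disabled="#{!bindings.getCustomer.enabled}"/>
        </af:panelFormLayout>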

    Read the article
