Search Results

Search found 19481 results on 780 pages for 'bi tools team'.


  • Latest Security Inside Out Newsletter Now Available

    - by Troy Kitch
    The September/October edition of the Security Inside Out Newsletter is now available. Learn about the Oracle OpenWorld database security sessions, hands-on labs, and demos you'll want to attend, as well as frequently asked questions about Label-Based Access Control in Oracle Database 11g. Subscribe here for the bi-monthly newsletter. ...and if you haven't already done so, join Oracle Database on these social networks: Twitter, Facebook, LinkedIn, Google+

    Read the article

  • 24 Hours of PASS: Whine, Whine, Whine

    - by Most Valuable Yak (Rob Volk)
    24 Hours of PASS (or 24HOP) is a great program offered by PASS to provide free, online training for anyone who wants to learn more about SQL Server. They routinely have the best SQL Server presenters available for these sessions, and attract hundreds, perhaps even a thousand attendees from around the world. This is definitely one of the best things they've started doing in the past few years, and every session I've attended has been excellent.

    So why am I so grumpy about it? I'm not really; pretty much everything here is a minor annoyance that I can deal with. However, since they're so minor, they seem to be things that could easily be corrected and would make the process much better.

    First off, and this is my biggest gripe, the registration page: https://www323.livemeeting.com/lrs/8000181573/Registration.aspx?pageName=lj6378f4fhf5hpdm

    What grinds my gears about this? I have to scroll alllllllllllllllllllllllllllll the way to the bottom to actually register for the sessions. This wouldn't be so bad, except all the details of the sessions, including the presenters, are in a separate list at the top. Both lists contain info the other does not, and scrolling between them to determine "Should I make time to listen to this? Who is speaking at this time anyway?" is really unnecessary.

    My preference would be to keep the top list and add the checkboxes and schedule info in separate columns. This is a full-width design, so there's plenty of space for this data, which is pretty small anyway. The other huge benefit is halving the size of the page, which improves performance and lowers bandwidth usage considerably. And if you know HTML/ASP.NET and view the page source, you can find PLENTY of other things that could be reduced even further. (Not just viewstate.)

    One nice thing that PASS does is send iCal reminders to your email address so you can accept them to your calendar. Again, they leave off the presenter in the appointment details, while still duplicating the meeting title in the body. Sometimes I make decisions based on speaker rather than content (Natalie Portman is reading the Yellow Pages??? I'M THERE!), and having the speaker in the iCal is helpful.

    The next minor annoyances are the necessity of providing a company name, and the survey questions. I know PASS needs to market themselves effectively, and they need information to do that, and since this is a free event it's really not worth complaining about, but why ask the survey question twice? (Once at registration, once again when joining the LiveMeeting.) Same thing for the company name. All of this should be tied to email address, so that's all I should need to enter when joining the LiveMeeting.

    The last one is also minor, but it irks me in this day and age of multiple browsers and the decline of Internet Explorer as a dominant platform. The registration page was originally created in Visual Studio 2003, and has a lot of IE-specific crud representative of the browser situation of 2003. (IE5 references? Really? And is the aforementioned viewstate big enough?) This causes some grief with other browsers like Firefox, Chrome, and sometimes IE8 or 9. And don't get me started on using the page on a Mac or in Safari.

    My main point is that PASS is an international organization, welcoming everyone from all levels of SQL Server proficiency, and in that spirit I think it would help to accommodate a wider range of browser software, especially since the registration page is extremely simple.

    I recognize that this page is not hosted on the PASS website and may be maintained by some division of Microsoft, but to me that's even worse if MS can't update their own pages. They've deprecated IE6, so they don't need to maintain support for it on their own websites anymore.

    OK, I'll shut up now. #sqlpass #24HOP

    Read the article

  • How to Name Linked Servers

    - by Bill Graziano
    I did another SQL Server migration over the weekend that dealt with linked servers. I've seen all kinds of odd naming schemes; there are a few I like and a few I suggest you avoid.

    Don't name your linked server for its IP address. At some point, whatever is on the other end of that IP address will move. You'll probably need to point your linked server to a new IP address but not change the name of the linked server, and then you've completely lost any context around it. Bonus points if a new SQL Server eventually ends up at the old IP address, further adding confusion when you're trying to troubleshoot.

    Don't name your linked server based on its instance name. This one is less obvious. It sounds nice to have a linked server named [VSRV1\SQLTRAN01]. You know what it is and it's easy to use. It's less nice when you've got 200 stored procedures that all reference this linked server but the database they reference has moved to a new instance. Now when you query it you're actually querying a different instance. (Please note: I'm not saying it's a good idea to have 200 stored procedures that all reference a linked server. I'm just saying it's not all that uncommon.)

    Consider naming your linked server something that you can easily search on. See my note above. You can also get around this by always enclosing the name in brackets. That is harder to enforce unless you use some odd characters in it.

    Consider naming your linked server based on its function. For example, I've had some luck having a linked server named [DW] that points to our data warehouse server. That server can change names or physically move, and all I need to do is update the linked server to point to the new destination. The descriptive name of the linked server is still accurate. No code needs to change, and people still know what it is just by looking at it.

    Consider naming your linked server for the database. I'm still thinking through this one. It may mean you have multiple linked servers that point to the same instance. I've found that database names rarely change. It also makes it easier to move individual databases to new servers.

    Consider pointing your linked servers to DNS entries and not IP addresses. I've done this for reporting databases and had some success, especially for read-only snapshots that can get created on the main database or on the mirror.

    What issues have you had with linked server names? What has worked for you? Where are the holes in my approach?
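    To make the function-based approach concrete, here is a minimal sketch; the [DW] name follows the article, but the DNS name and provider choice are illustrative assumptions, not Bill's actual setup:

        -- create a linked server named for its function, pointed at a DNS alias
        -- rather than an IP address or instance name (dw.example.com is hypothetical)
        EXEC sp_addlinkedserver
            @server     = N'DW',              -- the stable name code will reference
            @srvproduct = N'',                -- empty product name when specifying a provider
            @provider   = N'SQLNCLI',         -- SQL Server Native Client OLE DB provider
            @datasrc    = N'dw.example.com';  -- DNS entry; repoint it when the server moves

        -- stored procedures then reference the descriptive name:
        -- SELECT ... FROM [DW].WarehouseDB.dbo.FactSales;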

    Read the article

  • Google I/O 2010 - "Run corp apps on App Engine?" Yes we do.

    Google I/O 2010 - "Run corporate applications on Google App Engine? Yes we do." App Engine, Enterprise 201. Ben Fried, Irwin Boutboul, Justin McWilliams, Matthew Simmons. Hear Google CIO Ben Fried and his team of engineers describe how Google builds on App Engine. If you're interested in building corp apps that run on Google's cloud, this team has been doing exactly that. Learn how these teams have been able to respond more quickly to business needs while reducing operational burden. For all I/O 2010 sessions, please go to code.google.com/events/io/2010/sessions.html From: GoogleDevelopers | Views: 14 | 0 ratings | Time: 55:53 | More in Science & Technology

    Read the article

  • How to get current connection settings

    - by Peter Larsson
    SELECT  name AS Setting,
            CASE
                WHEN @@OPTIONS & number = number THEN 'ON'
                ELSE 'OFF'
            END AS Value
    FROM    master..spt_values
    WHERE   type = 'SOP'
            AND number > 0
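    As a usage sketch, the same bitmask test can check a single option on its own; 256 is the documented @@OPTIONS bit for QUOTED_IDENTIFIER (the other bit values can be read from the spt_values rows the query above returns):

        -- test one session option directly against the @@OPTIONS bitmask
        IF @@OPTIONS & 256 = 256
            PRINT 'QUOTED_IDENTIFIER is ON'
        ELSE
            PRINT 'QUOTED_IDENTIFIER is OFF'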

    Read the article

  • How to block the ASP.NET page while the Ajax UpdateProgress is displayed

    Step 1: Copy the following styles to your aspx page.

        <style type="text/css">
            .hide
            {
                display: none;
            }
            .show
            {
                display: inherit;
            }
            .progressBackgroundFilter
            {
                position: absolute;
                top: 0px;
                bottom: 0px;
                left: 0px;
                right: 0px;
                overflow: hidden;
                padding: 0;
                margin: 0;
                background-color: #000;
                filter: alpha(opacity=50);
                opacity: 0.5;
                z-index: 1000;
            }
            .processMessage
            {
                position: absolute;
                font-family: Verdana;
                font-size: 12px;
                font-weight: normal;
                color: #000066;
                top: 30%;
                left: 43%;
                padding: 10px;
                width: 18%;
                z-index: 1001;
                background-color: #fff;
            }
        </style>

    Step 2: Put the divs as shown below in the UpdateProgress control.

        <asp:UpdateProgress ID="updPrgsBaselineTab" runat="server">
            <ProgressTemplate>
                <div id="progressBackgroundFilter" class="progressBackgroundFilter">
                </div>
                <div id="processMessage" class="processMessage">
                    <table width="100%">
                        <tr style="width: 100%">
                            <td style="width: 100%">
                                Please Wait..........
                            </td>
                        </tr>
                        <tr style="width: 100%">
                            <td style="width: 100%" align="center">
                                <img src="../Images/Update_Progress.gif" />
                            </td>
                        </tr>
                    </table>
                </div>
            </ProgressTemplate>
        </asp:UpdateProgress>

    Read the article

  • Showing schema.org aggregate rating in Google rich snippets

    - by nickh
    After adding schema.org microdata markup for reviews and aggregate ratings, I expected review and rating information to show up in rich snippets. Unfortunately, neither is being shown. Google's Structured Data Testing Tool finds the microdata, and there are no errors or warnings on the page. Any idea what's wrong with the microdata markup?

    Example 1: Live Page: http://www.shelflife.net/ljn-thundercats-series-3/bengali Google Test: http://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fwww.shelflife.net%2Fljn-thundercats-series-3%2Fbengali

    Example 2: Live Page: http://www.shelflife.net/star-wars-mighty-muggs/asajj-ventress Google Test: http://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fwww.shelflife.net%2Fstar-wars-mighty-muggs%2Fasajj-ventress

    Read the article

  • Cloud Evolving, SQL Server Responding

    - by KKline
    Brent Ozar ( blog | twitter ) and I did an interview with TechTarget's Brendan Cournoyer at last summer's Tech-Ed, which has since been turned into a podcast titled "Cloud efforts advance, SQL Server evolves." The podcast covers all the major trends at the conference (like BI), virtualization features in Quest's products (like Spotlight), Brent's new book and MCM certification, and more. Here's a link to hear it, appearing on 6/11/10: http://searchsqlserver.techtarget.com/podcast/Cloud-efforts-advance-SQL-Server-evolves....(read more)

    Read the article

  • Advantages of Singleton Class over Static Class?

    Point 1) Singleton: We can get the object of a singleton and then pass it to other methods. Static Class: We cannot pass a static class to other methods as we pass objects.

    Point 2) Singleton: In the future, it is easy to change the logic of creating objects to some pooling mechanism. Static Class: It is very difficult to implement pooling logic in the case of a static class. We would need to make the class non-static and then make all the methods non-static, so your entire code would need to be changed.

    Point 3) Singleton: Can a singleton class be inherited by a subclass? The singleton pattern does not place any restriction on inheritance, so we should be able to do this as long as the subclass is also a singleton. There's nothing fundamentally wrong with subclassing a class that is intended to be a singleton. There are many reasons you might want to do it, and there are many ways to accomplish it. It depends on the language you use. Static Class: We cannot inherit a static class from another static class in C#. Think about it this way: you access static members via the type name, like this: MyStaticType.MyStaticMember(); Were you to inherit from that class, you would have to access it via the new type name: MyNewType.MyStaticMember(); Thus, the new item bears no relationship to the original when used in code. There would be no way to take advantage of any inheritance relationship for things like polymorphism.

    Read the article

  • Google I/O 2010 - Android UI design patterns

    Google I/O 2010 - Android UI design patterns. Android 201. Chris Nesladek, German Bauer, Richard Fulcher, Christian Robertson, Jim Palmer. In this session, the Android User Experience team will show the types of patterns you can use to build a great Android application. We'll cover things like how to use Interactive Titlebars, Quick Contacts, and Bottom Bars, as well as some new patterns which will get an I/O-only preview. The team will also be available for a no-holds-barred Q&A session. For all I/O 2010 sessions, please go to code.google.com From: GoogleDevelopers | Views: 6 | 0 ratings | Time: 58:42 | More in Science & Technology

    Read the article

  • Interviews: Going Beyond the Technical Quiz

    - by Tony Davis
    All developers will be familiar with the basic format of a technical interview. After a bout of CV-trawling to gauge basic experience, strengths and weaknesses, the interview turns technical. The whiteboard takes center stage and the challenge is set: design a function or query, or solve what on the face of it might seem a disarmingly simple programming puzzle. Most developers will have experienced those few panic-stricken moments when one's mind goes as blank as the whiteboard, before un-popping the marker pen, and hopefully one's mental functions, to work through the problem.

    It is a way to probe the candidate's knowledge of basic programming structures and techniques and to challenge their critical thinking. However, these challenges or puzzles, often devised by some of the smartest brains in the development team, have a tendency to become unnecessarily 'tricksy'. They often seem somewhat academic in nature. While the candidate straight out of IT school might breeze through the construction of a Markov chain, a candidate with bags of practical experience but less in the way of formal training could become nonplussed. Also, a whiteboard and a marker pen make up only a very small part of the toolkit that a programmer will use in everyday work.

    I remember vividly my first job interview, for a position as technical editor. It went well, but after the usual CV grilling and technical questions, I was only halfway there. Later, they sat me alongside a team of editors, in front of a computer loaded with MS Word and a copy of SQL Server Query Analyzer, and my task was to edit a real chapter for a real SQL Server book that they planned to publish, including validating and testing all the code. It was a tough challenge, but I came away with a sound knowledge of the sort of work I'd do, and its context.

    It makes perfect sense, yet my impression is that many organizations don't do this. Indeed, it is only relatively recently that Red Gate started to move over to this model for developer interviews. Now, instead of, or perhaps in addition to, the whiteboard challenges, the candidate can expect to sit with their prospective team, in front of Visual Studio loaded with all the useful tools in the developer's kit (ReSharper and so on), and be asked to, for example, analyze and improve a real piece of software.

    The same principles should apply when interviewing for a database position. In addition to the usual questions challenging the candidate's knowledge of such things as b-trees, object permissions, database recovery models, and so on, sit the candidate down with the other database developers or DBAs. Arm them with a copy of Management Studio, and a few other tools, then challenge them to discover the flaws in a stored procedure and improve its performance. Or present them with a corrupt database and ask them to get the database back online, and discover the cause of the corruption.

    Read the article

  • Scheduling Jobs in SQL Server Express

    As we all know, SQL Server 2005 Express is a very powerful free edition of SQL Server 2005. However, it does not contain the SQL Server Agent service, so scheduling jobs is not possible. If we want to do this, we have to install a free or commercial 3rd-party product. This usually isn't allowed under the security policies of many hosting companies and thus presents a problem. Maybe we want to schedule daily backups, database reindexing, statistics updating, etc. This is why I wanted to have a solution based only on SQL Server 2005 Express and not dependent on the hosting company. And of course there is one, based on our old friend the Service Broker; a sketch of the idea follows.
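    The core of that solution is Service Broker's conversation timer, which drops a DialogTimer message on a queue after a timeout and can activate a stored procedure. A minimal sketch of the pattern, assuming Service Broker is enabled in the database (the queue, service, and procedure names are hypothetical, and the article's own implementation may differ in detail):

        -- one-time setup: a queue, a service on it, and an activated procedure
        CREATE QUEUE dbo.SchedulerQueue;
        CREATE SERVICE SchedulerService ON QUEUE dbo.SchedulerQueue ([DEFAULT]);
        GO
        CREATE PROCEDURE dbo.OnSchedulerTimer
        AS
        BEGIN
            DECLARE @handle UNIQUEIDENTIFIER, @msgtype SYSNAME;
            RECEIVE TOP (1) @handle  = conversation_handle,
                            @msgtype = message_type_name
            FROM dbo.SchedulerQueue;
            IF @msgtype = N'http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer'
            BEGIN
                EXEC dbo.DailyMaintenance;  -- hypothetical proc: backups, reindexing, etc.
                BEGIN CONVERSATION TIMER (@handle) TIMEOUT = 86400;  -- re-arm for 24 hours
            END
        END;
        GO
        ALTER QUEUE dbo.SchedulerQueue
            WITH ACTIVATION (STATUS = ON,
                             PROCEDURE_NAME = dbo.OnSchedulerTimer,
                             MAX_QUEUE_READERS = 1,
                             EXECUTE AS OWNER);
        GO
        -- start the "job" once: open a dialog and arm its timer
        DECLARE @h UNIQUEIDENTIFIER;
        BEGIN DIALOG CONVERSATION @h
            FROM SERVICE SchedulerService TO SERVICE 'SchedulerService'
            ON CONTRACT [DEFAULT] WITH ENCRYPTION = OFF;
        BEGIN CONVERSATION TIMER (@h) TIMEOUT = 86400;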

    Read the article

  • Experiencing the New Social Enterprise

    - by kellsey.ruppel(at)oracle.com
    Social media and networking tools, popularly known as Web 2.0 technologies, are rapidly transforming user expectations of enterprise systems. Many organizations are investing in these new tools to cultivate a modern user experience in an "Enterprise 2.0" environment that unlocks the full potential of traditional IT systems and fosters collaboration in key business processes. Is your organization a social enterprise? How are you using Web 2.0 and Enterprise 2.0 technologies? Read this white paper to learn how Oracle WebCenter Suite enables organizations to become social enterprises and is the modern user experience platform for the enterprise and the Web.

    Read the article

  • Testing with Profiler Custom Events and Database Snapshots

    We've all had them: one of those stored procedures that is huge and contains complex business logic which may or may not be executed. These procedures make it an absolute nightmare to debug problems, because they're so complex and have so many logic offshoots that it's very easy to get lost when you're trying to determine the path the procedure code took when it ran. Fortunately, Profiler lets you define custom events that you can raise in your code and capture in a trace, giving you a better window into the sub-events occurring in your code. I found it very useful to use custom events and a database snapshot to debug some code recently, and we'll explore both in this article. I find raising these events and running Profiler to be very useful for testing my stored procedures on my own, as well as when my code is going through official testing and user acceptance. It's a simple approach and a great way to catch performance problems or logic errors.
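    For reference, custom events are raised with sp_trace_generateevent; event ids 82 through 91 appear in Profiler as UserConfigurable:0 through UserConfigurable:9. A minimal sketch of tagging a branch inside a complex procedure (the message text is illustrative):

        -- at the start of a logic offshoot worth tracking:
        DECLARE @info NVARCHAR(128);
        SET @info = N'Entered rarely-executed pricing branch';
        -- raises UserConfigurable:0, which a running trace can capture
        EXEC sp_trace_generateevent @eventid = 82, @userinfo = @info;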

    Read the article

  • How can I convince cowboy programmers to use source control?

    - by P.Brian.Mackey
    UPDATE

    I work on a small team of devs, 4 guys. They have all used source control. Most of them can't stand source control and instead choose not to use it. I strongly believe source control is a necessary part of professional development. Several issues make it very difficult to convince them to use source control:

    - The team is not used to using TFS. I've had 2 training sessions, but was only allotted 1 hour, which is insufficient.
    - Team members directly modify code on the server. This keeps code out of sync, requires comparison just to be sure you are working with the latest code, and complex merge problems arise.
    - Time estimates offered by developers exclude the time required to fix any of these problems. So, if I say no, it will take 10x longer... I have to constantly explain these issues and risk myself, because now management may perceive me as "slow".
    - The physical files on the server differ in unknown ways over ~100 files. Merging requires knowledge of the project at hand and, therefore, developer cooperation, which I am not able to obtain.
    - Other projects are falling out of sync. Developers continue to have a distrust of source control and therefore compound the issue by not using source control.
    - Developers argue that using source control is wasteful because merging is error-prone and difficult. This is a difficult point to argue, because when source control is being so badly misused and continually bypassed, it is error-prone indeed. Therefore, the evidence "speaks for itself" in their view.
    - Developers argue that directly modifying server code, bypassing TFS, saves time. This is also difficult to argue, because the merge required to synchronize the code to start with is time-consuming. Multiply this by the 10+ projects we manage.
    - Permanent files are often stored in the same directory as the web project. So publishing (full publish) erases these files that are not in source control. This also drives distrust of source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points.

    So the culture perpetuates itself. Bad practice begets more bad practice. Bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by. Yet user expectations are rising.

    What can be done in this situation?

    Read the article

  • Guessing Excel Data Types

    - by AjarnMark
    Note to Self: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel: TypeGuessRows = 0 means scan everything.

    Note to Others: About 10 years ago I stumbled across this bit of information just when I needed it, and it saved my project. Then a few years later, when it would have been nice, but not critical, for some reason I could not find it again anywhere. Well, now I have stumbled across it again, and to preserve my future self from nightmares and sudden baldness due to pulling my hair out, I have decided to blog it in the hopes that I can find it again this way.

    Here's the story… When you query data from an Excel spreadsheet, such as with old-fashioned DTS packages in SQL 2000 (my first reference) or simply with an OLEDB Data Adapter from ASP.NET (recent task), and if you are using the Microsoft Jet 4.0 driver (newer ones may deal with this differently), then you can get funny results where the query reports back that a cell value is null even when you know it contains data.

    What happens is that Excel doesn't really have data types. While you can format information in cells to appear like certain data types (e.g. Date, Time, Decimal, Text, etc.), that is not really defining the cell as being of a certain type like we think of when working with databases. But, presumably to make things more convenient for the user (programmer), when you issue a query against Excel, the query processor tries to guess what type of data is contained in each column and returns it in an appropriate manner. This is all well and good IF your data is consistent in every row and matches what the processor guessed. And, for efficiency's sake, when the query processor is trying to figure out each column's data type, it does so by analyzing only the first 8 rows of data (default setting).

    Now here's the problem. Suppose that your spreadsheet contains information about clothing, and one of the columns is Size. Suppose that in the first 8 rows, all of your sizes look like 32, 34, 18, 10, and so on, using numbers, but then, somewhere after the 8th row, you have some rows with sizes like S, M, L, XL. What happens is that by examining only the first 8 rows, the query processor inferred that the column contained numerical data, and then when it hits the non-numerical data in later rows, it comes back blank. Major bummer, and a real pain to track down if you don't know that Excel is doing this, because you study the spreadsheet and say, "the data is RIGHT THERE! WHY doesn't the query see it?!?!" And the hair-pulling begins.

    So, what's a developer to do? One option is to go to the registry setting noted above and change the DWORD value of TypeGuessRows from the default of 8 to 0 (zero). Setting this value to zero will force Jet to scan every row in the spreadsheet before making its determination as to what type of data the column contains. That means that in the example above, it would have treated the column as a string rather than as numeric, and presto! your query now returns all of the values that you know are in there.

    Of course, there is a caveat… if you are querying large spreadsheets, making Jet scan every row can be quite a performance hit. You could enter a different number (more than 8) that you believe is a better sampling of rows to make the guess, but you still have the possibility that every row scanned looks alike, yet later rows are different, and that you might get blanks when there really is data there. That's the type of gamble I really don't like to take with my data.

    Anyone with a better approach, or with experience with more recent drivers that have a better way of handling data types, please chime in!
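    For anyone hitting this from the SQL Server side, a minimal sketch of the kind of query affected (the file, sheet, and column names are hypothetical); the IMEX=1 extended property is a related knob that tells Jet to treat mixed-type columns as text:

        -- ad hoc query through the Jet 4.0 provider; with TypeGuessRows = 0 in the
        -- registry, Jet scans every row before guessing each column's data type
        -- (note: 'Ad Hoc Distributed Queries' must be enabled for OPENROWSET)
        SELECT *
        FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                        'Excel 8.0;Database=C:\Data\Clothing.xls;IMEX=1',
                        'SELECT Size, Color FROM [Inventory$]');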

    Read the article

  • Sharing A Stage: JDeveloper/ADF & NetBeans/Java EE 6?

    - by Geertjan
    A highlight for me during last week's Oracle Developer Day in Romania (which I blogged about here) was meeting Jernej Kaše (who is from Slovenia, just like my philosopher hero Slavoj Žižek), who is an Oracle Fusion Middleware evangelist. At the conference, while I was presenting NetBeans and Java EE 6 in one room, Jernej was presenting JDeveloper and ADF in another room. The application he created looks as follows, i.e., a realistic CRUD app, with a master/detail view, a search feature, and validation:

    In a conversation during a break, we started imagining a scenario where the two of us would be on the same stage, taking turns talking about NetBeans/Java EE and JDeveloper/ADF. In that way, attendees at a conference wouldn't need to choose which of the two topics to attend, because they'd be handled in the same session, with the session possibly being longer so that sufficient time could be spent on the respective technologies. (The JDeveloper/ADF session would then not be competing with the NetBeans/Java EE 6 session, since they'd be handled simultaneously.) The session would focus on the similarities/differences between the two respective tools/solutions, which would be extremely interesting and also unique.

    The crucial question in making this kind of co-presentation possible is whether (and how quickly) an application such as the one created above with JDeveloper/ADF could be created with NetBeans/Java EE 6. The NetBeans/Java EE 6 story is extremely strong on the model and controller levels, but less strong on the view layer. Though there are choices between using PrimeFaces, RichFaces, and IceFaces, that support is quite limited in the absence of a visual designer or of other specific tools (e.g., code generators to generate snippets of PrimeFaces) connected to JSF component libraries.

    However, it so happens that in recent months we at NetBeans have established really good connections with the PrimeFaces team (more about that another time). So I asked them what it would take to write the above UI in PrimeFaces. The PrimeFaces team were very helpful. They sent me the following screenshot, which is of the UI they created in PrimeFaces, reproducing the ADF screenshot above:

    Of course, the above is purely the UI layer; there are no EJB and entity classes and no data connection hooked into it yet.
    However, this is the Facelets file that the PrimeFaces team sent me, i.e., using the PrimeFaces component library, that produces the above result:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:h="http://java.sun.com/jsf/html"
              xmlns:f="http://java.sun.com/jsf/core"
              xmlns:p="http://primefaces.org/ui">
        <f:view>
          <h:head>
            <style type="text/css">
              .alignRight { text-align: right; }
              .alignLeft { text-align: left; }
              .alignTop { vertical-align: top; }
              .ui-validation-required {
                color: red;
                font-size: 14px;
                margin-right: 5px;
                position: relative;
                vertical-align: top;
              }
              .ui-selectonemenu .ui-selectonemenu-trigger .ui-icon {
                margin-top: 7px !important;
              }
            </style>
          </h:head>
          <h:body>
            <h:form prependId="false" id="form">
              <p:panel header="Employees">
                <h:panelGrid columns="4" id="searchPanel">
                  Search
                  <p:selectOneMenu>
                    <f:selectItem itemLabel="FirstName" itemValue="FirstName" />
                    <f:selectItem itemLabel="LastName" itemValue="LastName" />
                    <f:selectItem itemLabel="Email" itemValue="Email" />
                    <f:selectItem itemLabel="PhoneNumber" itemValue="PhoneNumber" />
                  </p:selectOneMenu>
                  <p:inputText />
                  <p:commandLink process="searchPanel" update="@form">
                    <h:graphicImage name="next.gif" library="img" />
                  </p:commandLink>
                </h:panelGrid>
                <h:panelGrid columns="3" columnClasses="alignTop,,alignTop" style="width:90%;margin-left:10%">
                  <h:panelGrid columns="2" columnClasses="alignRight,alignLeft">
                    <h:outputLabel for="firstName">FirstName</h:outputLabel>
                    <p:inputText id="firstName" />
                    <h:outputLabel for="lastName"><sup class="ui-validation-required">*</sup>LastName</h:outputLabel>
                    <p:inputText id="lastName" style="width:250px;" />
                    <h:outputLabel for="email"><sup class="ui-validation-required">*</sup>Email</h:outputLabel>
                    <p:inputText id="email" style="width:250px;" />
                    <h:outputLabel for="phoneNumber" value="PhoneNumber" />
                    <p:inputMask id="phoneNumber" mask="999.999.9999" />
                    <h:outputLabel for="hireDate"><sup class="ui-validation-required">*</sup>HireDate</h:outputLabel>
                    <p:calendar id="hireDate" pattern="MM/dd/yyyy" showOn="button" />
                  </h:panelGrid>
                  <p:outputPanel style="min-width:40px;" />
                  <h:panelGrid columns="2" columnClasses="alignRight,alignLeft">
                    <h:outputLabel for="jobId"><sup class="ui-validation-required">*</sup>JobId</h:outputLabel>
                    <p:selectOneMenu id="jobId">
                      <f:selectItem itemLabel="Administration Vice President" itemValue="Administration Vice President" />
                      <f:selectItem itemLabel="Vice President" itemValue="Vice President" />
                    </p:selectOneMenu>
                    <h:outputLabel for="salary">Salary</h:outputLabel>
                    <p:inputText id="salary" styleClass="alignRight" />
                    <h:outputLabel for="commissionPct">CommissionPct</h:outputLabel>
                    <p:inputText id="commissionPct" style="width:30px;" maxlength="3" />
                    <h:outputLabel for="manager">ManagerId</h:outputLabel>
                    <p:selectOneMenu id="manager">
                      <f:selectItem itemLabel="Steven King" itemValue="Steven" />
                      <f:selectItem itemLabel="Michael Cook" itemValue="Michael" />
                      <f:selectItem itemLabel="John Benjamin" itemValue="John" />
                      <f:selectItem itemLabel="Dav Glass" itemValue="Dav" />
                    </p:selectOneMenu>
                    <h:outputLabel for="department">DepartmentId</h:outputLabel>
                    <p:selectOneMenu id="department">
                      <f:selectItem itemLabel="90" itemValue="90" />
                      <f:selectItem itemLabel="80" itemValue="80" />
                      <f:selectItem itemLabel="70" itemValue="70" />
                      <f:selectItem itemLabel="60" itemValue="60" />
                      <f:selectItem itemLabel="50" itemValue="50" />
                      <f:selectItem itemLabel="40" itemValue="40" />
                      <f:selectItem itemLabel="30" itemValue="30" />
                      <f:selectItem itemLabel="20" itemValue="20" />
                    </p:selectOneMenu>
                  </h:panelGrid>
                </h:panelGrid>
                <p:outputPanel id="buttonPanel">
                  <p:commandButton value="First" process="@this" update="@form" />
                  <p:commandButton value="Previous" process="@this" update="@form" style="margin-left:15px;" />
                  <p:commandButton value="Next" process="@this" update="@form" style="margin-left:15px;" />
                  <p:commandButton value="Last" process="@this" update="@form" style="margin-left:15px;" />
                </p:outputPanel>
                <p:tabView style="margin-top:25px">
                  <p:tab title="Job History">
                    <p:dataTable var="history">
                      <p:column headerText="StartDate">
                        <h:outputText value="#{history.startDate}">
                          <f:convertDateTime pattern="MM/dd/yyyy" />
                        </h:outputText>
                      </p:column>
                      <p:column headerText="EndDate">
                        <h:outputText value="#{history.endDate}">
                          <f:convertDateTime pattern="MM/dd/yyyy" />
                        </h:outputText>
                      </p:column>
                      <p:column headerText="JobId">
                        <h:outputText value="#{history.jobId}" />
                      </p:column>
                      <p:column headerText="DepartmentId">
                        <h:outputText value="#{history.departmentIdId}" />
                      </p:column>
                    </p:dataTable>
                  </p:tab>
                </p:tabView>
              </p:panel>
            </h:form>
          </h:body>
        </f:view>
        </html>

    Right now, NetBeans IDE only has code completion to create the above. So there's not much help for creating such a UI right now. I don't believe that a visual designer is mandatory to create the above. A few code generators and file templates could do the job too. And I'm looking forward to seeing those kinds of tools for PrimeFaces, as well as other JSF component libraries, appearing in NetBeans IDE in upcoming releases.

    A related option would be for the NetBeans-generated CRUD app to include the option of having a master/detail view, as well as the option of having a search feature, i.e., the application generators would provide the option of having additional features typical in Java enterprise apps.

    In the absence of such tools, there still is room, I believe, for NetBeans/Java EE and JDeveloper/ADF sharing a stage at a conference. The above file would have been prepared up front and the presenter would state that fact. The UI layer is only one aspect of a Java EE 6 application, so the presenter would have ample other features to show (i.e., the entity class generation, the tools for working with servlets, with session beans, etc.) prior to getting to the point where the statement would be made: "On the UI layer, I have prepared this Facelets file, which I will now show you can be connected to the lower layers of the application as follows." At that point, the session beans could be hooked into the Facelets file, the file would be saved, the browser refreshed, and then the whole application would work exactly as the ADF application does.

    So, Jernej, let's share a stage soon!

    Read the article

  • Finance: Friends, not foes!

    - by red@work
    After reading Phil's blog post about his experiences of working on reception, I thought I would let everyone in on one of the other customer-facing roles at Red Gate... When you think of a Credit Control team, most might imagine money-hungry (and often impolite) people who will do nothing short of hunting people down until they pay up. Well, as with so many things, not at Red Gate! Here we do things a little bit differently.

    Since joining the Licensing, Invoicing and Credit Control team at Red Gate (affectionately nicknamed LICC!), I have found it fantastic to work with people who know that often the best way to get what you want is by being friendly, reasonable and as helpful as possible. The best bit about this is that, because everyone is in a good mood, we have a great working atmosphere! We are definitely a very happy team. We laugh a lot, even when dealing with the serious matter of playing table football after lunch.

    The most obvious part of my job is bringing in money. There are few things quite as satisfying as receiving a big payment or one that you've been chasing for a long time. That being said, it's just as nice to encounter the companies that surprise you with a payment bang on time after little or no chasing. It's always a pleasure to find these people who are generous and easy to work with, and so they always make me smile, too.

    As I'm in one of the few customer-facing roles here, I get to experience firsthand just how much Red Gate customers love our software and are equally impressed with our customer service. We regularly get replies from people thanking us for our help in resolving a problem, or simply saying that they think we're great. Or, as is often the case, that we 'rock and are awesome'! When those are the kinds of emails you have to deal with for most of the day, I would challenge anyone to be unhappy!

    The best thing about my work is that, much like Phil and his counterparts on reception, I get to talk to people from all over the world, and experience their unique (and occasionally unusual) personality traits. I deal predominantly with customers in the US, so I'll be speaking to someone from a high-flying multi-national in New York one minute, and then the next phone call will be to a small office on the outskirts of Alabama. This level of customer involvement has led to a lot of interesting anecdotes and plenty of in-jokes to keep us amused!

    Obviously there are customers who are infuriating, like those who simply tell us that they will pay "one day", and that we should stop chasing them. Then there are the people who say that they ordered the tools because they really like them, but they just can't afford to actually pay for them at the moment. Thankfully these situations are relatively few and far between, and for every one customer that makes you want to scream, there are far, far more that make you smile!

    Read the article

  • Architecture Forum in the North 2010 - Hosted by Black Marble

    - by Stuart Brierley
    On Thursday the 8th of December I attended the "Architecture Forum in the North 2010", hosted by Black Marble. The third time this annual event has been held, it was pitched as featuring Black Marble and Microsoft UK architecture experts focusing on "Tools and Methods for Architects"... "a unique opportunity to provide IT Managers, IT and software architects from Northern businesses the chance to learn about the latest technologies and best practices from Microsoft in the field of Architecture... insightful information about the latest techniques, demonstrating how with Microsoft's architecture tools and technologies you can address your current business needs."

    Following a useful overview of the architecture features of Visual Studio 2010, the rest of the day was given over to various features of, and ways to make use of, Microsoft's Azure offerings. While I did feel that a wider spread of technologies could have been covered (maybe a bit of SharePoint or BizTalk even), the technological and architectural overviews of the Azure platform were well presented, informative and useful.

    The day was well organised and all those involved were friendly and approachable for questions and discussions. If you are in "the North" and get a chance to attend next year, I would highly recommend it.

    Read the article

  • TDD with limited resources

    - by bunglestink
    I work in a large company, but on a small two-man team developing desktop LOB applications. I have been researching TDD for quite a while now, and although it is easy to see its benefits for larger applications, I am having a hard time justifying the time to begin using TDD at the scale of our applications. I understand its advantages in automating testing, improving maintainability, etc., but at our scale, writing even basic unit tests for all of our components could easily double development time. Since we are already undermanned with extreme deadlines, I am not sure what direction to take. While other practices such as agile iterative development make perfect sense, I am somewhat torn over the productivity trade-offs of TDD on a small team. Are the advantages of TDD worth the extra development time on small teams with very tight schedules?

    Read the article

  • T-SQL Tuesday #31 - Logging Tricks with CONTEXT_INFO

    - by Most Valuable Yak (Rob Volk)
    This month's T-SQL Tuesday is being hosted by Aaron Nelson [b | t], fellow Atlantan (the city in Georgia, not the famous sunken city, or the resort in the Bahamas) and covers the topic of logging (the recording of information, not the harvesting of trees), and maintains the fine T-SQL Tuesday tradition begun by Adam Machanic [b | t] (the SQL Server guru, not the guy who fixes cars, check the spelling again, there will be a quiz later).

    This is a trick I learned from Fernando Guerrero [b | t] waaaaaay back during the PASS Summit 2004 in sunny, hurricane-infested Orlando, during his session on Secret SQL Server (not sure if that's the correct title, and I haven't used parentheses in this paragraph yet). CONTEXT_INFO is a neat little feature that's existed since SQL Server 2000 and perhaps even earlier. It lets you assign data to the current session/connection, and maintains that data until you disconnect or change it. In addition to the CONTEXT_INFO() function, you can also query the context_info column in sys.dm_exec_sessions, or even sysprocesses if you're still running SQL Server 2000, if you need to see it for another session.

    While you're limited to 128 bytes, one big advantage that CONTEXT_INFO has is that it's independent of any transactions. If you've ever logged to a table in a transaction and then lost messages when it rolled back, you can understand how aggravating it can be. CONTEXT_INFO also survives across multiple SQL batches (GO separators) in the same connection, so for those of you who were going to suggest "just log to a table variable, they don't get rolled back": HA-HA, I GOT YOU! Since GO starts a new batch, all variable declarations are lost.

    Here's a simple example I recently used at work. I had to test database mirroring configurations for disaster recovery scenarios and measure the network throughput. I also needed to log how long it took for the script to run and include the mirror settings for the database in question. I decided to use AdventureWorks as my database model, and Adam Machanic's Big Adventure script to provide a fairly large workload that's repeatable and easily scalable. My test would consist of several copies of AdventureWorks running the Big Adventure script while I mirrored the databases (or not).

    Since Adam's script contains several batches, I decided CONTEXT_INFO would have to be used. As it turns out, I only needed to grab the start time at the beginning; I could get the rest of the data at the end of the process. The code is pretty small:

        declare @time binary(128)=cast(getdate() as binary(8))
        set context_info @time

        ... rest of Big Adventure code ...

        go
        use master;
        insert mirror_test(server,role,partner,db,state,safety,start,duration)
        select @@servername,
               mirroring_role_desc,
               mirroring_partner_instance,
               db_name(database_id),
               mirroring_state_desc,
               mirroring_safety_level_desc,
               cast(cast(context_info() as binary(8)) as datetime),
               datediff(s,cast(cast(context_info() as binary(8)) as datetime),getdate())
        from sys.database_mirroring
        where db_name(database_id) like 'Adv%';

    I declared @time as binary(128) since CONTEXT_INFO is defined that way. I couldn't convert GETDATE() to binary(128) as it would pad the first 120 bytes as 0x00. To keep the CAST functions simple and avoid using SUBSTRING, I decided to CAST GETDATE() as binary(8) and let SQL Server do the implicit conversion. It's not the safest way perhaps, but it works on my machine. :)

    As I mentioned earlier, you can query system views for sessions and get their CONTEXT_INFO. With a little boilerplate code this can be used to monitor long-running procedures, in case you need to kill a process, or are just curious how long certain parts take. In this example, I added code to Adam's Big Adventure script to set CONTEXT_INFO messages at strategic places I want to monitor. (His code is in UPPERCASE as it was in the original, mine is all lowercase):

        declare @msg binary(128)
        set @msg=cast('Altering bigProduct.ProductID' as binary(128))
        set context_info @msg
        go
        ALTER TABLE bigProduct ALTER COLUMN ProductID INT NOT NULL
        GO
        set context_info 0x0
        go
        declare @msg1 binary(128)
        set @msg1=cast('Adding pk_bigProduct Constraint' as binary(128))
        set context_info @msg1
        go
        ALTER TABLE bigProduct ADD CONSTRAINT pk_bigProduct PRIMARY KEY (ProductID)
        GO
        set context_info 0x0
        go
        declare @msg2 binary(128)
        set @msg2=cast('Altering bigTransactionHistory.TransactionID' as binary(128))
        set context_info @msg2
        go
        ALTER TABLE bigTransactionHistory ALTER COLUMN TransactionID INT NOT NULL
        GO
        set context_info 0x0
        go
        declare @msg3 binary(128)
        set @msg3=cast('Adding pk_bigTransactionHistory Constraint' as binary(128))
        set context_info @msg3
        go
        ALTER TABLE bigTransactionHistory ADD CONSTRAINT pk_bigTransactionHistory PRIMARY KEY NONCLUSTERED(TransactionID)
        GO
        set context_info 0x0
        go
        declare @msg4 binary(128)
        set @msg4=cast('Creating IX_ProductId_TransactionDate Index' as binary(128))
        set context_info @msg4
        go
        CREATE NONCLUSTERED INDEX IX_ProductId_TransactionDate ON bigTransactionHistory(ProductId,TransactionDate) INCLUDE(Quantity,ActualCost)
        GO
        set context_info 0x0

    This doesn't include the entire script, only those portions that altered a table or created an index. One annoyance is that SET CONTEXT_INFO requires a literal or variable; you can't use an expression. And since GO starts a new batch, I need to declare a variable in each one. And of course I have to use CAST because it won't implicitly convert varchar to binary. And even though context_info is a nullable column, you can't SET CONTEXT_INFO NULL, so I have to use SET CONTEXT_INFO 0x0 to clear the message after the statement completes. And if you're thinking of turning this into a UDF, you can't, although a stored procedure would work.

    So what does all this aggravation get you? As the code runs, if I want to see which stage the session is at, I can run the following (assuming SPID 51 is the one I want):

        select CAST(context_info as varchar(128))
        from sys.dm_exec_sessions
        where session_id=51

    Since SQL Server 2005 introduced the new system and dynamic management views (DMVs), there's not as much need for tagging a session with these kinds of messages. You can get the session start time and currently executing statement from them, neatly presented if you use Adam's sp_whoisactive utility (and you absolutely should be using it). Of course you can always use xp_cmdshell, a CLR function, or some other tricks to log information outside of a SQL transaction. All the same, I've used this trick to monitor long-running reports at a previous job, and I still think CONTEXT_INFO is a great feature, especially if you're still using SQL Server 2000 or want to supplement your instrumentation.

    If you'd like an exercise, consider adding the system time to the messages in the last example, and an automated job to query and parse it from the system tables. That would let you track how long each statement ran without having to run Profiler. A sketch of that variation follows. #TSQL2sDay
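    For the curious, a minimal sketch of that exercise, staying within the 128-byte limit by packing an 8-byte datetime in front of a 120-byte message (the session id and message text are illustrative):

        -- tag the session with the current time plus a message, in one value
        declare @msg binary(128)
        set @msg = cast(getdate() as binary(8)) +
                   cast('Altering bigProduct.ProductID' as binary(120))
        set context_info @msg

        -- from another session: split the tag back apart
        -- (trailing 0x00 bytes in the message show up as padding)
        select cast(substring(context_info,1,8) as datetime)       as stage_started,
               cast(substring(context_info,9,120) as varchar(120)) as stage_message
        from sys.dm_exec_sessions
        where session_id = 51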

    Read the article
