Search Results

Search found 7338 results on 294 pages for 'useful'.

Page 144/294 | < Previous Page | 140 141 142 143 144 145 146 147 148 149 150 151  | Next Page >

  • Groovy Debugging

    - by Vijay Allen Raj
    Groovy Debugging - An Overview: ADF BC developers may express snippets of business logic as embedded Groovy expressions, for example:

      - default / calculated attribute values
      - validation rules / conditions
      - error message tokens
      - LOV input values (VO)

    This approach has the following advantages:

      - Groovy has a compact, EL-like syntax for expressing simple logic
      - ADF has extended this syntax to provide useful built-ins
      - embedded Groovy expressions are customizable
      - Groovy debugging support helps improve maintainability of business logic expressed in Groovy

    The following example shows how Groovy debugging works. It shows how a script expression validator can be created and the Groovy script debugged, demonstrating breakpoint and step-over functionality as well as syntax coloring. Let us create an ADF BC application based on the Emp and Dept tables, and add a script expression validator based on this script:

        if (Sal >= 5000) {
            // If Sal is greater than a property value set on the custom
            // properties on the root AM, raise a custom exception;
            // otherwise raise a custom warning.
            if (Sal >= source.DBTransaction.rootApplicationModule.propertiesMap.salHigh) {
                adf.error.raise("ExcGreaterThanApplicationLimit");
            } else {
                adf.error.warn("WarnGreaterThan5000");
            }
        } else if (Sal <= 1000) {
            adf.error.raise("ExcTooLow");
        }
        return true;

    In the Emp.xml flat editor, place breakpoints at various locations. Right-click the application module and click Debug. Enter a value greater than 5000 and click Next, and you can watch the debugger hit the breakpoints. The code can also be stepped over and debugged.

    Read the article

  • The Exceptional EXCEPT clause

    - by steveh99999
    Ok, I exaggerate, but it can be useful… I came across some ‘poorly-written’ stored procedures on a SQL Server recently that were using sp_xml_preparedocument. Unfortunately these procs were not properly removing the memory allocated to XML structures – i.e. they were not subsequently calling sp_xml_removedocument… I needed a quick way of identifying how many stored procedures on the server this affected. Here’s what I used:

        EXEC sp_msforeachdb 'USE ?
        SELECT DB_NAME(), OBJECT_NAME(s1.id)
        FROM syscomments s1
        WHERE [text] LIKE ''%sp_xml_preparedocument%''
        EXCEPT
        SELECT DB_NAME(), OBJECT_NAME(s2.id)
        FROM syscomments s2
        WHERE [text] LIKE ''%sp_xml_removedocument%'''

    There are three nice features in the code above:

      1. It uses sp_msforeachdb. There’s a nice blog on this statement here.
      2. It uses the EXCEPT clause. So in the above query I get all the procedures which include the sp_xml_preparedocument string, but by using the EXCEPT clause I remove all the procedures which contain sp_xml_removedocument. Read more about EXCEPT here.
      3. It can be used to quickly identify incorrect usage of sp_xml_preparedocument. Read more about this here.

    The above query isn’t perfect – I’m not properly parsing the SQL text to ignore comments, for example – but for the quick analysis I needed to perform, it was just the job…
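
    (An aside not from the original post: on SQL Server 2005 and later, the same EXCEPT trick can be run against sys.sql_modules instead of the deprecated syscomments view. sys.sql_modules holds each module’s full definition in a single row, avoiding syscomments splitting long procedure text into 4000-character chunks, where a search string could straddle a chunk boundary. A minimal sketch:)

        EXEC sp_msforeachdb 'USE ?
        -- one row per module, full definition in a single column
        SELECT DB_NAME(), OBJECT_NAME(m1.object_id)
        FROM sys.sql_modules m1
        WHERE m1.definition LIKE ''%sp_xml_preparedocument%''
        EXCEPT
        SELECT DB_NAME(), OBJECT_NAME(m2.object_id)
        FROM sys.sql_modules m2
        WHERE m2.definition LIKE ''%sp_xml_removedocument%'''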

    Read the article

  • Open World Session - BPM, SOA and ADF Combined:Patterns learned from Fusion Applications

    - by mesriniv
    Blog by Meera Srinivasan (Oracle Product Management). This afternoon (10/2/2012), Mohan Kamath and I (Meera Srinivasan) delivered an Open World session on how Oracle Fusion Applications (the next generation business applications from Oracle) use the Oracle BPM, Oracle SOA and Oracle ADF products. These adoption patterns can be applied in a generic manner to produce process-centric, user-centric, highly customizable and extensible next generation applications. The session was well attended and we had lively discussions with the attendees during Q & A. We started with why, as an application developer, you should look at BPM for creating a process-centric application, and presented the following Fusion adoption patterns:

      - Model driven agile development
      - Customization and extension
      - Guided process interactions
      - Personalization and customization of end user interfaces
      - Approval flows

    The Fusion HCM On Boarding Process - Activity Guide interface was used as an example for the Guided Process Interactions adoption pattern, and the Fusion CRM BPM process templates for the Customization adoption pattern. In the Personalization and Customization of End User Interfaces section, we looked at how ADF is used within Oracle BPM and the various options available to customize end user interfaces. We also presented how Oracle Procurement does complex approvals using Rules and Approval Management Extensions. We hope you found the session useful, and please do try to attend Heidi’s session on dynamic case management: Case Management Patterns with Oracle Unified Business Process Management Suite. Marriott Marquis - Salon 7, Thu 11:15 AM - 12:15 PM.

    Read the article

  • Is my JS/jQuery methodology good?

    - by absentx
    I always struggle with which of the Stack sites is best to post "questions of theory" like this, but I think Programmers is the best; if not, as usual, a mod will move it etc... I am seeking critique on what has become my normal methodology of writing JavaScript. I have become heavily reliant on the jQuery library, but I think this has helped me learn the native language better also. Anyway, please critique the following style of JS coding... buried in it are a lot of questions of scope; if you could point out the strengths and weaknesses of this style I would appreciate it.

        var critique = {
            start: function(){
                globalness = 'GLOBAL-GLOBAL'; // available to all critique's methods
                var notglobalness = 'LOCAL-LOCAL'; // only available to critique's start method
                // am I using the "method" terminology properly here??

                $('#stuff').on('click', 'a.closer-target', function(){
                    $target = $(this);
                    if ($target.hasClass('active')) {
                        $target.removeClass('active');
                    } else {
                        $target.addClass('active');
                        critique.madness($target);
                    }
                });

                console.log(notglobalness + ': at least I am useful at home');
                console.log('note here that: ' + notglobalness + ' is no longer available after this point, lets continue on:');
                critique.madness(notglobalness);
            },
            madness: function($e){
                // do a bunch of awesomeness with $e,
                // but continue to keep it separate because you think it's best to keep things isolated.
                // Send to the next function when complete here.
                console.log('here is globalness, which is still available from the start method of critique!! ' + globalness);
                console.log('lets see if the globalness carries on to a new var object!!');
                console.log('the locally isolated variable of NOTGLOBALNESS is available here because it was passed to this method, lets show it: ' + $e);
                carryOn.start();
            }
        }; // end critique

        var carryOn = {
            start: function(){
                console.log('any chance critique.globalness will work here??? lets see: ' + globalness);
                console.log('it absolutely does');
            }
        };

        $(document).ready(critique.start);

    Read the article

  • best way to enlarge system partition

    - by yuvi
    I have a problem - I need to enlarge my system partition. I mean - when I initially installed Ubuntu, I split the disk so I have 15GB for the system and the rest (around 400GB) pointed at /home/. This is very useful if anything goes wrong someday and I want to format and completely re-install Ubuntu without losing any of my actual data. The problem is, 15GB isn't enough, it seems. I already moved the /var/ and /opt/ folders to /home/, adding symlinks at root, but I'm still at 86% usage and I'm having performance issues (mostly when booting or running a VM). I can boot Ubuntu from a flash drive and enlarge the partition externally, but I'm really wary of going forward with that plan. Also, despite what I said before, I'd like to avoid re-installing the system if at all possible. Any advice, suggestions or ideas on how to best approach this? Any warnings I should heed? Thanks in advance!

    Update: Here's the gparted screenshot - as you can see, there's Windows on dual boot (sda1-5 are all related to the Windows system), then I have a Linux swap, 14GB (so uh... not even 15) of system and 435GB for /home.
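
    (Before resizing, it is worth confirming what is actually consuming the space; the commands below are a generic sketch, not from the original question:)

        df -h /                    # overall usage of the system partition
        sudo du -xsh /usr /var /opt /tmp | sort -h
                                   # largest top-level consumers; -x stays on one
                                   # filesystem, so /home is not counted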

    Read the article

  • Virtually the fastest way to try Solaris 11 (and Solaris 10 zones)

    - by dminer
    If you're looking to try out Solaris 11, there are the standard ISO and USB image downloads on the main page.  Those are great if you're looking to install Solaris 11 on hardware, and we hope you will.  But if you take the time to look down the page, you'll find a link off to the Oracle Solaris 11 Virtual Machine downloads.  There are two downloads there:

      - A pre-built Solaris 10 zone
      - A pre-built Solaris 11 VM for use with VirtualBox

    If you're looking to try Solaris 11 on x86, the second one is what you want.  Of course, this assumes you have VirtualBox already (and if you don't, now's the time to try it - it's a terrific free desktop virtualization product).  Once you complete the 1.8 GB download, it's a simple matter of unzipping the archive and a few quick clicks in VirtualBox to get a Solaris 11 desktop booted.  While it's booting, you'll get to run through the new system configuration tool (that'll be the subject of a future posting here) to configure networking, a user account, and so on.

    So what about that pre-built Solaris 10 zone download?  It's a really simple way to get yourself acquainted with the Solaris 10 zones feature, which you may well find indispensable in transitioning an existing Solaris 10 infrastructure to Solaris 11.  Once you've downloaded the file, it's a self-extracting executable that'll configure the zone for you; all you have to supply is an IP address for the zone.  It's really quite slick!  I expect we'll do a lot more pre-built VMs and zones going forward, as that's a big part of being a cloud OS; if there's one that would be really useful for you, let us know.
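
    (A hedged aside, not from the original post: once the self-extracting zone installer finishes, the standard zone tooling can be used to confirm the result - the zone name below is a placeholder:)

        zoneadm list -cv      # show all configured/installed/running zones
        zlogin myzone10       # log in to the new Solaris 10 zone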

    Read the article

  • Solaris 11 Customer Maintenance Lifecycle

    - by user12244672
    Hi Folks, Welcome to my new blog, http://blogs.oracle.com/Solaris11Life, which is all about the Customer Maintenance Lifecycle for Image Packaging System (IPS) based Solaris releases, such as Solaris 11. It'll include policies, best practices, clarifications, and lots of other stuff which I hope you'll find useful as you get up to speed with Solaris 11 and IPS. Let's start with a version of my Solaris 11 Customer Maintenance Lifecycle presentation, which I gave at this year's Oracle Open World and at the recent Deutsche Oracle Anwendergruppe (DOAG - German Oracle Users Group) conference in Nürnberg.

    Some of you may be familiar with my Patch Corner blog, http://blogs.oracle.com/patch, which fulfilled a similar purpose for System V [five] Release 4 (SVR4) based Solaris releases, such as Solaris 10 and below. Since maintaining a Solaris 11 system is quite different to maintaining a Solaris 10 system, I thought it prudent to start this second, parallel blog for Solaris 11.

    Actually, I have an ulterior motive for starting this separate blog. Since IPS is a single-tier packaging architecture, it doesn't have any patches, only package updates. I've therefore banned the word "patch" in Solaris 11 and introduced a swear box to which my colleagues must contribute a quarter [$0.25] every time they use the word "patch" in a public forum. From their Oracle Open World presentations, John Fowler owes 50 cents, Liane Preza owes $1.25, and Bart Smaalders owes 75 cents. Since I'm stinging my colleagues in what could be a lucrative enterprise, I couldn't very well discuss IPS best practices on a blog called "Patch Corner" with a URI of http://blogs.oracle.com/patch. I simply couldn't afford all those contributions to the "patch" swear box. :)

    Feel free to let me know what topics you'd like covered - just post a comment in the comment box on the blog. Best Wishes, Gerry.
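
    (A hedged illustration, not from the original post: in IPS, the equivalent of "patching" a Solaris 11 system is a package update, e.g.:)

        pkg list -u     # list installed packages that have newer versions available
        pkg update -nv  # dry run: show what an update would do, without doing it
        pkg update      # apply all available package updates

    (No quarter owed to the swear box for any of the above.)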

    Read the article

  • Syncing sharepoint 2010 with outlook

    - by uruit
    Technorati Tags: SharePoint 2010, Outlook

    SharePoint offers the ability to connect to content in a Document Library directly from Outlook, edit the documents offline, and then sync when the connection is restored. This is very useful if we are working at home and want to access a shared document (e.g. VPN connection settings) or continue working directly on a file. Steps to configure the connection:

      1. Browse online to the SharePoint Document Library you want to connect and click on "Connect to Outlook".
      2. Click Allow to confirm.
      3. In Outlook you will see the documents as email items, with the ability to preview them. When a document is updated, Outlook will notify you that you have unread items. If you want to edit a file, the corresponding Office tool (Word, Excel, PowerPoint) will ask if you want to update the server after saving a change; it is really straightforward.
      4. Finally, I recommend adding the IP address of your SharePoint server to the secure sites, in order to prevent Outlook asking for your Windows credentials every time you open Outlook.

    Outlook is a great tool, letting you work in a really integrated way - don't miss this amazing feature. This feature is also available in SharePoint Online :)

    Post by: Marcelo Martinez, UruIT (www.uruit.com/sharepoint_outsourcing.html) - Leaders in Nearshore Outsourcing from South America

    Read the article

  • GitHub: Are there external tools for managing issues list vs. project backlog

    - by DXM
    Recently I posted one of my projects[1] on GitHub and as I was exploring the capabilities of the site, I noticed they have a rather decent issue tracking section. I want to use that section as a) other people can report bugs if they'd like and b) other people can see which bugs I'm aware of. However, as others have noted, the issues list cannot be prioritized in order to create a project backlog. For now my backlog has been a text file, but I'd like to have it integrated so the same information isn't maintained in different places. Having a fully ordered list, which is something we also practice at work, has been very useful, as I can open one file, start with line 1 and fire off 2 or 3 items in one sitting without having to go back to a full issues/stories bucket. GitHub doesn't offer this. What GitHub does offer is a very nice and clean API, so issues can easily be exported into anything else. I've searched to see if there are other websites (like Trello) that integrate with GitHub issues, but did not find anything. Does anyone know of such a product, service or offline tool? Those of you that use GitHub, what is your experience in managing a backlog? I kinda hate the idea of manually managing two disconnected lists like some people seem to be doing with Wiki project pages.

    [1] Are shameless plugs allowed on this site? I searched but didn't find a definite answer. If it's bad practice, STOP and don't read further. As a developer I got sick and tired of navigating to the same set of folders 30 times a day, so I wrote a little, auto-collapsible utility that gets stuck to the desktop and allows easy access to the folders you constantly use.
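
    (A minimal sketch of the API mentioned above - OWNER and REPO are placeholders; the GitHub v3 REST API returns a repository's issues as JSON, which makes exporting to another tool straightforward:)

        # list open issues for a repository as JSON
        curl "https://api.github.com/repos/OWNER/REPO/issues?state=open"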

    Read the article

  • SQL SERVER – Get Schema Name from Object ID using OBJECT_SCHEMA_NAME

    - by pinaldave
    Sometimes a simple solution has an even simpler alternative, but we often do not practice it because we do not see value in it or find it useful. Well, today's blog post is also about something which I have not seen practiced much in code. We are so comfortable with the alternative usage that we do not feel like switching how we query the data. I was going over the forums and I noticed that in one place a user had used the following code to get the schema name from an Object ID:

        USE AdventureWorks2012
        GO
        SELECT s.name AS SchemaName,
               t.name AS TableName,
               s.schema_id,
               t.OBJECT_ID
        FROM sys.tables t
        INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
        WHERE t.name = OBJECT_NAME(46623209)
        GO

    Before I continue, let me say I do not see anything wrong with this script. It is just fine and one of the ways to get the schema name from an Object ID. However, I have been using the function OBJECT_SCHEMA_NAME to get the schema name. If I had to write the same code from the beginning, I would have written it as follows:

        SELECT OBJECT_SCHEMA_NAME(46623209) AS SchemaName,
               t.name AS TableName,
               t.schema_id,
               t.OBJECT_ID
        FROM sys.tables t
        WHERE t.name = OBJECT_NAME(46623209)
        GO

    Both of the above queries give you exactly the same result. If you remove the WHERE condition, they will give you information about all the tables in the database. Now the question is which one is better - honestly, it is not about one being better than the other. Use the one which you prefer to use. I prefer the second one as it requires less typing. Let me ask the same question of you - which method do you use to get the schema name, and why?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Customizing the NUnit GUI for data-driven testing

    - by rwong
    My test project consists of a set of input data files which are fed into a piece of legacy third-party software. Since the input data files for this software are difficult to construct (not something that can be done intentionally), I am not going to add new input data files. Each input data file will be subject to a set of "test functions". Some of the test functions can be invoked independently. Other test functions represent the stages of a sequential operation - if an earlier stage fails, the subsequent stages do not need to be executed. I have experimented with the NUnit parametrized test cases (TestCaseAttribute and TestCaseSourceAttribute), passing in the list of data files as test cases. I am generally satisfied with the ability to select the input data for testing. However, I would like to see if it is possible to customize the GUI's tree structure, so that the "test functions" become the children of the "input data". For example:

        File #1
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
            StageOneTest
            StageTwoTest
            StageThreeTest
        File #2
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
            StageOneTest
            StageTwoTest
            StageThreeTest

    This will be useful for identifying the stage that failed during the processing of a particular input file. Are there any tips and tricks that will enable the new tree layout? Do I need to customize NUnit to get this layout?
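
    (For reference, a minimal sketch of the parametrized arrangement described above - the folder, file pattern and test name are invented for illustration:)

        using System.Collections.Generic;
        using System.IO;
        using NUnit.Framework;

        [TestFixture]
        public class LegacyFileTests
        {
            // each file in the test-data folder becomes one test case in the GUI tree
            static IEnumerable<string> InputFiles()
            {
                return Directory.GetFiles("TestData", "*.dat");
            }

            [TestCaseSource("InputFiles")]
            public void CheckFileTypeTest(string inputFile)
            {
                Assert.That(File.Exists(inputFile));
                // ... invoke the legacy software on inputFile and assert on the result
            }
        }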

    Read the article

  • Do programmers need a union? [closed]

    - by James A. Rosen
    In light of the acrid responses to the intellectual property clause discussed in my previous question, I have to ask: why don't we have a programmers' union? There are many issues we face as employees, and we have very little ability to organize and negotiate. Could we band together with the writers', directors', or musicians' guilds, or are our needs unique? Has anyone ever tried to start one? If so, why did it fail? (Or, alternatively, why have I never heard of it, despite its success?)

    Later: Keith has my idea basically right. I would also imagine the union being involved in many other topics, including:

      - legal liability for others' use/misuse of our work, especially unintended uses
      - evaluating the quality of computer science and software engineering higher education programs -- unlike many other engineering disciplines, we are not required to be certified on receiving our Bachelor's degrees
      - evangelism and outreach -- especially to elementary school students
      - certification -- not doing it, but working with companies like ISC(2) and others to make certifications meaningful and useful
      - continuing education -- similar to the previous item
      - conferences -- maintain a go-to list of organizers and other resources our members can use

    I would see it less as a traditional trade union, with little emphasis on:

      - pay -- we tend to command fairly good salaries
      - outsourcing and free trade -- most of us tend to be pretty free-market oriented
      - working conditions -- we're the only industry where Aeron chairs are considered anything like "standard"

    Read the article

  • Best practices for using namespaces in C++.

    - by Dima
    I read Uncle Bob's Clean Code a few months ago, and it has had a profound impact on the way I write code. Even if it seemed like he was repeating things that every programmer should know, putting them all together and putting them into practice does result in much cleaner code. In particular, I found breaking up large functions into many tiny functions, and breaking up large classes into many tiny classes, to be incredibly useful. Now for the question. The book's examples are all in Java, while I have been working in C++ for the past several years. How would the ideas in Clean Code extend to the use of namespaces, which do not exist in Java? (Yes, I know about Java packages, but it is not really the same.) Does it make sense to apply the idea of creating many tiny entities, each with a clearly defined responsibility, to namespaces? Should a small group of related classes always be wrapped in a namespace? Is this the way to manage the complexity of having lots of tiny classes, or would the cost of managing lots of namespaces be prohibitive?
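
    (To make the question concrete, a small sketch of the convention being asked about - all names invented for illustration:)

        // billing/invoices.h
        namespace billing {
        namespace invoices {   // a small group of related, tiny classes

        class InvoiceFormatter { /* ... */ };
        class InvoiceValidator { /* ... */ };
        class InvoicePrinter   { /* ... */ };

        } // namespace invoices
        } // namespace billing

        // a call site can alias the namespace to keep names short:
        // namespace inv = billing::invoices;
        // inv::InvoiceValidator validator;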

    Read the article

  • Import SSIS Project in Denali CTP1

    For years Analysis Services has had the ability to take an existing database from a server and reverse engineer it into a BIDS project.  This is extremely useful when all you have is the running instance of the database, and the project that created it has long since disappeared.  Reverse engineering has never been a feature of SSIS - until now.  Let me walk you through the simple steps.

    The first step is that you obviously have to have a project deployed to an SSIS Catalog.  I will do a video on this soon, but in case you can't wait, my good buddy Jamie Thomson has written it up here.  As you can see, I have a project called, imaginatively, "Denali1" with one package, "Package.dtsx".

    The next thing we need to do is fire up BIDS and choose the right project type (Integration Services Import Project).  Now we just follow the wizard.  We make sure we specify on which server to find the Catalog and in which folder to look for the project.  Next the settings are validated and we are greeted with the familiar review screen before the creation of our new project from the deployed project happens.  Hit Import and away we go.  The result is just what we wanted.

    Read the article

  • Do I need to uninstall lxde before installing kde-standard?

    - by A Roy
    I have Ubuntu 12.04 (upgraded from 10.04) and since I disliked the default desktop, I installed LXDE (sudo apt-get install lxde). This was good, except that occasionally there would be trouble with Firefox (blinking on the panel), so that finally I had to close it, and then an error message from Ubuntu was issued. I had asked about it before but there was no useful response, so now I want to move to another desktop which will hopefully not create the problem I have now. My doubt is: should I first uninstall LXDE and then install KDE (sudo apt-get install kde-standard), or is it enough to install KDE without uninstalling LXDE? In case it is necessary to uninstall, should I use the command sudo apt-get remove lxde, or is there a better command for it? You may also help me with the choice of desktop. I installed LXDE since it is simple and lightweight. I am assuming that KDE will not be as simple, but hopefully it will not create problems like the above. But I hate it if it takes too long to log in or to launch a program like Firefox, and also there should not be icons fixed on the left part of the screen (I hate to keep icons on the desktop since they are distracting). Some of these issues were present with default Ubuntu 12.04. So is my choice of kde-standard appropriate, or are there better desktop alternatives?
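
    (A hedged sketch of the usual approach - desktop environments can coexist, so no removal is needed up front; the exact session name at the login screen may vary by release:)

        sudo apt-get install kde-standard   # installs KDE alongside LXDE
        # then pick the KDE session from the session menu at the login screen

        # later, if LXDE should go completely:
        sudo apt-get remove lxde
        sudo apt-get autoremove             # clear out now-unused dependencies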

    Read the article

  • ArchBeat Link-o-Rama for 10-18-2012

    - by Bob Rhubart
    WebLogic Server 11gR1 Interactive Quick Reference | WebLogic Partner Community EMEA
    "The WebLogic Server 11gR1 Administration interactive quick reference," explains Juergen Kress, "is a multimedia tool for various terms and concepts used in WebLogic Server architecture. This tool is available for administrators for online or offline use. This is built as a multimedia web page which provides descriptions of WebLogic Server architectural components, and references to relevant documentation. This tool offers valuable reference information for any complex concept or product in an intuitive and useful manner."

    Oracle ACE Directors Nordic Tour 2012: Venues and BI Presentations | Mark Rittman
    Oracle ACE Director Mark Rittman shares information on the Oracle ACE Director Tour, as the community leaders make their way through the land of the midnight sun, with events in Copenhagen, Stockholm, Oslo and Helsinki.

    The yearly AMIS Review from Oracle Open World and JavaOne - slides available | Lucas Jellema
    Oracle ACE Director Lucas Jellema presents the complete collection of presentations from the latest edition of AMIS Technology's annual review of "news, trends, announcements, special finds and interesting rumors" from Oracle OpenWorld and JavaOne.

    Fujitsu: Cloud Building with Oracle VM and Oracle Enterprise Manager 12c
    In this video, Oracle ACE Director Debra Lilley from Fujitsu discusses Cloud Services delivery using Oracle VM 3 and Oracle Enterprise Manager 12c.

    Webcast: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter - October 30
    Learn how ResCare solves content lifecycle challenges with Oracle WebCenter. Speakers: Joe Lichtefeld, VP of Application Services & PMO, ResCare; Wayne Boerger, Product Manager, TEAM Informatics; Doug Thompson, EVP Global Development, TEAM Informatics. Date: Tuesday, October 30, 2012. Time: 10:00 a.m. PT / 1:00 p.m. ET

    Thought for the Day
    "There is only one thing more painful than learning from experience and that is not learning from experience." - Archibald MacLeish (May 7, 1892 - April 20, 1982) Source: softwarequotes.com

    Read the article

  • Alienware M17x R3: Possible downclock

    - by Ywen
    I recently installed Kubuntu 11.10 32-bit (I had graphics driver issues and wanted to try the 32-bit version) on my new Alienware M17x, with a Core i7-2670QM CPU. The cores are supposed to be clocked at 2.2 GHz; however, the output of

        $ cat /proc/cpuinfo | grep -i "hz"

    gives me:

        model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
        cpu MHz    : 800.000
        model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
        cpu MHz    : 800.000
        (...the same pair repeated for each of the eight logical cores...)

    If useful: the AC adapter is plugged in (yet the output is the same when the computer is powered only by the battery) and I have Firefox and Eclipse running. Does /proc/cpuinfo reflect a possible automatic downclock made to save power when processor load is low, or is this output abnormal?

    EDIT: Ok, I checked, and yes, the output does vary as a function of load. I reach 2.2 GHz when needed. But my underlying problem remains. I was checking my CPU clocking because I experienced poor performance when playing 720p video files on Ubuntu with VLC or mplayer when on battery (and I believe VLC by default only uses the CPU, not the GPU, to decode), whereas I haven't had such problems with VLC on Windows - which made me think it wasn't coming from a BIOS option; plus, every option in the BIOS regarding the CPU is turned ON.
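
    (A hedged pointer, not from the original question: on most Linux kernels, the cpufreq sysfs interface shows whether an on-demand governor is simply idling the cores at their minimum frequency:)

        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # e.g. "ondemand"
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq   # frequencies in kHz
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq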

    Read the article

  • SSIS Denali as part of “Enterprise Information Management”

    - by jorg
    When watching the SQL PASS session "What's Coming Next in SSIS?" by Steve Swartz, the Group Program Manager for the SSIS team, an interesting question came up: why is SSIS thought of as BI, when we use it so frequently for other sorts of data problems? The answer from Steve was that he breaks the world of data work into three parts:

      - Processing of inputs
      - BI
      - Enterprise Information Management: all the work you have to do when you have a lot of data to make it useful and clean and get it to the right place. This covers master data management, data quality work, data integration, and lineage analysis to keep track of where the data came from.

    Next, Steve said Microsoft is developing SSIS as part of a large push in all of these areas in the next release of SQL. So SSIS will be, next to a BI tool, part of Enterprise Information Management in the next release of SQL Server. I'm interested in the different ways people use SSIS; I've basically used it for ETL, data migrations and processing inputs. In which ways did you use SSIS?

    Read the article

  • How does one network at software conferences?

    - by Billy ONeal
    Well... I'm still at Microsoft TechEd -- and the response to my last question was overwhelmingly "networking is the most useful part of software conferences". Problem: I have no idea how to even approach that task. I've always been kind of an introvert. At school and at work I've generally not had issues because there are enough extroverts around that approach me that I've made some awesome friends over the years. However, at conferences, it seems most are introverted like myself, and those who aren't seem to be salespeople. The couple of times I've felt okay approaching people it's been after a session where there's been healthy discussion throughout the whole room, and just when I get the nerve to go up and talk to some people, they leave and go on to other things. Are there books I can read? Advice I can take? Anything as far as approaching people one does not know? 'Cause every time I try I just feel like an awkward mess. :( (Oddly enough, I don't have problems speaking to a group of people -- it's the one-on-one things that trip me up :P) (Oh, and by the way, if anyone from here is also there and would like to meet to talk about things, I'm game :P)

    Read the article

  • Be There: Tinkerforge/NetBeans Platform Integration Course

    - by Geertjan
    Tinkerforge is an electronic construction kit. It exposes a number of API bindings, including, of course, Java. The nice thing also is that Tinkerforge products are open source, at both the hardware and software levels, so that you can take their bases as a starting point for your own modifications. "The TinkerForge system is a set of pre-built electronics boards that are built in such a way that you can stack the boards (known as bricks), attach accessories (known as bricklets), and have your prototype up and running quickly. Unlike systems such as the Arduino or Launchpad, the TinkerForge has to be attached to a computer and the computer does all of the work. With an easy set of application programming interfaces (APIs) available in C/C++, C#, Java, PHP, and Ruby, the system is easy to interface and program over USB in a snap." (from this useful article)

    Henning Krüp, who has arranged several NetBeans Platform Certified Training Courses in the past, in the Nordhorn/Lingen area in Germany, had the inspired idea to focus the next course on integration with Tinkerforge. In other words, the whole course will be focused on creating a standalone Java desktop application that leverages the NetBeans Platform to interact with Tinkerforge! Interested in joining the course or setting up something similar yourself? The course organized by Henning will be held from 19 to 21 September, as explained here, together with contact details. If you'd like to organize a similar course at a location of your choosing, leave a comment at the end of this blog entry and we'll set something up together!
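
    (To give a flavor of the Java bindings - a minimal sketch assuming the 2.x Tinkerforge API; the host, port and bricklet UID are placeholders:)

        import com.tinkerforge.IPConnection;
        import com.tinkerforge.BrickletTemperature;

        public class TinkerforgeHello {
            public static void main(String[] args) throws Exception {
                IPConnection ipcon = new IPConnection();
                ipcon.connect("localhost", 4223);  // 4223 is brickd's default port
                BrickletTemperature temp =
                        new BrickletTemperature("ABC", ipcon);  // "ABC" = placeholder UID
                // getTemperature() returns hundredths of a degree Celsius
                System.out.println(temp.getTemperature() / 100.0 + " degrees C");
                ipcon.disconnect();
            }
        }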

    Read the article

  • Working with multiple interfaces on a single mock.

    - by mehfuzh
    Today I will cover a very simple topic, which can be useful in cases where we want to mock different interfaces on our expected mock object. Our target interface is simple and looks like this:

        public interface IFoo : IDisposable
        {
            void Do();
        }

    Now, as we can see, our target interface implements IDisposable; in normal cases, if we implement it in a class, the language rules require us to implement IDisposable as well [no doubt about it]. However complex the case, we want to ensure that, rather than needing an extra call (..As()) or other constructs to prepare the mock for us, we can do it in the simplest way possible. Therefore, keeping that in mind, first we create a mock of IFoo:

        var foo = Mock.Create<IFoo>();

    Then, as we are interested in IDisposable, we simply do:

        var iDisposable = foo as IDisposable;

    Finally, we proceed with our existing mock code. Considering the current context, I will check whether the Dispose method has invoked our mock code successfully:

        bool called = false;

        Mock.Arrange(() => iDisposable.Dispose()).DoInstead(() => called = true);

        iDisposable.Dispose();

        Assert.True(called);

    Further, we assert our expectation as follows:

        Mock.Assert(() => iDisposable.Dispose(), Occurs.Once());

    Hopefully that will help a bit; stay tuned. Enjoy!!

    Read the article

  • Finding a way to simplify complex queries on legacy application

    - by glenatron
    I am working with an existing application built on Rails 3.1/MySQL, with much of the work taking place in a JavaScript interface, although the actual platforms are not tremendously relevant here, except in that they give context. The application is powerful, handles a reasonable amount of data and works well. As the number of customers using it and the complexity of the projects they create increase, however, we are starting to run into a few performance problems. As far as I can tell, the source of these problems is that the data represents a tree and it is very hard for ActiveRecord to deterministically know what data it should be retrieving. My model has many relationships like this:

        Project has_many Nodes
                has_many GlobalConditions
        Node    has_one  Parent
                has_many Nodes
                has_many WeightingFactors through NodeFactors
                has_many Tags through NodeTags
        GlobalCondition has_many Nodes (referenced by Id, rather than replicating the tree)
        WeightingFactor has_many Nodes through NodeFactors
        Tag             has_many Nodes through NodeTags

    The whole system has something in the region of thirty types which optionally hang off one or many nodes in the tree. My question is: what can I do to retrieve and construct this data faster? Having worked a lot with .NET, if I were in a similar situation there I would look at building up a stored procedure to pull everything out of the database in one go, but I would prefer to keep my logic in the application, and from what I can tell it would be hard to take the queried data and build ActiveRecord objects from it without losing their integrity, which would cause more problems than it solves.

    It has also occurred to me that I could bunch the data up and send some of it across asynchronously, which would not improve performance but would improve the user perception of performance. However, if sections of the data appeared after page load, that could also be quite confusing. I am wondering whether it would be a useful strategy to make everything aware of its parent project, so that one could pull all the records for that project and then build up the relationships later, but given the ubiquity of complex trees in day-to-day programming life I wouldn't be surprised if there were some better design patterns or standard approaches to this type of situation that I am not well versed in.
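
    (One standard ActiveRecord approach worth noting here is eager loading the associations up front rather than walking the tree lazily - a sketch only, assuming the association names follow the models listed above:)

        # a handful of up-front queries instead of one query per node
        project = Project.includes(
          :global_conditions,
          nodes: [:weighting_factors, :tags]
        ).find(project_id)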

    Read the article

  • EBS Workflow Overview & Best Practices - US

    - by Annemarie Provisero
    ADVISOR WEBCAST: EBS Workflow Overview & Best Practices - US
    PRODUCT FAMILY: ATG - Workflow

    February 17, 2011 at 17:00 UK / 18:00 CET / 09:00 am Pacific / 10:00 am Mountain / 12:00 Eastern

    This 1.5-hour session is recommended for technical and functional users who are interested in a generic overview of the tools and utilities available to take a closer look at the Java Virtual Machine used in an E-Business Suite environment and how to tune it.

    TOPICS WILL INCLUDE:
      - Introduction to Workflow
      - Useful utilities and tools
      - Best practices
      - Q&A

    A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Do you have a contract between the Product Owner and the Team?

    - by Martin Hinshelwood
    Working in Scrum, it is useful to define a Sprint Contract between the Product Owner (PO) and the implementation Team. Doing this helps to improve common understanding in, and sometimes to enforce, the relationship between the PO and the Team. This is simply an agreement between the PO and the Team for one Sprint; it is not really a commercial contract and should be confirmed via an e-mail at the beginning of every Sprint.

    "The implementation team agrees to do its best to deliver an agreed-on set of features (scope) to a defined quality standard by the end of the sprint. (Ideally they deliver what they promised, or even a bit more.) The Product Owner agrees not to change his instructions before the end of the Sprint." - Agile Project Management (http://agilesoftwaredevelopment.com/blog/peterstev/10-agile-contracts#Sprint)

    Each of the Sprints in a Scrum project can be considered a mini-project that has Time (Sprint length), Scope (Sprint Backlog), Quality (Definition of Done) and Cost (team size x Sprint length). Only the scope can vary, and this is measured every Sprint.

    Figure: Good example - the Product Owner should reply to the team and commit to the contract.

    This Rule has been added to SSW's Rules to better Scrum with TFS.

    Technorati Tags: SSW, Scrum, SSW Rules

    Read the article

  • Game editor integration with the engine?

    - by Daniel
    What I am trying to figure out is: what is the best way to integrate the editor (level, effects, model, etc...) in the most effective way? Now, the first thing I thought of would be to make the game engine* extremely modular. For example, take game states. You could have multiple game states that all have their own update() and draw() methods, among others. Each game state class would inherit from a base GameState class. This allows for a more modular approach, and a useful one at that. Now, would the most efficient approach be to implement the editor along with the modular engine, or to create two different designs for the game and the editor? I thought to take the game state example and extend it to window states, and it could well be used for many more systems. Is there a better implementation of this design (game states) for use in other systems in the engine? A sketch of the idea follows below.

    *: Now I know the term game engine is sort of irrelevant, and misused in many situations. What I am referring to as the "game engine" is, for short, the combination of the systems that the game must interact with. Also, this is more of a theory/design question than an implementation one. Even though both mix, I'd rather have a more general idea of how the editor can be built in an efficient way while still using the same engine code as the game. Thanks, Daniel

    P.S. If you need more clarification or extra bits, just leave a comment.
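
    (A minimal sketch of the game-state design described above - the names are illustrative, and the editor simply becomes another state on top of the same engine:)

        class GameState {
        public:
            virtual ~GameState() = default;
            virtual void update(float dt) = 0;  // advance the state's logic
            virtual void draw() = 0;            // render the state
        };

        class PlayState : public GameState {
        public:
            void update(float dt) override { /* run gameplay systems */ }
            void draw() override           { /* render the scene */ }
        };

        class EditorState : public GameState {
        public:
            void update(float dt) override { /* selection, gizmos, undo/redo */ }
            void draw() override           { /* render the scene plus editor overlays */ }
        };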

    Read the article
