Search Results

  • Book review: Peopleware: Productive Projects and Teams

    - by DigiMortal
    Peopleware by Tom DeMarco and Timothy Lister is a golden classic that can be considered mandatory reading for software project managers, team leads, higher-level management, and board members of software companies. If you make decisions about people, you cannot miss this book. If you are already good at managing developers, this book can make you even better – you will certainly learn new things about successful development teams. Why Peopleware? Peopleware gives you very good hints about how to build a working environment for project teams where people can really do their work. The book also covers team building topics, which are equally important reading. As a software developer, I found practically all the points in this book to be accurate and valid. Many times I have found myself thinking about the same things, and Peopleware made me more confident in my opinions. Peopleware also covers time management and planning topics that help you use developers' time far more effectively by minimizing interruptions from phone calls, pointless meetings, and i-want-to-know-what-are-you-doing-right-now questions from managers who don't write code anyway. I think that if you follow the suggestions given in Peopleware, your developers will be very happy. I also suggest you read another great book – Death March by Edward Yourdon. Death March effectively describes what happens when the good advice given in Peopleware is totally ignored or, worse yet, people are treated in exactly the opposite way. I consider Death March a golden classic too, and I strongly recommend you read it as well.

    Table of Contents
    Acknowledgments
    Preface to the Second Edition
    Preface to the First Edition
    Part I: Managing the Human Resource
      Chapter 1: Somewhere Today, a Project Is Failing
      Chapter 2: Make a Cheeseburger, Sell a Cheeseburger
      Chapter 3: Vienna Waits for You
      Chapter 4: Quality - If Time Permits
      Chapter 5: Parkinson's Law Revisited
      Chapter 6: Laetrile
    Part II: The Office Environment
      Chapter 7: The Furniture Police
      Chapter 8: "You Never Get Anything Done Around Here Between 9 and 5"
      Chapter 9: Saving Money on Space
      Intermezzo: Productivity Measurement and Unidentified Flying Objects
      Chapter 10: Brain Time Versus Body Time
      Chapter 11: The Telephone
      Chapter 12: Bring Back the Door
      Chapter 13: Taking Umbrella Steps
    Part III: The Right People
      Chapter 14: The Hornblower Factor
      Chapter 15: Hiring a Juggler
      Chapter 16: Happy to Be Here
      Chapter 17: The Self-Healing System
    Part IV: Growing Productive Teams
      Chapter 18: The Whole Is Greater Than the Sum of the Parts
      Chapter 19: The Black Team
      Chapter 20: Teamicide
      Chapter 21: A Spaghetti Dinner
      Chapter 22: Open Kimono
      Chapter 23: Chemistry for Team Formation
    Part V: It's Supposed to Be Fun to Work Here
      Chapter 24: Chaos and Order
      Chapter 25: Free Electrons
      Chapter 26: Holgar Dansk
    Part VI: Son of Peopleware
      Chapter 27: Teamicide, Revisited
      Chapter 28: Competition
      Chapter 29: Process Improvement Programs
      Chapter 30: Making Change Possible
      Chapter 31: Human Capital
      Chapter 32: Organizational Learning
      Chapter 33: The Ultimate Management Sin Is
      Chapter 34: The Making of Community
    Notes
    Bibliography
    Index
    About the Authors

    Read the article

  • TomEE Integration in NetBeans Next

    - by Geertjan
    At JavaOne 2013, there was a lot of buzz around the TomEE server, e.g., many tweets, a nice party, and a new TomEE consulting company. For those tracking TomEE developments, it is interesting to note that TomEE support has recently been added to the NetBeans IDE development builds. Note: The TomEE support described here is not in NetBeans IDE 7.4, but in development builds for the next release of NetBeans IDE. For example, with NetBeans IDE development builds you're able to:
    - register TomEE as a server in the Services window (TomEE has several distributions; one can use the "with JAX-RS" one, for example)
    - create a Java EE 6 web project (e.g., Maven based) against this server
    - create JPA entities from a database
    - create JAX-RS classes from JPA entities
    - create JSF pages from JPA entities
    Beyond that, the IDE lets you create a new data source for TomEE and deploy it to the server. The IDE figures out which components are already packaged in TomEE, and that (unlike with regular Tomcat) it does not need to package components such as a JSF implementation, persistence provider, or JAX-RS runtime, so the resulting WAR file is very small. The IDE can also do "deploy on save" with TomEE, so your development cycle is very fast. Adam Bien blogged about how he set up TomEE some time ago, here. The official support in NetBeans IDE will be much more tightly integrated, simplifying the steps Adam describes. For example, the IDE does step 2 from Adam's blog for you, i.e., it sets up the TomEE deployment roles. Moreover, it knows about all the technologies included in TomEE so that it can optimize the packaging; it knows about TomEE's persistence setup; it can work with TomEE data sources, etc. Below you see a Maven-based Java EE 6 PrimeFaces application (all entities and JSF pages generated from a database) deployed to TomEE in NetBeans IDE. And here's the management console for configuring and fine-tuning TomEE in NetBeans IDE. When I tried out the NetBeans IDE development build and TomEE to see how everything fits together, I was surprised at how fast TomEE started up. Not sure what they did to it, but it seems like a server on steroids. And setting it up in NetBeans IDE was trivial. Add the simple setup of TomEE in NetBeans IDE to the many benefits of the widely praised out-of-the-box NetBeans Maven tools, together with the fact that not one single plugin had to be installed to get everything described here up and running... and you have a really powerful combination of dev tools, all for free.
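    To make the "very small WAR" point concrete, here is a minimal sketch of the kind of JAX-RS resource such a project contains; the class and path names are hypothetical, and because TomEE ships its own JAX-RS runtime, little beyond code like this needs to be packaged in the WAR:

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;

        @Path("customers")
        public class CustomerResource {

            // TomEE supplies the JAX-RS implementation at runtime,
            // so the WAR carries only application classes like this one.
            @GET
            @Produces("application/json")
            public String findAll() {
                return "[]"; // placeholder payload
            }
        }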

    Read the article

  • HighPoint RocketRAID 62x Controller

    - by TeXnewbie
    I have the subject card recently installed in Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-31-generic x86_64). See the partial lspci -vv listing below (the complete listing played havoc with pre tags):

        03:00.0 RAID bus controller: HighPoint Technologies, Inc. Device 0622 (rev 01)
            Subsystem: HighPoint Technologies, Inc. Device 0001
            Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
            Latency: 0, Cache Line Size: 32 bytes
            Interrupt: pin A routed to IRQ 11
            Region 0: I/O ports at 9c00 [size=8]
            Region 1: I/O ports at 9800 [size=4]
            Region 2: I/O ports at 9400 [size=8]
            Region 3: I/O ports at 9000 [size=4]
            Region 4: I/O ports at 8c00 [size=16]
            Region 5: Memory at fdbff000 (32-bit, non-prefetchable) [size=2K]
            Expansion ROM at fdbe0000 [disabled] [size=64K]
            Capabilities: 

    I followed the instructions I found at https://help.ubuntu.com/community/RocketRaid to compile the drivers for it, and although the process described there seemed to complete with no noticeable errors, when I rebooted afterwards I could not boot. During the dkms steps I noticed messages indicating "(If next boot fails, revert to initrd.img-3.2.0-31-generic.old-dkms image) update-initramfs...", so I booted using an Ubuntu 12.10 LiveDVD and reverted to the old-dkms initrd.img as suggested, but this failed to repair the boot problem. Ultimately, I used https://help.ubuntu.com/community/Boot-Repair in Ubuntu-Secure-Remix to fix the boot problem and was able to boot normally again. But now, with the newly generated initrd.img in place (which boots normally), when I modprobe the rr62x kernel module I immediately get a hard crash, with messages to the console about a kernel paging request that seems to have caused the problem. I've tried on multiple occasions to use the newly built kernel module so as to allow me to use an eSATA port multiplier plugged into the card, but to no avail. Any suggestions on fixes or workarounds would be most appreciated (I've read that some of the HighPoint cards, such as the 2720SGL, seem to work as a plain host bus adapter and thus may not need a custom driver, but that seems not to be the case for mine). My goal is to use the card as described here, with the software RAID mdadm utilities. If necessary, I can hand-copy the console messages after the hard crash into a follow-up message, but I obviously can't do a cut/paste. I'll gladly provide any other details that are needed, but I'm not sure what those would be at this point, so I'll refrain from adding more for now. Thanks in advance for any help.
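    In the meantime, I know I can at least keep the custom module from auto-loading at boot with the standard blacklist approach below; that only avoids the crash rather than fixing it, and whether blacklisting is even appropriate for this card is an open question:

        # keep the suspect rr62x module from auto-loading, then rebuild the initramfs
        echo "blacklist rr62x" | sudo tee /etc/modprobe.d/blacklist-rr62x.conf
        sudo update-initramfs -u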

    Read the article

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble: This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in this deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative.
    Why Columnstore? If we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows (DW, reporting, aggregations, grouping, scans, etc.), SQL Server never had a good mechanism until columnstore. Columnstore indexes were introduced in SQL Server 2012; however, they're still largely unknown. Some adoption blockers existed, yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, the potential blockers have been largely removed, and columnstore indexes are going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore and to document why columnstore is a compelling reason to upgrade to SQL Server 2014.
    App: MSIT SONAR Aggregations. At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR. By definition, this is a primary use case for columnstore: report queries requiring aggregation over large numbers of rows. New data is refreshed each night by an automated table partitioning mechanism, a best-practices scenario for columnstore.
    The Win: Compared to performance using classic indexing, which resulted in the expected query plan selection including partition elimination, the SQL Server 2012 nonclustered columnstore increased query performance significantly. Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements. Existing nonclustered indexes were rendered superfluous and were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity.
    Details: The table provides the raw data & summarizes the performance deltas.

                                        Logical Reads (8K pages)   CPU (ms)    Duration (ms)
        Columnstore                     160,323                    20,360      9,786
        Conventional Table & Indexes    9,053,423                  549,608     193,903
        Ratio (Δ)                       x56                        x27         x20

    The charts provide additional perspective on this data. "Conventional vs. Columnstore Metrics" documents the raw data. Note on this linear display the magnitude of the conventional index performance vs. columnstore. The "Metrics (Δ)" chart expresses these values as a ratio.
    Summary: For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the first in a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Subsequent posts in this series document performance enhancements that are even more significant.
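    For readers wanting to reproduce the approach, the schema change behind a win like this is a single statement; the table and column names below are a hypothetical sketch (SQL Server 2012 syntax):

        -- hypothetical partitioned fact table; the index covers the columns the reports touch
        CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSonar
        ON dbo.FactSonar (ServerID, CounterID, SampleDate, SampleValue);

        -- a typical SONAR-style aggregation that benefits from the columnstore scan
        SELECT ServerID, COUNT(*) AS Samples, AVG(SampleValue) AS AvgValue
        FROM dbo.FactSonar
        WHERE SampleDate >= '20120101'
        GROUP BY ServerID;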

    Read the article

  • SCCM 2007 Collections per OU

    - by VirtualizeIT
    Recently I wanted to set up our SCCM collections to mirror our Active Directory structure. I finally figured out how to create collections per OU of the domain, and I decided to write a simple tutorial that may show other IT professionals the steps to complete this task.
    1. Open the ConfigMgr console and navigate to the collections: go to Site Database > Computer Management > Collections.
    2. Right-click 'Collections' and select New Collection. A wizard pops up so you can enter the name of the collection and any notes you want to associate with it.
    3. Next, select the database icon. In the 'Name' textbox enter the name of the query; I named mine 'Query' just for simplicity's sake. After you enter the name, select 'Edit Query Statement...'.
    4. Select the 'Criteria' tab.
    5. Select the icon that looks like a sun.
    6. At this point you should see the criterion properties dialog box.
    7. Next, click the 'Select' button.
    8. Under 'Attribute class', scroll through until you see 'System Resource', and for 'Attribute', scroll through until you see 'System OU Name'.
    9. After that, select OK.
    10. In the 'Value' textbox, enter the string that is associated with the OU in your domain. NOTE: If you don't know the string name for your OU, go to "Active Directory Users and Computers", right-click the OU, and select Properties. On the 'Object' tab you should see the string under 'Canonical name of object'. That is the string to put in the 'Value' textbox.
    11. After you enter the OU string name, press OK > OK > OK > NEXT > NEXT > FINISH.
    That's it! I hope this tutorial has helped you understand how to create a collection based on your OU structure.
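    Behind the wizard, the criterion built in steps 4 through 10 amounts to a WQL query like the sketch below; the OU path is hypothetical and should be replaced with the canonical name gathered in step 10:

        select SMS_R_System.ResourceID, SMS_R_System.Name
        from SMS_R_System
        where SMS_R_System.SystemOUName = "MYDOMAIN.COM/WORKSTATIONS/ACCOUNTING"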

    Read the article

  • LightSwitch Tutorial - Adding Image to a LightSwitch Screen

    - by ChrisD
    Last week I discussed how to control screen layouts in LightSwitch. Now I will talk about how to add an image to a LightSwitch screen. In this demo, I will upload an image to the screen and save the image into the database. The first step is to start VS 2010 and create a LightSwitch desktop application with the name "AddingImageIntoScreenInLSBeta2", as shown in the following figure. The second step is to create a table, as shown on screen, by selecting the "create a table" option on the start-up screen. Then we need to add a New Data Screen to our demo application. See the following figure, which shows the default screen layout for the screen we have created. We have to change the layout of this screen so that uploading and using the image on the screen can be explained easily. Before adding the Model Window, we have to prepare the layout, so delete the highlighted fields shown in the above figure. After preparing the layout for the image, add a new group to the Person Property Rows Layout. To add a new group: [No: 1] – Select the Rows Layout; it will show you the Add button. [No: 2] – Click the Add button to select the new group. [No: 3] – Select the New Group. After adding the new group, change the layout type to Columns Layout. Here: change the Rows Layout to a Columns Layout and set the display name to "Uploading Image Example", then click the Add button to add the Photo field under the Columns Layout. Next, add a new group under the Columns Layout group; follow the [No: #] steps above to create it. After adding the new Rows Layout group, add the fields to the newly created group. [No: 1] – Select the Rows Layout group and change the display name to "Details". [No: 2] – Click the Add button to select the appropriate fields to add to the group. [No: 3] – Add the fields to the group. The above snippet shows the complete layout tree for our screen. Now the screen for uploading the image is ready. Just press the Play button and see the result.

    Read the article

  • SQL SERVER – Configuring Interactive Cleansing Suggestion Min Score for Suggestions in Data Quality Services (DQS) – Sensitivity of Suggestion

    - by pinaldave
    Earlier I talked about the kinds of questions I do not like to be asked. Today we will go over a question of the kind I do like to be asked. One of my readers worked through the steps in my earlier blog post Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS. While reading the blog post he noticed that Data Quality Services was not providing very helpful suggestions, and he wrote me an email about it. Let us go over his email. "Pinal, I noticed in one of your images that DQS is not providing very helpful suggestions. First of all, DQS should be able to make intelligent guesses and make the necessary corrections by itself. If it cannot do that, it should give us intelligent suggestions, but in the image included here I see that the suggestions are missing as well. Why is that? Would you please tell me how to increase the number of suggestions? I do understand this may not be the preferable solution in many cases; it all depends on the business case. There are cases when high sensitivity is required and cases when it is not. I would like to seek your help here. – Sriram MD" This is indeed a great question. I see that Sriram understands that every system is different and every application has a different need, so I do not have to tell him this most important concept. The question is about how to change the sensitivity of suggestions for correction in DQS. This option is available under the Configuration tab in the DQS client. Once you click on Configuration you will see the following screen. Click the General Settings tab and you will see the Interactive Cleansing section. The first option in this section is "Min score for suggestions". As this is set to 0.7, every suggestion which matches with a probability of 0.7 or higher is displayed under the suggestion tab. You can see in the following image that there is no suggestion, as the min score for suggestions is set to 0.7 and no record qualifies with that much confidence. Now let us change the value of Min Score for suggestions to 0.5. The lower value makes DQS offer suggestions for values that match with a probability over 0.5. In our case the suggestions it provides are also accurate, but this may not be true for your sample. Every sample is different, so you should manually review suggestions before approving them. I guess this is a simple blog post to demonstrate how to change the confidence value for the suggestions which Data Quality Services provides. Use this feature with care and always tune it according to your datasets and record diversity. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • How do you report out user research results?

    - by user12277104
    A couple weeks ago, one of my mentees asked to meet, because she wanted my advice on how to report out user research results. She had just conducted her first usability test for her new employer, and was getting to the point where she wanted to put together some slides, but she didn't want them to be boring. She wanted to talk with me about what to present and how best to present results to stakeholders. While I couldn't meet for another week, thanks to SlideShare I could quickly point her in the direction that my in-person advice would have led her. First, I'd put together a panel for the February 2012 New Hampshire UPA monthly meeting that we then repeated for the 2012 Boston UPA annual conference. In this panel, I described my reporting techniques, as did six of my colleagues -- two of whom work for companies smaller than mine, and four of whom are independent consultants. Before taking questions, we each presented for 3 to 5 minutes on how we presented research results. The differences were really interesting. For example, when do you really NEED a long, written report (as opposed to an email, spreadsheet, or slide deck with callouts)? When you are reporting your test results to the FDA -- that makes sense. In this presentation, I describe two modes of reporting results that I use. Second, I'd been a participant in the CUE-9 study. CUE stands for Comparative Usability Evaluation, and this was the 9th of these studies that Rolf Molich had designed. Originally, the studies were designed to show the variability in the evaluation methods practitioners use to evaluate websites and applications. Of course, using methods and tasks of their own choosing, the results were wildly different. However, in this 9th study, the tasks were the same, the participants were the same, and the problem severity scale was the same, so how would the results of the 19 practitioners compare? Still wildly variable. But for the purposes of this discussion, it gave me a work product that was not proprietary to the company I work for -- a usability test report that I could share publicly. This was the way I'd been reporting results since 2005, and pretty much what I still do, when time allows. That said, I have been continuing to evolve my methods and reporting techniques, and sometimes there is no time to create that kind of report -- the team can't wait the days it takes to take screenshots, go through my notes, refer back to recordings, and write it all up. So in those cases, I use bullet points in email, talk through the findings with stakeholders in a 1-hour meeting, and then post the take-aways on a wiki page. There are other requirements for that kind of reporting to work -- for example, the stakeholders need to attend each of the sessions, and the sessions can't take more than a day to complete -- but you get the idea: there is no one "right" way to report out results. If the method of reporting you are using is giving your stakeholders the information they need, in a time frame in which it is useful, and in a format that meets their needs (FDA report or bullet points on a wiki), then that's the "right" way to report your results.

    Read the article

  • Mind Reading with the Raspberry Pi

    - by speakjava
    Mind Reading With The Raspberry Pi. At JavaOne in San Francisco I did a session entitled "Do You Like Coffee with Your Dessert? Java and the Raspberry Pi". As part of this I showed some demonstrations of things I'd done using Java on the Raspberry Pi. This is the first part of a series of blog entries that will cover all the different aspects of these demonstrations. A while ago I bought a MindWave headset from Neurosky. I was particularly interested to see how this worked, as I had had the opportunity to visit Neurosky several years ago when they were still developing this technology. At that time the 'headset' consisted of a headband (very much in the Bjorn Borg style) with a sensor attached and some wiring that clearly wasn't quite production ready. The commercial version is very simple and easy to use: there are two sensors, one which rests on the skin of your forehead, the other a small clip that attaches to your earlobe. Typical EEG sensors used in hospitals require lots of sensors, and they all need copious amounts of conductive gel to ensure the electrical signals are picked up. Part of Neurosky's innovation is the development of this simple dry-sensor technology. Having put on the sensor and turned it on (it runs off a single AAA battery), it collects data and transmits it to a USB dongle plugged into a PC, or in my case a Raspberry Pi. From a hacking perspective the USB dongle is ideal because it does not require any special drivers or any complex, low-level USB communication. Instead it appears as a simple serial device, which on the Raspberry Pi is accessed as /dev/ttyUSB0. Neurosky have published details of the command protocol. In addition, the MindSet protocol document, including sample code for parsing the data from the headset, can be found here. To get everything working on the Raspberry Pi using Java, the first thing was to get serial communications going. Back in the dim distant past there was the Java Comm API. Sadly this has grown a bit dusty over the years, but there is a more modern open source project that provides compatible and enhanced functionality, RXTXComm. This can be installed easily on the Pi using sudo apt-get install librxtx-java. Next I wrote a library that would send commands to the MindWave headset via the serial port dongle and read back the data being sent from the headset. The design is pretty simple: I used an event-based system so that code using the library could register listeners for different types of events from the headset. You can download a complete NetBeans project for this here. This includes javadoc API documentation that should make it obvious how to use it (incidentally, this will work on platforms other than Linux; I've tested it on Windows without any issues, just by changing the device name to something like COM4). To test this I wrote a simple application that would connect to the headset and then print the attention and meditation values as they were received from the headset. Again, you can download the NetBeans project for that here. Oracle recently released a developer preview of JavaFX on ARM which will run on the Raspberry Pi. I thought it would be cool to write a graphical front end for the MindWave data that could take advantage of the built-in charts of JavaFX. Yet another NetBeans project is available here. Screen shots of the app, which uses a very nice dial from the JFxtras project, are shown below.
    I probably should add labels for the EEG data so the user knows which trace is the low alpha, which the mid gamma, and so on. Given that I'm not a neurologist, I suspect that it won't increase my understanding of what the (rather random looking) traces mean. In the next blog I'll explain how I connected a LEGO motor to the GPIO pins on the Raspberry Pi and then used my mind to control the motor!
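    For readers who want to try the serial layer without downloading the NetBeans projects, the sketch below shows the minimal RXTXComm setup involved; the baud rate and the sync-byte check are assumptions based on the published ThinkGear protocol rather than code from my library:

        import gnu.io.CommPortIdentifier;
        import gnu.io.SerialPort;
        import java.io.InputStream;

        public class MindWaveProbe {
            public static void main(String[] args) throws Exception {
                // the dongle shows up as a plain serial device on the Pi
                CommPortIdentifier id = CommPortIdentifier.getPortIdentifier("/dev/ttyUSB0");
                SerialPort port = (SerialPort) id.open("MindWaveProbe", 2000);
                port.setSerialPortParams(115200, SerialPort.DATABITS_8,
                        SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);
                InputStream in = port.getInputStream();
                int previous = -1;
                int current;
                while ((current = in.read()) != -1) {
                    // ThinkGear packets are framed by two consecutive 0xAA sync bytes
                    if (previous == 0xAA && current == 0xAA) {
                        System.out.println("packet start");
                    }
                    previous = current;
                }
            }
        }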

    Read the article

  • Gradle for NetBeans RCP

    - by Geertjan
    Start with the NetBeans Paint Application and do the following to build it via Gradle (i.e., no Gradle/NetBeans plugin is needed for the following steps), assuming you've set up Gradle. Do everything below in the Files or Favorites window, not in the Projects window. In the application directory "Paint Application", create a file named "settings.gradle" with this content:

        include 'ColorChooser', 'Paint'

    Create another file in the same location, named "build.gradle", with this content:

        subprojects {
            apply plugin: "announce"
            apply plugin: "java"
            sourceSets {
                main {
                    java { srcDir 'src' }
                    resources { srcDir 'src' }
                }
            }
        }

    In the module directory "Paint", create a file named "build.gradle" with this content:

        dependencies {
            compile fileTree("$rootDir/build/public-package-jars").matching {
                include '**/*.jar'
            }
        }

        task show << {
            configurations.compile.each { dep ->
                println "$dep ${dep.isFile()}"
            }
        }

    Note: The above is a temporary solution. As you can see, the expectation is that the JARs are in the 'build/public-package-jars' folder, which assumes an Ant build has been done prior to the Gradle build. Now run 'gradle classes' in the "Paint Application" folder and everything will compile correctly. So, this is how the Paint Application now looks. Preferable to the second 'build.gradle' would be the following, which uses the JARs found in the NetBeans Platform:

        netbeansHome = '/home/geertjan/netbeans-dev-201111110600'

        dependencies {
            compile files("$rootDir/ColorChooser/release/modules/ext/ColorChooser.jar")
            def projectXml = new XmlParser().parse("nbproject/project.xml")
            projectXml.configuration.data."module-dependencies".dependency."code-name-base".each {
                if (it.text().equals('org.openide.filesystems')) {
                    def dep = "$netbeansHome/platform/core/" + it.text().replace('.', '-') + '.jar'
                    compile files(dep)
                } else if (it.text().equals('org.openide.util.lookup') || it.text().equals('org.openide.util')) {
                    def dep = "$netbeansHome/platform/lib/" + it.text().replace('.', '-') + '.jar'
                    compile files(dep)
                } else {
                    def dep = "$netbeansHome/platform/modules/" + it.text().replace('.', '-') + '.jar'
                    compile files(dep)
                }
            }
        }

        task show << {
            configurations.compile.each { dep ->
                println "$dep ${dep.isFile()}"
            }
        }

    However, when you run 'gradle classes' with the above, you get an error like this:

        geertjan@geertjan:~/NetBeansProjects/PaintApp1/Paint$ gradle classes
        :Paint:compileJava
        [ant:javac] Note: Attempting to workaround javac bug #6512707
        [ant:javac] An annotation processor threw an uncaught exception.
        [ant:javac] Consult the following stack trace for details.
        [ant:javac] java.lang.NullPointerException
        [ant:javac]     at com.sun.tools.javac.util.DefaultFileManager.getFileForOutput(DefaultFileManager.java:1058)

    No idea why the above happens; still trying to figure it out. Once it works, we can start figuring out how to use the NetBeans Maven repo instead, and then the user of the plugin will be able to select whether to use local JARs or JARs from the NetBeans Maven repo. Many thanks to Hans Dockter, who put the above together with me today via Skype!

    Read the article

  • Does your analytic solution tell you what questions to ask?

    - by Manan Goel
    Analytic solutions exist to answer business questions. Conventional wisdom holds that if you can answer business questions quickly and accurately, you can make better business decisions and therefore achieve better business results and outperform the competition. Most business questions are well understood (read: structured), so they are relatively easy to ask and answer. Questions like what were the revenues, cost of goods sold, and margins, and which regions and products outperformed or underperformed, are relatively well understood, and as a result most analytics solutions are well equipped to answer such questions. Things get really interesting when you are looking for answers but you don't know what questions to ask in the first place. That's like an explorer looking to make new discoveries by exploration. An example of this scenario is the Centers for Disease Control (CDC) in the United States trying to find the vaccine for the latest strain of the swine flu virus. The researchers at the CDC may try hundreds of options before finally discovering the vaccine. The exploration process is inherently messy and complex. The process is fraught with false starts, one question or a hunch leading to another, and the final result may look entirely different from what was envisioned in the beginning. Speed and flexibility are the keys: speed so the hundreds of possible options can be explored quickly, and flexibility because almost everything about the problem, the solutions, and the process is unknown. Come to think of it, most organizations operate in an increasingly unknown or uncertain environment. Business leaders have to make decisions based on a largely unknown view of the future. And since the value proposition of analytic solutions is to help business leaders make better business decisions, for best results consider adding information exploration and discovery capabilities to your analytic solution. Such exploratory analysis capabilities will help business leaders perform even better by empowering them to refine their hunches, ask better questions, and make better decisions. That's your analytic system not only answering the questions but also suggesting what questions to ask in the first place. Today, most leading analytic software vendors offer exploratory analysis products as part of their analytic solutions offerings. So, what characteristics should be top of mind while evaluating the various solutions? The answer is quite simply the same characteristics that are essential for exploration and analysis: speed and flexibility. Speed is required because the system inherently has to be agile to handle hundreds of different scenarios with large volumes of data across large user populations. Exploration happens at the speed of thought, so make sure that your system is capable of operating at the speed of thought. Flexibility is required because the exploration process from start to finish is full of unknowns: unknown questions, answers, and hunches. So make sure that the system is capable of managing and exploring all relevant data, structured or unstructured (databases, enterprise applications, tweets, social media updates, documents, texts, emails, etc.), and provides a flexible, Google-like user interface to quickly explore all relevant data.
    Getting Started
    You can help business leaders become "Decision Masters" by augmenting your analytic solution with information discovery capabilities.
For best results make sure that the solution you choose is enterprise class and allows advanced, yet intuitive, exploration and analysis of complex and varied data including structured, semi-structured and unstructured data.  You can learn more about Oracle’s exploratory analysis solutions by clicking here.

    Read the article

  • How would you gather client's data on Google App Engine without using Datastore/Backend Instances too much?

    - by ruslan
    I'm relatively new to StackExchange and not sure if this is the appropriate place to ask a design question. The site gives me a hint: "The question you're asking appears subjective and is likely to be closed". Please let me know. Anyway... One of the projects I'm working on is an online survey engine. It's my first big commercial project on Google App Engine. I need your advice on how to collect stats and efficiently record them in the Datastore without bankrupting me. The initial requirements are: After a user finishes a survey, the client sends a list of pairs [ID (int) + PercentHit (double)]. This list shows how closely the answers of this user match the predefined answers of reference answerers (who are identified by IDs). I call them "target IDs". The creator of the survey wants to see the aggregated % for given IDs for the last hour, a particular timeframe, or from the beginning of the survey. Some surveys may have thousands of target/reference answerers. So I created this entity:

        public class HitsStatsDO implements Serializable {
            @Id transient private Long id;
            transient private Long version = (long) 0;
            transient private Long startDate;
            @Parent transient private Key parent; // fake parent which contains target id
            @Transient int targetId;
            private double avgPercent;
            private long hitCount;
        }

    But writing a HitsStatsDO for each target from each user would give a lot of data. For instance, I had a survey with 3000 targets which was answered by ~4 million people within one week, with 300K people taking the survey on the first day. Even if we assume they were answering it evenly over 24 hours, that would give us ~1040 writes/second. Obviously that hits the concurrent writes limit of the Datastore. So I decided I'd collect data for one hour and then save it; that's why there are avgPercent and hitCount in HitsStatsDO. GAE instances are stateless, so I had to use a dynamic backend instance. There I have something like this:

        // Contains stats for one hour
        private class Shard {
            ReadWriteLock lock = new ReentrantReadWriteLock();
            Map<Integer, HitsStatsDO> map = new HashMap<Integer, HitsStatsDO>(); // Key is target ID
            public void saveToDatastore();
            public void updateStats(Long startDate, Map<Integer, Double> hits);
        }

    and a map holding the shard for the current hour and the previous hour (which doesn't stay here for long):

        private HashMap<Long, Shard> shards = new HashMap<Long, Shard>(); // Key is HitsStatsDO.startDate

    So once per hour I dump the Shard for the previous hour to the Datastore. Plus I have a class LifetimeStats which keeps a Map<Integer, HitsStatsDO> in memcached, where the map key is the target ID. Also, in my backend shutdown hook method I dump the stats for the unfinished hour to the Datastore. There is only one major issue here - I have only ONE backend instance :) It raises the following questions on which I'd like to hear your opinion: Can I do this without using a backend instance? What if one instance is not enough? How can I split data between multiple dynamic backend instances? It's hard because I don't know how many I have, since Google creates new ones as the load increases. I know I can launch an exact number of resident backend instances. But how many? 2, 5, 10? What if I have no load at all for a week? Constantly running 10 backend instances is too expensive. What do I do with data from clients while a backend instance is dead/restarting? Thank you very much in advance for your thoughts.

    Read the article

  • New Oracle VM RAC template with built-in support for Oracle VM 3

    - by wcoekaer
    The RAC team did it again (thanks Saar!) - another awesome set of Oracle VM templates published and uploaded to My Oracle Support. You can find the main page here. What's special about the latest version of DeployCluster is that it integrates tightly with Oracle VM 3 Manager. It basically is an Oracle VM frontend that helps start VMs and pass arguments down automatically, and there is absolutely no need to log into the Oracle VM servers or the guests. Once it completes, you have an entire Oracle RAC database setup ready to go. Here's a short summary of the steps:
    - Set up an Oracle VM 3 server pool.
    - Download the Oracle VM RAC template from oracle.com.
    - Import the template into Oracle VM using Oracle VM Manager (repository import).
    - Create a public and a private network in Oracle VM Manager in the network tab.
    - Configure the template with the right public and private virtual networks.
    - Create a set of shared disks (physical or virtual) to assign to the VMs you want to create (for ASM; at least 5).
    - Clone a set of VMs from the template (as many RAC nodes as you plan to configure). With Oracle VM 3.1 you can clone with a number, so one clone command for, say, 8 VMs is easy.
    - Assign the shared devices/disks to the cloned VMs.
    - Create a netconfig.ini file on your manager node, or on a client where you plan to run DeployCluster. This little text file just contains the IP addresses, hostnames, etc. for your cluster (see the example below).
    - Run deploycluster.py with the VM names as arguments.
    Done. At this point, the tool will connect to Oracle VM Manager, start the VMs, and configure each one:
    - Configure the OS (Oracle Linux)
    - Configure the disks with ASM
    - Configure the clusterware (CRS)
    - Configure ASM
    - Create database instances on each node
    Now you are ready to log in and use your x-node database cluster. No need to download various products from various websites, click on trial licenses for the OS, or go to a virtual machine store with sample and test versions only - this is production ready and supported. Software. Complete. Example netconfig.ini:

        # Node specific information
        NODE1=racnode1
        NODE1VIP=racnode1-vip
        NODE1PRIV=racnode1-priv
        NODE1IP=192.168.1.2
        NODE1VIPIP=192.168.1.22
        NODE1PRIVIP=10.0.0.22

        NODE2=racnode2
        NODE2VIP=racnode2-vip
        NODE2PRIV=racnode2-priv
        NODE2IP=192.168.1.3
        NODE2VIPIP=192.168.1.23
        NODE2PRIVIP=10.0.0.23

        # Common data
        PUBADAP=eth0
        PUBMASK=255.255.255.0
        PUBGW=192.168.1.1
        PRIVADAP=eth1
        PRIVMASK=255.255.255.0
        RACCLUSTERNAME=raccluster
        DOMAINNAME=mydomain.com
        DNSIP=

        # Device used to transfer network information to second node
        # in interview mode
        NETCONFIG_DEV=/dev/xvdc

        # 11gR2 specific data
        SCANNAME=racnode12-scan
        SCANIP=192.168.1.50

    Read the article

  • How Do You Actually Model Data?

    Since the 1970s, developers, analysts, and DBAs have been able to represent concepts and relations in the form of data through the use of generic symbols. But what is data modeling? The first time I actually heard this term I could not understand why anyone would want to display a computer on a fashion show runway. Hey, what do you expect? At that time I was a freshman in community college, and obviously this was a long time ago. I have since had the chance to learn what data modeling truly is through using it. Data modeling is a process of breaking down information and/or requirements into common categories called objects. Once objects start being defined, relationships start to form based on dependencies found among other existing objects. Currently, there are several tools on the market that help data designers actually map out objects and their relationships through the use of symbols and lines. These diagrams allow designs to be reviewed from several perspectives so that designers can ensure that they have the optimal data design for their project and that the design is flexible enough to allow for potential changes and/or extension in the future. Additionally, these basic models can always be further refined to show different levels of detail, depending on the target audience, through the use of three different types of models.
    Conceptual Data Model (CDM)
    Conceptual Data Models include all key entities and relationships, giving a viewer a high-level understanding of attributes. Conceptual data models are created by gathering and analyzing information from various sources pertaining to a project, typically during the planning phase of a project.
    Logical Data Model (LDM)
    Logical Data Models are conceptual data models that have been expanded to include implementation details pertaining to the data they will store. Additionally, this model typically represents an organization's business requirements and business rules by defining the various attribute data types and relationships for each entity. This additional information can be directly translated to the Physical Data Model, which reduces the actual time needed to implement it.
    Physical Data Model (PDM)
    Physical Data Models are transformed Logical Data Models that include the necessary tables, columns, relationships, and database properties for the creation of a database. This model also allows for considerations regarding performance, indexing, and denormalization that are applied through database rules and data integrity.
    Further insight into why we actually use models in modern application/database development can be seen in the benefits that data modeling provides for data modelers and projects themselves. Benefits of data modeling, according to Applied Information Science:
    - Abstraction allows data designers to separate concepts and ideas from hard facts in the form of data. This gives data designers the ability to express general concepts and/or ideas in a generic form through the use of symbols to represent data items and the relationships between the items.
    - Transparency through the use of data models allows complex ideas to be translated into simple symbols, so that the concept can be understood from all viewpoints, limiting the amount of confusion and misunderstanding.
    - Effectiveness in regards to tuning a model for acceptable performance while maintaining affordable operational costs. In addition, it allows systems to be built on a solid foundation in terms of data.
    I shudder at the thought of a world without data modeling. Think about it: data is everywhere in our lives. Data modeling allows a design to be optimized for performance and for the reduction of duplication. If one were to design a database without data modeling, I would expect the first thing to be impacted to be database performance, due to the poorly designed database, and there would be a greater chance of unnecessary data duplication, which would also feed into excessive query times because unneeded records would need to be processed. You could say that designing a database without a data model is like a box of chocolates: you will never know what kind of database you will get until after it is built.
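    To make the logical-to-physical step concrete, here is a small sketch of how a "customer places orders" relationship might land in a physical model; all table, column, and type names are illustrative:

        CREATE TABLE Customer (
            CustomerId   INT          NOT NULL PRIMARY KEY,
            CustomerName VARCHAR(100) NOT NULL
        );

        CREATE TABLE CustomerOrder (
            OrderId    INT  NOT NULL PRIMARY KEY,
            -- the foreign key is the physical form of the modeled relationship
            CustomerId INT  NOT NULL REFERENCES Customer (CustomerId),
            OrderDate  DATE NOT NULL
        );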

    Read the article

  • AccelerometerInput XNA GameComponent

    - by Michael B. McLaughlin
    Bad accelerometer controls kill otherwise good games. I decided to try to do something about it. So I created an XNA GameComponent called AccelerometerInput. It's still a beta project, but you are welcome to try it, use it, modify it, etc. I'm releasing it under the terms of the Microsoft Public License. Important info: First, it only supports tilt-style controls currently. I have not implemented motion-style controls yet (and make no promises as to when I might find time to do so). Second, I commented it heavily so that you can (hopefully) understand what it is doing. Please read the comments and examine the sample game for a usage overview. There are configurable parameters which I encourage you to make use of (both by modifying the default values where your testing shows it to be appropriate, and also by implementing a calibration mechanism in your game that lets the user adjust those configurable values based on his or her own circumstances). Third, even with this code, accelerometer controls are still a fairly advanced topic area; you will likely find nothing but disappointment if you simply plunk this into some project without testing it on a device (or preferably on several devices). Fourth, if you do try this code and find that something doesn't work as expected on your phone, please let me know, as I want to improve it and can only do so with your help. Let me know what phone model it is, what you tried doing, what you expected, and what result you had instead. I may or may not be able to incorporate it into the code, but I can let others know at the very least so that they can make appropriate modifications to their games (I'm hopeful that all phones are reasonably similar in their workings and require, at most, a slight calibration change, but I simply don't know). Fifth, although I'll do my best to answer any questions you may have about it, I'm very busy with a number of things currently, so it might take a little while. Please look through the code and examine the comments and sample game first before asking any questions. It's likely that the answer is in there. If not, or if you just aren't really sure, ask away. Sixth, there are differences between a portrait-mode game and a landscape-mode game (specifically in the appropriate default tilt adjustment for toward-the-user/away-from-the-user calculations). This is documented, and the default is set for landscape. If you use this for a portrait game, make the appropriate change (look for the TODO: comment in AccelerometerInput.cs). Seventh, no provision whatsoever is made for disabling screen locking. It is up to you to implement that and to take appropriate measures to detect when the user has been idle for too long and time out the game. That code is very game-specific. If you have questions about such matters, consult the relevant MSDN documentation and, if you still have questions, visit the App Hub forums and ask there. I answer questions there a lot, so I may even stumble across your question and answer it. But that's a much better forum than the comments section here for questions of that sort, so I would appreciate it if you asked idle-detection-related questions there (or on some other suitable site that you may be more familiar and comfortable with). Eighth, this is an XNA GameComponent intended for XNA-based games on WP7. A sufficiently knowledgeable Silverlight developer should have no problem adapting it for use in a Silverlight game or app. I may create a Silverlight version at some point myself.
Right now I do not have the time, unfortunately. Ok. Without further ado: http://www.bobtacoindustries.com/developers/utils/AccelerometerInput.zip Have a great St. Patrick’s Day!
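    For anyone wondering what consuming the component looks like before downloading it: usage presumably follows the standard XNA GameComponent pattern sketched below; the constructor signature here is a hypothetical illustration, so check the sample game for the real one.

        // in your Game subclass
        AccelerometerInput accelerometerInput;

        protected override void Initialize()
        {
            accelerometerInput = new AccelerometerInput(this); // hypothetical ctor; see the sample
            Components.Add(accelerometerInput);                // XNA then calls its Update() each frame
            base.Initialize();
        }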

    Read the article

  • As a person getting into mobile development, what's the best mobile platform in terms of profitability? [closed]

    - by Kyle Loman
    I realize this question can range very far so would love to hear any and all opinions on this. However, I'll be honest and say that I have been thinking of this in terms of most profitable. I know how this may sound either way but this is one of my main sticking points. I realize that I'm not guaranteed a single cent and success is never guaranteed but I'm going into this with the thought of making something out of it both financially and also for my own interest. I know that iOS gets a lot of attention on this front but Android commands a lot more market share. However, I know there are drawbacks to Android too, whether it's in the actual development process and programming (though I've heard conflicting reports on this, such as how easy/difficult it is for to address screen res in different devices) or the app ecosystem being flooded. But iOS's app ecosystem has been described as too saturated and harder to compete in for that reason. Since Windows Phone has fewer apps than both of those two, that might be the best place to start in order to be closer to the ground floor of the store and be noticed more? Less saturation = better chances of sales or differentiating? Something like the gold rush during the first years of the iOS App Store (not exactly but at least in concept)? Would it be that despite fewer users on the platform, there's more exposure due to less competition so that may translate to better success at sales? Plus, I know MS is in it for the long haul so I'm not too fearful of something like WebOS going away. Obviously RIM isn't very popular nowadays but I read a recent article that says Blackberry actually has the apps that make the most money, any thoughts on that: http://gigaom.com/mobile/which-mobile-oss-apps-make-most-money-surprise-its-blackberry/ Again, this is all I've heard or known about so if there's anything to add or correct here, please do. In addition, this has actually affected my next personal phone upgrade. I'm eligible for a carrier discount now and I've had my eye on the iPhone 5. However, the Lumia 920 is the one I'm holding out for and I'm open to trying an Android but I'm not sure I can wait that long for any new Nexus or even the Razr HD. Even the new Lumia in November is making me antsy, I'm so close to just getting an iPhone 5. But when I say this has affected my phone choice, I'd want to be able to carry the apps I write with me so that I'm able to pull my phone out to show people without having to carry around a second device to do so. So that's why I'd like to make my personal phone match the main platform I'm developing for. Of course, I will likely expand to other platforms if I gain any decent success but the one I target now would serve well as my personal phone I carry around so that I can use it as a marketing tool, in a sense, showing people my apps if the opportunity presents itself. So what's the best mobile platform to choose, and especially in regards to most lucrative? As said previously, this would influence my personal phone choice greatly. Thanks in advance and I hope this isn't taken the wrong way - I understand there are trade-offs and other factors that may balance this out but making some revenue is key among that. 
For some background, I have done software development and know programming language concepts so I'm not entirely new to it and I do get the notion of being familiar with these things so that I can translate this skill among a variety of languages but I'm currently just having difficulty choosing my first main mobile platform based on the criteria I've outlined above.

    Read the article

  • questions about dual-boot install Ubuntu 10.04 and Windows 7 on same hard drive

    - by Tim
    I'd like to dual-boot install Ubuntu 10.04 on the same hard drive as Windows 7, which has already been installed. As to sources on the internet: I found a website, iinet, about dual-boot installation of Ubuntu 10.10 and Windows 7 on the same hard drive, which I think is more specific than the one on the Ubuntu Community wiki, which doesn't target specific versions of the OSes. Since I am installing Ubuntu 10.04 instead of 10.10, my question is whether their installers are the same or almost the same, and whether I can follow iinet for my dual-boot installation? Or are there better websites with information about dual-boot installation of Ubuntu 10.04 and Windows 7? As to shrinking Windows partitions to make free space for Ubuntu partitions: iinet uses the partition software in Ubuntu's installer to shrink the Windows partition. But I have seen on many websites that the partition software in Ubuntu's installer cannot guarantee shrinking Windows 7 partitions successfully, so they generally recommend shrinking Windows partitions from within Windows itself using its own software. For example, the Ubuntu Community documentation says: "Some people think that the Windows partition must be resized only from within Windows Vista and Windows 7 using the shrink/resize option. ... If you use GParted Partition Editor in the Ubuntu Live CD be careful." So I was wondering which way to go in my situation? As to a partition for bootloader files: in iinet, I don't see a partition created and dedicated to boot files (i.e., GRUB files). However, I have seen many websites strongly suggesting a boot partition for GRUB files, especially for the purpose of separating and protecting them from installed OS files. I was wondering which way I should choose and why? As to installing the bootloader GRUB: in iinet, I see that to install GRUB one only needs to specify the hard drive device for bootloader installation. However, in ubuntuguide (for more than 2 OSes, and Ubuntu 9.04), some commands need to be run in order to put the GRUB configuration files in the MBR and the OS partition, for the chain-load process (where to find the files for the next stage). In the Ubuntu Community documentation, there are some related sentences which I don't quite understand how to put into practice: "the only thing in your computer outside of Ubuntu that needs to be changed is a small code in the MBR (Master Boot Record) of the first hard disk. The MBR code is changed to point to the boot loader in Ubuntu. If you have a problem with changing the MBR code, you might prefer to just install the code for pointing to GRUB to the first sector of your Ubuntu partition instead. If you do that during the Ubuntu installation process, then Ubuntu won't boot until you configure some other boot manager to point to Ubuntu's boot sector. Windows Vista no longer utilizes boot.ini, ntdetect.com, and ntldr when booting. Instead, Vista stores all data for its new boot manager in a boot folder. Windows Vista ships with a command line utility called bcdedit.exe, which requires administrator credentials to use. You may want to read http://go.microsoft.com/fwlink/?LinkId=112156 about it. Using a command line utility always has its learning curve, so a more productive and better job can be done with a free utility called EasyBCD, developed and mastered during the times of the Vista Beta already. EasyBCD is user friendly and many Vista users highly recommend EasyBCD." In what is quoted above, I was wondering how exactly I should change the MBR code to point to the bootloader in Ubuntu?
    And if I fail to change the MBR code, are the alternatives the boot managers suggested for Windows, i.e., bcdedit.exe and EasyBCD? With the three sources above, which one should I follow? Thanks and regards
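    (For reference, on an installed Ubuntu system the standard commands for the two choices discussed above are sketched below; the device names are examples, and writing GRUB 2 to a partition boot sector requires --force and is generally discouraged.)

        # point the MBR of the first disk at GRUB, which then chain-loads Windows
        sudo grub-install /dev/sda
        sudo update-grub

        # alternative: keep the Windows MBR and put GRUB in Ubuntu's partition instead
        sudo grub-install --force /dev/sda5   # replace sda5 with your Ubuntu root partition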

    Read the article

  • SQL Authority News – Secret Tool Box of Successful Bloggers: 52 Tips to Build a High Traffic Top Ranking Blog

    - by Pinal Dave
    When I started this blog, it was meant as a bookmark for myself for helpful tips and tricks. Gradually, it grew into a blog that others were reading and commenting on. While SQL and databases are my first love and the reason I started this blog, the side effect was that I discovered I loved writing. I discovered a secret goal I didn't even know I had – I wanted to become an author. For a long time, writing this blog satisfied that urge. Gradually, though, I wanted to see my name in print.
    12th Book
    Over the past few years I have authored and co-authored a number of books – they are all based on my knowledge of SQL Server, and were meant to spread my years of experience into the world, to share what I have learned with my community. I currently have eleven of these "manuals" available for sale. As exciting as it was to see my name in print, I still felt that there was more I could do as an author. That is when I realized that I am more than just a SQL expert. I have been writing this blog now for more than 10 years, and it grew from a personal bookmark to a thriving website with over 2 million views per month. I thought to myself, "I could write a book about how to create a successful blog!" And that is exactly what I did. I am extremely excited to share with all of you my new book – "Secret Toolbox of Successful Bloggers."
    A Labor of Love
    This project has been a labor of love for me. It started out as a series for this blog – I would post one article a week until I felt the topic had been covered. I found that as I wrote, new topics kept popping up in my mind, and eventually this small blog series grew into a full book. The blog series was large enough to last a whole year, so I definitely thought that it could be a full book. Ideas on how to become a successful blogger came so frequently that, I will admit, I feel like there is so much I left out of this book. I had a lot more to say than I originally thought! I am so excited to be sharing this book with all of you. I am so passionate about this topic, and I feel like there are so many people who can benefit from this book. I know that when I started this blog, I did not know what I was doing, and I would have loved a "helping hand" to tell me what to do and what not to do. If this book can act that way for any of my readers, I feel it is a success.
    Rules of Thumb
    If you are interested in the topic of becoming a blogger, keep in mind as you read this book that it offers suggestions only. Blogging is so new to the world that while there are rules of thumb about what to do and what not to do, a map of steps ("first do x, then do y") is not going to work for every single blogger. This book is meant to encourage new bloggers to put their content out there in the world, to be brave, and to create a community like the one I have here at SQL Authority. I have gained so much from this community that I wanted to give something back, and this book is just one small part. I hope that everyone who reads this book finds at least one helpful tip, and that everyone can experience the joy of blogging. That is the whole reason I wrote this book, and what I hope everyone takes away from it.
    Where Can You Get It?
    You can get the book from the following URLs: Kindle eBook | Print Book
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL

    Read the article

  • EPM Planning (Hyperion) V11.1.2 Implementation Hands-On Boot-camp

    - by Mike.Hallett(at)Oracle-BI&EPM
    5-Day Training for Partners: 29th October - 2nd November 2012, London (UK): REGISTER Here This 5-day workshop, FREE for partners, is designed to provide implementation instruction on Oracle Hyperion EPM Planning.  This boot-camp is intended for prospective implementers of the Planning and Budgeting functionality of Oracle EPM, or for implementers who are already familiar with the basics of EPM Planning and are looking to strengthen their knowledge of the product. The class begins with an overview of Essbase, the foundation of Hyperion Planning. It provides a general overview of Planning and Planning terms, the architecture of all the Planning components, and how they are commonly used. The course goes over all the steps to create an application from scratch. This involves some preparation work outside of Planning and leads to developing the application in both the Planning Windows and Web clients. Participants will modify existing dimensions and build out the hierarchies using the Web client.
    Topics Covered
    The boot-camp shows developers how to build out dimensions using Classic Planning and using EPMA. It covers the mechanics of, and strategies for, automating the build process, such as interface tables. It reviews data loads using Load Rules to load the Planning database. The course focuses on tasks that end users must perform during the planning cycle. It walks students through creating and modifying forms, working with forms to enter data, adding annotations, and the rest of the form features, such as running business rules and managing task lists. It covers how to use the forms in the Smart View client and finishes the end-user perspective by going through Workflow Management and the process of submitting a plan for review. The final section of the course covers security and other administration topics such as automation and deployment.
    Prerequisites
    Ideal participants are Oracle partners (SIs and resellers) with a background in business information systems and a clientele of customers with ongoing or prospective EPM initiatives. Alternatively, partners with the background described above and an interest in evolving their practice to a similar profile are suitable participants. Further online OPN guided learning path information and webinars are available at: Oracle Hyperion Planning 11 Essentials. Please note that attendees are required to bring a laptop. View the laptop requirements and detailed agenda here. REGISTER Here: acceptance is subject to availability and your place will be confirmed within two weeks (for help, see the Partner Registration Guide).
    Training Location: Oracle Corporation UK Ltd, Columbus Room, Customer Visit Center, 1 South Place, London EC2M 2RB
    Training Dates: 29th October - 2nd November, 9:30 am – 5:00 pm BST
    For more information please contact [email protected].

    Read the article

  • XNA RenderTarget2D Sample

    - by Michael B. McLaughlin
    I remember being scared of render targets when I first started with XNA. They seemed like weird magic and I didn’t understand them at all. There’s nothing to be frightened of, though, and they are pretty easy to learn how to use. The first thing you need to know is that when you’re drawing in XNA, you aren’t actually drawing to the screen. Instead you’re drawing to a thing called the “back buffer”. Internally, XNA maintains two sections of graphics memory. Each one is exactly the same size as the other and has all the same properties (such as surface format, whether there’s a depth buffer and/or a stencil buffer, and so on). XNA flips between these two sections of memory every update-draw cycle, so while you are drawing to one, it’s busy drawing the other one on the screen. When the current update-draw cycle ends, it flips: the section you were just drawing to gets drawn to the screen, while the one that was previously being drawn to the screen becomes the one you’ll be drawing on. This is what’s meant by “double buffering”. If you drew directly to the screen, the player would see all of those draws taking place as they happened, and that would look odd and not very good at all.
    Those two sections of graphics memory are render targets. All a render target is, is a section of graphics memory to which things can be drawn. In addition to the two that XNA maintains automatically, you can also create and set your own using RenderTarget2D and GraphicsDevice.SetRenderTarget. Using render targets lets you do all sorts of neat post-processing effects (like bloom) to make your game look cooler. It also lets you do things like motion blur, and lets you create mirrors in 3D games. There are quite a lot of things that render targets let you do. To go along with this post, I wrote up a simple sample for how to create and use a RenderTarget2D. It’s available under the terms of the Microsoft Public License and is available for download on my website here: http://www.bobtacoindustries.com/developers/utils/RenderTarget2DSample.zip . Other than the ‘using’ statements, every line is commented in detail so that it should (hopefully) be easy to follow along with and understand. If you have any questions, leave a comment here or drop me a line on Twitter.
    One last note. While creating the sample I came across an interesting quirk. If you start by creating a Windows Game and then make a copy for Windows Phone 7, the drop-down that lets you choose between deploying to a WP7 device and the WP7 emulator stays grayed out. To resolve this, you need to right-click on the Windows Phone 7 version in the Solution Explorer and choose “Set as StartUp Project”. The bar will then become active, letting you change the target you wish to deploy to. If you want another version to be the one that starts up when you press F5 to begin debugging, just right-click on that version and choose “Set as StartUp Project” for it once you’ve set the WP7 target (device or emulator) that you want.
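    To make the flow concrete, here is a minimal sketch of the draw-to-texture pattern described above, based on the XNA 4.0 API. It is not the downloadable sample itself; the class name and clear colors are illustrative only.

        // A minimal XNA 4.0 Game that draws the scene into an off-screen
        // render target, then draws that target to the back buffer.
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class RenderTargetGame : Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            RenderTarget2D sceneTarget;

            public RenderTargetGame()
            {
                graphics = new GraphicsDeviceManager(this);
            }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                // Create an off-screen target the same size as the back buffer.
                sceneTarget = new RenderTarget2D(
                    GraphicsDevice,
                    GraphicsDevice.PresentationParameters.BackBufferWidth,
                    GraphicsDevice.PresentationParameters.BackBufferHeight);
            }

            protected override void Draw(GameTime gameTime)
            {
                // 1. Redirect all drawing to the off-screen target.
                GraphicsDevice.SetRenderTarget(sceneTarget);
                GraphicsDevice.Clear(Color.CornflowerBlue);
                // ... draw the scene with spriteBatch here ...

                // 2. Passing null switches back to the back buffer.
                GraphicsDevice.SetRenderTarget(null);
                GraphicsDevice.Clear(Color.Black);

                // 3. A RenderTarget2D is also a Texture2D, so it can simply be
                //    drawn; this is where a post-processing effect would apply.
                spriteBatch.Begin();
                spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
                spriteBatch.End();

                base.Draw(gameTime);
            }
        }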

    Read the article

  • Virtualized data centre – Part four: The design

    - by marc dekeyser
    Welcome back to the fourth post in this series! Today we will have a look at what Microsoft recommends as a “private cloud design” and what I will make of it. Whilst my own solution is based on the reference architecture, it is quite different indeed! An important thing to know is that, whilst I am using the private cloud as a reference, I am skipping most of the steps in designing a private cloud. If that is why you are here, please read the links at the end of the article and skim through my own content. A private cloud is much more process-driven than just building a virtual infrastructure…
    The architecture of it all…
    So imagine for a minute that you have unlimited funds to build this lab of yours… You’d want redundancy on all levels and separation of each network where possible! Unfortunately we don’t have that luxury and, as you saw me hinting at in the previous article, our own design will be more limited but still quite capable!
    Networking
    From the networking perspective I will not have a fully redundant network; after all, this is but a lab environment! Thanks to Server 2012 I will be able to use bonding (NIC teaming) on my NICs and use LACP to improve the performance on that part.
    Storage
    As I mentioned in the previous article, a Synology DS1218+ will be used for iSCSI provisioning. This device has two NICs on board which can be bonded into one 2 Gbps interface, giving me decent throughput and making the disks the most limiting factor in the storage design.
    Domain controllers and extra infrastructure
    Server 2012 completely supports running domain controllers virtualized and has no need for a reachable DC when booting… That being said, I need a remote access machine to power on the hosts (I have no need for them running 24/7) and possibly a System Center VMM 2012 box (although Server 2012 is not supported until SP1 :( ). I am still undecided on whether to install those boxes separately or as virtual machines…
    Which amounts to… something like this pretty picture! (Design diagram not included in this excerpt.)
    Sources
    Microsoft Private Cloud Solutions Repository (en-US): http://social.technet.microsoft.com/wiki/contents/articles/12131.microsoft-private-cloud-solutions-repository-en-us.aspx
    Reference Architecture: http://social.technet.microsoft.com/wiki/contents/articles/3819.reference-architecture-for-private-cloud.aspx
    Private Cloud Reference Model: http://social.technet.microsoft.com/wiki/contents/articles/4399.private-cloud-reference-model.aspx
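    As a concrete illustration of the NIC bonding mentioned above, Windows Server 2012 exposes built-in NIC teaming through PowerShell. This is a hedged sketch, not the author’s actual configuration; the team and adapter names are assumptions, and the switch ports must be configured for LACP.

        # Create an LACP team from two physical adapters (names are illustrative).
        New-NetLbfoTeam -Name "LabTeam" `
                        -TeamMembers "Ethernet 1","Ethernet 2" `
                        -TeamingMode Lacp `
                        -LoadBalancingAlgorithm TransportPorts

        # Verify the team and its member adapters.
        Get-NetLbfoTeam
        Get-NetLbfoTeamMember -Team "LabTeam"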

    Read the article

  • Changes in Language Punctuation [closed]

    - by Wes Miller
    More a social curiosity than an actual programming question... (I got shot for posting this on Stack Overflow. They sent me here. At least I hope here is where they meant.) Based on the few responses I got before the content police ran me off Stack Overflow, I should note that I am legally blind, and neatness and consistency in programming are my best friends. A thousand years ago when I took my first programming class (Fortran 66), and a mere 500 years ago when I took my first C and C++ classes, there were some pretty standard punctuation practices across languages. I saw them in Basic (shudder), PL/1, PL/AS, Rexx, even Pascal. OK, APL2 is not part of this discussion. Each language has its own peculiar punctuation: Pascal’s periods, Fortran’s comma-separated do loops, almost everybody else’s semicolons. As I learned it, each language also has KEYWORDS (if, for, do, while, until, etc.) which are set off by whitespace (or the left margin). Each language has functions or subroutines, or whatever they’re called — some built-in, some user-coded. They were set off by function_name( parameters );, as in sqrt( x ) or rand( y );
    Lately, there seems to be a new set of punctuation rules, especially in C++, where initializers get glued onto the end of variable declarations — int x(0); or auto_ptr p(new gizmo); — which usually, briefly, fools me into thinking someone is declaring a function prototype or using a function as an integer. Then “if” and “for” seem to have grown parens: if(true), for(;;), etc. Since when did keywords become functions? I realize some people think they ARE functions with iterators as parameters. But if “for” is a function, where did the argument-separating commas go? And finally, functions seem to have shed their parens: sqrt (2), select (...).
    Question: when did the old ways disappear and this new way come into vogue? Does anyone besides me find it irritating to read, and feel that the information the placement of punctuation used to convey is gone? I know full well that K&R put the { at the end of the “if” or “for” to save a byte here and there. That excuse can’t be used here. Space as an excuse for loss of readability died as HDD space soared past 100 MiB. Your thoughts are solicited. If there is a good reason to do this, I’ll gladly learn it and maybe in another 50 years I’ll get used to it. Of course it’s good that compilers recognize these (IMHO) typos and keep right on going, but just because you CAN code it that way doesn’t mean you HAVE to, right?
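    For readers skimming, here is a small, purely illustrative C++ snippet contrasting the two styles the question describes; both halves compile to the same thing, and only the punctuation conventions differ:

        #include <cmath>

        int main()
        {
            // Older conventions: assignment-style initialization, keywords
            // set off by whitespace, spaces inside a call's parentheses.
            int x = 0;
            if (x == 0) { x = 1; }
            double a = std::sqrt( static_cast<double>(x) );

            // Newer conventions the question complains about:
            // constructor-style initialization, keywords glued to parens.
            int y(0);
            if(y == 0) { y = 1; }
            double b = std::sqrt(static_cast<double>(y));

            return (a == b) ? 0 : 1;
        }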

    Read the article

  • Selecting the correct installer to install Oracle Weblogic Server

    - by PratikS -- Oracle
    Whenever we start learning about a software product, the first step is to get the installer and install it. Before we start with "How to install Oracle WebLogic Server?", let's understand the different kinds of installers available for Oracle WebLogic Server and select the correct one. There are three different kinds of WebLogic Server installers:
    1) Package Installer: If you have never installed Oracle WebLogic Server and this is the first time you are installing it, what you need is the Package Installer. There are two different kinds of Package Installers:
    a) Generic Package Installer: It does not include a Java runtime, so a supported JDK must already be installed as a prerequisite. If you want to install WebLogic Server with a 64-bit JVM, you have to use the Generic Package Installer. It is platform-independent and can be used to install WebLogic Server on any supported 32-bit or 64-bit platform.
    b) OS-specific Package Installer: As the name suggests, this installer is platform-specific. It is meant for installation with a 32-bit JVM only. Both the Sun and JRockit 32-bit JDKs come bundled with the OS-specific Package Installer, so there is no need to install a JDK in advance.
    2) Development-only and supplemental installers: If you have no plans to use Oracle WebLogic Server in production and need a simple installer for testing purposes only, use this installer. Download the zip distribution, unzip it, and it's ready to use.
    3) Upgrade Installer: The Upgrade Installer is used to upgrade an Oracle WebLogic Server installation from one minor version to a higher minor version. There are no installers available to upgrade an Oracle WebLogic Server installation from one major version to another, though a domain upgrade is always available.
    Note: The following are the different versions of Oracle WebLogic Server in ascending order (excluding versions before WLS 9.2): WLS 9.2.x, WLS 10.0.x, WLS 10.3.x, WLS 12.1.x, where "x" denotes the minor version and 9.2, 10.0, 10.3 and 12.1 are the major versions. So you may use the Upgrade Installer to upgrade from WLS 10.3.1 to 10.3.6, or from 10.0.1 to 10.0.2, etc.
    ------------------------------------- Important links to refer to: Oracle WebLogic Server Documentation | Supported Configuration | Installation Guide for Oracle WebLogic Server
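    As a hedged illustration of how the two package installers are typically launched — the exact file names below are assumptions and vary by release and platform:

        # Generic Package Installer: a plain Java archive, so a supported JDK
        # must already be on the PATH (file name shown for WLS 10.3.6).
        java -jar wls1036_generic.jar

        # OS-specific Package Installer: a native binary with a bundled
        # 32-bit JDK (example shown for 32-bit Linux).
        chmod +x wls1036_linux32.bin
        ./wls1036_linux32.bin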

    Read the article

  • PARTNER WEBCAST- INNOVATIONS IN PRODUCTS PROGRAM (FORMERLY KNOWN AS COMPETENCE VIRTUAL)

    - by mseika
    PARTNER WEBCAST – INNOVATIONS IN PRODUCTS PROGRAM (FORMERLY KNOWN AS COMPETENCE VIRTUAL), JULY 2ND, 2012 AT 04:00 PM CET (03:00 PM GMT). I am pleased to invite you to join the Innovations in Products webcast. Innovations in Products will present Oracle Applications products' new functions and features, including sales positioning. The key objective of these webcasts is to inspire System Integrators' implementation personnel to conduct successful after-sales in their customer projects. Innovations in Products will be presented on the first Monday of each quarter after the billable day (4:00 to 5:00 PM CET). The webcast is intended for System Integrators' Implementation Certified Specialists, but Innovations in Products is open to other interested Oracle Applications system integrator personnel as well. First, two Oracle representatives will discuss Oracle's contribution to partners. Then you will see a product breakout session followed by Q&A with Oracle experts. Each session will last a maximum of 1 hour. A Q&A document covering all questions and answers will be made available after the webcast.
    What are the benefits for partners?
    - Find out how Innovations in Products helps you to improve your after-sales
    - Discover new functions and features so you can enrich your customers' solutions
    - Learn more about Oracle Applications products, especially sales positioning
    - Hear crucial questions raised by colleagues, and learn from their interests
    - Engage and present your questions to subject experts
    - Be inspired by the richness of the Oracle Applications portfolio – for your and your customers' benefit
    Note: Should you already be familiar with a specific product, choose another one. Doing so will expand your knowledge of the overall Applications portfolio. Some presentations contain product demonstrations, although they are not intended to be extremely detailed technical presentations. Note: In the latter part of this email you also have 17 links to the recent Applications products presentations and 6 links to the Public Sector Value Proposition presentations that were presented in the Innovations in Industries program.
    Product breakout sessions:
    - Fusion Applications Technology and Extensibility
    - Fusion Applications - Transforming your Back-Office Accounting Function
    - Fusion HCM & Talent Overview & Extensibility
    - Fusion HCM Compensation Planning
    - Enterprise PLM for the Product Value Chain
    - Oracle's Asset Management and Maintenance Solution
    For more details please visit Innovations in Products and the other breakout sessions on the OPN page.
    Delivery Format
    The Innovations in Products program is a series of FREE prerecorded Applications product presentations followed by Q&A, delivered over the Web. Participants have the opportunity to submit questions during the webcast via chat, and subject matter experts will provide verbal answers live. Innovations in Products consists of several parallel prerecorded product breakout sessions, each lasting a maximum of 1 hour. First, two Oracle representatives will discuss Oracle's contribution to partners; then you'll see the product breakout sessions followed by Q&A with Oracle experts. A Q&A document covering all questions and answers will be made available after the webcast. You can also watch Innovations in Products afterwards, as its content will be available online for the next 6-12 months. The next Innovations in Products webcasts will be presented as follows: July 2nd 2012, October 1st 2012, January 14th 2013, April 8th 2013.
    Note: Depending on local network bandwidth, please allow a few seconds for the presentations to download. You might want to refresh your screen by pressing F5. Duration: Maximum 1 hour. For further information please contact me, Markku Rouhiainen.

    Read the article


< Previous Page | 514 515 516 517 518 519 520 521 522 523 524 525  | Next Page >