Search Results

Search found 2286 results on 92 pages for 'benefits'.

Page 69/92 | < Previous Page | 65 66 67 68 69 70 71 72 73 74 75 76  | Next Page >

  • RTS Game Style Application [closed]

    - by Daniel Wynand van Wyk
    My question may seem somewhat odd, but I hope that my specifications will clarify EXACTLY what it is that I am after. I need some help choosing the right tooling for a particular endeavour. My background is in desktop application development and large back-end systems. I have worked primarily on the Microsoft stack using C# and the .NET Framework. My goal is to develop a 2D, RTS-style interactive office simulation. The simulation will model various office spaces, office equipment, employees and their interactions with one another. The idea is to abstract the concept of an office completely. Under the hood the application will do many things that are nothing like a game. This includes P2P networking, VPN tunnelling, streaming video, instant messaging, document collaboration, remote screen sharing, file-sharing, virus scanning, VOIP, document scanning, faxing, emailing, distributed computing, content management and much more! A somewhat similar thing has been attempted by IBM, where they created a virtual office in Second Life. If their attempt were a game, the game-play would be notably horrible, to say the least! The users/players will drive and control my application through the various objects modelled in the simulation. A single application capable of performing all of these various tasks would be a nightmare to navigate for even the most expert user. Using the concept of a game, I can easily separate functionality by assigning it to objects that relate 1:1 with their real-world counterparts. This can greatly simplify computing for novice users, with many added benefits in terms of visibility, transparency of process and centralized configuration. My hope is to make complex computing tasks accessible to all kinds of users and to greatly reduce the cognitive load associated with using the many different utilities and applications inside office settings. The complexity is therefore limited to the complexity of the space in which you find yourself. I want the application to target as many platforms as possible and run on computers that have no accelerated graphics capabilities. The simulation won't contain any of the fancy eye-candy you find in modern games; on the contrary, my "game" will purposefully be clean and simple. The closest thing I could imagine would be an old game like "Theme Hospital" or the first instalment of "The Sims". All the content will be pre-created and not user-generated like Second Life. New functionality will be added via a plugin system. Given my background and the nature of my "game", I would like to spend most of my time writing code that does not have to do with the simulated office, as the "game" is really just a glorified application menu. I have done much reading about existing engines, frameworks and tools. I need the help of an experienced game developer who has tried and tested various products over the years and who can guide me in the right direction given my very particular needs. I would appreciate any help I can get!

    Read the article

  • What arguments can I use to "sell" the BDD concept to a team reluctant to adopt it?

    - by S.Robins
    I am a bit of a vocal proponent of the BDD methodology. I've been applying BDD for a couple of years now, and have adopted StoryQ as my framework of choice when developing .NET applications. Even though I have been unit testing for many years, and had previously shifted to a test-first approach, I've found that I get much more value out of using a BDD framework, because my tests capture the intent of the requirements in relatively clear English within my code, and because my tests can execute multiple assertions without ending the test halfway through - meaning I can see which specific assertions pass/fail at a glance without debugging to prove it. This has really been the tip of the iceberg for me, as I've also noticed that I am able to debug both test and implementation code in a more targeted manner, with the result that my productivity has grown significantly, and that I can more easily determine where a failure occurs if a problem happens to make it all the way to the integration build, thanks to the output that makes its way into the build logs. Further, the StoryQ API has a lovely fluent syntax that is easy to learn and can be applied in an extraordinary number of ways, requiring no external dependencies in order to use it. So with all of these benefits, you would think it would be easy to introduce the concept to the rest of the team. Unfortunately, the other team members are reluctant to even look at StoryQ to evaluate it properly (let alone entertain the idea of applying BDD), and have convinced each other to try and remove a number of StoryQ elements from our own core testing framework, even though they originally supported the use of StoryQ and it doesn't impact any other part of our testing system. Doing so would end up increasing my workload significantly overall and really goes against the grain, as I am convinced through practical experience that it is a better way to work in a test-first manner in our particular working environment, and can only lead to greater improvements in the quality of our software, given that I've found it easier to stick with test-first using BDD. So the question really comes down to the following: What arguments can I use to really drive the point home that it would be better to use StoryQ, or at the very least apply the BDD methodology? Can you point me to any anecdotal evidence that I can use to support my argument to adopt BDD as our standard method of choice? What counter-arguments can you think of that could suggest that my wish to convert the team efforts to BDD might be in error? Yes, I'm happy to be proven wrong provided the argument is a sound one. NOTE: I am not advocating that we rewrite our tests in their entirety, but rather that we simply start working in a different manner for all future testing work.
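    For readers who have not seen StoryQ's fluent syntax, a minimal sketch of the kind of specification described above is shown below. The story text, step methods and NUnit attributes are purely illustrative (they are not from the question), and the exact overloads should be checked against the StoryQ documentation:

        using System.Reflection;
        using NUnit.Framework;
        using StoryQ;

        [TestFixture]
        public class WithdrawCashSpec
        {
            [Test]
            public void AccountHolderWithdrawsCash()
            {
                new Story("Withdraw cash")
                    .InOrderTo("get money when the bank is closed")
                    .AsA("account holder")
                    .IWant("to withdraw cash from an ATM")
                    .WithScenario("account has sufficient funds")
                        .Given(TheAccountBalanceIs, 100)
                        .When(TheAccountHolderRequests, 20)
                        .Then(TheAtmShouldDispense, 20)
                            .And(TheAccountBalanceShouldBe, 80)
                    .ExecuteWithReport(MethodBase.GetCurrentMethod());
            }

            // Each step is an ordinary method; StoryQ reports pass/fail per step,
            // which is the "see which assertions pass/fail at a glance" behaviour
            // the question praises.
            private void TheAccountBalanceIs(int balance) { /* arrange */ }
            private void TheAccountHolderRequests(int amount) { /* act */ }
            private void TheAtmShouldDispense(int amount) { /* assert */ }
            private void TheAccountBalanceShouldBe(int balance) { /* assert */ }
        }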

    Read the article

  • A little gem from MPN&ndash;FREE online course on Architectural Guidance for Migrating Applications to Windows Azure Platform

    - by Eric Nelson
    I know a lot of technical people who work in partners (ISVs, System Integrators etc). I know that virtually none of them would think of going to the Microsoft Partner Network (MPN) learning portal to find some deep and high quality technical content. Instead they would head to MSDN, Channel 9, msdev.com etc. I am one of those people :-) Hence imagine my surprise when I stumbled upon this little gem Architectural Guidance for Migrating Applications to Windows Azure Platform (your company and hence your Live ID need to be a member of MPN – which is free to join). This is first-class stuff – and represents about 4 hours which is really 8 if you stop and ponder :) Course Structure The course is divided into eight modules. Each module explores a different factor that needs to be considered as part of the migration process. Module 1: Introduction: This section provides an introduction to the training course, highlighting the values of the Windows Azure Platform for developers. Module 2: Dynamic Environment: This section goes into detail about the dynamic environment of the Windows Azure Platform. This session will explain the difference between current development states and the Windows Azure Platform environment, detail the functions of roles, and highlight development considerations to be aware of when working with the Windows Azure Platform. Module 3: Local State: This session details the local state of the Windows Azure Platform. This section details the different types of storage within the Windows Azure Platform (Blobs, Tables, Queues, and SQL Azure). The training will provide technical guidance on local storage usage, how to write to blobs, how to effectively use table storage, and other authorization methods. Module 4: Latency and Timeouts: This session goes into detail explaining the considerations surrounding latency, timeouts and how to assess an IT portfolio. Module 5: Transactions and Bandwidth: This session details the performance metrics surrounding transactions and bandwidth in the Windows Azure Platform environment. This session will detail the transactions and bandwidth costs involved with the Windows Azure Platform and mitigation techniques that can be used to properly manage those costs. Module 6: Authentication and Authorization: This session details authentication and authorization protocols within the Windows Azure Platform. This session will detail information around web methods of authorization, web identification, Access Control benefits, and a walkthrough of Windows Identity Foundation. Module 7: Data Sensitivity: This session details data considerations that users and developers will experience when placing data into the cloud. This section of the training highlights these concerns, and details the strategies that developers can take to increase the security of their data in the cloud. Module 8: Summary: Provides an overall review of the course.

    Read the article

  • OTN Virtual Technology Summit - July 9 - Middleware Track

    - by OTN ArchBeat
    The Architecture of Analytics: Big Time Big Data and Business Intelligence This four-session track, part of the free OTN Virtual Technology Summit on July 9, will present a solution architect's perspective on how business intelligence products in Oracle's Fusion Middleware family and beyond fit into an effective big data architecture, offering insight and expertise from Oracle ACE Directors and product team experts specializing in business intelligence to help you meet your big data business intelligence challenges. Register now! Sessions Oracle Big Data Appliance Case Study: Using Big Data to Analyze Cancer-Genome Relationships Tom Plunkett, Lead Author of the Oracle Big Data Handbook What does it take to build an award-winning Big Data solution? This presentation takes a deep technical dive into the use of the Oracle Big Data Appliance in a project for the National Cancer Institute's Frederick National Laboratory for Cancer Research. The Frederick National Laboratory and the Oracle team won several awards for analyzing relationships between genomes and cancer subtypes with big data, including the 2012 Government Big Data Solutions Award, the 2013 Excellence.Gov Finalist for Innovation, and the 2013 ComputerWorld Honors Laureate for Innovation. [30 mins] Getting Value from Big Data Variety Richard Tomlinson, Director, Product Management, Oracle Big data variety implies big data complexity. Performing analytics on diverse data typically involves mashing up structured, semi-structured and unstructured content. So how can we do this effectively to get real value? How do we relate diverse content so we can start to analyze it? This session looks at how we approach this tricky problem using Endeca Information Discovery. [30 mins] How To Leverage Your Investment In Oracle Business Intelligence Enterprise Edition Within a Big Data Architecture Oracle ACE Director Kevin McGinley More and more organizations are realizing the value Big Data technologies contribute to the return on investment in Analytics. But as an increasing variety of data types reside in different data stores, organizations are finding that a unified Analytics layer can help bridge the divide in modern data architectures. This session will examine how you can enable Oracle Business Intelligence Enterprise Edition (OBIEE) to play a role in a unified Analytics layer and the benefits and use cases for doing so. [30 mins] Oracle Data Integrator 12c As Your Big Data Data Integration Hub Oracle ACE Director Mark Rittman Oracle Data Integrator 12c (ODI12c), as well as being able to integrate and transform data from application and database data sources, also has the ability to load, transform and orchestrate data loads to and from Big Data sources. In this session, we'll look at ODI12c's ability to load data from Hadoop, Hive, NoSQL and file sources, transform that data using Hive and MapReduce processing across the Hadoop cluster, and then bulk-load that data into an Oracle Data Warehouse using Oracle Big Data Connectors. We will also look at how ODI12c enables ETL-offloading to a Hadoop cluster, with some tips and techniques on real-time capture into a Hadoop data reservoir and techniques and limitations when performing ETL on big data sources. [90 mins] Register now!

    Read the article

  • Fast Data Executive Round Table FY14 event kit

    - by JuergenKress
    We are very interested in running joint marketing events with you as our partners! At our SOA Community Workspace (SOA Community membership required) you can find a new Fast Data Executive Round Table FY14 event kit. This event is designed at senior IT and executive level for the purposes of education, awareness, and thought leadership around the subject of big data, and a specific flavor of big data - Fast Data - that has begun to spark the imagination of many Oracle customers. Fast Data is not new. It's a term that was invented initially by Ovum's Tony Baer as a way to represent the collection of 'high velocity' solutions with respect to big data. For Oracle, the Fast Data campaign in FY13 began as a way to tie a broader set of solutions together (SOA/Business Process Management, Data Integration and Business Analytics) under a set of use cases focused on real-time, high velocity data. It has helped to give Oracle a leap-frog advantage over many of the niche integration vendors (e.g. Informatica, Pega, Tibco, Software AG, Terracotta) who haven't been able to address these types of end-to-end use cases, which rely on the combination of filtering, in-memory data processing, correlation, real-time data movement and transformation, end-to-end analytics, and business process management. Only Oracle can address all the dimensions of fast data, and only Oracle can provide a set of engineered solutions to address this space. This event is designed to continue that thought leadership momentum and raise awareness about what Oracle Fast Data solutions are designed to solve. It is designed to highlight real customer solutions and articulate the business benefits that fast data can address. This is not an event that gets into the esoteric technical standards of Hadoop, NoSQL, and in-memory data grids. This is an event that instead gets into the heart of business problems that big data has left unaddressed and charts the path for next steps in fast data. Get the Fast Data Executive Round Table FY14 event kit here. We can support such marketing campaigns and events with:
    - Oracle speakers: contact your partner manager
    - Marketing budget: contact your A&C marketing manager
    - Event location: free use of Oracle Customer Visitor Centers conference rooms
    - Promotion of your event at events.oracle.com: http://tinyurl.com/eventspecialized
    - E-Blast: invite customers to your event (contact your A&C marketing manager)
    For additional marketing kits, e.g. for Business Process Management, please visit our SOA Community Workspace. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • How to be successful at BDD Specifications Workshops?

    - by sigo
    Today we tried to introduce BDD in our software development process by having a specification workshop. For this workshop we had 2 developers, 1 tester and 1 business analyst. The workshop lasted 1h30 and by the end of it we had managed to figure out some BDD scenarios for our new feature. We tried to focus on finding the scenarios that we could miss, and the difficult ones. At the end of the workshop some people were actually unhappy with the workshop. One developer felt he had wasted his time, as he was used to being given the scenarios directly by the business analyst and reviewing them with her. The business analyst didn't feel confident with our scenario coverage (she had a feeling that we could have missed other important stuff) but, more importantly, felt that this workshop was also a waste of time, as she could have figured out all these scenarios by herself in a shorter period of time. So my question is how that kind of workshop can actually work. In theory, given you have a new feature to develop, you put the three 'amigos' (dev/tester/BA) in the same room so that they can collaborate on writing the different requirements for the new feature using examples. I can see all the benefits of that, especially in terms of knowledge sharing and a common product/end goal/done vision. But in practice, we still think it is more cost-effective to first have the BA work on her own on the examples and only then have the scenarios reviewed/reworked by the three 'amigos'. By having the BA work on her own, we actually feel more confident that we are less likely to miss stuff, and we still get to review the scenarios afterwards to double-check. We don't think that simple brainstorming/deliberate discovery is actually enough to seriously cover all the requirements for a feature. The business analyst is actually the best person for that kind of stuff. What we do is simply review what she wrote and see if we then have a common understanding (which could then lead to rewriting some of her scenarios or adding new ones she could have missed). This workshop lasted 1h30 and, by the end of it, we didn't feel confident enough about what we did... sure, we could have spent more time on it, but honestly most people get exhausted after 1h30 of brainstorming. So how can you get that kind of workshop to work effectively in practice?

    Read the article

  • Oracle Application in DMZ (Demilitarized Zone)

    - by PRajkumar
    Business Needs: Large organizations want to expose their Oracle Application services (HTTP/HTTPS and SSL) outside their private network. Usually these exposures must exist to promote external communication, so they want to keep an external network from directly referencing the internal network.
    Business Challenges:
    - The business does not want to compromise security information
    - The business cannot expose internal domain or internal URL information
    Business Solution: A DMZ is the solution to this problem. In Oracle Application we can achieve this in the following way:
    - Oracle Application consists of a fleet of nodes (FND_NODES), so first decide which node has to be exposed to the public
    - To expose the node to the public, use the "Node Trust Level" profile
    - Set the node to Public/Private (Normal -> private, External -> public)
    - Set the "Responsibility Trust Level" profile to decide whether to expose an Application Responsibility inside or outside the firewall
    Solution Features:
    - Exposed web services can be accessed by both internal and external users
    - Configurable and can be very easily rolled out
    - Internal network and business data are secured from outside traffic
    - Unauthorized access to the internal network from outside is prohibited
    - No need for a VPN or a secure FTP server
    Benefits:
    - Large organizations running Oracle Application can expose their web services (HTTP/HTTPS and SSL) to the internet without compromising security information and without exposing their internal domain
    Possible Weak Points:
    - If the external firewall is compromised, then the external application server is also compromised, exposing an attack on the E-Business Suite database
    - There's nothing to prevent internal users from attacking the internal application server, also exposing an attack on the E-Business Suite database
    Reference Links:
    - https://blogs.oracle.com/manojmadhusoodanan/tags/dmz

    Read the article

  • Enum types, FlagsAttribute & Zero value – Part 2

    - by nmgomes
    In my previous post I wrote about why you should pay attention when using enum value Zero. After reading that post you are probably thinking like Benjamin Roux: Why don't you start the enum values at 0x1? Well I could, but doing that I lose the ability to have Sync and Async mutually exclusive by design. Take a look at the following enum types: [Flags] public enum OperationMode1 { Async = 0x1, Sync = 0x2, Parent = 0x4 } [Flags] public enum OperationMode2 { Async = 0x0, Sync = 0x1, Parent = 0x2 } To achieve mutual exclusion between the Sync and Async values using OperationMode1 you would have to test both values: protected void CheckMainOperationMode(OperationMode1 mode) { switch (mode) { case (OperationMode1.Async | OperationMode1.Sync | OperationMode1.Parent): case (OperationMode1.Async | OperationMode1.Sync): throw new InvalidOperationException("Cannot be Sync and Async simultaneous"); case (OperationMode1.Async | OperationMode1.Parent): case (OperationMode1.Async): break; case (OperationMode1.Sync | OperationMode1.Parent): case (OperationMode1.Sync): break; default: throw new InvalidOperationException("No default mode specified"); } } but this is a by-design constraint in OperationMode2. Why? Simply because 0x0 is the neutral element for the bitwise OR operation. Knowing this singularity, replacing and simplifying the previous method, you get: protected void CheckMainOperationMode(OperationMode2 mode) { switch (mode) { case (OperationMode2.Sync | OperationMode2.Parent): case (OperationMode2.Sync): break; case (OperationMode2.Parent): default: break; } } This means that: if both Sync and Async values are specified, the Sync value always wins (Zero is the neutral element for the bitwise OR operation); if no Sync value is specified, the Async mode is used. Here is the final method implementation: protected void CheckMainOperationMode(OperationMode2 mode) { if ((mode & OperationMode2.Sync) == OperationMode2.Sync) { } else { } } All content above proves that the Async value (0x0) is useless from the arithmetic perspective, but without it we lose readability. The following IF statements are logically equal, but the first is definitely more readable: if (mode == (OperationMode2.Async | OperationMode2.Parent)) { } if (mode == OperationMode2.Parent) { } Here's another example where you can see the benefits of the 0x0 value: the default value can be used explicitly. <my:Control runat="server" Mode="Async,Parent"> <my:Control runat="server" Mode="Parent">
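    For readers following along, here is a small self-contained sketch (not taken from the original post; the type and method names are illustrative) that shows the parenthesised flag test and the neutral-element behaviour in runnable form:

        using System;

        [Flags]
        public enum OperationMode2
        {
            Async  = 0x0,   // neutral element: OR-ing it in changes nothing
            Sync   = 0x1,
            Parent = 0x2
        }

        public static class ModeChecker
        {
            public static bool IsSync(OperationMode2 mode)
            {
                // Parentheses matter: '==' binds tighter than '&' in C#.
                return (mode & OperationMode2.Sync) == OperationMode2.Sync;
            }

            public static void Main()
            {
                // Async | Parent is numerically identical to Parent alone,
                // but spelling out Async keeps the intent readable.
                var mode = OperationMode2.Async | OperationMode2.Parent;
                Console.WriteLine(IsSync(mode));                                        // False -> async path
                Console.WriteLine(IsSync(OperationMode2.Sync | OperationMode2.Parent)); // True  -> sync path
            }
        }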

    Read the article

  • Is return-type-(only)-polymorphism in Haskell a good thing?

    - by dainichi
    One thing that I've never quite come to terms with in Haskell is how you can have polymorphic constants and functions whose return type cannot be determined by their input type, like class Foo a where foo :: Int -> a Some of the reasons that I do not like this: Referential transparency: "In Haskell, given the same input, a function will always return the same output", but is that really true? read "3" returns 3 when used in an Int context, but throws an error when used in a, say, (Int,Int) context. Yes, you can argue that read is also taking a type parameter, but the implicitness of the type parameter makes it lose some of its beauty in my opinion. Monomorphism restriction: One of the most annoying things about Haskell. Correct me if I'm wrong, but the whole reason for the MR is that computation that looks shared might not be, because the type parameter is implicit. Type defaulting: Again one of the most annoying things about Haskell. Happens e.g. if you pass the result of functions polymorphic in their output to functions polymorphic in their input. Again, correct me if I'm wrong, but this would not be necessary without functions whose return type cannot be determined by their input type (and polymorphic constants). So my question is (running the risk of being stamped as a "discussion question"): Would it be possible to create a Haskell-like language where the type checker disallows these kinds of definitions? If so, what would be the benefits/disadvantages of that restriction? I can see some immediate problems: If, say, 2 only had the type Integer, 2/3 wouldn't type check anymore with the current definition of /. But in this case, I think type classes with functional dependencies could come to the rescue (yes, I know that this is an extension). Furthermore, I think it is a lot more intuitive to have functions that can take different input types than to have functions that are restricted in their input types, but we just pass polymorphic values to them. The typing of values like [] and Nothing seems to me like a tougher nut to crack. I haven't thought of a good way to handle them. I doubt I am the first person to have had thoughts like these. Does anybody have links to good discussions about this Haskell design decision and the pros/cons of it?

    Read the article

  • Understanding the SQL Server 2008 R2 Installation Center

    - by Enrique Lima
    What is available to us through those links? Have you taken the time to explore and identify what could be useful to you? One of many gems that has come to my attention is the possibility of provisioning SQL Server to work in an image-based environment (hint: virtualization template, perhaps?!?).
    Planning: Includes requirements information, documentation, how-to guides, online documentation installation and other tools. Among the other tools you will find the System Configuration Checker and the Upgrade Advisor, both very important to ensure your deployment and installation will be successful.
    Installation: This section focuses on getting installations going, from standalone to cluster when it comes to new instances. Add new nodes to an existing cluster, and also perform upgrades (in this case to SQL Server 2008 R2). Also part of this is the option to find available updates.
    Maintenance: In this section we find options that will assist us with tasks from repairing corrupt installations to removing nodes from a cluster. An interesting option (whose benefits we should discuss later in another post) is the Edition Upgrade: a feature expansion and addition based on your product installation (Developer to Enterprise, for example).
    Tools: From the System Configuration Checker, to identify readiness for a successful deployment, to being able to report on installed features, and to upgrade existing SSIS packages developed in the 2005 offering to the 2008 R2 release.
    Resources: Useful and essential links to gather information and guidance.
    Advanced: Here is where it gets interesting. I break this down into 3 main groups:
    - Installation Automation: When you install SQL Server, a configuration file gets dropped (ConfigurationFile.ini) that allows you to perform automated installations. There are switches and options that go with this to make that process work (see the sketch below).
    - Cluster configuration for Sysprep: Create images that are cluster-ready; two options, start the prep work and then complete it at the final destination.
    - Stand-alone configuration for Sysprep: Like the clustering counterpart, two options, prep and complete, giving you the option to create standard templates for your SQL Server deployments.
    I find it fitting that the three topics listed here should (and will) be additional topics I discuss.
    Options: Very clear and specific about what this means: select the processor type or the installation media root path.
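    As a rough illustration of the Installation Automation point above, an unattended install is normally driven by pointing Setup.exe at a configuration file. The switches and INI keys below are only a sketch with placeholder values; check them against the SQL Server 2008 R2 setup documentation before relying on them:

        REM Unattended install, run from the installation media root (switches are illustrative):
        Setup.exe /ConfigurationFile=ConfigurationFile.ini /Q /IACCEPTSQLSERVERLICENSETERMS

        ; ConfigurationFile.ini excerpt (keys are standard setup parameters; values are placeholders):
        ACTION="Install"
        FEATURES=SQLENGINE,SSMS
        INSTANCENAME="MSSQLSERVER"
        SQLSYSADMINACCOUNTS="CONTOSO\DBAdmins"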

    Read the article

  • ATG Live Webcast Nov. 8th: Advanced Management of EBS with Oracle Enterprise Manager

    - by Bill Sawyer
    The task of managing and monitoring Oracle E-Business Suite environments can be very challenging. The Application Management Pack plug-in is part of Oracle Enterprise Manager 12c Application Management Suite for Oracle E-Business Suite. The Application Management Pack plug-in is designed to monitor and manage all the different technologies that constitute Oracle E-Business Suite applications, including midtier, configuration, host, and database management, to name just a few. Customers that have implemented Oracle Enterprise Manager have experienced dramatic improvements in system visibility, diagnostic capability, and administrator productivity. This webcast will highlight the key features and benefits of Oracle Enterprise Manager and the latest version of the Oracle Application Management Suite for Oracle E-Business Suite.
    Advanced Management of Oracle E-Business Suite with Oracle Enterprise Manager
    Date: Thursday, November 8, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenters: Angelo Rosado, Principal Product Manager, E-Business Suite ATG; Lauren Cohn, Principal Curriculum Developer, E-Business Suite ATG
    Webcast Registration Link (Preregistration is optional but encouraged)
    To hear the audio feed:
    Domestic Participant Dial-In Number: 877-697-8128
    International Participant Dial-In Number: 706-634-9568
    Additional International Dial-In Numbers Link
    Dial-In Passcode: 103191
    To see the presentation, the Direct Access Web Conference details are:
    Website URL: https://ouweb.webex.com
    Meeting Number: 591460967
    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of our past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • JCP.next.3: time to get to work

    - by Patrick Curran
    As I've previously reported in this blog, we planned three JSRs to improve the JCP’s processes and to meet our members’ expectations for change. The first - JCP.next.1, or more formally JSR 348: Towards a new version of the Java Community Process - was completed in October 2011. This focused on a small number of simple but important changes to make our process more transparent and to enable broader participation. We're already seeing the benefits of these changes as new and existing JSRs adopt the new requirements. However, because we wanted to complete this JSR quickly we deliberately postponed a number of more complex items, including everything that would require modifying the JSPA (the legal agreement that members sign when they join the organization) to a follow-on JSR. The second JSR (JSR 355: JCP Executive Committee Merge) is in progress now and will complete later this year. This JSR is even simpler than the first, and is focused solely on merging the two Executive Committees into one for greater efficiency and to encourage synergies between the Java ME and Java SE platforms. Continuing the momentum to move Java and the JCP forward we have just filed the third JSR (JCP.next.3) as JSR 358: A major revision of the Java Community Process. This JSR will modify the JSPA as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons we expect to spend a considerable amount of time working on it - at least a year, and probably more. The current version of the JSPA was created back in 2002, although some minor changes were introduced in 2005. Since then the organization and the environment in which we operate have changed significantly, and it is now time to revise our processes to ensure that they meet our current needs. We have a long list of topics to be considered, including the role of independent implementations (those not derived from the Reference Implementation), licensing and open source, ensuring that our new transparency requirements are implemented correctly, compatibility policy and TCKs, the role of individual members, patent policy, and IP flow. The Expert Group for JSR 358, as with all process-change JSRs, consists of all members of the Executive Committees. Even though the JSR has just been filed we started discussions on the various topics several months ago (see the EC's meeting minutes for details) and our EC members - including the new members who joined within the last year or two - are actively engaged. Now it's your opportunity to get involved. As required by version 2.8 of our Process (introduced with JSR 348) we will conduct all our business in the open. We have a public java.net project where you can follow and participate in our work. All of our deliberations will be copied to a public Observer mailing list, we'll track our issues on a public Issue Tracker, and all our documents (meeting agendas and minutes, task lists, working drafts) will be published in our Document Archive. We're just getting started, but we do want your input. Please visit us on java.net where you can learn how to participate. Let's get to work...

    Read the article

  • Why are MVC & TDD not employed more in game architecture?

    - by secoif
    I will preface this by saying I haven't looked at a huge amount of game source, nor built much in the way of games. But coming from trying to employ 'enterprise' coding practices in web apps, looking at game source code seriously hurts my head: "What is this view logic doing in with business logic? This needs refactoring... so does this, refactor, refactorrr". This worries me as I'm about to start a game project, and I'm not sure whether trying to MVC/TDD the dev process is going to hinder us or help us, as I don't see many game examples that use this or much push for better architectural practices in the community. The following is an extract from a great article on prototyping games, though to me it seemed exactly the attitude many game devs seem to use when writing production game code: Mistake #4: Building a system, not a game ...if you ever find yourself working on something that isn't directly moving you forward, stop right there. As programmers, we have a tendency to try to generalize our code, and make it elegant and be able to handle every situation. We find that itch terribly hard not to scratch, but we need to learn how. It took me many years to realize that it's not about the code, it's about the game you ship in the end. Don't write an elegant game component system, skip the editor completely and hardwire the state in code, avoid the data-driven, self-parsing, XML craziness, and just code the damned thing. ... Just get stuff on the screen as quickly as you can. And don't ever, ever, use the argument "if we take some extra time and do this the right way, we can reuse it in the game". EVER. Is it because games are (mostly) visually oriented, so it makes sense that the code will be weighted heavily towards the view, and thus any benefit from moving stuff out to models/controllers is fairly minimal, so why bother? I've heard the argument that MVC introduces a performance overhead, but this seems to me to be a premature optimisation, and there'd be more important performance issues to tackle before you worry about MVC overheads (e.g. render pipeline, AI algorithms, data structure traversal, etc). Same thing regarding TDD. It's not often I see games employing test cases, but perhaps this is due to the design issues above (mixed view/business) and the fact that it's difficult to test visual components, or components that rely on probabilistic results (e.g. operating within physics simulations). Perhaps I'm just looking at the wrong source code, but why do we not see more of these 'enterprise' practices employed in game design? Are games really so different in their requirements, or is it a people/culture issue (i.e. game devs come from a different background and thus have different coding habits)?

    Read the article

  • Blank desktop after updates today, only unity2d works now [closed]

    - by NewUbuntuUser
    Possible Duplicate: Unity doesn't load, no Launcher, no Dash appears I have been using 12.04 (Wubi) for a week now and this is my first exposure to Linux. Everything was going fine till now, but today as soon as I updated through the Update Manager, Docky gave a message that compositing is required or something, and I got an error window which asked me to report the error, and then the Update Manager asked me to reboot. After rebooting, I just get a blank desktop screen: no launcher, no toolbar. I have to restart with the power key. However, if I log in through Unity 2D everything is fine except the benefits of the 3D environment. I guess something got messed up after the update and I can't figure out which program or file caused this mess. I would highly appreciate it if someone could help me out with this, as I really liked working on Ubuntu after Windows 7. Thanks! SOLVED -- Thanks @jrg for the link provided; it helped me come out of this mess. Actually, some update of CompizConfig made the Unity plugin inactive and did something to the Animation add-ons, the ones you use for the burn effect etc. What I did was: on the blank desktop I pressed Ctrl + Alt + T to bring up the terminal and typed ccsm. This brought up the CompizConfig Settings Manager; there I enabled the Unity plugin and rebooted. Everything started working fine. But then again, when I started the add-ons plugin, everything went back to square one :( . This time pressing Ctrl + Alt + T also did not help. Then I tried Ctrl + Alt + F1; it brought up a terminal, and I typed unity --reset. It did some resetting and in the end it showed some compositing done. I pressed Ctrl + Alt + F7 to come back to the desktop, and here it was, my old sweet desktop... it had the animation effects too, like wobbly windows, except the add-ons like the burn effect. I guess something went wrong with the last update of that add-on plugin, and now whenever I try turning that on, everything goes poof!! 6 hours wasted, but I guess I learnt something new... not bad for an orthodontist, I guess :))

    Read the article

  • It’s that Time of Year Again… OPN Survey Time!

    - by Kristin Rose
    Whether you're still carving pumpkins or getting ready to carve a turkey, be sure to carve out your chance to win one of 5 Apple iPads by taking our Oracle PartnerNetwork Survey! On October 12th, Oracle Alliances and Channels sent out invitations asking you to complete the Oracle PartnerNetwork Survey and provide feedback on Oracle's Enablement, Partner Business Center, OPN Portal, Communications, Benefits, OPN Competency Center, Solutions Catalog and Marketing. So, if you're feeling as festive as we are, and received an invitation to participate, we hope you take this opportunity to provide us feedback so we can continue to drive meaningful programs and interactions with you, our partners! If you didn't receive an invitation but still want to get the holidays started with your very own iPad, you're welcome to send us a note at [email protected] with your candid feedback, or request to be added to the list for future surveys. The drawing for the Apple iPads will take place after the survey closes in mid-November. Thank you in advance for your participation; we look forward to receiving your response. Let the holiday season begin! The OPN Communications Team

    Read the article

  • Reasons NOT to use JSF [closed]

    - by Vain Fellowman
    I am new to Stack Exchange, but I figured you would be able to help me. We're creating a new Java Enterprise application, replacing a legacy JSP solution. Due to many, many changes, the UI and parts of the business logic will be completely rethought and reimplemented. Our first thought was JSF, as it is the standard in Java EE. At first I had a good impression. But now I am trying to implement a functional prototype, and I have some really serious concerns about using it. First of all, it creates the worst, most cluttered, invalid pseudo-HTML/CSS/JS mix I've ever seen. It violates every single rule I learned in web development. Furthermore it throws together what never should be so tightly coupled: layout, design, logic and communication with the server. I don't see how I would be able to extend this output comfortably, whether styling with CSS, adding UI candy (like configurable hot-keys, drag-and-drop widgets) or whatever. Secondly, it is way too complicated. Its complexity is outstanding. If you ask me, it's a poor abstraction of basic web technologies, crippled and useless in the end. What benefits do I have? None, if you think about it. Hundreds of components? I see tens of thousands of HTML/CSS snippets, tens of thousands of JavaScript snippets and thousands of jQuery plug-ins in addition. It solves really many problems - problems we wouldn't have if we didn't use JSF, or the front-controller pattern at all. And lastly, I think we will have to start over in, say, 2 years. I don't see how I can implement all of our first GUI mock-up (besides, we have no JSF expert in our team). Maybe we could hack it together somehow. And then there will be more. I'm sure we could hack our hack. But at some point, we'll be stuck. Because everything above the service tier is in the control of JSF, we will have to start over. My suggestion would be to implement a REST API using JAX-RS, then create an HTML5/JavaScript client with client-side MVC (or some flavor of MVC). By the way, we will need the REST API anyway, as we are developing a partial Android front-end, too. I doubt that JSF is the best solution nowadays. As the Internet is evolving, I really don't see why we should use this 'rake'. Now, what are the pros/cons? How can I emphasize my point to not use JSF? What are strong points to use JSF over my suggestion?

    Read the article

  • How can I forward an application with X11 in grayscale

    - by ??????? ???????????
    I am trying to run a graphical application at home and display it on a laptop which is located about six routing hops away. The problem is that the connection is so slow (or rather there is so much GOOEY being transferred) that the mouse is unresponsive and it takes a "long time" to redraw the window even at a resolution of 800x600 pixels. The connection speeds are 10 Mbit up at home and about 1 Mbit down on the laptop, which I think should be sufficient for looking at some GUI in (almost) real time. Since this traffic is sent over a secure shell, I have enabled Compression with the highest CompressionLevel, along with Ciphers set to blowfish-cbc. This has substantially improved the responsiveness of the application, making it nearly usable. However, my goal is to improve the performance even further by sacrificing colors and even frame rate. The application to be displayed is a Qemu SDL window with a graphically-oriented OS in it. This is not strictly relevant, but perhaps there are options to tweak the SDL output which I am not aware of. A possible workaround would be to run the application in a "hidden" X server and enable TigerVNC on that X server. This would automatically give me the benefits of an optimized VNC viewport, but the goal is to do without it (reduce complexity). The question I'm asking is: what are my options for reducing the data rate generated on the server in order to make the graphical application more usable on the client? As mentioned, colors are not important and I could probably work with 5-16 fps. Both machines are running Gentoo, with the software in question being:
    workstation: X.Org X Server 1.10.4, OpenSSH_5.8p1-hpn13v10, OpenSSL 1.0.0e, QEMU emulator version 0.15.1 (qemu-kvm-0.15.1)
    laptop: X.Org X Server 1.12.2, OpenSSH_5.8p1-hpn13v10lpk, OpenSSL 1.0.0j
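    For reference, the compression, cipher and X11 forwarding settings described above can be expressed either as a one-off command or in ~/.ssh/config on the laptop. This is only a sketch of the setup the question already describes, with the host and user names as placeholders:

        # One-off invocation: trusted X11 forwarding, compression, a cheap cipher
        ssh -Y -C -c blowfish-cbc user@workstation

        # Equivalent ~/.ssh/config stanza
        Host workstation
            ForwardX11 yes
            ForwardX11Trusted yes
            Compression yes
            Ciphers blowfish-cbc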

    Read the article

  • Benchmarking a file server

    - by Joel Coel
    I'm working on building a new file server... a simple Windows Server box with a few terabytes of disk space to share on the LAN. The pain of current hard drive prices aside :( -- I would like to get some benchmarks for this device under load compared to our old server. The old server was installed in 2005 and had 5 136GB 10K disks in RAID 5. The new server has 8 1TB disks in two RAID 10 volumes (plus a hot spare for each volume), but they're only 7.2K rpm, and of course with a much larger cache size. I'd like to get an idea of the performance expectations of the new server relative to the old. Where do I get started? I'd like to know both the raw potential under different kinds of load for each server, as well as an idea of what our real-world load looks like and how it will translate. Will disk load even matter, or will performance be driven more by the network connection? I could probably fumble through some disk I/O and wait counters in Performance Monitor, but I don't really know what to look for, which counters to watch, or for how long and when. FWIW, I'm expecting a nice improvement because of the benefits of having two different volumes and the better RAID 10 performance vs RAID 5, in spite of using slower disks... but I'd like to get an idea of how much.
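    On the "which counters to watch" part, a common starting point is to baseline the standard Performance Monitor disk and network counters on the old server during a normal workday and repeat the capture on the new one. The command below is a sketch using typeperf (which ships with Windows Server); the counter paths are the stock ones, while the interval and output file are arbitrary choices:

        REM Sample core disk latency/queue/throughput counters every 15 seconds into a CSV
        typeperf -si 15 -o fileserver-baseline.csv ^
          "\PhysicalDisk(_Total)\Avg. Disk sec/Read" ^
          "\PhysicalDisk(_Total)\Avg. Disk sec/Write" ^
          "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
          "\PhysicalDisk(_Total)\Disk Bytes/sec" ^
          "\Network Interface(*)\Bytes Total/sec"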

    Read the article

  • Reverse Proxies and AJAX

    - by osij2is
    A client of ours is using IBM/Tivoli WebSEAL, a reverse-proxy server, for some of their internal users. Our web application (ASP.NET 2.0) is a fairly straightforward web/database application. Currently, our client users that are going through the WebSEAL proxy are having problems with a 3rd-party .NET control. Users who are not going through the proxy have no issues. The 3rd-party control is nothing more than an AJAX dynamic tree that on each click requests all the nodes for each leaf. Now our client claims that once users click on a node in the control, the control itself freezes in such a way that they don't see anything populate. Users see a "Loading..." message appear but no new activity thereafter. They have to leave the page and go back to the original page in order to view the new nodes. I've never worked with a reverse proxy before, so I have googled quite a bit on the subject and even found an article on SF. IBM/Tivoli has mentioned this issue before, but this is about all they mention at all. While the IBM doc is very helpful, all of our AJAX is from the 3rd-party control. I've tried troubleshooting using Firebug, but by not being behind the reverse proxy I'm unable to truly replicate the problem. My question is: does anyone have experience with reverse proxies and issues with AJAX sites? How can I go about proving what the exact issue is? Currently we're negotiating remote access, so assume for the greater part that I will have access to a machine that's using the WebSEAL proxy. P.S. I realize this question might teeter on the StackOverflow/ServerFault jurisdictional debate, but I'm trying to investigate from the systems perspective. I have no experience with reverse proxies (and I'm unclear on the benefits) and little with forwarding proxies.

    Read the article

  • Best Practice: Apache File Upload

    - by matnagel
    I am looking for a solution for trusted users to upload PDF files via HTML forms (with maybe PHP involved). This is quite a standard Ubuntu Linux server with Apache 2.x and PHP 5. I am wondering what the benefits of the Apache file upload module (Apache Commons FileUpload) are. There have been no updates for some time; is it actively maintained? What are the advantages over traditional PHP upload with Apache 2 without this module? http://commons.apache.org/fileupload I remember traditional PHP file upload is difficult, with some pitfalls; will the Apache file upload module improve the situation? The solution I am looking for will be part of an existing website and be integrated into the admin web frontend. Things I am not considering are WebDAV, SSH, FTP, FTPS, and FTP over SSH. It should work with a browser and without installing special client software, so I am asking about a browser-based upload without special client-side requirements. I can require a modern browser like Firefox 3.5 or newer, or a modern WebKit browser like Chrome or Safari, from the users.

    Read the article

  • ZFS - zpool ARC cache plus L2ARC benchmarking

    - by jemmille
    I have been doing lots of I/O testing on a ZFS system I will eventually use to serve virtual machines. I thought I would try adding SSDs for use as cache to see how much faster I can get the read speed. I also have 24GB of RAM in the machine that acts as ARC. vol0 is 6.4TB and the cache disks are 60GB SSDs. The pool is as follows:
    pool: vol0
    state: ONLINE
    scrub: none requested
    config:
    NAME                      STATE   READ WRITE CKSUM
    vol0                      ONLINE  0    0     0
      c1t8d0                  ONLINE  0    0     0
    cache
      c3t5001517958D80533d0   ONLINE  0    0     0
      c3t5001517959092566d0   ONLINE  0    0     0
    The issue is I'm not seeing any difference with the SSDs installed. I've tried bonnie++ benchmarks and some simple dd commands to write a file then read the file. I have run benchmarks before and after adding the SSDs. I've ensured the file sizes are at least double my RAM so there is no way it can all get cached locally. Am I missing something here? When am I going to see the benefits of having all that cache? Am I simply not going to under these circumstances? Are the benchmark programs not good for testing the effect of cache because of the way (and what) they write and read?
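    One way to check whether the L2ARC is being exercised at all during a run is to watch per-device activity and the ARC kstats while the benchmark executes. The commands below are a sketch for a Solaris-derived system; exact kstat field names can vary by release:

        # Per-vdev I/O every 5 seconds: the two cache devices get their own rows
        zpool iostat -v vol0 5

        # ARC and L2ARC hit/miss/size counters
        kstat -m zfs -n arcstats | egrep 'l2_hits|l2_misses|l2_size'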

    Read the article

  • Can any iSCSI NAS appliance replicate / clone a LUN to an external drive?

    - by Boden
    I would like to back up using Windows Imaging to some kind of NAS appliance. I believe this will require the NAS to support iSCSI. I would then like the appliance to support the replication of the iSCSI LUN to an external eSATA or USB disk connected directly to the appliance. I've found plenty of NAS appliances that can do iSCSI and replicate to an external drive, but none that I've found thus far can do both at once. That is, the devices can do iSCSI, but then the replication feature doesn't work. The idea here is to back up to an appliance located in a secure office far away from the server room. Offsite backups to an external hard drive could be managed from the appliance. The benefits of such a setup would be: 1) very unlikely that fire or random theft would affect both the server-room backup and the "remote" backup appliance 2) offsite backups could be managed by multiple trusted people without granting access to the server room 3) Windows imaging provides poor man's deduplication, so each backup volume can contain a decent backup history. I understand why this would be a non-trivial thing to implement, but I'm wondering if such a thing exists? Preferably a tabletop, low to medium cost device. Alternative solutions welcome. NOTE: I'm backing up very few but very large files, so file replication is not a good option.

    Read the article

  • Make dhcp assign same IP and hostname for different interfaces at one machine

    - by Egeshi
    I have a feeling that the question itself looks stupid, but it is not. Please let me clarify. I have dynamic DNS with BIND and NIS configured on my LAN, and a laptop which I am using in both wireless and wired mode. I mean that sometimes I have to use the wired interface to achieve higher throughput, but most of the time I don't need it and use wireless mode. Everything works great. The issue is that I want both interfaces to get the same IP from DHCP, just for convenient firewall setup. If I add both hosts to dhcpd in this manner:
    # bt wireless
    host bt {
        hardware ethernet 00:1f:1f:62:60:28;
        fixed-address 172.16.77.110;
    }
    # bt wired
    host bt {
        hardware ethernet 00:14:22:b7:5a:de;
        fixed-address 172.16.77.110;
    }
    dhcpd logs the following message:
    dhcpd: Dynamic and static leases present for 172.16.77.110
    dhcpd: Remove host declaration bt-wired or remove 172.16.77.110
    dhcpd: from the dynamic address pool for 172.16/16
    Host records are added outside of any subnet, but it makes no difference if I put them there; the effect is still the same. This is not critical, but neither is it just a whim, because even if DHCP seems to work fine for that "bt" host, I cannot make a connection TO it from a remote machine anymore with this definitely incorrect DHCP config. I'd be thankful if someone spares a minute for advice about how to configure dhcpd correctly. UPDATE. I realize that there's a solution to assign a different hostname in the DHCP config, but I would like to keep the benefits of short host names.
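    For what it's worth, the log lines quoted above point at the usual cause: the fixed address sits inside the subnet's dynamic range. A sketch of a layout that avoids that overlap is shown below (the range, netmask and host names are illustrative; whether two host declarations can safely share one fixed-address is a separate question, since only one interface can hold the lease at a time):

        subnet 172.16.0.0 netmask 255.255.0.0 {
            # keep the dynamic pool clear of the statically reserved addresses
            range 172.16.1.10 172.16.77.99;
        }

        # bt wireless
        host bt-wlan {
            hardware ethernet 00:1f:1f:62:60:28;
            fixed-address 172.16.77.110;
        }

        # bt wired
        host bt-eth {
            hardware ethernet 00:14:22:b7:5a:de;
            fixed-address 172.16.77.110;
        }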

    Read the article

  • 'txn-current-lock': Permission denied [500, #13] - Subversion + Apache Configuration Issue

    - by wfoster
    Current Setup: Fedora 13 32-bit, Apache 2.2.16, Subversion repositories set up under /var/www/svn. I have two different repositories under this directory, so my /etc/httpd/conf.d/subversion.conf is set up this way:
    LoadModule dav_svn_module modules/mod_dav_svn.so
    LoadModule authz_svn_module modules/mod_authz_svn.so
    <Location /svn>
        DAV svn
        SVNListParentPath on
        SVNParentPath /var/www/svn
        <LimitExcept GET PROPFIND OPTIONS REPORT>
            AuthType Basic
            AuthName "Subversion Repository"
            AuthUserFile /etc/httpd/.htpasswd
            Require valid-user
        </LimitExcept>
    </Location>
    After copying over my repos and using
    chmod 755 -R /var/www/svn
    chcon -R -t httpd_sys_content_t /var/www/svn
    chown apache:apache -R /var/www/svn
    I can browse my repos fine through the browser, and I can update all my working copies. However, when I try to check in from anywhere I get the same error: Can't open file '/var/www/svn/repo/db/txn-current-lock': Permission denied. I have been working on this issue for a while now and can't seem to find a solution. It might be of some use to know that the repo existed on a different server before this; it has now been moved to this new server. Everything I have read seems to indicate that the permissions for Apache are incorrect; however, Apache is set to run as User apache and Group apache. So as far as I can tell my setup is correct. The behavior is not, though. Any ideas? Solution: The only way I was able to get this to work was to disable SELinux. It could also have been done by setting the proper booleans with SELinux via setsebool and getsebool, but since this is just a home server, I decided to disable SELinux and am reaping the benefits now.
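    For anyone who would rather keep SELinux enabled, the usual alternative to disabling it is to label the repositories writable for httpd or flip the relevant boolean. The commands below are a sketch; the exact type and boolean names should be checked against the policy shipped with Fedora 13:

        # Label the repositories so Apache may write to them (not just read):
        chcon -R -t httpd_sys_rw_content_t /var/www/svn

        # Make the labelling persistent across relabels:
        semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/svn(/.*)?"
        restorecon -Rv /var/www/svn

        # Or inspect and flip the httpd booleans exposed by the policy:
        getsebool -a | grep httpd
        setsebool -P httpd_unified on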

    Read the article

< Previous Page | 65 66 67 68 69 70 71 72 73 74 75 76  | Next Page >