Search Results

Search found 11985 results on 480 pages for 'legal issues'.

Page 270/480

  • Low-level game engine renderer design

    - by Mark Ingram
    I'm piecing together the beginnings of an extremely basic engine which will let me draw arbitrary objects (SceneObject). I've got to the point where I'm creating a few sensible-sounding classes, but as this is my first outing into game engines, I've got the feeling I'm overlooking things. I'm familiar with compartmentalising larger portions of the code so that individual sub-systems don't overly interact with each other, but I'm thinking more of the low-level stuff, starting from vertices and working up. So if I have a Vertex class, I can combine that with a list of indices to make a Mesh class. How does the engine determine identical meshes for objects? Or is that left to the level designer? Once we have a Mesh, that can be contained in the SceneObject class. And a list of SceneObjects can be placed into the Scene to be drawn. Right now I'm only using OpenGL, but I'm aware that I don't want to be tying OpenGL calls right into base classes (such as updating the vertices in the Mesh, I don't want to be calling glBufferData etc.). Are there any good resources that discuss these issues? Are there any "common" hierarchies which should be used?
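    One way to keep the API calls out of the base classes is to let Mesh hold only CPU-side data and push the actual buffer work behind a renderer interface. A minimal sketch in Java (class and method names here are my own, purely illustrative, not from any particular engine):

      // CPU-side geometry only; knows nothing about OpenGL.
      class Mesh {
          final float[] vertices;  // packed positions/normals/uvs
          final int[] indices;
          Mesh(float[] vertices, int[] indices) {
              this.vertices = vertices;
              this.indices = indices;
          }
      }

      // Pairs shared geometry with a per-object transform.
      class SceneObject {
          final Mesh mesh;
          final float[] modelMatrix = new float[16];  // 4x4 model matrix
          SceneObject(Mesh mesh) { this.mesh = mesh; }
      }

      // The only layer allowed to talk to the graphics API.
      interface Renderer {
          void upload(Mesh mesh);         // e.g. creates the VBO/IBO internally
          void draw(SceneObject object);  // binds buffers and issues the draw call
      }

      class Scene {
          final java.util.List<SceneObject> objects = new java.util.ArrayList<>();
          void render(Renderer renderer) {
              for (SceneObject o : objects) renderer.draw(o);
          }
      }

    With that split, identical meshes are simply shared by pointing several SceneObjects at the same Mesh instance, whether that sharing is decided by an asset pipeline or by the level designer, and an OpenGL-specific Renderer implementation is the only place glBufferData and friends ever appear.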


  • Is it a bad idea to release software on the night before Christmas?

    - by Conor
    We're about to go live with a new version of our system. It's getting really close to Christmas. I work in a very small company, and everyone will be on leave or only sporadically available over the next week or so. I've argued with my boss that this is very risky and that we should go live in the new year when everyone is back and we can provide full support. He is unflinching: he argues that we need to go live sooner so that we can get new users and more revenue, which we need. The number of new users will be minor over the next week or so. There has been a decent amount of system testing performed on the system. However a new live system, in my experience, needs a lot of care and attention in the first few days. Am I being pessimistic or realistic? Update, January: The system did not go live over Christmas. Ongoing system testing revealed various problems, so there were no support issues to deal with. Still preparing for release...


  • Should you put personal beliefs in your program?

    - by TheLQ
    Recently I've found two examples of programmers' personal beliefs in programs that have removed or crippled useful functionality: uTorrent showing the current connection speed in KB (rarely used) rather than Kb (what most ISPs and other programs use as their metric), where various attempts by others and me to at least get an option to show speeds in Kb have ended with "ISPs should use KB"; and Kleopatra (the gpg4win key manager) not having PGP key pictures since they "give a false sense of security" and "increase the size of certificates". While the latter is true, the former is debatable. Both of these hurt the programs and their usefulness to me. uTorrent's forums used to be filled with people saying they have a 10 Mb download pipe but uTorrent only goes up to 2 MB (not knowing that Mb != MB), and with feature requests to show speeds in Mb. Kleopatra has lost usefulness to me since I don't have the functionality to add pictures to PGP keys. These are all political statements: developers attempting to effect change on issues that are, in their minds, important. But should this come at a cost to end-user functionality? If a programmer heavily believes X but everyone else believes Y, should the programmer refuse to add support for Y because in their mind X is horrible? In short, should a programmer make political statements in their program?


  • How to Set Up a MongoDB NoSQL Cluster Using Oracle Solaris Zones

    - by Orgad Kimchi
    This article starts with a brief overview of MongoDB and follows with an example of setting up a three-node MongoDB cluster using Oracle Solaris Zones. The following are benefits of using Oracle Solaris for a MongoDB cluster:
      • You can add new MongoDB hosts to the cluster in minutes instead of hours using the zone cloning feature. Using Oracle Solaris Zones, you can easily scale out your MongoDB cluster.
      • In case of a user error or software error, the Service Management Facility ensures the high availability of each cluster member and ensures that MongoDB replication failover will occur only as a last resort.
      • You can discover performance issues in minutes versus days by using DTrace, which provides increased operating system observability. DTrace provides a holistic performance overview of the operating system and allows deep performance analysis through cooperation with the built-in MongoDB tools.
      • ZFS built-in compression provides optimized disk I/O utilization for better I/O performance.
    In the example presented in this article, all the MongoDB cluster building blocks will be installed using Oracle Solaris Zones, the Service Management Facility, ZFS, and network virtualization technologies. Figure 1 in the original article shows the architecture.


  • Is a disk/ata timeout exception dangerous?

    - by j-g-faustus
    I have a few hard drives in mdadm RAID 5 configured to go to standby after a few minutes of inactivity (using hdparm.conf spindown_time). At irregular intervals I get messages like these in dmesg:
    [ 1840.251661] ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
    [ 1840.251722] ata4.00: failed command: SMART
    [ 1840.251758] ata4.00: cmd b0/d5:01:06:4f:c2/00:00:00:00:00/00 tag 0 pio 512 in
    [ 1840.251759]          res 40/00:14:50:2e:04/00:00:02:00:00/40 Emask 0x4 (timeout)
    [ 1840.251858] ata4.00: status: { DRDY }
    [ 1840.251888] ata4: hard resetting link
    [ 1840.600742] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [ 1840.601521] ata4.00: configured for UDMA/133
    [ 1840.601547] ata4: EH complete
    [337877.713988] ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
    [337877.714019] ata4.00: failed command: SMART
    [337877.714038] ata4.00: cmd b0/d5:01:06:4f:c2/00:00:00:00:00/00 tag 0 pio 512 in
    [337877.714039]          res 40/00:04:90:10:81/00:00:00:00:00/40 Emask 0x4 (timeout)
    [337877.714089] ata4.00: status: { DRDY }
    [337877.714107] ata4: hard resetting link
    [337878.063085] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [337878.063743] ata4.00: configured for UDMA/133
    [337878.063764] ata4: EH complete
    I think the exception is caused by smartd when a drive does not wake up quickly enough. There are no issues (that I can tell) in accessing the drives normally through the file system; it takes a few seconds longer than normal when they are asleep, but there are no exceptions. Is this something I should worry about, as a potential symptom of something that could corrupt a drive over time? Or can I safely ignore it as part of normal operation? Edit: By request, smartctl -a for sda and sde; both disks are members of the array. If ata4 is the same as scsi-4 then sde is the one that gave the error above, according to /dev/disk/by-path.


  • CIFS shares do not mount after upgrade to 12.10 from 12.04

    - by Mothball
    I have seen issues close to my problem, but no one seems to have a definitive answer as to what is going on and why the failure occurs. I have a number of NAS devices on my home network, and on a previous install of 12.04 (and versions prior) mounting at login worked using this entry for each in fstab:
    //servername/sharename /media/windowsshare cifs guest,uid=1000,iocharset=utf8,codepage=cp850,cp850 0 0
    Now when I use this, 12.10 reports the standard "cannot mount: bad option..." error. The kernel log reports that the CIFS option "codepage" is unknown; I changed the entry to "unicode" and received the same error message. There are no other error messages or log entries that would indicate another issue, but this is the statement I used for quite a while with version 12.04 and before. Is the codepage option obsolete in 12.10/CIFS now? Is there a codepage support program that I must load? Is there some kind of helper program that is required to support the codepage option? A current review of the man pages at samba.org does not make mention of the option "codepage". Extremely confused; any help/insight would be greatly appreciated.
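    Since the current mount.cifs man page no longer mentions codepage (as the poster notes), one simple thing to try is the same line with the codepage fields dropped, keeping iocharset for the character set. A hedged example, reusing the poster's own servername/sharename placeholders:
      //servername/sharename /media/windowsshare cifs guest,uid=1000,iocharset=utf8 0 0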


  • How To Change Attachment Size in WorkItems in TFS 2010

    - by Ravi
    Recently, I came across an issue where I had to change the size limit for work item attachments in TFS 2010. I searched all around the internet only to find very little information about it, and what there was wasn't clear, honestly. So after breaking my head for some time, I was successful in doing it. Here are my conclusions and the procedure to do it.
    1. You DON'T have to change it programmatically. You can do it directly from the IIS web services.
    2. You CAN change it programmatically too, by making an entry into the TFS Registry using a small piece of code.
    Let me show you how it is done from IIS. This changes the maximum attachment size for work items in TFS 2010 for each collection individually.
    1. You must be a TFS Admin to do this (log in with the setup account).
    2. Browse to http://localhost:8080/tfs/<YOUR-COLLECTION-NAME>/WorkItemTracking/v1.0/ConfigurationSettingsService.asmx
    3. You'll see three web service methods: GetMaxAttachmentSize, GetWorkItemTrackingVersion and SetMaxAttachmentSize.
    4. To find the current maximum attachment size for a collection, click on the first method and you'll see the current value for this particular collection when you click the 'Invoke' button (the value is in bytes).
    5. Now click on the 'SetMaxAttachmentSize' method and fill in the value of your choice.
    6. Reset IIS (not required, honestly, but I did it just to be sure).
    7. Now try attaching a file greater than the size you've set. It'll fail successfully. (The error you'd see in such scenarios is shown in the original post.)
    Let me know if you see any issues and I'll be happy to help!


  • Simple NAS setup for Ubuntu

    - by Sean Houlihane
    So, I want to connect my shiny new NAS (QNAP TS-210p II) to my Ubuntu 11.10 box. I have a second Ubuntu machine, but I don't use that enough to be too worried about it. The use case is ideally to offload all storage except for the system (which can then run off a 30 GB flash drive). The main machine also runs Myth front/backends. Both are on a wired network currently. I have got a basic setup working, mounted over NFS, but have some issues, and whenever I look for answers I seem to end up in Ubuntu Server territory, with what seems like more detail than I want in an answer. Questions I have identified so far:
    1) Sharing UIDs to get file permissions correct. LDAP or something else? What is necessary to make this work?
    2) Mounting an NFS device reliably. Do I just add it in fstab (see the example below) or is some sort of auto-mount advisable? It's a wired network, but I shouldn't need to worry about reboots after power outages and mount sequencing...
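    For question 2, as a reference point, a plain fstab entry for an NFS share looks like the line below (the host name and paths are made up for illustration); autofs is the usual alternative if you would rather have shares mounted on demand and recover by themselves after an outage:
      nas:/share/media  /mnt/media  nfs  rw,hard,intr  0  0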


  • Ubuntu 13.10 AMD/ATI proprietary driver slow boot time, lengthy login/logout delays

    - by NahsiN
    Ubuntu 13.10 is causing me major headaches with my AMD/ATI HD 5770 GPU. Below is a list of problems I am currently encountering.
    1) The boot time is extended by at least 25s after installing Catalyst 13.4. Using the open source radeon drivers, my boot time to the login screen is ~10s. With Catalyst 13.4 installed, the boot time increases to ~35s. This was not the case in Ubuntu 13.04, 12.10 or 12.04. I have done the driver installation manually (instructions from wiki.cchtml.com) and using the Software Center, and there is no difference. I have not tried the Catalyst 13.8 beta driver.
    2) After manual installation of Catalyst 13.4, I get stuck at a black screen after logging in. I have to purge fglrx to resolve the problem. I tried sudo amdconfig --initial -f but it didn't help.
    3) The delay between logging in and Unity being displayed is ~10-15s for BOTH the open source and proprietary drivers. During the delay, it's just a black screen. Whenever I log out, there is again a ~10-15s delay, with the login screen appearing stuck before lightdm allows me to enter my password again. This is ridiculous!
    Yes, I could stick with the open source radeon drivers, but I would like to install Steam and play my Valve collection on this machine. Is anybody else encountering similar issues?


  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map maker thingy, all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each one of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow down production too much (and I'm on a somewhat tight schedule). Well, worst case scenario, I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position, like this: private HashMap<Integer, Integer[][]>
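    As a rough illustration of the idea (the class and method names below are mine, not from the poster's code), run-length encoding a single 128-tile row and re-encoding it whenever one tile changes could look like the sketch below; with rows this short, re-encoding one row per edit is normally cheap enough for an editor:

      import java.util.ArrayList;
      import java.util.List;

      class RleRow {
          // Each run is stored as a pair: { tileId, runLength }.
          static List<int[]> encode(int[] row) {
              List<int[]> runs = new ArrayList<>();
              int i = 0;
              while (i < row.length) {
                  int id = row[i];
                  int len = 1;
                  while (i + len < row.length && row[i + len] == id) len++;
                  runs.add(new int[] { id, len });
                  i += len;
              }
              return runs;
          }

          static int[] decode(List<int[]> runs, int width) {
              int[] row = new int[width];
              int pos = 0;
              for (int[] run : runs)
                  for (int j = 0; j < run[1]; j++) row[pos++] = run[0];
              return row;
          }

          // On an edit: decode the affected row, change one tile, re-encode just that row.
          static List<int[]> setTile(List<int[]> runs, int width, int x, int tileId) {
              int[] row = decode(runs, width);
              row[x] = tileId;
              return encode(row);
          }
      }

    On a 128x128 map an edit only ever touches one row of at most 128 tiles, so the per-edit cost stays tiny; whether the memory win is enough depends on how repetitive the layers actually are.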


  • Multiple monitors showing same screen but different resolutions

    - by Luis Alvarado
    Is it possible to have 2 or more monitors showing the same screen, for example the same desktop, but with different resolutions? Like the clone option in Nvidia or the mirror option in the Display settings in Ubuntu, but instead of showing the same output at the same resolution, they would both show the same output using the resolution that is native to each connected monitor. In my case, if I have a netbook that has a max resolution of 1360x768 and a TV that has 1280x1024, they would both show the same desktop but each with its own resolution, compatible with each device. This would help with trying to find a resolution that works on both monitors, and in cases like a mini netbook and a huge TV it would solve issues like having max 800x600 on one monitor and min 1024x768 on the other. In the case I tested I was using an HDMI cable, but this question also involves VGA and any other connection. I have 3 test scenarios for this:
    Scenario 1 - HP DV6000 laptop (Intel integrated video) with 1360x768, connected to a Samsung 42" LED TV that has 1280x900.
    Scenario 2 - EEE laptop with 1024x600 (Intel integrated video), connected to a Sony LCD TV that supports 1280x900.
    Scenario 3 - Intel desktop with an Nvidia 440 GT, with HDMI connected to a Soneview 32" TV that supports 1920x1080 and VGA connected to an Epson video beamer that supports 1280x1024 max.
    In these 3 scenarios I need to be able to show the same desktop and same views but at a different resolution on each output device. UPDATE: Tested with Xubuntu, and the way it handles multiple monitors is precisely what I am asking for: the ability to handle the resolution of different monitors showing the same thing.
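    For reference, on setups where xrandr is available and new enough to support it, the recipe usually suggested for this is the --scale-from option, which clones the desktop onto a second output running at that output's native mode by scaling; a hedged example (the output names are placeholders, the real ones come from running xrandr with no arguments):
      xrandr --output HDMI-1 --mode 1280x1024 --scale-from 1360x768 --same-as LVDS-1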


  • Prepared statement alternatives for this middle-man program?

    - by user2813274
    I have a program that uses a prepared statement to connect and write to a database, working nicely, and I now need to create a middle-man program to insert between this program and the database. This middle-man program will actually write to multiple databases and handle any errors and connection issues. I would like advice as to how to replicate the prepared statements so as to create minimal impact on the existing program, but I am not sure where to start. I have thought about creating a "SQL statement class" that mimics the prepared statement, only that seems silly. The existing program is in Java, although the middle-man is going to be networked anyway, so I would be open to writing it in just about anything that would make sense. The databases are currently MySQL, although I would like to be open to changing the database type in the future. My main question is: what should the interface for this program look like, and does doing this even make sense? A distributed DB would be the ideal solution, but they seem overly complex and expensive for my needs. I am hoping to replicate the main functionality of a distributed DB via this middle-man. I am not too familiar with SQL-based servers distributing data (or databases in general...), so perhaps I am fighting an uphill battle by trying to solve it via programming, but I would like to make an attempt at least.
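    One way to keep the impact on the existing program small is to give the middle man an interface shaped like the slice of PreparedStatement the program already uses, and have an implementation fan each call out to one real statement per target database. A rough sketch in Java (the interface and class names are mine, and error handling is deliberately omitted):

      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.SQLException;
      import java.util.List;

      // Narrow interface mirroring only the PreparedStatement calls the program relies on.
      interface WriteStatement extends AutoCloseable {
          void setString(int index, String value) throws SQLException;
          void setInt(int index, int value) throws SQLException;
          int executeUpdate() throws SQLException;
          void close() throws SQLException;
      }

      // Fans one logical statement out to the same statement on several databases.
      class FanOutStatement implements WriteStatement {
          private final List<PreparedStatement> targets;

          FanOutStatement(List<Connection> connections, String sql) throws SQLException {
              targets = new java.util.ArrayList<>();
              for (Connection c : connections) targets.add(c.prepareStatement(sql));
          }

          public void setString(int index, String value) throws SQLException {
              for (PreparedStatement s : targets) s.setString(index, value);
          }

          public void setInt(int index, int value) throws SQLException {
              for (PreparedStatement s : targets) s.setInt(index, value);
          }

          public int executeUpdate() throws SQLException {
              int rows = 0;
              for (PreparedStatement s : targets) rows = s.executeUpdate(); // per-target error handling omitted
              return rows;
          }

          public void close() throws SQLException {
              for (PreparedStatement s : targets) s.close();
          }
      }

    The existing program would then only need its construction code changed to obtain a WriteStatement instead of a PreparedStatement, and swapping database vendors later stays a matter of changing JDBC URLs and drivers.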


  • Using EC2 instance as main development platform

    - by David
    My problem: I am working as a consultant for various companies. Each company provides me with a laptop with their software on it, and I also have my own, where I have my development environment. I tend to buy a new laptop every second year and find myself spending lots of time configuring and installing software. I also spend a lot of time waiting for my laptop to process things. To solve all these issues, I am now considering using EC2 (running Windows instances) as my main development platform and just accessing this from any PC I happen to be at. I calculated that running the Large instance (the cheapest 64-bit option) for 8 hours a day costs me $960 per year, which is acceptable. I imagine that when I approach the workplace each day, I will make a single tap on my phone to fire up the instance, so it is ready when I get to work. I should have different icons on my phone to fire up the various instance types. The same software should of course automatically be loaded on the various hardware (sometimes I would even need their instance with 68.4 GB of memory). Another advantage is that if I am having a specific problem with my instance, I could fire up another instance and have someone look into the problem and update the image. My question: does anyone have experience with such a setup on EC2? What kind of problems do you foresee?
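    The "single tap" part really is just an API call to start or stop the instance; with the current AWS command line tools, for example, it boils down to something like the following (the instance ID is a made-up placeholder), which a phone shortcut or a small script can wrap:
      aws ec2 start-instances --instance-ids i-0123456789abcdef0
      aws ec2 stop-instances --instance-ids i-0123456789abcdef0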


  • What is Stackify?

    - by Matt Watson
    You have developers, applications, and servers. Stackify makes sure that they are all working efficiently. Our mission is to give developers the integrated tools they need to better troubleshoot and monitor the applications they create and the servers that they run on. Traditional IT operations tools are designed for network and system administrators. Developers commonly spend 30% of their time working with IT operations remediating application service problems, yet they currently lack tools to efficiently support the applications they create. Stackify delivers the application support functionality that developers need:
      • View application deployment locations, versions, and history
      • Browse files on servers to ensure proper deployments
      • Access configuration and log files on servers
      • Remotely restart Windows services, scheduled tasks, and web applications
      • Basic server monitoring and alerts
      • Collect all application exceptions to a centralized point
      • Log and report on custom application events
    Stackify is building an integrated DevOps solution delivered from the cloud, designed to meet the needs of developers but also to help unify the working relationship with IT operations teams and existing security roles. Our goal is to help unify the interaction between developers and IT operations. Stackify allows both teams to have visibility that they never had before, to solve complex application service issues more easily and faster. Stackify's CEO and CTO both have experience managing very large and high-growth software development teams. That experience is driving our design of Stackify to deliver the integrated tools we always wished we had: the next generation of development operations tools.


  • Writing Web "server less" applications

    - by crodjer
    TL;DR: What are the prospects for writing applications which are based entirely on a REST database server (CouchDB), with web applications accessing the DB directly instead of having a web server in between? I recently started looking at some NoSQL databases. MongoDB seems to be a popular choice, and I liked the project, but I personally liked the REST interface of CouchDB. So what I wanted to know is whether there is the possibility of applications (maybe cached apps in a web browser, a Chrome extension, etc.) which could just query the database directly with no need for a web server in between. All the computational logic would reside in the client application and the database would do what it does: CRUD. Since most client frameworks support REST queries (I don't know which don't), it could be a good way of writing applications well optimized for the respective framework. These applications won't be doing complicated computation, but would still provide enough functionality to replace lots of conventional applications. Are there existing resources and projects which would help me move towards writing such applications, and what is the scope for developing this way? Are there any technical/security issues with this? This post will help me decide whether to look into projects like CouchDB (and maybe dive into Erlang later) or stay with the conventional frameworks (like Django) and SQL databases. Update: A specific kind of app I had in mind is an offline application created just by replicating CouchDB data on the client.
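    The attraction of CouchDB for this idea is that its CRUD surface is plain HTTP plus JSON, so a browser-side app (or curl, just for illustration) can talk to it with no server-side code of your own. A small sketch, assuming a local CouchDB with a database named notes already created (the remote URL in the last call is a placeholder):
      curl -X PUT http://localhost:5984/notes/first -H "Content-Type: application/json" -d '{"text":"hello from the client"}'
      curl http://localhost:5984/notes/first
      curl -X POST http://localhost:5984/_replicate -H "Content-Type: application/json" -d '{"source":"http://server:5984/notes","target":"notes"}'
    The last call is the replication mechanism the update at the end alludes to: the same machinery can copy a remote database down to a local CouchDB for offline use.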


  • Upcoming events : OBUG Connect Conference 2012

    - by Maria Colgan
    The Oracle Benelux User Group (OBUG) has given me an amazing opportunity to present a one-day Optimizer workshop at their annual Connect Conference in Maastricht on April 24th. The workshop will run as one of the parallel tracks at the conference and consists of three 45-minute sessions. Each session can be attended standalone, but they build on each other to allow someone new to the Oracle Optimizer or SQL tuning to come away from the conference with a better understanding of how the Optimizer works and what techniques they should deploy to tune their SQL. Below is a brief description of each of the sessions.
    Session 7, 11:30 am - Oracle Optimizer: Understanding Optimizer Statistics. The workshop opens with a discussion on Optimizer statistics and the features introduced in Oracle Database 11g to improve the quality and efficiency of statistics-gathering. The session will also provide strategies for managing statistics in various database environments.
    Session 27, 14:30 - Oracle Optimizer: Explain the Explain Plan. The workshop continues with a detailed examination of the different aspects of an execution plan, from selectivity to parallel execution, and explains what information you should be gleaning from the plan.
    Session 47, 15:45 - Top Tips to Get Optimal Execution Plans. Finally I will show you how to identify and resolve the most common SQL execution performance problems, such as poor cardinality estimations, bind peeking issues, and selecting the wrong access method.
    Hopefully I will see you there! +Maria Colgan


  • A short but intense GCC Gathering in London

    - by user817571
    About one week ago I joined many long-time GCC friends and acquaintances in London for a gathering organized by Google (in particular, I guess Diego and Ian should be thanked). It was only a weekend, and I wasn't able to attend on Sunday morning, but it was a very good occasion to raise some issues in a very relaxed way, in particular those at the border between areas of competence, which are the most difficult to discuss during normal work days. If you are interested in a general overview and some notes, this is a good link: http://gcc.gnu.org/wiki/GCCGathering2011 As you may easily guess, the third topic is mine, which I managed to have up quite early on Friday morning thanks to the votes of some good friends like Dodji (the ordering of the topics resulted from democratic voting on Friday evening!). I learned a lot from the discussion: for example, that the new C++11 'final' should certainly be exploited heavily in the C++ front end; the various reasons why devirtualization can be quite tricky (but I'm really confident that Martin and Honza are going to make good progress, also based on a set of short test cases which I promised to collect); and that, as explained by Ian, the gold linker already implements the nice --icf (Identical Code Folding) facility, which some friends of mine are definitely going to like (however, see: http://sourceware.org/bugzilla/show_bug.cgi?id=12919). I also enjoyed the observations made by Lawrence, who remarked that in C++11 we are going to see more pointer iterations implicitly produced by the new range-based for loop, and we really want to make sure the loop optimizers are able to deal with those as well as with loops explicitly using a counter. All in all, I really hope we are going to do it again!


  • How can rotating release managers improve a project's velocity and stability?

    - by Yannis Rizos
    The Wikipedia article on Parrot VM includes this unreferenced claim: "Core committers take turns producing releases in a revolving schedule, where no single committer is responsible for multiple releases in a row. This practice has improved the project's velocity and stability." Parrot's Release Manager role documentation doesn't offer any further insight into the process, and I couldn't find any reference for the claim. My first thought was that rotating release managers seems like a good idea, sharing the responsibility between as many people as possible and giving releases a certain degree of polyphony. Is it, though? Rotating release managers has been proposed for Launchpad, and there were some interesting counterarguments:
      • Release management is something that requires a good understanding of all parts of the code and the authority to make calls under pressure if issues come up during the release itself
      • The less change we can have to the release process, the better from an operational perspective
      • We don't really want an engineer to have to learn all this stuff on the job as well as have other things to take care of (regular development responsibilities)
      • Any change of timezones of the releases would need to be approved with the SAs
    and: "I think this would be a great idea (mainly because of my lust for power), but I also think that there should be some way of making sure that a release manager doesn't get overwhelmed if something disastrous happens during release week, maybe by having a deputy release manager at the same time (maybe just falling back to Francis or Kiko would be sufficient)."
    The practice doesn't appear to be very common, and the counterarguments seem reasonable and convincing. I'm quite confused about how it would improve a project's velocity and stability; is there something I'm missing, or is this just a bad edit on the Wikipedia article? Worth noting that the top-voted answer in the related "Is rotating the lead developer a good or bad idea?" question boldly notes: Don't rotate.


  • Correct nvidia+intel graphics setup in 14.04

    - by Espressofa
    Just upgraded to 14.04 to try to fix some other issues. Now something has gone wrong with my graphics. I have a Thinkpad T530 with Intel and Nvidia graphics cards.
    $ inxi -SGx
    System:    Host: xyz Kernel: 3.13.0-24-generic x86_64 (64 bit, gcc: 4.8.2) Desktop: N/A Distro: Ubuntu 14.04 trusty
    Graphics:  Card-1: Intel 3rd Gen Core processor Graphics Controller bus-ID: 00:02.0
               Card-2: NVIDIA GF108M [NVS 5400M] bus-ID: 01:00.0
               X.Org: 1.15.1 drivers: fbdev,vesa,intel,nouveau (unloaded: nvidia) Resolution: [email protected]
               GLX Renderer: N/A GLX Version: N/A Direct Rendering: N/A
    $ glxinfo
    name of display: :0
    Xlib: extension "GLX" missing on display ":0".
    Xlib: extension "GLX" missing on display ":0".
    Xlib: extension "GLX" missing on display ":0".
    Xlib: extension "GLX" missing on display ":0".
    Xlib: extension "GLX" missing on display ":0".
    Error: couldn't find RGB GLX visual or fbconfig
    Xlib: extension "GLX" missing on display ":0".
    Xlib: extension "GLX" missing on display ":0".
    Xlib: extension "GLX" missing on display ":0".
    Xlib: extension "GLX" missing on display ":0".
    Error: couldn't find RGB GLX visual or fbconfig
    Xlib: extension "GLX" missing on display ":0".
    Xlib: extension "GLX" missing on display ":0".
    Xlib: extension "GLX" missing on display ":0".
    Xlib: extension "GLX" missing on display ":0".
    Xlib: extension "GLX" missing on display ":0".
    I'm not sure what I did, but now something is wrong with my graphics, as should be visible from the above commands. nvidia-detector says "none" as well. I used to have bumblebee, but then some website said to remove it and now something's clearly wrong. What's the right way to set things up? Should I try to add bumblebee back? Here's what's installed now:
    $ dpkg --get-selections | grep nvidia
    nvidia-319                  install
    nvidia-331                  install
    nvidia-libopencl1-331       install
    nvidia-opencl-icd-331       install
    nvidia-prime                install
    nvidia-settings             install
    nvidia-settings-319         install
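    For what it's worth, given that nvidia-331 and nvidia-prime are already installed (per the dpkg listing above), a common first step on 14.04 hybrid Intel/NVIDIA setups is to reinstall those two packages, select the discrete GPU explicitly, and reboot; hedged example commands:
      sudo apt-get install --reinstall nvidia-331 nvidia-prime
      sudo prime-select nvidia
      sudo reboot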


  • Tree Surgeon 2.0 - The future on the T4 Express

    - by Malcolm Anderson
    If you've never been a fan of Tree Surgeon (http://treesurgeon.codeplex.com/) then skip this post. However, if you have been, there have been some interesting developments over the last couple of years. The biggest one is T4. Recently Bill Simser wrote a detailed post about the potential future of Tree Surgeon, called "Tree Surgeon - Alive and Kicking or Dead and Buried". He raised the question: "Times have changed. Since that last release in 2008 so much has changed for .NET developers. The question is, today is the project still viable? Do we still need a tool to generate a project tree given that we have things like scaffolding systems, NuGet, and T4 templates? Or should we just give the project its rightful and respectful send off as it's had a good life and has outlived its usefulness?" For myself, the answer is: keep it. I've spent the last couple of years doing agile engineering coaching and architecture, and from my experience I can tell you there are a lot of shops out there that would benefit from having Tree Surgeon as a viable product. Many would benefit simply from having the software engineering information that is embedded in the Tree Surgeon site floating around their conversation. Little things like:
      • Keep all of the software needed to run the build, with the build, in the version control system.
      • Have your developers and the build system using the same build.
      • Have a one-touch build.
      • Separate your code from your interface.
      • Put unit tests in first, not last.
    I've seen companies with great developers suffer from the problems that naturally come from builds taking 3 and 4 hours to run. It takes work to get that build down to 10 minutes, but the benefits are always worth it. Tree Surgeon gives you a leg up by starting you off with a project that you can drop into your continuous integration system, right out of the box. Well, it used to be right out of the box. Today you have to play with the project to make it work for you, but even with the issues (it hasn't been updated since 2008) it still gives you a framework, with logical separations, that you can build from. If you have used Tree Surgeon in the past, take a few minutes and drop a comment about what difference it made in your development style, and what you are doing differently today because of it.


  • Oracle OpenWorld Session: “Business Driven Development with BPM: Lessons from the Real World”

    - by Ajay Khanna
    One of the key values that BPM promises is "business empowerment". The people closest to the processes, who participate in them every day, are the ones who know most about them: the people who run day-to-day operations, triage customer issues, and envision improvements and innovations. It is therefore imperative that when a company decides to use BPM technology to automate its business processes, business people take the driver's seat. BPM is not an IT-only project. Oracle BPM Suite has been designed keeping this core tenet of BPM, business empowerment, in mind. The result is the business-user-centered design of Process Composer. Process Composer is designed to let business users document their processes, analyze them using simulation, create web forms, specify business rules, and even run them in testing mode using the process player, to see if the designed process meets their needs. This does not mean that IT has no role in this process. In fact, Oracle BPM Suite has made it very easy for business and IT to collaborate. The same process can be shared among business and IT stakeholders, and each can collaborate to create model-driven, process-based executable applications. A process may need to integrate with multiple systems via various mechanisms, and IT leads the system and data integration effort. IT helps fine-tune the performance of process applications and ensures that the deployment of a process application meets scalability and failover standards. In this session, we saw Harish Gaur and Satya Narayanan from Oracle demonstrate the roles business and IT play in BPM projects and how Oracle BPM Suite enables business and IT collaboration to design and automate process-based applications. They also discussed real-life customer stories. Some key takeaways from this session:
      • There are no IT projects, only business initiatives requiring IT support
      • Identify high-impact processes: critical, better BPM ROI
      • Identify key metrics to measure process performance
      • Align business with the IT layer


  • Distinguishing between UI command & domain commands

    - by SonOfPirate
    I am building a WPF client application using the MVVM pattern that provides an interface on top of an existing set of business logic residing in a library which is shared with other applications. The business library followed a domain-driven architecture using CQRS to separate the read and write models (no event sourcing). The combination of technologies and patterns has brought up an interesting conundrum: The MVVM pattern uses the command pattern for handling user interaction with the view models. .NET provides an ICommand interface which is implemented by most MVVM frameworks, like MVVM Light's RelayCommand and Prism's DelegateCommand. For example, the view model would expose a number of command objects as properties that are bound to the UI and respond when the user performs actions like clicking buttons. Many implementations of CQRS use the command pattern to isolate and encapsulate individual behaviors. In my business library, we have implemented the write model as command / command-handler pairs. As such, when we want to do some work, such as create a new order, we 'issue' a command (CreateOrderCommand) which is routed to the command handler responsible for executing the command. This is great, clearly explained in many sources, and I am good with it. However, take this scenario: I have a ToolbarViewModel which exposes a CreateNewOrderCommand property. This ICommand object is bound to a button in the UI. When clicked, the UI command creates and issues a new CreateOrderCommand object to the domain, which is handled by the CreateOrderCommandHandler. This is difficult to explain to other developers, and I find myself getting tongue-tied because everything is a command. I'm sure I'm not the first developer to have patterns overlap like this, where the naming/terminology also overlaps. How have you approached distinguishing the commands used in the UI from those used in the domain? (Edit: I should mention that the business library is UI-agnostic, i.e. no UI technology-specific code exists, or will exist, in this library.)


  • Looking for a small, light scene graph style abstraction lib for shader based OpenGL

    - by Pris
    I'm looking for a 'lean and mean' C/C++ scene graph library for OpenGL that doesn't use any deprecated functionality. It should be cross-platform (strictly speaking I just develop on Linux, so no love lost if it doesn't work on Windows), and it should be possible to deploy to mobile targets (i.e. OpenGL ES 2, and no crazy mandatory dependencies that wouldn't port well to modern mobile frameworks like iOS, Android, etc.), with a license that's compatible with closed source software (LGPL or more liberal). Specific nice-to-haves would be:
      • Cameras and viewers (trackball, fly-by, etc.)
      • Object transform hierarchies (if B is a child of A and you move A, B has the same transform applied to it)
      • Simple animation
      • Scene optimization (frustum culling, use of VBOs, minimizing state changes, etc.)
      • Text
    I've played around with OpenSceneGraph a lot and it's pretty amazing for fixed-function pipeline stuff, but I've had a few problems using it with the programmable pipeline, and after going through their mailing list it seems several people have had similar issues (going back years). Kitware's VES looks neat (http://www.vtk.org/Wiki/VES), but VES + VTK is pretty heavy. VTK is also typically for analyzing scientific data and I've read that it's not that appropriate for a general use case (not that great at rendering a lot of objects in a scene, etc.). I'm currently looking at Visualization Library (http://www.visualizationlibrary.org/documentation/pag_gallery.html), which looks like it offers some of the functionality I'd like, but it doesn't explicitly support mobile targets. Other solutions like Ogre, Horde3D, Irrlicht, etc. tend to be full-on game engines, and that's not really what I'm looking for. I'd like some suggestions for other libraries that I may have missed... please note I'm not willing to roll my own solution from scratch.


  • Upcoming EBS Webcasts for June, July, August 2012

    - by user793553
    See the following upcoming webcasts for June, July and August 2012. Flag Doc ID 740966.1 as a favourite to keep up to date with the latest advisor schedule. Additionally, see Doc ID 740964.1 for access to all archived advisor webcasts.
    Oracle E-Business Suite: none at this time.
    EBS Agile: none at this time.
    EBS Applications Technologies Group (ATG):
      • EBS – OAM Tuning and Monitoring EMEA - July 10, 2012
      • EBS – OAM Tuning and Monitoring US - July 11, 2012
      • Workflow Analyzer Followup EMEA - July 24, 2012
      • Workflow Analyzer Followup US - July 25, 2012
    EBS CRM & Industries: none at this time.
    EBS Financials:
      • EBS Fixed Assets: Achieve Success Using Proactive Tools For Fixed Assets Support - July 10, 2012
      • Overview and Flow of Oracle Project Resource Management - July 17, 2012
      • Leveraging My Oracle Support To Increase Knowledge - July 30, 2012
    EBS HCM (HRMS):
      • Oracle Time and Labor (OTL) Rollback Functionality Session 1 - July 25, 2012
      • Oracle Time and Labor (OTL) Rollback Functionality Session 2 - July 25, 2012
    EBS Manufacturing:
      • Using Personalization in Oracle eAM - June 21, 2012
      • OM Guided Resolutions - Finding Known Resolutions Easily - July 17, 2012
      • Material Move Orders Flow - July 25, 2012
      • Diagnosing Signal 11 Issues In ASCP Planning - August 9, 2012
      • Interface Trip Stop - Best Practices and Debugging - August 21, 2012
    EBS Procurement:
      • Punchout in iProcurement - June 26, 2012
    (Each webcast entry links to an abstract in the original post.)


  • Payroll Customers Must Apply Mandatory Patches to Maintain Your Supportability

    - by DanaD
    The HRMS Suite of products has minimum required Rollup Patch (RUP) levels as well as additional mandatory patches that our customers must apply to ensure they are in compliance for support. Without these patches, customers risk not being able to apply any fixes for issues they encounter, as these RUPs and mandatory patches are the minimum patch level expected by Oracle Support and Oracle Development. Core Payroll and International Payroll customers must apply the yearly Rollup Patch within 12 months of its issue. Legislative Payroll customers have additional requirements for the Rollup Patch, as the RUP generally is a pre-requisite for the next Year End/Fourth Quarter/Year Begin payroll processing supported by Oracle. These minimum RUP patches and other mandatory patches for your product or legislation are created with the following goals in mind:
      • Compliance: Manage the people in your organization within the requirements of a specific country.
      • Supportability: Ensure you are on a common code base so that if problems are identified, patches can be readily provided to you.
      • Reliability: Reliable code with multiple customer downloads and comprehensive testing.
    For the listing of Mandatory Rollup Patches for Oracle Payroll, please view Doc ID 295406.1: Mandatory Family Pack/Rollup Patch (RUP) Levels for Oracle Payroll.
    For the listing of Mandatory Patches for the HRMS Suite, please view Doc ID 1160507.1: Oracle E-Business Suite HCM Information Center – Consolidated HRMS Mandatory Patch List.
    For information on the latest Rollup Patches (RUPs) for the HRMS Suite, please view Doc ID 135266.1: Oracle HRMS Product Family – Release 11i & 12 Information.

