Search Results

Search found 2993 results on 120 pages for 'distributed transactions'.

Page 56/120

  • Where can I find "magic numbers" for classic game play mechanics?

    - by MrDatabase
    I'd like to find some "magic numbers" for the classic helicopter game. For example, the numbers that determine how fast the helicopter accelerates up and down. Also perhaps the "randomness" of the obstacles (uniformly distributed? Gaussian?). Where can I find these numbers? P.S. I don't care about the particular platform... Flash in the desktop browser is just as good as some implementation on a mobile device.
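    There is no single canonical set of numbers for this genre; the sketch below is a hypothetical, tunable starting point (all constants invented for illustration) showing the kind of parameters involved: gravity, thrust, speed clamps, and a uniformly distributed gap position for the obstacles.

        import java.util.Random;

        // Hypothetical tuning constants for a helicopter-style game (all values invented;
        // units assume pixels, seconds, and roughly 60 updates per second).
        public class HelicopterTuning {
            static final float GRAVITY        = 600f;   // downward acceleration, px/s^2
            static final float THRUST         = 1100f;  // upward acceleration while the button is held, px/s^2
            static final float MAX_FALL_SPEED = 420f;   // terminal falling speed, px/s
            static final float MAX_RISE_SPEED = 380f;   // clamp on climbing speed, px/s

            static final float OBSTACLE_SPACING = 320f; // horizontal px between obstacles
            static final float GAP_HEIGHT       = 180f; // px of free space to fly through
            static final Random RNG = new Random();

            // Uniformly distributed gap position: every legal height is equally likely.
            // Swap in RNG.nextGaussian() around mid-screen for a "clumpier" feel.
            static float nextGapCenter(float screenHeight) {
                float margin = GAP_HEIGHT / 2f + 40f;
                return margin + RNG.nextFloat() * (screenHeight - 2f * margin);
            }

            // One physics step: thrust fights gravity, and speed is clamped both ways.
            static float nextVerticalSpeed(float speed, boolean thrusting, float dt) {
                float accel = thrusting ? (THRUST - GRAVITY) : -GRAVITY;
                speed += accel * dt;
                return Math.max(-MAX_FALL_SPEED, Math.min(MAX_RISE_SPEED, speed));
            }
        }

    Tuning usually proceeds by playtesting: nudge GRAVITY and THRUST together to control the sense of "weight", then widen or narrow GAP_HEIGHT to set difficulty.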

    Read the article

  • Using EPEL repos with Oracle Linux

    - by wcoekaer
    There's a Fedora project called EPEL which hosts a set of additional packages that can be installed on top of various distributions such as Red Hat Enterprise Linux, CentOS, Scientific Linux and, of course, Oracle Linux. These packages are not distributed by the distribution vendor and as such are also not supported by the vendors (including Oracle); however, for users that want to pick up some useful extras, it's very easy to do. All you need to do is download the EPEL RPM from the website, install it on Oracle Linux 5 or Oracle Linux 6, and run yum install or yum search to get the packages. Example:

        # wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm
        # rpm -ivh epel-release-6-5.noarch.rpm
        # yum repolist
        Loaded plugins: refresh-packagekit, rhnplugin
        repo id    repo name                                         status
        epel       Extra Packages for Enterprise Linux 6 - x86_64    7,124

    The folks that build these repositories are doing a great job at adding very useful packages. They are free, but also unsupported of course.

    Read the article

  • Secure Enterprise Search 11.2.2.2 Now Available for PeopleTools 8.53

    - by Matthew Haavisto
    We are pleased to announce that Oracle Secure Enterprise Search (SES) 11.2.2.2 is now available to PeopleSoft Customers on PeopleTools 8.53.  The minimum PeopleTools Patch Version Required to adopt SES 11.2.2.2 is PeopleTools 8.53.06.  This version of SES provides some important benefits for PeopleSoft Customers, particularly in the areas of platform support, distributed architecture support, and RAC support.  You can get all the details on this update on My Oracle Support.  This MOS document lists the fixes and configurations needed for PeopleTools certification of SES 11.2.2.2. For other useful information on PeopleTools and SES, see this Oracle forum.

    Read the article

  • Problem downloading .exe file from Amazon S3 with a signed URL in IE

    - by Joe Corkery
    I have a large collection of Windows exe files which are being stored/distributed using Amazon S3. We use signed URLs to control access to the files and this works great except in one case when trying to download a .exe file using Internet Explorer (version 8). It works just fine in Firefox. It also works fine if you don't use a signed URL (but that is not an option). What happens is that the IE downloader changes the name from 'myfile.exe' to 'myfile[1]' and Windows no longer recognizes it as an executable. Any advice would be greatly appreciated. Thanks.
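    One commonly suggested workaround (a sketch only, using the AWS SDK for Java 1.x; bucket, key, and filename below are placeholders) is to override the Content-Disposition and Content-Type response headers when generating the signed URL, so the browser is told to save the download as an attachment with the original .exe name:

        import java.net.URL;
        import java.util.Date;
        import com.amazonaws.HttpMethod;
        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
        import com.amazonaws.services.s3.model.ResponseHeaderOverrides;

        public class SignedExeUrl {
            // Generate a 15-minute signed URL that forces a download filename.
            public static URL sign(AmazonS3 s3, String bucket, String key) {
                GeneratePresignedUrlRequest request =
                        new GeneratePresignedUrlRequest(bucket, key, HttpMethod.GET)
                                .withExpiration(new Date(System.currentTimeMillis() + 15 * 60 * 1000))
                                .withResponseHeaders(new ResponseHeaderOverrides()
                                        .withContentDisposition("attachment; filename=myfile.exe")
                                        .withContentType("application/octet-stream"));
                return s3.generatePresignedUrl(request);
            }
        }

    Whether IE 8 honors the override in every configuration isn't guaranteed, but forcing an attachment filename is the usual first thing to try.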

    Read the article

  • Java Developers: Is Ant still in the "main stream" for builds? Do we push new developers to learn it?

    - by Sam Goldberg
    We have been slowly replacing batch command files (Windows .bat), which were simply jarring up the classes compiled in the developers' IDE, with more comprehensive Ant builds (i.e. get from CVS, clean compile, jar, archive, email, etc.). I've spent a lot of time learning (and debugging issues) with Ant, so I'm most comfortable using it for these tasks. But I wonder if Ant is still in as wide usage as it was when I first started learning it, or whether "the world has moved on" to something newer (and maybe slicker). (I've started to see more Maven build stuff distributed, which I've never used, for example.) The practical import of this question is whether I push new developers to learn Ant, or whether they should be learning something else for builds. I'm never too on top of the trends, so it would be great to hear from other Java developers what they think is the best build tool, and what they think new developers should be learning.

    Read the article

  • Game Patching Mac/PC

    - by Centurion Games
    Just wondering what types of solutions are available to handle patching of PC/Mac games that don't have any sort of auto-updater built into them. On Windows, do you just spin off some sort of new InstallShield installer for the game that includes the updated files, hope you can read a valid registry key to point to the right directory, and overwrite files? If so, how does that translate over to Mac, where the game is normally just distributed as a straight-up .app file? Is there a better approach than the above for an already-released product? (Assuming direct sales, and not through a marketplace that features auto-updating, like Steam.) Are there any off-the-shelf auto-updater type libraries that could also be easily integrated with a C/C++ code base even after a game has been shipped, to make this a lot simpler, and that are cross-platform? Also, how do auto-updaters work with new OSes that want applications and files digitally signed?

    Read the article

  • How can I distribute a unique database already in production?

    - by JVerstry
    Let's assume a successful web Spring application running on a MySQL or PostgreSQL database. The traffic is becoming so high and the amount of data is becoming so big that a distributed database solution needs to be implemented to address the scalability issue. Let's also assume this application is using Hibernate and the data access layer is cleanly separated with DAOs. Ideally, one should be able to add or remove databases easily. A failback solution is welcome too. What would be the best strategy to scale this database? Is it possible to minimize sharding code (Shard) in the application?
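    As a rough illustration of keeping sharding code out of the DAOs (a hypothetical sketch, not tied to Hibernate Shards or any specific library; the routing rule and class names are invented), a small router can map an entity's key to one of several DataSources, so DAOs only ever see the DataSource they are handed:

        import java.util.List;
        import javax.sql.DataSource;

        // Hypothetical sketch: all shard-selection logic lives here, so the DAO layer
        // stays free of sharding concerns.
        public class ShardRouter {
            private final List<DataSource> shards;   // one DataSource per physical database

            public ShardRouter(List<DataSource> shards) {
                this.shards = shards;
            }

            // Simple modulo routing by entity key; swap in consistent hashing if shards
            // will be added or removed often, so fewer keys have to move.
            public DataSource shardFor(long entityId) {
                int index = (int) Math.floorMod(entityId, (long) shards.size());
                return shards.get(index);
            }
        }

    Adding or removing a database then means changing the router (ideally with consistent hashing so existing keys move as little as possible) rather than touching every DAO.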

    Read the article

  • Set secondary receiver in PayPal Chained Payment after the initial transaction

    - by CJxD
    I'm running a service whereby customers seek the services of 'freelancers' through our web platform. The customer makes a 'bid' which is immediately taken from their account as security. Once the job is completed, the customer marks it as accepted and the bid gets distributed to the freelancer(s) as a reward. After initially storing these rewards in the accounts of the freelancers and relying on MassPay to sort out paying them later, I realised that your business needs to be turning over at least £5000/month before MassPay is switched on. Instead, I was referred to Delayed Chained Payments in PayPal's Adaptive Payments API. This allows the customer to pay the primary receiver (my business) before the payment is later triggered to be sent to the secondary receivers (the freelancers). However, at the time the customer initiates this transaction, nobody yet knows who will receive the reward. So, before I program this whole Adaptive Payments system, is it even possible to change or add the secondary receivers after the customer has paid? If not, what can I do?

    Read the article

  • Are the technologies used in an application part of the architecture, or do they represent implementation/detailed design details?

    - by m3th0dman
    When designing and writing documentation for a project, an architecture needs to be clearly defined: what are the high-level modules of the system, what are their responsibilities, how do they communicate with each other, what protocols are used, etc. But in this list, should the concrete technologies be specified, or is that actually an implementation detail that needs to be specified at a lower level? For example, consider a distributed application that has two modules which communicate asynchronously via the AMQP protocol, mediated by a message broker. Is the fact that these modules use the Spring AMQP library for sending and receiving messages something that needs to be specified in the architecture, or is it a lower-level detailed-design/implementation detail?

    Read the article

  • How to distribute a unique database already in production?

    - by JVerstry
    Let's assume a successful web Spring application running on a MySQL or PostgreSQL kind of database. The traffic is becoming so high and the amount of data is becoming so big that a distributed database solution needs to be implemented; it is a scalability issue. Let's assume this application is using Hibernate and the data access layer is cleanly separated with DAO objects. What would be the best strategy to scale this database? Does anyone have hands-on experience to share? Is it possible to minimize sharding code (Shard) in the application? Ideally, one should be able to add or remove databases easily. A failback solution is welcome too. I am not looking for "you could go for sharding" or "you could go NoSQL" kinds of answers. I am looking for deeper answers from people with experience.

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Besides this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... still not very helpful. In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve Performance". An excellent marketing slogan... but still not very meaningful. By chance I was able to get away quickly from that group in July 2007* at Thames Valley Park (UK), after I attended one of the most interesting workshops in my 10-year career at Oracle, delivered by Brian Oliver. The biggest mistake I made was to assume that performance improvement with Coherence was about response time. Which could be considered legitimate at the time, because after all, caches help to reduce latency on cached data access, and hence the response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially rewrite your application in order to work with the cache. As a result, the expected benefit vanishes... so, not very useful then? The key mistake I made was my perception of, or obsession with, how performance improvement should be driven, but I strongly believe this is still a common problem for most developers. In fact we all know that the performance of a system is generally described by its Capacity (or Throughput), with two important dimensions, Speed (response time) and Volume (load):

        Capacity (TPS) = Volume (T) / Speed (S)

    To increase the Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there's a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume... The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever, because... the Speed can be influenced by the Volume. For every system, the performance picture looks like the following: in all traditional systems, increasing the Volume (Transactions) will also increase the Speed (Response Time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of 100 entries.
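    To make the while-loop point concrete, here is a plain-Java sketch (deliberately not Coherence code; it only illustrates the principle) contrasting a serial loop, whose execution time grows with the number of entries, with a partitioned version that keeps per-worker time roughly constant by processing slices in parallel:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class CapacityDemo {

            // Serial: 200 entries take roughly twice as long as 100 entries.
            static long sumSerial(List<Long> entries) {
                long total = 0;
                for (long value : entries) {
                    total += value;              // O(n) work on a single worker
                }
                return total;
            }

            // Partitioned: each worker sums its own slice in parallel and the partial
            // results are combined, so per-worker time stays roughly constant as the
            // data (and the number of workers) grows.
            static long sumPartitioned(List<Long> entries, int partitions) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(partitions);
                int sliceSize = (entries.size() + partitions - 1) / partitions;
                List<Future<Long>> partials = new ArrayList<>();
                for (int p = 0; p < partitions; p++) {
                    int from = Math.min(p * sliceSize, entries.size());
                    int to   = Math.min(from + sliceSize, entries.size());
                    List<Long> slice = entries.subList(from, to);
                    partials.add(pool.submit(() -> sumSerial(slice)));
                }
                long total = 0;
                for (Future<Long> partial : partials) {
                    total += partial.get();      // combine the partial results
                }
                pool.shutdown();
                return total;
            }
        }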
    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce network latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is designing applications that can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is not just about having an application running as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, both from the programmatic angle (fewer network hops to complete a task) and from the hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the distributed cache, because it can guarantee:
    - Constant data-object access time, independent of the number of objects and the Coherence cluster size
    - Data-object distribution by affinity, for in-memory data grouping
    - In-place data processing, for parallel execution
    To summarize, Oracle Coherence is indeed useful for improving your application performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping a consistent Speed. In the future I will keep adding new blog entries around this topic, with some sample code and experiences I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using it, then start playing with the product through our tutorial. Have fun!

    Read the article

  • Load balancing on Ubuntu Server

    - by SabreWolfy
    I have Ubuntu 10.04.4 server (32-bit) installed on a headless quad-core machine with 2GB RAM. I'm running a command-line analysis which is analyzing a large amount of data, but which does not require a large amount of RAM. The tool does not provide any multi-threading, so the CPU load is sitting at 1.00 (or sometimes just a little over). I ran top and pressed 1 to see the load on each of the cores and noticed that "Cpu1" is always running at 100%. I thought that the load would be distributed between the cores, rather than loading one core all the time. I'm sure I've seen this load-balancing behaviour before in Ubuntu or Debian Desktop versions. Why would the Server edition work differently? The analysis will likely take several hours to run, so loading one core at 100% for many hours while the other 3 remain idle is surely not the best approach?

    Read the article

  • How to encourage version control adoption

    - by Man Wa kileleshwa
    I have recently started working in a team where there is no version control. Most of the team members are not used to any kind of version control. I've been using Mercurial privately to track my work. I would like to encourage others to adopt it, and at the very least start to version their code as they develop changes. Can anyone give me advice on how I can encourage adoption of a distributed version control system such as Mercurial? Any advice on how to win people over, including managers, to DVCS would be much appreciated.

    Read the article

  • What makes Erlang suitable for cloud applications?

    - by Duncan
    We are starting a new project and implementing on our corporation's instantiation of an OpenStack cloud (see http://www.openstack.org/). The project is security tooling for our corporation. We currently run many hundreds of dedicated servers for security tools and are moving them to our corporation's instantiation of OpenStack. Other projects in my company currently use Erlang in several distributed server applications, and other Q&As point out that Erlang is used in several popular cloud services. I am trying to convince others to consider where it might be applicable on our project. What are Erlang's strengths for cloud programming? In which areas is it particularly appropriate to use Erlang?

    Read the article

  • Implementation of a Rules Engine in Your Business Applications

    - by enonu
    I'm looking for an experience-driven answer from a few software engineers who have implemented a rules engine in their internal business applications. How has it affected your business in the following ways:
    - Ability to launch and iterate over business-driven logic
    - Ability to have "business users" perform the actual modification of those rules rather than developers
    - Ability to comprehend the business rules in general
    - Quality of the software releases (fewer or more bugs from the end user's POV?)
    - Speed of the applications
    If you had to do it all over again, what would you do differently? Lastly, I'm looking for a qualification of your answer with respect to the architecture. Would you do the same thing if you were deploying to a 1-machine setup vs. your architecture vs. a multi-tier cloud-based distributed architecture using 1000s of machines? How would it be different? Thanks!

    Read the article

  • Does Agile (scrum) require one server environment?

    - by Matt W
    Is it necessary/recommended/best practice/otherwise positive to use only one server environment to perform all development, unit testing and QA? If so, is it then wise/part of Agile to have only one staging environment before live? Considering that this could mean internationally distributed teams of developers and testers in different time zones, is this wise? This is something being implemented by our QA manager. The opinion put forward is that doing all the dev and testing on a single server is "Agile." The staging environment would be a second environment, and then live.

    Read the article

  • What are options for 3rd Party Centralized Software Settings Management?

    - by Jeff Martin
    I am an architect in an enterprise looking to build a SaaS solution. Our products are distributed over many different deployable containers, web services, web UIs, etc. I am looking for some open-source or third-party software solution to manage the settings of our application. These would be similar to the settings you might find in Word or Eclipse or Visual Studio. The settings would control various behaviors and features of the product (probably not settings like which database to connect to, but more like whether to show line numbers on the page by default). Ideally, we would be able to store values for different dimensions (by tenant, by user, by application environment...). Because we have so many different deployables, I am looking for a centralized solution that can provide a web service that each of the deployables can get its individual settings from. Does anyone know of a centralized service providing this sort of feature, or can you give me some help in searching for an alternative to rolling our own?
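    To sketch the shape of the lookup contract being described (entirely hypothetical names, not an existing product's API), a central settings service might resolve a key by falling back from the most specific dimension to the least specific:

        import java.util.Map;
        import java.util.Optional;

        // Hypothetical sketch: resolve a setting by key, most specific dimension first.
        // All names and the key format are invented for illustration.
        public class LayeredSettings {
            private final Map<String, String> userOverrides;    // "user:tenant:key" -> value
            private final Map<String, String> tenantOverrides;  // "tenant:key" -> value
            private final Map<String, String> defaults;         // "key" -> value

            public LayeredSettings(Map<String, String> userOverrides,
                                   Map<String, String> tenantOverrides,
                                   Map<String, String> defaults) {
                this.userOverrides = userOverrides;
                this.tenantOverrides = tenantOverrides;
                this.defaults = defaults;
            }

            // Lookup order: per-user value, then per-tenant value, then global default.
            public Optional<String> lookup(String tenant, String user, String key) {
                String value = userOverrides.get(user + ":" + tenant + ":" + key);
                if (value == null) value = tenantOverrides.get(tenant + ":" + key);
                if (value == null) value = defaults.get(key);
                return Optional.ofNullable(value);
            }
        }

    A centralized service would put an HTTP endpoint in front of something like this so each deployable fetches its effective settings at startup or when they change.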

    Read the article

  • Right-Time Retail Part 1

    - by David Dorf
    This is the first in a three-part series. Right-Time Revolution: Technology enables some amazing feats in retail. I can order flowers for my wife while flying 30,000 feet in the air. I can order my groceries in the subway and have them delivered later that day. I can even see how clothes look on me without setting foot in a store. Who knew that a TV, diamond necklace, or even a car would someday be as easy to purchase as a candy bar? Can technology make a mattress an impulse item? Wake up and your back is hurting, so you roll over and grab your iPad, and a new mattress is delivered the next day. Behind the scenes, many processes are being choreographed to make the sale happen. This includes moving data between systems with the least amount of friction, which in some cases is near real-time. But real-time isn't appropriate for all the integrations. Think about what a completely real-time retailer would look like. A consumer grabs toothpaste off the shelf, and all systems are immediately notified so that the backroom clerk comes running out and pushes the consumer aside so he can replace the toothpaste on the shelf. Such a system is not only cost prohibitive, but it's also very inefficient and ineffectual. Retailers must balance the realities of people, processes, and systems to find the right speed of execution. That's what "right-time retail" means. Retailers used to sell during the day and count the money and restock at night, but global expansion and the Web have complicated that simplistic viewpoint. Our 24-hour society demands not only access but also speed, which constantly pushes the boundaries of our IT systems. In the last twenty years, there have been three major technology advancements that have moved us closer to real-time systems. Networking is the first technology that drove the real-time trend. As systems became connected, it became easier to move data between them. In retail we no longer had to mail the daily business report back to corporate each day, as the dial-up modem could transfer the data. That was soon replaced with trickle-polling, when sales transactions were occasionally sent from stores to corporate throughout the day, often through VSAT. Then we got terrestrial networks like DSL and Ethernet that allowed a constant stream of data between stores and corporate. When corporate could see the sales transactions coming from stores, it could better plan for replenishment and promotions. That drove the need for speed into the supply chain and merchandising, but for many years those systems were stymied by the huge volumes of data.
    Nordstrom has 150 million SKU/store combinations when planning (RPAS); The Gap generates 110 million price changes during end-of-season (RPM); Argos executes 1.78 billion calculations each day for replenishment planning (AIP). These areas are now being alleviated by the second technology, storage. The typical laptop disk drive runs at 5,400rpm, with PCs stepping up to 7,200rpm and servers hitting 15,000rpm. But the platters can only spin so fast, so to squeeze more performance we've had to rely on things like disk striping. Then solid state drives (SSDs) were introduced and prices continue to drop. (Augmenting your hard drive with an SSD is the single best PC upgrade these days.) RAM continues to be expensive, but compressing data in memory has allowed more efficient use. So a few years back, Oracle decided to build a box that incorporated all these advancements to move us closer to real-time. This family of products, often categorized as engineered systems, combines the hardware and software so that they work together to provide better performance. How much better? If Exadata powered a 747, you'd go from New York to Paris in 42 minutes, and it would carry 5,000 passengers. If Exadata powered baseball, games would last only 18 minutes and Boston's Fenway would hold 370,000 fans. The Exa-family enables processing more data in less time. So with faster networks and storage, that brings us to the third and final ingredient. If we continue to process data in traditional ways, we won't be able to take advantage of the faster networks and storage. Enter what Harvard calls "The Sexiest Job of the 21st Century" – the data scientist. New technologies like the Hadoop-powered Oracle Big Data Appliance, Oracle Advanced Analytics, and Oracle Endeca Information Discovery change the way in which we organize data. These technologies allow us to extract actionable information from raw data at incredible speeds, often ad hoc. So the foundation to support the real-time enterprise exists, but how does a retailer begin to take advantage? The most visible way is through real-time marketing, but I'll save that for part 3 and instead begin with improved integrations for the assets you already have in part 2.

    Read the article

  • What's a good setup/toolchain for a project?

    - by acidzombie24
    I was thinking, what is needed for a good setup and what are good (free) tools to use? Some of what I came up with are:
    - Bug tracking
    - Some good (distributed :P) source control (which means no SVN, fellas)
    - Automated nightly builds or continuous integration (or anything that automates builds and possibly sends emails when there are build errors)
    - A wiki to document decisions, road map or milestones
    - Something to back up assets (art, sound, etc.)
    What else? And do you have suggestions for any of the above? I'm pretty much clueless about all of these except for source control.

    Read the article

  • Mercurial release management. Rejecting changes that fail testing

    - by MYou
    Researching distributed source control management (specifically Mercurial). My question is more or less: what is the best practice for rejecting entire sets of code that fail testing? Example: a team is working on a hello world program. They have testers and a scheduled release coming up with specific features planned. Upcoming release:
    - Add feature A
    - Add feature B
    - Add feature C
    So, the developers make their clones for their features, do the work and merge them into a QA repo for the testers to scrutinize. Let's say the testers report back that "Feature B is incomplete and in fact dangerous", and they would like to retest A and C. End example. What's the best way to do all this so that feature B can easily be removed and you end up with a new repo that contains only feature A and C merged together? Recreate the test repo? Back out B? Other magic?

    Read the article

  • Can I use AAC in a commercial app for free?

    - by Jason123
    I was wondering if I can use the AAC codec in my commercial app for free (through LGPL FFmpeg). It says on the wiki: No licenses or payments are required to be able to stream or distribute content in AAC format.[36] This reason alone makes AAC a much more attractive format to distribute content than MP3, particularly for streaming content (such as Internet radio). However, a patent license is required for all manufacturers or developers of AAC codecs. For this reason free and open source software implementations such as FFmpeg and FAAC may be distributed in source form only, in order to avoid patent infringement. (See below under Products that support AAC, Software.) But the XSplit program had to cancel AAC for free members because they have to pay royalties per person. Is this true (that you have to pay for each person that uses AAC)? If you do have to pay, which company do you pay, and how does one apply?

    Read the article

  • rotating menu with Actors in libgdx

    - by joecks
    I intend to build a circular menu, with menu items equally distributed around the circle. When clicking on a menu item, the circle should rotate so that the selected item faces the top. I am using libgdx and I am not very familiar with the Actor concept, so I intuitively tried to implement an Actor that draws a texture and then transforms it using Actions, with no success:

        class CircleActor extends Actor {
            @Override
            public void draw(SpriteBatch batch, float parentAlpha) {
                batch.draw(texture1, 100, 100);
            }

            @Override
            public Actor hit(float x, float y) {
                return this;
            }
        }

    and the rotate action:

        CircleActor circleActor = new CircleActor();
        circleActor.action(Forever.$(RotateBy.$(0.1f, 0.1f)));
        // stage.addActor();
        stage.addActor(circleActor);

    The texture is rectangular, but it does not work. 1. What is wrong? 2. Is this a good approach to solve the task? Thanks!
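    For what it's worth, one likely reason the rotation is invisible is that the overridden draw() above ignores the actor's rotation and position entirely. Below is a sketch of a draw() that applies them, assuming a libgdx version where the Actor getters exist (in very old releases the fields are public instead), and assuming the actor's size and origin are set when it is created so the rotation has a sensible pivot:

        // Drop-in replacement for draw() in the CircleActor above: draw the texture at
        // the actor's position, rotated around its origin, so RotateBy has a visible
        // effect. (Set the actor's width/height and origin when constructing it.)
        @Override
        public void draw(SpriteBatch batch, float parentAlpha) {
            batch.draw(texture1,
                    getX(), getY(),                      // actor position
                    getOriginX(), getOriginY(),          // pivot of the rotation
                    getWidth(), getHeight(),             // drawn size
                    getScaleX(), getScaleY(),
                    getRotation(),                       // updated by the RotateBy action
                    0, 0, texture1.getWidth(), texture1.getHeight(),
                    false, false);
        }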

    Read the article

  • What can I do to encourage teams to lighten up? [closed]

    - by Rahul
    I work with a geographically distributed team (different timezones) with people from various cultures and background. Some of us have never met each other in person but we communicate with each other over phone, chat and email almost on an hourly basis. Most of our meetings and discussions are dead serious and boring. What's worse, any attempt at humor is not very well received because of cultural differences. I feel that we are all taking our work a bit too seriously. We don't shy away from painful arguments, nasty emails and heated discussions when things go wrong but never attempt to develop camaraderie or friendships in better times. I would like to know your experiences with such situations and what, if anything, did you do to lighten things up at workplace.

    Read the article

  • Microsoft Azure Diagnostics Part 1: Introduction

    Having a well thought-out plan for diagnostic data is important for on-premises applications, but it is arguably more important for distributed, highly scalable cloud applications. Michael Collier has provided a clear introduction to Microsoft Azure Diagnostics, including the Diagnostics Agent and how to extract the data.

    Read the article

  • Subdomains vs. subdirectory – status as of 2012.

    - by Quintin Par
    The following question by Jeff was asked in 2010, and I wanted to check how things have changed in the past two years. My problem: I run a site with most of the content distributed to subdomains that are user-based. E.g.: Joe.example.com John.example.com Jil.example.com So all of these subdomains have the content, and the main site example.com becomes a mere dummy listing all the subdomains. Now the question is, as of 2012, how is Google treating domain authority and PageRank in this case? I understand the notion of PageRank being per page, but when it comes to domain authority, will the parent domain have the cumulative effect of the domain authority or will it be spread out?

    Read the article
