Search Results

Search found 1706 results on 69 pages for 'distributed'.

Page 26/69 | < Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >

  • My Rhythmbox plugin can't meet the Ubuntu Software Center "my-app" requirements

    - by allquixotic
    At http://developer.ubuntu.com/publish/my-apps-packages/ the following technical requirements are cited: "Technical requirements: In order for your application to be distributed in the Software Centre it must:
      - Be in one, self-contained directory when installed
      - Be able to be installed into the /opt/ directory (*)
      - Be executable by all users from the /opt/ directory (**)
      - Write all configuration settings to ~/.config/ (this can be one file or a directory containing multiple configuration files)"
    A Rhythmbox plugin cannot satisfy any of these requirements: Rhythmbox has compiled-in locations where it looks for installed plugins. So, is there no way for me to publish my app in the Ubuntu Software Center? Would it have to go into the Universe repository (which would require tremendously more work and political maneuvering to get it accepted)? I already have all the Debian package infrastructure built for it, so I have made a PPA for it.

    Read the article

  • Advertising Opportunity – Profit Magazine For Oracle OpenWorld

    - by tfryer
    With Oracle OpenWorld fast approaching, Profit Magazine is offering Oracle Specialized partners the opportunity to extend their brand to executive-level Oracle customers and top prospects in the Profit Magazine: Specialized Partner Edition. The printed magazine will be distributed to attendees at Oracle OpenWorld San Francisco, and the digital copy will be distributed to over 500,000 customers in the Profit readers' circle. In addition, the magazine will be promoted via social media such as Facebook, LinkedIn, and Twitter. For a very affordable advertising opportunity, please contact Tom Cometa at [email protected] or +1.510.339.2403. Reserve before July 27th. Hurry! An early-bird discount of 15% applies if booked before July 18th.

    Read the article

  • What is the politically correct way of refactoring others' code?

    - by dukeofgaming
    I'm currently working in a geographically distributed team in a big company. Everybody is focused on today's tasks and getting things done, but this means things sometimes have to be done the quick way, and that causes problems... you know, same old, same old. I'm bumping into code with several smells, such as: big functions, pointless utility functions/methods (essentially just to save writing a word), overcomplicated algorithms, extremely big files that should be broken down into different files/classes (1,500+ lines), etc. What would be the best way of improving code without making other developers feel bad/wrong about any proposed improvements?

    Read the article

  • Cloud Integration White Paper - Now Available

    - by Bruce Tierney
    Interested in expanding your existing application infrastructure to integrate with cloud applications?  Download the new Oracle White Paper "Cloud Integration - A Comprehensive Solution" to learn not just about connectivity but about the other key aspects of successful cloud integration. The paper includes three technical examples of cloud integration with Oracle Fusion Applications, Salesforce, and Workday, and follows with the importance of taking a comprehensive approach that also covers service aggregation, service virtualization, cloud security considerations, and the benefit of maintaining a unified approach to monitoring and management despite an increasingly distributed hybrid infrastructure. To keep the integration architecture from being defined "accidentally" as new business units subscribe to additional cloud vendors outside the participation of IT, a discussion of the "Accidental SOA Cloud Architecture" is included. As shown in the table of contents, the white paper provides a combination of high-level awareness about key considerations as well as a technical deep dive into the steps needed for cloud integration connectivity. Hope you find the White Paper valuable.  Please download from the following link.

    Read the article

  • Where can I find "magic numbers" for classic game play mechanics?

    - by MrDatabase
    I'd like to find some "magic numbers" for the classic helicopter game. For example the numbers that determine how fast the helicopter accelerates up and down. Also perhaps the "randomness" of the obstacles (uniformly distributed? Gaussian?). Where can I find these numbers? p.s. I don't care about the particular platform... Flash on the desktop browser is just as good as some implementation on a mobile device.
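
    There doesn't appear to be a published set of numbers for this kind of game, so the usual route is to pick a handful of tunable constants and adjust them by feel. Below is a minimal sketch (in Java) of the sort of constants and obstacle randomness involved; every value is an illustrative guess, not taken from any shipped helicopter game.

        // Illustrative tuning constants for a helicopter-style game.
        // All values are guesses for demonstration, not published "magic numbers".
        public class HeliTuning {
            public static final float GRAVITY = -480f;        // downward acceleration, px/s^2
            public static final float THRUST = 900f;          // upward acceleration while input held, px/s^2
            public static final float MAX_FALL_SPEED = -320f; // terminal velocity, px/s
            public static final float SCROLL_SPEED = 180f;    // horizontal world scroll, px/s

            private static final java.util.Random RNG = new java.util.Random();

            // Obstacle gap centers drawn from a uniform range; a Gaussian alternative for comparison.
            public static float uniformGapCenter(float minY, float maxY) {
                return minY + RNG.nextFloat() * (maxY - minY);
            }

            public static float gaussianGapCenter(float meanY, float stdDev) {
                return (float) (meanY + RNG.nextGaussian() * stdDev);
            }

            // Per-frame vertical integration while the player holds or releases thrust.
            public static float step(float velocityY, boolean thrusting, float dt) {
                velocityY += (thrusting ? THRUST : GRAVITY) * dt;
                return Math.max(velocityY, MAX_FALL_SPEED);
            }
        }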

    Read the article

  • Secure Enterprise Search 11.2.2.2 Now Available for PeopleTools 8.53

    - by Matthew Haavisto
    We are pleased to announce that Oracle Secure Enterprise Search (SES) 11.2.2.2 is now available to PeopleSoft customers on PeopleTools 8.53.  The minimum PeopleTools patch version required to adopt SES 11.2.2.2 is 8.53.06.  This version of SES provides some important benefits for PeopleSoft customers, particularly in the areas of platform support, distributed architecture support, and RAC support.  You can get all the details on this update on My Oracle Support; the MOS document lists the fixes and configurations needed for PeopleTools certification of SES 11.2.2.2. For other useful information on PeopleTools and SES, see this Oracle forum.

    Read the article

  • Using EPEL repos with Oracle Linux

    - by wcoekaer
    There's a Fedora project called EPEL which hosts a set of additional packages that can be installed on top of various distributions such as Red Hat Enterprise Linux, CentOS, Scientific Linux and, of course, Oracle Linux. These packages are not distributed by the distribution vendor and as such are not supported by the vendors (including Oracle); however, for users that want to pick up some useful extras, it's very easy to do. All you need to do is download the EPEL RPM from the website, install it on Oracle Linux 5 or Oracle Linux 6, and run yum install or yum search to get the packages. Example:

        # wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm
        # rpm -ivh epel-release-6-5.noarch.rpm
        # yum repolist
        Loaded plugins: refresh-packagekit, rhnplugin
        repo id    repo name                                        status
        epel       Extra Packages for Enterprise Linux 6 - x86_64   7,124

    The folks that build these repositories are doing a great job at adding very useful packages. They are free, but of course also unsupported.

    Read the article

  • Java Developers: Is Ant still in the "main stream" for builds? Do we push new developers to learn it?

    - by Sam Goldberg
    We have been slowly replacing batch command files (Windows .bat), which simply jarred up the classes compiled in the developers' IDEs, with more comprehensive Ant builds (i.e. get from CVS, clean compile, jar, archive, email, etc.). I've spent a lot of time learning (and debugging issues) with Ant, so I'm most comfortable using it for these tasks. But I wonder if Ant is still in as wide usage as it was when I first started learning it, or whether "the world has moved on" to something newer (and maybe slicker). (I've started to see more Maven build stuff distributed, which I've never used, for example.) The practical import of this question is whether I push new developers to learn Ant, or whether they should be learning something else for builds. I'm never too on top of the trends, so it would be great to hear from other Java developers what they think is the best build tool, and what they think new developers should be learning.

    Read the article

  • Game Patching Mac/PC

    - by Centurion Games
    Just wondering what types of solutions are available to handle patching of PC/Mac games that don't have any sort of auto-updater built into them. On Windows, do you just spin off some sort of new InstallShield-style installer for the game that includes the updated files, hope you can read a valid registry key to point to the right directory, and overwrite files? If so, how does that translate to the Mac, where the game is normally just distributed as a straight-up .app file? Is there a better approach than the above for an already-released product? (Assuming direct sales, and not through a marketplace that features auto-updating, like Steam.) Are there any off-the-shelf auto-updater-type libraries that could also be easily integrated with a C/C++ code base even after a game has been shipped, to make this a lot simpler, and that are cross-platform? Also, how do auto-updaters work with new OSs that want applications and files digitally signed?
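
    Whatever library or hand-rolled approach is used, most updaters boil down to comparing a locally stored version against a remote manifest and fetching a patch when they differ. A rough sketch of that check (in Java purely for illustration, even though the question mentions a C/C++ code base; the manifest URL and file layout are hypothetical):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class UpdateCheck {
            // Hypothetical manifest: a one-line file containing the latest version string.
            private static final String MANIFEST_URL = "https://example.com/mygame/latest-version.txt";

            public static boolean updateAvailable(Path installedVersionFile) throws Exception {
                String local = Files.readString(installedVersionFile, StandardCharsets.UTF_8).trim();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(new URL(MANIFEST_URL).openStream(), StandardCharsets.UTF_8))) {
                    String remote = in.readLine().trim();
                    return !remote.equals(local);  // differing versions -> download and apply a patch
                }
            }

            public static void main(String[] args) throws Exception {
                if (updateAvailable(Paths.get("version.txt"))) {
                    System.out.println("Patch available; download it, verify its signature, then replace files.");
                }
            }
        }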

    Read the article

  • Problem downloading .exe file from Amazon S3 with a signed URL in IE

    - by Joe Corkery
    I have a large collection of Windows .exe files which are being stored/distributed using Amazon S3. We use signed URLs to control access to the files, and this works great except in one case: trying to download a .exe file using Internet Explorer (version 8). It works just fine in Firefox. It also works fine if you don't use a signed URL (but that is not an option). What happens is that the IE downloader changes the name from 'myfile.exe' to 'myfile[1]' and Windows no longer recognizes it as an executable. Any advice would be greatly appreciated. Thanks.
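
    One thing commonly worth checking in this situation is whether the signed URL carries explicit Content-Type/Content-Disposition response headers, since IE leans on those when naming the download. This is not necessarily the poster's root cause, just a sketch of how those overrides can be attached when generating a pre-signed URL, assuming the AWS SDK for Java (1.x-style API); the bucket, key, and filename are placeholders.

        import java.net.URL;
        import java.util.Date;

        import com.amazonaws.HttpMethod;
        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
        import com.amazonaws.services.s3.model.ResponseHeaderOverrides;

        public class SignedExeUrl {
            public static URL signedUrl(AmazonS3 s3, String bucket, String key) {
                // Ask S3 to return headers that mark the object as a binary attachment
                // with an explicit filename, so the browser keeps "myfile.exe" intact.
                ResponseHeaderOverrides overrides = new ResponseHeaderOverrides()
                        .withContentType("application/octet-stream")
                        .withContentDisposition("attachment; filename=myfile.exe");

                GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest(bucket, key)
                        .withMethod(HttpMethod.GET)
                        .withExpiration(new Date(System.currentTimeMillis() + 15 * 60 * 1000))
                        .withResponseHeaders(overrides);

                return s3.generatePresignedUrl(request);
            }
        }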

    Read the article

  • How can I distribute a unique database already in production?

    - by JVerstry
    Let's assume a successful web Spring application running on a MySQL or PostgreSQL database. The traffic is becoming so high and the amount of data is becoming so big that a distributed database solution needs to be implemented to address the scalability issue. Let's also assume this application is using Hibernate and the data access layer is cleanly separated with DAOs. Ideally, one should be able to add or remove databases easily. A failback solution is welcome too. What would be the best strategy to scale this database? Is it possible to minimize sharding code (Shard) in the application?
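
    Whatever framework ends up doing the heavy lifting, the sharding decision usually reduces to mapping a shard key to one of N data sources, and keeping that mapping behind a single interface is what keeps sharding code out of the DAOs. A bare-bones sketch of that idea in plain Java (no particular sharding library assumed; modulo hashing is shown only for brevity):

        import java.util.List;
        import javax.sql.DataSource;

        // Routes an entity's shard key (e.g. a customer id) to one of the configured databases.
        // Real deployments would use consistent hashing or a lookup table so that adding or
        // removing shards does not remap every key.
        public class ShardRouter {
            private final List<DataSource> shards;

            public ShardRouter(List<DataSource> shards) {
                this.shards = shards;
            }

            public DataSource shardFor(long shardKey) {
                int index = (int) Math.floorMod(shardKey, shards.size());
                return shards.get(index);
            }
        }

        // A DAO then asks the router for the right DataSource (or Hibernate SessionFactory)
        // before executing its query, so the rest of the persistence code stays shard-agnostic.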

    Read the article

  • Are the technologies used in an application part of the architecture, or do they represent implementation/detailed design details?

    - by m3th0dman
    When designing and writing documentation for a project, the architecture needs to be clearly defined: what are the high-level modules of the system, what are their responsibilities, how do they communicate with each other, what protocols are used, etc. But in this list, should the concrete technologies be specified, or is this actually an implementation detail that needs to be specified at a lower level? For example, consider a distributed application that has two modules which communicate asynchronously via the AMQP protocol, mediated by a message broker. Is the fact that these modules use the Spring AMQP library for sending and receiving messages something that needs to be specified in the architecture, or is it a lower-level detailed-design/implementation detail?
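
    For concreteness, the architectural statement here would be "module A publishes events to module B asynchronously over AMQP via a broker", while the kind of detail in the sketch below (Spring AMQP's RabbitTemplate and a listener, with made-up exchange and queue names) is what would normally live in the lower-level design. This is purely illustrative, not a description of the poster's system.

        import org.springframework.amqp.rabbit.annotation.RabbitListener;
        import org.springframework.amqp.rabbit.core.RabbitTemplate;

        public class OrderEvents {
            private final RabbitTemplate rabbitTemplate;

            public OrderEvents(RabbitTemplate rabbitTemplate) {
                this.rabbitTemplate = rabbitTemplate;
            }

            // Producer side: publish to a (hypothetical) exchange with a routing key.
            public void publishOrderCreated(String orderId) {
                rabbitTemplate.convertAndSend("orders.exchange", "order.created", orderId);
            }

            // Consumer side: the other module listens on a (hypothetical) queue bound to that exchange.
            @RabbitListener(queues = "order-created-queue")
            public void onOrderCreated(String orderId) {
                System.out.println("Processing order " + orderId);
            }
        }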

    Read the article

  • Set secondary receiver in PayPal Chained Payment after the initial transaction

    - by CJxD
    I'm running a service whereby customers seek the services of 'freelancers' through our web platform. The customer makes a 'bid' which is immediately taken from their account as security. Once the job is completed, the customer marks it as accepted and the bid gets distributed to the freelancer(s) as a reward. After initially storing these rewards in the accounts of the freelancers and relying on MassPay to sort out paying them later, I realised that your business needs to be turning over at least £5000/month before MassPay is switched on. Instead, I was referred to Delayed Chained Payments in PayPal's Adaptive Payments API. This allows the customer to pay the primary receiver (my business) before the payment is later triggered to be sent to the secondary receivers (the freelancers). However, at the time the customer initiates this transaction, nobody yet knows who will receive the reward. So, before I program this whole Adaptive Payments system, is it even possible to change or add the secondary receivers after the customer has paid? If not, what can I do?

    Read the article

  • How to distribute a unique database already in production?

    - by JVerstry
    Let's assume a successful web Spring application running on a MySQL or PostgreSQL kind of database. The traffic is becoming so high and the amount of data is becoming so big that a distributed database solution needs to be implemented. It is a scalability issue. Let's assume this application is using Hibernate and the data access layer is cleanly separated with DAO objects. What would be the best strategy to scale this database? Does anyone have hands-on experience to share? Is it possible to minimize sharding code (Shard) in the application? Ideally, one should be able to add or remove databases easily. A failback solution is welcome too. I am not looking for "you could go for sharding" or "you could go NoSQL" kinds of answers. I am looking for deeper answers from people with experience.

    Read the article

  • Load balancing on Ubuntu Server

    - by SabreWolfy
    I have Ubuntu 10.04.4 server (32-bit) installed on a headless quad-core machine with 2GB RAM. I'm running a command-line analysis which is analyzing a large amount of data, but which does not require a large amount of RAM. The tool does not provide any multi-threading, so the CPU load is sitting at 1.00 (or sometimes just a little over). I ran top and pressed 1 to see the load on each of the cores and noticed that "Cpu1" is always running at 100%. I thought that the load would be distributed between the cores, rather than loading one core all the time. I'm sure I've seen this load-balancing behaviour before in Ubuntu or Debian Desktop versions. Why would the Server edition work differently? The analysis will likely take several hours to run, so loading one core at 100% for many hours while the other 3 remain idle is surely not the best approach?

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beyond this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still.

    In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. My shortcut was "Oracle Coherence Cache helps to improve performance". An excellent marketing slogan... but still not very meaningful. By chance I was able to get away from that group quickly in July 2007* at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career in Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that performance improvement with Coherence was about response time, which could be considered legitimate at the time, because after all, caches help reduce latency on cached data access and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake was my perception, or obsession, about how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. We all know that the performance of a system is generally expressed as Capacity (or Throughput), with two important dimensions: Speed (response time) and Volume (load):

        Capacity (TPS) = Volume (T) / Speed (S)

    To increase the Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to management because there's a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. Adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever, because the Speed can be influenced by the Volume. For all systems, the performance picture is the same: in any traditional system, increasing the Volume (transactions) will also increase the Speed (response time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of 100 entries.

    If you need to speed up the execution, you can only upgrade your hardware (scale up) with faster CPUs and/or networking to reduce latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is to design applications that can scale out to handle increasing Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is not just about having an application that runs as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from a programmatic angle (fewer network hops to complete a task) and/or from a hardware angle (faster network equipment).

    In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee:
      - constant data object access time, independent of the number of objects and the Coherence cluster size
      - data object distribution by affinity for in-memory data grouping
      - in-place data processing for parallel execution

    To summarize, Oracle Coherence is indeed useful for improving your application's performance, just not in the way we commonly think. It's not about Speed itself, but about the overall Capacity under extreme load while keeping Speed consistent. In the future I will keep adding new blog entries around this topic, with some sample code and experiences captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, and then start playing with the product through our tutorial. Have fun!
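
    To make the "in-place data processing" point concrete, here is a minimal sketch, assuming the classic Coherence NamedCache/EntryProcessor API (the cache name and key are illustrative). The entry is mutated on the storage node that owns it, rather than being pulled across the network, modified, and put back:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.InvocableMap;
        import com.tangosol.util.processor.AbstractProcessor;

        public class InPlaceUpdateSketch {
            // Increments a counter on the member that owns the entry, so the cost of the
            // operation stays roughly constant regardless of cluster size.
            public static class IncrementProcessor extends AbstractProcessor {
                @Override
                public Object process(InvocableMap.Entry entry) {
                    Integer current = (Integer) entry.getValue();
                    int next = (current == null ? 0 : current) + 1;
                    entry.setValue(next);
                    return next;
                }
            }

            public static void main(String[] args) {
                NamedCache counters = CacheFactory.getCache("counters"); // illustrative cache name
                Object newValue = counters.invoke("page-hits", new IncrementProcessor());
                System.out.println("Counter is now " + newValue);
            }
        }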

    Read the article

  • How to encourage version control adoption

    - by Man Wa kileleshwa
    I have recently started working in a team where there is no version control. Most of the team members are not used to any kind of version control. I've been using Mercurial privately to track my work. I would like to encourage others to adopt it, and at the very least start versioning their code as they develop changes. Can anyone give me advice on how I can encourage adoption of a distributed version control system such as Mercurial? Any advice on how to win people over to DVCS, including managers, would be much appreciated.

    Read the article

  • What makes Erlang suitable for cloud applications?

    - by Duncan
    We are starting a new project and implementing on our corporation's instance of an OpenStack cloud (see http://www.openstack.org/). The project is security tooling for our corporation. We currently run many hundreds of dedicated servers for security tools and are moving them to our corporation's OpenStack deployment. Other projects in my company currently use Erlang in several distributed server applications, and other Q&A posts point out that Erlang is used in several popular cloud services. I am trying to convince others to consider where it might be applicable on our project. What are Erlang's strengths for cloud programming? In what areas is it particularly appropriate to use Erlang?

    Read the article

  • What are options for 3rd Party Centralized Software Settings Management?

    - by Jeff Martin
    I am an architect in an enterprise looking to build a SaaS solution. Our products are distributed over many different deployable containers, web services, web UIs, etc. I am looking for an open-source or third-party software solution to manage the settings of our application. These would be similar to the settings you might find in Word, Eclipse, or Visual Studio. The settings would control various behaviors and features of the product (probably not settings like which database to connect to, but more like whether to show line numbers on the page by default). Ideally, we would be able to store values for different dimensions (by tenant, by user, by application environment...). Because we have so many different deployables, I am looking for a centralized solution that can provide a web service from which each of the deployables can get its individual settings. Does anyone know of a centralized service providing this sort of feature, or can you offer some help in searching for an alternative to rolling our own?
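
    Whatever product or home-grown service ends up behind the web service, the core of it tends to be a lookup keyed by setting name plus a resolution order across dimensions (user, then tenant, then application default). A rough sketch of that contract in plain Java; the names, dimensions, and resolution order are hypothetical:

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Optional;

        // Resolves a setting by checking the most specific dimension first and falling back
        // to broader ones; a centralized settings service would expose this behaviour to
        // each deployable over a web service.
        public class SettingsResolver {
            private final Map<String, String> userSettings = new HashMap<>();
            private final Map<String, String> tenantSettings = new HashMap<>();
            private final Map<String, String> defaults = new HashMap<>();

            public String get(String key) {
                return Optional.ofNullable(userSettings.get(key))
                        .or(() -> Optional.ofNullable(tenantSettings.get(key)))
                        .or(() -> Optional.ofNullable(defaults.get(key)))
                        .orElseThrow(() -> new IllegalArgumentException("Unknown setting: " + key));
            }

            public static void main(String[] args) {
                SettingsResolver resolver = new SettingsResolver();
                resolver.defaults.put("editor.showLineNumbers", "false");
                resolver.tenantSettings.put("editor.showLineNumbers", "true"); // tenant override wins
                System.out.println(resolver.get("editor.showLineNumbers"));    // prints "true"
            }
        }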

    Read the article

  • Implementation of a Rules Engine in Your Business Applications

    - by enonu
    I'm looking for an experience-driven answer from a few software engineers who have implemented a rules engine in their internal business applications. How has it affected your business in the following ways?
      - Ability to launch and iterate over business-driven logic
      - Ability to have "business users" perform the actual modification of those rules rather than developers
      - Ability to comprehend the business rules in general
      - Quality of the software releases (more or fewer bugs from the end user's POV?)
      - Speed of the applications
    If you had to do it all over again, what would you do differently? Lastly, I'm looking for a qualification of your answer with respect to the architecture. Would you do the same thing if you were deploying to a 1-machine setup vs. your architecture vs. a multi-tier cloud-based distributed architecture using 1000s of machines? How would it be different? Thanks!

    Read the article

  • Does Agile (scrum) require one server environment?

    - by Matt W
    Is it necessary/recommended/best practice/otherwise positive to use only one server environment to perform all development, unit testing and QA? If so, is it wise/part of Agile to then have only one staging environment before live? Considering that this could mean internationally distributed teams of developers and testers in different time zones, is this wise? This is something being implemented by our QA manager. The opinion put forward is that doing all the dev and testing on a single server is "Agile." The staging environment would be a second environment, and then live.

    Read the article

  • Mercurial release management. Rejecting changes that fail testing

    - by MYou
    Researching distributed source control management (specifically mercurial). My question is more or less what is the best practice for rejecting entire sets of code that fail testing? Example: A team is working on a hello world program. They have testers and a scheduled release coming up with specific features planned. Upcoming Release: Add feature A Add feature B Add feature C So, the developers make their clones for their features, do the work and merge them into a QA repo for the testers to scrutinize. Let's say the testers report back that "Feature B is incomplete and in fact dangerous", and they would like to retest A and C. End example. What's the best way to do all this so that feature B can easily be removed and you end up with a new repo that contains only feature A and C merged together? Recreate the test repo? Back out B? Other magic?

    Read the article

  • What's a good setup/toolchain for a project?

    - by acidzombie24
    I was thinking, what is needed for a good setup and what are good (free) tools to use? Some of what I came up with:
      - Bug tracking
      - Some good (distributed :P) source control (which means no SVN, fellas)
      - Automated nightly builds or continuous integration (or anything that automates builds and possibly sends emails when there are build errors)
      - A wiki to document decisions, roadmap or milestones
      - Something to back up assets (art, sound, etc.)
    What else? And do you have suggestions for any of the above? I'm pretty much clueless about all of these except for source control.

    Read the article

  • rotating menu with Actors in libgdx

    - by joecks
    I am intending to build a circular menu, with menu items equally distributed around the circle. When clicking on a menu item the circle should rotate so that the selected item is facing the top. I am using libgdx and I am not very familiar with the Actors concept, so I intuitively tried to implement an Actor which draws a texture and then transforms it using Actions, with no success:

        class CircleActor extends Actor {
            @Override
            public void draw(SpriteBatch batch, float parentAlpha) {
                batch.draw(texture1, 100, 100);
            }

            @Override
            public Actor hit(float x, float y) {
                return this;
            }
        }

    and the rotate action:

        CircleActor circleActor = new CircleActor();
        circleActor.action(Forever.$(RotateBy.$(0.1f, 0.1f)));
        // stage.addActor();
        stage.addActor(circleActor);

    The texture is rectangular, but it does not work. 1. What is wrong? 2. Is it a good approach to solve the task? Thanks!

    Read the article

  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming).

    Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement -- it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we can not (easily) prove otherwise.

    So this leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?". It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window where the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread could update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human, or updating the entry based on the current state of the entry. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). For the latter case, the application must assume that the value might potentially be updated before it has a chance to update it. This is almost always the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry, a pessimistic approach.

    The optimistic approach relies on what is sometimes called a "fuzzy read". In other words, the application assumes that the read should be correct, but it also acknowledges that it might not be. (I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction -- however, both definitions are roughly equivalent from an application design perspective). If the read is not correct it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting a stale read is that it will cause the encompassing transaction to roll back if it tries to update that value. The pessimistic approach relies on a "coherent read", a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system).

    In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads -- they can be chained and potentially cause cascading rollbacks).

    The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to define requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if they are critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
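
    As a concrete illustration of the pessimistic option described above, a coherent read can be obtained by locking the entry around the read-modify-write. A minimal sketch assuming the NamedCache explicit-locking API the post alludes to; the cache and key names are illustrative:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class PessimisticReadSketch {
            public static void adjustBalance(String accountId, int delta) {
                NamedCache accounts = CacheFactory.getCache("accounts");
                // Block until we own the entry, so no other member can mutate it between
                // our read and our write (a coherent/repeatable read of this entry).
                accounts.lock(accountId, -1);
                try {
                    Integer balance = (Integer) accounts.get(accountId);
                    accounts.put(accountId, (balance == null ? 0 : balance) + delta);
                } finally {
                    accounts.unlock(accountId);
                }
            }
        }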

    Read the article
