Search Results

Search found 3912 results on 157 pages for 'distributed caching'.

Page 76/157 | < Previous Page | 72 73 74 75 76 77 78 79 80 81 82 83  | Next Page >

  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming).

    Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather of what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement -- it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we cannot (easily) prove otherwise.

    This leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?". It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window where the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread may update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human, or updating the entry based on its current state. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). In the latter case, the application must assume that the value might be updated before it has a chance to update it. This is almost always the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry -- a pessimistic approach.

    The optimistic approach relies on what is sometimes called a "fuzzy read": the application assumes that the read should be correct, but it also acknowledges that it might not be. (I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction -- however, both definitions are roughly equivalent from an application design perspective.) If the read is not correct, it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting one is that it will cause the encompassing transaction to roll back if it tries to update that value. The pessimistic approach relies on a "coherent read", a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system).

    In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads -- they can be chained and potentially cause cascading rollbacks).

    The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to define requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if they are critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
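    As a rough illustration of the optimistic pattern described above, here is a minimal Java sketch that uses a plain java.util.concurrent.ConcurrentMap as a stand-in for the cache; the use of replace() as the compare-and-set primitive is an illustrative assumption, not Coherence's actual API:

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        public class OptimisticIncrement {
            // Stand-in for the distributed cache.
            private final ConcurrentMap<String, Integer> cache = new ConcurrentHashMap<>();

            // Optimistically update an entry: replace() states our assumption about
            // the old value; if it fails, the read was stale, so re-read and retry.
            public void increment(String key) {
                while (true) {
                    Integer old = cache.get(key);               // possibly-stale ("fuzzy") read
                    if (old == null) {
                        if (cache.putIfAbsent(key, 1) == null) return;
                    } else if (cache.replace(key, old, old + 1)) {
                        return;                                 // our assumption held
                    }
                    // Another thread mutated the entry first (a stale read); retry.
                }
            }
        }

    A pessimistic variant would instead lock the entry before reading, trading concurrency for a coherent read.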

    Read the article

  • What's a good setup/toolchain for a project?

    - by acidzombie24
    I was thinking: what is needed for a good setup, and what are good (free) tools to use? Some of what I came up with: bug tracking; some good (distributed :P) source control (which means no SVN, fellas); automated nightly builds or continuous integration (or anything that automates builds and possibly sends emails when there are build errors); a wiki to document decisions, road map or milestones; something to back up assets (art, sound, etc.). What else? And do you have suggestions for any of the above? I'm pretty much clueless about all of these except for source control.

    Read the article

  • JSR updates - October 2013

    - by Heather VanCura
    A handful of JSRs have been making progress in the JCP program--Java SE, Java ME and Java EE JSRs. More to come in the next few weeks! Highlights and links to JSR material below. JSR 337, Java SE 8 Release Contents, published an Early Draft Review. JSR 351, Java Identity API, published an Early Draft Review. JSR 360, Connected Limited Device Configuration (CLDC) 8, passed the EC Public Review Ballot with 21 yes votes. JSR 361, Java ME Embedded Profile, passed the EC Public Review Ballot with 20 yes votes. JSR 107, JCACHE - Java Temporary Caching API, published an update to their JSR Community Update Page. You can find schedule information (plans to submit Proposed Final Draft very soon), Adopt-a-JSR suggestions, and presentation material from JavaOne.

    Read the article

  • What can I do to encourage teams to lighten up? [closed]

    - by Rahul
    I work with a geographically distributed team (different timezones) with people from various cultures and backgrounds. Some of us have never met each other in person, but we communicate with each other over phone, chat and email almost on an hourly basis. Most of our meetings and discussions are dead serious and boring. What's worse, any attempt at humor is not very well received because of cultural differences. I feel that we are all taking our work a bit too seriously. We don't shy away from painful arguments, nasty emails and heated discussions when things go wrong, but we never attempt to develop camaraderie or friendships in better times. I would like to know your experiences with such situations and what, if anything, you did to lighten things up at the workplace.

    Read the article

  • Subdomains vs. subdirectory – status as of 2012.

    - by Quintin Par
    The following question by Jeff is from 2010, and I wanted to check how things have changed in the past 2 years. My problem: I run a site with most of the content distributed to subdomains that are user-based, e.g.: Joe.example.com, John.example.com, Jil.example.com. So all of these subdomains have the content, and the main site example.com becomes a mere dummy listing all the subdomains. Now the question is, as of 2012, how does Google treat domain authority and page rank in this case? I understand the notion of page rank on a per-page basis, but when it comes to domain authority, will the parent domain get the cumulative effect of the domain authority, or will it be spread out?

    Read the article

  • Microsoft Azure Diagnostics Part 1: Introduction

    Having a well thought-out plan for diagnostic data is important for on-premises applications, but it is arguably more important for distributed, highly scalable cloud applications. Michael Collier has provided a clear introduction to Microsoft Azure Diagnostics, including the Diagnostics Agent and how to extract the data.

    Read the article

  • What would one call this architecture?

    - by Chris
    I have developed a distributed test automation system which consists of two different entities. One entity is responsible for triggering test runs and monitoring/displaying their progress. The other is responsible for carrying out tests on that host. Both of these entities retrieve data from a central DB. Now, my first thought is that this is clearly a server-client architecture. After all, you have exactly one organizing entity and many entities that communicate with said entity. However, while the supposed clients communicate with the server via RPC, they are not actually requesting services or information; rather, they are simply reporting back test progress. In fact, once a test run has been triggered, they can complete their tasks without a connection to the server. The request for a service is actually made by the supposed server, which triggers the clients to carry out tests. So would this still be considered a server-client architecture, or is this something different?

    Read the article

  • SQL SERVER Fix : Error : 8501 MSDTC on server is unavailable. Changed database context to publisher

    While configuring replication on one of the servers, I received the following error. This is a very common error and the solution is even simpler. MSDTC on server is unavailable. Changed database context to publisher database. (Microsoft SQL Server, Error: 8501) Solution: Enable Distributed Transaction Coordinator in SQL Server. Method 1: Click on Start, Control Panel, Administrative Tools, Services. Select the service [...]

    Read the article

  • JSR Updates and Inactive JSR ballots

    - by heathervc
    The following JSRs have posted updates in the last week: JSR 331, Constraint Programming API, has posted a Maintenance Draft Review; this review closes 29 September. JSR 352, Batch Applications for the Java Platform, has posted an Early Draft Review; this review closes 29 September. JSR 353, Java API for JSON Processing, has posted an Early Draft Review; this review closes 7 October. Inactive JSRs: the following JSR proposals have been inactive for at least two years and are currently on the EC ballot to be declared Dormant, following a period where the community was given an opportunity to express interest in their continued development: JSR 50, Distributed Real-Time Specification; JSR 282, Real-Time Specification for Java (RTSJ) 1.1; JSR 307, Network Mobility and Mobile Data API; JSR 327, Dynamic Contents Delivery Service API for Java ME; JSR 328, Change Management API.

    Read the article

  • How often should saving to disk occur in an automatically saving text editor?

    - by lelandmiller
    I am developing a simple text editor and would like the application to save the text automatically; in other words, the user would never have to press a save button. I have seen other applications that do this, and was wondering: how often is it safe to write files to disk? From a user experience standpoint, it seems that the more frequently this happens the better, but I am worried about performance and possible disk wear (especially on writes to SSDs). It seems like the operating system's disk caching might help avoid these problems, but I also don't know if it's safe to rely on that for an application like this. I was planning on writing the whole document to disk at each save, but this seems terribly inefficient if the OS ends up writing it to disk too frequently; on the other hand, relying on program unload may lose data in the case of a crash. Does anyone have any experience dealing with this that might be able to help?
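    One common way to reconcile responsiveness with disk wear is to debounce saves and make each write atomic: write to a temporary file, then rename it over the target, so a crash mid-write never leaves a truncated document. A minimal Java sketch of that idea follows; the two-second delay and the temp-file naming are arbitrary assumptions:

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.*;
        import java.util.concurrent.*;

        public class AutoSaver {
            private final Path target;
            private final ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();
            private ScheduledFuture<?> pending;

            public AutoSaver(Path target) { this.target = target; }

            // Call on every edit; bursts of typing coalesce into a single write.
            public synchronized void onEdit(String documentText) {
                if (pending != null) pending.cancel(false);
                pending = scheduler.schedule(() -> save(documentText), 2, TimeUnit.SECONDS);
            }

            // Write to a sibling temp file, then atomically rename over the target.
            private void save(String text) {
                try {
                    Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
                    Files.write(tmp, text.getBytes(StandardCharsets.UTF_8));
                    Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE,
                               StandardCopyOption.REPLACE_EXISTING);
                } catch (IOException e) {
                    e.printStackTrace();   // a real editor would surface this to the user
                }
            }
        }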

    Read the article

  • HTG Explains: How Hackers Take Over Web Sites with SQL Injection / DDoS

    - by Jason Faulkner
    Even if you’ve only loosely followed the events of the hacker groups Anonymous and LulzSec, you’ve probably heard about web sites and services being hacked, like the infamous Sony hacks. Have you ever wondered how they do it? There are a number of tools and techniques that these groups use, and while we’re not trying to give you a manual to do this yourself, it’s useful to understand what’s going on. Two of the attacks you consistently hear about them using are “(Distributed) Denial of Service” (DDoS) and “SQL Injections” (SQLI). Here’s how they work. Image by xkcd
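    For the SQL injection half, the root cause and the standard fix are easy to show in code. A minimal Java/JDBC sketch (the table and column names are made up for illustration):

        import java.sql.*;

        public class UserQueries {
            // VULNERABLE: user input is spliced into the SQL text, so input like
            //   ' OR '1'='1
            // changes the structure of the query itself -- the essence of SQLI.
            static ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
                String sql = "SELECT * FROM users WHERE name = '" + name + "'";
                return conn.createStatement().executeQuery(sql);
            }

            // SAFE: a parameterized query sends the input as data, never as SQL,
            // so an attacker cannot alter the statement's structure.
            static ResultSet findUser(Connection conn, String name) throws SQLException {
                PreparedStatement ps =
                        conn.prepareStatement("SELECT * FROM users WHERE name = ?");
                ps.setString(1, name);
                return ps.executeQuery();
            }
        }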

    Read the article

  • What is the most reliable session storage in PHP: Memcache, database or files?

    - by user1179459
    What is the best and safest way to handle PHP sessions? Is the best way to store sessions in: Database (more reliable, but a bottleneck with slow speed; not good for websites with high database usage)? Memcache (super fast, but distributed, with more security problems, and chances of losing data when the server is restarted or when the cache is full)? Files (the default option; I guess slow, since it reads and writes from file I/O; less security, etc.)? Which method is the best? What are the problems and good things of each of those approaches?

    Read the article

  • As a programmer what single discovery has given you the greatest boost in productivity?

    - by ChrisInCambo
    This question has been inspired by my recent discovery/adoption of distributed version control. I started using it (Mercurial) just because I liked the idea of still being able to make commits at times when I couldn't connect to the central server. I never expected it would give me a large boost in general productivity, but a pleasant side effect I discovered was that making a new clone every time I started a new task, and giving that clone a descriptive folder name, is extremely effective at keeping me on task, resulting in a noticeable productivity increase. So, as a programmer, what single discovery has given you the greatest boost in productivity? Extra respect for answers which involve tools or practices that aren't so obvious from the outside!

    Read the article

  • Using static in PHP

    - by nischayn22
    I have a few functions in PHP that read data from methods of a class:

        function readUsername($userId) {
            $reader = getReader();
            return $reader->getName($userId);
        }

        function readUserAddress($userId) {
            $reader = getReader();
            return $reader->getAddress($userId);
        }

    All of these call getReader():

        function getReader() {
            require_once("Reader.php");
            static $reader = null;        // persists across calls to this function
            if ($reader === null) {
                $reader = new Reader();   // created only on the first call
            }
            return $reader;
        }

    An overview of Reader:

        class Reader {
            public function getName($id) {
                // if an in-memory cache exists for this id, return that;
                // else fetch from the db and cache it
            }
            public function getAddress($id) {
                $name = $this->getName($id);
                // derive the address from the name here
            }
            /* Other stuff */
        }

    Why is class Reader needed? The Reader class does some in-memory caching of user details, so I need only one object of class Reader, and it will cache the user details instead of making multiple db calls. I am using static so that the object gets created only once. Is this the right approach or should I do something else?

    Read the article

  • ASP.NET MVC Portable Areas - Can they communicate and be used as a plugin-like architecture?

    - by Beton
    I'll get straight to the point: I was wondering if there is a common pattern for using portable areas as components of a plugin-like architecture. Example: we've got 3 plugins (portable areas) packaged and distributed via a NuGet feed. Each of them follows the standard MVC structure (has its own Models, Views and Controllers): let's say a login form, a header and a footer. What I was wondering is whether there is a way to make them communicate. For example: when a user logs on, the login plugin executes its own logic, logs the user in, and then updates the state of the header plugin, which changes its state accordingly. Thanks in advance.
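    One decoupled pattern that fits here is an event aggregator: the host application exposes a bus, and plugins publish and subscribe to named events without referencing each other directly. A minimal sketch of the pattern follows (in Java rather than C#, with all names invented, purely to show the shape):

        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.CopyOnWriteArrayList;
        import java.util.function.Consumer;

        public class EventBus {
            private final Map<String, List<Consumer<Object>>> handlers =
                    new ConcurrentHashMap<>();

            public void subscribe(String event, Consumer<Object> handler) {
                handlers.computeIfAbsent(event, e -> new CopyOnWriteArrayList<>())
                        .add(handler);
            }

            public void publish(String event, Object payload) {
                handlers.getOrDefault(event, List.of()).forEach(h -> h.accept(payload));
            }
        }

        // Usage: the header plugin reacts to logins without a compile-time
        // dependency on the login plugin.
        //   bus.subscribe("userLoggedIn", user -> header.showUser((String) user));
        //   bus.publish("userLoggedIn", "jdoe");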

    Read the article

  • How can I obtain in-game data from Warcraft 3 from an external process?

    - by Slav
    I am implementing a behavior algorithm and would like to test it with my lovely Warcraft III game to watch how it fights against real players. The problem I'm having is that I don't know how to obtain information about in-game state (units, structures, environment, etc.) from the running WC3 game. My algorithm needs access to the hard drive and possibly distributed computing, which is why JASS (WC3's editor language) isn't appropriate; I need to run my algorithm from a separate process. Direct3D hooking is one approach, but it hasn't been done for WC3 yet, and a significant drawback of that approach would be the inability to watch how the AI performs online, since it uses the viewport to issue commands. How can I read in-game data from WC3 from a different process in the fastest and easiest way?

    Read the article

  • Is it reasonable for REST resources to be singular and plural?

    - by Evan
    I have been wondering if, rather than a more traditional layout like this:

        api/Products
            GET    // gets product(s) by id
            PUT    // updates product(s) by id
            DELETE // deletes product(s) by id
            POST   // creates product(s)

    would it be more useful to have a singular and a plural, for example:

        api/Product
            GET    // gets a product by id
            PUT    // updates a product by id
            DELETE // deletes a product by id
            POST   // creates a product

        api/Products
            GET    // gets a collection of products by id
            PUT    // updates a collection of products by id
            DELETE // deletes a collection of products (not the products themselves)
            POST   // creates a collection of products based on filter parameters passed

    So, to create a collection of products you might do:

        POST api/Products
        {data: filters}    // returns api/Products/<id>

    And then, to reference it, you might do:

        GET api/Products/<id>    // returns array of products

    In my opinion, the main advantage of doing things this way is that it allows for easy caching of collections of products. One might, for example, put a lifetime of an hour on collections of products, thus drastically reducing the calls on a server. Of course, I currently only see the good side of doing things this way, what's the downside?
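    To make the collection-as-resource idea concrete, here is a minimal JAX-RS sketch; the class names, the in-memory store and the stubbed filter query are all illustrative assumptions:

        import java.net.URI;
        import java.util.*;
        import java.util.concurrent.ConcurrentHashMap;
        import javax.ws.rs.*;
        import javax.ws.rs.core.*;

        @Path("/api/Products")
        public class ProductCollections {
            // In-memory store of named collections; a real service would persist
            // these (and could expire them after their cache lifetime).
            private static final Map<String, List<String>> COLLECTIONS =
                    new ConcurrentHashMap<>();

            // POST api/Products {data: filters} -> 201 with Location: api/Products/<id>
            @POST
            @Consumes(MediaType.APPLICATION_JSON)
            public Response create(Map<String, String> filters) {
                String id = UUID.randomUUID().toString();
                COLLECTIONS.put(id, findProductIds(filters));  // snapshot of matches
                return Response.created(URI.create("/api/Products/" + id)).build();
            }

            // GET api/Products/<id> -> the frozen collection, cheap to cache
            @GET
            @Path("{id}")
            @Produces(MediaType.APPLICATION_JSON)
            public List<String> get(@PathParam("id") String id) {
                List<String> ids = COLLECTIONS.get(id);
                if (ids == null) throw new NotFoundException();
                return ids;
            }

            // Stub; a real implementation would query the product catalog.
            private static List<String> findProductIds(Map<String, String> filters) {
                return List.of("p1", "p2");
            }
        }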

    Read the article

  • Can I install Ubuntu 12.04 on MacBook 6.1 (Late 2009) on my external hard drive?

    - by tommywinarta
    I have no experience installing Linux-based OSes on a MacBook, but I already have Windows installed on my Mac. I read some articles saying I can have Lucid Lynx installed on my MacBook 6.1, and luckily I already got the CD (which was distributed for free back then, haha). My question is: can I install 12.04 instead of 10.04, and what do I need to do that? I would also like to know whether I can install it on my external hard drive, just like installing it on a USB stick. I have viewed the how-to for installing Lucid Lynx; is it just the same? Thanks in advance!

    Read the article

  • Syncing Files between workgroup server and Ubuntu workstation

    - by dotdawtdaught
    Recently I decided that I can't make Windows 8 the primary OS on my laptop, as it is just too cumbersome to deal with. I made the switch to Ubuntu, and so far so good. Using Windows, I have been able to cache folders on my workgroup server using a feature called "Client Side Cache" that allows me to take a copy of my personal files offline while I am in the field; later, when I return, any changes get pushed up to the server and my local cache is refreshed. This feature is completely client-driven, although characteristics of it (who and what can be cached, and whether caching is automatic) can be controlled via a policy assigned as part of a directory membership. Can anyone suggest a Linux replacement for this feature? Is there a better way of handling this?

    Read the article

  • Can I copy large files faster without using the file cache?

    - by Veazer
    After adding the preload package, my applications seem to speed up but if I copy a large file, the file cache grows by more than double the size of the file. By transferring a single 3-4 GB virtualbox image or video file to an external drive, this huge cache seems to remove all the preloaded applications from memory, leading to increased load times and general performance drops. Is there a way to copy large, multi-gigabyte files without caching them (i.e. bypassing the file cache)? Or a way to whitelist or blacklist specific folders from being cached?

    Read the article

  • Azure Florida Association: Modern Architecture for Elastic Azure Applications

    - by Herve Roggero
    Join us on November 28th at 7PM, US Eastern Time, to hear Zachary Gramana talk about elastic scale on Windows Azure. REGISTER HERE: https://www3.gotomeeting.com/register/657038102 Description: Building horizontally scalable applications can be challenging. If you face the need to rapidly scale or adjust to high load variations, then you are left with little choice. Azure provides a fantastic platform for building elastic applications. Combined with recent advances in browser capabilities, some older architectural patterns have become relevant again. We will dust off one of them, the client-server architecture, and show how we can use its modern incarnation to bypass a class of problems normally encountered with distributed ASP.NET applications.

    Read the article

  • Creating sites with local IPs pointing to a distant server.

    - by fatnjazzy
    Hi. We are a company distributed across several places in Europe (real offices). Each office has its own domain: company.de, company.co.uk, company.ch, and so on. Our website servers are located in one place, and we can't distribute our site to different locations. How can we create a local IP in each location that points to our main server, so that Google will see us as having a local IP? Explanation: Google has decided to increase your PR if you have a local IP; they think that if you bought a server in a local market, it means that you are very serious about your business. We have 8 employees in each office and can't have a separate server. Does that mean we are not serious about our business? No, and this is why I need to create this illusion. Thanks

    Read the article

  • Handling (many) multiple projects in Git in an enterprise environment

    - by Michael K
    One of the advantages of older version control systems such as CVS and SVN in enterprise development is that anyone can connect to source control and see all the projects that the company has. This can make it easier to get a high-level view of what kind of development is happening outside your sprint, and it also keeps everything in one place and easy to find. However, distributed version control systems (Git, specifically) use the repository as their base unit. They work best with one project (or several closely related projects) per repository. This makes repository management more difficult in most enterprise environments, where it is not unusual to have more than 25-50 projects to support. As far as I have been able to determine, you have to keep a list somewhere else of all the repos you have. There is software available, like GitHub, that helps, but that is still an extra step beyond a single connection string and listing the contents of the repository. What is the best way to deal with the complexity of multiple repositories?

    Read the article

  • ASP.NET Performance Framework

    At the start of the year, I finished a 5 part series on ASP.NET performance - focusing on largely generic ways to improve website performance rather than specific ASP.NET performance tricks. The series focused on a number of topics, including merging and shrinking files, using modules to remove unnecessary headers and set caching headers, enabling cache busting and automatically generating cache-busted references in css, as well as an introduction to nginx. Yesterday I managed to put a number...
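    As one concrete example of the cache-busting idea mentioned in the series, here is a hedged Java sketch that derives a version token from an asset's content, so the asset can be served with a far-future Cache-Control header; the URL scheme and token length are arbitrary choices:

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.security.MessageDigest;

        public class CacheBuster {
            // Build a reference like "/css/site.css?v=9f86d081"; changing the file
            // changes the token, forcing clients to re-fetch despite long cache lifetimes.
            static String bustedUrl(String urlPath, Path file) throws Exception {
                byte[] digest = MessageDigest.getInstance("SHA-256")
                                             .digest(Files.readAllBytes(file));
                StringBuilder token = new StringBuilder();
                for (int i = 0; i < 4; i++) {
                    token.append(String.format("%02x", digest[i]));
                }
                return urlPath + "?v=" + token;
            }

            public static void main(String[] args) throws Exception {
                System.out.println(bustedUrl("/css/site.css", Paths.get("site.css")));
            }
        }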

    Read the article

  • Counting product releases if you work on the backend/online services?

    - by stackoverflowuser2010
    I am trying to update my resume, and I would like to count the number of "product releases" that I was directly involved in at a company; it would seem to serve as a performance metric. The problem is that I was working on the backend of a very large distributed system, along the lines of Hadoop or another huge database. We had regular 6-month major releases and other minor releases. My manager kept saying that we "shipped" these releases, but "shipping" a product to me sounds like releasing a single piece of software, like Microsoft shipping Office 11 or something. Any ideas on "product releases" for backend service engineers, or any other type of performance metric?

    Read the article
