Search Results

Search found 60391 results on 2416 pages for 'data generation'.


  • Statements of direction for EPM 11.1.1.x series products

    - by THE
    Some of the older parts of EPM that have been replaced with newer software will be phased out after January 2013. For most of these the 11.1.1.x series will be the last release; they will then only be supported via sustaining support (see policy). We have notes about:
      • the Essbase Excel Add-in (replaced by SmartView, which nearly achieved functional parity with release 11.1.2.1.102): Oracle Essbase Spreadsheet Add-in Statement of Direction (Doc ID 1466700.1)
      • Hyperion Data Integration Management (replaced by Oracle Data Integrator (ODI)): Hyperion Data Integration Management Statement of Direction (Doc ID 1267051.1)
      • Hyperion Enterprise and Enterprise Reporting (replaced by HFM): Hyperion Enterprise and Hyperion Enterprise Reporting Statement of Direction (Doc ID 1396504.1)
      • Hyperion Business Rules (replaced by Calculation Manager): Hyperion Business Rules Statement of Direction (Doc ID 1448421.1)
      • Oracle Visual Explorer (this one already phased out in June 2011 - just in case anyone missed it): Oracle Essbase Visual Explorer Statement of Direction (Doc ID 1327945.1)
    For a complete list of the supported lifetimes, please review the "Oracle Lifetime Support Policy for Applications".

    Read the article

  • Tips for adapting Date table to Power View forecasting #powerview #powerbi

    - by Marco Russo (SQLBI)
    During the keynote of the PASS Business Analytics Conference, Amir Netz presented the new forecasting capabilities in Power View for Office 365. I immediately tried the new feature (which was immediately available, a welcome surprise in a Microsoft announcement for a new release) and I had several issues trying to use existing data models. The forecasting has a few requirements that are not compatible with the “best practices” commonly used for a calendar table until this announcement. For example, if you have a Year-Month-Day hierarchy and you want to display a line chart aggregating data at the month level, you use a column containing month and year as a string (e.g. May 2014) sorted by a numeric column (such as 201405). Such a column cannot be used in the x-axis of a line chart for forecasting, because you need a date or numeric column. There are also other requirements and I wrote the article Prepare Data for Power View Forecasting in Power BI on SQLBI, describing how to create columns that can be used with the new forecasting capabilities in Power View for Office 365.

    Read the article

  • Building a chat-like functionality in iOS

    - by Mani BAtra
    I am planning to implement functionality where a user can send data to a friend, similar to sending messages in WhatsApp. This is how I broke the problem down:
      • The user registers for the app. This equates to user info being stored on a dedicated server, with the phone number as the key identifier.
      • The user selects the friend to send a message to and pushes the data.
      • The receiver polls the server regularly and acknowledges that the data has been received.
    I did a little research and am thinking of implementing this using the XMPP Framework for iOS. Any pointers as to whether this is the correct approach, or any advice in general?

    Read the article

  • Google Cloud Messaging (GCM) for turn-based mobile multiplayer server?

    - by Chris
    I'm designing a multiplayer turn-based game for Android (over 3g). I'm thinking the clients will send data to a central server over a socket or http, and receive data via GCM push messaging. I'd like to know if anyone has practical experience with GCM for pushing 'real-time' turn data to game clients. What kind of performance and limitations does it have? I'm also considering using a RESTful approach with GAE or Amazon EC2. Any advice about these approaches is appreciated.

    Read the article

  • Google I/O 2010 - Making smart & scalable Wave robots

    Wave 201 - David Byttow, Marcel Prasetya. A smart robot must be able to store persistent data. Wave robots can store data in wave structures, like wavelets, datadocs, and annotations, instead of traditional datastores. A scalable robot must perform operations with minimal bandwidth. Wave robots can optimize by selecting the appropriate amount of context, the optimal events, and narrow filters for events. In this talk, we'll share best practices on data storage and scaling. For all I/O 2010 sessions, please go to code.google.com.

    Read the article

  • OBI already has a caching mechanism in presentation layer and BI server layer. How is the new in-memory caching better for performance?

    - by Varun
    Question: OBI already has a caching mechanism in the presentation layer and the BI Server layer. How is the new in-memory caching better for performance?
    Answer: OBI caching only speeds up what has been seen before. An in-memory data structure generated by the Summary Advisor is optimized to provide maximum value by accounting for the expected broad usage and drill-downs. It is possible to adapt the in-memory data to seasonality by running the Summary Advisor on specific workloads. Moreover, the in-memory data is created in an analytic database, providing maximum performance for the large amount of memory available.

    Read the article

  • Load Texture From Image Content In Runtime

    - by Austin Brunkhorst
    Basically I wrote a world editor for a game I'm working on. Looking ahead, I was brainstorming ways to save the created world, including the tile-sets (this game will rely on a tile engine). I was hoping to save the image data of each tile-set in the same file containing the tile positions, etc., and load the image data into a Texture with XNA. Is it possible? Something like this is what I'm going for:

        Texture2D tileset = Content.LoadFromString<Texture2D>("png tileset data");
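
    There is no ContentManager.LoadFromString method, so the line above is only pseudocode. A minimal sketch, assuming XNA 4.0 and that the tile-set's PNG bytes have already been read out of the world file into a byte array (the method and variable names here are made up), would be to bypass the content pipeline with Texture2D.FromStream:

        using System.IO;
        using Microsoft.Xna.Framework.Graphics;

        // Builds a Texture2D at runtime from raw PNG bytes stored in the world file.
        // 'graphicsDevice' is the game's GraphicsDevice; 'pngBytes' is the image data
        // extracted from the saved world (both assumed to exist in the caller).
        static Texture2D LoadTilesetFromBytes(GraphicsDevice graphicsDevice, byte[] pngBytes)
        {
            using (var stream = new MemoryStream(pngBytes))
            {
                // Texture2D.FromStream decodes PNG/JPEG data without the content pipeline.
                return Texture2D.FromStream(graphicsDevice, stream);
            }
        }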

    Read the article

  • How to structure a set of RESTful URLs

    - by meetamit
    Kind of a REST lightweight here... Wondering which URL scheme is more appropriate for a stock market data app (BTW, all queries will be GETs as the client doesn't modify data):
    Scheme 1 examples:
        /stocks/ABC/news
        /indexes/XYZ/news
        /stocks/ABC/time_series/daily
        /stocks/ABC/time_series/weekly
        /groups/ABC/time_series/daily
        /groups/ABC/time_series/weekly
    Scheme 2 examples:
        /news/stock/ABC
        /news/index/XYZ
        /time_series/stock/ABC/daily
        /time_series/stock/ABC/weekly
        /time_series/index/XYZ/daily
        /time_series/index/XYZ/weekly
    Scheme 3 examples:
        /news/stock/ABC
        /news/index/XYZ
        /time_series/daily/stock/ABC
        /time_series/weekly/stock/ABC
        /time_series/daily/index/XYZ
        /time_series/weekly/index/XYZ
    Scheme 4: Something else???
    The point is that for any data being requested, the URL needs to encapsulate whether an item is a stock or an index. And, with all the RESTful talk about resources, I'm confused about whether my primary resource is the stock & index or the time_series & news. Sorry if this is a silly question :/ Thanks!
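
    Purely as an illustration of Scheme 1 (not a recommendation; the controller, route, and return types below are hypothetical), this is roughly how it could map onto ASP.NET Web API attribute routing, with the stock as the primary resource and news and time series as sub-resources:

        using System.Collections.Generic;
        using System.Web.Http;

        [RoutePrefix("stocks")]
        public class StocksController : ApiController
        {
            // GET /stocks/ABC/news
            [HttpGet, Route("{symbol}/news")]
            public IEnumerable<string> GetNews(string symbol)
            {
                return new string[0];   // placeholder: look up news items for the stock
            }

            // GET /stocks/ABC/time_series/daily  (or /weekly)
            [HttpGet, Route("{symbol}/time_series/{interval}")]
            public IEnumerable<decimal> GetTimeSeries(string symbol, string interval)
            {
                return new decimal[0];  // placeholder: return the requested series
            }
        }

    An IndexesController with the same routes under /indexes would cover the index flavour of each resource.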

    Read the article

  • Sending JSON to an ASP.NET MVC Action Method Argument

    Javier G Money Lozano, one of the good folks involved with C4MVC, recently wrote a blog post on posting JSON (JavaScript Object Notation) encoded data to an MVC controller action. In his post, he describes an interesting approach of using a custom model binder to bind sent JSON data to an argument of an action method. Unfortunately, his sample left out the custom model binder and only demonstrates how to retrieve JSON data sent from a controller action, not how to send the JSON to the action method.
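
    Since the model binder itself was omitted from the post, here is a hedged sketch of the general technique (a reconstruction of the idea, not Javier's actual code): a custom IModelBinder that deserializes the JSON request body into the action method's argument type.

        using System.IO;
        using System.Web.Mvc;
        using System.Web.Script.Serialization;

        // Binds a JSON-encoded request body to an action method argument.
        public class JsonModelBinder : IModelBinder
        {
            public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
            {
                var request = controllerContext.HttpContext.Request;
                request.InputStream.Position = 0;
                string json = new StreamReader(request.InputStream).ReadToEnd();

                // Deserialize into whatever type the action method argument declares.
                return new JavaScriptSerializer().Deserialize(json, bindingContext.ModelType);
            }
        }

    The binder would then be registered at application start for the model type in question, e.g. ModelBinders.Binders.Add(typeof(Widget), new JsonModelBinder()), where Widget stands in for the action argument's type.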

    Read the article

  • Tornado Tracks Highlights 61 Years of Tornado Activity [Wallpaper]

    - by Jason Fitzpatrick
    This eye-catching image maps 61 years’ worth of storm data over the continental United States. It’s a neat way to see the frequency and intensity of tornadoes, and it is available in wallpaper-friendly resolutions. John Nelson took 61 years of data from government sources like the NOAA and compiled it into a visualization. You can read more about the methodology behind the image at the link below, or jump right to Flickr to grab a high-res image for your desktop. Tornado Tracks [via Neatorama]

    Read the article

  • What is a business problem?

    - by Juha Untinen
    Can you give an example or two of what a business problem is? I hear this term thrown around a lot, but even after searching, I cannot find a clear answer, especially when it comes to software development. Are any of these classified as a business problem?
      • Convert an invoice from format A to format B
      • Send data from Company A's database to Company B's database
      • Create a program for time reporting
      • Make the financial software retrieve data faster
      • Generate a weekly data transfer volume statistic
    Or is a business problem something else?

    Read the article

  • Java: Reflection Packet Builder using getField()

    - by Matchlighter
    So I just finished writing a packet builder that dynamically loads data into a data stream which is then sent out. Each builder operates by finding fields in its class (and its superclasses) that are marked with an @data annotation. Upon finishing the builder, I remembered that getFields() does not return fields in "any specific order". I quite like my builder because it allows for quite simple, yet hard-typed packets. Could this implementation be a problem? What would be the best next step that keeps the simplicity - alphabetical sorting of the fields?

    Read the article

  • What is the practical use of IBOs / degenerate vertex in OpenGL?

    - by 0xFAIL
    Vertices in 3D models CAN get cut (degenerate vertices) in the process of optimizing 3D geometry by 3D graphics software (Blender, ...) when exporting, because they aren't needed when a vertex is reused for multiple triangles. (In the current case the 3D data is exported from Blender as .ply and read by a simple application that displays the 3D model.) Every vertex has a few attributes like position, color, normal, tangent, ... But the data for each vertex that is cut through vertex sharing is lost and is missing in the vertex shader. Modern shader techniques like bump or normal mapping require normals/tangents per vertex, which are also cut. Does that mean IBOs must not be used for complex shader techniques? Or is there a way to use IBOs and retain the per-vertex data that was originally lost?

    Read the article

  • Chart Chooser Helps You Pick the Right Chart for the Job

    - by Jason Fitzpatrick
    If you’re not sure what kind of chart would best showcase the data you’re presenting, Chart Chooser makes short work of narrowing it down. Are you trying to showcase trends? Compare the composition of sets? Show distributions and trends together? By selecting what you’re trying to highlight, Chart Chooser automatically narrows the pool of chart types to show which would effectively achieve your end. Once you’ve narrowed it down to the chart type you want, you can even download an Excel template for that chart type and populate it with your own data. Hit up the link below to take it for a spin and grab some free templates. Chart Chooser [via Flowing Data]

    Read the article

  • Returning Images from ASP.NET Web API

    - by bipinjoshi
    Sometimes you need to save and retrieve image data in SQL Server as a part of Web API functionality. A common approach is to save images as physical image files on the web server and then store the image URL in a SQL Server database. However, at times you need to store image data directly in a SQL Server database rather than the image URL. When dealing with the latter scenario, you need to read images from the database and then return this image data from your Web API. This article shows the steps involved in this process. http://www.bipinjoshi.net/articles/4b9922c3-0982-4e8f-812c-488ff4dbd507.aspx
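
    The linked article has the full walkthrough; as a rough sketch of the returning side only (the repository call and the hard-coded PNG content type are assumptions for illustration), a Web API action can wrap the stored bytes in an HttpResponseMessage:

        using System.Net;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Web.Http;

        public class ImagesController : ApiController
        {
            // GET api/images/5 - returns the image stored in SQL Server for the given id.
            public HttpResponseMessage Get(int id)
            {
                // Hypothetical helper that reads the image's VARBINARY column for this id.
                byte[] imageBytes = ImageRepository.GetImageBytes(id);
                if (imageBytes == null)
                    return Request.CreateResponse(HttpStatusCode.NotFound);

                var response = Request.CreateResponse(HttpStatusCode.OK);
                response.Content = new ByteArrayContent(imageBytes);
                response.Content.Headers.ContentType = new MediaTypeHeaderValue("image/png");
                return response;
            }
        }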

    Read the article

  • Oracle GoldenGate 11g Certified Implementation Beta Exam Available

    - by Irem Radzik
    We have great news for Oracle Data Integration partners: the Oracle GoldenGate 11g Certified Implementation Beta Exam is now available. The Oracle GoldenGate 11g Certified Implementation Exam Essentials (1Z1-481) exam is designed for individuals who possess a strong foundation and expertise in selling and implementing Oracle Data Integration 11g solutions. This certification covers topics such as: Oracle GoldenGate 11g Overview, Architecture Overview, Configuring Oracle GoldenGate Parameters, Mapping and Transformation Overview, Configuration Options, and Managing and Monitoring Oracle GoldenGate 11g. This certification helps OPN members differentiate themselves in the marketplace through proven in-depth expertise and helps their partner company qualify for the Oracle Data Integration 11g Specialization criteria. We recommend up-to-date training and field experience. OPN members earning this certification will be recognized as OPN Certified Specialists. Request a discounted beta voucher today using the OPN Beta Certified Specialist Exam Voucher Request Form. You can take the exam now at a nearby Pearson VUE testing center.

    Read the article

  • Google I/O 2012 - Making Google Product Search Work for You Using the Content API for Shopping

    Mayuresh Saoji, Danny Hermes. To get the best out of product search, merchants need to provide complete and accurate product information, as well as fresh price and availability data for all products. This session will provide merchants with concrete steps they can take to improve their data quality using the Content API for Shopping. We will provide details on when it makes sense to use the Content API to submit data (as opposed to Feeds), and how to use the API. We will also go into details on how to debug API requests and errors, and talk about general best practices to follow in order to use the API optimally and efficiently. For all I/O 2012 sessions, go to developers.google.com.

    Read the article

  • Cost of maintenance depending on paradigms

    - by Anto
    Is there any data on which paradigms allow for code that is easier/cheaper to maintain? Certainly, independently of the chosen paradigm, good design is cheaper to maintain than bad, but there should probably be major differences coming just from the paradigm choice. Unstructured programming, for instance, generates very messy code (spaghetti code) which is expensive to maintain. In object-oriented programming, implementation details are hidden, and thus it should be pretty cheap to change them. In functional programming, there are no side effects, thus there is less risk of introducing bugs during maintenance, which should make it cheaper. Is there any data on which paradigms are the most cost-efficient when it comes down to maintenance? If no such data exists, what is your take on the question?

    Read the article

  • Real Time Monitoring System using .net [closed]

    - by sameer
    I need to develop an application that displays a dashboard where data fetched from various SQL databases on different servers is shown. This needs to happen in near real time; we can have a refresh interval of, say, 5 minutes. Here is my thought process; please suggest if anything is wrong:
      1) Develop a Windows Service to accumulate the data from the various SQL Server instances.
      2) Persist those details into a SQL database, from which the dashboard will be displayed on the web page.
      3) The fetch of data by the Windows Service will be triggered every x minutes.
      4) The SQL Server instance details will be stored in a SQL database which the Windows Service will refer to.
    Does this approach make sense? Thanks..
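
    A minimal sketch of points 1 and 3, assuming a classic .NET Windows Service that polls on a timer (the service name, the 5-minute interval, and the CollectAndPersist placeholder are illustrative, not a prescribed design):

        using System.ServiceProcess;
        using System.Timers;

        // Polls the configured SQL Server instances on a fixed interval and
        // persists the results to the central dashboard database.
        public class MonitoringService : ServiceBase
        {
            private Timer _timer;

            protected override void OnStart(string[] args)
            {
                _timer = new Timer(5 * 60 * 1000);   // 5-minute refresh interval
                _timer.Elapsed += (sender, e) => CollectAndPersist();
                _timer.Start();
            }

            protected override void OnStop()
            {
                _timer.Stop();
                _timer.Dispose();
            }

            private void CollectAndPersist()
            {
                // Placeholder: query each registered SQL Server instance here and
                // write the collected details into the dashboard database.
            }

            public static void Main()
            {
                ServiceBase.Run(new MonitoringService());
            }
        }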

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I am most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still.

    In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve Performance". Excellent marketing slogan... but still not very meaningful. By chance I was able to get away quickly from that group in July 2007* at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career in Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that performance improvement with Coherence was related to the response time, which can be considered legitimate at that time, because after all caches help to reduce latency on cached data access, and hence reduce the response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially rewrite your application in order to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake I made was my perception, or obsession, about how performance improvement should be driven, but I strongly believe this is still a common problem for most developers. In fact we all know that the performance of a system is generally presented by the Capacity (or Throughput), with the two important dimensions Speed (response time) and Volume (load):

        Capacity (TPS) = Volume (T) / Speed (S)

    For example, 50 transactions in flight with a 0.5-second response time give a capacity of 100 TPS. To increase the Capacity, we can either reduce the Speed (in terms of response time), or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there is a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

    The challenge with the Speed improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever because... the Speed can be influenced by the Volume. For all systems, the performance curve looks the same: in any traditional system, the increase in Volume (Transactions) will also increase the Speed (Response Time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time compared to 100 entries.

    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce network latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is designing applications which can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is actually not just about having an application running as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is in general combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from a hardware angle (faster network equipment).

    In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee:
      - Constant data object access time, independently of the number of objects and the Coherence cluster size
      - Data object distribution by affinity for in-memory data grouping
      - In-place data processing for parallel execution

    To summarize, Oracle Coherence is indeed useful to improve your application performance, just not in the way we commonly think. It is not about the Speed itself, but about the overall Capacity under extreme load while keeping consistent Speed. In the future I will keep adding new blog entries around this topic, with some sample code and experience sharing that I have captured in the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence first, then start playing with the product through our tutorial. Have fun!

    Read the article

  • Sales tracker that allows complex queries?

    - by feklee
    On a site, every click on a product should be registered by a sales tracker: price, type, etc. The sales tracker should provide an API so that complex queries can be performed, such as: which products of type "teapot" had a price below 20 EUR? Requirements:
      • Recorded data should be available for querying no later than two hours after it has been recorded. For example, there are reports that Google Analytics may take up to 24h to update data. That is not acceptable.
      • Querying doesn't need to be fast, but recording does (of course).
    Which sales tracker allows complex queries against collected data?

    Read the article

  • Upgraded to 12.04 now wifi doesn't work

    - by Benito Kestelman
    My laptop's wifi stopped working when I upgraded to Ubuntu 12.04 (wired works). I just reinstalled 12.04 over my old 12.04, on which wifi didn't work either, in an attempt to restore any settings I may have accidentally changed, but it still doesn't work. I also used a wired connection to install updates in case this bug has been fixed, but it has not.

    Here is the result of sudo lshw -class network:

        *-network
             description: Wireless interface
             product: Centrino Wireless-N + WiMAX 6150
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: wlan0
             version: 67
             serial: 40:25:c2:5f:5b:f4
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-29-generic-pae firmware=41.28.5.1 build 33926 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
             resources: irq:51 memory:de800000-de801fff
        *-network
             description: Ethernet interface
             product: AR8151 v2.0 Gigabit Ethernet
             vendor: Atheros Communications Inc.
             physical id: 0
             bus info: pci@0000:04:00.0
             logical name: eth0
             version: c0
             serial: 14:da:e9:c0:da:78
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI firmware=N/A latency=0 link=no multicast=yes port=twisted pair
             resources: irq:54 memory:dd400000-dd43ffff ioport:a000(size=128)

    Here is rfkill list all:

        0: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: no
        1: asus-wlan: Wireless LAN
            Soft blocked: no
            Hard blocked: no
        2: asus-wimax: WiMAX
            Soft blocked: no
            Hard blocked: no

    lsusb:

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 003: ID 8087:07d6 Intel Corp.
        Bus 001 Device 004: ID 13d3:5710 IMC Networks
        Bus 002 Device 003: ID 045e:0745 Microsoft Corp. Nano Transceiver v1.0 for Bluetooth
        Bus 003 Device 003: ID 0781:5530 SanDisk Corp. Cruzer

    lspci:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
        00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
        00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b5)
        00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b5)
        00:1c.5 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 6 (rev b5)
        00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
        00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05)
        00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05)
        00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
        02:00.0 Network controller: Intel Corporation Centrino Wireless-N + WiMAX 6150 (rev 67)
        03:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
        04:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0)

    Read the article

  • Document Management System

    - by rjayavrp
    Is there any document management system for Ubuntu? I tried Alfresco, RavenDB, Owl, and Document Manager. Alfresco and RavenDB are heavy, more than my requirements. Owl has source issues. Document Manager I'm still trying to install. My requirements:
      • Should keep data on the same machine, as I am looking for something more for internal purposes
      • Should allow uploading Zip files as well; if it extracts the Zip, that will be a great plus
      • Should allow sending email to preconfigured email addresses
      • Should allow uploading data of around 100MB in one go
      • Should maintain a history of documents, including deleted documents
      • Should allow role-based document access
      • Should be free :)
    It should not do any spoofing on the data; the documents are confidential. Please share your knowledge. Thanks.

    Read the article

  • Sending state diffs (deltas) and unreliable connections

    - by spaceOwl
    We're building a realtime multiplayer game, in which each player is responsible for reporting its state on every iteration of the game loop. The state updates are broadcast using unreliable UDP. To minimize the state data sent, we've come up with a system that sends only deltas (whatever state data has changed). This method, however, is flawed, since a lost packet means that other players will not receive the delta, making the game behave in an unexpected way. For example, assume that the state is comprised of: { positionX, positionY, health }
      Frame 1 - positionX changed --> send a packet with positionX only.
      Frame 2 - health changed // lost!
      Frame 3 - positionY changed --> send a packet with positionY only. // Other players don't know about the health change.
    How can one overcome this issue, then? Sending the entire data is not always feasible.
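
    For illustration only, here is a minimal sketch of the delta packet described above (the bitmask layout and the sequence number are assumptions added for the example, not part of the original design); the sequence number is one common way to let a receiver notice that a delta was dropped and fall back to requesting a full state update:

        using System.IO;

        // Delta packet for the example state { positionX, positionY, health }.
        // A bitmask records which fields changed; only those values are written.
        public class PlayerStateDelta
        {
            public ushort Sequence;     // incremented on every send; gaps reveal lost packets
            public float? PositionX;    // null means "unchanged this frame"
            public float? PositionY;
            public int? Health;

            public byte[] Serialize()
            {
                using (var ms = new MemoryStream())
                using (var writer = new BinaryWriter(ms))
                {
                    byte dirtyMask = 0;
                    if (PositionX.HasValue) dirtyMask |= 0x01;
                    if (PositionY.HasValue) dirtyMask |= 0x02;
                    if (Health.HasValue)    dirtyMask |= 0x04;

                    writer.Write(Sequence);
                    writer.Write(dirtyMask);
                    if (PositionX.HasValue) writer.Write(PositionX.Value);
                    if (PositionY.HasValue) writer.Write(PositionY.Value);
                    if (Health.HasValue)    writer.Write(Health.Value);
                    return ms.ToArray();
                }
            }
        }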

    Read the article

  • When designing a job queue, what should determine the scope of a job?

    - by Stuart Pegg
    We've got a job queue system that'll cheerfully process any kind of job given to it. We intend to use it to process jobs that each contain 2 tasks:
      • Job (pass information from one server to another)
          - Fetch task (get the data, slowly)
          - Send task (send the data, comparatively quickly)
    The difficulty we're having is that we don't know whether to break the tasks into separate jobs, or process the job in one go. Are there any best practices or useful references on this subject? Is there some obvious benefit to a method that we're missing? So far we can see these benefits for each method:
    Split:
      • Job lease length reflects job length: rather than the total of the two
      • Finer granularity on recovery: if we lose outgoing connectivity we can tell them all to retry
      • The starting state of the second task is saved to job history: helps with debugging (although similar logging could be added in the single-task method)
    Single:
      • Single job to be scheduled: less processing overhead
      • Data not stale on recovery: if the outgoing downtime is quite long, the pending Send jobs could be outdated

    Read the article
