Search Results

Search found 4337 results on 174 pages for 'binary runner'.

  • How do you Install the Latest Release of Miro?

    - by Brenton Horne
    In the Software Centre the latest release of Miro available is 4.0.4, whereas the latest release of Miro is 5.0.4. How do I install 5.0.4 on 12.10? I have tried following the guide at http://www.getmiro.com/download/for-ubuntu/ (and thus have already run sudo add-apt-repository ppa:pcf/miro-releases), but it failed, and when I ran sudo apt-get update I received the errors: W: Failed to fetch http://ppa.launchpad.net/pcf/miro-releases/ubuntu/dists/quantal/main/source/Sources 404 Not Found W: Failed to fetch http://ppa.launchpad.net/pcf/miro-releases/ubuntu/dists/quantal/main/binary-i386/Packages 404 Not Found E: Some index files failed to download. They have been ignored, or old ones used instead.
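
    The 404s mean the PPA simply has no packages built for quantal. One workaround, assuming the PPA does carry builds for the preceding release (precise) and that the list file name below matches what add-apt-repository created, is to point the source at precise instead:

      sudo sed -i 's/quantal/precise/g' /etc/apt/sources.list.d/pcf-miro-releases-quantal.list
      sudo apt-get update
      sudo apt-get install miro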

    Read the article

  • Organization &amp; Architecture UNISA Studies &ndash; Chap 5

    - by MarkPearl
    Learning Outcomes: describe the operation of a memory cell; explain the difference between DRAM and SRAM; discuss the different types of ROM; explain the concepts of a hard failure and a soft error respectively; describe SDRAM organization.

    Semiconductor Main Memory. The two traditional forms of RAM used in computers are DRAM and SRAM; that is, RAM divides into two technologies, dynamic and static.

    DRAM (Dynamic RAM). Dynamic RAM is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a natural tendency to discharge, dynamic RAM requires periodic charge refreshing to maintain data storage. The term dynamic refers to the tendency of the stored charge to leak away, even with power continuously applied. Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analogue device: the capacitor can store any charge value within a range, and a threshold value determines whether the charge is interpreted as a 1 or a 0.

    SRAM (Static RAM). SRAM is a digital device that uses the same logic elements used in the processor. In SRAM, binary values are stored using traditional flip-flop logic configurations. SRAM will hold its data as long as power is supplied to it; unlike DRAM, no refresh is required to retain data.

    SRAM vs. DRAM. DRAM is simpler and smaller than SRAM, and thus denser and less expensive. The cost of the refreshing circuitry for DRAM needs to be considered, but if the machine requires a large amount of memory, DRAM turns out to be cheaper than SRAM. SRAMs are somewhat faster than DRAM, so SRAM is generally used for cache memory and DRAM for main memory.

    Types of ROM. Read Only Memory (ROM) contains a permanent pattern of data that cannot be changed. ROM is non-volatile, meaning no power source is required to maintain the bit values in memory. While it is possible to read a ROM, it is not possible to write new data into it. An important application of ROM is microprogramming; other applications include library subroutines for frequently wanted functions, system programs, and function tables. A ROM is created like any other integrated circuit chip, with the data actually wired into the chip as part of the fabrication process. To reduce fabrication costs we have PROMs, which are written only once, non-volatile, and written after fabrication. Another variation of ROM is the read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations, but for which non-volatile storage is required. There are three common forms of read-mostly memory: EPROM, EEPROM, and flash memory.

    Error Correction. Semiconductor memory is subject to errors, which can be classed into two categories: a hard failure is a permanent physical defect such that the affected memory cell or cells cannot reliably store data; a soft error is a random event that alters the contents of one or more memory cells without damaging the memory (common causes include power supply issues, etc.). Most modern main memory systems include logic for both detecting and correcting errors.

    Error detection works as follows: when data is to be written into memory, a calculation is performed on the data to produce a code, and both the code and the data are stored. When the previously stored word is read out, the code is used to detect and possibly correct errors. The check produces one of three possible results: no errors are detected, and the fetched data bits are sent out; an error is detected and can be corrected, in which case the data bits plus error-correction bits are fed into a corrector that produces a corrected set of bits to be sent out; or an error is detected but cannot be corrected, and this condition is reported.

    Hamming Code. See the wiki for a detailed explanation. We will probably need to know how to do a Hamming code – refer to the textbook (pg. 188 – 189); a check-bit sizing example follows at the end of these notes.

    Advanced DRAM Organization. One of the most critical system bottlenecks when using high-performance processors is the interface to main memory; this interface is the most important pathway in the entire computer system. The basic building block of main memory remains the DRAM chip. In recent years a number of enhancements to the basic DRAM architecture have been explored, and some of these are now on the market, including SDRAM (Synchronous DRAM), DDR-SDRAM, and RDRAM.

    SDRAM (Synchronous DRAM). SDRAM exchanges data with the processor synchronized to an external clock signal, running at the full speed of the processor/memory bus without imposing wait states. SDRAM employs a burst mode to eliminate the address setup time and the row and column line precharge time after the first access: in burst mode, a series of data bits can be clocked out rapidly after the first bit has been accessed. SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism, and it performs best when transferring large blocks of data serially. There is now an enhanced version of SDRAM known as double data rate SDRAM (DDR-SDRAM) that overcomes the once-per-cycle limitation of SDRAM.
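
    For the check-bit sizing mentioned above, the standard single-error-correcting (SEC) argument is worth keeping handy (a summary of the usual textbook result, not a full Hamming encoding): with M data bits and K check bits, the K-bit syndrome must distinguish "no error" from an error in any of the M + K stored bit positions, so

      $$2^K \geq M + K + 1.$$

    For an 8-bit word, K = 3 fails ($2^3 = 8 < 8 + 3 + 1 = 12$) while K = 4 works ($2^4 = 16 \geq 13$), so 8 data bits require 4 check bits.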

    Read the article

  • How can I debug solutions in Visual Studio 2010 from a network share?

    - by alastairs
    I've recently got a new Mac laptop and am running VS2010 in a Parallels virtual machine. It's mostly working out well for me, but I'm having some problems with debugging specific project types, related to the fact that the projects are being accessed via a network share. Test projects don't run because the test runner can't load the tests' DLL. Web projects fail to run in the Visual Studio mini web server, throwing the following exception: 'An error occurred loading a configuration file: Failed to start monitoring changes to path\to\web.config'. I've spent the evening trawling the web with little luck on this. After reading these two posts, I tried out the usual CasPol changes, but then found this post from one of the early VS2010 betas indicating that CasPol is no longer needed/supported in .NET 4.0 and VS2010. The network share is accessible via both a mapped drive and the UNC path. The virtual machine runs its applications under the administrator account, which appears to have all the necessary permissions on the network share to create, read, write and delete files and folders. I say "appears to have" as I can't view the Security Properties of the appropriate folder via Explorer: the Security tab just isn't present. Has anyone managed to successfully load and debug web and test projects from a network share in VS2010?
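
    For what it's worth, .NET 4.0 replaced CAS policy (which is why CasPol no longer applies) with a loadFromRemoteSources switch, and a commonly suggested workaround for solutions on network shares (an assumption here, not verified against this exact setup) is to enable it in devenv.exe.config, found under Common7\IDE in the VS2010 install directory:

      <configuration>
        <runtime>
          <!-- Grant full trust to assemblies loaded from remote locations
               such as UNC shares (a .NET 4.0 runtime setting). -->
          <loadFromRemoteSources enabled="true"/>
        </runtime>
      </configuration>

    The "Failed to start monitoring changes" error from the mini web server may be a separate file-change-notification issue with UNC paths, so this may only fix the test-runner half of the problem.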

    Read the article

  • Nagios suddenly stops working

    - by pankaj sharma
    I have configured passive checks on one of my host systems; for this I am using NSCA. It was running fine, but suddenly the host shows as down in monitoring, even though the host is fine and running. When I check the logs on the host, they show: [1347941895] Warning: Attempting to execute the command "/submit_check_result host.example.com 'Current Load' OK 'OK - load average: 0.69, 0.53, 0.42'" resulted in a return code of 127. Make sure the script or binary you are trying to execute actually exists... I restarted the Nagios services many times, but it still shows the same error. Can anyone help me with this? Thanks in advance.
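
    A return code of 127 is the shell's "command not found", and the logged command line starts with a bare "/submit_check_result", which suggests the macro that should hold the script's directory (typically $USER1$ from resource.cfg) is empty. A few checks, assuming a conventional Nagios layout (paths illustrative):

      # Does the script exist where the command definition expects it?
      ls -l /usr/local/nagios/libexec/eventhandlers/submit_check_result
      # Is $USER1$ actually defined?
      grep USER1 /usr/local/nagios/etc/resource.cfg
      # The command definition should then reference the full path, e.g.:
      #   command_line  $USER1$/eventhandlers/submit_check_result $HOSTNAME$ ...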

    Read the article

  • How do multi-platform games usually store save data?

    - by PixelPerfect3
    I realize this is a bit of a broad question, but I was wondering if there is a "standard" in the industry when it comes to storing save data for games (and whether it differs across platforms - Xbox/PS/PC/Mac/Android/iOS). For example, games like Assassin's Creed or The Walking Dead are on multiple platforms and usually have to save a fair amount of information about the player and their actions. Do they use something like XML files, databases, or just straight binary dumps? How much does it differ from platform to platform? I would appreciate it if someone with experience in the game industry would answer this.

    Read the article

  • How does the GPL work in regards to languages like Dart which compile to other languages?

    - by Peter-W
    Google's Dart language is not supported by any web browsers other than a special build of Chromium known as Dartium. To use Dart for production code you need to run it through a Dart-to-JavaScript compiler/translator and then use the outputted JavaScript in your web application. Because JavaScript is an interpreted language, everyone who receives the "binary" (i.e. the .js file) has also received the source code. Now, the GNU General Public License v3.0 states that: "The “source code” for a work means the preferred form of the work for making modifications to it." Which would imply that the original Dart code, in addition to the JavaScript code, must also be provided to the end user. Does this mean that any web applications written in Dart must also provide the original Dart code to all visitors of their website, even though a copy of the source code has already been provided in a human-readable/writable/modifiable form?

    Read the article

  • How to use Mercurial's LargeFiles extension? [migrated]

    - by DuncanBoehle
    I use Mercurial for game development, and I'm trying to use the LargeFiles extension included in Mercurial 2.0 to keep track of large binary assets. Unfortunately there isn't a whole lot of documentation on the extension, so I'm not sure how people are expected to use it. For example, is there any way to safely clean out the .hg/largefiles directory? If I'm on the tip revision, and expect to always have internet access, then I don't need the old versions of largefiles cluttering up the repository, since that's the whole point of using the LargeFiles extension. Also, how do I have more fine-grained control over where the largefile store is? I can only assume that it's created somewhere on the computer that ran hg init, but I have no idea about the details. Thanks!
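
    On the question of where the store lives: besides .hg/largefiles in each repository, the extension keeps a per-user cache whose location can be overridden with a documented largefiles setting; a minimal hgrc sketch (path illustrative):

      [largefiles]
      # Shared per-user cache of largefile revisions; local clones pull
      # from here before going over the network. Defaults to a per-user
      # directory if unset.
      usercache = /data/hg-largefiles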

    Read the article

  • Fiddler Inspector for Federation Messages

    - by Your DisplayName here!
    Fiddler is a very useful tool for troubleshooting all kinds of HTTP(S) communications. It also features various extensibility points to make it even more useful. Using the inspector extensibility mechanism, I quickly knocked up an inspector for typical federation messages (thanks to Eric Lawrence, btw). Below is a screenshot for WS-Federation. I also added support for SAML 2.0p request/response messages. The inspector can be downloaded from the identitymodel CodePlex site. Simply copy the binary to the Inspectors folder in the Fiddler directory.

    Read the article

  • Lucid hangs at booting after kernel upgrade

    - by Thomas Deutsch
    This weekend, one of our servers running Lucid installed some upgrades: libgcrypt11 1.4.4-5ubuntu2.1, linux-firmware 1.34.14, linux-image-2.6.32-41-generic 2.6.32-41.91, linux-libc-dev 2.6.32-41.91. Afterwards, it rebooted, since this was a kernel upgrade. Now it hangs at booting, after /scripts/init-bottom. init-bottom itself should not be the problem; the last line I can see is "done", so the problem has to be shortly after that. http://manpages.ubuntu.com/manpages/hardy/man8/initramfs-tools.8.html tells me that the next step is: procfs and sysfs are moved to the real rootfs, and execution is turned over to the init binary, which should now be found in the mounted rootfs. But I don't know how or where. The problem exists with older kernels too, and this one here doesn't fix the problem: http://www.tummy.com/journals/entries/jafo_20111003_160440 Anyone have an idea?
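
    One way to see exactly where the handover fails, using a documented initramfs-tools boot parameter: at the GRUB menu, edit the kernel line, drop "quiet splash", and append "break=bottom" to get a shell just after the init-bottom scripts run. From the (initramfs) prompt you can then check that the real root is mounted and that its init binary is present (initramfs-tools mounts the real rootfs on /root before the handover):

      mount | grep ' /root '
      ls -l /root/sbin/init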

    Read the article

  • Routing PHP memcached calls to Oracle Coherence

    - by cj
    A new post, Getting Started with the Coherence Memcached Adaptor from David Felcey, shows how PHP memcached calls can automatically be routed to store data in Oracle Coherence 12c. This is possible now that Coherence 12.1.3 supports Memcached clients using the Binary Memcached protocol. David's post shows how the Coherence Memcached adaptor can be configured as a proxy service that runs in the Coherence cluster. There's nothing particular to configure in the PHP application, except to enable memcached.use_sasl = 1. So what is Coherence? It is an "in-memory data grid solution" with a number of advanced features. You can read more in the Oracle Coherence 12c Data Sheet.

    Read the article

  • Why does update-notifier contain system-crash-notification?

    - by int_ua
    I just had "System Program Problem Detected" window appearing several times before user session even started (I have KDM with autologin to locked session). I traced it with xprop to being /usr/lib/update-notifier/system-crash-notification which is binary (while I expected it to be some script) and belongs to update-notifier package (while I expected it to be somewhere from apport*). P.S. Clicking on Report problem... button didn't do anything. $ dpkg -s update-notifier | grep Version Version: 0.147 $ dpkg -L update-notifier | grep system-crash /usr/lib/update-notifier/system-crash-notification $ grep RELEASE /etc/lsb-release DISTRIB_RELEASE=13.10

    Read the article

  • Kepler orbit: get position on the orbit over time

    - by Artefact2
    I'm developing a space-simulation related game, and I am having some trouble implementing the movement of binary stars, like this: the two stars orbit their centroid, and their trajectories are ellipses. I basically know how to determine the angular velocity at any position, but not the angular velocity over time. So, for a given angle, I can very easily compute the stars' positions (cf. http://en.wikipedia.org/wiki/Orbit_equation). I want to get the stars' positions over time. The parametric equations of the ellipse work but don't give the correct speed: { X(t) = a×cos(t) ; Y(t) = b×sin(t) }. Is it possible, and how can it be done?
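
    It is possible; the standard route (textbook celestial mechanics, sketched here rather than taken from the post) parameterizes the ellipse by the eccentric anomaly E instead of using t directly, which is exactly what restores the correct non-uniform speed. Compute the mean anomaly from time, solve Kepler's equation for E numerically (Newton's method converges in a few iterations), then map E to coordinates:

      $$M(t) = \frac{2\pi}{T}\,t, \qquad M = E - e\sin E,$$
      $$E_{n+1} = E_n - \frac{E_n - e\sin E_n - M}{1 - e\cos E_n},$$
      $$x(t) = a(\cos E - e), \qquad y(t) = b\sin E.$$

    This gives each star's position relative to the focus of its ellipse (the centroid of the binary), where T and e are the period and eccentricity of that star's orbit.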

    Read the article

  • Cisco ASA user authentication options - OpenID, public RSA sig, others?

    - by Ryan
    My organization has a Cisco ASA 5510 which I have made act as a firewall/gateway for one of our offices. Most resources a remote user would come looking for exist inside. I've implemented the usual deal - basic inside networks with outbound NAT, one primary outside interface with some secondary public IPs in the PAT pool for public-facing services, a couple site-to-site IPSec links to other branches, etc. - and I'm working now on VPN. I have the WebVPN (clientless SSL VPN) working and even traversing the site-to-site links. At the moment I'm leaving a legacy OpenVPN AS in place for thick client VPN. What I would like to do is standardize on an authentication method for all VPN then switch to the Cisco's IPSec thick VPN server. I'm trying to figure out what's really possible for authentication for these VPN users (thick client and clientless). My organization uses Google Apps and we already use dotnetopenauth to authenticate users for a couple internal services. I'd like to be able to do the same thing for thin and thick VPN. Alternatively a signature-based solution using RSA public keypairs (ssh-keygen type) would be useful to identify user@hardware. I'm trying to get away from legacy username/password auth especially if it's internal to the Cisco (just another password set to manage and for users to forget). I know I can map against an existing LDAP server but we have LDAP accounts created for only about 10% of the user base (mostly developers for Linux shell access). I guess what I'm looking for is a piece of middleware which appears to the Cisco as an LDAP server but will interface with the user's existing OpenID identity. Nothing I've seen in the Cisco suggests it can do this natively. But RSA public keys would be a runner-up, and much much better than standalone or even LDAP auth. What's really practical here?

    Read the article

  • Artificial Intelligence implemented in x86 Assembly? [closed]

    - by Bigyellow Bastion
    Okay, so I decided that for my upcoming operating system, I'll do basically everything in x86 assembly, using only 16-bit mode. I will need to write the software to host on it once I have something up and going, and I'll definitely post the source and a VM-executable file. But for now I'm stuck on implementing the AI code for some of the games I'm making to host on it. AI in assembly is tedious, and sometimes seems almost impossible, especially complex AI (I'm talking SNES Super Mario World 2: Yoshi's Island AI here, by the way, not Pong AI). I was thinking it'd be such a hassle that I'd have to bring in a higher-level language to work some of this out, like maybe C++ or C#, but I'd have to go through more work linking it into a binary that my OS can host, and that adds unnecessary work I wanted to avoid (I don't want a complex system; I want everything as bare-bones as possible, avoiding libraries, APIs, and linkable formats for now, to make everything more directly accessible to the kernel's API).

    Read the article

  • Migrating from GlassFish 2.x to 3.1.x

    - by alexismp
    With clustering available in GlassFish since version 3.1 (our Spring 2011 release), a good number of folks have been looking at migrating their existing GlassFish 2.x-based clustered environments to a more recent version to take advantage of Java EE 6, our modular design, improved SSH-based provisioning, and enhanced HA performance. The GlassFish documentation set is quite extensive and has a dedicated Upgrade Guide. It obviously lists a number of small changes such as file layout on disk (mostly due to modularity), some option changes (Grizzly, Shoal), the removal of node agents (using SSH instead), the new JPA default provider name, etc. There is even a migration tool (glassfish/bin/asupgrade) to upgrade existing domains. But really the only thing you need to know is that each module in GlassFish 3 and beyond is responsible for doing its part of the upgrade job, which means that the migration is as simple as copying a 2.x domain directory to the domains/ directory and starting the server with asadmin start-domain --upgrade, as sketched below. Binary-compatible products eligible for such upgrades include Sun Java System Application Server 9.1 Update 2 as well as versions 2.1 and 2.1.1 of Sun GlassFish Enterprise Server.
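
    In command form, the in-place upgrade described above amounts to the following (install paths illustrative):

      # Copy the existing 2.x domain into the 3.x domains directory:
      cp -r /opt/glassfish-2.1/domains/domain1 /opt/glassfish3/glassfish/domains/
      # Start it once with the upgrade flag so each module migrates its own config:
      /opt/glassfish3/glassfish/bin/asadmin start-domain --upgrade domain1
      # Or use the dedicated migration tool mentioned above:
      /opt/glassfish3/glassfish/bin/asupgrade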

    Read the article

  • Install unetbootin on Ubuntu 12.04

    - by Matteo
    I'm trying to install UNetbootin on Ubuntu 12.04 LTS. I downloaded the executable file from this link and followed the instructions below: "If using Linux, make the file executable (using either the command chmod +x ./unetbootin-linux, or going to Properties->Permissions and checking 'Execute'), then start the application. You will be prompted for your password to grant the application administrative rights, then the main dialog will appear, where you select a distribution and install target (USB Drive or Hard Disk); then reboot when prompted." So I typed in my terminal sudo chmod +x unetbootin-linux-584 and tried to execute the binary file with ./unetbootin-linux-584, but got this output: ./unetbootin-linux-584: error while loading shared libraries: libXrandr.so.2: cannot open shared object file: No such file or directory. However, when I checked for the libXrandr libraries on my system, I actually found them: $> locate libXrandr /usr/lib/x86_64-linux-gnu/libXrandr.so.2 /usr/lib/x86_64-linux-gnu/libXrandr.so.2.2.0 /usr/lib/x86_64-linux-gnu/libXrandr_ltsq.so.2 /usr/lib/x86_64-linux-gnu/libXrandr_ltsq.so.2.2.0 So I really don't have a clue what the problem is or how to fix it. Any ideas?
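
    The locate output only shows 64-bit (x86_64) copies of the library, while the UNetbootin binary is most likely 32-bit; a guess worth confirming before acting on it:

      # Check the binary's architecture and which libraries fail to resolve:
      file ./unetbootin-linux-584
      ldd ./unetbootin-linux-584 | grep 'not found'
      # If it reports a 32-bit ELF, install the i386 multiarch build of the
      # missing library (package name inferred from the soname):
      sudo apt-get install libxrandr2:i386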

    Read the article

  • Google Chrome Won't Open

    - by Mike Strand
    When I try to open Google Chrome from the launcher, nothing seems to happen (this is a new phenomenon; it used to work). I'm on Ubuntu 13.04. When I try to open it via the terminal with either $ google-chrome $ google-chrome --incognito I get: ":FATAL:zygote_host_impl_linux.cc(138)] The SUID sandbox helper binary was found, but is not configured correctly. Rather than run without sandboxing I'm aborting now. You need to make sure that /opt/google/chrome/chrome-sandbox is owned by root and has mode 4755." Any help would be appreciated.
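
    The error message spells out its own fix; applied literally (the path is taken straight from the message), that would be:

      # Restore the expected ownership and setuid mode on the sandbox helper:
      sudo chown root:root /opt/google/chrome/chrome-sandbox
      sudo chmod 4755 /opt/google/chrome/chrome-sandbox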

    Read the article

  • rt73.bin for Ubuntu 12.04 LTS?

    - by fNek
    I have a D-Link USB WLAN stick and want to install Ubuntu 12.04 alternate. The installer asks me to insert a removable drive containing 'rt73.bin', the firmware for the stick. I cannot find any binary downloads, and I cannot build the package because I have no other Linux PC. What can I do? //EDIT: The reason I want to do this during setup is that I want to set up an LTSP server. I have only found guides that assume one has an Internet connection, but I do not. What can I do?
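
    rt73.bin ships inside Ubuntu's linux-firmware package, so one option, assuming you can download files on some other machine, is to fetch the .deb from the archive and copy just that file onto the USB stick (a .deb is an ar archive, so 7-Zip on Windows can open it too). On a Linux machine the steps would look like this (URL and version illustrative):

      wget http://archive.ubuntu.com/ubuntu/pool/main/l/linux-firmware/linux-firmware_1.79_all.deb
      dpkg-deb -x linux-firmware_1.79_all.deb fw/
      cp fw/lib/firmware/rt73.bin /media/usb/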

    Read the article

  • Metsys.Bson - the BSON Library

    Earlier this month I detailed the implementation of the BSON serialization we used in Norm, the C# MongoDB driver. I've since extracted the serialization/deserialization code and created a standalone project for it, in the hopes that it might prove helpful to someone. If you need an efficient binary protocol to transfer data, look no further. There are two methods you need to be aware of: Serializer.Serialize and Deserializer.Deserialize. User u1 = new User{...}; byte[] bytes = Serializer.Serialize(u1); User...

    Read the article

  • How to Convert DMG Files to ISO Files on Windows

    - by Taylor Gibb
    The DMG image format is by far the most popular file container format used to distribute software on Mac OS X. Here’s how to convert a DMG file into an ISO file that can be mounted on a Windows PC. First head over to this website and grab yourself a copy of dmg2img by clicking on the win32 binary link. Once the file has downloaded, open your Downloads folder, right-click on the file, and select Extract All from the context menu.
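
    From there, the conversion itself is a single command run from a command prompt in the extracted folder (file names illustrative; dmg2img writes the converted image to whatever output name you give it):

      dmg2img.exe input.dmg output.iso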

    Read the article

  • Has programming ruined your perception of round numbers?

    - by Jon Purdy
    Most of the world works in base 10 nowadays, but as programmers working on binary systems, we constantly find ourselves working with powers of 2. While most people consider integer multiples of powers of 10 "nice and round" and somehow aesthetically superior, I found early on in my programming adventures that multiples of powers of 2 feel much more intuitively round to me: fewer factors, of course. I'm much more likely to lay out a Web site using, say, 8- or 16-pixel margins rather than 10 or 20, and when someone remarks that 128 is an insanely arbitrary number of ounces to be in a gallon, I have to smile a little inside at how, just perhaps, the U.S. system might be superior to metric in one small way. I'm just curious: has programming ruined (read: altered) your perception of the roundness of a number?

    Read the article

  • Directory tree in a Resource without extraction...

    - by Corelgott
    Hi all, I am looking for a way to store a complete directory, including subdirectories, in an application's resources and not have to extract it to use it. Details: We would like to use GeckoFX (Gecko as a C# component) in one of our applications. GeckoFX needs XULRunner and needs to find its folder structure. We also have some other data which I would prefer not to extract onto the customer's PC; at least not onto something persistent like an HDD... Getting the complete directory into the resources is not that kind of a big deal: compress to one file and done. But using it without writing it to disk is something else. I have a strong dislike of temp folders and such things. Would anything like a RAM drive be possible? Some part of the RAM being mounted? Does something like this even exist as a lib, or would this only be possible via a device driver? Any thoughts on this? Thanks in advance! Corelgott

    Read the article

  • Possibility of recovering files from a dd zero-filled hard disk

    - by unknownthreat
    I have "zero filled" (complete wiped) an external hard disk using dd, and from what I have heard: people said you should at least "zero fill" 3 times to be sure that the data are really wiped and no one can recover anything. So I decided to scan the disk once again after I've zero filled the disk. I was expecting the disk to still have some random binary left. It turned out that it has only a few sequential bytes in the very beginning. This is probably the file structure type and other headers stuff. Other than that, it's all zeros and nothing else. So if we have to recover any file from a zero filled disk, ...how? From what I've heard, even you zero fill the disk, you should still have some data left. ...or could dd really completely annihilate all data?

    Read the article

  • What is the best way to keep track of the median?

    - by Steven Mou
    I read a question in a book: numbers are randomly generated and stored in an (expanding) array; how would you keep track of the median? Two data structures can solve the problem. One is a balanced binary tree; the other is a pair of heaps that keep track of the biggest half and the smallest half of the elements. I think these two solutions have the same running time of O(n lg n), but I am not sure of my judgement. In your opinion, what is the best way to keep track of the median?
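
    To make the two-heap bookkeeping concrete (a standard formulation, not quoted from the book): let L be a max-heap holding the smaller half of the numbers and R a min-heap holding the larger half, rebalanced after every insert so that |L| is either |R| or |R| + 1. Each insert then costs O(log n), and the median is read off the heap roots in O(1):

      $$\text{median} = \begin{cases} \max(L), & |L| = |R| + 1,\\ \dfrac{\max(L) + \min(R)}{2}, & |L| = |R|. \end{cases}$$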

    Read the article

  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug. Problem 1: Debugging users' bug reports When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to debug customers' issues and sort out what strange schema data Oracle was returning. Problem 2: Test execution time Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless. The solution To solve these, we needed to be able to populate the schema of a database without actually connecting to it. Well, the IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required without needing to actually connect to the original database. This is what query snapshots do. They are binary files containing the raw unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time.
    They also allow us to easily debug a customer's problem; using a simple snapshot generation program, users can generate a query snapshot that can be sent along with a bug report, which we can immediately replay on our machines to debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data. Query snapshots implementation However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects - what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just populating a single database. Furthermore, although the code population queries (e.g. querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being deterministic - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly. Query snapshots are a significant feature in Schema Compare that really helps us debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team, helping us fix bugs in the product much faster than we otherwise would be able to.

    Read the article
