Search Results

Search found 3049 results on 122 pages for 'sync'.


  • Oracle Identity Management Connector Overview

    - by Darin Pendergraft
    Oracle Identity Manager (OIM) is a complete Identity Governance system that automates access rights management and provisions IT resources. One important aspect of this system is the Identity Connectors that are used to integrate OIM with external, identity-aware applications. New in OIM 11gR2 PS1 is the Identity Connector Framework (ICF), which is the foundation for both OIM and Oracle Waveset.
    Identity Connectors perform several very important functions:
      - Onboarding accounts from trusted sources like SAP, Oracle E-Business Suite, and PeopleSoft HCM
      - Managing user lifecycles in various target systems through provisioning and reconciliation operations
      - Synchronizing entitlements from target systems so that they are available in the OIM request catalog
      - Fulfilling access grant and access revoke requests
      - Some connectors may support role lifecycle management
      - Some connectors may support password sync from the target to OIM
    The Identity Connectors are broken down into several families:
      - The BMC Remedy Family: BMC Remedy Ticket Management, BMC Remedy User Management
      - The Microsoft Family: Microsoft Active Directory, Microsoft Active Directory Password Sync, Microsoft Exchange
      - The Novell Family: Novell eDirectory, Novell GroupWise
      - The Oracle E-Business Suite Family: Oracle E-Business Employee Reconciliation, Oracle E-Business User Management
      - The PeopleSoft Family: PeopleSoft Employee Reconciliation, PeopleSoft User Management
      - The SAP Family: SAP CUA, SAP Employee Reconciliation, SAP User Management
      - The UNIX Family: UNIX SSH, UNIX Telnet
    As you can see, there are a large number of connectors that support apps from a variety of vendors to enable OIM to manage your business applications and resources. If you are interested in finding out more, you can get documentation on these connectors on our OTN page at: http://www.oracle.com/technetwork/middleware/id-mgmt/downloads/connectors-101674.html

    Read the article

  • Best practice with branching source code and application lifecycle

    - by Toni Frankola
    We are a small ISV shop and we usually ship a new version of our products every month. We use Subversion as our code repository and Visual Studio 2010 as our IDE. I am aware a lot of people are advocating Mercurial and other distributed source control systems, but at this point I do not see how we could benefit from these, though I might be wrong. Our main problem is how to keep branches and the main trunk in sync. Here is how we do things today:
      1. Release a new version (automatically create a tag in Subversion)
      2. Continue working on the main trunk that will be released next month
    The cycle repeats every month and works perfectly. The problem arises when an urgent service release needs to be released. We cannot release it from the main trunk (2), as it is under heavy development and is not stable enough to be released urgently. In such cases we do the following:
      1. Create a branch from the tag we created in step (1)
      2. Fix the bug
      3. Test and release
      4. Push the change back to the main trunk (if applicable)
    Our biggest problem is merging these two (the branch with main). In most cases we cannot rely on automatic merging because, for example:
      - a lot of changes have been made to the main trunk
      - merging complex files (like Visual Studio XML files, etc.) does not work very well
      - another developer / team made changes you do not understand, and you cannot just merge them
    So what do you think is the best practice to keep these two different versions (branch and main) in sync? What do you do?
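    As a hedged sketch of the hotfix workflow described above using standard Subversion commands (the repository URL, tag name and revision number are made-up placeholders, not taken from the question):

      # Create a hotfix branch from the release tag (step 1 of the service release)
      svn copy https://svn.example.com/repo/tags/2012.05 \
               https://svn.example.com/repo/branches/2012.05-hotfix \
               -m "Branch for urgent service release"

      # Fix, test and release from the branch, then port the fix back to trunk.
      # Cherry-picking only the fix revision (-c) keeps the merge small and reviewable.
      cd trunk-working-copy
      svn merge -c 4711 https://svn.example.com/repo/branches/2012.05-hotfix
      svn commit -m "Merge hotfix r4711 from the 2012.05-hotfix branch"

    Cherry-picking individual fix revisions back to trunk, rather than merging the whole branch, is one common way to limit the painful merges the question describes; it does not remove the need to review merges of complex generated files by hand.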

    Read the article

  • How To Access Your eBook Collection Anywhere in the World

    - by Jason Fitzpatrick
    If you have an eBook reader it’s likely you already have a collection of eBooks you sync to your reader from your home computer. What if you’re away from home or not sitting at your computer? Learn how to download books from your personal collection anywhere in the world (or just from your backyard). You have an eBook reader, you have an eBook collection, and when you remember to sync your books to the collection on your computer everything is rosy. What about when you forget or when the syncing process for your device is a bit of a hassle? (We’re looking at you, iPad.) Today we’re going to show you how to download eBooks to your eBook reader from anywhere in the world using a cross-platform solution.

    Read the article

  • SQL Azure Roadmap gets a little clearer – announcements from Tech Ed

    - by Eric Nelson
    On Monday at TechEd 2010 we announced new stuff (I like new stuff) that “showcases our continued commitment to deliver value, flexibility and control of data through data cloud services to our customers”. OK, that does sound like marketing speak (and it is), but the good news is there is some meat behind it. We have some decent new features coming, and we also have some clarity on when we will be able to get our hands on those features.
      - SQL Azure Business Edition Extends to 50 GB – June 28th: The SQL Azure Business Edition database is now extending from 10GB to 50GB. The new 50GB database size will be available worldwide starting June 28th.
      - SQL Azure Business Edition Subscription Offer – August 1st: Starting August 1st, we will have a new discounted SQL Azure promotional offer (SQL Azure Development Accelerator Core). More information is available at http://www.microsoft.com/windowsazure/offers/.
      - Public Preview of the Data Sync Service – CTP now: Data Sync Service for SQL Azure allows for more flexible control over data by deciding which data components should be distributed across multiple datacenters in different geographic locations, based on your internal policies and business needs. Available as a community technology preview after registering at http://www.sqlazurelabs.com.
      - SQL Server Web Manager for SQL Azure – CTP this Summer: SQL Server Web Manager (SSWM) is a lightweight and easy-to-use database management tool for SQL Azure databases, to be offered this summer.
      - Access 10 Support for SQL Azure – available now: Yey – at last! Microsoft Office 2010 will natively support data connectivity to SQL Azure – we can now start developing those “departmental apps” with the confidence of a highly available SQL store provisioned in seconds. NB: I don’t believe we will support any previous versions of Access talking to SQL Azure.
      - The Pre-announced Spatial Data Support to Become Live – live now*: At MIX in March we announced spatial was coming, and apparently it is now here – although I need to check.
    Related links: UK based? Sign up at http://ukazure.ning.com. SQL Azure Team Blog: http://blogs.msdn.com/b/sqlazure/

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies.
    1. Cross-schema dependencies
    Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1:
      Source:
        CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);
      Target:
        CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100) PRIMARY KEY);
    We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of:
      1. Creating a table with the new schema
      2. Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER)
      3. Dropping the old table
      4. Renaming the new table to the same name as the old table
    Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is:
      1. Drop the foreign key constraint on SchemaA.Table1
      2. Rebuild SchemaB.Table1
      3. Rebuild SchemaA.Table1, adding the foreign key constraint to the new table
    This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA. Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we’re populating, or point to objects in the schemas we’re populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies, e.g. a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn’t enough.
    2. Dependency chains
    The solution above will only get the immediate dependencies of objects in populated schemas. What if there’s a chain of dependencies?
      A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1
    If we’re only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents.
    Fortunately, we don’t have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL – CONNECT BY. So, we can put all the dependencies we want to include together in a big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we’re populating. We should end up with all the objects that might be affected by modifications in the initial schema we’re populating. Good solution? Well, no. For one thing, it’s sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance.
    3. Comparison dependencies
    Consider the following schema:
      Source:
        CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);
      Target:
        CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100));
    What will happen if we used the dependency algorithm above on the source & target database? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference. Therefore SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two object models, what you will end up with is:
      Source              Target
      SchemaA.Table1  ->  SchemaA.Table1
      SchemaB.Table1  ->  (no object exists)
    When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions:
      1. Create SchemaB.Table1
      2. Rebuild SchemaA.Table1, with foreign key to SchemaB.Table1
    Oops. Because the dependencies are only followed within a single database, we’ve tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won’t try and create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1):
      Source:
        CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 (Col1 NUMBER);
      Target:
        CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100) PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 (Col1 VARCHAR2(100) REFERENCES SchemaB.Table1);
    Although we’re now including SchemaB.Table1 on both sides of the comparison, there’s a third table (SchemaC.Table1) that we don’t know about that will cause the rebuild of SchemaB.Table1 to fail if we try and synchronize SchemaA.Table1. That’s because we’re only running the dependency query on the schemas we’re explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database.
    Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:
      1. Find the initial dependencies of the schemas the user has selected to compare, on the source and target
      2. Include these objects in both the source and target object populations
      3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
      4. Repeat 2 & 3 until no more objects are found
    For the schema above, this will result in the following sequence of actions:
      1. Find initial dependencies: SchemaA.Table1 -> SchemaB.Table1 found on source; no objects found on target
      2. Include objects in both source and target: SchemaB.Table1 included in source and target
      3. Run dependency query, starting with found objects: no objects to start with on source; SchemaB.Table1 -> SchemaC.Table1 found on target
      4. Include objects in both source and target: SchemaC.Table1 included in source and target
      5. Run dependency query on found objects: no objects found in source; no objects to start with in target
      6. Stop
    This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don’t read the entire dependency graph onto the client we also pull the graph across in bits – we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don’t yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.
    4. Object blacklists and fast dependencies
    When we tested this solution, we were puzzled that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc.) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (e.g. the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in those schemas. To solve this we introduced blacklists of objects we wouldn’t follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY and ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow-up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that there were some columns, with dependency information we required, that were querying system tables with no indexes on them!
To cut a long story short, running the following query: SELECT * FROM all_tab_cols WHERE data_type_owner = ‘XDB’; results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the ‘Ignore slow dependencies’ option was born – not querying this and a couple of similar clauses to drastically speed up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!
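    For readers who want to picture the server-side approach the post rejects, here is a rough illustration of a CONNECT BY traversal over the all_dependencies dictionary view. The view and its columns are real, but the actual query Schema Compare uses is not published, and this sketch omits the UNION ALL with all_constraints edges described above:

      -- Hypothetical sketch only: walk the dependency graph outwards from one schema.
      -- all_dependencies holds (owner, name, type) -> (referenced_owner, referenced_name, referenced_type) edges.
      SELECT DISTINCT referenced_owner, referenced_name, referenced_type
      FROM   all_dependencies
      START WITH owner = 'SCHEMAA'
      CONNECT BY NOCYCLE PRIOR referenced_owner = owner
             AND PRIOR referenced_name  = name
             AND PRIOR referenced_type  = type;

    Running a traversal like this over a 110,000-row edge set is exactly the slow path the post describes, which is why the shipped product moves the traversal to the client and pulls the graph across in pieces.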

    Read the article

  • Agile development challenges

    - by Bob
    With Scrum / user story / agile development, how does one handle scheduling out-of-sync tasks that are part of a user story? We are a small gaming company working with a few remote consultants who do graphics and audio work. Typically, graphics work should be done at least a week (sometimes 2 weeks) in advance of the code so that it's ready for integration. However, since Scrum is supposed to focus on user stories, how should I split the stories across iterations so that they still follow the user story model? Ideally, a user story should be completed by all the team members in the same iteration; I feel that splitting them in any way violates the core principle of user-story-driven development. Also, one front-end developer can work at 2X the pace of the back-end developers. However, that throws the scheduling out of sync as well, because he is either constantly ahead of them, or what we have done is to have him work on tasks that are not specific to this iteration just to keep busy. Either way, it's the same issue as above: splitting up user story tasks. If someone can recommend an active Google agile development group that discusses these and other issues, that'll be great. Also, if you know of a free alternative to Pivotal Labs, let me know as well. I'm looking now at Agilo.

    Read the article

  • computer shows up twice, connection unknown

    - by Thomas G. Seroogy
    I added two computers to Ubuntu One. One machine is Windows. The Windows version seems to work fine, and I started syncing a folder by placing it into the Ubuntu One folder. All files and folders are visible when I go to my account online. On the Ubuntu machine, I selected to sync the Download folder. Shortly thereafter, I realized that one folder exceeded my storage maximum. I tried to un-sync the folder, but Ubuntu One and the Stop Syncing This Folder options were not visible in the menu. Per the Ubuntu instructions, I removed my Ubuntu computer from the list of syncing computers. Per the Ubuntu instructions, I re-added the Ubuntu computer. However, I find that two computers with the same name are added, both in the desktop app and on the web. Plus, the connection is "unknown." I have removed and re-added the computer several times with the same results. In all cases, I remove the computers using the Ubuntu One desktop app, then remove them from my account on the web, remove the Ubuntu One password, and restart the Ubuntu One app. The problem remains. Thanks in advance for any replies and help.

    Read the article

  • Agile project management, agile development: early integration

    - by Matías Fidemraizer
    I believe that agile works if everything is agile. In the software development area, in my opinion, if team members' code is integrated early, the code will be more in sync, and this has a lot of pros:
      - Early integration helps team members avoid painful merges.
      - It encourages better coding habits, because everyone makes sure that they don't break co-workers' code every day.
      - Both developers and architects (code reviewers) may detect bad design decisions or just wrong development directions in real time, preventing useless work.
    Actually, I'm talking about getting the latest version of the code base and checking in your own code to source control on a daily basis. When you start your coding day (i.e. you arrive at work), your first action is updating your code base with the latest version from source control. On the other hand, when you're about an hour from leaving work and going home, your last action is checking in your code to source control and making sure that your day's work doesn't break the project's build process. Rather than updating and checking in your code once you have finished an entire task, I believe the best approach is fixing small and flexible personal milestones and checking in the code once you finish one of these. I really believe that this coding approach fits better with the agile project management concept. Do you know of any document, blog post, wiki, article or whatever that you can suggest that is in sync with my opinion? And do you find any problems working with this approach? Thank you in advance.

    Read the article

  • Ubuntu 13.04 client cannot connect to Raspbian samba share

    - by envoyweb
    I have a client Ubuntu 13.04 machine trying to connect to a server running Raspbian, with samba and samba-common-bin installed on the server. I can see my share, and when I try to log in I get this error: "Unable to access location: Failed to write windows share Cannot allocate memory." I have installed ntfs-3g for the USB hard drive, which already auto-mounts on the server, so I never had to create a directory or edit fstab. testparm on the server states the following:
      [global]
          workgroup = ENVOYWEB
          server string = %h server
          map to guest = Bad User
          obey pam restrictions = Yes
          pam password change = Yes
          passwd program = /usr/bin/passwd %u
          passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
          unix password sync = Yes
          syslog = 0
          log file = /var/log/samba/log.%m
          max log size = 1000
          dns proxy = No
          usershare allow guests = Yes
          panic action = /usr/share/samba/panic-action %d
          idmap config * : backend = tdb
      [homes]
          comment = Home Directories
          valid users = %S
          create mask = 0700
          directory mask = 0700
          browseable = No
      [printers]
          comment = All Printers
          path = /var/spool/samba
          create mask = 0700
          printable = Yes
          print ok = Yes
          browseable = No
      [print$]
          comment = Printer Drivers
          path = /var/lib/samba/printers
      [BigDude]
          comment = Sharing BigDude's Files
          path = /media/BigDude/
          valid users = @users
          read only = No
          create mask = 0755
    testparm on the client, which is running Ubuntu, is as follows:
      [global]
          workgroup = ENVOYWEB
          server string = %h server (Samba, Ubuntu)
          map to guest = Bad User
          obey pam restrictions = Yes
          pam password change = Yes
          passwd program = /usr/bin/passwd %u
          passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
          unix password sync = Yes
          syslog = 0
          log file = /var/log/samba/log.%m
          max log size = 1000
          dns proxy = No
          usershare allow guests = Yes
          panic action = /usr/share/samba/panic-action %d
          idmap config * : backend = tdb
      [printers]
          comment = All Printers
          path = /var/spool/samba
          create mask = 0700
          printable = Yes
          print ok = Yes
          browseable = No
      [print$]
          comment = Printer Drivers
          path = /var/lib/samba/printers

    Read the article

  • How to re-add RAID-10 dropped drive?

    - by thiesdiggity
    I have a problem that I can't seem to solve. We have an Ubuntu server set up with RAID-10, and two of the drives dropped out of the array. When I try to re-add them using the following command:
      mdadm --manage --re-add /dev/md2 /dev/sdc1
    I get the following error message:
      mdadm: Cannot open /dev/sdc1: Device or resource busy
    When I do a "cat /proc/mdstat" I get the following:
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [r$
      md2 : active raid10 sdb1[0] sdd1[3]
            1953519872 blocks 64K chunks 2 near-copies [4/2] [U__U]
      md1 : active raid1 sda2[0] sdc2[1]
            468853696 blocks [2/2] [UU]
      md0 : active raid1 sda1[0] sdc1[1]
            19530688 blocks [2/2] [UU]
      unused devices: <none>
    When I run "/sbin/mdadm --detail /dev/md2" I get the following:
      /dev/md2:
              Version : 00.90
        Creation Time : Mon Sep 5 23:41:13 2011
           Raid Level : raid10
           Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
        Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
         Raid Devices : 4
        Total Devices : 2
      Preferred Minor : 2
          Persistence : Superblock is persistent
          Update Time : Thu Oct 25 09:25:08 2012
                State : active, degraded
       Active Devices : 2
      Working Devices : 2
       Failed Devices : 0
        Spare Devices : 0
               Layout : near=2, far=1
           Chunk Size : 64K
                 UUID : c6d87d27:aeefcb2e:d4453e2e:0b7266cb
               Events : 0.6688691
          Number   Major   Minor   RaidDevice   State
             0       8       17        0        active sync   /dev/sdb1
             1       0        0        1        removed
             2       0        0        2        removed
             3       8       49        3        active sync   /dev/sdd1
    Output of df -h is:
      Filesystem                     Size  Used  Avail  Use%  Mounted on
      /dev/md1                       441G  2.0G  416G   1%    /
      none                           32G   236K  32G    1%    /dev
      tmpfs                          32G   0     32G    0%    /dev/shm
      none                           32G   112K  32G    1%    /var/run
      none                           32G   0     32G    0%    /var/lock
      none                           32G   0     32G    0%    /lib/init/rw
      tmpfs                          64G   215M  63G    1%    /mnt/vmware
      none                           441G  2.0G  416G   1%    /var/lib/ureadahead/debugfs
      /dev/mapper/RAID10VG-RAID10LV  1.8T  139G  1.6T   8%    /mnt/RAID10
    When I do a "fdisk -l" I can see all the drives needed for the RAID-10. The RAID-10 is part of /dev/mapper; could that be the reason why the device is coming back as busy? Anyone have any suggestions on what I can try to get the drives back into the array? Any help would be greatly appreciated. Thanks!
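    Not an answer from the thread, but a hedged diagnostic sketch of commands that can show what currently claims /dev/sdc1 before a re-add is attempted; note that in the /proc/mdstat output above, sdc1 is already listed as an active member of md0, which may be why mdadm reports it as busy:

      # Check what currently holds /dev/sdc1 (all standard tools; paths are from the question)
      cat /proc/mdstat              # sdc1 appears as a member of md0 in the output above
      mdadm --examine /dev/sdc1     # read the md superblock recorded on the partition
      lsof /dev/sdc1                # any userspace process holding the device open?
      dmsetup deps                  # device-mapper volumes layered on top of the md arrays

      # Only once the partition is genuinely free of other arrays/mappings could it be added back, e.g.:
      # mdadm --manage /dev/md2 --add /dev/sdc1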

    Read the article

  • How to improve my backup strategy (rsync)?

    - by GUI Junkie
    I've seen the Q&As about backup solutions, but I'm asking anyway: one, because it's a personal situation I haven't solved yet, and two, because the answer can be useful for others. My situation is rather simple. I have two computers with two users and one external hard drive. I want to sync/backup a shared directory. Currently I use rsync with the -azvu options to sync to the external drive. My problem is the round trip: all deleted files are restored! Using rsync I'm doing:
      Computer A --> External disk --> Computer A
      Computer B --> External disk --> Computer B
    (I should probably do External disk --> Computer A as a last step.) I've seen 'bup' mentioned, and other Q&As talk about Dropbox + rsync... Another option is maybe to have rsync delete files? Can my current backup strategy be improved in some other way?
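    As a hedged illustration of one common approach (not necessarily the right one for this setup): rsync's --delete flag propagates deletions instead of restoring removed files on the round trip, and --dry-run previews what would be removed first. The paths below are placeholders:

      # Preview first: -n/--dry-run shows what would be transferred or deleted
      rsync -azvun --delete /home/userA/shared/ /media/external/shared/

      # Then run it for real; files deleted on the source are also removed on the destination
      rsync -azvu --delete /home/userA/shared/ /media/external/shared/

      # Pulling back to the other computer the same way keeps deletions in sync,
      # but note this is mirroring, not versioned backup: a deleted file is gone everywhere.
      rsync -azvu --delete /media/external/shared/ /home/userB/shared/

    The trade-off is that a mirror with --delete is not a true backup; tools like bup or snapshot-style rsync setups keep history, which is what protects against accidental deletions.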

    Read the article

  • Spreading incoming batched data into a real-time stream

    - by pr1001
    I would like to display some events in 'real-time'. However, I must fetch the data from another source. I can request the last X minutes, though the source is updated approximately every 5 minutes. This means that there will be a delay between the most recent data retrieved and the point in time that I make the request. Second, because I will be receiving a batch of data, I don't want to just fire out all the events down a socket once my fetcher has retrieved it: I would like to spread out the events so that they are both accurately spaced amongst each other and in sync with their original occurrences (e.g. an event is always displayed 6 minutes after it actually happened). My thought is to fetch the data every 5 minutes from the source, knowing that I won't get the very latest data. The original data would be then queued to be sent down the socket 7.5 minutes from its original timestamp – that is, at least ~2.5 minutes from when its batch was fetched and at most 7.5 minutes since then. My question is this: is this the best way to approach the problem? Does this problem have any standard approaches or associated literature related to implementation best-practices and edge cases? I am a bit worried that the frequency of my fetches and the frequency in which the source is updated will get out of sync, leading to points where no data will be retrieved from the source. However, since my socket delay is greater than my fetch frequency, the subsequent fetch should retrieve newer data before the socket queue is empty. Is that correct? Am I missing something? Thanks!
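    One way to picture the delay logic described above is a small scheduling sketch: each event is queued with an emission time of its original timestamp plus a fixed offset (7.5 minutes here), and a periodic worker releases whatever is due. This is only an illustrative outline in Python, not tied to any particular stack mentioned in the question:

      import heapq, itertools, time

      DELAY_SECONDS = 7.5 * 60  # offset between an event's original timestamp and its emission

      class DelayedStream:
          """Queues batched events and releases each one DELAY_SECONDS after it originally occurred."""

          def __init__(self):
              self._heap = []
              self._tie = itertools.count()  # tie-breaker so equal timestamps never compare payloads

          def ingest_batch(self, events):
              # events: iterable of (original_unix_timestamp, payload) from the periodic fetch
              for ts, payload in events:
                  heapq.heappush(self._heap, (ts + DELAY_SECONDS, next(self._tie), payload))

          def emit_due(self, send):
              # Call from a timer or loop; emits every event whose scheduled time has passed.
              now = time.time()
              while self._heap and self._heap[0][0] <= now:
                  _, _, payload = heapq.heappop(self._heap)
                  send(payload)  # e.g. push the event down the socket to connected clients

    Because the emission offset (7.5 minutes) exceeds the fetch interval (5 minutes), the queue should normally refill before it drains, which matches the reasoning in the question; the edge case to handle is a fetch that fails or returns late, leaving a visible gap.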

    Read the article

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server all the websites connect to as opposed to each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps smaller databases (each containing a subset of the central master data) in sync, what sorts of issues will we likely encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter. CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs multiple databases. This question has been answered numerous times. The question is regarding the pros and cons for a deployment like this having the ability to manage all the websites centrally (one server) vs trying to keep them all in sync if they each have their own db (multiple servers). REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at a central order management integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?

    Read the article

  • How do I stop video tearing? (Nvidia prop driver, non-compositing window manager)

    - by Chan-Ho Suh
    I have that problem which seemingly afflicts many using the proprietary Nvidia driver: video tearing, i.e. fine horizontal lines (usually near the top of my display) when there is a lot of panning or action in the video. (Note: switching back to the default nouveau driver is not an option, as its seemingly nonexistent power management drains my battery several times faster.) I've tried Totem, Parole, and VLC, and tearing occurs with all of them. The best result has been to use X11 output in VLC, but there is still tearing with relatively moderate action. Hardware: MacBook Air 3,2 -- which has an Nvidia GeForce 320M. There are two common fixes for tearing with Nvidia proprietary drivers:
      1. Turn off compositing, since Nvidia proprietary drivers don't usually play nice with compositing window managers on Linux (Compiz is an exception I'm aware of). But I use an extremely lightweight window manager (Awesome window manager) which is not even capable of compositing (or any cool effects). I also have this problem in Xfce, where I have compositing disabled.
      2. Enable sync to VBlank. To enable this, I set the option in nvidia-settings and then autostart it as nvidia-settings -l with my other autostart programs. This seems to work, because when I run glxgears, I get:
         $ glxgears
         Running synchronized to the vertical refresh. The framerate should be approximately the same as the monitor refresh rate.
         303 frames in 5.0 seconds = 60.500 FPS
         300 frames in 5.0 seconds = 59.992 FPS
         And when I check the refresh rate using nvidia-settings:
         $ nvidia-settings -q RefreshRate
         Attribute 'RefreshRate' (wampum:0.0; display device: DFP-2): 60.00 Hz.
    All this suggests sync to VBlank is enabled. As I understand it, this is precisely designed to stop tearing, and a lot of people's problem is even getting something like glxgears to output the correct info. I don't understand why it's not working for me.
    xorg.conf: http://paste.ubuntu.com/992056/
    Example of observed tearing: [screenshot]
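    For reference, a hedged sketch of how sync-to-VBlank is commonly forced explicitly on the proprietary driver; these are real nvidia-settings attributes and driver environment variables, but whether they help on a GeForce 320M with a non-compositing window manager is exactly the open question here:

      # Force sync-to-vblank for OpenGL clients at session start (same effect as the GUI checkbox)
      nvidia-settings --assign SyncToVBlank=1

      # Per-application override via the proprietary driver's environment variable
      __GL_SYNC_TO_VBLANK=1 vlc video.mkv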

    Read the article

  • No, iCloud Isn’t Backing Them All Up: How to Manage Photos on Your iPhone or iPad

    - by Chris Hoffman
    Are the photos you take with your iPhone or iPad backed up in case you lose your device? If you’re just relying on iCloud to manage your important memories, your photos may not be backed up at all. Apple’s iCloud has a photo-syncing feature in the form of “Photo Stream,” but Photo Stream doesn’t actually perform any long-term backups of your photos. iCloud’s Photo Backup Limitations Assuming you’ve set up iCloud on your iPhone or iPad, your device is using a feature called “Photo Stream” to automatically upload the photos you take to your iCloud storage and sync them across your devices. Unfortunately, there are some big limitations here. 1000 Photos: Photo Stream only backs up the latest 1000 photos. Do you have 1500 photos in your Camera Roll folder on your phone? If so, only the latest 1000 photos are stored in your iCloud account online. If you don’t have those photos backed up elsewhere, you’ll lose them when you lose your phone. If you have 1000 photos and take one more, the oldest photo will be removed from your iCloud Photo Stream. 30 Days: Apple also states that photos in your Photo Stream will be automatically deleted after 30 days “to give your devices plenty of time to connect and download them.” Some people report photos aren’t deleted after 30 days, but it’s clear you shouldn’t rely on iCloud for more than 30 days of storage. iCloud Storage Limits: Apple only gives you 5 GB of iCloud storage space for free, and this is shared between backups, documents, and all other iCloud data. This 5 GB can fill up pretty quickly. If your iCloud storage is full and you haven’t purchased any more storage more from Apple, your photos aren’t being backed up. Videos Aren’t Included: Photo Stream doesn’t include videos, so any videos you take aren’t automatically backed up. It’s clear that iCloud’s Photo Stream isn’t designed as a long-term way to store your photos, just a convenient way to access recent photos on all your devices before you back them up for real. iCloud’s Photo Stream is Designed for Desktop Backups If you have a Mac, you can launch iPhoto and enable the Automatic Import option under Photo Stream in its preferences pane. Assuming your Mac is on and connected to the Internet, iPhoto will automatically download photos from your photo stream and make local backups of them on your hard drive. You’ll then have to back up your photos manually so you don’t lose them if your Mac’s hard drive ever fails. If you have a Windows PC, you can install the iCloud Control Panel, which will create a Photo Stream folder on your PC. Your photos will be automatically downloaded to this folder and stored in it. You’ll want to back up your photos so you don’t lose them if your PC’s hard drive ever fails. Photo Stream is clearly designed to be used along with a desktop application. Photo Stream temporarily backs up your photos to iCloud so iPhoto or iCloud Control Panel can download them to your Mac or PC and make a local backup before they’re deleted. You could also use iTunes to sync your photos from your device to your PC or Mac, but we don’t really recommend it — you should never have to use iTunes. How to Actually Back Up All Your Photos Online So Photo Stream is actually pretty inconvenient — or, at least, it’s just a way to temporarily sync photos between your devices without storing them long-term. But what if you actually want to automatically back up your photos online without them being deleted automatically? 
The solution here is a third-party app that does this for you, offering the automatic photo uploads with long-term storage. There are several good services with apps in the App Store: Dropbox: Dropbox’s Camera Upload feature allows you to automatically upload the photos — and videos — you take to your Dropbox account. They’ll be easily accessible anywhere there’s a Dropbox app and you can get much more free Dropbox storage than you can iCloud storage. Dropbox will never automatically delete your old photos. Google+: Google+ offers photo and video backups with its Auto Upload feature, too. Photos will be stored in your Google+ Photos — formerly Picasa Web Albums — and will be marked as private by default so no one else can view them. Full-size photos will count against your free 15 GB of Google account storage space, but you can also choose to upload an unlimited amount of photos at a smaller resolution. Flickr: The Flickr app is no longer a mess. Flickr offers an Auto Upload feature for uploading full-size photos you take and free Flickr accounts offer a massive 1 TB of storage for you to store your photos. The massive amount of free storage alone makes Flickr worth a look. Use any of these services and you’ll get an online, automatic photo backup solution you can rely on. You’ll get a good chunk of free space, your photos will never be automatically deleted, and you can easily access them from any device. You won’t have to worry about storing local copies of your photos and backing them up manually. Apple should fix this mess and offer a better solution for long-term photo backup, especially considering the limitations aren’t immediately obvious to users. Until they do, third-party apps are ready to step in and take their place. You can also automatically back up your photos to the web on Android with Google+’s Auto Upload or Dropbox’s Camera Upload. Image Credit: Simon Yeo on Flickr     

    Read the article

  • Back Up to Tape the Way You Shop For Groceries

    - by rickramsey
    Imagine if this was how you shopped for groceries: From the end of the aisle sprint to the point where you reach the ketchup. Pull a bottle from the shelf and yell at the top of your lungs, “Got it!” Sprint back to the end of the aisle. Start again and sprint down the same aisle to the mustard, pull a bottle from the shelf and again yell for the whole store to hear, “Got it!” Sprint back to the end of the aisle. Repeat this procedure for every item you need in the aisle. Proceed to the next aisle and follow the same steps for the list of items you need from that aisle. Sounds ridiculous, doesn’t it? Not only is it horribly inefficient, it’s exhausting and can lead to wear out failures on your grocery cart, or worse, yourself. This is essentially how NetApp and some other applications write NDMP backups to tape. In the analogy, the ketchup and mustard are the files to be written, yelling “Got it!” is the equivalent of a sync mark at the end of a file, and the sprint back to the end of an aisle is the process most commonly called a “backhitch” where the drive has to back up on a tape to start writing again. Writing to tape in this way results in very slow tape drive performance and imposes unnecessary wear on the tape drive and the media, especially when writing small files. The good news is not all tape drives behave this way when writing small files. Unlike midrange LTO drives, Oracle’s StorageTek T10000D tape drive is designed to handle this scenario efficiently. The difference between the two drive types is that the T10000D drive gives you the ability to write files in a NetApp NDMP backup environment the way you would normally shop for groceries. With grocery shopping, you essentially stream through aisles picking up items as you go, and then after checking out, yell, “Got it!”, though you might do that last step silently. With the T10000D, it has a feature called the Tape Application Accelerator, which prevents the drive from having to stop after each file is written to notify NetApp or another application that the write was successful. When enabled in the T10000D tape drive, Tape Application Accelerator causes the tape drive to respond to tape mark and file sync commands differently than when disabled: A tape mark received by the tape drive is treated as a buffered tape mark. A file sync received by the tape drive is treated as a no op command. Since buffered tape marks and no op commands do not cause the tape drive to empty the contents of its buffer to tape and backhitch, the data is written to tape in significantly less time. Oracle has emulated NetApp environments with a number of different file sizes and found the following when comparing the T10000D with the Tape Application Accelerator enabled versus LTO6 tape drives. Notice how the T10000D is not only monumentally faster, but also remarkably consistent? In addition, the writing of the 50 GB of files is done without a single backhitch. The LTO6 drive, meanwhile, will perform as many as 3,800 backhitches! At the end of writing the entire set of files, the T10000D tape drive reports back to the application, in this case NetApp, that the write was successful via a tape mark. So if the Tape Application Accelerator dramatically improves performance and reliability, why wouldn’t you always have it enabled? The reason is because tape drive buffers are meant to be just temporary data repositories so in the event of a power loss, there could be data loss in certain environments for the files that resided in the buffer. 
    Fortunately, we do have best practices, depending on your environment, to prevent this from happening. I highly recommend reading Maximizing Tape Performance with StorageTek T10000 Tape Drives (pdf) to decide which best practice is right for you. The white paper also digs deeper into the benefits of the Tape Application Accelerator. The white paper is free, and after downloading it you can decide for yourself whether you want to yell “Got it!” out loud or just silently to yourself.
    Customer Advisory Panel
    One final link: Oracle has started up a Customer Advisory Panel program to collect feedback from customers on their current experiences with Oracle products, as well as desires for future product development. If you would like to participate in the program, go to this link at oracle.com.
    Photo taken on Idaho's Sacajawea Historic Byway by Rick Ramsey. - Brian Zents

    Read the article

  • doubleTwist is an iTunes Alternative that Supports Several Devices

    - by Mysticgeek
    There are a lot of iTunes users out there, but unfortunately you can’t use it with all of your portable devices. Today we take a look at doubleTwist, which allows you to sync your media with a multitude of portable devices and easily share it as well. Note: You can run doubleTwist on Windows or Mac, and here we take a look at the Windows version.
    Install & Setup doubleTwist
    Download and install doubleTwist using the defaults in the wizard… Installation takes several moments and you’ll see the progress while it finishes up. After installation is complete, sign up for an account if you don’t already have one. If you do have an account you can log in right away. Enter your username, email address, and password, then click Sign Up. You’ll get a confirmation email and need to activate the account before you can sign in. Once you’re all signed up, launch doubleTwist and you’ll be ready to start using it.
    doubleTwist Music
    The default music store is the Amazon MP3 store, which might appeal to those of you who are tired of the iTunes music store. A lot of times the music is cheaper and available at higher bit rates. You can start searching for music in the Amazon Music Store and previewing songs. To purchase anything, though, you will need to sign into your Amazon account. Under Playlists it allows you to import your playlists from iTunes and Windows Media Player, which is a handy feature if you don’t want to set them up again. Of course you can play your songs through the music player on your desktop.
    Devices
    One of the coolest things about doubleTwist is that it supports a lot of different portable media devices, including iPod, BlackBerry, Windows Mobile, Android, PSP, smartphones, and much more. Unfortunately for Zune users… there isn’t any support for the Zune or Zune HD yet. Here we have a Creative Zen attached and can sync songs, pictures, and podcasts. An HTC-S620 smartphone running Windows Mobile… Even a simple USB drive will be recognized and you can transfer your media to it as well.
    Podcasts
    Finding your favorite audio and video podcasts is easy with the search feature. You can easily manage and subscribe to podcasts in the subscriptions section. You can watch the video podcasts directly in doubleTwist.
    Sharing Media
    Also you can share digital media with your friends or add it to Flickr and YouTube. You can send any pictures, videos, or music in your library to other people by dragging it over. You can email users individually… or access contacts from your Gmail and Yahoo accounts. There is a limit to how much of a video podcast you can send: only the first 10 minutes. The person you send it to will get a link in their email that points to your My Feed page on the doubleTwist site. There they can access the media you sent… in this example it’s a video podcast, but you can share any media.
    Other Features
    Under My Profile you can change your avatar and personal information. In Preferences you can choose where media is stored, its startup actions, and podcast subscriptions, and manage device syncing.
    Conclusion
    It’s still in the beta stage so expect some bugs, but overall doubleTwist is a solid media player that is easy to use with a clean interface. It’s simple and doesn’t try to do too much, so it is fairly easy on system resources. The main annoyance is that it tries to catalog all of your media out of the box, which may be alright for some users with smaller media collections, but very irritating to advanced users with large collections.
    Also, there is currently no support for the Zune, but according to their forums, it’s on the way. At the time of this writing it’s in public beta and can be downloaded for XP, Vista, Windows 7 (32 & 64 bit), and Mac OS X. If you’re looking for an iTunes alternative that works with several different portable devices, you might want to give doubleTwist a try.
    Download doubleTwist Public Beta
    See If Your Media Device is Supported by doubleTwist

    Read the article

  • Developing Schema Compare for Oracle (Part 4): Script Configuration

    - by Simon Cooper
    If you've had a chance to play around with the Schema Compare for Oracle beta, you may have come across this screen in the synchronization wizard. This screen is one of the few screens that, along with the project configuration form, doesn't come from SQL Compare. It was designed to solve a couple of issues that, although not specific to Oracle, are much more of a problem than on SQL Server: datatype conversions and NOT NULL columns.
    1. Datatype conversions
    SQL Server is generally quite forgiving when it comes to datatype conversions using ALTER TABLE. For example, you can convert from a VARCHAR to INT using ALTER TABLE as long as all the character values are parsable as integers. Oracle, on the other hand, only allows ALTER TABLE conversions that don't change the internal data format. Essentially, every change that requires an actual datatype conversion has to be done using a rebuild with a conversion function. That's OK, as we can simply hard-code the various conversion functions for the valid datatype conversions and insert those into the rebuild SELECT list. However, as there always is with Oracle, there's a catch. Have a look at the NUMTODSINTERVAL function. As well as specifying the value (or column) to convert, you have to specify an interval_unit, which tells Oracle how to interpret the input number. We can't hardcode a default for this parameter, as it is entirely dependent on the user's data context! So, in order to convert NUMBER to INTERVAL DAY TO SECOND/INTERVAL YEAR TO MONTH, we need to have feedback from the user as to what to put in this parameter while we're generating the sync script. This requires a new step in the engine action/script generation to insert these values into the script, as well as new UI to allow the user to specify these values in a sensible fashion. In implementing the engine and UI infrastructure to allow this, it made much more sense to implement it for any rebuild datatype conversion, not just NUMBER to INTERVALs. For conversions which we can do, we pre-fill the 'value' box with the appropriate function from the documentation. The user can also type in arbitrary SQL expressions, which allows the user to specify optional format parameters for the relevant conversion functions, or indeed call their own functions to convert between values that don't have a built-in conversion defined. As the value gets inserted as-is into the rebuild SELECT list, any expression that is valid in that context can be specified as the conversion value.
    2. NOT NULL columns
    Another problem that is solved by the new step in the sync wizard is adding a NOT NULL column to a table. If the table contains data (as most database tables do), you can't just add a NOT NULL column, as Oracle doesn't know what value to put in the new column for existing rows - the DDL statement will fail. There are actually three separate scenarios for this problem that have separate solutions within the engine:
    Adding a NOT NULL column to a table without a rebuild
    Here, the workaround is to add a column default with an appropriate value to the column you're adding:
      ALTER TABLE tbl1 ADD newcol NUMBER DEFAULT <value> NOT NULL;
    Note, however, there is something to bear in mind about this solution; once specified on a column, a default cannot be removed. To 'remove' a default from a column you change it to have a default of NULL, hence there's code in the engine to treat a NULL default the same as no default at all.
    Adding a NOT NULL column to a table, where a separate change forced a table rebuild
    Fortunately, in this case, a column default is not required - we can simply insert the default value into the rebuild SELECT clause.
    Changing an existing NULL to a NOT NULL column
    To implement this, we run an UPDATE command before the ALTER TABLE to change all the NULLs in the column to the required default value.
    For all three, we need some way of allowing the user to specify a default value to use instead of NULL; as this is essentially the same problem as datatype conversion (inserting values into the sync script), we can re-use the UI and engine implementation of datatype conversion values. We also provide the option to alter the new column to allow NULLs, or to ignore the problem completely. Note that there is the same (long-running) problem in SQL Compare, but it is much more of an issue in Oracle as you cannot easily roll back executed DDL statements if the script fails at some point during execution. Furthermore, the engine of SQL Compare is far less conducive to inserting user-supplied values into the generated script. As we're writing the Schema Compare engine from scratch, we used what we learnt from the SQL Compare engine and designed it to be far more modular, which makes inserting procedures like this much easier.
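    To make the NOT NULL scenarios concrete, here is a hedged sketch in Oracle SQL of the kind of statements the post describes; the table, column and default value are made-up placeholders, not output from the tool:

      -- Scenario 1: add a NOT NULL column in place by supplying a column default
      ALTER TABLE orders ADD (status NUMBER DEFAULT 0 NOT NULL);

      -- Scenario 3: tighten an existing nullable column -- backfill first, then constrain
      UPDATE orders SET status = 0 WHERE status IS NULL;
      ALTER TABLE orders MODIFY (status NOT NULL);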

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process:
      - Putting your database into a source control system, and,
      - Running a continuous integration process, each time changes are checked in.
    However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above:
      - Putting some actual integration tests into the CI process (otherwise, they don’t really do much, do they!?),
      - Deploying the database changes with a managed, automated approach,
      - Monitoring what you’ve just put live, to make sure you haven’t broken anything.
    This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through Part I first. There’ll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system.
    In the previous post, I used a mixture of Red Gate products and other third-party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly, because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!
    Background on Continuous Delivery
    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say, however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references:
      - Refactoring Databases – Evolutionary Database Design, by Scott J. Ambler and Pramod J. Sadalage
      - Versioning Databases – Branching and Merging, by Scott Allen
    In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: “Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.” I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
    That is possible (and what Martin calls Continuous Deployment). However, again, that's more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!
    Integration Testing
    Back to something practical. The next stage, building on our set-up from the previous article, is to add some integration tests to the process. As I say, the CI process, though interesting, isn't enormously useful without some sort of test process running. For this we'll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate's SQL Test, found on http://www.red-gate.com/products/sql-development/sql-test/, or can be downloaded separately from www.tsqlt.org – though I'll provide a step-by-step guide below for setting this up.
    Getting tSQLt set up via SQL Test
    Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed.
    Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed.
    Open SSMS. You should now see SQL Test under the Tools menu. Clicking this link will give you the basic SQL Test dialogue.
    As yet, though we've installed the SQL Test product, we haven't installed the tSQLt test framework on any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link.
    In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the "Add SQL Cop tests" option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database; however, we won't be using them in this particular simple example.
    Once you've clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand side below. We've now installed the framework; however, we haven't actually created any tests, so this will be the next step. But before we proceed, we've made an update to our database, so we should again check this in to source control, adding comments as required. It's also worth a quick check that your build still runs with the new additions! (And a quick check of the RedGateAppCI database shows that the changes have been made.)
    Creating and Testing a Unit Test
    There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I'm going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.
    Adding Reference Data to our Database
    To start, I want to add some reference data to my database, and have this source controlled (as well as the schema).
    First of all I need to add some data to my solitary table – this can be done a number of ways, but I'll do this in SSMS for simplicity. I then add some reference data to my table. Currently this reference data just exists in the database. For proper integration testing, it needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right-click the table, select Design, then right-click on the first "id" row and click on "Set Primary Key". NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the following option. In the next screen, link the data in the Email table by selecting it from the list and clicking "save and close". We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo.
    NB: From here on, I won't show screenshots for the GitHub side of things – it's the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from).
    An interesting point to note here, when these changes are committed in SQL Source Control (right-click the database and select "Commit Changes to Source Control.."): the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.
    Creating and Running the Test
    As I mention, the test I'm going to use here is a very simple one – are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four) – just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article.
    In SSMS select "SQL Test" from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test that needs filling in. We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use here is as below.
    This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

    -- Basic email check test
    ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
    AS
    BEGIN
        SET NOCOUNT ON

        Declare @Output VarChar(max)
        Set @Output = ''

        SELECT @Output = @Output + Email + Char(13) + Char(10)
        FROM dbo.Email
        WHERE email NOT LIKE '%_@__%.__%'

        If @Output > ''
        Begin
            Set @Output = Char(13) + Char(10) + @Output
            EXEC tSQLt.Fail @Output
        End

    END;

    Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something, by showing a failed test. Edit one of the email addresses in your table to an incorrect format. Now, re-run the same SQL Test as before and you'll see the test fail. Great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.) The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).
    Checking that the Tests are Running as Integration Tests
    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database, the SPROC for the new test has been moved over to the database. However, this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did above, when we edited the email address) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

    <!-- tSQLt tests -->
    <!-- Optional -->
    <!-- To run tSQLt tests in source control for the database, enter true. -->
    <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number): superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:

    21-Jun-2013 11:35:19 Build FAILED.
    21-Jun-2013 11:35:19
    21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
    21-Jun-2013 11:35:19 (sqlCI target) ->
    21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

    As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I'm going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build: it worked!
    Summary
    This has been a very quick run through the implementation of CI for databases, including tSQLt tests to check whether your database updates are working. The next post in this series will focus on automated deployment – we've tested our database changes; how can we now deploy them to target sites?

    Read the article

  • Solr DataImportHandler configuration

    - by talo
    I want to get data from a MySQL database with the help of the DataImportHandler so I can create indexes. I've configured my Solr instance so that it works on Tomcat (the example admin page), but when I try to change the solrconfig.xml file I get an error message. I'm working with Solr 3.6. So my configuration is: in solrconfig.xml I added:

    <dataDir>${solr.data.dir:/usr/share/tomcat7/solr2}</dataDir>

    to specify my working directory, and then:

    <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
      <lst name="defaults">
        <str name="config">/usr/share/tomcat7/solr2/conf/data-config.xml</str>
      </lst>
    </requestHandler>

    to specify the new request handler. These two are my lib directives for the DIH. Do I need to change them?

    <lib dir="../../dist/" regex="apache-solr-dataimporthandler-\d.*\.jar" />
    <lib dir="../../contrib/dataimporthandler/lib/" regex=".*\.jar" />

    I also created a data-config.xml file and added the following:

    <?xml version="1.0" encoding="UTF-8"?>
    <dataConfig>
      <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/ethicsweb_experts" user="root" password=""/>
      <document>
        <entity name="experts" datasource="mysql" pk="mainid" query="SELECT experts.mainid as mainid FROM experts WHERE validRec = 'y'">
          <field column="mainid" name="mainid"/>
          <field column="validRec" name="validRec"/>
        </entity>
      </document>
    </dataConfig>

    I've copied the following jars to the tomcat/lib folder (the DIH jar files and the MySQL JDBC connector jar file):

    apache-solr-dataimporthandler-3.6.0.jar
    apache-solr-dataimporthandler-extras-3.6.0.jar
    mysql-connector-java-5.1.20-bin.jar

    Also, in the schema.xml file I added the following fields:

    <field name="mainid" type="int" indexed="true" stored="true" />
    <field name="validRec" type="string" indexed="true" stored="true" />
    <field name="recSource" type="string" indexed="true" stored="true" />
    <uniqueKey>mainid</uniqueKey>

    But now when I try to access http://localhost:8080/solr2/ I get the following output: HTTP Status 500 - Severe errors in solr configuration. Check your log files for more detailed information on what may be wrong. 
If you want solr to continue after configuration errors, change: false in solr.xml ------------------------------------------------------------- org.apache.solr.common.SolrException: No cores were created, please check the logs for errors at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:172) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) ------------------------------------------------------------- java.lang.NoClassDefFoundError: org/apache/solr/util/plugin/SolrCoreAware at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:791) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:449) at java.net.URLClassLoader.access$100(URLClassLoader.java:71) at java.net.URLClassLoader$1.run(URLClassLoader.java:361) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:423) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1698) at java.lang.ClassLoader.loadClass(ClassLoader.java:410) at java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:789) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:378) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:419) at org.apache.solr.core.SolrCore.createRequestHandler(SolrCore.java:455) at org.apache.solr.core.RequestHandlers.initHandlersFromConfig(RequestHandlers.java:159) at org.apache.solr.core.SolrCore.(SolrCore.java:563) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:483) at 
org.apache.solr.core.CoreContainer.load(CoreContainer.java:335) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:219) at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:161) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) Caused by: java.lang.ClassNotFoundException: org.apache.solr.util.plugin.SolrCoreAware at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:423) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) ... 
47 more My log entries show me that: SEVERE: java.lang.NoClassDefFoundError: org/apache/solr/util/plugin/SolrCoreAware at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:791) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:449) at java.net.URLClassLoader.access$100(URLClassLoader.java:71) at java.net.URLClassLoader$1.run(URLClassLoader.java:361) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:423) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1698) at java.lang.ClassLoader.loadClass(ClassLoader.java:410) at java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:789) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:378) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:419) at org.apache.solr.core.SolrCore.createRequestHandler(SolrCore.java:455) at org.apache.solr.core.RequestHandlers.initHandlersFromConfig(RequestHandlers.java:159) at org.apache.solr.core.SolrCore.(SolrCore.java:563) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:483) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:335) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:219) at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:161) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) Caused by: java.lang.ClassNotFoundException: org.apache.solr.util.plugin.SolrCoreAware at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at 
java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:423) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) ... 47 more Jun 15, 2012 4:07:50 PM org.apache.solr.servlet.SolrDispatchFilter init SEVERE: Could not start Solr. Check solr/home property and the logs org.apache.solr.common.SolrException: No cores were created, please check the logs for errors at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:172) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) Jun 15, 2012 4:07:50 PM org.apache.solr.common.SolrException log SEVERE: org.apache.solr.common.SolrException: No cores were created, please check the logs for errors at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:172) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) So now I wonder where did i screw up the configuration for the DataImportHandler. Do i need to specify more jar files? Did i put jar files in the correct directory? Any help would be greatly appreciated.

    Read the article

  • Update JQuery Progressbar with JSON Response in an ajax Request

    - by Vincent
    All, I have an AJAX request which makes a JSON request to a server to get the sync status. The JSON request and responses are as under. I want to display a jQuery UI progressbar and update its value to the percentage returned in the getStatus JSON response. If the status is "insync", then the progressbar should not appear and a message should be displayed instead, e.g. "Server is in Sync". How can I do this?

    //JSON Request to getStatus
    {
      "header": { "type": "request" },
      "payload": [
        {
          "data": null,
          "header": { "action": "load" }
        }
      ]
    }

    //JSON Response of getStatus (when status is not 100%)
    {
      "header": { "type": "response", "result": 400 },
      "payload": [
        {
          "header": { "result": 400 },
          "data": { "status": "pending", "percent": 20 }
        }
      ]
    }

    //JSON Response of getStatus (when percent is 100%)
    {
      "header": { "type": "response", "result": 400 },
      "payload": [
        {
          "header": { "result": 400 },
          "data": { "status": "insync" }
        }
      ]
    }

    Read the article

  • c# - wmplib - playing 2 mp3 files but can't get them in sync

    - by Ciarán
    I have two WindowsMediaPlayer objects set up. Both objects have the same mp3 song file, but when I play them at the same time they are out of sync by a few seconds. The code I call to play them looks like this: sound1.controls.play(); sound2.controls.play(); Is it just because one play method is executed before the other, so one starts ahead of the other? Is there any way I could sync them? I have tried messing with the rate of one track to match the other, but I'm not getting it perfect. Thanks
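    Not a definitive answer, but one rough approach is to start both players and then periodically nudge the second player's position to match the first, since WMPLib exposes the playback position through controls.currentPosition. The sketch below assumes two WindowsMediaPlayer instances whose URLs are already set; the PlayerSync and PlayTogether names, the 500 ms timer interval and the 50 ms drift threshold are illustrative choices, and sample-accurate sync generally isn't achievable this way:

    using System;
    using WMPLib; // COM reference to Windows Media Player (wmp.dll)

    class PlayerSync
    {
        public static void PlayTogether(WindowsMediaPlayer sound1, WindowsMediaPlayer sound2)
        {
            sound1.controls.play();
            sound2.controls.play();

            // Periodically re-align the second track to the first.
            var timer = new System.Timers.Timer(500);
            timer.Elapsed += (s, e) =>
            {
                double drift = sound1.controls.currentPosition - sound2.controls.currentPosition;
                if (Math.Abs(drift) > 0.05) // ignore drift below roughly 50 ms
                {
                    sound2.controls.currentPosition = sound1.controls.currentPosition;
                }
            };
            timer.Start();
        }
    }

    For audio that really must stay phase-aligned, a single player instance (or a lower-level audio library that mixes the two streams itself) is usually a better design choice than two independent WindowsMediaPlayer objects.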

    Read the article

  • How can I update information in an Android Activity from a background Service

    - by tmm
    Hello, I am trying to create a simple Android application that has an ActivityList of information. When the application starts, I plan to start a Service that will be constantly calculating the data (it will be changing), and I want the ActivityList to stay in sync with the data that the Service is calculating for the life of the app. How can I set up my Activity to listen to the Service? Is this the best way to approach this problem? For example, imagine a list of stock prices - the data would be changing regularly and would need to stay in sync with the (in my case) Service that is constantly calculating/fetching the data. Thanks in advance

    Read the article

  • What is the fastest way to create a checksum for large files in C#

    - by crono
    Hi, I have to sync large files across some machines. The files can be up to 6 GB in size. The sync will be done manually every few weeks. I can't take the filename into consideration because it can change at any time. My plan is to create checksums on the destination PC and on the source PC, and then copy every file whose checksum is not already present on the destination. My first attempt was something like this:

    using System.IO;
    using System.Security.Cryptography;

    private static string GetChecksum(string file)
    {
        using (FileStream stream = File.OpenRead(file))
        {
            SHA256Managed sha = new SHA256Managed();
            byte[] checksum = sha.ComputeHash(stream);
            return BitConverter.ToString(checksum).Replace("-", String.Empty);
        }
    }

    The problem was the runtime:
    - with SHA256 and a 1.6 GB file: 20 minutes
    - with MD5 and a 1.6 GB file: 6.15 minutes
    Is there a better - faster - way to get the checksum (maybe with a better hash function)?
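    A commonly suggested direction (not from the original question) is to read the file through a much larger buffer and, if the checksum is only used to detect changed files rather than for security, to switch to a cheaper algorithm such as MD5. Below is a minimal C# sketch along those lines; the FastChecksum and GetChecksumBuffered names and the 1 MB buffer size are illustrative choices, not part of the original post:

    using System;
    using System.IO;
    using System.Security.Cryptography;

    static class FastChecksum
    {
        // Variant of GetChecksum above: same output format, but the file is
        // read through a large buffer and hashed with MD5, which is usually
        // noticeably faster than SHA-256 for pure change detection.
        public static string GetChecksumBuffered(string file)
        {
            const int bufferSize = 1024 * 1024; // 1 MB read buffer - tune for your disks

            using (var stream = new FileStream(file, FileMode.Open, FileAccess.Read, FileShare.Read, bufferSize))
            using (var buffered = new BufferedStream(stream, bufferSize))
            using (var md5 = MD5.Create())
            {
                byte[] checksum = md5.ComputeHash(buffered);
                return BitConverter.ToString(checksum).Replace("-", String.Empty);
            }
        }
    }

    Timing both versions on the same 1.6 GB file with a Stopwatch is the easiest way to see whether the buffering, the algorithm, or simply disk throughput is the bottleneck in a given environment.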

    Read the article

  • Can't delete more than 200 contacts in HTC HERO

    - by rahul
    I'm working on a security application which will copy all contacts to another database and delete all contacts from the phonebook. I'm testing this on an Android HTC Hero. I can successfully delete contacts from the phonebook and create the new contact info database. Up to 200 contacts it works, but after 200 contacts it stops working properly and the application starts throwing errors. There is a "Sync with Google" option under Menu > Settings > Data Sync; I think that is creating the problem. There is a notification saying "Too many contacts deleted", and if I click it a dialog appears with the title "Delete Limit exceeded". Is there anything I can do to stop synchronization, or are there any other ideas by which I can achieve the required output? Please help me with this.

    Read the article

< Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >