Search Results

Search found 9267 results on 371 pages for 'distributed apps'.


  • Cross-database transactions from one SP

    - by Michael Bray
    I need to update multiple databases with a few simple SQL statements. The databases are configured in SQL Server as 'Linked Servers', and the versions are mixed (SQL 2008, SQL 2005, and SQL 2000). I intend to write a stored procedure in one of the databases, and I would like to use a transaction to make sure that each database gets updated consistently. Which of the following is accurate? Will a single BEGIN/COMMIT TRANSACTION guarantee that all statements across all databases succeed or fail together? Will I need a separate BEGIN TRANSACTION for each set of commands against a database? Are transactions even supported when updating remote databases at all, or would I need to execute a remote SP with its own embedded transaction support? Note that I don't care about any kind of cross-database referential integrity; I'm just trying to update multiple databases at the same time from a single stored procedure, if possible. Any other suggestions are welcome as well. Thanks!
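    For what it's worth, the linked-server route usually means a distributed transaction. A minimal sketch of that pattern follows; the server, database, and table names are made-up placeholders, and it assumes MSDTC is enabled on every participating server (which can be fragile, especially with SQL 2000 boxes):

        # Sketch only: requires MSDTC on all participating servers; the
        # server, database, and table names below are hypothetical.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=sql2008host;DATABASE=MainDb;"
            "Trusted_Connection=yes",
            autocommit=True,  # the T-SQL batch below manages its own transaction
        )
        conn.execute("""
            SET XACT_ABORT ON;              -- abort everything on any error
            BEGIN DISTRIBUTED TRANSACTION;  -- escalates to MSDTC across servers
            UPDATE dbo.Accounts SET balance = balance - 10 WHERE id = 1;
            UPDATE [LINKED2005].RemoteDb.dbo.Accounts  -- four-part linked name
                SET balance = balance + 10 WHERE id = 1;
            COMMIT TRANSACTION;
        """)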

    Read the article

  • How to render different tab content on different Facebook pages using the same Page Tab App

    - by Shekhar
    Is there any way through which I can render different page tab content for different Facebook pages, using the same "page tab app"? Something like: for the FB Page "Blah", the app should render the page from the URL "http://www.mywebsite.com/Blah"; for the FB Page "Blah-Blah", the app should render the page from the URL "http://www.mywebsite.com/Blah-Blah"; for the FB Page "Blah-Blah-Blah", the app should render the page from the URL "http://www.mywebsite.com/Blah-Blah-Blah". Can I achieve this with a single Facebook app?
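    For illustration: Facebook POSTs a signed_request to a page tab, and its decoded payload carries the hosting page's id, so a single app can branch on it. A rough sketch follows; Flask and the id-to-path mapping are assumptions for the example, and the HMAC-SHA256 signature check is omitted (a real app must verify it against the app secret):

        # Illustrative sketch: signature verification omitted for brevity;
        # add the HMAC-SHA256 check before trusting this payload.
        import base64, json
        from flask import Flask, request, redirect

        app = Flask(__name__)

        def decode_signed_request(signed_request):
            # payload is base64url-encoded JSON after the first '.'
            sig, payload = signed_request.split('.', 1)
            payload += '=' * (-len(payload) % 4)   # restore stripped padding
            return json.loads(base64.urlsafe_b64decode(payload))

        @app.route('/tab', methods=['POST'])
        def tab():
            data = decode_signed_request(request.form['signed_request'])
            page_id = data.get('page', {}).get('id')
            # hypothetical page-id-to-path mapping
            paths = {'111': '/Blah', '222': '/Blah-Blah', '333': '/Blah-Blah-Blah'}
            return redirect('http://www.mywebsite.com' + paths.get(page_id, '/'))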

    Read the article

  • Web page shared with the Like button not showing in users' timelines

    - by einar
    I have single pages and one page that lists all of the single pages. On the overview page you can like each single page via its URL, and of course you can do the same while viewing a single page. My issue is very strange: most of the pages I like show up on my timeline, but some don't appear after I click Like, not even if I click "Post to Facebook". This page will not show in a user's timeline when liked or posted to Facebook: http://www.inspiredbyiceland.com/inspiration/iceland-airwaves/valdimar/ But this one will: http://www.inspiredbyiceland.com/inspiration/iceland-airwaves/snorri-helgason/ These pages are exactly the same; they use the same template, so the code should not differ, and in fact I can't see any difference between them that could be causing this kind of problem. You can view the overview page at http://www.inspiredbyiceland.com/inspiration/iceland-airwaves/ . Most of the single pages work fine and show up on users' timelines, and most of the content on the site works fine as far as I know. There is a Facebook application defined on the page; I'm not sure whether that is related to this problem.
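    One thing worth ruling out, purely as a guess from the outside: differences in the Open Graph meta tags Facebook scrapes from the two URLs. A quick diagnostic sketch that diffs the og: tags of the working and failing pages (assumes both are publicly fetchable):

        # Diagnostic sketch: fetch both pages and print the og: meta tags
        # that appear on only one of them.
        import re, urllib.request

        def og_tags(url):
            html = urllib.request.urlopen(url).read().decode('utf-8', 'replace')
            return set(re.findall(r'<meta[^>]*property="og:[^"]*"[^>]*>', html))

        base = 'http://www.inspiredbyiceland.com/inspiration/iceland-airwaves/'
        working = og_tags(base + 'snorri-helgason/')
        failing = og_tags(base + 'valdimar/')
        for tag in working ^ failing:   # symmetric difference: one page only
            print(tag)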

    Read the article

  • ASP.NET MVC - separating large app

    - by marc_s
    I've been puzzled by what I consider a contradiction in terms: ASP.NET MVC claims to further and support the "separation of concerns" motto, which I find a great idea. However, it seems there's no way of separating out controllers, models, or views into their own assemblies, or of separating areas into assemblies. With the fixed Controllers, Models and Views folders in your ASP.NET MVC project, you're actually creating a huge hodgepodge of things. Is that really separation of concerns? It seems like quite the contrary to me. So what I'm wondering is: how can I create an ASP.NET MVC solution that separates the controllers, the model, and the folders full of views into separate assemblies? How can I put areas of ASP.NET MVC 2 into separate assemblies? Or how else do you manage a large ASP.NET MVC app, one with several dozen or even over a hundred controllers, lots of model and viewmodel classes, and several hundred views?

    Read the article

  • Google Spreadsheets - when using a time trigger in code, some cells get '#N/A' values

    - by Robi
    I've created a Google spreadsheet that imports data into one cell, extracts a specific string into another cell, and pastes the data into a table, with a trigger that runs every 2 hours. Everything works perfectly when I run the script manually, but when I'm logged out and waiting for the cells to fill, the pasted cells sometimes end up with "#N/A" values. Here is the code I'm using:

        function PasteV() {
          var ss = SpreadsheetApp.getActiveSpreadsheet();
          var sheet = ss.getSheets()[0];
          // read the three source cells that the import formulas populate
          var timestamp = sheet.getRange("D1").getValue();
          var sale = sheet.getRange("G1").getValue();
          var rent = sheet.getRange("J1").getValue();
          // append them as a new row of the results table
          sheet.appendRow([timestamp, rent, sale]);
        }

    Again, when run manually it has worked countless times with no problem. I'll appreciate your help. Thanks.

    Read the article

  • Why isn't Hadoop implemented using MPI?

    - by artif
    Correct me if I'm wrong, but my understanding is that Hadoop does not use MPI for communication between different nodes. What are the technical reasons for this? I could hazard a few guesses, but I do not know enough about how MPI is implemented "under the hood" to know whether I'm right. Come to think of it, I'm not entirely familiar with Hadoop's internals either. I understand the framework at a conceptual level (map/combine/shuffle/reduce and how that works at a high level), but I don't know the nitty-gritty implementation details. I've always assumed Hadoop transmits serialized data structures (perhaps GPBs, i.e. Google Protocol Buffers) over a TCP connection, e.g. during the shuffle phase. Let me know if that's not true.

    Read the article

  • Is there a use case for non-blocking receive when I have threads?

    - by Gabriel Šcerbák
    I know non-blocking receive is not used as much in message passing, but some intuition still tells me it is needed. Take, for example, GUI event-driven applications: you need some way to wait for a message in a non-blocking way, so your program can execute some computations. One way to solve this is to have a special thread with a message queue. Is there some use case where you would really need non-blocking receive even if you have threads?
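    To make the trade-off concrete, here is a small standard-library sketch (names invented for the example) of the alternative described above: a producer thread fills a queue while an event loop polls it with a non-blocking receive between slices of work:

        # Sketch: an event loop doing work between non-blocking receives.
        import queue, threading, time

        inbox = queue.Queue()

        def producer():
            for i in range(3):
                time.sleep(0.5)
                inbox.put('message %d' % i)

        threading.Thread(target=producer, daemon=True).start()

        handled = 0
        while handled < 3:
            try:
                msg = inbox.get_nowait()   # non-blocking receive
                print('handled', msg)
                handled += 1
            except queue.Empty:
                pass                       # no message waiting: keep going
            time.sleep(0.05)               # stand-in for a slice of real work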

    Read the article

  • Is AMQP suitable as both an intra- and inter-machine software bus?

    - by Bwooce
    I'm trying to get my head around AMQP. It looks great for inter-machine (cluster, LAN, WAN) communication between applications, but I'm not sure whether it is suitable (in architectural and current-implementation terms) for use as a software bus within one machine. Would it be worth pulling out a current high-performance message-passing framework and replacing it with AMQP, or is this falling into the same trap as RPC by blurring the distinction between local and non-local communication? I'm also wary of the performance impact of using a WAN technology for intra-machine communication, although this may be more of an implementation concern than an architectural one. War stories would be appreciated.

    Read the article

  • How can I set an invitee in Google Calendar through Python?

    - by Dhaval dave
    I am creating Google Calendar events from Python with code like this:

        def _InsertQuickAddEvent(self, content="Tennis with dddddd on 5/19/2010 4am-5:30am"):
            """Creates an event with the quick_add property set to true, so the
            content is processed as quick-add content instead of as an event
            description."""
            event = gdata.calendar.CalendarEventEntry()
            who = whois("[email protected]")   # 'whois' is undefined here; my attempt at an attendee lookup
            event.content = atom.Content(text=content)
            event.quick_add = gdata.calendar.QuickAdd(value='true')
            new_event = self.cal_client.InsertEvent(event, '/calendar/feeds/default/private/full')
            return new_event

    The code is from the Google API samples. Can anyone suggest how to add an invitee to it? A relevant link: http://code.google.com/apis/calendar/data/1.0/developers_guide_python.html
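    From memory of the gdata 1.x client, attendees go into the event's who list before calling InsertEvent; treat the class and attribute names below as assumptions to verify against the linked developer's guide:

        # Assumed gdata 1.x API: verify that gdata.calendar.Who exists with an
        # 'email' attribute before relying on this; it is reconstructed from
        # memory of the developer's guide, not checked against the library.
        attendee = gdata.calendar.Who()
        attendee.email = 'invitee@example.com'   # hypothetical invitee address
        event.who.append(attendee)               # 'event' from the method above
        new_event = self.cal_client.InsertEvent(
            event, '/calendar/feeds/default/private/full')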

    Read the article

  • Decentralized synchronized secure data storage

    - by Alberich
    Introduction: Hi, I am going to ask a question which seems utopian to me, but I need to know if there is a way to achieve what I need, and if not, why not.

    The idea: Suppose I have a database structure in MySQL. I want to create a solution that allows anyone (no matter who, no matter where) to have a synchronized copy (an updated clone) of this database, with its content. And it is not going to be just one synchronized copy; it could (and should) be multiple replicas (say, ten copies all over the world). Most importantly: it must be secure. By secure I mean that only real, accepted transactions will be synchronized across all the database copies/clones, no matter how many there are. Note: since it would be quite difficult to synchronize in real time, I will design everything so that this feature is dispensable; it is not required.

    My auto-suggestion: this is how I am thinking of managing it. Time identifiers and update checking: every action (insert, update, delete...) will be stored as the action instruction itself, associated with a time identifier. (Rather than a DATETIME field, I think it will be an INT holding the number of milliseconds elapsed since 1 January 2013, for example.) Each copy will then ask a "neighbour copy" for the actions performed since its last update, and execute them after checking that they are allowed.

    Problem 1: the "neighbour copy" could be outdated too. Solution 1: do not ask just one neighbour; build a random list of some of the copies/clones and ask them for news. (I could skip the list and ask ALL the clones for updates, but this becomes inefficient if the number of clones grows too large.)

    Problem 2: real-time global synchronization is not active. What if someone at CLONE_ENTERPRISING inserts a row into TABLE, the row propagates to every clone, someone at CLONE_FIXEMALL deletes the row, and at the same time, on an outdated clone, someone at CLONE_DROPOUT edits that row (which no longer exists on the other clones)? Solution 2: easy enough, force a GLOBAL synchronization before any new action that depends on third-party data (an edit, for example). This global synchronization would be unnecessary when making an INSERT, for instance. Note: someone could have some fun and make the same insert on two clones; since they're not updated in real time, the row would exist twice. But it's the same as with a single database: in the cases where it matters, we check whether an identical row already exists before performing the final action. Not a problem.

    Problem 3: it is possible to edit the code and skip the action filtering, so someone could spread instructions to delete everything, or just engage in some trolling. This is not a problem, since good clones will always exist somewhere; the ones that go bad simply stop being of interest.

    I really appreciate you reading this. I know this is not a perfect solution; it probably has a hundred holes, but it is my starting point. I will appreciate anything you can teach me. Thanks a lot. PS: it may be that what I'm describing already exists and has its own name; sorry for asking in that case (I'd appreciate that name anyway, if it exists).
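    For what it's worth, here is a toy sketch of the action log described above (illustration only: no networking, no permission checks, and plain epoch milliseconds instead of the proposed 2013 offset). Each clone keeps (timestamp, statement) pairs and pulls anything newer than its high-water mark from a neighbour:

        # Toy sketch of the proposed action-log sync: no networking, no
        # permission checks, and plain epoch milliseconds as the time id.
        import time

        class Clone:
            def __init__(self):
                self.log = []        # (ts_ms, sql_statement) pairs
                self.last_seen = 0   # high-water mark of applied actions

            def record(self, sql):
                ts = int(time.time() * 1000)
                self.log.append((ts, sql))
                self.last_seen = max(self.last_seen, ts)

            def actions_since(self, ts):
                return [a for a in self.log if a[0] > ts]

            def pull_from(self, neighbour):
                # a real clone would validate each action before applying it
                for ts, sql in sorted(neighbour.actions_since(self.last_seen)):
                    self.log.append((ts, sql))
                    self.last_seen = ts

        a, b = Clone(), Clone()
        a.record("INSERT INTO t VALUES (1)")
        b.pull_from(a)   # b now carries a's insert in its log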

    Read the article

  • How to design a high-level application protocol for metadata syncing between devices and server?

    - by Jaanus
    I am looking for guidance on how best to think about designing a high-level application protocol to sync metadata between end-user devices and a server.

    My goal: the user can interact with the application data on any device, or on the web. The purpose of this protocol is to communicate changes made on one endpoint to other endpoints through the server, and to ensure all devices maintain a consistent picture of the application data. If the user makes changes on one device or on the web, the protocol pushes the data to the central repository, from where other devices can pull it.

    Some other design thoughts:

    - I call it "metadata syncing" because the payloads will be quite small, in the form of object IDs and small metadata about those IDs. When client endpoints retrieve new metadata over this protocol, they fetch the actual object data from an external source based on that metadata. Fetching the "real" object data is out of scope; I'm only talking about metadata syncing here.
    - Using HTTP for transport and JSON for the payload container. The question is basically about how best to design the JSON payload schema.
    - I want this to be easy to implement and maintain on the web and across desktop and mobile devices. The best approach seems to be simple timer- or event-based HTTP request/response without any persistent channels. Also, you should not need a PhD to read it; I want my spec to fit on 2 pages, not 200.
    - Authentication and security are out of scope for this question: assume that the requests are secure and authenticated.
    - The goal is eventual consistency of the data on devices; it is not entirely realtime. For example, the user can make changes on one device while offline. When going online again, the user performs a "sync" operation to push local changes and retrieve remote changes.

    Having said that, the protocol should support both of these modes of operation:

    - Starting from scratch on a device, it should be able to pull the whole metadata picture and "sync as you go".
    - When looking at the data on two devices side by side and making changes, it should be easy to push those changes as short individual messages which the other device can receive near-realtime (subject to when it decides to contact the server for a sync).

    As a concrete example, you can think of Dropbox (it is not what I'm working on, but it helps to understand the model): on a range of devices, the user can manage files and folders: move them around, create new ones, remove old ones, and so on. In my context the "metadata" would be the file and folder structure, but not the actual file contents. Metadata fields would be something like the file/folder name and time of modification (all devices should see the same time of modification). Another example is IMAP. I have not read the protocol, but my goals (minus actual message bodies) are the same.

    It feels like there are two grand approaches to how this is done (see the sketch after this list for a concrete delta-style payload):

    - Transactional messages: each change in the system is expressed as a delta, and endpoints communicate with those deltas. Example: DVCS changesets.
    - REST: communicating the object graph as a whole or in part, without worrying so much about the individual atomic changes.

    What I would like in the answers:

    - Is there anything important I left out above? Constraints, goals?
    - What is some good background reading on this? (I realize this is what many computer science courses talk about at great length and in detail; I am hoping to short-circuit that by looking at a crash course or nuggets.)
    - What are some good examples of such protocols that I could model after, or even use out of the box? (I mention Dropbox and IMAP above; I should probably read the IMAP RFC.)
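    To anchor the delta-style option, here is one shape such a sync exchange could take (the schema is entirely made up for illustration): the client sends its last-seen cursor plus local changes, and receives remote changes plus a new cursor:

        # Hypothetical delta-style sync payloads, expressed as Python dicts.
        import json

        request_body = {
            "cursor": "2012-06-01T10:00:00Z",   # last server state this device saw
            "changes": [                        # local edits to push
                {"op": "update", "id": "obj42",
                 "fields": {"name": "report.txt",
                            "modified": "2012-06-01T10:03:12Z"}},
                {"op": "delete", "id": "obj17"},
            ],
        }

        response_body = {
            "cursor": "2012-06-01T10:05:00Z",   # new high-water mark to store
            "changes": [                        # remote edits to apply locally
                {"op": "create", "id": "obj99",
                 "fields": {"name": "notes",
                            "modified": "2012-06-01T10:04:58Z"}},
            ],
        }

        print(json.dumps(request_body, indent=2))   # what goes over HTTP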

    Read the article

  • How can I get a Rails server to use the same database that cucumber uses during a test?

    - by James
    The cucumber test first makes an entry in the database and posts a form to a second server. This second server does some processing in the background and then hits the first app (where the test is being run) with some data that the cucumber test needs to know about. I've tried running the main server via script/server and script/server -e test while the cucumber test is running, but I can't seem to force the server to use the same database that cucumber uses when it runs its step definitions. That is, when the second server pushes some data to a controller in the main server, the main server doesn't know about any entries that cucumber has made in the database. How can I get cucumber and the main server to use the same database?

    Read the article

  • What's the difference between Paxos and W+R>=N in Cassandra?

    - by user1128016
    Dynamo-like databases (e.g. Cassandra) provide the ability to enforce consistency by means of quorums: the number of synchronously written replicas (W) and the number of replicas to read (R) should be chosen in such a way that W+R > N, where N is the replication factor. On the other hand, Paxos-based systems like ZooKeeper are also used as consistent, fault-tolerant storage. What is the difference between these two approaches? Does Paxos provide guarantees that are not provided by the W+R > N scheme?
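    A tiny sanity check of the quorum arithmetic: when W + R > N, every read quorum overlaps every write quorum, so at least one replica in any read has seen the latest write:

        # Quorum overlap check: any W-subset and R-subset of N replicas must
        # intersect when W + R > N (pigeonhole principle).
        from itertools import combinations

        N, W, R = 3, 2, 2
        assert W + R > N
        for write_set in combinations(range(N), W):
            for read_set in combinations(range(N), R):
                assert set(write_set) & set(read_set)   # never empty
        print("every read quorum sees at least one up-to-date replica")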

    Read the article

  • Connecting to an RMI object without registry

    - by Mark Probst
    I think I need to connect to a remote RMI object without going through the registry, but I don't know how.

    My situation is this: I'm implementing a simple job distribution service which consists of one distributor and multiple workers. The distributor has a registered RMI object to which clients connect to send jobs, and workers connect to accept jobs. Unfortunately the distributor and worker hosts are behind a firewall. To get to the distributor host I am tunneling two ports (one for the registry, one for the distributor object) via SSH, so I can get to the registry and the distributor from outside the firewall. To make that work I have to set "-Djava.rmi.server.hostname=localhost" on the distributor JVM, so that the clients connect to their local, tunneled port instead of the port on the actual distributor host, which is blocked.

    This creates a problem for the workers, though: they need to connect to the distributor directly, but because of the "localhost" redirection they behave like clients and try to connect to a port on their own host, which is not available, because I'm not tunneling on the workers (that would be impractical).

    Now, if I could connect to a remote object directly by giving the hostname and port, I could do away with both the registry on the distributor and the "localhost" hack, and make the workers connect properly. How do I do that? Or is there a different solution to this problem?

    Read the article

  • What should I do to accommodate large-scale data storage and retrieval?

    - by kailashbuki
    There are two columns in a table inside a MySQL database. The first column contains a fingerprint, while the second contains the list of documents that have that fingerprint. It's much like an inverted index built by search engines. An instance of a record inside the table is shown below:

        34 "doc1, doc2, doc45"

    The number of fingerprints is very large (it can range up to trillions). There are basically two operations on the database: inserting/updating a record, and retrieving a record according to a match on the fingerprint. The table-definition Python snippet is:

        self.cursor.execute("CREATE TABLE IF NOT EXISTS `fingerprint` (fp BIGINT, documents TEXT)")

    And the snippet for the insert/update operation is:

        # try to append to an existing row; insert a new one if nothing matched
        if self.cursor.execute("UPDATE `fingerprint` SET documents=CONCAT(documents,%s) WHERE fp=%s",
                               ("," + newDocId, thisFP)) == 0:
            self.cursor.execute("INSERT INTO `fingerprint` VALUES (%s, %s)", (thisFP, newDocId))

    The only bottleneck I have observed so far is the query time in MySQL. My whole application is web based, so time is a critical factor. I have also thought of using Cassandra, but I have little knowledge of it. Please suggest a better way to tackle this problem.
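    One direction worth measuring, on the assumption that the missing index on fp is the bottleneck: declare fp as the primary key and collapse the two round trips into a single MySQL upsert (continuing the snippet above):

        # Sketch (MySQL syntax): indexed lookups plus a single-statement upsert.
        self.cursor.execute(
            "CREATE TABLE IF NOT EXISTS `fingerprint` "
            "(fp BIGINT PRIMARY KEY, documents TEXT)")  # PRIMARY KEY = index on fp

        self.cursor.execute(
            "INSERT INTO `fingerprint` VALUES (%s, %s) "
            "ON DUPLICATE KEY UPDATE documents = CONCAT(documents, %s)",
            (thisFP, newDocId, "," + newDocId))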

    Read the article

  • wanting a good memory + disk caching solution

    - by brofield
    I'm currently storing generated HTML pages in a memcached in-memory cache. This works great; however, I want to increase the storage capacity of the cache beyond available memory. What I would really like is:

    - memcached semantics (i.e. not reliable, just a cache)
    - the memcached API preferred (but not required)
    - a large in-memory first-level cache (MRU)
    - a huge on-disk second-level cache (main)
    - eviction from the on-disk cache at maximum storage using LRU or LFU
    - a proven implementation

    In searching for a solution I've found the following candidates, but they all miss my marks in some way. Does anyone know of either other options that I haven't considered, or a way to make memcachedb do evictions? Already considered are:

    - memcachedb: the best fit, but doesn't do evictions; it is explicitly "not a cache", and I can't see any way to do evictions (either manual or automatic)
    - tugela cache: abandoned, no support; I don't want to recommend it to customers
    - nmdb: doesn't use the memcache API; new and unproven; I don't want to recommend it to customers

    Read the article

  • Application in which I need auto-update, just like the Gmail inbox, calendar, Facebook, etc.

    - by Dhaval dave
    I have made an application that includes a calendar. If the admin changes his calendar, and the change affects a user who is currently looking at that calendar, then whatever change the admin made should be reflected to the user without refreshing the page, just like when an email arrives in Gmail and, without refreshing, we see the inbox marked as unread. What should I do to implement that? I am using jQuery for the user interface and Python as the back-end.
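    One minimal shape for this, with Flask assumed purely for illustration: expose a cheap version endpoint that the jQuery page polls every few seconds, refetching the calendar only when the version changes. Long polling or Comet refines the same idea without changing the contract:

        # Minimal polling sketch (Flask assumed): the page asks for the
        # calendar's version and refetches the data only when it changes.
        from flask import Flask, jsonify

        app = Flask(__name__)
        calendar_version = 0          # bump whenever the admin saves a change

        @app.route('/calendar/version')
        def version():
            return jsonify(version=calendar_version)

    On the client, a setInterval handler can call $.getJSON('/calendar/version', ...), compare the value with the one it last saw, and reload the calendar events when it moves.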

    Read the article
