Search Results

Search found 9156 results on 367 pages for 'cloud storage'.

Page 338/367

  • Document management, SCM?

    - by tsunade
    Hello. This might not be a hard-core programming question, but I suspect it's related to some of the tools programmers use. We're a bunch of people, each with a bunch of documents, on a bunch of different computers running two operating systems (Linux and Windows). The best way to store and manage these documents is for them to be available offline (a laptop might not always be online) but also synchronized between all the machines. Having a server with extra-reliable storage act as a "base repository" seems like a good idea to me.
    Using an SCM comes to mind, and I've tried Subversion. Its centralized repository seems like a good thing, but: when checking out, the total size of the checkout is roughly double the original size, and big files or big repositories seem to slow it down. I've also tried rsync, which might work, but it's a bit rough when it comes to potential conflicts. Finally I've tried Unison (which is a wrapper around rsync, I think), and while it works, it becomes horribly slow for the big directories we have here since it has to scan everything.
    So the question is: is there an SCM tool out there that is actually practical to use for a big mix of both small and large files? If that's a no, does anyone know other tools that do this job? Thanks for reading :)

    Read the article

  • What exactly is a reentrant function?

    - by eSKay
    Most of the time, the definition of reentrance is quoted from Wikipedia: a computer program or routine is described as reentrant if it can be safely called again before its previous invocation has been completed (i.e. it can be safely executed concurrently). To be reentrant, a computer program or routine:
    - must hold no static (or global) non-constant data;
    - must not return the address of static (or global) non-constant data;
    - must work only on the data provided to it by the caller;
    - must not rely on locks to singleton resources;
    - must not modify its own code (unless executing in its own unique thread storage);
    - must not call non-reentrant computer programs or routines.
    How is "safely" defined? If a program can be safely executed concurrently, does that always mean it is reentrant? What exactly is the common thread between the six points above that I should keep in mind while checking my code for reentrancy? Also: Are all recursive functions reentrant? Are all thread-safe functions reentrant? Are all recursive and thread-safe functions reentrant? While writing this question, one thing comes to mind: are terms like reentrance and thread safety absolute at all, i.e. do they have fixed, concrete definitions? If they don't, this question is not very meaningful. Thanks!
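
    As an illustration only (a minimal sketch, not taken from the question; the function names are invented), here is a non-reentrant counter next to a reentrant rewrite:

      // Non-reentrant: relies on static, non-constant data. If the function is
      // interrupted (or entered from another thread) between the read and the
      // write, the second invocation observes and corrupts shared state.
      int next_id_non_reentrant(void) {
          static int counter = 0;   // shared, mutable, hidden state
          counter = counter + 1;
          return counter;
      }

      // Reentrant: works only on data handed in by the caller, keeps no static
      // state, and returns nothing that points at shared storage. Two callers
      // with two separate counters can never interfere with each other.
      int next_id_reentrant(int *counter) {
          *counter = *counter + 1;
          return *counter;
      }

    Note that the reentrant version is only thread-safe when the threads use separate counters; handing one counter to two threads would still need synchronization, which is one reason reentrancy and thread safety overlap but are not the same thing.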

    Read the article

  • How to upgrade Compact Framework applications?

    - by Derick Bailey
    I'm looking for a way to manage application upgrades for my Compact Framework app. Let's say I have v1 of the app installed on my device and v1.1 has been released. I want the app to call my server to see if there is a new version and, when one is found, download the new version to the device and install it, replacing the old version.
    My first thought was to have the app download the .cab file and launch it just before exiting the app. That would mostly get the job done, but it would prompt the user to pick the installation location if they have a storage card or other partitions on their device. I would like to avoid any user input and just have the new version installed over the old one. I'm certain others are doing this already and I don't want to reinvent the wheel here. What application management tools and systems exist for this type of process? How can I facilitate it?

    Read the article

  • How to catch YouTube embed code and turn into URL

    - by Jonathan Vanasco
    I need to strip YouTube embed codes down to their URL only. This is the exact opposite of all but one question on Stack Overflow: most people want to turn the URL into an embed code. That question addresses the usage pattern I want, but is tied to one specific embed code's regex (Strip YouTube Embed Code Down to URL Only). I'm not familiar with how YouTube has offered embeds over the years, or how the sizes differ. According to their current site, there are 2 possible embed templates and a variety of options. If that's it, I can handle a regex myself, but I was hoping someone had more knowledge they could share, so I could write a proper regex pattern that matches them all and not run into endless edge cases.
    The full use-case scenario:
    - A user enters content in a web-based WYSIWYG editor.
    - The backend cleans out YouTube and other embed codes, and reformats approved embeds into an internal format as the text is converted to Markdown.
    - On display, the appropriate current template/code for YouTube or another third-party site is generated.
    At a previous company, our tech team devised a plan where YouTube videos were embedded by listing the URL only. That worked great, but it was in a CMS where everyone was trained. I'm trying to create similar storage, but for user-generated content.

    Read the article

  • Database abstraction/adapters for ruby

    - by Stiivi
    What database abstractions/adapters are you using in Ruby? I am mainly interested in data-oriented features, not in object mapping (like ActiveRecord or DataMapper). I am currently using Sequel. Are there any other options? I am mostly interested in:
    - a simple, clean and non-ambiguous API
    - data selection (obviously), filtering and aggregation
    - raw value selection without field mapping: SELECT col1, col2, col3 => [val1, val2, val3], not a hash of { :col1 => val1, ... }
    - an API that takes table schemas ('some_schema.some_table') into account in a consistent (and working) way, plus reflection for this (get the schema from a table)
    - database reflection: get the list of table columns, their database storage types and perhaps the adapter's abstracted types
    - table creation and deletion
    - being able to work with other tables (insert, update) in a loop enumerating a selection from another table, without having to fetch all records from the table being enumerated
    The purpose is to manipulate data whose structure is unknown at the time the code is written, which is the opposite of object mapping, where the structure, or most of it, is usually well known. I do not need the object-mapping overhead. What are the options, including back-ends for object-mapping libraries?

    Read the article

  • Observer Design Pattern - multiple event types

    - by David
    I'm currently implementing the Observer design pattern and using it to handle adding items to the session, creating error logs and writing messages out to the user to give feedback on their actions (e.g. "You've just logged out!"). I began with a single method on the subject called addEvent(), but as I added more observers I found that the parameters needed to carry all the information each listener required began to grow. I now have three methods called addMessage(), addStorage() and addLog(). These add data into an events array keyed by event type (e.g. log, message, storage), but I'm starting to feel that the subject now needs to know too much about the listeners that are attached.
    My alternative thought is to go back to addEvent() and pass an event type (e.g. USER_LOGOUT) along with the associated data, with each observer maintaining its own list of event types it is looking for (possibly in a switch statement), but this feels cumbersome. Also, I'd need to check that sufficient data had been passed along with the event type. What is the correct way of doing this? Please let me know if I can explain any part of this further. I hope you can help and see the problem I'm battling with.
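
    For illustration only, a minimal C++ sketch of the typed-event alternative described above (the enum values and class names are invented, not taken from the question): the subject stays generic while each observer filters for the events it cares about.

      #include <string>
      #include <vector>
      #include <iostream>

      enum class EventType { UserLogout, Error, Storage };

      struct Event {
          EventType type;
          std::string payload;          // whatever data the event carries
      };

      struct Observer {
          virtual ~Observer() = default;
          virtual void notify(const Event& e) = 0;
      };

      // The subject only knows "observers receive events"; it has no idea which
      // observer handles messages, logging or storage.
      struct Subject {
          std::vector<Observer*> observers;
          void addEvent(const Event& e) {
              for (Observer* o : observers) o->notify(e);
          }
      };

      // Each observer filters for the event types it is interested in.
      struct MessageObserver : Observer {
          void notify(const Event& e) override {
              if (e.type == EventType::UserLogout)
                  std::cout << "Message to user: " << e.payload << "\n";
          }
      };

      int main() {
          Subject subject;
          MessageObserver messages;
          subject.observers.push_back(&messages);
          subject.addEvent({EventType::UserLogout, "You've just logged out!"});
      }

    Checking that an event carries enough data can then live in a small constructor or factory per event type rather than in the subject.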

    Read the article

  • Best way to create a SPARQL endpoint for an RDBMS (MySQL database)

    - by Ankur
    I am doing (or want to do) some experiments with Linked Open Datasets, particularly those put out by governments. I have an RDBMS (more specifically MySQL). I designed it with semantic web ideas in mind, i.e. I have information stored as objects, predicates and classes that define objects. In turn, all objects are related to each other through statements of the form subject -- predicate -- object (where the subjects come from the objects table). I want to be able to query other RDF triple stores from my application and let other triple stores query my data. Is it possible to "set something up" so that this works?
    I have looked at Jena. Using Jena seems to mean I have to use it as the storage layer rather than MySQL; the only problem with this is that I include a new concept called a category (which I don't think is part of the semantic web languages). I will use categories to help with displaying information (they don't have any other meaning), but using Jena seems to mean that I can't organise predicates under categories for more convenient viewing. I am using Java, so a Java API is preferred. It's also possible I misunderstood the purpose of Jena, and maybe it can be of use, but I am not sure how. I am sure that four days from now this question will seem rather silly, but at the moment I am somewhat confused about how to proceed.

    Read the article

  • Windows 7 versus Windows XP multithreading - Delphi app not acting right

    - by Robert Oschler
    I'm having a problem with a Delphi Pro 6 application that I wrote on my Windows XP machine when it runs on Windows 7. I don't have Windows 7 to test with yet, and I'm trying to work out whether Windows 7 might be the source of the trouble. Is there a fundamental difference between the way Windows 7 handles threads compared to Windows XP? I am seeing things happen out of sequence in my error logs on Windows 7 and it's causing problems. For example, objects that should have been initialized are uninitialized when running on Windows 7, yet those objects are initialized on Windows XP by the time they are needed. Some questions:
    1) Are there any core differences that could cause threads/processes to behave differently between the two operating system versions?
    2) I know this next question may seem absurd, but does Windows 7 attempt to split/fork threads that aren't split/forked on Windows XP?
    3) Are there any known issues with FPU handling that can cause XP programs trouble when run on Windows 7, due to operational differences in wait-state handling or register storage, or perhaps something like exception mask settings?
    4) Any 32-bit versus 64-bit issues that could be creating trouble here?
    -- roschler

    Read the article

  • Does .Net use Device Dependent or Device Independent Bitmaps?

    - by Brian
    When loading an image into memory, does .NET use DDBs, DIBs, or something else entirely? If possible, please cite your sources. I'm wondering because we currently have a classic ASP application that uses a third-party component to load images, and it occasionally produces a "Not enough storage is available to process this command." error. The error is very inconsistent but tends to happen on larger images (not always, but often). After resetting IIS, processing the same file again typically works just fine. After much research I have found that DDBs tend to have this problem when processing large images because they work out of video memory. Considering that we are running on a web server with an integrated video card and limited shared memory, this could certainly be our problem. We are in the early stages of converting our app to .NET and I am wondering whether using .NET for this might be a viable alternative to our current method, which is why I am asking. Any advice is welcome :) but, out of curiosity if nothing else, I am really hoping for an answer to the question: does .NET use DDBs or DIBs?

    Read the article

  • Negative number representation across multiple architectures

    - by Donotalo
    I'm working with the OKI 431 microcontroller. It can communicate with a PC that has the appropriate software installed. An EEPROM is connected to the I2C bus of the micro and works as permanent memory; the PC software can read from and write to this EEPROM. Consider two numbers, B and C, each a two-byte integer. B is a constant known to both the PC software and the micro. C will be a number so close to B that B-C fits in a signed 8-bit integer. After some testing, an appropriate value for C will be determined by the PC and stored in the micro's EEPROM for later use. Now the micro can store C in two ways:
    - store the whole two bytes representing C, or
    - store B-C as a one-byte signed integer and later derive C from B and B-C.
    I think that two's complement representation of negative numbers is now universally accepted by hardware manufacturers. Still, I personally don't like negative numbers being stored in a storage medium that will be accessed by two different architectures, because negative numbers can be represented in different ways. For your information, the 431 also uses two's complement. Should I stop worrying that negative numbers can be represented in different ways and accept the one-byte solution, as my other team member suggested? Or should I stick with the two-byte solution because then I don't need to deal with negative numbers at all? Which one would you prefer, and why?
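
    For what it's worth, a minimal sketch of the one-byte option (function names invented for the example; it assumes both sides use int8_t, which, where available, is required to be an 8-bit two's-complement type with no padding):

      #include <stdint.h>

      // PC side: encode the difference as one signed byte before writing it
      // to the EEPROM. Only valid because |B - C| is known to fit in 8 bits.
      int8_t encode_delta(uint16_t b, uint16_t c) {
          return (int8_t)(b - c);
      }

      // Micro side: read the byte back and reconstruct C from the shared
      // constant B. Sign extension happens when the int8_t is widened.
      uint16_t decode_c(uint16_t b, int8_t delta) {
          return (uint16_t)(b - delta);
      }

    Because int8_t pins down the representation, the stored byte means the same thing on the PC and on the 431, which is the usual argument for not worrying about how negatives are represented here.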

    Read the article

  • MS SQL - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that CONTAINS and FREETEXT search for words (and, at least in the case of CONTAINS, word prefixes). However, based on my understanding of this MSDN book, neither of these nor their variants is capable of searching substrings. I have used LIKE rather extensively (SELECT * FROM A WHERE A.B LIKE '%substr%').
    Sample table A:
      ID | Col1     | Col2     | Col3     |
      -------------------------------------
      1  | oklahoma | colorado | Utah     |
      2  | arkansas | colorado | oklahoma |
      3  | florida  | michigan | florida  |
      -------------------------------------
    The following query will give us rows 1 and 2:
      select * from A
      where Col1 like '%klah%' or Col2 like '%klah%' or Col3 like '%klah%'
    This is rather ugly, probably slow, and I just don't like it very much, probably because the implementations I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but performance-wise we're still in the same ballpark:
      select * from A
      where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%'
    I have thought about simply adding insert, update and delete triggers that put the concatenated version of the above columns into a separate table that shadows this one.
    Sample Shadow_Table:
      ID | searchtext                 |
      ----------------------------------
      1  | oklahoma colorado Utah     |
      2  | arkansas colorado oklahoma |
      3  | florida michigan florida   |
      ----------------------------------
    This would let us search for '%klah%' with:
      select * from Shadow_Table where searchtext like '%klah%'
    I really don't like having to remember that this shadow table exists and that I'm supposed to use it when performing multi-column substring matching, but it probably yields pretty quick reads at the expense of writes and storage space. My gut feeling tells me there is an existing solution built into SQL Server 2008; however, I can't seem to find anything other than research papers on the subject. Any help would be appreciated.

    Read the article

  • C++ Matrix class hierarchy

    - by bpw1621
    Should a matrix software library have a root class (e.g. MatrixBase) from which more specialized (or more constrained) matrix classes (e.g. SparseMatrix, UpperTriangularMatrix, etc.) derive? If so, should the derived classes derive publicly, protectedly or privately? If not, should they be composed with an implementation class encapsulating common functionality and be otherwise unrelated? Something else?
    I was having a conversation about this with a software-developer colleague (I am not one per se) who mentioned that it is a common design mistake to derive a more restricted class from a more general one (he used the example of how it is not a good idea to derive a Circle class from an Ellipse class, which is similar to the matrix design issue), even when it is true that a SparseMatrix "IS A" MatrixBase. The interface presented by both the base and derived classes should be the same for basic operations; for specialized operations, a derived class would have additional functionality that might not be possible to implement for an arbitrary MatrixBase object. For example, we can compute the Cholesky decomposition only for a PositiveDefiniteMatrix object; however, multiplication by a scalar should work the same way for both the base and derived classes. Also, even if the underlying data storage differs, operator()(int,int) should work as expected for any type of matrix class.
    I have started looking at a few open-source matrix libraries and it appears this is kind of a mixed bag (or maybe I'm looking at a mixed bag of libraries). I am planning to help with a refactoring of a math library where this has been a point of contention, and I'd like opinions (unless there really is an objectively right answer to this question) on what design philosophy would be best and the pros and cons of any reasonable approach.
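
    Purely as an illustration of the inheritance option discussed above, a minimal C++ sketch with invented names (MatrixBase, DiagonalMatrix, scaled_trace), not anyone's actual library:

      #include <cstddef>
      #include <iostream>
      #include <utility>
      #include <vector>

      // Common read-only interface: every matrix, whatever its storage,
      // reports its shape and can be indexed element by element.
      class MatrixBase {
      public:
          virtual ~MatrixBase() = default;
          virtual std::size_t rows() const = 0;
          virtual std::size_t cols() const = 0;
          virtual double operator()(std::size_t i, std::size_t j) const = 0;
      };

      // A generic operation written once against the interface; it works the
      // same way for any concrete matrix type, dense, sparse or otherwise.
      double scaled_trace(const MatrixBase& m, double factor) {
          double sum = 0.0;
          for (std::size_t i = 0; i < m.rows() && i < m.cols(); ++i)
              sum += factor * m(i, i);
          return sum;
      }

      // A constrained subtype with its own storage scheme; specialized
      // operations live only here, so the base interface never has to
      // promise things it cannot deliver for an arbitrary matrix.
      class DiagonalMatrix : public MatrixBase {
          std::vector<double> diag_;
      public:
          explicit DiagonalMatrix(std::vector<double> d) : diag_(std::move(d)) {}
          std::size_t rows() const override { return diag_.size(); }
          std::size_t cols() const override { return diag_.size(); }
          double operator()(std::size_t i, std::size_t j) const override {
              return i == j ? diag_[i] : 0.0;
          }
      };

      int main() {
          DiagonalMatrix d({1.0, 2.0, 3.0});
          std::cout << scaled_trace(d, 2.0) << "\n";   // prints 12
      }

    The composition alternative would keep scaled_trace as-is but take a storage object instead of deriving; which route is better largely depends on whether callers ever need to handle different matrix types through one base reference or pointer.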

    Read the article

  • Sharing the same vector control between different places

    - by Alexander K
    Hi everyone, I'm trying to implement the following: I have an Items Manager that has an Item class inside. The Item class can store two possible visual representations of an item: a BitmapImage (bitmap) and a UserControl (vector). Later, in the game, I need to share the same image or vector control between all the places it appears. For example, consider 10 trees on the map, all pointing to the same vector control; in some cases it can be a bitmap image source instead. The problem is that a BitmapImage source can easily be shared by multiple UIElements in the application. However, when I try to share the vector control, it fails and says the child element is already a child element of another control. I want to know how best to organize this, for example by replacing UserControl with another type of control or storage, but I need to be sure it supports Storyboard animations inside. The code looks like this:
      if (bi.item.BitmapSource != null)
      {
          Image previewImage = new Image();
          previewImage.Source = bi.item.BitmapSource;
          itemPane.ItemPreviewCanvas.Children.Add(previewImage);
      }
      else if (bi.item.VectorSource != null)
      {
          UserControl previewControl = bi.item.VectorSource;
          itemPane.ItemPreviewCanvas.Children.Add(previewControl);
      }
    Thanks in advance

    Read the article

  • What is the minimal licensable source code?

    - by Hernán Eche
    Let's suppose I want to "protect" this code against being used without attribution, against being patented, or through some open source licence...
      #include <stdio.h>

      int main(void)
      {
          int version = 2;
          printf("\r\n.Hello world, ver:(%d).", version);
          return 0;
      }
    It's a little obvious, or just a language-definition example. When does a source stop being "trivial, banal, commonplace, obvious" and start being something over which you may claim "rights"? Perhaps it depends on who reads it: something that could look like great ingenuity to someone who has never programmed could be just obvious to an expert. It's easy when, comparing two sources, there are 10,000 identical lines of code; that's theft. But it's not always so obvious. How do you measure the amount of "ownness"? Is it about creativity? Line count? Complexity? I can't imagine objective answers for that, only some patches. Complexity, perhaps: it's not fair to replace "years of engineering" with "copy and paste". But is there any objective index for an objective determination of this subject? (In a funny way I imagine this criterion: if the licence is longer than the code, then there is no owner, just to punish not caring about storage space and world resources =P)

    Read the article

  • jQuery class selectors with nested divs

    - by mboles57
    This is part of some HTML from which I need to retrieve a piece of data. The HTML is assigned to a variable called fullDescription.
      <p>testing</p>
      <div class="field field-type-text field-field-video-short-desc">
        <div class="field-label">Short Description:&nbsp;</div>
        <div class="field-items">
          <div class="field-item odd">
            Demonstrates the basics of using the Content section of App Cloud Studio
          </div>
        </div>
      </div>
      <div class="field field-type-text field-field-video-id">
        <div class="field-label">Video ID:&nbsp;</div>
        <div class="field-items">
          <div class="field-item odd">
            1251462871001
          </div>
        </div>
      </div>
    I wish to retrieve the video ID number (1251462871001). I was thinking something like this:
      var videoID = $(fullDescription).find(".field.field-type-text.field-field-video-id").find(".field-item.odd").html();
    Although it does not generate any syntax errors, it does not retrieve the number. Thanks for helping out a jQuery noob! -Matt

    Read the article

  • Ember Data Sync - LocalStorage+REST+RealTime+Online/Offline

    - by Miguel Madero
    We have a combination of requirements in terms of data access:
    - Pre-load some reference data. We need reference data to survive browser restarts instead of just living in memory, to avoid loading it all the time. I'm currently using the LocalStorageAdapter for that. Once we have it, we would like to sync changes (polling, or using Socket.IO in the background and updating the LocalStorage, could do the trick).
    - There are other models that are more transactional, where we would need to go directly to the server to get/save them. It would be nice to use something like the RESTAdapter for that.
    - Lastly, there are some operations that should work offline, with changes synced later.
    To make it more concrete:
    - We pre-load vendor and "favorite products" data into Local Storage. We work offline with those.
    - We need to sync server changes to vendor and product information.
    - If users search the full catalog, that requires them to be online.
    - When offline, we need to allow users to add something to their cart or even submit an order. We would like to queue this action and submit it when they have an Internet connection.
    A few questions are derived from this:
    - Is there a way to use the RESTAdapter in combination with LocalStorage?
    - Is there some Socket.IO support? (Happy to do this part manually.)
    - Is there queueing support? Ideally at the Ember Data level.
    I know we will have to do a lot of this manually and pull the different Lego pieces together, but I wanted to ask for some perspective from experienced Ember devs.

    Read the article

  • Largest triangle from a set of points

    - by Faken
    I have a set of random points from which I want to find the largest triangle by area whose vertices each lie on one of those points. So far I have figured out that the largest triangle's vertices will only lie on the outer points of the cloud (the convex hull), so I have programmed a function to compute just that (using a Graham scan in n log n time). However, that's where I'm stuck. The only way I can see to find the largest triangle from these points is brute force in n^3 time, which is still acceptable in the average case since the convex hull algorithm usually throws out the vast majority of points. But in a worst-case scenario where the points lie on a circle, this method would fail miserably. Does anyone know an algorithm to do this more efficiently?
    Note: I know that CGAL has this algorithm, but they do not go into any detail on how it's done. I don't want to use libraries; I want to learn this and program it myself (and also be able to tweak it to operate exactly the way I want, just like the Graham scan, where other implementations pick up collinear points that I don't want).
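
    For reference, a minimal sketch of the brute-force step over the hull vertices (names invented; it assumes the hull has already been computed, so the cubic cost is in the hull size h rather than the original n):

      #include <algorithm>
      #include <cmath>
      #include <cstddef>
      #include <vector>

      struct Point { double x, y; };

      // Twice the signed area of triangle (a, b, c), via the cross product.
      double cross2(const Point& a, const Point& b, const Point& c) {
          return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
      }

      // O(h^3) scan over the hull vertices; returns the largest triangle area.
      double largest_triangle_area(const std::vector<Point>& hull) {
          double best = 0.0;
          for (std::size_t i = 0; i < hull.size(); ++i)
              for (std::size_t j = i + 1; j < hull.size(); ++j)
                  for (std::size_t k = j + 1; k < hull.size(); ++k)
                      best = std::max(best, std::fabs(cross2(hull[i], hull[j], hull[k])) / 2.0);
          return best;
      }

    On a hull of h points this is O(h^3), and the worst case raised in the question (all points on a circle) is exactly the case where h stays equal to n.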

    Read the article

  • Design an Application That Stores and Processes Files

    - by phasetwenty
    I'm tasked with writing an application that acts as a central storage point for files (usually document formats) as provided by other applications. It also needs to take commands like "file 395 needs a copy in X format", at which point some work is offloaded to a 3rd-party application. I'm having trouble coming up with a strategy for this. I'd like to keep the design as simple as possible, so I'd like to avoid big extra frameworks or techniques like threads for as long as it makes sense. The clients are expected to be web applications (for example, one is a Django application that receives files from our customers; the others are not yet implemented). The platform it will be running on is likely going to be Python on Linux, unless I have a strong argument to use something else.
    In the beginning I thought I could fit the information I wanted to communicate into the filenames and let my application parse the filename to figure out what it needed to do, but this is proving too inflexible for the amount of information I'm realizing I need to make available. Another idea is to pair FTP with a database used as a communication medium (a client uploads a file and updates the database with a command as a row in a table), but I don't like this idea because adding commands (a known change) looks like it will require adding code as well as changing database schemas. It will also muddy up the interface my clients have to use. I looked into Pyro to let applications communicate more directly, but I don't like the idea of running an extra nameserver for this one purpose, and I also don't see a good way to do file transfer within that framework.
    What I'm looking for is techniques and/or technologies applicable to my problem. At the simplest level, I need the ability to accept files and messages with them.

    Read the article

  • Seeking C Union clarity

    - by mustISignUp
    typedef union {
          float flts[4];
          struct {
              GLfloat r;
              GLfloat theta;
              GLfloat phi;
              GLfloat w;
          };
          struct {
              GLfloat x;
              GLfloat y;
              GLfloat z;
              GLfloat w;
          };
      } FltVector;
    OK, so I think I get how to use this (or, this is how I have seen it used), e.g.:
      FltVector fltVec1 = {{1.0f, 1.0f, 1.0f, 1.0f}};
      float aaa = fltVec1.x;
    etc. But I'm not really grokking how much storage has been declared by the union (4 floats? 8 floats? 12 floats?), how, and why? Also, why two sets of curly braces when initializing a FltVector {{}}? Any pointers much appreciated (sorry for the pun).
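
    For what it's worth, a small sketch that makes the storage question concrete (the GLfloat typedef and the named inner structs sph/cart are assumptions added so the example is self-contained and standard): all members of a union overlap, so the union occupies only as much space as its largest member, here four floats.

      #include <cstdio>

      typedef float GLfloat;   // assumption for the example; OpenGL defines GLfloat as a 32-bit float

      typedef union {
          float flts[4];
          struct { GLfloat r, theta, phi, w; } sph;    // named members here; the original's anonymous
          struct { GLfloat x, y, z, w; } cart;         // structs need C11 or a compiler extension
      } FltVector;

      int main() {
          // One set of braces for the union, one for its first member (the array),
          // which is where the double curly braces in the question come from.
          FltVector v = {{1.0f, 2.0f, 3.0f, 4.0f}};

          // All three members share the same storage, so the union is only as
          // big as its largest member: 4 floats (16 bytes on typical platforms).
          std::printf("sizeof(FltVector) = %zu bytes\n", sizeof(FltVector));
          std::printf("v.flts[2] = %f\n", v.flts[2]);   // the same bytes cart.z / sph.phi would name
          return 0;
      }

    So the answer to the storage question is 4 floats (typically 16 bytes), not 8 or 12.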

    Read the article

  • Firefox extension js object initialization

    - by Michael
    Note: this is about a Firefox extension, not a general JS question. In a Firefox extension project I need my JavaScript object to be initialized just once per Firefox window. Otherwise, each time I open a new window, new timers are started and new properties are used, so everything starts from scratch. I hope the example below demystifies my question :)
      var StupidExtension = {
          statusBarValue: "Not Initialized Yet",
          startup: function () {
              ... // Show statusBarValue in Status Bar Panel
          },
          initTimerToRetrieveStatusBarValueFromNetwork: function () {
              ...
          }
      }
    So each time you hit Ctrl+N to open a new window, you will see "Not Initialized Yet", and then a new timer is fired, so after it retrieves data from the network you will see the value in the second window too, and so on. Ideally there would be just a single timer function running and updating the status bar panels in all Firefox windows. Of course I can do some caching, like saving the value in prefs or some other storage and then showing it from there, but that feels artificial to me. So the question is: is there a "native" technique for making some parts of the object static across all Firefox window instances?

    Read the article

  • MVVM and avoiding Monolithic God object

    - by bufferz
    I am in the completion stage of a large project that has several large components: image acquisition, image processing, data storage, factory I/O (it's an automation project) and several others. Each of these components is reasonably independent, but for the project to run as a whole I need at least one instance of each. Each component also has a ViewModel and View (WPF) for monitoring status and changing things. My question is: what is the safest, most efficient and most maintainable way to instantiate all of these objects, subscribe one class to an event in another, and have a common ViewModel and View for all of this? Would it be best to have a class called God that holds a private instance of each of these objects? I've done this in the past and regretted it. Or would it be better if God relied on singleton instances of these objects to get the ball rolling? Alternatively, should Program.cs (or wherever Main(...) is) instantiate all of these components and pass them to God as parameters, then let Him (snicker) and His ViewModel deal with the particulars of running this project? Any other suggestions I would love to hear. Thank you!

    Read the article

  • Fastest way to perform a subset test operation on a large collection of sets with the same domain

    - by niktech
    Assume we have trillions of sets stored somewhere. The domain for each of these sets is the same, and it is finite and discrete, so each set may be stored as a bit field (e.g. 0000100111...) of relatively short length (e.g. 1024). That is, bit X in the bit field indicates whether item X (of 1024 possible items) is included in the given set or not.
    Now, I want to devise a storage structure and an algorithm to efficiently answer the query: which sets in the data store have set Y as a subset? Set Y itself is not present in the data store and is specified at run time. The simplest way to solve this would be to AND the bit field for set Y with the bit field of every set in the data store, one by one, picking the ones whose AND result matches Y's bit field. How can I speed this up? Is there a tree structure (index) or some smart algorithm that would allow me to perform this query without having to AND every stored set's bit field? Are there databases that already support such operations on large collections of sets?
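
    As a point of reference for the baseline scan described above (a minimal sketch; the 1024-bit width comes from the question, everything else is invented): Y is a subset of S exactly when ANDing them leaves Y unchanged.

      #include <bitset>
      #include <cstddef>
      #include <vector>

      constexpr std::size_t kDomain = 1024;
      using Set = std::bitset<kDomain>;

      // Y is a subset of S iff (Y & S) == Y, i.e. every bit set in Y is also set in S.
      bool is_subset(const Set& y, const Set& s) {
          return (y & s) == y;
      }

      // Baseline linear scan: returns the indices of all stored sets that contain
      // the query set Y. Anything cleverer (signatures, tries over bit prefixes,
      // inverted indexes per bit) has to beat this.
      std::vector<std::size_t> supersets_of(const Set& y, const std::vector<Set>& store) {
          std::vector<std::size_t> hits;
          for (std::size_t i = 0; i < store.size(); ++i)
              if (is_subset(y, store[i])) hits.push_back(i);
          return hits;
      }

    One cheap filter is to store each set's popcount alongside it: a stored set whose popcount is smaller than Y's can never be a superset and can be skipped without touching its bits.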

    Read the article

  • Heroku: Postgres type operator error after migrating DB from MySQL

    - by sevennineteen
    This is a follow-up to a question I asked earlier, which phrased this as more of a programming problem than a database problem: http://stackoverflow.com/questions/2935985/postgres-error-with-sinatra-haml-datamapper-on-heroku
    I believe the problem has been isolated to the storage of the ID column in Heroku's Postgres database after running db:push. In short, my app runs properly on my original MySQL database, but throws Postgres errors on Heroku when executing any query on the ID column, which seems to have been stored in Postgres as TEXT even though it is stored as INT in MySQL. My question is why the ID column is being created as TEXT in Postgres on the data transfer to Heroku, and whether there's any way for me to prevent this. Here's the output from a heroku console session which demonstrates the issue:
      Ruby console for myapp.heroku.com
      >> Post.first.title
      => "Welcome to First!"
      >> Post.first.title.class
      => String
      >> Post.first.id
      => 1
      >> Post.first.id.class
      => Fixnum
      >> Post[1]
      PostgresError: ERROR: operator does not exist: text = integer
      LINE 1: ...", "title", "created_at" FROM "posts" WHERE ("id" = 1) ORDER...
      HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
      Query: SELECT "id", "name", "email", "url", "title", "created_at" FROM "posts" WHERE ("id" = 1) ORDER BY "id" LIMIT 1
    Thanks!

    Read the article

  • Distributed Cache with Serialized File as DataStore in Oracle Coherence

    - by user226295
    Weird, but I am investigating Oracle Coherence as a substitute for a distributed cache. My primary problem is that we don't have a distributed cache as such in our app right now; that's my major concern, and that's what I want to implement. So, let's say I take a machine and start a new (3rd) reading process: it will be able to connect to the cache, listen to it, and hold a full copy of the cache in triplicate (as of now it's duplicated). That's wasteful from anyone's standpoint. The size of the cache is 2 GB, and without going distributed it's limiting us. That brings me to Coherence.
    But we also don't have a database as a persistent store; we have archival processes as our persistent store (90 days' worth of data). Now multiply that out to somewhere around 2 GB * 90 (that's the bare minimum we want to keep). While doing a preliminary/intermediate analysis of Coherence as a solution, a (supposedly) brilliant thought crossed my mind: why not use this as persistent storage for my distributed cache? Does Oracle Coherence support that? I would get rid of the archiving infrastructure too (I hate daemon archiving processes). For some strange reason, I don't want to go to the DB to replace those flat files. What say you, can Coherence be my savior? Any other stable alternatives are welcome too. (Coherence is imposed on me by the big guys, FYI.)

    Read the article

  • How to create a new ID for a newly added node?

    - by marknt15
    Hi, I can normally get the ID of the default tree nodes, but my problem is that on create, jsTree adds a new node that doesn't have an ID. My question is: how can I add an ID to the newly created tree node? What I'm thinking of doing is adding the ID HTML attribute to the newly created tree node, but how? I need the ID of every node because it serves as a reference to the node's respective div storage.
    HTML code:
      <div class="demo" id="demo_1">
        <ul>
          <li id="phtml_1" class="file"><a href="#"><ins>&nbsp;</ins>Root node 1</a></li>
          <li id="phtml_2" class="file"><a href="#"><ins>&nbsp;</ins>Root node 2</a></li>
        </ul>
      </div>
    JS code:
      $("#demo_1").tree({
          ui : { theme_name : "apple" },
          callback : {
              onrename : function (NODE, TREE_OBJ) {
                  alert(TREE_OBJ.get_text(NODE));
                  alert($(NODE).attr('id'));
              }
          }
      });
    Cheers, Mark

    Read the article
