Search Results

Search found 11618 results on 465 pages for 'shared storage'.


  • Image storage as a service

    - by Samuel
    Google App Engine provides an image API for storing and retrieving images. We are currently not in a position to deploy our application on top of App Engine because of limitations in the Java frameworks (JBoss Seam 2.2.0) we are using to build our J2EE application. We would eventually want to deploy our production application on top of Google App Engine, but what are the short-term options (Java-based open source products) that provide comparable functionality to Google App Engine's Image API and will give us an easier migration path at a later point in time?
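
    One way to keep the later migration cheap, sketched below as an assumption rather than a recommendation of any particular product: hide image storage behind a small interface now, and swap in an App Engine-backed implementation later. All names in this sketch are made up.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.UUID;

        // Hypothetical abstraction over an image store; callers never see the backend,
        // so a local-disk implementation can later be replaced by an App Engine-backed one.
        public interface ImageStore {
            String save(byte[] imageBytes) throws IOException;   // returns a generated key
            byte[] load(String key) throws IOException;
        }

        // Simple short-term implementation that writes images to a local directory.
        class DiskImageStore implements ImageStore {
            private final Path root;
            DiskImageStore(Path root) { this.root = root; }

            public String save(byte[] imageBytes) throws IOException {
                String key = UUID.randomUUID().toString();
                Files.write(root.resolve(key), imageBytes);
                return key;
            }

            public byte[] load(String key) throws IOException {
                return Files.readAllBytes(root.resolve(key));
            }
        }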

    Read the article

  • problem downloading movie to iphone for storage and playback

    - by padatronic
    I am basically making a video library where you download videos, and I then write them to the application's Documents folder. This all works fine, and if I stream the video from online it plays fine, or indeed I can stream it from the resource folder fine. However, after downloading it and saving it to the Documents folder, attempting to play it back from there gives the error 'movie format not supported'. Any ideas? Thanks very much.

    Read the article

  • Decentralized synchronized secure data storage

    - by Alberich
    Introduction: Hi, I am going to ask a question which may seem utopian, but I need to know if there is a way to achieve what I need, and if not, why not.

    The idea: Suppose I have a database structure in MySQL. I want to create a solution that allows anyone (no matter who, no matter where) to have a synchronized copy (an updated clone) of this database and its content. It would not be just one synchronized copy; it could (and should) be replicated many times (say, ten copies all over the world). And, most importantly, it must be secure. By secure I mean that only genuinely accepted transactions are synchronized to all the other database copies/clones, however many there are. Note: since real-time synchronization would be quite difficult, I will design everything so that this feature is dispensable; it is not required.

    My own suggestion: time identifiers and update checking. Every action (insert, update, delete, ...) will be stored as the action instruction itself, associated with a time identifier (rather than a DATETIME field, I think an INT holding, for example, the number of milliseconds since 1 January 2013 would work better). Each copy then asks a "neighbour copy" for any actions performed since its last update, and executes them after checking that they are allowed.

    Problem 1: the "neighbour copy" could itself be outdated. Solution 1: do not ask just one neighbour; build a random list of some of the copies/clones and ask them for news. (I could skip the list and ask ALL the clones for updates, but that becomes inefficient as the number of clones grows.)

    Problem 2: real-time global synchronization is not active. What if someone at CLONE_ENTERPRISING inserts a row into TABLE, the row reaches every clone, someone at CLONE_FIXEMALL deletes the row, and at the same time, somewhere on an outdated clone, someone at CLONE_DROPOUT edits the row (which by now does not exist on the other clones)? Solution 2: easy enough, force a global synchronization before any new action that depends on third-party data (an edit, for example). This global synchronization would be unnecessary when making an INSERT, for instance. Note: someone could have some fun and make the same insert on two clones; since they are not updated in real time, that row would exist twice. But it is the same as with a single database: in the cases where it matters, we check whether an identical row already exists before performing the final action. Not a problem.

    Problem 3: it is possible to edit the code and not filter actions, so someone could spread instructions to delete everything, or just troll. This is not a problem either, since well-behaved clones will always exist somewhere; the ones that go bad simply stop being of interest.

    I really appreciate you reading this. I know this is not a perfect solution, and it possibly has a hundred holes, but it is my starting point. I would appreciate anything you can teach me. Thanks a lot. PS: It could be that what I am trying to do already exists and has its own name; sorry for asking in that case (I would still be thankful for the name, if it exists).
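
    A rough illustration, in Java, of the action-log record and "send me everything since T" exchange described above. Every class and method name here is hypothetical, and validation and conflict handling are left as placeholders because those are exactly the open problems the post is asking about.

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical action-log entry: the SQL statement plus the millisecond
        // timestamp the post proposes as a time identifier.
        class LoggedAction {
            final long millisSinceEpochStart;   // e.g. ms since 1 January 2013
            final String sqlStatement;          // the insert/update/delete itself
            LoggedAction(long t, String sql) { millisSinceEpochStart = t; sqlStatement = sql; }
        }

        // Hypothetical clone-side sync step: ask a neighbour for actions newer than
        // our last update, validate each one, and apply the accepted ones locally.
        class CloneSync {
            private long lastUpdateMillis = 0;
            private final List<LoggedAction> localLog = new ArrayList<>();

            void pullFrom(Neighbour neighbour) {
                for (LoggedAction action : neighbour.actionsSince(lastUpdateMillis)) {
                    if (isAllowed(action)) {            // "only real-accepted transactions"
                        apply(action);
                        localLog.add(action);
                    }
                    lastUpdateMillis = Math.max(lastUpdateMillis, action.millisSinceEpochStart);
                }
            }

            private boolean isAllowed(LoggedAction a) { return true; }  // placeholder validation
            private void apply(LoggedAction a) { /* execute a.sqlStatement against the local DB */ }

            interface Neighbour {
                List<LoggedAction> actionsSince(long millis);
            }
        }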

    Read the article

  • Class template specializations with shared functionality

    - by Thomas
    I'm writing a simple maths library with a template vector type:

        template<typename T, size_t N>
        class Vector {
        public:
            Vector<T, N> &operator+=(Vector<T, N> const &other);
            // ... more operators, functions ...
        };

    Now I want some additional functionality specifically for some of these. Let's say I want functions x() and y() on Vector<T, 2> to access particular coordinates. I could create a partial specialization for this:

        template<typename T>
        class Vector<T, 3> {
        public:
            Vector<T, 3> &operator+=(Vector<T, 3> const &other);
            // ... and again all the operators and functions ...
            T x() const;
            T y() const;
        };

    But now I'm repeating everything that already existed in the generic template. I could also use inheritance. Renaming the generic template to VectorBase, I could do this:

        template<typename T, size_t N>
        class Vector : public VectorBase<T, N> { };

        template<typename T>
        class Vector<T, 3> : public VectorBase<T, 3> {
        public:
            T x() const;
            T y() const;
        };

    However, now the problem is that all operators are defined on VectorBase, so they return VectorBase instances. These cannot be assigned to Vector variables:

        Vector<float, 3> v;
        Vector<float, 3> w;
        w = 5 * v; // error: no conversion from VectorBase<float, 3> to Vector<float, 3>

    I could give Vector an implicit conversion constructor to make this possible:

        template<typename T, size_t N>
        class Vector : public VectorBase<T, N> {
        public:
            Vector(VectorBase<T, N> const &other);
        };

    However, now I'm converting from Vector to VectorBase and back again. Even though the types are the same in memory, and the compiler might optimize all of this away, it feels clunky and I don't like having potential run-time overhead for what is essentially a compile-time problem. Is there any other way to solve this?

    Read the article

  • Implementing a logging library in .NET with a database as the storage medium

    - by Dave
    I'm just starting to work on a logging library that everyone can use to keep track of any sort of system information while the user is running our application. The simplest example so far is to track Info, Warnings, and Errors. I want all plugins to be able to use this feature, but since each developer might have a different idea of what's important to report, I want to keep this as generic as possible. In the C++ world, I would normally use something like a std::pair<string, string> to act as a key-value pair structure, and have a std::list of these to act as a "row" in the log. The log cache would then be a list<list<pair<string, string>>> (ugh!). This way, the developers can use a constant string key like INFO, WARNING, or ERROR to get consistent naming for a column in the database (for SELECTing specific types of information). I'd like the database to be able to deal with any number of distinct column names. For example, John might have an INFO row with a column called USER, and Bill might have an INFO row with a column called FILENAME. I want the log viewer to be able to display all information, and if one report doesn't have a value for INFO / FILENAME, those fields should just appear blank. So one option is to use a List<List<KeyValuePair<String, String>>>, and another is to have the log library consumer somehow "register" its schema and then have the database do an ALTER TABLE to handle this situation. Yet another idea is to have a table that's just for key-value pairs, with a foreign key that maps the key-value pairs back to the original log entry. I obviously don't want logging to bog down the system, so I only lock the log cache to make a copy of the data (and remove the already-copied data); a background thread then dumps the information to the database. My specific questions are:

    1. Do you see any performance issues? In other words, have you ever tried something like this and found that certain things just don't work well in practice?
    2. Is there a more .NET-ish way to implement the key-value pairs, other than List<List<KeyValuePair<String, String>>>?
    3. Even if there is a better way to do #2, is the ALTER TABLE idea I proposed above a Bad Thing?
    4. Would you recommend multiple databases over a single one? I don't yet have an idea of how frequently the log would get written to, but ideally we would like to have lots of low-level information. Perhaps there should be a DB with a fixed schema only for the low-level stuff, and then another DB that's more flexible for reporting information back to users.

    Read the article

  • Dynamic loading of shared objects using dlopen()

    - by Andy
    Hi, I'm working on a plain X11 app. By default, my app only requires libX11.so and the standard gcc C and math libs. My app also has support for extensions like Xfixes and Xrender and the ALSA sound system, but this feature shall be made optional: if Xfixes/Xrender/ALSA is installed on the host system, my app will offer extended functionality; if Xfixes or Xrender or ALSA is not there, my app will still run, but some functionality will not be available. To achieve this behaviour, I'm not linking dynamically against -lXfixes, -lXrender and -lasound. Instead, I'm opening these libraries manually using dlopen(). By doing it this way, I can be sure that my app won't fail in case one of these optional components is not present. Now to my question: what library names should I use when calling dlopen()? I've seen that these differ from distro to distro. For example, on openSUSE 11 they're named libXfixes.so, libXrender.so and libasound.so. On Ubuntu, however, the names have a version number attached: libXfixes.so.3, libXrender.so.1 and libasound.so.2. So trying to open "libXfixes.so" would fail on Ubuntu, although the lib is obviously there; it just has a version number attached. So how should my app handle this? Should I let my app scan /usr/lib/ manually first to see which libs are there and then choose an appropriate one? Or does anyone have a better idea? Thanks guys, Andy

    Read the article

  • Database storage for high sample rate data in web app

    - by Jim
    I've got multiple sensors feeding data to my web app. Each channel is 5 samples per second, and the data gets uploaded bundled together in one-minute JSON messages (each containing 300 samples). The data will be graphed using flot at multiple zoom levels, from 1 day down to 1 minute. I'm using Amazon SimpleDB and I'm currently storing the data in the one-minute chunks I receive it in. This works well for high zoom levels, but for full days there will simply be too many rows to retrieve. The idea I've currently got is that every hour I can crawl through the data, collect together 300 samples for the last hour, and store them in an hour domain (table, if you like). Does this sound like a reasonable solution? How have others implemented the same sort of systems?
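
    For what it's worth, a small hypothetical sketch of that hourly roll-up: collapse an hour of raw 5 Hz samples into a fixed number of buckets, keeping min/max/avg per bucket so that day-level plots stay cheap to fetch while spikes remain visible. The names and bucket count are illustrative only.

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical roll-up: collapse one hour of 5 Hz samples (18,000 values)
        // into a fixed number of buckets, keeping min/max/avg per bucket.
        class HourlyRollup {
            static class Bucket {
                double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY, sum = 0;
                int count = 0;
                double avg() { return count == 0 ? 0 : sum / count; }
            }

            static List<Bucket> rollUp(double[] hourOfSamples, int bucketCount) {
                List<Bucket> buckets = new ArrayList<>();
                for (int i = 0; i < bucketCount; i++) buckets.add(new Bucket());
                for (int i = 0; i < hourOfSamples.length; i++) {
                    Bucket b = buckets.get(i * bucketCount / hourOfSamples.length);
                    double v = hourOfSamples[i];
                    b.min = Math.min(b.min, v);
                    b.max = Math.max(b.max, v);
                    b.sum += v;
                    b.count++;
                }
                return buckets;
            }
        }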

    Read the article

  • White Label Ecommerce app. Shared or Individual dbs

    - by MetaDan
    Currently I'm working with an in-house white label CMS that we resell to multiple clients, and it all runs from the same box/db. I'm looking at converting this to an ecommerce version that we'll run alongside it. I'm wondering whether there will be an issue keeping all the products/categories/orders in one db, or whether it would be advisable to separate each instance of the site into its own db. These white label instances will only be sold to smaller companies that probably won't have masses of traffic/products and are looking for a simple ecommerce site. Anything larger will definitely get its own hosting and db. But for the smaller-scale stuff, do you think a single db will be OK?

    Read the article

  • .Net 4.0 Memory-Mapped Files versus RDBMS Storage

    - by Harry
    I'm interested in people's thoughts comparing storing data in a traditional SQL-based database with utilising a memory-mapped file such as the one in the new .NET 4.0 runtime. The data in question would be arrays of simple structures. Obvious pros and cons:

    SQL database pros:
    - Ad hoc query support
    - SQL management tools
    - Schema changes (adding more columns and setting default values)

    Memory-mapped pros:
    - Lighter overhead? (this is an assumption on my part)
    - Shareable between process threads

    Any others? Is it worth it for the performance gains?
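
    Purely as an illustration of the memory-mapped idea (written in Java rather than .NET; the .NET 4.0 MemoryMappedFile and MemoryMappedViewAccessor classes play the role FileChannel.map plays here): fixed-size records addressed by offset directly in a mapped file, with no query layer in between. The file name and record layout below are made up.

        import java.io.IOException;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;

        // Illustration of the memory-mapped approach: an array of fixed-size records
        // (here: int id + double value = 12 bytes) laid out directly in a mapped file.
        public class MappedRecords {
            static final int RECORD_SIZE = Integer.BYTES + Double.BYTES;

            public static void main(String[] args) throws IOException {
                int recordCount = 1000;
                try (FileChannel ch = FileChannel.open(Paths.get("records.bin"),
                        StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
                    MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0,
                                                  (long) recordCount * RECORD_SIZE);
                    // write record 42 directly at its offset
                    buf.position(42 * RECORD_SIZE);
                    buf.putInt(42);
                    buf.putDouble(3.14);
                    // read it back by offset, no query layer involved
                    buf.position(42 * RECORD_SIZE);
                    System.out.println(buf.getInt() + " -> " + buf.getDouble());
                }
            }
        }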

    Read the article

  • Credit Card storage solution

    - by jtnire
    Hi everyone, I'm developing a solution that is designed to store membership details as well as credit card details. I'm trying to comply with PCI DSS as much as I can. Here is my design so far:

    - PAN = primary account number, i.e. the long number on the credit card.
    - Server A is a remote server. It stores all membership details (names, addresses, etc.) and provides an individual Key A for each PAN stored.
    - Server B is a local server; it actually holds the encrypted PANs as well as Key B, and performs the decryption.

    To get a PAN, the client has to authenticate with BOTH servers, ask Server A for the respective Key A, then give Key A to Server B, which will return the PAN to the client (provided authentication was successful). Server A will only ever encrypt Key A with Server B's public key, as it will have it beforehand. Server B will probably have to send a salt first, though I don't think that has to be encrypted. I haven't really thought about any implementation (i.e. coding) specifics yet, but the solution uses Java's Cajo framework (a wrapper for RMI), so that is how the servers will communicate with each other (currently, membership details are transferred in this way). The reason I want Server B to do the decryption, and not the client, is that I am afraid of decryption keys ending up in the client's RAM, even though it's probably just as bad on the server... Can anyone see anything wrong with the above design? It doesn't matter if it has to be changed. Thanks, jtnire
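
    As a purely hypothetical sketch of the decryption step on Server B, assuming Key A and Key B are 16-byte key parts combined by XOR into an AES key and that each stored PAN ciphertext carries its own IV (none of this is prescribed by the design above, and real PCI DSS key management is considerably stricter):

        import javax.crypto.Cipher;
        import javax.crypto.spec.GCMParameterSpec;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.charset.StandardCharsets;

        // Hypothetical Server B routine: combine the client-supplied Key A with the
        // locally held Key B, then decrypt the stored PAN ciphertext with AES-GCM.
        public class PanDecryptor {
            public static String decryptPan(byte[] keyA, byte[] keyB,
                                            byte[] iv, byte[] ciphertext) throws Exception {
                byte[] combined = new byte[16];
                for (int i = 0; i < combined.length; i++) {
                    combined[i] = (byte) (keyA[i] ^ keyB[i]);   // derive the data key from both parts
                }
                Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
                cipher.init(Cipher.DECRYPT_MODE,
                            new SecretKeySpec(combined, "AES"),
                            new GCMParameterSpec(128, iv));
                return new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
            }
        }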

    Read the article

  • Visual Studio 2010 "Not enough storage is available to process this command"

    - by Daniel Perez
    I'm fighting with VS 2010 and this error, which seems to have been very common in previous versions, though it looks like not everyone is having it in the latest one. I've got VS 2010 SP1 and I'm getting this error quite often. The problem is that it's not even enough to restart VS to make it go away; I usually have to restart my PC, and I'm losing a lot of time doing this (it's quite frequent). I've got Windows 7 32-bit (I can't upgrade to 64-bit, the company doesn't allow it), and I can't do things like creating another solution (please don't reply with this :) ). I've used the command to make devenv.exe LARGEADDRESSAWARE, but the error keeps happening. My virtual memory size is set to automatic, and the weird thing is that VS doesn't even use 2 GB of RAM, so I don't know if the error is really because it's lacking memory or if it's some bug in the program. Any ideas, things to try, anything?

    Read the article

  • Mercurial Workflow (Shared Files)

    - by Jake Pearson
    Let's say I have programmers and artists working on a project. The artists have some folders they care about:

    /Doodles
    /Images/Jpgs

    And maybe the programmers have a folder like this:

    /Code/View/Jpgs

    What is the best process in Mercurial to keep the two Jpgs folders synced? I have used Vault, where you can have two or more files/folders linked in a repository so that updating one updates the others. Is there a way to do the same thing with Mercurial?

    Read the article

  • Win7: Right place to install a program that may be 'shared' with other computers

    - by robsoft
    We have an app that currently installs itself into 'Program Files\Our App', and it puts its internal data files into the common Application Data folder. This means the program is available to any user on that particular PC. Now we want to make a multi-user version of this program, with multiple PCs accessing the program at the same time across the network. In the bad old days, under XP, we'd just have the user who installed the app 'share' the app directory and off we'd go. In principle, is this still the 'right' way to do it under Vista/Windows 7? We'd like to do this 'properly' and be as compliant as possible! Is there a recommended Microsoft approach for doing this, or is it largely down to whatever we can get away with and subsequently support (hah!)? I've tried researching this on the MS websites but haven't found anything very helpful at all; it would be really useful to have an "if you're trying to install this kind of thing, put it here" type of guide for developers!

    Read the article

  • Efficient persistent storage for simple id to table of values map for java

    - by wds
    I need to store some data that follows the simple pattern of mapping an "id" to a full table (with multiple rows) of several columns (i.e. some integer values [u, v, w]). The size of one of these tables would be a couple of KB. Basically what I need is to store a persistent cache of some intermediate results. This could quite easily be implemented with simple SQL, but there are a couple of problems: I need to compress the size of this structure on disk as much as possible (because of the number of values I'm storing), and it's not transactional; I just need to write once and then simply read the contents of the entire table, so a relational DB isn't actually a very good fit. I was wondering if anyone had any good suggestions? For some reason I can't seem to come up with something decent at the moment. Something with an API in Java would be especially nice.
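
    One simple direction, sketched under the assumption that a row really is three ints and that one small file per id is acceptable: write each table with DataOutputStream wrapped in GZIPOutputStream (delta or variable-length encoding could shrink it further), and read it back with the matching input streams. All names here are invented.

        import java.io.DataOutputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.util.zip.GZIPOutputStream;

        // Hypothetical write-once cache: one gzipped file per id, holding rows of
        // three ints. A matching reader would use GZIPInputStream/DataInputStream.
        public class TableCacheWriter {
            public static void write(String id, int[][] rows) throws IOException {
                try (DataOutputStream out = new DataOutputStream(
                        new GZIPOutputStream(new FileOutputStream("cache-" + id + ".gz")))) {
                    out.writeInt(rows.length);
                    for (int[] row : rows) {          // each row is [u, v, w]
                        out.writeInt(row[0]);
                        out.writeInt(row[1]);
                        out.writeInt(row[2]);
                    }
                }
            }
        }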

    Read the article

  • Questions about using git as a backend storage system

    - by XO
    New to git here... I want to commit my personal file share to a git repo (text, docs, images, etc.). As I make modifications to various files over time, telling git about them along the way, how do I go about things so I can: (1) get out of the business of traditional fulls/incrementals, and (2) be able to do a point-in-time restore of a single file or of a full clone. Basically, I want something granular, such that if I make an edit to a file 5 times on a particular day, I will have 5 versions of that file that I can refer back to, forever. Or even just derive a full copy of everything the way it looked on that particular day. I am currently using rsync for remote incremental syncs (with no file versioning).

    Read the article

  • how to check if internal storage file has any data

    - by user3720291
        public class Save extends Activity {
            int levels = 2;
            int data_block = 1024;
            //char[] data = new char[] {'0', '0'};
            String blankval = "0";
            String targetval = "0";
            String temp;
            String tempwrite;
            String string = "null";
            TextView tex1;
            TextView tex2;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.save);
                Intent intent = getIntent();
                Bundle b = intent.getExtras();
                tex1 = (TextView) findViewById(R.id.textView1);
                tex2 = (TextView) findViewById(R.id.textView2);
                if (b != null) {
                    string = (String) b.get("string");
                }
                loadprev();
                save();
            }

            public void save() {
                if (string.equals("Blank")) blankval = "1";
                if (string.equals("Target")) targetval = "1";
                temp = blankval + targetval;
                try {
                    FileOutputStream fos = openFileOutput("data.gds", MODE_PRIVATE);
                    fos.write(temp.getBytes());
                    fos.close();
                } catch (FileNotFoundException e) { e.printStackTrace(); }
                catch (IOException e) { e.printStackTrace(); }
                tex1.setText(blankval);
                tex2.setText(targetval);
            }

            public void loadprev() {
                String final_data = "";
                try {
                    FileInputStream fis = openFileInput("data.gds");
                    InputStreamReader isr = new InputStreamReader(fis);
                    char[] data = new char[data_block];
                    int size;
                    while ((size = isr.read(data)) > 0) {
                        String read_data = String.copyValueOf(data, 0, size);
                        final_data += read_data;
                        data = new char[data_block];
                    }
                } catch (FileNotFoundException e) { e.printStackTrace(); }
                catch (IOException e) { e.printStackTrace(); }
                char[] tempread = final_data.toCharArray();
                blankval = "" + tempread[0];
                targetval = "" + tempread[1];
            }
        }

    After much tinkering I have finally managed to get my save/load function to work, but it has an error. It worked, but then I did a fresh reinstall, deleting data.gds; afterwards the save/load function crashes because the data.gds file has no previous values. Can I use an if statement to check whether data.gds has any values in it? If so, how do I do it, and if not, what could I use instead?
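
    One possible way to guard the load against a missing or empty file on a fresh install, sketched as a revised loadprev() (getFileStreamPath is a standard Context method; the rest mirrors the code above):

        public void loadprev() {
            // Bail out if the file is missing or too short (e.g. on a fresh install),
            // so blankval/targetval keep their "0" defaults.
            java.io.File saved = getFileStreamPath("data.gds");
            if (saved == null || !saved.exists() || saved.length() < 2) {
                return;
            }
            String final_data = "";
            try {
                FileInputStream fis = openFileInput("data.gds");
                InputStreamReader isr = new InputStreamReader(fis);
                char[] data = new char[data_block];
                int size;
                while ((size = isr.read(data)) > 0) {
                    final_data += String.copyValueOf(data, 0, size);
                }
                isr.close();
            } catch (IOException e) { e.printStackTrace(); }
            if (final_data.length() >= 2) {    // only index when both characters are present
                blankval = "" + final_data.charAt(0);
                targetval = "" + final_data.charAt(1);
            }
        }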

    Read the article

  • Efficient Array Storage for Binary Tree

    - by Sundararajan S
    We have to write the nodes of a binary tree to a file. What is the most space-efficient way of writing a binary tree? We can store it in array form, with the parent at position i and its children at positions 2i and 2i+1, but this wastes a lot of space for sparse binary trees.
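
    A common alternative worth sketching (as an illustration, not necessarily the optimum): serialize the tree in preorder and write an explicit marker for each absent child, so the cost grows with the number of nodes rather than with the size of the implicit array.

        import java.io.DataOutputStream;
        import java.io.IOException;

        // Illustrative node type and preorder writer: each node costs its payload plus
        // two one-byte markers, so sparse trees stay proportional to their node count.
        class Node {
            int value;
            Node left, right;
            Node(int value) { this.value = value; }
        }

        class TreeWriter {
            static void write(Node node, DataOutputStream out) throws IOException {
                if (node == null) {
                    out.writeBoolean(false);   // marker: no subtree here
                    return;
                }
                out.writeBoolean(true);        // marker: a node follows
                out.writeInt(node.value);
                write(node.left, out);
                write(node.right, out);
            }
        }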

    Read the article

  • SOLR multicore shared configuration

    - by Mark
    I'm using multiple cores in SOLR to enable offline population of indices (and then using SWAP to swap out the active core). I want to use the same solrconfig.xml file for both cores - can someone tell me where I should put this so it can be picked up by SOLR?

    Read the article

  • C# / IronPython Interop with shared C# Class Library

    - by Adam Haile
    I'm trying to use IronPython as an intermediary between a C# GUI and some C# libraries, so that it can be scripted post compile time. I have a class library DLL that is used by both the GUI and the Python code, along the lines of this:

        namespace MyLib
        {
            public class MyClass
            {
                public string Name { get; set; }

                public MyClass(string name)
                {
                    this.Name = name;
                }
            }
        }

    The IronPython code is as follows:

        import clr
        clr.AddReferenceToFile(r"MyLib.dll")
        from MyLib import MyClass

        ReturnObject = MyClass("Test")

    Then, in C#, I call it as follows:

        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();
        ScriptSource source = engine.CreateScriptSourceFromFile("Script.py");
        source.Execute(scope);
        MyClass mc = scope.GetVariable<MyClass>("ReturnObject");

    When I call this last bit of code, source.Execute(scope) returns successfully, but when I try the GetVariable call, it throws the following exception:

        Microsoft.Scripting.ArgumentTypeException: expected MyClass, got MyClass

    So you can see that the class names are exactly the same, but for some reason it thinks they are different. The DLL is in a different directory than the .py file (I just didn't bother to write out all the path setup stuff); could it be that there is an issue with the IronPython interpreter seeing these objects as different because it's somehow loading them in a different context or scope?

    Read the article

  • Rails: unexpected behavior updating a shared instance

    - by Pascal Lindelauf
    I have a User object that is related to a Post object via two different association paths:

        Post --(has_many)-- comments --(belongs to)-- writer (of type User)
        Post --(belongs to)-- writer (of type User)

    Say the following hold:

        user1.name == "Bill"
        post1.comments[1].writer == user1
        post1.writer == user1

    Now when I retrieve post1 and its comments from the database and update the comment's writer like so:

        post1.comments[1].writer.name = "John"

    I would expect post1.writer.name to equal "John" too. But it doesn't! It still equals "Bill". So there seems to be some caching going on, but not the kind I would expect. I would expect Rails to be clever enough to load exactly one instance of the user named "Bill"; instead it appears to load two separate ones, one for each association path. Can someone explain how this works exactly and how I should handle these types of situations the "Rails way"?

    Read the article
