Search Results

Search found 6770 results on 271 pages for 'azure storage'.

Page 74 of 271

  • How do I design around the file storage issue?

    - by user102533
    I am working on an application that creates video files and stores them in a folder on the C:\ drive. I expect there will be a large number of these files in the future, so we will run out of disk space at some point (on our VPS). When the time comes to upgrade, we plan either to use a cloud provider to store the files or to have our existing provider add another disk (say, a D:\ drive). Either way, I want to design the app now so that moving to a different location later is not an issue and is transparent to the end user. The code that creates these files supports two output modes: myObj.SetOutputToDisk(<path to store>); or myObj.SetOutputToMemoryStream(ms); If we go with the cloud architecture, I assume we might have one of the following combinations: cloud files + existing VPS, or cloud files + cloud Windows server. Given the unknowns at this time, how would I go about designing this?
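
    One way to keep the future move transparent is to hide the destination behind a small interface now and pick the implementation from configuration later. A minimal C# sketch under that assumption; IMediaStore and LocalDiskStore are hypothetical names, not part of the existing API:

        using System.IO;

        // Hypothetical abstraction: callers never know where the bytes end up.
        public interface IMediaStore
        {
            void Save(string fileName, Stream content);
            Stream Open(string fileName);
        }

        // Today's implementation: a folder on the local C:\ (or a future D:\) drive.
        public class LocalDiskStore : IMediaStore
        {
            private readonly string _root;
            public LocalDiskStore(string root) { _root = root; }

            public void Save(string fileName, Stream content)
            {
                using (var file = File.Create(Path.Combine(_root, fileName)))
                    content.CopyTo(file);
            }

            public Stream Open(string fileName)
            {
                return File.OpenRead(Path.Combine(_root, fileName));
            }
        }

    A later CloudFileStore : IMediaStore can wrap whichever provider's SDK you choose. The SetOutputToMemoryStream(ms) mode fits this nicely: the encoder writes into the stream, and after rewinding it (ms.Position = 0), store.Save(name, ms) decides where the bytes land, so nothing above the interface ever sees a path.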

    Read the article

  • Anyone using NoSQL databases for medical record storage?

    - by Brian Bay
    Electronic medical records are composed of different types of data. Visit information (date/location/insurance info) seems to lend itself to an RDBMS. Other types of medical information, such as lab reports, x-rays, photos, and electronic signatures, are document based and would seem to be good candidates for a 'document-oriented' database such as MongoDB. Traditionally, binary data would be stored as a BLOB in an RDBMS. A hybrid approach using a traditional RDBMS along with a 'document-oriented' database would seem like a good alternative to this. Another alternative would be something like DB2 pureXML. The ultimate answer could be that 'it depends', but I really just wanted to get some general feedback/ideas on this. Is anyone using the NoSQL approach for medical records?
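
    For what it's worth, the hybrid split can be as thin as a shared visit id: the RDBMS keeps the relational visit row, and the document store keeps the variable-shape artifacts keyed by that id. A small sketch with the official MongoDB C# driver; the database, collection, and field names are invented for illustration:

        using System.IO;
        using MongoDB.Bson;
        using MongoDB.Driver;

        byte[] reportBytes = File.ReadAllBytes("report.pdf");  // a lab report scan

        var client = new MongoClient("mongodb://localhost");
        var reports = client.GetDatabase("emr").GetCollection<BsonDocument>("lab_reports");

        // The relational side holds visit 42 (date/location/insurance);
        // the document side holds anything attached to it, whatever its shape.
        reports.InsertOne(new BsonDocument
        {
            { "visitId", 42 },
            { "type", "lab_report" },
            { "payload", new BsonBinaryData(reportBytes) }
        });

        // Pull every artifact for the visit, regardless of type.
        var docs = reports.Find(new BsonDocument("visitId", 42)).ToList();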

    Read the article

  • Uncompress OpenOffice files for better storage in version control

    - by Craig McQueen
    I've heard discussion about how OpenOffice (ODF) files are compressed zip archives of XML and other data, so making a tiny change to the document can completely change the stored bytes, which means delta compression doesn't work well in version control systems. I've done basic testing on an OpenOffice file: unzipping it and then rezipping it with zero compression, using the Linux zip utility. OpenOffice will still happily open the result. So I'm wondering if it's worth developing a small utility to run on ODF files each time just before I commit to version control (see the sketch below). Any thoughts on this idea? Possible better alternatives? Secondly, what would be a good and robust way to implement this little utility? A Bash script that calls zip (probably Linux only)? Python? Any gotchas you can think of? Obviously I don't want to accidentally mangle a file, and there are several ways that could happen. Possible gotchas I can think of:

    - Insufficient disk space
    - Some other permissions issue that prevents writing the file or temporary files
    - The ODF document is encrypted (probably should just leave these alone; the encryption probably also causes large file changes and thus prevents efficient delta compression anyway)
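
    Whatever language it ends up in, the core of the utility is just "copy every zip entry with the 'store' method". A minimal sketch in C# with System.IO.Compression, assuming the document is not encrypted; it writes to a temporary file first and only replaces the original on success, which covers the disk-space and mangling worries:

        using System.IO;
        using System.IO.Compression;

        static void StoreUncompressed(string odfPath)
        {
            string tmp = odfPath + ".tmp";
            using (var src = ZipFile.OpenRead(odfPath))
            using (var dst = new ZipArchive(File.Create(tmp), ZipArchiveMode.Create))
            {
                // Copy entries in their original order, re-adding each one uncompressed.
                foreach (var entry in src.Entries)
                {
                    var copy = dst.CreateEntry(entry.FullName, CompressionLevel.NoCompression);
                    using (var from = entry.Open())
                    using (var to = copy.Open())
                        from.CopyTo(to);
                }
            }
            File.Delete(odfPath);
            File.Move(tmp, odfPath);
        }

    One extra gotcha: the ODF spec expects the mimetype entry to be the first entry and stored uncompressed; copying entries in their original order, as above, preserves that.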

    Read the article

  • What is the best way to format a date in JSON for Mongo DB storage

    - by Poul
    I have a date with a time. I'm using Ruby, but the language shouldn't matter. d = "2010-04-01 13:00:00" What is the best way to format this date for MongoDB? By 'best' I mean: is there a certain format I could use where Mongo would recognize it as a date and might give me more advanced filtering options? I.e., if formatted correctly, could I ask Mongo to return all records whose month is '04'? Thanks!
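
    MongoDB has a native BSON date type; store the value as that rather than as a string, and date filtering becomes a range query on an indexable field. A sketch with the C# driver (collection and field names invented); note that "month is '04' in any year" would need date-part aggregation, but the common per-year case is just a range:

        using System;
        using MongoDB.Bson;
        using MongoDB.Driver;

        var client = new MongoClient("mongodb://localhost");
        var events = client.GetDatabase("test").GetCollection<BsonDocument>("events");

        // Store a real BSON date, not the string "2010-04-01 13:00:00".
        events.InsertOne(new BsonDocument
        {
            { "when", new BsonDateTime(new DateTime(2010, 4, 1, 13, 0, 0, DateTimeKind.Utc)) }
        });

        // "All records in April 2010" as a half-open range.
        var april = Builders<BsonDocument>.Filter.Gte("when", new DateTime(2010, 4, 1)) &
                    Builders<BsonDocument>.Filter.Lt("when", new DateTime(2010, 5, 1));
        var results = events.Find(april).ToList();

    In Ruby, passing a Time object to the driver produces the same BSON date type.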

    Read the article

  • Retrieve binary data from S3 storage through AWS.NET in C#

    - by BerggreenDK
    I've tested most of the included samples in the AWS SDK for .NET and they all work fine. I can PUT objects, LIST objects, and DELETE objects in a bucket, but... let's say I delete the original and want to sync the files missing locally? I would like to make a GET object request (by key/name and bucket, of course). I can find the object, but how do I read the binary data from S3 through the API? Do I have to write my own SOAP wrapper for this, or is there some kind of sample for this out there? In hope of a sample: it does not have to tolerate exceptions etc., I just need to see the main parts that connect, retrieve, and store the file back in my ASP.NET or C# project. Anyone?
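
    No SOAP wrapper needed; the SDK's GetObject call hands back the response stream. A minimal sketch against the current AWS SDK for .NET (bucket, key, and target path are placeholders; older SDK builds spell the request setup with With* methods instead of object initializers):

        using System.IO;
        using Amazon.S3;
        using Amazon.S3.Model;

        using (var client = new AmazonS3Client())   // credentials come from app config
        {
            var request = new GetObjectRequest { BucketName = "my-bucket", Key = "files/clip.bin" };
            using (GetObjectResponse response = client.GetObject(request))
            using (var file = File.Create(@"C:\sync\clip.bin"))
            {
                // Stream the binary body straight to disk.
                response.ResponseStream.CopyTo(file);
            }
        }

    GetObjectResponse also has a WriteResponseStreamToFile(path) helper if you'd rather skip the manual copy.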

    Read the article

  • Storage of events in Calendar application in Android 2.1

    - by Navin
    Does the Calendar application in Android maintain a cache of its database? Whenever I edit and mark events via the Calendar app, they are stored in the database, but if I edit calendar.db from an outside source, the changes are not reflected in the Calendar app. So my question is: does the Calendar app maintain a cache or some other form of database? If yes, where and how?

    Read the article

  • Do any clouds support SSD storage?

    - by taw
    I'm using the Amazon cloud right now, and the biggest performance issue is horrible I/O performance. As long as something fits in RAM it's fine; once it's too big, it gets ridiculously slow (in many different scenarios). There are only so many ways one can avoid hitting disk, so the question is: does Amazon or some other cloud provide an SSD option?

    Read the article

  • Convert a Delphi example using TDatabase and local paradox table to server storage

    - by Brian Frost
    I am looking at the Developer Express Quantum Grid example 'IssueList', a useful bug reporting and tracking application that's almost ready to go out of the box. It uses a TDatabase component with several Paradox (.db) tables. Is it simple to rejig the TDatabase settings to use a database on a shared machine so that several of us can access it together across the network? If so, what would the steps be, please?

    Read the article

  • MySQL Text Storage?

    - by mii
    I was wondering: if you were to have an article, or articles, with huge amounts of text, which column type would be better when creating the database structure for the article text, and why? What would the advantages or disadvantages be, if any? I was thinking of using one of the data types below to hold the article text in the MySQL database: VARCHAR, TEXT, MEDIUMTEXT, LONGTEXT.
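
    The practical difference between these types is mostly the size cap: VARCHAR shares the 65,535-byte row limit, TEXT holds up to 64 KB, MEDIUMTEXT up to 16 MB, and LONGTEXT up to 4 GB (at the cost of a larger length prefix per value). For article bodies, MEDIUMTEXT is a common choice. A sketch of the DDL, here executed through Connector/NET; the connection string and names are illustrative:

        using MySql.Data.MySqlClient;

        const string ddl = @"
            CREATE TABLE articles (
                id    INT AUTO_INCREMENT PRIMARY KEY,
                title VARCHAR(255) NOT NULL,
                body  MEDIUMTEXT NOT NULL      -- 16 MB cap; TEXT stops at 64 KB
            ) ENGINE=InnoDB DEFAULT CHARSET=utf8;";

        using (var conn = new MySqlConnection("server=localhost;database=cms;uid=app;pwd=secret"))
        {
            conn.Open();
            new MySqlCommand(ddl, conn).ExecuteNonQuery();
        }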

    Read the article

  • Simple File-based Record Storage with Fast Text Searching for Compact Framework and Silverlight

    - by Eric Farr
    I have a single table with lots of records (> 100k) that I need to be able to index and search on several text fields. The easiest searches will have the first part of the string specified (e.g., LIKE 'ABC%' in SQL). The tougher searches will need to find any substring within the text fields (e.g., LIKE '%ABC%' in SQL). I need to run on the Compact Framework. SQL Compact is a memory hog and overkill for my one table. Besides, I'd like to be able to run on Silverlight 4 eventually. The file and indexes can be generated on the full .NET Framework; I only need read capability on the Compact Framework. My records are not especially large and can be expressed in fixed-length format. I'm looking for existing code or libraries to avoid having to write a file-based B-tree implementation from scratch.
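
    If nothing off-the-shelf turns up, note that fixed-length records make the LIKE 'ABC%' case cheap without a B-tree: sort the file by the key field when generating it on the full framework, then binary-search it read-only on the device. A sketch under those assumptions (the 128/32-byte layout is hypothetical); the LIKE '%ABC%' case is the hard one and needs a separate substring index, e.g. over trigrams:

        using System;
        using System.IO;
        using System.Text;

        const int RecordSize = 128;  // hypothetical fixed record layout
        const int KeySize = 32;      // key field = first 32 bytes, padded with '\0'

        // Returns the index of the first record whose key >= prefix; matching
        // records then run from there while key.StartsWith(prefix).
        static long LowerBound(FileStream f, string prefix)
        {
            byte[] buf = new byte[KeySize];
            long lo = 0, hi = f.Length / RecordSize;
            while (lo < hi)
            {
                long mid = (lo + hi) / 2;
                f.Seek(mid * RecordSize, SeekOrigin.Begin);
                f.Read(buf, 0, KeySize);
                string key = Encoding.ASCII.GetString(buf, 0, KeySize).TrimEnd('\0');
                if (string.CompareOrdinal(key, prefix) < 0) lo = mid + 1;
                else hi = mid;
            }
            return lo;
        }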

    Read the article

  • Confused with the Isolated Storage with Multiple Assemblies Access

    - by Peter Lee
    I googled and searched a lot, but got no luck. I have WindowsFormsApplication.exe and ConsoleApplication.exe, and I want both of them to access the same isolated storage. Is that possible? I tried using this in ConsoleApplication.exe: IsolatedStorageFile isoStore = IsolatedStorageFile.GetMachineStoreForApplication(); but I got: IsolatedStorageException: Unable to determine application identity of the caller. How can I fix this? Or can I use this approach at all? P.S.: This is NOT a ClickOnce app.
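
    GetMachineStoreForApplication only works when the caller has an application identity, which in practice means ClickOnce or XBAP deployment, hence the exception. Assembly-scoped stores are keyed to the identity of the assembly making the call, so one way to share data is to route all isolated storage access through a single strong-named class library referenced by both EXEs. A sketch of that shared library (names are made up):

        using System.IO;
        using System.IO.IsolatedStorage;

        // Lives in SharedStorage.dll, referenced by BOTH WindowsFormsApplication.exe
        // and ConsoleApplication.exe. The store is scoped to THIS assembly's
        // identity, so both processes resolve to the same location.
        public static class SharedStore
        {
            public static void Write(string name, string text)
            {
                using (var store = IsolatedStorageFile.GetMachineStoreForAssembly())
                using (var stream = new IsolatedStorageFileStream(name, FileMode.Create, store))
                using (var writer = new StreamWriter(stream))
                    writer.Write(text);
            }

            public static string Read(string name)
            {
                using (var store = IsolatedStorageFile.GetMachineStoreForAssembly())
                using (var stream = new IsolatedStorageFileStream(name, FileMode.Open, store))
                using (var reader = new StreamReader(stream))
                    return reader.ReadToEnd();
            }
        }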

    Read the article

  • File Storage for Web Applications: Filesystem vs DB vs NoSQL engines

    - by El Yobo
    I have a web application that stores a lot of user-generated files. Currently these are all stored on the server filesystem, which has several downsides for me:

    - When we move "folders" (as defined by our application) we also have to move the files on disk (although this is more due to strange design decisions on the part of the original developers than a requirement of storing things on the filesystem).
    - It's hard to write tests for filesystem actions. I have a mock filesystem class that logs actions like move, delete, etc. without performing them, which more or less does the job, but I don't have 100% confidence in the tests.
    - I will be adding other jobs which need to access the files from another service to perform additional tasks (e.g. indexing in Solr, generating thumbnails, movie format conversion), so I need to get at the files remotely. Doing this over network shares seems dodgy...
    - Dealing with permissions on the filesystem has sometimes given us problems in the past, although now that we've moved to a pure Linux environment this should be less of an issue.

    What are the downsides of storing files as BLOBs in MySQL? I guess that it would massively increase the database size and reduce the effectiveness of caches, but are there other problems? Do the same problems exist with NoSQL systems like Cassandra? Does anyone have any other suggestions that might be appropriate?

    Read the article

  • MySQL: Storage of multiple text fields for a record

    - by Tom
    An inexperienced question: I need to store about 10 unknown-length text fields per record in a MySQL table. I expect no more than 50K rows in total for this table, but speed is important. The database actions will be solely SELECTs for all practical purposes. I'm using InnoDB. In other words: id | text1 | text2 | text3 | .... | text10 As I understand it, MySQL will store the text elsewhere and keep only pointers in the row itself, so I'm wondering whether there are any fundamental performance implications I should be worrying about given the way the data is stored (i.e., several "sub-fetches" from the table). Thank you.

    Read the article

  • Password hashing, salt and storage of hashed values

    - by Jonathan Leffler
    Suppose you were at liberty to decide how hashed passwords were to be stored in a DBMS. Are there obvious weaknesses in a scheme like this one? To create the hash value stored in the DBMS:

    - take a value that is unique to the DBMS server instance as part of the salt,
    - and the username as a second part of the salt,
    - and create the concatenation of the salt with the actual password,
    - and hash the whole string using the SHA-256 algorithm,
    - and store the result in the DBMS.

    This would mean that anyone wanting to come up with a collision would have to do the work separately for each user name and each DBMS server instance. I'd plan to keep the actual hash mechanism somewhat flexible to allow for the use of the new NIST standard hash algorithm (SHA-3) that is still being worked on. The 'value that is unique to the DBMS server instance' need not be secret, though it wouldn't be divulged casually. The intention is to ensure that if someone uses the same password in different DBMS server instances, the recorded hashes will be different. Likewise, the user name is not secret; just the password proper. Would there be any advantage to having the password first and the user name and 'unique value' second, or any other permutation of the three sources of data? Or what about interleaving the strings? Do I need to add (and record) a random salt value (per password) as well as the information above? (Advantage: the user can re-use a password and still, probably, get a different hash recorded in the database. Disadvantage: the salt has to be recorded. I suspect the advantage considerably outweighs the disadvantage.) There are quite a lot of related SO questions; this list is unlikely to be comprehensive:

    - Encrypting/Hashing plain text passwords in database
    - Secure hash and salt for PHP passwords
    - The necessity of hiding the salt for a hash
    - Client-side MD5 hash with time salt
    - Simple password encryption
    - Salt generation and Open Source software

    I think that the answers to these questions support my algorithm (though if you simply use a random salt, then the 'unique value per server' and username components are less important).
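
    For concreteness, the scheme plus a per-password random salt is only a few lines; a C# sketch (the separator choice and salt size here are arbitrary). One caveat worth weighing alongside the salting questions: a single SHA-256 pass is fast by design, so an iterated scheme such as PBKDF2 makes offline brute force far more expensive:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        // hash = SHA-256(instanceId || username || randomSalt || password);
        // store both the salt and the hash in the DBMS.
        static (byte[] Salt, byte[] Hash) HashPassword(string instanceId, string username, string password)
        {
            byte[] salt = new byte[16];
            using (var rng = RandomNumberGenerator.Create())
                rng.GetBytes(salt);

            byte[] prefix = Encoding.UTF8.GetBytes(instanceId + "\0" + username + "\0");
            byte[] pwd = Encoding.UTF8.GetBytes(password);
            byte[] input = new byte[prefix.Length + salt.Length + pwd.Length];
            Buffer.BlockCopy(prefix, 0, input, 0, prefix.Length);
            Buffer.BlockCopy(salt, 0, input, prefix.Length, salt.Length);
            Buffer.BlockCopy(pwd, 0, input, prefix.Length + salt.Length, pwd.Length);

            using (var sha = SHA256.Create())
                return (salt, sha.ComputeHash(input));
        }

    On the ordering question: permuting or interleaving the components adds nothing cryptographically; what matters is that the concatenation is unambiguous, which is why the sketch inserts separator bytes between the fields.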

    Read the article

  • Storage location of yellow-blue shield icon

    - by gencha
    Where, in Windows, is this icon stored? I need to use it in a TaskDialog emulation for XP and am having a hard time tracking it down. It's not in shell32.dll, explorer.exe, ieframe.dll or wmploc.dll (as these contain a lot of icons commonly used in Windows).

    Read the article

  • Image storage as a service

    - by Samuel
    Google App Engine provides an Images API for storing/retrieving images. We are currently not in a position to deploy our application on top of App Engine because of limitations in the Java frameworks (JBoss Seam 2.2.0) we are using to build our Java EE application. We would eventually want to deploy our production application on top of Google App Engine, but what are the short-term options (Java-based open source products) that provide comparable functionality to Google App Engine's Images API and would offer an easier migration path later on?

    Read the article

  • Decentralized synchronized secure data storage

    - by Alberich
    Introduction

    Hi, I am going to ask a question which may seem utopian, but I need to know if there is a way to achieve what I need, and if not, why not.

    The idea

    Suppose I have a database structure in MySQL. I want to create a solution that allows anyone (no matter who, no matter where) to have a synchronized copy (an updated clone) of this database, with its content. It is not going to be just one synchronized copy; it could (and should) be multiple replicas (say, ten copies all over the world). And, most importantly: it must be secure. By secure I mean that only real, accepted transactions will be synchronized with all the other database copies/clones, no matter how many there are. Note: since it would be quite difficult to make the synchronization happen in real time, I will design everything so that this feature is dispensable; it is not required.

    My suggestion

    This is how I am thinking of managing it. Time identifiers and update checking: every action (insert, update, delete...) will be stored as the action instruction itself, associated with a time identifier. (I think that rather than a DATETIME field, it will be an INT one, holding the number of milliseconds passed since 1st January 2013, for example.) Each copy will then ask a "neighbour copy" for the new actions done since its last update, and execute them after checking that they are allowed.

    Problem 1: the "neighbour copy" could be outdated too. Solution 1: do not ask just one neighbour; create a random list of some of the copies/clones and ask them for news (I could skip the list and ask ALL the clones for updates, but this becomes inefficient as the number of clones grows).

    Problem 2: real-time global synchronization is not active. What if someone at CLONE_ENTERPRISING inserts a row into TABLE, the row reaches every clone, someone at CLONE_FIXEMALL deletes the row, and at the same time, on an outdated clone, someone at CLONE_DROPOUT edits the row (now nonexistent on the other clones)? Solution 2: easy stuff: force a GLOBAL synchronization before doing any new action that depends on third-party data (an edit, for example). This global synchronization will be unnecessary when making an INSERT, for instance. Note: someone could have some fun and make the same insert on two clones; since they are not updated in real time, the row will exist twice. But it is the same as with a single database: in the cases that need it, we check whether the same row already exists before doing the final action. Not a problem.

    Problem 3: it is possible to edit the code and not filter actions, so someone could spread instructions to delete everything, or just engage in some trolling activity. This is not a problem, since good clones will always exist somewhere; the ones that have gone bad are no longer of interest.

    I really appreciate it if you have read this far. I know this is not the perfect solution; it possibly has a hundred holes, but it is my starting point. I will appreciate anything you can teach me now. Thanks a lot. P.S.: It could be that all of this already exists and has its own name. Sorry for asking, in that case (I would be thankful for that name, if it exists).
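
    What is described here is essentially multi-master (optimistic) replication with eventual consistency, which is the name to search for; CouchDB's replication model is close to it. The action log itself maps to one table per clone plus a pull query against a high-water mark. A hedged sketch (schema and names invented; note that real systems tend to use per-origin sequence numbers or vector clocks instead of wall-clock milliseconds, because clocks skew between clones):

        using MySql.Data.MySqlClient;

        // Every clone keeps the same log: the action statement itself plus the
        // millisecond identifier proposed above and the clone that produced it.
        const string ddl = @"
            CREATE TABLE action_log (
                ts        BIGINT      NOT NULL,  -- ms since the chosen epoch
                origin    VARCHAR(64) NOT NULL,  -- which clone produced the action
                statement TEXT        NOT NULL,
                PRIMARY KEY (ts, origin)
            );";

        // The pull step: ask a (randomly chosen) peer for news since our mark.
        static MySqlDataReader ActionsSince(MySqlConnection peer, long highWaterMark)
        {
            var cmd = new MySqlCommand(
                "SELECT ts, origin, statement FROM action_log " +
                "WHERE ts > @hwm ORDER BY ts", peer);
            cmd.Parameters.AddWithValue("@hwm", highWaterMark);
            return cmd.ExecuteReader();
        }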

    Read the article

  • Problem downloading movie to iPhone for storage and playback

    - by padatronic
    I am basically making a video library where you download videos, which I then write to the application's documents folder. This all works fine, and if I stream the video from online it plays fine; indeed, I can stream it from the resource folder fine too. However, after downloading it, saving it to the documents folder, and then attempting to play it from there, I get the error 'movie format not supported'. Any ideas? Thanks very much.

    Read the article

  • Implementing a logging library in .NET with a database as the storage medium

    - by Dave
    I'm just starting to work on a logging library that everyone can use to keep track of any sort of system information while the user is running our application. The simplest example so far is to track Info, Warnings, and Errors. I want all plugins to be able to use this feature, but since each developer might have a different idea of what's important to report, I want to keep this as generic as possible. In the C++ world, I would normally use something like a std::pair<string,string> to act as a key-value pair structure, and have a std::list of these to act as a "row" in the log. The log cache would then be a std::list<std::list<std::pair<string,string>>> (ugh!). This way, developers can use const string keys like INFO, WARNING, and ERROR to get consistent naming for a column in the database (for SELECTing specific types of information). I'd like the database to be able to deal with any number of distinct column names. For example, John might have an INFO row with a column called USER, and Bill might have an INFO row with a column called FILENAME. I want the log viewer to display all information, and if one report doesn't have a value for INFO/FILENAME, that field should just appear blank. So one option is to use a List<List<KeyValuePair<String,String>>>, and another is to have the log library consumer somehow "register" its schema and then have the database do an ALTER TABLE to handle this situation. Yet another idea is to have a table that's just for key-value pairs, with a foreign key that maps the pairs back to the original log entry. I obviously don't want logging to bog down the system, so I only lock the log cache to make a copy of the data (and remove the already-copied data); a background thread then dumps the information to the database. My specific questions regarding this are:

    - Do you see any performance issues? In other words, have you ever tried something like this and found that certain things just don't work well in practice?
    - Is there a more .NET-ish way to implement the key-value pairs, other than List<List<KeyValuePair<String,String>>>? (See the sketch below.)
    - Even if there is a better way to do #2, is the ALTER TABLE idea I proposed above a Bad Thing?
    - Would you recommend multiple databases over a single one? I don't yet have an idea of how frequently the log would get written to, but ideally we would like to have lots of low-level information. Perhaps there should be a DB with a fixed schema only for the low-level stuff, and another DB that's more flexible for reporting information back to users.
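
    On the second question: the usual .NET shape for the C++ list<list<pair<string,string>>> is a Dictionary<string,string> per entry, and ConcurrentQueue<T> gives the lock-copy-flush pattern for free. A sketch under those assumptions (modern C#; the flush thread and database schema are left out):

        using System.Collections.Concurrent;
        using System.Collections.Generic;

        // One log row: a severity plus any number of named columns.
        public sealed class LogEntry
        {
            public string Severity { get; set; }
            public Dictionary<string, string> Fields { get; } = new Dictionary<string, string>();
        }

        public static class Log
        {
            // Producers enqueue without blocking each other; a background
            // thread drains this and writes to the key/value table.
            private static readonly ConcurrentQueue<LogEntry> Pending = new ConcurrentQueue<LogEntry>();

            public static void Write(string severity, params (string Key, string Value)[] fields)
            {
                var entry = new LogEntry { Severity = severity };
                foreach (var (k, v) in fields) entry.Fields[k] = v;
                Pending.Enqueue(entry);
            }

            // Called by the background flusher.
            public static bool TryDequeue(out LogEntry entry) => Pending.TryDequeue(out entry);
        }

        // Usage: Log.Write("INFO", ("USER", "john"), ("FILENAME", "report.pdf"));

    On the third question: the key/value child table (entry_id, key, value) handles John's USER and Bill's FILENAME without any runtime ALTER TABLE, and missing columns simply come back as no rows, which is exactly the blank-field behaviour the viewer wants.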

    Read the article

  • Database storage for high sample rate data in web app

    - by Jim
    I've got multiple sensors feeding data to my web app. Each channel is 5 samples per second, and the data gets uploaded bundled together in one-minute JSON messages (containing 300 samples). The data will be graphed using Flot at multiple zoom levels, from 1 day down to 1 minute. I'm using Amazon SimpleDB, and I'm currently storing the data in the one-minute chunks I receive it in. This works well at high zoom levels, but for full days there will simply be too many rows to retrieve. The idea I've currently got is that every hour I can crawl through the data, collect together 300 samples for the last hour, and store them in an hour-level domain (table, if you like). Does this sound like a reasonable solution? How have others implemented the same sort of systems?
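
    The hourly roll-up is standard pre-aggregation for time-series graphing. A sketch of the reduction itself, assuming each hour's 18,000 samples (5/s x 3600 s) are averaged in 60-sample (12-second) windows down to the 300 points the day view needs; keeping min/max per window as well is common, so short spikes survive the zoom-out:

        using System.Collections.Generic;
        using System.Linq;

        // Reduce one hour of 5 Hz samples (18,000 values) to 300 averaged points.
        static double[] HourlyRollup(IReadOnlyList<double> samples)
        {
            const int window = 60;  // 18,000 / 300
            return Enumerable.Range(0, samples.Count / window)
                .Select(i => samples.Skip(i * window).Take(window).Average())
                .ToArray();
        }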

    Read the article

  • Credit Card storage solution

    - by jtnire
    Hi everyone, I'm developing a solution designed to store membership details as well as credit card details, and I'm trying to comply with PCI DSS as much as I can. Here is my design so far. PAN = primary account number == the long number on the credit card. Server A is a remote server. It stores all membership details (names, addresses, etc.) and provides an individual Key A for each PAN stored. Server B is a local server; it actually holds the encrypted PANs as well as Key B, and does the decryption. To get a PAN, the client has to authenticate with BOTH servers, ask Server A for the respective Key A, then give Key A to Server B, which returns the PAN to the client (provided authentication was successful). Server A will only ever encrypt Key A with Server B's public key, which it will have beforehand. Server B will probably have to send a salt first, though; I don't think that has to be encrypted. I haven't really thought about any implementation (i.e. coding) specifics yet; the solution uses Java's Cajo framework (a wrapper for RMI), so that is how the servers will communicate with each other (currently, membership details are transferred this way). The reason why I want Server B to do the decryption, and not the client, is that I am afraid of decryption keys going into the client's RAM, even though it's probably just as bad on the server... Can anyone see anything wrong with the above design? It doesn't matter if it has to be changed. Thanks, jtnire
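
    One way to make "neither server alone can decrypt" concrete is to derive each PAN's data key from both key parts, so Server B physically cannot decrypt until the client delivers Key A. A sketch of the decrypt side (C# for brevity; the same AES/SHA-256 construction exists in Java's JCE for the Cajo-based servers; the key sizes and derivation here are illustrative, not a PCI-vetted recipe):

        using System;
        using System.IO;
        using System.Security.Cryptography;

        // AES key = SHA-256(keyA || keyB); the ciphertext and IV live on Server B,
        // and keyA arrives from Server A via the authenticated client.
        static byte[] DecryptPan(byte[] cipher, byte[] iv, byte[] keyA, byte[] keyB)
        {
            byte[] material = new byte[keyA.Length + keyB.Length];
            Buffer.BlockCopy(keyA, 0, material, 0, keyA.Length);
            Buffer.BlockCopy(keyB, 0, material, keyA.Length, keyB.Length);

            using (var sha = SHA256.Create())
            using (var aes = Aes.Create())
            {
                aes.Key = sha.ComputeHash(material);  // 256-bit key
                aes.IV = iv;
                using (var decryptor = aes.CreateDecryptor())
                using (var ms = new MemoryStream(cipher))
                using (var cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Read))
                using (var plain = new MemoryStream())
                {
                    cs.CopyTo(plain);
                    return plain.ToArray();  // the PAN bytes; zero them after use
                }
            }
        }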

    Read the article
