Search Results

Search found 13664 results on 547 pages for 'storage engine'.


  • Google App Engine datastore

    - by megala
    Hi, I created a table with primary key {groupname} in the Google App Engine datastore as follows:

      @Entity
      @Table(name="Groups", schema="PUBLIC")
      public class Creategroup {
          @Id
          private String groupname;
          @Basic
          private String groupid;
          . . .
      }

    My question is: I want to set both groupname and groupid as the primary key. Is that possible, and how? Please guide me. Thanks in advance.
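
    In standard JPA a composite primary key is usually expressed with @IdClass (or @EmbeddedId). Below is a minimal sketch of the @IdClass approach, reusing the names from the question; note that the App Engine datastore keys every entity on a single key, so its JPA provider may not accept this, and a common workaround is a single @Id that concatenates both values:

      import java.io.Serializable;
      import javax.persistence.Entity;
      import javax.persistence.Id;
      import javax.persistence.IdClass;
      import javax.persistence.Table;

      // Key class: must be serializable and provide equals/hashCode.
      class GroupKey implements Serializable {
          private String groupname;
          private String groupid;

          public GroupKey() {}

          @Override
          public boolean equals(Object o) {
              if (!(o instanceof GroupKey)) return false;
              GroupKey k = (GroupKey) o;
              return groupname.equals(k.groupname) && groupid.equals(k.groupid);
          }

          @Override
          public int hashCode() {
              return groupname.hashCode() * 31 + groupid.hashCode();
          }
      }

      @Entity
      @Table(name = "Groups", schema = "PUBLIC")
      @IdClass(GroupKey.class)
      public class Creategroup {
          @Id
          private String groupname;   // part 1 of the composite key
          @Id
          private String groupid;     // part 2 of the composite key
      }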

    Read the article

  • Do any clouds support SSD storage?

    - by taw
    I'm using the Amazon cloud right now, and the biggest performance issue is horrible I/O performance. As long as something fits in RAM it's fine; once it's too big, it gets ridiculously slow (in many different scenarios). There are only so many ways one can avoid hitting disk, so the question is: does Amazon or some other cloud provider offer an SSD option?

    Read the article

  • Storage of events in Calendar application in Android 2.1

    - by Navin
    Does the calendar application in Android maintain a cache of its database? Whenever I edit or mark some events via the calendar app, they are stored in the database, but if I edit calendar.db from an outside source, the changes are not reflected in the calendar app. So my question is: does the calendar app maintain a cache or some other form of database? If yes, where and how?

    Read the article

  • JavaScript Templating Engine

    - by Randy Gurment
    Hi, I would like to create a universal templating engine in JavaScript. How would I go about it?

    HTML template:

      <h1><%title1%></h1>
      <h2><%title2%></h2>

    JSON file:

      { "title1": "Hello World!", "title2": "Hi World!" }

    The JavaScript should find <%title1%> in the HTML file, look up the variable "title1" in the JSON file, and replace <%title1%> with the value of "title1", then do the same for <%title2%>. Thanks!
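
    The substitution itself is just a scan over the <%name%> placeholders with a lookup in a key/value map. A rough sketch of that idea, written in Java purely for illustration (the question asks for JavaScript), assuming the JSON has already been parsed into a map:

      import java.util.HashMap;
      import java.util.Map;
      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      public class TinyTemplate {
          // Matches <%name%> and captures "name".
          private static final Pattern PLACEHOLDER = Pattern.compile("<%(\\w+)%>");

          // Replace every <%name%> in the template with values.get(name);
          // unknown names are left untouched.
          static String render(String template, Map<String, String> values) {
              Matcher m = PLACEHOLDER.matcher(template);
              StringBuffer out = new StringBuffer();
              while (m.find()) {
                  String name = m.group(1);
                  String value = values.containsKey(name) ? values.get(name) : m.group(0);
                  m.appendReplacement(out, Matcher.quoteReplacement(value));
              }
              m.appendTail(out);
              return out.toString();
          }

          public static void main(String[] args) {
              Map<String, String> data = new HashMap<>();
              data.put("title1", "Hello World!");
              data.put("title2", "Hi World!");
              String html = "<h1><%title1%></h1>\n<h2><%title2%></h2>";
              System.out.println(render(html, data));
          }
      }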

    Read the article

  • Convert a Delphi example using TDatabase and local paradox table to server storage

    - by Brian Frost
    I am looking at the Developer Express Quantum Grid example 'IssueList' which is a useful bug reporting and tracking application that's almost ready to go out of the box. It uses a TDatabase component with several paradox (.db) tables. Is it simple to rejig the TDatabase settings to use a database on a shared machine so that several of us can access it together across the network? If so, what would be the steps needed please?

    Read the article

  • High-Quality Text-To-Speech engine for personal use

    - by phihag
    I'm looking for a high-quality TTS engine that I can afford (let's say less than $1000). So far, I've tried flite and festival. However, while the results are certainly understandable, technical texts are hard to follow. Commercial TTS solutions from Loquendo and Readspeaker sound way better. However, these companies don't seem to be willing to sell their product to mere mortals; I can't find a price on either's homepage. So, what are good TTS solutions for personal use?

    Read the article

  • Moving MVC2 Helpers to MVC3 razor view engine

    - by Dai Bok
    Hi, in my MVC 2 site I have an HTML helper that I use to add JavaScript files to my pages. In the master page I include the main scripts, and then in the aspx pages I include page-specific scripts. For example, my Site.Master has something like this:

      ...
      <head>
          <%= Html.RenderScripts() %>
      </head>
      ...
      // core scripts for the main page
      <% Html.AddScript("/scripts/jquery.js") %>
      <% Html.AddScript("/scripts/myLib.js") %>
      ...

    Then in a child aspx page I may also want to include other scripts:

      ...
      // the page-specific script I want to use
      <% Html.AddScript("/scripts/register.aspx.js") %>
      ...

    So when the full page gets rendered, the JavaScript files are all collected and rendered in the head by the Site.Master placeholder function RenderScripts. This works fine.

    Now with MVC 3 and the Razor view engine, layout pages behave differently, and my page-level scripts are no longer rendered/included; all I see is the LayoutMaster content. How do I get this solution to work with MVC 3 and the Razor view engine? (The helper has already been rewritten to return an HTMLString ;-))

    For reference, my MasterLayout looks like this:

      ...
      <head>
          @{
              Html.AddJavaScript("/Scripts/jQuery.js");
              Html.AddJavaScript("/Scripts/myLib.js");
          }
          // Render scripts
          @Html.RenderScripts()
      </head>
      ...

    and the child page looks like this:

      @{
          Layout = "~/Views/Shared/MasterLayout.cshtml";
          ViewBag.Title = "Child Page";
          Html.AddJavaScript("/Scripts/register.aspx.js");
      }
      ...
      <div>some html</div>

    Thanks for your help.

    Edit: just to explain, in case this question is not clear enough. When producing a page, I collect all the JavaScript files the designers want to use via Html.AddJavaScript("filename.js") and store them in a dictionary, which (1) stops people adding duplicate .js files; then, when the page is ready to render, I write out all the JavaScript files neatly in the header, which (2) keeps the JS in one place and prevents designers from adding JavaScript files all over the place. This used to work fine with Master/SiteMaster pages in MVC 2, but how can I achieve this with Razor?

    Read the article

  • Mysql Text Storage?

    - by mii
    I was wondering: if you were to have an article or articles with huge amounts of text, which data type would be better when creating the database structure for the article text, and why? What would the advantages or disadvantages be, if any? I was thinking of using one of the data types below to hold the article text in the MySQL database:

      VARCHAR
      TEXT
      MEDIUMTEXT
      LONGTEXT
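
    For scale, TEXT tops out around 64 KB, MEDIUMTEXT around 16 MB and LONGTEXT around 4 GB, so the choice mostly comes down to the largest article you expect. A minimal sketch of one possible schema using MEDIUMTEXT, created through JDBC; the table and column names are invented for illustration:

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.Statement;

      public class ArticleSchemaDemo {
          public static void main(String[] args) throws Exception {
              // Connection details are placeholders.
              try (Connection conn = DriverManager.getConnection(
                      "jdbc:mysql://localhost/testdb", "user", "password")) {

                  try (Statement st = conn.createStatement()) {
                      // MEDIUMTEXT holds up to ~16 MB per value, which is
                      // usually plenty for a single article.
                      st.executeUpdate(
                          "CREATE TABLE IF NOT EXISTS articles (" +
                          "  id INT AUTO_INCREMENT PRIMARY KEY," +
                          "  title VARCHAR(255) NOT NULL," +
                          "  body MEDIUMTEXT NOT NULL" +
                          ") ENGINE=InnoDB");
                  }

                  try (PreparedStatement ps = conn.prepareStatement(
                          "INSERT INTO articles (title, body) VALUES (?, ?)")) {
                      ps.setString(1, "Example article");
                      ps.setString(2, "Lots and lots of text ...");
                      ps.executeUpdate();
                  }
              }
          }
      }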

    Read the article

  • Simple File-based Record Storage with Fast Text Searching for Compact Framework and Silverlight

    - by Eric Farr
    I have a single table with lots of records (100k) that I need to be able to index and search on several text fields. The easiest searches will have the first part of the string specified (e.g., LIKE 'ABC%' in SQL). The tougher searches will need to find any substring within the text fields (e.g., LIKE '%ABC%' in SQL). I need to run on the Compact Framework. SQL Compact is a memory hog and overkill for my one table. Besides, I'd like to be able to run on Silverlight 4 eventually. The file and indexes can be generated on the full .NET Framework; I only need read capability on the Compact Framework. My records are not especially large and can be expressed in a fixed-length format. I'm looking for existing code or libraries to avoid having to write a file-based B-tree implementation from scratch.

    Read the article

  • Confused with the Isolated Storage with Multiple Assemblies Access

    - by Peter Lee
    I googled and searched a lot, but had no luck. I have a WindowsFormsApplication.exe and a ConsoleApplication.exe, and I want both of them to access the same isolated storage. Is that possible? I tried using this in ConsoleApplication.exe:

      IsolatedStorageFile isoStore = IsolatedStorageFile.GetMachineStoreForApplication();

    but I got: IsolatedStorageException: Unable to determine application identity of the caller. How can I fix this? Or is this approach usable at all? P.S.: This is NOT a ClickOnce app.

    Read the article

  • Where should damage logic go: Game Engine or Character Class?

    - by numerical25
    I am making a game and I am trying to decide on the best practice for exchanging damage between two objects on the screen. Should the damage be passed directly between the two objects, or should it be passed through a central game engine that decides the damage and handles criteria such as hit or miss and the amount dealt? Overall, what is the best practice?
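
    One way to picture the "central game engine" option is a small combat resolver that owns the hit/miss and damage rules, while characters only expose their stats and a way to take damage. A rough sketch, with every name and formula invented for illustration:

      import java.util.Random;

      class Character {
          final String name;
          int health;
          int attackPower;
          int defense;

          Character(String name, int health, int attackPower, int defense) {
              this.name = name;
              this.health = health;
              this.attackPower = attackPower;
              this.defense = defense;
          }

          void takeDamage(int amount) {
              health = Math.max(0, health - amount);
          }
      }

      // The "central engine" option: damage rules live here, not in Character.
      class CombatResolver {
          private final Random rng = new Random();

          void resolveAttack(Character attacker, Character defender) {
              // Invented rule: 80% chance to hit.
              if (rng.nextInt(100) >= 80) {
                  System.out.println(attacker.name + " misses " + defender.name);
                  return;
              }
              // Invented rule: damage is attack minus defense, at least 1.
              int damage = Math.max(1, attacker.attackPower - defender.defense);
              defender.takeDamage(damage);
              System.out.println(attacker.name + " hits " + defender.name
                      + " for " + damage + " (hp now " + defender.health + ")");
          }
      }

      public class CombatDemo {
          public static void main(String[] args) {
              Character hero = new Character("Hero", 30, 8, 2);
              Character slime = new Character("Slime", 12, 3, 1);
              new CombatResolver().resolveAttack(hero, slime);
          }
      }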

    Read the article

  • MySQL: Storage of multiple text fields for a record

    - by Tom
    An inexperienced question: I need to store about 10 unknown-length text fields per record in a MySQL table. I expect no more than 50K rows in total for this table, but speed is important. The database actions will be solely SELECTs for all practical purposes. I'm using InnoDB. In other words:

      id | text1 | text2 | text3 | .... | text10

    As I understand it, MySQL will store the text elsewhere and keep its own pointers in the row itself, so I'm wondering whether there are any fundamental performance implications I should be worrying about given the way the data is stored (i.e. several "sub-fetches" from the table). Thank you.

    Read the article

  • File Storage for Web Applications: Filesystem vs DB vs NoSQL engines

    - by El Yobo
    I have a web application that stores a lot of user-generated files. Currently these are all stored on the server filesystem, which has several downsides for me:

    When we move "folders" (as defined by our application) we also have to move the files on disk (although this is more due to strange design decisions on the part of the original developers than a requirement of storing things on the filesystem).

    It's hard to write tests for filesystem actions; I have a mock filesystem class that logs actions like move, delete etc. without performing them, which more or less does the job, but I don't have 100% confidence in the tests.

    I will be adding some other jobs which need to access the files from other services to perform additional tasks (e.g. indexing in Solr, generating thumbnails, movie format conversion), so I need to get at the files remotely. Doing this over network shares seems dodgy.

    Dealing with permissions on the filesystem has sometimes given us problems in the past, although now that we've moved to a pure Linux environment this should be less of an issue.

    What are the downsides of storing files as BLOBs in MySQL? I guess it would massively increase the database size and reduce the effectiveness of caches, but are there other problems? Do the same problems exist with NoSQL systems like Cassandra? Does anyone have any other suggestions that might be appropriate?

    Read the article

  • Password hashing, salt and storage of hashed values

    - by Jonathan Leffler
    Suppose you were at liberty to decide how hashed passwords were to be stored in a DBMS. Are there obvious weaknesses in a scheme like this one?

    To create the hash value stored in the DBMS, take:

    a value that is unique to the DBMS server instance as part of the salt,
    and the username as a second part of the salt,
    and create the concatenation of the salt with the actual password,
    and hash the whole string using the SHA-256 algorithm,
    and store the result in the DBMS.

    This would mean that anyone wanting to come up with a collision should have to do the work separately for each user name and each DBMS server instance. I'd plan to keep the actual hash mechanism somewhat flexible to allow for the use of the new NIST standard hash algorithm (SHA-3) that is still being worked on.

    The 'value that is unique to the DBMS server instance' need not be secret, though it wouldn't be divulged casually. The intention is to ensure that if someone uses the same password in different DBMS server instances, the recorded hashes would be different. Likewise, the user name would not be secret, just the password proper.

    Would there be any advantage to having the password first and the user name and 'unique value' second, or any other permutation of the three sources of data? Or what about interleaving the strings?

    Do I need to add (and record) a random salt value (per password) as well as the information above? (Advantage: the user can re-use a password and still, probably, get a different hash recorded in the database. Disadvantage: the salt has to be recorded. I suspect the advantage considerably outweighs the disadvantage.)

    There are quite a lot of related SO questions; this list is unlikely to be comprehensive:

    Encrypting/Hashing plain text passwords in database
    Secure hash and salt for PHP passwords
    The necessity of hiding the salt for a hash
    Client-side MD5 hash with time salt
    Simple password encryption
    Salt generation and Open Source software

    I think that the answers to these questions support my algorithm (though if you simply use a random salt, then the 'unique value per server' and username components are less important).
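
    A minimal sketch of the scheme as described, in Java, with the per-password random salt from the last question included. It follows the question's construction (a single SHA-256 pass); purpose-built password hashes such as bcrypt, scrypt or PBKDF2 are generally preferred in practice, and the instance identifier here is just a placeholder:

      import java.nio.charset.StandardCharsets;
      import java.security.MessageDigest;
      import java.security.SecureRandom;
      import java.util.Base64;

      public class PasswordHasher {
          // Per-instance value described in the question; a placeholder here.
          private static final String SERVER_INSTANCE_ID = "dbms-instance-42";

          private static final SecureRandom RANDOM = new SecureRandom();

          // Generates a random per-password salt, Base64-encoded for storage.
          static String newSalt() {
              byte[] salt = new byte[16];
              RANDOM.nextBytes(salt);
              return Base64.getEncoder().encodeToString(salt);
          }

          // SHA-256 over instanceId + username + randomSalt + password.
          static String hash(String username, String randomSalt, String password) throws Exception {
              MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
              String material = SERVER_INSTANCE_ID + username + randomSalt + password;
              byte[] digest = sha256.digest(material.getBytes(StandardCharsets.UTF_8));
              return Base64.getEncoder().encodeToString(digest);
          }

          public static void main(String[] args) throws Exception {
              String salt = newSalt();
              // Store username, salt and the resulting hash; never the password.
              System.out.println(hash("alice", salt, "correct horse battery staple"));
          }
      }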

    Read the article

  • Storage location of yellow-blue shield icon

    - by gencha
    Where in Windows is the yellow and blue shield icon stored? I need to use it in a TaskDialog emulation for XP and am having a hard time tracking it down. It's not in shell32.dll, explorer.exe, ieframe.dll or wmploc.dll (these contain a lot of the icons commonly used in Windows).

    Read the article

  • Send a "304 Not Modified" for images stored in the datastore

    - by Emilien
    I store user-uploaded images in the Google App Engine datastore as db.Blob, as proposed in the docs. I then serve those images on /images/<id>.jpg. The server always sends a 200 OK response, which means that the browser has to download the same image multiple times (== slower) and that the server has to send the same image multiple times (== more expensive). As most of those images will likely never change, I'd like to be able to send a 304 Not Modified response. I am thinking about calculating some kind of hash of the picture when the user uploads it, and then using this to know whether the user already has the image (maybe send the hash as an ETag?). I have found this answer and this answer that explain the logic pretty well, but I have 2 questions: Is it possible to send an ETag in Google App Engine? Has anyone implemented such logic, and/or is there a code snippet available?
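
    App Engine handlers can generally set response headers themselves, so the ETag flow can be sketched as below. This is written as a plain Java servlet (the same logic applies in the Python runtime), and StoredImage is a hypothetical stand-in for the datastore entity holding the hash computed at upload time:

      import java.io.IOException;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      public class ImageServlet extends HttpServlet {

          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                  throws IOException {
              String id = req.getPathInfo();           // e.g. "/123.jpg"
              StoredImage image = loadImage(id);       // hypothetical datastore lookup
              if (image == null) {
                  resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                  return;
              }

              String etag = "\"" + image.contentHash + "\"";  // hash computed at upload time
              String ifNoneMatch = req.getHeader("If-None-Match");
              if (etag.equals(ifNoneMatch)) {
                  // Browser already has this exact image: no body needed.
                  resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
                  return;
              }

              resp.setHeader("ETag", etag);
              resp.setContentType("image/jpeg");
              resp.getOutputStream().write(image.bytes);
          }

          // Hypothetical storage model, for illustration only.
          static class StoredImage {
              String contentHash;
              byte[] bytes;
          }

          private StoredImage loadImage(String id) {
              return null; // replace with a datastore query keyed on the image id
          }
      }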

    Read the article

  • Decentralized synchronized secure data storage

    - by Alberich
    Introduction: I am going to ask a question which seems utopian to me, but I need to know if there is a way to achieve what I need, and if not, why not.

    The idea: Suppose I have a database structure in MySQL. I want to create a solution that allows anyone (no matter who, no matter where) to have a synchronized copy (an updated clone) of this database, with its content. It is not going to be just one synchronized copy; it could (and should) be replicated many times (say, ten copies all over the world). And, most importantly, it must be secure. By secure I mean that only real, accepted transactions will be synchronized with all the other database copies/clones, no matter how many there are. Note: since it would be quite difficult to do the synchronization in real time, I will design everything to make that feature dispensable, so it is not required.

    My auto-suggestion: this is how I am thinking to manage it.

    Time identifiers and update checking: every action (insert, update, delete...) will be stored as the action instruction itself, associated with a time identifier. (Rather than a DATETIME field, I think it will be an INT holding the number of milliseconds elapsed since 1 January 2013, for example.) Each copy is going to ask a "neighbour copy" for new actions done since its last update, and execute them after checking that they are allowed. A rough sketch of this action log appears below.

    Problem 1: the "neighbour copy" could be outdated too.
    Solution 1: do not ask just one neighbour; create a random list of some of the copies/clones and ask them for news. (I could avoid the list and ask ALL the clones for updates, but this will be inefficient if the number of clones grows too much.)

    Problem 2: real-time global synchronization is not active. What if someone at CLONE_ENTERPRISING inserts a row into TABLE, this row goes to every clone, someone at CLONE_FIXEMALL deletes this row, and at the same time, somewhere in an outdated clone, someone at CLONE_DROPOUT edits this row (now nonexistent at the other clones)?
    Solution 2: easy enough; force a GLOBAL synchronization before doing any new action that depends on third-party data (an edit, for example). This global synchronization would be unnecessary when making an INSERT, for instance. Note: someone could have some fun and make the same insert in two clones; since they are not updated in real time, the row would exist twice. But it's the same as with a single database; where needed, we check whether the same row already exists before doing the final action. Not a problem.

    Problem 3: it is possible to edit the code and not filter actions, so someone could spread instructions to delete everything, or just engage in some trolling. This is not a problem, since good clones will always exist somewhere, and clones that have gone bad will simply stop being of interest.

    I really appreciate you reading this. I know this is not the perfect solution; it possibly has hundreds of holes, but it is my starting point. I will appreciate anything you can teach me. Thanks a lot.

    PS: it could be that what I am trying to do already exists and has its own name. Sorry for asking, then (I'd be thankful for that name, if it exists).
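
    A rough sketch of the action log described above, with the millisecond time identifier and the "give me everything since T" exchange between clones; all names are invented for illustration and the validation step is left as a stub:

      import java.util.ArrayList;
      import java.util.List;

      // One replicated action: the SQL-like instruction plus the time identifier
      // proposed in the question (milliseconds since some agreed epoch).
      class ReplicatedAction {
          final long timeId;        // e.g. System.currentTimeMillis()
          final String instruction; // e.g. "INSERT INTO t (a) VALUES ('x')"

          ReplicatedAction(long timeId, String instruction) {
              this.timeId = timeId;
              this.instruction = instruction;
          }
      }

      // A clone keeps its local action log and can answer "what happened since T?".
      class CloneNode {
          private final List<ReplicatedAction> log = new ArrayList<>();

          void record(String instruction) {
              log.add(new ReplicatedAction(System.currentTimeMillis(), instruction));
          }

          // What a neighbour asks for when it synchronizes.
          List<ReplicatedAction> actionsSince(long lastSeenTimeId) {
              List<ReplicatedAction> result = new ArrayList<>();
              for (ReplicatedAction a : log) {
                  if (a.timeId > lastSeenTimeId) {
                      result.add(a);
                  }
              }
              return result;
          }

          // Apply a neighbour's actions after validating them ("only accepted
          // transactions"); validation is a placeholder here.
          void applyFrom(CloneNode neighbour, long lastSeenTimeId) {
              for (ReplicatedAction a : neighbour.actionsSince(lastSeenTimeId)) {
                  if (isAllowed(a)) {
                      log.add(a); // a real clone would also execute the instruction
                  }
              }
          }

          private boolean isAllowed(ReplicatedAction a) {
              return true; // placeholder for the question's validation rules
          }
      }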

    Read the article

  • problem downloading movie to iphone for storage and playback

    - by padatronic
    I am basically making a video library where you download videos, and I then write them to the application's documents folder. This all works fine, and if I stream the video from online it plays fine; indeed, I can also stream it from the resource folder fine. However, after downloading it and saving it to the documents folder, attempting to stream it gives the error 'movie format not supported'. Any ideas? Thanks very much.

    Read the article

  • Implementing a logging library in .NET with a database as the storage medium

    - by Dave
    I'm just starting to work on a logging library that everyone can use to keep track of any sort of system information while the user is running our application. The simplest example so far is to track Info, Warnings, and Errors.

    I want all plugins to be able to use this feature, but since each developer might have a different idea of what's important to report, I want to keep this as generic as possible. In the C++ world, I would normally use something like a std::pair<string,string> to act as a key/value pair structure, and have a std::list of these to act as a "row" in the log. The log cache would then be a list<list<pair<string,string>>> (ugh!). This way, developers can use a const string key like INFO, WARNING or ERROR to have consistent naming for a column in the database (for SELECTing specific types of information).

    I'd like the database to be able to deal with any number of distinct column names. For example, John might have an INFO row with a column called USER, and Bill might have an INFO row with a column called FILENAME. I want the log viewer to be able to display all information, and if one report doesn't have a value for INFO / FILENAME, those fields should just appear blank.

    So one option is to use List<List<KeyValuePair<String,String>>>, another is to have the log library consumer somehow "register" its schema and then have the database do an ALTER TABLE to handle this situation, and yet another idea is to have a table that's just for key/value pairs, with a foreign key that maps the key/value pairs back to the original log entry.

    I obviously don't want logging to bog down the system, so I only lock the log cache to make a copy of the data (and remove the already-copied data); a background thread then dumps the information to the database.

    My specific questions regarding this are:

    1. Do you see any performance issues? In other words, have you ever tried something like this and found that certain things just don't work well in practice?
    2. Is there a more .NET-ish way to implement the key/value pairs, other than List<List<KeyValuePair<String,String>>>?
    3. Even if there is a way to do #2 better, is the ALTER TABLE idea I proposed above a Bad Thing?
    4. Would you recommend multiple databases over a single one? I don't yet have an idea of how frequently the log would get written to, but ideally we would like to have lots of low-level information. Perhaps there should be a DB with a fixed schema only for the low-level stuff, and then another DB that's more flexible for reporting information back to users.
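
    The third option, a dedicated key/value table pointing back at the log entry, would look roughly like the schema below. It is sketched here through JDBC simply because the schema itself is language-neutral; every table and column name is invented for illustration:

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.Statement;

      public class KeyValueLogSchema {
          public static void main(String[] args) throws Exception {
              // Connection details are placeholders.
              try (Connection conn = DriverManager.getConnection(
                      "jdbc:mysql://localhost/logdb", "user", "password");
                   Statement st = conn.createStatement()) {

                  // One row per log entry: just the level and a timestamp.
                  st.executeUpdate(
                      "CREATE TABLE IF NOT EXISTS log_entry (" +
                      "  id BIGINT AUTO_INCREMENT PRIMARY KEY," +
                      "  level VARCHAR(16) NOT NULL," +        // INFO / WARNING / ERROR
                      "  logged_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP)");

                  // One row per key/value pair, pointing back at its entry, so each
                  // plugin can invent its own 'columns' (USER, FILENAME, ...).
                  st.executeUpdate(
                      "CREATE TABLE IF NOT EXISTS log_entry_field (" +
                      "  entry_id BIGINT NOT NULL," +
                      "  field_name VARCHAR(64) NOT NULL," +
                      "  field_value TEXT," +
                      "  FOREIGN KEY (entry_id) REFERENCES log_entry(id))");
              }
          }
      }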

    Read the article

  • Database storage for high sample rate data in web app

    - by Jim
    I've got multiple sensors feeding data to my web app. Each channel is 5 samples per second, and the data gets uploaded bundled together in 1-minute JSON messages (containing 300 samples). The data will be graphed using flot at multiple zoom levels, from 1 day down to 1 minute. I'm using Amazon SimpleDB and I'm currently storing the data in the 1-minute chunks that I receive it in. This works well for high zoom levels, but for full days there will simply be too many rows to retrieve. The idea I currently have is that every hour I can crawl through the data, collect together 300 samples for the last hour, and store them in an hour Domain (table if you like). Does this sound like a reasonable solution? How have others implemented the same sort of systems?
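
    One way to read the hourly roll-up idea: each hour holds 5 x 3600 = 18,000 raw samples, and keeping 300 per hour means averaging each block of 60. A small sketch of that downsampling step, assuming a plain average is an acceptable summary:

      import java.util.ArrayList;
      import java.util.List;

      public class HourlyRollup {

          // Downsample one hour of raw readings (5 Hz -> 18,000 samples) to a
          // fixed number of buckets (300 here) by averaging each bucket, so a
          // full day stays cheap to fetch and plot.
          static double[] rollup(List<Double> hourOfSamples, int buckets) {
              double[] out = new double[buckets];
              int perBucket = hourOfSamples.size() / buckets; // 60 raw samples per bucket
              for (int b = 0; b < buckets; b++) {
                  double sum = 0;
                  for (int i = 0; i < perBucket; i++) {
                      sum += hourOfSamples.get(b * perBucket + i);
                  }
                  out[b] = sum / perBucket;
              }
              return out;
          }

          public static void main(String[] args) {
              List<Double> hour = new ArrayList<>();
              for (int i = 0; i < 5 * 3600; i++) {        // 5 samples/sec for an hour
                  hour.add(Math.sin(i / 100.0));          // fake sensor data
              }
              double[] summary = rollup(hour, 300);
              System.out.println("bucket 0 = " + summary[0] + ", buckets = " + summary.length);
          }
      }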

    Read the article

  • Word documents generation in web app using Eclipse BIRT Report Engine

    - by Orr151
    Hi, is it possible to generate Word documents (*.doc) in a Java web application using Eclipse BIRT (Report Engine)? I want a .rptdesign file to be the input to the generating process. I could not find any example or tutorial. What would you recommend as an alternative solution? As far as I know, JasperReports allows only RTF generation. Thank you for your answer/explanation.
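
    The usual report-engine setup for this, sketched from memory and so to be checked against your BIRT version, is to open the .rptdesign, create a run-and-render task and ask for the "doc" output format; whether that format is available depends on the Word emitter being installed:

      import org.eclipse.birt.core.framework.Platform;
      import org.eclipse.birt.report.engine.api.EngineConfig;
      import org.eclipse.birt.report.engine.api.IReportEngine;
      import org.eclipse.birt.report.engine.api.IReportEngineFactory;
      import org.eclipse.birt.report.engine.api.IReportRunnable;
      import org.eclipse.birt.report.engine.api.IRunAndRenderTask;
      import org.eclipse.birt.report.engine.api.RenderOption;

      public class BirtWordExport {
          public static void main(String[] args) throws Exception {
              EngineConfig config = new EngineConfig();
              Platform.startup(config);
              IReportEngineFactory factory = (IReportEngineFactory) Platform
                      .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
              IReportEngine engine = factory.createReportEngine(config);

              // The .rptdesign file is the input, as in the question.
              IReportRunnable design = engine.openReportDesign("sample.rptdesign");
              IRunAndRenderTask task = engine.createRunAndRenderTask(design);

              RenderOption options = new RenderOption();
              options.setOutputFormat("doc");          // Word emitter, if installed
              options.setOutputFileName("sample.doc");
              task.setRenderOption(options);

              task.run();
              task.close();
              engine.destroy();
              Platform.shutdown();
          }
      }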

    Read the article

  • Credit Card storage solution

    - by jtnire
    Hi everyone, I'm developing a solution that is designed to store membership details as well as credit card details. I'm trying to comply with PCI DSS as much as I can. Here is my design so far:

    PAN = primary account number == the long number on the credit card.

    Server A is a remote server. It stores all membership details (names, addresses etc.) and provides an individual Key A for each PAN stored.

    Server B is a local server; it actually holds the encrypted PANs as well as Key B, and does the decryption.

    To get a PAN, the client has to authenticate with BOTH servers, ask Server A for the respective Key A, then give Key A to Server B, which will return the PAN to the client (provided authentication was successful). Server A will only ever encrypt Key A with Server B's public key, as it will have it beforehand. Server B will probably have to send a salt first, though I don't think that has to be encrypted.

    I haven't really thought about any implementation (i.e. coding) specifics yet regarding the above; however, the solution uses Java's Cajo framework (a wrapper for RMI), so that is how the servers will communicate with each other (currently, membership details are transferred this way). The reason I want Server B to do the decryption, and not the client, is that I am afraid of decryption keys going into the client's RAM, even though it's probably just as bad on the server.

    Can anyone see anything wrong with the above design? It doesn't matter if it has to be changed. Thanks, jtnire
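
    One possible reading of the design, reduced to its cryptographic core: encrypt the PAN with a per-card AES key (playing the role of Key A) and encrypt that key with Server B's public key before it travels. This is only an illustrative sketch with simplified key handling, not PCI guidance:

      import java.nio.charset.StandardCharsets;
      import java.security.KeyPair;
      import java.security.KeyPairGenerator;
      import java.security.SecureRandom;
      import java.util.Base64;
      import javax.crypto.Cipher;
      import javax.crypto.KeyGenerator;
      import javax.crypto.SecretKey;
      import javax.crypto.spec.GCMParameterSpec;

      public class PanEnvelopeDemo {
          public static void main(String[] args) throws Exception {
              // "Key A": a per-PAN AES key, generated on Server A in the design.
              KeyGenerator kg = KeyGenerator.getInstance("AES");
              kg.init(128);
              SecretKey keyA = kg.generateKey();

              // Server B's key pair; only its public key ever leaves Server B.
              KeyPair serverB = KeyPairGenerator.getInstance("RSA").generateKeyPair();

              // Encrypt the PAN with Key A (AES-GCM), as Server B would store it.
              byte[] iv = new byte[12];
              new SecureRandom().nextBytes(iv);
              Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
              aes.init(Cipher.ENCRYPT_MODE, keyA, new GCMParameterSpec(128, iv));
              byte[] encryptedPan = aes.doFinal(
                      "4111111111111111".getBytes(StandardCharsets.UTF_8));

              // Encrypt Key A with Server B's public key before sending it across,
              // matching "Server A will only ever encrypt Key A with Server B's public key".
              Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
              rsa.init(Cipher.ENCRYPT_MODE, serverB.getPublic());
              byte[] wrappedKeyA = rsa.doFinal(keyA.getEncoded());

              System.out.println("encrypted PAN: " + Base64.getEncoder().encodeToString(encryptedPan));
              System.out.println("wrapped Key A: " + Base64.getEncoder().encodeToString(wrappedKeyA));
          }
      }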

    Read the article
