Search Results

Search found 15931 results on 638 pages for 'password storage'.


  • Is it possible to read data that has been separately copied to the Android SD card without having root access?

    - by icecream
    I am developing an application that needs to access data on the SD card. When I run it on my development device (an Odroid with Android 2.1) I have root access and can construct the path using:

        File sdcard = Environment.getExternalStorageDirectory();
        String path = sdcard.getAbsolutePath() + File.separator + "mydata";
        File data = new File(path);
        File[] files = data.listFiles(new FilenameFilter() {
            @Override
            public boolean accept(File dir, String filename) {
                return filename.toLowerCase().endsWith(".xyz");
            }
        });

    However, when I install this on a phone (2.1) where I do not have root access, I get files == null. I assume this is because I do not have the right permissions to read the data from the SD card. I also get files == null when just trying to list the files on /sdcard, so the same applies without my constructed path. Also, this app is not intended to be distributed through the app store and needs to use data copied separately to the SD card, so this is a real use-case. It is too much data to put in res/raw (I have tried, it did not work). I have also tried adding

        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

    to the manifest, even though I only want to read the SD card, but it did not help. I have not found a permission type for reading the storage. There is probably a correct way to do this, but I haven't been able to find it. Any hints would be useful.
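
    A quick diagnostic sketch (not a fix) that may help narrow this down: before calling listFiles, check whether external storage is actually mounted and whether the directory exists and is readable for the app's process. The directory name "mydata" is taken from the question; the rest uses only standard android.os.Environment, android.util.Log and java.io.File calls.

        // Diagnostic only: report why listFiles might be returning null.
        String state = Environment.getExternalStorageState();
        if (!Environment.MEDIA_MOUNTED.equals(state)
                && !Environment.MEDIA_MOUNTED_READ_ONLY.equals(state)) {
            Log.w("SdCheck", "External storage not mounted, state=" + state);
        }
        File data = new File(Environment.getExternalStorageDirectory(), "mydata");
        Log.i("SdCheck", "exists=" + data.exists()
                + " isDirectory=" + data.isDirectory()
                + " canRead=" + data.canRead());
        // listFiles() returns null when the path is not a directory or cannot
        // be read, so these flags usually point at the actual cause.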

    Read the article

  • StorageClientException: The specified message does not exist?

    - by Aaron
    I have a simple video-encoding worker role that pulls messages from a queue, encodes the video, then uploads it to storage. Everything seems to be working, but occasionally, when deleting the message after I am done encoding and uploading, I get "StorageClientException: The specified message does not exist." Although the video is processed, I believe the message is reappearing in the queue because it is not being deleted correctly. Is it possible that another instance of the worker role is processing and deleting the message? Doesn't GetMessage() prevent other worker roles from picking up the same message? Am I doing something wrong in the setup of my queue? What could be causing this message to not be found on delete? Some code...

        // onStart() queue setup
        var queueStorage = _storageAccount.CreateCloudQueueClient();
        _queue = queueStorage.GetQueueReference(QueueReference);
        queueStorage.RetryPolicy = RetryPolicies.Retry(5, new TimeSpan(0, 5, 0));
        _queue.CreateIfNotExist();

        public override void Run()
        {
            while (true)
            {
                try
                {
                    var msg = _queue.GetMessage(new TimeSpan(0, 5, 0));
                    if (msg != null)
                    {
                        EncodeIt(msg);
                        PostIt(msg);
                        _queue.DeleteMessage(msg);
                    }
                    else
                    {
                        Thread.Sleep(WaitTime);
                    }
                }
                catch (StorageClientException exception)
                {
                    BlobTrace.Write(exception.ToString());
                    Thread.Sleep(WaitTime);
                }
            }
        }

    Read the article

  • Architecture for data layer that uses both localStorage and a REST remote server

    - by Zack
    Does anybody have ideas or references on how to implement a data persistence layer that uses both localStorage and a REST remote storage? The data of a certain client is stored with localStorage (using an ember-data IndexedDB adapter). The locally stored data is synced with the remote server (using the ember-data RESTAdapter). The server gathers all data from clients. Using mathematical set notation: Server = Client1 ∪ Client2 ∪ ... ∪ ClientN where, in general, a record may not be unique to a certain client. Here are some scenarios: A client creates a record. The id of the record cannot be set on the client, since it may conflict with a record stored on the server. Therefore a newly created record needs to be committed to the server, receive the id, and then be created in localStorage. A record is updated on the server, and as a consequence the data in localStorage and on the server go out of sync. Only the server knows that, so the architecture needs to implement a push mechanism (?). Would you use 2 stores (one for localStorage, one for REST) and sync between them, or use a hybrid IndexedDB/REST adapter and write the sync code within the adapter? Can you see any way to avoid implementing push (WebSockets, ...)?

    Read the article

  • Windows Azure - Automatic Load Balancing - partitioning

    - by veda
    I was going through some videos. I found that Windows Azure groups blobs into partitions based on the partition key and automatically load-balances these partitions across its servers. The partition key for a blob is the blob name, so Azure partitions automatically using the blob name. Now, my question is: can I make Azure partition based on the container name? I want my partition key to be the container name. For example, I have a storage account. In it I have 2 containers named container1 and container2. In container1 I have 1000 files named 1.txt, 2.txt, 3.txt, ......., 501.txt, 502.txt, ..... 999.txt, 1000.txt, and in container2 I have another 1000 files named 1001.txt, 1002.txt, 1003.txt, ......., 1501.txt, 1502.txt, ..... 1999.txt, 2000.txt. Now, will Windows Azure generate 2000 partitions based on the blob names and serve me through several servers? Wouldn't it be better if Azure partitioned based on the container name: container1 on one server and container2 on another?

    Read the article

  • How do I use HTML5's localStorage in a Google Chrome extension?

    - by davidkennedy85
    I am trying to develop an extension that will work with Awesome New Tab Page. I've followed the author's advice to the letter, but it doesn't seem like any of the script I add to my background page is being executed at all. Here's my background page: <script> var info = { poke: 1, width: 1, height: 1, path: "widget.html" } chrome.extension.onRequestExternal.addListener(function(request, sender, sendResponse) { if (request === "mgmiemnjjchgkmgbeljfocdjjnpjnmcg-poke") { chrome.extension.sendRequest( sender.id, { head: "mgmiemnjjchgkmgbeljfocdjjnpjnmcg-pokeback", body: info, } ); } }); function initSelectedTab() { localStorage.setItem("selectedTab", "Something"); } initSelectedTab(); </script> Here is manifest.json: { "update_url": "http://clients2.google.com/service/update2/crx", "background_page": "background.html", "name": "Test Widget", "description": "Test widget for mgmiemnjjchgkmgbeljfocdjjnpjnmcg.", "icons": { "128": "icon.png" }, "version": "0.0.1" } Here is the relevant part of widget.html: <script> var selectedTab = localStorage.getItem("selectedTab"); document.write(selectedTab); </script> Every time, the browser just displays null. The local storage isn't being set at all, which makes me think the background page is completely disconnected. Do I have something wired up incorrectly?

    Read the article

  • How to use dd to make split ISO images from a storage device?

    - by Gustavo Bandeira
    This is a double question; I just hope it's valid. I need to know how to use dd to make split ISO images from a storage device. I'm doing it through SSH: the process is slow and the risk of failing in the middle of the operation (1) is high, so I need to know how to make these split ISO images from my storage device. (2) I'm also searching for some reference on dd; it could be a book or a good website about it, for when any doubt arises. 1 - I'm doing it on a ~60GB storage device; it took me a whole day to copy ~10GB from this disk. 2 - For curious people, I'm trying to recover an accidentally deleted file from an iPod. So far I've been able to do the whole process, I just need to improve it because I left it copying the disk yesterday and today it gave me an error at around ~10GB.

    Read the article

  • Make a folder/partition on one computer appear as a mass storage device to another?

    - by user137560
    Is there any way to make a folder or a partition on a computer (Linux or Windows) act like a mass storage device to other computers or devices when connected with a male-to-male USB cable? For example, I have a Windows 7 computer with 2 partitions, C and D. I would then connect that computer to another computer or a smart TV using a male-to-male USB cable, and the other computer or device would recognize a folder/partition on the current computer as a mass storage device. Is this possible? If not, is there any USB switch that can connect an external hard drive or flash drive to both a computer and a TV without the need to manually switch them? (I know about some USB switches, but they only support automatic switching with certain types of printers, not with mass storage.)

    Read the article

  • Am I going the right way to make a login system secure with this simple password salting?

    - by LoVeSmItH
    I have two fields in the login table: password and salt. And I have this little function to generate the salt:

        function random_salt($h_algo="sha512"){
            $salt1=uniqid(rand(),TRUE);
            $salt2=date("YmdHis").microtime(true);
            if(function_exists('dechex')){
                $salt2=dechex($salt2);
            }
            $salt3=$_SERVER['REMOTE_ADDR'];
            $salt=$salt1.$salt2.$salt3;
            if(function_exists('hash')){
                $hash=(in_array($h_algo,hash_algos()))?$h_algo:"sha512";
                $randomsalt=hash($hash,md5($salt)); //returns a 128 character hash if the sha512 algorithm is used.
            }else{
                $randomsalt=sha1(md5($salt)); //returns a 40 character hash
            }
            return $randomsalt;
        }

    Now, to create the user password I have the following:

        $userinput=$_POST["password"]; //don't bother about escaping, I have done it in my real project.
        $static_salt="THIS-3434-95456-IS-RANDOM-27883478274-SALT"; //some static, hard to predict secret salt.
        $salt=random_salt(); //generates a 128 character hash.
        $password=sha1($salt.$userinput.$static_salt);

    $salt is saved in the salt field of the database and $password is saved in the password field. My problem: in random_salt() I have the feeling that I am just making things complicated, and that it may not generate as secure a salt as it should. Can someone shed some light on whether I am going in the right direction? P.S. I do have an idea about crypt functions and the like. I just want to know: is my code okay? Thanks.

    Read the article

  • Password security: Is this safe?

    - by Camran
    I asked a question yesterday about password safety... I am new at security... I am using a MySQL db and need to store users' passwords there. I have been told in answers that hashing and THEN saving the hashed value of the password is the correct way of doing this. So basically I want to verify with you guys that this is correct now. It is a classifieds website, and for each classified the user posts, he has to enter a password so that he/she can remove the classified using that password later on (when the product is sold, for example). In a file called "put_ad.php" I use the $_POST method to fetch the password from a form. Then I hash it and put it into a MySQL table. Then whenever the user wants to delete the ad, I check the entered password by hashing it and comparing the hashed value of the entered password against the hashed value in the MySQL db, right? BUT, what if I as an admin want to delete a classified: is there a method to "unhash" the password easily? sha1 is used currently, btw. Some code is very much appreciated. Thanks

    Read the article

  • Is Storing Cookies in a Database Safe?

    - by viatropos
    If I use mechanize I can, for instance, create a new Google Analytics profile for a website. I do this by programmatically filling out the login form and storing the cookies in the database. Then, at least until the cookie expires, I can access my Analytics admin panel without having to enter my username and password again. Assuming you can't create a new Analytics profile any other way (with OpenAuth or any of that; I don't think it works for actually creating a new Google Analytics profile, since the Analytics API is for viewing the data, but I need to create a new Analytics profile), is storing the cookie in the database a bad thing? If I do store the cookie in the database, it makes it super easy to programmatically log in to Google Analytics without the user ever having to go to the browser (maybe the app has functionality that says "user, you can schedule a hook that creates a new analytics profile for each new domain you create, just enter your credentials once and we'll keep you logged in and safe"). Otherwise I have to keep transferring around emails and passwords, which seems worse. So is storing cookies in the database safe?

    Read the article

  • Generic class for performing mass-parallel queries. Feedback?

    - by Aaron
    I don't understand why, but there appears to be no mechanism in the client library for performing many queries in parallel for Windows Azure Table Storage. I've created a template class that can be used to save considerable time, and you're welcome to use it however you wish. I would appreciate however, if you could pick it apart, and provide feedback on how to improve this class. public class AsyncDataQuery<T> where T: new() { public AsyncDataQuery(bool preserve_order) { m_preserve_order = preserve_order; this.Queries = new List<CloudTableQuery<T>>(1000); } public void AddQuery(IQueryable<T> query) { var data_query = (DataServiceQuery<T>)query; var uri = data_query.RequestUri; // required this.Queries.Add(new CloudTableQuery<T>(data_query)); } /// <summary> /// Blocking but still optimized. /// </summary> public List<T> Execute() { this.BeginAsync(); return this.EndAsync(); } public void BeginAsync() { if (m_preserve_order == true) { this.Items = new List<T>(Queries.Count); for (var i = 0; i < Queries.Count; i++) { this.Items.Add(new T()); } } else { this.Items = new List<T>(Queries.Count * 2); } m_wait = new ManualResetEvent(false); for (var i = 0; i < Queries.Count; i++) { var query = Queries[i]; query.BeginExecuteSegmented(callback, i); } } public List<T> EndAsync() { m_wait.WaitOne(); return this.Items; } private List<T> Items { get; set; } private List<CloudTableQuery<T>> Queries { get; set; } private bool m_preserve_order; private ManualResetEvent m_wait; private int m_completed = 0; private void callback(IAsyncResult ar) { int i = (int)ar.AsyncState; CloudTableQuery<T> query = Queries[i]; var response = query.EndExecuteSegmented(ar); if (m_preserve_order == true) { // preserve ordering only supports one result per query this.Items[i] = response.Results.First(); } else { // add any number of items this.Items.AddRange(response.Results); } if (response.HasMoreResults == true) { // more data to pull query.BeginExecuteSegmented(response.ContinuationToken, callback, i); return; } m_completed = Interlocked.Increment(ref m_completed); if (m_completed == Queries.Count) { m_wait.Set(); } } }

    Read the article

  • Objective-C memory management issue

    - by Toby Wilson
    I've created a graphing application that calls a web service. The user can zoom and move around the graph, and the program occasionally decides to call the web service for more data accordingly. This is achieved by the following process: the graph has a render loop which constantly renders the graph, and some decision logic which adds web service call information to a stack. A separate thread takes the most recent web service call information from the stack and uses it to make the web service call. The other objects on the stack get binned. The idea of this is to reduce the number of web service calls to only those appropriate, and to make only one at a time. Right, with the long story out of the way (for which I apologise), here is my memory management problem: the graph has persistent (and suitably locked) NSDate* objects for the currently displayed start and end times of the graph. These are passed into the initialisers for my web service request objects. The web service call objects then retain the dates. After the web service calls have been made (or binned if they were out of date), they release the NSDate*. The graph itself releases and reallocates new NSDates* on the 'touches ended' event. If there is only one web service call object on the stack when removeAllObjects is called, EXC_BAD_ACCESS occurs in the web service call object's deallocation method when it attempts to release the date objects (even though they appear to exist and are in scope in the debugger). If, however, I comment out the release messages from the destructor, no memory leak occurs when one object on the stack is released, but memory leaks occur if there is more than one object on the stack. I have absolutely no idea what is going wrong. It doesn't make a difference what storage semantics I use for the web service call objects' dates, as they are assigned in the initialiser and then only read (so for correctness' sake they are set to readonly). It also doesn't seem to make a difference if I retain or copy the dates in the initialiser (though anything else obviously falls out of scope or is unwantedly released elsewhere and causes a crash). I'm sorry this explanation is long winded; I hope it's sufficiently clear, but I'm not gambling on that either, I'm afraid. Major big thanks to anyone that can help, or even suggest anything I may have missed.

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload Mode: Sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk Format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than series. - No significant performance improvement between parallel 4 threads and 8 threads. - Transform with binaries provides better performance than base64. - In all cases, chunk size in 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Data Source Security Part 2

    - by Steve Felts
    In Part 1, I introduced the default security behavior and listed the various options available to change that behavior.  One of the key topics to understand is the difference between directly using database user and password values versus mapping from WLS user and password to the associated database values.   The direct use of database credentials is relatively new to WLS, based on customer feedback.  Some of the trade-offs are covered in this article. Credential Mapping vs. Database Credentials Each WLS data source has a credential map that is a mechanism used to map a key, in this case a WLS user, to security credentials (user and password).  By default, when a user and password are specified when getting a connection, they are treated as credentials for a WLS user, validated, and are converted to a database user and password using a credential map associated with the data source.  If a matching entry is not found in the credential map for the data source, then the user and password associated with the data source definition are used.  Because of this defaulting mechanism, you should be careful what permissions are granted to the default user.  Alternatively, you can define an invalid default user to ensure that no one can accidentally get through (in this case, you would need to set the initial capacity for the pool to zero so that the pool is populated only by valid users). To create an entry in the credential map: 1) First create a WLS user.  In the administration console, go to Security realms, select your realm (e.g., myrealm), select Users, and select New.  2) Second, create the mapping.  In the administration console, go to Services, select Data sources, select your data source name, select Security, select Credentials, and select New.  See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureCredentialMappingForADataSource.html for more information. The advantages of using the credential mapping are that: 1) You don’t hard-code the database user/password into a program or need to prompt for it in addition to the WLS user/password and 2) It provides a layer of abstraction between WLS security and database settings such that many WLS identities can be mapped to a smaller set of DB identities, thereby only requiring middle-tier configuration updates when WLS users are added/removed. You can cut down the number of users that have access to a data source to reduce the user maintenance overhead.  For example, suppose that a servlet has the one pre-defined, special WLS user/password for data source access, hard-wired in its code in a getConnection(user, password) call.  Every WebLogic user can reap the specific DBMS access coded into the servlet, but none has to have general access to the data source.  For instance, there may be a ‘Sales’ DBMS which needs to be protected from unauthorized eyes, but it contains some day-to-day data that everyone needs. The Sales data source is configured with restricted access and a servlet is built that hard-wires the specific data source access credentials in its connection request.  It uses that connection to deliver only the generally needed day-to-day information to any caller. The servlet cannot reveal any other data, and no WebLogic user can get any other access to the data source.  This is the approach that many large applications take and is the reasoning behind the default mapping behavior in WLS. 
The disadvantages of using the credential map are that: 1) It is difficult to manage (create, update, delete) with a large number of users; it is possible to use WLST scripts or a custom JMX client utility to manage credential map entries. 2) You can’t share a credential map between data sources so they must be duplicated. Some applications prefer not to use the credential map.  Instead, the credentials passed to getConnection(user, password) should be treated as database credentials and used to authenticate with the database for the connection, avoiding going through the credential map.  This is enabled by setting the “use-database-credentials” to true.  See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureOracleParameters.html "Configure Oracle parameters" in Oracle WebLogic Server Administration Console Help. Use Database Credentials is not currently supported for Multi Data Source configurations.  When enabled, it turns off credential mapping on Generic and Active GridLink data sources for the following attributes: 1. identity-based-connection-pooling-enabled (this interaction is available by patch in 10.3.6.0). 2. oracle-proxy-session (this interaction is first available in 10.3.6.0). 3. set client identifier (this interaction is available by patch in 10.3.6.0).  Note that in the data source schema, the set client identifier feature is poorly named “credential-mapping-enabled”.  The documentation and the console refer to it as Set Client Identifier. To review the behavior of credential mapping and using database credentials: - If using the credential map, there needs to be a mapping for each WLS user to database user for those users that will have access to the database; otherwise the default user for the data source will be used.  If you always specify a user/password when getting a connection, you only need credential map entries for those specific users. - If using database credentials without specifying a user/password, the default user and password in the data source descriptor are always used.  If you specify a user/password when getting a connection, that user will be used for the credentials.  WLS users are not involved at all in the data source connection process.
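
    To make the two behaviors concrete, here is a minimal application-side sketch. The JNDI name "jdbc/MyDataSource" and the credentials are made-up placeholders; the lookup and getConnection(user, password) calls are standard JNDI/JDBC. Under the default configuration the user/password pair is treated as WLS credentials and translated through the data source's credential map (falling back to the data source's default user if no mapping exists); with use-database-credentials set to true, the same pair is passed to the database as-is.

        import javax.naming.InitialContext;
        import javax.sql.DataSource;
        import java.sql.Connection;

        public class DataSourceCredentialDemo {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext();
                // "jdbc/MyDataSource" is a placeholder JNDI name.
                DataSource ds = (DataSource) ctx.lookup("jdbc/MyDataSource");

                // Default behavior: "wlsUser"/"wlsPassword" are WebLogic credentials;
                // WLS maps them to database credentials via the credential map.
                // With use-database-credentials=true: the same pair is sent directly
                // to the database for authentication, bypassing the credential map.
                try (Connection conn = ds.getConnection("wlsUser", "wlsPassword")) {
                    // use the connection ...
                }
            }
        }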

    Read the article

  • Persistence JDO - How to query a property of a collection with JDOQL?

    - by Sergio del Amo
    I want to build an application where a user identified by an email address can have several application accounts. Each account can have one o more users. I am trying to use the JDO Storage capabilities with Google App Engine Java. Here is my attempt: @PersistenceCapable @Inheritance(strategy = InheritanceStrategy.NEW_TABLE) public class AppAccount { @PrimaryKey @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY) private Long id; @Persistent private String companyName; @Persistent List<Invoices> invoices = new ArrayList<Invoices>(); @Persistent List<AppUser> users = new ArrayList<AppUser>(); // Getter Setters and Other Fields } @PersistenceCapable @EmbeddedOnly public class AppUser { @Persistent private String username; @Persistent private String firstName; @Persistent private String lastName; // Getter Setters and Other Fields } When a user logs in, I want to check how many accounts does he belongs to. If he belongs to more than one he will be presented with a dashboard where he can click which account he wants to load. This is my code to retrieve a list of app accounts where he is registered. public static List<AppAccount> getUserAppAccounts(String username) { PersistenceManager pm = JdoUtil.getPm(); Query q = pm.newQuery(AppAccount.class); q.setFilter("users.username == usernameParam"); q.declareParameters("String usernameParam"); return (List<AppAccount>) q.execute(username); } But I get the next error: SELECT FROM invoices.server.AppAccount WHERE users.username == usernameParam PARAMETERS String usernameParam: Encountered a variable expression that isn't part of a join. Maybe you're referencing a non-existent field of an embedded class. org.datanucleus.store.appengine.FatalNucleusUserException: SELECT FROM com.softamo.pelicamo.invoices.server.AppAccount WHERE users.username == usernameParam PARAMETERS String usernameParam: Encountered a variable expression that isn't part of a join. Maybe you're referencing a non-existent field of an embedded class. 
at org.datanucleus.store.appengine.query.DatastoreQuery.getJoinClassMetaData(DatastoreQuery.java:1154) at org.datanucleus.store.appengine.query.DatastoreQuery.addLeftPrimaryExpression(DatastoreQuery.java:1066) at org.datanucleus.store.appengine.query.DatastoreQuery.addExpression(DatastoreQuery.java:846) at org.datanucleus.store.appengine.query.DatastoreQuery.addFilters(DatastoreQuery.java:807) at org.datanucleus.store.appengine.query.DatastoreQuery.performExecute(DatastoreQuery.java:226) at org.datanucleus.store.appengine.query.JDOQLQuery.performExecute(JDOQLQuery.java:85) at org.datanucleus.store.query.Query.executeQuery(Query.java:1489) at org.datanucleus.store.query.Query.executeWithArray(Query.java:1371) at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:243) at com.softamo.pelicamo.invoices.server.Store.getUserAppAccounts(Store.java:82) at com.softamo.pelicamo.invoices.test.server.StoreTest.testgetUserAppAccounts(StoreTest.java:39) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) Any idea? I am getting JDO persistance totally wrong?
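
    The error suggests the datastore layer cannot resolve users.username without an explicit join variable. For comparison, here is a minimal sketch of the standard JDOQL contains/variable syntax (javax.jdo) for filtering on a member of a collection; whether App Engine's datastore-backed JDO implementation accepts it for an embedded collection is a separate question. The class, field and helper names (AppAccount, AppUser, JdoUtil.getPm) are taken from the question.

        // Sketch: standard JDOQL collection join using a declared variable.
        PersistenceManager pm = JdoUtil.getPm();
        Query q = pm.newQuery(AppAccount.class);
        q.setFilter("users.contains(u) && u.username == usernameParam");
        q.declareVariables(AppUser.class.getName() + " u");
        q.declareParameters("String usernameParam");
        @SuppressWarnings("unchecked")
        List<AppAccount> accounts = (List<AppAccount>) q.execute(username);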

    Read the article

  • Fast distributed filesystem for large amounts of data, with metadata in a database

    - by undefined hero
    My project uses several processing machines and one storage machine. Storage is currently organized as an MSSQL FILETABLE shared folder, and every file in storage has some metadata in the database. The processing machines execute tasks for which they need files from storage together with their metadata. After completing a task, a processing machine puts the resulting data back into storage; from there it is picked up by another processing machine, which in turn generates some file and puts it back into storage, and so on. Everything was fine, but as the number of processing machines grew I found myself bottlenecked by the storage machine's hard drive performance. So I want the processing machines to put files into a distributed FS, so that they can take data from each other rather than only from the storage machine, lifting the load off it. Can you suggest a particular distributed FS that meets my needs? Or is there another way to solve this problem without one? The amount of data in the FS at any one time is several terabytes (storage can handle this, but the processors cannot). Data consistency is critical. The read/write policy is: once a file is written it is constant; it may only be removed, not modified. My current platform is Windows, but I'm ready to switch if there is a substantially more convenient solution on another one.

    Read the article

  • Ruby: backslash all non-alphanumeric characters in a string

    - by HBlend
    I have a script where I need to take a user's password and then run a command line using it. I need to backslash-escape all (there could be more than one) non-alphanumeric characters in the password. I have tried several things at this point, including the attempts below, but I'm getting nowhere. This has to be easy; I'm just missing it. I tried these and several others:

        password = password.gsub(/(\W)/, '\\1')
        password = password.gsub(/(\W)/, '\\\1')
        password = password.gsub(/(\W)/, '\\\\1')
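    A hedged sketch of two approaches that avoid the replacement-string escaping problem entirely (the sample password is made up): the block form of gsub builds the backslash explicitly, and, if the real goal is to pass the password safely on a command line, Shellwords from the Ruby standard library already does this kind of quoting.

        require 'shellwords'

        password = %q{p@ss w0rd!}

        # Block form: no replacement-string backslash rules to fight with.
        escaped = password.gsub(/\W/) { |c| "\\" + c }
        # => "p\\@ss\\ w0rd\\!"

        # If the escaping is only for shell use, this is usually the safer tool:
        safe_for_shell = Shellwords.escape(password)

    For the record, the original attempts fail because of double escaping: in a single-quoted replacement string it takes five backslashes, '\\\\\1', to produce one literal backslash followed by the capture group.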

    Read the article

  • How can I set paperclip's storage mechanism based on the current Rails environment?

    - by John Reilly
    I have a rails application that has multiple models with paperclip attachments that are all uploaded to S3. This app also has a large test suite that is run quite often. The downside with this is that a ton of files are uploaded to our S3 account on every test run, making the test suite run slowly. It also slows down development a bit, and requires you to have an internet connection in order to work on the code. Is there a reasonable way to set the paperclip storage mechanism based on the Rails environment? Ideally, our test and development environments would use the local filesystem storage, and the production environment would use S3 storage. I'd also like to extract this logic into a shared module of some kind, since we have several models that will need this behavior. I'd like to avoid a solution like this inside of every model:

        ### We don't want to do this in our models...
        if Rails.env.production?
          has_attached_file :image,
            :styles => {...},
            :storage => :s3,
            # ...etc...
        else
          has_attached_file :image,
            :styles => {...},
            :storage => :filesystem,
            # ...etc...
        end

    Any advice or suggestions would be greatly appreciated! :-)
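    One possible shape for the shared module the question asks for, as a sketch only: the module name, option keys and paths below are illustrative and assume a Paperclip version of that era where has_attached_file accepts :storage and :s3_credentials.

        # lib/environment_aware_attachment.rb -- hypothetical helper, not part of Paperclip itself
        module EnvironmentAwareAttachment
          def has_environment_aware_attached_file(name, options = {})
            storage_options =
              if Rails.env.production?
                { :storage => :s3,
                  :s3_credentials => "#{Rails.root}/config/s3.yml" }
              else
                { :storage => :filesystem,
                  :path => ":rails_root/public/system/:attachment/:id/:style/:filename" }
              end
            has_attached_file name, options.merge(storage_options)
          end
        end

        # In a model:
        class User < ActiveRecord::Base
          extend EnvironmentAwareAttachment
          has_environment_aware_attached_file :image, :styles => { :thumb => "100x100>" }
        end

    Another route, if it applies to the Paperclip version in use, is to override the defaults per environment, e.g. Paperclip::Attachment.default_options.merge!(:storage => :filesystem) in config/environments/test.rb, so the models never mention storage at all.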

    Read the article

  • Story of success: MySQL Enterprise Backup (MEB) was successfully integrated with IBM Tivoli Storage Manager (TSM) via System Backup to Tape (SBT) interface.

    - by user13334359
    Since version 3.6, MEB supports backups to tape through the SBT interface. The officially supported tool for such backups to tape is Oracle Secure Backup (OSB), but there are many other storage managers, and MEB allows them to be used through the SBT interface. Since version 3.7 it also has the option --sbt-environment, which allows environment variables that OSB does not need to be passed to third-party managers. At the same time, MEB cannot guarantee that it will work with all of them.

    This month we were contacted by a customer who wanted to use IBM Tivoli Storage Manager (TSM) with MEB. We could only tell them the same thing I wrote in the previous paragraph: this solution is supposed to work, but you have to be pioneers of this technology. And they agreed. They agreed to be the pioneers, and so the story begins.

    MEB requires the following options to be specified by those who want to connect it to the SBT interface:

    --sbt-database-name: a name that is handed over to the SBT interface. This can be any name; the default, MySQL, works for most cases, so the user is not required to specify this option.
    --sbt-lib-path: path to the SBT library. For TSM this library comes with "Data Protection for Oracle", which, in its turn, interfaces with Oracle Recovery Manager (RMAN), which uses the SBT interface. So you need to install it even if you don't use Oracle.
    --sbt-environment: environment for the third-party manager. This option is not needed when you use OSB, but is almost always necessary for third-party SBT managers. TSM requires the variable TDPO_OPTFILE to be set and pointed at the TSM configuration file.
    --backup-image=sbt:<image name>: path to the image. The prefix "sbt:" indicates that the image should be sent through the SBT interface.

    So the full command in our case looks like:

        ./mysqlbackup --port=3307 --protocol=tcp --user=backup_user --password=foobar \
          --backup-image=sbt:my-first-backup --sbt-lib-path=/usr/lib/libobk.so \
          --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt" --backup-dir=/path/to/my/dir backup-to-image

    And this command results in the following output log:

        MySQL Enterprise Backup version 3.7.1 [2012/02/16]
        Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.

        INFO: Starting with following command line ...
         ./mysqlbackup --port=3307 --protocol=tcp --user=backup_user
                --password=foobar --backup-image=sbt:my-first-backup
                --sbt-lib-path=/usr/lib/libobk.so
                --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt"
                --backup-dir=/path/to/my/dir backup-to-image

        sbt-environment: 'TDPO_OPTFILE=/path/to/my/tdpo.opt'
        INFO: Got some server configuration information from running server.
        IMPORTANT: Please check that mysqlbackup run completes successfully.
                   At the end of a successful 'backup-to-image' run mysqlbackup
                   prints "mysqlbackup completed OK!".
--------------------------------------------------------------------                        Server Repository Options: --------------------------------------------------------------------   datadir                          =  /path/to/data   innodb_data_home_dir             =  /path/to/data   innodb_data_file_path            =  ibdata1:2048M;ibdata2:2048M;ibdata3:64M:autoextend:max:2048M   innodb_log_group_home_dir        =  /path/to/data   innodb_log_files_in_group        =  2   innodb_log_file_size             =  268435456 --------------------------------------------------------------------                        Backup Config Options: --------------------------------------------------------------------   datadir                          =  /path/to/my/dir/datadir   innodb_data_home_dir             =  /path/to/my/dir/datadir   innodb_data_file_path            =  ibdata1:2048M;ibdata2:2048M;ibdata3:64M:autoextend:max:2048M   innodb_log_group_home_dir        =  /path/to/my/dir/datadir   innodb_log_files_in_group        =  2   innodb_log_file_size             =  268435456 Backup Image Path= sbt:my-first-backup mysqlbackup: INFO: Unique generated backup id for this is 13297406400663200 120220 08:54:00 mysqlbackup: INFO: meb_sbt_session_open: MMS is 'Data Protection for Oracle: version 5.5.1.0' 120220 08:54:00 mysqlbackup: INFO: meb_sbt_session_open: MMS version '5.5.1.0' mysqlbackup: INFO: Uses posix_fadvise() for performance optimization. mysqlbackup: INFO: System tablespace file format is Antelope. mysqlbackup: INFO: Found checkpoint at lsn 31668381. mysqlbackup: INFO: Starting log scan from lsn 31668224. 120220  8:54:00 mysqlbackup: INFO: Copying log... 120220  8:54:00 mysqlbackup: INFO: Log copied, lsn 31668381.           We wait 1 second before starting copying the data files... 120220  8:54:01 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata1 (Antelope file format). mysqlbackup: Progress in MB: 200 400 600 800 1000 1200 1400 1600 1800 2000 120220  8:55:30 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata2 (Antelope file format). mysqlbackup: Progress in MB: 200 400 600 800 1000 1200 1400 1600 1800 2000 120220  8:57:18 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata3 (Antelope file format). mysqlbackup: INFO: Preparing to lock tables: Connected to mysqld server. 120220 08:57:22 mysqlbackup: INFO: Starting to lock all the tables.... 120220 08:57:22 mysqlbackup: INFO: All tables are locked and flushed to disk mysqlbackup: INFO: Opening backup source directory '/path/to/data/' 120220 08:57:22 mysqlbackup: INFO: Starting to backup all files in subdirectories of '/path/to/data/' mysqlbackup: INFO: Backing up the database directory 'mysql' mysqlbackup: INFO: Backing up the database directory 'test' mysqlbackup: INFO: Copying innodb data and logs during final stage ... mysqlbackup: INFO: A copied database page was modified at 31668381.           (This is the highest lsn found on page)           Scanned log up to lsn 31670396.           Was able to parse the log up to lsn 31670396.           Maximum page number for a log record 328 120220 08:57:23 mysqlbackup: INFO: All tables unlocked mysqlbackup: INFO: All MySQL tables were locked for 0.000 seconds 120220 08:59:01 mysqlbackup: INFO: meb_sbt_backup_close: blocks: 4162  size: 1048576  bytes: 4363985063 120220  8:59:01 mysqlbackup: INFO: Full backup completed! 
        mysqlbackup: INFO: MySQL binlog position: filename bin_mysql.001453, position 2105
        mysqlbackup: WARNING: backup-image already closed
        mysqlbackup: INFO: Backup image created successfully.:
                   Image Path: 'sbt:my-first-backup'
        -------------------------------------------------------------
           Parameters Summary
        -------------------------------------------------------------
           Start LSN                  : 31668224
           End LSN                    : 31670396
        -------------------------------------------------------------
        mysqlbackup completed OK!

    Backup successfully completed. To restore it, you use the same commands as for any other MEB image, but you need to provide the sbt* options as well:

        $ ./mysqlbackup --backup-image=sbt:my-first-backup --sbt-lib-path=/usr/lib/libobk.so \
          --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt" --backup-dir=/path/to/my/dir image-to-backup-dir

    Then apply the log as usual:

        $ ./mysqlbackup --backup-dir=/path/to/my/dir apply-log

    Then stop mysqld and finally copy back:

        $ ./mysqlbackup --defaults-file=path/to/my.cnf --backup-dir=/path/to/my/dir copy-back

    Disclaimer: this is only the story of one success, which may be useful to someone else. MEB is not regularly tested with IBM TSM or any other third-party storage manager, and is not guaranteed to work with them.

    Read the article

  • Why doesn't the highlighted part of the JavaScript work?

    - by Dor Cohen
    Why isn't the "confirm password and password are the same" part working? I mean the part that uses getElementById to address the password and confirmpassword fields. Every other part works, but not that one: it doesn't show a red box around the text fields. Can anyone help me?

        <html>
        <head>
        </head>
        <script>
        function submitinfo() {
            var firstname = document.getElementById("firstname").value;
            var lastname = document.getElementById("lastname").value;
            var username = document.getElementById("username").value;
            var password = document.getElementById("password").value;
            var confirmpassword = document.getElementById("confirmpassword").value;
            var email = document.getElementById("email").value;

            if(firstname !== "" && document.getElementById("firstname").style.borderColor == "red") {
                document.getElementById("firstname").style.border = "none"
            }
            if(lastname !== "" && document.getElementById("lastname").style.borderColor == "red") {
                document.getElementById("lastname").style.border = "none"
            }
            if(username !== "" && document.getElementById("username").style.borderColor == "red") {
                document.getElementById("username").style.border = "none"
            }
            if(password !== "" && document.getElementById("password").style.borderColor == "red") {
                document.getElementById("password").style.border = "none"
            }
            if(confirmpassword !== "" && document.getElementById("confirmpassword").style.borderColor == "red") {
                document.getElementById("confirmpassword").style.border = "none"
            }
            if(email !== "" && document.getElementById("email").style.borderColor == "red") {
                document.getElementById("email").style.border = "none"
            }
            if(firstname == "") {
                document.getElementById("firstname").style.borderColor = "red";
                document.getElementById("firstname").style.borderStyle = "solid";
            }
            if(lastname == "") {
                document.getElementById("lastname").style.borderColor = "red";
                document.getElementById("lastname").style.borderStyle = "solid";
            }
            if(username == "") {
                document.getElementById("username").style.borderColor = "red";
                document.getElementById("username").style.borderStyle = "solid";
            }
            if(password == "") {
                document.getElementById("password").style.borderColor = "red";
                document.getElementById("password").style.borderStyle = "solid";
            }
            if(confirmpassword == "") {
                document.getElementById("confirmpassword").style.borderColor = "red";
                document.getElementById("confirmpassword").style.borderStyle = "solid";
            }
            if(email == "") {
                document.getElementById("email").style.borderColor = "red";
                document.getElementById("email").style.borderStyle = "solid";
            }
            if(password !== "" && confirmpassword !== ""
                && document.getElementById("password").style.border == "none"
                && document.getElementById("confirmpassword").style.border == "none"
                && password !== confirmpassword) {
                document.getElementById("password").style.border = "red";
                document.getElementById("confirmpassword").style.border = "red";
            }
            if(firstname && lastname && username && password && confirmpassword && email !== "") {
                window.open()
            }
        }
        </script>

        <h><font size=4 color=3BCCBE><b>Full Name</b></font><h/>
        <br>
        <input type="text" id="firstname" size="15px" placeholder="First">
        <input type="text" id="lastname" size="15px" placeholder="Last">
        <br> <br> <br> <br>
        <h><font size=4 color=3BCCBE><b>Choose your username</b></font></h>
        <br>
        <input type="text" id="username" size="37px">
        <br>
        <p><font size=3 color="grey">atleast 6 characters long</font></p>
        <br>
        <h><font size=4 color=3BCCBE><b>Create a password</b></font></h>
        <br>
        <input type="password" id="password" size="37px">
        <br> <br> <br> <br>
        <h><font size=4 color=3BCCBE><b>Confirm your password</b></font><h/>
        <br>
        <input type="password" id="confirmpassword" size="37px">
        <br> <br> <br> <br>
        <h><font size=4 color=3BCCBE><b>Email address</b></font><h/>
        <br>
        <input type="text" id="email" size="37px">
        <br> <br> <br> <br>
        <input type="button" value="Submit" onclick="submitinfo()" style="height:50px; width:85px; font-size:22px;>
        <br>
        </body>
        </html>
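    A likely explanation, offered as a sketch rather than a verified fix for this exact page: "red" is not a valid value for the border shorthand, so document.getElementById("password").style.border = "red" is ignored by the browser, and the guard that compares style.border == "none" rarely holds because the empty-field branch sets borderColor/borderStyle rather than border. Comparing the field values directly and setting the color and style explicitly sidesteps both problems:

        // Inside submitinfo(), after the empty-field checks:
        var pwdField = document.getElementById("password");
        var confirmField = document.getElementById("confirmpassword");

        if (password !== "" && confirmpassword !== "" && password !== confirmpassword) {
            // "red" alone is not a valid border shorthand; set the pieces explicitly.
            pwdField.style.borderColor = "red";
            pwdField.style.borderStyle = "solid";
            confirmField.style.borderColor = "red";
            confirmField.style.borderStyle = "solid";
        }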

    Read the article

  • Indian government departments have more insecure websites than others

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/10/26/indian-government-department-have-more-unsecure-website-then-others.aspx

    One of my friends shared his college experience with me. He is not in computer science. He told me that one day Ankit Fadia came to their college and, in front of many students, showed how to hack the BSNL website with a few tricks, breaking down how the BSNL site works. I told them that BSNL's is one of the most insecure websites in India.

    If you log in to the website, it may load in a few seconds, but sometimes it takes 58 minutes. That is not a typo: 58 minutes, just under an hour. Open a tab, enter the link, and it can take hours to load. If you are using IE8, Chrome or Firefox you are forced to use IE7 or downgrade; I simply use IE7 mode in IE to make it work. This happens because they use something called DynaTrace.

    The site is also deeply insecure, and here is why. Suppose my portal username is xyz and my password is abc. When I go to reset my password, the reset page does not accept the portal password; what matters there are the broadband credentials (remember that the portal username is different from the broadband username and password). So if I want to reset your password, I only need to know your broadband username: I can log in with my own account, open the reset-password page, fill in your broadband username, and the reset will go through. I have not actually tried this, but broadband usernames are easy to guess, since they all follow the same pattern. Is this safe? No. There are many things on this site that make it feel like a website from the last century; they still live in a popup world. These sites are nothing but crap: they don't work most of the time, and when they do, they run far too slowly.

    Read the article

  • Public EC Meeting Today at 15:00; new WebEx password

    - by Heather VanCura
    Update:  Public EC Meeting is today at 15:00 PST; note new WebEx meeting password is 12345; login from https://jcp.webex.com. Audio remains the same: +1 (866) 682-4770 (US) Conference code: 627-9803 Security code: 52732 ("JCPEC" on your phone handset) For global access numbers see http://www.intercall.com/oracle/access_numbers.htm Or +1 (408) 774-4073

    Read the article

  • passwordless ssh not working

    - by kuurious
    I've tried to set up passwordless ssh between A and B, in both directions. I generated the public and private keys using ssh-keygen -t rsa on both machines and used the ssh-copy-id utility to copy the public keys from A to B as well as from B to A. Passwordless ssh works from A to B, but not from B to A. I've checked the permissions of the ~/.ssh/ folders and they seem normal.

    A's .ssh folder permissions:

        -rw------- 1 root root 13530 2011-07-26 23:00 known_hosts
        -rw------- 1 root root   403 2011-07-27 00:35 id_rsa.pub
        -rw------- 1 root root  1675 2011-07-27 00:35 id_rsa
        -rw------- 1 root root   799 2011-07-27 00:37 authorized_keys
        drwxrwx--- 70 root root 4096 2011-07-27 00:37 ..
        drwx------  2 root root 4096 2011-07-27 00:38 .

    B's .ssh folder permissions:

        -rw------- 1 root root  884 2011-07-07 13:15 known_hosts
        -rw-r--r-- 1 root root  396 2011-07-27 00:15 id_rsa.pub
        -rw------- 1 root root 1675 2011-07-27 00:15 id_rsa
        -rw------- 1 root root 2545 2011-07-27 00:36 authorized_keys
        drwxr-xr-x 8 root root 4096 2011-07-06 19:44 ..
        drwx------ 2 root root 4096 2011-07-27 00:15 .

    A is an Ubuntu 10.04 machine (OpenSSH_5.3p1 Debian-3ubuntu4, OpenSSL 0.9.8k 25 Mar 2009). B is a Debian machine (OpenSSH_5.1p1 Debian-5, OpenSSL 0.9.8g 19 Oct 2007).

    From A, `ssh B` works fine. From B:

        # ssh -vvv A
        ...
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /root/.ssh/identity ((nil))
        debug2: key: /root/.ssh/id_rsa (0x7f1581f23a50)
        debug2: key: /root/.ssh/id_dsa ((nil))
        debug3: Wrote 64 bytes for a total of 1127
        debug1: Authentications that can continue: publickey,password
        debug3: start over, passed a different list publickey,password
        debug3: preferred gssapi-keyex,gssapi-with-mic,gssapi,publickey,keyboard-interactive,password
        debug3: authmethod_lookup publickey
        debug3: remaining preferred: keyboard-interactive,password
        debug3: authmethod_is_enabled publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /root/.ssh/identity
        debug3: no such identity: /root/.ssh/identity
        debug1: Offering public key: /root/.ssh/id_rsa
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug3: Wrote 368 bytes for a total of 1495
        debug1: Authentications that can continue: publickey,password
        debug1: Trying private key: /root/.ssh/id_dsa
        debug3: no such identity: /root/.ssh/id_dsa
        debug2: we did not send a packet, disable method
        debug3: authmethod_lookup password
        debug3: remaining preferred: ,password
        debug3: authmethod_is_enabled password
        debug1: Next authentication method: password
        [email protected]'s password:

    This essentially means it is not authenticating with /root/.ssh/id_rsa. I ran ssh-add on both machines as well. The authentication part of /etc/ssh/sshd_config is:

        # Authentication:
        LoginGraceTime 120
        PermitRootLogin yes
        StrictModes yes
        RSAAuthentication yes
        PubkeyAuthentication yes
        #AuthorizedKeysFile %h/.ssh/authorized_keys
        # Don't read the user's ~/.rhosts and ~/.shosts files

    I'm running out of ideas. Any help would be appreciated.
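    A hedged observation rather than a confirmed answer: in the listing for A, the parent directory of .ssh (root's home, judging by the '..' entry) is drwxrwx---, i.e. group-writable. With StrictModes yes, sshd on A will silently ignore authorized_keys when the home directory, ~/.ssh or the key file is writable by group or others, which matches the symptom of the offered key being rejected while A-to-B still works. A sketch of what to try on A (it assumes root's home is /root and a Debian/Ubuntu-style auth log):

        # On machine A, the side refusing the key:
        chmod g-w /root                        # home must not be group/other writable under StrictModes
        chmod 700 /root/.ssh
        chmod 600 /root/.ssh/authorized_keys

        # Retry from B while watching A's server log for the reason:
        tail -f /var/log/auth.log              # e.g. "Authentication refused: bad ownership or modes for directory /root"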

    Read the article

  • How can I permanently save a password-protected SSH key?

    - by pl1nk
    I am using the Awesome window manager. How can I permanently add private keys that are protected by a passphrase? Inspired by the answer here, I have added the private keys to ~/.ssh/config.

    Contents of ~/.ssh/config:

        IdentityFile 'private key full path'

    Permissions of ~/.ssh/config: 0700

    But it doesn't work for me. If I manually add the key in every session it works, but I'm looking for a more elegant way (not in .bashrc).
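    IdentityFile only tells ssh which key to offer; it never caches the passphrase, which is why the config-file approach cannot work on its own. The usual answer is an agent that is started once per login session and holds the decrypted key. A sketch for a minimal setup under Awesome follows; the file names and the keychain alternative are assumptions about the local setup, not a prescribed configuration:

        # ~/.xinitrc or ~/.xsession -- start an agent for the whole X session, then the WM
        eval "$(ssh-agent -s)"
        exec awesome

        # Once per session (on first use), add the key; the passphrase is asked a single time:
        ssh-add ~/.ssh/id_rsa

        # Alternative: the `keychain` package reuses one long-lived agent across logins:
        # eval "$(keychain --eval --quiet ~/.ssh/id_rsa)"   # e.g. from ~/.profile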

    Read the article
