Search Results

Search found 1359 results on 55 pages for 'uploading'.


  • How to Monitor Network in Medium-Sized Company?

    - by Kyle Lowry
    I work at a medium-sized company (100+ employees). An issue that has been cropping up is network performance, internet access in particular. We have about 70 or more computers, a mix of Mac OS X and Windows XP & 7 machines. We have several servers (Exchange, PC file servers, MS SQL, BlackBerry, FTP, a Mac server, etc.). There are four main switches, a SonicWall firewall, and probably a couple of routers in the server room, with a dozen or so more scattered around the building. The network structure has grown organically over a number of years and, as far as I know, there really isn't a monitoring solution in place. When we experience network issues (slow connections, dropped packets, and so on), our general solution is to power-cycle some hardware or go around to each employee and ask whether they are uploading or downloading any large files. This is inefficient and time consuming, and it does not let us monitor the network or tackle potential problems proactively. I would like a solution that lets me monitor network usage company-wide in real time, ideally with detail down to the individual computer. Given the hodgepodge of equipment and operating systems, what would be the best way to set up some kind of monitoring solution? Hardware, software, restructuring our network architecture?
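    One common starting point that needs no new hardware, assuming the managed switches have SNMP enabled with a read-only community string (the community name and IP below are placeholders), is to poll per-port traffic counters; graphing tools such as Cacti, Zabbix or PRTG do essentially this under the hood:

      # list the interfaces on a switch (net-snmp tools)
      snmpwalk -v2c -c public 192.0.2.10 IF-MIB::ifDescr
      # read the 64-bit in/out byte counters for port 12; poll periodically and graph the deltas
      snmpget -v2c -c public 192.0.2.10 IF-MIB::ifHCInOctets.12 IF-MIB::ifHCOutOctets.12

    Because each desktop hangs off a known switch port, per-port graphs get close to per-machine visibility; NetFlow/sFlow export from the firewall or routers (if the hardware supports it) is the next step up when per-conversation detail is needed.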

    Read the article

  • AT&T Filtering FTP traffic?

    - by xpda
    Using AT&T DSL, I cannot FTP upload or download a few files out of a large set of 1500. The problem is the file name. I can change a few characters of the file name and they upload fine. I can change the file names from upper to lower case and they upload fine. If I change back to the original file name, it will not upload again. When it doesn't upload, it starts, transfers about 5% of a 5-10 MB file, and then times out. I have uploaded one of the files under a different name, changed the name back to the original, and it will not download via FTP. It will download in a browser, and it will download via FTP just fine under a different name. It just will not download with FTP. I have reproduced this uploading to three different servers on 1and1 and Amazon EC2. When I try it from a client on a non-AT&T ISP, it works OK. Here is a file that did not upload until I had renamed it (I have since changed it back to the original name): "http://xpda.com/nautnew/11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png" This problem is unrelated to connection, speed, and file content. The only things I can see that make a difference are the file name and AT&T DSL. Does AT&T have some kind of FTP file filtering? Is there anything else that could cause this behavior?
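    One way to narrow this down from the command line (a sketch; the FTP host, path and credentials are placeholders) is to upload the same bytes under the original name and under a renamed copy from the AT&T connection, and watch where each transfer stalls:

      # upload with the original, space-containing name
      curl -v -T "11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png" --user myuser:mypass "ftp://ftp.example.com/nautnew/"
      # identical bytes, different name
      cp "11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png" renamed.png
      curl -v -T renamed.png --user myuser:mypass "ftp://ftp.example.com/nautnew/"

    If the first transfer reliably dies at roughly the same offset while the second completes, that points at something on the path inspecting the FTP control or data channel (an ALG or content filter) rather than at the servers themselves.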

    Read the article

  • What causes high CPU usage on the server during file upload?

    - by bosiang
    When I try to upload a huge file (approx. 2 GB), the server CPU usage goes really high. What should I do to fix this? I just use a standard HTML form and PHP for the file upload. I'm sorry if I posted on the wrong forum; please point me in the right direction. Here is the result of the "top" command while uploading 4 files (18 MB, 38 MB, 60 MB, 33 MB):

      1904 apache  20  0 33504 5740 1952 R 28.3 0.2 0:02.19 httpd
      1905 apache  20  0 33504 5740 1952 R 28.3 0.2 0:01.99 httpd
      1903 apache  20  0 33232 6968 3060 R 28.0 0.2 0:01.98 httpd
      1910 apache  20  0 33240 6020 2248 S 11.5 0.2 0:02.85 httpd
      2133 root    20  0  2656 1124  896 R  1.6 0.0 0:00.71 top
         1 root    20  0  2864 1404 1188 S  0.0 0.0 0:03.99 init

    Below is my chunking code, although even when I don't use it (just a simple file upload) the CPU usage is still that high:

      function sendRequest() {
          // clean the screen
          // bars.innerHTML = '';
          var file = document.getElementById('fileToUpload');
          for (var i = 0; i < file.files.length; i++) {
              var blob = file.files[i];
              var originalFileName = blob.name;
              var filePart = 0;
              const BYTES_PER_CHUNK = 100 * 1024 * 1024; // 100 MB chunk size
              var realFileSize = blob.size;
              var start = 0;
              var end = BYTES_PER_CHUNK;
              totalChunks = Math.ceil(realFileSize / BYTES_PER_CHUNK);
              alert(realFileSize);
              while (start < realFileSize) {
                  var chunk;
                  if (blob.webkitSlice) {            // older Google Chrome
                      chunk = blob.webkitSlice(start, end);
                  } else if (blob.mozSlice) {        // older Mozilla Firefox
                      chunk = blob.mozSlice(start, end);
                  } else {                           // standard File API slice
                      chunk = blob.slice(start, end);
                  }
                  uploadFile(chunk, originalFileName, filePart, totalChunks, i);
                  filePart++;
                  start = end;
                  end = start + BYTES_PER_CHUNK;
              }
          }
      }
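    To see where the time goes while an upload is in flight (a sketch; the process and setting names assume a typical Apache/PHP setup like the one in the top output above), it helps to watch only the Apache workers and to check the PHP limits that govern how uploads are buffered:

      # live CPU/state of the Apache workers only
      top -c -p "$(pgrep -d, httpd)"
      # PHP settings that influence how large uploads are handled
      php -i | grep -E 'upload_max_filesize|post_max_size|max_input_time|memory_limit'

    Sustained CPU in httpd during a multi-gigabyte POST is often just the cost of reading and re-buffering the multipart body; modules that scan request bodies (mod_security, antivirus filters) can multiply that cost.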

    Read the article

  • Different user group cannot upload files on the server

    - by Dallal
    I have a CentOS server running in Thailand, and I'm in Canada. The guy at the computer center who set up the server for me doesn't really understand much about Linux and left me an issue to solve myself. I just moved from a Mac server to a Linux server, and the first problem I'm facing is that `file name` fails to upload with the error "The uploaded file could not be moved to `location name`". From my experience, this kind of problem is all about permissions. So I went ahead and checked the whole folder and found that everything in it is owned as myusername mygroupname, then I checked the httpd configuration on the server and it defaults to apache apache. My question is: how can I put my user in the same group as the apache group, so that I don't have any more problems uploading or changing data in my files, but without affecting other users on the same server? I hold an administrator account, not the root account, but I can change things on the server as root with no problem. When I was with godaddy.com there was never any problem with permissions, and I wish I knew how they configure that :(
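    A minimal sketch of the usual fix, assuming the upload directory is something like /var/www/html/uploads and your login is myusername (both are placeholders; substitute your real path and account), run as root or via sudo:

      usermod -a -G apache myusername          # add your account to the apache group
      chgrp -R apache /var/www/html/uploads    # give the apache group ownership of the upload directory
      chmod -R g+w /var/www/html/uploads       # let the group write there
      chmod g+s /var/www/html/uploads          # new files created inside inherit the apache group

    This only loosens permissions on that one directory tree, so other users on the server are not affected; log out and back in (or run newgrp apache) before the new group membership takes effect for your session.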

    Read the article

  • "Synchronizing" files between local and remote server using Git

    - by ConcreteVitamin
    My intended goal: I maintain some files on my local computer, and I also share them with others by putting them on my website. In the past I did this by manually uploading all the files over FTP every time I made modifications. Now I am wondering if I can use Git to help me achieve this (by "pushing" the local files to my website server). My server is hosted by Dreamhost. First attempt: I tried this tutorial. I first push my local files to my GitHub repo, and ssh into my Dreamhost server to clone --bare from the GitHub repo. But I find that git does not transfer my files, so I set the tutorial aside. Second attempt: I ssh into my Dreamhost server and clone directly from GitHub. My files are all transferred to the server. Then, on my local computer, I git remote add dreamhost ssh://[email protected]/~/my-project. Then I add some files, commit, and git push dreamhost master. And a bunch of errors appear: http://geotakucovi.com/gitError.jpg As a newbie Git user, I must have missed something. Please help!
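    A common way to set this up on a shared host like Dreamhost (a sketch; every path, the domain and the remote name below are placeholders) is to push to a bare repository whose post-receive hook checks the branch out into the web directory, so that a plain git push publishes the site:

      # on the server
      mkdir -p ~/repos/my-project.git && cd ~/repos/my-project.git
      git init --bare
      printf '#!/bin/sh\nGIT_WORK_TREE="$HOME/example.com" git checkout -f master\n' > hooks/post-receive
      chmod +x hooks/post-receive
      mkdir -p ~/example.com

      # on the local machine
      git remote add dreamhost ssh://username@example.com/~/repos/my-project.git
      git push dreamhost master

    Pushing into a repository that has a checked-out working tree (like the clone in the second attempt) is what usually triggers git's refusal errors; a bare repository avoids that entirely, and the hook does the checkout for you.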

    Read the article

  • Can't connect to FTP server from a specific location

    - by wv_pip
    Last week while uploading website files to our server via FTP, the transfer failed. Ever since then, I haven't been able to connect to the server from work. I can connect just fine from home, or by using an FTP app on my cell phone as long as I'm on the cell network. I can't access the server from any machine on my work network. It's not a credential issue, either. The error message that I always get says that a connection cannot be established, and I am never prompted for my credentials. I have changed absolutely nothing on our domain controller or our firewall/router. I've contacted our ISP (who hosts the website/FTP server) and they can't find anything wrong on their end. They insist that it must be something here at the office that is blocking access. I've also tested access to other FTP servers (ea.com, nvidia.com, etc.) so I know that port 21 is not being blocked. I'm totally stumped. Any help is much appreciated. EDIT: wireshark info here: http://www.cloudshark.org/captures/85a118ae9296?filter=ip.dst%3D%3D66.118.64.208
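    Since FTP to other servers works from the office, it is worth testing the control and data channels to this particular server separately from the work network (a sketch; the host and login are placeholders):

      nc -vz ftp.example.com 21                                          # does the control connection open at all?
      curl -v --ftp-pasv ftp://ftp.example.com/ --user myuser:mypass     # passive mode: server picks the data port
      curl -v --ftp-port - ftp://ftp.example.com/ --user myuser:mypass   # active mode: client listens for the data connection

    If nc cannot reach port 21 on this one host from the office while other FTP servers respond, something on the path is dropping traffic to that specific address; if the control channel opens but listings or transfers hang in only one of the two modes, the firewall's FTP ALG or its handling of the data ports is the usual culprit. The Wireshark capture you posted should show exactly where the handshake stops.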

    Read the article

  • SSH very slow when connecting externally [closed]

    - by wnstnsmth
    Possible Duplicate: ssh delay when connecting. We have a CentOS server that we use for internal testing purposes, which has sshd enabled. When I (as a developer) am at the company, I use ssh [email protected] to connect to it, and it works flawlessly. Now, in order to work from home, accessing the server via the company's static IP, we set up another port for ssh, 2020. So I execute ssh -p 2020 [email protected] and am immediately prompted for a password. After entering the password, it takes up to 30 seconds until I can access the server. The same happens with SFTP (i.e. uploading files takes about 30 seconds before the transfer begins). As you can imagine, if you have to regularly upload files to a webserver via SFTP, this is very tedious. So I looked at similar questions and edited the sshd_config file on the server, setting UseDNS to "no" and GSSAPIAuthentication to "no" (the latter also in ssh_config on the client) - it did not work. Please have a look at the -vvv output when accessing the server externally: ssh -p 2020 -vvv [email protected] PasteBin: ssh What could it be? Do you need more info?
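    For reference, a sketch of how those settings are usually applied and verified on the CentOS side; since the delay happens after authentication, it is also worth watching the server's auth log during an external login:

      # /etc/ssh/sshd_config on the server:
      #   UseDNS no
      #   GSSAPIAuthentication no
      service sshd restart                   # make the changed config take effect
      sshd -T | grep -iE 'usedns|gssapi'     # confirm the running daemon picked the settings up (OpenSSH 5.1+)
      tail -f /var/log/secure                # watch for PAM, DNS or GSSAPI stalls while logging in

    If the -vvv client output pauses right after authentication, comparing its timestamps with /var/log/secure usually shows whether the stall is a reverse-DNS lookup of your home IP, a PAM module (for example an LDAP lookup), or something in the port-forwarding path on the firewall.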

    Read the article

  • Why are my SharePoint downloads not completing for outside users?

    - by CT
    I am using WSS 3.0 with Microsoft Windows Server 2003. I am running into the following problem. On a pretty frequent basis, outside users are having trouble downloading documents: downloads report as complete while the file is still incomplete. For instance, a PDF is a 17 MB file. If I download it from within the office, all 17 MB are downloaded and it opens. If I download it from an outside connection, it may download anywhere from 5-10 MB of the file and then say it is complete. When these partial downloads are opened, the user gets the error "this file is corrupt and cannot be repaired". I have solved this problem on some occasions by simply deleting the document and uploading a new copy. This does not always work. Are there known bugs? Are there Internet settings that need to be modified on the outside user's machine? Does anyone else run into this?

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which is the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files, as we can upload the blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side, which means: once you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK.  But the problem is, how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitations The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community. 
    1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, let's say a 6 GB HD movie, then after uploading for a few minutes the request fails and the upload is terminated. In ASP.NET there is a limit on the request length, and the maximum request length is defined in the web.config file; it is a number less than about 4 GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period. From my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, a simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break through the limitations mentioned above we will try to upload the large file in chunks. This gives us some benefits, such as: - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is. - No timeout problem: the size of the chunks is controlled by us, which means we should be able to make sure the request for each chunk upload does not exceed the timeout period of either ASP.NET or the Windows Azure load balancer. It was a big challenge to upload a big file in chunks until we got HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) will read the part of this file from the 512th byte to the 768th byte and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about Blob please refer here. File and Blob are very useful for implementing the chunked upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
    15: // invoke the functions in parallel 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions in parallel with a concurrency limit of 4 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5, which are - File.slice: read part of the file by specifying the start and end byte index. - Blob: a file-like interface which holds part of the file content. - FormData: a temporary form element with which we can pass the chunk along with some metadata to the backend service. Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide the maximum upload speed, but unlimited parallel upload might crash the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallel limit.   Regarding the chunk size and the parallel limit there is no "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores and the server side's (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores. I was using the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
    The test cases were: - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload mode: sequential, parallel (unlimited), parallel with a limit (4 threads, 8 threads). - Chunk format: base64 string, binary. - Target file: 100MB. - Each case was tested 3 times. Some thoughts from the results, but not guidance or best practice: - Parallel gets better performance than sequential. - There is no significant performance improvement between 4 and 8 parallel threads. - Transferring binary chunks provides better performance than base64. - In all cases, a chunk size of 1MB - 2MB gets better performance.   Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Call Webservices…Maybe!?

    - by MOSSLover
    So I have been doing preliminary work for my iOS talk for a while, but did not get into the meat of the project until recently.  One day I envision my talk uploading pictures from a camera on an iPhone or iPad into SharePoint and telling people how I did it.  As you know with my Silverlight talk and any new technology, building new talks with new technologies always ends up with some pain points that you must jump over just to grab data.  So step 1 always starts out with how do we even access a webservice using the new technology. I started out watching every single SPC video available on oAuth and Rest Webservices in SharePoint 2013.  I also sent an email to Eric Shupps about some REST and 2013 examples.  The videos further confused me, because all the videos were on SharePoint hosted apps (provider and autohosted).  I did not want to create a SharePoint hosted app, but instead a mobile app outside of the SharePoint context altogether.  Nick Swan sent me his code and it was great for a starting point on how the JSON calls would look like on iOS, but I was still missing a piece.  Nick does a great job on showing how to use the REST/JSON calls in a non-MS tech, however his presentation uses the SharePoint context and can grab the SPAppToken.  At this point I had to ask the question how do you grab the SAML token outside of SharePoint 2013 in iOS using Objective-C?  After reading all the MSDN documentation, some documentation on Restkit and Objective-C/oAuth calls, and some SharePoint 2013 blog post my head was swimming.  I was dreaming about REST and iOS in SharePoint 2013.  SAML tokens were taunting me.  I was nowhere near understanding 2013. I started talking to my friend, Pedro Jimenez, who is also playing with Objective-C and went to SPC.  He found me a couple good MSDN posts with REST/JSON calls that basically showed the accessToken was all I needed (at this point I was still thinking iOS needed to be a provider hosted app which is wrong).  So then again I had to ask the SAML token question…How do you get a SAML token outside of SharePoint without the TokenHelper class? So then I started talking to people and thinking why do I need to completely avoid TokenHelper…The solution in concept is basically create a webservice in Azure wrapped into a Provider Hosted App in SharePoint.  Wictor Wilen created a helper webservice in the following blog post: http://www.wictorwilen.se/Post/How-to-do-active-authentication-to-Office-365-and-SharePoint-Online.aspx. So now I have to basically stand up the webservice, the SharePoint app wrapper, and then use Restkit to call the first webservice to grab the token and then the second webservice to pass in the token and grab some SharePoint data.  What this means is that you can no longer just pass credentials into SharePoint webservices and get data back.  You have to pass in a SAML token with every single webservice call to SharePoint.  The theory is that this token is associated with the permissions the app can handle (read, write, whatever).  It seems like a ton of pain and a lot of work, but this is step 1 in my crusade to pull some piece of data into iOS from SharePoint and show people how to do it themselves.  In the upcoming months hopefully I can get halfway to my end goal. Technorati Tags: SharePoint 2013,REST,oAuth,Objective-C,iOS

    Read the article

  • Oracle Text query parser

    - by Roger Ford
    Oracle Text provides a rich query syntax which enables powerful text searches. However, this syntax isn't intended for use by inexperienced end-users.  If you provide a simple search box in your application, you probably want users to be able to type "Google-like" searches into the box, and have your application convert that into something that Oracle Text understands. For example if your user types "windows nt networking" then you probably want to convert this into something like "windows ACCUM nt ACCUM networking".  But beware - "NT" is a reserved word, and needs to be escaped.  So let's escape all words: "{windows} ACCUM {nt} ACCUM {networking}".  That's fine - until you start introducing wild cards. Then you must escape only non-wildcarded searches: "win% ACCUM {nt} ACCUM {networking}".  There are quite a few other "gotchas" that you might encounter along the way. Then there's the issue of scoring.  Given a query for "oracle text query syntax", it would be nice if we could score a full phrase match higher than a hit where all four words are present but not in a phrase.  And then perhaps lower than that would be a document where three of the four terms are present.  Progressive relaxation helps you with this, but you need to code the "progression" yourself in most cases. To help with this, I've developed a query parser which will take queries in Google-like syntax, and convert them into Oracle Text queries. It's designed to be as flexible as possible, and will generate either simple queries or progressive relaxation queries. The input string will typically just be a string of words, such as "oracle text query syntax" but the grammar does allow for more complex expressions:  word : score will be improved if word exists  +word : word must exist  -word : word CANNOT exist  "phrase words" : words treated as phrase (may be preceded by + or -)  field:(expression) : find expression (which allows +,- and phrase as above) within "field". So for example if I searched for   +"oracle text" query +syntax -ctxcat Then the results would have to contain the phrase "oracle text" and the word syntax. Any documents mentioning ctxcat would be excluded from the results. All the instructions are in the top of the file (see "Downloads" at the bottom of this blog entry).  Please download the file, read the instructions, then try it out by running "parser.pls" in either SQL*Plus or SQL Developer. I am also uploading a test file "test.sql". You can run this and/or modify it to run your own tests or run against your own text index. test.sql is designed to be run from SQL*Plus and may not produce useful output in SQL Developer (or it may, I haven't tried it). I'm putting the code up here for testing and comments. I don't consider it "production ready" at this point, but would welcome feedback.  I'm particularly interested in comments such as "The instructions are unclear - I couldn't figure out how to do XXX" "It didn't work in my environment" (please provide as many details as possible) "We can't use it in our application" (why not?) "It needs to support XXX feature" "It produced an invalid query output when I fed in XXXX" Downloads: parser.pls test.sql

    Read the article

  • FFmpeg Video Hosting for Linux and Windows Server

    - by Aditi
    FFmpeg hosting is a special type of web hosting where the host servers have video transcoding software loaded on them, which allows the automatic conversion of videos from one format to another. FFmpeg is a cross-platform solution for recording, converting, transcoding and streaming audio and video. It includes libavcodec – the leading audio/video codec library. FFmpeg hosting gets its name from a set of server-side programs (modules) called FFmpeg. There are a number of applications and web scripts available which allow webmasters to create their own video sharing websites. Video hosting typically requires: PHP 4.3 and above (including support for CLI), MEncoder and also MPlayer, FFmpeg-PHP, a MySQL database server, the LAME MP3 encoder, libogg + libvorbis, GD Library 2 or higher, and CGI-BIN. There are a number of web hosting providers who offer FFmpeg hosting. Following is a list of some of the best FFmpeg hosting providers for both Linux and Windows servers. Dream Host Dreamhost provides web-based email access, mail filtering, spam filtering, unlimited email IDs, a vacation autoresponder, Python support, full CGI access and many more services. Price: $7.95 View Details Micfo It offers unlimited disk space and bandwidth. Other services include a free domain for life and free website transfer, with many more services. All in all one of the best options to consider. Price: $5 View Details Host Upon HostUpon offers FFmpeg hosting on all their hosting packages, with readily installed modules to start a video website or social network with video uploading. Scripts such as Boonex Dolphin / PHPMotion / Social Engine / ABKsoft Scripts / Joomla Video Plugin / Clipshare / ClipBucket / Social Media / Rayzz / Vidi Script work with their FFmpeg. Their FFmpeg hosting plan offers 24/7/365 support with a typical response time of 15 minutes or less. Price: $5.95 View Details DownTown Host DownTown Host provides full and exceptional support by live chat and telephone. It has high-powered, modern servers and the finest web server technology. It offers free search engine submission and continuous data backup protection, with free email forwarding and site moves. There are many more services too. Site5 This FFmpeg service provider offers an uptime guarantee, real-time stats on each server and many more attractive services. Price: $4.95 View Details Cirtex Hosting Cirtex Hosting allows you to host 7 websites & domains and provides unlimited storage space and monthly bandwidth. It also offers FTP and email accounts and many more services. Price: $2.49 View Details FLV Hosting FLV Hosting supplies RTMP server streaming for large-size video streaming and server-side recording. It is flexible and costs less. They customize to the client's requirements. Price: $9.95 View Details AptHost This hosting service provides 24x7x365 premium support and fully FFmpeg-enabled services. Price: $4.95 View Details HostMDS Great support, priced low. It provides SSH access, CGI, Ruby on Rails, Perl, PHP, MySQL, FrontPage extensions, 24/7 support, free domain transfer and spam filtering. It offers instant account setup, low-latency fast bandwidth & much more! They were formerly known as Vistapages. Price: $4.95 View Details
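    For a sense of what such a host actually runs behind the scenes, here is a minimal, hypothetical transcode command of the kind these platforms execute when a member uploads a video (the file names are placeholders, and the availability of the libx264 and AAC encoders depends on how FFmpeg was built on the server):

      ffmpeg -i upload.avi -c:v libx264 -preset fast -crf 23 -c:a aac -b:a 128k -movflags +faststart converted.mp4

    The -movflags +faststart option moves the MP4 index to the front of the file so playback can start before the whole file has downloaded, which is why it appears in most web video pipelines.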

    Read the article

  • Silverlight Cream for June 16, 2011 -- #1108

    - by Dave Campbell
    In this Issue: René Schulte, Rajat Jaiswal(-2-), Peter Kuhn, Colin Eberhardt, Kunal Chowdhury(-2-), Beth Massi, Michael Crump, Daniel Vaughan, Chris Rouw, WindowsPhoneGeek, and Jesse Liberty. Above the Fold: Silverlight: "Cubelicious - Silverlight 5 + Balder + Physics + SLARToolkit Augmented Reality = Triple Win!" René Schulte WP7: "Binding the WP7 ProgressIndicator in XAML" Daniel Vaughan LightSwitch: "Adding Static Images and Text on a LightSwitch Screen" Beth Massi Shoutouts: Laurent Bugnion is proposing a new RelayCommand snippet for MVVM Light V4... read about it and give him some feedback From SilverlightCream.com: Cubelicious - Silverlight 5 + Balder + Physics + SLARToolkit Augmented Reality = Triple Win! René Schulte has a post up about using the SLARToolkit for Silverlight 5 Beta in conjunction with Balder and Physics ... dang this is cool, check out the video! PSD TO XAML in few easy steps using Expression Blend I'm not a Photoshop person, but apparently Rajat Jaiswal is, and he's demonstrating using Expression Blend to get your PSD file into XAML Its really great feature Silverlight realtime augment toolkit This is a fun post from Rajat Jaiswal... fun to see someone other than René Schulte posting about René's SLARToolkit :) Getting ready for the Windows Phone 7 Exam 70-599 (Part 2) Peter Kuhn has part 2 of his series up on getting ready for the Windows Phone 7 Exam at SilverlightShow Metro In Motion Part #7 – Panorama Prettiness and Opacity Colin Eberhardt has another Metro in Motion up... this one concentrates on the opacity effect when the user slides from item-to-item in Panorama contents Windows Phone 7 (Mango) Tutorial - 13 - What is Tombstoning? Kunal Chowdhury has a couple of posts up... first up is this one on Tombstoning... and if you're just starting with WP7.1, it got easier Windows Phone 7 Tip: Showing and Hiding onscreen keyboard in Emulator Kunal Chowdhury's latest is a great hint if you haven't found it already... how to show/hide the onscreen keyboard in the emulator Adding Static Images and Text on a LightSwitch Screen Beth Massi's latest post is on showing how to display an image or static text such as a logo in a LightSwitch app Displaying PDF Files in Windows Phone 7 Mango Michael Crump responds to reader's questions about displaying a PDF file in WP7.1 with this post using ComponentOne's Studio for Windows Phone CTP Binding the WP7 ProgressIndicator in XAML Daniel Vaughan has a solution to the problem of having to bind the ProgressIndicator in WP7.1 in code-behind... he wrote a ProgressIndicatorProxy and shares it with us! Storing Files in SQL Server using WCF RIA Services and Silverlight – Part 2 Chris Rouw has Part 2 of his Storing Files in SQL Server using WCF RIA Services and Silverlight up... this one is on uploading and saving files to the database from Silverlight by the user dropping them onto your app. Using SqlMetal to generate Windows Phone Mango Local Database classes OK I'm not too proud to admit I'd never heard of SQLMetal... if you haven't, or even if you have, this post by WindowsPhoneGeek is a good discussion of using it to generate your WP7.1 database classes. Obtaining Email, Address or Phone Number Jesse Liberty's latest is another in his 'Mango From Scratch' series discussing the new tasks to obtain more info from the contact list. Stay in the 'Light! 

    Read the article

  • SQL SERVER – Why Do We Need Master Data Management – Importance and Significance of Master Data Management (MDM)

    - by pinaldave
    Let me paint a picture of everyday life for you.  Let’s say you and your wife both have address books for your groups of friends.  There is definitely overlap between them, so that you both have the addresses for your mutual friends, and there are addresses that only you know, and some only she knows.  They also might be organized differently.  You might list your friend under “J” for “Joe” or even under “W” for “Work,” while she might list him under “S” for “Joe Smith” or under your name because he is your friend.  If you happened to trade, neither of you would be able to find anything! This is where data management would be very important.  If you were to consolidate into one address book, you would have to set rules about how to organize the book, and both of you would have to follow them.  You would also make sure that poor Joe doesn’t get entered twice under “J” and under “S.” This might be a familiar situation to you, whether you are thinking about address books, record collections, books, or even shopping lists.  Wherever there is a lot of data to consolidate, you are going to run into problems unless everyone is following the same rules. I’m sure that my readers can figure out where I am going with this.  What is SQL Server but a computerized way to organize data?  And Microsoft is making it easier and easier to get all your “addresses” into one place.  In the  2008 version of SQL they introduced a new tool called Master Data Services (MDS) for Master Data Management, and they have improved it for the new 2012 version. MDM was hailed as a major improvement for business intelligence.  You might not think that an organizational system is terribly exciting, but think about the kind of “address books” a company might have.  Many companies have lots of important information, like addresses, credit card numbers, purchase history, and so much more.  To organize all this efficiently so that customers are well cared for and properly billed (only once, not never or multiple times!) is a major part of business intelligence. MDM comes into play because it will comb through these mountains of data and make sure that all the information is consistent, accurate, and all placed in one database so that employees don’t have to search high and low and waste their time. MDM also has operational MDM functions.  This is not a redundancy.  Operational MDM means that when one employee updates one bit of information in the database, for example – updating a new address for a customer, operational MDM ensures that this address is updated throughout the system so that all departments will have the correct information. Another cool thing about MDM is that it features Master Data Services Configuration Manager, which is exactly what it sounds like.  It has a built-in “helper” that lets you set up your database quickly, easily, and with the correct configurations.  While talking about cool features, I can’t skip over the add-in for Excel.  This allows you to link certain data to Excel files for easier sharing and uploading. In summary, I want to emphasize that the scariest part of the database is slowly disappearing.  Everyone knows that a database – one consolidated area for all your data – is a good idea, but the idea of setting one up is daunting.  But SQL Server is making data management easier and easier with features like Master Data Services (MDS). 
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Master Data Services, MDM

    Read the article

  • Windows Azure Recipe: Social Web / Big Media

    - by Clint Edmonson
    With the rise of social media there's been an explosion of special interest media web sites on the web. From athletics to board games to funny animal behaviors, you can bet there's a group of people somewhere on the web talking about it. Social media sites allow us to interact, share experiences, and bond with like-minded enthusiasts around the globe. And through the power of software, we can follow trends in these unique domains in real time.

    Drivers
    - Reach
    - Scalability
    - Media hosting
    - Global distribution

    Solution
    Here's a sketch of how a social media application might be built out on Windows Azure:

    Ingredients
    - Traffic Manager (optional) – can be used to provide hosting and load balancing across different instances and/or data centers. Perfect if the solution needs to be delivered to different cultures or regions around the world.
    - Access Control – this service is essential to managing user identity. It's backed by a full blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability, but it's also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model has extensibility points to hook into other identity providers as well.
    - Web Role – hosts the core of the web application and presents a central social hub to users.
    - Database – used to store core operational, functional, and workflow data for the solution's web services.
    - Caching (optional) – as web site traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token based security model that works alongside the Access Control service.
    - Tables (optional) – for semi-structured data streams that don't need relational integrity, such as conversations, comments, or activity streams, tables provide a faster and more flexible way to store this kind of historical data.
    - Blobs (optional) – users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users' data is delivered securely. (A minimal upload sketch appears at the end of this recipe.)
    - Content Delivery Network (CDN) (optional) – for sites that service users around the globe, the CDN is an extension to blob storage that, when enabled, will automatically cache frequently accessed blobs and static site content at edge data centers around the world. The data can be delivered statically or streamed in the case of rich media content.

    Training
    These links point to online Windows Azure training labs and resources where you can learn more about the individual ingredients described above. (Note: the entire Windows Azure Training Kit can also be downloaded for offline use.)

    - Windows Azure (16 labs) – Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
    - SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    - Windows Azure Services (9 labs) – As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
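    As a rough illustration of the Blobs ingredient referenced above, here is a minimal C# sketch of pushing an uploaded media file into blob storage. The container name ("usermedia"), the setting name ("DataConnectionString"), and the helper class are hypothetical, and the sketch assumes the 1.x-era Microsoft.WindowsAzure.StorageClient library rather than any particular sample from the training kit:

        using System.IO;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.ServiceRuntime;
        using Microsoft.WindowsAzure.StorageClient;

        public static class MediaStore
        {
            // Stores one uploaded file in a (hypothetical) "usermedia" blob container.
            public static void SaveUpload(string localPath)
            {
                // "DataConnectionString" is an assumed setting defined in the role configuration.
                var account = CloudStorageAccount.Parse(
                    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));

                var container = account.CreateCloudBlobClient().GetContainerReference("usermedia");
                container.CreateIfNotExist();   // no-op when the container already exists (1.x API name)

                var blob = container.GetBlockBlobReference(Path.GetFileName(localPath));
                using (var stream = File.OpenRead(localPath))
                {
                    blob.UploadFromStream(stream);   // streams the file into blob storage
                }
            }
        }

    In a real web role the account would more likely come from a shared helper, and the CDN ingredient can then be enabled on the same storage account to cache these blobs at the edge.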

    Read the article

  • Sunshine after the iCloud release?

    - by Laila
    "Why should I believe them? They're the ones that brought us MobileMe? It was not our finest hour, but we learned a lot." Steve Jobs June 6th 2011 Apple's new cloud service has been met with uncritical excitement by industry commentators.  It is wonderful what a rename can do.  Apple has had a 'cloud' offering for three years called MobileMe, successor to .MAC and  iTools, so iCloud is now the fourth internet service Apple have attempted. If this had been Microsoft, there would have been catcalls all around the blogosphere.  I'll admit that there is a lot more functionality announced for iCloud than MobileMe has ever managed to achieve, but then almost anything has more functionality than MobileMe.  It's an expensive service (£120 a year in the UK, $90 in the states), launched as far back as  June 9, 2008, that has delivered very little and suffered a string of technical problems; the documentation was mainly  a community effort, built up gradually by the frustrated and angry users. It was supposed to synchronise PC Outlook calendars but couldn't manage Microsoft Exchange (Google could, of course). It used WebDAV to allow Windows users to attach to the filestore, but didn't document how to do it. The method for downloading and uploading files to the cloud-based filestore was ridiculously clunky. It allowed you to post photos on a public site, but forgot to include a way of deleting photos. I could go on with the list, but you can explore the many sites that have flourished to inhabit the support-vacuum left by Apple. MobileMe should have had all the bright new clever things announced for iCloud. Apple dropped the ball, and allowed services such as Flickr to fill the void. However, their PR skills are such that, a name-change later (the .ME.com email address remains), it has turned a rout into a victory, and hundreds of earnest bloggers have been extolling Apple's expertise in cloud matters. This must be frustrating for the other cloud providers who have quietly got the technology working right. I wish iCloud well, even though I resent the expensive mess they made of MobileMe. Apple promise that iCloud will sync files, apps, app data, and media across all the different iOS5 devices, Macs, and PCs. It also hopes to sync music across devices, but not video content. They've offered existing MobileMe users free use of the MobileMe service for a year as the product is morphed, and they will be able to transfer to iCloud when it is launched in the autumn.  On June 30, 2012, MobileMe will die, and Apple's iWeb is also soon to join iTools and .MAC in the hereafter. So why get excited about iCloud? That all depends on the level of PC integration. Whereas iOS5 machines will be full participants in the new world of data-sharing (Sorry iPod Touch users) what about .NET libraries? There is talk of synchronising 'My Pictures' libraries with iOS5 and iMac machines, but little more detail as yet. Apple has a lot to prove with iCloud and anyone with actual experience of their past attempts to get into cloud services will be wary.

    Read the article

  • Why are you doing this? [closed]

    - by Nicholas Lawson
    I am working on a story that I am going to be querying to several magazines in my hometown, about the work being done by the AXR group. This is a group of people who have networked online and are working on developing a higher-level syntax structure than CSS and HTML currently offer. I am covering this as a story because I see potential in it as a human interest story in cosmopolitan society. I have been asked by the group to pose this question to you, and I would appreciate any and all comments you would have on the following...

    To AXR: So when does the internet become finished? At what point does a computer scientist say to himself, "My job here is finished... the internet is complete"? When is the internet ready to be more about the display of content than the uploading of new websites or computer tech? You are embarking upon a sixty-year project every day you work with this internet. What drives you? Why are you spending your hard-earned hours working on the code to this computer?

    I spend thirty hours a week online because I love the writing, and I know what would make the internet better: ease of use. I know it is difficult to program, but I see some very elegant solutions online. In this early inception phase of your programming development for this HSS prototype, I would like to know why I do not see you programmers asking questions such as: what would make the end user's life the easiest when using this code? I know you can solve the problem, but an evolution forward would be simple; not simple to a computer scientist, but simple to use for a career janitor. If you could solve the problem of alleviating the stress of using a computer, you could get better content out of the computer. Right now the main problem is that the best content is in the hands of the people least likely to use the computer, and the simpler you make the computer to use, the better the content collection will be in the long run.

    That is not what I want to talk about, though. Why are you writing code when you could be writing stories? I know the computer is worthless without content, so I build content. I know the book is worthless without the combinations of words in it; I know the television is worthless without the television news anchor or the actor. What I want to know from you folks, in a very journalistic sense, is why you are even bothering to write code for a machine that has, I would dare say, only made our lives less interesting. Why are you feeding the beast your time when you could be writing stories, or being an actor or musician or auto mechanic? Why code? Why this machine? What do you love about it? What do you hate about it? What do you wonder about it?

    I want to know so that, starting out, I know how to further shape my questions with AXR. I want the full story, I want the real answers, and I want to know why you are doing this. It would make for great writing if you could elucidate on this point.

    Read the article

  • Setup and configure a MVC4 project for Cloud Service(web role) and SQL Azure

    - by MagnusKarlsson
    I aim to keep this blog post updated and to add related posts to it. Since there are a lot of these out there, I link to others that have done much the same before me, a kind of blog-DRY pattern that I'm aiming for. I also keep all mistakes and misconceptions for others to see. As an example: if I hit a stack trace, I will google it if I don't directly figure out the reason for it. I will then probably take the most plausible result and try it out. If it fails because I misinterpreted the error, I will not delete it from the log but keep it for future reference and for others to see. That way people who find this blog can see multiple solutions for indexed stack traces, and I can better remember how to do stuff. To avoid my errors, I recommend you read through it all before going from start to finish.

    The steps:
    1. Setup the project in VS2012 (msdn blog).
    2. Setup the Azure services (half of the mpspartners.com blog).
    3. Setup connection strings and configuration files (msdn blog + notes).
    4. Export certificates.
    5. Create the Azure package from VS2012 and deploy to staging (same steps as for production).
    6. Connection string error.

    1. Set up the Visual Studio project: http://blogs.msdn.com/b/avkashchauhan/archive/2011/11/08/developing-asp-net-mvc4-based-windows-azure-web-role.aspx

    2. Then log in to Azure to set up the services. Stop following this guide at the "publish website" part, since we'll be uploading a package: http://www.mpspartners.com/2012/09/ConfiguringandDeployinganMVC4applicationasaCloudServicewithAzureSQLandStorage/

    3. When set up (connection strings for debug and release and all), follow this guide to set up the configuration files: http://msdn.microsoft.com/en-us/library/windowsazure/hh369931.aspx
    Trying to package our application at this step will generate the following warning:

        3>MvcWebRole1(0,0): warning WAT170: The configuration setting 'Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString' is set up to use the local storage emulator for role 'MvcWebRole1' in configuration file 'ServiceConfiguration.Cloud.cscfg'. To access Windows Azure storage services, you must provide a valid Windows Azure storage connection string.

    Right click the web role under Roles in the solution explorer and choose Properties. Choose "Service configuration: Cloud". At "Specify storage account credentials" we will copy/paste our account name and key from the Azure management portal. (Screenshot 3.1)

    4. Right click Remote Desktop Configuration, select the certificate and export it to a file. We need to allow it in the portal manager. (Screenshot 4.1)

    5. Now right click the cloud project and select Package. (Screenshot 5.1: packaging dialogue box; 5.2: package success.) Now copy the path to the packaged file and go to the management portal again. Click your web role and choose staging (or production). Upload. (Screenshot 5.3) Tick the box about the single instance if that's what you want or you don't know what it means; otherwise the following will happen (see image 4.6). When you have clicked the accept button you will see the following screen with some green indicators down at the right corner. Click them if you want to see status. (Screenshots 5.4 to 5.6: dialogue box and information screen.) "Failed to deploy application. The upload application has at least one role with only one instance. We recommend that you deploy at least two instances per role to ensure high availability in case one of the instances becomes unavailable." To fix, go back to the tick box in step 5.4.
    If you forgot to (or just didn't know you were supposed to) export your certificates, the following error will occur. (Side note: the following thread suggests preventing it by choosing "Enable Remote Desktop for all roles" when right-clicking BIAB and choosing "Package", but in my case it was the missing certificates.) I found the solution here: http://social.msdn.microsoft.com/Forums/en-US/dotnetstocktradersampleapplication/thread/0e94c2b5-463f-4209-86b9-fc257e0678cd (Screenshots 5.7 and 5.8: success; 5.9: nice URL and all. More on that at another blog post.)

    6. If you try to log in and get an error (error screenshot omitted here): when this error occurs, many web sites suggest it is because you need http://nuget.org/packages/Microsoft.AspNet.Providers.LocalDB or http://nuget.org/packages/Microsoft.AspNet.Providers. But it can also be that you don't have the correct setup for converting connection strings between your web.config and your debug.web.config (or release.web.config, whichever you're using). Run as suggested in the "ordinary" project in your solution. Go to the management portal and click update.

    Read the article

  • Uploadify with ruby on rails 'bad content body' 500 Internal Server Error

    - by Mr_Nizzle
    I'm getting this error in my development log while Uploadify is uploading the file, and in the view I get an 'IO Error' beside the filename.

        /!\ FAILSAFE /!\  Thu Mar 18 11:54:53 -0500 2010
          Status: 500 Internal Server Error
          bad content body
            /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/utils.rb:351:in `parse_multipart'
            /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/utils.rb:323:in `loop'
            /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/utils.rb:323:in `parse_multipart'
            /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/request.rb:133:in `POST'
            /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/methodoverride.rb:15:in `call'
            /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/params_parser.rb:15:in `call'
            /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/session/cookie_store.rb:93:in `call'
            /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/reloader.rb:29:in `call'
            /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/failsafe.rb:26:in `call'
            /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call'
            /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `synchronize'
            /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call'
            /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/dispatcher.rb:106:in `call'
            /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/content_length.rb:13:in `call'
            /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/fastcgi.rb:58:in `serve'
            /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:103:in `process_request'
            /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:153:in `with_signal_handler'
            /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:101:in `process_request'
            /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:78:in `process_each_request'
            /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:77:in `each'
            /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:77:in `process_each_request'
            /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:76:in `catch'
            /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:76:in `process_each_request'
            /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:51:in `process!'
            /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:23:in `process!'
            dispatch.fcgi:24

    Any idea on this?

    Read the article

  • uploadify scriptData problem

    - by elpaso66
    Hi, I'm having problems with scriptData on Uploadify. I'm pretty sure the config syntax is fine, but whatever I do, scriptData is not passed to the upload script. I tested in both FF and Chrome with Flash v. Shockwave Flash 9.0 r31. This is the config:

        $(document).ready(function() {
          $('#id_file').uploadify({
            'uploader'       : '/media/filebrowser/uploadify/uploadify.swf',
            'script'         : '/admin/filebrowser/upload_file/',
            'scriptData'     : {'session_key': 'e1b552afde044bdd188ad51af40cfa8e'},
            'checkScript'    : '/admin/filebrowser/check_file/',
            'cancelImg'      : '/media/filebrowser/uploadify/cancel.png',
            'auto'           : false,
            'folder'         : '',
            'multi'          : true,
            'fileDesc'       : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;',
            'fileExt'        : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;',
            'sizeLimit'      : 10485760,
            'scriptAccess'   : 'sameDomain',
            'queueSizeLimit' : 50,
            'simUploadLimit' : 1,
            'width'          : 300,
            'height'         : 30,
            'hideButton'     : false,
            'wmode'          : 'transparent',
            translations     : {
              browseButton: 'BROWSE',
              error: 'An Error occured',
              completed: 'Completed',
              replaceFile: 'Do you want to replace the file',
              unitKb: 'KB',
              unitMb: 'MB'
            }
          });

          $('input:submit').click(function(){
            $('#id_file').uploadifyUpload();
            return false;
          });
        });

    I checked that other values (the file name) are passed correctly, but session_key is not. This is the decorator code from django-filebrowser; you can see it checks request.POST.get('session_key'), and the problem is that request.POST is empty.

        def flash_login_required(function):
            """
            Decorator to recognize a user by its session. Used for Flash-Uploading.
            """
            def decorator(request, *args, **kwargs):
                try:
                    engine = __import__(settings.SESSION_ENGINE, {}, {}, [''])
                except:
                    import django.contrib.sessions.backends.db
                    engine = django.contrib.sessions.backends.db
                print request.POST
                session_data = engine.SessionStore(request.POST.get('session_key'))
                user_id = session_data['_auth_user_id']
                # will return 404 if the session ID does not resolve to a valid user
                request.user = get_object_or_404(User, pk=user_id)
                return function(request, *args, **kwargs)
            return decorator

    Read the article

  • Windows Azure : Storage Client Exception Unhandled

    - by veda
    I am writing code to upload large files into blobs using blocks... When I tested it, it gave me a StorageClientException. It stated: One of the request inputs is out of range. I got this exception on this line of the code:

        blob.PutBlock(block, ms, null);

    Here is my code:

        protected void ButUploadBlocks_click(object sender, EventArgs e)
        {
            // store the uploaded file as blob storage
            if (uplFileUpload.HasFile)
            {
                name = uplFileUpload.FileName;
                byte[] byteArray = uplFileUpload.FileBytes;
                Int64 contentLength = byteArray.Length;
                int numBytesPerBlock = 250 * 1024; // 250KB per block
                int blocksCount = (int)Math.Ceiling((double)contentLength / numBytesPerBlock); // number of blocks
                MemoryStream ms;
                List<string> BlockIds = new List<string>();
                string block;
                int offset = 0;

                // get reference to the cloud blob container
                CloudBlobContainer blobContainer = cloudBlobClient.GetContainerReference("documents");

                // set the name for the uploaded file
                string UploadDocName = name;

                // get the blob reference and set the metadata properties
                CloudBlockBlob blob = blobContainer.GetBlockBlobReference(UploadDocName);
                blob.Properties.ContentType = uplFileUpload.PostedFile.ContentType;

                for (int i = 0; i < blocksCount; i++, offset = offset + numBytesPerBlock)
                {
                    block = Convert.ToBase64String(BitConverter.GetBytes(i));
                    ms = new MemoryStream();
                    ms.Write(byteArray, offset, numBytesPerBlock);
                    blob.PutBlock(block, ms, null);
                    BlockIds.Add(block);
                }
                blob.PutBlockList(BlockIds);
                blob.Metadata["FILETYPE"] = "text";
            }
        }

    Can anyone tell me how to solve it...
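    For comparison only, here is a minimal sketch of the same block-upload loop that sizes the final block to the bytes remaining and streams each slice from position zero. It reuses the StorageClient-era types from the question; this is an illustrative sketch, not necessarily the fix for the exception above:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using Microsoft.WindowsAzure.StorageClient;

        public static class BlockUploadSketch
        {
            // Uploads data in 250KB blocks, sizing the last block to whatever bytes remain.
            public static void UploadInBlocks(CloudBlockBlob blob, byte[] data)
            {
                const int bytesPerBlock = 250 * 1024;
                var blockIds = new List<string>();

                for (int i = 0, offset = 0; offset < data.Length; i++, offset += bytesPerBlock)
                {
                    int blockSize = Math.Min(bytesPerBlock, data.Length - offset);

                    // Block IDs must be valid base64 and the same length for every block;
                    // base64 over the 4-byte counter satisfies both.
                    string blockId = Convert.ToBase64String(BitConverter.GetBytes(i));

                    // A MemoryStream over the slice starts at position 0, so PutBlock reads the whole chunk.
                    using (var ms = new MemoryStream(data, offset, blockSize))
                    {
                        blob.PutBlock(blockId, ms, null);
                    }
                    blockIds.Add(blockId);
                }

                blob.PutBlockList(blockIds);
            }
        }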

    Read the article

  • Updating a status on a Winform in BackgroundWorker

    - by Mike Wills
    I have a multi-step BackgroundWorker process. I use a marquee progress bar because several of these steps run on an iSeries server, so there isn't any good way to determine a percentage. What I am envisioning is a label that updates after every step. How would you recommend updating a label on a WinForm to reflect each step?

    Figured I would add a bit more. I call some CL and RPG programs via a stored procedure on an iSeries (or IBM i, or AS/400, or a midrange computer running OS/400... er... i5/OS (damn you IBM for not keeping the same name year-to-year)). Anyway, I have to wait until each step is fully complete before I can continue on the WinForm side. I was thinking of sending feedback to the user giving the major steps:

    - Dumping data to iSeries
    - Running month-end
    - Creating reports
    - Uploading final results

    I probably should have given this in the beginning. Sorry about that. I try to keep my questions general enough for others to make use of later rather than my specific task.
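    One common way to do this (a minimal sketch; the form, label, and step bodies below are hypothetical, not from the original post) is to pass each step's description as the userState argument of ReportProgress and update the label in the ProgressChanged handler, which runs on the UI thread:

        using System.ComponentModel;
        using System.Windows.Forms;

        public class MonthEndForm : Form
        {
            private readonly BackgroundWorker worker = new BackgroundWorker();
            private readonly Label lblStatus = new Label { Dock = DockStyle.Bottom }; // hypothetical status label

            public MonthEndForm()
            {
                Controls.Add(lblStatus);
                worker.WorkerReportsProgress = true; // required before calling ReportProgress

                worker.DoWork += (s, e) =>
                {
                    var bw = (BackgroundWorker)s;
                    bw.ReportProgress(0, "Dumping data to iSeries");
                    // ... call the stored procedure and wait for it to finish ...
                    bw.ReportProgress(0, "Running month-end");
                    // ...
                    bw.ReportProgress(0, "Creating reports");
                    // ...
                    bw.ReportProgress(0, "Uploading final results");
                    // ...
                };

                worker.ProgressChanged += (s, e) =>
                {
                    // ProgressChanged is raised on the UI thread, so touching the label is safe here.
                    lblStatus.Text = (string)e.UserState;
                };
            }

            public void Run()
            {
                worker.RunWorkerAsync(); // kick off the month-end process
            }
        }

    The marquee progress bar can stay exactly as it is; the label simply narrates which of the four steps is currently running.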

    Read the article

  • Paperclip and tempfile with Rails

    - by Eric Koslow
    I'm trying to write a Rails application where users can upload images, but Paperclip doesn't seem to be working for me. I've gone through all the basic steps (added has_attached_file, the migration, making the form multipart) but I keep getting the same error whenever I try uploading an image:

        can't convert nil into Integer

    Looking at the top of the stack:

        ...rails3/lib/paperclip/processor.rb:46:in `sprintf'
        ...rails3/lib/paperclip/processor.rb:46:in `make_tmpname'
        .../ruby-1.9.2-head/lib/ruby/1.9.1/tmpdir.rb:154:in `create'
        .../ruby-1.9.2-head/lib/ruby/1.9.1/tempfile.rb:134:in `initialize'

    It seems the problem is in the tempfile. My code:

    _form.rb

        <%= form_for @high_school, :html => {:multipart => true} do |f| %>
          <%= f.error_messages %>
          ...
          <div class="field">
            <%= f.file_field :photo %>
          </div>
          <div class="actions">
            <%= f.submit %>
          </div>
        <% end %>

    model/high_school.rb

        ...
        validates_length_of :password, :minimum => 4, :allow_blank => true
        has_attached_file :photo
        has_many :students
        ...

    Is this a known problem? I basically followed the instructions from GitHub to the letter. My environment: Rails 3 and Ruby 1.9.2dev. Thank you!

    Read the article

  • iOS: How to upload a file to a specific Google Drive folder using the Google Drive SDK library

    - by loganathan
    I integrated the Google Drive SDK with my iOS app, but I do not know how to upload a file to a specific Google Drive folder. Here is the code I am using to upload the file, but it uploads the file to my Google Drive root folder. Can anyone share code to upload a file to a specific Google Drive folder?

    My code:

        -(void)uploadFileToGoogleDrive:(NSString *)fileName
        {
            GTLDriveFile *driveFile = [[[GTLDriveFile alloc] init] autorelease];
            driveFile.mimeType = @"application/pdf";
            driveFile.originalFilename = @"test.doc";
            driveFile.title = @"test.doc";

            NSString *filePath = [LocalFilesDetails getUserDocumentFullPathForFileName:fileName
                                                                      isSignedDocument:YES];
            GTLUploadParameters *uploadParameters =
                [GTLUploadParameters uploadParametersWithData:[NSData dataWithContentsOfFile:filePath]
                                                     MIMEType:@"application/pdf"];
            GTLQueryDrive *query = [GTLQueryDrive queryForFilesInsertWithObject:driveFile
                                                               uploadParameters:uploadParameters];

            [self.driveService executeQuery:query
                          completionHandler:^(GTLServiceTicket *ticket, GTLDriveFile *updatedFile, NSError *error) {
                if (error == nil) {
                    NSLog(@"\n\nfile uploaded into google drive\\<my_folder> folder");
                } else {
                    NSLog(@"\n\nfile upload failed google drive\\<my_folder> folder");
                }
            }];
        }

    Read the article

< Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >