Search Results

Search found 38961 results on 1559 pages for 'boost function'.

Page 112/1559 | < Previous Page | 108 109 110 111 112 113 114 115 116 117 118 119  | Next Page >

  • Instantiating class with custom allocator in shared memory

    - by recipriversexclusion
    I'm pulling my hair due to the following problem: I am following the example given in boost.interprocess documentation to instantiate a fixed-size ring buffer buffer class that I wrote in shared memory. The skeleton constructor for my class is: template<typename ItemType, class Allocator > SharedMemoryBuffer<ItemType, Allocator>::SharedMemoryBuffer( unsigned long capacity ){ m_capacity = capacity; // Create the buffer nodes. m_start_ptr = this->allocator->allocate(); // allocate first buffer node BufferNode* ptr = m_start_ptr; for( int i = 0 ; i < this->capacity()-1; i++ ) { BufferNode* p = this->allocator->allocate(); // allocate a buffer node } } My first question: Does this sort of allocation guarantee that the buffer nodes are allocated in contiguous memory locations, i.e. when I try to access the n'th node from address m_start_ptr + n*sizeof(BufferNode) in my Read() method would it work? If not, what's a better way to keep the nodes, creating a linked list? My test harness is the following: // Define an STL compatible allocator of ints that allocates from the managed_shared_memory. // This allocator will allow placing containers in the segment typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator; //Alias a vector that uses the previous STL-like allocator so that allocates //its values from the segment typedef SharedMemoryBuffer<int, ShmemAllocator> MyBuf; int main(int argc, char *argv[]) { shared_memory_object::remove("MySharedMemory"); //Create a new segment with given name and size managed_shared_memory segment(create_only, "MySharedMemory", 65536); //Initialize shared memory STL-compatible allocator const ShmemAllocator alloc_inst (segment.get_segment_manager()); //Construct a buffer named "MyBuffer" in shared memory with argument alloc_inst MyBuf *pBuf = segment.construct<MyBuf>("MyBuffer")(100, alloc_inst); } This gives me all kinds of compilation errors related to templates for the last statement. What am I doing wrong?
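    Two observations, offered as a sketch rather than drop-in code (the member names and layout below are invented): separate calls to allocate() give no contiguity guarantee, so the nodes should be linked explicitly (with offset_ptr, since raw addresses are not stable across processes); and the compile errors on segment.construct<MyBuf>("MyBuffer")(100, alloc_inst) are consistent with the constructor not accepting the allocator that is passed as the second argument.

        #include <boost/interprocess/managed_shared_memory.hpp>
        #include <boost/interprocess/allocators/allocator.hpp>
        #include <boost/interprocess/offset_ptr.hpp>

        template <typename ItemType, class Allocator>
        class SharedMemoryBuffer
        {
            struct BufferNode;   // forward declaration so the allocator can be rebound to it
            typedef typename Allocator::template rebind<BufferNode>::other NodeAllocator;
            typedef typename NodeAllocator::pointer NodePtr;  // an offset_ptr, valid in any process

            struct BufferNode
            {
                ItemType item;
                NodePtr  next;   // explicit link: allocations are NOT guaranteed contiguous
            };

        public:
            // Accepting the allocator lets segment.construct<MyBuf>("MyBuffer")(100, alloc_inst) compile.
            SharedMemoryBuffer(unsigned long capacity, const Allocator& alloc)
                : m_alloc(alloc), m_capacity(capacity), m_start_ptr()
            {
                NodePtr prev;
                for (unsigned long i = 0; i < m_capacity; ++i) {
                    NodePtr p = m_alloc.allocate(1);  // raw node storage; construct contents separately
                    if (i == 0) m_start_ptr = p; else prev->next = p;
                    prev = p;
                }
                // Link the last node back to m_start_ptr here if the ring should be closed.
            }

        private:
            NodeAllocator m_alloc;
            unsigned long m_capacity;
            NodePtr       m_start_ptr;
        };

    With a signature like this, reads walk the next links instead of doing pointer arithmetic from m_start_ptr, which also answers the contiguity question: don't rely on it.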

    Read the article

  • Parallel version of loop not faster than serial version

    - by Il-Bhima
    I'm writing a program in C++ to perform a simulation of a particular system. For each timestep, the biggest part of the execution is taken up by a single loop. Fortunately this is embarrassingly parallel, so I decided to use Boost Threads to parallelize it (I'm running on a 2-core machine). I would expect a speedup close to 2 times the serial version, since there is no locking. However, I am finding that there is no speedup at all. I implemented the parallel version of the loop as follows: wake up the two threads (they are blocked on a barrier). Each thread then performs the following: atomically fetch and increment a global counter; retrieve the particle with that index; perform the computation on that particle, storing the result in a separate array; wait on a job-finished barrier. The main thread waits on the job-finished barrier. I used this approach since it should provide good load balancing (each computation may take a differing amount of time). I am really curious as to what could possibly cause this slowdown. I always read that atomic variables are fast, but now I'm starting to wonder whether they have performance costs of their own. If anybody has some ideas what to look for, or any hints, I would really appreciate it. I've been bashing my head on it for a week, and profiling has not revealed much.
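    One thing worth ruling out is contention on the shared counter itself: an atomic fetch-and-increment per particle forces every iteration of both threads to bounce the same cache line. Below is a minimal sketch of the usual alternative, block partitioning; Particle and compute() are made-up placeholders, not the asker's types.

        #include <cstddef>
        #include <vector>
        #include <boost/thread.hpp>
        #include <boost/ref.hpp>

        struct Particle { double x, y, z; };                            // placeholder type
        double compute(const Particle& p) { return p.x + p.y + p.z; }   // placeholder work

        // Each thread owns a contiguous, disjoint range: no atomics, no shared writes.
        void worker(const std::vector<Particle>& in, std::vector<double>& out,
                    std::size_t begin, std::size_t end)
        {
            for (std::size_t i = begin; i < end; ++i)
                out[i] = compute(in[i]);
        }

        void run_timestep(const std::vector<Particle>& particles,
                          std::vector<double>& results)
        {
            const std::size_t n = particles.size();
            boost::thread t(worker, boost::cref(particles), boost::ref(results),
                            std::size_t(0), n / 2);
            worker(particles, results, n / 2, n);   // main thread does the other half
            t.join();
        }

    If the per-particle work really is too uneven for a static split, grabbing a chunk of, say, 64 indices per atomic increment keeps the load balancing while amortising the synchronisation.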

    Read the article

  • What makes deployment successful for some users and unsuccessful for others?

    - by Julien
    I am trying to deploy a Visual C++ application (developed with Microsoft Visual Studio 2008) using a Setup and Deployment Project. After installation, users on some target computers get the following error message after launching the application executable: “This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix the problem.” Another user after installation could run the application properly. I cannot find the root cause of this problem, despite spending several hours on the Visual Studio help files and online forums (most postings date back to 2006). Does anyone at Stack Overflow have a suggestion? Thanks in advance. Additional details appear below. The application uses FLTK 1.1.9 for a GUI library, as well as some Boost 1.39 libraries (regex, lexical_cast, date_time, math). I made sure I am trying to deploy the release version (not the debug version) of the application. The Runtime library in the Code Generation settings is Multi-threaded DLL (/MD). The dependency walker of myapp.exe lists the following DLLs: wsock32.dll, comctl32.dll, kernel32.dll, user32.dll, gdi32.dll, shell32.dll, ole32.dll, mvcp90.dll, msvcr90.dll. In the Setup and Deployment Project, I add the following DLLs to the File System on Target Machine: fltkdlld.dll, and a folder named Microsoft.VC90.CRT with msvcm90.dll, msvcp90.dll, mcvcr90.dll and Microsoft.VC90.CRT.manifest. The installation process on the target computers getting the error message requires having the .Net Framework 3.5 installed first. Any suggestion? Thanks in advance!

    Read the article

  • What is the nicest way to parse this in C++?

    - by ereOn
    Hi, in my program I have a list of "server address" entries in the following format: host[:port] The brackets here indicate that the port is optional. host can be a hostname, an IPv4 or an IPv6 address. port, if present, can be a numeric port number or a service string (like "http" or "ssh"). If port is present and host is an IPv6 address, host must be in "bracket-enclosed" notation (example: [::1]). Here are some valid examples: localhost localhost:11211 127.0.0.1:http [::1]:11211 ::1 [::1] And an invalid example: ::1:80 // Invalid: Is this the IPv6 address ::1:80 and a default port, or the IPv6 address ::1 and the port 80? ::1:http // This is not ambiguous, but for simplicity's sake, let's consider this forbidden as well. My goal is to separate such entries into two parts (obviously host and port). I don't care if either the host or the port is invalid as long as it doesn't contain a : (290.234.34.34.5 is OK for host; it will be rejected in the next processing step); I just want to separate the two parts, or, if there is no port part, to know it somehow. I tried to do something with std::stringstream but everything I come up with seems hacky and not really elegant. How would you do this in C++? I don't mind answers in C, but C++ is preferred. Any Boost solution is welcome as well. Thank you.
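    For what it's worth, this can be done with plain std::string operations, without a full parser. A minimal sketch under the rules above (bracketed IPv6, at most one bare colon) follows; it assumes a malformed entry such as a missing ']' may produce garbage, since validation happens later anyway.

        #include <algorithm>
        #include <string>
        #include <utility>

        // Returns (host, port); port is empty when the entry has no port part.
        std::pair<std::string, std::string> split_host_port(const std::string& entry)
        {
            if (!entry.empty() && entry[0] == '[') {               // [::1] or [::1]:8080
                std::string::size_type close = entry.find(']');    // assumed to exist
                std::string host = entry.substr(1, close - 1);
                if (close + 1 < entry.size() && entry[close + 1] == ':')
                    return std::make_pair(host, entry.substr(close + 2));
                return std::make_pair(host, std::string());
            }
            // No brackets: exactly one ':' means host:port; anything else is a bare host
            // (including bare IPv6, which by the rules above cannot carry a port).
            if (std::count(entry.begin(), entry.end(), ':') == 1) {
                std::string::size_type colon = entry.find(':');
                return std::make_pair(entry.substr(0, colon), entry.substr(colon + 1));
            }
            return std::make_pair(entry, std::string());
        }

    For example, split_host_port("127.0.0.1:http") yields ("127.0.0.1", "http"), while split_host_port("::1") yields ("::1", "") with an empty port.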

    Read the article

  • Calling a method at a specific interval rate in C++

    - by Aetius
    This is really annoying me, as I have done it before, about a year ago, and I cannot for the life of me remember what library it was. Basically, the problem is that I want to be able to call a method a certain number of times, or for a certain period of time, at a specified interval. One example would be: call a method "x" starting from now, 10 times, once every 0.5 seconds. Alternatively, call method "x" starting from now, 10 times, until 5 seconds have passed. Now, I thought I used a Boost library for this functionality, but I can't seem to find it now and I'm feeling a bit annoyed. Unfortunately I can't look at the code again, as I'm not in possession of it any more. Alternatively, I could have dreamt this all up and it could have been proprietary code. Assuming there is nothing out there that does what I would like, what is currently the best way of producing this behaviour? It would need to be high-resolution, down to a millisecond. It doesn't matter whether or not it blocks the thread it is executed from. Thanks!
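    This may or may not be the library being remembered, but Boost.Asio's deadline_timer covers the use case. A minimal, blocking sketch (call x() 10 times at 500 ms intervals, scheduling against absolute deadlines so the rate doesn't drift):

        #include <iostream>
        #include <boost/asio.hpp>
        #include <boost/date_time/posix_time/posix_time.hpp>

        void x() { std::cout << "tick" << std::endl; }   // stand-in for the real method

        int main()
        {
            boost::asio::io_service io;
            boost::asio::deadline_timer timer(io);
            // Asio's default deadline_timer clock is UTC, so schedule against universal_time().
            boost::posix_time::ptime next =
                boost::posix_time::microsec_clock::universal_time();

            for (int i = 0; i < 10; ++i) {
                next += boost::posix_time::milliseconds(500);   // absolute deadline: no drift
                timer.expires_at(next);
                timer.wait();                                   // blocks the calling thread
                x();
            }
        }

    If blocking the calling thread is a problem, async_wait() with a handler that re-arms the timer gives the non-blocking variant.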

    Read the article

  • Loading specific files from arbitrary directories?

    - by Haydn V. Harach
    I want to load foo.txt. foo.txt might exist in the data/bar/ directory, or it might exist in the data/New Folder/ directory. There might be a different foo.txt in both of these directories, in which case I would want to either load one and ignore the other, according to some order I've sorted the directories by (perhaps manually, perhaps by date of creation), or else load them both and combine the results somehow. The latter (combining the results of both/all foo.txt files) is circumstantial and beyond the scope of this question, but something I want to be able to do in the future. I'm using SDL and boost::filesystem. I want to keep my list of dependencies as small as possible, and as cross-platform as possible. I'm guessing that my best bet would be to get a list of every directory (within the data/ folder), sort/filter this list, and then, when I go to load foo.txt, search for it in each potential directory? This sounds like it would be very inefficient if I have dozens of potential directories to search through every time. What's the best way to go about accomplishing this? Bonus: What if I want some of the directories to be archives? i.e. considering both data/foo/ and data/bar.zip to be valid, and pulling foobar.txt from either one without caring.
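    One common approach is to pay the directory scan once: walk the candidate directories at startup and build a map from file name to winning path, so each load is a single lookup rather than a scan. A minimal sketch with boost::filesystem is below; the ordering rule and the data/ layout are assumptions, and archives would later just add their entries to the same map.

        #include <cstddef>
        #include <map>
        #include <string>
        #include <vector>
        #include <boost/filesystem.hpp>

        namespace fs = boost::filesystem;

        // Later directories in 'ordered_dirs' override earlier ones, so sort the list
        // by whatever priority rule you settle on before calling this.
        std::map<std::string, fs::path> build_index(const std::vector<fs::path>& ordered_dirs)
        {
            std::map<std::string, fs::path> index;
            for (std::size_t d = 0; d < ordered_dirs.size(); ++d) {
                if (!fs::is_directory(ordered_dirs[d]))
                    continue;                                    // skip missing/odd entries
                for (fs::directory_iterator it(ordered_dirs[d]), end; it != end; ++it) {
                    if (fs::is_regular_file(it->status()))
                        index[it->path().filename().string()] = it->path();
                }
            }
            return index;
        }

        // Usage sketch: fs::path p = build_index(dirs)["foo.txt"];

    The index only needs rebuilding when the data/ tree changes, so dozens of directories stop mattering at load time.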

    Read the article

  • Safe way for getting/finding a vertex in a graph with custom properties -> good programming practice

    - by Shadow
    Hi, I am writing a Graph class using the Boost Graph Library. I use custom vertex and edge properties and a map to store/find the vertices/edges for a given property. I'm satisfied with how it works so far. However, I have a small problem where I'm not sure how to solve it nicely. The class provides a method Vertex getVertex(Vertexproperties v_prop) and a method bool hasVertex(Vertexproperties v_prop). The question now is: would you judge this as good programming practice in C++? My opinion is that I first have to check whether something is available before I can get it. So, before getting a vertex with a desired property, one has to check whether hasVertex() would return true for those properties. However, I would like to make getVertex() a bit more robust. At the moment it will segfault when one calls getVertex() directly without first checking whether the graph has a corresponding vertex. A first idea was to return a NULL pointer, or a pointer that points past the last stored vertex. For the latter, I haven't found out how to do this. But even with this "robust" version, one would have to check for correctness after getting a vertex, or one would still run into a segfault when dereferencing that vertex pointer, for example. Therefore I am wondering whether it is "ok" to let getVertex() segfault if one does not check for availability beforehand?
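    A third option besides "check first" and "segfault on misuse" is to make the miss visible in the return type. A minimal sketch with boost::optional follows; the class and member names here are made up, not the asker's actual Graph interface.

        #include <map>
        #include <boost/optional.hpp>

        template <typename VertexProperties, typename Vertex>
        class Graph
        {
        public:
            // Returns an empty optional when no vertex has these properties,
            // so the caller cannot dereference a missing vertex by accident.
            boost::optional<Vertex> findVertex(const VertexProperties& props) const
            {
                typename std::map<VertexProperties, Vertex>::const_iterator it =
                    m_lookup.find(props);
                if (it == m_lookup.end())
                    return boost::none;
                return it->second;
            }

        private:
            std::map<VertexProperties, Vertex> m_lookup;
        };

        // Usage: if (boost::optional<Vertex> v = g.findVertex(props)) { /* use *v */ }

    This keeps a single call per lookup and makes forgetting the check a compile-visible pattern rather than a runtime crash.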

    Read the article

  • function to org-sort by three (3) criteria: due date / priority / title

    - by lawlist
    Is anyone aware of an org-sort function / modification that can refile / organize a group of TODO so that it sorts them by three (3) criteria: first sort by due date, second sort by priority, and third sort by by title of the task? EDIT: If anyone can please help me to modify this so that undated TODO are sorted last, that would be greatly appreciated -- at the present time, undated TODO are not being sorted: ;; multiple sort (defun org-sort-multi (&rest sort-types) "Multiple sorts on a certain level of an outline tree, or plain list items. SORT-TYPES is a list where each entry is either a character or a cons pair (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and CHAR is one of the characters defined in `org-sort-entries-or-items'. Entries are applied in back to front order. Example: To sort first by TODO status, then by priority, then by date, then alphabetically (case-sensitive) use the following call: (org-sort-multi '(?d ?p ?t (t . ?a)))" (interactive) (dolist (x (nreverse sort-types)) (when (char-valid-p x) (setq x (cons nil x))) (condition-case nil (org-sort-entries (car x) (cdr x)) (error nil)))) ;; sort current level (defun lawlist-sort (&rest sort-types) "Sort the current org level. SORT-TYPES is a list where each entry is either a character or a cons pair (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and CHAR is one of the characters defined in `org-sort-entries-or-items'. Entries are applied in back to front order. Defaults to \"?o ?p\" which is sorted by TODO status, then by priority" (interactive) (when (equal mode-name "Org") (let ((sort-types (or sort-types (if (or (org-entry-get nil "TODO") (org-entry-get nil "PRIORITY")) '(?d ?t ?p) ;; date, time, priority '((nil . ?a)))))) (save-excursion (outline-up-heading 1) (let ((start (point)) end) (while (and (not (bobp)) (not (eobp)) (<= (point) start)) (condition-case nil (outline-forward-same-level 1) (error (outline-up-heading 1)))) (unless (> (point) start) (goto-char (point-max))) (setq end (point)) (goto-char start) (apply 'org-sort-multi sort-types) (goto-char end) (when (eobp) (forward-line -1)) (when (looking-at "^\\s-*$") ;; (delete-line) ) (goto-char start) ;; (dotimes (x ) (org-cycle)) )))))

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload Mode: Sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk Format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than series. - No significant performance improvement between parallel 4 threads and 8 threads. - Transform with binaries provides better performance than base64. - In all cases, chunk size in 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • C++ invalid reference problem

    - by Karol
    Hi all, I'm writing some callback implementation in C++. I have an abstract callback class, let's say: /** Abstract callback class. */ class callback { public: /** Executes the callback. */ void call() { do_call(); }; protected: /** Callback call implementation specific to derived callback. */ virtual void do_call() = 0; }; Each callback I create (accepting single-argument functions, double-argument functions...) is created as a mixin using one of the following: /** Makes the callback a single-argument callback. */ template <typename T> class singleArgumentCallback { protected: /** Callback argument. */ T arg; public: /** Constructor. */ singleArgumentCallback(T arg): arg(arg) { } }; /** Makes the callback a double-argument callback. */ template <typename T, typename V> class doubleArgumentCallback { protected: /** Callback argument 1. */ T arg1; /** Callback argument 2. */ V arg2; public: /** Constructor. */ doubleArgumentCallback(T arg1, V arg2): arg1(arg1), arg2(arg2) { } }; For example, a single-arg function callback would look like this: /** Single-arg callbacks. */ template <typename T> class singleArgFunctionCallback: public callback, protected singleArgumentCallback<T> { /** Callback. */ void (*callbackMethod)(T arg); public: /** Constructor. */ singleArgFunctionCallback(void (*callback)(T), T argument): singleArgumentCallback<T>(argument), callbackMethod(callback) { } protected: void do_call() { this->callbackMethod(this->arg); } }; For user convenience, I'd like to have a method that creates a callback without having the user think about details, so that one can call (this interface is not subject to change, unfortunately): void test3(float x) { std::cout << x << std::endl; } void test5(const std::string& s) { std::cout << s << std::endl; } make_callback(&test3, 12.0f)->call(); make_callback(&test5, "oh hai!")->call(); My current implementation of make_callback(...) is as follows: /** Creates a callback object. */ template <typename T, typename U> callback* make_callback( void (*callbackMethod)(T), U argument) { return new singleArgFunctionCallback<T>(callbackMethod, argument); } Unfortunately, when I call make_callback(&test5, "oh hai!")->call(); I get an empty string on the standard output. I believe the problem is that the reference gets out of scope after callback initialization. I tried using pointers and references, but it's impossible to have a pointer/reference to reference, so I failed. The only solution I had was to forbid substituting reference type as T (for example, T cannot be std::string&) but that's a sad solution since I have to create another singleArgCallbackAcceptingReference class accepting a function pointer with following signature: void (*callbackMethod)(T& arg); thus, my code gets duplicated 2^n times, where n is the number of arguments of a callback function. Does anybody know any workaround or has any idea how to fix it? Thanks in advance!
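    One workaround, sketched here by reusing the callback base class from the question, is to strip the reference from T when storing the argument, so the callback owns a copy instead of holding a reference to a temporary that has already been destroyed:

        #include <boost/type_traits/remove_const.hpp>
        #include <boost/type_traits/remove_reference.hpp>

        template <typename T>
        class singleArgFunctionCallback : public callback
        {
            // For T = const std::string&, stored_type is plain std::string.
            typedef typename boost::remove_const<
                typename boost::remove_reference<T>::type>::type stored_type;

            void (*callbackMethod)(T);
            stored_type arg;                      // stored by value, so no dangling reference

        public:
            singleArgFunctionCallback(void (*cb)(T), stored_type argument)
                : callbackMethod(cb), arg(argument) {}

        protected:
            void do_call() { this->callbackMethod(this->arg); }  // lvalue binds to T or const T&
        };

    With this change make_callback(&test5, "oh hai!") still deduces T as const std::string&, but the callback stores a std::string, and the same do_call() works for both value and reference parameters, so no second class template is needed.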

    Read the article

  • I don't understand how call_once works

    - by SABROG
    Please help me understand how work call_once Here is thread-safe code. I don't understand why this need Thread Local Storage and global_epoch variables. Variable _fast_pthread_once_per_thread_epoch can be changed to constant/enum like {FAST_PTHREAD_ONCE_INIT, BEING_INITIALIZED, FINISH_INITIALIZED}. Why needed count calls in global_epoch? I think this code can be rewriting with logc: if flag FINISH_INITIALIZED do nothing, else go to block with mutexes and this all. #ifndef FAST_PTHREAD_ONCE_H #define FAST_PTHREAD_ONCE_H #include #include typedef sig_atomic_t fast_pthread_once_t; #define FAST_PTHREAD_ONCE_INIT SIG_ATOMIC_MAX extern __thread fast_pthread_once_t _fast_pthread_once_per_thread_epoch; #ifdef __cplusplus extern "C" { #endif extern void fast_pthread_once( pthread_once_t *once, void (*func)(void) ); inline static void fast_pthread_once_inline( fast_pthread_once_t *once, void (*func)(void) ) { fast_pthread_once_t x = *once; /* unprotected access */ if ( x _fast_pthread_once_per_thread_epoch ) { fast_pthread_once( once, func ); } } #ifdef __cplusplus } #endif #endif FAST_PTHREAD_ONCE_H Source fast_pthread_once.c The source is written in C. The lines of the primary function are numbered for reference in the subsequent correctness argument. #include "fast_pthread_once.h" #include static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER; /* protects global_epoch and all fast_pthread_once_t writes */ static pthread_cond_t cv = PTHREAD_COND_INITIALIZER; /* signalled whenever a fast_pthread_once_t is finalized */ #define BEING_INITIALIZED (FAST_PTHREAD_ONCE_INIT - 1) static fast_pthread_once_t global_epoch = 0; /* under mu */ __thread fast_pthread_once_t _fast_pthread_once_per_thread_epoch; static void check( int x ) { if ( x == 0 ) abort(); } void fast_pthread_once( fast_pthread_once_t *once, void (*func)(void) ) { /*01*/ fast_pthread_once_t x = *once; /* unprotected access */ /*02*/ if ( x _fast_pthread_once_per_thread_epoch ) { /*03*/ check( pthread_mutex_lock(µ) == 0 ); /*04*/ if ( *once == FAST_PTHREAD_ONCE_INIT ) { /*05*/ *once = BEING_INITIALIZED; /*06*/ check( pthread_mutex_unlock(µ) == 0 ); /*07*/ (*func)(); /*08*/ check( pthread_mutex_lock(µ) == 0 ); /*09*/ global_epoch++; /*10*/ *once = global_epoch; /*11*/ check( pthread_cond_broadcast(&cv;) == 0 ); /*12*/ } else { /*13*/ while ( *once == BEING_INITIALIZED ) { /*14*/ check( pthread_cond_wait(&cv;, µ) == 0 ); /*15*/ } /*16*/ } /*17*/ _fast_pthread_once_per_thread_epoch = global_epoch; /*18*/ check (pthread_mutex_unlock(µ) == 0); } } This code from BOOST: #ifndef BOOST_THREAD_PTHREAD_ONCE_HPP #define BOOST_THREAD_PTHREAD_ONCE_HPP // once.hpp // // (C) Copyright 2007-8 Anthony Williams // // Distributed under the Boost Software License, Version 1.0. 
(See // accompanying file LICENSE_1_0.txt or copy at // http://www.boost.org/LICENSE_1_0.txt) #include #include #include #include "pthread_mutex_scoped_lock.hpp" #include #include #include namespace boost { struct once_flag { boost::uintmax_t epoch; }; namespace detail { BOOST_THREAD_DECL boost::uintmax_t& get_once_per_thread_epoch(); BOOST_THREAD_DECL extern boost::uintmax_t once_global_epoch; BOOST_THREAD_DECL extern pthread_mutex_t once_epoch_mutex; BOOST_THREAD_DECL extern pthread_cond_t once_epoch_cv; } #define BOOST_ONCE_INITIAL_FLAG_VALUE 0 #define BOOST_ONCE_INIT {BOOST_ONCE_INITIAL_FLAG_VALUE} // Based on Mike Burrows fast_pthread_once algorithm as described in // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2444.html template void call_once(once_flag& flag,Function f) { static boost::uintmax_t const uninitialized_flag=BOOST_ONCE_INITIAL_FLAG_VALUE; static boost::uintmax_t const being_initialized=uninitialized_flag+1; boost::uintmax_t const epoch=flag.epoch; boost::uintmax_t& this_thread_epoch=detail::get_once_per_thread_epoch(); if(epoch #endif I right understand, boost don't use atomic operation, so code from boost not thread-safe?

    Read the article

  • How to execute scalar function using Enterprise Library?

    - by Vadim
    I'm having trouble executing a scalar function using Enterprise Library 5.0. The code looks something like this: someDb.ExecuteScalar(CommandType.Text, "SELECT dbo.MyFunction('param')"); When the code is executed, I get the following error: Cannot find either column "dbo" or the user-defined function or aggregate "dbo.MyFunction", or the name is ambiguous.

    Read the article

  • PHP SimpleXML recursive function to list children and attributes

    - by Phill Pafford
    I need some help on the SimpleXML calls for a recursive function that lists the elements name and attributes. Making a XML config file system but each script will have it's own config file as well as a new naming convention. So what I need is an easy way to map out all the elements that have attributes, so like in example 1 I need a simple way to call all the processes but I don't know how to do this without hard coding the elements name is the function call. Is there a way to recursively call a function to match a child element name? I did see the xpath functionality but I don't see how to use this for attributes. Any ideas? Also does the XML in the examples look correct? can I structure my XML like this? Example 1: <application> <processes> <process id="123" name="run batch A" /> <process id="122" name="run batch B" /> <process id="129" name="run batch C" /> </processes> <connections> <databases> <database usr="test" pss="test" hst="test" dbn="test" /> </databases> <shells> <ssh usr="test" pss="test" hst="test-2" /> <ssh usr="test" pss="test" hst="test-1" /> </shells> </connections> </application> Example 2: <config> <queues> <queue id="1" name="test" /> <queue id="2" name="production" /> <queue id="3" name="error" /> </queues> </config> Pseudo code: // Would return matching process id getProcess($process_id) { return the process attributes as array that are in the XML } // Would return matching DBN (database name) getDatabase($database_name) { return the database attributes as array that are in the XML } // Would return matching SSH Host getSSHHost($ssh_host) { return the ssh attributes as array that are in the XML } // Would return matching SSH User getSSHUser($ssh_user) { return the ssh attributes as array that are in the XML } // Would return matching Queue getQueue($queue_id) { return the queue attributes as array that are in the XML } EDIT: Can I pass two parms? on the first method you have suggested @Gordon public function findProcessById($id, $name) { $attr = false; $el = $this->xml->xpath("//process[@id='$id']"); // How do I also filter by the name? if($el && count($el) === 1) { $attr = (array) $el[0]->attributes(); $attr = $attr['@attributes']; } return $attr; }

    Read the article

  • GInfoWindowTab function equivalent in Google Map API v3

    - by TTCG
    Is there any function in Google Maps API v3 to show multiple-tab info windows like the following example? Example This example is done using Google Maps API v2. I am wondering what the equivalent function is in Google Maps API v3. I can only find google.maps.InfoWindow, which is a popup window without tabs. Thanks.

    Read the article

  • getGridParam is not a function

    - by AMAR
    Get selected ids: jQuery("#m1").click( function() { var s; s = jQuery("#list4").getGridParam('selarrrow'); alert(s); }); I have data in the grid "#list4", but it is giving the error: getGridParam is not a function. Please help me sort out the problem.

    Read the article

  • Reverse function of HttpUtility.ParseQueryString

    - by Palani
    The .NET System.Web.HttpUtility class has the following function to parse a query string into a NameValueCollection: public static NameValueCollection ParseQueryString(string query); MSDN: http://msdn.microsoft.com/en-us/library/ms150046.aspx Is there any function to do the reverse, i.e. convert a NameValueCollection into a query string?

    Read the article

  • Open source alternative to MATLAB's fmincon function?

    - by dF
    Is there an open-source alternative to MATLAB's fmincon function for constrained nonlinear optimization? I'm rewriting a MATLAB program to use Python / NumPy / SciPy and this is the only function I haven't found an equivalent to. A NumPy-based solution would be ideal, but any language will do.

    Read the article

  • C# alternative for javascript escape function

    - by FosterZ
    Hi, what is an alternative to the JavaScript escape function in C#? For example, the string "Hi Foster's i'm missing /you" will give "Hi%20Foster%27s%20i%27m%20missing%20/you" if we use the JavaScript escape function, but what is the alternative in C#? I have searched for it but with no luck.

    Read the article

  • Python3 function annotations for type hinting versus Boo

    - by b0lt
    I've started on a medium-sized project in Python, and I decided to use Python 3 because I'm not using any large external libraries and py3k has some nice new syntactic sugar and, more importantly, function annotations. However, it seems like none of WingIDE, PyDev, or PyCharm actually has any support for type hinting using function annotations. If I want something resembling static typing in Python, is switching to Boo a reasonable option?

    Read the article

  • Using scipy.interpolate.splrep function

    - by Koustav Ghosal
    I am trying to fit a cubic spline to a given set of points. My points are not ordered. I CANNOT sort or reorder the points, since I need that information. But since the function scipy.interpolate.splrep works only on non-duplicate and monotonically increasing points I have defined a function that maps the x-coordinates to a monotonically increasing space. My old points are: xpoints=[4913.0, 4912.0, 4914.0, 4913.0, 4913.0, 4913.0, 4914.0, 4915.0, 4918.0, 4921.0, 4925.0, 4932.0, 4938.0, 4945.0, 4950.0, 4954.0, 4955.0, 4957.0, 4956.0, 4953.0, 4949.0, 4943.0, 4933.0, 4921.0, 4911.0, 4898.0, 4886.0, 4874.0, 4865.0, 4858.0, 4853.0, 4849.0, 4848.0, 4849.0, 4851.0, 4858.0, 4864.0, 4869.0, 4877.0, 4884.0, 4893.0, 4903.0, 4913.0, 4923.0, 4935.0, 4947.0, 4959.0, 4970.0, 4981.0, 4991.0, 5000.0, 5005.0, 5010.0, 5015.0, 5019.0, 5020.0, 5021.0, 5023.0, 5025.0, 5027.0, 5027.0, 5028.0, 5028.0, 5030.0, 5031.0, 5033.0, 5035.0, 5037.0, 5040.0, 5043.0] ypoints=[10557.0, 10563.0, 10567.0, 10571.0, 10575.0, 10577.0, 10578.0, 10581.0, 10582.0, 10582.0, 10582.0, 10581.0, 10578.0, 10576.0, 10572.0, 10567.0, 10560.0, 10550.0, 10541.0, 10531.0, 10520.0, 10511.0, 10503.0, 10496.0, 10490.0, 10487.0, 10488.0, 10488.0, 10490.0, 10495.0, 10504.0, 10513.0, 10523.0, 10533.0, 10542.0, 10550.0, 10556.0, 10559.0, 10560.0, 10559.0, 10555.0, 10550.0, 10543.0, 10533.0, 10522.0, 10514.0, 10505.0, 10496.0, 10490.0, 10486.0, 10482.0, 10481.0, 10482.0, 10486.0, 10491.0, 10497.0, 10506.0, 10516.0, 10524.0, 10534.0, 10544.0, 10552.0, 10558.0, 10564.0, 10569.0, 10573.0, 10576.0, 10578.0, 10581.0, 10582.0] Plots: The code for the mapping function and interpolation is: xnew=[] ynew=ypoints for c3,i in enumerate(xpoints): if np.isfinite(np.log(i*pow(2,c3))): xnew.append(np.log(i*pow(2,c3))) else: if c==0: xnew.append(np.random.random_sample()) else: xnew.append(xnew[c3-1]+np.random.random_sample()) xnew=np.asarray(xnew) ynew=np.asarray(ynew) constant1=10.0 nknots=len(xnew)/constant1 idx_knots = (np.arange(1,len(xnew)-1,(len(xnew)-2)/np.double(nknots))).astype('int') knots = [xnew[i] for i in idx_knots] knots = np.asarray(knots) int_range=np.linspace(min(xnew),max(xnew),len(xnew)) tck = interpolate.splrep(xnew,ynew,k=3,task=-1,t=knots) y1= interpolate.splev(int_range,tck,der=0) The code is throwing an error at the function interpolate.splrep() for some set of points like the above one. The error is: File "/home/neeraj/Desktop/koustav/res/BOS5/fit_spline3.py", line 58, in save_spline_f tck = interpolate.splrep(xnew,ynew,k=3,task=-1,t=knots) File "/usr/lib/python2.7/dist-packages/scipy/interpolate/fitpack.py", line 465, in splrep raise _iermessier(_iermess[ier][0]) ValueError: Error on input data But for other set of points it works fine. For example for the following set of points. 
xpoints=[1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1630.0, 1630.0, 1630.0, 1631.0, 1631.0, 1631.0, 1631.0, 1630.0, 1629.0, 1629.0, 1629.0, 1628.0, 1627.0, 1627.0, 1625.0, 1624.0, 1624.0, 1623.0, 1620.0, 1618.0, 1617.0, 1616.0, 1615.0, 1614.0, 1614.0, 1612.0, 1612.0, 1612.0, 1611.0, 1610.0, 1609.0, 1608.0, 1607.0, 1607.0, 1603.0, 1602.0, 1602.0, 1601.0, 1601.0, 1600.0, 1599.0, 1598.0] ypoints=[10570.0, 10572.0, 10572.0, 10573.0, 10572.0, 10572.0, 10571.0, 10570.0, 10569.0, 10565.0, 10564.0, 10563.0, 10562.0, 10560.0, 10558.0, 10556.0, 10554.0, 10551.0, 10548.0, 10547.0, 10544.0, 10542.0, 10541.0, 10538.0, 10534.0, 10532.0, 10531.0, 10528.0, 10525.0, 10522.0, 10519.0, 10517.0, 10516.0, 10512.0, 10509.0, 10509.0, 10507.0, 10504.0, 10502.0, 10500.0, 10501.0, 10499.0, 10498.0, 10496.0, 10491.0, 10492.0, 10488.0, 10488.0, 10488.0, 10486.0, 10486.0, 10485.0, 10485.0, 10486.0, 10483.0, 10483.0, 10482.0, 10480.0] Plots: Can anybody suggest what's happening ?? Thanks in advance......

    Read the article

  • Matlab Simulink: Transfer Function

    - by stanigator
    I am trying to model a transfer function block for let's say 1/(s+1). I am having a hard time coming up with proper search terms for it, hence having trouble finding out how I can achieve this on the Internet. What is the easiest way to implement a block for a transfer function in Simulink? Thanks in advance.

    Read the article

  • What's wrong with this MySQL Stored Function?

    - by Matt
    Having trouble getting this to apply in MySQL Workbench 5.2.15 DELIMITER // CREATE DEFINER=`potts`@`%` FUNCTION `potts`.`fn_create_category_test` (test_arg VARCHAR(50)) RETURNS int BEGIN DECLARE new_id int; SET new_id = 8; RETURN new_id; END// The actual function will have a lot more between BEGIN and END but as it stands, even this 3 liner won't work. Thanks!

    Read the article

  • Jquery Tools Overlay call on javascript function

    - by user342944
    Hi Guys, I have a jquery overlay window in my html alongwith a button to activate here is the code. <!-- Validation Overlay Box --> <div class="modal" id="yesno" style="top: 100px; left: 320px; position: relative; display: none; z-index: 0; margin-top: 100px;"> <h2> &nbsp;&nbsp;Authentication Failed</h2> <p style="font-family:Arial; font-size:small; text-align:center"> Ether your username or password has been entered incorrectly. Please make sure that your username or password entered correctly... </p> <!-- yes/no buttons --> <p align="center"> <button class="close"> OK </button> </p> </div> Here is the Jquery tools Script <SCRIPT> $(document).ready(function() { var triggers = $(".modalInput").overlay({ // some mask tweaks suitable for modal dialogs mask: { color: '#a2a2a2', loadSpeed: 200, opacity: 0.9 }, closeOnClick: false }); var buttons = $("#yesno button").click(function(e) { // get user input var yes = buttons.index(this) === 0; // do something with the answer triggers.eq(0).html("You clicked " + (yes ? "yes" : "no")); }); $("#prompt form").submit(function(e) { // close the overlay triggers.eq(1).overlay().close(); // get user input var input = $("input", this).val(); // do something with the answer triggers.eq(1).html(input); // do not submit the form return e.preventDefault(); }); }); And here how its called upon clicking on a button or a hyperlink "<BUTTON class="modalInput" rel="#yesno">You clicked no</BUTTON>" All i want is I dont want to show the overlay on clicking on button or through a link. Is it possible to call it through a javascript function like "showOverlay()" ??

    Read the article
