Search Results

Search found 4838 results on 194 pages for 'binary heap'.

Page 135/194 | < Previous Page | 131 132 133 134 135 136 137 138 139 140 141 142  | Next Page >

  • how to omit extra bits in a file?

    - by thinthinyu
    I want to omit an extra bit in a txt file, e.g. ....ÿ 0111111110111101100011011010010001. In this string we want to omit the extra "ÿ ", which appears when we save a binary string. The save function is as follows; please help me.

        void LFSR_ECDlg::Onsave()
        {
            this->UpdateData();
            CFile bitstream;
            char strFilter[] = { "Stream Records (*.mpl)|*.mpl| (*.pis)|*.pis|All Files (*.*)|*.*||" };
            CFileDialog FileDlg(FALSE, ".mpl", NULL, 0, strFilter);
            if( FileDlg.DoModal() == IDOK )
            {
                if( bitstream.Open(FileDlg.GetFileName(), CFile::modeCreate | CFile::modeWrite) == FALSE )
                    return;
                CArchive ar(&bitstream, CArchive::store);
                CString txt;
                txt = "";
                txt.Format("%s", m_B);   // by ANO
                AfxMessageBox(txt);      // by ANO
                txt = m_B;               // by ANO
                ar << txt;               // by ANO
                ar.Close();
            }
            else
                return;
            bitstream.Close();
        }


  • Problem running gems in OS X

    - by akarnid
    I'm running Snow Leopard, and installed a custom-built Ruby according to the guide here: http://hivelogic.com/articles/compiling-ruby-rubygems-and-rails-on-snow-leopard . My Ruby binary lives in /usr/local/bin/ruby and my gems are installed in /usr/local/bin/gem . My gem env looks like so:

        RUBY VERSION: 1.8.7 (2008-08-11 patchlevel 72) [universal-darwin10.0]
        - INSTALLATION DIRECTORY: /Library/Ruby/Gems/1.8
        - RUBY EXECUTABLE: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
        - EXECUTABLE DIRECTORY: /usr/bin

    I think I may have borked the install, since all actions taken on gems give the error:

        ERROR: While executing gem ... (Errno::EEXIST)
            File exists - /usr/local/bin/ruby

    How do you edit the environment variables for the gem environment? And for those of you on OS X and using Ruby AND gems, what did you use to get yourself up and running? I'm thinking of just nuking everything and starting anew.


  • plupload variable upload path?

    - by SoulieBaby
    Hi all, I'm using plupload to upload files to my server (http://www.plupload.com/index.php), however I wanted to know if there was any way of making the upload path variable. Basically I need to select the upload path folder first, then choose the files using plupload and then upload to the initially selected folder. I've tried a few different ways but I can't seem to pass along the variable folder path to the upload.php file. I'm using the flash version of plupload. If someone could help me out, that would be fantastic!! :) Here's my plupload jquery: jQuery.noConflict(); jQuery(document).ready(function() { jQuery("#flash_uploader").pluploadQueue({ // General settings runtimes: 'flash', url: '/assets/upload/upload.php', max_file_size: '10mb', chunk_size: '1mb', unique_names: false, // Resize images on clientside if we can resize: {width: 500, height: 350, quality: 100}, // Flash settings flash_swf_url: '/assets/upload/flash/plupload.flash.swf' }); }); And here's the upload.php file: <?php /** * upload.php * * Copyright 2009, Moxiecode Systems AB * Released under GPL License. * * License: http://www.plupload.com/license * Contributing: http://www.plupload.com/contributing */ // HTTP headers for no cache etc header('Content-type: text/plain; charset=UTF-8'); header("Expires: Mon, 26 Jul 1997 05:00:00 GMT"); header("Last-Modified: ".gmdate("D, d M Y H:i:s")." GMT"); header("Cache-Control: no-store, no-cache, must-revalidate"); header("Cache-Control: post-check=0, pre-check=0", false); header("Pragma: no-cache"); // Settings $targetDir = $_SERVER['DOCUMENT_ROOT']."/tmp/uploads"; //temp directory <- need these to be variable $finalDir = $_SERVER['DOCUMENT_ROOT']."/tmp/uploads2"; //final directory <- need these to be variable $cleanupTargetDir = true; // Remove old files $maxFileAge = 60 * 60; // Temp file age in seconds // 5 minutes execution time @set_time_limit(5 * 60); // usleep(5000); // Get parameters $chunk = isset($_REQUEST["chunk"]) ? $_REQUEST["chunk"] : 0; $chunks = isset($_REQUEST["chunks"]) ? $_REQUEST["chunks"] : 0; $fileName = isset($_REQUEST["name"]) ? $_REQUEST["name"] : ''; // Clean the fileName for security reasons $fileName = preg_replace('/[^\w\._]+/', '', $fileName); // Create target dir if (!file_exists($targetDir)) @mkdir($targetDir); // Remove old temp files if (is_dir($targetDir) && ($dir = opendir($targetDir))) { while (($file = readdir($dir)) !== false) { $filePath = $targetDir . DIRECTORY_SEPARATOR . $file; // Remove temp files if they are older than the max age if (preg_match('/\\.tmp$/', $file) && (filemtime($filePath) < time() - $maxFileAge)) @unlink($filePath); } closedir($dir); } else die('{"jsonrpc" : "2.0", "error" : {"code": 100, "message": "Failed to open temp directory."}, "id" : "id"}'); // Look for the content type header if (isset($_SERVER["HTTP_CONTENT_TYPE"])) $contentType = $_SERVER["HTTP_CONTENT_TYPE"]; if (isset($_SERVER["CONTENT_TYPE"])) $contentType = $_SERVER["CONTENT_TYPE"]; if (strpos($contentType, "multipart") !== false) { if (isset($_FILES['file']['tmp_name']) && is_uploaded_file($_FILES['file']['tmp_name'])) { // Open temp file $out = fopen($targetDir . DIRECTORY_SEPARATOR . $fileName, $chunk == 0 ? 
"wb" : "ab"); if ($out) { // Read binary input stream and append it to temp file $in = fopen($_FILES['file']['tmp_name'], "rb"); if ($in) { while ($buff = fread($in, 4096)) fwrite($out, $buff); } else die('{"jsonrpc" : "2.0", "error" : {"code": 101, "message": "Failed to open input stream."}, "id" : "id"}'); fclose($out); unlink($_FILES['file']['tmp_name']); } else die('{"jsonrpc" : "2.0", "error" : {"code": 102, "message": "Failed to open output stream."}, "id" : "id"}'); } else die('{"jsonrpc" : "2.0", "error" : {"code": 103, "message": "Failed to move uploaded file."}, "id" : "id"}'); } else { // Open temp file $out = fopen($targetDir . DIRECTORY_SEPARATOR . $fileName, $chunk == 0 ? "wb" : "ab"); if ($out) { // Read binary input stream and append it to temp file $in = fopen("php://input", "rb"); if ($in) { while ($buff = fread($in, 4096)){ fwrite($out, $buff); } } else die('{"jsonrpc" : "2.0", "error" : {"code": 101, "message": "Failed to open input stream."}, "id" : "id"}'); fclose($out); } else die('{"jsonrpc" : "2.0", "error" : {"code": 102, "message": "Failed to open output stream."}, "id" : "id"}'); } //Moves the file from $targetDir to $finalDir after receiving the final chunk if($chunk == ($chunks-1)){ rename($targetDir . DIRECTORY_SEPARATOR . $fileName, $finalDir . DIRECTORY_SEPARATOR . $fileName); } // Return JSON-RPC response die('{"jsonrpc" : "2.0", "result" : null, "id" : "id"}'); ?>


  • Converting kernel image from ELF to PE

    - by Frank Miller
    I am using Msys to build a home-brew kernel that I wrote under Linux. Linux uses ELF as its binary format and Msys uses PE. I have the source set up to allow it to be booted by Grub using the Multiboot spec. At the end of the build, I get some undefined symbols:

        init.o:init.S:(.text+0x14): undefined reference to `edata'
        main.o:main.c:(.text+0x121): undefined reference to `_alloca'
        main.o:main.c:(.text+0x126): undefined reference to `__main'
        ../../lib\libkern.a(mem.o):mem.c:(.text+0x242): undefined reference to `_end'
        ../../lib\libkern.a(mem.o):mem.c:(.text+0x323): undefined reference to `_end'

    These appear to be ELF-oriented symbols. If anyone can advise me on how these should be dealt with in the PE world, e.g. if there are equivalents, it would help me out a lot!


  • Is there a SortedList<T> class in .net ? (not SortedList<Key,Value> which is actually a kind of Sort

    - by Brann
    I need to sort some objects according to their contents (in fact according to one of their properties, which is NOT the key and may be duplicated between different objects). .NET provides two classes (SortedDictionary and SortedList), and both are some kind of dictionary. The only difference (AFAIK) is that SortedDictionary uses a binary tree to maintain its state whereas SortedList does not and is accessible via an index. I could achieve what I want using a List, and then using its Sort() method with a custom implementation of IComparer, but it wouldn't be time-efficient, as I would sort the whole List each time I insert a new object, whereas a good SortedList would just insert the item at the right position. What I need is a SortedList class with a RefreshPosition(int index) to move only the changed (or inserted) object rather than re-sorting the whole list each time an object inside changes. Am I missing something obvious?
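
    As the question implies, there is no plain SortedList<T> in the BCL, but the List-plus-binary-search workaround mentioned above can avoid the full re-sort on every insert: List<T>.BinarySearch with a custom IComparer<T> finds the insertion index in O(log n), and Insert then shifts the tail once. A minimal sketch, where Item and WeightComparer are hypothetical names standing in for the real object and its non-key sort property:

        using System;
        using System.Collections.Generic;

        class Item
        {
            public string Key;     // not the sort field
            public int Weight;     // the non-key property we sort on; duplicates allowed
        }

        class WeightComparer : IComparer<Item>
        {
            public int Compare(Item x, Item y) { return x.Weight.CompareTo(y.Weight); }
        }

        static class SortedListHelper
        {
            static readonly IComparer<Item> cmp = new WeightComparer();

            // Insert while keeping the list sorted: O(log n) search plus one block shift.
            public static void InsertSorted(List<Item> list, Item item)
            {
                int index = list.BinarySearch(item, cmp);
                if (index < 0) index = ~index;   // complement = index of the first larger element
                list.Insert(index, item);
            }

            // Rough equivalent of the RefreshPosition(int index) idea from the question.
            public static void RefreshPosition(List<Item> list, int staleIndex)
            {
                Item item = list[staleIndex];
                list.RemoveAt(staleIndex);
                InsertSorted(list, item);
            }
        }

    This keeps inserts and refreshes at a single binary search plus one List shift, rather than a full Sort() after every change.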


  • Convert Decimal to ASCII

    - by Dan Snyder
    I'm having difficulty using reinterpret_cast. Before I show you my code I'll let you know what I'm trying to do. I'm trying to get a filename from a vector full of data being used by a MIPS I processor I designed. Basically what I do is compile a binary from a test program for my processor, dump all the hex values from the binary into a vector in my C++ program, convert all of those hex values to decimal integers and store them in a DataMemory vector, which is the data memory unit for my processor. I also have instruction memory. So when my processor runs a SYSCALL instruction such as "Open File", my C++ operating system emulator receives a pointer to the beginning of the filename in my data memory. So keep in mind that data memory is full of ints, strings, globals, locals, all sorts of stuff. When I'm told where the filename starts I do the following: convert the whole decimal integer element that is being pointed to to its ASCII character representation, and then search from left to right to see if the string terminates; if not, just load each character consecutively into a "filename" string. Do this until termination of the string in memory and then store filename in a table. My difficulty is generating filename from my memory. Here is an example of what I'm trying to do:

        Index   Vector     NewVector   ASCII    filename
        0       240faef0   128123792   'abc7'   'a'
        0       240faef0   128123792   'abc7'   'ab'
        0       240faef0   128123792   'abc7'   'abc'
        0       240faef0   128123792   'abc7'   'abc7'
        1       1234567a   243225      'k2s0'   'abc7k'
        1       1234567a   243225      'k2s0'   'abc7k2'
        1       1234567a   243225      'k2s0'   'abc7k2s'
        //EXIT LOOP//
        1       1234567a   243225      'k2s0'   'abc7k2s'

    Here is the code that I've written so far to get filename (I'm just applying this to element 1000 of my DataMemory vector to test functionality; 1000 is arbitrary):

        int i = 0;
        int step = 1000;   // top->a0;
        string filename;
        char *temp = reinterpret_cast<char*>( DataMemory[1000] );   // convert to char
        cout << "a0:" << top->a0 << endl;                    // pointer supplied
        cout << "Data:" << DataMemory[top->a0] << endl;      // my vector at the pointed-to location
        cout << "Data(1000):" << DataMemory[1000] << endl;   // the element I'm testing
        cout << "Characters:" << &temp << endl;              // my temporary char array

        while(&temp[i] != 0)
        {
            filename += temp[i];   // add most recent non-terminated character to string
            i++;
            if(i == 4)             // when 4 characters have been added..
            {
                i = 0;
                step += 1;         // restart loop at the next element in DataMemory
                temp = reinterpret_cast<char*>( DataMemory[step] );
            }
        }
        cout << "Filename:" << filename << endl;

    So the issue is that when I do the conversion of my decimal element to a char array I assume that 8 hex digits will give me 4 characters. Why isn't this the case? Here is my output:

        a0:0
        Data:0
        Data(1000):4428576
        Characters:0x7fff5fbff128
        Segmentation fault


  • Why is insertion into my tree faster on sorted input than random input?

    - by Juliet
    Now I've always heard binary search trees are faster to build from randomly selected data than ordered data, simply because ordered data requires explicit rebalancing to keep the tree height at a minimum. Recently I implemented an immutable treap, a special kind of binary search tree which uses randomization to keep itself relatively balanced. In contrast to what I expected, I found I can consistently build a treap about 2x faster and generally better balanced from ordered data than unordered data -- and I have no idea why. Here's my treap implementation: http://pastebin.com/VAfSJRwZ And here's a test program:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Diagnostics;

        namespace ConsoleApplication1
        {
            class Program
            {
                static Random rnd = new Random();
                const int ITERATION_COUNT = 20;

                static void Main(string[] args)
                {
                    List<double> rndTimes = new List<double>();
                    List<double> orderedTimes = new List<double>();

                    rndTimes.Add(TimeIt(50, RandomInsert));
                    rndTimes.Add(TimeIt(100, RandomInsert));
                    rndTimes.Add(TimeIt(200, RandomInsert));
                    rndTimes.Add(TimeIt(400, RandomInsert));
                    rndTimes.Add(TimeIt(800, RandomInsert));
                    rndTimes.Add(TimeIt(1000, RandomInsert));
                    rndTimes.Add(TimeIt(2000, RandomInsert));
                    rndTimes.Add(TimeIt(4000, RandomInsert));
                    rndTimes.Add(TimeIt(8000, RandomInsert));
                    rndTimes.Add(TimeIt(16000, RandomInsert));
                    rndTimes.Add(TimeIt(32000, RandomInsert));
                    rndTimes.Add(TimeIt(64000, RandomInsert));
                    rndTimes.Add(TimeIt(128000, RandomInsert));
                    string rndTimesAsString = string.Join("\n", rndTimes.Select(x => x.ToString()).ToArray());

                    orderedTimes.Add(TimeIt(50, OrderedInsert));
                    orderedTimes.Add(TimeIt(100, OrderedInsert));
                    orderedTimes.Add(TimeIt(200, OrderedInsert));
                    orderedTimes.Add(TimeIt(400, OrderedInsert));
                    orderedTimes.Add(TimeIt(800, OrderedInsert));
                    orderedTimes.Add(TimeIt(1000, OrderedInsert));
                    orderedTimes.Add(TimeIt(2000, OrderedInsert));
                    orderedTimes.Add(TimeIt(4000, OrderedInsert));
                    orderedTimes.Add(TimeIt(8000, OrderedInsert));
                    orderedTimes.Add(TimeIt(16000, OrderedInsert));
                    orderedTimes.Add(TimeIt(32000, OrderedInsert));
                    orderedTimes.Add(TimeIt(64000, OrderedInsert));
                    orderedTimes.Add(TimeIt(128000, OrderedInsert));
                    string orderedTimesAsString = string.Join("\n", orderedTimes.Select(x => x.ToString()).ToArray());

                    Console.WriteLine("Done");
                }

                static double TimeIt(int insertCount, Action<int> f)
                {
                    Console.WriteLine("TimeIt({0}, {1})", insertCount, f.Method.Name);
                    List<double> times = new List<double>();
                    for (int i = 0; i < ITERATION_COUNT; i++)
                    {
                        Stopwatch sw = Stopwatch.StartNew();
                        f(insertCount);
                        sw.Stop();
                        times.Add(sw.Elapsed.TotalMilliseconds);
                    }
                    return times.Average();
                }

                static void RandomInsert(int insertCount)
                {
                    Treap<double> tree = new Treap<double>((x, y) => x.CompareTo(y));
                    for (int i = 0; i < insertCount; i++)
                    {
                        tree = tree.Insert(rnd.NextDouble());
                    }
                }

                static void OrderedInsert(int insertCount)
                {
                    Treap<double> tree = new Treap<double>((x, y) => x.CompareTo(y));
                    for (int i = 0; i < insertCount; i++)
                    {
                        tree = tree.Insert(i + rnd.NextDouble());
                    }
                }
            }
        }

    And here's a chart comparing random and ordered insertion times in milliseconds:

        Insertions   Random         Ordered        RandomTime / OrderedTime
        50           1.031665       0.261585       3.94
        100          0.544345       1.377155       0.4
        200          1.268320       0.734570       1.73
        400          2.765555       1.639150       1.69
        800          6.089700       3.558350       1.71
        1000         7.855150       4.704190       1.67
        2000         17.852000      12.554065      1.42
        4000         40.157340      22.474445      1.79
        8000         88.375430      48.364265      1.83
        16000        197.524000     109.082200     1.81
        32000        459.277050     238.154405     1.93
        64000        1055.508875    512.020310     2.06
        128000       2481.694230    1107.980425    2.24

    I don't see anything in the code which makes ordered input asymptotically faster than unordered input, so I'm at a loss to explain the difference. Why is it so much faster to build a treap from ordered input than random input?
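
    One way to start narrowing this down is to instrument the comparer: if ordered input really takes a cheaper path through Insert (fewer comparisons and rotations), the counts will show it. Below is a rough diagnostic sketch that could be dropped into the Program class above; it assumes only the Treap<double> constructor and Insert signature already used in the test program (the actual Treap<T> implementation lives behind the pastebin link and isn't reproduced here).

        static void CountComparisons(int insertCount, Func<Random, int, double> next)
        {
            var rnd = new Random(12345);   // fixed seed so random and ordered runs are repeatable
            int comparisons = 0;
            var tree = new Treap<double>((x, y) => { comparisons++; return x.CompareTo(y); });
            for (int i = 0; i < insertCount; i++)
                tree = tree.Insert(next(rnd, i));
            Console.WriteLine("{0} inserts: {1} comparisons", insertCount, comparisons);
        }

        // Usage, mirroring RandomInsert and OrderedInsert:
        // CountComparisons(100000, (r, i) => r.NextDouble());       // random keys
        // CountComparisons(100000, (r, i) => i + r.NextDouble());   // ordered keys

    If the ordered run reports markedly fewer comparisons, the speed difference lives in the tree's shape and rotation behaviour rather than in the benchmark harness.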


  • Querying and ordering results of a database in grails using transient fields

    - by Azder
    I'm trying to display paged data out of a grails domain object. For example: I have a domain object Employee with the properties firstName and lastName which are transient, and when invoking their setter/getter methods they encrypt/decrypt the data. The data is saved in the database in encrypted binary format, thus not sortable by those fields. And yet again, not sortable by transient ones either, as noted in: http://www.grails.org/GSP+Tag+-+sortableColumn . So now I'm trying to find a way to use the transients in a way similar to: Employee.withCriteria( max: 10, offset: 30 ){ order 'lastName', 'asc' order 'firstName', 'asc' } The class is: class Employee { byte[] encryptedFirstName byte[] encryptedLastName static transients = [ 'firstName', 'lastName' ] String getFirstName(){ decrypt("encryptedFirstName") } void setFirstName(String item){ encrypt("encryptedFirstName",item) } String getLastName(){ decrypt("encryptedLastName") } void setLastName(String item){ encrypt("encryptedLastName",item) } }


  • Compressing large text data before storing into db?

    - by Steel Plume
    Hello, I have an application which retrieves many large log files from a system LAN. Currently I put all log files into PostgreSQL; the table has a column of type TEXT, and I don't plan any searching on this text column, because a separate external process retrieves all files nightly and scans them for sensitive patterns. So the column value could also be a BLOB or a CLOB. Now my question is the following: the database already has its own compression system, but could I improve on that compression manually, as with common compressor utilities? And above all, what if I manually pre-compress the large file and then store it as binary in the data table? Is that pointless, given that the database system already provides its internal compression?
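
    On the "manually pre-compress" option: the client-side half is cheap to try. The sketch below (shown in C# for concreteness; the question doesn't say what language the application uses) gzips the log text into a byte[] that can go into a binary column instead of TEXT. Whether that beats the compression the database already applies to large text values is exactly the open question, so it is worth measuring both ways on real log data.

        using System.IO;
        using System.IO.Compression;
        using System.Text;

        static class LogCompressor
        {
            // Compress a log file's text into a byte[] suitable for a binary column.
            public static byte[] Compress(string logText)
            {
                byte[] raw = Encoding.UTF8.GetBytes(logText);
                using (var output = new MemoryStream())
                {
                    using (var gzip = new GZipStream(output, CompressionMode.Compress))
                        gzip.Write(raw, 0, raw.Length);
                    return output.ToArray();   // gzip is closed here, so the stream is complete
                }
            }

            // Reverse step for the nightly scanner that needs the plain text back.
            public static string Decompress(byte[] stored)
            {
                using (var input = new MemoryStream(stored))
                using (var gzip = new GZipStream(input, CompressionMode.Decompress))
                using (var reader = new StreamReader(gzip, Encoding.UTF8))
                    return reader.ReadToEnd();
            }
        }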


  • Why does my PDF ask for a password after being retrieved from Visual SourceSafe?

    - by Schnapple
    PREFACE: Yes we're moving away from VSS in the next few months. One of my web projects contains, as one of its files, a PDF. The PDF on our QA site is being pulled from VSS. A QA tester recently told me he's being prompted for a password when he tries to open it. VSS says the file I have on disk is different than the one it has, so I updated it, but afterwards it's still being shown as different. So basically VSS is mangling my PDF and the results are so wobbly that Adobe Acrobat Reader is confused and thinks it has a password. I've tried adding it as Auto-Detect and as Binary. Same results. Why does my PDF ask for a password after being retrieved from Visual SourceSafe and how can I prevent it?


  • What is a simple way to combine two Emacs major modes, or to change an existing mode?

    - by Winston C. Yang
    In Emacs, I'm working with a file that is a hybrid of two languages. Question 1: Is there a simple way to write a major mode file that combines two major modes? Details: The language is called "brew" (not the "BREW" of "Binary Runtime Environment for Wireless"). brew is made up of the languages R and Latex, whose modes are R-mode and latex-mode. The R code appears between the tags <% and %>. Everything else is Latex. How can I write a brew-mode.el file? (Or is one already available?) One idea, which I got from this posting, is to use Latex mode, and treat code of the form <% ... %> as a comment. Question 2: How do you change the .emacs file or the latex.el file to have Latex mode treat code of the form <% ... %> as a comment?


  • Algorithm for Source Control System?

    - by Michael Stum
    I need to write a simple source control system and wonder what algorithm I would use for file differences? I don't want to look into existing source code due to license concerns. I need to have it licensed under MPL, so I can't look at any of the existing systems like CVS or Mercurial, as they are all GPL licensed. Just to give some background, I just need some really simple functionality: binary files in a folder, no subfolders, and every file behaves like its own repository. No metadata except for some permissions. Overall really simple stuff; my single concern really is how to store only the differences of a file from revision to revision without wasting too much space but also without being too inefficient (maybe store a full version every X changes, a bit like keyframes in videos?).
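
    The "keyframe" idea at the end can be sketched independently of the actual diff algorithm: store a full copy every N revisions and a delta otherwise. The delta below is deliberately naive (shared prefix, shared suffix, replaced middle) just to make the storage policy concrete; a real implementation would swap in a proper binary diff. All names here are hypothetical.

        using System;

        // Naive delta: how many leading and trailing bytes are unchanged, plus the new middle.
        sealed class Delta
        {
            public int PrefixLength;
            public int SuffixLength;
            public byte[] Middle;
        }

        static class SimpleDeltaStore
        {
            public const int KeyframeInterval = 10;   // full copy every 10 revisions

            public static bool IsKeyframe(int revision)
            {
                return revision % KeyframeInterval == 0;
            }

            public static Delta Encode(byte[] previous, byte[] current)
            {
                int max = Math.Min(previous.Length, current.Length);

                int prefix = 0;
                while (prefix < max && previous[prefix] == current[prefix]) prefix++;

                int suffix = 0;
                while (suffix < max - prefix &&
                       previous[previous.Length - 1 - suffix] == current[current.Length - 1 - suffix])
                    suffix++;

                var middle = new byte[current.Length - prefix - suffix];
                Array.Copy(current, prefix, middle, 0, middle.Length);
                return new Delta { PrefixLength = prefix, SuffixLength = suffix, Middle = middle };
            }

            public static byte[] Apply(byte[] previous, Delta delta)
            {
                var result = new byte[delta.PrefixLength + delta.Middle.Length + delta.SuffixLength];
                Array.Copy(previous, 0, result, 0, delta.PrefixLength);
                Array.Copy(delta.Middle, 0, result, delta.PrefixLength, delta.Middle.Length);
                Array.Copy(previous, previous.Length - delta.SuffixLength,
                           result, delta.PrefixLength + delta.Middle.Length, delta.SuffixLength);
                return result;
            }
        }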


  • Haskell optimization of a function looking for a bytestring terminator

    - by me2
    Profiling of some code showed that about 65% of the time I was inside the following code. What it does is use the Data.Binary.Get monad to walk through a bytestring looking for the terminator. If it detects 0xff, it checks if the next byte is 0x00. If it is, it drops the 0x00 and continues. If it is not 0x00, then it drops both bytes and the resulting list of bytes is converted to a bytestring and returned. Any obvious ways to optimize this? I can't see it. parseECS = f [] False where f acc ff = do b <- getWord8 if ff then if b == 0x00 then f (0xff:acc) False else return $ L.pack (reverse acc) else if b == 0xff then f acc True else f (b:acc) False


  • How to package up a Python 3 script with modules

    - by Brian
    I created a small script that uses a few 3rd-party modules. I'm not sure how to distribute it. I tried PyInstaller, but that doesn't seem to work: it can't find the modules. When I give the binary to a co-worker, it says it is looking for files in my home directory (not his) and dies. I have found that PyInstaller is not able to find most modules. I am running Python 3 and installed PyInstaller with pip from Python 2; it did not work trying to use pip from Python 3. When I give it a path to my modules, it complains that they are Python 3 modules. Just looking for some clarification. Ultimately, I'd like to run this on a Linux or OS X box where Python and my modules probably won't be installed. I just started Python yesterday and have a ton to learn.


  • Is there a distributed VCS that can manage large files?

    - by joelhardi
    Is there a distributed version control system (git, bazaar, mercurial, darcs etc.) that can handle files larger than available RAM? I need to be able to commit large binary files (i.e. datasets, source video/images, archives), but I don't need to be able to diff them, just be able to commit and then update when the file changes. I last looked at this about a year ago, and none of the obvious candidates allowed this, since they're all designed to diff in memory for speed. That left me with a VCS for managing code and something else ("asset management" software or just rsync and scripts) for large files, which is pretty ugly when the directory structures of the two overlap.


  • read first 1kb of a blob from oracle

    - by Angus
    Hi, I wish to extract just the first 1024 bytes of a stored blob and not the whole file. The reason for this is I want to just extract the metadata from a file as quickly as possible without having to select the whole blob. I understand the following: select dbms_lob.substr(file_blob, 16,1) from file_upload where file_upload_id=504; which returns it as hex. How may I do this so it returns it in binary data without selecting the whole blob? Thanks in advance.
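
    If the caller happens to be a .NET client rather than plain SQL, the same "first 1024 bytes only" read can be done by streaming the column and stopping after 1 KB. The sketch below assumes the System.Data.OracleClient provider and reuses the file_blob / file_upload names from the query above; adjust the parameter syntax if you use ODP.NET.

        using System;
        using System.Data;
        using System.Data.OracleClient;   // legacy Microsoft provider; ODP.NET is very similar

        static class BlobReader
        {
            public static byte[] ReadFirstKilobyte(string connectionString, int fileUploadId)
            {
                using (var conn = new OracleConnection(connectionString))
                using (var cmd = new OracleCommand(
                    "select file_blob from file_upload where file_upload_id = :id", conn))
                {
                    cmd.Parameters.Add(new OracleParameter("id", fileUploadId));
                    conn.Open();
                    // SequentialAccess streams the BLOB instead of buffering the whole value.
                    using (var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
                    {
                        if (!reader.Read()) return new byte[0];
                        var buffer = new byte[1024];
                        long read = reader.GetBytes(0, 0, buffer, 0, buffer.Length);
                        if (read < buffer.Length) Array.Resize(ref buffer, (int)read);
                        return buffer;
                    }
                }
            }
        }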


  • C# parameters by reference and .net garbage collection

    - by Yarko
    I have been trying to figure out the intricacies of the .NET garbage collection system and I have a question related to C# reference parameters. If I understand correctly, variables defined in a method are stored on the stack and are not affected by garbage collection. So, in this example: public class Test { public Test() { } public int DoIt() { int t = 7; Increment(ref t); return t; } private int Increment(ref int p) { p++; } } the return value of DoIt() will be 8. Since the location of t is on the stack, then that memory cannot be garbage collected or compacted and the reference variable in Increment() will always point to the proper contents of t. However, suppose we have: public class Test { private int t = 7; public Test() { } public int DoIt() { Increment(ref t); return t; } private int Increment(ref int p) { p++; } } Now, t is stored on the heap as it is a value of a specific instance of my class. Isn't this possibly a problem if I pass this value as a reference parameter? If I pass t as a reference parameter, p will point to the current location of t. However, if the garbage collector moves this object during a compact, won't that mess up the reference to t in Increment()? Or does the garbage collector update even references created by passing reference parameters? Do I have to worry about this at all? The only mention of worrying about memory being compacted on MSDN (that I can find) is in relation to passing managed references to unmanaged code. Hopefully that's because I don't have to worry about any managed references in managed code. :)
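
    For the specific worry above: the CLR treats a ref argument that points into the heap as a managed pointer, and the garbage collector tracks managed pointers, so it updates them if it relocates the object during compaction; pinning only becomes necessary when an address is handed to unmanaged code. A small self-contained sketch (with Increment declared void so it compiles) that forces collections while such a ref is live:

        using System;

        class Holder
        {
            private int t = 7;   // lives on the heap as part of the Holder instance

            public int DoIt()
            {
                Increment(ref t);   // managed pointer to a field of a heap object
                return t;
            }

            private void Increment(ref int p)
            {
                // Provoke collections while the ref is live; the GC fixes up
                // the managed pointer if it moves the Holder instance.
                GC.Collect();
                GC.WaitForPendingFinalizers();
                GC.Collect();
                p++;
            }
        }

        class Program
        {
            static void Main()
            {
                Console.WriteLine(new Holder().DoIt());   // prints 8
            }
        }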


  • C++ Beginner Delete Question

    - by Pooch
    Hi all, This is my first year learning C++ so bear with me. I am attempting to dynamically allocate memory to the heap and then delete the allocated memory. Below is the code that is giving me a hard time: // String.cpp #include "String.h" String::String() {} String::String(char* source) { this->Size = this->GetSize(source); this->CharArray = new char[this->Size + 1]; int i = 0; for (; i < this->Size; i++) this->CharArray[i] = source[i]; this->CharArray[i] = '\0'; } int String::GetSize(const char * source) { int i = 0; for (; source[i] != '\0'; i++); return i; } String::~String() { delete[] this->CharArray; } Here is the error I get when the compiler tries to delete the CharArray: 0xC0000005: Access violation reading location 0xccccccc0. And here is the last call on the stack: msvcr100d.dll!operator delete(void * pUserData) Line 52 + 0x3 bytes C++ I am fairly certain the error exists within this piece of code but will provide you with any other information needed. Oh yeah, using VS 2010 for XP. Thanks for any and all help!


  • How can I have a serializable struct that wraps itself as an Int32 implicitly in C#?

    - by firoso
    Long story short, I have a struct (see below) that contains exactly one field: private int value; I've also implemented implicit conversion operators: public static implicit operator int(Outlet val) { return val.value; } public static implicit operator Outlet(int val) { return new Outlet(val); } I've implemented all of the following : IComparable, IComparable<Cart>, IComparable<int>, IConvertible, IEquatable<Cart>, IEquatable<int>, IFormattable I'm at a point where I really have no clue why, but whenever I serialize this object, I get no value. For instance, with XmlSerialization: <Outlet /> Also, I'm not solely concerned about XmlSerialization, I'm concerned about ALL serialization (binary for instance) How can I ensure that this serializes properly? NOTE: I did this because mapping an int,int dictionary seemed rather poorly typed to me when explicit objects with validation behavior were desired.
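
    XmlSerializer only looks at public members, so a struct whose only state is a private field comes out as an empty <Outlet />. One way to keep the field private and still round-trip the value is to also implement IXmlSerializable; binary serialization via BinaryFormatter already captures private fields once the type is marked [Serializable]. A sketch along those lines (worth testing against each serializer you actually use, since struct plus IXmlSerializable has some sharp edges):

        using System;
        using System.Globalization;
        using System.Xml;
        using System.Xml.Schema;
        using System.Xml.Serialization;

        [Serializable]
        public struct Outlet : IXmlSerializable
        {
            private int value;

            public Outlet(int val) { value = val; }

            public static implicit operator int(Outlet val) { return val.value; }
            public static implicit operator Outlet(int val) { return new Outlet(val); }

            public XmlSchema GetSchema() { return null; }

            public void WriteXml(XmlWriter writer)
            {
                // Emit the wrapped value as element text, e.g. <Outlet>42</Outlet>.
                writer.WriteString(value.ToString(CultureInfo.InvariantCulture));
            }

            public void ReadXml(XmlReader reader)
            {
                // Reads the element content and consumes the end tag.
                value = reader.ReadElementContentAsInt();
            }
        }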


  • Array.BinarySearch does not find item using IComparable

    - by Sir Psycho
    If a binary search requires an array to be sorted beforehand, why does the following code work?

        string[] strings = new[] { "z", "a", "y", "e", "v", "u" };
        int pos = Array.BinarySearch(strings, "Y", StringComparer.OrdinalIgnoreCase);
        Console.WriteLine(pos);

    And why does this code return -1?

        public class Person : IComparable<Person>
        {
            public string Name { get; set; }
            public int Age { get; set; }

            public int CompareTo(Person other)
            {
                return this.Age.CompareTo(other.Age) + this.Name.CompareTo(other.Name);
            }
        }

        var people = new[]
        {
            new Person { Age=5, Name="Tom" },
            new Person { Age=1, Name="Tom" },
            new Person { Age=2, Name="Tom" },
            new Person { Age=1, Name="John" },
            new Person { Age=1, Name="Bob" },
        };

        var s = new Person { Age = 1, Name = "Tom" };

        // returns -1
        Console.WriteLine( Array.BinarySearch(people, s) );
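
    For reference, Array.BinarySearch is only defined for input that is already sorted by the same comparison it searches with; on the unsorted string array above it can appear to work purely by luck. The Person case has two separate issues: the array is unsorted, and a CompareTo that adds two comparison results is not a valid ordering (a +1 and a -1 cancel to 0, reporting equality for items that differ). A sketch of the usual fix, comparing one key, breaking ties with the other, and sorting before searching:

        using System;

        public class Person : IComparable<Person>
        {
            public string Name { get; set; }
            public int Age { get; set; }

            public int CompareTo(Person other)
            {
                // Order by Age first, then by Name as a tie-breaker.
                int byAge = this.Age.CompareTo(other.Age);
                return byAge != 0
                    ? byAge
                    : string.Compare(this.Name, other.Name, StringComparison.Ordinal);
            }
        }

        class Demo
        {
            static void Main()
            {
                var people = new[]
                {
                    new Person { Age = 5, Name = "Tom" },
                    new Person { Age = 1, Name = "Tom" },
                    new Person { Age = 2, Name = "Tom" },
                    new Person { Age = 1, Name = "John" },
                    new Person { Age = 1, Name = "Bob" },
                };

                Array.Sort(people);   // BinarySearch needs the same ordering it searches with
                var s = new Person { Age = 1, Name = "Tom" };
                Console.WriteLine(Array.BinarySearch(people, s));   // now a non-negative index
            }
        }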


  • In App Purchase Problem

    - by Chris
    Hi, please help me: I am getting a product count of 0 in response.product. I have followed all the steps for In App Purchase in the documentation.

    1. Provisioning profile: I allowed In App Purchase.
    2. I uploaded a dummy app binary to iTunes and rejected it myself.
    3. I set up 2 products in iTunes In App Purchase and the app identifier was selected.

    Everything seems fine, but I am still getting a product count of 0. Please let me know how I can solve this issue. Thanks


  • GA written in Java

    - by EnderMB
    I am attempting to write a Genetic Algorithm based on techniques I had picked up from the book "AI Techniques for Game Programmers" that uses a binary encoding and fitness proportionate selection (also known as roulette wheel selection) on the genes of the population that are randomly generated within the program in a two-dimensional array. I recently came across a piece of pseudocode and have tried to implement it, but have come across some problems with the specifics of what I need to be doing. I've checked a number of books and some open-source code and am still struggling to progress. I understand that I have to get the sum of the total fitness of the population, pick a random number between the sum and zero, then if the number is greater than the parents to overwrite it, but I am struggling with the implementation of these ideas. Any help in the implementation of these ideas would be very much appreciated as my Java is rusty.
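
    For the selection step itself, fitness-proportionate (roulette-wheel) selection reduces to: sum all fitnesses, draw a random number in [0, total), then walk the population accumulating fitness until the running total passes the draw. A compact sketch of just that step, written in C# but translating line-for-line to Java; it assumes a fitness array indexed the same way as the population:

        using System;

        static class Roulette
        {
            static readonly Random rng = new Random();

            // Returns the index of the selected individual, with probability
            // proportional to its share of the total fitness.
            public static int Select(double[] fitness)
            {
                double total = 0;
                for (int i = 0; i < fitness.Length; i++)
                    total += fitness[i];

                double slice = rng.NextDouble() * total;   // random point on the wheel
                double running = 0;
                for (int i = 0; i < fitness.Length; i++)
                {
                    running += fitness[i];
                    if (running >= slice)
                        return i;
                }
                return fitness.Length - 1;   // guard against floating-point round-off
            }
        }

    In the usual generational scheme, the selected parents are copied into a new population (after crossover and mutation) rather than overwriting individuals in place.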


  • Changing the indexing on existing table in SQL Server 2000

    - by Raj
    Guys, Here is the scenario: SQL Server 2000 (8.0.2055) Table currently has 478 million rows of data. The Primary Key column is an INT with IDENTITY. There is an Unique Constraint imposed on two other columns with a Non-Clustered Index. This is a vendor application and we are only responsible for maintaining the DB. Now the vendor has recommended doing the following "to improve performance" Drop the PK and Clustered Index Drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT Recreate the PK, with a NON-CLUSTERED index Create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT I am not convinced that this is the right thing to do. I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data. Then creating a CLUSTERED INDEX on two columns would be a really mammoth task. Would creating another table with the same structure and new indexing scheme and then copying the data over, dropping the old table and renaming the new one be a better approach? I am also not sure how the stored procs will react. Will they continue using the cached execution plan, considering that they are not being explicitly recompiled. I am simply not able to understand what kind of "performance improvement" this change will provide. I think that this will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj


  • How to organize code using an optional assembly reference?

    - by apoorv020
    I am working on a project and want to optionally use an assembly if available. This assembly is only available on WS 2008 R2, and my ideal product would be a common binary for both computers with and without the assembly. However, I'm primarily developing on a Windows 7 machine, where I cannot install the assembly. How can I organize my code so that I can (with minimum changes) build my code on a machine without the assembly, and secondly, how do I ensure that I call the assembly functions only when it is present? (NOTE: The only use of the optional assembly is to instantiate a class in the library and repeatedly call a (single) function of the class, which returns a boolean. The assembly is fsrmlib, which exposes advanced file system management operations on WS08R2.) I'm currently thinking of writing a wrapper class, which will always return true if the assembly is not present. Is this the right way to go about doing this?
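
    The wrapper idea works well if the binding to the optional assembly happens at runtime via reflection instead of a compile-time reference: the project then builds unchanged on Windows 7, and the wrapper falls back to "always true" whenever the assembly or type can't be loaded. The type and method names below are placeholders, not the real fsrmlib API, and would need to be replaced with the actual class being instantiated and the boolean-returning function being called.

        using System;
        using System.Reflection;

        public interface IQuotaChecker
        {
            bool Check(string path);
        }

        // Fallback used on machines (e.g. Windows 7) where the optional assembly is absent.
        public sealed class AlwaysTrueChecker : IQuotaChecker
        {
            public bool Check(string path) { return true; }
        }

        // Late-bound wrapper that only touches the optional assembly through reflection.
        public sealed class ReflectionChecker : IQuotaChecker
        {
            private readonly object instance;
            private readonly MethodInfo method;

            private ReflectionChecker(object instance, MethodInfo method)
            {
                this.instance = instance;
                this.method = method;
            }

            public static IQuotaChecker Create()
            {
                // Placeholder assembly-qualified name; substitute the real fsrmlib type.
                Type type = Type.GetType("FsrmLib.QuotaManager, fsrmlib", false);
                if (type == null)
                    return new AlwaysTrueChecker();

                MethodInfo m = type.GetMethod("Check", new[] { typeof(string) });
                if (m == null || m.ReturnType != typeof(bool))
                    return new AlwaysTrueChecker();

                return new ReflectionChecker(Activator.CreateInstance(type), m);
            }

            public bool Check(string path)
            {
                return (bool)method.Invoke(instance, new object[] { path });
            }
        }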


  • Posting an image and textual based data to a wcf service

    - by James Hay
    I have a requirement to write a web service that allows me to post an image to a server along with some additional information about that image. I'm completely new to developing web services (normally client-side dev), so I'm a little stumped as to what I need to look into and try. How do you post binary data and plain text into a service? What RequestFormat should I use? It looks like my options are XML or JSON. Can I use either of these? Bit of a waffly question, but I just need some direction rather than a solution, as I can't seem to find much online.
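
    One low-friction way to do this with WCF is to put the image bytes and the textual fields in a single [DataContract]: the XML serializer writes a byte[] member as Base64 text and WCF's JSON serializer writes it as an array of bytes, so either RequestFormat works without custom plumbing. A sketch under that assumption (service and member names are illustrative); for very large images, a streamed Stream parameter with the metadata carried in the URI or headers is the usual alternative.

        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [DataContract]
        public class ImageUpload
        {
            [DataMember] public string Title { get; set; }
            [DataMember] public string Description { get; set; }
            [DataMember] public byte[] ImageBytes { get; set; }   // Base64 in XML, byte array in JSON
        }

        [ServiceContract]
        public interface IImageService
        {
            [OperationContract]
            [WebInvoke(Method = "POST",
                       UriTemplate = "images",
                       RequestFormat = WebMessageFormat.Json,
                       ResponseFormat = WebMessageFormat.Json)]
            string Upload(ImageUpload upload);   // e.g. returns the stored image's id or URL
        }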

