Search Results

Search found 4214 results on 169 pages for 'binary serializer'.

Page 72 of 169

  • How to set default values on the properties of dynamically loaded types at runtime for XML serialization

    - by Erkan Y.
    I need to serialize classes of dynamically loaded types using XmlSerializer. When using the XML serializer, uninitialized values are not serialized. I don't have control over the assemblies I am working with, so I cannot use XML attributes to specify default values on properties. So I think I need to set all properties and sub-properties to their default values recursively and then serialize. (Please let me know if there is a better way.) I followed this: Activator.CreateInstance(propType); but that line complains about some types not having a parameterless constructor. I also tried this: subObject = FormatterServices.GetUninitializedObject(propType); but that one gives an error "value was invalid" with no inner exception. Please let me know if you need any further information.
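
    A rough sketch of the recursive-default idea described above (the helper name is made up; it skips indexers and does not guard against cycles or handle collections):

        using System;
        using System.Reflection;
        using System.Runtime.Serialization;

        static class DefaultFiller
        {
            // Walk writable reference-type properties and give each null one a value
            // before handing the object graph to XmlSerializer.
            public static void FillDefaults(object instance)
            {
                if (instance == null) return;
                foreach (PropertyInfo prop in instance.GetType()
                             .GetProperties(BindingFlags.Public | BindingFlags.Instance))
                {
                    if (!prop.CanRead || !prop.CanWrite) continue;
                    if (prop.GetIndexParameters().Length > 0) continue;  // skip indexers
                    Type t = prop.PropertyType;
                    if (t.IsValueType || t == typeof(string)) continue;  // value types already have defaults
                    object current = prop.GetValue(instance, null);
                    if (current == null)
                    {
                        // Prefer a public parameterless constructor; fall back to an
                        // uninitialized instance for types that lack one.
                        current = t.GetConstructor(Type.EmptyTypes) != null
                            ? Activator.CreateInstance(t)
                            : FormatterServices.GetUninitializedObject(t);
                        prop.SetValue(instance, current, null);
                    }
                    FillDefaults(current);  // recurse into sub-properties
                }
            }
        }

    FormatterServices.GetUninitializedObject bypasses constructors entirely, which is why it is only used as a fallback here.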

    Read the article

  • What algorithm can I use to determine points within a semi-circle?

    - by khayman218
    I have a list of two-dimensional points and I want to obtain which of them fall within a semi-circle. Originally, the target shape was a rectangle aligned with the x and y axis. So the current algorithm sorts the pairs by their X coord and binary searches to the first one that could fall within the rectangle. Then it iterates over each point sequentially. It stops when it hits one that is beyond both the X and Y upper-bound of the target rectangle. This does not work for a semi-circle as you cannot determine an effective upper/lower x and y bounds for it. The semi-circle can have any orientation. Worst case, I will find the least value of a dimension (say x) in the semi-circle, binary search to the first point which is beyond it and then sequentially test the points until I get beyond the upper bound of that dimension. Basically testing an entire band's worth of points on the grid. The problem being this will end up checking many points which are not within the bounds.
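
    For reference, a common containment test for this shape (a sketch, not from the question): a point lies in the semi-circle when it is within the radius of the centre of the flat edge and on the facing side of the diameter.

        // cx,cy = centre of the flat edge, r = radius, (dx,dy) = unit vector
        // pointing from the flat edge toward the curved half.
        static bool InSemiCircle(double px, double py,
                                 double cx, double cy, double r,
                                 double dx, double dy)
        {
            double vx = px - cx, vy = py - cy;
            if (vx * vx + vy * vy > r * r) return false;  // outside the circle
            return vx * dx + vy * dy >= 0;                // on the round half
        }

    Combined with the sort-and-scan approach above, only the points inside the semi-circle's bounding box would need this exact test.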

    Read the article

  • Timing related crash when unloading a DLL?

    - by fbrereto
    I know I'm reaching for straws here, but this one is a mystery... any pointers or help would be most welcome, so I'm appealing to those more intelligent than I: We have a crash exhibited in our release binaries only. The crash takes place as the binary is bringing itself down and terminating sub-libraries upon which it depends. Its ability to be reproduced is dependent on the machine- some are 100% reliable in reproducing the crash, some don't exhibit the issue at all, and some are in between. The crash is deep within one of the sublibraries, and there is a good likelihood the stack is corrupt by the time the rubble can be brought into a debugger (MSVC 2008 SP1) to be examined. Running the binary under the debugger prevents the bug from happening, as does remote debugging, as does (of all things) connecting to the machine via VNC. We have tried to install the Microsoft Driver Development Kit, and doing so also squelches the bug. What would be the next best place to look? What tools would be best in this circumstance? Does it sound like a race condition, or something else?

    Read the article

  • Why should I use core.autocrlf in Git

    - by Rich
    I have a Git repository that is accessed from both Windows and OS X, and that I know already contains some files with CRLF line-endings. As far as I can tell, there are two ways to deal with this: (1) set core.autocrlf to false everywhere, or (2) follow the instructions here (echoed on GitHub's help pages) to convert the repository to contain only LF line-endings, and thereafter set core.autocrlf to true on Windows and input on OS X. The problem with the second option is that if I have any binary files in the repository that (a) are not correctly marked as binary in .gitattributes, and (b) happen to contain both CRLFs and LFs, they will be corrupted. It is possible my repository contains such files. So why shouldn't I just turn off Git's line-ending conversion? There are a lot of vague warnings on the web about having core.autocrlf switched off causing problems, but very few specific ones; the only ones I've found so far are that kdiff3 cannot handle CRLF endings (not a problem for me), and that some text editors have line-ending issues (also not a problem for me). The repository is internal to my company, so I don't need to worry about sharing it with people with different autocrlf settings or line-ending requirements. Are there any other problems with just leaving line-endings as-is that I am unaware of?

    Read the article

  • BASH: How to count all the human readable files?

    - by user1687406
    I'm taking an intro course to UNIX and have a homework question that follows: How many files in the previous question are text files? A text file is any file containing human-readable content. (TRICK QUESTION. Run the file command on a file to see whether the file is a text file or a binary data file! If you simply count the number of files with the ".txt" extension you will get no points for this question.) The previous question simply asked how many regular files there were, which was easy to figure out by doing find . -type f | wc -l I'm just having trouble determining what "human readable content" is, since I'm assuming it means anything besides binary/assembly, but I thought that's what -type f displays. Maybe that's what the professor meant by saying "trick question"? This question has a follow up later that also asks "What text files contain the string "csc" in any mix of upper and lower case?". Obviously "text" is referring to more than just .txt files, but I need to figure out the first question to determine this!

    Read the article

  • Embedded Record is not getting loaded in Ember.js

    - by Venky
    Following is the JSON data I am trying to load using ember-data: { "product" : [ { "id" : 1, "name" : "product1", "master" : { "id" : 1, "name" : "product1", "images" : [ { "id" : 1, "productUrl" : "/images/product1_1.jpg" }, { "id" : 2, "productUrl" : "/images/product1_2.jpg" } ] } }, { "id" : 2, "name" : "product2", "master" : { "id" : 2, "name" : "product2", "images" : [ { "id" : 3, "productUrl" : "/images/product2_1.jpg" }, { "id" : 4, "productUrl" : "/images/product2_2.jpg" } ] } } ] } The models are as follows: App.Product = DS.Model.extend name: DS.attr('string') description: DS.attr('string') master: DS.belongsTo('master') App.Master = DS.Model.extend images: DS.hasMany('image') App.Image = DS.Model.extend productUrl: DS.attr('string') The Application Serializer code is as follows: App.ApplicationSerializer = DS.ActiveModelSerializer.extend(DS.EmbeddedRecordsMixin, attrs: { images: { embedded : 'always' } master: { embedded : 'always' } } ) The problem is that the "master" model records are being returned empty. I am not sure, where I am going wrong. I am using the following platform configuration: ember-source (1.4.0) ember-data-source (1.0.0.beta.7) ember-rails (0.15.0) Rails (4.1.0) Thanks

    Read the article

  • Error No. 150 in MySQL

    - by buddhamagnet
    Hi all. This is making me sweat - I am getting error 150 when I try and create a table in mySQL. I've scoured the forums to no avail. The statement uses foreign key constraints - both tables are InnoDB, all relevant columns have the same data type and both tables have the same charset and collation. Here's the CREATE TABLE and the original CREATE TABLE statement for the table that's being referenced. Any ideas? New table: CREATE TABLE `approval` ( `rev_id` int(10) UNSIGNED NOT NULL, `rev_page` int(10) UNSIGNED NOT NULL, `user_id` int(10) UNSIGNED NOT NULL, `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, PRIMARY KEY (`rev_id`,`rev_page`,`user_id`), KEY `FK_approval_user` (`user_id`), CONSTRAINT `FK_approval_revision` FOREIGN KEY (`rev_id`, `rev_page`) REFERENCES `revision` (`rev_id`, `rev_page`), CONSTRAINT `FK_approval_user` FOREIGN KEY (`user_id`) REFERENCES `user` (`user_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; Referenced table: CREATE TABLE `revision` ( `rev_id` int(10) unsigned NOT NULL auto_increment, `rev_page` int(10) unsigned NOT NULL, `rev_text_id` int(10) unsigned NOT NULL, `rev_comment` tinyblob NOT NULL, `rev_user` int(10) unsigned NOT NULL default '0', `rev_user_text` varbinary(255) NOT NULL default '', `rev_timestamp` binary(14) NOT NULL default '\0\0\0\0\0\0\0\0\0\0\0\0\0\0', `rev_minor_edit` tinyint(3) unsigned NOT NULL default '0', `rev_deleted` tinyint(3) unsigned NOT NULL default '0', `rev_len` int(10) unsigned default NULL, `rev_parent_id` int(10) unsigned default NULL, PRIMARY KEY (`rev_id`), UNIQUE KEY `rev_page_id` (`rev_page`,`rev_id`), KEY `rev_timestamp` (`rev_timestamp`), KEY `page_timestamp` (`rev_page`,`rev_timestamp`), KEY `user_timestamp` (`rev_user`,`rev_timestamp`), KEY `usertext_timestamp` (`rev_user_text`,`rev_timestamp`) ) ENGINE=InnoDB AUTO_INCREMENT=4904 DEFAULT CHARSET=binary MAX_ROWS=10000000 AVG_ROW_LENGTH=1024;

    Read the article

  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase the throughput by 100 times more than normal. For example, we are going to be featured on a television show, and I expect in the hour after the show, I'll get more than 100 times more traffic than normal. My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places: RAM buffers, the commit log, the binary log, the actual tables, and all of the above places on my DB slave. This is too much "durability" given that I'm on an EC2 node and most of the stuff goes across the same network pipe (file systems are network attached). Plus the drives are just slow. The data is not high value and I'd rather take a small chance of a few minutes of data loss rather than have a high probability of an outage when the crowd arrives. During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to just keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commitlog or binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables. In other words, I'd like MySQL/InnoDB to use a "write back" cache algorithm rather than a "write through" cache algorithm. Can I convince it to do that? If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users, I am creating accounts for all of them, so that's a write-heavy workload.

    Read the article

  • Collectable<T> serialization, Root Namespaces on T in .xml files.

    - by Stacey
    I have a Repository Class with the following method... public T Single<T>(Predicate<T> expression) { using (var list = (Models.Collectable<T>)System.Xml.Serializer.Deserialize(typeof(Models.Collectable<T>), FileName)) { return list.Find(expression); } } Where Collectable is defined.. [Serializable] public class Collectable<T> : List<T>, IDisposable { public Collectable() { } public void Dispose() { } } And an Item that uses it is defined.. [Serializable] [System.Xml.Serialization.XmlRoot("Titles")] public partial class Titles : Collectable<Title> { } The problem is when I call the method, it expects "Collectable" to be the XmlRoot, but the XmlRoot is "Titles" (all of object Title). I have several classes that are collected in .xml files like this, but it seems pointless to rewrite the basic methods for loading each up when the generic accessors do it - but how can I enforce the proper root name for each file without hard coding methods for each one? The [System.Xml.Serialization.XmlRoot] seems to be ignored.
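
    A sketch of one way around this with the stock System.Xml.Serialization.XmlSerializer rather than the question's wrapper (method and parameter names are illustrative): the root element name can be supplied when the serializer is constructed, so one generic loader can serve files whose roots differ.

        using System.IO;
        using System.Xml.Serialization;

        public static TCollection Load<TCollection>(string fileName, string rootName)
        {
            // Note: this constructor overload emits a new serialization assembly on
            // every call, so cache the serializer per (type, root) pair in real use.
            var serializer = new XmlSerializer(typeof(TCollection),
                                               new XmlRootAttribute(rootName));
            using (var stream = File.OpenRead(fileName))
            {
                return (TCollection)serializer.Deserialize(stream);
            }
        }

    Deserializing into the derived type (typeof(Titles)) instead of Collectable<Title> would also let its [XmlRoot("Titles")] attribute take effect.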

    Read the article

  • Pre Project Documentation

    - by DeanMc
    I have an issue that I feel many programmers can relate to... I have worked on many small scale projects. After my initial paper brain storm I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion so I am talking about underlying code libraries, user interfaces are the last thing as the library usually dictates what is needed in the UI. As my projects get bigger I worry that so should my "spec" or design document. The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. When a UI is concerned there is a bit more information but it is UI specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code. When I need to research a topic I dump information into OneNote and from there I prioritise features into versions and then into related chunks so that development runs in a fairly linear fashion, my tasks tend to look like so: Implement Binary File Reader Implement Binary File Writer Create Object to encapsulate Data for expression to the caller Now any programmer worth his salt is aware that between those three to do items could be a potential wall of code that could expand out to multiple files. I have tried to map the complete code process for each task but I simply don't think it can be done effectively. By the time one mangles pseudo code it is essentially code anyway so the time investment is negated. So my question is this: Am I right in assuming that the best documentation is the code itself. We are all in agreement that a high level overview is needed. How high should this be? Do you design to statement, class or concept level? What works for you?

    Read the article

  • How to create C++ istringstream from a char array with null(0) characters?

    - by Morpheus
    I have a char array which contains null characters at random locations. I tried to create an istringstream using this array (encodedData_arr) as below. I use this istringstream to insert binary data (the image data of an IplImage) into a MySQL database blob field (using MySQL Connector/C++'s setBlob(istream *is)), but it only stores the characters up to the first null character. Is there a way to create an istringstream using a char array with null characters? unsigned char *encodedData_arr = new unsigned char[data_vector_uchar->size()]; // Assign the data of vector<unsigned char> to the encodedData_arr for (int i = 0; i < vec_size; ++i) { cout<< data_vector_uchar->at(i)<< " : "<< encodedData_arr[i]<<endl; } // Here the content of the encodedData_arr is same as the data_vector_uchar // So char array is initializing fine. istream *is = new istringstream((char*)encodedData_arr, istringstream::in || istringstream::binary); prepStmt_insertImage->setBlob(1, is); // Here only part of the data is stored in the database blob field (up to the first null character)

    Read the article

  • Where are mpx386.s and start.c in Minix 3.2?

    - by John Bowlinger
    I'm trying to follow along in Operating Systems and Implementation 3rd edition and I'm now at the part in the book where Tanenbaum is discussing bootup and kernel process switching. He keeps referring to these 2 files (mpx386.s, start.c) that are supposedly in a directory called kernel, but I can't seem to find them. In the root directory, when I go to boot/minix/3.2.0/kernel, kernel just seems to be a binary file that is illegible in terminal. There also seems to be a bunch of mod01-mod12 gz binary files as well in the 3.2.0 directory. Am I in the wrong directory, or is there something I need to install and do to read kernel? I would like to follow along with the book to what's on my screen, instead of constantly flipping back and forth. I realize alot of files are completely different from this book published in 2006 and I accept that, but this seems to be a critical juncture of the book and the operating system as a whole. If it's any consolation, I'm running the OS in Virtualbox on a 64-bit Macbook.

    Read the article

  • Partially Modifying an XML serialized document.

    - by Stacey
    I have an XML document, several actually, that will be editable via a front-end UI. I've discovered a problem with this approach (other than the fact that it is using xml files instead of a database... but I cannot change that right now). If one user makes a change while another user is in the process of making a change, then the second one's changes will overwrite the first. I need to be able to request objects from the xml files, change them, and then submit the changes back to the xml file without re-writing the entire file. I've got my entire xml access class posted here (which was formed thanks to wonderful help from stackoverflow!) using System; using System.Linq; using System.Collections; using System.Collections.Generic; namespace Repositories { /// <summary> /// A file base repository represents a data backing that is stored in an .xml file. /// </summary> public partial class Repository<T> : IRepository { /// <summary> /// Default constructor for a file repository /// </summary> public Repository() { } /// <summary> /// Initialize a basic repository with a filename. This will have to be passed from a context to be mapped. /// </summary> /// <param name="filename"></param> public Repository(string filename) { FileName = filename; } /// <summary> /// Discovers a single item from this repository. /// </summary> /// <typeparam name="TItem">The type of item to recover.</typeparam> /// <typeparam name="TCollection">The collection the item belongs to.</typeparam> /// <param name="expression"></param> /// <returns></returns> public TItem Single<TItem, TCollection>(Predicate<TItem> expression) where TCollection : IDisposable, IEnumerable<TItem> { using (var list = List<TCollection>()) { return list.Single(i => expression(i)); } } /// <summary> /// Discovers a collection from the repository, /// </summary> /// <typeparam name="TCollection"></typeparam> /// <returns></returns> public TCollection List<TCollection>() where TCollection : IDisposable { using (var list = System.Xml.Serializer.Deserialize<TCollection>(FileName)) { return (TCollection)list; } } /// <summary> /// Discovers a single item from this repository. /// </summary> /// <typeparam name="TItem">The type of item to recover.</typeparam> /// <typeparam name="TCollection">The collection the item belongs to.</typeparam> /// <param name="expression"></param> /// <returns></returns> public List<TItem> Select<TItem, TCollection>(Predicate<TItem> expression) where TCollection : IDisposable, IEnumerable<TItem> { using (var list = List<TCollection>()) { return list.Where( i => expression(i) ).ToList<TItem>(); } } /// <summary> /// Attempts to save an entire collection. /// </summary> /// <typeparam name="TCollection"></typeparam> /// <param name="collection"></param> /// <returns></returns> public Boolean Save<TCollection>(TCollection collection) { try { // load the collection into an xml reader and try to serialize it. 
System.Xml.XmlDocument xDoc = new System.Xml.XmlDocument(); xDoc.LoadXml(System.Xml.Serializer.Serialize<TCollection>(collection)); // attempt to flush the file xDoc.Save(FileName); // assume success return true; } catch { return false; } } internal string FileName { get; private set; } } public interface IRepository { TItem Single<TItem, TCollection>(Predicate<TItem> expression) where TCollection : IDisposable, IEnumerable<TItem>; TCollection List<TCollection>() where TCollection : IDisposable; List<TItem> Select<TItem, TCollection>(Predicate<TItem> expression) where TCollection : IDisposable, IEnumerable<TItem>; Boolean Save<TCollection>(TCollection collection); } }
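
    A sketch of a serialized read-modify-write update (the Id attribute, class and method names are assumptions, not taken from the code above): an XML file cannot be patched in place, so the whole file is still rewritten, but reloading under a lock before each save keeps one user's change from silently overwriting another's.

        using System;
        using System.Linq;
        using System.Xml.Linq;

        public static class XmlStore
        {
            private static readonly object FileLock = new object();

            public static void UpdateItem(string fileName, string id, Action<XElement> change)
            {
                lock (FileLock)                        // serialize writers within this process
                {
                    XDocument doc = XDocument.Load(fileName);
                    XElement item = doc.Root.Elements()
                                       .Single(e => (string)e.Attribute("Id") == id);
                    change(item);                      // mutate just this element
                    doc.Save(fileName);                // write back from freshly loaded data
                }
            }
        }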

    Read the article

  • ASP.NET web services leak memory when (de)serializing disposable objects?

    - by Serilla
    In the following two cases, if Customer is disposable (implementing IDisposable), I believe it will not be disposed by ASP.NET, potentially being the cause of a memory leak: [WebMethod] public Customer FetchCustomer(int id) { return new Customer(id); } [WebMethod] public void SaveCustomer(Customer value) { // save it } This flaw applies to any IDisposable object. So returning a DataSet from an ASP.NET web service, for example, will also result in a memory leak - the DataSet will not be disposed. In my case, Customer opened a database connection which was cleaned up in Dispose - except Dispose was never called, resulting in loads of unclosed database connections. I realise there are a whole bunch of bad practices being followed here (it's only an example anyway), but the point is that ASP.NET - the (de)serializer - is responsible for disposing these objects, so why doesn't it? This is an issue I was aware of for a while, but never got to the bottom of. I'm hoping somebody can confirm what I have found, and perhaps explain if there is a way of dealing with it.
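
    One pattern sometimes used to sidestep this, sketched with a hypothetical CustomerDto: return a plain data-transfer object and dispose the resource-holding object before the method returns, so nothing disposable ever reaches the serializer.

        [WebMethod]
        public CustomerDto FetchCustomer(int id)
        {
            using (var customer = new Customer(id))   // owns the database connection
            {
                return CustomerDto.From(customer);    // plain data, nothing left to dispose
            }
        }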

    Read the article

  • Response as mail attachment

    - by el ninho
    using (MemoryStream stream = new MemoryStream()) { compositeLink.PrintingSystem.ExportToPdf(stream); Response.Clear(); Response.Buffer = false; Response.AppendHeader("Content-Type", "application/pdf"); Response.AppendHeader("Content-Transfer-Encoding", "binary"); Response.AppendHeader("Content-Disposition", "attachment; filename=test.pdf"); Response.BinaryWrite(stream.GetBuffer()); Response.End(); } I got this working fine. Next step is to send this pdf file to mail, as attachment using (MemoryStream stream = new MemoryStream()) { compositeLink.PrintingSystem.ExportToPdf(stream); Response.Clear(); Response.Buffer = false; Response.AppendHeader("Content-Type", "application/pdf"); Response.AppendHeader("Content-Transfer-Encoding", "binary"); Response.AppendHeader("Content-Disposition", "attachment; filename=test.pdf"); Response.BinaryWrite(stream.GetBuffer()); System.Net.Mail.MailMessage message = new System.Net.Mail.MailMessage(); message.To.Add("[email protected]"); message.Subject = "Subject"; message.From = new System.Net.Mail.MailAddress("[email protected]"); message.Body = "Body"; message.Attachments.Add(Response.BinaryWrite(stream.GetBuffer())); System.Net.Mail.SmtpClient smtp = new System.Net.Mail.SmtpClient("192.168.100.100"); smtp.Send(message); Response.End(); } I have problem with this line: message.Attachments.Add(Response.BinaryWrite(stream.GetBuffer())); Any help how to get this to work? Thanks
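
    The Attachments collection takes an Attachment object; Response.BinaryWrite returns void, so its result cannot be passed to Add. A sketch of just that part, reusing the variables from the snippet above and assuming Send is called before the MemoryStream is disposed:

        stream.Position = 0;   // rewind: ExportToPdf left the position at the end
        message.Attachments.Add(
            new System.Net.Mail.Attachment(stream, "test.pdf", "application/pdf"));
        smtp.Send(message);    // send while the stream is still open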

    Read the article

  • Is it necessary to declare the [DataMember(Order=n)] attribute on a public method?

    - by veera
    In my solution, I have created a public class to store values and have already declared the [DataContract] and [DataMember] attributes. For example, [DataContract] public class MeterSizeInfo { string _meterSizeId; [DataMember(Order = 1)] public string MeterSizeId { get { return this._meterSizeId; } set { this._meterSizeId = value; } } string _meterSizeName; [DataMember(Order = 2)] public string MeterSizeName { get { return this._meterSizeName; } set { this._meterSizeName = value; } } } Now I need to add another public method exposed to the entire project, and I wonder whether I have to add [DataMember(Order = 3)] to it or not. [DataMember(Order = 3)] //<--- must declare or not? public string DoSomething() { // do something... } I understand that if I want to use the protobuf-net serializer to serialize my stored data, I have to declare those attributes, but I'm not sure whether that applies to methods. Please help. Thank you in advance.
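
    For what it is worth, DataMemberAttribute can only target fields and properties, so a method cannot carry it at all; a sketch (the class is restated with auto-properties for brevity):

        [DataContract]
        public class MeterSizeInfo
        {
            [DataMember(Order = 1)] public string MeterSizeId   { get; set; }
            [DataMember(Order = 2)] public string MeterSizeName { get; set; }

            // No attribute here: [DataMember] applies only to fields and properties,
            // so methods are never part of the serialized contract.
            public string DoSomething()
            {
                return MeterSizeId + " " + MeterSizeName;
            }
        }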

    Read the article

  • C++, Ifstream opens local file but not file on HTTP Server

    - by fammi
    Hi, I am using ifstream to open a file and then read from it. My program works fine when I give the location of a local file on my system: for example, /root/Desktop/abc.xxx works fine. But once the location is on the HTTP server, the file fails to open: for example, http://192.168.0.10/abc.xxx fails to open. Is there any alternative to ifstream when using a URL address? Thanks. This is the part of the code where I am having the problem: bool readTillEof = (endIndex == -1) ? true : false; // Open the file in binary mode and seek to the end to determine file size ifstream file ( fileName.c_str ( ), ios::in|ios::ate|ios::binary ); if ( file.is_open ( ) ) { long size = (long) file.tellg ( ); long numBytesRead; if ( readTillEof ) { numBytesRead = size - startIndex; } else { numBytesRead = endIndex - startIndex + 1; } // Allocate a new buffer ptr to read in the file data BufferSptr buf (new Buffer ( numBytesRead ) ); mpStreamingClientEngine->SetResponseBuffer ( nextRequest, buf ); // Seek to the start index of the byte range // and read the data file.seekg ( startIndex, ios::beg ); file.read ( (char *)buf->GetData(), numBytesRead ); // Pass on the data to the SCE // and signal completion of request mpStreamingClientEngine->HandleDataReceived( nextRequest, numBytesRead); mpStreamingClientEngine->MarkRequestCompleted( nextRequest ); // Close the file file.close ( ); } else { // Report error to the Streaming Client Engine // as unable to open file AHS_ERROR ( ConnectionManager, " Error while opening file \"%s\"\n", fileName.c_str ( ) ); mpStreamingClientEngine->HandleRequestFailed( nextRequest, CONNECTION_FAILED ); } }

    Read the article

  • C++ - Distributing different headers than development

    - by Ben
    I was curious about doing this in C++: Lets say I have a small library that I distribute to my users. I give my clients both the binary and the associated header files that they need. For example, lets assume the following header is used in development: #include <string> ClassA { public: bool setString(const std::string & str); private: std::string str; }; Now for my question. For deployment, is there anything fundamentally wrong with me giving a 'reduced' header to my clients? For example, could I strip off the private section and simply give them this: #include <string> ClassA { public: bool setString(const std::string & str); }; My gut instinct says "yes, this is possible, but there are gotchas", so that is why I am asking this question here. If this is possible and also safe, it looks like a great way to hide private variables, and thus even avoid forward declaration in some cases. I am aware that the symbols will still be there in the binary itself, and that this is just a visibility thing at the source code level. Thanks!

    Read the article

  • Could I return a FileStream as a generic interface to a file?

    - by Eric
    I'm writing a class interface that needs to return references to binary files. Typically I would provide a reference to a file as a file path. However, I'm considering storing some of the files (such as a small thumbnail) in a database directly rather than on a file system. In this case I don't want to add the extra step of reading the thumbnail out of the database onto the disc and then returning a path to the file for my program to read. I'd want to stream the image directly out of the database into my program and avoid writing anything to the disc unless the user explicitly wants to save something. Would having my interface return a FileStream or even an Image make sense? Then it would be up to the implementing class to determine if the source of the FileStream or Image is a file on a disc or binary data in a database. public interface MyInterface { string Thumbnail {get;} string Attachment {get;} } vs public interface MyInterface { Image Thumbnail {get;} FileStream Attachment {get;} }
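
    A sketch of the Stream-returning shape (class and member names are hypothetical, and the byte arrays are assumed to come from the blob columns): the interface exposes the base Stream type, so a disk-backed implementation can hand out a FileStream while a database-backed one hands out a MemoryStream.

        using System.Drawing;
        using System.IO;

        public interface IMediaItem
        {
            Image  Thumbnail  { get; }
            Stream Attachment { get; }               // base Stream, not FileStream
        }

        public class DbBackedItem : IMediaItem
        {
            private readonly byte[] thumbnailBytes;  // loaded from a blob column elsewhere
            private readonly byte[] attachmentBytes;

            public DbBackedItem(byte[] thumbnail, byte[] attachment)
            {
                thumbnailBytes = thumbnail;
                attachmentBytes = attachment;
            }

            public Image Thumbnail
            {
                get { return Image.FromStream(new MemoryStream(thumbnailBytes)); }
            }

            public Stream Attachment
            {
                get { return new MemoryStream(attachmentBytes); }  // caller never touches the disc
            }
        }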

    Read the article

  • How to read/write from Erlang to a named pipe?

    - by cstar
    I need my erlang application to read and write through a named pipe. Opening the named pipe as a file will fail with eisdir. I wrote the following module, but it is fragile and feels wrong in many ways. Also it fails on reading after a while. Is there a way to make it more ... elegant ? -module(port_forwarder). -export([start/2, forwarder/2]). -include("logger.hrl"). start(From, To)-> spawn(fun() -> forwarder(From, To) end). forwarder(FromFile, ToFile) -> To = open_port({spawn,"/bin/cat > " ++ ToFifo}, [binary, out, eof,{packet, 4}]), From = open_port({spawn,"/bin/cat " ++ FromFifo}, [binary, in, eof, {packet, 4}]), forwarder(From, To, nil). forwarder(From, To, Pid) -> receive {Manager, {command, Bin}} -> ?ERROR("Sending : ~p", [Bin]), To ! {self(), {command, Bin}}, forwarder(From, To, Manager); {From ,{data,Data}} -> Pid ! {self(), {data, Data}}, forwarder(From, To, Pid); E -> ?ERROR("Quitting, first message not understood : ~p", [E]) end. As you may have noticed, it's mimicking the port format in what it accepts or returns. I want it to replace a C code that will be reading the other ends of the pipes and being launched from the debugger.

    Read the article

  • Image.Save(..) throws a GDI+ exception because the memory stream is closed.

    - by Pure.Krome
    Hi folks, I've got some binary data which I want to save as an image. When I try to save the image, it throws an exception if the memory stream used to create the image was closed before the save. The reason I do this is because I'm dynamically creating images and as such... I need to use a memory stream. This is the code: [TestMethod] public void TestMethod1() { // Grab the binary data. byte[] data = File.ReadAllBytes("Chick.jpg"); // Read in the data but do not close, before using the stream. Stream originalBinaryDataStream = new MemoryStream(data); Bitmap image = new Bitmap(originalBinaryDataStream); image.Save(@"c:\test.jpg"); originalBinaryDataStream.Dispose(); // Now lets use a nice dispose, etc... Bitmap image2; using (Stream originalBinaryDataStream2 = new MemoryStream(data)) { image2 = new Bitmap(originalBinaryDataStream2); } image2.Save(@"C:\temp\pewpew.jpg"); // This throws the GDI+ exception. } Does anyone have any suggestions as to how I could save an image with the stream closed? I cannot rely on the developers to remember to close the stream after the image is saved. In fact, the developer would have NO IDEA that the image was generated using a memory stream (because it happens in some other code, elsewhere). I'm really confused :(
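
    One workaround sometimes used here (a sketch, reusing the data variable from the test above): copy the bitmap while the stream is still open, so the object being saved no longer depends on it. Note the copy is re-encoded on save, so the original JPEG bytes are not preserved bit-for-bit.

        Bitmap image2;
        using (var stream = new MemoryStream(data))
        using (var original = new Bitmap(stream))
        {
            image2 = new Bitmap(original);    // deep copy with its own pixel data
        }
        image2.Save(@"C:\temp\pewpew.jpg");   // no longer tied to the disposed stream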

    Read the article

  • Save objects to a database?

    - by Eric
    So far in my .Net coding adventures I've only had a need to save information to files. So I've used XmlSerializer and DataContractSerializer to serialize attributed classes to XML files. My next project, however, requires that I save and retrieve information from a SQL server database. I'm wondering what my options are for doing this. The current version of the app, which was not created by me, uses a lot of hard coded SQL commands. But now I'm trying to avoid doing anything where I have to read or write individual fields to or from the database or objects. I especially want to avoid a lot of hard coded SQL in my code. I like how the serializer classes just figure out how to read and write XML files based on the attributes and or public properties of the class. Is there something similar for a database rather then XML?
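
    One attribute-driven option in the same spirit as the XML serializers is LINQ to SQL; a rough sketch from memory, with illustrative table and column names:

        using System.Data.Linq;
        using System.Data.Linq.Mapping;

        [Table(Name = "Customers")]
        public class Customer
        {
            [Column(IsPrimaryKey = true)] public int    Id   { get; set; }
            [Column]                      public string Name { get; set; }
        }

        public static class CustomerStore
        {
            public static void Save(string connectionString, Customer customer)
            {
                using (var db = new DataContext(connectionString))
                {
                    db.GetTable<Customer>().InsertOnSubmit(customer);
                    db.SubmitChanges();   // mapping is driven by the attributes, no SQL strings
                }
            }
        }

    Entity Framework and NHibernate fill the same role with richer mapping options.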

    Read the article

  • Upload a new file: first check whether it already exists in the database, and save it only if it does not

    - by Hala Qaseam
    I'm trying to create a SQL database with an Image table that contains Id (int), ImageName (varchar(50)) and Image (image), and in the .aspx page the upload button runs this code: protected void btnUpload_Click(object sender, EventArgs e) { //Condition to check if the file uploaded or not if (fileuploadImage.HasFile) { //getting length of uploaded file int length = fileuploadImage.PostedFile.ContentLength; //create a byte array to store the binary image data byte[] imgbyte = new byte[length]; //store the currently selected file in memory HttpPostedFile img = fileuploadImage.PostedFile; //set the binary data img.InputStream.Read(imgbyte, 0, length); string imagename = txtImageName.Text; //use the web.config to store the connection string SqlConnection connection = new SqlConnection(strcon); connection.Open(); SqlCommand cmd = new SqlCommand("INSERT INTO Image (ImageName,Image) VALUES (@imagename,@imagedata)", connection); cmd.Parameters.Add("@imagename", SqlDbType.VarChar, 50).Value = imagename; cmd.Parameters.Add("@imagedata", SqlDbType.Image).Value = imgbyte; int count = cmd.ExecuteNonQuery(); connection.Close(); if (count == 1) { BindGridData(); txtImageName.Text = string.Empty; ScriptManager.RegisterStartupScript(this, this.GetType(), "alertmessage", "javascript:alert('" + imagename + " image inserted successfully')", true); } } } When I upload a new image I need to first check whether it already exists in the database, and save it only if it does not. How can I do that, please?
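
    A sketch of the existence check only, reusing the connection, parameter and table names from the snippet above (the exact command text is an assumption): run a COUNT first and insert only when it returns zero.

        var check = new SqlCommand(
            "SELECT COUNT(*) FROM [Image] WHERE ImageName = @imagename", connection);
        check.Parameters.Add("@imagename", SqlDbType.VarChar, 50).Value = imagename;
        bool exists = (int)check.ExecuteScalar() > 0;
        if (!exists)
        {
            // ... run the INSERT command from the question here ...
        }

    A unique index on ImageName would make the guard robust if two uploads of the same name arrive at once.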

    Read the article

  • Why is a function's length information from another shared lib stored in the ELF?

    - by minastaros
    Our project (C++, Linux, gcc, PowerPC) consists of several shared libraries. When releasing a new version of the package, only those libs should change whose source code was actually affected. With "change" I mean absolute binary identity (the checksum over the file is compared. Different checksum - different version according to the policy). (I should mention that the whole project is always built at once, no matter if any code has changed or not per library). Usually this can by achieved by hiding private parts of the included Header files and not changing the public ones. However, there was a case where just a delete was added to the destructor of a class TableManager (in the TableManager.cpp file!) of library libTableManager.so, and yet the binary/checksum of library libB.so (which uses class TableManager ) has changed. TableManager.h: class TableManager { public: TableManager(); ~TableManager(); private: int* myPtr; } TableManager.cpp: TableManager::~TableManager() { doSomeCleanup(); delete myPtr; // this delete has been added } By inspecting libB.so with readelf --all libB.so, looking at the .dynsym section, it turned out that the length of all functions, even the dynamically used ones from other libraries, are stored in libB! It looks like this (length is the 668 in the 3rd column): 527: 00000000 668 FUNC GLOBAL DEFAULT UND _ZN12TableManagerD1Ev So my questions are: Why is the length of a function actually stored in the client lib? Wouldn't a start address be sufficient? Can this be suppressed somehow when compiling/linking of libB.so (kind of "stripping")? We would really like to reduce this degree of dependency...

    Read the article

  • Generating all unique crossword puzzle grids

    - by heydenberk
    I want to generate all unique crossword puzzle grids of a certain grid size (4x4 is a good size). All possible puzzles, including non-unique puzzles, are represented by a binary string with the length of the grid area (16 in the case of 4x4), so all possible 4x4 puzzles are represented by the binary forms of all numbers in the range 0 to 2^16. Generating these is easy, but I'm curious if anyone has a good solution for how to programmatically eliminate invalid and duplicate cases. For example, all puzzles with a single column or single row are functionally identical, hence eliminating 7 of those 8 cases. Also, according to crossword puzzle conventions, all squares must be contiguous. I've had success removing all duplicate structures, but my solution took several minutes to execute and probably was not ideal. I'm at something of a loss for how to detect contiguity so if anyone has ideas on this it'd be much appreciated. I'd prefer solutions in python but write in whichever language you prefer. If anyone wants, I can post my python code for generating all grids and removing duplicates, slow as it may be.
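
    A sketch of one contiguity check, written as a flood fill over the 16-bit mask (C# rather than Python, since any language was said to be fine; set bits are assumed to mark open squares):

        using System.Collections.Generic;

        // Treat the 16-bit number as a 4x4 grid (bit i = cell i). All open (set)
        // cells must be reachable from the first open cell by orthogonal steps.
        static bool IsContiguous(ushort grid, int width = 4, int height = 4)
        {
            int total = 0, first = -1;
            for (int i = 0; i < width * height; i++)
                if ((grid >> i & 1) != 0) { total++; if (first < 0) first = i; }
            if (total == 0) return false;

            var seen = new bool[width * height];
            var stack = new Stack<int>();
            stack.Push(first); seen[first] = true;
            int reached = 0;
            int[] dxs = { -1, 1, 0, 0 };
            int[] dys = { 0, 0, -1, 1 };
            while (stack.Count > 0)
            {
                int cell = stack.Pop(); reached++;
                int x = cell % width, y = cell / width;
                for (int k = 0; k < 4; k++)
                {
                    int nx = x + dxs[k], ny = y + dys[k];
                    if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
                    int n = ny * width + nx;
                    if (!seen[n] && (grid >> n & 1) != 0) { seen[n] = true; stack.Push(n); }
                }
            }
            return reached == total;   // one connected region of open squares
        }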

    Read the article
