Search Results

Search found 4838 results on 194 pages for 'binary heap'.


  • C#: How can I remove newline characters from binary data?

    - by Tom
    Basically I have binary data. I don't mind if it's unreadable, but I'm writing it to a file that gets parsed, so it's important that newline characters are taken out. I thought I had done the right thing when I converted it to a string:

        byte[] b = (byte[])SubKey.GetValue(v[i]);
        s = System.Text.ASCIIEncoding.ASCII.GetString(b);

    and then removed the newlines:

        String t = s.replace("\n","");

    but it's not working. Why?
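
    For what it's worth, two likely culprits sit in that last line: the C# method is Replace (capitalized), and since .NET strings are immutable its result must be assigned; also, binary data on Windows usually carries "\r\n" pairs, so the "\r" has to go too. A minimal sketch of the corrected tail end, reusing the names above:

        // Replace() returns a new string; the result must be assigned.
        // Strip carriage returns as well, since Windows newlines are "\r\n".
        string t = s.Replace("\r", "").Replace("\n", "");

    Note, though, that decoding arbitrary binary as ASCII is lossy. If the file only needs a newline-free representation of the bytes, Convert.ToBase64String(b) round-trips them exactly and contains no newlines by default.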

    Read the article

  • Android: OutOfMemoryError: bitmap size exceeds VM budget, with no reason I can see.

    - by Meymann
    Hi. I am getting an OutOfMemory exception with a gallery of 600x800-pixel JPEGs.

    The environment: I've been using a Gallery with JPG images around 600x800 pixels. Since my content may be a bit more complex than just images, each view is a RelativeLayout that wraps an ImageView holding the JPG. To "speed up" the user experience I have a simple 4-slot cache that prefetches (in a looper) about one image to the left and one to the right of the displayed image and keeps them in a 4-slot HashMap.

    The platform: an AVD with 256 MB RAM and a 128 MB heap size, with a 600x800 screen. It also happens on an Entourage Edge target, except that on the device it's harder to debug.

    The problem: when fetching the fifth image I get the exception

        OutOfMemoryError: bitmap size exceeds VM budget

    I have tried changing the size of my image cache, and it's still the same.

    The strange thing: there should not be a memory problem. To make sure the heap limit is far away from what I need, I define a dummy 8 MB array at startup and leave it unreferenced so it's immediately collectable. It is a static member of the activity thread, defined as follows:

        static {
            @SuppressWarnings("unused")
            byte dummy[] = new byte[ 8*1024*1024 ];
        }

    The result is that the heap size is nearly 11 MB and it's all free. (I added that trick after it began to crash; it makes the OutOfMemory less frequent.)

    Now, I am using DDMS. Just before the crash (the numbers do not change much after it), DDMS shows:

        ID    Heap Size    Allocated    Free        %Used     #Objects
        1     11.195 MB    2.428 MB     8.767 MB    21.69%    47,156

    And in the detail table it shows:

        Type    Count    Total Size    Smallest    Largest     Median    Average
        free    1,536    8.739 MB      16 B        7.750 MB    24 B      5.825 KB

    The largest free block is 7.7 MB. And yet LogCat says:

        ERROR/dalvikvm-heap(1923): 925200-byte external allocation too large for this process.

    Given the relation between the median and the average, it is plausible that most of the free blocks are very small. But there is a block large enough for the bitmap, the 7.7 MB one. How come it is still not enough? (Note: I recorded a heap trace. Looking at the amount of data allocated, it does not look like more than 2 MB is allocated, which matches the free-memory report from DDMS.)

    Could it be that I'm running into heap fragmentation? How do I solve or work around the problem? Is the heap shared across all threads? Could it be that I'm misreading the DDMS output and there really is no 900 KB block to allocate? If so, where can I see that?

    Thanks a lot, Meymann
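
    The quoted LogCat line is the real clue: on Android releases before 3.0, bitmap pixel data does not live on the Dalvik heap at all but in native memory charged against a separate "external allocation" budget, which is why the 7.7 MB free Dalvik-heap block never comes into play. The usual mitigations are decoding smaller bitmaps and releasing pixel memory eagerly. A minimal sketch (path, cache and oldestKey are hypothetical names for this example):

        // Decode at reduced resolution: inSampleSize = 2 loads the JPEG at
        // half width and half height, i.e. a quarter of the pixel memory.
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inSampleSize = 2;
        Bitmap bmp = BitmapFactory.decodeFile(path, opts);

        // When an image drops out of the 4-slot cache, free its native
        // pixel memory immediately instead of waiting for finalization.
        Bitmap evicted = cache.remove(oldestKey);
        if (evicted != null) {
            evicted.recycle();
        }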

    Read the article

  • Does boost::asio make excessive small heap allocations, or am I wrong?

    - by Poni
    While experimenting with boost::asio I've noticed that within the calls to async_write()/async_read_some() there is a use of the C++ "new" operator. Also, when stressing this echo server with a client (one connection) that sends, say, 100,000 pieces of data, the memory usage of the program climbs higher and higher. What's going on? Will it allocate memory on every call? Or am I wrong? I'm asking because it doesn't seem right that a server app would allocate anything per operation. Can I handle it, say, with a memory pool? The code:

        #include <cstdlib>
        #include <cstring>   // strstr (missing in the original)
        #include <iostream>
        #include <boost/bind.hpp>
        #include <boost/asio.hpp>

        using boost::asio::ip::tcp;

        class session
        {
        public:
          session(boost::asio::io_service& io_service)
            : socket_(io_service)
          {
          }

          tcp::socket& socket()
          {
            return socket_;
          }

          void start()
          {
            socket_.async_read_some(boost::asio::buffer(data_, max_length - 1),
                boost::bind(&session::handle_read, this,
                  boost::asio::placeholders::error,
                  boost::asio::placeholders::bytes_transferred));
          }

          void handle_read(const boost::system::error_code& error,
              size_t bytes_transferred)
          {
            if (!error)
            {
              data_[bytes_transferred] = '\0';
              if (NULL != strstr(data_, "quit"))
              {
                this->socket().shutdown(boost::asio::ip::tcp::socket::shutdown_both);
                this->socket().close();
                // how to make this dispatch "handle_read()" with a "disconnected" flag?
              }
              else
              {
                boost::asio::async_write(socket_,
                    boost::asio::buffer(data_, bytes_transferred),
                    boost::bind(&session::handle_write, this,
                      boost::asio::placeholders::error));
                socket_.async_read_some(boost::asio::buffer(data_, max_length - 1),
                    boost::bind(&session::handle_read, this,
                      boost::asio::placeholders::error,
                      boost::asio::placeholders::bytes_transferred));
              }
            }
            else
            {
              delete this;
            }
          }

          void handle_write(const boost::system::error_code& error)
          {
            if (error)
            {
              delete this;
            }
          }

        private:
          tcp::socket socket_;
          enum { max_length = 1024 };
          char data_[max_length];
        };

        class server
        {
        public:
          server(boost::asio::io_service& io_service, short port)
            : io_service_(io_service),
              acceptor_(io_service, tcp::endpoint(tcp::v4(), port))
          {
            session* new_session = new session(io_service_);
            acceptor_.async_accept(new_session->socket(),
                boost::bind(&server::handle_accept, this, new_session,
                  boost::asio::placeholders::error));
          }

          void handle_accept(session* new_session,
              const boost::system::error_code& error)
          {
            if (!error)
            {
              new_session->start();
              new_session = new session(io_service_);
              acceptor_.async_accept(new_session->socket(),
                  boost::bind(&server::handle_accept, this, new_session,
                    boost::asio::placeholders::error));
            }
            else
            {
              delete new_session;
            }
          }

        private:
          boost::asio::io_service& io_service_;
          tcp::acceptor acceptor_;
        };

        int main(int argc, char* argv[])
        {
          try
          {
            if (argc != 2)
            {
              std::cerr << "Usage: async_tcp_echo_server <port>\n";
              return 1;
            }
            boost::asio::io_service io_service;
            server s(io_service, std::atoi(argv[1]));
            io_service.run();
          }
          catch (std::exception& e)
          {
            std::cerr << "Exception: " << e.what() << "\n";
          }
          return 0;
        }

    A side question: see the this->socket().close() call? As the comment next to it says, I want it to dispatch that same handler one last time, with a disconnection error, so I can do some clean-up. How do I do that?

    Thank you all, gurus (:
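
    Boost.Asio does make a small, short-lived heap allocation per outstanding asynchronous operation to hold a copy of the handler, and it frees that memory when the handler runs; steadily climbing memory under stress is therefore also worth checking against sessions that are never destroyed on the clean "quit" path. The library lets you recycle a per-connection buffer for that handler state through the asio_handler_allocate/asio_handler_deallocate hooks (found by argument-dependent lookup). A minimal allocator sketch, modeled on the official Boost.Asio "allocation" example:

        // One small reusable buffer per connection; handler state falls back
        // to the global heap only if the buffer is busy or too small.
        // Not thread-safe by itself.
        #include <boost/aligned_storage.hpp>
        #include <cstddef>

        class handler_allocator
        {
        public:
          handler_allocator() : in_use_(false) {}

          void* allocate(std::size_t size)
          {
            if (!in_use_ && size < sizeof(storage_))
            {
              in_use_ = true;
              return storage_.address();
            }
            return ::operator new(size);
          }

          void deallocate(void* pointer)
          {
            if (pointer == storage_.address())
              in_use_ = false;
            else
              ::operator delete(pointer);
          }

        private:
          boost::aligned_storage<1024> storage_;  // reused for each operation
          bool in_use_;
        };

    As for re-dispatching handle_read() after close(): one option is to post the handler yourself with an error code, e.g. io_service.post(boost::bind(&session::handle_read, this, boost::asio::error::operation_aborted, 0)).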

    Read the article

  • C#: The input stream is not a valid binary format.

    - by Mcoroklo
    I have a problem with deserializing in C#/ASP.NET, which gives this exact error:

        The input stream is not a valid binary format. The starting contents (in bytes) are: 41-41-45-41-41-41-44-2F-2F-2F-2F-2F-41-51-41-41-41 ...

    What I am trying to do: I have a structure with 3 classes: a base class A, and classes B and C derived from A. I am trying to store random instances of B and C in the database using LINQ to SQL, in a column of type VARCHAR(MAX). I cannot use BINARY as the length is around 15,000.

    My code (the error is in the LAST block). Business layer, storing a record:

        private void AddTraceToDatabase(FightTrace trace)
        {
            MemoryStream recieverStream = new MemoryStream();
            MemoryStream firedStream = new MemoryStream();
            MemoryStream moveStream = new MemoryStream();
            BinaryFormatter binaryFormatter = new BinaryFormatter();
            binaryFormatter.Serialize(recieverStream, trace.Reciever);
            binaryFormatter.Serialize(firedStream, trace.FiredBy);
            binaryFormatter.Serialize(moveStream, trace.Move);
            string reciever = Convert.ToBase64String(recieverStream.ToArray());
            string fired = Convert.ToBase64String(firedStream.ToArray());
            string move = Convert.ToBase64String(moveStream.ToArray());
            this.dataAccess.AddFightTrace(trace.TraceType.ToString(), reciever, move, fired, trace.DateTime, this.FightId);
        }

    Data access layer, storing a record:

        public void AddFightTrace(string type, string reciever, string Move, string firedBy, DateTime firedAt, int fightid)
        {
            GameDataContext db = new GameDataContext();
            dbFightTrace trace = new dbFightTrace();
            trace.TraceType = type;
            trace.Reciever = reciever;
            trace.Move = Move;
            trace.FiredBy = firedBy;
            trace.FiredAt = firedAt;
            trace.FightId = fightid;
            db.dbFightTraces.InsertOnSubmit(trace);
            db.SubmitChanges();
        }

    Getting the entry from the database:

        public List<dbFightTrace> GetNewTraces(int fightid, DateTime lastUpdate)
        {
            GameDataContext db = new GameDataContext();
            var data = from d in db.dbFightTraces
                       where d.FightId == fightid && d.FiredAt > lastUpdate
                       select d;
            return data.ToList();
        }

    Factory, converting from the LINQ to SQL class to my objects. THIS IS WHERE THE ERROR COMES:

        public FightTrace CreateTrace(dbFightTrace trace)
        {
            TraceType traceType = (TraceType) Enum.Parse(typeof(TraceType), trace.TraceType);
            BinaryFormatter formatter = new BinaryFormatter();
            System.Text.UTF8Encoding enc = new System.Text.UTF8Encoding();
            MemoryStream recieverStream = new MemoryStream(enc.GetBytes(trace.Reciever));
            recieverStream.Position = 0;
            MemoryStream firedStream = new MemoryStream(enc.GetBytes(trace.FiredBy));
            firedStream.Position = 0;
            MemoryStream movedStream = new MemoryStream(enc.GetBytes(trace.Move));
            movedStream.Position = 0;
            // THE NEXT LINE HERE CAUSES THE ERROR
            NPC reciever = formatter.Deserialize(recieverStream) as NPC;
            Player fired = formatter.Deserialize(firedStream) as Player;
            BaseAttack attack = formatter.Deserialize(movedStream) as BaseAttack;
            FightTrace t = new FightTrace(traceType, reciever, attack, fired);
            t.TraceId = trace.FightTraceId;
            t.DateTime = trace.FiredAt;
            return t;
        }

    So the error happens when the first Deserialize call runs, with the above error. I have tried several things, but I am quite lost on this one. Thanks! :-)
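
    For what it's worth, the asymmetry between the two code paths looks like the likely cause: the traces are stored with Convert.ToBase64String, but read back with UTF8Encoding.GetBytes, which returns the bytes of the Base64 text itself rather than the original serialized stream. Decoding with the matching function should fix it; a sketch of the corrected factory lines:

        // Base64 text must be decoded with FromBase64String, not with
        // an encoding's GetBytes, which would re-encode the text itself.
        MemoryStream recieverStream = new MemoryStream(Convert.FromBase64String(trace.Reciever));
        MemoryStream firedStream = new MemoryStream(Convert.FromBase64String(trace.FiredBy));
        MemoryStream movedStream = new MemoryStream(Convert.FromBase64String(trace.Move));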

    Read the article

  • Check if a pointer points to allocated memory on the heap.

    - by Ugo
    OK, I know this question seems to have been asked many times on Stack Overflow, but please read on. For an arbitrary address the answer is "no, you can't", but the question here is whether you can tell that a pointer points to a piece of memory allocated with malloc/new. I actually think this could be implemented fairly easily by overriding malloc/free and keeping track of the allocated memory ranges. Do you know of a memory-management library that provides this specific tool?
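
    A minimal, single-threaded sketch of the override-and-track idea (names hypothetical; a real implementation also has to intercept the allocator globally, e.g. via malloc hooks or link-time wrapping):

        #include <set>
        #include <cstdlib>

        // Registry of pointers returned by the tracked allocator.
        static std::set<void*> g_heap_blocks;

        void* tracked_malloc(std::size_t n) {
            void* p = std::malloc(n);
            if (p) g_heap_blocks.insert(p);
            return p;
        }

        void tracked_free(void* p) {
            g_heap_blocks.erase(p);
            std::free(p);
        }

        // Only recognizes exact block starts; answering for interior
        // pointers requires storing [start, start + size) ranges instead.
        bool points_into_heap(void* p) {
            return g_heap_blocks.count(p) != 0;
        }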

    Read the article

  • How can I extract similarities/patterns from a collection of binary strings?

    - by JohnIdol
    I have a collection of binary strings of a given size encoding effective solutions to a given problem. By looking at them, I can spot obvious similarities and intuitively see patterns of symmetry and periodicity. Are there mathematical/algorithmic tools I can "feed" this set of strings to and get results that might give me an idea of what these strings have in common? By doing so I would be able to impose a structure on candidate solutions (or at least favor some features over others) in order to greatly reduce the search space, maximizing the chances of finding optimal solutions to my problem. (I am using genetic algorithms as the search tool, but this is not pivotal to the question.) Any pointers/approaches appreciated.
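
    Two cheap starting points are per-position bit frequencies (which expose pinned or biased loci) and pairwise Hamming distances (which expose clustering); for periodicity, autocorrelating each string against shifted copies of itself is the analogous trick. A toy sketch with made-up data:

        # Toy data: equal-length bit strings standing in for the solutions.
        strings = ["110010", "110110", "100010"]

        # Per-position frequency of '1': positions pinned near 0.0 or 1.0
        # are (nearly) constant across good solutions.
        n = len(strings[0])
        freq = [sum(s[i] == "1" for s in strings) / len(strings) for i in range(n)]

        # Pairwise Hamming distances: consistently small values suggest the
        # solutions cluster around a common template.
        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        dists = [hamming(a, b) for i, a in enumerate(strings) for b in strings[i + 1:]]
        print(freq, dists)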

    Read the article

  • Sending binary data with Indy over TCP/IP: how?

    - by Wodzu
    Hello. How do I send binary data with the Indy components? Which of them is most suitable for this task? I've tried to use TIdTCPClient, but it only seems to let me send strings. I found one response to this problem here, but I don't get it. It mentions a Write(TIdBytes) method, but the answer is not clear to me: does he mean writing to some instance of TIdBytes, and how do I connect that instance to the TIdTCPClient? Thanks for any help.
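
    If you're on Indy 10, the raw-byte write overloads live on the connection's IOHandler rather than on the client component itself; "Write(TIdBytes)" refers to one of those. A hedged sketch (component name hypothetical, error handling omitted):

        var
          Data: TIdBytes;
        begin
          SetLength(Data, 3);
          Data[0] := $01;
          Data[1] := $7F;
          Data[2] := $FF;
          IdTCPClient1.Connect;
          // The binary overloads (TIdBytes, TStream) are on the IOHandler:
          IdTCPClient1.IOHandler.Write(Data);
        end;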

    Read the article

  • How to find a binary logarithm very fast? (O(1) at best)

    - by psihodelia
    Is there a very fast method to find the binary logarithm of an integer? For example, given the number x=52656145834278593348959013841835216159447547700274555627155488768 such an algorithm must find y=log(x,2), which is 215. x is always a power of 2. The problem seems really simple: all that is required is to find the position of the most significant 1 bit. There is the well-known FloorLog method, but it is not very fast, especially for very long multi-word integers. What is the fastest method?
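
    One practical note for arbitrary-precision integers: CPython already stores the number's digit count, so int.bit_length() (Python 2.7+/3.x) answers this without scanning every bit; for an exact power of two the binary log is bit_length() - 1. For machine-word integers, compilers expose count-leading-zeros intrinsics (e.g. GCC's __builtin_clz). A sketch:

        # x is an exact power of two, so log2(x) is the index of its single
        # set bit; bit_length() reads the stored size rather than walking
        # all 215 bits.
        x = 52656145834278593348959013841835216159447547700274555627155488768
        y = x.bit_length() - 1
        assert 1 << y == x
        print(y)  # 215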

    Read the article

  • Can I use Ruby rest-client to POST a binary file to an HTTP API?

    - by Angela
    I have been using rest-client in Ruby to POST XML to a third-party API. I need to be able to include a binary image that's uploaded. How do I do that? The API documentation says:

    "Uploading Attachments. Both letters and postcards will, in most cases, require the attachment of documents. Those attachments might be PDFs in the case of letters or images in the case of postcards. To upload an attachment, submit a POST to http://www.postful.com/service/upload. Be sure to include the Content-Type and Content-Length headers and the attachment itself as the body of the request."

        POST /upload HTTP/1.0
        Content-Type: application/octet-stream
        Content-Length: 301456

        ... file content here ...

    "If the upload is successful, you will receive a response like the following:"

        290797321.waltershandy.2
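
    rest-client lets you pass the raw bytes as the payload and the headers as the third argument; Content-Length is derived from the payload. A hedged sketch against the endpoint quoted above (filename hypothetical; older gem versions use require 'rest_client'):

        require 'rest-client'

        # Raw bytes as the request body, not form-encoded fields.
        payload = File.open('postcard.jpg', 'rb') { |f| f.read }

        response = RestClient.post(
          'http://www.postful.com/service/upload',
          payload,
          :content_type => 'application/octet-stream'
        )
        puts response.body  # e.g. "290797321.waltershandy.2" on success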

    Read the article

  • Why isn't "String or Binary data would be truncated" a more descriptive error?

    - by rwmnau
    To start: I understand what this error means; I'm not attempting to resolve an instance of it. This error is notoriously difficult to troubleshoot because, if you get it while inserting a million rows into a table 100 columns wide, there's virtually no way to determine which column of which row is causing it. You have to modify your process to insert one row at a time and then see which one fails. That's a pain, to put it mildly. Is there any reason the error doesn't look more like this?

        String or Binary data would be truncated
        Error inserting value "Some 18 char value" into SomeTable.SomeColumn VARCHAR(10)

    That would make it a lot easier to find and correct the offending value, if not the table structure itself. If seeing the table data is a security concern, then maybe something generic, like giving the length of the attempted value and the name of the failing column?

    Read the article

  • "tailing" a binary file based on string location using bash?

    - by ilitirit
    I've got a bunch of binary files, each containing an embedded string near the end of the file, but at a different place in each (it occurs only once per file). I need to extract the part of the file starting at the location of the string through the end of the file and dump it into a new file. E.g., if the file's contents are "AWREDEDEDEXXXERESSDSDS" and the string of interest is "XXX", then the part of the file I need is "XXXERESSDSDS". What's the easiest way to do this in bash?
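
    One way that stays in ordinary shell tools: grep can report the byte offset of a match even in binary files, and tail can print from a byte position. A sketch (filenames hypothetical):

        # -a: treat binary as text, -b: print byte offset, -o: offset of the
        # match itself (not the line start), -F: fixed string, not a regex.
        offset=$(grep -aboF 'XXX' file.bin | head -n1 | cut -d: -f1)

        # tail -c +N is 1-based, so start at offset+1 to include the match.
        tail -c +"$((offset + 1))" file.bin > tail.bin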

    Read the article

  • iTunes Connect rejects my binary because I used a pre-release version of the SDK, what should I do?

    - by Prairiedogg
    NOTE - I'm posting this question to answer it myself, as a service to the community. Stay tuned for the answer. I downloaded a pre-release version of the iPhone SDK and tried to update one of my existing apps using a binary I built with it. Obviously you are not supposed to do this, but I had forgotten about the warning when I installed the pre-release SDK. Anyway, I have two questions: (1) Can I simply set the Base SDK to an earlier version in the build settings and get around this problem? (2) If not, what should I do?

    Read the article

  • Is it possible to put binary image data into HTML markup and then get the image displayed as usual in the browser?

    - by Joern Akkermann
    It's an important security issue, and I'm sure this should be possible. A simple example: you run a community portal. Users are registered and upload their pictures. Your application applies security rules governing when a picture is allowed to be displayed; for example, two users must be mutual friends in the system before one can view the other's uploaded pictures. Here comes the problem: it is possible that someone crawls the image directories of your server, and you want to protect your users from such attacks. If it were possible to put the binary data of an image directly into the HTML markup, you could restrict filesystem access to your image dirs to the user and group your web application runs as, and serve the image data directly in the HTML. The only remaining weakness would then be the password of the user your web app runs as. Is there already a way to do this?
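
    This is what the data: URI scheme provides: the image bytes are base64-encoded straight into the src attribute, so the image directory can stay outside the web root entirely. (Era-appropriate caveat: Internet Explorer only gained data-URI support in IE8, with a 32 KB size cap.) A sketch of generating such a tag (filename hypothetical):

        import base64

        # Read the access-controlled image and embed it in the markup itself.
        with open("photo.jpg", "rb") as f:
            b64 = base64.b64encode(f.read()).decode("ascii")

        img_tag = '<img src="data:image/jpeg;base64,%s" alt="photo">' % b64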

    Read the article

  • What could prevent a binary compiled on one Linux platform from running on another?

    - by yves Baumes
    We have two different build machines: Red Hat AS4 and AS5. Our architects require us developers to compile our programs on both platforms each time, before copying them to their respective machines in production. What would prevent us from compiling our application on one machine only (say, the Red Hat AS4 box) and deploying that binary to all target platforms instead? What is your point of view, and could you pinpoint specific issues you've encountered with doing so? What issues might I face in doing so?

    Read the article

  • How to upload binary (audio) data from a Flash AS3 client to a .NET server (WCF/REST/HTTP/?)?

    - by Bobby
    Simply stated: I'm trying to record audio in a browser and get that data back up to the server. I originally tried to capture, encode, and upload the audio using Silverlight, but because of the lack of suitable client-side encoding options, I'm now giving Flash a shot (Flash has baked-in support for encoding to Speex). I think I've figured out how to capture and encode the audio... but now what was easy in Silverlight is the challenge in Flash. My server side is .NET (MVC2), and I'm open to receiving the audio in whatever manner is best: REST, WCF... So that's my question: how can one upload binary data from Flash to a .NET server-side endpoint? If the answer is WCF, how would one set up the client-side proxies to communicate with the service? If the answer is REST or HTTP POST, how would one construct the HTTP request and pass along the data? I've been reading up on AS3 but am new to Flash dev... Thanks for any help!
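
    On the Flash side, setting a ByteArray as URLRequest.data with the POST method sends the bytes as the raw request body. A hedged AS3 sketch (endpoint URL hypothetical):

        import flash.net.URLLoader;
        import flash.net.URLRequest;
        import flash.net.URLRequestMethod;
        import flash.utils.ByteArray;

        function uploadAudio(speexBytes:ByteArray):void {
            var req:URLRequest = new URLRequest("http://example.com/audio/upload");
            req.method = URLRequestMethod.POST;
            req.contentType = "application/octet-stream";
            req.data = speexBytes; // raw bytes become the POST body
            new URLLoader().load(req);
        }

    On the MVC2 side, a plain controller action can then read the raw body from Request.InputStream; no WCF proxy generation is needed for that route.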

    Read the article

  • Python - converting wide-char strings from a binary file to Python unicode strings...

    - by Mikesname
    It's been a long day and I'm a bit stumped. I'm reading a binary file that contains lots of wide-char strings, and I want to dump these out as Python unicode strings. (To unpack the non-string data I'm using the struct module, but I don't know how to do the same with the strings.) For example, reading the word "Series":

        myfile = open("test.lei", "rb")
        myfile.seek(44)
        data = myfile.read(12)
        # data is now 'S\x00e\x00r\x00i\x00e\x00s\x00'

    How can I decode that raw wide-char data into a Python unicode string? Edit: I'm using Python 2.6.
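
    Those bytes are UTF-16 little-endian, so no struct gymnastics are needed; str.decode handles it directly. A sketch (Python 2.6, matching the question):

        # Each character is two bytes, low byte first: UTF-16-LE.
        data = 'S\x00e\x00r\x00i\x00e\x00s\x00'
        text = data.decode('utf-16-le')
        assert text == u'Series'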

    Read the article

  • Delphi: how to efficiently read a big binary file, converting it to hexadecimal for passing it as a varbinary(max) parameter?

    - by user193655
    I need to convert a binary file (a zip file) into a hexadecimal representation, to then send it to SQL Server as a varbinary(max) function parameter. A full example (using a very small file!) is:

    1) my file contains the following bits: 0000111100001111
    2) I need a procedure to QUICKLY convert it to 0F0F
    3) I will call a SQL Server function passing 0x0F0F as the parameter

    The problem is that I have large files (up to 100 MB, even if the average file size is 100 KB), so I need the fastest way to do this. Otherwise stated: I need to create the string '0x' + BinaryDataInHexadecimalRepresentation in the most efficient way. Related question: passing hexadecimal data to SQL Server.
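
    A hedged sketch using the RTL's BinToHex from the Classes unit, which expands each input byte into two hex characters in a single pass and so avoids per-byte string concatenation. Written for pre-Unicode Delphi; the routine's declaration varies between Delphi versions, so treat the casts as an assumption to verify:

        uses Classes;

        function FileToHexParam(const FileName: string): string;
        var
          Stream: TMemoryStream;
        begin
          Stream := TMemoryStream.Create;
          try
            Stream.LoadFromFile(FileName);
            // Two hex characters per input byte.
            SetLength(Result, Stream.Size * 2);
            BinToHex(PChar(Stream.Memory), PChar(Result), Stream.Size);
            Result := '0x' + Result;
          finally
            Stream.Free;
          end;
        end;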

    Read the article

  • Should I include the binary in a Rails plugin or not?

    - by Nick Gorbikoff
    Hello. I'm trying to roll out a little Rails plugin that is basically just a wrapper around the 7zip archiver. Should I include the 7zip binaries for Windows, Mac, and Linux with it, or explain to users that it's a dependency they need to get working themselves? I know it's not that difficult to install 7zip, but what is the best practice in general? The reason I'm asking is that I've run into so many gems that need some dependency that doesn't compile properly or isn't available in ready form for the OS in question, and then I end up spending half a day hunting down a binary or a way to compile the program. (It has happened to me on both Mac and Windows; not on Debian so far.)

    Read the article

  • How to load an RSA key from binary data to an RSA structure using the OpenSSL C Library?

    - by Andreas Bonini
    Currently I have my private key saved in a file, private.key, and I use the following call to load it:

        RSA *r = PEM_read_RSAPrivateKey("private.key", NULL, NULL, NULL);

    This works perfectly, but I'm not happy with the file-based format; I want to save my key in pure binary form (i.e., no base64 or similar) in a char* variable, and load/save the key from/to it. That way I have much more freedom: I'll be able to store the key directly in the application (const char key[] = { 0x01, 0x02, ... };), send it over a network socket, and so on. Unfortunately, I haven't found a way to do that. The only way I know of to save and load a key reads/saves it to a file directly.
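
    The pure binary form being described is DER, and OpenSSL parses it straight from memory with d2i_RSAPrivateKey (the write direction is i2d_RSAPrivateKey). One quirk worth a comment: the call advances the pointer it is given, so pass a throwaway copy. A sketch:

        #include <openssl/rsa.h>

        /* Parse an RSA private key from a DER-encoded buffer in memory.
         * Returns NULL on parse error. */
        RSA *load_key_from_memory(const unsigned char *der, long der_len)
        {
            const unsigned char *p = der;   /* d2i_* advances this copy */
            return d2i_RSAPrivateKey(NULL, &p, der_len);
        }

    Converting the existing PEM file once is a matter of: openssl rsa -in private.key -outform DER -out private.der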

    Read the article

  • I'm getting this error in iTunes Connect: The binary you uploaded was invalid. The signature was invalid.

    - by Joshua
    I went through the dev portal provisioning process twice now, trying to get it to work, but to no avail. I don't think it's the second half (signature is invalid); I think it may actually have to do with my binary. I have a warning in Xcode that isn't helping me, because I don't know what to do about it, and honestly I don't know how relevant this information even is. But it says:

        Check Dependencies: Warning: The Copy Bundle Resources build phase contains the target's Info.plist

    The app runs perfectly in the simulator, and I haven't made any changes to the Info.plist since I submitted the app to Apple last week (this is an update). Any suggestions?

    Read the article

  • Is it possible to include a binary in my project and use it?

    - by Adam S
    I have a binary that's used as a command-line tool to manipulate some files: tool.exe. I would like to include this in my Visual Studio 2008 project and use it from my C# code. I have it in a folder called "Resources", which also holds some other files my project uses. I would like to do something like:

        Process myproc = Process.Start("Resources/tool.exe");

    but I believe C# has an issue with this, because it's looking in the file system rather than the project. How can this be done?
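
    A common approach: mark tool.exe's Build Action as Content with "Copy to Output Directory" set, then resolve it against the executable's own folder instead of the process working directory. A sketch:

        using System.Diagnostics;
        using System.IO;

        // BaseDirectory is where the compiled app (and the copied Resources
        // folder) actually live, regardless of the working directory the
        // process was started from.
        string toolPath = Path.Combine(
            AppDomain.CurrentDomain.BaseDirectory, @"Resources\tool.exe");
        Process myproc = Process.Start(toolPath);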

    Read the article
