Search Results

Search found 3117 results on 125 pages for 'buffer'.

Page 112/125

  • Information about PTE's (Page Table Entries) in Windows

    - by Patrick
    In order to find buffer overflows more easily, I am changing our custom memory allocator so that it allocates a full 4KB page instead of only the wanted number of bytes. Then I change the page protection and size so that if the caller writes before or after its allocated piece of memory, the application immediately crashes. The problem is that although I have enough memory, the application never starts up completely because it runs out of memory. This has two causes:

    - Since every allocation needs 4 KB, we probably reach the 2 GB limit very soon. This problem could be solved if I made a 64-bit executable (didn't try it yet).
    - Even when I only need a few hundred megabytes, the allocations fail at a certain moment.

    The second problem is the biggest one, and I think it's related to the maximum number of PTEs (page table entries, which store information on how virtual memory is mapped to physical memory, and whether pages should be read-only or not) you can have in a process.

    My questions (or a cry for tips): Where can I find information about the maximum number of PTEs in a process? Is this different (higher) for 64-bit systems/applications or not? Can the number of PTEs be configured in the application or in Windows?

    Thanks, Patrick

    PS. A note for those who will try to argue that you shouldn't write your own memory manager, as explained in the sketch after this list:

    - My application is rather specific, so I really want full control over memory management (can't give any more details).
    - Last week we had a memory overwrite which we couldn't find using the standard C++ allocator and the debugging functionality of the C/C++ run time (it only said "block corrupt" minutes after the actual corruption).
    - We also tried standard Windows utilities (like GFLAGS, ...), but they slowed down the application by a factor of 100 and couldn't find the exact position of the overwrite either.
    - We also tried the "Full Page Heap" functionality of Application Verifier, but then the application doesn't start up either (probably also running out of PTEs).
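
    For readers unfamiliar with the technique described above, here is a minimal sketch of a guard-page allocator using raw Win32 calls. guarded_alloc and guarded_free are illustrative names, not the poster's allocator; it also shows why every allocation consumes at least two pages of address space (and the page-table entries that map them).

        #include <windows.h>
        #include <cstddef>

        // Each allocation gets its own data pages plus a trailing PAGE_NOACCESS page,
        // so an overrun past the end faults immediately.
        void* guarded_alloc(std::size_t size)
        {
            SYSTEM_INFO si;
            GetSystemInfo(&si);
            const std::size_t page = si.dwPageSize;
            const std::size_t data_pages = (size + page - 1) / page;

            // Commit the data pages plus one extra page that stays inaccessible.
            char* base = static_cast<char*>(VirtualAlloc(NULL, (data_pages + 1) * page,
                                                         MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE));
            if (!base)
                return NULL;

            DWORD old;
            VirtualProtect(base + data_pages * page, page, PAGE_NOACCESS, &old); // guard page

            // Place the block at the end of the data pages so overruns hit the guard page.
            // (Alignment of the returned pointer is ignored in this sketch.)
            return base + data_pages * page - size;
        }

        void guarded_free(void* p, std::size_t size)
        {
            SYSTEM_INFO si;
            GetSystemInfo(&si);
            const std::size_t page = si.dwPageSize;
            const std::size_t data_pages = (size + page - 1) / page;
            char* base = static_cast<char*>(p) - (data_pages * page - size);
            VirtualFree(base, 0, MEM_RELEASE);
        }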

    Read the article

  • waveInProc / Windows audio question...

    - by BTR
    I'm using the Windows API to get audio input. I've followed all the steps on MSDN and managed to record audio to a WAV file. No problem. I'm using multiple buffers and all that. I'd like to do more with the buffers than simply write to a file, so now I've got a callback set up. It works great and I'm getting the data, but I'm not sure what to do with it once I have it. Here's my callback; everything here works:

        // Media API callback
        void CALLBACK AudioRecorder::waveInProc(HWAVEIN hWaveIn, UINT uMsg, DWORD dwInstance, DWORD dwParam1, DWORD dwParam2)
        {
            // Data received
            if (uMsg == WIM_DATA)
            {
                // Get wav header
                LPWAVEHDR mBuffer = (WAVEHDR *)dwParam1;

                // Now what?
                for (unsigned i = 0; i != mBuffer->dwBytesRecorded; ++i)
                {
                    // I can see the chars; how do I get them into my file and audio buffers?
                    cout << mBuffer->lpData[i] << "\n";
                }

                // Re-use buffer
                mResultHnd = waveInAddBuffer(hWaveIn, mBuffer, sizeof(mInputBuffer[0])); // mInputBuffer is a const WAVEHDR *
            }
        }

        // waveInOpen cannot use an instance method as its callback,
        // so we create a static method which calls the instance version
        void CALLBACK AudioRecorder::staticWaveInProc(HWAVEIN hWaveIn, UINT uMsg, DWORD_PTR dwInstance, DWORD_PTR dwParam1, DWORD_PTR dwParam2)
        {
            // Call instance version of method
            reinterpret_cast<AudioRecorder *>(dwParam1)->waveInProc(hWaveIn, uMsg, dwInstance, dwParam1, dwParam2);
        }

    Like I said, it works great, but I'm trying to do the following:

    - Convert the data to short and copy it into an array
    - Convert the data to float and copy it into an array
    - Copy the data to a larger char array which I'll write into a WAV
    - Relay the data to an arbitrary output device

    I've worked with FMOD a lot and I'm familiar with interleaving and all that. But FMOD dishes everything out as floats. In this case, I'm going the other way. I guess I'm basically just looking for resources on how to go from LPSTR to short, float, and unsigned char. Thanks much in advance!
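
    A minimal sketch of the conversions being asked about, assuming the device was opened for 16-bit PCM so that lpData holds packed shorts; convert_buffer and the vector parameters are illustrative names, not part of the waveIn API.

        #include <windows.h>
        #include <mmsystem.h>
        #include <vector>
        #include <cstddef>

        void convert_buffer(const WAVEHDR* hdr,
                            std::vector<short>& shorts,
                            std::vector<float>& floats,
                            std::vector<unsigned char>& raw)
        {
            const std::size_t nBytes   = hdr->dwBytesRecorded;
            const std::size_t nSamples = nBytes / sizeof(short);
            const short* samples = reinterpret_cast<const short*>(hdr->lpData);

            // 1) signed 16-bit samples as-is
            shorts.assign(samples, samples + nSamples);

            // 2) normalized floats, roughly in [-1.0, 1.0)
            floats.resize(nSamples);
            for (std::size_t i = 0; i < nSamples; ++i)
                floats[i] = samples[i] / 32768.0f;

            // 3) raw bytes, ready to append to a WAV data chunk or send elsewhere
            raw.insert(raw.end(),
                       reinterpret_cast<const unsigned char*>(hdr->lpData),
                       reinterpret_cast<const unsigned char*>(hdr->lpData) + nBytes);
        }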

    Read the article

  • Generic that takes only numeric types (int double etc)?

    - by brandon
    In a program I'm working on, I need to write a function to take any numeric type (int, short, long etc.) and shove it into a byte array at a specific offset. There exists a BitConverter.GetBytes() method that takes the numeric type and returns it as a byte array, and this method only takes numeric types. So far I have:

        private void AddToByteArray<T>(byte[] destination, int offset, T toAdd) where T : struct
        {
            Buffer.BlockCopy(BitConverter.GetBytes(toAdd), 0, destination, offset, sizeof(toAdd));
        }

    So basically my goal is that, for example, a call to AddToByteArray(array, 3, (short)10) would take 10 and store it in the 4th slot of array. The explicit cast exists because I know exactly how many bytes I want it to take up. There are cases where I would want a number that is small enough to be a short to really take up 4 bytes. On the flip side, there are times when I want an int to be crunched down to just a single byte. I'm doing this to create a custom network packet, if that makes any ideas pop into your heads. If the where clause of a generic supported something like "where T : int || long || etc" I would be OK. (And no need to explain why they don't support that; the reason is fairly obvious.) Any help would be greatly appreciated! Edit: I realize that I could just do a bunch of overloads, one for each type I want to support... but I'm asking this question because I want to avoid precisely that :)

    Read the article

  • Python file iterator over a binary file with newer idiom.

    - by drewk
    In Python, for a binary file, I can write this:

        buf_size = 1024*64   # this is an important size...
        with open(file, "rb") as f:
            while True:
                data = f.read(buf_size)
                if not data:
                    break
                # deal with the data....

    With a text file that I want to read line-by-line, I can write this:

        with open(file, "r") as file:
            for line in file:
                # deal with each line....

    Which is shorthand for:

        with open(file, "r") as file:
            for line in iter(file.readline, ""):
                # deal with each line....

    This idiom is documented in PEP 234, but I have failed to locate a similar idiom for binary files. I have tried this:

        >>> with open('dups.txt','rb') as f:
        ...     for chunk in iter(f.read, ''):
        ...         i += 1
        >>> i
        1    # 30 MB file, i == 1 means it was read in one go...

    I tried putting iter(f.read(buf_size), '') but that is a syntax error because of the parens after the callable in iter(). I know I could write a function, but is there a way with the default idiom of "for chunk in file:" where I can use a buffer size rather than being line-oriented? Thanks for putting up with the Python newbie trying to write his first non-trivial and idiomatic Python script.

    Read the article

  • How do I redirect standard output to a file in Perl? [closed]

    - by rockyurock
    I want to send standard output to the file "my_output.txt" but it fails. Here's the output:

        inside value loop
        ------------------------------------------------------------
        Server listening on UDP port 5001
        Receiving 1470 byte datagrams
        UDP buffer size: 108 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.16.2 port 5001 connected with 192.168.16.1 port 3189
        [ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
        [  3]  0.0- 5.0 sec  2.14 MBytes  3.61 Mbits/sec  0.369 ms    0/ 1528 (0%)
        inside value loop3
        clue1 clue2
        inside value loop4
        one iperf completed
        ***************************************

    When I enable the "local *STDOUT;" line in the code below, I can see the above output on the command prompt display (of course, the server is sending some data):

        my $file = 'my_output.txt';
        use Win32::Process;

        print "inside value loop\n";

        # redirect stdout to a file
        #local *STDOUT;
        open STDOUT, '>', $file or die "can't redirect STDOUT to <$file> $!";

        Win32::Process::Create(my $ProcessObj,
            "D:\\IOT_AUTOMATION_UTILITY\\_SATURDAY_09-04-10\\adb_cmd.bat",
            "adb shell /data/app/iperf -u -s -p 5001",
            0, NORMAL_PRIORITY_CLASS, ".") || die ErrorReport();

        #$alarm_time = $IPERF_RUN_TIME+10; #20sec
        #$ProcessObj->Wait(40);
        #print"inside value loop2\n";
        #sleep $alarm_time;
        sleep 40;
        $ProcessObj->Kill(0);

        sub ErrorReport{
            print Win32::FormatMessage( Win32::GetLastError() );
        }

    Read the article

  • How to get a debug flow of execution in C++

    - by Rich
    Hi, I work on a global trading system which supports many users. Each user can book, amend, edit and delete trades. The system is regulated by a central deal capture service. The deal capture service informs all the users of any updates that occur. The problem comes when we have crashes: as the production environment is impossible to re-create on a test system, I have to rely on crash dumps and log files. However, this doesn't tell me what the user has been doing. I'd like a system that would (at the time of crashing) dump out a history of what the user has been doing. Anything that I add has to go into the live environment, so it can't impact performance too much. Idea-wise, I was thinking of a MACRO at the top of each function which acted like a stack trace (only I could supply additional user information, like trade ids, user dialog choices, etc.). The system would record stack traces (on a per-thread basis) and keep a history in a cyclic buffer (varying in size, depending on how much history you wanted to capture). Then on crash, I could dump this history stack. I'd really like to hear if anyone has a better solution, or if anyone knows of an existing framework? Thanks, Rich
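
    One possible shape for the macro-based history described above: a per-thread ring buffer of scope entries filled by an RAII guard, dumped by whatever crash handler is installed. All names here (TraceRing, ScopeTrace, TRACE_SCOPE) are illustrative, not an existing framework, and thread_local assumes a C++11-capable compiler.

        #include <array>
        #include <cstdio>
        #include <string>

        // Fixed-size per-thread history of the most recent scopes entered.
        struct TraceRing {
            static const std::size_t N = 256;
            std::array<std::string, N> entries;
            std::size_t next = 0;

            void push(const std::string& s) { entries[next % N] = s; ++next; }

            void dump(std::FILE* out) const {
                const std::size_t count = next < N ? next : N;
                for (std::size_t i = 0; i < count; ++i)          // oldest entry first
                    std::fprintf(out, "%s\n", entries[(next - count + i) % N].c_str());
            }
        };

        inline TraceRing& traceRing() {
            static thread_local TraceRing ring;                  // one history per thread
            return ring;
        }

        // RAII guard: records one entry when an instrumented function is entered.
        struct ScopeTrace {
            explicit ScopeTrace(const std::string& msg) { traceRing().push(msg); }
        };

        // Placed at the top of each function; extra context (trade id, dialog choice) goes in the string.
        #define TRACE_SCOPE(extra) ScopeTrace traceGuard_(std::string(__FUNCTION__) + ": " + (extra))

    The crash/exception handler would then call traceRing().dump(...) for the faulting thread; the cost per call is one string append into a preallocated slot, which keeps the overhead closer to a log line than to a debugger.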

    Read the article

  • Why can't I get all UDP packets?

    - by Jack
    My program uses UdpClient to try to receive 27 responses from 27 hosts. The size of each response is 10KB. My broadband incoming bandwidth is 150KB/s. The 27 responses are sent from the hosts almost at the same time, every 10 seconds. However, I can only receive 8 - 17 responses each time. The number of responses that I can receive is quite dynamic, but within that range. Can anyone tell me why? Why can't I receive all of them? I understand UDP is not reliable, but I tried receiving 5 - 10 responses at the same time and it worked, so I guess the network links are not so bad. The code is very simple. On the 27 hosts, I just use UdpClient to send 10KB to my machine. On my machine, I have one UdpClient receiving datagrams. Each time I get a datagram, I create a thread to handle it (basically handling it means just printing out "I received 10KB", but it runs in a thread).

        listener = new UDPListener(Port);
        listener.Start();
        while (true)
        {
            try
            {
                UDPContext context = listener.Accept();
                ThreadPool.QueueUserWorkItem(new WaitCallback(HandleMessage), context);
            }
            catch (Exception) { }
        }

    If I reduce the size of the response down to 3KB, things get much better: roughly 25 responses can be received. Any more ideas? UDP buffer problems???

    Read the article

  • Convenient way to do "wrong way rebase" in git?

    - by Kaz
    I want to pull in newer commits from master into topic, but not in such a way that topic changes are replayed on top of master, but rather vice versa: I want the new changes from master to be played on top of topic, and the result to be installed as the new topic head. I can get exactly the right object if I rebase master to topic, the only problem being that the object is installed as the new head of master rather than topic. Is there some nice way to do this without manually shuffling around temporary head pointers? Edit: Here is how it can be achieved using a temporary branch head, but it's clumsy:

        git checkout master
        git checkout -b temp    # temp points to master
        git rebase topic        # topic is brought into temp, temp changes played on top

    Now we have the object we want, and it's pointed at by temp.

        git checkout topic
        git reset --hard temp

    Now topic has it; and all that is left is to tidy up by deleting temp:

        git branch -d temp

    Another way is to do away with temp and just rebase master, and then reset topic to master. Finally, reset master back to what it was by pulling its old head from the reflog, or a cut-and-paste buffer.

    Read the article

  • Splitting Code into Headers/Source files

    - by cam
    I took the following code from the examples page on Asio:

        class tcp_connection : public boost::enable_shared_from_this<tcp_connection>
        {
        public:
            typedef boost::shared_ptr<tcp_connection> pointer;

            static pointer create(boost::asio::io_service& io_service)
            {
                return pointer(new tcp_connection(io_service));
            }

            tcp::socket& socket()
            {
                return socket_;
            }

            void start()
            {
                message_ = make_daytime_string();
                boost::asio::async_write(socket_, boost::asio::buffer(message_),
                    boost::bind(&tcp_connection::handle_write, shared_from_this(),
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
            }

        private:
            tcp_connection(boost::asio::io_service& io_service)
                : socket_(io_service)
            {
            }

            void handle_write(const boost::system::error_code& /*error*/, size_t /*bytes_transferred*/)
            {
            }

            tcp::socket socket_;
            std::string message_;
        };

    I'm relatively new to C++ (from a C# background), and from what I understand, most people would split this into header and source files (declaration/implementation, respectively). Is there any reason I can't just leave it in the header file if I'm going to use it across many source files? If so, are there any tools that will automatically convert it to declaration/implementation for me? Can someone show me what this would look like split into header/source file for an example (or just part of it, anyway)? I get confused around weird stuff like this:

        typedef boost::shared_ptr<tcp_connection> pointer;

    Do I include this in the header or the source? Same with:

        tcp::socket& socket()

    I've read many tutorials, but this has always been something that has confused me about C++.
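
    As an illustration of the split being asked about (one reasonable layout, not the only one): typedefs and trivial inline accessors can stay in the header, while the larger member functions move to the .cpp. make_daytime_string() is assumed to be provided elsewhere, as in the Asio daytime example.

        // tcp_connection.h
        #ifndef TCP_CONNECTION_H
        #define TCP_CONNECTION_H

        #include <string>
        #include <boost/asio.hpp>
        #include <boost/enable_shared_from_this.hpp>
        #include <boost/shared_ptr.hpp>

        using boost::asio::ip::tcp;

        std::string make_daytime_string();   // defined elsewhere, as in the Asio example

        class tcp_connection : public boost::enable_shared_from_this<tcp_connection>
        {
        public:
            typedef boost::shared_ptr<tcp_connection> pointer;   // typedefs belong in the header

            static pointer create(boost::asio::io_service& io_service);

            tcp::socket& socket() { return socket_; }            // trivial accessors may stay inline

            void start();

        private:
            explicit tcp_connection(boost::asio::io_service& io_service);

            void handle_write(const boost::system::error_code& error, size_t bytes_transferred);

            tcp::socket socket_;
            std::string message_;
        };

        #endif

        // tcp_connection.cpp
        #include "tcp_connection.h"
        #include <boost/bind.hpp>

        tcp_connection::pointer tcp_connection::create(boost::asio::io_service& io_service)
        {
            return pointer(new tcp_connection(io_service));
        }

        tcp_connection::tcp_connection(boost::asio::io_service& io_service)
            : socket_(io_service)
        {
        }

        void tcp_connection::start()
        {
            message_ = make_daytime_string();
            boost::asio::async_write(socket_, boost::asio::buffer(message_),
                boost::bind(&tcp_connection::handle_write, shared_from_this(),
                            boost::asio::placeholders::error,
                            boost::asio::placeholders::bytes_transferred));
        }

        void tcp_connection::handle_write(const boost::system::error_code& /*error*/, size_t /*bytes_transferred*/)
        {
        }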

    Read the article

  • JavaCard monitoring folder

    - by GxG
    I want to write a two-way application: an applet for JavaCard and an application in C#. I've got the C# side covered, but I want to know whether, with JavaCard, I can monitor a folder on the memory, and how I would go about doing that. I have a shared folder, let's call it temp, in which I want to store buffer information between the simulated smartcard and the C# application. The C# application will only read from that folder and display the information, but it will also write requests towards the smartcard. For example, I simulate entering the PIN for the card. The applet will write a file containing the available options, and the C# application will read that file and display those options; from the C# app I will choose an option and write a request file in the same folder. This is when the smartcard, which is monitoring that folder, will read the request and issue a response. Can I make the smartcard monitor that folder? I was thinking of using encrypted XML files for the request/response operations, but simple .txt files are good too. I am limited to using JavaCard v2.2.1, and every operation has to be encrypted/decrypted (with the ciphering I have no problem).

    Read the article

  • C++: Maybe you know this pitfall?

    - by Martijn Courteaux
    Hi, I'm developing a game. I have a header GameSystem (just methods like the game loop, no class) with two variables: int mouseX and int mouseY. These are updated in my game loop. Now I want to access them from the Game.cpp file (a class built from a header file and a source file). So, I #include "GameSystem.h" in Game.h. After doing this I get a lot of compile errors. When I remove the include it says, of course:

        Game.cpp:33: error: ‘mouseX’ was not declared in this scope
        Game.cpp:34: error: ‘mouseY’ was not declared in this scope

    where I want to access mouseX and mouseY. All my .h files have header guards, generated by Eclipse. I'm using SDL, and if I remove the lines that want to access the variables, everything compiles and runs perfectly (*). I hope you can help me... This is the error log when I #include "GameSystem.h" (all the code it refers to works, as explained by the (*)):

        In file included from ../trunk/source/domein/Game.h:14,
                         from ../trunk/source/domein/Game.cpp:8:
        ../trunk/source/domein/GameSystem.h:30: error: expected constructor, destructor, or type conversion before ‘*’ token
        ../trunk/source/domein/GameSystem.h:46: error: variable or field ‘InitGame’ declared void
        ../trunk/source/domein/GameSystem.h:46: error: ‘Game’ was not declared in this scope
        ../trunk/source/domein/GameSystem.h:46: error: ‘g’ was not declared in this scope
        ../trunk/source/domein/GameSystem.h:46: error: expected primary-expression before ‘char’
        ../trunk/source/domein/GameSystem.h:46: error: expected primary-expression before ‘bool’
        ../trunk/source/domein/FPS.h:46: warning: ‘void FPS_SleepMilliseconds(int)’ defined but not used

    This is the code which tries to access the two variables:

        SDL_Rect pointer;
        pointer.x = mouseX;
        pointer.y = mouseY;
        pointer.w = 3;
        pointer.h = 3;
        SDL_FillRect(buffer, &pointer, 0xFF0000);
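
    The usual pattern for sharing such variables (a hedged guess at the underlying problem, since the full headers aren't shown) is to declare them extern in the header and define them in exactly one source file:

        // GameSystem.h
        #ifndef GAMESYSTEM_H
        #define GAMESYSTEM_H

        extern int mouseX;   // declaration only: visible to every file that includes this header
        extern int mouseY;

        #endif

        // GameSystem.cpp
        #include "GameSystem.h"

        int mouseX = 0;      // the single definition
        int mouseY = 0;

    Defining the variables directly in a header that is included from several translation units causes multiple-definition errors at link time. Separately, the cascade of "‘Game’ was not declared in this scope" errors suggests a circular include between Game.h and GameSystem.h; a forward declaration such as class Game; in GameSystem.h, instead of including Game.h, usually resolves that.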

    Read the article

  • switch from glOrtho to gluPerspective

    - by Knitex
    I have a car drawn at (0,0) and some obstacles set up, but right now my main concern is switching from gluPerspective to glOrtho and vice-versa. All that I get when I switch from perspective to ortho is a black screen.

        void myinit(){
            glClearColor(0.0,0.0,0.0,0.0);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(60,ww/wh,1,100);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            gluLookAt(-5,5,3,backcarx,topcarx,0,0,0,1);
        }

        void menu(int id){
            /* menu selects if you want to view it in ortho or perspective */
            if(id == 1){
                glClear(GL_DEPTH_BUFFER_BIT);
                glViewport(0,0,ww,wh);
                glMatrixMode(GL_PROJECTION);
                glLoadIdentity();
                glOrtho(-2,100,-2,100,-1,1);
                glMatrixMode(GL_MODELVIEW);
                glutPostRedisplay();
            }
            if(id == 2){
                glMatrixMode(GL_PROJECTION);
                glLoadIdentity();
                gluPerspective(60,ww/wh,1,100);
                glMatrixMode(GL_MODELVIEW);
                glLoadIdentity();
                viewx = backcarx - 10;
                viewy = backcary - 10;
                gluLookAt(viewx,viewy,viewz,backcarx,topcarx,0,0,0,1);
            }
        }

    I've tried clearing the depth buffer and it still doesn't work.
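
    A sketch only, not a confirmed fix: one common cause of the black screen is that the ortho branch never resets the modelview matrix and uses a -1..1 near/far range that clips a 3D scene away. Resetting both stacks on every switch would look roughly like this; the extern declarations stand in for the globals assumed from the question's code.

        #include <GL/glut.h>

        extern int ww, wh;
        extern float viewx, viewy, viewz, backcarx, backcary, topcarx;

        void usePerspective()
        {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(60.0, (double)ww / (double)wh, 1.0, 100.0);

            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            gluLookAt(viewx, viewy, viewz, backcarx, topcarx, 0, 0, 0, 1);
        }

        void useOrtho()
        {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(-2.0, 100.0, -2.0, 100.0, -100.0, 100.0);   // wider depth range than -1..1

            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();          // the menu's ortho branch above never resets this
            glutPostRedisplay();
        }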

    Read the article

  • Qt XQuery into a QStringList

    - by Stewart
    Hi, I'm trying to use QXmlQuery to extract some data from XML. I'd like to put this into a QStringList. I try the following:

        QByteArray in = "this is where my xml lives";
        QBuffer received;
        received.setData(in);
        received.open(QIODevice::ReadOnly);

        QXmlQuery query;
        query.bindVariable("data", &received);
        query.setQuery(NAMESPACE   // contains definition of the t namespace
                       "doc($data)//t:service/t:serviceID/text()");

        QBuffer outb;
        outb.open(QIODevice::ReadWrite);
        QXmlSerializer s(query, &outb);
        query.evaluateTo(&s);
        qDebug() << "buffer" << outb.data();        // This works perfectly!

        QStringList * queryResult = new QStringList();
        query.evaluateTo(queryResult);
        qDebug() << "string list" << *queryResult;  // This gives me no output!

    As you can see, everything works fine when I send the output into a QBuffer via a QXmlSerializer... but I get nothing when I try to use a QStringList.
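
    One adjustment that is often suggested for this symptom, offered here as an assumption rather than a confirmed fix: the QStringList overload of evaluateTo() expects a sequence of atomic strings rather than text nodes, so converting inside the query itself may help. This reuses only the calls already shown above.

        query.setQuery(NAMESPACE   // same NAMESPACE prefix as above
                       "for $id in doc($data)//t:service/t:serviceID "
                       "return string($id)");

        QStringList results;
        if (query.evaluateTo(&results))
            qDebug() << "string list" << results;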

    Read the article

  • Interesting Scala typing solution, doesn't work in 2.7.7?

    - by djc
    I'm trying to build some image algebra code that can work with images (basically a linear pixel buffer + dimensions) that have different types for the pixel. To get this to work, I've defined a parametrized Pixel trait with a few methods that should be able to get used with any Pixel subclass. (For now, I'm only interested in operations that work on the same Pixel type.) Here it is:

        trait Pixel[T <: Pixel[T]] {
          def mul(v: Double): T
          def max(v: T): T
          def div(v: Double): T
          def div(v: T): T
        }

    Now I define a single Pixel type that has storage based on three doubles (basically RGB 0.0-1.0), which I've called TripleDoublePixel:

        class TripleDoublePixel(v: Array[Double]) extends Pixel[TripleDoublePixel] {
          var data: Array[Double] = v

          def this() = this(Array(0.0, 0.0, 0.0))

          def toString(): String = {
            "(" + data(0) + ", " + data(1) + ", " + data(2) + ")"
          }

          def increment(v: TripleDoublePixel) {
            data(0) += v.data(0)
            data(1) += v.data(1)
            data(2) += v.data(2)
          }

          def mul(v: Double): TripleDoublePixel = {
            new TripleDoublePixel(data.map(x => x * v))
          }

          def div(v: Double): TripleDoublePixel = {
            new TripleDoublePixel(data.map(x => x / v))
          }

          def div(v: TripleDoublePixel): TripleDoublePixel = {
            var tmp = new Array[Double](3)
            tmp(0) = data(0) / v.data(0)
            tmp(1) = data(1) / v.data(1)
            tmp(2) = data(2) / v.data(2)
            new TripleDoublePixel(tmp)
          }

          def max(v: TripleDoublePixel): TripleDoublePixel = {
            val lv = data(0) * data(0) + data(1) * data(1) + data(2) * data(2)
            val vv = v.data(0) * v.data(0) + v.data(1) * v.data(1) + v.data(2) * v.data(2)
            if (lv > vv) (this) else v
          }
        }

    Now I want to write code to use this, that doesn't have to know what type the pixels are. For example:

        def idiv[T](a: Image[T], b: Image[T]) {
          for (i <- 0 until a.data.size) {
            a.data(i) = a.data(i).div(b.data(i))
          }
        }

    Unfortunately, this doesn't compile:

        (fragment of lindet-gen.scala):145: error: value div is not a member of T
            a.data(i) = a.data(i).div(b.data(i))

    I was told in #scala that this worked for someone else, but that was on 2.8. I've tried to get 2.8-rc1 going, but it doesn't compile for me. Is there any way to get this to work in 2.7.7?

    Read the article

  • OpenGL Pixel Format Attributes (NSOpenGLPixelFormatAttributes) explanation?

    - by nacho4d
    Hi, I am not new to OpenGL, but not an expert. Many tutorials teach how to draw, 3D, 2D, projections, orthogonal, etc., but how about setting up the view? (NSOpenGLView in Cocoa, Macs.) For example I have this:

        - (id) initWithFrame: (NSRect) frame
        {
            GLuint attribs[] = {    // PF: PixelAttributes
                NSOpenGLPFANoRecovery,
                NSOpenGLPFAWindow,
                NSOpenGLPFAAccelerated,
                NSOpenGLPFADoubleBuffer,
                NSOpenGLPFAColorSize, 24,
                NSOpenGLPFAAlphaSize, 8,
                NSOpenGLPFADepthSize, 24,
                NSOpenGLPFAStencilSize, 8,
                NSOpenGLPFAAccumSize, 0,
                0
            };
            NSOpenGLPixelFormat* fmt = [[NSOpenGLPixelFormat alloc]
                initWithAttributes: (NSOpenGLPixelFormatAttribute*) attribs];
            return self = [super initWithFrame:frame pixelFormat: [fmt autorelease]];
        }

    And I don't understand their usage very well, especially when combining them. For example: if I want my view to be capable of full screen, should I write NSOpenGLPFAFullScreen only, or both? (By "capable" I mean not always in full screen.) Regarding double buffering, what is this exactly? (Below: Apple's definition.)

        If present, this attribute indicates that only double-buffered pixel formats are considered.
        Otherwise, only single-buffered pixel formats are considered.

    Regarding color: if NSOpenGLPFAColorSize is 24 and NSOpenGLPFAAlphaSize is 8, does that mean that alpha and RGB components are treated differently? What happens if I set the former to 32 and the latter to 0? Etc. In general, how do I learn to set up my view from scratch? Thanks in advance. Ignacio.

    Read the article

  • Better way to download a binary file?

    - by geoff
    I have a site where a user can download a file. Some files are extremely large (the largest being 323 MB). When I test it to try and download this file I get an out of memory exception. The only way I know to download the file is below. The reason I'm using the code below is because the URL is encoded and I can't let the user link directly to the file. Is there another way to download this file without having to read the whole thing into a byte array?

        FileStream fs = new FileStream(context.Server.MapPath(url), FileMode.Open, FileAccess.Read);
        BinaryReader br = new BinaryReader(fs);
        long numBytes = new FileInfo(context.Server.MapPath(url)).Length;
        byte[] bytes = br.ReadBytes((int) numBytes);

        string filename = Path.GetFileName(url);

        context.Response.Buffer = true;
        context.Response.Charset = "";
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.BinaryWrite(bytes);
        context.Response.Flush();
        context.Response.End();

    Read the article

  • Linear Interpolation. How to implement this algorithm in C? (Python version is given)

    - by psihodelia
    There exists one very good linear interpolation method. It performs linear interpolation requiring at most one multiply per output sample. I found its description in the third edition of Understanding DSP by Lyons. This method involves a special hold buffer. Given a number of samples to be inserted between any two input samples, it produces output points using linear interpolation. Here, I have rewritten this algorithm using Python:

        temp1, temp2 = 0, 0
        iL = 1.0 / L
        for i in x:
            hold = [i-temp1] * L
            temp1 = i
            for j in hold:
                temp2 += j
                y.append(temp2 * iL)

    where x contains the input samples, L is the number of points to be inserted, and y will contain the output samples. My question is how to implement such an algorithm in ANSI C in the most effective way, e.g. is it possible to avoid the second loop? NOTE: the presented Python code is just to understand how this algorithm works. UPDATE: here is an example of how it works in Python:

        x=[]
        y=[]
        hold=[]
        num_points=20
        points_inbetween = 2

        temp1,temp2=0,0

        for i in range(num_points):
            x.append( sin(i*2.0*pi * 0.1) )

        L = points_inbetween
        iL = 1.0/L
        for i in x:
            hold = [i-temp1] * L
            temp1 = i
            for j in hold:
                temp2 += j
                y.append(temp2 * iL)

    Let's say x=[.... 10, 20, 30 ....]. Then, if L=1, it will produce [... 10, 15, 20, 25, 30 ...]
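
    A sketch of a direct ANSI C translation (the function name and buffer-passing convention are illustrative). The hold list disappears because all L values in it are identical, so only the running sum is kept; a loop over the L output points per input sample remains, since each iteration has to emit a sample, but there is still only one multiply per output.

        #include <stddef.h>

        /* x: n input samples; y: output buffer with room for n*L samples; L: points per input. */
        void interpolate(const double *x, size_t n, double *y, int L)
        {
            double prev = 0.0;                 /* temp1 in the Python version */
            double acc  = 0.0;                 /* temp2 */
            const double iL = 1.0 / (double)L;
            size_t i, out = 0;
            int j;

            for (i = 0; i < n; ++i) {
                const double diff = x[i] - prev;   /* the value the hold buffer repeated L times */
                prev = x[i];
                for (j = 0; j < L; ++j) {
                    acc += diff;                   /* running sum, as in the Python code */
                    y[out++] = acc * iL;           /* one multiply per output sample */
                }
            }
        }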

    Read the article

  • Fastest reliable way for Clojure (Java) and Ruby apps to communicate

    - by jkndrkn
    Hi there, we have cloud-hosted (RackSpace cloud) Ruby and Java apps that will interact as follows: the Ruby app sends a request to the Java app; the request consists of a map structure containing strings, integers, other maps, and lists (analogous to JSON); the Java app analyzes the data and sends a reply to the Ruby app. We are interested in evaluating both messaging formats (JSON, Protocol Buffers, Thrift, etc.) and message transmission channels/techniques (sockets, message queues, RPC, REST, SOAP, etc.). Our criteria:

    - Short round-trip time.
    - Low round-trip-time standard deviation. (We understand that garbage collection pauses and network usage spikes can affect this value.)
    - High availability.
    - Scalability (we may want to have multiple instances of the Ruby and Java apps exchanging point-to-point messages in the future).
    - Ease of debugging and profiling.
    - Good documentation and community support.
    - Bonus points for Clojure support.

    What combination of message format and transmission method would you recommend? Why? I've gathered here some materials we have already collected for review:

    - Comparison of various Java serialization options
    - Comparison of Thrift and Protocol Buffers (old)
    - Comparison of various data interchange formats
    - Comparison of Thrift and Protocol Buffers
    - Fallacies of Protocol Buffers RPC features
    - Discussion of RPC in the context of AMQP (message queueing)
    - Comparison of RPC and message-passing in distributed systems (pdf)
    - Criticism of RPC from the perspective of a message-passing fan
    - Overview of Avro from a Ruby programmer's perspective

    Read the article

  • How to throw meaningful data in exceptions?

    - by ZeroCool
    I would like to throw exceptions with a meaningful explanation! Using variable arguments worked fine, but lately I figured out it's impossible to extend the class in order to implement other exception types. So I figured I'd use a std::ostringstream as an internal buffer to deal with formatting... but that doesn't compile! I guess it has something to do with copy constructors. Anyhow, here is my piece of code:

        class Exception: public std::exception {
        public:
            Exception(const char *fmt, ...);
            Exception(const char *fname, const char *funcname, int line, const char *fmt, ...);
            //std::ostringstream &Get() { return os_ ; }
            ~Exception() throw();
            virtual const char *what() const throw();

        protected:
            char err_msg_[ERRBUFSIZ];
            //std::ostringstream os_;
        };

    The variable-arguments constructor can't be inherited from! This is why I thought of std::ostringstream. Any advice on how to implement such an approach?
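
    A minimal sketch of one possible shape, under the assumption that the goal is printf-style construction that subclasses can reuse: keep the char buffer (std::ostringstream members are non-copyable, and thrown objects must be copyable, which is the usual reason the stream-based version fails to build) and expose a protected formatting helper that derived classes call from their own variadic constructors. ERRBUFSIZ and IOException are illustrative here.

        #include <exception>
        #include <stdarg.h>
        #include <stdio.h>

        class Exception : public std::exception {
        public:
            Exception(const char *fmt, ...) {
                va_list ap;
                va_start(ap, fmt);
                format(fmt, ap);
                va_end(ap);
            }

            virtual ~Exception() throw() {}

            virtual const char *what() const throw() { return err_msg_; }

        protected:
            enum { ERRBUFSIZ = 1024 };

            Exception() { err_msg_[0] = '\0'; }          // derived classes fill err_msg_ themselves

            void format(const char *fmt, va_list ap) {   // reusable from subclasses
                vsnprintf(err_msg_, sizeof(err_msg_), fmt, ap);
            }

            char err_msg_[ERRBUFSIZ];
        };

        // A derived type forwards its own va_list and adds its extra context.
        class IOException : public Exception {
        public:
            IOException(const char *fname, const char *fmt, ...) {
                char tmp[ERRBUFSIZ];
                va_list ap;
                va_start(ap, fmt);
                vsnprintf(tmp, sizeof(tmp), fmt, ap);
                va_end(ap);
                snprintf(err_msg_, sizeof(err_msg_), "%s: %s", fname, tmp);
            }
        };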

    Read the article

  • Security strategies for storing password on disk

    - by Mike
    I am building a suite of batch jobs that require regular access to a database, running on a Solaris 10 machine. Because of (unchangeable) design constraints, we are required to use a certain program to connect to it. Said interface requires us to pass a plain-text password over a command line to connect to the database. This is a terrible security practice, but we are stuck with it. I am trying to make sure things are properly secured on our end. Since the processing is automated (i.e., we can't prompt for a password), and I can't store anything outside the disk, I need a strategy for storing our password securely. Here are some basic rules:

    - The system has multiple users.
    - We can assume that our permissions are properly enforced (i.e., if a file is chmod'd to 600, it won't be publicly readable).
    - I don't mind anyone with superuser access looking at our stored password.

    Here is what I've got so far:

    - Store the password in password.txt
    - chmod 600 password.txt
    - The process reads from password.txt when the password is needed
    - The buffer is overwritten with zeros when it's no longer needed

    Although I'm sure there is a better way.

    Read the article

  • Why does Go not seem to recognize size_t in a C header file?

    - by Graeme Perrow
    I am trying to write a Go library that will act as a front-end for a C library. If one of my C structures contains a size_t, I get compilation errors. AFAIK size_t is a built-in C type, so why wouldn't Go recognize it? My header file looks like:

        typedef struct mystruct {
            char * buffer;
            size_t buffer_size;
            size_t * length;
        } mystruct;

    and the errors I'm getting are:

        gcc failed:
        In file included from <stdin>:5:
        mydll.h:4: error: expected specifier-qualifier-list before 'size_t'
        on input:

        typedef struct { char *p; int n; } _GoString_;
        _GoString_ GoString(char *p);
        char *CString(_GoString_);

        #include "mydll.h"

    I've even tried adding either of

        // typedef unsigned long size_t

    or

        // #define size_t unsigned long

    in the .go file before the // #include, and then I get "gcc produced no output". I have seen these questions, and looked over the example with no success.
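
    One detail worth noting: size_t is defined by a header rather than being built into the C language, so the translation unit cgo generates only sees it if the header (or something it includes) pulls in <stddef.h>. A minimal adjustment, assuming mydll.h is under your control, would look like this:

        /* mydll.h */
        #include <stddef.h>   /* defines size_t; without it the struct below fails to parse */

        typedef struct mystruct {
            char   *buffer;
            size_t  buffer_size;
            size_t *length;
        } mystruct;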

    Read the article

  • Audio Streaming Latency

    - by killianmcc
    I'm writing a UDP local area network video chat system and have got the video and audio streams working. However, I'm experiencing a little latency (about half a second) in the audio and was wondering which codecs would provide the least latency. I'm using NAudio (http://naudio.codeplex.com/), which provides me access to the following codecs for streaming:

    - Speex Narrow Band (VBR)
    - Speex Wide Band (16kHz) (VBR)
    - Speex Ultra Wide Band (32kHz) (VBR)
    - DSP Group TrueSpeech (8.5kbps)
    - GSM 6.10 (13kbps)
    - Microsoft ADPCM (32.8kbps)
    - G.711 a-law (64kbps)
    - G.722 16kHz (64kbps)
    - G.711 mu-law (64kbps)
    - PCM 8kHz 16 bit uncompressed (128kbps)

    I've tried them out and I'm not noticing much difference. Are there any others that I should download and try to reduce latency? I'm only going to be sending voice over the connection, and I'm not really worried about quality or background noise too much. UPDATE: I'm sending the audio in blocks like so:

        waveIn = new WaveIn();
        waveIn.BufferMilliseconds = 50;
        waveIn.DeviceNumber = inputDeviceNumber;
        waveIn.WaveFormat = codec.RecordFormat;
        waveIn.DataAvailable += waveIn_DataAvailable;

        void waveIn_DataAvailable(object sender, WaveInEventArgs e)
        {
            if (connected)
            {
                byte[] encoded = codec.Encode(e.Buffer, 0, e.BytesRecorded);
                udpSender.Send(encoded, encoded.Length);
            }
        }

    Read the article

  • MySql product\tag query optimisation - please help!

    - by Nige
    Hi there, I have an SQL query I am struggling to optimise. It is basically used to pull back products for a shopping cart. The products each have tags attached, using a many-to-many table product_tag, and I also pull back a store name from a separate store table. I'm using group_concat to get a list of tags for the display (this is why I have the strange group-by/order-by clauses at the bottom), and I need to order by dateadded, showing the latest scheduled product first. Here is the query:

        SELECT products.*, stores.name,
               GROUP_CONCAT(tags.taglabel ORDER BY tags.id ASC SEPARATOR " ") taglist
        FROM (products)
        JOIN product_tag ON products.id=product_tag.productid
        JOIN tags ON tags.id=product_tag.tagid
        JOIN stores ON products.cid=stores.siteid
        WHERE dateadded < '2010-05-28 07:55:41'
        GROUP BY products.id ASC
        ORDER BY products.dateadded DESC
        LIMIT 2

    Unfortunately, even with a small set of data (3 tags and about 12 products) the query is taking 0.0034 seconds to run. Eventually I want to have about 2000 products and 50 tags in this system (I'm guessing this will be very slooooow). Here is the EXPLAIN output:

        id | select_type | table       | type   | possible_keys   | key     | key_len | ref                            | rows | Extra
        1  | SIMPLE      | tags        | ALL    | PRIMARY         | NULL    | NULL    | NULL                           | 4    | Using temporary; Using filesort
        1  | SIMPLE      | product_tag | ref    | tagid,productid | tagid   | 4       | cs_final.tags.id               | 2    |
        1  | SIMPLE      | products    | eq_ref | PRIMARY,cid     | PRIMARY | 4       | cs_final.product_tag.productid | 1    | Using where
        1  | SIMPLE      | stores      | ALL    | siteid          | NULL    | NULL    | NULL                           | 7    | Using where; Using join buffer

    Can anyone help?

    Read the article

  • Writing a generic function that can take a Writer as well as an OutputStream

    - by ebruchez
    I wrote a couple of functions that look like this:

        def myWrite(os: OutputStream) = {}
        def myWrite(w: Writer) = {}

    Now both are very similar, and I thought I would try to write a single parametrized version of the function. I started with a type with the two methods that are common to the Java OutputStream and Writer:

        type Writable[T] = {
          def close() : Unit
          def write(cbuf: Array[T], off: Int, len: Int): Unit
        }

    One issue is that OutputStream writes Byte and Writer writes Char, so I parametrized the type with T. Then I write my function:

        def myWrite[T, A[T] <: Writable[T]](out: A[T]) = {}

    and try to use it:

        val w = new java.io.StringWriter()
        myWrite(w)

    Result:

        <console>:9: error: type mismatch;
         found   : java.io.StringWriter
         required: ?A[ ?T ]
        Note that implicit conversions are not applicable because they are ambiguous:
         both method any2ArrowAssoc in object Predef of type [A](x: A)ArrowAssoc[A]
         and method any2Ensuring in object Predef of type [A](x: A)Ensuring[A]
         are possible conversion functions from java.io.StringWriter to ?A[ ?T ]
              myWrite(w)

    I tried a few other combinations of types and parameters, to no avail so far. My question is whether there is a way of achieving this at all, and if so, how. (Note that the implementation of myWrite will need, internally, to know the type T that parametrizes the write() method, because it needs to create a buffer as in new Array[T](...).)

    Read the article

  • Emacs: Often switching between Emacs and my IDE's editor, how can I 'synch' the file?

    - by WizardOfOdds
    I very often need to do some Emacs magic on some files, and I need to go back and forth between my IDE (IntelliJ IDEA) and Emacs. When a change is made under Emacs (and after I've saved the file) and I go back to IntelliJ, the change appears immediately (if I recall correctly, I configured IntelliJ to "always reload file when a modification is detected on disk" or something like that). I don't even need to reload: as soon as IntelliJ IDEA gains focus, it instantly reloads the file (and I hence have immediate access to the modifications I made from Emacs). So far, so very good. However, "the other way round" it doesn't work yet. Can I configure Emacs so that every time a file is changed on disk it reloads it? Or make Emacs, every time it "gains focus", verify whether any file currently opened has been modified on disk? I know I can start modifying the buffer under Emacs and it will instantly warn me that it has been modified, but I'd rather have it happen immediately (for example, if I used my IDE to make some big change, when I come back to Emacs what I see may no longer be at all what the file contains, and it's a bit weird).

    Read the article
