Search Results

Search found 3136 results on 126 pages for 'buffer overrun'.


  • Is there a better way of launching the same app from the GUI or the command line?

    - by markhunte
    Hi all, I worked out a way of running my Cocoa (GUI) app either from the normal double-click or from the CLI. I realised that when the app launches from a double-click (GUI), it gets an argument count (argc) of 0, but when launched from the CLI it has an argc of 1, as long as I don't pass any arguments myself. That means I can use if..else to determine how the app was launched. This works fine for my app, as I do not need to pass arguments, but I wondered if there is a better way of doing it. Here is an example of the code in main.m:

        int main (int argc, const char * argv[]) {
            // This determines whether the app was launched from the command line or opened from the GUI.
            if (argc == 1) {
                // app was run from the CLI
                MyClass *mMyClass = [[MyClass alloc] init];
                [mMyClass readBuffer];    // read the buffer
                [mMyClass createFile];    // write out a file on disk
                [mMyClass doMoreStuff];
                [mMyClass release];
                mMyClass = nil;
                return 0;
            } else {
                // app was double-clicked (opened)
                NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
                return NSApplicationMain(argc, (const char **) argv);
                [pool drain]; // never reached
            }
        }

    Many thanks. M
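
    One possibly more robust check, offered as a sketch rather than the canonical approach: on older versions of Mac OS X, apps launched from the Finder received a "-psn_..." (process serial number) argument, so scanning argv for it avoids relying on the exact argc value. The helper name is illustrative.

        #include <string.h>

        /* Sketch: detect a Finder (GUI) launch by the -psn argument that
         * LaunchServices appended on older Mac OS X releases. */
        static int launched_from_gui(int argc, const char *argv[]) {
            for (int i = 1; i < argc; i++) {
                if (strncmp(argv[i], "-psn", 4) == 0)
                    return 1; /* Finder launch */
            }
            return 0; /* likely a CLI launch with no arguments */
        }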

    Read the article

  • SDL_image/C++ OpenGL Program: IMG_Load() produces fuzzy images

    - by Kami
    I'm trying to load an image file and use it as a texture for a cube. I'm using SDL_image to do that. I used this image because I've found it in various file formats (tga, tif, jpg, png, bmp). The code:

        SDL_Surface *texture;

        // load an image into an SDL surface (i.e. a buffer)
        texture = IMG_Load("/Users/Foo/Code/xcode/test/lena.bmp");
        if (texture == NULL) {
            printf("bad image\n");
            exit(1);
        }

        // create an OpenGL texture object
        glGenTextures(1, &textureObjOpenGLlogo);
        // select the texture object you need
        glBindTexture(GL_TEXTURE_2D, textureObjOpenGLlogo);
        // define the parameters of that texture object:
        // how the texture should wrap in the s and t directions
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        // how the texture lookup should be interpolated when the face is
        // smaller or bigger than the texture
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // send the texture image to the graphics card
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->w, texture->h, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, texture->pixels);
        // clean up the SDL surface
        SDL_FreeSurface(texture);

    The code compiles without errors or warnings! I've tried all the file formats, but this always produces the same ugly, fuzzy result. I'm using SDL_image 1.2.9 and SDL 1.2.14 with Xcode 3.2 under 10.6.2. Does anyone know how to fix this?
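
    A hedged guess at the cause: distortion like this usually means the surface's actual pixel layout (channel order, bytes per pixel, row packing) does not match the hard-coded GL_RGB passed to glTexImage2D. A sketch that derives the format from the surface instead, assuming a 24- or 32-bit image (palettized surfaces would need converting first):

        #include <SDL/SDL.h>
        #include <OpenGL/gl.h>   /* <GL/gl.h> on non-Apple platforms */

        void uploadSurface(SDL_Surface *s) {
            GLenum fmt = GL_RGB;
            if (s->format->BytesPerPixel == 4)
                fmt = (s->format->Rmask == 0x000000ff) ? GL_RGBA : GL_BGRA;
            else if (s->format->BytesPerPixel == 3)
                fmt = (s->format->Rmask == 0x000000ff) ? GL_RGB : GL_BGR;
            /* SDL rows are not guaranteed to be 4-byte aligned */
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, s->w, s->h, 0,
                         fmt, GL_UNSIGNED_BYTE, s->pixels);
        }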

    Read the article

  • Fast screen capture and lost Vsync

    - by user338759
    Hi, I'd like to generate a movie in real time with a self-made application doing fast screen captures, with part of the screen occupied by a running 3D application. I'm aware that several applications already exist for this (like FRAPS or Taksi), and there are even dedicated DirectShow filters (like UScreenCapture), but I really need to do this from my own external application. When correctly set up (UScreenCapture + ffdshow), capturing and compressing a full screen does not consume as much CPU as you would expect (about 15%) and does not impair the performance of the 3D app. The problem with doing the capture from an external application is that the 3D application loses its Vsync and becomes choppy and difficult to use (the 3D app is only presented on a small part of the screen; the rest is GDI and DirectX). FRAPS solves this problem by allowing you to capture only one application at a time (the one with focus). Depending on the technology used (OpenGL, DirectX, GDI), it hooks the Vsync and does its capture (with glReadPixels, ...) without perturbing it. That does not solve my problem, since I want the full composed screen image (the 3D app plus the rest) AND a smooth 3D app. UScreenCapture seems to use a fast DirectX call to capture the whole screen, but the OpenGL 3D app still loses its sync. Doing a BitBlt is too slow and CPU-consuming for real-time 30 fps acquisition (at least under Windows XP; not sure about 7). My question is whether there is a way to achieve my goal with Windows 7 and its brand-new DirectX compositing engine. Windows 7 manages to show live, VSynced, duplicated previews of every app (in the taskbar), so there must be a way to access the currently displayed screen buffer without perturbing the rendering of the 3D OpenGL app. Any other suggestions or technologies? Thank you.

    Read the article

  • HttpURLConnection: What is the minimum best-practice implementation?

    - by stormin986
    I've come across a lot of HttpURLConnection examples that are nothing more than openConnection(), getInputStream(), and then just reading the buffer. It's simple, but it doesn't seem like the best implementation, since it handles no problems. I don't yet know much about HTTP, so I keep thinking I have everything covered until a new problem arises. I'm currently experiencing a similar problem to this one: most times I try to read the same resource a second time (from a different HttpURLConnection object, after I've .disconnect()'ed the previous one), the response code returns as -1 (but no exception is thrown!). Before I knew to check the response code, I was baffled, since no exceptions were being thrown. So, is there a minimum 'best practice' HttpURLConnection implementation? What are the notable exceptions to handle? Response code checking? Any other error checks? Which connection parameters do and don't need to be set (like doInput / doOutput; are these even necessary? Some examples have them, some don't), etc. I realize this is kind of a broad question, but I think it has the potential to be a very useful resource if many of the common use cases and FAQs are addressed in one central place. This seems like the kind of thing a community wiki would be good for...

    Read the article

  • 2 questions. IDO mode not caching properly / forcing buffers to named windows.

    - by user112043
    1. My ido-mode does not properly cache filenames/folders. It will list the files inside a folder, but as of a while ago, without any of the newer files showing. Any suggestions?

    2. In JDE, when I have multiple windows open, compiling in one window will create a corresponding "*name of the class*" buffer that opens in the next window in order. This is fine if I only have one window open, but could I get some help writing a function that I could use to: name a window; force all buffers of the JDE compile server to only ever open in that window, if it exists; and force all run windows from JDE of the form "*name of the class*" to open in the same window, if it exists. JDE automatically names the buffer "*name of the class*"; I will probably dig around the code to find an easy fix for that, so a function that forces all windows whose names match a regexp like "*jde run - filename*" or something along those lines would also work.

    Thanks for your help; this is my first post here as well. I would really just like some ideas on what may be going wrong with question 1, and with question 2 if anyone is feeling kind.

    Read the article

  • WCF Streaming not working at server

    - by Radhi
    Hi, I have used a WCF service to transfer large files to the server in chunks, following this article: http://kjellsj.blogspot.com/2007/02/wcf-streaming-upload-files-over-http.html. I have configured my application on IIS on my machine and it works fine there, allowing uploads of up to 64 MB. But once we published the site, it allows at most about 30 MB; if I try to upload more than that, I get error 404 - resource not found. Here is the binding configuration I have used:

        <basicHttpBinding>
          <!-- buffer: 64KB; max size: 64MB -->
          <binding name="FileTransferServicesBinding"
                   closeTimeout="00:01:00" openTimeout="00:01:00"
                   receiveTimeout="00:10:00" sendTimeout="00:01:00"
                   transferMode="Streamed" messageEncoding="Mtom"
                   maxBufferSize="65536" maxReceivedMessageSize="67108864">
            <security mode="None">
              <transport clientCredentialType="None"/>
            </security>
          </binding>
        </basicHttpBinding>

    Please suggest where I am missing anything, and if more code is required please let me know. Thanks in advance.
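
    One hedged observation: the roughly 30 MB ceiling matches IIS 7's default request-filtering limit (maxAllowedContentLength = 30000000 bytes), which rejects larger bodies with a 404 before they ever reach the WCF binding. If the published server runs IIS 7, raising that limit (and, for ASP.NET-hosted services, httpRuntime maxRequestLength) in web.config would be the first thing to try; the values below are illustrative, sized to match the 64 MB binding:

        <system.webServer>
          <security>
            <requestFiltering>
              <!-- bytes; the default is 30000000 (~28.6 MB) -->
              <requestLimits maxAllowedContentLength="67108864" />
            </requestFiltering>
          </security>
        </system.webServer>
        <system.web>
          <!-- kilobytes -->
          <httpRuntime maxRequestLength="65536" />
        </system.web>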

    Read the article

  • Reinterpret a CGImageRef using PyObjC in Python

    - by Michael Rondinelli
    Hi, I'm doing something that's a little complicated to sum up in the title, so please bear with me. I'm writing a Python module that provides an interface to my C++ library, which provides some specialized image-manipulation functionality. It would be most convenient to be able to access image buffers as CGImageRefs from Python, so they could be manipulated further using Quartz (via PyObjC, which works well). So I have a C++ function that provides a CGImageRef representation of my own image buffers, like this:

        CGImageRef CreateCGImageRefForImageBuffer(shared_ptr<ImageBuffer> buffer);

    I'm using Boost.Python to create my Python bridge. What is the easiest way for me to export this function so that I can use the CGImageRef from Python? Problems: the CGImageRef type can't be exported directly because it is a pointer to an undefined struct. So I could make a wrapper function that wraps it in a PyCObject or something in order to send the pointer to Python. But then how do I "cast" this object to a CGImageRef from Python? Is there a better way to go about this?
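
    A sketch of the export half of the PyCObject idea, against the Python 2.5 C API; how to turn the raw pointer back into something usable on the PyObjC side is a separate question, and the function name here is hypothetical:

        #include <Python.h>
        #include <ApplicationServices/ApplicationServices.h>

        /* Wrap the opaque CGImageRef in a PyCObject that owns one retain;
         * the destructor releases it when the Python object is collected. */
        static PyObject *wrap_cgimage(CGImageRef img) {
            CGImageRetain(img);
            return PyCObject_FromVoidPtr((void *)img,
                                         (void (*)(void *))CGImageRelease);
        }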

    Read the article

  • Multiple sendto() using UDP socket

    - by ereOn
    Hi, I have a piece of network software that uses UDP to communicate with other instances of the same program. For various reasons, I must use UDP here. I recently had problems sending huge amounts of data over UDP and had to implement a fragmentation system to split my messages into small data chunks. So far it worked well, but I now encounter an issue when I have to send a lot of data chunks. My algorithm is: split the message into small data chunks (around 1500 bytes), then iterate over the chunk list and send each one using sendto(). However, when I send a lot of data chunks, the receiver only gets the first 6 messages. Sometimes it misses the sixth and receives the seventh; it depends. In any case, sendto() always indicates success. This always happens when I test my software over a loopback interface (127.0.0.1), but never over my LAN. If I add something like

        std::cout << "test" << std::endl;

    between the sendto() calls, then every frame is received. I am aware that UDP allows packet loss and that my frames might be lost for many reasons, and I suppose it has to do with the rate at which I am sending the data chunks. What would be the right approach here? Implementing an acknowledgement mechanism (just like TCP) seems overkill. Adding an arbitrary waiting time between the sendto() calls is ugly and will probably hurt performance. Increasing (if possible) the receiver's internal UDP buffer? I don't even know if this is possible. Something else? I really need your advice here. Thanks very much.
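
    A minimal sketch of the third option, which does exist on common platforms: enlarging the receiver's UDP buffer with setsockopt(SO_RCVBUF). On loopback the sender can outpace the receiver so easily that datagrams are dropped once this buffer fills, which would also explain why an std::cout between sends "fixes" it. POSIX-flavored C; the kernel may clamp the value to a system-wide maximum:

        #include <sys/socket.h>
        #include <stdio.h>

        /* "sock" is assumed to be an already-created UDP socket descriptor. */
        int set_udp_rcvbuf(int sock, int bytes) {
            if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) < 0) {
                perror("setsockopt(SO_RCVBUF)");
                return -1;
            }
            return 0;
        }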

    Read the article

  • Malloc corrupting already malloc'd memory in C

    - by Kyte
    I'm currently helping a friend debug a program of his, which includes linked lists. His list structure is pretty simple:

        typedef struct nodo{
            int cantUnos;
            char* numBin;
            struct nodo* sig;
        }Nodo;

    We've got the following code snippet:

        void insNodo(Nodo** lista, char* auxBin, int auxCantUnos){
            printf("*******Insertando\n");
            int i;
            if (*lista) printf("DecInt*%p->%p\n", *lista, (*lista)->sig);
            Nodo* insert = (Nodo*)malloc(sizeof(Nodo*));
            if (*lista) printf("Malloc*%p->%p\n", *lista, (*lista)->sig);
            insert->cantUnos = auxCantUnos;
            insert->numBin = (char*)malloc(strlen(auxBin)*sizeof(char));
            for(i=0 ; i<strlen(auxBin) ; i++)
                insert->numBin[i] = auxBin[i];
            insert->numBin[i] = '\0';
            insert->sig = NULL;
            Nodo* aux;
            [etc]

    (The lines with extra indentation were my addition, for debugging purposes.) This yields me the following:

        *******Insertando
        DecInt*00341098->00000000
        Malloc*00341098->2832B6EE

    (*lista)->sig is previously and deliberately set to NULL, which checks out until here, and I fixed a potential buffer overflow (he'd forgotten to copy the NUL terminator into insert->numBin). I can't think of a single reason why this would happen, nor do I have any idea what else I should provide as further info. (I'm compiling on the latest stable MinGW under fully patched Windows 7; my friend is using MinGW under Windows XP. On my machine, at least, it only happens when GDB's not attached.) Any ideas? Suggestions? Possible exorcism techniques? (The current hack is copying the sig pointer to a temp variable and restoring it after the malloc. It breaks anyway: it turns out the 2nd malloc corrupts it too. Interestingly enough, it resets sig to the exact same value as the first one.)
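
    For what it's worth, the two allocations in insNodo look like the likely culprits: malloc(sizeof(Nodo*)) reserves only pointer-sized storage (4 or 8 bytes) rather than sizeof(Nodo), so writing insert->sig runs off the end of the block and stomps other heap data, and numBin needs one extra byte for the '\0' terminator. A corrected sketch, using the Nodo struct from the question (the helper name is made up):

        #include <stdlib.h>
        #include <string.h>

        Nodo *makeNodo(const char *auxBin, int auxCantUnos) {
            Nodo *insert = malloc(sizeof(Nodo));          /* the struct, not a pointer to it */
            insert->cantUnos = auxCantUnos;
            insert->numBin = malloc(strlen(auxBin) + 1);  /* +1 for the terminator */
            strcpy(insert->numBin, auxBin);               /* copies the '\0' too */
            insert->sig = NULL;
            return insert;
        }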

    Read the article

  • According to MSDN, the ReadFile() Win32 function may incorrectly report read operation completion. When?

    - by Martin Dobšík
    MSDN states in its description of the ReadFile() function (http://msdn.microsoft.com/en-us/library/aa365467%28VS.85%29.aspx): "If hFile is opened with FILE_FLAG_OVERLAPPED, the lpOverlapped parameter must point to a valid and unique OVERLAPPED structure, otherwise the function can incorrectly report that the read operation is complete." I have some applications that violate the above recommendation, and I would like to know the severity of the problem. I mean the program uses a named pipe that was created with FILE_FLAG_OVERLAPPED, but it reads from it using the following call:

        ReadFile(handle, &buf, n, &n_read, NULL);

    That means it passes NULL as the lpOverlapped parameter. According to the documentation, that call should not work correctly in some circumstances. I have spent a lot of time trying to reproduce the problem, but I was unable to! I always got all the data in the right place at the right time. I was only testing named pipes, though. Would anybody know when ReadFile() will incorrectly return and report successful completion even though the data is not yet in the buffer? What would have to happen to reproduce the problem? Does it happen with files, pipes, sockets, consoles, or other devices? Do I have to use a particular version of the OS? Or a particular style of reading (like registering the handle with an I/O completion port)? Or a particular synchronization of the reading and writing processes/threads? When would this fail? It works for me :/ Please help! With regards, Martin
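
    For reference, a minimal sketch of the documented pattern for handles opened with FILE_FLAG_OVERLAPPED: pass a unique OVERLAPPED, treat ERROR_IO_PENDING as "wait for completion", and fetch the byte count with GetOverlappedResult. This is the shape MSDN prescribes, not an explanation of when the NULL-lpOverlapped variant actually misbehaves:

        #include <windows.h>

        BOOL ReadOverlapped(HANDLE h, void *buf, DWORD n, DWORD *nRead) {
            OVERLAPPED ov = {0};
            ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL); /* manual-reset event */
            BOOL ok = ReadFile(h, buf, n, NULL, &ov);
            if (!ok && GetLastError() != ERROR_IO_PENDING) {
                CloseHandle(ov.hEvent);
                return FALSE;                                 /* a real error */
            }
            ok = GetOverlappedResult(h, &ov, nRead, TRUE);    /* TRUE = wait until done */
            CloseHandle(ov.hEvent);
            return ok;
        }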

    Read the article

  • Parsing unicode XML with Python SAX on App Engine

    - by Derek Dahmer
    I'm using xml.sax with unicode strings of XML as input, originally entered through a web form. On my local machine (Python 2.5, using the default expat xmlreader, running through App Engine), it works fine. However, the exact same code and input strings fail on the production App Engine servers with "not well-formed". For example, it happens with the code below:

        from xml import sax

        class MyHandler(sax.ContentHandler):
            pass

        handler = MyHandler()

        # Both of these unicode strings return 'not well-formed'
        # on App Engine, but work locally
        sax.parseString(u"<a>b</a>", handler)
        sax.parseString(u"<!DOCTYPE a[<!ELEMENT a (#PCDATA)> ]><a>b</a>", handler)

        # Both of these work, but output unicode
        sax.parseString("<a>b</a>", handler)
        sax.parseString("<!DOCTYPE a[<!ELEMENT a (#PCDATA)> ]><a>b</a>", handler)

    resulting in the error:

        File "<string>", line 1, in <module>
        File "/base/python_dist/lib/python2.5/xml/sax/__init__.py", line 49, in parseString
          parser.parse(inpsrc)
        File "/base/python_dist/lib/python2.5/xml/sax/expatreader.py", line 107, in parse
          xmlreader.IncrementalParser.parse(self, source)
        File "/base/python_dist/lib/python2.5/xml/sax/xmlreader.py", line 123, in parse
          self.feed(buffer)
        File "/base/python_dist/lib/python2.5/xml/sax/expatreader.py", line 211, in feed
          self._err_handler.fatalError(exc)
        File "/base/python_dist/lib/python2.5/xml/sax/handler.py", line 38, in fatalError
          raise exception
        SAXParseException: <unknown>:1:1: not well-formed (invalid token)

    Any reason why App Engine's parser, which also uses Python 2.5 and expat, would fail when given unicode?

    Read the article

  • Java NIO (Netty): How does Encryption or GZIPping work in theory (with filters)

    - by Tom
    Hello experts, I would be very thankful if you could explain to me how, in theory, the "interceptor/filter" pattern on byte streams (over sockets/channels) works (in asynchronous IO with Netty) with regard to encryption or compression of data. Say I have a filter that does GZIPping. How is this implemented internally? Does the filter "collect" bytes from the channel until it has a useful number of bytes that can then be en/decoded? What is, in general, the minimal "block size" (the amount of data to encode/decode in one chunk) of socket-based gzipping? Does this block size have to be negotiated in advance between server and client? What happens if the client does not send enough data to "fill" the block size (due to network congestion, say) but does not close the connection: does that mean the other side will simply wait until it gets enough bytes to decode, or until a timeout occurs? And how is the filter pattern applied: will the compression filter de/compress the block of bytes and then store them again in the same buffer, and would I (in the case of Netty) normally use the ChannelHandlerContext to pass the de/encoded data to the next filter? Any explanations/links/tutorials (for beginners ;-) will be very much appreciated to help me understand how, for example, encryption and compression are implemented in socket-based communication with the filter/interceptor pattern. Thank you very much, Tom
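
    On the block-size question specifically: stream compressors like DEFLATE don't need a negotiated block size at all; the compressor keeps internal state and can be flushed at any point, producing output the peer can decode immediately. A language-independent sketch of that idea using zlib in C (a Netty filter would do the moral equivalent per channel write; deflateInit() is assumed to have been called once per connection):

        #include <zlib.h>

        int compress_chunk(z_stream *zs, const unsigned char *in, unsigned inLen,
                           unsigned char *out, unsigned outCap, unsigned *outLen) {
            zs->next_in   = (unsigned char *)in;
            zs->avail_in  = inLen;
            zs->next_out  = out;
            zs->avail_out = outCap;
            /* Z_SYNC_FLUSH emits everything buffered so far as a
             * self-contained, immediately decodable chunk. */
            if (deflate(zs, Z_SYNC_FLUSH) != Z_OK)
                return -1;
            *outLen = outCap - zs->avail_out;
            return 0;
        }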

    Read the article

  • Custom view on iPhone's native media player (MPMoviePlayerController)

    - by sneha
    I am building an application that implements a custom view on the iPhone's native media player, and I would like your help in deciding how to direct this effort. So far I have found that the iPhone SDK doesn't provide APIs to customize the media player. I need these things in the player: custom views, i.e. I want to change all the control buttons on the player (Play/Pause, the seek bar, etc.), and the background of the player will also need to be different. The player has to play audio or video files from local or remote locations. Can I use MPMoviePlayerController if it can be customized (and how would I do that)? Alternatively, any Apple-approved third-party player that can download and play media files from a local or remote location is also fine. It would also be great to have access to the media player's buffer so that it can be encrypted. I have the following questions: 1. Any help in building/customizing the player? 2. Do you see issues in the signing of the application? 3. Does Apple have any restrictions on customizing the media player? 4. Any sample iPhone application where the media player is customized? Any help in this regard is highly appreciated.

    Read the article

  • How to forward a 'saved' request stream to another Action within the same controller?

    - by Moe Howard
    We have a need to chunk up large HTTP requests sent by our mobile devices. These smaller chunk streams are merged into a file on the server. Once all chunks are received, we need a way to submit the saved merged request to another action within the same controller, which will process this large HTTP request. How can this be done? The code we tried below results in the service hanging. Is there a way to do this without a round trip?

        // Open merged chunked file
        FileStream fileStream = new FileStream(fileName, FileMode.Open, FileAccess.Read);

        // Read stream support variables
        int bytesRead = 0;
        byte[] buffer = new byte[1024];

        // Build new web request. The target action is called "Upload";
        // the method we are in is called "UploadChunk".
        HttpWebRequest webRequest;
        webRequest = (HttpWebRequest)WebRequest.Create(Request.Url.ToString().Replace("Chunk", string.Empty));
        webRequest.Method = "POST";
        webRequest.ContentType = "text/xml";
        webRequest.KeepAlive = true;
        webRequest.Timeout = 600000;
        webRequest.ReadWriteTimeout = 600000;
        webRequest.Credentials = System.Net.CredentialCache.DefaultCredentials;
        Stream webStream = webRequest.GetRequestStream(); // Hangs here; no errors, just hangs

    I have looked into using RedirectToAction and RedirectToRoute, but those methods don't fit well with what we are trying to do, as we cannot edit Request.InputStream (it is read-only) to carry the large request stream. Thanks, Moe

    Read the article

  • Flash movies in inactive browser tabs pause or don't execute in real time

    - by ZenBlender
    I'm noticing some unexpected behavior. Some time in the last few months, a change in either Firefox, the Flash player, or both has made it so that Flash movies in inactive browser tabs no longer execute in real time. They appear to still execute, but only in bursts, and not in a predictable way. This is a problem because I develop a Flash-based (ActionScript 2.0, Flash CS3) multiplayer game that maintains a network connection, allows players to chat, etc. Many of our players complain about Firefox crashing while playing the game. I have noticed it too; not too frequently, but it crashes several times a week. (Firefox crashes; I do not get a message from the Flash player indicating an infinite loop or a problem in my code.) My theory is that this new behavior causes crashes when there is a lot of activity in my game, leading to lots of unhandled network traffic for my game getting buffered before Firefox/Flash gives it a chance to execute. Maybe this leads to a buffer overflow or missing packets, and as a result something crashes. At times I switch back to the tab running my game and discover a display bug, which looks as though Flash simply failed to execute something it was supposed to. I would assume this new behavior is intentional, for example to prevent all the Flash-based advertisements in inactive tabs from executing and killing performance. In a quick test on Chrome (5.0.342.9 beta), this "pausing" of Flash seems to be present as well, but somehow it seems much less of a problem. My users have only complained about Firefox crashing, not other browsers. My machine: Windows 7 x64, Firefox 3.6.3, Flash Player 10.1.50.426. My game: triplejack.com. Any ideas? Ideally I'd like to disable this behavior for my Flash game so it can execute in real time even in an inactive tab. Thanks for any help!

    Read the article

  • Optimal Serialization of Primitive Types

    - by Greg Dean
    We are beginning to roll out more and more WAN deployments of our product (.NET fat client with an IIS-hosted Remoting backend). Because of this, we are trying to reduce the size of the data on the wire. We have overridden the default serialization by implementing ISerializable (similar to this), and we are seeing anywhere from 12% to 50% gains. Most of our efforts focus on optimizing arrays of primitive types. I would like to know if anyone knows of any fancy way of serializing primitive types, beyond the obvious. For example, today we serialize an array of ints as follows: [4 bytes (array length)][4 bytes][4 bytes]... Can anyone do significantly better? The most obvious example of a significant improvement, for boolean arrays, is putting 8 bools in each byte, which we already do. (Saving 7 bits per bool may seem like a waste of time, but when you are dealing with large magnitudes of data, which we are, it adds up very fast.) Note: we want to avoid general compression algorithms because of the latency associated with them; Remoting only supports buffered requests/responses (no chunked encoding). I realize there is a fine line between compression and optimal serialization, but our tests indicate we can afford very specific serialization optimizations at very little cost in latency, whereas reprocessing the entire buffered response into a new compressed buffer is too expensive.
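
    For readers unfamiliar with the bool-packing trick mentioned above, a small generic sketch in C (names illustrative; this shows the bit-twiddling idea, not the poster's actual .NET serializer):

        #include <stddef.h>
        #include <stdint.h>

        /* Pack n bools (one per byte in "bools") into n/8 bytes, rounded up. */
        size_t pack_bools(const uint8_t *bools, size_t n, uint8_t *out) {
            size_t nbytes = (n + 7) / 8;
            for (size_t i = 0; i < nbytes; i++)
                out[i] = 0;
            for (size_t i = 0; i < n; i++)
                if (bools[i])
                    out[i / 8] |= (uint8_t)(1u << (i % 8)); /* bit i%8 of byte i/8 */
            return nbytes;
        }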

    Read the article

  • Learning about the low level

    - by Anoners
    I'm interested in learning more about the PC at a lower (machine) level. I graduated from a school that taught us concepts using the Java language, which abstracts that level away almost completely. As a result, I only learned a bit from the one required assembly language course; having to cram in ASM and quite a few details about architecture made it hard to get a very deep picture of what is going on there. At work I focus on Unix socket programming in C, so I'm much closer to the hardware now, but I feel I should learn a bit more about what streams really are, how memory management and paging work, what goes on when you call paint() on a graphics buffer, etc. I missed out on a lot of this and I'm looking for a good resource to get me started. I've heard a lot about the "Pink Book" by Peter Norton (Programmer's Guide to the IBM PC, Programmer's Guide to Inside the PC, etc.). It seems like this is on the right track; however, the original is quite outdated, and the newer editions have had conflicting reviews, with many people saying to stay away from them. I'm not sure what the SO crowd thinks about this book, or whether they have suggestions for similar books, online resources, etc. that may be good primers for this sort of thing. Any suggestions would be appreciated.

    Read the article

  • AS3 microphone recording/saving works, in-flash PCM playback double speed

    - by Lowgain
    I have a working mic-recording script in AS3 which I have been able to use successfully to save .wav files to a server through AMF. These files play back fine in any audio player, with no weird effects. For reference, here is what I am doing to capture the mic's ByteArray (within a class called AudioRecorder):

        public function startRecording():void {
            _rawData = new ByteArray();
            _microphone.addEventListener(SampleDataEvent.SAMPLE_DATA, _samplesCaptured, false, 0, true);
        }

        private function _samplesCaptured(e:SampleDataEvent):void {
            _rawData.writeBytes(e.data);
        }

    This works with no problems. After the recording is complete, I can take the _rawData variable and run it through a WavWriter class, etc. However, if I play this same ByteArray as a sound using the following code, which I adapted from the Adobe cookbook (within a class called WavPlayer):

        public function playSound(data:ByteArray):void {
            _wavData = data;
            _wavData.position = 0;
            _sound.addEventListener(SampleDataEvent.SAMPLE_DATA, _playSoundHandler);
            _channel = _sound.play();
            _channel.addEventListener(Event.SOUND_COMPLETE, _onPlaybackComplete, false, 0, true);
        }

        private function _playSoundHandler(e:SampleDataEvent):void {
            if (_wavData.bytesAvailable <= 0)
                return;

            for (var i:int = 0; i < 8192; i++) {
                var sample:Number = 0;
                if (_wavData.bytesAvailable > 0)
                    sample = _wavData.readFloat();
                // note: playback SAMPLE_DATA expects stereo frames, i.e. two
                // floats (left, right) per sample, which may matter for a
                // mono microphone recording
                e.data.writeFloat(sample);
            }
        }

    the audio file plays at double speed! I checked the recording bitrates and such and am pretty sure those are all correct, and I tried changing the buffer size and whatever other numbers I could think of. Could it be a mono vs. stereo thing? Hope I was clear enough here, thanks!

    Read the article

  • SSIS 2005 Error while using script component Designer: "Cannot fetch a row from OLE DB provider "BUL

    - by user150541
    I am trying to debug a DTS package in SSIS. I have a Script Component where I pass in the input variables to increment a counter. When I try to MsgBox the counter value, I get the following error:

        Error: 0xC0202009 at STAGING1 to STAGING2, STAGING2 Destination [1056]: An OLE DB error has occurred. Error code: 0x80040E14.
        An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".".
        An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.".
        An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "Reading from DTS buffer timed out.".

    Below is the relevant part of the code within the Script Component:

        Imports System
        Imports System.Data
        Imports System.Math
        Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
        Imports Microsoft.SqlServer.Dts.Runtime.Wrapper

        Public Class ScriptMain
            Inherits UserComponent

            Dim iCounter As Integer
            Dim iCurrentVal As Integer
            Dim sCurrentOracleSeq As String
            Dim sSeqName As String
            Dim sSeqAltProcName As String

            Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
                ' Add your code here
                Row.SEQIDNCASE = iCounter + iCurrentVal
                iCounter += 1
                MsgBox(iCounter + iCurrentVal, MsgBoxStyle.Information, "Input0")
            End Sub

            Public Overrides Sub PreExecute()
                sCurrentOracleSeq = Me.Variables.VSEQIDCurVal
                iCurrentVal = CInt(sCurrentOracleSeq)
                MsgBox(iCurrentVal, MsgBoxStyle.Information, "No Title")
                iCounter = 0
                sSeqName = Me.Variables.VSEQIDName
                sSeqAltProcName = Me.Variables.VSEQIDAlterProc
            End Sub

            Public Overrides Sub PostExecute()
                Me.Variables.VSEQIDUpdateSQL = "Begin " & sSeqAltProcName & "('" & sSeqName & "'," & (iCounter + iCurrentVal) & "); End;"
            End Sub
        End Class

    Note that the code above works perfectly fine if I comment out the lines with MsgBox.

    Read the article

  • Lua Alien Module - Trouble using WriteProcessMemory function, unsure on types (unit32)

    - by jefferysanders
    require "alien" --the address im trying to edit in the Mahjong game on Win7 local SCOREREF = 0x0744D554 --this should give me full access to the process local ACCESS = 0x001F0FFF --this is my process ID for my open window of Mahjong local PID = 1136 --function to open proc local op = alien.Kernel32.OpenProcess op:types{ ret = "pointer", abi = "stdcall"; "int", "int", "int"} --function to write to proc mem local wm = alien.Kernel32.WriteProcessMemory wm:types{ ret = "long", abi = "stdcall"; "pointer", "pointer", "pointer", "long", "pointer" } local pRef = op(ACCESS, true, PID) local buf = alien.buffer("99") -- ptr,uint32,byte arr (no idea what to make this),int, ptr print( wm( pRef, SCOREREF, buf, 4, nil)) --prints 1 if success, 0 if failed So that is my code. I am not even sure if I have the types set correctly. I am completely lost and need some guidance. I really wish there was more online help/documentation for alien, it confuses my poor brain. What utterly baffles me is that it WriteProcessMemory will sometimes complete successfully (though it does nothing at all, to my knowledge) and will also sometimes fail to complete successfully. As I've stated, my brain hurts. Any help appreciated.

    Read the article

  • (PL/SQL) Oracle stored procedure parameter size limit (VARCHAR2)

    - by Jude Lee
    Hello there. I have a problem with my Oracle stored procedure, as follows. The code:

        CREATE OR REPLACE PROCEDURE APP.pr_ap_gateway
        (
            aiov_req   IN OUT VARCHAR2,
            aov_rep01  OUT    VARCHAR2,
            aov_rep02  OUT    VARCHAR2,
            aov_rep03  OUT    VARCHAR2,
            aov_rep04  OUT    VARCHAR2,
            aov_rep05  OUT    VARCHAR2
        )
        IS
            v_header      VARCHAR (100);
            v_case        VARCHAR (4);
            v_err_no_case VARCHAR (5) := '00004';
        BEGIN
            DBMS_OUTPUT.ENABLE (1000000);

            aov_rep01 := lpad(' ', 190, ' ');
            dbms_output.put_line('>> ['||length(aov_rep01)||']');
            aov_rep01 := lpad(' ', 199, ' ');
            dbms_output.put_line('>> ['||length(aov_rep01)||']');
            aov_rep01 := lpad(' ', 200, ' ');
            dbms_output.put_line('>> ['||length(aov_rep01)||']');
            aov_rep01 := lpad(' ', 201, ' ');
            dbms_output.put_line('>> ['||length(aov_rep01)||']');
        END pr_ap_gateway;
        /

    The results:

        >> [190]
        >> [199]
        >> [200]

    and then the error 'buffer overflow': ORA-06502: PL/SQL: numeric or value error: character string buffer too small. I know that the VARCHAR2 type can hold 32KB in PL/SQL. But in my test, the VARCHAR2 parameter holds only 200 bytes. What's wrong with this situation? This procedure will be called by a Java daemon program, so there's no declaration of parameter sizes before calling the procedure. Thanks in advance for your reply.

    Read the article

  • Do overlays/tooltips work correctly in Emacs for Windows?

    - by Cheeso
    I'm using Flymake on C# code, Emacs v22.2.1 on Windows. The Flymake stuff has been working well for me. For those who don't know, you can read an overview of flymake, but the quick story is that flymake repeatedly builds the source file you are currently working on in the background, for the purpose of syntax checking. It then highlights the compiler warnings and errors in the current buffer. Flymake didn't work for C# initially, but I "monkey-patched" it and it works nicely now. If you edit C# in Emacs, I highly recommend using flymake. The only problem I have is with the UI. Flymake highlights the errors and warnings nicely, and then inserts "overlays" with the full error or warning text. If I hover the mouse pointer over a highlighted line in the code, the overlay pops up. But as you can see, the tooltip is truncated and doesn't display correctly. Flymake seems to be doing the right thing, and the overlay seems to do the right thing; it's the tooltip that is displayed incorrectly. Do overlay tooltips work correctly in Emacs for Windows? Where do I look to fix this?

    Read the article

  • iPhone encryption not working

    - by Futur
    I have this encryption/decryption implemented, and it executes with no errors, but I am unable to decrypt the data and I'm not sure whether the data is even encrypted. Kindly help.

        - (NSData *)AES256EncryptWithKey:(NSString *)key {
            char keyPtr[kCCKeySizeAES256+1];
            bzero(keyPtr, sizeof(keyPtr));
            [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];
            NSUInteger dataLength = [strData length];
            data = [[NSData alloc] init];
            data = [strData dataUsingEncoding:NSUTF8StringEncoding];
            size_t bufferSize = dataLength + kCCBlockSizeAES128;
            void *pribuffer = malloc(bufferSize);
            size_t numBytesEncrypted = 0;
            CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
                                                  keyPtr, kCCKeySizeAES256,
                                                  NULL /* initialization vector (optional) */,
                                                  [data bytes], dataLength, /* input */
                                                  [data bytes], bufferSize, /* output */
                                                  &numBytesEncrypted);
            if (cryptStatus == kCCSuccess) {
                return [NSData dataWithBytesNoCopy:data length:numBytesEncrypted];
            }
        }

    The decryption is:

        -(NSData *)AES256DecryptWithKey:(NSString *)key andForData:(NSData *)objDataObject {
            char keyPtr[kCCKeySizeAES256+1];
            bzero(keyPtr, sizeof(keyPtr));
            [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];
            NSUInteger dataLength = [objDataObject length];
            size_t bufferSize = dataLength + kCCBlockSizeAES128;
            void *buffer = malloc(bufferSize);
            size_t numBytesDecrypted = 0;
            CCCryptorStatus cryptStatus = CCCrypt(kCCDecrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
                                                  keyPtr, kCCKeySizeAES256, NULL,
                                                  [objDataObject bytes], dataLength,
                                                  [objDataObject bytes], bufferSize,
                                                  &numBytesDecrypted);
            if (cryptStatus == kCCSuccess) {
                return [NSData dataWithBytesNoCopy:objDataObject length:numBytesDecrypted];
            }
        }

    I call the above methods like this:

        CryptoClass *obj = [CryptoClass new];
        NSData *objData = [obj AES256EncryptWithKey:@"hell"];
        NSData *val = [obj AES256DecryptWithKey:@"hell" andForData:objData];
        NSLog(@"decrypted string is : %@ AND LENGTH IS %d", [val description], [val length]);

    Decryption doesn't seem to happen at all, and I am not sure about the encryption either. Kindly help me here.
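
    A hedged diagnosis: both methods malloc an output buffer (pribuffer / buffer) but never use it; they pass the input bytes as CCCrypt's output area and then build the returned NSData from the wrong pointer. Since CCCrypt is a plain C API, here is a sketch of the corrected call shape in C (error handling trimmed; the function name is made up):

        #include <CommonCrypto/CommonCryptor.h>
        #include <stdlib.h>

        /* Returns the number of encrypted bytes; *outBuf is malloc'd and
         * owned by the caller. */
        size_t aes256_encrypt(const void *in, size_t inLen,
                              const char key[kCCKeySizeAES256], void **outBuf) {
            size_t bufferSize = inLen + kCCBlockSizeAES128; /* room for PKCS7 padding */
            void *buffer = malloc(bufferSize);
            size_t numBytesEncrypted = 0;
            CCCryptorStatus status = CCCrypt(kCCEncrypt, kCCAlgorithmAES128,
                                             kCCOptionPKCS7Padding,
                                             key, kCCKeySizeAES256, NULL /* IV */,
                                             in, inLen,          /* input */
                                             buffer, bufferSize, /* distinct output buffer */
                                             &numBytesEncrypted);
            if (status != kCCSuccess) { free(buffer); return 0; }
            *outBuf = buffer; /* return the malloc'd buffer, not the input */
            return numBytesEncrypted;
        }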

    Read the article

  • Project Euler 7 Scala Problem

    - by Nishu
    I was trying to solve Project Euler problem number 7 using Scala 2.8. The first solution I implemented takes ~8 seconds:

        def problem_7: Int = {
          var num = 17
          var primes = new ArrayBuffer[Int]()
          primes += 2
          primes += 3
          primes += 5
          primes += 7
          primes += 11
          primes += 13
          while (primes.size < 10001) {
            if (isPrime(num, primes)) primes += num
            if (isPrime(num + 2, primes)) primes += num + 2
            num += 6
          }
          return primes.last
        }

        def isPrime(num: Int, primes: ArrayBuffer[Int]): Boolean = {
          // if n == 2 return false;
          // if n == 3 return false;
          var r = Math.sqrt(num)
          for (i <- primes) {
            if (i <= r) {
              if (num % i == 0) return false
            }
          }
          return true
        }

    Later I tried the same problem without storing the prime numbers in an array buffer. This takes 0.118 seconds:

        def problem_7_alt: Int = {
          var limit = 10001
          var count = 6
          var num: Int = 17
          while (count < limit) {
            if (isPrime2(num)) count += 1
            if (isPrime2(num + 2)) count += 1
            num += 6
          }
          return num
        }

        def isPrime2(n: Int): Boolean = {
          // if n == 2 return false;
          // if n == 3 return false;
          var r = Math.sqrt(n)
          var f = 5
          while (f <= r) {
            if (n % f == 0) {
              return false
            } else if (n % (f + 2) == 0) {
              return false
            }
            f += 6
          }
          return true
        }

    I tried using various mutable array/list implementations in Scala but was not able to make the first solution faster. I would not think that storing Ints in an array of size 10001 could make the program this much slower. Is there some better way to use lists/arrays in Scala?

    Read the article

  • Retrieve Flash file post in ASP.NET

    - by Quandary
    Question: In ASP.NET, I retrieve a JPEG file as Flash post data like this:

        Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
            context.Response.ContentType = "text/plain"
            ' Retrieve a byte array from the post buffer
            Dim myBuffer As Byte() = context.Request.BinaryRead(Convert.ToInt32(context.Request.InputStream.Length))
            System.IO.File.WriteAllBytes("c:\temp\test.jpg", myBuffer)
        End Sub

    In Flash, I send it to the ASP.NET handler like this:

        var jpgSource:BitmapData = cPrint.TakeSnapshot(MovieClip(cGlobals.ccPlanZoomView));
        var bmpThisBitmap:Bitmap = new Bitmap(jpgSource);
        var nQuality:Number = 100;
        var jpgEncoder:JPGEncoder = new JPGEncoder(nQuality);
        var jpgStream:ByteArray = jpgEncoder.encode(jpgSource);
        var header:URLRequestHeader = new URLRequestHeader("Content-type", "application/octet-stream");
        // Make sure to use the correct path to jpg_encoder_download.php
        var strFileName:String = "test.jpg";
        var jpgURLRequest:URLRequest = new URLRequest("http://localhost/raumplaner_new/raumplaner_new/cgi-bin/SavePDF.ashx");
        //var scriptVars:URLVariables = new URLVariables();
        //scriptVars.fn = strFileName;
        //var myarr:Array = new Array();
        //myarr.push(jpgStream);
        //scriptVars.Files = myarr;
        jpgURLRequest.requestHeaders.push(header);
        jpgURLRequest.method = URLRequestMethod.POST;
        //jpgURLRequest.data = scriptVars;
        jpgURLRequest.data = jpgStream;
        var loader:URLLoader = new URLLoader();
        loader.dataFormat = URLLoaderDataFormat.BINARY;
        loader.load(jpgURLRequest);

    It works, but I want to send a few additional variables along via scriptVars (commented out here). How do I retrieve the JPEG file in that case? Because if I use parameters, there is no more BinaryRead... Especially, how would I read an array of JPEG files (several files)?

    Read the article
