Search Results

Search found 3754 results on 151 pages for 'vertex buffer'.

  • List<element> initialization fires "Process is terminated due to StackOverflowException"

    - by netmajor
    I have structs like the ones below, and when I run this initialization: ArrayList nodesMatrix = null; List<vertex> vertexMatrix = null; List<bool> odwiedzone = null; List<element> priorityQueue = null; vertexMatrix = new List<vertex>(nodesNr + 1); nodesMatrix = new ArrayList(nodesNr + 1); odwiedzone = new List<bool>(nodesNr + 1); priorityQueue = new List<element>(); arr.NodesMatrix = nodesMatrix; arr.VertexMatrix = vertexMatrix; arr.Odwiedzone = odwiedzone; arr.PriorityQueue = priorityQueue; // only here I get the exception; the debugger reports "Process is terminated due to StackOverflowException" :/ Any idea why this initialization throws the exception? private struct arrays { ArrayList nodesMatrix; public ArrayList NodesMatrix { get { return nodesMatrix; } set { nodesMatrix = value; } } List<vertex> vertexMatrix; public List<vertex> VertexMatrix { get { return vertexMatrix; } set { vertexMatrix = value; } } List<bool> odwiedzone; public List<bool> Odwiedzone { get { return odwiedzone; } set { odwiedzone = value; } } public List<element> PriorityQueue { get { return PriorityQueue; } set { PriorityQueue = value; } } } public struct element : IComparable { public double priority { get { return priority; } set { priority = value; } } public int node { get { return node; } set { node = value; } } public element(double _prio, int _node) { priority = _prio; node = _node; } #region IComparable Members public int CompareTo(object obj) { element elem = (element)obj; return priority.CompareTo(elem.priority); } #endregion
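    The overflow is not the collections' fault: in arrays, the PriorityQueue getter and setter return and assign the property itself rather than a backing field (element.priority and element.node do the same), so the assignment arr.PriorityQueue = priorityQueue recurses until the stack is exhausted. A minimal corrected sketch in C#, with auto-properties in place of the self-referencing accessors (names follow the question):

        public struct element : IComparable
        {
            // Auto-properties generate hidden backing fields, so the accessors
            // no longer call themselves recursively.
            public double priority { get; set; }
            public int node { get; set; }

            // A struct constructor must chain to this() before auto-properties
            // can be assigned (required by C# compilers before version 6).
            public element(double _prio, int _node) : this()
            {
                priority = _prio;
                node = _node;
            }

            public int CompareTo(object obj)
            {
                element elem = (element)obj;
                return priority.CompareTo(elem.priority);
            }
        }

    The same fix applies to arrays.PriorityQueue, which is exactly the property the failing line assigns: give it a backing field like its sibling properties, or make it an auto-property.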

  • Windows Service HTTPListener Memory Issue

    - by crawshaws
    Hi all, I'm a complete novice when it comes to the "best practices" of writing code. I tend to just write it, and if it works, why fix it? Well, that way of working is landing me in some hot water. I am writing a simple Windows service to serve a single web page. (This service will be incorporated into another project which monitors the services and some folders on a group of servers.) My problem is that whenever a request is received, the memory usage jumps up by a few KB, and it keeps going up on every request. Now, I've found that by putting GC.Collect into the mix it stops at a certain number, but I'm sure it's not meant to be used this way. I am wondering if I am missing something, or not doing something I should be doing, to free up memory. Here is the code: Public Class SimpleWebService : Inherits ServiceBase 'Set the values for the different event log types. Public Const EVENT_ERROR As Integer = 1 Public Const EVENT_WARNING As Integer = 2 Public Const EVENT_INFORMATION As Integer = 4 Public listenerThread As Thread Dim HTTPListner As HttpListener Dim blnKeepAlive As Boolean = True Shared Sub Main() Dim ServicesToRun As ServiceBase() ServicesToRun = New ServiceBase() {New SimpleWebService()} ServiceBase.Run(ServicesToRun) End Sub Protected Overrides Sub OnStart(ByVal args As String()) If Not HttpListener.IsSupported Then CreateEventLogEntry("Windows XP SP2, Server 2003, or higher is required to " & "use the HttpListener class.") Me.Stop() End If Try listenerThread = New Thread(AddressOf ListenForConnections) listenerThread.Start() Catch ex As Exception CreateEventLogEntry(ex.Message) End Try End Sub Protected Overrides Sub OnStop() blnKeepAlive = False End Sub Private Sub CreateEventLogEntry(ByRef strEventContent As String) Dim sSource As String Dim sLog As String sSource = "Service1" sLog = "Application" If Not EventLog.SourceExists(sSource) Then EventLog.CreateEventSource(sSource, sLog) End If Dim ELog As New EventLog(sLog, ".", sSource) ELog.WriteEntry(strEventContent) End Sub Public Sub ListenForConnections() HTTPListner = New HttpListener HTTPListner.Prefixes.Add("http://*:1986/") HTTPListner.Start() Do While blnKeepAlive Dim ctx As HttpListenerContext = HTTPListner.GetContext() Dim HandlerThread As Thread = New Thread(AddressOf ProcessRequest) HandlerThread.Start(ctx) HandlerThread = Nothing Loop HTTPListner.Stop() End Sub Private Sub ProcessRequest(ByVal ctx As HttpListenerContext) Dim sb As StringBuilder = New StringBuilder sb.Append("<html><body><h1>Test My Service</h1>") sb.Append("</body></html>") Dim buffer() As Byte = Encoding.UTF8.GetBytes(sb.ToString) ctx.Response.ContentLength64 = buffer.Length ctx.Response.OutputStream.Write(buffer, 0, buffer.Length) ctx.Response.OutputStream.Close() ctx.Response.Close() sb = Nothing buffer = Nothing ctx = Nothing 'This line seems to keep the mem leak down 'System.GC.Collect() End Sub End Class Please feel free to criticise and tear the code apart, but please BE KIND. I have admitted I don't tend to follow best practice when it comes to coding.
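    Two things in that loop are worth checking before reaching for GC.Collect: a brand-new Thread is created per request (each thread reserves roughly 1 MB of stack, which reads as ever-growing memory), and a rising working set is not by itself a leak, since the GC collects lazily. A hedged C# sketch of the same loop (the question is VB.NET, so this is an illustration rather than a drop-in, and the names are mine):

        using System.IO;
        using System.Net;
        using System.Text;
        using System.Threading;

        class ListenerSketch
        {
            static volatile bool keepAlive = true;

            static void ListenForConnections()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://*:1986/");
                listener.Start();
                while (keepAlive)
                {
                    HttpListenerContext ctx = listener.GetContext();
                    // Pool threads are reused instead of allocating a fresh
                    // stack for every request.
                    ThreadPool.QueueUserWorkItem(_ =>
                    {
                        byte[] body = Encoding.UTF8.GetBytes(
                            "<html><body><h1>Test My Service</h1></body></html>");
                        ctx.Response.ContentLength64 = body.Length;
                        using (Stream output = ctx.Response.OutputStream)
                        {
                            output.Write(body, 0, body.Length);
                        }
                        ctx.Response.Close();
                    });
                }
                listener.Stop();
            }
        }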

  • .NET C# FileStream writing to a file and reading the file

    - by pythonrg7
    I have a web service that checks a dictionary to see if a file exists; if it does, it reads the file, otherwise it saves to the file. This is from a web app. I wonder what the best way to do this is, because I occasionally get a FileNotFoundException if the same file is accessed at the same time. Here are the relevant parts of the code: String signature; signature = "FILE," + value1 + "," + value2 + "," + value3 + "," + value4; // this is going to be the filename string result; MultipleRecordset mrSummary = new MultipleRecordset(); // MultipleRecordset is an object that retrieves data from a sql server database if (mrSummary.existsFile(signature)) { result = mrSummary.retrieveFile(signature); } else { result = mrSummary.getMultipleRecordsets(System.Configuration.ConfigurationManager.ConnectionStrings["MyConnectionString"].ConnectionString.ToString(), value1, value2, value3, value4); mrSummary.saveFile(signature, result); } Here's the code to see if the file already exists: private static Dictionary dict = new Dictionary(); public bool existsFile(string signature) { if (dict.ContainsKey(signature)) { return true; } else { return false; } } Here's what I use to retrieve the file if it already exists: try { byte[] buffer; FileStream fileStream = new FileStream(@System.Configuration.ConfigurationManager.AppSettings["CACHEPATH"] + filename, FileMode.Open, FileAccess.Read, FileShare.Read); try { int length = 0x8000; // get file length buffer = new byte[length]; // create buffer int count; // actual number of bytes read JSONstring = ""; while ((count = fileStream.Read(buffer, 0, length)) > 0) { JSONstring += System.Text.ASCIIEncoding.ASCII.GetString(buffer, 0, count); } } finally { fileStream.Close(); } } catch (Exception e) { JSONstring = "{\"error\":\"" + e.ToString() + "\"}"; } If the file doesn't already exist, it saves the JSON to the file: try { if (dict.ContainsKey(filename) == false) { dict.Add(filename, true); } else { this.retrieveFile(filename, ipaddress); } } catch { } try { TextWriter tw = new StreamWriter(@System.Configuration.ConfigurationManager.AppSettings["CACHEPATH"] + filename); tw.WriteLine(JSONstring); tw.Close(); } catch { } Here are the details of the exception I sometimes get from running the above code: System.IO.FileNotFoundException: Could not find file 'E:\inetpub\wwwroot\cache\FILE,36,36.25,14.5,14.75'. File name: 'E:\inetpub\wwwroot\cache\FILE,36,36.25,14.5,14.75' at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy) at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share) at com.myname.business.MultipleRecordset.retrieveFile(String filename, String ipaddress)
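    The race is visible in the save path: the signature is added to the static dictionary before the file has been written, so a concurrent request can see the key, take the read path, and hit FileNotFoundException while the writer is still working (the dictionary is also per-process, so a web garden would bypass it entirely). A hedged sketch of one way to close the gap inside a single process; the names here are mine:

        using System;
        using System.IO;

        static class JsonFileCache
        {
            static readonly object Gate = new object();

            // produce() is the expensive call (the multiple-recordset query).
            public static string GetOrCreate(string path, Func<string> produce)
            {
                lock (Gate)
                {
                    // Trust the file system rather than a separate flag that can
                    // be set before the file is fully written.
                    if (File.Exists(path))
                        return File.ReadAllText(path);

                    string json = produce();
                    File.WriteAllText(path, json); // complete before the lock is released
                    return json;
                }
            }
        }

    A per-signature lock would reduce contention, and multiple worker processes would still need FileShare/retry handling, but the shape of the fix is the same: make the check and the write one atomic step.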

  • OpenGL suppresses exceptions in MFC dialog-based application

    - by Mikhail
    Hello. I have an MFC-driven dialog-based application created with MSVS2005. Here is my problem, step by step. I have a button on my dialog and a corresponding click handler with code like this: int* i = 0; *i = 3; I'm running the debug version of the program, and when I click the button, Visual Studio grabs focus and reports an "Access violation writing location" exception; the program cannot recover from the error and all I can do is stop debugging. And this is the right behavior. Now I add some OpenGL initialization code to the OnInitDialog() method: HDC DC = GetDC(GetSafeHwnd()); static PIXELFORMATDESCRIPTOR pfd = { sizeof(PIXELFORMATDESCRIPTOR), // size of this pfd 1, // version number PFD_DRAW_TO_WINDOW | // support window PFD_SUPPORT_OPENGL | // support OpenGL PFD_DOUBLEBUFFER, // double buffered PFD_TYPE_RGBA, // RGBA type 24, // 24-bit color depth 0, 0, 0, 0, 0, 0, // color bits ignored 0, // no alpha buffer 0, // shift bit ignored 0, // no accumulation buffer 0, 0, 0, 0, // accum bits ignored 32, // 32-bit z-buffer 0, // no stencil buffer 0, // no auxiliary buffer PFD_MAIN_PLANE, // main layer 0, // reserved 0, 0, 0 // layer masks ignored }; int pixelformat = ChoosePixelFormat(DC, &pfd); SetPixelFormat(DC, pixelformat, &pfd); HGLRC hrc = wglCreateContext(DC); ASSERT(hrc != NULL); wglMakeCurrent(DC, hrc); Of course this is not exactly what I do; it is a simplified version of my code. Well, now the strange things begin to happen: all initialization is fine, there are no errors in OnInitDialog(), but when I click the button... no exception is thrown. Nothing happens. At all. If I set a breakpoint at *i = 3; and press F11 on it, the handler function halts immediately and focus is returned to the application, which continues to work well. I can click the button again and the same thing will happen. It seems as if someone handled the access-violation exception and silently returned execution to the main message-receiving loop. If I comment out the line wglMakeCurrent(DC, hrc); everything works as before: the exception is thrown, Visual Studio catches it and shows a window with an error message, and the program must be terminated afterwards. I experience this problem under Windows 7 64-bit, on an NVIDIA GeForce 8800 with the latest drivers (of 11.01.2010) from the vendor's website installed. My colleague has Windows Vista 32-bit and has no such problem: the exception is thrown and the application crashes in both cases. Well, I hope good people can help me :) PS The problem was originally posted under this topic.

  • Inserting newlines into a GtkTextView widget (GTK+ programming)

    - by Mark Roberts
    I've got a button which, when clicked, copies and appends the text from a GtkEntry widget into a GtkTextView widget. (This code is a modified version of an example found in "The Text View Widget" chapter of Foundations of GTK+ Development.) I'm looking to insert a newline character before the text that gets copied and appended, so that each line of text ends up on its own line in the GtkTextView widget. How would I do this? I'm brand new to GTK+. Here's the code sample: #include <gtk/gtk.h> typedef struct { GtkWidget *entry, *textview; } Widgets; static void insert_text (GtkButton*, Widgets*); int main (int argc, char *argv[]) { GtkWidget *window, *scrolled_win, *hbox, *vbox, *insert; Widgets *w = g_slice_new (Widgets); gtk_init (&argc, &argv); window = gtk_window_new (GTK_WINDOW_TOPLEVEL); gtk_window_set_title (GTK_WINDOW (window), "Text Iterators"); gtk_container_set_border_width (GTK_CONTAINER (window), 10); gtk_widget_set_size_request (window, -1, 200); w->textview = gtk_text_view_new (); w->entry = gtk_entry_new (); insert = gtk_button_new_with_label ("Insert Text"); g_signal_connect (G_OBJECT (insert), "clicked", G_CALLBACK (insert_text), (gpointer) w); scrolled_win = gtk_scrolled_window_new (NULL, NULL); gtk_container_add (GTK_CONTAINER (scrolled_win), w->textview); hbox = gtk_hbox_new (FALSE, 5); gtk_box_pack_start_defaults (GTK_BOX (hbox), w->entry); gtk_box_pack_start_defaults (GTK_BOX (hbox), insert); vbox = gtk_vbox_new (FALSE, 5); gtk_box_pack_start (GTK_BOX (vbox), scrolled_win, TRUE, TRUE, 0); gtk_box_pack_start (GTK_BOX (vbox), hbox, FALSE, TRUE, 0); gtk_container_add (GTK_CONTAINER (window), vbox); gtk_widget_show_all (window); gtk_main(); return 0; } /* Insert the text from the GtkEntry into the GtkTextView. */ static void insert_text (GtkButton *button, Widgets *w) { GtkTextBuffer *buffer; GtkTextMark *mark; GtkTextIter iter; const gchar *text; buffer = gtk_text_view_get_buffer (GTK_TEXT_VIEW (w->textview)); text = gtk_entry_get_text (GTK_ENTRY (w->entry)); mark = gtk_text_buffer_get_insert (buffer); gtk_text_buffer_get_iter_at_mark (buffer, &iter, mark); gtk_text_buffer_insert (buffer, &iter, text, -1); } You can compile it with this command (assuming the file is named file.c): gcc file.c -o file `pkg-config --cflags --libs gtk+-2.0` Thanks everybody!

  • A two way minimum spanning tree of a directed graph

    - by mvid
    Given a directed graph with weighted edges, what algorithm can be used to give a sub-graph that has minimum weight, but allows movement from any vertex to any other vertex in the graph (under the assumption that paths between any two vertices always exist). Does such an algorithm exist?

  • Switch between speakerphone and headset on Android

    - by user210504
    Hi! I would like to know if there is a way to switch between the speaker and the headset dynamically in an Android application. I am using this sample code, which I found online, for my experiments: final float frequency = 440; float increment = (float)(2*Math.PI) * frequency / 44100; // angular increment for each sample float angle = 0; AndroidAudioDevice device = new AndroidAudioDevice( ); AudioManager am = (AudioManager)getSystemService(AUDIO_SERVICE); am.setMode(AudioManager.MODE_IN_CALL); float samples[] = new float[1024]; int count = 0; while( count < 10 ) { count++; for( int i = 0; i < samples.length; i++ ) { samples[i] = (float)Math.sin( angle ) ; angle += increment; } device.writeSamples( samples ); } device.stop(); am.setMode(AudioManager.MODE_NORMAL); Here is the next class: public class AndroidAudioDevice { AudioTrack track; short[] buffer = new short[1024]; public AndroidAudioDevice( ) { int minSize =AudioTrack.getMinBufferSize( 44100, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT ); track = new AudioTrack( AudioManager.STREAM_VOICE_CALL, 44100, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, minSize, AudioTrack.MODE_STREAM); track.play(); } public void writeSamples(float[] samples) { fillBuffer( samples ); track.write( buffer, 0, samples.length ); } private void fillBuffer( float[] samples ) { if( buffer.length < samples.length ) buffer = new short[samples.length]; for( int i = 0; i < samples.length; i++ ) buffer[i] = (short)(samples[i] * Short.MAX_VALUE); } public void stop() { track.stop(); } } As I understand it, this should play audio on the headset, because we have not enabled the speakerphone. However, the audio is playing on the speakerphone. 1. Am I doing something wrong here? 2. What would be a way to switch between the internal speaker and the speakerphone dynamically for the same piece of code? Any help will be appreciated.

  • HttpWebResponse gets mixed up when used inside multiple threads

    - by Holli
    In my application I have a few threads that get data from a web service. Basically I just open a URL and get XML output. I have a few threads that do this continuously, but with different URLs. Sometimes the results get mixed up: the XML output doesn't belong to the URL of one thread but to the URL of another thread. In each thread I create an instance of the class GetWebPage and call the method Get on this instance. The method is very simple and based mostly on code from the MSDN documentation. (See below; I removed my error handling here!) public string Get(string userAgent, string url, string user, string pass, int timeout, int readwriteTimeout, WebHeaderCollection whc) { string buffer = string.Empty; HttpWebRequest myWebRequest = (HttpWebRequest)WebRequest.Create(url); if (!string.IsNullOrEmpty(userAgent)) myWebRequest.UserAgent = userAgent; myWebRequest.Timeout = timeout; myWebRequest.ReadWriteTimeout = readwriteTimeout; myWebRequest.Credentials = new NetworkCredential(user, pass); string[] headers = whc.AllKeys; foreach (string s in headers) { myWebRequest.Headers.Add(s, whc.Get(s)); } using (HttpWebResponse myWebResponse = (HttpWebResponse)myWebRequest.GetResponse()) { using (Stream ReceiveStream = myWebResponse.GetResponseStream()) { Encoding encode = Encoding.GetEncoding("utf-8"); StreamReader readStream = new StreamReader(ReceiveStream, encode); // Read 1024 characters at a time. Char[] read = new Char[1024]; int count = readStream.Read(read, 0, 1024); int break_counter = 0; while (count > 0 && break_counter < 10000) { String str = new String(read, 0, count); buffer += str; count = readStream.Read(read, 0, 1024); break_counter++; } } } return buffer; As you can see, I have no public properties or any other shared resources. At least I don't see any. The url is the service I call on the internet, and buffer is the XML output from the server. Like I said, I have multiple instances of this class/method in a few threads (10 to 12), and sometimes buffer does not belong to the URL of the same thread but to another thread.
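    Nothing inside Get itself shares state across calls, so the usual suspects sit at the call sites: a single GetWebPage instance whose fields hold url or buffer, a WebHeaderCollection or credential object reused by several threads, or results copied into a shared variable after the call returns. A hedged C# sketch that keeps every object local to the call and disposes the reader (illustrative only, not a claim about where the question's bug actually is):

        using System.IO;
        using System.Net;

        static class WebFetch
        {
            // Everything is a local: no instance state can bleed between threads.
            public static string Get(string url, string user, string pass, int timeoutMs)
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                request.Timeout = timeoutMs;
                request.Credentials = new NetworkCredential(user, pass);

                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                using (Stream stream = response.GetResponseStream())
                using (StreamReader reader = new StreamReader(stream))
                {
                    return reader.ReadToEnd(); // StreamReader defaults to UTF-8
                }
            }
        }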

  • java double buffering problem

    - by russell
    What's wrong with my applet code? It does not render the double buffering correctly. I have tried and tried but failed to find a solution. Please, can someone tell me what's wrong with my code? import java.applet.* ; import java.awt.* ; import java.awt.event.* ; public class Ball extends Applet implements Runnable { // Initialization of the variables int x_pos = 10; // x-position of the ball int y_pos = 100; // y-position of the ball int radius = 20; // radius of the ball Image buffer=null; //Graphics graphic=null; int w,h; public void init() { Dimension d=getSize(); w=d.width; h=d.height; buffer=createImage(w,h); //graphic=buffer.getGraphics(); setBackground (Color.black); } public void start () { // Create a new thread in which the game runs Thread th = new Thread (this); // Start the thread th.start (); } public void stop() { } public void destroy() { } public void run () { // Lower the thread priority to make drawing easier Thread.currentThread().setPriority(Thread.MIN_PRIORITY); // While this is true, the thread keeps running while (true) { // Change the x-coordinate repaint(); x_pos++; y_pos++; //x2--; //y2--; // Redraw the applet if(x_pos>410) x_pos=20; if(y_pos>410) y_pos=20; try { Thread.sleep (30); } catch (InterruptedException ex) { // do nothing } Thread.currentThread().setPriority(Thread.MAX_PRIORITY); } } public void paint (Graphics g) { Graphics screen=null; screen=g; g=buffer.getGraphics(); g.setColor(Color.red); g.fillOval(x_pos - radius, y_pos - radius, 2 * radius, 2 * radius); g.setColor(Color.green); screen.drawImage(buffer,0,0,this); } public void update(Graphics g) { paint(g); } } What change should I make? When the offscreen image is drawn, the previous image also remains on the screen. How do I erase the previous image from the screen?

  • passing pipe to threads

    - by alaamh
    I see that it's easy to open a pipe between two processes using fork, but how can we pass an open pipe to threads? Assume we need to pass the output of PROGRAM A to PROGRAM B (maybe to more than one thread), and PROGRAM B sends its output to PROGRAM C. #include <stdio.h> #include <stdlib.h> #include <pthread.h> struct targ_s { int fd_reader; }; void *thread1(void *arg) { struct targ_s *targ = (struct targ_s*) arg; int status, fd[2]; pid_t pid; pipe(fd); pid = fork(); if (pid == 0) { dup2(STDIN_FILENO, targ->fd_reader); close(fd[0]); dup2(fd[1], STDOUT_FILENO); close(fd[1]); execvp ("PROGRAM B", NULL); exit(1); } else { close(fd[1]); dup2(fd[0], STDIN_FILENO); close(fd[0]); execl("PROGRAM C", NULL); wait(&status); return NULL; } } int main(void) { FILE *fpipe; char *command = "PROGRAM A"; char buffer[1024]; if (!(fpipe = (FILE*) popen(command, "r"))) { perror("Problems with pipe"); exit(1); } char* outfile = "out.dat"; FILE* f = fopen (outfile, "wb"); int fd = fileno( f ); struct targ_s targ; targ.fd_reader = fd; pthread_t thid; if (pthread_create(&thid, NULL, thread1, &targ) != 0) { perror("pthread_create() error"); exit(1); } int len; while (read(fpipe, buffer, sizeof (buffer)) != 0) { len = strlen(buffer); write(fd, buffer, len); } pclose(fpipe); return (0); }

  • Creating a Haskell Empty Set

    - by mvid
    I am attempting to pass back a Node type from this function, but I get the error that empty is out of scope: import Data.Set (Set) import qualified Data.Set as Set data Node = Vertex String (Set Node) deriving Show toNode :: String -> Node toNode x = Vertex x empty What am I doing wrong?

  • Jumping into argv?

    - by jth
    Hi, I'm experimenting with shellcode and stumbled upon the NOP-slide technique. I wrote a little tool that takes the buffer size as a parameter and constructs a buffer like this: [ NOP | SC | RET ], with NOP taking half of the buffer, followed by the shellcode and the rest filled with the (guessed) return address. It's very similar to the tool aleph1 described in his famous paper. My vulnerable test app is the same as in his paper: int main(int argc, char **argv) { char little_array[512]; if(argc>1) strcpy(little_array,argv[1]); return 0; } I tested it and, well, it works: jth@insecure:~/no_nx_no_aslr$ ./victim $(./exploit 604 0) $ exit But honestly, I have no idea why. Okay, the saved eip was overwritten as intended, but instead of jumping somewhere into the buffer, it jumped into argv, I think. gdb showed the following addresses before strcpy() was called: (gdb) i f Stack level 0, frame at 0xbffff1f0: eip = 0x80483ed in main (victim.c:7); saved eip 0x154b56 source language c. Arglist at 0xbffff1e8, args: argc=2, argv=0xbffff294 Locals at 0xbffff1e8, Previous frame's sp is 0xbffff1f0 Saved registers: ebp at 0xbffff1e8, eip at 0xbffff1ec Address of little_array: (gdb) print &little_array[0] $1 = 0xbfffefe8 "\020" After strcpy(): (gdb) i f Stack level 0, frame at 0xbffff1f0: eip = 0x804840d in main (victim.c:10); saved eip 0xbffff458 source language c. Arglist at 0xbffff1e8, args: argc=-1073744808, argv=0xbffff458 Locals at 0xbffff1e8, Previous frame's sp is 0xbffff1f0 Saved registers: ebp at 0xbffff1e8, eip at 0xbffff1ec So, what happened here? I used a 604-byte buffer to overflow little_array, so it certainly overwrote the saved ebp, the saved eip, argc, and also argv with the guessed address 0xbffff458. Then, after returning, EIP pointed at 0xbffff458. But little_array resides at 0xbfffefe8; that's a difference of 1136 bytes, so it certainly isn't executing little_array. I followed execution with the stepi command and, well, at 0xbffff458 and onwards it executes NOPs and reaches the shellcode. I'm not quite sure why this is happening. First of all, am I correct that it executes my shellcode in argv, not in little_array? And where does the loader(?) place argv on the stack? I thought it followed immediately after argc, but between argc and 0xbffff458 there is a gap of 620 bytes. How is it possible that it successfully "lands" in the NOP pad at address 0xbffff458, which is way above the saved eip at 0xbffff1ec? Can someone clarify this? I actually have no idea why this is working. My test machine is an Ubuntu 9.10 32-bit machine without ASLR. victim has an executable stack, set with execstack -s. Thanks in advance.

  • I need help converting a C# string from one character encoding to another?

    - by Handleman
    According to Spolsky I can't call myself a developer, so there is a lot of shame behind this question... Scenario: From a C# application, I would like to take a string value from a SQL DB and use it as the name of a directory. I have a secure (SSL) FTP server on which I want to set the current directory using the string value from the DB. Problem: Everything is working fine until I hit a string value with a "special" character; I seem unable to encode the directory name correctly to satisfy the FTP server. The code example below: uses the "special" character é as an example; uses WinSCP as an external application for the FTPS comms; does not show all the code required to set up the Process "_winscp"; sends commands to the WinSCP exe by writing to the process's standard input; for simplicity, does not get the info from the DB but instead simply declares a string (I did do a .Equals to confirm that the value from the DB is the same as the declared string); makes three attempts to set the current directory on the FTP server using different string encodings, all of which fail; and makes one attempt to set the directory using a string created from a hand-crafted byte array, which works. Process _winscp = new Process(); byte[] buffer; string nameFromString = "Sinéad O'Connor"; _winscp.StandardInput.WriteLine("cd \"" + nameFromString + "\""); buffer = Encoding.UTF8.GetBytes(nameFromString); _winscp.StandardInput.WriteLine("cd \"" + Encoding.UTF8.GetString(buffer) + "\""); buffer = Encoding.ASCII.GetBytes(nameFromString); _winscp.StandardInput.WriteLine("cd \"" + Encoding.ASCII.GetString(buffer) + "\""); byte[] nameFromBytes = new byte[] { 83, 105, 110, 130, 97, 100, 32, 79, 39, 67, 111, 110, 110, 111, 114 }; _winscp.StandardInput.WriteLine("cd \"" + Encoding.Default.GetString(nameFromBytes) + "\""); The UTF8 encoding changes é to 101 (decimal), but the FTP server doesn't like it. The ASCII encoding changes é to 63 (decimal), but the FTP server doesn't like it. When I represent é as the value 130 (decimal), the FTP server is happy, except I can't find a method that will do this for me (I had to manually construct the string from explicit bytes). Does anyone know what I should do to my string to encode é as 130, make the FTP server happy, and finally elevate me to level-1 developer by explaining the one single thing a developer should understand?
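    Byte 130 (0x82) is 'é' in the IBM OEM code pages (437 and 850), so the hand-crafted array that the server accepts is simply OEM-encoded text. A short sketch of the conversion, assuming code page 437 (it is worth confirming which code page the FTP server actually expects):

        using System;
        using System.Text;

        class OemEncodingDemo
        {
            static void Main()
            {
                string name = "Sinéad O'Connor";

                // The OEM code page maps 'é' to 130, unlike UTF-8 or ASCII.
                // (On .NET Core/.NET 5+, register CodePagesEncodingProvider first.)
                byte[] oem = Encoding.GetEncoding(437).GetBytes(name);
                Console.WriteLine(oem[3]); // prints 130

                // Decode the same way to round-trip the string.
                Console.WriteLine(Encoding.GetEncoding(437).GetString(oem));
            }
        }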

  • Thread feeding other MultiThreading

    - by alaamh
    I see that it's easy to open a pipe between two processes using fork, but how can we pass an open pipe to threads? Assume we need to pass the output of PROGRAM A to PROGRAM B (maybe to more than one thread), and PROGRAM B sends its output to PROGRAM C. #include <stdio.h> #include <stdlib.h> #include <pthread.h> struct targ_s { char* reader; }; void *thread1(void *arg) { struct targ_s *targ = (struct targ_s*) arg; int status, fd[2]; pid_t pid; pipe(fd); pid = fork(); if (pid == 0) { int fd = fileno( targ->fd_reader ); dup2(STDIN_FILENO, fd); close(fd[0]); dup2(fd[1], STDOUT_FILENO); close(fd[1]); execvp ("PROGRAM B", NULL); exit(1); } else { close(fd[1]); dup2(fd[0], STDIN_FILENO); close(fd[0]); execl("PROGRAM C", NULL); wait(&status); return NULL; } } int main(void) { FILE *fpipe; char *command = "PROGRAM A"; char buffer[1024]; if (!(fpipe = (FILE*) popen(command, "r"))) { perror("Problems with pipe"); exit(1); } char* outfile = "out.dat"; FILE* f = fopen (outfile, "wb"); int fd = fileno( f ); struct targ_s targ; targ.fd_reader = outfile; pthread_t thid; if (pthread_create(&thid, NULL, thread1, &targ) != 0) { perror("pthread_create() error"); exit(1); } int len; while (read(fpipe, buffer, sizeof (buffer)) != 0) { len = strlen(buffer); write(fd, buffer, len); } pclose(fpipe); return (0); }

  • Missing part of the image when taking screenshot while supporting Retina Display

    - by Spaft
    I'm currently working on enabling support for Retina display for my game. In the game, we have a feature where the user can take a screenshot. We are using this piece of code, which we found online a while ago, and it works fine when we are not supporting Retina display: CCDirector* director = [CCDirector sharedDirector]; CGSize size = [director winSizeInPixels]; //Create buffer for pixels GLuint bufferLength = size.width * size.height * 4; GLubyte* buffer = (GLubyte*)malloc(bufferLength); //Read Pixels from OpenGL glReadPixels(0, 100, size.width, size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer); //Make data provider with data. CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL); //Configure image int bitsPerComponent = 8; int bitsPerPixel = 32; int bytesPerRow = 4 * size.width; CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB(); CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault; CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault; CGImageRef iref = CGImageCreate(size.width, size.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent); uint32_t* pixels = (uint32_t*)malloc(bufferLength); CGContextRef context = CGBitmapContextCreate(pixels, size.width, size.height, 8, size.width * 4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big); CGContextTranslateCTM(context, 0, size.height); CGContextScaleCTM(context, 1.0f, -1.0f); switch (director.deviceOrientation) { case CCDeviceOrientationPortrait: break; case CCDeviceOrientationPortraitUpsideDown: CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(180)); CGContextTranslateCTM(context, -size.width, -size.height); break; case CCDeviceOrientationLandscapeLeft: CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(-90)); CGContextTranslateCTM(context, -size.width, 0); break; case CCDeviceOrientationLandscapeRight: CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(90)); CGContextTranslateCTM(context, size.width * 0.5f, -size.height); break; } CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, size.width, size.height), iref); UIImage *outputImage = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)]; //Dealloc CGDataProviderRelease(provider); CGImageRelease(iref); CGContextRelease(context); free(buffer); free(pixels); return outputImage; But when we enabled Retina display in cocos2d 0.99.5, this functionality got a little messed up: the screenshot is missing a little of the left part of the image, while the height is still correct. So I'm wondering, is there anything wrong with the code, or am I doing something wrong here? Thank you in advance for any reply!

  • Getting level values from PCM raw data using Core Audio

    - by John
    I am trying to extract level data from a PCM audio file using core audio. I have gotten as far as (I believe) getting the raw data into a byte array (UInt8) but it is 16 bit PCM data and I am having trouble reading the data out. The input is from the iPhone microphone, which I have set as: [recordSetting setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM] forKey:AVFormatIDKey]; [recordSetting setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey]; [recordSetting setValue:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey]; [recordSetting setValue:[NSNumber numberWithInt:16] forKey:AVLinearPCMBitDepthKey]; [recordSetting setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsBigEndianKey]; [recordSetting setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsFloatKey]; which is obviously 16 bits. I am then trying to just print out a few values to see if they look reasonable for debug purposes below, and they do not look reasonable (many 0's). ExtAudioFileRef inputFile = NULL; ExtAudioFileOpenURL(track.location, &inputFile); AudioStreamBasicDescription inputFileFormat; UInt32 dataSize = (UInt32)sizeof(inputFileFormat); ExtAudioFileGetProperty(inputFile, kExtAudioFileProperty_FileDataFormat, &dataSize, &inputFileFormat); UInt8 *buffer = malloc(BUFFER_SIZE); AudioBufferList bufferList; bufferList.mNumberBuffers = 1; bufferList.mBuffers[0].mNumberChannels = 1; bufferList.mBuffers[0].mData = buffer; //pointer to buffer of audio data bufferList.mBuffers[0].mDataByteSize = BUFFER_SIZE; //number of bytes in the buffer while(true) { UInt32 frameCount = (bufferList.mBuffers[0].mDataByteSize / inputFileFormat.mBytesPerFrame); // Read a chunk of input OSStatus status = ExtAudioFileRead(inputFile, &frameCount, &bufferList); // If no frames were returned, conversion is finished if(0 == frameCount) break; NSLog(@"---"); int16_t *bufferl = &buffer; for(int i=0;i<100;i++){ //const int16_t *bufferl = bufferl[i]; NSLog(@"%d",bufferl[i]); } } Not sure what I am doing wrong, I think it has to do with reading the byte array. Sorry for the long code post...

  • Best way to render Tessellated Objects (OpenGL)

    - by user146780
    I'm using the GLUTesselator for polygons. Right now the vertex callback does glVertex2f and glTexCoord2f. Would it be better simply to collect the vertices from the vertex callback in a std::vector and then use glDrawArrays()? Or would this actually be less efficient, since it has to put the verts and texture coordinates in a vector? Thanks

  • C++ program debugs well with Cygwin4 (under NetBeans 7.2) but not with MinGW (under Qt 4.8.1)

    - by GoldenAxe
    I have a C++ program which takes a map text file and outputs it to a graph data structure I have made. I am using Qt, as I needed a cross-platform program with a GUI, as well as a visual representation of the map. I have several maps of different sizes (8x8 to 4096x4096). I am using an unordered_map with a vector as the key and a vertex as the value, and I pass hash (1) and equal functions, which I wrote, to the unordered_map on creation. Under Qt I am debugging my program with Qt 4.8.1 for desktop MinGW (Qt SDK); the program works and debugs well until I try the largest map of 4096x4096. Then the program gets stuck with the following error: "the inferior stopped because it received a signal from the operating system". When debugging, the program halts at the hash function used inside the unordered_map, not as part of an insertion but at a getter (2). Under NetBeans IDE 7.2 and Cygwin4 everything works fine (debug and run). Some code info: typedef std::vector<double> coordinate; typedef std::unordered_map<coordinate const*, Vertex<Element>*, container_hash, container_equal> vertexsContainer; vertexsContainer *m_vertexes (1) hash function: struct container_hash { size_t operator()(coordinate const *cord) const { size_t sum = 0; std::ostringstream ss; for ( auto it = cord->begin() ; it != cord->end() ; ++it ) { ss << *it; } sum = std::hash<std::string>()(ss.str()); return sum; } }; (2) the getter: template <class Element> Vertex<Element> *Graph<Element>::getVertex(const coordinate &cord) { try { Vertex<Element> *v = m_vertexes->at(&cord); return v; } catch (std::exception& e) { return NULL; } } I was thinking at first that maybe it was a memory issue, so before trying NetBeans I checked it with Qt on my friend's PC with 16 GB of RAM and got the same error. Thanks.

  • Why do my pyramids fade to black and then back to colour again

    - by geminiCoder
    I have the following vertices and normals: GLfloat verts[36] = { -0.5, 0, 0.5, 0, 0, -0.5, 0.5, 0, 0.5, 0, 0, -0.5, 0.5, 0, 0.5, 0, 1, 0, -0.5, 0, 0.5, 0, 0, -0.5, 0, 1, 0, 0.5, 0, 0.5, -0.5, 0, 0.5, 0, 1, 0 }; GLfloat norms[36] = { 0, -1, 0, 0, -1, 0, 0, -1, 0, -1, 0.25, 0.5, -1, 0.25, 0.5, -1, 0.25, 0.5, 1, 0.25, -0.5, 1, 0.25, -0.5, 1, 0.25, -0.5, 0, -0.5, -1, 0, -0.5, -1, 0, -0.5, -1 }; I am writing my first OpenGL game, but I need to know for sure whether my normals are correct, as the colours aren't rendering correctly: my pyramids are coloured, then fade to black every half rotation, and then back again. My app so far is based on the boilerplate code provided by Apple. Here's my modified setUp method: [EAGLContext setCurrentContext:self.context]; [self loadShaders]; self.effect = [[GLKBaseEffect alloc] init]; self.effect.light0.enabled = GL_TRUE; self.effect.light0.diffuseColor = GLKVector4Make(1.0f, 0.4f, 0.4f, 1.0f); glEnable(GL_DEPTH_TEST); glGenVertexArraysOES(1, &_vertexArray); //create vertex array glBindVertexArrayOES(_vertexArray); glGenBuffers(1, &_vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(verts) + sizeof(norms), NULL, GL_STATIC_DRAW); //create vertex buffer big enough for both verts and norms and pass NULL as data.. uint8_t *ptr = (uint8_t *)glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES); //map buffer to pass data to it memcpy(ptr, verts, sizeof(verts)); //copy verts memcpy(ptr+sizeof(verts), norms, sizeof(norms)); //copy norms to position after verts glUnmapBufferOES(GL_ARRAY_BUFFER); glEnableVertexAttribArray(GLKVertexAttribPosition); glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0)); //tell GL where verts are in buffer glEnableVertexAttribArray(GLKVertexAttribNormal); glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(sizeof(verts))); //tell GL where norms are in buffer glBindVertexArrayOES(0); And the update method: - (void)update { float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height); GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f); self.effect.transform.projectionMatrix = projectionMatrix; GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f); baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f); // Compute the model view matrix for the object rendered with GLKit GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); self.effect.transform.modelviewMatrix = modelViewMatrix; // Compute the model view matrix for the object rendered with ES2 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); _normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL); _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix); _rotation += self.timeSinceLastUpdate * 0.5f; } But provided I understand this correctly, one pyramid is using the GLKit base effect shaders and the other the shaders which are included in the project. So for both of them to have the same error, I thought it would be the norms?

  • BlockingCollection having issues with byte arrays

    - by MJLaukala
    I am having an issue where an object containing a byte[20] is passed into a BlockingCollection on one thread, and another thread then takes the object out with BlockingCollection.Take() and finds a byte[0]. I think this is a threading issue, but I do not know where or why it is happening, considering that BlockingCollection is a concurrent collection. Sometimes on thread2, myclass2.mybytes equals byte[0]. Any information on how to fix this is greatly appreciated. MessageBuffer.cs public class MessageBuffer : BlockingCollection<Message> { } In the class that has Listener() and ReceivedMessageHandler(object messageProcessor) private MessageBuffer RecievedMessageBuffer; On Thread1 private void Listener() { while (this.IsListening) { try { Message message = Message.ReadMessage(this.Stream, this); if (message != null) { this.RecievedMessageBuffer.Add(message); } } catch (IOException ex) { if (!this.Client.Connected) { this.OnDisconnected(); } else { Logger.LogException(ex.ToString()); this.OnDisconnected(); } } catch (Exception ex) { Logger.LogException(ex.ToString()); this.OnDisconnected(); } } } Message.ReadMessage(NetworkStream stream, iTcpConnectClient client) public static Message ReadMessage(NetworkStream stream, iTcpConnectClient client) { int ClassType = -1; Message message = null; try { ClassType = stream.ReadByte(); if (ClassType == -1) { return null; } if (!Message.IDTOCLASS.ContainsKey((byte)ClassType)) { throw new IOException("Class type not found"); } message = Message.GetNewMessage((byte)ClassType); message.Client = client; message.ReadData(stream); if (message.Buffer.Length < message.MessageSize + Message.HeaderSize) { return null; } } catch (IOException ex) { Logger.LogException(ex.ToString()); throw ex; } catch (Exception ex) { Logger.LogException(ex.ToString()); //throw ex; } return message; } On Thread2 private void ReceivedMessageHandler(object messageProcessor) { if (messageProcessor != null) { while (this.IsListening) { Message message = this.RecievedMessageBuffer.Take(); message.Reconstruct(); message.HandleMessage(messageProcessor); } } else { while (this.IsListening) { Message message = this.RecievedMessageBuffer.Take(); message.Reconstruct(); message.HandleMessage(); } } } PlayerStateMessage.cs public class PlayerStateMessage : Message { public GameObject PlayerState; public override int MessageSize { get { return 12; } } public PlayerStateMessage() : base() { this.PlayerState = new GameObject(); } public PlayerStateMessage(GameObject playerState) { this.PlayerState = playerState; } public override void Reconstruct() { this.PlayerState.Poisiton = this.GetVector2FromBuffer(0); this.PlayerState.Rotation = this.GetFloatFromBuffer(8); base.Reconstruct(); } public override void Deconstruct() { this.CreateBuffer(); this.AddToBuffer(this.PlayerState.Poisiton, 0); this.AddToBuffer(this.PlayerState.Rotation, 8); base.Deconstruct(); } public override void HandleMessage(object messageProcessor) { ((MessageProcessor)messageProcessor).ProcessPlayerStateMessage(this); } } Message.GetVector2FromBuffer(int bufferlocation) This is where the exception is thrown, because this.Buffer is byte[0] when it should be byte[20]. public Vector2 GetVector2FromBuffer(int bufferlocation) { return new Vector2( BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation), BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation + 4)); }
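    One failure mode consistent with byte[0] is worth ruling out, since Message.GetNewMessage and IDTOCLASS are not shown: if GetNewMessage returns a cached Message instance per class id, the listener thread's next ReadData/CreateBuffer resets the very object the consumer just took from the collection, and Take() then sees a freshly re-created, still-empty buffer. A hypothetical sketch of a factory registry that guarantees a new instance per read (the type names mirror the question's, the registry itself is mine):

        using System;
        using System.Collections.Generic;

        static class MessageFactory
        {
            // One factory delegate per wire id, instead of a shared instance.
            static readonly Dictionary<byte, Func<Message>> Factories =
                new Dictionary<byte, Func<Message>>
                {
                    { 1, () => new PlayerStateMessage() },
                    // ... register the remaining message types here ...
                };

            public static Message Create(byte classType)
            {
                return Factories[classType](); // a fresh Message on every call
            }
        }

    With this in place, GetNewMessage would simply delegate to MessageFactory.Create, and no buffer could be shared between the reader and the handler.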

  • MySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our MySQL configuration for our large Magento website. The reason I believe MySQL needs further tuning is that New Relic has shown our SELECT queries taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results (disclaimer: I restarted MySQL earlier after tweaking some settings, so the results here may not be 100% accurate):

        >> MySQLTuner 1.3.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        [OK] Currently running supported MySQL version 5.5.37-35.0
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
        [--] Data in MyISAM tables: 7G (Tables: 332)
        [--] Data in InnoDB tables: 213G (Tables: 8714)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [--] Data in MEMORY tables: 0B (Tables: 353)
        [!!] Total fragmented tables: 5492
        -------- Security Recommendations -------------------------------------------
        [!!] User '@host5.server1.autopartsnetwork.com' has no password set.
        [!!] User '@localhost' has no password set.
        [!!] User 'root@%' has no password set.
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B)
        [--] Reads / Writes: 95% / 5%
        [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads)
        [!!] Maximum possible memory usage: 220.0G (174% of installed RAM)
        [OK] Slow queries: 0% (6K/5M)
        [OK] Highest usage of available connections: 5% (61/1024)
        [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G
        [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads)
        [OK] Query cache efficiency: 66.9% (3M cached / 5M selects)
        [!!] Query cache prunes per day: 3486361
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts)
        [!!] Joins performed without indexes: 1328
        [OK] Temporary tables created on disk: 11% (126K on disk / 1M total)
        [OK] Thread cache hit rate: 99% (61 created / 42K connections)
        [!!] Table cache hit rate: 19% (9K open / 49K opened)
        [OK] Open file limit used: 2% (712/25K)
        [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks)
        [!!] InnoDB buffer pool / data size: 32.0G/213.4G
        [OK] InnoDB log waits: 0
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce your overall MySQL memory footprint for system stability
            Enable the slow query log to troubleshoot bad queries
            Increasing the query_cache size over 128M may reduce performance
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
            Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 512M) [see warning above]
            join_buffer_size (> 128.0M, or always use indexes with joins)
            table_cache (> 12288)
            innodb_buffer_pool_size (>= 213G)

    My my.cnf configuration is as follows:
        [client]
        port = 3306

        [mysqld_safe]
        nice = 0

        [mysqld]
        tmpdir = /var/lib/mysql/tmp
        user = mysql
        port = 3306
        skip-external-locking
        character-set-server = utf8
        collation-server = utf8_general_ci
        event_scheduler = 0
        key_buffer = 512M
        max_allowed_packet = 64M
        thread_stack = 512K
        thread_cache_size = 512
        sort_buffer_size = 24M
        read_buffer_size = 8M
        read_rnd_buffer_size = 24M
        join_buffer_size = 128M
        # for some nightly processes client sessions set the join buffer to 8 GB
        auto-increment-increment = 1
        auto-increment-offset = 1
        myisam-recover = BACKUP
        max_connections = 1024
        # max connect errors artificially high to support behaviors of NetScaler monitors
        max_connect_errors = 999999
        concurrent_insert = 2
        connect_timeout = 5
        wait_timeout = 180
        net_read_timeout = 120
        net_write_timeout = 120
        back_log = 128
        # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384)
        table_open_cache = 12288
        tmp_table_size = 512M
        max_heap_table_size = 512M
        bulk_insert_buffer_size = 512M
        open-files-limit = 8192
        open-files = 1024
        query_cache_type = 1
        # large query limit supports SOAP and REST API integrations
        query_cache_limit = 4M
        # larger than 512 MB query cache size is problematic; this is typically ~60% full
        query_cache_size = 512M
        # set to true on read slaves
        read_only = false
        slow_query_log_file = /var/log/mysql/slow.log
        slow_query_log = 0
        long_query_time = 0.2
        expire_logs_days = 10
        max_binlog_size = 1024M
        binlog_cache_size = 32K
        sync_binlog = 0
        # SSD RAID10 technically has a write capacity of 10000 IOPS
        innodb_io_capacity = 400
        innodb_file_per_table
        innodb_table_locks = true
        innodb_lock_wait_timeout = 30
        # These servers have 80 CPU threads; match 1:1
        innodb_thread_concurrency = 48
        innodb_commit_concurrency = 2
        innodb_support_xa = true
        innodb_buffer_pool_size = 32G
        innodb_file_per_table
        innodb_flush_log_at_trx_commit = 1
        innodb_log_buffer_size = 2G
        skip-federated

        [mysqldump]
        quick
        quote-names
        single-transaction
        max_allowed_packet = 64M

    I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!

  • The server returned an address in response to the PASV command that is different than the address to

    - by senzacionale
    {System.Net.WebException: The server returned an address in response to the PASV command that is different than the address to which the FTP connection was made. at System.Net.FtpWebRequest.CheckError() at System.Net.FtpWebRequest.SyncRequestCallback(Object obj) at System.Net.CommandStream.Abort(Exception e) at System.Net.FtpWebRequest.FinishRequestStage(RequestStage stage) at System.Net.FtpWebRequest.GetRequestStream() at BackupDB.Program.FTPUploadFile(String serverPath, String serverFile, FileInfo LocalFile, NetworkCredential Cred) in D:\PROJEKTI\BackupDB\BackupDB\Program.cs:line 119} code: FTPMakeDir(new Uri(serverPath + "/"), Cred); FtpWebRequest request = (FtpWebRequest)WebRequest.Create(serverPath + serverFile); request.UsePassive = true; request.Method = WebRequestMethods.Ftp.UploadFile; request.Credentials = Cred; byte[] buffer = new byte[10240]; // Read/write 10kb using (FileStream sourceStream = new FileStream(LocalFile.ToString(), FileMode.Open)) { using (Stream requestStream = request.GetRequestStream()) { int bytesRead; do { bytesRead = sourceStream.Read(buffer, 0, buffer.Length); requestStream.Write(buffer, 0, bytesRead); } while (bytesRead > 0); } response = (FtpWebResponse)request.GetResponse(); response.Close(); }
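    That WebException usually means the FTP server sits behind NAT and advertises its internal IP address in the PASV reply, which .NET refuses to connect to. A hedged workaround sketch: retry the transfer in active mode via the UsePassive property the code already uses (whether active mode gets through depends on the server and any firewalls in between); the variable names mirror the question's.

        using System.IO;
        using System.Net;

        static class FtpUploadHelper
        {
            // Build a fresh request for each attempt; FtpWebRequest is single-use.
            static FtpWebRequest MakeRequest(string serverPath, string serverFile,
                                             NetworkCredential cred, bool passive)
            {
                FtpWebRequest request =
                    (FtpWebRequest)WebRequest.Create(serverPath + serverFile);
                request.Method = WebRequestMethods.Ftp.UploadFile;
                request.Credentials = cred;
                request.UsePassive = passive; // false = active (PORT) mode
                return request;
            }

            public static Stream OpenUploadStream(string serverPath, string serverFile,
                                                  NetworkCredential cred)
            {
                try
                {
                    return MakeRequest(serverPath, serverFile, cred, true).GetRequestStream();
                }
                catch (WebException)
                {
                    // PASV reply pointed at an unreachable address; try active mode.
                    return MakeRequest(serverPath, serverFile, cred, false).GetRequestStream();
                }
            }
        }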

  • Unit testing Monorail's RenderText method

    - by MikeWyatt
    I'm doing some maintenance on an older web application written in Monorail v1.0.3. I want to unit test an action that uses RenderText(). How do I extract the content in my test? Reading from controller.Response.OutputStream doesn't work, since the response stream is either not setup properly in PrepareController(), or is closed in RenderText(). Example Action public DeleteFoo( int id ) { var success= false; var foo = Service.Get<Foo>( id ); if( foo != null && CurrentUser.IsInRole( "CanDeleteFoo" ) ) { Service.Delete<Foo>( id ); success = true; } CancelView(); RenderText( "{ success: " + success + " }" ); } Example Test (using Moq) [Test] public void DeleteFoo() { var controller = new FooController (); PrepareController ( controller ); var foo = new Foo { Id = 123 }; var mockService = new Mock < Service > (); mockService.Setup ( s => s.Get<Foo> ( foo.Id ) ).Returns ( foo ); controller.Service = mockService.Object; controller.DeleteTicket ( foo.Id ); mockService.Verify ( s => s.Delete<Foo> ( foo.Id ) ); Assert.AreEqual ( "{success:true}", GetResponse ( Response ) ); } // response.OutputStream.Seek throws an "System.ObjectDisposedException: Cannot access a closed Stream." exception private static string GetResponse( IResponse response ) { response.OutputStream.Seek ( 0, SeekOrigin.Begin ); var buffer = new byte[response.OutputStream.Length]; response.OutputStream.Read ( buffer, 0, buffer.Length ); return Encoding.ASCII.GetString ( buffer ); }

  • Scrolling a Canvas smoothly in Android

    - by prepbgg
    I'm new to Android. I am drawing bitmaps, lines and shapes onto a Canvas inside the OnDraw(Canvas canvas) method of my view. I am looking for help on how to implement smooth scrolling in response to a drag by the user. I have searched but not found any tutorials to help me with this. The reference for Canvas seems to say that if a Canvas is constructed from a Bitmap (called bmpBuffer, say) then anything drawn on the Canvas is also drawn on bmpBuffer. Would it be possible to use bmpBuffer to implement a scroll ... perhaps copy it back to the Canvas shifted by a few pixels at a time? But if I use Canvas.drawBitmap to draw bmpBuffer back to Canvas shifted by a few pixels, won't bmpBuffer be corrupted? Perhaps, therefore, I should copy bmpBuffer to bmpBuffer2 then draw bmpBuffer2 back to the Canvas. A more straightforward approach would be to draw the lines, shapes, etc. straight into a buffer Bitmap then draw that buffer (with a shift) onto the Canvas but so far as I can see the various methods: drawLine(), drawShape() and so on are not available for drawing to a Bitmap ... only to a Canvas. Could I have 2 Canvases? One of which would be constructed from the buffer bitmap and used simply for plotting the lines, shapes, etc. and then the buffer bitmap would be drawn onto the other Canvas for display in the View? I should welcome any advice! Answers to similar questions here (and on other websites) refer to "blitting". I understand the concept but can't find anything about "blit" or "bitblt" in the Android documentation. Are Canvas.drawBitmap and Bitmap.Copy Android's equivalents?
